An attempt at a semi-automated exploratory approach to the selection of regression models in data analysis
GARIBOLDI, FRANCESCO
2023/2024
Abstract
The main purpose of this project is to build a semi-automated tool that makes certain processes involved in data analysis as fast and automatic as possible. The proposed tool is designed to perform the following tasks semi-automatically: recognizing the types of variables contained in a database; cleaning the database; converting the variables; generating the possible linear regression models from a set of selectable variables; comparing the generated models; selecting the two best models (one without mixed effects and one with mixed effects); and finally producing graphical and diagnostic representations of the two selected models. The project sits within an exploratory approach to data analysis. It is important to point out that the work was carried out with the aim of being as functional as possible for the analysis of datasets collected in the field of psychology, and it may have limitations both with these and with other types of data. The scripts that make up the software were created in the Jupyter Notebook environment and use two programming languages: Python and R. The tool is expected to be useful primarily in significantly speeding up the creation and comparison of linear regression models, with all the limitations that semi-automating these stages of data analysis entails, since they are not supervised by humans; the tool was designed so as to limit, as far as possible, the problems that can arise from this lack of supervision. It could be useful for exploratory as well as educational purposes. Finally, a frequentist approach to data analysis was deliberately chosen, and techniques and tools from the artificial intelligence and machine learning landscape were avoided, partly because of their often non-deterministic nature; the aim was to follow the simplest and most transparent route possible in building the tool. Preliminary research found very few, if any, similar tools in the literature or on the web, especially tools that exclude the use of machine learning or AI.
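The abstract does not give implementation details, but the core step it describes (enumerating the possible linear regression models over a set of selectable variables, comparing them, and keeping the best one) can be illustrated with a minimal Python sketch. The column names, the use of AIC as the comparison criterion, and the statsmodels backend below are illustrative assumptions, not the thesis' actual code; the mixed-effects counterpart (e.g. via statsmodels MixedLM or R's lme4) is omitted.

```python
# Minimal sketch of an "enumerate and compare candidate models" step,
# in the spirit of the tool described in the abstract. All names and the
# choice of AIC/statsmodels are assumptions, not the thesis' implementation.
from itertools import combinations

import pandas as pd
import statsmodels.formula.api as smf


def enumerate_ols_models(df: pd.DataFrame, outcome: str, predictors: list[str]) -> dict:
    """Fit an OLS model for every non-empty subset of the selectable predictors,
    returning the fitted results keyed by their formula string."""
    results = {}
    for k in range(1, len(predictors) + 1):
        for subset in combinations(predictors, k):
            formula = f"{outcome} ~ " + " + ".join(subset)
            results[formula] = smf.ols(formula, data=df).fit()
    return results


def best_by_aic(results: dict):
    """Return the (formula, fitted model) pair with the lowest AIC."""
    return min(results.items(), key=lambda item: item[1].aic)


# Hypothetical usage with a psychology-style dataset:
# df = pd.read_csv("survey.csv")
# fits = enumerate_ols_models(df, outcome="score", predictors=["age", "group", "anxiety"])
# best_formula, best_fit = best_by_aic(fits)
# print(best_formula, best_fit.aic)
```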
| File | Size | Format |
|---|---|---|
| Gariboldi_Francesco.pdf (open access) | 4.52 MB | Adobe PDF |
https://hdl.handle.net/20.500.12608/74033