Generative AI for Automated Testing: A Framework for Test Generation from Natural Language Requirements.
FRANCESCHINI, FILIPPO
2024/2025
Abstract
The purpose of this thesis is to design and implement a novel framework for test automation that combines Cucumber, artificial intelligence, and the Page Object Model (POM). The overarching goal is to transform requirements expressed in natural language into executable tests, thereby reducing the need for extensive manual intervention in the test development process. In contemporary software engineering practice, requirements are often documented in management platforms such as Jira. The adoption of Behavior-Driven Development (BDD) with Cucumber supports a shared understanding of requirements across technical and non-technical stakeholders. Despite this advantage, however, the manual writing of test cases remains a persistent bottleneck that slows down the overall software validation pipeline. The proposed framework addresses this limitation by integrating large language models (LLMs) capable of automatically generating Gherkin scenarios and Java code from textual requirements. The introduction of the Page Object Model further strengthens the structure of the framework, promoting a clear separation of concerns between page representation and test logic, and ensuring modularity, scalability, and long-term maintainability. Finally, a qualitative and quantitative evaluation was conducted to measure the accuracy of the generated test artifacts, the reduction in manual authoring time, and the percentage of test steps generated automatically versus those requiring manual refinement.
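To make the intended artifact structure concrete, the sketch below shows, under stated assumptions, the kind of output such a pipeline could produce from a natural-language requirement like "a registered user can log in and reach the dashboard": the assumed Gherkin scenario (shown in comments), a Cucumber step-definition class, and a hypothetical `LoginPage` page object that keeps locators and page interactions separate from the test logic. The class names, locators, credentials, and URL are illustrative only and are not taken from the thesis.

```java
// Assumed Gherkin scenario (illustrative only):
//   Scenario: Successful login
//     Given the user is on the login page
//     When the user logs in with valid credentials
//     Then the dashboard is displayed

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class LoginSteps {

    // A real project would manage the driver lifecycle via Cucumber hooks or
    // dependency injection; a direct instantiation keeps the sketch self-contained.
    private final WebDriver driver = new ChromeDriver();
    private final LoginPage loginPage = new LoginPage(driver);

    @Given("the user is on the login page")
    public void theUserIsOnTheLoginPage() {
        loginPage.open();
    }

    @When("the user logs in with valid credentials")
    public void theUserLogsInWithValidCredentials() {
        loginPage.loginAs("demo.user", "demo.password"); // placeholder credentials
    }

    @Then("the dashboard is displayed")
    public void theDashboardIsDisplayed() {
        // Assertion lives in the step definition, not in the page object.
        assertTrue(driver.getCurrentUrl().contains("/dashboard"));
    }
}

// Hypothetical page object: owns locators and page interactions only.
class LoginPage {
    private final WebDriver driver;
    private final By usernameField = By.id("username");
    private final By passwordField = By.id("password");
    private final By loginButton = By.id("login");

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    void open() {
        driver.get("https://example.org/login"); // placeholder URL
    }

    void loginAs(String username, String password) {
        driver.findElement(usernameField).sendKeys(username);
        driver.findElement(passwordField).sendKeys(password);
        driver.findElement(loginButton).click();
    }
}
```

In this layout the step definitions carry the test flow and assertions while the page object encapsulates locators and interactions, which is what would allow regenerated page code to be swapped in when the UI changes without touching the Gherkin scenarios or step logic.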
| File | Size | Format |
|---|---|---|
| Franceschini_Filippo.pdf (Restricted access) | 973.67 kB | Adobe PDF |
https://hdl.handle.net/20.500.12608/99273