Quality software has long been synonymous with software “without bugs”. Today, however, quality software has come to mean “easy to adapt” because of the constant pressure to change. Consequently, modern software teams seek a delicate balance between two opposing forces: striving for reliability and striving for agility.
The TESTOMAT project will help software teams strike the right balance by increasing development speed without sacrificing quality. To achieve this goal, the project will advance the state of the art in test automation for software teams moving towards a more agile development process.
The project will ultimately result in a Test Automation Improvement Model, which will define key improvement areas in test automation, with a focus on measurable improvement steps. Moreover, the project will advance the state of the art in test automation tools, investigating topics such as test effectiveness, test prioritisation, and testing for quality standards.
The results of the TESTOMAT project will allow the software testing teams in the consortium to make their testing more effective, so that more resources are available for adding value to their products. The tool vendors and consultants, on the other hand, will improve their offerings and thereby gain market share in a growing but highly competitive market.
| First Name | Last Name | Title |
|---|---|---|
| Mahshid | Helali Moghadam | Industrial Doctoral Student |
| Mehrdad | Saadatmand | Affiliated Researcher, Postdoctoral Research Fellow |
| Sahar | Tahvili | Postdoctoral Research Fellow |
Poster: Performance Testing Driven by Reinforcement Learning (Oct 2020). Mahshid Helali Moghadam, Mehrdad Saadatmand, Markus Borg, Markus Bohlin, Björn Lisper. IEEE 13th International Conference on Software Testing, Validation and Verification (ICST 2020).
Performance Comparison of Two Deep Learning Algorithms in Detecting Similarities Between Manual Integration Test Cases (Oct 2020). Cristina Landin, Leo Hatvani, Sahar Tahvili, Hugo Haggren, Martin Längkvist, Amy Loutfi, Anne Håkansson. The Fifteenth International Conference on Software Engineering Advances (ICSEA 2020).
A Novel Methodology to Classify Test Cases Using Natural Language Processing and Imbalanced Learning (Aug 2020). Sahar Tahvili, Leo Hatvani, Enislay Ramentol, Rita Pimentel, Wasif Afzal, Francisco Herrera. Engineering Applications of Artificial Intelligence (EAAI).
Cluster-Based Parallel Testing Using Semantic Analysis (Apr 2020). Cristina Landin, Sahar Tahvili, Hugo Haggren, Martin Längkvist, Auwn Muhammad, Amy Loutfi. The Second IEEE International Conference on Artificial Intelligence Testing (AITest 2020).
From Requirements to Verifiable Executable Models using Rebeca (Nov 2019). Marjan Sirjani, Luciana Provenzano, Sara Abbaspour Asadollah, Mahshid Helali Moghadam. International Workshop on Automated and verifiable Software sYstem DEvelopment (ASYDE 2019).