A Replication Study on the Usability of Code Vocabulary in Predicting Flaky Tests
Industrial reports indicate that flaky tests are one of the primary concerns of software testing, mainly because of the false signals they provide. To deal with this issue, researchers have developed tools and techniques for (automatically) identifying flaky tests, with encouraging results. However, to reach industrial adoption and practice, these techniques need to be replicated and evaluated extensively on multiple datasets and in multiple settings. In view of this, we perform a replication study of a recently proposed method that predicts flaky tests based on their vocabulary. We replicate the original study along three dimensions. First, we replicate the approach on the same subjects as the original study but with a different evaluation methodology: we adopt a time-sensitive selection of training and test sets that better reflects the envisioned use case. Second, we consolidate the findings of the initial study by building a new dataset of 837 flaky tests from 9 projects written in a different programming language (Python, whereas the original study targeted Java), thereby assessing the generalisability of the results. Third, we extend the original approach by experimenting with different features extracted from the Code Under Test. Our results show that a more robust validation consistently lowers the performance reported in the original study, but, fortunately, does not invalidate its key conclusions. We also find the reassuring results that vocabulary-based models can be used to predict test flakiness in Python as well, and that the information lying in the Code Under Test has a limited impact on the performance of the vocabulary-based models.
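To illustrate the kind of approach being replicated, the sketch below combines a vocabulary-based flaky-test classifier (a bag-of-words over test-code tokens, in the spirit of the original study's features) with a time-sensitive train/test split like the one adopted in the first replication dimension. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn is available, and the record fields (test_body, committed_at, is_flaky) and the 80/20 cutoff are hypothetical choices for the example.

```python
# Minimal sketch (not the authors' code): vocabulary-based flaky-test
# prediction with a time-sensitive split. Assumes scikit-learn; the field
# names 'test_body', 'committed_at', and 'is_flaky' are illustrative only.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline


def train_time_sensitive(tests):
    """tests: list of dicts with 'test_body' (str), 'committed_at', 'is_flaky' (bool)."""
    # Time-sensitive selection: train only on tests that existed before the
    # cutoff and evaluate on later ones, instead of a random split that can
    # leak future vocabulary into training.
    tests = sorted(tests, key=lambda t: t["committed_at"])
    cutoff = int(0.8 * len(tests))  # hypothetical 80/20 chronological split
    train, test = tests[:cutoff], tests[cutoff:]

    model = make_pipeline(
        # Treat source-code identifiers and keywords as the vocabulary.
        CountVectorizer(token_pattern=r"[A-Za-z_]\w*"),
        RandomForestClassifier(n_estimators=100, random_state=0),
    )
    model.fit([t["test_body"] for t in train], [t["is_flaky"] for t in train])
    accuracy = model.score([t["test_body"] for t in test], [t["is_flaky"] for t in test])
    return model, accuracy
```

Under this setup, swapping the chronological split for random cross-validation is exactly the methodological difference the replication probes: the random split tends to overstate performance because tokens from "future" tests inform the training phase.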
Mon 17 May. Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna.
10:00 - 10:50 | Testing and code review | Technical Papers / Data Showcase / Registered Reports at MSR Room 2
Chair(s): Jürgen Cito (TU Wien and Facebook)

10:01 (3m) Talk | A Traceability Dataset for Open Source Systems | Data Showcase
Mouna Hammoudi, Christoph Mayr-Dorn, Atif Mashkoor, Alexander Egyed (Johannes Kepler University Linz)
Media attached

10:04 (4m) Talk | How Java Programmers Test Exceptional Behavior | Technical Papers
Diego Marcilio, Carlo A. Furia (USI Università della Svizzera italiana)
Pre-print

10:08 (4m) Talk | An Exploratory Study of Log Placement Recommendation in an Enterprise System | Technical Papers
Jeanderson Cândido (Delft University of Technology), Jan Haesen (Adyen N.V.), Maurício Aniche, Arie van Deursen (Delft University of Technology)
Pre-print, Media attached

10:12 (3m) Talk | Does Code Review Promote Conformance? A Study of OpenStack Patches | Technical Papers
Panyawut Sri-iesaranusorn, Raula Gaikovina Kula, Takashi Ishio (Nara Institute of Science and Technology)
Pre-print

10:15 (4m) Talk | A Replication Study on the Usability of Code Vocabulary in Predicting Flaky Tests | Technical Papers
Guillaume Haben, Sarra Habchi, Mike Papadakis, Maxime Cordy, Yves Le Traon (University of Luxembourg)
Pre-print, Media attached

10:19 (3m) Talk | On the Use of Mutation in Injecting Test Order-Dependency | Registered Reports
Sarra Habchi, Maxime Cordy, Mike Papadakis, Yves Le Traon (University of Luxembourg)
Pre-print, Media attached

10:22 (28m) Live Q&A | Discussions and Q&A | Technical Papers