Mon 17 May | Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna
02:00 - 02:50 | Opening/Awards | Technical Papers / Tutorials / MIP Award / FOSS Award / Mining Challenge / Hackathon / MSR Awards / Registered Reports / Data Showcase / Shadow PC / Keynotes | at MSR Room 1
02:50 - 03:10 | Break / Discussion Rooms | Technical Papers / Tutorials / MIP Award / FOSS Award / Mining Challenge / Hackathon / MSR Awards / Registered Reports / Data Showcase / Shadow PC / Keynotes | at MSR Room 1
03:10 - 04:00 | Welcome Event | Technical Papers / Tutorials / MIP Award / FOSS Award / Mining Challenge / Hackathon / MSR Awards / Registered Reports / Data Showcase / Shadow PC / Keynotes | at MSR Room 1
The MSR welcoming sessions will feature informal networking opportunities for newcomers to meet each other, learn about the MSR conference series, and interact with some established MSR veterans. All are welcome!
10:00 - 10:50 | Resources for MSR Research | Technical Papers / Data Showcase | at MSR Room 1 | Chair(s): Felipe Ebert Eindhoven University of Technology
10:01 3m Talk | PSIMiner: A Tool for Mining Rich Abstract Syntax Trees from Code | Technical Papers | Egor Spirin JetBrains Research; National Research University Higher School of Economics, Egor Bogomolov JetBrains Research, Vladimir Kovalenko JetBrains Research, Timofey Bryksin JetBrains Research, Saint Petersburg State University | Pre-print
10:04 3m Talk | Mining DEV for social and technical insights about software development | Technical Papers | Maria Papoutsoglou Aristotle University of Thessaloniki, Johannes Wachs Vienna University of Economics and Business & Complexity Science Hub Vienna, Georgia Kapitsaki University of Cyprus | Pre-print
10:07 3m Talk | TNM: A Tool for Mining of Socio-Technical Data from Git Repositories | Technical Papers | Nikolai Sviridov ITMO University, Mikhail Evtikhiev JetBrains Research, Vladimir Kovalenko JetBrains Research | Pre-print
10:10 3m Talk | Identifying Versions of Libraries used in Stack Overflow Code Snippets | Technical Papers | Ahmed Zerouali Vrije Universiteit Brussel, Camilo Velázquez-Rodríguez Vrije Universiteit Brussel, Coen De Roover Vrije Universiteit Brussel | Pre-print Media Attached
10:13 3m Talk | Sampling Projects in GitHub for MSR Studies | Data Showcase | Ozren Dabic Software Institute, Università della Svizzera italiana (USI), Switzerland, Emad Aghajani Software Institute, USI Università della Svizzera italiana, Gabriele Bavota Software Institute, USI Università della Svizzera italiana | Pre-print
10:16 3m Talk | gambit – An Open Source Name Disambiguation Tool for Version Control Systems | Technical Papers | Christoph Gote Chair of Systems Design, ETH Zurich, Christian Zingg Chair of Systems Design, ETH Zurich | Pre-print Media Attached
10:19 31m Live Q&A | Discussions and Q&A | Technical Papers
10:00 - 10:50 | Testing and code review | Technical Papers / Data Showcase / Registered Reports | at MSR Room 2 | Chair(s): Jürgen Cito TU Wien and Facebook
10:01 3m Talk | A Traceability Dataset for Open Source Systems | Data Showcase | Mouna Hammoudi Johannes Kepler University Linz, Christoph Mayr-Dorn Johannes Kepler University Linz, Atif Mashkoor Johannes Kepler University Linz, Alexander Egyed Johannes Kepler University Linz | Media Attached
10:04 4m Talk | How Java Programmers Test Exceptional Behavior | Technical Papers | Diego Marcilio USI Università della Svizzera italiana, Carlo A. Furia Università della Svizzera italiana (USI) | Pre-print
10:08 4m Talk | An Exploratory Study of Log Placement Recommendation in an Enterprise System | Technical Papers | Jeanderson Cândido Delft University of Technology, Jan Haesen Adyen N.V., Maurício Aniche Delft University of Technology, Arie van Deursen Delft University of Technology, Netherlands | Pre-print Media Attached
10:12 3m Talk | Does Code Review Promote Conformance? A Study of OpenStack Patches | Technical Papers | Panyawut Sri-iesaranusorn Nara Institute of Science and Technology, Raula Gaikovina Kula NAIST, Takashi Ishio Nara Institute of Science and Technology | Pre-print
10:15 4m Talk | A Replication Study on the Usability of Code Vocabulary in Predicting Flaky Tests | Technical Papers | Guillaume Haben University of Luxembourg, Sarra Habchi University of Luxembourg, Mike Papadakis University of Luxembourg, Maxime Cordy University of Luxembourg, Yves Le Traon University of Luxembourg | Pre-print Media Attached
10:19 3m Talk | On the Use of Mutation in Injecting Test Order-Dependency | Registered Reports | Sarra Habchi University of Luxembourg, Maxime Cordy University of Luxembourg, Mike Papadakis University of Luxembourg, Yves Le Traon University of Luxembourg | Pre-print Media Attached
10:22 28m Live Q&A | Discussions and Q&A | Technical Papers
10:50 - 11:10 | Break / Discussion Rooms | Technical Papers / Tutorials / MIP Award / FOSS Award / Mining Challenge / Hackathon / MSR Awards / Registered Reports / Data Showcase / Shadow PC / Keynotes | at MSR Room 1
11:10 - 12:00 | Welcome Event | Technical Papers / Tutorials / MIP Award / FOSS Award / Mining Challenge / Hackathon / MSR Awards / Registered Reports / Data Showcase / Shadow PC / Keynotes | at MSR Room 1
The MSR welcoming sessions will feature informal networking opportunities for newcomers to meet each other, learn about the MSR conference series, and interact with some established MSR veterans. All are welcome!
17:50 - 18:10 | Break / Discussion Rooms | Technical Papers / Tutorials / MIP Award / FOSS Award / Mining Challenge / Hackathon / MSR Awards / Registered Reports / Data Showcase / Shadow PC / Keynotes | at MSR Room 1
Tue 18 May | Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna
02:50 - 03:10 | Break / Discussion Rooms | Technical Papers / Tutorials / MIP Award / FOSS Award / Mining Challenge / Hackathon / MSR Awards / Registered Reports / Data Showcase / Shadow PC / Keynotes | at MSR Room 1
03:10 - 04:00 | Time series data | Data Showcase / Technical Papers | at MSR Room 2 | Chair(s): Shane McIntosh University of Waterloo
03:11 3m Talk | AndroCT: Ten Years of App Call Traces in Android | Data Showcase | Pre-print Media Attached
03:14 4m Talk | Mining Workflows for Anomalous Data Transfers | Technical Papers | Huy Tu North Carolina State University, USA, George Papadimitriou University of Southern California, Mariam Kiran ESnet, LBNL, Cong Wang Renaissance Computing Institute, Anirban Mandal Renaissance Computing Institute, Ewa Deelman University of Southern California, Tim Menzies North Carolina State University, USA | Pre-print
03:18 4m Talk | Escaping the Time Pit: Pitfalls and Guidelines for Using Time-Based Git Data | Technical Papers | Samuel W. Flint University of Nebraska-Lincoln, Jigyasa Chauhan University of Nebraska-Lincoln, Robert Dyer University of Nebraska-Lincoln | Pre-print Media Attached
03:22 4m Paper | On the Naturalness and Localness of Software Logs | Technical Papers | Pre-print
03:26 4m Talk | How Do Software Developers Use GitHub Actions to Automate Their Workflows? | Technical Papers | Timothy Kinsman University of Adelaide, Mairieli Wessel University of Sao Paulo, Marco Gerosa Northern Arizona University, USA, Christoph Treude University of Adelaide | Pre-print
03:30 30m Live Q&A | Discussions and Q&A | Technical Papers
10:00 - 10:50 | ML and Deep Learning | Technical Papers / Data Showcase / Registered Reports | at MSR Room 2 | Chair(s): Hongyu Zhang The University of Newcastle
10:01 4m Talk | Fast and Memory-Efficient Neural Code Completion | Technical Papers | Alexey Svyatkovskiy Microsoft, Sebastian Lee University of Oxford, Anna Hadjitofi Alan Turing Institute, Maik Riechert Microsoft Research, Juliana Franco Microsoft Research, Miltiadis Allamanis Microsoft Research, UK | Pre-print Media Attached
10:05 4m Research paper | Comparative Study of Feature Reduction Techniques in Software Change Prediction | Technical Papers | Ruchika Malhotra Delhi Technological University, Ritvik Kapoor Delhi Technological University, Deepti Aggarwal Delhi Technological University, Priya Garg Delhi Technological University | Pre-print
10:09 4m Talk | An Empirical Study on the Usage of BERT Models for Code Completion | Technical Papers | Matteo Ciniselli Università della Svizzera Italiana, Nathan Cooper William & Mary, Luca Pascarella Delft University of Technology, Denys Poshyvanyk College of William & Mary, Massimiliano Di Penta University of Sannio, Italy, Gabriele Bavota Software Institute, USI Università della Svizzera italiana | Pre-print
10:13 3m Talk | ManyTypes4Py: A benchmark Python dataset for machine learning-based type inference | Data Showcase | Amir Mir Delft University of Technology, Evaldas Latoskinas Delft University of Technology, Georgios Gousios Facebook & Delft University of Technology | Pre-print
10:16 3m Talk | KGTorrent: A Dataset of Python Jupyter Notebooks from Kaggle | Data Showcase | Luigi Quaranta University of Bari, Italy, Fabio Calefato University of Bari, Filippo Lanubile University of Bari
10:19 3m Talk | Exploring the relationship between performance metrics and cost saving potential of defect prediction models | Registered Reports | Steffen Herbold University of Göttingen | Pre-print
10:22 28m Live Q&A | Discussions and Q&A | Technical Papers
10:50 - 11:10 | Break / Discussion Rooms | Technical Papers / Tutorials / MIP Award / FOSS Award / Mining Challenge / Hackathon / MSR Awards / Registered Reports / Data Showcase / Shadow PC / Keynotes | at MSR Room 1
11:10 - 12:00
11:10 50m Tutorial | PyDriller 1.0 -- Ready to grow together | Tutorials | Pre-print
17:00 - 17:50 | Hackathon | Technical Papers / Hackathon | at MSR Room 1 | Chair(s): Jim Herbsleb Carnegie Mellon University, Audris Mockus The University of Tennessee, Alexander Nolte University of Tartu
17:01 2m | Welcome by the MSR Hackathon Co-Chairs | Hackathon | Jim Herbsleb Carnegie Mellon University, Audris Mockus The University of Tennessee, Alexander Nolte University of Tartu
17:03 3m Talk | An Exploratory Study of Project Activity Changepoints in Open Source Software Evolution | Hackathon
17:06 3m Paper | The Diversity-Innovation Paradox in Open-Source Software | Hackathon | Mengchen Sam Yong Carnegie Mellon University, Pittsburgh, Pennsylvania, United States, Lavinia Francesca Paganini Federal University of Pernambuco, Huilian Sophie Qiu Carnegie Mellon University, Pittsburgh, Pennsylvania, United States, José Bayoán Santiago Calderón University of Virginia, USA | DOI Pre-print
17:09 4m Talk | The Secret Life of Hackathon Code | Technical Papers | Ahmed Samir Imam Mahmoud University of Tartu, Tapajit Dey Lero - The Irish Software Research Centre and University of Limerick, Alexander Nolte University of Tartu, Audris Mockus The University of Tennessee, Jim Herbsleb Carnegie Mellon University | Pre-print
17:13 3m Talk | Tracing Vulnerable Code Lineage | Hackathon | David Reid University of Tennessee, Kalvin Eng University of Alberta, Chris Bogart Carnegie Mellon University, Adam Tutko University of Tennessee - Knoxville | Pre-print
17:16 3m Talk | Building the Collaboration Graph of Open-Source Software Ecosystem | Hackathon | Pre-print
17:19 1m Talk | The Secret Life of Hackathon Code | Hackathon | Ahmed Samir Imam Mahmoud University of Tartu, Tapajit Dey Lero - The Irish Software Research Centre and University of Limerick | Pre-print
17:20 30m Live Q&A | Discussions and Q&A | Technical Papers
17:00 - 17:50
17:01 4m Talk | What Code Is Deliberately Excluded from Test Coverage and Why? | Technical Papers | Andre Hora UFMG | Pre-print Media Attached
17:05 3m Talk | AndroR2: A Dataset of Manually-Reproduced Bug Reports for Android apps | Data Showcase | Tyler Wendland University of Minnesota, Jingyang Sun University of British Columbia, Junayed Mahmud George Mason University, S M Hasan Mansur George Mason University, Steven Huang University of British Columbia, Kevin Moran George Mason University, Julia Rubin University of British Columbia, Canada, Mattia Fazzini University of Minnesota
17:08 3m Talk | Apache Software Foundation Incubator Project Sustainability Dataset | Data Showcase | Likang Yin University of California, Davis, Zhiyuan Zhang University of California, Davis, Qi Xuan Institute of Cyberspace Security, Zhejiang University of Technology, Hangzhou, China, Vladimir Filkov University of California at Davis, USA
17:11 4m Talk | Leveraging Models to Reduce Test Cases in Software Repositories | Technical Papers | Pre-print Media Attached
17:15 4m Talk | Which contributions count? Analysis of attribution in open source | Technical Papers | Jean-Gabriel Young University of Vermont, amanda casari Open Source Programs Office, Google, Katie McLaughlin Open Source Programs Office, Google, Milo Trujillo University of Vermont, Laurent Hébert-Dufresne University of Vermont, James P. Bagrow University of Vermont | Pre-print Media Attached
17:19 4m Talk | On Improving Deep Learning Trace Analysis with System Call Arguments | Technical Papers | Quentin Fournier Polytechnique Montréal, Daniel Aloise Polytechnique Montréal, Seyed Vahid Azhari Ciena, François Tetreault Ciena | Pre-print
17:23 27m Live Q&A | Discussions and Q&A | Technical Papers
17:50 - 18:10 | Break / Discussion Rooms | Technical Papers / Tutorials / MIP Award / FOSS Award / Mining Challenge / Hackathon / MSR Awards / Registered Reports / Data Showcase / Shadow PC / Keynotes | at MSR Room 1
18:10 - 19:00
18:10 50m Tutorial | Crafting your next MSR paper: suggestions from my (good and bad) experiences | Tutorials | Massimiliano Di Penta University of Sannio, Italy | Pre-print
Wed 19 May | Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna
02:00 - 02:50
02:01 4m Talk | Automatic Part-of-Speech Tagging for Security Vulnerability Descriptions | Technical Papers | Sofonias Yitagesu Tianjin University, Xiaowang Zhang Tianjin University, Zhiyong Feng Tianjin University, Xiaohong Li Tianjin University, Zhenchang Xing Australian National University | Pre-print
02:05 4m Talk | Attention-based model for predicting question relatedness on Stack Overflow | Technical Papers | Jiayan Pei South China University of Technology, Yimin Wu South China University of Technology, Research Institute of SCUT in Yangjiang, Zishan Qin South China University of Technology, Yao Cong South China University of Technology, Jingtao Guan Research Institute of SCUT in Yangjiang | Pre-print
02:09 4m Talk | Characterising the Knowledge about Primitive Variables in Java Code Comments | Technical Papers | Mahfouth Alghamdi The University of Adelaide, Shinpei Hayashi Tokyo Institute of Technology, Takashi Kobayashi Tokyo Institute of Technology, Christoph Treude University of Adelaide | Pre-print
02:13 4m Talk | Googling for Software Development: What Developers Search For and What They Find | Technical Papers | Andre Hora UFMG | Pre-print Media Attached
02:17 3m Talk | Evaluating Pre-Trained Models for User Feedback Analysis in Software Engineering: A Study on Classification of App-Reviews | Registered Reports | Mohammad Abdul Hadi University of British Columbia, Fatemeh Hendijani Fard University of British Columbia | Pre-print
02:20 3m Talk | Cross-status Communication and Project Outcomes in OSS Development – A Language Style Matching Perspective | Registered Reports | Yisi Han Nanjing University, Zhendong Wang University of California, Irvine, Yang Feng State Key Laboratory for Novel Software Technology, Nanjing University, Zhihong Zhao Nanjing Tech University, Yi Wang Beijing University of Posts and Telecommunications | Pre-print
02:23 27m Live Q&A | Discussions and Q&A | Technical Papers
02:50 - 03:10 | Break / Discussion Rooms | Technical Papers / Tutorials / MIP Award / FOSS Award / Mining Challenge / Hackathon / MSR Awards / Registered Reports / Data Showcase / Shadow PC / Keynotes | at MSR Room 1
03:10 - 04:00
03:10 50m Tutorial | Elasticsearch Full-Text Search Internals | Tutorials | Philipp Krenn Elastic
10:00 - 10:50 | Datasets | Data Showcase / Technical Papers | at MSR Room 1 | Chair(s): Sridhar Chimalakonda Indian Institute of Technology Tirupati
10:01 3m Talk | AndroidCompass: A Dataset of Android Compatibility Checks in Code Repositories | Data Showcase | Sebastian Nielebock Otto-von-Guericke University Magdeburg, Germany, Paul Blockhaus Otto-von-Guericke-University Magdeburg, Germany, Jacob Krüger Otto von Guericke University Magdeburg, Frank Ortmeier Otto-von-Guericke-University Magdeburg, Faculty of Computer Science, Chair of Software Engineering | Pre-print Media Attached
10:04 3m Talk | GE526: A Dataset of Open Source Game Engines | Data Showcase | Dheeraj Vagavolu Indian Institute of Technology Tirupati, Vartika Agrahari Indian Institute of Technology Tirupati, Sridhar Chimalakonda Indian Institute of Technology Tirupati, Akhila Sri Manasa Venigalla IIT Tirupati, India
10:07 3m Talk | Andromeda: A Dataset of Ansible Galaxy Roles and Their Evolution | Data Showcase | Ruben Opdebeeck Vrije Universiteit Brussel, Ahmed Zerouali Vrije Universiteit Brussel, Coen De Roover Vrije Universiteit Brussel
10:10 3m Talk | The Wonderless Dataset for Serverless Computing | Data Showcase | Pre-print
10:13 3m Talk | DUETS: A Dataset of Reproducible Pairs of Java Library-Clients | Data Showcase | Thomas Durieux KTH Royal Institute of Technology, Sweden, César Soto-Valero KTH Royal Institute of Technology, Benoit Baudry KTH Royal Institute of Technology | Pre-print
10:16 3m Talk | EQBENCH: A Dataset of Equivalent and Non-equivalent Program Pairs | Data Showcase | Sahar Badihi University of British Columbia, Canada, Yi Li Nanyang Technological University, Julia Rubin University of British Columbia, Canada
10:19 31m Live Q&A | Discussions and Q&A | Technical Papers
10:50 - 11:10 | Break / Discussion Rooms | Technical Papers / Tutorials / MIP Award / FOSS Award / Mining Challenge / Hackathon / MSR Awards / Registered Reports / Data Showcase / Shadow PC / Keynotes | at MSR Room 1
11:10 - 12:00
11:10 12m Keynote | Code review at speed: How can we use data to help developers do code review faster? | Keynotes | Patanamon Thongtanunam The University of Melbourne
11:22 12m Keynote | To Sustain a Smart, Dependent and Social Software Ecosystem | Keynotes | Raula Gaikovina Kula NAIST
11:34 8m Keynote | Measure what matters – but don’t be creepy: The ethics of using data about people | Keynotes
11:42 18m | Discussions and Q&A | Keynotes
17:00 - 17:50 | Energy, logging, and APIs | Technical Papers | at MSR Room 1 | Chair(s): Akond Rahman Tennessee Tech University
17:01 3m Talk | S3M: Siamese Stack (Trace) Similarity Measure | Technical Papers | Aleksandr Khvorov JetBrains, ITMO University, Roman Vasiliev JetBrains, George Chernishev Saint-Petersburg State University, Irving Muller Rodrigues Polytechnique Montreal, Montreal, Canada, Dmitrij Koznov Saint-Petersburg State University, Nikita Povarov JetBrains | Pre-print
17:04 4m Talk | Mining the ROS ecosystem for Green Architectural Tactics in Robotics and an Empirical Evaluation | Technical Papers | Ivano Malavolta Vrije Universiteit Amsterdam, Katerina Chinnappan Vrije Universiteit Amsterdam, Stan Swanborn Vrije Universiteit Amsterdam, The Netherlands, Grace Lewis Carnegie Mellon Software Engineering Institute, Patricia Lago Vrije Universiteit Amsterdam | Pre-print Media Attached
17:08 4m Talk | Mining Energy-Related Practices in Robotics Software | Technical Papers | Michel Albonico UTFPR, Ivano Malavolta Vrije Universiteit Amsterdam, Gustavo Pinto Federal University of Pará, Emitzá Guzmán Vrije Universiteit Amsterdam, Katerina Chinnappan Vrije Universiteit Amsterdam, Patricia Lago Vrije Universiteit Amsterdam | Pre-print Media Attached
17:12 3m Talk | Mining API Interactions to Analyze Software Revisions for the Evolution of Energy Consumption | Technical Papers | Andreas Schuler University of Applied Sciences Upper Austria, Gabriele Anderst-Kotsis Johannes Kepler University, Linz, Austria | Pre-print
17:15 4m Talk | Can I Solve it? Identifying the APIs required to complete OSS tasks | Technical Papers | Fabio Marcos De Abreu Santos Northern Arizona University, USA, Igor Scaliante Wiese Federal University of Technology – Paraná - UTFPR, Bianca Trinkenreich Northern Arizona University, Igor Steinmacher Northern Arizona University, USA, Anita Sarma Oregon State University, Marco Gerosa Northern Arizona University, USA | Pre-print
17:19 31m Live Q&A | Discussions and Q&A | Technical Papers
17:00 - 17:50 | Change Management and Analysis | Technical Papers / Registered Reports | at MSR Room 2 | Chair(s): Sarah Nadi University of Alberta
17:01 4m Talk | Studying the Change Histories of Stack Overflow and GitHub Snippets | Technical Papers | Pre-print Media Attached
17:05 4m Talk | Learning Off-By-One Mistakes: An Empirical Study | Technical Papers | Hendrig Sellik Delft University of Technology, Onno van Paridon Adyen N.V., Georgios Gousios Facebook & Delft University of Technology, Maurício Aniche Delft University of Technology | Pre-print
17:09 4m Talk | Predicting Design Impactful Changes in Modern Code Review: A Large-Scale Empirical Study | Technical Papers | Anderson Uchôa Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Caio Barbosa Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Daniel Coutinho Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Willian Oizumi Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Wesley Assunção Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Silvia Regina Vergilio Federal University of Paraná, Juliana Alves Pereira PUC-Rio, Anderson Oliveira PUC-Rio, Alessandro Garcia PUC-Rio | Pre-print
17:13 4m Talk | Rollback Edit Inconsistencies in Developer Forum | Technical Papers | Saikat Mondal University of Saskatchewan, Gias Uddin University of Calgary, Canada, Chanchal K. Roy University of Saskatchewan | Pre-print
17:17 3m Talk | Assessing the Exposure of Software Changes: The DiPiDi Approach | Registered Reports | Pre-print
17:20 4m Talk | On the Use of Dependabot Security Pull Requests | Technical Papers | Mahmoud Alfadel Concordia University, Diego Costa Concordia University, Canada, Emad Shihab Concordia University, Mouafak Mkhallalati Concordia University | Pre-print
17:24 26m Live Q&A | Discussions and Q&A | Technical Papers
17:50 - 18:10 | Break / Discussion Rooms | Technical Papers / Tutorials / MIP Award / FOSS Award / Mining Challenge / Hackathon / MSR Awards / Registered Reports / Data Showcase / Shadow PC / Keynotes | at MSR Room 1
18:10 - 19:00 | Closing | Technical Papers / Tutorials / MIP Award / FOSS Award / Mining Challenge / Hackathon / MSR Awards / Registered Reports / Data Showcase / Shadow PC / Keynotes | at MSR Room 1
18:11 23m Awards | MIP Award 2011 | MIP Award
18:34 15m Live Q&A | Discussions and Q&A | Technical Papers
18:49 8m | MSR 2022 | Technical Papers
18:57 3m | Closing by the General Chair and the Program Co-Chair | Technical Papers
Call for Registrations
Following the successful experience of 2020, the Empirical Software Engineering journal (EMSE), in conjunction with the International Conference on Mining Software Repositories (MSR), is continuing the Registered Reports (RR) track. The RR track of MSR 2021 has two goals: (1) to prevent HARKing (hypothesizing after the results are known) in empirical studies, and (2) to provide authors with early feedback on their initial study design. For papers submitted to the RR track, methods and proposed analyses are reviewed prior to execution. Pre-registered studies follow a two-stage process:
- Stage 1: A report describing the planned study is submitted and evaluated by the reviewers of the RR track of MSR 2021. Authors of accepted pre-registered studies will be given the opportunity to present their work at MSR.
- Stage 2: Once a report has passed Stage 1, the study is conducted and the actual data collection and analysis take place. The results may also be negative! The full paper is submitted for review to EMSE.
Paper Types, Evaluation Criteria, and Acceptance Types
The RR track of MSR 2021 supports two types of papers:
Confirmatory: The researcher has a fixed hypothesis (or several fixed hypotheses) and the objective of the study is to find out whether the hypothesis is supported by the facts/data.
An example of a completed confirmatory study:
- Inozemtseva, L., & Holmes, R. (2014, May). Coverage is not strongly correlated with test suite effectiveness. In Proceedings of the 36th International Conference on Software Engineering (pp. 435-445).
Exploratory: The researcher does not have a hypothesis (or has one that may change during the study). Often, the objective of such a study is to understand what is observed and to answer questions such as WHY, HOW, WHAT, WHO, or WHEN. We include in this category registrations in which the researcher has an initial proposed solution for an automated approach (e.g., a new deep-learning-based defect prediction approach) that serves as a starting point for their exploration toward an effective solution.
Examples of completed exploratory studies:
- Gousios, G., Pinzger, M., & Deursen, A. V. (2014, May). An exploratory study of the pull-based software development model. In Proceedings of the 36th International Conference on Software Engineering (pp. 345-355).
- Rodrigues, I. M., Aloise, D., Fernandes, E. R., & Dagenais, M. (2020, June). A Soft Alignment Model for Bug Deduplication. In Proceedings of the 17th International Conference on Mining Software Repositories (pp. 43-53).
The reviewers will evaluate RR track submissions based on the following criteria:
- The importance of the research question(s).
- The logic, rationale, and plausibility of the proposed hypotheses.
- The soundness and feasibility of the methodology and analysis pipeline, including statistical power analysis where appropriate (see the sketch after this list).
- (For confirmatory studies) Whether the clarity and degree of methodological detail is sufficient to exactly replicate the proposed experimental procedures and analysis pipeline.
- (For confirmatory studies) Whether the authors have pre-specified sufficient outcome-neutral tests for ensuring that the obtained results can test the stated hypotheses, including positive controls and quality checks.
- (For exploratory studies, if applicable) The description of the data set that forms the basis for exploration.
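For instance, an a priori power analysis for a simple two-group comparison could be sketched in the report as follows. This is a minimal illustration using Python's statsmodels; the effect size, alpha, and power targets are invented for the example and must be justified for a real study:

```python
# A priori power analysis for a two-group comparison (hypothetical values).
from statsmodels.stats.power import TTestIndPower

# Medium expected effect (Cohen's d = 0.5), 5% significance level, 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Required sample size per group: {n_per_group:.0f}")  # about 64
```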
The outcome of the RR report review is one of the following:
- In-Principle Acceptance (IPA): The reviewers agree that the study is relevant, the outcome of the study (whether confirmation or rejection of the hypothesis) is of interest to the community, the protocol for data collection is sound, and the analysis methods are adequate. The authors can proceed with the actual study for Stage 2. If the protocol is adhered to (or deviations are thoroughly justified), the study is published. Of course, this being a journal submission, a revision of the submitted manuscript may be necessary. Reviewers will especially evaluate how precisely the protocol of the accepted pre-registered report is followed, or whether deviations are justified.
- Continuity Acceptance (CA): The reviewers agree that the study is relevant and that the (initial) methods appear to be appropriate. However, for exploratory studies, implementation details and post-experiment analyses or discussion (e.g., why the proposed automated approach does not work) may require follow-up checks. We will try our best to retain the original reviewers. All PC members are invited on the condition that they agree to review papers in both Stage 1 and Stage 2. Four (4) PC members will review the Stage 1 submission, and three (3) will review the Stage 2 submission.
- Rejection: The reviewers do not agree on the relevance of the study or are not convinced that the study design is sufficiently mature. Comments are provided to the authors to improve the study design before starting it.
Note: For MSR 2021, we will only offer IPA to confirmatory studies. Exploratory studies in software engineering often cannot be adequately assessed until after the study has been completed and the findings are elaborated and discussed in a full paper. For example, consider an RR proposing defect prediction using a new deep learning architecture. This work falls under the exploratory category, and it is difficult to offer IPA, as we do not know whether the approach is any better than a traditional one based on, e.g., decision trees. Negative results are welcome; however, it is important that a negative results paper goes beyond reporting “we tried and failed” and instead provides interesting insights to readers, e.g., why the results are negative or what they mean for further studies on this topic (following the criteria of REplication and Negative Results (RENE) tracks, e.g., https://saner2019.github.io/cfp/RENETrack.html).
Submission Process and Instructions
The timeline for the MSR 2021 RR track is as follows:
Jan 29: Authors submit their initial report.
- Submissions must not exceed 6 pages (plus 1 additional page of references). The page limit is strict.
- Submissions must conform to the IEEE Conference Proceedings Formatting Guidelines (title in 24pt font and full text in 10pt type; LaTeX users must use \documentclass[10pt,conference]{IEEEtran} without including the compsoc or compsocconf options).
Mar 5: Authors receive PC members’ reviews.
Mar 19: Authors submit a response letter + revised report in a single PDF.
- The response letter should address reviewer comments and questions.
- The response letter + revised report must not exceed 12 pages (plus 1 additional page of references).
- The response letter does not need to follow IEEE formatting instructions.
April 9: Notification of Stage 1
- Outcome: in-principle acceptance, continuity acceptance, or rejection.
April 16: Authors submit their accepted RR report to arXiv
- The arXiv version will be checked by PC members in Stage 2.
- Note: Due to the timeline, RR reports will not be published in the MSR 2021 proceedings.
Before Jan 28, 2022: Authors submit a full paper to EMSE. Instructions will be provided later. However, the following constraints will be enforced:
- Justification must be given for any change of authors. If authors are added or removed, or the author order changes, between the original Stage 1 submission and the EMSE submission, all authors will need to complete and sign a “Change of authorship request form”. The Editors-in-Chief of EMSE and the chairs of the RR track reserve the right to deny author changes. If you anticipate any authorship changes, please reach out to the chairs of the RR track as early as possible.
- PC members who reviewed an RR report in Stage 1, and their directly supervised students, cannot be added as authors of the corresponding submission in Stage 2.
Submissions can be made via the submission site (https://msr2021-registered-reports.hotcrp.com/) by the submission deadline. Any submission that does not comply with the aforementioned instructions and the mandatory information specified in the Author's Guide is likely to be desk-rejected. In addition, by submitting, the authors acknowledge that they are aware of and agree to be bound by the following policies:
- The [ACM Policy and Procedures on Plagiarism](https://www.acm.org/publications/policies/plagiarism) and the [IEEE Plagiarism FAQ](https://www.ieee.org/publications/rights/plagiarism/plagiarism-faq.html). In particular, papers submitted to MSR 2021 must not have been published elsewhere and must not be under review or submitted for review elsewhere whilst under consideration for MSR 2021. Contravention of this concurrent submission policy will be deemed a serious breach of scientific ethics, and appropriate action will be taken in all such cases (including immediate rejection and reporting of the incident to ACM/IEEE). To check for double submission and plagiarism issues, the chairs reserve the right to (1) share the list of submissions with the PC Chairs of other conferences with overlapping review periods and (2) use external plagiarism detection software, under contract to the ACM or IEEE, to detect violations of these policies.
- The authorship policy of the ACM and the authorship policy of the IEEE.
Author's Guide
NB: Please contact the MSR RR track chairs with any questions, feedback, or requests for clarification. Specific analysis approaches mentioned below are intended as examples, not mandatory components.
I. Title (required)
Provide the working title of your study. It may be the same title that you submit for publication of your final manuscript, but it is not mandatory.
Example: Should your family travel with you on the Enterprise? Subtitle (optional): Effect of accompanying families on the work habits of crew members.
II. Authors (required)
At this stage, we believe that a single-blind review is most productive.
III. Structured Abstract (required)
The abstract should describe the following in 200 words or so:
- Background/Context: What is your research about? Why are you doing this research, and why is it interesting?
  Example: “The Enterprise is the flagship of the Federation, and it allows families to travel onboard. However, there are no studies that evaluate how this affects the crew members.”
- Objective/Aim: What exactly are you studying/investigating/evaluating? What are the objects of the study? We welcome both confirmatory and exploratory types of studies.
  Example (Confirmatory): We evaluate whether the frequency of sick days, work effectiveness, and efficiency differ between science officers who bring their family with them and science officers who are serving without their family.
  Example (Exploratory): We investigate the impact of frequent Holodeck use on interpersonal relationships with an ethnographic study using participant observation, in order to derive specific hypotheses about Holodeck usage.
- Method: How are you addressing your objective? What data sources are you using?
  Example: We conduct an observational study and use a between-subjects design. To analyze the data, we use a t-test or Wilcoxon test, depending on the underlying distribution. Our data comes from computer monitoring of Enterprise crew members.
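To make the example method concrete, the following is a minimal sketch of such a distribution-dependent test choice. All data and variable names are invented for illustration; a real report would describe the actual measurement and justify the thresholds:

```python
# Hypothetical between-subjects comparison: sick days of science officers
# with vs. without family on board (all data invented for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
with_family = rng.normal(6.0, 2.0, 40)     # sick days per year, group 1
without_family = rng.normal(4.0, 2.0, 40)  # sick days per year, group 2

# Shapiro-Wilk normality check on both samples decides which test to use.
if all(stats.shapiro(s).pvalue > 0.05 for s in (with_family, without_family)):
    stat, p = stats.ttest_ind(with_family, without_family)  # t-test
else:
    stat, p = stats.ranksums(with_family, without_family)   # Wilcoxon rank-sum
print(f"statistic={stat:.2f}, p={p:.3f}")
```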
IV. Introduction
Give more details on the bigger picture of your study and how it contributes to this bigger picture. An important component of Stage 1 review is assessing the importance and relevance of the study questions, so be sure to explain this.
V. Hypotheses (required for confirmatory study) or research questions
Clearly state the research hypotheses that you want to test with your study, and provide a rationale for them.
Hypothesis: Science officers with their family on board have more sick days than science officers without their family.
Rationale: Since toddlers are often sick, we can expect that crew members with their family onboard need to take sick days more often.
VI. Variables (required for confirmatory study)
- Independent variable(s) and their operationalization
- Dependent variable(s) and their operationalization (e.g., time to solve a specified task)
- Confounding variable(s) and how their effect will be controlled (e.g., species type (Vulcan, Human, Tribble) might be a confounding factor; we control for it by additionally separating our sample into Human/Non-Human and using an ANOVA (normal distribution) or a Friedman test (non-normal distribution) to distill its effect; see the sketch after this list).
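As an illustration of the control described in the last bullet, here is a minimal sketch. The group data are invented, and treating independent equal-sized groups as matched blocks for the Friedman test is a simplification made purely for illustration:

```python
# Hypothetical sketch of the distribution-dependent test choice above:
# one-way ANOVA when the groups look normal, Friedman test otherwise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
human = rng.normal(5.0, 1.0, 30)    # sick days, Human crew members
vulcan = rng.normal(4.5, 1.0, 30)   # sick days, Vulcan crew members
tribble = rng.normal(6.0, 1.0, 30)  # sick days, Tribble crew members

# Shapiro-Wilk normality check on each group.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (human, vulcan, tribble))

if normal:
    stat, p = stats.f_oneway(human, vulcan, tribble)
else:
    # Friedman expects repeated measures; equal-sized samples are treated
    # as matched blocks here only to keep the illustration short.
    stat, p = stats.friedmanchisquare(human, vulcan, tribble)
print(f"normal={normal}, statistic={stat:.2f}, p={p:.3f}")
```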
For each variable, you should give:
- name (e.g., presence of family)
- abbreviation (if you intend to use one)
- description (whether the family of the crew members travels on board)
- scale type (nominal: either the family is present or not)
- operationalization (crew members without family on board vs. crew members with family on board)
VII. Participants/Subjects/Datasets (required)
Describe how and why you select the sample. If you conduct a meta-analysis, describe the primary studies/work on which you base your meta-analysis.
Example: We recruit crew members from the science department on a voluntary basis. They are our targeted population.
VIII. Execution Plan (required)
Describe the experimental setting and procedure. This includes the methods/tools that you plan to use (be specific on whether you developed them (and how) or whether they are already defined), and the concrete steps that you plan to take to support/reject the hypotheses or answer the research questions.
Example: Each crew member needs to sign the informed consent form and an agreement to process their data according to the GDPR. Then, we conduct the interviews. Afterwards, participants need to complete the simulated task …
Examples:
Confirmatory:
- https://osf.io/5fptj/ - Do Explicit Review Strategies Improve Code Review Performance?
Exploratory:
- https://osf.io/kfu9t - The Impact of Dynamics of Collaborative Software Engineering on Introverts: A Study Protocol
- https://osf.io/acnwk - Large-Scale Manual Validation of Bugfixing Changes