MSR 2021
Mon 17 - Wed 19 May 2021
co-located with ICSE 2021

The International Conference on Mining Software Repositories (MSR) has hosted a mining challenge since 2006. With this challenge, we call upon everyone interested to apply their tools to a common dataset: researchers and practitioners alike are dared to put their mining tools and approaches to the test.


This program is tentative and subject to change.


Mon 17 May
Times are displayed in time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna.

02:00 - 02:50
03:10 - 04:00
10:00 - 10:50
Resources for MSR Research Technical Papers / Data Showcase at MSR Room 1
10:00
8m
Talk
PSIMiner: A Tool for Mining Rich Abstract Syntax Trees from Code
Technical Papers
Egor Spirin (JetBrains Research; National Research University Higher School of Economics), Egor Bogomolov (JetBrains Research), Vladimir Kovalenko (JetBrains Research), Timofey Bryksin (JetBrains Research; Saint Petersburg State University)
10:08
8m
Talk
Mining DEV for social and technical insights about software development
Technical Papers
Maria Papoutsoglou (Aristotle University of Thessaloniki), Johannes Wachs (Vienna University of Economics and Business & Complexity Science Hub Vienna), Georgia Kapitsaki (University of Cyprus)
Pre-print
10:16
8m
Talk
TNM: A Tool for Mining of Socio-Technical Data from Git Repositories
Technical Papers
Nikolai Sviridov (ITMO University), Mikhail Evtikhiev (JetBrains Research), Vladimir Kovalenko (JetBrains Research)
10:25
8m
Talk
Identifying Versions of Libraries used in Stack Overflow Code Snippets
Technical Papers
Ahmed Zerouali (Vrije Universiteit Brussel), Camilo Velázquez-Rodríguez (Vrije Universiteit Brussel), Coen De Roover (Vrije Universiteit Brussel)
Pre-print
10:33
8m
Talk
Sampling Projects in GitHub for MSR Studies
Data Showcase
Ozren Dabic (Software Institute, Università della Svizzera italiana (USI), Switzerland), Emad Aghajani (Software Institute, USI Università della Svizzera italiana), Gabriele Bavota (Software Institute, USI Università della Svizzera italiana)
Pre-print
10:41
8m
Talk
gambit – An Open Source Name Disambiguation Tool for Version Control Systems
Technical Papers
Christoph Gote (Chair of Systems Design, ETH Zurich), Christian Zingg (Chair of Systems Design, ETH Zurich)
Pre-print
10:00 - 10:50
10:00
8m
Talk
A Traceability Dataset for Open Source Systems
Data Showcase
Mouna Hammoudi (Johannes Kepler University Linz), Christoph Mayr-Dorn (Johannes Kepler University Linz), Atif Mashkoor (Johannes Kepler University Linz), Alexander Egyed (Johannes Kepler University Linz)
10:08
8m
Talk
How Java Programmers Test Exceptional Behavior
Technical Papers
Diego Marcilio (USI Università della Svizzera italiana), Carlo A. Furia (Università della Svizzera italiana (USI))
Pre-print
10:16
8m
Talk
An Exploratory Study of Log Placement Recommendation in an Enterprise System
Technical Papers
Jeanderson Cândido (Delft University of Technology), Jan Haesen (Adyen N.V.), Maurício Aniche (Delft University of Technology), Arie van Deursen (Delft University of Technology, Netherlands)
Pre-print
10:25
8m
Talk
Does Code Review Promote Conformance? A Study of OpenStack Patches
Technical Papers
Panyawut Sri-iesaranusorn (Nara Institute of Science and Technology), Raula Gaikovina Kula (NAIST), Takashi Ishio (Nara Institute of Science and Technology)
Pre-print
10:33
8m
Talk
A Replication Study on the Usability of Code Vocabulary in Predicting Flaky Tests
Technical Papers
Guillaume Haben (University of Luxembourg), Sarra Habchi (University of Lille), Mike Papadakis (University of Luxembourg, Luxembourg), Maxime Cordy (University of Luxembourg, Luxembourg), Yves Le Traon (University of Luxembourg, Luxembourg)
Pre-print
10:41
8m
Talk
On the Use of Mutation in Injecting Test Order-Dependency
Registered Reports
Sarra Habchi (University of Lille), Maxime Cordy (University of Luxembourg, Luxembourg), Mike Papadakis (University of Luxembourg, Luxembourg), Yves Le Traon (University of Luxembourg, Luxembourg)
11:10 - 12:00
17:00 - 17:50
Mining Challenge Session Technical Papers at MSR Room 1
17:50 - 18:10
Break / Discussion Rooms Technical Papers at MSR Room 1

Tue 18 May
Times are displayed in time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna.

02:00 - 02:50
03:10 - 04:00
03:10
10m
Talk
AndroCT: Ten Years of App Call Traces in Android
Data Showcase
Wen Li, Xiaoqin Fu (Washington State University), Haipeng Cai (Washington State University, USA)
Pre-print Media Attached
03:20
10m
Talk
Mining Workflows for Anomalous Data Transfers
Technical Papers
Huy Tu (North Carolina State University, USA), George Papadimitriou (University of Southern California), Mariam Kiran (ESnet, LBNL), Cong Wang (Renaissance Computing Institute), Anirban Mandal (Renaissance Computing Institute), Ewa Deelman (University of Southern California), Tim Menzies (North Carolina State University, USA)
Pre-print
03:30
10m
Talk
Escaping the Time Pit: Pitfalls and Guidelines for Using Time-Based Git Data
Technical Papers
Samuel W. Flint (University of Nebraska-Lincoln), Jigyasa Chauhan (University of Nebraska-Lincoln), Robert Dyer (University of Nebraska-Lincoln)
Pre-print
03:40
10m
Paper
On the Naturalness and Localness of Software Logs
Technical Papers
Sina Gholamian (University of Waterloo), Paul A. S. Ward (University of Waterloo)
Pre-print
03:50
10m
Talk
How Do Software Developers Use GitHub Actions to Automate Their Workflows?
Technical Papers
Timothy Kinsman (University of Adelaide), Mairieli Wessel (University of Sao Paulo), Marco Gerosa (Northern Arizona University, USA), Christoph Treude (University of Adelaide)
Pre-print
10:00 - 10:50
Developer Communications Technical Papers / Data Showcase at MSR Room 1
10:00
10m
Talk
Waiting around or job half-done? Sentiment in self-admitted technical debt
Technical Papers
Gianmarco Fucci (University of Sannio), Nathan Cassee (Eindhoven University of Technology), Fiorella Zampetti (University of Sannio, Italy), Nicole Novielli (University of Bari), Alexander Serebrenik (Eindhoven University of Technology), Massimiliano Di Penta (University of Sannio, Italy)
Pre-print
10:10
10m
Research paper
Automatically Selecting Follow-up Questions for Deficient Bug Reports
Technical Papers
Mia Mohammad Imran (Virginia Commonwealth University), Agnieszka Ciborowska (Virginia Commonwealth University), Kostadin Damevski (Virginia Commonwealth University)
Pre-print
10:20
10m
Full-paper
An Empirical Study of Developer Discussions on Low Code Software Development Challenges
Technical Papers
Md Abdullah Al Alamin (University of Calgary), Sanjay Malakar (Bangladesh University of Engineering and Technology), Gias Uddin (University of Calgary, Canada), Sadia Afroz (Bangladesh University of Engineering and Technology), Tameem Bin Haider (Bangladesh University of Engineering and Technology), Anindya Iqbal (Bangladesh University of Engineering and Technology, Dhaka, Bangladesh)
Pre-print
10:30
10m
Talk
Challenges in Developing Desktop Web Apps: a Study of Stack Overflow and GitHub
Technical Papers
Gian Luca Scoccia (University of L'Aquila), Patrizio Migliarini (DISIM, University of L'Aquila), Marco Autili (University of L'Aquila, Italy)
Pre-print
10:40
10m
Talk
Search4Code: Code Search Intent Classification Using Weak Supervision
Data Showcase
Nikitha Rao (Microsoft Research), Chetan Bansal (Microsoft Research), Joe Guan (Microsoft)
Pre-print
10:00 - 10:50
10:00
8m
Talk
Fast and Memory-Efficient Neural Code Completion
Technical Papers
Alexey Svyatkovskiy (Microsoft), Sebastian Lee (University of Oxford), Anna Hadjitofi (Alan Turing Institute), Maik Riechert (Microsoft Research), Juliana Vicente Franco (Microsoft Research), Miltiadis Allamanis (Microsoft Research, UK)
Pre-print
10:08
8m
Research paper
Comparative Study of Feature Reduction Techniques in Software Change Prediction
Technical Papers
Ruchika Malhotra (Delhi Technological University), Ritvik Kapoor (Delhi Technological University), Deepti Aggarwal (Delhi Technological University), Priya Garg (Delhi Technological University)
Pre-print
10:16
8m
Talk
An Empirical Study on the Usage of BERT Models for Code Completion
Technical Papers
Matteo Ciniselli (Università della Svizzera italiana), Nathan Cooper (William & Mary), Luca Pascarella (Università della Svizzera italiana), Denys Poshyvanyk (College of William & Mary), Massimiliano Di Penta (University of Sannio, Italy), Gabriele Bavota (Software Institute, USI Università della Svizzera italiana)
Pre-print
10:25
8m
Talk
ManyTypes4Py: A benchmark Python dataset for machine learning-based type inference
Data Showcase
Amir Mir (Delft University of Technology), Evaldas Latoskinas (Delft University of Technology), Georgios Gousios (Facebook & Delft University of Technology)
10:33
8m
Talk
KGTorrent: A Dataset of Python Jupyter Notebooks from Kaggle
Data Showcase
Luigi Quaranta (University of Bari, Italy), Fabio Calefato (University of Bari), Filippo Lanubile (University of Bari)
10:41
8m
Talk
Exploring the relationship between performance metrics and cost saving potential of defect prediction models
Registered Reports
Steffen Herbold (University of Göttingen)
Pre-print
11:10 - 12:00
17:00 - 17:50
17:00
8m
Talk
An Exploratory Study of Project Activity Changepoints in Open Source Software Evolution
Hackathon
James Walden (Northern Kentucky University), Noah Burgin, Kuljit Kaur Chahal
17:08
8m
Paper
The Diversity-Innovation Paradox in Open-Source Software
Hackathon
Mengchen Sam Yong (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Lavinia Francesca Paganini (Federal University of Pernambuco), Huilian Sophie Qiu (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), José Bayoán Santiago Calderón (University of Virginia, USA)
DOI Pre-print
17:16
8m
Talk
The Secret Life of Hackathon Code
Technical Papers
Ahmed Samir Imam Mahmoud (University of Tartu), Tapajit Dey (Lero - The Irish Software Research Centre and University of Limerick), Alexander Nolte (University of Tartu), Audris Mockus (The University of Tennessee), Jim Herbsleb (Carnegie Mellon University)
Pre-print
17:25
8m
Talk
The Secret Life of Hackathon Code
Hackathon
Ahmed Samir Imam Mahmoud (University of Tartu), Tapajit Dey (Lero - The Irish Software Research Centre and University of Limerick)
Pre-print
17:33
8m
Talk
Tracing Vulnerable Code Lineage
Hackathon
David Reid (University of Tennessee), Kalvin Eng (University of Alberta), Chris Bogart (Carnegie Mellon University), Adam Tutko (University of Tennessee - Knoxville)
Pre-print
17:41
8m
Talk
Building the Collaboration Graph of Open-Source Software Ecosystem
Hackathon
Pre-print
17:00 - 17:50
17:00
8m
Talk
What Code Is Deliberately Excluded from Test Coverage and Why?
Technical Papers
Pre-print
17:08
8m
Talk
AndroR2: A Dataset of Manually-Reproduced Bug Reports for Android apps
Data Showcase
Tyler Wendland (University of Minnesota), Jingyang Sun (University of British Columbia), Junayed Mahmud (George Mason University), S M Hasan Mansur (George Mason University), Steven Huang (University of British Columbia), Kevin Moran (George Mason University), Julia Rubin (University of British Columbia, Canada), Mattia Fazzini (University of Minnesota)
17:16
8m
Talk
Apache Software Foundation Incubator Project Sustainability Dataset
Data Showcase
Likang Yin (University of California, Davis), Zhiyuan Zhang (University of California, Davis), Qi Xuan (Institute of Cyberspace Security, Zhejiang University of Technology, Hangzhou, China), Vladimir Filkov (University of California at Davis, USA)
17:25
8m
Talk
Leveraging Models to Reduce Test Cases in Software Repositories
Technical Papers
Golnaz Gharachorlu (Simon Fraser University), Nick Sumner (Simon Fraser University)
Pre-print
17:33
8m
Talk
Which contributions count? Analysis of attribution in open source
Technical Papers
Jean-Gabriel Young (University of Vermont), Amanda Casari (Open Source Programs Office, Google), Katie McLaughlin (Open Source Programs Office, Google), Milo Trujillo (University of Vermont), Laurent Hébert-Dufresne (University of Vermont), James P. Bagrow (University of Vermont)
Pre-print
17:41
8m
Talk
On Improving Deep Learning Trace Analysis with System Call Arguments
Technical Papers
Quentin Fournier (Polytechnique Montréal), Daniel Aloise (Polytechnique Montréal), Seyed Vahid Azhari (Ciena), François Tetreault (Ciena)
Pre-print
17:50 - 18:10
Break / Discussion Rooms Technical Papers at MSR Room 1
18:10 - 19:00

Wed 19 May
Times are displayed in time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna.

02:00 - 02:50
02:00
10m
Talk
Practitioners' Perceptions of the Goals and Visual Explanations of Defect Prediction Models
Technical Papers
Jirayus Jiarpakdee (Monash University, Australia), Chakkrit Tantithamthavorn (Monash University), John Grundy (Monash University)
Pre-print
02:10
10m
Talk
On the Effectiveness of Deep Vulnerability Detectors to Simple Stupid Bug Detection
Mining Challenge
Jiayi Hua (Beijing University of Posts and Telecommunications), Haoyu Wang (Beijing University of Posts and Telecommunications)
Pre-print
02:20
10m
Talk
An Empirical Study of OSS-Fuzz Bugs
Technical Papers
Zhen Yu Ding (Motional), Claire Le Goues (Carnegie Mellon University)
Pre-print
02:30
10m
Talk
Denchmark: A Bug Benchmark of Deep Learning-related Software
Data Showcase
Misoo Kim (Sungkyunkwan University), Youngkyoung Kim (Sungkyunkwan University), Eunseok Lee (Sungkyunkwan University)
02:40
10m
Talk
JITLine: A Simpler, Better, Faster, Finer-grained Just-In-Time Defect Prediction
Technical Papers
Chanathip Pornprasit (Monash University), Chakkrit Tantithamthavorn (Monash University)
Pre-print
02:00 - 02:50
02:00
8m
Talk
Automatic Part-of-Speech Tagging for Security Vulnerability Descriptions
Technical Papers
Sofonias Yitagesu (Tianjin University), Xiaowang Zhang (Tianjin University), Zhiyong Feng (Tianjin University), Xiaohong Li (Tianjin University), Zhenchang Xing (Australian National University)
Pre-print
02:08
8m
Talk
Attention-based model for predicting question relatedness on Stack Overflow
Technical Papers
Jiayan Pei (South China University of Technology), Yimin Wu (South China University of Technology; Research Institute of SCUT in Yangjiang), Zishan Qin (South China University of Technology), Yao Cong (South China University of Technology), Jingtao Guan (Research Institute of SCUT in Yangjiang)
Pre-print
02:16
8m
Talk
Characterising the Knowledge about Primitive Variables in Java Code Comments
Technical Papers
Mahfouth Alghamdi (The University of Adelaide), Shinpei Hayashi (Tokyo Institute of Technology), Takashi Kobayashi (Tokyo Institute of Technology), Christoph Treude (University of Adelaide)
Pre-print
02:25
8m
Talk
Googling for Software Development: What Developers Search For and What They Find
Technical Papers
Pre-print
02:33
8m
Talk
Evaluating Pre-Trained Models for User Feedback Analysis in Software Engineering: A Study on Classification of App-Reviews
Registered Reports
Mohammad Abdul Hadi (University of British Columbia), Fatemeh Hendijani Fard (University of British Columbia)
02:41
8m
Talk
Cross-status Communication and Project Outcomes in OSS Development – A Language Style Matching Perspective
Registered Reports
Yisi Han (Nanjing University), Zhendong Wang (University of California, Irvine), Yang Feng (State Key Laboratory for Novel Software Technology, Nanjing University), Zhihong Zhao (Nanjing Tech University), Yi Wang (Beijing University of Posts and Telecommunications)
02:50 - 03:10
Break / Discussion Rooms Technical Papers at MSR Room 1
03:10 - 04:00
10:00 - 10:50
10:00
8m
Talk
AndroidCompass: A Dataset of Android Compatibility Checks in Code Repositories
Data Showcase
Sebastian Nielebock (Otto-von-Guericke University Magdeburg, Germany), Paul Blockhaus (Otto-von-Guericke University Magdeburg, Germany), Jacob Krüger (Otto-von-Guericke University Magdeburg), Frank Ortmeier (Otto-von-Guericke University Magdeburg, Faculty of Computer Science, Chair of Software Engineering)
Pre-print Media Attached
10:08
8m
Talk
GE526: A Dataset of Open Source Game Engines
Data Showcase
Dheeraj Vagavolu (Indian Institute of Technology Tirupati), Vartika Agrahari (Indian Institute of Technology Tirupati), Sridhar Chimalakonda (Indian Institute of Technology Tirupati), Akhila Sri Manasa Venigalla (IIT Tirupati, India)
10:16
8m
Talk
Andromeda: A Dataset of Ansible Galaxy Roles and Their Evolution
Data Showcase
Ruben Opdebeeck (Vrije Universiteit Brussel), Ahmed Zerouali (Vrije Universiteit Brussel), Coen De Roover (Vrije Universiteit Brussel)
10:25
8m
Talk
The Wonderless Dataset for Serverless Computing
Data Showcase
Nafise Eskandani (TU Darmstadt), Guido Salvaneschi (University of St. Gallen)
10:33
8m
Talk
DUETS: A Dataset of Reproducible Pairs of Java Library-Clients
Data Showcase
Thomas Durieux (KTH Royal Institute of Technology, Sweden), César Soto-Valero (KTH Royal Institute of Technology), Benoit Baudry (KTH Royal Institute of Technology)
Pre-print
10:41
8m
Talk
EQBENCH: A Dataset of Equivalent and Non-equivalent Program Pairs
Data Showcase
Sahar Badihi (University of British Columbia, Canada), Yi Li (Nanyang Technological University, Singapore), Julia Rubin (University of British Columbia, Canada)
10:00 - 10:50
10:00
10m
Talk
Identifying Critical Projects via PageRank and Truck Factor
Technical Papers
Rolf-Helge Pfeiffer (IT University of Copenhagen)
10:10
10m
Talk
Revisiting Dockerfiles in Open Source Software Over Time
Technical Papers
Kalvin Eng (University of Alberta), Abram Hindle (University of Alberta)
Pre-print
10:20
10m
Talk
Can I Solve it? Identifying the APIs required to complete OSS tasks
Technical Papers
Fabio Marcos De Abreu Santos (Northern Arizona University, USA), Igor Scaliante Wiese (Federal University of Technology – Paraná (UTFPR)), Bianca Trinkenreich (Northern Arizona University), Igor Steinmacher (Universidade Tecnológica Federal do Paraná), Anita Sarma (Oregon State University), Marco Gerosa (Northern Arizona University, USA)
Pre-print
10:30
10m
Talk
On the Use of Dependabot Security Pull Requests
Technical Papers
Mahmoud Alfadel (Concordia University), Diego Costa (Concordia University, Canada), Emad Shihab (Concordia University), Mouafak Mkhallalati (Concordia University)
Pre-print
10:40
10m
Talk
Does the First-Response Matter for Future Contributions? A Study of First Contributions
Registered Reports
Noppadol Assavakamhaenghan (Nara Institute of Science and Technology), Supatsara Wattanakriengkrai (Nara Institute of Science and Technology), Naomichi Shimada (Nara Institute of Science and Technology), Raula Gaikovina Kula (NAIST), Takashi Ishio (Nara Institute of Science and Technology), Kenichi Matsumoto (Nara Institute of Science and Technology)
10:50 - 11:10
Break / Discussion Rooms Technical Papers at MSR Room 1
11:10 - 12:00
Mini-Keynotes Technical Papers at MSR Room 1
17:00 - 17:50
Energy and Logging Technical Papers at MSR Room 1
17:00
12m
Talk
S3M: Siamese Stack (Trace) Similarity Measure
Technical Papers
Aleksandr Khvorov (JetBrains; ITMO University), Roman Vasiliev (JetBrains), George Chernishev (Saint-Petersburg State University), Irving Muller Rodrigues (Polytechnique Montreal, Montreal, Canada), Dmitrij Koznov (Saint-Petersburg State University), Nikita Povarov (JetBrains)
Pre-print
17:12
12m
Talk
Mining the ROS ecosystem for Green Architectural Tactics in Robotics and an Empirical Evaluation
Technical Papers
Ivano Malavolta (Vrije Universiteit Amsterdam), Katerina Chinnappan (Vrije Universiteit Amsterdam), Stan Swanborn (Vrije Universiteit Amsterdam, The Netherlands), Grace Lewis (Carnegie Mellon Software Engineering Institute), Patricia Lago (Vrije Universiteit Amsterdam)
17:25
12m
Talk
Mining Energy-Related Practices in Robotics Software
Technical Papers
Michel Albonico (UTFPR), Ivano Malavolta, Gustavo Pinto (Federal University of Pará), Emitzá Guzmán, Katerina Chinnappan (Vrije Universiteit Amsterdam), Patricia Lago (Vrije Universiteit Amsterdam)
Pre-print
17:37
12m
Talk
Mining API Interactions to Analyze Software Revisions for the Evolution of Energy Consumption
Technical Papers
Andreas Schuler (University of Applied Sciences Upper Austria), Gabriele Anderst-Kotsis (Johannes Kepler University, Linz, Austria)
Pre-print
17:00 - 17:50
Change Management and Analysis Technical Papers / Registered Reports at MSR Room 2
17:00
10m
Talk
Studying the Change Histories of Stack Overflow and GitHub Snippets
Technical Papers
Saraj Singh Manes (Carleton University), Olga Baysal (Carleton University)
Pre-print
17:10
10m
Talk
Learning Off-By-One Mistakes: An Empirical Study
Technical Papers
Hendrig Sellik (Delft University of Technology), Onno van Paridon (Adyen N.V.), Georgios Gousios (Facebook & Delft University of Technology), Maurício Aniche (Delft University of Technology)
Pre-print
17:20
10m
Talk
Predicting Design Impactful Changes in Modern Code Review: A Large-Scale Empirical Study
Technical Papers
Anderson Uchôa (Pontifical Catholic University of Rio de Janeiro (PUC-Rio)), Caio Barbosa (PUC-Rio), Daniel Coutinho (PUC-Rio), Willian Oizumi (PUC-Rio), Wesley Assunção (PUC-Rio), Silvia Regina Vergilio (Federal University of Paraná), Juliana Alves Pereira (PUC-Rio), Anderson Oliveira (PUC-Rio), Alessandro Garcia (PUC-Rio)
Pre-print
17:30
10m
Talk
Rollback Edit Inconsistencies in Developer Forum
Technical Papers
Saikat Mondal (University of Saskatchewan), Gias Uddin (University of Calgary, Canada), Chanchal K. Roy (University of Saskatchewan)
Pre-print
17:40
10m
Talk
Assessing the Exposure of Software Changes: The DiPiDi Approach
Registered Reports
Mehran Meidani (University of Waterloo), Maxime Lamothe (University of Waterloo), Shane McIntosh
Pre-print
17:50 - 18:10
Break / Discussion Rooms Technical Papers at MSR Room 1
18:10 - 19:00
18:10
7m
Talk
A large-scale study on human-cloned changes for automated program repair
Mining Challenge
Fernanda Madeiral (KTH Royal Institute of Technology), Thomas Durieux (KTH Royal Institute of Technology, Sweden)
Pre-print
18:17
7m
Talk
Applying CodeBERT for Automated Program Repair of Java Simple Bugs
Mining Challenge
Ehsan Mashhadi (University of Calgary), Hadi Hemmati (University of Calgary)
Pre-print
18:24
7m
Talk
PySStuBs: Characterizing Single-Statement Bugs in Popular Open-Source Python Projects
Mining Challenge
Arthur Veloso Kamienski (University of Alberta), Luisa Palechor Anacona (University of Alberta), Abram Hindle (University of Alberta), Cor-Paul Bezemer (University of Alberta)
18:31
7m
Talk
How Effective is Continuous Integration in Indicating Single-Statement Bugs?
Mining Challenge
Jasmine Latendresse (Concordia University), Rabe Abdalkareem (Queen's University, Kingston, Canada), Diego Costa (Concordia University, Canada), Emad Shihab (Concordia University)
Pre-print
18:38
7m
Talk
Mea culpa: How developers fix their own simple bugs differently from other developers
Mining Challenge
Wenhan Zhu (University of Waterloo), Michael W. Godfrey (University of Waterloo, Canada)
Pre-print
18:45
7m
Talk
On the Distribution of "Simple Stupid Bugs" in Unit Test Files: An Exploratory Study
Mining Challenge
Anthony Peruma (Rochester Institute of Technology), Christian D. Newman (Rochester Institute of Technology)
Pre-print Media Attached
18:52
7m
Talk
On the Rise and Fall of Simple Stupid Bugs: a Life-Cycle Analysis of SStuBs
Mining Challenge
Balázs Mosolygó (University of Szeged), Norbert Vándor (University of Szeged), Gabor Antal (University of Szeged), Peter Hegedus (University of Szeged)
Pre-print

Call for Mining Challenge Papers

This year, the mining challenge is about ManySStuBs4J, a dataset of fixes to simple Java bugs. The dataset focuses on “SStuBs”, i.e., simple stupid bugs: bugs that appear on a single statement and whose fix is contained within that statement.

The dataset was collected to facilitate the study of such bugs and to support empirical questions about them and about related program repair techniques. The included fixes are classified, where possible, into one of 16 syntactic templates, such as accidentally swapped method arguments, incorrect operator usage, or wrong variable usage. For each bug in the challenge dataset, we include the provenance of the bug (GitHub project, commit SHA), the diff between the buggy and fixed versions of the files, and an annotation of which (if any) of the 16 SStuB patterns it matches.
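
To give a feel for the data, here is a minimal sketch of how the records could be explored in Python. It assumes the dataset has been downloaded as a JSON array named sstubs.json; the field names used below (bugType, projectName, fixCommitSHA1) follow the dataset documentation, but check them against the version you download.

import json
from collections import Counter

# Assumption: the downloaded dataset is a JSON array of bug records.
with open("sstubs.json", encoding="utf-8") as f:
    bugs = json.load(f)

# Tally how often each of the 16 syntactic templates occurs.
templates = Counter(bug["bugType"] for bug in bugs)
for template, count in templates.most_common():
    print(template, count)

# Provenance of a single fix: the GitHub project and the fixing commit SHA.
example = bugs[0]
print(example["projectName"], example["fixCommitSHA1"])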

Participants can use either of two variants of the dataset. The small version contains 25,539 SStuB changes mined from 100 Java projects on GitHub that use Maven; these projects can be built and tested (if they contain tests) in an automated fashion. The large version contains 153,652 SStuBs mined from 1,000 popular open-source Java projects on GitHub; not all of these projects use Maven, so it may not be feasible to build and test them.
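
For the small variant, one possible workflow is to check out a project at the parent of its fix commit and run the Maven test suite. The sketch below is illustrative only: it assumes git and mvn are on the PATH and that each record carries projectName (in owner.repo form) and fixCommitParentSHA1 fields, which should be verified against the dataset documentation.

import os
import subprocess

def test_before_fix(record, workdir="repos"):
    # Assumption: projectName is "owner.repo"; adjust if the schema differs.
    owner, repo = record["projectName"].split(".", 1)
    path = os.path.join(workdir, repo)
    if not os.path.isdir(path):
        url = f"https://github.com/{owner}/{repo}.git"
        subprocess.run(["git", "clone", url, path], check=True)
    # Check out the buggy version, i.e., the parent of the fixing commit.
    subprocess.run(["git", "checkout", record["fixCommitParentSHA1"]],
                   cwd=path, check=True)
    # A non-zero exit code suggests the buggy version fails its tests.
    return subprocess.run(["mvn", "-q", "test"], cwd=path).returncode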

The challenge is open-ended: participants can choose the research questions that they find most interesting. Our suggestions include:

  • Bug detection: What methods are most effective for locating SStuBs?
  • Program repair: What methods are most effective for proposing repairs to SStuBs? Repair could be a separate step or combined with detection.
  • Why SStuBs occur: What context do SStuBs appear in? What are common root causes? Are there characteristics of the software project, the development team, or the individual source file that make SStuBs more likely to appear?
  • What encourages fixing SStuBs: What factors characterize SStuBs that are more quickly found and fixed?
  • Testing: How is testing related to SStuBs? Do projects with more or better unit tests have fewer or easier-to-fix SStuBs?

These are just some of the questions that could be answered using the ManySStuBs4J dataset. Participants may combine the SStuBs data with other code or metadata: for example, data about project popularity or contributor experience. We will not provide such data, but participants are encouraged to “bring their own data” (BYOD) by joining SStuBs data with data from other public, readily available, sources such as GHTorrent or GitHub. We ask the participants to carefully consider any ethical implications that stem from using other sources of data, such as the use of personally identifiable information.
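
As a concrete illustration of the BYOD idea, the sketch below pulls a repository's star count from the public GitHub REST API so that popularity can be joined onto each SStuB. The owner/repo split of projectName and the GITHUB_TOKEN environment variable are assumptions; a GHTorrent dump could be used instead of the live API.

import json
import os
import urllib.request

def repo_stars(owner, repo):
    # GET /repos/{owner}/{repo} returns repository metadata,
    # including stargazers_count.
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}",
        headers={"Authorization": "token " + os.environ["GITHUB_TOKEN"]},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["stargazers_count"]

# Example: join popularity onto a bug record (projectName assumed "owner.repo").
# bug["stars"] = repo_stars(*bug["projectName"].split(".", 1))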

How to Participate in the Challenge

First, familiarize yourself with the ManySStuBs4J dataset:

  • Read the MSR 2020 paper about ManySStuBs4J.
  • Study the download page of ManySStuBs4J, which includes the most recent version and links to download the dataset as well as the documentation page.
  • Create a new issue on the dataset's issue tracker in case you have problems with the dataset or want to suggest ideas for improvements.

Finally, use the dataset to answer your research questions, report your findings in a four-page challenge paper (see information below), submit your abstract before January 19, 2021, and your final paper before January 26, 2021. If your paper is accepted, present your results at MSR 2021 in Madrid, Spain!

Join us on Slack for informal discussions among participants at http://msr2021challenge.slack.com/

Submission

A challenge paper should describe the results of your work: an introduction to the problem you address and why it is worth studying, the version of the dataset you used, the approach and tools you used, your results and their implications, and your conclusions. Make sure your report highlights the contributions and the importance of your work. See also our open science policy regarding the publication of software and additional data you used for the challenge.

Challenge papers must not exceed 4 pages, plus 1 additional page containing only references, and must conform to the MSR 2021 format and submission guidelines. Each submission will be reviewed by at least three members of the program committee. Submissions should follow the IEEE Conference Proceedings Formatting Guidelines, with the title in 24pt font and the full text in 10pt font. LaTeX users must use \documentclass[10pt,conference]{IEEEtran} without including the compsoc or compsocconf option.
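
For orientation, a minimal skeleton consistent with these requirements might look as follows; only the \documentclass line is mandated above, and everything else (title, structure) is an illustrative placeholder.

\documentclass[10pt,conference]{IEEEtran}
% Double-blind submission: no author names or affiliations.
\begin{document}
\title{Your SStuBs Challenge Paper Title}
\author{\IEEEauthorblockN{Anonymous Author(s)}}
\maketitle
\begin{abstract}
...
\end{abstract}
\section{Introduction}
...
\end{document}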

IMPORTANT: The mining challenge track of MSR 2021 follows the double-blind submission model. Submissions should not reveal the identity of the authors in any way. This means that authors should:

  • leave out author names and affiliations from the body and metadata of the submitted pdf
  • ensure that any citations to related work by themselves are written in the third person, for example “the prior work of XYZ” as opposed to “our prior work [2]”
  • not refer to their personal, lab or university website; similarly, care should be taken with personal accounts on GitHub, BitBucket, Google Drive, etc.
  • not upload unblinded versions of their paper on archival websites during bidding/reviewing; however, uploading unblinded versions prior to submission is allowed and sometimes unavoidable (e.g., for a thesis)

Authors with further questions on double-blind reviewing are encouraged to contact the Mining Challenge Chairs via email.

Papers must be submitted electronically through HotCRP, should not have been published elsewhere, and should not be under review or submitted for review elsewhere for the duration of consideration. ACM plagiarism policy and procedures shall be followed for cases of double submission. The submission must also comply with the IEEE Policy on Authorship.

Upon notification of acceptance, all authors of accepted papers will be asked to complete a copyright form and will receive further instructions for preparing their camera ready versions. At least one author of each accepted paper is expected to register and present the results at MSR 2021. All accepted contributions will be published in the electronic conference proceedings.

This year’s mining challenge can be cited as:

@inproceedings{msrchallenge2021,
title={MSR Mining Challenge: The Life of Simple, Stupid Bugs (SStuBs)},
author={Karampatsis, Rafael-Michael and Allamanis, Miltiadis and Sutton, Charles},
year={2021},
booktitle={Proceedings of the International Conference on Mining Software Repositories (MSR 2021)},
}

The dataset itself can be cited as:

@inproceedings{sstubs,
title={How Often Do Single-Statement Bugs Occur? The ManySStuBs4J Dataset},
author={Karampatsis, Rafael-Michael and Sutton, Charles},
year={2020},
booktitle={Proceedings of the International Conference on Mining Software Repositories (MSR 2020)},
preprint={https://arxiv.org/abs/1905.13334}
}

Open Science Policy

Openness in science is key to fostering progress via transparency, reproducibility and replicability. Our steering principle is that all research output should be accessible to the public and that empirical studies should be reproducible. In particular, we actively support the adoption of open data and open source principles. To increase reproducibility and replicability, we encourage all contributing authors to disclose:

  • the source code of the software they used to retrieve and analyze the data
  • the (anonymized and curated) empirical data they retrieved in addition to the ManySStuBs4J dataset
  • a document with instructions for other researchers describing how to reproduce or replicate the results

Upon submission, authors can already privately share their anonymized data and software on archives such as Zenodo or Figshare. Zenodo accepts up to 50GB per dataset (more upon request); there is no need to use Dropbox or Google Drive. After acceptance, data and software should be made public so that they receive a DOI and become citable. Zenodo and Figshare accounts can easily be linked with GitHub repositories to automatically archive software releases. In the unlikely case that authors need to upload terabytes of data, Archive.org may be used.

We recognise that anonymising artifacts such as source code is more difficult than preserving anonymity in a paper. We ask authors to take a best-effort approach and not reveal their identities. We will also ask reviewers to avoid trying to identify authors by looking at commit histories and other information that is not easily anonymised. Authors wanting to share GitHub repositories may want to look into https://anonymous.4open.science/, an open source tool that helps you quickly double-blind your repository.

We encourage authors to self-archive pre- and postprints of their papers in open, preserved repositories such as arXiv.org. This is legal and allowed by all major publishers including ACM and IEEE and it lets anybody in the world reach your paper. Note that you are usually not allowed to self-archive the PDF of the published article (that is, the publisher proof or the Digital Library version).

Please note that the success of the open science initiative depends on the willingness (and possibilities) of authors to disclose their data and that all submissions will undergo the same review process independent of whether or not they disclose their analysis code or data. We encourage authors who cannot disclose industrial or otherwise non-public data, for instance due to non-disclosure agreements, to provide an explicit (short) statement in the paper.

Best Mining Challenge Paper Award

As mentioned above, all submissions will undergo the same review process independent of whether or not they disclose their analysis code or data. However, only accepted papers for which code and data are available on preserved archives, as described in the open science policy, will be considered by the program committee for the best mining challenge paper award.

Best Student Presentation Award

As in previous years, there will be a public vote during the conference to select the best mining challenge presentation. This award often goes to authors of compelling work who present an engaging story to the audience. Only students can compete for this award.

Call for Mining Challenge Proposals

One of the secret ingredients behind the success of the International Conference on Mining Software Repositories (MSR) is its annual Mining Challenge, in which MSR participants can showcase their techniques, tools and creativity on a common data set. In true MSR fashion, this data set is a “real” data set contributed by researchers in the community, solicited through an open call.

There are many benefits to sharing a data set for the MSR Mining Challenge. The selected challenge proposal explaining the data set will appear in the MSR 2021 proceedings, and the challenge papers using the data set will be required to cite the challenge proposal or an existing paper of the researchers about the selected data set. Furthermore, the authors of the data set will become the official 2021 Mining Challenge Chairs, responsible for the reviewing process (e.g., composing a challenge PC, managing the submissions and review assignments, etc.). Finally, it is not uncommon for challenge data sets to feature in MSR and other publications well after the conference is finished!

If you would like to compete for a chance to have your data set featured in the 2021 MSR Mining Challenge, and all the benefits that come with it, please submit a 1-page proposal with up to 3 pages of appendix at https://msr2021.hotcrp.com/, containing the following information:

  1. Title of data set.
  2. What does your data set contain?
  3. How large is it?
  4. How accessible is it and how can the data be obtained?
  5. How representative is it?
  6. Does it require specialized tools to mine it?
  7. What would challenge participants need to work with the data set?
  8. What kind of questions do you expect challenge participants to answer?
  9. A link to a small sample of the data (e.g., via dropbox, github, etc.).

Proposals must conform to the IEEE Conference Proceedings Formatting Guidelines (title in 24pt font and full text in 10pt type; LaTeX users must use \documentclass[10pt,conference]{IEEEtran} without including the compsoc or compsocconf options). For more information, see https://www.ieee.org/conferences/publishing/templates.html

Once the winning proposal is selected, its authors will become the MSR 2021 Challenge Chairs and will be responsible for choosing the MSR 2021 Mining Challenge program committee. The major deadline for this is the 15th of September 2020, at which time the Challenge CFP along with the PC will be announced and the full challenge data set will be publicly released. By making the challenge data set available by early fall, we hope that many student teams will be able to use it for their graduate class projects.

Timeline:

  • Deadline for proposals: August 14th, 2020
  • Proposal accepted and all submitters notified: August 21st, 2020
  • Challenge CFP: September 16th, 2020
  • Challenge PC formed: September 16th, 2020
  • Challenge data made available: September 14th, 2020
  • Challenge papers deadline: Feb 19th, 2021 (tentative)