Fazzini receives Facebook Testing and Verification Research Award

Professor Mattia Fazzini is one of the winners of this year’s Facebook Testing and Verification (TAV) research awards for his proposal, Moka: Improving app testing with automated mocking.

The work aims to improve the assessment of app quality and prevent severe failures caused by undetected bugs in mobile apps that millions of users rely on for daily activities, like reading the news, streaming content, and communicating with family and friends. The research should lead to more effective, easier-to-maintain app testing.

Fazzini’s proposal was one of 10 chosen from a highly competitive field of more than 100 applications. For this year’s TAV request for proposals, Facebook was looking for projects with the potential to have a profound impact on the tech sector, based on advances in the theory and practice of testing and verification.

“We are excited that, once again, we received over 100 proposals, and of such high quality,” says Mark Harman, Research Scientist and Facebook TAV co-chair. “We are really excited to see this research develop, and to see the deployment of advanced research in industry.”

This is the first Facebook TAV award that the University of Minnesota has received. Professor Fazzini and his collaborators, Alessandra Gorla (IMDEA Software Institute) and Alessandro Orso (Georgia Institute of Technology), will tackle multiple research tasks over the next year before preparing their outcomes and evaluation for publication.

Project details

Testing is a powerful technique for identifying software bugs. Its effectiveness in the domain of mobile apps, however, is hindered by the fact that apps interact extensively with their software environment, and these interactions significantly complicate testing activities. For example, when the environment behaves in non-deterministic ways, these interactions can cause test flakiness, which prevents developers from fully trusting test results.
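
To make the flakiness problem concrete, here is a minimal Java sketch (all class and method names are hypothetical) of a test whose outcome depends on the environment rather than on the app’s logic:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FlakyHeadlineTest {

    // Hypothetical dependency that talks to a live server.
    interface NewsApi {
        String latestHeadline();
    }

    // Stand-in for real HTTP code: its result depends on the environment,
    // not on the app's logic (here, the current time plays that role).
    static class LiveNewsApi implements NewsApi {
        public String latestHeadline() {
            return System.currentTimeMillis() % 2 == 0 ? "Headline A" : "Headline B";
        }
    }

    @Test
    public void showsLatestHeadline() {
        NewsApi api = new LiveNewsApi();
        // Passes or fails depending on when it runs: a flaky test.
        assertEquals("Headline A", api.latestHeadline());
    }
}
```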

To address this problem, developers create test mocks that eliminate the need for (part of) the environment to be present during testing. Creating mocks manually, however, can be extremely tedious and time-consuming. Moreover, the resulting mocks can typically be used only in the context of the specific tests for which they were created, which makes them difficult to maintain over time.
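
For illustration, this is what a typical hand-written mock looks like with JUnit and the widely used Mockito library; the NewsApi interface and HeadlineFormatter class are invented for the example:

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.junit.Test;

public class MockedHeadlineTest {

    interface NewsApi {
        String latestHeadline();
    }

    // Code under test: formats whatever the environment returns.
    static class HeadlineFormatter {
        private final NewsApi api;
        HeadlineFormatter(NewsApi api) { this.api = api; }
        String banner() { return "* " + api.latestHeadline() + " *"; }
    }

    @Test
    public void formatsHeadline_withMockedEnvironment() {
        // Hand-written mock: the environment is replaced by a canned
        // response, so the test is deterministic.
        NewsApi api = mock(NewsApi.class);
        when(api.latestHeadline()).thenReturn("Headline A");

        assertEquals("* Headline A *", new HeadlineFormatter(api).banner());
        // Note the coupling: the mock encodes only this test's expected
        // interaction, which is what makes manual mocks hard to reuse.
    }
}
```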

To address these challenges, Fazzini and his team will develop Moka, a framework that (i) observes the interactions between an app and its environment, (ii) studies the relationships between the inputs and outputs exchanged with the software environment, and (iii) generates increasingly sophisticated test mocks, from basic record-and-replay mocks to advanced mocks generated through program synthesis. The mocks generated by the framework will not only improve app testing but will also be easier to maintain, because they will work with newly generated tests as well.
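
As a rough illustration of the record-and-replay idea behind the most basic of these mocks (the names and structure below are assumptions for illustration, not Moka’s actual design), a proxy can capture an app’s real interactions with its environment and later replay them without the environment being present:

```java
import java.util.HashMap;
import java.util.Map;

public class RecordReplayDemo {

    interface NewsApi {
        String headlineFor(String topic);
    }

    // Proxy that records real interactions, then replays them without
    // the environment being present.
    static class RecordReplayMock implements NewsApi {
        private final NewsApi real;                     // wrapped real dependency
        private final Map<String, String> recorded = new HashMap<>();
        private boolean replaying = false;

        RecordReplayMock(NewsApi real) { this.real = real; }

        @Override
        public String headlineFor(String topic) {
            if (replaying) {
                return recorded.get(topic);             // replay the stored output
            }
            String response = real.headlineFor(topic);  // observe the interaction
            recorded.put(topic, response);              // remember input -> output
            return response;
        }

        void startReplay() { replaying = true; }
    }

    public static void main(String[] args) {
        RecordReplayMock api =
                new RecordReplayMock(topic -> "Live headline about " + topic);
        api.headlineFor("sports");                      // recording phase
        api.startReplay();
        System.out.println(api.headlineFor("sports"));  // replayed, no environment
    }
}
```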
