Sun 30 October - Fri 4 November 2016 Amsterdam, Netherlands

This paper presents Apex, a system that can automatically
generate explanations for bugs in programming assignments,
covering where the bugs are and how the root causes led to the
runtime failures. It works by comparing the passing execution
of a correct implementation (provided by the instructor) and
the failing execution of the buggy implementation (submitted
by the student). The technique overcomes a number of
technical challenges caused by syntactic and semantic differences
of the two implementations. It collects the symbolic traces of
the executions and matches assignment statements in the two
execution traces by reasoning about symbolic equivalence.
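To illustrate the idea of matching assignments by symbolic equivalence (a toy sketch only, not Apex's actual implementation; the trace encoding and the `normalize` canonicalizer are assumptions for illustration), one can represent each assignment's symbolic value as a nested expression tree over input symbols and match statements whose canonicalized values coincide, even when the syntax and variable names differ:

```python
# Toy sketch of symbolic-equivalence matching (illustrative only).
# Symbolic values are nested tuples: (operator, operand, operand, ...),
# with leaves being input symbols such as 'a', 'b', 'c'.

def normalize(expr):
    """Canonicalize commutative '+' and '*' by sorting operands."""
    if isinstance(expr, tuple):
        op, *args = expr
        args = [normalize(x) for x in args]
        if op in ('+', '*'):
            args = sorted(args, key=repr)
        return (op, *args)
    return expr

# Symbolic traces: one (variable, symbolic value) pair per assignment.
correct_trace = [
    ('s', ('+', 'a', 'b')),              # s = a + b
    ('r', ('*', ('+', 'a', 'b'), 'c')),  # r = s * c
]
buggy_trace = [
    ('t', ('+', 'b', 'a')),              # t = b + a (same value, new syntax)
    ('r', ('*', 'c', ('+', 'b', 'a'))),  # r = c * t
]

# Match statements whose normalized symbolic values are equal,
# regardless of variable names or operand order.
matches = [(i, j)
           for i, (_, e1) in enumerate(correct_trace)
           for j, (_, e2) in enumerate(buggy_trace)
           if normalize(e1) == normalize(e2)]
print(matches)  # → [(0, 0), (1, 1)]
```

A full system would reason about equivalence with an SMT solver rather than syntactic canonicalization, but the matching principle is the same.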
It then matches predicates by aligning the control
dependences of the matched assignment statements, avoiding direct
matching of path conditions which are usually quite different.
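The predicate-alignment step can be sketched as follows (a minimal illustration under assumed data structures, not Apex's code): instead of comparing path conditions, each matched assignment pair casts a vote for the pair of predicates the two assignments are control dependent on, and predicate pairs with votes are aligned:

```python
# Toy sketch of predicate alignment via control dependences
# (illustrative only; the maps below are assumed inputs).
from collections import Counter

# Each assignment id maps to the predicate id it is control
# dependent on (None means top level, no controlling predicate).
correct_ctrl = {'s1': 'p1', 's2': 'p1', 's3': None}
buggy_ctrl   = {'t1': 'q1', 't2': 'q1', 't3': None}

# Assignment matches established earlier by symbolic equivalence.
stmt_matches = [('s1', 't1'), ('s2', 't2'), ('s3', 't3')]

# Vote: each matched assignment pair supports matching the pair of
# predicates that control the two assignments.
votes = Counter()
for a, b in stmt_matches:
    pa, pb = correct_ctrl[a], buggy_ctrl[b]
    if pa is not None and pb is not None:
        votes[(pa, pb)] += 1

pred_matches = [pair for pair, n in votes.items() if n > 0]
print(pred_matches)  # → [('p1', 'q1')]
```

This sidesteps direct comparison of path conditions, which can differ substantially between two syntactically unrelated implementations even when their behavior agrees.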
Our evaluation shows that Apex is very effective for 205
buggy real-world student submissions of 4 programming
assignments, and for a set of 15 programming-assignment-style
buggy programs collected from,
precisely pinpointing the root causes and capturing the causality
for 94.5% of them. The evaluation on a standard benchmark
set with over 700 student bugs shows similar results. A user
study in the classroom shows that Apex has substantially
improved student productivity.