In this paper we introduce a new approach for learning precise and general probabilistic models of code based on decision tree learning. Our approach directly benefits an emerging class of statistical programming tools that leverage probabilistic models of code, learned over large codebases (e.g., GitHub), to make predictions about new programs (e.g., code completion or program repair).
The key idea is to phrase the problem of learning a probabilistic model of code as learning a decision tree in a domain-specific language over abstract syntax trees (called TGen). This allows us to condition the prediction of a program element on a dynamically computed context. Further, our problem formulation enables us to easily instantiate known decision tree learning algorithms such as ID3, but also to obtain new, previously unexplored variants, which we refer to as ID3+ and E13, that outperform ID3 in prediction accuracy.
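To make the underlying learning step concrete, the following is a minimal sketch of classic ID3 decision tree learning applied to predicting an AST node's kind from features of its context. This is an illustration only: the paper's approach learns TGen programs that compute the conditioning context dynamically, whereas this sketch assumes a fixed set of hypothetical context features (`parent`, `left`) and toy data; function names and the feature set are the author's of this note, not the paper's.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a multiset of labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def best_feature(rows, labels, features):
    """Pick the feature whose split maximizes information gain."""
    base = entropy(labels)
    def gain(f):
        split = {}
        for row, y in zip(rows, labels):
            split.setdefault(row[f], []).append(y)
        remainder = sum(len(ys) / len(labels) * entropy(ys)
                        for ys in split.values())
        return base - remainder
    return max(features, key=gain)

def id3(rows, labels, features):
    """Return a tree: either a leaf label, or (feature, {value: subtree})."""
    if len(set(labels)) == 1:          # pure node: stop
        return labels[0]
    if not features:                   # no features left: majority label
        return Counter(labels).most_common(1)[0][0]
    f = best_feature(rows, labels, features)
    branches = {}
    for v in set(row[f] for row in rows):
        idx = [i for i, row in enumerate(rows) if row[f] == v]
        branches[v] = id3([rows[i] for i in idx],
                          [labels[i] for i in idx],
                          [g for g in features if g != f])
    return (f, branches)

def predict(tree, row, default=None):
    """Walk the tree with a context row; fall back on unseen values."""
    while isinstance(tree, tuple):
        f, branches = tree
        if row[f] not in branches:
            return default
        tree = branches[row[f]]
    return tree
```

For example, given training rows whose `parent` feature alone determines the label, `id3` splits on `parent` first (it has the highest information gain) and the resulting tree predicts the child node kind from the context. In a probabilistic model one would store a label distribution at each leaf rather than a single label; that extension is omitted here for brevity.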
Thu 3 Nov, 15:40 - 17:20 (Amsterdam time)
Computing Repair Alternatives for Malformed Programs using Constraint Attribute Grammars
Friedrich Steimann, Jörg Hagemann, Bastian Ulke (Fernuniversität in Hagen)
Probabilistic Model for Code with Decision Trees
Ringer: Web Automation by Demonstration
Shaon Barman (UC Berkeley), Sarah E. Chasins (UC Berkeley), Rastislav Bodík (University of Washington), Sumit Gulwani (Microsoft Research)
Scalable Verification of Border Gateway Protocol Configurations with an SMT Solver
Konstantin Weitz, Doug Woos, Emina Torlak, Michael D. Ernst, Arvind Krishnamurthy, Zachary Tatlock (University of Washington)