Thu 21 Jan 2016 14:45 - 15:10 at Grand Bay North - Track 1: Learning and verification Chair(s): David Monniaux

The core challenge in designing an effective static program analysis is to find a good program abstraction – one that retains only the details relevant to a given query. In this paper, we present a new approach for automatically finding such an abstraction, using guidance from a probabilistic model that is itself tuned by observing prior runs of the analysis. Our approach applies to parametric static analyses implemented in Datalog, and is based on counterexample-guided abstraction refinement. For each untried abstraction, our probabilistic model provides a probability of success, while the size of the abstraction provides an estimate of its cost in terms of analysis time. Combining these two metrics, probability and cost, our refinement algorithm picks an optimal abstraction. Our probabilistic model is a variant of the Erdős–Rényi random graph model, and it is tunable by what we call hyperparameters. We present a method to learn good values for these hyperparameters by observing past runs of the analysis on an existing codebase. We implemented our approach on an object-sensitive pointer analysis for Java programs with two client analyses (PolySite and Downcast). Experiments show the benefits of our approach in reducing the runtime of the analysis.
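To make the probability/cost trade-off concrete, here is a minimal sketch of the selection step: each untried abstraction gets a success estimate from a model and a cost proxy from its size, and the refinement loop picks the candidate with the best ratio. The names, the `Abstraction` type, and the toy stand-in model below are hypothetical illustrations, not the paper's implementation or its actual Erdős–Rényi model.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass(frozen=True)
class Abstraction:
    # Analysis parameters switched on in this abstraction
    # (e.g. which allocation sites are treated context-sensitively).
    params: frozenset

    @property
    def cost(self) -> int:
        # The abstraction's size serves as a proxy for analysis time.
        return len(self.params)

def pick_abstraction(
    untried: Iterable[Abstraction],
    success_prob: Callable[[Abstraction], float],
) -> Optional[Abstraction]:
    # Combine the two metrics: maximize estimated probability of
    # success per unit of estimated cost.
    return max(untried,
               key=lambda a: success_prob(a) / max(a.cost, 1),
               default=None)

# Toy stand-in for the learnt probabilistic model: each enabled parameter
# independently fails to help with probability 0.5, so larger abstractions
# are more likely to succeed but also cost more to run.
def toy_model(a: Abstraction) -> float:
    return 1.0 - 0.5 ** a.cost
```

Under this toy model, a single-parameter abstraction (success estimate 0.5 at cost 1) beats a three-parameter one (0.875 at cost 3) on the ratio, so the loop tries cheap candidates first and escalates only when they fail.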