Whole-program optimizations are powerful tools that can dramatically improve the performance, size, and other aspects of programs. Because they depend on global knowledge, they must typically be reapplied to the whole program whenever a small change is made, which makes them too slow for the development cycle. This is an issue for environments that require, or benefit greatly from, whole-program optimizations, such as compilation to JavaScript or to the Dalvik VM: their development cycle is slowed down either by the lack of optimizations or by the time spent applying them.
We present a new approach to designing incremental whole-program optimizers for object-oriented and functional languages: when part of a program changes, only the portions affected by the changes are reoptimized. An incremental optimizer using this approach for Scala.js, the Scala to JavaScript compiler, demonstrates speedups from 10x to 100x compared to its batch version. As a result, the optimizer's running time becomes insignificant compared to separate compilation, making it fit for use on every compilation run during the development cycle. We also show how to parallelize the incremental algorithm to take advantage of multicore hardware.
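The core idea can be illustrated with a small, self-contained sketch in Scala. The names below (IncrementalOptimizerSketch, MethodBody, optimize, update, dependents) are illustrative assumptions, not the actual Scala.js optimizer API, and the toy "optimization" merely records callee dependencies; the real algorithm described in the paper is considerably more involved. The sketch only shows the two ingredients the abstract mentions: invalidating and reoptimizing only the methods affected by a change, and running the independent per-method work in parallel.

```scala
import scala.collection.mutable
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global

object IncrementalOptimizerSketch {

  type MethodId = String

  // Toy "method body": just the set of callees the method invokes.
  final case class MethodBody(id: MethodId, callees: Set[MethodId])

  private val bodies     = mutable.Map.empty[MethodId, MethodBody]
  private val optimized  = mutable.Map.empty[MethodId, String]
  // Reverse dependencies: for each method, the methods whose cached
  // optimization used knowledge about it (here: inlined it).
  private val dependents = mutable.Map.empty[MethodId, mutable.Set[MethodId]]

  // Toy "optimization" that pretends to inline every callee. It is a pure
  // function of the body, so independent methods can be processed in parallel.
  private def optimize(body: MethodBody): String =
    s"${body.id} [inlined: ${body.callees.toSeq.sorted.mkString(", ")}]"

  // Batch mode: (re)optimize the whole program from scratch.
  def optimizeAll(program: Seq[MethodBody]): Unit = {
    bodies.clear(); optimized.clear(); dependents.clear()
    program.foreach(b => bodies(b.id) = b)
    reoptimize(bodies.keySet.toSet)
  }

  // Incremental mode: reoptimize only the changed methods plus the methods
  // whose cached output depended on them. Returns the reprocessed set.
  def update(changed: Seq[MethodBody]): Set[MethodId] = {
    val invalidated = changed.map(_.id).toSet ++
      changed.flatMap(b => dependents.getOrElse(b.id, Set.empty[MethodId]))
    changed.foreach(b => bodies(b.id) = b)
    reoptimize(invalidated)
    invalidated
  }

  private def reoptimize(ids: Set[MethodId]): Unit = {
    // The per-method work is independent, so it is farmed out in parallel;
    // the shared maps are only read/updated on the calling thread.
    val results = Await.result(
      Future.traverse(ids.toSeq) { id =>
        val body = bodies(id)
        Future(id -> optimize(body))
      },
      Duration.Inf)
    results.foreach { case (id, out) =>
      optimized(id) = out
      // Record who depends on whom. Stale edges from earlier runs are kept:
      // they can only cause harmless extra reoptimization, never a miss.
      bodies(id).callees.foreach { callee =>
        dependents.getOrElseUpdate(callee, mutable.Set.empty) += id
      }
    }
  }
}
```

With this structure, changing a single method body triggers reoptimization of only that method and its recorded dependents, which loosely mirrors the reuse-across-runs behavior the abstract claims; the actual speedup figures (10x to 100x) come from the paper's measurements, not from this sketch.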
Wed 2 Nov, 10:30 - 12:10 (Amsterdam time)

- 10:30, 25m talk: A Compiler for Throughput Optimization of Graph Algorithms on GPUs (OOPSLA)
- 10:55, 25m talk: Automatic Parallelization of Pure Method Calls via Conditional Future Synthesis (OOPSLA)
- 11:20, 25m talk: Portable Inter-workgroup Barrier Synchronisation for GPUs (OOPSLA), by Tyler Sorensen (Imperial College London), Alastair F. Donaldson (Imperial College London), Mark Batty (University of Kent), Ganesh Gopalakrishnan (University of Utah), Zvonimir Rakamaric (University of Utah)
- 11:45, 25m talk: Parallel Incremental Whole-Program Optimizations for Scala.js (OOPSLA)