SPLASH 2016
Sun 30 October - Fri 4 November 2016 Amsterdam, Netherlands
Mon 31 Oct 2016 16:30 - 16:55 at Matterhorn 1 - Session 2

The need to translate linear algebra operations into efficient code arises in a multitude of applications, for instance, in information theory and regularization. Given such expressions, we are interested in the automatic generation of code that is at least as fast and as numerically stable as what an expert would produce.

Conceptually, the problem is similar to how compilers cast scalar expressions in terms of the available instruction set. The corresponding problem for linear algebra expressions (involving matrices) is much more challenging, and requires expertise in both numerical linear algebra and high-performance computing. On the one hand, one wants to take advantage of highly optimized building blocks for matrix operations, such as those provided by the BLAS and LAPACK libraries. On the other hand, transformations based on associativity, commutativity, and distributivity play an essential role. Further complications arise from the fact that matrices frequently have structures and properties that can be exploited both to transform (and thus simplify) expressions and to evaluate them more efficiently. The application of this kind of knowledge affects not only the computational cost, but also the required storage space and the numerical accuracy.
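
To make these points concrete, here is a minimal sketch in Python with NumPy/SciPy. It is not the system presented in this talk; the regularized least-squares expression, the matrix sizes, and the variable names are illustrative assumptions. It contrasts a naive evaluation with the kind of mapping an expert would choose, exploiting both the positive definiteness of the intermediate matrix and the associativity of matrix products.

```python
# Sketch: evaluating x = (A^T A + alpha*I)^{-1} A^T b, a regularized
# least-squares solve, in two ways. Matrix names and sizes are illustrative.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n, m = 1000, 200
A = rng.standard_normal((n, m))
b = rng.standard_normal(n)
alpha = 1e-3

# Naive mapping: form the explicit inverse and multiply.
# This is both more expensive and less numerically accurate than solving
# a linear system directly.
G = A.T @ A + alpha * np.eye(m)
x_naive = np.linalg.inv(G) @ (A.T @ b)

# Expert mapping: G is symmetric positive definite, so a Cholesky-based
# solve (LAPACK's POTRF/POTRS, exposed through SciPy) is the building
# block of choice.
c, low = cho_factor(G)                   # POTRF: Cholesky factorization
x_expert = cho_solve((c, low), A.T @ b)  # POTRS: two triangular solves

print(np.linalg.norm(x_naive - x_expert))

# Associativity also matters: for a chain A @ B @ x with x a vector,
# A @ (B @ x) needs only two matrix-vector products (GEMV), whereas
# (A @ B) @ x pays for a full matrix-matrix product (GEMM) first.
B = rng.standard_normal((m, m))
x = rng.standard_normal(m)
y_cheap = A @ (B @ x)    # cost ~ m^2 + n*m
y_costly = (A @ B) @ x   # cost ~ n*m^2 + n*m
```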