SPLASH 2016
Sun 30 October - Fri 4 November 2016 Amsterdam, Netherlands

The Third International Workshop on Software Engineering for Parallel Systems

This workshop provides a stable forum for researchers and practitioners dealing with compelling challenges of the software development life cycle on modern parallel platforms. The increased complexity of parallel applications on modern parallel platforms (e.g. multicore/manycore, distributed or hybrid) requires more insight into development processes, and necessitates the use of advanced methods and techniques supporting developers in creating parallel applications or parallelizing and re-engineering sequential legacy applications. We aim to advance the state of the art in different phases of parallel software development, covering software engineering aspects such as requirements engineering and software specification; design and implementation; program analysis, profiling and tuning; testing and debugging.

Workshop Goal

The purpose of the workshop is to provide a stimulating environment where topics relevant to parallel software engineering can be discussed by members of the SPLASH community and software and languages researchers. The intention of the workshop is to initiate collaborations focused on solving challenges introduced by ongoing research in the parallel programming field. Through Q&A sessions, presenters have the opportunity to receive feedback and opinions from other domain experts as well as to discuss obstacles and promising approaches in current research. Both authors and attendees can discover new ideas and directions to solve software engineering issues related to parallel programming.


Tue 1 Nov

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

08:30 - 10:00
Session 1: SEPS at St Gallen
08:30
90m
Talk
SEPS Keynote: Patterns for Parallel Programming: New and Improved!
SEPS
Beverly Sanders University of Florida
10:30 - 12:10
Session 2: SEPS at St Gallen
Chair(s): Tamer Dallou LG Electronics San Jose Lab
10:30
25m
Talk
Reducing Parallelizing Compilation Time by Removing Redundant Analysis
SEPS
Jixin Han Waseda University, Japan, Rina Fujino Waseda University, Japan, Ryota Tamura Waseda University, Japan, Mamoru Shimaoka Waseda University, Japan, Hiroki Mikami Waseda University, Japan, Moriyuki Takamura OSCAR TECHNOLOGY, Japan, Sachio Kamiya OSCAR TECHNOLOGY, Japan, Kazuhiko Suzuki OSCAR TECHNOLOGY, Japan, Takahiro Miyajima OSCAR TECHNOLOGY, Japan, Keiji Kimura Waseda University, Hironori Kasahara Waseda University, Japan
10:55
25m
Talk
A Divide-and-Conquer Parallel Pattern Implementation for Multicores
SEPS
Marco Danelutto University of Pisa, Italy, Tiziano De Matteis University of Pisa, Italy, Gabriele Mencagli University of Pisa, Italy, Massimo Torquati University of Pisa, Italy
11:20
25m
Talk
Parallel Evaluation of a DSP Algorithm using Julia
SEPS
Peter Kourzanov NXP, Netherlands
11:45
25m
Talk
Exhaustive Analysis of Thread-Level Speculation
SEPS
Clark Verbrugge McGill University, Canada, Christopher J. F. Pickett McGill University, Canada, Alexander Krolik McGill University, Canada, Allan Kielstra IBM, Canada
13:30 - 15:10
Session 3: SEPS at St Gallen
13:30
1h40m
Talk
SEPS Tutorial: Task-Based Programming Everywhere from Supercomputing to Embedded Multicore Systems
SEPS
P: Jaume Bosch Barcelona Supercomputing Center, P: Lucas Morais UNICAMP, P: Tamer Dallou LG Electronics San Jose Lab
15:40 - 17:20
Session 4: SEPS at St Gallen
15:40
1h40m
Talk
SEPS Panel discussion: Engineering software performance for upcoming heterogeneous computing systems - through programming patterns, runtime systems and tools
SEPS
P: Yukinori Sato Tokyo Institute of Technology, P: Clemens Grelck University of Amsterdam, P: Keiji Kimura Waseda University, P: Beverly Sanders University of Florida

Call for Papers

Specific topics of interest include, but are not limited to:

  • Process models for parallel software development
  • Requirement engineering of parallel software
  • Design and build of parallel programs
  • Parallel design patterns
  • Structural and application patterns of parallel programs
  • Parallel software architectures
  • Modeling techniques for parallel software
  • Parallel programming models and paradigms
  • Profiling and program analysis
  • Dynamic and static analysis
  • Refactoring and reengineering for parallelism
  • Performance tuning and auto-tuning
  • Energy-efficient parallel computing
  • Testing and debugging of parallel applications
  • Tools and environments for parallel software development
  • Case studies and experience reports

The workshop welcomes the following two types of submissions:

  • Original, unpublished regular papers on current research (max. 10 pages)
  • Short papers (max. 4 pages, without references) including:
    • Industrial and practical experiences
    • Tool presentations/demonstrations
    • Early results & novel ideas without a comprehensive/extensive evaluation
    • Preliminary and exploratory work with unconventional approaches or wild and crazy ideas

We especially encourage early-stage work and tool papers on parallel systems, submitted as short papers, as these are explicitly targeted at interaction at the SEPS 2016 workshop.

Paper Submission: Papers submitted to SEPS 2016 must not have been published or be simultaneously submitted anywhere else. Contributions should be submitted electronically in PDF format via the submission site below. Authors must follow the ACM SIGPLAN Conference Format (10pt font, Times New Roman font family, numeric citation style) and use the appropriate SIGPLAN template. For further information, please check the format instructions for the SPLASH events:

https://2016.splashcon.org/track/splash-2016-oopsla#Instructions-for-Authors

All papers must be submitted via the submission site:

https://seps16.hotcrp.com

Publication: All accepted papers will be published as formal proceedings in the ACM Digital Library.

Registration: Authors of accepted papers are expected to register and present their papers at the Workshop.

Patterns for Parallel Programming: New and Improved!

Beverly Sanders, University of Florida

In the years since the 2004 publication of Patterns for Parallel Programming by Mattson, Sanders, and Massingill, the parallel computing landscape has changed significantly. Not only has the need to exploit concurrency and parallelism expanded beyond the realm of specialists, greatly increasing the number of software engineers developing parallel programs, but new hardware platforms, such as GPGPUs, have become widely available. The talk will discuss an updated version of the pattern language for parallel programming. The updated version maintains the goal of the original pattern language, which was to convey expert knowledge of parallel program design to non-experts, but has been updated and extended to reflect recent advances in hardware. It also reflects the authors' experiences teaching parallel programming using patterns, and collaborating with domain scientists to develop a large parallel software system for computational chemistry.

Task-Based Programming Everywhere from Supercomputing to Embedded Multicore Systems

Presenters:

Jaume Bosch, Barcelona Supercomputing Center (BSC), Spain

Lucas Morais, University of Campinas (UNICAMP), Brazil

Tamer Dallou, LG Electronics Lab, San Jose, USA

This tutorial presents task-based programming models such as OmpSs and OpenMP 4.0. OmpSs is a programming model developed at the Barcelona Supercomputing Center (BSC). Like OpenMP, it is based on compiler directives. It is the base platform on which BSC developed OpenMP tasking, support for dependences, priorities, and task reductions, and it also supports heterogeneous devices.

We will introduce the basic OmpSs concepts related to task-based parallelism for Symmetric Multi-Processor (SMP) cores and then quickly move to the support for heterogeneous devices. OmpSs lets developers leverage existing OpenCL and CUDA kernels without the burden of dealing with data copies to/from the devices: copies are triggered automatically by the OmpSs runtime, based on the task dependence annotations (a minimal sketch follows). Furthermore, we will talk about the backend engine that enables the whole tasking model: the runtime system, e.g. the Nanos++ runtime system for OmpSs and OpenMP, and the GOMP, IOMP, and MTSP runtime systems for OpenMP. These runtimes provide services to support task parallelism with synchronization based on data dependences, and they map tasks to threads in order to run them.
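As a rough sketch (not taken from the tutorial materials; it assumes the standard OmpSs in/out clauses and array-section syntax, and the helper vec_add and all sizes are illustrative), the following C fragment shows how task dependence annotations are written. The same annotations drive the automatic device copies when a task targets a CUDA or OpenCL kernel:

    /* Minimal OmpSs-style tasking sketch; compiled with BSC's Mercurium
     * source-to-source compiler. vec_add and the sizes are illustrative. */
    #include <stdio.h>

    void vec_add(const float *a, const float *b, float *c, int n) {
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

    int main(void) {
        enum { N = 1024 };
        static float a[N], b[N], c[N], d[N], e[N];
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; d[i] = 3 * i; }

        /* Each task declares the data it reads and writes; the runtime
         * builds the dependence graph and schedules tasks accordingly.
         * Unlike OpenMP, no explicit parallel region is needed in OmpSs. */
        #pragma omp task in(a[0;N], b[0;N]) out(c[0;N])
        vec_add(a, b, c, N);

        /* This task reads c, so the runtime orders it after the first task. */
        #pragma omp task in(c[0;N], d[0;N]) out(e[0;N])
        vec_add(c, d, e, N);

        #pragma omp taskwait  /* wait for all tasks before using the results */
        printf("e[1] = %f\n", e[1]);
        return 0;
    }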

MTSP (Multicore Task Scheduling Platform) is a lightweight runtime system developed from scratch at UNICAMP (Brazil) for OpenMP 4.0. It focuses on supporting OpenMP 4.0 tasks with the least possible overhead, thus enabling task-based programming on embedded multicore systems such as smartphones and tablets. MTSP provides a hierarchical API to manage the creation, synchronization, and retirement of tasks, which eases the integration of a hardware accelerator into its pipeline, lowering the task management overhead and improving application scalability. The sketch below shows the kind of OpenMP 4.0 tasking code such a runtime schedules.
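As a minimal illustration (plain OpenMP 4.0; no MTSP-specific API is assumed), the following C sketch uses depend clauses to order two sibling tasks:

    /* Minimal OpenMP 4.0 tasking sketch of the kind a lightweight runtime
     * such as MTSP schedules. Build with, e.g., gcc -fopenmp. */
    #include <stdio.h>

    int main(void) {
        int x = 0, y = 0;
        #pragma omp parallel
        #pragma omp single
        {
            #pragma omp task depend(out: x)
            x = 42;                    /* producer task */

            #pragma omp task depend(in: x) depend(out: y)
            y = x + 1;                 /* consumer: ordered after the producer */

            #pragma omp taskwait       /* join before reading the results */
            printf("y = %d\n", y);     /* prints y = 43 */
        }
        return 0;
    }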

In addition to the presentations at the tutorial, there will also be a demo of tools that ease the programming and debugging of task-based ("taskified") applications. To make the most of the tutorial, it is highly recommended to visit https://pm.bsc.es/ompss-downloads and download the OmpSs virtual machine before coming to the tutorial. The virtual machine comes with the OmpSs environment preinstalled, which helps participants quickly experiment with task-based programming.

Engineering software performance for upcoming heterogeneous computing systems – through programming patterns, runtime systems and tools


Panel organizer: Yukinori Sato (Tokyo Institute of Technology)
Panelists: Clemens Grelck (University of Amsterdam), Keiji Kimura (Waseda University), Beverly Sanders (University of Florida).

Aiming at sustainable performance improvements in upcoming computing systems composed of tightly coupled heterogeneous components, the use of advanced methods and techniques that support developers in creating parallel applications, or in parallelizing and re-engineering sequential legacy applications, is essential. Programming patterns for parallel systems are important interfaces that connect programmers with the underlying systems, and tools such as translators and compilers are key enablers for generating high-performance code from programs written with parallel patterns. Parallel software also needs runtime libraries or runtime support, as discussed in the task-based programming tutorial session of the SEPS 2016 workshop. In this panel session, we discuss these cross-layer techniques, advancing toward the heterogeneous computing systems that could appear in the near future.


Schedule

15:40-15:45 Introduction of panel from panel organizer

15:45-16:45 Position talks from:

  • Clemens Grelck. “Heterogeneous Computing without Heterogeneous Programming?!”
  • Keiji Kimura. “OSCAR compiler for heterogeneous computing”
  • Yukinori Sato. “Toward fully automated performance tuning system for deep hierarchical memory – from CPUs to FPGA accelerators”

16:45-17:20 Interactive discussion