Language workbenches are tools for software language engineering. They distinguish themselves from traditional compiler tools by providing integrated development environment (IDE) support for defining, implementing, testing, and maintaining languages. Moreover, languages built with a language workbench are themselves supported by IDE features (e.g., syntax highlighting, outlining, reference resolution, completion, etc.). As a result, language workbenches reach a new level of productivity and interactive editor support for building languages, compared to traditional batch-oriented compiler construction tools.
There are many language workbenches under active development, both in industry and academia. Notable examples include JetBrains MPS, MetaEdit+, Xtext, Rascal, Spoofax, SugarJ, Melange, Cedalion, Epsilon, EMFText, Intentional Software, Whole Platform, DrRacket, Eco, Ensō, and MontiCore.
The goal of this workshop is twofold. First, to exercise and assess the state of the art in language workbenches using challenge problems from the user perspective (i.e., that of the language designer). Second, to foster knowledge exchange and opportunities for collaboration between language workbench implementors and researchers.
Tue 1 Nov (times in the Amsterdam/Berlin/Bern/Rome/Stockholm/Vienna time zone)

10:30 - 12:10

- Solutions to Three Language Workbench Challenges using Wizards Workbench
- Whole Platform Solution to a Selection of LWC16 Benchmark Problems
- Cedalion's Response to the 2016 Language Workbench Challenge
- Language Workbench Challenge 2016: the JetBrains Meta Programming System
  Eugen Schindler (Océ), Klemens Schindler (Sioux Embedded Systems), Federico Tomassetti (Independent), Ana Maria Sutii (Eindhoven University of Technology)

13:30 - 15:10

- Bootstrapping, Default Formatting, and Skeleton Editing in the Spoofax Language Workbench
  Gabriël Konat, Luis Eduardo de Souza Amorim, Sebastian Erdweg, Eelco Visser (Delft University of Technology)
- The Rascal Approach to Code in Prose, Computed Properties, and Language Extension
- Languages the Racket Way: Submission to the 2016 Language Workbench Challenge
  Daniel Feltey (Northwestern University), Spencer P. Florence (Northwestern University), Tim Knutson (University of Utah), Vincent St-Amour (Northeastern University), Ryan Culpepper (Northeastern University), Matthew Flatt (University of Utah), Robert Bruce Findler (Northwestern University), Matthias Felleisen (Northeastern University)
Call for Solutions
Language workbenches, a term popularized by Martin Fowler in 2005, are tools that lower the development costs of implementing new languages and their associated tools (IDEs, debuggers etc.). As well as easing the development of traditional stand-alone languages, language workbenches also make multi-paradigm and language-oriented programming environments practical. The Language Workbench Challenge (LWC) aims to bring together language workbench users and implementers, to discuss the state-of-the-art in language workbenches and explore future directions.
Previous installments of the workshop were centered around a language for defining entities and instances, a modeling language for piping and instrumentation, and a DSL for questionnaires (QL). In this edition we use the benchmark challenges proposed in the following paper:
Sebastian Erdweg, Tijs van der Storm, Markus Völter, Laurence Tratt, Remi Bosman, William R. Cook, Albert Gerritsen, Angelo Hulshout, Steven Kelly, Alex Loh, Gabriël Konat, Pedro J. Molina, Martin Palatnik, Risto Pohjonen, Eugen Schindler, Klemens Schindler, Riccardo Solmi, Vlad Vergu, Eelco Visser, Kevin van der Vlist, Guido Wachsmuth, Jimi van der Woning. Evaluating and comparing language workbenches: Existing results and benchmarks for the future, Computer Languages, Systems & Structures, Volume 44, Part A, December 2015, Pages 24–47. [preprint]
LWC’16 solicits solutions to three benchmark problems proposed in Section 6.5 of this paper. The benchmark problems fall into the following categories:
- Notation: challenges dealing with the appearance of source code, including support for tabular notation, mathematical symbols, code in prose etc.
- Evolution and reuse: challenges related to modularity, composition, language versions and migration.
- Editing: challenges exercising how the language user interacts with code.
For instance, one of the proposed benchmark problems in the Notation category is to extend a language with support for mathematical notation such as fraction bars and sum symbols. Another benchmark problem is to modularly embed one language in another, for instance to allow the use of SQL query syntax in Java expressions. A summary of the benchmark problems is given below.
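To make the language-embedding problem concrete, a solution typically lets a host-language program contain guest-language syntax directly, with the workbench composing the two grammars and editor services. A hypothetical fragment of Java extended with SQL query syntax (illustrative only; the concrete notation and quotation mechanism depend on the workbench):

```
// Java extended with embedded SQL (hypothetical composed syntax)
List<Customer> adults =
    SELECT name, city FROM customers WHERE age >= 18;
```

In a workbench solution, the embedded query would be parsed, checked, and highlighted as SQL, while remaining a typed Java expression from the host language's point of view.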
The choice of object language (the language on which the challenge is carried out) is free, although the paper suggests MiniJava or a DSL for state machines. Many of the problems can also be implemented on top of QL, the questionnaire DSL used in the past two editions of LWC. It is not required to use the same object language for each of the three problems.
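For readers unfamiliar with QL, its programs define questionnaires as forms of labeled questions with typed answers, some of which may be computed. A minimal sketch, modeled after the questionnaire DSL from earlier LWC editions (exact syntax varies per implementation):

```
form taxOffice {
  "Did you sell a house in 2010?" hasSoldHouse: boolean
  if (hasSoldHouse) {
    "What was the selling price?"       sellingPrice: money
    "Private debts for the sold house:" privateDebt: money
    "Value residue:"                    valueResidue: money = sellingPrice - privateDebt
  }
}
```

Several of the benchmark problems (e.g., computed properties or tabular notation) can be exercised directly on such a form.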
Note that the goal of the workshop is to demonstrate, discuss, and foster improvements in tools, as well as to encourage collaboration and exchange of lessons learned among different teams developing different (kinds of) editors. To this end, we emphasize the implementation of the challenges, not writing about them. Nonetheless, we do require submission before the workshop: we solicit submissions in the form of short documents (in PDF format). You can either write a separate submission per addressed benchmark problem, or submit one paper that describes all addressed problems. In either case, each addressed problem should be described using the following structure:
- Assumptions: Are there any assumptions or prerequisites relevant to the implementation of the solution?
- Implementation: What are the important building blocks for defining the solution? What does it take to implement the solution to the problem?
- Variants: Are there any interesting and insightful variants of the implementation? What small change(s) to the challenge would make a big difference in the implementation strategy or effort?
- Usability: What is the resulting user experience? Is it convenient to use? Is it similar to other kinds of notations? Does it feel “foreign” to experienced users of the particular editor?
- Impact: Which artifacts have to be changed to make the solution work? Are changes required to (conceptually) unrelated artifacts? How modular is the solution?
- Composability: To what degree does the solution support composition with solutions to other benchmark problems, or with other instances of the same problem (e.g., same challenge problem, but different language feature)?
- Limitations: What are the limitations of this implementation?
- Uses and Examples: Are there examples of this problem in real-world systems? Where can the reader learn more?
- Effort (best-effort): How much effort has been spent to build the solution, assuming an experienced user of the technology?
- Other Comments: Anything that does not fit within the other categories.
- Artifact: a publicly accessible URL to the source code of the submission.
The paper cited above includes two example descriptions: one using JetBrains MPS, solving the “Metadata annotations” problem, and one using Rascal, describing a solution to the “Persistent user-defined formatting” problem. Submissions should furthermore use the ACM SIGPLAN Conference Format, 10 point font, with the Times New Roman font family and numeric citation style.
The PC will review the submissions for inclusion in the workshop program, based on interest for discussion, conformance to the challenges, and whether the submission is on topic (e.g., uses a language workbench). The PDFs of accepted submissions will be published on this website before the workshop.
This page summarizes the benchmark problems proposed in the paper:
Sebastian Erdweg, Tijs van der Storm, Markus Völter, Laurence Tratt, et al. Evaluating and comparing language workbenches: Existing results and benchmarks for the future, Computer Languages, Systems & Structures, Volume 44, Part A, December 2015, Pages 24–47. [preprint]
- Support mathematical symbols in addition to textual notation
- Support tabular notation in addition to textual notation
- Support diagrammatic notations in addition to textual notation (not in the paper)
- Generic metadata annotations: annotation of program elements without changing their core meaning
- Optional hiding: hide parts of the code, without losing the content and while retaining editability
- Alternative notations: multiple notations for the same language
- Computed properties: read-only annotations that are automatically derived from the main program
- Computed structures: structured, editable views
- Skeleton editing: guide the user with syntactic templates with editable holes
- Embedding code in prose: mix structured code with free text
- Embedding blackboxes: allow program elements to be opaque non-textual elements (e.g., images)
- Language extension: modularly extend a language with new syntactic constructs
- Language embedding: embed a separate language inside another
- Extension composition: combine independently developed extensions
- Beyond grammar restrictions: disallow constructs in certain scopes, without modeling this in the (abstract) syntax
- Syntax migration: support migrating programs when concrete syntax changes
- Structure migration: support migrating programs when abstract syntax changes
- Editing incomplete programs: support for syntactically malformed programs
- Referencing missing items: support referencing items that have not been defined (yet)
- Structure agnostic copy-paste: copy-paste works across syntax boundaries
- Restructuring: changing syntactic structure without typing the complete expression again
- Language demarcation: show how a combination of multiple languages in one program is disambiguated
- Delayed decisions: show when the syntactic category of an expression is determined (at the first character, the last, or somewhere in between)
- End-user defined formatting: show if and how the user can change the visual appearance of the program
- Specification of default formatting: support for pretty printing
- Formatting preservation: show how formatting is preserved when code is automatically restructured (e.g., by refactoring, migration, etc.)