
MPS User Guide for Language Designers


You are viewing documentation of MPS 3.2, which is not the most recently released version of MPS. Please refer to the documentation page to choose the latest MPS version.

 

Welcome to MPS. This User Guide will navigate you through the many concepts and usage patterns that MPS offers and will give you a hand whenever you need to know more details about any particular aspect of the system.
First, the Introduction section will offer a high-level overview of the basic notions and their roles. In the second section, named Using MPS, you'll get familiar with the interface through which you'll communicate with MPS. Although very small, there still are some differences between how you interact with MPS and how you typically use other common programming tools.

In the third section, called Defining Languages, we'll get to the meat of MPS. We'll show details on how to define the many aspects of your custom languages - their structure, editors, generators and type systems rules. The IDE integration section will then provide some additional context necessary to help you improve the IDE aspect of your languages and integrate them nicely into MPS.

The Platform languages section gives you details on all languages bundled with MPS, including the cornerstone language of MPS - the BaseLanguage. Whatever didn't fit the mentioned scheme was placed into the last Miscellaneous section.


You can also view the user guide as a PDF.

Tutorials and cookbooks

Don't forget to check out our tutorials and focused cookbooks, listed in the Tutorials and Cookbooks sections, to learn more about individual aspects of MPS.

Before you start

MPS glossary

Abstract Syntax Tree (AST)

a logical representation of code in memory (and on disk) in the shape of a tree (or a forest of trees) that describes hierarchies of nodes. These nodes have a notion of a parent-child relationship. Additionally, two nodes can be mutually connected with explicit references that cut across the hierarchy structure.

BaseLanguage

a projectional clone of Java 6. It follows the Java specification and is 1:1 compatible with Java 6. Additionally, MPS provides several handy extensions to BaseLanguage, such as dates, collections, closures and many others. Extensions that enable some of the JDK 7 and JDK 8 Java capabilities are also available.

Code generation

the process of transforming code from one model (AST) into another model. For example, code describing a set of business rules can be transformed into plain Java so that it can be compiled with javac and run as part of an enterprise application.
Code generation in MPS has two phases - first, a series of model-to-model transformations gradually reduces the concepts used in the AST of the program until a bottom-line set of base concepts is reached. Then a text-generating phase translates the AST into textual files.
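The two phases can be sketched in plain Java. This is a toy illustration only, not MPS's actual generator API; all names below (GenerationSketch, reduceToBaseConcepts, the string-based "model") are made up for the example:

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical sketch of the two generation phases: model-to-model
// reductions applied until the model stops changing, then a
// model-to-text step. None of these types exist in MPS.
public class GenerationSketch {
    // Phase 1: keep applying reduction rules until a fixpoint is reached,
    // i.e. only base concepts remain and no rule changes the model.
    static String reduceToBaseConcepts(String model, List<UnaryOperator<String>> rules) {
        String current = model;
        boolean changed = true;
        while (changed) {
            changed = false;
            for (UnaryOperator<String> rule : rules) {
                String next = rule.apply(current);
                if (!next.equals(current)) {
                    current = next;
                    changed = true;
                }
            }
        }
        return current;
    }

    // Phase 2: translate the fully reduced model into text.
    static String generateText(String baseModel) {
        return "// generated\n" + baseModel;
    }

    public static void main(String[] args) {
        // A toy rule: the high-level "businessRule" concept reduces to a
        // base "ifStatement" concept.
        List<UnaryOperator<String>> rules = List.of(
            m -> m.replace("businessRule", "ifStatement")
        );
        String reduced = reduceToBaseConcepts("businessRule(x > 0)", rules);
        System.out.println(generateText(reduced));
    }
}
```

Real MPS generators operate on typed AST nodes rather than strings, but the reduce-then-print shape is the same.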

DevKit

A package of related languages that have been grouped for user convenience.

Domain Specific Language (DSL)

a language dedicated to a particular problem domain, typically created with the aim of simplicity and greater expressivity compared to a general purpose language.

Language plugin

a packaged library (a zip file) containing all the elements required to use a language inside either IntelliJ IDEA or MPS.

Projectional editor

an editor that allows the user to edit the AST representation of code directly, while mimicking the behavior of a text editor to some extent. The user sees text on the screen and edits it; in reality, however, the text is only an illusion (projection) of an AST.

Module

The top-level organization element of an MPS project that typically groups several models together. It can have three basic types: Solution, Language and DevKit and may depend on other modules and models.

Model

A lower-level organizational element grouping individual concepts. It may depend on other models.

Runtime solution

A solution that is required by a language, sometimes also called a library. Runtime solutions may contain normal models as well as stubs for Java sources, classes or jar files external to MPS.

Structure

A language aspect defining all types (concepts) of AST nodes that can be used in the language together with their relationships.

Concept

A definition that describes the abstract structure of a syntax element. E.g. the IfStatement concept says that an if holds a boolean Expression and up to two StatementLists.

Constraints

A language aspect holding additional restrictions on concepts, their properties and relationships.

Behavior

Allows the language designer to define behavior of the language concepts.

Editor

Holds visualization definitions of individual language concepts. Since the way concepts are viewed and edited on the screen can be customized, the editors specify how the user will interact with the language.

Scope

The set of elements that are visible and applicable to a particular position within a program. Typically only a sub-set of all elements of a particular kind can be used at any given program location.

Typesystem

A set of rules that validate and infer types of concepts in a program. 

Actions

User-invoked commands that may perform changes to the code. Actions can be attached to keyboard shortcuts or menu items.

Intention actions

Context-sensitive actions offered to the language user through a small pop-up window triggered by the Alt + Enter key shortcut. These actions typically perform a relatively local refactoring of the code under the caret or of a selected block of code.

Surround With intention actions

Intentions applicable to a selected block of code that wrap the block by another concept. E.g. Surround with Try-Catch.

Refactoring

A potentially substantial automated change in code structure triggered by a user action.

Frequently Asked Questions (FAQ)

Check out the FAQ document to get some of your questions answered before you even ask them.

User guide for language designers

Basic notions

This chapter describes the basic MPS notions: nodes, concepts, and languages. These are key to a proper understanding of how MPS works. They only make sense when combined with one another, so we must talk about them all together. This section aims to give you the essence of each of these elements. For further details, you may consider checking out the sections devoted to nodes, concepts (the structure language), and languages (project structure).

Abstract Syntax Tree (AST)

MPS differentiates itself from many other language workbenches by avoiding the text form. Your programs are always represented by an AST. You edit the code as an AST, you save it as an AST, and you compile it as, well, an AST. This allows you to avoid defining a grammar and building a parser for your languages. Instead, you define your language in terms of types of AST nodes and rules for their mutual relationships. Almost everything you work with in the MPS editor is an AST node, belonging to an Abstract Syntax Tree (AST). In this documentation we use a shorter name, node, for AST node.

Node

Nodes form a tree. Each node has a parent node (except for root nodes), child nodes, properties, and references to other nodes.

The AST nodes are organized into models. Nodes that don't have a parent are called root nodes. These are the top-most elements of a language. For example, in BaseLanguage (MPS' counterpart of Java), the root nodes are classes, interfaces, and enums.

Concept

Nodes can be very different from one another. Each node stores a reference to its declaration, its concept. A concept sets a "type" of connected nodes. It defines the class of nodes and coins the structure of nodes in that class. It specifies which children, properties, and references an instance of a node can have. Concept declarations form an inheritance hierarchy. If one concept extends another, it inherits all children, properties, and references from its parent.
Since everything in MPS revolves around AST, concept declarations are AST-nodes themselves. In fact, they are instances of a particular concept, ConceptDeclaration.
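The node/concept relationship described above can be illustrated with a toy Java sketch. These classes are hypothetical, written only for this explanation; they are not MPS APIs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical illustration of nodes and concepts: a concept declares
// what its instance nodes may contain, and concepts form an inheritance
// hierarchy. Not an MPS API.
public class ConceptSketch {
    // A concept declares which children its instances may have, and may
    // extend a parent concept.
    record Concept(String name, Concept parent, List<String> allowedChildren) {
        boolean isSubConceptOf(Concept other) {
            for (Concept c = this; c != null; c = c.parent) {
                if (c == other) return true;
            }
            return false;
        }
    }

    // Every node stores a reference to its concept, its parent, its
    // children, and its properties.
    static class Node {
        final Concept concept;
        Node parent;
        final List<Node> children = new ArrayList<>();
        final Map<String, String> properties;

        Node(Concept concept, Map<String, String> properties) {
            this.concept = concept;
            this.properties = properties;
        }

        void addChild(Node child) {
            child.parent = this;
            children.add(child);
        }
    }

    public static void main(String[] args) {
        Concept statement = new Concept("Statement", null, List.of());
        Concept ifStatement = new Concept("IfStatement", statement,
                List.of("Expression", "StatementList"));
        Node node = new Node(ifStatement, Map.of());
        // An IfStatement node is also a Statement, via concept inheritance.
        System.out.println(node.concept.isSubConceptOf(statement));
    }
}
```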

Language

Finally we get the language definition. A language in MPS is a set of concepts with some additional information. The additional information includes details on editors, completion menu, intentions, typesystem, generator, etc. associated with the language. This information forms several language aspects.
Obviously, a language can extend another language. An extending language can use any concepts defined in the extended language as types for its children or references, and its concepts can inherit from any concept of the extended language. You see, languages in MPS form fully reusable components.


MPS Project Structure

Introduction

When designing languages and writing code, good structure helps you navigate around and combine the pieces together. MPS is similar to other IDEs in this regard.

Project

A project is the main organizational unit in MPS. Projects consist of one or more modules, which themselves consist of models. A model is the smallest unit for generation/compilation. We describe these concepts in detail right below.

Models

Here's a major difference that MPS brings along - programs are not in text form. Ever.
You might be used to the fact that any programming is done in text. You edit text. The text is then parsed by a parser to build an AST. Grammars are typically used to define parsers. The AST is then used as the core data structure to work with your program further, either by the compiler to generate runnable code or by an IDE to give you clever code assistance, refactorings and static code analysis.
Now, seeing that AST is such a useful, flexible and powerful data structure, how would it help if we could work with AST from the very beginning, avoiding text, grammar and parsers altogether? Well, this is exactly what MPS does.

To give your code some structure, programs in MPS are organized into models. Think of models as somewhat similar to compilation units in text-based languages. To give you an example, BaseLanguage, the bottom-line language in MPS, which builds on Java and extends it in many ways, uses models so that each model represents a Java package. Models typically consist of root nodes, which represent top-level declarations, and non-root nodes. For example, in BaseLanguage classes, interfaces, and enums are root nodes. (You can read more about nodes here.)

Models need to hold their meta information:

  • models they use (imported models)
  • languages (and also devkits) they are written in (in used languages section)
  • a few extra params, such as the model file and special generator parameters

This meta information can be altered in Model Properties of the model's pop-up menu or using Alt + Enter when positioned on the model.

Modules

Models themselves are the most fine-grained grouping elements. Modules organize models into higher-level entities. A module typically consists of several models accompanied by meta information describing the module's properties and dependencies. MPS distinguishes several types of modules: solutions, languages, devkits, and generators.
We'll now talk about the meta-information structure as well as the individual module types in detail.

Module meta information

Now that we have things organized into modules, we need a way to combine the modules together. Relationships between modules are described through the meta information they hold. The possible relationships among modules can be categorized into several groups:

  • Dependency - if one module depends on another, models inside the former can import models from the latter. The reexport property of the dependency relationship indicates whether the dependency is transitive or not. If module A depends on module B with the reexport property set to true, every other module that declares a dependency on A automatically depends on B as well.
  • Extended language dependency - if language L extends language M, then every concept from M can be used inside L as a target of a role or an extended concept. Also, all the aspects from language M are available for use and extension in the corresponding aspects of language L.
  • Generation Target dependency - a relation between two languages (L2 and L1), when one needs to specify that Generator of L2 generates into L1 and thus needs L1's runtime dependencies.
  • Used language - if module A uses language L, then models inside A can use language L.
  • Used devkit - if module A uses devkit D, then models inside A can use devkit D.
  • Generator output path - generator output path is a folder where all newly generated files will be placed. This is the place you can look for the stuff MPS generates for you.

Now we'll look at the different types of modules you can find in MPS.

Solutions

A solution is the simplest possible kind of module in MPS. It is just a set of models unified under a common name.

Languages

Language is a module that is more complex than a solution and represents a reusable language. It consists of several models, each defining a certain aspect of the language: structure, editor, actions, typesystem, etc.
Languages can extend other languages. An extending language can then use all concepts from the extended language - derive its own concepts, use inherited concepts as targets for references and also place inherited concepts directly as children inside its own concepts.

Languages frequently have runtime dependencies on third-party libraries or solutions. You may, for example, create a language wrapping any Java library, such as Hibernate or Swt. Your language will then give the users a better and smoother alternative to the standard Java API that these libraries come with.
Now, for your language to work, you need to include the wrapped library with your language. You do it either through a runtime classpath or through a runtime solution. A runtime classpath is suitable for typical scenarios, such as Java-written libraries, while runtime solutions should be preferred for more complex scenarios.

  • Runtime classpath - makes library classes available as stubs to the language's generators
  • Runtime solutions - models of these solutions are visible to all models inside the generator

Language aspects

Language aspects describe different facets of a language:

  • structure - describes the nodes and structure of the language AST. This is the only mandatory aspect of any language.
  • editor - describes how a language will be presented and edited in the editor
  • actions - describes the completion menu customizations specific to a language, i.e. what happens when you type Control + Space
  • constraints - describes the constraints on AST: where a node is applicable, which property and reference are allowed, etc.
  • behavior - describes the behavioral aspect of AST, i.e. AST methods
  • typesystem - describes the rules for calculating types in a language
  • intentions - describes intentions (context dependent actions available when light bulb pops up or when the user types Alt + Enter)
  • plugin - allows a language to integrate into MPS IDE
  • data flow - describes the intended flow of data in code. It allows you to find unreachable statements, uninitialized reads etc.

You can read more about each aspect in the corresponding section of this guide.


To learn all about setting dependencies between modules and models, check out the Getting the dependencies right page.

Generators

Generators define possible transformations of a language into something else, typically into another language. Generators may depend on other generators. Since the order in which generators are applied to code is important, ordering constraints can be set on generators. You can read more about generation in the corresponding section.

DevKits

DevKits have been created to make your life easier. If you have a large group of interconnected languages, you will certainly appreciate a way to treat them as a single unit. For example, you may want to import them without listing all of the individual languages. DevKits make this possible. When building a DevKit, you simply list the languages to include.
As expected, DevKits can extend other DevKits. The extending DevKit will then carry along all the inherited languages as if they were its own ones.

Projects

This one is easy. A project simply wraps modules that you need to group together and work with them as a unit. You can open the Properties of a project (Alt + Enter on the Project node in the Project View panel) and add or remove modules that should be included in the project. You can also create new modules from the project nodes' context pop-up menu.

Java compilation

MPS was born from Java and is frequently used in a Java environment. Since MPS models are often generated into Java files, a way to compile Java is needed before we can run our programs. There are generally two options:

  • Compiling in MPS (recommended)
  • Compiling in IntelliJ IDEA (requires IntelliJ IDEA)

When you compile your classes in MPS, you have to set the module's source path. The source files will be compiled each time the module gets generated, or whenever you invoke compilation manually by the make or rebuild actions.

MPS Java compatibility

Configuration

The Java Compiler configuration tab in the preferences window only holds a single setting - “Project bytecode version”.

This setting defines the bytecode version of all Java classes compiled by MPS. These classes include classes generated from language’s aspects, classes of the runtime solutions, classes of the sandbox solutions, etc.

By default, the bytecode version is set to “JDK Default”. This means that the version of the compiled classes will be equal to the version of Java, which MPS is running under. E.g. if you run MPS under JDK 1.8 and “JDK Default” is selected, the bytecode version will be 1.8.

The other options for project bytecode version are 1.6, 1.7 and 1.8.

Note that if you compile languages to the 1.8 bytecode version and then try to run MPS with a JDK earlier than 1.8, those languages won't be loaded.

Build scripts

Also, don’t forget to set java compliance level in the build scripts of your project. It should be the same as the project bytecode version.

Using java classes compiled with JDK 1.8

In the MPS modules pool you can find the JDK solution, which holds the classes of the running Java. So when you start MPS under JDK 1.8, the latest Java Platform classes will be available in the JDK solution.

You can also use any external Java classes, compiled under JDK 1.8 by adding them as Java stubs.

Since version 1.8, Java interfaces can contain default and static methods. At present, MPS does not support creating them in your BaseLanguage code, but you can call static and default methods defined in external Java classes, e.g. classes of the Java Platform.

Static interface method call

In the example, we sort a list with Comparator.reverseOrder(). Comparator is an interface from java.util, and reverseOrder() is its static method, which was introduced in Java 1.8.
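A plain-Java equivalent of the example looks like this (the identifiers below are ours, chosen for illustration; the original screenshot showed BaseLanguage code):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Calling the static interface method Comparator.reverseOrder(),
// available since Java 8, to sort a list in descending order.
public class ReverseSortExample {
    static List<Integer> sortDescending(List<Integer> input) {
        return input.stream()
                .sorted(Comparator.reverseOrder())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(sortDescending(Arrays.asList(3, 1, 2))); // [3, 2, 1]
    }
}
```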

Default interface methods

Java 8 also introduced default methods. These are methods implemented directly in the interface. You can read about default methods here: http://docs.oracle.com/javase/tutorial/java/IandI/defaultmethods.html

These methods can be called just like usual instance methods. Sometimes, however, you need to call a default method directly on one of the interfaces that your class is implementing, e.g. in the case of multiple inheritance, when a class implements several interfaces, each containing a default method with the same signature.

In that case foo() can be called explicitly on one of the interfaces via the SuperInterfaceMethodCall construct, located in the jetbrains.mps.baseLanguage.jdk8 language.
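In plain Java, the situation SuperInterfaceMethodCall addresses looks like this (the interface and class names are made up for the example):

```java
// The "diamond" case: a class implements two interfaces that each
// define a default method with the same signature. The class must
// override foo() and may delegate explicitly via A.super.foo() -- the
// plain-Java counterpart of MPS's SuperInterfaceMethodCall.
public class DiamondExample {
    interface A { default String foo() { return "A.foo"; } }
    interface B { default String foo() { return "B.foo"; } }

    static class C implements A, B {
        // Mandatory override: the compiler rejects C without it, because
        // the two inherited defaults conflict.
        @Override
        public String foo() {
            return A.super.foo(); // explicit choice between the two defaults
        }
    }

    public static void main(String[] args) {
        System.out.println(new C().foo()); // A.foo
    }
}
```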

Using Java platform API

Java 8 introduced lambda expressions, of which you can learn more here: http://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html

MPS 3.2 doesn’t yet have a language that would be generated into lambda-expressions. Instead, it has its own closure language, which is compatible with the new Java API!

Here’s the example of an interaction with the new JDK 8 Collections API:

The forEach() method is a new default method of java.lang.Iterable. It takes a Consumer as a parameter. Consumer is a functional interface, as it only has one abstract method. In Java 8 it would be possible to pass a lambda expression to forEach(). In MPS you pass an MPS closure instead. During generation, the closure knows the type of the parameter taken by forEach(), so it is generated into exactly the right instance of Consumer.
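A plain-Java equivalent of what such a closure generates into might look like the following (the method and variable names here are illustrative, not taken from the original screenshot):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

// forEach() is a default method on java.lang.Iterable that takes a
// Consumer. An MPS closure is generated into an instance of this
// functional interface, much like the lambda below.
public class ForEachExample {
    static List<String> shout(List<String> names) {
        List<String> result = new ArrayList<>();
        // The lambda is the Consumer<String> passed to forEach().
        Consumer<String> collector = name -> result.add(name.toUpperCase());
        names.forEach(collector);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(shout(Arrays.asList("ant", "bee", "cat"))); // [ANT, BEE, CAT]
    }
}
```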

Commanding the editor

When coding in MPS you will notice some differences between how you normally type code in text editors and how code is edited in MPS. In MPS you manipulate the AST directly as you type your code through the projectional editor. The editor gives you an illusion of editing text, which, however, has its limits. So you are slightly limited in where you can place your cursor and what you can type at that position. We believe the projectional editor brings huge benefits in many areas. It requires some getting used to, but once you learn a few tricks you'll leave your plain-text-editor colleagues far behind in productivity and code quality. In general, only the items suggested by the completion menu can be entered. MPS can always decide which elements are allowed and which are disallowed at a certain position. Once the code you type turns red, you know you're off track.

Code completion

Code completion (Control + Space) will be your good friend, allowing you to quickly complete the statements you type. Remember that CamelHumps are supported, so you only need to type the capital characters of long names and MPS will guess the rest for you.
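The idea behind CamelHumps matching can be sketched as follows. This is our assumption about how such matching typically works, not MPS's actual completion algorithm:

```java
// A rough sketch of CamelHumps matching (an illustration, not MPS's
// real implementation): the typed capitals must match the capital
// letters of the candidate identifier, in order from the start.
public class CamelHumpsSketch {
    static boolean matches(String typed, String candidate) {
        StringBuilder humps = new StringBuilder();
        for (char c : candidate.toCharArray()) {
            if (Character.isUpperCase(c)) humps.append(c);
        }
        return humps.toString().startsWith(typed.toUpperCase());
    }

    public static void main(String[] args) {
        System.out.println(matches("NPE", "NullPointerException"));             // true
        System.out.println(matches("AIOOBE", "ArrayIndexOutOfBoundsException")); // true
        System.out.println(matches("XYZ", "NullPointerException"));              // false
    }
}
```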

Intentions

Frequently you can enhance or alter your code by means of predefined semi-automated procedures called Intentions. By pressing Alt + Enter MPS will show you a pop-up dialog with options applicable to your code at the current position. Some intentions are only applicable to a selected code region, e.g. to wrap code inside a try-catch block. These are called Surround With intentions and once you select the desired block of code, press Control + Alt + T to show the list of applicable intentions.

Navigation

Whenever you need to see the definition of an element you are looking at, press Control + B or Control + mouse click to open up the element definition in the editor. To quickly navigate around editable positions on the screen use the Tab/Shift + Tab key. Enter will typically insert a new element right after your current position and let you immediately edit it. The Insert key will do the same for a position right before your current position.
When a piece of code is underlined in either red or yellow, indicating an error or a warning respectively, you can display a pop-up with the error message by pressing Control + F1.

Selection

The Control + Up/Down key combination allows you to increase/decrease block selection. It ensures you always select valid subtrees of the AST. The usual Shift + Arrow keys way of text-like selection is also possible.

Investigation

To quickly find out the type of an element, press Control + Shift + T. Alt + F12 will open the selected element in the Node Explorer allowing you to investigate the appropriate part of the AST. Alt + F7 will enable you to search for usages of a selected element. To quickly visualize the inheritance hierarchy of an element, use Control + H.

Inspector window

The Inspector window opens after you press Alt + 2. Some code and properties (e.g. editor styles, macros etc.) are shown and edited inside the Inspector window so it is advisable to keep the window ready.

Icon

We've prepared an introductory screen-cast showing you the basics of using the MPS editor.

Most useful key shortcuts

Windows / Linux            | MacOS              | Action
---------------------------|--------------------|-------------------------------------------
Control + Space            | Cmd + Space        | Code completion
Control + B                | Cmd + B            | Go To Definition
Alt + Enter                | Alt + Enter        | Intentions
Tab                        | Tab                | Move to the next cell
Shift + Tab                | Shift + Tab        | Move to the previous cell
Control + F1               | N/A                | Display the error message at the current position
Control + Up/Down          | Cmd + Up/Down      | Expand/Shrink the code selection
Shift + Arrow keys         | Shift + Arrow keys | Select regions
Control + F9               | Cmd + F9           | Compile project
Shift + F10                | Shift + F10        | Run the current configuration
Control + Shift + T        | Cmd + Shift + T    | Show the type of the expression under the caret
Alt + F12                  | Alt + F12          | Open the expression under the caret in the Node Explorer to inspect the appropriate node and its AST surroundings
Control + H                | Ctrl + H           | Show the structure (inheritance hierarchy)
Alt + Insert               | Ctrl + N           | Generate...
Ctrl + Alt + T             | Cmd + Alt + T      | Surround with...
Ctrl + O                   | Cmd + O            | Override methods
Ctrl + I                   | Cmd + I            | Implement methods
Ctrl + /                   | Cmd + /            | Comment/uncomment line
Ctrl + Shift + /           | Cmd + Shift + /    | Comment/uncomment with block comment
Ctrl + X / Shift + Delete  | Cmd + X            | Cut current line or selected block to buffer
Ctrl + C / Ctrl + Insert   | Cmd + C            | Copy current line or selected block to buffer
Ctrl + V / Shift + Insert  | Cmd + V            | Paste from buffer
Ctrl + Z                   | Cmd + Z            | Undo
Ctrl + Shift + Z           | Cmd + Shift + Z    | Redo
Ctrl + D                   | Cmd + D            | Duplicate current line or selected block

A complete listing

Please refer to the Default Keymap Reference page for a complete listing of MPS keyboard shortcuts (Also available from the MPS Help menu).

IDE configuration

Many aspects of MPS can be configured through the Settings dialog (Control + Alt + S / Cmd + ,)

To quickly navigate to a particular configuration item, you may use the convenient text search box in the upper left corner. Since the focus is set to the text field by default, you can just start typing. Notice that the search dives deep into the individual screens:

Plugins

MPS is modular and contains several plugins. If you open the MPS Plugin Manager you’ll see a list of plugins available in your installation.


Additionally installed languages are also listed here.

If some plugins are not necessary for your current work, they can simply be switched off, which may improve the overall performance of the platform.

Getting dependencies right

Motivation

Modules and models are typically interconnected by a network of dependencies of various types. Assuming you have understood the basic principles and categorisations of modules and models, as described at the MPS project structure page, we can now dive deeper and learn all the details.

Getting dependencies right in MPS is a frequent cause of frustration among inexperienced users as well as seasoned veterans. This page aims to solve the problem once and for all. You should be able to find all the relevant information categorised into sections by the various module and dependency types.

Solution

Solutions represent programs written in one or more languages. They typically serve two purposes:

  1. Runtime solutions - represent "library" code that contains reusable pieces of code that can be leveraged both by other solutions and by languages
  2. Plain solutions - represent user code that is supposed to be run and fulfil some task required by the user

We'll start with the properties valid for all solutions and then cover the specifics of runtime solutions.

Common

Properties

  • Name - name of the solution
  • File path - path to the module file
  • Generator output path - points to the folder, where generated sources should be placed
  • Left-side panel - contains model roots, each of which may hold one or more models.
  • Right-side panel - displays the directory structure under the model root currently selected in the left-side panel. Folders and jar files can be selected and marked/unmarked as being models of the current model root.

Model root types

Solutions contain model roots, which in turn contain models. Each model root typically points to a folder and the contained models lie in one or more sub-folders of that folder. Depending on the type of contained models, the model roots are of different kinds:

  • default - the standard MPS model root type holding MPS models
  • java_classes - a set of directories or jar files containing Java class files
  • javasource_stubs - a set of directories or jar files containing Java sources 

    Icon

    When included in the project as models, Java classes in directories or jar files become first-class citizens of the MPS model pool and become available for direct references from other models that import these stub models. A second option to include classes and jars in MPS is to use the Java tab and define them as libraries. In that case the classes will be loaded, but not directly referenceable from MPS code. This is useful for libraries that are needed by the stub models.

Dependencies

The dependencies of a solution are other solutions and languages, the models of which will be visible from within this solution.

The Export flag then specifies whether the dependency should be transitively added as a dependency to all modules that depend on the current solution. For example, if module A depends on B with export on and C depends on A, then C depends on B.
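The export semantics can be sketched as a small graph traversal. This is a toy model written for illustration, not MPS code; the Dep record and visibleFrom method are inventions of this sketch:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy illustration of the Export flag: direct dependencies are always
// visible, and exported dependencies are followed transitively.
public class ExportFlagSketch {
    record Dep(String target, boolean export) {}

    static Set<String> visibleFrom(String module, Map<String, Set<Dep>> deps) {
        Set<String> visible = new HashSet<>();
        for (Dep d : deps.getOrDefault(module, Set.of())) {
            addWithExports(d.target(), deps, visible);
        }
        return visible;
    }

    // Add a module, then recurse only along exported edges.
    static void addWithExports(String module, Map<String, Set<Dep>> deps, Set<String> visible) {
        if (!visible.add(module)) return;
        for (Dep d : deps.getOrDefault(module, Set.of())) {
            if (d.export()) addWithExports(d.target(), deps, visible);
        }
    }

    public static void main(String[] args) {
        // A depends on B with export on; C depends on A.
        Map<String, Set<Dep>> deps = Map.of(
            "A", Set.of(new Dep("B", true)),
            "C", Set.of(new Dep("A", false))
        );
        System.out.println(visibleFrom("C", deps).contains("B")); // true
    }
}
```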

Used Languages

The languages as well as devkits that the solution's models may use are specified among used languages. These languages then become available for use in the solution's models.

Java

The Java tab contains several options:

  • Solution kind - different kinds of solutions are treated slightly differently by MPS and have access to different MPS internals
    • None - default, used for user code, which does not need any special class-loading strategy
    • Other - used by typical libraries of reusable code that are being leveraged by other languages and solutions
    • Core plugin - used by code that ties into the MPS IDE core and needs to have its class-loading managed accordingly
    • Editor plugin - used by code that ties into the MPS editor and needs to have its class-loading managed in sync with the rest of the editor
  • Compile in MPS - indicates whether the generated artifacts should be compiled with the Java compiler directly in MPS as part of the generation process
  • Source Paths - Java sources that should be made available to other Java code in the project
  • Libraries - Java classes and jars that are required at run-time by the Java code in one or more models of the solution

Facets

  • Idea Plugin - checked, if the solution hooks into the IDE functionality
  • Java - checked, if the solution relies on Java in some way. Keep this checked in most cases.
  • tests - checked, if the solution contains test models

Solution models

Solutions contain one or more models. Models can be mutually nested and form hierarchies, just like, for example, Java packages can. The properties dialog hides a few configuration options that can be tweaked:

Dependencies

Models from the current or imported modules can be listed here, so that their elements become accessible in code of this model.

Used languages

The languages used by this model must be listed here.

Advanced

A few extra options are listed on the Advanced tab:

  • Do not generate - exclude this model from code generation, perhaps because it cannot be meaningfully generated
  • File path - location of the model file
  • Languages engaged on generation - lists languages needed for proper generation of the model, if the languages are not directly or indirectly associated with any of the used languages and thus the generator fails finding these languages automatically

Virtual packages

Nodes in models can be logically organised into hierarchies of virtual packages. Use the Set Virtual Package option from the node's context pop-up menu and specify a name, possibly separating nested virtual folder names with the dot symbol.

Adding external Java classes and jars to a project - runtime solutions

Runtime solutions represent libraries of reusable code in MPS. They may contain models holding MPS code as well as models referring to external Java sources, classes or jar files. To properly include external Java code in a project, you need to follow a few steps:

  1. Create a new Solution
  2. In the Solution properties dialog (Alt + Enter) specify the Java code, such that:
    1. Common tab - click on Add Model Root, select javaclasses for classes or jars, select javasource_stubs for Java sources and navigate to your lib folder.
    2. Select the folder(s) or jar(s) listed in the right-side panel of the properties dialog and click on the blue "Models" button.
    3. On the Java tab, also add all the jars or class root folders to the Libraries part of the window; otherwise solutions using the library classes will not compile. When using javasource_stubs, add the sources to the Source paths part of the Java tab instead.
  3. A new folder named stubs should appear in your solution
  4. Now after you import the solution into another module (solution, language, generator) the classes will become available in that module's models

Language

A Language module represents a language definition and consists of several models, each of which represents a distinct aspect of the language. Languages also contain a single Generator module. The properties dialog for languages is in many ways similar to the one for Solutions. Below we only mention the differences:

Common

A language typically has a single model root that points to a directory, in which all the models for the distinct aspects are located.

Dependencies

The dependencies of a language are other solutions and languages, the models of which will be visible from within this language. The Export flag then specifies whether the dependency should be transitively added as a dependency to all modules that depend on the current language.

A dependency on a language offers three Scope options:

  • Default - only makes the models of the other language/solution available for references
  • Extends - allows the language to define concepts extending concepts from the other language
  • Generation Target - specifies that the current language is generated into the other language, thus placing a generator ordering constraint that the other language must only be generated after the current one has finished generating

Used Languages

This is the same as for solutions.

Runtime

  • Runtime Solutions - lists solutions of reusable code that the language requires. See the "Adding external Java classes and jars to a project - runtime solutions" section above for details on how to create such a solution.
  • Accessory models - lists accessory models that the language needs. Nodes contained in these accessory models are implicitly available on the Java classpath and in the Dependencies of any model using this language.

Java

This is the same as for solutions, except for the two missing options that are not applicable to languages.

Facets

This is the same as for solutions.

Icon

When using a runtime solution in a language, you need to set both the dependency in the Dependencies tab and the Runtime Solutions on the Runtime tab.

Language models/aspects

Dependencies / Used Languages / Advanced

These settings are the same and have the same meaning as the settings on any other models, as described in the Solution section.

Generator

The generator module settings are very similar to those of other module types:

Common

This is the same as for languages.

Dependencies

This is the same as for solutions. Additionally generator modules may depend on other generator modules and specify Scope:

  • Default - only makes the models of the other language/solution available for references
  • Extends - the current generator will be able to extend the generator elements of the extended generator
  • Design - the target generator only needs to be referred to from a priority rule of this generator

Used Languages

This is the same as for languages.

Generators priorities

This tab allows you to define priority rules for generators, in order to properly order the generators in the generation process. Additionally, three options are configurable through the check-boxes at the bottom of the dialog:

  • Generate Templates - indicates whether the generator templates should be generated and compiled into Java, or whether they should instead be interpreted by the generator during generation
  • Reflective queries - indicates whether the generated queries will be invoked through Java reflection or not (check out the Generator documentation for details)
  • IOperationContext parameter - indicates whether the generator makes use of the operationContext parameter passed into the queries. The parameter will be removed in the future and generators should gradually stop using it.

Java

This is the same as for languages.

Facets

This is the same as for languages.

Generator models

This is the same as for solutions.

Useful keyboard shortcuts

Whenever positioned on a model or a node in the left-hand-side Project Tool Window or when editing in the editor, you can invoke quick actions with the keyboard that will add dependencies or used languages into the current model as well as its containing solution.

  • Control + L - Add a used language
  • Control + M - Add a dependency
  • Control/Cmd + R - Add a dependency that contains a root concept of a given name
  • Control/Cmd + Shift + A - brings up a generic action-selection dialog, in which you can select the desired action applicable in the current context

Resolving difficulties, understanding reported errors

This document gives you quick step-by-step advice on what to do and where to look to get over a problem with MPS. It is an organized collection of patterns and how-tos based on our own experience.

Check out the type of the node

Knowing the type of the element you are looking at may give you very useful insight. All you need to do is press Control + Shift + T and MPS will pop up a dialog window with the type of the element under the caret.

Check the concept of the node under the caret

The Control + Shift + S/Cmd + Shift + S keyboard shortcut will get you to the definition of the concept of the node you are currently looking at or that you have selected.

Check the editor of the node under the caret

The Control + Shift + E/Cmd + Shift + E keyboard shortcut will get you to the definition of the editor for the concept you are currently looking at or that you have selected. This may be particularly useful if you want to familiarize yourself with the concrete syntax of a concept and all the options it gives you.

Type-system Trace

When you run into problems with types, the Type-system Trace tool will give you an insight into how the types are being calculated and so could help you discover the root of the issues. Check out the details in Type-system Trace documentation page and in Type-system Debugging.

Investigate the structure

When you are learning a new language, the structure aspect of the language is most often the best place to start investigating. The shortcuts for easy navigation around concepts and searching for usages will certainly come in handy.

You should definitely familiarize yourself with Control + B / Cmd + B (Go To Definition), Control + N / Cmd + N (Go To Concept), Control + Shift + S / Cmd + Shift + S (Go To Concept Declaration) and Alt + F7 (Find Usages) to make your investigation smooth and efficient.

Before you learn the shortcuts by heart, you can find most of them in the Navigate menu:

Importing elements

You are trying to use an element or a language feature, but MPS doesn't recognize the language construct or doesn't offer the element in the code-completion dialog, so you cannot update your code the way you want. This is a symptom of a typical beginner's problem - missing imports and used languages.

  • In order to use language constructs from a language, the language has to be listed among the Used Languages.
  • To be able to enter elements from a model, the model must be imported first.
  • Also, for your language to enhance the capabilities of another language, that language must be listed among the Extended Languages.


To quickly and conveniently add models or languages to these lists, you may use a couple of handy keyboard shortcuts in addition to the Properties dialog:

Save transient models

If you are getting errors from the generator, you may consider turning the Save Transient Models functionality on. This will preserve all intermediate stages of code generation for your inspection.

Why the heck do I get this error/warning?

You see that MPS is unhappy about some piece of code and you want to find out why. Use Control + Alt + Click / Cmd + Alt + Click to open up a dialog with the details.

The Go To Rule button will get you to the rule that triggers the error/warning.

Where to find language plugins

MPS can be easily extended with additional languages. Languages come packaged as ordinary zip files, which you unzip into the MPS plugin directory and which MPS will load upon restart.

The most convenient way to install language plugins is through the Plugin Manager, which is available in the Settings dialog (Control + Alt + S / Cmd + ,).

 

You can either install a zip file you've received previously (the Install plugin from disk... option) or you may click the Browse repositories button and pick the desired plugin from the list of plugins that have been uploaded to the MPS plugin repository.

Version Control

VCS Add-ons

When you first open MPS with version control, or add a VCS mapping to an existing project, MPS offers to adjust some global settings and install so-called VCS Add-ons (they can also be installed from the main menu: Version Control → Install MPS VCS Add-ons).

What are VCS Add-ons

VCS Add-ons are special hooks, or merge drivers, for Subversion and Git, which override the merging mechanism for special types of files. In the case of MPS, these add-ons determine merging for model files (*.mps) and generated model caches (dependencies, generated and trace.info files, if you store them under version control). Every time you invoke a version control procedure that involves merging file modifications (like merging branches or Git rebasing), these hooks are invoked. For models, the merge driver reads their XML content and tries to merge changes in high-level, "model" terms (instead of merging lines of the XML file, which may lead to invalid XML markup). Sometimes models cannot be merged automatically. In that case, the file stays in the "conflicting" state, and it can be merged in the MPS UI.

In some cases, id conflicts may happen during the work of the merge driver - situations when a model has more than one node with the same id after all non-conflicting changes have been applied. In this situation no automatic merging is performed, because it may lead to problems with references to nodes that are hard to find. Instead, you should look through the merge result yourself and decide whether it is okay.

For model caches, the merge driver works in a different way (if you store them under version control, of course). Generator dependencies (generated files) and debugger trace caches (trace.info files) are simply cleared after merging, so you will need to regenerate the corresponding models. Java dependencies (dependencies files), which are used during compilation, are merged using a simple union algorithm, which makes compilation possible after merging.
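As an illustration, the union behavior for the dependencies caches can be sketched as follows (a Python sketch of the general idea only, not MPS's actual merge driver; the jar names are made up):

```python
def union_merge(ours, theirs):
    """Union merge for a line-based dependency list: keep every
    entry that appears on either side, drop duplicates, and keep
    the order stable (our side first)."""
    merged = list(ours)
    for line in theirs:
        if line not in merged:
            merged.append(line)
    return merged

# Two branches added different compile-time dependencies:
ours = ["util.jar", "model-core.jar"]
theirs = ["util.jar", "editor-api.jar"]
print(union_merge(ours, theirs))
# ['util.jar', 'model-core.jar', 'editor-api.jar']
```

Because the merged list is a superset of both sides, compilation can proceed after the merge without manual conflict resolution.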

Different VCS Add-ons

Look at the dialog:

There are several types of VCS Add-ons which can be installed. It is recommended to install them all.

  • Git global autocrlf setting. Forces Git to store text files in the repository with standard Unix line endings (LF), while text files in the working copy use local system-dependent line endings. Necessary when the developers of your project use different operating systems with different line endings (Windows and Unix).
  • Git global merge driver setting. Registers the merge driver for MPS models in the global Git settings, so that it can be referred to from the .gitattributes files of Git repositories (see below). It only maps the merge driver name (in this case, "mps") to the path of the actual merge driver command.
  • Git file attributes for repositories. Enables the MPS merge driver for concrete file types (*.mps, trace.info, etc.) in the Git repositories used in the opened MPS project. This creates or modifies the .gitattributes file in the root of the Git repository. This file should usually be stored under version control, so that these settings are shared among the developers of the project.
  • Subversion custom diff3 cmd. Registers the MPS merger in the Subversion config file. MPS may use its own config folder for Subversion, so there are two different checkboxes. One updates the global config used when you invoke Subversion procedures from the command line or tools like TortoiseSVN. The other modifies the config only for the MPS Subversion plugin. The directory for the Subversion config used in MPS can be defined in the Subversion settings.

Using MPS Debugger

Using MPS Debugger

MPS Debugger provides an API to create debuggers for custom languages. The Java Debugger plugin, included in the MPS distribution, allows users to debug programs written in languages that are ultimately generated into BaseLanguage/Java. We use this plugin below to illustrate the MPS Debugger features, all of which are available to other languages via the API.

Execution

We start with a description of how to debug a Java application. If a user has a class with a main method, a Java Application run configuration should be used to run/debug such a program.

Creating an instance of run configuration

A Java Application or an MPS instance run configuration can be created for a class with a main method or an MPS project, respectively. Go to the Run -> Edit Configurations menu and press the "+" button as shown in the picture below:

A menu appears, choose Java Application from it and a new Java Application configuration will be created:

If you select Java Application, you will be able to specify the Java class to run, plus a few optional configuration parameters:

A name should be given to each run configuration, and a main node, i.e. a class with a main method, should be specified. VM and program parameters may also be specified in a configuration. See the Run Configuration chapter to learn more about run configurations.

Debugging language definitions

Select MPS instance if you want to debug MPS language definition code. MPS will start a new instance of MPS with a project that uses your language (it could also be the current project), and you will set breakpoints and debug in your original MPS instance.

In the Debug configuration dialog you need to indicate which MPS project to open in the new MPS instance - either the current one, by checking the Open current project check-box, or any project you specify in the field below. You can also leave both empty and create/open a project from the menu once the new MPS instance starts.

Debugging a configuration

To debug a run configuration, select it from the configurations menu and press the Debug button. The debugger starts and the Debugger tool window appears below.

The tool window has two tabs: one for the console view and the other for the debugger view. The console shows the application's output.

Breakpoints

This section describes breakpoint usage.

Setting a breakpoint

A breakpoint can be set on a statement, field or exception. To set or remove a breakpoint, press Ctrl-F8 on a node in the editor or click on the left margin near a node. A breakpoint is marked with a red bubble on the left margin, a pink line inside the editor and a red frame around the breakpoint's node. Exception breakpoints are created from the breakpoints dialog.

When the program is started, breakpoints on which the debugger cannot stop are specially highlighted.

When the debugger stops at a breakpoint, the current breakpoint line is marked blue, and the actual node for the breakpoint is decorated with a black frame around it.

If the cell for the node on which the program is stopped is inside a table, the table cell is highlighted instead of a line.

Viewing breakpoints via the breakpoints dialog

All breakpoints set in the project can be viewed via the Breakpoints dialog.

Java breakpoints features include:

  • field watchpoints;
  • exception breakpoints;
  • suspend policy for java breakpoints;
  • relevant breakpoint data (like a thrown exception or a changed field value) is displayed in the variables tree.

Examining a state of a program at a breakpoint

When at a breakpoint, the Debugger tab can be used to examine the state of the program. Three panels are available:

  • a "Frames" panel with a list of stack frames for a thread, selected using a combo box;
  • a "Variables" tree which shows watchables (variables, parameters, fields and static fields) visible in the selected stack frame;
  • a "Watches" panel with a list of watches and their values.

In the Java debugger, the "Copy Value" action is available from the context menu of the variables tree.

Runtime

Controlling execution of a program

  • To step over, use Run -> Step Over or F8.
  • To step out from a method, use Run -> Step Out or Shift-F8.
  • To step into a method call, use Run -> Step Into or F7.
  • To resume program execution, use Resume button or Run -> Resume or F9.
  • To pause a program manually, use the Pause button or Run -> Pause. When paused manually, i.e. not at a breakpoint, information about variables is unavailable.

There is a toolbar in the Debugger window from which the stepping actions are available.

Expressions

Expression evaluation

The MPS Java debugger allows the user to evaluate expressions during debugging, using information from the program stack. This is called low-level evaluation, because the user is only allowed to use pure Java variables/fields/etc. from the generated code, not entities from the high-level source code.

To activate the evaluation mode, the program should be stopped at a breakpoint. Press Alt-F8 and a dialog appears.
The dialog contains an MPS editor with a statement list inside it. You can write code there that uses variables and fields from the stack frame. To evaluate this code, press the Evaluate button. The evaluated value will appear in a tree view below.

To evaluate a piece of code from the editor, select it and press Alt+F8, and the code will be copied to the evaluation window.

Watches

The Watches API and low-level watches for the Java debugger are implemented. "Low-level" means that the user can write expressions using the variables available on the stack. To edit a watch, a so-called "context" (used variables, static context type and this type) must be specified. If a stack frame is available at the moment, the context is filled in automatically.

Watches can be viewed in the "Watches" tree in the "Debug" tool window. Watches can be created, edited and removed via the context menu or toolbar buttons.

Console

Console is a tool which allows developers to conveniently run DSL code directly in the MPS environment.

The Console tool window allows line-by-line execution of any DSL construction in real time. After a command is written in the console, it is generated by the MPS generator and executed in the IDE's context. This way the code in the console can access and modify the program's AST, display project statistics, execute IDE actions, launch code generation or initiate class reloading.

For discoverability reasons, most of the console-specific DSL constructs start with the '#' symbol.

In general, there are three kinds of commands:

  1. BaseLanguage statement lists. These commands can contain any BaseLanguage constructions. If some construction or class is not available in completion, it may not have been imported. Missing imports can easily be added as in the normal editor, using actions 'Add model import', 'Add model import by root', 'Add language import', or by the corresponding keyboard shortcuts.

  2. BaseLanguage expressions. The expression is evaluated and, if its type is not void, printed in the console as text, AST, or an interactive response.
  3. Non-BaseLanguage commands. These are simple non-customizable commands, such as #reloadClasses.

There is also a set of languages containing the console commands and BaseLanguage constructions, which allow developers to easily create custom refactorings, complex usage searches, etc.

  1. BaseLanguage constructions for iterating over IDE objects (#nodes, #references, #models, #modules). These expressions are lazy sequences, including all nodes/references/models/modules in the project or in a custom scope.
    Icon

    To inspect read-only modules and models, such as imported libraries and used languages, you need to add the r/o+ parameter to the desired search scope.

  2. BaseLanguage constructions for usage searching (#usages, #instances). These expressions are also sequences, which can be iterated over, but they are not lazy. When these expressions are evaluated, the find usages mechanism is called, so they run faster than iterating over all nodes or references and then filtering by concept/target.
  3. Commands for querying data from the IDE (#stat, #showBrokenRefs, #showGenPlan)
  4. Commands for interacting with the IDE (#reloadClasses, #make, #clean, #removeGenSources)
  5. BaseLanguage constructions for showing results to user
    • The #show expression opens the usages view and shows there the nodes, models or modules from the sequence passed to the expression as a parameter.
    • The #print expression writes its result to the console. There are also specialized versions of this construction:
      • #printText converts the result to a string and adds it to the response.
      • #printNode is applicable only to nodes. This construction adds the whole node and its subnodes to the response. Since the response is also part of the AST, the node is displayed with its normal editor.
      • #printNodeRef only makes sense for nodes located in the project models. This construction prints to the console an interactive response, which can be clicked on in order to open the node in the editor.
      • #printSeq is applicable to collections of nodes, models or modules. This command prints to the console an interactive response, which describes the size of the collection. When the response is clicked on, the usages view opens to show the nodes or the models.
      • The #print expression is a universal construction, which tries to choose the most appropriate way of displaying its argument, according to its type and value
  6. The refactor operation. This operation applies a function to a sequence of nodes (like the forEach operation), but first it opens the found nodes in the usages view, where the user can review the nodes before the refactoring is started, manually select the nodes to include in or exclude from the refactoring, and then apply or cancel it.

Additionally, the console languages can be extended by the user, if needed.

In order to point to a concrete node in the project from the console, the node can be copied from the editor and pasted into the console. The node will be pasted as a special construction, called nodeRef, which is a BaseLanguage expression of type node<> whose value is the pasted node. If there is a need to paste the piece of code as is, the 'Paste Original Node' action is available from the context menu.
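Put together, a console session built from these pieces might look roughly like the following. This is only a textual sketch with a hypothetical concept name; consult the console's code completion for the actual syntax of each command:

```
// count all nodes in the project
#print #nodes.size;

// find all instances of a concept and open them in the usages view
#show #instances<ClassConcept>;

// reload classes after changing language definitions
#reloadClasses
```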

Structure

Since MPS frees you from defining a grammar for your intended languages, you obviously need different means to specify the structure of your languages. This is where the Structure Language comes in handy. It gives you all the means to define the language structure. As we discussed earlier, when coding in MPS you're effectively building the AST directly, so the structure of your language needs to specify the elements, the bricks, you use to build the AST.

The bricks are called Concepts and the Structure Language exposes concepts and concept interfaces as well as their members: properties, references, children, concept(-wide) properties, and concept(-wide) links.

Concepts and Concept Interfaces

Now let's look at those in more detail. A Concept defines the structure of a concept instance, a node of the future AST representing code written using your language. The Concept says which properties the nodes might contain, which nodes may be referred to, and what children nodes are allowed (for more information about nodes see the Basic notions section). Concepts also define concept-wide members - concept properties and concept links, which are shared among all nodes of the particular Concept. You may think of them as "static" members.

Apart from Concepts, there are also Concept Interfaces. Concept interfaces represent independent traits, which can be inherited and implemented by many different concepts. You typically use them to bring orthogonal concepts together in a single concept. For example, if your Concept instance has a name by which it can be identified, you can implement the INamedConcept interface in your Concept and you get the name property plus associated behavior and constraints added to your Concept.

Concepts inheritance

Just like in OO programming, a Concept can extend another Concept, and implement many Concept Interfaces. A Concept Interface can extend multiple other Concept Interfaces. This system is similar to Java classes, where a class can have only one super-class but many implemented interfaces, and where interfaces may extend many other interfaces.

If a concept extends another concept or implements a concept interface, it transitively inherits all members (i.e. if A has member m, A is extended by B and B is extended by C, then C also has the member m).

Concept interfaces with special meaning

There are several concept interfaces in MPS that have a special meaning or behavior when implemented by your concepts. Here's a list of the most useful ones:

Concept Interface

Meaning

IDeprecatable

Used if instances of your concept can be deprecated. Its isDeprecated behavior method indicates whether or not the node is deprecated. The editor sets a strikeout style for reference cells if isDeprecated of the target returns true.

INamedConcept

Used if instances of your concept have an identifying name. This name appears in the code completion list.

IType

Used to mark all concepts representing types.

IWrapper

Deleting a node whose immediate parent is an instance of IWrapper deletes the parent node as well.

Concept members

Properties

A property is a value stored inside a concept instance. Each property must have a type, which for properties is limited to: primitives, such as boolean, string and integer; enumerations, which can hold a value from a predefined set; and constrained data types (strings constrained by a regular expression).

References

Holding scalar values alone would not get us far. To increase the expressiveness of our languages, nodes are allowed to store references to other nodes. Each reference has a name, a type, and a cardinality. The type restricts the allowed type of a reference target. The cardinality defines how many references of this kind a node can have. References can only have two types of cardinalities: 1:0..1 and 1:1.

Smart references

A node containing a single reference of 1:1 cardinality is called a smart reference. These are somewhat special references. Provided the language author has not specified an alias for them, they do their best to hide from the language user and be as transparent as possible. MPS treats the node as if it were the actual reference itself, which simplifies code editing and code-completion. For example, default completion items are created whenever the completion menu is required: for each possible reference target, a menu item is created with matching text equal to the presentation of the target node.

Children

To compose nodes into trees, we need to allow children to be hooked up to them. Each child declaration holds a target concept, a role and a cardinality. The target concept specifies the type of the children. The role specifies the name for this group of children. Finally, the cardinality specifies how many children from this group can be contained in a single node. There are four allowed types of cardinality: 1:1, 1:0..1, 1:0..n, and 1:1..n.
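Putting properties, references and children together, a concept declaration rendered as text looks roughly like this. Note that this is only an approximation of what the projectional structure editor shows, and the concept names are made up for illustration:

```
concept LocalVariable extends BaseConcept
                      implements INamedConcept

  properties:
    isFinal : boolean                  // primitive-typed property

  references:
    type : Type[1]                     // reference, cardinality 1

  children:
    initializer : Expression[0..1]     // child, cardinality 0..1
```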

Specialized references and children

Sometimes, when one concept extends another, we not only want to inherit all of its members, but also want to override some of its traits. This is possible with children and references specialization. When you specialize a child or reference, you narrow its target type. For example, if you have concept A which extends B, and have a reference r in concept C with target type B, you might narrow the type of reference r in C's subconcepts. It works the same way for concept's children.

Alias

The alias, referred to from code as conceptAlias, optionally specifies a string that will be recognized by MPS as a representation of the Concept. The alias will appear in completion boxes and MPS will instantiate the Concept whenever the alias or a part of it is typed by the user.

Constrained Data Types

A Constrained Data Type allows you to define string-based types constrained by a regular expression. MPS will then make sure all property values with this constrained data type hold values that match the constraint.

Enumeration Data Types

Enumeration Data Types allow you to use properties that hold values from pre-defined sets.


Each enumeration data type member has a value and a presentation. Optionally an identifier can be specified explicitly.

Presentation vs. Value vs. Identifier

  • Presentation - this string value will be used to represent the enum members in the UI (completion menu, editor)
  • Value - this value, the type of which is set by the member type property, will represent the enum members in code
  • Identifier - this optional value will be used as the name of the generated Java enum. This value is typically derived from either the presentation or the value, since it is meant to be transparent to the language users and has no meaning in the language. It only needs to be specified when the id-deriving process fails to generate unique valid identifiers.
  • Name - when accessing an enum data type's members from code, name refers to either the presentation, the value or the identifier, depending on which member identifier option is active
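Rendered as text, an enumeration data type declaration might look roughly like this (an approximation only, with made-up member names; the actual projection in the editor differs):

```
enum datatype Month
  member type : integer

  presentation   value   identifier
  "January"      1                    // identifier derived automatically
  "February"     2       FEB          // explicit identifier
```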

Deriving identifiers automatically

When deriving identifiers from either presentations or values, MPS will make its best effort to eliminate characters that are not allowed in Java identifiers. If the derived identifiers for multiple enum data type members end up being identical, an error is reported. Explicit identifiers should be specified in such cases.
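The derivation process can be pictured with a small sketch (illustrative Python, not MPS's actual algorithm): strip characters that are illegal in Java identifiers and report a collision when two members collapse to the same identifier.

```python
import re

def derive_identifier(presentation):
    """Strip characters that are not legal in Java identifiers."""
    ident = re.sub(r"[^A-Za-z0-9_$]", "", presentation)
    if ident and ident[0].isdigit():      # identifiers must not start with a digit
        ident = "_" + ident
    return ident

def derive_all(presentations):
    """Derive one identifier per member; fail on collisions."""
    idents = [derive_identifier(p) for p in presentations]
    if len(set(idents)) != len(idents):
        raise ValueError("derived identifiers are not unique; "
                         "specify explicit identifiers")
    return idents

print(derive_all(["read-only", "read/write"]))  # ['readonly', 'readwrite']
# derive_all(["a+b", "a-b"]) would raise: both members derive to 'ab'
```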

Programmatic access

To access enumeration data types and their members programmatically, use the enum operations defined in the jetbrains.mps.lang.smodel language.

Icon

Note that the name in memberForName and february.name above means the actual member identifier, whether it is set to be custom, derived from the presentation or derived from the internal value.

Checking a value of a property against an enum data type value can be done with the is operation. To print out the presentation of the property value, you need to obtain the corresponding enum member first: 


 

Attributes

Attributes, sometimes called Annotations, allow language designers to express orthogonal language constructs and apply them to existing languages without the need to modify them. For example, the generator templates allow for special generator marks, such as LOOP, ->$ and $[], to be embedded within the target language:


The target language (BaseLanguage in our example here) does not need to know anything about the MPS generator, yet the generator macros can be added to the abstract model (AST) and edited in the editor. Similarly, anti-quotations and Patterns may get attributed to BaseLanguage concepts.

MPS provides three types of attributes:

  • LinkAttribute - to annotate references
  • NodeAttribute - to annotate individual nodes
  • PropertyAttribute - to annotate properties

By extending these you can introduce your own additions to existing languages. For a good example of attributes in use, check out the Commenting out cookbook.


Constraints

The Structure Language may sometimes be insufficient to express advanced constraints on the language structure. The Constraints aspect gives you a way to define such additional constraints.

Can be child/parent/ancestor/root

These are the first knobs to turn when defining constraints for a concept. They determine whether instances of the concept can be hooked up as children (or parents, ancestors) of other nodes, or serve as root nodes in models. You specify them as boolean-returning closures, which MPS invokes each time it evaluates the allowed position for a node in the AST.

 

Languages to import

You will most likely need at least two languages imported in the constraints aspect in order to be able to define constraints - the j.m.baselanguage and j.m.lang.smodel languages. 

can be child

Return false if an instance of the concept is not allowed to be a child of specific nodes.

  • operationContext - IOperationContext
  • scope - the current context (IScope)
  • parentNode - the parent node we are checking
  • childConcept - the concept of the child node (can be a sub-concept of this concept)
  • link - the LinkDeclaration of the child node (the child role can be taken from there)
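Putting the parameters together, a can be child constraint is a boolean-returning closure. A sketch (the concrete syntax is approximate; the rule in the body is hypothetical and only allows instances of the concept inside a method):

can be child
  (operationContext, scope, parentNode, link, childConcept)->boolean {
    parentNode.ancestor<concept = InstanceMethodDeclaration, +>.isNotNull;
  }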

can be parent

Return false if an instance of the concept is not allowed to be a parent of a specific child node (in a given role).

  • operationContext - IOperationContext
  • scope - the current context (IScope)
  • node - the parent node we are checking (an instance of this concept)
  • childConcept - the concept of the child node we are checking
  • link - the LinkDeclaration of the child node

can be ancestor

Return false if an instance of the concept is not allowed to be an ancestor of specific nodes.

  • operationContext - IOperationContext
  • scope - the current context (IScope)
  • node - the ancestor node we are checking (an instance of this concept)
  • childConcept - the concept of the descendant node

can be root

This constraint is available only for rootable concepts (instance can be root is set to true in the concept's structure description). Return false if an instance of the concept cannot be a root in the given model.

  • operationContext - IOperationContext
  • scope - the current context (IScope)
  • model - the model of the root

Property constraints

Technically speaking, "pure" concept properties are not properties in the original meaning of the word, but merely public fields. Property constraints allow you to turn them into real properties: using these constraints, the behavior of a concept's properties can be customized. Each property constraint is applied to a single specified property.

property - the property to which this constraint is applied.

get - this method is executed to get property value every time property is accessed.

  • node - the node to get the property from
  • scope - the current context (IScope)

set - this method is executed to set property value on every write. The property value is guaranteed to be valid.

  • node - the node to set the property on
  • propertyValue - the new property value
  • scope - the current context (IScope)

is valid - this method should determine whether the value of the property is valid. This method is executed every time before changing the value, and if it returns false, the set() method is not executed.

  • node - the node whose property is being checked
  • propertyValue - the value to be checked
  • scope - the current context (IScope)
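Putting it together, a property constraint combines these methods. A sketch (the concrete syntax is approximate and the validation rule is hypothetical - it merely rejects empty values):

property {name}
  is valid:(node, propertyValue, scope)->boolean {
    propertyValue != null && propertyValue.length > 0;
  }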

Referent constraints

Constraints of this type add behavior to a concept's references and make them behave more like properties.

referent set handler - if specified, this method is executed every time the reference is set.

  • referenceNode - the node that contains the reference
  • oldReferentNode - the old value of the reference
  • newReferentNode - the new value of the reference
  • scope - the current context (IScope) - an interface to an object that exposes the models, languages and devkits visible from the code


scope - defines the set of nodes this link can point to. The method returns a Scope instance. Please refer to the Scopes documentation for more information on scoping. There are two types of scope referent constraints:

  • inherited
  • reference

While inherited scope simply declares the target concept, the reference scope provides a function that calculates the scope on the fly from the parameters.

  • exists - false when the reference is being created, true when it is being edited
  • referenceNode - (deprecated) the node that contains the actual link. It can be null when a new node is being created for a concept with a smart reference. In that situation the smart reference is used to determine what type of node to create in the context of enclosingNode, so the search scope method is called with a null referenceNode
  • contextNode - the node with the reference, or the closest non-null context node
  • containingLink - (deprecated) the LinkDeclaration describing the parent-child relationship between enclosingNode and referenceNode
  • linkTarget - (deprecated) the concept that this link can refer to. Usually it is the concept of the reference, so it is known statically. If we specialize a reference in a sub-concept and do not define a search scope for the specialized reference, the linkTarget parameter can be used to determine which reference specialization is required
  • enclosingNode - (deprecated) the parent of the node that contains the actual link; null for root nodes. referenceNode and enclosingNode cannot both be null at the same time
  • model - the model that contains the node with the link. This is included for convenience, since both referenceNode and enclosingNode keep the model too
  • position - the target index in contextRole
  • contextRole - the target role in contextNode

If no scope is set for the reference, the default scope from the referenced concept is used. If the default scope is not set either, the "global" scope is used: all instances of the referent concept from all imported models.
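A reference scope of the second kind is a function over the parameters listed above that computes a Scope on the fly. As a sketch (the body is hypothetical pseudocode - how you actually build the Scope instance depends on the Scopes API described in the Scopes documentation):

scope:
  (exists, contextNode, model, position, contextRole, ...)->Scope {
    // hypothetical: collect the allowed targets starting from contextNode
    // and wrap them in a Scope instance
    ...
  }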


validator - each reference is checked against its search scope, and if, after changes in the model, a reference ends up pointing out of the search scope, MPS marks the reference with an error message. Sometimes it is not efficient to build the whole search scope just to check whether a reference is in scope - the search scope can be big, or it may be much easier to check whether a given node is in scope than to calculate which nodes are in scope. You can provide a quick reference-check procedure here to speed up reference validation in such situations.

  • model, scope, referenceNode, enclosingNode, linkTarget - the context of reference usage, with the same meaning as in the search scope method. The main difference: referenceNode cannot be null here, because the validator is not used during node creation
  • checkedNode - the node to be validated (referenceNode has a reference to checkedNode of type linkTarget)

If an ISearchScope is returned from the search scope method, its isInScope(SNode) method will be used for validation; override this method with your validation routine.
Note that it is not possible to define a validation routine without also defining a search scope.


presentation - here you specify how the reference will look in the editor and in the completion list. Sometimes it is convenient to present a reference differently depending on context. For example, in Java all references to an instance field f should be shown as this.f if the field is shadowed by a local variable declaration with the same name. By default, if no presentation is set, the name of the referenced node is used as its presentation (provided it is an INamedConcept).

  • model, scope, referenceNode, enclosingNode, linkTarget - the context of reference usage, with the same meaning as in the search scope function
  • parameterNode - the node to be presented (referenceNode has a reference to parameterNode of type linkTarget)
  • visible - true when presenting an existing node, false for a new node (to be created after selection in the completion menu)
  • smartReference - true when the node is presented in a smart reference
  • inEditor - true when presenting in the editor, false for the completion menu

ISearchScope (deprecated)

A low-level interface that can be implemented to support search scopes. We recommend subclassing the AbstractSearchScope abstract class (which implements ISearchScope) instead of implementing the ISearchScope interface directly. The only abstract method in the AbstractSearchScope class is
@NotNull List<SNode> getNodes(Condition<SNode> condition) - returns the list of nodes in the current search scope satisfying the condition; the same as the search scope, but with the additional condition.
Other useful methods to override:
boolean isInScope(SNode node) - the same function as in the validator method.
IReferenceInfoResolver getReferenceInfoResolver(SNode, AbstractConceptDeclaration);

Default scope

Suppose we have a link pointing to an instance of concept C and no scope defined for this link in the referent constraints. When you edit this link, all instances of concept C from all imported models are visible by default. If you want to restrict the set of visible instances for all links to concept C, you can set a default scope for the concept. As in a referent constraint, you can set the search scope, validator and presentation methods. All the parameters are the same.

Please refer to the Scopes documentation for more information on scoping.


Behavior

During syntax tree manipulation, common operations are often extracted to utility methods in order to simplify the task and reuse functionality. It is possible to extract such utilities into static methods or create node wrappers holding the utility code in virtual methods. However, in MPS a better solution is available: the behavior language aspect. It makes it possible to create virtual and non-virtual instance methods, static methods, and concept instance constructors on nodes.

Concept instance methods

A concept instance method is a method that can be invoked on instances of the specified concept. Such methods can be either virtual or non-virtual. While virtual methods can be overridden in extending concepts, non-virtual ones cannot. A virtual concept method can also be declared abstract, forcing the inheritors to provide an implementation.

Concept instance methods can be implemented both in concept declarations and in concept interfaces. This may lead to method resolution ambiguities. When MPS needs to decide which virtual method to invoke in the inheritance hierarchy, the following algorithm is applied:

  • If the current concept implements a matching method, invoke it. Return the computed value.
  • Invoke the algorithm recursively for all implemented concept interfaces in the order of their definition in the implements section. The first found interface implementing the method is used. In case of success return the computed value.
  • Invoke the algorithm recursively for an extended concept, if there is one. In case of success return the computed value.
  • Return failure.
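The steps above can be summarized in pseudocode:

resolve(concept, method):
  if concept itself implements method: invoke it, return the value      // step 1
  for each implemented concept interface, in declaration order:         // step 2
    if resolve(interface, method) succeeds: return the value
  if concept extends another concept:                                   // step 3
    if resolve(extended concept, method) succeeds: return the value
  return failure                                                        // step 4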

Concept constructors

When a concept instance is created, it is often useful to initialize some properties/references/children to default values. This is what concept constructors can be used for. The code inside the concept constructor is invoked on each instantiation of a new node of the particular concept.
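For instance, a constructor that pre-fills a child with a default value might look roughly like this (a sketch; the visibility child and the PublicVisibility concept are illustrative, borrowed from BaseLanguage):

constructor {
  this.visibility = new node<PublicVisibility>();
}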


The node's constructor is invoked before the node gets attached to the model. It is therefore pointless to inspect the node's parent, ancestors, children or descendants in the constructor - these calls will always evaluate to null. To initialize nodes with values that depend on their context within the model, define NodeFactories (Editor Actions) instead.

Concept static methods

Some utility methods do not belong to concept instances and so should not be created as instance methods. For concept-wide functionality, MPS provides static concept methods. See also Constraints


SModel Language

The purpose of SModel language is to query and modify MPS models. It allows you to investigate nodes, attributes, properties, links and many other essential qualities of your models. The language is needed to encode several different aspects of your languages - actions, refactorings, generator, to name the most prominent ones. You typically use the jetbrains.mps.lang.smodel language in combination with BaseLanguage.

Treatment of null values

The SModel language treats null values in a very safe manner. In OO languages such as Java or C#, it is pretty common to have many checks for null values, in the form of expr == null and expr != null statements scattered across the code. These are necessary to prevent null pointer exceptions, but at the same time they increase code clutter and often make the code harder to read. To alleviate this problem, MPS treats null values liberally: if you ask a null node for a property, you get back a null value; if you ask a null node for its children list, you get an empty list, and so on. This should make your life as a language designer easier.
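For example (relying only on the null-safe behavior described above; IfStatement's condition child is used for illustration):

node<IfStatement> broken = null;
broken.condition;                        // yields null instead of throwing
broken.descendants<concept = Statement>; // yields an empty collection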

Types

SModel language has the following types:

  • node<ConceptType> - corresponds to an AST node (e.g. node<IfStatement> myIf = ...)
  • nlist<ConceptType> - corresponds to a list of AST nodes (e.g. nlist<Statement> body = ...)
  • model - corresponds to an instance of the MPS model
  • search scope - corresponds to a search scope of a node's reference, i.e. the set of allowed targets for the reference
  • reference - corresponds to an AST node that represents a reference instance
  • concept<Concept> - corresponds to an org.jetbrains.mps.openapi.language.SConcept instance that represents a concept (e.g. concept<IfStatement> = concept/IfStatement/)
  • conceptNode<Concept> - (deprecated) corresponds to an AST node that represents a concept (e.g. conceptNode<IfStatement> = conceptNode/IfStatement/)
  • enummember<Enum Data Type> - corresponds to an AST node that represents an enumeration member (e.g. enummember<FocusPolicy> focus = ...)

Most of the SModel language operations are applicable to all of these types.

Operation parameters

A lot of the operations in the SModel language accept parameters. The parameters can be specified once you open the parameter list by entering < at the end of an operation. E.g. myNode.ancestors<concept = IfStatement, concept = ForStatement>.


MPS allows you to down-cast from smodel concepts to the underlying Java API (Open API), should you need more power when manipulating the model. Check out the Open API documentation for details.

Queries

Features access

The SModel language can be used to access the following features:

  • children
  • properties
  • references
  • concept properties
  • concept references

To access them, the following syntax is used:

If the feature is a property, then the type of whole expression is the property's type. If the feature is a reference or a child of 0..1 or 1 cardinality, then the type of this expression is node<LinkTarget>, where LinkTarget is the target concept in the reference or child declaration. If the feature is a child of 0..n cardinality, then the type of this expression is nlist<LinkTarget>.
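For example (the condition link of BaseLanguage's IfStatement and the name and member features of a class definition are used for illustration):

classDef.name;     // a property - the type of the expression is string
myIf.condition;    // a child of cardinality 1 - the type is node<Expression>
classDef.member;   // a child of cardinality 0..n - the type is an nlist of the target concept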

You can use so-called implicit select to access features of the child nodes. For example, the following query:

will be automatically transformed by MPS to something like:

resulting in a plain collection of all non-null model elements accessible through the specified chain of link declarations.
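For instance (using the classDef node from the examples below; the expansion shown in the comments is approximate):

classDef.member.name;
// roughly equivalent to: for each node in classDef.member, take its name,
// and collect the non-null results into a plain collection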

Null checks

Since nulls are treated liberally in MPS, we need a way to check for null values. The isNull and isNotNull operations are our friends here.

IsInstanceOf check and type casts

Often, we need to check whether a node is an instance of a particular concept. We can't use Java's instanceof operator, since it only understands Java objects, not MPS nodes. To perform this type of check, the following syntax should be used:

Also, there's the isExactly operation, which checks whether a node's concept is exactly the one specified by a user.

Once we've checked a node's type against a concept, we usually want to cast an expression to a concept instance and access some of this concept's features. To do so, the following syntax should be used:


Another way to cast node to particular concept instance is by using as cast expression:

The difference between the regular cast (using a colon) and the as cast lies in the way they handle a left-side expression that cannot be safely cast to the specified concept instance: the regular cast will throw a NullPointerException in this case, while the as cast will return null.

Combine this with the null-safe dot operator in the smodel language and you get a very convenient way to navigate around the model:
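A combined example (a sketch; n is an arbitrary node<>, and IfStatement's condition child is used for illustration):

if (n.isInstanceOf(IfStatement)) {
  // regular cast - fails loudly if n is not an IfStatement
  node<IfStatement> myIf = n : IfStatement;
}
// as cast plus the null-safe dot - yields null when n is not an IfStatement
(n as IfStatement).condition;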


Intentions are available to easily migrate from one type of cast expression to the other:

Parent

In order to find a node's parent, the parent operation is available on every node.

Children

The children operation can be used to access all direct child nodes of the current node. The operation has an optional linkQualifier parameter. With this parameter, the result of the children<linkQualifier> operation is equivalent to the node.linkQualifier operation call, and so it will return only the children belonging to the linkQualifier group/role. E.g. classDef.children<annotation, member>

Sibling queries

When you manipulate the AST, you will often want to access a node's siblings (that is, nodes with the same role and parent as the node under consideration). For this task we have the following operations:

  • next-sibling/prev-sibling - returns next/previous sibling of a node. If there is no such sibling, null is returned.
  • next-siblings/prev-siblings - returns nlist of next/previous siblings of a node. These operations have an optional parameter that specifies whether to include the current node.
  • siblings - returns nlist of all siblings of a node. These operations have an optional parameter that specifies whether to include the current node.

Ancestors

During model manipulation, it's common to find all ancestors (parent, parent of a parent, parent of a parent of a parent, etc) of a specified node. For such cases we have two operations:

  • ancestor - returns a single ancestor of the node
  • ancestors - returns all ancestors of the node

Both of them accept the following parameters to narrow down the result:

  • a concept type constraint: concept=Concept, concept in [ConceptList]
  • a flag indicating whether to include the current node: +

E.g. myNode.ancestors<concept = InstanceMethodDeclaration, +>

Descendants

It's also useful to find all descendants (direct children, children of children etc) of a specified node. We have the descendants operation for such purposes. It has the following parameters:

  • concept type constraint: concept=Concept, concept in [ConceptList]
  • a flag indicating whether to include current node: +

E.g. myNode.descendants<concept = InstanceMethodDeclaration>

Containing root and model

To access the top-most ancestor of a specified node, you can use the containing root operation. The containing model is available as the result of the model operation.

For example,

  • node<> containingRoot = myNode.containing root
  • model owningModel = myNode.model

Model queries

Often we want to find all nodes in a model which satisfy a particular condition. We have several operations that are applicable to expressions of model type:

  • roots(Concept) - returns all roots in a model, which are instances of the specified Concept
  • nodes(Concept) - returns all nodes in a model, which are instances of the specified Concept

E.g. model.roots(<all>) or model.nodes(IfStatement)

Search scope queries

In some situations, we want to find out, which references can be set on a specified node. For such cases we have the search scope operation. It can be invoked with the following syntax:

The Concept literal

Often we want to have a reference to a specified concept. For this task we have the concept literal. It has the following syntax:

E.g. concept<IfStatement> concept = concept/IfStatement/

Concept operation

If you want to find the concept of a specified node, you can call the concept operation on the node.

E.g. concept<IfStatement> concept = myNode.concept

Migrating away from deprecated types

The conceptNode<> type as well as the conceptNode operation have been deprecated. The asConcept operation will convert a conceptNode<> to a concept<>. The asNode operation, on the other hand, will do the opposite conversion and will return a node<AbstractConceptDeclaration> for a concept<>.


The conceptNode<> type was called concept<> in MPS 3.1. The conceptNode operation was called concept in MPS 3.1.

Concept hierarchy queries

We can query the super- and sub-concepts of an expression of the concept type. The following operations are at your disposal:

  • super-concepts/all - returns all super-concepts of the specified concept. There is an option to include/exclude the current concept - super-concepts/all<+>
  • super-concepts/direct - returns all direct super-concepts of the specified concept. Again, there is an option to include/exclude the current concept - super-concepts/direct<+>
  • sub-concepts - returns sub-concepts

For example:

concept<IfStatement> concept = myNode.concept; 
list<concept<>> superConceptsAll = concept.super-concepts/all; 
concept.super-concepts/direct<+>; 
concept.sub-concepts(model);
concept.sub-concepts(model, myScope);

Is Role operation

Sometimes we may want to check whether a node has a particular role. For this we have the following syntax:

For example,

myNode.hasRole(IfStatement : elsifClauses) 

Containing link queries

If one node was added to another one (parent) using the following expression:

then you can call the following operations to access the containment relationship information:

  • containingRole - returns a string representing the child role of the parent node containing this node ("childLinkRole" in above case)
  • containingLink - returns node<LinkDeclaration> representing a link declaration of the parent node containing this node
  • index - returns int value representing index of this node in a list of children with corresponding role. Identical to the following query upon the model represented above:
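The operations in use (a sketch; aNode is assumed to have been added to parentNode in the childLinkRole role, as described above):

aNode.containingRole;   // "childLinkRole"
aNode.containingLink;   // node<LinkDeclaration> for childLinkRole
aNode.index;            // roughly equivalent to parentNode.childLinkRole.indexOf(aNode)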

Reference operations

Accessing references

The following operations were created to access a reference instance representing a reference from a source node to a target node. The operations are applicable to the source node:

  • reference< > - returns an instance of the reference type representing the specified reference. This operation requires a linkQualifier parameter used as the reference specification. The parameter can be either a link declaration of the source node's concept or an expression returning node<LinkDeclaration>
  • references - returns a sequence<reference> representing all references specified in the source node.

Working with references

Having an instance of the reference type, you can call the following operations on it:

  • linkDeclaration - returns the node<LinkDeclaration> representing this reference
  • resolveInfo - returns the string resolve-info object
  • role - returns the reference role; similar to reference.linkDeclaration.role
  • target - returns the node<> representing the reference target, if it was specified and can be located in the model(s)

Downcast to lower semantic level

SModel language generates code that works with raw MPS classes. These classes are quite low-level for the usual work, but in some exceptional cases we may still need to access them. To access the low-level objects, you should use the downcast to lower semantic level construct. It has the following syntax:

For example,

myNode/.getConcept().findProperty("name")

Modification operations

Feature changes

The most commonly used change operation in SModel is changing a feature. To set the value of a property, or to assign a child or reference node of 0..1 or 1 cardinality, you can use a straight assignment (with =) or the set operation. To add a child to a 0..n or 1..n children collection, you can either use the .add operation from the collections language or call the add next-sibling/add prev-sibling operations on a node<>, passing another node as a parameter.

For example,

  • classDef.name = "NewClassName";
  • classDef.name.set("NewClassName");
  • myNode.condition = trueConstant;
  • node<InstanceMethodDeclaration> method = classDef.member.add new initialized(InstanceMethodDeclaration);

New node creation

There are several ways to create a new node:

  • new operation: new node<Concept>()
  • new instance operation on a model: model.newInstance()
  • new instance operation on a concept: concept.newInstance()
  • add new(Concept) and set new(Concept) operations applied to feature expressions
  • replace with new(Concept) operation
  • new root node(Concept) operation applied to a model. In this case the concept should be rootable
  • new next-sibling<Concept>/new prev-sibling<Concept> operations adding new sibling to an existing node

    Note that the jetbrains.mps.lang.actions language adds the possibility to initialize the newly created nodes using the rules specified in NodeFactories. Upon importing the jetbrains.mps.lang.actions language you are able to call:

    • new initialized node<Concept>()
    • model.new initialized instance(Concept)
    • node.new initialized instance (Concept)
    • add new initialized()
    • set new initialized()
    • replace with new initialized(Concept)
    • replace with initialized next/previous-sibling(Concept)
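A few of the creation operations in use (a sketch; the concepts used and the statement role of methodBody are illustrative):

node<IfStatement> ifStmt = new node<IfStatement>();
methodBody.statement.add new(IfStatement);
oldNode.replace with new(IfStatement);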

Copy

To create a copy of an existing node, you can use the copy operation. E.g., node<> yourNode = myNode.copy

Replace with

To replace a node in the AST with an instance of another node, you can use the 'replace with' operation. If you want to replace and create at the same time, there is a shortcut operation 'replace with new(Concept)', which takes a concept as a parameter.

Delete and detach operations

If you want to completely delete a node from the model, you can use the delete operation. To detach a node from its parent only, so that you can, for example, attach it to another parent later, use the detach operation.
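For example (a sketch; otherList stands for any 0..n child collection):

myNode.detach;           // remove myNode from its parent, keeping the node
otherList.add(myNode);   // ...so it can be attached elsewhere later
myNode.delete;           // remove the node from the model entirely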

 

Pattern Language

The pattern language has a single purpose - to define patterns of model structures. These patterns form visual representations of the nodes you want to match. A pattern matches a node if the node's property values are equal to those specified in the pattern, the node's references point to the same targets as those of the pattern, and the corresponding children match the appropriate children of the pattern.

Patterns may also contain variables for nodes, references and properties, which then match any node/reference/property. On top of that, the variables will hold the actual matched values upon a successful match.

PatternExpression

The single most important concept of the pattern language is PatternExpression. It contains a pattern in the form of a single arbitrary node. The node can additionally specify the following variables:

  • #name - a node variable, a placeholder for a node. Stores the matching node
  • #name - a reference variable, a placeholder for a reference. Stores the reference's target, i.e. a node.
  • $name - a property variable, a placeholder for a property value. Stores the property value, i.e. a string.
  • *name - a list variable, a placeholder for nodes in the same role. Stores the list of nodes.

Antiquotations may be in particular useful when used inside a pattern, just like inside quotations (see Antiquotations).

Examples

1. The following pattern matches against any InstanceMethodDeclaration without parameters and a return type:




Captured variables:

  • $methodName - string - the method's name
  • #statementList - node<StatementList> - the statements

2. The following pattern matches against a ClassifierType with the actual classifier specified inside an antiquotation expression and with any quantity of any type parameters:


Captured variables:

  • *l - nlist<Type> - the class type's parameters
  • #ignored - node<Type> - used as a wildcard; its contents are ignored, meaning the type parameters are arbitrary

Using patterns

Match statement

Patterns are typically used as conditions in match statements. Pattern variables can be referenced from inside of the match statement.
For example:

This piece of code examines a node n and checks whether it satisfies the first or the second condition; the statements in the corresponding (matching) block are then executed. A pattern variable $name is used in the first block to print out the name of a node. In our case the node holds a variable declaration.

Other usages

Patterns are also used in several other language constructs in MPS. They may appear:

  • as conditions on applicable nodes of typesystem/replacement/subtyping/other rules of typesystem language (See Inference rules)
  • as supertype patterns in coerce statement and coerce expression (See Coerce)
  • as conditions on node in generator rules
  • as pattern in TransformStatement used to define language migrations (See Migrations)

You can also use patterns in your own languages.
Basically, what happens is that a class is generated from a PatternExpression and the expression itself is reduced to a constructor of this class. This class extends GeneratedMatchingPattern and has a match(SNode) method, which returns a boolean value indicating whether the node matches the pattern. It also provides a getFieldValue(String) method to get the values stored in pattern variables after a successful match.
So, to develop your own language constructs using patterns, you can call these two methods in the generator templates for your constructs.


Previous Next

Editor


Once the structure of your language is defined, you will probably go on and create the means to allow developers to conveniently build ASTs with it. Manipulating ASTs directly would be neither intuitive nor productive. Hiding the AST and offering the user comfortable, intuitive interaction is the role of language editors.

Editor Overview

An editor for a node serves as its view as well as its controller. An editor displays the node and lets the user modify, replace, delete it and so on. Nodes of different concepts have different editors. A language designer should create an editor for every concept in his/her language.

In MPS, an editor consists of cells, which themselves contain other cells, some text, or a UI component. Each editor is specified for a particular concept, and a concept may have at most one editor declaration (or none at all). If a concept does not have an editor declaration, its instances will be edited with the editor of the concept's nearest ancestor that has one.

To describe an editor for a certain concept (i.e. which cells have to appear in an editor for nodes of this concept), a language designer will use a dedicated language simply called editor language. You see, MPS applies the Language Oriented Programming principles to itself.

The description of an editor consists of descriptions for cells it holds. We call such descriptions "cell models." For instance, if you want your editor to consist of a unique cell with unmodifiable text, you create in your editor description a constant cell model and specify that text. If you want your editor to consist of several cells, you create a collection cell model and then, inside it, you specify cell models for its elements. And so on.

Icon

For a quick how-to document on the MPS editor please check out the Editor Cookbook.

Types Of Cell Models

Constant cell

This model describes a cell which will always contain the same text. Constant cells typically mirror "keywords" in text-based programming languages.

Collection cell

A cell which contains other cells. Can be horizontal (cells in a collection are arranged in a row), vertical (cells are on top of each other) or have so-called "indent layout" (cells are arranged horizontally but if a line is too long it is wrapped like text to the next line, with indent before each next line).

Property cell

This cell model describes a cell which shows the value of a certain property of a node. The value can be edited in the property cell, so the cell serves not only as a view but also as a controller. In the inspector, you can specify whether the property cell will be read-only or will allow its property value to be edited.

Child cell

This cell model contains a reference to a certain link declaration in a node's concept. The resulting cell will contain an editor for the link's target (almost always for a child, not a referent). For example, if you have a binary operation, say " + ", with two children, "leftOperand" and "rightOperand", an editor model for your operation will be the following: an indent collection cell containing a child cell for the left operand, a constant cell with " + ", and a child cell for the right operand. It will be rendered as an editor for the left operand, then a cell with " + ", and then an editor for the right operand, arranged in a row. As follows from its name, this type of cell model is typically used to show editors for children.

Referent cell

Used mainly to show reference targets. The main difference between a referent cell and a child cell is that we don't need, or don't want, to show the whole editor for a reference target. For example, when a certain node, say a class type, has a reference to a Java class, we don't want to show the whole editor for that class with its methods, fields, etc. - we just want to show its name. Child cells cannot be used for such a purpose; one should use referent cells instead.
A referent cell allows you to show a different, inlined editor for a reference target instead of using the target's own editor. In most cases it is very simple: a cell for a reference target usually consists only of a property cell with the target's name.

Child list cell

This cell is a collection containing multiple child cells for a node's children of the same role. For instance, an editor for a method call will contain a child list cell for rendering its actual arguments. Child list can be indent (text like), horizontal or vertical.
The cell generated from this cell model supports insertion and deletion of the children of the role given, thus serving both as a view and as a controller. The default keys for insertion are Insert and Enter (to insert a child before or after the selected one, respectively), and the default key for deletion is Delete. You also can specify a separator for your list.
A separator is a character which will be shown in constant cells between cells for the children. When you are inside the cell list and you press a key with this character, a new child will be inserted after the selected child. For instance, a separator for a list representing actual parameters in a method call is a comma.
In the inspector, you can specify whether the resulting cell list will use folding, and whether it will use braces. Folding allows your cell list to contract into a single cell (fold) and expand from it (unfold) when necessary. It is useful for a programmer editing a large root: he/she can fold some cells and hide all information in the editor that is not needed for the task at hand. For instance, when editing a large class, one can fold all method bodies except the method currently being edited.
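The insertion behavior of a child list can be sketched with a few lines of plain Java. This is only a model of the behavior described above, not MPS code: children are simplified to strings, and only the separator-key case is simulated.

```java
import java.util.ArrayList;
import java.util.List;

// Schematic model of a child list cell: typing the separator character while a
// child is selected inserts a new (empty) child right after it. Illustrative only.
public class ChildListSketch {
    final List<String> children = new ArrayList<>();
    final char separator;

    ChildListSketch(char separator) { this.separator = separator; }

    // Simulates a keypress while the child at `index` is selected.
    void keyPressed(char key, int index) {
        if (key == separator) {
            children.add(index + 1, "<empty>"); // new child after the selected one
        }
    }

    public static void main(String[] args) {
        // e.g. actual arguments of a method call, separated by commas
        ChildListSketch argsList = new ChildListSketch(',');
        argsList.children.add("a");
        argsList.children.add("b");
        argsList.keyPressed(',', 0); // comma typed while "a" is selected
        assert argsList.children.get(1).equals("<empty>");
        assert argsList.children.size() == 3;
    }
}
```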

Indent cell

An indent cell model will be generated into a non-selectable constant cell containing a whitespace. The main difference between a cell generated from an indent cell and one generated from a constant cell model containing whitespaces as its text is that the width of an indent cell will vary according to user-defined global editor settings. For instance, if a user defines an indent to be 4 spaces long, then every indent cell will occupy a space of 4 characters; if 2 spaces long, then every indent cell will be 2 characters.

UI component cell

This cell model allows a language designer to insert an arbitrary UI component inside an editor for a node. A language designer should write a function that returns a JComponent, and that component will be inserted into the generated cell. Note that such a component will be re-created every time an editor is rebuilt, so don't try to keep any state inside your component. Every state should be taken from and written into a model (i.e. node, its properties and references) - not a view (your component).
A good use case for such a cell model is when you keep a path to some file in a property, and your component is a button which activates a modal file chooser. The default selected path in a file chooser is read from the above-mentioned property, and the file path chosen by the user is written to that property.

Model access

A model access cell model is a generalization of a property cell and, therefore, is more flexible. While a property cell simply shows the value of a property and allows the user to change that value, a model access cell may show an arbitrary text based on the node's state and modify the node in an arbitrary way based on what changes the user has made to the cell's text.
While making a property cell work requires you only to specify a property to access via that cell, making a model access cell work requires a language designer to write three methods: "get," "set," and "validate." The latter two are somewhat optional.
A "get" method takes a node and should return a String, which will be shown as the cell's text. A "set" method takes a String - the cell's text - and should modify a node according to this String, if necessary. A "validate" method takes the cell's text and returns whether it is valid or not. If a text in a cell becomes invalid after a user change, then it is marked red and is not passed to the "set" method.
If a "validate" method is not specified, a cell will always be valid. If a "set" method is not specified, no changes in a cell's text will affect its node itself.
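The interplay of the three methods can be sketched as follows. This is a schematic stand-in, not the real MPS API: the node is simplified to a single int property, and userTyped() simulates what the editor does with the cell's text.

```java
// Schematic sketch of a model access cell's "get" / "set" / "validate" trio.
// Names and the simulated editor loop are illustrative, not the real MPS API.
public class ModelAccessSketch {
    static int value = 0; // stands in for a node's property

    // "get": produce the cell's text from the node's state
    static String get() { return Integer.toString(value); }

    // "validate": is the user-typed text acceptable?
    static boolean validate(String text) {
        try { Integer.parseInt(text); return true; }
        catch (NumberFormatException e) { return false; }
    }

    // "set": apply the (already validated) text back to the node
    static void set(String text) { value = Integer.parseInt(text); }

    // Simulates what the editor does on a user edit.
    static void userTyped(String text) {
        if (validate(text)) set(text); // invalid text is marked red and never reaches "set"
    }

    public static void main(String[] args) {
        userTyped("42");
        assert get().equals("42");
        userTyped("oops");         // rejected by validate
        assert get().equals("42"); // node unchanged
    }
}
```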

Custom cell

If the other cell models are not enough for a language designer to create the desired editor, there's one more option left: to create a cell provider which returns an arbitrary custom cell. The only restriction is that it must implement the EditorCell interface.

Editor Components and editor component cells

Sometimes two or more editor declarations for different concepts share a common part, which is duplicated in each of those editors. To avoid this redundancy, there's a mechanism called editor components. You specify a concept for which an editor component is created and create a cell model, just as in a concept editor declaration. Once written, the component can be used in editor declarations for any of the specified concept's descendants. To use an editor component inside your editor declarations, you create a specific cell model - an editor component cell model - and set your editor component declaration as the target of this cell model's reference.

Cell layouts

Each collection cell has a "cell layout" property, which describes how child cells will be placed. There are several layouts:

  • indent layout - places cells like text.
  • horizontal layout - places cells horizontally in a row.
  • vertical layout - places cells vertically, on top of each other.

Styles

Styling the editor cells gives language designers a very powerful way to improve readability of the code. Having keywords, constants, calls, definitions, expressions, comments and other language elements displayed each in different colors or fonts helps developers grasp the syntax more easily. You can also use styling to mask areas of the editor as read-only, so that developers cannot edit them.

Each cell model has some appearance settings that determine the cell's presentation - for instance, font color, font style, whether a cell is selectable, and some others. These settings are combined into an entity called a stylesheet. A stylesheet can either be inline, i.e. described together with a particular cell model, or be declared separately and used in many cell models. Both an inline stylesheet and a style reference are specified for each cell in its Inspector View.

It is a good practice to declare a few stylesheets for different purposes. Another good practice is to have a style guideline in mind when developing an editor for your language, as well as when developing extensions for your language. For example, in BaseLanguage there are styles for keywords (applied to those constant cells in the BaseLanguage editor, which correspond to keywords in Java), static fields (applied to static field declarations and static field references), instance fields, numeric literals, string literals, and so forth. When developing an extension to BaseLanguage, you should apply keyword style to new keywords, field style to new types of fields, and so forth.

A stylesheet is quite similar to a CSS stylesheet; it consists of a list of style classes, in which the values of some style properties are specified. MPS additionally provides a mechanism for extending styles as well as for overriding property values.

Style properties

Boolean style properties

  • selectable - whether the cell can be selected. True by default.
  • read-only - whether one can modify the cell and its nested cells. False by default. Designed for freezing fragments of the cell tree.
  • editable - whether one can modify the text in a cell. False by default for constant cell models, true for other cell models.
  • draw-border - whether a border will be drawn around a cell.
  • draw-brackets - whether brackets will be drawn around a cell.
  • first-position-allowed / last-position-allowed - for text-containing cells, specifies whether the caret is allowed at the first/last position (i.e. before/after the whole text of the cell).

You can either choose a property value from the completion menu or specify a query, i.e. a function which returns a boolean value.

Padding properties.

  • padding-left/right/top/bottom - a floating-point number which specifies the padding of a text cell, i.e. how much space is left between the cell's text and the corresponding side of the cell.

Punctuation properties.

All cells in a collection are separated with one space by default. Sometimes we need cells placed right next to each other.

  • punctuation-left - if this property is true, the space on the left side of the cell is removed and the first position in the cell becomes disallowed.
  • punctuation-right - if this property is true, the space on the right side of the cell is removed and the last position in the cell becomes disallowed.
  • horizontal-gap - specifies the gap size between cells in a collection. The default value is 1 space.

For example, in code such as (1) we don't want spaces between "(" and "1", or between "1" and ")". So we should add the property punctuation-right to the cell "(", and the property punctuation-left to the cell ")".

Color style properties

  • Text foreground color - the cell text's color (affects text cells only)
  • Text background color - the cell text's background color (affects text cells only)
  • Background color - the background color of a cell. Affects any cell. If a text cell has non-zero padding and some text background color, the cell's background color will be the color of its margins.
    You can either choose a color from the completion menu or specify a query, i.e. a function which returns a color.

Indent layout properties

  • indent-layout-indent - all lines will be placed with an indent. This property can be used to indent a code block.


  • indent-layout-new-line - after this cell there will be a new line marker.


  • indent-layout-on-new-line - this cell will be placed on a new line
  • indent-layout-new-line-children - all children of the collection will be placed on new lines


  • indent-layout-no-wrap - the line won't be wrapped before this cell

Other style properties

  • font size
  • font style - can be either plain, bold, italic, or bold italic.
  • layout constraint
    • For flow layout
      • none - default behavior
      • punctuation - means that the previous item in the flow layout should always be placed on the same line as the item this constraint is assigned to.
      • noflow - excludes a cell from the flow layout. The current line is finished and the item is placed below it. After this item a new line is started and normal flow layout resumes. This style can be used to embed a picture inside text.
  • underlined - Can be either underlined, not underlined, or as is ('as is' means it depends on properties of the enclosing cell collection).

Style properties propagation

While some style properties affect only the cell to which they are applied, values of other properties are pushed down the cell subtree (nested cells) and applied to them until some of the child cells specifies its own value for the property. Such inheritable properties that are pushed down the cell hierarchy include text-foreground-color, text-background-color, background-color, font-style, font-size and many others.
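The push-down rule can be sketched as a simple lookup along the cell tree: a cell uses its own value when present, and otherwise the nearest ancestor's. The class below is an illustrative model only, not the real MPS implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Schematic model of inheritable style properties being pushed down the cell
// tree until a child cell specifies its own value. Illustrative only.
public class StylePropagation {
    final StylePropagation parent;
    final Map<String, String> ownStyles = new HashMap<>();

    StylePropagation(StylePropagation parent) { this.parent = parent; }

    // A cell's own value wins; otherwise the value pushed down from above applies.
    String resolve(String property) {
        if (ownStyles.containsKey(property)) return ownStyles.get(property);
        return parent == null ? null : parent.resolve(property);
    }

    public static void main(String[] args) {
        StylePropagation root = new StylePropagation(null);
        root.ownStyles.put("text-foreground-color", "gray");
        StylePropagation child = new StylePropagation(root);
        StylePropagation grandchild = new StylePropagation(child);
        // inherited from root through the intermediate cell
        assert "gray".equals(grandchild.resolve("text-foreground-color"));
        // an intermediate cell's own value overrides the pushed-down one
        child.ownStyles.put("text-foreground-color", "blue");
        assert "blue".equals(grandchild.resolve("text-foreground-color"));
    }
}
```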

Custom styles

Language designers can define their own style attributes in style sheets and then use them in the editor. This increases the flexibility of the language editor definition. The attributes may hold values of different types and can optionally provide default values.

There are two types of custom style attributes:

  • simple - applied to a single editor cell only
  • inherited - applied to a cell and all its descendant cells recursively

In order to use the style attribute in an editor definition, your language has to import the language defining the attribute and the editor aspect has to list the defining language among the used languages.
To refer to the custom attribute from within BaseLanguage code, you need to import jetbrains.mps.lang.editor to get access to the StyleAttributeReferenceExpression concept.

Style inheritance

To be truly usable, style classes need an extension mechanism, so that a style class can inherit the values of all style properties that it does not override explicitly. The special style property apply copies the values of all properties specified in the parent style class into our style class. Using the apply property is semantically equivalent to copy-pasting all of the properties from the parent style class. An apply-if variant is also available to apply a style conditionally. Unlike traditional style extension, the apply mechanism allows inheriting from multiple classes.

The unapply property allows style classes to cease the effect of selected inherited properties. For example, a style class for commented-out code will push down styles that make code elements look all gray. Yet, links may need to be rendered in their usual colors so that the user can spot them and potentially click on them.

Potential conflicts between properties specified in parent styles and/or defined explicitly in the inheriting cell are resolved by order: the last specified value overrides all previous values of the same style property.
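The apply semantics can be sketched with an ordered property map: applying a parent class is equivalent to copy-pasting its properties at that point, and later entries override earlier ones. This is a model of the described behavior, not MPS code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Schematic model of the "apply" mechanism: apply = copy-paste the parent's
// properties at the point of the apply; the last specified value wins.
public class StyleApply {
    final Map<String, String> properties = new LinkedHashMap<>();

    StyleApply set(String name, String value) {
        properties.put(name, value); // later values override earlier ones
        return this;
    }

    StyleApply apply(StyleApply parent) {
        properties.putAll(parent.properties); // like copy-pasting the parent class
        return this;
    }

    public static void main(String[] args) {
        StyleApply keyword = new StyleApply().set("color", "blue").set("font-style", "bold");
        // inherit both properties, then override one of them
        StyleApply myStyle = new StyleApply().apply(keyword).set("color", "darkblue");
        assert "darkblue".equals(myStyle.properties.get("color"));
        assert "bold".equals(myStyle.properties.get("font-style"));
    }
}
```

Because apply is just an ordered copy, a class can apply several parents in turn, which is what makes multiple inheritance of style classes possible.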

For example, the ConsoleRoot concept provides a read-only editor with only a single point (the commandHolder cell), where edits are allowed. First the readOnly style class is set on the editor:

and then the readOnly style class is unapplied for the commandHolder cell:

The readOnly style class is defined as follows:

Style priorities

A style class can be declared to take precedence over some other style class or multiple classes.

  1. If a style class does not dominate over anything, it is a low-level style class.
  2. If a style class declares dominance but does not specify a style class that it dominates over (the dominates over clause is present but empty), it is considered to dominate over all low-level style classes.
  3. The domination relation is transitive; cycles are not allowed.

The domination relation makes sense only for styles with inheritable attributes. When one value of some style property is pushed down from parent and another value for the same property is specified in the style class applied to the current cell, the resulting behavior depends on the relationship between the two style classes:

  1. If both style classes are low-level, the value pushed down from the parent is ignored and replaced with the value from the current cell's style class.
  2. If one of the style classes dominates over the other, both values are kept and pushed down, but the values from the dominating style class hide the values from the other one.
  3. If, however, in some child cell the dominating style class is unapplied (with the special style property unapply), the values from the other style class take effect for this property.

For example, a comment containing the word TODO should be styled more prominently than a plain comment. Thus the language concept representing a comment needs to apply a TODO-aware style (TODO_Style), which declares its dominance over the plain Comment_Style. The actual styling properties are, however, only applied if the comment really contains the TODO text (isToDo()); otherwise the plain Comment_Style properties are used.

Use the "Add Dominance" intention to append the dominates over clause to a style:

Cell actions

Every cell model may have actions associated with it. Such actions are meant to improve editing usability. You can specify them in the inspector of any cell model.

Key maps

You may specify a reference to a key map for your cell model. A key map is a root concept - a set of key map items each consisting of a keystroke and an action to perform. A cell generated from a cell model with a reference to a certain key map will execute appropriate actions on keystrokes.

In a key map you must specify a concept for which a key map is applicable. For instance, if you want to do some actions with an expression, you must specify Expression as an applicable concept; then you may specify such a key map only for those cell models which are contained inside editor declarations for descendants of Expression, otherwise it is a type error.

If a key map property "everyModel" is "true," then this key map behaves as if it is specified for every cell in the editor. It is useful when you have many descendants of a certain concept which have many different editors, and your key map is applicable to their ancestor. You need not specify such a key map in every editor if you mark it as an "every model" key map.

A key map item consists of the following features:

  • A function which is executed when a key map item is triggered (returns nothing)
  • A set of keystrokes which trigger this key map item
  • A boolean function which determines if a key map item is applicable here (if not specified, then it's always applicable). If a key map item is not applicable the moment it is triggered, then it will not perform an action.
  • You may specify caret policy for a key map item. Caret policy says where in a cell a caret should be located to make this key map item enabled. Caret policy may be either first position, last position, intermediate position, or any position. By default, caret policy is "any position." If a caret in a cell does not match the caret policy of a key map item the moment it is triggered, then this key map item will not perform an action.

Action maps

A cell model may contain a reference to an action map. An action map overrides some default cell actions (delete and right transform) for a certain concept. An action map consists of several action map items. In an action map, you must specify a concept for which the action map is applicable.

An action map item contains:

  • an action description which is a string,
  • and a function which performs an action (returns nothing).

An action map item may override one of two default actions: the default delete action or the right transform (see Actions). For instance, when you have a return statement without any action map in its editor and you press Delete on the cell with the keyword "return," the whole statement is deleted. But you may specify an action map containing a delete action map item which, instead of just deleting the return statement, replaces it with an expression statement containing the same expression as the deleted return statement.

action DELETE description : <no description>
              execute : (node, editorContext)->void {
                           node<ExpressionStatement> expressionStatement = node.replace with new(ExpressionStatement);
                           expressionStatement.expression.set(node.expression);
                        }

Cell menus

One may specify a custom completion menu for a certain cell. Open the inspector for your cell declaration, find the table named Common, find the row named menu, and create a new cell menu descriptor. A cell menu descriptor consists of menu parts of different kinds, which are discussed below.

Property values menu part

This menu part is available on property cells; it specifies a list of property values which will be shown in completion. One should write a function which returns a value of type list<String>.

Property postfix hints menu part

This menu part is available on property cells; it specifies a list of strings which serve as "good" postfixes for your property value. In such a menu part one should write a function which returns a value of type list<String>. Such a menu is useful if you want MPS to "guess" a good value for a property. For instance, one may decide that a good variable name is the variable's type name with the first letter lowercased, or any name that ends with the type name: for a variable of type "Foo", good names are "foo", "aFoo", "firstFoo", "goodFoo", etc. So, in the variable declaration's editor, one would write such a menu part in the menu of the property cell for the variable name:

property postfix hints
   postfixes : (scope, operationContext, node)->list<String> {
                  list < String > result ;
                  node < Type > nodeType = node . type ;
                  if ( nodeType != null ) {
                     result = MyUtil.splitByCamels( nodeType . getPresentation() );
                  } else {
                     result = new list < String > { empty } ;
                  }
                  return  result ;
               }

where splitByCamels() is a function which returns the list of postfixes of a string that start with a capital letter (for instance, MyFooBar -> MyFooBar, FooBar, Bar).
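The helper described above is a hypothetical utility of the example, not part of MPS; an assumed plain-Java implementation could look like this:

```java
import java.util.ArrayList;
import java.util.List;

// Assumed implementation of the splitByCamels() helper from the example above:
// returns the postfixes of a camel-case string that start with a capital letter.
public class MyUtil {
    public static List<String> splitByCamels(String name) {
        List<String> result = new ArrayList<>();
        for (int i = 0; i < name.length(); i++) {
            if (Character.isUpperCase(name.charAt(i))) {
                result.add(name.substring(i)); // postfix starting at this capital
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // MyFooBar -> [MyFooBar, FooBar, Bar], as in the example above
        assert splitByCamels("MyFooBar").equals(List.of("MyFooBar", "FooBar", "Bar"));
    }
}
```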

Primary replace child menu

It's a cell menu part which returns primary actions for child (those by default, as if no cell menu exists).

Primary choose referent menu

It's a cell menu part which returns primary actions for referent (those by default, as if no cell menu exists).

Replace node menu (custom node's concept)

This kind of cell menu part allows you to replace the edited node (i.e. the node on which the completion menu is invoked) with instances of a certain specified concept and its subconcepts. Such a cell menu part is useful, for example, when you want a particular cell of your node's editor to be responsible for replacing the whole node. For instance, consider an editor for binary operations. There's a common editor for all binary operations, which consists of a cell for the left operand, a cell for the operation sign (a property cell for the concept property "alias"), and a cell for the right operand.

[> % leftExpression % ^{{ alias }} % rightExpression % <]

It is natural to create a cell menu for the cell with the operation sign, allowing the user to replace one operation sign with another (by replacing the whole node, of course). For such a purpose, one writes a replace node menu part in the cell for the operation sign:

replace node (custom node concept)
   replace with : BinaryOperation

The former left and right children are added to the newly created BinaryOperation according to the Node Factories for the BinaryOperation concept.

Replace child menu (custom child's concept)

Such a cell menu part is applicable to a cell for a certain child and specifies a concept whose instances (and the instances of its subconcepts) will be shown in the completion menu; when an item is chosen, an instance is created and set as the child. To specify that concept, one should write a function which returns a value of type node<ConceptDeclaration>.

Replace child menu (custom action).

This kind of cell menu part is applicable to a cell for a certain child and allows one to customize not only the child concept, but the whole replace-child action: the matching text (the text shown in the completion menu), the description text (a description of the action, shown in the right part of the completion menu), and the function which creates a child node when the action is selected from the completion menu. Hence, to write such a menu one should specify the matching text and description text and write a function returning a node (this node should be an instance of the target concept specified in the respective child link).

Generic menu item

This kind of cell menu part makes MPS perform an arbitrary action when the respective menu item is selected in the completion menu. One should specify the matching text for the menu item and write a function which does what one wants. For instance, one may not want to show a child list cell for class fields when no fields exist, which means the list's default actions cannot be used to create a new field. Instead, one can create somewhere in the class' editor a generic menu item with the matching text "add field" which creates a new field for the class.

generic item
   matching text : add field
   handler : (node, model, scope, operationContext)->void {
                node . field . add new ( <default> ) ;
             }

Action groups

An action group is a cell menu part which returns a group of custom actions. At runtime, during menu construction, several objects of a certain type - called parameter objects - are collected or created. For the action group's parameter object type, functions returning the matching text and the description text are specified, as well as a function which is triggered when a menu item with a parameter object is chosen.

Thus, an action group description consists of:

  • a parameter object type;
  • a function which returns a list of parameter objects of a specified type (takes an edited node, scope and operation context);
  • a function which takes a parameter object of a specified type and returns matching text (a text which will be shown in a completion menu);
  • a function which takes a parameter object of a specified type and returns description text for a parameter object;
  • a function which performs an action when parameter object is chosen in a completion menu.

A function which performs an action may be of different kinds, so there are three different kinds of cell action group menu parts:

  • Generic action group. Its action function, given a parameter object, performs an arbitrary action. Besides the parameter object, the function is provided with the edited node, its model, the scope and the operation context.
  • Replace child group. It is applicable to child cells, and its action function, given a parameter object, returns a new child, which must have the type specified in the respective child link declaration. Besides the parameter object, the function is provided with the edited node, its model, the current child (i.e. the child being replaced), the scope and the operation context.
  • Replace node group. Its action function, given a parameter object, returns a node. Usually it is some referent of the edited node (i.e. the node on which the completion menu is invoked). Besides the parameter object, the function is provided with the edited node, its model, the scope and the operation context.

Cell menu components

When some menu parts in different cells are identical, one may want to extract them into a separate entity to avoid duplication. This is what cell menu components are for. A cell menu component consists of a cell menu descriptor (a container for cell menu parts) and a specification of an applicable feature. The specification of the applicable feature contains a reference to a feature (i.e. a child link declaration, reference link declaration or property declaration) to which the menu is applicable. For instance, if your menu component will be used to replace some child, its child link declaration should be specified here.

Once a cell menu component is created, it can be used in cell menus via a cell menu component menu part - a cell menu part which contains a reference to a certain menu component.


Actions

The MPS editor has quite sensible defaults for completion actions and node creation. But when you want to customize them, you have to work with the actions language. This language is also used to define Left and Right Transform actions (LT/RT-actions for short), which allow editing binary operations in a text-like way.

Substitute Actions

Substitute actions are actions which are available when you press Ctrl+Space in the editor. MPS has the following default behavior for them:

  • If your selection is inside a position which allows concept A, then all enabled subconcepts of A will be available in the completion menu.
  • All abstract concepts are excluded.
  • All concepts which implement the IDontSubstituteByDefault interface concept are excluded.
  • All concepts for which the 'can be a child' constraint returns false are excluded.
  • All concepts for which the 'can be a parent' constraint of the parent node returns false are excluded.
  • If a concept has a 1:1 reference, then it is not added to the completion menu. Instead, an item is added for each element of the scope for that reference. We use the name smart reference for such items.

When you want to customize this behavior, you have to create a node substitute actions root. Inside it you can create substitute actions builders. Each builder has a substitute node concept, where you type the name of the concept which you want to substitute. It has a condition; when this condition is satisfied, your actions are added to the completion menu. It also has an actions part where the action behavior is specified. Each action has an associated concept which can be used for action filtering (see remove by condition).

Add concept

If a concept isn't available because of default settings, you can add it with the 'add concept' construct.

Remove defaults

Use this construct if you want to completely override defaults. It removes all default actions and adds only those actions which are specified in the actions language.

Remove By Condition

Use this construct if you want to remove only some of the actions. It can be useful when you extend a language and want to remove some of its actions in a particular context.

Custom items

If you aren't satisfied with the default items, you can create your own. In this case you can override everything: the output concept, matching text, description text, icon, and the behavior on item invocation. When you create custom items, you have to specify the output concept so it can be used to filter out your actions from an extending language.

Simple

A simple item adds one item to the substitute menu. You can specify all the properties of the substitute action (matching text, description, icon, etc.). It can be useful for entering literals (boolean, integer, float, string, char).

Parametrized

This concept allows you to create items in the substitute menu based on a query. The query returns a list of objects - for example, the files in a directory for which you want to offer completion. It is similar to the simple item but has an additional parameterObject parameter in all of its blocks.

Wrapper

Sometimes we want to add all the completion items available in the context of one concept into the context of another concept. Let's consider a couple of examples from baseLanguage. We may want to see all available items for Expression in a Statement's context, since we can wrap each of them in an ExpressionStatement. Or we can add all items of Type's completion menu, since we can create a local variable declaration. The wrapper block has a concept from whose context we want to add completion items. It also has a wrapper block with a nodeToWrap parameter, which the author of the wrapper block should wrap.

Concepts Menu

Sometimes you want to add items for the subconcepts of a particular concept but override their handlers. The concepts menu block allows you to do so.

Generic

If none of the above actions are suitable, you can resort to the generic item. It has a block which returns a list of INodeSubstituteAction items.

Side Transform Actions

When you edit code in a text editor, you can type it either from left to right:

1 <caret> press +
1+<caret> press 2
1+2<caret>

or from right to left

<caret>1 press 2
2<caret>1 press +
2+<caret>1

In order to emulate this behavior, MPS has side transform actions: left and right transforms. They allow you to create actions that become available when you type at the left or right edge of a cell. For example, in MPS you can do the following:

1<caret> press + (a red cell with + inside appears)
1+<caret> press 2 (the red cell disappears)
1+2<caret>

or the following:

<caret>1 press + (a red cell with + inside appears)
+<caret>1 press 2 (the red cell disappears)
2<caret>+1

The first case is called right transform. The second case is called left transform.
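Under the hood, a right transform can be pictured as a tree rewrite with a temporary hole. This Python sketch mirrors the caret sequence above using hypothetical names - it is not MPS code:

```python
# Sketch of a right transform as a tree rewrite. Names are hypothetical,
# not MPS API: this only mirrors the caret sequence shown above.

def right_transform(node, typed):
    # Typing '+' at the right edge of '1' wraps it in a binary expression
    # whose right operand is still empty -- shown in MPS as a red cell.
    if typed == "+":
        return {"op": "+", "left": node, "right": None}
    return node

def fill_empty_operand(expr, typed):
    # Typing '2' into the red cell completes the expression.
    if expr["right"] is None:
        expr["right"] = typed
    return expr
```

A left transform is symmetric: the new node becomes the right operand and the hole appears on the left.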

In order to create transformation actions you have to create transform menu actions root. Inside it, you can create transform action builders. You can specify the following:

  • whether it is left or right transform
  • a concept which instance you want to transform
  • a condition which defines where your actions will be applicable

Add custom items

Custom items are similar to their counterpart in the substitute actions part of a language. They allow you to add either one or many items to the menu. Let's consider them in detail.

Simple item

Simple item adds an item with matching text, description, icon, and substitute handler.

Parametrized item

A parametrized item adds a group of items based on a query which returns a list of objects. It's similar to the simple item but has an additional parameterObject parameter in every block.

Add concept

Add concept adds an item for every non-abstract subconcept of a specified concept. This item has a handler block where you can replace sourceNode with a newly created node. For example, this is useful when you want to create a transformation for each subconcept of a BinaryOperation, such as +, -, *, or /. The code for replacing sourceNode is the same in each of these cases - the only difference is the resulting concept.
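The mechanics can be pictured as iterating over the non-abstract subconcepts and sharing one handler body. A hedged Python sketch, with Plus/Minus standing in for BinaryOperation subconcepts (hypothetical names, not MPS API):

```python
# Sketch of 'add concept': one transform item per non-abstract subconcept,
# all sharing the same replacement logic. Hypothetical names, not MPS API.

class BinaryOperation: pass
class Plus(BinaryOperation): pass
class Minus(BinaryOperation): pass
class AbstractOp(BinaryOperation): abstract = True   # abstract: excluded

def items_for(concept):
    subs = [c for c in concept.__subclasses__()
            if not getattr(c, "abstract", False)]
    # Each item replaces sourceNode with a new instance of its concept;
    # the handler body is identical, only the resulting concept differs.
    return {c.__name__: (lambda c=c: c()) for c in subs}
```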

Include transform for

This construct allows you to include all the right transform actions associated with a particular node.

Remove By Condition

This allows you to filter actions in case of a language extension.

Remove Concept

This allows you to remove all the actions associated with a particular concept.

Node Factories


When you have a baseLanguage expression selected, press Ctrl+Space on it and choose (expr). Your expression will be surrounded by parentheses. Node factories allow you to implement this and similar functionality by customizing the instantiation of a new node. In order to create a node factory, you first have to create a new node factories root node. Inside this root you can create node factories for concepts. Each node factory consists of a node creation block with the following parameters: newNode (the created node), sampleNode (the currently substituted node; can be null), enclosing node (the node which will be the parent of newNode), and a model.
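The (expr) example can be sketched as a factory whose new node wraps the node currently being substituted. The names below mirror the parameters listed above but are otherwise hypothetical:

```python
# Sketch of a node-creation block for a hypothetical ParenExpression
# factory: newNode wraps the node being substituted (sampleNode).
# These names mirror the parameters listed above, not real MPS code.

def paren_factory(sample_node):
    new_node = {"concept": "ParenExpression", "expression": None}
    if sample_node is not None:          # sampleNode can be null
        new_node["expression"] = sample_node
    return new_node
```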


To leverage node factories when creating nodes from code, use the "initialized" variants of the "replace with ..." smodel language constructs. See SModel language Modification operations for details.


The diagramming support in MPS allows language designers to provide graphical editors for their concepts. Diagrams typically consist of blocks, represented by boxes, and connectors, represented by lines connecting the boxes. Both blocks and connectors are visualizations of nodes from the underlying model.

Ports (optional) are predefined places on the shapes of the blocks to which connectors may be attached. MPS allows for two types of ports - input and output ones.

Optionally, a palette of available blocks may be displayed on the side of the diagram, so the user can quickly pick the type of box to add to the diagram.

Adding elements

Blocks get added by double-clicking in a free area of the editor. The type of the block is chosen either by activating the particular block type in the palette or by choosing from a pop-up completion menu that shows up after clicking in the free area.

Connectors get created by dragging from an output port of a block to an input port of another or the same block.

Samples

MPS comes with bundled samples of diagramming editors. You can try the componentDependencies or the mindMaps sample projects for initial familiarization with how diagrams can be created.


This document uses the componentDependencies sample for most of the code examples. The sample defines a simple language for expressing dependencies among components in a system (a component set). Use the "Push Editor Hints" option in the pop-up menu to activate the diagramming editor.

Dependencies

In order to be able to define diagramming editors in your language, the language has to have the required dependencies and used languages properly set:


  • jetbrains.mps.lang.editor.diagram - the language for defining diagrams
  • jetbrains.mps.lang.editor.figures (optional) - a language for defining custom visual elements (blocks and connectors)
  • jetbrains.jetpad and jetbrains.mps.lang.editor.diagram.runtime - runtime libraries that handle the diagram rendering and behavior

Diagram definition

Let's start from the concept that should be the root of the diagram. The diagramming editor for that node will contain the diagram editor cell:


Note that the diagram editor cell does not have to be the root of the editor definition. Just like any other editor cell it can be composed with other editor cells into a larger editor definition.

The diagram cell needs its content parameter to hold all the nodes that should become part of the diagram. In our case we pass in all the components (will be rendered as blocks) and their dependencies (will be rendered as connectors). The way these nodes are rendered is defined by their respective editor definitions, as explained later.

Down in the Inspector, element creation handlers can be defined. These get invoked whenever a new visual block is to be created in the diagram. Each handler has several properties to set:

  • name - an arbitrary name to represent the option of creating a new element in the completion menu and in the palette
  • container - a collection of nodes that the newly created node should be added to
  • concept - the concept of the node that gets created through the handler, defaults to the type of the nodes in the container, but allows sub-types to be specified instead
  • on create - a handler that can manipulate the node before it gets added to the model and rendered in the diagram. Typically the name is set to some meaningful value and the position of the block on the screen is saved into the model.

There can be multiple element creation handlers defined.

Similarly, connector creation handlers can be defined for the diagram cell to handle connector creation. On top of the attributes already described for element creation handlers, connector creation handlers have these specific attributes:

  • can create - a concept function returning a boolean value and indicating whether a connector with the specified properties can be legally constructed and added to the diagram.
  • on create - a concept function that handles creation of a new connector.
  • the from and to parameters to these functions specify the source and target nodes (represented by a Block or a Port) for the new connection.
  • the fromId and toId parameters to these functions specify the ids of the source and target nodes (represented by a Block or a Port) for the new connection.

Elements get created when the user double-clicks in the editor. If multiple element types are available, a completion pop-up menu shows up.

Connectors get created when the user drags from the source block or its output port to a target block or its input port.

Palette

The optional palette allows users to pick the type of blocks and connectors to create when double-clicking or dragging in the diagram. The palette is defined for diagram editor cells and, apart from specifying the creation components, allows for visual grouping and separation of the palette items.

Blocks

The concepts for the nodes that want to participate in diagramming as blocks need to provide properties that will preserve useful diagramming qualities, such as x/y coordinates, size, color, title, etc.


Additionally, the nodes should provide input and output ports, which connectors can visually connect to.

The editor will then use the diagram node cell:


The diagram node cell requires a figure to be specified. This is a reference to a figure class that defines the visual layout of the block using the jetpad framework. MPS comes with a set of pre-defined graphical shapes in the jetbrains.mps.lang.editor.figures.library solution, which you can import and use. Each figure may expose several property fields that hold visual characteristics of the figure. All the figure parameters should be specified in the editor definition, most likely by mapping them to the node's properties defined in the concept:

The values for parameters may either be references to the node's properties, or BaseLanguage expressions prepended with the # character. You can use this to refer to the edited node from within the expression.

If the node defines input and output ports, they should also be specified as parameters here so that they get displayed in the diagram. Again, to specify ports you can either refer to the node's properties or use a BaseLanguage expression prepended with the # character.


As all editor cells, diagramming cells can have Action Maps associated with them. This way you can enable the Delete key to delete a block or a connector.

Custom figures

Alternatively, you can define your own figures. These are BaseLanguage classes implementing the jetbrains.jetpad.projectional.view.View interface (or its descendants) and annotated with the @Figure annotation. Use the @FigureParameter annotation to demarcate property fields, such as width, height, etc.

The MovableContentView interface provides additional parameters to the figure class:

By studying jetbrains.mps.lang.editor.figures.library you may get a better understanding of the jetpad library and its inner workings.

Connectors

The nodes that will be represented by connectors do not need to preserve any diagramming properties. As of version 3.1, connectors cannot be visually customized and will always be rendered as a solid black line. This will most likely change in one of the following versions of MPS.
The editor for the node needs to contain a diagram connector cell:

The cell requires a source and a target for the connector. These can either be ports:

or nodes themselves:

The values may again be direct references to node's properties or BaseLanguage expressions prepended with the # character.

Rendering ports

Input and output ports should use the input port and output port editor cells, respectively. The rendering of ports cannot be customized in MPS 3.1, but this will most likely be enabled in later versions.


Use the T key to rotate the ports of a selected block by 90 degrees. This way you can easily switch between the left-to-right and top-to-bottom port positions.

Using implicit ports

In some situations you will not be able to represent ports directly in the model. You'll only want to use blocks and connectors, but ports will have to be somehow derived from the model. This case can easily be supported:

  1. Decide on the representation of ports. Each port will be represented by a unique identifier, such as a number or a string
  2. Have the concept for the blocks define behavior methods that return collections of identifiers - separately for input and output ports
  3. Use the methods to provide the inputPorts and outputPorts parameters to the DiagramNode editor cell
  4. In the connector editor cell refer to the block's node as source and target. Append the requested id after the # symbol
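The four steps can be sketched as follows; all names (uses, in0, Parser) are hypothetical illustrations of id-based ports, not MPS API:

```python
# Sketch of implicit ports: ports are derived identifiers rather than
# model nodes. All names here are hypothetical, not MPS API.

def input_port_ids(component):
    # Step 2: a behavior method deriving input-port ids from the model,
    # here one id per used component.
    return [f"in{i}" for i in range(len(component["uses"]))]

def endpoint(node_name, port_id):
    # Step 4: a connector endpoint refers to the block's node plus the
    # requested id appended after the '#' symbol.
    return f"{node_name}#{port_id}"
```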


Generator User Guide

Introduction

The generator is the part of a language specification that defines the denotational semantics for the concepts in the language.

MPS follows the model-to-model transformation approach. The MPS generator specifies the translation of constructions encoded in the input language into constructions encoded in the output language. The process of model-to-model transformation may involve many intermediate models and ultimately results in an output model, in which all constructions belong to a language whose semantics are already defined elsewhere.

For instance, most concepts in baseLanguage (classes, methods etc) are "machine understandable", therefore baseLanguage is often used as the output language.

The target assets are created by applying model-to-text transformation, which must be supported by the output language. The language aspect that defines model-to-text transformation is called TextGen and is available as a separate tab in the concept's editor. MPS only supports destructive updates of generated assets.

For instance, baseLanguage's TextGen aspect generates *.java files at the following location:
<generator output path>\<model name>\<ClassName>.java
where:
Generator output path - specified in the module that owns the input model (see MPS modules).
Model name - a path segment created by replacing '.' with the file separator in the input model's name.
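The location rule can be expressed as a small path computation. This sketch assumes '/' as the file separator; the function name is illustrative, not part of MPS:

```python
# Computes where baseLanguage's TextGen writes a generated class, per the
# rule above: <output path>/<model name with '.' -> separator>/<Class>.java
# Hypothetical helper, not MPS API.

def generated_file_path(output_path, model_name, class_name, sep="/"):
    model_dir = model_name.replace(".", sep)
    return sep.join([output_path, model_dir, class_name + ".java"])
```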


For a quick how-to document on the MPS generator please check out the Generator Cookbook. For details on how to do Cross-model generation check out the cross-model generation cookbook.

Overview

Generator Module

Unlike any other language aspect, the generator aspect is not a single model. Generator specification can comprise many generator models as well as utility models. A Generator Model contains templates, mapping configurations and other constructions of the generator language.

A Generator Model is distinguished from a regular model by the model stereotype - 'generator' (shown after the model name as <name>@generator).
The screenshot below shows the generator module of the smodel language as an example.

Research bundled languages yourself


You can research the smodel (and any other) language generator by yourself:

  • download MPS (here);
  • create new project (can be empty project);
  • use the Go To -> Go to Language command in the main menu to navigate to the smodel language (its full name is jetbrains.mps.lang.smodel)

Creating a New Generator

A new generator is created by using the New -> Generator command in the language's popup menu.

Technically, it is possible to create more than one generator for one language, but at the time of writing MPS does not provide full support for this feature. Therefore, languages normally have only one (or none) generator. For that reason, the generator's name is not important. Everywhere in the MPS GUI a generator module can be identified by its language name.

When creating a new generator module, MPS will also create the generator model 'main@generator' containing an empty mapping configuration node.

Generator Properties

As a module, generator can depend on other modules, have used languages and used devkits (see Module meta-information).

The generator properties dialog also has two additional properties:

Generating Generator

MPS generator engine (or the Generator language runtime) uses mixed compilation/interpretation mode for transformation execution.

Templates are interpreted and filled at runtime, but all functions in rules, macros, and scripts must be pre-compiled.

(lightbulb) To avoid any confusion, always follow this rule: after any changes made to the generator model, the model must be re-generated (Shift+F9). Even better is to use Ctrl+F9, which will re-generate all modified models in the generator module.

Transformation

The transformation is described by means of templates. Templates are written using the output language and so can be edited with the same cell editor that would normally be used to write 'regular code' in that language. Therefore, without any additional effort the 'template editor' has the same level of tooling support right away - syntax/error highlighting, auto-completion, etc. The templates are then parametrized by referencing into the input model.

The applicability of individual templates is defined by #Generator Rules, which are grouped into #Mapping Configurations.

Mapping Configurations

A Mapping Configuration is a minimal unit, which can form a single generation step. It contains #Generator Rules, defines mapping labels and may include pre- and post-processing scripts.

Generator Rules

Applicability of each transformation is defined by generator rules.
There are six types of generator rules:

  • conditional root rule
  • root mapping rule
  • weaving rule
  • reduction rule
  • pattern rule
  • abandon root rule

Each generator rule consists of a premise and a consequence (except for the abandon root rule, whose consequence is predefined and cannot be specified by the user).

All rules except for the conditional root rule contain a reference to the concept of the input node (or simply the input concept) in their premises. All rule premises also contain an optional condition function.

A rule consequence commonly contains a reference to an external template (i.e. a template declared as a root node in the same or a different model) or a so-called in-line template (conditional root rules and root mapping rules can only reference an external template). There are also several other kinds of consequences.

The following screenshot shows the contents of a generator model and a mapping configuration example.

Macros

The code in templates can be parameterized through macros. The generator language defines three kinds of macros:

  • property macro - computes a property value;
  • reference macro - computes the target (node) of a reference;
  • node macro - is used to control template filling at generation time. There are several versions of node macro - LOOP-macro is an example.

Macros are implemented as a special kind of so-called annotation concept and can wrap property, reference or node cells (depending on the kind of macro) in the template code.

Code wrapping (i.e. the creation of a new macro) is done by pressing Ctrl+Shift+M or by applying the 'Create macro' intention.

The following screenshot shows an example of a property macro.

Macro functions and other parameterization options are edited in the inspector view. A property macro, for instance, requires specifying the value function, which will provide the value of the property at generation time. In the example above, the output class node will get the same name as the input node.
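Conceptually, a property macro substitutes the template's static value with the result of its value function. A hedged Python sketch (hypothetical names, not MPS code):

```python
# Sketch of a property-macro value function: at generation time the value
# written in the template is replaced by the function's result.
# Hypothetical names -- in the example above, the output class simply
# reuses the input node's name.

def apply_property_macro(template_value, value_function, input_node):
    # template_value is the value written in the template code; the
    # macro's value function overrides it using the input node.
    return value_function(input_node, template_value)

# A value function analogous to "take the input node's name":
value = lambda node, template_value: node["name"]
```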

The node parameter in all functions of the generator language always represents the context node to which the transformation is currently being applied (the input node).

Some macros (such as LOOP and SWITCH-macro) can replace the input node with a new one, so that subsequent template code (i.e. code that is wrapped by those macros) will be applied to the new input node.

External Templates

External templates are created as a root node in the generator model.

There are two kinds of external templates in MPS.

One of them is the root template. Any root node created in a generator model is treated as a root template unless the node is part of the generator language (i.e. a mapping configuration is not a root template). A root template is created as a normal root node (via the Create Root Node menu in the model's popup).

The following screenshot shows an example of a root template.

This root template will transform an input node (a Document) into a class (baseLanguage). The root template header is added automatically upon creation, but the concept of the input node is specified by the user.

(lightbulb) It is a good practice to specify the input concept, because this allows MPS to perform static type checking in the code of the macro function.

A Root template (reference) can be used as a consequence in conditional root rules and root mapping rules. ((warning) When used in a conditional root rule, the input node is not available).

The second kind of template is defined in the generator language and its concept name is 'TemplateDeclaration'. It is created via the 'template declaration' action in the Create Root Node menu.

The following screenshot shows an example of template declaration.

The actual template code is 'wrapped' in a template fragment. Any code outside the template fragment is not used in the transformation and serves as context (for example, you can have a whole Java class but export only one of its methods as a template).

Template declaration can have parameters, declared in the header. Parameters are accessible through the #generation context.

Template declaration is used in consequence of weaving, reduction and pattern rules. It is also used as an included template in INCLUDE-macro (only for templates without parameters) or as a callee in CALL-macro.

Template Switches

A template switch is used when two or more alternative transformations are possible in a certain place in template code. In that case, the template code that allows alternatives is wrapped in a SWITCH-macro, which has reference to a Template Switch. Template Switch is created as a root node in the generator model via the Create Root Node menu (this command can be seen in the 'menu' screenshot above).

The following screenshot shows an example of a template switch.


Generator Language Reference

Mapping Configuration

Mapping Configuration is a container for generator rules, mapping label declarations and references to pre- and post-processing scripts. A generator model can contain any number of mapping configurations - all of them will be involved in the generation process, if the owning generator module is involved. Mapping configuration is a minimal generator unit that can be referenced in the mapping priority rules (see Generation Process: Defining the Order of Priorities).

Generator Rule

Generator Rule specifies a transformation of an input node to an output node (except for the conditional root rule which doesn't have an input node). All rules consist of two parts - premise and consequence (except for the abandon root rule which doesn't have a consequence). Any generator rule can be tagged by a mapping label.

All generator rule functions have the following parameters:

  • node - current input node (all except condition-function in conditional root rule)
  • genContext - generation context - allows searching for output nodes, generating unique names, and more (see #generation context)

Generator Rules:

Rule

Description

Premise

Consequence

conditional root rule

Generates a root node in the output model. Applied at most once during a generation step.

condition function (optional) - a missing condition function is equivalent to a function that always returns true.

root template (ref)

root mapping rule

Generates a root node in output model.

concept - applicable concept (concept of input node)
inheritors - if true then the rule is applicable to the specified concept and all its sub-concepts. If false (default) then the sub-concepts are not applicable.
condition function (optional) - see conditional root rule above.
keep input root - if false then the input root node (if it's a root node) will be dropped. If true then input root will be copied to output model.

root template (ref)

weaving rule

Allows inserting additional child nodes into the output model. Weaving rules are processed at the end of a generation micro-step, just before map_src and reference resolution. The rule is applied to each input node of the specified concept. The parent node for insertion should be provided by the context function.
(see #Model Transformation)

concept - same as above
inheritors - same as above
condition function (optional) - same as above

  • external template (ref)
  • weave-each
    context function - computes (parent) output node into which the output node(s) generated by this rule will be inserted.

reduction rule

Transforms input node while this node is being copied to output model.

concept - same as above
inheritors - same as above
condition function (optional) - same as above

  • external template (ref)
  • in-line template
  • in-line switch
  • dismiss top rule
  • abandon input

pattern rule

Transforms input node, which matches pattern.

pattern - pattern expression
condition function (optional) - same as above

  • external template (ref)
  • in-line template
  • dismiss top rule
  • abandon input

abandon root rule

Allows dropping an input root node which otherwise would be copied into the output model.

applicable concept ((warning) including all its sub-concepts)
condition function (optional) - same as above

n/a

drop attributes

Does not propagate the listed attributes to the output model

inheritors - same as above

n/a

Rule Consequences:

Consequence

Usage

Description

root template (ref)

  • conditional root rule
  • (root) mapping rule

Applies root template

external template (ref)

  • weaving rule
  • reduction rule
  • pattern rule

Applies an external template. Parameters should be passed if required; each can be one of:

  • pattern captured variable (starting with # sign)
  • integer or string literal
  • null, true, false
  • query function

weave-each

weaving rule

Applies an external template to a set of input nodes.
A weave-each consequence consists of:

  • foreach function - returns a sequence of input nodes
  • reference on an external template

in-line template

  • reduction rule
  • pattern rule

Applies the template code which is written right here.

in-line switch

reduction rule

Consists of a set of conditional cases and a default case.
Each case specifies a consequence, which can be one of:

  • external template (ref)
  • in-line template
  • dismiss top rule
  • abandon input

dismiss top rule

  • reduction rule
  • pattern rule

Drops all reduction-transformations up to the point where this sequence of transformations was initiated by an attempt to copy the input node to the output model. The input node will be copied 'as is' (unless some other reduction rules are applicable). The user can also specify an error, warning, or information message.

abandon input

  • reduction rule
  • pattern rule

Prevents the input node from being copied into the output model.

Root Template

A Root Template is used in conditional root rules and (root) mapping rules. The generator language doesn't define a specific concept for root templates. Any root node in the output language is treated as a root template when created in a generator model. The generator language only defines a special kind of annotation - the root template header, which is automatically added to each new root template. The root template header is used to specify the expected input concept (i.e. the concept of the input node). MPS uses this setting to perform static type checking in the code of the various macro-functions used in the root template.

External Template

External Template is a concept defined in the generator language. It is used in weaving rules and reduction rules.

In an external template, the user specifies the template name, input concept, parameters and a content node.

The content node can be any node in the output language. The actual template code in an external template is surrounded by template fragment 'tags' (the template fragment is also a special kind of annotation concept). The code outside the template fragment serves as a framework (or context) for the real template code (the template fragment) and is ignored by the generator. In an external template for a weaving rule, the template's context node is required (it is a design-time representation of the rule's context node), while a template for a reduction rule can be just one context-free template fragment. An external template for a reduction rule must contain exactly one template fragment, while a weaving rule's template can contain more than one template fragment.

A template fragment has the following properties (edited in the inspector view):

  • mapping label
  • fragment context - an optional function returning a new context node which will replace the main context node while applying the code in the fragment. (warning) Can only be used in weaving rules.

Mapping Label

Mapping Labels are declared in a mapping configuration, and references to these declarations are used to label generator rules, macros and template fragments. Such labels make it possible to find an output node by its input node (see #generation context).

Properties:

  • name
  • input concept (optional) - expected concept of input node of transformation performed by the tagged rule, macro or template fragment
  • output concept (optional) - expected concept of output node of transformation performed by the tagged rule, macro or template fragment

MPS makes use of the input/output concept settings to perform static type checking in get output ... operations (see #generation context).

Export Label

Export Labels are declared in mapping configuration. Export labels resemble mapping labels in many ways. They add a persistence mechanism that enables access to the labels from other models.

Each export label specifies:

  • name to identify it in the macros
  • input and output concepts indicating the concept before and after the generation phase
  • keeper concept, instance of which will be used for storing the exported information
  • marshal function, to encode the inputNode and the generated outputNode into the keeper
  • an unmarshal function, to decode the information using the original inputNode and the keeper to correctly initialize the outputNode in the referring model

Macro

A macro is a special kind of annotation concept which can be attached to any node in template code. Macros bring a dynamic aspect into the otherwise static template-based model transformation.

Property and reference macros are attached to property and reference cells, while node macros (which come in many variations - LOOP, IF, etc.) are attached to a cell representing the whole node in the cell editor. All properties of a macro are edited using the inspector view.

All macros have the mapping label property - a reference to a mapping label declaration. All macros can be parameterized by various macro-functions, depending on the type of the macro. Every macro-function has at least the following three parameters:

  • node - current input node;
  • genContext - generation context - allows searching for output nodes, generating unique names, and more;
  • operationContext - instance of jetbrains.mps.smodel.IOperationContext interface (used rarely).

Many macros have a mapped node or mapped nodes function. This function computes a new input node - a substitution for the current input node. If the mapped node function returns null, or the mapped nodes function returns an empty sequence, the generator will skip this macro altogether, i.e. no output will be generated in this place.
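The skip semantics can be sketched as follows (hypothetical names, not MPS API):

```python
# Sketch of 'mapped node(s)' semantics: the function substitutes the
# input node; null / an empty sequence makes the generator skip the
# macro entirely. All names are hypothetical, not MPS API.

def expand_macro(mapped_nodes_fn, input_node, template):
    new_inputs = mapped_nodes_fn(input_node)
    if not new_inputs:           # null or empty => no output in this place
        return []
    # Otherwise the wrapped template is applied to each new input node.
    return [template(n) for n in new_inputs]
```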

Macro

Description

Properties (if not mentioned above)

Property macro

Computes the value of a property.

value function:

  • return type - string, boolean or int, depending on the property type.
  • parameters - standard + templateValue - the value in the template code wrapped by the macro.

Reference macro

Computes the referent node in the output model.
Normally executed at the end of a generation micro-step, when the output model (tree) has already been constructed.
Can also be executed earlier if user code tries to obtain the target of the reference.

referent function:

  • return type - node (the type depends on the reference link declaration) or, in many cases, a string identifying the target node (see the note below).
  • parameters - standard + outputNode - the source of the reference link (in the output model).

IF

The wrapped template code is applied only if the condition is true. Otherwise the template code is ignored and the 'alternative consequence' (if any) is applied.

condition function
alternative consequence (optional) - any of:

  • external template (ref)
  • in-line template
  • abandon input
  • dismiss top rule

LOOP

Computes new input nodes and applies the wrapped template to each of them.

mapped nodes function

INCLUDE

The wrapped template code is ignored (it only serves as an anchor for the INCLUDE macro); a reusable external template is used instead.

mapped node function (optional)
include template - a reference to a reusable external template

CALL

Invokes a template and replaces the wrapped template code with the result of the template invocation. Supports templates with parameters.

mapped node function (optional)
call template - a reference to a reusable external template

argument - one of

  • pattern captured variable
  • integer or string literal
  • null, true, false
  • query function

SWITCH

Provides a way to apply one of many alternative transformations in the given place in template code.
The wrapped template code is applied if none of the switch cases is applicable and no default consequence is specified in the #template switch.

mapped node function (optional)
template switch - a reference to a template switch

COPY-SRC

Copies the input node to the output model. The wrapped template code is ignored.

mapped node function - computes the input node to be copied.

COPY-SRCL

Copies input nodes to the output model. The wrapped template code is ignored.
Can be used only for children with multiple aggregation cardinality.

mapped nodes function - computes the input nodes to be copied.

MAP-SRC

A multifunctional macro that can be used for:

  • marking template code with a mapping label;
  • replacing the current input node with a new one;
  • performing a non-template based transformation;
  • accessing the output node for some reason.

The MAP-SRC macro is executed at the end of a generator micro-step - after all node- and property-macros, but before reference-macros.

mapped node function (optional)
mapping func function (optional) - performs a non-template based transformation.
If defined, the wrapped template code is ignored.
Parameters: standard + parentOutputNode - the parent node in the output model.
post-processing function (optional) - gives access to the output node.
Parameters: standard + outputNode

MAP-SRCL

Same as MAP-SRC, but can handle many new input nodes (similar to the LOOP macro).

mapped nodes function
mapping func function (optional)
post-processing function (optional)

WEAVE

Allows inserting additional child nodes into the output model, in a way similar to how Weaving rules are used. The node wrapped in the WEAVE macro (or provided by the use input function) will have the supplied template applied to it, and the generated nodes will be inserted into the macro's context.

use input - a function returning a collection of nodes to apply the macro to
weave - a reference to a template to weave into the nodes supplied as the input

EXPORT

Saves a node for cross-model referencing, so it can be retrieved when generating other models.

 

Note


Reference resolving by identifier is only supported in BaseLanguage.
The identifier string for classes and class constructors may require (if the class is not in the same output model) the package name in square brackets preceding the class name:
[package.name]ClassName

Template Switch

A template switch is used in tandem with the SWITCH macro (the TemplateSwitchMacro concept since MPS 3.1). A single template switch can be reused by many different SWITCH macros. A template switch consists of a set of cases and one default case. Each switch case is a reduction rule, i.e. a template switch actually contains a list of reduction rules (see #reduction rule).

The default case consequence can be one of:

  • external template (ref)
  • in-line template
  • abandon input
  • dismiss top rule

The default consequence can also be omitted; in that case the template code surrounded by the corresponding SWITCH macro is applied.

A template switch can inherit reduction rules from other switches via the extends property. When the generator executes a SWITCH macro, it tries to find the most specific template switch available in scope. Therefore, the template switch actually executed is not necessarily the one defined in the template switch property of the SWITCH macro.

In the null-input message property the user can specify an error, warning or info message, which will be shown in the MPS messages view when the mapped node function in the SWITCH macro returns null (by default no message is shown and the macro is skipped altogether).

A template switch can accept parameters, the same way as template declarations do. Using a parameterized switch requires that arguments be supplied in the SWITCH macro. The TemplateSwitchMacro concept supports switches both with and without arguments.


The old macro concept (TemplateSwitch) has been deprecated since 3.1. Note that, visually, both macros look the same. There's a migration script to replace old macro instances with the new one; you need to invoke the script manually to update the concepts.

Generation Context (operations)

The generation context (the genContext parameter in macro- and rule-functions) allows finding nodes in the output model, generating unique names, and provides other useful functionality.

The generation context can be used not only in generator models but also in utility models - as a variable of type gencontext.

Operations of genContext are invoked using the familiar dot-notation: genContext.operation

Finding Output Node

get output <mapping label>

Returns the output node generated by a labeled conditional root rule.
Issues an error if there is more than one matching output node.

get output <mapping label> for ( <input node> )

Returns the output node generated from the input node by a labeled generator rule, macro or template fragment.
Issues an error if there is more than one matching output node.
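For example, a referent function of a reference-macro could look up the output counterpart of an input declaration like this (a sketch; the mapping label main_class and the concepts involved are hypothetical):

```
referent:
  (node, genContext, operationContext, outputNode)->node<ClassConcept> {
    // find the output node that the labeled rule generated from the input declaration
    genContext.get output main_class for (node.declaration);
  }
```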

pick output <mapping label> for ( <input node> )

(warning) Only used in the context of the referent function of a reference-macro, and only if the required output node is the target of the reference being resolved by that reference-macro.
Returns the output node generated from the input node by a labeled generator rule, macro or template fragment. The difference from the previous operation is that this one can automatically resolve the many-output-nodes conflict - it picks the output node that is visible in the given context (see search scope).

get output list <mapping label> for ( <input node> )

Returns a list of output nodes generated from the input node by a labeled generator rule, macro or template fragment.

get copied output for ( <input node> )

Returns the output node that was created by copying the input node. If, during the copying, the input node was reduced but the concept of the output node stayed the same (i.e. it wasn't reduced into something totally different), this is still considered 'copying'.
Issues an error if there is more than one matching output node.

Generating a Unique Name

unique name from <base name> in context <node>

The uniqueness is guaranteed throughout the whole generation session.
(warning) Clashes with names that weren't generated using this service are still possible.

The context node is optional, though we recommend specifying it to guarantee generation stability. If specified, MPS tries its best to generate names 'contained' in a scope (usually a root node). Then, when names are re-calculated (due to changes in the input model or in the generator model), this won't affect other names outside the scope.
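A sketch of requesting a unique name (the base name and the way the context root is obtained are hypothetical):

```
// the produced name is unique within the whole generation session;
// scoping it to the containing root keeps re-generation stable
string fieldName = genContext.unique name from "cachedValue" in context node.containing root;
```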

Template Parameters

#patternvar

The value of a captured pattern variable.
(warning) Available only in a rule consequence.

param

The value of a template parameter.
(warning) Available only in an external template.

Getting Contextual Info

inputModel

Current input model

originalModel

Original input model

outputModel

Current output model

invocation context

The operation context (the jetbrains.mps.smodel.IOperationContext java interface) associated with the module that owns the original input model.

scope

Scope - jetbrains.mps.smodel.IScope java interface

templateNode

The template code surrounded by the macro.
Only used in macro-functions.

get prev input <mapping label>

Returns the input node that was used for the enclosing template code surrounded by the labeled macro.
Only used in macro-functions.

Transferring User Data

During generation MPS maintains three maps of user objects, each with a different life span:

  • session objects - kept throughout the whole generation session;
  • step objects - kept throughout a generation step;
  • transient objects - live only during a micro-step.

Developers can access the user object maps using array (square brackets) notation.

The key can be any object (java.lang.Object).
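A sketch of the array notation (the key string and the stored value are hypothetical):

```
// store a value for the rest of the generation session
genContext.session object["collected-names"] = names;

// read it back in a later step; any java.lang.Object can serve as the key
list<string> names = (list<string>) genContext.session object["collected-names"];
```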

Binding user data with a particular node


The session and step objects cannot be used to pass data associated with a particular input node across steps and micro-steps, because neither an input node nor its id can serve as a key (output nodes always get different ids).
To pass such data, use the methods putUserObject, getUserObject and removeUserObject defined in the class jetbrains.mps.smodel.SNode.
The data will be transferred to all output copies of the input node. The data will also be transferred to the output node if a slight reduction (i.e. one that doesn't change the node concept) took place while copying the node.

Logging

Creates a message in the MPS message view. If the node parameter is specified, clicking on the message navigates to that node. In the case of an error message, MPS also outputs some additional diagnostic information.

Utilities (Re-usable Code)

If you have duplicated code (in rules, macros, etc.) and want to, say, extract it into re-usable static methods, you must create the containing class in a separate, non-generator model.

If you create a utility class in a generator model (i.e. in a model with the 'generator' stereotype), it will be treated as an (unused) root template and no code will be generated from it.

Mapping Script

A mapping script is user code which is executed either before a model transformation (a pre-processing script) or after it (a post-processing script). It should be referenced from a #Mapping Configuration in order to be invoked as part of its generation step. Mapping scripts provide the ability to perform non-template based model transformations.

Pre-processing scripts are also commonly used for collecting certain information from the input model that can later be used in the course of the template-based transformation. The information collected by a script is saved as a transient, step or session object (see generation context).


Properties:

script kind

  • pre-process input model - the script is executed at the beginning of a generation step, before the template-based transformation;
  • post-process output model - the script is executed at the end of a generation step, after the template-based transformation.

modifies model

Only available if script kind = pre-process input model.
If set to true and the input model is the original input model, MPS creates a transient input model before applying the script.
If set to false but the script tries to modify the input model, MPS issues an error.

Code context:

model

Current model

genContext

Generation context to access transient/session or step objects.

invocation context

The operation context (the jetbrains.mps.smodel.IOperationContext java interface) associated with the module that owns the original input model.


The Generator Algorithm

The process of generating target assets from an input model (a generation session) includes 5 stages:

  • Defining all generators that must be involved
  • Defining the order of priorities of transformations
  • Step-by-step model transformation
  • Generating text and saving it to a file (for each root in output model)
  • Post-processing assets: compiling, etc.

We will discuss the first three stages of this process in detail.

Defining the Generators Involved

To define the required generators, MPS examines the input model and determines which languages are used in it. While doing this, MPS doesn't make use of the 'Used Languages' list specified in the model properties dialog. Instead, MPS examines each node in the model and gathers the languages that are actually used.

From each 'used language' MPS obtains its generator module. If there is more than one generator module in a language, MPS chooses the first one (multiple generators for the same language are not fully supported in the current version of MPS). If any generator in this list depends on other generators (as specified in the 'depends on generators' property), those generators are added to the list as well.

After MPS obtains the initial list of generators, it begins to scan the generators' templates in order to determine which languages will be used in intermediate (transient) models. The languages detected this way are handled in the same manner as the languages used in the original input model. This procedure is repeated until no more 'used languages' can be detected.

Explicit Engagement

In some rare cases, MPS is unable to detect the language whose generator must be involved in the model transformation. This may happen if that language is not used in the input model or in the template code of other (detected) languages. In this case, you can explicitly specify the generator engagement via the Languages Engaged on Generation section in the input model's properties dialog (Advanced tab).

Dependency scope/kind - 'Generation Target' and 'Design'.

'Generation Target' replaces the 'Extends' relation between two languages (L2 extends L1) that was previously needed to specify that the generator of L2 generates into L1 and thus needs its runtime dependencies. Now, when a language (L2) is translated into another language (L1), and L1 has runtime dependencies, use L1 as the 'Generation Target' of L2. Though this approach is much better than 'Extends', it's still not perfect, as it is rather an attribute of a generator than of a language. Once generators become fully independent of their languages, we might need to revisit this approach (different generators may target different languages, thus the target has to be specified for a generator, not the source language).

A 'Design' dependency replaces 'Extends' between two generators. Use it when you need to reference another generator to specify priority rules (though consider whether you indeed need these priorities; see the changes in the Generation Plan below).

Defining the Order of Priorities

As we discussed earlier, a generator module contains generator models, and generator models contain mapping configurations. A mapping configuration (mapping for short) is a set of generator rules. It is often required that some mappings be applied before (or not later than, or together with) some other mappings. The language developer specifies such relationships between mappings by means of mapping constraints in the generator properties dialog (see also #Mapping Priorities and the Dividing Generation Process into Steps demo).

After MPS builds the list of involved generators, it divides all mappings into groups according to the specified mapping priorities. All mappings for which no priority has been specified fall into the last (lowest-priority) group.

You can check the mapping partitioning for any (input) model by selecting the Show Generation Plan action in the model's popup menu.
The result of the partitioning will be shown in the MPS Output View.

Optimized Generation Plan

When planning the generation phase, MPS prefers to keep every generator as isolated as possible. As a result, you'll see many relatively small generation steps that are fast to process. Of course, generators forced together by priority rules still run in the same step. Handling several unrelated generators in the same generation step (as MPS did prior to 3.2) proved inefficient, since it imposed a lot of unnecessary checking of rule applicability across the other generators of the same step. With in-place transformation in 3.2 and later, the performance penalty for each extra generation step is negligible.

Ignored priority rules

In addition to conflicting priorities, there are rules that get ignored during generation plan construction. This can happen if the input model doesn't contain any concept of a language participating in a priority rule. Since the language isn't actually used, the rule is ignored, and the 'Show Generation Plan' action reports such rules along with conflicting ones. Previous MPS versions used to include generators of otherwise unused languages in the generation process; now these generators get no chance to jump in.

Implicit priorities

Target languages (languages produced by templates) are treated as implicit 'not later than' rules. You don't need to specify these priorities manually.


This implicit priority rule between two generators is ignored if an explicit priority rule is defined for the language that generates into the other language.

Model Transformation

Each group of mappings is applied in a separate generation step. The entire generation session consists of as many generation steps as there were mapping groups formed during the mapping partitioning. The generation step includes three phases:

  • Executing pre-mapping scripts
  • Template-based model transformation
  • Executing post-mapping scripts

The template-based model transformation phase consists of one or more micro-steps. A micro-step is a single-pass transformation of the input model into a transient (output) model.

While executing a micro-step, MPS follows this procedure:

  1. Apply conditional root rules (only once - on the 1st micro-step)
  2. Apply root mapping rules
  3. Copy input roots for which no explicit root mapping is specified (this can be overridden by means of the 'keep input root' option in root mapping rules and by the 'abandon root' rules)
  4. Apply weaving rules
  5. Apply delayed mappings (from MAP_SRC macro)
  6. Revalidate references in the output model (all reference-macro are executed here)

There is no separate stage for the application of reduction and pattern rules. Instead, every time MPS copies an input node into the output model, it attempts to find an applicable reduction (or pattern) rule. MPS performs node copying when it is either copying a root node or executing a COPY_SRC macro. Therefore, reduction can occur at any stage of the model transformation.

MPS uses the same rule set (mapping group) for all micro-steps within a generation step. After a micro-step is completed, if some transformations took place during its execution, MPS starts the next micro-step and passes the output model of the previous micro-step to it as input. The whole generation step is considered completed when no transformations occur during the execution of the last micro-step, that is, when there are no more rules in the current rule set applicable to nodes in the current input model.

The next generation step (if any) receives the output model of the previous generation step as its input.

Intermediate (transient) models that are the output/input of generation steps and micro-steps are normally destroyed immediately after their transformation into the next model is completed.
To keep transient models, enable the following option:
Settings -> Generator Settings -> Save transient models on generation


In-place transformation

Generators for the languages employed in a model are applied sequentially (aka the Generation Plan). Effectively, each generation step modifies just a fraction of the original model, while the rest of the model is copied as-is. With huge models and numerous generation steps this approach proves quite inefficient. In-place transformation addresses this with the well-known 'delta' approach, where only the changes are collected and applied to the original model to alter it in place.

In version 3.1, in-place transformation is an option, enabled by default and configurable through Project settings -> Generator. Clients are encouraged to fix any templates that fail in in-place mode, as in-place generation is likely to become the only generation mode later down the road.

Use of in-place transformation brings certain limitations and might even break patterns that used to work in previous MPS versions:

  • Most notably and importantly, there is no output model at the moment when a rule's queries/conditions are executed. Consulting the output model during the transformation process is a bad practice, and in-place transformation enforces removing it. Access to the output model from a transformation rule implies a certain order of execution, thus effectively limiting the set of optimizations the MPS generator can apply. The contract of a transformation rule - a complete input and the fraction of the output that this particular rule is responsible for - is more rigorous than "a complete input model" and "an output model in some uncertain state".
  • The output model is indeed there for weaving rules, as their purpose is to deal with output nodes.
  • The process of delta building requires the generator to know about the nodes being added to the model. Thus, any implicit changes to the output model that used to work will fail with in-place generation enabled. As an example, consider MAP-SRC with a post-process function which replaces the node with a new one: postprocess: if (node.someCondition()) node.replace with new(AnotherNode);. The generator records the new node produced by MAP-SRC, schedules it for addition, and delays post-processing. Once post-processing is over, there's no way for the generator to figure out that the node it tracks as an 'addition to the output model' is no longer valid and that another node should be used instead. Of course, the post-process function can safely alter anything below the node produced by MAP-SRC, but an attempt to step out of the sandbox of that node leads to an error.
  • The presence of active weaving rules prevents in-place transformation, as these rules require both input and output models.

Generation trace

Much like in-place transformation, the updated generation trace is inspired by the idea of tracking actual changes only. It is now much less demanding, as only the transformed nodes are tracked. Besides, various presentation options are available to give different perspectives on the transformation process.

Support for non-reflective queries

Note: This is just a preview of incomplete functionality in 3.1

Queries in the generator end up in the QueriesGenerated class, with methods implementing the individual queries. These methods are invoked through Java reflection. This approach has certain limitations - extra effort is required to ensure consistency of method names and arguments between the generated code and the hand-written invocation code. A provisional API and a generation option have been added to expose the functionality of QueriesGenerated through a set of interfaces. With that, the generator consults generated queries through regular Java calls, with compile-time checks of arguments, leaving the naming and arguments of particular generated queries as an implementation detail of QueriesGenerated and its generator.



Generating from Ant

The build language uses the Ant generate task under the hood to transform models during the build process. This task exposes parameters familiar from the Generator settings page:

  • strict generation mode
  • parallel generation with configurable number of threads
  • option to enable in-place transformation
  • option to control generation warnings/errors.

These options are also exposed in the build language through the BuildMps_GeneratorOptions concept, so that build scripts have more control over the process.

Examples

If you're feeling like it's time for more practical experience, check out the generator demos.
The demos contain examples of usage of all the concepts discussed above.



Defining A Typesystem For Your Language


This page describes the MPS type-system in great detail. If you would prefer a more lightweight introduction to defining your first type-system rules, consider checking out the Type-system cookbook.

If you would like to get familiar with the ways you can use the type-system from your code, you may also look at the Using the type-system chapter. 

What is a typesystem?

A typesystem is the part of a language definition that assigns types to the nodes of models written in the language. The typesystem language is also used to check certain constraints on nodes and their types. Information about node types is useful for:

  • finding type errors
  • checking conditions on nodes' types during generation to apply only appropriate generator rules
  • providing information required for certain refactorings (e.g. for the "extract variable" refactoring)
  • and more

Types

Any MPS node may serve as a type. To enable MPS to assign types to nodes of your language, you should create a typesystem language aspect. The typesystem model for your language is written in the typesystem language.

Inference Rules

The main concept of the typesystem language is the inference rule. An inference rule for a certain concept is mainly responsible for computing the type of instances of that concept.

An inference rule consists of a condition and a body. The condition determines whether the rule is applicable to a certain node and may be of two kinds: a concept reference or a pattern. A rule with a concept-reference condition is applicable to every instance of that concept and its subconcepts. A rule with a pattern is applicable to nodes that match the pattern. A node matches a pattern if it has the same properties and references as the pattern, and if its children match the pattern's children. A pattern may also contain several variables, which match everything.

The body of an inference rule is a list of statements which are executed when the rule is applied to a node. The main kinds of statements in the typesystem language are those used for creating equations and inequations between types.

Inference Methods

To avoid duplication, one may want to extract identical parts of the code of several inference rules into a method. An inference method is just a simple BaseLanguage method marked with the annotation "@InferenceMethod". There are several language constructs you may use only inside inference rules, replacement rules and inference methods: typeof expressions, equations and inequations, when concrete statements, type variable declarations and type variable references, and invocations of inference methods. This restriction prevents such constructs from being used in arbitrary methods, which may be called in arbitrary contexts, possibly outside of type checking.

Overriding

A type-system rule of a sub-concept can override the rules defined on its super-concepts. If the overrides flag is set to false, the rule is added to the list of rules applied to a concept together with the rules defined for the super-concepts; if the flag is set to true, the overriding rule replaces the rules of the super-concepts in the rule engine, so they take no effect. This applies to both Inference and NonTypeSystem rules.

Equations And Inequations

The main process performed by the type-system engine is solving equations and inequations among types. A language designer tells the engine which equations to solve by writing them in inference rules. To add an equation to the engine, the following statement is used:

expr1 :==: expr2, where expr1 and expr2 are expressions, which evaluate to a node.

Consider the following use case. You want to say that the type of a local variable reference is equal to the type of the variable declaration it points to. So you write typeof(varRef) :==: typeof(varRef.localVariableDeclaration), and that's all. The typesystem engine will solve such equations automatically.
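Such an equation typically lives in the body of an inference rule. A sketch for a hypothetical LocalVariableReference concept (the exact surface syntax of the rule is approximate):

```
rule typeof_LocalVariableReference {
  applicable for concept = LocalVariableReference as varRef
  do {
    // the type of the reference equals the type of the declaration it points to
    typeof(varRef) :==: typeof(varRef.localVariableDeclaration);
  }
}
```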

The above-mentioned expression typeof(expr) (where expr must evaluate to an MPS node) is a language construct which returns a so-called type variable that serves as the type of that node. Type variables gradually become concrete types during the process of equation solving.

In certain situations you want to say that a certain type doesn't have to exactly equal another type, but may also be a subtype or a supertype of it. For instance, the type of an actual parameter of a method call does not necessarily have to be the same as the type of the method's formal parameter - it can be its subtype. For example, a method which requires an Object as a parameter may also be applied to a String.

To express such a constraint, you may use an inequation instead of an equation. An inequation expresses the fact that a certain type should be a subtype of another type. It is written as follows: expr1 :<=: expr2.

Weak And Strong Subtyping

A subtyping relationship is useful in several different cases. You want the type of an actual parameter to be a subtype of the formal parameter type, or the type of an assigned value to be a subtype of the variable's declared type; in method calls or field access operations you want the type of the operand to be a subtype of the method's declaring class.

Sometimes such demands are somewhat controversial. Consider, for instance, two types, int and Integer, which you want to be interchangeable when you pass parameters of such types to a method: if a method is doSomething(int i), it is legal to call doSomething(1) as well as doSomething(new Integer(1)). But when these types are used as the type of the operand of, say, a method call, the situation is different: you shouldn't be able to call a method on an expression of type int - on an integer constant, for example. So we have to conclude that in one sense int and Integer are subtypes of one another, while in another sense they are not.

To resolve this controversy, we introduce two subtyping relationships: weak and strong subtyping. Weak subtyping follows from strong subtyping: if a node is a strong subtype of another node, then it is also its weak subtype.

For our example we can then say that int and Integer are weak subtypes of each other, but not strong subtypes. Assignment and parameter passing require only weak subtyping; method calls require strong subtyping.

When you create an inequation in your typesystem, you may choose it to be a strong or a weak inequation. Also, subtyping rules - those which state a subtyping relationship (see below) - can be either weak or strong. A weak inequation looks like :<=:, a strong inequation looks like :<<=:.

In most cases you want to state strong subtyping and to check weak subtyping. If you are not sure which subtyping you need, use the weak one for inequations and the strong one for subtyping rules.

Subtyping Rules

When the typesystem engine solves inequations, it requires information about whether a type is a subtype of another type. But how does the typesystem engine know about that? It uses subtyping rules. Subtyping rules are used to express subtyping relationship between types. In fact, a subtyping rule is a function which, given a type, returns its immediate supertypes.

A subtyping rule consists of a condition (which can be either a concept reference or a pattern) and a body, which is a list of statements that compute and return a node or a list of nodes that are immediate supertypes of the given node. When checking whether some type A is a supertype of another type B, the typesystem engine applies subtyping rules to B and computes its immediate supertypes, then applies subtyping rules to those supertypes and so on. If type A is among the computed supertypes of type B, the answer is "yes".
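The search the engine performs can be sketched in ordinary Python (an illustrative sketch, not MPS API). Here `RULES` plays the role of the subtyping rules, mapping a type to its immediate supertypes, and reuses the type hierarchy of the sample Expressions language mentioned later in this guide:

```python
# Hypothetical sketch: decide "is b a subtype of a?" by applying rules that
# return immediate supertypes, then applying rules to those results, and so on.
def is_subtype(b, a, immediate_supertypes):
    if b == a:                      # non-strict: every type is a subtype of itself
        return True
    seen, frontier = {b}, [b]
    while frontier:
        t = frontier.pop()
        for sup in immediate_supertypes.get(t, ()):
            if sup == a:
                return True
            if sup not in seen:
                seen.add(sup)       # guard against revisiting the same type
                frontier.append(sup)
    return False

# Toy "subtyping rules": Int <: Long <: Float <: Number <: Element
RULES = {"Int": ["Long"], "Long": ["Float"], "Float": ["Number"], "Number": ["Element"]}
```

If type A appears among the transitively computed supertypes of B, the answer is "yes", exactly as described above.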

By default, subtyping stated by subtyping rules is a strong one. If you want to state only weak subtyping, set "is weak" property of a rule to "true".

Comparison Inequations And Comparison Rules

Suppose you want to write a rule for EqualsExpression (the == operator in Java, BaseLanguage and some other languages): you want the left and right operands of an EqualsExpression to be comparable, that is, the type of the left operand should be a (non-strict) subtype of the type of the right operand, or vice versa. To express this, you write a comparison inequation of the form expr1 :~: expr2, where expr1 and expr2 are expressions representing types. Such an inequation is fulfilled if expr1 is a subtype of expr2 (expr1 <: expr2), or expr2 <: expr1.

Now consider that, say, any two Java interface types should also be comparable, even if they are not subtypes of one another. That is because one can always write a class that implements both interfaces, so variables of the two interface types can hold the same node, and a variable of one interface type can be cast to any other interface type. Hence equality, cast, or instanceof expressions with both types being interface types should be legal (and, for example, in Java they are).

To state such a comparability, which does not stem from subtyping relationships, you should use comparison rules. A comparison rule consists of two conditions for the two applicable types and a body which returns true if the types are comparable or false if they are not.

Here's the comparison rule for interface types:

comparison rule interfaces_are_comparable

applicable for  concept = ClassifierType as classifierType1 , concept = ClassifierType as classifierType2

rule {
  if (classifierType1.classifier.isInstanceOf(Interface) && classifierType2.classifier.isInstanceOf(Interface)) {
    return true;
  } else {
    return false;
  }
}
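The combined check the engine performs (subtyping in either direction, plus any applicable comparison rules) can be sketched in Python. This is an illustrative sketch, not MPS API; the names are hypothetical:

```python
# Hypothetical sketch: two types are comparable if one is a subtype of the
# other (the :~: inequation), OR some comparison rule applies, e.g. the
# "interfaces are comparable" rule shown above.
def comparable(t1, t2, is_subtype, comparison_rules):
    if is_subtype(t1, t2) or is_subtype(t2, t1):
        return True
    return any(rule(t1, t2) for rule in comparison_rules)

# Stand-in for the interface rule above: both types are interface types.
interfaces = {"Serializable", "Comparable"}
comparison_rules = [lambda a, b: a in interfaces and b in interfaces]

same = lambda a, b: a == b   # trivial subtyping relation, for the demo only
```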

Quotations

A quotation is a language construct that lets you easily create a node with a required structure. Of course, you can create a node using the smodelLanguage and then populate it with appropriate children, properties and references by hand, using the same smodelLanguage. However, there's a simpler - and more visual - way to accomplish this.

A quotation is an expression whose value is the MPS node written inside the quotation. Think of a quotation as a "node literal", a construct similar to numeric constants and string literals: you write a literal when you statically know the value you mean. So inside a quotation you don't write an expression that evaluates to a node; you write the node itself. For instance, the expression 2 + 3 evaluates to 5, while the expression < 2 + 3 > (angled braces being quotation braces) evaluates to a PlusExpression node whose leftOperand is the IntegerConstant 2 and whose rightOperand is the IntegerConstant 3.
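The contrast between a node and the value it would evaluate to can be illustrated in Python with plain dataclasses (an analogy only; these are not MPS classes):

```python
# Illustrative analogy: a quotation is to nodes what a literal is to values.
# The node denoted by the quotation < 2 + 3 > is the expression tree itself,
# not the number 5 it would evaluate to.
from dataclasses import dataclass

@dataclass
class IntegerConstant:
    value: int

@dataclass
class PlusExpression:
    leftOperand: object
    rightOperand: object

# what you would otherwise build by hand with the smodel language
quoted = PlusExpression(IntegerConstant(2), IntegerConstant(3))
```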

(See the Quotations documentation for more details on quotations, anti quotations and light quotations)

Antiquotations

Since a quotation is a literal, its value should be known statically. In cases where some parts of your node (i.e. children, referents or properties) are known only dynamically, i.e. can only be evaluated at runtime and are not known at design time, you can't use a plain quotation to create a node with such parts.

The good news, however, is that if you know most of a node statically and want to replace only a few parts with dynamically evaluated nodes, you can use antiquotations. An antiquotation can be of 4 kinds: child, reference, property and list antiquotation. Each contains an expression, which is evaluated dynamically, and its result replaces a part of the quoted node. Child and reference antiquotations evaluate to a node, a property antiquotation evaluates to a string, and a list antiquotation evaluates to a list of nodes.

For instance, suppose you want to create a ClassifierType for the class ArrayList, but its type parameter is known only dynamically, for instance by calling a method, say, computeMyTypeParameter().

Thus, you write the following expression: < ArrayList < %( computeMyTypeParameter() )% > >. The construction %(...)% here is a node antiquotation.

You may also antiquotate reference targets and property values, with ^(...)^ and $(...)$, respectively; or a list of children of one role, using *(...)*.

a) If you want to replace a node somewhere inside a quoted node with a node evaluated by an expression, you use a node antiquotation, that is %( )%. As you may guess, there is no sense in replacing the whole quoted node with an antiquotation, because in that case you could instead write the expression directly in your program.

So node antiquotations are used to replace children, grandchildren, great-grandchildren and other descendants of a quoted node. Thus, an expression inside of antiquotation should return a node. To write such an antiquotation, position your caret on a cell for a child and type "%".

b) If you want to replace a target of a reference from somewhere inside a quoted node with a node evaluated by an expression, you use reference antiquotation, that is ^(...)^ . To write such an antiquotation, position your caret on a cell for a referent and type "^".

c) If you want to replace a child (or some more deeply located descendant) of a multiple-cardinality role, and for that reason may want to replace it not with a single node but with several, use a child list (simply "list", for brevity) antiquotation, *( )*. An expression inside a list antiquotation should return a list of nodes, that is, a value of type nlist<..> or a compatible type (e.g. list<node<..>> is fine, too, as well as some others). To write such an antiquotation, position your caret on a cell for a child inside a child collection and type "*". You cannot use it on an empty child collection, so before you press "*" you have to enter a single child inside it.

d) If you want to replace a property value of a quoted node with a dynamically calculated value, use a property antiquotation, $( )$. An expression inside a property antiquotation should return a string, which will become the value of the antiquoted property of the quoted node. To write such an antiquotation, position your caret on a cell for a property and type "$".
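The ArrayList example above can be mimicked in Python, treating the quoted node as a statically known shell with one dynamically filled hole (an illustrative sketch; computeMyTypeParameter is the stand-in name from the text, the dict shape is hypothetical):

```python
# Hypothetical sketch: an antiquotation is a "hole" in an otherwise static
# node, filled with a dynamically computed part at runtime.
def classifier_type(classifier, type_parameter=None):
    return {"concept": "ClassifierType",
            "classifier": classifier,
            "typeParameter": type_parameter}

def computeMyTypeParameter():
    # decided only at runtime; design time does not know this value
    return classifier_type("String")

# like < ArrayList < %( computeMyTypeParameter() )% > >:
# the ArrayList shell is static, the type parameter is the antiquotation
node = classifier_type("ArrayList", computeMyTypeParameter())
```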


Examples Of Inference Rules

Here are the simplest basic use cases of an inference rule:

  • to assign the same type to all instances of a concept (useful mainly for literals):
    applicable to concept = StringLiteral as nodeToCheck
    {
      typeof (nodeToCheck) :==: < String >
    }
    
  • to equate a type of a declaration and the references to it (for example, for variables and their usages):
    applicable to concept = VariableReference as nodeToCheck
    {
      typeof (nodeToCheck) :==: typeof (nodeToCheck.variableDeclaration)
    }
    
  • to give a type to a node with a type annotation (for example, type of a variable declaration):
    applicable to concept = VariableDeclaration as nodeToCheck
    {
      typeof (nodeToCheck) :==: nodeToCheck.type
    }
    
  • to establish a restriction for a type of a certain node: useful for actual parameters of a method, an initializer of a type variable, the right-hand part of an assignment, etc.
    applicable to concept = AssignmentExpression as nodeToCheck
    {
      typeof (nodeToCheck.rValue) :<=: typeof (nodeToCheck.lValue)
    }
    

Type Variables

Inside the typesystem engine during type evaluation, a type may be either a concrete type (a node) or a so-called type variable. Also, it may be a node which contains some type variables as its children or further descendants. A type variable represents an undefined type, which may then become a concrete type, as a result of solving equations that contain this type variable.

Type variables appear at runtime mainly as a result of the "typeof" operation, but you can also create them manually if you want to. There's a statement called TypeVarDeclaration in the typesystem language for that. You write it as "var T", "var X" or "var V", i.e. "var" followed by the name of a type variable. You may then use the variable, for example, in antiquotations to create a node with type variables inside.

Example: an inference rule for "for each" loop. A "for each" loop in Java consists of a loop body, an iterable to iterate over, and a variable into which the next member of an iterable is assigned before the next iteration. An iterable should be either an instance of a subclass of the Iterable interface, or an array. To simplify the example, we don't consider the case of the iterable being an array. Therefore, we need to express the following: an iterable's type should be a subtype of an Iterable of something, and the variable's type should be a supertype of that very something. For instance, you can write the following:

for (String s : new ArrayList<String>(...)) {
  ...
}

or the following:

for (Object o : new ArrayList<String>(...)) {
  ...
}

Iterables in both examples above have the type ArrayList<String>, which is a subtype of Iterable<String>. The variables have types String and Object, respectively, both of which are supertypes of String.

As we see, an iterable's type should be a subtype of an Iterable of something, and the variable's type should be a supertype of that very something. But how to say "that very something" in the typesystem language? The answer is, it's a type variable that we use to express the link between the type of an iterable and the type of a variable. So we write the following inference rule:

applicable for concept = ForeachStatement as nodeToCheck
{
  var T ;
  typeof ( nodeToCheck . iterable ) :<=:  Iterable < %( T )% >;
  typeof ( nodeToCheck . variable ) :>=:  T ;
}
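How the engine resolves this rule can be sketched in Python: introduce a variable T, bind it by matching the iterable's type against the Iterable<%(T)%> pattern, then check the variable's type against the binding. This is an illustrative sketch with simplifying assumptions (the iterable's type is assumed to already be an Iterable<X>, skipping the subtype search); none of the names are MPS API:

```python
class TypeVar:
    """An undefined type that becomes concrete by solving constraints."""
    def __init__(self):
        self.binding = None

def check_foreach(iterable_type, variable_type, is_supertype):
    T = TypeVar()                       # var T;
    # typeof(iterable) :<=: Iterable<%(T)%>
    # (simplified: assume iterable_type is already ("Iterable", X) and bind T to X)
    kind, element = iterable_type
    assert kind == "Iterable"
    T.binding = element
    # typeof(variable) :>=: T
    return is_supertype(variable_type, T.binding)

# toy supertype relation: Object and String are supertypes of String
supertype_pairs = {("String", "String"), ("Object", "String")}
is_sup = lambda a, b: (a, b) in supertype_pairs
```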

Meet and Join types

Meet and Join types are special types which are treated differently by the typesystem engine. Technically, Meet and Join types are instances of the MeetType and JoinType concepts, respectively. They may have an arbitrary number of argument types, which can be any nodes. Semantically, a Join type is a supertype of all its arguments, so a node which has the type Join(T1|T2|..|Tn) can be regarded as having type T1 or type T2 or .. or type Tn. A Meet type is a subtype of every one of its arguments, so a node which has the type Meet(T1&T2&..&Tn) has type T1 and type T2 and .. and type Tn. The separators of the arguments of Join and Meet types (i.e. "|" and "&") are chosen accordingly to serve as mnemonics.

Meet and Join types are very useful in certain situations. Meet types appear even in MPS BaseLanguage (which is very close to Java). For instance, the type of the following expression:

true ? new Integer(1) : "hello"

is Meet(Serializable & Comparable), because both Integer (the type of new Integer(1)) and String (the type of "hello") implement both Serializable and Comparable.

A Join type is useful when, say, you want some function-like concept to return values of two different types (a node or a list of nodes, for instance). Then you make the type of its invocation Join(node<> | list<node<>>).

You can create Meet and Join types yourself if you need to. Use quotations to create them, just as with other types and other nodes. The concepts for Meet and Join types are MeetType and JoinType, as mentioned above.
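One plausible way an engine can handle these types during subtype checks can be sketched in Python: Join(T1|..|Tn) is a subtype of S if every argument is, and S is a subtype of Join if it is a subtype of some argument; dually for Meet. This is an illustrative sketch, not a description of the MPS engine's internals:

```python
# Hypothetical sketch of subtype checking in the presence of Meet/Join types.
# Composite types are ("Join", [args]) or ("Meet", [args]); base is a set of
# (sub, sup) pairs for plain types.
def is_subtype(sub, sup, base):
    if isinstance(sub, tuple) and sub[0] == "Join":
        return all(is_subtype(t, sup, base) for t in sub[1])
    if isinstance(sup, tuple) and sup[0] == "Meet":
        return all(is_subtype(sub, t, base) for t in sup[1])
    if isinstance(sup, tuple) and sup[0] == "Join":
        return any(is_subtype(sub, t, base) for t in sup[1])
    if isinstance(sub, tuple) and sub[0] == "Meet":
        return any(is_subtype(t, sup, base) for t in sub[1])
    return sub == sup or (sub, sup) in base

# the ternary example: both Integer and String implement both interfaces
base = {("Integer", "Serializable"), ("Integer", "Comparable"),
        ("String", "Serializable"), ("String", "Comparable")}
meet = ("Meet", ["Serializable", "Comparable"])
```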

"When Concrete" Blocks

Sometimes you may want not only to write equations and inequations for certain types, but also to perform some complex analysis of a type's structure, that is, inspect the inner structure of a concrete type: its children, children of children, referents, etc.

It may seem that one can just write typeof(some expression) and then analyze the resulting type. The problem, however, is that one can't simply inspect the result of a "typeof" expression, because it may still be a type variable at that moment. Although a type variable usually becomes a concrete type at some point, it can't be guaranteed to be concrete at any given point of your typesystem code.

To solve such a problem you can use a "when concrete" block.

when concrete ( expr as var ) {
  body
}

Here, "expr" is an expression which evaluates to the type you want to inspect (not to a node whose type you want to inspect), and "var" is a variable to which the result will be assigned. This variable may then be used inside the body of the "when concrete" block. The body is a list of statements which will be executed only when the type denoted by "expr" becomes concrete; thus inside the body you may safely inspect its children, properties, etc. if you need to.

If you look into the inspector of a when concrete block, you will see two options: "is shallow" and "skip error". If you set "is shallow" to "true", the body of the block will be executed as soon as the expression becomes shallowly concrete, i.e. is no longer a type variable itself but may still have type variables as children or referents. Normally, if the expression in the condition of a when concrete block never becomes concrete, an error is reported. If it is normal for the type denoted by your expression to never become concrete, you can disable this error reporting by setting "skip error" to "true".
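The deferral mechanism behind "when concrete" can be sketched in Python: the body is a callback stored on the type variable and run only once the variable is bound to a concrete type. An illustrative sketch, not MPS internals:

```python
# Hypothetical sketch: a "when concrete" body is deferred until binding.
class TypeVar:
    def __init__(self):
        self.value = None
        self.callbacks = []

    def when_concrete(self, body):
        if self.value is not None:
            body(self.value)           # already concrete: run immediately
        else:
            self.callbacks.append(body)  # defer until the variable is bound

    def bind(self, concrete_type):
        self.value = concrete_type
        for body in self.callbacks:    # now it is safe to inspect the type
            body(concrete_type)

seen = []
t = TypeVar()
t.when_concrete(lambda ty: seen.append(ty))  # nothing happens yet
t.bind("Int")                                # the body fires with "Int"
```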

Overloaded Operators

Sometimes an operator (like +, -, etc.) has different semantics when applied to different values. For example, + in Java means addition when applied to numbers, but string concatenation if one of its operands is of type String. When the semantics of an operator depends on the types of its operands, we speak of operator overloading. In fact, we have many different operations denoted by the same syntactic construct.

Let's try to write an inference rule for a plus expression. First, we should inspect the types of the operands, because if we don't know whether they are numbers or Strings, we cannot choose the type of the operation (it will be either a number or a String). To be sure the operand types are concrete, we'll surround our code with two when concrete blocks, one for the left operand's type and one for the right operand's type.

when concrete(typeof(plusExpression.leftExpression) as leftType) {
  when concrete(typeof(plusExpression.rightExpression) as rightType) {
    ...
  }
}

Then we could write some inspections, checking whether our types are strings or numbers, and choose an appropriate operation type. But there is a problem: if someone writes an extension of BaseLanguage in which they want to use the plus expression for the addition of some other entities, say, matrices or dates, they won't be able to, because the types for the plus expression are hard-coded in the already existing inference rule. So we need an extension point that allows language developers to overload existing binary operations.

The typesystem language has such an extension point. It consists of:

  • overloaded operation rules, and
  • a construct which provides the type of an operation given the operation and the types of its operands.

For instance, a rule for PlusExpression in BaseLanguage is written as follows:

when concrete(typeof(plusExpression.leftExpression) as leftType) {
  when concrete(typeof(plusExpression.rightExpression) as rightType) {
    node<> opType = operation type( plusExpression , leftType , rightType );
    if (opType.isNotNull) {
      typeof(plusExpression) :==: opType;
    } else {
      error "+ can't be applied to these operands" -> plusExpression;
    }
  }
}

Here, "operation type" is a construct which provides the type of an operation given the left operand's type, the right operand's type and the operation itself. For this purpose it uses overloaded operation rules.

Overloaded Operation Rules

Overloaded operation rules reside within a root node of concept OverloadedOpRulesContainer. Each overloaded operation rule consists of:

  • an applicable operation concept, i.e. a reference to a concept of operation to which a rule is applicable (e.g. PlusExpression);
  • left and right operand type restrictions, which contain a type which restricts a type of left/right operand, respectively. A restriction can be either exact or not, which means that a type of an operand should be exactly a type in a restriction (if the restriction is exact), or its subtype (if not exact), for a rule to be applicable to such operand types;
  • a function itself, which returns a type of the operation knowing the operation concept and the left and right operand types.

Here's an example of one of the overloaded operation rules for PlusExpression in BaseLanguage:

operation concept: PlusExpression
left operand type: <Numeric>.descriptor is exact: false
right operand type: <Numeric>.descriptor is exact: false
operation type:
(operation, leftOperandType, rightOperandType)->node< > {
  if (leftOperandType.isInstanceOf(NullType) || rightOperandType.isInstanceOf(NullType)) {
    return null;
  } else {
    return Queries.getBinaryOperationType(leftOperandType, rightOperandType);
  }
}
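The dispatch behind "operation type" can be sketched in Python as a registry of rules with operand-type restrictions; the first applicable rule computes the operation's type, and no applicable rule means the operator cannot be applied. All names here are illustrative, not MPS API:

```python
# Hypothetical sketch of overloaded-operation-rule dispatch.
RULES = []

def register(op, left_restriction, right_restriction, result_fn):
    RULES.append((op, left_restriction, right_restriction, result_fn))

def operation_type(op, left_type, right_type, is_subtype):
    for r_op, l_restr, r_restr, result in RULES:
        if r_op == op and is_subtype(left_type, l_restr) and is_subtype(right_type, r_restr):
            return result(left_type, right_type)
    return None   # no rule applies: "+ can't be applied to these operands"

# toy type relation: Int/Long/Float are (non-exact) subtypes of Numeric
numeric = {"Int": 0, "Long": 1, "Float": 2}
is_sub = lambda t, restr: (restr == "Numeric" and t in numeric) or t == restr

register("+", "Numeric", "Numeric",
         lambda a, b: a if numeric[a] >= numeric[b] else b)  # wider operand wins
register("+", "String", "String", lambda a, b: "String")
```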

Replacement Rules

Motivation

Consider the following use case: you have types for functions in your language, e.g. (A1, A2, .., AN) -> R, where A1, A2, .., AN and R are types: AK is the type of the K-th function argument and R is the type of the function's result. Then you want to say that your function types are covariant in their return types and contravariant in their argument types. That is, a function type F = (T1, .., TN) -> R is a subtype of a function type G = (S1, .., SN) -> Q (written F <: G) if and only if R <: Q (covariance in the return type) and, for every K from 1 to N, TK :> SK (contravariance in the argument types).

The problem is, how to express covariance and contravariance in the typesystem language? Using subtyping rules you may express covariance by writing something like this:

nlist<> result = new nlist<>;
for (node<> returnTypeSupertype : immediateSupertypes(functionType.returnType)) {
  node<FunctionType> ft = functionType.copy;
  ft.returnType = returnTypeSupertype;
  result.add(ft);
}
return result;

Okay, we have collected all immediate supertypes of the function's return type and created a list of function types with the collected types as return types and the original argument types. But, first, if the return type has many supertypes, it's not very efficient to perform this computation each time we need to solve an inequation; and second, although we now have covariance in the function's return type, we still don't have contravariance in the function's argument types. We can't collect the immediate subtypes of a certain type, because subtyping rules give us supertypes, not subtypes.

In fact, we just want to express the abovementioned property: F = (T1, .., TN) -> R is a subtype of G = (S1, .., SN) -> Q (written F <: G) if and only if R <: Q and, for every K from 1 to N, TK :> SK. For this and similar purposes the typesystem language has a notion called a "replacement rule".

What's a replacement rule?

A replacement rule provides a convenient way to solve inequations. While the standard way is to transitively apply subtyping rules to a should-be-subtype until a should-be-supertype is found among the results (or is never found among the results), a replacement rule, if applicable to an inequation, removes the inequation and then executes its body (which usually contains "create equation" and "create inequation" statements).

Examples

A replacement rule for above-mentioned example is written as follows:

replacement rule FunctionType_subtypeOf_FunctionType

applicable for concept = FunctionType as functionSubType <: concept = FunctionType as functionSuperType

rule {
  if (functionSubType.parameterType.count != functionSuperType.parameterType.count) {
    error "different parameter numbers" -> equationInfo.getNodeWithError();
    return;
  }
  functionSubType.returnType :<=: functionSuperType.returnType;
  foreach (node<> paramType1 : functionSubType.parameterType; node<> paramType2 : functionSuperType.parameterType) {
    paramType2 :<=: paramType1;
  }
}

Here we say that the rule is applicable to a should-be-subtype of concept FunctionType and a should-be-supertype of concept FunctionType. The body of the rule first ensures that the numbers of parameter types of the two function types are equal; otherwise it reports an error and returns. If they are equal, the rule creates an inequation for the return types and an appropriate inequation for each pair of corresponding parameter types.

Another simple example of replacement rule usage is a rule which states that the Null type (the type of the null literal) is a subtype of every type except the primitive ones. Of course, we can't write a subtyping rule for the Null type which returns a list of all types. Instead, we write the following replacement rule:

replacement rule any_type_supertypeof_nulltype

applicable for concept = NullType as nullType <: concept = BaseConcept as baseConcept

rule {
  if (baseConcept.isInstanceOf(PrimitiveType)) {
    error "null type is not a subtype of primitive type" -> equationInfo.getNodeWithError();
  }
}

This rule is applicable to any should-be-supertype and to those should-be-subtypes which are Null types. The only thing this rule does is check whether the should-be-supertype is an instance of the PrimitiveType concept. If it is, the rule reports an error. If it is not, the rule does nothing, so the inequation is simply removed from the typesystem engine with no further effects.

Different semantics

The semantics of a replacement rule, as explained above, is to replace an inequation with some other equations and inequations, or to perform some other actions, when the rule is applied. This semantics doesn't really state that a certain type is a subtype of another type under some conditions; it just defines how to solve an inequation involving those two types.

For example, suppose that during generation you need to find out whether some statically unknown type is a subtype of String. What should the engine answer when the type to inspect is the Null type? When solving an inequation, a replacement rule could handle this, but here its abovementioned semantics is of no use: we have no inequations, we have a question to answer with yes or no. With function types it is even worse, because the rule says we should create some inequations. So what should be done with them in this use case?

To make replacement rules usable when we want to find out whether one type is a subtype of another, replacement rules are given different semantics in such a case.

These semantics are as follows: each "create equation" statement is treated as a check of whether two nodes match; each "create inequation" statement is treated as a check of whether one node is a subtype of another; each error-reporting statement is treated as "return false".

Consider the above replacement rule for function types:

replacement rule FunctionType_subtypeOf_FunctionType

applicable for concept = FunctionType as functionSubType <: concept = FunctionType as functionSuperType

rule {
  if (functionSubType.parameterType.count != functionSuperType.parameterType.count) {
    error "different parameter numbers" -> equationInfo.getNodeWithError();
    return;
  }
  functionSubType.returnType :<=: functionSuperType.returnType;
  foreach (node<> paramType1 : functionSubType.parameterType; node<> paramType2 : functionSuperType.parameterType) {
    paramType2 :<=: paramType1;
  }
}

Under these different semantics, it is treated as follows:

boolean result = true;
if (functionSubType.parameterType.count != functionSuperType.parameterType.count) {
  result = false;
  return result;
}
result = result && isSubtype(functionSubType.returnType <: functionSuperType.returnType);
foreach (node<> paramType1 : functionSubType.parameterType; node<> paramType2 : functionSuperType.parameterType) {
  result = result && isSubtype(paramType2 <: paramType1);
}
return result;

So, as we can see, the alternative semantics is quite an intuitive mapping between creating equations/inequations and performing checks.
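The same checking semantics can be written as a plain Python function: covariant in the return type, contravariant in the parameter types, false on a parameter-count mismatch. An illustrative sketch, not MPS API:

```python
# Hypothetical sketch of the "checking" semantics for function-type subtyping.
# A function type is (params_tuple, return_type).
def function_subtype(sub, sup, is_subtype):
    sub_params, sub_ret = sub
    sup_params, sup_ret = sup
    if len(sub_params) != len(sup_params):
        return False                       # "different parameter numbers"
    if not is_subtype(sub_ret, sup_ret):   # covariant return type
        return False
    return all(is_subtype(q, p)            # contravariant parameter types
               for p, q in zip(sub_params, sup_params))

base = {("Int", "Number")}                 # toy relation: Int <: Number
is_sub = lambda a, b: a == b or (a, b) in base
```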

Type-system, trace

MPS provides a handy debugging tool that gives you insight into how the type-system engine evaluates the type-system rules on a particular problem and calculates the types. You can invoke it from the context menu or with a keyboard shortcut (Control + Shift + X / Cmd + Shift + X).

The console has two panels. The one on the left shows the sequence of rules as they were applied, while the one on the right gives you a snapshot of the type-system engine's working memory at the time the rule selected in the left panel was evaluated.

Type errors are marked in red inside the Type-system Trace panel.

Additionally, if you spot an error in your code, use Control + Alt + Click / Cmd + Alt + Click to navigate quickly to the rule that fails to validate the types.


Advanced features of typesystem language

Check-only inequations

Normally, inequations may affect nodes' types: for instance, if one part of an inequation is a type variable, it may become a concrete type as a result of that inequation. But sometimes you don't want a certain inequation to create types, only to check whether the inequation is satisfied. We call such inequations check-only inequations. To mark an inequation as check-only, go to the inequation's inspector and set the "check-only" flag to "true". To visually distinguish such inequations, the "less or equals" sign of a check-only inequation is gray, while for normal ones it is black, so you can tell whether an inequation is check-only without looking at its inspector.

Dependencies

When writing a generator for a certain language (see generator), you may want to ask for the type of a certain node in generator queries. When the generator generates a model, such a query makes the typesystem engine do some typechecking to find out the required type. Performing a full typecheck of the node's containing root to obtain the node's type is expensive and almost always unnecessary. In most cases, the typechecker should check only the given node. In more difficult cases, obtaining the type of a given node may require checking its parent or perhaps a further ancestor. The typechecking engine therefore checks the given node first; if the computed type is not fully concrete (i.e. contains one or more type variables), the typechecker checks the node's parent, and so on.

Sometimes there's an even more complex case: the type of a certain node computed in isolation is fully concrete, and the type of the same node in a certain environment is also fully concrete, but differs from the first one. In such a case, the abovementioned algorithm breaks: it returns the type of the node as if it were isolated, which is not the correct type for the given node.

To solve this kind of problem, you can give some hints to the typechecker. Such hints are called dependencies - they express a fact that a node's type depends on some other node. Thus, when computing a type of a certain node during generation, if this node has some dependencies they will be checked also, so the node will be type-checked in an appropriate environment.

A dependency consists of a "target" concept (the concept of the node being checked, whose type depends on some other node), an optional "source" concept (the concept of the other node on which the type depends), and a query which returns the dependencies for the node being checked, i.e. a query that returns a node or a set of nodes.

For example, sometimes a type of a variable initializer should be checked with the enclosing variable declaration to obtain the correct type. A dependency which implements such a behavior may be written as follows:

target concept: Expression
find source: (targetNode)->join(node<> | Set<node<>>) {
  if (targetNode.getRole_().equals("initializer")) {
    return targetNode.parent;
  }
  return null;
}
source concept (optional): <auto>

This means the following: if the typechecker is asked for the type of a certain Expression during generation, it checks whether the expression is in the role initializer, and if it is, it says that not only the given Expression but also its parent should be checked in order to get the correct type for the Expression.


Using a typesystem

If you have defined a typesystem for a language, the typechecker will automatically use it in editors to highlight open nodes with errors and warnings. You may additionally want to use type information in queries, like editor actions, generator queries, etc. You may want to use the type of a node, to know whether a certain type is a subtype of another, or to find a supertype of a type which has a given form.

Type Operation

You may obtain the type of a node in your queries using the type operation. Just write <expr>.type, where <expr> is an expression which evaluates to a node.

Do not use type operation inside inference rules and inference methods! Inference rules are used to compute types, and type operation returns an already computed type.

Is Subtype expression

To find out whether one type is a subtype of another, use the isSubtype expression. Write isSubtype( type1 :< type2 ) or isStrongSubtype( type1 :<< type2 ); it returns true if type1 is a subtype of type2, or a strong subtype of type2, respectively.

Coerce expression

The result of a coerce expression is a boolean value, which says whether a certain type may be coerced to a given form, i.e. whether this type has a supertype of that form (satisfying a certain condition). The condition can be written either as a reference to a concept declaration, meaning that the sought-for supertype should be an instance of this concept, or as a pattern, which the sought-for supertype must match.
A coerce expression is written coerce( type :< condition ) or coerceStrong( type :<< condition ), where the condition is as described above.

Coerce Statement

A coerce statement consists of a list of statements, which are executed if a certain type can be coerced to a certain form. It is written as follows:

coerce ( type :< condition ) {
  ...
} else {
  ...
}

If a type can be coerced so as to satisfy a condition, the first (if) block will be executed, otherwise the else block will be executed. The supertype to which a type is coerced can be used inside the first block of a coerce statement. If the condition is a pattern and contains some pattern variables, which match parts of the supertype to which the type is coerced, such pattern variables can also be used inside the first block of the coerce statement.
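A plausible way coercion can work, walking the supertypes of a type and returning the first one that satisfies the condition, can be sketched in Python (illustrative only; the condition is modeled as a predicate):

```python
# Hypothetical sketch: coerce a type to a given form by searching its
# supertypes breadth-first and returning the first one matching the condition.
def coerce(t, condition, immediate_supertypes):
    seen, frontier = set(), [t]
    while frontier:
        cur = frontier.pop(0)          # breadth-first: nearest supertype first
        if condition(cur):
            return cur                 # the supertype the type coerces to
        for sup in immediate_supertypes.get(cur, ()):
            if sup not in seen:
                seen.add(sup)
                frontier.append(sup)
    return None                        # the type cannot be coerced to that form

SUPERS = {"Int": ["Long"], "Long": ["Float"], "Float": ["Number"]}
```

In a coerce statement, a non-None result would select the first (if) block, with the returned supertype available inside it; None would select the else block.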


For debugging the typesystem, MPS provides the Typesystem Trace - an integrated visual tool that gives you insight into the evaluation process that happens inside the typesystem engine.

Try it out for yourself

We prepared a dedicated sample language for you to easily experiment with the typesystem. Open the Expressions sample project that comes bundled with MPS and should be available among the sample projects in the user home folder.

The sample language

The language to experiment with is a simplified expression language with several types, four arithmetic operations (+, -, *, /), assignment (:=), two kinds of variable declarations and a variable reference. The editor is very basic with almost no customization, so editing the expressions may feel quite rough. Nevertheless, we expect you to inspect the existing samples and debug their types rather than write new code, so the lack of smooth editing should not be an issue.


The language can be embedded into Java thanks to the SimpleMathWrapper concept, but no interaction between the language and BaseLanguage is possible.

The expression language supports six types, organized by subtyping rules into two branches:

  1. Element -> Number -> Float -> Long -> Int
  2. Element -> Bool
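In the MPS typesystem language, each link of such a chain is typically declared with a subtyping rule that returns the immediate supertype(s) of a type. A schematic rule for one link of branch 1 might look like:

```
subtyping rule supertypeOf_IntType

applicable for concept = IntType as intType

rule {
  // Int's immediate supertype in the sample language is Long
  return new node<LongType>();
}
```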

Inspecting the types

If you open the Simple example class, you can position the cursor at any part of the expression or select a valid expression block. As soon as you hit Control/Cmd + Shift + T, you'll see the type of the selected node in a pop-up dialog.


The Main sample class will give you a more involved example showing how Type-inference correctly propagates the suitable type to variables:


Just check the calculated types for yourself.

Type errors

The TypeError sample class shows a simple example of a type error. Just uncomment the code (Control/Cmd + /) and check the reported error:

Since this variable declaration declares its type explicitly to be an Int, while the initializer is of type Float, the type-system reports an error. You may check the status bar at the bottom or hover your mouse over the incorrect piece of code.

Type-system Trace

When you hit Control/Cmd + Shift + X or navigate through the pop-up menu, the Typesystem Trace panel is displayed on the right-hand side.
Panel 2 of the Trace shows all steps (i.e. typesystem rules) that the typesystem engine executed, ordered top-to-bottom in the order in which they were performed. When Button 1 is selected, Panel 2 highlights the steps that directly or indirectly influence the type of the node selected in the editor (Panel 1). Panel 3 details the step selected in Panel 2 - it describes what changes were made to the typesystem engine's state in that step. The actual state of the engine's working memory is displayed in Panel 4.

Step-by-step debugging

The Simple sample class is probably the easiest one to start experimenting with. The types get resolved in six steps, following the typesystem rules specified in the language. You may want to refer to these rules quickly by pressing F4 or using the Control/Cmd + N "Go to Root Node" command. F3 will navigate you to the node affected by the current rule.

  1. The type of a variable declaration has to be a supertype of the type of the initializer. The aValue variable is assigned the type-system variable a, the initializer expression is assigned the type-system variable b, and a>=b (b is a subtype of, or equal to, a) is added into the working memory.
  2. Following the typesystem rule for arithmetic expressions, b has to be a subtype of Number; the value 10 is assigned the variable c, 1.3F is assigned the variable d, and a when-concrete handler is added to wait for c to be calculated.
  3. Following the rules for float constants, d is resolved as Float.
  4. Following the rules for integer constants, c is resolved as Int. This triggers the when-concrete handler registered in step 2, which registers another when-concrete handler to wait for d. Since d has already been resolved to Float, that handler fires and resolves b (the whole arithmetic expression) as Float. This also solves the earlier equation (step 2) that b<=Number.
  5. Now a can be resolved as Float, which also solves the step 1 equation that a>=b.
  6. If you enable type expansions by pressing the button in the tool-bar, you'll get the final expansions of all nodes to concrete types as the last step.
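Step 1 of the trace corresponds to an inference rule of roughly this shape (schematic; the concept and role names are illustrative, not the verbatim sample source):

```
rule typeof_VariableDeclaration

applicable for concept = VariableDeclaration as varDecl

do {
  // "type of the declaration >= type of the initializer",
  // i.e. the a >= b inequation that appears in the working memory in step 1
  typeof(varDecl) :>=: typeof(varDecl.initializer);
}
```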

Scopes

We are going to look at two ways to define scopes for custom language elements - the inherited (hierarchical) and the referential approaches. We chose the Calculator tutorial language as a testbed for our experiments. You can find the calculator-tutorial project included in the set of sample projects that comes with the MPS distribution.

Two ways

All references need to know the set of allowed targets. This enables MPS to populate the completion menu whenever the user is about to supply a value for the reference. Existing references can be validated against that set and marked as invalid, if they refer to elements out of the scope.

MPS offers two ways to define scopes:

  • Inherited scopes
  • Reference scopes

Reference scopes offer lower ceremony, while inherited scopes allow the scope to be built gradually, following the hierarchy of nodes in the model.

Icon

The oldest type of scopes in MPS is called Search scope and it has been deprecated in favor of the two types mentioned above, because the scoping API has changed significantly since its introduction. The Reference scope can be viewed as the closest replacement for Search scope compatible with the new API.

Inherited scopes

We will describe the newer hierarchical (inherited) mechanism of scope resolution first. This mechanism delegates scope resolution to those ancestors of the node that implement ScopeProvider.

  1. MPS looks for the closest ancestor of the reference node that implements ScopeProvider and can provide a scope for the current reference kind.
  2. If that ScopeProvider returns null, the search continues with more distant ancestors.
  3. Each ScopeProvider can 
    • build and return a Scope implementation (more on these later)
    • delegate to the parent scope 
    • add its own elements to the parent scope
    • hide elements from the parent scope (more on working with scopes will be discussed later)

Our InputFieldReference thus searches for InputField nodes and relies on its ancestors to build a list of those.

Once we have specified that the scope for InputFieldReference, when searching for an InputField, is inherited, we must indicate that Calculator is a ScopeProvider. This ensures that Calculator will have a say in building the scope for all InputFieldReferences placed among its descendants.

The Calculator in our case should return a list of all its InputFields whenever queried for scope of InputField. So in the Behavior aspect of Calculator we override (Control + O) the getScope() method:

If Scope remains unresolved, we need to import the model (Control + R) that contains it (jetbrains.mps.scope):


We also need BaseLanguage since we need to encode some functionality. The smodel language needs to be imported in order to query nodes. These languages should have been imported for you automatically. If not, you can import them using the Control + L shortcut.

Now we can complete the scope definition code, which, in essence, returns all input fields from within the calculator:
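The body of the overridden method can be sketched as follows (a sketch only: it assumes the child role of input fields in Calculator is called inputField, and the exact helper signatures may differ across MPS versions):

```
public Scope getScope(concept<> kind, node<> child)
    overrides ScopeProvider.getScope {
  if (kind == concept/InputField/) {
    // expose all InputField children of this Calculator as the scope
    return new SimpleRoleScope(this, link/Calculator : inputField/);
  }
  // for any other kind, let more distant ScopeProviders answer
  return null;
}
```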

A quick tip: notice the use of the SimpleRoleScope class. It is one of several helper classes that can help you build your own custom scopes. Check them out by navigating to SimpleRoleScope (Control + N) and opening up the containing package structure (Alt + F1).

Scope helper implementations

MPS comes with several helper Scope implementations that cover many possible scenarios and you can use them to ease the task of defining a scope:

  • ListScope - represents the nodes passed into its constructor
  • DelegatingScope - delegates to a Scope instance passed into its constructor
  • CompositeScope - delegates to a group of (wrapped) Scope instances
  • FilteringScope - delegates to a single Scope instance, filtering its nodes with a predicate (the isExcluded method)
  • FilteringByNameScope - delegates to a single Scope instance, filtering its nodes by a name blacklist, which it gets as a constructor parameter
  • EmptyScope - scope with no nodes
  • SimpleRoleScope - a scope providing all child nodes of a node, which match a given role
  • ModelsScope - a scope containing all nodes of a given concept contained in the supplied set of models
  • ModelPlusImportedScope - like ModelsScope, but includes all models imported by the given model
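These helpers compose naturally. For example, a scope of all input fields of a calculator, minus the unnamed ones, could be sketched as (schematic; the role and the filtering criterion are illustrative):

```
return new FilteringScope(
    new SimpleRoleScope(calculator, link/Calculator : inputField/)) {
  public boolean isExcluded(node<> n) {
    // hypothetical filter: hide fields that have no name yet
    return n.name.isEmpty;
  }
};
```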

VariableReference

A slightly more advanced example can be found in BaseLanguage. VariableReference uses inherited scope for its variableDeclaration reference.

Concepts such as ForStatement, LocalVariableDeclaration, BaseMethodDeclaration, Classifier as well as some others add variable declarations to the scope and thus implement ScopeProvider.

For example, ForStatement uses the Scopes.forVariables helper function to build a scope that enriches the parent scope with all variables declared in the for loop, potentially hiding variables of the same name in the parent scope. The come from expression detects whether the reference that we're currently resolving the scope for lies in the given part of the sub-tree.

Icon
  • The parent scope construct will create an instance of LazyParentScope() and effectively delegate to an ancestor in the model, which implements ScopeProvider, to supply the scope.
  • The come from construct will delegate to ScopeUtils.comeFrom() in order to check whether the scope is being calculated for a direct child of the current node in the given role.
  • The composite with construct (used as composite <expr> with parent scope) will create a combined scope of the supplied scope expression and the parent scope.
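Put together, the getScope() body of a concept like ForStatement can be sketched as (schematic, not the verbatim BaseLanguage source):

```
// scope for VariableDeclaration requested from below a ForStatement
if (come from link/ForStatement : body/) {
  // loop variables enrich - and can shadow - the parent scope,
  // but only for references located inside the loop body
  return Scopes.forVariables(this, this.variable, parent scope);
}
return parent scope;
```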

Using reference scope

Scopes can alternatively be implemented in a faster but less scalable way - using the reference scope:

Instead of delegating to the ancestors of type ScopeProvider to do the resolution, you can insert the scope resolution code right into the constraint definition.

Instead of the code that was originally inside the Calculator's getScope() method, it is now InputFieldReference itself that defines the scope. The function for a reference scope is supposed to return a Scope instance, just like the ScopeProvider.getScope() method. A Scope is essentially a list of potential reference targets, together with the logic to resolve those targets from textual values.
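The constraint can be sketched like this (schematic; the link name and the exact parameter list of the scope function are illustrative and vary between MPS versions):

```
link {inputField}
  scope:
    (contextNode)->Scope {
      // inline what used to live in Calculator.getScope():
      node<Calculator> calc = contextNode.ancestor<concept = Calculator>;
      return new SimpleRoleScope(calc, link/Calculator : inputField/);
    }
```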

There are several predefined Scope implementations and related helper factory methods ready for you to use:

  • SimpleRoleScope - simply adds all children of the supplied node that are in the specified role
  • ModelPlusImportedScope - provides reference targets from imported models. Allows the user to add targets to the scope with Control + R / Cmd + R (import containing model).
  • FilteringScope - allows you to exclude some elements from another scope. Subclasses of FilteringScope should override the isExcluded() method.
  • DelegatingScope - delegates to another scope. Meant to be overridden to customize the behavior of the original scope.

You may also look around yourself in the scope model:

Intentions

Intentions are a very good example of how MPS enables language authors to smooth the user experience of the people using their language. Intentions provide fast access to the most frequently used operations on the syntactical constructions of a language, such as "negate boolean" or "invert if condition". If you've ever used IntelliJ IDEA's intentions, or similar features of other modern IDEs, you will find MPS intentions very familiar.

Using intentions


As in IDEA, if there are available intentions applicable to the code at the current position, a light bulb is shown. To view the list of available intentions, press Alt+Enter or click the light bulb. To apply an intention, either click it or select it and press Enter. This will trigger the intention and alter the code accordingly.
Example: list of applicable intentions

Intention types

All intentions are "shortcuts" of a sort, bringing some operations on node structure closer to the user. Two kinds of intentions can be distinguished: regular intentions (possibly with parameters) and "surround with" intentions.
Generally speaking, there is no technical difference between these types of intentions. They only differ in how they are typically used by the user.

Regular intentions are listed in the intentions list (the light bulb) and directly perform transformations on a node without asking the user for parameters customizing the operation.

"surround with" intentions are used to implement a special kind of transformation - surrounding some node(s) with another construct (e.g. "surround with parenthesis"). These intentions are not offered to the users unless they press ctrl-alt-T (the surround with command) on a node. Neither they are shown in general intentions pop-up menu.

Common Intention Structure

name

The name of an intention. You can choose any name you like; the only obvious constraint is that names must be unique within the scope of the model.

for concept

The intention will be tested for applicability only on nodes that are instances of this concept or its subconcepts.

available in child nodes

Suppose N is a node for which the intention can be applied. If this flag is set to false, the intention will be visible only when the cursor is over the node N itself. If set to true, it will also be visible in N's descendants (but will still be applied to N).

child filter

Used to show an intention only in some children. E.g. a "make method final" intention is better not shown inside the method's body, but preferably shown in the whole header, including the "public" child.

description

The value returned by this function is what users will see in the list of intentions.

isApplicable

Intentions that have passed the "for concept" test are tested for applicability to the current node. If this method returns "true," the intention is shown in the list and can be applied. Otherwise the intention is not shown in the list. The node argument of this method is guaranteed to be an instance of the concept specified in "for concept" or one of its subconcepts.

execute

This method performs a code transformation. It is guaranteed that the node parameter has passed the "for concept" and "is applicable" tests.
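Putting these pieces together, a minimal regular intention might be declared like this (schematic; the MethodDeclaration concept and the isFinal property are illustrative, not part of any specific language):

```
intention makeMethodFinal for concept MethodDeclaration {
  available in child nodes : false

  description(node)->string {
    "Make Method Final";
  }
  isApplicable(node)->boolean {
    // only offer the intention for methods that are not final yet
    !node.isFinal;
  }
  execute(node)->void {
    node.isFinal = true;
  }
}
```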

Regular Intentions

is error intention - This flag is responsible for an intention's presentation. It distinguishes two types of intentions - "error" intentions which correct some errors in the code (e.g. a missing 'cast') and "regular" intentions, which are intended to help the user perform some genuine code transformations. To visually distinguish the two types, error intentions are shown with a red bulb, instead of an orange one, and are placed above regular intentions in the applicable intentions list.

Parameterized regular intentions

Intentions can sometimes be very close to one another. They may all need to perform the same transformation on a node, just slightly differently. E.g. all the "Add ... macro" intentions in the generator ultimately add a macro, but the added macro differs between intentions. This is a case where a parameterized intention is needed. Instead of creating separate intentions, you create a single intention and allow for its parametrization. The intention has a parameter function, which returns a list of parameter values. Based on the list, a number of intentions are created, each with a different parameter value. The parameter values can then be accessed in almost every method of the intention.

Note

Icon

You don't have access to the parameter in the isApplicable function, for performance reasons. As isApplicable is executed very often and delays would quickly become noticeable to the user, you should perform only basic checks in isApplicable. All parameter-dependent checks should be performed in the parameter function; if a check does not pass, the corresponding parameter should not be returned.

Surround With - Intentions

This type of intention is very similar to regular intentions, and all the details mentioned above apply to them as well.

Where to store my intentions?

You can create intentions in any model by importing the intentions language. However, MPS collects intentions only from the Intentions language aspects. If you want your intentions to be used by the MPS intentions subsystem, they must be stored in the Intentions aspect of your language.


Testing

Testing languages

Introduction

Testing is an essential part of a language designer's work. To be genuinely useful, MPS has to provide testing facilities both for BaseLanguage code and for languages. While the jetbrains.mps.baselanguage.unitTest language enables JUnit-like unit tests for BaseLanguage code, the Language test language, jetbrains.mps.lang.test, provides a useful interface for creating language tests.

Icon

To minimize the impact of test assertions on the test code, the Language test language describes the testing aspects through annotations (similarly to how the generator language annotates template code with generator macros).

Quick navigation table (repeated from the top of the page)

Different aspects of language definitions are tested with different means:

Language definition aspects

The way to test

Intentions
Actions
Side-transforms
Editor ActionMaps
KeyMaps

Use the jetbrains.mps.lang.test language to create EditorTestCases. You set the stage by providing an initial piece of code, define a set of editing actions to perform against the initial code and also provide an expected outcome as another piece of code. Any differences between the expected and real output of the test will be reported as errors.
See the Editor Tests section for details.

Constraints
Scopes
Type-system
Dataflow

Use the jetbrains.mps.lang.test language to create NodesTestCases. In these test cases write snippets of "correct" code and ensure no error or warning is reported on them. Similarly, write "invalid" pieces of code and assert that an error or a warning is reported in the correct node.
See the Nodes Tests section for details.

Generator
TextGen

There is currently no built-in testing facility for these aspects. There are a few practices that have worked for us over time:

  • Perhaps the most reasonable way to check the generation process is by generating models, for which we already know the correct generation result, and then comparing the generated output with the expected one. For example, if your generated code is stored in a VCS, you could check for differences after each run of the tests.
  • You may also consider providing code snippets that may represent corner cases for the generator and check whether the generator successfully generates output from them, or whether it fails.
  • Compiling and running the generated code may also increase your confidence about the correctness of your generator.

Migrations

Use a NodesTestCase to test the method that migrates a single node in your migration.
See the Nodes Tests section for details.


Tests creation

There are two options to add test models into your projects.

1. Create a Test aspect in your language

This is easier to set up, but can only contain tests that do not need to run in a newly started MPS instance, so it typically holds plain baseLanguage unit tests. To create the Test aspect, right-click the language node and choose New->Test Aspect.

Now you can start creating unit tests in the Test aspect.


Right-clicking on the Test aspect will give you the option to run all tests. The test report will then show up in a Run panel at the bottom of the screen.

2. Create a test model

This option gives you more flexibility. Create a test model, either in a new or an existing solution. Make sure the model's stereotype is set to tests.

Open the model's properties and add the jetbrains.mps.baselanguage.unitTest language in order to be able to create unit tests. Add the jetbrains.mps.lang.test language in order to create language (node) tests.

Additionally, you need to make sure the solution containing your test model has a kind set - typically choose Other, if you do not need either of the two other options (Core plugin or Editor plugin). 


Right-clicking on the model allows you to create new unit or language tests. See all the root concepts that are available:


Unit testing with BTestCase

BTestCase stands for BaseLanguage Test Case and represents a unit test written in baseLanguage. Those familiar with JUnit will quickly feel at home.

A BTestCase has four sections - one to specify test members (fields), which are reused by test methods, one to specify initialization code, one for clean up code and finally a section for the actual test methods. The language also provides a couple of handy assertion statements, which code completion reveals.
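The overall shape of a BTestCase can be sketched as (schematic; the Calculator class, its methods and the exact section keywords are illustrative):

```
test case CalculatorTest {
  test members:
    Calculator fixture;
  init:
    fixture = new Calculator();
  test methods:
    test method additionWorks {
      assert 4 equals fixture.add(2, 2);
    }
  clean up:
    fixture = null;
}
```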

TestInfo

In order to be able to run node tests, you need to provide more information through a TestInfo node in the root of your test model.

Especially the Project path attribute is worth your attention. This is where you need to provide a path to the project root, either as an absolute or relative path, or as a reference to a Path Variable defined in MPS (Project Settings -> Path Variables).


Node tests

A NodesTestCase contains three sections:


The first one contains code that should be verified. The section for test methods may contain baseLanguage code that further investigates nodes specified in the first section. The utility methods section may hold reusable baseLanguage code, typically invoked from the test methods.

Checking for correctness

To test that the type system correctly calculates types and that proper errors and warnings are reported, you first write a piece of code in your desired language. Then select the nodes that you'd like to have tested for correctness and choose the Add Node Operations Test Annotation intention.
This annotates the code with a check attribute, which can then be made more concrete by setting the type of the check:


Note that many of the options have been deprecated and should no longer be used.

The for error messages option ensures that potential error messages inside the checked node get reported as test failures. So, in the given example, we are checking that there are no errors in the whole Script.

Checking for type system and data-flow errors and warnings

If, on the other hand, you want to test that a particular node is correctly reported by MPS as having an error or a warning, use the has error / has warning option.


This works for both warnings and errors.


You can even tie the check to the rule that you expect to report the error / warning. Hit Alt + Enter with the cursor over the node and pick the Specify Rule References option:


An identifier of the rule has been added. You can navigate with Control/Cmd + B (or a click) to the definition of the rule.


When run, the test will check that the specified rule is really the one that reports the error.

Type-system specific options

The check command offers several options to test the calculated type of a node.


Multiple expectations can be combined conveniently:

Testing scopes

The Scope Test Annotation allows the test to verify that the scoping rules bring the correct items into the applicable scope:


The Inspector panel holds the list of expected items that must appear in the completion menu and that are valid targets for the annotated cell:


Test and utility methods

The test methods may refer to nodes in your tests through labels. You assign labels to nodes using intentions:


The labels then become available in the test methods.

Editor tests

Editor tests allow you to test the dynamism of the editor - actions, intentions and substitutions.

An editor test case needs a name, an optional description, the code as it should look before the editor transformation, the code after the transformation (the result), and, in the code section, the actual trigger that transforms the code.


For example, a test that an IfStatement of the Robot_Kaja language can be transformed into a WhileStatement by typing while in front of the if keyword would look as follows:


In the code section, the jetbrains.mps.lang.test language gives you several options to invoke user-initiated actions - use type, press keys, invoke action or invoke intention. You can freely combine these special test commands with plain baseLanguage code.
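A code section mixing these commands might be sketched as follows (schematic; the key combination and the intention identifier are hypothetical):

```
// simulate the user typing at the caret position
type "while";
// simulate a key press, e.g. to trigger completion
press keys <Control + Space>;
// or trigger an intention by name (hypothetical identifier)
invoke intention ConvertIfToWhile;
```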

To mark the position of the caret in the code, use the appropriate intention with the cursor located at the desired position:

The cursor position can be specified in both the before and the after code:

The cell editor annotation has extra properties to fine-tune the position of the caret in the annotated editor cell. These can be set in the Inspector panel.

Running the tests

Inside MPS

To run tests in a model, just right-click the model in the Project View panel and choose Run tests:

If the model contains any of the jetbrains.mps.lang.test tests, a new instance of MPS is silently started in the background (that's why it takes quite some time to run these compared to plain baseLanguage unit tests) and the tests are executed in that new MPS instance. A new run configuration is created, which you can then re-use or customize:

The Run configurations dialog gives you options to tune the performance of tests.

  • Reuse caches - reusing the old caches of the headless MPS instance when running tests cuts away much of the time needed to set up a test instance of MPS. This option can be set and unset in the run configuration dialog.
  • Save caches in - specifies the directory to save the caches in. By default, MPS chooses the temp directory. With the Reuse caches option set, MPS saves its caches in the specified folder and reuses them whenever possible. If the option is unset, the directory is cleared on every run.
  • Execute in the same process - to speed up testing, tests can be run in a so-called in-process mode. It was designed specifically for tests that need a running MPS instance (for example, for language type-system tests MPS should safely be able to check the types of nodes on the fly).
    The original way was to start a new MPS instance in the background and run the tests in that instance. This option instead allows all tests to run in the same, original MPS process, so no new instance needs to be created. When Execute in the same process is set (the default), the test is executed in the current MPS environment. To run tests the original way (in a separate process), uncheck this option. This mode of test execution is applicable to all test kinds in MPS - it works even for editor tests!
    Icon

    Although performance is much better with in-process test execution, this workflow has certain drawbacks. Note that the tests are executed in the same MPS environment that holds the project, so code you write in a test may potentially be dangerous and cause real harm. For example, a test that disposes of the current project could destroy the whole project. So be careful when writing such tests.
    There are cases when a test must not be executed in-process. For those, an option in the inspector lets you prohibit in-process execution for that specific test.

    The test report is shown in the Run panel at the bottom of the screen:

From a build script

In order to have your generated build script offer the test target that you could use to run the tests using Ant, you need to import the jetbrains.mps.build.mps and jetbrains.mps.build.mps.tests languages into your build script, declare using the module-tests plugin and specify a test modules configuration.


Refactoring

Changes in the Refactoring language

In order to make the structure of the MPS core languages more consistent and clear, the Refactoring language has been changed considerably. Several new, easy-to-use constructs have been added, and parts of the functionality were deprecated and moved into the Actions language.

The UI for retrieving the refactoring parameters has been removed from the refactoring language. Choosers for parameters are no longer called, it is not allowed to show UI in init (e.g. ask and ask boolean) and keystroke has no effect. All this functionality should be moved to an action corresponding to the refactoring.

The following constructs have been added to the refactoring language. These new constructs are intended to be used from code, typically from within actions:

  • is applicable refactoring<Refactoring>(target)
    returns true if the refactoring target corresponds to the current target (type, single/multiple), the refactoring's isApplicable method returns true, and no other refactoring overrides the current refactoring for this target.
  • execute refactoring<Refactoring>(target : project, parameters );
    executes the refactoring for the target with the given parameters
  • create refcontext<Refactoring>(target : project, parameters )
    creates a refactoring context for the refactoring and target and fills the parameters into the context; this context can then be used for refactoring execution or for further work with the parameters. No UI is shown during this call.

It is necessary to manually migrate existing user refactorings. The migration consists of several steps:

  • create a UI action for the refactoring (This is a simple action from the plugin language. You can check the Rename action from jetbrains.mps.ide.platform.actions.core as an example of proper refactoring action registration)
  • copy the caption, create context parameters
  • add a refactoring keystroke with the newly created action to KeymapChangesDeclaration
  • create ActionGroupDeclaration for the refactoring that modifies the jetbrains.mps.ide.platform.actions.NodeRefactoring action group at the default position
  • add an isApplicable clause to the created action; usually it is just an is applicable refactoring< >() call
  • add an execute clause to the created action; all the parameter preparations that were in the refactoring's init should be moved here; at the end, execute the refactoring with the prepared parameters (with the execute refactoring< >(); statement)
  • remove all parameter preparation code from the refactoring's init; parameters are now prepared before init is entered; you can still validate the parameters there and return false if the validation fails
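Put together, a migrated refactoring action might look roughly like the following hand-written sketch. It combines the constructs listed above; the action, refactoring and parameter names are illustrative, and the concrete projectional notation in MPS differs:

```
action MyRefactoring_Action {
  caption: "My Refactoring"
  context parameters: node<MyConcept> target (required)

  isApplicable: is applicable refactoring<MyRefactoring>(target)
  execute: {
    // prepare the refactoring parameters here
    // (this code used to live in the refactoring's init)
    execute refactoring<MyRefactoring>(target : project, parameters);
  }
}
```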

Data Flow

A language's data flow aspect allows you to find unreachable statements, detect unused assignments, check whether a variable might not be initialized before it's read, and so on. It also allows performing some code transformations, for example the 'extract method' refactoring.

Most users of data flow analyses aren't interested in the details of their inner workings, but in getting the results they need. They want to know which of their statements are unreachable, and what can be read before it's initialized. In order to shield users from the complexities of these analyses, we provide an assembly-like intermediate language into which you translate your program. After translation, this intermediate representation is analyzed and the user can find out which of the statements of the original language are unreachable, etc.

For example, here is the translation of a 'for' loop from baseLanguage:

First, we translate the expression for node.iterable. Then we emit a label so that we can later jump back to it. Then we perform a conditional jump after the current node. Then we emit code for writing to node.variable. This means that we change the value of node.variable on each iteration. We don't need to know what we write to node.variable, since this information isn't used by our analysis. Finally, we emit code for the loop's body, and jump back to the previously emitted label.
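In the intermediate language, that sequence of steps corresponds roughly to the following commands (a sketch reconstructed from the description above; the actual notation in the MPS editor is slightly richer):

```
code for node.iterable   // translate the iterable expression
label loop               // target for the back jump
ifjump after node        // conditionally exit the loop
write node.variable      // the loop variable changes on each iteration
code for node.body       // translate the loop body
jump loop                // back to the previously emitted label
```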

Commands of intermediate language

Here are the commands of our intermediate language:

  • read x - reads a variable x
  • write x - writes to variable x
  • jump before node - jumps before node
  • jump after node - jumps after node
  • jump label - jumps to label
  • ifjump (before|after) node | label - conditional jump before/after node, or to label
  • code for node - insert code for node
  • ret - returns from current subroutine
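Conceptually, the reachable-statement analysis over these commands is a simple graph traversal: follow fall-through and jump edges from the entry command and flag everything that is never visited. The following plain-Python sketch (an illustration of the idea, not MPS code; MPS runs this analysis internally) uses instruction indices as jump targets:

```python
def reachable(program):
    """program: list of (op, arg) tuples; arg is a target index for jumps.
    Returns the set of indices of reachable instructions."""
    seen = set()
    work = [0]                      # start at the entry command
    while work:
        i = work.pop()
        if i in seen or i >= len(program):
            continue
        seen.add(i)
        op, arg = program[i]
        if op == "jump":            # unconditional: only the target follows
            work.append(arg)
        elif op == "ifjump":        # conditional: target or fall-through
            work.append(arg)
            work.append(i + 1)
        elif op == "ret":           # no successors
            pass
        else:                       # read/write/code: fall through
            work.append(i + 1)
    return seen

# 0: read x, 1: jump to 3, 2: write x (never reached), 3: ret
prog = [("read", None), ("jump", 3), ("write", None), ("ret", None)]
print(sorted(reachable(prog)))  # -> [0, 1, 3]
```

Instruction 2 is never visited, so the statement it was generated from would be highlighted as unreachable.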

May be unreachable

Some commands shouldn't be highlighted as unreachable. For example we might want to write some code like this:

If you generate data flow intermediate code for this statement, the last command (the jump after the condition) will be unreachable. On the other hand, it's a legal baseLanguage statement, so we want to ignore this command during the reachable-statement analysis. To do so, we mark it as may be unreachable, which is indicated by curly braces around it. You can toggle this setting with the appropriate intention.

You may also like to try our Dataflow cookbook.

Links:

http://www.itu.dk/people/brabrand/UFPE/Data-Flow-Analysis/static.pdf - a good introduction to static analyses, including data flow and type systems.


TextGen language aspect

Introduction

The TextGen language aspect defines a model-to-text transformation. It comes in handy whenever you need to convert your models directly into text form. The language contains constructs to print out text, transform nodes into text values and give the output a reasonable layout.

Operations

The append command performs the transformation and adds the resulting text to the output. You can use the found error command to report problems in the model. The with indent command demarcates blocks with increased indentation. Alternatively, the increase depth and decrease depth commands manipulate the current indentation depth without being limited to a block structure. The indent buffer command applies the current indentation (as specified by with indent or increase/decrease depth) to the current line.

  • append - takes any number of:
    • {string value} - to insert one, use the " character, or pick a constant from the completion menu
    • \n
    • $list{node.list} - list without a separator
    • $list{node.list with ,} - list with a separator
    • $ref{node.reference}, e.g. $ref{node.reference<target>}
    • ${node.child}
  • found error - error text
  • decrease depth - decreases the indentation level from now on
  • increase depth - increases the indentation level from now on
  • indent buffer - applies the current indentation to the current line
  • with indent { <code> } - increases the indentation level for <code>

Icon

Each parameter of the append command has a with indent flag in the Inspector tool window; when set to true, the parameter's output is prefixed with the current indentation buffer.

Indentation

Proper indentation is easy to get right once you understand the underlying principle. TextGen flushes the AST into text. The TextGen commands sequentially manipulate the output buffer, emitting text to it one node at a time. A variable holding the current indentation depth (the indentation buffer) is maintained for each root concept. The indentation buffer starts at zero and is changed by the increase/decrease depth and with indent commands.

The indentation, however, must be inserted into the output stream explicitly by the append commands. Simply marking a block with with indent will not automatically indent the text generated by the wrapped TextGen code: the with indent block only increases the value of the indentation buffer, and each individual append decides whether or not to prefix its output with an indentation of the current size.

There are two ways to explicitly insert indentation buffer into the output stream:

  • indent buffer command
  • with indent flag in the inspector for the parameters of the append command

For example, to properly indent Constants in a list of constants, we call indent buffer at the beginning of each emitted line. This ensures that the indentation is inserted only at the beginning of each line.

Alternatively, we could specify the with indent flag in the inspector for the first parameter to the append command. This will also insert the indentation only at the beginning of each line.
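The interplay of with indent and indent buffer can be modeled with a small plain-Python sketch of the output buffer (an illustration of the principle only, not actual TextGen code; the class and method names are made up):

```python
class TextGenBuffer:
    """Toy model of the TextGen output buffer and indentation buffer."""

    def __init__(self, step="  "):
        self.out = []     # the output stream
        self.depth = 0    # the indentation buffer (current depth)
        self.step = step  # one indentation unit

    def append(self, text):
        # 'append' emits text as-is; it does NOT indent automatically
        self.out.append(text)

    def indent_buffer(self):
        # the 'indent buffer' command: explicitly insert the indentation
        self.out.append(self.step * self.depth)

    def with_indent(self, emit):
        # the 'with indent { ... }' block: only changes the depth
        self.depth += 1
        emit(self)
        self.depth -= 1

buf = TextGenBuffer()
buf.append("block {\n")
buf.with_indent(lambda b: (b.indent_buffer(), b.append("constant;\n")))
buf.append("}\n")
print("".join(buf.out))  # the middle line is indented by one step
```

Note that without the explicit indent_buffer() call inside the with_indent block, the inner line would start at column zero, just as an append without the with indent flag does in MPS.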

Root concepts

TextGen provides two types of root concepts:

  • text gen component, represented by the ConceptTextGenDeclaration concept, which encodes a transformation of a concept into text. For rootable concepts the target file can also be specified.
  • base text gen component, represented by the LanguageTextGenDeclaration concept, which allows the definition of reusable textgen operations and utility methods. These can be called from other text gen components of the same language, as well as from extending languages.

Examples

Here is an example of the text gen component for the ForeachStatement (jetbrains.mps.baseLanguage).

This is an artificial example of the text gen:

producing the following code block containing a number of lines with indentation:



Testing languages

Introduction

Testing is an essential part of a language designer's work. To be of any good, MPS has to provide testing facilities both for BaseLanguage code and for languages. While the jetbrains.mps.baselanguage.unitTest language enables JUnit-like unit tests to test BaseLanguage code, the Language test language jetbrains.mps.lang.test provides a useful interface for creating language tests.

Icon

To minimize impact of test assertions on the test code, the Language test language describes the testing aspects through annotations (in a similar way that the generator language annotates template code with generator macros).



Tests creation

There are two options to add test models into your projects.

1. Create a Test aspect in your language

This is easier to set up, but the aspect can only contain tests that do not need to run in a newly started MPS instance, so it typically holds plain baseLanguage unit tests. To create the Test aspect, right-click the language node and choose New->Test Aspect.

Now you can start creating unit tests in the Test aspect.


Right-clicking on the Test aspect will give you the option to run all tests. The test report will then show up in a Run panel at the bottom of the screen.

2. Create a test model

This option gives you more flexibility. Create a test model, either in a new or an existing solution. Make sure the model's stereotype is set to tests.

Open the model's properties and add the jetbrains.mps.baselanguage.unitTest language in order to be able to create unit tests. Add the jetbrains.mps.lang.test language in order to create language (node) tests.

Additionally, you need to make sure the solution containing your test model has a kind set - typically choose Other, if you do not need either of the two other options (Core plugin or Editor plugin). 


Right-clicking on the model allows you to create new unit or language tests. See all the root concepts that are available:


Unit testing with BTestCase

BTestCase stands for BaseLanguage Test Case and represents a unit test written in baseLanguage. Those familiar with JUnit will quickly feel at home.

A BTestCase has four sections - one to specify test members (fields), which are reused by test methods, one to specify initialization code, one for clean up code and finally a section for the actual test methods. The language also provides a couple of handy assertion statements, which code completion reveals.
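Schematically, the four sections are organized as in the following hand-written sketch (the real notation is projectional, and the section labels here only paraphrase it):

```
test case MyTest {
  test members:   // fields reused by the test methods
  init:           // initialization code
  tear down:      // clean-up code
  test methods:
    test method checkSomething {
      assert ...; // assertion statements offered by the language
    }
}
```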

TestInfo

In order to be able to run node tests, you need to provide more information through a TestInfo node in the root of your test model.

In particular, the Project path attribute is worth your attention. This is where you need to provide a path to the project root, either as an absolute or relative path, or as a reference to a Path Variable defined in MPS (Project Settings -> Path Variables).

Testing aspects of language definitions

Different aspects of language definitions are tested with different means:

  • Intentions, Actions, Side-transforms, Editor ActionMaps, KeyMaps
    Use the jetbrains.mps.lang.test language to create EditorTestCases. You set the stage by providing an initial piece of code, define a set of editing actions to perform against the initial code and also provide an expected outcome as another piece of code. Any differences between the expected and real output of the test will be reported as errors.
    See the Editor Tests section for details.
  • Constraints, Scopes, Type-system, Dataflow
    Use the jetbrains.mps.lang.test language to create NodesTestCases. In these test cases, write snippets of "correct" code and ensure that no error or warning is reported on them. Similarly, write "invalid" pieces of code and assert that an error or a warning is reported on the correct node.
    See the Nodes Tests section for details.
  • Generator, TextGen
    There is currently no built-in testing facility for these aspects. A few practices have worked for us over time:
    • Perhaps the most reasonable way to check the generation process is to generate models for which you already know the correct generation result, and then compare the generated output with the expected one. For example, if your generated code is stored in a VCS, you could check for differences after each run of the tests.
    • You may also consider providing code snippets that represent corner cases for the generator and check whether the generator successfully generates output from them, or whether it fails.
    • Compiling and running the generated code may also increase your confidence in the correctness of your generator.
  • Migrations
    Use a NodesTestCase to test the method that migrates a single node in the migration.
    See the Nodes Tests section for details.

Node tests

A NodesTestCase contains three sections:


The first one contains code that should be verified. The section for test methods may contain baseLanguage code that further investigates nodes specified in the first section. The utility methods section may hold reusable baseLanguage code, typically invoked from the test methods.

Checking for correctness

To test that the type system correctly calculates types and that proper errors and warnings are reported, you first write a piece of code in your desired language. Then select the nodes that you'd like to have tested for correctness and choose the Add Node Operations Test Annotation intention.
This will annotate the code with a check attribute, which can then be made more concrete by setting the type of the check:


Note that many of the options have been deprecated and should no longer be used.

The for error messages option ensures that potential error messages inside the checked node get reported as test failures. So, in the given example, we are checking that there are no errors in the whole Script.

Checking for type system and data-flow errors and warnings

If, on the other hand, you want to test that a particular node is correctly reported by MPS as having an error or a warning, use the has error / has warning option.


This works for both warnings and errors.


You can even tie the check to the rule that you expect to report the error / warning. Hit Alt + Enter with the cursor over the node and pick the Specify Rule References option:


An identifier of the rule has been added. You can navigate to the definition of the rule with Control/Cmd + B (or a click).


When run, the test will check that the specified rule is really the one that reports the error.

Type-system specific options

The check command offers several options to test the calculated type of a node.


Multiple expectations can be combined conveniently:

Testing scopes

The Scope Test Annotation allows the test to verify that the scoping rules bring the correct items into the applicable scope:


The Inspector panel holds the list of expected items that must appear in the completion menu and that are valid targets for the annotated cell:


Test and utility methods

The test methods may refer to nodes in your tests through labels. You assign labels to nodes using intentions:


The labels then become available in the test methods.

Editor tests

Editor tests allow you to test the dynamism of the editor - actions, intentions and substitutions.

An editor test case needs a name, an optional description, the code as it should look before the editor transformation, the code after the transformation (result) and finally, in the code section, the actual trigger that transforms the code.


For example, a test that an IfStatement of the Robot_Kaja language can be transformed into a WhileStatement by typing while in front of the if keyword would look as follows:


In the code section the jetbrains.mps.lang.test language gives you several options to invoke user-initiated actions - use type, press keys, invoke action or invoke intention. Obviously, you can combine these special test commands with plain baseLanguage code.

To mark the position of the caret in the code, use the appropriate intention with the cursor located at the desired position:

The cursor position can be specified in both the before and the after code:

The cell editor annotation has extra properties to fine-tune the position of the caret in the annotated editor cell. These can be set in the Inspector panel.

Running the tests

Inside MPS

To run tests in a model, just right-click the model in the Project View panel and choose Run tests:

If the model contains any of the jetbrains.mps.lang.test tests, a new instance of MPS is silently started in the background (that's why it takes quite some time to run these compared to plain baseLanguage unit tests) and the tests are executed in that new MPS instance. A new run configuration is created, which you can then re-use or customize:

The Run configurations dialog gives you options to tune the performance of tests.

  • Reuse caches - reusing the old caches of the headless MPS instance when running tests cuts away a lot of the time that would be needed to set up a test instance of MPS. This option can be set and unset in the run configuration dialog.
  • Save caches in - specify the directory to save the caches in. By default, MPS chooses the temp directory. With the Reuse caches option set, MPS saves its caches in the specified folder and reuses them whenever possible. If the option is unset, the directory is cleared on every run.
  • Execute in the same process - to speed up testing, tests can be run in a so-called in-process mode. It was designed specifically for tests that need to have an MPS instance running. (For example, for the language type-system tests MPS should safely be able to check the types of nodes on the fly.)
    The original way was to start a new MPS instance in the background and run the tests in that instance. This option instead allows all tests to be run in the same, original MPS process, so no new instance needs to be created. When the Execute in the same process option is set (the default setting), the test is executed in the current MPS environment. To run tests the original way (in a separate process), uncheck this option. In-process execution is applicable to all test kinds in MPS, so it works even for the editor tests.
    Icon

    Although performance is much better with in-process test execution, there are certain drawbacks to this workflow. Note that the tests are executed in the same MPS environment that holds the project, so the code you write in your test may potentially be dangerous and cause real harm. For example, a test that disposes the current project could destroy the whole project. So you need to be careful when writing such tests.
    There are certain cases when a test must not be executed in-process. In that case it is possible to switch an option in the inspector to prohibit in-process execution for that specific test.

    The test report is shown in the Run panel at the bottom of the screen:

From a build script

In order to have your generated build script offer the test target that you could use to run the tests using Ant, you need to import the jetbrains.mps.build.mps and jetbrains.mps.build.mps.tests languages into your build script, declare using the module-tests plugin and specify a test modules configuration.


Languages for IDE integration

A plugin is a way to integrate your code with the MPS IDE functionality.
The jetbrains.mps.lang.plugin and jetbrains.mps.lang.plugin.standalone languages give you a number of root concepts that can be used in your plugin. This chapter describes all of them.

Plugin instantiation

While developing a plugin, you have a solution holding the plugin and want the plugin classes to be automatically reloadable, so as not to have to restart MPS after each change to see its effect. To set up the development phase correctly, do the following:

  1. Create a new solution for your plugin
  2. Create a model in this solution named <solution_name>.plugin
  3. Import j.m.lang.plugin and j.m.lang.plugin.standalone languages into the solution and the model
  4. Create a root StandalonePluginDescriptor in the model (it comes from the  j.m.lang.plugin.standalone language)
  5. Set the solution's Solution Kind to Other

    You can now edit your plugin model and see the changes applied to the current MPS instance just after generation. You can also distribute the solution and have the plugin successfully working for the users.

Actions and action groups

One can add custom actions to any menu in MPS by using action and action group entities.

An action describes one concrete action. Action groups are named lists of actions intended for structuring actions - adding them to other groups and to MPS groups (which themselves represent menus) and combining them into popup menus. You can also create groups with dynamically changing contents.

How to add new actions to existing groups?

In order to add new actions to existing groups, the following should be done:

  1. actions should be described
  2. described actions should be composed into groups
  3. these groups should be added to existing groups (e.g. to predefined MPS groups to add new actions to MPS menus).

Predefined MPS groups are stored in the jetbrains.mps.ide.actions model, which is an accessory model to the jetbrains.mps.lang.plugin language, so you don't need to import it explicitly into your model.

Action structure

Action properties

Name - The name of an action. You can give any name you want, the only obvious constraint is that the names must be unique in the scope of the model.

Mnemonic - if a mnemonic is specified, the action will be available via the alt+mnemonic shortcut whenever any group that contains this action is displayed. Note that the mnemonic (if specified) must be one of the characters in the action's caption. The mnemonic is displayed as an underlined symbol in the action's caption.

Execute outside command - all operations with MPS models are executed within commands. A command is an item in the undo list (you don't control it manually; MPS does it for you), so the user can undo changes brought into the model by the action's execution. Also, all the code executed in a command has read-write access to the model. The catch is that if you show visual dialogs to the user from inside a command, it can cause a deadlock by blocking while holding the read/write locks. It is thus recommended to set the execute outside command option to false only if you are not using UI in your action. Otherwise it should be set to true and proper read/write access locking should be performed manually with the read action and command statements within the action.

Also available in - currently, this can only be set to "everywhere", which means the action will not only be available in the context, where you can invoke it through the completion menu, but also in any other context. E.g. if some action is added to the editor context menu group, but the author wants it to be available when the focus is in the logical view, or just when all the editors are closed, "also available in" should be set to "everywhere".

Caption - the string representing the action in menus

Description - this string (if specified) will be displayed in the status bar when this action is active (selected in any menu)

Icon - this icon will be displayed near the action in all menus. You can select the icon file by pressing the "..." button. Note that the icon must be placed near your language (because it's stored not as an image, but as a path relative to the language's root)

Construction parameters

Each action can be parameterized at construction time using construction parameters. These can be any data determining the action's behavior; thus, a single action that uses construction parameters can represent multiple different behaviors. To manage actions and handle keymaps, MPS needs a unique identifier for each concrete behavior represented by an action. For this reason, the toString function was introduced for each construction parameter (it can be seen in the inspector). For primitive types there is no need to specify this function explicitly - MPS can do it automatically. For more complex parameters, you need to write this function explicitly, so that each concrete behavior of an action gets a different set of values returned from the toString() functions.
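The requirement on toString() can be illustrated with a small plain-Python sketch (the actual identifier scheme MPS uses internally is not documented here; this only shows why two distinct parameter values must stringify differently):

```python
def behavior_id(action_name, params):
    """Hypothetical identifier for one concrete behavior of a parameterized
    action: the action name combined with the string form of each
    construction parameter (the role toString() plays in MPS)."""
    return action_name + "#" + "#".join(str(p) for p in params)

# Two parameter values with distinct string forms yield distinct behaviors:
print(behavior_id("OpenWith", ["firefox"]))  # OpenWith#firefox
print(behavior_id("OpenWith", ["chrome"]))   # OpenWith#chrome
```

If two different parameter values produced the same string, their behaviors would collide under one identifier, and keymap assignments could not tell them apart.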

Enable/disable action control

Is always visible flag - if you want your action to be visible even in the disabled state (when the action is not applicable in the current context), set this to true, otherwise to false.

Context parameters - specifies which items must be present in the current context for the action to be able to execute. They are extracted from the context before any action's method is executed. Context parameters have conditions associated with them - required and custom are the two most frequently used ones. If some required parameters were not extracted, the action state is set to disabled and the isApplicable/update/execute methods are not executed. If all required action parameters were extracted, you can use their values in all the action methods. Custom context parameters give you the option to decide whether the context parameter is mandatory on a case-by-case basis using the supplied function.

There are two types of action parameters - simple and complex.

  • Simple action parameters (represented by ActionDataParameterDeclaration) simply extract all available data from the current data context. The data is provided "by key", so you should specify the name and the key in the declaration. The type of the parameter will be set automatically.
  • Complex action parameters (represented by ActionParameterDeclaration) were introduced to perform some frequently used checks and typecasts. Three types are currently available for context parameters of this kind:
    • node<concept> - currently selected node, which is an instance of a specified concept. Action won't be enabled, if the selected node isn't an instance of this concept.
    • nlist<concept> - currently selected nodes. It is checked that all nodes are instances of the concept (if specified). As with node<concept>, the action won't be enabled if the check fails.
    • model - the current model holding the selected node

Is Applicable / update - In cooperation with the context parameters, this method controls the enabled/disabled state of the action. You can pick either of the two options:

  • The isApplicable method returns the new state of an action
  • The update method is designed to update the state manually. You can also update any of your action's properties (caption, icon etc.) by accessing the action's presentation via event.getPresentation(). Call the setEnabledState() method on an action to enable or disable it manually.

These methods are executed only if all required context parameters have been successfully extracted from the context.

Note: The this keyword refers to the current action, use action<...> to get hold of any visible action from your code.

Note

Icon

Do not use the isApplicable() method if you want to modify the presentation manually. Although no errors would be reported from within isApplicable(), it is not guaranteed to work properly in all cases. The update() method is a more suitable place for complex presentation manipulations.


Execute - this method is executed when the action is performed. It is guaranteed that it is executed only if the action's update method for the same event left the action in active state (or isApplicable returned true) and all the required context parameters are present in the current context and were filled in.

Methods - in this section you can declare utility methods.

Group structure

A group describes a set of actions and provides information about how to modify other groups with the current group.

Presentation

Name - The name of the group. You can give any name you want, the only obvious constraint is that the names must be unique in the scope of the model.

is popup - if this is true, the group represents a popup menu, otherwise it represents a list of actions.

When "is popup" is true:
  • Caption - string that will be displayed as the name of the popup menu
  • Mnemonic - if mnemonic is specified, the popup menu will be available via the alt+mnemonic shortcut when any group that contains it is displayed. Note that the mnemonic (if specified) must be one of the chars in caption. Mnemonic is displayed as an underlined symbol in the popup menu caption.
  • Is invisible when disabled - if set to true, the group will not be shown in case it has no enabled actions or is disabled manually in the update() method. Call the enable()/disable() methods on an action group to enable or disable it manually.
Contents

There are 3 possibilities to describe group contents:

Element list - this is just a static list of actions, groups and labels (see modifications). The available elements are:

  • ->name - an anchor. Anchors are used for modifying one group with another. See the Add statement section for details.
  • <---> - a separator
  • ActionName[parameters] - an action

Build - this alternative should be used for groups whose contents are static but depend on some initial conditions - the group is built once and is never updated afterwards. Use the add statement to add elements inside the build block.

Update - this goes for dynamically changing groups. The group is updated every time right before it is rendered.

Note

Icon

In the update/build blocks use the add statement to add group members.


Modifications and labels

Add to <group> at position <position> - this statement adds the current group to <group> at the given position. Every group has a <default> position, which tells MPS to add the current group to the end of the target group. Some groups provide additional positions by adding so-called anchors into themselves. Adding anchors is described in the contents section. The anchor itself is invisible and represents a position at which a group can be inserted.

Note

Icon
  • You don't need to care about the order of group creation and modification - this statement is fully declarative.
  • If A is added into B and B into C, then C will contain A.
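Putting the contents and modification constructs together, a static group might be sketched like this (a hand-written approximation; the real MPS notation is projectional, and the group and action names here are made up):

```
group MyGroup {
  contents:
    MyOpenAction                                // an action
    <--->                                       // a separator
    ->myAnchor                                  // an anchor other groups can target
  add to <SomeExistingGroup> at position <default>
}
```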

actionGroup <...> expression

There is a specific expression available in the jetbrains.mps.lang.plugin language to access any registered group - the actionGroup<group> expression.

Bootstrap groups

Bootstrap groups are a way to work with action groups that have been defined outside of MPS (e.g. groups contributed by IDEA or some IDEA plugin).
In this case, a bootstrap group is defined in MPS and its internal ID is set to the ID of the external group. Once this is done, you can work with the bootstrap group just like with a normal one - insert it into your groups and vice versa.
A regular user rarely needs to use bootstrap groups.

Tutorial

A quick and simple tutorial by Federico Tomassetti on how to create an action and show it in a context menu is available here:
http://www.federico-tomassetti.it/tutorial-how-to-add-an-action-to-the-jetbrains-metaprogramming-system/

Please bear in mind that this tutorial uses an older version of MPS, and the actual workings of MPS have changed since then. In particular, we now recommend using plugin solutions instead of the plugin aspect of a language to hold your actions. The tutorial may still give you some guidelines and useful insight.

Displaying progress indicators

Long-lasting actions should indicate their activity and progress to the user. Check out the Progress indicators page for details on how to use progress bars, how to allow for cancellation and how to enable actions for running in the background.

KeyMap Changes

The KeymapChangesDeclaration concept allows the plugin to assign key shortcuts to individual actions and group them into shortcuts schemes.

Any action can have a number of keyboard shortcuts. These can be specified using the KeyMapChanges concept. For a parameterized action, which has a number of "instances" (one instance per parameter value), a function can be specified that returns a different shortcut for each parameter value.
MPS ships with several "default keymaps", which you can see in Settings->Keymaps. The for keymap section lets you specify the keymap that the KeyMapChanges definition contributes to. E.g. one can set different shortcuts for the same action in the macOS and the Windows keymaps.
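The per-keymap contribution can be pictured with a small, hypothetical Java sketch (ShortcutTable, forKeymap and shortcutFor are illustrative names): the same action resolves to a different keystroke depending on the active keymap, falling back to a default.

```java
import java.util.*;

public class ShortcutTable {
    private final Map<String, String> byKeymap = new HashMap<>();
    private final String defaultShortcut;

    public ShortcutTable(String defaultShortcut) {
        this.defaultShortcut = defaultShortcut;
    }

    // Analogous to a "for keymap" section: override the shortcut for one keymap.
    public void forKeymap(String keymap, String shortcut) {
        byKeymap.put(keymap, shortcut);
    }

    // Resolve the shortcut for the active keymap, defaulting otherwise.
    public String shortcutFor(String keymap) {
        return byKeymap.getOrDefault(keymap, defaultShortcut);
    }
}
```

For example, a table with default "ctrl shift T" and a macOS override "meta shift T" yields the cmd-based shortcut only on the macOS keymap.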

Default Keymap


If you add a keyboard shortcut to the Default keymap, all keymaps are altered with this shortcut.

macOS

Note that by default ctrl is changed to cmd in the macOS keymap. If you want your action to have a ctrl + something shortcut on macOS, you should re-define this shortcut for the macOS keymap.

All the actions added by plugins are visible in Settings->Keymap and Settings->Menus and Toolbars. This means that any user can customize the shortcuts used for all MPS actions.

A KeyMap Change should be given a name that is unique within the model; it must specify the keymap that is being altered (or Default to change all keymaps) and then assign a keystroke to the actions that should have one. The keystroke can either be a SimpleShortcutChange with a directly specified keystroke or a ParametrizedShortcutChange, which gives you the ability to handle parametrized actions.

NonDumbAwareActions

If your action uses platform indices (which is very rare), add it to NonDumbAwareActions. Such actions will be automatically disabled while the indices are being built.

Editor Tabs

If you look at any concept declaration, you will certainly notice the tabs at the bottom of the editor. You can add the same functionality to the concepts of your own language.

What is the meaning of these tabs? The answer is pretty simple - they contain the editors for some aspects of the "base" node. Each tab can be either single-tabbed (only one node is displayed in it, e.g. the editor tab) or multi-tabbed (if multiple nodes can be created for this aspect of the base node; see the Typesystem tab, for example).

How is the editor for a node created? When you open some node, call it N, MPS tries to find the "base" node for N. If there is no base node, MPS simply opens the editor for the selected node. If one is found (call it B), MPS opens a set of tabs for it, containing editors for its subordinate nodes. It then selects the tab for N and sets the top icon and caption corresponding to B.
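The lookup logic just described can be sketched as a tiny, hypothetical Java model (TabOpener, registerBase and open are illustrative names, not MPS API): if no base node is registered for N, a plain editor opens; otherwise the tabbed editor for B opens with N's tab selected.

```java
import java.util.*;

public class TabOpener {
    // node -> its "base" node (e.g. a typesystem rule -> its concept)
    private final Map<String, String> baseOf = new HashMap<>();

    public void registerBase(String node, String base) {
        baseOf.put(node, base);
    }

    // Returns a description of what gets opened for node n.
    public String open(String n) {
        String b = baseOf.get(n);
        if (b == null) {
            return "plain editor for " + n;  // no base node found
        }
        // base node found: tabbed editor for B, with N's tab selected
        return "tabbed editor for " + b + ", tab " + n + " selected";
    }
}
```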

When you create tabbed editors, you actually provide rules for:

  • finding the base node
  • finding subordinate nodes
  • optionally, an algorithm for creating subordinate nodes

The tabs that match the requested base concept are displayed and organized depending on their relative order rules specified in their respective order constraints sections.

Editor Tab Structure

Name - The name of the rule. You can give it any name you want; the only obvious constraint is that names must be unique in the scope of the model.

Icon - this icon will be displayed in the header of the tab. You can select the icon file by pressing the "..." button. Note that the icon file must be placed near your language, because it is stored not as an image but as a path relative to the language's root.

Shortcut char - a char to quickly navigate to the tab using the keyboard

Order constraints - an instance of the Order concept. Orders specify the order in which the current tab should be displayed relative to the other tabs. You can either refer to an external order or specify one in-place.

Base node concept - the concept of the base node for this as well as all the related tabs.

Base Node - a rule for finding the base node given a known node. It should return null if the base node is not found or this TabbedEditor can't be applied.

Is applicable - indicates whether the tab can be used for the given base node

command - indicates whether the node creation should be performed as a command, i.e. whether it should be undoable and use no additional UI interaction with the user.

getNode/getNodes - should return the node or a list of nodes to edit in this tab

getConcepts - returns the concepts of nodes that this tab can be used to edit

Create - if specified, this will be executed when the user asks to create a new node from this tab. It is given the requested concept and the base node as parameters.

Tools

A tool is an instrument that has a graphical presentation and is aimed at performing some specific task. For example, the Usages View, the Todo Viewer, and the Model and Module Repository Viewers are all tools. MPS has rich UI support for tools - you can move them by drag-and-drop from one edge of the window to another, hide them, show them and perform many other actions.

Tools are created "per project". They are initialized/disposed on class reloading (after language generation, on the "reload all" action, etc.).

Tool structure

Name - The name of the tool. You can give it any name you want; the only obvious constraint is that names must be unique in the scope of the model.

Caption - this string will be displayed in the tool's header and on the tool's button in the tools pane

Number - if specified, alt+number becomes a shortcut for showing this tool (if it is available)

Icon - the icon to be displayed on the tool's button. You can select the icon file by pressing the "..." button. Note that the icon file must be placed near your language, because it is stored not as an image but as a path relative to the language's root.

Position - one of top/bottom/left/right; adds the tool to the desired MPS tool bar

Init - initialize the tool instance here

Dispose - dispose all the tool resources here

getComponent - should return a Swing component (an instance of a class extending JComponent) to display inside the tool's window. If you plan to create tabs in your tool and are familiar with the tools framework in IDEA, it is better to use IDEA's support for tabs. Using this framework greatly improves the tabs' functionality and UI.
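A minimal getComponent implementation can be as simple as returning any JComponent subclass. The sketch below is a hypothetical example (MyToolContent is an illustrative name; in MPS this body would live inside the tool's getComponent block):

```java
import javax.swing.*;

public class MyToolContent {
    // Returns the Swing component to display inside the tool's window.
    public static JComponent getComponent() {
        JPanel panel = new JPanel();
        panel.add(new JLabel("Hello from my tool"));
        return panel;
    }
}
```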

Fields and methods - regular fields and methods; you can use them in your tool and in external code.

Tool operation

We added this operation (the GetToolInProjectOperation concept) to simplify accessing a tool in some project. Use it as project.tool<toolName>, where project is an IDEA Project. Do not forget to import the jetbrains.mps.lang.plugin.standalone language to be able to use it.

Be careful


This operation can't currently be used in the dispose() method

Tabbed Tools

A tabbed tool is the same as a regular tool, but it can additionally contain multiple tabs.

Tool structure

Name - The name of the tool. You can give it any name you want; the only obvious constraint is that names must be unique in the scope of the model.

Caption - this string will be displayed in the tool's header and on the tool's button in the tools pane

Number - if specified, alt+number becomes a shortcut for showing this tool (if it is available)

Icon - the icon to be displayed on the tool's button. You can select the icon file by pressing the "..." button. Note that the icon file must be placed near your language, because it is stored not as an image but as a path relative to the language's root.

Position - one of top/bottom/left/right; adds the tool to the desired MPS tool bar

Init - initialize the tool instance here

Dispose - dispose of all the tool's resources here

Fields and methods - regular fields and methods; you can use them in your tool and in external code.

Preferences components

Sometimes you may want to be able to edit and save some settings (e.g. your tools' settings) between MPS startups. We have introduced preferences components for these purposes.

Each preferences component includes a number of preferences pages and a number of persistent fields. A preferences page is a dialog for editing user preferences. These pages are accessible through File->Settings.

Persistent fields are saved to the .iws files when the project is closed and restored from them on project open. The saving process uses reflection, so you don't need to care about serialization/deserialization in most cases.

Note

Only primitive types and non-abstract classes can be used as types of persistent fields. If you want to store some complex data, create a persistent field of type org.jdom.Element (do not forget to import the org.jdom model), annotate it with com.intellij.util.xmlb.annotations.Tag, and serialize/deserialize your data manually in the after read / before write blocks.
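The note above prescribes org.jdom.Element; since JDOM is not available in this self-contained sketch, the following hypothetical example uses the JDK's built-in org.w3c.dom classes to show the same idea - flattening complex data to an XML element in before write and restoring it in after read. ComplexPrefs, write, read and roundTrip are illustrative names only.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.util.*;

public class ComplexPrefs {
    // "before write": flatten a list of strings into an XML element
    public static Element write(Document doc, List<String> items) {
        Element root = doc.createElement("items");
        for (String s : items) {
            Element e = doc.createElement("item");
            e.setTextContent(s);
            root.appendChild(e);
        }
        return root;
    }

    // "after read": restore the list from the element
    public static List<String> read(Element root) {
        List<String> items = new ArrayList<>();
        NodeList list = root.getElementsByTagName("item");
        for (int i = 0; i < list.getLength(); i++) {
            items.add(list.item(i).getTextContent());
        }
        return items;
    }

    // Convenience helper for demonstration: a full serialize/deserialize roundtrip.
    public static List<String> roundTrip(List<String> items) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            return read(write(doc, items));
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }
}
```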

Preferences component structure

name - the component name. You can give it any name you want; the only obvious constraint is that names must be unique in the scope of the model.

fields - these are the persistent fields. They are initialized before the after read block runs and before pages are created, so their values are correct at any moment they can be accessed. They can also have default values specified.

after read / before write - these blocks are used for custom serialization and for applying/collecting preferences that have no corresponding preferences pages (e.g. tool dimensions)

pages - preferences pages

Preferences page structure

name - the string to be used as a caption in Settings page. The name must be unique within a model.

component - a UI component to edit preferences.

Hint

The uiLanguage components can be used here.

icon - the icon to show in the Settings window. The icon can be up to 32x32 pixels in size.

reset - reset the preferences values in the UI component when this method is called.

commit - in this method, preferences should be collected from the UI component and committed to wherever they are used.

isModified - if this method returns false, commit won't be executed. This is typically useful for preferences pages with a long-running commit method.

PreferenceComponent expression

We added an expression to simplify accessing a PreferenceComponent in some project. You can access it as project.preferenceComponent<componentName>, where project is an IDEA Project. Do not forget to import the jetbrains.mps.lang.plugin.standalone language to use it.

Be careful

This operation can't currently be used in the dispose() method.

Custom plugin parts (ProjectPlugin, ApplicationPlugin)

Custom plugin parts are custom actions performed on plugin initialization/disposal. They behave exactly like plugins. You can create as many custom plugins for your language as you want. There are two types of custom plugins - project and application custom plugins. A project custom plugin is instantiated once per project, while an application custom plugin is instantiated once per application and therefore does not have a project parameter.


In MPS, any model consists of nodes. Nodes can have many types of relations. These relations may be expressed in the node structure (e.g. the "class descendants" relation on classes) or not (e.g. the "overriding method" relation on methods). Find Usages is a tool that displays specifically related nodes for a given node.

In MPS, the Find Usages system is fully customizable - you can write your own entities, so-called finders, which represent algorithms for finding related nodes. For every type of relation there is a corresponding finder.

This is how a "find usages" result looks:

Using Find Usages Subsystem

You can press Alt+F7 on a node (no matter where - in the editor or in the project tree) to see what kind of usages MPS can search for.

You can also right-click a node and select "Find Usages" to open the "Find usages" window.


 

Finders - select the categories of usages you want to search for

Scope - lets you select where to search for usages - in a concrete model, a module, the current project, or everywhere

View Options - additional view options

After adjusting your search, click OK to run it. Results will be shown in the Find Usages Tool as shown above.

Finders

To implement your own mechanism for finding related nodes, you need to become familiar with finders. For every relation there is a specific finder that provides all the information about the search process.

Where to store my finders?

Finders can be created in any model that imports the findUsages language. However, MPS collects finders only from the findUsages language aspect. So, if you want your finder to be used by the MPS Find Usages subsystem, it must be stored in the findUsages aspect of your language.

Finder structure

name

The name of the finder. You can choose any name you want; the only obvious constraint is that names must be unique in the scope of the model.

for concept

The finder will be tested for applicability only on nodes that are instances of this concept or its subconcepts.

description

This string represents the finder in the list of finders. Should be rather short.

long description

If it's not clear from the description string what exactly the finder does, you can add a long description, which will be shown as a tooltip for the finder in the list of finders.

is visible

Determines whether the finder is visible for the current node. For example, a finder that finds ancestor classes of some class should not be visible when this class has no parent.

is applicable

Finders that have passed the for concept check are tested for applicability on the current node. If this method returns true, the finder is shown in the list of available finders; otherwise it is not shown. The node argument of this method is guaranteed to be an instance of the concept specified in for concept or one of its subconcepts.
Please note the difference between is visible and is applicable. The former is responsible only for presentation, while the latter represents a "valid call" contract between the finder and its caller. This is important because the findUsages language has an execute statement, described in the execute section below.

find

This method should find the given node's usages in the given scope. For each found usage, use the add result statement to register it.

searched nodes

This method returns the nodes for which the finder searched. These nodes are shown in the searched nodes subtree in the tool.
For each node to display, use the add node statement to register it.

get category

Found nodes can be grouped in the tool in a number of ways. One of them is grouping by category, which is assigned to every found node by the finder that found it. This method returns a category for each node found by this finder.
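The grouping the tool performs can be sketched in plain Java. This is a hypothetical illustration (UsageGrouper and groupByCategory are made-up names): each found node carries a category string, and results are bucketed by it.

```java
import java.util.*;

public class UsageGrouper {
    // Groups found nodes (keys) by the category string (values)
    // that their finder's "get category" method assigned to them.
    public static Map<String, List<String>> groupByCategory(
            Map<String, String> nodeToCategory) {
        Map<String, List<String>> grouped = new TreeMap<>();
        for (Map.Entry<String, String> e : nodeToCategory.entrySet()) {
            grouped.computeIfAbsent(e.getValue(), k -> new ArrayList<>())
                   .add(e.getKey());
        }
        return grouped;
    }
}
```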

What does the MPS Find Usages subsystem do automatically? 

  • Stores search options between multiple invocations and between MPS runs
  • Stores search results between MPS runs
  • Automatically handles deleted nodes
  • All visualization of and operations on found nodes are done by the subsystem, not by the finders

Specific Statements

execute

Finders can be reused thanks to the execute statement. The execution of this statement consists of two steps: validating the search query (checking for concept and is applicable) and executing the find method. This is where the difference between is applicable and is visible matters: is applicable guards the execute statement, so a finder may be invoked through execute even when it is not visible, but invoking a finder that is not applicable results in an error.

Examples

You can see some finder examples in jetbrains.mps.baseLanguage.findUsages

You can also find all finders by going to the FinderDeclaration concept (Ctrl+N, type "FinderDeclaration", then press ENTER) and finding all instances of this concept (Alt+F7, check instances, then check Global Scope).


One of the most effective ways to maintain high code quality in MPS is the instant on-the-fly code analysis that highlights errors, warnings and potential problems directly in code. Just like with other code quality reporting tools, it is essential that the user be able to mark false positives so that they are not reported repeatedly. MPS provides language developers with a customizable way to suppress errors in their languages. This functionality was used to implement the Suppress Errors intention for BaseLanguage.
The generators are another place where this feature is useful, since type errors, for example, are sometimes unavoidable in templates.

If a node is an instance of a concept that implements the ISuppressErrors interface, no issues are shown on this node or any of its children. For example, comments in BaseLanguage implement ISuppressErrors. It is also possible to define child roles in which issues should be suppressed by overriding the boolean method suppress(node<> child) of the ISuppressErrors interface.
Additionally, if a node has an attribute of a concept that implements ISuppressErrors, issues in that node are suppressed, too. There is a convenient default implementation of an ISuppressErrors node attribute called SuppressErrorsAttribute. It can only be applied to nodes that are instances of ICanSuppressErrors.

An example of using the SuppressErrorsAttribute attribute and the corresponding intention.

There is an error in editor:

 

BaseLanguage Statement implements ICanSuppressErrors, so the user can apply the highlighted intention here:

Now the error isn't highlighted any longer, but there is a newly added cross icon in the left pane. The SuppressErrorsAttribute can be removed either by pressing that cross or by applying the corresponding intention.


Debugger

MPS provides an API for creating custom debuggers as well as for integrating with the Java debugger. See the Debugger Usage page for a description of MPS debugger features.

Integration with MPS java debugger engine

To integrate your Java-generated language with the MPS Java debugger engine, you need to complete several steps. Not all of them are absolutely necessary - which ones are depends on the language. See the following sections for details.

Nodes to trace and breakpoints

Suppose you have a language, let's call it high.level, which generates code in some language low.level, which in turn is generated directly into text (there can be several other steps between high.level and low.level). Suppose that the text generated from low.level consists of Java classes and you want your high.level language integrated with the MPS Java debugger engine. The following cases arise:

 

  • high.level extends baseLanguage (uses concepts Statement, Expression, BaseMethodDeclaration etc.):
    • If low.level is baseLanguage: nothing needs to be done.
    • If low.level is not baseLanguage: specify which concepts in low.level are traceable.
  • high.level does not extend baseLanguage:
    • If low.level is baseLanguage: use breakpoint creators to be able to set breakpoints for high.level.
    • If low.level is not baseLanguage: specify which concepts in low.level are traceable, and use breakpoint creators to be able to set breakpoints for high.level.

Startup of a run configuration under java debugger

MPS provides a special language for creating run configurations for languages generated into java – jetbrains.mps.baseLanguage.runConfigurations. Those run configurations are able to start under debugger automatically. See Run configurations for languages generated into java for details.

Custom viewers

When viewing variables and fields in the variables view, one may want to define a custom way to show certain values. For instance, collections could be shown as a collection of elements rather than as an ordinary object with all its internal structure.

For creating custom viewers, MPS has the jetbrains.mps.debugger.java.customViewers language. It enables one to write one's own viewers for data of a certain form.

The main concept of the customViewers language is the custom data viewer. It receives a raw Java value (an object on the stack) and returns a list of so-called watchables. A watchable is a pair of a value and its label (a string that categorizes the value, i.e. whether it is a method, a field, an element, a size, etc.). Labels for watchables are defined in a custom watchables container. Each label can be assigned an icon.

The viewer for a specific type is defined in a custom viewer root. The parts of a custom viewer are described below:

for type - the type for which this viewer is intended.

can wrap - an additional filter for viewed objects.

get presentation - a string representation of an object.

get custom watchables - the subvalues of the object. The result of this function must be of type watchable list.

Custom Viewers language introduces two new types: watchable list and watchable.

This is the custom viewer specification for java.util.Map.Entry class:

And here we see how a map entry is displayed in debugger view:
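The Map.Entry viewer above can be modeled as a small, hypothetical Java sketch (MapEntryViewer and Watchable are illustrative stand-ins for the real customViewers concepts): get presentation renders the entry as a string, and get custom watchables returns a key watchable and a value watchable.

```java
import java.util.*;

public class MapEntryViewer {
    // Illustrative stand-in for the real watchable type: a labelled value.
    public static class Watchable {
        public final String label;
        public final Object value;
        Watchable(String label, Object value) {
            this.label = label;
            this.value = value;
        }
    }

    // "get presentation": a string representation of the entry
    public static String getPresentation(Map.Entry<?, ?> entry) {
        return entry.getKey() + " -> " + entry.getValue();
    }

    // "get custom watchables": labelled subvalues of the entry
    public static List<Watchable> getCustomWatchables(Map.Entry<?, ?> entry) {
        return Arrays.asList(new Watchable("key", entry.getKey()),
                             new Watchable("value", entry.getValue()));
    }
}
```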

Creating a non-java debugger

The debugger API provided by MPS allows you to create a non-Java debugger. All the necessary classes are located in the "Debugger API for MPS" plugin. See also the Debugger API description.

Traceable Nodes

This section describes how to specify which nodes need to save additional information into the trace.info file (such as the positions of the text generated from the node, visible variables, the name of the file the node was generated into, etc.). trace.info files contain the information that connects nodes in MPS with the generated text. For example, when a breakpoint is hit, the Java debugger tells MPS the line number in the source file, and MPS uses the information from trace.info files to find the actual node.

Specifically, trace.info files contain the following information:

  • position information: the name of the text file and the position in it where the node was generated;
  • scope information: for each "scope" node (one that has variables associated with it and visible in its scope) – the names and ids of the variables visible in the scope;
  • unit information: for each "unit" node (one that represents a unit of the language, for example a class in Java) – the name of the unit the node is generated into.

The concepts TraceableConcept, ScopeConcept and UnitConcept of the jetbrains.mps.lang.traceable language are used for this purpose. To save information into the trace.info file, derive from one of these concepts and implement the corresponding behavior method. The concepts are described below.

TraceableConcept - concepts for which the location in text is saved and for which breakpoints can be created. Behavior method: getTraceableProperty – a property to be saved into the trace.info file.

ScopeConcept - concepts that have local variables visible in their scope. Behavior method: getScopeVariables – the variable declarations in the scope.

UnitConcept - concepts that are generated into separate units, like classes or inner classes in Java. Behavior method: getUnitName – the name of the generated unit.

trace.info files are created at the last stage of generation – while generating text. The described concepts are therefore only to be used in languages generated into text.

When automatic tracing is impossible, the $TRACE$ macro can be used to set the input node for generated code (since MPS 2.5.2).

Breakpoint Creators

To specify how breakpoints are created on various nodes, a breakpoint creators root is used. It is a root of the BreakpointCreator concept from the jetbrains.mps.debugger.api.lang language and should be located in the language's plugin model. It contains a list of BreakpointableNodeItems, each of which specifies a list of concepts to create breakpoints for and a method that actually creates a breakpoint. jetbrains.mps.debugger.api.lang provides several concepts to operate with debuggers, and specifically to create breakpoints. They are described below.

  • DebuggerReference – a reference to a specific debugger, like the Java debugger;
  • CreateBreakpointOperation – an operation that creates a location breakpoint of the specified kind on a given node for a given project;
  • DebuggerType – a special type for references to debuggers.

The following example shows the breakpoint creators node from baseLanguage.

To provide more complex filtering behavior, breakpoint creators can use the isApplicable function instead of a plain concept list. There is an intention to switch to using this function.


Build Facets

Overview

Like basically any build or make system, the MPS make process executes a sequence of steps, or targets, to build an artifact. The global ordering of the necessary make steps is derived from the relative priorities specified for each build target (target A has to run before B, and B has to run before C, so the global order is A, B, C).

A complete build process may address several concerns, for example generating models into text, compiling these models, deploying them to a server, or generating .png files from graphviz source files. In MPS, such different build aspects are implemented with build facets. A facet is a collection of targets that address a common concern.

The targets within a facet can exchange configuration parameters. For example, a target that is declared to run early in the overall make process may collect configuration parameters and pass them to a later target, which then uses them. The mechanism for this intra-facet parameter exchange is called properties. In addition, targets can use queries to obtain information from the user during the make process.

The overall make process is organized along the pipes and filters pattern. The targets act as filters, working on a stream of data being delivered to them. The data flowing among targets is called resources. There are different kinds of resources, all represented as different Java interfaces:

  • IMResource contains MPS models created by users, those that are contained in the project's solutions and languages.
  • IGResource represents the results of the generation process, which includes the output models, that is the final state of the models after generation has completed. These are transient models, which may be inspected by using the Save Transient Models build option.
  • ITResource represents the text files generated by textgen towards the end of the make process.

Build targets specify an interface. According to the pipes and filters pattern, the interface describes the kind of data that flows into and out of a make target. It is specified in terms of the resource types mentioned above, as well as in terms of the kind of processing the target applies to these resources. The following four processing policies are defined:

  • transform is the default. This policy consumes instances of the input resource type and produces instances of the output resource type (e.g. it may consume IMResources and produce ITResources).
  • consume consumes the declared input, but produces no output.
  • produce consumes nothing, but produces output.
  • pass through neither produces nor consumes any resources.
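The pipes-and-filters arrangement can be sketched with a hypothetical Java pipeline (MakePipeline is an illustrative name; resources are modeled as plain strings): each transform-style target maps the incoming stream of resources to an outgoing one, and the targets run in their globally ordered sequence.

```java
import java.util.*;
import java.util.function.Function;

public class MakePipeline {
    // Runs each "target" (a resource transformer) over the whole stream,
    // feeding one target's output into the next, in order.
    public static List<String> run(List<String> input,
                                   List<Function<String, String>> targets) {
        List<String> current = input;
        for (Function<String, String> target : targets) {
            List<String> next = new ArrayList<>();
            for (String resource : current) {
                next.add(target.apply(resource));
            }
            current = next;
        }
        return current;
    }
}
```

For instance, two chained targets that mimic generation and textgen turn "model" into "model.generated.java".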

Note that the make process is more coarse-grained than model generation. In other words, there is a single facet that runs all the model generators. If one needs to "interject" additional targets into the MPS generation process (as opposed to doing something before or after model generation), this requires refactoring the generate facets, which is beyond the scope of this discussion.

Building an Example Facet

As part of the mbeddr.com project to build a C base language for MPS, the actual C compiler has to be integrated into the MPS build process. More
specifically, programs written in the C base language contain a way to generate a Makefile. This Makefile has to be executed once it and all the
corresponding .c and .h files have been generated, i.e. at the very end of the MPS make process.

To do this, we built a make facet with two targets. The first one inspects the input models and collects the absolute paths of the directories that may contain a Makefile after textgen. The second target then checks whether there actually is a file called Makefile in each of these directories and, if so, runs make there. The two targets exchange the directories via properties, as discussed in the overview above.

The first target: Collecting Directories

Facets live in the plugins aspect of a language definition. Make sure you include the jetbrains.mps.make.facets language in the plugins model, so you can create instances of FacetDeclaration. A facet is executed as part of the make process of a model if that model uses the language that declares the facet.

The facet is called runMake. It depends on TextGen and Generate. The dependencies on those two facets have to be specified so we can declare our targets' priorities relative to targets in those facets.

The first target is called collectPaths. It is specified as transform IMResource -> IMResource in order to get access to the input models. As priorities, the target specifies after configure and before generate. The latter is obvious, since we want to get at the models before they are generated into text. The former priority essentially says that we want this target to run after the make process has been initialized (in other words: if you want to do something "at the beginning", use these two priorities).

We then declare a property pathes, which we use to store information about the modules that contain make files and the paths to the directories in which the generated code will reside.

Let's now look at the implementation code of the target. Here is the basic structure. We first initialize the pathes list. We then iterate over the input (which is a collection of resources) and do something with each element (explained below). We then use the output statement to pass on the input data, i.e. we just pass through whatever came into our target. We use the success statement to finish the target successfully (using success at the end is optional, since it is the default). If something goes wrong, the failure statement can be used to terminate the target unsuccessfully.

The actual processing is straightforward Java programming against MPS data structures:

We use the getGeneratorOutputPath method to get the path to which the particular module generates its code (this can be configured by the user in the model properties). We then take the model's dotted name and replace the dots with slashes, since this is where the generated files of a model in that module will end up (inspect any example MPS project to see this). We then store the module's name and the model's name, separated by a slash, as a way of improving the logging messages in our second target (via the locationInfo variable). We add the two strings to the pathes collection. This pathes property is queried by the second target in the facet.

The second Target: Running Make

This one uses the pass through policy since it does not have to deal with resources. All the input it needs it can get from the properties of the collectPaths target discussed above. This second target runs after collectPaths, after textGen and before reconcile. It obviously has to run after collectPaths, since it uses the property data populated by it. It has to run after textGen, otherwise the make files aren't there yet. And it has to run before reconcile, because basically everything has to run before reconcile.

Let us now look at the implementation code. We start by grabbing all those entries from the collectPaths.pathes property that actually contain a
Makefile. If none is found, we return with success.

We then use the progress indicator language to set up the progress bar with as many work units as we have directories with make files in them.

We then iterate over all the entries in the modelDirectoriesWithMakefile collection. In the loop we advance the progress indicator and then use
standard Java APIs to run the make file.
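The "standard Java APIs" part of the loop body might look roughly like this. The directory value is made up for illustration; in the facet it comes from the collected entries:

```java
import java.io.File;

public class RunMake {
    // Prepare a make invocation for one directory containing a Makefile.
    static ProcessBuilder makeProcess(String directory) {
        ProcessBuilder pb = new ProcessBuilder("make");
        pb.directory(new File(directory)); // run make in the model's output directory
        pb.redirectErrorStream(true);      // merge stderr into stdout for logging
        return pb;
    }

    public static void main(String[] args) {
        ProcessBuilder pb = makeProcess("build/com/acme/demo");
        System.out.println(pb.command());
        // pb.start().waitFor() would actually execute make here
    }
}
```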

To wrap up the target, we use the finish statement to clean up the progress bar.

Extensions support

Extensions provide a way to extend certain aspects of a solution or a language that are not covered by the standard language aspects and the plugin mechanisms. Typically you may need your language to slightly alter its behavior depending on the distribution model - MPS plugin, IntelliJ IDEA plugin or a standalone IDE. In such cases you define your extension points as interfaces, for which different implementations are then provided in the different distributions.

Support for extensions exists in

  • languages
  • plugin solutions

Quick howto

  1. Create an extension point
  2. Create one or more extensions
  3. Both the extension point and the extension must be in the plugin model
    1. Each extension must provide a get method, returning an object
    2. Each extension may opt to receive the activate/deactivate notifications
    3. An extension may declare fields, just like classes can

Extension language

The language jetbrains.mps.lang.extension declares concepts necessary for building extensions.

Extension point

The ExtensionPoint concept represents an extension point. The extension object type must be specified as a parameter.

Extension

The Extension concept is used to create a concrete extension.

Accessing extension point

An extension point can be accessed by reference using the extension point expression.

Accessing extension objects

An extension point includes a way to access all objects provided by its extensions.

Be Careful


Objects returned by the extensions are transient in nature: they may become obsolete as soon as a module reloading event happens. It is therefore not recommended to cache these objects; instead, it is better to get a fresh copy each time.

Java API

Extension points and extensions are managed by the ExtensionRegistry core component.


IDE tools

The Dependencies Analyzer can report dependencies among modules or models. It can be called from the main menu or from the popup menu of modules/models:

The interactive report, shown in a panel at the bottom, allows the user to view usages of modules by other modules. The panel on the right side displays modules and models dependent on the module selected in the left-hand side list.

Unlike the Module Dependencies Tool, which simply visualizes the dependency information specified in model properties, the Analyzer checks the actual code and performs dependency analysis. It detects and highlights the elements that you really depend on.

The Module Dependencies Tool allows the user to overview all the dependencies and used languages of a module or a set of modules, to detect potential cyclic dependencies as well as to see detailed paths that form the dependencies. The tool can be invoked from the project pane when one or more modules are selected.

The Module Dependencies Tool shows all transitive dependencies of the modules in the left panel. Optionally it can also display all directly or indirectly used languages. It is possible to expand any dependency node and get all dependencies of the expanded node as children. These will again be transitive dependencies, but this time for the expanded node.

Select one or more of the dependency nodes in the left panel. The right panel will show paths to each of the selected modules from its "parent" module. You can see a brief explanation of each relation between modules in the right tree. The type of a dependency can be one of: depends on, uses language, exports runtime, uses devkit, etc. For convenience, the name of the target dependent module is shown in bold.

There are two types of dependency paths: Dependency and Used Language. When you select a module in the Used Language folder in the left tree, the right tree shows only the dependency paths that introduce the used language relation for the given module. To show "ordinary" dependencies on a language module, you should select it outside of the Used Languages folder (e.g. the jetbrains.mps.lang.core language in the picture below). It is also possible to select multiple nodes (e.g. the same language dependency both inside and outside of the Used Language folder). In that case you get a union of results for both paths.

When you are using a language that comes with its own libraries, those libraries are typically not needed to compile your project. It is at runtime that the libraries must be around for your code to work. To track runtime dependencies in addition to the "compile-time visible" ones, check the Runtime option in the toolbar. The runtime dependencies are marked with a "(runtime)" comment.

The default order for dependency paths is by their length, starting from the shortest. However, some paths are not shown by default - paths that have the same tail part as one of the already shown paths. It is still possible to display all such paths in the right tree with the "Show all paths" option. For these, only the starting (distinct) part of the path is shown, while the symbols "... -->" mean that a path shown somewhere above in the tree describes the rest of the dependency path. You can follow the path by double-clicking its last element.

The modules in the left tree that participate in dependency cycles are shown in red. It is possible to see the paths forming a cycle by selecting the module dependency that refers to the parent or, for convenience, by using the popup menu:


For some types of dependencies the pop-up menu offers the possibility to invoke convenience actions such as Show Usages or Safe Delete. For the "depends on" dependencies (those without re-export) Dependencies Analyzer will be invoked for the Show Usages action.

Run Configurations

Introduction

Run configurations allow you to define how to execute programs written in your language.

An existing run configuration can be executed either from the run configurations box located on the main toolbar,

via the "Run" item in the main menu

or through the run/debug popup (Alt+Shift+F10/Alt+Shift+F9).

Run configurations can also be executed/created for nodes, models, modules and the whole project. For example, the JUnit run configuration can run all tests in a selected project, module or model. See Producers on how to implement such behavior for your run configurations.

To summarize, run configurations define the following things:

  • On the creation stage:
    • the configuration's name, caption and icon;
    • the configuration's kind;
    • how to create a configuration from node(s), model(s), module(s) or the project.
  • On the configuration stage:
    • persistent parameters;
    • an editor for the persistent parameters;
    • a validity checker for the persistent parameters.
  • On the execution stage:
    • the process which is actually executed;
    • the console with all its tabs, action buttons and the actual console window;
    • things required for debugging this configuration (if applicable).

The following languages were introduced in MPS 2.0 to support run configurations:

  • jetbrains.mps.execution.common (common language) – contains concepts utilized by the other execution* languages;
  • jetbrains.mps.execution.settings (settings language) – a language for defining various settings editors;
  • jetbrains.mps.execution.commands (commands language) – process invocation from Java;
  • jetbrains.mps.execution.configurations (configurations language) – run configuration definition.

Settings

The settings language allows you to create settings editors and integrate them into one another. What we need from a settings editor is the following:

  • fields to edit;
  • validation of fields correctness;
  • editor UI component;
  • apply/reset functions to apply settings from UI component and to reset settings in the UI component to the saved state;
  • dispose function to destroy UI component when it is no longer needed.

As you can see, settings have UI components. Usually, one UI component is created for multiple instances of settings. In the settings language, settings are usually called "configurations" and their UI components are called "editors".

The main concept of settings language is PersistentConfigurationTemplate. It has the following sections:

  • persistent properties. This section describes the actual settings we are editing. Since we also want to persist these settings (i.e. write them to/read them from XML) and to clone our configurations, there is a restriction on their type: each property must be either Cloneable, a String or of a primitive type. There is also a special kind of property, the template persistent property, which is discussed later.
  • editor. This section describes the editor of the configuration. It has the functions create, apply to, reset from and dispose. The section can also define fields to store some objects in the editor. The create function should return a Swing component – the main UI component of the editor. The apply to/reset from functions apply or reset settings in the editor to/from the configuration given as a parameter. The dispose function disposes of the editor.
  • check. In this section persistent properties are checked for correctness. If some properties are not valid, a report error statement can be used. Essentially, this statement throws RuntimeConfigurationException.
  • additional methods. This section is for methods used in the configurations. Essentially, these methods are configuration instance methods.
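Stripped of MPS specifics, the four editor functions map onto a plain Java/Swing shape like the following sketch; all class and property names here are made up for illustration, the real editor is generated from the template:

```java
import javax.swing.JTextField;

// Hypothetical configuration with one persistent String property.
class NameConfiguration {
    private String name = "";
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

public class NameConfigurationEditor {
    private final JTextField nameField = new JTextField(); // a "field" of the editor

    // "create": return the main UI component of the editor
    public JTextField create() { return nameField; }

    // "apply to": push the UI state into the configuration
    public void applyTo(NameConfiguration configuration) {
        configuration.setName(nameField.getText());
    }

    // "reset from": pull the configuration state into the UI
    public void resetFrom(NameConfiguration configuration) {
        nameField.setText(configuration.getName());
    }

    // "dispose": release UI resources; nothing to do for a plain text field
    public void dispose() { }
}
```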

Persistent properties

As noted above, persistent properties must be either Cloneable, a String or of a primitive type. If the settings language is used inside run configurations, those properties must also support XML persistence. Strings and primitives are persisted as usual. For objects, persistence is more complicated: two kinds of properties are persisted for an object – public instance fields and properties with setXXX and getXXX methods. So, if you wish to use a complex type in a persistent property, you should either make all important fields public or provide setXXX and getXXX methods for whatever needs to be persisted.
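For illustration, a complex property type satisfying both requirements (cloneable, and XML-persistable through a getter/setter pair) could look like this; the PathEntry name is made up:

```java
// Persisted via its getXXX/setXXX pair; a public field would work as well.
public class PathEntry implements Cloneable {
    private String path = "";

    public String getPath() { return path; }
    public void setPath(String path) { this.path = path; }

    @Override
    public PathEntry clone() {
        try {
            return (PathEntry) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }
}
```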

Integrating configurations into one another

One of the two basic features of the settings language is easy integration of one configuration into another. Template persistent properties are used for that.

Template parameters

The second basic feature of the settings language is template parameters. These are like constructor parameters in Java. For example, if you create a configuration for choosing a node, you may want to parametrize the configuration with the node's concept. The concept is not a persistent property in this case: it is not chosen by the user. It is a parameter specified on configuration creation.

Commands

The commands language allows you to start processes from code the same way it is done from the command line. The main concept of the language is CommandDeclaration. In the declaration, command parameters and the way to start a process with these parameters are specified. Commands can also have debugger parameters and some utility methods.

Execute command sections

Each command can have several execute sections. Each of these sections defines several execution parameters. Parameters come in two types: required and optional. Optional parameters can have default values and can be omitted when the command is started, while required parameters cannot have default values and are mandatory. Two execute sections of the same command must differ in the types of their required parameters. One execute section can invoke another execute section. Each execute section should return a value of either the process or the ProcessHandler type.

ProcessBuilderExpression

To start a process from a command execute section, ProcessBuilderExpression is used. It is a simple list of command parts. Each part is either a ProcessBuilderPart, which consists of an expression of type string or list<string>, or a ProcessBuilderKeyPart, which represents a parameter with a key (like "-classpath /path/to/classes"). When the code generated from ProcessBuilderExpression is invoked, each part is tested for being null or empty and omitted if so. Then each part is split into multiple parts by spaces. So if you would like to provide a command part with a space in it and do not wish it to be split (for example, a file path with spaces), you have to put it into double quotes ("). The working directory of the created process can be specified in the inspector.
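To make the splitting rule concrete, here is a small sketch that reimplements it; this only illustrates the described behavior and is not the actual generated code:

```java
import java.util.ArrayList;
import java.util.List;

public class CommandParts {
    // Split a command part on spaces, keeping double-quoted runs together
    // and dropping the quotes, as described for ProcessBuilderExpression.
    static List<String> split(String part) {
        List<String> result = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean quoted = false;
        for (char c : part.toCharArray()) {
            if (c == '"') {
                quoted = !quoted; // toggle quoted mode, drop the quote itself
            } else if (c == ' ' && !quoted) {
                if (current.length() > 0) {
                    result.add(current.toString());
                    current.setLength(0);
                }
            } else {
                current.append(c);
            }
        }
        if (current.length() > 0) result.add(current.toString());
        return result;
    }

    public static void main(String[] args) {
        // prints [-classpath, /path with spaces/classes]
        System.out.println(split("-classpath \"/path with spaces/classes\""));
    }
}
```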

Debugger integration

To integrate a command with the debugger, two things are required to be specified:

  • the specific debugger to integrate with;
  • the command line arguments for a process.
    To specify a debugger one can use DebuggerReference – an expression of the debugger type from jetbrains.mps.debug.apiLang – to reference a specific debugger. Debugger settings must be an object of type IDebuggerSettings.

Configurations

The configurations language allows you to create run configurations. To create a run configuration, one should create an instance of RunConfiguration (essentially, a configuration from the settings language) and provide a RunConfigurationExecutor for it. One may also need a RunConfigurationKind to specify the kind of the configuration, a RunConfigurationProducer to provide a way of creating the configuration from nodes, models, modules etc., and a BeforeTask to specify how to prepare a configuration before execution.

Executors

An executor is a node which describes how a process is started for a given run configuration. It takes the settings the user entered and creates a process from them. The executor's execute method should therefore return an instance of the process type. This is done via StartProcessHandlerStatement. Anything of the process or ProcessHandler type can be passed to it. A process can be created in three different ways:

  1. via command;
  2. via ProcessBuilderExpression (recommended to use in commands only);
  3. by creating a new instance of the ProcessHandler class; this method is recommended only if the above two do not fit your needs, for example when you are creating a run configuration for remote debugging and do not really need to start a process.

The executor itself consists of the following sections:

  1. the "for" section, where the configuration this executor is for, and an alias for it, are specified;
  2. the "can" section, where the ability to run/debug this configuration is specified; if the command is not used in this executor, one must provide an instance of DebuggerConfiguration here;
  3. the "before" section with calls to tasks which should be executed before this configuration runs, such as Make;
  4. the "execute" section, where the process itself is created.

Debugger integration

If a command is used to start a process, nothing needs to be done apart from specifying the configuration as debuggable (by selecting "debug" in the executor). However, if a custom debugger integration is required, it is done the same way as in a command declaration.

Producers

A producer for a run configuration describes how to create the configuration for a node or group of nodes, a model, a module or a project. This makes run configurations easily discoverable for users, since for each producer they will see an action in the context menu suggesting to run the selected item. It also simplifies configuration, because it gives a default way to execute something without opening the editing dialog.

Each producer specifies one run configuration it creates and can have several "produce from" sections, one for each kind of source the configuration can be produced from. This source should be one of the following: node<>, nlist<>, model, module, project. Apart from the source, each "produce from" section has a "create" section – a concept function parametrized with the source, which should return either the created run configuration or null if for some reason it cannot be created.

Useful examples

In this section you can find some useful tips and examples of run configurations usages.

Person Editor

In this example an editor for a "Person" is created. The editor edits two properties of a person: the name and the e-mail address.

PersonEditor can be used from Java code in the following way:

Exec command

This is an example of a simple command which starts a given executable with programParameters in a given workingDirectory.

Compile with gcc before task

This is an example of a BeforeTask which compiles a source file with the gcc command. It also demonstrates how to use commands outside of run configuration executors.

Note that this is just a toy example, in real life one should show a progress window while compiling.

Java Executor

This is an actual executor of Java run configuration from MPS.

Java Producer

This is a producer of Java run configuration from MPS.

One can see here three "produce from" sections. A Java run configuration is created here from nodes of ClassConcept, StaticMethodDeclaration or IMainClass.

Running a node, generated into java class

Let's suppose you have a node of a concept which is generated into a Java class with a main method, and you wish to run this node from MPS. You do not have to create a run configuration in this case, but you should do the following:

  1. The concept you wish to run should implement the IMainClass concept from the jetbrains.mps.execution.util language. To specify when a node can be executed, override the isNodeRunnable method.
  2. Unit information should be generated for the concept. Unit information is required to correctly determine the class name which is to be executed. You can read more about unit information, as well as about all trace information, in the Debugger section of the MPS documentation. To ensure this, check that one of the following conditions is satisfied:
    1. a ClassConcept from jetbrains.mps.baseLanguage is generated from the node;
    2. the node is generated into text (the language uses the textGen aspect for generation) and the concept of the node implements the UnitConcept interface from jetbrains.mps.traceable;
    3. the node is generated into a concept for which one of these conditions is satisfied.


Changes highlighting

Changes highlighting is a handy way to show the changes made since the last update from the version control system.
Changes in models are highlighted in the following places:

Project tree view

Models, nodes, properties and references are highlighted: green means new items, blue means modified items and brown means unversioned items.

Editor tabs

Highlighting appears on all editor tabs: on the language aspect tabs of a concept and also on custom tabbed editors declared in the plugin aspect of a language (see Plugin: Editor Tabs).

Editor

All kinds of changes are highlighted in the MPS editor: changed properties and references; added, deleted and replaced nodes.

If you hover the mouse cursor over the highlighter's strip on the left margin of the editor, the corresponding changes become highlighted in the editor pane.
If you want to have your changes highlighted in the editor pane all the time (not only when hovering the mouse cursor over the highlighter's strip), select the "Highlight Nodes With Changes Relative to Base Version" option in IDE Settings → Editor.

If you click on the highlighter's strip on the left margin, a panel appears with three buttons: "Go to Previous Change", "Go to Next Change" and "Rollback".

If you click "Rollback", all the corresponding changes are reverted.
This feature allows you to freely make any changes to the MPS model in the editor without fear, because at any moment you can conveniently revert your changes right from the editor.


Editing

Windows/Linux | MacOS | Action
Ctrl + Space | Ctrl + Space | Code completion
Ctrl + Alt + click | Cmd + Alt + click | Show descriptions of error or warning at caret
Alt + Enter | Alt + Enter | Show intention actions
Ctrl + Alt + T | Cmd + Alt + T | Surround with...
Ctrl + X / Shift + Delete | Cmd + X | Cut current line or selected block to buffer
Ctrl + C / Ctrl + Insert | Cmd + C | Copy current line or selected block to buffer
Ctrl + V / Shift + Insert | Cmd + V | Paste from buffer
Ctrl + D | Cmd + D | Duplicate current line or selected block
Shift + F5 | Shift + F5 | Clone root
Ctrl + Up/Down | Cmd + Up/Down | Expand/Shrink block selection region
Ctrl + Shift + Up/Down | Cmd + Shift + Up/Down | Move statements up/down
Shift + Arrows | Shift + Arrows | Extend the selected region to siblings
Ctrl + W | Cmd + W | Select successively increasing code blocks
Ctrl + Shift + W | Cmd + Shift + W | Decrease current selection to previous state
Ctrl + Y | Cmd + Y | Delete line
Ctrl + Z | Cmd + Z | Undo
Ctrl + Shift + Z | Cmd + Shift + Z | Redo
Alt + F12 | Alt + F12 | Show node in AST explorer
F5 | F5 | Refresh
Ctrl + - | Cmd + - | Collapse
Ctrl + Shift + - | Cmd + Shift + - | Collapse all
Ctrl + + | Cmd + + | Expand
Ctrl + Shift + + | Cmd + Shift + + | Expand all
Ctrl + Shift + 0-9 | Cmd + Shift + 0-9 | Set bookmark
Ctrl + 0-9 | Ctrl + 0-9 | Go to bookmark
Tab | Tab | Move to the next cell
Shift + Tab | Shift + Tab | Move to the previous cell
Insert | Ctrl + N | Create Root Node (in the Project View)

Import

Windows/Linux | MacOS | Action
Ctrl + M | Cmd + M | Import model
Ctrl + L | Cmd + L | Import language
Ctrl + R | Cmd + R | Import model by root name

Usage/Text Search

Windows/Linux | MacOS | Action
Alt + F7 | Alt + F7 | Find usages
Ctrl + Alt + Shift + F7 | Cmd + Alt + Shift + F7 | Highlight cell dependencies
Ctrl + Shift + F6 | Cmd + Shift + F6 | Highlight instances
Ctrl + Shift + F7 | Cmd + Shift + F7 | Highlight usages
Ctrl + F | Cmd + F | Find text
F3 | F3 | Find next
Shift + F3 | Shift + F3 | Find previous

Generation

Windows/Linux | MacOS | Action
Ctrl + F9 | Cmd + F9 | Generate current module
Ctrl + Shift + F9 | Cmd + Shift + F9 | Generate current model
Shift + F10 | Shift + F10 | Run
Shift + F9 | Shift + F9 | Debug
Ctrl + Shift + F10 | Cmd + Shift + F10 | Run context configuration
Alt + Shift + F10 | Alt + Shift + F10 | Select and run a configuration
Ctrl + Shift + F9 | Cmd + Shift + F9 | Debug context configuration
Alt + Shift + F9 | Alt + Shift + F9 | Select and debug a configuration
Ctrl + Alt + Shift + F9 | Cmd + Alt + Shift + F9 | Preview generated text
Ctrl + Shift + X | Cmd + Shift + X | Show type-system trace

Navigation

Windows/Linux | MacOS | Action
Ctrl + B / Ctrl + click | Cmd + B / Cmd + click | Go to declaration
Ctrl + N | Cmd + N | Go to root node
Ctrl + Shift + N | Cmd + Shift + N | Go to file
Ctrl + G | Cmd + G | Go to node by id
Ctrl + Shift + A | Cmd + Shift + A | Go to action by name
Ctrl + Alt + Shift + M | Cmd + Alt + Shift + M | Go to model
Ctrl + Alt + Shift + S | Cmd + Alt + Shift + S | Go to solution
Ctrl + Shift + S | Cmd + Shift + S | Go to concept declaration
Ctrl + Shift + E | Cmd + Shift + E | Go to concept editor declaration
Alt + Left/Right | Control + Left/Right | Go to next/previous editor tab
Esc | Esc | Go to editor (from tool window)
Shift + Esc | Shift + Esc | Hide active or last active window
Shift + F12 | Shift + F12 | Restore default window layout
Ctrl + Shift + F12 | Cmd + Shift + F12 | Hide all tool windows
F12 | F12 | Jump to the last tool window
Ctrl + E | Cmd + E | Recent nodes popup
Ctrl + Alt + Left/Right | Cmd + Alt + Left/Right | Navigate back/forward
Alt + F1 | Alt + F1 | Select current node in any view
Ctrl + H | Cmd + H | Concept/Class hierarchy
F4 / Enter | F4 / Enter | Edit source / View source
Ctrl + F4 | Cmd + F4 | Close active editor tab
Alt + 2 | Alt + 2 | Go to inspector
Ctrl + F10 | Cmd + F10 | Show structure
Ctrl + Alt + ] | Cmd + Alt + ] | Go to next project window
Ctrl + Alt + [ | Cmd + Alt + [ | Go to previous project window
Ctrl + Shift + Right | Ctrl + Shift + Right | Go to next aspect tab
Ctrl + Shift + Left | Ctrl + Shift + Left | Go to previous aspect tab
Ctrl + Alt + Shift + R | Cmd + Alt + Shift + R | Go to type-system rules
Ctrl + Shift + T | Cmd + Shift + T | Show type
Ctrl + H | Ctrl + H | Show in hierarchy view
Ctrl + I | Cmd + I | Inspect node

BaseLanguage Editing

Windows/Linux | MacOS | Action
Ctrl + O | Cmd + O | Override methods
Ctrl + I | Cmd + I | Implement methods
Ctrl + / | Cmd + / | Comment/uncomment with block comment
Ctrl + F12 | Cmd + F12 | Show nodes
Ctrl + P | Cmd + P | Show parameters
Ctrl + Q | Ctrl + Q | Show node information
Alt + Insert | Ctrl + N | Create new ...
Ctrl + Alt + B | Cmd + Alt + B | Go to overriding methods / Go to inherited classifiers
Ctrl + U | Cmd + U | Go to overridden method

Refactoring

Windows/Linux | MacOS | Action
F6 | F6 | Move
Shift + F6 | Shift + F6 | Rename
Alt + Delete | Alt + Delete | Safe Delete
Ctrl + Alt + N | Cmd + Alt + N | Inline
Ctrl + Alt + M | Cmd + Alt + M | Extract Method
Ctrl + Alt + V | Cmd + Alt + V | Introduce Variable
Ctrl + Alt + C | Cmd + Alt + C | Introduce constant
Ctrl + Alt + F | Cmd + Alt + F | Introduce field
Ctrl + Alt + P | Cmd + Alt + P | Extract parameter

Debugger

Windows/Linux | MacOS | Action
F8 | F8 | Step over
F7 | F7 | Step into
Shift + F8 | Shift + F8 | Step out
F9 | F9 | Resume
Alt + F8 | Alt + F8 | Evaluate expression
Ctrl + F8 | Cmd + F8 | Toggle breakpoints
Ctrl + Shift + F8 | Cmd + Shift + F8 | View breakpoints

VCS/Local History

Windows/Linux | MacOS | Action
Ctrl + K | Cmd + K | Commit project to VCS
Ctrl + T | Cmd + T | Update project from VCS
Ctrl + V | Ctrl + V | VCS operations popup
Ctrl + Alt + A | Cmd + Alt + A | Add to VCS
Ctrl + Alt + E | Cmd + Alt + E | Browse history
Ctrl + D | Cmd + D | Show differences

General

Windows/Linux | MacOS | Action
Alt + 0-9 | Alt + 0-9 | Open corresponding tool window
Ctrl + S | Cmd + S | Save all
Ctrl + Alt + F11 | N/A | Toggle full screen mode
Ctrl + Shift + F12 | N/A | Toggle maximizing editor
Ctrl + BackQuote (`) | Control + BackQuote (`) | Quick switch current scheme
Ctrl + Alt + S | Cmd + , | Open Settings dialog
Ctrl + Alt + C | Cmd + Alt + C | Model Checker

Platform Languages 

Base Language

BaseLanguage is MPS' counterpart to Java, sharing almost the same set of constructs with it. BaseLanguage is the most common target of code generation in MPS and, at the same time, the most extensively extended language.

In order to simplify integration with Java, it is possible to specify the classpath for all modules in MPS. Classes found on the classpath will then be automatically imported into @java_stub models and so can be used directly in programs that use the BaseLanguage.

The frequently extended concepts of MPS include:

  • Expression. Constructs which evaluate to a result, like 1, "abc", etc.
  • Statement. Constructs which can be contained at the method level, like the if/while/synchronized statements.
  • Type. Types of variables, like int or double.
  • IOperation. Constructs which can be placed after a dot, like in node.parent. The parent element is an IOperation here.
  • AbstractCreator. Constructs which can be used to instantiate various elements.

BaseLanguage was created as a copy of Java 6. Extensions to BaseLanguage for Java 7 and 8 compatibility have been gradually added.

  • Java 7 language constructs are contained in the jetbrains.mps.baselanguage.jdk7 language
  • Java 8 language extensions are contained in the jetbrains.mps.baselanguage.jdk8 language
  • You may like to check out the documentation dedicated to MPS interoperability with Java


Base Language is by far the most widely extended language in MPS. Since it is very likely that a typical MPS project will use a lot of different extensions from different sources or language vendors, the community benefits from having a unified style across all languages. In this document we describe the conventions that language creators should apply to all Base Language extensions.

Quick Reference

If you use... | Set its style to...
a dot (.) | Dot
a left bracket ([) | LeftBracket
a right bracket (]) | RightBracket
a left brace ({) | LeftBrace
a right brace (}) | RightBrace
an operator | Operator
a keyword | KeyWord

Keywords

A keyword is a widely used string which identifies an important concept of a language. For example, all the primitive types of Base Language are keywords, as are the names of statements such as ifStatement and forStatement. Use the KeyWord style from the Base Language stylesheet for keywords.

Curly braces

Curly braces are often used to demarcate a block of code inside a containing construction. If you create an if-like construct, place the opening curly brace on the same line as the construct header. I.e. use:

instead of

Use the LeftBrace and RightBrace styles to set correct offsets. Make sure that the gap between the opening curly brace and the character to its left is exactly one space. You can do so with the help of the padding-left/padding-right styles.

Parentheses

When you use parentheses, set the LeftParen/RightParen styles on the left/right parenthesis. If a parenthesis cell's sibling is a named node's property, disable the first/last position of the parenthesis with the first/last-position-allowed style.

Identifiers

When you use named nodes (methods, variables, fields, etc.), it is advisable to give their name properties zero left and right padding. Giving identifier declarations and references the same color is also a good idea. For example, in Base Language, field declarations and references have the same color.

Punctuation

If you have a semicolon somewhere, set its style to Semicolon. If you have a dot, use the Dot style. If you have a binary operator, use the Operator style for it.


Configuration

The Java Compiler configuration tab in the preferences window only holds a single setting - “Project bytecode version”.

This setting defines the bytecode version of all Java classes compiled by MPS. These classes include classes generated from language’s aspects, classes of the runtime solutions, classes of the sandbox solutions, etc.

By default, the bytecode version is set to “JDK Default”. This means that the version of the compiled classes will be equal to the version of Java, which MPS is running under. E.g. if you run MPS under JDK 1.8 and “JDK Default” is selected, the bytecode version will be 1.8.

The other options for project bytecode version are 1.6, 1.7 and 1.8.


Note that if you compile languages to version 1.8 and then run MPS with a JDK earlier than 1.8, those languages won't be loaded.

Build scripts

Also, don’t forget to set java compliance level in the build scripts of your project. It should be the same as the project bytecode version.

Using java classes compiled with JDK 1.8

In the MPS modules pool you can find the JDK solution, which holds the classes of the running Java. So when you start MPS under JDK 1.8, the latest Java Platform classes will be available in the JDK solution.

You can also use any external Java classes, compiled under JDK 1.8 by adding them as Java stubs.

Since version 1.8, Java interfaces can contain default and static methods. At present, MPS does not support creating them in your BaseLanguage code, but you can call static and default methods defined in external Java classes, e.g. classes of the Java Platform.

Static interface method call

In the example, we sort a list using Comparator.reverseOrder(). Comparator is an interface from java.util, and reverseOrder() is its static method, introduced in Java 1.8.
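In plain Java source terms, the call looks like this:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ReverseSort {
    // Sort a copy of the list in descending order using the static
    // interface method Comparator.reverseOrder().
    static List<Integer> sortDescending(List<Integer> numbers) {
        List<Integer> copy = new ArrayList<>(numbers);
        copy.sort(Comparator.reverseOrder());
        return copy;
    }

    public static void main(String[] args) {
        // prints [3, 2, 1]
        System.out.println(sortDescending(Arrays.asList(1, 3, 2)));
    }
}
```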

Default interface methods

Java 8 also introduced default methods – methods implemented directly in the interface. You can read about default methods here: http://docs.oracle.com/javase/tutorial/java/IandI/defaultmethods.html

These methods can be called just like the usual instance methods. Sometimes, however, you need to call the default method directly on an interface that your class implements, e.g. in the case of multiple inheritance, when a class implements several interfaces, each containing a default method with the same signature.

In that case foo() can be called explicitly on one of the interfaces via a SuperInterfaceMethodCall construction, a new construct located in the jetbrains.mps.baseLanguage.jdk8 language.
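In plain Java this corresponds to the Interface.super.method() invocation form; a sketch with illustrative interface names:

```java
// Two interfaces provide a default foo() with the same signature; the
// implementing class must override it and can delegate explicitly.
public class DiamondExample {
    interface A { default String foo() { return "A.foo"; } }
    interface B { default String foo() { return "B.foo"; } }

    // C must override foo() because A and B both provide a default;
    // A.super.foo() selects A's implementation explicitly.
    static class C implements A, B {
        public String foo() { return A.super.foo(); }
    }

    public static void main(String[] args) {
        System.out.println(new C().foo()); // prints A.foo
    }
}
```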

Using Java platform API

Java 8 introduced lambda expressions, of which you can learn more here: http://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html

MPS 3.2 doesn't yet have a language that generates into lambda expressions. Instead, it has its own closure language, which is compatible with the new Java API.

Here’s the example of an interaction with the new JDK 8 Collections API:

The forEach() method is a new default method of java.lang.Iterable. It takes a Consumer as a parameter. Consumer is a functional interface, as it has only one method. In Java 8 you would pass a lambda expression to forEach(); in MPS you pass an MPS closure. During generation, the closure knows the type of the parameter expected by forEach() and is generated into exactly the right instance of Consumer.
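A sketch of what an MPS closure passed to forEach() roughly generates into — an anonymous class implementing Consumer (the summing logic is illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

public class ForEachExample {
    static int sum(List<Integer> numbers) {
        final int[] total = {0};
        // The MPS closure becomes an instance of the functional
        // interface Consumer, passed to the default method forEach().
        numbers.forEach(new Consumer<Integer>() {
            public void accept(Integer n) { total[0] += n; }
        });
        return total[0];
    }

    public static void main(String[] args) {
        System.out.println(sum(Arrays.asList(1, 2, 3))); // prints 6
    }
}
```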

Closures

Introduction

Closures are a handy extension to the base language. Not only do they make code more concise, but they also serve as a vehicle into the functional programming paradigm. You can treat functions as first-class citizens in your programs: store them in variables, pass them to methods as arguments, or have methods and functions return other functions. The MPS closures support allows you to employ closures in your own languages. In fact, MPS itself uses closures heavily, for example in the collections language.


This language loosely follows the "BGGA" proposal specification for closures in Java [1][2]. However, you don't need Java 7 to run code with MPS closures. The actual implementation uses anonymous inner classes, so any version of Java starting with 1.5 will run the generated code without problems. Only the closures runtime jar file is required on the classpath of the generated solutions.

Function type

{ Type1, Type2... => ReturnType }

Let's start with a trivial example of a function type declaration: { => void } declares a function that accepts no parameters and returns no value.

Subtyping rules

A function type is covariant by its return type and contravariant by parameter types.

For example, given we have defined a method that accepts {String => Number} :

we can pass an instance of {Object => Integer} (a function that accepts an Object and returns an int) to this method:

Simply put, you can use different actual types of parameters and the return value so long as you keep the promise made in the super-type's signature.

Notice that the int type is automatically converted to the boxed type Integer.
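A plain-Java sketch of the same variance rule: a method expecting the MPS type {String => Number} can, with wildcards, accept an {Object => Integer} function (method and variable names are illustrative):

```java
import java.util.function.Function;

public class VarianceExample {
    // Contravariant in the parameter, covariant in the return type.
    static Number apply(Function<? super String, ? extends Number> f, String arg) {
        return f.apply(arg);
    }

    public static void main(String[] args) {
        // Parameter type widened to Object, return type narrowed to Integer:
        Function<Object, Integer> length = o -> o.toString().length();
        System.out.println(apply(length, "abc")); // prints 3
    }
}
```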

Closure literal

A closure literal is created simply by entering the following construct: { <parameter decls> => <body> }. No "new" operator is necessary.

The result type is calculated following one or more of these rules:

  • last statement, if it's an ExpressionStatement;
  • return statement with an expression;
  • yield statement.

Note: it's impossible to combine return and yield within a single closure literal.

Closure invocation

The invoke operation is the only method you can call on a closure.

To invoke a closure, it is recommended to use the simplified form of this operation: parentheses enclosing the parameter list.

Invoking a closure then looks like a regular method call.

Some examples of closure literal definitions.

Recursion

Functional programming without recursion would be like making coffee without water, so naturally there is a way to recursively call a closure from within its body:

A standalone invoke within the closure's body calls the current closure.
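In MPS a standalone invoke calls the current closure; a plain Java lambda cannot refer to itself directly, so this sketch routes the self-reference through a one-element array (names are illustrative):

```java
import java.util.function.IntUnaryOperator;

public class RecursionExample {
    static int factorial(int n) {
        // The array cell stands in for MPS's standalone invoke.
        IntUnaryOperator[] self = new IntUnaryOperator[1];
        self[0] = k -> k <= 1 ? 1 : k * self[0].applyAsInt(k - 1);
        return self[0].applyAsInt(n);
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // prints 120
    }
}
```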

Closure conversion


For practical purposes a closure literal can be used in places where an instance of a single-method interface is expected, and vice versa [3].

The generated code is exactly the same as when using an anonymous class:

Think of all the places where Java requires instances of Runnable, Callable or various observer or listener classes:
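A sketch of the conversion for Runnable: the lambda form corresponds to the closure literal, and the anonymous class below it is what the literal is generated into (the helper method is illustrative):

```java
public class ConversionExample {
    static String run(Runnable r) { r.run(); return "done"; }

    static String demo() {
        StringBuilder out = new StringBuilder();
        // The closure-literal form ...
        run(() -> out.append("closure;"));
        // ... and its generated equivalent, an anonymous inner class:
        run(new Runnable() {
            public void run() { out.append("anonymous;"); }
        });
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints closure;anonymous;
    }
}
```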

Updated for MPS 1.5

Icon

The following changes are applicable to the upcoming 1.5 version of MPS.

As with interfaces, an abstract class containing exactly one abstract method can also be the target of conversion from a closure literal. This can help, for example, in a smooth transition to a new API, when existing interfaces serving as functions are changed to abstract classes implementing the new interfaces.

Yield statement

The yield statement allows closures to populate collections. If a yield statement is encountered within the body of a closure literal, the consequences are the following:

  • if the type of the yield statement's expression is Type, then the result type of the closure literal is sequence<Type>;
  • all control statements within the body are converted into a switch statement within an infinite do-while loop during generation;
  • use of the return statement is forbidden, and the value of the last ExpressionStatement is ignored.
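A rough Java sketch of what a closure with yield statements produces: a sequence (Iterable) of the yielded values. The real generator builds the do-while state machine described above; here the values are simply materialized for illustration:

```java
import java.util.Arrays;

public class YieldSketch {
    static Iterable<Integer> yielded() {
        // as if the closure body were: yield 1; yield 2; yield 3;
        return Arrays.asList(1, 2, 3);
    }

    static int sum() {
        int total = 0;
        for (int n : yielded()) total += n;
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum()); // prints 6
    }
}
```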

Functions that return functions

A little bit of functional programming for the functional hearts out there:

The curry() method is defined as follows:
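The idea behind currying can be mirrored in plain Java: a two-argument function becomes a function returning a function. This only sketches the concept; curry() itself is part of the MPS example, not of this code:

```java
import java.util.function.Function;

public class CurryExample {
    // add(a, b) rewritten as add(a)(b).
    static Function<Integer, Function<Integer, Integer>> curriedAdd() {
        return a -> b -> a + b;
    }

    public static void main(String[] args) {
        Function<Integer, Integer> add5 = curriedAdd().apply(5);
        System.out.println(add5.apply(3)); // prints 8
    }
}
```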

Runtime

In order to run the code generated by the closures language, the closures runtime library must be added to the classpath of the solution. This jar file contains the synthetic interfaces needed to support variables of function types, plus some utility classes. It is located in:

Differences from the BGGA proposal

  • No messing with control flow. This means no support for control flow statements that break the boundaries of a closure literal.
  • No "early return" problem, since MPS allows return to be used anywhere within the body.
  • The yield statement.


[1] Closures for the Java Programming Language


[2] Version 0.5 of the BGGA closures specification is partially supported


[3] This is no longer true: only closure literal to interface conversion is supported, as an optimization measure.


Collections Language

An extension to the Base Language that adds support for collections.

Introduction

The collections language provides a set of abstractions for the most commonly used containers, as well as a set of powerful tools for constructing queries. The fundamental type is sequence, an abstraction analogous to Iterable in Java or IEnumerable in .NET. The containers include list (both array-based and linked), set and map. The collections language also provides the means to build expressive queries using closures, in a way similar to LINQ.

Null handling

The collections language has a set of relaxed rules regarding null elements and null sequences.

Null sequence is still a sequence

Null is a perfectly acceptable value to assign to a sequence variable. It is simply treated as an empty sequence.

Null is returned instead of throwing an exception

Whereas the standard Java collections framework throws an exception from a method that cannot complete successfully, the collections language's sequence and its subtypes return null instead. For example, invoking the first operation on an empty sequence yields null rather than throwing an exception.
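The relaxed null rules can be sketched in plain Java: a null sequence iterates as empty, and first() on an empty sequence returns null instead of throwing. The helper names are illustrative, not the actual runtime API:

```java
import java.util.Arrays;
import java.util.Iterator;

public class NullHandling {
    static <T> T first(Iterable<T> seq) {
        if (seq == null) return null;           // null sequence == empty sequence
        Iterator<T> it = seq.iterator();
        return it.hasNext() ? it.next() : null; // no NoSuchElementException
    }

    public static void main(String[] args) {
        System.out.println(first(null));               // prints null
        System.out.println(first(Arrays.asList(7)));   // prints 7
    }
}
```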

Skip and stop statements

skip

Applicable within a selectMany or forEach closure. The effect of the skip statement is that the processing of the current input element stops, and the next element (if available) is immediately selected.

stop

Applicable within a selectMany closure or a sequence initializer closure. The stop statement causes the construction of the output sequence to end immediately, ignoring all the remaining elements in the input sequence (if any).
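In the loop the closure is generated into, skip and stop correspond roughly to continue and break, as in this sketch (the element values and filtering logic are illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class SkipStop {
    static int sumPositiveUntilZero(List<Integer> input) {
        int sum = 0;
        for (int n : input) {
            if (n < 0) continue; // skip: move on to the next input element
            if (n == 0) break;   // stop: end the output sequence immediately
            sum += n;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumPositiveUntilZero(Arrays.asList(1, -2, 3, 0, 9))); // prints 4
    }
}
```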

Collections Runtime

The collections language uses a runtime library as its back end, which is designed to be extensible. Prior to version 1.5, the collections runtime library was written in Java and used only standard Java APIs. Release 1.5 brings a change: the runtime library is now available as an MPS model and uses constructs from the jetbrains.mps.baseLanguage.closures language to facilitate passing function-type parameters around.

Important change!

Icon

In order to make the transition from Java interfaces to abstract function types possible, several of the former Java interfaces in the collections runtime library have been changed into abstract classes. While no existing source code that uses the collections runtime will break, this unfortunately breaks binary compatibility, which means that a complete recompilation of all the generated code is required to avoid incompatibility with the changed classes in the runtime.

The classes which constitute the collections runtime library can be found in the collections.runtime solution, which is available from the jetbrains.mps.baseLanguage.collections language.


Sequence

Sequence is an abstraction of the order defined on a collection of elements of some type. The only operation allowed on a sequence is iterating its elements from first to last. A sequence is immutable. All operations defined in the following subsections that are declared to return a sequence always return either a new sequence instance or the original sequence.

Although it is possible to create a sequence that produces an infinite number of elements, it is not recommended. Some operations require one or two full traversals of the sequence, and invoking such an operation on an infinite sequence would never yield a result.

Sequence type

sequence<Type>

Subtypes

Supertypes

Comparable types

list<Type>
set<Type>

none

java.lang.Iterable<Type>

Creation

new sequence

Parameter type

Result type

{ => sequence<Type> }

sequence<Type>

Sequence can be created with initializer.

closure invocation

Result type

sequence<Type>

A sequence may be returned from a closure (see Closures).

array as a sequence

Operand type

Parameter type

Result type

Type[]

none

sequence<Type>

An array can be used as a sequence.

A list, a set and a map are sequences, too. All operations defined on a sequence are also available on an instance of any of these types.

Sequence type is assignable to a variable of type java.lang.Iterable. The opposite is also true.

Operations on sequence

Iteration and querying
foreach statement

Loop statement

is equivalent to

forEach

Operand type

Parameter type

Result type

sequence<Type>

{ Type => void }

void

The code passed as a parameter (as a closure literal or by reference) is executed once for each element.

size

Operand type

Parameter type

Result type

sequence<Type>

none

int

Gives the number of elements in a sequence.

isEmpty

Operand type

Parameter type

Result type

sequence<Type>

none

boolean

Tests whether a sequence is empty, that is, its size is 0.

isNotEmpty

Operand type

Parameter type

Result type

sequence<Type>

none

boolean

Tests whether a sequence contains any elements.

indexOf

Operand type

Parameter type

Result type

sequence<Type>

Type

int

Gives the index of the first occurrence of the parameter element in the sequence.

contains

Operand type

Parameter type

Result type

sequence<Type>

Type

boolean

Produces a boolean value indicating whether or not the sequence contains the specified element.

any / all

Operand type

Parameter type

Result type

sequence<Type>

{ Type => boolean }

boolean

Produces a boolean value that indicates whether any (for the any operation) or all (for all) of the elements in the input sequence match the condition specified by the closure.

iterator

Operand type

Parameter type

Result type

sequence<Type>

none

iterator<Type>

Produces an iterator (see the Iterator section below).

enumerator

Operand type

Parameter type

Result type

sequence<Type>

none

enumerator<Type>

Produces an enumerator (see the Enumerator section below).

Selection and filtering
first

Operand type

Parameter type

Result type

sequence<Type>

none

Type

Yields the first element.

last

Operand type

Parameter type

Result type

sequence<Type>

none

Type

Yields the last element.

take

Operand type

Parameter type

Result type

sequence<Type>

int

sequence<Type>

Produces a sequence that is a sub-sequence of the original one, starting from the first element and of size count.

skip

Operand type

Parameter type

Result type

sequence<Type>

int

sequence<Type>

Produces a sequence that is sub-sequence of the original one, containing all elements starting with the element at index count.

cut

Operand type

Parameter type

Result type

sequence<Type>

int

sequence<Type>

Produces a sequence that is a sub-sequence of the original one, containing all elements starting with first and up to (but not including) the element at index size minus count. In other words, this operation returns a sequence with all elements from the original one except the last count elements.

tail

Operand type

Parameter type

Result type

sequence<Type>

int

sequence<Type>

Produces a sequence that is a sub-sequence of the original one, containing all elements starting with the element at index size minus count. In other words, this operation returns a sequence with count elements from the end of the original sequence, in the original order.

page

Operand type

Parameter type

Result type

sequence<Type>

int
int

sequence<Type>

Results in a sequence that is a sub-sequence of the original one, containing all elements starting with the element at index start and up to (but not including) the element at index end. It is a requirement that start is no greater than end.


This is equivalent to

Where skip = start, count = end - start .

where

Operand type

Parameter type

Result type

sequence<Type>

{ Type => boolean }

sequence<Type>

Produces a sequence that is a sub-sequence of the original one, with all elements for which the code passed as a parameter returns true.

findFirst

Operand type

Parameter type

Result type

sequence<Type>

{ Type => boolean }

Type

Results in the first element that matches the parameter closure.

findLast

Operand type

Parameter type

Result type

sequence<Type>

{ Type => boolean }

Type

Results in the last element that matches the parameter closure.

Transformation and sorting
select

Operand type

Parameter type

Result type

sequence<Type>

{ Type => Type2 }

sequence<Type2>

Results in a sequence consisting of elements, each of which is the result of applying the parameter function to each element of the original sequence in turn.
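The where, select and sortBy operations map closely onto Java streams; a Java-stream analogue of chaining them on a sequence (input data and method name are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class QueryExample {
    static List<Integer> squaresOfEvens(List<Integer> input) {
        return input.stream()
                .filter(n -> n % 2 == 0)   // where  { n => n % 2 == 0 }
                .map(n -> n * n)           // select { n => n * n }
                .sorted()                  // sortBy { n => n } ascending
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(squaresOfEvens(Arrays.asList(4, 1, 2, 3))); // prints [4, 16]
    }
}
```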

selectMany

Operand type

Parameter type

Result type

sequence<Type>

{ Type => sequence<Type2> }

sequence<Type2>

Produces a sequence that is a concatenation of all sequences, which are all the results of applying the parameter closure to each element of the original sequence in turn. The statements skip and stop are available within the parameter closure.

distinct

Operand type

Parameter type

Result type

sequence<Type>

none

sequence<Type>

Produces a sequence containing all elements from the original sequence in the original order, with each element occurring exactly once. Of all occurrences of an element in the original sequence, only the first is included in the resulting sequence.

sortBy

Operand type

Parameter type

Result type

sequence<Type>

{ Type => Type2 }
boolean

sequence<Type>

Produces a sequence with all elements from the original one, ordered by the keys produced by applying the selector function to each element in turn. The selector function can be thought of as returning a key that is used to sort the elements. The ascending parameter controls the sort order.

alsoSortBy

Operand type

Parameter type

Result type

sequence<Type>

{ Type => Type2 }
boolean

sequence<Type>

Equivalent to sortBy, unless used as a chain operation immediately following sortBy or another alsoSortBy. The result is a sequence sorted with a compound key, with the first component taken from previous sortBy or alsoSortBy (which is also a compound key), and the last component taken from this operation.

sort

Operand type

Parameter type

Result type

sequence<Type>

{ Type, Type => int }
boolean

sequence<Type>

Produces a sequence containing all elements from the original one, in the order induced by the comparator function (passed as a closure literal or by reference) applied to pairs of elements from the original sequence. The ascending parameter controls the sort order (the order is reversed if the value is false).

Binary operations
intersect

Operand type

Parameter type

Result type

sequence<Type>

sequence<Type>

sequence<Type>

Produces a sequence containing elements contained both by the original sequence and the parameter sequence.

except

Operand type

Parameter type

Result type

sequence<Type>

sequence<Type>

sequence<Type>

Produces a sequence containing all elements from the original sequence that are not also members of the parameter sequence.

union

Operand type

Parameter type

Result type

sequence<Type>

sequence<Type>

sequence<Type>

Produces a sequence containing elements both from the original sequence and the one passed as a parameter.

disjunction

Operand type

Parameter type

Result type

sequence<Type>

sequence<Type>

sequence<Type>

Produces exclusive disjunction (symmetric difference) of the original sequence and the one passed as a parameter.

concat

Operand type

Parameter type

Result type

sequence<Type>

sequence<Type>

sequence<Type>

Produces a sequence that is a concatenation of the original one with the sequence passed as a parameter.
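The binary operations above can be sketched in plain Java; LinkedHashSet preserves the original order, roughly matching the sequence semantics (the data and helper names are illustrative):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class BinaryOps {
    static <T> Set<T> intersect(Set<T> a, Set<T> b) {
        Set<T> r = new LinkedHashSet<>(a); r.retainAll(b); return r;
    }
    static <T> Set<T> except(Set<T> a, Set<T> b) {
        Set<T> r = new LinkedHashSet<>(a); r.removeAll(b); return r;
    }
    static <T> Set<T> union(Set<T> a, Set<T> b) {
        Set<T> r = new LinkedHashSet<>(a); r.addAll(b); return r;
    }

    public static void main(String[] args) {
        Set<Integer> a = new LinkedHashSet<>(Arrays.asList(1, 2, 3));
        Set<Integer> b = new LinkedHashSet<>(Arrays.asList(2, 3, 4));
        System.out.println(intersect(a, b)); // prints [2, 3]
        System.out.println(except(a, b));    // prints [1]
        System.out.println(union(a, b));     // prints [1, 2, 3, 4]
    }
}
```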

Conversion
reduceLeft / reduceRight

Operand type

Parameter type

Result type

sequence<Type>

{ Type, Type => Type }

Type

The reduceLeft/reduceRight operation applies the combinator function passed as a parameter to all elements of the sequence in turn. One of the function's parameters is a sequence element, and the other is the result of the previous application of the function. reduceLeft takes the first two sequence elements and applies the function to them, then takes the result of that application and the third element, and so on. reduceRight does the same, but moves from the sequence's tail backwards.

  • reduceLeft
  • reduceRight
foldLeft / foldRight

Operand type

Parameter type

Result type

Applicable for

seed
sequence<Type>

{ Z, Type => Z }

Z

foldLeft

seed
sequence<Type>

{ Type, Z => Z }

Z

foldRight

The foldLeft/foldRight operation behaves similarly to reduceLeft/reduceRight, with the difference that it also accepts a seed value. The combinator function is also asymmetric: it takes parameters of types Type and Z and returns a Z value. The result of the operation is of type Z.

  • foldLeft
  • foldRight
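The foldLeft operation can be sketched in plain Java as a loop over the sequence that threads the seed through the asymmetric combinator (the combinator and data are illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class FoldExample {
    // foldLeft with seed of type String and combinator { s, n => s + n }.
    static String foldLeft(List<Integer> seq, String seed) {
        String acc = seed;
        for (int n : seq) acc = acc + n;
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(foldLeft(Arrays.asList(1, 2, 3), "#")); // prints #123
    }
}
```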
join

Operand type

Parameter type

Result type

sequence< ? extends string >

string (optional)

string

This operation is only available on a sequence of strings. The result is a string produced by concatenating all elements with the optional separator. The default separator is " " (a single space).

toList

Operand type

Parameter type

Result type

sequence<Type>

none

list<Type>

Returns a new list containing all the elements from the original sequence.

toArray

Operand type

Parameter type

Result type

sequence<Type>

none

Type[]

Returns a new array containing all the elements from the original sequence.


List

A basic list container backed by either an array list or a linked list.

List type

list<Type>

Subtypes

Supertypes

Comparable types

none

sequence<Type>

java.util.List<Type>

List creation

new arraylist
new linkedlist

Parameter type

Result type

Type...
sequence<? extends Type>

list<Type>

Creates an empty list. Optionally, initial values may be specified right in the new list creation expression.


Alternatively, a sequence may be specified that is used to copy elements from.

Operations on list

iterator

Operand type

Parameter type

Result type

sequence<Type>

none

modifying_iterator<Type>

This operation is redefined for list to return a modifying_iterator.

get

Operand type

Parameter type

Result type

list<Type>

int

Type

Yields the element at index position.

indexed access

set

Operand type

Parameter type

Result type

list<Type>

int
Type

Type

Sets the element at index position to the specified value. Yields the new value.

indexed access

add

Operand type

Parameter type

Result type

list<Type>

Type

Type

Adds an element to the list.

addFirst

Operand type

Parameter type

Result type

list<Type>

Type

Type

Adds an element to the list as the first element.

addLast

Operand type

Parameter type

Result type

list<Type>

Type

Type

Adds an element to the list as the last element.

insert

Operand type

Parameter type

Result type

list<Type>

int
Type

Type

Inserts an element into the list at the position index.

remove

Operand type

Parameter type

Result type

list<Type>

Type

Type

Removes an element from the list.

removeFirst

Operand type

Parameter type

Result type

list<Type>

none

Type

Removes the first element from the list.

removeLast

Operand type

Parameter type

Result type

list<Type>

none

Type

Removes the last element from the list.

removeAt

Operand type

Parameter type

Result type

list<Type>

int

Type

Removes the element from the list located at the position index.

addAll

Operand type

Parameter type

Result type

list<Type>

sequence<Type>

list<Type>

Adds all elements in the parameter sequence to the list.

removeAll

Operand type

Parameter type

Result type

list<Type>

sequence<Type>

list<Type>

Removes all elements in the parameter sequence from the list.

clear

Operand type

Parameter type

Result type

list<Type>

none

void

Clears all elements from the list.
reverse

Operand type

Parameter type

Result type

list<Type>

none

list<Type>

Produces a list with all elements from the original list in the reversed order.

Important

Icon

The reverse operation does not modify the original list, but rather produces another list.


Set

A basic set container backed by either a hash set or a linked hash set.

Set type

set<Type>

Subtypes

Supertypes

Comparable types

sorted_set<Type>

sequence<Type>

java.util.Set<Type>

Set creation

new hashset
new linked_hashset

Parameter type

Result type

Type...
sequence<? extends Type>

set<Type>

Creates an empty set. Optionally, initial values may be specified right in the new set creation expression.


Alternatively, a sequence may be specified that is used to copy elements from.

Operations on set

iterator

Operand type

Parameter type

Result type

sequence<Type>

none

modifying_iterator<Type>

This operation is redefined for set to return a modifying_iterator.

add

Operand type

Parameter type

Result type

set<Type>

Type

Type

Adds an element to the set.

addAll

Operand type

Parameter type

Result type

set<Type>

sequence<Type>

set<Type>

Adds all elements in the parameter sequence to the set.

remove

Operand type

Parameter type

Result type

set<Type>

Type

Type

Removes an element from the set.

removeAll

Operand type

Parameter type

Result type

set<Type>

sequence<Type>

set<Type>

Removes all elements in the parameter sequence from the set.

clear

Operand type

Parameter type

Result type

set<Type>

none

void

Clears all elements from the set.


Sorted Set

A subtype of set that provides iteration over its elements in the natural sorting order, backed by a tree set.

Sorted Set type

sorted_set<Type>

Subtypes

Supertypes

Comparable types

none

set<Type>

java.util.SortedSet<Type>

Sorted set creation

new treeset

Parameter type

Result type

Type...
sequence<? extends Type>

sorted_set<Type>

Creates an empty set. Optionally, initial values may be specified right in the new set creation expression.


Alternatively, a sequence may be specified that is used to copy elements from.

Operations on sorted set

headSet

Operand type

Parameter type

Result type

sorted_set<Type>

Type

sorted_set<Type>

Results in a sorted_set that is a subset of all elements from the original set in the original sorting order, starting with the first element and up to but not including the specified element.

tailSet

Operand type

Parameter type

Result type

sorted_set<Type>

Type

sorted_set<Type>

Results in a sorted_set that is a subset of all elements from the original set in the original sorting order, starting with the specified element.

subSet

Operand type

Parameter type

Result type

sorted_set<Type>

Type, Type

sorted_set<Type>

Results in a sorted_set that is a subset of all elements from the original set in the original sorting order, starting with the first specified element and up to but not including the second specified element.
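The headSet, tailSet and subSet views correspond directly to the operations of java.util.SortedSet; a sketch with illustrative data:

```java
import java.util.Arrays;
import java.util.TreeSet;

public class SortedSetExample {
    static TreeSet<Integer> sample() {
        return new TreeSet<>(Arrays.asList(1, 3, 5, 7));
    }

    public static void main(String[] args) {
        TreeSet<Integer> s = sample();
        System.out.println(s.headSet(5));   // up to (not incl.) 5 -> [1, 3]
        System.out.println(s.tailSet(5));   // from 5 on           -> [5, 7]
        System.out.println(s.subSet(3, 7)); // 3 up to (not incl.) 7 -> [3, 5]
    }
}
```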


Map

A map container backed by either a hash map or a linked hash map.

Map type

map<KeyType, ValueType>

Subtypes

Supertypes

Comparable types

sorted_map<KeyType, ValueType>

sequence< mapping<KeyType, ValueType> >

java.util.Map<KeyType, ValueType>


The map type is retrofitted to be a subtype of sequence.

Map creation

new hashmap
new linked_hashmap

Parameter type

Result type

(KeyType => ValueType)...

map<KeyType, ValueType>

Creates an empty map. Optionally, initial values may be specified right in the new map creation expression.
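A Java analogue of new linked_hashmap with initial key => value pairs (the keys and values are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapExample {
    static Map<String, Integer> sample() {
        Map<String, Integer> ages = new LinkedHashMap<>(); // preserves insertion order
        ages.put("alice", 30); // alice => 30
        ages.put("bob", 25);   // bob => 25
        return ages;
    }

    public static void main(String[] args) {
        Map<String, Integer> ages = sample();
        System.out.println(ages.get("alice"));         // get value by key -> 30
        System.out.println(ages.containsKey("carol")); // containsKey -> false
        System.out.println(ages.keySet());             // keys -> [alice, bob]
    }
}
```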

Operations on map

get value by key

Operand type

Parameter type

Result type

map<KeyType, ValueType>

KeyType

ValueType

keys

Operand type

Parameter type

Result type

map<KeyType, ValueType>

none

sequence<KeyType>

Results in a sequence containing all the keys in the map.

containsKey

Operand type

Parameter type

Result type

map<KeyType, ValueType>

KeyType

boolean

Returns true if the map contains a mapping for the specified key, false otherwise.

values

Operand type

Parameter type

Result type

map<KeyType, ValueType>

none

sequence<ValueType>

Results in a sequence containing all the values in the map.

containsValue

Operand type

Parameter type

Result type

map<KeyType, ValueType>

ValueType

boolean

Returns true if the map contains a mapping with the specified value, false otherwise.


mappings

Operand type

Parameter type

Result type

map<KeyType, ValueType>

none

set< mapping<KeyType, ValueType> >

Results in a set of mappings contained by this map. The mappings can be removed from the set, but not added.

assign value to a key

Operand type

Parameter type

Result type

map<KeyType, ValueType>

KeyType

ValueType

remove

Operand type

Parameter type

Result type

map<KeyType, ValueType>

KeyType

void

Removes the specified key and the associated value from the map.

clear

Operand type

Parameter type

Result type

map<KeyType, ValueType>

none

void

Clears all key-value pairs from the map.

putAll

Operand type

Parameter type

Result type

map<KeyType, ValueType>

map<KeyType, ValueType>

void

Puts all mappings from the map specified as a parameter into this map, replacing existing mappings.

Sorted Map

A subtype of map that provides iteration over keys conforming to the natural sorting order, backed by a tree map.

Sorted map type

sorted_map<KeyType, ValueType>

Subtypes

Supertypes

Comparable types

none

map<KeyType, ValueType>

java.util.SortedMap<KeyType, ValueType>

Sorted map creation

new treemap

Parameter type

Result type

(KeyType => ValueType)...

sorted_map<KeyType, ValueType>

Creates an empty tree map. Optionally, initial values may be specified right in the new map creation expression.

Operations on sorted map

headMap

Operand type

Parameter type

Result type

sorted_map<KeyType, ValueType>

KeyType

sorted_map<KeyType, ValueType>

Results in a sorted_map that is a submap of the original map, containing all the mappings in the original sorting order, starting with the first key and up to but not including the specified key.

tailMap

Operand type

Parameter type

Result type

sorted_map<KeyType, ValueType>

KeyType

sorted_map<KeyType, ValueType>

Results in a sorted_map that is a submap of the original map, containing all the mappings in the original sorting order, starting with the specified key.

subMap

Operand type

Parameter type

Result type

sorted_map<KeyType, ValueType>

KeyType, KeyType

sorted_map<KeyType, ValueType>

Results in a sorted_map that is a submap of the original map, containing all the mappings in the original sorting order, starting with the first specified key and up to but not including the second specified key.

Stack

A simple stack abstraction, backed by a linked list.

Stack type

stack<Type>

Subtypes

Supertypes

Comparable types

deque<Type>

sequence<Type>

java.util.Deque<Type>

Stack creation

new linkedlist

Parameter type

Result type

Type...
sequence<? extends Type>

stack<Type>

Creates an empty stack. Optionally, initial values may be specified right in the new linked list creation expression.


Alternatively, a sequence may be specified that is used to copy elements from.

Operations on stack

iterator

Operand type

Parameter type

Result type

sequence<Type>

none

modifying_iterator<Type>

This operation is redefined for stack to return a modifying_iterator.

addFirst / push

Operand type

Parameter type

Result type

stack<Type>

Type

Type

Adds an element to the head of the stack.

removeFirst / pop

Operand type

Parameter type

Result type

stack<Type>

 

Type

Removes an element from the head of the stack.

first / peek

Operand type

Parameter type

Result type

stack<Type>

 

Type

Retrieves the first element at the head of the stack without removing it.
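The stack operations map directly onto java.util.Deque; a sketch with illustrative values:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackExample {
    static String demo() {
        Deque<String> stack = new ArrayDeque<>();
        stack.push("a");                    // addFirst / push
        stack.push("b");
        String top = stack.peek();          // first / peek -> "b"
        String popped = stack.pop();        // removeFirst / pop -> "b"
        return top + popped + stack.peek(); // "b" + "b" + "a"
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints bba
    }
}
```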

Queue

A simple queue abstraction, backed by a linked list or a priority queue.

Queue type

queue<Type>

Subtypes

Supertypes

Comparable types

deque<Type>

sequence<Type>

java.util.Deque<Type>

Queue creation

new linkedlist
new priority_queue

Parameter type

Result type

Type...
sequence<? extends Type>

queue<Type>

Creates an empty queue. Optionally, initial values may be specified right in the new linked list creation expression.


Alternatively, a sequence may be specified that is used to copy elements from.

Operations on queue

iterator

Operand type

Parameter type

Result type

sequence<Type>

none

modifying_iterator<Type>

This operation is redefined for queue to return a modifying_iterator.

addLast

Operand type

Parameter type

Result type

queue<Type>

Type

Type

Appends an element to the tail of the queue.

removeFirst

Operand type

Parameter type

Result type

queue<Type>

 

Type

Removes an element from the head of the queue.

first

Operand type

Parameter type

Result type

queue<Type>

 

Type

Retrieves the first element at the head of the queue without removing it.

last

Operand type

Parameter type

Result type

queue<Type>

 

Type

Retrieves the element at the tail of the queue without removing it.

Deque

A simple double-ended queue abstraction, backed by a linked list.

Deque type

deque<Type>

Subtypes

Supertypes

Comparable types

 

sequence<Type>
queue<Type>
stack<Type>

java.util.Deque<Type>

Deque creation

new linkedlist

Parameter type

Result type

Type...
sequence<? extends Type>

deque<Type>

Creates an empty deque. Optionally, initial values may be specified right in the new linked list creation expression.


Alternatively, a sequence may be specified that is used to copy elements from.

Operations on deque

iterator

Operand type

Parameter type

Result type

sequence<Type>

none

modifying_iterator<Type>

This operation is redefined for deque to return a modifying_iterator.

addFirst / push

Operand type

Parameter type

Result type

deque<Type>

Type

Type

Adds an element to the head of the deque.

addLast

Operand type

Parameter type

Result type

deque<Type>

Type

Type

Appends an element to the tail of the deque.

removeFirst / pop

Operand type

Parameter type

Result type

deque<Type>

 

Type

Removes an element from the head of the deque.

first

Operand type

Parameter type

Result type

deque<Type>

 

Type

Retrieves the first element at the head of the deque without removing it.

last

Operand type

Parameter type

Result type

deque<Type>

 

Type

Retrieves the last element at the tail of the deque without removing it.
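As deque is comparable to java.util.Deque, its two-ended operations can be illustrated with java.util.ArrayDeque (DequeDemo is a sketch of ours):

```java
import java.util.ArrayDeque;
import java.util.Deque;

class DequeDemo {
    static String demo() {
        Deque<String> d = new ArrayDeque<>();
        d.addLast("middle");
        d.addFirst("head");  // addFirst / push: add at the head
        d.addLast("tail");   // addLast: add at the tail
        String ends = d.peekFirst() + "-" + d.peekLast(); // first and last
        d.removeFirst();     // removeFirst / pop: drop the head
        return ends + "-" + d.peekFirst();
    }
}
```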

Iterator

A helper type that is analogous to java.util.Iterator. An instance of the iterator can be obtained with the iterator operation on a sequence.

Iterator type

iterator<Type>

Subtypes

Supertypes

Comparable types

modifying_iterator<Type>

none

java.util.Iterator<Type>

Operations on iterator

hasNext

Operand type

Parameter type

Result type

iterator<Type>

none

boolean

Tests if there is an element available.

next

Operand type

Parameter type

Result type

iterator<Type>

none

Type

Returns the next element.


Modifying Iterator

A subtype of iterator that supports the remove operation.

Modifying Iterator type

modifying_iterator<Type>

Subtypes

Supertypes

Comparable types

none

iterator<Type>

java.util.Iterator<Type>

Operations on modifying iterator

remove

Operand type

Parameter type

Result type

modifying_iterator<Type>

none

none

Removes the element this iterator is currently positioned at.
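Since modifying_iterator is comparable to java.util.Iterator, its behavior can be illustrated with the standard Java iterator's remove method (the helper class below is our own sketch):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

class ModifyingIteratorDemo {
    // Removes all even numbers while iterating; remove() deletes the
    // element the iterator is currently positioned at.
    static List<Integer> removeEvens(List<Integer> input) {
        List<Integer> list = new ArrayList<>(input);
        Iterator<Integer> it = list.iterator();
        while (it.hasNext()) {
            if (it.next() % 2 == 0) {
                it.remove();
            }
        }
        return list;
    }
}
```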


Enumerator

An alternative to the iterator: a helper type that works similarly to .NET's IEnumerator. An instance of the enumerator can be obtained with the enumerator operation on a sequence.

Enumerator type

enumerator<Type>

Subtypes

Supertypes

Comparable types

none

none

none

Operations on enumerator

moveNext

Operand type

Parameter type

Result type

enumerator<Type>

none

boolean

Moves to the next element. Returns true if there is an element available.

current

Operand type

Parameter type

Result type

enumerator<Type>

none

Type

Returns the element this enumerator is currently positioned at.
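The moveNext/current protocol can be sketched in Java as a thin wrapper over an iterator. This Enumerator class is a hypothetical illustration of the semantics, not the MPS runtime type:

```java
import java.util.Iterator;

// .NET-style enumerator protocol on top of a Java iterator:
// moveNext() advances and reports availability, current() returns
// the element the enumerator is positioned at.
class Enumerator<T> {
    private final Iterator<T> it;
    private T current;

    Enumerator(Iterable<T> source) {
        this.it = source.iterator();
    }

    boolean moveNext() {
        if (!it.hasNext()) {
            return false;
        }
        current = it.next();
        return true;
    }

    T current() {
        return current;
    }

    static String join(Iterable<String> items) {
        Enumerator<String> e = new Enumerator<>(items);
        StringBuilder sb = new StringBuilder();
        while (e.moveNext()) {
            sb.append(e.current());
        }
        return sb.toString();
    }
}
```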


Mapping

A helper type used by map and sorted_map.

Mapping type

mapping<KeyType, ValueType>

Subtypes

Supertypes

Comparable types

none

none

none

Operations on mapping

get value

Operand type

Parameter type

Result type

mapping<KeyType, ValueType>

none

ValueType

set value

Operand type

Parameter type

Result type

mapping<KeyType, ValueType>

ValueType

ValueType

get key

Operand type

Parameter type

Result type

map<KeyType, ValueType>

none

KeyType


Custom Containers

Custom containers are a simple way to provide your own implementation of the standard container types, allowing for easy extensibility of the collections language.

Example: weakHashMap

Provided the following declaration is reachable from the model currently being edited...

... one can use the weak version of hashmap as follows:
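The declaration itself lives in the MPS editor; on the runtime side, java.util.WeakHashMap is the natural implementation class for such a container. A plain-Java sketch of its behavior (WeakMapDemo is our own name):

```java
import java.util.Map;
import java.util.WeakHashMap;

class WeakMapDemo {
    // WeakHashMap drops entries once their keys are no longer
    // strongly reachable; here we hold a strong reference to the key
    // while we use it, so the entry is guaranteed to be present.
    static String demo() {
        Map<String, Integer> cache = new WeakHashMap<>();
        String key = new String("answer");
        cache.put(key, 42);
        return key + "=" + cache.get(key);
    }
}
```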

Custom Containers Declaration

A root node of concept CustomContainers may have one or more declarations.

declaration part

allowed contents

containerName

any valid identifier

container_type

one of the existing container types of the collections language

runtime type

a Java classifier that represents the implementation of the container

factory

(optional) container creation expression;
the classifier's default constructor is used if undefined


Primitive Containers

The collections framework includes a set of custom containers designed to work with primitive data types. Using primitive types helps optimize the speed and/or size of the containers. These containers are available through a separate language, jetbrains.mps.baseLanguage.collections.trove.

Primitive list containers

list<?,?>

byteArrayList

list<byte>

doubleArrayList

list<double>

floatArrayList

list<float>

intArrayList

list<int>

longArrayList

list<long>

shortArrayList

list<short>

Primitive set containers

set<?,?>

byteHashSet

set<byte>

doubleHashSet

set<double>

floatHashSet

set<float>

intHashSet

set<int>

longHashSet

set<long>

shortHashSet

set<short>

Primitive maps

map<byte,?>

byteByteHashMap

map<byte, byte>

byteDoubleHashMap

map<byte, double>

byteFloatHashMap

map<byte, float>

byteIntHashMap

map<byte, int>

byteLongHashMap

map<byte, long>

byteShortHashMap

map<byte, short>

map<double,?>

doubleByteHashMap

map<double, byte>

doubleDoubleHashMap

map<double, double>

doubleFloatHashMap

map<double, float>

doubleIntHashMap

map<double, int>

doubleLongHashMap

map<double, long>

doubleShortHashMap

map<double, short>

map<float,?>

floatByteHashMap

map<float, byte>

floatDoubleHashMap

map<float, double>

floatFloatHashMap

map<float, float>

floatIntHashMap

map<float, int>

floatLongHashMap

map<float, long>

floatShortHashMap

map<float, short>

map<int,?>

intByteHashMap

map<int, byte>

intDoubleHashMap

map<int, double>

intFloatHashMap

map<int, float>

intIntHashMap

map<int, int>

intLongHashMap

map<int, long>

intShortHashMap

map<int, short>

map<long,?>

longByteHashMap

map<long, byte>

longDoubleHashMap

map<long, double>

longFloatHashMap

map<long, float>

longIntHashMap

map<long, int>

longLongHashMap

map<long, long>

longShortHashMap

map<long, short>

map<short,?>

shortByteHashMap

map<short, byte>

shortDoubleHashMap

map<short, double>

shortFloatHashMap

map<short, float>

shortIntHashMap

map<short, int>

shortLongHashMap

map<short, long>

shortShortHashMap

map<short, short>

<K> map<K,?>

ObjectByteHashMap<K>

map<K, byte>

ObjectDoubleHashMap<K>

map<K, double>

ObjectFloatHashMap<K>

map<K, float>

ObjectIntHashMap<K>

map<K, int>

ObjectLongHashMap<K>

map<K, long>

ObjectShortHashMap<K>

map<K, short>



Tuples

Tuples give you a way to group related data of different types into small collection-like data structures. In MPS, tuples are available within the jetbrains.mps.baseLanguage.tuples language.

Indexed tuples

An indexed tuple is a structure that can contain several elements of arbitrary types, whose elements are accessed by index. The MPS implementation represents a tuple instance by a Java object. The usual meaning of the '=' and '==' operations on Java objects within MPS remains unchanged.

Named tuples

Named tuples are similar to indexed tuples, with the difference that elements are accessed by name instead of by index. To use named tuples, you first need to explicitly define them in your model (new -> jetbrains.mps.baseLanguage.tuples/tuple).

Declaration of Pair:

Named tuple declaration

A root node of concept NamedTupleDeclaration contains a single declaration.

declaration part

allowed contents

tupleName

any valid identifier

elementType

either a primitive type or any type that reduces to Java classifier

elementName

any valid identifier
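To make the idea concrete, a named tuple such as a hypothetical Pair(first: int, second: string) might generate into an ordinary Java class with one field per named element. The sketch below is ours, not the actual MPS-generated code:

```java
// Hypothetical generated form of a named tuple Pair(first, second).
class Pair {
    final int first;
    final String second;

    Pair(int first, String second) {
        this.first = first;
        this.second = second;
    }

    static String describe(Pair p) {
        return p.second + ":" + p.first; // elements are accessed by name
    }
}
```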


Dates language

An extension to the Base Language that adds support for dates.

Introduction

The Dates Language provides dedicated facilities for operating with date and time. Date creation, comparison, and adding or subtracting periods are done in a natural way, using common conventions and standard operators. The Joda Time library is used as the backend implementation.

Types

The following types are defined:

  • instant represents the number of milliseconds passed since the epoch, 1970-01-01T00:00Z. It's represented with a long. Thus an instant can be exchanged freely for a long value and vice versa.
  • datetime represents a date and time with time zone.
  • duration represents an interval of time measured in milliseconds.
  • period is a representation of a time interval defined in terms of fields: seconds, minutes, etc. A period is constructed using the + (plus) operator.
  • timezone represents a time zone.

    Changes in 1.1

    Icon

    The old datetime type is renamed to instant. The new datetime type contains timezone information. Use the in expression to convert an instant to a datetime.

Predefined values

A special reserved keyword now represents the current time.

A reserved keyword never represents an instant before the beginning of time. All instants are placed after never on the time axis. Its actual representation is null.

A period constant consists of a number and a property. Period constants can be summed together to form more complicated periods.

A time zone can be created in different ways. All predefined values are available in the completion menu. The default timezone value is a shortcut for the computer's local time zone.

Converting between types

A datetime can be obtained from an instant by providing a time zone. For the reverse conversion, simply get the datetime's instant property.

A period can be converted to a duration using the toDuration operation. It converts the period to a duration assuming a 7-day week, 24-hour day, 60-minute hour and 60-second minute.

For compatibility, there are ways to convert the datetime or instant types to java.util.Date and java.util.Calendar.

Reverse conversion:

Properties

Each individual datetime field can be queried using a dot expression.

To replace a field, use the with expression.

Each period can be re-evaluated in terms of any field (with rounding if needed) using the in operator.

Operations

Arithmetic

A period can be added to or subtracted from a datetime. The result is a datetime.

Two datetimes can be subtracted; the result is a period.

A duration can be added to or subtracted from an instant. The result is an instant.

Two instants can be subtracted; the result is a duration.
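The instant/duration arithmetic mirrors what the underlying time library provides. For illustration only, the same rules expressed with java.time (MPS itself generates Joda Time calls):

```java
import java.time.Duration;
import java.time.Instant;

class InstantArithmetic {
    static long demo() {
        Instant start = Instant.ofEpochMilli(0);
        Duration hour = Duration.ofHours(1);
        Instant later = start.plus(hour);               // instant + duration -> instant
        Duration diff = Duration.between(start, later); // instant - instant -> duration
        return diff.toMinutes();
    }
}
```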

Comparison

Two values of the same type (instant, datetime, period or duration) can be compared using standard operators: <, >, ==, etc.

Another form of comparison can be used for datetime by adding the by keyword followed by a field specification. In this case, values are compared rounded to that field (see #Rounding).

Minimum and maximum operations are defined for instant and datetime types.

Rounding

Datetime values can be rounded to the nearest whole unit of the specified field (second, minute, hour, day, month, etc). There are several ways of rounding:

  • round returns the datetime that is closest to the original
  • round down to returns the largest datetime that does not exceed the original
  • round up to returns the smallest datetime that is not less than the original
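In plain Java terms (again using java.time purely for illustration), round down to corresponds to truncation, and round up to can be derived from it. RoundingDemo is a sketch of ours:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

class RoundingDemo {
    // "round down to" is truncation; "round up to" truncates and then
    // adds one unit, unless the value was already whole.
    static Instant roundUpToHour(Instant t) {
        Instant down = t.truncatedTo(ChronoUnit.HOURS);
        return down.equals(t) ? down : down.plus(1, ChronoUnit.HOURS);
    }

    static String demo() {
        Instant t = Instant.parse("2024-01-01T10:20:00Z");
        return t.truncatedTo(ChronoUnit.HOURS) + "|" + roundUpToHour(t);
    }
}
```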

Printing and Parsing

To print or parse a datetime value, we use a date format describing its textual representation. The following formats are available by default:

  • defaultFormat, rssDate
  • shortDate, shortDateTime, shortTime, fullDate, longDate, etc (defined in Joda library)

A date format consists of one or more format tokens. The following kinds of tokens are supported:

  • literal (quoted with single quotes) - any text, commonly used to insert a dash or a space
  • datetime property (referenced with the name of a property in curly braces) - replaced with the value of the property when printed
  • switch - a composite token, which may vary the format depending on the date
  • offset (referenced as days ago, months ago, etc.) - calculates the difference between the provided datetime and the current moment
  • reference (a name in angle brackets) - a way to include an existing format

Additional date formats can be introduced using the j.m.baseLanguage.dates.DateFormatsTable root concept. Each format has a name and visibility. Formats with private visibility are local to the table.

A datetime instance can be printed in an existing format using the # operation.

Another possibility is to use the # operation with an inline format, which allows you to define the format in-place.

Both printing operations accept an optional locale argument in parentheses.

The parse operation accepts a string, a date format, a timezone and two optional parameters: a default value and a locale.

Changes in 1.1

Icon

The new print/parse expressions operate on datetime instead of instant. Use the intention to convert deprecated expressions to new ones.



Regular expressions language

Regular expressions are one of the earliest DSLs in wide use. Nearly all modern programming languages support regular expressions in one way or another, and MPS is no exception. Regular expression support in MPS is implemented through a base language extension.

We also recommend checking out the Regular Expressions Cookbook, to get a more thorough introduction into the language.

Defining regular expressions

The regexp language allows you to create an instance of the java.util.regex.Pattern class using a special pattern expression: /regexp/. In the generated code, MPS creates a static final field in the outermost class for each defined pattern expression, so the pattern is compiled only once at runtime.

There are three options you can add after the ending slash of the regexp:

/i

Case-insensitive matching

/s

Treat the string as a single line; the dot character class will also match newline delimiters

/m

Multiline mode: the ^ and $ characters match the start and end of any line within the string (instead of the start and end of the whole string)

The options can be turned on/off by typing or deleting the character in the editor, or through the Inspector. A preview of the generated regular expression is available in the Inspector.
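In the generated code, the three options correspond to the standard java.util.regex flags, and the pattern lives in a static final field so it is compiled only once. A sketch of the generated shape (the names below are ours, not actual MPS output):

```java
import java.util.regex.Pattern;

class RegexFlagsDemo {
    // The /i, /s and /m options map to the standard Pattern flags.
    private static final Pattern WORD = Pattern.compile(
            "^hello.world$",
            Pattern.CASE_INSENSITIVE  // /i : case-insensitive matching
          | Pattern.DOTALL            // /s : '.' also matches newlines
          | Pattern.MULTILINE);       // /m : ^ and $ match within lines

    static boolean matches(String s) {
        return WORD.matcher(s).find();
    }
}
```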

Re-using Definitions

To reuse a regular expression for a frequently used pattern across your project, create a separate root:

model -> New -> jetbrains.mps.baseLanguage.regexp -> Regexps

Each reusable regular expression should have a name and optionally a description.

Pattern Match Operator

The =~ operator returns true if the string matches against the specified pattern.

Capturing Text

Parentheses in the expression can be used to create a capture group. To be able to refer to the group later, the best practice is to give it a name.

Examples

If the pattern matches the string, the matched value is captured into the identifier and can be accessed in the if-block.
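In plain java.util.regex terms, a named capture group and the extraction of its value look like this (NamedGroupDemo and the date pattern are our own illustration):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class NamedGroupDemo {
    private static final Pattern DATE = Pattern.compile(
            "(?<year>\\d{4})-(?<month>\\d{2})-(?<day>\\d{2})");

    // Returns the captured year, or null when the pattern does not match.
    static String year(String s) {
        Matcher m = DATE.matcher(s);
        return m.matches() ? m.group("year") : null;
    }
}
```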

Don't forget to check out the Regular Expressions Cookbook, to get a more thorough introduction into the language.


Type Extension Methods

The jetbrains.mps.extensionMethods language provides a way to extend any valid MPS type with newly defined or overridden methods, akin to Java static methods.

Whereas static methods never become an internal part of the extended class, and one always has to pass the "extended" object to operate on as one of the parameters, with an extension method the new method is added directly to the list of operations available on the target type.

So, provided we wanted to add a reverse method to the string type, instead of the good old "static method" way:
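For reference, the classic static-method approach in plain Java looks roughly like this (StringUtil is a hypothetical helper class); note how the extended object must be passed in explicitly:

```java
// The "static method" way: the string to operate on is a parameter.
class StringUtil {
    static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }
}
```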

we would create new Extension Methods through New -> j.m.baseLanguage.extensionMethods/type extension, define the new method and tie it to the string class:

The very same mechanism can be used to override existing methods. When you need to call the original method, just call it on this:

Since MPS does a good job of visually distinguishing the original methods from those overridden through the extension methods mechanism, you can't make a mistake picking the right one from the drop-down list.
Obviously, this mechanism can be used to implement orthogonal concepts on your own domain objects as well:

With the declaration as above, one could write an operation on type my_type:

Root Nodes

There are two equally good ways to extend types with methods. Type Extension allows you to add methods to a single type in one place, while Simple Extension Method Container comes in handy when you need one place to implement an orthogonal concept for multiple different types.

Type Extension

This root contains declarations of extension methods for a single type.

Extension method declaration.

Simple Extension Method Container

Extension method declaration. The target type is specified per method.

declaration part

allowed contents

containerName

any valid identifier

extendedType

any valid MPS type

Both roots may contain one or more static fields. These are available to all methods in the container.


Builders allow users to construct objects and object hierarchies in a more convenient way. Instead of a manual instantiation of each and every object in the hierarchy and setting its properties one-by-one, with a dedicated builder the same data structure can be created in a more concise and intuitive way.

As an example, let's assume we're building a house.

A house needs an address, which itself consists of several items, a bunch of rooms in it, each of which needs a couple of properties, and so on.

Instead of the cumbersome way, builders give you a syntactic shortcut to take:

Looking at the code, you can quickly grasp the structure of the created object graph, since the structure of the code itself mirrors the dependencies among the created objects. Builders are nested into one another and they can hold properties. Both the property values and the mutual nesting of builders are then transformed into the object hierarchy built behind the scenes.

MPS brings a few handy builders directly to your door as part of some of the languages - JavaBeans, XML, XMLSchema and XMLQuery being the most prominent users.

Building Builders

To build your own builder, you first need to invoke New -> j.m.baseLanguage.builders.SimpleBuilders. Now you define builders for each object type that participates in the hierarchy. These builders hold their own properties and children, out of which they build the requested data structure. To stick to our earlier "House building" example, check out the sample below:

We defined a builder for the Room class as well as for the Address class and also a root builder for the House class. Root builders, unlike plain builders, can be used directly in user code after the new keyword. Notice also that we have two builders for the Room class. The first definition allows properties to be nested inside the room block, while the second allows the two properties to come directly as parameters to the room method call. Both approaches can certainly be combined in a single builder.

The House, Room and Address classes in our case are ordinary classes with methods and properties. The methods as well as setters for the properties manipulated in builders must be visible to the builders. The "package" visibility will do in typical cases. To give you an example, see below the House class definition from our example.
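In plain Java, the same convenience is usually approximated with a fluent builder, where method chaining and nesting mirror the object graph. A hypothetical sketch, not the MPS-generated code:

```java
import java.util.ArrayList;
import java.util.List;

// Fluent-builder sketch of the House example: the chained calls mirror
// the structure of the object being assembled.
class HouseBuilder {
    private String address;
    private final List<String> rooms = new ArrayList<>();

    static HouseBuilder house() {
        return new HouseBuilder();
    }

    HouseBuilder address(String a) {
        this.address = a;
        return this;
    }

    HouseBuilder room(String name, int windows) {
        rooms.add(name + "(" + windows + ")");
        return this;
    }

    String describe() {
        return address + ": " + rooms;
    }
}
```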

The jetbrains.mps.baselanguage.logging language contains statements for writing arbitrary information into the MPS log as well as to the Messages tool window. The language offers statements to log messages with different severities:
  • trace
  • info
  • debug
  • warn
  • error
  • fatal

Whenever you want to insert a log statement into code, start by typing the desired severity:

Upon completion the log statement with an empty message will be inserted.

The severity level can always be changed:

The log statement also supports exceptions to be specified. Use Alt + Enter to toggle the visibility of the attached exception:

XML Language

The jetbrains.mps.core.xml language is designed to closely model XML documents in MPS. The language aims to be a 1:1 match to plain XML and is generated into textual XML files.

Structure

The XmlFile root element should be used to represent an XML file.

It contains a single XmlDocument node, which itself holds one or more prolog entries and a root xml element:

There are several types of prolog elements to choose from and customize:

Use the Enter key to separate entries in the prolog, either within the same line or across multiple lines.

Editing

The elements, their attributes and values can then be entered naturally. The XML-specific symbols, such as '<', '>', '=', space and '&', are recognized as delimiters, and the automatically invoked transformations will correctly insert proper instances of the desired concepts - XmlElement, XmlAttribute, XmlText, XmlTextValue, XmlEntityRef, XmlEntityRefValue, XmlComment and others. Code completion should help you complete unfinished elements with little effort.

Generation

The language is transformed into textual XML using the TextGen aspect.


Here we introduce some handy BaseLanguage extensions.

Checked dots

Language: jetbrains.mps.baseLanguage.checkedDots

A Checked Dot Expression is a dot expression extended with null checks on the operand.

If the operand is null, the value of the whole checked dot expression becomes null; otherwise, it evaluates to the value of the corresponding dot expression.

Ways to create a Checked Dot Expression
  • The Make dot expression checked intention
  • Enter "?" after dot, e.g. customer.?address.?street
  • Left transform of operation with "?"

You can transform checked dot expressions back to usual dot expressions using the Make dot expression not checked intention.
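What a checked dot expression such as customer.?address.?street desugars to can be sketched in plain Java (the Customer and Address classes below are hypothetical):

```java
class CheckedDotDemo {
    static class Address {
        String street;
        Address(String s) { street = s; }
    }

    static class Customer {
        Address address;
        Customer(Address a) { address = a; }
    }

    // customer.?address.?street expands into nested null checks:
    static String street(Customer customer) {
        Address a = (customer == null) ? null : customer.address;
        return (a == null) ? null : a.street;
    }
}
```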

Overloaded operators

Language: jetbrains.mps.baseLanguage.overloadedOperators

This language provides a way to overload binary operators.

Overloaded operator declarations are stored in an OverloadedOperatorContainer.

If there are several overloaded versions of one operator, the most relevant one is chosen.

Note that if an overloaded operator is used in a different model than the one containing its declaration, the overloadedOperators language should be added to the "languages engaged on generation" of the usage's model.

Examples

Overloading plus operator for class Complex:
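In plain Java terms, overloading + for a Complex class amounts to defining an ordinary method that the operator usage generates into, e.g. a + b becoming a call such as a.plus(b). A hypothetical sketch:

```java
// Hypothetical Complex class; the overloaded a + b generates into a.plus(b).
class Complex {
    final double re;
    final double im;

    Complex(double re, double im) {
        this.re = re;
        this.im = im;
    }

    Complex plus(Complex other) {
        return new Complex(re + other.re, im + other.im);
    }
}
```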

 

You can also define your own custom operators. Assume we want to create a binary boolean operator for strings that tells whether one string contains another:

 

Now, we can simply use this operator:


Custom constructors (since 1.5.1)

Language: jetbrains.mps.baseLanguage.constructors

Custom constructors provide a simple way to create complex objects. They are stored in a special root node - CustomConstructorsContainer.

Example

Assume we need a faster way to create a rectangle.


Now, let's create a rectangle:
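In plain Java terms, a custom constructor plays the role of a static factory method that hides the multi-step setup of a complex object. The Rectangle sketch below is hypothetical:

```java
class Rectangle {
    final int x, y, width, height;

    private Rectangle(int x, int y, int width, int height) {
        this.x = x;
        this.y = y;
        this.width = width;
        this.height = height;
    }

    // Hypothetical custom constructor: build a rectangle from two corners,
    // normalizing the origin and dimensions in one step.
    static Rectangle fromCorners(int x1, int y1, int x2, int y2) {
        return new Rectangle(Math.min(x1, x2), Math.min(y1, y2),
                             Math.abs(x2 - x1), Math.abs(y2 - y1));
    }
}
```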


Delivering languages to the users 

What is MPS build language?

Build Language is an extensible build automation DSL for defining builds in a declarative way. Generated into Ant, it leverages Ant's execution power while keeping your sources clean and free from clutter and irrelevant details. Organized as a stack of MPS languages with Ant at the bottom, it allows each part of your build procedure to be expressed at a different abstraction level. Building a complex artifact (like an MPS plug-in) can be specified in just one line of code if you follow the language conventions, but, at the same time, nothing prevents you from diving deeper and customizing details like file management or manifest properties.

As with many build automation tools, project definition is the core of the script. Additionally, and unlike most of the other tools, Build Language gives you full control over the output directory layout. The expected build result is defined separately in the build script and not as a part of some (third-party) plugin.
Every build script is made up of three parts. The first is dependencies, something required that comes already built. Think of libraries or third-party languages, for example. Next is the project structure. It contains declarations of everything you have in your repository and what is going to be built, as well as the required build parameters. Note that declaring an item here does not trigger its build unless it is needed, i.e. referred to from the last part of the script - the output layout. The output could be as straightforward as a set of plain folders and copied files, or much more complex with zipped artifacts such as packaged plug-ins or MPS languages. For example, to build a jar file out of Java sources you need to declare a Java module in the project structure and the respective jar file with a reference to the module in the output layout.

Thanks to MPS, Build Language comes with concise textual notation and an excellent editing experience, including completion and on-the-fly validation. Extension languages (or plugins if we stick to the terminology of the other build tools) add additional abstractions on top of the language. In our experience, it is quite an easy process to create a new one compared to developing Maven or Gradle plugins.

Build script structure

See below an example of a build script that builds a plugin for IntelliJ IDEA:

Let's look at it closely. The header of the script consists of general script information: the name of the script (Complex in the screenshot), the file it is generated into (build.xml) and the base directory of the script (in the screenshot, shown both relative to the script location, ../../, and as a full path).

The body of the script consists of the following sections:

  • use plugins contains a list of plugins used in the script. Plugins in Build Language are similar to those in Gradle: they are extensions to the language that provide a number of tasks to do useful things, like compiling java code, running unit tests, packaging modules, etc. In the screenshot two plugins are used: java and mps, which means that the script can build java and mps code.
  • macros section defines path macros and variables used in the project (idea_home and plugins_home) together with their default values, which could be overridden during execution of the script.
  • dependencies defines script dependencies on other build scripts. If a script references something defined in another build script, it must specify that script in the dependencies section. The example script in the screenshot depends on two other scripts, IDEA and mpsPlugin. These are provided by MPS, so in order to use them one has to specify their artifacts location, i.e. the place where Ant can find the result of their work (in the example, idea_home should point to the location of the IntelliJ IDEA jars and plugins_home should point to the location of the MPS plugins for IDEA). A script can also depend on build scripts in the same MPS project. In that case, an artifacts location is not required and it is assumed that the required script is built just prior to the current script (there is a buildDependents target to do so).
  • project structure section contains the description of the project: which modules it has, where the source code is located, what the modules' classpath is, etc. The example project in the screenshot consists of a single IDEA plugin named Complex and a group of MPS modules.
  • default layout defines how to package the project into the distribution. The example project on the screenshot is packaged into a zip file named Complex.zip.
  • additional aspects defines some other things related to the project, for example, various settings, integration tests to run, and so on.

Built-in plugins

Build Language provides several built-in plugins.

Java plugin

The Java plugin adds the capability to compile and package Java code. Source code is represented as Java modules and Java libraries.

A Java module defines its content (source folder locations) and its dependencies on other modules, libraries and jars. In its content section, a Java module can have:

  • folder – a path to source folder on disk;
  • resources – a fileset of resources. Consists of a path to resources folder and a list of selectors (include, exclude or includes).
  • content root – a root with several content folders.

In its dependencies section, a Java module can have:

  • classpath – an arbitrary xml with classpath;
  • external jar – a jar file from other build script layout;
  • external jar in folder – a jar file referenced by name in a folder from some other build script layout;
  • jar – a path to local jar;
  • library – a reference to a java library;
  • module – a reference to a java module.

Each Java module is generated into its own Ant target that depends on other targets according to the source module dependencies. To compile cyclic module dependencies, a two-step compilation is performed:

  1. A "cycle" target compiles all modules in the cycle together.
  2. Each module in the cycle is compiled with the result of compilation of "cycle" target in classpath.

A Java library consists of jars (either specified by path or as references to another project's layout) and class folders. The available elements are:

  • classes folder – a folder with classes;
  • external jar – a jar file from other build script layout;
  • external jars from – a collection of jars from a folder in some other build script layout;
  • jar – a path to local jar;
  • jars – a path to local folder with jars and a list of selectors (include, exclude or includes).

Compilation settings for Java modules are specified in java options. There can be several java options in a build script; only one of them can be the default. Each module can specify its own java options to be used for compilation.

Java Targets

Java plugin adds the following targets:

  • compileJava compiles all java modules in the project.
  • processResources extension point for additional resource processing.
  • classes does all compilation and resource processing in the project. It depends on targets compileJava, processResources.
  • test extension point target for running unit tests.
  • check does all testing and checking of project correctness. It depends on target test.

MPS plugin

The MPS plugin enables build scripts to build MPS modules. In order to use the MPS plugin, one must add the jetbrains.mps.build.mps language to the used languages.

MPS modules and groups

The MPS plugin enables adding MPS modules to the project structure. The screenshot shows an example of a language declared in a build script.

Note that a lot of information about the module is specified in the build script, most of it displayed in the Inspector tool window: the uuid and fully qualified name, the full path to the descriptor file, dependencies, runtime (for a language), etc. This information is required for packaging the module. So, every time something changes in this module, for example a dependency is added, the build script has to be changed as well. There is, of course, a number of tools to do this easily. The typical process of writing and managing MPS modules in the script looks as follows:

  1. Adding a module to the script. One specifies which type of module to add (a solution, a language or a devkit) and the path to the module descriptor file. Then the "Load required information from file" intention can be used to read that file and fill in the rest of the module specification automatically.
  2. Reflecting the changes made in the module. One can check a model with build scripts using the Model Checker to find out whether it is consistent with the module files. The Model Checker will show all problems in the script and allow you to fix them using the "Perform Quick Fixes" button. Instead of the Model Checker, one can use the same "Load required information from file" intention to fix each module individually.

Another thing to remember about MPS module declarations in build scripts is that they do not rely on the modules being loaded in MPS. All the information is taken from the module descriptor file on disk, so the module itself can be unavailable from the build script.

MPS modules can be added to an mps group in order to structure the build script. An MPS group is just a named set of modules that can be referenced from the outside; for example, one can add a module group to an IDEA plugin as one unit.

How generating and compiling MPS modules works internally

As noted above, a lot of information about a module is extracted into the build script and stored there. This requires the user to properly update the script whenever module dependencies change. For a solution, both the reexported and non-reexported dependencies are extracted into the script. For a language, apart from the dependencies, runtime solutions and extended languages are also extracted.

"Building" a module with a build script consists of two parts: generating the module and compiling the module sources. Generating is an optional step for projects that store their generated source code in a version control system and keep it in sync with their models. For generating, one target is created that generates all modules in the build script. Modules are separated into "chunks" – groups of modules that can be generated together – and the generate task generates the chunks one by one. For example, a language and a solution written in that language cannot be generated together, so they go into separate chunks. Apart from the list of chunks to generate, the generate task is provided with a list of IDEA plugins to load and a list of modules from other build scripts that are required for the generation. These lists of plugins and modules are calculated from the dependencies, so their correctness is crucial for successful generation. This is a major difference between generating a module from MPS and from a build script: when generating a module from MPS, the generator has all modules in the project loaded and available, while when generating a module from a build script, the generator only has whatever was explicitly specified in the module dependencies. A build script can therefore serve as a kind of verifier of the correctness of module dependencies.
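To make the chunking idea concrete, the generated Ant file conceptually performs something like the following. This is an illustrative sketch only – the element and attribute names are hypothetical and do not reflect the actual schema of the MPS Ant tasks:

```xml
<!-- Hypothetical sketch: chunked generation, the language first, then
     the solution written in that language. Names and paths are
     illustrative only, not the real MPS task schema. -->
<target name="generate" depends="declare-mps-tasks">
  <generate>
    <!-- chunk 1: the language must be generated before its users -->
    <chunk>
      <module file="languages/my.language/my.language.mpl"/>
    </chunk>
    <!-- chunk 2: a solution written in that language -->
    <chunk>
      <module file="solutions/my.solution/my.solution.msd"/>
    </chunk>
    <!-- plus the IDEA plugins to load and modules from other build
         scripts, derived from the declared dependencies -->
  </generate>
</target>
```

The key property this sketch illustrates is the ordering constraint: a chunk is only generated once all chunks it depends on have been generated.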

Compilation of a module is performed a bit differently: for every MPS module a Java module is generated, so in the end each MPS module is compiled by an ordinary Ant javac task (or another task, if one was selected in Java Options).
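The compilation step for a single module therefore boils down to plain Ant. A minimal sketch, with hypothetical source and output paths, might look like this:

```xml
<!-- Hypothetical sketch: compiling one MPS module's generated Java
     sources with a plain javac task. All paths and the classpath
     property are illustrative only. -->
<target name="compile.my.solution" depends="generate">
  <mkdir dir="build/classes/my.solution"/>
  <javac srcdir="solutions/my.solution/source_gen"
         destdir="build/classes/my.solution"
         classpath="${mps.classpath}"
         includeantruntime="false"/>
</target>
```

The classpath here stands for the collected dependencies of the module, which is exactly the information the build script extracts from the module descriptor files.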

So in order to generate and compile, the dependencies of a module have to be collected and embedded into the generated build XML file. Used languages and devkits are collected from the module descriptor files during generation of the build script. The other information is stored inside the build script node. The picture below shows the module structure of a project called "myproject", which uses a third-party MPS library called "mylibrary".

The arrows illustrate the dependencies between modules. The purple arrows denote dependencies that are extracted into the build script, while the blue arrows indicate dependencies that are not extracted. It is easy to see that in order to generate and compile the modules from myproject, knowledge of the "blue arrows" inside mylibrary is not required. This means the actual module files from mylibrary do not even have to be present when the myproject build script is generated: all the information the generator needs is contained in the build script. This is very convenient – there is no need to download the whole library and specify its full location during build generation, and the generation process saves time and memory by not loading all module descriptors from the project's dependencies.

Sources and tests

When an MPS solution contains test models, i.e. models with the stereotype "@tests", they are generated into a folder "tests_gen", which is not compiled by default. To compile tests, one needs to specify in the build script that the solution has test models. This is done manually in the Inspector. There are three options available for a solution: "with sources" (the default), "with tests" and "with sources and tests".

MPS Settings

The mps settings construct allows changing MPS-specific parameters of a build script. At most one instance of mps settings can exist in a build script, in the "additional aspects" section. The following parameters can be changed:

  • bootstrap – setting this flag to "true" indicates that there are bootstrapping dependencies between modules in the script. Normally the flag is set to false. See Removing bootstrapping dependency problems for details.
  • test generation – if set to true, the build script tests module generation and checks for differences between the generated files and the files on disk. Files can be excluded from the diff in the excludes section.
  • generation max heap size in mb – the maximum heap size for generation and generation testing.

Testing Modules Generation

Projects that keep their generated source files in version control can use the build script to check that these generated files are up to date. After setting test generation in mps settings to true, a call of the gentest task appears in the test target of the generated build script. Similarly to the generate task, gentest loads the modules in the script, their dependencies from other build scripts and the required IDEA plugins. For each module, the gentest task invokes two tests: "%MODULE_NAME%.Test.Generating" and "%MODULE_NAME%.Test.Diffing". Test.Generating fails when the module has errors during generation, and Test.Diffing fails when the generated files differ from the ones on disk (checked out from version control). Test results and statistics are formatted into an XML file supported by the TeamCity build server.
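Following the naming pattern above, a report for a single module could conceptually contain entries like the following. This is a hypothetical JUnit-style sketch; the exact format of the generated XML file is not specified here:

```xml
<!-- Hypothetical sketch of a test report for one module, using the
     %MODULE_NAME%.Test.Generating / .Test.Diffing naming pattern.
     The report structure itself is illustrative only. -->
<testsuite name="jetbrains.mps.samples.complex">
  <!-- generation succeeded -->
  <testcase name="jetbrains.mps.samples.complex.Test.Generating"/>
  <!-- generated files no longer match what is checked in -->
  <testcase name="jetbrains.mps.samples.complex.Test.Diffing">
    <failure message="Generated files differ from files on disk"/>
  </testcase>
</testsuite>
```

A Diffing failure like this one typically means the models were changed without regenerating and committing the sources.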

IDEA plugins

The idea plugin construct defines a plugin for IntelliJ IDEA or MPS that contains MPS modules. The screenshot shows an example of such a plugin.

The first section of the plugin declaration consists of various information describing the plugin: its name and description, the name of the folder, the plugin vendor, etc. The important string here is the plugin id, which follows the keywords idea plugin. This is the identifier that uniquely distinguishes the plugin from all others (in the example the plugin id is jetbrains.mps.samples.complex).

The next section is the actual plugin content – a set of modules or module groups included in the plugin. If some module included in the plugin needs to be packaged in a way other than the default, this should also be specified here (see the line "custom packaging for jetbrains.mps.samples.complex.library").

The last section is dedicated to the plugin's dependencies on other plugins. The rule is: if "moduleA", located in "pluginA", depends on "moduleB", located in "pluginB", then "pluginA" must declare a dependency on "pluginB". A typesystem check exists that identifies and reports violations of this rule.

The layout of the plugin is specified last:

In the screenshot, the module jetbrains.mps.samples.complex.library is packaged into the plugin manually, since the idea plugin construct specifies that it should not be packaged automatically.

MPS Targets

The MPS plugin provides the following targets:

  • generate - generates the MPS modules that are included in the project structure.
  • cleanSources - cleans the generated code (only for modules without bootstrapping dependencies). See more about bootstrapping dependencies in the article Removing bootstrapping dependency problems.
  • declare-mps-tasks - a utility target that declares mps tasks such as generate or copyModels.
  • makeDependents - invokes the generate target for the transitive closure of this script's dependencies (if there are any) and then invokes assemble to put them together. It is guaranteed that each script is executed only after all its dependencies have been built.
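These targets can be driven from another Ant file, for example on a CI server. A minimal sketch, with a hypothetical path to the generated build file, might look like this:

```xml
<!-- Hypothetical sketch: invoking the targets of a generated MPS build
     file from a wrapper Ant project. The build-file path is
     illustrative only. -->
<project name="ci" default="build">
  <target name="build">
    <!-- declare the mps tasks, then generate and assemble the modules -->
    <ant antfile="mylanguage/build.xml" target="declare-mps-tasks"/>
    <ant antfile="mylanguage/build.xml" target="generate"/>
    <ant antfile="mylanguage/build.xml" target="assemble"/>
  </target>
</project>
```

Running declare-mps-tasks first matters, since the other targets rely on the mps task definitions it provides.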

Module Testing plugin

The Module Testing plugin, provided by the jetbrains.mps.build.mps.tests language, adds to build scripts the capability to execute NodeTestCases and EditorTestCases in MPS solutions. Tests are executed after all modules have been compiled and packaged into a distribution, i.e. against the packaged code, so they run in an environment that closely mimics real use of the code.

Test modules configurations

Solutions and module groups with tests are grouped into test modules configurations. A test modules configuration is a group of solutions with tests that are executed together in the same environment; all required dependencies (i.e. modules and plugins) are loaded into that environment.
In the screenshot, you can see a test modules configuration named execution, which contains the solution jetbrains.mps.execution.impl.tests and the module group debugger-tests.

There is a precondition for solutions to be included in a test modules configuration: the solution must be specified as containing tests (by selecting "with tests" or "with sources and tests" in the Inspector). A module group must contain at least one module with tests.

Test results and statistics are formatted into an XML file (in a format supported by TeamCity).

How-to's

The following articles explain how to build a language plugin:

Articles on the topic of building with MPS:

So you have created a set of languages and would like to make them available to Java developers inside IntelliJ IDEA. In this document we will look at ways to package a set of languages, perhaps together with the runtimes they depend on, into a valid IntelliJ IDEA plugin.

Do you prefer video? Then you may also like to check out our screencast covering the topic of IntelliJ IDEA language plugin creation.

Note: The JavaExtensionsSample sample project that comes with MPS contains a fully functional build script to build and package the sample Java extensions into a plugin. You can take inspiration from there.

Starting point

I assume you have built your languages and now it is time to share them with the world. There are a couple of steps to follow in order to get a nice zip file that you can upload to a server for others to grab.

In brief:

  • Create a build script (manually or through a wizard)
  • Generate an Ant build xml file
  • Run Ant
  • Pick the generated files and share them

Now we'll continue in more detail. Alternatively, you may like to try our new screencasts that cover the topics of building as well as using IntelliJ IDEA language plugins.

Create a build script

First of all, we need to create a new build script in the newly created build solution. We have two options - using a wizard or creating the build description manually.

Using the Wizard

We can use the Build Solution wizard to generate a solution for us.


The wizard will ask whether the new build script should become a part of an existing solution or whether a new one should be created.


A model has to be created inside the new solution:


You can also specify whether you intend to package the outcome as an MPS or IntelliJ IDEA plugin:


Finally, you select the languages and solutions to include in the build script:

The generated build description script will look something like this:

The manual approach

To get more control over the process, we can alternatively create the build script ourselves. First, pick an existing solution or create a new one: right-click in the root of your project's logical view and pick "New Solution". Once the solution exists, create a new model in it. The model should have jetbrains.mps.build and jetbrains.mps.build.mps listed among its Used Languages, and jetbrains.mps.ide.build should be listed as a Dependency.



With the solution and the model prepared you can now create a new build project through the pop-up menu:

Editing the build script

However you created the build script, it is now time to edit your build description. In order to be able t