
MPS User Guide for Language Designers


You are viewing documentation of MPS 2017.3, which is not the most recently released version of MPS. Please refer to the documentation page to choose the latest MPS version.



Welcome to MPS. This User Guide is a complete reference for MPS. It will navigate you through the many concepts and usage patterns that MPS offers and give you a hand whenever you need more details about any particular aspect of the system.


How to use the keyboard efficiently

Go to the Default keymap reference page to learn the most useful keyboard shortcuts.

Terms and names explained

Check out the Glossary page for explanation of frequently used terms.

Frequently Asked Questions (FAQ)

Check out the FAQ document to get some of your questions answered before you even ask them.

User guide for language designers

Basic notions

This chapter describes the basic MPS notions: nodes, concepts, and languages. These are key to a proper understanding of how MPS works. They only make sense when combined with one another, so we must talk about them all together. This section aims to give you the essence of each of these elements. For further details, you may consider checking out the sections devoted to nodes, concept (structure language), and languages (project structure).

Abstract Syntax Tree (AST)

MPS differentiates itself from many other language workbenches by avoiding the text form. Your programs are always represented by an AST: you edit the code as an AST, you save it as an AST, and you compile it as an AST. This allows you to avoid defining a grammar and building a parser for your languages. Instead, you define your language in terms of types of AST nodes and rules for their mutual relationships. Almost everything you work with in the MPS editor is an AST node, belonging to an Abstract Syntax Tree (AST). In this documentation we use the shorter name node for AST node.


Nodes form a tree. Each node has a parent node (except for root nodes), child nodes, properties, and references to other nodes.

The AST nodes are organized into models. Nodes that don't have a parent are called root nodes. These are the top-most elements of a language. For example, in BaseLanguage (MPS' counterpart of Java), the root nodes are classes, interfaces, and enums.


Nodes can be very different from one another. Each node stores a reference to its declaration, its concept. A concept defines the "type" of its nodes: it declares a class of nodes and defines the structure of the nodes in that class. It specifies which children, properties, and references an instance of the concept can have. Concept declarations form an inheritance hierarchy. If one concept extends another, it inherits all children, properties, and references from its parent.
Since everything in MPS revolves around ASTs, concept declarations are AST nodes themselves. In fact, they are instances of a particular concept, ConceptDeclaration.
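The node/concept relationship described above can be sketched in plain Java. This is an illustrative model only, not the actual MPS API; all class and field names are invented:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only -- not the real MPS API; names are invented.
public class AstSketch {

    // A concept describes the "type" of its nodes: which children,
    // properties and references instances may have. Concepts inherit.
    static class Concept {
        final String name;
        final Concept parent;          // concept inheritance
        Concept(String name, Concept parent) { this.name = name; this.parent = parent; }
        boolean isSubConceptOf(Concept other) {
            for (Concept c = this; c != null; c = c.parent)
                if (c == other) return true;
            return false;
        }
    }

    // A node is an instance of a concept; nodes form the AST.
    static class Node {
        final Concept concept;
        Node parent;                                    // null for root nodes
        final List<Node> children = new ArrayList<>();
        final Map<String, String> properties = new HashMap<>();
        final Map<String, Node> references = new HashMap<>();
        Node(Concept concept) { this.concept = concept; }
        Node addChild(Node child) { child.parent = this; children.add(child); return child; }
        boolean isRoot() { return parent == null; }
    }

    public static void main(String[] args) {
        Concept classifier = new Concept("Classifier", null);
        Concept classConcept = new Concept("ClassConcept", classifier);
        Node myClass = new Node(classConcept);          // a root node
        myClass.properties.put("name", "MyClass");
        Node method = myClass.addChild(new Node(new Concept("Method", null)));
        System.out.println(myClass.isRoot() + " " + method.isRoot()
                + " " + classConcept.isSubConceptOf(classifier));  // true false true
    }
}
```

The sketch captures the essentials: a node knows its concept, its parent (null for roots), its children, properties, and references, and concepts form an inheritance chain.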


A language in MPS is a set of concepts with some additional information. The additional information includes details on editors, completion menus, intentions, typesystem, dataflow, etc. associated with the language. This information forms several language aspects.
Naturally, a language can extend another language. An extending language can use any concepts defined in the extended language as types for its children or references, and its concepts can inherit from any concept of the extended language. Languages in MPS thus form fully reusable components.


While languages allow their users to create code, which is stored in models, generators can transform these source models into target models. Generators perform model-to-model conversion on AST models. The target models use different languages than the source models and serve one or more purposes:

  • can be converted to text source files and then compiled with standard compilers (Java, C, etc.)
  • can be converted to text documents and used as such (configuration, documentation - property files, xml, pdf, html, latex)
  • can be directly interpreted
  • can be used for code analysis or formal verification by a third-party tool (CBMC, state-machine reachability analysis, etc.)
  • can be used for simulation of the real system

Generators typically rely on a Domain framework - a set of libraries that the generated code calls or inherits from. The framework encodes the stable part of the desired solution, while the variable part is contained in the actual generated code.

Generation in MPS is done in phases - the output of one generator can become the input for another generator in a pipeline. An optional model-to-text conversion phase (TextGen) may follow to generate code in a textual format. This staged approach helps bridge potentially big semantic gaps between the original problem domain and the technical implementation domain. It also encourages reuse of generators.
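A minimal sketch of such a staged pipeline, with models reduced to plain strings purely for illustration (this is not the MPS generator API; all names are invented):

```java
import java.util.List;
import java.util.function.Function;

// Sketch of staged generation (invented names, not the MPS generator API):
// each generator maps a model to a lower-level model; an optional
// TextGen-like phase renders the final model into text.
public class GenerationPipeline {
    static String generate(String model,
                           List<Function<String, String>> generators,
                           Function<String, String> textGen) {
        for (Function<String, String> g : generators)
            model = g.apply(model);        // model-to-model phases, in order
        return textGen.apply(model);       // model-to-text phase
    }

    public static void main(String[] args) {
        // One model-to-model phase: rewrite DSL constructs into a base language.
        Function<String, String> dslToBase = m -> m.replace("dsl", "baseLanguage");
        // A trivial text rendering phase.
        Function<String, String> textGen = m -> "// generated\n" + m;
        System.out.println(generate("dsl model", List.of(dslToBase), textGen));
    }
}
```

The point of the sketch is the shape of the process: generators compose in a fixed order, and text output is just one more (optional) phase at the end.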


MPS project structure


When designing languages and writing code, good structure helps you navigate around and combine the pieces together. MPS is similar to other IDEs in this regard.


Project is the main organizational unit in MPS. Projects consist of one or more modules, which themselves consist of models. A model is the smallest unit for generation/compilation. We describe these concepts in detail right below.


Here's a major difference that MPS brings along - programs are not in text form. Ever.
You might be used to the fact that all programming is done in text. You edit text. The text is then parsed by a parser to build an AST. Grammars are typically used to define parsers. The AST is then used as the core data structure for working with your program further, either by the compiler to generate runnable code or by an IDE to give you clever code assistance, refactorings, and static code analysis.
Now, seeing that AST is such a useful, flexible and powerful data structure, how would it help if we could work with AST from the very beginning, avoiding text, grammar and parsers altogether? Well, this is exactly what MPS does.

To give your code some structure, programs in MPS are organized into models. Think of models as somewhat similar to compilation units in text-based languages. To give you an example, BaseLanguage, the bottom-line language in MPS, which builds on Java and extends it in many ways, uses models so that each model represents a Java package. Models typically consist of root nodes, which represent top-level declarations, and non-root nodes. For example, in BaseLanguage classes, interfaces, and enums are root nodes. (You can read more about nodes here.)

Models need to hold their meta information:

  • models they use (imported models)
  • languages (and also devkits) they are written in (in used languages section)
  • a few extra params, such as the model file and special generator parameters

This meta information can be altered in Model Properties from the model's pop-up menu or by pressing Alt + Enter when positioned on the model.


Models themselves are the most fine-grained grouping elements. Modules organize models into higher-level entities. A module typically consists of several models accompanied by meta information describing the module's properties and dependencies. MPS distinguishes several types of modules: solutions, languages, devkits, and generators.
We'll now talk about the meta-information structure as well as the individual module types in detail.

Module meta information

Now that we have our code organized into modules, we need a way to combine the modules together. Relationships between modules are described through the meta information they hold. The possible relationships among modules can be categorized into several groups:

  • Dependency - if one module depends on another, models inside the former can import models from the latter. The reexport property of the dependency relationship indicates whether the dependency is transitive or not. If module A depends on module B with the reexport property set to true, every other module that declares a dependency on A automatically depends on B as well.
  • Extended language dependency - if language L extends language M, then every concept from M can be used inside L as a target of a role or an extended concept. Also, all the aspects from language M are available for use and extension in the corresponding aspects of language L.
  • Generation Target dependency - a relation between two languages (L2 and L1) used when one needs to specify that the generator of L2 generates into L1 and thus needs L1's runtime dependencies.
  • Used language - if module A uses language L, then models inside A can use language L.
  • Used devkit - if module A uses devkit D, then models inside A can use devkit D.
  • Generator output path - generator output path is a folder where all newly generated files will be placed. This is the place you can look for the stuff MPS generates for you.
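The reexport rule described above can be sketched as follows (an illustrative model only, not the actual MPS module code; all names are invented):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of reexport semantics (invented types, not the MPS module model):
// a module sees its direct dependencies plus, transitively, every
// dependency that those modules declare with reexport = true.
public class Reexport {
    // module name -> list of [target, reexport-flag] pairs
    static final Map<String, List<String[]>> deps = new HashMap<>();

    static void depend(String from, String to, boolean reexport) {
        deps.computeIfAbsent(from, k -> new ArrayList<>())
            .add(new String[]{to, String.valueOf(reexport)});
    }

    // All modules visible from 'module': direct deps + reexported closure.
    static Set<String> visibleFrom(String module) {
        Set<String> result = new LinkedHashSet<>();
        for (String[] d : deps.getOrDefault(module, List.of())) {
            result.add(d[0]);
            result.addAll(reexportedBy(d[0]));
        }
        return result;
    }

    // Everything 'module' passes on to its dependants via reexport = true.
    static Set<String> reexportedBy(String module) {
        Set<String> result = new LinkedHashSet<>();
        for (String[] d : deps.getOrDefault(module, List.of()))
            if (Boolean.parseBoolean(d[1])) {
                result.add(d[0]);
                result.addAll(reexportedBy(d[0]));
            }
        return result;
    }

    public static void main(String[] args) {
        depend("A", "B", true);    // A depends on B and reexports it
        depend("C", "A", false);   // C depends on A
        System.out.println(visibleFrom("C"));   // C sees A and, via reexport, B
    }
}
```

The key point is that the reexport flag sits on the edge A→B, so anything depending on A picks up B without declaring it.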

Now we'll look at the different types of modules you can find in MPS.


Solution is the simplest possible kind of module in MPS. It is just a set of models holding code and unified under a common name. There are several types of solutions:

  • Sandbox solutions - these solutions hold end-user code. The IDE does not treat the code in any special way.
  • Runtime solutions - these solutions contain code that other modules (solutions, languages, or generators) depend on. The code can consist of MPS models as well as Java classes, sources, or jar files. The IDE will reload the classes whenever they get compiled or changed externally.
  • Plugin solutions - these solutions extend the IDE functionality in some way. They can contribute new menu entries, add side tool panel windows, define custom preference screens for the Project settings dialog, etc. Again, MPS will keep reloading the classes whenever they change, and the IDE functionality will be updated accordingly.


Language is a module that is more complex than a solution and represents a reusable language. It consists of several models, each defining a certain aspect of the language: structure, editor, actions, typesystem, etc.
Languages can extend other languages. An extending language can then use all concepts from the extended language - derive its own concepts, use inherited concepts as targets for references and also place inherited concepts directly as children inside its own concepts.

Languages frequently have runtime dependencies on third-party libraries or solutions. You may, for example, create a language wrapping any Java library, such as Hibernate or SWT. Your language will then give its users a better and smoother alternative to the standard Java API that these libraries come with.
Now, for your language to work, you need to include the wrapped library with your language. You do this either through a runtime classpath or through a runtime solution. A runtime classpath is suitable for typical scenarios, such as Java-written libraries, while runtime solutions should be preferred for more complex scenarios.

  • Runtime classpath - makes library classes available as stubs to the language's generators
  • Runtime solutions - the models of these solutions are visible to all models inside the generator

Language aspects

Language aspects describe different facets of a language:

  • structure - describes the nodes and structure of the language AST. This is the only mandatory aspect of any language.
  • editor - describes how a language will be presented and edited in the editor
  • actions - describes the completion menu customizations specific to a language, i.e. what happens when you type Control + Space
  • constraints - describes the constraints on the AST: where a node is applicable, which properties and references are allowed, etc.
  • behavior - describes the behavioral aspect of the AST, i.e. AST methods
  • typesystem - describes the rules for calculating types in a language
  • intentions - describes intentions (context-dependent actions available when the light bulb pops up or when the user presses Alt + Enter)
  • plugin - allows a language to integrate into the MPS IDE
  • data flow - describes the intended flow of data in code. It allows you to find unreachable statements, uninitialized reads, etc.

You can read more about each aspect in the corresponding section of this guide.


To learn all about setting dependencies between modules and models, check out the Getting the dependencies right page.


Generators define possible transformations of a language into something else, typically into another language. Generators may depend on other generators. Since the order in which generators are applied to code is important, ordering constraints can be set on generators. You can read more about generation in the corresponding section.


DevKits have been created to make your life easier. If you have a large group of interconnected languages, you will certainly appreciate a way to treat them as a single unit. For example, you may want to import them without listing all of the individual languages. DevKits make this possible. When building a DevKit, you simply list the languages to include.
As expected, DevKits can extend other DevKits. The extending DevKit will then carry along all the inherited languages as if they were its own ones.


This one is easy. A project simply wraps modules that you need to group together and work with as a unit. You can open the Properties of a project (Alt + Enter on the Project node in the Project View panel) and add or remove modules that should be included in the project. You can also create new modules from the project node's context pop-up menu.

Java compilation

MPS was born from Java and is frequently used in a Java environment. Since MPS models are often generated into Java files, a way to compile Java is needed before we can run our programs. There are generally two options:

  • Compiling in MPS (recommended)
  • Compiling in IntelliJ IDEA (requires IntelliJ IDEA)

When you compile your classes in MPS, you have to set the module's source path. The source files will be compiled each time the module gets generated, or whenever you invoke compilation manually by the make or rebuild actions.

MPS Java compatibility


The Java Compiler configuration tab in the preferences window only holds a single setting - “Project bytecode version”.

This setting defines the bytecode version of all Java classes compiled by MPS. These classes include classes generated from language’s aspects, classes of the runtime solutions, classes of the sandbox solutions, etc.

By default, the bytecode version is set to “JDK Default”. This means that the version of the compiled classes will be equal to the version of Java, which MPS is running under. E.g. if you run MPS under JDK 1.8 and “JDK Default” is selected, the bytecode version will be 1.8.

The other options for project bytecode version are 1.6, 1.7 and 1.8.


Note that MPS since version 3.4 can only run on JDK 1.8 and higher, so when compiling languages or MPS plugins you have to set the bytecode version to 1.8, otherwise your languages/plugins won’t be loaded. Setting the byte code version to earlier JDK versions is only useful for solution-only projects, which are generated into Java sources that you then compile and use outside of MPS.

Build scripts

Also, don’t forget to set java compliance level in the build scripts of your project. It should be the same as the project bytecode version.

Using java classes compiled with JDK 1.8

In the MPS modules pool you can find the JDK solution, which holds the classes of the running Java. So when you start MPS under JDK 1.8, the latest Java Platform classes will be available in the JDK solution.

You can also use any external Java classes, compiled under JDK 1.8 by adding them as Java stubs.

Since version 1.8, Java interfaces can contain default and static methods. At present, MPS does not support creating them in your BaseLanguage code, but you can call static and default methods defined in external Java classes, e.g. classes of the Java Platform.

Static interface method call

In the example, we sort a list with Comparator.reverseOrder(). Comparator is an interface from java.util, and reverseOrder() is its static method, which was introduced in Java 1.8.
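The original example is a screenshot and is not reproduced here; in plain Java, the equivalent call looks roughly like this (the class and method names are invented for illustration):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Calling a static interface method (Java 8+): Comparator.reverseOrder()
// is a static method declared directly on the java.util.Comparator interface.
public class ReverseSortDemo {
    static List<Integer> reverseSorted(List<Integer> input) {
        return input.stream()
                    .sorted(java.util.Comparator.reverseOrder())
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(reverseSorted(Arrays.asList(1, 3, 2)));  // [3, 2, 1]
    }
}
```

In MPS you would write the same Comparator.reverseOrder() call in BaseLanguage code.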

Default interface methods

Java 8 also introduced default methods. These are methods implemented directly in the interface.

These methods can be called just like usual instance methods. Sometimes, however, you need to call a default method directly on an interface that your class is implementing, e.g. in the case of multiple inheritance, when a class implements several interfaces, each containing a default method with the same signature.

In that case the conflicting method (say, foo()) can be called explicitly on one of the interfaces via the SuperInterfaceMethodCall construction, which is located in the jetbrains.mps.baseLanguage.jdk8 language.
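In plain Java, the construct that SuperInterfaceMethodCall is generated into is the Interface.super.method() syntax. A minimal sketch (the class and interface names are invented for illustration):

```java
// Two interfaces declare a default method with the same signature; the
// implementing class must resolve the conflict explicitly. In plain Java
// this is written as Interface.super.foo().
public class DiamondDemo {
    interface A { default String foo() { return "A.foo"; } }
    interface B { default String foo() { return "B.foo"; } }

    static class C implements A, B {
        // Without this override the class would not compile.
        @Override
        public String foo() { return A.super.foo(); }  // pick A's default
    }

    public static void main(String[] args) {
        System.out.println(new C().foo());  // A.foo
    }
}
```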

Using Java platform API

Java 8 introduced lambda expressions.

MPS doesn’t yet have a language that would be generated into lambda-expressions. Instead, it has its own closure language, which is compatible with the new Java API!

Here’s the example of an interaction with the new JDK 8 Collections API:

The forEach() method is a new default method of java.lang.Iterable. It takes a Consumer as a parameter. Consumer is a functional interface, as it has only one abstract method. In Java 8 it is possible to pass a lambda expression to forEach(); in MPS you pass an MPS closure instead. At generation time the closure knows the type of the parameter taken by forEach() and is generated into the correct instance of Consumer.
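As a rough plain-Java illustration of what such a closure is generated into (the class and method names are invented):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// forEach(Consumer) is a default method on java.lang.Iterable; since
// Consumer is a functional interface, a lambda (or an MPS closure,
// generated into a Consumer instance) can be passed directly.
public class ForEachDemo {
    static List<String> shout(List<String> words) {
        List<String> result = new ArrayList<>();
        words.forEach(w -> result.add(w.toUpperCase()));  // the lambda is the Consumer
        return result;
    }

    public static void main(String[] args) {
        System.out.println(shout(Arrays.asList("a", "b")));  // [A, B]
    }
}
```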

Commanding the editor

When coding in MPS you will notice some differences between how you normally type code in text editors and how code is edited in MPS. In MPS you manipulate the AST directly as you type your code through the projectional editor. The editor gives you an illusion of editing text, which, however, has its limits. You are slightly limited in where you can place your cursor and what you can type at that position. We believe the projectional editor brings huge benefits in many areas. It requires some getting used to, but once you learn a few tricks you'll leave your plain-text-editor colleagues far behind in productivity and code quality. In general, only the items suggested by the completion menu can be entered. MPS can always decide which elements are allowed and which are disallowed at a certain position. Once the code you type turns red, you know you're off track.

Code completion

Code completion (Control + Space) will be your good friend allowing you to quickly complete the statements you typed. Remember that CamelHumps are supported, so you only need to type the capital characters of long names and MPS will guess the rest for you.


Frequently you can enhance or alter your code by means of predefined semi-automated procedures called Intentions. When you press Alt + Enter, MPS shows a pop-up dialog with options applicable to the code at the current position. Some intentions are only applicable to a selected code region, e.g. to wrap code inside a try-catch block. These are called Surround With intentions; once you select the desired block of code, press Control + Alt + T to show the list of applicable intentions.


Whenever you need to see the definition of an element you are looking at, press Control/Cmd + B or Control/Cmd + mouse click to open the element definition in the editor. To quickly navigate around editable positions on the screen, use the Tab/Shift + Tab keys. Enter will typically insert a new element right after your current position and let you immediately edit it. The Insert key will do the same for the position right before your current position.
When a piece of code is underlined in either red or yellow, indicating an error or a warning respectively, you can display a pop-up with the error message by pressing Control + F1.


The Control/Cmd + Up/Down key combination allows you to increase/decrease block selection. It ensures you always select valid subtrees of the AST. The usual Shift + Arrow keys way of text-like selection is also possible.


To quickly find out the type of an element, press Control/Cmd + Shift + T. Alt + F12 will open the selected element in the Node Explorer allowing you to investigate the appropriate part of the AST. Alt + F7 will enable you to search for usages of a selected element. To quickly visualize the inheritance hierarchy of an element, use Control + H.

Inspector window

The Inspector window opens after you press Alt + 2. Some code and properties (e.g. editor styles, macros etc.) are shown and edited inside the Inspector window so it is advisable to keep the window ready.


We've prepared an introductory screen-cast showing you the basics of using the MPS editor.

Most useful key shortcuts

Windows / Linux shortcut / macOS shortcut - Action

  • Control + Space / Cmd + Space - Code completion
  • Control + B / Cmd + B - Go To Definition
  • Alt + Enter / Alt + Enter - Show intentions
  • Tab / Tab - Move to the next cell
  • Shift + Tab / Shift + Tab - Move to the previous cell
  • Control + Up/Down / Cmd + Up/Down - Expand/Shrink the code selection
  • Shift + Arrow keys / Shift + Arrow keys - Select regions
  • Control + F9 / Cmd + F9 - Compile project
  • Shift + F10 / Shift + F10 - Run the current configuration
  • Control + Shift + T / Cmd + Shift + T - Show the type of the expression under the caret
  • Alt + X / Control + X - Open the expression under the caret in the Node Explorer to inspect the appropriate node and its AST surroundings
  • Control + H / Ctrl + H - Show the structure (inheritance hierarchy)
  • Alt + Insert / Ctrl + N - A generic contextual New command - typically pops up a menu with elements that can be created at the given location
  • Ctrl + Alt + T / Cmd + Alt + T - Surround with...
  • Ctrl + O / Cmd + O - Override methods
  • Ctrl + I / Cmd + I - Implement methods
  • Ctrl + / / Cmd + / - Comment/uncomment the current node
  • Ctrl + Shift + / / Cmd + Shift + / - Comment/uncomment with a block comment (available in BaseLanguage only)
  • Ctrl + X (or Shift + Delete) / Cmd + X - Cut current line or selected block to buffer
  • Ctrl + C (or Ctrl + Insert) / Cmd + C - Copy current line or selected block to buffer
  • Ctrl + V (or Shift + Insert) / Cmd + V - Paste from buffer
  • Ctrl + Shift + V / Cmd + Shift + V - Paste from history (displays a pop-up dialog that lists all previously copied code blocks)
  • Ctrl + Z / Cmd + Z - Undo
  • Ctrl + Shift + Z / Cmd + Shift + Z - Redo
  • Ctrl + D / Cmd + D - Duplicate current line or selected block

A complete listing

Please refer to the Default Keymap Reference page for a complete listing of MPS keyboard shortcuts (Also available from the MPS Help menu).

IDE configuration

Many aspects of MPS can be configured through the Settings dialog (Control + Alt + S / Cmd + ,)

To quickly navigate to a particular configuration item, you may use the convenient text search box in the upper left corner. Since the focus is set to the text field by default, you can just start typing. Notice that the search dives deep into the individual screens:


MPS is modular and contains several plugins. If you open the MPS Plugin Manager you’ll see a list of plugins available in your installation.

Additionally installed languages are also listed here.

If some plugins are not necessary for your current work, they can simply be switched off, which may improve the overall performance of the platform.

Getting dependencies right


Modules and models are typically interconnected by a network of dependencies of various types. Assuming you have understood the basic principles and categorisations of modules and models, as described on the MPS project structure page, we can now dive deeper and learn all the details.

Getting dependencies right in MPS is a frequent cause of frustration among inexperienced users as well as seasoned veterans. This page aims to solve the problem once and for all. You should be able to find all the relevant information categorised into sections by the various module and dependency types.

All programming languages have a notion of "imports". In Java you get packages and the "import" statement. In Ruby or Python you have "modules" and "require" or "import" statements. In MPS we provide a similar mechanism for packaging code and expressing dependencies in a way that works universally across languages - code is packaged into models, and these models can express mutual dependencies.

In addition, since MPS is a multi-language development environment, models can specify the languages (aka syntaxes) enabled in them. This is different from writing code in Java, Ruby, or other languages, where the language to be used is given and fixed.

Useful keyboard shortcuts

Whenever positioned on a model or a node in the left-hand-side Project Tool Window, or when editing in the editor, you can invoke quick actions from the keyboard that add dependencies or used languages to the current model as well as its containing solution.

  • Control + L - Add a used language
  • Control + M - Add a dependency
  • Control/Cmd + R - Add a dependency that contains a root concept of a given name
  • Control/Cmd + Shift + A - brings up a generic action-selection dialog, in which you can select the desired action applicable in the current context


Solutions represent programs written in one or more languages. They typically come in one of three flavors:

  1. Sandbox solutions - these solutions hold end-user code. The IDE does not treat the code in any special way.
  2. Runtime solutions - these solutions contain code that other modules (solutions, languages, or generators) depend on. The code can consist of MPS models as well as Java classes, sources, or jar files. The IDE will reload the classes whenever they get compiled or changed externally.
  3. Plugin solutions - these solutions extend the IDE functionality in some way. They can contribute new menu entries, add side tool panel windows, define custom preference screens for the Project settings dialog, etc. Again, MPS will keep reloading the classes whenever they change, and the IDE functionality will be updated accordingly.

We'll start with the properties valid for all solutions and then cover the specifics of runtime and plugin solutions.



  • Name - name of the solution
  • File path - path to the module file
  • Generator output path - points to the folder, where generated sources should be placed
  • Left-side panel - contains model roots, each of which may hold one or more models.
  • Right-side panel - displays the directory structure under the model root currently selected in the left-side panel. Folders and jar files can be selected and marked/unmarked as being models of the current model root.

Model root types

Solutions contain model roots, which in turn contain models. Each model root typically points to a folder and the contained models lie in one or more sub-folders of that folder. Depending on the type of contained models, the model roots are of different kinds:

  • default - the standard MPS model root type holding MPS models
  • java_classes - a set of directories or jar files containing Java class files
  • javasource_stubs - a set of directories or jar files containing Java sources


    When included in the project as models, Java classes in directories or jar files will become first-class citizens of the MPS model pool and will become available for direct references from other models which import these stub models. A second option to include classes and jars in MPS is to use the Java tab and define them as libraries. In that case the classes will be loaded, but not directly referenceable from MPS code. This is useful for libraries that are needed by the stub models.


The dependencies of a solution are other solutions and languages, the models of which will be visible from within this solution.

The Export flag then specifies whether the dependency should be transitively added as a dependency to all modules that depend on the current solution. For example, if module A depends on B with export on and C depends on A, then C depends on B.

Used Languages

The languages as well as devkits that the solution's models may use are listed among used languages. Used languages are specified at the model level; the Used Languages tab on a module only shows the collection of used languages of all its models.


This is where the different kinds of Solutions differ the most.

The Java tab contains several options:

  • Solution kind - different kinds of solutions are treated slightly differently by MPS and have access to different MPS internals
    • None - default, used for user code, which does not need any special class-loading strategy - use for Sandbox solutions
    • Other - used by typical libraries of reusable code that are being leveraged by other languages and solutions - use for Runtime solutions
    • Core plugin - used by code that ties into the MPS IDE core and needs to have its class-loading managed accordingly - use for Plugin solutions
    • Editor plugin - used by code that ties into the MPS editor and needs to have its class-loading managed in sync with the rest of the editor - use for Plugin solutions that only enhance the editor
  • Compile in MPS - indicates whether the generated artifacts should be compiled with the Java compiler directly in MPS as part of the generation process
  • Source Paths - Java sources that should be made available to other Java code in the project
  • Libraries - Java classes and jars that are required at run-time by the Java code in one or more models of the solution


  • Idea Plugin - checked, if the solution hooks into the IDE functionality
  • Java - checked, if the solution relies on Java in some way. Keep this checked in most cases.
  • tests - checked, if the solution contains test models

Solution models

Solutions contain one or more models. Models can be mutually nested and form hierarchies, just like, for example, Java packages can. The properties dialog offers a few configuration options that can be tweaked:


Models from the current or imported modules can be listed here, so that their elements become accessible in code of this model.

Used languages

The languages used by this model must be listed here.


A few extra options are listed on the Advanced tab:

  • Do not generate - exclude this model from code generation, perhaps because it cannot be meaningfully generated
  • File path - location of the model file
  • Languages engaged on generation - lists languages needed for proper generation of the model, in case these languages are not directly or indirectly associated with any of the used languages and the generator would thus fail to find them automatically

Virtual packages

Nodes in models can be logically organised into hierarchies of virtual packages. Use the Set Virtual Package option from the node's context pop-up menu and specify a name, possibly separating nested virtual folder names with the dot symbol.

Adding external Java classes and jars to a project - runtime solutions

Runtime solutions represent libraries of reusable code in MPS. They may contain models holding MPS code as well as models referring to external Java sources, classes or jar files. To properly include external Java code in a project, you need to follow a few steps:

  1. Create a new Solution
  2. In the Solution properties dialog (Alt + Enter) specify the Java code, such that:
    1. Common tab - click on Add Model Root; select java_classes for classes or jars, or javasource_stubs for Java sources, and navigate to your lib folder.
    2. Select the folder(s) or jar(s) listed in the right-side panel of the properties dialog and click on the blue "Models" button.
    3. Also, on the Java tab, add all the jars or class root folders to the Libraries part of the window; otherwise the solution using the library classes would not be able to compile. When using javasource_stubs, add the sources into the Source paths part of the Java tab instead.
  3. A new folder named stubs should appear in your solution
  4. Now, after you import the solution into another module (solution, language, generator), the classes become available in that module's models
  5. The languages that want to use the runtime solution will need to refer to it in the Runtime Solutions section of the Runtime tab of their module properties


A Language module represents a language definition and consists of several models, each of which represents a distinct aspect of the language. Languages also contain a single Generator module. The properties dialog for languages is in many ways similar to the one for Solutions. Below we only mention the differences:


A language typically has a single model root that points to a directory, in which all the models for the distinct aspects are located.


The dependencies of a language are other solutions and languages, whose models will be visible from within this language. The Export flag then specifies whether the dependency should be transitively added as a dependency to all modules that depend on the current language.

A dependency on a language offers three Scope options:

  • Default - only makes the models of the other language/solution available for references
  • Extends - allows the language to define concepts extending concepts from the other language
  • Generation Target - specifies that the current language is generated into the other language, thus imposing a generator-ordering constraint: the other language may only be generated after the current one has finished generating

Used Languages

This is the same as for solutions.


  • Runtime Solutions - lists solutions of reusable code that the language requires. See the "Adding external Java classes and jars to a project - runtime solutions" section above for details on how to create such a solution.
  • Accessory models - lists accessory models that the language needs. Nodes contained in these accessory models are implicitly available on the Java classpath and the Dependencies of any model using this language.


This is the same as for solutions, except for the two missing options that are not applicable to languages.


This is the same as for solutions.


When using a runtime solution in a language, you need to set both the dependency in the Dependencies tab and the Runtime Solutions on the Runtime tab.

Language models/aspects

Dependencies / Used Languages / Advanced

These settings are the same and have the same meaning as the settings on any other models, as described in the Solution section.


The generator module settings are very similar to those of other module types:


This is the same as for languages.


This is the same as for solutions. Additionally, generator modules may depend on other generator modules and specify a Scope:

  • Default - only makes the models of the other language/solution available for references
  • Extends - the current generator will be able to extend the generator elements of the extended generator
  • Design - the target generator only needs to be referenced from a priority rule of this generator

Used Languages

This is the same as for languages.

Generators priorities

This tab allows you to define priority rules for generators, in order to properly order the generators in the generation process. Additionally, three options are configurable through the check-boxes at the bottom of the dialog:

  • Generate Templates - indicates whether the generator templates should be generated and compiled into Java, or instead interpreted by the generator during generation
  • Reflective queries - indicates whether the generated queries will be invoked through Java reflection or not (check out the Generator documentation for details)
  • IOperationContext parameter - indicates whether the generator makes use of the operationContext parameter passed into the queries. The parameter will be removed in the future and generators should gradually stop using it.


This is the same as for languages.


This is the same as for languages.

Generator models

This is the same as for solutions.

Resolving difficulties, understanding reported errors

This document should give you instant step-by-step advice on what to do and where to look to get over a problem with MPS. It is an organized collection of patterns and how-tos fed with our own experience.

Reflective editor

A projectional editor, by its nature, presents the model to the user in a controlled way. Depending on the intent of the language designer, the language may hide some information or some nodes from the user and prohibit some ways of manipulating the code. Also, if the editor definition is broken or incomplete in some way, the editor may not allow the user to modify the code as desired. The reflective editor provides a means to suppress the language's editor and instead show the model in a default tree-like form. This way the developer has full and direct access to the model.

Pressing F5 returns the editor to its normal mode.

Node Explorer

The Control + X keyboard shortcut gives the user a way to visualise the AST that represents the piece of code that has been selected in the editor.

Check out the type of the node

Knowing the type of the element you are looking at may give you very useful insight. All you need to do is press Control + Shift + T, and MPS will pop up a dialog window with the type of the element under the caret.

Check the concept of the node under the caret

The Control + Shift + S/Cmd + Shift + S keyboard shortcut will get you to the definition of the concept of the node you are currently looking at or that you have selected.

Check the editor of the node under the caret

The Control + Shift + E/Cmd + Shift + E keyboard shortcut will get you to the definition of the editor for the concept you are currently looking at or have selected. This may be particularly useful if you want to familiarize yourself with the concrete syntax of a concept and all the options it gives you.

Type-system Trace

When you run into problems with types, the Type-system Trace tool will give you an insight into how the types are being calculated and so could help you discover the root of the issues. Check out the details in Type-system Trace documentation page and in Type-system Debugging.

Investigate the structure

When you are learning a new language, the structure aspect of the language is most often the best place to start investigating. The shortcuts for easy navigation around concepts and searching for usages will certainly come in handy.

You should definitely familiarize yourself with Control + B / Cmd + B (Go To Definition), Control + N / Cmd + N (Go To Concept), Control + Shift + S / Cmd + Shift + S (Go To Concept Declaration) and Alt + F7 (Find Usages) to make your investigation smooth and efficient.

Before you learn the shortcuts by heart, you can find most of them in the Navigate menu:

Importing elements

You are trying to use an element or a language feature, but MPS doesn't recognize the language construct or doesn't offer the element in the code-completion dialog, so you cannot update your code the way you want. This is a symptom of a typical beginner's problem - missing imports and used languages.

  • In order to use language constructs from a language, the language has to be listed among the Used Languages.
  • To be able to enter elements from a model, the model must be imported first.
  • Also, for your languages to enhance capabilities of another language, the language must be listed among the Extended Languages.

To quickly and conveniently add models or languages to these lists, you may use a couple of handy keyboard shortcuts in addition to the Properties dialog:

Save transient models

If you are getting errors from the generator, you may consider turning the Save Transient Models functionality on. This will preserve all intermediate stages of code generation for your inspection.

Why the heck do I get this error/warning?

You see that MPS is unhappy about some piece of code and you want to find out why. Use Control + Alt + Click / Cmd + Alt + Click to open up a dialog with the details.

The Go To Rule button will get you to the rule that triggers the error/warning.

Where to find language plugins

MPS can be easily extended with additional languages. Languages come packaged as ordinary zip files, which you unzip into the MPS plugin directory and which MPS will load upon restart.

The most convenient way to install language plugins is through the Plugin Manager, which is available in the Settings dialog (Control + Alt + S / Cmd + ,).


You can either install a zip file you've received previously (the Install plugin from disk... option) or you may click the Browse repositories button and pick the desired plugin from the list of plugins that have been uploaded to the MPS plugin repository.

Version Control

Compare two nodes

Any two arbitrary nodes in the Project View tool window can be visually compared:

The standard VCS comparison dialog shows up that visualizes the mutual differences and allows easy modifications.

VCS menu

The VCS menu contains commands and configurations related to version control:

VCS configuration

In order to configure VCS for your project, open the Preferences (Control + Alt + S or Cmd + ,) and choose the Version Control item:

Among other things you must configure the project roots for the individual version control systems used.

Changes View

The Changes View tool window at the bottom (Alt + 9 or Cmd + 9) lists all files/models that have been modified. The view can be configured using the buttons on the side of the window. A context pop-up menu provides quick access to frequently used VCS-related actions applicable to the selected items:

The Log tab of the Changes View visualizes the commit history:

VCS Add-ons

When you first open MPS with version control, or add a VCS mapping to an existing project, it offers to install some global settings, the so-called VCS Add-ons (they can also be installed from the main menu: Version Control → Install MPS VCS Add-ons).

What are VCS Add-ons

VCS Add-ons are special hooks, or merge drivers, for Subversion and Git that override the merging mechanism for special types of files. In the case of MPS, these add-ons handle merging of model files (*.mps) and generated model caches (such as generated dependencies and trace caches, if you store them under version control). Every time you invoke a version control procedure that involves merging file modifications (such as merging branches or Git rebasing), these hooks are invoked. For models, the merge driver reads their XML content and tries to merge changes at a high level, in "model" terms, instead of merging lines of the XML file, which could lead to invalid XML markup. Sometimes models cannot be merged automatically. In that case, the model stays in the "conflicting" state and can be merged in the MPS UI.

In some cases, the merge driver may run into id conflicts - situations when a model ends up with more than one node with the same id after applying all non-conflicting changes. In this situation, no automatic merging is performed, because it might lead to problems with references to nodes that are hard to find. In this case you should look through the merge result yourself and decide whether it is okay.

For model caches, the merge driver works in a different way (if you store them under version control, of course). Generator dependency caches and debugger trace caches are simply cleared after merging, so you will need to regenerate the corresponding models. Java dependencies files, which are used during compilation, are merged using a simple union algorithm, which keeps compilation possible after merging.

Different VCS Add-ons

Look at the dialog:

There are several types of VCS Add-ons which can be installed. It is recommended to install them all.

  • Git global autocrlf setting. Forces Git to store text files in the repository with standard Unix line endings (LF), while text files in the working copy use local, system-dependent line endings. Necessary when the developers of your project use different operating systems with different line endings (Windows and Unix).
  • Git global merge driver setting. Registers the merge driver for MPS models in the global Git settings so that it can be referenced from the .gitattributes files of Git repositories (see below). It only maps the merge driver name (in this case, "mps") to the path of the actual merge driver command.
  • Git file attributes for repositories. Enables the MPS merge driver for concrete file types (*.mps, etc.) in the Git repositories used in the opened MPS project. This creates or modifies the .gitattributes file in the root of the Git repository. This file should usually be stored under version control so that these settings are shared among the developers of the project.
  • Subversion custom diff3 cmd. Registers the MPS merger in the Subversion config file. MPS may use its own config folder for Subversion, so there are two different checkboxes. One updates the global config used when you invoke Subversion procedures from the command line or from tools like TortoiseSVN. The other modifies the config only for the MPS Subversion plugin. By the way, the directory for the Subversion config used in MPS can be defined in the Subversion settings.
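To illustrate what the Git-related add-ons above amount to in plain Git terms, here is a rough sketch. The merge driver name "mps" comes from the text above; the driver command is a placeholder, and the exact file patterns MPS writes may differ:

```
# Global Git settings installed by the add-ons (sketch;
# <merge-driver-command> is a placeholder for the actual MPS merger):
#   git config --global core.autocrlf true        # on Windows
#   git config --global merge.mps.driver "<merge-driver-command> %O %A %B"

# .gitattributes in the repository root, mapping model files to the driver:
*.mps merge=mps
```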

Using MPS Debugger

Using MPS Debugger

The MPS Debugger provides an API for creating debuggers for custom languages. The Java Debugger plugin, included in the MPS distribution, allows users to debug programs written in languages that are ultimately generated into BaseLanguage/Java. We use this plugin below to illustrate the MPS Debugger features, all of which are available to other languages via the API.


We start with a description of how to debug a Java application. If a user has a class with a main method, a Java Application run configuration should be used to run/debug such a program.

Creating an instance of run configuration

A Java Application or an MPS instance run configuration can be created for a class with a main method or for an MPS project, respectively. Go to the Run -> Edit Configurations menu and press the "+" button, as shown in the picture below:

A menu appears, choose Java Application from it and a new Java Application configuration will be created:

If you select Java Application, you will be able to specify the Java class to run, plus a few optional configuration parameters:

A name should be given to each run configuration, and a main node, i.e. a class with a main method, should be specified. VM and program parameters may also be specified in a configuration. See the Run Configuration chapter to learn more about run configurations.

Debugging language definitions

Select MPS instance if you want to debug MPS language definition code. MPS will start a new instance of MPS with a project that uses your language (it could also be the current project), while you set breakpoints and debug in your original MPS instance.

In the Debug configuration dialog you need to indicate which MPS project to open in the new MPS instance - either the current one, by checking the Open current project check-box, or any project you specify in the field below. You could also leave both empty and create/open a project from the menu once the new MPS instance starts.

Debugging a configuration

To debug a run configuration, select it from the configurations menu and press the Debug button. The debugger starts, and the Debugger tool window appears below.

There are two tabs in the tool window: one for the console view and the other for the debugger view. The console shows the application's output.


The next section describes breakpoint usage.

Setting a breakpoint

A breakpoint can be set on a statement, a field or an exception. To set or remove a breakpoint, press Ctrl-F8 on a node in the editor or click the left margin near a node. A breakpoint is marked with a red bubble on the left margin, a pink line inside the editor and a red frame around the breakpoint's node. Exception breakpoints are created from the Breakpoints dialog.

When the program is started, breakpoints on which the debugger cannot stop are specially highlighted.

When the debugger stops at a breakpoint, the current breakpoint line is marked blue, and the actual node for the breakpoint is decorated with a black frame.

If the cell for the node on which the program has stopped is inside a table, the table cell is highlighted instead of a line.

Viewing breakpoints via the Breakpoints dialog

All breakpoints set in the project can be viewed via the Breakpoints dialog.

Java breakpoints features include:

  • field watchpoints;
  • exception breakpoints;
  • suspend policy for java breakpoints;
  • relevant breakpoint data (like thrown exception or changed field value) is displayed in variables tree.

Examining a state of a program at a breakpoint

When stopped at a breakpoint, the Debugger tab can be used to examine the state of the program. There are three panels available:

  • a "Frames" panel with the list of stack frames of a thread, selected via a combo box;
  • a "Variables" tree, which shows watchables (variables, parameters, fields and static fields) visible in the selected stack frame;
  • a "Watches" panel with the list of watches and their values.

In the Java debugger, the "Copy Value" action is available from the context menu of the variables tree.


Controlling execution of a program

  • To step over, use Run -> Step Over or F8.
  • To step out of a method, use Run -> Step Out or Shift-F8.
  • To step into a method call, use Run -> Step Into or F7.
  • To resume program execution, use the Resume button or Run -> Resume or F9.
  • To pause a program manually, use the Pause button or Run -> Pause. When paused manually, i.e. not at a breakpoint, information about variables is unavailable.

There is a toolbar in the Debugger window from which the stepping actions are available.


Expression evaluation

The MPS Java debugger allows the user to evaluate expressions during a debugging session, using information from the program stack. This is called low-level evaluation, because the user is only allowed to use plain Java variables/fields/etc. from the generated code, not entities from the high-level source code.

To activate the evaluation mode, the program should be stopped at a breakpoint. Press Alt-F8 and a dialog appears.
The dialog contains an MPS editor with a statement list inside it. You can write code there that uses variables and fields from the stack frame. To evaluate this code, press the Evaluate button. The evaluated value will appear in a tree view below.

To evaluate a piece of code from the editor, select it and press Alt+F8, and the code will be copied to the evaluation window.


A Watches API and low-level watches are implemented for the Java debugger. "Low-level" means that the user can write expressions using the variables available on the stack. To edit a watch, a so-called "context" (used variables, static context type and this type) must be specified. If a stack frame is available at the moment, the context is filled in automatically.

Watches can be viewed in the "Watches" tree in the "Debug" tool window. Watches can be created, edited and removed via the context menu or the toolbar buttons.


The Console is a tool that allows developers to conveniently run DSL code directly in the MPS environment.

The Console tool window allows line-by-line execution of any DSL construction in real time. After a command is written in the console, it is generated by the MPS generator and executed in the IDE's context. This way, code in the console can access and modify the program's AST, display project statistics, execute IDE actions, launch code generation or initiate class reloading.

For discoverability reasons, most of the console-specific DSL constructs start with the '#' symbol.


We shot a short informative screen-cast about using the MPS Console to investigate and update the user models. Check it out!


In general, there are 3 kinds of commands:

  1. BaseLanguage statement lists. These commands can contain any BaseLanguage constructions. If some construction or class is not available in completion, it may not have been imported. Missing imports can easily be added just as in the normal editor, using the actions 'Add model import', 'Add model import by root' and 'Add language import', or the corresponding keyboard shortcuts.

  2. BaseLanguage expressions. The expression is evaluated and, if its type is not void, printed in the console as text, an AST, or an interactive response.
  3. Non-BaseLanguage commands. These are simple non-customizable commands, such as #reloadClasses.

There is also a set of languages containing the console commands and BaseLanguage constructions, which allow developers to easily implement custom refactorings, complex usage searches, etc.

  1. BaseLanguage constructions for iterating over IDE objects (#nodes, #references, #models, #modules). These expressions are lazy sequences, including all nodes/references/models/modules in the project or in a custom scope.


    To inspect read-only modules and models, such as imported libraries and used languages, you need to include the r/o+ parameter to the desired search scope.

  2. BaseLanguage constructions for searching usages (#usages, #instances). These expressions are also sequences that can be iterated over, but they are not lazy. When these expressions are evaluated, the find-usages mechanism is invoked, so this runs faster than iterating over all nodes or references and then filtering by concept/target.
  3. Commands for querying data from the IDE (#stat, #showBrokenRefs, #showGenPlan)
  4. Commands for interacting with the IDE (#reloadClasses, #make, #clean, #removeGenSources)


    To initiate a rebuild of a model, first invoke #clean, followed by #make.

  5. BaseLanguage constructions for showing results to the user
    • The #show expression opens the usages view and shows the nodes, models or modules from the sequence passed to the expression as a parameter.
    • The #print expression writes its result to the console. There are also specialized versions of this construction:
      • #printText converts the result to a string and adds it to the response.
      • #printNode is applicable only to nodes. This construction adds the whole node and its subnodes to the response. Since the response is also part of the AST, the node is displayed with its normal editor.
      • #printNodeRef makes sense only for nodes located in the project's models. This construction prints to the console an interactive response, which can be clicked in order to open the node in the editor.
      • #printSeq is applicable to collections of nodes, models or modules. This command prints to the console an interactive response describing the size of the collection. When the response is clicked, the usages view opens to show the nodes or models.
      • #print is a universal construction that tries to choose the most appropriate way of displaying its argument, according to its type and value
  6. The refactor operation. This operation applies a function to a sequence of nodes (like the forEach operation), but before that it opens the found nodes in the usages view, where the user can review the nodes before the refactoring starts, manually select the nodes to include in or exclude from the refactoring, and then apply or cancel the refactoring.

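To make the constructs above concrete, here is a hypothetical console session in textual form. MPS edits commands projectionally, so this is only an approximation of what you would see on screen, and MyConcept is a made-up concept name:

```
// count every node in the project (a lazy sequence, evaluated on demand)
#print #nodes.size;

// find all instances of a concept via the find-usages mechanism
// and open them in the usages view
#show #instances<MyConcept>;

// review matching nodes in the usages view, then apply a rename to
// the ones the user selected
#instances<MyConcept>.refactor({ ~it => it.name = "renamed" + it.name; });
```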
Additionally, the console languages can be extended by the user, if needed.


The model-querying commands used in the Console are defined in the jetbrains.mps.lang.smodel.query language. After importing the language you can use the commands in code to programmatically access and query the models, as well. Details can be found in the smodel queries documentation.


In order to point to a concrete node of the project from the console, the node can be copied from the editor and then pasted into the console. The node will be pasted as a special construction, called nodeRef, which is a BaseLanguage expression of type node<> whose value is the pasted node. If there is a need to paste the piece of code as-is, the 'Paste Original Node' action is available from the context menu.


Since MPS frees you from defining a grammar for your intended languages, you obviously need a different way to specify the structure of your languages. This is where the Structure Language comes in handy. It gives you all the means to define the language structure. As we discussed earlier, when coding in MPS you're effectively building the AST directly, so the structure of your language needs to specify the elements, the bricks, you use to build the AST.

The bricks are called Concepts and the Structure Language exposes concepts and concept interfaces as well as their members: properties, references, children, concept(-wide) properties, and concept(-wide) links.

Concepts and Concept Interfaces

Now let's look at those in more detail. A Concept defines the structure of a concept instance, a node of the future AST representing code written using your language. The Concept says which properties the nodes might contain, which nodes may be referred to, and what children nodes are allowed (for more information about nodes see the Basic notions section). Concepts also define concept-wide members - concept properties and concept links, which are shared among all nodes of the particular Concept. You may think of them as "static" members.

Apart from Concepts, there are also Concept Interfaces. Concept interfaces represent independent traits, which can be inherited and implemented by many different concepts. You typically use them to bring orthogonal concepts together in a single concept. For example, if your Concept instance has a name by which it can be identified, you can implement the INamedConcept interface in your Concept and you get the name property plus associated behavior and constraints added to your Concept.

Concepts inheritance

Just like in OO programming, a Concept can extend another Concept, and implement many Concept Interfaces. A Concept Interface can extend multiple other Concept Interfaces. This system is similar to Java classes, where a class can have only one super-class but many implemented interfaces, and where interfaces may extend many other interfaces.

If a concept extends another concept or implements a concept interface, it transitively inherits all members (i.e. if A has member m, A is extended by B and B is extended by C, then C also has the member m).
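As a textual sketch of these inheritance rules (concept definitions are edited projectionally in MPS, so the syntax below is only an approximation, and the concept names are made up):

```
concept Shape                               // a plain concept
concept Circle extends Shape                // single extended concept...
               implements INamedConcept     // ...plus any number of interfaces
concept Dot extends Circle                  // transitively inherits Shape's members
                                            // and INamedConcept's 'name' property
```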

Concept interfaces with special meaning

There are several concept interfaces in MPS that have a special meaning or behavior when implemented by your concepts. Here's a list of the most useful ones:

Concept Interface - Meaning

  • IDeprecatable - Used if instances of your concept can be deprecated. Its isDeprecated behavior method indicates whether or not the node is deprecated. The editor sets a strike-out style for reference cells if isDeprecated of the target returns true.
  • INamedConcept - Used if instances of your concept have an identifying name. This name appears in the code-completion list.
  • IType - Used to mark all concepts representing types.
  • IWrapper - Deleting a node whose immediate parent is an instance of IWrapper deletes the parent node as well.

Concept members


Properties

A property is a value stored inside a concept instance. Each property must have a type, which for properties is limited to: primitives, such as boolean, string and integer; enumerations, which hold a value from a predefined set; and constrained data types (strings constrained by a regular expression).


References

Holding scalar values alone would not get us far. To increase the expressiveness of our languages, nodes are allowed to store references to other nodes. Each reference has a name, a type, and a cardinality. The type restricts the allowed type of the reference target. The cardinality defines how many references of this kind a node can have. References can only have two types of cardinality: 1:0..1 and 1:1.

Smart references

A node containing a single reference of 1:1 cardinality and with no alias defined is called a smart reference. These are somewhat special references. Provided the language author has not specified an alias for them, they do their best to hide from the language user and be as transparent as possible. MPS treats the node as if it were the actual reference itself, which simplifies code editing and code completion. For example, default completion items are created whenever the completion menu is invoked: for each possible reference target, a menu item is created whose matching text equals the presentation of the target node.

In order to make a reference smart when it does not meet the above-mentioned criteria for being treated as smart automatically, the concept declaration has to be annotated with the @smart reference attribute. A typical use case would be a concept that customizes the presentation of the reference or holds additional references.
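A textual sketch of a concept that qualifies as a smart reference under the rules above (approximate notation; the concept and role names are made up):

```
concept VariableReference extends Expression
  alias: <none>                       // no alias, so the smart-reference rule applies
  references:
    target : VariableDeclaration[1]   // exactly one reference of 1:1 cardinality
```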


Children

To compose nodes into trees, we need to allow children to be hooked up to them. Each child declaration holds a target concept, a role and a cardinality. The target concept specifies the type of the children. The role specifies the name of this group of children. Finally, the cardinality specifies how many children from this group can be contained in a single node. There are four allowed types of cardinality: 1:1, 1:0..1, 1:0..n, and 1:1..n.

Specialized references and children

Sometimes, when one concept extends another, we not only want to inherit all of its members, but also want to override some of its traits. This is possible with children and references specialization. When you specialize a child or reference, you narrow its target type. For example, if you have concept A which extends B, and have a reference r in concept C with target type B, you might narrow the type of reference r in C's subconcepts. It works the same way for concept's children.


Alias

The alias, referred to from code as conceptAlias, optionally specifies a string that will be recognized by MPS as a representation of the Concept. The alias will appear in completion boxes, and MPS will instantiate the Concept whenever the alias, or a part of it, is typed by the user.

Constrained Data Types

Constrained Data Type allows you to define string-based types constrained with a regular expression. MPS will then make sure all property values with this constrained data type hold values that match the constraint.
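For example, a constrained data type for simple identifiers might be declared roughly like this (an approximate textual form of the projectional definition; the type name is made up):

```
constrained string datatype: IdentifierName
  matching regexp: [a-zA-Z_][a-zA-Z0-9_]*
```

A property declared with this type would then reject values such as "2nd-item" that do not match the regular expression.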

Enumeration Data Types

Enumeration Data Types allow you to use properties that hold values from pre-defined sets.

Each enumeration data type member has a value and a presentation. Optionally an identifier can be specified explicitly.

Presentation vs. Value vs. Identifier

  • Presentation - this string value is used to represent the enum member in the UI (completion menu, editor)
  • Value - this value, whose type is set by the member type property, represents the enum member in code
  • Identifier - this optional value is used as the name of the generated Java enum constant. This value is typically derived from either the presentation or the value, since it is meant to be transparent to language users and has no meaning in the language. It only needs to be specified when the id-deriving process fails to generate unique valid identifiers.
  • Name - when accessing an enum data type's members from code, name refers to either the presentation, the value or the identifier, depending on which member identifier option is active

Deriving identifiers automatically

When deriving identifiers from either presentations or values, MPS makes its best effort to eliminate characters that are not allowed in Java identifiers. If the derived identifiers of multiple enum data type members end up identical, an error is reported. Explicit identifiers should be specified in such cases.

Programmatic access

To access enumeration data types and their members programmatically, use the enum operations defined in the jetbrains.mps.lang.smodel language.


Note that name in memberForName above means the actual member identifier, whether it is set to be custom, derived from the presentation or derived from the internal value.

Checking the value of a property against an enum data type value can be done with the is operation. To print out the presentation of a property value, you need to obtain the corresponding enum member first:
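As an illustration, a sketch of how these enum operations might look in code. This is only an approximation: Color, its red member and the color property are made up, and the exact operation syntax is defined by the jetbrains.mps.lang.smodel language:

```
// check a property value against an enum member with the 'is' operation
if (node.color is Color.red) { ... }

// look up a member by its name (the member identifier) and
// display its presentation rather than its internal value
#print Color.memberForName("red").presentation;
```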



Attributes, sometimes called Annotations, allow language designers to express orthogonal language constructs and apply them to existing languages without the need to modify them. For example, the generator templates allow for special generator marks, such as LOOP, ->$ and $[], to be embedded within the target language:

The target language (BaseLanguage in our example here) does not need to know anything about the MPS generator, yet the generator macros can be added to the abstract model (AST) and edited in the editor. Similarly, anti-quotations and Patterns may get attributed to BaseLanguage concepts.

MPS provides three types of attributes:

  • LinkAttribute - to annotate references
  • NodeAttribute - to annotate individual nodes
  • PropertyAttribute - to annotate properties

By extending these you can introduce your own additions to existing languages. For a good example of attributes in use, check out the Description comments cookbook.



The Structure Language may sometimes be insufficient to express advanced constraints on the language structure. The Constraints aspect gives you a way to define such additional constraints.

Can be child/parent/ancestor/root

These are the first knobs to turn when defining constraints for a concept. They determine whether instances of the concept can serve as children (or parents, ancestors) of other nodes, or as root nodes in models. You specify them as boolean-returning closures, which MPS invokes each time it evaluates the allowed position for a node in the AST.

Languages to import

You will most likely need at least two languages imported in the constraints aspect in order to be able to define constraints - the j.m.baselanguage and j.m.lang.smodel languages. 

can be child

Return false if an instance of the concept is not allowed to be a child of specific nodes.

Parameters:

  • node - the child node we are checking (an instance of this concept)
  • the parent node we are checking
  • the concept of the child node (can be a subconcept of this concept)
  • the LinkDeclaration of the child node (the child role can be taken from there)
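A sketch of a can be child constraint (the concepts are illustrative and the parameter names are assumptions):

```
can be child
  (node, parentNode, childConcept, link)->boolean {
    // forbid an instance of this concept directly inside a class body
    !parentNode.isInstanceOf(ClassConcept);
  }
```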

can be parent

Return false if an instance of the concept is not allowed to be a parent of a node of a specific concept (in a given role).

Parameters:

  • the child node we are checking
  • the parent node we are checking (an instance of this concept)
  • the concept of the child node we are checking
  • the LinkDeclaration of the child node

can be ancestor

Return false if an instance of the concept is not allowed to be an ancestor of specific nodes.

Parameters:

  • the child node we are checking
  • the ancestor node we are checking (an instance of this concept)
  • the concept of the descendant node

can be root

This constraint is available only for rootable concepts (instance can be root is set to true in the concept's structure description). Return false if an instance of the concept cannot be a root in the given model.

Parameters:

  • model - the model of the root
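A sketch of a can be root constraint (the naming convention checked here is made up, and the model API call is approximated):

```
can be root
  (model)->boolean {
    // only allow instances of this root concept in models whose name ends with ".sandbox"
    model.name.endsWith(".sandbox");
  }
```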

Property constraints

Technically speaking, "pure" concept properties are not properties in the original meaning of the word, but merely public fields. Property constraints allow you to turn them into real properties. Using these constraints, the behavior of a concept's properties can be customized. Each property constraint is applied to a single specified property.

property - the property to which this constraint is applied.

get - this method is executed to compute the property value every time the property is accessed.

Parameters:

  • the node to get the property from

set - this method is executed on every write to set the property value. The property value is guaranteed to be valid.

Parameters:

  • the node to set the property on
  • the new property value

is valid - this method determines whether a value of the property is valid. It is executed every time before the value is changed; if it returns false, the set() method is not executed.

Parameters:

  • the node whose property is being checked
  • the value to be checked
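Putting the pieces together, a property constraint for a hypothetical caption property might look roughly like this (syntax and feature names approximated):

```
property {caption}
  get:(node)->string {
    // compute the value on every read
    node.name + " (" + node.alias + ")";
  }
  is valid:(node, propertyValue)->boolean {
    // reject empty captions; 'set' is only invoked for valid values
    propertyValue.isNotEmpty;
  }
```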

Example - customizing the description in the completion menu

The completion menu lists available nodes together with some additional descriptive information:

In order to customize the additional information and provide more details on the individual options listed in the completion menu, you can override the getter of the shortDescription property of the target concept:


Referent constraints

Constraints of this type add behavior to a concept's links and make them behave more like properties.

referent set handler - if specified, this method is executed on every set of this link.

Parameters:

  • the node that contains the link
  • the old value of the reference
  • the new value of the reference

scope - defines the set of nodes to which this link can point. The method returns a Scope instance. Please refer to the Scopes documentation for more information on scoping. There are two types of scope referent constraint:

  • inherited
  • reference

While inherited scope simply declares the target concept, the reference scope provides a function that calculates the scope on the fly from the parameters.




Parameters:

  • exists - false when the reference is being created, true if it is being edited
  • referenceNode (deprecated) - the node that contains the actual link. It can be null when a new node is being created for a concept with a smart reference. In this situation the smart reference is used to determine what type of node to create in the context of enclosingNode, so the search scope method is called with a null referenceNode.
  • contextNode - the node with the reference, or the closest not-null context node
  • (deprecated) the LinkDeclaration describing the parent-child relationship between enclosingNode and referenceNode
  • linkTarget (deprecated) - the concept that this link can refer to. Usually it is the concept of the reference, so it is known statically. If we specialize a reference in a subconcept and do not define a search scope for the specialized reference, the linkTarget parameter can be used to determine what reference specialization is required.
  • enclosingNode (deprecated) - the parent of the node that contains the actual link, null for root nodes. referenceNode and enclosingNode cannot both be null at the same time.
  • model - the model that contains the node with the link. This is included for convenience, since both referenceNode and enclosingNode keep the model too.
  • position - the target index in contextRole
  • contextRole - the target role in contextNode
If no scope is set for the reference, the default scope from the referenced concept is used. If the default scope is not set either, the "global" scope is used: all instances of the referent concept from all imported models.
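A sketch of a reference scope that restricts a hypothetical target link to variables declared in the enclosing block (the concepts and the scope-construction call are assumptions; see the Scopes documentation for the real API):

```
link {target}
  scope:
    (exists, referenceNode, contextNode, ...)->Scope {
      // collect candidate targets from the closest enclosing block
      ListScope.forNamedElements(
        contextNode.ancestor<concept = BlockStatement>.descendants<concept = VariableDeclaration>);
    }
```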

presentation (deprecated - the editor aspect now specifies the presentation of references, see Editor) - here you specify how the reference will look in the editor and in the completion list. Sometimes it is convenient to show a reference differently depending on the context. For example, in Java all references to an instance field f should be shown as this.f if the field is shadowed by a local variable declaration with the same name. By default, if no presentation is set, the name of the referenced node is used as its presentation (provided it is an INamedConcept).



Parameters:

  • model - the model of the node containing the reference
  • the node to be presented (referenceNode has a reference to parameterNode of type linkTarget)
  • position - the target index in contextRole
  • exists - false when the reference is being created
  • true - presentation of an existing node, false - for a new node (to be created after selection in the completion menu)
  • true - the node is presented in the smart reference
  • true - presentation for the editor, false - for the completion menu
  • contextNode - the node with the reference, or the closest not-null context node
  • contextRole - the target role in contextNode

Default scope

Suppose we have a link pointing to an instance of concept C and no scope defined for this link in the referent constraints. When you edit this link, all instances of concept C from all imported models are visible by default. If you want to restrict the set of visible instances for all links to concept C, you can set a default scope for the concept. As in the referent constraints, you can set the search scope, validator and presentation methods. All the parameters are the same.

Please refer to the Scopes documentation for more information on scoping.



During syntax tree manipulation, common operations are often extracted to utility methods in order to simplify the task and reuse functionality. It is possible to extract such utilities into static methods or create node wrappers holding the utility code in virtual methods. However, in MPS a better solution is available: the behavior language aspect. It makes it possible to create virtual and non-virtual instance methods, static methods, and concept instance constructors on nodes.

Concept instance methods

A concept instance method is a method that can be invoked on any instance of the specified concept. Such methods can be both virtual and non-virtual. While virtual methods can be overridden in extending concepts, non-virtual ones cannot. A virtual concept method can also be declared abstract, forcing the inheritors to provide an implementation.

Concept instance methods can be implemented both in concept declarations and in concept interfaces. This may lead to method resolution ambiguities. When MPS needs to decide which virtual method to invoke in the inheritance hierarchy, the following algorithm is applied:

  • If the current concept implements a matching method, invoke it. Return the computed value.
  • Invoke the algorithm recursively for all implemented concept interfaces in the order of their definition in the implements section. The first found interface implementing the method is used. In case of success return the computed value.
  • Invoke the algorithm recursively for an extended concept, if there is one. In case of success return the computed value.
  • Return failure.

Overriding behavior methods

In order to override a method inherited from a super-concept, use the Control/Cmd + O keyboard shortcut to invoke the Override dialog. There you can select the method to override. By typing the name of the desired method to override you narrow down the list of methods.

Concept constructors

When a concept instance is created, it is often useful to initialize some properties/references/children to the default values. This is what concept constructors can be used for. The code inside the concept construction is invoked on each instantiation of a new node of a particular concept.


The node's constructor is invoked before the node gets attached to the model. It is therefore pointless to inspect the node's parent, ancestors, children or descendants in the behavior constructor. These calls will always evaluate to null. You should define NodeFactories (Editor Actions) in order to have your nodes initialized with values depending on their context within the model.
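A minimal sketch of a concept constructor (the property defaulted here is made up; remember that only context-independent initialization belongs in a constructor):

```
constructor {
  // safe: initializes one of the node's own properties
  this.isFinal = false;
  // NOT safe: this.parent is still null at this point, since the node
  // has not been attached to the model yet
}
```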

Concept static methods

Some utility methods do not belong to concept instances and so should not be created as instance methods. For concept-wide functionality, MPS provides static concept methods. See also Constraints


SModel language

The purpose of SModel language is to query and modify MPS models. It allows you to investigate nodes, attributes, properties, links and many other essential qualities of your models. The language is needed to encode several different aspects of your languages - actions, refactorings, generator, to name the most prominent ones. You typically use the jetbrains.mps.lang.smodel language in combination with BaseLanguage.

Treatment of null values

The SModel language treats null values in a very safe manner. It is pretty common in OO languages such as Java or C# to have many checks for null values, in the form of expr == null and expr != null statements scattered across the code. These are necessary to prevent null pointer exceptions, but at the same time they increase code clutter and often make the code harder to read. To alleviate this problem, MPS treats null values in a liberal way. For example, if you ask a null node for a property, you will get back a null value. If you ask a null node for its children list, you will get an empty list, etc. This should make your life as a language designer easier.


SModel language has the following types:

  • node<ConceptType> - corresponds to an AST node (e.g. node<IfStatement> myIf = ...)
  • nlist<ConceptType> - corresponds to a list of AST nodes (e.g. nlist<Statement> body = ...)
  • model - corresponds to an instance of the MPS model
  • search scope - corresponds to a search scope of a node's reference, i.e. the set of allowed targets for the reference
  • reference - corresponds to an AST node that represents reference instance
  • concept<Concept> - corresponds to the org.jetbrains.mps.openapi.language.SConcept concept that represents a concept (e.g. concept<IfStatement> = concept/IfStatement/)
  • conceptNode<Concept> - (deprecated) corresponds to an AST node that represents a concept (e.g. conceptNode<IfStatement> = conceptNode/IfStatement/)
  • enummember<Enum Data Type> - corresponds to an AST node that represents an enumeration member (e.g. enummember<FocusPolicy> focus = ...)

Most of the SModel language operations are applicable to all of these types.
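For illustration, a few declarations using these types side by side (the concepts are the usual BaseLanguage ones, used here purely as examples):

```
node<IfStatement> myIf = ...;                    // a single AST node
nlist<Statement> body = ...;                     // a list of AST nodes
model owner = myIf.model;                        // the model containing the node
concept<IfStatement> c = concept/IfStatement/;   // a concept literal
```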

Operation parameters

A lot of the operations in the SModel language accept parameters. The parameters can be specified once you open the parameter list by entering < at the end of an operation. E.g. myNode.ancestors<concept = IfStatement, concept = ForStatement>.


MPS allows you to down-cast from smodel concepts to the underlying Java API (Open API), should you need more power when manipulating the model. Check out the Open API documentation for details.


The :eq: and :ne: operators can be used to compare nodes for equality. The operators are null-safe and will compare the whole sub-trees represented by the two compared nodes.


Getting nodes by name

Use the nodePointer/.../ construct to obtain a reference to a node using its name. The Allow any named element property (set in the Inspector) indicates whether only root nodes should be available, or all named nodes. The node reference can then be resolved into a node<> using a repository:

Getting concepts by name

Use the concept/.../ construct to obtain a concept declaration by specifying its name:

The concept switch construct can be used to branch the logic depending on the concept at hand:

Features access

The SModel language can be used to access the following features:

  • properties
  • children
  • references

To access them, the following syntax is used:

If the feature is a property, then the type of whole expression is the property's type. If the feature is a reference or a child of 0..1 or 1 cardinality, then the type of this expression is node<LinkTarget>, where LinkTarget is the target concept in the reference or child declaration. If the feature is a child of 0..n cardinality, then the type of this expression is nlist<LinkTarget>.

You can use so-called implicit select to access features of the child nodes. For example, the following query:

will be automatically transformed by MPS to something like:

resulting in a plain collection of all non-null model elements accessible through the specified chain of link declarations.
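The original code images are not reproduced here, but the idea can be sketched as follows (classDef, member and name are assumed BaseLanguage features, and the expansion is only approximate):

```
// implicit select: collect the names of all members of a class
classDef.member.name;

// roughly equivalent to:
classDef.member.select({~it => it.name; }).where({~it => it != null; });
```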

Null checks

Since nulls are treated liberally in MPS, we need a way to check for null values. The isNull and isNotNull operations are our friends here.

IsInstanceOf check and type casts

Often we need to check whether a node is an instance of a particular concept. We can't use Java's instanceof operator, since it only understands Java objects, not MPS nodes. To perform this type of check, the following syntax should be used:

Also, there's the isExactly operation, which checks whether a node's concept is exactly the one specified by a user.

Once we've checked a node's type against a concept, we usually want to cast an expression to a concept instance and access some of this concept's features. To do so, the following syntax should be used:

Another way to cast node to particular concept instance is by using as cast expression:

The difference between the regular cast (using a colon) and the as cast is in the way they handle the situation when the result of the left-side expression cannot be safely cast to the specified concept instance: the regular cast throws a NullPointerException in this case, while the as cast returns null.
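Side by side (stmnt and ExpressionStatement are illustrative):

```
// regular cast: throws if stmnt is not an ExpressionStatement
node<ExpressionStatement> es = stmnt : ExpressionStatement;

// 'as' cast: evaluates to null instead, which combines well with the null-safe dot
node<Expression> expr = (stmnt as ExpressionStatement).expression;
```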

Combine this with the null-safe dot operator in the smodel language and you get a very convenient way to navigate around the model:


Intentions are available to easily migrate from one type of cast expression to the other:

Node collection cast

A collection of nodes can be filtered and cast by the concept of the nodes using the ofConcept construct:


In order to find a node's parent, the parent operation is available on every node.


The children operation can be used to access all direct child nodes of the current node. The operation has an optional linkQualifier parameter. With this parameter, the result of the children<linkQualifier> operation is equivalent to a node.linkQualifier call, and so returns only the children belonging to the linkQualifier group/role. E.g. classDef.children<annotation, member>

Sibling queries

When you manipulate the AST, you will often want to access a node's siblings (that is, nodes with the same role and parent as the node under consideration). For this task we have the following operations:

  • next-sibling/prev-sibling - returns next/previous sibling of a node. If there is no such sibling, null is returned.
  • next-siblings/prev-siblings - returns nlist of next/previous siblings of a node. These operations have an optional parameter that specifies whether to include the current node.
  • siblings - returns nlist of all siblings of a node. These operations have an optional parameter that specifies whether to include the current node.
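A few sibling queries in one place (the <+> include-current parameter is shown by analogy with ancestors; the exact syntax may differ):

```
node<> next = myStatement.next-sibling;    // null if myStatement is the last one
nlist<> rest = myStatement.next-siblings;  // all following siblings
nlist<> all  = myStatement.siblings<+>;    // all siblings, including myStatement itself
```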


During model manipulation, it's common to find all ancestors (parent, parent of a parent, parent of a parent of a parent, etc) of a specified node. For such cases we have two operations:

  • ancestor - return a single ancestor of the node
  • ancestors - returns all ancestors of the node
    Both of them have the following parameters to narrow down the list:
  • concept type constraint: concept=Concept, concept in [ConceptList]
  • a flag indicating whether to include the current node: +

E.g. myNode.ancestors<concept = InstanceMethodDeclaration, +>


It's also useful to find all descendants (direct children, children of children etc) of a specified node. We have the descendants operation for such purposes. It has the following parameters:

  • concept type constraint: concept=Concept, concept in [ConceptList]
  • a flag indicating whether to include current node: +

E.g. myNode.descendants<concept = InstanceMethodDeclaration>

Containing root and model

To access the top-most ancestor node of a specified node, use the containing root operation. The containing model is available as the result of the model operation.

For example,

  • node<> containingRoot = myNode.containing root
  • model owningModel = myNode.model

Model queries

Often we want to find all nodes in a model which satisfy a particular condition. We have several operations that are applicable to expressions of model type:

  • roots(Concept) - returns all roots in a model, which are instances of the specified Concept
  • nodes(Concept) - returns all nodes in a model, which are instances of the specified Concept

E.g. model.roots(<all>) or model.nodes(IfStatement)

Search scope queries

In some situations, we want to find out, which references can be set on a specified node. For such cases we have the search scope operation. It can be invoked with the following syntax:

The Concept literal

Often we want to have a reference to a specified concept. For this task we have the concept literal. It has the following syntax:

E.g. concept<IfStatement> concept = concept/IfStatement/

Concept operation

If you want to find the concept of a specified node, you can call the concept operation on the node.

E.g. concept<IfStatement> concept = myNode.concept

Migrating away from deprecated types

The conceptNode<> type as well as the conceptNode operation have been deprecated. The asConcept operation will convert a conceptNode<> to a concept<>. The asNode operation, on the other hand, will do the opposite conversion and will return a node<AbstractConceptDeclaration> for a concept<>.


The conceptNode<> type was called concept<> in MPS 3.1. The conceptNode operation was called concept in MPS 3.1.

Concept hierarchy queries

We can query super/sub-concepts of expression with the concept type. The following operations are at your disposal:

  • super-concepts/all - returns all super-concepts of the specified concept. There is an option to include/exclude the current concept - super-concepts/all<+>
  • super-concepts/direct - returns all direct super-concepts of the specified concept. Again, there is an option to include/exclude the current concept - super-concepts/direct<+>
  • sub-concepts - returns sub-concepts

For example:

concept<IfStatement> concept = myNode.concept; 
list<concept<>> superConceptsAll = concept.super-concepts/all; 
concept.sub-concepts(model, myScope);

The hasRole operation

Sometimes we may want to check whether a node has a particular role. For this we have the following syntax:

For example,

myNode.hasRole(IfStatement : elsifClauses) 

Link queries

The link, linkName and linkNode operations give you access to the details of a link between nodes.


Containing link queries

If one node was added to another one (parent) using the following expression:

then you can call the following operations to access the containment relationship information:

  • containingRole - returns a string representing the child role of the parent node containing this node ("childLinkRole" in above case)
  • containingLink - returns node<LinkDeclaration> representing a link declaration of the parent node containing this node
  • index - returns int value representing index of this node in a list of children with corresponding role. Identical to the following query upon the model represented above:

Reference operations

Accessing references

The following operations were created to access the reference instance representing a reference from a source node to a target one. The operations are applicable to the source node:

  • reference< > - returns an instance of the reference type representing the specified reference. This operation requires a "linkQualifier" parameter used as the reference specification. The parameter can be either a link declaration of the source node's concept or an expression returning a node<LinkDeclaration>
  • references - returns a sequence<reference> representing all references specified in the source node.

Working with references

Having an instance of reference type you can call the following operations on it:

  • linkDeclaration - returns node<LinkDeclaration> representing this reference
  • resolveInfo - returns string resolve info object
  • role - returns reference role - similar to reference.linkDeclaration.role;
  • target - returns the node<> representing the reference target, if it was specified and can be located in the model(s)

Downcast to lower semantic level

SModel language generates code that works with raw MPS classes. These classes are quite low-level for the usual work, but in some exceptional cases we may still need to access them. To access the low-level objects, you should use the downcast to lower semantic level construct. It has the following syntax:

For example,


Advanced "console" queries

The jetbrains.mps.lang.smodel.query language enables the same type of queries that the MPS Console uses:

Within with-statements you can use queries like #instances or #models to conveniently retrieve the desired nodes or links. For details on the available commands, please refer to the Console documentation.


Modification operations

Feature changes

The most commonly used change operation in SModel is changing a feature. In order to set a value of a property, or assign a child or reference node of 0..1 or 1 cardinality, you can use straight assignment (with =) or the set operation. In order to add a child to a 0..n or 1..n children collection, you can either use the .add operation from the collections language or call the add next-sibling/add prev-sibling operations on a node<>, passing another node as a parameter.

For example,

  • classDef.name = "NewClassName";
  • classDef.name.set("NewClassName");
  • myNode.condition = trueConstant;
  • node<InstanceMethodDeclaration> method = classDef.member.add new initialized(InstanceMethodDeclaration);

New node creation

There are several ways to create a new node:

  • new operation: new node<Concept>()
  • new instance operation on a model: model.newInstance()
  • new instance operation on a concept: concept.newInstance()
  • add new(Concept) and set new(Concept) operations applied to feature expressions
  • replace with new(Concept) operation
  • new root node(Concept) operation applied to a model. In this case the concept should be rootable
  • new next-sibling<Concept>/new prev-sibling<Concept> operations adding new sibling to an existing node

    Note that the jetbrains.mps.lang.actions language adds the possibility to initialize the newly created nodes using the rules specified in NodeFactories. Upon importing the jetbrains.mps.lang.actions language you are able to call:

    • new initialized node<Concept>()
    • initialized node(Concept)
    • initialized next/previous sibling(Concept)
    • add new initialized(Concept)
    • set new initialized(Concept)
    • replace with new initialized(Concept)
    • replace with initialized next/previous-sibling(Concept)
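A few of these creation forms side by side (BaseLanguage concepts used for illustration; the exact invocation syntax is approximated):

```
node<IfStatement> ifStmnt = new node<IfStatement>();         // plain 'new' operation
node<ClassConcept> cls = model.new root node(ClassConcept);  // new root node in a model
cls.member.add new(InstanceMethodDeclaration);               // create and add a child in one step
ifStmnt.replace with new(WhileStatement);                    // replace an existing node
```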


To create a copy of an existing node, you can use the copy operation. E.g., node<> yourNode = myNode.copy

Replace with

To replace a node in the AST with an instance of another node, you can use the 'replace with' operation. If you want to replace and create at the same time, there is a shortcut operation 'replace with new(Concept)', which takes a concept as a parameter.

Delete and detach operations

If you want to completely delete a node from the model, you can use the delete operation. In order to detach a node from it's parent only, so that you can for example attach the node to another parent later, you use the detach operation.

smodel.query language


There's a small extension to the smodel language called smodel.query. It allows you to perform project-wide model queries, e.g. in actions, migrations and other code. The language is used inside a with-statement, which constrains the scope on which the queries are performed.

The scope can be constrained to a project, module, model or a sequence of these.

Operation parameters

The behavior of smodel.query operations can be slightly changed using operation parameters, which can be specified after the operation name. 

Possible parameters include:

r/o+ - operations in the smodel.query language are designed for the simplest usage in model-modifying code such as actions and migrations, so each operation skips entities that can't be changed. The <r/o+> parameter forces the command to operate on read-only models as well.

scope - each command operates in the scope specified in the surrounding with-statement. The scope parameter changes the operating scope for a single command.

exact - can be used in #instances operations to find instances of the concept specified, excluding instances of descendant concepts

Commands of the smodel.query language

#instances - fast search for instances of a specified concept

#usages - fast search for usages of a specified node

#modules - all modules in scope

#models - all models in scope

#nodes - all nodes in scope

#references - all references in scope



Pattern language

The Pattern language

The pattern language has a single purpose - to define patterns of model structures. Those patterns form visual representations of the nodes you want to match. A pattern matches a node if the node's property values are equal to those specified in the pattern, the node's references point to the same targets as those of the pattern, and the corresponding children match the appropriate children of the pattern.

Also patterns may contain variables for nodes, references and properties, which then match any node/reference/property. On top of that the variables will hold the actual values upon a successful match.


The single most important concept of the pattern language is PatternExpression. It contains a pattern: a single arbitrary node. Additionally, the node can specify the following variables:

  • #name - a node variable, a placeholder for a node. Stores the matching node
  • #name - a reference variable, a placeholder for a reference. Stores the reference's target, i.e. a node.
  • $name - a property variable, a placeholder for a property value. Stores the property value, i.e. a string.
  • *name - a list variable, a placeholder for nodes in the same role. Stores the list of nodes.

Antiquotations may be particularly useful when used inside a pattern, just like inside quotations (see Antiquotations).


1. The following pattern matches against any InstanceMethodDeclaration without parameters and a return type:

Captured variables:



method's name




2. The following pattern matches against a ClassifierType with the actual classifier specified inside an antiquotation expression and with any quantity of any type parameters:

Captured variables:



class type's parameters



used as a wildcard, its contents are ignored. Means that the parameters are arbitrary

Using patterns

Match statement

Patterns are typically used as conditions in match statements. Pattern variables can be referenced from inside of the match statement.
For example:

This piece of code examines a node n and checks whether it satisfies the first or the second condition. Then the statements in the corresponding (matching) block are executed. A pattern variable $name is used in the first block to print out the name of a node. In our case the node holds a variable declaration.
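Patterns are edited as node trees in MPS, so the following is only an approximate textual rendering of such a match statement (the concepts are illustrative):

```
match (n) {
  <VariableDeclaration name: $name>  => { info "variable named " + $name; }
  <IfStatement>                      => { info "an if statement"; }
}
```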

Other usages

Patterns are also used in several other language constructs in MPS. They may appear:

  • as conditions on applicable nodes of typesystem/replacement/subtyping/other rules of typesystem language (See Inference rules)
  • as supertype patterns in coerce statement and coerce expression (See Coerce)
  • as conditions on node in generator rules
  • as pattern in TransformStatement used to define language migrations (See Migrations)

You can also use patterns in your own languages.
Basically what happens is that a class is generated from a PatternExpression and the expression itself is reduced to a constructor of this class. This class extends GeneratedMatchingPattern and has a match(SNode) method, which returns a boolean value indicating whether the node matches the pattern. It also holds a getFieldValue(String) method to get the values stored in pattern variables after a successful match.
So to develop your own language constructs using patterns, you can call these two methods in the generator template for your constructs.



Once the structure of your language is defined, you will probably want to create the means that allow developers to conveniently build ASTs with it. Manipulating the ASTs directly would be neither intuitive nor productive. Hiding the AST and offering the user comfortable, intuitive interaction is the role of language editors.


There are situations when manipulating the AST directly is necessary, for example, when the available editor definition does not give you access to all the properties of a node. The reflective editor gives you the power to set aside the defined editor for a selected node and instead access the AST directly. Hit F5 to revert back to the default editor.

Editor Overview

An editor for a node serves as its view as well as its controller. An editor displays the node and lets the user modify, replace, delete it and so on. Nodes of different concepts have different editors. A language designer should create an editor for every concept in his/her language.

In MPS, an editor consists of cells, which themselves contain other cells, some text, or a UI component. Each editor is specified for a particular concept. A concept may have at most one editor declaration (or none at all). If a concept does not have an editor declaration, its instances are edited with the editor of the concept's nearest ancestor that has one.

To describe an editor for a certain concept (i.e. which cells appear in an editor for nodes of that concept), a language designer uses a dedicated language simply called the editor language. MPS thus applies the Language Oriented Programming principles to itself.

The description of an editor consists of descriptions of the cells it holds. We call such descriptions "cell models." For instance, if you want your editor to consist of a single cell with unmodifiable text, you create a constant cell model in your editor description and specify that text. If you want your editor to consist of several cells, you create a collection cell model and then, inside it, specify the cell models for its elements. And so on.


For a quick how-to document on the MPS editor please check out the Editor Cookbook.

Types Of Cell Models

Constant cell

This model describes a cell which will always contain the same text. Constant cells typically mirror "keywords" in text-based programming languages.

Collection cell

A cell which contains other cells. Can be horizontal (cells in a collection are arranged in a row), vertical (cells are on top of each other) or have so-called "indent layout" (cells are arranged horizontally but if a line is too long it is wrapped like text to the next line, with indent before each next line).
In Inspector, you can specify whether the resulting cell collection will use folding or not, and whether it will use braces or not. Folding allows your cell list to contract into a single cell (fold) and to expand from it (unfold) when necessary. It is useful for a programmer writing in your language when editing a large root: he/she is able to fold some cells and hide all the information that is not necessary for the current task. For instance, when editing a large class, one can fold all method bodies except the method he/she is editing at the moment. 
The collapse by default property, when set to true, ensures that the collection shows up folded when displayed for the first time, unless the user unfolds it manually.

Collection cells can also specify a Context assistant, which will provide intuitive visual actions to the user. Check out the Context assistant documentation for details. 

Property cell

This cell model describes a cell which shows the value of a certain property of a node. The value can be edited in the property cell; therefore, a property cell serves not only as a view but also as a controller. In the Inspector, you can specify whether the property cell will be read-only or will allow its property value to be edited.

Child cell

This cell model contains a reference to a certain link declaration in the node's concept. The resulting cell will contain an editor for the link's target (almost always for a child, not a referent). For example, if you have a binary operation, say " + ", with two children, "leftOperand" and "rightOperand", the editor model for your operation will be the following: an indent collection cell containing a referenced node cell for the left operand, a constant cell with " + ", and a referenced node cell for the right operand. It will be rendered as an editor for the left operand, then a cell with " + ", and then an editor for the right operand, arranged in a row. As follows from its name, this type of cell model is typically used to show editors for children.
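An illustrative sketch of such an editor declaration, in the same notation used for the binary-operation editor later in this document (the concept name PlusExpression and the role names are hypothetical, and the notation is approximate):

    <editor for concept PlusExpression>
      [> % leftOperand % + % rightOperand % <]

Here [> ... <] denotes an indent collection cell, % role % denotes a child cell, and the bare + is a constant cell.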

Referent cell

Used mainly to show reference targets. The main difference between a referent cell and a child cell is that we don't need, or don't want, to show the whole editor for a reference target. For example, when a certain node, say a class type, has a reference to a Java class, we don't want to show the whole editor for that class with its methods, fields, etc. - we just want to show its name. Child cells cannot be used for such a purpose; one should use referent cells instead.
A referent cell allows you to show a different inlined editor for a reference target, instead of using the target's own editor. In most cases it is very simple: a cell for a reference target usually consists only of a property cell with the target's name.

Child list cell

This cell is a collection containing multiple child cells for a node's children of the same role. For instance, an editor for a method call will contain a child list cell for rendering its actual arguments. A child list can be indent (text-like), horizontal or vertical.
The cell generated from this cell model supports insertion and deletion of children of the given role, thus serving both as a view and as a controller. The default keys for insertion are Insert and Enter (to insert a child before or after the selected one, respectively), and the default key for deletion is Delete. You can also specify a separator for your list.
A separator is a character which is shown in constant cells between the cells for the children. When you are inside the cell list and press the key with this character, a new child is inserted after the selected child. For instance, the separator for a list representing the actual parameters of a method call is a comma.
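For instance, the argument list of a method call could be declared roughly like this (an illustrative sketch in approximate notation; the role name is hypothetical, and the separator is set in the Inspector):

    ( % arguments % )
      /* in the Inspector: separator "," */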
In the Inspector, you can specify whether the resulting cell list will use folding or not, and whether it will use braces or not. Folding allows your cell list to contract into a single cell (fold) and to expand from it (unfold) when necessary. It is useful when editing a large root: the programmer can fold some cells and hide all the information in the editor that is not necessary for the current task. For instance, when editing a large class, one can fold all method bodies except the method being edited at the moment.

Indent cell

An indent cell model will be generated into a non-selectable constant cell containing a whitespace. The main difference between a cell generated from an indent cell and one generated from a constant cell model containing whitespaces as its text is that the width of an indent cell will vary according to user-defined global editor settings. For instance, if a user defines an indent to be 4 spaces long, then every indent cell will occupy a space of 4 characters; if 2 spaces long, then every indent cell will be 2 characters.

UI component cell

This cell model allows a language designer to insert an arbitrary UI component inside an editor for a node. The language designer writes a function that returns a JComponent, and that component is inserted into the generated cell. Note that such a component is re-created every time the editor is rebuilt, so don't try to keep any state inside your component. All state should be read from and written into the model (i.e. the node, its properties and references) - not the view (your component).
A good use case for such a cell model is when you keep a path to some file in a property, and your component is a button which activates a modal file chooser. The default selected path in a file chooser is read from the above-mentioned property, and the file path chosen by the user is written to that property.

Model access

A model access cell model is a generalization of a property cell and, therefore, is more flexible. While a property cell simply shows the value of a property and allows the user to change that value, a model access cell may show an arbitrary text based on the node's state and modify the node in an arbitrary way based on what changes the user has made to the cell's text.
While making a property cell work requires you only to specify a property to access via that cell, making a model access cell work requires a language designer to write three methods: "get," "set," and "validate." The latter two are somewhat optional.
A "get" method takes a node and should return a String, which will be shown as the cell's text. A "set" method takes a String - the cell's text - and should modify a node according to this String, if necessary. A "validate" method takes the cell's text and returns whether it is valid or not. If a text in a cell becomes invalid after a user change, then it is marked red and is not passed to the "set" method.
If a "validate" method is not specified, a cell will always be valid. If a "set" method is not specified, no changes in a cell's text will affect its node itself.
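Put together, a model access cell that renders a hypothetical boolean property isFinal as the keyword "final" might define the three methods roughly as follows (an illustrative sketch in approximate notation, not exact MPS syntax):

    model access
      get : (node)->string {
        if (node.isFinal) { "final" } else { "" }
      }
      set : (node, text)->void {
        node.isFinal = "final".equals(text);
      }
      validate : (text)->boolean {
        "final".equals(text) || text.isEmpty;
      }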

Next applicable editor

A more specific editor may reuse a less specific editor of the same concept through the next applicable editor cell. This cell is used as a placeholder that re-applies the logic for finding the less specific editor and inserts the found editor in its place. For example, an editor specific to a particular context hint may provide some visual ceremony around a next applicable editor cell. By removing the context hint on the next applicable editor cell, MPS will re-evaluate the editor-discovery logic and supply the found editor into the cell.

In particular, this mechanism is frequently used to implement editors for "commented out" nodes when customizing the generic comment-out functionality.

Custom cell

If the other cell models are not enough for a language designer to create the desired editor, there is one more option left: to create a cell provider which returns an arbitrary custom cell. The only restriction is that it must implement the EditorCell interface.

Editor Components and editor component cells

Sometimes two or more editor declarations for different concepts have a common part, which is duplicated in each of those editors. To avoid this redundancy, there is a mechanism called editor components. You specify a concept for which an editor component is created and create a cell model, just as in a concept editor declaration. Once written, the component can be used in editor declarations for any of the specified concept's descendants. To use an editor component inside your editor declarations, you create a specific cell model - the editor component cell model - and set your editor component declaration as the target of this cell model's reference.

Cell layouts

Each collection cell has a "cell layout" property, which describes how the child cells are placed. There are several layouts:

  • indent layout - places cells like text.
  • horizontal layout - places cells horizontally in a row.
  • vertical layout - places cells vertically.


Styling the editor cells gives language designers a very powerful way to improve the readability of code. Having keywords, constants, calls, definitions, expressions, comments and other language elements displayed in different colors or fonts helps developers grasp the syntax more easily. You can also use styling to mark areas of the editor as read-only, so that developers cannot edit them.

Each cell model has appearance settings that determine the cell's presentation - for instance, font color, font style, whether the cell is selectable, and some others. These settings are combined into an entity called a stylesheet. A stylesheet can either be inline, i.e. described together with a particular cell model, or it can be declared separately and used in many cell models. Both inline stylesheets and style references are specified for each cell in its Inspector View.

The settings do not have to be specified by a single value. A query option is also available for all settings, in which case the developer implements a concept function that returns the desired value.

It is a good practice to declare a few stylesheets for different purposes. Another good practice is to have a style guideline in mind when developing an editor for your language, as well as when developing extensions for your language. For example, in BaseLanguage there are styles for keywords (applied to those constant cells in the BaseLanguage editor, which correspond to keywords in Java), static fields (applied to static field declarations and static field references), instance fields, numeric literals, string literals, and so forth. When developing an extension to BaseLanguage, you should apply keyword style to new keywords, field style to new types of fields, and so forth.

MPS stylesheets are quite similar to CSS stylesheets; a stylesheet consists of a list of style classes, in which values for some style properties are specified. MPS additionally provides a mechanism for extending styles as well as for overriding property values.
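For example, a stylesheet with a style class for keywords might look roughly like this (an illustrative sketch in approximate notation; the names MyStyles and KeyWord are hypothetical, the style properties are described below):

    stylesheet MyStyles
      style KeyWord {
        text-foreground-color : blue
        font-style : bold
      }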

Style properties

Boolean style properties

  • selectable - whether the cell can be selected. True by default.
  • read-only - whether one can modify the cell and its nested cells. False by default. Designed for freezing fragments of the cell tree.
  • editable - whether one can modify text in a cell or not. By default is false for constant cell models, true for other cell models.
  • draw-border - whether border will be drawn around a cell
  • draw-brackets - whether brackets will be drawn around a cell
  • first-position-allowed / last-position-allowed - for text-containing cells, specifies whether the caret is allowed to be at the first/last position (i.e. before/after the whole text of the cell)

You can either choose a property value from the completion menu or specify a query, i.e. a function which returns a boolean value.

Padding properties

  • padding-left/right/top/bottom - a floating point number which specifies the padding of a text cell, i.e. how much space is left between the cell's text and the corresponding side of the cell.

Punctuation properties

All cells in a collection are separated with one space by default. Sometimes we need cells placed together.

  • punctuation-left - if this property is true, the space on the left side of the cell is removed and the first position in the cell becomes disallowed.
  • punctuation-right - if this property is true, the space on the right side of the cell is removed and the last position in the cell becomes disallowed.
  • horizontal-gap - specifies the gap size between cells in a collection. The default value is 1 space.

For example, in code like "(1)" we don't want spaces between "(" and "1", or between "1" and ")". So we add the punctuation-right property to the "(" cell and the punctuation-left property to the ")" cell.

Color style properties

  • Text foreground color - the cell text's color (affects text cells only)
  • Text background color - the cell text's background color (affects text cells only)
  • Background color - the background color of a cell. Affects any cell. If a text cell has non-zero padding and some text background color, the cell's background color will be the color of its margins.
    You can either choose a color from the completion menu or specify a query, i.e. a function which returns a color.

Indent layout properties

  • indent-layout-indent - all lines of the cell will be placed with an indent. This property can be used to indent a code block.

  • indent-layout-new-line - after this cell there will be a new-line marker.

  • indent-layout-on-new-line - this cell will be placed on a new line
  • indent-layout-new-line-children - all children of the collection will be placed on new lines

  • indent-layout-no-wrap - the line won't be wrapped before this cell

Other style properties

  • font family
  • font size
  • font style - can be either plain, bold, italic, or bold italic.
  • layout constraint
    • For flow layout
      • none - default behavior
      • punctuation - means that the previous item in the flow layout should always be placed on the same line as the item which this constraint is assigned to.
      • noflow - excludes a cell from the flow layout. The current line is finished and the item is placed below it. After this item a new line is started and normal flow layout resumes. This style can be used to embed a picture inside text.
  • underlined - Can be either underlined, not underlined, or as is ('as is' means it depends on properties of the enclosing cell collection).

Style properties propagation

While some style properties affect only the cell to which they are applied, values of other properties are pushed down the cell subtree (nested cells) and applied to them until some of the child cells specifies its own value for the property. Such inheritable properties that are pushed down the cell hierarchy include text-foreground-color, text-background-color, background-color, font-style, font-size and many others.

Custom styles

Language designers can define their own style attributes in style sheets and then use them in the editor. This increases the flexibility of the language editor definition. The attributes may hold values of different types and can optionally provide default values.

There are two types of custom style attributes:

  • simple - applied to a single editor cell only
  • inherited - applied to a cell and all its descendant cells recursively

In order to use the style attribute in an editor definition, your language has to import the language defining the attribute and the editor aspect has to list the defining language among the used languages.
To refer to the custom attribute from within BaseLanguage code, you need to import jetbrains.mps.lang.editor to get access to the StyleAttributeReferenceExpression concept.

Style inheritance

To be truly usable, style classes need an extension mechanism to describe that a particular style class inherits the values of all style properties that are not overridden explicitly. The special style property apply copies the values of all properties specified in the parent style class into our style class. Using the apply property is semantically equivalent to copy-pasting all of the properties from the parent style class. An apply-if variant is also available to apply style property values conditionally. Unlike traditional style extension, the apply mechanism allows multiple classes to be inherited from.
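As an illustrative sketch (the class names are hypothetical and the notation is approximate), a style class inheriting all properties of another one and adding its own might look like this:

    style Field {
      text-foreground-color : darkGreen
    }
    style StaticField {
      apply : Field
      font-style : italic
    }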

The unapply property allows style classes to cease the effect of selected inherited properties. For example, a style class for commented-out code will push down styles that make code elements look all gray. Yet, links may need to be rendered in their usual colors so that the user can spot them and potentially click on them.

Potential conflicts between properties specified in parent styles and/or defined explicitly in the inheriting cell are resolved on an order basis: the last specified value overrides all previous values of the same style property.

For example, the ConsoleRoot concept provides a read-only editor with only a single point (the commandHolder cell), where edits are allowed. First the readOnly style class is set on the editor:

and then the readOnly style class is unapplied for the commandHolder cell:

The readOnly style class is defined as follows:

Style priorities

A style class can be declared to take precedence over some other style class or multiple classes.

  1. If a style class does not dominate over anything, it is a low-level style class
  2. If a style class declares dominance but does not specify a style class that it dominates over (the words dominate over are present but no style class is given), it is considered to dominate over all low-level style classes.
  3. The domination relation is transitive, cycles are not allowed.

The domination relation only makes sense for styles with inheritable attributes. When one value of a style property is pushed down from a parent and another value for the same property is specified in the style class applied to the current cell, the resulting behavior depends on the relationship between the two style classes:

  1. If both style classes are low-level, the value pushed down from the parent is ignored and replaced with the value from the style class of the current cell.
  2. If one of the style classes dominates over the other, both values are kept and pushed down, but the values from the dominating style class hide the values from the other style class.
  3. If, however, in some child cell the dominating style class is unapplied (with the special style property unapply), the values from the other style class become the resulting values for this property.

For example, a comment containing the word TODO should be styled more prominently than a plain comment. Thus the language concept representing a comment needs to apply a TODO-aware style (TODO_Style), which declares its dominance over the plain Comment_Style. The actual styling properties are, however, only applied if the comment really contains the TODO text (isToDo()); otherwise the plain Comment_Style properties are used.

Use the "Add Dominance" intention to append the dominates over clause to a style:

Cell actions

Every cell model may have some actions associated with it. Such actions are meant to improve usability of editing. You can specify them in an inspector of any cell model.

Key maps

You may specify a reference to a key map for your cell model. A key map is a root concept - a set of key map items, each consisting of a keystroke and an action to perform. A cell generated from a cell model with a reference to a certain key map will execute the appropriate actions on keystrokes.

In a key map you must specify a concept for which the key map is applicable. For instance, if you want to perform some actions on an expression, you must specify Expression as the applicable concept; then you may attach such a key map only to cell models contained inside editor declarations for descendants of Expression - otherwise it is a type error.

If the key map property "everyModel" is "true," the key map behaves as if it were specified for every cell in the editor. This is useful when you have many descendants of a certain concept with many different editors, and your key map is applicable to their common ancestor. You need not specify such a key map in every editor if you mark it as an "every model" key map.

A key map item consists of the following features:

  • A function which is executed when a key map item is triggered (returns nothing)
  • A set of keystrokes which trigger this key map item
  • A boolean function which determines if a key map item is applicable here (if not specified, then it's always applicable). If a key map item is not applicable the moment it is triggered, then it will not perform an action.
  • A caret policy, which says where in a cell the caret should be located for this key map item to be enabled. It may be either first position, last position, intermediate position, or any position; the default is "any position." If the caret in a cell does not match the caret policy of a key map item the moment it is triggered, the item will not perform its action.
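Put together, a key map with one item might look roughly like this (an illustrative sketch in approximate notation, not exact MPS syntax; the applicable concept, keystroke and action are hypothetical):

    key map applicable to : Expression
      every model : false
      key map item
        keystrokes : <ctrl>+<alt>+<N>
        caret policy : any position
        is applicable : (node)->boolean { node.parent != null; }
        execute : (node, editorContext)->void {
          // e.g. wrap the expression in a hypothetical ParenthesizedExpression
          node.replace with new(ParenthesizedExpression);
        }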

Action maps

A cell model may contain a reference to an action map. An action map overrides some default cell actions for a certain concept. An action map consists of several action map items. In an action map, you must specify a concept for which the action map is applicable.

An action map item contains:

  • an action description which is a string,
  • and a function which performs an action (returns nothing).

An action map item may override one of the default actions (see Actions). For instance, when a return statement has no action maps in its editor and you press Delete on the cell with the keyword "return," the whole statement is deleted. But you may specify an action map containing a delete action map item which, instead of just deleting the return statement, replaces it with an expression statement containing the same expression as the deleted return statement:

action DELETE description : <no description>
    execute : (node, editorContext)->void {
        node<ExpressionStatement> expressionStatement = node.replace with new(ExpressionStatement);
        expressionStatement.expression.set(node.expression);
    }

The SELECT_ALL action, which selects the whole contents of the editor and is triggered by Control/Cmd + A, can also be customised through action maps. The jetbrains.mps.nodeEditor.selection.SelectUpUtil class with executeWhile method can be leveraged to specify an upper selection boundary for this action.

Cell menus

One may specify a custom completion menu for a certain cell. Open the inspector for your cell declaration, find the table named Common, find the row named menu, and create a new cell menu descriptor. A cell menu descriptor consists of menu parts, which come in different kinds, discussed below.

Property values menu part

This menu part is available on property cells; it specifies the list of values which will be shown in completion for your property. One should write a function which returns a value of type list<String>.

Property postfix hints menu part

This menu part is available on property cells; it specifies a list of strings which serve as "good" postfixes for your property value. In such a menu part one should write a function which returns a value of type list<String>. Such a menu is useful if you want MPS to "guess" a good value for a property. For instance, one may decide that a good variable name is the variable's type name with the first letter lowercased, or any name which ends with the type name: for a variable of type "Foo", good names are "foo", "aFoo", "firstFoo", "goodFoo", etc. So in the variable declaration's editor, in the menu for the variable name's property cell, one would write the following menu part:

property postfix hints
   postfixes : (scope, operationContext, node)->list<String> {
                  list<String> result;
                  node<Type> nodeType = node.type;
                  if (nodeType != null) {
                     result = MyUtil.splitByCamels(nodeType.getPresentation());
                  } else {
                     result = new list<String>{ empty };
                  }
                  return result;
               }

where splitByCamels() is a function which returns the list of postfixes of a string that start with a capital letter (for instance, MyFooBar -> MyFooBar, FooBar, Bar).

Primary replace child menu

This cell menu part returns the primary actions for a child (the default ones, as if no cell menu existed).

Primary choose referent menu

This cell menu part returns the primary actions for a referent (the default ones, as if no cell menu existed).

Replace node menu (custom node's concept)

This kind of cell menu part allows you to replace the edited node (i.e. the node on which the completion menu is called) with instances of a certain specified concept and its subconcepts. Such a cell menu part is useful, for example, when you want a particular cell of your node's editor to be responsible for replacement of the whole node. For instance, consider an editor for binary operations. There is a common editor for all binary operations, which consists of a cell for the left operand, a cell for the operation sign (a cell for the concept property "alias") and a cell for the right operand.

[> % leftExpression % ^{{ alias }} % rightExpression % <]

It is natural to create a cell menu for the cell with the operation sign, which allows replacing one operation sign with another (by replacing the whole node, of course). For this purpose, one writes a replace node menu part in the cell for the operation sign:

replace node (custom node concept)
   replace with : BinaryOperation

The former left child and right child are added to the newly created BinaryOperation according to the Node Factories for the BinaryOperation concept.

Replace child menu (custom child's concept)

Such a cell menu part is applicable to a cell for a certain child and specifies a concept whose instances (and instances of whose subconcepts) will be shown in the completion menu; when chosen, an instance is created and set as the child. To specify that concept one writes a function which returns a value of type node<ConceptDeclaration>.

Replace child menu (custom action)

This kind of cell menu part is applicable to a cell for a certain child and allows one to customize not only the child concept but the whole replace-child action: the matching text (the text shown in the completion menu), the description text (a description of the action, shown in the right part of the completion menu), and the function which creates a child node when the action is selected from the completion menu. Hence, to write such a menu one specifies the matching text and the description text and writes a function returning a node (this node should be an instance of the target concept specified in the respective child link).

Generic menu item

This kind of cell menu part allows one to make MPS perform an arbitrary action when the respective menu item is selected in a completion menu. One specifies the matching text for the menu item and writes a function which does whatever is needed. For instance, one may not want to show a child list cell for class fields when no fields exist; hence the list's default actions cannot be used to create a new field. Instead, one can place somewhere in the class' editor a generic menu item with the matching text "add field" which creates a new field for the class:

generic item
   matching text : add field
   handler : (node, model, scope, operationContext)->void {
                node.field.add new(<default>);
             }

Action groups

An action group is a cell menu part which returns a group of custom actions. At runtime, during menu construction, several objects of a certain type, called parameter objects, are collected or created. For the action group's parameter object type, functions returning the matching text and the description text are specified. A function triggered when a menu item with a parameter object is chosen is specified as well.

Thus, an action group description consists of:

  • a parameter object type;
  • a function which returns a list of parameter objects of a specified type (takes an edited node, scope and operation context);
  • a function which takes a parameter object of a specified type and returns matching text (a text which will be shown in a completion menu);
  • a function which takes a parameter object of a specified type and returns description text for a parameter object;
  • a function which performs an action when parameter object is chosen in a completion menu.

A function which performs an action may be of different kinds, so there are three different kinds of cell action group menu parts:

  • Generic action group. Its action function, given a parameter object, performs an arbitrary action. Besides the parameter object, the function is provided with the edited node, its model, scope and operation context.
  • Replace child group. It is applicable to child cells, and its action function, given a parameter object, returns a new child, which must have the type specified in the respective child link declaration. Besides the parameter object, the function is provided with the edited node, its model, the current child (i.e. the child being replaced), scope and operation context.
  • Replace node group. Its action function, given a parameter object, returns a node - usually some referent of the edited node (i.e. the node on which the completion menu is called). Besides the parameter object, the function is provided with the edited node, its model, scope and operation context.

Cell menu components

When some menu parts in different cells are identical, one may want to extract them into a single separate entity to avoid duplication. Cell menu components serve this purpose. A cell menu component consists of a cell menu descriptor (a container for cell menu parts) and a specification of an applicable feature. The specification of the applicable feature contains a reference to a feature (i.e. a child link declaration, reference link declaration or property declaration) to which the menu is applicable. For instance, if your menu component is used to replace some child, its child link declaration should be specified here, etc.

Once a cell menu component has been created, it can be used in cell menus via the cell menu component menu part - a cell menu part that contains a reference to the menu component.

Customizing reference presentation

Specification of the matching text and in-editor textual presentation for references can be done directly in the editor aspect.

The ref. presentation cell can have the displayed text customized:

The cell menu can customise the text displayed in the completion menu:

This functionality was previously achieved through Constraints.

Migration of presentation query in reference constraints

The design of the reference presentation part in the constraints aspect has been showing its age, so it has been replaced with the new functionality described above. Most of the code will be migrated automatically. Some of the code produced by the migration can be simplified, so consider reviewing it.

There is one case in which a presentation query cannot be migrated automatically: suppose you have an editor for a concept with a reference link, and a reference constraint with a presentation part defined for that reference in one of the concept's subconcepts. If the editor component is not overridden in the subconcept, MPS does not know where the presentation part should be inlined. In this case you should migrate the presentation part usage manually to prevent incorrect reference presentation in user code. There are several ways to do it:

  • Simply override the editor in the subconcept and move the code from the presentation part to the proper reference cell.
  • Extract the reference cell into a separate component and override the component for the subconcept.
  • Create a new behavior method that provides the presentation for the reference, make the reference cell delegate to this method, and override the method in the subconcept.

If you expect that your language may be extended in another project by someone else, do not remove the deprecated presentation parts. Otherwise, the extending languages may be migrated improperly.

Two-step deletion

In a projectional editor it is sometimes hard to predict which part of the code will be deleted when you press Delete or Backspace. For example, when the caret is on the semicolon of a baseLanguage statement and you press Backspace, the whole statement is deleted. With two-step deletion you can see in advance which part of the code will be deleted.
Here is how it works: you press Delete or Backspace and the part of the code that is about to be deleted becomes highlighted. If that suits you, you press Delete or Backspace again and the code is deleted. If, after the highlighting, you realize that you don't want to delete this piece of code, you can press Escape or simply move the caret and the highlighting will disappear.

Let's walk through an example:
Put the caret on the statement's semicolon.

Press Backspace. The whole statement is highlighted. This means that if you press Backspace again, the statement will be deleted.

Press Backspace again. The statement is deleted.

The same works by default for other nodes.


Note that if the node is selected, it will be removed immediately, without highlighting. Also, if the caret is in an editable text cell, text is removed immediately as well.

To turn on two-step deletion, check the "two step deletion" checkbox in Preferences > Editor > General.

Invoking two-step deletion from code

Language designers may include the two-step deletion scenario in their custom delete actions. The ApproveDelete_Operation in jetbrains.mps.lang.editor was introduced for that purpose. The operation is applied to a node:

This operation returns true if and only if it succeeds, i.e. the node had not been approved for deletion before. More formally, all of the following conditions need to be met:

1) The two-step deletion preferences option is checked.

2) The node has not been fully selected.

3) The node has not been approved for deletion already.

When all of these conditions are met, the node approved for deletion gets highlighted and the custom delete action may stop at this point.

If the same custom delete action is called again immediately after the deletion has been approved, the approveDelete operation returns false (because the node has already been approved) and the action proceeds with the deletion.

Let's see the typical scenario from the baseLanguage:

This is part of the delete action for the Dot_Expression's operation. The action first tries to approve the operation for deletion, and if that succeeds, it stops. If it does not succeed, it means that either the node's operation has already been approved (= highlighted), or the node has been selected by the user, or the "two step deletion" preferences option is turned off. In this case we delete the operation and replace it with a node of the abstract concept.
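In pseudocode, the described scenario might look as follows (a sketch only; the action map syntax and the IOperation placeholder concept are assumptions based on the description above, not the sample's actual code):

```
// hypothetical delete action for Dot_Expression.operation
cell action DELETE:
  execute: (node, editorContext) -> {
    // first try to approve the deletion; on success the operation
    // is merely highlighted and we stop here
    if (node.operation.approveDelete()) return;
    // already approved, fully selected, or two-step deletion is off:
    // actually delete, leaving a placeholder of the abstract concept
    node.operation.replace(new node<IOperation>());
  }
```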

More complex cases

Sometimes the customized delete action needs to be more complicated than just deleting the current node.

Let's consider an example scenario: we press Delete on the "final" keyword of an IncompleteMemberDeclaration. There is a custom action which sets the final property to false. The editor contains a cell that is shown only if the final property of the node is true, so after the action the cell will no longer be shown.

If we want to highlight the "final" keyword before hiding it (by setting the final property to false), we approve it for deletion this way:


Diagramming editor

The diagramming support in MPS allows language designers to provide graphical editors for their concepts. Diagrams typically consist of blocks, represented by boxes, and connectors, represented by lines connecting the boxes. Both blocks and connectors are visualizations of nodes from the underlying model.

Ports (optional) are predefined places on the shapes of the blocks to which connectors may be attached. MPS supports two types of ports - input and output ones.

Optionally, a palette of available blocks may be displayed on the side of the diagram, so that users can quickly pick the type of box they need to add to the diagram.

Adding elements

Blocks get added by double-clicking in a free area of the editor. The type of the block is chosen either by activating the particular block type in the palette or by choosing from a pop-up completion menu that shows up after clicking in the free area.

Connectors get created by dragging from an output port of a block to an input port of another or the same block.


MPS comes with bundled samples of diagramming editors. You can try the componentDependencies or the mindMaps sample projects for initial familiarization with how diagrams can be created.


This document uses the componentDependencies sample for most of the code examples. The sample defines a simple language for expressing dependencies among components in a system (a component set). Use the "Push Editor Hints" option in the pop-up menu to activate the diagramming editor.


In order to be able to define diagramming editors in your language, the language has to have the required dependencies and used languages properly set:

  • jetbrains.mps.lang.editor.diagram - the language for defining diagrams
  • jetbrains.mps.lang.editor.figures (optional) - a language for defining custom visual elements (blocks and connectors)
  • jetbrains.jetpad and jetbrains.mps.lang.editor.diagram.runtime - runtime libraries that handle the diagram rendering and behavior

Diagram definition

Let's start from the concept that should be the root of the diagram. The diagramming editor for that node will contain the diagram editor cell:


Note that the diagram editor cell does not have to be the root of the editor definition. Just like any other editor cell it can be composed with other editor cells into a larger editor definition.

The diagram cell needs its content parameter to hold all the nodes that should become part of the diagram. In our case we pass in all the components (which will be rendered as blocks) and their dependencies (which will be rendered as connectors). The way these nodes are rendered is defined by their respective editor definitions, as explained later.

Down in the Inspector element creation handlers can be defined. These get invoked whenever a new visual block is to be created in the diagram. Each handler has several properties to set:

  • name - an arbitrary name to represent the option of creating a new element in the completion menu and in the palette
  • container - a collection of nodes that the newly created node should be added to
  • concept - the concept of the node that gets created through the handler, defaults to the type of the nodes in the container, but allows sub-types to be specified instead
  • on create - a handler that can manipulate the node before it gets added to the model and rendered in the diagram. Typically the name is set to some meaningful value and the position of the block on the screen is saved into the model.

There can be multiple element creation handlers defined.
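An element creation handler for a component block might be sketched like this (pseudocode; the property and collection names are invented for illustration and are not the sample's actual code):

```
// hypothetical element creation handler
element creation:
  name      : Component
  container : node.components            // collection the new node is added to
  concept   : Component                  // or a sub-concept of the container's element type
  on create : (newNode, x, y) -> {
    newNode.name = "component";
    newNode.x = x;                       // persist the drop position in the model
    newNode.y = y;
  }
```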

Similarly, connector creation handlers can be defined for the diagram cell to handle connector creation. On top of the attributes already described for element creation handlers, connector creation handlers have these specific attributes:

  • can create - a concept function returning a boolean value, indicating whether a connector with the specified properties can legally be constructed and added to the diagram.
  • on create - a concept function that handles the creation of a new connector.
  • the from and to parameters of these functions specify the source and target nodes (represented by a Block or a Port) for the new connection.
  • the fromId and toId parameters of these functions specify the ids of the source and target nodes (represented by a Block or a Port) for the new connection.
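A matching connector creation handler might be sketched as follows (pseudocode; the names are assumptions for illustration):

```
// hypothetical connector creation handler
connector creation:
  container  : node.dependencies
  can create : (from, to) -> from != to        // e.g. disallow self-dependencies
  on create  : (newConnector, from, to) -> {
    newConnector.source = from;
    newConnector.target = to;
  }
```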

Elements get created when the user double-clicks in the editor. If multiple element types are available, a completion pop-up menu shows up.

Connectors get created when the user drags from the source block or its output port to a target block or its input port.


The optional palette lets users pick the type of blocks and connectors to create when double-clicking or dragging in the diagram. The palette is defined for diagram editor cells and, apart from specifying the creation components, allows for visual grouping and separation of the palette items.


The concepts for the nodes that want to participate in diagramming as blocks need to provide properties that will preserve useful diagramming qualities, such as x/y coordinates, size, color, title, etc.

Additionally, the nodes should provide input and output ports, which connectors can visually connect to.

The editor will then use the diagram node cell:

The diagram node cell requires a figure to be specified. This is a reference to a figure class that defines the visual layout of the block using the jetpad framework. MPS comes with a set of pre-defined graphical shapes in the jetbrains.mps.lang.editor.figures.library solution, which you can import and use. Each figure may expose several property fields that hold visual characteristics of the figure. All the figure parameters should be specified in the editor definition, most likely by mapping them to the node's properties defined in the concept:

The values for parameters may either be references to the node's properties, or BaseLanguage expressions prepended with the # character. You can use this to refer to the edited node from within the expression.

If the node defines input and output ports, they should also be specified as parameters here so that they get displayed in the diagram. Again, to specify ports you can either refer to the node's properties or use a BaseLanguage expression prepended with the # character.


As all editor cells, diagramming cells can have Action Maps associated with them. This way you can enable the Delete key to delete a block or a connector.

Custom figures

Alternatively, you can define your own figures. These are BaseLanguage classes implementing the jetbrains.jetpad.projectional.view.View interface (or its descendants) and annotated with the @Figure annotation. Use the @FigureParameter annotation to demarcate property fields, such as width, height, etc.
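A custom figure class might be outlined roughly as follows (a sketch only: the CircleFigure class and its fields are hypothetical; only the @Figure and @FigureParameter annotations and the View interface come from the description above):

```
// hypothetical custom figure; drawing logic omitted
@Figure
public class CircleFigure implements jetbrains.jetpad.projectional.view.View {
  @FigureParameter
  public int width;      // exposed as a parameter in the diagram node cell
  @FigureParameter
  public int height;
  // ... rendering built with the jetpad view framework
}
```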

The MovableContentView interface provides additional parameters to the figure class:

By studying jetbrains.mps.lang.editor.figures.library you may get a better understanding of the jetpad library and its inner workings.


The nodes that will be represented by connectors do not need to preserve any diagramming properties. As of version 3.1, connectors cannot be visually customized and will always be rendered as solid black lines. This will most likely change in a future version of MPS.

The editor for the node needs to contain a diagram connector cell:

The cell requires a source and a target for the connector. These can either be ports:

or nodes themselves:

The values may again be direct references to node's properties or BaseLanguage expressions prepended with the # character.

Rendering ports

Input and output ports should use the input port and output port editor cells, respectively. The rendering of ports cannot be customized in MPS 3.1, but this will most likely be enabled in later versions.


Use the T key to rotate the ports of a selected block by 90 degrees. This way you can easily switch between the left-to-right and top-to-bottom port positions.

Using implicit ports

In some situations you will not be able to represent ports directly in the model. You'll only want to use blocks and connectors, but ports will have to be somehow derived from the model. This case can easily be supported:

  1. Decide on the representation of ports. Each port will be represented by a unique identifier, such as a number or a string
  2. Have the concept for the blocks define behavior methods that return collections of identifiers - separately for input and output ports
  3. Use the methods to provide the inputPorts and outputPorts parameters to the DiagramNode editor cell
  4. In the connector editor cell refer to the block's node as source and target. Append the requested id after the # symbol
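The four steps above might be sketched as follows (pseudocode; the concept, method and property names are invented for illustration):

```
// steps 1+2: behavior methods on the block concept returning port identifiers
concept Component behavior:
  sequence<string> inputPortIds()  { this.incoming.select(d -> d.id); }
  sequence<string> outputPortIds() { this.outgoing.select(d -> d.id); }

// step 3: the diagram node cell uses the methods as port parameters
//   inputPorts  = # node.inputPortIds()
//   outputPorts = # node.outputPortIds()

// step 4: the connector cell refers to the block node and appends
// the requested port id after the # symbol
//   source = # node.sourceComponent # node.sourcePortId
```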


Transformation Menu Language


The transformation menu language is used to define transformation menus that describe a hierarchical structure of submenus and actions appearing in various locations in the editor. Currently there are several possible locations where transformation menus are shown: side transform menus, substitute menus, the context assistant, and the context actions tool. Language designers and plugin authors can define additional locations and specify required or optional features for each location (such as an icon or a tooltip), as documented in Extending the Transformation Menu Language.

The Transformation Menu Language provides a way to describe side-transforms and substitute actions in MPS. The core capabilities of the new approach include:
  • the ability to explicitly specify side-transform / substitute menu content for a particular cell in the editor
  • no need for the "remove defaults" instruction, which often failed to work reliably in the past
  • easy mixing of substitute actions from the substitute DSL with arbitrary (low-level) UI actions
  • the same action can be visible in different places (parts) of the editor
  • supporting different action sets for alternative presentations (editors/projections)
  • actions are now defined in the editor aspect of the language

Defining a Menu

Transformation menus define UI actions that will be shown in various locations. At design time a menu is specified as a list of sections, each section contains a list of menu parts for a particular set of locations. At runtime the menu parts and the locations are used to generate the contents of the menus (menu items).

Menu definitions come in two flavors: default and named. Menu definitions can also be extended through menu contributions.

Default Menu

Each concept has a default transformation menu associated with it. If the language designer does not provide one explicitly, the transformation menu defined for the closest super-concept is used. If none is specified for any of the super-concepts, the one defined on BaseConcept is used, which contains the substitute actions suitable for that position (see the section on substitute actions below).

A default menu is used in situations where the language designer hasn't specified which menu to display.

Named Menu

A named menu is an additional menu for a concept. Like the default menu it also specifies an applicable concept and contains a list of sections. As the term suggests, a named menu has an explicitly set name. A named menu is meant to be set as the transformation menu of a cell or included into another menu via the Include Menu menu part.

Attaching a named menu to an editor cell:

Note: Default transformation menus can also be attached to individual cells the same way named menus can.

Menu Contributions

A menu contribution extends a given menu by contributing additional menu parts to it. This is particularly useful when an extending language needs to add entries to a menu defined in the extended language. Contributions can only be defined in languages other than the one containing the menu being contributed to.
When a menu is requested at runtime the original definition and all contributions are merged and the menu is created using the combined definition. A few important notes on contributions:
  1. The order in which the individual definitions are merged is currently unspecified.
  2. A contribution cannot remove menu parts from the menu it contributes to.
  3. It is possible to define a contribution to the implicit default menu for a concept.

Section locations

By specifying a location for a section within the menu you indicate which part of the UI the actions should be inserted into:

  • completion - the completion menu
  • context actions tool - the Context Actions Tool (requires import of jetbrains.mps.editor.contextActionsTool.lang.menus language)
  • context assistant - the Context Assistant
  • side transform - left or right transformations

Menu Parts

The following standard menu parts are available:

  • action – a simple menu item specifying an action to be performed, its corresponding menu text and applicability.

  • group - a collection of menu items.
  • include - include a specific default or named menu (together with its contributions, if any). Inclusion cycles are detected at runtime and an error message is produced.
  • include substitute menu - include a default or named substitute menu to use as part of this menu.
  • parametrized - an action that is parametrized with multiple values.
  • submenu – a submenu containing further parts.
  • superconcepts menu – includes the default menus of the superconcepts of the applicable concept since these are not included by default.
  • wrap substitute menu - wraps a specified concept using the provided handler

Language jetbrains.mps.lang.editor.menus.extras contains adapters to include various action-like entities from transformation menus:

  • intention – wraps an intention (a subconcept of BaseIntentionDeclaration from jetbrains.mps.lang.intentions).
  • refactoring – wraps a refactoring (Refactoring from jetbrains.mps.lang.refactoring).
  • plugin Action – wraps a plugin action (ActionDeclaration from jetbrains.mps.lang.plugin).


You may also like to check out a video on Transformation Menu Language.

Side transformations

When you edit code in a text editor, you can type it either from left to right:

1 <caret> press +
1+<caret> press 2

or from right to left

<caret>1 press 2
2<caret>1 press +

In order to emulate this behavior, MPS has side transform actions: left and right transforms. They allow you to create actions that become available when you type at the left or right side of a cell. For example, in MPS you can do the following:

1<caret> press + (a red cell with + inside appears)
1+<caret> press 2 (the red cell disappears)

or the following:

<caret>1 press + (a red cell with + inside appears)
+<caret>1 press 2 (the red cell disappears)

The first case is called right transform. The second case is called left transform.

You define side transformations in the Transformation menus by choosing the side transform location for the section:

The language also enables language designers to include the items of a side-transform menu in the completion menu of a specific cell. To do this, attach a transformation menu that contains a completion section to the desired cell. That completion section should hold an include menu part specifying the side transform location. The items of the side-transform menu will then be included in the completion menu.

The location of the included menu can be specified using the intention "Specify Location".

Substitute menus

Substitute actions define user-invoked transformations of some part of the model, during which one node is substituted with another node. The actual mapping of these substitute actions to visual UI elements (completion menu, etc.) is then done through transformation menus (see the section above).

Typically substitute actions are triggered by pressing Ctrl + Space in the editor. The completion menu that shows up contains options that, when selected by the user, will replace the node under the caret. Unlike side transformations, the context assistant, or the context actions tool, substitutions have default behavior that takes effect unless the language author defines otherwise.

Default behavior for code-completion

Without any menus implemented explicitly by the language author, MPS will still provide a completion menu with substitutions for the current node in either of these two cases:

  • The cursor is positioned at the front of a single-cell editor  
  • The user has selected the whole editor of a node

In these cases pressing Control + Space will show a menu with all concepts from the imported languages applicable in the given context, which can substitute the current node in the model.

MPS follows these steps to populate the default completion menu:

  1. If your selection is inside a position which allows concept A, then all enabled subconcepts of A will be available in the completion menu.
  2. All abstract concepts are excluded.
  3. All concepts for which the 'can be a child' constraint returns false are excluded.
  4. All concepts for which the 'can be a parent' constraint of the parent node returns false are excluded.
  5. If a concept contains a 1:1 reference, it is not added to the completion menu itself. Instead, an item is added for each element in scope for that reference. We use the name smart reference for such items.
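The filtering steps above can be illustrated with a small Python model (a toy simulation of the rules, not MPS's implementation; all concept names and predicates below are made up):

```python
# Toy model of the default completion menu population:
# start from all subconcepts of the expected concept, then drop abstract
# concepts and those failing the 'can be child' / 'can be parent' constraints.

def default_completion(expected, subconcepts, is_abstract, can_be_child, can_be_parent):
    items = []
    for c in subconcepts[expected]:      # step 1: enabled subconcepts
        if is_abstract(c):               # step 2: skip abstract concepts
            continue
        if not can_be_child(c):          # step 3: 'can be a child' constraint
            continue
        if not can_be_parent(c):         # step 4: parent's 'can be a parent' constraint
            continue
        items.append(c)
    return items

subconcepts = {"Expression": ["Expression", "PlusExpression", "BinaryOperation", "Literal"]}
result = default_completion(
    "Expression", subconcepts,
    is_abstract=lambda c: c in {"Expression", "BinaryOperation"},
    can_be_child=lambda c: True,
    can_be_parent=lambda c: True,
)
print(result)  # ['PlusExpression', 'Literal']
```

Step 5 (smart references) would then replace any 1:1-reference concept in the result with one item per reference target in scope.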

To customize node substitutions, the substitute menus are used.

Substitute menu (default)

By defining a default substitute menu for a concept you may customize the contents of the completion menu, which is displayed when the user presses Control + Space. The menu also takes effect for sub-concepts of the concept, unless these sub-concepts define their own default substitute menus.


If you define a default substitute menu for a concept and leave it empty, the concept will not be included in the completion menu. This is the equivalent of using the IDontSubstituteByDefault interface in the previous versions of MPS.

However, if you want to assign the substitute menu to a particular cell of an editor, you will need to include your substitute menu in a transformation menu, because only transformation menus can be attached to editor cells.

Substitute menu (named)

Named substitute menus give you the flexibility to create multiple substitute menus and use them in different contexts. A named substitute menu must first be included in another substitute menu or a transformation menu to take effect.

Substitute menu contribution

Just like transformation menu contributions, substitute menu contributions add new entries to substitute menus defined in an extended language.


  • add concept - adds a single concept to the menu
  • concept list - adds a collection of concepts
  • group - adds a group of entries, if a condition is met
  • include - includes a specified menu
  • parameterized - adds a parametrized substitute action
  • reference actions -  includes and customises the appearance of the possible targets of a reference
  • subconcepts menu - includes all subconcepts of the concept
  • substitute action - adds a single substitute action
  • wrap substitute menu - wraps a specified concept using the provided handler

Interaction of Cell Menu and Cell-specific Transformation Menu

If a cell has both "menu" and "transformation menu" specified the applicable entries from both menus are combined. Some cell menu parts (descendants of CellMenuPart_Abstract such as CellMenuPart_PropertyPostfixHints) do not yet have an equivalent transformation/substitution menu part.

The menu discovery algorithm

Understanding the process of how MPS picks the transformation menu will help you design menus with more confidence.

The built-in behavior for discovering transformation menus is to include the menu(s) of the superconcepts of the current concept. By default MPS will look for a transformation menu for the current concept, named <CurrentConcept>_TransformationMenu, then for its super-concept's menu, and so on, up to BaseConcept_TransformationMenu.


BaseConcept_TransformationMenu really exists in MPS and contains an instruction to include the appropriate concept's substitute menu for the current link. Thanks to this a substitute menu that you define for a concept gets included and becomes available without any need to define a transformation menu explicitly.


Substitute menus are similar to transformation menus, but their discovery works in the opposite direction: whereas the transformation menu for a concept A includes the menus of its superconcepts (up to BaseConcept), i.e. walks up the hierarchy, the substitute menu for A includes the menus of its subconcepts, because only the sub-concepts can safely replace A in the model. Note also that substitute menus are looked up not by the concept of an existing node, but by the target concept of the containing link.
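The two opposite lookup directions can be illustrated with a toy Python model (not MPS internals; the concept hierarchy below is invented):

```python
# Transformation menus walk *up* the concept hierarchy;
# substitute menus collect concepts *down* the hierarchy.

hierarchy = {                      # child -> parent
    "PlusExpression": "BinaryOperation",
    "BinaryOperation": "Expression",
    "Expression": "BaseConcept",
}

def transformation_lookup(concept):
    """Chain of menus consulted for a node of the given concept."""
    chain = [concept]
    while concept in hierarchy:
        concept = hierarchy[concept]
        chain.append(concept)
    return chain                   # ends at BaseConcept

def _descends(c, ancestor):
    while c in hierarchy:
        c = hierarchy[c]
        if c == ancestor:
            return True
    return False

def substitute_lookup(target_concept):
    """Concepts that can safely replace the link's target: its subconcepts."""
    return [c for c in hierarchy if _descends(c, target_concept)] + [target_concept]

print(transformation_lookup("PlusExpression"))
# ['PlusExpression', 'BinaryOperation', 'Expression', 'BaseConcept']
print(substitute_lookup("Expression"))
# ['PlusExpression', 'BinaryOperation', 'Expression']
```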

Show Item Trace

To track which transformation or substitute menu contributed a particular action to the completion menu or the context assistant, press Control/Cmd + Alt + B on the completion menu entry and an interactive trace report will show up.

Sometimes it is hard to see how an action got into the completion menu or the context assistant, because many substitute and transformation menus include one another. Select an action in the completion menu (using the arrow keys) or in the context assistant (by pressing Ctrl/Cmd + Alt + Enter) and then press Ctrl/Cmd + Alt + B. The trace is shown in a project tool window. It lists the chain of menu and menu part declarations that include one another, starting from the top-level menu and ending with the action declaration. If a menu or menu part declaration is explicit and belongs to the project, it is shown in bold in the tool window and you can click it to navigate to the declaration.

Here is how it looks when we put the caret on a statement, invoke completion for the variable reference and then invoke "Show Item Trace":
What we see in the completion menu comes from the default transformation menu for Statement, which includes the menu of its superconcept, BaseConcept. That menu in turn includes the substitute menu for Statement, which in its turn wraps the menu for Expression. The trace then descends to the subconcepts of Expression, one of which is VariableReference. VariableReference is a "smart reference" concept, so it looks for all visible targets of the Variable concept. That is how the variable reference appears in the menu for the statement.

Context assistant


MPS provides several mechanisms for performing an action in a given context: completion, intentions, refactorings, and various other popup menus. What these mechanisms have in common is that they are not immediately visible to new, inexperienced users. They also usually offer many possible choices and reveal the entire available functionality, which helps advanced users but may overwhelm beginners.

To better guide the new users through the process of creating a script in your DSL, MPS 3.4 introduces a new UI mechanism, the context assistant. A context assistant shows a dynamically constructed menu with actions that are the most appropriate for a given context. The language author specifies where the menu should be shown by putting placeholders in the editor definition. The placeholders reserve screen space for the menu in advance so that the edited content does not shift around as the menu is being shown and hidden.

As an example consider the RobotKaja sample language which is bundled with MPS. The initial editor for a new script looks as follows (without a context assistant):

With a context assistant this initial UI might look like this:

and the menu now suggests several possible next steps to the user.


We've also shot a short video illustrating use and definition of Context Assistant. You might also like to check out a screen-cast on how context assistant is being utilized for the language definition languages.


Using Context Assistant UI

  • Jump to the context assistant by pressing Ctrl+Alt+Enter (Cmd+Option+Enter on Mac OS X).
  • Navigate through the menu by using arrow keys.
  • Invoke the selected menu item using Space or Enter.
  • Press Escape to jump back to the editor.

Using Context Assistant Framework

To add context assistant to a language, you as the language author have to do two things:

  1. Place context assistant placeholders (a special kind of cell) at appropriate spots in the editor. The context assistant menus will be shown in these placeholders.
  2. Define the menu hierarchy using the Transformation Menu Language (specifying location context assistant).

The MPS editor runtime will take care of building the appropriate menu at the appropriate point in time and showing it in the appropriate context assistant placeholder.

Placeholder Cells

Placeholder cells are added by choosing "context assistant menu placeholder" from the substitution menu when adding a new cell:

The placeholder cell reserves a certain amount of vertical screen space, about the size of one empty line, so that a menu can be shown in its place without shifting surrounding cells around. However, it doesn't reserve any horizontal space. It is therefore best to put the placeholder on a separate line or at the end of a short (or empty) line of text. For example, in the RobotKaja sample the placeholder is added after an empty line in the editor for the EmptyLine concept:

Menu Lookup

The menu to display is looked up by traversing the cell hierarchy from the currently selected cell to the top. You may specify the menu to show explicitly for a given cell (by setting the transformation menu property in the Inspector). In this case that menu is used. Otherwise, if the cell is a big cell (a cell that has no parent or whose parent is associated with a different node), an attempt is made to look up the menu based on the cell's node. The node's concept inheritance hierarchy is traversed in breadth-first order. If a non-empty menu is defined for the node's concept or one of its super-concepts and super-interfaces, this menu is used.

For example, consider a BaseLanguage PlusExpression, which extends BinaryOperation, which in turn extends Expression and implements IBinaryLike. If during the traversal we reach the big cell of a PlusExpression, then the menus of PlusExpression, BinaryOperation, Expression, IBinaryLike, and finally BaseConcept are checked, in that order, and the first non-empty menu definition is used. If all menu definitions are empty, the search continues from the parent cell of the big cell (if any).

Note that a non-empty menu definition, although chosen, may still produce an empty menu. This may happen if none of its menu parts produce any items (for example if no defined actions are currently applicable).
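The breadth-first traversal of the concept hierarchy described above can be sketched in plain Java. Note that `Concept`, `lookupOrder` and `demoOrder` are hypothetical stand-ins for illustration, not MPS API; the concept names are the BaseLanguage ones from the example:

```java
import java.util.*;

// Hypothetical stand-in for a concept with its super-concept and super-interfaces.
final class Concept {
    final String name;
    final List<Concept> parents; // super-concept first, then super-interfaces
    Concept(String name, Concept... parents) {
        this.name = name;
        this.parents = List.of(parents);
    }
}

public class MenuLookup {
    // Returns concept names in the breadth-first order in which their
    // menu definitions would be checked; the first non-empty menu wins.
    public static List<String> lookupOrder(Concept start) {
        List<String> order = new ArrayList<>();
        Deque<Concept> queue = new ArrayDeque<>();
        Set<Concept> seen = new HashSet<>();
        queue.add(start);
        seen.add(start);
        while (!queue.isEmpty()) {
            Concept c = queue.poll();
            order.add(c.name);
            for (Concept p : c.parents) {
                if (seen.add(p)) queue.add(p);
            }
        }
        return order;
    }

    // The PlusExpression hierarchy from the example above.
    public static List<String> demoOrder() {
        Concept base = new Concept("BaseConcept");
        Concept iBinaryLike = new Concept("IBinaryLike", base);
        Concept expression = new Concept("Expression", base);
        Concept binaryOperation = new Concept("BinaryOperation", expression, iBinaryLike);
        Concept plus = new Concept("PlusExpression", binaryOperation);
        return lookupOrder(plus);
    }

    public static void main(String[] args) {
        System.out.println(demoOrder());
    }
}
```

Running the sketch yields the order listed in the text: PlusExpression, BinaryOperation, Expression, IBinaryLike, BaseConcept.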

Placeholder Lookup

The place where a menu should be displayed is looked up by traversing the cell hierarchy from the currently selected cell to the root until a collection cell is reached that contains a context assistant placeholder cell, either directly or indirectly (but only belonging to the same node as the collection). The first cell found during this search is chosen and the menu is displayed in this placeholder cell.

Context actions tool

The Context Actions Tool accommodates the preference of some DSL users for mouse navigation. It lists the actions applicable in the given context in a sidebar, potentially organized hierarchically.


A short video on the use and definition of the Context Actions Tool for the sample robot Kaja language is available.


By selecting an option in the tool window the user triggers the associated action, which typically modifies the model in the place of the current focus.

The content of the sidebar is specified by the language designer through the new Transformation Menu Language.

The language supports modularization of the menus and their easy reuse, so combining menus of super- or sub-concepts should be quite straightforward. Once you import the jetbrains.mps.editor.contextActionsTool.lang.menus language you'll be able to specify the context action tool location for your actions, groups and other relevant elements. The language also gives you the possibility to specify an icon and a tooltip for the given entries.


Since the tool window offers enough vertical space and since entries can be grouped into collapsible submenus, you may also consider including existing "substitute" menus into the tool window with the submenu action.


For full details on the actual language syntax see the Transformation Menu Language page.

Editor actions

The MPS editor has quite sensible defaults for completion actions and node-creation policies. When you want to customize them, however, you have to work with the actions language.


The Side-transformation actions as well as Node substitute actions have been deprecated in MPS 3.4 and replaced by the new Transformation Menu Language.

Node Factories

When a node is replaced with another one, it may be useful to parameterize the process of creating the replacing node with values held by the node being replaced, or perhaps also to reflect the future position of the replacing node in the model. Node Factories give you exactly that. You write a set of handlers that get invoked whenever a new node needs to be created in a substitution action or through one of the new initialized node<>, set new initialized node<>, add new initialized node<>, replace with new initialized node<> and new initialized instance<> methods.

In brief, Node Factories allow you to customize the instantiation of new nodes. In order to create a node factory, you first have to create a new Node Factories root node. Inside this root you can create node factories for individual concepts. Each node factory consists of a node creation block with the following parameters: newNode (the created node), sampleNode (the node currently being substituted; can be null), enclosing node (the node that will become the parent of newNode in the model), and the model. The node factory handler is invoked before the new node gets inserted into the model.
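As a plain-Java analogue of what a node creation block does (the `Node` class and the `setup` handler shape are hypothetical illustrations, not the MPS API), a factory might initialize the new node from the node being substituted:

```java
import java.util.*;

// Hypothetical minimal stand-in for an AST node with string properties.
class Node {
    final String concept;
    final Map<String, String> properties = new HashMap<>();
    Node(String concept) { this.concept = concept; }
}

public class NodeFactoryDemo {
    // Analogue of a node creation block: invoked before newNode is inserted
    // into the model. sampleNode (the node being substituted) may be null.
    static void setup(Node newNode, Node sampleNode, Node enclosingNode) {
        if (sampleNode != null && sampleNode.properties.containsKey("name")) {
            // carry the name over from the node being replaced
            newNode.properties.put("name", sampleNode.properties.get("name"));
        } else {
            newNode.properties.put("name", "unnamed");
        }
    }

    public static void main(String[] args) {
        Node old = new Node("LocalVariableDeclaration");
        old.properties.put("name", "counter");
        Node fresh = new Node("FieldDeclaration");
        setup(fresh, old, null);
        System.out.println(fresh.properties.get("name"));
    }
}
```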

You can leverage the concept inheritance hierarchy in Node Factories to reduce repetition.


To leverage node factories when creating nodes from code, use the "initialized" variants of the "replace with ..." smodel language constructs. See SModel language Modification operations for details.

Paste wrappers

These allow you to customize pasting of nodes into other contexts. For example, if you copy a LocalVariableDeclaration in BaseLanguage and paste it into a ClassConcept to make it a field of the class, a simple transformation must be triggered that will create a new FieldDeclaration out of the LocalVariableDeclaration.

Copy-paste handlers

These give you the possibility to customize the part of the models that is being copied to or pasted from the clipboard.

The copy parameter in a copy pre-processor block contains an exact deep copy of the original parameter node. Unlike original, copy is detached from the model and so has no parent node.

The typical task of a paste post-processor is to re-resolve references so that they point to declarations valid in the new context.



Editor language generation API

The editor language is meant to be extended by numerous MPS users, so we designed the generator for the Editor language for ease of use: straightforward templates, human-readable generated code, and use of meta-information at generation time rather than at run time. If any of your languages extends the editor language in order to provide new cell types, this document is for you.

API: EditorCell contract

The contract of EditorCell.setBig()/.getBig() methods was slightly changed. Please check the javadoc for details.

API: EditorCellFactory is now available only within UpdateSession

We made the EditorCellFactory instance controlled by the current UpdateSession. At the same time, we put some caches inside the EditorCellFactory implementation, making the editor building process faster in some situations. The EditorContext.getCellFactory() method was deprecated and will be removed in the next release.

Language Runtime: AbstractEditorBuilder

The AbstractEditorBuilder runtime class was introduced and should be used as a common super-class of any class containing cell factory methods. This class implements common utility methods and provides access to generic contextual parameters of the editor cell creation process, such as:

  • editorContext
  • node
  • CellFactory
  • UpdateSession

AbstractEditorBuilder is used to capture the context of the cell creation process and to execute subsequent cell factory methods within this context.

Generator: EditorBuilder classes

A separate sub-class of AbstractEditorBuilder will now be generated as a root class for each of the available editor declaration hierarchies:

  • ConceptEditorDeclaration.cellModel
  • ConceptEditorDeclaration.inspectedCellModel
  • EditorComponentDeclaration
  • InlineEditorComponent

The MPS editor generator will continue creating classes implementing ConceptEditor & ConceptEditorComponent. These classes were used earlier as containers for cell factory methods. In the new version of MPS they are used as descriptors that provide access to the contextual hints information & instantiate the actual EditorBuilders. Descriptor classes may be cached by the EditorCellFactory implementation.

Contextual parameters available for cell builders

The code generated as a part of AbstractEditorBuilder sub-classes may access contextual parameters by using the existing methods of the AbstractEditorBuilder class. In addition, all available meta-information is used to generate private fields with more specific types than those available in the method signatures of AbstractEditorBuilder. For now, each sub-class of AbstractEditorBuilder will hold a private node<TheConcept> myNode field, where TheConcept is the actual concept associated with this AbstractEditorBuilder. This means that any cell factory method may use this private field to get typed access to the contextual node and directly access its properties, links and other information using the smodel language.
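The typed-field pattern described above can be sketched in plain Java. All class names here (`SNode`, `ClassConceptNode`, `EditorBuilderBase`, `ClassConceptEditorBuilder`) are hypothetical stand-ins illustrating the idea, not the actual MPS runtime or generated classes:

```java
// Generic node, as seen by the base builder class.
class SNode {
    private final String name;
    SNode(String name) { this.name = name; }
    String getName() { return name; }
}

// Analogue of node<TheConcept>: a concept-specific node type.
class ClassConceptNode extends SNode {
    ClassConceptNode(String name) { super(name); }
    String getClassName() { return getName(); } // typed, concept-specific access
}

// Analogue of AbstractEditorBuilder: only exposes the generic node type.
abstract class EditorBuilderBase {
    private final SNode myGenericNode;
    EditorBuilderBase(SNode node) { this.myGenericNode = node; }
    SNode getNode() { return myGenericNode; }
}

public class ClassConceptEditorBuilder extends EditorBuilderBase {
    // Generated private field with the concrete concept type:
    // typed access to the contextual node without casts.
    private final ClassConceptNode myNode;

    public ClassConceptEditorBuilder(ClassConceptNode node) {
        super(node);
        this.myNode = node;
    }

    // A cell factory method can use myNode's concept-specific members directly.
    public String buildNameCellText() {
        return myNode.getClassName();
    }
}
```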


CellFactoryContextClass is a handy utility class providing the necessary context for templates that generate code included into one of the generated AbstractEditorBuilder sub-classes. By using this class as the context class, template authors automatically obtain all the available methods and fields, and the code generation environment is supported by the MPS platform, so it no longer needs to be reconstructed for each and every editor template. At the same time, CellFactoryContextClass can be used as a marker highlighting templates that generate code for one of the EditorBuilders, simplifying the process of locating and supporting such code in the future.


The GenericCellCreationContext interface provides a limited subset of the contextual information that is always available to code called either as a part of EditorBuilders or from a separate class executed as a part of the cell creation (editor update) process. This interface should be used as a template context instead of CellFactoryContextClass when template authors want to reuse the same template both in the EditorBuilders generation process and in other places - for example, query methods that may be generated either inside EditorBuilders or within some style class.

New signature for createCell() methods

In the previous version of MPS, the cell factory methods were always generated with two additional parameters specifying the context of cell creation: EditorContext & node<>. From now on it is no longer necessary to specify these parameters - the generated code can always access this information (as well as any other contextual info) by calling methods of the containing EditorBuilder class. The new editor generator will generate cell factory methods without any parameters.

Automatic migration script for new createCell() methods

For compatibility with existing generators, we provide a migration script that patches the available templates and introduces the new createCell() methods, which delegate to the old ones (with the two additional parameters) as a fallback. We recommend executing this script first and then checking all modifications to verify that the modified generator still works correctly. The provided automatic migration supports only the most frequent situations, so in some specific cases you may need to manually modify your generator to make it work again. The template which generates the compatibility methods is called template_cellFactoryCompatibility. If you later modify your generator to generate the new createCell() methods directly, you should remove any calls to template_cellFactoryCompatibility. We recommend reviewing all existing generators & patching obsolete templates generating the legacy createCell(...) methods within the scope of the current MPS release - we are going to drop the compatibility template in the next version.
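The delegation pattern introduced by the migration script - a new parameterless createCell() that falls back to the legacy two-parameter variant - can be illustrated as follows. The class and the placeholder types (`EditorContext`, `SNode`, `EditorCell`, `CompatBuilder`) are simplified stand-ins, not the generated code verbatim:

```java
// Simplified placeholder types standing in for the MPS runtime classes.
class EditorContext { }
class SNode { }
class EditorCell {
    final String tag;
    EditorCell(String tag) { this.tag = tag; }
}

public class CompatBuilder {
    private final EditorContext myContext = new EditorContext();
    private final SNode myNode = new SNode();

    // New-style factory method: no parameters; the context is taken
    // from the containing builder instance.
    public EditorCell createCell() {
        return createCell(myContext, myNode); // fallback to the legacy method
    }

    // Legacy factory method with the two explicit context parameters.
    public EditorCell createCell(EditorContext context, SNode node) {
        return new EditorCell("constant");
    }

    public static void main(String[] args) {
        System.out.println(new CompatBuilder().createCell().tag);
    }
}
```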

Mapping labels

Several mapping labels were introduced into the Editor generator (MAPPING_main) and may be used to simplify code generation:


cellFactory.class.concept : ConceptEditorDeclaration -> ClassConcept

This label exposes the Java class generated for the EditorBuilder of ConceptEditorDeclaration.cellModel


cellFactory.class.inspector : ConceptEditorDeclaration -> ClassConcept

This label exposes the Java class generated for the EditorBuilder of ConceptEditorDeclaration.inspectedCellModel


cellFactory.class.component : EditorComponentDeclaration -> ClassConcept

This label exposes the Java class generated for the EditorBuilder of EditorComponentDeclaration


cellFactory.constructor : EditorCellModel -> ConstructorDeclaration

Used to mark the constructor of the generated EditorBuilder class.


cellFactory.factoryMethod : EditorCellModel -> InstanceMethodDeclaration

The replacement for the obsolete cellFactoryMethod label, containing the new cellFactory methods. This label should be used instead of cellFactoryMethod when modifying existing templates to make them generate the new cellFactory methods.


generated.constructor : <no input concept> -> ConstructorDeclaration

This label may be used together with the existing generatedClass label to mark generated constructor instances. It helps avoid the clumsy code for locating the first constructor instance inside the node<ClassConcept> returned from the generatedClass mapping.

CellLayoutConstructor switch introduced

This template switch is used to instantiate the proper cell layout while creating a collection cell. The previously used static createxxx() methods inside the EditorCell_Collection class have been deprecated and will be removed.

New generator for RefCellCellProvider sub-classes

The generator for CellModel_RefCell has been modified. The newly generated anonymous inner classes for RefCellCellProvider no longer use the logic located inside the RefCellCellProvider.createRefCell() runtime method. The meta-information available at generation time is used in order to create the complete content of this method. If you generate sub-classes of RefCellCellProvider within your generators, you should consider reviewing such places and aligning your templates with the templates from MPS.

InlineCellProvider replaced with EditorBuilder sub-class

InlineCellProvider is no longer used by the MPS generator. MPS uses the generated sub-classes of AbstractEditorBuilder instead. Nevertheless, we modified some constraints inside InlineCellProvider in order to make the lifecycle more transparent. We recommend checking the javadoc of InlineCellProvider if you are still using it.

Editor Styles generator

A separate static inner class will be generated for each entry inside StyleSheet & StyleKeyPack instances. The provided applyStyleClass template may be used to properly instantiate & call the new Style classes. Legacy static .applyxxx() methods should be removed in the next release.

StyleClassItem constraints modification

We removed the canBeChild constraints from the StyleClassItem concept. These constraints were replaced with canBeParent constraints of the node containing the StyleClassItem. In addition, the isApplicableToCell(node<EditorCellModel> cellModel) behavior method has been deprecated and is no longer used. Instead, we have introduced the following methods:

  • isApplicableToCellConcept()
  • isApplicableForLayout()
  • isApplicableInLayout()

We recommend checking the javadoc of the StyleClassItem behavior methods if you are implementing any custom StyleClassItem in your language.



For a quick how-to document on the MPS generator please check out the Generator Cookbook.


The generator is the part of a language specification that defines the denotational semantics for the concepts in the language.

MPS follows the model-to-model transformation approach. The MPS generator specifies translation of constructions encoded in the input language into constructions encoded in the output language. The process of model-to-model transformation may involve many intermediate models and ultimately results in an output model, in which all constructions are in a language whose semantics are already defined elsewhere.

For instance, most concepts in baseLanguage (classes, methods etc) are "machine understandable", therefore baseLanguage is often used as the output language.

The target assets are created by applying a model-to-text transformation, which must be supported by the output language. The language aspect that defines the model-to-text transformation is called TextGen and is available as a separate tab in the concept's editor. MPS provides destructive update of generated assets only.

For instance, baseLanguage's TextGen aspect generates *.java files at the following location:
<generator output path>\<model name>\<ClassName>.java
Generator output path - specified in the module which owns the input model (see MPS modules).
Model name - a path segment created by replacing '.' with the file separator in the input model's name.
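The path construction just described amounts to a simple string substitution, sketched here in Java (the '/' separator and the `javaFileFor` helper are illustrative assumptions; the actual generator uses the platform file separator):

```java
import java.nio.file.Paths;

public class OutputPath {
    // Builds <generator output path>/<model name with '.' -> separator>/<ClassName>.java,
    // normalized to '/' for illustration.
    public static String javaFileFor(String outputPath, String modelName, String className) {
        String modelDir = modelName.replace('.', '/');
        return Paths.get(outputPath, modelDir, className + ".java")
                    .toString()
                    .replace('\\', '/');
    }

    public static void main(String[] args) {
        // e.g. a model named com.example.app generating class Main
        System.out.println(javaFileFor("source_gen", "com.example.app", "Main"));
    }
}
```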




Generator Module

Unlike any other language aspect, the generator aspect is not a single model. Generator specification can comprise many generator models as well as utility models. A Generator Model contains templates, mapping configurations and other constructions of the generator language.

A Generator Model is distinguished from a regular model by the model stereotype - 'generator' (shown after the model name as <name>@generator).
The screenshot below shows the generator module of the smodel language as an example.

Research bundled languages yourself


You can explore the generator of the smodel (or any other) language yourself:

  • download MPS (here);
  • create a new project (it can be an empty project);
  • use the Go To -> Go to Language command in the main menu to navigate to the smodel language (its full name is jetbrains.mps.lang.smodel).

Creating a New Generator

A new generator is created by using the New -> Generator command in the language's popup menu.

Technically, it is possible to create more than one generator for a single language, but at the time of writing MPS does not provide full support for this feature. Therefore, languages normally have at most one generator. For that reason, the generator's name is not important; everywhere in the MPS GUI a generator module can be identified by its language name.

When creating a new generator module, MPS will also create the generator model 'main@generator' containing an empty mapping configuration node.

Generator Properties

As a module, a generator can depend on other modules and can have used languages and used devkits (see Module meta-information).

The generator properties dialog also has two additional properties:

Generating Generator

MPS generator engine (or the Generator language runtime) uses mixed compilation/interpretation mode for transformation execution.

Templates are interpreted and filled at runtime, but all functions in rules, macros, and scripts must be pre-compiled.

(lightbulb) To avoid any confusion, always follow this rule: after any changes made to the generator model, the model must be re-generated (Shift+F9). Even better is to use Ctrl+F9, which will re-generate all modified models in the generator module.


The transformation is described by means of templates. Templates are written using the output language and so can be edited with the same cell editor that would normally be used to write 'regular code' in that language. Therefore, without any additional effort the 'template editor' has the same level of tooling support right away - syntax/error highlighting, auto-completion, etc. The templates are then parametrized by referencing into the input model.

The applicability of individual templates is defined by #Generator Rules, which are grouped into #Mapping Configurations.

Mapping Configurations

A Mapping Configuration is a minimal unit, which can form a single generation step. It contains #Generator Rules, defines mapping labels and may include pre- and post-processing scripts.

Generator Rules

Applicability of each transformation is defined by generator rules.
There are seven types of generator rules:

  • conditional root rule
  • root mapping rule
  • weaving rule
  • reduction rule
  • pattern rule
  • abandon root rule
  • drop attribute rule (new in 3.3)

Each generator rule consists of a premise and a consequence (except for the abandon root rule and the drop attribute rule, whose consequences are predefined and cannot be specified by the user).

All rules except for the conditional root rule contain a reference to the concept of the input node (or simply the input concept) in their premises. All rule premises also contain an optional condition function.

A rule consequence commonly contains a reference to an external template (i.e. a template declared as a root node in the same or a different model) or a so-called in-line template (conditional root rules and root mapping rules can only reference an external template). There are also several other kinds of consequences.

The following screenshot shows the contents of a generator model and a mapping configuration example.


The code in templates can be parameterized through macros. The generator language defines three kinds of macros:

  • property macro - computes a property value;
  • reference macro - computes the target (node) of a reference;
  • node macro - is used to control template filling at generation time. There are several versions of node macro - LOOP-macro is an example.

Macros implement a special kind of so-called annotation concept and can wrap property, reference or node cells (depending on the kind of macro) in the template code.

Code wrapping (i.e. the creation of a new macro) is done by pressing Ctrl+Shift+M or by applying the 'Create macro' intention.

The following screenshot shows an example of a property macro.

Macro functions and other parameterization options are edited in the inspector view. A property macro, for instance, requires specifying the value function, which will provide the value of the property at generation time. In the example above, the output class node will get the same name as the input node.

The node parameter in all functions of the generator language always represents the context node to which the transformation is currently being applied (the input node).

Some macros (such as LOOP and SWITCH-macro) can replace the input node with a new one, so that subsequent template code (i.e. code that is wrapped by those macros) will be applied to the new input node.

External Templates

External templates are created as a root node in the generator model.

There are two kinds of external templates in MPS.

One of them is the root template. Any root node created in a generator model is treated as a root template unless it is a part of the generator language (i.e. a mapping configuration is not a root template). A root template is created as a normal root node (via the Create Root Node menu in the model's popup).

The following screenshot shows an example of a root template.

This root template will transform an input node (a Document) into a class (baseLanguage). The root template header is added automatically upon creation, but the concept of the input node is specified by the user.

(lightbulb) It is a good practice to specify the input concept, because this allows MPS to perform static type checking in the code of the macro function.

A Root template (reference) can be used as a consequence in conditional root rules and root mapping rules. ((warning) When used in a conditional root rule, the input node is not available).

The second kind of template is defined in the generator language and its concept name is 'TemplateDeclaration'. It is created via the 'template declaration' action in the Create Root Node menu.

The following screenshot shows an example of template declaration.

The actual template code is 'wrapped' in a template fragment. Any code outside the template fragment is not used in the transformation and serves only as a context (for example, you can have a whole Java class but export only one of its methods as a template).

Template declaration can have parameters, declared in the header. Parameters are accessible through the #generation context.

Template declaration is used in consequence of weaving, reduction and pattern rules. It is also used as an included template in INCLUDE-macro (only for templates without parameters) or as a callee in CALL-macro.

Template Switches

A template switch is used when two or more alternative transformations are possible in a certain place in template code. In that case, the template code that allows alternatives is wrapped in a SWITCH-macro, which has reference to a Template Switch. Template Switch is created as a root node in the generator model via the Create Root Node menu (this command can be seen in the 'menu' screenshot above).

The following screenshot shows an example of a template switch.

Generator Language Reference

Mapping Configuration

Mapping Configuration is a container for generator rules, mapping label declarations and references to pre- and post-processing scripts. A generator model can contain any number of mapping configurations - all of them will be involved in the generation process, if the owning generator module is involved. Mapping configuration also serves as a minimal generator unit that can be referenced in the mapping priority rules (see Generation Process: Defining the Order of Priorities).

Generator Rule

Generator Rule specifies a transformation of an input node to an output node (except for the conditional root rule which doesn't have an input node and simply creates a new node in the output model). All rules consist of two parts - a premise and a consequence (except for the abandon root rule, which doesn't have a consequence and simply ignores the input node). Any generator rule can be tagged by a mapping label.

All generator rules' functions have the following parameters:

  • node - the current input node (all except the condition-function in conditional root rule)
  • genContext - generation context - allows searching for output nodes, generating unique names, and more (see #generation context)

Generator Rules:





conditional root rule

Generates a root node in the output model. Applied only one time (max) during a single generation step.

condition function (optional), missing condition function is equivalent to a function always returning true.

root template (ref)

root mapping rule

Generates a root node in the output model.

concept - applicable concept (concept of the input node)
inheritors - if true then the rule is applicable to the specified concept and all its sub-concepts. If false (default) then the sub-concepts are not applicable.
condition function (optional) - see conditional root rule above.
keep input root - if false then the input root node (if it's a root node) will be dropped. If true then input root will be copied to the output model.

root template (ref)

weaving rule

Allows inserting additional child nodes into the output model. Weaving rules are processed at the end of a generation micro-step, just before map_src and reference resolving. The rule is applied to each input node of the specified concept. The parent node for insertion should be provided by the context function.
(see #Model Transformation)

concept - same as above
inheritors - same as above
condition function (optional) - same as above

  • external template (ref)
  • weave-each 
  • context - function that computes the (parent) output node into which the output node(s) generated by this rule will be inserted.
  • anchor (available in Inspector) - specifies a node within the context collection, in front of which the nodes should be inserted, null means insert at the end of the collection

reduction rule

Transforms the input node while this node is being copied to the output model.

concept - same as above
inheritors - same as above
condition function (optional) - same as above

  • external template (ref)
  • in-line template
  • in-line switch
  • dismiss top rule
  • abandon input

pattern rule

Transforms the input node, which matches the pattern.

pattern - pattern expression
condition function (optional) - same as above

  • external template (ref)
  • in-line template
  • dismiss top rule
  • abandon input

abandon root rule

Allows dropping an input root node which would otherwise be copied into the output model.

applicable concept ((warning) including all its sub-concepts)
condition function (optional) - same as above


drop attribute rule

For a transformed node, controls which attributes get copied from the input node.

concept - concept of the attribute node (subconcept of jetbrains.mps.lang.core.structure.Attribute)
inheritors - if true then the rule is applicable to the specified concept and all its sub-concepts. If false (default) then the sub-concepts are not applicable.
condition function (optional) - further restrictions to trigger the rule


Attribute is not copied when the rule matches

Rule Consequences:




root template (ref)

  • conditional root rule
  • (root) mapping rule

Applies the root template

external template (ref)

  • weaving rule
  • reduction rule
  • pattern rule

Applies an external template. Parameters should be passed if required, they can be one of:

  • pattern captured variable (starting with # sign)
  • integer or string literal
  • null, true, false
  • query function


weaving rule

Applies an external template to a set of input nodes.
Weave-each consequence consists of:

  • foreach function - returns a sequence of input nodes
  • reference to an external template

in-line template

  • reduction rule
  • pattern rule

Applies the template code which is written right here.

in-line switch

reduction rule

Consists of a set of conditional cases and a default case.
Each case specifies a consequence, which can be one of:

  • external template (ref)
  • in-line template
  • dismiss top rule
  • abandon input

dismiss top rule

  • reduction rule
  • pattern rule

Drops all reduction-transformations up to the point where this sequence of transformations has been initiated by an attempt to copy the input node to the output model. The input node will be copied 'as is' (unless some other reduction rules are applicable). The user can also specify an error, warning or an information message.

abandon input

  • reduction rule
  • pattern rule

Prevents the input node from being copied into the output model.

Root Template

Root Template is used in conditional root rules and (root) mapping rules. The generator language doesn't define a specific concept for root templates; instead, it defines a special kind of annotation - the root template header, which is automatically added to each new root template. The root template header is used for specifying the expected input concept (i.e. the concept of the input node). MPS uses this setting to perform static type checking of the code in the various macro-functions used in the root template.


Technically, any root node in the output language can be considered a Root Template when created in the generator model. However, root nodes without the annotation are treated by the generator as auxiliary, utility nodes, e.g. to specify reference targets, and thus are not considered for target language evaluation. A warning is displayed and a quick fix is offered to add the root template header annotation to root templates that miss one. We highly recommend that language designers fix their templates and always create new root templates with the annotation included.


External Template

External Template is a concept defined in the generator language. It is used in weaving rules and reduction rules.

In external templates the user specifies the template name, input concept, parameters and a content node.

The content node can be any node in the output language. The actual template code in external templates is surrounded by template fragment 'tags' (the template fragment is also a special kind of annotation concept). The code outside the template fragment serves as a framework (or context) for the real template code (the template fragment) and is ignored by the generator. In an external template for a weaving rule, the template's context node is required (it is a design-time representation of the rule's context node), while the template for a reduction rule can be just one context-free template fragment. An external template for a reduction rule must contain exactly one template fragment, while a weaving rule's template can contain more than one template fragment.

Template fragment has a mapping label property, which is edited in Inspector view.

Mapping Label

Mapping Labels are declared in a mapping configuration; references to such a declaration are used to label generator rules, macros and template fragments. These marks allow finding an output node from a known input node (see #generation context).


  • name
  • input concept (optional) - expected concept of the input node of the transformation performed by the tagged rule, macro or template fragment
  • output concept (optional) - expected concept of the output node of the transformation performed by the tagged rule, macro or template fragment

MPS makes use of the input/output concept settings to perform static type checking in get output ... operations (see #generation context).
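The label-based lookup described above (and the get output ... operations covered under #generation context) can be pictured as a registry keyed by label and input node. This is a minimal sketch of the idea only, not MPS internals; all names are hypothetical:

```python
# Hypothetical sketch: mapping labels let the generator find output
# nodes by (label, input node). Names are illustrative, not MPS API.

class MappingLabelRegistry:
    def __init__(self):
        # (label, input_node_id) -> list of output nodes recorded so far
        self._entries = {}

    def record(self, label, input_node_id, output_node):
        """Called when a labeled rule/macro/fragment produces an output node."""
        self._entries.setdefault((label, input_node_id), []).append(output_node)

    def get_output(self, label, input_node_id):
        """Mimics 'get output <label> for (<input node>)': errors on ambiguity."""
        outputs = self._entries.get((label, input_node_id), [])
        if len(outputs) > 1:
            raise ValueError("more than one matching output node")
        return outputs[0] if outputs else None

    def get_output_list(self, label, input_node_id):
        """Mimics 'get output list <label> for (<input node>)'."""
        return list(self._entries.get((label, input_node_id), []))
```

The real generator additionally uses the label's input/output concept declarations to type-check these lookups statically.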

Export Label

Export labels are declared in a mapping configuration. They resemble mapping labels in many ways, but add a persistence mechanism that enables access to the labels from other models.

Each export label specifies:

  • name to identify it in the macros
  • input and output concepts indicating the concept before and after the generation phase
  • keeper concept, an instance of which will be used for storing the exported information
  • marshal function, to encode the inputNode and the generated outputNode into the keeper
  • an unmarshal function, to decode the information using the original inputNode and the keeper to correctly initialize the outputNode in the referring model


A macro is a special kind of annotation concept that can be attached to any node in template code. Macros bring a dynamic aspect into the otherwise static template-based model transformation.

Property- and reference-macros are attached to property- and reference-cells respectively, while node-macros (which come in many variations - LOOP, IF etc.) are attached to cells representing the whole node in the cell-editor. All properties of a macro are edited using the inspector view.

All macros have the mapping label property - a reference to a mapping label declaration. Additionally, all macros can be parameterized by various macro-functions, depending on the type of the macro. Every macro-function has at least the three following parameters:

  • node - the current input node;
  • genContext - the generation context - allows searching for output nodes, generating unique names, and more;
  • operationContext - an instance of the jetbrains.mps.smodel.IOperationContext interface (rarely used).

Many types of macros have a mapped node (or mapped nodes) function. This function computes the new input node - a substitution for the current input node. If the mapped node function returns null, or the mapped nodes function returns an empty sequence, the generator skips this macro altogether, i.e. no output is generated in this place.
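This skip-on-null contract can be sketched as follows; a hedged illustration of the rule just described, not the actual generator code (function names are hypothetical):

```python
# Illustrative sketch of the mapped-node contract: a None result (or an
# empty sequence from a 'mapped nodes' function) makes the generator
# skip the macro, so nothing is generated in its place.

def apply_node_macro(mapped_node_func, current_input, transform):
    """Single-node macro: substitute the input, or skip on None."""
    new_input = mapped_node_func(current_input)
    if new_input is None:
        return []                       # macro skipped: no output here
    return [transform(new_input)]

def apply_loop_macro(mapped_nodes_func, current_input, transform):
    """Multi-node macro (LOOP-like): empty sequence yields no output."""
    new_inputs = mapped_nodes_func(current_input) or []
    return [transform(n) for n in new_inputs]
```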



Properties (if not mentioned above)

Property macro

Computes the value of a property.

value function:

  • return type - string, boolean or int - depending on the property type.
  • parameters - standard + templateValue - value in the template code wrapped by the macro.

Reference macro

Computes the referent node in the output model.
Normally executed at the end of a generation micro-step, when the output model (tree) has already been constructed.
Can also be executed earlier, if user code tries to obtain the target of the reference.

The reference macro supports SNodeReference as a specification of a new target. With that, templates don't need access to the target node's model at the moment they are applied.

referent function:

  • return type
    • node (type depends on the reference link declaration)
    • SNodeReference
    • string identifying the target node (see note).
  • parameters - standard + outputNode - source of the reference link (in the output model).


IF macro

The wrapped template code is applied only if the condition is true. Otherwise the template code is ignored and the 'alternative consequence' (if any) is applied.

condition function
alternative consequence (optional) - any of:

  • external template (ref)
  • in-line template
  • abandon input
  • dismiss top rule


LOOP macro

Computes new input nodes and applies the wrapped template to each of them.

mapped nodes function


INCLUDE macro

The wrapped template code is ignored (it only serves as an anchor for the INCLUDE macro); a reusable external template is used instead.

Null input makes INCLUDE effectively a no-op.

mapped node function (optional)
include template - reference to a reusable external template


CALL macro

Invokes a template and replaces the wrapped template code with the result of the template invocation. Supports templates with parameters.

A null input node is tolerated; the template is ignored altogether in this case, i.e. CALL yields an empty collection of nodes as a result when the input/mapped node is null.

mapped node function (optional)
call template - reference to a reusable external template

argument - one of

  • pattern captured variable
  • integer or string literal
  • null, true, false
  • query function


SWITCH macro

Provides a way to choose among many alternative transformations at the given place in the template code.
The wrapped template code is applied if none of the switch cases is applicable and no default consequence is specified in the #template switch.

For a null input node, SWITCH may report a message (specified along with its rules); the anchor template node is ignored and the SWITCH macro yields no results.

mapped node function (optional)
template switch - reference to a template switch


COPY_SRC macro

Copies an input node to the output model. The wrapped template code is ignored.

mapped node function - computes the input node to be copied.


COPY_SRCL macro

Copies input nodes to the output model. The wrapped template code is ignored.
Can only be used for children with multiple aggregation cardinality.

mapped nodes function - computes the input nodes to be copied.


MAP_SRC macro

A multifunctional macro that can be used for:

  • marking template code with a mapping label;
  • replacing the current input node with a new one;
  • performing a non-template based transformation;
  • accessing the output node for some reason.

The MAP_SRC macro is executed at the end of a generator micro-step - after all node- and property-macros, but before any reference-macro is run.

mapped node function (optional)
mapping func function (optional) - performs a non-template based transformation.
If defined, the wrapped template code will be ignored.
Parameters: standard + parentOutputNode - the parent node in the output model.
post-processing function (optional) - gives access to the output node.
Parameters: standard + outputNode


MAP_SRCL macro

Same as MAP_SRC, but can handle many new input nodes (similar to the LOOP macro).

mapped nodes function
mapping func function (optional)
post-processing function (optional)


WEAVE macro

Allows inserting additional child nodes into the output model, similar to the way weaving rules are used. The node wrapped in the WEAVE macro (or provided by the use input function) will have the supplied template applied to it, and the generated nodes will be inserted.

use input - a function returning a collection of nodes to apply the macro to
weave - a reference to a template to weave into the nodes supplied as the input


EXPORT macro

Saves a node for cross-model referencing, so that it can be retrieved when generating other models.




Reference resolving by identifier is only supported in BaseLanguage.
The identifier string for classes and class constructors may require the package name in square brackets preceding the class name (if the class is not in the same output model):

Template Switch

A template switch is used together with the SWITCH macro (the TemplateSwitchMacro concept). A single template switch can be re-used in many different SWITCH macros. A template switch consists of a set of cases and one default case. Each switch case is a reduction rule, i.e. a template switch contains a list of reduction rules (see #reduction rule).

The default case consequence can be one of:

  • external template (ref)
  • in-line template
  • abandon input
  • dismiss top rule

Alternatively, the default case can be omitted, in which case the template code surrounded by the corresponding SWITCH macro is applied.

A template switch can inherit reduction rules from other switches via the extends property. When the generator executes a SWITCH macro, it tries to find the most specific template switch available in scope. Therefore, the actually executed template switch is not necessarily the one defined in the template switch property of the SWITCH macro.

Through the null-input message property, the user can specify an error, warning or info message, which will be shown in the MPS messages view when the mapped node function in the SWITCH macro returns null (by default no message is shown and the macro is skipped altogether).

A template switch can accept parameters, the same way as template declarations do. Using a parameterized switch mandates that arguments be supplied in the SWITCH macro. The TemplateSwitchMacro concept supports switches both with and without arguments.


The old macro concept (TemplateSwitch) has been deprecated since version 3.1. Note that the old and the new macro look the same visually (SWITCH). There is a migration script to replace old macro instances with the new one; you need to invoke the script manually to update the concepts.

Generation Context (operations)

The generation context (the genContext parameter in macro- and rule-functions) allows finding nodes in the output model, generating unique names, and provides other useful functionality.

Generation context can be used not only in the generator models, but also in utility models - as a variable of type gencontext.

Operations of genContext are invoked using the familiar dot-notation: genContext.operation

Finding Output Node

get output <mapping label> for model ( <model> )

Returns the output node generated by a labeled conditional root rule in a specified model.
Issues an error if there is more than one matching output node.

get output <mapping label> for ( <input node> )

Returns the output node generated from the input node by a labeled generator rule, a macro or a template fragment.
Issues an error if there is more than one matching output node.

pick output <mapping label> for ( <input node> )

(warning) Only used in the context of the referent function in a reference-macro, and only if the required output node is the target of the reference being resolved by that reference-macro.
Returns the output node generated from the input node by a labeled generator rule, a macro or a template fragment. The difference from the previous operation is that this one can automatically resolve the many-output-nodes conflict - it picks the output node that is visible in the given context (see search scope).

get output list <mapping label> for ( <input node> )

Returns a list of output nodes generated from the input node by a labeled generator rule, a macro or a template fragment.

get copied output for ( <input node> )

Returns the output node that has been created by copying an input node. If, during copying, the input node was reduced but the concept of the output node remained the same (i.e. it wasn't reduced into something totally different), this is still considered 'copying'.
Issues an error if there is more than one matching output node.

Generating Unique Name

unique name from <base name> in context <node>

Uniqueness is ensured throughout the whole generation session.
(warning) Clashes with names that weren't generated using this service are still possible.

The context node is optional, though we recommend specifying it to guarantee generation stability. If specified, MPS tries its best to keep the generated names 'contained' in a scope (usually a root node). Then, when names are re-calculated (due to changes in the input model or in the generator model), this won't affect other names outside the scope.
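The idea of scope-contained unique names can be illustrated with a small sketch (hypothetical names; not the MPS implementation, and it shares the caveat above about clashes with names produced elsewhere):

```python
# Illustrative sketch of scope-aware unique-name generation: names are
# unique within the session, and numbering restarts per context scope,
# so re-calculated names in one scope don't shift names in another.

class UniqueNames:
    def __init__(self):
        self._used = {}   # scope -> set of names already handed out

    def unique_name(self, base_name, scope="<global>"):
        used = self._used.setdefault(scope, set())
        candidate, counter = base_name, 0
        while candidate in used:
            counter += 1
            candidate = "%s_%d" % (base_name, counter)
        used.add(candidate)
        return candidate
```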

Template Parameters


Value of the captured pattern variable
(warning) available only in rule consequences


Value of the template parameter
(warning) available only in external templates

Getting Contextual Info


The current input model


The original input model


The current output model

invocation context

Operation context (jetbrains.mps.smodel.IOperationContext java interface) associated with the module - the owner of the original input model

(warning) This operation has been deprecated in MPS 3.3 and will be removed in the next release along with other activities to eliminate IOperationContext


Scope - jetbrains.mps.smodel.IScope java interface

(warning) This operation has been deprecated since MPS 3.1 and is removed in MPS 3.3


The template code surrounded by the macro.
It is only used in macro-functions.

(warning) This operation has been deprecated in MPS 3.3.

The primary flaw is that this operation implies interpreted templates: there is no template model when templates are generated.

Besides, the contract of the operation is vague (e.g. what does it return in the context of an argument query for a template call?).

get prev input <mapping label>

Returns the input node that has been used for enclosing the template code surrounded by the labeled macro.
It is only used in macro-functions.

Transferring User Data

During generation, MPS maintains three maps of user objects, each with a different life span:

  • session object - kept throughout the whole generation session;
  • step object - kept through a single generation step;
  • transient object - only alive during a micro step.

The developer can access the user object maps using the array (square brackets) notation:

The key can be any object (java.lang.Object).
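The three life spans can be modeled as three plain maps cleared at different boundaries; an illustrative sketch, not the MPS API:

```python
# Sketch of the three user-object maps and their life spans:
# session objects survive the whole generation session, step objects a
# single generation step, transient objects a single micro-step.

class UserObjects:
    def __init__(self):
        self.session = {}     # cleared only when the session ends
        self.step = {}        # cleared after every generation step
        self.transient = {}   # cleared after every micro-step

    def on_micro_step_finished(self):
        self.transient.clear()

    def on_step_finished(self):
        self.step.clear()
        self.transient.clear()
```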

binding user data with particular node


The session- and step-objects cannot be used to pass data associated with a particular input node across steps and micro-steps, because neither an input node nor its id can serve as a key (output nodes always have a different id).
To pass such data, use the putUserObject and getUserObject methods defined in the class jetbrains.mps.smodel.SNode.
The data will be transferred to all output copies of the input node. The data will also be transferred to the output node if a slight reduction (i.e. one that doesn't change the node concept) took place while the node was being copied.


Creates a message in the MPS message view. If the node parameter is specified, clicking on the message will navigate to that node. In the case of an error message, MPS will also output some additional diagnostic info.

Utilities (Re-usable Code)

If you have duplicated code (in rules, macros, etc.) and want to, say, extract it into re-usable static methods, you must create the containing class in a separate, non-generator model.

If you create a utility class in a generator model (i.e. in a model with the 'generator' stereotype), it will be treated as an (unused) root template and no code will be generated from it.

Mapping Script

A mapping script is user code that is executed either before a model transformation (pre-processing script) or after it (post-processing script). It must be referenced from a #Mapping Configuration to be invoked as a part of its generation step. Mapping scripts provide the ability to perform non-template based model transformations.

Pre-processing scripts are also commonly used for collecting information from the input model that can later be used in the course of the template-based transformation. The information collected by a script is saved as a transient-, step- or session-object (see generation context).

Script sample:


script kind

  • pre-process input model - the script is executed at the beginning of a generation step, before the template-based transformation;
  • post-process output model - the script is executed at the end of a generation step, after the template-based transformation.

modifies model

Only available if script kind = pre-process input model.
If set to true and the input model is the original input model, MPS will create a transient input model before applying the script.
If set to false but the script tries to modify the input model, MPS will issue an error.

Code context:


The current model


The generation context to give access to transient/session or step objects.

invocation context

Operation context (jetbrains.mps.smodel.IOperationContext java interface) associated with the module - the owner of the original input model

(warning) The operation context has been deprecated and will be removed in the next release; please don't use it.

The Generator Algorithm

The process of generating target assets from an input model (a generation session) includes 5 stages:

  • Defining all generators that must be involved
  • Defining the order of priorities of transformations
  • Step-by-step model transformation
  • Generating text and saving it to a file (for each root in output model)
  • Post-processing assets: compiling, etc.

We will discuss the first three stages of this process in detail.

Defining the Generators Involved

To define the required generators, MPS examines the input model and determines which languages are used in it. While doing so, MPS doesn't make use of the 'Used Languages' specified in the model properties dialog. Instead, MPS examines each node in the model and gathers the languages that are actually used.

From each 'used language', MPS obtains its generator module. If there is more than one generator module in a language, MPS chooses the first one (multiple generators for the same language are not fully supported in the current version of MPS). If any generator in this list depends on other generators (as specified in the 'depends on generators' property), those generators are added to the list as well.

After MPS obtains the initial list of generators, it begins to scan the generators' templates in order to determine which languages will be used in intermediate (transient) models. The languages detected this way are handled in the same manner as the languages used in the original input model. This procedure is repeated until no more 'used languages' can be detected.
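This fixed-point gathering of involved languages can be sketched as a simple worklist algorithm. It is illustrative only; template_languages_of stands in for the scanning of a generator's templates:

```python
# Sketch of the fixed-point language detection: starting from languages
# used in the input model, keep adding languages used in the templates
# of already-collected generators until nothing new appears.

def involved_languages(model_languages, template_languages_of):
    """template_languages_of(lang) -> languages used in lang's generator templates."""
    involved, worklist = set(), list(model_languages)
    while worklist:
        lang = worklist.pop()
        if lang in involved:
            continue                      # already processed
        involved.add(lang)
        worklist.extend(template_languages_of(lang))
    return involved
```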

Explicit Engagement

In some rare cases, MPS is unable to detect the language whose generator must be involved in the model transformation. This may happen if that language is not used in the input model or in the template code of other (detected) languages. In this case, you can explicitly specify the generator engagement via the Languages Engaged on Generation section in the input model's properties dialog (Advanced tab).

Dependency scope/kind - 'Generation Target' and 'Design'.

'Generation Target' replaces the 'Extends' relation between two languages (L2 extends L1) that was previously used to specify that the generator of L2 generates into L1 and thus needs L1's runtime dependencies. Now, when a language (L2) is translated into another language (L1), and L1 has runtime dependencies, use L1 as the 'Generation Target' of L2. Though this approach is much better than 'Extends', it is still not perfect, as it is rather an attribute of a generator than of a language. Once generators become fully independent from their languages, we might need to revisit this approach (different generators may target different languages, so the target has to be specified for a generator, not for the source language).

The 'Design' dependency replaces 'Extends' between two generators. Use it when you need to reference another generator to specify priority rules (though consider whether you really need these priorities; see the changes in the Generation Plan below).

Defining the Order of Priorities

As we discussed earlier, a generator module contains generator models, and generator models contain mapping configurations. A mapping configuration (mapping for short) is a set of generator rules. It is often required that some mappings be applied before (or not later than, or together with) some other mappings. The language developer specifies such relationships between mappings by means of mapping constraints in the generator properties dialog (see also #Mapping Priorities and the Dividing Generation Process into Steps demo).

After MPS builds the list of involved generators, it divides all mappings into groups according to the specified mapping priorities. All mappings for which no priority has been specified fall into the last (lowest-priority) group.
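The partitioning can be illustrated with a simplified sketch that layers mappings by strict 'processed before' constraints and puts unprioritized mappings into the last group (hypothetical code, assuming the constraints are acyclic; real MPS also handles 'not later than' and 'together with' relations):

```python
# Sketch of mapping partitioning: mappings linked by strict 'processed
# before' constraints are layered topologically; mappings without any
# priority fall into the last (lowest-priority) group.

def partition(mappings, before):
    """before: set of (a, b) pairs meaning 'a is processed before b'."""
    constrained = {m for pair in before for m in pair}
    remaining = set(constrained)
    groups = []
    while remaining:
        # a mapping is ready when no still-remaining mapping must precede it
        ready = {m for m in remaining
                 if not any(a in remaining for (a, b) in before if b == m)}
        groups.append(sorted(ready))
        remaining -= ready
    unprioritized = sorted(m for m in mappings if m not in constrained)
    if unprioritized:
        groups.append(unprioritized)
    return groups
```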



You can check the mapping partitioning for any (input) model by selecting Show Generation Plan action in the model's popup menu.
The result of partitioning will be shown in the MPS Output View.

Optimized Generation Plan

When planning the generation phase, MPS prefers to keep every generator as isolated as possible. As a result, you'll see many relatively small and fast-to-process generation steps. Of course, generators forced together by priority rules still run at the same step. Handling several unrelated generators at the same generation step (as MPS did prior to 3.2) proved to be inefficient, since it imposed a lot of unnecessary checking for rule applicability across the other generators of the same step. With in-place transformation in 3.2 and later, the performance penalty for each extra generation step is negligible.

Ignored priority rules

In addition to conflicting priorities, some rules get ignored during generation plan construction. This can happen when the input model doesn't use any concept of a language participating in a priority rule. Since there is no actual use of the language, the rule is ignored, and the 'Show Generation Plan' action reports such rules along with conflicting ones. Previous MPS versions used to include generators of otherwise unused languages in the generation process; now these generators get no chance to jump in.

Implicit priorities

Target languages (languages produced by templates) give rise to implicit 'not later than' rules. You don't need to specify these priorities manually: MPS automatically inserts 'not later than' rules for all generator models in the source and target languages. It is important to understand that priority rules work at the model granularity level.


This implicit priority rule between two generator models is ignored if an explicit priority rule is already defined for these two models - one from the language that generates into the other language, and one from the other language.

Model Transformation

Each group of mappings is applied in a separate generation step. The entire generation session consists of as many generation steps as there were mapping groups formed during the mapping partitioning. A generation step includes three phases:

  • Executing pre-mapping scripts
  • Template-based model transformation
  • Executing post-mapping scripts

The template-based model transformation phase consists of one or more micro-steps. A micro-step is a single-pass model transformation of an input model into a transient (output) model.

While executing a micro-step, MPS follows this procedure:

  1. Apply conditional root rules (only once - on the first micro-step)
  2. Apply root mapping rules
  3. Copy input roots for which no explicit root mapping is specified (this can be overridden by means of the 'keep input root' option in root mapping rules and by 'abandon root' rules)
  4. Apply weaving rules
  5. Apply delayed mappings (from the MAP_SRC macro)
  6. Revalidate references in the output model (all reference-macros are executed here)

There is no separate stage for the application of reduction and pattern rules. Instead, every time MPS copies an input node into the output model, it attempts to find an applicable reduction (or pattern) rule. MPS performs node copying when it is either copying a root node or executing a COPY_SRC macro. Therefore, reduction can occur at either stage of the model transformation.

MPS uses the same rule set (mapping group) for all micro-steps within a generation step. After a micro-step completes in which some transformations took place, MPS starts the next micro-step, passing the previous micro-step's output model as its input. The whole generation step is considered completed when no transformations occur during the execution of the last micro-step, that is, when no more rules in the current rule set are applicable to nodes in the current input model.
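The micro-step loop amounts to a fixed-point iteration; a minimal sketch of the loop just described, where micro_step is a hypothetical stand-in for one single-pass transformation:

```python
# Sketch of the generation-step loop: micro-steps repeat, each feeding
# its output model to the next, until a micro-step applies no
# transformation - at which point the generation step is complete.

def generation_step(input_model, micro_step):
    """micro_step(model) -> (output_model, changed: bool)"""
    model, changed = input_model, True
    while changed:
        model, changed = micro_step(model)
    return model
```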

The next generation step (if any) will receive the output model of previous generation step as its input.



Intermediate models (transient models) that are the output/input of generation steps and micro-steps are normally destroyed immediately after their transformation to the next model is completed.
To keep transient models, enable the following option:
Settings -> Generator Settings -> Save transient models on generation

See also:

Handling of node attributes during generation

Node attributes constitute a generic extension mechanism, so the generator preserves attributes throughout the transformation process (unless the attribute designer opts not to keep them) without explicit support in any template. When an input node is transformed into another node, the generator copies the attributes of the input node to the output. The copy is controlled only by the newly introduced drop attribute rules, and happens regardless of the @attribute info specification (i.e. its attributed concept or multiplicity restrictions). The fact that a transformation rule may have produced the attribute node itself is not taken into account: if a reduction rule explicitly copies node attributes to a newly created output node, the attributes get duplicated due to the automatic copying of attributes. However, it is rare for a reduction rule to copy node attributes, and the issue, if it ever shows up, is easy to mitigate with drop rules.

While copying the attributes of a node, the generator consults drop attribute rules (newly introduced in MPS 3.3; they reside next to abandon root rules) to see whether the language designer wants these attributes to survive the transformation process. These rules are quite similar to abandon root rules - when a rule is triggered, the attribute is not copied into the output model.

With the growing adoption of attributes and their increasing complexity, we enabled the generator to transform the attribute contents using the regular template processing rules:

  • references to nodes in the same model get updated to point to the respective nodes in the output model
  • reduction rules are applied in order to transform children of the attribute node.

In-place transformation

Generators for the languages employed in a model are applied sequentially (see Generation Plan). Effectively, each generation step modifies just a fraction of the original model, and the rest of the model is copied as-is. With huge models and numerous generation steps, this approach proves to be quite inefficient. In-place transformation addresses this with the well-known 'delta' approach, where only the changes are collected and then applied to the original model to alter it in place.

As of version 3.1, in-place transformation is an option, enabled by default and configurable through Project settings -> Generator. Clients are encouraged to fix any of their templates that fail in the in-place mode, as in-place generation is likely to become the only generation mode later down the road.

Use of in-place transformation brings certain limitations and might even break patterns that used to work in previous MPS versions:

  • Most notable and important: there is no output model at the moment when a rule's queries/conditions are executed. Consulting the output model during the transformation process is bad practice, and in-place transformation enforces removing it. Access to the output model from a transformation rule implies a certain order of execution, effectively limiting the set of optimizations the MPS generator can apply. The contract of a transformation rule - a complete input plus the fraction of the output this particular rule is responsible for - is more rigorous than "a complete input model" and "an output model in some uncertain state".
  • The output model is indeed there for weaving rules, as their purpose is to deal with output nodes.
  • The process of delta building requires the generator to know about the nodes being added to the model. Thus, any implicit changes to the output model that used to work will fail with in-place generation enabled. As an example, consider MAP-SRC with a post-process function that replaces the node with a new one: postprocess: if (node.someCondition()) node.replace with new(AnotherNode);. The generator records the new node produced by MAP-SRC, schedules it for addition, and delays post-processing. Once post-processing is over, there is no way for the generator to figure out that the node it tracks as an 'addition to the output model' is no longer valid and that another node should be used instead. Of course, the post-process function can safely alter anything below the node produced by MAP-SRC, but an attempt to go outside the sandbox of that node leads to an error.
  • The presence of active weaving rules prevents in-place transformation, as these rules require both input and output models.
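The 'delta' idea can be illustrated in miniature: collect replacements without touching the model, then apply them in one pass (a toy sketch where a model is a flat id-to-node map; not the MPS implementation):

```python
# Sketch of delta-based in-place transformation: only the changes are
# recorded, then applied to the original model to alter it in place;
# untouched nodes are never copied.

def transform_in_place(model, reduce_rule):
    """model: dict node_id -> node; reduce_rule(node) -> new node or None."""
    # phase 1: collect the delta without modifying the model
    delta = {nid: new for nid, node in model.items()
             if (new := reduce_rule(node)) is not None}
    # phase 2: apply the collected changes in place
    model.update(delta)
    return len(delta)   # number of nodes actually transformed
```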

Generation trace

Much like in-place transformation, the updated generation trace is inspired by the idea of tracking actual changes only. It is now much less demanding, as only the transformed nodes are tracked. Besides, various presentation options are available to give different perspectives on the transformation process.

Support for non-reflective queries

Note: This is just a preview of incomplete functionality in 3.1

Queries in the generator end up in the QueriesGenerated class, with methods implementing the individual queries. These methods are invoked through Java reflection. This approach has certain limitations - extra effort is required to ensure consistency of method names and arguments between the generated code and the hand-written invocation code. A provisional API and a generation option have been added to expose the functionality of QueriesGenerated through a set of interfaces. With that, the generator consults generated queries through regular Java calls, with compile-time checks for arguments, leaving the naming and arguments of particular generated queries as an implementation detail of QueriesGenerated and its generator.

Mapping Priorities

Mapping priorities are a set of mapping priority rules specifying an order of priority between sets of generator rules (mapping configurations).

Mapping priorities are specified in generator module property dialog.
See also: Generator Properties, Generation Process: Defining the Order of Priorities, Demo 6: Dividing Generation Process into Steps

Each mapping priority rule consists of a left part, a right part, and a priority symbol in the middle.
For instance:

The left part of a priority rule can only refer to mapping configurations in this generator. It can be one of:

  • * - all mappings in this generator
  • modelName . * - all mappings in a model
  • modelName . mappingName - a mapping configuration
  • { ... } - a set of mappings; an entry can be either modelName . * or modelName . mappingName

The right part of a priority rule can refer to mapping configurations in this generator, as well as in other generators visible in scope (see Generator Properties: dependency on other generators). It can be one of:

  • * - all mappings in this generator
  • modelName . * - all mappings in a model
  • modelName . mappingName - a mapping configuration
  • { ... } - a set of mappings; an entry can be either modelName . * or modelName . mappingName
  • [languageName / generatorName : * ] - all mappings in an external generator
  • [languageName / generatorName : modelName] - all mappings in an external generator model
  • [languageName / generatorName : modelName . mappingName] - an external mapping configuration
  • [languageName / generatorName : { ... }] - a set of mappings in an external generator
  • all possible mappings

Priority symbol




A priority symbol expresses one of three relations:

  • Mappings in the left part are processed before mappings in the right part
  • Mappings in the left part are processed not later than mappings in the right part
  • Mappings in the left part and the right part are processed together

Generation plan

Generation plans allow developers to explicitly specify the desired order of generation for their models and thus gain better control over the generation process.


Specifying mutual generator priorities may become cumbersome for larger projects. Additionally, in order to specify the priorities the involved languages need to know about one another by declaring appropriate mutual dependencies, which breaks their (sometimes desired) independence. Generation plans put the responsibility of proper ordering of generation steps into a single place - the generation plan. They allow language designers to provide intuitive means for end-user models to be processed in a desired order. Generation plans list all the languages that should be included in the generation process, order them appropriately and optionally specify checkpoints, at which the generator shall preserve the current transient models. These models can then be used for automatic cross-model reference resolution further down the generation process.



The mechanism described here is a preliminary implementation that is likely to evolve further in future MPS versions. The general direction is:

  • to give more flexibility to the language designers in how they organize builds for users of their languages
  • to relieve the end-developers (aka language users) from the need to directly handle build plans, genplan models or build facets

So the actual end-user mechanism is likely to become more dependent on the language designer's intentions. The approach described here, based on a facet and a genplan model, is merely one sample of how to accomplish custom generation.

Defining a generation plan

In order to create a generation plan, you first need to create a model. You may consider giving the model a genplan stereotype to easily distinguish it from ordinary models, but this is not mandatory.

After importing the jetbrains.mps.lang.generator.plan and jetbrains.mps.lang.smodel languages, you can create a root node of the Plan concept, which will represent your generation plan:

The generation plan consists of transforms and checkpoints.

It is also possible to specify the required generators explicitly.

Transforms represent generation steps and include languages that should be generated as part of that generation phase.

Apply represents an explicit invocation of a particular generator. The apply with extended statement applies, in a single step, the specified generators together with those that extend them. This allows the language designer to accommodate possible extensions.


Checkpoints represent points during the generation at which the intermediate models should be preserved. References that are resolved later in the generation will be able to look up nodes in the stored intermediate models through mapping labels. You can view these checkpoint models in the Project View tool window:

<TODO - maybe this is no longer valid> These intermediate checkpoint models are preserved until you shut down MPS or until you rebuild the models or the models that they depend on. Alternatively, you can remove them manually:

Checkpoints provide synchronization points between different plans. The checkpoint models are denoted with a stereotype that matches the name of a checkpoint the model has been created with. Models are persisted alongside the generated sources using the naming scheme of <plan-name>-<checkpoint name>. 

Distinct statements allow for capturing different aspects of a checkpoint.

  • declare checkpoint <name> statement - specifies a label that generation plans can share among themselves. This statement does not record/persist the state of the transformed model; it is a mere declaration that other generation plans can refer to.
  • checkpoint <checkpoint> - records/persists the state of the transformed model. It can either declare a checkpoint in place or refer to a declared checkpoint.
  • synchronize with <checkpoint> statement - instructs the generation plan to look up target nodes in the persisted models of the specified checkpoint without persisting its own nodes (read-only access to the checkpoint). This statement doesn't introduce any new state; it references a checkpoint declared elsewhere.

Specifying a generation plan for models

Modules that should have their models built following a generation plan need to enable the Custom generation facet and point to the actual desired generation plan:


Verifying the generation plan

The Show generation plan action in the models' pop-up menu will correctly take the generation plan into account when building the outline for the generation:

To view the original generation plan, based on generator priorities, that would be used without the explicit generation plan script, hold the Alt key while clicking the Show Generation Plan menu entry:

Note that the report states in the header that it is not the currently active plan.

Using DevKits to associate a generation plan

DevKits can associate a generation plan, as well.

First add dependencies on languages and solutions that the DevKit should be wrapping. Then specify the Generation plan from within the imported solutions, which will be associated with the DevKit. Any model that imports that DevKit will get the DevKit's associated generation plan applied to it.


Only one DevKit with a plan is allowed per model at the moment.

Cross-model generation

The model is the unit of generation in MPS. All entities in a single model are generated together, and references between the nodes can be resolved with reference macros and mapping labels. Mapping labels, however, are not accessible from other models by default. This complicates the generation of references that point to nodes in other models. Fortunately, regular mapping labels can support mutually independent generation of models with cross-references, provided the models share a generation plan. The mechanism leverages checkpoints to capture the intermediate transient models and then uses them for reference resolution.

In essence, to preserve cross-model references when generating multiple models, make sure your models share a generation plan. That generation plan must define checkpoints at the moments when the mapping labels used for cross-model reference resolution have been populated. The rest is taken care of automatically. Reference macros can resolve nodes from mapping labels through the usual genContext.get output by label and input (for nodes generated through reduction or root mapping rules) or genContext.get output for model (for nodes generated through conditional root mapping rules).


Linking checkpoint models


Models created at checkpoints now keep a reference to the previous checkpoint model in the sequence. This helps the Generator discover mapped nodes matching input that spans several generator phases.

Debug information in the checkpoint models

To ease debugging of cross-model generation scenarios, a dedicated root inside each checkpoint model lists the mapping label names along with pointers to the stored input and output nodes. Investigating the mapping labels exposed at each checkpoint can substantially help with debugging cross-model generation scenarios and fixing unresolved references. So next time your cross-model reference doesn't resolve, inspect the corresponding checkpoint model to see if there's indeed a label for your input.


A video on setting a generation plan for a solution as well as for a DevKit is available.

Generating language descriptor models

Generation plans have been enhanced to generate descriptor models for languages (known as <>@descriptor). The structure, textgen, typesystem, dataflow and constraints aspects are now generated with generation plans and use the new cross-model reference resolution mechanism.
Custom aspects defined by language authors can join the generation plan as well. If you have a custom aspect, make sure that its generator extends the generator of the jetbrains.mps.lang.descriptor language, as this is the way to get custom extensions activated for the plan.


Generating from Ant

The build language uses the Ant Generate task under the hood to transform models during the build process. This task exposes parameters familiar from the Generator settings page, configurable from the build script (parallel, threads, in-place, warnings):

  • strict generation mode
  • parallel generation with configurable number of threads
  • option to enable in-place transformation
  • option to control generation warnings/errors.

These options are also exposed in the build language through the BuildMps_GeneratorOptions concept, so that build scripts have more control over the process.


If you're feeling like it's time for more practical experience, check out the generator demos.
The demos contain examples of usage of all the concepts discussed above.



Defining A Typesystem For Your Language

This page describes the MPS type-system in great detail. If you would prefer a more lightweight introduction to defining your first type-system rules, consider checking out the Type-system cookbook.

If you would like to get familiar with the ways you can use the type-system from your code, you may also look at the Using the type-system chapter. 

What is a typesystem?

A typesystem is a part of a language definition assigning types to the nodes in the models written using the language. The typesystem language is also used to check certain constraints on nodes and their types. Information about types of nodes is useful for:

  • finding type errors
  • checking conditions on nodes' types during generation to apply only appropriate generator rules
  • providing information required for certain refactorings (e.g. for the "extract variable" refactoring)
  • and more


Any MPS node may serve as a type. To enable MPS to assign types to nodes of your language, you should create a language aspect for typesystem. The typesystem model for your language will be written in the typesystem language.

Inference Rules

The main concept of the typesystem language is an inference rule. An inference rule for a certain concept is mainly responsible for computing a type for instances of that concept.

An inference rule consists of a condition and a body. The condition determines whether the rule is applicable to a certain node. A condition may be of two kinds: a concept reference or a pattern. A rule with a condition in the form of a concept reference is applicable to every instance of that concept and its subconcepts. A rule with a pattern is applicable to nodes that match the pattern. A node matches a pattern if it has the same properties and references as the pattern, and if its children match the pattern's children. A pattern may also contain several variables, which match everything.

The body of an inference rule is a list of statements that are executed when the rule is applied to a node. The main kinds of statements in the typesystem language are those used for creating equations and inequations between types.

An inference rule may define an overrides block, which is a boolean flag telling the typechecker that, in case there are other inference rules applicable to the superconcepts of the concept specified in the condition, this inference rule takes precedence and all the rules for superconcepts are ignored. Version 3.3 added the possibility to use a code block instead of a static flag.

Starting with version 3.3, inference rules applicable to instances of node attributes have additional features that allow for overriding or amending the rules applied to the attributed node. This, for example, can be used to implement alternate type inference based on presence conditions, which can take into account parameters specified at the project or system level.

In case an inference rule is applicable to a node attribute, it is also possible to tell the typechecker that this rule supersedes the rules applicable to the attributed node, which are then ignored. The attributed node is accessible in all of the rule's code blocks as attributedNode.


Inference Methods

To avoid duplication, you may want to extract identical parts of several inference rules into a method. An inference method is just a plain Base Language method marked with the "@InferenceMethod" annotation. Several language constructions may only be used inside inference rules, replacement rules and inference methods: typeof expressions, equations and inequations, when concrete statements, type variable declarations and references, and invocations of inference methods. This restriction prevents such constructions from appearing in arbitrary methods, which may be called in arbitrary contexts, possibly outside of type checking.


A type-system rule of a sub-concept can override the rules defined on its super-concepts. If the overrides flag is set to false, the rule is added to the list of rules applied to a concept together with the rules defined for the super-concepts; if the flag is set to true, the overriding rule replaces the rules of the super-concepts in the rule engine, so they do not take effect. This applies to both Inference and NonTypeSystem rules.

Equations And Inequations

The main process performed by the type-system engine is solving equations and inequations among types. A language designer tells the engine which equations to solve by writing them in inference rules. To add an equation to the engine, the following statement is used:

expr1 :==: expr2, where expr1 and expr2 are expressions, which evaluate to a node.

Consider the following use case. You want to say that the type of a local variable reference is equal to the type of the variable declaration it points to. So, you write typeof (varRef) :==: typeof (varRef.localVariableDeclaration), and that's all. The typesystem engine will solve such equations automatically.

The above-mentioned expression typeof(expr) (where expr must evaluate to an MPS node) is a language construct, which returns a so-called type variable, which serves as a type of that node. Type variables become concrete types gradually during the process of equation solving.

In certain situations you want to say that a certain type doesn't have to exactly equal another type, but may also be a subtype or a supertype of that type. For instance, the type of the actual parameter of a method call does not necessarily have to be the same as the type of the method's formal parameter - it can be its subtype. For example, a method that requires an Object as a parameter may also be applied to a String.

To express such a constraint, you may use an inequation instead of an equation. An inequation expresses the fact that a certain type should be a subtype of another type. It is written as follows: expr1 :<=: expr2.

Weak And Strong Subtyping

A subtyping relationship is useful in several different situations: you want the type of an actual parameter to be a subtype of the formal parameter type, or the type of an assigned value to be a subtype of the variable's declared type; in method calls or field access operations you want the type of the operand to be a subtype of the method's declaring class.

Sometimes such demands are somewhat contradictory: consider, for instance, the two types int and Integer, which you want to be interchangeable when passing parameters of these types to a method: if a method is doSomething(int i), it is legal to call doSomething(1) as well as doSomething(new Integer(1)). But when these types are used as the type of the operand of, say, a method call, the situation is different: you shouldn't be able to call a method on an expression of type int, such as an integer constant. So we have to conclude that in one sense int and Integer are subtypes of one another, while in another sense they are not.

To resolve this tension, we introduce two subtyping relationships: weak and strong subtyping. Weak subtyping follows from strong subtyping: if a node is a strong subtype of another node, then it is also its weak subtype.

Then we can say about our example that int and Integer are weak subtypes of each other, but not strong subtypes. Assignment and parameter passing require only weak subtyping; method calls require strong subtyping.
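The int/Integer interchangeability described above can be reproduced in plain Java (the ancestor of BaseLanguage). The class and method names below are invented for this sketch; it only illustrates the "weak" side of the relationship, since the "strong" side (calling a method on a bare int) is a compile-time error and cannot be shown in running code:

```java
// Illustration (plain Java): int and Integer are interchangeable where only
// weak subtyping is required - parameter passing and assignment.
public class WeakStrongDemo {
    static int doSomething(int i) {       // declared with the primitive int
        return i + 1;
    }

    public static void main(String[] args) {
        int viaPrimitive = doSomething(1);                // int argument
        int viaBoxed = doSomething(Integer.valueOf(1));   // Integer is auto-unboxed
        Integer assigned = 5;                             // int value auto-boxed on assignment

        // Method calls require strong subtyping: Integer has methods, int does not.
        // "1.toString()" would not compile, but a boxed value works:
        String s = Integer.valueOf(1).toString();

        assert viaPrimitive == 2 && viaBoxed == 2;
        assert assigned == 5 && s.equals("1");
    }
}
```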

When you create an inequation in your typesystem, you may choose it to be a strong or a weak inequation. Subtyping rules, which state a subtyping relationship (see below), can also be either weak or strong. A weak inequation is written as :<=:, a strong inequation as :<<=:.

In most cases you want to state strong subtyping and check weak subtyping. If you are not sure which subtyping you need, use the weak one for inequations and the strong one for subtyping rules.

Subtyping Rules

When the typesystem engine solves inequations, it requires information about whether a type is a subtype of another type. But how does the typesystem engine know about that? It uses subtyping rules. Subtyping rules are used to express subtyping relationship between types. In fact, a subtyping rule is a function which, given a type, returns its immediate supertypes.

A subtyping rule consists of a condition (which can be either a concept reference or a pattern) and a body, which is a list of statements that compute and return a node or a list of nodes that are immediate supertypes of the given node. When checking whether some type A is a supertype of another type B, the typesystem engine applies subtyping rules to B and computes its immediate supertypes, then applies subtyping rules to those supertypes and so on. If type A is among the computed supertypes of type B, the answer is "yes".
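The engine's search can be sketched in plain Java: given a function standing in for the subtyping rules (mapping a type to its immediate supertypes), a breadth-first walk decides whether one type appears among another's transitive supertypes. The string-based type names and the toy hierarchy are invented for this sketch:

```java
import java.util.*;
import java.util.function.Function;

// Sketch of answering a subtyping query from "immediate supertypes" rules:
// walk the supertypes of B transitively and look for A among them.
public class SubtypingSketch {
    static boolean isSupertypeOf(String a, String b,
                                 Function<String, List<String>> immediateSupertypes) {
        Deque<String> queue = new ArrayDeque<>(List.of(b));
        Set<String> seen = new HashSet<>();
        while (!queue.isEmpty()) {
            String t = queue.poll();
            for (String s : immediateSupertypes.apply(t)) {
                if (s.equals(a)) return true;        // found A among B's supertypes
                if (seen.add(s)) queue.add(s);       // avoid revisiting shared supertypes
            }
        }
        return false;
    }

    // A toy hierarchy: ArrayList -> List -> Collection -> Iterable
    static final Map<String, List<String>> RULES = Map.of(
            "ArrayList", List.of("List"),
            "List", List.of("Collection"),
            "Collection", List.of("Iterable"));

    static List<String> supers(String t) {
        return RULES.getOrDefault(t, List.of());
    }

    public static void main(String[] args) {
        assert isSupertypeOf("Iterable", "ArrayList", SubtypingSketch::supers);
        assert !isSupertypeOf("ArrayList", "Iterable", SubtypingSketch::supers);
    }
}
```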

By default, subtyping stated by subtyping rules is a strong one. If you want to state only weak subtyping, set "is weak" property of a rule to "true".

Comparison Inequations And Comparison Rules

Suppose you want to write a rule for EqualsExpression (the == operator in Java, BaseLanguage and some other languages): you want the left and right operands of EqualsExpression to be comparable, that is, either the type of the left operand should be a (non-strict) subtype of the type of the right operand, or vice versa. To express this, you write a comparison inequation, of the form expr1 :~: expr2, where expr1 and expr2 are expressions that represent types. Such an inequation is fulfilled if expr1 is a subtype of expr2 (expr1 <: expr2), or expr2 <: expr1.

Then consider that, say, any two Java interface types should also be comparable, even if the interfaces are not subtypes of one another. That is because one can always write a class that implements both interfaces, so variables of the two interface types can contain the same node, and a variable of one interface type can be cast to any other interface. Hence an equation, cast, or instanceof expression with both types being interface types should be legal (and, for example, in Java it is).

To state such a comparability, which does not stem from subtyping relationships, you should use comparison rules. A comparison rule consists of two conditions for the two applicable types and a body which returns true if the types are comparable or false if they are not.

Here's the comparison rule for interface types:

comparison rule interfaces_are_comparable

applicable for  concept = ClassifierType as classifierType1 , concept = ClassifierType as classifierType2
applicable always
overrides false
rule {
  if (classifierType1.classifier.isInstanceOf(Interface) &&
      classifierType2.classifier.isInstanceOf(Interface)) {
    return true;
  } else {
    return false;
  }
}

A quotation is a language construct that lets you easily create a node with a required structure. Of course, you can create a node using the smodelLanguage and then populate it with appropriate children, properties and references by hand, using the same smodelLanguage. However, there's a simpler - and more visual - way to accomplish this.

A quotation is an expression whose value is the MPS node written inside the quotation. Think of a quotation as a "node literal", a construction similar to numeric constants and string literals. That is, you write a literal when you statically know what value you mean. So inside a quotation you don't write an expression that evaluates to a node; you rather write the node itself. For instance, the expression 2 + 3 evaluates to 5, while the expression < 2 + 3 > (the angle brackets being quotation braces) evaluates to a node PlusExpression with leftOperand being an IntegerConstant 2 and rightOperand being an IntegerConstant 3.

(See the Quotations documentation for more details on quotations, anti quotations and light quotations)


Since a quotation is a literal, its value must be known statically. In cases when you know some parts of your node (i.e. children, referents or properties) only dynamically, i.e. those parts can only be evaluated at runtime and are not known at design time, you can't use a plain quotation to create a node with such parts.

The good news, however, is that if you know most of a node statically and only want to replace several parts with dynamically evaluated nodes, you can use antiquotations. An antiquotation can be of 4 kinds: child, reference, property and list antiquotation. Each contains an expression that is evaluated dynamically, and its result replaces a part of the quoted node. Child and reference antiquotations evaluate to a node, a property antiquotation evaluates to a string, and a list antiquotation evaluates to a list of nodes.

For instance, you want to create a ClassifierType with the class ArrayList, but its type parameter is known only dynamically, for instance by calling a method, say, "computeMyTypeParameter()".

Thus, you write the following expression: < ArrayList < %( computeMyTypeParameter() )% > >. The construction %(...)% here is a node antiquotation.

You may also antiquotate reference targets and property values, with ^(...)^ and $(...)$, respectively; or a list of children of one role, using *(...)*.

a) If you want to replace a node somewhere inside a quoted node with a node evaluated by an expression, you use a node antiquotation, that is %( )%. As you may guess, there is no sense in replacing the whole quoted node with an antiquotation, because in such a case you could simply write the expression directly in your program.

So node antiquotations are used to replace children, grandchildren, great-grandchildren and other descendants of a quoted node. Thus, the expression inside the antiquotation should return a node. To write such an antiquotation, position your caret on a cell for a child and type "%".

b) If you want to replace a target of a reference from somewhere inside a quoted node with a node evaluated by an expression, you use reference antiquotation, that is ^(...)^ . To write such an antiquotation, position your caret on a cell for a referent and type "^".

c) If you want to replace a child (or a more deeply located descendant) that is in a role of multiple cardinality, and for that reason may want to replace it not with a single node but with several, use a child list antiquotation (simply list antiquotation for brevity), *( )*. An expression inside a list antiquotation should return a list of nodes, that is, a value of type nlist<..> or a compatible type (e.g. list<node<..>> is fine, too). To write such an antiquotation, position your caret on a cell for a child inside a child collection and type "*". You cannot use it on an empty child collection, so before you press "*" you have to enter a single child inside it.

d) If you want to replace a property value of a quoted node with a dynamically calculated value, use a property antiquotation, $( )$. The expression inside should return a string, which will become the value of the antiquoted property of the quoted node. To write such an antiquotation, position your caret on a cell for a property and type "$".


Examples Of Inference Rules

Here are the simplest basic use cases of an inference rule:

  • to assign the same type to all instances of a concept (useful mainly for literals):

    applicable to concept = StringLiteral as nodeToCheck
      typeof (nodeToCheck) :==: < String >
  • to equate a type of a declaration and the references to it (for example, for variables and their usages):

    applicable to concept = VariableReference as nodeToCheck
      typeof (nodeToCheck) :==: typeof (nodeToCheck.variableDeclaration)
  • to give a type to a node with a type annotation (for example, type of a variable declaration):

    applicable to concept = VariableDeclaration as nodeToCheck
      typeof (nodeToCheck) :==: nodeToCheck.type
  • to establish a restriction for a type of a certain node: useful for actual parameters of a method, an initializer of a type variable, the right-hand part of an assignment, etc.

    applicable to concept = AssignmentExpression as nodeToCheck
      typeof (nodeToCheck.rValue) :<=: typeof (nodeToCheck.lValue)

Type Variables

Inside the typesystem engine during type evaluation, a type may be either a concrete type (a node) or a so-called type variable. Also, it may be a node which contains some type variables as its children or further descendants. A type variable represents an undefined type, which may then become a concrete type, as a result of solving equations that contain this type variable.

Type variables appear at runtime mainly as a result of the "typeof" operation, but you can also create them manually if you want to. There's a statement called TypeVarDeclaration in the typesystem language to do so. You write it as "var T", "var X" or "var V", i.e. "var" followed by the name of a type variable. You may then use your variable, for example, in antiquotations to create a node with type variables inside.
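The way equation solving gradually turns type variables into concrete types can be sketched with a small substitution map. The following plain-Java sketch (all names invented) is not the MPS engine, just the core idea: each equation either binds a still-free variable or checks two concrete types for equality:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of equation solving over type variables: a substitution map
// from variable names to types. Here, names starting with an uppercase letter
// stand for type variables; lowercase names stand for concrete types.
public class UnifySketch {
    final Map<String, String> subst = new HashMap<>();

    boolean isVar(String t) { return Character.isUpperCase(t.charAt(0)); }

    String resolve(String t) {              // follow bindings until concrete or free
        while (isVar(t) && subst.containsKey(t)) t = subst.get(t);
        return t;
    }

    boolean equate(String a, String b) {    // add the equation a :==: b
        a = resolve(a); b = resolve(b);
        if (a.equals(b)) return true;
        if (isVar(a)) { subst.put(a, b); return true; }   // bind a free variable
        if (isVar(b)) { subst.put(b, a); return true; }
        return false;                        // two different concrete types: type error
    }

    public static void main(String[] args) {
        UnifySketch u = new UnifySketch();
        // typeof(varRef) :==: typeof(decl); typeof(decl) :==: string
        assert u.equate("T_varRef", "T_decl");
        assert u.equate("T_decl", "string");
        assert u.resolve("T_varRef").equals("string");    // the variable became concrete
        assert !u.equate("T_varRef", "int");              // a conflicting equation fails
    }
}
```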

Example: an inference rule for "for each" loop. A "for each" loop in Java consists of a loop body, an iterable to iterate over, and a variable into which the next member of an iterable is assigned before the next iteration. An iterable should be either an instance of a subclass of the Iterable interface, or an array. To simplify the example, we don't consider the case of the iterable being an array. Therefore, we need to express the following: an iterable's type should be a subtype of an Iterable of something, and the variable's type should be a supertype of that very something. For instance, you can write the following:

for (String s : new ArrayList<String>(...)) {

or the following:

for (Object o : new ArrayList<String>(...)) {

Iterables in both examples above have the type ArrayList<String>, which is a subtype of Iterable<String>. The variables have types String and Object, respectively, both of which are supertypes of String.

As we see, an iterable's type should be a subtype of an Iterable of something, and the variable's type should be a supertype of that very something. But how to say "that very something" in the typesystem language? The answer is, it's a type variable that we use to express the link between the type of an iterable and the type of a variable. So we write the following inference rule:

applicable for concept = ForeachStatement as nodeToCheck
  var T ;
  typeof ( nodeToCheck . iterable ) :<=:  Iterable < %( T )% >;
  typeof ( nodeToCheck . variable ) :>=:  T ;

Meet and Join types

Meet and Join types are special types that are treated differently by the typesystem engine. Technically, Meet and Join types are instances of the MeetType and JoinType concepts, respectively. They may have an arbitrary number of argument types, which can be any nodes. Semantically, a Join type is a supertype of all its arguments, so a node of type Join(T1 | T2 | .. | Tn) can be regarded as having type T1 or type T2 or ... or type Tn. A Meet type is a subtype of each of its arguments, so a node of type Meet(T1 & T2 & .. & Tn) inhabits type T1 and type T2 and ... and type Tn. The argument separators of Join and Meet types (i.e. "|" and "&") are chosen to serve as mnemonics.

Meet and Join types are very useful in certain situations. Meet types appear even in MPS BaseLanguage (which is very close to Java). For instance, the type of the following expression:

true ? new Integer(1) : "hello"

is Meet(Serializable & Comparable), because both Integer (the type of new Integer(1)) and String (the type of "hello") implement both Serializable and Comparable.
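The same claim can be checked in plain Java, where the conditional expression is assignable to both interface types; a minimal illustration:

```java
import java.io.Serializable;

// Plain-Java counterpart of the Meet(Serializable & Comparable) example:
// the conditional expression has an intersection type and is therefore
// assignable to both Serializable and Comparable.
public class MeetDemo {
    public static void main(String[] args) {
        boolean flag = true;
        Serializable s = flag ? Integer.valueOf(1) : "hello";  // both branches are Serializable
        Comparable<?> c = flag ? Integer.valueOf(1) : "hello"; // both branches are Comparable
        assert s.equals(1);
        assert c.equals(1);
    }
}
```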

A Join type is useful when, say, you want some function-like concept to return values of two different types (a node or a list of nodes, for instance). Then you should make the type of its invocation Join(node<> | list<node<>>).

You can create Meet and Join types yourself if you need to. Use quotations to create them, just as with other types and nodes. The concepts are MeetType and JoinType, as mentioned above.

"When Concrete" Blocks

Sometimes you may want not only to write equations and inequations for certain types, but also to perform some complex analysis of the type structure. That is, to inspect the inner structure of a concrete type: its children, children of children, referents, etc.

It may seem that one could just write typeof(some expression) and then analyze this type. The problem, however, is that one can't simply inspect the result of a "typeof" expression, because it may still be a type variable at that moment. Although a type variable usually becomes a concrete type at some point, it can't be guaranteed to be concrete at any given point in your typesystem code.

To solve such a problem you can use a "when concrete" block.

when concrete ( expr as var ) {
  ...
}

Here, "expr" is an expression that evaluates to the type you want to inspect (not to a node whose type you want to inspect), and "var" is a variable to which the result is assigned. This variable may then be used inside the body of the "when concrete" block. The body is a list of statements that are executed only when the type denoted by "expr" becomes concrete, so inside the body of a when concrete block you may safely inspect its children, properties, etc.

If you write a when concrete block and look into its inspector, you will see two options: "is shallow" and "skip error". If you set "is shallow" to "true", the body of your when concrete block will be executed as soon as the expression becomes shallowly concrete, i.e. it is no longer a type variable itself but may still have type variables as children or referents. Normally, if the expression in the condition of a when concrete block never becomes concrete, an error is reported. If it is normal for the type denoted by your expression to never become concrete, you can disable this error reporting by setting "skip error" to true.

Overloaded Operators

Sometimes an operator (like +, -, etc.) has different semantics when applied to different values. For example, + in Java means addition when applied to numbers, and it means string concatenation if one of its operands is of type String. When the semantics of an operator depends on the types of its operands, it's called operator overloading. In fact, we have many different operators denoted by the same syntactic construction.
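Java itself exhibits both meanings of +, which is what BaseLanguage inherits. A minimal illustration (method names invented):

```java
// The same "+" token denotes two different operations, chosen by operand types.
public class PlusOverload {
    static int addInts(int a, int b) { return a + b; }          // numeric addition
    static String concat(String a, int b) { return a + b; }     // string concatenation

    public static void main(String[] args) {
        assert addInts(1, 2) == 3;
        assert concat("a", 1).equals("a1");   // the int operand is converted to a string
    }
}
```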

Let's try to write an inference rule for a plus expression. First, we should inspect the types of the operands: if we don't know whether the operands are numbers or Strings, we cannot choose the type of the operation (it will be either a number or a String). To be sure the types of the operands are concrete, we surround our code with two when concrete blocks, one for the left operand's type and another for the right operand's type.

when concrete(typeof(plusExpression.leftExpression) as leftType) {
  when concrete(typeof(plusExpression.rightExpression) as rightType) {
    // both leftType and rightType are concrete here
  }
}

Then, we can write some inspections, where we check whether our types are strings or numbers, and choose the appropriate type for the operation. But there is a problem here: if someone writes an extension of BaseLanguage in which they want to use the plus expression for the addition of some other entities, say matrices or dates, they won't be able to, because the types for the plus expression are hard-coded in the already existing inference rule. So we need an extension point that allows language developers to overload existing binary operations.

Typesystem language has such an extension point. It consists of:

  • overloading operation rules and
  • a construct which provides a type of operation by operation and types of its operands.

For instance, a rule for PlusExpression in BaseLanguage is written as follows:

when concrete(typeof(plusExpression.leftExpression) as leftType) {
  when concrete(typeof(plusExpression.rightExpression) as rightType) {
    node<> opType = operation type(plusExpression, leftType, rightType);
    if (opType.isNotNull) {
      typeof(plusExpression) :==: opType;
    } else {
      error "+ can't be applied to these operands" -> plusExpression;
    }
  }
}
Here, "operation type" is a construct which computes the type of an operation from the operation itself and the types of its left and right operands. For this purpose it uses overloaded operation rules.

Overloaded Operation Rules

Overloaded operation rules reside within a root node of concept OverloadedOpRulesContainer. Each overloaded operation rule consists of:

  • an applicable operation concept, i.e. a reference to a concept of operation to which a rule is applicable (e.g. PlusExpression);
  • left and right operand type restrictions, each containing a type which restricts the type of the left/right operand, respectively. A restriction can be either exact or not: for the rule to be applicable, the type of an operand must be exactly the type in the restriction (if the restriction is exact) or a subtype of it (if not exact);
  • a function itself, which returns a type of the operation knowing the operation concept and the left and right operand types.

Here's an example of one of the overloaded operation rules for PlusExpression in BaseLanguage:

operation concept: PlusExpression
left operand type: <Numeric>.descriptor is exact: false
right operand type: <Numeric>.descriptor is exact: false
operation type:
(operation, leftOperandType, rightOperandType)->node<> {
  if (leftOperandType.isInstanceOf(NullType) || rightOperandType.isInstanceOf(NullType)) {
    return null;
  } else {
    return Queries.getBinaryOperationType(leftOperandType, rightOperandType);
  }
}
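The rule-matching idea — restrict by operation concept and by operand types, with exact or subtype matching — can be sketched in plain Python. The rule table, subtype relation, and type names below are illustrative stand-ins for the MPS machinery, not actual MPS API:

```python
# Hedged sketch of overloaded-operation rule lookup. Everything here
# (type names, rule table, subtype chain) is hypothetical.
SUPERTYPES = {"int": "number", "float": "number", "number": "object",
              "string": "object"}

def is_subtype(t, s):
    while t is not None:
        if t == s:
            return True
        t = SUPERTYPES.get(t)
    return False

# (operation, left restriction, left exact, right restriction, right exact, result fn)
RULES = [
    ("+", "number", False, "number", False,
     lambda l, r: "float" if "float" in (l, r) else "int"),
    ("+", "string", False, "object", False, lambda l, r: "string"),
]

def operation_type(op, left, right):
    for rop, lres, lexact, rres, rexact, result in RULES:
        if rop != op:
            continue
        lok = (left == lres) if lexact else is_subtype(left, lres)
        rok = (right == rres) if rexact else is_subtype(right, rres)
        if lok and rok:
            return result(left, right)
    return None  # no applicable rule: the caller reports a type error

print(operation_type("+", "int", "float"))   # float
print(operation_type("+", "string", "int"))  # string
```

An extending language would contribute to the rule table rather than edit the original inference rule, which is exactly the extension point the overloaded operation rules provide.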

Replacement Rules


Consider the following use case: your language has function types of the form (a1, a2, ..., aN) -> r, where a1, ..., aN and r are types: aK is the type of the K-th function argument and r is the type of the function's result. You want your function types to be covariant in their return types and contravariant in their argument types. That is, a function type F = (T1, ..., TN) -> R is a subtype of a function type G = (S1, ..., SN) -> Q (written F <: G) if and only if R <: Q (covariance in the return type) and, for every K from 1 to N, TK :> SK (contravariance in the argument types).

The problem is, how to express covariance and contravariance in the typesystem language? Using subtyping rules you may express covariance by writing something like this:

nlist<> result = new nlist<>;
for (node<> returnTypeSupertype : immediateSupertypes(functionType.returnType)) {
  node<FunctionType> ft = functionType.copy;
  ft.returnType = returnTypeSupertype;
  result.add(ft);
}
return result;

Okay, we have collected all immediate supertypes for a function's return type and have created a list of function types with those collected types as return types and with original argument types. But, first, if we have many supertypes of return type, it's not very efficient to perform such an action each time we need to solve an inequation, and second, although now we have covariance by function's return type, we still don't have contravariance by function's arguments' types. We can't collect immediate subtypes of a certain type because subtyping rules give us supertypes, not subtypes.

In fact, we just want to express the above-mentioned property: F = (T1, ..., TN) -> R <: G = (S1, ..., SN) -> Q if and only if R <: Q and, for every K from 1 to N, TK :> SK. For this and similar purposes the typesystem language has a notion called a "replacement rule."
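The variance property can be made concrete with a short sketch. The subtype table and type names below are hypothetical, not MPS code; the function implements exactly the rule stated above:

```python
# Hedged sketch of the variance rule for function types:
# (T1..TN) -> R  <:  (S1..SN) -> Q   iff   R <: Q and each SK <: TK.
SUPERTYPES = {"int": "number", "number": "object"}

def is_subtype(t, s):
    while t is not None:
        if t == s:
            return True
        t = SUPERTYPES.get(t)
    return False

def function_subtype(params_f, ret_f, params_g, ret_g):
    if len(params_f) != len(params_g):
        return False                      # different parameter numbers
    if not is_subtype(ret_f, ret_g):
        return False                      # covariant in the return type
    return all(is_subtype(s, t)           # contravariant in the arguments
               for t, s in zip(params_f, params_g))

# (number) -> int  <:  (int) -> number: argument widens, return narrows.
print(function_subtype(["number"], "int", ["int"], "number"))  # True
print(function_subtype(["int"], "number", ["number"], "int"))  # False
```

A replacement rule expresses the same relationship declaratively, by emitting one inequation per return type and parameter pair instead of computing a boolean directly.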

What's a replacement rule?

A replacement rule provides a convenient way to solve inequations. While the standard way is to transitively apply subtyping rules to the should-be-subtype until the should-be-supertype is found among the results (or the search is exhausted), a replacement rule, if applicable to an inequation, removes the inequation and then executes its body (which usually contains "create equation" and "create inequation" statements).


A replacement rule for the above-mentioned example is written as follows:

replacement rule FunctionType_subtypeOf_FunctionType

applicable for concept = FunctionType as functionSubType <: concept = FunctionType as functionSuperType

rule {
  if (functionSubType.parameterType.count != functionSuperType.parameterType.count) {
    error "different parameter numbers" -> equationInfo.getNodeWithError();
    return;
  }
  functionSubType.returnType :<=: functionSuperType.returnType;
  foreach (node<> paramType1 : functionSubType.parameterType;
           node<> paramType2 : functionSuperType.parameterType) {
    paramType2 :<=: paramType1;
  }
}
Here we say that the rule is applicable to a should-be-subtype of concept FunctionType and a should-be-supertype of concept FunctionType. The body of the rule ensures that the numbers of parameter types of the two function types are equal; otherwise it reports an error and returns. If they are equal, the rule creates an inequation between the return types and an appropriate inequation for each pair of parameter types.

Another simple example of replacement rule usage is a rule which states that the Null type (the type of the null literal) is a subtype of every type except the primitive ones. Of course, we can't write a subtyping rule for the Null type which returns a list of all types. Instead, we write the following replacement rule:

replacement rule any_type_supertypeof_nulltype

applicable for concept = NullType as nullType <: concept = BaseConcept as baseConcept

rule {
  if (baseConcept.isInstanceOf(PrimitiveType)) {
    error "null type is not a subtype of primitive type" -> equationInfo.getNodeWithError();
  }
}
This rule is applicable to any should-be-supertype and to those should-be-subtypes which are Null types. The only thing this rule does is check whether the should-be-supertype is an instance of the PrimitiveType concept. If it is, the rule reports an error. If it is not, the rule does nothing, so the inequation to solve is simply removed from the typesystem engine with no further effects.

Different semantics

The semantics of a replacement rule, as explained above, is to replace an inequation with some other equations and inequations, or to perform some other actions, when the rule is applied. These semantics do not really state that a certain type is a subtype of another type under some conditions; they just define how to solve an inequation between those two types.

For example, suppose that during generation you need to check whether some statically unknown type is a subtype of String. What should the engine answer when the type to inspect is the Null type? The replacement rule above could answer the question, but the semantics described so far is of no use here: there is no inequation to solve, only a yes-or-no question to answer. With function types it is even worse, because the rule says we should create new inequations; what would we do with them in this use case?

To make replacement rules usable when we want to check whether a type is a subtype of another type, replacement rules are given different semantics in this case.

These semantics are as follows: each "add equation" statement is treated as a check of whether two nodes match; each "add inequation" statement is treated as a check of whether one node is a subtype of another; and each error statement is treated as "return false."

Consider the above replacement rule for function types:

replacement rule FunctionType_subtypeOf_FunctionType

applicable for concept = FunctionType as functionSubType <: concept = FunctionType as functionSuperType

rule {
  if (functionSubType.parameterType.count != functionSuperType.parameterType.count) {
    error "different parameter numbers" -> equationInfo.getNodeWithError();
    return;
  }
  functionSubType.returnType :<=: functionSuperType.returnType;
  foreach (node<> paramType1 : functionSubType.parameterType;
           node<> paramType2 : functionSuperType.parameterType) {
    paramType2 :<=: paramType1;
  }
}

Under the alternative semantics, it is treated as follows:

boolean result = true;
if (functionSubType.parameterType.count != functionSuperType.parameterType.count) {
  result = false;
  return result;
}
result = result && isSubtype(functionSubType.returnType <: functionSuperType.returnType);
foreach (node<> paramType1 : functionSubType.parameterType;
         node<> paramType2 : functionSuperType.parameterType) {
  result = result && isSubtype(paramType2 <: paramType1);
}
return result;

So, as we can see, the alternative semantics is quite an intuitive mapping between creating equations/inequations and performing checks.

Type-system, trace

MPS provides a handy debugging tool that gives you insight into how the type-system engine evaluates the type-system rules on a particular problem and calculates the types. You invoke it from the context menu or by a keyboard shortcut (Control + Shift + X / Cmd + Shift + X):

The console has two panels. The one on the left shows the sequence of rules as they were applied, while the one on the right gives you a snapshot of the type-system engine's working memory at the time of evaluating the rule selected in the left panel:

Type errors are marked in red inside the Type-system Trace panel:

Additionally, if you spot an error in your code, use Control + Alt + Click / Cmd + Alt + Click to navigate quickly to the rule that fails to validate the types:


Advanced features of typesystem language

Overriding default type node

When a type is assigned to a program node as a result of either applying an equation or resolving an inequality, by default the node representing the type is taken as is. That is to say, it may be a node in the program or a node created with a quotation; in both cases, the result of evaluating the expression that specifies the type to be assigned by the equation or inequality statement literally represents the target type. This feature allows you to substitute another node to represent the type instead.

For example, one might decide to use different types for different program configurations, such as using int or long depending on what the task requires. This differs from simply using the generator to produce the correct "implementation" type: the substitution is done at the time typechecking is performed, so possible errors can be caught early.

In its simplest form the type substitution can be used by creating an instance of Substitute Type Rule in the typesystem model.

substitute type rule substituteType_MyType {
  applicable for concept = MyType as mt
  substitute {
    if (mt.isConditionSatisfied()) {
      return new node<IntegerType>;
    }
  }
}


The Substitute Type Rule is applicable to nodes that represent types. Whenever a new type is introduced by the typechecker, it searches for applicable substitution rules and executes them. The rule must either return an instance of node<> as the substitution, or a null value, in which case the original node is used to represent the type (the default behaviour).

Another way to override the types used by the typechecker is through node attributes. If the original type node contains a node attribute, the typechecker first tries to find a Substitute Type Rule applicable to the attribute. This way one can override type nodes even for languages whose implementation is sealed.

substitute type rule substituteType_SubstituteAnnotation {
  applicable for concept = SubstituteAnnotation as substituteAnnotation

  substitute {
    if (substituteAnnotation.condition.isSatisfied(attributedNode)) {
      return substituteAnnotation.substitute;
    }
  }
}

The rule above is defined for the attribute node, and it is the attribute node that is passed to the rule as the explicit parameter. The rule can check whether the condition for substituting the type node is satisfied, and it can also access the attributed node representing the original type via the attributedNode expression.

One caveat concerns the case when a type node just returned from a substitution rule is itself subject to another substitution. The typechecker applies all matching substitution rules exhaustively, until no more substitutions are available; only then does the type appear in the internal model of the typechecker. Some precautions are taken to prevent the typechecker from entering an endless cycle of substitutions, such as A -> B -> A, but these are not perfect, so be careful not to introduce infinite cycles.

Check-only inequations

Basically, inequations may affect nodes' types: for instance, if one part of an inequation is a type variable, it may become a concrete type because of this inequation. Sometimes, however, one does not want a certain inequation to create types, only to check whether the inequation is satisfied. We call such inequations check-only inequations. To mark an inequation as check-only, go to the inequation's inspector and set the "check-only" flag to "true". To visually distinguish such inequations, the "less than or equals" sign of a check-only inequation is gray, while for normal ones it is black, so you can see whether an inequation is check-only without opening its inspector.


When writing a generator for a certain language (see generator), one may want to ask for the type of a certain node in generator queries. When the generator generates a model, such a query makes the typesystem engine do some typechecking to find out the required type. Performing a full typecheck of the node's containing root just to obtain the node's type is expensive and almost always unnecessary. In most cases, the typechecker should check only the given node. In more difficult cases, obtaining the type of a given node may require checking its parent or perhaps a further ancestor. The typechecking engine therefore first checks only the given node; if the computed type is not fully concrete (i.e. it contains one or more type variables), the typechecker checks the node's parent, and so on.
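The ancestor-climbing strategy just described can be sketched as follows. The Node class, the check callback, and the concreteness test are hypothetical stand-ins, not the actual MPS engine:

```python
# Hedged sketch: check a node; if its computed type still contains type
# variables, re-check with a wider scope (the parent), climbing ancestors
# until the type is concrete or the root is reached.
class Node:
    def __init__(self, parent=None):
        self.parent = parent

def compute_type(node, check, is_fully_concrete):
    """check(scope) -> dict mapping nodes to (possibly partial) types."""
    scope, t = node, None
    while scope is not None:
        t = check(scope).get(node)
        if t is not None and is_fully_concrete(t):
            return t
        scope = scope.parent          # widen the checked environment
    return t                          # best effort: may contain variables

# Toy example: in isolation the initializer's type is only a variable;
# checked together with its parent declaration it becomes concrete.
decl = Node()
init = Node(parent=decl)
def check(scope):
    return {init: "?T"} if scope is init else {init: "int"}
print(compute_type(init, check, lambda t: not t.startswith("?")))  # int
```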

Sometimes there's an even more complex case: the type of a certain node computed in isolation is fully concrete, and the type of the same node in a certain environment is also fully concrete, but differs from the first one. In such a case the above-mentioned algorithm breaks, returning the type of the node as if it were isolated, which is not the correct type for the given node.

To solve this kind of problem, you can give the typechecker some hints. Such hints are called dependencies: they express the fact that a node's type depends on some other node. Thus, when computing the type of a certain node during generation, if the node has dependencies, they will be checked as well, so the node is type-checked in an appropriate environment.

A dependency consists of a "target" concept (the concept of the node being checked, whose type depends on some other node), an optional "source" concept (the concept of the other node on which the type depends), and a query which returns the dependencies for the node being checked, i.e. the query returns a node or a set of nodes.

For example, sometimes the type of a variable initializer should be checked together with the enclosing variable declaration to obtain the correct type. A dependency which implements such behavior may be written as follows:

target concept: Expression
find source: (targetNode)->join(node<> | set<node<>>) {
  if (targetNode.getRole_().equals("initializer")) {
    return targetNode.parent;
  }
  return null;
}
source concept (optional): <auto>

That means the following: if the typechecker is asked for the type of a certain Expression during generation, it will check whether the expression is in the initializer role; if it is, it will indicate that not only the given Expression but also its parent should be checked to get the correct type for the given Expression.

Overriding type of literal or expression

In addition to type substitution rules, which are only applicable to types, we introduce support for attributes in the inference rules. 

Inference rules

Literals or expressions usually have associated type inference rules that get triggered when the typechecker requires the type of the node in question. The rules have a mechanism allowing subconcepts to extend or override a predefined rule.

rule typeof_IntLiteral {
  applicable for concept = IntLiteral as nodeToCheck
  applicable always
  overrides true
  do {
    typeof(nodeToCheck) :==: <integer>;
  }
}

Inference rules for node attributes

If a node has one or more attributes, the inference rules applicable to these attributes are applied before the rules applicable to the node itself. The process of applying the inference rules can be described with the following pseudo code.

lookup-inference-rules(node) :
  let skipAttributed = false
  foreach a in attributesOf(node) do
    if hasInferenceRuleFor(a) then
      let rule = getInferenceRuleFor(a)
      yield rule
      if isSuperceding(rule) then
        let skipAttributed = true
      end if
      if isOverriding(rule) then
        break foreach loop
      end if
    end if
  end do

  if skipAttributed then
    return
  end if

  /* proceed as usual */
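A runnable rendering of the pseudo code above may make the interplay of the two flags clearer. The rule records below are hypothetical stand-ins for MPS inference-rule declarations:

```python
# Hedged Python rendering of lookup-inference-rules. Each attribute rule
# is modeled as (rule_name, overrides, supercedes_attributed).
def lookup_inference_rules(attribute_rules, node_rules):
    rules, skip_attributed = [], False
    for rule, overrides, supercedes in attribute_rules:
        rules.append(rule)
        if supercedes:
            skip_attributed = True   # suppress the node's own rules
        if overrides:
            break                    # later attribute rules are skipped
    if not skip_attributed:
        rules.extend(node_rules)     # proceed as usual
    return rules

# A superceding attribute rule replaces the attributed node's rule:
print(lookup_inference_rules([("pca_rule", False, True)],
                             ["typeof_IntLiteral"]))  # ['pca_rule']
```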

An example of an inference rule applicable to a node attribute shows how a presence condition can alter the type of a literal. Note that in this example the type of the annotated literal is affected by both this inference rule and any other inference rule applicable to the node.

rule typeof_Literal {
  applicable for concept = PresenceConditionAnnotation as pca
  applicable always
  overrides false
  supercedes attributed false
  do {
    typeof(pca.parent) :<=: pca.alternativeNode;
  }
}

Conditionally overriding type inference

Keeping in mind that the conditions under which the user might want to override the type inference via attributes depend on the configuration, we don't always want to override the default type.

rule typeof_Literal {
  applicable for concept = PresenceConditionAnnotation as pca
  applicable always
  supercedes attributed {
    // condition determining whether to supercede the attributed node's rules
  }
  do {
    typeof(attributedNode) :==: pca.replacementType;
  }
}

Checking rules

Checking (or non-typesystem) rules inspect the model, searching for known error patterns in code, and report them to the user. This kind of pre-compilation code inspection is generally known as static code analysis. Error patterns in typical static code analysis tools fall into several categories, such as correctness problems, multi-threaded correctness, I18N problems, vulnerability-increasing errors, styling issues, performance problems, etc. The found issues are reported to the user either on demand, through an interactive report:

or on-the-fly, directly in the editor, by colorful symbols and code underlining:


MPS distinguishes problems by severity:

  • errors - displayed in red
  • warnings - displayed in yellow
  • infos - displayed in gray

The jetbrains.mps.lang.typesystem language offers corresponding statements that emit these problem categories together with their description and the node to highlight. The additional ensure statement gives the user a more succinct syntax to report an error in case a condition is not met:

Checking rules typically check for one or a few related issues in a given node or a small part of the model and report to the user, if a problem is discovered:


A quick-fix provides a single model-transforming function that automatically eliminates the reported problem:

A quick-fix must provide a description to represent it in the Intentions context menu, unless it is only ever referred to from callers with apply immediately set to true. A quick-fix may also declare fields to hold reused values, and it can accept arguments from the caller.

Invoking quick-fixes

A quick-fix may be associated with each reported problem through the Inspector tool window using the intention to fix:

Normally the user invokes the quick-fix through the Intentions context menu, which is displayed after pressing the Alt + Enter key shortcut. If the apply immediately flag is set, however, MPS will run the associated quick-fix as soon as the problem is discovered during on-the-fly analysis without waiting for the user trigger.

The two other optional properties configured through the Inspector are needed less frequently:

  • node feature to highlight - specifies a node's property, child or reference to highlight as the source of the problem, instead of highlighting the whole node
  • foreign message source - when a user clicks (Control/Cmd + Alt + click) on the reported error in the editor, she is taken to the Checking rule's error/warning/info/ensure command that raised that error. With the foreign message source property you can override this behavior and provide your own node that the user will be taken to upon clicking on the error.



Using a typesystem

If you have defined a typesystem for a language, a typechecker will automatically use it in editors to highlight opened nodes with errors and warnings. You may also want to use the information about types in queries, such as editor actions, generator queries, etc. You may want to use the type of a node, know whether a certain type is a subtype of another, or find a supertype of a type that has a given form.

Type Operation

You can obtain the type of a node in your queries using the type operation: just write <expr>.type, where <expr> is an expression that evaluates to a node.

Do not use the type operation inside inference rules and inference methods! Inference rules are used to compute types, while the type operation returns an already-computed type.

Is Subtype expression

To check whether one type is a subtype of another, use the isSubtype expression. Write isSubtype(type1 :< type2) or isStrongSubtype(type1 :<< type2); it will return true if type1 is a subtype of type2, or a strong subtype of type2, respectively.

Coerce expression

The result of a coerce expression is a boolean value which says whether a certain type may be coerced to a given form, i.e. whether this type has a supertype of a given form (one satisfying a certain condition). The condition can be written either as a reference to a concept declaration, meaning the sought-for supertype should be an instance of this concept, or as a pattern which the sought-for supertype must match.
A coerce expression is written coerce(type :< condition) or coerceStrong(type :<< condition), where the condition is as discussed above.

Coerce Statement

A coerce statement consists of a list of statements, which are executed if a certain type can be coerced to a certain form. It is written as follows:

coerce (type :< condition) {
  // executed if the type can be coerced to the given form
} else {
  // executed otherwise
}

If a type can be coerced so as to satisfy a condition, the first (if) block will be executed, otherwise the else block will be executed. The supertype to which a type is coerced can be used inside the first block of a coerce statement. If the condition is a pattern and contains some pattern variables, which match parts of the supertype to which the type is coerced, such pattern variables can also be used inside the first block of the coerce statement.


For debugging the typesystem, MPS provides the Typesystem Trace: an integrated visual tool that gives you insight into the evaluation process that happens inside the typesystem engine.

Try it out for yourself

We have prepared a dedicated sample language so you can easily experiment with the typesystem. Open the Expressions sample project that comes bundled with MPS and should be available among the sample projects in the user home folder.

The sample language

The language to experiment with is a simplified expression language with several types, four arithmetic operations (+, -, *, /), assignment (:=), two kinds of variable declarations, and a variable reference. The editor is very basic with almost no customization, so editing the expressions will perhaps be quite rough. Nevertheless, we expect you to inspect the existing samples and debug their types rather than write new code, so the lack of smooth editing should not be an issue.

The language can be embedded into Java thanks to the SimpleMathWrapper concept, but no interaction between the language and BaseLanguage is possible.

The expression language supports six types, organized by subtyping rules into two branches:

  1. Element -> Number -> Float -> Long -> Int
  2. Element -> Bool
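As a hedged illustration (not MPS code), the two branches can be modeled as a supertype chain with a transitive subtype check, which is what the subtyping rules of the sample language effectively express:

```python
# Each type maps to its immediate supertype; Element is the top type.
SUPERTYPE = {"Int": "Long", "Long": "Float", "Float": "Number",
             "Number": "Element", "Bool": "Element"}

def is_subtype(t, s):
    # Walk the supertype chain transitively (reflexive: t <: t).
    while t is not None:
        if t == s:
            return True
        t = SUPERTYPE.get(t)
    return False

print(is_subtype("Int", "Number"))   # True
print(is_subtype("Bool", "Number"))  # False
```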

Inspecting the types

If you open the Simple example class, you can position the cursor on any part of an expression or select a valid expression block. As soon as you hit Control/Cmd + Shift + T, you'll see the type of the selected node in a pop-up dialog.

The Main sample class will give you a more involved example showing how type inference correctly propagates the suitable type to variables:

Just check the calculated types for yourself.

Type errors

The TypeError sample class shows a simple example of a type error. Just uncomment the code (Control/Cmd + /) and check the reported error:

Since this variable declaration declares its type explicitly to be an Int, while the initializer is of type Float, the type-system reports an error. You may check the status bar at the bottom or hover your mouse over the incorrect piece of code.

Type-system Trace

When you hit Control/Cmd + Shift + X or navigate through the pop-up menu, you get the Typesystem Trace panel displayed on the right hand-side.

The Trace shows in Panel 2 all the steps (i.e. type-system rules) that the type-system engine executed. The steps are ordered top-to-bottom in the order in which they were performed. When you have Button 1 selected, Panel 2 highlights the steps that directly or indirectly influence the type of the node selected in the editor (Panel 1). Panel 3 details the step selected in Panel 2: it describes what changes were made to the type-system engine's state in that step. The actual state of the engine's working memory is displayed in Panel 4.

Step-by-step debugging

The Simple sample class is probably the easiest one to start experimenting with. The types get resolved in six steps, following the typesystem rules specified in the language. You may want to refer to these rules quickly by pressing F4 or using the Control/Cmd + N "Go to Root Node" command. F3 will navigate you to the node that is being affected by the current rule.

  1. The type of a variable declaration has to be a supertype of the type of the initializer. The aValue variable is assigned the a type-system variable, the initializer expression is assigned the b type-system variable, and a >= b (b is a subtype of, or equal to, a) is added into the working memory.
  2. Following the type-system rule for arithmetic expressions, b has to be a subtype of Number; the value 10 is assigned the c variable, 1.3F is assigned the d variable, and a when-concrete handler is added to wait for c to be calculated.
  3. Following the rules for float constants, d is resolved as Float.
  4. Following the rules for integer constants, c is resolved as Int. This triggers the when-concrete handler registered in step 2, which registers another when-concrete handler to wait for d. Since d has already been resolved to Float, that handler triggers and resolves b (the whole arithmetic expression) as Float. This also satisfies the earlier inequation (step 2) that b <= Number.
  5. Now a can be resolved as Float, which also satisfies the step 1 inequation that a >= b.
  6. If you enable type expansions by pressing the button in the tool-bar, you'll get the final expansions of all nodes to concrete types as the last step.


This cookbook should give you quick answers and guidelines when designing dataflow for your languages. For an in-depth description, please refer to the Dataflow section of the user guide.

Reading a value

The read operation instructs the dataflow engine that a particular value is read:

Writing a value

Similarly, the write operation indicates that a value gets written to. In the example, a variable declaration with an initializer first executes the initializer through the code for command and then marks the node as being written to with the result of the initializer:

Code for

As seen above in the LocalVariableDeclaration dataflow or below in the DotExpression dataflow, the code for command indicates nodes that get executed and when. In the DotExpression, for example, code for the operand runs before the actual dot operation:


Dataflow for the TernaryOperatorExpression is a very straightforward example of using both conditional and unconditional jumps. Once the condition gets evaluated we can optionally jump to the ifFalse branch. Similarly, once the ifTrue branch is completed we unconditionally jump out of the scope of the node:


The WhileStatement shows a more involved usage of the dataflow language. Note also the built-in detection of boolean constants. Trying to use while(false) will thus be correctly reported by MPS as a while-loop with an unreachable body. This is thanks to the unconditional jump to after node if the constant is false.

Inserting instructions

The TryStatement demands even more from the dataflow language. It must insert extra ifjump instructions to jump to a catch clause wherever the corresponding exception can be thrown in the code:

Notice that we're using a few other helper methods and commands here: get code for to retrieve the dataflow instruction set for a node; isRet, isJump and isNop to exclude certain types of instructions (returns, jumps and no-operations, respectively); label to create named places in the dataflow instruction set that we can jump to from elsewhere; and finally the insert command to insert a new command into an existing dataflow instruction set.

The data flow API

Check out the jetbrains.mps.dataflow.framework package for the classes that compose the API for accessing data flow information about code.

Data Flow

A language's data flow aspect allows you to find unreachable statements, detect unused assignments, check whether a variable might not be initialized before it's read, and so on. It also allows performing some code transformations, for example the 'extract method' refactoring.

Most users of data flow analyses aren't interested in the details of their inner workings, but in getting the results they need. They want to know which of their statements are unreachable, and what can be read before it's initialized. In order to shield users from the complexities of these analyses, we provide an assembly-like intermediate language into which you translate your program. After translation, this intermediate representation is analyzed, and the user can find out which statements of the original language are unreachable, etc.

For example here is the translation of a 'for' loop from baseLanguage:

First, we emit a label so that we can jump to it later. Then we perform a conditional jump after the current node. Then we emit code for node.variable. Finally, we emit code for the loop's body and jump back to the previously emitted label.
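The described emission order can be written down as a plain instruction list. The following is a simplified, hypothetical notation in Python; the real translation is written in MPS's data flow builder language, not shown here:

```python
# Hypothetical rendering of the instructions emitted for a baseLanguage
# 'for' loop, in the order described above. This is illustrative notation,
# not the actual MPS data flow builder API.
for_loop_instructions = [
    ("label", "loop_start"),          # a label we can jump back to later
    ("ifjump", "after", "node"),      # conditional jump after the current node (loop exit)
    ("code_for", "node.variable"),    # code for the loop variable
    ("code_for", "node.body"),        # code for the loop's body
    ("jump", "label", "loop_start"),  # jump back to the previously emitted label
]
for instruction in for_loop_instructions:
    print(instruction)
```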

Commands of intermediate language

Here are the commands of our intermediate language:

  • read x - reads a variable x
  • write x - writes to variable x
  • jump before node - jumps before node
  • jump after node - jumps after node
  • jump label - jumps to label
  • ifjump (before|after) node | label - conditional jump before/after node, or to label
  • code for node - insert code for node
  • ret - returns from current subroutine
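To see how these commands enable unreachable-code detection, here is a toy Python model of a reachability analysis over such an instruction list (a sketch for illustration, not the MPS implementation). The while (false) case mentioned earlier reduces to an unconditional jump past the loop, leaving the body unvisited:

```python
from collections import deque

# Toy model of the intermediate language (not the MPS implementation).
# Each instruction is a tuple; ("label", name) marks a jump target.
# For while (false) the conditional exit jump is always taken, so it is
# folded into an unconditional jump, making the body unreachable.
program = [
    ("label", "loop"),     # 0
    ("jump", "exit"),      # 1: ifjump folded to a plain jump (condition is false)
    ("code_for", "body"),  # 2: loop body -> unreachable
    ("jump", "loop"),      # 3: back edge -> unreachable
    ("label", "exit"),     # 4
    ("ret",),              # 5
]

def reachable(prog):
    """Visit every instruction reachable from index 0 via fall-through and jumps."""
    labels = {ins[1]: i for i, ins in enumerate(prog) if ins[0] == "label"}
    seen, work = set(), deque([0])
    while work:
        i = work.popleft()
        if i in seen or i >= len(prog):
            continue
        seen.add(i)
        op = prog[i][0]
        if op in ("jump", "ifjump"):
            work.append(labels[prog[i][1]])      # jump target
        if op not in ("jump", "ret"):            # fall through unless control always leaves
            work.append(i + 1)
    return seen

unreachable = [i for i in range(len(program)) if i not in reachable(program)]
print(unreachable)  # -> [2, 3]: the loop body and the back edge
```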

May be unreachable

Some commands shouldn't be highlighted as unreachable. For example we might want to write some code like this:

If you generate data flow intermediate code for this statement, the last command, the jump after condition command, will be unreachable. On the other hand, it's a legal baseLanguage statement, so we want to ignore this command during reachable-statement analysis. To do so, we mark it as may be unreachable, which is indicated by curly braces around it. You can toggle this setting with the appropriate intention.

You may like to try our Dataflow cookbook, as well.

Links: - good introduction to static analyses including data flow and type systems.



We are going to look at two ways to define scopes for custom language elements - the inherited (hierarchical) and the referential approaches. We chose the Calculator tutorial language as a testbed for our experiments. You can find the calculator-tutorial project included in the set of sample projects that comes with the MPS distribution.

Two ways

All references need to know the set of allowed targets. This enables MPS to populate the completion menu whenever the user is about to supply a value for the reference. Existing references can be validated against that set and marked as invalid, if they refer to elements out of the scope.

MPS offers two ways to define scopes:

  • Inherited scopes
  • Reference scopes

Reference scopes offer lower ceremony, while Inherited scopes allow the scope to be built gradually, following the hierarchy of nodes in the model.


The oldest type of scopes in MPS is called Search scope and it has been deprecated in favor of the two types mentioned above, because the scoping API has changed significantly since its introduction. The Reference scope can be viewed as the closest replacement for Search scope compatible with the new API.

Inherited scopes

We will describe the new hierarchical (inherited) mechanism of scope resolution first. This mechanism delegates scope resolution to the ancestors, who implement ScopeProvider.

  1. MPS starts looking for the closest ancestor of the reference node that implements ScopeProvider and can provide a scope for the requested kind.
  2. If that ScopeProvider returns null, MPS continues the search with more distant ancestors.
  3. Each ScopeProvider can 
    • build and return a Scope implementation (more on these later)
    • delegate to the parent scope 
    • add its own elements to the parent scope
    • hide elements from parent scope (more on how to work with scopes will be discussed later)
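The delegation mechanism above can be sketched in a few lines of Python, with plain classes standing in for MPS nodes and ScopeProvider (names are illustrative, not the MPS API):

```python
# Sketch of inherited (hierarchical) scope resolution: walk the ancestors,
# and the closest one that answers with a non-null scope wins.
class Node:
    def __init__(self, parent=None):
        self.parent = parent

    def get_scope(self, kind, child):
        return None  # a plain node provides no scope

def resolve_scope(reference_node, kind):
    """Ask each ancestor in turn; None means 'keep searching upward'."""
    child, ancestor = reference_node, reference_node.parent
    while ancestor is not None:
        scope = ancestor.get_scope(kind, child)
        if scope is not None:
            return scope
        child, ancestor = ancestor, ancestor.parent
    return []  # no provider found -> empty scope

class Calculator(Node):
    """Stands in for a ScopeProvider offering its InputFields."""
    def __init__(self):
        super().__init__()
        self.input_fields = []

    def get_scope(self, kind, child):
        if kind == "InputField":
            return list(self.input_fields)
        return None

calc = Calculator()
calc.input_fields = ["width", "height"]
ref = Node(parent=Node(parent=calc))   # a reference nested two levels deep
print(resolve_scope(ref, "InputField"))  # -> ['width', 'height']
```

Note how returning None passes the request further up the tree, exactly as described in step 2 above.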

Our InputFieldReference thus searches for InputField nodes and relies on its ancestors to build a list of those.

Once we have specified that the scope for InputFieldReference, when searching for an InputField, is inherited, we must indicate that Calculator is a ScopeProvider. This ensures that Calculator will have a say in building the scope for all InputFieldReferences placed among its descendants.

The Calculator in our case should return a list of all its InputFields whenever queried for scope of InputField. So in the Behavior aspect of Calculator we override (Control + O) the getScope() method:

If Scope remains unresolved, we need to import the model (Control + R) that contains it (jetbrains.mps.scope):

The getScope() method takes two parameters:

  • kind - the concept of the possible targets for the reference
  • child - the child node of the current (this) ScopeProvider, from which the request came, so the actual reference is among descendants of the child node

We also need BaseLanguage, since we need to code some functionality, and the jetbrains.mps.lang.smodel language needs to be imported in order to query nodes. These languages should have been imported for you automatically; if not, you can import them using the Control + L shortcut.

Now we can complete the scope definition code, which, in essence, returns all input fields from within the calculator:

A quick tip: Notice the use of the SimpleRoleScope class. It is one of several helper classes that can help you build your own custom scopes. Check them out by navigating to SimpleRoleScope (Control + N) and opening up the containing package structure (Alt + F1).

Scope helper implementations

MPS comes with several helper Scope implementations that cover many possible scenarios and you can use them to ease the task of defining a scope:

  • ListScope - represents the nodes passed into its constructor
  • DelegatingScope - delegates to a Scope instance passed into its constructor, typically to be extended by scopes that need to add functionality around an existing scope, e.g. LazyScope
  • CompositeScope - delegates to a group of (wrapped) Scope instances
  • FilteringScope - delegates to a single Scope instance, filtering its nodes with a predicate (the isExcluded method)
  • FilteringByNameScope - delegates to a single Scope instance, filtering its nodes by a name blacklist, which it gets as a constructor parameter
  • EmptyScope - scope with no nodes
  • SimpleRoleScope - a scope providing all child nodes of a node, which match a given role
  • ModelsScope - a scope containing all nodes of a given concept contained in the supplied set of models
  • ModelPlusImportedScope - like ModelsScope, but includes all models imported by the given model
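As a rough illustration of how such helpers compose, here is a Python sketch of ListScope, CompositeScope and FilteringScope analogues (the real MPS classes implement a richer Scope interface with name-resolution logic):

```python
# Illustrative analogues of three helper scopes; not the MPS Scope API.
class ListScope:
    """Represents the nodes passed into its constructor."""
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def get_nodes(self):
        return list(self.nodes)

class CompositeScope:
    """Delegates to a group of wrapped scopes."""
    def __init__(self, *scopes):
        self.scopes = scopes

    def get_nodes(self):
        return [n for s in self.scopes for n in s.get_nodes()]

class FilteringScope:
    """Delegates to a single scope, filtering its nodes with a predicate."""
    def __init__(self, inner, is_excluded):
        self.inner, self.is_excluded = inner, is_excluded

    def get_nodes(self):
        return [n for n in self.inner.get_nodes() if not self.is_excluded(n)]

local_vars = ListScope(["i", "j"])
fields = ListScope(["total", "count"])
scope = FilteringScope(CompositeScope(local_vars, fields),
                       lambda name: name == "total")  # exclude one field
print(scope.get_nodes())  # -> ['i', 'j', 'count']
```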

For example, the getScope() method could be rewritten using ListScope this way:


A slightly more advanced example can be found in BaseLanguage. VariableReference uses inherited scope for its variableDeclaration reference.

Concepts such as ForStatement, LocalVariableDeclaration, BaseMethodDeclaration, Classifier as well as some others add variable declarations to the scope and thus implement ScopeProvider.

For example, ForStatement uses the Scopes.forVariables helper function to build a scope that enriches the parent scope with all variables declared in the for loop, potentially hiding variables of the same name in the parent scope. The come from expression detects whether the reference that we're currently resolving the scope for lies in the given part of the sub-tree.

  • The parent scope construct will create an instance of LazyParentScope() and effectively delegate to an ancestor in the model, which implements ScopeProvider, to supply the scope.
  • The come from construct will delegate to ScopeUtils.comeFrom() in order to check whether the scope is being calculated for a direct child of the current node in the given role.
  • The composite with construct (used as composite <expr> with parent scope) will create a combined scope of the supplied scope expression and the parent scope.

Using reference scope

Scopes can alternatively be implemented in a faster but less scalable way - using the reference scope:

Instead of delegating to the ancestors of type ScopeProvider to do the resolution, you can insert the scope resolution code right into the constraint definition.


You may need to import (Control/Cmd + R) the jetbrains.mps.scope model in order to be able to use SimpleRoleScope.

Instead of the code that originally was inside the Calculator's getScope() method, it is now InputFieldReference itself that defines the scope. The function for reference scope is supposed to return a Scope instance, just like the ScopeProvider.getScope() method. Scope is essentially a list of potential reference targets together with logic to resolve these targets with textual values.

To remind you, there are several predefined Scope implementations and related helper factory methods ready for you to use:

  • SimpleRoleScope - simply adds all nodes connected to the supplied node and being in the specified role
  • ModelPlusImportedScope - provides reference targets from imported models. Allows the user to add targets to the scope with Control + R / Cmd + R (import the containing model).
  • FilteringScope - allows you to exclude some elements from another scope. Subclasses of FilteringScope should override the isExcluded() method.
  • DelegatingScope - delegates to another scope. Meant to be overridden to customize the behavior of the original scope.

You may also look around the scope model yourself:


Intentions are a very good example of how MPS enables language authors to smooth the user experience of people using their language. Intentions provide fast access to the most frequently used operations on a language's syntactic constructs, such as "negate boolean", "invert if condition", etc. If you've ever used IntelliJ IDEA's intentions or similar features of any modern IDE, you will find MPS intentions very familiar.

Using intentions

Like in IDEA, if there are intentions applicable to the code at the current position, a light bulb is shown. To view the list of available intentions, press Alt+Enter or click the light bulb. To apply an intention, either click it or select it and press Enter. This will trigger the intention and alter the code accordingly.
Example: list of applicable intentions

Intention types

All intentions are "shortcuts" of a sort, bringing some operations on node structure closer to the user. Two kinds of intentions can be distinguished: regular intentions (possibly with parameters) and "surround with" intentions.
Generally speaking, there is no technical difference between these types of intentions. They only differ in how they are typically used by the user.

Regular intentions are listed in the intentions list (the light bulb) and directly perform transformations on a node without asking the user for parameters customizing the operation.

"Surround with" intentions implement a special kind of transformation - surrounding some node(s) with another construct (e.g. "surround with parentheses"). These intentions are not offered to users unless they press Ctrl-Alt-T (the surround with command) on a node. Nor are they shown in the general intentions pop-up menu.

Universal Intention is a new experimental feature introduced in 3.4, which unifies intentions and parameterized intentions. In addition, it allows adding methods and other class members to an intention and has a more Java-like editor. As the feature is still in an experimental stage, we decided not to replace the old functionality fully. We still recommend using the old intentions, but those who like the new editor better can experiment with the new ones. The structure of a universal intention is very similar to the old intentions, and using them is very straightforward.

Common Intention Structure


The name of an intention. You can choose any name you like; the only obvious constraint is that names must be unique within the scope of the model.

for concept

The intention will be tested for applicability only on nodes that are instances of this concept or its subconcepts.

available in child nodes

Suppose N is a node to which the intention can be applied. If this flag is set to false, the intention will be visible only when the cursor is over the node N itself. If set to true, it will also be visible in N's descendants (but will still be applied to N).

child filter

Used to show an intention only in some children. E.g. the "make method final" intention had better not be shown inside the method's body, but preferably in the whole header, including the "public" child.


The value returned by this function is what users will see in the list of intentions.


Intentions that have passed the "for concept" test are tested for applicability to the current node. If this method returns "true," the intention is shown in the list and can be applied. Otherwise the intention is not shown in the list. The node argument of this method is guaranteed to be an instance of the concept specified in "for concept" or one of its subconcepts.


This method performs a code transformation. It is guaranteed that the node parameter has passed the "for concept" and "is applicable" tests.
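The three-step selection described above (the "for concept" filter, then "is applicable", then "execute") can be modeled with a small dispatch sketch. The Python names below are hypothetical stand-ins for the MPS intention runtime:

```python
# Hypothetical sketch of intention selection: filter by concept first,
# then by isApplicable, then execute the chosen intention on the node.
class Intention:
    for_concept = object  # applies to instances of this class and subclasses

    def is_applicable(self, node):
        return True

    def execute(self, node):
        raise NotImplementedError

class BoolLiteral:
    """Stands in for a concept instance in the AST."""
    def __init__(self, value):
        self.value = value

class NegateBoolean(Intention):
    for_concept = BoolLiteral

    def is_applicable(self, node):
        return isinstance(node.value, bool)

    def execute(self, node):
        node.value = not node.value

def applicable_intentions(intentions, node):
    # "for concept" test, then the is_applicable test, as described above
    return [i for i in intentions
            if isinstance(node, i.for_concept) and i.is_applicable(node)]

node = BoolLiteral(True)
menu = applicable_intentions([NegateBoolean()], node)
menu[0].execute(node)
print(node.value)  # -> False
```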

Regular Intentions

is error intention - This flag is responsible for an intention's presentation. It distinguishes two types of intentions - "error" intentions which correct some errors in the code (e.g. a missing 'cast') and "regular" intentions, which are intended to help the user perform some genuine code transformations. To visually distinguish the two types, error intentions are shown with a red bulb, instead of an orange one, and are placed above regular intentions in the applicable intentions list.

Parameterized regular intentions

Intentions can sometimes be very close to one another. They may all need to perform the same transformation on a node, just slightly differently. E.g. all "Add ... macro" intentions in the generator ultimately add a macro, but the added macro itself differs between intentions. This is the case when a parameterized intention is needed. Instead of creating separate intentions, you create a single intention and allow for its parametrization. The intention has a parameter function, which returns a list of parameter values. Based on this list, a number of intentions are created, each with a different parameter value. The parameter value can then be accessed in almost every method of the intention.



You don't have access to the parameter in the isApplicable function, for performance reasons. As isApplicable is executed very often and delays would quickly become noticeable to the user, you should perform only basic checks in isApplicable. All parameter-dependent checks should be performed in the parameter function; if a check does not pass, that parameter should not be returned.
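A parameterized intention can be sketched as follows: the parameter function produces one menu entry per value, and the parameter-dependent checks live in that function rather than in isApplicable (hypothetical Python names, not the MPS API):

```python
# Sketch of a parameterized intention: one definition expands into several
# menu entries, one per parameter value returned by the parameter function.
class AddMacroIntention:
    def parameters(self, node):
        # Parameter-dependent checks happen here, not in is_applicable:
        # only offer macros that the node does not carry yet.
        return [m for m in ("LOOP", "IF", "COPY_SRC") if m not in node["macros"]]

    def description(self, node, parameter):
        return "Add %s macro" % parameter

    def execute(self, node, parameter):
        node["macros"].append(parameter)

node = {"macros": ["IF"]}           # toy stand-in for an AST node
intention = AddMacroIntention()
entries = [intention.description(node, p) for p in intention.parameters(node)]
print(entries)  # -> ['Add LOOP macro', 'Add COPY_SRC macro']
```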

Surround With - Intentions

This type of intention is very similar to regular intentions, and all the details mentioned above apply to these intentions as well.

Where to store my intentions?

You can create intentions in any model by importing the intentions language. However, MPS collects intentions only from the Intentions language aspects. If you want your intentions to be used by the MPS intentions subsystem, they must be stored in the Intentions aspect of your language.



Testing languages


Testing is an essential part of a language designer's work. To be of any use, MPS has to provide testing facilities both for BaseLanguage code and for languages. While the jetbrains.mps.baselanguage.unitTest language enables JUnit-like unit tests for BaseLanguage code, the jetbrains.mps.lang.test Language test language provides a useful interface for creating language tests.


To minimize the impact of test assertions on the test code, the Language test language describes the testing aspects through annotations (similar to the way the generator language annotates template code with generator macros).

Quick navigation table

Different aspects of language definitions are tested with different means:

Language definition aspects

The way to test

Editor ActionMaps

Use the jetbrains.mps.lang.test language to create EditorTestCases. You set the stage by providing an initial piece of code, define a set of editing actions to perform against the initial code and also provide an expected outcome as another piece of code. Any differences between the expected and real output of the test will be reported as errors.
See the Editor Tests section for details.


Use the jetbrains.mps.lang.test language to create NodesTestCases. In these test cases write snippets of "correct" code and ensure no error or warning is reported on them. Similarly, write "invalid" pieces of code and assert that an error or a warning is reported in the correct node.
See the Nodes Tests section for details.


There is currently no built-in testing facility for these aspects. There are a few practices that have worked for us over time:

  • Perhaps the most reasonable way to check the generation process is by generating models, for which we already know the correct generation result, and then comparing the generated output with the expected one. For example, if your generated code is stored in a VCS, you could check for differences after each run of the tests.
  • You may also consider providing code snippets that may represent corner cases for the generator and check whether the generator successfully generates output from them, or whether it fails.
  • Compiling and running the generated code may also increase your confidence about the correctness of your generator.


Use the jetbrains.mps.lang.test language to create MigrationTestCases. In these test cases write pieces of code to run migration on them.
See the Migration Tests section for details.


Tests creation

There are two options to add test models into your projects.

1. Create a Test aspect in your language

This is easier to set up, but can only contain tests that do not need to run in a newly started MPS instance, so it typically holds plain baseLanguage unit tests. To create the Test aspect, right-click the language node and choose New->Test Aspect.

Now you can start creating unit tests in the Test aspect.

Right-clicking on the Test aspect will give you the option to run all tests. The test report will then show up in a Run panel at the bottom of the screen.

2. Create a test model

This option gives you more flexibility. Create a test model, either in a new or an existing solution. Make sure the model's stereotype is set to tests.

Open the model's properties and add the jetbrains.mps.baselanguage.unitTest language in order to be able to create unit tests. Add the jetbrains.mps.lang.test language in order to create language (node) tests.

Additionally, you need to make sure the solution containing your test model has a kind set - typically choose Other, if you do not need either of the two other options (Core plugin or Editor plugin). 

Right-clicking on the model allows you to create new unit or language tests. See all the root concepts that are available:

Unit testing with BTestCase

BTestCase stands for BaseLanguage Test Case and represents a unit test written in baseLanguage. Those familiar with JUnit will quickly feel at home.

A BTestCase has four sections - one to specify test members (fields), which are reused by test methods, one to specify initialization code, one for clean up code and finally a section for the actual test methods. The language also provides a couple of handy assertion statements, which code completion reveals.
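The mapping of the four sections onto a JUnit-style test class can be illustrated with Python's unittest, whose lifecycle mirrors the one described (fields, initialization, clean up, test methods):

```python
import unittest

# The four BTestCase sections map naturally onto an xUnit-style test class:
# test members -> fields, initialization -> setUp, clean up -> tearDown,
# test methods -> test_* methods. (Python's unittest used for illustration.)
class CalculatorTest(unittest.TestCase):
    def setUp(self):              # initialization code
        self.values = [1, 2, 3]   # a test member reused by test methods

    def tearDown(self):           # clean up code
        self.values = None

    def test_sum(self):           # an actual test method with an assertion
        self.assertEqual(sum(self.values), 6)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(CalculatorTest))
print(result.wasSuccessful())  # -> True
```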


In order to be able to run node tests, you need to provide more information through a TestInfo node in the root of your test model.

Especially the Project path attribute is worth your attention. This is where you need to provide a path to the project root, either as an absolute or relative path, or as a reference to a Path Variable defined in MPS (Project Settings -> Path Variables).

To make the path variable available in Ant scripts, define it in your build file with the mps.macro. prefix (see example below).

Node tests

A NodesTestCase contains three sections:

The first one contains code that should be verified. The section for test methods may contain baseLanguage code that further investigates nodes specified in the first section. The utility methods section may hold reusable baseLanguage code, typically invoked from the test methods.

Checking for correctness

To test that the type system correctly calculates types and that proper errors and warnings are reported, you first write a piece of code in your desired language. Then select the nodes that you'd like to have tested for correctness and choose the Add Node Operations Test Annotation intention.

This will annotate the code with a check attribute, which then can be made more concrete by setting a type of the check:

Note that many of the options have been deprecated and should no longer be used.

The for error messages option ensures that potential error messages inside the checked node get reported as test failures. So, in the given example, we are checking that there are no errors in the whole Script.

Checking for type system and data-flow errors and warnings

If, on the other hand, you want to test that a particular node is correctly reported by MPS as having an error or a warning, use the has error / has warning option.

This works for both warnings and errors.

You can even tie the check to the rule that you expect to report the error / warning. Hit Alt + Enter with the cursor over the node and pick the Specify Rule References option:

An identifier of the rule has been added. You can navigate by Control/Cmd + B (or click) to the definition of the rule.

When run, the test will check that the specified rule is really the one that reports the error.

Type-system specific options

The check command offers several options to test the calculated type of a node.

Multiple expectations can be combined conveniently:

Testing scopes

The Scope Test Annotation allows the test to verify that the scoping rules bring the correct items into the applicable scope:

The Inspector panel holds the list of expected items that must appear in the completion menu and that are valid targets for the annotated cell:

Test and utility methods

The test methods may refer to nodes in your tests through labels. You assign labels to nodes using intentions:

The labels then become available in the test methods.

Editor tests

Editor tests allow you to test the dynamism of the editor - actions, intentions and substitutions.

An editor test case needs a name, an optional description, a setup section with the code as it should look before the editor transformation, the expected code after the transformation (result), and finally, in the code section, the actual trigger that transforms the code.

For example, a test that an IfStatement of the Robot_Kaja language can be transformed into a WhileStatement by typing while in front of the if keyword would look as follows:

In the code section, the jetbrains.mps.lang.test language gives you several options to invoke user-initiated actions - use type, press keys, invoke action or invoke intention. Obviously, you can combine these special test commands with plain baseLanguage code.


In order to be able to specify the desired actions and intentions, you need to import their models into the test model. Typically the jetbrains.mps.ide.editor.actions model is the most needed one when testing the editor reactions to user-generated actions.


To mark the position of the caret in the code, use the appropriate intention with the cursor located at the desired position:

The cursor position can be specified in both the before and the after code:

The cell editor annotation has extra properties to fine-tune the position of the caret in the annotated editor cell. These can be set in the Inspector panel.

Inspecting the editor state

Some editor tests may wish to inspect the state of the editor more thoroughly. The editor component expression gives you access to the editor component under cursor. You can inspect its state as well as modify it, like in these samples:

The is intention applicable expression lets you test whether a particular intention can be invoked in the given editor context:

You can also get hold of the model and project using the model and project expressions, respectively.

Migration tests

Migrations tests can be used to check that migration scripts produce expected results using specified input.

To create a migration test case, you should specify its name and the migration scripts to test. In many cases it is enough to test individual migration scripts separately, but you can safely specify more than one migration script in a single test case if you need to test how migrations interact with one another.

Additionally, migration test cases contain the nodes to be passed into the migration process, as well as the nodes that are expected to come out as the output of the migration.

When running, migration tests behave the following way:

  1. Input nodes are copied as roots into an empty module with a single model.
  2. Migration scripts run on that module.
  3. Roots contained in that module after the migration are compared with the expected output.
  4. The check() method of the concerned migration(s) is invoked to ensure that it returns an empty list of problems.
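The four steps above can be sketched as a small test harness (hypothetical helper names in Python; the actual runner is part of the MPS test infrastructure):

```python
import copy

# Sketch of the four-step migration-test cycle described above.
def run_migration_test(input_roots, migration, expected_roots):
    module = {"model": copy.deepcopy(input_roots)}    # 1. copy inputs into a fresh module
    migration["execute"](module["model"])             # 2. run the migration script
    assert module["model"] == expected_roots          # 3. compare with the expected output
    assert migration["check"](module["model"]) == []  # 4. check() reports no problems
    return True

# A toy migration that lower-cases a 'name' property on every root;
# check() reports any root whose name is still not lower-case.
rename_field = {
    "execute": lambda roots: [r.update(name=r["name"].lower()) for r in roots],
    "check":   lambda roots: [r for r in roots if not r["name"].islower()],
}
print(run_migration_test([{"name": "OldName"}], rename_field, [{"name": "oldname"}]))
```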

To simplify the process of writing migration tests, the expected output can be generated automatically from the input nodes using the currently deployed migration scripts. To do this, use the intention called 'Generate Output from Input'.

Running the tests

Inside MPS

To run tests in a model, just right-click the model in the Project View panel and choose Run tests:

If the model contains any of the jetbrains.mps.lang.test tests, a new instance of MPS is silently started in the background (that's why it takes quite some time to run these compared to plain baseLanguage unit tests) and the tests are executed in that new MPS instance. A new run configuration is created, which you can then re-use or customize:

The Run configurations dialog gives you options to tune the performance of tests.

  • Reuse caches - reusing the old caches of the headless MPS instance when running tests cuts away a lot of the time needed to set up a test instance of MPS. You can set and unset this option in the run configuration dialog.
  • Save caches in - specifies the directory to save the caches in. By default, MPS chooses the temp directory. With Reuse caches set, MPS saves its caches in the specified folder and reuses them whenever possible. If the option is unset, the directory is cleared on every run.
  • Execute in the same process - to speed up testing, tests can be run in a so-called in-process mode. It was designed specifically for tests that need to have an MPS instance running. (For example, for language type-system tests MPS should safely be able to check the types of nodes on the fly.)
    Originally, a new MPS instance was started in the background and the tests ran in that instance. This option, instead, allows all tests to run in the same, original MPS process, so no new instance needs to be created. When the Execute in the same process option is set (the default), the test is executed in the current MPS environment. To run tests in the original way (in a separate process), uncheck this option. This mode of test execution is applicable to all test kinds in MPS - it works even for editor tests!


    Although the performance is much better with in-process test execution, there are certain drawbacks to this workflow. Note that the tests are executed in the same MPS environment that holds the project, so the code you write in your test may potentially be dangerous and cause real harm. For example, a test that disposes the current project could destroy the whole project. So you need to be careful when writing such tests.
    There are certain cases when a test must not be executed in-process. For these, it is possible to switch an option in the inspector to prohibit in-process execution for that specific test.

    The test report is shown in the Run panel at the bottom of the screen:


From a build script

In order to have your generated build script offer the test target that you could use to run the tests using Ant, you need to import the and languages into your build script, declare using the module-tests plugin and specify a test modules configuration.

To define a macro that Ant will pass to JUnit (e.g. for use in TestInfo roots in your tests), prefix it with mps.macro.:

Running Editor tests in IDEA Plugin

With the new JUnit test suite (jetbrains.mps.idea.core.tests.PluginsTestSuite) it is possible to execute editor tests for your languages in IntelliJ IDEA, when using the MPS plugin. To make use of this functionality, you have to create a simple Ant script that will install all the necessary plugins into the IntelliJ platform and execute the tests, specifying the test module name(s).






Accessories models can be stored in two places - either as an aspect of a language (recommended), or as a regular model under a solution. In both cases, the model needs to be added to the Language Runtime Language Settings so that it can be used. A typical use case is a default library of concept instances available at any place the language is used.


Let's alter the Shapes sample project, created as part of the introductory Shapes tutorial and bundled with MPS distributions. The project allows language users to define various colorful shapes and put them on a canvas. The colors of each shape are defined as references to one of the StaticFieldDeclarations defined in the Color class.

Accessories models allow us to define our own color constants instead of referencing the Color class directly, and thus to avoid imposing a dependency on BaseLanguage in user solutions. You'll get finer control over what colors will be available and how they will get generated.

Define the concept to represent colors

First we need to define the concept that we will then use to define individual colors:

Update ColorReference

The ColorReference concept should now point to nodes of the MyColor concept:

Define a method to obtain the real Color

During generation we will need to replace nodes of MyColor with nodes of StaticFieldDeclaration representing the corresponding color constants defined in the Color class:

Change the generator templates for circle and square

These templates hold a reference macro that inserts a reference to the particular desired static field in the Color class. We need to change the macro so that it uses the findColor() behavior method that we defined above:

Define colors in the Accessories model

Now the colors can be safely defined:

After rebuilding the language the color constants will be available in the completion menu in your Canvas nodes and the generated code will hold correct references to Java colors.



Changes in the Refactoring language

In order to make the structure of MPS core languages more consistent and clear, the Refactoring language has been changed considerably. Several new and easy-to-use constructs have been added, and parts of the functionality were deprecated and moved into the Actions language.

The UI for retrieving the refactoring parameters has been removed from the refactoring language. Choosers for parameters are no longer called, it is not allowed to show UI in init (e.g. ask and ask boolean) and keystroke has no effect. All this functionality should be moved to an action corresponding to the refactoring.

The following constructs have been added to the refactoring language. These new constructs are intended to be used from code, typically from within actions:

  • is applicable refactoring<Refactoring>(target)
    returns true if the refactoring target corresponds to the current target (type, single/multiple), the target is applicable according to the refactoring's isApplicable method, and no other refactoring overrides the current one for this target.
  • execute refactoring<Refactoring>(target : project, parameters );
    executes the refactoring for the target with the given parameters
  • create refcontext<Refactoring>(target : project, parameters )
    creates a refactoring context for the refactoring and target and fills the parameters into the context; this context can then be used to execute the refactoring or to work further with the parameters; no UI is shown during this call

It is necessary to manually migrate existing user refactorings. The migration consists of several steps:

  • create a UI action for the refactoring (This is a simple action from the plugin language. You can check the Rename action from jetbrains.mps.ide.platform.actions.core as an example of proper refactoring action registration)
  • copy the caption, create context parameters
  • add a refactoring keystroke with the newly created action to KeymapChangesDeclaration
  • create ActionGroupDeclaration for the refactoring that modifies the jetbrains.mps.ide.platform.actions.NodeRefactoring action group at the default position
  • add an isApplicable clause to the action created; usually it is just is applicable refactoring< >() call
  • add an execute clause to the action created; all the parameter preparations that were in init of the refactoring should be moved here; at the end it is necessary to execute the refactoring with the prepared parameters (with execute refactoring< >(); statement)
  • remove all parameter preparation code from init of the refactoring, they are now prepared before the entry to init; you can still validate parameters and return false if the validation fails


TextGen language aspect


The TextGen language aspect defines a model to text transformation. It comes in handy each time you need to convert your models into the text form directly. The language contains constructs to print out text, transform nodes into text values and give the output some reasonable layout.


The append command performs the transformation and adds the resulting text to the output. You can use the found error command to report problems in the model. The with indent command demarcates blocks with increased indentation. Alternatively, the increase depth and decrease depth commands manipulate the current indentation depth without being limited to a block structure. The indent buffer command applies the current indentation (as specified by with indent or increase/decrease depth) to the current line.




any number of:

  • {string value} - to insert one, type the " character or pick a constant from the completion menu
  • \n
  • $list{node.list} - list without separator
  • $list{node.list with ,} - with separator (intentions to add/remove a separator are available)
  • $ref{node.reference}, e.g. $ref{node.reference<target>} - deprecated and will be removed
  • ${node.child}
  • ${attributed node}$ - available in attribute nodes, delegates to the attributed node

found error

error text

decrease depth

decrease indentation level from now on

increase depth

increase indentation level from now on

indent buffer

apply indentation to the current line

with indent { <code> }

increase indentation level for the <code>


The parameters to the append command may have the with indent flag set to true in the Inspector tool window to get prefixed with the current indentation buffer.


Proper indentation is easy to get right once you understand the underlying principle. TextGen flushes the AST into text: the TextGen commands sequentially manipulate the output buffer, emitting text one node at a time. A variable holding the current depth of indentation (the indentation buffer) is maintained for each root concept. The indentation buffer starts at zero and is changed by the increase/decrease depth and with indent commands.

The indentation, however, must be inserted into the output stream explicitly by the append commands. Simply marking a block with with indent will not automatically indent the text generated by the wrapped TextGen code. The with indent block only increases the value of the indentation buffer; each individual append may or may not choose to be prefixed with the indentation buffer of the current size.

There are two ways to explicitly insert indentation buffer into the output stream:

  • indent buffer command
  • with indent flag in the inspector for the parameters of the append command

For example, to properly indent Constants in a list of constants, we call indent buffer at the beginning of each emitted line. This ensures that the indentation is inserted only at the beginning of each line.

Alternatively, we could specify the with indent flag in the inspector for the first parameter to the append command. This will also insert the indentation only at the beginning of each line.
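To make the principle concrete, here is a small stand-alone Java sketch of the mechanism described above. The class and method names (TextGenBuffer, withIndent, indentBuffer) are illustrative, not the MPS TextGen API:

```java
// Toy model of the TextGen indentation mechanism (not the MPS API):
// the indentation depth is only a counter; a line is indented only when
// "indent buffer" explicitly flushes the indentation into the output.
public class TextGenBuffer {
    private final StringBuilder out = new StringBuilder();
    private int depth = 0; // the "indentation buffer" depth

    public void increaseDepth() { depth++; }
    public void decreaseDepth() { depth--; }

    // Equivalent of the "with indent { ... }" block
    public void withIndent(Runnable body) {
        depth++;
        body.run();
        depth--;
    }

    // Equivalent of the "indent buffer" command: emit the current indentation
    public void indentBuffer() {
        for (int i = 0; i < depth; i++) out.append("  ");
    }

    public void append(String text) { out.append(text); }
    public void newLine() { out.append('\n'); }
    public String result() { return out.toString(); }

    public static void main(String[] args) {
        TextGenBuffer b = new TextGenBuffer();
        b.append("{"); b.newLine();
        b.withIndent(() -> {
            b.indentBuffer();                    // without this call the line stays flush left
            b.append("int x = 1;"); b.newLine();
        });
        b.append("}"); b.newLine();
        System.out.print(b.result());
    }
}
```

Note that if the indentBuffer() call is removed, the with-indent block still runs, but the emitted line is not indented, which mirrors the behavior described above.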

Root concepts

TextGen provides two types of root concepts:

  • text gen component, represented by the ConceptTextGenDeclaration concept, which encodes a transformation of a concept into text. For rootable concepts the target file can also be specified.
  • base text gen component, represented by the LanguageTextGenDeclaration concept, which allows the definition of reusable textgen operations and utility methods. These can be called from other text gen components of the same language as well as from extending languages

TextGen in extended concepts

MPS does not create files for root concepts automatically. Even sub-concepts of a concept that has TextGen defined will have no file created automatically; only exact concept matches are considered. If an extending concept wants to re-use the textgen component of an ancestor as is, it must declare its own empty TextGen component, stating the essentials such as the file name, encoding and extension, and leaving the body of the component empty.


There's a provisional mechanism to control the layout of output files. The text layout section of ConceptTextGenDeclaration (available only in rootable concepts) allows the author to define multiple logical sections (with a default one) and then optionally specify, for each append, which section to append the text to.

Text generation is not always possible in a sequence that corresponds to lines in a physical file. E.g. for a Java source one could distinguish two distinct areas, imports and class body, where imports is populated along with the body. A passionate language designer might want to break up the file further, e.g. into file comment, package statement, imports, and class body (consisting of fields and methods), and populate each one independently while traversing a ClassConcept. That's what we call a layout of an output file, and that's what you now get control over. MPS veterans might be aware of the two buffers (TOP and BOTTOM) that used to be available in TextGen for years. These were predefined, hard-coded values; now it's up to the language designer to designate the areas of an output file and their order.
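A minimal stand-alone Java sketch of such a layout (the section names fileComment, packageStatement, imports and classBody are assumptions for illustration, not real MPS constructs): sections are filled in any order during generation, but concatenated in the declared layout order:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of a multi-section output layout: imports can be populated
// while the class body is being generated, yet appear first in the file.
public class SectionedOutput {
    private final String[] layout = {"fileComment", "packageStatement", "imports", "classBody"};
    private final Map<String, StringBuilder> sections = new LinkedHashMap<>();

    SectionedOutput() {
        for (String s : layout) sections.put(s, new StringBuilder());
    }

    void append(String section, String text) {
        sections.get(section).append(text);
    }

    // sections are concatenated in the declared layout order,
    // regardless of the order in which text was appended to them
    String result() {
        StringBuilder out = new StringBuilder();
        for (String s : layout) out.append(sections.get(s));
        return out.toString();
    }

    public static void main(String[] args) {
        SectionedOutput out = new SectionedOutput();
        out.append("packageStatement", "package demo;\n");
        out.append("classBody", "class C { java.util.List<C> xs; }\n");
        out.append("imports", "import java.util.List;\n"); // appended *after* the body
        System.out.print(out.result());
    }
}
```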

Note that distinct areas come in handy especially when generating text from attributes, as attributes change the order of execution. With them it's even trickier to make sure the flow of text gen corresponds to the physical text lines, and designated areas make the generation a lot more comfortable.

The layout of the file can be specified for a top-level text gen, i.e. the one that produces files.

The support for this mechanism is preliminary and quite rudimentary for now. We utilize it in our BaseLanguage implementation, so this notice is meant to explain what's going on rather than to encourage you to put it into production.

Context objects

It is vital for certain model-to-text conversion scenarios to preserve some context information during TextGen. In BaseLanguage, for example, TextGen has to track model imports and qualified class names. The cumbersome and low-level approach of previous versions based on direct text buffer manipulation has been replaced with the possibility to define and use customized objects as part of the concept's textgen specification.

At the moment, regular Java classes (with a no-arg constructor or a single-arg constructor that takes a concept instance) are supported as context objects. You reference context objects from code as regular variables.
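As an illustration of the idea (the ImportCollector class below is an assumed example, not the real BaseLanguage TextGen classes), a context object could be a plain class that accumulates imports while the class body is being generated:

```java
import java.util.Set;
import java.util.TreeSet;

// Toy sketch of a TextGen "context object": a plain Java class with a
// no-arg constructor that textgen code references like a regular variable.
public class ImportCollectorDemo {
    static class ImportCollector {
        private final Set<String> imports = new TreeSet<>();
        void require(String fqName) { imports.add(fqName); }
        String render() {
            StringBuilder sb = new StringBuilder();
            for (String i : imports) sb.append("import ").append(i).append(";\n");
            return sb.toString();
        }
    }

    public static void main(String[] args) {
        ImportCollector imports = new ImportCollector();
        // while generating the class body, class references register their imports
        imports.require("java.util.List");
        imports.require("java.util.List");   // duplicates collapse into one entry
        System.out.print(imports.render());  // later emitted into the imports section
    }
}
```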

Handling attributes in TextGen

When nodes are annotated with attributes, the TextGen for these attributes is processed first. The ${attributed node} construct within the attribute's TextGen will then insert the TextGen of the attributed node itself.
If there are multiple attributes on a single node, they are processed in turn, starting with the last-attached (top-most) attribute. Attributes without an associated TextGen are skipped.


The top-most attribute is technically the last one in the containment (the way the editor depicts the attributes differs visually from the order one may notice in the Node Explorer). I.e. a node N with attributes A1 and A2 looks like A2(A1(N)) in the editor, and TextGen processes A2 first, then A1, then N. When TextGen is asked to generate text for N, it looks up the last attribute in the containment that has textgen defined (if any) and delegates to it. Let's assume A2 has TextGen and A1 does not. Then, if the attribute's component contains ${attributed node}, TextGen checks the previous attributes for associated textgen. If there are none (and in our sample A1 has none), text generation for the actual node N is the last to receive control.
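The delegation order can be sketched in plain Java (a toy model, not the MPS implementation): each attribute with TextGen wraps the output of the next inner attribute that has TextGen, and the node itself is innermost.

```java
import java.util.Arrays;
import java.util.List;

// Toy model of attribute TextGen delegation. Attributes are stored in
// containment order; the LAST one with TextGen gets control first, and
// ${attributed node} delegates inward to the next attribute (or node N).
public class AttributeTextGen {
    // 'attributes' in containment order; a null entry = no TextGen defined
    static String generate(String nodeText, List<String> attributes, int from) {
        for (int i = from; i >= 0; i--) {
            String attr = attributes.get(i);
            if (attr != null) {
                // ${attributed node} delegates to the remaining attributes / node
                return attr + "(" + generate(nodeText, attributes, i - 1) + ")";
            }
        }
        return nodeText; // the node itself is the last to receive control
    }

    public static void main(String[] args) {
        // A2 has TextGen, A1 does not -> A2 wraps N directly
        System.out.println(generate("N", Arrays.asList(null, "A2"), 1));
        // both have TextGen -> A2(A1(N))
        System.out.println(generate("N", Arrays.asList("A1", "A2"), 1));
    }
}
```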


Here is an example of the text gen component for the ForeachStatement (jetbrains.mps.baseLanguage).

This is an artificial example of the text gen:

producing following code block containing a number of lines with indentation:

An example of TextGen for attribute that adds extra text to output of attributed node:

Use of attributed node from TextGen of an attribute



Languages for IDE integration

Generic placeholders

A generic placeholder represents the whitespace between two nodes and can be added to any node collection. The Control/Cmd + Shift + Enter key combination inserts the placeholder at the current position within a collection. The placeholder behaves in a transparent way - you may still invoke the completion menu on the placeholder node to replace it with other nodes, or press Enter to add the usual node in the next sibling position.

Using the generic placeholder, users of any language can insert arbitrary visual separators (empty lines) into code, even if the language does not support such a concept.

Generic comments

The generic placeholder may itself contain content. MPS provides the text content for the placeholder in the jetbrains.mps.lang.text language or the general-purpose devkit. This gives you fully editable multiline text with support for basic styling (bold, italics and underline), clickable hyperlinks and embedded nodes (code).

After including the jetbrains.mps.lang.text language or the general-purpose devkit, press "[" (open square bracket) on the placeholder. You will get a node that allows you to enter and edit text. The text is multiline and consists of words. Any word can be made bold (press Control + B), italic (press Control + I) or underlined (press Control + U). To add a link, press Alt + Enter and invoke the Add Link intention.

To insert an arbitrary node into the text, invoke code completion inside the text node and select "node". The node placeholder will appear, so you may input any sample node. The embedded code can use nodes from any imported language:

Generic support for commenting out nodes in MPS

MPS provides a universal way to comment out nodes in models. In previous versions this functionality had to be implemented in all languages separately, either through node attributes or dedicated "comment" nodes. Since MPS 3.3 the information about a node being commented out is stored in the model in a generic way. The smodel language ignores commented out nodes by default, so your queries do not have to filter them out explicitly.
Additionally, actions have been created to comment out/uncomment a node by hitting Control/Cmd + /.


You can watch a short screen-cast on generic commenting out that explains the feature and describes the customization options.

How to use it


In previous versions of MPS, language authors had to provide their own implementations of the comment-out functionality for their languages. The old language-specific functionality may thus clash with the new generic functionality of MPS 3.3; in particular, the keyboard shortcut Control/Cmd + / is now taken by the generic comment-out action and will no longer trigger the specific implementations that used it before. It is advisable for language authors to:

  1. choose a different key combination to trigger the specific comment out/uncomment functionality
  2. deprecate the custom comment-out functionality
  3. customize the generic comment-out functionality
  4. provide a migration that automatically replaces usages of the custom comment-out functionality with the generic one
  5. eventually remove the custom comment-out functionality

A semi-automated migration process is available in MPS 3.3 to help you migrate painlessly. Please check out the Migrating away from your custom commenting out functionality section below.



You can select any node in MPS, except roots, and press Control/Cmd + "/". That node will be commented out. Let's look at some examples:

The node that you select or point the cursor at will get commented out. Every single non-root node can be commented out - irrespective of whether it occupies a whole line, several lines, or whether it is nested deeply in an expression-like hierarchy.

If you comment out a node, it is physically removed from its place in the model. If the commented out node occupied a required child link, an empty cell is provided so that the user can fill in a new child value.

In BaseLanguage, for example, this gives you possibilities beyond what Java parsers allow. You can comment out an IfStatement's condition:

a method parameter:

or a variable type:

To give another example, the editor definition language allows you to comment out an editor cell, for example:


To uncomment a commented out node you simply press Control/Cmd + “/” while positioned on it.

Smart commenting out

The comment and uncomment actions have some intelligence built in to decide which node to comment/uncomment.

  • if a node or a set of nodes is selected, this node/nodes will be commented out/uncommented
  • if no node is selected, the editor attempts to comment out/uncomment the current "line": a search starts at the node under the caret to identify the closest ancestor vertical collection, and the ancestor of the node under the caret that is a member of this vertical collection will be commented out/uncommented

You may get finer-grained control over the mechanism of detecting the ancestor line of the node under caret - simply define a handler for the COMMENT action on a collection of cells as follows:

How does it work in the model

When a node is commented out, it is wrapped as a child of a special "child attribute" called BaseCommentAttribute. An instance of this attribute is then attached, under the commented node's former link, to the former parent of the commented node. A ChildAttribute is the same as the LinkAttribute concept, except that a ChildAttribute gets attached to aggregation links. So the commented nodes are not stored as usual children, and they won't appear in queries like node.children, node.descendants, etc.
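A toy Java model of this storage scheme (the names are illustrative, not the smodel API): commenting out moves the node from its child link into a per-link comment attribute on the former parent, so ordinary children queries no longer see it.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of how commenting out re-homes a node: it leaves the regular
// children list and is kept under a "comment attribute" keyed by the
// containment link it used to occupy.
public class CommentModel {
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        final Map<String, List<Node>> commentAttributes = new HashMap<>();
        Node(String name) { this.name = name; }
    }

    static void commentOut(Node parent, String link, Node child) {
        parent.children.remove(child);
        parent.commentAttributes.computeIfAbsent(link, k -> new ArrayList<>()).add(child);
    }

    public static void main(String[] args) {
        Node block = new Node("block");
        Node stmt = new Node("print()");
        block.children.add(stmt);

        commentOut(block, "commands", stmt);

        // ordinary children queries no longer see the commented node...
        System.out.println(block.children.size());
        // ...but it is still reachable through the comment attribute
        System.out.println(block.commentAttributes.get("commands").get(0).name);
    }
}
```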

The MPS editor knows about comments and will draw the commented out nodes, in this role, alongside the regular children.


The BaseCommentAttribute annotation comes from jetbrains.mps.lang.core, so this language needs to be listed among used languages in models that contain commented out nodes.

Querying for commented nodes

The smodel language gives you options to query for commented nodes. You use the same syntax that works for any attributes, except that the comment attribute allows parametrization by the containment link. For example, if a node has a child collection named commands, querying whether any of the commands children has been commented out would look like:


By default every commented node is drawn surrounded by /* */. You can override the visual appearance of a commented out node by defining a custom commented editor for the concept. Just define the usual editor with the hint “comment”:


For the comment hint to be available, your editor model needs to import the jetbrains.mps.lang.core.editor model.

The style of the editor should be changed so that the user can easily visually distinguish commented code.

You can either re-use the pre-defined Comment style, which uses a gray color with italics style, or you may create your own style for commented out nodes.

Note: The children of the commented node should be drawn with their usual editor so you need to remove the comment hint in child cells:

Easier customization

The next applicable editor cell gives you a more convenient way to customize the look of commented out nodes - you may address several concepts in a hierarchy with a single customized comment editor.

The next applicable editor cell simply removes the comment hint and redirects the request to find the original editor of the concept (IfStatement). This avoids the need for repetition of the editor definition. You may further simplify the task, if you define a single editor bound to the comment hint for a common super-concept - this way all sub-concepts will get the customized comment editor.

Commenting out/uncommenting nodes from code

The CommentUtil class from jetbrains.mps.editor.runtime can be used to comment out and uncomment nodes from code, such as actions, intentions or key maps. This gives you options to further customize the behavior of commented out nodes.

The CellAction_CommentOrUncomment class and its inheritors come from the same package. They give you a way to simply comment out the node and restore the selection, or uncomment the node if it is currently commented.

The Comment editor action

The response to the comment/uncomment action can also be customized on the node level. You can set the handler for COMMENT action in the cell's action map:

For example, if we want to prevent the user from commenting out conditions in the robot's Kaja While statement, we attach the above action map to the While editor's cell representing the condition:

Since the COMMENT action is customized it will do what is indicated.

The action will work only if the condition node is selected.

Since we create the CellAction_CommentOrUncommentNode with the node as the parameter, where the node is the While statement, the action will process the While statement:

1) If it is not commented, the action will comment it out.

2) If it is commented out, the action will uncomment it.

Thus the commenting of the condition will be disabled.

Migrating away from your custom commenting out functionality

In versions prior to MPS 3.3, language authors had to implement the comment-out functionality themselves for each language individually. In MPS 3.3 this custom functionality becomes redundant and should be replaced by the generic functionality provided by MPS, perhaps with some customization as described above. The existing usages of the old custom commenting-out functionality should be migrated to the generic version in several steps:

  1. Your old concepts used for commenting out should be deprecated
  2. Your keyboard shortcuts, actions and intentions for commenting out/uncommenting should be deprecated or removed
  3. You may wish to customize the look of commented out nodes by defining custom editors attached to the "comment" editor hint (as described above)
  4. You may also wish to disable the generic comment out functionality on some editor cells (as described above)
  5. You may need to provide a migration that will automatically translate usages of your old custom commented nodes in user code into nodes commented in the generic way. This can be done either fully manually or with MPS assistance.

MPS-assisted migration

MPS can create a migration for you, provided you indicate which concepts represent the old custom comments using IOldCommentAnnotation and IOldCommentContainer. Since there were two typical ways to create custom comments in the past, there are two interfaces:

  • IOldCommentAnnotation - should be implemented by the NodeAttribute that indicates a node is commented out, if attributes were used to annotate commented out nodes
  • IOldCommentContainer - in case commented nodes were represented by a dedicated concept, such as SingleLineComment, these dedicated concepts should be marked with this concept interface

These marker concept interfaces come from jetbrains.mps.lang.core, so your language needs to extend this language in order to use them. Once marked, the generic comment-out functionality is disabled on nodes of these concepts in favor of the old custom comment-out functionality.

Additionally, the old commenting-out concepts will have warnings reported on them - Old comment container should be migrated or Old comment annotation should be migrated. The quick-fixes for these warnings will create the necessary migrations to convert your old custom commenting-out scheme into the generic one painlessly. Just trigger the quick fixes, check the generated migrations and then migrate your projects.

Fully manual migration

You may create the migration fully manually. Typically all that your migration needs to do is to find all nodes being commented out in the old custom way, uncomment them and call CommentUtil.comment() on each node to get it commented out in the new way. The CommentUtil class comes from jetbrains.mps.editor.runtime.

The generic comment out functionality marks commented out nodes with the BaseCommentAttribute annotation, which is attached to the parent of the commented-out node, holds the original role of the commented out node, and comes from jetbrains.mps.lang.core, so this language needs to be used in models that contain commented out nodes. An automatic migration should add such a language dependency to all altered models. You may take inspiration from the ReplaceSingleLineCommentsWithGenericComments migration, which migrated SingleLineComment nodes in BaseLanguage:


Alongside the usual language aspects, such as Structure, Editor, Type-system, etc., it is possible for language authors to create custom language aspects (e.g. an interpreter or an alternative type-system), have them generated into a language runtime and then use these generated aspects from code.


This document uses the customAspect sample bundled with MPS to teach you how to define custom language aspects. The custom aspect feature is still under development, so some of the Java plumbing shown in this document will gradually be replaced with dedicated DSL constructs.

What is a custom aspect?

Language definitions in MPS can be thought of as a collection of aspects: structure, editor, typesystem, generator. Each of the aspects consists of declarations used by the corresponding aspect subsystem. For example, the type-system aspect consists of type-system rules and is used by the type-system engine.

Each aspect of a language is now defined in a separate aspect model. For example, the editor aspect of language L is defined in the L.editor model.

Each aspect is described using a set of aspect's main languages. E.g. there's the j.m.lang.editor language to describe the editor aspect.

Declarations in an aspect model may or may not be bound to a concept (e.g. an editor is bound to a concept, while a mapping configuration in the generator aspect is not).

The aspect can be generated into a language's aspect runtime, which represents this aspect at runtime, in other words, when the language is used.

Since version 3.3, MPS allows language authors to define new aspects for its languages.

Development cycle of custom aspects

  1. Create a language to describe the aspect - you may reuse existing languages or create ones specific to the needs of the aspect. For example, each of the core MPS aspects uses its own set of languages, plus a few common ones, such as BaseLanguage or smodel
  2. Declare that this language (and maybe some others) describes some aspect of other languages - create an aspect descriptor
  3. Develop a generator in the created language to generate aspect's runtime classes (if needed)
  4. Develop the aspect subsystem that uses the aspect's runtime 

We'll further go through each step in detail.

Look around the sample project

If you open the customAspect sample project, you will get five modules.

The documentation language and its runtime solution are used to define a new documentation aspect for any other language. The sampleLanguage utilizes this new aspect to document its concepts. The sandbox solution shows the sampleLanguage in use, so that its documentation abilities can be viewed. The aspect subsystem is represented by the pluginSolution, which defines an action that shows the documentation for the concept of the currently focused node.


We won't cover the creation of the documentation language's concepts, nor the creation of the sampleLanguage and the sandbox solution in this cookbook, since these are core topics covered by all entry-level MPS tutorials. We will only focus on the specifics of creating a new language aspect.

Language runtime

Before we move on, let's consider for a second how language aspects work at runtime.

  • For each language in MPS, a language descriptor class is generated.
  • Given an aspect interface, the language descriptor returns a concrete implementation of this aspect in this language (let's call it AspectDescriptor).
  • The AspectDescriptor can be any class with any methods; the only restriction is that it must implement the marker interface ILanguageAspect. We suggest that an AspectDescriptor contain no code except getters for the entities described by the aspect.

This is how a typical language runtime looks:

The createAspect() method checks the type of the parameter expecting one of interfaces declared in aspects and returns a corresponding newly instantiated implementation.
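A stand-alone Java sketch of this lookup scheme. The DocumentationAspectDescriptor interface and the hard-coded documentation text are assumptions for illustration; the real generated classes in MPS look different:

```java
import java.util.List;

// Marker interface for aspect descriptors (simplified stand-in for the
// real MPS ILanguageAspect interface).
interface ILanguageAspect {}

// An assumed custom aspect: returns documentation for a concept name.
interface DocumentationAspectDescriptor extends ILanguageAspect {
    String getDocumentation(String conceptName);
}

// Toy language runtime: createAspect() checks the requested interface and
// returns a freshly instantiated implementation, or null if unsupported.
class SampleLanguageRuntime {
    <T extends ILanguageAspect> T createAspect(Class<T> aspectClass) {
        if (aspectClass == DocumentationAspectDescriptor.class) {
            return aspectClass.cast((DocumentationAspectDescriptor)
                concept -> concept.equals("DocumentedConcept") ? "Sample docs" : null);
        }
        return null; // this language does not implement the requested aspect
    }
}

public class AspectLookup {
    public static void main(String[] args) {
        List<SampleLanguageRuntime> runtimes = List.of(new SampleLanguageRuntime());
        // Languages -> LanguageRuntimes -> required aspect -> query it
        for (SampleLanguageRuntime runtime : runtimes) {
            DocumentationAspectDescriptor docs =
                runtime.createAspect(DocumentationAspectDescriptor.class);
            if (docs != null) {
                System.out.println(docs.getDocumentation("DocumentedConcept"));
            }
        }
    }
}
```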

This is how the interfaces defined in aspects may look (this example is taken from the Intentions aspect):

Using the language aspects

Now, let's suppose we would like to use some of the aspects. E.g. while working with the editor, we'd like to acquire a list of intentions, which could be applied to the currently selected node.

  1. We first find all the language runtimes corresponding to the languages imported
  2. then get the intentions descriptors for each of them
  • and finally get all the intentions from the descriptors and check them for applicability to the current node

The overall scheme is: Languages->LanguageRuntimes->Required aspect->Get what you want from this aspect

So your custom aspect needs to hook into this discovery mechanism so that callers can get hold of it.

Implementing a custom aspect

Let's look in detail into the steps necessary to implement your custom aspect using the customAspect sample project:

  1. To make MPS treat some special model as a documentation aspect (that is our new custom aspect), an aspect declaration should be created in the documentation language. To do so, we create a plugin aspect in the language and import the customAspect language.
  2. Create an aspect declaration in the plugin model of the language and fill in its fields. This tells MPS that this language can be used to implement a new custom aspect for other languages.
  3. After making/rebuilding the documentation language, it's already possible to create a documentation aspect in the sample language and create a doc concept in it.
  4. Now, we should move to the language runtime in order to specify the functionality of the new aspect as it should work inside MPS. In our example, let's create an interface that would be able to retrieve and return the documentation for a chosen concept. To do so, we create a runtime solution, add it as a runtime module of our documentation language and create an interface in it. Note that the runtime class must implement the ILanguageAspect interface. To satisfy our needs, the method must take a concept as a parameter and return a string with the corresponding documentation text.
  5. In the generator for the documentation language we now need to have an implementation of the interface from above generated. A conditional root rule and the following template will do the trick and generate the documentation descriptor class:

    The condition ensures that the rule only triggers for the models of your custom aspect, i.e. in our case models that hold the documentation definitions (jetbrains.mps.samples.customAspect.sampleLanguage.documentation).
    The useful feature here is the concept switch construction, which allows you to ignore the concept implementation details. It simply loops through all documented concepts (the LOOP macro) and for each such concept creates a matching case (exactly ->$[ConceptDocumentation]) that returns a string value obtained from the associated ConceptDocumentation.
  6. So we have an interface and an implementation class. Now, we need to tie them together - we have to generate the part of the LanguageRuntime class, which will instantiate our concept, i.e. whenever the documentation aspect is required, it will return the DocumentationDescriptor class. To understand how the following works, look at how the class is generated (see Language class in model j.m.lang.descriptor.generator.template.main). The descriptor instantiation is done by a template switch called InstantiateAspectDescriptor, which we have to extend in our new aspect language so that it works with one more aspect model:

    Essentially, we're adding a check for the DocumentationAspectDescriptor interface to the generated Language class and return a fresh instance of the DocumentationDescriptor, if the requested aspectClass is our custom aspect interface.
  7. The only thing left is using our new aspect. For that purpose, an action needs to be created that will show documentation for a concept of a node under cursor on demand:

    The jetbrains.mps.ide.actions@java_stub model must be imported in order to be able to specify the context parameters. The action must be created as part of a (newly created) plugin solution (more on plugin solutions at Plugin) with a StandalonePluginDescriptor and hooked into the menu through an ActionGroupDeclaration:

  8. This way the IDE Code menu will be enhanced.
  9. Let's now try it out! Rebuild the project, create or open a node of the DocumentedConcept concept in the sandbox solution and invoke the Show Documentation action from the Code menu:

The icon description language helps describe and instantiate icons for various MPS elements: concepts, actions, etc.

The language has two aims:
1. Provide a tool for quick icon prototyping (e.g. making new icons for concepts)
2. Make icons an extensible language construct

First impression

Wherever an icon is expected in the MPS language definition languages, you can enter a textual description of the desired icon instead of pointing to an existing .png file.


The jetbrains.mps.lang.resources language contains two constructs:


  • icon{} represents the image as an instance of javax.swing.Icon class.
  • iconResource{} returns an instance of jetbrains.mps.smodel.runtime.IconResource class.

Creating icon prototypes

When describing an icon, you can get assistance from the Create Icon intention, which offers an automatic way to create a textual description of an icon and thus to prototype it quickly.

Invoking the intention will result in creating a straightforward icon definition.

This definition describes a circular icon with the letter "X" inside of it. Upon regeneration, the generated icon takes effect and shows up in the UI.

The language explained

An icon description consists of layers, each of which can be any of:

  • a primitive graphical shape
  • a custom image loaded from a file
  • a character

These layers are then combined into a single image to represent the icon. These icon descriptions can be used:

  • to specify icons in different places of the language definition languages - in concepts, actions, etc, where icons are expected
  • in methods in the MPS UI that are supposed to return an Icon

Extending the language

The language is open for extension. To add new icon types, you need to create a new concept and make it implement the Icon interface. The Icon interface represents the desired icon and will get transformed to a .png file during the make process.

After generating a model, all Icons are collected from the output model and their generate() methods are called. These create the .png files for the images that the Icons describe. When an icon resource is requested (e.g. using the icon{} syntax), the resource referenced by the Icon.getResourceId() method is loaded using the classloader of the corresponding module and converted into a Java Icon object.
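The loading step - fetching a generated resource through a module's classloader - can be pictured in plain Java. This is an illustrative sketch only, assuming nothing about the real MPS machinery; IconLoading and loadIconBytes are hypothetical names:

```java
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch only: the real MPS IconResource machinery differs.
public class IconLoading {
    // Loads the raw bytes of an icon resource (e.g. a generated .png) via
    // the given classloader; returns null when the resource is missing.
    public static byte[] loadIconBytes(ClassLoader loader, String resourceId) {
        try (InputStream in = loader.getResourceAsStream(resourceId)) {
            if (in == null) {
                return null; // resource not found in this module
            }
            return in.readAllBytes();
        } catch (IOException e) {
            return null;
        }
    }
}
```

The bytes could then be wrapped in a javax.swing.ImageIcon for UI use.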

icon{} vs iconResource{}

There are two constructs in the resources language to load resources. icon{} loads an image as an instance of javax.swing.Icon class, while iconResource{} returns an instance of jetbrains.mps.smodel.runtime.IconResource class. The second one is used in core MPS aspects, which should not depend on the javax.swing package. All UI-related code uses icon{}.


Plugin is a way to integrate your code with the MPS IDE functionality.
The jetbrains.mps.lang.plugin and jetbrains.mps.lang.plugin.standalone languages give you a number of root concepts that can be used in your plugin. This chapter describes all of them.

Plugin instantiation

While developing a plugin, you have a solution holding the plugin and want the plugin classes to be automatically reloadable, so that you don't have to restart MPS after each change to see its effect. To set up the development environment correctly, do the following:

  1. Create a new solution for your plugin
  2. Create a model in this solution named <solution_name>.plugin
  3. Import j.m.lang.plugin and j.m.lang.plugin.standalone languages into the solution and the model
  4. Create a root StandalonePluginDescriptor in the model (it comes from the  j.m.lang.plugin.standalone language)
  5. Set the solution's Solution Kind to Other

    You can now edit your plugin model and see the changes applied to the current MPS instance just after generation. You can also distribute the solution and have the plugin successfully working for the users.

Actions and action groups

One can add custom actions to any menu in MPS by using action and action group entities.

An action describes one concrete action. Action groups are named lists of actions intended for structuring actions - adding them to other groups and to MPS groups (which represent the menus themselves) and combining them into popup menus. You can also create groups with dynamically changing contents.

How to add new actions to existing groups?

In order to add new actions to existing groups, the following should be done:

  1. actions should be described
  2. described actions should be composed into groups
  3. these groups should be added to existing groups (e.g. to predefined MPS groups to add new actions to MPS menus).

Predefined MPS groups are stored in the jetbrains.mps.ide.actions model, which is an accessory model to jetbrains.mps.lang.plugin language, so you don't need to import it explicitly into your model. 

Action structure

Action properties

Name - The name of an action. You can give any name you want, the only obvious constraint is that the names must be unique in the scope of the model.

Mnemonic - if a mnemonic is specified, the action will be available via the alt+mnemonic shortcut whenever any group that contains this action is displayed. Note that the mnemonic (if specified) must be one of the characters in the action's caption. The mnemonic is displayed as an underlined symbol in the action's caption.

Execute outside command - all operations with MPS models are executed within commands. A command is an item in the undo list (you don't control it manually, MPS does it for you), so the user can undo changes brought into the model by the action's execution. Also, all the code executed in a command has read-write access to the model. The catch is that showing visual dialogs to the user from inside a command can cause a deadlock by blocking while holding the read/write locks. It is thus recommended to set the execute outside command option to false only if you are not using UI in your action. Otherwise it should be set to true, and proper read/write locking should be performed manually with the read action and command statements within the action.

Also available in - currently, this can only be set to "everywhere", which means the action will be available not only in the context where you can invoke it through the completion menu, but also in any other context. E.g. if an action is added to the editor context menu group, but the author wants it to be available when the focus is in the logical view, or when all editors are closed, "also available in" should be set to "everywhere".

Caption - the string representing the action in menus

Description - this string (if specified) will be displayed in the status bar when this action is active (selected in any menu)

Icon - this icon will be displayed near the action in all menus. You can select the icon file by pressing the "..." button. Note that the icon must be placed near your language (because it's stored not as an image, but as a path relative to the language's root)

Construction parameters

Each action can be parameterized at construction time using construction parameters. These can be any data determining the action's behavior. Thus, a single action that uses construction parameters can represent multiple different behaviors. To manage actions and handle keymaps, MPS needs a unique identifier for each concrete behavior represented by an action. For this reason, a toString function was introduced for each construction parameter (it can be seen in the inspector). For primitive types there is no need to specify this function explicitly - MPS can do it automatically. For more complex parameters, you need to write this function explicitly so that each concrete behavior of an action produces a different set of values from the toString() functions.
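As a minimal illustration of this uniqueness contract (the Direction parameter and idString() below are hypothetical, not part of the MPS API), a complex construction parameter should map every distinct value to a distinct, stable string:

```java
// Hypothetical sketch: distinct construction-parameter values must yield
// distinct identifier strings, mirroring the role of the toString function.
public class ParamId {
    public enum Direction { UP, DOWN }

    public static String idString(Direction direction, int steps) {
        // Every distinct (direction, steps) pair produces a unique string,
        // so each action "behavior" gets its own identifier.
        return direction.name() + ":" + steps;
    }
}
```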

Enable/disable action control

Is always visible flag - if you want your action to be visible even in the disabled state (when the action is not applicable in the current context), set this to true, otherwise to false.

Context parameters - specify which items must be present in the current context for the action to be able to execute. They are extracted from the context before any of the action's methods are executed. Context parameters have conditions associated with them - required and custom are the two most frequently used ones. If some required parameters could not be extracted, the action state is set to disabled and the isApplicable/update/execute methods are not executed. If all required action parameters were extracted, you can use their values in all the action methods. Custom context parameters give you the option to decide whether the context parameter is mandatory on a case-by-case basis using the supplied function.

There are 2 types of action parameters - simple and complex action parameters.

  • Simple action parameters (represented by ActionDataParameterDeclaration) let you simply extract all available data from the current data context. The data is provided "by key", so you should specify the name and the key in the declaration. The type of the parameter will be set automatically.
  • Complex action parameters (represented by ActionParameterDeclaration) were introduced to perform some frequently used checks and typecasts. There are currently 3 types available for context parameters of this kind:
    • node<concept> - the currently selected node, which is an instance of the specified concept. The action won't be enabled if the selected node isn't an instance of this concept.
    • nlist<concept> - the currently selected nodes. It is checked that all nodes are instances of the concept (if specified). As with node<concept>, the action won't be enabled if the check fails.
    • model - the current model holding the selected node

The available keys that the user can type into the context parameters declaration are obtained automatically from all imported models. MPS searches the imported models for subclasses of the CommonDataKeys (com.intellij.openapi.actionSystem) class. Typical such classes are:

  • CommonDataKeys (com.intellij.openapi.actionSystem)
  • PlatformDataKeys (com.intellij.openapi.actionSystem)
  • MPSCommonDataKeys (jetbrains.mps.ide.actions)
  • MPSEditorDataKeys (jetbrains.mps.ide.editor)
  • MPSDataKeys (jetbrains.mps.workbench)

Be sure to import these models (Control/Cmd + M) in order to see them in the completion menu for context parameters.


Is Applicable / update - In cooperation with the context parameters, this method controls the enabled/disabled state of the action. You can pick either of the two options:

  • The isApplicable method returns the new state of an action
  • The update method is designed to update the state manually. You can also update any of your action's properties (caption, icon, etc.) by accessing the action's presentation via event.getPresentation(). Call the setEnabledState() method on an action to enable or disable it manually.

These methods are executed only if all required context parameters have been successfully extracted from the context.

Note: The this keyword refers to the current action, use action<...> to get hold of any visible action from your code.



Do not use the isApplicable() method if you want to modify the presentation manually. Although no errors would be reported from within isApplicable(), it is not guaranteed to work properly in all cases. The update() method is a more suitable place for complex presentation manipulations.


Execute - this method is executed when the action is performed. It is guaranteed to be executed only if the action's update method for the same event left the action in the active state (or isApplicable returned true) and all the required context parameters were present in the current context and were filled in.

Methods - in this section you can declare utility methods.

Group structure

A group describes a set of actions and provides information about how to modify other groups with the current group.


Name - The name of the group. You can give any name you want, the only obvious constraint is that the names must be unique in the scope of the model.

is popup - if this is true, the group represents a popup menu, otherwise it represents a list of actions.

When "is popup" is true:
  • Caption - string that will be displayed as the name of the popup menu
  • Mnemonic - if a mnemonic is specified, the popup menu will be available via the alt+mnemonic shortcut whenever any group that contains it is displayed. Note that the mnemonic (if specified) must be one of the characters in the caption. The mnemonic is displayed as an underlined symbol in the popup menu caption.
  • Is invisible when disabled - if set to true, the group will not be shown in case it has no enabled actions or is disabled manually in the update() method. Call the enable()/disable() methods on an action group to enable or disable it manually.

There are 3 possibilities to describe group contents:

Element list - this is just a static list of actions, groups and labels (see modifications). The available elements are:

  • ->name - an anchor. Anchors are used for modifying one group with another. See the Add statement section for details.
  • <---> - a separator
  • ActionName[parameters] - an action.

Build - this alternative should be used for groups whose contents are static but depend on some initial conditions - the group is built once and is never updated afterwards. Use the add statement to add elements inside the build block.

Update - this is for dynamically changing groups. The group is updated every time right before it is rendered.



In the update/build blocks use the add statement to add group members.


Modifications and labels

Add to <group> at position <position> - this statement adds the current group to <group> at the given position. Every group has a <default> position, which tells MPS to add the current group to the end of the target group. Some groups provide additional positions by adding so-called anchors into themselves. Adding anchors is described in the contents section. The anchor itself is invisible and represents a position at which a group can be inserted.


  • You don't need to care about the order of group creation and modification - this statement is fully declarative.
  • If A is added into B, and B into C, then C will contain A.
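The declarative semantics can be pictured with a small sketch (Groups and its methods are illustrative names, not the MPS API): additions are collected first and resolved afterwards, so statement order is irrelevant and membership is transitive.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative model of declarative group additions (not the MPS API).
public class Groups {
    private final Map<String, List<String>> members = new HashMap<>();

    // Records "element is added into target"; order of calls is irrelevant.
    public void addTo(String target, String element) {
        members.computeIfAbsent(target, k -> new ArrayList<>()).add(element);
    }

    // Resolves a group to its flat list of actions, expanding nested groups.
    public List<String> flatten(String group) {
        List<String> result = new ArrayList<>();
        for (String element : members.getOrDefault(group, List.of())) {
            if (members.containsKey(element)) {
                result.addAll(flatten(element)); // nested group
            } else {
                result.add(element);             // plain action
            }
        }
        return result;
    }
}
```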

actionGroup <...> expression

There is a specific expression available in the jetbrains.mps.lang.plugin language to access any registered group - the actionGroup<group> expression.

Bootstrap groups

Bootstrap groups are a way to work with action groups that have been defined outside of MPS (e.g. groups contributed by IDEA or some IDEA plugin).
In this case, a bootstrap group is defined in MPS and its internal ID is set to the ID of the external group. Once this is done, you can work with the bootstrap group just like with a normal one - insert it into your groups and vice versa.
A regular user rarely needs to use bootstrap groups.



A quick and simple tutorial by Federico Tomassetti on how to create an action and show it in a context menu is available here:

Please bear in mind that this tutorial uses an older version of MPS and the actual workings in MPS have changed since then. In particular, we now recommend using plugin solutions instead of the plugin aspect of a language to hold your actions. The tutorial may still give you some guidelines and useful insight.

Displaying progress indicators

Long-lasting actions should indicate their activity and progress to the user. Check out the Progress indicators page for details on how to use progress bars, how to allow for cancellation and how to enable actions for running in the background.

KeyMap Changes

The KeymapChangesDeclaration concept allows the plugin to assign key shortcuts to individual actions and group them into shortcuts schemes.

Any action can have a number of keyboard shortcuts. These can be specified using the KeyMapChanges concept. For a parameterized action, which has a number of "instances" (one instance per parameter value), a function can be specified that returns a different shortcut for each parameter value.
In MPS, there are some "default keymaps", which you can see in Settings->Keymaps. The for keymap section allows you to specify the keymap that the KeyMapChanges definition is contributing to. E.g. one can set different shortcuts for the same action in the macOS and the Windows keymaps.

Default Keymap


If you add a keyboard shortcut to the Default keymap, all keymaps are altered with this shortcut.



Note that by default ctrl is changed to cmd in the macOS keymap. If you want your action to have a ctrl + something shortcut on macOS, you should re-define this shortcut for the macOS keymap.

All the actions added by plugins are visible in Settings->Keymap and Settings->Menus and Toolbars. This means that any user can customize the shortcuts used for all MPS actions.

A KeyMap Change should be given a name unique within the model; it must specify the keymap that is being altered (or Default to change all keymaps) and then assign a keystroke to actions that should have one. The keystroke can either be a SimpleShortcutChange with a directly specified keystroke or a ParametrizedShortcutChange, which gives you the ability to handle parametrized actions.


If your action uses platform indices (which is very rare), add it to NonDumbAwareActions. Those actions will be automatically disabled while the indices are being built.

Editor Tabs

If you look at any concept declaration you will certainly notice the tabs at the bottom of the editor. You are able to add the same functionality to the concepts from your language.

What is the meaning of these tabs? The answer is pretty simple - they contain the editors for some aspects of the "base" node. Each tab can be either single-tabbed (which means that only one node is displayed in it, e.g. editor tab) or multi-tabbed (if multiple nodes can be created for this aspect of the base node, see the Typesystem tab, for example).

How is the editor for a node created? When you open some node, call it N, MPS tries to find the "base" node for N. If there isn't any base node, MPS just opens the editor for the selected node. If the base node is found (call it B), MPS opens the tabs for it, containing editors for its subordinate nodes. Then it selects the tab for N and sets the top icon and caption corresponding to B.
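The decision described above can be condensed into a short sketch (the node lookup is modeled with a plain map here; editorRoot is a hypothetical helper, not MPS API):

```java
import java.util.Map;

// Illustrative sketch of the "base node" decision, not the MPS editor API.
public class TabOpening {
    // Returns the node whose icon and caption the editor header shows:
    // the base node B if one exists for N, otherwise N itself.
    public static String editorRoot(String n, Map<String, String> baseOf) {
        String base = baseOf.get(n);    // try to find the base node for N
        return base != null ? base : n; // no base -> plain editor for N
    }
}
```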

When you create tabbed editors, you actually provide rules for:

  • finding the base node
  • finding subordinate nodes
  • optionally, an algorithm for creating subordinate nodes

The tabs that match the requested base concept are displayed and organized depending on their relative order rules specified in their respective order constraints sections.

Editor Tab Structure

Name - The name of the rule. You can give any name you want, the only obvious constraint is that the names must be unique in the scope of the model.

Icon - this icon will be displayed in the header of the tab. You can select the icon file by pressing the "..." button. Note that the icon must be placed near your language (because it's stored not as an image, but as a path relative to the language's root)

Shortcut char - a char to quickly navigate to the tab using the keyboard

Order constraints - an instance of the Order concept. Orders specify an order, in which the current tab should be displayed relative to the other tabs. You can either refer to an external order or specify one in-place.

Base node concept - the concept of the base node for this as well as all the related tabs.

Base Node - this is a rule for searching for the base node given a known node. It should return null, if the base node is not found or this TabbedEditor can't be applied.

Is applicable - indicates whether the tab can be used for the given base node

command - indicates whether the node creation should be performed as a command, i.e. whether it should be undoable; such a command must use no additional UI interaction with the user.

getNode/getNodes - should return the node or a list of nodes to edit in this tab

getConcepts - returns the concepts of nodes that this tab can be used to edit

Create - if specified, this will be executed when the user asks to create a new node from this tab. It is given the requested concept and the base node as parameters.


A tool is an instrument that has a graphical presentation and is aimed at performing specific tasks. For example, the Usages View, the Todo Viewer, and the Model and Module Repository Viewers are all tools. MPS has rich UI support for tools - you can move them by drag-and-drop from one edge of the window to another, hide them, show them, and perform many other actions.

Tools are created "per project". They are initialized/disposed on class reloading (after language generation, on "reload all" action etc.)

Tool structure

Name - The name of the tool. You can give any name you want, the only obvious constraint is that the names must be unique in the scope of the model.

Caption - this string will be displayed in the tool's header and on the tool's button in the tools pane

Number - if specified, alt+number becomes a shortcut for showing this tool (if it is available)

Icon - the icon to be displayed on the tool's button. You can select the icon file by pressing the "..." button. Note that the icon must be placed near your language (because it's stored not as an image, but as a path relative to the language's root)

Position - one of top/bottom/left/right to add the tool to the desired MPS tool bar

Init - initialize the tool instance here

Dispose - dispose all the tool resources here

getComponent - should return a Swing component (instance of a class which extends JComponent) to display inside the tool's window. If you are planning to create tabs in your tool and you are familiar with the tools framework in IDEA, it's better to use IDEA's support for tabs. Using this framework greatly improves tabs functionality and UI.

Fields and methods - regular fields and methods, you can use them in your tool and in the external code.

Tool operation

We added the operation (the GetToolInProjectOperation concept) to provide easy access to a tool in a given project. Use it as project.tool<toolName>, where project is an IDEA Project. Do not forget to import the jetbrains.mps.lang.plugin.standalone language to be able to use it.

Be careful


This operation can't currently be used in the dispose() method

Tabbed Tools

It's the same as a tool window, but it can additionally contain multiple tabs.

Tool structure

Name - The name of the tool. You can give any name you want, the only obvious constraint is that the names must be unique in the scope of the model.

Caption - this string will be displayed in the tool's header and on the tool's button in the tools pane

Number - if specified, alt+number becomes a shortcut for showing this tool (if it is available)

Icon - the icon to be displayed on the tool's button. You can select the icon file by pressing the "..." button. Note that the icon must be placed near your language (because it's stored not as an image, but as a path relative to the language's root)

Position - one of top/bottom/left/right to add the tool to the desired MPS tool bar

Init - initialize the tool instance here

Dispose - dispose all the tool resources here

Fields and methods - regular fields and methods, you can use them in your tool and in the external code.

Preferences components

Sometimes you may want to edit and save some settings (e.g. your tools' settings) between MPS startups. We have introduced preferences components for this purpose.

Each preferences component includes a number of preferences pages and a number of persistent fields. A preferences page is a dialog for editing user preferences. They are accessible through File->Settings.

Persistent fields are saved to the .iws files when the project is closed and restored from them on project open. The saving process uses reflection, so you don't need to care about serialization/deserialization in most cases.
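The reflective saving can be imagined roughly as follows (a simplified, hypothetical sketch - the actual IDEA/MPS serializer is far more elaborate; PrefsSnapshot is an invented name for illustration):

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified sketch of reflection-based persistence (not the real serializer).
public class PrefsSnapshot {
    // Collects the component's field names and values, much like the
    // persistence mechanism walks persistent fields before writing them out.
    public static Map<String, Object> snapshot(Object component) {
        Map<String, Object> state = new LinkedHashMap<>();
        for (Field field : component.getClass().getDeclaredFields()) {
            field.setAccessible(true);
            try {
                state.put(field.getName(), field.get(component));
            } catch (IllegalAccessException ignored) {
                // skip inaccessible fields in this sketch
            }
        }
        return state;
    }
}
```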



Only primitive types and non-abstract classes can be used as types of persistent fields. If you want to store some complex data, create a persistent field of type org.jdom.Element (do not forget to import the org.jdom model), annotate it with com.intellij.util.xmlb.annotations.Tag and serialize/deserialize your data manually in after read / before write.

Preferences component structure

name - component name. You can give any name you want, the only obvious constraint is that the names must be unique in the scope of the model.

fields - these are the persistent fields. They are initialized before the after read block and before page creation, so their values will be correct at any moment they can be accessed. They can have default values specified, as well.

after read / before write - these blocks are used for custom serialization purposes and for applying/collecting preferences, which have no corresponding preferences pages (e.g. tool dimensions)

pages - preferences pages

Preferences page structure

name - the string to be used as a caption in Settings page. The name must be unique within a model.

component - a UI component to edit preferences.



The uiLanguage components can be used here

icon - the icon to show in the Settings window. The size of the icon can be up to 32x32.

reset - reset the preferences values in the UI component when this method is called.

commit - in this method, preferences should be collected from the UI component and committed to wherever they are used.

isModified - if this method returns false, commit won't be executed. This is typically useful for preferences pages with a long-running commit method.

PreferenceComponent expression

We added an expression to provide easy access to a PreferenceComponent in a given project. You can access it as project.preferenceComponent<componentName>, where project is an IDEA Project. Do not forget to import the jetbrains.mps.lang.plugin.standalone language to use it.

Be careful


This operation can't currently be used in the dispose() method

Custom plugin parts (ProjectPlugin, ApplicationPlugin)

Custom plugin parts are custom actions performed on plugin initialization/disposal. They behave exactly like plugins. You can create as many custom plugin parts for your language as you want. There are two types - project and application custom plugins. The project custom plugin is instantiated once per project, while the application custom plugin is instantiated once per application and therefore doesn't have a project parameter.


In MPS, any model consists of nodes. Nodes can have many types of relations. These relations may be expressed in a node structure (e.g. "class descendants" relation on classes) or not (e.g. "overriding method" relation on methods). Find Usages is a tool to display some specifically related nodes for a given node.

In MPS, the Find Usages system is fully customizable - you can write your own entities, so-called finders, which represent algorithms for finding related nodes. For every type of relation there is a corresponding finder.

This is what a "find usages" result looks like:

Using Find Usages Subsystem

You can press Alt+F7 on a node (no matter where - in the editor or in the project tree) to see what kind of usages MPS can search for.

You can also right-click a node and select "Find Usages" to open the "Find usages" window.


 Finders - select the categories of usages you want to search for

 Scope - this lets you select where you want to search for usages - in a concrete model, a module, the current project or everywhere.

 View Options - additional view options

After adjusting your search, click OK to run it. Results will be shown in the Find Usages Tool as shown above.


To implement your own mechanism for finding related nodes, you should become familiar with Finders. For every relation there is a specific Finder that provides all the information about the search process.

Where to store my finders?

Finders can be created in any model by importing the findUsages language. However, MPS collects finders only from findUsages language aspects. So, if you want your finder to be used by the MPS Find Usages subsystem, it must be stored in the findUsages aspect of your language.

Finder structure


name

The name of a finder. You can choose any name you want, the only obvious constraint being that the names must be unique in the scope of the model.

for concept

The finder will be tested for applicability only on those nodes that are instances of this concept or its subconcepts.


description

This string represents the finder in the list of finders. It should be rather short.

long description

If it's not clear from the description string what exactly the finder does, you can add a long description, which will be shown as a tooltip for the finder in the list of finders.

is visible

Determines whether the finder is visible for the current node. For example, a finder that finds ancestor classes of some class should not be visible when this class has no parent.

is applicable

Finders that have passed the for concept check are tested for applicability to the current node. If this method returns true, the finder is shown in the list of available finders; otherwise it is not shown. The node argument of this method is guaranteed to be an instance of the concept specified in "for concept" or its subconcepts.
Please note the difference between is visible and is applicable. The first one is responsible only for viewing. The second one represents a "valid call" contract between the finder and its caller. This is important because there is an execute statement in the findUsages language, which will be described later. See the execute section below for details.


find

This method should find the given node's usages in the given scope. For each found usage, use the add result statement to register it.

searched nodes

This method returns the nodes for which the finder searched. These nodes are shown in the searched nodes subtree in the tool.
For each node to display, use the add node statement to register it.

get category

There are a number of ways to group found nodes in the tool. One of them is grouping by category, which is assigned to every found node by the finder that found it. This method provides a category for each node found by this finder.
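The grouping itself amounts to bucketing nodes by their category string, roughly like this (an illustrative sketch; nodes are represented by plain strings rather than MPS nodes):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch of category-based grouping in the usages view.
public class CategoryGrouping {
    // Buckets each found node under the category its finder assigned to it.
    public static Map<String, List<String>> groupByCategory(Map<String, String> categoryOfNode) {
        Map<String, List<String>> groups = new TreeMap<>();
        categoryOfNode.forEach((node, category) ->
                groups.computeIfAbsent(category, k -> new ArrayList<>()).add(node));
        return groups;
    }
}
```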

What does the MPS Find Usages subsystem do automatically? 

  • Stores search options between multiple invocations and between MPS runs
  • Stores search results between MPS runs
  • Automatically handles deleted nodes
  • All the visualization and operations with found nodes are done by the subsystem, not by the finders

Specific Statements


Finders can be reused thanks to the execute statement. The execution of this statement consists of 2 steps: validating the search query (checking for concept and is applicable), and executing the find method. That's where you can see the difference between is applicable and is visible. If you use is applicable for cases when the finder should be applicable but not shown, you can get an error when using this finder in the execute statement.
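The validate-then-find contract can be sketched like this (an illustrative model using plain Java functional types, not the findUsages language itself):

```java
import java.util.function.Predicate;

// Illustrative sketch of the execute statement's validate-then-find contract.
public class ExecuteFinder {
    // Returns true when find was actually executed.
    public static boolean runFinder(Object node,
                                    Predicate<Object> isInstanceOfConcept,
                                    Predicate<Object> isApplicable,
                                    Runnable find) {
        // Step 1: validate the search query.
        if (!isInstanceOfConcept.test(node) || !isApplicable.test(node)) {
            return false; // invalid call: find never runs
        }
        // Step 2: execute the find method.
        find.run();
        return true;
    }
}
```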


You can see some finder examples in jetbrains.mps.baseLanguage.findUsages

You can also find all finders by going to the FinderDeclaration concept (Ctrl+N, type "FinderDeclaration", then press ENTER) and finding all instances of this concept (Alt+F7, check instances, then check Global Scope).


One of the most effective ways to maintain high quality of code in MPS is the instant on-the-fly code analysis that highlights errors, warnings, and potential problems directly in code. Just like with other code-quality reporting tools, it is essential for the user to be able to mark false positives so that they are not reported repeatedly. MPS provides language developers with a customizable way to suppress errors in their languages. This functionality was used to implement the Suppress Errors intention for BaseLanguage:
One place where this feature is also useful are the generators, since type errors, for example, are sometimes unavoidable in the templates.

If a node is an instance of a concept which implements the ISuppressErrors interface, issues on this node and all its children won't be shown. For example, comments in BaseLanguage implement ISuppressErrors. It is also possible to define child roles in which issues should be suppressed, by overriding the boolean method suppress(node<> child) of the ISuppressErrors interface.
Additionally, if a node has an attribute of a concept that implements ISuppressErrors, issues in that node will be suppressed too. There is a convenient default implementation of an ISuppressErrors node attribute called SuppressErrorsAttribute. It can only be applied to nodes that are instances of ICanSuppressErrors.

An example of using the SuppressErrorsAttribute attribute and the corresponding intention.

There is an error in editor:


BaseLanguage Statement implements ICanSuppressErrors, so the user can apply the highlighted intention here:

Now the error isn't highlighted any longer, but there is a newly added cross icon in the left pane. The SuppressErrorsAttribute can be removed either by pressing that cross or by applying the corresponding intention.



MPS provides an API for creating custom debuggers as well as for integrating with the debugger for Java. See the Debugger Usage page for a description of the MPS debugger features.

The fundamentals

In order to debug code that gets generated from the user models, MPS needs to:

  • track nodes in user models down to the generated code, in order to be able to match the two worlds seamlessly in the debugger
  • understand which types of breakpoints can be created on which nodes
  • know the options for starting the debugged code in the debugger
  • optionally also have a set of customized viewers to display the current values of data in memory of the debugged program to the user

MPS tries to automate as much of this as possible; in some scenarios, however, the language designer also has to do her share of the heavy lifting. Suppose you have a language, let's call it high.level, which generates code in some language low.level, which in turn is generated directly into text (there can be several other steps between high.level and low.level). Suppose that the text generated from low.level consists of Java classes, and you want to have your high.level language integrated with the MPS Java debugger engine. See the following explanatory table:

high.level extends or generates into BaseLanguage – you do not have to do anything.

high.level does not extend nor generate into BaseLanguage – specify which concepts in low.level are traceable, and use breakpoint creators to be able to set breakpoints for high.level.

Debugging BaseLanguage and its extensions - integration with the java debugger

To integrate your BaseLanguage-generated language with the MPS Java debugger engine, you rarely need to specify anything. MPS keeps track of the generation trace in the trace files, so breakpoints can be set as expected and the debugger correctly steps through your DSL code.


The automatic tracing recognizes situations when a node gets transformed through a reduction rule, and keeps a tracking record of the transformation in the appropriate model's trace file. For concepts that do not get reduced through their own reduction rules, you may, however, indicate explicitly which part of the generated code should be preserved in the trace file. The $TRACE$ macro serves this purpose.

See Traceable nodes for more details and an example on the $TRACE$ macro usage.

Startup of a run configuration under java debugger

MPS provides a special language for creating run configurations for languages generated into java – jetbrains.mps.baseLanguage.runConfigurations. Those run configurations are able to start under debugger automatically. See Run configurations for languages generated into java for details.

Custom viewers

When one views variables and fields in a variable view, one may want to define one's own way to show certain values. For instance, collections could be shown as a collection of elements rather than as an ordinary object with all its internal structure.

For creating custom viewers, MPS provides the customViewers language, which enables one to write one's own viewers for data of a certain form.

The main concept of the customViewers language is the custom data viewer. It receives a raw Java value (an object on the stack) and returns a list of so-called watchables. A watchable is a pair of a value and its label (a string which categorizes the value, e.g. whether it is a method, a field, an element, a size, etc.). Labels for watchables are defined in a custom watchables container. Each label can be assigned an icon.

The viewer for a specific type is defined in a custom viewer root. The following table describes the parts of a custom viewer:



for type

A type for which this viewer is intended.

can wrap

An additional filter for viewed objects.

get presentation

A string representation of an object.

get custom watchables

Subvalues of this object. The result of this function must be of type watchable list.

The customViewers language introduces two new types: watchable list and watchable.

This is the custom viewer specification for java.util.Map.Entry class:

And here we see how a map entry is displayed in debugger view:


Note that the JDT-tools solution must be imported into your plugin solution in order to compile your custom viewers.

Creating custom debugger

If generation of your language avoids BaseLanguage, you'll need to take care of node tracing and breakpoint specification manually. Additionally, if you are generating into languages other than Java, you'll have to attach the target platform's debugger to MPS. The Debugger API provided by MPS allows you to create such non-Java debuggers. All the necessary classes are located in the "Debugger API for MPS" plugin. See also the Debugger API description.

To summarize, when you target a language other than BaseLanguage, you typically need to specify:

Not all of these steps are absolutely necessary – which of them are depends on the actual language.


The customizedDebugger sample project bundled with MPS will give you an easy-to-follow example of a non-BaseLanguage Java-generating language that customizes the breakpoints as well as node traces in order to support debugging.


Traceable Nodes

This section describes how to specify which nodes need additional information saved into a trace file – such as the position of the text generated from the node, the visible variables, or the name of the file the node was generated into. Trace files contain the information that connects nodes in MPS with the generated text. For example, when a breakpoint is hit, the Java debugger tells MPS the line number in the source file, and MPS uses the information from the trace files to map it back to the actual node.

Specifically, trace files contain the following information:

  • position information: the name of the text file and the position in it where the node was generated;
  • scope information: for each "scope" node (one that has variables associated with it and visible in its scope) – the names and ids of the variables visible in that scope;
  • unit information: for each "unit" node (one that represents a unit of a language, for example a class in Java) – the name of the unit the node is generated into.

The concepts TraceableConcept, ScopeConcept and UnitConcept from the jetbrains.mps.lang.traceable language are used for this purpose. To save information into the trace file, you derive from one of these concepts and implement the specific behavior method. The concepts are described in the table below.



TraceableConcept – concepts for which location in text is saved and for which breakpoints can be created. Behavior method to implement: getTraceableProperty – a property to be saved into the trace file.

ScopeConcept – concepts which have local variables visible in the scope. Behavior method to implement: getScopeVariables – the variable declarations in the scope.

UnitConcept – concepts which are generated into separate units, like classes or inner classes in Java. Behavior method to implement: getUnitName – the name of the generated unit.

Trace files are created at the last stage of generation – text generation – so the concepts above can only be used in languages generated into text. The entries are filled in automatically whenever a TraceableConcept, ScopeConcept or UnitConcept is generated through a reduction rule.

When automatic tracing is impossible, the $TRACE$ macro can be used to explicitly match the desired input node of a high.level concept with the generated code.

Breakpoint Creators




To specify how breakpoints are created on various nodes, a breakpoint creators root node is used. This is a root of the BreakpointCreator concept from the jetbrains.mps.debugger.api.lang language and should be located in the language's plugin model. It contains a list of BreakpointableNodeItems, each of which specifies a list of concepts to create breakpoints for and a method that actually creates the breakpoint. jetbrains.mps.debugger.api.lang provides several concepts for operating with debuggers, and specifically for creating breakpoints. They are described below.

  • DebuggerReference – a reference to a specific debugger, like java debugger;
  • CreateBreakpointOperation – an operation which creates a location breakpoint of the specified kind on a given node for a given project;
  • DebuggerType – a special type for references to debuggers.

The following example shows the breakpoint creators node from baseLanguage.

In order to implement more complex filtering behavior, breakpoint creators can use the isApplicable function instead of a plain concept list. There is an intention to switch to using this function.



Integrating into the MPS make framework

Build Facets


Like basically any build or make system, the MPS make framework executes a sequence of steps, or targets, to build an artifact. A global ordering of the necessary make steps is derived from the relative priorities specified for each build target (target A has to run before B, and B has to run before C, so the global order is A, B, C).

A complete build process may address several concerns, for example generating models into text, compiling these models, deploying them to the server, and/or generating .png files from graphviz source files. In MPS, such different build aspects are implemented with build facets. A facet is a collection of targets that address a common concern.


Avoiding unnecessary file overwrites

The make process does not overwrite generated files that hold identical content to the one just generated. You can rely on the fact that only the modified files get updated on disk.

The targets within a facet can exchange configuration parameters. For example, a target that is declared to run early in the overall make process may collect configuration parameters and pass them on to a later target, which then uses them. The mechanism that achieves this intra-facet parameter exchange is called properties. In addition, targets can use queries to obtain information from the user during the make process.

The overall make process is organized along the pipes and filters pattern. The targets act as filters, working on a stream of data being delivered to them. The data flowing among targets is called resources. There are different kinds of resources, all represented as different Java interfaces and tuples:

  • MResource contains MPS models created by users, those that are contained in the project's solutions and languages
  • GResource represents the results of the generation process, which includes the output models, that is the final state of the models after generation has completed. These are transient models, which may be inspected by using the Save Transient Models build option
  • TResource represents the result of text-gen
  • CResource represents a collection of Java classes
  • DResource represents a collection of delta changes to models (IDelta)
  • TextGenOutcomeResource represents the text files generated by textgen



These resources interfaces have been deprecated:

  • IMResource contains MPS models created by users, those that are contained in the project's solutions and languages
  • IGResource represents the results of the generation process, which includes the output models, that is the final state of the models after generation has completed. These are transient models, which may be inspected by using the Save Transient Models build option
  • ITResource represents the text files generated by textgen towards the end of the make process
  • FResource

Build targets specify an interface. According to the pipes and filters pattern, the interface describes the kind of data that flows into and out of a make target. It is specified in terms of the resource types mentioned above, as well as in terms of the kind of processing the target applies to these resources. The following four processing policies are defined:

  • transform is the default. This policy consumes instances of the input resource type and produces instances of the output resource type (e.g. it may consume MResources and produce TResources).
  • consume consumes the declared input, but produces no output.
  • produce consumes nothing, but produces output.
  • pass through neither consumes nor produces any resources.

Note that the make process is more coarse grained than model generation. In other words, there is one facet that runs all the model generators. If one needs
to "interject" additional targets into the MPS generation process (as opposed to doing something before or after model generation), this requires refactoring
the generate facets. This is beyond the scope of this discussion.

Building an Example Facet

As part of the project to build a C base language for MPS, the actual C compiler has to be integrated into the MPS build process. More
specifically, programs written in the C base language contain a way to generate a Makefile. This Makefile has to be executed once it and all the
corresponding .c and .h files have been generated, i.e. at the very end of the MPS make process.

To do this, we built a make facet with two targets. The first one inspects input models and collects the absolute paths of the directories that may contain a
Makefile after textgen. The second target then checks if there is actually a file called Makefile in this directory and then runs make there. The two
targets exchange the directories via properties, as discussed in the overview above.


The sampleFacet sample project that comes bundled with MPS distributions provides a simple facet definition that you can take as a starting point for your adventure with make facets.

The first target: Collecting Directories

Facets live in the plugin aspect of a language definition. Make sure you include the jetbrains.mps.make.facets language into the plugin model, so you can create instances of FacetDeclaration. A facet is executed as part of the make process of a model if that model uses the language that declares the facet.

The facet is called runMake. It depends on TextGen and Generate. The dependencies on those two facets have to be specified so that we can declare our targets' priorities relative to targets in those facets.

The first target is called collectPaths. It is specified as transform IMResource -> IMResource in order to get access to the input models. The target specifies, as priorities, after configure and before generate. The latter is obvious, since we want to get at the models before they are generated into text. The former essentially says that we want this target to run after the make process has been initialized (in other words: if you want to do something "at the beginning", use these two priorities).

We then declare a property pathes, which we use to store information about the modules that contain make files and the paths to the directories in which the generated code will reside.

Let's now look at the implementation code of the target. Here is the basic structure: we first initialize the pathes list. We then iterate over the input (which is a collection of resources) and do something with each element (explained below). We then use the output statement to emit the input data, i.e. we just pass through whatever came into our target. We use the success statement to finish the target successfully (using success at the end is optional, since this is the default). If something goes wrong, the failure statement can be used to terminate the target unsuccessfully.

The actual processing is straightforward Java programming against MPS data structures:

We use the getGeneratorOutputPath method to get the path to which the particular module generates its code (this can be configured by the user in the model properties). We then get the model's dotted name and replace the dots with slashes, since this is where the generated files of a model in that module will end up (inspect any example MPS project to see this). We then store the module's name and the model's name, separated by a slash, as a way of improving the logging messages in our second target (via the locationInfo variable). We add the two strings to the pathes collection. This pathes property is queried by the second target in the facet.

The second Target: Running Make

This one uses the pass through policy, since it does not have to deal with resources. All the input it needs it can get from the properties of the collectPaths target discussed above. This second target runs after collectPaths, after textGen and before reconcile. It obviously has to run after collectPaths, since it uses the property data populated by it. It has to run after textGen, otherwise the make files aren't there yet. And it has to run before reconcile, because basically everything has to run before reconcile. :-)

Let us now look at the implementation code. We start by grabbing all those entries from the collectPaths.pathes property that actually contain a Makefile. If none is found, we return with success.

We then use the progress indicator language to set up the progress bar with as many work units as we have directories with make files in them.

We then iterate over all the entries in the modelDirectoriesWithMakefile collection. In the loop we advance the progress indicator and then use standard Java APIs to run the make file.

To wrap up the target, we use the finish statement to clean up the progress bar.

Extension support

Extensions provide a way to extend certain aspects of a solution or a language that are not covered by the standard language aspects and the plugin mechanisms. Typically you may need your language to slightly alter its behavior depending on the distribution model – MPS plugin, IntelliJ IDEA plugin or standalone IDE. In such cases you define your extension points as interfaces, to which different implementations are then provided in different distributions.

Support for extensions exists in

  • languages
  • plugin solutions

Quick howto

  1. Create an extension point
  2. Create one or more extensions
  3. Both the extension point and the extension must be in the plugin model
    1. Each extension must provide a get method, returning an object
    2. Each extension may opt to receive the activate/deactivate notifications
    3. An extension may declare fields, just like classes can

Extension language

The language jetbrains.mps.lang.extension declares concepts necessary for building extensions.

Extension point

The ExtensionPoint concept represents an extension point. The extension object type must be specified as a parameter.


The Extension concept is used to create a concrete extension.

Accessing extension point

An extension point can be accessed by reference using extension point expression.

Accessing extension objects

An extension point includes a way to access all objects provided by its extensions.

Be Careful


Objects returned by the extensions are transient in nature: they may become obsolete as soon as a module reloading event happens. It is therefore not recommended to e.g. cache these objects; instead, it is better to get a fresh copy each time.

Java API

Extension points and extensions are managed by the ExtensionRegistry core component.

Stubs and custom persistence


Custom persistence cookbook

MPS and Ant

Working with MPS and ant

Editing code of course requires the MPS editor. But generating models and
running tests can be done from the command line to integrate it with automatic
builds. Ant is used as the basis. In this section we explain how to use MPS
from the command line via ant.

For all of the examples we use a file that defines the
following two properties:

This file is included in all the build scripts we discuss
in this section. In addition, we have to define a set of MPS-specific tasks
using the taskdef element in ant. Also, a couple of JVM options are reused
over and over. Consequently, the following is a skeleton of all the build files
we will discuss:

Building the Languages in a Project

We start by building the contents of a project. Here is the necessary ant code
that has to be surrounded by the skeleton ant file shown above:

All modules within the project are generated. If only a subset of the modules in
the project should be generated, a modules fileset can be used. The
following code generates all the languages in a project; typically they reside
in the languages directory below the project. Note how we define a
different property that points to the project directory as opposed to the
project (.mps) file.

Sometimes a project needs access to other languages in order to be compilable.
These can be added with library elements, whose dir attribute has to
point to a directory that (directly, or further below) contains the required
languages.

Generating/Building Solutions

Building solutions that contain code written in a DSL is not fundamentally
different from building languages. However, it is important to set up the
libraries correctly so they point to the directories that contain the languages
used in the solutions.

Running Tests

MPS supports a special testing language that can be used for testing
constraints, type system rules and editor functionality. These tests can be run
from the UI using the Run option from the solution or model context menu
(see the figure below).

These tests can also be run from the command line. Here is the code you need:

The important ingredients here are the two system properties
mps.junit.project and mps.junit.pathmacro.mbeddr.home. The first one
specifies the project that contains the tests. The second one is a bit more
involved: the syntax mps.junit.pathmacro.XXX sets a value for a path
variable XXX in an MPS project. To make the tests run correctly, there has
to be a TestInfo node in the project that points to the project file. This
node uses a path variable (defined in the MPS settings) to make it portable
between different machines and various locations in the file system. The
mps.junit.pathmacro.mbeddr.home property is used to supply a value for the
macro from the command line.
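The elided ant snippet is not reproduced here, but the two properties are ordinary Java system properties, so in an ant build they would be passed to the forked test JVM via jvmarg elements, along these lines (the property values and project file name are illustrative; only the property names come from the text above):

```
<junit fork="true">
  <!-- project that contains the tests -->
  <jvmarg value="-Dmps.junit.project=${project.dir}/myProject.mps"/>
  <!-- value for the 'mbeddr.home' path macro referenced by the TestInfo node -->
  <jvmarg value="-Dmps.junit.pathmacro.mbeddr.home=${mbeddr.home}"/>
</junit>
```

Note that ant's junit task only honors jvmarg when fork is enabled.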

MPS and Git

Working with MPS and git

This section explains how to use git with MPS. It assumes a basic knowledge
of git and the git command line. The section focuses on the integration with
MPS; we will use the git command line for all of those operations that are not
integrated into MPS.
We assume the following setup: you work on your local machine with a clone of an
existing git repository. It is connected to one upstream repository by the name
of origin.


VCS Granularity

MPS reuses the version control integration from the IDEA platform. Consequently,
the granularity of version control is the file. This is quite natural for
project files and the like, but for MPS models it can be confusing at the
beginning. Keep in mind that each model, living in solutions or
languages, is represented as an XML file, so it is these files that are handled
by the version control system.

The MPS Merge Driver

MPS comes with a special merge driver for git (as well as for SVN) that makes
sure MPS models are merged correctly. This merge driver has to be configured in
the local git settings. In the MPS version control menu there is an entry
Install Version Control AddOn. Make sure you execute this menu entry
before proceeding any further. As a result, your .gitconfig should contain
an entry such as this one:
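The entry itself depends on your MPS installation, so it is not reproduced here exactly; schematically it has the shape of a standard git merge-driver registration (the driver path below is purely illustrative, and %O %A %B are git's standard placeholders for the ancestor, current and other versions of the file):

```
[merge "mps"]
	name = MPS merge driver
	driver = <your-MPS-installation>/mergedriver %O %A %B
```

Run the Install Version Control AddOn action rather than writing this by hand; it fills in the correct paths for your machine.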

The .gitignore

For all projects, the .iws file should be added to .gitignore, since
it contains the local configuration of your project and should not be shared
with others.

Regarding the (temporary Java source) files generated by MPS, two approaches are
possible: they can be checked in or not. Not checking them in means that some of
the version control operations get simpler because there is less "stuff" to deal
with. Checking them in has the advantage that no complete rebuild of these files
is necessary after updating your code from the VCS, so this results in a
faster workflow.

If you decide not to check in temporary Java source files, the following
directories and files should be added to the .gitignore in your local
clone:

  • For languages: source_gen, source_gen.caches and classes_gen.
  • For solutions, if those are Java/BaseLanguage solutions, then the same
    applies as for languages. If these are other solutions to which the
    MPS-integrated Java build does not apply, then source_gen and
    source_gen.caches should be added, plus whatever else your own build
    process creates in terms of temporary files.
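Putting the recommendations above together, a starting-point .gitignore might look like this (the entries follow the lists above; adjust them to your module layout and whatever your own build process creates):

```
# generated Java sources and generator caches
source_gen/
source_gen.caches/
classes_gen/
# local workspace configuration
*.iws
```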

Make sure the .history files are not added to .gitignore!
These are important for MPS-internal refactorings.

MPS' caches and Branching

MPS keeps all kinds of project-related data in various caches. These caches are
outside the project directory and are hence not checked into the VCS. This is
good. But it has one problem: If you change the branch, your source files
change, while the caches are still in the old state. This leads to all
kinds of problems. So, as a rule, whenever you change a branch (that is not
just trivially different from the one you have used so far), make sure you
select File -> Invalidate Caches, restart and rebuild your project.

Depending on the degree of change, this may also be advisable after pulling from
the remote repository.

Committing Your Work

In git you can always commit locally. Typically, commits will happen quite
often, on a fine-grained level. I like to do these from within MPS. The screenshot below
shows a program where I have just added a new variable. This is highlighted with
the green bar in the gutter. Right-clicking on the green bar allows you to revert
this change to the latest checked-in state.

In addition you can use the Changes view (from the
Window -> Tool Windows menu) to look at the set of changed files. In my case
it is basically one .mps file (plus two files related
to writing this document). This .mps file contains the test case to
which I have added the new variable.

To commit your work, you can now select Version Control -> Commit Changes.
The resulting dialog, again, shows you all the changes you have made, and you can
choose which ones to include in your commit. After committing, your git status
will look something like this and you are ready to push:

Pulling and Merging

Pulling (or merging) from a remote repository or another branch is when you
potentially get merge conflicts. I usually perform all these operations from the
command line. If you run into merge conflicts, they should be resolved from
within MPS. After the pull or merge, the Changes view will highlight
conflicting files in red. You can right-click on such a file and select the
Git -> Merge Tool option. This will bring up a merge tool on the level of the
projectional editor to resolve the conflict. Please take a look at the
screencast to see this process in action.

The process described above and in the video works well for MPS model files.
However, you may also get conflicts in project, language or solution files.
These are XML files, but cannot be edited with the projectional editor. Also,
if one of these files has conflicts and contains the <<<<<<< and
>>>>>>> merge markers, then MPS cannot open it anymore, because
the XML parser stumbles over the merge markers.
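For reference, a conflicted region in such an XML file looks roughly like this – it is these marker lines that the XML parser chokes on (the element content is purely illustrative):

```
<<<<<<< HEAD
    <property name="x" value="local change" />
=======
    <property name="x" value="incoming change" />
>>>>>>> origin/master
```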

I have found the following two approaches to work:

  • You can either perform merges or pulls while the project is closed
    in MPS. Conflicts in project, language and solution files should then be
    resolved with an external merge tool such as WinMerge before attempting
    to open the project again in MPS.
  • Alternatively you can merge or pull while the project is open (so the
    XML files are already parsed). You can then identify the conflicting files
    via the Changes view and merge them on the XML level with the MPS merge
    tool. After merging a project file, MPS prompts you that the file has been
    changed on disk and suggests reloading it. You should do this.

Please also keep in mind my remark about invalidating caches above.

A personal Process with git

Many people have described their way of working with git regarding branching,
rebasing and merging. In principle each of these will work with MPS, when taking
into account what has been discussed above. Here is the process I use.

To develop a feature, I create a feature branch with git checkout -b myFeature.

I then immediately push this new branch to the remote repository as a backup,
and to allow other people to contribute to the branch. I use
git push -u origin myFeature.

Using the -u parameter sets up the branch for remote tracking.

I then work locally on the branch, committing changes in a fine-grained way.
I regularly push the branch to the remote repo. In less regular intervals I pull
in the changes from the master branch to make sure I don't diverge too far from
what happens on the master. I use merge for this: git merge master.

Alternatively you can also use git rebase master.

This is the time when conflicts occur and have to be handled. I repeat this
process until my feature is finished. I then merge my changes back on the
master branch with git checkout master followed by git merge --squash myFeature.
Notice the --squash option. This allows me to "package" all of the commits
that I have created on my local branch into a single commit with a meaningful
comment such as "initial version of myFeature finished".
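The whole cycle can be replayed against a throwaway repository. The following sketch uses the illustrative branch name myFeature from this section; the push to origin is shown as a comment because the sandbox repository has no remote, and empty commits stand in for real work:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master   # use 'master' regardless of git defaults
git config user.email "you@example.com"
git config user.name "You"
git commit -q --allow-empty -m "initial commit"

git checkout -q -b myFeature              # create the feature branch
# git push -u origin myFeature            # set up remote tracking (no remote here)

# fine-grained local commits while working on the feature
git commit -q --allow-empty -m "fine-grained commit 1"
git commit -q --allow-empty -m "fine-grained commit 2"

# package the feature branch into a single commit on master
git checkout -q master
git merge --squash myFeature >/dev/null
git commit -q --allow-empty -m "initial version of myFeature finished"
git log --oneline
```

The final log on master shows just two commits: the initial one and the single squashed feature commit.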

HTTP support plugin

All nodes in MPS models can now be referenced with a URL. This gives you the ability to share pointers to code with others, and continuous-integration and bug-tracking services may use direct references in their reports for easy navigation.

The HTTP support plugin provides:

  • collaboration via node URLs
  • integration with YouTrack and TeamCity services

  • a DSL for defining custom extensions to the IDEA Platform built-in server

Node URLs

You can create URL references to your code via the context menu. The created URL will be copied to the clipboard and then can be pasted wherever you want. On clicking it, MPS will handle it and open the referenced code.


If you want to get a URL of a node programmatically, you should use the .getURL operation defined in jetbrains.mps.ide.httpsupport language.

YouTrack and TeamCity Integration

MPS listens for requests that come from YouTrack and TeamCity. Upon clicking the 'Open in IDE' button in a browser, MPS will open the requested file/node. Moreover, if you are trying to open a generated file, MPS will open its sources in a proper location.

Built-in server extensions

The features above are implemented using the IDEA Platform built-in server. If you have any other need to handle HTTP requests in the IDE, you can define an extension to the server via the jetbrains.mps.ide.httpsupport language. Note that the defined extensions should be placed in a plugin solution. See Plugin for more information.


Dependencies Analyzer (Analyze model dependencies)

The Dependencies Analyzer can report dependencies among modules or models. It can be called from the main menu or from the popup menu of modules/models:


The interactive report, shown in a panel at the bottom, allows the user to view usages of modules, models and nodes by other modules, models and nodes. The panel on the right side displays modules and models that the element selected in the left-hand side list depends on. The bottom panel lists the actual places that demand the currently selected dependency.

The L icon enables the user to toggle between analyzing the model dependencies and the languages used in the models.

Unlike the Module Dependencies Tool, which simply visualizes the dependency information specified in model properties, the Analyzer checks the actual code and performs dependency analysis. It detects and highlights the elements that you really depend on.

Module Dependencies Tool (Analyze module dependencies)

The Module Dependencies Tool allows the user to overview all the dependencies and used languages of a module or a set of modules, to detect potential cyclic dependencies as well as to see detailed paths that form the dependencies. The tool can be invoked from the menu as well as from the project pane when one or more modules are selected.

Module Dependency Tool shows all transitive dependencies of the modules in the left panel. Optionally, it can also display all directly or indirectly used languages. It is possible to expand any dependency node and get all dependencies of the expanded node as children. These will again be transitive dependencies, but this time for the expanded node.

Select one or more of the dependency nodes in the left panel. The right panel will show paths to each of the selected modules from its "parent" module. You can see a brief explanation of each relation between modules in the right tree. The type of a dependency can be one of: depends on, uses language, exports runtime, uses devkit, etc. For convenience, the name of the target dependent module is shown in bold.

There are two types of dependency paths: Dependency and Used Language. The L button in the toolbar enables/disables displaying of Used Languages in the left tree panel. When you select a module in the Used Language folder in the left tree, the right tree shows only the dependency paths that introduce the used language relation for the given module. To show "ordinary" dependencies on a language module, you should select it outside of the Used Languages folder (e.g. the jetbrains.mps.lang.core language in the picture below). It is also possible to select multiple nodes (e.g. the same language dependency both inside and outside of the Used Language folder). In that case you get a union of results for both paths.

When you are using a language that comes with its own libraries, those libraries are typically not needed to compile your project. It is at runtime that the libraries must be available for your code to work. To track runtime dependencies in addition to the "compile-time visible" ones, check the Runtime option in the toolbar. The runtime dependencies are marked with a "runtime" comment.

The modules in the left tree that participate in dependency cycles are shown in red. It is possible to expand the tree node to see the paths forming the cycle:

For some types of dependencies the pop-up menu offers the possibility to invoke convenience actions such as Show Usages or Safe Delete. For the "depends on" dependencies (those without re-export) Dependencies Analyzer will be invoked for the Show Usages action.

Run configurations


Run configurations allow users to define how to execute programs written in their language.

An existing run configuration can be executed either from run configurations box, located on the main toolbar,

by the "Run" menu item in the main menu

or through the run/debug popup (Alt+Shift+F10/Alt+Shift+F9).

Run configurations can also be created and executed for nodes, models, modules and projects. For example, the JUnit run configuration can run all tests in a selected project, module or model. See Producers on how to implement such behavior for your own run configurations.

To summarize, run configurations define the following things:

  • In the creation stage:
    • the configuration's name, caption and icon;
    • the configuration's kind;
    • how to create a configuration from node(s), model(s), module(s), project.
  • In the configuration stage:
    • persistent parameters;
    • an editor for persistent parameters;
    • a checker of persistent parameters validity.
  • In the execution stage:
    • the process, which is actually executed;
    • a console with all its tabs, action buttons and actual console window;
    • the things required for debugging this configuration (if it is possible).

The following languages have been introduced to support run configurations in MPS.

  • jetbrains.mps.execution.common (common language) – contains concepts utilized by the other execution* languages;
  • jetbrains.mps.execution.settings (settings language) – a language for defining different settings editors;
  • jetbrains.mps.execution.commands (commands language) – process invocation from Java;
  • jetbrains.mps.execution.configurations (configurations language) – the run configurations definition.


The Settings language allows you to create settings editors and integrate them into one another. What we need from a settings editor is the following:

  • the fields to edit;
  • validation of fields' correctness;
  • an editor UI component;
  • apply/reset functions to apply settings from the UI component and to reset settings in the UI component to the saved state;
  • a dispose function to destroy the UI component when it is no longer needed.

As you can see, settings have UI components. Usually, one UI component is created for multiple instances of settings. In the settings language settings are usually called "configurations" and their UI components are called "editors".

The main concept of settings language is PersistentConfigurationTemplate. It has the following sections:

  • persistent properties - This section describes the actual settings we are editing. Since we also want to persist these settings (i.e. write them to/read them from XML) and to clone our configurations, there is a restriction on their type: each property should be Cloneable, a String, or a primitive type. There is also a special kind of property named template persistent property, which is discussed later.
  • editor - This section describes the editor of the configuration. It holds the following functions: create, apply to, reset from, dispose. A section can also define fields to store some objects of the editor. A create function should return a swing component – the main UI component of the editor. apply to/reset from functions apply or reset settings in the editor to/from configuration given as a parameter. dispose function disposes the editor.
  • check - In this section persistent properties are checked for correctness. If some properties are not valid, a report error statement can be used. Essentially, this statement throws RuntimeConfigurationException.
  • additional methods - This section is for methods, used in the configurations. Essentially, these methods are configuration instance methods.

Persistent properties

It was mentioned above that persistent properties can be Cloneable, String, or any primitive type. However, if the Settings language is used inside run configurations, those properties must also support XML persistence. Strings and primitives are persisted as usual. For objects the persistence is more complicated. Two kinds of members are persisted for an object: public instance fields and properties with setXXX and getXXX methods. So, if you wish to use some complex type for a persistent property, either make all its important fields public or provide setXXX and getXXX methods for whatever you want to persist.
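To illustrate, here is a hypothetical property type written in plain Java (ScriptSettings, scriptPath and timeoutSeconds are invented names, not part of MPS) showing which members the persistence rules above would pick up:

```java
// Hypothetical settings class illustrating the XML persistence rules:
// only public fields and getXXX/setXXX pairs are persisted.
public class ScriptSettings implements Cloneable {
    public String scriptPath;        // persisted: public instance field

    private int timeoutSeconds;
    public int getTimeoutSeconds() { return timeoutSeconds; }     // persisted:
    public void setTimeoutSeconds(int t) { timeoutSeconds = t; }  // getter/setter pair

    private String cachedResult;     // NOT persisted: private, no accessors

    @Override
    public ScriptSettings clone() {
        try {
            return (ScriptSettings) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        ScriptSettings s = new ScriptSettings();
        s.scriptPath = "/tmp/run.sh";
        s.setTimeoutSeconds(30);
        ScriptSettings copy = s.clone();
        System.out.println(copy.scriptPath + " " + copy.getTimeoutSeconds());
    }
}
```

Implementing Cloneable (or keeping to Strings and primitives) also satisfies the cloning requirement mentioned in the persistent properties section.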

Integrating configurations into one another

One of the two basic features of the Settings language is easy integration of one configuration into another. For that template persistent properties are used.

Template parameters

The second basic feature of the Settings language is template parameters. These somewhat resemble constructor parameters in Java. For example, if one creates a configuration for choosing a node, one may want to parametrize the configuration with the node's concept. The concept is not a persistent parameter in this case: it is not chosen by the user. It is a parameter specified at configuration creation.


The Commands language allows you to start up processes from code in the same way as from a command line. The main concept of the language is CommandDeclaration. In the declaration, the command parameters and the way to start a process with these parameters are specified. Commands can also have debugger parameters and some utility methods.

Execute command sections

Each command can have several execute sections. Each of these sections defines several execution parameters. There are two types of parameters: required and optional. Optional parameters can have default values and can be omitted when the command is started, while required parameters cannot have default values and are mandatory. Any two execute sections of the same command should have different (by types) lists of required parameters. One execute section can invoke another execute section. Each execute section should return a value of either the process or ProcessHandler type.


To start a process from a command execute section, ProcessBuilderExpression is used. It is a simple list of command parts. Each part is either a ProcessBuilderPart, which consists of an expression of type string or list<string>, or a ProcessBuilderKeyPart, which represents a parameter with a key (like "-classpath /path/to/classes"). When the code generated from ProcessBuilderExpression is invoked, each part is tested for being null or empty and omitted if so. Then, each part is split into multiple parts by spaces. So if you intend to provide a command part with a space in it and do not wish it to be split (for example, a file path with spaces), you have to put it into double quotes ("). The working directory of the created process can be specified in the Inspector.
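The part-processing rules above can be sketched in plain Java. This is an illustration of the described behavior, not MPS's actual generated code; the class and method names are invented:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of ProcessBuilderExpression part processing: null/empty parts are
// dropped, double-quoted parts are kept as a single argument, and the
// remaining parts are split on spaces.
public class CommandParts {
    static List<String> buildCommand(String... parts) {
        List<String> cmd = new ArrayList<>();
        for (String p : parts) {
            if (p == null || p.isEmpty()) continue;            // omitted
            if (p.length() > 1 && p.startsWith("\"") && p.endsWith("\"")) {
                cmd.add(p.substring(1, p.length() - 1));       // quoted: one argument
            } else {
                cmd.addAll(Arrays.asList(p.split(" ")));       // split on spaces
            }
        }
        return cmd;
    }

    public static void main(String[] args) {
        System.out.println(buildCommand(
                "java", "-classpath /path/to/classes", null, "\"/my dir/Main\""));
        // [java, -classpath, /path/to/classes, /my dir/Main]
    }
}
```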

Debugger integration

To integrate a command with the debugger, two things are required to be specified:

  • the specific debugger to integrate with;
  • the command line arguments for a process.

To specify a debugger you can use DebuggerReference – an expression of debugger type in jetbrains.mps.debug.apiLang – to reference a specific debugger. Debugger settings must be an object of type IDebuggerSettings.


The Configurations language allows you to create run configurations. To create a run configuration, one should create an instance of RunConfiguration (essentially, a configuration from the settings language) and provide a RunConfigurationExecutor for it. One may also need a RunConfigurationKind to specify the kind of the configuration, a RunConfigurationProducer to provide a way of creating the configuration from nodes, models, modules, etc., and a BeforeTask to specify how to prepare the configuration before execution.


An executor is a node that describes how a process is started for this run configuration. It takes the settings that the user entered and creates a process from them. The executor's execute methods should return an instance of type process. This is done via StartProcessHandlerStatement. Anything that has the type process or ProcessHandler can be passed to it. A process can be created in three different ways:

  1. via command;
  2. via ProcessBuilderExpression (recommended to use in commands only);
  3. by creating a new instance of the ProcessHandler class; this method is recommended only if the above two do not fit, for example when you are creating a run configuration for remote debugging and you do not really need to start a process.

The executor itself consists of the following sections:

  1. "for" section, where the configuration this executor is for, and an alias for it, are specified;
  2. "can" section, where the ability to run/debug this configuration is specified; if the command is not used in this executor, one must provide an instance of DebuggerConfiguration here;
  3. "before" section with calls of tasks which can be executed before this configuration runs, such as Make;
  4. "execute" section where the process itself is created.

Debugger integration

If a command is used to start a process, nothing should be done apart from specifying a configuration as debuggable (by selecting "debug" in the executor). However, if a custom debugger integration is required, it is done the same way as in the command declaration.


Producers for a run configuration describe how to create this configuration for various nodes or groups of nodes, models, modules or a project. This makes run configurations easily discoverable for users, since for each producer they will see an action in the context menu suggesting to run the selected item. It also simplifies configuration, because it gives a default way to execute something without seeing the editing dialog first.

Each producer specifies one run configuration that it creates. It can have several produce from sections, one for each kind of source the configuration can be produced from. The source should be one of the following: node<>, nlist<>, model, module, project. Apart from the source, each produce from section has a create section – a concept function parametrized with the source. The function should return either the created run configuration, or null if it cannot be created for some reason.

Useful examples

In this section you can find some useful tips and examples of run configuration usage.

Person Editor

In this example an editor for a "Person" is created. This editor edits two properties of a person: name and e-mail address.

PersonEditor can be used from Java code in the following way:

Exec command

This is an example of a simple command, which starts a given executable with programParameters in a given workingDirectory.

Compile with gcc before task

This is an example of a BeforeTask which performs compilation of a source file with the gcc command. It also demonstrates how to use commands outside of run configurations executors.

Note that this is just a toy example; in a real-life scenario the task should show a progress window while compiling, for example.

Java Executor

This is an actual executor for the Java run configuration from MPS.

Java Producer

This is a producer for the Java run configuration from MPS.

You can see three "produce from" sections here. A Java run configuration is created from nodes of ClassConcept, StaticMethodDeclaration or IMainClass.

Running a node generated into a Java class

Let's suppose you have a node of a concept which is generated into a Java class with a main method, and you wish to run this node from MPS. You do not have to create a run configuration in this case, but you should do the following:

  1. The concept you wish to run should implement the IMainClass concept from the jetbrains.mps.execution.util language. To specify when the node can be executed, override the isNodeRunnable method.
  2. Unit information should be generated for the concept. Unit information is required to correctly determine the class name which is to be executed. You can read more about unit information, as well as about all trace information, in the Debugger section of the MPS documentation. To ensure this, check that one of the following conditions is satisfied:
    1. a ClassConcept from jetbrains.mps.baseLanguage is generated from the node;
    2. the node is generated into text (the language uses the textGen aspect for generation) and the concept of the node implements the UnitConcept interface from jetbrains.mps.traceable;
    3. the node is generated into a concept for which one of these conditions is satisfied.

Previous Next

Changes highlighting

Changes highlighting is a handy way to show the changes made since the last update from the version control system.
Changes in models are highlighted in the following places:

Project tree view

Models, nodes, properties and references are highlighted:
green marks new items, blue modified items, and brown unversioned items.

Editor tabs

Highlighting appears for all of the editor tabs: for language aspect tabs of a concept and also for custom tabbed editors declared in plugin aspect of a language (see Plugin: Editor Tabs).


Every kind of change is highlighted in the MPS editor: changed properties and references; added, deleted and replaced nodes.

If you hover the mouse cursor over the highlighter strip on the left margin of the editor, the corresponding changes become highlighted in the editor pane.
If you want your changes to be highlighted in the editor pane all the time (not only when hovering the mouse cursor over the highlighter strip), select the "Highlight Nodes With Changes Relative to Base Version" option in IDE Settings → Editor.

If you click on the highlighter strip on the left margin, a panel appears with three buttons: "Go to Previous Change", "Go to Next Change" and "Rollback".

If you click "Rollback", all the corresponding changes are reverted.
This feature allows you to freely make any changes to the MPS model in the editor, because at any moment you can conveniently revert them right from the editor.

Previous Next

Default keymap reference

Core of editing




Windows/Linux | macOS | Action
Ctrl + Space | Ctrl + Space | Code completion
Alt + Enter | Alt + Enter | Show contextual intention actions
Ctrl + Z | Cmd + Z | Undo
Ctrl + Shift + Z | Cmd + Shift + Z | Redo
Tab | Tab | Move to the next cell
Shift + Tab | Shift + Tab | Move to the previous cell

General editing




Windows/Linux | macOS | Action
Ctrl + Alt + T | Cmd + Alt + T | Surround with...
Ctrl + X / Shift + Delete | Cmd + X | Cut current line or selected block to buffer
Ctrl + C / Ctrl + Insert | Cmd + C | Copy current line or selected block to buffer
Ctrl + V / Shift + Insert | Cmd + V | Paste from buffer
Ctrl + D | Cmd + D | Duplicate current line or selected block
Shift + F5 | Shift + F5 | Clone root
Ctrl + Up/Down | Cmd + Up/Down | Expand/Shrink block selection region
Ctrl + Shift + Up/Down | Cmd + Shift + Up/Down | Move statements up/down
Shift + Arrows | Shift + Arrows | Extend the selected region to siblings
Ctrl + W | Cmd + W | Select successively increasing code blocks
Ctrl + Shift + W | Cmd + Shift + W | Decrease current selection to previous state
Ctrl + Y | Cmd + Y | Delete line
Alt + X | Control + X | Show node in AST explorer
 |  | Refresh the error messages in the editor
Ctrl + - | Cmd + - | Collapse
Ctrl + Shift + - | Cmd + Shift + - | Collapse all
Ctrl + + | Cmd + + | Expand
Ctrl + Shift + + | Cmd + Shift + + | Expand all
Ctrl + Shift + 0-9 | Cmd + Shift + 0-9 | Set bookmark
Ctrl + 0-9 | Ctrl + 0-9 | Go to bookmark
 | Ctrl + N | Create Root Node (in the Project View)
Ctrl + Alt + click | Cmd + Alt + click | Show descriptions of error or warning at caret
Ctrl + Shift + T | Cmd + Shift + T | Show type of node
Ctrl + Alt + T | Cmd + Alt + T | Surround with...

Set dependencies on models, import used languages




Windows/Linux | macOS | Action
Ctrl + M | Cmd + M | Import model
Ctrl + L | Cmd + L | Import language
Ctrl + R | Cmd + R | Import model by a root name

Find usages and Search




Windows/Linux | macOS | Action
Alt + F7 | Alt + F7 | Find usages
Alt + F6 | Alt + F6 | Find concept instances
Ctrl + Alt + Shift + F7 | Cmd + Alt + Shift + F7 | Highlight cell dependencies
Ctrl + Shift + F6 | Cmd + Shift + F6 | Highlight instances
Ctrl + Shift + F7 | Cmd + Shift + F7 | Highlight usages
Ctrl + F | Cmd + F | Find text
F3 | F3 | Find next
Shift + F3 | Shift + F3 | Find previous





Navigation

Windows/Linux | macOS | Action
Ctrl + B / Ctrl + click | Cmd + B / Cmd + click | Go to declaration
Ctrl + N | Cmd + N | Go to root node by name
Ctrl + Shift + N | Cmd + Shift + N | Go to file by name
Ctrl + G | Cmd + G | Go to node by id
Ctrl + Shift + A | Cmd + Shift + A | Go to action by name
Ctrl + Alt + Shift + M | Cmd + Alt + Shift + M | Go to model
Ctrl + Alt + Shift + S | Cmd + Alt + Shift + S | Go to solution
Ctrl + Shift + S | Cmd + Shift + S | Go to concept declaration
Ctrl + Shift + E | Cmd + Shift + E | Go to concept editor declaration
Alt + Left/Right | Control + Left/Right | Go to next/previous editor tab
Esc | Esc | Go to editor (from tool window)
Shift + Esc | Shift + Esc | Hide active or last active window
Shift + F12 | Shift + F12 | Restore default window layout
Ctrl + Shift + F12 | Cmd + Shift + F12 | Hide all tool windows
F12 | F12 | Jump to the last tool window
Ctrl + E | Cmd + E | Recent nodes popup
Ctrl + Alt + Left/Right | Cmd + Alt + Left/Right | Navigate back/forward
Alt + F1 | Alt + F1 | Select current node in any view
Ctrl + H | Cmd + H | Concept/Class hierarchy
F4 / Enter | F4 / Enter | Edit source / View source
Ctrl + F4 | Cmd + F4 | Close active editor tab
Alt + 2 | Alt + 2 | Go to inspector
Ctrl + F10 | Cmd + F10 | Show structure
Ctrl + Alt + ] | Cmd + Alt + ] | Go to next project window
Ctrl + Alt + [ | Cmd + Alt + [ | Go to previous project window
Ctrl + Shift + Right | Ctrl + Shift + Right | Go to next aspect tab
Ctrl + Shift + Left | Ctrl + Shift + Left | Go to previous aspect tab
Ctrl + Alt + Shift + R | Cmd + Alt + Shift + R | Go to type-system rules
Ctrl + Shift + T | Cmd + Shift + T | Show type
Ctrl + H | Ctrl + H | Show in hierarchy view
Ctrl + I | Cmd + I | Inspect node

BaseLanguage Editing




Windows/Linux | macOS | Action
Ctrl + O | Cmd + O | Override methods
Ctrl + I | Cmd + I | Implement methods
Ctrl + / | Cmd + / | Comment/uncomment with block comment
Ctrl + F12 | Cmd + F12 | Show nodes
Ctrl + P | Cmd + P | Show parameters
Ctrl + Q | Ctrl + Q | Show node information
Alt + Insert | Ctrl + N | Create new ...
Ctrl + Alt + B | Cmd + Alt + B | Go to overriding methods / Go to inherited classifiers
Ctrl + U | Cmd + U | Go to overridden method

BaseLanguage refactoring







Windows/Linux | macOS | Action
Shift + F6 | Shift + F6 | Rename
Alt + Delete | Alt + Delete | Safe Delete
Ctrl + Alt + N | Cmd + Alt + N | Inline
Ctrl + Alt + M | Cmd + Alt + M | Extract Method
Ctrl + Alt + V | Cmd + Alt + V | Introduce Variable
Ctrl + Alt + C | Cmd + Alt + C | Introduce Constant
Ctrl + Alt + F | Cmd + Alt + F | Introduce Field
Ctrl + Alt + P | Cmd + Alt + P | Extract Parameter


Generation, compilation and run




Windows/Linux | macOS | Action
Ctrl + F9 | Cmd + F9 | Generate current module
Ctrl + Shift + F9 | Cmd + Shift + F9 | Generate current model
Shift + F10 | Shift + F10 | Run
Shift + F9 | Shift + F9 | Debug
Ctrl + Shift + F10 | Cmd + Shift + F10 | Run context configuration
Alt + Shift + F10 | Alt + Shift + F10 | Select and run a configuration
Ctrl + Shift + F9 | Cmd + Shift + F9 | Debug context configuration
Alt + Shift + F9 | Alt + Shift + F9 | Select and debug a configuration
Ctrl + Alt + Shift + F9 | Cmd + Alt + Shift + F9 | Preview generated text
Ctrl + Shift + X | Cmd + Shift + X | Show type-system trace







Debugger

Windows/Linux | macOS | Action
F8 | F8 | Step over
F7 | F7 | Step into
Shift + F8 | Shift + F8 | Step out
Alt + F8 | Alt + F8 | Evaluate expression
Ctrl + F8 | Cmd + F8 | Toggle breakpoints
Ctrl + Shift + F8 | Cmd + Shift + F8 | View breakpoints

VCS/Local History




Windows/Linux | macOS | Action
Ctrl + K | Cmd + K | Commit project to VCS
Ctrl + T | Cmd + T | Update project from VCS
Ctrl + V | Ctrl + V | VCS operations popup
Ctrl + Alt + A | Cmd + Alt + A | Add to VCS
Ctrl + Alt + E | Cmd + Alt + E | Browse history
Ctrl + D | Cmd + D | Show differences





General

Windows/Linux | macOS | Action
Alt + 0-9 | Alt + 0-9 | Open the corresponding tool window
Ctrl + S | Cmd + S | Save all
Ctrl + Alt + F11 |  | Toggle full screen mode
Ctrl + Shift + F12 |  | Toggle maximizing editor
Ctrl + BackQuote (`) | Control + BackQuote (`) | Quick switch current scheme
Ctrl + Alt + S | Cmd + , | Open Settings dialog
Ctrl + Alt + C | Cmd + Alt + C | Model Checker

Module cloning

You may run into situations when you need to create a copy of a language or a solution. Module cloning gives you a quick way to create copies of modules.

If you want to clone a module, select it in the Project view and click the Clone Solution/Language action in the context menu.

In the dialog that pops up you then choose the name and the location for the new module.

After pressing OK, the module will be cloned and ready to use. The new module will contain all the code and properties from the old one. If you are cloning a language, its generator will also be cloned.

As you know, every module contains lots of references to other instances (model/module dependencies, generator priority rules, node references in the code, etc). If an instance and a reference to it are both cloned, it's preferred that the new reference refers to the new instance. The cloning engine takes care of such situations, so you do not have to do it manually.

There are some cases when you cannot clone a module. First of all, you can clone only solutions and languages. It also matters how models are stored in a module. In short, if all model roots in the module support cloning, then the module can be cloned.

Currently, there are three model root types provided by MPS out of the box: default, javaclasses and javasource_stubs. All these model roots support cloning, except for one case: when the model files are stored outside of the module directory. By default, this is not the case, so you will rarely encounter obstacles to module cloning.

Platform Languages 


BaseLanguage is MPS's counterpart to Java: it shares almost the same set of constructs with Java. BaseLanguage is the most common target of code generation in MPS and, at the same time, the most extensively extended language.

In order to simplify integration with Java, it is possible to specify the classpath for all modules in MPS. Classes found on the classpath will then be automatically imported into @java_stub models and so can be used directly in programs that use the BaseLanguage.

The frequently extended concepts of MPS include:

  • Expression. Constructs which evaluate to a result, like 1, "abc", etc.
  • Statement. Constructs which can be contained at the method level, like the if/while/synchronized statements.
  • Type. Types of variables, like int, double.
  • IOperation. Constructs which can be placed after a dot, as in node.parent. The parent element is an IOperation here.
  • AbstractCreator. Constructs which can be used to instantiate various elements.

BaseLanguage was created as a copy of Java 6. Extensions to BaseLanguage for Java 7 and 8 compatibility have been gradually added.

  • Java 7 language constructs are contained in the jetbrains.mps.baselanguage.jdk7 language
  • Java 8 language extensions are contained in the jetbrains.mps.baselanguage.jdk8 language
  • You may want to check out the documentation dedicated to MPS interoperability with Java

Previous Next

Base Language Extensions Style Guide

Base Language is by far the most widely extended language in MPS. Since it is very likely that a typical MPS project will use a lot of different extensions from different sources or language vendors, the community might benefit from having a unified style across all languages. In this document we describe the conventions that creators should apply to all Base Language extensions.

Quick Reference

If you use...

Set its style to...









A keyword is a widely used string, which identifies important concepts from a language. For example, all the primitive types from Base Language are keywords. Also names of statements such as ifStatement, forStatement are keywords. Use the KeyWord style from base language's stylesheet for keywords.

Curly braces

Curly braces are often used to demarcate a block of code inside a containing construction. If you create an if-like construct, place the opening curly brace on the same line as the construct header. I.e. use:

instead of

Use the LeftBrace and RightBrace styles to set the correct offsets. Make sure there is exactly one space between the opening curly brace and the character to its left. You can achieve this with the padding-left/padding-right styles.


When you use parentheses, set the LeftParen/RightParen styles to the left/right parenthesis. If a parenthesis cell's sibling is a named node's property, disable the first/last position of a parenthesis with first/last-position-allowed style.


When you use named nodes (methods, variables, fields, etc), it's advisable to give their name properties zero left and right padding. Giving identifier declarations and references the same color is also a good idea. For example, in Base Language, field declarations and references have the same color.


If you have a semicolon somewhere, set its style to Semicolon. If you have a dot, use the Dot style. If you have a binary operator, use the Operator style for it.

Previous Next

MPS Java compatibility


The Java Compiler configuration tab in the preferences window only holds a single setting - “Project bytecode version”.

This setting defines the bytecode version of all Java classes compiled by MPS. These classes include classes generated from language’s aspects, classes of the runtime solutions, classes of the sandbox solutions, etc.

By default, the bytecode version is set to “JDK Default”. This means that the version of the compiled classes will be equal to the version of Java, which MPS is running under. E.g. if you run MPS under JDK 1.8 and “JDK Default” is selected, the bytecode version will be 1.8.

The other options for project bytecode version are 1.6, 1.7 and 1.8.


Note that MPS since version 3.4 can only run on JDK 1.8 and higher, so when compiling languages or MPS plugins you have to set the bytecode version to 1.8, otherwise your languages/plugins won’t be loaded. Setting the byte code version to earlier JDK versions is only useful for solution-only projects, which are generated into Java sources that you then compile and use outside of MPS.

Build scripts

Also, don’t forget to set java compliance level in the build scripts of your project. It should be the same as the project bytecode version.

Using java classes compiled with JDK 1.8

In the MPS modules pool you can find the JDK solution, which holds the classes of the running Java. So when you start MPS under JDK 1.8, the latest Java Platform classes will be available in the JDK solution.

You can also use any external Java classes, compiled under JDK 1.8 by adding them as Java stubs.

Since version 1.8, Java interfaces can contain default and static methods. At present, MPS does not support creating them in your BaseLanguage code, but you can call static and default methods defined in external Java classes, e.g. classes of the Java Platform.

Static interface method call

In the example, we sort a list with Comparator.reverseOrder(). Comparator is an interface from java.util, and reverseOrder() is its static method, introduced in Java 1.8.
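In plain Java, the equivalent of such a call looks like this (a minimal standalone sketch, not the exact code from the MPS example):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Calling the static interface method Comparator.reverseOrder() (Java 8+)
// to sort a list in descending order.
public class ReverseSort {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(Arrays.asList(3, 1, 2));
        list.sort(Comparator.reverseOrder());
        System.out.println(list);   // [3, 2, 1]
    }
}
```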

Default interface methods

Java 8 also introduced default methods. These are methods implemented directly in the interface. You can read about default methods here:

These methods can be called just like ordinary instance methods. Sometimes, however, you need to call a default method directly on an interface that your class is implementing, e.g. in the case of multiple inheritance, when a class implements several interfaces, each containing a default method with the same signature.

In that case foo() can be called explicitly on one of the interfaces via the SuperInterfaceMethodCall construction, located in the jetbrains.mps.baseLanguage.jdk8 language.
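In plain Java this roughly corresponds to an InterfaceName.super call. The interface names Left/Right and the method foo() below are invented for the illustration:

```java
// Two interfaces provide a default foo() with the same signature;
// the implementing class must override it and may delegate explicitly
// via the InterfaceName.super syntax.
public class MultipleDefaults {
    interface Left  { default String foo() { return "Left.foo";  } }
    interface Right { default String foo() { return "Right.foo"; } }

    static class Impl implements Left, Right {
        @Override
        public String foo() { return Left.super.foo(); }   // pick Left's default
    }

    public static void main(String[] args) {
        System.out.println(new Impl().foo());   // Left.foo
    }
}
```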

Using Java platform API

Java 8 introduced lambda expressions, of which you can learn more here:

MPS doesn't yet have a language that would be generated into lambda expressions. Instead, it has its own closure language, which is compatible with the new Java API.

Here’s the example of an interaction with the new JDK 8 Collections API:

The forEach() method is the new default method of java.lang.Iterable. It takes a Consumer as a parameter. Consumer is a functional interface, as it only has one method. In Java 8 it would be possible to pass a lambda expression to forEach(). In MPS you can pass an MPS closure. During generation, the closure knows the type of the parameter taken by forEach() and is generated into exactly the correct instance of Consumer.
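Conceptually, the generated Java for such a closure looks like an anonymous Consumer instance (a sketch of the idea, not MPS's exact generator output):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

// An MPS closure passed to forEach() is generated into an instance of the
// functional interface Consumer, conceptually like this anonymous class.
public class ForEachDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("a", "b", "c");
        StringBuilder out = new StringBuilder();
        names.forEach(new Consumer<String>() {
            @Override
            public void accept(String n) { out.append(n); }
        });
        System.out.println(out);   // abc
    }
}
```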

Concept Functions

Concept functions allow language designers to leave hooks for their language users, through which the users can provide code to leverage in the generated code. For example, most of the languages that MPS offers for language design, such as EditorConstraints or Intentions, leverage Concept functions:

You can also discover their usages down in the Inspector window:

Concept functions are defined in jetbrains.mps.baseLanguage and they contain BaseLanguage code, which upon generation becomes part of the generated Java code. This option can give your DSLs enormous flexibility.


We'll use the Robot Kaja sample project to experiment with Concept functions. The goal is to allow the Script authors to provide a function that will customize the Trace messages, which are reported to the user through the trace command:

The user will be able to customize the trace messages through a function that receives the original message as a parameter and returns a string that should be displayed instead:

Define the concept function concept

First, a sub-concept of ConceptFunction must be created:

The behavior aspect overrides a few methods inherited from ConceptFunction:

  • getExpectedReturnType() - declares what type should be returned from the function
  • getApplicableConceptFunctionParameter() - lists the concepts that will represent parameters to this function
  • showName() - indicates whether the name of the function should be displayed in the editor alongside the parameter list and the return type
  • getName() - the name of the function to display in the editor

Since MyFunction requires an argument to hold the original trace message value, we also need to create a concept to represent that parameter, which extends the ConceptFunctionParameter concept and specifies its type through an overridden getType() behavior method:

Add MyFunction to Script

Once defined, the MyFunction concept can be added to Script:

This will allow us to edit the function in the Script editor:

When you hit enter, the editor will display the signature of the concept function and you will be able to edit its body:

Notice that the Inspector shows the description messages for the function as well as its parameters, when you place the cursor on the concept function signature.

Generator adjustment

The last step that remains is to alter the generator so that the trace message customization can happen. We first need to modify the KajaFrame class, which is a super-class for all the classes that get generated from Robot Kaja Scripts:

The trace() method needs to call the new customizeMessage() method in order to have the original trace message customized. The default implementation of customizeMessage() method returns the message without any alteration.

The generator template that defines how a class generated for a Script should look now has to generate an extra method that will override the customizeMessage() method in KajaFrame:

The overriding method only gets generated when the concept function exists in the Script. The generator uses the body of myFunction as a body of the generated customizeMessage() method.

Now the concept function for customizing trace messages should be fully functional:





Closures are a handy extension to the base language. Not only do they make code more concise, but you can also use them as a vehicle to carry you into the lands of the functional programming paradigm. You can treat functions as first-class citizens in your programs - store them in variables, pass them around to methods as arguments, or have methods and functions return other functions. The MPS closures support allows you to employ closures in your own languages. In fact, MPS itself uses closures heavily, for example, in the collections language.

This language loosely follows the "BGGA" proposal specification for closures in Java[1][2]. However, you don't need Java 7 to run code with MPS closures. The actual implementation uses anonymous inner classes, so any recent version of Java starting with 1.5 will run the generated code without problems. Only the closures runtime jar file is required to be on the classpath of the generated solutions.

Function type

{ Type1, Type2... => ReturnType }

Let's start with a trivial example of a function type declaration. It declares a function that accepts no parameters and returns no value.
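For intuition only: MPS generates its own synthetic interfaces from the closures runtime, but the function type shapes correspond roughly to the java.util.function interfaces familiar from Java 8 (the mappings below are an analogy, not the actual generated types):

```java
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.function.Supplier;

public class FunctionTypeDemo {
    public static void main(String[] args) {
        // { => void }                    ~ Runnable
        Runnable doNothing = () -> { };
        // { => ReturnType }              ~ Supplier<ReturnType>
        Supplier<String> hello = () -> "hello";
        // { Type => ReturnType }         ~ Function<Type, ReturnType>
        Function<Integer, Integer> twice = x -> x * 2;
        // { Type1, Type2 => ReturnType } ~ BiFunction<Type1, Type2, ReturnType>
        BiFunction<Integer, Integer, Integer> sum = (a, b) -> a + b;

        doNothing.run();
        System.out.println(hello.get());       // hello
        System.out.println(twice.apply(21));   // 42
        System.out.println(sum.apply(20, 22)); // 42
    }
}
```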

Subtyping rules

A function type is covariant by its return type and contravariant by parameter types.

For example, given we have defined a method that accepts {String => Number} :

we can pass an instance of {Object => Integer} (a function that accepts Object and returns int) to this method:

Simply put, you can use different actual types of parameters and the return value so long as you keep the promise made in the super-type's signature.

Notice that the int type is automatically converted to the boxed type Integer.
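The same variance rule can be spelled out in plain Java with bounded wildcards — a sketch of why {Object => Integer} can stand in for {String => Number} (the method name is illustrative):

```java
import java.util.function.Function;

public class VarianceDemo {
    // Contravariant in the parameter (? super String) and covariant in
    // the result (? extends Number) -- the wildcard spelling of the
    // subtyping rule for function types.
    static Number apply(Function<? super String, ? extends Number> f) {
        return f.apply("42");
    }

    public static void main(String[] args) {
        // A function accepting Object and returning Integer keeps the
        // promise made by the {String => Number} signature.
        Function<Object, Integer> length = o -> o.toString().length();
        System.out.println(apply(length)); // 2
    }
}
```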

Closure literal

A closure literal is created simply by entering the following construct: { <parameter decls> => <body> }. No "new" operator is necessary.

The result type is calculated following one or more of these rules:

  • last statement, if it's an ExpressionStatement;
  • return statement with an expression;
  • yield statement.

Note: it's impossible to combine return and yield within a single closure literal.

Closure invocation

The invoke operation is the only method you can call on a closure. Instead of entering

To invoke a closure, it is recommended to use the simplified version of this operation - parentheses enclosing the parameter list.

Invoking a closure then looks like a regular method call.

Some examples of closure literal definitions.


Functional programming without recursion would be like making coffee without water, so naturally you can recursively call a closure from within its body:

A standalone invoke within the closure's body calls the current closure.

Closure conversion

For practical purposes a closure literal can be used in places where an instance of a single-method interface is expected, and vice versa[3].

The generated code is exactly the same as when using anonymous class:

Think of all the places where Java requires instances of Runnable, Callable or various observer or listener classes:
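To make the conversion concrete, here is a plain-Java sketch of the anonymous class a closure literal is generated into, together with the same single-method-interface idea using Callable:

```java
import java.util.concurrent.Callable;

public class SamConversionDemo {
    public static void main(String[] args) throws Exception {
        // A closure literal converted to Runnable generates an
        // anonymous class equivalent to this one:
        Runnable greet = new Runnable() {
            @Override public void run() { System.out.println("hi"); }
        };
        greet.run();

        // The conversion targets any single-method interface,
        // e.g. Callable (here written as a lambda for brevity):
        Callable<Integer> answer = () -> 42;
        System.out.println(answer.call()); // 42
    }
}
```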

Updated for MPS 1.5


The following changes are applicable to the upcoming 1.5 version of MPS.

As with interfaces, an abstract class containing exactly one abstract method can also be the target of conversion from a closure literal. This can help, for example, in a smooth transition to a new API, when existing interfaces serving as functions can be changed to abstract classes implementing the new interfaces.

Yield statement

The yield statement allows closures to populate collections. If a yield statement is encountered within the body of a closure literal, the consequences are the following:

  • if the type of the yield statement's expression is Type, then the result type of the closure literal is sequence<Type>;
  • all control statements within the body are converted into a switch statement within an infinite do-while loop during generation;
  • usage of the return statement is forbidden and the value of the last ExpressionStatement is ignored.
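A rough, hypothetical Java rendering of the yield semantics (the real generator produces a lazy state machine, not the eager list builder sketched here; names are made up):

```java
import java.util.ArrayList;
import java.util.List;

public class YieldSketch {
    // A closure body that yields 1, 2, 3 conceptually appends each
    // yielded value to the sequence being built.
    static List<Integer> oneTwoThree() {
        List<Integer> out = new ArrayList<>();
        for (int i = 1; i <= 3; i++) {
            out.add(i); // ~ yield i;
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(oneTwoThree()); // [1, 2, 3]
    }
}
```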

Functions that return functions

A little bit of functional programming for the functional hearts out there:

The curry() method is defined as follows:
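As a rough, hypothetical Java rendering of the currying idea (not the sample's actual definition — names and signatures are illustrative):

```java
import java.util.function.BiFunction;
import java.util.function.Function;

public class CurryDemo {
    // Turns a two-argument function into a function that returns
    // a function: curry(f)(a)(b) == f(a, b).
    static <A, B, R> Function<A, Function<B, R>> curry(BiFunction<A, B, R> f) {
        return a -> b -> f.apply(a, b);
    }

    public static void main(String[] args) {
        Function<Integer, Function<Integer, Integer>> add =
                curry((Integer a, Integer b) -> a + b);
        System.out.println(add.apply(2).apply(40)); // 42
    }
}
```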


In order to run the code generated by the closures language, it is necessary to add the closures runtime library to the classpath of the solution. This jar file contains the synthetic interfaces needed to support variables of function type, plus some utility classes. It is located in:

Differences from the BGGA proposal

  • No messing up with control flow. This means no support for control flow statements that break the boundaries of a closure literal.
  • No "early return" problem, since MPS allows return to be used anywhere within the body.
  • The yield statement.

[1] Closures for the Java Programming Language

[2] Version 0.5 of the BGGA closures specification is partially supported

[3] This is no longer true: only closure literal to interface conversion is supported, as an optimization measure.



An extension to the Base Language that adds support for collections.


The collections language provides a set of abstractions that enable the use of the most commonly used containers, as well as a set of powerful tools for constructing queries. The fundamental type provided by the collections language is sequence, an abstraction analogous to Iterable in Java or IEnumerable in .NET. The containers include list (both array-based and linked), set, and map. The collections language also provides the means to build expressive queries using closures, in a way similar to what LINQ does.

Null handling

The collections language has a set of relaxed rules regarding null elements and null sequences.

Null sequence is still a sequence

Null is a perfectly accepted value that can be assigned to a sequence variable. It is simply treated as an empty sequence.

Null is returned instead of exception throwing

Whereas the standard collections framework would throw an exception as a result of calling a method that cannot successfully complete, the collections language's sequence and its subtypes return a null value instead. For example, invoking the first operation on an empty sequence yields null instead of throwing an exception.
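A hypothetical Java helper mirroring this rule (both the null-sequence and the null-instead-of-exception conventions; the helper name is made up):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;

public class FirstOrNullDemo {
    // first on a null or empty sequence yields null rather than throwing,
    // matching the collections language convention described above.
    static <T> T first(Iterable<T> seq) {
        if (seq == null) return null;          // a null sequence is empty
        Iterator<T> it = seq.iterator();
        return it.hasNext() ? it.next() : null;
    }

    public static void main(String[] args) {
        System.out.println(first(Arrays.asList("a", "b")));  // a
        System.out.println(first(Collections.emptyList()));  // null
        System.out.println(first(null));                     // null
    }
}
```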

Skip and stop statements


Applicable within a selectMany or forEach closure. The effect of the skip statement is that the processing of the current input element stops, and the next element (if available) is immediately selected.


Applicable within a selectMany closure or a sequence initializer closure. The stop statement causes the construction of the output sequence to end immediately, ignoring all the remaining elements in the input sequence (if any).
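In plain Java terms, skip and stop behave much like continue and break in an ordinary loop over the input sequence (a sketch; the element values are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SkipStopDemo {
    public static void main(String[] args) {
        List<Integer> out = new ArrayList<>();
        for (int n : Arrays.asList(1, -2, 3, 99, 4)) {
            if (n < 0) continue;   // ~ skip: drop this element, move on
            if (n > 10) break;     // ~ stop: end the output sequence now
            out.add(n);
        }
        System.out.println(out); // [1, 3]
    }
}
```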

Collections Runtime

Collections language uses a runtime library as its back end, which is designed to be extensible. Prior to version 1.5, the collections runtime library was written in Java and used only standard Java APIs. The release 1.5 brings a change: now the runtime library is available as an MPS model and uses constructs from jetbrains.mps.baseLanguage.closures language to facilitate passing of function-type parameters around.

Important change!


In order to make the transition from Java interfaces to abstract function types possible, several of the former Java interfaces in the collections runtime library have been changed into abstract classes. While no existing source code that uses the collections runtime will be broken, this unfortunately breaks so-called binary compatibility, which means that a complete recompilation of all the generated code is required to avoid incompatibility with the changed classes in the runtime.

The classes which constitute the collections runtime library can be found in the collections.runtime solution, which is available from the jetbrains.mps.baseLanguage.collections language.


Sequence is an abstraction of an order defined on a collection of elements of some type. The only operation that is allowed on a sequence is iterating its elements from first to last. A sequence is immutable. All operations defined in the following subsections and declared to return a sequence always return either a new instance of a sequence or the original sequence.

Although it is possible to create a sequence that produces an infinite number of elements, it is not recommended. Some operations may require one or two full traversals of the sequence in order to compute, and invoking such an operation on an infinite sequence would never yield a result.

Sequence type




Comparable types





new sequence

Parameter type

Result type

{ => sequence<Type> }


A sequence can be created with an initializer.

closure invocation

Result type


A sequence may be returned from a closure (see Closures).

array as a sequence

Operand type

Parameter type

Result type




An array can be used as a sequence.

A list, a set, and a map are sequences, too. All operations defined on a sequence are also available on an instance of any of these types.

The sequence type is assignable to a variable of type java.lang.Iterable, and vice versa.

Operations on sequence

Iteration and querying
foreach statement

Loop statement

is equivalent to
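In plain Java terms (treating a sequence as an Iterable, since the two are mutually assignable), the equivalence can be sketched as:

```java
import java.util.Arrays;
import java.util.Iterator;

public class ForeachDemo {
    public static void main(String[] args) {
        Iterable<String> seq = Arrays.asList("x", "y");

        // The foreach statement over a sequence...
        for (String s : seq) {
            System.out.println(s);
        }

        // ...is equivalent to explicit iteration:
        for (Iterator<String> it = seq.iterator(); it.hasNext(); ) {
            System.out.println(it.next());
        }
    }
}
```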


Operand type

Parameter type

Result type


{ Type => void }


The code passed as a parameter (as a closure literal or by reference) is executed once for each element.


Operand type

Parameter type

Result type




Gives the number of elements in a sequence.


Operand type

Parameter type

Result type




Tests whether a sequence is empty, that is, its size is 0.


Operand type

Parameter type

Result type




Tests whether a sequence contains any elements.


Operand type

Parameter type

Result type




Gives the index of the first occurrence in a sequence of the element passed as a parameter.


Operand type

Parameter type

Result type




Produces a boolean value indicating whether or not a sequence contains the specified element.

any / all

Operand type

Parameter type

Result type


{ Type => boolean }


Produces a boolean value that indicates whether any (in the case of the any operation) or all (in the case of all) of the elements in the input sequence match the condition specified by the closure.
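The Java 8 stream analogues of any and all are anyMatch and allMatch, which may help readers coming from plain Java (an illustrative sketch, not the MPS-generated code):

```java
import java.util.Arrays;
import java.util.List;

public class AnyAllDemo {
    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(2, 4, 7);
        // any { it => it % 2 != 0; }  ~ anyMatch
        System.out.println(nums.stream().anyMatch(n -> n % 2 != 0)); // true
        // all { it => it % 2 == 0; }  ~ allMatch
        System.out.println(nums.stream().allMatch(n -> n % 2 == 0)); // false
    }
}
```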


Operand type

Parameter type

Result type




Produces an iterator.


Operand type

Parameter type

Result type




Produces an enumerator.

Selection and filtering

Operand type

Parameter type

Result type




Yields the first element.


Operand type

Parameter type

Result type




Yields the last element.


Operand type

Parameter type

Result type




Produces a sequence that is a sub-sequence of the original one, starting from the first element and of size count.


Operand type

Parameter type

Result type




Produces a sequence that is a sub-sequence of the original one, containing all elements starting with the element at index count.


Operand type

Parameter type

Result type




Produces a sequence that is a sub-sequence of the original one, containing all elements starting with the first and up to (but not including) the element at index size minus count. In other words, this operation returns a sequence with all elements from the original one except the last count elements.


Operand type

Parameter type

Result type




Produces a sequence that is a sub-sequence of the original one, containing all elements starting with the element at index size minus count. In other words, this operation returns a sequence with count elements from the end of the original sequence, in the original order.


Operand type

Parameter type

Result type




Results in a sequence that is a sub-sequence of the original one, containing all elements starting with the element at index start and up to (but not including) the element at index end. It is a requirement that start is no greater than end.

This is equivalent to

where skip = start and count = end - start.
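The same skip-then-take composition can be sketched with the Java 8 stream operations skip and limit (an analogy for illustration, not the MPS-generated code):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class PageDemo {
    public static void main(String[] args) {
        List<String> seq = Arrays.asList("a", "b", "c", "d", "e");
        int start = 1, end = 4;
        // page(start, end) ~ skip(start) followed by take(end - start)
        List<String> page = seq.stream()
                .skip(start)
                .limit(end - start)
                .collect(Collectors.toList());
        System.out.println(page); // [b, c, d]
    }
}
```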


Operand type

Parameter type

Result type


{ Type => boolean }


Produces a sequence that is a sub-sequence of the original one, with all elements for which the code passed as a parameter returns true.


Operand type

Parameter type

Result type


{ Type => boolean }


Results in the first element that matches the parameter closure.


Operand type

Parameter type

Result type


{ Type => boolean }


Results in the last element that matches the parameter closure.

Transformation and sorting

Operand type

Parameter type

Result type


{ Type => Type2 }


Results in a sequence consisting of elements, each of which is the result of applying the parameter function to each element of the original sequence in turn.


Operand type

Parameter type

Result type


{ Type => sequence<Type2> }


Produces a sequence that is a concatenation of all sequences, which are all the results of applying the parameter closure to each element of the original sequence in turn. The statements skip and stop are available within the parameter closure.


Operand type

Parameter type

Result type




Produces a sequence which contains all elements from the original sequence in the original order, with each element having cardinality exactly 1. Of all occurrences of an element in the original sequence, only the first occurrence is included in the resulting sequence.
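The same "keep the first occurrence, preserve order" contract can be seen in plain Java with a LinkedHashSet (an illustrative sketch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

public class DistinctDemo {
    public static void main(String[] args) {
        // LinkedHashSet keeps only the first occurrence of each element
        // while preserving the original encounter order.
        List<Integer> distinct = new ArrayList<>(
                new LinkedHashSet<>(Arrays.asList(1, 2, 1, 3, 2)));
        System.out.println(distinct); // [1, 2, 3]
    }
}
```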


Operand type

Parameter type

Result type


{ Type => Type2 }


Produces a sequence with all elements from the original one in the order, which corresponds to an order induced by an imaginary sequence produced by applying the selector function to each element in the original sequence in turn. The selector function can be thought of as returning a key, which is used to sort elements in a sequence. The ascending parameter controls the sort order.


Operand type

Parameter type

Result type


{ Type => Type2 }


Equivalent to sortBy, unless used as a chain operation immediately following sortBy or another alsoSortBy. The result is a sequence sorted with a compound key, with the first component taken from previous sortBy or alsoSortBy (which is also a compound key), and the last component taken from this operation.
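The compound-key behavior corresponds to chaining comparators in plain Java: Comparator.comparing for the primary sortBy key and thenComparing for each alsoSortBy tie-breaker (an analogy for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class CompoundSortDemo {
    public static void main(String[] args) {
        List<String> words = new ArrayList<>(Arrays.asList("bb", "a", "cc", "b"));
        // sortBy length, alsoSortBy natural order: the second key only
        // decides ties left by the first one.
        words.sort(Comparator
                .comparing(String::length)
                .thenComparing(Comparator.naturalOrder()));
        System.out.println(words); // [a, b, bb, cc]
    }
}
```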


Operand type

Parameter type

Result type


{ Type, Type => int }
boolean

sequence<Type>

Produces a sequence containing all elements from the original one, in the order determined by the comparator function (passed as a closure literal or by reference). The ascending parameter controls the sort order (the order is reversed if the value is false).