
Testing languages

Introduction

Testing is an essential part of a language designer's work. To be of any use, MPS has to provide testing facilities both for BaseLanguage code and for languages. While the jetbrains.mps.baselanguage.unitTest language enables JUnit-like unit tests to test BaseLanguage code, the Language test language (jetbrains.mps.lang.test) provides a useful interface for creating language tests.

To minimize the impact of test assertions on the test code, the Language test language expresses the testing aspects through annotations (in a similar way to how the generator language annotates template code with generator macros).

Quick navigation table

Different aspects of language definitions are tested with different means:

Intentions, Actions, Side-transforms, Editor ActionMaps, KeyMaps

Use the jetbrains.mps.lang.test language to create EditorTestCases. You set the stage by providing an initial piece of code, define a set of editing actions to perform against the initial code and also provide an expected outcome as another piece of code. Any differences between the expected and real output of the test will be reported as errors.
See the Editor Tests section for details.

Constraints, Scopes, Type-system, Dataflow

Use the jetbrains.mps.lang.test language to create NodesTestCases. In these test cases, write snippets of "correct" code and ensure that no error or warning is reported on them. Similarly, write "invalid" pieces of code and assert that an error or a warning is reported on the correct node.
See the Nodes Tests section for details.

Generator, TextGen

There is currently no built-in testing facility for these aspects. There are a few practices that have worked for us over time:

  • Perhaps the most reasonable way to check the generation process is to generate models for which we already know the correct generation result, and then compare the generated output with the expected one. For example, if your generated code is stored in a VCS, you could check for differences after each run of the tests (see the sketch after this list).
  • You may also consider providing code snippets that may represent corner cases for the generator and check whether the generator successfully generates output from them, or whether it fails.
  • Compiling and running the generated code may also increase your confidence about the correctness of your generator.
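
The following is a minimal sketch of the golden-file comparison described in the first bullet. It is plain JUnit/Java rather than MPS-specific code, and the two directory locations are hypothetical placeholders - point them at wherever your build writes generated sources and wherever you keep the expected copies:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    import java.io.IOException;
    import java.nio.file.*;

    public class GeneratorGoldenTest {
      // Hypothetical locations - adjust to your project layout.
      private static final Path GENERATED = Paths.get("build/generated");
      private static final Path EXPECTED = Paths.get("testdata/expected");

      @Test
      public void generatedOutputMatchesGolden() throws IOException {
        try (var paths = Files.walk(EXPECTED)) {
          paths.filter(Files::isRegularFile).forEach(expected -> {
            // Each expected file must have an identical generated counterpart.
            Path generated = GENERATED.resolve(EXPECTED.relativize(expected));
            try {
              assertEquals("Mismatch in " + expected,
                  Files.readString(expected), Files.readString(generated));
            } catch (IOException e) {
              throw new RuntimeException(e);
            }
          });
        }
      }
    }

A test like this fails on any textual difference, which makes regressions in the generator or TextGen visible immediately; combining it with the third bullet (compiling and running the generated output) catches even more.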

Migrations

Use the jetbrains.mps.lang.test language to create MigrationTestCases. In these test cases, write the pieces of code on which the migration will be run.
See the Migration Tests section for details.

 

Creating tests

There are two ways to add test models to your project.

1. Create a Test aspect in your language

This is easier to set up, but it can only contain tests that do not need to run in a newly started MPS instance, so it typically holds plain baseLanguage unit tests. To create the Test aspect, right-click the language node and choose New->Test Aspect.

Now you can start creating unit tests in the Test aspect.


Right-clicking on the Test aspect will give you the option to run all tests. The test report will then show up in a Run panel at the bottom of the screen.

2. Create a test model

This option gives you more flexibility. Create a test model, either in a new or an existing solution. Make sure the model's stereotype is set to tests.

Open the model's properties and add the jetbrains.mps.baselanguage.unitTest language in order to be able to create unit tests. Add the jetbrains.mps.lang.test language in order to create language (node) tests.

Additionally, you need to make sure the solution containing your test model has a kind set - typically choose Other if you do not need either of the other two options (Core plugin or Editor plugin).


Right-clicking on the model allows you to create new unit or language tests. See all the root concepts that are available:


Unit testing with BTestCase

BTestCase stands for BaseLanguage Test Case and represents a unit test written in baseLanguage. Those familiar with JUnit will quickly feel at home.

A BTestCase has four sections - one to specify test members (fields), which are reused by test methods, one for initialization code, one for clean-up code and, finally, a section for the actual test methods. The language also provides a couple of handy assertion statements, which code completion reveals.
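
Since BTestCase deliberately mirrors JUnit, its four sections map naturally onto a plain JUnit test case. The sketch below is ordinary Java shown purely for orientation - the actual notation in the MPS editor differs, since MPS code is projectional:

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class StringBuilderTest {
      // "test members" - fields reused by the test methods
      private StringBuilder builder;

      @Before
      public void setUp() {          // the initialization section
        builder = new StringBuilder();
      }

      @After
      public void tearDown() {       // the clean-up section
        builder = null;
      }

      @Test
      public void appendsText() {    // one of the test methods
        builder.append("a").append("b");
        assertEquals("ab", builder.toString());
      }
    }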

TestInfo

In order to be able to run node tests, you need to provide more information through a TestInfo node in the root of your test model.

The Project path attribute in particular is worth your attention. This is where you need to provide a path to the project root, either as an absolute or relative path, or as a reference to a Path Variable defined in MPS (Project Settings -> Path Variables).

To make the path variable available in Ant scripts, define it in your build file with the mps.macro. prefix (see example below).
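
If you want to sanity-check that such a macro actually reaches the tests, you can read it back from a plain unit test. This sketch assumes the mps.macro.-prefixed value is handed to the test JVM as a system property (which is how the generated Ant script passes macros along) and uses a hypothetical path variable named my_test_project:

    import org.junit.Assert;
    import org.junit.Test;

    public class MacroSmokeTest {
      @Test
      public void pathMacroReachesTheTestJvm() {
        // Hypothetical variable; the build script is assumed to pass it
        // to the test process as -Dmps.macro.my_test_project=<path>.
        String value = System.getProperty("mps.macro.my_test_project");
        Assert.assertNotNull("mps.macro.my_test_project is not set", value);
      }
    }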


Node tests

A NodesTestCase contains three sections:


The first one contains code that should be verified. The section for test methods may contain baseLanguage code that further investigates nodes specified in the first section. The utility methods section may hold reusable baseLanguage code, typically invoked from the test methods.

Checking for correctness

To test that the type system correctly calculates types and that proper errors and warnings are reported, you write a piece of code in your desired language first. Then select the nodes that you'd like to have tested for correctness and choose the Add Node Operations Test Annotation intention.


This will annotate the code with a check attribute, which can then be made more concrete by setting the type of the check:


Note that many of the options have been deprecated and should no longer be used.

The for error messages option ensures that potential error messages inside the checked node get reported as test failures. So, in the given example, we are checking that there are no errors in the whole Script.

Checking for type system and data-flow errors and warnings

If, on the other hand, you want to test that a particular node is correctly reported by MPS as having an error or a warning, use the has error / has warning option.


This works for both warnings and errors.


You can even tie the check to the rule that you expect to report the error / warning. Hit Alt + Enter with the cursor over the node and pick the Specify Rule References option:


An identifier of the rule is added. You can navigate to the definition of the rule with Control/Cmd + B (or a click).


When run, the test will check that the specified rule is really the one that reports the error.

Type-system specific options

The check command offers several options to test the calculated type of a node.


Multiple expectations can be combined conveniently:

Testing scopes

The Scope Test Annotation allows the test to verify that the scoping rules bring the correct items into the applicable scope:


The Inspector panel holds the list of expected items that must appear in the completion menu and that are valid targets for the annotated cell:


Test and utility methods

The test methods may refer to nodes in your tests through labels. You assign labels to nodes using intentions:


The labels then become available in the test methods.

Editor tests

Editor tests allow you to test the dynamism of the editor - actions, intentions and substitutions.

An empty editor test case needs a name, an optional description, the code as it should look before an editor transformation, the code after the transformation (the result) and, finally, the actual trigger that transforms the code, specified in the code section.


For example, a test that an IfStatement of the Robot_Kaja language can be transformed into a WhileStatement by typing while in front of the if keyword would look as follows:


In the code section the jetbrains.mps.lang.test language gives you several options to invoke user-initiated actions - use type, press keys, invoke action or invoke intention. Obviously, you can combine these special test commands with plain baseLanguage code.

In order to be able to specify the desired actions and intentions, you need to import their models into the test model. Typically, the jetbrains.mps.ide.editor.actions model is the one most needed when testing the editor's reactions to user-generated actions.

 

To mark the position of the caret in the code, use the appropriate intention with the cursor located at the desired position:

The cursor position can be specified in both the before and the after code:

The cell editor annotation has extra properties to fine-tune the position of the caret in the annotated editor cell. These can be set in the Inspector panel.

Inspecting the editor state

Some editor tests may wish to inspect the state of the editor more thoroughly. The editor component expression gives you access to the editor component under the cursor. You can inspect its state as well as modify it, as in these samples:

The is intention applicable expression lets you test whether a particular intention can be invoked in the given editor context:

You can also get hold of the model and project using the model and project expressions, respectively.

Migration tests

Migration tests can be used to check that migration scripts produce the expected results from a specified input.

To create a migration test case, you should specify its name and the migration scripts to test. In many cases it should be enough to test individual migration scripts separately, but you can safely specify more than one migration script in a single test case if you need to test how migrations interact with one another.

Additionally, migration test cases contain the nodes to be passed into the migration process, as well as the nodes that are expected to come out as the output of the migration.

When run, migration tests behave as follows:

  1. The input nodes are copied as roots into an empty module with a single model.
  2. The migration scripts are run on that module.
  3. The roots contained in the module after migration are compared with the expected output.
  4. The check() method of the concerned migration(s) is invoked to ensure that it returns an empty list of problems.

To simplify the process of writing migration tests, the expected output can be generated automatically from the input nodes using the currently deployed migration scripts. To do this, use the intention called 'Generate Output from Input'.

Running the tests

Inside MPS

To run tests in a model, just right-click the model in the Project View panel and choose Run tests:

If the model contains any of the jetbrains.mps.lang.test tests, a new instance of MPS is silently started in the background (that's why it takes quite some time to run these compared to plain baseLanguage unit tests) and the tests are executed in that new MPS instance. A new run configuration is created, which you can then re-use or customize:

The Run configurations dialog gives you options to tune the performance of tests.

  • Reuse caches - reusing the old caches of the headless MPS instance when running tests cuts away a lot of the time that would be needed to set up a test instance of MPS. This option can be set and unset in the run configuration dialog.
  • Save caches in - specify the directory to save the caches in. By default, MPS chooses the temp directory. With the Reuse caches option set, MPS saves its caches in the specified folder and reuses them whenever possible. If the option is unset, the directory is cleared on every run.
  • Execute in the same process - to speed up testing, tests can be run in a so-called in-process mode. It was designed specifically for tests that need to have an MPS instance running. (For example, for the language type-system tests MPS should safely be able to check the types of nodes on the fly.)
    The original way was to have a new MPS instance started in the background and run the tests in that instance. This option instead allows all tests to run in the same, original MPS process, so no new instance needs to be created. When the Execute in the same process option is set (the default), the test is executed in the current MPS environment. To run tests the original way (in a separate process), uncheck this option. In-process execution is applicable to all test kinds in MPS - it works even for the editor tests!

    Although the performance is much better with in-process test execution, there are certain drawbacks to this workflow. Note that the tests are executed in the same MPS environment that holds the project, so the code you write in a test can potentially be dangerous and cause real harm. For example, a test that disposes the current project could destroy the whole project. So you need to be careful when writing such tests.
    There are certain cases when a test must not be executed in-process. For such cases, an option in the inspector lets you prohibit in-process execution for that specific test.

    The test report is shown in the Run panel at the bottom of the screen:

 

From a build script

In order to have your generated build script offer the test target that you can use to run the tests with Ant, you need to import the jetbrains.mps.build.mps and jetbrains.mps.build.mps.tests languages into your build script, declare that it uses the module-tests plugin and specify a test modules configuration.

To define a macro that Ant will pass to JUnit (e.g. for use in TestInfo roots in your tests), prefix its name with mps.macro. - for example, a path variable named my_test_project would be declared in the build script as mps.macro.my_test_project.

Running Editor tests in IDEA Plugin

With the new JUnit test suite (jetbrains.mps.idea.core.tests.PluginsTestSuite) it is possible to execute editor tests for your languages in IntelliJ IDEA, when using the MPS plugin. To make use of this functionality, you have to create a simple Ant script that installs all the necessary plugins into the IntelliJ platform and executes the tests, specifying the test module name(s).
