Jaipur 2018.2 EAP3 (build 60795) Release Notes

See also

Jaipur 2018.2 EAP1 (build 60539) Release Notes
Jaipur 2018.2 EAP2 (build 60663) Release Notes


Expanded scalability options

Right now it is possible to set up a TeamCity cluster with the main server, a running builds node, and a read-only node for disaster recovery. However, we think that requiring a system administrator to carefully plan and configure each node for each dedicated activity is too complicated.

Eventually we want to have a setup where all of the nodes are uniform and can perform all of the tasks in an interchangeable way. So the idea is to have a cluster, where one of the nodes (the main one) distributes different responsibilities among other nodes and also handles such tasks as upgrade, licensing, diagnostics, and server configuration.

We’re not there yet. But as a step in this direction we now allow starting a secondary node which can either work as a read-only server (by default) or poll VCS repositories for changes (if the system administrator enables this responsibility for the node on the main TeamCity server).

So, to clarify: instead of a read-only node limited to disaster recovery tasks only, you can now start a secondary node which handles both disaster recovery and VCS polling. In future versions we’ll move the running builds node to this approach too, so a secondary node will be able to handle VCS polling and/or data from agents, and the administrator will configure on the main server what is handled and where.

Why did we choose VCS polling? Because, as it turns out, it can be quite resource-consuming, both in terms of CPU and memory. The main contributors to CPU usage are:

  • discovery of VCS root instances, which involves traversing all active build configurations and resolving parameter references

  • SSH / SSL connections (handshakes can be CPU intensive)

  • calculation of affected build configurations when detected changes are persisted into the database

As to commit hooks, they should be configured for the main server. If you already have them, no changes are needed: once you start a secondary node with the VCS polling responsibility, the enabled commit hooks will continue to function as before.

See documentation on how to configure and start a secondary node.

Support for test run additional details

A test run in TeamCity can be associated with some additional information (metadata), in addition to test status, execution time, and output. This information is organized in key-value pairs, and can be used to provide extra logs, screenshots, numeric values etc.

You can now use service messages to report this kind of additional test data in TeamCity.

Reporting additional test data

Additional test data is reported using the testMetadata service message, with mandatory testName, key, and value attributes. The data type is specified via the type attribute; by default, the type is text.
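For illustration, here is a minimal sketch of emitting such a message from a test, assuming the attribute names described above (testName, key, value); the test name and the key are invented:

```python
# A minimal sketch of emitting a testMetadata service message, assuming the
# attribute names described above (testName, key, value). The test name
# "MyTest.testExample" and the key "peak.memory.usage" are hypothetical.
def service_message(message_name, **attrs):
    """Format a TeamCity service message, escaping special characters."""
    def escape(value):
        # TeamCity service messages escape these characters with a vertical bar.
        for char, repl in (("|", "||"), ("'", "|'"), ("\n", "|n"),
                           ("\r", "|r"), ("[", "|["), ("]", "|]")):
            value = value.replace(char, repl)
        return value
    attributes = " ".join("%s='%s'" % (k, escape(v)) for k, v in attrs.items())
    return "##teamcity[%s %s]" % (message_name, attributes)

print(service_message("testMetadata",
                      testName="MyTest.testExample",
                      key="peak.memory.usage",
                      value="256"))
```

With the arguments above, this prints ##teamcity[testMetadata testName='MyTest.testExample' key='peak.memory.usage' value='256'].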

If the format of the service message is incorrect, a corresponding note is written into the build log.

The format of the value affects how TeamCity renders the additional test data.

Different types of data can be reported to TeamCity:


You can see a graph of changes for a numeric value, from build to build for the given test.
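A numeric metric could be reported like this (a sketch only; the test name and key are invented, and type='number' is assumed to be the numeric type):

```python
# Hypothetical example: report a numeric value so TeamCity can plot it from
# build to build for the given test. The test name and key are invented.
print("##teamcity[testMetadata testName='MyTest.testPerformance' "
      "type='number' key='response.time.ms' value='42.5']")
```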


The maximum length of the value is 1024 characters.

External links

Links to build Artifacts

The path to the artifact should be relative to the build artifacts directory and can reference a file inside an archive:
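For example (a sketch; the archive and file names are invented, type='artifact' is assumed to be the artifact-link type, and the archive!/path notation for addressing a file inside an archive is assumed):

```python
# Hypothetical example: link a test to a file inside an archived build artifact.
# The value is relative to the build artifacts directory; "logs.zip!/export/full.log"
# uses the archive!/path notation to point at a file inside an archive.
print("##teamcity[testMetadata testName='MyTest.testExport' "
      "type='artifact' key='full.log' value='logs.zip!/export/full.log']")
```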

Screenshot from artifacts directory

The path to the screenshot should be relative to the build artifacts directory.
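For example (a sketch; type='image' is assumed to be the screenshot type, and the file name is invented):

```python
# Hypothetical example: attach a screenshot stored under the build artifacts
# directory to a test. The test name and file name are invented.
print("##teamcity[testMetadata testName='MyTest.testLogin' "
      "type='image' key='screenshot' value='screenshots/login-failure.png']")
```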

Displaying additional test data

You can view additional test data in various places in the TeamCity Web UI.

Additional test data for a failed test

If any additional data is present for a test, TeamCity shows it before the test failure details (when the stack trace is expanded), in a separate Test Metadata section.



Additional test data graph for numeric values  


Additional data for successful tests

  • Tests tab: the OK status for a test is now clickable if additional test data is present:

  • Test history

Automatic Investigation Assignment

Starting from this EAP, TeamCity comes bundled with the Investigations Auto Assigner plugin: a build feature which enables automatic assignment of investigations for build failures.

When configuring the build feature, you can specify:

  • the TeamCity username of the default user to whom investigations will be assigned when it is not clear whose changes actually broke the build

  • the list of usernames to exclude from investigation assignment.

After the build feature is added to a build configuration, the user is assigned to investigate a failure on the basis of the following heuristics:

  • If a user is the only committer to the build

  • If a user is the only one who changed the suspicious file. The suspicious file is the one which probably caused the failure, i.e. its name appears in the test or build problem error text

  • If a user was responsible for this problem the previous time

  • If a user is set as the default responsible user.

If you have not configured the build feature, TeamCity will show a suggestion for investigation assignment.

Docker related Improvements

Clean-up of Docker related data

TeamCity now tracks Docker images tagged or pulled during builds (the list of images is stored in the buildAgent/system/docker-used-images.dat file). During clean-up (freeing disk space), TeamCity tries to remove these images if they were not used within 3 days, then 1 day, then 0 days on subsequent attempts to free disk space.

Image usage is tracked with the docker events --since command and is logged at the DEBUG level to teamcity-agent.log.

Image platform selector in Docker build step

The Docker build step now supports selecting the image platform, which results in the generation of corresponding build agent requirements.

Other improvements
