Recent versions of TeamCity allow setting up a “running builds node” to handle data coming from agents and a “read-only node” for disaster recovery scenarios. TeamCity 2018.2 expands this setup further with the ability to delegate polling version control systems for changes to a secondary node.
At the moment it looks complicated, but our intention is simple: we want to have a scalable architecture where you can add more nodes to the cluster, and all of these nodes are uniform and can perform all of the tasks in an interchangeable way. The only exception will be the main TeamCity server where the cluster will be configured and where such tasks as upgrade, licensing, and diagnostics will be handled.
As a step towards this uniformity, there is no “read-only node” anymore: it is now called a “secondary node”.
The secondary node is just a TeamCity server started with an additional teamcity.server.nodeId property:

TEAMCITY_SERVER_OPTS=-Dteamcity.server.nodeId=<some id> teamcity-server.bat|sh start
See our documentation on how to configure and start a secondary node.
By default, a secondary node acts like a read-only node: it does not modify data, shows the user interface in read-only mode, and is well-suited for disaster recovery tasks.
Using the Nodes configuration page of the main TeamCity server, it is now possible to delegate the VCS polling responsibility to this secondary node. After that, all VCS polling-related tasks and commit hooks will be handled by the secondary server. In some scenarios this can greatly reduce the CPU load on the main TeamCity server.
Note that if you already have commit hooks configured for the main server, no changes to the existing hooks are needed: the main server will accept the hook and delegate its processing to the secondary node.
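To make the delegation concrete, here is a small sketch of how a post-commit hook addresses the server. The endpoint path follows the TeamCity commit hook documentation; the server URL and VCS root id below are hypothetical, and this is not an official client:

```kotlin
// A sketch: building the commit hook notification URL that a VCS
// post-commit hook would call on the main TeamCity server. When VCS
// polling is delegated, the main server accepts this request and
// forwards its processing to the secondary node, so existing hooks
// keep pointing at the main server unchanged.
fun commitHookUrl(serverUrl: String, vcsRootId: String): String =
    "$serverUrl/app/rest/vcs-root-instances/commitHookNotification" +
        "?locator=vcsRoot:(id:$vcsRootId)"

fun main() {
    // Hypothetical server and VCS root id, for illustration only
    println(commitHookUrl("https://teamcity.example.com", "MyProject_GitRoot"))
}
```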
Right now, the secondary node cannot handle data from agents the way the running builds node does, but this will change in future versions of TeamCity. Agent data handling will become just another responsibility that the main node can delegate to a secondary node.
Installation of new plugins has become a lot simpler. It is no longer required to restart the server to enable a newly uploaded plugin.
Likewise, if you have disabled some of the bundled plugins on your TeamCity server, re-enabling them no longer requires a server restart.
In addition, there is now an integration between the JetBrains Plugin Repository and the TeamCity server itself. For instance, when you start browsing plugins by clicking the “Browse plugins repository” button on your TeamCity server, you can share information about your server (URL, server id, and version) with the plugin repository and benefit from simplified plugin installation.
It is also possible to enable a periodic check for plugin updates on your TeamCity server. Once updates are found in the plugin repository, they can be installed easily through the web interface. However, you'll still need to restart the server to apply plugin updates.
Finally, there is good news for TeamCity plugin developers. If you develop your plugins with the help of our Maven SDK or the Gradle plugin from Rod MacKenzie, you can benefit from reloading plugins without a server restart. In addition, when TeamCity is started from these SDKs, it runs in a mode where all plugins can be enabled or disabled instantly, also without a server restart.
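As a rough illustration of the Gradle route, a server-side plugin project might be configured along these lines (the plugin id is taken from that project's documentation; the plugin version and TeamCity version shown are illustrative, and this is only a sketch, not a complete plugin build):

```kotlin
// build.gradle.kts — a minimal sketch using the Gradle TeamCity plugin
plugins {
    java
    // Plugin id per the gradle-teamcity-plugin project; version is illustrative
    id("com.github.rodm.teamcity-server") version "1.1"
}

teamcity {
    // TeamCity API version to build against (illustrative)
    version = "2018.2"
}
```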
The TeamCity GitHub Pull Requests plugin is now bundled and available as a build feature. Please see our blog post for details.
Since TeamCity 2018.2, a test run in TeamCity can be associated with supplementary information (metadata), in addition to test status, execution time, and output. This information can be used to provide extra logs, screenshots, numeric values, tags, etc.
You can now use service messages to report this kind of additional test data in TeamCity and then view it in the TeamCity Web UI. Consult our documentation for details.
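Test metadata is reported via testMetadata service messages printed to the build's standard output. As a minimal sketch, a test could emit them like this (the escaping rules follow the service message format; the test name and artifact paths are made up for illustration):

```kotlin
// Escape a value per TeamCity service message rules:
// '|' must be escaped first, then quotes, newlines, and brackets.
fun tcEscape(s: String): String =
    s.replace("|", "||")
        .replace("'", "|'")
        .replace("\n", "|n")
        .replace("\r", "|r")
        .replace("[", "|[")
        .replace("]", "|]")

// Emit a testMetadata service message for the given test run.
fun reportTestMetadata(testName: String, name: String, type: String, value: String) {
    println(
        "##teamcity[testMetadata testName='${tcEscape(testName)}' " +
            "name='${tcEscape(name)}' type='${tcEscape(type)}' value='${tcEscape(value)}']"
    )
}

fun main() {
    // Hypothetical examples: attach a screenshot and a numeric value to a test
    reportTestMetadata("MyTest.loginPage", "screenshot", "image", "screenshots/login-failure.png")
    reportTestMetadata("MyTest.loginPage", "render time", "number", "42.5")
}
```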
If you want to see test metadata in action on your TeamCity 2018.2 server, just invoke the “Create project from URL” action with the following URL: https://github.com/JetBrains/teamcity-test-metadata-demo
When asked, import the settings.kts file. After that, you’ll have a project on your server where you can run a build and see the reported metadata for a failed test.
Starting from this version, TeamCity analyses build problems and test failures and tries to find a committer to blame for the problem using a number of heuristics.
For test failures, there will be a suggestion to assign an investigation to this user: you can review the suggestion and assign the user to investigate the failure.
In addition to suggestions, you can now configure the Investigations Auto Assigner build feature. It uses the same heuristics to assign investigations automatically, not only for test failures but also for build problems.
For cases when it is not clear whose changes actually broke the build, you can assign a default user to investigate the problem. It is also possible to exclude some users (e.g. system users or users no longer working on the project) from investigation assignment. See our documentation for details.
We introduced a new option in the UI to help those using Kotlin-based DSL. When viewing your build configuration settings in the UI, you can click View DSL in the sidebar: the DSL representation of the current configuration will be displayed and the setting being viewed (e.g. a build step, a trigger, dependencies) will be highlighted. To go back, click Edit in UI.
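For illustration, here is roughly what the generated representation might look like for a simple configuration (a sketch against the 2018.2 Kotlin DSL API; the configuration and step contents are invented):

```kotlin
import jetbrains.buildServer.configs.kotlin.v2018_2.*
import jetbrains.buildServer.configs.kotlin.v2018_2.buildSteps.script
import jetbrains.buildServer.configs.kotlin.v2018_2.triggers.vcs

object Build : BuildType({
    name = "Build"

    steps {
        // When viewing this step in the UI, "View DSL" would highlight
        // this script block in the generated representation
        script {
            scriptContent = "./gradlew clean build"
        }
    }

    triggers {
        // Trigger a build on every detected VCS change
        vcs {
        }
    }
})
```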
The new version addresses a number of requests related to using TeamCity as a NuGet server. Now, you can configure multiple NuGet feeds for a project in TeamCity.
Responding to our customers' feedback, the built-in TeamCity NuGet feed now supports the NuGet Server API v3, which is more performant than API v2. All available protocols are now supported.
Support for NuGet Server API v3 enables you to use authenticated NuGet v3 feeds in .NET CLI/NuGet/MSBuild build steps via the bundled NuGet Credentials plugin.
The TeamCity agent now tracks Docker images tagged or pulled during builds (the list of images is stored in the buildAgent/system/docker-used-images.dat file). During cleanup or when freeing disk space, the agent tries to remove these images, provided they have not been used for a few days and disk space needs to be reclaimed.
The Docker wrapper and the Docker build step now support selecting the image platform. This results in proper generation of build agent requirements and enables scenarios with mixed Windows and Linux containers.
Full list of fixed issues