- Project Settings Configuration Changes
- Two-node Server Configuration
- Team Foundation Work Items Tracking
- Cloud Support
- Email Verification
- Exclude Patterns for Artifact Paths
- Project-based Agent Management Permissions
- Flaky Test Detection
- New Create project / Create build configuration buttons
- REST API Enhancements
- Bundled Tools Updates
- Other Improvements
Project Settings Configuration Changes
Starting from this EAP, the configuration of issue trackers, versioned settings, custom charts, shared resources, and third-party report tabs has been moved from
<TeamCity Data Directory>/config/projects/<ProjectID>/pluginData/plugin-settings.xml to the
<TeamCity Data Directory>/config/projects/<ProjectID>/project-config.xml file. The file now has a
<project-extensions> element which contains all of the above-mentioned project features.
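As an illustration, the relevant part of project-config.xml might look like the sketch below (the extension type and id shown are made-up examples, not the exact schema):

```xml
<project-extensions>
  <!-- hypothetical issue tracker feature moved from plugin-settings.xml -->
  <extension id="PROJECT_EXT_1" type="IssueTracker">
    <parameters>
      <param name="name" value="MyTracker"/>
    </parameters>
  </extension>
</project-extensions>
```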
Two-node Server Configuration
Starting with this EAP, the TeamCity server can operate in two modes: main TeamCity server (default mode) and build messages processor.
If a TeamCity server is started in the build messages processor mode, its functionality is reduced to a single task: processing real-time data coming from the running builds. The primary purpose of this mode is to provide a scalability option for really large installations which currently run several hundred agents on a single server and plan to expand in the future.
When two servers working in different modes are connected, they appear as two nodes on the server Administration | Nodes Configuration page.
Several important notes about two-node configuration:
- at the moment the build messages processor node handles all of the data produced by running builds (build logs, artifacts, statistic values), pre-processes it and stores it in the database
- in a two-node installation, both the main TeamCity server and the build messages processor require access to the same data directory (which must be shared somehow if the nodes are installed on separate machines) and to the same database
- the URL where the build messages processor operates must be accessible by the agents and the main TeamCity server (occasionally the main TeamCity server also communicates with the build messages processor by HTTP)
- the main TeamCity server handles all other tasks: user interface, VCS related activity, management of agents, etc.
First of all, ensure that both machines where the TeamCity software will be installed share the same TeamCity data directory. For large installations we recommend using NAS. It is possible to share the directory using NFS or a Windows share, although the performance of such an installation can be worse.
Once you have two machines, proceed with installing TeamCity software as usual: download a distribution package, unpack it or follow the installation wizard.
Set the TEAMCITY_DATA_PATH environment variable on both machines and make sure it points to the shared data directory.
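For example, on Linux both machines could point at a shared mount like this (the /mnt/teamcity_data path is a hypothetical example):

```shell
# Point the server at the shared data directory
# (/mnt/teamcity_data is a hypothetical NFS mount point)
export TEAMCITY_DATA_PATH=/mnt/teamcity_data
echo "$TEAMCITY_DATA_PATH"
```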
To start the main TeamCity server, follow our usual instructions.
Before starting a server in the build messages processor mode, add additional arguments to the TEAMCITY_SERVER_OPTS environment variable, for example:
TEAMCITY_SERVER_OPTS=-Dteamcity.server.mode=build-messages-processor -Dteamcity.server.rootURL=<processor url> <your regular options if you have them>
<processor url> is the URL where the build messages processor will operate. This URL must be accessible by both the agents and the main server. If you do not have an HTTP proxy installed in front of the TeamCity servlet container and you did not change the port in the servlet container during the installation, then by default this URL will be:
http://<your host name>:8111/
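As a sketch, on Linux the variable could be set like this before launching the server (the host name below is a placeholder; append your regular options if you have them):

```shell
# build-messages-processor mode; rootURL must be reachable by the agents
# and by the main server (processor.example.com is a placeholder)
export TEAMCITY_SERVER_OPTS="-Dteamcity.server.mode=build-messages-processor -Dteamcity.server.rootURL=http://processor.example.com:8111/"
echo "$TEAMCITY_SERVER_OPTS"
# then launch the server with the regular startup scripts
```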
To start the build messages processor, use our regular scripts:
The build messages processor uses the same approach to logging as the main server. You can check the startup progress in the
teamcity-server.log file. You can also open
<processor URL> in your browser, where you should see the regular TeamCity startup screens.
Enabling Build Messages Processor
Once both the main server and the build messages processor are up and running, you should see the "Nodes configuration" page in the Administration area on the main TeamCity server:
By default, the build messages processor is disabled, and all traffic produced by running builds is still handled by the main server. Once you enable the build messages processor, all newly started builds will be routed to this node, while the existing running builds will continue being executed on the main server.
Vice versa, if you decide to disable the messages processor, only the newly started builds will be switched to the main server; the builds that were already running on the messages processor will continue running there.
At any point in time you can see how many builds are currently assigned to each node. If everything is configured correctly (the node is accessible by agents) and there are no problems with processing builds on the build messages processor, all running builds should eventually switch to the messages processor after you enable it.
Restarting Servers while Builds are Running
The build messages processor as well as the main TeamCity server can be stopped or restarted while builds are running there. If agents can't connect to the build messages processor for some time, they will re-route their data to the main server. If the main server is also unavailable, agents will keep their data and re-send it once servers re-appear.
Both the main TeamCity server and the build messages processor must be of exactly the same version.
The upgrade sequence is the following:
- start upgrade on the main TeamCity server as usual
- at some point you will be warned that there is a build messages processor running and using the same database; you will need to shut it down first
- proceed with upgrade
- make sure everything is OK and agents are connecting (since the messages processor is no longer available, the agents will re-route their data to the main server)
- upgrade software on the messages processor machine to the same version
- start messages processor and check that it is connected via Nodes configuration page on the main server
- Only one build messages processor can be configured for the main TeamCity server.
- At the moment the build messages processor node does not support plugins.
Team Foundation Work Items Tracking
Since TeamCity 10, Team Foundation Work Items tracking is integrated with TeamCity. Supported versions are Microsoft Visual Studio Team Foundation Server 2010-2015, and Visual Studio Team Services.
TFS work items support can be configured on the Issue trackers page for a project. If a project has a TFVC root configured, TeamCity will suggest configuring the issue tracker as well.
By default, the integration works the same way as the other issue tracker integrations: you need to mention the work item ID in the comment message, so that the work items can be linked to builds and the links are displayed in various places in the TeamCity web UI. Additionally, if your changeset has related work items, TeamCity can retrieve information about them even if no comment is added to the changeset. Custom states for resolved work items are also supported: they can be configured via the
teamcity.tfs.workItems.resolvedStates internal property, which is set to
"Closed?|Done|Fixed|Resolved?|Removed?" by default.
Cloud Support
- When configuring a cloud image, you can now select an agent pool for the newly created cloud agents. Previously all of them were placed in the default pool.
- Custom names for agent images are now supported. The names of virtual machines in VMware must be unique, so when using the same image in different cloud profiles, specify a custom agent image name in the cloud profile in TeamCity to avoid possible conflicts. This feature can also be useful with naming patterns for agents: when a custom agent image name is specified, the names of cloud agent instances cloned from the image will be based on this name.
- EBS optimization is turned on by default for all instance types supporting it.
Email Verification
TeamCity administrators can enable / disable email verification (off by default) on the Administration | Authentication page.
If email verification is enabled on the TeamCity server, the Email address field in the user account registration form becomes mandatory. When an address is added or modified, the user will be asked to verify it. If the email address is not verified, TeamCity will display a notification on the General tab of the user account settings. Verified email addresses are marked with a green check on the Administration | Users page.
When the project import scope is configured, users with the same username and email are compared based on their email verification status. TeamCity will display the conflict information, and the administrator can choose whether to merge the users found.
Exclude Patterns for Artifact Paths
It is now possible to specify newline- or comma-separated paths in the form of
-:source [ => target] to exclude files or directories from being published as build artifacts.
Rules are grouped by the right part (the target) and are applied in the order of appearance: an exclude rule tells TeamCity to publish, into the shared target, all files matched by the preceding include rules except those matched by the exclude.
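For example, the following hypothetical artifact path rules (the output directory, folder1, and result.zip names are made up) publish everything under output except its folder1 subdirectory:

```
output/** => result.zip
-:output/folder1/** => result.zip
```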
Project-based Agent Management Permissions
Starting from this EAP, TeamCity introduces six new agent-related permissions for Project Administrators:
1) Enable / disable agents associated with project
2) Start / Stop cloud agent for project
3) Change agent run configuration policy for project
4) Administer project agent machines (e.g. reboot, view agent logs, etc.)
5) Remove project agent
6) Authorize project agent
These agent permissions are project-based. Additionally, these permissions can provide agent pool management rights: if a person is granted a permission to perform a certain agent management action for all projects within a pool, this user can perform this action on all agents in this pool.
If an agent within a pool is assigned to a project where no such permission is granted to the user, the pool management right is revoked.
The Project Administrator role will no longer include the Agent Manager role for new TeamCity installations. Existing installations will not be affected by this change, but the new permissions are added to Project Administrators, and it is possible to exclude the inherited Agent Manager role manually.
Flaky Test Detection
TeamCity now supports flaky test detection. A flaky test is a test that is unstable (can exhibit both a passing and a failing result) with the same code.
Flaky test detection is based on the following heuristics:
- High flip rate (Frequent test status changes). A flip in TeamCity is a test status change — either from OK to Failure, or vice versa. The Flip Rate is the ratio of such "flips" to the invocation count of a given test, measured per agent, per build configuration, or over a certain time period (7 days by default). A test which constantly fails, while having a 100% failure rate, will have its flip rate close to zero; a test which "flips" each time it is invoked will have the flip rate close to 100%.
If the flip rate is too high, TeamCity will consider the test flaky.
- Different test status for build configurations with the same VCS change: if two builds of different configurations are run on the same VCS change and the test result is different in these builds, the test is reported as flaky. This may be an indication of environmental issues.
- If the status of a test 'flipped' in the new build with no changes, i.e. a previously successful test failed in a build without changes or a previously failing test passed in a build without changes, TeamCity will consider the test flaky.
- Different test status for multiple invocations in the same build: if the same test is invoked multiple times and the test status flips, TeamCity will consider the test flaky.
Such tests are displayed on the dedicated project tab, Flaky Tests, along with the total number of test failures, the flip rate for the given test and reasons for qualifying the test as a flaky one. You can also see if the test is flaky when viewing the expanded stacktrace for a failed test on the build results page.
As with any failed test, you can assign investigations for a flaky test (or multiple tests). For flaky tests the resolution method is automatically set to 'Manual'; otherwise the investigation will be automatically removed once the test is successful, which does not mean that the flaky test has been fixed.
Note that if branches are configured for a VCS root, flaky tests are detected for the default branch only.
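The flip-rate heuristic above can be sketched with a quick calculation over a made-up status sequence:

```shell
# Count status "flips" (OK<->FAIL transitions) in a sample invocation history
statuses="OK FAIL OK OK FAIL FAIL OK"   # made-up test results
flips=0; prev=""; count=0
for s in $statuses; do
  # a flip is any change of status between consecutive invocations
  if [ -n "$prev" ] && [ "$s" != "$prev" ]; then
    flips=$((flips + 1))
  fi
  prev=$s
  count=$((count + 1))
done
# 4 flips over 7 invocations -> a high flip rate, i.e. a flaky test
echo "flips=$flips invocations=$count"
```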
New Create project / Create build configuration buttons
The new Create subproject and Create build configuration buttons now have a drop-down allowing you to select whether to create a project from scratch (manually), from a URL, or using the popular version control hostings GitHub.com and Bitbucket. When one of the latter two is selected, TeamCity offers to configure a connection to the VCS hosting for the current project. When the connection is configured, TeamCity displays the list of available repositories with their URLs. All you have to do is select a repository URL and proceed with the configuration.
REST API Enhancements
- It is now possible to get help on locator usage by sending a request with the "$help" string as a locator
- Project features are now exposed
- The order of projects and build configurations can now be changed via .../app/rest/projects/xxx/order/projects and .../app/rest/projects/xxx/order/buildTypes requests
- It is now possible to get and update TeamCity license keys
- Maximum number of agents in an agent pool is supported
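As a sketch, a "$help" locator request could be composed like this (the server URL and credentials are placeholders; note that $ must be escaped or URL-encoded as %24 in the shell):

```shell
# Hypothetical server; %24 is the URL-encoded "$" of "$help"
BASE="https://teamcity.example.com"
URL="$BASE/app/rest/builds?locator=%24help"
echo "$URL"
# curl --user user:password "$URL"
```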
Bundled Tools Updates
- The bundled IntelliJ IDEA has been updated to 2016.2 RC (162.1121.10)
Other Improvements
- Maven-related operations performed on the server side have been moved to a separate process
- A new option has been added to Subversion VCS roots: Enable non-trusted SSL certificate. If this option is enabled, TeamCity will be able to connect to SVN servers without a properly signed SSL certificate
- Starting from this EAP, TeamCity uses a unidirectional agent-to-server connection via the polling protocol by default. If for some reason the polling protocol cannot be used, TeamCity falls back to bidirectional communication via xml-rpc
- A dedicated DSL is now provided for some settings. A similar DSL is available for Mercurial; command line, Maven, and Gradle build steps; VCS, finish build, and Maven triggers; and the VCS labeling and versioned settings build features