Getting Started with Timeline Profiling

Modern apps tend to be multithreaded. Background data caching, background search, and so on: even in a simple app you’ll likely find work that can run in a separate thread. Unfortunately, in addition to its obvious benefits, multithreading also makes your app more complicated and more difficult to debug.

What Is Timeline Profiling?

Since version 6.0, dotTrace offers a new method of profiling your apps called timeline profiling. What is timeline profiling and why do you need it?

Unlike "classic" performance profiling in dotTrace 5.5 and earlier, during timeline profiling dotTrace collects temporal call stack and thread state data. You get the same data about call times, but bound to a timeline. This way, you can analyze not only typical "what is the slowest method?" issues, but also ones where the order of events matters, such as UI freezes, excessive garbage collections, uneven workload distribution, excessive file I/O, and so on.
To start timeline profiling, simply choose the Timeline profiling type when configuring a session. To analyze collected timeline profiling snapshots, use a separate dotTrace component called Timeline Viewer.

What Next?

In this Getting Started tutorial, we will take a detailed look at the key profiling steps and introduce you to the Timeline Viewer user interface. We will also profile a sample app and try to determine why its UI freezes, which is a very common profiling task.


  • Sample App
  • Running the Profiler and Getting a Snapshot
  • Timeline User Interface
  • Analyzing a Snapshot in Timeline
  • Conclusion

Sample App


As an example, we’ll use a small app that reverses lines in text files (e.g., "ABC" > "CBA"). Briefly: with the Select Files button, a user chooses one or more text files to be processed. The Process Files button runs a separate BackgroundWorker thread which reverses lines in the files. Progress is displayed in a label in the left-hand corner of the main window. After the processing is finished, the label shows the "All files were successfully processed" message.
The source code for this app is available here.
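The wiring described above can be sketched as follows. This is a minimal, hypothetical sketch of the same BackgroundWorker event flow, with a console stub standing in for the WPF window and progress label (the names and line counts are illustrative assumptions, not the sample's actual source):

Code Block
using System;
using System.ComponentModel;
using System.Threading;

class Program
{
    static void Main()
    {
        var worker = new BackgroundWorker { WorkerReportsProgress = true };

        // Runs on a thread-pool thread, so the UI stays responsive.
        worker.DoWork += (s, e) =>
        {
            for (int i = 1; i <= 5; i++)       // pretend to process 5 files
            {
                Thread.Sleep(100);             // simulate file processing
                worker.ReportProgress(i * 20); // raises ProgressChanged
            }
        };
        // In the real app this handler updates the progress label.
        worker.ProgressChanged += (s, e) =>
            Console.WriteLine($"Progress: {e.ProgressPercentage}%");
        worker.RunWorkerCompleted += (s, e) =>
            Console.WriteLine("All files were successfully processed");

        worker.RunWorkerAsync();
        Thread.Sleep(1000);                    // keep the demo process alive
    }
}

In a WPF app, ProgressChanged and RunWorkerCompleted are marshaled back to the UI thread automatically, which is why the sample can update the label directly from those handlers.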


Note: The algorithm of fixing UI freezes shown in this tutorial contains some unnecessary steps and may seem suboptimal. This is done intentionally, as the main goal of the tutorial is to acquaint you with Timeline Viewer.

The app has a serious drawback. After starting file processing, users experience long UI lags that last until the processing is over.
Let’s use timeline profiling to find out why these freezes happen!

Running the Profiler and Getting a Snapshot
Step 1


  1. Open the MassFileProcessing.sln solution in Visual Studio.
  2. If you have dotTrace integrated with Visual Studio, run the profiler by choosing ReSharper | Profile | Run Startup Configuration Performance Profiling.
    Otherwise, run dotTrace from the Windows Start menu. In dotTrace, select Profile Application | Standalone application.
  3. In Profiling type, select Timeline. If you ran dotTrace as a standalone app, you should also specify the path to the release executable of our sample app in Application.
  4. Click Run. dotTrace runs our app and displays a special controller window used to control the profiling process.
    Note that after you start profiling, dotTrace shows real-time data about CPU usage and I/O operations in our app.
    Now, let's try to reproduce the performance issue in our app.
  5. Click the Select Files button and choose five text files that come with the app in the Text Files folder.
  6. Click the Process Files button to start file processing.
    As you can see, the app lags very badly. In fact, you cannot even see the progress of file processing until it is finished and the "All files were successfully processed" message is shown.
  7. Collect a timeline profiling snapshot by clicking Get Snapshot’n’Wait in the controller window. The snapshot will be opened in Timeline Viewer.
  8. Close the app. This will also close the controller window.

Timeline User Interface

Step 2


Let’s make a little digression and take a look at the Timeline Viewer user interface.

The analysis workflow in Timeline Viewer is quite simple: all you do is slice and dice the collected temporal data using filters.


So, where are the filters? While most windows in Timeline Viewer display data, they are also used to set specific filters.* The result of every filter’s work is a set of time intervals or point events selected by a specific condition. For example, clicking Interval Filters | File I/O in the Filters window will Select all time intervals where threads performed file I/O operations. Clicking the Main thread in the Threads diagram will Select lifetime of the Main thread.

*The only window that cannot work as a filter is Source View.

Of course, filters can be chained together. If you activate the two filters mentioned above, you will get the resulting filter, Select all time intervals where the Main thread performed file I/O operations. By carefully combining filters, you can investigate almost every aspect of your application.

This is all we need to know before starting to work with Timeline Viewer. Time to try it in action!

Analyzing a Snapshot in Timeline
Step 3


  1. Look at the Threads diagram. By default, it contains all application threads excluding unmanaged threads.* Note that all filter values you see are calculated for the threads currently visible.

    *You can make the unmanaged threads visible at any time.

  2. Look at the list of threads on the diagram. It consists of the Main app thread, the Finalizer thread (used to finalize objects but does no work in our app), and the Garbage Collection thread (used to perform background GC). The background thread that processes files in our app was identified as Thread Pool (ID 10104) because background threads are created by the CLR Thread Pool. There's also one more Thread Pool (ID 2900) that does no work. This is probably some auxiliary CLR thread pool.
    For further analysis, we are not interested in threads that perform no work, so let's hide the Finalizer and Thread Pool (ID 2900) threads as irrelevant to our analysis.
  3. Select the Finalizer and Thread Pool (ID 2900) threads in the Threads diagram.
  4. Right-click and select Hide Selected Threads.
  5. Look at the Threads diagram and the status bar. The following filter is currently applied to the snapshot data: Select lifetime intervals of all threads excepting hidden.
    Note how the data in other filters were affected. For example, state times in Thread States are now calculated for all threads except the hidden ones. Top Methods and Call Tree have changed too, showing calls only from the filtered threads.
  6. The current scale of the Threads diagram doesn't allow us to see the details of what was done in 10104 Thread Pool (our BackgroundWorker thread). Let’s zoom in so that it fits the entire diagram.
    To do this, use Ctrl+Mouse Wheel on the Threads diagram.
    This automatically adds the filter Visible time range: 1586 ms. Note how this filter affects others: all values are recalculated for the visible time range.
    The following filter is now applied to the snapshot data: Select all time intervals within the visible time range for all threads excepting hidden.
  7. Take a look at the Threads diagram.
    What you see is how thread states changed over time. For example, our BackgroundWorker thread 10104 Thread Pool was started at approximately 16.5 s (after we clicked the Process Files button). Most of the time the thread was Running (rich blue intervals). Besides, for some reason, there are intervals where the thread was not working: it was in the Waiting state (pale blue intervals).
  8. Look at the Process Overview diagram.* In addition to CPU Utilization, it shows two event diagrams relevant for our performance analysis. The UI Freeze bar shows that the freeze started right after 10104 Thread Pool was created. Blocking Garbage Collection was also intensively performed on this time interval. As blocking GC suspends all managed threads, it may be the potential cause of the UI freeze.
    We must take a closer look at these events.

    *The timeline pattern may look slightly different on your computer, as the execution flow depends on a great variety of factors (like the number of CPU cores and their workload during the app execution). Nevertheless, the freeze must take place regardless of your system configuration.

  9. First, let’s remove the Visible time range filter as we no longer need it. To do this, click on the filter in the list of applied filters. The diagram will zoom back out.
  10. Now, let’s investigate the UI freeze event more thoroughly.
    What are the main potential causes of such freezes? These are:
    • long or frequent blocking GCs;
    • blocking of the UI thread by some other thread (for example, due to lock contention); and/or
    • excessive computational work on the UI thread.
    We will proceed to exclude these causes one by one until only the true culprit remains.
  11. Select the UI freeze event by clicking on the corresponding bar in the Process Overview section. This will apply the filter by the UI freeze event. Note that this applies not only the filter by the freeze time range, but also the filter by the Main thread. The latter is done automatically, as the Main thread is the only one that processes the UI in our app.
    Thus, the resulting filter now is Select all time intervals on the Main thread where the UI freeze event took place.
  12. Now, to identify the true cause of this freeze, let’s investigate values in the Filters window.
    The first potential cause we should analyze is excessive blocking GCs. Take a look at the Blocking GC filter. Taking into account the currently applied filters, it shows how long the Main thread was (the Blocking GC value) and was not (the Exclude Blocking GC value) blocked by GC during the freeze.
    The Blocking GC time is quite high (483 ms, or 11.8% of the selected interval) and may have some impact on performance. Nevertheless, it could hardly be the cause of the 4-second freeze. Thus, we may exclude excessive GC from our list of suspects.
  13. Click the Exclude Blocking GC value. The resulting filter now is Select all time intervals on the Main thread where the UI freeze event took place and no blocking GC is performed.
  14. Let's investigate the remaining potential causes: “blocking by some other thread” and “excessive work on the Main thread.”
    Look at Thread State. This filter shows the total time spent by threads in a certain state. Taking into account the currently applied filters, it shows states of the Main thread during the freeze.
    It appears that for most of the freeze time (92.1%, or 3335 ms), the thread was doing some work, as it was Running. The 242 ms value of the Waiting state is too small, which automatically excludes “blocking by some other thread” as a cause. Therefore, the cause of the freeze can only be computational work on the Main thread!
    All we need to do now is find the methods that were executed on the Main thread during the freeze. We can use the Top Methods and Call Tree filters for this.
  15. Select Running in the Thread States filter. This will make the resulting filter Select all time intervals where the Main thread was running when the UI freeze took place and no blocking GC was performed.
    Now, the Top Methods and Call Tree filters contain only methods that were executed during these time intervals.
  16. Look at the Top Methods filter. It shows the plain list of methods from the stack, sorted by their execution time.
    In the current state, the list doesn't make a lot of sense, as it is filled mostly with low-level system methods. Let's make the list more meaningful.
  17. Select the Hide system methods checkbox. In this mode, Top Methods shows only user methods. The execution time of a user method is calculated as the sum of the method's own time and the own time of all system methods it calls (down to the next user method in the stack).
    We see there are only two methods left: App.Main and ProcessInProgress.
  18. Look at the Call Tree.*
    As you can see, App.Main spends most of its time in a number of system methods related to processing Windows messages. This is typical behavior for any app with a graphical UI: it indicates that the app waits for user input in a message loop. We can simply ignore these methods when analyzing the snapshot. To find out what method causes the freeze, we should look at the next user method in the stack, which turns out to be ProcessInProgress (955 ms, or 28.6%).
    Since we assumed that the lags take place due to some computational work in this method, let’s check its source code.

    *To ease understanding of the call tree, Timeline Viewer folds system call chains by default. To unfold a chain, click the icon shown against the root system call.
  19. In Call Tree, click on the ProcessInProgress method.
  20. Look at the Source View window.*

    *If the Source View window is not opened by default, open it using the View | Source View menu.

    Code Block
    private void ProcessInProgress(object sender, ProgressChangedEventArgs e)
    {
       var upd = (ProgressUpdater) e.UserState;
       lblProgress.Content = "File " + upd.CurrentFileNmb + " of " + upd.TotalFiles +
                             ": " + e.ProgressPercentage + "%";
    }

    It appears that this method is just an event handler that updates the file processing progress in the progress label on the main window. These computations don't look complex, so why did the freezes occur?
    The answer is simple: apparently, this event handler is called so often that the main window cannot cope with updating the label. Let’s check this out in our code.

  21. Switch to Visual Studio and look at the code.*
    Further code investigation shows that this event handler is subscribed to the ProgressChanged event of the background worker. This event occurs when the worker calls the ReportProgress method. In turn, that method is called from the ProcessFiles method of the background worker.

    *If you use JetBrains ReSharper, you can employ the very helpful Inspect This feature (press Ctrl+Shift+Alt+A). It can show you all incoming calls to the method.

    Code Block
    for (int j = 0; j < _lines.Length; j++)
    {
       var line = _lines[j];
       var stringReverser = new StringReverser(line);
       _lines[j] = stringReverser.Reverse();
       if (j % 5 == 0)
       {
          var p = ((float)(j + 1) / _lines.Length) * 100;
          Worker.ReportProgress((int) p, _updater);
       }
    }
  22. Here is the cause of our performance issues: ReportProgress is called each time after 5 lines in a text file are processed. As lines are processed very quickly, ReportProgress is called too frequently for the system. Let’s reduce this frequency, for instance, to one call per 1000 lines. Improve the if condition in the code.

    Code Block
    if (j % 1000 == 0)
    {
       float _p = ((float)(j + 1) / _lines.Length) * 100;
       Worker.ReportProgress((int) _p, _updater);
    }
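To see the scale of the difference this one-line change makes, here is a small standalone sketch (the 100,000-line file size is a hypothetical assumption, not taken from the sample) that counts how many progress reports each condition generates:

Code Block
using System;
using System.Linq;

public class ReportFrequency
{
    // Counts how often the condition "j % step == 0" fires
    // over a file with the given number of lines.
    public static int CountReports(int totalLines, int step) =>
        Enumerable.Range(0, totalLines).Count(j => j % step == 0);

    public static void Main()
    {
        const int lines = 100_000;                    // hypothetical file size
        Console.WriteLine(CountReports(lines, 5));    // every 5 lines  -> 20000 updates
        Console.WriteLine(CountReports(lines, 1000)); // every 1000 lines -> 100 updates
    }
}

Each ReportProgress call produces a ProgressChanged event that must be handled on the UI thread, so dropping from 20,000 to 100 updates per file removes almost all of the message-queue pressure that starved the UI.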
  23. Rebuild the solution and perform profiling one more time, as described in Running the Profiler and Getting a Snapshot.
  24. There it is! No more lags. Timeline Viewer also doesn’t detect any UI freezes during file processing.



Conclusion

Here is a short summary of what we learned in this Getting Started tutorial:

  • Unlike “classic” performance profiling, during timeline profiling dotTrace collects temporal call stack and thread state data.
  • To analyze the results of timeline profiling, a special Timeline Viewer is used.
  • Timeline Viewer is a set of filters and diagrams that visualize the event timeline of your app and allow you to slice and dice the collected temporal data.
  • Each filter is dual-purpose: it both displays data and lets you set a specific condition for the shown data.
  • Filters can be chained together. Setting a specific value in a filter changes the values shown in other filters.

In the following tutorials, we will learn how to use Timeline Viewer to improve app performance in more complex scenarios, for example, when excessive garbage collections or file I/O operations take place, or when you face a multithreading issue like lock contention.