What Is Timeline Profiling?
Starting with version 6.0, dotTrace offers a new method of profiling your apps – timeline profiling. What is it, and why do you need it?
Unlike “classic” performance profiling in dotTrace 5.5 and earlier, timeline profiling collects temporal call stack and thread state data. Thus, you get the same data about call times, but now this data is bound to the timeline. This gives you a great opportunity to analyze not only typical “what is the slowest method?” issues but also the ones where the order of events matters: UI freezes, excessive garbage collection, uneven workload distribution, inefficient file I/O, and others.
Using timeline profiling is simple: all you need to do is choose the Timeline profiling type when configuring a session. To analyze the collected timeline profiling snapshots, you use a separate dotTrace component called Timeline Viewer.
In this “Getting Started” tutorial, we will take a detailed look at the main profiling steps, get acquainted with the Timeline Viewer user interface, and try to solve a very common task – finding the cause of UI freezes in an app.
As an example, we’ll use a small app used to reverse lines (e.g., "ABC" > "CBA") in text files. Briefly: With the Select Files button, a user specifies text files to be processed. The Process Files button runs a separate BackgroundWorker thread which reverses lines in the files. The progress of file processing is displayed in a label on the main window. After the processing is finished, the label shows the "All files were successfully processed" message.
The source code of the app is available here.
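To give a rough idea of the processing involved, here is a minimal sketch of what reversing lines in a file might look like (an illustration only; the actual implementation in the sample app may differ):

```csharp
using System.IO;
using System.Linq;

static class LineReverser
{
    // Reverses every line in a text file, e.g. "ABC" becomes "CBA".
    // A simplified sketch; the sample app's actual code may differ.
    public static void ReverseLinesInFile(string path)
    {
        var reversed = File.ReadAllLines(path)
            .Select(line => new string(line.Reverse().ToArray()));
        File.WriteAllLines(path, reversed);
    }
}
```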
The app has a serious drawback. After starting file processing, users experience long UI lags that last until the processing is over.
Let’s use timeline profiling to determine the cause of these freezes!
Running the Profiler and Getting a Snapshot
- Open the MassFileProcessing.sln solution in Visual Studio.
- If you have dotTrace integrated with Visual Studio, run the profiler by choosing ReSharper | Profile | Profile Startup Project (Performance).
Otherwise, run dotTrace from the Windows Start menu. In dotTrace, select New Session | Local, Profile Application | Standalone.
- In Profiling type, select Timeline. If you run dotTrace as a standalone app, you should also specify the path to the release executable of our sample app in Application.
- After you click Run, dotTrace starts our app and displays a special controller window used to control the profiling process.
Now, we should reproduce a performance issue in our app.
- Click the Select Files button and choose five text files that come with the app in the Text Files folder.
- Click the Process Files button to start file processing.
As you can see, the app lags very badly. Actually, you cannot even see the progress of file processing until it is finished and the All files were successfully processed message is shown.
- Collect a timeline profiling snapshot by clicking Get Snapshot’n’Wait in the controller window. The snapshot will be opened in Timeline Viewer.
- Close the app. This will also close the controller window.
First Look at the Timeline’s User Interface
Now, let’s make a little digression and take a look at the Timeline UI.
The analysis workflow in Timeline Viewer is quite simple: all you do is slice and dice the collected temporal data using filters.
So, where are the filters? Almost all windows you see inside Timeline Viewer not only display data but are also used to set a specific filter. The result of a filter’s work is always a set of time intervals or point events selected by a specific condition. For example, clicking Interval Filters | File I/O in the Filters window will “Select all time intervals where threads performed file I/O operations”. Clicking the Main thread in the Threads diagram will “Select the lifetime of the Main thread”.
Of course, filters can be chained together. Thus, if you turn on the two filters mentioned above, you will get the resulting filter “Select all time intervals where the Main thread performed file I/O operations”. Complex filter combinations allow you to investigate almost every aspect of your application.
Actually, that's all you need to know before starting your work in Timeline Viewer. Now, it's time to try it in action!
Analyzing a Snapshot in Timeline
From the point of view of further analysis, we are not interested in threads that do not perform any work. So, first, let’s get rid of them.
Look at the Threads diagram. By default, it contains all application threads excluding the unmanaged ones. Note that all filter values you see are calculated for all currently visible threads.
- Look at the list of threads on the diagram. It consists of the Main app thread, the Finalizer thread (used to finalize objects; it does not do any work in our app), and the Garbage Collection thread (used to perform background GC). The BackgroundWorker thread that processes files in our app was identified as Thread Pool (ID 10104) because background threads are created by the CLR thread pool. There's also one more Thread Pool (ID 2900) thread that doesn't do any work. Probably, this is some auxiliary CLR thread pool thread.
Let's hide the Finalizer and Thread Pool (ID 2900) threads, as they are meaningless for our analysis.
- Select the Finalizer and Thread Pool (ID 2900) threads in the Threads diagram.
- Right-click and select Hide Selected Threads in the context menu.
- Look at the Threads diagram and the status bar. The filter that is now applied to the snapshot data is “Select lifetime intervals of all threads except the hidden ones”.
Note how the data in other filters was affected. For example, state times in Thread States are now calculated for all threads except the hidden ones. Top Methods and Call Tree have changed too, showing calls only from the filtered threads.
- The current scale of the Threads diagram doesn't allow us to see the 10104 Thread Pool (our BackgroundWorker thread) in detail. Let’s zoom in so that it fits the entire diagram.
To do this, use Ctrl+Mouse Wheel on the Threads diagram.
This automatically adds the Visible time range: 1586 ms filter. Note how this filter affects others: all values are recalculated for the visible time range.
The filter that is now applied to the snapshot data is “Select all time intervals within the visible time range for all threads except the hidden ones”.
- Take a look at the Threads diagram.
What you see is how thread states changed over time. For example, our BackgroundWorker thread 10104 Thread Pool was started at approximately 16.3 s (after we clicked the Process Files button). Most of the time, the thread was Running (rich blue intervals). Besides, there are intervals where the thread was in the Waiting state (pale blue intervals).
So, it’s better to take a closer look at these events.
Look at the Process Overview diagram. In addition to CPU Utilization, it shows two event diagrams meaningful for performance analysis: UI Freeze and Blocking Garbage Collection. The UI Freeze bar shows that the freeze started right after the 10104 Thread Pool thread was created. Blocking garbage collection was also performed intensively during this time interval. As blocking GC suspends all managed threads, it may be a potential cause of the UI freeze.
- First, let’s remove the Visible time range filter as we no longer need it. To do this, click on the filter in the list of applied filters. This will zoom the diagram back out.
- Now, let’s investigate the UI freeze event more thoroughly.
What are the main causes of such freezes? These are:
- Long or frequent blocking GCs.
- Blocking of the UI thread by some other thread (for example, due to lock contention).
- Excessive computational work on the UI thread.
Therefore, all we need to do is rule out these causes one by one until we find the real one.
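As a generic illustration of the third cause (this is not the sample app’s code, just a hypothetical example), heavy work performed directly in a UI event handler blocks the message loop until the handler returns:

```csharp
// Hypothetical example (not from the sample app): the window can't repaint
// or respond to input until this click handler finishes its loop.
void HeavyButton_Click(object sender, System.EventArgs e)
{
    double sum = 0;
    for (int i = 0; i < 1_000_000_000; i++)
        sum += System.Math.Sqrt(i);   // the UI thread is busy for the whole loop
}
```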
- Select the UI freeze event by clicking on the corresponding bar in the Process Overview section. This will apply the filter by the UI freeze event. Note that this applies not only the filter by the freeze time range, but also the filter by the Main thread. The latter is done automatically, as the Main thread is the only one that processes the UI in our app.
Thus, the resulting filter now is “Select all time intervals on the Main thread where the UI freeze event took place”.
- Now, we should understand what the real cause of this freeze was. Let’s investigate the values in the Filters window.
The first cause we should analyze is excessive blocking GC. Take a look at the Blocking GC filter. Taking into account the currently applied filters, it shows how long the Main thread was (the Blocking GC value) and was not (the Exclude Blocking GC value) blocked by GC during the freeze.
The Blocking GC time is quite high (483 ms, or 11.8% of the selected interval, which implies the whole freeze lasted roughly 4 s) and may have some impact on performance. Nevertheless, it could hardly be the cause of a freeze that long. Thus, for now, excessive GC can be excluded as a cause.
- Click the Exclude Blocking GC value. The resulting filter now is “Select all time intervals on the Main thread where the UI freeze event took place and no blocking GC is performed”.
- Let's investigate the “Blocking by some other thread” and “Excessive work on the Main thread” causes.
Look at Thread States. This filter shows the total time threads spent in a certain state. Taking into account the currently applied filters, it shows the states of the Main thread during the freeze.
It appears that for most of the freeze time (92.1%, or 3335 ms), the thread was Running, that is, doing some work. The 242 ms spent in the Waiting state is too small to matter, which rules out the “Blocking by some other thread” cause. Therefore, the cause of the freeze is computational work on the Main thread!
All we need to do now is find the methods that were executed on the Main thread during the freeze. For this purpose, we can use the Top Methods and Call Tree filters.
- Select Running in the Thread States filter. This will make the resulting filter “Select all time intervals where the Main thread was running when the UI freeze took place and no blocking GC was performed”.
Now, the Top Methods and Call Tree filters contain only methods executed during these time intervals.
- Look at the Top Methods filter. It shows a plain list of methods from the stack, sorted by their execution time.
In its current state, the list doesn't make a lot of sense, as it is filled mostly with low-level system methods. Let's make the list more meaningful.
- Turn on the Hide system methods checkbox. In this mode, Top Methods shows only user methods. The execution time of a user method is calculated as the sum of the method's own time and the own time of all system methods it calls (down to the next user method in the stack).
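For example, consider this hypothetical fragment of a stack (all names and times are made up for illustration):

```
App.UserMethodA             own time 10 ms
└─ System.MethodB           own time 30 ms
   └─ System.MethodC        own time 20 ms
      └─ App.UserMethodD    own time  5 ms
```

With Hide system methods turned on, Top Methods would show UserMethodA with 10 + 30 + 20 = 60 ms (its own time plus the own time of the system methods it calls, down to the next user method) and UserMethodD with 5 ms.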
There are only two methods left: App.Main and ProcessInProgress.
- Look at the Call Tree.
As you can see, App.Main spends most of its time in a number of system methods related to processing Windows messages. This is typical behavior for any app with a visual UI: it indicates that the app waits for user input in a message loop. We can simply ignore these methods when analyzing the snapshot. To find out what method causes the freeze, we should look at the next user method in the stack, which is ProcessInProgress (955 ms or 28.6%).
We can assume that the lags took place due to some computational work in this method. Let’s check its source code.
- In Call Tree, click on the ProcessInProgress method.
- Look at the Source View window.
It appears that this method is just an event handler that updates the progress of the file processing operation in a label on the main window. This definitely doesn't look like complex computations.
So, why did the freezes occur? The answer is simple: apparently, this event handler is called so often that the main window can't cope with updating the label. Let’s check this out in code.
Further code investigation shows us that this event handler is subscribed to the ProgressChanged event of the background worker. This event occurs when the worker calls the ReportProgress method, which, in turn, is called from the ProcessFiles method of the background worker.
Switch to Visual Studio and look at the code.
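Based on the tutorial’s description, the relevant code might look roughly like the sketch below. Only the BackgroundWorker, ProgressChanged, ReportProgress, ProcessFiles, and ProcessInProgress names come from the text; everything else is assumed for illustration:

```csharp
using System.ComponentModel;
using System.IO;
using System.Linq;

class MainWindowSketch
{
    readonly BackgroundWorker worker = new BackgroundWorker();

    void SetUpWorker()
    {
        worker.WorkerReportsProgress = true;
        worker.DoWork += (s, e) => ProcessFiles((string[])e.Argument);
        // ProcessInProgress runs on the UI thread on every progress report.
        worker.ProgressChanged += ProcessInProgress;
    }

    void ProcessFiles(string[] files)
    {
        foreach (string file in files)
        {
            string[] lines = File.ReadAllLines(file);
            for (int i = 0; i < lines.Length; i++)
            {
                lines[i] = new string(lines[i].Reverse().ToArray());
                // The culprit: progress is reported after every 5 lines,
                // flooding the UI thread with ProgressChanged events.
                if (i % 5 == 0)
                    worker.ReportProgress(i);
            }
            File.WriteAllLines(file, lines);
        }
    }

    void ProcessInProgress(object sender, ProgressChangedEventArgs e)
    {
        // Updates the progress label on the main window (details assumed).
        // progressLabel.Content = "Processed " + e.ProgressPercentage + " lines";
    }
}
```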
- Here is the cause of our performance issues – ReportProgress is called each time 5 lines of a text file have been processed. As lines are processed very quickly, ReportProgress is called far too frequently for the system to keep up. Let’s reduce this frequency to, for instance, one call per 1000 lines by improving the if condition in the code (see the sketch after these steps).
- Rebuild the solution and perform profiling one more time as described in Running the Profiler and Getting a Snapshot.
- Here it is! No more lags. Timeline also doesn’t detect any UI freezes during file processing.
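For reference, the improved condition inside the processing loop might look like this (same assumed names as in the sketch above):

```csharp
// Report progress once per 1000 processed lines instead of once per 5 lines.
if (i % 1000 == 0)
    worker.ReportProgress(i);
```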
Here is a short summary of this Getting Started tutorial:
- Unlike “classic” performance profiling, during timeline profiling dotTrace collects temporal call stack and thread state data.
- To analyze the results of timeline profiling, a special Timeline Viewer is used.
- Timeline Viewer is a set of filters and diagrams that visualize the event timeline of your app and allow you to slice and dice the collected temporal data.
- Each filter not only sets a specific condition but also displays the data.
- Filters can be chained together.
In the next tutorials, we will learn how to use Timeline Viewer to improve app performance in more complex scenarios: for example, when excessive garbage collection or file I/O operations take place, or when you face a multithreading issue like lock contention.