Modern apps tend to be multithreaded. Background data caching, background search, and so on: even in a simple app you'll likely find work that can run on a separate thread. Unfortunately, in addition to the obvious benefits, this also makes your app more complicated and more difficult to debug.
What Is Timeline?
To help you profile multithreaded apps, dotTrace 6.0 introduces a new view called Timeline. Timeline is a set of filters and diagrams that allow you to quickly visualize and analyze threads and thread-to-thread relationships in your app. The purpose of Timeline is to answer the main questions related to poor performance of multithreaded apps:
- What causes user interface (UI) freezes in my app? Is this excessive computational work in the main thread, multiple garbage collections in worker threads, or maybe it’s an overuse of I/O operations?
- Is poor performance caused by a concurrency issue such as lock contention, irregular work distribution, or something else?
Timeline Is Not “Just a View”
Nevertheless, don’t think of Timeline as just one more dotTrace view made “specifically for multithreading”. It is much, much more than that. In addition to the capabilities described above, you can use Timeline as an ordinary profiler: dig into the call stack, view execution times and the source code of methods in the stack, and so on. Actually, you can do performance profiling in Timeline without needing the other views at all. Yep, Timeline is more a profiler within a profiler than just a view.
In this “Getting Started” tutorial, we will take a detailed look at the Timeline user interface and even try to solve a very common task: finding the cause of UI freezes in an app.
As an example, we’ll use a small app used to reverse lines (e.g., "ABC" > "CBA") in text files. Briefly: With the Select Files button, a user specifies text files to be processed. The Process Files button runs a separate BackgroundWorker thread which reverses lines in the files. The progress of file processing is displayed in a label on the main window. After the processing is finished, the label shows the "All files were successfully processed" message.
The source code of the app is available here.
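Based on the description above, the app's worker setup might be sketched roughly like this. Note that all class, field, and control names here are assumptions for illustration, not the actual source:

```csharp
using System;
using System.ComponentModel;
using System.IO;

// Hypothetical sketch of the sample app's structure (names are assumptions).
public partial class MainWindow
{
    private readonly BackgroundWorker _worker = new BackgroundWorker();
    private string[] _selectedFiles = Array.Empty<string>();

    private void InitWorker()
    {
        _worker.WorkerReportsProgress = true;
        _worker.DoWork += ProcessFiles;               // runs on a thread-pool thread
        _worker.ProgressChanged += ProcessInProgress; // raised on the UI thread
    }

    private void ProcessFiles(object sender, DoWorkEventArgs e)
    {
        foreach (string path in _selectedFiles)
        {
            string[] lines = File.ReadAllLines(path);
            for (int i = 0; i < lines.Length; i++)
            {
                // Reverse each line, e.g. "ABC" -> "CBA".
                char[] chars = lines[i].ToCharArray();
                Array.Reverse(chars);
                lines[i] = new string(chars);
            }
            File.WriteAllLines(path, lines);
        }
    }

    private void ProcessInProgress(object sender, ProgressChangedEventArgs e)
    {
        // Updates the progress label on the main window.
    }
}
```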
The app has a serious drawback. After starting file processing, users experience long UI lags that last until the processing is over.
Let’s use Timeline to determine the cause of these freezes!
Running the Profiler and Getting a Snapshot
- Open the MassFileProcessing.sln solution in Visual Studio.
- If you have dotTrace integrated with Visual Studio, run the profiler by choosing dotTrace | Profile Startup Project.
Otherwise, run dotTrace from the Windows Start menu. In dotTrace, select Profile | Standalone application.
- In Profiler Configuration, leave the Sampling profiling type selected. Timeline data can be collected only in sampling mode.
If you ran dotTrace as a standalone app, you should also specify the path to the release executable of our sample app in Application.
- After you click Run, dotTrace starts our app and opens a special dotTrace 6.0 Controller window used to control the profiling process.
- Note that after you start profiling, dotTrace shows real-time data about CPU usage and I/O operations in our app.
Now, we should reproduce a performance issue in our app.
- Click the Select Files button and choose five text files that come with the app in the Text Files folder.
- Click the Process Files button to start file processing.
As you can see, the app lags very badly. In fact, you can't even see the progress of file processing until it is finished and the All files were processed successfully message is shown.
- Collect a performance snapshot by clicking Get Snapshot in the dotTrace 6.0 Controller window. The collected snapshot will be opened in dotTrace.
- Close the app. This will also close the controller window.
- In dotTrace, switch to the Timeline view by clicking the Threads Timeline button.
First Look at the Timeline’s User Interface
Now, let’s make a little digression and take a look at the Timeline UI.
The concept of the Timeline UI is very simple:
- You have profiling data on input (see Source in (1) Overview).
- You apply a number of filters (2) to it. For example, you can ask Timeline to “Show all calls in Thread#1 that performed I/O operations in the timeframe between 2 s and 3 s” or “Show what thread blocked the Main thread”, and so on.
The list of currently applied filters is shown in (1) Overview.
- You get the filtered data on output.
What’s the most interesting thing about steps 2 and 3? The answer is that both steps are performed using the same UI controls. Each block (2) is used not only to select a specific filter but also to display a chunk of filtered information. This leads us to the other Timeline UI concept -- all filters affect each other. Thus, if you select System in Thread Types, Timeline will grey out all non-system threads in Threads Diagram, Call Tree will contain calls from the system threads only, the Thread States filter will contain only states that were detected for system threads, and so on.
To better understand this approach, let’s try the UI in action.
Analyzing a Snapshot in Timeline
- As you see, the Threads Diagram filter now contains all app threads, including system and native ones. Actually, we’re interested only in the threads created by the app itself: the main app thread and a BackgroundWorker thread. So, first, let’s get rid of all other threads.
In the Additional Filters | Thread Types filter, click Main and Pool while holding the Ctrl key.
- Now, look at how Timeline has changed. As you see, the Filters List/Overview contains the filter you applied -- Thread Types: 2 values. Note how data in other filters were affected. For example, state times in Thread States are now calculated only for the threads of the selected types. Methods and Call Tree have changed too, showing calls only from the filtered threads.
- Take a look at the Threads Diagram - all threads of non-selected types are greyed out. Nevertheless, for convenience, we can hide them completely.
Select the Autohide checkbox. This will automatically hide all the threads we are not interested in.
- The current scale of the Threads Diagram doesn’t let us see the details of what was done in Thread Pool Thread #1 (our BackgroundWorker thread). Let’s zoom in on it.
To do this, use Ctrl+Mouse Wheel on the Threads Diagram.
This automatically adds the Time Range: visible 2,476 ms filter. Note how this filter affects others: values in Additional Filters were recalculated for the visible time range.
- Take a look at the Threads Diagram.
What you see is how thread states changed over time. For example, our BackgroundWorker thread, Thread Pool Thread #1, was started at approximately 15.5 s (after we clicked the Process Files button). The thread’s work included file processing (the blue Run on CPU state) and reading/writing files (the green File I/O state). Besides, for some reason, the thread was not working for some time: it was in the Blocked (yellow) and Idle (pale blue) states.
- Remove the Time Range filter by clicking on the corresponding button. This will zoom the diagram back out.
- Now, it’s time to deal with our UI freezes!
What are the main causes of such freezes? These are:
- Excessive computational work on the main UI thread.
- Long or frequent garbage collections (GC) on any thread (in workstation apps, when GC takes place, all managed threads are suspended except the one that triggered the GC).
- Blocking of the main UI thread by some other thread (for example, during access to some shared resources).
Therefore, all we need to do is exclude these causes one by one until we find the real one.
- Take a look at the States Diagram – it contains a special UI freezes section.
As you can see, there are two or three main freezes detected: one (or more) closer to the app start and one after we started file processing.
Apparently, the first ones are caused by inevitable start-up activity, and there’s nothing we can do about them. We’re much more interested in the second freeze. Let’s look at it in more detail.
- Select the second UI freeze by clicking on the corresponding bar in the States Diagram | UI freezes section.
The Time Range: selected 3,978 ms and Threads: Main Thread filters are added.
- Now, we should understand what our app was doing during the freeze. Let’s investigate filters in the Additional Filters window.
- Look at the Additional Filters | CLR States filter. It shows states of the .NET runtime environment during the selected timeframe (3541 ms).
It appears that most of the time our app was Running (81.80% and 2896 ms). The Garbage Collection time is also quite high (17.50% and 620 ms), but probably not high enough to be the cause of the UI freeze (which lasted for the entire timeframe).
- Select the Running state in Additional Filters | CLR States. Now, other filters will show data only for the periods of time when the CLR was in the Running state.
- Look at the Thread States filter.
Most of the time (82.97%), the Main thread was in the Run on CPU state. This means that during the major part of the freeze, the main UI thread was doing some work (the Blocked state is an insignificant 302 ms). Therefore, the cause of the freeze is computational work on the Main thread!
Now, all we need to do is find the methods that were executed on the Main thread during the freeze. This is the purpose of the Methods and Call Tree filters.
- Select Run on CPU in Additional Filters | Thread States.
The list of filters in Filters List/Overview will look as follows:
Now, the Methods and Call Tree filters contain only methods that were executed by the main thread during the selected time interval (3541 ms long) while the CLR was running.
Note that the Methods filter shows only the user methods with the highest execution time. This time is the sum of the method's own execution time and the execution times of all underlying system calls.
Call Tree shows all calls in the stack (both user and system methods). The time shown against a call is the sum of all underlying calls in the branch.
The Methods filter shows us that most of the time (713 ms) was spent in the MainWindow.ProcessInProgress method. Therefore, the lags took place due to some computational work in this method. Let’s look at it in the source code window.
- In Call Tree, select MainWindow.ProcessInProgress.
- Look at Source Code.
It appears that this method is just an event handler that updates the progress of the file processing operation in a label on the main window. Definitely, this doesn’t look like complex computations.
So, why did the freezes occur? The answer is simple. Apparently, this event handler is called so often that the main window can’t cope with updating the label. Let’s check this out in code.
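Based on the description above, such a handler might look roughly like this (the label name and message format are assumptions, not the actual source):

```csharp
// Hypothetical sketch of the progress handler; it runs on the UI thread
// once for every ReportProgress call made by the background worker.
private void ProcessInProgress(object sender, ProgressChangedEventArgs e)
{
    // Each individual update is cheap, but thousands of updates per second
    // flood the UI message queue and freeze the window.
    ProgressLabel.Content = $"Processing: {e.ProgressPercentage}%";
}
```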
Further code investigation shows us that this event handler is subscribed to the ProgressChanged event of the background worker. This event occurs when the worker calls the ReportProgress method, which in turn is called from the ProcessFiles method of the background worker.
Switch to Visual Studio and look at the code.
- Here is the cause of our performance issues: ReportProgress is called every time 10 lines of a text file have been processed. As lines are processed very quickly, ReportProgress is called too frequently for the system to keep up. Let’s reduce this frequency to, for instance, one call per 1000 lines by adjusting the if condition in the code.
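The adjusted condition might look like the following sketch (the loop variable name and the progress calculation are assumptions; the point is simply to throttle ReportProgress):

```csharp
// Inside the worker's file-processing loop.
// Before the fix, progress was reported every 10 lines: if (i % 10 == 0) ...
// After the fix, we report at most once per 1000 processed lines:
if (i % 1000 == 0)
{
    worker.ReportProgress((int)(100.0 * i / lines.Length));
}
```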
- Rebuild the solution and perform profiling one more time as described in Running the Profiler and Getting a Snapshot.
- Here it is! No more lags. Timeline also doesn’t detect any UI freezes during file processing.
Here is a short summary of this Getting Started tutorial:
- Timeline is a set of filters and diagrams that visualize threads and thread-to-thread relationships.
- Each filter not only sets a specific condition for shown data but also displays the data.
- Filters affect each other. Setting a specific value in a filter changes values shown in other filters.
In the next tutorials, we will learn how to use Timeline to improve app performance in more complex scenarios: for example, when excessive garbage collections or file I/O operations take place, or when you face a multithreading issue like lock contention.