What Is Timeline Profiling?

Since version 6.0, dotTrace offers a new method of profiling your apps called timeline profiling. What is timeline profiling and why do you need it?

...

In this Getting Started tutorial, we will take a detailed look at the key profiling steps and introduce you to the Timeline Viewer user interface. We will also profile a sample app and try to determine why its UI freezes, which is a very common profiling task.

Contents

Sample App
Running the Profiler and Getting a Snapshot
First Look at Timeline User Interface
Analyzing a Snapshot in Timeline
Conclusion

Sample App

...

  1. Look at the Threads diagram. By default, it contains all application threads excluding unmanaged threads (you can make them visible at any time). Note that all filter values you see are calculated for all threads currently visible.
    For further analysis, we are not interested in threads that perform no work. So, first let’s get rid of them.

  2. Look at the list of threads on the diagram. It consists of the Main app thread, the Finalizer thread (used to finalize objects, but it does no work in our app), and the Garbage Collection thread (used to perform background GC). The Background thread that processes files in our app was identified as Thread Pool (ID 10104) because background threads are created by the CLR Thread Pool. There is also one more Thread Pool (ID 2900) that does no work; this is probably some auxiliary CLR thread pool.
    Let's hide the Finalizer and Thread Pool (ID 2900) threads as irrelevant to our analysis.
  3. Select the Finalizer and Thread Pool (ID 2900) threads in the Threads diagram.
  4. Right-click and select Hide Selected Threads.
  5. Look at the Threads diagram and the status bar. The following filter is currently applied to the snapshot data: Select lifetime intervals of all threads except the hidden ones.

    Note how the data in other filters were affected. For example, state times in Thread States are now calculated for all threads except the hidden ones. Top Methods and Call Tree have changed too, showing calls only from the filtered threads.
  6. The current scale of the Threads diagram doesn't allow us to see the 10104 Thread Pool (our BackgroundWorker thread) in detail. Let’s zoom in so that it fits the entire diagram.
    To do this, use Ctrl+Mouse Wheel on the Threads Diagram.
    This automatically adds the filter Visible time range: 1586 ms. Note how this filter affects others: all values are recalculated for the visible time range.
    The following filter is now applied to the snapshot data: Select all time intervals within the visible time range for all threads except the hidden ones.
  7. Take a look at the Threads diagram.
    What you see is how thread states changed over time. For example, our BackgroundWorker thread, 10104 Thread Pool, was started at approximately 16.3 s (after we clicked the Process Files button). Most of the time the thread was Running (rich blue intervals), but there are also intervals where the thread was in the Waiting state (pale blue intervals).
  8. Look at the Process Overview diagram. In addition to CPU Utilization, it shows two event diagrams relevant for our performance analysis. The UI Freeze bar shows that the freeze started right after 10104 Thread Pool was created. Blocking Garbage Collection was also performed intensively during this time interval. As blocking GC suspends all managed threads, it may be a potential cause of the UI freeze.

    Note: The timeline pattern may look slightly different on your computer, as the execution flow depends on a great variety of factors (such as the number of CPU cores and their workload during the app execution). Nevertheless, the freeze takes place regardless of your system configuration.

    We must take a closer look at these events.

  9. First, let’s remove the Visible time range filter as we no longer need it. To do this, click on the filter in the list of applied filters. The diagram will zoom back out.
  10. Now, let’s investigate the UI freeze event more thoroughly.
    What are the main potential causes of such freezes? These are:
    • long or frequent blocking GCs;
    • blocking of the UI thread by some other thread (for example, due to lock contention); and/or
    • excessive computational work on the UI thread.
    We will proceed to exclude these causes one by one until only the true culprit remains.
  11. Select the UI freeze event by clicking on the corresponding bar in the Process Overview section. This will apply the filter by the UI freeze event. Note that this applies not only the filter by the freeze time range, but also the filter by the Main thread. The latter is done automatically, as the Main thread is the only one that processes the UI in our app.
    Thus, the resulting filter now is Select all time intervals on the Main thread where the UI freeze event took place.
  12. Now, to identify the true cause of this freeze, let’s investigate values in the Filters window.
    The first potential cause we should analyze is excessive blocking GCs. Take a look at the Blocking GC filter. Taking into account the currently applied filters, it shows how long the Main thread was (Blocking GC value) and was not (Exclude Blocking GC value) blocked by GC during the freeze.

    The Blocking GC time is quite high (483 ms, or 11.8% of the selected interval) and may have some impact on performance. Nevertheless, it could hardly be the cause of the 4-second freeze. Thus, we can exclude excessive GC from our list of suspects.
  13. Click the Exclude Blocking GC value. The resulting filter now is Select all time intervals on the Main thread where the UI freeze event took place and no blocking GC is performed.
  14. Let's investigate the potential causes “Blocking by some other thread” and “Excessive work on the Main thread.”
    Look at Thread State. This filter shows total time spent by threads in a certain state. Taking into account the currently applied filters, it shows states of the Main thread during the freeze.

    It appears that for most of the freeze time (92.1%, or 3335 ms) the thread was Running, that is, doing some work. The 242 ms spent in the Waiting state is too short to explain the freeze, which excludes “Blocking by some other thread” as a cause. Therefore, the cause of the freeze can only be computational work on the Main thread!
    All we need to do now is find methods that were executed on the Main thread during the freeze. We can use the Top Methods and Call Tree filters for this.
  15. Select Running in the Thread States filter. This will make the resulting filter Select all time intervals where the Main thread was running when the UI freeze took place and no blocking GC was performed.
    The list of filters will look as follows:

    Now, the filters Top Methods and Call Tree contain only methods executed during these time intervals.
  16. Look at the Top Methods filter. It shows a plain list of methods from the call stacks, sorted by their execution time.

    In the current state, the list doesn't make a lot of sense as it is filled mostly with low-level system methods. Let's make the list more meaningful.
  17. Select the Hide system methods checkbox. In this mode, Top Methods shows only user methods. The execution time of a user method is calculated as the sum of the method's own time and the own time of all system methods it calls (down to the next user method in the stack).

    We see there are only two methods left: App.Main and ProcessInProgress.
  18. Look at the Call Tree.

    Note: To ease understanding of the call tree, Timeline Viewer folds system call chains by default. To unfold a chain, click the icon shown next to the root system call.

    As you can see, App.Main spends most of its time in a number of system methods related to processing Windows messages. This is typical behavior for any app with a graphical UI: the app waits for user input in a message loop. We can simply ignore these methods when analyzing the snapshot. To find out what method causes the freeze, we should look at the next user method down the stack, which turns out to be ProcessInProgress (955 ms, or 28.6%).
    Since we assumed that the lags took place due to some computational work in this method, let’s check its source code.

  19. In Call Tree, click on the ProcessInProgress method.
  20. Look at the Source View window.

    Note: If the Source View window is not open by default, open it using the View | Source View menu.

  21. Code Block
    languagecsharp
    private void ProcessInProgress(object sender, ProgressChangedEventArgs e)
    {
       var upd = (ProgressUpdater) e.UserState;
    
       lblProgress.Content = "File " + upd.CurrentFileNmb + " of " + upd.TotalFiles + ": " + e.ProgressPercentage + "%";
    }
    

    It appears that this method is just an event handler that updates the progress of the file processing operation in a progress label on the main window. These computations don't look complex, so why did the freezes occur? Apparently, this event handler is called so often that the main window cannot cope with updating the label. Let’s check this out in our code.

  22. Switch to Visual Studio and look at the code.

    Note: If you use JetBrains ReSharper, you can employ the very helpful Inspect This feature (press Ctrl+Shift+Alt+A). It can show you all incoming calls to the method.

    Further code investigation shows that this event handler is subscribed to the ProgressChanged event of the background worker. This event occurs when the worker calls the ReportProgress method, which, in turn, is called from the ProcessFiles method of the background worker. (A minimal sketch of how this wiring might look is shown after this procedure.)

    Code Block
    languagecsharp
    ...
    for (int j = 0; j < _lines.Length; j++)
    {
       var line = _lines[j];
       var stringReverser = new StringReverser(line);
       _lines[j] = stringReverser.Reverse();
    
       if (j % 5 == 0)
       {
          var p = ((float)(j + 1) / _lines.Length) * 100;
          Worker.ReportProgress((int) p, _updater);
       }
    }
    ...
    
  23. Here is the cause of our performance issues: ReportProgress is called after every 5 processed lines of a text file. As lines are processed very quickly, ReportProgress is called too frequently for the system to keep up. Let’s reduce this frequency, for instance, to one call per 1000 lines, by improving the if condition in the code. (A time-based way to throttle these updates is also sketched after this procedure.)

    Code Block
    languagecsharp
    ...
    if (j % 1000 == 0)
    {
       var p = ((float)(j + 1) / _lines.Length) * 100;
       Worker.ReportProgress((int) p, _updater);
    }
    ...
    
  24. Rebuild the solution and perform profiling one more time as described in Running the Profiler and Getting a Snapshot.
  25. There it is! No more lags. Timeline also doesn’t detect any UI freezes during file processing.
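
For reference, here is a minimal sketch of how the background worker wiring described in step 22 might be set up. The tutorial does not show this part of the sample app's source, so apart from the names taken from the snippets above (Worker, _updater, _lines, ProcessFiles, ProcessInProgress, ProgressUpdater), the class layout below is an assumption for illustration only; the BackgroundWorker members used (WorkerReportsProgress, DoWork, ProgressChanged, RunWorkerAsync, ReportProgress) are the standard .NET ones.

Code Block
languagecsharp
using System.ComponentModel;

// Hypothetical ProgressUpdater: the tutorial only shows that it exposes
// CurrentFileNmb and TotalFiles, so this definition is an assumption.
public class ProgressUpdater
{
   public int CurrentFileNmb;
   public int TotalFiles;
}

public class FileProcessor
{
   // Worker, _updater, and _lines are the names used in the tutorial's snippets.
   public readonly BackgroundWorker Worker = new BackgroundWorker();
   private readonly ProgressUpdater _updater = new ProgressUpdater();
   private string[] _lines = new string[0];

   public void Start()
   {
      Worker.WorkerReportsProgress = true;          // required for ReportProgress to work
      Worker.DoWork += (s, e) => ProcessFiles();    // runs on a CLR thread pool thread
      Worker.ProgressChanged += ProcessInProgress;  // raised via the SynchronizationContext
                                                    // captured at RunWorkerAsync (the Main
                                                    // thread in a WPF app)
      Worker.RunWorkerAsync();                      // start the background processing
   }

   private void ProcessFiles()
   {
      // ... processes _lines and periodically calls Worker.ReportProgress((int)p, _updater),
      // which raises the ProgressChanged event handled below (see step 22).
   }

   private void ProcessInProgress(object sender, ProgressChangedEventArgs e)
   {
      // In the sample app this handler updates the progress label (see step 21).
      var upd = (ProgressUpdater) e.UserState;
   }
}

This wiring is why the freeze shows up on the Main thread even though the file processing itself runs on a thread pool thread: every ReportProgress call schedules work back onto the UI thread.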
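As a design note on the fix from step 23: instead of reporting after every N-th line, the updates could be throttled by elapsed time, which keeps the reporting rate independent of how fast individual lines are processed. The sketch below is not part of the sample app; it reuses the names Worker, _lines, _updater, and StringReverser from the snippets above and uses the standard System.Diagnostics.Stopwatch.

Code Block
languagecsharp
using System.Diagnostics;

// A time-based alternative to the "one call per 1000 lines" fix from step 23.
var reportTimer = Stopwatch.StartNew();

for (int j = 0; j < _lines.Length; j++)
{
   var stringReverser = new StringReverser(_lines[j]);
   _lines[j] = stringReverser.Reverse();

   if (reportTimer.ElapsedMilliseconds >= 100)   // throttle UI updates to roughly 10 per second
   {
      var p = ((float)(j + 1) / _lines.Length) * 100;
      Worker.ReportProgress((int) p, _updater);
      reportTimer.Restart();
   }
}

Either approach removes the freeze; the point is simply that the UI thread should not be asked to update the label thousands of times per second.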

...