Getting Started with Timeline Profiling

...

  1. Note

    * You can make them visible at any time.

    Look at the Threads diagram. By default, it contains all application threads excluding the unmanaged ones*. Note that all filter values you see are calculated for all currently visible threads.
    For further analysis, we are not interested in the threads that do not perform any work. So, first, let’s get rid of them.
  2. Look at the list of threads on the diagram. It consists of the Main app thread, the Finalizer thread (used to finalize objects, but it does not do any work in our app), and the Garbage Collection thread (used to perform background GC). The Background thread that processes files in our app was identified as Thread Pool (ID 10104) because background threads are created by the CLR thread pool (see the sketch below). There's also one more Thread Pool (ID 2900) that doesn't do any work; this is probably some auxiliary CLR thread pool.
    Let's hide the Finalizer and Thread Pool (ID 2900) threads, as they are meaningless for our analysis.
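    The following minimal console sketch (hypothetical, not part of the tutorial's sample app) shows why a BackgroundWorker ends up labeled as Thread Pool: its DoWork handler runs on a CLR thread-pool thread.
    Code Block
    languagecsharp
    using System;
    using System.ComponentModel;
    using System.Threading;
    
    class ThreadPoolWorkerDemo
    {
       static void Main()
       {
          var worker = new BackgroundWorker();
          worker.DoWork += (s, e) =>
          {
             // DoWork executes on a CLR thread-pool thread, which is why a profiler
             // shows the background thread as "Thread Pool".
             Console.WriteLine("Runs on a thread-pool thread: " + Thread.CurrentThread.IsThreadPoolThread);
          };
          worker.RunWorkerAsync();
          Thread.Sleep(500); // console-only sketch: give the pool thread time to finish
       }
    }
    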
  3. Select the Finalizer and Thread Pool (ID 2900) threads in Threads diagram.
  4. Right-click and in the context menu select Hide Selected Threads.
  5. Look at the Threads diagram and the status bar. The filter that is now applied to the snapshot data is “Select lifetime intervals of all threads excepting hidden”.

    Note how the data in other filters was affected. For example, state times in Thread States are now calculated for all threads except the hidden ones. Top Methods and Call Tree have changed too, showing calls only from the filtered threads.
  6. The current scale of the Threads diagram doesn't allow us to see the 10104 Thread Pool (our BackgroundWorker thread) in detail. Let’s zoom in so that it occupies the entire diagram.
    To do this, use Ctrl+Mouse Wheel on the Threads Diagram.
    This automatically adds the Visible time range: 1586 ms filter. Note how this filter affects others: all values are recalculated for the visible time range.
    The filter that is now applied to the snapshot data is “Select all time intervals within the visible time range for all threads excepting hidden”.
  7. Take a look at the Threads diagram.
    What you see is how thread states changed over time. For example, our BackgroundWorker thread 10104 Thread Pool was started at approximately 16.3 s (after we clicked the Process Files button). Most of the time the thread was Running (rich blue intervals). There are also intervals where the thread was in the Waiting state (pale blue intervals).
  8. Note

    * The timeline pattern may look different on your computer, as the execution flow depends on a huge variety of factors (such as the number of CPU cores and their workload during the app execution). Nevertheless, the freeze should take place regardless of your system configuration.

    Look at the Process Overview diagram*. In addition to CPU Utilization, it shows two event diagrams meaningful for performance analysis. The UI Freeze bar shows that the freeze started right after the 10104 Thread Pool was created, and Blocking Garbage Collection was also performed intensively on this time interval. As blocking GC suspends all managed threads, it may be a potential cause of the UI freeze.

    So, it’s better to take a closer look at these events (a small sketch of what a blocking GC does follows below).
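    To make the "blocking" part concrete, here is a tiny sketch (not from the sample app): a blocking collection suspends every managed thread, including the UI thread, for its entire duration.
    Code Block
    languagecsharp
    using System;
    
    class BlockingGcDemo
    {
       static void Main()
       {
          // A forced, blocking collection of all generations: all managed threads are
          // suspended while it runs. Frequent or long pauses of this kind are what the
          // Blocking Garbage Collection bar on the Process Overview diagram visualizes.
          GC.Collect(2, GCCollectionMode.Forced, blocking: true);
          Console.WriteLine("Blocking GC finished.");
       }
    }
    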
  9. First, let’s remove the Visible time range filter as we no longer need it. To do this, click on the filter in the list of applied filters. This will zoom the diagram back out.
  10. Now, let’s investigate the UI freeze event more thoroughly.
    What are the main reasons for such freezes? These are:
    • Long or frequent blocking GCs.
    • Blocking of the UI thread by some other thread (for example, due to lock contention).
    • Excessive computational work on the UI thread.
      Therefore, all we need to do is exclude these reasons one by one until we find the real one (the lock-contention case is illustrated in the sketch after this list).
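    To make the second reason more concrete, here is a minimal console sketch (not taken from the sample app) of a main thread blocked by lock contention with a worker thread:
    Code Block
    languagecsharp
    using System;
    using System.Threading;
    
    class LockContentionDemo
    {
       static readonly object Gate = new object();
    
       static void Main()
       {
          // A worker thread grabs the lock and holds it for a long time...
          new Thread(() => { lock (Gate) { Thread.Sleep(4000); } }).Start();
          Thread.Sleep(100); // let the worker acquire the lock first
    
          // ...so the main ("UI") thread spends about 4 seconds in the Waiting state here.
          lock (Gate)
          {
             Console.WriteLine("Main thread finally acquired the lock.");
          }
       }
    }
    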
  11. Select the UI freeze event by clicking on the corresponding bar in the Process Overview section. This will apply the filter by the UI freeze event. Note that this applies not only the filter by the freeze time range, but also the filter by the Main thread. The latter is done automatically, as the Main thread is the only one that processes the UI in our app.
    Thus, the resulting filter now is “Select all time intervals on the Main thread where the UI freeze event took place”.
  12. Now we need to understand the real reason for this freeze. Let’s investigate the values in the Filters window.
    The first reason we should analyze is excessive blocking GCs. Take a look at the Blocking GC filter. Taking into account the currently applied filters, it shows how long the Main thread was (Blocking GC value) and was not (Exclude Blocking GC value) blocked by GC during the freeze.

    The Blocking GC time is quite noticeable (483 ms, or 11.72% of the selected interval) and probably has some impact on performance. Nevertheless, it could hardly be the reason for the 4 s long freeze. Thus, for now, the excessive GC reason can be excluded.
  13. Click the Exclude Blocking GC value. The resulting filter now is “Select all time intervals on the Main thread where the UI freeze event took place and no blocking GC is performed”.
  14. Let's investigate the “Blocking by some other thread” and “Excessive work on the Main thread” reasons.
    Look at the Thread States filter. This filter shows the total time spent by threads in a certain state. Taking into account the currently applied filters, it shows the states of the Main thread during the freeze.

It appears that for most of the freeze time (82.6%, or 3390 ms) the thread was doing some work, as it was Running. The 669 ms spent in the Waiting state is too small to matter, which automatically excludes the “Blocking by some other thread” reason. Therefore, the cause of the freeze is computational work on the Main thread!
All we need to do now is find the methods that were executed on the Main thread during the freeze. For this purpose, we can use the Top Methods and Call Tree filters.

  1. Select Running in the Thread States filter. This will make the resulting filter “Select all time intervals where the Main thread was running during the UI freeze”.
    The list of filters will look as follows:

    Now, the Top Methods and Call Tree filters contain only the methods executed during these time intervals.
  2. Look at the Top Methods filter. It shows a plain list of methods from the call stack, sorted by their execution time.

    In the current state, the list doesn't make a lot of sense, as it is filled mostly with low-level system methods. Let's make the list more meaningful.
  3. Turn on the Hide system methods checkbox. In this mode, Top Methods shows only user methods. The execution time of a user method is calculated as the sum of its own time and the own time of all system methods it calls (down to the first user method deeper in the stack). For example, if App.Main spent 10 ms in its own code and the system methods it calls spent another 60 ms before the stack reached ProcessInProgress, App.Main would be shown with 70 ms (the numbers here are purely illustrative).

    There are only two methods left: App.Main and ProcessInProgress.
  4. Look at the Call Tree.
    Note

    * By default, to simplify understanding of the call tree, Timeline Viewer folds system calls. To unfold the calls, click the icon shown next to the root system call.

    As you can see, App.Main spends most of its time in a number of system methods related to processing Windows messages*. This only confirms that a UI freeze takes place in the app and doesn’t bring any additional details. To find out what method causes the freeze, we should look at the next user method in the stack, which is ProcessInProgress (1051 ms or 29.57%).
    We can make an assumption that lags took place due to some computational work in this method. Let’s check its source code.
  5. In Call Tree, click on the ProcessInProgress method.
  6. Note

    * If the Source View window is not opened by default, open it using the View | Source View menu.

    Look at the Source View window*.
  7. Code Block
    languagecsharp
    private void ProcessInProgress(object sender, ProgressChangedEventArgs e)
    {
       var upd = (ProgressUpdater) e.UserState;
    
       lblProgress.Content = "File " + upd.CurrentFileNmb + " of " + upd.TotalFiles + ": " + e.ProgressPercentage + "%";
    }
    
    It appears that this method is just an event handler that updates the progress of the file processing operation in a label on the main window. This definitely doesn't look like complex computations.
    So, why did the freezes occur? The answer is simple: apparently, this event handler is called so often that the main window can’t cope with updating the label. Let’s check this out in code.
  8. Note

    * If you use ReSharper, you can employ the very helpful Inspect This feature (the Ctrl+Shift+Alt+A shortcut). It can show you all incoming calls to the method.

    Switch to Visual Studio and look at the code.
    Further code investigation* shows that this event handler is subscribed to the ProgressChanged event of the background worker. This event occurs when the worker calls the ReportProgress method, which in turn is called from the ProcessFiles method of the background worker (a sketch of this wiring follows the code below).
    Code Block
    languagecsharp
    ...
    for (int j = 0; j < _lines.Length; j++)
    {
       var line = _lines[j];
       var stringReverser = new StringReverser(line);
       _lines[j] = stringReverser.Reverse();
    
       if (j % 5 == 0)
       {
          var p = ((float)(j + 1) / _lines.Length) * 100;
          Worker.ReportProgress((int) p, _updater);
       }
    }
    ...
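    The subscription code itself is not shown in the tutorial, but based on the description above the wiring presumably looks like the following sketch (member names are assumptions carried over from the snippets):
    Code Block
    languagecsharp
    ...
    // Assumed wiring of the background worker in the sample app:
    Worker = new BackgroundWorker { WorkerReportsProgress = true }; // ReportProgress raises ProgressChanged only when this is true
    Worker.DoWork += (s, e) => ProcessFiles();                      // runs on a CLR thread-pool thread
    Worker.ProgressChanged += ProcessInProgress;                    // raised on the UI thread for every ReportProgress call
    Worker.RunWorkerAsync();
    ...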
    
  9. Here is the cause of our performance issues – ReportProgress is called every time 5 lines of a text file are processed. As lines are processed very quickly, ReportProgress is called too frequently for the system to keep up. Let’s reduce this frequency, for instance, to one call per 1000 lines. Update the if condition in the code; an alternative, time-based way of throttling the reports is sketched after the snippet below.
    Code Block
    languagecsharp
    ...
    if (j % 1000 == 0)
    {
       var p = ((float)(j + 1) / _lines.Length) * 100;
       Worker.ReportProgress((int) p, _updater);
    }
    ...
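    If the number of lines varies a lot between files, you can throttle by elapsed time instead of line count. This is just an alternative sketch, not part of the tutorial's fix:
    Code Block
    languagecsharp
    ...
    var reportTimer = System.Diagnostics.Stopwatch.StartNew();
    for (int j = 0; j < _lines.Length; j++)
    {
       var line = _lines[j];
       var stringReverser = new StringReverser(line);
       _lines[j] = stringReverser.Reverse();
    
       // Report at most once every 250 ms, no matter how fast lines are processed.
       if (reportTimer.ElapsedMilliseconds >= 250)
       {
          var p = ((float)(j + 1) / _lines.Length) * 100;
          Worker.ReportProgress((int) p, _updater);
          reportTimer.Restart();
       }
    }
    ...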
    
  10. Rebuild the solution and perform profiling one more time as described in Running the Profiler and Getting a Snapshot.
  11. Here it is! No more lags. Timeline also doesn’t detect any UI freezes during file processing.

...