Profiling 101 for Python Developers: Picking the Right Profiler (4/6)

By Sümer Cip, on Feb 07, 2020

Blog post series index:

  1. What is a Profiler?
  2. The Many Types of Profilers
  3. List of Existing Python Profilers
  4. Picking the Right Profiler (you are here)
  5. Profile visualizations
  6. Using Blackfire

The main trade-off between statistical and deterministic profilers is accuracy vs performance.

Overhead

A deterministic profiler will always add some overhead to the profiled application, since a measurement is taken at every profile event. When millions of calls are involved, that tracing overhead adds up and can distort the results. This limitation is even acknowledged in the documentation of Python's own cProfile.
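As a reference point, here is a minimal example of a deterministic run with the standard-library cProfile; the naive fib function is just an illustrative workload chosen because it generates a very large number of calls:

```python
import cProfile

def fib(n):
    # Intentionally naive recursion: a large number of calls,
    # which is exactly where tracing overhead becomes visible.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# cProfile hooks every call/return event, so each of these calls is measured.
cProfile.run("fib(28)", sort="cumulative")
```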

A statistical profiler, on the other hand, does not suffer from this problem: it takes its measurements at fixed intervals and may record only a limited stack depth. It can afford to do so because, in the end, it relies on the sampled data being statistically representative.

I have profiled py-spy, plop, and pyinstrument. Their underlying implementations are quite standard. With applications making a very large number of calls, the results show they add a roughly constant overhead (about 10%) to the profiled application, independent of the stack depth or call count.
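For comparison, a statistical profiler such as pyinstrument can be attached in-process with its start/stop API; the sketch below follows its documented usage, and the interval value and workload are only illustrative:

```python
from pyinstrument import Profiler

def busy_work():
    total = 0
    for i in range(2_000_000):
        total += i * i
    return total

profiler = Profiler(interval=0.001)  # sample the call stack roughly every 1 ms
profiler.start()
busy_work()
profiler.stop()

# Prints a sampled call tree; the sampling cost stays roughly constant
# regardless of how many Python-level calls the workload makes.
print(profiler.output_text(unicode=True, color=False))
```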

Level of insights

A deterministic profiler, in turn, has its own benefits. Because you control exactly when measurements are taken, you see the whole picture. You can collect different metrics per function, such as memory usage, network usage, and wall time, and even capture the parameters passed to the profiled functions.

It would be very hard, if not impossible, to get these kinds of per-function metrics from a statistical profiler.
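To make the difference concrete, here is a toy sketch built on Python's sys.setprofile hook, the same mechanism deterministic profilers rely on. The handler records the arguments of every call, something a sampling profiler has no natural way to do; the checkout function is just a made-up example:

```python
import sys

calls = []

def trace_calls(frame, event, arg):
    # sys.setprofile invokes this hook on every call/return event,
    # so per-call details such as the actual arguments are available.
    if event == "call":
        calls.append((frame.f_code.co_name, dict(frame.f_locals)))

def checkout(user_id, amount):
    return amount * 1.2

sys.setprofile(trace_calls)
checkout(42, 99.0)
checkout(7, 10.0)
sys.setprofile(None)

for name, args in calls:
    print(name, args)
```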

Getting further with Blackfire

An ideal solution would combine the benefits of both approaches: profiling deterministically without affecting the performance of the profiled application.

That is a great strength of HTTP instrumentation with Blackfire. A special HTTP header activates the profiler in your web server, and it is deactivated at the end of the request.

This has zero overhead for end users and lets you profile your production systems. Of course, this is per-request profiling only.
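Conceptually, header-triggered profiling can be sketched as a small WSGI middleware. This is only an illustration of the idea, not Blackfire's actual protocol: the X-Profile-Request header name is made up, and cProfile stands in for the real profiler.

```python
import cProfile
import io
import pstats

class HeaderTriggeredProfiler:
    """Profile a request only when a trigger header is present.

    Illustrative only: the header name and the use of cProfile are
    placeholders, not how Blackfire works under the hood.
    """

    def __init__(self, app, header="HTTP_X_PROFILE_REQUEST"):
        self.app = app
        self.header = header  # WSGI environ key for the trigger header

    def __call__(self, environ, start_response):
        if self.header not in environ:
            # No trigger header: zero profiling work for regular end users.
            return self.app(environ, start_response)

        profiler = cProfile.Profile()
        profiler.enable()
        try:
            return self.app(environ, start_response)
        finally:
            profiler.disable()
            out = io.StringIO()
            pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
            # In a real setup the profile would be sent to a collector;
            # here we simply log it.
            print(out.getvalue())
```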

Using a full deterministic profiler in an application monitoring system would definitely be overkill, which is why most APM systems use statistical profilers or some other kind of sampling to capture performance data.

Next article: Profile visualizations

Sümer Cip

Sümer Cip is currently a Senior Software Engineer at Blackfire. He has been coding for nearly 20 years and has contributed to many open source projects in his career. He has a deep love for Python and an obsession with code performance and low-level details.