Taming the beast: performance optimization of unfamiliar applications with Blackfire – part 2 of 2

By Thomas di Luccio, on Sep 27, 2023

Welcome to the second part of this mini-series dedicated to using Blackfire on an existing application you have just inherited and are still trying to figure out.

Your team may have inherited that long-lost application nobody remembered existed. Shout-out to all the Indiana Joneses among you. After all, most of us have been involved in a DIY remake of Raiders of the Lost App at some point.

Now, on to the trickiest part of the process: writing assertions. When writing assertions, we want to evaluate why a script or endpoint is fast, not just how fast it is. Remember: time is volatile, so we have to rely on non-volatile metrics to get reliable tests.
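
To illustrate the difference, here is a quick sketch using standard Blackfire metrics (the threshold values are arbitrary placeholders, not recommendations):

assertions:
  # Volatile: wall time depends on hardware, load, and cache state
  - "main.wall_time < 150ms"
  # Stable: counts what the code actually does, run after run
  - "metrics.sql.queries.count <= 10"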

This is another point on which I’m particularly interested in your thoughts and insights. Personally, my initial testing process involves evaluating the number of SQL queries, the number of HTTP requests made, and, when an ORM is in use, the number of entities hydrated.

It may be a personal pet peeve, but I’ve noticed a tendency to create and hydrate far more entities than needed. This often results in resource-intensive changeset computations when data is flushed to the database.

Let me say it clearly: this is not a definitive list but simply a personal boilerplate on top of which I’ll add more metrics as I better understand and optimize those parts of the application. This iterative process is key to shaping an effective performance optimization strategy.

tests:
  "homepage boilerplate test":
    path: "/"
    assertions:
      - "metrics.doctrine.entities.created.count <= XX"
      - "metrics.laravel.eloquent.models.created.count <= XX"
      - "metrics.sql.queries.count <= XX"
      - "metrics.http.requests.count <= XX"
    description: |
      TODO: improve those default assertions

  "product page boilerplate test":
    path: "/my-product"
    assertions:
      - "metrics.doctrine.entities.created.count <= XX"
      - "metrics.laravel.eloquent.models.created.count <= XX"
      - "metrics.sql.queries.count <= XX"
      - "metrics.http.requests.count <= XX"
    description: |
      TODO: improve those default assertions

  "checkout page boilerplate test":
    path: "/checkout"
    assertions:
      - "metrics.doctrine.entities.created.count <= XX"
      - "metrics.laravel.eloquent.models.created.count <= XX"
      - "metrics.sql.queries.count <= XX"
      - "metrics.http.requests.count <= XX"
    description: |
      TODO: improve those default assertions

The next step is to trigger a profile for each of those endpoints and gather data to update the assertions with the expected values for those different metrics.
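
One way to do this, as a sketch using the Blackfire CLI (example.com stands in for your own domain), is the blackfire curl command, which profiles an HTTP request and evaluates all matching assertions:

blackfire curl https://example.com/
blackfire curl https://example.com/my-product
blackfire curl https://example.com/checkout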

However, our work doesn’t stop here. With such a .blackfire.yaml file in place, we are informed of any regression whenever a profile is made. It would be even better to evaluate the performance of all the critical endpoints regularly, with a single command. This is called synthetic monitoring, and it is an integral part of Blackfire.

It’s akin to scheduled health check-ups, helping us spot and treat any performance issues before they can significantly impact our application and end-users. You can read more about Blackfire Builds and synthetic monitoring in this post.

To do so, let’s add a scenarios entry to our .blackfire.yaml file containing the different URLs we would like to evaluate. A profile will be triggered for every endpoint visited and, as with every profile, all matching assertions will be evaluated. All the test results are then gathered in a build report.

scenarios: |
  #!blackfire-player

  scenario
    name 'The critical user journey'

    visit url('/')
      expect status_code() == 200

    visit url('/my-product')
      expect status_code() == 200

    visit url('/checkout')
      expect status_code() == 200

Builds could be triggered periodically at first; Blackfire provides a convenient UI for that. Later on, an integration with your CI/CD pipeline could be put in place for a deeper level of automation. With this in place, you would never ship something to production that degrades the performance of your application.
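
For instance, here is a sketch of such an automation: it assumes the scenario above is saved as a scenarios.bkf file, that a Blackfire environment named my-env exists, and that example.com again stands in for your own domain. The open-source Blackfire Player can then replay the journey from a cron job or a CI step:

blackfire-player run scenarios.bkf --endpoint=https://example.com --blackfire-env=my-env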

In this methodology, we’ve prioritized maintaining the performance of the application’s crucial features as our first step. Having established a stable baseline, the next phase involves taking a deep dive into those impactful endpoints. We can uncover opportunities for enhancement by profiling them and analyzing the collected data.

With a better understanding of the code execution, you will be able to determine better metrics to use, and even create your own custom ones. This cycle of continuous learning and improvement is central to Blackfire’s approach to performance management.
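
As a sketch of what a custom metric can look like (the metric name and the App\Newsletter::send method are hypothetical; the metrics section itself is part of the .blackfire.yaml format), this one counts the calls to a given method so it can be asserted on like any built-in metric:

metrics:
  newsletter_sends:
    label: "Newsletter sends"
    matching_calls:
      php:
        - callee: "=App\\Newsletter::send"

A test could then assert "metrics.newsletter_sends.count <= 1" on the endpoints where at most one newsletter should ever be sent.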

As we wrap up this blog post, we are keen to hear your thoughts and experiences. Perhaps you’ve been in the same boat, thrust into the unknown waters of an unfamiliar application with the mission to improve its performance. How did you navigate it? What strategies did you employ? Do you have a different take on the approach we outlined?

We invite you to share your insights and experiences on Dev.to, Discord, Reddit, or Twitter. Performance optimization is a journey, and there’s so much we can learn from each other. We can collectively enhance our understanding and develop more effective strategies by sharing our stories. 

Happy Performance Optimization!


The “Taming the beast: performance optimization of unfamiliar applications with Blackfire” blog post series:

  • part 1
  • part 2 (you are here)

Thomas di Luccio

Thomas is a Developer Relations Engineer at Platform.sh for Blackfire.io. He likes nothing more than understanding the users' needs and helping them find practical and empowering solutions. He’ll support you as a day-to-day user.