Getting started with the Blackfire test suite: part 1 of a series

By Thomas di Luccio, on Jan 19, 2023

Performance testing is a crucial part of the software development process. 

Testing helps ensure that applications can handle the expected workload and user traffic, and it helps identify any bottlenecks or issues before deployment.

Unfortunately, we tend to have (bad) reasons for not wanting to write tests. More often than not, the reason is a lack of time, and the minutes we don’t invest in testing can turn into days of work when a massive performance regression sneaks its way into production.

Welcome to Part One in a series about our performance testing suite

Luckily for our users, Blackfire has a full testing suite to prevent that kind of scenario from happening, which we’ll detail in a new series of blog posts starting today. By the end, you should have a solid understanding of how to plan, design, and execute performance tests for your own projects.

Before going into how to start a test, here’s an overview of the Blackfire testing capabilities we’re going to cover over the next few articles.

Custom assertions 

Every time a profile is triggered, the custom assertions matching the executed script are evaluated. Assertions describe expectations that should be met when evaluating the performance of a script.
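As a quick sketch, an assertion is declared in the .blackfire.yaml file at the root of the project and compares a metric against a threshold. The test name, path, and threshold below are purely illustrative; we’ll generate a real test in a moment:

tests:
    "The checkout page should not be query-heavy":
        path: "/checkout"
        assertions:
            - "metrics.sql.queries.count <= 10"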

Scenarios

Scenarios describe critical user journeys whose performance needs to be evaluated and controlled. A profile will be triggered at every step of every scenario. For all profiles, all matching assertions will be evaluated.
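As a sketch, scenarios are written in the Blackfire Player syntax inside the same .blackfire.yaml file; each visit is a step that triggers a profile. The pages visited below are purely illustrative:

scenarios: |
    #!blackfire-player

    scenario
        name 'Browsing from the homepage'

        visit url('/')
            expect status_code() == 200

        visit url('/blog')
            expect status_code() == 200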

Build reports

The results of all those evaluations are gathered in a build report, which provides an overview of the application’s performance. With that, we can know how many steps are failing at a given time and act accordingly.

Manual or automatic triggers 

Blackfire builds can be triggered manually, periodically, or automatically through webhooks and integrations with most CI/CD pipelines. Blackfire can then block the merging of a PR, or a deployment, when the new code introduces performance regressions in the tested parts of the application.

Writing your first test 

Okay! Now that we have a full picture of Blackfire’s performance test suite, let’s start from the beginning by writing our first test.

The Blackfire CLI has us covered here. First, run blackfire client:bootstrap-tests at the root of the application to create your first .blackfire.yaml file:

tests:
    "The homepage should be super fast":
        path: "/"
        assertions:
            - "main.wall_time < 20ms"

# Read more about writing scenarios at https://blackfire.io/docs/builds-cookbooks/scenarios
scenarios: |
    #!blackfire-player

    scenario
        name 'The homepage should answer with a 200 status code'

        visit url('/')
            expect status_code() == 200

The file created contains your first assertion and the first step of your first scenario. It’s one small step for a developer, a giant leap for the performance of your application.

Let’s put aside the scenarios part of the file for now. We’ll cover that in an upcoming article in this series of blog posts.

Let’s now try to understand the first test and fine-tune it.

Both path and command are regular expressions. path: "/" means we are targeting the application’s homepage, while command: foo:bar would target the execution of the foo:bar CLI command.
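Because both values are regular expressions, a single test can cover a whole family of URLs, and a CLI command can be targeted the same way. The test names, URL pattern, command, and thresholds below are purely illustrative:

tests:
    "Product pages should be super fast":
        path: "/products/.*"
        assertions:
            - "main.wall_time < 50ms"

    "The foo:bar command should stay lean":
        command: "foo:bar"
        assertions:
            - "main.peak_memory < 10mb"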

This first test has only one assertion ("main.wall_time < 20ms"). The wall time of the homepage is expected to be less than 20 ms.

And that’s it. We have our first test in place! We could also trigger a profile on the homepage to make sure the test is taken into account, and check the Assertions tab on the profile.
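For instance, a profile can be triggered from the command line with the Blackfire CLI; the URL below is a placeholder for your application’s own address:

blackfire curl https://example.com/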

That’s a great way to start writing performance tests. Yet, can we do better?

Another way to test 

In this test, we rely on the evaluation of the wall time, which is a pretty volatile metric that can vary depending on the testing environment. It may be faster if it runs locally on a new and powerful computer, or, more likely, it’ll run slower on your staging server where too many other dev tools are clogging up the works.

Of course, we want our applications to be fast. More than testing the actual response time, we could test the reason why our application is fast. Testing the cause, not the consequences, would lead to more reliable tests.

The question, then, is: what makes a script fast? What about testing the number of SQL queries? HTTP requests? Entities hydrated? How often we flush data to the database, or how often we call that very greedy function making complex computations? Let’s browse the 600+ built-in Blackfire metrics.

tests:
    "The homepage should be fast":
        path: "/"
        assertions:
            - "metrics.sql.queries.count <= 5"
            - "metrics.entities.created.count < 50"
            - "metrics.http.requests.count == 0"
            - "metrics.doctrine.orm.flush.count == 0"

This refactored first test could be the boilerplate for further testing.

In the next part of this series, we’ll explore the creation of custom metrics and ask questions like “How can we rely on our application logic to control its performance?”

In the meantime, discover our testing cookbooks and write your first performance tests. Let us know how they go.

Happy Performance Testing!


Read the full series:

  • Part 1 – your first test (you are here)
  • Part 2 – custom metrics
  • Part 3 – your first scenario
  • Part 4 – in-depth scenarios
  • Part 5 – automation and CI/CD pipelines

Thomas di Luccio

Thomas is a Developer Relations Engineer at Platform.sh for Blackfire.io. He likes nothing more than understanding the users' needs and helping them find practical and empowering solutions. He’ll support you as a day-to-day user.