Getting started with the Blackfire test suite: part 3 of a series
In the third installment of our Blackfire test suite series, you can learn how to simultaneously test all your applications’ critical user journeys.
In the first part of our test suite series, we wrote our first Performance Test and understood the need to rely on non-volatile metrics. In the second part, we discovered how custom metrics could use our application logic and provide powerful and stable measurements when testing the performance of our applications.
Those are great pillars of our testing strategy. But what now? Well, in this article, we will dive further into our exploration of Blackfire’s testing capabilities. Demonstrating how to use custom metrics in assertions and test all the critical parts of our application simultaneously. Ready to dive in?
How to: Custom metrics in assertions
First up, it’s custom metrics. Let’s consider the following custom metric:
```yaml
metrics:
  app.main_controller_render:
    label: "MainController::render"
    matching_calls:
      php:
        - callee: "=App\\Controller\\MainController::render"
```
The metric is named `app.main_controller_render`. The `app.` prefix is optional, but it is always good practice to name your metrics with prefixes, especially as your project and its coverage grow: this can help you organize your metrics.
Note: the assertion starts with `metrics.`, as it is based on a metric. This is opposed to `main.`, which is used when evaluating a dimension of the profile itself (`main.wall_time < 75ms`, for instance).
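To make the distinction concrete, here is an illustrative sketch of a test combining both kinds of assertions (the test name and the `75ms` budget are arbitrary examples, not values from this series):

```yaml
tests:
  "Homepage performance":
    path: "/"
    assertions:
      # Metric-based assertion: starts with "metrics."
      - "metrics.app.main_controller_render.count == 1"
      # Profile-dimension assertion: starts with "main."
      - "main.wall_time < 75ms"
```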
You can now write an assertion based on that metric. These assertions can assess any of eight dimensions, such as `count`, `wall_time`, or `io`. We have learned our lesson to steer clear of volatile dimensions like wall time; instead, let's rely on cardinality:
```yaml
tests:
  "The homepage should be rendered by MainController":
    path: "/"
    assertions:
      - "metrics.app.main_controller_render.count == 1"
```
Then, as always, trigger a profile and ensure your test is valid and recognized:
Blackfire, the Jedi way
The test relying on a custom metric is then taken into account. If it is a success, then congrats: you have now mastered the first two pillars of the Blackfire testing suite!
We often refer to custom metrics as the Jedi way of using Blackfire since those are by far the most efficient ones. Does this mean we are Jedi now? Reveal that, we can not.
Take a look at the documentation on Metrics and Assertions which provides extensive examples of crafting top-notch tests and covers virtually all possibilities.
If all has gone smoothly, you should now be on track to writing efficient and precise tests. But you are still evaluating the performance of individual parts of your application.
You need a larger vision, making sure all the critical parts of your applications adhere to the expectations you defined. What if you could trigger batches of profiles, so that the performance of specific sequences of requests is evaluated?
Synthetic Monitoring to the rescue
Blackfire provides Profiling-based Synthetic Monitoring, which differs from live Monitoring. But how? Blackfire Monitoring tracks the resource consumption of real traffic in real-time, while Synthetic Monitoring evaluates pre-defined scenarios. So, how does it work? Let's write our first scenario by adding a new `scenarios` section in our `.blackfire.yaml` file. Remember, you may already have your first scenario defined, as one was provided when we ran `blackfire client:bootstrap-tests` in the first part of this test series.
```yaml
# Read more about writing scenarios at https://blackfire.io/docs/builds-cookbooks/scenarios
scenarios: |
    #!blackfire-player

    scenario
        name 'The homepage should answer with a 200 status code'
        visit url('/')
            expect status_code() == 200
```
Note the `|` after `scenarios:`: it is required to start a multiline string. Please also note that the script must start with `#!blackfire-player`, as scenarios are run with Blackfire Player. We will go into more detail later in this article.
Blackfire provides a convenient online validator to help you build your tests; it explains why your `.bkf` files may be invalid. This is the YAML way. You can write as many scenarios as needed, and each of them can contain as many steps as required. A good practice is to give scenarios and steps a name. Explicit names, as well as empowering descriptions in assertions, are key to better collaboration between all the stakeholders of application-wide performance testing.
```yaml
# Read more about writing scenarios at https://blackfire.io/docs/builds-cookbooks/scenarios
scenarios: |
    #!blackfire-player

    scenario
        name 'Key pages should respond with a 200 status code'
        visit url('/')
            name 'Homepage should respond with a 200'
            expect status_code() == 200
```
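Building on that, a single scenario can chain several steps. Here is a sketch assuming your application also serves a hypothetical `/blog` page (swap in a URL that exists in your own application):

```yaml
# Sketch: one scenario visiting two pages in sequence, each step named
scenarios: |
    #!blackfire-player

    scenario
        name 'Key pages should respond with a 200 status code'
        visit url('/')
            name 'Homepage should respond with a 200'
            expect status_code() == 200
        # Hypothetical second page, for illustration only
        visit url('/blog')
            name 'Blog should respond with a 200'
            expect status_code() == 200
```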
Let’s now run our first scenario with Blackfire Player. The recommended way is to use the `blackfire/player` Docker image.
```shell
docker run --rm -it -e BLACKFIRE_CLIENT_ID -e BLACKFIRE_CLIENT_TOKEN -v "`pwd`:/app" blackfire/player run .blackfire.yaml
```
Your `BLACKFIRE_CLIENT_ID` and `BLACKFIRE_CLIENT_TOKEN` can be found on your Personal Account Settings page.
As the command is a bit lengthy, you can always add an alias in your `.zshrc`… file to simplify your developer experience.
```shell
alias blackfire-player="docker run --rm -it -e BLACKFIRE_CLIENT_ID -e BLACKFIRE_CLIENT_TOKEN -v \"`pwd`:/app\" blackfire/player"
```
We can now simply use a much more convenient syntax: `blackfire-player run …`. Running `blackfire-player run --help` provides a list of useful options. Among those, `--endpoint` lets you define or override which endpoint to use when running your tests, and the `--blackfire-env` option lets you decide which Blackfire environment to use and send the reports to. Blackfire Player even embeds a validator:
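For instance, assuming the alias defined above, the two options can be combined like so (the staging URL and environment name below are placeholders, not values from this series):

```shell
# Run the scenarios against a staging endpoint, reporting to a specific environment
blackfire-player run .blackfire.yaml --endpoint=https://staging.example.com --blackfire-env=my-staging-env
```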
```shell
blackfire-player validate scenario.bkf
```
Blackfire Player CLI
Now it’s time for the final step in part 3 of our test suite series: let’s run our first build. The Blackfire Player CLI provides an output like so:
If it’s a success like the example above, then you now have a solid testing foundation!
In the next part of our test suite, we will explore all the possibilities offered by Blackfire Scenarios. But if you just can’t wait until then, you can head over to our documentation for a complete guide so you can get started right away. In the meantime, find us on Reddit and Twitter, and make sure to let us know how your exploration of performance testing is going.
Happy Performance Testing!
Read the full series: