Getting started with the Blackfire test suite: part 2 of a series
In the second part of our Blackfire test suite series, learn how to create custom metrics that rely on your application logic.
In the first article of our test suite series, we saw that performance tests can assess what makes an application or a script faster. Often, the expected speed or resource consumption is too volatile to be tested reliably. We should then focus on the causes, not the consequences.
Blackfire provides more than 600 metrics. The metrics relevant to a script are listed in its profile. Spending time understanding the internal mechanisms of our applications helps us identify the most relevant metrics for us.
Yet, soon enough, we may identify specific parts of our applications as particularly resource intensive. Our performance tests should then also target those particular functions.
This is what Blackfire custom metrics are made for. Let’s explore them in depth and write our first custom metric, one that tracks the performance of the App\Controller\MainController::render method.
Blackfire’s documentation provides all the information needed to get started. We add a new metrics entry to the .blackfire.yaml file and define our first custom metric:
metrics:
    main_controller_render:
        label: "MainController::render"
        matching_calls:
            php:
                - callee: "=App\\Controller\\MainController::render"
This metric is accounted for every time the callee (i.e., the App\Controller\MainController::render method) is called. There can be more than one callee, as the sketch below shows. Let’s trigger a profile to ensure this metric is considered: all the matching user-defined metrics are listed alongside the built-in ones in the timeline view of a profile.
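For instance, a single metric may match several callees. This is only a sketch, assuming a second rendering method (the renderFooBar method also appears later in this article):

metrics:
    main_controller_render:
        label: "MainController::render"
        matching_calls:
            php:
                - callee: "=App\\Controller\\MainController::render"
                - callee: "=App\\Controller\\MainController::renderFooBar"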
Controlling the metric contribution
As for all metrics, the count and cost dimensions are available. We can control the contribution type and collect only one of them, by adding contrib: count-only for instance. The possible values are count+cost (the default), count-only, and cost-only.
metrics:
    main_controller_render:
        label: "MainController::render"
        matching_calls:
            php:
                - callee: "=App\\Controller\\MainController::render"
                  contrib: count-only
Timeline metrics
The timeline view displays spans that represent function calls from the beginning of the execution of the script to its end (left to right). From top to bottom, we can see the caller-callee relationship.
Some of those spans have a background of a different color: they contribute to a specific metric. Not all metrics are highlighted on the timeline, as it would become unintelligible. You can highlight a custom metric by adding timeline: true to its definition:
metrics:
    main_controller_render:
        label: "MainController::render"
        timeline: true
        matching_calls:
            php:
                - callee: "=App\\Controller\\MainController::render"
Adding markers
Markers can also be added to the timeline view. Alongside the programmatic way of adding them, the marker parameter provides visual feedback every time a specific metric is accounted for.
Let’s see how often we are making SQL queries. Yes, this might hurt a bit, since many of us are trying to reduce their number.
sql_query:
    label: "Querying the database"
    marker: "This is SQL query"
    matching_calls:
        php:
            - callee: "=Doctrine\\DBAL\\Connection::executeQuery"
Indeed, we have visual confirmation that we might be making too many SQL queries.
Specifying a caller
What if we want to be more precise than that and target a specific sequence of calls? So far, we are targeting every call to the App\Controller\MainController::render method: we assess the resource consumption of rendering the view of every endpoint defined in the MainController. Let’s restrict that metric to the homepage by defining a caller:
metrics:
    main_controller_render:
        label: "MainController::render"
        matching_calls:
            php:
                - caller: "=App\\Controller\\MainController::homepage"
                  callee: "=App\\Controller\\MainController::render"
Multiple caller/callee groups
Within a group, caller and callee selectors are always combined with a boolean AND. You may chain several blocks of call selectors for a given metric; in that case, the blocks are combined with a boolean OR:
metrics:
    main_controller_render:
        label: "MainController::render"
        matching_calls:
            php:
                - caller: "=App\\Controller\\MainController::homepage"
                  callee: "=App\\Controller\\MainController::render"
                - callee: "=App\\Controller\\MainController::renderFooBar"
We’ve now seen how to restrict the definition of our matching calls. What if we would like to be fuzzier and allow more flexibility? For instance, we could target the implementations of a certain interface, the classes extending a given class, or function names matching a regex pattern.
The Metric selectors’ documentation details these extended possibilities.
Instead of considering only the MainController, let’s select the calls to the render method of any class extending Symfony\Bundle\FrameworkBundle\Controller\AbstractController:
callee: "|Symfony\\Bundle\\FrameworkBundle\\Controller\\AbstractController::render"
Note the | that replaces the = at the start of the selector’s definition. Similarly, we could have targeted the classes implementing an interface:
callee: "|Symfony\\Contracts\\Service\\ServiceSubscriberInterface::render"
We can also target calls matching a specific pattern:
caller: /^App\\Controller\\MainController::(homepage|show)$/
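Putting these selectors together, here is a sketch of a complete metric combining a regex caller with a hierarchy callee; the metric name main_pages_render is ours, for illustration only:

metrics:
    main_pages_render:
        label: "Rendering the main pages"
        matching_calls:
            php:
                - caller: /^App\\Controller\\MainController::(homepage|show)$/
                  callee: "|Symfony\\Bundle\\FrameworkBundle\\Controller\\AbstractController::render"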
You get it: this powerful and flexible feature allows you to match a very wide range of use cases. It’s getting hard to find reasons not to write performance tests, isn’t it?
Organizing metrics with layers
Writing the first one is the hardest part of the custom metrics journey. Soon enough, you will have many metrics and will need to keep them organized. Layers can be defined to group related metrics together.
metrics:
    # Main layer, will gather dimensions from all attached metrics.
    markdown:
        label: "Markdown"
        # Self referencing metrics are layers.
        layer: markdown
        timeline: true

    # Sub-layer
    markdown.parse:
        label: "Markdown Parser"
        layer: markdown

    markdown.parse.parsedown:
        label: "Markdown Parser (erusev/parsedown)"
        # The layer this metric contributes to.
        layer: markdown.parse
        matching_calls:
            php:
                - callee: "=Parsedown::text"

    php_markdown.parse.parsedown:
        label: "Markdown Parser (dflydev/markdown)"
        # The layer this metric contributes to.
        layer: markdown.parse
        matching_calls:
            php:
                - callee: "=dflydev\\markdown\\MarkdownParser::transform"
                  caller: "!^dflydev\\\\markdown\\\\MarkdownParser::transformMarkdown!"
                - callee: "=dflydev\\markdown\\MarkdownParser::transformMarkdown"
And that’s all for now!
In the next part of this series, we will see how to combine assertions and custom metrics to control the performance of the critical user journeys of your applications.
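As a quick preview, assertions live in the tests section of the .blackfire.yaml file and can reference the dimensions of custom metrics. Here is a minimal sketch using the sql_query metric defined above; the test name, the path pattern, and the threshold of 10 queries are illustrative:

tests:
    "Keep the number of SQL queries under control":
        # The path is matched against the request URL.
        path: "/.*"
        assertions:
            - "metrics.sql_query.count < 10"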
Meanwhile, you can dig further into custom metrics by discovering the power of argument capturing (see the documentation for PHP and for Python), and by rediscovering our previous series on metrics.
Happy Performance Testing!