Getting started with the Blackfire test suite: part 4 of a series

By Thomas di Luccio, on Mar 22, 2023

It’s time for the 4th installment of our Blackfire test suite series, where we get stuck into application crawling. In part 3, we dived into performance testing with Blackfire Player and scenarios, and got our first scenario running. In part 4, it’s time to explore those scenarios a little further, so fasten your seatbelts – we’re going to crawl and profile!

Crawling HTTP applications

First up, let’s define what a scenario means in this context. A scenario is a sequence of steps, each step being an HTTP call that shares the same HTTP session and cookies. That state is cleared and refreshed at the end of every scenario. Now, let’s GET into the details of crawling:

A step can be a GET request on an endpoint, such as:

visit url('/')
    expect status_code() == 200
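
Put together, a minimal scenario chaining a couple of steps could look like the sketch below; the pages visited are hypothetical, and both steps share the same session and cookies:

scenario
    name "Basic crawl"

    visit url('/')
        expect status_code() == 200

    # this second step reuses the session and cookies from the first one
    click link('Blog')
        expect status_code() == 200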

Any HTTP method can be used:

scenario
    visit url('/')
        method 'POST'
        param foo "bar"
        json true

Or an interaction with the UI:

click link('Read more')
    expect status_code() == 200

Or a form submission:

submit button("Submit")
    	param title 'Happy Scraping'
    	param content 'Scraping with Blackfire Player is so easy!'

The parameters can be hardcoded, as in the previous example, or faked:

submit button("Submit")
    	param title fake('sentence', 5)
    	param content join(fake('paragraphs', 3), "\n\n")

Or provided as variables:

submit button("Sign in")
    	name "Authenticate"
    	param email user_login
    	param password user_password

Those variables can be defined in the Blackfire Builds dashboard or passed on the command line:

blackfire-player run scenario.bkf --variable user_login=foo@bar.com --variable user_password=****************

A wide range of selectors can be used as well. In this example, we access the third link in a list:

# Click on the 3rd link from the list.
click css('.js-sightings-list > tr:nth-child(3) a')
    name "First sighting page"
    expect status_code() == 200

Iterations and loops are also possible:

scenario
    name "HTTP Cache"
    set paths ["/", "/blog/"]

    with path in paths
        visit url(path)
            name "Checking performance on path: " ~ path
            expect status_code() == 200
            # performance checks


scenario
    name "Checks on key pages"

    with name, data in \
        { \
            admin: { slug: "/admin/", expectedStatusCode: 401 }, \
            products: { slug: "/products/", expectedStatusCode: 200 }, \
            about: { slug: "/about/", expectedStatusCode: 200 } \
        }

        visit url(data["slug"])
            name "Checking performance on path: " ~ name
            expect status_code() == data["expectedStatusCode"]




scenario
    name "While loops"

    visit url('/products/')
        set pageCount css(".max_results_count").first().text()

    set page 1
    while page < pageCount
        visit url('/products/?page=' ~ page)
            set page page + 1
            expect status_code() == 200

With all that being said, it’s fair to say there are a lot of possibilities! If you want to discover all the functionality available to you with Blackfire crawling, check out the documentation.

So far, we have defined everything in a single shared .blackfire.yaml file. That file may become too big at some point. Another good practice is to split the scenarios into multiple files and/or reuse sequences of steps that are identical across scenarios. But how do you do that?

First, groups can be defined to allow those steps to be reused and easily included in some scenarios:

group login
    visit url('/login')
        expect status_code() == 200

    submit button('Login')
        param user 'admin'
        param password '**************'

The login group can now be included in a scenario. When the scenario is evaluated, the steps of the login group are executed first, followed by the scenario’s own steps:

scenario
    name "Scenario Name"

    include login

    visit url('/admin')
        expect status_code() == 200

Groups and scenarios can also be stored in external .bkf files, then included in other files with the load statement:

# load and execute all scenarios from files in this directory
load "*.bkf"

# load and execute all scenarios from files in all sub-directories
load "**/*.bkf"

Profiler management

Finally, one last point for part 4 – Blackfire Player triggers a profile for each and every step defined in all the scenarios. You can disable the profiler in a step or a complete scenario by setting blackfire false.

scenario
    name "Scenario with Blackfire"
    # Use the environment name (or UUID) you're targeting or false to disable
    blackfire true
    # ...

scenario
    name "Scenario without Blackfire"
    # You can disable Blackfire support on the scenario, or only on some steps
    blackfire false
    # ...

This is particularly useful when reusing the same group multiple times. Imagine an e-commerce application for which we want to assess how well it handles large, fully loaded carts. A dedicated group may describe the sequence of actions that builds up such a cart. Once included in other scenarios, various actions can then be performed on that cart, and we don’t want the same profiles to be evaluated over and over.
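
As a sketch, such a reused group could disable profiling on its own steps so that only the scenario-specific steps get profiled. The group name and endpoints below are hypothetical:

group build_large_cart
    # no profiles for these setup steps
    visit url('/products/1/add-to-cart')
        method 'POST'
        blackfire false

    visit url('/products/2/add-to-cart')
        method 'POST'
        blackfire false

scenario
    name "Checkout with a large cart"

    include build_large_cart

    # only this step triggers a profile
    visit url('/checkout')
        name "Checkout page"
        expect status_code() == 200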

We can also define additional expectations on the different steps. Blackfire Player is a web crawler, which means it can also be used to evaluate the response it receives: its current URL, status code, headers, body, and even HTML nodes using CSS selectors. Check out the documentation to discover all the possible expectations.
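
For instance, a single step could combine several of these checks. The selectors and patterns below are made up for illustration, and the exact set of functions available is covered in the documentation:

visit url('/blog/')
    name "Blog index"
    expect status_code() == 200
    expect header('content-type') matches '/html/'
    expect css('.post').count() > 0
    expect body() matches '/Happy Performance Testing/'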

Since the first part of this series on performance testing, we have discovered how to write efficient assertions, define custom metrics that rely on our application’s logic, and, finally, how to check the performance of critical user journeys all at once. But what’s next?

In the next episode, we will explore the automation of performance testing with Periodic Builds and integration with CI/CD pipelines. In the meantime, find us on Reddit and Twitter, and let us know how your exploration of performance testing went.

Happy Performance Testing!


Read the full series:

  • Part 1 – your first test
  • Part 2 – custom metrics
  • Part 3 – your first scenario
  • Part 4 – in-depth scenarios (you are here)
  • Part 5 – automation and CI/CD pipelines

Thomas di Luccio

Thomas is a Developer Relations Engineer at Platform.sh for Blackfire.io. He likes nothing more than understanding the users' needs and helping them find practical and empowering solutions. He’ll support you as a day-to-day user.