As we dive into the festive season, keeping an eye on the performance of websites for businesses like retailers, publishers, and financial companies becomes super important. Initially, our teams work hard to make our sites speedy, but the real challenge comes in maintaining that speed once our application is out there in the real world.
I often chat with developers and testers who are wrestling with questions like how to keep track of their site’s speed over time and understanding the impact of new code releases on front-end performance.
So, I’ve gathered some top-notch advice from performance experts like Andy Davies, co-author of “Using WebPageTest: Web Performance Testing for Novices and Power Users.”
Let’s dive in.
Transport yourself back to 1968, when Robert Miller was exploring how we react to delays. He found that if you get a response within a tenth of a second of taking an action, your brain registers it as instantaneous. For instance, press a button, and if a light turns on within a hundred milliseconds, it feels immediate.
As the delay stretches to about a third of a second, you start noticing the lag. If the response comes within a second, you can smoothly continue without feeling disrupted.
Check out this graph showing response times and our perception of delays:
However, the longer the delay, the higher the likelihood of people giving up and bouncing. In 1968, that limit was found to be around ten seconds. Decades later, Microsoft ran similar research and found a similar limit of around seven or eight seconds. It’s a fascinating illustration of how our perception of time shapes our online experiences.
Well, let’s talk about money!
If your website keeps people waiting or delivers a slow experience, it can seriously hurt your business. Real-world data shows that the experience users have on a website directly influences their behavior. Take a look at this chart, taken from a product that tracks real user experiences and how they affect business outcomes:
As you can see, folks with fast website experiences tend to view more pages. For retailers, that means more product views and potentially more sales. If you’re in the publishing world relying on ads, it means more story reads. The speed of your site can make or break its success. Keep reading to understand how to keep your website in the fast lane!
In the chart above, pay attention to the orange line—it signifies the bounce rate.
What’s the bounce rate? It’s a measure of how many people land on your site, check out one page, and then bounce away. A high bounce rate indicates that visitors are leaving without taking any action. Not ideal.
Notice that the bounce rate is at its lowest around the three-second mark but steadily rises after that. The longer you keep people waiting, the more likely they are to simply view one page and exit.
Now, let’s shift our focus to the blue line, which represents the conversion rate.
The conversion rate reflects how many people make purchases or spend money on your site.
Observing the chart, you’ll notice very few conversions happen in the first three seconds—probably because not many pages load completely in that time. However, as we extend the waiting period, the likelihood of conversion decreases.
In the critical four-to-seven-second range, the conversion rate drops from roughly five percent to four percent. That one-point fall is a 20 percent relative drop, which means a significant loss in revenue. 🙁
Take a look at the chart above, showcasing an actual scenario where Andy provided assistance to a retailer in enhancing their site’s speed.
The work initially focused on Android users. A few targeted tweaks produced a roughly four-second improvement in the median experience for those Android visitors.
And here’s the exciting part: with just those changes, the revenue generated from Android users rose by an impressive 26%. It goes to show how even seemingly small changes in site speed can have a substantial impact on user experience and, ultimately, the bottom line.
It’s common for teams to invest a significant amount of resources in optimizing backend performance—fine-tuning server farms, databases, and testing capacity to ensure a speedy delivery of the initial HTML payload to website visitors.
Let’s delve into some examples from various UK sites with a graph depicting front-end load times:
In this graph, the pink band represents the time the backend took to generate the initial response. The blue covers everything else the browser has to fetch and process: images, scripts, stylesheets, essentially everything needed to complete the page.
While it’s crucial not to overlook backend performance, it’s equally important to recognize that the front end can do nothing until the backend delivers that first response; from that point on, the majority of the work that shapes the user experience happens in the browser.
To effectively measure front-end performance, it’s essential to adopt a mental model that aligns the metrics gathered with the actual business experience. The model illustrated below is one commonly used:
Understanding this model helps in comprehending how the metrics collected translate into the real user experience and aids in making informed decisions to enhance overall website performance.
The image above illustrates the visual cues that indicate to a visitor that everything is functioning correctly.
In this instance, we observe the change in the website’s address in the browser bar. But the question arises: when does the page truly become useful? Is it when the hero image appears in the center?
This point varies for different websites.
For a news site, usefulness might begin when someone can start reading the news. For an online retailer, it could be when the product image materializes, confirming to the visitor that they’re on the correct page.
Another factor to consider is when the page becomes usable. In the given example, usability might be delayed as the menu button is not immediately accessible.
When addressing front-end performance, the focus is on understanding how long it takes for a page to load before it becomes genuinely useful and usable. What transpires in the initial phase is crucial.
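Incidentally, the browser itself exposes some of these visual milestones. As a quick, minimal sketch using the standard PerformanceObserver API (not the lab tooling this post focuses on), you can log when content first paints and when the largest piece of content appears, straight from Chrome’s console:

```js
// Log paint milestones as the page loads (run in the browser console or a page script).
// 'first-paint' and 'first-contentful-paint' come from the paint timeline.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.name}: ${Math.round(entry.startTime)} ms`);
  }
}).observe({ type: 'paint', buffered: true });

// 'largest-contentful-paint' approximates when the main content became visible;
// the latest entry in the list is the current LCP candidate.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lcp = entries[entries.length - 1];
  console.log(`largest-contentful-paint: ${Math.round(lcp.startTime)} ms`);
}).observe({ type: 'largest-contentful-paint', buffered: true });
```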
There are two broad ways of measuring how front-end pages perform: lab (synthetic) testing, where you load pages with tools in a controlled environment, and field testing (real user monitoring), where you collect timings from actual visitors.
Both approaches have their significance, but this post will focus on the lab approach. This method closely aligns with how we initially integrate performance considerations into our workflow.
For a more comprehensive understanding, check out this list of free Application Performance Monitoring (APM) tools to enhance your performance measurement toolkit.
When it comes to front-end performance, the crucial starting point is to embed performance considerations into your Software Development Life Cycle (SDLC) from the get-go. Early integration is key!
Be mindful of performance at key stages of the SDLC, from design and development through testing and release. By weaving performance considerations into these stages, you lay the foundation for delivering an optimized and responsive front-end experience to your users.
To integrate performance engineering into your development workflow, follow these steps:
Begin with Google Lighthouse, a powerful performance-auditing tool that ships inside Chrome DevTools; if you have Chrome installed, you can start experimenting with it right away.
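If you would rather drive Lighthouse from a script than from DevTools, it can also be run programmatically via the lighthouse npm package. A minimal sketch, assuming Node.js with the lighthouse and chrome-launcher packages installed (recent Lighthouse releases are ESM-only, hence the .mjs file):

```js
// run-lighthouse.mjs - minimal programmatic Lighthouse run (sketch)
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';
import { writeFileSync } from 'node:fs';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

const result = await lighthouse('https://www.amazon.co.uk', {
  port: chrome.port,               // talk to the Chrome instance we just launched
  onlyCategories: ['performance'], // run only the performance audits
  output: 'json',
});

// The overall score is reported as 0-1; multiply by 100 for the familiar number.
console.log('Performance score:', Math.round(result.lhr.categories.performance.score * 100));

// Keep the full JSON report for later analysis.
writeFileSync('lighthouse-report.json', result.report);

await chrome.kill();
```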
Then fold Lighthouse into your existing automation tests. Cypress.io tests, for example, can run Lighthouse audits through the cypress-audit plugin, as shown in the sketch below.
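Here is a rough sketch of that integration, assuming Cypress 10+ and the @cypress-audit/lighthouse plugin; the budget value is purely illustrative:

```js
// cypress.config.js - register the lighthouse task from @cypress-audit/lighthouse
const { defineConfig } = require('cypress');
const { lighthouse, prepareAudit } = require('@cypress-audit/lighthouse');

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on) {
      on('before:browser:launch', (browser, launchOptions) => {
        prepareAudit(launchOptions);
      });
      on('task', { lighthouse: lighthouse() });
    },
  },
});

// cypress/support/e2e.js - expose the cy.lighthouse() command
require('@cypress-audit/lighthouse/commands');

// cypress/e2e/performance.cy.js - fail the test if the score drops below a budget
it('meets the performance budget', () => {
  cy.visit('https://www.amazon.co.uk');
  cy.lighthouse({ performance: 20 }); // minimum acceptable Lighthouse performance score
});
```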
By following these steps, you not only gain insight into your web application’s performance but also establish a systematic process for continuous performance evaluation within your development workflow. Tools like Google Lighthouse and Cypress.io make that process efficient and, frankly, pretty cool.
Let’s take the UK version of Amazon as an example site to demonstrate Lighthouse’s features.
After Lighthouse completes its audit, it generates a report resembling the one below:
In this example, the performance score is 20 out of 100, which may not be excellent, but having a metric that can be tracked over time is valuable. It serves as a useful point of discussion with your company stakeholders regarding performance.
You can use this number as a benchmark to monitor improvements or declines over time. It can also be a useful metric for comparing your performance against that of your competitors.
Once you have your performance number, you can begin focusing on strategies to enhance it and continuously work towards improving your website’s overall performance.
Beyond the primary performance score, Google Lighthouse provides additional metrics that offer insights into the visitor’s experience during page loading.
Here are some key metrics:
First Contentful Paint: Indicates when content begins to appear on the screen. This metric helps assess how quickly the initial content is rendered.
Speed Index: Measures how quickly the visible parts of the page are populated, from a blank screen to visually complete and stable. A lower Speed Index means a better user experience.
Time to Interactive: Aims to measure when a visitor can actively interact with the application, such as clicking, scrolling, or entering text into a text box.
These metrics provide a nuanced understanding of the visitor’s experience on your site.
While the total “Performance” score serves as a high-level metric for tracking relative performance over time, these lower-level metrics offer a deeper analysis. They help you identify areas for improvement or potential issues.
Additionally, Lighthouse generates suggestions for enhancements, and in the diagnostics section, it provides insights into why a particular score was given.
In summary, Google Lighthouse provides both a top-level performance metric for tracking and sub-metrics that allow you to delve into specific aspects, aiding in the continuous improvement of your page’s speed and overall performance.
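If you save a Lighthouse run as JSON (as in the earlier sketch), those sub-metrics can be read straight out of the report’s audits section. A small sketch, assuming the report was written to lighthouse-report.json:

```js
// read-metrics.mjs - pull key timings out of a saved Lighthouse JSON report (sketch)
import { readFileSync } from 'node:fs';

const lhr = JSON.parse(readFileSync('lighthouse-report.json', 'utf8'));

// numericValue is reported in milliseconds for these audits
const metrics = {
  firstContentfulPaint: lhr.audits['first-contentful-paint'].numericValue,
  speedIndex: lhr.audits['speed-index'].numericValue,
  timeToInteractive: lhr.audits['interactive'].numericValue,
};

console.table(metrics);
```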
There are several cost-effective commercial tools available that can assist you in tracking your Lighthouse scores over time. Let’s explore a few of them:
DebugBear runs Lighthouse regularly on the web pages you provide and generates dashboards showcasing key front-end performance metrics. These dashboards make it easy to observe how your performance metrics are evolving over time.
Treo is another tool that creates Lighthouse dashboards and tracks performance changes over time. It provides snapshots of scores, timings from the latest test, and the main score, allowing you to visualize changes over time.
Often referred to as the Swiss army knife of performance testing tools, WebPageTest uses real browsers, not just Chrome. It supports testing in various browsers like Firefox, Edge, and Chrome, as well as on real mobile devices. WebPageTest allows testing from multiple locations globally.
After completing a test, it generates a report at a slightly lower level of detail than Lighthouse, but with a rich and wide set of metrics. The filmstrip and video views in the report are particularly effective for illustrating key performance areas and building empathy with the visitor’s experience.
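WebPageTest also exposes a public API, with an official Node wrapper published as the webpagetest npm package. A hedged sketch, assuming you have an API key; the location label is illustrative and the result fields follow WebPageTest’s JSON result format:

```js
// run-wpt.js - kick off a WebPageTest run and print median first-view timings (sketch)
const WebPageTest = require('webpagetest');

const wpt = new WebPageTest('www.webpagetest.org', process.env.WPT_API_KEY);

wpt.runTest('https://www.amazon.co.uk', {
  location: 'London_EC2:Chrome', // illustrative location/browser label
  runs: 3,                       // run the test three times and use the median
  pollResults: 5,                // poll every 5 seconds until the test completes
}, (err, result) => {
  if (err) throw err;
  const firstView = result.data.median.firstView;
  console.log('Load time (ms):', firstView.loadTime);
  console.log('Speed Index:', firstView.SpeedIndex);
});
```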
Just like commercial variants of Lighthouse, there are also commercial equivalents of WebPageTest.
SpeedCurve and Calibre are two such products that use the same engine as WebPageTest. They allow you to track performance over time, providing insights into your site’s performance in both staging and live environments.
Both SpeedCurve and Calibre enable hourly or daily tracking of your site’s performance, aiding in continuous improvement.
By utilizing these tools, you can not only make snapshots of your performance but also integrate them into your SDLC to effectively track and understand how your performance evolves over time, enabling proactive measures for improvement.
The power of APIs in your CI/CD pipeline is significant, especially when considering tools like DebugBear, Treo, SpeedCurve, and Calibre that offer API capabilities. The ability to initiate tests on-demand and retrieve results seamlessly enables integration into your build processes and development lifecycle.
By incorporating these tools with APIs into your CI/CD pipeline, you can track the impact of changes on performance metrics such as Lighthouse scores, raw timings in WebPageTest, SpeedCurve, and Calibre. SpeedCurve and Calibre, in particular, have the added advantage of tracking performance over time.
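Each vendor’s API differs, so rather than reproduce any particular one, here is a generic sketch of the idea: run Lighthouse as a CI step and fail the build when the score drops below an agreed budget (the budget value and report path are assumptions):

```js
// ci-performance-gate.mjs - fail the CI job on a performance regression (sketch)
import { readFileSync } from 'node:fs';

const BUDGET = 75; // agreed minimum Lighthouse performance score (assumption)

const lhr = JSON.parse(readFileSync('lighthouse-report.json', 'utf8'));
const score = Math.round(lhr.categories.performance.score * 100);

console.log(`Lighthouse performance score: ${score} (budget: ${BUDGET})`);

if (score < BUDGET) {
  console.error('Performance budget not met, failing the build.');
  process.exit(1); // a non-zero exit code marks the CI step as failed
}
```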
While open-source tools are available, Andy emphasizes the importance of vendor-paid solutions for accessing advanced functionality and insights. Integrating front-end performance testing into the continuous integration cycle is a valuable practice to ensure consistent attention to performance.
For those starting with front-end performance engineering, Andy suggests beginning with simple tools like PageSpeed Insights, Lighthouse, or commercial services. Establish baseline metrics, monitor changes, and gradually progress to more advanced speed improvements over time.
The impending use of performance as a ranking factor in Google’s search algorithm underscores the growing significance of front-end performance. Incorporating performance considerations into your workflows now positions your business for better user experiences and potential SEO advantages in the future.
Front-end performance testing involves evaluating the speed, responsiveness, and overall user experience of web assets such as websites and web applications.
Front-end performance directly impacts user experience, as slow-loading websites or apps can lead to frustration and abandonment, while fast-loading sites tend to have higher engagement and conversion rates.
Common front-end performance issues include render-blocking resources, unoptimized images, excessive JavaScript, and poor server configuration.
Front-end performance testing should be conducted regularly, especially before major releases or updates, to ensure optimal performance and user satisfaction.
Integrating performance testing into CI/CD pipelines allows for automated and consistent testing throughout the development lifecycle, enabling early detection of performance regressions and faster resolution of issues.