Salesforce has made steady progress on Lightning Experience performance. We started a dedicated program to improve Experience Page Time (EPT) and shared our plans with you. By the program’s first anniversary, we achieved a 12% boost, and at Dreamforce 2024, we announced that Lightning is now 25% faster. But our work doesn’t stop there. With the Spring ’25 release rolled out and Summer ’25 already deep in development, Salesforce is committed to delivering even more enhancements to your Lightning Experience performance.
Salesforce secures Lightning performance gains by preventing and managing performance regressions. We operate in a large and distributed development environment, where multiple teams work simultaneously on different aspects of the Lightning platform. With so many moving parts, some changes inevitably impact performance, sometimes unintentionally, and this can lead to regressions. Preventing setbacks is a continuous effort — one that requires proactive strategies, rigorous monitoring, and cross-team collaboration.
We’re continually refining our tools and practices to ensure that Lightning Experience continues to perform. This includes using artificial intelligence (AI) and its new capabilities to safeguard our performance gains. In this blog post, we’ll pull back the curtain on these efforts and give you a glimpse into some of the recent innovations.
Managing regressions to protect performance gains
Let’s dive into a recent example that showcases how important it is to check for regressions in order to maintain performance gains. Thanks to our recent advancements in this area, we were able to detect this regression swiftly and recover from its impact.
In this example, we’ll focus on the Experience Page Time (EPT) of subsequent page navigations. As shown in the chart below, we exceeded our goal of a 20% improvement for subsequent page navigations and are currently tracking at 27%.
Here, a performance regression threatened to erase most of our hard-earned improvements for subsequent page loads. If left unchecked, this single issue could significantly degrade user experience and wipe out months of effort.
The root cause of the regression seemed minor and innocuous — just another new request added to the page navigation that on its own wasn’t performance-heavy. But, when triggered repeatedly in certain configurations, its impact was amplified. This led to a significant degradation in performance and required a fix to recover.
Preventing regressions is just as critical as making new improvements. Performance gains don’t follow a perfect, straight line. They require constant vigilance. With every step forward, we must ensure that we’re not inadvertently taking steps backward.
Innovating across the performance regressions lifecycle
We’ve demonstrated how a single code change can cause a large performance regression and the effort it can take to recover. Fighting regressions isn’t a one-and-done effort. It requires continuous monitoring, rapid detection, and a robust set of tools and processes to address issues big and small. Instead of reacting to regressions as they arise, we take a structured and proactive approach to managing them.
We define the lifecycle of a performance regression through five key steps — Prevent, Monitor, Detect, Collaborate, and Respond — within an iterative feedback loop.
Step 1: Prevent
Ideally, a regression should be caught during development, before it reaches production or impacts customers. To make this possible, performance metrics must be treated as non-functional requirements in every product development effort. Testing performance in development settings helps detect potential regressions early, while also identifying opportunities for improvement. This ensures that development meets both functional and performance quality standards.
For years, we have been measuring the performance of our development builds to catch regressions. For that purpose, we test code merging from multiple development teams in our lab environment. Recently, we’ve made the following changes to significantly improve our detection in the lab, allowing us to catch regressions earlier and more efficiently:
- Increased the frequency of performance measurements: This allows us to identify regressions sooner
- Refined our statistical analysis techniques: This improves confidence in detecting regressions while reducing false positives that waste developer time (see the sketch after this list)
- Enhanced our ability to triage root causes more quickly: This enables teams to resolve regressions more efficiently
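To make the statistical-analysis point concrete, here’s a minimal sketch of one way such a check can work: a one-sided Welch’s t-test combined with a practical-significance floor, so tiny noisy shifts don’t raise alerts. The thresholds, sample timings, and function shape are illustrative assumptions, not Salesforce’s actual pipeline.

```python
# Minimal sketch: compare EPT samples from a baseline build and a candidate
# build. A regression is flagged only when the slowdown is both practically
# meaningful (min_delta_ms) and statistically significant (Welch's t-test),
# which keeps false positives down. All values here are illustrative.
from scipy import stats

def is_regression(baseline_ms, candidate_ms, alpha=0.01, min_delta_ms=50):
    delta = (sum(candidate_ms) / len(candidate_ms)
             - sum(baseline_ms) / len(baseline_ms))
    if delta < min_delta_ms:
        return False  # too small to matter, even if statistically detectable
    # One-sided test: is the candidate build's mean EPT greater than baseline?
    result = stats.ttest_ind(candidate_ms, baseline_ms,
                             equal_var=False, alternative="greater")
    return result.pvalue < alpha

baseline = [1200, 1180, 1230, 1210, 1195, 1220, 1205, 1190]
candidate = [1310, 1290, 1335, 1305, 1320, 1298, 1312, 1301]
print(is_regression(baseline, candidate))  # True: ~100 ms, highly significant
```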
While many regressions are now caught and fixed before reaching production, some challenges still remain. For example, it can be challenging to ensure that lab conditions accurately reflect the wide range of real-world customer configurations.
We also learn from past regressions and use them as a feedback loop to identify patterns and strengthen developer guardrails where possible. For example, our analysis showed that certain factors, such as bootstrap size (the size of the payload that initializes the Lightning framework during first-page navigation), are strong indicators of potential performance issues. By setting stricter guardrails around bootstrap size, we’ve successfully eliminated similar regressions over the last several releases. This was a significant step forward in preventing performance setbacks.
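As a loose illustration of what such a guardrail can look like, here’s a hypothetical CI-style check that fails a build when the measured bootstrap payload exceeds an agreed budget. The budget, tolerance, and script shape are assumptions for illustration; Salesforce’s actual guardrails are internal.

```python
# Hypothetical build guardrail: fail when the bootstrap payload grows past
# an agreed budget, with a small tolerance for measurement noise.
import sys

BOOTSTRAP_BUDGET_KB = 900  # illustrative budget, not a real Salesforce number
TOLERANCE_PCT = 2.0        # allowance for run-to-run measurement noise

def check_bootstrap_size(measured_kb: float) -> bool:
    limit = BOOTSTRAP_BUDGET_KB * (1 + TOLERANCE_PCT / 100)
    if measured_kb > limit:
        print(f"FAIL: bootstrap payload {measured_kb:.0f} KB exceeds "
              f"{BOOTSTRAP_BUDGET_KB} KB budget (+{TOLERANCE_PCT}% tolerance)")
        return False
    print(f"OK: bootstrap payload {measured_kb:.0f} KB is within budget")
    return True

if __name__ == "__main__":
    measured = float(sys.argv[1]) if len(sys.argv) > 1 else 905.0
    sys.exit(0 if check_bootstrap_size(measured) else 1)
```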
Step 2 & Step 3: Monitor and Detect
From development to production, we recognize that some features carry more risk than others. For example, a feature may promise a performance gain but risk affecting functionality, or it may carry the risk of introducing a performance regression. To mitigate this, we implement gradual rollouts, effectively A/B testing new features for both functionality and performance.
This approach minimizes the likelihood of broad performance regressions in production and allows us to detect and address issues during rollout. We use predictive analytics to make data-driven decisions and ensure that performance-enhancing changes roll out safely while identifying regressions before they impact users.
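To show the mechanics behind a gradual rollout, the sketch below uses a common technique: deterministic hash-based bucketing, so a given org always lands in the same treatment or control group while the rollout percentage ramps up. The feature name, bucketing scheme, and percentages are hypothetical, not Salesforce’s internal implementation.

```python
# Deterministic percentage rollout: hash each org into one of 10,000 buckets,
# then treat the org if its bucket falls under the current rollout percentage.
# The same org always maps to the same bucket, so cohorts stay stable as the
# rollout ramps, which makes treatment-vs-control EPT comparisons meaningful.
import hashlib

def in_rollout(org_id: str, feature: str, rollout_pct: float) -> bool:
    digest = hashlib.sha256(f"{feature}:{org_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10_000   # stable bucket in [0, 9999]
    return bucket < rollout_pct * 100       # e.g., 5% -> buckets 0..499

# Ramp a hypothetical "fast-nav" feature to 5% of orgs and label each cohort.
for org in (f"org-{i}" for i in range(10)):
    variant = "treatment" if in_rollout(org, "fast-nav", 5.0) else "control"
    print(org, variant)
```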
Despite our best efforts, not every regression can be prevented. When a regression occurs, early detection is critical. To catch issues as soon as possible, we’ve significantly expanded our production telemetry and introduced early indicators into our monitoring of Lightning Experience EPT. Additionally, our Data and Analytics team continuously conducts ad-hoc analyses and refines data pivots to help development teams monitor and analyze performance more effectively.
As our data collection and monitoring have expanded, tracking all possible performance variations has become increasingly complex. Luckily, AI has been a game-changer. We use AI to scan vast datasets and detect anomalies, enabling us to quickly identify areas that need further investigation. With AI-driven alerts, we can now efficiently pinpoint and address performance regressions before they become widespread issues.
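As a highly simplified stand-in for this kind of detection, the sketch below flags days where EPT jumps well outside its recent rolling behavior using a z-score. Salesforce’s AI-driven detection is far richer than this; the example only illustrates the underlying idea of automated anomaly flagging.

```python
# Toy anomaly detector: flag any day whose EPT sits more than z_threshold
# standard deviations above the mean of the preceding rolling window.
from statistics import mean, stdev

def flag_anomalies(daily_ept_ms, window=7, z_threshold=3.0):
    anomalies = []
    for i in range(window, len(daily_ept_ms)):
        history = daily_ept_ms[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue  # flat history; z-score undefined
        z = (daily_ept_ms[i] - mu) / sigma
        if z > z_threshold:  # only slowdowns matter here, not speedups
            anomalies.append((i, daily_ept_ms[i], round(z, 1)))
    return anomalies

series = [1500, 1490, 1510, 1505, 1495, 1502, 1498, 1501, 1650, 1660]
print(flag_anomalies(series))  # flags the sudden jump at index 8
```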
Step 4 & Step 5: Collaborate and Respond
Development takes a village, and so does maintaining good performance. Resolving regressions isn’t a one-person effort — it takes strong cross-team collaboration. In Salesforce’s large, distributed development environment, triaging performance regressions often takes multiple, cross-cloud teams. These teams must work together to diagnose root causes, identify fixes, and ensure a smooth resolution.
To ensure regressions are addressed efficiently, we track Mean Time to X (MttX) metrics, measuring key stages in our response (see the sketch after this list), including:
- MttD (Mean Time to Detect): How quickly we identify a regression
- MttE (Mean Time to Engage): How fast we mobilize the right teams
- MttR (Mean Time to Resolve): How long it takes to fully fix the issue
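As a rough illustration of how such metrics can be derived, the sketch below computes them from hypothetical incident timestamps. The record shape and endpoint choices (for instance, measuring MttR from detection to resolution) are assumptions for illustration, not Salesforce’s internal definitions.

```python
# Illustrative MttX computation over hypothetical incident records.
from datetime import datetime
from statistics import mean

incidents = [
    {"introduced": datetime(2025, 3, 1, 8),  "detected": datetime(2025, 3, 2, 10),
     "engaged":    datetime(2025, 3, 2, 14), "resolved": datetime(2025, 3, 4, 9)},
    {"introduced": datetime(2025, 3, 10, 9), "detected": datetime(2025, 3, 10, 21),
     "engaged":    datetime(2025, 3, 11, 8), "resolved": datetime(2025, 3, 12, 17)},
]

def mean_hours(spans):
    return mean((end - start).total_seconds() / 3600 for start, end in spans)

mttd = mean_hours((i["introduced"], i["detected"]) for i in incidents)  # detect
mtte = mean_hours((i["detected"], i["engaged"]) for i in incidents)     # engage
mttr = mean_hours((i["detected"], i["resolved"]) for i in incidents)    # resolve
print(f"MttD: {mttd:.1f} h  MttE: {mtte:.1f} h  MttR: {mttr:.1f} h")
```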
Through our focused efforts, we’ve brought these MttX metrics down to a matter of days, reflecting the speed and efficiency with which we now handle Lightning Experience performance regressions. While there’s always room to improve, this progress ensures that we minimize disruption and keep performance on track.
Lightning Experience performance: full speed ahead
Delivering the best customer experience is our top priority. However, preventing performance regressions is also critical. Without safeguards, hard-won gains can be eroded over time. Eliminating regressions entirely would be ideal, but it’s not realistic. That’s why we take a structured approach to ensure that any regression that slips through is swiftly detected, measured, and resolved.
Now you’ve seen how we’ve focused on both improving Lightning Experience performance and preventing regressions. We continue to drive both forward, always improving, always innovating.
It’s important to remember that Lightning performance is also influenced by individual org configurations. Some regressions may be customer-specific. To address this, we are developing a new set of performance tools to provide better observability and insights into your Lightning performance. We’ll also have recommendations to help you optimize your Lightning performance along the way. Stay tuned for more details in our upcoming blog posts.
About the authors
Gery Gutnik is a Senior Director of Product, focusing on the performance and scale of the Salesforce web platform. His work spans Lightning Experience performance, content delivery networks (CDNs), including CDN for the Lightning component framework and Salesforce CDN for Experience sites, and performance observability tools.
Binzy Wu is a Senior Director of Engineering, focusing on application services, including UI API, GraphQL, application runtimes, and component services at Salesforce.
Martin Presler-Marshall works as a Software Engineering Architect for the Salesforce web platform. With over two decades of experience in software performance, he focuses mainly on server efficiency and UI performance.
Sharad Gandhi is a Senior Director of Engineering, focusing on the performance and scale of the Salesforce web platform. His work spans Lightning Experience, Lightning and LWR sites, experience delivery, generative canvas, and performance tools such as Salesforce Page Optimizer.