Salesforce Metadata API is a tool that lets developers and administrators programmatically manage an org's configuration metadata through its deploy and retrieve services. The API supports a variety of use cases, such as moving metadata between orgs during the development cycle, deploying large configuration changes from development environments, and even broader tasks like Salesforce org management.
In a previous blog post, we gained a deeper understanding of the Metadata API deployment process, including its various stages. This knowledge of the deploy transaction and its stages, complexity, and limitations helps API users manage their continuous integration and continuous delivery (CI/CD) processes more effectively. However, taking it a step further, adopting certain best practices can help users navigate through these limitations efficiently.
For this blog post, we’ll take a look at some best practices to help you master Metadata API deployments for greater efficiency. We’ll assume that you have a basic understanding of the API and its associated tools, and that you have read the blog post focused on the inner workings of the deployment process.
Factors that affect Metadata API deployments
Before diving into the best practices, it’s crucial to understand Metadata API’s limits.
- File and size limits: You can deploy or retrieve up to 10,000 files at once. The maximum size of the deployed or retrieved `.zip` file is 39 MB (compressed) for the SOAP API.
- Apex deploys: There's an upper limit on the number of Metadata API deployments originating from Apex that can be enqueued at a time. The exact limit depends on infrastructure considerations (more on this in the next section).
- Asynchronous nature: Deployments are asynchronous, meaning you initiate a `deploy()` call and then poll for status updates using `checkDeployStatus()` (see docs).
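The deploy-then-poll pattern above can be sketched in a few lines. This is a minimal illustration, not a real client: `deploy` and `check_deploy_status` are hypothetical stand-ins for the SOAP calls, which in practice you would make through a SOAP client or the Salesforce CLI.

```python
import time

# Hypothetical stand-ins for the Metadata API SOAP calls. Real code would
# authenticate to an org and go through a SOAP client or the Salesforce CLI.
def deploy(zip_bytes, options):
    """Start an asynchronous deployment and return its async process id."""
    return "0Af000000000001"  # illustrative AsyncResult id

def check_deploy_status(async_id):
    """Return the current DeployResult for the given deployment id."""
    return {"id": async_id, "done": True, "status": "Succeeded"}

def wait_for_deploy(async_id, poll_seconds=5, timeout_seconds=600):
    """Poll checkDeployStatus() until the deployment finishes or times out."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        result = check_deploy_status(async_id)
        if result["done"]:
            return result
        time.sleep(poll_seconds)
    raise TimeoutError(f"Deployment {async_id} did not finish in time")
```

Because deployments are asynchronous, a timeout and a sensible polling interval (rather than a tight loop) keep your pipeline from hammering the API while a long-running deployment completes.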
Infrastructural considerations
An important point to note is that infrastructure plays a big role in the end-to-end execution time of deployments. While we continuously enhance our API frameworks, deployment times are also affected by factors beyond our control. These include app server location, load, message queue traffic, peak hours, and planned or unplanned server maintenance. These parameters are equally important when considering the overall time a deployment can take to complete. Let’s take a deeper look at these parameters and learn how to navigate them.
App server upgrades and maintenance
Deployments that start right before a Salesforce server upgrade or maintenance may take longer to complete. This is because after the service is restored, the system will retry both the deployment and validation from the beginning. But it won’t rerun Apex tests that were already completed. To avoid delays due to planned upgrades, check the Salesforce Trust website for upgrade schedules and avoid running deployments during these times.
Long-running deployments due to certain operations
Certain operations that happen within a deployment can take significantly longer to complete than users may anticipate. These operations can vary in their completion times across the same or different Salesforce orgs based on their complexity and the size of requested changes. Accounting for these whenever you request them in a deployment can help set the right expectations for deployment times.
For example, if you have requested a field type conversion in your deployment, understand that the schema conversion is followed by data manipulation. The time taken for this operation depends on the size of the custom field and the number of records that need to be updated. Similarly, Apex recompilation after a deployment can take a sizable amount of time to complete. Be sure to understand and evaluate the need for recompiling Apex in your org after each deployment. You can view the state of this setting, as well as toggle it from ApexSettings.
Deployment windows
Depending on the location of the app servers, there are peak business hours to avoid when scheduling larger deployments, for example, promotions to upstream environments. The industry best practice is to schedule deployments for off-peak hours, such as evenings or weekends, to ensure a smooth, stable, and predictable process with minimal impact on the business. This approach is a core part of effective release management and DevOps practices within the Salesforce ecosystem.
Best practices for efficient deployments
Now that you're aware of the surrounding ecosystem and its challenges of scale, availability, and maintenance, let's take a look at the strategies under your control for making your deployments more efficient.
Modularize and break down deployments
Small, incremental deploys: Break down large deployments into small, logical units. This approach minimizes the risk of reaching system limits, simplifies troubleshooting by isolating potential issues to smaller code bases, and speeds up feedback cycles for developers. For example, deploy declarative changes, like page layouts and validation rules, separately from deployments of programmatic components, like Apex classes and Lightning web components.
Group by dependency: Salesforce enforces a strict order of deployment for certain components. For example, custom objects must deploy before their custom fields, and Apex classes referencing these fields should deploy only after both the object and fields exist in the target environment. Our recommendation is to group and deploy highly interdependent metadata components together.
Prioritize essential components: If a full deployment is too extensive, prioritize essential components like core functionality or critical bug fixes for the target environment.
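The grouping idea above can be sketched as a small helper that batches components by type in a dependency-aware order. The component names and the ordering list are purely illustrative; real dependency ordering is more involved than a fixed type list.

```python
from collections import defaultdict

def group_components(components):
    """Group metadata components by type so related items deploy together.

    `components` is a list of (metadata_type, name) pairs. The ordering
    below is a simplified illustration: objects before the fields and
    Apex classes that reference them.
    """
    groups = defaultdict(list)
    for mtype, name in components:
        groups[mtype].append(name)
    order = ["CustomObject", "CustomField", "ApexClass", "Layout"]
    return [(t, groups[t]) for t in order if t in groups]

# Hypothetical component names for illustration.
batches = group_components([
    ("CustomField", "Invoice__c.Amount__c"),
    ("CustomObject", "Invoice__c"),
    ("ApexClass", "InvoiceService"),
])
# Batches come back with CustomObject first, then CustomField, then ApexClass.
```

Each batch could then become its own small deployment, keeping individual deploys well under the file-count and size limits and making failures easier to isolate.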
Deploy metadata using validate-only or quick deploys
This is one of the most crucial features that can help you significantly reduce your deployment times. By pre-validating your changes, you can quickly deploy them during scheduled release times, especially if they happen during peak business hours.
Pre-validate your deployment: A validate-only deployment (the `checkOnly` deploy option set to `true`) is a "test run" that checks whether your deployment would succeed, without saving any changes, by simulating a real deployment. It runs all tests and verifies the components in your deployment package against the target org.
Quick deploy: A quick deploy allows you to deploy a previously validated package without running any Apex tests again, making the process significantly faster.
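The validate-then-quick-deploy flow can be sketched as follows. The functions here are hypothetical placeholders for the underlying API calls (a `checkOnly` deploy followed by a quick deploy of the recent validation), shown only to make the sequence of steps concrete.

```python
# Hypothetical wrappers around the Metadata API calls, for illustration only.
def deploy(zip_bytes, options):
    """Start a deployment (here, a checkOnly validation) and return its id."""
    return "validation-001"  # illustrative async id

def deploy_recent_validation(validation_id):
    """Quick deploy: promote a recent successful validation without rerunning tests."""
    return {"id": validation_id, "status": "Succeeded", "ranTests": False}

def validate_then_quick_deploy(zip_bytes):
    # Step 1: validate only. Nothing is saved in the org, but all checks
    # and Apex tests run against the target, ahead of the release window.
    validation_id = deploy(zip_bytes, {"checkOnly": True,
                                       "testLevel": "RunLocalTests"})
    # Step 2 (after the validation succeeds, within the quick-deploy window):
    # promote the validated package without rerunning Apex tests.
    return deploy_recent_validation(validation_id)
```

Doing the slow part (validation and tests) ahead of time means the deployment during your release window is reduced to the fast quick-deploy step.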
Implement version control and CI/CD pipelines
Adopting a robust version control system like Git to manage metadata files allows you to track changes, collaborate with your team, and easily revert to previous versions if a deployment fails. It also allows you to leverage and automate deployment processes using CI/CD pipelines, like the Salesforce DevOps Center.
- Run validation-only deployments in lower environments (e.g., UAT, Staging) as part of your CI pipeline to catch errors early without making actual changes
- Integrate automated Apex tests into your CI/CD pipeline to ensure code quality and prevent regressions
- Keep your `package.xml` file accurate and up to date, including only the components you need to deploy
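A minimal `package.xml` that lists only the components being deployed might look like the following. The member names are hypothetical, and the API version should match the one your org and tooling target.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <!-- Illustrative component names; list only what you deploy -->
        <members>Invoice__c</members>
        <name>CustomObject</name>
    </types>
    <types>
        <members>InvoiceService</members>
        <name>ApexClass</name>
    </types>
    <version>61.0</version>
</Package>
```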
Explicit development org strategy
An understanding of the purposes of different sandbox types, along with a forward-looking strategy, helps optimize your CI/CD and sets your development teams up for success.
- Scratch orgs for development: Leverage scratch orgs for isolated, source-driven feature development, testing, and automation that use individual feature branches.
- Developer sandboxes: Utilize developer sandboxes when you need a stable, long-lasting environment for manual configuration or training, or long-term projects.
- Utilize Full sandboxes: For large deployments, Full sandboxes are crucial as they provide a complete replica of your production org’s metadata and data. This allows for realistic testing, including performance and load testing, and UAT.
- Regular sandbox refreshes: Refresh your sandboxes regularly to keep them in sync with production and minimize metadata drift. This reduces the likelihood of unexpected issues during deployment to production.
- Practice deployments: Regularly practice deploying to sandbox environments to refine your process, identify potential issues, and learn how to recover from failures.
Optimize deployment parameters and settings
There are several deployment options offered in the API for different purposes. Using them appropriately can help optimize your deployment as well as set the right expectations.
Rollback on Error
When you set the `rollbackOnError` option to `false` for a deployment, the system does not roll back previously successful changes if a later component in the same deployment fails. If an error occurs midway through the deployment, any components that were successfully deployed before the error remain in the org. This option is set to `false` by default. When set to `true`, in case of an error, the entire deployment transaction is rolled back and the org is restored to the same state as prior to the deployment. It's important to evaluate the stage of your pipeline and your org's rollback needs before using this deploy option.
Apex test levels
Be sure to understand the different test levels for deployments, especially when deploying to a production environment. The default test level for production is `RunLocalTests`, which runs all tests in the org except those from managed packages.
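To make the options discussed above concrete, here are illustrative deploy-option payloads for different pipeline stages. The field names mirror the Metadata API `DeployOptions`; the values and the test class name are examples, not recommendations for every org.

```python
# Illustrative DeployOptions payloads (field names mirror the Metadata API).

# Lower environment: run only the tests relevant to this change set.
sandbox_options = {
    "checkOnly": False,
    "rollbackOnError": True,               # restore the org if anything fails
    "testLevel": "RunSpecifiedTests",
    "runTests": ["InvoiceServiceTest"],    # hypothetical test class name
}

# Production: rollbackOnError must be true, and RunLocalTests is the default
# test level (all tests in the org except those from managed packages).
production_options = {
    "rollbackOnError": True,
    "testLevel": "RunLocalTests",
}
```

Choosing the narrowest appropriate test level in lower environments shortens feedback cycles, while production keeps the stricter defaults.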
Apex compile on deploy
If you’re experiencing extremely slow Apex deployments, consider unchecking “Perform Synchronous Compile on Deploy” in your Apex settings. While disabling this setting can greatly speed up your CI/CD pipeline, it’s crucial to understand the implications. By skipping compilation in the pipeline, you push that process and its potential errors to runtime. This can lead to faster deployments but also introduces a risk of encountering unexpected compilation errors when your application is live. Ultimately, knowing this trade-off helps you decide the best way to optimize your pipeline: prioritize deployment speed or catch errors earlier.
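This setting can also be managed as metadata via the `ApexSettings` type, which makes the choice explicit and versionable in your repository. A sketch of such a settings file (assuming the `enableCompileOnDeploy` field, deployed as part of your Settings metadata) might look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ApexSettings xmlns="http://soap.sforce.com/2006/04/metadata">
    <!-- false: skip synchronous Apex compilation during deploys,
         trading deploy speed for compile-at-runtime risk -->
    <enableCompileOnDeploy>false</enableCompileOnDeploy>
</ApexSettings>
```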
Dependency management
Be aware of metadata dependencies. If you’re deploying a custom field, you’ll likely also need to deploy the custom object it belongs to, as well as any profiles or permission sets that grant access to that field. Failing to include dependencies is a common cause of deployment errors.
Unsupported metadata
Some Salesforce features have metadata that isn't available through the Metadata API. For these, you must make changes manually in each org. You can check out the Salesforce Metadata Coverage Report for a definitive list of which features are supported in the API. Refer to this coverage report or the documentation of individual metadata types to learn about their extended API support, as well as any additional special access rules that may apply to each type.
Special behavior in deployment
Certain metadata types exhibit special behavior in the Metadata API depending on whether they’re being accessed programmatically via file-based deploys or through Change Sets. Refer to this help document to learn more about these types.
Pre and post-deployment actions
There are some additional actions that can be routinely practiced before and after deployments to ensure a smooth experience and reduce the likelihood of errors. Some common ones are listed below.
Pre-deployment checklist
- Temporarily disable automation: For very large or critical deployments, consider temporarily disabling email deliverability, certain workflow rules, validation rules, or process builders that might trigger unwanted actions during the deployment
- Communication: Inform users about the deployment window and potential impact
- Audit trail: Check Setup Audit Trail in the target org to ensure that no manual changes were made that could conflict with your deployment
- Backup: Have a backup of your metadata and data before a major deployment
Post-deployment checklist
- Run Apex/manual tests: Even if you ran tests during deployment, it’s good practice to run relevant Apex tests or conduct manual testing again to confirm that everything is working as expected
- Enable automation: Re-enable any temporarily disabled automation
- Monitor logs: Review and monitor your org and debug logs for any errors or warnings
Conclusion
Gaining expertise in Metadata API deployments requires navigating inherent infrastructural factors, as well as strategizing for success. By proactively implementing a comprehensive set of best practices, development and operations teams can significantly enhance the efficiency, predictability, and overall reliability of their deployment processes. This approach will lead to smoother, more consistent releases and a reduction in potential errors, ultimately improving our customers’ ability to deliver high-quality solutions.
About the author
Neha Ahlawat is a Product Management Director at Salesforce focused on Metadata API and its frameworks, source tracking, and metadata coverage.