On the last Wednesday of every month, Salesforce Developers hosts an “Ask Me Anything” (AMA) session on our YouTube channel. We take a deep dive into a monthly topic and answer live questions posted on Twitter, our developer community group, and YouTube live chat, along with sharing topic-related resources. Last month, the stars aligned when we realized that we’d be at TrailblazerDX during our typical broadcast time frame. So, for the first time ever (cue the confetti), we hosted Ask Me Anything in person!

Christie, Alba, Philippe, and Shane speaking to a crowd about Continuous Integration at TDX. TrailblazerDX 2022, Salesforce Developer Conference, is held at the Moscone Convention Center in San Francisco on Wednesday, April 27, 2022. (© Photo by Jakub Mosur)
On April 27, Developer Community Director Christie Fidura hosted an AMA on continuous integration at TrailblazerDX. She was joined by Salesforce Developer Advocates Alba Rivas and Philippe Ozil, along with Salesforce CLI Developer Shane McLaughlin. An audience of more than 60 Trailblazers asked their CI questions during our 40-minute session. Although we weren’t able to live stream this AMA, you’ll find answers from our experts, as well as helpful resources on all things continuous integration, in this blog post.

AMA session Q&A

What free tools do you recommend for setting up CI?

This probably doesn’t come as a surprise, but I’m a big fan of GitHub Actions. What I like about it is that it’s very accessible and easy to use. Because the CI configuration is text-based, you can look at other projects, check out their configuration, and apply the same patterns to your own project. You should check out a few of our sample apps for examples of how to get started.

Regardless of your CI provider, I also recommend these two tools, which I find quite handy when automating tasks:

  • jq (tool for parsing JSON with a CLI)
  • PMD (tool for running Apex static code analysis)
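As a quick illustration of the jq side: here’s a minimal sketch of pulling one field out of CLI-style JSON. The payload below is a hand-written stand-in for what a command like `sf org display --json` prints, so treat the field names as assumptions.

```shell
# Extract a single field from CLI-style JSON with jq.
# The echoed payload is a hypothetical stand-in for `sf org display --json` output.
username=$(echo '{"status":0,"result":{"username":"ci-user@example.com"}}' \
  | jq -r '.result.username')
echo "$username"
```

The `-r` flag prints the raw string (no surrounding quotes), which makes the value easy to feed into the next step of a CI script.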

What is the main KPI for a successful CI integration? How do you measure success?

One of the KPIs we look at is the time that CI jobs take to run. We monitor how long jobs take and, just as importantly, how that duration trends over time. If you see that your build time increases significantly, something is wrong and you need to take action. In the end, regardless of which CI tool you use, run time costs money, so you want to be careful not to have jobs that run for too long.

In order to address this, you can build specialized jobs, so that you’re not repeating certain “expensive” tasks too often. For example, you don’t want to test packaging every time you make a change to your project. You only want to do packaging tests maybe once in a while to save on cost (CI run time).
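One low-tech way to gate an expensive job is to check whether the relevant files actually changed. Here’s a sketch with plain git in a throwaway repo; `sfdx-project.json` is just an example of a file whose changes might warrant a packaging test.

```shell
# Skip the expensive packaging job unless the packaging config changed.
# A throwaway repo stands in for your project.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main
git config user.email ci@example.com
git config user.name CI
echo '{}' > sfdx-project.json
git add . && git commit -qm "init"
echo '{"packageDirectories":[]}' > sfdx-project.json
git add . && git commit -qm "update packaging config"
if git diff --name-only HEAD~1 HEAD | grep -q '^sfdx-project.json$'; then
  decision="run-packaging-tests"
else
  decision="skip-packaging-tests"
fi
echo "$decision"
```

Most CI providers offer a built-in equivalent (e.g., path filters on triggers), but the same `git diff --name-only` check works anywhere you have a shell.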

We also closely look at code coverage evolution. With time, new features are added to your project and your code base expands, so you want to make sure that code coverage doesn’t decrease along the way.

What is the best practice for deploying profiles without destroying your org, and what are the implications of deploying everything every time?

The best practice for deploying profiles is to not do it. JUST SAY NO.

We do have one example of this in the E-Bikes sample app, where we deploy a profile for an Experience Cloud anonymous guest user. But as Shane said, it’s best to avoid it as much as possible.

In regards to CI/CD, is there a “secret sauce” for dealing with issues that arise after pushing code? Are there tools or best practices that can help tie them back to code changes?

The best way to avoid this is to have solid CI workflows. You want to have unit tests, integration tests, end-to-end tests, and user acceptance tests before your code hits production. The golden rule in software development is that the sooner you catch issues, the less expensive it is to fix them.

With CI/CD, you can pretty much match Git content with production deployments. Once you’ve identified the deployment that caused the issue, you can run a combination of git diff (see docs) and git blame (see docs) commands (or the equivalent in your favorite IDE) on the offending code to identify the author.
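As a minimal sketch of that workflow, run in a throwaway repo with a made-up file name and author:

```shell
# Trace a bad line back to its author with git diff and git blame.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main
git config user.email ada@example.com
git config user.name "Ada Dev"
printf 'line one\nline two\n' > MyClass.cls
git add . && git commit -qm "initial"
printf 'line one\nbroken line\n' > MyClass.cls
git add . && git commit -qm "hotfix"
git diff HEAD~1 HEAD -- MyClass.cls    # shows what the deployment changed
blame=$(git blame -L 2,2 MyClass.cls)  # who last touched the offending line
echo "$blame"
```

The `-L 2,2` range limits blame to the offending line, so you get the author and commit for exactly the code that broke.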

Partial deployments versus full deployments?

I like full deployments. The reason is that I know they’ll all deploy and I know that I’m not missing a piece. Any time I’ve tried to do partial deployments by detecting changes and noting, “Hey, this is what changed,” I feel like I’ve made more work for myself. You now have the ability to validate a deployment, so that when you actually want to deploy, you can schedule it to go at a certain time (e.g., in the middle of the night when no one’s on the system). I can’t think of a good reason not to deploy the entire thing. It also solves some data issues where (I guess this becomes a flavor preference) people change things in production that never make it into source control, and I want production to match my source control.
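The validate-then-deploy split maps to a pair of Salesforce CLI commands. This is a sketch only (the snippet prints the commands rather than running them, since they require an authenticated org); check `sf project deploy validate --help` for the flags your CLI version supports.

```shell
# Sketch only: print the validate-then-quick-deploy pair rather than run it,
# since it requires an authenticated Salesforce CLI session.
cmds=$(cat <<'EOF'
sf project deploy validate --source-dir force-app --test-level RunLocalTests
sf project deploy quick --job-id <job-id-printed-by-validate>
EOF
)
echo "$cmds"
```

The validation runs your tests and checks the deployment ahead of time; the quick deploy later promotes that already-validated job, which is what makes a scheduled middle-of-the-night deployment practical.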

So, if someone changes a report — because they can — I want to override that report and put it back the way it is in source control. This discourages them from changing the report in production again and encourages them to make the change in a sandbox instead. I think this helps encourage good behavior.

How do you work with sandboxes in CI?

I think the first question would be, “Why are you using sandboxes?” One reason is that you want an org that has everything like production, i.e., more of a full sandbox. I would script deployments to that sandbox, and the trick is that before I do a production deployment, I take the same code and put it in the sandbox first. That does two things: it makes sure the deployment is going to work, and it keeps that sandbox consistent with production.

The other thing that you see people using sandboxes for is the “org-dependent package” concept. Normally, packages need to be metadata complete, with all references internal, and that’s hard sometimes if your org is a mess. Org-dependent packages are still packages, and they list everything that’s inside them, but they deploy on top of your org and don’t have to be dependency complete, as long as the org provides what they need.

When there are multiple developers working on a single org, sometimes they change classes, they forget to run the tests, and they break. What do you recommend for handling that?

You should have developers working on feature branches in independent orgs. With continuous integration, that shouldn’t happen: when a developer pushes their changes to their feature branch, tests run automatically, along with other tasks that improve the quality of your code, such as linting and formatting. You can check how we’ve implemented CI jobs that run on every git push in any of our sample apps.
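You can get the same early feedback locally with a Git pre-push hook. This is a sketch that writes the hook to a temporary file; the npm script names are hypothetical, so match them to your own package.json.

```shell
# Sketch of a Git pre-push hook that runs quality checks before code
# leaves a developer's machine. The npm script names are hypothetical.
hook=$(mktemp)
cat > "$hook" <<'EOF'
#!/bin/sh
# Copy this file to .git/hooks/pre-push and make it executable.
npm run lint || exit 1       # block the push if linting fails
npm run test:unit || exit 1  # block the push if unit tests fail
EOF
chmod +x "$hook"
echo "pre-push hook sketched at $hook"
```

Hooks catch problems before they even reach CI, but they complement rather than replace the server-side jobs, since hooks can be bypassed locally.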

When my team is doing a complicated deployment that takes a long period of time with large data loads, we have multiple people working in that system. We have a very short window that the system can be down, and currently, we’re having to do this over the weekend. We’re trying to minimize the impact on business throughout the week. Do you have any recommendations for how we can manage this more efficiently?

There’s a community-contributed Salesforce CLI plugin called SFDX Git Delta that compares different Git commits, calculates the metadata difference between them, and lets you deploy the resulting diff. This kind of incremental deployment is significantly faster than a full deployment. More information on the plugin can be found in this blog post, and you should give it a try to reduce production downtime during your future deployments.

How do you approach A/B testing? If I want to deploy a feature that I think will reduce sales cycle time by 15%, can I pull that into a subset of users and test that out?

Permissions. You can deploy new objects, layouts, or flexipages without assigning them to everyone in the org. For UI components or UI flows, put them on a new flexipage. Then make a permission set that grants the feature to the subset of users in your experiment.

Philippe, can you please share your recommended Git branching strategy in three minutes or less? Go.

This is a tough question because there are different options depending on project shapes and dependencies.

A good approach is to have a multi-layer strategy:

  • At the bottom is the main branch which contains code that is currently deployed in production.
  • The level above is a UAT (User Acceptance Testing) branch that matches the deployment on a partial or full sandbox.
  • The higher levels are feature branches deployed on smaller sandboxes and scratch orgs. As your features mature, you’ll be merging your code down to the point where you reach the production branch.

On top of this, one important thing that we do in sample apps is that we separate the current Salesforce release branch and the prerelease branch. We run different CI jobs on these two branches to validate that the new release will not impact our project.
That’s the best intro to Git branching strategies that I can do in three minutes.

Alba, where can a DevOps team learn more about CI/CD processes?

Here are some great resources to get you started:

Our next AMA is coming up!

Please join us for our next AMA on May 25, 2022 as Principal Developer Advocate Julián Duque hosts Ask Me Anything with Salesforce Developers | Developer Tooling on our Salesforce Developers YouTube channel!

About the author

Sarah Welker is a Senior Marketing Analyst on the Salesforce Developer Relations team focusing on digital content and developer events. She’s a big fan of sports, the outdoors, and her kids. You can follow her on Twitter @sarahwelker47.

