This guide was originally published on Medium in early 2021 and has been updated with the latest guidance and advice, including new security features as part of recent releases and new pricing structure for reviews.

You’ve built your app, it works great, and now you’re ready to release it to the world. But before you can publish your creation on the AppExchange, your app must pass security review. At Salesforce, nothing is more important than the trust of our customers. The security review process is here to validate that your app can be trusted with customer data.

In this blog post, we’ll provide guidance on the AppExchange Security Review process for architects, developers, testers, product owners, and anybody else who is involved in the development or submission of a given package.

Quick Links:

Overview
FAQs
Common Issues & Anti-patterns
Tooling
Submission Strategy
Conclusion

Overview

This article includes a collection of secure development best practices and a checklist of common pitfalls encountered during the AppExchange Security Review (SR). You can save time ahead of your security review by making sure that you avoid these common issues. It is not an exhaustive list, and the field of security is constantly evolving. To avoid duplication, we’ll link out to the authoritative source for a given topic.

Let’s start by addressing some frequently asked questions regarding the Security Review process.

FAQs

What tools can I use?

Salesforce Code Analyzer is developed by Salesforce and has specific analyzers to detect many common development and security issues. Integrate Code Analyzer into your Continuous Integration/Continuous Delivery (CI/CD) process to enforce rules that you define and to produce high-quality code. It is highly recommended that you use this tool to perform initial checks before you submit for security review.

Why did my package pass the initial scan and not the deeper security review?

The Security Review is a complex, multi-layered process involving several tools and security specialists. The initial automated code-checking tools provide a high-level assessment of your code, but they cannot find every vulnerability that a manual review can: security is a constantly evolving field, and a deeper review requires human expertise.

What caused our package to fail the review?

The person who submitted the review should have received an email with an attachment that details what caused the review to fail. Reach out to your partner account manager (PAM) if you do not have access to this email/report.

I’ve fixed all the findings from the security review, but now I’ve failed again!

The Security Review report is not an exhaustive checklist of things to fix, especially if there are a large number of issues with a submission. It lists the classes of vulnerabilities found in your application, but not every instance where they occur. The role of the Security Review is to validate that your package meets current security best practices, has no known vulnerabilities, and is generally safe to promote to the AppExchange, where trust is our #1 priority. The Security Review is not there to find all the security issues for you; that is something that should be built into your development process and reviewed regularly.

I’ve submitted my package for review, so when will it be reviewed?

There are a couple of steps to this. Once you submit, the package is subject to some initial checks before it is placed on the main SR queue. This stage catches any major submission mistakes, such as submitting the wrong version of the package, and gives rapid feedback that the package needs to be resubmitted. Once past this stage, it goes on the main SR queue. Due to the labor-intensive nature of the Security Review, expect a queue time of six to nine weeks. See the documentation for the latest info on queue lengths.

We launch tomorrow/next week and we need it reviewed now!

The Security Review process takes time, and you need to factor this into your development and release cycle. In exceptional circumstances, priority can be given to a particular review, but please remember that this really means exceptional and still requires a security reviewer to become available, which could be multiple days (our reviews are thorough!). Reach out to your Partner Account Manager if you need assistance.

Our package failed Security Review, and we’ve resubmitted it. Can we skip the queue?

When you submit for a retest, you already have a tester assigned, which, in a way, is skipping the queue time and going straight to your tester’s queue.

Our package has failed multiple times!

There are multiple reasons that this could happen:

  1. Some issues fixed, but not all — Remember, we don’t highlight everything, so check your code for all instances of an issue.
  2. New code, new issues — Has new code been added that introduces additional issues?
  3. Misunderstanding of the issues raised — This means that issues are not remedied correctly.
  4. Bad security design — Are there instances where your code is insecure by design? It may require re-architecting.
  5. Have you thoroughly reviewed the solution yourself? The purpose of the Security Review is to confirm that the solution was designed with security in mind; the reviewers are not going to secure it for you.

Common Issues & Anti-patterns

Every app is unique, but most Security Review failures break down into a few categories. This section details these, and provides links to documentation to help you avoid these issues.

Salesforce Code Analyzer is the ideal tool to use as part of your build process and has specific analyzers to detect many of the issues below before you submit.

CRUD, FLS

  1. Object (CRUD) and Field Level Security (FLS) — These are configured on profiles and permission sets and can be used to restrict access to standard and custom objects and individual fields. Salesforce developers should design their applications to enforce the organization’s CRUD and FLS settings on both standard and custom objects, and to gracefully degrade if a user’s access has been restricted. Some use cases where it might be acceptable to bypass CRUD/FLS are: creating roll-up summaries or aggregates that don’t directly expose the data; modifying custom objects or fields, like logs or system metadata, that shouldn’t be directly accessible to the user via CRUD/FLS; and cases where granting direct access to the custom object would create a less secure model. Make sure to document these use cases as part of your submission. For more information, please review the documentation for CRUD and FLS in the Salesforce developer docs.
  2. Enforce Field- and Object-Level Security in Apex – The Security.stripInaccessible method for field- and object-level data protection is now generally available. Use the stripInaccessible method to strip fields that the current user can’t access from query and subquery results. Use the method to remove inaccessible fields from sObjects before a DML operation to avoid exceptions. Also, use the stripInaccessible method to sanitize sObjects that have been deserialized from an untrusted source.
    Where: This change applies to Lightning Experience and Salesforce Classic in Enterprise, Performance, Unlimited, and Developer editions.
    How: The stripInaccessible method checks the source records for fields that don’t meet the field- and object-level security check for the current user and creates a return list of sObjects. The return list is identical to the source records, except that fields inaccessible to the current user are removed.
  3. Lightning Security — Because Lightning code shares the same origin as Salesforce-authored code, increased restrictions are placed on third-party Lightning code. These restrictions are enforced by Lightning Locker and a special Content Security Policy. There is also additional scrutiny in the AppExchange security review.
  4. External Resources — Everything that your package and its users interact with is in scope for the Security Review. It’s important that the Salesforce security team reviews every extension package. Even small packages can introduce security vulnerabilities.
  5. For more information, see: Utilise Apex Security Enhancements to Reduce Development Time.
  6. Secure Apex Code with User Mode Database Operations – The Database and Search methods support an accessLevel parameter that lets you run database and search operations in user mode instead of in the default system mode. Apex code runs in system mode by default, which means that it runs with substantially elevated permissions over the user running the code. This feature was introduced in Spring ’23 and is now generally available.
  7. For code targeting orgs prior to Spring ’23, use the WITH SECURITY_ENFORCED clause in SOQL.
  8. For code targeting Spring ’23 and later orgs, use the WITH USER_MODE clause.
  9. Selecting the correct sharing mode: Use the WITH SHARING or WITHOUT SHARING keywords on a class to specify whether sharing rules must be enforced. Always document any use of WITHOUT SHARING in your false-positive report. Use the inherited sharing keyword on a class to run the class in the sharing mode of the class that called it.
  10. Inherited sharing is an advanced topic and requires additional consideration. Because the sharing mode is determined at runtime, you must take extreme care to ensure that your Apex code is secure to run in both with sharing and without sharing modes.
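
The points above can be sketched in a short Apex class. This is a minimal illustration, not a definitive implementation; the object and field names are illustrative only:

```apex
// Sketch: enforcing CRUD/FLS and sharing in Apex (object/field names are illustrative).
public inherited sharing class ContactService {

    // Spring ’23+: WITH USER_MODE enforces CRUD, FLS, and sharing for the running user.
    public static List<Contact> getContacts() {
        return [SELECT Id, Email FROM Contact WITH USER_MODE];
    }

    // stripInaccessible removes fields the user can't create before DML,
    // avoiding runtime exceptions while still respecting FLS.
    public static void insertContacts(List<Contact> newContacts) {
        SObjectAccessDecision decision =
            Security.stripInaccessible(AccessType.CREATABLE, newContacts);
        insert decision.getRecords();
    }
}
```

Because the class is declared inherited sharing, it runs in the sharing mode of its caller, so verify that it behaves securely in both with sharing and without sharing contexts.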

Insecure endpoints

  1. Always use HTTPS when connecting to any external endpoints to push or pull data into Salesforce as a part of your application. Data sent over HTTP is accessible in clear text by any network attacker and poses a threat to the user. Find more information in the Secure Coding Guide.
  2. Every referenced external web service will be penetration tested, so make sure it’s set up correctly. A common issue is when an external service is deployed to production in Debug mode, causing it to divulge information (stack trace, etc.) if the penetration test manages to crash it by sending malformed data.
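
One way to keep HTTPS endpoints and credentials out of your code is a Named Credential. As a sketch (My_Endpoint is a hypothetical Named Credential name, not a real one):

```apex
// Sketch: HTTPS callout through a hypothetical Named Credential ('My_Endpoint').
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:My_Endpoint/api/v1/records'); // resolves to the HTTPS URL configured in Setup
req.setMethod('GET');
HttpResponse res = new Http().send(req);
if (res.getStatusCode() != 200) {
    // Avoid echoing response bodies or stack traces back to the user in error paths
    System.debug(LoggingLevel.WARN, 'Callout failed with status ' + res.getStatusCode());
}
```

The Named Credential stores both the endpoint URL and the authentication details, so neither appears in the package source.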

SOQL

  1. Avoid common performance issues, such as SOQL queries in nested FOR loops.
  2. Avoid SOQL injection attacks by sanitizing inputs and adopting a defensive programming style.
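
A minimal sketch of the injection defense in Apex (AccountSearch and searchTerm are hypothetical names):

```apex
// Sketch: defending against SOQL injection ('searchTerm' is untrusted user input).
public with sharing class AccountSearch {

    // Preferred: a bind variable — the input is never parsed as part of the query.
    public static List<Account> find(String searchTerm) {
        String pattern = '%' + searchTerm + '%';
        return [SELECT Id, Name FROM Account WHERE Name LIKE :pattern WITH USER_MODE];
    }

    // If dynamic SOQL is unavoidable, escape quotes and run in user mode.
    public static List<Account> findDynamic(String searchTerm) {
        String safe = String.escapeSingleQuotes(searchTerm);
        return Database.query(
            'SELECT Id, Name FROM Account WHERE Name LIKE \'%' + safe + '%\'',
            AccessLevel.USER_MODE
        );
    }
}
```

Bind variables are the stronger defense because the user input can never change the query structure; escaping is the fallback when the query must be built as a string.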

CSS styling

  1. CSS for LWCs should be in the component’s CSS, not inline.
  2. Inline CSS is strongly discouraged and restricted by the Content Security Policy.
  3. If your CSS breaks another component, you’ll fail review.
  4. Don’t use fixed, absolute, or float in CSS. Components are intended to be modular and run on pages with others.
    • You can provide false-positive documentation for this, but it needs to be well justified.
  5. Don’t use .THIS with LWC components as these are already encapsulated. It is still required for Aura components.
  6. CSS can be an attack vector too, so it’s important to pay attention to this.

JavaScript

  1. There are numerous JavaScript recommendations throughout this document, so we’ll avoid repeating them here. One additional thing to check for is legacy versions of libraries, especially jQuery. Legacy versions with known vulnerabilities will cause you to fail review, and this is an easy one to resolve ahead of time. It’s worth mentioning Retire.js here, which can be run as a build task or as a browser extension.
  2. Legacy JavaScript versions included with packages are one of the most common, and easiest-to-avoid issues that cause failure — always make sure you’re using the latest version.

Content Security Policy

The Content Security Policy Overview is a great resource on how the Lightning Framework uses Content Security Policy (CSP) to impose restrictions on content. The main objective is to help prevent cross-site scripting (XSS) and other code injection attacks.

Web browsers follow CSP rules specified in web page headers to block requests to unknown servers for resources including scripts, images, and other data. CSP directives also apply to client-side JavaScript, for example by restricting inline JavaScript in HTML.

So many issues come back to points raised on that page that it makes sense to replicate the main points here:

  1. JavaScript libraries can only be referenced from your org — All external JavaScript libraries must be uploaded to your org as static resources. The script-src 'self' directive requires the script source to be called from the same origin. For more information, see Using External JavaScript Libraries.
  2. Resources must be located in your org by default — The font-src, img-src, media-src, frame-src, style-src, and connect-src directives are set to 'self'. As a result, resources such as fonts, images, videos, frame content, CSS, and scripts must be located in the org by default. You can change the CSP directives to permit access to third-party resources by adding CSP Trusted Sites. For more information, see Create CSP Trusted Sites to Access Third-Party APIs.
  3. HTTPS connections for resources — All references to external fonts, images, frames, and CSS must use an HTTPS URL. This requirement applies whether the resource is located in your org or accessed through a CSP Trusted Site.
  4. Blob URLs disallowed in iframes — The frame-src directive disallows the blob: schema. This restriction prevents an attacker from injecting arbitrary content into an iframe in a clickjacking attempt. Use a regular link to a blob URL and open the content in a new tab or window instead of using an iframe.
  5. Inline JavaScript disallowed — Script tags can’t be used to load JavaScript, and event handlers can’t use inline JavaScript. The unsafe-inline source for the script-src directive is disallowed. For example, this attempt to use an event handler to run an inline script is prevented: <button onclick="doSomething()"></button>.

Common vulnerabilities

  1. Cross-site scripting (XSS) — Cross-site scripting attacks are a type of injection problem in which malicious scripts are injected into otherwise benign and trusted websites. They occur when an attacker uses a web application to send malicious code, generally in the form of a browser-side script, to a different end user. Flaws that allow these attacks to succeed are quite widespread and occur anywhere a web application uses input from a user in the output it generates without validating or encoding it. The end user’s browser has no way to know that the script should not be trusted and will execute it. Because the browser thinks the script came from a trusted source, the malicious script can access any cookies, session tokens, or other sensitive information retained by the browser and used with that site. These scripts can even rewrite the content of the HTML page. Stored XSS attacks are persistent and occur as a result of malicious input being stored by the web application and later presented to users. For further info, see the OWASP wiki.
  2. Cross-site request forgery (CSRF) — CSRF is an attack that forces end users to execute unwanted actions on a web application in which they are currently authenticated. With a little help from social engineering (such as sending a link via email or chat), an attacker may force the users of a web application to execute actions of the attacker’s choosing. A successful CSRF exploit can compromise end-user data and perform state-changing actions on that data without the user’s knowledge. If the targeted end user is an administrator account, this can compromise the entire web application. Using custom headers (or methods such as PUT) to protect against CSRF is not a perfect approach; you still need to implement CSRF tokens as a defense-in-depth measure.
  3. Insecure session cookie handling — All session cookies should be set over HTTPS connections with the SECURE flag. These cookies should be invalidated upon logout, and the Session IDs stored in such cookies should be random with sufficient entropy, so as to prevent an attacker from guessing them with any reasonable chance of success. Cookie values should never be reused and should be unique per user, per session. Sensitive user data should not be stored in the cookie.

Code considerations

  1. Commented code — Comments are allowed in small snippets and samples, but full functions and classes that are commented out should be removed.
  2. Incomplete test documentation — It’s important that documentation is as complete as possible, including documenting your responses to false positives. This helps the reviewer understand why you may be doing something a particular way that normally wouldn’t be best practice, and understand what actions you’ve taken to mitigate any security concerns.
  3. Insecure software versions — When new vulnerabilities are discovered in software, it is important to apply patches and updates to a version of the software for which the vulnerability is fixed. Attackers can create attacks for disclosed vulnerabilities very quickly, so security patches should be deployed as soon as they are available. Note: If you think this is a false positive, please submit a false positive document in the next retest with your reasons.
  4. Secrets in code – Do not store secrets in code. Use protected custom settings, custom metadata, or named credentials as appropriate.
  5. Storing sensitive data — This is a brilliant resource on how to work securely with sensitive data. If your application copies and stores sensitive data that originated at Salesforce.com, you should take extra precautions. Salesforce.com takes threats to data that originated at their site very seriously, and a data breach or loss could jeopardize your relationship with Salesforce if you are a partner. Make sure you follow industry best practices for secure storage on your development platform. Never store Salesforce passwords off the platform.
  6. External system access, Session ID exfiltration – The security stance on usage of the Session ID has tightened significantly in recent years. Specifically, sending the Session ID to an external system via API will result in an automatic fail going forward. Suggested modern alternatives are a dedicated integration user or OAuth. Draft guidance (login to Partner Community required).
  7. Password echo — Passwords and other secrets should never be echoed back to the user or displayed in clear text in the UI; use masked input fields and never return stored credentials to the client.
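
For point 4, here is a sketch of reading a secret from a protected hierarchy custom setting instead of hardcoding it; API_Config__c and Token__c are hypothetical names introduced for illustration:

```apex
// Sketch: fetching a secret from a protected custom setting
// ('API_Config__c' and 'Token__c' are hypothetical names).
API_Config__c config = API_Config__c.getOrgDefaults();
String token = (config != null) ? config.Token__c : null;

// Anti-pattern — a hardcoded secret like this will fail review:
// String token = 'sk_live_abc123';
```

Because the custom setting is marked protected in a managed package, subscribers can’t read its values, and the secret never appears in the package source.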

Information leakage

Information leakage involves inadvertently revealing system data or debugging information that helps an adversary learn about the system and form a plan of attack. An information leak occurs when system data or debugging information leaves the program through an output stream or logging function.

  1. Sensitive information in debug — Revealing information in debug statements can help reveal potential attack vectors to an attacker. Debug statements can be invaluable for diagnosing issues in the functionality of an application, but they should not publicly disclose sensitive or overly detailed information (this includes PII, passwords, keys, and stack traces as error messages, among other things).
  2. Sensitive information in URL — Don’t forget that one of the simplest data transfer mediums is the URL itself. Sensitive information passed via the GET method (HTTP GET query string) may lead to data leakage and exposes the application in various ways: the full URL is often stored as-is in clear-text server logs that may not be stored securely, can be seen by personnel, and may be compromised by a third party; search engines index URLs, inadvertently storing sensitive information; full URL paths are stored in local browser history, the browser cache, bookmarks, and bookmarks synchronized between devices; and URL info is sent to third-party web applications via the Referer header. Long-term secrets, such as username/password, long-lasting access tokens, and API tokens, must not be sent in URLs.
  3. TLS/SSL Configuration — Due to historic export restrictions on high-grade cryptography, legacy and new web servers are often able and configured to handle weak cryptographic options. Even if high-grade ciphers are normally used and installed, some server misconfiguration could be used to force the use of a weaker cipher to gain access to the supposedly secure communication channel. Legacy protocols such as SSL v2, SSL v3, TLS 1.0, and TLS 1.1 should not be supported by the server, nor should ciphers that utilize a NULL cipher or have weak key lengths. TLS 1.0 and 1.1 have been declared end-of-life by most systems and should no longer be used. See: Testing for SSL-TLS. Currently, TLS 1.2 or greater is required by Salesforce.

I’d like to speak to the reviewer

This can be arranged, generally, with a minimum of three weeks’ notice as the service is popular. Security reviewers have office hours and teams can book a session with them to discuss the findings of a review. If your package has failed a couple of times, it may be worth booking an appointment just after the next review to speak to a security engineer. Book your office hours session via the Partner Security Portal.

False positives

There are times when you have a legitimate reason for doing something in a certain way, and have taken measures to ensure the security of the data. These instances should be clearly marked in code, and comments should be provided to avoid false positives. However, if you find that you have a lot of exceptions in your code, you may need to consider if your code is following an anti-pattern and needs re-architecting.

Security is a state of mind, not a check box

The purpose of the Security Review is to validate that you’ve taken all the necessary precautions. Many partners fail Security Review the first time around, then make it their priority not to let this happen again, aggressively reviewing the code of all their packages and dependencies, like external web services. This is a win-win for everybody: it makes the Security Review process faster, and the partner is proactive about security, which is the ultimate goal.

Tooling

There are many tools available, each with its own focus. Use them to aid security analysis, but remember that security is a mindset and an explicit architectural process. While tools can spot particular patterns, anti-patterns, and other issues, they will never have the full understanding of what the solution is trying to do or the mindset of a human reviewer. Here is more information on some of these tools:

  1. Salesforce Code Analyzer (previously Salesforce CLI Scanner) — Have you heard of the Salesforce Code Analyzer yet? It’s an intuitive open-source tool that can scan your code to identify common coding issues and possible vulnerabilities. Code Analyzer provides a unified experience on top of multiple open-source code scanners to help individual developers and teams focus on code quality. Code Analyzer currently supports the PMD rule engine, PMD Copy Paste Detector, ESLint, RetireJS, and Salesforce Graph Engine. Integrate Code Analyzer into your Continuous Integration/Continuous Delivery (CI/CD) process to enforce rules that you define and to produce high-quality code.
  2. Chimera — This is a cloud-based, run-time scanner service that can be used to scan third-party websites. Note that Chimera is only for websites that you own or can upload a token to.
  3. Source Code Scanner (Checkmarx) — Source Code Scanner lets you schedule scans, download scan reports, search all the scans for your org, and manage scan credits for your orgs. For more information, see the Checkmarx FAQ.
  4. ZAP — Zed Attack Proxy is an open-source web scanner from OWASP and can be used to scan third-party websites.
  5. Common Vulnerabilities and Exposures — CVE® is a free-to-search dictionary of publicly disclosed cybersecurity vulnerabilities and exposures.
  6. Retire.js — There is a plethora of JavaScript libraries for use on the web and in Node.js apps out there. This greatly simplifies things, but we need to stay updated on security fixes. “Using Components with Known Vulnerabilities” is now a part of the OWASP Top 10 and insecure libraries can pose a huge risk for your web app. The goal of Retire.js is to help you detect the use of versions with known vulnerabilities.
  7. National Vulnerability Database — The NVD is the U.S. government repository of standards-based vulnerability management data represented using the Security Content Automation Protocol (SCAP).

Submission Strategy

How do I submit for Security Review?

You submit your package for review via the Partner Community. Before you do so, check the ISVforce guide for the latest guidance, and be sure to read all the info in this article.

Per-review pricing model

Please be aware that as of March 16th, 2023, the fees for security review are moving to a per-attempt model.

  • The $2,550 initial review fee is eliminated.
  • The $150 annual fee is eliminated.
  • The security review fee is $999 per attempt for paid apps.

What about free apps? Will the new fee apply here too?

At the time of writing, the status of free apps is as follows:

There will be no fees for Security Reviews for free solutions while we work to redefine the policy.

However, please check the fee updates page and discussion for the latest information.

Package versioning

Submit a point release (e.g., 16.7→16.8, etc.), not a patch release (16.7.1234). The security scanner in the partner portal is designed to work with major and minor releases (Major.Minor) only. Patch (Major.Minor.PATCH) releases are not supported and are purposely filtered from the list of available packages. Most issues that partners encounter with their package not being visible in the security scanner are resolved by creating a new Major.Minor (e.g., 16.7→16.8) release.

Multi-package submission

When submitting a solution that consists of multiple packages, it is important to be explicit on the case about the packages included in the org and their relationship to each other.

For example:
xx — base package
yy — depends on xx
zz — depends on xx and yy

Only packages directly related to Security Review submission should be in the org, otherwise the submission will be rejected.

Upon submission: pre-queue checks

Once a package has been submitted, it enters a pre-queue state where checks are performed to confirm the validity of the submission. It may take a few days for these steps to be manually checked before the package goes on the official SR queue.

  1. Correct package(s) submitted — The package should be managed. No beta, unmanaged, or unlocked packages are allowed.
  2. No other packages submitted — If there are extra packages (see Multi-package submission), please provide a detailed explanation of the package dependencies.
  3. Access to test org is validated — 2FA/MFA should be disabled, so the testing team can log in to test. Similarly, for a web app or remote site, please make sure that working credentials are provided.

Conclusion

The Security Review process is designed to validate that you have made good data hygiene decisions and have considered security by design in your app. The more thought you put into security by design, the easier the submission process will be. This post gives you a head start on things to do, and to avoid doing, to make your review as smooth a process as possible.

About the author

Jonathan McNamee is a UK-based Technical Evangelist at Salesforce, helping ISV Partners get the most from their investment in the Salesforce Platform. He has 20 years of experience in developing web-technology solutions for companies of all sizes across a variety of industries and has been working in the Salesforce ecosystem since 2020. His interests include scalability, efficiency, and resilience of systems. Follow him on LinkedIn.
