Best Practices
Follow these best practices as you deploy Model Context Protocol (MCP)-based integrations in your enterprise architecture.
Salesforce provides a hosted MCP server that handles authentication, authorization, and observability on your behalf. Building a custom proxy layer around the Salesforce REST APIs instead of using the hosted server undermines those guarantees: it bypasses the platform's security controls, removes request-level telemetry, and creates a maintenance burden that grows as the API surface evolves. Use the hosted server as intended.
Per-user authentication means every MCP tool call runs with the same permissions as the user who authorized the connection—the same field-level security, object access, and sharing rules that apply in Lightning. This is a feature, not a constraint: it means that AI agents operate within your existing governance model without extra configuration.
Refer to the next section for strategies to mitigate the risk of external agents making mistakes as they act in your Salesforce org.
MCP access control works at three levels. The client governs which users can connect to which servers, and some clients also offer RBAC-based control over individual tools. The server defines which tools are available in a given configuration. The Salesforce platform enforces field-level security, object permissions, and sharing rules at runtime when tools execute. Each layer has a distinct role; don't try to compensate for one layer's gaps in another.
When you need tighter data access than the default server configuration provides, choose the strategy that matches your requirements:
- Use a scoped SObject server. The platform ships several pre-configured SObject servers with varying capabilities: read-only (sobject-reads), create and update without delete (sobject-mutations), delete only (sobject-deletes), and full access (sobject-all). Selecting the right server for your use case limits what agents can do without any code.
- Back a custom tool with a Named Query. A Named Query lets you define exactly which objects and fields a given tool returns. This is the right approach when you want an agent to query a specific slice of your data model without exposing the full SOQL surface.
- Use an Apex InvocableMethod for custom logic. For scenarios that require conditional access, data transformation, or multi-step operations with business rules, write an Apex class with an @InvocableMethod annotation and expose it as a custom MCP tool. This approach gives you full programmatic control over what the agent can see and do.
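As a sketch, a custom tool backed by an invocable method might look like the following. The class name, label, and filter criteria are illustrative, not part of the platform; the pattern shown is a scoped, read-only lookup that runs in user mode so field-level security and sharing rules stay in force.

```apex
// Hypothetical example: a scoped, read-only lookup exposed as a custom
// MCP tool. Class name, labels, and filters are illustrative.
public with sharing class OpenCaseLookup {
    @InvocableMethod(label='Get Open Cases' description='Returns open cases for the given accounts')
    public static List<List<Case>> getOpenCases(List<Id> accountIds) {
        Map<Id, List<Case>> casesByAccount = new Map<Id, List<Case>>();
        for (Id accountId : accountIds) {
            casesByAccount.put(accountId, new List<Case>());
        }
        // WITH USER_MODE (plus "with sharing") keeps field-level security
        // and sharing rules in force for the invoking user.
        for (Case c : [SELECT Id, Subject, Status, AccountId
                       FROM Case
                       WHERE AccountId IN :accountIds AND IsClosed = false
                       WITH USER_MODE]) {
            casesByAccount.get(c.AccountId).add(c);
        }
        // Invocable methods take a list and return a list, in matching order.
        List<List<Case>> results = new List<List<Case>>();
        for (Id accountId : accountIds) {
            results.add(casesByAccount.get(accountId));
        }
        return results;
    }
}
```

Because the query is fixed in code, the agent can only ever see the fields and rows this method exposes, regardless of what it asks for.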
MCP tools support an annotations object with four boolean hints that tell clients how the tool behaves: readOnlyHint, destructiveHint, idempotentHint, and openWorldHint. Well-behaved clients use these to present appropriate UX—auto-executing read-only tools and requiring confirmation for destructive ones.
Platform tools ship with accurate annotation values. Read operations like SOQL queries and schema lookups are marked readOnlyHint: true; delete operations are marked destructiveHint: true. Clients that respect these annotations can deliver a smoother experience without sacrificing safety.
For custom tools backed by Flows, Apex, or REST endpoints, set annotations explicitly. The spec defaults are conservative—a tool with no annotations is assumed potentially destructive and open-world. This means a custom query tool that omits readOnlyHint: true will trigger unnecessary "are you sure?" prompts in clients that respect annotations. At a minimum, set readOnlyHint: true and destructiveHint: false on tools that only read data, and destructiveHint: true on tools that delete or irreversibly modify records.
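For example, a read-only custom query tool's entry in a tools/list response might carry annotations like this. The tool name and input schema are illustrative; the annotations object and its four hint fields follow the MCP specification.

```json
{
  "name": "get_open_cases",
  "description": "Returns open cases for an account",
  "inputSchema": {
    "type": "object",
    "properties": { "accountId": { "type": "string" } },
    "required": ["accountId"]
  },
  "annotations": {
    "readOnlyHint": true,
    "destructiveHint": false,
    "idempotentHint": true,
    "openWorldHint": false
  }
}
```

With readOnlyHint set, a client that respects annotations can auto-execute this tool instead of prompting the user on every call.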
Annotations are hints, not enforcement. Not all clients read or respect them. They complement—but do not replace—the access control and human-in-the-loop strategies described above.
Getting tool granularity right is one of the most consequential decisions when building a custom MCP server. There are two failure modes, and both are common.
Too granular (subatomic). Many teams start by mapping their existing APIs directly to MCP tools. The problem is that internal APIs are often designed to be called in a specific sequence—each call returns a partial result that only makes sense in context of the calls around it. An AI client doesn't know this sequence. It sees a flat list of tools, tries to use them independently, and produces incorrect or incomplete results. If your tool surface requires an agent to call tools in a prescribed order to accomplish anything useful, it's too granular.
Too coarse (bundled workflows). The opposite problem is tools that bundle too many steps into a single operation—a tool that, for example, creates a lead, scores it, and routes it to a rep in one call. This kind of tool reduces flexibility: the agent can't intervene between steps, adjust behavior based on intermediate results, or reuse any part of the workflow in a different context. Broadly, if a tool is encoding a multi-step process with its own decision logic, it's better represented as a Salesforce Flow that the agent can invoke as needed.
The Goldilocks zone. The right level of grain sits between these extremes, and it isn't always a single atomic operation. A good tool returns something an AI client can reason about on its own, without needing to know what to call next. Sometimes that's a single operation—create a record, run a query. Sometimes it's a slightly more molecular unit—retrieve an opportunity with its related contacts and recent activity in one call, because that's the natural unit of information an agent needs to reason about a deal. The test is whether a tool call produces a self-contained, useful result. If the result is only meaningful after calling two more tools, reconsider the design.
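To make the molecular unit concrete, a hypothetical get_opportunity_context tool might return the deal and its surrounding context in one self-contained result (the shape and field names below are illustrative, not a platform contract):

```json
{
  "opportunity": { "id": "006...", "name": "Acme Renewal", "stage": "Negotiation", "amount": 120000 },
  "contacts": [
    { "id": "003...", "name": "Dana Lee", "role": "Decision Maker" }
  ],
  "recentActivity": [
    { "type": "Call", "date": "2025-06-02", "subject": "Pricing discussion" }
  ]
}
```

An agent can reason about this result immediately; a subatomic design would force it to chain three separate queries and stitch the pieces together itself.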
When in doubt, start with slightly coarser tools and split them if agents demonstrate they need finer control—rather than starting subatomic and trying to compose upward.