Supported Models

Salesforce supports large language models (LLMs) from multiple providers, such as Amazon Bedrock, Azure OpenAI, OpenAI, and Vertex AI from Google. Salesforce-managed models are available out of the box. You can also bring your own model (BYOLLM) by using Einstein Studio.

This table lists the API names for all the standard configuration models in Einstein Studio. In addition to these models, you can use the API name from any custom model configuration in Einstein Studio.

To see details, such as model version and supported regions, see Large Language Model Support in Salesforce Help.

| Model | API Name | Notes |
| --- | --- | --- |
| Anthropic Claude 3 Haiku on Amazon | sfdc_ai__DefaultBedrockAnthropicClaude3Haiku | Salesforce Trust Boundary* |
| Azure OpenAI Ada 002 | sfdc_ai__DefaultAzureOpenAITextEmbeddingAda_002 | Embeddings only |
| Azure OpenAI GPT 3.5 Turbo | sfdc_ai__DefaultAzureOpenAIGPT35Turbo | |
| Azure OpenAI GPT 3.5 Turbo 16k | sfdc_ai__DefaultAzureOpenAIGPT35Turbo_16k | Deprecated |
| Azure OpenAI GPT 4 Turbo | sfdc_ai__DefaultAzureOpenAIGPT4Turbo | Not supported by Models API. Use BYOLLM instead. |
| OpenAI Ada 002 | sfdc_ai__DefaultOpenAITextEmbeddingAda_002 | Embeddings only |
| OpenAI GPT 3.5 Turbo | sfdc_ai__DefaultOpenAIGPT35Turbo | |
| OpenAI GPT 3.5 Turbo 16k | sfdc_ai__DefaultOpenAIGPT35Turbo_16k | Deprecated |
| OpenAI GPT 4 | sfdc_ai__DefaultOpenAIGPT4 | Older GPT-4 model |
| OpenAI GPT 4 32k | sfdc_ai__DefaultOpenAIGPT4_32k | Deprecated |
| OpenAI GPT 4 Omni (GPT-4o) | sfdc_ai__DefaultGPT4Omni | Latest GPT-4 model. Geo-aware. |
| OpenAI GPT 4 Omni Mini (GPT-4o mini) | sfdc_ai__DefaultOpenAIGPT4OmniMini | Low-latency version of GPT-4o. Geo-aware. |
| OpenAI GPT 4 Turbo | sfdc_ai__DefaultOpenAIGPT4Turbo | Older GPT-4 model |

* Salesforce Trust Boundary: Anthropic Claude 3 Haiku on Amazon is operated on Amazon Bedrock infrastructure entirely within the Salesforce Trust Boundary. In contrast, other models are operated by Salesforce partners, either inside a shared trust zone or through the LLM provider directly using Einstein Studio’s bring your own LLM (BYOLLM) feature.

When you bring your own LLM, you consume 30% fewer Einstein Requests compared to other models. For details, see Einstein Usage.

The Models API supports Einstein Studio’s bring your own LLM (BYOLLM) feature, which currently supports Amazon Bedrock, Azure OpenAI, OpenAI, and Vertex AI from Google as foundation model providers. With BYOLLM, you can add a foundation model from a supported provider, configure your own instance of the model, and connect to the model using your own credentials. Although inference is handled by the LLM provider, the request is still routed through the Models API and Trust Layer features are fully supported.

Using a BYOLLM model with the Models API works the same as using any other model. Look up the API Name of the configured model in Einstein Studio and use it as the {modelName} in the REST endpoint path or as the modelName property of the Apex request object.
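To make the {modelName} substitution concrete, here's a minimal Python sketch that builds the URL and body for a text-generation request. The host, endpoint path, and payload shape are assumptions based on the Models API REST pattern; verify them against the Models API reference before relying on them.

```python
# Sketch: addressing a model (standard or BYOLLM) by its API Name.
# The host, path, and body shape below are assumptions -- check the
# Models API reference for the authoritative endpoint definition.

MODELS_API_HOST = "https://api.salesforce.com"

def build_generation_request(model_name: str, prompt: str) -> tuple[str, dict]:
    """Return the endpoint URL and JSON body for a text-generation request.

    model_name is the API Name from Einstein Studio: either a standard name
    such as "sfdc_ai__DefaultOpenAIGPT35Turbo" or the API Name of your own
    BYOLLM model configuration.
    """
    url = f"{MODELS_API_HOST}/einstein/platform/v1/models/{model_name}/generations"
    body = {"prompt": prompt}
    return url, body

url, body = build_generation_request(
    "sfdc_ai__DefaultOpenAIGPT35Turbo",
    "Summarize this case for the support team.",
)
print(url)
```

Swapping in a BYOLLM model is then just a matter of passing a different API Name; the request shape doesn't change.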

This table lists all the foundation models that you can add in Einstein Studio with BYOLLM.

| Provider | Model | Notes |
| --- | --- | --- |
| Amazon Bedrock | Claude 3 Haiku | |
| Amazon Bedrock | Claude 3 Sonnet | |
| Amazon Bedrock | Claude 3 Opus | |
| Amazon Bedrock | Claude 3.5 Sonnet | |
| Azure OpenAI, OpenAI | GPT 3.5 Turbo | |
| Azure OpenAI, OpenAI | GPT 3.5 Turbo 16k | Deprecated |
| Azure OpenAI, OpenAI | GPT 4 Omni (GPT-4o) | Latest GPT-4 model |
| Azure OpenAI, OpenAI | GPT 4 Turbo | Older GPT-4 model |
| OpenAI | GPT 4 | Older GPT-4 model |
| OpenAI | GPT 4 32k | Deprecated |
| Vertex AI (Google) | Gemini Pro 1.5 | |

To learn more about BYOLLM, see Bring Your Own Large Language Model in Einstein 1 Studio on the Salesforce Developers Blog.

The Bring Your Own Large Language Model (BYOLLM) Open Connector is designed to provide powerful AI solutions to customers, independent software vendors (ISVs), and internal Salesforce teams. With this connector, you can connect the Einstein AI Platform to any language model, including custom-built models.

The BYOLLM Open Connector is a commitment to community-driven growth and innovation. By allowing users to integrate any LLM, from models hosted on major cloud platforms to models developed in-house, we're opening up a world of possibilities for enhanced, bespoke AI applications. This capability caters to large enterprises looking to leverage specific models like IBM Granite or Databricks DBRX, and it also supports smaller teams eager to experiment with open-source models. With features designed for ease of use, such as a streamlined UX in Einstein Studio and API specifications closely based on the OpenAI API, the connector empowers users to enhance their AI-driven applications while maintaining high standards of security and compatibility.

See the Einstein AI Platform GitHub repository for API specifications and example code for the LLM Open Connector.
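Because the connector's API specifications are modeled on the OpenAI API, the request and response shapes your endpoint exchanges look roughly like the chat-completions format. The sketch below is a toy illustration of those shapes; the field names follow the OpenAI convention, and the model name is hypothetical. Check the Einstein AI Platform GitHub repository for the authoritative schema.

```python
# Sketch: the OpenAI-style request/response shapes an LLM Open Connector
# endpoint typically exchanges. This toy handler just echoes the last user
# message; field names are assumptions based on the chat-completions format.

def handle_chat_completion(request: dict) -> dict:
    """Toy handler: echo the last user message back as the completion."""
    last_user = next(
        m["content"] for m in reversed(request["messages"]) if m["role"] == "user"
    )
    return {
        "id": "chatcmpl-demo",
        "object": "chat.completion",
        "model": request["model"],
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": f"Echo: {last_user}"},
                "finish_reason": "stop",
            }
        ],
    }

response = handle_chat_completion({
    "model": "my-custom-model",  # hypothetical model name
    "messages": [{"role": "user", "content": "Hello"}],
})
print(response["choices"][0]["message"]["content"])
```

A real connector would forward the messages to your model instead of echoing, but the surrounding envelope stays the same.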

To choose the right model for your application, consider these criteria.

Capabilities: What can the model do? Advanced models can perform a wider variety of tasks (usually at the expense of higher costs and slower speeds—or both). The ability to follow complex instructions is a key indicator of model capabilities.

Cost: How much does the model cost to use? For details on usage and billing, see Einstein Usage.

Quality: How well does the model respond? The quality of model responses can be hard to measure quantitatively, but a good place to start is the LMSYS Chatbot Arena.

Speed: How long does it take the model to complete a task? Speed includes measures of both latency and throughput.

For benchmarks and evaluations of LLMs and embedding models, see these resources.

The context window determines how many input and output tokens the model can process in a single request. The context window includes system messages, prompts, and responses.

All models are currently limited to a context size of 32,768 tokens when data masking is turned on in the Einstein Trust Layer. To turn off data masking and use the full context window, see Set Up Einstein Trust Layer in Salesforce Help.
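With data masking on, it can be useful to check up front that a request fits the 32,768-token limit. The sketch below uses a rough 4-characters-per-token heuristic, which is an assumption for illustration, not the provider's actual tokenizer.

```python
# Sketch: pre-flight check that a request fits the Trust Layer's
# data-masking context limit of 32,768 tokens (system message + prompt +
# room reserved for the response). The chars-per-token ratio is a rough
# heuristic, not the model's real tokenizer.

MASKED_CONTEXT_LIMIT = 32_768
CHARS_PER_TOKEN = 4  # rough heuristic for English text

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_masked_context(system_msg: str, prompt: str, max_response_tokens: int) -> bool:
    used = estimate_tokens(system_msg) + estimate_tokens(prompt) + max_response_tokens
    return used <= MASKED_CONTEXT_LIMIT

print(fits_masked_context("You are a helpful agent.", "Summarize this case.", 1024))
```

For production use, count tokens with the model provider's tokenizer rather than a character heuristic.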

For more information about the context window for individual models, see the model provider site.

A geo-aware model automatically routes your LLM request to a nearby data center based on where Data Cloud is provisioned for your org. Geo-aware routing offers greater control over data residency, and using nearby data centers minimizes latency.

Proximity to the nearest LLM server is determined by the region in which your Einstein generative AI platform instance is located. If you enabled the Einstein generative AI platform on or after June 13, 2024, then your Einstein generative AI platform region is the same as your Data Cloud region (Data Cloud: Data Center Locations). Otherwise, contact your Salesforce account executive to learn where it’s provisioned.

To learn more about geo-aware routing, see Geo-Aware LLM Request Routing in Salesforce Help.

Use these API names for each model type.

| Model Name | API Name | Notes |
| --- | --- | --- |
| Azure OpenAI Ada 002 | sfdc_ai__DefaultTextEmbeddingAda_002 | Embeddings only |
| Azure OpenAI GPT-3.5 Turbo | sfdc_ai__DefaultGPT35Turbo | |
| Azure OpenAI GPT-3.5 Turbo 16K | sfdc_ai__DefaultGPT35Turbo_16k | Deprecated |
| Azure OpenAI GPT-4 Turbo | sfdc_ai__DefaultGPT4Turbo | Older GPT-4 model |
| Azure OpenAI GPT-4o | sfdc_ai__DefaultGPT4Omni | Latest GPT-4 model |
| OpenAI GPT-4 | sfdc_ai__DefaultGPT4 | Older GPT-4 model |
| OpenAI GPT 3.5 Turbo Instruct | sfdc_ai__DefaultGPT35TurboInstruct | |

This table describes the countries and Amazon Bedrock data center regions where data resides or passes through for geo-aware models from Anthropic, such as Claude 3 Haiku.

| Data Cloud Country | Trust Layer Country | Amazon Bedrock Data Center |
| --- | --- | --- |
| Australia | Australia | Asia Pacific (Sydney) |
| Brazil | United States and Brazil* | South America (São Paulo) |
| Germany | Germany | EU (Frankfurt) |
| India | India | Asia Pacific (Mumbai) |
| Japan | Japan | US West (Oregon) |
| United States (East) | United States | US West (Oregon) |
| United States (West) | United States | US West (Oregon) |
| All others | United States | US East (N. Virginia) |

Requests are routed to a nearby data center provided by Azure OpenAI and hosted in one of its Azure availability zones.

If there’s a problem with the nearby data center, requests are routed to a data center provided by OpenAI in the United States. This fallback routing to the United States can’t be disabled.

For Brazil, Canada, the United States, and all other countries where geo-aware routing isn’t yet supported, the request is routed directly to OpenAI in the United States.

The Trust Layer also has separate data residency regions for:

  • Data masking and toxicity detection models
  • Audit Trail data stored in Data Cloud

This table describes the countries and data center regions where data resides or passes through for geo-aware models from OpenAI, such as GPT 3.5 Turbo.

| Data Cloud Country | Trust Layer Country | Data Center Region | Fallback Region |
| --- | --- | --- | --- |
| Australia | Australia | Australia East | United States |
| Brazil | United States and Brazil* | US East 2 / US West | Not applicable |
| Canada | United States | US East 2 / US West | Not applicable |
| France | Germany | France Central | United States |
| India | India | India South | United States |
| Italy | Germany | France Central | United States |
| Japan | Japan | Japan East | United States |
| Germany | Germany | France Central | United States |
| Spain | Germany | France Central | United States |
| Sweden | Germany | France Central | United States |
| Switzerland | Germany | France Central | United States |
| United Kingdom | Germany | UK South | United States |
| United States | United States | US East 2 / US West | Not applicable |
| All others | United States | US East 2 / US West | Not applicable |

*For Brazil, data masking models and toxicity detection models are hosted in the United States and Audit Trail data is hosted in Brazil.

Announcements of new models and model deprecations appear in the monthly Einstein Platform release notes.

Model deprecation is the process where a model provider gradually phases out a model (usually in favor of a new and improved model). The process starts with an announcement outlining when the model will no longer be accessible or supported. The deprecation announcement usually contains a specific shutdown date. Deprecated models are still available to use until the shutdown date.

After the shutdown date, you won’t be able to use that model in your application and requests to that model will be rerouted to a replacement model. We recommend that you start migrating your application away from a model as soon as its deprecation is announced. During migration, update and test each part of your application with the replacement model that we recommend. For more details about deprecated models, see Large Language Model Support in Salesforce Help.
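One way to make that migration a small, testable change is to resolve model names through a single lookup instead of scattering API names across your application. The sketch below is an illustrative pattern, not a Salesforce feature; the task names are hypothetical, and the model API names come from the tables above.

```python
# Sketch: funnel every model reference through one mapping so that a
# deprecation migration is a one-line swap plus retesting. Task names
# here are illustrative; API names are Einstein Studio model API names.

ACTIVE_MODELS = {
    # task -> Einstein Studio model API name; swap the value here when a
    # model is deprecated, then retest each task against the replacement.
    "summarize": "sfdc_ai__DefaultOpenAIGPT35Turbo",
    "embed": "sfdc_ai__DefaultOpenAITextEmbeddingAda_002",
}

def model_for(task: str) -> str:
    return ACTIVE_MODELS[task]

print(model_for("summarize"))
```

With this indirection, each part of the application can be pointed at the recommended replacement model and tested independently during the migration window.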