Get Started with Einstein Generative AI

Einstein brings generative AI to your business at scale. The Einstein 1 Platform with Trust Layer securely connects your data with the power of large language models (LLMs).

With Einstein Studio, configure new models and test prompts in a playground environment before deploying a model to production.

Use the Models API to generate text and generate embedding vectors. The Models API provides Apex classes and REST endpoints that connect your application to LLMs from Salesforce partners, including Anthropic, Google, and OpenAI.
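As an illustration only, here is a Python sketch that assembles a chat-generation request body for the Models API's REST interface. The host, endpoint path, and model name are assumptions for the example; check the Models API reference for the exact endpoints and model names available in your org.

```python
import json

# Assumed values for illustration; verify against the Models API reference.
API_HOST = "https://api.salesforce.com"

def build_chat_request(model: str, messages: list[dict]) -> dict:
    """Assemble the URL and JSON body for a chat-generation call."""
    url = f"{API_HOST}/einstein/platform/v1/models/{model}/chat-generations"
    body = {"messages": messages}
    return {"url": url, "body": json.dumps(body)}

request = build_chat_request(
    "sfdc_ai__DefaultOpenAIGPT4Omni",  # example model name; confirm in your org
    [{"role": "user", "content": "Summarize this account's open cases."}],
)
print(request["url"])
```

The same request shape could be sent from Apex or any HTTP client; authentication (an access token with the appropriate scopes) is omitted here.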

Simplify daily tasks by integrating prompt templates, powered by generative AI, into workflows. Create, test, revise, customize, and manage prompt templates that incorporate your CRM data through merge fields that reference record fields, flows, related lists, and Apex. Prompt Builder helps you make effective prompts that safely connect you and your data with LLMs.
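To show the idea behind merge fields, here is a toy Python resolver that substitutes record values into a template. This is an illustration only: the placeholder syntax and resolver below are invented for the sketch, and Prompt Builder resolves real merge fields (record fields, flows, related lists, Apex) for you.

```python
import re

def render_prompt(template: str, record: dict) -> str:
    """Replace {!FieldName} placeholders with values from a record dict."""
    def lookup(match: re.Match) -> str:
        return str(record.get(match.group(1), ""))
    return re.sub(r"\{!(\w+)\}", lookup, template)

template = "Draft a follow-up email for {!Name}, whose case status is {!Status}."
record = {"Name": "Acme Corp", "Status": "Escalated"}
print(render_prompt(template, record))
# → Draft a follow-up email for Acme Corp, whose case status is Escalated.
```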

Bring the power of conversational AI to your business with Einstein Copilot. Build an intelligent, trusted, and customizable AI assistant to help your users get more done in Salesforce.

Einstein for Developers (Beta) is an AI-powered developer tool that’s available as an easy-to-install Visual Studio Code extension built using CodeGen, the secure, custom AI model from Salesforce. The extension is available in the VS Code marketplace and the Open VSX registry.

Here are the answers to the most frequently asked questions from developers about Einstein Generative AI.

The Supported Models page lists all the models that are compatible with the Models API, along with model selection criteria, links to benchmarks, and a comparison table.

Einstein Requests are a metric for tracking usage of Einstein Generative AI features, including the Models API. For details, see Einstein Usage.

A Models API request is a single HTTP request to the Models API, either through Apex or REST. Each Models API request consumes Einstein Requests based on the sum of the input (prompts, user messages, and system messages) and output (responses to prompts and assistant messages).

Chat applications can consume a large number of Einstein Requests because all the text for the conversation history must be sent with each Models API request.
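The growth described above can be sketched in a few lines of Python. The ~4-characters-per-token ratio is a rough heuristic assumed for the example, not the tokenizer any particular model uses; the point is that each request's input includes every prior turn.

```python
# Rough heuristic (assumption): about 4 characters per token.
def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

history: list[str] = []
total_input_tokens = 0
for turn in ["Hi, what is my open case count?",
             "Break that down by priority.",
             "Draft a summary email."]:
    # Each new request must resend the entire conversation so far.
    history.append(turn)
    request_tokens = sum(approx_tokens(m) for m in history)
    total_input_tokens += request_tokens
    print(f"turn {len(history)}: ~{request_tokens} input tokens this request")

print(f"~{total_input_tokens} input tokens across all requests")
```

Because earlier turns are re-counted on every request, cumulative input usage grows roughly quadratically with conversation length, which is why long chats consume Einstein Requests quickly.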

All Models API requests are routed through the Trust Layer. Data Cloud is required to ensure that the Einstein Trust Layer functions correctly. For details, see Trust Layer.

You can customize model parameters by creating a custom-configured model in Einstein Studio. However, configured models are not yet supported by the Models API.

The Models API supports multimodal models like GPT-4o from OpenAI, but you can only use the text modality for now.

Because all LLM generations are processed by Trust Layer models for data masking and toxicity detection, the Models API doesn’t currently support streaming.

To support a wide range of models with a common interface, the Models API doesn’t currently support special features from ChatGPT or OpenAI’s API, such as web browsing, JSON mode, function calling, DALL·E image generation, and data analysis.

Although some of the models supported by the Models API have extended context windows, all models are currently limited to a context size of 32,768 tokens to ensure compatibility with the Trust Layer models.
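A simple pre-check against that limit can be sketched as follows. The ~4-characters-per-token estimate and the reserved output budget are assumptions for the example; use the tokenizer appropriate to your model for precise counts.

```python
# The current Models API context limit, per the documentation above.
CONTEXT_LIMIT = 32_768

def fits_context(prompt: str, reserved_for_output: int = 1_024) -> bool:
    """Return True if the prompt plus an output budget fits the limit.

    Uses a rough ~4-characters-per-token estimate (an assumption, not a
    real tokenizer), and reserves room for the model's response.
    """
    approx_tokens = len(prompt) // 4
    return approx_tokens + reserved_for_output <= CONTEXT_LIMIT

print(fits_context("Summarize the attached case notes."))  # small prompt
print(fits_context("x" * 200_000))                         # far too large
```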