Get Started with Einstein Generative AI
Einstein brings generative AI to your business at scale. The Einstein 1 Platform with Trust Layer securely connects your data with the power of large language models (LLMs).
With Einstein Studio, configure foundation models and test prompts in a playground environment before deploying a model to production.
You can also bring your own large language model (BYOLLM) from an LLM provider using your own account. The model is saved to the Model Library in Einstein Studio as a foundation model that you can configure.
To connect any language model, including custom-built models, to the BYOLLM feature in Einstein Studio, use the LLM Open Connector API specification. A minimal endpoint sketch follows the resource links below.
- Get Started with Einstein Studio
- Salesforce Help: Add a Foundation Model
- Bring Your Own LLM
- Einstein AI Platform on GitHub: LLM Open Connector
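For orientation, here's a minimal sketch of what an Open Connector-style endpoint could look like in Python. The route, request fields, and response shape are assumptions modeled on a common chat-completions pattern, and the echo logic stands in for a real model call; treat the LLM Open Connector specification on GitHub as the authoritative contract.

```python
# Hypothetical sketch of an LLM Open Connector-style endpoint.
# The /chat/completions route and the request/response field names are
# assumptions; consult the LLM Open Connector spec for the real contract.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.post("/chat/completions")
def chat_completions():
    payload = request.get_json(force=True)
    messages = payload.get("messages", [])
    # Replace this echo stub with a call to your own model.
    last_user = next(
        (m.get("content", "") for m in reversed(messages) if m.get("role") == "user"),
        "",
    )
    return jsonify({
        "id": "chatcmpl-demo",
        "object": "chat.completion",
        "model": payload.get("model", "my-custom-model"),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": f"Echo: {last_user}"},
            "finish_reason": "stop",
        }],
    })

if __name__ == "__main__":
    app.run(port=8000)
```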
Use the Models API to generate text and embedding vectors. The Models API provides Apex classes and REST endpoints that connect your application to LLMs from Salesforce partners, including Anthropic, Google, and OpenAI. A sample REST call follows the resource links below.
- Models API Developer Guide (Beta)
- Access Models API with Apex
- Access Models API with REST
- Build Lightning Web Components and Flows with Models API
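As a rough sketch, a text-generation request over REST could look like the following. The host, path, headers, and model name shown here are assumptions for illustration only; confirm the exact endpoint, authentication, and payload in the Models API Developer Guide.

```python
# Illustrative Models API text-generation call over REST.
# The endpoint URL, headers, and model name are assumptions; verify them
# against the Models API Developer Guide before using this in an org.
import requests

ACCESS_TOKEN = "<OAuth access token for your Salesforce org>"
MODEL = "sfdc_ai__DefaultGPT4Omni"  # example name; substitute your configured model

response = requests.post(
    f"https://api.salesforce.com/einstein/platform/v1/models/{MODEL}/generations",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json={"prompt": "Summarize the Einstein Trust Layer in one sentence."},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```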
Simplify daily tasks by integrating prompt templates, powered by generative AI, into your workflows. Create, test, revise, customize, and manage prompt templates that incorporate your CRM data through merge fields that reference record fields, flows, related lists, and Apex. Prompt Builder helps you craft effective prompts that safely connect you and your data with LLMs.
Bring the power of conversational AI to your business with Agentforce Agents and Copilot. Build an intelligent, trusted, and customizable AI assistant that helps your users get more done in Salesforce.
Einstein for Developers is an AI-powered developer tool that’s available as an easy-to-install Visual Studio Code extension built using CodeGen, the secure, custom AI model from Salesforce. The extension is available in the VS Code marketplace and the Open VSX registry.
Here are the answers to the most frequently asked questions from developers about Einstein Generative AI.
The Supported Models page not only lists all the models that are compatible with the Models API, but also offers a list of model criteria, links to benchmarks, and a comparison table.
Einstein Requests are a metric for tracking usage of Einstein Generative AI features, including the Models API. For details, see Einstein Usage.
A Models API request is a single HTTP request to the Models API, either through Apex or REST. Each Models API request consumes Einstein Requests based on the sum of the input (prompts, user messages, and system messages) and output (responses to prompts and assistant messages).
Chat applications can consume a large number of Einstein Requests because all the text for the conversation history must be sent with each Models API request.
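The sketch below illustrates that growth: each turn resends the entire message history, so the input size, and with it Einstein Request consumption, increases with every exchange. The send_chat helper is a hypothetical placeholder for a Models API chat request.

```python
# Hypothetical illustration of how chat history inflates each request.
# send_chat() is a placeholder; in a real app it would call the Models API
# chat endpoint with the full message list shown here.
def send_chat(messages):
    return {"role": "assistant", "content": f"(reply to {len(messages)} messages)"}

history = []
questions = ["What is a lead?", "How do I convert one?", "Show an example."]

for turn, question in enumerate(questions, start=1):
    history.append({"role": "user", "content": question})
    sent = list(history)          # the ENTIRE conversation so far is sent
    reply = send_chat(sent)
    history.append(reply)
    print(f"Turn {turn}: request carried {len(sent)} messages")
```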
All Models API requests are routed through the Einstein Trust Layer. Data Cloud is required to ensure that the Trust Layer functions correctly. For details, see Trust Layer.
Salesforce Developers hosts a Postman collection for the Models REST API.
You can customize model parameters by creating a custom-configured model in Einstein Studio. You can then use that configured model with the Models API. See Supported Models.
The Models API supports multimodal models like GPT-4o from OpenAI, but you can only use the text modality for now.
Because all LLM generations are processed by Trust Layer models for data masking and toxicity detection, the Models API doesn’t currently support streaming.
To support a wide range of models with a common interface, the Models API doesn’t currently support special features from ChatGPT or OpenAI’s API, such as web browsing, JSON mode, function calling, DALL·E image generation, and data analysis.
Although some of the supported models for the Models API have extended context windows, all models are currently limited to a context size of 32,768 tokens when data masking is turned on in the Einstein Trust Layer. To turn off data masking and use the full context window, see Set Up Einstein Trust Layer in Salesforce Help.
- Salesforce Help: Einstein Generative AI