LLMs are evolving rapidly, and within the enterprise space, organizations are iterating on their AI strategies by using different model providers and platforms to address their unique use cases. We launched BYO LLM earlier in 2024 to give customers the ability to connect their own models, hosted on OpenAI, Microsoft Azure, Google Vertex AI, or Amazon Bedrock, to Salesforce, so they can use them with Salesforce’s custom prompt templates, APIs, and Apex. Now, we’ve doubled down on our commitment to give our customers choice and flexibility by launching the Salesforce LLM Open Connector.

In this blog, we’ll give you an overview of the LLM Open Connector, its benefits to different user types, some key considerations, and a walk-through of three example tasks.

What is the LLM Open Connector?

The goal of the Salesforce LLM Open Connector is to allow our customers and partners to seamlessly connect any large language model (LLM) to Salesforce for use cases that require generative AI. The LLM can be hosted on any provider or platform of your choice, embracing an ecosystem of model openness and flexibility. The LLM Open Connector is a new option for connecting LLMs within our existing BYO LLM feature in Einstein Studio Model Builder.

Since July 2024, BYO LLM has offered four options for customers wanting to connect their external LLMs to Salesforce: OpenAI, Azure OpenAI, Google Gemini Pro, and Anthropic Claude on Amazon Bedrock. While these popular options cover a broad swath of the LLM landscape, there are many other high-quality LLMs that customers have asked to use, including Mistral, Llama, Cohere, IBM’s Granite, Databricks’ DBRX, AI21 Labs’ Jamba, and more. We created the LLM Open Connector so that any of our customers or partners worldwide can integrate the LLM of their choice and maximize the value of generative AI within Salesforce.

Who is the LLM Open Connector designed for? 

The LLM Open Connector can be used by a variety of user roles, including:

Foundation model developers: Any customer or ISV partner who has built their own LLM can use it with the LLM Open Connector. They can also provide guidance to their customers on how to set it up. See an example for IBM Granite on watsonx.

Administrators: Salesforce admins can easily connect the LLM required by their organization using Model Builder. Once the model is connected and saved, the admin can monitor usage, track feedback, and review audit data.

Prompt template builders: Salesforce Prompt Builder is a powerful tool that allows prompt engineers, data scientists, admins, and developers to build low-code prompts that are natively integrated with the Salesforce UX, with your data for grounding, and with the Salesforce Platform (Flows, Apex, etc.).

Salesforce business users: Ultimately, the goal of connecting models and building prompts is to empower the business user within Salesforce. The possibilities are endless for the types of generative AI solutions you can create in a CRM. We recommend focusing on one or two use cases that have high business value with clear ROI, and that have natural data gravity in CRM (e.g., service cases, sales leads, etc.). To start simple, consider use cases that summarize or generate content to make users more productive.

LLM Open Connector product details

The LLM Open Connector introduces a new option in the BYO LLM user experience, represented as a fifth tile simply labeled “Connect to Your LLM.” This feature allows customers to connect any foundation model of their choice, provided it adheres to our Open Connector specification.

Before you get started, there are a few key requirements to keep in mind:

  • You need to have an LLM that you wish to connect, either with an existing provider or on a platform that you control. The model can be proprietary, open source, or “home grown.”
  • The LLM Open Connector is platform- and model-agnostic, meaning you can choose your model provider, inference platform, hosting platform, and so on.
  • The model endpoint must be a standard REST API endpoint.
  • It must comply with the Open Connector specification, which is based on industry-standard LLM APIs.
  • The model endpoint should support standard foundation model capabilities, primarily chat completions or embeddings (see the sample request after this list).
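
To make these requirements concrete, here is a minimal sketch of the kind of chat completions request that Salesforce sends to a spec-compliant endpoint, and the OpenAI-style response it expects back. The URL, auth header, and model name are placeholder assumptions; the Open Connector specification is the authoritative source for the exact request and response schemas.

```python
import requests

# Placeholder values: the endpoint URL, key, and model name come from
# whatever provider or platform hosts your model.
ENDPOINT = "https://models.example.com/v1"  # hypothetical base URL
API_KEY = "your-private-key"

response = requests.post(
    f"{ENDPOINT}/chat/completions",
    # The auth header name here is an assumption; use whatever the
    # Open Connector specification and your provider require.
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "model": "mistral-small",  # the specific model name you configure
        "messages": [
            {"role": "user", "content": "Summarize the open cases for Acme."}
        ],
        "max_tokens": 256,
        "temperature": 0.2,
    },
    timeout=30,
)
response.raise_for_status()

# A spec-compliant endpoint returns an OpenAI-style chat completion object.
print(response.json()["choices"][0]["message"]["content"])
```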

LLM Open Connector walk-through

With that background, let’s dive into a quick walk-through of the LLM Open Connector by looking at three simple tasks for three user types:

  1. Admin: Connecting the LLM
  2. Prompt engineer: Creating a prompt template
  3. Salesforce user: Invoking the prompt template

Task 1) Admin: Connecting the LLM

With the admin in mind, our first step is to select Einstein Studio Model Builder in Data Cloud, and then select the Add Foundation Model button.

Screenshot showing the Add Foundation Model button in Einstein Studio

We then choose the Connect to Your LLM tile.

Screenshot showing the Connect to Your LLM tile in Model Builder

On the next screen, we enter a few required inputs, such as a user-defined name, the endpoint URL, the private key, and the specific model name. We’ll use the Mistral Small model hosted on the GitHub Models Marketplace for our example, although it can be any model endpoint that conforms to the Open Connector API specification. As a developer, you can use the specifications and examples published on the Salesforce Einstein AI GitHub repo to create your own REST API model endpoints.
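
If you want to stand up a quick test endpoint before pointing Model Builder at a production model, the following is a minimal sketch in Python with Flask. It simply echoes the last user message back in an OpenAI-style chat completion response; a real implementation would validate the key and call your hosted model, and the published specification remains the source of truth for required fields and authentication.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.post("/chat/completions")
def chat_completions():
    """A stub chat completions endpoint shaped like the Open Connector spec."""
    body = request.get_json(force=True)
    messages = body.get("messages", [])
    # Find the most recent user message to echo back.
    last_user_message = next(
        (m["content"] for m in reversed(messages) if m.get("role") == "user"),
        "",
    )
    # Echo stub: replace this with a real call to your hosted model.
    return jsonify({
        "id": "chatcmpl-demo",
        "object": "chat.completion",
        "model": body.get("model", "my-custom-model"),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": f"You said: {last_user_message}"},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    })

if __name__ == "__main__":
    app.run(port=8000)
```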

Screenshot showing the connection details for a new model in Model Builder

Once we’ve successfully connected the LLM, we have a foundation model artifact that is persisted in Data Cloud. We can then click the New Configuration button to create a configured model that will be available in Prompt Builder, the Models API, and elsewhere.

Screenshot showing the New Configuration button for adding a new configured model in Model Builder

Next, in the Model Builder Playground, we enter a simple prompt to verify that our foundation model is working correctly.

Screenshot showing the testing of a new configured model in the Model Builder Playground

Once tested, we save our configuration with a name of our choice. The name we give the configured model is the model name that surfaces in Prompt Builder and other places in the UX.

Screenshot showing the newly created configured model in Model Builder

Clicking the name of our newly configured model takes us to its details. After reviewing them, we can click the Create Prompt Template button to create a new prompt in Prompt Builder for use in Salesforce.

Screenshot showing the Create Prompt Template button for a new template in Model Builder

Task 2) Prompt Engineer: Creating a prompt template

For this walk-through, we’ll step into the shoes of a prompt engineer and create a basic prompt that uses the Account object and the Account Description field.

Screenshot showing the new Prompt Template screen in Prompt Builder

In the Prompt Builder template, we’ll create a short prompt to summarize open cases and opportunities for an account record. Notice that we’re inserting the Cases and Opportunities related lists into the template using the Resource picker, which gives the prompt template the grounded context it needs for each account record.
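
To make this concrete, here’s a simplified sketch of what the template body might look like. The merge fields below are illustrative only; in Prompt Builder, the Resource picker inserts the exact resource syntax for your org’s objects and related lists.

```
You are a helpful assistant for our sales team. Using only the
information provided below, summarize the open cases and opportunities
for this account in a few short bullet points.

Account Name: {!$Input:Account.Name}
Account Description: {!$Input:Account.Description}

Related Cases:
{!$RelatedList:Account.Cases.Records}

Related Opportunities:
{!$RelatedList:Account.Opportunities.Records}
```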

To use our new model, we change the model from the Standard model type to Custom and select the Mistral model that we created previously with the LLM Open Connector.

Screenshot showing the custom model dropdown list in the Prompt Template screen in Prompt Builder

It is important to note that all requests made to a large language model connected with the LLM Open Connector still go through the standard Salesforce AI architecture stack, including the Einstein Trust Layer. In our example using Mistral, the prompt is requesting a chat completion inference from whatever hosting platform and model is behind the API endpoint.

Task 3) Salesforce User: Invoking the prompt template 

Lastly, we’ll look at it from the Salesforce user’s point of view. 

Developers can build experiences using prompt templates in any of the standard ways that Prompt Builder supports. For this example, we’ve simply configured a text field on the account page to use the prompt template we created in Task 2 (using the Mistral model). This experience allows the user to invoke the prompt template with a single click. 

Screenshot showing a standard text field on the account record that is configured to invoke the prompt template

Connecting the prompt template to a field in Salesforce is just one example of how to use our newly connected model. We could also use the model in a custom Lightning web component with the new Models API (see the sketch below), or in custom Agentforce actions. Soon, BYO models connected with the LLM Open Connector will be available in popular turnkey GenAI features that support prompt templates, such as Service Replies and Case Summaries.
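
For example, a configured model can be invoked programmatically via the Models API. The sketch below is a rough illustration in Python; the endpoint path, headers, and model API name are assumptions based on the Models REST API documentation, so verify them against the current docs before relying on this.

```python
import requests

# Assumptions for illustration only: a valid OAuth access token and the
# API name of the configured model from Model Builder.
ACCESS_TOKEN = "<your-oauth-access-token>"
MODEL_API_NAME = "My_Mistral_Small_Model"  # hypothetical configured-model name

resp = requests.post(
    # Path shape based on the Models REST API docs; verify before use.
    f"https://api.salesforce.com/einstein/platform/v1/models/{MODEL_API_NAME}/chat-generations",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json={
        "messages": [
            {"role": "user", "content": "Summarize this account in two sentences."}
        ]
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```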

Conclusion

As you saw in this post, we used the LLM Open Connector to connect and save a Mistral Small model endpoint in Model Builder. We then used it to build a custom prompt template, and finally, we connected the template to a GenAI text field within the core Salesforce user experience. We are excited to hear what LLMs you will connect and use in Salesforce for your generative AI use cases!

About the author

Darvish Shadravan is a Senior Director of Product Management at Salesforce, where he has focused on building AI products and helping customers with machine learning for more than a decade. Darvish’s current interest is enabling AI/ML professionals with maximum flexibility for generative AI applications and advanced data science capabilities in Salesforce. Connect with Darvish via LinkedIn.
