Prompt templates are used to integrate generative AI capabilities into your applications and workflows. You can create them in Prompt Builder, which is part of the Einstein 1 Studio suite of AI development tools. When you invoke a prompt template, it is first resolved, which entails replacing all of the data references in the template (e.g., merge fields, related lists, flows, or Apex) with actual data; the resolved prompt is then sent through the Einstein Trust Layer before the large language model (LLM) generates a response. Some out-of-the-box entry points allow you to invoke prompt templates from Salesforce, such as the Email Composer or Lightning record pages with dynamic forms.

In this blog post, we’ll explore how you can create custom entry points to invoke prompt templates from within your Salesforce apps using flows and Apex, and from your own apps outside Salesforce using the REST API. Be sure to also read our previous blog post on prompt templates, where we review different techniques to ground prompt templates.

Entry points for executing prompt templates

When you invoke a prompt template, it goes through several transformations to produce the final prompt, which is then sent to the LLM. The LLM response, or “completion,” is also processed to produce the final generation(s) that you use in your AI-enabled business workflows or apps. On top of the out-of-the-box entry points, you can initiate this process from your custom business logic implemented with Flow or Apex, and from external systems through the REST API.

Entry points for invoking prompt templates

The table below (also included in the previous blog post) summarizes the inputs and entry points that a prompt template may have, depending on its type.

| Prompt template type | Description | Inputs | Entry points |
| --- | --- | --- | --- |
| Sales Email | Draft personalized emails through Email Composer. | Contact or lead, and optionally another object of your choice | Email Composer, Flow, Apex, REST API, Copilot action |
| Field Generation | Generate text to be stored in a custom field within a record page. | An object of your choice | Record page, Flow, Apex, REST API, Copilot action |
| Flex | Generate text to be used anywhere. | Up to five objects of your choice | Flow, Apex, REST API, Copilot action |
| Record Summary | Generate a summary for a record to be used in Einstein Copilot. | An object of your choice | Copilot action |

Let’s review each of the custom entry points in depth.

Invoking prompt templates from Flow

Every saved and activated prompt template that you create is automatically exposed as an invocable action that you can invoke from Flow using the regular Actions element. You can filter the list of available actions by selecting the Prompt Template category when configuring the element.

Creating a new prompt template action in Flow

Let’s take a look at an example. Imagine that you want to invoke a prompt template like the one below from a flow.

Prompt template that we will invoke from Flow

When you invoke a prompt template from a flow, you must pass each of its inputs as a related entity. We discuss the input options for each prompt template type in our previous blog post. In this example, since this is a field generation prompt template that’s associated with the Contact object, you need to pass a contact as the related entity.

Sample flow invoking a prompt template

As you can see, invoking prompt templates from Flow is very straightforward, which makes it convenient to incorporate AI-generated outputs into your regular business workflows.

Invoking prompt templates from Apex

To invoke prompt templates from Apex, you use classes and methods from Connect in Apex (the ConnectApi namespace).

This time, we’ll use a slightly more complex flex prompt template as an example.

Prompt template that we will invoke from Apex

This template is used to generate social posts that allow real estate agents to promote the properties they’re selling. Note that the prompt template body is longer than shown here. Let’s explore how to invoke it from Apex.

First, like in Flow, each prompt template will expect different inputs to be passed in, so you need to create the inputs in Apex and pass them to the template. The sample flex prompt template is associated with a Property__c custom object, so we need to pass a property record when we invoke it from Apex.
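Here’s a minimal sketch of what building that input could look like. It assumes the template defines an input named Property (hence the Input:Property key); adjust the input name and the record query to match your own template and data.

```apex
// For illustration, grab any property record to ground the template with
String propertyRecordId = [SELECT Id FROM Property__c LIMIT 1].Id;

// Build the input for the prompt template invocation
ConnectApi.EinsteinPromptTemplateGenerationsInput promptInput =
    new ConnectApi.EinsteinPromptTemplateGenerationsInput();

// Wrap the record ID; the map key must match the template's input API name
// ('Property' here is an assumption)
ConnectApi.WrappedValue propertyValue = new ConnectApi.WrappedValue();
propertyValue.value = new Map<String, String>{ 'id' => propertyRecordId };
promptInput.inputParams = new Map<String, ConnectApi.WrappedValue>{
    'Input:Property' => propertyValue
};

// Resolve the template and send it to the LLM (true would only resolve the prompt)
promptInput.isPreview = false;
```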

Note that in the EinsteinPromptTemplateGenerationsInput instance, we can control some parameters of the invocation (see the sketch after this list). Some interesting ones are:

  • The isPreview parameter controls whether the prompt is only resolved (returning the resolution), or resolved and then sent to the LLM (returning the completion).
  • The numGenerations parameter controls how many responses you get from the LLM. The default is one, but you can return multiple completions in a single invocation if you want to choose a desired response from several options.
  • The temperature parameter controls how much risk you want the model to take. The default and minimum value is 0 (no risk), which generates less creative, more deterministic, but more accurate responses; the maximum value is 1.
  • The frequencyPenalty parameter controls the repetitiveness of the generated tokens, taking into account how many times a token has appeared in this or in prior generations.
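As a rough sketch, these settings hang off an additional configuration object attached to the input. The property names follow Salesforce’s published samples, and the values below are purely illustrative.

```apex
// Optional generation settings, attached to the input via additionalConfig
ConnectApi.EinsteinLlmAdditionalConfigInput config = new ConnectApi.EinsteinLlmAdditionalConfigInput();
config.applicationName = 'PromptBuilderPreview'; // application name used in Salesforce samples
config.numGenerations = 1;       // ask for a single completion
config.temperature = 0.0;        // most deterministic output
config.frequencyPenalty = 0.0;   // don't penalize repeated tokens
promptInput.additionalConfig = config;
```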

Finally, you perform the invocation by calling the generateMessagesForPromptTemplate method, and passing in the template API name and the EinsteinPromptTemplateGenerationsInput instance that you created.
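Assuming the flex template’s API name is Promote_Property (a placeholder), the call could look like this:

```apex
// Invoke the prompt template; 'Promote_Property' is a placeholder API name
ConnectApi.EinsteinPromptTemplateGenerationsRepresentation generationsOutput =
    ConnectApi.EinsteinLLM.generateMessagesForPromptTemplate('Promote_Property', promptInput);
```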

The response will be an EinsteinPromptTemplateGenerationsRepresentation, containing the resolved prompt, the generations, and other parameters.

If there was a single generation, you can extract the text from the response item (of type EinsteinLLMGenerationItemOutput) as follows.
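Continuing the sketch above:

```apex
// Take the first (and only) generation and read its text
ConnectApi.EinsteinLLMGenerationItemOutput generation = generationsOutput.generations[0];
String responseText = generation.text;
```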

Note that the response also includes an interesting parameter called safetyScoreRepresentation, which is a set of measures that Salesforce computes to evaluate the safety of the response by passing it through a toxicity detection model. Check the documentation to see the information that’s included in the safety score.

Going back to our flex prompt template example, note that we instructed the template to return valid JSON. This is a very convenient approach when invoking prompt templates from Apex, as you’ll be able to easily parse it, either in Apex or JavaScript, if the response is being sent to a Lightning web component.
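As a quick illustration, if the template were instructed to return a JSON object with a post field (a hypothetical shape), parsing it in Apex could look like this:

```apex
// Parse the JSON returned by the LLM; the 'post' key is hypothetical and
// depends on the structure your template asks for
Map<String, Object> parsed = (Map<String, Object>) JSON.deserializeUntyped(responseText);
String socialPost = (String) parsed.get('post');
```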

As you can see, invoking prompt templates from Apex can be very useful when incorporating AI-generated outputs into more complex, back-end business logic. And of course, don’t forget that you can surface those responses in the front end, calling Apex from LWC.

Invoking prompt templates from the REST API

Finally, to invoke prompt templates from external systems, you use a resource exposed in the Connect REST API.

As in Flow and Apex, you need to pass the inputs that the prompt template expects in the request body.
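As a rough sketch, the request is a POST to the prompt template generations resource. The API version, template name, input name, and record ID below are placeholders; check the Connect REST API reference for the exact shape.

```
POST /services/data/v60.0/einstein/prompt-templates/Promote_Property/generations

{
  "isPreview": false,
  "inputParams": {
    "valueMap": {
      "Input:Property": {
        "value": { "id": "<property record ID>" }
      }
    }
  },
  "additionalConfig": {
    "numGenerations": 1,
    "applicationName": "PromptBuilderPreview"
  }
}
```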

And the response body of the generation will look something like this:
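Roughly, and mirroring the Apex representation described above (values are placeholders):

```json
{
  "prompt": "<the resolved prompt that was sent to the LLM>",
  "generations": [
    {
      "text": "<the generated text>",
      "safetyScoreRepresentation": {}
    }
  ]
}
```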

See a sample request to this resource in the Salesforce Platform Postman collection, and watch this video to learn how to set up Postman to test Salesforce’s APIs.

Thanks to the Connect REST API, it’s not only possible to invoke prompt templates programmatically within the Salesforce Platform, but also from third-party systems. Now, let your imagination fly and think about all the fantastic use cases that you could implement by using this capability!

Try it out!

Prompt Builder requires an Einstein 1 Edition license, or it can be purchased as an add-on. You can try it out for free by requesting a five-day trial org as part of the new Quick Start: Prompt Builder module on Trailhead. As a bonus, Einstein Copilot will be included in the trial org as well. And if you want to know more, take a look at this comprehensive list of resources.

About the author

Alba Rivas works as a Principal Developer Advocate at Salesforce. You can follow her on Twitter or LinkedIn.
