As part of the Dreamforce ’24 developer keynote, we demonstrated how to use the Models API (beta) to securely augment the Coral Cloud Resorts sample app (available on GitHub) with generative AI on the platform. This blog post takes a closer look at two specific use cases that were demoed during the keynote: creating SVG images dynamically and submitting feedback.
Models API use cases
The Models API allows you to invoke standard models offered by Salesforce or your own models connected via Model Builder, through the Einstein Trust Layer. It supports multiple use cases, and it has been exposed so you can build custom generative AI apps and workflows with code. If you haven’t heard about the Models API before, make sure to read our blog post on it (and Model Builder) before continuing.
As a summary, here are the capabilities that the Models API exposes:
- Generate text: You can send a prompt to the model and have it generate text. Using the Models API directly is ideal for text generation when you want to craft prompts dynamically in code and you don’t need the declarative customization capabilities of Prompt Builder prompt templates.
- Submit feedback: Provided you have audit and feedback activated (requires Data Cloud), you can use the Models API to submit feedback for a given generation (the different responses that the model produces). You can use this feedback later to further fine-tune your models and improve the quality of your prompts.
- Create a custom chat session: Using the Models API, you can create a custom chat session with the model, in which the history of messages is incorporated into each request, so the model has the full context of the conversation. When using this capability, on top of user messages, you can also send system messages, which are messages that your app crafts to provide additional context to the model.
- Generate embeddings: Embeddings are mathematical representations of content, such as words, pictures, audio, or video. Embeddings are stored in a vector database, where vectors represent the similarity of the content. The Models API can be used to generate embeddings for given text chunks. In this case, we recommend using an embeddings-specific model for the generation.
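To illustrate how embedding vectors capture similarity, here is a minimal JavaScript sketch using cosine similarity. The three-dimensional vectors are made up for the example; real embedding models return vectors with hundreds or thousands of dimensions.

```javascript
// Cosine similarity: close to 1 means similar direction (similar content),
// close to 0 means unrelated content
function cosineSimilarity(a, b) {
    const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
    const norm = (v) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
    return dot / (norm(a) * norm(b));
}

// Made-up, low-dimensional embeddings for illustration only
const beach = [0.9, 0.1, 0.2];
const ocean = [0.8, 0.2, 0.3];
const invoice = [0.1, 0.9, 0.7];

console.log(cosineSimilarity(beach, ocean)); // high: related concepts
console.log(cosineSimilarity(beach, invoice)); // lower: unrelated concepts
```

A vector database performs this kind of comparison at scale, returning the stored chunks whose embeddings are closest to the embedding of a query.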
In this post, we’re focused on the first two capabilities.
The Coral Cloud Resorts demo
Coral Cloud Resorts is a sample hospitality application for a fictional resort that uses Agentforce, Data Cloud, and the Salesforce Platform to deliver highly personalized guest experiences. When guests arrive at Coral Cloud Resorts, they can register for various experiences available near the resorts.
For the demo, we created a Lightning Web Component (LWC) from which Coral Cloud employees can generate experience tickets using SVGs. We used the Models API to power two features:
- Dynamically generating SVGs using prompts when a user clicks the Generate button, while safely incorporating CRM data.
- Enabling users to give feedback for each generation of a ticket, good or bad, by clicking the thumbs-up or thumbs-down button.
Creating SVGs dynamically
The generation of the SVG makes use of the generate text capability we mentioned earlier. This is an ideal use case for the Models API, because the creation of the prompt is complex and done at runtime. The prompt changes dynamically depending on user selections (whether description, picture, and QR code are included or not), and the QR code is generated using a third-party JavaScript library. Here is a simplified version of the component’s JavaScript code:
import { LightningElement, api, wire } from 'lwc';
import generateText from '@salesforce/apex/LLMService.generateText';
import submitFeedback from '@salesforce/apex/LLMService.submitFeedback';
import { getRecord, getFieldValue } from 'lightning/uiRecordApi';
import NAME_FIELD from '@salesforce/schema/Session__c.Experience__r.Name';
import DESCRIPTION_FIELD from '@salesforce/schema/Session__c.Experience__r.Description__c';
import PICTURE_FIELD from '@salesforce/schema/Session__c.Experience__r.Picture_URL__c';
import DATE_FIELD from '@salesforce/schema/Session__c.Date__c';
import START_TIME_FIELD from '@salesforce/schema/Session__c.Start_Time__c';
import END_TIME_FIELD from '@salesforce/schema/Session__c.End_Time__c';
import { ShowToastEvent } from 'lightning/platformShowToastEvent';
// Static resource that hosts the ticket assets (e.g., the logo image)
import tickets from '@salesforce/resourceUrl/tickets';

export default class GenerateText extends LightningElement {
    error;
    generationId;
    includeDescription = true;
    includePicture = true;
    includeQRCode = true;
    invocationId;
    showSpinner = false;
    @api recordId;

    // Retrieve record details with wire; we need to ground the prompt ourselves!
    @wire(getRecord, {
        recordId: '$recordId',
        fields: [
            NAME_FIELD,
            DESCRIPTION_FIELD,
            PICTURE_FIELD,
            DATE_FIELD,
            START_TIME_FIELD,
            END_TIME_FIELD
        ]
    })
    experience;

    get experienceName() {
        return getFieldValue(this.experience.data, NAME_FIELD);
    }

    get experienceDescription() {
        return getFieldValue(this.experience.data, DESCRIPTION_FIELD);
    }

    get pictureURL() {
        return getFieldValue(this.experience.data, PICTURE_FIELD);
    }

    get date() {
        return getFieldValue(this.experience.data, DATE_FIELD);
    }

    get startTime() {
        return getFieldValue(this.experience.data, START_TIME_FIELD);
    }

    get endTime() {
        return getFieldValue(this.experience.data, END_TIME_FIELD);
    }

    get generatedQRCode() {
        // TODO: Generate QR code for each user using third party library
    }

    // Create prompt dynamically at runtime
    get prompt() {
        let prompt = `Generate an SVG that is an entrance ticket for an activity called ${this.experienceName}.
        Use a harmonized color palette that may include the next colors: #11535F, #045460, #04545F, #FAA390, #F6F4E6, #F86C6C, #17B3B7, #FBC2AA, #36CCCA, #F28C81, #F99082, #EFEEE5, #DCEFE4.
        Use different font families, with different sizes and colors for the different texts following a top to bottom hierarchy.`;
        if (this.includePicture) {
            prompt = `${prompt} For the background, use an img that points to "${this.pictureURL}", with an opacity of 0.3.`;
        }
        prompt = `${prompt} All elements should be separated by a small margin so they're easy to read.
        Include an img that points to "${tickets}/logo_small.png" in the top left corner.
        Include the name of the experience as a heading.`;
        if (this.includeDescription) {
            prompt = `${prompt} Include the activity description: ${this.experienceDescription}. It should be vertically aligned in the middle of the SVG.`;
        }
        prompt = `${prompt} Include a section that contains "".
        Include the session date, ${this.date}, and the session time, ${this.startTime}-${this.endTime}. Simplify time format.`;
        if (this.includeQRCode) {
            prompt = `${prompt} Include an img that points to "${this.generatedQRCode}" in the bottom right corner that is aligned at the bottom right.`;
        }
        return `${prompt} Return just the svg code, with no explanations.`;
    }

    // Handle user selections
    handleIncludeDescriptionChange(event) {
        this.includeDescription = event.target.checked;
    }

    handleIncludePictureChange(event) {
        this.includePicture = event.target.checked;
    }

    handleIncludeQRCodeChange(event) {
        this.includeQRCode = event.target.checked;
    }

    // Send prompt to model and generate SVG code (text)
    async generateResponse() {
        this.showSpinner = true;
        try {
            const response = await generateText({
                prompt: this.prompt
            });
            const message = response.message.substring(
                response.message.indexOf('```svg') + 6,
                response.message.lastIndexOf('```')
            );
            this.template.querySelector('div.svg').innerHTML = message;
            this.invocationId = response.invocationId; // Store so we can provide feedback!
            this.generationId = response.generationId; // Store so we can provide feedback!
            this.error = undefined;
        } catch (error) {
            this.template.querySelector('div.svg').innerHTML = '';
            this.error = JSON.stringify(error);
        } finally {
            this.showSpinner = false;
        }
    }

    // Rest of the code...
}
Note: SVG generation via the prompt templates API is not allowed, as multimodal generation is not yet supported.
The generateResponse method is called when a user clicks the “Generate” button on the LWC. Note that the prompt requests the response to just contain the SVG code, so we can easily extract it by removing the delimiters. Another approach when working with LLMs from code is to ask for JSON, and then parse the response using the JSON.parse JavaScript function.
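As a sketch of that JSON-based approach, here is an illustrative helper (the function name and the sample payload are examples, not part of the Models API):

```javascript
// Extract a JSON payload from an LLM response that wraps it in ```json fences
function parseModelJson(responseText) {
    const start = responseText.indexOf('```json');
    const end = responseText.lastIndexOf('```');
    // Fall back to the raw text when no fences are present
    const jsonText =
        start !== -1 && end > start
            ? responseText.substring(start + '```json'.length, end)
            : responseText;
    return JSON.parse(jsonText);
}

// Example response, as a model might return it
const response =
    'Here is the ticket:\n```json\n{ "experience": "Snorkeling", "start": "10:00" }\n```';
const ticket = parseModelJson(response);
console.log(ticket.experience); // "Snorkeling"
```

Asking the model for JSON and parsing it this way gives you structured fields to work with, at the cost of having to handle parse failures when the model doesn't follow the requested format.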
The generateResponse method calls an Apex method named generateText, which we placed in a generic LLMService class that contains other methods that help us work with LLMs via the Models API and the Connect API.
@AuraEnabled
public static Message generateText(String prompt) {
    // Create generations request
    aiplatform.ModelsAPI.createGenerations_Request request = new aiplatform.ModelsAPI.createGenerations_Request();

    // Specify model (for instance, you could use an external model connected through Model Builder)
    request.modelName = 'Amazon_BedRock_Claude_3_Sonnet_CM_12l_f8cf6e84';

    // Create request body
    aiplatform.ModelsAPI_GenerationRequest body = new aiplatform.ModelsAPI_GenerationRequest();
    request.body = body;

    // Add prompt to body
    body.prompt = prompt;

    String response;
    String invocationId;
    String generationId;
    try {
        // Make request
        aiplatform.ModelsAPI modelsAPI = new aiplatform.ModelsAPI();
        aiplatform.ModelsAPI.createGenerations_Response apiResponse = modelsAPI.createGenerations(
            request
        );
        // Add response to return value
        response = apiResponse.Code200.generation.generatedText;
        invocationId = apiResponse.Code200.id;
        generationId = apiResponse.Code200.generation.id;
    } catch (aiplatform.ModelsAPI.createGenerations_ResponseException e) {
        // Handle error
        throw new EinsteinGenerativeGatewayException(
            'Unable to get a valid response. Error: ' + e.getMessage()
        );
    }

    // Return response using a custom return type, as we need to include the invocation
    // and generation IDs for later feedback
    Message msg = new Message();
    msg.message = response;
    msg.invocationId = invocationId;
    msg.generationId = generationId;
    return msg;
}
As you can see from the code, you can specify the name of the model to use on the generation request. You can specify one of the models that are offered as a service by Salesforce, or a model that you've connected through Model Builder, as we did in this particular use case with an Amazon Bedrock Claude 3 Sonnet model.
Note that we store the invocationId and generationId, so we can provide feedback later. At the moment it’s not possible to configure the Models API to ask for multiple LLM generations in a single invocation, but it is designed to support this capability in the future.
Providing feedback for the generation
The thumbs-up and thumbs-down feedback buttons on the LWC allow Coral Cloud employees to provide feedback regarding how the ticket generation went. These buttons are handled in JavaScript as follows:
async handleLike() {
    this.submitFeedback(true);
}

async handleDislike() {
    this.submitFeedback(false);
}

async submitFeedback(liked) {
    this.showSpinner = true;
    try {
        const response = await submitFeedback({
            invocationId: this.invocationId,
            generationId: this.generationId,
            liked: liked,
            additionalFeedback: '' // We could provide additional feedback if needed
        });
        const event = new ShowToastEvent({
            title: 'Submit Feedback',
            message: response
        });
        this.dispatchEvent(event);
        this.error = undefined;
    } catch (error) {
        this.error = JSON.stringify(error);
    } finally {
        this.showSpinner = false;
    }
}
And this is the implementation of the submitFeedback Apex method that is called when the thumbs-up and thumbs-down buttons are clicked:
@AuraEnabled
public static String submitFeedback(
    String invocationId,
    String generationId,
    Boolean liked,
    String additionalFeedback
) {
    // Create feedback request
    aiplatform.ModelsAPI.submitFeedback_Request request = new aiplatform.ModelsAPI.submitFeedback_Request();

    // Provide feedback information in body
    aiplatform.ModelsAPI_FeedbackRequest feedbackRequest = new aiplatform.ModelsAPI_FeedbackRequest();
    feedbackRequest.id = invocationId;
    feedbackRequest.generationId = generationId;
    feedbackRequest.feedback = liked ? 'GOOD' : 'BAD';
    feedbackRequest.feedbackText = additionalFeedback;
    feedbackRequest.source = 'HUMAN';
    request.body = feedbackRequest;

    // Submit feedback
    String response;
    try {
        aiplatform.ModelsAPI modelsAPI = new aiplatform.ModelsAPI();
        aiplatform.ModelsAPI.submitFeedback_Response apiResponse = modelsAPI.submitFeedback(
            request
        );
        response = apiResponse.Code202.message;
    } catch (aiplatform.ModelsAPI.submitFeedback_ResponseException e) {
        // Handle error
        throw new EinsteinGenerativeGatewayException(
            'Unable to get a valid response. Error: ' + e.getMessage()
        );
    }
    return response;
}
The feedback data is then visible through out-of-the-box reports and dashboards, which query the GenAIFeedback__dlm DMO (Data Model Object) under the hood. You can query this and other DMOs in Apex using SQL or SOQL.
Note that to be able to log feedback, you need to have audit and feedback activated, which requires Data Cloud and consumes Data Cloud credits.
Conclusion
As the two examples highlighted in this post illustrate, the Models API provides a convenient way of using LLMs through the Einstein Trust Layer in your code-crafted apps. It supports multiple use cases, such as text and chat generation, feedback submission, and embeddings creation, and it’s in beta as of October 2024.
Watch the Dreamforce ‘24 developer keynote to see the SVG and feedback use cases in action. To find more code examples and resources about prompt templates and the Models API, check this GitHub repo, where you’ll find the developer guides and references, the existing Postman collections for both APIs, additional videos, and some Trailhead modules to explore.
About the authors
Alba Rivas works as a Principal Developer Advocate at Salesforce. You can follow her on GitHub or LinkedIn.
Charles Watkins is a Lead Developer Advocate at Salesforce. You can follow him on LinkedIn.

