Prompt Builder empowers developers to design, build, and manage prompt templates efficiently. It lets you ground your prompts with Salesforce data and reuse them seamlessly across the platform, so you can tap into the power of generative AI with various large language models (LLMs) while taking full advantage of Salesforce CRM data, metadata, and platform features like flows and Apex.

Prompt engineering is essential for getting the most out of LLMs. This blog post explores practical prompt engineering techniques that you can use with Prompt Builder. These techniques will improve prompt responses and help you integrate generative AI into your applications.

Few-shot prompting

LLMs are trained on massive datasets, allowing them to perform various advanced tasks without specific examples or demonstrations. This technique is known as zero-shot prompting. However, providing a few examples within the prompt sent to the LLM can enable in-context learning, improving the accuracy of the model’s responses. This technique is called few-shot prompting.
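To make the distinction concrete, here is a minimal Python sketch of how a few-shot prompt differs from a zero-shot one. The sentiment labels mirror the Prompt Builder example that follows; `build_prompt()` is a hypothetical helper for illustration, not a Salesforce API.

```python
# Illustrative sketch only: contrasts zero-shot and few-shot prompt
# construction. build_prompt() is a hypothetical helper, not a Salesforce API.
def build_prompt(review, examples=None):
    """Assemble a sentiment-classification prompt, optionally with examples."""
    prompt = "Classify the sentiment of the customer review as positive, negative, or neutral.\n"
    if examples:  # few-shot: labeled demonstrations enable in-context learning
        for text, label in examples:
            prompt += f'Review: "{text}" -> Sentiment: {label}\n'
    prompt += f'Review: "{review}" -> Sentiment:'
    return prompt

# Zero-shot: the model gets the task description only
zero_shot = build_prompt("Gets the job done")

# Few-shot: a handful of labeled examples guide the model's answer
few_shot = build_prompt(
    "Gets the job done",
    examples=[("Awesome experience", "positive"), ("Not worth it", "negative")],
)
```

The few-shot variant simply prepends labeled demonstrations before the final, unlabeled review, which is exactly the pattern the Prompt Builder template below expresses in natural language.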

Let’s consider a sample prompt that summarizes customer reviews for a specific product and generates a sentiment analysis (positive/negative/neutral). In this case, assume that the customer reviews are stored in a custom object related to the Product object, so you can use a related list to ground the prompt in Prompt Builder.

You are a data scientist. Summarize the feedback for the customer reviews below.

{!$RelatedList:Product2.Customer_Reviews__r}

Use the instructions below:

Assign AIGeneratedSentiment a value of positive, negative, or neutral in the final response.

A few examples of a positive review are - great, awesome experience, exceed expectation
A few examples of a negative review are - had issues, not a great experience, not worth
A few examples of a neutral review are - It's decent, gets the job done

The ProductId is known and it is {!$Input:Product2.Id}

Make sure the response is strict JSON.

Here is the example JSON output:
{
  "AIGeneratedFeedback": "",
  "AIGeneratedSentiment": "positive/neutral/negative",
  "ProductId": ProductId
}

Notice that the provided prompt includes examples of what constitutes positive, negative, and neutral reviews.

The screenshot below shows the response output for the above prompt in Prompt Builder.

Few-shot prompting example showing output response

Important considerations

Here are some points to keep in mind with few-shot prompting:

  • As you add more data examples to the prompt, the number of tokens increases. LLMs can process only a maximum number of tokens per transaction (known as the context window), which limits the number of examples you can provide in a single prompt. For example, the GPT-4 Turbo model from OpenAI has a context window of 128K tokens. Aim for a balance: for common use cases, 5-10 good examples usually improve the response significantly.
  • Prompt Builder provides versioning, making it easy to build different versions and test out prompts with different example sets.
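As a rough way to reason about the token budget mentioned above, here is a small Python sketch. The four-characters-per-token ratio is only a common rule of thumb for English text, not an exact tokenizer, and the default limit reflects the 128K context window cited earlier.

```python
# Rough token budgeting for few-shot examples. The four-characters-per-token
# ratio is a common rule of thumb for English text, not an exact tokenizer,
# and 128,000 reflects the context window mentioned above.
def estimate_tokens(text):
    return max(1, len(text) // 4)

def fits_context(base_prompt, examples, limit=128_000):
    """Check whether the base prompt plus all examples stays under the limit."""
    total = estimate_tokens(base_prompt) + sum(estimate_tokens(e) for e in examples)
    return total <= limit
```

For production use, prefer the model vendor's actual tokenizer; this heuristic is only good enough for a first-pass sanity check on how many examples you can afford.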

Chain-of-thought (CoT) prompting

Chain-of-thought prompting involves providing the model with intermediate reasoning steps before asking it to respond to a multi-step problem. As the name suggests, you chain your thoughts together in the prompt to obtain a better response.

Let’s consider an example use case where you want to show recommended action items for a service representative to resolve a case for a product that a customer has purchased. Using chain-of-thought prompting, you can provide reasoning steps and let the LLM suggest steps based on the data context and the reasoning steps. Note that you can provide reasoning as bullet points or use JSON as shown in the example prompt below.

You are an AI assistant helping the Service Agent handling a case reported by {!$Input:Case.Contact.Name} about {!$Input:Case.Asset.Name} with the main problem being {!$Input:Case.Description}

Use the chain of thought below to recommend action items for the service agent to ensure customer satisfaction. Stop processing the action items once you see the NextStep as "End process"

Input:

Confirm Current Status: {!$Input:Case.Asset.Status__c}
Warranty Considerations: {!$Input:Case.Asset.Warranty_Status__c}
Similar Past Issues: {!Apex:GetSimilarCases}

Chain of Thought:
{
  "ChainOfThought": [
    {
      "Step": "Confirm Current Status",
      "Conditions": {
        "Bad": {
          "Action": "Check device status code and report. Stop further processing.",
          "NextStep": "End process"
        },
        "Good": {
          "Action": "No action required at this stage.",
          "NextStep": "Warranty Considerations"
        }
      }
    },
    {
      "Step": "Warranty Considerations",
      "Conditions": {
        "Not Under Warranty": {
          "Action": "Review warranty details to determine eligibility for services. Stop further processing.",
          "NextStep": "End process"
        },
        "Under Warranty": {
          "Action": "No action required at this stage.",
          "NextStep": "Assess Similar Past Issues"
        }
      }
    },
    {
      "Step": "Assess Similar Past Issues",
      "Conditions": {
        "Found": {
          "Action": "Display past issues and resolution steps. Stop further processing.",
          "NextStep": "End process"
        },
        "Not Found": {
          "Action": "No action required at this stage.",
          "NextStep": "Look through Knowledge Articles"
        }
      }
    },
    {
      "Step": "Look through Knowledge Articles",
      "Conditions": {
        "Always": {
          "Action": "Review suggested knowledge articles: FAQ, Issues.",
          "NextStep": "End process"
        }
      }
    }
  ]
}

The final response should be only JSON as shown below. Do not respond with anything other than JSON

{
  "Actions Items": [
    {
      "Step": "Confirm Current Status",
      "Status": "Good",
      "Action": "No action required at this stage."
    },
    {
      "Step": "Look through knowledge articles",
      "Status": "Suggested Articles Available",
      "Action": "Review the suggested knowledge articles: FAQ, Issues."
    }
  ]
}
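Once the model returns this strict-JSON action plan, downstream automation needs to parse it. On the Salesforce platform that step would typically live in Apex (for example, via `JSON.deserializeUntyped`); the Python sketch below, using a hard-coded copy of the sample response shape, simply illustrates what that parsing step looks like.

```python
import json

# Minimal sketch: parse the strict-JSON action plan the LLM returns.
# The hard-coded response below copies the sample output shape from the
# template above; in a real integration this string would be the LLM response.
llm_response = """
{
  "Actions Items": [
    {"Step": "Confirm Current Status", "Status": "Good",
     "Action": "No action required at this stage."},
    {"Step": "Look through knowledge articles",
     "Status": "Suggested Articles Available",
     "Action": "Review the suggested knowledge articles: FAQ, Issues."}
  ]
}
"""

plan = json.loads(llm_response)
summaries = [f'{item["Step"]}: {item["Action"]}' for item in plan["Actions Items"]]
for line in summaries:
    print(line)
```

Instructing the model to respond with JSON only, as the template does, is what makes this kind of mechanical parsing reliable.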

Important considerations

Here are some considerations for CoT prompting:

  • A common way to elicit CoT behavior is to append ‘Let’s think step by step’ to the prompt. This is known as zero-shot chain-of-thought prompting, where you let the model generate the reasoning steps on its own.
  • You can combine chain-of-thought prompting with few-shot prompting to further improve the response.
  • The OpenAI GPT-4 Turbo model has stronger reasoning capabilities. If you have a complex task that involves reasoning, select GPT-4 Turbo as the model in Prompt Builder.

Automatic reasoning

In the chain-of-thought (CoT) example in the previous section, the prompt explicitly provided reasoning steps. Models like GPT-4 can reason on their own when provided with the description and access to the tools you want the model to use as a part of the prompt. On the Salesforce platform, these tools correspond to function names, descriptions, and parameters. These functions can be mapped to Apex classes or flows using custom metadata. Once the LLM returns the necessary response, the response can be parsed, and the linked Apex classes or flows can be executed based on the specified functions.

Let’s explore this further with an example prompt like the one below, which provides all the tools that the LLM has access to for resolving a case in the prompt.

Suggest the action steps as best you can for the "Problem" using tools and their descriptions. You have access to the following tools:

TOOLS
_______

verifyPurchase: Verify the purchase details from the database. Arguments: 'product_id': <String>, 'customer_id': <String>
checkWarranty: Check the warranty status of a product. Arguments: 'product_id': <String>
registerComplaint: Register a customer complaint regarding a product. Arguments: 'product_id': <String>, 'issue_description': <String>
scheduleRepair: Schedule a repair for a product under warranty. Arguments: 'product_id': <String>, 'repair_date': <String>

You should always think about what to do as a chain of action steps using tool descriptions and names.

Once you select a tool, plan out the remaining steps using your own thought process. The next step to take should be one of [registerComplaint, verifyPurchase, checkWarranty, scheduleRepair]

Stop the execution once you exhaust the tools.

The final response should be only JSON as shown below. Do not respond with anything other than JSON.

[
  {
    "step": 1,
    "action 1": action name,
    "Arguments": Arguments from the tool
  },
  {
    "step": 2,
    "action 2": action name,
    "Arguments": Arguments from the tool
  }
]

Problem:
"""Determine if the warranty for product ID 12345 still covers repairs for Customer with Id 86767, and if so, handle a complaint about a malfunctioning part for a refrigerator and schedule a repair for next week."""

You can parse the output response of this prompt in Apex and then use Apex or flows to execute these actions step by step. In this example, the problem statement is hard-coded in the prompt template, but it could also come from a user's prompt or from automation, such as an Apex class that invokes the prompt.
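As a sketch of that parse-and-execute step, the Python below maps planned action names to stub handlers through a dispatch table. On the platform, the handlers would be Apex classes or flows resolved through custom metadata; the stub functions and the simplified "action"/"arguments" keys here are illustrative assumptions, not the exact template output.

```python
import json

# Sketch of parsing the LLM's planned steps and dispatching each action to a
# handler. These Python stubs stand in for Apex classes or flows resolved
# through custom metadata; the response shape is a simplified assumption.
def verify_purchase(product_id, customer_id):
    return f"verified purchase of {product_id} for customer {customer_id}"

def check_warranty(product_id):
    return f"warranty active for {product_id}"

DISPATCH = {
    "verifyPurchase": verify_purchase,
    "checkWarranty": check_warranty,
}

llm_response = """
[
  {"step": 1, "action": "verifyPurchase",
   "arguments": {"product_id": "12345", "customer_id": "86767"}},
  {"step": 2, "action": "checkWarranty",
   "arguments": {"product_id": "12345"}}
]
"""

results = []
for planned in sorted(json.loads(llm_response), key=lambda s: s["step"]):
    handler = DISPATCH.get(planned["action"])
    if handler is None:
        raise ValueError(f"Unknown action: {planned['action']}")
    results.append(handler(**planned["arguments"]))
```

Guarding against unknown action names, as the lookup above does, matters in practice because the model may occasionally propose a tool outside the list you gave it.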

The reasoning capability shown in the above example is what powers conversational AI chatbots. However, this approach requires maintaining the prompt template (a predefined structure for user prompts and system responses) and a library of actions, and it requires writing code to parse and execute actions. For a chatbot, you would also need to maintain the user interface (UI). Building a conversational AI assistant this way means maintaining considerably more code plus the UI, making the solution harder to deploy and scale.

In contrast, Einstein Copilot, Salesforce’s AI assistant, makes this easy. Salesforce bundles standard actions and lets you add a library of custom actions to extend them for your business use case. The Einstein Copilot configurations and actions you create are exposed as metadata, simplifying their maintenance and deployment. Furthermore, Copilot eliminates the need to maintain your own UI, because you can create actions that integrate seamlessly with the out-of-the-box UI. Lastly, you can expose prompt templates as Copilot actions, which lets you chain together and use multiple prompt templates.

Conclusion

Mastering prompt engineering with tools like Prompt Builder can significantly enhance the effectiveness of generative AI within your applications. By combining few-shot prompting and chain-of-thought techniques with the automatic reasoning capabilities of Einstein Copilot, developers can make AI responses more precise and contextually aware, leading to more intelligent and responsive applications. Remember, the key is in the details: thoughtful prompt construction and careful management of examples and reasoning steps are your gateway to unleashing the full potential of generative AI in real-world scenarios.

About the author

Mohith Shrivastava is a Principal Developer Advocate at Salesforce with a decade of experience building enterprise-scale products on the Salesforce Platform. Mohith is currently among the lead contributors on Salesforce Stack Exchange, a developer forum where Salesforce Developers can ask questions and share knowledge. You can follow him on LinkedIn.
