At Dreamforce 2023, there were several sessions on AI. My favorite was presented by Jakub Stefaniak, VP of Technology Strategy and Innovation at Aquiva Labs. In his presentation, Jakub discussed how to augment developer efficiency with large language models (LLMs) using the concept of prompt engineering.

In this blog post, I’ll summarize his insightful information on leveraging prompt engineering as a Salesforce Developer, along with some specific use cases. These techniques can help you get the best out of an LLM like GPT or ChatGPT, which is fine-tuned from GPT-3.5 and optimized for dialogue.

What is prompt engineering?

A prompt is the text input that you give to an LLM to tell it what to do. Prompt engineering is the art of crafting that text so the LLM understands exactly what you want and responds accordingly.

Jakub further explains prompt engineering in the following quote.

Prompt engineering is like software engineering in a new programming language.

Building an app is an iterative process: you start with an idea and then keep improving the code until the app works as expected. Prompt engineering works the same way, with one key difference. Instead of refining lines of code, you refine words and sentences, iterating on plain-English prompts that guide the AI system. The goal is the same: through progressive rounds of tweaking and refinement, you arrive at instructions that yield the desired results.

With software, you craft logic in code. With prompt engineering, you craft logic in language that the LLM understands. It’s an iterative approach to translating what you want into the best prompts in order to make AI models produce your desired outcome.

Below is a diagrammatic representation of iterative prompt development.

Concept of prompt engineering

The general guidelines for writing effective prompts are as follows.

  • Be clear and specific to begin with
  • Analyze why the results are not providing the desired outcomes
  • Refine your ideas and prompts
  • Repeat the process

Prompting techniques

Let’s take a look at three advanced techniques that Jakub explains can benefit Salesforce development use cases.

Role prompting

With role-based prompting, we ask the LLM to behave as a specific persona. For example, we can start a prompt with “Act as a Salesforce Expert/Salesforce Developer.”

To understand this, let’s take an example prompt like the one below.

Explain best practices of Apex development.

See the results of executing this prompt in ChatGPT (with the GPT-3.5 model).

While the above prompt yields general best practices, we can make it more precise by adding the role as shown in the prompt below.

Act as a Salesforce Developer, expert in managing technical debt. Explain the best practices of Apex.

View the results of executing this prompt in ChatGPT. You can clearly see that the results become more meaningful and relevant with the addition of a role to the prompt: it now also suggests following specific design patterns and using dependency injection to scale the code.

Using delimiters

Delimiters help the AI distinguish your request from the input it should work on, which produces better results. A delimiter clearly marks the distinct parts of the prompt, for example, separating instructions from a block of code. An example prompt is shown below.

You are a Senior Salesforce Developer, Expert in Clean Code Development. Explain the code enclosed within triple backticks and propose how to improve it.

Take a look at the results from ChatGPT. You can see that the response is much more meaningful when delimiters are used to separate the code from the rest of the prompt input.
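To make the structure concrete, below is an illustrative sketch of what such a delimited prompt might look like. The Apex snippet and its issue (a DML statement inside a loop) are hypothetical and not taken from the session.

    You are a Senior Salesforce Developer, expert in clean code development.
    Explain the code enclosed within triple backticks and propose how to improve it.

    ```
    List<Account> accounts = [SELECT Id, Description FROM Account LIMIT 50];
    for (Account acc : accounts) {
        acc.Description = 'Reviewed';
        update acc; // DML inside a loop
    }
    ```

Because the code sits between the backticks, the model can treat everything outside them as instructions and everything inside them as the input to analyze.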

Few-shot prompting

By default, the prompts that we write for an LLM are zero-shot prompts. With zero-shot prompting, the model generates a response without being given any examples in the prompt. A common technique to improve results is to include a few relevant examples in the prompt, which is known as few-shot prompting. The screenshot of the slide from the session below clearly explains this technique with an example.

Few-shot prompting example

Few-shot prompting can be very useful for generating metadata or code: you provide some metadata examples or code samples in the prompt. This technique reduces incorrect information from the AI model, also referred to as hallucinations, and it can help you generate complex artifacts such as flow metadata, object metadata, and more.
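As an illustrative sketch (the descriptions and field names below are hypothetical, not from the slide), a few-shot prompt for suggesting field definitions might look like this:

    Suggest a Salesforce custom field API name and data type for each description.

    Description: Total amount billed to the customer
    Field: Invoice_Amount__c (Currency)

    Description: Date the invoice was sent to the customer
    Field: Invoice_Sent_Date__c (Date)

    Description: Number of line items on the invoice
    Field:

The two completed examples show the model the naming convention and output format you expect, so the final answer is much more likely to match your standards than a zero-shot request would be.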

Common prompt engineering use cases

Next, let’s take a look at some important use cases highlighted by Jakub. These use cases, when adopted by developers, have shown great benefits in improving efficiency.

Diagram generation

Salesforce Developers often need to produce diagrams, such as data models, class diagrams in the Unified Modeling Language (UML) format, or sequence diagrams, in order to document their work. An LLM can generate all of these if you provide appropriate prompts and specify the output format.

For example, to generate a UML class diagram, we can give the LLM a prompt like the one below.

Prepare a UML diagram of the below classes. Generate the output in PlantUML code format.

Once we have the necessary PlantUML code, we can use any online PlantUML editor to get the diagram.
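As a rough sketch of what that output could look like, here is a minimal PlantUML class diagram; the classes and methods are hypothetical, not taken from the presentation.

    @startuml
    class InvoiceService {
      +createInvoice(orderId : Id) : Invoice__c
      -calculateTotal(lines : List<Invoice_Line__c>) : Decimal
    }
    class InvoiceSelector {
      +selectByOrderId(orderId : Id) : List<Invoice__c>
    }
    InvoiceService --> InvoiceSelector : uses
    @enduml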

Jakub shared another example of using prompts for diagram generation. The process involves specifying sequence diagrams or data models through bullets or paragraphs of descriptive text. These natural language descriptions are input to a language model, which then outputs Mermaid-formatted code for the requested diagram. This code can then be copied into Mermaid’s live editor to render the final diagrammatic visualization.

In other words, instead of manually coding Mermaid diagrams, developers can provide high-level specifications in plain-English prompts. The language model handles translating those specifications into the appropriate Mermaid code, which lets you go straight from a conceptual description in text to a completed visualization without writing the diagram code by hand.
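For example, a plain-English description such as “a Lightning web component calls an Apex controller, which makes a callout to an external API” might come back as Mermaid code along these lines (a hypothetical sketch, not output from the session):

    sequenceDiagram
        participant LWC as Lightning Web Component
        participant Apex as Apex Controller
        participant API as External API
        LWC->>Apex: getInvoices()
        Apex->>API: HTTP callout for invoice data
        API-->>Apex: JSON response
        Apex-->>LWC: Invoice records

Pasting this code into the Mermaid live editor renders the sequence diagram with no manual drawing required.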

Metadata XML generation

The few-shot prompting approach enables generating metadata XML in the Salesforce DX source format. To do this, you’ll need to provide the language model with:

  1. PlantUML code for your data model
  2. Examples of XML metadata representing objects and fields

Based on that input, the model can output XML in the DX source format that defines the specified objects and fields. This automated prompting method saves time over manually configuring metadata via clicks in a UI.
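For instance, for a hypothetical Invoice__c object with an amount field (not the example from the presentation), the generated field metadata in DX source format might look roughly like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <CustomField xmlns="http://soap.sforce.com/2006/04/metadata">
        <fullName>Invoice_Amount__c</fullName>
        <label>Invoice Amount</label>
        <type>Currency</type>
        <precision>18</precision>
        <scale>2</scale>
        <required>false</required>
    </CustomField>

In DX source format, a file like this would typically live at force-app/main/default/objects/Invoice__c/fields/Invoice_Amount__c.field-meta.xml.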

The screenshots below show an example prompt from Jakub’s presentation that demonstrates how to generate Salesforce metadata for objects and fields by providing PlantUML code and example metadata XML as input to the AI.

PlantUML code for the data model

Example prompt for metadata XML generation

Code generation

Salesforce Developers can generate Apex and LWC code using general-purpose large language models like GPT. To provide a model customized for the Salesforce Platform, we created Einstein for Developers (in Open Beta), trained specifically on Apex, LWC, and related languages. Unlike broad models, Einstein is tailored to the needs of Salesforce Developers. We surface Einstein through Visual Studio Code and Code Builder, our new web IDE. In a future post, we’ll provide guidance on prompting techniques to help you effectively use this Salesforce-specialized model.

Unit testing, code explanations, and code refactoring

Other popular use cases that can immensely benefit Salesforce Developers are unit testing, code explanations, and code refactoring. You can provide code examples and write prompts that generate test code, explain code when you are working on a new system with a large amount of existing code, or even refactor code following clean code best practices to improve quality. These capabilities are powered by LLMs like GPT and are on the Einstein for Developers roadmap.
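As an illustrative sketch, a test-generation prompt that combines the role and delimiter techniques described above might look like this (the wording is mine, not from the session):

    Act as a Salesforce Developer, expert in Apex testing best practices.
    Write an Apex test class for the class enclosed within triple backticks.
    Create all test data inside the test methods, use Test.startTest() and
    Test.stopTest(), and include assertions for both positive and negative scenarios.

    ```
    (paste the Apex class under test here)
    ```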

Should you consider prompt engineering?

A common question that you may have as a skilled Salesforce Developer is: should you consider using prompt engineering in your day-to-day work? Jakub has a beautiful flow chart (generated using AI from his bullet list of decisions) to explain this.

When to use prompt engineering and generative AI

I particularly liked the emphasis on keeping a human in the loop when accuracy is the goal. Another important element I would like to add here is security. When using generative AI tools like ChatGPT for Salesforce-related tasks, it is important not to input any customer-sensitive data into the LLM, as this can expose you to security risks. We have created the Einstein Trust Layer, which allows you to safely adopt generative AI on the Salesforce Einstein 1 Platform.

Conclusion

Prompt engineering techniques, done right, can help you get things done correctly, better, and, most importantly, faster. The techniques we have covered in this post are general and apply to any LLM. I want to thank Jakub again for sharing a wealth of knowledge about prompt engineering techniques with Salesforce Developers.

In the next blog post, we will dive into some more specialized prompting techniques for working with Einstein for Developers, Salesforce’s AI-powered developer tool that helps you generate code for Salesforce-specific languages like Apex.

About the author

Mohith Shrivastava is a Developer Advocate at Salesforce with a decade of experience building enterprise-scale products on the Salesforce Platform. Mohith’s current interest is applied AI, and he loves exploring how AI can be applied to software development. Mohith is currently among the lead contributors on Salesforce Stack Exchange, a developer forum where Salesforce Developers can ask questions and share knowledge. You can follow him via LinkedIn or X (formerly Twitter).
