Einstein for Developers Glossary

This glossary defines generative AI terms that appear throughout the Einstein documentation.

generative pre-trained transformer (GPT): A family of language models developed by OpenAI that are generally trained on a large corpus of text data so they can generate human-like text.

grounding: The process used to inject domain-specific knowledge and customer information into the prompt.

human in the loop (HITL): A model or workflow that requires human interaction, such as a person reviewing or approving generated output before it's used.
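
To illustrate the grounding entry above, here is a minimal Python sketch that injects fields from a hypothetical CRM record into a prompt before it's sent to a model. The record data, the template text, and the final send step are assumptions for illustration only, not part of the Einstein platform API.

```python
# Hypothetical CRM record pulled from the customer's org (illustrative data only).
account = {
    "Name": "Acme Corp",
    "Industry": "Manufacturing",
    "OpenCases": 3,
}

# Grounding: domain-specific knowledge and customer information are injected
# into the prompt text so the model answers with the customer's actual context.
prompt = (
    "You are a support assistant.\n"
    f"Account: {account['Name']} (industry: {account['Industry']})\n"
    f"Open cases: {account['OpenCases']}\n"
    "Draft a short status update email for this account."
)

print(prompt)  # In a real flow, this grounded prompt would be sent to the LLM.
```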

intent: A user’s goal for interacting with the AI assistant.

large language model (LLM): A language model consisting of a neural network with many parameters trained on large quantities of text.

prompt: A natural language description of the task to be done. An input to the LLM.

prompt management: The suite of tools used to build, manage, package, and share prompts, including the prompt templates and the prompt template store.

prompt template: A string with placeholders/tags that can be replaced with custom values to generate a final prompt. The template includes the hyperparameters associated with that prompt and your choice of model/vendor if you’re not using default values.
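
As a rough illustration of this entry, the sketch below models a prompt template as a string with named placeholders plus the hyperparameters and model choice that travel with it. The field names, model name, and default values are assumptions for illustration, not the actual prompt template store format.

```python
from string import Template

# A prompt template: placeholder tags plus the hyperparameters and model
# choice associated with the prompt (values here are illustrative defaults).
template = {
    "text": Template("Summarize the $object_name record for $audience in $tone tone."),
    "hyperparameters": {"temperature": 0.2, "max_tokens": 256},
    "model": "example-llm-v1",  # hypothetical model/vendor choice
}

# Resolving the template: placeholders are replaced with custom values
# to generate the final prompt sent to the model.
final_prompt = template["text"].substitute(
    object_name="Opportunity",
    audience="a sales manager",
    tone="a concise",
)
print(final_prompt)
```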

prompt chaining: A prompt-engineering method that breaks a complex task into several intermediate steps and then ties the intermediate results back together, so that the AI generates a more concrete, customized, and therefore better result. To get the best prompt, use the Retry option to regenerate code.
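
A minimal sketch of prompt chaining, assuming a hypothetical call_llm function standing in for any LLM client: the complex task is split into intermediate prompts, and each step's output feeds the next.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; echoes the prompt here."""
    return f"<model output for: {prompt!r}>"

# Step 1: break the complex task into an intermediate step (extract requirements).
requirements = call_llm(
    "List the validation rules implied by this user story: "
    "'An agent can only close a case after a survey is sent.'"
)

# Step 2: tie the intermediate result back in to produce the final artifact.
apex_code = call_llm(
    "Using these requirements, write an Apex trigger skeleton:\n" + requirements
)

print(apex_code)
```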

semantic retrieval: A scenario in which a large language model uses all the knowledge that exists in a customer’s CRM data, so that each CRM user has access to a personalized generative AI.
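
A minimal sketch of the idea behind semantic retrieval, using a toy keyword-overlap score in place of real embeddings and illustrative CRM data: the records most relevant to the user's question are retrieved and used to ground a personalized prompt.

```python
# Toy CRM "knowledge" available to one user (illustrative data only).
crm_records = [
    "Case 001: Acme Corp reports login failures after the last release.",
    "Case 002: Globex asks about invoice formatting.",
    "Note: Acme Corp renewal is due next quarter.",
]

def score(query: str, record: str) -> int:
    """Toy relevance score: count shared words (real systems use embeddings)."""
    return len(set(query.lower().split()) & set(record.lower().split()))

query = "What open issues does Acme Corp have?"

# Retrieve the records most semantically related to the question...
top_records = sorted(crm_records, key=lambda r: score(query, r), reverse=True)[:2]

# ...and use them to ground a personalized prompt for this user.
prompt = (
    "Answer using only this CRM context:\n"
    + "\n".join(top_records)
    + "\n\nQuestion: "
    + query
)
print(prompt)
```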
