agent Commands
agent create
Description for agent create
To run this command, you must have an agent spec file, which is a YAML file that defines the agent's properties and contains a list of AI-generated topics. Topics define the range of jobs the agent can handle. Use the "agent generate agent-spec" CLI command to generate an agent spec file. Then specify the file to this command using the --spec flag, along with the name (label) of the new agent with the --name flag. If you don't specify any of the required flags, the command prompts you for them.
When this command completes, your org contains the new agent, which you can then edit and customize in the Agent Builder UI. The new agent's topics are the same as the ones listed in the agent spec file. The agent might also include some AI-generated actions, and you can add more yourself. This command also retrieves all the metadata files associated with the new agent to your local Salesforce DX project.
Use the --preview flag to review what the agent looks like without actually saving it in your org. When previewing, the command creates a JSON file in the current directory with all the agent details. The name of the JSON file is the agent's API name and a timestamp.
To open the new agent in your org's Agent Builder UI, run this command: "sf org open agent --api-name <api-name>".
Examples for agent create
Create an agent by being prompted for the required information, such as the agent spec file and agent name, and then create it in your default org:
sf agent create
Create an agent by specifying the agent name, API name, and spec file with flags; use the org with alias "my-org"; the command fails if the API name is already being used in your org:
sf agent create --name "Resort Manager" --api-name Resort_Manager --spec specs/resortManagerAgent.yaml --target-org my-org
Preview the creation of an agent named "Resort Manager" and use your default org:
sf agent create --name "Resort Manager" --spec specs/resortManagerAgent.yaml --preview
Flags
- --json
- Optional
-
Format output as json.
- Type: boolean
- --flags-dir FLAGS-DIR
- Optional
-
Import flag values from a directory.
- Type: option
- -o | --target-org TARGET-ORG
- Required
-
Username or alias of the target org. Not required if the `target-org` configuration variable is already set.
- Type: option
- --api-version API-VERSION
- Optional
-
Override the API version used for API requests made by this command.
- Type: option
- --name NAME
- Optional
-
Name (label) of the new agent.
- Type: option
- --api-name API-NAME
- Optional
-
API name of the new agent; if not specified, the API name is derived from the agent name (label); the API name must not exist in the org.
- Type: option
- --spec SPEC
- Optional
-
Path to an agent spec file.
- Type: option
- --preview
- Optional
-
Preview the agent without saving it in your org.
- Type: boolean
agent generate agent-spec
Description for agent generate agent-spec
The first step in creating an agent in your org with Salesforce CLI is to generate an agent spec using this command. An agent spec is a YAML-formatted file that contains information about the agent, such as its role and company description, and then an AI-generated list of topics based on this information. Topics define the range of jobs your agent can handle.
Use flags, such as --role and --company-description, to provide details about your company and the role that the agent plays in your company. If you prefer, you can also be prompted for the basic information; use --full-interview to be prompted for all required and optional properties. Upon command execution, the large language model (LLM) associated with your org uses the provided information to generate a list of topics for the agent. Because the LLM uses the company and role information to generate the topics, we recommend that you provide accurate, complete, and specific details so the LLM generates the best and most relevant topics. Once generated, you can edit the spec file; for example, you can remove topics that don't apply or change a topic's description.
You can also iterate on the spec generation process by using the --spec flag to pass an existing agent spec file to this command, and then using flags such as --role and --company-description to refine your agent properties. Iteratively improving the description of your agent allows the LLM to generate progressively better topics.
You can also specify other agent properties, such as a custom prompt template, how to ground the prompt template to add context to the agent's prompts, the tone of the prompts, and the username of a user in the org to assign to the agent.
When your agent spec is ready, you then create the agent in your org by running the "agent create" CLI command and specifying the spec with the --spec flag.
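To make the spec's shape concrete, here's a minimal sketch of what such a YAML file might contain, written out with a heredoc. The field names and values are illustrative assumptions, not the exact schema; generate the real file with "sf agent generate agent-spec":

```shell
# Sketch of an agent spec file. Field names are assumptions for
# illustration; generate the real file with "sf agent generate agent-spec".
mkdir -p specs
cat > specs/resortManagerAgent.yaml <<'EOF'
agentType: customer
role: Field customer complaints and manage employee schedules.
companyName: Coral Cloud Resorts
companyDescription: Provide customers with exceptional destination activities.
topics:
  - name: Guest_Complaints
    description: Handle and resolve guest complaints about their stay.
EOF
# You can hand-edit the generated file, for example to delete
# topics that don't apply or to sharpen a topic's description.
```

Because the file is plain YAML, editing it between iterations is just a text edit before you pass it back with --spec.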
Examples for agent generate agent-spec
Generate an agent spec in the default location and use flags to specify the agent properties, such as its role and your company details; use your default org:
sf agent generate agent-spec --type customer --role "Field customer complaints and manage employee schedules." --company-name "Coral Cloud Resorts" --company-description "Provide customers with exceptional destination activities, unforgettable experiences, and reservation services."
Generate an agent spec by being prompted for the required agent properties and generate a maximum of 5 topics; write the generated file to the "specs/resortManagerAgent.yaml" file and use the org with alias "my-org":
sf agent generate agent-spec --max-topics 5 --output-file specs/resortManagerAgent.yaml --target-org my-org
Be prompted for all required and optional agent properties; use your default org:
sf agent generate agent-spec --full-interview
Specify an existing agent spec file called "specs/resortManagerAgent.yaml", and then overwrite it with a new version that contains newly AI-generated topics based on the updated role information passed in with the --role flag:
sf agent generate agent-spec --spec specs/resortManagerAgent.yaml --output-file specs/resortManagerAgent.yaml --role "Field customer complaints, manage employee schedules, and ensure all resort operations are running smoothly"
Specify that the conversational tone of the agent is formal and to attach the "resortmanager@myorg.com" username to it; be prompted for the required properties and use your default org:
sf agent generate agent-spec --tone formal --agent-user resortmanager@myorg.com
Flags
- --json
- Optional
-
Format output as json.
- Type: boolean
- --flags-dir FLAGS-DIR
- Optional
-
Import flag values from a directory.
- Type: option
- -o | --target-org TARGET-ORG
- Required
-
Username or alias of the target org. Not required if the `target-org` configuration variable is already set.
- Type: option
- --api-version API-VERSION
- Optional
-
Override the API version used for API requests made by this command.
- Type: option
- --type TYPE
- Optional
-
Type of agent to create. Internal types are copilots used internally by your company; customer types are the agents you create for your customers.
- Type: option
- Permissible values are: customer, internal
- --role ROLE
- Optional
-
Role of the agent.
- Type: option
- --company-name COMPANY-NAME
- Optional
-
Name of your company.
- Type: option
- --company-description COMPANY-DESCRIPTION
- Optional
-
Description of your company.
- Type: option
- --company-website COMPANY-WEBSITE
- Optional
-
Website URL of your company.
- Type: option
- --max-topics MAX-TOPICS
- Optional
-
Maximum number of topics to generate in the agent spec; default is 5.
- Type: option
- --agent-user AGENT-USER
- Optional
-
Username of a user in your org to assign to your agent; determines what your agent can access and do.
- Type: option
- --enrich-logs ENRICH-LOGS
- Optional
-
Adds agent conversation data to event logs so you can view all agent session activity in one place.
- Type: option
- Permissible values are: true, false
- --tone TONE
- Optional
-
Conversational style of the agent, such as how it expresses your brand personality in its messages through word choice, punctuation, and sentence structure.
- Type: option
- Permissible values are: formal, casual, neutral
- --spec SPEC
- Optional
-
Agent spec file, in YAML format, to use as input to the command.
- Type: option
- --output-file OUTPUT-FILE
- Optional
-
Path for the generated YAML agent spec file; can be an absolute or relative path.
- Type: option
- Default value: specs/agentSpec.yaml
- --full-interview
- Optional
-
Prompt for both required and optional flags.
- Type: boolean
- --prompt-template PROMPT-TEMPLATE
- Optional
-
API name of a customized prompt template to use instead of the default prompt template.
- Type: option
- --grounding-context GROUNDING-CONTEXT
- Optional
-
Context information and personalization that's added to your prompts when using a custom prompt template.
- Type: option
- --force-overwrite
- Optional
-
Don't prompt the user to confirm that an existing spec file will be overwritten.
- Type: boolean
agent generate template
Description for agent generate template
At a high level, agents are defined by the Bot, BotVersion, and GenAiPlannerBundle metadata types. The GenAiPlannerBundle type in turn defines the agent's topics and actions. This command uses the metadata files for these three types, located in your local DX project, to generate a BotTemplate file for a specific agent (Bot). You then use the BotTemplate file, along with the GenAiPlannerBundle file that references the BotTemplate, to package the template in a managed package that you can share between orgs or on AppExchange.
Use the --agent-file flag to specify the relative or full pathname of the Bot metadata file, such as force-app/main/default/bots/My_Awesome_Agent/My_Awesome_Agent.bot-meta.xml. A single Bot can have multiple BotVersions, so use the --agent-version flag to specify the version. The corresponding BotVersion file must exist locally. For example, if you specify "--agent-version 4", then the file force-app/main/default/bots/My_Awesome_Agent/v4.botVersion-meta.xml must exist.
The new BotTemplate file is generated in the "botTemplates" directory in your local package directory and has the name <Agent_API_name>_v<Version>_Template.botTemplate-meta.xml, such as force-app/main/default/botTemplates/My_Awesome_Agent_v4_Template.botTemplate-meta.xml. The command displays the full pathname of the generated file when it completes.
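The file-naming conventions described above can be sketched with a few shell variables; the agent name and version here are hypothetical:

```shell
# Illustrate the file locations "agent generate template" expects and
# produces, for a hypothetical agent My_Awesome_Agent at version 4.
AGENT=My_Awesome_Agent
VERSION=4
BOT_FILE="force-app/main/default/bots/$AGENT/$AGENT.bot-meta.xml"
VERSION_FILE="force-app/main/default/bots/$AGENT/v$VERSION.botVersion-meta.xml"
TEMPLATE_FILE="force-app/main/default/botTemplates/${AGENT}_v${VERSION}_Template.botTemplate-meta.xml"
echo "$VERSION_FILE"   # must exist locally before you run the command
echo "$TEMPLATE_FILE"  # where the generated template lands
```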
Examples for agent generate template
Generate an agent template from a Bot metadata file in your DX project that corresponds to the My_Awesome_Agent agent; use version 1 of the agent:
sf agent generate template --agent-file force-app/main/default/bots/My_Awesome_Agent/My_Awesome_Agent.bot-meta.xml --agent-version 1
Flags
- --json
- Optional
-
Format output as json.
- Type: boolean
- --flags-dir FLAGS-DIR
- Optional
-
Import flag values from a directory.
- Type: option
- --api-version API-VERSION
- Optional
-
Override the API version used for API requests made by this command.
- Type: option
- --agent-version AGENT-VERSION
- Required
-
Version of the agent (BotVersion).
- Type: option
- -f | --agent-file AGENT-FILE
- Required
-
Path to an agent (Bot) metadata file.
- Type: option
agent generate test-spec
Description for agent generate test-spec
The first step when using Salesforce CLI to create an agent test in your org is to use this interactive command to generate a local YAML-formatted test spec file. The test spec YAML file contains information about the agent being tested, such as its API name, and then one or more test cases. This command uses the metadata components in your DX project when prompting for information, such as the agent API name; it doesn't look in your org.
For each agent test case, this command prompts you for the following information; when possible, the command provides a list of options for you to choose from:
- Utterance: Natural language statement, question, or command used to test the agent.
- Expected topic: API name of the topic you expect the agent to use when responding to the utterance.
- Expected actions: API names of one or more actions that you expect the agent to take.
- Expected outcome: Natural language description of the outcome you expect.
When your test spec is ready, you then run the "agent test create" command to actually create the test in your org and synchronize the metadata with your DX project. The metadata type for an agent test is AiEvaluationDefinition.
If you have an existing AiEvaluationDefinition metadata XML file in your DX project, you can generate its equivalent YAML test spec file with the --from-definition flag.
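As a rough illustration, a test spec captures the prompted information above as YAML. The field names below are assumptions for illustration only; generate the real file interactively with "sf agent generate test-spec":

```shell
# Sketch of an agent test spec file. Field names are illustrative
# assumptions; generate the real file with "sf agent generate test-spec".
mkdir -p specs
cat > specs/Resort_Manager-testSpec.yaml <<'EOF'
subjectName: Resort_Manager
testCases:
  - utterance: A guest says their room is too noisy. What can I offer them?
    expectedTopic: Guest_Complaints
    expectedActions:
      - Create_Case
    expectedOutcome: The agent offers to open a case for the complaint.
EOF
```

Each entry under testCases maps to one utterance and its expected topic, actions, and outcome.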
Examples for agent generate test-spec
Generate an agent test spec YAML file interactively:
sf agent generate test-spec
Generate an agent test spec YAML file and specify a name for the new file; if the file exists, overwrite it without confirmation:
sf agent generate test-spec --output-file specs/Resort_Manager-new-version-testSpec.yaml --force-overwrite
Generate an agent test spec YAML file from an existing AiEvaluationDefinition metadata XML file in your DX project:
sf agent generate test-spec --from-definition force-app/main/default/aiEvaluationDefinitions/Resort_Manager_Tests.aiEvaluationDefinition-meta.xml
Flags
- --flags-dir FLAGS-DIR
- Optional
-
Import flag values from a directory.
- Type: option
- -d | --from-definition FROM-DEFINITION
- Optional
-
Path to the AiEvaluationDefinition metadata XML file in your DX project that you want to convert to a test spec YAML file.
- Type: option
- --force-overwrite
- Optional
-
Don't prompt for confirmation when overwriting an existing test spec YAML file.
- Type: boolean
- -f | --output-file OUTPUT-FILE
- Optional
-
Name of the generated test spec YAML file. Default value is "specs/<AGENT_API_NAME>-testSpec.yaml".
- Type: option
agent preview (Beta)
Description for agent preview
Use this command to have a natural language conversation with an active agent in your org, as if you were an actual user. The interface is simple: in the "Start typing..." prompt, enter a statement, question, or command, then press Return. Your utterance is posted on the right along with a timestamp, and the agent responds on the left. To exit the conversation, press Esc or Ctrl+C.
This command is useful for testing whether the agent responds to your utterances as you expect. For example, you can verify that the agent uses a particular topic when asked a question, and that it then invokes the correct action associated with that topic. This command is the CLI equivalent of the Conversation Preview panel in your org's Agent Builder UI.
When the session concludes, the command asks if you want to save the API responses and chat transcripts. By default, the files are saved to the "./temp/agent-preview" directory. Specify a new default directory by setting the environment variable "SF_AGENT_PREVIEW_OUTPUT_DIR" to the directory. Or you can pass the directory to the --output-dir flag.
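Overriding the default transcript directory with the environment variable can be sketched like this; the directory name is arbitrary:

```shell
# Change the default transcript directory for "agent preview" by setting
# the environment variable; the path chosen here is just an example.
export SF_AGENT_PREVIEW_OUTPUT_DIR="transcripts/my-preview"
mkdir -p "$SF_AGENT_PREVIEW_OUTPUT_DIR"
echo "$SF_AGENT_PREVIEW_OUTPUT_DIR"
# Subsequent "sf agent preview" runs save transcripts there, unless
# you override the location again with the --output-dir flag.
```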
Find the agent's API name on its main details page in your org's Agent page in Setup.
Before you use this command, you must complete these steps:
1. Create a connected app in your org as described in the "Create a Connected App" section here: https://developer.salesforce.com/docs/einstein/genai/guide/agent-api-get-started.html#create-a-connected-app. Do these four additional steps:
a. When specifying the connected app's Callback URL, add this second callback URL on a new line: "http://localhost:1717/OauthRedirect".
b. When adding the scopes to the connected app, add "Manage user data via Web browsers (web)".
c. Ensure that the "Require Secret for Web Server Flow" option is not selected.
d. Make note of the user that you specified as the "Run As" user when updating the Client Credentials Flow section.
2. Add the connected app to your agent as described in the "Add Connected App to Agent" section here: https://developer.salesforce.com/docs/einstein/genai/guide/agent-api-get-started.html#add-connected-app-to-agent.
3. Copy the consumer key from your connected app as described in the "Obtain Credentials" section here: https://developer.salesforce.com/docs/einstein/genai/guide/agent-api-get-started.html#obtain-credentials.
4. Set the "SFDX_AUTH_SCOPES" environment variable to "refresh_token sfap_api chatbot_api web api". This step ensures that you get the specific OAuth scopes required by this command.
5. Using the username of the user you specified as the "Run As" user above, authorize your org using the web server flow, as described in this document: https://developer.salesforce.com/docs/atlas.en-us.sfdx_dev.meta/sfdx_dev/sfdx_dev_auth_web_flow.htm.
IMPORTANT: You must use the "--client-id <CONNECTED-APP-CONSUMER-KEY>" flag of "org login web", where CONNECTED-APP-CONSUMER-KEY is the consumer key you previously copied. This step ensures that the "org login web" command uses your custom connected app, and not the default CLI connected app.
When the login flow prompts you for a client secret, press Enter to skip sharing it.
6. When you run this command to interact with an agent, specify the username you authorized in the preceding step with the --connected-app-user (-a) flag.
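Steps 4 and 5 above can be sketched as a short shell snippet; the consumer key is a placeholder you must replace with your own:

```shell
# Step 4 above: set the OAuth scopes that "agent preview" requires.
export SFDX_AUTH_SCOPES="refresh_token sfap_api chatbot_api web api"
echo "$SFDX_AUTH_SCOPES"
# Step 5: authorize the org with your connected app's consumer key
# (placeholder shown), then pass that same username to the
# --connected-app-user flag when you run "sf agent preview":
# sf org login web --client-id <CONNECTED-APP-CONSUMER-KEY>
```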
Examples for agent preview
Interact with an agent with API name "Resort_Manager" in the org with alias "my-org". Connect to the agent using the alias "my-agent-user"; this alias must point to the username that you authorized using the web server flow:
sf agent preview --api-name "Resort_Manager" --target-org my-org --connected-app-user my-agent-user
Same as the preceding example, but this time save the conversation transcripts to the "./transcripts/my-preview" directory rather than the default "./temp/agent-preview":
sf agent preview --api-name "Resort_Manager" --target-org my-org --connected-app-user my-agent-user --output-dir "transcripts/my-preview"
Flags
- --flags-dir FLAGS-DIR
- Optional
-
Import flag values from a directory.
- Type: option
- -o | --target-org TARGET-ORG
- Required
-
Username or alias of the target org. Not required if the `target-org` configuration variable is already set.
- Type: option
- --api-version API-VERSION
- Optional
-
Override the API version used for API requests made by this command.
- Type: option
- -a | --connected-app-user CONNECTED-APP-USER
- Required
-
Username or alias of the connected app user that's configured with web-based access tokens to the agent.
- Type: option
- -n | --api-name API-NAME
- Optional
-
API name of the agent you want to interact with.
- Type: option
- -d | --output-dir OUTPUT-DIR
- Optional
-
Directory where conversation transcripts are saved.
- Type: option
- -x | --apex-debug
- Optional
-
Enable Apex debug logging during the agent preview conversation.
- Type: boolean
agent test create
Description for agent test create
To run this command, you must have an agent test spec file, which is a YAML file that lists the test cases for testing a specific agent. Use the "agent generate test-spec" CLI command to generate a test spec file. Then specify the file to this command with the --spec flag, or run this command with no flags to be prompted.
When this command completes, your org contains the new agent test, which you can view and edit using the Testing Center UI. This command also retrieves the metadata component (AiEvaluationDefinition) associated with the new test to your local Salesforce DX project and displays its filename.
After you've created the test in the org, use the "agent test run" command to run it.
Examples for agent test create
Create an agent test interactively and be prompted for the test spec and API name of the test in the org; use the default org:
sf agent test create
Create an agent test and use flags to specify all required information; if a test with the same API name already exists in the org, overwrite it without confirmation. Use the org with alias "my-org":
sf agent test create --spec specs/Resort_Manager-testSpec.yaml --api-name Resort_Manager_Test --force-overwrite --target-org my-org
Preview what the agent test metadata (AiEvaluationDefinition) looks like without deploying it to your default org:
sf agent test create --spec specs/Resort_Manager-testSpec.yaml --api-name Resort_Manager_Test --preview
Flags
- --json
- Optional
-
Format output as json.
- Type: boolean
- --flags-dir FLAGS-DIR
- Optional
-
Import flag values from a directory.
- Type: option
- --api-name API-NAME
- Optional
-
API name of the new test; the API name must not exist in the org.
- Type: option
- --spec SPEC
- Optional
-
Path to the test spec YAML file.
- Type: option
- -o | --target-org TARGET-ORG
- Required
-
Username or alias of the target org. Not required if the `target-org` configuration variable is already set.
- Type: option
- --api-version API-VERSION
- Optional
-
Override the API version used for API requests made by this command.
- Type: option
- --preview
- Optional
-
Preview the test metadata file (AiEvaluationDefinition) without deploying to your org.
- Type: boolean
- --force-overwrite
- Optional
-
Don't prompt for confirmation when overwriting an existing test (based on API name) in your org.
- Type: boolean
agent test list
Description for agent test list
The command outputs a table with the name (API name) of each test along with its unique ID and the date it was created in the org.
Examples for agent test list
List the agent tests in your default org:
sf agent test list
List the agent tests in an org with alias "my-org":
sf agent test list --target-org my-org
Flags
- --json
- Optional
-
Format output as json.
- Type: boolean
- --flags-dir FLAGS-DIR
- Optional
-
Import flag values from a directory.
- Type: option
- -o | --target-org TARGET-ORG
- Required
-
Username or alias of the target org. Not required if the `target-org` configuration variable is already set.
- Type: option
- --api-version API-VERSION
- Optional
-
Override the API version used for API requests made by this command.
- Type: option
agent test results
Description for agent test results
This command requires a job ID, which the original "agent test run" command displays when it completes. You can also use the --use-most-recent flag to see results for the most recently run agent test.
By default, this command outputs test results in human-readable tables for each test case. The tables show whether the test case passed, the expected and actual values, the test score, how long the test took, and more. Use the --result-format flag to display the test results in JSON, JUnit, or TAP format. Use the --output-dir flag to write the results to a file rather than to the terminal.
Examples for agent test results
Get the results of an agent test run in your default org using its job ID:
sf agent test results --job-id 4KBfake0000003F4AQ
Get the results of the most recently run agent test in an org with alias "my-org":
sf agent test results --use-most-recent --target-org my-org
Get the results of the most recently run agent test in your default org, and write the JSON-formatted results into a directory called "test-results":
sf agent test results --use-most-recent --output-dir ./test-results --result-format json
Flags
- --json
- Optional
-
Format output as json.
- Type: boolean
- --flags-dir FLAGS-DIR
- Optional
-
Import flag values from a directory.
- Type: option
- -o | --target-org TARGET-ORG
- Required
-
Username or alias of the target org. Not required if the `target-org` configuration variable is already set.
- Type: option
- --api-version API-VERSION
- Optional
-
Override the API version used for API requests made by this command.
- Type: option
- -i | --job-id JOB-ID
- Required
-
Job ID of the completed agent test run.
- Type: option
- --result-format RESULT-FORMAT
- Optional
-
Format of the agent test run results.
- Type: option
- Permissible values are: json, human, junit, tap
- Default value: human
- -d | --output-dir OUTPUT-DIR
- Optional
-
Directory to write the agent test results into.
If the agent test run has completed, the results are written to the specified directory. If the test is still running, the results aren't written.
- Type: option
agent test resume
Description for agent test resume
This command requires a job ID, which the original "agent test run" command displays when it completes. You can also use the --use-most-recent flag to resume the most recently run agent test.
Use the --wait flag to specify the number of minutes for this command to wait for the agent test to complete; if the test completes by the end of the wait time, the command displays the test results. If not, the CLI returns control of the terminal to you, and you must run "agent test resume" again.
By default, this command outputs test results in human-readable tables for each test case. The tables show whether the test case passed, the expected and actual values, the test score, how long the test took, and more. Use the --result-format flag to display the test results in JSON, JUnit, or TAP format. Use the --output-dir flag to write the results to a file rather than to the terminal.
Examples for agent test resume
Resume an agent test in your default org using a job ID:
sf agent test resume --job-id 4KBfake0000003F4AQ
Resume the most recently run agent test in the org with alias "my-org"; wait 10 minutes for the tests to finish:
sf agent test resume --use-most-recent --wait 10 --target-org my-org
Resume the most recent agent test in your default org, and write the JSON-formatted results into a directory called "test-results":
sf agent test resume --use-most-recent --output-dir ./test-results --result-format json
Flags
- --json
- Optional
-
Format output as json.
- Type: boolean
- --flags-dir FLAGS-DIR
- Optional
-
Import flag values from a directory.
- Type: option
- -o | --target-org TARGET-ORG
- Required
-
Username or alias of the target org. Not required if the `target-org` configuration variable is already set.
- Type: option
- --api-version API-VERSION
- Optional
-
Override the API version used for API requests made by this command.
- Type: option
- -i | --job-id JOB-ID
- Optional
-
Job ID of the original agent test run.
- Type: option
- -r | --use-most-recent
- Optional
-
Use the job ID of the most recent agent test run.
- Type: boolean
- -w | --wait WAIT
- Optional
-
Number of minutes to wait for the command to complete and display results to the terminal window.
- Type: option
- --result-format RESULT-FORMAT
- Optional
-
Format of the agent test run results.
- Type: option
- Permissible values are: json, human, junit, tap
- Default value: human
- -d | --output-dir OUTPUT-DIR
- Optional
-
Directory to write the agent test results into.
If the agent test run has completed, the results are written to the specified directory. If the test is still running, the results aren't written.
- Type: option
agent test run
Description for agent test run
Use the --api-name flag to specify the name of the agent test you want to run. Use the output of the "agent test list" command to get the names of all the available agent tests in your org.
By default, this command starts the agent test in your org, but it doesn't wait for the test to finish. Instead, it displays the "agent test resume" command, with a job ID, that you execute to see the results of the test run, and then returns control of the terminal window to you. Use the --wait flag to specify the number of minutes for the command to wait for the agent test to complete; if the test completes by the end of the wait time, the command displays the test results. If not, run "agent test resume".
By default, this command outputs test results in human-readable tables for each test case, if the test completes in time. The tables show whether the test case passed, the expected and actual values, the test score, how long the test took, and more. Use the --result-format flag to display the test results in JSON, JUnit, or TAP format. Use the --output-dir flag to write the results to a file rather than to the terminal.
Examples for agent test run
Start an agent test called Resort_Manager_Test for an agent in your default org; don't wait for the test to finish:
sf agent test run --api-name Resort_Manager_Test
Start an agent test for an agent in an org with alias "my-org" and wait for 10 minutes for the test to finish:
sf agent test run --api-name Resort_Manager_Test --wait 10 --target-org my-org
Start an agent test and write the JSON-formatted results into a directory called "test-results":
sf agent test run --api-name Resort_Manager_Test --wait 10 --output-dir ./test-results --result-format json
Flags
- --json
- Optional
-
Format output as json.
- Type: boolean
- --flags-dir FLAGS-DIR
- Optional
-
Import flag values from a directory.
- Type: option
- -o | --target-org TARGET-ORG
- Required
-
Username or alias of the target org. Not required if the `target-org` configuration variable is already set.
- Type: option
- --api-version API-VERSION
- Optional
-
Override the API version used for API requests made by this command.
- Type: option
- -n | --api-name API-NAME
- Optional
-
API name of the agent test to run; corresponds to the name of the AiEvaluationDefinition metadata component that implements the agent test.
- Type: option
- -w | --wait WAIT
- Optional
-
Number of minutes to wait for the command to complete and display results to the terminal window.
- Type: option
- --result-format RESULT-FORMAT
- Optional
-
Format of the agent test run results.
- Type: option
- Permissible values are: json, human, junit, tap
- Default value: human
- -d | --output-dir OUTPUT-DIR
- Optional
-
Directory to write the agent test results into.
If the agent test run has completed, the results are written to the specified directory. If the test is still running, the results aren't written.
- Type: option