Function Logging

Salesforce Functions provides logging so you can monitor, debug, and analyze how your deployed Functions are running. Log information is captured at the compute environment level: activity for all Functions deployed to a given compute environment is written to the same log, which you can then filter by Function, date/time, or other log line fields.

Salesforce doesn't persist compute environment log information. If you need to keep log information, set up a log drain to an external system that can receive and store it.

Compute environment logs don't capture any log information generated from your Salesforce orgs. For example, Apex logs created by Apex code that invokes a Function aren't captured in compute environment logs. For log information from Apex code, see Working with Apex Debug Logs.

Generate Log Information in Your Function Code

Functions are provided with a pre-initialized logging handler that can be used to capture application logs for the duration of the Function execution. JavaScript Function entry points automatically receive the pre-initialized Logger instance, which you can use to add log entries for debugging or general logging:

Java Functions should use the SLF4J logging framework for logging. The Java SDK for Functions provides a custom SLF4J logger binding that automatically adds Function-specific information to log lines, such as the invocation ID.

You don't need to add your own SLF4J binding such as log4j or logback. Instead, just use SLF4J's LoggerFactory to get a Logger instance with the custom bindings. For more information on using SLF4J, see Frequently Asked Questions about SLF4J.

Here's a Java Functions example that uses the SLF4J logging framework:
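A sketch along these lines, assuming the SDK's `SalesforceFunction` interface from the Java Functions template; `FunctionInput` and `FunctionOutput` stand in for the template's user-defined payload classes:

```java
import com.salesforce.functions.jvm.sdk.Context;
import com.salesforce.functions.jvm.sdk.InvocationEvent;
import com.salesforce.functions.jvm.sdk.SalesforceFunction;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ExampleFunction implements SalesforceFunction<FunctionInput, FunctionOutput> {
  // LoggerFactory returns a Logger backed by the SDK's custom binding,
  // which adds Function-specific fields such as the invocation ID.
  private static final Logger LOGGER = LoggerFactory.getLogger(ExampleFunction.class);

  @Override
  public FunctionOutput apply(InvocationEvent<FunctionInput> event, Context context)
      throws Exception {
    LOGGER.info("Function invoked");
    // ... Function logic ...
    LOGGER.debug("Function logic complete");
    return new FunctionOutput();
  }
}
```

Because the SDK supplies the binding, no logback or log4j configuration is needed; `LoggerFactory.getLogger` is all the wiring required.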

View Compute Environment Logs

To access the log for your compute environment, use the env:log:tail command, providing the compute environment where your Function was deployed.
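For example, a sketch assuming the sfdx-style CLI, where `environment-alias` is a placeholder for your compute environment's name or alias:

```shell
sfdx env:log:tail -e environment-alias
```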

Log information starts streaming to the current shell. To stop capturing logs, quit the process with Ctrl-C in your shell. Logs are captured for all activity within the compute environment, so filter for just the Functions you're interested in, if needed.

Note that environment logs aren't buffered, so this command captures and outputs only log lines that occur after the command is executed. If you want to capture log details during a Function invocation, start tailing the environment log in a separate shell before the Function is invoked.

If no Function is active in the environment, log tailing automatically stops after one hour of continuous inactivity. Re-run the env:log:tail command to continue capturing logs for that environment.

After running env:log:tail and then invoking a Function, the output might look something like the following:

Log Drains

You can also set up a log drain to capture logging information. Setting up a log drain lets you capture log output to an external system for archiving or analysis. As with log tailing, log information sent to log drains is captured at the compute environment level.

When you set up a log drain you need to provide an HTTP or HTTPS URL that can receive the log drain messages. Drain log messages are formatted based on the RFC5424 "syslog" format and delivered over TCP using the octet counting framing method. HTTPS drains support transport-level encryption using the HTTPS protocol and authentication using HTTP Basic Authentication.

Set a log drain for a compute environment using the env:logdrain:add CLI command, for example:
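A sketch assuming the sfdx-style CLI; the environment alias and receiver URL are placeholders:

```shell
sfdx env:logdrain:add -e environment-alias -u https://logs.example.com/drain
```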

You can set multiple log drain receivers, if necessary.

To see what log drains are currently set for your compute environment, if any, use the env:logdrain:list command:
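For example (the environment alias is a placeholder):

```shell
sfdx env:logdrain:list -e environment-alias
```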

To remove a log drain, use the env:logdrain:remove command, providing both the compute environment and the log drain receiver URL:
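For example, assuming the same placeholder environment alias and receiver URL as above:

```shell
sfdx env:logdrain:remove -e environment-alias -u https://logs.example.com/drain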

Functions log drains can be set up to work with many third-party systems, or you can run your own log service using something like log-iss. For convenience, basic steps for setting up Functions log drains with some of the more popular third-party systems are provided below. These are examples, not a comprehensive list of systems that support Functions log drains.

Coralogix

  1. Create an account in Coralogix if you don't already have one.
  2. Get your account's private-key and company ID by going to Settings > Send Your Logs. Choose an application-name to be associated with the logs.
  3. Use the env:logdrain:add command as described earlier, with a URL with the following format:

LogDNA

  1. Create an account on www.logdna.com if you don't already have one.
  2. Open the LogDNA webapp and click All Hosts > Add a host.
  3. Navigate to the Heroku section and execute the account-specific commands found under the Installing via Heroku Log Drains section.
  4. Use the log drain URL in your env:logdrain:add command as described earlier.

Papertrail

  1. If you don't already have a Papertrail account, sign up for one at https://www.papertrail.com/plans/.
  2. Sign in to your account and click Add Systems.
  3. Choose to aggregate "app log files" from "Heroku". For the method, choose "Method 2: Standalone".
  4. Under "Setup Heroku drain" Papertrail will provide a Heroku CLI command that will include the log drain URL, for example:
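The provided command follows the usual Heroku drain pattern, roughly like this sketch (the Papertrail host and port are account-specific placeholders):

```shell
heroku drains:add syslog+tls://logsN.papertrailapp.com:XXXXX -a your-heroku-app
```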

Use this URL in your env:logdrain:add command as described earlier. With the above Heroku CLI command example, your env:logdrain:add command would look like:
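A sketch, reusing the placeholder drain URL from Papertrail and an sfdx-style CLI:

```shell
sfdx env:logdrain:add -e environment-alias -u syslog+tls://logsN.papertrailapp.com:XXXXX
```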

Splunk

  1. Install the RFC5424 Syslog add-on in your Splunk Enterprise platform.
  2. Create a new HTTP Event Collector token. Follow the Splunk documentation. For "name", use a unique identifier for your compute environment. For "source type" use "rfc5424_syslog".
  3. Generate a random channel UUID. This is required for raw event collection by Splunk.
  4. Construct your log drain URL with the token and channel created in the steps above:

For example:
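One plausible shape, assuming Splunk's standard HTTP Event Collector raw endpoint; every angle-bracketed value is a placeholder for your Splunk host, HEC token, and channel UUID:

```
https://x:<hec-token>@<your-splunk-host>:8088/services/collector/raw?channel=<channel-uuid>
```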

  5. Use this log drain URL in your env:logdrain:add command as described earlier.

Sumo Logic

  1. Configure a Sumo Logic Hosted Collector with an HTTP Source.
  2. Use the URL associated with the HTTP source that Sumo Logic provides in your env:logdrain:add command as described earlier.

Working with Apex Debug Logs

When invoking your Function using Apex, you can use Apex debug logs to get information on how the Function was invoked. You can view Apex debug logs from within the org (through the Developer Console or the Debug Logs Setup page). From the CLI, use the force:apex:log:list and force:apex:log:get commands to obtain debug logs for your Function invocation. For example, if you invoked a Function asynchronously, you could use force:apex:log:list to get a list of recent logs:
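A sketch assuming the sfdx-style CLI; `your-org-alias` is a placeholder for the target org's alias or username:

```shell
sfdx force:apex:log:list -u your-org-alias
```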

Using the force:apex:log:get command with the ID from the FunctionCallbackHandler operation, you can get more Apex log details on what happened when your async Function was invoked.
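For example (a sketch; `<log-id>` is a placeholder for the ApexLog ID returned by force:apex:log:list):

```shell
sfdx force:apex:log:get -i <log-id> -u your-org-alias
```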

If you're working with a new scratch org and aren't able to see debug logs (force:apex:log:list returns "No results found"), you may need to open the Developer Console once in that org. This creates a TraceFlag record that enables Apex debug logging.

For beta, when invoking Functions asynchronously using Apex, you may not be able to see Apex logs for the Apex callback. This is because the callback runs as the Platform Integration user, which might not have Apex debugging enabled. To enable debugging for the Platform Integration user: in your org, from Setup, search for "Debug Logs". Create a User Trace Flag and specify "Platform Integration" as the Trace Entity Type.

For more information on using Apex debug logs, see Apex Developer Guide: Debug Log.