Use Function Logging

Salesforce Functions provides logging to monitor, debug, and analyze how your deployed functions are running. Log information is captured at the compute environment level: logged activity for all functions deployed to a given compute environment goes to the same log, which you can then filter by function, date/time, or other logline fields.

Compute environment log information isn't persisted by Salesforce. If you need to persist log information, set up a log drain to a system that can receive and store it.

Compute environment logs don't capture any log information generated from your Salesforce orgs. For example, Apex logs created by Apex code that invokes a function aren't captured in compute environment logs. For working with log information from Apex code, see Working with Apex Debug Logs.

Functions are provided with a pre-initialized logging handler that captures application logs for the duration of the function execution. JavaScript function entry points automatically receive the pre-initialized Logger instance, which you can use to add log entries for debugging or general logging:
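Here's a minimal sketch based on the standard Node.js function template; the function body, payload fields, and return value are illustrative:

    // index.js: the entry point receives the invocation event, the context,
    // and a pre-initialized logger as its third parameter.
    export default async function (event, context, logger) {
      // General informational logging
      logger.info(`Invoking function with payload ${JSON.stringify(event.data || {})}`);

      // Debug-level logging for troubleshooting
      logger.debug(`Invoked from org ${context.org?.id}`);

      return { success: true };
    }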

Java functions should use the SLF4J logging framework for logging. The Java SDK for Functions provides a custom SLF4J logger binding that automatically adds function-specific info to loglines, such as the invocation ID.

You don't need to add your own SLF4J binding, such as Log4j or Logback. Instead, use SLF4J's LoggerFactory to get a Logger instance with the custom bindings. For more information on using SLF4J, see Frequently Asked Questions about SLF4J.

Here's a Java function example that uses the SLF4J logging framework:
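The sketch below follows the layout of the standard Java function template; FunctionInput and FunctionOutput stand in for your project's own payload classes:

    import com.salesforce.functions.jvm.sdk.Context;
    import com.salesforce.functions.jvm.sdk.InvocationEvent;
    import com.salesforce.functions.jvm.sdk.SalesforceFunction;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // FunctionInput and FunctionOutput are placeholders for your own payload classes.
    public class ProcessOrderFunction implements SalesforceFunction<FunctionInput, FunctionOutput> {
      // Get a Logger through SLF4J's LoggerFactory. The Functions SLF4J binding
      // adds function-specific fields, such as the invocation ID, to each logline.
      private static final Logger LOGGER = LoggerFactory.getLogger(ProcessOrderFunction.class);

      @Override
      public FunctionOutput apply(InvocationEvent<FunctionInput> event, Context context)
          throws Exception {
        // General informational logging
        LOGGER.info("Function invoked");

        // Debug-level logging for troubleshooting
        LOGGER.debug("Received payload: {}", event.getData());

        return new FunctionOutput();
      }
    }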

To access the log for your compute environment, use the sf env log tail command, providing the compute environment where your function is deployed.
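For example, for a compute environment named billingApp-Scratch1 (the name is a placeholder, and -e is shorthand for the environment flag):

    # Tail logs for the compute environment (billingApp-Scratch1 is a placeholder name)
    sf env log tail -e billingApp-Scratch1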

Log information starts streaming to the current shell. To stop capturing logs, quit the process with Ctrl-C in your shell. Logs are captured for all activity within the compute environment, so filter for just the functions you're interested in if needed.

Note that environment logs aren't buffered, so this command captures and outputs only log lines that occur after the command is executed. If you want to capture log details during a function invocation, start tailing the environment log in a separate shell before invoking the function.

If no function is active in the environment, log tailing stops after an hour of inactivity. To continue capturing logs for that environment, re-run the sf env log tail command.

After running sf env log tail and invoking a function, the output might look something like the following:
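This sample is illustrative; timestamps, process names, and function names will differ in your environment:

    2023-01-05T21:03:54.000000+00:00 app[web.1]: info: myfunction: Invoking function with payload {}
    2023-01-05T21:03:55.000000+00:00 app[web.1]: info: myfunction: Function completed successfully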

You can also set up a log drain to capture log output to an external system for archiving or analysis. As with log tailing, log information sent to log drains is captured at the compute environment level.

When you set up a log drain, you provide an HTTP or HTTPS URL that can receive the log drain messages. Drain log messages are formatted based on the RFC 5424 "syslog" format and are delivered over TCP using the octet counting framing method. HTTPS drains support transport-level encryption and authentication using HTTP Basic Authentication.

Set a log drain for a compute environment using the sf env logdrain add CLI command, for example:
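In this sketch, the environment name and receiver URL are placeholders; -e names the compute environment and -u the receiver URL:

    sf env logdrain add -e billingApp-Scratch1 -u https://logs.example.com/ingest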

You can set multiple log drain receivers if necessary.

To see what log drains are currently set for your compute environment, if any, use the sf env logdrain list command:
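For example, using the same placeholder environment name:

    sf env logdrain list -e billingApp-Scratch1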

To remove a log drain, use the sf env logdrain remove command, providing both the compute environment and the log drain receiver URL:
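For example, using the placeholder values from above:

    sf env logdrain remove -e billingApp-Scratch1 -u https://logs.example.com/ingest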

Set up your function log drains to work with a third-party system, or provide your own receiver using a log service like log-iss. For convenience, we provide basic steps for setting up function log drains with some of the more popular third-party systems. These examples don't cover every system that supports log drains.

Coralogix

  1. Create an account in Coralogix if you don't already have one.
  2. Get your account's private-key and company ID by going to Settings > Send Your Logs. Choose an application-name to be associated with the logs.
  3. Use the sf env logdrain add command as described earlier, with a URL with the following format:

LogDNA

  1. Create an account on www.logdna.com if you don't already have one.
  2. Open the LogDNA webapp and click All Hosts > Add a host.
  3. Navigate to the Heroku section and execute the account-specific commands found under the Installing via Heroku Log Drains section.
  4. Use the log drain URL in your sf env logdrain add command as described above.

Papertrail

  1. If you don't already have a Papertrail account, sign up for one at https://www.papertrail.com/plans/.
  2. Sign in to your account and click Add Systems.
  3. Choose to aggregate "app log files" from "Heroku". For the method, choose "Method 2: Standalone".
  4. Under "Setup Heroku drain", Papertrail provides a Heroku CLI command that includes the log drain URL, for example:

Use this URL in your sf env logdrain add command as described earlier. With the above Heroku CLI command example, your sf env logdrain add command would look like:

Splunk

  1. Install the RFC5424 Syslog add-on in your Splunk Enterprise platform.
  2. Create a new HTTP Event Collector token by following the Splunk documentation. For "name", use a unique identifier for your compute environment. For "source type", use "rfc5424_syslog".
  3. Generate a random channel UUID. This is required for raw event collection by Splunk.
  4. Construct your log drain URL with the token and channel created in the steps above:
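One common shape for this URL targets Splunk's raw HTTP Event Collector endpoint, with the token supplied as HTTP Basic Authentication credentials. Treat the host, port, and parameter layout below as a sketch to verify against your Splunk configuration:

    https://x:<hec-token>@<splunk-hostname>:8088/services/collector/raw?channel=<channel-uuid>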

For example:
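The token, hostname, and channel values here are placeholders:

    https://x:12345678-abcd-1234-abcd-1234567890ab@mysplunkserver.example.com:8088/services/collector/raw?channel=aabbccdd-1122-3344-5566-778899001122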

  5. Use this log drain URL in your sf env logdrain add command as described earlier.

Sumo Logic

  1. Configure a Sumo Logic Hosted Collector with an HTTP Source.
  2. Use the URL associated with the HTTP source that Sumo Logic provides in your sf env logdrain add command as described earlier.

When invoking your function using Apex, you can use Apex debug logs to get information on how the function was invoked. You can view Apex debug logs from within the org (through the Developer Console or the Debug Logs Setup page). From the CLI, use the sf apex list log and sf apex get log commands to obtain debug logs for your function invocation. For example, if you invoked a function asynchronously, you could use sf apex list log to get a list of recent logs:
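The listing below is illustrative; the columns and values you see depend on your org and CLI version:

    sf apex list log

    Application  Duration (ms)  Id                  Location    Size (B)  Log User          Operation                Request  Start Time                Status
    Unknown      62             07L8b00000AbCdEFGH  Monitoring  1210      Platform Integr…  FunctionCallbackHandler  Api      2023-01-05T21:04:07+0000  Success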

Using the sf apex get log command with the ID from the FunctionCallbackHandler operation, you can get more Apex log details on what happened when your async function was invoked.
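For example (the log ID is a placeholder, and this assumes the command's --log-id flag):

    sf apex get log --log-id 07L8b00000AbCdEFGH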

If you're working with a new scratch org and you can't see debug logs (sf apex list log returns "No debug logs found in org"), open the Developer Console once in that org. Opening the Developer Console creates a TraceFlag record that enables Apex debug logging.

When invoking functions asynchronously using Apex, you may not be able to see Apex logs for the Apex callback. This is because the callback runs as the Platform Integration user, which might not have Apex debugging enabled. To enable debugging for the Platform Integration user, in your org, from Setup, search for "Debug Logs". Create a User Trace Flag and specify "Platform Integration" as the Traced Entity Type.

For more information on using Apex debug logs, see Apex Developer Guide: Debug Log.