eCDN Logpush
eCDN Logpush delivers eCDN logs in batches to specified destinations. eCDN logs contain valuable information, such as the client IP address, the requested URL, the user agent string, and details about firewall events, including blocked requests and triggered firewall rules.
When creating a Logpush job through eCDN Logpush, you can view the list of available fields for http_requests, firewall_events, and page_shield_events. You can also specify which log types to deliver, such as HTTP requests and firewall events, and choose from the following destination types:
- Create Logpush job for Amazon S3 bucket (Requires ownership verification step)
- Create Logpush job for Google Cloud Storage (Requires ownership verification step)
- Create Logpush job for Azure
- Create Logpush job for Datadog
- Create Logpush job for Splunk
- Create Logpush job for HTTPS
- Create Logpush job for Kinesis
Enable log push for an Amazon S3 destination:
- Create an S3 bucket. For details, see AWS S3 documentation.
- In the Amazon S3 console, navigate to S3 > Bucket > Permissions > Bucket Policy and paste in the bucket policy, replacing the Resource value with your own bucket path. The AWS Principal is owned by the CDN provider and must not be changed.
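The policy below is a minimal sketch of what such a bucket policy can look like. The Principal ARN and bucket name are placeholders: use the exact AWS Principal supplied by the CDN provider and substitute your own bucket path in the Resource value.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLogpushUploads",
      "Effect": "Allow",
      "Principal": {
        "AWS": "<AWS_PRINCIPAL_PROVIDED_BY_CDN_PROVIDER>"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<YOUR_BUCKET_NAME>/*"
    }
  ]
}
```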
To avoid incomplete files and minimize storage costs, use the AbortIncompleteMultipartUpload action when setting up a lifecycle rule for S3 multipart uploads with eCDN Logpush. For details, see Uploading and copying objects using multipart upload.
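As a sketch, assuming your logs are written under a logs/ prefix, a lifecycle configuration using that action could look like the following (adjust the prefix and the number of days to your needs):

```json
{
  "Rules": [
    {
      "ID": "abort-incomplete-logpush-uploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 1 }
    }
  ]
}
```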
eCDN Logpush supports pushing eCDN logs directly to Amazon S3 through the CDN Logpush API.
However, before you can create an eCDN Logpush job to directly push logs to an AWS S3 bucket, you must demonstrate ownership of the S3 bucket. You do this by writing the ownership token file to the bucket destination. You must also create a unique ownership challenge token for each destination path within the same bucket.
You can use the special string {DATE} in the destination path to separate logs into daily subdirectories. For example: s3://customer-bucket/logs/{DATE}?region=us-east-1&sse=AES256. When the logs are stored, the directory name is replaced with the date in YYYYMMDD format, for example: 20230215.
Example request:
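The call below is a sketch only: the API host, path, and authentication header are placeholders (see the CDN Zones API Overview for the exact endpoint), and the request passes the destinationPath for which the ownership challenge should be generated.

```bash
# Request an ownership challenge token for an S3 destination (placeholder endpoint)
curl -X POST "https://<CDN_API_HOST>/zones/<ZONE_ID>/logpush/ownership" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"destinationPath": "s3://example-bucket/log?region=us-east-1"}'
```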
Example response:
{ "data": { "destinationPath": "s3://example-bucket/log?region=us-east-1", "fileName": "log/ownership-challenge-072fd6be.txt" } }
Populate the `ownershipChallengeToken` based on the call made in the previous section. The `destinationPath` must be the same as the one specified when generating the ownership token. For details, see the Parameters, Optional Parameters, and Limitations sections.
Example Logpush job API call:
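A sketch of the job creation call, using the same placeholder host and authentication as above. The JSON keys mirror the parameters described later on this page (name, destination path, ownership challenge token, log type, log fields) and are illustrative; the log field names shown are examples only, see eCDN Logpush Log Fields for the actual list.

```bash
# Create a Logpush job for the S3 destination (placeholder endpoint and keys)
curl -X POST "https://<CDN_API_HOST>/zones/<ZONE_ID>/logpush/jobs" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "example-s3-logpush",
        "destinationPath": "s3://example-bucket/log?region=us-east-1",
        "ownershipChallengeToken": "<TOKEN_FROM_OWNERSHIP_CHALLENGE_FILE>",
        "logType": "http_requests",
        "logFields": ["ClientIP", "ClientRequestURI"]
      }'
```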
Example API call response: 201 status code
Enable log push for a GCS destination:
- Create a GCS bucket.
- Cloudflare uses Google Cloud Identity and Access Management (IAM) to gain access to your Google Cloud Storage bucket. The Cloudflare IAM service account needs admin permission for the bucket. Add the member logpush@cloudflare-data.iam.gserviceaccount.com with the Storage Object Admin permission. This ensures that Cloudflare can push the ownership verification and logs to the GCS bucket.
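As a command-line sketch, the same permission can be granted with gsutil; the bucket name is a placeholder.

```bash
# Grant the Logpush service account the Storage Object Admin role on the bucket
gsutil iam ch \
  serviceAccount:logpush@cloudflare-data.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://<YOUR_BUCKET_NAME>
```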
eCDN Logpush supports pushing eCDN logs directly to Google Cloud Storage (GCS) through the CDN Logpush API.
However, before you can create an eCDN Logpush job for pushing logs to a GCS bucket, you must demonstrate ownership of the GCS bucket. You do this by writing the ownership token file to the bucket destination. You must also create a unique ownership challenge token for each destination path within the same bucket.
You can use the special string {DATE} in the destination path to separate logs into daily subdirectories. For example: gs://customer-bucket/logs/{DATE}. When the logs are stored, the directory name is replaced with the date in YYYYMMDD format, for example: 20230215.
Example request:
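As with S3, the request below is a sketch: the API host, path, and authentication header are placeholders, and the destinationPath is illustrative (it uses the bucket and prefix shown in the example response that follows).

```bash
# Request an ownership challenge token for a GCS destination (placeholder endpoint)
curl -X POST "https://<CDN_API_HOST>/zones/<ZONE_ID>/logpush/ownership" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"destinationPath": "gs://cflogpushtest/http_requests/{DATE}"}'
```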
Example response:
{ "data": { "destinationPath": "gs://cflogpushtest/http_requests/20251008", "fileName": "http_requests/20251008/ownership-challenge-072fd6be.txt" } }
Populate the `ownershipChallengeToken` based on the call made in the previous section. The `destinationPath` must be the same as the one specified when generating the ownership token. For details, see the Parameters, Optional Parameters, and Limitations sections.
Example logpush job API call:
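A sketch of the corresponding job creation call, with the same caveats as the S3 example above (placeholder host, authentication, and JSON key names; illustrative log fields).

```bash
# Create a Logpush job for the GCS destination (placeholder endpoint and keys)
curl -X POST "https://<CDN_API_HOST>/zones/<ZONE_ID>/logpush/jobs" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "example-gcs-logpush",
        "destinationPath": "gs://cflogpushtest/http_requests/{DATE}",
        "ownershipChallengeToken": "<TOKEN_FROM_OWNERSHIP_CHALLENGE_FILE>",
        "logType": "http_requests",
        "logFields": ["ClientIP", "ClientRequestURI"]
      }'
```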
Example API call response: 201 status code
Enable log push for an Azure destination:
- Create a Blob Storage container. For details, see instructions from Azure ↗.
- Create a shared access signature (SAS) ↗ to secure and restrict access to your blob storage container.
- Use Storage Explorer ↗ to navigate to your container, then right-click the container to create a signature. Set the signature to expire at least five years from the current date and grant only write permission.
- Provide the SAS URL as part of your destinationPath attribute.
  - Logpush stops pushing logs if your SAS token expires, which is why an expiration period of at least five years is needed.
Example destination configuration for Azure:
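A sketch of what the destinationPath can look like, assuming an azure:// scheme in which the blob container path is followed by an optional directory (here using the {DATE} placeholder) and the SAS token is appended as the query string; the storage account, container, and SAS token are placeholders.

azure://<STORAGE_ACCOUNT>.blob.core.windows.net/<CONTAINER_NAME>/logs/{DATE}?<SAS_TOKEN_QUERY_STRING>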
Cloudflare makes a test call to push test data to the Azure destination specified. If successful, the logpush job is created. If not, the logpush job creation fails.
The following example creates a logpush job with an Azure destination path. For details, see the Parameters, Optional Parameters and Limitations sections.
Example API call response: 201 status code
Enable log push for a Datadog destination:
- Specify your destination configuration hostname as either http-intake.logs.datadoghq.com/v1/input or http-intake.logs.datadoghq.com/api/v2/logs.
- Retrieve the Datadog API Key using the Datadog documentation. This is a required field.
- You can set the service, host name, Datadog ddsource field, and ddtags fields as URL parameters. While these parameters are optional, they can be useful for indexing or processing logs. For details, see the Logs section ↗ in Datadog's documentation. Note that the values of these parameters can contain special characters, which should be URL-encoded.
Example destination configuration for Datadog:
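A sketch of what the destinationPath can look like, assuming a datadog:// scheme with the API key passed as a header_DD-API-KEY parameter and the optional service, host, ddsource, and ddtags values appended as URL parameters; all bracketed values are placeholders.

datadog://http-intake.logs.datadoghq.com/api/v2/logs?header_DD-API-KEY=<DATADOG_API_KEY>&ddsource=cloudflare&service=<SERVICE>&host=<HOST>&ddtags=<TAGS>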
Cloudflare makes a test call to push test data to the specified Datadog destination configuration. If successful, the logpush job is created. If not, the logpush job creation fails.
This example creates a logpush job with a Datadog destination path. For details, see the Parameters, Optional Parameters and Limitations sections.
Example API call response: 201 status code
Configure a Splunk destination with these values:
- <SPLUNK_ENDPOINT_URL>: The Splunk raw HTTP Event Collector URL with the port. For example: splunk.cf-analytics.com:8088/services/collector/raw.
  - Cloudflare expects the Splunk endpoint to be /services/collector/raw while configuring and setting up the Logpush job.
- <SOURCE_TYPE>: The Splunk source type, for example: cloudflare:json.
- <SPLUNK_AUTH_TOKEN>: The Splunk authorization token that is URL-encoded, for example: Splunk%20e6d94e8c-5792-4ad1-be3c-29bcaee0197d.
  - Ensure you have enabled HEC in Splunk. For details, see Splunk Analytics Integrations.
  - API requests fail with a 504 error if you add an incorrect URL. Typically, the Splunk Cloud endpoint URL includes text like http-inputs- before the hostname.
- <SPLUNK_CHANNEL_ID>: A unique channel ID. This is a random GUID that you can generate by using:
  - An online tool like the GUID generator ↗.
  - The command line, for example: python -c 'import uuid; print(uuid.uuid4())'.
- <INSECURE_SKIP_VERIFY>: A boolean value. Cloudflare recommends setting this value to false. Setting it to true is equivalent to using the -k option with curl, as shown in Splunk examples, and is not recommended unless HEC uses a self-signed certificate.
Example destination configuration for Splunk:
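A sketch of what the destinationPath can look like, assuming a splunk:// scheme with the channel ID, certificate-verification flag, source type, and URL-encoded authorization token passed as URL parameters; all bracketed values are the placeholders described above.

splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>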
Cloudflare makes a test call to push test data to the endpoint specified. If successful, the logpush job is created. If not, the logpush job creation fails.
This example creates a logpush job with a Splunk destination path. For details, see the Parameters, Optional Parameters and Limitations sections.
Example API call response: 201 status code
Configure the HTTPS destination:
- Make sure that the file upload used to validate the destination accepts a gzipped test.txt.gz file with the compressed content {"content":"tests"}. Otherwise, it returns an error, for example: error validating destination: error writing object: error uploading.
- Specify "header_*" URL parameters for setting request headers.
- The HTTPS endpoint cannot have custom URL parameters that conflict with any "header_*" URL parameters you have set.
- These parameters must be properly URL-encoded, for example: use "%20" for a whitespace. Otherwise, some special characters can be decoded incorrectly.
- For destination_conf, you can specify URL parameters in addition to the special "header_*" parameters.
- Special characters that are not URL-encoded are encoded when uploaded.
Example destination configuration for HTTPS endpoint:
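A sketch of what the destinationPath can look like, assuming a plain HTTPS URL with an Authorization request header passed as a URL-encoded header_* parameter; the hostname, path, and credentials are placeholders.

https://<ENDPOINT_HOSTNAME>/<PATH>?header_Authorization=Basic%20<BASE64_ENCODED_CREDENTIALS>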
Cloudflare makes a test call to push test data to the endpoint specified. If successful, the logpush job is created. If not, the logpush job creation fails.
This example creates a logpush job with an HTTPS destination endpoint. For details, see the Parameters, Optional Parameters and Limitations sections.
Example API call response: 201 status code
Configure the Kinesis destination:
- Create an IAM role for Cloudflare Logpush to assume, using a trust relationship like the one sketched after this list.
- Ensure that the IAM role has permission to perform the PutRecord action on your Kinesis stream, replacing <AWS_REGION>, <YOUR_AWS_ACCOUNT_ID>, and <STREAM_NAME> with your own values in the permissions policy sketched after this list.
- (Optional) When using the STS Assume Role, you can include sts-external-id as a destination_conf parameter in your Logpush job's requests. For details, see Securely Using External ID for Accessing AWS Accounts Owned by Others ↗.
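The two policies below are minimal sketches. In the trust relationship, the Principal is a placeholder for the AWS account the CDN provider assumes the role from, and the sts:ExternalId condition is optional (see the sts-external-id note above).

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<CDN_PROVIDER_AWS_ACCOUNT_ARN>"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "<EXTERNAL_ID>" }
      }
    }
  ]
}
```

A permissions policy attached to that role grants PutRecord on the stream; replace <AWS_REGION>, <YOUR_AWS_ACCOUNT_ID>, and <STREAM_NAME> with your own values.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kinesis:PutRecord",
      "Resource": "arn:aws:kinesis:<AWS_REGION>:<YOUR_AWS_ACCOUNT_ID>:stream/<STREAM_NAME>"
    }
  ]
}
```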
Example destination configuration for setting up logpush to AWS Kinesis:
kinesis://<STREAM_NAME>?region=<AWS_REGION>&sts-assume-role-arn=arn:aws:iam::<YOUR_AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>
This example creates a logpush job with Kinesis. For details, see the Parameters, Optional Parameters and Limitations sections.
Example API call response: 201 status code
- Name: Your site name, which can't be changed after the job is created.
- Destination Path: The destination path for receiving logs. The prerequisites for each destination type describe how to get the destinationPath attribute.
- Ownership Challenge Token: This attribute is required for AWS S3 and Google Cloud Storage destinations. It isn't required for other destinations such as HTTPS, Datadog, Azure, Splunk, or Kinesis.
- Log Type: The type of logs you want to receive at your destination. Currently, the available types are http_requests, firewall_events, and page_shield_events.
- Log Fields: The available log fields, which vary depending on the type of logs. For details, see eCDN Logpush Log Fields.
- Filter: You can use filters to select specific types of events to include in your logs or remove events irrelevant to your analysis. By applying filters to your logs, you can focus on the most important data and avoid unnecessary noise.
For details, see eCDN Logpush Filter.
You can create a maximum of two Logpush jobs per zone.
An eCDN Logpush Job isn't enabled upon creation. You must enable the Logpush Job to start receiving logs. For additional details, see CDN Zones API Overview.
A successful response returns 200.
This example shows JSON log output from an eCDN Logpush job.
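The record below is a sketch of what a single pushed http_requests entry might look like. The field names and values are illustrative only; the actual fields delivered are the ones you select from eCDN Logpush Log Fields when creating the job.

```json
{
  "ClientIP": "203.0.113.10",
  "ClientRequestHost": "cdn.example.com",
  "ClientRequestMethod": "GET",
  "ClientRequestURI": "/videos/stream.m3u8",
  "ClientRequestUserAgent": "Mozilla/5.0",
  "EdgeResponseStatus": 200,
  "EdgeStartTimestamp": "2023-02-15T12:00:00Z"
}
```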
For details, see CDN Zones API Overview.
A successful response returns 200.
For details, see CDN Zones API Overview.
A successful response returns 204.
- The Logpush job was accidentally turned off, and we missed some logs for a certain time period. Is there a way to retrieve those missed logs?
- No, eCDN Logpush pushes the logs into the S3 bucket as soon as they become available and isn't able to backfill the missing logs.
- Can I adjust how often logs are pushed?
- No, eCDN Logpush pushes the logs in batches to the destination specified at the earliest possible time.
- I created an ownership token, but I don’t see the file in my S3 bucket.
- Check that your S3 bucket policy allows eCDN Logpush to write files to your bucket. Edit the bucket policy under S3 > Bucket > Permissions > Bucket Policy, replacing the Resource value as shown in the Enable eCDN Logpush to Amazon S3 section.