Streaming Ingestion Walkthrough

Use this walkthrough to understand the steps for loading records using streaming ingestion.

Before you can start, make sure you’ve completed the prerequisites required to set up your Ingestion API:

  • Set up an Ingestion API connector to define the endpoints and the object payload for the data you ingest.
  • Download the object endpoints from the configured Ingestion API connector. The object endpoints and the Ingestion API connector name are used as parameters in the API calls.
  • To configure ingestion jobs and expose the API for external consumption, create an Ingestion API data stream.

  1. Create a collection of JSON objects that matches the schema of the source objects you defined in your Ingestion API data stream. Wrap the collection in a data envelope and save it as orders.json. The payload must be less than 200 KB.
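
    For example, orders.json with the collection wrapped in a data envelope might look like the following. The id, order_date, and amount fields are hypothetical; use the fields defined in your own Order schema.

    ```json
    {
      "data": [
        {
          "id": "ORD-1001",
          "order_date": "2024-05-01T12:30:00Z",
          "amount": 149.99
        },
        {
          "id": "ORD-1002",
          "order_date": "2024-05-01T13:05:00Z",
          "amount": 89.50
        }
      ]
    }
    ```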

  2. Request a Data Cloud access token. For more information, refer to Authentication. The access_token property in the token exchange response contains the bearer token to use for the authorization header. The instance_url is the Data Cloud instance where the Ingestion API is hosted.
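
    Here’s a minimal Python sketch of the token exchange, assuming you already hold a Salesforce core OAuth access token and that your org uses the token exchange flow described in the Authentication topic. The endpoint path, grant type, and placeholder values are assumptions; confirm them against that topic.

    ```python
    import requests

    LOGIN_URL = "https://MyDomainName.my.salesforce.com"  # placeholder: your org's My Domain
    CORE_TOKEN = "<core OAuth access token>"              # placeholder: obtained beforehand

    # Exchange the core token for a Data Cloud access token.
    resp = requests.post(
        f"{LOGIN_URL}/services/a360/token",
        data={
            "grant_type": "urn:salesforce:grant-type:external:cdp",
            "subject_token": CORE_TOKEN,
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        },
    )
    resp.raise_for_status()
    token = resp.json()
    access_token = token["access_token"]  # bearer token for the Authorization header
    instance_url = token["instance_url"]  # Data Cloud instance hosting the Ingestion API
    ```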

  3. Use the Synchronous Record Validation method to confirm that your ingestion request is configured correctly. Requests that fail validation aren’t processed into the data lake. If the request fails validation, the API returns a report indicating which records are problematic and why. Fix the request based on the validation feedback and resubmit until you receive a 2xx status code from the API.

    Here’s what a sample request looks like.

    In this sample, ecomm is the name of the Ingestion API connector, and Order is the object configured in the connector for the payload.
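
    The following Python sketch posts orders.json to the validation endpoint. It assumes the synchronous validation path follows the pattern /api/v1/ingest/sources/{connector}/{object}/actions/test and that instance_url is returned without a scheme; check the object endpoints you downloaded from your connector for the exact URL.

    ```python
    import json
    import requests

    # access_token and instance_url come from the token exchange in step 2.
    with open("orders.json") as f:
        payload = json.load(f)

    resp = requests.post(
        f"https://{instance_url}/api/v1/ingest/sources/ecomm/Order/actions/test",
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        json=payload,
    )
    print(resp.status_code)  # aim for a 2xx status before moving on
    print(resp.json())       # on failure, a report of the problematic records
    ```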

  4. After you’re confident that the integration is properly configured, upload your JSON data via the Ingestion API, as shown in the sketch below. A 202 Accepted response indicates that your data was accepted for processing.
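
    Continuing the sketch from step 3, the upload call targets the same path without the /actions/test suffix. Again, confirm the exact endpoint from your connector’s downloaded object endpoints.

    ```python
    resp = requests.post(
        f"https://{instance_url}/api/v1/ingest/sources/ecomm/Order",
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        json=payload,
    )
    # 202 Accepted means the records were queued for asynchronous processing.
    print(resp.status_code)
    ```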