Release Notes Archive

This page contains all previous years' release notes.

NEW

Detect products on retail shelves with an optimized algorithm.

How: To use the retail execution algorithm, first create a dataset that has a type of image-detection. Then when you train the dataset to create a model, you specify an algorithm of retail-execution. The cURL command is as follows.
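
A minimal sketch of the training call (the token, dataset ID, and model name are placeholders):

  curl -X POST \
    -H "Authorization: Bearer <TOKEN>" \
    -H "Cache-Control: no-cache" \
    -H "Content-Type: multipart/form-data" \
    -F "name=Shelf Detection Model" \
    -F "datasetId=<DATASET_ID>" \
    -F "algorithm=retail-execution" \
    https://api.einstein.ai/v2/vision/train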

These Einstein Vision calls take the algorithm parameter.

  • Train a model—POST /v2/vision/train
  • Retrain a model—POST /v2/vision/retrain

NEW

Detect text in an image with Einstein OCR (Generally Available)

Get optical character recognition (OCR) models that detect alphanumeric text in an image with Einstein OCR. You access the models from a single REST API endpoint. Each model has specific use cases, such as business card scanning, product lookup, and digitizing documents and tables.

How: When you call the API, you send in an image, and the JSON response contains various elements based on the value of the task parameter. Here’s what a cURL call to the OCR endpoint looks like.
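
A sketch of the call, assuming an image passed by URL (the URL and token are placeholders; the modelId and task values shown are examples):

  curl -X POST \
    -H "Authorization: Bearer <TOKEN>" \
    -H "Cache-Control: no-cache" \
    -H "Content-Type: multipart/form-data" \
    -F "sampleLocation=https://www.example.com/business_card.png" \
    -F "modelId=OCRModel" \
    -F "task=text" \
    https://api.einstein.ai/v2/vision/ocr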

The response JSON returns the text and coordinates of a bounding box (in pixels) for that text.
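
For a text task, the response is shaped roughly as follows (the text, probability, and coordinates are illustrative):

  {
    "object": "predictresponse",
    "task": "text",
    "probabilities": [
      {
        "probability": 0.983,
        "label": "Acme Corporation",
        "boundingBox": {
          "minX": 38,
          "minY": 22,
          "maxX": 410,
          "maxY": 64
        }
      }
    ]
  }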

Einstein Intent now supports multiple languages.

Einstein Intent datasets and models now support these languages: English (US), English (UK), French, German, Italian, Portuguese, Spanish, Chinese (Simplified) (beta), Chinese (Traditional) (beta), Japanese (beta). You specify the language when you create an intent dataset. When you train that dataset, the model inherits the language of the dataset.

How: There are two new API parameters that enable multilanguage support: language and algorithm. When you create the dataset, you specify the language in the language parameter. When you train the dataset to create a model, you pass in the algorithm parameter with a value of multilingual-intent or multilingual-intent-ood (to create a model that handles out-of-domain predictions).
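
For example, a pair of calls along these lines creates a French intent dataset and then trains it with the multilingual algorithm (the token, file URL, dataset ID, and language code are placeholders; check the dataset documentation for the exact codes and field names):

  curl -X POST \
    -H "Authorization: Bearer <TOKEN>" \
    -H "Content-Type: multipart/form-data" \
    -F "type=text-intent" \
    -F "language=fr-FR" \
    -F "path=https://www.example.com/case_routing_intent_fr.csv" \
    https://api.einstein.ai/v2/language/datasets/upload

  curl -X POST \
    -H "Authorization: Bearer <TOKEN>" \
    -F "name=Case Routing Model (FR)" \
    -F "datasetId=<DATASET_ID>" \
    -F "algorithm=multilingual-intent" \
    https://api.einstein.ai/v2/language/train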

These calls take the language parameter.

  • Create a dataset asynchronously—POST /v2/language/datasets/upload
  • Create a dataset synchronously—POST /v2/language/datasets/upload/sync

These calls take the algorithm parameter.

  • Train a dataset—POST /v2/language/train
  • Retrain a dataset—POST /v2/language/retrain

As a beta feature, Chinese (Simplified), Chinese (Traditional), and Japanese language support is a preview and isn’t part of the “Services” under your master subscription agreement with Salesforce. Use this feature at your sole discretion, and make your purchase decisions only on the basis of generally available products and features. Salesforce doesn’t guarantee general availability of this feature within any particular time frame or at all, and we can discontinue it at any time. This feature is for evaluation purposes only, not for production use. It’s offered as is and isn’t supported, and Salesforce has no liability for any harm or damage arising out of or in connection with it. All restrictions, Salesforce reservation of rights, obligations concerning the Services, and terms for related Non-Salesforce Applications and Content apply equally to your use of this feature.

Create Einstein Intent models that support out-of-domain text.

Einstein Intent lets you create a model that handles predictions for unexpected, out-of-domain, text. Out-of-domain text is text that doesn’t fall into any of the labels in the model.

How: When you train an intent dataset, pass the algorithm parameter with a value of multilingual-intent-ood. To see how the algorithm works, let’s say you have a case routing model with five labels: Billing, Order Change, Password Help, Sales Opportunity, and Shipping Info. The following text comes in for prediction: “What is the weather in Los Angeles?” If the model was created using the standard algorithm, the response looks like this JSON.
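
(The label names below come from the example model; the probabilities are illustrative.)

  {
    "probabilities": [
      { "label": "Shipping Info", "probability": 0.307 },
      { "label": "Billing", "probability": 0.290 },
      { "label": "Order Change", "probability": 0.175 },
      { "label": "Password Help", "probability": 0.122 },
      { "label": "Sales Opportunity", "probability": 0.106 }
    ],
    "object": "predictresponse"
  }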

The text sent for prediction clearly doesn’t fall into any of the labels. The model isn’t designed to handle predictions that don’t match one of the labels, so the model returns the labels with the best probability. If you create the model with the multilingual-intent-ood algorithm, and you send the same text for prediction, the response returns an empty probabilities array.
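
With the out-of-domain model, the same prediction returns a response along these lines:

  {
    "probabilities": [],
    "object": "predictresponse"
  }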

These calls take the algorithm parameter.

  • Train a dataset—POST /v2/language/train
  • Retrain a dataset—POST /v2/language/retrain

NEW

Get more detailed error messages for Einstein Object Detection training API calls.

When you train an object detection dataset and the training process encounters an error, the API now returns more descriptive error messages. In most cases, the error message specifies the issue that caused the error and how to fix it.

How: The improved errors are returned for these API endpoints when the dataset type is image-detection.

  • Train a dataset—POST /v2/vision/train
  • Retrain a dataset—POST /v2/vision/retrain

NEW

Added elements in Language API model metrics response.

New elements returned in the model metrics let you better understand the performance of your model. The response JSON for an Einstein Language API call that returns model metrics information contains three new elements: the macroF1 field, the precision array, and the recall array.

When: This change applies to all language models created after September 30, 2019. If you want to see these changes for models created before that date, retrain the dataset and create a new model.

How: The new field and arrays appear in the response for these calls when the model type is text-intent or text-sentiment.

  • Get model metrics—GET /v2/language/models/<MODEL_ID>
  • Get model learning curve—GET /v2/language/models/<MODEL_ID>/lc
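
In the metrics response, the new elements sit alongside the existing per-label metrics. A rough sketch of the relevant fragment, assuming a three-label model (values are illustrative and other fields are omitted):

  {
    "metricsData": {
      "macroF1": 0.82,
      "precision": [0.85, 0.79, 0.81],
      "recall": [0.88, 0.74, 0.80]
    }
  }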

CHANGED

Text datasets can contain up to 3 million words.

The maximum number of words in a text dataset is now 3 million.

A text dataset is a dataset that has a type of text-intent or text-sentiment.

How: You receive an error from the following calls when you train a text dataset that has more than 3 million words:

  • Train a dataset—POST /v2/language/train
  • Retrain a dataset—POST /v2/language/retrain

To avoid this error, be sure that when you create a dataset or add examples to it, the dataset contains fewer than 3 million words across all examples. For best results, we recommend that each example contain around 100 words.

CHANGED

Object detection max image size increased.

We increased the maximum size of an image you can add to an object detection dataset from 1 MB to 5 MB.

How: The new maximum image size applies to these calls when the dataset type is image-detection.

  • Create a dataset asynchronously—POST /v2/vision/datasets/upload
  • Create a dataset synchronously—POST /v2/vision/datasets/upload/sync
  • Create examples from a .zip file—PUT /v2/vision/datasets/<DATASET_ID>/upload
  • Create an example—POST /v2/vision/datasets/<DATASET_ID>/examples
  • Create a feedback example—POST /v2/vision/feedback
  • Create feedback examples from a .zip file—PUT /v2/vision/bulkfeedback

NEW

Intent API response JSON contains a new algorithm field.

The response JSON for an Einstein Intent API call that returns model information now contains the algorithm field. The default return value is intent.

How: The algorithm field appears in the response for these calls when the dataset type or model type is text-intent.

  • Train a dataset—POST /v2/language/train
  • Retrain a dataset—POST /v2/language/retrain
  • Get training status—GET /v2/language/train/<MODEL_ID>
  • Get model metrics—GET /v2/language/models/<MODEL_ID>
  • Get all models for a dataset—GET /v2/language/datasets/<DATASET_ID>/models

NEW

API response JSON contains a new language field.

The response JSON for an Einstein Vision API call that returns model information now contains the language field. When you train a dataset, the resulting model inherits the language of the dataset. For Einstein Vision datasets and models, the return value is N/A.

How: The language field appears in the response for these calls.

  • Train a dataset—POST /v2/vision/train
  • Retrain a dataset—POST /v2/vision/retrain
  • Get training status—GET /v2/vision/train/<MODEL_ID>
  • Get model metrics—GET /v2/vision/models/<MODEL_ID>
  • Get all models for a dataset—GET /v2/vision/datasets/<DATASET_ID>/models

API response JSON contains a new language field.

The response JSON for an Einstein Language API call that returns model information now contains the language field. When you train a dataset, the resulting model inherits the language of the dataset. For Einstein Language datasets and models, the return value is en_US.

How: The language field appears in the response for these calls.

  • Train a dataset—POST /v2/language/train
  • Retrain a dataset—POST /v2/language/retrain
  • Get training status—GET /v2/language/train/<MODEL_ID>
  • Get model metrics—GET /v2/language/models/<MODEL_ID>
  • Get all models for a dataset—GET /v2/language/datasets/<DATASET_ID>/models

Object Detection API response JSON contains a new algorithm field.

The response JSON for an Einstein Vision API call that returns object detection model information now contains the algorithm field. The default return value is object-detection.

How: The algorithm field appears in the response for these calls when the dataset type or model type is image-detection.

  • Train a dataset—POST /v2/vision/train
  • Retrain a dataset—POST /v2/vision/retrain
  • Get training status—GET /v2/vision/train/<MODEL_ID>
  • Get model metrics—GET /v2/vision/models/<MODEL_ID>
  • Get all models for a dataset—GET /v2/vision/datasets/<DATASET_ID>/models

Einstein Language default language is now en_US.

The default language changed to en_US from ENGLISH.

How: The language field now contains the value en_US in the response for these calls.

  • Create a dataset asynchronously—POST /v2/language/datasets/upload
  • Create a dataset synchronously—POST /v2/language/datasets/upload/sync
  • Get a dataset—GET /v2/language/datasets/<DATASET_ID>
  • Get all datasets—GET /v2/language/datasets
  • Create examples from a file—PUT /v2/language/datasets/<DATASET_ID>/upload

NEW

Einstein Vision and Language Model Builder just released.

Einstein Vision and Language Model Builder is an AppExchange package that provides a UI for the Einstein Vision and Language deep learning APIs. You can easily create datasets, build models, and make predictions, all from within Salesforce.

NEW

API response JSON contains a new numOfDuplicates field.

The response JSON for any Einstein Vision API call that returns dataset information now includes the numOfDuplicates field. This field indicates the number of images not added to the dataset because they’re duplicates.

Why: When you create a dataset or add data to a dataset, duplicate images are omitted. The numOfDuplicates field appears in the response for these calls.

  • Create a dataset asynchronously—POST /v2/vision/datasets/upload
  • Create a dataset synchronously—POST /v2/vision/datasets/upload/sync
  • Create a dataset—POST /v2/vision/datasets
  • Get a dataset—GET /v2/vision/datasets/<DATASET_ID>
  • Get all datasets—GET /v2/vision/datasets
  • Create examples from a .zip file—PUT /v2/vision/datasets/<DATASET_ID>/upload
  • Create feedback examples from a .zip file—PUT /v2/vision/bulkfeedback

API response JSON contains a new numOfDuplicates field.

The response JSON for any Einstein Language API call that returns dataset information now contains the numOfDuplicates field. This field indicates the number of text strings not added to the dataset because they’re duplicates.

Why: When you create a dataset or add data to a dataset, duplicate text strings are omitted. The numOfDuplicates field appears in the response for these calls.

  • Create a dataset asynchronously—POST /v2/language/datasets/upload
  • Create a dataset synchronously—POST /v2/language/datasets/upload/sync
  • Get a dataset—GET /v2/language/datasets/<DATASET_ID>
  • Get all datasets—GET /v2/language/datasets
  • Create examples from a file—PUT /v2/language/datasets/<DATASET_ID>/upload

CHANGED

Maximum image dataset size increased to 2 GB.

We doubled the maximum size of an image dataset from 1 GB to 2 GB.

Maximum text dataset size increased to 2 GB.

We doubled the maximum size of a text dataset from 1 GB to 2 GB.

CHANGED

Number of API calls to return examples is limited to 30 calls per month.

Each Einstein Platform Services account is now limited to 30 calls per calendar month to Einstein Vision and Einstein Language endpoints that return examples.

This limit applies across all APIs that return examples. If you exceed this limit, you receive an error message.

How: These API endpoints return examples.

  • Get all Einstein Vision examples—GET /v2/vision/datasets/<DATASET_ID>/examples
  • Get all Einstein Vision examples for a label—GET /v2/vision/examples?labelId=<LABEL_ID>
  • Get all Einstein Language examples—GET /v2/language/datasets/<DATASET_ID>/examples
  • Get all Einstein Language examples for a label—GET /v2/language/examples?labelId=<LABEL_ID>

NEW

API response JSON contains a new language field.

The response JSON for any Einstein Vision API call that returns dataset information now contains the language field. The return value is N/A.

How: The language field appears in the response for these calls.

  • Create a dataset asynchronously—POST /v2/vision/datasets/upload
  • Create a dataset synchronously—POST /v2/vision/datasets/upload/sync
  • Create a dataset—POST /v2/vision/datasets
  • Get a dataset—GET /v2/vision/datasets/<DATASET_ID>
  • Get all datasets—GET /v2/vision/datasets
  • Create examples from a .zip file—PUT /v2/vision/datasets/<DATASET_ID>/upload
  • Create feedback examples from a .zip file—PUT /v2/vision/bulkfeedback

Use the optional language parameter when creating a text dataset.

When creating a text dataset, you can now specify a language with the new language parameter. The default is ENGLISH. We created this parameter for future use, so you don’t need to do anything now.

How: The language parameter is available in these calls.

  • Create a dataset asynchronously—POST /v2/language/datasets/upload
  • Create a dataset synchronously—POST /v2/language/datasets/upload/sync

API response JSON contains a new language field.

The response JSON for any Einstein Language API call that returns dataset information now contains the language field. The return value for existing datasets is ENGLISH.

How: The language field appears in the response for these calls.

  • Create a dataset asynchronously—POST /v2/language/datasets/upload
  • Create a dataset synchronously—POST /v2/language/datasets/upload/sync
  • Get a dataset—GET /v2/language/datasets/<DATASET_ID>
  • Get all datasets—GET /v2/language/datasets
  • Create examples from a file—PUT /v2/language/datasets/<DATASET_ID>/upload

CHANGED

Exceeding the maximum dataset size returns an error.

When you create a dataset (using the POST call) or add data to a dataset (using the PUT call), if the resulting dataset exceeds the maximum dataset size of 1 GB, the call fails and an error is returned. This change applies to Einstein Vision and Einstein Language.

See Create a Dataset From a Zip File Asynchronously (Vision) and Create Examples From a Zip File (Vision).

See Create a Dataset From a File Asynchronously (Language) and Create Examples From a File (Language).

Training requests from customers on a paid plan are prioritized.

Training requests from customers on a paid plan are prioritized before training requests made by customers on a free plan. A training request is any call to the /train or /retrain resources.

Use the API usage call to find out what kind of plan you have. See Get API Usage.

The following is a list of the plans. The free plan has a value of STARTER.

  • HEROKU
    • STARTER—2,000 predictions per calendar month.
    • BRONZE—10,000 predictions per calendar month.
    • SILVER—250,000 predictions per calendar month.
    • GOLD—One million predictions per calendar month.

  • SALESFORCE
    • STARTER—2,000 predictions per calendar month.
    • SFDC_1M_EDITION—One million predictions per calendar month.

You might see a delay in training if you're on the free tier of service and there are other training requests in the queue. This change applies to Einstein Vision and Einstein Language.

CHANGED

Changes to delete dataset functionality for Einstein Vision and Einstein Language.

The delete dataset API call no longer returns a 204 status code for a successful dataset deletion. Instead, the API returns a 200 status code, which indicates that the deletion request was received but the deletion has yet to be completed. See Delete a Dataset (Vision) and Delete a Dataset (Language).

In addition to the new status code, the call returns a JSON response with a deletion ID. You can use this ID to query the status of the deletion (see Get Deletion Status (Vision) and Get Deletion Status (Language)). The response looks similar to this JSON.
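
A rough sketch of the shape (the id value is a placeholder; the exact fields can vary):

  {
    "id": "<DELETION_ID>",
    "type": "DATASET",
    "status": "QUEUED",
    "object": "deletion"
  }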

Deleting a dataset no longer deletes the associated models. You must explicitly delete models. See Delete a Model (Vision) and Delete a Model (Language).

NEW

Get the deletion status with this new API endpoint.

After you delete a dataset or a model, it may take some time for the data to be deleted. To confirm whether a dataset or model has been deleted, call the /deletion endpoint along with the deletion ID.
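
For Einstein Vision, a status check is shaped like this sketch (the Einstein Language variant uses the /v2/language path; the token and deletion ID are placeholders):

  curl -X GET \
    -H "Authorization: Bearer <TOKEN>" \
    https://api.einstein.ai/v2/vision/deletion/<DELETION_ID>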

The deletion status in the response is one of these values:

  • QUEUED—Object deletion hasn't started.
  • RUNNING—Object deletion is in progress.
  • SUCCEEDED—Object deletion is complete.
  • SUCCEEDED_WAITING_FOR_CACHE_REMOVAL—Object was deleted, but it can take up to 30 days to delete some related files that are cached in the system.

See Get Deletion Status (Vision) and Get Deletion Status (Language).

Delete a model with this new API endpoint.

Now deleting a dataset doesn't delete the models associated with that dataset. Instead, use this new API endpoint to delete a model. This cURL call deletes a model.
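
For an Einstein Vision model, the call is shaped like this sketch (use the /v2/language path for an Einstein Language model; the token and model ID are placeholders):

  curl -X DELETE \
    -H "Authorization: Bearer <TOKEN>" \
    https://api.einstein.ai/v2/vision/models/<MODEL_ID>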

The response looks similar to this JSON.
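
The shape is roughly as follows (the id value is a placeholder; the exact fields can vary):

  {
    "id": "<DELETION_ID>",
    "type": "MODEL",
    "status": "QUEUED",
    "object": "deletion"
  }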

See Delete a Model (Vision) and Delete a Model (Language).

After you delete a model, use the id to check the status of the deletion.

NEW

Reset your private key.

After you sign up for an account, you download or save your private key in the form of a .pem file. But sometimes things happen. If you lose your private key, you can reset it. See Reset Your Private Key.

CHANGED

Rate limiting for Einstein Language (which includes Einstein Intent and Einstein Sentiment) and Einstein Object Detection goes into effect today.

The free tier of our service will offer 2,000 free predictions (increased from 1,000 free predictions) each calendar month. See Rate Limits.

When you exceed the maximum number of predictions for the current calendar month, you receive an error message when you call one of the prediction resources. To purchase predictions, contact your Salesforce or Heroku AE.

A prediction is any POST call to these endpoints:

  • /vision/predict
  • /vision/detect
  • /language/intent
  • /language/sentiment

CHANGED

On January 15, 2018, the response returned by the /detect call is changing.

In the new response JSON, the field "resultType": "DetectionResult" is removed and the field "object": "predictresponse" is added.

The new response looks like this JSON.
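
A sketch of the new shape (the label, probability, and coordinates are illustrative):

  {
    "probabilities": [
      {
        "label": "Oat Cereal Box",
        "probability": 0.977,
        "boundingBox": {
          "minX": 392,
          "minY": 122,
          "maxX": 543,
          "maxY": 393
        }
      }
    ],
    "object": "predictresponse"
  }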

See the Detection with Image File section and Detection with Image URL section of Detection.

CHANGED

Generate an access token using a refresh token.

Instead of using your private key to generate an access token, you can generate a refresh token and use that to generate an access token. A refresh token is a JWT token that never expires.

A refresh token is useful when an application, such as a mobile app, is offline and doesn't have access to the key. See Generate an OAuth Access Token.

NEW

Einstein Sentiment, Einstein Intent, and Einstein Object Detection now generally available.

Einstein Vision and Language make it possible to streamline your workflows across sales, service, and marketing so that you can do things like visual product search, product identification, intelligent case routing, and automated planogram analysis.

NEW

Add feedback to object detection models.

If your object detection model misclassifies images, you can use the feedback API to add those images, along with their correct labels, to the dataset. After you add feedback to the dataset, you can:

  • Train the dataset to create a new model
  • Retrain the dataset to update the model and keep the same model ID

See Add Feedback to a Dataset and Create Feedback Examples From a Zip File.

CHANGED

Model training must be complete before you can delete a dataset.

If a dataset is being trained and has an associated model with a status of QUEUED or RUNNING, you must wait until the training is complete before you can delete the dataset.

CHANGED

JWT token is now longer.

The JWT tokens you use to call the API are now longer. You see this change whether you use the token web page to get a token or whether you generate the token in code by calling the /oauth2/token endpoint. See Generate an OAuth Token.

NEW

Get learning curve metrics for Einstein Language models.

Use this new API call to get the model metrics for each epoch (training iteration) performed to create a sentiment or intent model. See Get Model Learning Curve.

Use the precision-recall curve metrics to understand your Einstein Language model.

When you get the model metrics, the API now returns the precision-recall curve for your model. These metrics help you understand how well the model performs. See Get Model Metrics.

NEW

Einstein Object Detection now available.

Use this API to train models to recognize and count multiple distinct objects within an image. This API is part of Einstein Vision, so you use the same calls as you do for image and multi-label models. But the data you use to create the models is different. See Create a Dataset From a Zip File Asynchronously.

New Trailhead module: Einstein Intent API Basics.

Build a deep-learning custom model to categorize text and automate business processes. See Einstein Intent API Basics.

NEW

Get all examples for a label.

You can now return all examples for a single label by passing in the label ID. This API call is available in both Einstein Vision and Einstein Language. For Einstein Vision, see Get All Examples for Label. For Einstein Language, see Get All Examples for Label.

NEW

Pass parameters as JSON when classifying text using the Einstein Language APIs.

You can now pass text in JSON when calling the /intent and /sentiment resources. See Prediction for Intent and Prediction for Sentiment.
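
For example, a call to the /intent resource with a JSON body is shaped roughly like this (the token, model ID, and text are placeholders):

  curl -X POST \
    -H "Authorization: Bearer <TOKEN>" \
    -H "Content-Type: application/json" \
    -d '{"modelId": "<MODEL_ID>", "document": "where is my order"}' \
    https://api.einstein.ai/v2/language/intent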

CHANGED

Einstein Image Classification API limits updated.

  • The image file name maximum length increased from 100 to 150 characters.

  • There's no longer a maximum number of examples you can create using the Create an Example call.

Add single examples to a dataset.

You can use the Create an Example call to add an example to a dataset that was created from a .zip file.

Unicode characters now supported in all APIs.

These elements can now contain Unicode characters:

  • .zip file name
  • directory or label name
  • file or example name
  • dataset name

Default split ratio changed.

In the Einstein Language APIs, the default split ratio used during training is now 0.8. With this split ratio, 80% of the data is used to create the model and 20% is used to test the model.

The minimum number of examples changed in the Einstein Language APIs.

  • A dataset with a type of text-intent must have at least five examples per label.
  • A dataset with a type of text-sentiment must have at least five examples per label.

NEW

Einstein Language (Beta) released.

Einstein Language includes two APIs that you can use to unlock powerful insights within text.

  • Einstein Intent (Beta)—Categorize unstructured text into user-defined labels to better understand what users are trying to accomplish.

  • Einstein Sentiment (Beta)—Classify the sentiment of text into positive, negative, and neutral classes.

See Introduction to Salesforce Einstein Language.

NEW

Einstein Image Classification API version 2.0 released.

The following entries list all the changes to the API in the new version. Einstein Vision is now the umbrella term for all of the image recognition APIs. The Einstein Vision API is now called the Image Classification API.

Use the version selector at the top of this page to switch to the documentation for another version.

The API now uses the https://api.einstein.ai endpoint.

When you access the Einstein Platform Services APIs, you can now use this new endpoint. For example, the endpoint to get a dataset is https://api.einstein.ai/v2/vision/datasets/<DATASET_ID>.

The old api.metamind.io endpoint still works, but be sure to update your code to use the new endpoint.

Optimize your model using feedback.

Use the feedback API to add a misclassified image with the correct label to the dataset from which the model was created.

  • Use the new API call to add a feedback example. See Create a Feedback Example.

  • The call to get all examples now has three new query parameters: feedback, upload, and all. Use these query parameters to refine the examples that are returned. See Get All Examples.

  • The call to train a dataset and create a model now takes the trainParams object {"withFeedback": true}, as shown in the sketch below. This option specifies that the feedback examples are used during the training process. If you don't pass this value, the feedback examples aren't used during training. See Train a Dataset.
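
A minimal sketch of such a training call (the token, model name, and dataset ID are placeholders):

  curl -X POST \
    -H "Authorization: Bearer <TOKEN>" \
    -H "Content-Type: multipart/form-data" \
    -F "name=My Model v2" \
    -F "datasetId=<DATASET_ID>" \
    -F "trainParams={\"withFeedback\": true}" \
    https://api.einstein.ai/v2/vision/train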

Retrain a dataset and keep the same model ID.

There's now a call to retrain a dataset, which is useful if you added new data to the dataset or want to include feedback data. Retraining a dataset lets you maintain the model ID, which is ideal if you reference the model in production code. See Retrain a Dataset.

Multi-label datasets are available.

The new dataset type image-multi-label enables you to specify that the dataset contains multi-label data. Any models you create from this dataset have a modelType of image-multi-label. See Determine the Model Type You Need.

There are two new calls to get the model metrics and the learning curve for a multi-label model.

See Get Multi-Label Model Metrics and Get Multi-Label Model Learning Curve.

Get up and running with multi-label predictions using our prebuilt multi-label model.

This multi-label model is used to classify a variety of objects. See Use the Prebuilt Models.

Use the numResults parameter to limit prediction results.

The numResults optional request parameter lets you specify the number of labels and probabilities to return when sending in data for prediction. This parameter can be used with both Einstein Vision and Einstein Language.

Use global datasets to include additional data in your model.

Global datasets are public datasets that Salesforce provides. When you train a dataset to create a model, you can include the data from a global dataset. One way you can use global datasets is to create a negative class in your model. See Use Global Datasets.

CHANGED

Dataset type is required when you create a dataset.

When you call the API to create a dataset, you must pass in the type request parameter to specify the type of dataset. Valid values are:

  • image—Standard classification dataset. Returns the single class into which an image falls.

  • image-multi-label—Multi-label classification dataset. Returns multiple classes into which an image falls.

See Determine the Model Type You Need.

Getting all datasets returns a maximum of 25 datasets.

If you omit the count parameter, the call to get all datasets returns a maximum of 25 datasets. If you set the count query parameter to a value greater than 25, the call still returns 25 datasets. See Get All Datasets.

DEPRECATED

The following calls have been removed from the Einstein Image Classification API in version 2.0.

  • Create a label. You must pass in the labels when you create the dataset. /vision/datasets/<DATASET_ID>/labels

  • Get a label. /vision/datasets/<DATASET_ID>/labels/<LABEL_ID>

  • Get an example. /vision/datasets/<DATASET_ID>/examples/<EXAMPLE_ID>

  • Delete an example. /vision/datasets/<DATASET_ID>/examples/<EXAMPLE_ID>