Release Notes Archive

This page contains all previous years' release notes.

The old api.metamind.io endpoint has been retired. See the note from July 27, 2017.

What: There is an issue with API Usage tracking that returns the wrong number of used API calls. To ensure continued access to the service, we are temporarily not enforcing rate limits while we work to rectify the API Usage issue. The expected date for the resumption of accurate API Usage tracking and the enforcement of rate limiting is the first quarter of 2024.

What: To enhance the security and availability of Salesforce services, this endpoint is now deprecated, and we plan to remove it on August 14, 2023. This endpoint is not available to new customers, and we discourage existing customers from using it.

What: To simplify the Einstein Language API, the Get Deletion Status endpoint has been deprecated. Deletion is now immediate. To determine whether a dataset has been deleted, use the Get Dataset call. A deleted dataset returns a 404 (Not Found) HTTP status.

What: To enhance the security and availability of Salesforce services, the path request parameter on the dataset upload endpoint has been deprecated.

What: Specifying either intent or multilingual-intent defaults to multilingual-intent. The intent algorithm used V2, which has been deprecated. The default multilingual-intent algorithm uses V3, which has improvements in accuracy and performance. You don't need to make any changes to existing calls that use intent as the algorithm.

What: As a side effect of the migration of Einstein Vision and Language services to Hyperforce, calls to Get Model Metrics (https://api.einstein.ai/v2/language/models/<MODEL_ID>) return a 404 (Not Found) error until the next dataset train or retrain.

How: To use the retail execution algorithm, first create a dataset that has a type of image-detection. Then when you train the dataset to create a model, you specify an algorithm of retail-execution. The cURL command is as follows.
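
(A minimal sketch; the access token, dataset ID, and model name are placeholders.)

  curl -X POST \
    -H "Authorization: Bearer <TOKEN>" \
    -H "Cache-Control: no-cache" \
    -H "Content-Type: multipart/form-data" \
    -F "name=Shelf Compliance Model" \
    -F "datasetId=<DATASET_ID>" \
    -F "algorithm=retail-execution" \
    https://api.einstein.ai/v2/vision/train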

These Einstein Vision calls take the algorithm parameter.

  • Train a model—POST /v2/vision/train
  • Retrain a model—POST /v2/vision/retrain

Get optical character recognition (OCR) models that detect alphanumeric text in an image with Einstein OCR. You access the models from a single REST API endpoint. Each model has specific use cases, such as business card scanning, product lookup, and digitizing documents and tables.

How: When you call the API, you send in an image, and the JSON response contains various elements based on the value of the task parameter. Here’s what a cURL call to the OCR endpoint looks like.
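
(A sketch that assumes the OCRModel model ID and the text task; the token and image URL are placeholders.)

  curl -X POST \
    -H "Authorization: Bearer <TOKEN>" \
    -H "Cache-Control: no-cache" \
    -H "Content-Type: multipart/form-data" \
    -F "sampleLocation=https://www.example.com/receipt.png" \
    -F "task=text" \
    -F "modelId=OCRModel" \
    https://api.einstein.ai/v2/vision/ocr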

The response JSON returns the text and coordinates of a bounding box (in pixels) for that text.
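
A trimmed sketch of that response (the text and coordinates are illustrative):

  {
    "task": "text",
    "probabilities": [
      {
        "probability": 0.9937,
        "label": "Total: $25.00",
        "boundingBox": {
          "minX": 42,
          "minY": 100,
          "maxX": 311,
          "maxY": 132
        }
      }
    ],
    "object": "predictresponse"
  }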

Einstein Intent datasets and models now support these languages: English (US), English (UK), French, German, Italian, Portuguese, Spanish, Chinese (Simplified) (beta), Chinese (Traditional) (beta), Japanese (beta). You specify the language when you create an intent dataset. When you train that dataset, the model inherits the language of the dataset.

How: There are two new API parameters that enable multilanguage support: language and algorithm. When you create the dataset, you specify the language in the language parameter. When you train the dataset to create a model, you pass in the algorithm parameter with a value of multilingual-intent or multilingual-intent-ood (to create a model that handles out-of-domain predictions).
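
For example, here's a sketch of both calls for a French intent dataset (the data file, model name, IDs, and language code are placeholders):

  # Create the dataset with a language
  curl -X POST \
    -H "Authorization: Bearer <TOKEN>" \
    -H "Content-Type: multipart/form-data" \
    -F "type=text-intent" \
    -F "language=fr" \
    -F "data=@case_routing_fr.csv" \
    https://api.einstein.ai/v2/language/datasets/upload

  # Train it with the multilingual algorithm
  curl -X POST \
    -H "Authorization: Bearer <TOKEN>" \
    -H "Content-Type: multipart/form-data" \
    -F "name=Case Routing Model FR" \
    -F "datasetId=<DATASET_ID>" \
    -F "algorithm=multilingual-intent" \
    https://api.einstein.ai/v2/language/train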

These calls take the language parameter.

  • Create a dataset asynchronously—POST /v2/language/datasets/upload
  • Create a dataset synchronously—POST /v2/language/datasets/upload/sync

These calls take the algorithm parameter.

  • Train a dataset—POST /v2/language/train
  • Retrain a dataset—POST /v2/language/retrain

As a beta feature, Chinese (Simplified), Chinese (Traditional), and Japanese language support is a preview and isn’t part of the “Services” under your master subscription agreement with Salesforce. Use this feature at your sole discretion, and make your purchase decisions only on the basis of generally available products and features. Salesforce doesn’t guarantee general availability of this feature within any particular time frame or at all, and we can discontinue it at any time. This feature is for evaluation purposes only, not for production use. It’s offered as is and isn’t supported, and Salesforce has no liability for any harm or damage arising out of or in connection with it. All restrictions, Salesforce reservation of rights, obligations concerning the Services, and terms for related Non-Salesforce Applications and Content apply equally to your use of this feature.

Einstein Intent lets you create a model that handles predictions for unexpected, out-of-domain text. Out-of-domain text is text that doesn't fall into any of the labels in the model.

How: When you train an intent dataset, pass the algorithm parameter with a value of multilingual-intent-ood. To see how the algorithm works, let’s say you have a case routing model with five labels: Billing, Order Change, Password Help, Sales Opportunity, and Shipping Info. The following text comes in for prediction: “What is the weather in Los Angeles?” If the model was created using the standard algorithm, the response looks like this JSON.
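
(A sketch; the probability values are illustrative.)

  {
    "probabilities": [
      { "label": "Shipping Info", "probability": 0.39 },
      { "label": "Billing", "probability": 0.23 },
      { "label": "Order Change", "probability": 0.19 },
      { "label": "Password Help", "probability": 0.11 },
      { "label": "Sales Opportunity", "probability": 0.08 }
    ],
    "object": "predictresponse"
  }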

The text sent for prediction clearly doesn’t fall into any of the labels. The model isn’t designed to handle predictions that don’t match one of the labels, so the model returns the labels with the best probability. If you create the model with the multilingual-intent-ood algorithm, and you send the same text for prediction, the response returns an empty probabilities array.
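
That out-of-domain response looks similar to this sketch:

  {
    "probabilities": [],
    "object": "predictresponse"
  }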

These calls take the algorithm parameter.

  • Train a dataset—POST /v2/language/train
  • Retrain a dataset—POST /v2/language/retrain

When you train an object detection dataset and the training process encounters an error, the API now returns more descriptive error messages. In most cases, the error message specifies the issue that caused the error and how to fix it.

How: The improved errors are returned for these API endpoints when the dataset type is image-detection.

  • Train a dataset—POST /v2/vision/train
  • Retrain a dataset—POST /v2/vision/retrain

New elements returned in the model metrics let you better understand the performance of your model. The response JSON for an Einstein Language API call that returns model metrics information contains three new elements: the macroF1 field, the precision array, and the recall array.
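
For example, the metrics portion of the response might look similar to this trimmed sketch for a three-label model (values are illustrative, and other fields are omitted):

  "metricsData": {
    "f1": [0.82, 0.78, 0.84],
    "macroF1": 0.81,
    "precision": [0.85, 0.76, 0.83],
    "recall": [0.80, 0.81, 0.86],
    "testAccuracy": 0.83
  }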

When: This change applies to all language models created after September 30, 2019. If you want to see these changes for models created before that date, retrain the dataset and create a new model.

How: The new field and arrays appear in the response for these calls when the model type is text-intent or text-sentiment.

  • Get model metrics—GET /v2/language/models/<MODEL_ID>
  • Get model learning curve—GET /v2/language/models/<MODEL_ID>/lc

A text dataset is a dataset that has a type of text-intent or text-sentiment.

How: You receive an error from the following calls when you train a text dataset that has more than 3 million words:

  • Train a dataset—POST /v2/language/datasets/train
  • Retrain a dataset—POST /v2/language/datasets/retrain

To avoid this error, make sure that when you create a dataset or add examples to a dataset, it contains fewer than 3 million words across all examples. For best results, we recommend that each example is around 100 words.

We increased the maximum size of an image you can add to an object detection dataset from 1 MB to 5 MB.

How: The new maximum image size applies to these calls when the dataset type is image-detection.

  • Create a dataset asynchronously—POST /v2/vision/datasets/upload
  • Create a dataset synchronously—POST /v2/vision/datasets/upload/sync
  • Create examples from a .zip file—PUT /v2/vision/datasets/<DATASET_ID>/upload
  • Create an example—POST /v2/vision/datasets/<DATASET_ID>/examples
  • Create a feedback example—POST /v2/vision/feedback
  • Create feedback examples from a .zip file—PUT /v2/vision/bulkfeedback

The response JSON for an Einstein Intent API call that returns model information now contains the algorithm field. The default return value is intent.
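
For example, a trimmed sketch of a model response that includes the new field (other fields are omitted, and the values are illustrative):

  {
    "id": "<MODEL_ID>",
    "datasetId": 1000022,
    "status": "SUCCEEDED",
    "algorithm": "intent",
    "object": "model"
  }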

How: The algorithm field appears in the response for these calls when the dataset type or model type is text-intent.

  • Train a dataset—POST /v2/language/datasets/train
  • Retrain a dataset—POST /v2/language/datasets/retrain
  • Get training status—GET /v2/language/train/<MODEL_ID>
  • Get model metrics—GET /v2/language/models/<MODEL_ID>
  • Get all models for a dataset—GET /v2/language/datasets/<DATASET_ID>/models

The response JSON for an Einstein Vision API call that returns model information now contains the language field. When you train a dataset, the resulting model inherits the language of the dataset. For Einstein Vision datasets and models, the return value is N/A.

How: The language field appears in the response for these calls.

  • Train a dataset—POST /v2/vision/train
  • Retrain a dataset—POST /v2/vision/retrain
  • Get training status—GET /v2/vision/train/<MODEL_ID>
  • Get model metrics—GET /v2/vision/models/<MODEL_ID>
  • Get all models for a dataset—GET /v2/vision/datasets/<DATASET_ID>/models

The response JSON for an Einstein Language API call that returns model information now contains the language field. When you train a dataset, the resulting model inherits the language of the dataset. For Einstein Language datasets and models, the return value is en_US.

How: The language field appears in the response for these calls.

  • Train a dataset—POST /v2/language/datasets/train
  • Retrain a dataset—POST /v2/language/datasets/retrain
  • Get training status—GET /v2/language/train/<MODEL_ID>
  • Get model metrics—GET /v2/language/models/<MODEL_ID>
  • Get all models for a dataset—GET /v2/language/datasets/<DATASET_ID>/models

The response JSON for an Einstein Vision API call that returns object detection model information now contains the algorithm field. The default return value is object-detection.

How: The algorithm field appears in the response for these calls when the dataset type or model type is image-detection.

  • Train a dataset—POST /v2/vision/train
  • Retrain a dataset—POST /v2/vision/retrain
  • Get training status—GET /v2/vision/train/<MODEL_ID>
  • Get model metrics—GET /v2/vision/models/<MODEL_ID>
  • Get all models for a dataset—GET /v2/vision/datasets/<DATASET_ID>/models

The default language changed to en_US from ENGLISH.

How: The language field now contains the value en_US in the response for these calls.

  • Create a dataset asynchronously—POST /v2/language/datasets/upload
  • Create a dataset synchronously—POST /v2/language/datasets/upload/sync
  • Get a dataset—GET /v2/language/datasets/<DATASET_ID>
  • Get all datasets—GET /v2/language/datasets
  • Create examples from a file—PUT /v2/language/datasets/<DATASET_ID>/upload

An AppExchange package now provides a UI for the Einstein Vision and Language deep learning APIs. You can easily create datasets, build models, and make predictions, all from within Salesforce.

The response JSON for any Einstein Vision API call that returns dataset information now includes the numOfDuplicates field. This field indicates the number of images not added to the dataset because they’re duplicates.

Why: When you create a dataset or add data to a dataset, duplicate images are omitted. The numOfDuplicates field appears in the response for these calls.

  • Create a dataset asynchronously—POST /v2/vision/datasets/upload
  • Create a dataset synchronously—POST /v2/vision/datasets/upload/sync
  • Create a dataset—POST /v2/vision/datasets
  • Get a dataset—GET /v2/vision/datasets/<DATASET_ID>
  • Get all datasets—GET /v2/vision/datasets
  • Create examples from a .zip file—PUT /v2/vision/datasets/<DATASET_ID>/upload
  • Create feedback examples from a .zip file—PUT /v2/vision/bulkfeedback

The response JSON for any Einstein Language API call that returns dataset information now contains the numOfDuplicates field. This field indicates the number of text strings not added to the dataset because they’re duplicates.

Why: When you create a dataset or add data to a dataset, duplicate text strings are omitted. The numOfDuplicates field appears in the response for these calls.

  • Create a dataset asynchronously—POST /v2/language/datasets/upload
  • Create a dataset synchronously—POST /v2/language/datasets/upload/sync
  • Get a dataset—GET /v2/language/datasets/<DATASET_ID>
  • Get all datasets—GET /v2/language/datasets
  • Create examples from a file—PUT /v2/language/datasets/<DATASET_ID>/upload

We doubled the maximum size of an image dataset from 1 GB to 2 GB.

We doubled the maximum size of a text dataset from 1 GB to 2 GB.

Each Einstein Platform Services account is now limited to 30 calls per calendar month to Einstein Vision and Einstein Language endpoints that return examples.

This limit applies across all APIs that return examples. If you exceed this limit, you receive an error message.

How: These API endpoints return examples.

  • Get all Einstein Vision examples—GET /v2/vision/datasets/<DATASET_ID>/examples
  • Get all Einstein Vision examples for a label—GET /v2/vision/examples?labelId=<LABEL_ID>
  • Get all Einstein Language examples—GET /v2/language/datasets/<DATASET_ID>/examples
  • Get all Einstein Language examples for a label—GET /v2/language/examples?labelId=<LABEL_ID>

The response JSON for any Einstein Vision API call that returns dataset information now contains the language field. The return value is N/A.

How: The language field appears in the response for these calls.

  • Create a dataset asynchronously—POST /v2/vision/datasets/upload
  • Create a dataset synchronously—POST /v2/vision/datasets/upload/sync
  • Create a dataset—POST /v2/vision/datasets
  • Get a dataset—GET /v2/vision/datasets/<DATASET_ID>
  • Get all datasets—GET /v2/vision/datasets
  • Create examples from a .zip file—PUT /v2/vision/datasets/<DATASET_ID>/upload
  • Create feedback examples from a .zip file—PUT /v2/vision/bulkfeedback

When creating a text dataset, you can now specify a language with the new language parameter. The default is ENGLISH. We created this parameter for future use, so you don’t need to do anything now.

How: The language parameter is available in these calls.

  • Create a dataset asynchronously—POST /v2/language/datasets/upload
  • Create a dataset synchronously—POST /v2/language/datasets/upload/sync

The response JSON for any Einstein Language API call that returns dataset information now contains the language field. The return value for existing datasets is ENGLISH.

How: The language field appears in the response for these calls.

  • Create a dataset asynchronously—POST /v2/language/datasets/upload
  • Create a dataset synchronously—POST /v2/language/datasets/upload/sync
  • Get a dataset—GET /v2/language/datasets/<DATASET_ID>
  • Get all datasets—GET /v2/language/datasets
  • Create examples from a file—PUT /v2/language/datasets/<DATASET_ID>/upload

When you create a dataset (using the POST call) or add data to a dataset (using the PUT call), if the resulting dataset exceeds the maximum dataset size of 1 GB, the call fails and an error is returned. This change applies to Einstein Vision and Einstein Language.

See Create a Dataset From a File Asynchronously (Language) and Create Examples From a File (Language).

Training requests from customers on a paid plan are prioritized before training requests made by customers on a free plan. A training request is any call to the /train or /retrain resources.

Use the API usage call to find out what kind of plan you have. See Get API Usage.

The following is a list of the plans. The free plan has a value of STARTER.

  • HEROKU
    • STARTER—2,000 predictions per calendar month.
    • BRONZE—10,000 predictions per calendar month.
    • SILVER—250,000 predictions per calendar month.
    • GOLD—One million predictions per calendar month.

  • SALESFORCE
    • STARTER—2,000 predictions per calendar month.
    • SFDC_1M_EDITION—One million predictions per calendar month.

You might see a delay in training if you're on the free tier of service and there are other training requests in the queue. This change applies to Einstein Vision and Einstein Language.

The delete dataset API call no longer returns a 204 status code for a successful dataset deletion. Instead, the API returns a 200 status code, which indicates that the deletion request was received but the deletion hasn't completed yet. See Delete a Dataset (Language).

In addition to the new status code, the call returns a JSON response with a deletion ID. You can use this ID to query the status of the deletion.

Deleting a dataset no longer deletes the associated models. You must explicitly delete models. See Delete a Model (Language).

After you delete a dataset or a model, it may take some time for the data to be deleted. To confirm whether a dataset or model has been deleted, call the /deletion endpoint along with the deletion ID.
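
A sketch of that status check and a trimmed response (the endpoint path follows the Get Deletion Status documentation; the deletion ID is a placeholder, and the response values are illustrative):

  curl -X GET \
    -H "Authorization: Bearer <TOKEN>" \
    https://api.einstein.ai/v2/language/deletion/<DELETION_ID>

  {
    "id": "<DELETION_ID>",
    "type": "DATASET",
    "status": "RUNNING",
    "object": "deletion"
  }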

Valid values for the status field are:

  • QUEUED—Object deletion hasn't started.
  • RUNNING—Object deletion is in progress.
  • SUCCEEDED—Object deletion is complete.
  • SUCCEEDED_WAITING_FOR_CACHE_REMOVAL—Object was deleted, but it can take up to 30 days to delete some related files that are cached in the system.

See Get Deletion Status (Language).

Now deleting a dataset doesn't delete the models associated with that dataset. Instead, use this new API endpoint to delete a model. This cURL call deletes a model.
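
(A sketch against the Einstein Language endpoint; for Einstein Vision, the path is /v2/vision/models/<MODEL_ID>.)

  curl -X DELETE \
    -H "Authorization: Bearer <TOKEN>" \
    https://api.einstein.ai/v2/language/models/<MODEL_ID>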

The response looks similar to this JSON.
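
(A trimmed sketch; field values are illustrative.)

  {
    "id": "<DELETION_ID>",
    "type": "MODEL",
    "status": "QUEUED",
    "object": "deletion"
  }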

See Delete a Model (Vision) and Delete a Model (Language).

After you delete a model, use the id to check the status of the deletion.

After you sign up for an account, you download or save your private key in the form of a .pem file. But sometimes things happen. If you lose your private key, you can reset it. See Reset Your Private Key.

The free tier of our service will offer 2,000 free predictions (increased from 1,000 free predictions) each calendar month. See Rate Limits.

When you exceed the maximum number of predictions for the current calendar month, you receive an error message when you call one of the prediction resources. To purchase predictions, contact your Salesforce or Heroku AE.

A prediction is any POST call to these endpoints:

  • /language/intent
  • /language/sentiment

In the new response JSON, the field "resultType": "DetectionResult" is removed and the field "object": "predictresponse" is added.

The new response looks like this JSON.
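
(A trimmed sketch; the label and coordinates are illustrative.)

  {
    "probabilities": [
      {
        "label": "Alpine - Oat Cereal",
        "probability": 0.9784,
        "boundingBox": {
          "minX": 230,
          "minY": 25,
          "maxX": 458,
          "maxY": 293
        }
      }
    ],
    "object": "predictresponse"
  }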

See the Detection with Image File section and Detection with Image URL section of Detection.

Instead of using your private key to generate an access token, you can generate a refresh token and use that to generate an access token. A refresh token is a JWT token that never expires.

A refresh token is useful in cases where an application is offline and doesn't have access to the key, such as mobile apps. See Generate an OAuth Access Token.
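
A sketch of exchanging a refresh token for an access token; the grant_type and valid_for parameters are assumptions about the token endpoint, so check Generate an OAuth Access Token for the exact form:

  curl -X POST \
    -H "Content-Type: application/x-www-form-urlencoded" \
    -d "grant_type=refresh_token&refresh_token=<REFRESH_TOKEN>&valid_for=3600" \
    https://api.einstein.ai/v2/oauth2/token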

Einstein Vision and Language make it possible to streamline your workflows across sales, service, and marketing so that you can do things like visual product search, product identification, intelligent case routing, and automated planogram analysis.

If your object detection model misclassifies images, you can use the feedback API to add those images, along with their correct labels, to the dataset. After you add feedback to the dataset, you can:

  • Train the dataset to create a new model
  • Retrain the dataset to update the model and keep the same model ID

See Add Feedback to a Dataset and Create Feedback Examples From a Zip File.

If a dataset is being trained and has an associated model with a status of QUEUED or RUNNING, you must wait until the training is complete before you can delete the dataset.

The JWT tokens you use to call the API are now longer. You see this change whether you use the token web page to get a token or whether you generate the token in code by calling the /oauth2/token endpoint. See Generate an OAuth Token.

Use this new API call to get the model metrics for each epoch (training iteration) performed to create a sentiment or intent model. See Get Model Learning Curve.

When you get the model metrics, the API now returns the precision-recall curve for your model. These metrics help you understand how well the model performs. See Get Model Metrics.

Use this API to train models to recognize and count multiple distinct objects within an image. This API is part of Einstein Vision, so you use the same calls as you do for image and multi-label models. But the data you use to create the models is different. See Create a Dataset From a Zip File Asynchronously.

Build a deep-learning custom model to categorize text and automate business processes. See Einstein Intent API Basics.

You can now return all examples for a single label by passing in the label ID. This API call is available in both Einstein Vision and Einstein Language. For Einstein Vision, see Get All Examples for Label. For Einstein Language, see Get All Examples for Label.

You can now pass text in JSON when calling the /intent and /sentiment resources. See Prediction for Intent and Prediction for Sentiment.
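
For example, a sketch of an intent prediction with a JSON body (the token and model ID are placeholders):

  curl -X POST \
    -H "Authorization: Bearer <TOKEN>" \
    -H "Content-Type: application/json" \
    -d '{"modelId": "<MODEL_ID>", "document": "I need help resetting my password."}' \
    https://api.einstein.ai/v2/language/intent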

  • The image file name maximum length increased from 100 to 150 characters.

  • There's no longer a maximum number of examples you can create using the [Create an Example] call.

You can use the Create an Example call to add an example to a dataset that was created from a .zip file.

These elements can now contain unicode characters:

  • .zip file name
  • directory or label name
  • file or example name
  • dataset name

In the Einstein Language APIs, the default split ratio used during training is now 0.8. With this split ratio, 80% of the data is used to create the model and 20% is used to test the model.
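
To use a different split, pass the ratio in the trainParams object when you train; this sketch assumes the trainSplitRatio key described in the train documentation:

  curl -X POST \
    -H "Authorization: Bearer <TOKEN>" \
    -H "Content-Type: multipart/form-data" \
    -F "name=Sentiment Model" \
    -F "datasetId=<DATASET_ID>" \
    -F "trainParams={\"trainSplitRatio\": 0.7}" \
    https://api.einstein.ai/v2/language/train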

  • A dataset with a type of text-intent must have at least five examples per label.
  • A dataset with a type of text-sentiment must have at least five examples per label.

Einstein Language includes two APIs that you can use to unlock powerful insights within text.

  • Einstein Intent (Beta)—Categorize unstructured text into user-defined labels to better understand what users are trying to accomplish.

  • Einstein Sentiment (Beta)—Classify the sentiment of text into positive, negative, and neutral classes.

See Introduction to Salesforce Einstein Language.

This table lists all the changes to the API in the new version. Einstein Vision is now the umbrella term for all of the image recognition APIs. The Einstein Vision API is now called the Image Classification API.

Use the version selector at the top of this page to switch to the documentation for another version.

When you access the Einstein Platform Services APIs, you can now use this new endpoint. For example, the endpoint to get a dataset is https://api.einstein.ai/v2/vision/datasets/<DATASET_ID>.

The old api.metamind.io endpoint still works, but be sure to update your code to use the new endpoint.

Use the feedback API to add a misclassified image with the correct label to the dataset from which the model was created.

  • Use the new API call to add a feedback example. See Create a Feedback Example.

  • The call to get all examples now has three new query parameters: feedback, upload, and all. Use these query parameters to refine the examples that are returned. See Get All Examples.

  • The call to train a dataset and create a model now takes the trainParams object {"withFeedback": true}. This option specifies that the feedback examples are used during the training process. If you don't pass this value, the feedback examples aren't used during training. See Train a Dataset.

There's now a call to retrain a dataset, for example, if you added new data to the dataset or you want to include feedback data. Retraining a dataset lets you keep the same model ID, which is ideal if you reference the model in production code. See Retrain a Dataset.

The new dataset type image-multi-label enables you to specify that the dataset contains multi-label data. Any models you create from this dataset have a modelType of image-multi-label. See Determine the Model Type You Need.

See Get Multi-Label Model Metrics and Get Multi-Label Model Learning Curve.

This multi-label model is used to classify a variety of objects.

The numResults optional request parameter lets you specify the number of labels and probabilities to return when sending in data for prediction. This parameter can be used with both Einstein Vision and Einstein Language.
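
For example, a sketch that limits a vision prediction to the top three labels (the token, image URL, and model ID are placeholders):

  curl -X POST \
    -H "Authorization: Bearer <TOKEN>" \
    -H "Content-Type: multipart/form-data" \
    -F "sampleLocation=https://www.example.com/shelf.jpg" \
    -F "modelId=<MODEL_ID>" \
    -F "numResults=3" \
    https://api.einstein.ai/v2/vision/predict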

Global datasets are public datasets that Salesforce provides. When you train a dataset to create a model, you can include the data from a global dataset. One way you can use global datasets is to create a negative class in your model. See Use Global Datasets.

When you call the API to create a dataset, you must pass in the type request parameter to specify the type of dataset. Valid values are:

  • image—Standard classification dataset. Returns the single class into which an image falls.

  • image-multi-label—Multi-label classification dataset. Returns multiple classes into which an image falls.
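
For example, a sketch of creating a multi-label dataset from a .zip file (the token and file path are placeholders):

  curl -X POST \
    -H "Authorization: Bearer <TOKEN>" \
    -H "Content-Type: multipart/form-data" \
    -F "type=image-multi-label" \
    -F "data=@products.zip" \
    https://api.einstein.ai/v2/vision/datasets/upload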

If you omit the count parameter, the call to get all datasets returns 25 datasets. If you set the count query parameter to a value greater than 25, the call still returns only 25 datasets. See Get All Datasets.

The following calls have been removed from the Einstein Image Classification API in version 2.0.

  • Create a label. You must pass in the labels when you create the dataset. /vision/datasets/<DATASET_ID>/labels

  • Get a label. /vision/datasets/<DATASET_ID>/labels/<LABEL_ID>

  • Get an example. /vision/datasets/<DATASET_ID>/examples/<EXAMPLE_ID>

  • Delete an example. /vision/datasets/<DATASET_ID>/examples/<EXAMPLE_ID>