Lots of functionality has been added to our Einstein offering for developers since the launch of Einstein Vision a couple of months ago. This blog post guides you through the new capabilities and how to implement them into your Salesforce org.
Introducing Einstein Platform
The most obvious change is that we launched Einstein Platform Services at TrailheaDX, driven by the introduction of several new services.
- Einstein Vision is now an umbrella for multiple services: Einstein Image Classification (GA) and Einstein Object Detection (in pilot).
- Einstein Language is an umbrella that includes two new beta services, Einstein Sentiment and Einstein Intent.
You may ask yourself: Are there differences in using those new services compared to the previous version of Einstein Vision? Glad you asked; there are not many.
The common patterns across the services can be categorized as:
- Collect and classify training data (images or text).
- Upload the data to the respective Einstein Platform service.
- Train custom models based on the data.
- Predict/classify data based on the models.
There’s an API for that—and more!
To make it even easier for you to get hands-on, I’ve open-sourced a new wrapper library for Apex on GitHub that simplifies the handling for you. And if you’re already using Salesforce DX (strongly advised), you’re only two commands away from implementing the wrapper into the scratch org of your choice. You can find dedicated setup instructions in the README file of the repository.
The wrapper is a new version of the Einstein Vision-focused wrapper as explained here and unifies access to all Einstein Platform Services. And as developers love code, here is an example of how the wrapper is used to predict an image based on a remote URL.
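A minimal sketch of that call (the model ID and image URL are placeholders, and the exact method signatures may differ slightly; check the repository README):

```apex
Einstein_PredictionService service = new Einstein_PredictionService(Einstein_PredictionService.Types.IMAGE);
Einstein_PredictionResult result = service.predictImageUrl('YourModelId', 'https://example.com/image.jpg', 0, '');
String topLabel = result.probabilities.get(0).label; // label of the highest prediction
```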
Code explanation:
- Line 1: This creates a new `Einstein_PredictionService` object that handles all requests. The object holds all methods to communicate with the Einstein Platform REST APIs. It simplifies the user experience as you don’t have to set the correct HTTP headers, check required values, and more.
- Line 2: The service sends the URL of the image that needs to be classified and gets in return the prediction result based on the specified model.
- Line 3: The label value of the highest prediction is assigned to a local variable.
Even though this is a pretty basic example, it shows how simple it is to use the wrapper. No manually coded OAuth authentication, HTTP requests, or JSON parsing needed.
You can also experience the wrapper without writing a single line of code! The repository contains a Lightning-based playground for all Einstein Platform Services (hat-tip to Christophe for providing most of the code for that!). Check out this video to see it in action:
One image, multiple definitions
In the first version of Einstein Image Classification, you could classify images when one specific type of object was recognized. Only the prediction result with the highest probability was relevant. But what about when you have to identify multiple objects within an image? For applications where you need more complex image recognition, Einstein Image Classification now supports multi-label classification. The model returns predictions for every object it can identify, with associated probabilities.
Instead of `Einstein_PredictionService.Types.IMAGE`, the parameter `Einstein_PredictionService.Types.IMAGE_MULTI` is passed. This tells the wrapper to handle all requests for this service object as multi-label images. The Einstein Platform documentation contains some scenarios that explain where to use standard image classification versus multi-label classification.
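Sketched with the wrapper (placeholders and signatures as in the earlier example; verify against the repository README):

```apex
Einstein_PredictionService service = new Einstein_PredictionService(Einstein_PredictionService.Types.IMAGE_MULTI);
Einstein_PredictionResult result = service.predictImageUrl('YourModelId', 'https://example.com/image.jpg', 0, '');
```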
Einstein Language
The newest addition to Einstein Platform Services is the Einstein Language offering (in beta). This includes:
- Einstein Sentiment
- Einstein Intent
The following diagram showcases the flow for both services. If you’ve worked with Einstein Image Classification before, you can quickly identify the similarities.
With Einstein Sentiment, you classify given text based on its sentiment. You can, for example, analyze if a customer’s email, a social media post, or a text in a custom application has a positive, neutral, or negative tone. A predefined model is provided so that the service can be directly used without any additional training.
Einstein Intent is a service that categorizes given text into custom labels, helping you understand what users are trying to accomplish. Some uses can include automated case routing for new support cases or monitoring requests in a user community.
Let’s dig deeper into some examples.
Einstein Intent
This API lets you define the classes into which you want utterances or text to be classified. For example, you can identify whether a customer’s demand fits certain categories for a service case ticket. This .csv file contains an example list of sample data for common case classification scenarios.
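The upload-and-train flow looks roughly like this (the .csv URL and model name are placeholders, and the `trainDataset()` parameter list is an assumption based on the wrapper; verify against the README):

```apex
Einstein_PredictionService service = new Einstein_PredictionService(Einstein_PredictionService.Types.INTENT);
Einstein_Dataset dataset = service.createDatasetFromUrlSync('https://example.com/case-routing-intent.csv');
Einstein_Model model = service.trainDataset(dataset.id, 'Case Routing Model', 0, 0, ''); // model used for predictions
```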
Code explanation:
- Line 1: This creates a new `Einstein_PredictionService` object that handles all requests.
- Line 2: The Einstein Platform server downloads the .csv file from the remote URL and stores it. The `createDatasetFromUrlSync()` method is only recommended for small files. If you upload large files, use `createDatasetFromUrlAsync()` instead.
- Line 3: A new model is trained based on the dataset. You use this model to get predictions.
Based on the data in the file you’ve uploaded and trained into a model, the Einstein Platform service applies machine learning to identify common patterns per category and uses them for future predictions. Once the model is trained, you can then run predictions against any text.
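A sketch of such a prediction, reusing the intent service object from above (‘YourModelId’ and the text are placeholders):

```apex
Einstein_PredictionResult result = service.predictIntent('YourModelId', 'I need help with my router', 0, '');
String topIntent = result.probabilities.get(0).label;
```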
Code explanation:
- Line 1: The service classifies the text based on the custom model ‘YourModelId’.
- Line 2: The label value of the highest intent probability is assigned to a local variable.
Einstein Sentiment
The sentiment API comes with a prebuilt model for identifying the sentiment of a text. You don’t have to build your own model to use this service.
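A sketch of a sentiment prediction against the prebuilt model (method names follow the wrapper repository; double-check the signatures in the README):

```apex
Einstein_PredictionService service = new Einstein_PredictionService(Einstein_PredictionService.Types.SENTIMENT);
Einstein_PredictionResult result = service.predictSentiment('CommunitySentiment', 'I love the new features!', 0, '');
String sentiment = result.probabilities.get(0).label; // e.g. positive, neutral, or negative
```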
Code explanation:
- Line 1: This creates a new `Einstein_PredictionService` object that handles all requests.
- Line 2: The service classifies the text based on the prebuilt ‘CommunitySentiment’ model.
- Line 3: The label value of the highest sentiment probability is assigned to a local variable.
If you don’t want to use the pretrained model, you can train your own model using your data. Simply provide a text file (.csv or .tsv format), like you do for intent, but only use positive, neutral, and negative as the classes. This example code shows how to upload a .csv file from a remote URL and train a custom model.
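Sketched with the wrapper, reusing the sentiment service object from above (the .csv URL and model name are placeholders):

```apex
Einstein_Dataset dataset = service.createDatasetFromUrlSync('https://example.com/sentiment-training.csv');
Einstein_Model model = service.trainDataset(dataset.id, 'Custom Sentiment Model', 0, 0, '');
```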
Code explanation:
- Line 1: The Einstein Platform server downloads the .csv file from the remote URL and stores it. The `createDatasetFromUrlSync()` method, just like in our intent example, is only recommended for small files. For large files, use the `createDatasetFromUrlAsync()` method.
- Line 2: A new model is created and trained based on the dataset.
Train, Feedback, Retrain
An important part of any prediction, be it for images or language, is the process of training the system with good data. Over time you may find that a certain percentage of your images or text aren’t returning with a firm classification. To improve your model, it’s necessary to consistently evaluate and reevaluate your data.
To do this in the past, you had two options.
- Add new examples to an existing dataset.
- Create a new dataset with new data.
Both options led to the creation of a new model and new model ID, which then needed to be updated in all systems running predictions against that model. To make this process easier for you, new feedback and retrain mechanisms have been introduced.
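Adding feedback to an intent model can be sketched like this (the feedback method name and its parameter order are assumptions; check the repository README):

```apex
Einstein_PredictionService service = new Einstein_PredictionService(Einstein_PredictionService.Types.INTENT);
service.createFeedbackIntent('your text', 'ExpectedLabel', 'YourModelId'); // hypothetical method name and parameter order
```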
Code explanation:
- Line 1: This creates a new `Einstein_PredictionService` object that handles all requests.
- Line 2: The text string ‘your text’ is added to the custom intent model ‘YourModelId’, and with that also to the dataset on which the model has been trained.
You can query a dataset for all contained training data with the `getDatasets()` method, or, for example, only the data that has been added via the feedback interface.
Once you’re satisfied with the newly added training data, you can retrain an existing model and keep the existing model ID.
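Retraining with feedback can be sketched as follows, reusing a service object from the earlier examples (the `retrainDataset()` signature is an assumption; check the README):

```apex
Einstein_TrainParam trainParams = new Einstein_TrainParam();
trainParams.withFeedback = true;
Einstein_Model model = service.retrainDataset('YourModelId', trainParams); // keeps the existing model ID
```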
Code explanation:
- Line 1: This creates a new `Einstein_TrainParam` object that lets you specify custom training parameters.
- Line 2: The `withFeedback` property specifies that the training should also include feedback data.
- Line 3: The existing model is retrained using the feedback data.
Your image models, our datasets
In some cases it makes sense to add negative labels to your models. But what does ‘negative’ mean? Let’s say you have a dataset with cat and dog (cats preferred) images. The system automatically learns to identify those. But what about teaching your model to also know if an image doesn’t contain any dogs or cats? That’s where a negative label is relevant. Negative labels are basically a set of images that don’t contain any of the data that you want to classify.
You can either collect your own images to add such a category, or use one of the global datasets that we provide.
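A sketch of training with a global dataset (the `datasetId` property on `Einstein_TrainParam` and the `trainDataset()` parameters are assumptions; verify against the repository):

```apex
Einstein_PredictionService service = new Einstein_PredictionService(Einstein_PredictionService.Types.IMAGE);
Einstein_Dataset[] globalDatasets = service.getGlobalDatasets();

Einstein_TrainParam trainParams = new Einstein_TrainParam();
trainParams.datasetId = globalDatasets[0].id; // hypothetical property name
Einstein_Model model = service.trainDataset(57, 'Cats and Dogs', 0, 0, trainParams); // 57 = your custom dataset ID (placeholder)
```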
Code explanation:
- Line 1: This creates a new `Einstein_PredictionService` object that handles all requests.
- Line 2: The `getGlobalDatasets()` method returns an array of global datasets that we maintain.
- Line 5: The dataset ID of the first global dataset is passed as a training parameter.
- Line 6: The model gets trained with your custom data and the data from our global dataset. This doesn’t add the data from the global dataset to your custom dataset.
Using Einstein Services in one Apex call
The wrapper library also simplifies handling different Einstein service calls within a single method.
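For example, a single Apex method could return both intent and sentiment for a given text (the model names are placeholders, and the type-switching call is an assumption; check the repository for the actual API):

```apex
public static Map<String, String> classifyText(String text) {
    Map<String, String> predictions = new Map<String, String>();
    Einstein_PredictionService service = new Einstein_PredictionService(Einstein_PredictionService.Types.INTENT);
    predictions.put('intent', service.predictIntent('MyPartnerCommunity', text, 0, '').probabilities.get(0).label);

    service.setType(Einstein_PredictionService.Types.SENTIMENT); // assumed setter for switching the service type
    predictions.put('sentiment', service.predictSentiment('CommunitySentiment', text, 0, '').probabilities.get(0).label);
    return predictions;
}
```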
Code explanation:
- Line 2: A new Map object is created to hold multiple string key-value pairs, in this case the prediction labels for intent and sentiment, which map to key values of ‘intent’ and ‘sentiment’.
- Line 3: A new `Einstein_PredictionService` object for intent prediction is created.
- Line 4: The given text gets classified against the custom ‘MyPartnerCommunity’ intent model.
- Line 6: The type of the `Einstein_PredictionService` is set to classify the text sentiment.
- Line 7: The given text gets classified against the predefined sentiment model.
Your next steps
If you haven’t worked with Einstein Platform Services yet, your first step is to get familiar with the capabilities in the Einstein Platform documentation. You should also go to Trailhead and earn some badges, both with the Einstein Vision QuickStart and by supporting Muenzpraeger’s Home for Wayward Cats. Also check out the wrapper library on GitHub, deploy it with Salesforce DX to a scratch org of your choice, and get hands-on in the playground!
About the author
René Winkelmeyer works as a Senior Developer Evangelist at Salesforce. He focuses on enterprise integrations, mobile, and security with the Salesforce Platform. You can follow him on Twitter at @muenzpraeger.