Until now, when you wanted to use the Einstein Vision and Language deep learning services, you had to make API calls from the command line with cURL or use a tool like Postman. But a new AppExchange app changes that. The Einstein Vision and Language Model Builder package gives you a UI so that you can create datasets, build models, and make predictions, all within your Salesforce org.

Einstein Vision and Language Model Builder, created by Salesforce Labs, is available for free on AppExchange. After you install the package in your org, you have a UI for the vision and language APIs. Note that although the app is free, it isn’t an official Salesforce product and doesn’t come with guaranteed support.

Quickly build Einstein Vision and Language models

Because Einstein Vision and Language are available only via APIs, getting started can be a bit of a hurdle. For example, let’s say you just want to try out the APIs or you’re doing a proof of concept. You already have a lot to think about: making sure that you have enough data, verifying that the data is high quality, troubleshooting model performance, and so on. Troubleshooting API calls just adds to the complexity.

Using the AppExchange app, you can follow the standard deep learning life cycle (DLLC) with clicks before you get to code. At a high level, the iterative DLLC looks like this:

  1. Gather data
  2. Create the dataset
  3. Train the dataset to create a model
  4. Test the model

Install the app

To play around with the app, sign up for a free Developer Edition (DE) org and install the app there. That way, you won’t affect anything in your production org. Note: When you sign up for a DE org, the username you choose only has to be in the format of an email address; it doesn’t need to be a working email address.

Follow the directions in the Quip doc. Sign up for an Einstein Platform Services account, download your .pem file, and set up My Domain in your org before you install the app. I had some issues using Firefox, so I recommend using Chrome.

Log in with your Salesforce credentials and step through the process as detailed in the Quip doc.

If the installation was successful, you now have a new app in your org called Einstein Playground. Be sure to follow the steps in the App Installation and Configuration section of the Quip doc to upload your .pem file. If you see a message that your account isn’t properly configured, that means you haven’t uploaded your private key.
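The .pem file contains the private key for your Einstein Platform Services account. Behind the scenes, the app does what any client of these APIs must do: sign a short-lived JWT with that key and exchange it for an access token. Here’s a rough sketch of that exchange, where <ASSERTION> stands in for a JWT you’ve already signed with the key from the .pem file:

    # Trade a signed JWT assertion for an Einstein Platform Services
    # access token. The app performs this exchange for you once the
    # .pem file is uploaded.
    curl -X POST https://api.einstein.ai/v2/oauth2/token \
      -H "Content-Type: application/x-www-form-urlencoded" \
      -d "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer" \
      -d "assertion=<ASSERTION>"

The token that comes back is what authorizes every dataset, training, and prediction call, including the ones behind the steps below.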

Build a model

Now let’s put the app to the test. Here’s what you’ll do:

  • Create an image dataset
  • Train the dataset to create a model
  • Make a prediction

These steps use the .zip file from the quick start in the documentation.

Step 1: Create a dataset

Start by creating a dataset using the mountains and beaches .zip file.

  1. From the App Launcher, select Einstein Playground.
  2. Select Image Classification > Dataset Creation.
  3. Enter the URL of the .zip file that contains the images: https://einstein.ai/images/mountainvsbeach.zip
  4. Click Create. You see a message that the dataset was successfully created.
  5. Click Datasets and Models and then click Refresh Datasets. You see the new dataset at the end of the list.
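For reference, the Dataset Creation page is a front end for the Einstein Vision datasets endpoint. A minimal cURL sketch of the same call, assuming <TOKEN> is a valid access token:

    # Create a dataset asynchronously from a .zip file at a web location.
    curl -X POST https://api.einstein.ai/v2/vision/datasets/upload \
      -H "Authorization: Bearer <TOKEN>" \
      -H "Content-Type: multipart/form-data" \
      -F "path=https://einstein.ai/images/mountainvsbeach.zip"

The response includes the new dataset’s id, which you need when you train.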

Step 2: Train the dataset to create a model

Next, train the dataset to create a model. The model is the construct that returns predictions.

  1. If you’re not on the Datasets and Models page, navigate to Image Classification > Datasets and Models.
  2. Locate the dataset you just created, and click Train. You see a message that the training process started.
  3. To monitor the training progress, click the Models tab for the dataset.
  4. Click Refresh Models to check the status. When the progress is at 100%, the model is ready to make predictions.
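Here too, the UI wraps two API calls: one to kick off training and one to poll its status. A cURL sketch, with <DATASET_ID> and <MODEL_ID> as placeholders for the ids returned by earlier calls (the model name is arbitrary):

    # Start training; the response includes a modelId.
    curl -X POST https://api.einstein.ai/v2/vision/train \
      -H "Authorization: Bearer <TOKEN>" \
      -H "Content-Type: multipart/form-data" \
      -F "name=Beach and Mountain Model" \
      -F "datasetId=<DATASET_ID>"

    # Check training status; the response includes a progress field.
    curl https://api.einstein.ai/v2/vision/train/<MODEL_ID> \
      -H "Authorization: Bearer <TOKEN>"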

Step 3: Get a prediction

Now it’s time to send a file in and get a prediction back from the model.

  1. Navigate to Image Classification > Prediction.
  2. Select the model that you just created. It’s called mountainvsbeach model.
  3. Enter the URL of the image you want to send in for prediction. In this case, we have an image that you can use here.
  4. Click Send. On the right side of the screen under Response, you see that the model predicts, with about 97 percent probability, that the image is a beach. Click Raw to see the full JSON response.
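Behind the Send button is the predict endpoint. A minimal cURL sketch of the same prediction call, using the same placeholders as before:

    # Classify an image by URL; the response contains a probabilities
    # array with a score for each label in the dataset.
    curl -X POST https://api.einstein.ai/v2/vision/predict \
      -H "Authorization: Bearer <TOKEN>" \
      -H "Content-Type: multipart/form-data" \
      -F "sampleLocation=<IMAGE_URL>" \
      -F "modelId=<MODEL_ID>"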

Things to keep in mind

Although the model builder UI is in Salesforce, you must aggregate your data yourself and make it available on a local drive or at a web location. For the vision APIs, gather the images into a .zip file. For the language APIs, put your data in a text file (.csv, .tsv, or .json). For more information about how to format an image dataset, see the documentation.
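If you’re building your own image dataset, the expected .zip layout is one folder per label, with each folder containing that label’s example images. A quick sketch (folder and file names are illustrative):

    # Each top-level folder name becomes a label in the dataset.
    #   Mountains/   mtn1.jpg, mtn2.jpg, ...
    #   Beaches/     beach1.jpg, beach2.jpg, ...
    zip -r mydataset.zip Mountains Beaches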

I found the UI navigation a bit unintuitive at first, and you might too. It helps to know the overall model building workflow: gather data, create the dataset, train the dataset, and get a prediction. The UI doesn’t really guide you through the process. When you know what you’re trying to accomplish and get used to the UI, it’s not an issue.

The app UI provides a good starting point for doing a POC and creating a model. Of course, if you want to use your model in Salesforce or another app, the next step is to write code to integrate the model and call the Einstein Platform Services REST APIs.

Resources

Documentation: Einstein Platform Services Developer Guide
Trailhead Project: Quick Start: Einstein Image Classification
Trailhead Module: Einstein Intent API Basics
TDX 2019 Session: Add Custom Deep Learning to Your Salesforce Apps with Clicks, not Code
TDX 2019 Session: Come for the AI, Stay for the API

About the author

Dianne Siebold is a Principal Technical Writer on the Platform doc team at Salesforce. She specializes in deep learning, AI, and integration technologies.
