
If you work in any type of software development, you’ve probably heard about or used the agile methodology. Based on agile’s success in software development, the principles and processes have been applied to other disciplines, like project management, manufacturing, marketing, human resources, and even artificial intelligence (AI). Yes, these same principles and processes can be used to deliver data science functionality to your customers.

In this post, we cover what agile AI looks like in practice using an Einstein Vision project as an example. You can also apply all the concepts covered to Einstein Language. Here’s what we’ll do:

  • Follow the minimum viable product (MVP) principle to create a minimum viable dataset (MVD).
  • Learn when and how to add a negative label to improve your model.
  • Further improve the model by using feedback and progressively adding new labels.

Agile AI

A key principle of agile is customer satisfaction through early and continuous software delivery. What does this mean? It means deliver something that customers can use and then continually improve and deliver in regular intervals.

In an agile AI approach, teams break down a data science project into small, manageable, and achievable chunks. The goal is to deliver something usable to the customer with each milestone. The team can then build on what they deliver and continue to incrementally improve.

Let’s look at an example: say you’re on a team tasked with building a car. The customer’s ultimate goal is to get from point A to point B as quickly and easily as possible. This image illustrates what delivery might look like using a waterfall versus an agile approach.

The waterfall method uses a linear approach to achieve the final product. With the agile method, however, you could first deliver a skateboard. A skateboard offers nowhere near the functionality of a car, but it does get you from point A to point B. Then, in each subsequent iteration, you improve the functionality, always delivering something usable.

Data science projects can be high visibility and high risk. Taking an agile approach to your next data science project is one way to reduce risk and ensure success.

So what does an agile approach to data science look like? Let’s take a look at a scenario that uses Einstein Vision to classify images. In this scenario, you use Einstein Vision to create a model that identifies whether an image is an apple or a pear.

During this process, we look at how to approach the project in an agile way, and at techniques you can use to incrementally improve your model and its accuracy. In this scenario, we have three sprints (milestones).

Sprint 1: Create an MVD for your MVP

One agile term you might frequently hear is MVP, or minimum viable product. The MVP is the minimum functionality the team delivers in a given sprint with just enough features to satisfy early customers. Those customers can then provide feedback for future product development. In agile AI, we start with an MVD, or minimum viable dataset.

You start with an MVD because collecting all the data needed for your ideal model could stall your progress. As with software, you begin by defining the minimum set of labels or categories you can start with. Aim to build a decent model for just those labels and not one more.

The first thing you do is gather images that are representative of the types of images presented to the model for classification. Check out the blog post Why Representative Datasets Are Important for Computer Vision Models for more information. For our scenario, we collected a variety of apple and pear images.

After you pull all the data together, you create a dataset and then train it to create a model. Here’s what the cURL call looks like to send an image of an apple for classification.
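The call below is a sketch based on the Einstein Vision prediction endpoint; the bearer token, model ID, and image URL are placeholders you’d replace with your own values.

```shell
# Classify an image by URL against the apple/pear model (placeholder values)
curl -X POST \
  -H "Authorization: Bearer <YOUR_TOKEN>" \
  -H "Cache-Control: no-cache" \
  -H "Content-Type: multipart/form-data" \
  -F "sampleLocation=https://www.example.com/apple.jpg" \
  -F "modelId=<YOUR_MODEL_ID>" \
  https://api.einstein.ai/v2/vision/predict
```

You can also send a local file instead of a URL by passing `-F "sampleContent=@apple.jpg"` in place of `sampleLocation`.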

The model returns a prediction similar to this JSON.
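The exact numbers vary by model, but the response has this general shape; the probability values shown here are illustrative.

```json
{
  "probabilities": [
    {
      "label": "Apple",
      "probability": 0.9954
    },
    {
      "label": "Pear",
      "probability": 0.0046
    }
  ],
  "object": "predictresponse"
}
```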

Good news: The model returns a high probability that the image is an apple. You spent Sprint 1 getting data and creating a model that your customers can use to identify images of apples and pears. Great job!

Sprint 2: Use a negative class to improve predictions

The apple and pear model is a good first iteration because it returns accurate predictions for apples and pears. You used representative data that included a wide variety of images of apples and pears and images of varying quality.

This model works great when the image being classified is an apple or a pear. But what kind of result does the model return with an image of an orange?

The model returns a prediction similar to this JSON.
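Because the model only knows apples and pears, it has to split its confidence between those two labels. The response might look something like this (illustrative values):

```json
{
  "probabilities": [
    {
      "label": "Apple",
      "probability": 0.6231
    },
    {
      "label": "Pear",
      "probability": 0.3769
    }
  ],
  "object": "predictresponse"
}
```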

The labeled data from which the model was created contains only apples and pears. When you attempt to classify an image of another object, the model can only return a prediction that the image is an apple or a pear. The model only knows what you teach it, and right now it only knows apples and pears. You can further iterate and improve this model by including a negative class.

To add a negative class to your Einstein Vision dataset, you first collect images that aren’t apples or pears. If you don’t have images of your own, you could use the publicly available Caltech 256 dataset or a Kaggle dataset.

Put all the images in a folder named “Other,” and then create a .zip file. When the images are added to the dataset, they’re labeled Other. To add images to a dataset, you use the PUT API call that looks like this.
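Here’s a sketch of that PUT call against the dataset upload endpoint; the token, dataset ID, and .zip path are placeholders.

```shell
# Add the "Other" images to an existing dataset (placeholder values)
curl -X PUT \
  -H "Authorization: Bearer <YOUR_TOKEN>" \
  -H "Cache-Control: no-cache" \
  -H "Content-Type: multipart/form-data" \
  -F "data=@/path/to/Other.zip" \
  https://api.einstein.ai/v2/vision/datasets/<DATASET_ID>/upload
```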

After you add the images to the dataset, you retrain the dataset to update the model. Now when the model classifies an image of an orange, it returns the Other label with a high percentage.
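For example, a prediction for an orange image might now look like this (illustrative values):

```json
{
  "probabilities": [
    {
      "label": "Other",
      "probability": 0.9867
    },
    {
      "label": "Apple",
      "probability": 0.0098
    },
    {
      "label": "Pear",
      "probability": 0.0035
    }
  ],
  "object": "predictresponse"
}
```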

From the prediction results, you can tell right away that the classified image of an orange isn’t an apple or a pear. In Sprint 2, you further refined the model and made it easier for your customers to use.

Sprint 3: Improve your model with feedback

You can use the Einstein Vision feedback API calls to let your users give you feedback about predictions. For example, let’s say an image of an orange is sent in, and the model returns a high probability for the label Apple. Now that the model has an Other label, users can report that the image was misclassified and that the actual label for that image is Other.

This cURL call is an example of adding the misclassified image to the dataset with the correct label.
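The following is a sketch of the feedback call; the token, model ID, and image path are placeholders.

```shell
# Submit a misclassified image as a feedback example with its correct label
curl -X POST \
  -H "Authorization: Bearer <YOUR_TOKEN>" \
  -H "Cache-Control: no-cache" \
  -H "Content-Type: multipart/form-data" \
  -F "modelId=<YOUR_MODEL_ID>" \
  -F "data=@/path/to/orange.jpg" \
  -F "expectedLabel=Other" \
  https://api.einstein.ai/v2/vision/feedback
```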

After you add feedback examples to the dataset, an admin can review and use them to retrain the dataset to incorporate the feedback into the model. To include feedback examples, use the trainParams object, and pass in the value {"withFeedback": true}. The cURL call to retrain a dataset and include feedback looks like this.
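Here’s a sketch of that retrain call; the token and model ID are placeholders.

```shell
# Retrain the model's dataset, incorporating the stored feedback examples
curl -X POST \
  -H "Authorization: Bearer <YOUR_TOKEN>" \
  -H "Cache-Control: no-cache" \
  -H "Content-Type: multipart/form-data" \
  -F "modelId=<YOUR_MODEL_ID>" \
  -F "trainParams={\"withFeedback\": true}" \
  https://api.einstein.ai/v2/vision/retrain
```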

As your model is used in production, you keep track of all the images being classified. Over time, you see that images of oranges are frequently sent for classification. The model correctly identifies those images and returns the Other label because of user feedback. However, the model would be even more useful if it could correctly identify oranges.

To improve the model, you add a new class called “Orange.” You can start with the images of oranges previously labeled Other, and gather additional images of oranges. Add them to the dataset and retrain the model, just as you added the Other label in Sprint 2. Now when an image of an orange is classified, the results return a high probability for the label Orange.

In Sprint 3, you used the feedback feature in Einstein Vision to enable your users to send misclassified images back to the model along with the correct label. Based on the model usage, you saw an opportunity to improve the model by adding a label called Orange.

Einstein Vision gives you the tools to improve the accuracy and usability of your deep learning models. Combine these tools with an agile approach and you can quickly deliver functionality to your users, and then continue to evolve and improve that functionality. You can reduce risk and iterate toward greatness!



About the author

Dianne Siebold is a principal technical writer on the platform doc team at Salesforce.
