Get your #buildspiration from the Summer ’19 release! We are sharing five of our favorite release features for Developers (and Admins) as part of the MOAR You Know learning journey. Get the release highlights as curated and published by our evangelists. Complete the trailmix by July 31, 2019, to earn a special community badge and unlock a $10 contribution to FIRST.
The AI services gap has never been wider: only a fraction of Machine Learning (ML) APIs are easily accessible to our customers. Developers have the business context and data, and are fluent in Salesforce technologies (e.g. Apex, Workflows, and Lightning Web Components), yet integrating machine learning capabilities remains a point of friction.
As developers, we can make Apex callouts to such APIs, but doing so routinely involves writing code, tests, and error handling. It also means account registration, authentication, setup, vendor maintenance, and other operational overhead.
Take calling a translation API, for example. Here is the kind of code you would have to write:
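The code screenshot from the original post is not reproduced here. As a sketch of the boilerplate involved, the following Apex callout targets a hypothetical third-party translation endpoint; the URL, auth header, and JSON shape are all placeholder assumptions, not a real vendor API:

```apex
// Sketch of the boilerplate a raw callout requires.
// The endpoint, auth scheme, and response shape are hypothetical.
public class TranslationCallout {
    public static String translate(String text, String targetLang) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://api.example-translator.com/v1/translate'); // hypothetical vendor endpoint
        req.setMethod('POST');
        req.setHeader('Authorization', 'Bearer ' + getApiKey()); // key you must register for and manage
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(new Map<String, String>{
            'q' => text,
            'target' => targetLang
        }));

        HttpResponse res = new Http().send(req);
        if (res.getStatusCode() != 200) {
            // You also own retries, logging, and error handling
            throw new CalloutException('Translation failed: ' + res.getStatus());
        }
        Map<String, Object> body =
            (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
        return (String) body.get('translatedText');
    }

    private static String getApiKey() {
        // Typically fetched from a protected custom setting or custom metadata
        return null;
    }
}
```

And that is before the required test coverage, remote site settings, and secret storage.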
With the new Apex AI Services, all of that is abstracted into a single line of code. No separate authentication, no registration, no setup, no fuss. The ML capability is available as a first-class Apex method.
We are further reducing friction by launching these Machine Learning APIs in Apex alongside a set of unmanaged packages of pre-built applications, such as Lightning Web Components and Apex triggers. We hope this helps bootstrap new applications and uncovers new problems where such machine learning solutions can be applied.
With Summer ’19 we are piloting two such Machine Learning APIs in Apex: Einstein Translation and Einstein OCR (Optical Character Recognition). Let’s take a look at what they do.
With Einstein OCR, anyone can detect text in images using optical character recognition on any Salesforce image file with clicks, or build customized OCR with Apex and embed AI into their processes and apps instantly.
With the power of OCR, you can now solve problems such as searching for text in images, extracting a VIN from automobile door jamb images, or extracting a price from product images. Here is the Apex method to invoke OCR:
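The original post shows this call as a screenshot. The sketch below uses the detectImageText method name mentioned later in the post; the Einstein class name and the argument type are assumptions, and the actual pilot signature may differ:

```apex
// Pilot API sketch: the class name and parameter are assumptions;
// detectImageText is the method named in this post.
String imageUrl = 'https://example.com/door-jamb.jpg'; // hypothetical image location
String extractedText = Einstein.detectImageText(imageUrl);
System.debug(extractedText);
```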
You see: a single line of code, nothing more.
In the example above, detectImageText returns all the text from the image, which is then run through a regular expression to extract the VIN. As part of the Summer ’19 release, we provide pilot participants with a pre-built attachment search application. Once the unmanaged package for this application is installed, whenever an image attachment such as an invoice is uploaded, the text is extracted from the image in the background and stored in the attachment description. This makes it automatically searchable in your org.
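As a sketch of that post-processing step, a VIN (17 characters, which never include I, O, or Q) can be pulled from the OCR output with Apex's Pattern and Matcher classes; the sample text here is invented for illustration:

```apex
// Extract a 17-character VIN from OCR output.
// VINs use A-Z and 0-9 but exclude the letters I, O, and Q.
String ocrText = 'MFD BY HONDA VIN 1HGCM82633A004352 TIRE 215/60R16'; // sample OCR output

Pattern vinPattern = Pattern.compile('[A-HJ-NPR-Z0-9]{17}');
Matcher m = vinPattern.matcher(ocrText);
if (m.find()) {
    System.debug('VIN: ' + m.group()); // VIN: 1HGCM82633A004352
}
```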
With Einstein Translation, anyone can add translations to any Salesforce text field with clicks, or build customized translations with Apex and embed AI into their processes and apps instantly. Given a source language and a target language, it translates a chunk of text into the target language.
With the power of Einstein Translation, you can now translate case descriptions and titles for global service centers, user messages in chatbots, website content, and so on.
Here is the Apex method to invoke Einstein Translation:
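The original post shows this call as a screenshot. As a sketch based on the description above (a source language, a target language, and the text to translate), the class and method names below are assumptions and the actual pilot signature may differ:

```apex
// Pilot API sketch: the class and method names are assumptions based on
// the post's description of the feature (source language, target language, text).
String original = 'Mon portable ne se connecte pas au réseau.';
String translated = Einstein.translate('fr', 'en', original);
System.debug(translated);
```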
As part of the Summer ’19 release, you can leverage a pre-built translation application. Once the unmanaged package for this application is installed, you get a custom Lightning component called Einstein Translation, which you can drag and drop onto any record detail page to translate any standard or custom text field into a language of your choice.
How do I learn more?
As mentioned previously in this post, the program is currently in pilot. You can get nominated for it via your Account Executive.
Additionally, you can learn more through our Press Release.