Einstein for Developers Overview

Overview

Einstein for Developers is an AI-powered developer tool that’s available as an easy-to-install Visual Studio Code extension built using CodeGen, the secure, custom AI model from Salesforce. The extension is available in the VS Code marketplace and the Open VSX registry. Note that the extension does not use customer data to train our LLM.

Einstein for Developers assists you throughout the Salesforce development process with expertise learned from anonymized code patterns. Our suite of AI-powered developer tools increases productivity and provides helpful assistance for complex coding tasks. We enforce development best practices with code generation and our suite of recommended static analysis and security scanning tools. With boilerplate code generation as its foundation, AI-assisted tooling also makes it easier for new developers to onboard to the Salesforce Platform.

Important: This feature is a Beta Service. A customer may opt to try such Beta Service in its sole discretion. Any use of the Beta Service is subject to the applicable Beta Services Terms provided at Agreements and Terms.

Current Capabilities

Einstein for Developers generates Apex code from natural language prompts and automatically suggests code completions for you as you type. When enabled along with IntelliSense, this feature makes Salesforce development tooling in Visual Studio Code even richer. Familiarity with Visual Studio Code is assumed.

  • Enter natural language instructions in a sidebar, so you can work with your editor and the tool side by side, without any interruptions to your workflow. Or use the VS Code Command Palette to enter a prompt describing what you’d like to build and then generate code suggestions within your editor, as in the illustrative sketch after this list.
  • Use inline autocompletion to automatically receive suggestions as you write Apex and LWC (JavaScript, CSS, and HTML) code.
  • Generate Apex unit tests to quickly achieve the code coverage required to deploy your Apex code.
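
For example, a prompt like the one in the comment below might produce Apex along these lines. The class name, method name, and query here are an illustrative sketch, not actual tool output; generated code varies from run to run and should always be reviewed before use.

    // Prompt: "Write an Apex method that returns accounts created in the
    // last 30 days, ordered by creation date, newest first."
    // Illustrative sketch only -- not actual Einstein for Developers output.
    public with sharing class RecentAccountService {
        public static List<Account> getRecentAccounts() {
            return [
                SELECT Id, Name, CreatedDate
                FROM Account
                WHERE CreatedDate = LAST_N_DAYS:30
                ORDER BY CreatedDate DESC
            ];
        }
    }

A generated unit test for such a class might look similar to the hypothetical sketch below, which you would review and adjust to reach the coverage you need.

    // Hypothetical example of a generated test class for the sketch above.
    @IsTest
    private class RecentAccountServiceTest {
        @IsTest
        static void returnsAccountsCreatedInLast30Days() {
            // Test data is isolated, so only this account should match.
            insert new Account(Name = 'Test Account');

            Test.startTest();
            List<Account> results = RecentAccountService.getRecentAccounts();
            Test.stopTest();

            System.assertEquals(1, results.size(),
                'Expected the newly created account to be returned');
        }
    }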

Note: Einstein for Developers uses generative AI, which can produce inaccurate or harmful responses. The output generated by AI is often nondeterministic. Before using the generated output, review it for accuracy and safety. You assume responsibility for how the outcomes of Einstein are applied to your organization.

Trusted Generative AI at Salesforce

Einstein solutions are designed, developed, and delivered to be compliant with our five principles for trusted generative AI.

Accuracy: We prioritize accuracy, precision, and recall in our models, and we back up our model outputs with explanations and sources whenever possible. We recommend that a human check model output before sharing it with end users.

Safety: We work to mitigate bias, toxicity, and harmful outputs in our models using industry-leading techniques. We protect the privacy of personally identifiable information (PII) in our data by adding guardrails around this data.

Honesty: We ensure that the data we use in our models respects data provenance and that we have consent to use the data.

Empowerment: Whenever possible, we design models to include human involvement as part of the workflow.

Sustainability: We strive to build right-sized models that prioritize accuracy and to reduce our carbon footprint.

Learn more at Salesforce AI Research: Trusted AI.

The CodeGen Model

Important: Einstein for Developers uses a customized LLM that is based on our open-source CodeGen model. The model that powers Einstein for Developers is the exclusive property of Salesforce.

CodeGen2.5

A new member of the growing family of Salesforce CodeGen models, CodeGen2.5 shows that a small model, if trained well, can achieve surprisingly good performance.

Key aspects of the CodeGen2.5 model version are:

  • It was released with state-of-the-art performance on the HumanEval benchmark among models with 7B parameters.
  • At only 7B parameters, its performance is on par with code-generation models (CodeGen1-16B, CodeGen2-16B, StarCoder-15B) with more than 15B parameters.
  • It features robust infill sampling; that is, the model can “read” text on both the left and right sides of the current cursor position. See the illustration after this list.
  • It is optimized for fast sampling with Flash Attention when serving completions, and for local deployment to personal machines.
  • CodeGen2.5 is permissively licensed under Apache 2.0.
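
To illustrate infill sampling, consider completing code at a cursor position that has code both before and after it. The Apex below is a hypothetical sketch: the loop marked as infilled is the kind of completion a fill-in-the-middle model can propose, because it conditions on the left context (the method signature and running total) and the right context (the return statement) at the same time.

    public with sharing class OpportunityMath {
        public static Decimal totalAmount(List<Opportunity> opps) {
            Decimal total = 0;
            // Cursor here: the model reads the code above (left context)
            // and the return statement below (right context) before
            // suggesting the infilled loop that follows.
            for (Opportunity opp : opps) {
                if (opp.Amount != null) {
                    total += opp.Amount;
                }
            }
            return total;
        }
    }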

See the blog post CodeGen2.5: Small, but Mighty.
