Generative AI has emerged as a game changer for innovative, customer-driven companies. Powered by advanced algorithms and machine learning, generative AI can drive innovation, streamline processes, and accelerate work across the business by providing personalized experiences and solutions tailored to customers’ unique needs.

Equally important to powerful, customer-led experiences is the protection of business-critical data. AI systems process and generate content based on large datasets, and large language models (LLMs) unfortunately aren’t designed to put your business first. As you prepare to roll out generative AI capabilities, it is crucial to prioritize data privacy. By implementing robust data protection measures, you not only maintain compliance with relevant regulations, but you also preserve customer trust, your most valuable asset.

Using the five steps outlined below, you can innovate quickly, increase productivity, and enhance personalized experiences, all while ensuring the security and privacy of your customer data.

Step 1: Understand and audit your data

To ensure that you have the right governance, privacy, and security protections in place, you’ll want to understand what data you’ll be using to create prompts, templates, and training models. Knowing which data you allow AI models to access helps prevent inadvertently sharing customers’ sensitive or personal data.

So, how do you get started? First, anonymize and aggregate customer data before using it for generative AI purposes. Remove personally identifiable information (PII) and any other sensitive data that could identify individuals.
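To make that concrete, here’s a minimal sketch of pattern-based scrubbing in Python. The regexes and placeholder format are illustrative assumptions; a production pipeline would lean on a vetted detection tool rather than hand-rolled patterns:

```python
import re

# Illustrative patterns only; a real pipeline should use vetted PII detection
# rather than hand-rolled regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def anonymize(text: str) -> str:
    """Replace anything that looks like PII with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

# Scrub a case comment before it lands in a prompt or training set
raw = "Customer jane@example.com reported a failed charge on 4111 1111 1111 1111."
print(anonymize(raw))
# Customer [EMAIL REMOVED] reported a failed charge on [CREDIT_CARD REMOVED].
```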

One easy way to do this in Salesforce is by using Data Detect, a product that allows you to review objects and fields before allowing AI processes to access them for prompts and training. Once data has been scanned through Data Detect, you can confirm that there are no surprises in that data, like credit card numbers or email addresses in fields where that kind of data shouldn’t exist.

Data Detect can also recommend a classification level, such as “Confidential” or “PII” for personal data, provide details on the content of an object, and find sensitive data in chatbot conversations, cases, and call transcripts that are automatically logged by AI.
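Conceptually, that kind of scan samples field values and recommends a level based on what it finds. The sketch below is not the Data Detect API, just an illustration of the idea:

```python
import re
from typing import Dict, List

# Not the Data Detect API; an illustration of recommending a classification
# level from patterns found in sampled field values.
PATTERNS = {
    "PII": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),          # email address
    "Confidential": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # card-like number
}

def recommend_classification(samples: List[str]) -> str:
    for level, pattern in PATTERNS.items():
        if any(pattern.search(s) for s in samples):
            return level
    return "Public"

# A few sampled values per field, as a scanner might collect them
contact_fields: Dict[str, List[str]] = {
    "Description": ["Prefers phone contact"],
    "Notes__c": ["Card on file: 4111 1111 1111 1111"],  # a surprise
}
for field, samples in contact_fields.items():
    print(f"{field}: recommended classification = {recommend_classification(samples)}")
```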

Step 2: Set up data privacy protection for your generative AI processes

Respecting customer privacy and protecting data throughout your AI processes is crucial for establishing and maintaining trust. As you rely more on AI to understand and make decisions based on your data, how are you also protecting that data, especially PII?

For AI processes that use personal data, such as augmenting Contact records or orchestrating dynamic 1:1 Marketing offers, you’ll want to develop clear and transparent data usage policies that outline how customer data will be handled, including its use in generative AI systems. Communicate these policies to your customers and provide them with the opportunity to opt out or choose the appropriate level of data usage. Additionally, create a policy for eliminating and obfuscating data that is no longer useful or relevant, so that your customers stay protected and your generative AI processes stay accurate.
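One straightforward way to honor those policies in code is to gate every AI purpose behind an explicit consent flag. Here’s a minimal sketch; the flag and field names are illustrative assumptions, not a Salesforce schema:

```python
from dataclasses import dataclass

# Hypothetical consent flags; field names are illustrative, not a real schema.
@dataclass
class Contact:
    contact_id: str
    email: str
    allow_ai_training: bool  # customer opted in to model training
    allow_ai_prompts: bool   # customer opted in to prompt grounding

def records_for_ai(contacts: list, purpose: str) -> list:
    """Return only the records whose owners consented to this AI purpose."""
    flag = {"training": "allow_ai_training", "prompts": "allow_ai_prompts"}[purpose]
    return [c for c in contacts if getattr(c, flag)]

contacts = [
    Contact("003A", "a@example.com", allow_ai_training=True, allow_ai_prompts=True),
    Contact("003B", "b@example.com", allow_ai_training=False, allow_ai_prompts=True),
]
print([c.contact_id for c in records_for_ai(contacts, "training")])  # ['003A']
```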

Privacy Center can help verify that the data feeding your AI processes has the proper consent for use in training and prompts. Privacy Center can also help you create retention policies to manage the lifecycle of data used and generated by AI, including call transcripts, chatbot conversations, and cases automatically logged by AI.
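At its core, a retention policy is a maximum age per data type plus a recurring sweep. The windows and record shapes below are assumptions for illustration, not Privacy Center behavior:

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention windows; thresholds and record shapes are assumptions.
RETENTION = {
    "call_transcript": timedelta(days=180),
    "chat_transcript": timedelta(days=90),
    "ai_logged_case": timedelta(days=365),
}

def expired(record: dict, now: datetime) -> bool:
    """True when a record has outlived its retention window."""
    return now - record["created"] > RETENTION[record["type"]]

now = datetime.now(timezone.utc)
records = [
    {"id": "t1", "type": "chat_transcript", "created": now - timedelta(days=120)},
    {"id": "t2", "type": "call_transcript", "created": now - timedelta(days=30)},
]
print("purge:", [r["id"] for r in records if expired(r, now)])  # purge: ['t1']
```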

Step 3: Set up your org for managing generative AI

To protect data used in AI processes and confirm that your integrations stay within the bounds of the data you intend to use, you’ll want to implement controls that protect customer data from unauthorized access or breaches.

Access controls allow you to restrict access to customer data to only authorized personnel. By granting access on a need-to-know basis, you reduce the risk of AI models and unauthorized individuals accessing sensitive data. This protects against potential misuse of that data while ensuring customer privacy.
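In practice, that means treating the AI integration as just another principal with its own small allowlist. Here’s a minimal need-to-know sketch; the roles and fields are illustrative, not Salesforce permission sets:

```python
# Illustrative role-to-field allowlists; not Salesforce permission sets.
FIELD_ACCESS = {
    "service_agent": {"Name", "CaseHistory"},
    "ai_prompt_service": {"Name"},  # the AI integration sees the minimum
    "billing": {"Name", "CreditCardLast4"},
}

def readable_fields(role: str, requested: set) -> set:
    """Return only the fields this role may read; log anything denied."""
    allowed = FIELD_ACCESS.get(role, set())
    denied = requested - allowed
    if denied:
        print(f"audit: {role} denied access to {sorted(denied)}")
    return requested & allowed

# The AI integration asks for more than it needs and gets only its allowlist
print(readable_fields("ai_prompt_service", {"Name", "CreditCardLast4"}))
```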

Security Center can help you centrally manage user permissions and org configurations for data used in and ingested from AI processes.

Now let’s get ready to roll out AI safely across your organization.

Step 4: Test your processes for rollout

Testing in a sandbox environment is paramount when it comes to generative AI. This serves two critical purposes: testing AI processes and training employees on the safe and responsible use of generative AI. By conducting thorough testing in a controlled sandbox environment, organizations can assess and refine the performance and behavior of their generative AI models before deploying them in real-world scenarios. Testing allows for the identification and mitigation of potential issues, such as biases, errors, or unintended consequences that may arise during a generative AI process.

Moreover, a sandbox environment provides a safe space for employees to gain hands-on experience and training with generative AI tools and systems. It allows them to explore capabilities, identify ethical considerations, and make informed decisions about using the technology responsibly in their day-to-day operations. By leveraging sandbox testing, organizations can ensure the reliability, effectiveness, and ethical application of generative AI while empowering their workforce to embrace this transformative technology with confidence.
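One sandbox test worth automating is a leak check: run representative prompts through the AI process and flag any response containing PII-shaped output. In this sketch, generate is a stand-in for whatever model call you are testing, and the patterns are illustrative:

```python
import re

# Patterns for PII-shaped output; illustrative, not exhaustive.
LEAK_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email address
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # SSN-like number
]

def generate(prompt: str) -> str:
    """Stand-in for the AI process under test; simulates a leaky response."""
    return "Sure, the customer on file is jane@example.com."

def leak_check(prompts: list) -> list:
    failures = []
    for prompt in prompts:
        response = generate(prompt)
        if any(p.search(response) for p in LEAK_PATTERNS):
            failures.append((prompt, response))
    return failures

for prompt, response in leak_check(["Summarize this support case."]):
    print(f"LEAK in response to {prompt!r}: {response!r}")
```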

Be sure that when you’re using a sandbox for AI training, you’ve eliminated all personal data from whatever you use to build prompts or train a model. You can easily eliminate or obfuscate any data that shouldn’t be included with Data Mask.
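As an illustration of the kind of masking such a tool automates (this is not Data Mask’s algorithm), deterministic hashing removes real values while keeping them consistent wherever they appear:

```python
import hashlib

# Illustrative masking; not Data Mask's algorithm. Deterministic hashing keeps
# the same input mapping to the same token, preserving referential integrity.
def mask_email(value: str, salt: str = "sandbox-salt") -> str:
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:8]
    return f"user_{digest}@masked.example"

prod_email = "jane@example.com"
print(mask_email(prod_email))                                    # masked token
print(mask_email(prod_email) == mask_email("jane@example.com"))  # True
```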

Step 5: Monitor and protect your AI processes

Ensuring that your AI integration doesn’t access data or modify systems beyond the intended scope is crucial for maintaining data security and system integrity. As we described above, access controls and user permissions should be carefully defined, granting AI systems only the necessary privileges and limiting their access to specific data sources or systems. In addition, thorough testing and validation of the AI integration should be conducted to verify that it functions as intended and doesn’t introduce unintended consequences or vulnerabilities.

Implementing robust monitoring mechanisms can help detect and alert on any unauthorized access attempts or abnormal behavior by the AI system. Regular audits and reviews of AI integration processes and access logs can help identify deviations or potential security risks.

Event Monitoring makes that monitoring and detection process easier by letting you set up capabilities, such as transaction security policies, that send alerts or block actions beyond what was initially intended for your AI process.
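In the same spirit (the log shape and threshold below are assumptions, not the Event Monitoring API), abnormal behavior can be as simple as the integration reading far more records than expected:

```python
from collections import Counter

THRESHOLD = 100  # expected ceiling of record reads per user per hour (assumed)

def check_access_logs(events: list) -> list:
    """Flag users whose read volume exceeds the expected ceiling."""
    reads = Counter(e["user"] for e in events if e["action"] == "read")
    return [user for user, count in reads.items() if count > THRESHOLD]

# Simulated log: the AI integration suddenly reads 150 records in an hour
events = [{"user": "ai_integration", "action": "read"}] * 150
for user in check_access_logs(events):
    print(f"ALERT: {user} exceeded {THRESHOLD} reads this hour; review or block")
```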

Finally, as you get deeper into your AI journey, it’s critical that your data is backed up and can be restored down to the record level in the unlikely case that data used and augmented by AI is misconfigured or incorrectly synced. Back up your data so that you can view each version of the records used and touched by AI, and restore any mistakes.
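A record-level backup boils down to per-record version history plus a restore path. This minimal sketch is a stand-in for a backup product, showing why versioning makes a bad AI sync reversible:

```python
from copy import deepcopy

# A stand-in for a backup product: per-record version history with restore.
class RecordBackup:
    def __init__(self):
        self.versions = {}  # record id -> list of snapshots, oldest first

    def snapshot(self, record_id: str, record: dict) -> None:
        self.versions.setdefault(record_id, []).append(deepcopy(record))

    def restore(self, record_id: str, version: int = -2) -> dict:
        """Return a prior version (default: the one before the latest)."""
        return deepcopy(self.versions[record_id][version])

backup = RecordBackup()
record = {"Id": "003A", "Phone": "555-0100"}
backup.snapshot("003A", record)
record["Phone"] = "AI-GARBLED"  # a bad AI sync overwrites the field
backup.snapshot("003A", record)
print(backup.restore("003A"))   # {'Id': '003A', 'Phone': '555-0100'}
```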

Conclusion

By adopting a privacy-first approach and implementing robust data protection measures, you can create a trusted foundation for responsible, sustainable, and ethical generative AI practices, all while driving more efficient and effective innovation and more personalized customer interactions. For more on getting started with generative AI, check out our Getting Started with AI Trail!


About the author

Marla Hay is VP of Security, Privacy, and Data Management at Salesforce, where she runs the Trusted Services product organization. She joined Salesforce in 2017 after leading product at a consumer identity management company. Marla holds a BS in Computer Science from Cornell University and an MS in Computer Science from Johns Hopkins University.
