What groundbreaking features are making waves with Salesforce Data Cloud this summer? Join us for a conversation with Danielle Larregui as she walks us through the latest advancements in the Summer ’24 release. We discuss the innovative ability to virtualize data model objects on account records and the extended reach of Enrichments to include more standard and custom objects. Plus, Danielle introduces new connectors like the Heroku Postgres connector for seamless data integration from Heroku to Data Cloud.

And that’s not all! We also caught up with René to talk about the buzz surrounding the AI Now Tour. These tours are a fantastic opportunity for those eager to get hands-on with Einstein and Data Cloud. Happy listening!

Show Highlights:

  • Detailed discussion on Data Cloud’s new functionalities, including virtualizing data model objects on account records
  • Expansion of Enrichments to more standard and custom objects
  • New connectors, specifically highlighting the Heroku Postgres connector for seamless data integration
  • Feature Manager in Data Cloud setup, enabling access to beta connectors

Transcript:

René Winkelmeyer:

Welcome to the Salesforce Developer Podcast. My name is René Winkelmeyer, and in this podcast I’m hosting Salesforce Developer Advocates who are going to share insightful stories, new tips, techniques, and really everything you need to know these days as a Salesforce developer. Today, I’m very, very happy to have Danielle Larregui back on the podcast.

Danielle Larregui:

Hi, René. I’m super excited to be back on the podcast. It’s one of my favorite podcasts from Salesforce. I might be a little bit biased though.

René Winkelmeyer:

You’re already saying this because I’m your people manager. That’s it, right? No. Now Danielle, you haven’t been here since, I believe it was mid-March, and because you work in the Data Cloud space, I know that a lot has happened in these 8, 9, 10 weeks. What do you want to share with us?

Danielle Larregui:

Oh, yes. As you know, Data Cloud now deploys sometimes weekly, so eight, nine, or ten weeks these days is almost a lifetime for functionality in Data Cloud. Some of the things I really want to talk about are the new highlights that came out of the Summer ’24 release: enrichments, new connectors, as well as some new APIs that came out for Data Cloud.

René Winkelmeyer:

Okay, so I’m going to start with one term that some people may not know, so you can explain it and also the new functionality, and that is enrichments.

Danielle Larregui:

Oh, yes. Well, this is a plug. Really, if you listened to my previous podcast with René, enrichments was something that was on the frontier, or that might’ve just recently come out. With Data Cloud enrichments, you have the ability to write data back from Data Cloud data model objects to Salesforce, the CRM org that’s hosting Data Cloud, without writing any code. It’s all configuration.

René Winkelmeyer:

And what is new now?

Danielle Larregui:

Oh, yeah. So what is new in Data Cloud enrichments? Well, previously enrichments was only available for the contact and lead objects, so you were able to virtualize a data model object as a related list on a contact or lead. As of the Summer ’24 release, you can now virtualize a data model object on an account record. So what does that mean? That means that if you have data coming in via the connectors from some external system like ServiceNow, or one of the other connectors, that data can now be seen on an account record. That’s really, really powerful.

René Winkelmeyer:

But my understanding is that enrichment is not only about showing data virtually in a related list; it’s also about copy fields that bring data over into an org. Right?

Danielle Larregui:

Yep, that’s completely correct. Previously, developers also had the ability to bring over data from a field on a data model object: you could create a custom field on the contact or lead, have the data copied over from a field on a data model object to that custom field, and then set up a sync where the data was synced over every so often. Again, previously that was only available on contact and lead, but what came out in beta during the Summer ’24 release was the ability to do that on many standard objects, as well as any custom objects that someone might’ve created.

René Winkelmeyer:

A full expansion really to, again, most standard objects that you just mentioned. I guess tasks and events are not in that list like usual, am I right?

Danielle Larregui:

Yes. Tasks and events are not in that list, but there are some other great standard objects in there, like opportunities, cases, and even products. There are campaigns too; there’s some great functionality coming out for those as well.

René Winkelmeyer:

Okay, cool. So enrichments is like package number one. What is the next package you want to talk about?

Danielle Larregui:

Yes, so there’s been a wide variety of connectors that have come out in both pilot and beta, and that list has been expanding for Data Cloud. One in particular that was released, and that Heroku is really excited about, is the ability to bring in data from Heroku Postgres into Data Cloud using the new Heroku connector. I demoed that in Release Readiness Live. Now, when you’re building your custom web apps and custom mobile apps on top of the Heroku platform, you can sync that data really, really easily back into Data Cloud and use it with all of Data Cloud’s activations.

René Winkelmeyer:

Oh, that is cool. But it’s one-directional? That is, going from Postgres to Data Cloud, right?

Danielle Larregui:

That is correct. That is correct. Right now it’s one-directional, but you never know what could happen in the future, and what I might’ve seen.

René Winkelmeyer:

I want to know, but maybe not now. I think that’s a really powerful element, to go natively from Heroku to Data Cloud and consume that data. Some of my listeners may know that Heroku runs one of the largest, if not the largest, Postgres installations in the world, and I’m a big Postgres fan too, so I’m really glad that this kind of integration is now natively available and you don’t have to jump through hoops to get there. Now, you also mentioned the beta connectors. What is that?

Danielle Larregui:

Yes, that’s a great question. Any of you out there who have access to a Data Cloud environment might’ve seen something in Data Cloud setup called Feature Manager. Now, this is a one-way trip. If you listen to this podcast and you go and flip this, I want you to remember that I told you here that it’s a one-way trip. Toggling that Feature Manager button allows you to see all of the connectors that are in beta right now, and you can start using them, playing with them, and ingesting data into Data Cloud. But once you turn that feature on, you cannot turn it off. It’ll always be on in your Data Cloud org, for those of you who have Data Cloud in production.

René Winkelmeyer:

This sounds like something I would love to turn on. Feature Manager then gives you access to those connectors in beta, is that correct?

Danielle Larregui:

Yes, yes. Once you toggle Feature Manager on, you’ll start seeing, I think it’s something like a hundred connectors currently in beta, that allow you to connect to a wide variety of external systems.

René Winkelmeyer:

Which makes integration easy. You don’t have to really think about what you want to do, or find a potentially custom-built solution, or go buy some integration layer to actually do this. I think that’s really great, because people can move fast, in my opinion.

Danielle Larregui:

Yeah, exactly. Most of these connectors have some version of auth built in, so you just need to do the auth. After you do the auth, it creates the connection, and you can even test the connection while in Data Cloud setup. I will challenge anyone listening to this podcast: you can probably do the majority of these connections in 15 minutes or less. After you make the connection, you create a data stream, and in the data stream tab you’ll start seeing your tables and be able to start ingesting data from any one of those source systems, all in 15 minutes or less. I challenge them.

René Winkelmeyer:

Are we going to make a bet? I’m going to find a connector for me. Cool, that sounds really exciting. Whether it’s 15 or 10 or 60 minutes, really in no time you can get data from those systems into Data Cloud, which I think is the most important part. You don’t lose time, you can experiment: is this the right thing? Do I maybe have to change something in my source system so it maps better to what I want to do in Data Cloud? Which is actually great.

Danielle Larregui:

Yeah.

René Winkelmeyer:

Now connectors is one big thing with Data Cloud, and I think one thing that we talk about, but maybe not as often as we should, is in this regard also Bring Your Own Lake.

Danielle Larregui:

Yes. In the previous podcast episode, we announced that we had just released new functionality under Bring Your Own Lake, with bidirectional integrations with both Snowflake and BigQuery. Recently we added two new vendors: Amazon Redshift, as well as Databricks. Those are both bidirectional connections too, which allow you to bring in data from another external big data warehouse into Data Cloud, really increasing the power of Data Cloud.

René Winkelmeyer:

And for those who haven’t listened to that podcast episode yet, bidirectional access is really about bringing data from an existing data lake into Data Cloud, as well as using Data Cloud data natively from that third-party data lake. I think that is really the power, and also the open approach that we practice: giving everyone the opportunity to use the data as they need, right? That’s what it’s about. There’s no walled garden to make sure, “Okay, you can’t get out.” It’s more like, “No, no, here’s all your data. You can use it. You can use the activated data from Data Cloud in your other systems.”

Danielle Larregui:

Absolutely. Which is going to make your data way more powerful than it ever was before.

René Winkelmeyer:

That was nicely said. Now I’m going to go in a completely different direction, because we’ve talked about all these connectors and how easy they are to set up, but there is something that I, as a developer, always care about when I read these three characters: API. And there’s a new Data Graph API.

Danielle Larregui:

There is. Previously, you had to use the Query API and write a whole bunch of SQL joins if you wanted to get a primary record and its associated records. With data graphs, you can now build out a graph that relates objects in Data Cloud, primary objects and child objects, so you can build those relationships out in Data Cloud. And now, with the Data Graph API, you can retrieve records using those data graphs you built. You build the relationships in the data graph, then you send in the data graph name via the Data Graph API, along with the record that you want to retrieve, and what you get back is a JSON blob that has that record’s information, as well as all of the associated records that are linked using the metadata structure of that data graph.
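The retrieval flow Danielle describes can be sketched in a few lines. This is a hedged illustration only: the endpoint path, the instance host, and the object and field names in the sample response are assumptions made for the sketch, not the documented Data Cloud API contract, so check the official Data Cloud API reference for the exact shapes.

```python
# Sketch: retrieve a record plus its nested children via a data graph.
# The URL pattern and the ssot__* names below are illustrative assumptions.
from urllib.parse import quote

def data_graph_url(instance: str, graph_name: str, record_id: str) -> str:
    """Build an (assumed) retrieval URL: data graph name plus the record to fetch."""
    return f"https://{instance}/api/v1/dataGraph/{quote(graph_name)}/{quote(record_id)}"

def related_records(blob: dict, child_key: str) -> list:
    """Pull the child records out of the nested JSON blob the API returns."""
    return blob.get(child_key, [])

# Example response shape: one primary record with its associated child
# records nested inside, mirroring the data graph's metadata structure.
sample = {
    "ssot__Id__c": "001",
    "ssot__Name__c": "Acme",
    "ssot__SalesOrder__dlm": [
        {"ssot__Id__c": "SO-1", "ssot__GrandTotalAmount__c": 120.0},
        {"ssot__Id__c": "SO-2", "ssot__GrandTotalAmount__c": 80.0},
    ],
}

url = data_graph_url("example.my.salesforce.com", "Account_Graph", "001")
orders = related_records(sample, "ssot__SalesOrder__dlm")
total = sum(o["ssot__GrandTotalAmount__c"] for o in orders)
print(url)
print(len(orders), total)
```

The point of the sketch is the contrast with the Query API: instead of composing SQL joins per request, the join logic lives once in the data graph, and the client just walks one nested JSON document.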

René Winkelmeyer:

That’s really awesome. Can you enlighten me, and I’m certain I’m missing some point in my technical knowledge: you can use calculated insights in a data graph, right?

Danielle Larregui:

Yes.

René Winkelmeyer:

Okay, I think that’s a really great feature, right? Being able to use calculated insights within a data graph just adds to the functionality you explained. And from my memory (I have to go back and play a bit with that), there’s potentially also a better way to create a data graph: straight from a data kit, for those who haven’t tried that before. Now there’s one thing, I’m not going to say feature, but something that is now available that’s really great for ISV partners in relation to Data Cloud.

Danielle Larregui:

Yes, second-generation packaging. You’ve already heard from me that I want it made available to everyone, not just ISV partners, so don’t worry, listeners. For those of you cringing right now because it’s only available to ISV partners: I’ve been told it will be made available to many more people after Data Cloud Sandboxes release. Data Cloud Sandboxes will be coming out shortly in beta, and after they’re released, second-generation packaging will be made available more broadly. Previously, Data Cloud only had first-generation packaging, which didn’t work so well with things like source code control and GitHub, so second-generation packaging is going to be a bit better. You’ll be able to create cool things with Data Cloud in a scratch org (ISV partners only for now), package them up using second-generation packaging, store them easily in source code control like GitHub, and do better versioning with those packages.

René Winkelmeyer:

Cool. I think that is really exciting. Now, I don’t want to stretch your time too much today because, as you said, Data Cloud sometimes releases weekly, and I don’t even want to imagine how much you have to keep up with these days. Which is great, because then we’ll have enough to talk about next time, or actually next week already. Well, next week you’ll be traveling, because by the time you all listen to this, Danielle will already be on the road to London. So I’d say: Danielle, thank you very much for being on the podcast again. It was a pleasure. I love to laugh with you, and to all the listeners, you don’t hear what we cut out, for good reasons, because we had a lot of fun.

Danielle Larregui:

Yeah, it was great talking to you again, René. And speaking of what René just said, you should really look up the AI Now Tour. We have a couple more of those before Dreamforce, and one of the reasons I’m going to London is to teach at an AI Now Tour stop. If you want to get hands-on with some of the things we discussed today, I definitely recommend checking whether an AI Now Tour is coming to your area and signing up.

René Winkelmeyer:

Okay, sounds exciting. Happy day. Goodbye.

Danielle Larregui:

Bye-bye.

Get notified of new episodes with the new Salesforce Developers Slack app.