Today, we sit down with Daniel Hoechst, a Salesforce Architect at Instructure, to talk about his experiences on the platform, including how he got started and how an MBA has factored into his approach. Plus, we talk about some of his latest projects, like Limits Monitor, an app for monitoring Salesforce org limits, and Test Factory, a utility that can be used in unit tests to create test data.

Show Highlights:

  • How having an MBA has factored into his approach to Salesforce
  • The importance of watching for edge cases that can break limits
  • Pitfalls that developers fall into that can cause trouble for orgs
  • Limits Monitor: an application that helps you monitor limits and then sends out warnings
  • Test Factory: a data factory based on an older test factory project called SmartFactory

Resources:

Shout out:

Episode Transcript

 

Daniel: I could just add an object and boom, it was there. Like, I could add a field to an object and add it to the page layout, and I was done. I didn’t have to do all these crazy things on this record. So I just fell in love with it.

 

Josh: That is Daniel Hoechst, a Salesforce Architect at Instructure. I’m Josh Birk, a Developer Evangelist at Salesforce, and here on the Salesforce Developer Podcast, you’ll hear stories and insights from developers, for developers. Today, we sit down to talk to Daniel about his experiences on the platform, as well as a specific project of his to help monitor limits. That quote was about him first encountering Salesforce, a platform he approaches from the perspective of an MBA.

 

 

Daniel: Well, you know, in an MBA program you learn everything from how finance and accounting practices work to how to manage people. So having that background has really helped me understand what the business is asking for, rather than just coming at it from a technical background. I feel like I have a better idea of why they’re asking for something, and I can provide a lot more value beyond just giving them a technical solution. I always ask why. I say, why are you trying to build that thing? What are you trying to do? Because so often people come to you with a technical solution. They’re like, I need an object that has 20 fields, and I need this automation to happen. And you have to back them up a little bit and go, well, wait, why? What are you trying to do? And oftentimes we find that there’s some other solution that’s really going to solve the business problem better. It helps them get away from these weird technical solutions they come up with themselves.

 

 

Josh: Daniel actually entered the world of software before getting his MBA, and that led to his introduction to Salesforce in a role that’s becoming something of a theme on the podcast. He started out working in customer service.

 

 

Daniel: I actually entered the software world before I got my MBA, many, many years ago. I’ve lived in Utah now for 20 years. When I first got here, I started working in customer service at an outdoor products manufacturer called Petzl. I was just working customer service. I have a civil engineering degree, but I didn’t really know what I wanted to do with myself. So out of college, I moved out west from Georgia and ended up working customer service. Well, while I was there, they had just implemented a new ERP that ran on the AS/400, and nobody knew how it worked. So I started reading the manual and started making suggestions on how we could improve things. And finally, they got annoyed enough that they said, come work for us. So I got a job as a developer on the IT team helping support the ERP. I spent quite a few years building green-screen applications to support that. And eventually we found Salesforce and implemented it for sales.

 

 

Josh: Fast forward to his current role, where he’s in a large org with a lot of different moving parts. And I think a lot of people listening are probably familiar with what happens when one of those gears can bring down the entire machine.

 

 

Daniel: Well, so we run a pretty large Salesforce org here. Like I mentioned, we have users across pretty much all the clouds, and we have been around for 10 years on the Salesforce platform. The company has a lot of different admins, a lot of different developers, and there’s a lot going on. We have, I don’t know, something like 30 or 40 managed packages. Some of those are big ones like FinancialForce PSA, Salesforce CPQ, and others, and they all sometimes interact with each other in odd ways. And we have our own code, too. So we actually got hit a few times. Right when I started, my boss was super gun-shy about applications connecting to the API, because we had a vendor who had connected to our API and brought the org down, because it used up all of our API calls. That was one example. We had other products that were just causing loops, where maybe one managed package or something we wrote was causing a loop to hit and cause other things to happen. So we were hitting limits, and on a few occasions we actually had to frantically call up Premier Support and ask them to increase some limit, whether it was our API limit or the number of email messages that get sent by Apex. That’s another limit we’ve hit. And we hit it a few too many times, and I got frustrated and said, there’s gotta be a better way to watch for this. So I sat down, looked at the documentation, and said, hey, there’s actually an API endpoint to get the org limits. I’m not looking at the transaction limits, you know, the number of queries, the execution time and that kind of thing. I’m more interested in the 24-hour limits, the things that, if you hit them, your org is in trouble. And I said, hey, you know what, I think I can do this.
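
A minimal sketch of the kind of check Daniel is describing: reading the 24-hour org limits from Apex with the System.OrgLimits class, which exposes the same data as the REST /limits endpoint. The key names shown follow that endpoint; this is a sketch, not the Limits Monitor code itself.

```apex
// Sketch: read org-wide 24-hour limits (not transaction governor limits).
Map<String, System.OrgLimit> orgLimits = System.OrgLimits.getMap();

System.OrgLimit apiRequests = orgLimits.get('DailyApiRequests');
System.OrgLimit apexEmails = orgLimits.get('SingleEmail');

System.debug('API requests used: ' + apiRequests.getValue() + ' of ' + apiRequests.getLimit());
System.debug('Apex emails used: ' + apexEmails.getValue() + ' of ' + apexEmails.getLimit());
```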

 

 

Josh: And again, it only really takes one gear being a bad operator to kind of gum up the whole works.

 

 

Daniel: Right, exactly. Yeah. The API is a good example. You know, if people are connecting to the traditional APIs, the SOAP API, you can’t really put them in a little box and say, hey, you can only have 10% of our API calls, and if you go above that, you’re just going to stop working. That would be wonderful, but instead one application can run away with your limits, especially with the API calls. Now, fortunately, talking about API calls, because that has often been my problem, I was happy to see in the spring release notes that that’s now going to be a soft limit. So if you go over, your org can still continue to run. But I still think it’s important to watch those limits, because if you have something that’s hitting your API and running away, something is wrong, and you need to be alerted to it.

 

 

Josh: Now, these bad operators might not be built with bad intentions, but they might be coming from the perspective of something that’s an easy pitfall when developing with Salesforce, and that is not developing to scale.

 

 

Daniel: You know, sometimes I think it has to do with data volume. Like I said, we have a big org, and we have a pretty big database volume. And I think some of the smaller vendors out there that have written apps for Salesforce haven’t considered companies with such large volumes of data. Let’s say they’re syncing contacts from Salesforce to some other application, and they didn’t implement it well. Every single time it connects, it goes, I’m gonna get all your contacts, I’m going to copy them over, I’m not going to look for last modified date or anything like that. So it’s just poor design. People aren’t really thinking, oh, when I get into a larger scale of data, I need to start thinking, okay, instead of pulling everything, let’s use some of the newer features of the platform. I mean, you could use the last modified date, but there are even other features now, like Change Data Capture, that could be used to really identify specific changes without having to go and search through the entire org to look for records that have changed.
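
The simplest version of what Daniel is suggesting is just filtering on last modified date, for example `SELECT Id FROM Contact WHERE LastModifiedDate > :lastSyncTime`. The newer option he mentions, Change Data Capture, pushes only the changed records to a subscriber. A minimal sketch follows; the trigger name is illustrative, and CDC has to be enabled for Contact in Setup.

```apex
// Sketch: react to Contact change events instead of re-reading every Contact.
trigger ContactChangeTrigger on ContactChangeEvent (after insert) {
    for (ContactChangeEvent evt : Trigger.new) {
        EventBus.ChangeEventHeader header = evt.ChangeEventHeader;
        // Only changed records arrive here, along with the type of change.
        System.debug(header.changeType + ' on ' + String.join(header.recordIds, ', '));
    }
}
```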

 

 

Josh: Okay, so anecdote time. I was once an innocent developer on a project for a very large media company, and we kept asking them for a data set that was representative of their production data so that we could develop to scale. They first gave us a few hundred rows of data, which we didn’t believe, and so we kept asking, well, what’s something that’s more representative? They eventually gave us one that was closer to 50,000 rows of data, which we thought was pretty plausible, only to go to production and find out that it was more like half a million rows of data. Needless to say, there were some performance changes that needed to be applied after that. So if you’re on a project that has large data volumes, make sure that that is getting flagged correctly to your developers. And moving on from things like API calls and data volume, what are some other pitfalls that we can look for, but also potentially avoid, in an application?

 

 

Daniel: …things such as storage usage, and, as I mentioned, the emails sent from Apex. That one is an odd one, because many of your limits will increase based on the number of users that you have in the system, but that one does not. I want to say it’s 2,000 emails in a 24-hour period. It might be higher, I can’t remember, it differs depending on which organization you’re on, but it doesn’t increase. So whether you have 1,000 users or you have 100 users, it’s the same limit. That’s one you have to really be careful of, because it doesn’t scale with the number of users that you have. And we have some code that sends emails from cases, and we have gotten into a situation where there’s a loop. We try to avoid sending emails through Apex at all, but there are a few edge cases where we had to do it. We try to send them with workflow rules or Process Builder, because the number of emails you can send with those is astronomical, it’s like hundreds of thousands, if not millions. But you can’t send an email with an attachment through workflow or Process Builder, so if we need to send an attachment that’s dynamic, we have to use some Apex for that. So we’ve had a situation with cases where we had a loop: the recipient on the other end was replying with an attachment, that would come back into Salesforce as an attachment, and then our Apex would go, oh, I need to email this attachment to all the case contacts. And it just kept looping. That was a bug that we identified in our code through monitoring our limits, because we could see, hey, there’s something happening, and we could actually identify the case before it took the entire org down.
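
For reference, this is roughly what that edge case forces you into: sending an attachment means using Messaging.SingleEmailMessage in Apex, which counts against the daily Apex email limit. A hedged sketch, with placeholder addresses and file contents rather than anyone's real code:

```apex
// Sketch: Apex email with an attachment, guarded against the 24-hour limit.
Messaging.EmailFileAttachment attachment = new Messaging.EmailFileAttachment();
attachment.setFileName('case-file.pdf');
attachment.setBody(Blob.valueOf('...file contents...'));

Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
mail.setToAddresses(new List<String>{ 'contact@example.com' });
mail.setSubject('Case update');
mail.setPlainTextBody('Please see the attached file.');
mail.setFileAttachments(new List<Messaging.EmailFileAttachment>{ attachment });

// Throws if the reservation would exceed the org's remaining daily capacity.
Messaging.reserveSingleEmailCapacity(1);
Messaging.sendEmail(new List<Messaging.SingleEmailMessage>{ mail });
```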

 

 

Josh: And let’s shift away from problems and over to solutions, because Daniel has a project up on GitHub for an application that runs on the platform, helps you monitor these limits, and then sends out warnings if you’re getting close to them.

 

 

Daniel: So, how I’ve configured it... you know, the nice thing that surprised me as I built this app was that the amount of actual code I used is maybe about 20 or 30 lines. That’s about it. The rest of it is all using platform features, like objects and Process Builder, and I even have a flow in there. So the idea is you can configure each limit. I put instructions on my GitHub for how to initialize the app. Basically, you run a little bit of anonymous Apex that goes and gets a list of all the limits that are available to be monitored and inserts records. So we have basically a header-level record for each limit. You’d have a header-level record for Apex emails, a header-level record for API, data storage, etc. And then we can configure each of those limits to be monitored at increments. Right now I do increments up to every 15 minutes, so you can do 15-minute intervals. Every 15 minutes, the scheduled job runs and checks whether it needs to pull a snapshot for this limit, and if it does, it pulls a snapshot and records it as a child object of that limit. So you just have a limit and snapshots, and it runs on that schedule. And then we can configure, for each limit, what my thresholds are. There are several thresholds that you might want to watch where limits are concerned. One is just a percent of total. Let’s use data storage: say you’re in a Developer Edition, so your data storage is 10 megs max. If you’re using, let’s say, eight megs, you’re at 80% of data storage, and that might be the time you want to start telling the admin team that you’re getting close to hitting that limit. So I’ve got it configurable at that level, so that for data storage it says, notify me when I’m at 80% of the limit. Everything will hum along fine until we hit that threshold. So that’s one way we monitor it. Another thing I look at, which can also identify problems, is if there’s a big increase suddenly. Let’s use data storage again. Say you’ve been humming along with just a couple of megs of data, and then all of a sudden it pops to nine megs, or let’s say seven, so we don’t go over 80%. If you were humming along at two megs and it pops to seven megs, that’s a pretty big increase percentage-wise, and we can configure that as well. By default, I set it so that anything that’s a 20% increase between snapshots is something to be concerned about. And you can configure that per limit to say, for data storage, I might be more concerned if it increases 5%, right, if you have a big org. So we can look at it in a couple of different ways to watch for those problems.
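
A sketch of the two checks being described, percent of the total limit and a sudden jump between snapshots. Method and parameter names here are hypothetical, not the actual Limits Monitor code:

```apex
// Sketch of the two alerting checks: percent of total, and jump between snapshots.
public class LimitThresholdChecks {
    // e.g. 8 MB used of a 10 MB limit with an 80% threshold returns true
    public static Boolean overPercentOfTotal(Decimal used, Decimal max, Decimal thresholdPct) {
        return max > 0 && (used / max) * 100 >= thresholdPct;
    }

    // e.g. a snapshot jumping from 2 MB to 7 MB with a 20% jump threshold returns true
    public static Boolean suddenIncrease(Decimal previousSnapshot, Decimal currentSnapshot, Decimal jumpPct) {
        if (previousSnapshot <= 0) {
            return currentSnapshot > 0;
        }
        return ((currentSnapshot - previousSnapshot) / previousSnapshot) * 100 >= jumpPct;
    }
}
```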

 

Josh: Now, when Daniel first started tinkering around with this, the limits we’re talking about were only available through the API. And so he actually put together a quick MVP using Boomi that would go after the API and then send out the emails, and, well, it worked, more or less.

 

Daniel: It can send texts and emails and phone calls, and it can wake you up, which is great. We find it, we address the limit... but let’s go back to the email one, because we talked about that a lot. Let’s say you got close, say you’re at 1,500 out of 2,000 emails, and that’s perilously close, but we caught the problem and fixed it. We’re still going to be near that limit for another 24 hours or so, because it’s a rolling 24-hour period. And so if I have that threshold of 80%, or whatever, somebody’s going to get woken up every time the limit process runs. It goes, hey, you’re still over this threshold, wake up, wake up. And so that was a problem.

 

I had some people not so happy with me.

 

So, you know, I started thinking about it. It was kind of an MVP product; I had hard-coded most of the thresholds and other things, and I realized I wanted some flexibility. I wanted to be able to change it per limit, to monitor different levels. And I wanted to have some kind of snooze functionality, so that if we do go over a limit, we can snooze it for a certain period of time and say, alright, we’ve hit the limit, we know what’s happening, we’re going to snooze it now. So for the next 24 hours, instead of 80%, make it 90%. That way it can still be monitoring, so if you have another spike that causes a problem, you still get alerted, but as long as you stay below 90 for the next eight hours, or whatever you set, then it won’t keep sending alerts. So those were the things where I said, now that I’ve had some experience with this and found the problems and the things I’d like to improve, I can build this on the Salesforce platform.
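
A sketch of that snooze idea, with hypothetical parameter names rather than the actual Limits Monitor schema: while a snooze is active, evaluate alerts against the temporary higher threshold instead of the normal one.

```apex
// Sketch: pick the threshold to alert against, honoring an active snooze.
public class SnoozeAwareAlerting {
    // Returns the threshold to compare usage against; while a snooze is active,
    // use the temporary higher threshold (e.g. 90%) instead of the normal one (e.g. 80%).
    public static Decimal effectiveThreshold(Decimal normalPct, Decimal snoozedPct, Datetime snoozeUntil) {
        Boolean snoozeActive = snoozeUntil != null && snoozeUntil > Datetime.now();
        return snoozeActive ? snoozedPct : normalPct;
    }
}
```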

 

 

Josh: Daniel’s code is all up on GitHub, all open source. And it’s also a great example of a dependent package, because it depends on his own Test Factory project, which is itself a great example of open source, because it’s loosely based on an older project…

 

 

Daniel: Matthew Botos, from Mavens Consulting, built another test factory called, I believe, SmartFactory, and it got a big following. This was, I want to say, seven or eight years ago, and people really liked it. The nice thing about a test factory is you can have predictable objects, you know, records and that kind of thing. And what SmartFactory did was use describes to look at your object and say, oh, that’s a Boolean, that’s a checkbox, I’m going to put a yes in there. That’s a text field, I’m going to throw a random text value in it. Basically, with SmartFactory you’d just say, I want an Account, and it would go look at the Account object and fill in all of the fields, with just random values for the most part. And it worked. And it’s nice, because for a developer, if you had a required field, it would put something in it, and you wouldn’t have to worry about that. But what I didn’t like was the randomness. I didn’t like that it would just put random data into all these fields, because I felt like that could give you unpredictable test results. You run your test once and it passes, and then something about the random data that gets put in might cause your test to fail the next time you run it, even though nothing changed. So I wanted a little more control over what the values were. So I ended up writing my own, modeled a little bit on his interfaces. I borrowed from his stuff, like how he was generating the sObject; the interface is almost the same. But behind the scenes, instead of doing random values, you provide templates, and you say, okay, my template for Account is: account name has the value XYZ Company. And so every time you ask for an account, it always has that value.
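
A minimal sketch of that template idea, with an illustrative class and values rather than the actual TestFactory project's API: the factory always hands back the same predictable field values, so tests behave the same on every run.

```apex
// Sketch: template-based test data instead of random values.
@isTest
public class AccountTemplate {
    public static Account build() {
        // Fixed template values keep test results repeatable.
        return new Account(
            Name = 'XYZ Company',
            Industry = 'Technology'
        );
    }
}
```

In a unit test you would then call something like `Account a = AccountTemplate.build(); insert a;` and assert against the known values.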

 

Josh: Now, more specifically, these packages aren’t just dependent, they’re unlocked packages, as Daniel has moved them to that particular SFDX format. Now, I did ask Daniel what his favorite non-technical hobby is. As it turns out, it dovetails right into a community event that occurred just recently.

 

Daniel: I spend a lot of time out in the snow. It’s not one of the reasons I moved to Utah, but it’s definitely one of the reasons I stay here. I tell you, I love playing in the outdoors so much that we organized SnowForce here in Salt Lake City, and that was really an excuse for me to go skiing. So that has been fun to see grow, and I still get to go skiing. I think this podcast will probably air after SnowForce this year, but yeah, in February we’ll have SnowForce again, and that’s a day of Salesforce community sessions and a day of skiing. So that’s what I love doing, and SnowForce lets me combine the two hobbies, so…

 

Josh: So if you like skiing and Salesforce, head on over to SnowForce and check it out in 2021. I want to give a thanks to Daniel for the great conversation and some of the great code that he has put out there on GitHub. Thanks to you for listening. If you would like to learn more about the show, head on over to developer.salesforce.com/podcast where you can see show notes, old episodes, and also links to your favorite podcast services. I’ll talk to you next week.

Get notified of new episodes with the new Salesforce Developers Slack app.