Salesforce Anti-Patterns: A Cautionary Tale
Don't let your Salesforce org suffer from anti-patterns that often emerge unknowingly as you customize Salesforce to meet your business needs. Read this fun, fictitious story to learn how to avoid and recover from sub-optimal implementations.
The following story is based on experiences compiled from several real-world engagements with Salesforce users. The names of characters and companies in this story have been changed to protect their true identities–but you know who you are. 🙂
It Begins …
Mike’s company, XYZ, just purchased Salesforce licenses for all of its sales reps, and it’s his team’s job to manage the rollout of Salesforce as quickly as possible. Mike, Vijay, and Lydia all have years of experience managing and customizing XYZ’s on-premises CRM system. They quickly skim the Salesforce documentation, recognize most of the general application and database development/administration concepts they read about, and conclude that the transition to Salesforce will be straightforward and won’t require much training.
Customizing the Salesforce Schema
Vijay has lots of experience with relational databases, so Mike asks Vijay to customize the Salesforce schema for XYZ’s needs. Vijay quickly learns that standard object customization (e.g., Accounts, Leads, Opportunities), custom object creation, object relationships, and data validation are a breeze to configure without writing any SQL, thanks to the intuitive Salesforce Setup UI. In particular, Vijay falls in love with formula fields and creates many of them to dynamically surface related child data in parent objects and vice versa. He figures such fields will make reports and queries easier to build later on because they won’t require users to join objects.
Loading Large Amounts of Data
XYZ’s on-premises CRM system has millions of Accounts, Contacts, Opportunities, etc. that all need to migrate to Salesforce. Lydia’s on it–she takes a quick lesson on the Salesforce Data Loader and decides this is the right tool for the job after loading a few records into a Salesforce Developer Edition org. The plan, she decides, is to wait until just before the CRM system switchover to extract data from the on-premises database into CSV files, then configure the Data Loader to use the Bulk API in parallel mode to pump the data into Salesforce as quickly as possible. The team is pressed for time, so Lydia doesn’t give her bulk data loading strategy another thought and moves on to her next task …
Creating and Maintaining Local Database Backups
Even though XYZ is fully committed to their cloud strategy, they still want to maintain a local copy of their CRM data. Lydia rushes to meet this requirement next. Considering this task is the opposite of the data load she just devised, Lydia does a bit more training with the Data Loader and decides that this tool will also work to query and extract data from Salesforce to local CSV files on a nightly basis. The easiest way to meet the requirement, she decides, is to do a simple full refresh of the local database every night.
Users and Feature Governance
Mike, meanwhile, has been working on a plan for user provisioning, data access, and feature governance. Here, Mike learns that there are some unique Salesforce concepts and features to understand, including record ownership and strategies for sharing private records. Mike takes his time gathering requirements and implementing a role hierarchy and a number of sharing rules on Accounts, Cases and Opportunities to meet them. Because they want users to have the appropriate visibility right away, Lydia and Mike decide to make the objects Private and implement the sharing rules before loading the data. He also makes a note that the sharing rules will need to be recalculated for the org every quarter, first thing Monday morning after the new sales account assignments are available.
Mike decides to enable users to help themselves whenever possible when it comes to certain Salesforce features. For example, creating Salesforce reports and dashboards is easy, so Mike decides to let sales reps create their own reports within their data access permissions. His reasoning: Why should he spend time researching requirements for reports and dashboards when users can build and maintain what they need themselves?
Business Process Configuration
Mike is also in charge of configuring various business processes for XYZ’s users. The first key requirement is to set up something that will drive workflow for sales reps–which Leads to call next, in order of priority.
Mike meets with Jamie in Sales, who became an experienced Salesforce user before joining XYZ, to discuss this key requirement and others. Jamie recommends an easy solution–that each sales rep create their own list view for Leads, and refresh the list view periodically so that they always see the latest Leads requiring attention. Mike loves the idea because he doesn’t have to implement anything to meet the requirement. Sweet!
This fictitious, yet true to life, story demonstrates how some common anti-patterns emerge when implementing Salesforce. What’s an anti-pattern?
Design patterns, in the context of computer science, are proven, highly effective approaches to solving common challenges in software development. It makes sense, then, that an anti-pattern is the opposite of a design pattern: a common approach that seems reasonable at first but turns out to be suboptimal, or even counterproductive, in practice.
Now let’s go back to the story and learn more about the anti-patterns that adversely affect the Salesforce implementation and how the team can recover from these problems.
Fast Forward … Data Loads Fail
It’s go time: XYZ is ready to switch from on-premises to cloud. Lydia does a huge data dump from the on-premises CRM system, and transforms the data as necessary before attempting to load it into Salesforce. She starts up the Data Loader and attempts to bulk load the data from the CSVs using the parallel loading feature.
Some of the initial loads go very quickly–for example, the load of the parent Account object. However, subsequent loads of some related objects are plagued with lock contention errors causing batch retries, leading to painfully slow loads that often result in load failures. Lydia is extremely frustrated and starts to wonder if Salesforce can handle their data requirements.
Lydia doesn’t understand the Salesforce record-locking mechanisms and how they can affect her bulk data load. If she did, she could easily work around the errors by pre-sorting the child records by parent Id in her CSV files, lessening the chance of parent record lock contention among parallel load batches. Lydia also doesn’t realize that the sharing configuration she and Mike set up prior to the data load is further degrading loading performance, which magnifies her lock contention problems. By deferring the org’s sharing calculations until after her data load, she could significantly increase both load and sharing calculation performance.
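The pre-sorting step Lydia could have used is simple to script before handing the CSVs to the Data Loader. Here is a minimal sketch in Python (file paths and the `AccountId` column name are illustrative, not from the story): it sorts a child-object CSV by its parent lookup Id so records referencing the same parent land in the same or adjacent Bulk API batches, reducing parent-row lock contention during parallel loads.

```python
import csv

def sort_children_by_parent(in_path, out_path, parent_col="AccountId"):
    """Pre-sort a child-object CSV by its parent lookup Id so that
    rows referencing the same parent end up grouped together, which
    lessens parent record lock contention among parallel load batches."""
    with open(in_path, newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        # sorted() is stable, so the original order within each parent is kept
        rows = sorted(reader, key=lambda r: r[parent_col])
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
```

Running this over each child-object extract just before the load is a one-line addition to the migration script, and costs far less time than retrying failed batches.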
Full Database Backups are Slow
Lydia is now doubly frustrated. Why? The nightly full data backups that she configured using the Data Loader take a lot of time to complete and sometimes slow down other jobs–every night.
Lydia is attempting to refresh the entire local database every night, doing far more work than necessary. This translates into long-running Bulk API jobs that unnecessarily hold onto asynchronous processing threads and delay other batch work.
Instead, Lydia should be doing nightly incremental data backups–backing up only the data that is new or updated since the previous incremental backup. When doing this, she should filter records using the SystemModstamp field (a standard, indexed field on all objects) rather than the LastModifiedDate field (which is not indexed).
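As a rough sketch of what an incremental extraction query could look like (the function name and field list are hypothetical, not part of any Salesforce API), the nightly job only needs to remember when it last ran and build a SOQL filter on SystemModstamp from that watermark:

```python
from datetime import datetime, timezone

def incremental_backup_query(sobject, fields, last_run):
    """Build a SOQL query that fetches only records created or modified
    since the previous backup run. SystemModstamp is indexed on every
    object, so this filter stays selective as the database grows."""
    # SOQL datetime literals are ISO 8601 in UTC, e.g. 2015-06-01T00:00:00Z
    stamp = last_run.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return (f"SELECT {', '.join(fields)} FROM {sobject} "
            f"WHERE SystemModstamp > {stamp}")
```

After each run, the job would persist a new watermark (for example, the maximum SystemModstamp seen in the extracted rows) to use as `last_run` the next night.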
Users Complain About Slow Reports
Mike’s decision to let sales reps create reports at will is coming back to haunt him. Every day, he gets complaints about reports that take a long time to complete, sometimes timing out and failing altogether. Salesforce Support is great at helping him diagnose the problems with each report as they come in, but it turns into nearly a full-time job to just manage the seemingly constant inflow of report problems.
As Mike works with more and more problem reports, patterns begin to emerge. Most reports are suffering from the same ailment–inefficient reports with non-selective filters on unindexed fields. Even worse, Mike sees that every sales rep is creating virtually identical poorly performing reports, so he’s repeating the same work over and over again to tune each sales rep’s reports.
Mike didn’t properly anticipate the problems with allowing everyone to create reports, nor did he educate XYZ’s users on how to build efficient reports that can scale as the company’s database grows. Additionally, Mike didn’t take the time to consider a great alternative–creating a library of public, controlled, and optimized reports that meet the requirements of XYZ’s sales reps. Fewer reports to tune and maintain, plus high user satisfaction.
Formula Fields Unknowingly Slow Reports
Even after filter tuning, Mike observes that reports including many of the formula fields Vijay built are still inefficient. How does he know? Salesforce Support works through the cases with Mike and notes that such reports have five or more joins in the underlying query plan.
Report builders are including many of the formula fields that Vijay built to surface data from related records dynamically at report runtime. Consequently, Salesforce performs many joins in the underlying query to run the report.
Formula fields are a great tool when used in the proper context, with an understanding of the scalability and response-time implications of their usage. Without that understanding, however, formula fields can come back to bite you.
If Mike and Vijay had studied the formula fields more and understood that these formula fields had the potential to slow down many reports and queries, Vijay might have implemented the fields differently: For example, using a trigger to populate denormalized related fields that would facilitate blazing report/query performance without runtime joins.
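In a Salesforce org, this denormalize-on-write pattern would be an Apex trigger copying or aggregating child data into a plain (indexable) custom field on the parent. As a language-neutral sketch of the idea (all names and the in-memory data model here are hypothetical): whenever child records change, recompute the stored value on just the affected parents, so reports read a precomputed field instead of joining at runtime.

```python
def denormalize_on_write(parents, children, changed_child_ids):
    """Sketch of the trigger pattern: when child rows change, recompute
    an aggregate the reports need (here, a count of open children) and
    store it on the parent, so reads require no runtime joins."""
    # Only parents of the changed children need recomputing
    touched = {children[cid]["parent_id"] for cid in changed_child_ids}
    for pid in touched:
        parents[pid]["open_child_count"] = sum(
            1 for c in children.values()
            if c["parent_id"] == pid and c["status"] == "Open"
        )
    return parents
```

The trade-off is classic denormalization: a little extra work on every write in exchange for fast, join-free reads, which is usually the right trade when reports run far more often than records change.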
Sharing Recalculations Clobber Nightly Backups
The first sharing recalculation is due, so Mike kicks it off. Not only does the recalculation take forever to complete, but Lydia calls Mike to report that the nightly full backup is running even slower than usual. She has no idea why.
Sharing recalculations and Bulk API jobs both draw from the org’s available pool of asynchronous processing threads. This competition causes both jobs to go slower than if they were scheduled to happen apart from one another. Also, if the full backup was changed to a much faster incremental backup, the time requirement for the backup (and thread usage) would be much smaller, allowing more time for the sharing recalculation.
Sales Reps Develop Productivity (and Eyesight) Issues
The sales reps now spend their days manually refreshing Lead list views and staring at the screen to figure out which Leads to call next. Mike took the seemingly easy solution and didn’t consider end-user productivity when choosing the approach to implement. With a more thorough analysis, Mike would have seen that Salesforce has many easy-to-implement workflow solutions, such as workflow rules, for this type of process.
Mike, Vijay, and Lydia thought they were making the proper decisions when implementing Salesforce for XYZ. An overarching reason that they experienced so many problems is that, based on their experience working with on-premises systems, they made assumptions that led to Salesforce performance issues later. They could have considered how one part of the implementation might affect others and tested their design for scalability under planned workloads before going live in production. Lesson learned: A little time planning and testing upfront can save a lot of time addressing technical debt afterward.
The good news is that the resources related to each anti-pattern provide solutions that the team read and implemented. Now XYZ’s implementation performs and scales well, and sales reps are super productive, closing lots of new deals with the new workflow solution.
About the Author
Steve Bobrowski is an Architect Evangelist within the Technical Enablement team of the salesforce.com Customer-Centric Engineering group. The team’s mission is to help customers understand how to implement technically sound Salesforce solutions. Check out all of the resources that this team maintains on the Architect Core Resources page of the Salesforce Developer network.