A common challenge when programming is keeping your code properly organized: to aid readability, to decrease the amount of test setup necessary, and to speed up your ability to deliver new code by making proper reuse of existing code. For all of these things, dependency injection (DI) can be a valuable technique to learn and apply to the code that you're writing. Properly applied, DI makes testing your mission-critical code paths easy and fun, and your code focused and reusable. In this post, we'll review three basic approaches to dependency injection; we'll also look at how you can best use dependency injection on both existing and greenfield projects.
Dependency injection basics
When it comes to dependency injection within the field of computer science, it might be helpful to start with the 20,000-foot view: if your program is composed of objects, then dependency injection is the mechanism by which your objects are composed. Why does that matter? For our particular purposes, one reason it matters has to do with Salesforce's code coverage requirements; instead of writing more tests, we can optimize our code by consolidating similar functionality into single classes. In this way, we reduce both the amount of code we need to write and the amount of code we need to test.
Once you start properly encapsulating your code in separate classes (the partitioning of logic into objects, or classes, being one of the most important concepts in object-oriented programming), you're well on your way toward being able to benefit from dependency injection. Let's review three different possible dependency injection solutions in Apex:
Constructor-based dependency injection
This is the most widely used DI option. Since developers have been encouraged for decades to list constructors at the top of their classes (following the "newspaper" style, or top-down approach to readability), constructor-based dependency injection lets us infer what our class will do based on the classes passed into it.
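Here's a minimal sketch of what that can look like. The business logic (closing out stale Opportunity records) and the names used are purely illustrative:

```
public class OpportunityUpdater {
  // the records this class operates on are handed to it when it's constructed
  private final List<Opportunity> opportunities;

  public OpportunityUpdater(List<Opportunity> opportunities) {
    this.opportunities = opportunities;
  }

  public void closeOutStaleOpportunities() {
    for (Opportunity opp : this.opportunities) {
      // 'Closed Lost' assumes the default Opportunity stage values
      opp.StageName = 'Closed Lost';
    }
    // for now, the updater performs its own DML
    Database.update(this.opportunities);
  }
}
```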
Somebody reading this class for the first time benefits from the dependencies having been a part of the object being constructed; this class has a slim list of responsibilities, and can be easily tested. It's easy to continue down this road. While a class called OpportunityUpdater may make sense in some cases, it might make even more sense for you to have an object associated with DML, which the OpportunityUpdater can then take in.
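A bare-bones sketch of such a DML object might look like this. Only doInsert and doUpdate are shown; delete and undelete methods would follow the same pattern:

```
public class DML {
  public List<SObject> doInsert(List<SObject> recordsToInsert) {
    Database.insert(recordsToInsert);
    return recordsToInsert;
  }

  public List<SObject> doUpdate(List<SObject> recordsToUpdate) {
    Database.update(recordsToUpdate);
    return recordsToUpdate;
  }
}
```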
Note that these methods are not static — they are instance methods. While it can be useful to differentiate between static and instance methods by looking at whether a method interacts with any member variables (if it doesn’t, it can be made static), that doesn’t tell the full story. If we want to pass a class around (using dependency injection) to encapsulate domain-specific behavior, it’s necessary for the methods to be non-static. Classes getting passed a dependency (calling classes, sometimes referred to as “client” classes) can’t call static methods on the specific instance of the class that they’ve been passed — they can only call global or public instance methods.
Now the OpportunityUpdater looks a little different.
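Building on the earlier sketch, the DML dependency simply becomes another constructor argument:

```
public class OpportunityUpdater {
  private final List<Opportunity> opportunities;
  private final DML dml;

  public OpportunityUpdater(List<Opportunity> opportunities, DML dml) {
    this.opportunities = opportunities;
    this.dml = dml;
  }

  public void closeOutStaleOpportunities() {
    for (Opportunity opp : this.opportunities) {
      opp.StageName = 'Closed Lost';
    }
    // the actual database interaction is now delegated to the DML dependency
    this.dml.doUpdate(this.opportunities);
  }
}
```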
At face value, it may seem like we've gone too far. While our OpportunityUpdater is now delegating the actual Database.update operation to DML, we've added an additional dependency to our constructor! Is this really going to benefit us in the long run?
If we left things as they stood, perhaps not. The true power of dependency injection is only partially due to objects being properly encapsulated; the increase in testability we get as a result is the real driving force. In classic object-oriented programming, DML represents an important boundary in the system; it's the moment during which we hand off SObjects to the platform. In other words — it's the proper place to introduce mocking in order to reduce our testing dependencies. Whereas with standard Salesforce code, you might be forced to insert / update / delete / undelete records as required to fulfill various business requirements, with just a little bit of extra work, our new DML object can reduce the test setup and ceremony required to fully test our business logic.
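Here's the same DML sketch from above, now with the class and its methods marked virtual so that a test can override them:

```
public virtual class DML {
  public virtual List<SObject> doInsert(List<SObject> recordsToInsert) {
    Database.insert(recordsToInsert);
    return recordsToInsert;
  }

  public virtual List<SObject> doUpdate(List<SObject> recordsToUpdate) {
    Database.update(recordsToUpdate);
    return recordsToUpdate;
  }
}
```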
Note that in order for us to create override-able methods, it's also necessary for our class to be marked as virtual. Using the virtual keyword is perfect when you have a default implementation in mind but want to give options for how individual methods can be customized. Contrast this with the abstract keyword, which prevents a class from being directly instantiated, but allows for code reuse within subclasses. Using interfaces for a class like this would be considered an anti-pattern: when you only have one concrete implementation of a class, having an interface to represent the public API for a class like DML is an unnecessary abstraction. On the other hand, if you were writing this code with an eye toward packaging, where DML represented the intended basic functionality but there could be many different implementations, using an interface like IDML to represent the public API for this class would be a prudent move.
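If you did go the packaging route, the interface could be as small as the sketch below; the method list simply mirrors the DML example above:

```
public interface IDML {
  List<SObject> doInsert(List<SObject> recordsToInsert);
  List<SObject> doUpdate(List<SObject> recordsToUpdate);
}
```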
Now, when we go to test the OpportunityUpdater, we need only a small mock DML object to completely remove the Salesforce database from our testing equation.
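Here's a sketch of what that test can look like, continuing with the illustrative classes from above (the mock is defined inline to keep the example self-contained):

```
@IsTest
private class OpportunityUpdaterTests {
  // in a real codebase, DMLMock would be its own standalone class,
  // reused by any test that needs to keep the database out of the picture
  private class DMLMock extends DML {
    public List<SObject> updatedRecords = new List<SObject>();

    public override List<SObject> doUpdate(List<SObject> recordsToUpdate) {
      this.updatedRecords.addAll(recordsToUpdate);
      return recordsToUpdate;
    }
  }

  @IsTest
  static void shouldCloseOutStaleOpportunitiesWithoutPerformingDml() {
    DMLMock mock = new DMLMock();
    Opportunity staleOpp = new Opportunity(Name = 'Stale deal', StageName = 'Prospecting');

    new OpportunityUpdater(new List<Opportunity>{ staleOpp }, mock).closeOutStaleOpportunities();

    System.assertEquals(1, mock.updatedRecords.size());
    System.assertEquals('Closed Lost', staleOpp.StageName);
    System.assertEquals(0, Limits.getDmlStatements(), 'No actual DML should have been performed');
  }
}
```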
As noted in the comment above, in a real codebase, DMLMock would be its own standalone class — and it would be used throughout the codebase to validate single-object DML statements without needing to rely on the database. Note that this doesn't absolve us from the responsibility of having true integration tests with actual cross-object DML.
Property-based dependency injection
We can also use public properties to set our dependencies:
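Sticking with the same illustrative OpportunityUpdater, that might look like this:

```
public class OpportunityUpdater {
  // the DML dependency is now exposed as a public property instead of a constructor argument
  public DML DML { get; set; }

  private final List<Opportunity> opportunities;

  public OpportunityUpdater(List<Opportunity> opportunities) {
    this.opportunities = opportunities;
  }

  public void closeOutStaleOpportunities() {
    for (Opportunity opp : this.opportunities) {
      opp.StageName = 'Closed Lost';
    }
    this.DML.doUpdate(this.opportunities);
  }
}
```

Calling code (production or test) is then responsible for assigning the property, for example updater.DML = new DML(), before invoking any behavior.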
In this example, note that we no longer pass our DML instance in through the OpportunityUpdater's constructor. Instead, it is publicly exposed and can be set at any point prior to its first usage within the class. Note that this breaks encapsulation; now objects outside of the OpportunityUpdater know two things:
- There’s a public property called “DML” on the
OpportunityUpdater
, and it has a typeDML
- The
DML
type has methods on it likedoUpdate
This encapsulation violation is why I don't typically recommend property-based dependency injection. Property-based getters and setters have their use cases. They're wonderful for lazily initializing properties that aren't accessed via all code paths on an object, and they're of course great when preparing data transfer objects (DTOs), but for dependency injection, they take a back seat to the constructor-based approach.
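As a quick aside, here's the sort of lazy initialization being referred to; the Profile query is just a stand-in for any lookup you'd rather not perform unless it's actually needed:

```
public class ExpensiveLookups {
  public Map<Id, Profile> profilesById {
    get {
      // the query only runs the first time something asks for this property
      if (profilesById == null) {
        profilesById = new Map<Id, Profile>([SELECT Id, Name FROM Profile]);
      }
      return profilesById;
    }
    private set;
  }
}
```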
@TestVisible dependency injection
Salesforce exposes the @testVisible annotation for properties and other class members; this represents an additional tool in our arsenal when considering ways to inject dependencies into classes. However, @testVisible properties should be used with care, for a variety of reasons, which we'll walk through after a quick sketch of the pattern.
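Here's one way the pattern tends to look, assuming the same illustrative OpportunityUpdater and DML classes from the earlier sketches:

```
public class OpportunityUpdater {
  @TestVisible
  private DML dml;

  private final List<Opportunity> opportunities;

  public OpportunityUpdater(List<Opportunity> opportunities) {
    this.opportunities = opportunities;
  }

  public void closeOutStaleOpportunities() {
    for (Opportunity opp : this.opportunities) {
      opp.StageName = 'Closed Lost';
    }
    this.getDml().doUpdate(this.opportunities);
  }

  private DML getDml() {
    if (this.dml == null) {
      // when a test hasn't swapped in a mock, this class has to decide
      // which concrete DML implementation to fall back to
      this.dml = new DML();
    }
    return this.dml;
  }
}
```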
Using @testVisible to perform dependency injection tends to lead to an uneven balance when it comes to encapsulation. On the one hand, you know absolutely for sure that your backing dml property can only be swapped out in tests, which is great. On the other, it introduces a discrepancy between the constructor and the tests. If, over time, additional setup comes to be done on the dml property (or any @testVisible property being mocked in tests), it becomes possible for these slight deviations between production-level and testing code to introduce bugs.
As far as DRY (or "don't repeat yourself" — one of the adages in software engineering that we try to adhere to as much as possible) is concerned, you can simplify your test setup by creating private static helper methods that take care of setting the dml property prior to returning the initialized OpportunityUpdater in tests.
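Such a helper might be as simple as this, within the test class itself:

```
// inside the test class
private static OpportunityUpdater getUpdater(List<Opportunity> opps, DML dml) {
  OpportunityUpdater updater = new OpportunityUpdater(opps);
  // the assignment is only possible here because of @TestVisible
  updater.dml = dml;
  return updater;
}
```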
It also adds an additional responsibility for the OpportunityUpdater (and any other object like it). Now, the OpportunityUpdater needs to know about the specific type of DML class that it should instantiate in the event that the mock backing variable isn't set. While this may not directly break encapsulation, it's what we would call a "code smell" — in a few places, it might be OK, but the more of this pattern we see, the harder it is to maintain.
One area where @testVisible DI works well is within Apex libraries. If you have only one concrete implementation for a class / method within your library, but you need to internally mock that as a dependency within your own tests, this is an area where @testVisible feels like an OK compromise. You don't break encapsulation to the outside world as a library author, but you can still write blazing fast unit tests.
Applying dependency injection to clean up existing code
When you’re working with existing code, look for repetitive code that can be extracted away into its own code “units” by creating new classes to encapsulate the previously duplicated behavior. Use dependency injection to add the newly encapsulated objects back into the class(es) where the behavior was repeatedly duplicated. Use this method to spur incremental improvements; leave the code cleaner than when you found it, and over the time, the existing quality of the codebase will naturally improve.
If the code that you’re extracting is poorly tested, use this as an opportunity to up your test coverage by writing tests; only perform the refactoring once you’re satisfied with your existing code coverage. This gives you a safety net when it comes time to validating that the changes you’re making to consolidate existing code hasn’t broken anything. If all the tests pass, you can consider writing object-specific test coverage and shifting some of the testing burden away from the area where the DI is occurring to the newly created object that you’re using DI on.
As a general rule of thumb, if you’re inserting more than three “layers” of Salesforce records (let’s say Task → Contact → Account) within the setup for your tests, it’s likely that your object under test is doing too much and that it can benefit from receiving dependencies that take care of some of its responsibilities.
How to properly design a greenfield project to use dependency injection
In a brand new project, there are many important decisions to make, such as what your sharing model will be and what kind of trigger framework you'll end up using. As you design your system, dependency injection can be a very strong tool in your arsenal if you remember to properly institute boundaries between the different areas of your system.
As an example, proper encapsulation of objects can lead to several distinct domains:
- Queries
- DML
- API calls
- Trigger management
- Application-specific behavior (like lead conversion, case management, etc.)
Focus on unit tests to cover key functionality, with end-to-end tests when necessary (particularly for cross-object updates). If done right, this means that the bulk of the tests for your trigger-based code will be integration tests with true DML being performed; likewise with Schedulable classes. DI can help you to properly "feed" mocked dependencies into your objects in the vast majority of other use cases.
Proper usage of DI forces you to "think in objects;" it keeps your code units small, testable, and easily understandable. The benefits of this can't be overstated. When classes balloon in responsibility, they tend to feature two things:
- Repetition
- Development difficulties when introducing new functionality
As your greenfield system increases in depth, you can avoid it also scaling in complexity by using dependency injection to keep your code reuse high.
Conclusion
Speed up your tests and overall delivery time by properly isolating code — and dependency injection is a tried and true method that can help you to do so. Note that Salesforce as a platform offers more advanced tools, like the Stub API, that can also aid in your attempt to reduce testing complexity. The Stub API can only replace methods that are publicly available on your objects, though, whereas DI can be used to replace complex functionality in the inner private methods of your classes with mocked behavior.
Remember that when you’re working with existing or legacy code, the “extract and refactor” methodology that DI represents enables you to make small, incremental changes that over time can completely refurbish older code into something that’s better understood, better tested, and faster!
Lastly — dependency injection is a topic with a huge surface area. I've focused in this article mainly on how proper usage of DI can help you to write well-tested and easily mockable code. Philippe Ozil has spent a considerable amount of time in a prior post talking about DI from a variety of other angles, such as how inversion of control in combination with interfaces helps to write loosely coupled code: perfect for packaging and for providing declarative options to select which code to run.
About the author
James Simone is a Senior Member of Technical Staff at Salesforce, and he has been developing on the Salesforce Platform since 2015. He’s been blogging since late 2019 on the subject of Apex, Flow, and more in The Joys Of Apex. When not writing code, he enjoys rock climbing, sourdough bread baking, running with his dog, and more. You can follow along on his life adventures at She & Jim.