This is the second part of a three-part blog series about testing.
In our last installment, we talked about why we test. Today, we’ll talk about how to write effective tests — specifically, effective Apex tests. What do we mean by effective? To be effective, a test needs to be consistent, definitive and descriptive.
Effective tests consistently pass unless the underlying code has broken. When an effective test does fail, you can clearly see from the description what went wrong. They establish trust not only in your code, but in the value of the tests themselves.
Four common traits make a test effective. First, an effective test creates its own data. Creating our own test data drives consistency by ensuring that the only thing that can make our tests fail is a change to the code. Second, effective tests isolate the code being tested from test setup with platform tools. In other words, effective tests are written in a way that ensures creating data for the test doesn’t count against governor limits. Wrapping the code under test in Test.startTest() and Test.stopTest() method calls provides this isolation. Isolating your tested code this way also helps drive consistent tests; without this isolation, a test may fail because the code and the test setup together exceeded a platform governor limit. Third, effective tests make liberal use of assertion methods. Assertions make tests definitive and, when done right, descriptive. Finally, effective tests exercise not only the predicted code path, but exceptions and permissions as well.
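Put together, the skeleton of such a test looks something like this. This is a minimal sketch, not code from the post; the object, query and assertion values are placeholders:

```apex
@isTest
static void exampleTestAnatomy() {
    // 1. Create our own data (here inline; later we'll use @testSetup).
    Account a = new Account(Name = 'Test Account');
    insert a;

    // 2. Isolate the code under test. startTest()/stopTest() give the
    //    code between them a fresh set of governor limits, so the DML
    //    above doesn't count against it.
    Test.startTest();
    Account queried = [SELECT Id, Name FROM Account WHERE Id = :a.Id];
    Test.stopTest();

    // 3. Assert definitively, with a descriptive failure message.
    System.assertEquals('Test Account', queried.Name,
        'Expected to read back the account we created');
}
```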
Let’s look at these traits in practice, alongside some Apex Code.
Example class: IEQOAccount
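The original class listing doesn’t survive in this copy of the post. As a stand-in, here’s a minimal sketch consistent with the methods and behavior discussed below; everything beyond getRoundedAvgPriceOfOpps() and IEQOException (field names, messages, rounding) is an assumption:

```apex
// Hypothetical reconstruction -- only getRoundedAvgPriceOfOpps() and
// IEQOException are named in the post; the rest is illustrative.
public with sharing class IEQOAccount {
    public class IEQOException extends Exception {}

    private Account acct;

    public IEQOAccount(Account acct) {
        if (acct == null || acct.Id == null) {
            throw new IEQOException('IEQOAccount requires a saved Account');
        }
        this.acct = acct;
    }

    // Average the Amounts of this account's opportunities. Because the
    // class is declared "with sharing", the query respects the running
    // user's record visibility.
    public Decimal getRoundedAvgPriceOfOpps() {
        AggregateResult result = [
            SELECT AVG(Amount) avgAmount
            FROM Opportunity
            WHERE AccountId = :acct.Id
        ];
        Decimal avg = (Decimal) result.get('avgAmount');
        if (avg == null) {
            throw new IEQOException('No opportunities to average');
        }
        return avg.setScale(0, System.RoundingMode.HALF_UP);
    }
}
```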
Testing our IEQOAccount class
A Positive Test passes when we pass known good data into a unit of code, and receive the expected result.
A Negative Test passes when we pass known bad data into a unit of code and receive the expected error condition or exception.
Permissions Tests are tests where a unit of code is executed with different user profiles or different permission sets applied. They pass or fail, depending on whether the user can access the data.
The positive test should assert the expected result for correct input. We also need a negative test, one that causes the method to throw an IEQOException. Lastly, we need a test that succeeds, but reflects only the average of opportunities a given user can see. Let’s look at how to build these.
Creating the data
Because this is a wrapper on the standard Account object, this class requires an account to exist. Additionally, our getRoundedAvgPriceOfOpps() method requires opportunities to exist as well. We could create an account and a few opportunities in the body of each individual test method. However, there’s a better way.
There are actually a couple of ways to create our data. Here we’ll look at the basic idea, but in our next post, we’ll discuss advanced data creation for tests! We can create a method annotated with @testSetup. @testSetup methods run before each test method. (Note: While you can have more than one method annotated with @testSetup, there’s no guarantee of the order they’ll run in.)
Let’s create a @testSetup method that establishes baseline test data for us.
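The original listing isn’t included in this copy of the post, but based on the description that follows (five accounts, five opportunities each), it would look something like this sketch. The class name, field values and opportunity amounts are assumptions:

```apex
@isTest
private class IEQOAccountTests {
    @testSetup
    static void createTestData() {
        // Five accounts...
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < 5; i++) {
            accounts.add(new Account(Name = 'TestAccount' + i));
        }
        insert accounts;

        // ...each with five opportunities whose amounts are greater than 0.
        List<Opportunity> opps = new List<Opportunity>();
        for (Account a : accounts) {
            for (Integer j = 0; j < 5; j++) {
                opps.add(new Opportunity(
                    Name = a.Name + ' Opp ' + j,
                    AccountId = a.Id,
                    StageName = 'Prospecting',
                    CloseDate = Date.today().addDays(30),
                    Amount = 1000 * (j + 1)
                ));
            }
        }
        insert opps;
    }
}
```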
This @testSetup method gives us five accounts, each with five opportunities. This won’t suffice for all our tests, but it’s a good start and will remove a lot of boilerplate code from our tests. Now that we’ve established some test data, we can start fleshing out our tests.
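The positive test itself isn’t reproduced here; a sketch consistent with the walkthrough below might look like the following. The expected average of 3000 assumes setup amounts of 1,000 through 5,000 per account, which is an assumption:

```apex
@isTest
static void testGetRoundedAvgPriceOfOppsPositive() {
    List<IEQOAccount> ieqoAccounts = new List<IEQOAccount>();
    for (Account a : [SELECT Id, Name FROM Account]) {
        ieqoAccounts.add(new IEQOAccount(a));
    }
    // Guard assertion: proves our setup data wrapped successfully
    // before we exercise the method under test.
    System.assertEquals(5, ieqoAccounts.size(),
        'Expected five wrapped accounts from @testSetup data');

    Test.startTest();
    for (IEQOAccount ieqo : ieqoAccounts) {
        // Amounts of 1000..5000 average to 3000.
        System.assertEquals(3000, ieqo.getRoundedAvgPriceOfOpps(),
            'Expected the rounded average of this account\'s opportunities');
    }
    Test.stopTest();
}
```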
Looking at this test, it’s broken into two general sections: before and after calling Test.startTest(). We start by hydrating standard accounts into IEQOAccounts. Note the assertion. This kind of assertion makes sure we’re able to successfully wrap our accounts. I call these “guard assertions.” They serve as a guard against a fundamental constructor or schema change. This assertion ensures our data makes sense for this test.
After calling startTest(), we loop over each of our IEQOAccount objects. Inside this loop we call the method we’re testing. In this case, I’ve wrapped the method call in an assertEquals assertion, which takes two mandatory parameters and one optional parameter. The required parameters come first and are the two values to compare for equality. The final, optional parameter is a “friendly message” to the developer, i.e. you. The friendly message lets you know which assertion failed and why (if, of course, you put one in). This is our easiest unit test: we put valid information in, and our method returns a predictable result.
There are three key things to keep in mind to ensure positive tests are effective:
1. Ensuring we create our own valid test data
2. Isolating our executed method from any data setup and sanity checking. In this case, using Test.startTest() and Test.stopTest()
3. Making definitive, descriptive assertions about the results
But we can’t stop here and call our class tested! If you run code coverage on this, you’ll find that one section isn’t covered.
Testing exceptions may seem confusing at first, but it’s straightforward. Like positive testing, we’ll need to generate our own test data. Additionally, we need to massage our data so that the code throws the exception. As our class is currently written, that means ensuring returnValue is 0. Since our testSetup method generates data with values greater than 0, we need to modify the test data. There are two ways to do that: either we delete the opportunities, or we edit the amount values. Deleting the opportunities leads to an interesting discovery — the class will fail to return an average when there are zero opportunities, which causes the calculation to fail. Looks like we’ll need to move our exception clause! Instead of checking the return value, we actually care that the average isn’t null. This is just one of the many reasons we test: to discover edge cases we may not have initially thought of.
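A sketch of such a negative test, following the delete-the-opportunities approach, might look like this. It assumes IEQOException is an inner class of IEQOAccount, which is not confirmed by the post:

```apex
@isTest
static void testGetRoundedAvgPriceOfOppsNegative() {
    Account acct = [SELECT Id, Name FROM Account LIMIT 1];
    // Remove this account's opportunities so no average can be calculated.
    delete [SELECT Id FROM Opportunity WHERE AccountId = :acct.Id];
    IEQOAccount ieqo = new IEQOAccount(acct);

    Test.startTest();
    Boolean caught = false;
    try {
        ieqo.getRoundedAvgPriceOfOpps();
        System.assert(false, 'Expected an IEQOException to be thrown');
    } catch (IEQOAccount.IEQOException e) {
        caught = true;
    }
    Test.stopTest();
    System.assert(caught, 'Expected to catch an IEQOException');
}
```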
Our final test method is the most complex. At our fictional company, Ignoti Et Quasi Oculti, our org is set to keep opportunities private. Susan cannot see the opportunities that Bob owns, and vice versa. Accounts, on the other hand, are public read/write. We need to ensure that a page displaying the account’s rounded average reflects only the current user’s opportunities. This requires us to execute our method as a user who owns a subset of opportunities. We do this with the System.runAs(user) method. We’ll need a user, which, like our data, we need to create during the test.
Let’s dive into this test.
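The test under discussion isn’t reproduced in this copy; here’s a sketch consistent with the walkthrough that follows. The profile name, exception message and expected failure count are assumptions, and it presumes the org’s private opportunity sharing described above:

```apex
@isTest
static void testGetRoundedAvgPriceOfOppsAsRestrictedUser() {
    // First section: create a role and a user at runtime.
    UserRole r = new UserRole(Name = 'TestRole');
    insert r;
    Profile p = [SELECT Id FROM Profile WHERE Name = 'Standard User' LIMIT 1];
    User u = new User(
        Alias = 'tuser', Email = 'tuser@example.com',
        EmailEncodingKey = 'UTF-8', LastName = 'Testing',
        LanguageLocaleKey = 'en_US', LocaleSidKey = 'en_US',
        ProfileId = p.Id, UserRoleId = r.Id,
        TimeZoneSidKey = 'America/Los_Angeles',
        Username = 'tuser' + System.currentTimeMillis() + '@example.com'
    );
    insert u;

    Integer failures = 0;
    System.runAs(u) {
        Test.startTest();
        for (Account a : [SELECT Id, Name FROM Account]) {
            try {
                // Positive assertion inside the try block.
                Decimal avg = new IEQOAccount(a).getRoundedAvgPriceOfOpps();
                System.assertNotEquals(null, avg,
                    'Expected a non-null average for visible opportunities');
            } catch (IEQOAccount.IEQOException e) {
                // Check the exception's properties, not just its type.
                System.assert(e.getMessage().contains('No opportunities'),
                    'Exception thrown for an unexpected reason');
                failures++;
            }
        }
        Test.stopTest();
    }
    // All five accounts' opportunities belong to another user, so we
    // expect exactly one captured failure per account.
    System.assertEquals(5, failures,
        'Expected five accounts with no visible opportunities');
}
```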
The first section of our test creates a role and a user that we’ll use to complete our testing. Everything inside the System.runAs() block executes as our created user. Remember, our @testSetup method created five accounts with opportunities, but it created them as the default system user. Those accounts will be visible to the test; their associated opportunities, however, will not be.
Our test attempts to generate the getRoundedAvgPriceOfOpps() result for all accounts. To ensure that we’re both failing and succeeding properly, we need to follow the negative test pattern. However, we’ll need to add a positive assertion inside the try block. This risks causing an exception if the assertion fails, which is why it’s important to catch specific subclasses of Exception, rather than just Exception. Additionally, remember to check the properties of the exception. In this case, I’m comparing the message to ensure the exception has been thrown for the reason I expect. Finally, when writing this style of test, remember to count how many times we captured a failure. That way we can check that we had no more failures than we expected.
You can do it!
Testing Apex can seem an imposing burden. Breaking tests down into positive, negative and permission patterns helps structure how and what you write. This can reduce the “cognitive load” of testing and help you realize the full benefits of testing.
But don’t take my word for it — test your code! Virtually all orgs have some code lying around with just-enough code coverage for deployment. Find that code and whiteboard out which test patterns you currently have. Most of us have positive tests. But are they using test-generated data? Do they have meaningful assertions? Is the code tested in isolation from your test’s setup? Write some additional negative and permission tests for your code.
In the coming weeks, we’ll host a live pair-coding session focused on writing tests. Have you found a bit of code that’s difficult to test? Wonder how to write a negative test for a class you’re developing? Contact us on Twitter @SalesforceDevs with code examples, and we’ll get to as many of them as possible during the pair-coding session.
Keep the conversation going on Twitter with #MonthOfTesting and stay tuned next week for the next and final post in our series.
About the author
Kevin Poorman works as a Senior Developer Evangelist at Salesforce. He focuses on Testing, IoT, Mobile, and Integrations on the Lightning Platform. You can pester him on Twitter @codefriar.