The New Apex Queueable Interface

The new Queueable interface provides you with more tools for writing asynchronous Apex code. Here are more details on how it works.

Here at Apex, we like to do things asynchronously. For example, I started this blog post before Dreamforce, but finished it after the conference craziness had died down. In keeping with the asynchronous spirit, I hope you will read this post now at the office, and read it again later at home.

Hear Me Now, Believe Me Later

<spiel> If you don’t need your results “now Now NOW!”, we can relax our multi-tenancy restrictions and let you do more with Apex. Asynchronous processing gives your users a better experience for long-running processes. Asynchronous transactions allow us to throttle your jobs and do our best to smooth out the peak load spikes we see every day. </spiel>

We have provided you with several tools to run your code in an asynchronous manner. This has convinced many of you to adopt them! But you needed more…oh yes, you needed more. So we are giving you the Queueable interface.

The easiest way to fully explain the Queueable interface is to give you the back-story on its inception.

When a Batch Is Not a Batch

Throughout my time as Apex PM, I have noticed a disturbing trend. I’ve noticed several, actually, but for the purposes of this paragraph let’s assume I have only noticed one. On the Apex Jobs pages of many orgs, I see Batch Apex jobs with one batch, and jobs with zero batches. (Yes, much like I quietly peek in my daughter’s room while she sleeps, I look at your Apex job statistics. I want to make sure you’re ok!) In fact, somewhere around 90% of all batch jobs have zero or one batch.

Let’s look at how the dictionary defines “batch”:

a ***group*** of jobs (as programs) that are submitted for processing on a computer and whose results are obtained at a later time

Notice the word “group” in there? Probably, since I starred the word. I starred it because it’s the key word in the sentence. There were about 20 definitions for “group” in the dictionary; rather than pasting them all here, I’ll simply say that they all had the concept of “two or more”.

When I looked at the pattern of how Batch Apex was being used, it didn’t coincide with the definition of batch. I was intrigued. I dug deeper.

Back to the @future

Why were these jobs not being run as @future? @future seems tailor-made for this non-batch use case. It is meant for single-shot processing. You can enqueue thousands of @future jobs, versus only five for Batch Apex. @future also nearly always runs faster than batch.

That last point is key. Each batch job has, at minimum, two asynchronous messages that go through the queue. You need one for the start method, and one for the execute method. @future is always a single message. You are looking at double latency for Batch Apex in the best case. Sure, if your org already has thousands of @future messages in the queue and nothing in Batch Apex, the batch job will get to the front faster. This wasn’t the pattern I was seeing, though…batch was slower, and people were still using it in favor of @future. Between these two messages, we need to store the record IDs that you will iterate over, and there is overhead in our storing that query cursor or iterator between the start and execute. It’s just a lot of overhead for no value, in the non-batch case.

After surveying some of you in the community, the picture began to clear up a bit. There were a few things that @future did not provide. This was pushing people to choose the slower batch system for tasks that didn’t need no stinking batches. (Sorry; I had to make that joke at SOME point.)

First drawback: primitive arguments. @future requires that its arguments be primitives, which means reconstructing your data structure once the method is called. Batch is a class implementing an interface, so we serialize your data structure and rehydrate it for you at run-time. This alone seems to have sent a lot of people towards the batch framework.
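To make the contrast concrete, here is a sketch of that workaround (the class and method names are hypothetical): because @future only accepts primitives, you pass record IDs and re-query inside the method to rebuild the state you actually wanted.

```apex
public class FutureExample {
    // @future methods must be static and can only take primitives
    // (or collections of primitives), so we pass IDs instead of records.
    @future
    public static void processAccounts(List<Id> accountIds) {
        // Reconstruct the data structure by re-querying inside the method
        List<Account> accounts =
            [SELECT Id, Name FROM Account WHERE Id IN :accountIds];
        // do stuff with accounts
    }
}
```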

Second drawback: difficult access to job ID. The executeBatch method returns a jobID, while calling an @future job does not give you the ID of the related job. When people build a user interface and wish to see when a job is done, they need that ID. You can obtain the @future ID by querying, but it’s not nearly as direct as getting the ID in response to your call.
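In code, the difference looks something like this sketch (MyBatchClass and FutureExample are hypothetical stand-ins): executeBatch hands the ID back, while @future leaves you digging through AsyncApexJob.

```apex
// Batch Apex: the job ID comes back directly from the call.
Id batchJobId = Database.executeBatch(new MyBatchClass());

// @future: the call returns nothing, so you have to go find the job.
FutureExample.processAccounts(accountIds);
AsyncApexJob futureJob =
    [SELECT Id, Status FROM AsyncApexJob
     WHERE JobType = 'Future' AND MethodName = 'processAccounts'
     ORDER BY CreatedDate DESC LIMIT 1];
```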

Third drawback: no chaining. We allowed Batch Apex to call itself a couple of years ago, which made many of you shout with happiness. It also sent batch-free jobs with a chaining requirement into Batch Apex. The law of unintended consequences strikes again.

Meet Me in the Middle

We wanted to provide you something with the features of Batch Apex that you liked, but without the overhead of a batch job. We wanted to provide you with something that behaved like @future, but without some of the drawbacks that pattern holds.

We created Queueable.

The Queueable interface behaves a lot like the Batchable interface. It gives you a class structure that we serialize for you, so you can carry more than just primitive state. You start a job with a single method call, System.enqueueJob(), which returns the job ID to you. At the same time, it requires you to implement neither a start method nor a finish method; it’s all execute, and nothing more.
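A minimal sketch (the class name is mine, not a platform requirement): the class holds non-primitive state, implements a single execute method, and the caller gets the job ID straight back from System.enqueueJob.

```apex
public class MyQueueableJob implements Queueable {
    // Non-primitive member state is serialized and rehydrated for you
    private List<Account> accounts;

    public MyQueueableJob(List<Account> accounts) {
        this.accounts = accounts;
    }

    // No start, no finish: just execute
    public void execute(QueueableContext context) {
        for (Account a : accounts) {
            a.Description = 'Processed asynchronously';
        }
        update accounts;
    }
}
```

Enqueue it, and you have the job ID immediately:

```apex
Id jobId = System.enqueueJob(new MyQueueableJob(someAccounts));
```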

To Infinity, and Beyond!

What about chaining? We will support calling Queueable from a Queueable method. For the first release, Winter ’15, you can only go into the stack two levels, so you can’t call it on a persistent loop. Starting with Spring ’15, you will be able to chain Queueable jobs forever. We will put some new restrictions in place, because forever is a long time.
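Chaining, then, is just a Queueable enqueuing another Queueable from its own execute method. A sketch, with a hypothetical stop condition (since forever really is a long time):

```apex
public class ChainedJob implements Queueable {
    private Integer remaining;

    public ChainedJob(Integer remaining) {
        this.remaining = remaining;
    }

    public void execute(QueueableContext context) {
        // do one unit of work...

        // ...then re-enqueue yourself until the work runs out
        if (remaining > 1) {
            System.enqueueJob(new ChainedJob(remaining - 1));
        }
    }
}
```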

The primary new item to know about is the increasing backoff for re-queuing a job. Each job will have a delay before it is enqueued again. The time will climb from a one-second delay to a one-minute delay, and it will continue at one minute forever. This means your logic will be able to run as quickly as once a minute. This should be sufficiently fast for your use cases; as it is, we can sometimes see delays in the queue of several minutes. I have to think that the world can wait sixty seconds for nearly anything in forever-asynchronous mode.

As an incentive to you to do the right thing, chaining in Queueable will be faster than Batch Apex. The same backoff will be in place for Batch Apex, but it will top out at four minutes. If you are chaining a finite series of jobs together, you won’t see much difference; if you are looping through unknown jobs forever, you will be doing so 4x slower in Batch than you would in Queueable.

The other major restriction on chaining is one-async-call-per-async-job. If each Queueable job made two Queueable calls, our queue would grow quickly to the point that it would start affecting gravity and would eventually consume the data center. That would be sub-optimal. While you can spawn many Queueable jobs in your initial transaction, each subsequent job will only be able to replace itself in the queue.

Suicidal Scheduling

While we’re on the topic of chaining, I’d like to describe another disturbing trend, one that I call “suicidal scheduling”. Suicidal scheduling is where a scheduled job enqueues another scheduled job and then aborts itself. This allows these jobs to be chained.
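The pattern looks something like this sketch (class name mine): the scheduled job schedules its successor and then aborts its own CronTrigger.

```apex
public class SuicidalScheduler implements Schedulable {
    public void execute(SchedulableContext sc) {
        // do stuff

        // Schedule the next run one minute from now...
        Datetime next = Datetime.now().addMinutes(1);
        String cron = next.second() + ' ' + next.minute() + ' ' + next.hour()
            + ' ' + next.day() + ' ' + next.month() + ' ? ' + next.year();
        System.schedule('NextRun-' + next.getTime(), cron,
            new SuicidalScheduler());

        // ...and abort the current trigger so it never shows up again
        System.abortJob(sc.getTriggerId());
    }
}
```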

Try stopping this job if you want to. You can’t. It goes too quickly to be aborted. By the time you have found the job ID, the job has already run, enqueued another, and aborted itself. It’s like a virus, only you wrote it yourself, instead of unintentionally downloading it with the FREE NO ADS version of Candy Crush Saga.

We will not block this approach, since it functions today. We will, however, slow it down. Any call to System.schedule inside of a scheduled job will run no sooner than five minutes later. Ostensibly, the job is a “scheduled job”, which means it is going to repeat itself lots of times, so five minutes shouldn’t be a major issue. If it is set up simply to kill itself, throughput is going to degrade. That is good for everyone else on the pod.

Make the Switch!

The Queueable interface is there for you to enjoy. I cannot make you use it; I can only tell you why you should and give you incentive to do so. I could also (switch to Queueable!) use very basic QUEUEABLE subliminal messaging, queueable Queueable.

Existing non-batch Batch Apex can be converted to Queueable. Suicidal scheduling can be easily converted to Queueable. The changes are pretty simple; the same method call (execute) is present in both. Here is a scheduled class:

public class ScheduledClass implements Schedulable {
    public void execute(SchedulableContext SC) {
        //do stuff
    }
}

…and here is the corresponding Queueable class:

public class QueueableClass implements Queueable {
    public void execute(QueueableContext SC) {
        //do stuff
    }
}

If you didn’t notice any difference at first glance, it’s because they’re really that similar. If you are doing batch-less jobs, Batch Apex converts almost as easily. You will need to use your query result directly rather than iterating over “scope”. It should be a pretty quick change.
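As a rough before-and-after (both classes hypothetical), a single-batch job collapses into a Queueable that simply runs its query inside execute:

```apex
// Before: a "batch" job that never actually batched
public class OldSingleBatchJob implements Database.Batchable<sObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id, Name FROM Account');
    }
    public void execute(Database.BatchableContext bc, List<sObject> scope) {
        // do stuff with scope
    }
    public void finish(Database.BatchableContext bc) {}
}

// After: the same work as a Queueable; query directly, no scope, no finish
public class DirectQueueableJob implements Queueable {
    public void execute(QueueableContext context) {
        List<Account> accounts = [SELECT Id, Name FROM Account];
        // do stuff with accounts
    }
}
```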

This new interface should serve many of the use cases that you’ve been implementing with Scheduled and Batch Apex. Your existing code will still run, but I encourage you to make use of the new interface to make your implementation as efficient as possible.

(Queueable.)

Published
October 23, 2014
