The New Apex Queueable Interface

Here at Apex, we like to do things asynchronously. For example, I started this blog post before Dreamforce, but finished it after the conference craziness had died down. In keeping with the asynchronous spirit, I hope you will read this post now at the office, and read it again later at home.

Hear Me Now, Believe Me Later

<spiel> If you don’t need your results “now Now NOW!”, we can relax our multi-tenancy restrictions and let you do more with Apex. Asynchronous processing gives your users a better experience for long-running processes. Asynchronous transactions allow us to throttle your jobs and do our best to smooth out the peak load spikes we see every day. </spiel>

We have provided you with several tools to run your code asynchronously, and many of you have adopted them. But you needed more…oh yes, you needed more. So we are giving you the Queueable interface.

The easiest way to fully explain the Queueable interface is to give you the back-story on its inception.

When a Batch Is Not a Batch

Throughout my time as Apex PM, I have noticed a disturbing trend. I’ve noticed several, actually, but for the purposes of this paragraph let’s assume I have only noticed one. On the Apex Jobs pages of many orgs, I see Batch Apex jobs with one batch, and jobs with zero batches. (Yes, much like I quietly peek in my daughter’s room while she sleeps, I look at your Apex job statistics. I want to make sure you’re ok!) In fact, somewhere around 90% of all batch jobs have zero or one batch.

Let’s look at how the dictionary defines “batch”:

a ***group*** of jobs (as programs) that are submitted for processing on a computer and whose results are obtained at a later time

Notice the word “group” in there? Probably, since I starred the word. I starred it because it’s the key word in the sentence. There were about 20 definitions for “group” in the dictionary; rather than pasting them all here, I’ll simply say that they all had the concept of “two or more”.

When I looked at the pattern of how Batch Apex was being used, it didn’t coincide with the definition of batch. I was intrigued. I dug deeper.

Back to the @future

Why were these jobs not being run as @future? @future seems tailor-made for this non-batch use case. It is meant for single-shot processing. You can enqueue thousands of @future jobs, versus only five concurrent jobs for Batch Apex. @future also nearly always runs faster than batch.

That last point is key. Each batch job puts at least two asynchronous messages through the queue: one for the start method and one for the execute method. @future is always a single message, so you are looking at double the latency for Batch Apex in the best case. Sure, if your org already has thousands of @future messages in the queue and nothing in Batch Apex, the batch job will reach the front faster. That wasn't the pattern I was seeing, though: batch was slower, and people were still using it in favor of @future. Between those two messages, we also need to store the record IDs that you will iterate over, and there is overhead in persisting that query cursor or iterator between start and execute. In the non-batch case, it's a lot of overhead for no value.

After surveying some of you in the community, the picture began to clear up a bit. There were a few things that @future did not provide. This was pushing people to choose the slower batch system for tasks that didn’t need no stinking batches. (Sorry; I had to make that joke at SOME point.)

First drawback: primitive arguments. @future requires the arguments be primitives, which means reconstructing a structure once the method is called. Batch is a class implementing an interface, so we serialize your data structure and rehydrate it for you at run-time. This alone seems to have sent a lot of people towards the batch framework.
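To make the first drawback concrete, here is a minimal sketch of the flatten-and-rebuild dance @future forces on you; the method name and query are invented for the example:

```apex
public class FutureExample {
    // @future accepts only primitives (or collections of primitives),
    // so sObjects must be flattened to IDs before the call...
    @future
    public static void processAccounts(List<Id> accountIds) {
        // ...and rebuilt with a fresh query once the method actually runs.
        List<Account> accounts = [SELECT Id, Name FROM Account
                                  WHERE Id IN :accountIds];
        // do stuff with the rehydrated records
    }
}
```

With a class-based interface like Batchable (or Queueable, below), that reconstruction step disappears because we serialize the whole object for you.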

Second drawback: difficult access to the job ID. Database.executeBatch returns a job ID, while calling an @future method does not give you the ID of the related job. When people build a user interface and wish to see when a job is done, they need that ID. You can obtain the @future job ID by querying, but it’s not nearly as direct as getting the ID in response to your call.
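For comparison, here is a sketch of the indirect lookup an @future caller is stuck with; the MethodName value is invented for the example:

```apex
// Database.executeBatch hands you the ID directly:
// Id batchJobId = Database.executeBatch(new MyBatchClass());

// For @future, you have to go fishing in AsyncApexJob after the fact:
AsyncApexJob job = [SELECT Id, Status, ExtendedStatus
                    FROM AsyncApexJob
                    WHERE JobType = 'Future' AND MethodName = 'processAccounts'
                    ORDER BY CreatedDate DESC
                    LIMIT 1];
```

Guessing which row is "your" job from CreatedDate is exactly the kind of indirection that sent people to Batch Apex.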

Third drawback: no chaining. We allowed Batch Apex to call itself a couple of years ago, which made many of you shout with happiness. It also sent batch-free jobs with a chaining requirement into Batch Apex. The law of unintended consequences strikes again.

Meet Me in the Middle

We wanted to provide you something with the features of Batch Apex that you liked, but without the overhead of a batch job. We wanted to provide you with something that behaved like @future, but without some of the drawbacks that pattern holds.

We created Queueable.

The Queueable interface behaves a lot like the Batchable interface. It gives you a class structure that we can serialize for you, and you can serialize more than just primitive arguments. It is called by a method, System.enqueueJob(), which returns a job ID to you. At the same time, it requires you to implement neither a start method nor a finish method; it’s all execute, and nothing more.
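Putting that together, a minimal Queueable sketch (class and field names invented for illustration) looks like this:

```apex
public class AccountProcessor implements Queueable {
    // Non-primitive state: serialized when you enqueue, rehydrated at run time
    private List<Account> accounts;

    public AccountProcessor(List<Account> accounts) {
        this.accounts = accounts;
    }

    public void execute(QueueableContext context) {
        for (Account a : accounts) {
            a.Description = 'Processed asynchronously';
        }
        update accounts;
    }
}
```

You would start it with Id jobId = System.enqueueJob(new AccountProcessor(accounts)); no start method, no finish method, and the job ID comes straight back for your UI to poll.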

To Infinity, and Beyond!

What about chaining? We will support calling Queueable from a Queueable method. For the first release, Winter ’15, you can only go two levels deep in the chain, so you can’t run it in a persistent loop. Starting with Spring ’15, you will be able to chain Queueable jobs forever. We will put some new restrictions in place, because forever is a long time.

The primary new item to know about is the increasing backoff for re-queuing a job. Each job will have a delay before it is enqueued again. The time will climb from a one-second delay to a one-minute delay, and it will continue at one minute forever. This means your logic will be able to run as quickly as once a minute. This should be sufficiently fast for your use cases; as it is, we can sometimes see delays in the queue of several minutes. I have to think that the world can wait sixty seconds for nearly anything in forever-asynchronous mode.

As an incentive to you to do the right thing, chaining in Queueable will be faster than Batch Apex. The same backoff will be in place for Batch Apex, but it will top out at four minutes. If you are chaining a finite series of jobs together, you won’t see much difference; if you are looping through unknown jobs forever, you will be doing so 4x slower in Batch than you would in Queueable.

The other major restriction on chaining is one-async-call-per-async-job. If each Queueable job made two Queueable calls, our queue would grow quickly to the point that it would start affecting gravity and would eventually consume the data center. That would be sub-optimal. While you can spawn many Queueable jobs in your initial transaction, each subsequent job will only be able to replace itself in the queue.
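A sketch of what a chain might look like under those rules; the step counter and its limit are invented for the example:

```apex
public class ChainedJob implements Queueable {
    private Integer step;

    public ChainedJob(Integer step) {
        this.step = step;
    }

    public void execute(QueueableContext context) {
        // do this step's work here

        if (step < 5) {
            // One async call per async job: the job replaces itself in the queue
            System.enqueueJob(new ChainedJob(step + 1));
        }
    }
}
```

Kick it off with System.enqueueJob(new ChainedJob(0)) and each job hands off to exactly one successor.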

Suicidal Scheduling

While we’re on the topic of chaining, I’d like to describe another disturbing trend, one that I call “suicidal scheduling”. Suicidal scheduling is where a scheduled job enqueues another scheduled job and then aborts itself. This allows these jobs to be chained.
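As a minimal sketch of the anti-pattern (class and job names invented for illustration), it looks something like this:

```apex
public class SelfChainingSchedule implements Schedulable {
    public void execute(SchedulableContext sc) {
        // ...do the real work here...

        // Build a cron string (s m h d M ? y) for one minute from now
        Datetime next = Datetime.now().addMinutes(1);
        String cron = next.second() + ' ' + next.minute() + ' ' + next.hour()
            + ' ' + next.day() + ' ' + next.month() + ' ? ' + next.year();

        // Enqueue the next run, then abort this one so it never shows as pending
        System.schedule('SelfChain-' + next.getTime(), cron,
                        new SelfChainingSchedule());
        System.abortJob(sc.getTriggerId());
    }
}
```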

Try stopping this job if you want to. You can’t. It goes too quickly to be aborted. By the time you have found the job ID, the job has already run, enqueued another, and aborted itself. It’s like a virus, only you wrote it yourself, instead of unintentionally downloading it with the FREE NO ADS version of Candy Crush Saga.

We will not block this approach, since it functions today. We will, however, slow it down. Any call to System.schedule inside of a scheduled job will run no sooner than five minutes later. Ostensibly, the job is a “scheduled job”, which means it is going to repeat itself lots of times, so five minutes shouldn’t be a major issue. If it is set up simply to kill itself, throughput is going to degrade. That is good for everyone else on the pod.

Make the Switch!

The Queueable interface is there for you to enjoy. I cannot make you use it; I can only tell you why you should and give you incentive to do so. I could also (switch to Queueable!) use very basic QUEUEABLE subliminal messaging, queueable Queueable.

Existing non-batch Batch Apex can be converted to Queueable. Suicidal scheduling can be easily converted to Queueable. The changes are pretty simple; the same method call (execute) is present in both. Here is a scheduled class:

public class ScheduledClass implements Schedulable {
    public void execute(SchedulableContext SC) {
        //do stuff
    }
}

…and here is the corresponding Queueable class:

public class QueueableClass implements Queueable {
    public void execute(QueueableContext SC) {
        //do stuff
    }
}

If you didn’t notice any difference at first glance, it’s because they’re really that similar. If you are doing batch-less jobs, Batch Apex converts almost as easily. You will need to use your query result directly rather than iterating over “scope”. It should be a pretty quick change.
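As a sketch of that Batch Apex conversion (query and class names invented for the example), a batch-less batch and its Queueable equivalent might look like:

```apex
// Before: a "batch" job whose start method only ever yields one batch
public class SingleBatchJob implements Database.Batchable<sObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(
            'SELECT Id, Name FROM Account WHERE Name != null');
    }
    public void execute(Database.BatchableContext bc, List<sObject> scope) {
        // work on the "scope" records
    }
    public void finish(Database.BatchableContext bc) {
    }
}

// After: the same work as a Queueable, using the query result directly
public class SingleQueueableJob implements Queueable {
    public void execute(QueueableContext context) {
        List<Account> accounts = [SELECT Id, Name FROM Account
                                  WHERE Name != null];
        // do the same work here, on "accounts" instead of "scope"
    }
}
```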

This new interface should serve many of the use cases that you’ve been implementing with Scheduled and Batch Apex. Your existing code will still run, but I encourage you to make use of the new interface to make your implementation as efficient as possible.

(Queueable.)

  • John Thompson

    Very amusing post, and +1 for Queueable!

  • Lucas Buyo

    I’ll try that to check how it works, but the post couldn’t be more clearly and simply explained.
    Thanks for that!

  • Siva

    Thanks for sharing Josh… I was expecting the same thing, a mix of future class and batch Apex. Let me give it a try and see how it works.

  • wtm17

    Is it System.enqueue() or System.enqueueJob()? The documentation suggests the latter but the post mentions the former.

    • Josh Kaplan

      For syntax questions, you should trust, in this order:
      The Compiler
      The Documentation
      The Product Manager
      Blog Posts By The Product Manager

      (thanks for pointing it out; i have corrected it)

  • Eric Swinehart

    This article was very timely for me. I implemented a Queueable class this morning which I had started as a future method. My particular usage was in a trigger method that made more sense to call asynchronously.

    Would you say it’s a best practice to use Queueable over future or is there a use case where a future method has a clear advantage?

  • Neil Reid

    Great article Josh.

    You mentioned : “….The same backoff will be in place for Batch Apex, but it will top out at four minutes…”.

    If at all possible, could you be more specific about the rate at which the “topped-out” threshold is reached?
    By “same” do you mean the same formula that Queueable uses? Or will Batchable use different logic to arrive at the top-out of four minutes?

    Obviously, the exact moment a chained async batch runs is always a function of spare platform capacity.
    We would, however be very interested in the degradation pseudo logic

    Eg:
    delay(current) = delay(previous) plus n seconds … or …
    delay(current) = delay(previous) plus n seconds squared … etc etc

    Our message queueing product on the AppExchange would benefit greatly if :
    (1) the first time a batch is run, it starts with minimal latency
    (2) at least 7 or 8 chained batches complete before the minimum time delay between batches becomes material (ie 30 seconds or more)

    Obviously implementation details are not the end user’s direct concern but if you and your team could optimise in this direction it would be greatly appreciated!

  • It looks like Queueable does not support making a callout and chaining. You can do one or the other but not both. Here is the example code that throws “System.AsyncException: Maximum callout depth has been reached.” unless the chaining or callout is commented out.


    /**
     * Invoke with: ID jobId = System.enqueueJob(new MyQueueableClass(0));
     */
    public class MyQueueableClass implements Queueable, Database.AllowsCallouts {
        Integer i;

        public MyQueueableClass(Integer i) {
            this.i = i;
        }

        public void execute(QueueableContext context) {
            // Make request
            HttpRequest req = new HttpRequest();
            req.setEndpoint('https://google.com');
            req.setMethod('GET');

            // Send request
            Http http = new Http();
            HTTPResponse res = http.send(req);
            System.debug(res.getBody());
            System.debug('i:' + i);
            if (i == 0) {
                System.enqueueJob(new MyQueueableClass(1));
            }
        }
    }

    • Rupert Barrow

      As per the documentation here
      https://www.salesforce.com/us/developer/docs/apexcode/Content/apex_queueing_jobs.htm

      “The maximum stack depth for chained jobs is two, which means that you can have a maximum of two jobs in the chain.”

      • Rupert, how many chained jobs do you see in the example code? There is only one chained job being called from the current job which gives a maximum stack depth of 2.

        There is however, a callout being made which is not the same thing as a chained job but seems to be counted as the same. I don’t understand why they would limit our ability to make a callout while chaining as that seems like a pretty practical use case for chaining (ex. export/import data while avoiding limits).

        • Rupert Barrow

          Brett, you’re right.

          It’s an error message similar to “batches cannot be run from a @future method”. Callouts are handled asynchronously by the system, maybe a Queueable calling a Queueable calling a callout is too many syncs for the platform : a bug, in my opinion.

          Anyway, I don’t really see the point of just 1 level of chaining : what is the point ? Either you chain multiple times (the way you can do it with batches), or you use another technique; but chaining only once ?

          I’m not totally confident in this new feature.

          • Thank Rupert! The 1 level of chaining was to emphasize that there is no more than 2 chained jobs, following the SFDC docs. I tried to simplify the code as much as possible to demonstrate the issue.

    • Enrico Murru

      +1 This seems not to be documented. A callout does not seem to be a chained job in my opinion, nor should it be treated like one.

    • Jyoti Goyal

      +1 Got the same issue. Has someone got the solution or documentation for this issue?

    • Thank you for help 🙂 Brett!

  • Mike Berry

    I believe I have discovered a bug with System.enqueueJob() when executed in the context of a unit test. When all tests in an org are run (such as during a production deployment), any unit tests that make use of Test.startTest()/stopTest() and that are executed after a call to System.enqueueJob will fail with the error message: System.FinalException: Method only allowed during testing. In case it matters, we have parallel apex testing disabled in our production org. I opened a case with support, #11204977 but as we only have basic support it is taking a while to get some traction. Any assistance would be appreciated. Thanks.

  • Enkhbold Tsagaach

    My dev org is working fine with the Queueable, but one of my colleagues encountered the problem “Invalid interface name Queueable” when he tried to merge/deploy my changes into his dev org. Some people suggested using lowercase queueable, but it didn’t work.

    This guy says it’s not “turned on” for every org and we need to make an request to turn it on?
    https://developer.salesforce.com/forums/ForumsMain?id=906F0000000AoebIAC

  • Amit Malik

    Thanks Josh. Excellent Post.

  • Sam Mohyee

    So I’m one of the people the author noticed who chains batch apex jobs that have a single batch each. Why do I do this? Because my requirements include enforcing a minimum time gap between batches (in my case due to callout limits enforced by a third party).

    Chaining batch apex using ‘system.scheduleBatch()’ is the only way I know how to do this.

    It doesn’t look like Queueable addresses this requirement, or am I wrong?

  • Barton Ledford

    Everyone upvote this idea to allow chaining and callouts in queuables https://success.salesforce.com/ideaView?id=08730000000Dl7VAAS

  • Mark

    So great that Salesforce takes it upon themselves to randomly introduce a significant “slow it down” that breaks applications. Thanks guys, love your work and the endless pain you put us through.