Efficient Use of the Force.com APIs from Heroku | Salesforce Developers Blog

Novice Force.com developers tend to intensely dislike the limits that come along with a multitenant environment – for example, the governor limits on Apex Code. With more experience, a love/hate relationship develops, as limits start to catch bugs such as infinite loops. The true Force.com guru embraces limits as a guide to proper coding style for maximizing performance by (amongst other things) minimizing database access.

When you're writing a Ruby application on Heroku that calls the Database.com or Force.com APIs, it's easy to forget all about limits, particularly when you can scale your app up to any number of dynos (a dyno being a single process running on Heroku) with a single command. Easy, until the day your app becomes a roaring success, and API calls start to return REQUEST_LIMIT_EXCEEDED. Now what?

Well, you could call your friendly salesforce.com account rep and increase your daily API call limit, but that should be a last resort, rather than the first response. In this blog entry, based on an app I’ll be presenting at Cloudstock next month, I’ll show you one strategy for modifying your app to stay within the existing API call limits, saving you that call to the account rep and improving your app’s performance. I’ll use a Ruby web app written with the Sinatra framework as an example, but the principles apply to any API client.

First, look at the API calls your app is making. Does your app really need to make those calls on every request? You might be able to cache data in your app, maximizing performance and preserving data freshness. For example, a naive app might pull merchandise data from Force.com on every page request:

def merchandise
  # Query all merchandise records via the REST API on every page
  # request – no caching, so every page view costs an API call
  query = "SELECT Id, Name, Image_URL__c, Description__c from Merchandise__c"
  @merchandise ||=
    force_token.get("#{force_token.params['instance_url']}/services/data"+
      "/v24.0/query/?q=#{CGI::escape(query)}").parsed['records']
end

If the merchandise data is identical for every end user (it’s retrieved in the context of the web app, rather than an individual user), and it’s being updated relatively infrequently compared to page requests at the web site, we can make some HUGE performance gains for very minimal cost by caching the merchandise records. We could just store them in memory, but, since our dynos can spin up and down in response to deployment and scaling, we’ll use Heroku’s Memcache add-on for a more persistent solution. Like most Heroku add-ons, it has a free tier of service, and we can add it to our app with a single command:

heroku addons:add memcache

Heroku recommends the Dalli Memcache client; the Heroku Dev Center gives full instructions for adding the Dalli gem to your app (as well as using Memcache from Rails, Java and Python), so I’ll focus on the changes to our hypothetical app:

require "dalli"

def dalli_client
  # Compress data on its way to the cache, cache entries expire in 1 hour
  @dalli_client ||=
    Dalli::Client.new(nil, :compression => true, :expires_in => 3600)
end

MERCHANDISE_KEY = 'merchandise'

#...

def merchandise
  # Try the cache first; fall back to the REST API and cache the result
  @merchandise ||= dalli_client.get(MERCHANDISE_KEY)
  unless @merchandise
    query = "SELECT Id, Name, Image_URL__c, Description__c from Merchandise__c"
    @merchandise =
      force_token.get("#{force_token.params['instance_url']}/services/data"+
        "/v24.0/query/?q=#{CGI::escape(query)}").parsed['records']
    dalli_client.set(MERCHANDISE_KEY, @merchandise)
  end
  @merchandise
end

We try to get the merchandise from Memcache; if there's nothing there, we call the API as before. Dalli serializes and, with compression enabled, compresses our data on its way to Memcache, reversing the process when we retrieve it. With the settings above, we'll make at most one query API call per hour – that definitely won't exceed the limit!

But what if we add some merchandise to the database? We don’t want our customers to have to wait up to an hour to see it! Taking advantage of an Apex trigger and a future method, we can reduce that waiting time to almost zero. Here’s the Apex Code:

trigger FlushMerchandiseCacheTrigger on Merchandise__c
    (after insert, after update, after delete, after undelete) {
    MerchandiseUtils.flushMerchandiseCache();
}
public class MerchandiseUtils {
    @future (callout=true)
    public static void flushMerchandiseCache() {
        HttpRequest req = new HttpRequest();
        req.setEndpoint(MerchandiseAppConfig__c.getOrgDefaults().Cache_URL__c);
        req.setMethod('DELETE');

        Http http = new Http();
        HTTPResponse res = http.send(req);

        System.debug('flushMerchandiseCache: '+res.getStatusCode()+
            ' '+res.getStatus());
    }
}

As you can see, the trigger just calls the future method on any change to merchandise data, while the future method sends an HTTP DELETE request to the app at Heroku. Note that future methods execute asynchronously, with no guarantee on latency, although, from personal experience, they tend to execute within a few seconds of invocation. A custom setting is used to store the endpoint URL, avoiding the need to edit code if the endpoint changes for some reason.

Our app’s end of the conversation is almost trivial:

delete '/merchandisecache' do
  JSON.pretty_generate({ :success => dalli_client.delete(MERCHANDISE_KEY) }) + "\n"
end

You would, of course, secure this call, since an attacker could otherwise mount a denial-of-service attack, repeatedly flushing the cache and consuming your API calls. The details depend on your app, but an HMAC based on a shared secret works well for a point-to-point integration such as this.
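As a sketch of that approach (the message format and helper name here are my own assumptions for illustration, not part of the original app): the Apex future method would compute an HMAC-SHA256 of, say, a timestamp with the shared secret and send it in a request header, and the Sinatra app would verify the signature before deleting the cache entry:

```ruby
require 'openssl'

# Check the signature sent with the flush request against an
# HMAC-SHA256 computed locally from the same shared secret.
# OpenSSL.secure_compare is constant-time, guarding against
# timing attacks on the string comparison itself.
def valid_signature?(secret, message, signature)
  expected = OpenSSL::HMAC.hexdigest('SHA256', secret, message)
  OpenSSL.secure_compare(expected, signature)
end
```

In the DELETE route you would then `halt 403` unless `valid_signature?` returns true for the incoming request; signing a timestamp (and rejecting stale ones) also limits replay attacks.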

Now any change to the merchandise data will result in the cached data being deleted, and fresh data being loaded on the next page request. We’re making the absolute minimum number of calls to the API, while preserving data freshness – that’s a great result in any language (pardon the pun!)

As mentioned above, there is an assumption here that changes to merchandise records are relatively infrequent compared to page requests. Since future method calls are also rate-limited (currently, the limit is 200 method calls per Salesforce/Salesforce Platform/Force.com – One App user license, per 24 hours), you'll need to understand the expected rate of change in merchandise records. If you are constantly creating, modifying and deleting stock, the most efficient trade-off might be to forgo the trigger altogether and simply reduce the Memcache expiry. A cache expiry of 1 minute would consume at most 1,440 API calls per 24-hour period.
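That worst-case figure falls straight out of the expiry setting: at most one query each time the cached entry expires, so the daily ceiling is just the seconds in a day divided by the TTL. A back-of-the-envelope sketch (the helper name is mine, not from the app):

```ruby
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# Worst-case query API calls per day for a given Memcache expiry,
# assuming at least one page request in every expiry window.
def max_daily_api_calls(expires_in_seconds)
  SECONDS_PER_DAY / expires_in_seconds
end

max_daily_api_calls(3600)  # 1-hour expiry:   24 calls/day
max_daily_api_calls(60)    # 1-minute expiry: 1,440 calls/day
```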

I’ll look at the case where data is specific to the user in another blog entry soon; in the meantime, go register for Cloudstock to attend my session, Integrating your Social Enterprise and Facebook with Heroku, and many more!

UPDATE – a video replay of the Cloudstock session mentioned above is now available.
