SSLv3 Poodle vulnerability & outbound messaging
Hi,
Because of the SSLv3 POODLE vulnerability, we have turned off SSLv3 support on our web server. This in turn is causing Salesforce outbound messaging to fail.
Is there a workaround for this from the Salesforce end?
The outbound messaging issue was resolved once we re-enabled SSLv3 on our web server.
Ted Tsung
At present, some outbound calls from Salesforce are initiated using SSLv3, so if SSLv3 is disabled on your server, there will be a handshake failure. Customers will be notified when Salesforce decides to disable SSLv3 for outbound calls. Until then, it is advised that you keep SSLv3 enabled for incoming calls (those received from Salesforce).
Please note that in a recent communication, Salesforce advised disabling SSLv3 when connecting to Salesforce. That is a different scenario, because the request is sent by the customer, not by Salesforce, and we support up to TLSv1.2 for inbound connections.
Salesforce R&D is aggressively working on a strategy around this, but there is no set schedule at present. Once one is finalized, a technology communication will be sent out.
Thanks,
Shashank
Thanks,
Drew
Could you clarify further please? You say "some outbound calls from Salesforce are initiated using SSLv3"
Are HTTPRequest callouts done over SSLv3? ... or are they made with TLSv1.0 or higher?
Many SF implementations involve callouts to services managed by third parties - and these parties are responding quickly to the POODLE threat by shutting off SSLv3 support. If Salesforce is slow to follow suit, there will be a LOT of unhappy SF customers.
- Ron
There is nothing on trust.salesforce.com or in the security section there.
Surely this must be on Salesforce's radar?
Thanks,
David
...and what do you mean "some outbound calls initiated from Salesforce are initiated using SSLv3" - if that is the case, that implies that *some* are using TLS1.2. Why can't you have "fewer" try using SSLv3? This is within your control, no?
This is getting critical - Shopify and several others have already shut off SSLv3 which is having a direct impact on operations of many sf.com customers. Please provide a schedule for a fix - this is not acceptable.
Authorize.net is disabling all SSLv3 connections on November 4. Millions of emails from other technology firms have gone out this week, informing customers of the situation with SSLv3.
Meanwhile, publicly it seems that Salesforce is treating this as an insignificant browser issue: https://help.salesforce.com/apex/HTViewSolution?urlname=POODLE-SSL-3-0&language=en_US
I sure hope R&D is working day and night on this. Otherwise this is an integration catastrophe for outbound messaging.
This is going to become a news item if sf.com doesn't do some damage control. I think we would all like to hear from the CEO as to why one of the largest business integration services in the world is unable to move more quickly - this is putting customers in serious risk of both security exposure as well as operational exposure. Can't wait to see this in the press in a few days.....geez
It's still not clear what's going to happen right now ... just a lot of speculation, and no helpful information from SF. Very disappointing.
You can utilize a proxy service that accepts SSLv3 to make your calls for you.
The general idea is to stand up a proxy (I like cors-anywhere: https://github.com/Rob--W/cors-anywhere) on an SSLv3-enabled server (Heroku?) and point your outbound calls at the proxy. The proxy then makes TLSv1.2 callouts to Authorize.net etc.
I highly suggest using custom certs and reasonable security measures to ensure only you are using the proxy.
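For what it's worth, here is a minimal Apex sketch of what pointing a callout at such a proxy could look like. The proxy hostname, the X-Proxy-Key header, and the request body are hypothetical placeholders for whatever host and shared secret you actually set up; cors-anywhere expects the real target URL to be appended to the proxy URL.

// Hypothetical sketch: route an Authorize.net callout through a proxy that
// still accepts SSLv3 from Salesforce and speaks TLS to the real endpoint.
// 'my-sslv3-proxy.herokuapp.com' and 'X-Proxy-Key' are placeholders.
HttpRequest req = new HttpRequest();
// cors-anywhere style: the target URL is appended to the proxy URL
req.setEndpoint('https://my-sslv3-proxy.herokuapp.com/https://api.authorize.net/xml/v1/request.api');
req.setMethod('POST');
req.setHeader('Content-Type', 'text/xml');
// shared secret so only your org can use the proxy (pair with custom certs)
req.setHeader('X-Proxy-Key', 'your-shared-secret');
req.setBody('<createTransactionRequest>...</createTransactionRequest>');
HttpResponse res = new Http().send(req);
System.debug(res.getStatusCode() + ' ' + res.getBody());

You would also need a Remote Site Setting for the proxy host before the callout is allowed.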
Does anyone know what % of outbound calls are going out as SSLv3?
Is there any way for customers to choose before December 10 that all calls go out as TLS 1.0 or higher?
Should our expectation be that Apex callouts cannot be totally counted on to make a good connection until December 10? That seems an awfully long time for a fundamental platform connectivity issue to exist! It'd be analogous to Salesforce saying that the Save button on page layouts won't work 20% of the time and to just try again if it fails. Kinda crazy.
The help article states the following... The problem is that a huge amount of communication is done to services not in our own environments. We cannot control whether Google, Stripe, Authorize.net, etc. turn off SSL v3.
I've been hearing rumors of the core issue being with cached connections in the Java client and that it should go away over time, but cannot really confirm or deny that officially. The help article does not really address this.
I think the dev community needs more explanation. Especially in the area of Apex callouts to 3rd party services.
@David Hecht 7 - We have been tracking this for several days and the % of failures is pretty consistent - right around 50%. Quite random on success/failure rate related to time of day, etc.
I guess we have a date, but having a fix almost 1 month after other sites have responded (e.g., by completely disabling a big hole!) is a bit concerning. Good to see other developers pushing for sf.com to get better - let's hope they do.
Here's a chart of the last several days of callouts made to our geocoder. These callouts are occurring from within our customers' own Salesforce systems (1000s of orgs). A callout is only counted here if it's successful. These things run on an hourly schedule in customer systems and do fluctuate each day, but notice the dramatic drop in connections starting with the change to disabling SSL v3 by our geocode provider.
"We will use TLS with callouts, but if that fails, we drop down to SSL and hard-code sending via SSLv3 for 24 hours or until an app restart, whichever comes first. This should address any changes that occur in the way other companies integrate with Salesforce until we completely disable SSL 3.0 on December 10th."
Also, would the downgrade occur for any other reason besides lack of support for TLS .... e.g. network latency?
An additional bit of data we've captured - all (no exceptions) our failures over the last 2 weeks (roughly 18,000) have come from NA4, NA8 or NA14. All other clients have been performing normally. Also worth noting is a "failure" as we count it actually means 3 consecutive failures as it has a retry mechanism.
Perhaps "app" means "instance" in the case of David's comment and we end up suffering because some other code in the instance fails on a TLS callout and therefore reverts us all (the entire instance) back to SSL3?
Interested in any more data folks can provide.
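For anyone who wants the same kind of retry cushioning before counting a callout as failed, here is a rough Apex sketch of a 3-attempt retry loop. It is only an illustration of the idea described above, not anyone's production code, and the geocoder endpoint is a placeholder.

// Sketch: only treat a callout as failed after 3 consecutive attempts fail.
Integer maxAttempts = 3;
Boolean succeeded = false;
for (Integer attempt = 1; attempt <= maxAttempts && !succeeded; attempt++) {
    try {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example.com/geocode');   // placeholder endpoint
        req.setMethod('GET');
        HttpResponse res = new Http().send(req);
        succeeded = (res.getStatusCode() == 200);
    } catch (CalloutException e) {
        // handshake_failure surfaces here; log and fall through to the next attempt
        System.debug('Attempt ' + attempt + ' failed: ' + e.getMessage());
    }
}
if (!succeeded) {
    System.debug('Counting this as a failure after ' + maxAttempts + ' consecutive attempts');
}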
Vantiv / Litle shut off SSLv3 support yesterday, and Authorize.net said they were going to do it today. We have lots of customers on both and sandbox testing last week / over the weekend looked good, and no support issues yesterday or today.
It looks like the Salesforce TLS first, then SSL if that fails approach was implemented over the weekend as best we can tell, support said it was scheduled for yesterday.
Would have been great if we had something to tell customers last week, other than just hoping Salesforce would take care of it, but it does look like they did take care of it.
I asked for clarification from Salesforce support on what an "app" meant in the context of the failover to SSL for 24 hours, and haven't heard back yet. I would assume it would be an org ID and not an entire instance, but maybe even more specific like an appexchange app or apex within Salesforce.
Glad this appears to be taken care of.
It looks like they fixed the glitch though so we're good.
The level of capability we are talking about (connectivity) is what the Salesforce platform is here for, to "just work". It's the blocking and tackling of a platform.
The mind boggling thing to me is that, after being asked again and again and again, Salesforce seems to ignore this issue. We just need to hear from them that they understand the situation and have a plan. The Help article they published is not enough.
I do prioritize getting it fixed over an explanation about it so I am thankful to those at Salesforce working on it. (thanks!) Would just be nice to get some sort of communication during the process.
Sounds like things are on the right path.
There have been questions raised around Salesforce's support of SSL 3.0 and TLS 1.0. While we are in the process of disabling SSL 3.0, Salesforce currently supports TLS 1.0 and TLS 1.2 for inbound requests and TLS 1.0 for outbound call-outs.
Our Technology Team has been actively working to address an issue that causes outbound call-outs to use SSL 3.0 more frequently than they should, given we have TLS 1.0 enabled. We understand that this may have caused issues for customers who have already disabled SSL 3.0 in their call-out endpoints. We released a fix to Sandboxes last Friday, October 31, and plan to release the fix to production instances during off-peak hours on Wednesday, November 5, 2014.
Many customers and partners who have tested this fix in their Sandboxes have reported successful connections using TLS 1.0. A few customers continued to experience TLS 1.0 issues on their Sandboxes, and our Technology team is working with them to find a solution.
There was an issue specific to NA14 that was generating more outbound messages over SSLv3, but that has since been fixed. That is probably why a few of you saw an issue there.
But feel free to send me a note. Thanks
Thanks for this communication!
However, we also needed to increase our Apex timeout setting for the outbound call. We set it to the maximum of 120 seconds. We first did this in the sandbox, which resolved the issues there (after the coding changes were applied by SFDC), and then deployed to production. I am not certain why this was necessary (unless the new code possibly introduced latency that was not present before?), but it seems to have done the trick; just putting it out there in case others are in a similar situation.
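In case it saves anyone a lookup, bumping the callout timeout is a one-line change on the HttpRequest; the endpoint below is just a placeholder.

// Raise the Apex callout timeout from the 10-second default to the 120-second maximum.
HttpRequest req = new HttpRequest();
req.setEndpoint('https://example.com/endpoint');   // placeholder endpoint
req.setMethod('POST');
req.setTimeout(120000);   // value is in milliseconds; 120000 ms is the max allowed
HttpResponse res = new Http().send(req);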
Thank you, Brian, for providing the helpful explanation of the issue.
First, thank you for this communication. It's the "official word" needed to pass on to clients who are looking for us to explain what's going on and when it will "just work".
At the risk of opening a can of worms, and in full knowledge that I'm dropping this deep in a safe harbor:
TLS1.0 and SSLv3 are just different enough to be incompatible with one another. However, a good number of the cipher suites are shared between TLS1.0 and SSLv3 (source: https://www.openssl.org/docs/apps/ciphers.html#TLS_v1_0_cipher_suites_ ; scroll up for the SSLv3 suites).
Since POODLE demonstrates that the CBC ciphers in SSLv3 are insecure ...
What's the plan to bring TLS1.2 (or even 1.3) to In/Out calls on the Platform?
Here is a relevant discussion on the cipher suites shared by these two protocols:
http://security.stackexchange.com/questions/70832/why-doesnt-the-tls-protocol-work-without-the-sslv3-ciphersuites
Hope this helps.
You can use this as our official word: https://help.salesforce.com/apex/HTViewSolution?urlname=Salesforce-disabling-SSL-3-0-encryption&language=en_US
Specifically for POODLE, the ability to change the padding bytes and the padding length while keeping the packet valid is one of the deficiencies that makes SSLv3 insecure; TLSv1 doesn't have this issue, so attempting the same attack against TLSv1 wouldn't be successful.
Hope this helps.
Can you send me your org ID, the endpoint, the time it started, and the best way to reach you (or any other information you have) at bestebez@salesforce.com
@Operations Managment @Always Thinkin @CRMScienceKirk
Are any of you still seeing this error?
This could be because of a buggy implementation of one of the cryptographic algorithms at the client end. You can find more information here:
http://security.stackexchange.com/questions/39844/getting-ssl-alert-write-fatal-bad-record-mac-during-openssl-handshake
Hope it helps.
Best,
Swetha.
Thanks
AP0 and AP1 are in progress and are expected to have SSLv3 disabled momentarily, likely within 30 minutes.
https://success.salesforce.com/ideaView?id=08730000000DhJaAAK
I'm sure TLS 1.2 support is coming, but I think there are some significant engineering hurdles before it's available. Specifically, I believe Salesforce still runs on JDK 6, which doesn't support TLS 1.1 or 1.2.
Regardless of Salesforce's support of TLS1.2, there's evidence now that even that is (sometimes, often?) insecure. From the public release:
The impact of this problem is similar to that of POODLE, with the attack being slightly easier to execute–no need to downgrade modern clients down to SSL 3 first, TLS 1.2 will do just fine. The main target are browsers, because the attacker must inject malicious JavaScript to initiate the attack. A successful attack will use about 256 requests to uncover one cookie character, or only 4096 requests for a 16-character cookie. This makes the attack quite practical.
https://community.qualys.com/blogs/securitylabs/2014/12/08/poodle-bites-tls
SSLv3 is fully disabled in the inbound and outbound directions at Salesforce on all instances and endpoints.
Callouts or servers that keep the connection open should run into this issue less often than callouts or servers that close the connection after each request. If your callout or your server configuration uses the 'Connection: close' header in normal circumstances, it's advisable to not do that anymore.
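As an illustration only (the endpoint below is a placeholder), a callout that previously forced 'Connection: close' can simply stop sending that header, so the underlying connection can be kept open and reused between requests.

// Placeholder sketch: let the connection stay open instead of forcing a fresh
// handshake (and a fresh chance to hit the DH issue) on every request.
HttpRequest req = new HttpRequest();
req.setEndpoint('https://example.com/api');   // placeholder endpoint
req.setMethod('GET');
// Previously: req.setHeader('Connection', 'close');  <- remove this so the
// connection can be reused for subsequent callouts
HttpResponse res = new Http().send(req);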
To work around this issue for now, disabling the ciphersuites on the callout's target server that use a DH key exchange should be effective. We believe that this issue is already fixed in the Spring '15 release, though we can't promise that, as per Salesforce's safe harbor statement at http://investor.salesforce.com/about-us/investor/safe-harbor-statement/default.aspx. We may be able to get the fix deployed earlier than the Spring '15 release, but that can't be promised at this time. Disabling the DH ciphersuites on the callout target server should give quick relief for now.
But the handshake_failure issue occurred again.
On January 29, 2015 at 10:15 UTC a handshake_failure error occurred.
I opened a case and got root cause information: a bug in the callout program.