
The SLOTH attack and IKE/IPsec

Executive Summary: The IKE daemons in RHEL7 (libreswan) and RHEL6 (openswan) are not vulnerable to the SLOTH attack, but the attack is still interesting to look at.

The SLOTH attack released today is a new transcript collision attack against some security protocols that use weak or broken hashes such as MD5 or SHA1. While it mostly focuses on the issues found in TLS, it also mentions weaknesses in the “Internet Key Exchange” (IKE) protocol used for IPsec VPNs. While the TLS findings are very interesting and have been assigned CVE-2015-7575, the described attacks against IKE/IPsec got close but did not result in any vulnerabilities. In the paper, the authors describe a Chosen Prefix collision attack against IKEv2 using RSA-MD5 and RSA-SHA1 to perform a Man-in-the-Middle (MITM) attack and a Generic collision attack against IKEv1 HMAC-MD5.

We looked at libreswan and openswan-2.6.32 compiled with NSS, as that is what we ship in RHEL7 and RHEL6. Upstream openswan with its custom crypto code was not evaluated. While no vulnerability was found, some hardening was identified that makes this attack less practical; it will be added in the next upstream version of libreswan.

Specifically, the attack was prevented because:

  • The SPIs in IKE are random and part of the hash, so the attack requires an online effort of 2^77 – not an offline attack as suggested in the paper.
  • MD5 is not enabled by default for IKEv2.
  • Weak Diffie-Hellman groups DH22, DH23 and DH24 are not enabled by default.
  • Libreswan as a server does not re-use nonces for multiple clients.
  • Libreswan destroys nonces when an IKE exchange times out (default 60s).
  • Bogus ID payloads in IKEv1 cause the connection to fail authentication.

The rest of this article explains the IKEv2 protocol and the SLOTH attack.

The IKEv2 protocol

[Figure: sloth-ike-1 – the IKEv2 IKE_INIT and IKE_AUTH exchanges]

The IKE exchange starts with an IKE_INIT packet exchange to perform the Diffie-Hellman Key Exchange. In this exchange, the initiator and responder exchange their nonces. The result of the DH exchange is that both parties now have a shared secret called SKEYSEED. This is fed into a mutually agreed PRF algorithm (which could be MD5, SHA1 or SHA2) to generate as much pseudo-random key material as needed. The first key(s) are for the IKE exchange itself (called the IKE SA  or Parent SA), followed by keys for one or more IPsec SAs (also called Child SAs).
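
As a rough sketch of this key expansion, RFC 7296 defines SKEYSEED and a prf+ construction that repeatedly applies the negotiated PRF over the nonces and SPIs. The Python sketch below assumes HMAC-SHA2-256 as the negotiated PRF and uses illustrative variable names; it is not libreswan’s code.

    import hmac, hashlib, os

    def prf(key, data):
        # Negotiated PRF; HMAC-SHA2-256 is assumed here purely for illustration.
        return hmac.new(key, data, hashlib.sha256).digest()

    def prf_plus(key, seed, needed):
        # prf+ from RFC 7296: T1 = prf(K, S|0x01), Tn = prf(K, Tn-1|S|n), ...
        out, t, ctr = b"", b"", 1
        while len(out) < needed:
            t = prf(key, t + seed + bytes([ctr]))
            out += t
            ctr += 1
        return out[:needed]

    # Dummy inputs standing in for the negotiated values.
    ni, nr = os.urandom(32), os.urandom(32)
    spi_i, spi_r = os.urandom(8), os.urandom(8)
    shared_secret = os.urandom(256)            # g^ir from the DH exchange

    skeyseed = prf(ni + nr, shared_secret)
    # SK_d | SK_ai | SK_ar | SK_ei | SK_er | SK_pi | SK_pr are sliced from this.
    keymat = prf_plus(skeyseed, ni + nr + spi_i + spi_r, 7 * 32)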

But before the SKEYSEED can be used, both ends need to perform an authentication step. This is the second packet exchange, called IKE_AUTH. It binds the Diffie-Hellman channel to an identity to prevent a MITM attack. Usually this is done with digital signatures over the session data to prove ownership of the identity’s private key. Technically, the signature covers a hash of the session data, and it is this hashing step that a transcript collision targets; the way TLS signs a hash of its transcript is what made TLS more vulnerable to the SLOTH attack.

The attack is to trick both parties into signing a hash which the attacker can replay to the other party, faking the authentication of both endpoints.

[Figure: sloth-ike-2 – the attacker in the middle relaying signatures between client and server (transcript collision)]

They call this a “transcript collision”. To facilitate the creation of the same hash, the attacker needs to be able to insert its own data in the session to the first party so that the hash of that data will be identical to the hash of the session to the second party. It can then just pass on the signatures without needing to have private keys for the identities of the parties involved. It then needs to remain in the middle to decrypt and re-encrypt and pass on the data, while keeping a copy of the decrypted data.

The IKEv2 COOKIE

The initial IKE_INIT exchange does not have many payloads that can be used to manipulate the outcome of the hashing of the session data. The only candidate is the NOTIFY payload of type COOKIE.

Performing a Diffie-Hellman exchange is relatively expensive. An attacker could send a lot of IKE_INIT requests forcing the VPN server to use up its resources. These could all come from spoofed source IP addresses, so blacklisting such an attack is impossible. To defend against this, IKEv2 introduced the COOKIE mechanism. When the server gets too busy, instead of performing the Diffie-Hellman exchange, it calculates a cookie based on the client’s IP address, the client’s nonce and its own server secret. It hashes these and sends it as a COOKIE payload in an IKE_INIT reply to the client. It then deletes all the state for this client. If this IKE_INIT exchange was a spoofed request, nothing more will happen. If the request was a legitimate client, this client will receive the IKE_INIT reply, see the COOKIE payload and re-send the original IKE_INIT request, but this time it will include the COOKIE payload it received from the server. Once the server receives this IKE_INIT request with the COOKIE, it will calculate the cookie data (again) and if it matches, the client has proven that it contacted the server before. To avoid COOKIE replays and thwart attacks attempting to brute-force the server secret used for creating the cookies, the server is expected to regularly change its secret.
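
A minimal sketch of such a stateless cookie, following the construction suggested in RFC 7296 section 2.6 (Cookie = VersionIDofSecret | Hash(Ni | IPi | SPIi | secret)); the hash choice and names here are illustrative:

    import hashlib, hmac, os

    SECRET = os.urandom(32)      # rotated regularly by the server
    VERSION = b"\x00\x01"        # identifies which generation of the secret was used

    def make_cookie(ni, ip_i, spi_i):
        return VERSION + hashlib.sha256(ni + ip_i + spi_i + SECRET).digest()

    def cookie_ok(cookie, ni, ip_i, spi_i):
        # Constant-time comparison; a real server would also accept the
        # previous secret for a short overlap window.
        return hmac.compare_digest(cookie, make_cookie(ni, ip_i, spi_i))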

Abusing the COOKIE

The SLOTH attacker is the MITM between the VPN client and VPN server. It prepares an IKE_INIT request to the VPN server but waits for the VPN client to connect. Once the VPN client connects, it does some work with the received data that includes the proposals and nonce to calculate a malicious COOKIE payload and sends this COOKIE to the VPN client. The VPN client will re-send the IKE_INIT request with the COOKIE to the MITM. The MITM now sends this data to the real VPN server to perform an IKE_INIT there. It includes the COOKIE payload even though the VPN server did not ask for a COOKIE. Why does the VPN server not reject this connection? Well, the IKEv2 RFC-7296 states:

When one party receives an IKE_SA_INIT request containing a cookie whose contents do not match the value expected, that party MUST ignore the cookie and process the message as if no cookie had been included

This rule was likely intended for a recovering server. If the server is no longer busy, it will stop sending cookies and stop requiring cookies. But a few clients that were just about to reconnect will send back the cookie they received while the server was still busy. The server shouldn’t reject these clients, so the advice was to ignore the cookie in that case. Alternatively, the server could just remember the last used secret for a while and, if it receives a cookie when it is not busy, still perform the cookie validation. But that costs some resources too, which an attacker can abuse by sending IKE_INIT requests with bogus cookies. Limiting the cookie-validation window to a short period after the server becomes unbusy would mitigate this.

COOKIE size

The paper contains an error when it talks about this COOKIE size:

To implement the attack, we must first find a collision between m1 and m’1. We observe that in IKEv2 the length of the cookie is supposed to be at most 64 octets but we found that many implementations allow cookies of up to 2^16 bytes. We can use this flexibility in computing long collisions.

It is not clear where the authors got the value of 64; the RFC does not specify a maximum cookie size. The COOKIE value is sent as a NOTIFY payload, and these payloads have a two-byte Payload Length field, so NOTIFY data can legitimately be up to 65535 (just under 2^16) bytes. Adding more bytes should not be possible: any IKE implementation that reads more bytes than specified in the Payload Length value would be very broken. Assuming the COOKIE NOTIFY is the last payload in the packet, the attacker could increase the length specified in the IKE header and stuff additional bytes after this payload, but proper implementations would not read this data. In fact, libreswan encountered interoperability problems when it did something similar by mistake: it padded its IKE packets to a multiple of 8 bytes (as per IKEv1, but not IKEv2) and got those packets rejected by various implementations. Still, the authors claim 65535 bytes is enough for their attack.
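
For illustration, length-respecting parsing boils down to walking the generic payload header defined in RFC 7296 (1 byte next-payload type, 1 byte flags, 2 bytes length that includes the 4-byte header) and never reading past the declared lengths. A minimal sketch, not libreswan’s parser:

    import struct

    def iter_payloads(first_type, body):
        """Yield (payload_type, payload_data) pairs from an IKEv2 message body."""
        ptype, off = first_type, 0
        while ptype != 0 and off < len(body):          # 0 means "no next payload"
            nxt, flags, length = struct.unpack_from("!BBH", body, off)
            if length < 4 or off + length > len(body):
                raise ValueError("bogus payload length")
            yield ptype, body[off + 4:off + length]
            ptype, off = nxt, off + length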

Attacking the AUTH hash

Assuming the above works, the attacker needs to find a collision between m1 and m’1. The only complexity the authors claim could be feasible is when MD5 is used for the authentication step in IKE_AUTH: an offline collision search of 2^16 to 2^39 operations, which they say would take about 5 hours. As the paper states, IKEv2 implementations either don’t support MD5 or, if they do, it is not part of the default proposal set. The paper makes a case that the weak SHA1 is widely supported in IKEv2, but admits that using SHA1 requires far more computing power (they list 2^61 to 2^67, or about 20 years). Note that libreswan (and openswan in RHEL) requires manual configuration to enable MD5 in IKEv2, but SHA1 is still allowed for compatibility.

The final step of the attack – Diffie-Hellman

Assuming the above succeeds, the attacker needs to ensure that g^xy’ = g^x’y. To facilitate that, they use a subgroup confinement attack, illustrated with an example of picking x’ = y’ = 0, in which case both shared secrets have the value 1. In practice this does not work, according to the authors, because most IKEv2 implementations validate the received Diffie-Hellman public value to ensure that it is larger than 1 and smaller than p – 1. They did find that Diffie-Hellman groups 22 to 24 are known to have many small subgroups, and implementations tend not to validate these. This led to an interesting discussion on one of the cypherpunks mailing lists about the mysterious nature of the DH groups in RFC-5114, which are not enabled in libreswan (or openswan in RHEL) by default and require manual configuration precisely because the origin of these groups is a mystery.
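
A minimal sketch of the kind of public-value validation described here; the optional subgroup check assumes the group’s subgroup order q is known (as it is for the RFC-5114 groups):

    def dh_public_value_ok(peer_value, p, q=None):
        # Reject 0, 1 and p-1, which confine the shared secret to a tiny set.
        if not 1 < peer_value < p - 1:
            return False
        # Optional: verify the value lies in the prime-order subgroup.
        if q is not None and pow(peer_value, q, p) != 1:
            return False
        return True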

The IKEv1 attack

The paper briefly brainstorms about a variant of this attack using IKEv1. It would be interesting because MD5 is very common with IKEv1, but the article is not really clear on how that attack would work. It mentions filling the ID payload with malicious data to trigger the collision, but such an ID would never pass validation.

Counter measures

Work has already started on updating the cryptographic algorithms deemed mandatory to implement for IKE. Note that this work does not state which algorithms are valid to use, or which to use by default. It is happening in the IPsec working group at the IETF and can be found at draft-ietf-ipsecme-rfc4307bis. It is expected to go through a few more rounds of discussion, and one of the topics that will be raised is the weak DH groups specified in RFC-5114.

Upstream Libreswan has hardened its cookie handling code, preventing the attacker from sending an uninvited cookie to the server without having their connection dropped.

Factoring RSA Keys With TLS Perfect Forward Secrecy

What is being disclosed today?

Back in 1996, Arjen Lenstra described an attack against an optimization (called the Chinese Remainder Theorem optimization, or RSA-CRT for short). If a fault happened during the computation of a signature (using the RSA-CRT optimization), an attacker might be able to recover the private key from the signature (an “RSA-CRT key leak”). At the time, use of cryptography on the Internet was uncommon, and even ten years later, most TLS (or HTTPS) connections were immune to this problem by design because they did not use RSA signatures. This changed gradually, when forward secrecy for TLS was recommended and introduced by many web sites.
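
For illustration, here is Lenstra’s observation reduced to a toy example with deliberately tiny, made-up numbers: a fault in the “mod q” half of an RSA-CRT signature lets a simple gcd reveal a factor of the modulus (pow(e, -1, m) needs Python 3.8 or later).

    from math import gcd

    p, q, e = 1009, 1013, 65537               # toy parameters, far too small for real use
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    m = 123456                                # stands in for the padded message representative

    sp = pow(m, d % (p - 1), p)               # CRT half mod p (correct)
    sq = (pow(m, d % (q - 1), q) + 1) % q     # CRT half mod q with a simulated fault
    s_bad = (sq + q * ((pow(q, -1, p) * (sp - sq)) % p)) % n   # Garner recombination

    # s_bad^e matches m mod p but not mod q, so the gcd exposes p.
    print("recovered factor:", gcd(pow(s_bad, e, n) - m, n))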

We evaluated the source code of several free software TLS implementations to see if they implement hardening against this particular side-channel attack, and discovered that it is missing in some of these implementations. In addition, we used a TLS crawler to perform TLS handshakes with servers on the Internet, and collected evidence that this kind of hardening is still needed, and missing in some of the server implementations: We saw several RSA-CRT key leaks, where we should not have observed any at all.

The technical report, “Factoring RSA Keys With TLS Perfect Forward Secrecy”, is available in PDF format.

What is the impact of this vulnerability?

An observer of the private key leak can use this information to cryptographically impersonate the server, after redirecting network traffic, conducting a man-in-the-middle attack. The leak can be seen either by the client making the TLS handshake or by a passive observer capturing network traffic. The key leak also enables decryption of connections which do not use forward secrecy, without the need for a man-in-the-middle attack. However, forward secrecy must be enabled in the server for this kind of key leak to happen in the first place, and with such a server configuration most clients will use forward secrecy, so an active attack will be required against the configurations which can theoretically lead to RSA-CRT key leaks.

Does this break RSA?

No. Lenstra’s attack is a so-called side-channel attack, which means that it does not attack RSA directly. Rather, it exploits unexpected implementation behavior. RSA, and the RSA-CRT optimization with appropriate hardening, is still considered secure.

Are Red Hat products affected?

The short answer is: no.

The longer answer is that some of our products do not implement the recommended hardening that protects against RSA-CRT key leaks. (OpenSSL and NSS already have RSA-CRT hardening.) We will continue to work with upstream projects and help them to implement this additional defense, as we did with Oracle in OpenJDK (which led to the CVE-2015-0478 fix in April this year). None of the key leaks we observed in the wild could be attributed to these open-source projects, and no key leaks showed up in our lab testing, which is why this additional hardening, while certainly desirable to have, does not seem critical at this time.
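
The hardening in question is essentially “verify before release”: after computing a signature via the fast CRT path, check it against the public key and refuse to hand it out if the check fails. A minimal sketch, where crt_sign is a hypothetical stand-in for a library’s CRT signing routine:

    def sign_with_crt_check(message_rep, e, n, crt_sign):
        # message_rep is the padded message representative as an integer.
        s = crt_sign(message_rep)                    # fast, fault-prone CRT path
        if pow(s, e, n) != message_rep % n:          # cheap public-key verification
            raise RuntimeError("RSA-CRT fault detected; signature withheld")
        return s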

In the process of this disclosure, we consulted some of our partners and suppliers, particularly those involved in the distribution of RPM packages. They indicated that they already implement RSA-CRT hardening, at least in the configurations we use.

What would an attack look like?

The attack itself is unobservable because the attacker performs an off-line mathematical computation on data extracted from the TLS handshake. The leak itself could be noticed by an intrusion detection system if it checks all TLS handshakes for mathematical correctness.

For the key leaks we have observed, we do not think there is a way for remote attackers to produce key leaks at will, in the sense that an attacker could manipulate the server over the network in such a way that the probability of a key leak in a particular TLS handshake increases. The only thing the attacker can do is to capture as many handshakes as possible, perhaps by initiating many such handshakes themselves.

How difficult is the mathematical computation required to recover the key?

Once the necessary data is collected, the actual computation is marginally more complicated than a regular RSA signature verification. In short, it is quite cheap in terms of computing cost, particularly in comparison to other cryptographic attacks.

Does it make sense to disable forward secrecy, as a precaution?

No. If you expect that a key leak might happen in the future, it could well have happened already. Disabling forward secrecy would enable passive observers of past key leaks to decrypt future TLS sessions, from passively captured network traffic, without having to redirect client connections. This means that disabling forward secrecy generally makes things worse. (Disabling forward secrecy and replacing the server certificate with a new one would work, though.)

How can something called Perfect Forward Secrecy expose servers to additional vulnerabilities?

“Perfect Forward Secrecy” is just a name given to a particular tweak of the TLS protocol. It does not magically turn TLS into a perfect protocol (that is, resistant to all attacks), particularly if the implementation is incorrect or runs on faulty hardware.

Have you notified the affected vendors?

We tried to notify the affected vendors, and several of them engaged in a productive conversation. All browser PKI certificates for which we observed key leaks have been replaced and revoked.

Does this vulnerability have a name?

We think that “RSA-CRT hardening” (for the countermeasure) and “RSA-CRT key leaks” (for a successful side-channel attack) is sufficiently short and descriptive, and no branding is appropriate. We expect that several CVE IDs will be assigned for the underlying vulnerabilities leading to RSA-CRT key leaks. Some vendors may also assign CVE IDs for RSA-CRT hardening, although no key leaks have been seen in practice so far.

Factoring RSA export keys – FREAK (CVE-2015-0204)

This week’s issue with OpenSSL export ciphersuites has been discussed in the press as “FREAK” and “SMACK”. These are addressed by CVE-2015-0204, and updates for affected Red Hat products were released in January.

Historically, the United States and several other countries tried to control the export or use of strong cryptographic primitives. For example, any company that exported cryptographic products from the United States needed to comply with certain key size limits. For RSA encryption, the maximum allowed key size was 512 bits and for symmetric encryption (DES at that time) it was 40 bits.

The U.S. government eventually lifted this policy and allowed cryptographic primitives with bigger key sizes to be exported. However, these export ciphersuites did not really go away and remained in a lot of codebases (including OpenSSL), probably for backward compatibility purposes.

It was considered safe to keep these export ciphersuites lying around, for two reasons:

  1. Even if your webserver supports export ciphersuites, most modern browsers will not offer them as part of the initial handshake because they want to establish a session with strong cryptography.
  2. Even if export cipher suites are used, an attacker still needs to factor the 512-bit RSA key or brute-force the 40-bit DES key. Though doable with today’s cloud/GPU infrastructure, it is hardly worthwhile for a single session.

However, a bug turned this into a security flaw affecting various cryptographic libraries, including OpenSSL: OpenSSL clients would accept RSA export-grade keys even when the client did not ask for export-grade RSA. This can lead to an active man-in-the-middle attack, allowing decryption and alteration of the TLS session, in the following way:

  • An OpenSSL client contacts a TLS server and asks for a standard RSA key (non-export).
  • A MITM intercepts this request and asks the server for an export-grade RSA key.
  • Once the server replies, the MITM attacker forwards this export-grade RSA key to the client. Due to the bug described above, the client accepts the export-grade key.
  • In the meantime, the MITM attacker factors this key and is able to decrypt all data exchanged between the server and the client. (A sketch of the client-side check that the fix amounts to is shown below.)
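
A minimal sketch of that client-side check: the fix for CVE-2015-0204 essentially rejects an ephemeral RSA key whenever a non-export cipher suite was negotiated. The parameter names here are illustrative, not OpenSSL’s internals.

    def accept_server_key_exchange(suite_is_export, has_tmp_rsa_key):
        # An ephemeral ("temporary") RSA key is only legitimate for export
        # cipher suites (which were limited to 512-bit keys).
        if has_tmp_rsa_key and not suite_is_export:
            raise ValueError("unexpected ephemeral RSA key; aborting handshake")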

This issue was fixed in OpenSSL back in October of 2014 and shipped in January of 2015 in Red Hat Enterprise Linux 6 and 7 via RHSA-2015-0066. This issue has also been addressed in Fedora 20 and Fedora 21.

Red Hat Product Security initially classified this as having low security impact, but after more details about the issue and the possible attack scenarios became clear, we re-classified it as a moderate-impact security issue.

Additional information on mitigating this vulnerability can be found on the Red Hat Customer Portal.

Update on Red Hat Enterprise Linux 6 and FIPS 140 validations

Red Hat achieved its latest successful FIPS 140 validation back in April 2013. Since then, a lot has happened. There have been well publicized attacks on cryptographic protocols, weaknesses in implementations, and changing government requirements. With all of these issues in play, we want to explain what we are doing about it.

One of the big changes was that we enabled support for Elliptic Curve Cryptography (ECC) and Elliptic Curve Diffie-Hellman (ECDH) in Red Hat Enterprise Linux to meet the National Institute of Standards and Technology’s (NIST’s) “Suite B” requirements taking effect this year. Because we added new ciphers, we knew we needed to re-certify. Re-certification brings many advantages to our government customers, who not only gain coverage for the new algorithms but also maintain coverage from our last FIPS 140 validation effort. Another advantage of re-certification is that we have picked up fixes for BEAST, Lucky 13, Heartbleed, POODLE, and some lesser-known vulnerabilities around certificate validation. It should be noted that these attacks are against higher-level protocols that are not part of any crypto primitives covered by a FIPS validation, but knowing the fixes are in the packages under evaluation should give customers additional peace of mind.

The Red Hat Enterprise Linux 6 re-certification is now under way. It includes reworked packages to meet all the updated requirements that NIST has put forth taking effect Jan. 1, 2014, such as a new Deterministic Random Bit Generator (DRBG) as specified in SP 800-90A (PDF); an updated RSA key generation technique as specified in FIPS 186-4 (PDF); and updated key sizes and algorithms as specified in SP 800-131A (PDF).

Progress on the certification is moving along – we’ve completed review and preliminary testing and are now applying for Cryptographic Algorithm Validation System (CAVS) certificates. After that, we’ll submit validation paperwork to NIST. All modules being re-certified are currently listed on NIST’s Modules in Process page, except Volume Encryption (dm-crypt). Its re-certification is taking a different route because the change is so minor that it does not need CAVS testing. We are expecting the certifications to be completed early this year.

Disabling SSLv3 on the client and server

Recently, some Internet search engines announced that they would prefer websites secured with encryption over those that are not.  Of course there are other reasons why securing your website with encryption is beneficial: protecting authentication credentials, mitigating the use of cookies as a means of tracking and allowing access, providing privacy for your users, and authenticating your own server, thus protecting the information you are trying to convey to your users.  And while setting up and using encryption on a webserver can be trivial, doing it properly might take a few additional minutes.

Red Hat strives to ship sane defaults that allow both security and availability.  Depending on your clients, a more stringent or more lax configuration may be desirable.  Red Hat Support provides both written documentation and friendly people who can help make sense of it all.  Ultimately, it is the responsibility of the system owner to secure the systems they host.

Good cryptographic protocols

Protocols are the basis for all cryptography and provide the instructions for implementing ciphers and using certificates.  In the asymmetric, or public key, encryption world the protocols are all based on the Secure Sockets Layer, or SSL, protocol.  SSL has come a long way since its initial release in 1995.  Development has moved relatively quickly, and the latest version, Transport Layer Security version 1.2 (TLS 1.2), is now the standard that all new software should support.

Unfortunately some of the software found on the Internet still supports or even requires older versions of the SSL protocol.  These older protocols are showing their age and are starting to fail.  The most recent example is the POODLE vulnerability which showed how weak SSL 3.0 really is.

In response to the weakened protocol, Red Hat has provided advice on disabling SSL 3.0 in its products and on helping its customers implement the best available cryptography.  This can be seen in products from Apache httpd to Mozilla Firefox.  Because SSL 3.0 is quickly approaching its twentieth birthday, it’s probably best to move on to newer and better options.

Of course the protocol can’t fix everything if you’re using bad ciphers.

Good cryptographic ciphers

Cryptographic ciphers are just as important for protecting your information.  Weak ciphers, like RC4, are still used on the Internet today even though better and more efficient ciphers are available.  Unfortunately the recommendations change frequently: what was suggested just a few months ago may no longer be a good choice today.  As more research goes into the available ciphers, weaknesses are discovered.

Fortunately there are resources available to help you stay up to date.  Mozilla provides recommended cipher choices that are updated regularly.  Broken down into three categories, system owners can determine which configuration best meets their needs.

Of course the cipher can’t fix everything if your certificates are not secure.

Certificates

Certificates are what authenticate your server to your users.  If an attacker can spoof your certificate, they can intercept all traffic going between your server and your users.  It’s important to protect your keys and certificates once they have been generated.  Using a hardware security module (HSM) to store your private keys is a great idea.  Using a reputable certificate authority is equally important.

Clients

Most clients that support SSL/TLS encryption automatically try to negotiate the latest version.  We found with the POODLE attack that HTTP clients, such as Firefox, could be downgraded to a weak protocol like SSL 3.0.  Because of this, many server owners went ahead and disabled SSL 3.0 to prevent the downgrade attack from affecting their users.  Mozilla has, with the latest version of Firefox, disabled SSL 3.0 by default (although it can be re-enabled for legacy support).  Now users are protected even if server owners are lax in their security (although they are still at the mercy of the server’s cipher and protocol choices).

Much of the work has already been done behind the scenes, in the development of the software that serves up websites as well as the software that consumes the data coming from those servers.  The final step is for system owners to implement the technology that is available.  While a healthy understanding of cryptography and public key infrastructure is good, it is not necessary in order to properly implement good cryptographic solutions.  What is important is protecting your data and that of your users.  Trust is built during every interaction, and your website is usually a large part of that interaction.

Can SSL 3.0 be fixed? An analysis of the POODLE attack.

SSL and TLS are cryptographic protocols which allow users to securely communicate over the Internet. Their development history is no different from other standards on the Internet. Security flaws were found with older versions and other improvements were required as technology progressed (for example elliptic curve cryptography or ECC), which led to the creation of newer versions of the protocol.

It is easier to write newer standards, and maybe even implement them in code, than to adapt existing ones while maintaining backward compatibility. The widespread use of SSL/TLS to secure traffic on the Internet makes a uniform update difficult. This is especially true for hardware and embedded devices such as routers and consumer electronics which may receive infrequent updates from their vendors.

The fact that legacy systems and protocols need to be supported, even though more secure options are available, has led to the inclusion of a version negotiation mechanism in SSL/TLS protocols. This mechanism allows a client and a server to communicate even if the highest SSL/TLS version they support is not identical. The client indicates the highest version it supports in its ClientHello handshake message, then the server picks the highest version supported by both the client and the server, then communicates this version back to the client in its ServerHello handshake message. The SSL/TLS protocols implement protections to prevent a man-in-the-middle (MITM) attacker from being able to tamper with handshake messages that force the use of a protocol version lower than the highest version supported by both the client and the server.

Most popular browsers implement a different, out-of-band mechanism for fallback to earlier protocol versions. Some SSL/TLS implementations do not correctly handle cases where a connecting client supports a newer TLS protocol version than the server does, or where certain TLS extensions are used. Instead of negotiating the highest TLS version supported by the server, the connection attempt may fail. As a workaround, the web browser may attempt to re-connect with certain protocol versions disabled. For example, the browser may initially connect claiming TLS 1.2 as the highest supported version, and subsequently reconnect claiming only TLS 1.1, TLS 1.0, or eventually SSL 3.0 as the highest supported version, until the connection attempt succeeds. This can trivially allow a MITM attacker to cause a protocol downgrade and make the client and server use SSL 3.0. This fallback behavior is not seen in non-HTTPS clients.

The issue related to the POODLE flaw is an attack against the “authenticate-then-encrypt” constructions used by block ciphers in cipher block chaining (CBC) mode, as used in SSL and TLS. Once SSL 3.0 is in use, on average about 256 connections are enough to reliably decrypt one byte of ciphertext. Known flaws already affect RC4 and other non-block ciphers, and their use is discouraged.
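
To make the “roughly 256 connections per byte” figure concrete, the following toy simulation (Python, assuming the third-party pycryptodome package for AES) mimics the SSL 3.0-style check in which only the final pad-length byte matters. HMAC-SHA1 stands in for the SSL 3.0 MAC and the record framing is heavily simplified; this illustrates the principle, not the exact attack from the paper.

    import hmac, hashlib, os
    from Crypto.Cipher import AES      # pip install pycryptodome

    BLOCK = 16
    SECRET_COOKIE = b"Cookie: secret=S"            # 16 bytes; we recover its last byte

    def ssl3_encrypt(key, mac_key, plaintext):
        # MAC-then-pad-then-encrypt; only the final pad-length byte is defined.
        mac = hmac.new(mac_key, plaintext, hashlib.sha1).digest()
        data = plaintext + mac
        padlen = BLOCK - (len(data) % BLOCK) - 1
        data += os.urandom(padlen) + bytes([padlen])
        iv = os.urandom(BLOCK)
        return iv + AES.new(key, AES.MODE_CBC, iv).encrypt(data)

    def ssl3_accepts(key, mac_key, ct):
        iv, body = ct[:BLOCK], ct[BLOCK:]
        data = AES.new(key, AES.MODE_CBC, iv).decrypt(body)
        padlen = data[-1]
        if padlen + 21 > len(data):
            return False
        msg = data[:-(padlen + 21)]
        mac = data[-(padlen + 21):-(padlen + 1)]
        return hmac.compare_digest(hmac.new(mac_key, msg, hashlib.sha1).digest(), mac)

    # The attacker pads the request so the cookie fills one block and the final
    # block is pure padding, then swaps that final block for the cookie block.
    tries = 0
    while True:
        tries += 1
        key, mac_key = os.urandom(16), os.urandom(20)        # fresh keys per connection
        ct = ssl3_encrypt(key, mac_key, SECRET_COOKIE + b"A" * 12)
        iv = ct[:BLOCK]
        c = [ct[i:i + BLOCK] for i in range(BLOCK, len(ct), BLOCK)]
        tampered = iv + c[0] + c[1] + c[2] + c[0]            # last block := cookie block
        if ssl3_accepts(key, mac_key, tampered):             # succeeds ~1 time in 256
            byte = 15 ^ c[2][-1] ^ iv[-1]
            print("recovered last cookie byte %r after %d connections" % (chr(byte), tries))
            break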

Several cryptographic library vendors have issued patches which introduce TLS Fallback Signaling Cipher Suite Value (TLS_FALLBACK_SCSV) support in their libraries. This is essentially a fallback mechanism in which clients indicate to the server that they can speak a newer SSL/TLS version than the one they are proposing. If TLS_FALLBACK_SCSV is included in the ClientHello and the highest protocol version supported by the server is higher than the version indicated by the client, the server aborts the connection, because it means that the client is trying to fall back to an older version even though it can speak a newer one.
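
As a rough sketch, the server-side rule from RFC 7507 boils down to one comparison (0x5600 is the real TLS_FALLBACK_SCSV code point; everything else here is illustrative):

    TLS_FALLBACK_SCSV = 0x5600

    def check_fallback_scsv(client_version, offered_cipher_suites, server_max_version):
        # If the client signals that it is falling back, yet offers a lower
        # version than the best one we support, abort the handshake.
        if TLS_FALLBACK_SCSV in offered_cipher_suites and client_version < server_max_version:
            raise ConnectionAbortedError("fatal alert: inappropriate_fallback")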

Before applying this fix, there are several things that need to be understood:

  • As discussed before, only web browsers perform an out-of-band protocol fallback. Not all web browsers currently support TLS_FALLBACK_SCSV in their released versions. Even if the patch is applied on the server, the connection may still be unsafe if the browser is able to negotiate SSL 3.0.
  • Clients which do not implement out-of-protocol TLS version downgrades (generally anything which does not speak HTTPS) do not need to be changed. Adding TLS_FALLBACK_SCSV is unnecessary (and even impossible) if there is no downgrade logic in the client application.
  • The TLS/SSL server needs to be patched to support the SCSV extension – though, as opposed to the client, the server does not have to be rebuilt with source changes applied; just installing an upgraded TLS library is sufficient. Due to the current lack of browser support, this server-side change does not have any positive security impact as of this writing. It only prepares for a future where a significant share of browsers implement TLS_FALLBACK_SCSV.
  • If both the server and the client are patched and one of them only supports SSL 3.0, SSL 3.0 will be used directly, which results in a connection with reduced security (compared to currently recommended practices). However, the alternative is a total connection failure or, in some situations, an unencrypted connection which does nothing to protect from an MITM attack. SSL 3.0 is still better than an unencrypted connection.
  • As a stop-gap measure against attacks based on SSL 3.0, disabling support for this aging protocol can be performed on the server and the client. Advice on disabling SSL 3.0 in various Red Hat products and components is available on the Knowledge Base.

Information about (the lack of) ongoing attacks may help with a decision. Protocol downgrades are not covert attacks, particularly in this case. It is possible to log SSL/TLS protocol versions negotiated with clients and compare these versions with expected version numbers (as derived from user profiles or the HTTP user agent header). Even after a forced downgrade to SSL 3.0, HTTPS protects against tampering. The plaintext recovery attack described in the POODLE paper (Bodo Möller, Thai Duong, Krzysztof Kotowicz, This POODLE Bites: Exploiting The SSL 3.0 Fallback, September 2014) can be detected by the server, and even the sheer number of requests it generates could be noticeable.

Red Hat has done additional research regarding the downgrade attack in question. We have not found any clients that can be forcibly downgraded by an attacker other than clients that speak HTTPS. Due to this fact, disabling SSL 3.0 on services which are not used by HTTPS clients does not affect the level of security offered. A client that supports a higher protocol version and cannot be downgraded is not at issue as it will always use the higher protocol version.

SSL 3.0 cannot be repaired at this point because what constitutes the SSL 3.0 protocol is set in stone by its specification. However, starting in 1999, successor protocols to SSL 3.0 were developed called TLS 1.0, 1.1, and 1.2 (which is currently the most recent version). Because of the built-in protocol upgrade mechanisms, these successor protocols will be used whenever possible. In this sense, SSL 3.0 has indeed been fixed – an update to SSL 3.0 should be seen as being TLS 1.0, 1.1, and 1.2. Implementing TLS_FALLBACK_SCSV handling in servers makes sure that attackers cannot circumvent the fixes in later protocol versions.

POODLE – An SSL 3.0 Vulnerability (CVE-2014-3566)

Red Hat Product Security has been made aware of a vulnerability in the SSL 3.0 protocol, which has been assigned CVE-2014-3566. All implementations of SSL 3.0 are affected. This vulnerability allows a man-in-the-middle attacker to decrypt ciphertext using a padding oracle side-channel attack.

To mitigate this vulnerability, it is recommended that you explicitly disable SSL 3.0 in favor of TLS 1.1 or later in all affected packages.
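
As an illustration, an application that uses Python’s standard ssl module could rule out SSL 3.0 (and SSL 2.0) as follows; OpenSSL-, NSS- and GnuTLS-based applications expose similar options.

    import ssl

    # Negotiate the highest mutually supported protocol, but never SSL 2.0/3.0.
    ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3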

A brief history

Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols designed to provide communication security over networks. The SSL protocol was originally developed by Netscape. Version 1.0 was never publicly released; version 2.0 was released in February 1995 but contained a number of security flaws which ultimately led to the design of SSL 3.0. Over the years, several flaws were found in the design of SSL 3.0 as well. This ultimately led to the development and widespread use of the TLS protocol.

Most TLS implementations remain backward compatible with SSL 3.0 to accommodate legacy systems and provide a smoother user experience. Many SSL clients implement a protocol downgrade “dance” to work around server-side interoperability issues. Once the connection is downgraded to SSL 3.0, RC4 or a block cipher in CBC mode is used; this is where the problem starts!

What is POODLE?

The POODLE vulnerability has two aspects. The first aspect is a weakness in the SSL 3.0 protocol, a padding oracle. An attacker can exploit this vulnerability to recover small amounts of plaintext from an encrypted SSL 3.0 connection, by issuing crafted HTTPS requests created by client-side Javascript code, for example. Multiple HTTPS requests are required for each recovered plaintext byte, and the vulnerability allows attackers to confirm if a particular byte was guessed correctly. This vulnerability is inherent to SSL 3.0 and unavoidable in this protocol version. The fix is to upgrade to newer versions, up to TLS 1.2 if possible.

Normally, a client and a server automatically negotiate the most recent supported protocol version of SSL/TLS. The second aspect of the POODLE vulnerability concerns this negotiation mechanism. For the protocol negotiation mechanism to work, servers must gracefully deal with a more recent protocol version offered by clients. (The connection would just use the older, server-supported version in such a scenario, not benefiting from future protocol enhancements.) However, when newer TLS versions were deployed, it was discovered that some servers just terminated the connection at the TCP layer or responded with a fatal handshake error, preventing a secure connection from being established. Clearly, this server behavior is a violation of the TLS protocol, but there were concerns that this behavior would make it impossible to deploy upgraded clients and widespread interoperability failures were feared. Consequently, browsers first try a recent TLS version, and if that fails, they attempt again with older protocol versions, until they end up at SSL 3.0, which suffers from the padding-related vulnerability described above. This behavior is sometimes called the compatibility dance. It is not part of TLS implementations such as OpenSSL, NSS, or GNUTLS; it is implemented by application code in client applications such as Firefox and Thunderbird.

Both aspects of POODLE require a man-in-the-middle attack at the network layer. The first aspect of this flaw, the SSL 3.0 vulnerability, requires that an attacker can observe the network traffic between a client and a server and somehow trigger crafted network traffic from the client. This part does not strictly require active manipulation of the network transmission; passive eavesdropping is sufficient. However, the second aspect, the forced protocol downgrade, requires active manipulation of network traffic. As described in the POODLE paper, the full attack therefore requires the attacker to be able to both observe and manipulate network traffic while it is in transit.


How are modern browsers affected by the POODLE security flaw?

Browsers are particularly vulnerable because session cookies are short and an ideal target for plaintext recovery, and the way HTTPS works allows an attacker to generate many guesses quickly (either through Javascript or by downloading images). Browsers are also most likely to implement the compatibility fallback. By default, Firefox supports SSL 3.0 and performs the compatibility fallback as described above. SSL 3.0 support can be switched off, but the compatibility fallback cannot be configured separately.

Is this issue fixed?

The first aspect of POODLE, the SSL 3.0 protocol vulnerability, has already been fixed through iterative protocol improvements, leading to the current TLS version, 1.2. It is simply not possible to address this in the context of the SSL 3.0 protocol; a protocol upgrade to one of the successors is needed. Note that TLS versions before 1.1 had similar padding-related vulnerabilities, which is why we recommend switching to at least TLS 1.1. (SSL and TLS are still quite similar as protocols; the name change has non-technical reasons.)

The second aspect, caused by browsers which implement the compatibility fallback in an insecure way, has yet to be addressed. Strictly speaking, this is a security vulnerability in browsers due to the way they misuse the TLS protocol. One way to fix this issue would be to remove the compatibility dance, focusing instead on making servers compatible with clients implementing the most recent TLS version (as explained, the protocol supports a version negotiation mechanism, but some servers refuse to implement it).

However, there is an industry-wide effort under way to enable browsers to downgrade in a secure fashion, using a new Signaling Cipher Suite Value (SCSV). This will require updates in browsers (such as Firefox) and TLS libraries (such as OpenSSL, NSS and GNUTLS). However, we do not envision changes in TLS client applications which currently do not implement the fallback logic, nor in TLS server applications as long as they use one of the system TLS libraries. TLS-aware packet filters, firewalls, load balancers, and other infrastructure may need upgrades as well.

Is there going to be another SSL 3.0 issue in the near future? Is there a long term solution?

Disabling SSL 3.0 will obviously prevent exposure to future SSL 3.0-specific issues. The new SCSV-based downgrade mechanism should reliably prevent the use of SSL 3.0 if both parties support a newer protocol version. Once these software updates are widely deployed, the need to disable SSL 3.0 to address this and future vulnerabilities will hopefully be greatly reduced.

SSL 3.0 is typically used in conjunction with the RC4 stream cipher. (The only other secure option in a strict, SSL 3.0-only implementation is Triple DES, which is quite slow even on modern CPUs.) RC4 is already considered very weak, and SSL 3.0 does not even apply some of the recommended countermeasures which prolonged the lifetime of RC4 in other contexts. This is another reason to deploy support for more recent TLS versions.

I have patched my SSL implementation against BEAST and LUCKY-13, am I still vulnerable?

This depends on the type of mitigation you have implemented. If you disabled protocol versions earlier than TLS 1.1 (which includes SSL 3.0), then the POODLE issue does not affect your installation. If you forced clients to use RC4, the first aspect of POODLE does not apply, but you and your users are vulnerable to all of the weaknesses in RC4. If you implemented the n/n-1 split through a software update, or if you deployed TLS 1.1 support without enforcing it, but made no other configuration changes, you are still vulnerable to the POODLE issue.

Is it possible to monitor for exploit attempts?

The protocol downgrade is visible on the server side. Usually, servers can log TLS protocol versions. This information can then be compared with user agents or other information from the profile of a logged-in user, and mismatches could indicate attack attempts.

Attempts to abuse the SSL 3.0 padding oracle part of POODLE, as described in the paper, are visible to the server as well. They result in a fair number of HTTPS requests which follow a pattern not expected during the normal course of execution of a web application. However, it cannot be ruled out that a more sophisticated adaptive chosen plain text attack avoids confirmation of guesses from the server, and this more advanced attack would not be visible to the server, only to the client and the network up to the point at which the attacker performs their traffic manipulation.

What happens when I disable SSL 3.0 on my web server?

Some old browsers may not be able to support a secure connection to your site. Estimates of the number of such browsers in active use vary and depend on the target audience of a web site. SSL protocol version logging (see above) can be used to estimate the impact of disabling SSL 3.0 because it will be used only if no TLS version is available in the client.

Major browser vendors, including Mozilla and Google, have announced that they will deactivate SSL 3.0 in upcoming versions of their browsers.

How do I secure my Red Hat-supported software?

Red Hat has put together several articles regarding the removal of SSL 3.0 from its products.  Customers should review the recommendations and test changes before making them live in production systems.  As always, Red Hat Support is available to answer any questions you may have.

TLS landscape

Transport Layer Security (TLS) or, as it was known in the early days of the Internet, Secure Sockets Layer (SSL) is the technology responsible for securing communications between different devices. It is used every day by nearly everyone using the globe-spanning network.

Let’s take a closer look at how TLS is used by servers that underpin the World Wide Web and how the promise of security is actually executed.

Adoption

Hyper Text Transfer Protocol (HTTP) in versions 1.1 and older makes encryption (and thus the use of TLS) optional. Given that the upcoming HTTP 2.0 is expected to require the use of TLS and that Google now uses HTTPS in its ranking algorithm, many sites are expected to become TLS-enabled.

Surveying the Alexa top 1 million sites, most domains still don’t provide a secure communication channel for their users.

Just under 40% of HTTP servers support TLS or SSL and present valid certificates.

Additionally, if we look at the protocol versions supported by the servers, most don’t support the newest (and most secure) version, TLSv1.2.  Of more concern is the number of sites that still support the completely insecure SSLv2 protocol.

Only half of HTTPS servers support TLS 1.2

(There are no results for SSLv2 for the first 3 months because of an error in the software that was collecting the data.)

One of the newest and most secure ciphers available in TLS is Advanced Encryption Standard (AES) in Galois/Counter Mode (AES-GCM). Those ciphers provide good security, resiliency against known attacks (BEAST and Lucky13), and very good performance for machines with hardware accelerators for them (modern Intel and AMD CPUs, upcoming ARM).

Unfortunately, AES-GCM adoption is growing a bit more slowly than TLS adoption in general, which means that some newly deployed servers aren’t using new cryptographic libraries or are configured not to use all of their features.

Only 40% of TLS web servers support AES-GCM ciphersuites.

Bad recommendations

A few years back, a weakness in TLS 1.0 and SSL 3 was shown to be exploitable in the BEAST attack. The recommended workaround for it was to use RC4-based ciphers. Unfortunately, we later learned that the RC4 cipher is much weaker than it was previously estimated. As the vulnerability that allowed BEAST was fixed in TLSv1.1, using RC4 ciphers with new protocol versions was always unnecessary. Additionally, now all major clients have implemented workarounds for this attack, which currently makes using RC4 a bad idea.

Unfortunately, many servers prefer RC4, and some (~1%) actually support only RC4.  This makes it impossible to disable this weak cipher on the client side in order to force the rest of the servers (nearly 19%) to use a different cipher suite.

RC4 is still used with more than 18% of HTTPS servers.

The other common issue is that many certificates are still signed using the obsolete SHA-1. This is mostly caused by backwards compatibility with clients like Windows XP pre SP2 and old phones.

SHA-256 certificates only recently started growing in numbers

The sudden increase in the SHA-256 between April and May was caused by re-issuance of certificates in the wake of Heartbleed.

Bad configuration

Many servers also support insecure cipher suites. In the latest scan, over 3.5% of servers support cipher suites that use AECDH key exchange, which is completely insecure against man-in-the-middle attacks. Many servers also support single DES (around 15%) and export-grade cipher suites (around 15%). In total, around 20% of servers support some kind of broken cipher suite.

While correctly implemented SSLv3 and later shouldn’t allow negotiation of those weak ciphers if stronger ones are supported by both client and server, at least one commonly used implementation had a vulnerability that did allow changing the cipher suite to an arbitrary one supported by both client and server. That’s why it is important to occasionally clean up the list of supported ciphers, on both the server and the client side.

Forward secrecy

Forward secrecy, also known as perfect forward secrecy (PFS), is a property of a cipher suite that makes it impossible to decrypt recorded communication between client and server even when the attacker knows the server’s private key. In other words, it protects past communication in case the private key is leaked or stolen. That’s why it is such a desirable property.

The good news is that most servers (over 60%) not only support, but will actually negotiate, cipher suites that provide forward secrecy with clients that support it. The types used are split essentially between 1024-bit DHE and 256-bit ECDHE, accounting for 29% and 33% of all servers, respectively, in the latest scan. The number of servers that negotiate PFS-enabled cipher suites is also steadily growing.

PFS support among TLS-enabled HTTP servers

Summary

Most Internet-facing servers are badly configured. Sometimes this is caused by a lack of functionality in the software, as in the case of old Apache 2.2.x releases that don’t support ECDHE key exchange, and sometimes by side effects of using new software with an old configuration: many configuration tutorials suggested using !ADH in the cipher string to disable anonymous cipher suites, but that does not disable the anonymous elliptic curve variant of DH (AECDH); for that, !aNULL is necessary.
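
To illustrate the !ADH versus !aNULL point, an application driving OpenSSL through Python’s ssl module might set its cipher string as follows; the preference list itself is only an example, not a recommendation.

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    # "!aNULL" excludes every anonymous key exchange, including AECDH;
    # "!ADH" on its own would leave the elliptic curve variant enabled.
    ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:ECDHE+AES:DHE+AES:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5")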

Thankfully, the situation seems to be improving, albeit rather slowly.

If you’re an administrator of a server, consider enabling TLS.  The days when encryption was slow and taxing on servers are long gone. If you already use TLS, double-check your configuration, preferably using the Mozilla guide to server configuration, as it is regularly updated. Make sure you enable PFS cipher suites and put them above non-PFS ciphers, and that both you and the Certificate Authority you’ve chosen use modern crypto (SHA-2) and large key sizes (at least 2048-bit RSA).

If you’re a user of a server and you’ve noticed that the server doesn’t use a correct configuration, try contacting the administrator – they may have just forgotten about it.

It’s all a question of time – AES timing attacks on OpenSSL

This blog post is co-authored with Andy Polyakov from the OpenSSL core team.

Advanced Encryption Standard (AES) is the most widely used symmetric block cipher today. Its use is mandatory in several US government and industry applications. Among commercial standards, AES is part of SSL/TLS, IPsec, 802.11i, SSH and numerous other security products used throughout the world.

Ever since the inclusion of AES as a federal standard via FIPS PUB 197, and even before that when it was known as Rijndael, there have been several attempts to cryptanalyze it. However, most of these attacks have not gone beyond the academic papers they were written in. One line of work worth mentioning at this point is the key recovery attacks against AES-192/AES-256. A second angle is attacks on AES implementations via side channels. A side-channel attack exploits information leaked through physical channels such as power consumption, noise or timing behaviour. In order to observe such behaviour, the attacker usually needs some kind of direct or semi-direct control over the implementation.

There has been some interest in side-channel attacks on the way OpenSSL implements AES. I suppose OpenSSL is chosen mainly because it’s the most popular cross-platform cryptographic library used on the internet: most Linux/Unix web servers use it, along with tons of closed-source products on all platforms. The earliest such work dates back to 2005, with the most recent being the cross-VM cache-timing attacks on the OpenSSL AES implementation described here and here. These are more alarming, mainly because with applications and data moving into the cloud, recovering AES keys from a cloud-based virtual machine via a side-channel attack could mean a complete failure of the encryption.

After doing some research on how AES is implemented in OpenSSL there are several interesting facts which have emerged, so stay tuned.

What are cache-timing attacks?

Cache memory is random access memory (RAM) that a microprocessor can access more quickly than regular RAM. As the microprocessor processes data, it looks first in the cache, and if it finds the data there (from a previous read), it does not have to do the more time-consuming read from larger memory. Just like all other resources, the cache is shared among running processes for efficiency and economy. This may be dangerous from a cryptographic point of view, as it opens up a covert channel which allows a malicious process to monitor the use of these caches and possibly, indirectly, recover information about the input data by carefully noting timing information about its own cache accesses.

A particular kind of attack called the flush+reload attack works by forcing data used by the victim process out of the cache, waiting a bit, then measuring the time it takes to access the data. If the victim process accesses the data while the spy process is waiting, it will get put back into the cache, and the spy process’s access to the data will be fast. If the victim process doesn’t access the data, it will stay out of the cache, and the spy process’s access will be slow. So, by measuring the access time, the spy can tell whether or not the victim accessed the data during the wait interval. All this is under the premise that the data is shared between victim and adversary.

Note that we are not talking about the secret key being shared, but effectively public data, specifically the lookup tables discussed in the next paragraph.

Is AES implementation in OpenSSL vulnerable to cache-timing attacks?

Any cipher relying heavily on S-boxes may be vulnerable to cache-timing attacks. The processor optimizes execution by loading these S-boxes into the cache so that subsequent accesses and lookups do not need to load them from main memory. Textbook implementations of these ciphers do not use constant-time lookups when accessing the S-boxes and, worse, each lookup depends on a portion of the secret encryption key. AES-128, as per the standard, requires 10 rounds, and each round involves 16 S-box lookups.

The Rijndael designers proposed a method which results in fast software implementations. The core idea is to merge the S-box lookup with other AES operations by switching to larger pre-computed tables. There are still 16 table lookups per round; these are customarily split across 4 tables, so that there are 4 lookups per table per round. Each table consists of 256 32-bit entries. These are referred to as T-tables, and in the case of the current research, the way they are loaded into the cache leads to timing leakage. The leakage, as described in the paper, is quantified by the probability of a cache line not being accessed as a result of a block operation. As each lookup table, be it the S-box or a pre-computed T-table, consists of 256 entries, that probability is (1 - n/256)^m, where n is the number of table elements accommodated in a single cache line and m is the number of references to the given table per block operation. The smaller the probability, the harder it is to mount the attack.
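
Plugging the numbers from this description into the formula gives a feel for the difference between a T-table implementation and the single 256-byte S-box code path mentioned below, assuming a 64-byte cache line (so 16 four-byte T-table entries or 64 one-byte S-box entries per line):

    # T-tables: n = 16 entries per cache line, m = 40 lookups per table per block (4 x 10 rounds).
    # Single S-box: n = 64 entries per cache line, m = 160 lookups per block (16 x 10 rounds).
    p_t_table = (1 - 16 / 256) ** 40
    p_s_box = (1 - 64 / 256) ** 160
    print("T-table: %.3f   S-box: %.1e" % (p_t_table, p_s_box))   # ~0.076 vs ~1e-20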

Aren’t cache-timing attacks local? How is a virtualized environment affected?

Enter KSM (Kernel SamePage Merging). KSM enables the kernel to examine two or more already running programs and compare their memory. If any memory regions or pages are identical, KSM reduces multiple identical memory pages to a single page, which is then marked copy-on-write. If the contents of the page are modified by a guest virtual machine, a new page is created for that guest. This means that cross-VM cache-timing attacks are now possible. You can stop KSM or modify its behaviour; some details are available here.

You did not answer my original question, is AES in OpenSSL affected?

In short, no. But, not to settle for easy answers, let’s have a closer look at how AES in OpenSSL operates. In fact, there are several implementations of AES in the OpenSSL codebase, and each of them may or may not be chosen based on specific run-time conditions. Note: all of this discussion is about OpenSSL version 1.0.1.

  • Intel Advanced Encryption Standard New Instructions, or AES-NI, is an extension to the x86 instruction set for Intel and AMD machines, introduced in 2008. Intel processors from Westmere onwards and AMD processors from Bulldozer onwards support it. The purpose of AES-NI is to allow AES to be performed by dedicated circuitry; no cache is involved, and hence it is immune to cache-timing attacks. OpenSSL uses AES-NI by default, unless it is disabled on purpose. Some hypervisors mask the AES-NI capability bit, which is customarily done to make sure that guests can be freely migrated within a heterogeneous cluster/farm. In those cases OpenSSL will resort to other implementations in its codebase.
  • If AES-NI is not available, OpenSSL will use either Vector Permutation AES (VPAES) or bit-sliced AES (BSAES), provided the SSSE3 instruction set extension is available. SSSE3 was first introduced in 2006, so there is a fair chance that it is available on most computers in use. Both of these techniques avoid data- and key-dependent branches and memory references, and are therefore immune to known timing attacks. VPAES is used for CBC encrypt, ECB and “obscure” modes like OFB and CFB, while BSAES is used for CBC decrypt, CTR and XTS.
  • Finally, if your processor supports neither AES-NI nor SSSE3, OpenSSL falls back to integer-only assembly code. Unlike widely used T-table implementations, this code path uses a single 256-byte S-box. This means that the probability of a cache line not being accessed as a result of a block operation would be (1-64/256)^160 ≈ 1e-20. “Would be”, because the actual probability is even lower, in fact zero, since the S-box is fully prefetched, in every round.

For completeness’ sake, it should be noted that OpenSSL does include a reference C implementation which has no mitigations against cache-timing attacks. This is platform-independent fallback code that is used on platforms with no assembly modules, as well as in cases when the assembler fails for some reason. On a side note, OpenSSL maintains really minimal assembler requirements for AES-NI and SSSE3; in fact, the code can be assembled on Fedora 1, even though hardware support for these instructions was added later.

The bottom line is that if you are using a Linux distribution which comes with OpenSSL binaries, there is a very good chance that the packagers have taken pains to ensure that the reference C implementation is not compiled in. (The same thing happens if you download the OpenSSL source code and compile it with the defaults.)

It’s not clear from the research paper how the researchers were able to conduct the side-channel attack. All evidence suggests that they ended up using the standard reference C implementation of AES instead of the assembly modules which have mitigations in place.  The researchers were contacted but did not respond to this point.  Anyone using an OpenSSL binary they built themselves using the defaults, or precompiled as part of a Linux distribution, should not be vulnerable to these attacks.