Tag Archives: Integrity

The SLOTH attack and IKE/IPsec

Executive Summary: The IKE daemons in RHEL7 (libreswan) and RHEL6 (openswan) are not vulnerable to the SLOTH attack. But the attack is still interesting to look at.

The SLOTH attack released today is a new transcript collision attack against some security protocols that use weak or broken hashes such as MD5 or SHA1. While it mostly focuses on the issues found in TLS, it also mentions weaknesses in the “Internet Key Exchange” (IKE) protocol used for IPsec VPNs. While the TLS findings are very interesting and have been assigned CVE-2015-7575, the described attacks against IKE/IPsec got close but did not result in any vulnerabilities. In the paper, the authors describe a Chosen Prefix collision attack against IKEv2 using RSA-MD5 and RSA-SHA1 to perform a Man-in-the-Middle (MITM) attack and a Generic collision attack against IKEv1 HMAC-MD5.

We looked at libreswan and openswan-2.6.32 compiled with NSS, as that is what we ship in RHEL7 and RHEL6. Upstream openswan with its custom crypto code was not evaluated. While no vulnerability was found, some hardening that would make this attack less practical was identified; it will be added in the next upstream version of libreswan.

Specifically, the attack was prevented because:

  • The SPIs in IKE are random and part of the hash, so the attack requires an online effort of 2^77 – not an offline attack as suggested in the paper.
  • MD5 is not enabled by default for IKEv2.
  • Weak Diffie-Hellman groups DH22, DH23 and DH24 are not enabled by default.
  • Libreswan as a server does not re-use nonces for multiple clients.
  • Libreswan destroys nonces when an IKE exchange times out (default 60s).
  • Bogus ID payloads in IKEv1 cause the connection to fail authentication.

The rest of this article explains the IKEv2 protocol and the SLOTH attack.

The IKEv2 protocol

The IKE exchange starts with an IKE_INIT packet exchange to perform the Diffie-Hellman Key Exchange. In this exchange, the initiator and responder exchange their nonces. The result of the DH exchange is that both parties now have a shared secret called SKEYSEED. This is fed into a mutually agreed PRF algorithm (which could be MD5, SHA1 or SHA2) to generate as much pseudo-random key material as needed. The first key(s) are for the IKE exchange itself (called the IKE SA or Parent SA), followed by keys for one or more IPsec SAs (also called Child SAs).

But before the SKEYSEED can be used, both ends need to perform an authentication step. This is the second packet exchange, called IKE_AUTH. It binds the Diffie-Hellman channel to an identity to prevent the MITM attack. Usually this is done with digital signatures over the session data to prove ownership of the identity’s private key. Technically, it signs a hash of the session data; in TLS, that signature is over a hash of the session data as well, which is what made TLS more vulnerable to the SLOTH attack.

The attack tricks both parties into signing a hash that the attacker can replay to the other party, faking the authentication of both entities.

They call this a “transcript collision”. To create the same hash on both sides, the attacker needs to be able to insert its own data into the session with the first party so that the hash of that data is identical to the hash of the session with the second party. It can then simply pass on the signatures without needing the private keys of the identities involved. It then needs to remain in the middle to decrypt, re-encrypt and pass on the data, while keeping a copy of the decrypted data.

The IKEv2 COOKIE

The initial IKE_INIT exchange does not have many payloads that can be used to manipulate the outcome of the hashing of the session data. The only candidate is the NOTIFY payload of type COOKIE.

Performing a Diffie-Hellman exchange is relatively expensive. An attacker could send a lot of IKE_INIT requests forcing the VPN server to use up its resources. These could all come from spoofed source IP addresses, so blacklisting such an attack is impossible. To defend against this, IKEv2 introduced the COOKIE mechanism. When the server gets too busy, instead of performing the Diffie-Hellman exchange, it calculates a cookie based on the client’s IP address, the client’s nonce and its own server secret. It hashes these and sends it as a COOKIE payload in an IKE_INIT reply to the client. It then deletes all the state for this client. If this IKE_INIT exchange was a spoofed request, nothing more will happen. If the request was a legitimate client, this client will receive the IKE_INIT reply, see the COOKIE payload and re-send the original IKE_INIT request, but this time it will include the COOKIE payload it received from the server. Once the server receives this IKE_INIT request with the COOKIE, it will calculate the cookie data (again) and if it matches, the client has proven that it contacted the server before. To avoid COOKIE replays and thwart attacks attempting to brute-force the server secret used for creating the cookies, the server is expected to regularly change its secret.
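
The cookie construction itself is simple. RFC-7296 suggests computing it as a hash over the initiator’s nonce, IP address and SPI plus a server-held secret, roughly Cookie = version-id | Hash(Ni | IPi | SPIi | secret). Here is a minimal sketch of the idea using the openssl command line (all values hypothetical; the exact field layout and hash algorithm are implementation choices, not mandated by the RFC):

$ NONCE=0011223344556677                 # Ni: the initiator's nonce
$ CLIENT_IP=192.0.2.10                   # IPi: the initiator's address
$ SPI=8899aabbccddeeff                   # SPIi: the initiator's SPI
$ SECRET=$(cat /run/ike-cookie-secret)   # server secret, rotated regularly
$ printf '%s%s%s%s' "$NONCE" "$CLIENT_IP" "$SPI" "$SECRET" | openssl dgst -sha256

Because everything except the secret comes from the client’s packet, the server can validate a returned cookie without keeping any per-client state.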

Abusing the COOKIE

The SLOTH attacker is the MITM between the VPN client and VPN server. It prepares an IKE_INIT request to the VPN server but waits for the VPN client to connect. Once the VPN client connects, the attacker uses the received data, which includes the proposals and the nonce, to calculate a malicious COOKIE payload, and sends this COOKIE to the VPN client. The VPN client will re-send the IKE_INIT request with the COOKIE to the MITM. The MITM now sends this data to the real VPN server to perform an IKE_INIT there. It includes the COOKIE payload even though the VPN server did not ask for a COOKIE. Why does the VPN server not reject this connection? Well, the IKEv2 RFC-7296 states:

When one party receives an IKE_SA_INIT request containing a cookie whose contents do not match the value expected, that party MUST ignore the cookie and process the message as if no cookie had been included

This rule was likely intended for a recovering server. If the server is no longer busy, it will stop sending cookies and stop requiring cookies. But a few clients that were just about to reconnect will send back the cookie they received when the server was still busy. The server shouldn’t reject these clients now, so the advice was to ignore the cookie in that case. Alternatively, the server could remember the last used secret for a while and, if it receives a cookie when it is not busy, still do the cookie validation. But that costs some resources too, which an attacker could abuse by sending IKE_INIT requests with bogus cookies. Limiting cookie validation to a window of time after the server becomes unbusy would mitigate this.

COOKIE size

The paper contains an error when it talks about this COOKIE size:

To implement the attack, we must first find a collision between m1 and m’1. We observe that in IKEv2 the length of the cookie is supposed to be at most 64 octets but we found that many implementations allow cookies of up to 2^16 bytes. We can use this flexibility in computing long collisions.

It is not clear where the authors got the value of 64. The RFC does not mention anything about a maximum cookie size. The COOKIE value is sent as a NOTIFY payload. These payloads have a two-byte Payload Length field, so NOTIFY data can legitimately be up to 65535 (2^16 − 1) bytes. Adding more bytes should not be possible. Any IKE implementation that reads more bytes than specified in the Payload Length value would be very broken. Assuming the COOKIE NOTIFY is the last payload in the packet, the attacker could increase the length specified in the IKE header and stuff additional bytes after this payload, but proper implementations would not read this data. In fact, libreswan encountered some interoperability problems when it did this by mistake while padding IKE packets to a multiple of 8 bytes (as per IKEv1 but not IKEv2) and got its IKE packets rejected by various implementations. Still, the authors claim 65535 bytes is enough for their attack.

Attacking the AUTH hash

Assuming the above works, the attacker needs to find a collision between m1 and m’1. The only numbers the authors claim could be feasible are for the case where MD5 is used for the authentication step in IKE_AUTH. An offline attack would then cost 2^16 to 2^39 operations, which they say would take about 5 hours. As the paper states, IKEv2 implementations either don’t support MD5, or if they do, it is not part of the default proposal set. The paper makes a case that the weak SHA1 is widely supported in IKEv2, but admits that using SHA1 would need far more computing power (they list 2^61 to 2^67, or 20 years). Note that libreswan (and openswan in RHEL) requires manual configuration to enable MD5 in IKEv2, but SHA1 is still allowed for compatibility.

The final step of the attack – Diffie-Hellman

Assuming the above succeeds, the attacker needs to ensure that g^xy’ = g^x’y. To facilitate that, they use a subgroup confinement attack, illustrated with the example of picking x’ = y’ = 0, in which case the two shared secrets both have the value 1. In practice this does not work, according to the authors, because most IKEv2 implementations validate the received Diffie-Hellman public value to ensure that it is larger than 1 and smaller than p – 1. They did find that Diffie-Hellman groups 22 to 24 are known to have many small subgroups, and implementations tend not to validate these. This led to an interesting discussion on one of the cypherpunks mailing lists about the mysterious nature of the DH groups in RFC-5114. These groups are not enabled in libreswan (or openswan in RHEL) by default, and require manual configuration, precisely because their origin is a mystery.

The IKEv1 attack

The paper briefly brainstorms about a variant of this attack using IKEv1. It would be interesting because MD5 is very common with IKEv1, but the paper is not really clear on how that attack would work. It mentions filling the ID payload with malicious data to trigger the collision, but such an ID would never pass validation.

Countermeasures

Work has already started on updating the cryptographic algorithms deemed mandatory to implement for IKE. Note that this does not state which algorithms are valid to use, or which to use by default. This work is happening in the IPsec working group at the IETF and can be found at draft-ietf-ipsecme-rfc4307bis. It is expected to go through a few more rounds of discussion, and one of the topics that will be raised is the weak DH groups specified in RFC-5114.

Upstream Libreswan has hardened its cookie handling code, preventing the attacker from sending an uninvited cookie to the server without having their connection dropped.

Important security notice regarding signing key and distribution of Red Hat Ceph Storage on Ubuntu and CentOS

Last week, Red Hat investigated an intrusion on the sites of both the Ceph community project (ceph.com) and Inktank (download.inktank.com), which were hosted on a computer system outside of Red Hat infrastructure.

download.inktank.com provided releases of the Red Hat Ceph product for Ubuntu and CentOS operating systems. Those product versions were signed with an Inktank signing key (id 5438C7019DCEEEAD). ceph.com provided the upstream packages for the Ceph community versions signed with a Ceph signing key (id 7EBFDD5D17ED316D). While the investigation into the intrusion is ongoing, our initial focus was on the integrity of the software and distribution channel for both sites.

To date, our investigation has not discovered any compromised code available for download on these sites. We cannot fully rule out the possibility that some compromised code was available for download at some point in the past.

For download.inktank.com, all builds were verified as matching known-good builds from a clean system. However, we can no longer trust the integrity of the Inktank signing key, and have therefore re-signed these versions of the Red Hat Ceph Storage products with the standard Red Hat release key. Customers of Red Hat Ceph Storage products should only use versions signed by the Red Hat release key.

For ceph.com, the Ceph community has created a new signing key (id E84AC2C0460F3994) for verifying their downloads.  See ceph.com for more details.

Customer data was not stored on the compromised system. The system did have usernames and hashes of the fixed passwords we supplied to customers to authenticate downloads.

To reiterate, based on our investigation to date, customers of the CentOS and Ubuntu versions of Red Hat Ceph Storage should, as a precautionary measure, download the rebuilt and newly signed product versions. We have identified and notified those customers directly.

Customers using Red Hat Ceph Storage products for Red Hat Enterprise Linux are not affected by this issue. Other Red Hat products are also not affected.

Customers who have any questions or need help moving to the new builds should contact Red Hat support or their Technical Account Manager.

Factoring RSA Keys With TLS Perfect Forward Secrecy

What is being disclosed today?

Back in 1996, Arjen Lenstra described an attack against an optimization (called the Chinese Remainder Theorem optimization, or RSA-CRT for short). If a fault happened during the computation of a signature (using the RSA-CRT optimization), an attacker might be able to recover the private key from the signature (an “RSA-CRT key leak”). At the time, use of cryptography on the Internet was uncommon, and even ten years later, most TLS (or HTTPS) connections were immune to this problem by design because they did not use RSA signatures. This changed gradually, when forward secrecy for TLS was recommended and introduced by many web sites.
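
To sketch the mathematics behind Lenstra’s observation: RSA-CRT computes a signature s over a (padded) message m by computing s_p = m^d mod p and s_q = m^d mod q separately and then combining the two halves. If a fault corrupts the s_p half but not s_q, the faulty signature s’ is still correct modulo q but wrong modulo p, so s’^e − m is divisible by q but not by p. Therefore gcd((s’^e − m) mod N, N) = q, and a single bad signature hands the attacker a prime factor of the public modulus N.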

We evaluated the source code of several free software TLS implementations to see if they implement hardening against this particular side-channel attack, and discovered that it is missing in some of these implementations. In addition, we used a TLS crawler to perform TLS handshakes with servers on the Internet, and collected evidence that this kind of hardening is still needed, and missing in some of the server implementations: We saw several RSA-CRT key leaks, where we should not have observed any at all.

The technical report, “Factoring RSA Keys With TLS Perfect Forward Secrecy”, is available in PDF format.

What is the impact of this vulnerability?

An observer who captures the leaked private key can use it to cryptographically impersonate the server after redirecting network traffic, that is, to conduct a man-in-the-middle attack. The leak can be seen either by the client making the TLS handshake or by a passive observer capturing network traffic. The key leak also enables decryption of connections which do not use forward secrecy, without the need for a man-in-the-middle attack. However, forward secrecy must be enabled in the server for this kind of key leak to happen in the first place, and with such a server configuration, most clients will use forward secrecy, so an active attack will be required against configurations which can theoretically lead to RSA-CRT key leaks.

Does this break RSA?

No. Lenstra’s attack is a so-called side-channel attack, which means that it does not attack RSA directly. Rather, it exploits unexpected implementation behavior. RSA, and the RSA-CRT optimization with appropriate hardening, is still considered secure.

Are Red Hat products affected?

The short answer is: no.

The longer answer is that some of our products do not implement the recommended hardening that protects against RSA-CRT key leaks. (OpenSSL and NSS already have RSA-CRT hardening.) We will continue to work with upstream projects and help them to implement this additional defense, as we did with Oracle in OpenJDK (which led to the CVE-2015-0478 fix in April this year). None of the key leaks we observed in the wild could be attributed to these open-source projects, and no key leaks showed up in our lab testing, which is why this additional hardening, while certainly desirable to have, does not seem critical at this time.

In the process of this disclosure, we consulted some of our partners and suppliers, particularly those involved in the distribution of RPM packages. They indicated that they already implement RSA-CRT hardening, at least in the configurations we use.

What would an attack look like?

The attack itself is unobservable because the attacker performs an off-line mathematical computation on data extracted from the TLS handshake. The leak itself could be noticed by an intrusion detection system if it checks all TLS handshakes for mathematical correctness.

For the key leaks we have observed, we do not think there is a way for remote attackers to produce key leaks at will, in the sense that an attacker could manipulate the server over the network in such a way that the probability of a key leak in a particular TLS handshake increases. The only thing the attacker can do is to capture as many handshakes as possible, perhaps by initiating many such handshakes themselves.

How difficult is the mathematical computation required to recover the key?

Once the necessary data is collected, the actual computation is marginally more complicated than a regular RSA signature verification. In short, it is quite cheap in terms of computing cost, particularly in comparison to other cryptographic attacks.

Does it make sense to disable forward secrecy, as a precaution?

No. If you expect that a key leak might happen in the future, it could well have happened already. Disabling forward secrecy would enable passive observers of past key leaks to decrypt future TLS sessions, from passively captured network traffic, without having to redirect client connections. This means that disabling forward secrecy generally makes things worse. (Disabling forward secrecy and replacing the server certificate with a new one would work, though.)

How can something called Perfect Forward Secrecy expose servers to additional vulnerabilities?

“Perfect Forward Secrecy” is just a name given to a particular tweak of the TLS protocol. It does not magically turn TLS into a perfect protocol (that is, resistant to all attacks), particularly if the implementation is incorrect or runs on faulty hardware.

Have you notified the affected vendors?

We tried to notify the affected vendors, and several of them engaged in a productive conversation. All browser PKI certificates for which we observed key leaks have been replaced and revoked.

Does this vulnerability have a name?

We think that “RSA-CRT hardening” (for the countermeasure) and “RSA-CRT key leaks” (for a successful side-channel attack) are sufficiently short and descriptive, and that no branding is needed. We expect that several CVE IDs will be assigned for the underlying vulnerabilities leading to RSA-CRT key leaks. Some vendors may also assign CVE IDs for missing RSA-CRT hardening, even where no key leaks have been seen in practice so far.

Secure distribution of RPM packages

This blog post looks at the final part of creating secure software: shipping it to users in a safe way. It explains how to use transport security and package signatures to achieve this goal.

yum versus rpm

There are two commonly used tools related to RPM package management, yum and rpm. (Recent Fedora versions have replaced yum with dnf, a rewrite with similar functionality.) The yum tool inspects package sources (repositories), downloads RPM packages, and makes sure that required dependencies are installed along with fresh package installations and package updates. yum uses rpm as a library to install packages. yum repositories are defined by .repo files in /etc/yum.repos.d, or by yum plugins for repository management (such as subscription-manager for Red Hat subscription management). rpm is the low-level tool which operates on an explicit set of RPM packages. rpm provides both a set of command-line tools and a library to process RPM packages. In contrast to yum, rpm checks package dependencies but does not resolve violations automatically. This means that rpm typically relies on yum to tell it exactly what to do; the recipe for a change to a package set is called a transaction. Securing package distribution at the yum layer resembles transport layer security. The rpm security mechanism is more like end-to-end security (in fact, rpm uses OpenPGP internally, which has traditionally been used for end-to-end email protection).
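
To make the division of labor concrete, here is a minimal example (using the same hypothetical hello package as in the signing examples below):

$ yum install hello                        # resolves dependencies, downloads from repositories
$ rpm -ivh hello-2.10.1-1.el6.x86_64.rpm   # installs exactly this file; unmet dependencies abort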

Transport security with yum

Transport security is comparatively easy to implement. The web server just needs to serve the package repository metadata (repomd.xml and its descendants) over HTTPS instead of HTTP. On the client, a .repo file in /etc/yum.repos.d has to look like this:

[gnu-hello]
name=gnu-hello for Fedora $releasever
baseurl=https://download.example.com/dist/fedora/$releasever/os/
enabled=1

$releasever expands to the Fedora version at run time (like “22”). By default, end-to-end security with RPM signatures is enabled (see the next section), but we will focus on transport security first.

yum will verify the cryptographic digests contained in the metadata files, so serving the metadata over HTTPS is sufficient, but offering the .rpm files over HTTPS as well is a sensible precaution. The metadata can instruct yum to download packages from absolute, unrelated URLs, so it is necessary to inspect the metadata to make sure it does not contain such absolute “http://” URLs. However, transport security with a third-party mirror network is quite meaningless, particularly if anyone can join the mirror network (as it is the case with CentOS, Debian, Fedora, and others). Rather than attacking the HTTPS connections directly, an attacker could just become part of the mirror network. There are two fundamentally different approaches to achieve some degree of transport security.
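
One way to check for such absolute URLs is to inspect the package location entries in the repository metadata. A rough sketch (the repodata file names vary per repository, and download.example.com is the hypothetical host from the .repo example above):

$ curl -sO https://download.example.com/dist/fedora/22/os/repodata/repomd.xml
$ # repomd.xml lists the exact name of the primary metadata file; fetch that file, then:
$ zcat *-primary.xml.gz | grep -o 'href="http://[^"]*"'
$ # any output here means package locations bypass the HTTPS base URL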

Fedora provides a centralized, non-mirrored, Fedora-run metalink service which provides a list of active mirrors and the expected cryptographic digest of the repomd.xml files. yum uses this information to select a mirror and verify that it serves the up-to-date, untampered repomd.xml. The chain of cryptographic digests is verified from there, eventually leading to verification of the .rpm file contents. This is how the long-standing Fedora bug 998 was eventually fixed.

Red Hat uses a different option to distribute Red Hat Enterprise Linux and its RPM-based products: a content-distribution network, managed by a trusted third party. Furthermore, the repositories provided by Red Hat use a separate public key infrastructure which is managed by Red Hat, so breaches in the browser PKI (that is, compromises of certificate authorities or misissued individual certificates) do not affect the transport security checks yum provides. Organizations that wish to implement something similar can use the sslcacert configuration switch of yum. This is the way Red Hat Satellite 6 implements transport security as well.

Transport security has the advantage that it is straightforward to set up (it is no more difficult than enabling HTTPS). It also guards against manipulation at a lower level, and will detect tampering before data is passed to complex file format parsers such as SQLite, RPM, or the XZ decompressor. However, end-to-end security is often more desirable, and we cover that in the next section.

End-to-end security with RPM signatures

RPM package signatures can be used to implement cryptographic integrity checks for RPM packages. This approach is end-to-end in the sense that the package build infrastructure at the vendor can use an offline or half-online private key (such as one stored in a hardware security module), and the final system which consumes these packages can directly verify the signatures because they are built into the .rpm package files. Intermediates such as proxies and caches (which are sometimes used to separate production servers from the Internet) cannot tamper with these signatures. In contrast, transport security protections are weakened or lost in such an environment.

Generating RPM signatures

To add an RPM signature to a .rpm package, you need to generate a GnuPG key first, using gpg --gen-key. Let’s assume that this key has the user ID “rpmsign@example.com”. We first export the public key part to a file in a special directory, otherwise rpmsign will not be able to verify the signatures we create, as it uses the RPM database as a source of trusted signing keys (and not the user GnuPG keyring):

$ mkdir $HOME/rpm-signing-keys
$ gpg --export -a rpmsign@example.com > $HOME/rpm-signing-keys/example-com.key

The name of the directory $HOME/rpm-signing-keys does not matter, but the name of the file containing the public key must end in “.key”. On Red Hat Enterprise Linux 7, CentOS 7, and Fedora, you may have to install the rpm-sign package, which contains the rpmsign program. The rpmsign command to create the signature looks like this:

$ rpmsign -D '_gpg_name rpmsign@example.com' --addsign hello-2.10.1-1.el6.x86_64.rpm
Enter pass phrase:
Pass phrase is good.
hello-2.10.1-1.el6.x86_64.rpm:

(On success, there is no output after the file name on the last line, and the shell prompt reappears.) The file hello-2.10.1-1.el6.x86_64.rpm is overwritten in place, with a variant that contains the signature embedded into the RPM header. The presence of a signature can be checked with this command:

$ rpm -Kv -D "_keyringpath $HOME/rpm-signing-keys" hello-2.10.1-1.el6.x86_64.rpm
hello-2.10.1-1.el6.x86_64.rpm:
    Header V4 RSA/SHA1 Signature, key ID de337997: OK
    Header SHA1 digest: OK (b2be54480baf46542bcf395358aef540f596c0b1)
    V4 RSA/SHA1 Signature, key ID de337997: OK
    MD5 digest: OK (6969408a8d61c74877691457e9e297c6)

If the output of this command contains “NOKEY” lines instead, like the following, it means that the public key in the directory $HOME/rpm-signing-keys has not been loaded successfully:

hello-2.10.1-1.el6.x86_64.rpm:
    Header V4 RSA/SHA1 Signature, key ID de337997: NOKEY
    Header SHA1 digest: OK (b2be54480baf46542bcf395358aef540f596c0b1)
    V4 RSA/SHA1 Signature, key ID de337997: NOKEY
    MD5 digest: OK (6969408a8d61c74877691457e9e297c6)

Afterwards, the RPM files can be distributed as usual and served over HTTP or HTTPS, as if they were unsigned.

Consuming RPM signatures

To enable RPM signature checking in yum explicitly, the yum repository file must contain a gpgcheck=1 line, as in:

[gnu-hello]
name=gnu-hello for Fedora $releasever
baseurl=https://download.example.com/dist/fedora/$releasever/os/
enabled=1
gpgcheck=1

Once signature checks are enabled in this way, package installation will fail with a NOKEY error until the signing key used by .rpm files in the repository is added to the system RPM database. This can be achieved with a command like this:

$ rpm --import https://download.example.com/keys/rpmsign.asc

The file needs to be transported over a trusted channel, hence the use of an https:// URL in the example. (It is also possible to instruct the user to download the file from a trusted web site, copy it to the target system, and import it directly from the file system.) Afterwards, package installation works as before.

After a key has been imported, it will appear in the output of the “rpm -qa” command:

$ rpm -qa | grep ^gpg-pubkey-
…
gpg-pubkey-ab0e12ef-de337997
…

More information about the key can be obtained with “rpm -qi gpg-pubkey-ab0e12ef-de337997”, and the key can be removed again using “rpm --erase gpg-pubkey-ab0e12ef-de337997”, just as if it were a regular RPM package.

Note: Package signatures are only checked by yum if the package is downloaded from a repository (which has checking enabled). This happens if the package is specified as a name or name-version-release on the yum command line. If the yum command line names a file or URL instead, or the rpm command is used, no signature check is performed in current versions of Red Hat Enterprise Linux, Fedora, or CentOS.

Issues to avoid

When publishing RPM software repositories, the following should be avoided:

  1. The recommended yum repository configuration uses baseurl lines containing http:// URLs.
  2. The recommended yum repository configuration explicitly disables RPM signature checking with gpgcheck=0.
  3. There are optional instructions to import RPM keys, but these instructions do not tell the system administrator to remove the gpgcheck=0 line in the default yum configuration provided by the independent software vendor.
  4. The recommended “rpm --import” command refers to the public key file using an http:// URL.

The first three deficiencies in particular open the system up to a straightforward man-in-the-middle attack on package downloads. An attacker can replace the repository or RPM files while they are downloaded, thus gaining the ability to execute arbitrary commands when they are installed. As outlined in the article on the PKI used by the Red Hat CDN, some enterprise networks perform TLS intercept, and HTTPS downloads will fail there. This possibility is not sufficient to justify weakening package authentication for all customers, such as recommending the use of http:// instead of https:// in the yum configuration. Similarly, some customers do not want to perform the extra step involving “rpm --import”, but again, this is not an excuse to disable verification for everyone, as long as RPM signatures are actually available in the repository. (Some software delivery processes make it difficult to create such end-to-end verifiable signatures.)

Summary

If you are creating a repository of packages, you should give your users a secure way to consume them. You can do this by following these recommendations:

  • Use https:// URLs everywhere in configuration advice regarding RPM repository setup for yum.
  • Create a signing key and use it to sign RPM packages, as outlined above.
  • Make sure RPM signature checking is enabled in the yum configuration.
  • Use an https:// URL to download the public key in the setup instructions.

We acknowledge that package signing might not be possible for everyone, but software downloads over HTTPS are straightforward to implement and should always be used.

Factoring RSA export keys – FREAK (CVE-2015-0204)

This week’s issue with OpenSSL export ciphersuites has been discussed in the press as “Freak” and “Smack”. These are addressed by CVE-2015-0204, and updates for affected Red Hat products were released in January.

Historically, the United States and several other countries tried to control the export or use of strong cryptographic primitives. For example, any company that exported cryptographic products from the United States needed to comply with certain key size limits. For RSA encryption, the maximum allowed key size was 512 bits and for symmetric encryption (DES at that time) it was 40 bits.

The U.S. government eventually lifted this policy and allowed cryptographic primitives with bigger key sizes to be exported. However, these export ciphersuites did not really go away and remained in a lot of codebases (including OpenSSL), probably for backward compatibility purposes.

It was considered safe to keep these export ciphersuites lying around for several reasons:

  1. Even if your webserver supports export ciphersuites, most modern browsers will not offer them as part of the initial handshake because they want to establish a session with strong cryptography.
  2. Even if you use export cipher suites, an attacker still needs to factor the 512-bit RSA key or brute-force the 40-bit DES key. Though doable with today’s cloud/GPU infrastructure, it is pointless to do this for a single session.

However, a security flaw in various cryptographic libraries, including OpenSSL, changed this: OpenSSL clients would accept RSA export-grade keys even when the client did not ask for export-grade RSA. This can lead to an active man-in-the-middle attack, allowing decryption and alteration of the TLS session, in the following way:

  • An OpenSSL client contacts a TLS server and asks for a standard RSA key (non-export).
  • A MITM intercepts this request and asks the server for an export-grade RSA key.
  • Once the server replies, the MITM attacker forwards this export-grade RSA key to the client. The client has a bug (as described above) that allows the export-grade key to be accepted.
  • In the meantime, the MITM attacker factors this key and is able to decrypt all data exchanged between the server and the client.
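
A quick way to check whether a server still offers export-grade RSA is to attempt a handshake restricted to export ciphers with the openssl command-line tool (hypothetical host name; the EXPORT cipher alias may be absent from newer OpenSSL builds):

$ openssl s_client -connect www.example.com:443 -cipher EXPORT < /dev/null

A completed handshake means the server is still willing to serve export-grade keys; a handshake failure means it refuses them.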

This issue was fixed in OpenSSL back in October of 2014 and shipped in January of 2015 in Red Hat Enterprise Linux 6 and 7 via RHSA-2015-0066. This issue has also been addressed in Fedora 20 and Fedora 21.

Red Hat Product Security initially classified this as having low security impact, but after more details about the issue and the possible attack scenarios have become clear, we re-classified it as a moderate-impact security issue.

Additional information on mitigating this vulnerability can be found on the Red Hat Customer Portal.

Update on Red Hat Enterprise Linux 6 and FIPS 140 validations

Red Hat achieved its latest successful FIPS 140 validation back in April 2013. Since then, a lot has happened. There have been well publicized attacks on cryptographic protocols, weaknesses in implementations, and changing government requirements. With all of these issues in play, we want to explain what we are doing about it.

One of the big changes was that we enabled support for Elliptic Curve Cryptography (ECC) and Elliptic Curve Diffie-Hellman (ECDH) in Red Hat Enterprise Linux to meet the National Institute of Standards and Technology’s (NIST’s) “Suite B” requirements taking effect this year. Because we added new ciphers, we knew we needed to re-certify. Re-certification brings many advantages to our government customers, who not only benefit from the re-certification but also maintain coverage from our last FIPS 140 validation effort. One advantage of re-certification is that we have picked up fixes for BEAST, Lucky 13, Heartbleed, POODLE, and some lesser known vulnerabilities around certificate validation. It should be noted that these attacks are against higher level protocols that are not part of any crypto primitives covered by a FIPS validation. But knowing the fixes are in the packages under evaluation should give customers additional peace of mind.

The Red Hat Enterprise Linux 6 re-certification is now under way. It includes reworked packages to meet all the updated requirements that NIST put forth with effect from Jan. 1, 2014, such as a new Deterministic Random Bit Generator (DRBG) as specified in SP 800-90A (PDF); an updated RSA key generation technique as specified in FIPS 186-4 (PDF); and updated key sizes and algorithms as specified in SP 800-131A (PDF).

Progress on the certification is moving along – we’ve completed review and preliminary testing and are now applying for Cryptographic Algorithm Validation System (CAVS) certificates. After that, we’ll submit validation paperwork to NIST. All modules being re-certified are currently listed on NIST’s Modules in Process page, except Volume Encryption (dm-crypt). Its re-certification is taking a different route because the change is so minor that it does not need CAVS testing. We expect the certifications to be completed early this year.

Analysis of the CVE-2013-6435 Flaw in RPM

The RPM Package Manager (RPM) is a powerful command-line driven package management system capable of installing, uninstalling, verifying, querying, and updating software packages. RPM was originally written in 1997 by Erik Troan and Marc Ewing. Since then RPM has been successfully used in all versions of Red Hat Linux and currently in Red Hat Enterprise Linux.

RPM offers considerable advantages over the traditional open-source software installation methodology of building from source via tarballs, especially when it comes to software distribution and management. This has led other Linux distributions to adopt RPM, either as the default package management system or as an alternative to their own defaults.

Like any large, widely used piece of software, RPM has gained several features over time, and several security flaws have been found in it. On several occasions Red Hat has found and fixed security issues in RPM.

Florian Weimer of Red Hat Product Security discovered an interesting flaw in RPM, which was assigned CVE-2013-6435. Firstly, let’s take a brief look at the structure of an RPM file. It consists of two main parts: the RPM header and the payload. The payload is a compressed CPIO archive of binary files that are installed by the RPM utility. The RPM header, among other things, contains a cryptographic checksum of all the installed files in the CPIO archive. The header also contains a provision for a cryptographic signature. The signature works by performing a mathematical function on the header and archive section of the file. The mathematical function can be an encryption process, such as PGP (Pretty Good Privacy), or a message digest in the MD5 format.
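
Both parts can be examined with standard tools; for example (package name illustrative):

$ rpm -qip hello-2.10.1-1.el6.x86_64.rpm              # header metadata, including the Signature field
$ rpm2cpio hello-2.10.1-1.el6.x86_64.rpm | cpio -t    # list the files in the CPIO payload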

If the RPM is signed, one can use the corresponding public key to verify the integrity and even the authenticity of the package. However, RPM only checked the header and not the payload during the installation.

When an RPM is installed, it writes the contents of the package to its target directory and then verifies its checksum against the value in the header. If the checksum does not match, that means something is wrong with the package (possibly someone has tampered with it) and the file is removed. At this point RPM refuses to install that particular package.

Though this may seem like the correct way to handle things, it has a bad consequence. Let’s assume RPM installs a file in the /etc/cron.d directory and then verifies its checksum. This leaves a small race window in which crond can run the file before the checksum is found to be incorrect and the file is removed. There are several ways to prolong this window as well. In the end, an attacker can achieve arbitrary code execution as root, even though the system administrator assumes that the RPM package was never installed.

The approach Red Hat used to solve the problem is:

  • Require the size in the header to match the size of the file in the payload. This prevents anyone from tampering with the payload, because the header is cryptographically verified. (This fix is already present in the upstream version of RPM.)
  • Set restrictive permissions while a file is being unpacked from an RPM package. This will only allow root to access those files. Also, several programs, including cron, check for sane permissions before running those files.

Another approach to mitigate this issue is the use of the O_TMPFILE flag. Linux kernel 3.11 and above introduced this flag, which can be passed to open(2) to simplify the creation of secure temporary files. Files opened with the O_TMPFILE flag are created, but they are not visible in the file system, and as soon as they are closed, they are deleted. There are two uses for these files: race-free temporary files and creation of initially unreachable files. These unreachable files can be written to or changed just like regular files. RPM could use this approach to create a temporary, unreachable file, run a checksum on it, and either delete it or atomically link it into place, without being vulnerable to the attack described above. However, as mentioned above, this feature is only available in Linux kernel 3.11 and above, was added to glibc 2.19, and is slowly making its way into GNU/Linux distributions.

The risk mentioned above is greatly reduced if the following precautions are followed:

  • Always check signatures of RPM packages before installing them. Red Hat RPMs are signed with cryptographic keys provided at https://access.redhat.com/security/team/key. When installing RPMs from Red Hat or Fedora repositories, Yum will automatically validate RPM packages via the respective public keys, unless explicitly told not to (via the “nogpgcheck” option and configuration directive).
  • Package downloads via Red Hat software repositories are protected via TLS/SSL so it is extremely difficult to tamper with them in transit. Fedora uses a whole-file hash chain rooted in a hash downloaded over TLS/SSL from a Fedora-run central server.

The above issue (CVE-2013-6435) has been fixed along with another issue (CVE-2014-8118), which is a potentially exploitable crash in the CPIO parser.

Red Hat customers should update to the latest versions of RPM via the following security advisories:
https://rhn.redhat.com/errata/RHSA-2014-1974.html
https://rhn.redhat.com/errata/RHSA-2014-1975.html
https://rhn.redhat.com/errata/RHSA-2014-1976.html

Disabling SSLv3 on the client and server

Recently, some Internet search engines announced that they would prefer websites secured with encryption over those that were not.  Of course there are other reasons why securing your website with encryption is beneficial: protecting authentication credentials, mitigating the use of cookies as a means of tracking and allowing access, providing privacy for your users, and authenticating your own server, thus protecting the information you are trying to convey to your users.  And while setting up and using encryption on a webserver can be trivial, doing it properly might take a few additional minutes.

Red Hat strives to ship sane defaults that allow both security and availability.  Depending on your clients a more stringent or lax configuration may be desirable.  Red Hat Support provides both written documentation as well as a friendly person that can help make sense of it all.  Inevitably, it is the responsibility of the system owner to secure the systems they host.

Good cryptographic protocols

Protocols are the basis for all cryptography and provide the instructions for implementing ciphers and using certificates.  In the asymmetric, or public key, encryption world the protocols are all based on the Secure Sockets Layer, or SSL, protocol.  SSL has come a long way since its initial release in 1995.  Development has moved relatively quickly and the latest version, Transport Layer Security version 1.2 (TLS 1.2), is now the standard that all new software should be supporting.

Unfortunately some of the software found on the Internet still supports or even requires older versions of the SSL protocol.  These older protocols are showing their age and are starting to fail.  The most recent example is the POODLE vulnerability which showed how weak SSL 3.0 really is.

In response to the weakened protocol, Red Hat has provided advice on disabling SSL 3.0 in its products, and has helped its customers implement the best available cryptography.  This is seen in products from Apache httpd to Mozilla Firefox.  Because SSL 3.0 is quickly approaching its twentieth birthday, it’s probably best to move on to newer and better options.

Of course the protocol can’t fix everything if you’re using bad ciphers.

Good cryptographic ciphers

Cryptographic ciphers are just as important for protecting your information.  Weak ciphers, like RC4, are still used on the Internet today even though better and more efficient ciphers are available.  Unfortunately, the recommendations change frequently.  What was suggested just a few months ago may no longer be a good choice today.  As more research goes into the available ciphers, weaknesses are discovered.

Fortunately there are resources available to help you stay up to date.  Mozilla provides recommended cipher choices that are updated regularly.  Broken down into three categories, system owners can determine which configuration best meets their needs.

Of course the cipher can’t fix everything if your certificates are not secure.

Certificates

Certificates are what authenticate your server to your users.  If an attacker can spoof your certificate they can intercept all traffic going between your server and users.  It’s important to protect your keys and certificates once they have been generated.  Using a hardware security module (HSM) to store your certificates is a great idea.  Using a reputable certificate authority is equally important.

Clients

Most clients that support SSL/TLS encryption automatically try to negotiate the latest version.  We found with the POODLE attack that HTTP clients, such as Firefox, could be downgraded to a weak protocol like SSL 3.0.  Because of this, many server owners disabled SSL 3.0 to prevent the downgrade attack from affecting their users.  Mozilla has, with the latest version of Firefox, disabled SSL 3.0 by default (although it can be re-enabled for legacy support).  Now users are protected even when server owners are lax in their security (although they are still at the mercy of the server’s cipher and protocol choices).

Much of the work has already been done behind the scenes, in the development of the software that serves up websites as well as the software that consumes the data coming from these servers.  The final step is for system owners to implement the technology that is available.  While a healthy understanding of cryptography and public key infrastructure is good, it is not necessary for properly implementing good cryptographic solutions.  What is important is protecting your data and that of your users.  Trust is built during every interaction, and your website is usually a large part of that interaction.

POODLE – An SSL 3.0 Vulnerability (CVE-2014-3566)

Red Hat Product Security has been made aware of a vulnerability in the SSL 3.0 protocol, which has been assigned CVE-2014-3566. All implementations of SSL 3.0 are affected. This vulnerability allows a man-in-the-middle attacker to decrypt ciphertext using a padding oracle side-channel attack.

To mitigate this vulnerability, it is recommended that you explicitly disable SSL 3.0 in favor of TLS 1.1 or later in all affected packages.
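
For example, with Apache httpd and mod_ssl, SSL 3.0 (and SSL 2.0) can be disabled with a single directive, and the result verified from any machine with the openssl tool (the -ssl3 option only exists in OpenSSL builds that still include SSLv3 support):

# In the Apache mod_ssl configuration, e.g. /etc/httpd/conf.d/ssl.conf:
#   SSLProtocol all -SSLv2 -SSLv3
# After a reload, verify; a handshake failure is the desired result:
$ openssl s_client -connect www.example.com:443 -ssl3 < /dev/null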

A brief history

Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols designed to provide communication security over networks. The SSL protocol was originally developed by Netscape. Version 1.0 was never publicly released; version 2.0 was released in February 1995 but contained a number of security flaws which ultimately led to the design of SSL 3.0. Over the years, several flaws were found in the design of SSL 3.0 as well. This ultimately led to the development and widespread use of the TLS protocol.

Most TLS implementations remain backward compatible with SSL 3.0 to accommodate legacy systems and provide a smoother user experience. Many SSL clients implement a protocol downgrade “dance” to work around server-side interoperability issues. Once the connection is downgraded to SSL 3.0, RC4 or a block cipher in CBC mode is used; this is where the problem starts!

What is POODLE?

The POODLE vulnerability has two aspects. The first aspect is a weakness in the SSL 3.0 protocol, a padding oracle. An attacker can exploit this vulnerability to recover small amounts of plaintext from an encrypted SSL 3.0 connection, by issuing crafted HTTPS requests created by client-side Javascript code, for example. Multiple HTTPS requests are required for each recovered plaintext byte, and the vulnerability allows attackers to confirm if a particular byte was guessed correctly. This vulnerability is inherent to SSL 3.0 and unavoidable in this protocol version. The fix is to upgrade to newer versions, up to TLS 1.2 if possible.

Normally, a client and a server automatically negotiate the most recent supported protocol version of SSL/TLS. The second aspect of the POODLE vulnerability concerns this negotiation mechanism. For the protocol negotiation mechanism to work, servers must gracefully deal with a more recent protocol version offered by clients. (The connection would just use the older, server-supported version in such a scenario, not benefiting from future protocol enhancements.) However, when newer TLS versions were deployed, it was discovered that some servers just terminated the connection at the TCP layer or responded with a fatal handshake error, preventing a secure connection from being established. Clearly, this server behavior is a violation of the TLS protocol, but there were concerns that this behavior would make it impossible to deploy upgraded clients and widespread interoperability failures were feared. Consequently, browsers first try a recent TLS version, and if that fails, they attempt again with older protocol versions, until they end up at SSL 3.0, which suffers from the padding-related vulnerability described above. This behavior is sometimes called the compatibility dance. It is not part of TLS implementations such as OpenSSL, NSS, or GNUTLS; it is implemented by application code in client applications such as Firefox and Thunderbird.

Both aspects of POODLE require a man-in-the-middle attack at the network layer. The first aspect of this flaw, the SSL 3.0 vulnerability, requires that an attacker can observe the network traffic between a client and a server and somehow trigger crafted network traffic from the client. This does not strictly require active manipulation of the network transmission; passive eavesdropping is sufficient. However, the second aspect, the forced protocol downgrade, requires active manipulation of network traffic.  As described in the POODLE paper, both aspects require the attacker to be able to observe and manipulate network traffic while it is in transit.


How are modern browsers affected by the POODLE security flaw?

Browsers are particularly vulnerable because session cookies are short and an ideal target for plaintext recovery, and the way HTTPS works allows an attacker to generate many guesses quickly (either through Javascript or by downloading images). Browsers are also the most likely to implement the compatibility fallback.

By default, Firefox supports SSL 3.0, and performs the compatibility fallback as described above. SSL 3.0 support can be switched off, but the compatibility fallback cannot be configured separately.

Is this issue fixed?

The first aspect of POODLE, the SSL 3.0 protocol vulnerability, has already been fixed through iterative protocol improvements, leading to the current TLS version, 1.2. It is simply not possible to address this in the context of the SSL 3.0 protocol; a protocol upgrade to one of the successors is needed. Note that TLS versions before 1.1 had similar padding-related vulnerabilities, which is why we recommend switching to at least TLS 1.1. (SSL and TLS are still quite similar as protocols; the name change has non-technical reasons.)

The second aspect, caused by browsers which implement the compatibility fallback in an insecure way, has yet to be addressed. Strictly speaking, this is a security vulnerability in browsers due to the way they misuse the TLS protocol. One way to fix this issue would be to remove the compatibility dance, focusing instead on making servers compatible with clients implementing the most recent TLS implementation (as explained, the protocol supports a version negotiation mechanism, but some servers refuse to implement it).

However, there is an industry-wide effort under way to enable browsers to downgrade in a secure fashion, using a new Signaling Cipher Suite Value (SCSV). This will require updates in browsers (such as Firefox) and TLS libraries (such as OpenSSL, NSS and GNUTLS). However, we do not envision changes in TLS client applications which currently do not implement the fallback logic, and neither in TLS server applications as long as they use one of the system TLS libraries. TLS-aware packet filters, firewalls, load balancers, and other infrastructure may need upgrades as well.

Is there going to be another SSL 3.0 issue in the near future? Is there a long term solution?

Disabling SSL 3.0 will obviously prevent exposure to future SSL 3.0-specific issues. The new SCSV-based downgrade mechanism should reliably prevent the use of SSL 3.0 if both parties support a newer protocol version. Once these software updates are widely deployed, the need to disable SSL 3.0 to address this and future vulnerabilities will hopefully be greatly reduced.

SSL 3.0 is typically used in conjunction with the RC4 stream cipher. (The only other secure option in a strict, SSL 3.0-only implementation is Triple DES, which is quite slow even on modern CPUs.) RC4 is already considered very weak, and SSL 3.0 does not even apply some of the recommended countermeasures which prolonged the lifetime of RC4 in other contexts. This is another reason to deploy support for more recent TLS versions.

I have patched my SSL implementation against BEAST and LUCKY-13, am I still vulnerable?

This depends on the type of mitigation you have implemented. If you disabled protocol versions earlier than TLS 1.1 (which includes SSL 3.0), then the POODLE issue does not affect your installation. If you forced clients to use RC4, the first aspect of POODLE does not apply, but you and your users are vulnerable to all of the weaknesses in RC4. If you implemented the n/n-1 split through a software update, or if you deployed TLS 1.1 support without enforcing it, but made no other configuration changes, you are still vulnerable to the POODLE issue.

Is it possible to monitor for exploit attempts?

The protocol downgrade is visible on the server side. Usually, servers can log TLS protocol versions. This information can then be compared with user agents or other information from the profile of a logged-in user, and mismatches could indicate attack attempts.

Attempts to abuse the SSL 3.0 padding oracle part of POODLE, as described in the paper, are visible to the server as well. They result in a fair number of HTTPS requests which follow a pattern not expected during the normal course of execution of a web application. However, it cannot be ruled out that a more sophisticated adaptive chosen plain text attack avoids confirmation of guesses from the server, and this more advanced attack would not be visible to the server, only to the client and the network up to the point at which the attacker performs their traffic manipulation.

What happens when I disable SSL 3.0 on my web server?

Some old browsers may not be able to support a secure connection to your site. Estimates of the number of such browsers in active use vary and depend on the target audience of a web site. SSL protocol version logging (see above) can be used to estimate the impact of disabling SSL 3.0 because it will be used only if no TLS version is available in the client.
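
With Apache httpd and mod_ssl, for instance, the negotiated protocol and cipher can be recorded in the access log. A sketch (format name and log path are illustrative):

LogFormat "%h %t \"%r\" %>s %{SSL_PROTOCOL}x %{SSL_CIPHER}x" ssl_metrics
CustomLog logs/ssl_metrics_log ssl_metrics

Counting SSLv3 entries in such a log then gives a direct estimate of how many clients would be affected.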

Major browser vendors, including Mozilla and Google, have announced that they will deactivate SSL 3.0 in their upcoming versions.

How do I secure my Red Hat-supported software?

Red Hat has put together several articles regarding the removal of SSL 3.0 from its products.  Customers should review the recommendations and test changes before making them live in production systems.  As always, Red Hat Support is available to answer any questions you may have.

TLS landscape

Transport Layer Security (TLS) or, as it was known in the beginnings of the Internet, Secure Sockets Layer (SSL) is the technology responsible for securing communications between different devices. It is used every day by nearly everyone using the globe-spanning network.

Let’s take a closer look at how TLS is used by servers that underpin the World Wide Web and how the promise of security is actually executed.

Adoption

Hypertext Transfer Protocol (HTTP) versions 1.1 and older make encryption (and thus the use of TLS) optional. Given that the upcoming HTTP 2.0 will require the use of TLS and that Google now uses HTTPS in its ranking algorithm, it is expected that many sites will become TLS-enabled.

Surveying the Alexa top 1 million sites shows that most domains still don’t provide a secure communication channel for their users.

Just under 40% of HTTP servers support TLS or SSL and present valid certificates.

Additionally, if we look at the protocol versions supported by the servers, most don’t support the newest (and most secure) version, TLSv1.2.  Of more concern is the number of sites that still support the completely insecure SSLv2 protocol.

Only half of HTTPS servers support TLS 1.2

(There are no results for SSLv2 for the first 3 months because of an error in the software that was collecting data.)

One of the newest and most secure cipher families available in TLS is the Advanced Encryption Standard (AES) in Galois/Counter Mode (AES-GCM). Those ciphers provide good security, resiliency against known attacks (BEAST and Lucky 13), and very good performance on machines with hardware acceleration for them (modern Intel and AMD CPUs, upcoming ARM).

Unfortunately, AES-GCM adoption is growing a bit more slowly than TLS adoption in general, which means that some of the newly deployed servers aren’t using new cryptographic libraries or are configured not to use all of their functions.

Only 40% of TLS web servers support AES-GCM ciphersuites.

Bad recommendations

A few years back, a weakness in TLS 1.0 and SSL 3 was shown to be exploitable in the BEAST attack. The recommended workaround at the time was to use RC4-based ciphers. Unfortunately, we later learned that the RC4 cipher is much weaker than previously estimated. As the vulnerability that allowed BEAST was fixed in TLSv1.1, using RC4 ciphers with newer protocol versions was always unnecessary. Additionally, all major clients have now implemented workarounds for this attack, which makes using RC4 a bad idea.

Unfortunately, many servers prefer RC4, and some (~1%) actually support only RC4.  This makes it impossible to disable this weak cipher on the client side to force the remaining servers (nearly 19%) to use a different cipher suite.

RC4 is still used with more than 18% of HTTPS servers.
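
Whether a given server prefers or even requires RC4 is easy to check from the command line (hypothetical host name; newer OpenSSL builds may ship with RC4 disabled entirely):

$ # Does the server accept an RC4-only client?
$ openssl s_client -connect www.example.com:443 -cipher RC4 < /dev/null
$ # Does it still negotiate something when RC4 is excluded?
$ openssl s_client -connect www.example.com:443 -cipher 'DEFAULT:!RC4' < /dev/null

If the first handshake succeeds but the second fails, the server is one of the RC4-only holdouts mentioned above.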

The other common issue is that many certificates are still signed using the obsolete SHA-1. This is mostly caused by backwards compatibility with clients like pre-SP2 Windows XP and old phones.

SHA-256 certificates only recently started growing in numbers

The sudden increase in the SHA-256 between April and May was caused by re-issuance of certificates in the wake of Heartbleed.

Bad configuration

Many servers also support insecure cipher suites. In the latest scan, over 3.5% of servers support cipher suites that use AECDH key exchange, which is completely insecure against man-in-the-middle attacks. Many servers also support single DES (around 15%) and export-grade cipher suites (around 15%). In total, around 20% of servers support some kind of broken cipher suite.

While correctly implemented SSLv3 and later shouldn’t allow negotiation of those weak ciphers if stronger ones are supported by both client and server, at least one commonly used implementation had a vulnerability that did allow changing the cipher suite to an arbitrary one supported by both client and server. That’s why it is important to occasionally clean up the list of supported ciphers, on both the server and the client side.

Forward secrecy

Forward secrecy, also known as perfect forward secrecy (PFS), is a property of a cipher suite that makes it impossible to decrypt communication between client and server when the attacker knows the server’s private key. It also protects old communication in case the private key is leaked or stolen. That’s why it is such a desirable property.

The good news is that most servers (over 60%) not only support, but will actually negotiate, cipher suites that provide forward secrecy with clients that support them. The types in use are split essentially between 1024-bit DHE and 256-bit ECDHE, accounting for 29% and 33% of all servers, respectively, in the latest scan. The number of servers that negotiate PFS-enabled cipher suites is also steadily growing.

PFS support among TLS-enabled HTTP servers

Summary

Most Internet-facing servers are badly configured. Sometimes this is caused by a lack of functionality in the software, as with old Apache 2.2.x releases that don’t support ECDHE key exchange; sometimes it is a side effect of using new software with an old configuration. For example, many configuration tutorials suggested using !ADH in the cipher string to disable anonymous cipher suites; unfortunately, that does not disable the anonymous elliptic curve variant of DH (AECDH) – for that, !aNULL is necessary, as the example below shows.
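
The difference is easy to demonstrate with the openssl ciphers command (output depends on the OpenSSL version):

$ # !ADH leaves the anonymous elliptic curve suites in place:
$ openssl ciphers 'ALL:!ADH' | tr ':' '\n' | grep AECDH
$ # !aNULL removes every anonymous (unauthenticated) suite; this prints nothing:
$ openssl ciphers 'ALL:!aNULL' | tr ':' '\n' | grep -E 'AECDH|ADH'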

Thankfully, the situation seems to be improving, though rather slowly.

If you’re an administrator of a server, consider enabling TLS.  The days when encryption was slow and taxing on servers are long gone. If you already use TLS, double-check your configuration, preferably against the Mozilla guide to server configuration, as it is regularly updated. Make sure you enable PFS cipher suites and put them above non-PFS ciphers, and that both you and the Certificate Authority you’ve chosen use modern crypto (SHA-2) and large key sizes (at least 2048-bit RSA).

If you’re a user of a server and you’ve noticed that the server doesn’t use a correct configuration, try contacting the administrator – they may have simply forgotten about it.