Tag Archives: tls

Factoring RSA Keys With TLS Perfect Forward Secrecy

What is being disclosed today?

Back in 1996, Arjen Lenstra described an attack against RSA implementations that use an optimization called the Chinese Remainder Theorem, or RSA-CRT for short. If a fault happens during the computation of an RSA-CRT signature, an attacker may be able to recover the private key from the faulty signature (an “RSA-CRT key leak”). At the time, use of cryptography on the Internet was uncommon, and even ten years later, most TLS (or HTTPS) connections were immune to this problem by design because they did not use RSA signatures. This changed gradually as forward secrecy for TLS was recommended and introduced by many web sites.
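
The attack can be illustrated with a short Python sketch using textbook toy parameters (a real key would use a 2048-bit modulus; all numbers here are illustrative only):

```python
from math import gcd

# Toy RSA key, illustrative only (real keys use 2048-bit moduli).
p, q = 61, 53
n = p * q            # public modulus
e, d = 17, 2753      # public and private exponents (e*d = 1 mod phi(n))

# RSA-CRT precomputation: sign mod p and mod q separately, then
# recombine with the Chinese Remainder Theorem (Garner's formula).
dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)   # q^-1 mod p (Python 3.8+)

def sign_crt(m, fault=False):
    sp = pow(m, dp, p)
    sq = pow(m, dq, q)
    if fault:
        sq ^= 1          # simulate a hardware fault in the mod-q half
    h = (q_inv * (sp - sq)) % p
    return (sq + h * q) % n

m = 42
s_good = sign_crt(m)
assert pow(s_good, e, n) == m      # a correct signature verifies

s_bad = sign_crt(m, fault=True)    # a faulty signature does not...
assert pow(s_bad, e, n) != m
# ...but it leaks a factor of n: s_bad is correct mod p and wrong
# mod q, so s_bad^e - m is divisible by p but not by q.
p_recovered = gcd(pow(s_bad, e, n) - m, n)
assert p_recovered == p            # recovers p = 61
```

The same gcd computation scales to real key sizes with negligible cost, which is why a single faulty signature observed on the wire is enough.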

We evaluated the source code of several free software TLS implementations to see if they implement hardening against this particular side-channel attack, and discovered that it is missing in some of them. In addition, we used a TLS crawler to perform TLS handshakes with servers on the Internet, and collected evidence that this kind of hardening is still needed and still missing in some server implementations: we observed several RSA-CRT key leaks, where we should not have observed any at all.

The technical report, “Factoring RSA Keys With TLS Perfect Forward Secrecy”, is available in PDF format.

What is the impact of this vulnerability?

An observer of the private key leak can use this information to cryptographically impersonate the server after redirecting network traffic, that is, by conducting a man-in-the-middle attack. The leak can be seen either by the client performing the TLS handshake or by a passive observer capturing network traffic. The key leak also enables decryption of connections which do not use forward secrecy, without the need for a man-in-the-middle attack. However, forward secrecy must be enabled on the server for this kind of key leak to happen in the first place, and with such a server configuration, most clients will use forward secrecy, so an active attack is required against configurations which can theoretically lead to RSA-CRT key leaks.

Does this break RSA?

No. Lenstra’s attack is a so-called side-channel attack, which means that it does not attack RSA directly. Rather, it exploits unexpected implementation behavior. RSA, and the RSA-CRT optimization with appropriate hardening, is still considered secure.

Are Red Hat products affected?

The short answer is: no.

The longer answer is that some of our products do not implement the recommended hardening that protects against RSA-CRT key leaks. (OpenSSL and NSS already have RSA-CRT hardening.) We will continue to work with upstream projects and help them implement this additional defense, as we did with Oracle in OpenJDK (which led to the CVE-2015-0478 fix in April this year). None of the key leaks we observed in the wild could be attributed to these open-source projects, and no key leaks showed up in our lab testing, which is why this additional hardening, while certainly desirable to have, does not seem critical at this time.
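
For illustration, the hardening in question amounts to verifying each signature with the server's own public key before releasing it, at the cost of one extra public-key operation. The sketch below uses toy parameters and is not any product's actual code:

```python
# Toy RSA-CRT parameters, illustrative only (real keys: 2048-bit moduli).
p, q = 61, 53
n, e, d = p * q, 17, 2753
dp, dq, q_inv = d % (p - 1), d % (q - 1), pow(q, -1, p)

def sign_crt(m):
    # Standard RSA-CRT signing with Garner recombination.
    sp, sq = pow(m, dp, p), pow(m, dq, q)
    return (sq + ((q_inv * (sp - sq)) % p) * q) % n

def sign_hardened(m):
    s = sign_crt(m)
    if pow(s, e, n) != m:
        # A fault occurred: never emit the faulty signature.
        # Fall back to plain (non-CRT) exponentiation, or abort.
        s = pow(m, d, n)
    return s

s = sign_hardened(42)
assert pow(s, e, n) == 42   # the emitted signature always verifies
```

Because the check uses only the public exponent, it is cheap relative to the CRT signing operation it protects.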

In the process of this disclosure, we consulted some of our partners and suppliers, particularly those involved in the distribution of RPM packages. They indicated that they already implement RSA-CRT hardening, at least in the configurations we use.

What would an attack look like?

The attack itself is unobservable because the attacker performs an off-line mathematical computation on data extracted from the TLS handshake. The leak itself could be noticed by an intrusion detection system if it checks all TLS handshakes for mathematical correctness.

For the key leaks we have observed, we do not think there is a way for remote attackers to produce key leaks at will, in the sense that an attacker could manipulate the server over the network in such a way that the probability of a key leak in a particular TLS handshake increases. The only thing the attacker can do is to capture as many handshakes as possible, perhaps by initiating many such handshakes themselves.

How difficult is the mathematical computation required to recover the key?

Once the necessary data is collected, the actual computation is marginally more complicated than a regular RSA signature verification. In short, it is quite cheap in terms of computing cost, particularly in comparison to other cryptographic attacks.

Does it make sense to disable forward secrecy, as a precaution?

No. If you expect that a key leak might happen in the future, it could well have happened already. Disabling forward secrecy would enable passive observers of past key leaks to decrypt future TLS sessions, from passively captured network traffic, without having to redirect client connections. This means that disabling forward secrecy generally makes things worse. (Disabling forward secrecy and replacing the server certificate with a new one would work, though.)

How can something called Perfect Forward Secrecy expose servers to additional vulnerabilities?

“Perfect Forward Secrecy” is just a name given to a particular tweak of the TLS protocol. It does not magically turn TLS into a perfect protocol (that is, one resistant to all attacks), particularly if the implementation is incorrect or runs on faulty hardware.

Have you notified the affected vendors?

We tried to notify the affected vendors, and several of them engaged in a productive conversation. All browser PKI certificates for which we observed key leaks have been replaced and revoked.

Does this vulnerability have a name?

We think that “RSA-CRT hardening” (for the countermeasure) and “RSA-CRT key leaks” (for a successful side-channel attack) are sufficiently short and descriptive, and no branding is appropriate. We expect that several CVE IDs will be assigned for the underlying vulnerabilities leading to RSA-CRT key leaks. Some vendors may also assign CVE IDs for missing RSA-CRT hardening, although no key leaks have been seen in practice so far.

Secure distribution of RPM packages

This blog post looks at the final part of creating secure software: shipping it to users in a safe way. It explains how to use transport security and package signatures to achieve this goal.

yum versus rpm

There are two commonly used tools related to RPM package management, yum and rpm. (Recent Fedora versions have replaced yum with dnf, a rewrite with similar functionality.) The yum tool inspects package sources (repositories), downloads RPM packages, and makes sure that required dependencies are installed along with fresh package installations and package updates. yum uses rpm as a library to install packages. yum repositories are defined by .repo files in /etc/yum.repos.d, or by yum plugins for repository management (such as subscription-manager for Red Hat subscription management).

rpm is the low-level tool which operates on an explicit set of RPM packages. rpm provides both a set of command-line tools and a library to process RPM packages. In contrast to yum, package dependencies are checked, but violations are not resolved automatically. This means that rpm typically relies on yum to tell it exactly what to do; the recipe for a change to a package set is called a transaction.

Securing package distribution at the yum layer resembles transport layer security. The rpm security mechanism is more like end-to-end security (in fact, rpm uses OpenPGP internally, which has traditionally been used for end-to-end email protection).

Transport security with yum

Transport security is comparatively easy to implement. The web server just needs to serve the package repository metadata (repomd.xml and its descendants) over HTTPS instead of HTTP. On the client, a .repo file in /etc/yum.repos.d has to look like this:

[gnu-hello]
name=gnu-hello for Fedora $releasever
baseurl=https://download.example.com/dist/fedora/$releasever/os/
enabled=1

$releasever expands to the Fedora version at run time (like “22”). By default, end-to-end security with RPM signatures is enabled (see the next section), but we will focus on transport security first.

yum will verify the cryptographic digests contained in the metadata files, so serving the metadata over HTTPS is sufficient, but offering the .rpm files over HTTPS as well is a sensible precaution. The metadata can instruct yum to download packages from absolute, unrelated URLs, so it is necessary to inspect the metadata to make sure it does not contain such absolute “http://” URLs. However, transport security with a third-party mirror network is quite meaningless, particularly if anyone can join the mirror network (as is the case with CentOS, Debian, Fedora, and others). Rather than attacking the HTTPS connections directly, an attacker could simply become part of the mirror network. There are two fundamentally different approaches to achieving some degree of transport security in this setting.
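
The digest verification mentioned above works roughly as follows: a trusted repomd.xml records the expected digest of each metadata file, and the client recomputes it before parsing the download. A minimal sketch, with hypothetical file contents:

```python
import hashlib

def verify_digest(data: bytes, expected_hex: str, algo: str = "sha256") -> bool:
    # Recompute the digest of downloaded repository data and compare it
    # with the value taken from the already-verified repomd.xml.
    return hashlib.new(algo, data).hexdigest() == expected_hex

# Stand-in for a metadata file such as primary.xml.gz.
payload = b"example metadata payload"
expected = hashlib.sha256(payload).hexdigest()  # as listed in repomd.xml

assert verify_digest(payload, expected)
assert not verify_digest(payload + b"tampered", expected)
```

Each verified file in turn carries the digests of the files below it, so trust in a single repomd.xml extends down to the .rpm contents.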

Fedora provides a centralized, non-mirrored, Fedora-run metalink service which provides a list of active mirrors and the expected cryptographic digest of the repomd.xml files. yum uses this information to select a mirror and verify that it serves the up-to-date, untampered repomd.xml. The chain of cryptographic digests is verified from there, eventually leading to verification of the .rpm file contents. This is how the long-standing Fedora bug 998 was eventually fixed.

Red Hat uses a different option to distribute Red Hat Enterprise Linux and its RPM-based products: a content-distribution network, managed by a trusted third party. Furthermore, the repositories provided by Red Hat use a separate public key infrastructure which is managed by Red Hat, so breaches in the browser PKI (that is, compromises of certificate authorities or misissued individual certificates) do not affect the transport security checks yum provides. Organizations that wish to implement something similar can use the sslcacert configuration switch of yum. This is the way Red Hat Satellite 6 implements transport security as well.

Transport security has the advantage that it is straightforward to set up (it is not more difficult than enabling HTTPS). It also guards against manipulation at a lower level, and will detect tampering before data is passed to complex file format parsers such as SQLite, RPM, or the XZ decompressor. However, end-to-end security is often more desirable, and we cover that in the next section.
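
A repository definition using the sslcacert switch to pin a private CA could look like this (a hypothetical example; host names and paths are placeholders):

```ini
[internal-repo]
name=Internal packages for $releasever
baseurl=https://repo.internal.example.com/dist/$releasever/os/
enabled=1
gpgcheck=1
sslcacert=/etc/pki/tls/certs/internal-ca.pem
```

With sslcacert set, yum accepts only server certificates issued by that CA, independent of the system-wide browser trust store.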

End-to-end security with RPM signatures

RPM package signatures can be used to implement cryptographic integrity checks for RPM packages. This approach is end-to-end in the sense that the package build infrastructure at the vendor can use an offline or half-online private key (such as one stored in a hardware security module), and the final system which consumes these packages can directly verify the signatures because they are built into the .rpm package files. Intermediates such as proxies and caches (which are sometimes used to separate production servers from the Internet) cannot tamper with these signatures. In contrast, transport security protections are weakened or lost in such an environment.

Generating RPM signatures

To add an RPM signature to a .rpm file, you need to generate a GnuPG key first, using gpg --gen-key. Let’s assume that this key has the user ID “rpmsign@example.com”. We first export the public key part to a file in a special directory; otherwise, rpmsign will not be able to verify the signatures we create, because it uses the RPM database as its source of trusted signing keys (and not the user’s GnuPG keyring):

$ mkdir $HOME/rpm-signing-keys
$ gpg --export -a rpmsign@example.com > $HOME/rpm-signing-keys/example-com.key

The name of the directory $HOME/rpm-signing-keys does not matter, but the name of the file containing the public key must end in “.key”. On Red Hat Enterprise Linux 7, CentOS 7, and Fedora, you may have to install the rpm-sign package, which contains the rpmsign program. The rpmsign command to create the signature looks like this:

$ rpmsign -D '_gpg_name rpmsign@example.com' --addsign hello-2.10.1-1.el6.x86_64.rpm
Enter pass phrase:
Pass phrase is good.
hello-2.10.1-1.el6.x86_64.rpm:

(On success, there is no output after the file name on the last line, and the shell prompt reappears.) The file hello-2.10.1-1.el6.x86_64.rpm is overwritten in place, with a variant that contains the signature embedded into the RPM header. The presence of a signature can be checked with this command:

$ rpm -Kv -D "_keyringpath $HOME/rpm-signing-keys" hello-2.10.1-1.el6.x86_64.rpm
hello-2.10.1-1.el6.x86_64.rpm:
    Header V4 RSA/SHA1 Signature, key ID de337997: OK
    Header SHA1 digest: OK (b2be54480baf46542bcf395358aef540f596c0b1)
    V4 RSA/SHA1 Signature, key ID de337997: OK
    MD5 digest: OK (6969408a8d61c74877691457e9e297c6)

If the output of this command contains “NOKEY” lines instead, like the following, it means that the public key in the directory $HOME/rpm-signing-keys has not been loaded successfully:

hello-2.10.1-1.el6.x86_64.rpm:
    Header V4 RSA/SHA1 Signature, key ID de337997: NOKEY
    Header SHA1 digest: OK (b2be54480baf46542bcf395358aef540f596c0b1)
    V4 RSA/SHA1 Signature, key ID de337997: NOKEY
    MD5 digest: OK (6969408a8d61c74877691457e9e297c6)

Afterwards, the RPM files can be distributed as usual and served over HTTP or HTTPS, as if they were unsigned.

Consuming RPM signatures

To enable RPM signature checking in yum explicitly, the yum repository file must contain a gpgcheck=1 line, as in:

[gnu-hello]
name=gnu-hello for Fedora $releasever
baseurl=https://download.example.com/dist/fedora/$releasever/os/
enabled=1
gpgcheck=1

Once signature checks are enabled in this way, package installation will fail with a NOKEY error until the signing key used by .rpm files in the repository is added to the system RPM database. This can be achieved with a command like this:

$ rpm --import https://download.example.com/keys/rpmsign.asc

The file needs to be transported over a trusted channel, hence the use of an https:// URL in the example. (It is also possible to instruct the user to download the file from a trusted web site, copy it to the target system, and import it directly from the file system.) Afterwards, package installation works as before.

After a key has been imported, it will appear in the output of the “rpm -qa” command:

$ rpm -qa | grep ^gpg-pubkey-
…
gpg-pubkey-ab0e12ef-de337997
…

More information about the key can be obtained with “rpm -qi gpg-pubkey-ab0e12ef-de337997”, and the key can be removed again using “rpm --erase gpg-pubkey-ab0e12ef-de337997”, just as if it were a regular RPM package.

Note: Package signatures are only checked by yum if the package is downloaded from a repository (which has checking enabled). This happens if the package is specified as a name or name-version-release on the yum command line. If the yum command line names a file or URL instead, or the rpm command is used, no signature check is performed in current versions of Red Hat Enterprise Linux, Fedora, or CentOS.

Issues to avoid

When publishing RPM software repositories, the following should be avoided:

  1. The recommended yum repository configuration uses baseurl lines containing http:// URLs.
  2. The recommended yum repository configuration explicitly disables RPM signature checking with gpgcheck=0.
  3. There are optional instructions to import RPM keys, but these instructions do not tell the system administrator to remove the gpgcheck=0 line from the default yum configuration provided by the independent software vendor.
  4. The recommended “rpm --import” command refers to the public key file using an http:// URL.

The first three deficiencies in particular open the system up to a straightforward man-in-the-middle attack on package downloads. An attacker can replace the repository or RPM files while they are downloaded, thus gaining the ability to execute arbitrary commands when they are installed. As outlined in the article on the PKI used by the Red Hat CDN, some enterprise networks perform TLS interception, and HTTPS downloads will fail there. This possibility is not sufficient to justify weakening package authentication for all customers, for example by recommending http:// instead of https:// in the yum configuration. Similarly, some customers do not want to perform the extra step involving “rpm --import”, but again, this is no excuse to disable verification for everyone, as long as RPM signatures are actually available in the repository. (Some software delivery processes make it difficult to create such end-to-end verifiable signatures.)

Summary

If you are creating a repository of packages, you should give your users a secure way to consume them. You can do this by following these recommendations:

  • Use https:// URLs everywhere in configuration advice regarding RPM repository setup for yum.
  • Create a signing key and use it to sign RPM packages, as outlined above.
  • Make sure RPM signature checking is enabled in the yum configuration.
  • Use an https:// URL to download the public key in the setup instructions.

We acknowledge that package signing might not be possible for everyone, but software downloads over HTTPS are straightforward to implement and should always be used.

Microsoft Brings HSTS to Windows 7 and 8.1

Microsoft announced it has added HTTP Strict Transport Security (HSTS) to Internet Explorer 11 on Windows 8.1 and Windows 7, in addition to its native inclusion in Microsoft Edge on Windows 10.

Researchers Say POODLE Attack Affects Some TLS Implementations

The POODLE attack against SSLv3 that researchers from Google revealed earlier this year also affects some implementations of TLS and vendors now are scrambling to release patches for gear affected by the vulnerability. Soon after the POODLE attack was disclosed in October, researchers began looking into whether it might affect protocols other than SSLv3. It quickly […]

Disabling SSLv3 on the client and server

Recently, some Internet search engines announced that they would prefer websites secured with encryption over those that are not. Of course, there are other reasons why securing your website with encryption is beneficial: protecting authentication credentials, mitigating the use of cookies as a means of tracking and allowing access, protecting the privacy of your users, and authenticating your own server, thus protecting the information you are trying to convey to your users. And while setting up and using encryption on a webserver can be trivial, doing it properly might take a few additional minutes.

Red Hat strives to ship sane defaults that allow both security and availability. Depending on your clients, a more stringent or more lax configuration may be desirable. Red Hat Support provides both written documentation and friendly people who can help make sense of it all. Ultimately, it is the responsibility of the system owner to secure the systems they host.

Good cryptographic protocols

Protocols are the basis for all cryptography and provide the instructions for implementing ciphers and using certificates. In the asymmetric, or public key, encryption world, the protocols are all based on the Secure Sockets Layer, or SSL, protocol. SSL has come a long way since its initial release in 1995. Development has moved relatively quickly, and the latest version, Transport Layer Security version 1.2 (TLS 1.2), is now the standard that all new software should support.

Unfortunately some of the software found on the Internet still supports or even requires older versions of the SSL protocol.  These older protocols are showing their age and are starting to fail.  The most recent example is the POODLE vulnerability which showed how weak SSL 3.0 really is.

In response to the weakened protocol, Red Hat has provided advice on disabling SSL 3.0 in its products, and helps its customers implement the best available cryptography. This can be seen in products from Apache httpd to Mozilla Firefox. Because SSL 3.0 is quickly approaching its twentieth birthday, it’s probably best to move on to newer and better options.
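
For Apache httpd with mod_ssl, for example, the commonly recommended configuration disables the legacy protocol versions while leaving TLS enabled (an illustrative fragment; consult your product documentation for exact guidance):

```apache
# Allow all protocol versions except SSLv2 and SSLv3,
# i.e. negotiate TLS 1.0 or later.
SSLProtocol all -SSLv2 -SSLv3
```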

Of course the protocol can’t fix everything if you’re using bad ciphers.

Good cryptographic ciphers

Cryptographic ciphers are just as important for protecting your information. Weak ciphers, like RC4, are still used on the Internet today even though better and more efficient ciphers are available. Unfortunately, the recommendations change frequently: what was suggested just a few months ago may no longer be a good choice today. As more research goes into the available ciphers, weaknesses are discovered.

Fortunately, there are resources available to help you stay up to date. Mozilla provides recommended cipher choices that are updated regularly. The recommendations are broken down into three categories, so system owners can determine which configuration best meets their needs.

Of course, the cipher can’t fix everything if your certificates are not secure.

Certificates

Certificates are what authenticate your server to your users. If an attacker can spoof your certificate, they can intercept all traffic going between your server and your users. It’s important to protect your keys and certificates once they have been generated. Using a hardware security module (HSM) to store your private keys is a great idea. Using a reputable certificate authority is equally important.

Clients

Most clients that support SSL/TLS encryption automatically try to negotiate the latest version. We learned from the POODLE attack that HTTP clients, such as Firefox, could be downgraded to a weak protocol like SSL 3.0. Because of this, many server owners went ahead and disabled SSL 3.0 to prevent the downgrade attack from affecting their users. Mozilla has, with the latest version of Firefox, disabled SSL 3.0 by default (although it can be re-enabled for legacy support). Now users are protected even if server owners are lax in their security (although they remain at the mercy of the server’s cipher and protocol choices).

Much of the work has already been done behind the scenes, in the development of the software that serves up websites as well as the software that consumes the data coming from those servers. The final step is for system owners to implement the technology that is available. While a healthy understanding of cryptography and public key infrastructure is good to have, it is not necessary for properly implementing good cryptographic solutions. What is important is protecting your data and that of your users. Trust is built during every interaction, and your website is usually a large part of that interaction.

Issues Arise With MS14-066 Schannel Patch

Some users who have installed the MS14-066 patch that fixes a vulnerability in the Schannel technology in Windows are having issues with the fix causing TLS negotiations to fail in some circumstances. The problem arises when users have TLS 1.2 enabled in certain configurations and it will sometimes cause processes to hang or become unresponsive from […]

Google Releases Nogotofail Tool to Test Network Security

The last year has produced a rogues’ gallery of vulnerabilities in transport layer security implementations and new attacks on the key protocols, from Heartbleed to the Apple gotofail flaw to the recent POODLE attack. To help developers and security researchers identify applications that are vulnerable to known SSL/TLS attacks and configuration problems, Google is releasing a […]