Tag Archives: Red Hat Enterprise Linux

The SLOTH attack and IKE/IPsec

Executive Summary: The IKE daemons in RHEL7 (libreswan) and RHEL6 (openswan) are not vulnerable to the SLOTH attack, but the attack is still interesting to look at.

The SLOTH attack released today is a new transcript collision attack against some security protocols that use weak or broken hashes such as MD5 or SHA1. While it mostly focuses on the issues found in TLS, it also mentions weaknesses in the “Internet Key Exchange” (IKE) protocol used for IPsec VPNs. While the TLS findings are very interesting and have been assigned CVE-2015-7575, the described attacks against IKE/IPsec got close but did not result in any vulnerabilities. In the paper, the authors describe a Chosen Prefix collision attack against IKEv2 using RSA-MD5 and RSA-SHA1 to perform a Man-in-the-Middle (MITM) attack and a Generic collision attack against IKEv1 HMAC-MD5.

We looked at libreswan and openswan-2.6.32 compiled with NSS, as that is what we ship in RHEL7 and RHEL6. Upstream openswan with its custom crypto code was not evaluated. While no vulnerability was found, some hardening that makes this attack less dangerous was identified; it will be added in the next upstream version of libreswan.

Specifically, the attack was prevented because:

  • The SPIs in IKE are random and part of the hash, so the attack requires 2^77 online work – not an offline attack as suggested in the paper.
  • MD5 is not enabled by default for IKEv2.
  • The weak Diffie-Hellman groups DH22, DH23 and DH24 are not enabled by default.
  • Libreswan as a server does not re-use nonces for multiple clients.
  • Libreswan destroys nonces when an IKE exchange times out (default 60s).
  • Bogus ID payloads in IKEv1 cause the connection to fail authentication.

The rest of this article explains the IKEv2 protocol and the SLOTH attack.

The IKEv2 protocol

[Diagram: the IKEv2 IKE_INIT and IKE_AUTH exchanges]

The IKE exchange starts with an IKE_INIT packet exchange to perform the Diffie-Hellman key exchange. In this exchange, the initiator and responder exchange their nonces. The result of the DH exchange is that both parties now have a shared secret called SKEYSEED. This is fed into a mutually agreed PRF algorithm (which could be MD5, SHA1 or SHA2) to generate as much pseudo-random key material as needed. The first key(s) are for the IKE exchange itself (called the IKE SA or Parent SA), followed by keys for one or more IPsec SAs (also called Child SAs).

But before the SKEYSEED can be used, both ends need to perform an authentication step. This is the second packet exchange, called IKE_AUTH. It binds the Diffie-Hellman channel to an identity to prevent a MITM attack. Usually this is done with digital signatures over the session data to prove ownership of the identity’s private key. Technically, the signature covers a hash of the session data rather than the data itself, and it is this reliance on the hash that made TLS more vulnerable to the SLOTH attack.

The attack is to trick both parties into signing a hash which the attacker can replay to the other party, faking the authentication of both endpoints.

[Diagram: the SLOTH man-in-the-middle transcript collision against IKE]

They call this a “transcript collision”. To facilitate the creation of the same hash, the attacker needs to be able to insert its own data in the session to the first party so that the hash of that data will be identical to the hash of the session to the second party. It can then just pass on the signatures without needing to have private keys for the identities of the parties involved. It then needs to remain in the middle to decrypt and re-encrypt and pass on the data, while keeping a copy of the decrypted data.

The IKEv2 COOKIE

The initial IKE_INIT exchange does not have many payloads that can be used to manipulate the outcome of the hashing of the session data. The only candidate is the NOTIFY payload of type COOKIE.

Performing a Diffie-Hellman exchange is relatively expensive. An attacker could send a lot of IKE_INIT requests forcing the VPN server to use up its resources. These could all come from spoofed source IP addresses, so blacklisting such an attack is impossible. To defend against this, IKEv2 introduced the COOKIE mechanism. When the server gets too busy, instead of performing the Diffie-Hellman exchange, it calculates a cookie based on the client’s IP address, the client’s nonce and its own server secret. It hashes these and sends it as a COOKIE payload in an IKE_INIT reply to the client. It then deletes all the state for this client. If this IKE_INIT exchange was a spoofed request, nothing more will happen. If the request was a legitimate client, this client will receive the IKE_INIT reply, see the COOKIE payload and re-send the original IKE_INIT request, but this time it will include the COOKIE payload it received from the server. Once the server receives this IKE_INIT request with the COOKIE, it will calculate the cookie data (again) and if it matches, the client has proven that it contacted the server before. To avoid COOKIE replays and thwart attacks attempting to brute-force the server secret used for creating the cookies, the server is expected to regularly change its secret.
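
RFC 7296 suggests computing the stateless cookie as Cookie = <VersionIDofSecret> | Hash(Ni | IPi | SPIi | <secret>). A minimal sketch of the idea on the command line, with hypothetical placeholder values:

$ NI="c5d1a4e3"             # initiator nonce Ni (placeholder)
$ IPI="198.51.100.7"        # initiator IP address IPi
$ SPII="0011223344556677"   # initiator SPI
$ SECRET="rotate-me-often"  # server secret, changed regularly
$ printf '%s%s%s%s' "$NI" "$IPI" "$SPII" "$SECRET" | openssl dgst -sha256

Because the cookie can be recomputed from the request alone, the server keeps no per-client state until the client proves its reachability by echoing the cookie back.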

Abusing the COOKIE

The SLOTH attacker is the MITM between the VPN client and VPN server. It prepares an IKE_INIT request to the VPN server but waits for the VPN client to connect. Once the VPN client connects, the attacker uses the received data, which includes the proposals and nonce, to calculate a malicious COOKIE payload, and sends this COOKIE to the VPN client. The VPN client will re-send the IKE_INIT request with the COOKIE to the MITM. The MITM now sends this data to the real VPN server to perform an IKE_INIT there. It includes the COOKIE payload even though the VPN server did not ask for a COOKIE. Why does the VPN server not reject this connection? Well, the IKEv2 RFC-7296 states:

When one party receives an IKE_SA_INIT request containing a cookie whose contents do not match the value expected, that party MUST ignore the cookie and process the message as if no cookie had been included

This provision was likely intended for a recovering server. If the server is no longer busy, it will stop sending cookies and stop requiring cookies. But a few clients that were just about to reconnect will send back the cookie they received when the server was still busy. The server shouldn’t reject these clients, so the advice was to ignore the cookie in that case. Alternatively, the server could remember the last used secret for a while and, if it receives a cookie while not busy, still perform the cookie validation. But that costs some resources too, which an attacker can abuse by sending IKE_INIT requests with bogus cookies. Limiting cookie validation to a window of time after the server becomes unbusy would mitigate this.

COOKIE size

The paper contains an error when it talks about the COOKIE size:

To implement the attack, we must first find a collision between m1 and m’1. We observe that in IKEv2 the length of the cookie is supposed to be at most 64 octets but we found that many implementations allow cookies of up to 2^16 bytes. We can use this flexibility in computing long collisions.

It is not clear where the authors got the value of 64. The RFC does not specify a maximum cookie size. The COOKIE value is sent as a NOTIFY payload. These payloads have a two-byte Payload Length value, so NOTIFY data is legitimately limited to 65535 bytes. Adding more bytes should not be possible. Any IKE implementation that reads more bytes than specified in the Payload Length value would be very broken. Assuming the COOKIE NOTIFY is the last payload in the packet, the attacker could increase the length specified in the IKE header and stuff additional bytes after this payload, but proper implementations would not read this data. In fact, libreswan encountered interoperability problems when it mistakenly did something similar: it padded its IKE packets to a multiple of 8 bytes (as per IKEv1 but not IKEv2) and got those packets rejected by various implementations. Still, the authors claim 65535 bytes is enough for their attack.

Attacking the AUTH hash

Assuming the above works, the attacker needs to find a collision between m1 and m’1. The only case the authors claim could be feasible is when MD5 is used for the authentication step in IKE_AUTH. The offline attack would then require 2^16 to 2^39 computations, which they say would take about 5 hours. As the paper states, IKEv2 implementations either don’t support MD5, or if they do, it is not part of the default proposal set. The paper makes a case that the weak SHA1 is widely supported in IKEv2, but admits using SHA1 will need more computing power (they list 2^61 to 2^67, or 20 years). Note that libreswan (and openswan in RHEL) requires manual configuration to enable MD5 in IKEv2, but SHA1 is still allowed for compatibility.

The final step of the attack – Diffie-Hellman

Assuming the above succeeds, the attacker needs to ensure that g^xy’ = g^x’y. To facilitate that, they use a subgroup confinement attack, and illustrate this with an example of picking x’ = y’ = 0, in which case both shared secrets would have the value 1. In practice this does not work, according to the authors, because most IKEv2 implementations validate the received Diffie-Hellman public value to ensure that it is larger than 1 and smaller than p – 1. They did find that Diffie-Hellman groups 22 to 24 are known to have many small subgroups, and implementations tend not to validate these. This led to an interesting discussion on one of the cypherpunks mailing lists about the mysterious nature of the DH groups in RFC-5114. These groups are not enabled in libreswan (or openswan in RHEL) by default, and require manual configuration, precisely because their origin is a mystery.

The IKEv1 attack

The paper briefly speculates about a variant of this attack using IKEv1. It would be interesting because MD5 is very common with IKEv1, but the article is not really clear on how that attack would work. It mentions filling the ID payload with malicious data to trigger the collision, but such an ID would never pass validation.

Countermeasures

Work has already started on updating the cryptographic algorithms deemed mandatory to implement for IKE. Note that this work does not state which algorithms are valid to use, or which to use by default. It is happening in the IPsec working group at the IETF and can be found at draft-ietf-ipsecme-rfc4307bis. It is expected to go through a few more rounds of discussion, and one of the topics that will be raised is the weak DH groups specified in RFC-5114.

Upstream Libreswan has hardened its cookie handling code, preventing the attacker from sending an uninvited cookie to the server without having their connection dropped.

Risk report update: April to October 2015

In April 2015 we took a look at a year’s worth of branded vulnerabilities, separating out those that mattered from those that didn’t. Six months have passed, so let’s take this opportunity to update the report with the new vulnerabilities that mattered across all Red Hat products.

ABRT (April 2015) CVE-2015-3315:

ABRT (Automatic Bug Reporting Tool) is a tool to help users to detect defects in applications and to create a bug report. ABRT was vulnerable to multiple race condition and symbolic link flaws. A local attacker could use these flaws to potentially escalate their privileges on an affected system to root.

This issue affected Red Hat Enterprise Linux 7 and updates were made available. A working public exploit is available for this issue. Other products and versions of Enterprise Linux were either not affected or not vulnerable to privilege escalation.

JBoss Operations Network open APIs (April 2015) CVE-2015-0297:

Red Hat JBoss Operations Network is a middleware management solution that provides a single point of control to deploy, manage, and monitor JBoss Enterprise Middleware, applications, and services. The JBoss Operations Network server did not correctly restrict access to certain remote APIs which could allow a remote, unauthenticated attacker to execute arbitrary Java methods. We’re not aware of active exploitation of this issue. Updates were made available.

“Venom” (May 2015) CVE-2015-3456:

Venom was a branded flaw which affected QEMU. A privileged user of a guest virtual machine could use this flaw to crash the guest or, potentially, execute arbitrary code on the host with the privileges of the host’s QEMU process corresponding to the guest.

A number of Red Hat products were affected and updates were released. Red Hat products by default would block arbitrary code execution as SELinux sVirt protection confines each QEMU process.

“LogJam” (May 2015) CVE-2015-4000:

TLS connections using the Diffie-Hellman key exchange protocol were found to be vulnerable to an attack in which a man-in-the-middle attacker could downgrade vulnerable TLS connections to weak cryptography which could then be broken to decrypt the connection.

Like Poodle and Freak, this issue is hard to exploit as it requires a man-in-the-middle attack. We’re not aware of active exploitation of this issue. Various packages providing cryptography were updated.

BIND DoS (July 2015) CVE-2015-5477:

A flaw in the Berkeley Internet Name Domain (BIND) allowed a remote attacker to cause named (functioning as an authoritative DNS server or a DNS resolver) to exit, causing a denial of service against BIND.

This issue affected the versions of BIND shipped with all versions of Red Hat Enterprise Linux. A public exploit exists for this issue. Updates were available the same day as the issue was public.

libuser privilege escalation (July 2015) CVE-2015-3246:

The libuser library implements an interface for manipulating and administering user and group accounts. Flaws in libuser could allow authenticated local users with shell access to escalate privileges to root.

Red Hat Enterprise Linux 6 and 7 were affected, and updates were available the same day the issue was made public. Red Hat Enterprise Linux 5 was affected and a mitigation was published. A public exploit exists for this issue.

Firefox lock file stealing via PDF reader (August 2015) CVE-2015-4495:

A flaw in Mozilla Firefox could allow an attacker to access local files with the permissions of the user running Firefox. Public exploits exist for this issue, including as part of Metasploit, and targeting Linux systems.

This issue affected Firefox shipped with versions of Red Hat Enterprise Linux, and updates were available the day after the issue was made public.

Firefox add-on permission warning (August 2015) CVE-2015-4498:

Mozilla Firefox normally warns a user when trying to install an add-on if initiated by a web page.  A flaw allowed this dialog to be bypassed.

This issue affected Firefox shipped with Red Hat Enterprise Linux versions and updates were available the same day as the issue was public.

Conclusion

The issues examined in this report were included because they were meaningful. This includes issues of high severity that are likely easy to exploit (or already have a public working exploit), as well as issues that were highly visible or branded (with a name or logo), regardless of their severity.

Between 1 April 2015 and 31 October 2015, across all Red Hat products, there were 39 Critical Red Hat Security Advisories released, addressing 192 Critical vulnerabilities. Aside from the issues in this report which were rated as having Critical security impact, all other issues with a Critical rating were part of Red Hat Enterprise Linux products and were browser-related: Firefox, Chromium, Adobe Flash, and Java (due to the browser plugin).

Our dedicated Product Security team continues to analyze threats and vulnerabilities against all our products every day, and provides relevant advice and updates through the customer portal. Customers can call on this expertise to ensure that they respond quickly to address the issues that matter. Hear more about vulnerability handling in our upcoming virtual event: Secure Foundations for Today and Tomorrow.

Secure distribution of RPM packages

This blog post looks at the final part of creating secure software: shipping it to users in a safe way. It explains how to use transport security and package signatures to achieve this goal.

yum versus rpm

There are two commonly used tools related to RPM package management, yum and rpm. (Recent Fedora versions have replaced yum with dnf, a rewrite with similar functionality.) The yum tool inspects package sources (repositories), downloads RPM packages, and makes sure that required dependencies are installed along with fresh package installations and package updates. yum uses rpm as a library to install packages. yum repositories are defined by .repo files in /etc/yum.repos.d, or by yum plugins for repository management (such as subscription-manager for Red Hat subscription management).

rpm is the low-level tool which operates on an explicit set of RPM packages. rpm provides both a set of command-line tools and a library to process RPM packages. In contrast to yum, package dependencies are checked, but violations are not resolved automatically. This means that rpm typically relies on yum to tell it what to do exactly; the recipe for a change to a package set is called a transaction.

Securing package distribution at the yum layer resembles transport layer security. The rpm security mechanism is more like end-to-end security (in fact, rpm uses OpenPGP internally, which has traditionally been used for end-to-end email protection).
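
The difference is visible on the command line; a quick illustration using the hypothetical hello package from the examples below:

# yum resolves missing dependencies using the configured repositories
$ yum install hello
# rpm checks the same dependencies, but refuses the transaction if they
# are not already installed
$ rpm -ivh hello-2.10.1-1.el6.x86_64.rpm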

Transport security with yum

Transport security is comparatively easy to implement. The web server just needs to serve the package repository metadata (repomd.xml and its descendants) over HTTPS instead of HTTP. On the client, a .repo file in /etc/yum.repos.d has to look like this:

[gnu-hello]
name=gnu-hello for Fedora $releasever
baseurl=https://download.example.com/dist/fedora/$releasever/os/
enabled=1

$releasever expands to the Fedora version at run time (like “22”). By default, end-to-end security with RPM signatures is enabled (see the next section), but we will focus on transport security first.

yum will verify the cryptographic digests contained in the metadata files, so serving the metadata over HTTPS is sufficient, but offering the .rpm files over HTTPS as well is a sensible precaution. The metadata can instruct yum to download packages from absolute, unrelated URLs, so it is necessary to inspect the metadata to make sure it does not contain such absolute “http://” URLs. However, transport security with a third-party mirror network is quite meaningless, particularly if anyone can join the mirror network (as is the case with CentOS, Debian, Fedora, and others). Rather than attacking the HTTPS connections directly, an attacker could just become part of the mirror network. There are two fundamentally different approaches to achieve some degree of transport security.
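
As a rough check, the primary package metadata can be searched for absolute http:// locations. A sketch, assuming the repository from the example above; the metadata file name is illustrative (the real name is listed in repomd.xml and usually carries a digest prefix):

$ curl -sO https://download.example.com/dist/fedora/22/os/repodata/primary.xml.gz
$ zgrep '<location' primary.xml.gz | grep 'http://'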

Fedora provides a centralized, non-mirrored Fedora-run metalink service which provides a list of active mirrors and the expected cryptographic digest of the repomd.xml files. yum uses this information to select a mirror and verify that it serves the up-to-date, untampered repomd.xml. The chain of cryptographic digests is verified from there, eventually leading to verification of the .rpm file contents. This is how the long-standing Fedora bug 998 was eventually fixed.

Red Hat uses a different option to distribute Red Hat Enterprise Linux and its RPM-based products: a content-distribution network, managed by a trusted third party. Furthermore, the repositories provided by Red Hat use a separate public key infrastructure which is managed by Red Hat, so breaches in the browser PKI (that is, compromises of certificate authorities or misissued individual certificates) do not affect the transport security checks yum provides. Organizations that wish to implement something similar can use the sslcacert configuration switch of yum. This is the way Red Hat Satellite 6 implements transport security as well.

Transport security has the advantage that it is straightforward to set up (it is not more difficult than enabling HTTPS). It also guards against manipulation at a lower level, and will detect tampering before data is passed to complex file format parsers such as SQLite, RPM, or the XZ decompressor. However, end-to-end security is often more desirable, and we cover that in the next section.
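
Returning to the sslcacert switch mentioned above, a repository definition that pins TLS verification to a private CA might look like this (host name and file paths are illustrative):

[example-products]
name=Example products over a private PKI
baseurl=https://cdn.example.com/content/os/$releasever/
enabled=1
sslcacert=/etc/pki/example/ca-bundle.pem
gpgcheck=1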

End-to-end security with RPM signatures

RPM package signatures can be used to implement cryptographic integrity checks for RPM packages. This approach is end-to-end in the sense that the package build infrastructure at the vendor can use an offline or half-online private key (such as one stored in a hardware security module), and the final system which consumes these packages can directly verify the signatures because they are built into the .rpm package files. Intermediates such as proxies and caches (which are sometimes used to separate production servers from the Internet) cannot tamper with these signatures. In contrast, transport security protections are weakened or lost in such an environment.

Generating RPM signatures

To add an RPM signature to a .rpm file, you need to generate a GnuPG key first, using gpg --gen-key. Let’s assume that this key has the user ID “rpmsign@example.com”. We first export the public key part to a file in a special directory, otherwise rpmsign will not be able to verify the signatures we create, as it uses the RPM database as a source of trusted signing keys (and not the user GnuPG keyring):

$ mkdir $HOME/rpm-signing-keys
$ gpg --export -a rpmsign@example.com > $HOME/rpm-signing-keys/example-com.key

The name of the directory $HOME/rpm-signing-keys does not matter, but the name of the file containing the public key must end in “.key”. On Red Hat Enterprise Linux 7, CentOS 7, and Fedora, you may have to install the rpm-sign package, which contains the rpmsign program. The rpmsign command to create the signature looks like this:

$ rpmsign -D '_gpg_name rpmsign@example.com' --addsign hello-2.10.1-1.el6.x86_64.rpm
Enter pass phrase:
Pass phrase is good.
hello-2.10.1-1.el6.x86_64.rpm:

(On success, there is no output after the file name on the last line, and the shell prompt reappears.) The file hello-2.10.1-1.el6.x86_64.rpm is overwritten in place, with a variant that contains the signature embedded into the RPM header. The presence of a signature can be checked with this command:

$ rpm -Kv -D "_keyringpath $HOME/rpm-signing-keys" hello-2.10.1-1.el6.x86_64.rpm
hello-2.10.1-1.el6.x86_64.rpm:
    Header V4 RSA/SHA1 Signature, key ID de337997: OK
    Header SHA1 digest: OK (b2be54480baf46542bcf395358aef540f596c0b1)
    V4 RSA/SHA1 Signature, key ID de337997: OK
    MD5 digest: OK (6969408a8d61c74877691457e9e297c6)

If the output of this command contains “NOKEY” lines instead, like the following, it means that the public key in the directory $HOME/rpm-signing-keys has not been loaded successfully:

hello-2.10.1-1.el6.x86_64.rpm:
    Header V4 RSA/SHA1 Signature, key ID de337997: NOKEY
    Header SHA1 digest: OK (b2be54480baf46542bcf395358aef540f596c0b1)
    V4 RSA/SHA1 Signature, key ID de337997: NOKEY
    MD5 digest: OK (6969408a8d61c74877691457e9e297c6)

Afterwards, the RPM files can be distributed as usual and served over HTTP or HTTPS, as if they were unsigned.

Consuming RPM signatures

To enable RPM signature checking explicitly, the yum repository file must contain a gpgcheck=1 line, as in:

[gnu-hello]
name=gnu-hello for Fedora $releasever
baseurl=https://download.example.com/dist/fedora/$releasever/os/
enabled=1
gpgcheck=1

Once signature checks are enabled in this way, package installation will fail with a NOKEY error until the signing key used by .rpm files in the repository is added to the system RPM database. This can be achieved with a command like this:

$ rpm --import https://download.example.com/keys/rpmsign.asc

The file needs to be transported over a trusted channel, hence the use of an https:// URL in the example. (It is also possible to instruct the user to download the file from a trusted web site, copy it to the target system, and import it directly from the file system.) Afterwards, package installation works as before.

After a key has been imported, it will appear in the output of the “rpm -qa” command:

$ rpm -qa | grep ^gpg-pubkey-
…
gpg-pubkey-ab0e12ef-de337997
…

More information about the key can be obtained with “rpm -qi gpg-pubkey-ab0e12ef-de337997”, and the key can be removed again using “rpm --erase gpg-pubkey-ab0e12ef-de337997”, just as if it were a regular RPM package.

Note: Package signatures are only checked by yum if the package is downloaded from a repository (which has checking enabled). This happens if the package is specified as a name or name-version-release on the yum command line. If the yum command line names a file or URL instead, or the rpm command is used, no signature check is performed in current versions of Red Hat Enterprise Linux, Fedora, or CentOS.

Issues to avoid

When publishing RPM software repositories, the following should be avoided:

  1. The recommended yum repository configuration uses baseurl lines containing http:// URLs.
  2. The recommended yum repository configuration explicitly disables RPM signature checking with gpgcheck=0.
  3. There are optional instructions to import RPM keys, but these instructions do not tell the system administrator to remove the gpgcheck=0 line from the default yum configuration provided by the independent software vendor.
  4. The recommended “rpm --import” command refers to the public key file using an http:// URL.

The first three deficiencies in particular open the system up to a straightforward man-in-the-middle attack on package downloads. An attacker can replace the repository or RPM files while they are downloaded, thus gaining the ability to execute arbitrary commands when they are installed. As outlined in the article on the PKI used by the Red Hat CDN, some enterprise networks perform TLS intercept, and HTTPS downloads will fail there. This possibility is not sufficient to justify weakening package authentication for all customers, such as recommending the use of http:// instead of https:// in the yum configuration. Similarly, some customers do not want to perform the extra step involving “rpm --import”, but again, this is not an excuse to disable verification for everyone, as long as RPM signatures are actually available in the repository. (Some software delivery processes make it difficult to create such end-to-end verifiable signatures.)

Summary

If you are creating a repository of packages, you should give your users a secure way to consume them. You can do this by following these recommendations:

  • Use https:// URLs everywhere in configuration advice regarding RPM repository setup for yum.
  • Create a signing key and use it to sign RPM packages, as outlined above.
  • Make sure RPM signature checking is enabled in the yum configuration.
  • Use an https:// URL to download the public key in the setup instructions.

We acknowledge that package signing might not be possible for everyone, but software downloads over HTTPS are straightforward to implement and should always be used.

libuser vulnerabilities

It was discovered that the libuser library contains two vulnerabilities which, in combination, allow unprivileged local users to gain root privileges.

libuser is a library that provides read and write access to files like /etc/passwd, which constitute the system user and group database. On Red Hat Enterprise Linux it is a central system component.

What is being disclosed today?

Qualys reported two vulnerabilities: CVE-2015-3245 and CVE-2015-3246.

It turns out that these vulnerabilities can be exploited by an unprivileged local user to gain root privileges on an affected system. However, due to the way libuser works, only users who have accounts already listed in /etc/passwd can exploit this vulnerability, and the user needs to supply the account password as part of the attack. These requirements mean that exploitation by accounts listed only in LDAP (or some other NSS data source) or by system accounts without a valid password is not possible.

Further analysis showed that the first vulnerability, CVE-2015-3245, is also due to a missing check in libuser.

Which system components are affected by these vulnerabilities?

libuser is a library, which means that in order to exploit it, a program which employs it must be used. Ideally (from an attacker’s point of view), such a program has the following properties:

  1. It uses libuser.
  2. It is SUID-root.
  3. It allows putting almost arbitrary content into /etc/passwd.

Without the third item, exploitation may still be possible, but it will be much more difficult. If the program is not SUID-root, a user will not have unlimited attempts to exploit the race condition.

A survey of programs processing /etc/passwd and related files presents this picture:

  • passwd is SUID-root, but it uses PAM to change the password, which has custom code to modify /etc/passwd not affected by the race condition. The account locking functionality in passwd does use libuser, but it is restricted to root.
  • chfn and chsh from util-linux are SUID-root and use libuser to change /etc/passwd (the latter depending on how util-linux was compiled) but they have fairly strict filters controlling what users can put into these files.
  • lpasswd, lchfn, lchsh and related utilities from libuser are not SUID-root.
  • userhelper in the usermode package has all three qualifications: libuser-based, SUID-root, and lack of filters.

This is why userhelper is a plausible target for exploitation, and other programs such as passwd and chfn are not.
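
Both properties are easy to confirm on an installed system (output omitted; the path is as shipped in the usermode package):

# The "s" in the owner permissions indicates a SUID-root binary
$ ls -l /usr/sbin/userhelper
# userhelper links against libuser
$ ldd /usr/sbin/userhelper | grep libuser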

How can these vulnerabilities be addressed?

System administrators can apply updates from their operating system vendor. Details of affected Red Hat products and security advisories are available in the knowledge base article on the Red Hat Customer Portal.

This security update changes libuser to apply additional checks to the values written to the user and group files (so that injecting newlines is no longer possible), and replaces the locking and file update code to follow the same procedures as the rest of the system. The first change is sufficient to prevent newline injection through userhelper as well, which means that only libuser needs to be updated.

If software updates are not available or cannot be applied, it is possible to block access to the vulnerable functionality with a PAM configuration change. System administrators can edit the files /etc/pam.d/chfn and /etc/pam.d/chsh and block access to non-root users by using pam_warn (for logging) and pam_deny:

#%PAM-1.0
auth       sufficient   pam_rootok.so
auth       required     pam_warn.so
auth       required     pam_deny.so
auth       include      system-auth
account    include      system-auth
password   include      system-auth
session    include      system-auth

This will prevent users from changing their login shells and their GECOS field. userhelper identifies itself to PAM as “chfn”, which means this change is effective for this program as well.

Acknowledgements

Red Hat would like to thank Qualys for reporting these vulnerabilities.

VENOM, don’t get bitten.

[Image: VENOM logo, CC BY-SA CrowdStrike]

QEMU is a generic and open source machine emulator and virtualizer and is incorporated in some Red Hat products as a foundation and hardware emulation layer for running virtual machines under the Xen and KVM hypervisors.

CVE-2015-3456 (aka VENOM) is a security flaw in QEMU’s Floppy Disk Controller (FDC) emulation. It can be exploited by a malicious guest user with access to the FDC I/O ports by issuing specially crafted FDC commands to the controller. It can result in guest-controlled execution of arbitrary code in, and with the privileges of, the corresponding QEMU process on the host. In the worst-case scenario, this can be a guest-to-host exit with root privileges.

This issue affects all x86 and x86-64 based HVM Xen and QEMU/KVM guests, regardless of their machine type, because both PIIX and ICH9 based QEMU machine types create an ISA bridge (ICH9 via LPC) and make the FDC accessible to the guest. It is also exposed regardless of the presence of any floppy-related QEMU command line options, so even guests without a floppy disk explicitly enabled in the libvirt or Xen configuration files are affected.

We believe that code execution is possible but we have not yet seen any working reproducers that would allow this.

This flaw arises because of unrestricted indexed write access to the fixed-size FIFO memory buffer that the FDC emulation layer uses to store commands and their parameters. The FIFO buffer is accessed with byte granularity (the equivalent of an FDC data I/O port write) and the current index is incremented afterwards. After each issued and processed command the FIFO index is reset to 0, so during normal processing the index cannot go out of bounds.

For certain commands (such as FD_CMD_READ_ID and FD_CMD_DRIVE_SPECIFICATION_COMMAND), though, the index is either not reset for a certain period of time (FD_CMD_READ_ID) or there are code paths that don’t reset the index at all (FD_CMD_DRIVE_SPECIFICATION_COMMAND), in which case the subsequent FDC data port writes result in sequential FIFO buffer memory writes that can be out of bounds of the allocated memory. The attacker has full control over the values that are stored and also almost fully controls the length of the write. Depending on how the FIFO buffer is defined, they might also have a little control over the index, as in the case of the Red Hat Enterprise Linux 5 Xen QEMU package, where the index variable is stored after the memory designated for the FIFO buffer.

Depending on the location of the FIFO memory buffer, this can result in either a stack or a heap overflow. For all of the Red Hat products using QEMU, the FIFO memory buffer is allocated from the heap.

Red Hat has issued security advisories to fix this flaw, and instructions for applying the fix are available on the knowledgebase.

Mitigation

The sVirt and seccomp functionalities used to restrict the host’s QEMU process privileges and resource access might mitigate the impact of successful exploitation of this issue. A possible policy-based workaround is to avoid granting untrusted users administrator privileges within guests.
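
Whether sVirt confinement is active for running guests can be checked by looking at the SELinux labels of the QEMU processes; each confined guest should run in the svirt_t domain with its own MCS category pair:

$ ps -eZ | grep qemu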

Not using IPv6? Are you sure?

[Image: World IPv6 Launch logo, CC-BY World IPv6 Launch]

Internet Protocol version 6 (IPv6) has been around for many years and was first supported in Red Hat Enterprise Linux 6 in 2010.  Designed to provide, among other things, additional address space on the ever-growing Internet, IPv6 has only recently become a priority for ISPs and businesses.

On February 3, 2011, ICANN announced that the available pool of unallocated IPv4 addresses had been completely emptied and urged network operators and server owners to implement IPv6 if they had not already done so.  Unfortunately, many networks still do not support IPv6, and many system and network administrators don’t understand the security risks of having no IPv6 controls within their network setup, even if IPv6 is not supported.  The common assumption that IPv6 can be ignored because the network does not support it is a false one.

The Threat

On many operating systems, Red Hat Enterprise Linux and Fedora included, IPv6 is preferred over IPv4.  A DNS lookup will search first for an IPv6 address and then an IPv4 address.  A system requesting a DHCP allocation will, by default, attempt to obtain both addresses as well.  When a network does not support IPv6, it leaves open the possibility of rogue IPv6 DHCP and DNS servers coming online to redirect traffic around current network restrictions, through a specific choke point where traffic can be inspected, or both.  Basically, if you aren’t offering up IPv6 within your network, someone else could.

Just like on an IPv4 network, monitoring IPv6 on the internal network is crucial for security, especially if you don’t have IPv6 rolled out.  Without proper monitoring, an attacker, or a poorly configured server, could start providing a pathway out of your network, bypassing all established safety mechanisms that keep your data under control.
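
One simple starting point for such monitoring is to watch a segment that should be IPv4-only for IPv6 router advertisements (the interface name is illustrative; the filter assumes packets without IPv6 extension headers):

# ICMPv6 type 134 is a router advertisement; none should appear on an
# IPv4-only network segment
tcpdump -i eth0 'icmp6 and ip6[40] == 134'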

Implementing IPv6

There are several methods for protecting systems and networks from attacks revolving around IPv6.  The simplest and most preferred method is simply to start using IPv6.  It becomes much more difficult for rogue DNS and DHCP servers to be implemented on a functioning IPv6 network.  Implementing IPv6 isn’t particularly difficult either.

Unfortunately, IPv6 isn’t all that simple to implement either.  As UNC‘s Dr. Joni Julian spoke about in her SouthEast LinuxFest presentation on IPv6 Security, many of the tools administrators use to manage network connections have been rewritten, and thus renamed, to support IPv6.  This adds to the confusion when other tools, such as iptables, require different rules to be written to support IPv6.  Carnegie Mellon University’s CERT addresses many different facets of implementing IPv6, including ip6tables rules.  There are many resources available to help system and network administrators set up IPv6 on their systems and networks, and by doing so their networks will automatically be available to the IPv6-only networks of the future.

Blocking and Disabling IPv6

If setting up IPv6 isn’t possible the next best thing is disabling, blocking, and monitoring for IPv6 on the network.  This means disabling IPv6 in the network stack and blocking IPv6 in ip6tables.

# Set DROP as default policy to INPUT, OUTPUT, and FORWARD chains.
ip6tables -P INPUT DROP
ip6tables -P OUTPUT DROP
ip6tables -P FORWARD DROP

# Set DROP as a rule to INPUT and OUTPUT chains.
ip6tables -I INPUT -p all -j DROP
ip6tables -I OUTPUT -p all -j DROP
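
The network-stack side can be covered with sysctls; shown here as runtime settings, which should also be persisted in /etc/sysctl.conf:

# Disable IPv6 in the network stack at run time
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1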

Because it can never be known that every system on a network is properly locked down, monitoring for IPv6 packets on the network is important.  Many IDSs can be configured to alert on such activity, but configuration is key.

A few final words

IPv6 doesn’t have to be scary, but if you want to maintain a secure network, a certain amount of respect is required.  With proper monitoring, IPv6 can be an easily manageable “threat”.  Of course, the best way to mitigate the risks is to embrace IPv6.  Rolling it out and using it prevents many of the risks already discussed, and if serving information over the Internet is important to you, reachability from IPv6-only networks may already be an availability concern.

Factoring RSA export keys – FREAK (CVE-2015-0204)

This week’s issue with OpenSSL export ciphersuites has been discussed in the press as “Freak” and “Smack”. These are addressed by CVE-2015-0204, and updates for affected Red Hat products were released in January.

Historically, the United States and several other countries tried to control the export or use of strong cryptographic primitives. For example, any company that exported cryptographic products from the United States needed to comply with certain key size limits. For RSA encryption, the maximum allowed key size was 512 bits and for symmetric encryption (DES at that time) it was 40 bits.

The U.S. government eventually lifted this policy and allowed cryptographic primitives with bigger key sizes to be exported. However, these export ciphersuites did not really go away and remained in a lot of codebases (including OpenSSL), probably for backward compatibility purposes.

It was considered safe to keep these export ciphersuites lying around, for two reasons:

  1. Even if your webserver supports export ciphersuites, most modern browsers will not offer them as part of the initial handshake because they want to establish a session with strong cryptography.
  2. Even if you use export cipher suites, you still need to factor the 512 bit RSA key or brute-force the 40-bit DES key. Though doable in today’s cloud/GPU infrastructure, it is pointless to do this for a single session.

However, this leftover support resulted in a security flaw affecting various cryptographic libraries, including OpenSSL: OpenSSL clients would accept export-grade RSA keys even when the client did not ask for export-grade RSA. This could further lead to an active man-in-the-middle attack, allowing decryption and alteration of the TLS session, in the following way:

  • An OpenSSL client contacts a TLS server and asks for a standard RSA key (non-export).
  • A MITM intercepts this request and asks the server for an export-grade RSA key.
  • Once the server replies, the MITM attacker forwards this export-grade RSA key to the client. The client has a bug (as described above) that allows the export-grade key to be accepted.
  • In the meantime, the MITM attacker factors this key and is able to decrypt all data exchanged between the server and the client.
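
Whether a server still offers export-grade ciphersuites can be checked with the OpenSSL command-line client; if the following handshake succeeds, the server is willing to negotiate them (the host name is illustrative):

$ openssl s_client -connect www.example.com:443 -cipher EXPORT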

This issue was fixed in OpenSSL back in October of 2014 and shipped in January of 2015 in Red Hat Enterprise Linux 6 and 7 via RHSA-2015-0066. This issue has also been addressed in Fedora 20 and Fedora 21.

Red Hat Product Security initially classified this as having low security impact, but after more details about the issue and the possible attack scenarios became clear, we re-classified it as a moderate-impact security issue.

Additional information on mitigating this vulnerability can be found on the Red Hat Customer Portal.

Common Criteria

What is Common Criteria?

Common Criteria (CC) is an international standard (ISO/IEC 15408) for certifying computer security software. Using Protection Profiles, computer systems can be secured to certain levels that meet requirements laid out by the Common Criteria. Established by governments, the Common Criteria treaty has been signed by 17 countries, and each country recognizes the others’ certifications.

In the U.S., Common Criteria is handled by the National Information Assurance Partnership (NIAP). Other countries have their own CC authorities. Each authority certifies CC labs, which do the actual work of evaluating products. Once certified by the authority, based on the evidence from the lab and the vendor, that certification is recognized globally.

A certification is given a particular assurance level which, roughly speaking, represents the strength of the certification. Confidence is higher in an EAL4 certification than in an EAL2 one. Attention is usually given to the assurance level rather than to what, specifically, is being assured, which is defined by the Protection Profiles. A CC certification covers a very specific set of software and hardware configurations. Software versions and hardware models matter, as differences will break the certification.

How does the Common Criteria work?

The Common Criteria authority in each country creates a set of expectations for particular kinds of software: operating systems, firewalls, and so on. Those expectations are called Protection Profiles. Vendors, like Red Hat, then work with a third-party lab to document how we meet the Protection Profile. A Target of Evaluation (TOE) is created, which covers all the specific hardware and software being evaluated. Months are then spent in the lab getting the package ready for submission. This state is known as “in evaluation”. Once the package is complete, it is submitted to the relevant authority. Once the authority reviews and approves the package, the product becomes “Common Criteria certified” for that target.

Why does Red Hat need or want Common Criteria?

Common Criteria is mandatory for software used within the US Government and other countries’ government systems. Other industries in the United States may also require Common Criteria. Because it is important to our customers, Red Hat spends the time and energy to meet these standards.

What Red Hat products are Common Criteria certified?

Currently, Red Hat Enterprise Linux (RHEL) 5.x and 6.x meet Common Criteria in various versions. Also, Red Hat’s JBoss Enterprise Application Platform 5 is certified in various versions. It should be noted that while Fedora and CentOS operating systems are related to RHEL, they are not CC certified. The Common Criteria Portal provides information on what specific versions of a product are certified and to what level. Red Hat also provides a listing of all certifications and accreditation of our products.

Are minor releases of RHEL certified?

When a minor release, a bug fix, or a security update arrives, most customers will want to patch their systems to remain secure against the latest threats. Technically, this means falling out of compliance. For most systems, the agency’s Certifying Authority (CA) requires these updates as a matter of basic security measures. It is already understood that this breaks CC.

Connecting Common Criteria evaluation to specific minor versions is difficult, at best, for a couple of reasons:

First, the certifications will never line up with a particular minor version exactly. A RHEL minor version is, after all, just a convenient waypoint for what is actually a constant stream of updates. The CC target, for example, began with RHEL 6.2, but the evaluated configuration will inevitably end up having packages updated from their 6.2 versions. In the case of FIPS, the certifications aren’t tied to a RHEL version at all, but to the version of the certified package. So OpenSSH server version 5.3p1-70.el6 is certified, no matter which minor RHEL version you happen to be using.

This leads to the second reason. Customers have, in the past, forced programs to stay on hopelessly outdated and unpatched systems only because they want to see /etc/redhat-release match the CC documentation exactly. Policies like this ignore the possibility that a certified package could exist in RHEL 6.2, 6.3, 6.4, etc., and the likelihood that subsequent security patches may have been made to the certified package. So we’re discouraging customers from saying “you must use version X.” After all, that’s not how CC was designed to work. We think CC should be treated as a starting point or baseline on which a program can conduct a sensible patching and errata strategy.

Can I use a product if it’s “in evaluation”?

Under NSTISSP #11, government customers must prefer products that have been certified using a US-approved protection profile. Failing that, you can use something certified under another profile. Failing that, you must ensure that the product is in evaluation.

Red Hat has successfully completed many Common Criteria processes so “in evaluation” is less uncertain than it might sound. When a product is “in evaluation”, the confidence level is high that certification will be awarded. We work with our customers and their CAs and DAAs to help provide information they need to make a decision on C&A packages that are up for review.

I’m worried about the timing of the certification. I need to deploy today!

Red Hat makes it as easy as possible for customers to use the version of Red Hat Enterprise Linux that they’re comfortable with. A subscription lets you use any version of the product as long as you have a current subscription. So you can buy a subscription today, deploy a currently certified version, and move to a more recent version once it’s certified, at no additional cost.

Why can’t I find your certification on the NIAP website?

Red Hat Enterprise Linux 6 was certified by BSI under OS Protection Profile at EAL4+. This is equivalent to certifying under NIAP under the Common Criteria mutual recognition treaties. More information on mutual recognition can be found on the CCRA web site. That site includes a list of the member countries that recognize one another’s evaluations.

How can I keep my CC-configured system patched?

A security plugin for the yum update tool allows customers to only install patches that are security fixes. This allows a system to be updated for security issues while not allowing bug fixes or enhancements to be installed. This makes for a more stable system that also meets security update requirements.

To install the security plugin, from a root-authenticated prompt:

# yum install yum-plugin-security
# yum updateinfo
# yum update --security

Once security updates have been added to the system, the CC-evaluated configuration has changed and the system is no longer certified. STIG requirements are now being met, however, and the system is more secure. This is the recommended way of building a system: starting with CC and then patching in accordance with DISA regulations. Consulting the CA and DAA during the system’s C&A process will help establish guidelines and expectations.

You didn’t answer all my questions. Where do I go for more help?

Red Hat Support is available anytime a customer, or potential customer, has a question about a Red Hat product.

Samba vulnerability (CVE-2015-0240)

Samba is the most commonly used Windows interoperability suite of programs on Linux and Unix systems. It uses the SMB/CIFS protocol to provide secure, stable, and fast file and print services. It can also seamlessly integrate with Active Directory environments and can function as a domain controller as well as a domain member (legacy NT4-style domain controllers are supported, but the Active Directory domain controller functionality of Samba 4 is not supported yet).

CVE-2015-0240 is a security flaw in the smbd file server daemon. It can be exploited by a malicious Samba client, by sending specially-crafted packets to the Samba server. No authentication is required to exploit this flaw. It can result in remotely controlled execution of arbitrary code as root.

We believe code execution is possible but we’ve not yet seen any working reproducers that would allow this.

This flaw arises because an uninitialized pointer is passed to the TALLOC_FREE() function. It can be exploited by calling the ServerPasswordSet RPC API on the NetLogon endpoint, using a NULL session over IPC.
Note: The code snippets shown below are from samba-3.6 as shipped with Red Hat Enterprise Linux 6. (All versions of samba >= 3.5 are affected by this flaw.)
In the _netr_ServerPasswordSet() function, creds is defined as a pointer to a structure. It is not initialized.

1203 NTSTATUS _netr_ServerPasswordSet(struct pipes_struct *p,   
1204                          struct netr_ServerPasswordSet *r)   
1205 {   
1206         NTSTATUS status = NT_STATUS_OK;   
1207         int i;  
1208         struct netlogon_creds_CredentialState *creds;

Later, the netr_creds_server_step_check() function is called with creds at:

1213  status = netr_creds_server_step_check(p, p->mem_ctx,   
1214                               r->in.computer_name,   
1215                               r->in.credential,   
1216                               r->out.return_authenticator,   
1217                               &creds);

If the netr_creds_server_step_check() function fails, it returns while creds is still not initialized. Later in the _netr_ServerPasswordSet() function, creds is freed using the TALLOC_FREE() function, which results in an uninitialized-pointer free flaw.
It may be possible to control the value of creds by sending a number of specially crafted packets, and then use the destructor pointer called by TALLOC_FREE() to execute arbitrary code.

As mentioned above, this flaw can only be triggered if netr_creds_server_step_check() fails. This is dependent on the version of Samba used.

In Samba 4.1 and above, this crash can only be triggered after setting “server schannel = yes” in the server configuration. This is due to commit adbe6cba005a2060b0f641e91b500574f4637a36, which introduced NULL initialization into the most common code path. It is still possible to trigger an early return with a memory allocation failure, but that is less likely to occur. Therefore this issue is more difficult to exploit. The Red Hat Product Security team has rated this flaw as having important impact on Red Hat Enterprise Linux 7.

In older versions of Samba (samba-3.6 as shipped with Red Hat Enterprise Linux 5 and 6, and samba-4.0 as shipped with Red Hat Enterprise Linux 6) the above-mentioned commit does not exist. An attacker could call the _netr_ServerPasswordSet() function with a NULLed buffer, which could trigger this flaw. Red Hat Product Security has rated this flaw as having critical impact on all other versions of the samba package shipped by Red Hat.

Lastly, the version of Samba 4.0 shipped with Red Hat Enterprise Linux 6.2 EUS is based on an alpha release of Samba 4, which lacked the password change functionality and thus the vulnerability. The same is true for the version of Samba 3.0 shipped with Red Hat Enterprise Linux 4 and 5.

Red Hat has issued security advisories to fix this flaw, and instructions for applying the fix are available on the knowledgebase.  This flaw is also fixed in Fedora 20 and Fedora 21.

Security improvements in Red Hat Enterprise Linux 7

Each new release of Red Hat® Enterprise Linux® is not only built on top of the previous version, but a large number of its components incorporate development from the Fedora distribution. For Red Hat Enterprise Linux 7, most components are aligned with Fedora 19, and with select components coming from Fedora 20. This means that users benefit from new development in Fedora, such as firewalld which is described below. While preparing the next release of Red Hat Enterprise Linux, we review components for their readiness for an enterprise-class distribution. We also make sure that we address known vulnerabilities before the initial release. And we review new components to check that they meet our standards regarding security and general suitability for enterprise use.

One of the first things that happens is a review of the material going into a new version of Red Hat Enterprise Linux. Each release includes new packages that Red Hat has never shipped before and anything that has never been shipped in a Red Hat product receives a security review. We look for various problems – from security bugs in the actual software to packaging issues. It’s even possible that some packages won’t make the cut if they prove to have issues that cannot be resolved in a manner we decide is acceptable. It’s also possible that a package was once included as a dependency or feature that is no longer planned for the release. Rather than leave those in the release, we do our best to remove the unneeded packages as they could result in security problems later down the road.

Previously fixed security issues are also reviewed to ensure nothing has been missed since the last version. While uncommon, it is possible that a security fix didn’t make it upstream, or was somehow dropped from a package at some point during the move between major releases. We spend time reviewing these to ensure nothing important was missed that could create problems later.

Red Hat Product Security also adds several new security features in order to better protect the system.

Before its 2011 revision, the C++ language definition was ambiguous as to what should happen if an integer overflow occurs during the size computation of an array allocation. The C++ compiler in Red Hat Enterprise Linux 7 will perform a size check (and throw std::bad_alloc on failure) if the size (in bytes) of the allocated array exceeds the width of a register, even in C++98 mode. This change affects the code generated by the compiler; it is not a library-level correction. Consequently, we compiled all of Red Hat Enterprise Linux 7 with a compiler version that performs this additional check.

When we compiled Red Hat Enterprise Linux 7, we also tuned the compiler to add “stack protector” instrumentation to additional functions. The GCC compiler in Red Hat Enterprise Linux 6 used heuristics to determine whether a function warrants “stack protector” instrumentation. In contrast, the compiler in Red Hat Enterprise Linux 7 uses precise rules that add the instrumentation to only those functions that need it. This allowed us to instrument additional functions with minimal performance impact, extending this probabilistic defense against stack-based buffer overflows to an even larger part of the code base.

Red Hat Enterprise Linux 7 also includes firewalld. firewalld allows for centralized firewall management using high-level concepts, such as zones. It also extends spoofing protection based on reverse path filters to IPv6, where previous Red Hat Enterprise Linux versions only applied anti-spoofing filter rules to IPv4 network traffic.

Every version of Red Hat Enterprise Linux is the result of countless hours of work from many individuals. Above we highlighted a few of the efforts that the Red Hat Product Security team assisted with in the release of Red Hat Enterprise Linux 7. We also worked with a number of other individuals to see these changes become reality. Our job doesn’t stop there, though. Once Red Hat Enterprise Linux 7 was released, we immediately began tracking new security issues and deciding how to fix them. We’ll further explain that process in an upcoming blog post about fixing security issues in Red Hat Enterprise Linux 7.