Tag Archives: Fedora

Secure distribution of RPM packages

This blog post looks at the final part of creating secure software: shipping it to users in a safe way. It explains how to use transport security and package signatures to achieve this goal.

yum versus rpm

There are two commonly used tools related to RPM package management, yum and rpm. (Recent Fedora versions have replaced yum with dnf, a rewrite with similar functionality.) The yum tool inspects package sources (repositories), downloads RPM packages, and makes sure that required dependencies are installed along with fresh package installations and package updates. yum uses rpm as a library to install packages. yum repositories are defined by .repo files in /etc/yum.repos.d, or by yum plugins for repository management (such as subscription-manager for Red Hat subscription management).

rpm is the low-level tool which operates on an explicit set of RPM packages. rpm provides both a set of command-line tools and a library to process RPM packages. In contrast to yum, rpm checks package dependencies, but it does not resolve violations automatically. This means that rpm typically relies on yum to tell it exactly what to do; the recipe for a change to a package set is called a transaction.

Securing package distribution at the yum layer resembles transport layer security. The rpm security mechanism is more like end-to-end security (in fact, rpm uses OpenPGP internally, which has traditionally been used for end-to-end email protection).

Transport security with yum

Transport security is comparatively easy to implement. The web server just needs to serve the package repository metadata (repomd.xml and its descendants) over HTTPS instead of HTTP. On the client, a .repo file in /etc/yum.repos.d has to look like this:

[gnu-hello]
name=gnu-hello for Fedora $releasever
baseurl=https://download.example.com/dist/fedora/$releasever/os/
enabled=1

$releasever expands to the Fedora version at run time (like “22”). By default, end-to-end security with RPM signatures is enabled (see the next section), but we will focus on transport security first.

yum will verify the cryptographic digests contained in the metadata files, so serving the metadata over HTTPS is sufficient, but offering the .rpm files over HTTPS as well is a sensible precaution. The metadata can instruct yum to download packages from absolute, unrelated URLs, so it is necessary to inspect the metadata to make sure it does not contain such absolute “http://” URLs (a quick check is sketched below). However, transport security with a third-party mirror network is quite meaningless, particularly if anyone can join the mirror network (as is the case with CentOS, Debian, Fedora, and others). Rather than attacking the HTTPS connections directly, an attacker could just become part of the mirror network. There are two fundamentally different approaches to achieve some degree of transport security nevertheless.
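
One way to perform that check is to fetch the metadata and search it for absolute “http://” URLs. A minimal sketch (the repository URL is the example from above; the actual metadata file names are listed in repomd.xml):

$ curl -s https://download.example.com/dist/fedora/22/os/repodata/repomd.xml | grep -o 'href="http://[^"]*"'
$ curl -sO https://download.example.com/dist/fedora/22/os/repodata/primary.xml.gz
$ zgrep -o 'href="http://[^"]*"' primary.xml.gz

No output from these commands means no absolute “http://” URLs were found.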

Fedora provides a centralized, non-mirrored, Fedora-run metalink service which supplies a list of active mirrors and the expected cryptographic digest of the repomd.xml files. yum uses this information to select a mirror and verify that it serves the up-to-date, untampered repomd.xml. The chain of cryptographic digests is verified from there, eventually leading to verification of the .rpm file contents. This is how the long-standing Fedora bug 998 was eventually fixed.
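
For reference, the stock Fedora repository definition uses a metalink line instead of a fixed baseurl; a trimmed sketch:

[fedora]
name=Fedora $releasever - $basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
enabled=1
gpgcheck=1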

Red Hat takes a different approach to distribute Red Hat Enterprise Linux and its RPM-based products: a content distribution network, managed by a trusted third party. Furthermore, the repositories provided by Red Hat use a separate public key infrastructure which is managed by Red Hat, so breaches in the browser PKI (that is, compromises of certificate authorities or mis-issued individual certificates) do not affect the transport security checks yum provides. Organizations that wish to implement something similar can use the sslcacert configuration switch of yum, as in the sketch at the end of this section. This is also the way Red Hat Satellite 6 implements transport security.

Transport security has the advantage that it is straightforward to set up (it is not more difficult than enabling HTTPS). It also guards against manipulation at a lower level, and will detect tampering before data is passed to complex file format parsers such as SQLite, RPM, or the XZ decompressor. However, end-to-end security is often more desirable, and we cover that in the next section.
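
A sketch of a repository definition pinned to a private CA via sslcacert (the certificate path is hypothetical):

[gnu-hello]
name=gnu-hello for Fedora $releasever
baseurl=https://download.example.com/dist/fedora/$releasever/os/
enabled=1
sslcacert=/etc/pki/example/ca-bundle.pem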

End-to-end security with RPM signatures

RPM package signatures can be used to implement cryptographic integrity checks for RPM packages. This approach is end-to-end in the sense that the package build infrastructure at the vendor can use an offline or half-online private key (such as one stored in a hardware security module), and the final system which consumes these packages can directly verify the signatures because they are built into the .rpm package files. Intermediates such as proxies and caches (which are sometimes used to separate production servers from the Internet) cannot tamper with these signatures. In contrast, transport security protections are weakened or lost in such an environment.

Generating RPM signatures

To add an RPM signature to a .rpm file, you need to generate a GnuPG key first, using gpg --gen-key. Let’s assume that this key has the user ID “rpmsign@example.com”. We first export the public key part to a file in a special directory, otherwise rpmsign will not be able to verify the signatures we create, as it uses the RPM database as a source of trusted signing keys (and not the user GnuPG keyring):

$ mkdir $HOME/rpm-signing-keys
$ gpg --export -a rpmsign@example.com > $HOME/rpm-signing-keys/example-com.key

The name of the directory $HOME/rpm-signing-keys does not matter, but the name of the file containing the public key must end in “.key”. On Red Hat Enterprise Linux 7, CentOS 7, and Fedora, you may have to install the rpm-sign package, which contains the rpmsign program. The rpmsign command to create the signature looks like this:

$ rpmsign -D '_gpg_name rpmsign@example.com' --addsign hello-2.10.1-1.el6.x86_64.rpm
Enter pass phrase:
Pass phrase is good.
hello-2.10.1-1.el6.x86_64.rpm:

(On success, there is no output after the file name on the last line, and the shell prompt reappears.) The file hello-2.10.1-1.el6.x86_64.rpm is overwritten in place, with a variant that contains the signature embedded into the RPM header. The presence of a signature can be checked with this command:
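
Instead of passing the key on each invocation with -D, the signing key can also be configured persistently in ~/.rpmmacros (a sketch, assuming the same user ID as above):

%_gpg_name rpmsign@example.com

With that in place, “rpmsign --addsign hello-2.10.1-1.el6.x86_64.rpm” picks up the key automatically.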

$ rpm -Kv -D "_keyringpath $HOME/rpm-signing-keys" hello-2.10.1-1.el6.x86_64.rpm
hello-2.10.1-1.el6.x86_64.rpm:
    Header V4 RSA/SHA1 Signature, key ID de337997: OK
    Header SHA1 digest: OK (b2be54480baf46542bcf395358aef540f596c0b1)
    V4 RSA/SHA1 Signature, key ID de337997: OK
    MD5 digest: OK (6969408a8d61c74877691457e9e297c6)

If the output of this command contains “NOKEY” lines instead, like the following, it means that the public key in the directory $HOME/rpm-signing-keys has not been loaded successfully:

hello-2.10.1-1.el6.x86_64.rpm:
    Header V4 RSA/SHA1 Signature, key ID de337997: NOKEY
    Header SHA1 digest: OK (b2be54480baf46542bcf395358aef540f596c0b1)
    V4 RSA/SHA1 Signature, key ID de337997: NOKEY
    MD5 digest: OK (6969408a8d61c74877691457e9e297c6)

Afterwards, the RPM files can be distributed as usual and served over HTTP or HTTPS, as if they were unsigned.

Consuming RPM signatures

To enable RPM signature checking in yum explicitly, the yum repository file must contain a gpgcheck=1 line, as in:

[gnu-hello]
name=gnu-hello for Fedora $releasever
baseurl=https://download.example.com/dist/fedora/$releasever/os/
enabled=1
gpgcheck=1

Once signature checks are enabled in this way, package installation will fail with a NOKEY error until the signing key used by .rpm files in the repository is added to the system RPM database. This can be achieved with a command like this:

$ rpm --import https://download.example.com/keys/rpmsign.asc

The file needs to be transported over a trusted channel, hence the use of an https:// URL in the example. (It is also possible to instruct the user to download the file from a trusted web site, copy it to the target system, and import it directly from the file system.) Afterwards, package installation works as before.
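
If a trusted channel is not available, the key’s fingerprint can be checked manually before the import (a sketch; the expected fingerprint must be obtained out of band, for example from printed product documentation):

$ curl -sO https://download.example.com/keys/rpmsign.asc
$ gpg --with-fingerprint rpmsign.asc
$ rpm --import rpmsign.asc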

After a key has been imported, it will appear in the output of the “rpm -qa” command:

$ rpm -qa | grep ^gpg-pubkey-
…
gpg-pubkey-ab0e12ef-de337997
…

More information about the key can be obtained with “rpm -qi gpg-pubkey-ab0e12ef-de337997”, and the key can be removed again using “rpm --erase gpg-pubkey-ab0e12ef-de337997”, just as if it were a regular RPM package.

Note: Package signatures are only checked by yum if the package is downloaded from a repository (which has checking enabled). This happens if the package is specified as a name or name-version-release on the yum command line. If the yum command line names a file or URL instead, or the rpm command is used, no signature check is performed in current versions of Red Hat Enterprise Linux, Fedora, or CentOS.
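
If you do install a package from a file or URL, you can still check its signature manually before installation, using the key previously imported into the RPM database:

$ rpm -Kv hello-2.10.1-1.el6.x86_64.rpm

The output should consist of “OK” lines, as in the example above, without any NOKEY markers.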

Issues to avoid

When publishing RPM software repositories, the following should be avoided:

  1. The recommended yum repository configuration uses baseurl lines containing http:// URLs.
  2. The recommended yum repository configuration explicitly disables RPM signature checking with gpgcheck=0.
  3. There are optional instructions to import RPM keys, but these instructions do not tell the system administrator to remove the gpgcheck=0 line from the default yum configuration provided by the independent software vendor.
  4. The recommended “rpm --import” command refers to the public key file using an http:// URL.

The first three deficiencies in particular open the system up to a straightforward man-in-the-middle attack on package downloads. An attacker can replace the repository or RPM files while they are downloaded, thus gaining the ability to execute arbitrary commands when they are installed. As outlined in the article on the PKI used by the Red Hat CDN, some enterprise networks perform TLS intercept, and HTTPS downloads will fail there. This possibility is not sufficient to justify weakening package authentication for all customers, for example by recommending http:// instead of https:// URLs in the yum configuration. Similarly, some customers do not want to perform the extra step involving “rpm --import”, but again, this is no excuse for disabling verification for everyone, as long as RPM signatures are actually available in the repository. (Some software delivery processes make it difficult to create such end-to-end verifiable signatures.)

Summary

If you are creating a repository of packages, you should give your users a secure way to consume them. You can do this by following these recommendations:

  • Use https:// URLs everywhere in configuration advice regarding RPM repository setup for yum.
  • Create a signing key and use it to sign RPM packages, as outlined above.
  • Make sure RPM signature checking is enabled in the yum configuration.
  • Use an https:// URL to download the public key in the setup instructions.

We acknowledge that package signing might not be possible for everyone, but software downloads over HTTPS are straightforward to implement and should always be used.

libuser vulnerabilities

It was discovered that the libuser library contains two vulnerabilities which, in combination, allow unprivileged local users to gain root privileges.

libuser is a library that provides read and write access to files like /etc/passwd, which constitute the system user and group database. On Red Hat Enterprise Linux it is a central system component.

What is being disclosed today?

Qualys reported two vulnerabilities: CVE-2015-3245, a missing check for invalid characters (such as newlines) in the chfn functionality of the userhelper program, and CVE-2015-3246, a race condition arising from the way libuser modifies /etc/passwd directly.

It turns out that these vulnerabilities can be exploited by an unprivileged local user to gain root privileges on an affected system. However, due to the way libuser works, only users who have accounts already listed in /etc/passwd can exploit this vulnerability, and the user needs to supply the account password as part of the attack. These requirements mean that exploitation by accounts listed only in LDAP (or some other NSS data source) or by system accounts without a valid password is not possible.

Further analysis showed that the first vulnerability, CVE-2015-3245, is also due to a missing check in libuser.

Which system components are affected by these vulnerabilities?

libuser is a library, which means that in order to exploit it, a program which employs it must be used. From an attacker’s point of view, the ideal target program has the following properties:

  1. It uses libuser.
  2. It is SUID-root.
  3. It allows putting almost arbitrary content into /etc/passwd.

Without the third item, exploitation may still be possible, but it will be much more difficult. If the program is not SUID-root, a user will not have unlimited attempts to exploit the race condition.

A survey of programs processing /etc/passwd and related files presents this picture:

  • passwd is SUID-root, but it uses PAM to change the password, which has custom code to modify /etc/passwd not affected by the race condition. The account locking functionality in passwd does use libuser, but it is restricted to root.
  • chfn and chsh from util-linux are SUID-root and use libuser to change /etc/passwd (the latter depending on how util-linux was compiled) but they have fairly strict filters controlling what users can put into these files.
  • lpasswd, lchfn, lchsh and related utilities from libuser are not SUID-root.
  • userhelper in the usermode package has all three qualifications: libuser-based, SUID-root, and lack of filters.

This is why userhelper is a plausible target for exploitation, and other programs such as passwd and chfn are not.

How can these vulnerabilities be addressed?

System administrators can apply updates from their operating system vendor. Details of affected Red Hat products and security advisories are available in the knowledge base article on the Red Hat Customer Portal.

This security update will change libuser to apply additional checks to the values written to the user and group files (so that injecting newlines is no longer possible), and replaces the locking and file update code to follow the same procedures as the rest of the system. The first change is sufficient to prevent newline injection with userhelper as well, which means that only libuser needs to be updated.

If software updates are not available or cannot be applied, it is possible to block access to the vulnerable functionality with a PAM configuration change. System administrators can edit the files /etc/pam.d/chfn and /etc/pam.d/chsh and block access to non-root users by using pam_warn (for logging) and pam_deny:

#%PAM-1.0
auth       sufficient   pam_rootok.so
auth       required     pam_warn.so
auth       required     pam_deny.so
auth       include      system-auth
account    include      system-auth
password   include      system-auth
session    include      system-auth

This will prevent users from changing their login shells and their GECOS field. userhelper identifies itself to PAM as “chfn”, which means this change is effective for this program as well.

Acknowledgements

Red Hat would like to thank Qualys for reporting these vulnerabilities.

Single sign-on with OpenConnect VPN server over FreeIPA

In March of 2015 version 0.10.0 of the OpenConnect VPN server was released. One of its main features is the addition of MS-KKDCP support and GSSAPI authentication. Putting the acronyms aside, this means that authentication in FreeIPA, which uses Kerberos, is greatly simplified for VPN users. Before explaining more, let’s first explore what the typical login process is on a VPN network.

Currently, with a typical VPN server or product, one needs to log in to the VPN server using a username-password pair, and then sign into the Kerberos realm using, again, a username-password pair. Often the very same pair is used for both logins. That is, we have two independent secure authentication steps to log in to a network, one after the other, consuming the user’s time without necessarily increasing the security level. Can things be simplified to achieve single sign-on over the VPN? We believe yes, and that’s the reason we combined the two independent authentications into a single authentication instance. The user logs into the Kerberos realm once and uses the obtained credentials to log in to the VPN server as well. That way, the necessary passwords are asked for only once, minimizing login time and frustration.

How is that done? If the user needs to connect to the VPN in order to access the Kerberos realm, how could he perform Kerberos authentication prior to that? To answer that question we’ll first explain the protocols in use. The protocol followed by the OpenConnect VPN server is HTTPS based; hence, any authentication method available for HTTPS is available to the VPN server as well. In this particular case, we take advantage of the SPNEGO and the MS-KKDCP protocols. The former enables GSSAPI negotiation over HTTPS, thus allowing a Kerberos ticket to be used to authenticate to the server. The MS-KKDCP protocol allows an HTTPS server to behave as a proxy to a Kerberos Authentication Server, and that’s the key point which allows the user to obtain the Kerberos ticket over the VPN server protocol. Thus, the combination of the two protocols allows the OpenConnect VPN server to operate both as a proxy to the KDC and as a Kerberos-enabled service. Furthermore, the usage of HTTPS ensures that all transactions with the Kerberos server are protected using the OpenConnect server’s key, ensuring the privacy of the exchange. However, there is a catch; since the OpenConnect server is now a proxy for Kerberos messages, the Kerberos Authentication Server cannot see the real IPs of the clients, and thus cannot prevent a flood of requests which could cause a denial of service. To address that, we introduced a point system in the OpenConnect VPN server which bans IP addresses that perform more than a pre-configured number of requests.

As a consequence, with the above setup, the login process is simplified by reducing the number of steps required to log in to a network managed by FreeIPA. The user logs into the Kerberos Authentication Server, and the VPN to the FreeIPA-managed network is made available with no additional prompts.

Wouldn’t that reduce security? Isn’t it more secure to ask the user for one set of credentials to connect to the home network and a different set to access the services within it? That’s a valid concern. There are networks where this is indeed a good design choice, but in other networks it may not be. By stacking multiple authentication methods you may end up training the less security-oriented users to try every password they were provided at every prompt until something works. However, it is desirable to increase the authentication strength when coming from untrusted networks. For that, it is possible, and recommended, to configure FreeIPA to require a second factor authenticator (OTP) as part of the login process.

Another, equally important concern with single sign-on is the long lifetime of Kerberos tickets: how can we prevent a stolen laptop with a valid ticket from being used to access the VPN without re-authentication? We address that by enforcing a configurable TGT ticket lifetime limit on the VPN server. This way, VPN authentication will only occur if the user’s ticket is fresh, and the user’s password will be required otherwise.

Setting everything up

The next paragraphs move from theory to practice, and describe the minimum set of steps required to set up the OpenConnect VPN server and client with FreeIPA. At this point we assume that a FreeIPA setup is already in place and that a realm named KERBEROS.REALM exists. See the Fedora FreeIPA guide for information on how to set up FreeIPA.

Server side: Fedora 22, RHEL7

The first step is to install the latest version of the 0.10.x branch of the OpenConnect VPN server (ocserv) on the server system. You can use the following command. On RHEL 7 you will also need to set up the EPEL 7 repository.

yum install -y ocserv

That will install the server in an unconfigured state. The server utilizes a single configuration file found in /etc/ocserv/ocserv.conf. It contains several directives documented inline. To allow authentication with Kerberos tickets as well as with a password (e.g., for clients that cannot obtain a ticket, such as clients on mobile phones), you need to enable both PAM and GSSAPI authentication with the following two lines in the configuration file.

auth = pam
enable-auth = gssapi[tgt-freshness-time=360]

The option ‘tgt-freshness-time’, available with OpenConnect VPN server 0.10.5 and later, specifies the lifetime, in seconds, for which a Kerberos (TGT) ticket is considered valid for VPN authentication. A user will have to re-authenticate if this time is exceeded. In effect, that prevents the usage of the VPN for the whole lifetime of a Kerberos ticket.

The following line will enable the MS-KKDCP proxy on ocserv. You’ll need to replace KERBEROS.REALM with your realm and KDC-IP-ADDRESS with your KDC’s IP address.

kkdcp = /KdcProxy KERBEROS.REALM tcp@KDC-IP-ADDRESS:88

Note that for PAM authentication to operate you will also need to set up an /etc/pam.d/ocserv file. We recommend using pam_sss (the SSSD PAM module) for that, although the file can contain anything that best suits the local policy. An example of an SSSD PAM configuration is shown in the Fedora Deployment Guide.

The remaining options in ocserv.conf concern the VPN network setup; the comments in the default configuration file should be self-explanatory. At minimum you’ll need to specify a range of IPs for the VPN network, the addresses of the DNS servers, and the routes to push to the clients.
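
A minimal sketch of these network options (all addresses and routes are examples):

ipv4-network = 192.168.99.0
ipv4-netmask = 255.255.255.0
dns = 192.168.99.1
route = 10.0.0.0/255.0.0.0

At this point the server can be run with the following commands.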

systemctl enable ocserv
systemctl start ocserv

The status of the server can be checked using “systemctl status ocserv”.
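
If a firewall is active on the server, the port ocserv listens on must also be opened. By default this is port 443 (configurable with the tcp-port and udp-port directives in ocserv.conf); a sketch using firewalld:

firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=443/udp
firewall-cmd --reload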

Client side: Fedora 21, RHEL7

The first step is to install the OpenConnect VPN client, named openconnect, on the client system. The version must be 7.05 or later. On RHEL 7 you will need to set up the EPEL 7 repository.

yum install -y openconnect NetworkManager-openconnect

Set up Kerberos to use ocserv as its KDC. For that you’ll need to modify /etc/krb5.conf to contain the following:

[realms]
KERBEROS.REALM = {
    kdc = https://ocserv.example.com/KdcProxy
    http_anchors = FILE:/path-to-your/ca.pem
    admin_server = ocserv.example.com
    auth_to_local = DEFAULT
}

[domain_realm]
.kerberos.test = KERBEROS.REALM
kerberos.test = KERBEROS.REALM

Note that ocserv.example.com should be replaced with the DNS name of your server, and /path-to-your/ca.pem should be replaced with a PEM-formatted file which holds the server’s Certificate Authority. For the kdc option, the server’s DNS name is preferred to an IP address, to simplify server name verification for the Kerberos libraries. At this point you should be able to use kinit to authenticate and obtain a ticket from the Kerberos Authentication Server. Note, however, that kinit is very brief in the errors it prints, and a server certificate verification error will not be easy to debug. Ensure that the http_anchors file is in PEM format, that it contains the Certificate Authority that signed the server’s certificate, and that the DNS name in the server’s certificate matches the DNS name set up in the file. Note also that this approach requires the user to always use OpenConnect’s KDCProxy. To avoid that restriction, and to allow the user to use the KDC directly when on the LAN, we are currently working towards auto-discovery of the KDC.

Then, at a terminal run:

$ kinit

If the command succeeds, the ticket is obtained, and at this point you will be able to set up openconnect from the NetworkManager GUI and connect using the Kerberos credentials. To set up the VPN via NetworkManager, on the system menu select VPN, Network Settings, and add a new network of type “Cisco AnyConnect Compatible VPN (openconnect)”. In the Gateway field, fill in the server’s DNS name, add the server’s CA certificate, and that’s all that is required.

To use the command-line client with Kerberos, the following trick is recommended: create a tun device in advance and run the openconnect client as a normal user rather than under sudo. The reason to avoid running the openconnect client with sudo is that sudo would prevent access to the user’s Kerberos credentials.

$ sudo ip tuntap add vpn0 mode tun user my-user-name
$ openconnect server.example.com -i vpn0

Client side: Windows

A Windows client is available for OpenConnect VPN at this web site. Its setup, similarly to NetworkManager, requires setting the server’s DNS name and its certificate. Configuring Windows for use with FreeIPA is outside the scope of this text, but more information can be found in this FreeIPA manual.

Conclusion

A single sign-on solution using FreeIPA and the OpenConnect VPN has many benefits. The core optimization of a single login prompt for the user to authorize access to network resources saves user time and frustration. It is important to note that these optimizations are possible by making VPN access part of the deployed infrastructure, rather than an afterthought. With careful planning, an OpenConnect VPN solution can provide a secure and easy solution to network authentication.

VENOM, don’t get bitten.


QEMU is a generic and open source machine emulator and virtualizer and is incorporated in some Red Hat products as a foundation and hardware emulation layer for running virtual machines under the Xen and KVM hypervisors.

CVE-2015-3456 (aka VENOM) is a security flaw in QEMU’s Floppy Disk Controller (FDC) emulation. It can be exploited by a malicious guest user with access to the FDC I/O ports by issuing specially crafted FDC commands to the controller. It can result in guest-controlled execution of arbitrary code in, and with the privileges of, the corresponding QEMU process on the host. In the worst-case scenario, this can be a guest-to-host escape with root privileges.

This issue affects all x86 and x86-64 based HVM Xen and QEMU/KVM guests, regardless of their machine type, because both PIIX and ICH9 based QEMU machine types create an ISA bridge (ICH9 via LPC) and make the FDC accessible to the guest. It is also exposed regardless of the presence of any floppy-related QEMU command-line options, so even guests without a floppy disk explicitly enabled in the libvirt or Xen configuration files are affected.

We believe that code execution is possible but we have not yet seen any working reproducers that would allow this.

This flaw arises because of unrestricted indexed write access to the fixed-size FIFO memory buffer that the FDC emulation layer uses to store commands and their parameters. The FIFO buffer is accessed with byte granularity (the equivalent of an FDC data I/O port write), and the current index is incremented afterwards. After each issued and processed command the FIFO index is reset to 0, so during normal processing the index cannot go out of bounds.

For certain commands (such as FD_CMD_READ_ID and FD_CMD_DRIVE_SPECIFICATION_COMMAND), though, the index is either not reset for a certain period of time (FD_CMD_READ_ID), or there are code paths that don’t reset the index at all (FD_CMD_DRIVE_SPECIFICATION_COMMAND), in which case subsequent FDC data port writes result in sequential FIFO buffer memory writes that can be out of bounds of the allocated memory. The attacker has full control over the values that are stored and also almost fully controls the length of the write. Depending on how the FIFO buffer is defined, the attacker might also have a little control over the index, as in the case of the Red Hat Enterprise Linux 5 Xen QEMU package, where the index variable is stored after the memory designated for the FIFO buffer.

Depending on the location of the FIFO memory buffer, this can result in either a stack or a heap overflow. For all of the Red Hat products using QEMU, the FIFO memory buffer is allocated from the heap.

Red Hat has issued security advisories to fix this flaw and instructions for applying the fix are available on the knowledge-base.

Mitigation

The sVirt and seccomp functionalities used to restrict the host’s QEMU process privileges and resource access might mitigate the impact of successful exploitation of this issue. A possible policy-based workaround is to avoid granting untrusted users administrator privileges within guests.

Samba vulnerability (CVE-2015-0240)

Samba is the most commonly used Windows interoperability suite of programs on Linux and Unix systems. It uses the SMB/CIFS protocol to provide secure, stable, and fast file and print services. It can also seamlessly integrate with Active Directory environments and can function as a domain controller as well as a domain member (the legacy NT4-style domain controller is supported, but the Active Directory domain controller feature of Samba 4 is not supported yet).

CVE-2015-0240 is a security flaw in the smbd file server daemon. It can be exploited by a malicious Samba client, by sending specially-crafted packets to the Samba server. No authentication is required to exploit this flaw. It can result in remotely controlled execution of arbitrary code as root.

We believe code execution is possible but we’ve not yet seen any working reproducers that would allow this.

This flaw arises because an uninitialized pointer is passed to the TALLOC_FREE() function. It can be exploited by calling the ServerPasswordSet RPC API on the NetLogon endpoint, using a NULL session over IPC.
Note: The code snippets shown below are from samba-3.6 as shipped with Red Hat Enterprise Linux 6. (All versions of samba >= 3.5 are affected by this flaw.)
In the _netr_ServerPasswordSet() function, creds is defined as a pointer to a structure. It is not initialized.

1203 NTSTATUS _netr_ServerPasswordSet(struct pipes_struct *p,   
1204                          struct netr_ServerPasswordSet *r)   
1205 {   
1206         NTSTATUS status = NT_STATUS_OK;   
1207         int i;  
1208         struct netlogon_creds_CredentialState *creds;

Later, the netr_creds_server_step_check() function is called with &creds at:

1213  status = netr_creds_server_step_check(p, p->mem_ctx,   
1214                               r->in.computer_name,   
1215                               r->in.credential,   
1216                               r->out.return_authenticator,   
1217                               &creds);

If the netr_creds_server_step_check() function fails, it returns, and creds is still not initialized. Later in the _netr_ServerPasswordSet() function, creds is freed using the TALLOC_FREE() function, which results in an uninitialized-pointer free flaw.
It may be possible to control the value of creds by sending a number of specially crafted packets, and then use the destructor pointer called by TALLOC_FREE() to execute arbitrary code.

As mentioned above, this flaw can only be triggered if netr_creds_server_step_check() fails. This is dependent on the version of Samba used.

In Samba 4.1 and above, this crash can only be triggered after setting “server schannel = yes” in the server configuration. This is due to commit adbe6cba005a2060b0f641e91b500574f4637a36, which introduces NULL initialization into the most common code path. It is still possible to trigger an early return with a memory allocation failure, but that is less likely to occur. Therefore this issue is more difficult to exploit. The Red Hat Product Security team has rated this flaw as having important impact on Red Hat Enterprise Linux 7.
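
For reference, that setting belongs in the [global] section of smb.conf:

[global]
    server schannel = yes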

In older versions of Samba (samba-3.6 as shipped with Red Hat Enterprise Linux 5 and 6, and samba-4.0 as shipped with Red Hat Enterprise Linux 6) the above-mentioned commit does not exist. An attacker could call the _netr_ServerPasswordSet() function with a NULLed buffer, which could trigger this flaw. Red Hat Product Security has rated this flaw as having critical impact on all other versions of the samba package shipped by Red Hat.

Lastly, the version of Samba 4.0 shipped with Red Hat Enterprise Linux 6.2 EUS is based on an alpha release of Samba 4, which lacked the password-change functionality and thus the vulnerability. The same is true for the version of Samba 3.0 shipped with Red Hat Enterprise Linux 4 and 5.

Red Hat has issued security advisories to fix this flaw and instructions for applying the fix are available on the knowledgebase.  This flaw is also fixed in Fedora 20 and Fedora 21.

Before you initiate a “docker pull”

In addition to the general challenges that are inherent to isolating containers, Docker brings with it an entirely new attack surface in the form of its automated fetching and installation mechanism, “docker pull”. It may be counter-intuitive, but “docker pull” both fetches and unpacks a container image in one step. There is no verification step and, surprisingly, malformed packages can compromise a system even if the container itself is never run. Many of the CVEs issued against Docker have been related to packaging that can lead to install-time compromise and/or issues with the Docker registry.

One way, now resolved, that such malicious images could compromise a system was a simple path traversal during the unpack step. By simply using a tarball’s capacity to unpack to paths such as “../../../”, malicious images were able to overwrite any part of the host file system they desired.

Thus, one of the most important ways you can protect yourself when using Docker images is to make sure you only use content from a source you trust and to separate the download and unpack/install steps. The easiest way to do this is simply to not use the “docker pull” command. Instead, download your Docker images over a secure channel from a trusted source and then use the “docker load” command. Most image providers also serve images directly over a secure, or at least verifiable, connection. For example, Red Hat provides SSL-accessible “Container Images”. Fedora also provides Docker images with each release.

While Fedora does not provide SSL with all mirrors, it does provide a signed checksum of the Docker image that can be used to verify it before you use “docker load”.
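
A sketch of that verification workflow (the file names and download location are hypothetical; substitute the image and CHECKSUM file for your release):

$ curl -O https://download.example.org/Fedora-Docker-Base-21.x86_64.tar.xz
$ curl -O https://download.example.org/Fedora-Docker-Base-21-CHECKSUM
$ gpg --verify Fedora-Docker-Base-21-CHECKSUM
$ sha256sum -c Fedora-Docker-Base-21-CHECKSUM
$ docker load < Fedora-Docker-Base-21.x86_64.tar.xz

The CHECKSUM file is signed with the Fedora GPG key, so “gpg --verify” checks its authenticity and “sha256sum -c” then checks the downloaded image against the signed digest.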

Since “docker pull” automatically unpacks images, and the unpacking process itself has been a source of compromises, it is possible for typos to lead to system compromises (e.g., a malicious “rel” image downloaded and unpacked when you intended “rhel”). This typo problem can also occur in Dockerfiles. One way to protect yourself is to prevent accidental access to index.docker.io at the firewall level, or by adding the following /etc/hosts entry:

127.0.0.1 index.docker.io

This will cause such mistakes to time out instead of potentially downloading unwanted images. You can still use “docker pull” for private repositories by explicitly providing the registry:

docker pull registry.somewhere.com/image

And you can use a similar syntax in Dockerfiles:

FROM registry.somewhere.com/image

Providing a wider ecosystem of trusted images is exactly why Red Hat began its certification program for container applications. Docker is an amazing technology, but it is neither a security nor interoperability panacea. Images still need to come from sources that certify their security, level-of-support, and compatibility.

Fedora Security Team

Vulnerabilities in software happen.  When they get fixed, it’s up to the packager to make those fixes available to the systems using the software.  Duplicating much of the response effort that Red Hat Product Security performs for Red Hat products, the Fedora Security Team (FST) has recently been created to help packagers get vulnerability fixes downstream in a timely manner.

At the beginning of July, there were over 500 vulnerability tickets open* against Fedora and EPEL.  Many of these vulnerabilities already had patches or releases available to remedy the problems, but not all.  The Team has already found several examples where upstream did not know that the vulnerability existed, and was able to get the issue fixed quickly.  This is one of the reasons having a dedicated team to work these issues is so important.

In the few short weeks since the Team was created, we’ve already closed 14 vulnerability tickets and are working on another 150.  We hope to be able to work in a more real-time fashion once the backlog decreases.  Staying in front of the vulnerabilities will not be easy, however.  During the week of August 3rd, 27 new tickets were opened for packages in Fedora and EPEL.  While we haven’t figured out a way to get ahead of the problem, we are trying to deal with the aftermath and get fixes pushed to users as quickly as possible.

Additional information on the mission and the Team can be found on our wiki page.  If you’d like to get involved please join us for one of our meetings and subscribe to our listserv.

 

* A separate vulnerability ticket is sometimes opened for different versions of Fedora and EPEL resulting in multiple tickets for a single vulnerability.  This makes informing the packager easier but also inflates the numbers significantly.

Controlling access to smart cards

Smart cards are increasingly used in workstations as an authentication method. They are mainly used to provide public key operations (e.g., digital signatures) using keys that cannot be exported from the card. They also serve as data storage, e.g., for the corresponding certificate to the key. On RHEL and Fedora systems, low-level access to smart cards is provided by the pcsc-lite daemon, an implementation of the PC/SC protocol defined by the PC/SC industry consortium. In brief, the PC/SC protocol allows the system to execute certain pre-defined commands on the card and obtain the result. The pcsc-lite implementation uses a privileged process that handles direct communication with the card (e.g., using the CCID USB protocol), while applications communicate with the daemon using the SCard API. That API hides the underlying communication between the application and the pcsc-lite daemon, which is based on unix domain sockets.

However, there is a catch. As you may have noticed, there is no mention of access control in the communication between applications and the pcsc-lite daemon. That is because it is assumed that the access control included in smart cards, such as PINs, pinpads, and biometrics, is sufficient to counter most threats. That isn’t always the case. Because smart cards typically contain embedded software in the form of firmware, there will be bugs that can be exploited by a malicious application, and these bugs, even if known, are neither easy nor practical to fix. Furthermore, there are often public files (e.g., without the protection of a PIN) present on a smart card that, while intended for use by the smart card owner, should not necessarily be accessible to all system users. Even worse, certain smart cards allow any user of a system to erase all smart card data by re-initializing the card. All of this led us to introduce additional access control for smart cards, on par with the access control used for external hard disks. The main idea is to be able to provide fine-grained access control on the system, and to specify policies such as “the user on the console should be able to fully access the smart card, but no other user may”. For that we used polkit, a framework used by applications to grant access to privileged operations. The reason for this decision is mainly that polkit has already been used successfully to grant access to external hard disks, and unsurprisingly the access control requirements for smart cards share many similarities with removable devices such as hard disks.

The pcsc-lite access control framework is now part of pcsc-lite 1.8.11 and will be enabled by default in Fedora 21. The advantage it offers is that it can prevent unauthorized users from issuing commands to smart cards, and from reading, writing or (in some cases) erasing any public data on a smart card. The access control is imposed during session initialization, reducing any potential overhead to a minimum. The default policy in Fedora 21 will treat any user on the console as authorized, as physical access to the console implies physical access to the card; but remote users, e.g., via ssh, or system daemons will be treated as unauthorized unless they have administrative rights.

Let’s now see how the smart card access control can be administered. The system-wide policy for pcsc-lite daemon is available at /usr/share/polkit-1/actions/org.debian.pcsc-lite.policy. That file is a polkit XML file that contains the default rules needed to access the daemon. The default policy that will be shipped in Fedora 21 consists of the following.

  <action id="org.debian.pcsc-lite.access_pcsc">
    <description>Access to the PC/SC daemon</description>
    <message>Authentication is required to access the PC/SC daemon</message>
    <defaults>
      <allow_any>auth_admin</allow_any>
      <allow_inactive>auth_admin</allow_inactive>
      <allow_active>yes</allow_active>
    </defaults>
  </action>

  <action id="org.debian.pcsc-lite.access_card">
    <description>Access to the smart card</description>
    <message>Authentication is required to access the smart card</message>
    <defaults>
      <allow_any>auth_admin</allow_any>
      <allow_inactive>auth_admin</allow_inactive>
      <allow_active>yes</allow_active>
    </defaults>
  </action>

The syntax format is explained in more detail in the polkit manual page. The pcsc-lite-relevant parts are the action IDs. The action with ID “org.debian.pcsc-lite.access_pcsc” contains the policy for accessing the pcsc-lite daemon and issuing commands to it, i.e., accessing the unix domain socket. The action with ID “org.debian.pcsc-lite.access_card” contains the policy for issuing commands to smart cards available to the pcsc-lite daemon. That distinction allows, for example, programs to query the number of readers and cards present, but not to issue any commands to them. Under both policies, only active (console) processes are allowed to access the pcsc-lite daemon and smart cards, unless they are privileged processes.

Polkit is quite a bit more flexible, though. With it we can provide even more fine-grained access control, e.g., to specific card readers. For example, if we have a web server that utilizes a smart card, we can restrict it to use only the smart cards under a given reader. These rules are expressed in JavaScript and can be added in a separate file in /usr/share/polkit-1/rules.d/. Let’s now see what the rules for our example would look like.

polkit.addRule(function(action, subject) {
    if (action.id == "org.debian.pcsc-lite.access_pcsc" &&
        subject.user == "apache") {
            return polkit.Result.YES;
    }
});

polkit.addRule(function(action, subject) {
    if (action.id == "org.debian.pcsc-lite.access_card" &&
        action.lookup("reader") == 'name_of_reader' &&
        subject.user == "apache") {
            return polkit.Result.YES;
    }
});

Here we add two rules. The first one allows the user “apache”, which is the user the web server runs under, to access the pcsc-lite daemon. That rule is needed because under our default policy only the administrator and the console user may access the daemon. The second rule allows the same user to access the smart card reader identified by “name_of_reader”. The name of the reader can be obtained using the commands pcsc_scan or opensc-tool -l.

With these changes to pcsc-lite we manage to provide reasonable default settings for smart card users, settings that apply to most, if not all, typical uses. These defaults increase the overall security of the system by denying unauthorized users access to the smart card firmware, as well as to its data and operations.