Category Archives: Redhat

RHSA-2014:1371-1: Important: nss security update

Red Hat Enterprise Linux: Updated nss packages that fix one security issue are now available for Red
Hat Enterprise Linux 4 Extended Life Cycle Support, Red Hat Enterprise
Linux 5.6 Long Life, Red Hat Enterprise Linux 5.9 Extended Update Support,
Red Hat Enterprise Linux 6.2 Advanced Update Support, and Red Hat
Enterprise Linux 6.4 Extended Update Support.

Red Hat Product Security has rated this update as having Important security
impact. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available from the CVE link in the
References section.
CVE-2014-1568

The Source of Vulnerabilities: how Red Hat finds out about vulnerabilities

Red Hat Product Security tracks a lot of data about every vulnerability affecting every Red Hat product. We make all this data available on our Measurement page, and from time to time we write blog posts and reports about interesting metrics or trends.

One metric we’ve not written about since 2009 is the source of the vulnerabilities we fix: how did Red Hat Product Security first hear about each vulnerability?

Every vulnerability that affects a Red Hat product is given a master tracking bug in Red Hat Bugzilla. This bug contains a whiteboard field with a comma-separated list of metadata, including the dates we found out about the issue and the source. You can get a file containing all this information, already gathered, for every CVE. A few months ago we updated our ‘daysofrisk’ command line tool to parse the source information, allowing anyone to quickly create reports like this one.
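
As a hypothetical sketch (the file name and the exact field format are assumptions based on the description above), a per-source tally can be pulled out of that per-CVE metadata with standard shell tools:

 $ # count fixed issues by the source= field of each CVE's metadata
 $ grep -o 'source=[a-zA-Z]*' cve_dates.txt | sort | uniq -c | sort -rn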

So let’s take a look at some example views of recent data: every vulnerability fixed in every Red Hat product in the 12 months up to 30th August 2014 (a total of 1012 vulnerabilities).

Firstly, a chart giving the breakdown of how we first found out about each issue:

Sources of issues

  • CERT: Issues reported to us from a national cert like CERT/CC or CPNI, generally in advance of public disclosure
  • Individual: Issues reported to Red Hat Product Security directly by a customer or researcher, generally in advance of public disclosure
  • Red Hat: Issues found by Red Hat employees
  • Relationship: Issues reported to us by upstream projects, generally in advance of public disclosure
  • Peer vendors: Issues reported to us by other OS distributions, through relationships
    or a shared private forum
  • Internet: For issues not disclosed in advance we monitor a number of mailing lists and security web pages of upstream projects
  • CVE: If we’ve not found out about an issue any other way, we can catch it from the list of public assigned CVE names from Mitre

Next, a breakdown of whether we knew about the issue in advance. For the purposes of our reports we count finding out on the same day an issue was disclosed as not knowing in advance, even though we might have had a few hours’ notice:

Known in advance

There are a few interesting observations from this data:

  • Red Hat employees find a lot of vulnerabilities. We don’t just sit back and wait for others to find flaws for us to fix; we actively look for issues ourselves, and these are found by engineering and quality assurance teams as well as our security teams. 17% of all the issues we fixed in the year were found by Red Hat employees. Where possible, the issues we find are shared in advance with upstream and with peer vendors (generally via the ‘distros’ shared private forum).
  • Relationships matter. When you are fixing vulnerabilities in third-party software, having a relationship with the upstream project makes a big difference. But it’s really important to note that this should never be a one-way street: if an upstream is willing to give Red Hat information about flaws in advance, then we need to be willing to add value to that notification by sanity-checking the draft advisory, checking the patches, and feeding back the results of our quality testing. A recent good example of this is the OpenSSL CCS Injection flaw; our relationship with OpenSSL gave us advance notice of the issue, and we found a mistake in the advisory as well as a mistake in the patch which would otherwise have forced OpenSSL to issue a secondary fix after release. Only two of the dozens of companies prenotified about those OpenSSL issues actually added value back to OpenSSL.
  • Red Hat can influence the way this metric looks; without a dedicated security team, a vendor could just watch what another vendor does and copy them, or rely on public feeds such as the list of assigned CVE names from Mitre. We choose to invest in finding more issues and building upstream relationships.

Bash specially-crafted environment variables code injection attack

Update 2014-09-30 19:30 UTC

Questions have arisen about whether Red Hat products are vulnerable to CVE-2014-6277 and CVE-2014-6278. We have determined that RHSA-2014:1306, RHSA-2014:1311, and RHSA-2014:1312 successfully mitigate these vulnerabilities and no additional actions need to be taken.

Update 2014-09-26 12:00 UTC

We have written a FAQ to address some of the more common questions seen regarding the recent bash issues.

Frequently Asked Questions about the Shellshock Bash flaws


Update 2014-09-26 02:20 UTC

Red Hat has released patched versions of Bash that fix CVE-2014-7169.  Information regarding these updates can be found in the errata.  All customers are strongly encouraged to apply the update as this flaw is being actively attacked in the wild.
Fedora has also released a patched version of Bash that fixes CVE-2014-7169.  Additional information can be found on Fedora Magazine.

Update 2014-09-25 16:00 UTC

Red Hat is aware that the patch for CVE-2014-6271 is incomplete. An attacker can provide specially-crafted environment variables containing arbitrary commands that will be executed on vulnerable systems under certain conditions. The new issue has been  assigned CVE-2014-7169.
We are working on patches in conjunction with the upstream developers as a critical priority. For details on a workaround, please see the knowledgebase article.
Red Hat advises customers to upgrade to the version of Bash which contains the fix for CVE-2014-6271 and not wait for the patch which fixes CVE-2014-7169. CVE-2014-7169 is a less severe issue and patches for it are being worked on.

Bash, or the Bourne-Again SHell, is a UNIX-like shell and perhaps one of the most widely installed utilities on any Linux system. Since its creation in 1989, Bash has evolved from a simple terminal-based command interpreter into a tool put to many other uses.

In Linux, environment variables provide a way to influence the behavior of software on the system. They typically consist of a name which has a value assigned to it. The same is true of the Bash shell. It is common for many programs to run the Bash shell in the background. Bash is often used to provide a shell to a remote user (via ssh or telnet, for example), to provide a parser for CGI scripts (Apache, etc.), or even to provide limited command execution support (git, etc.).

Coming back to the topic, the vulnerability arises from the fact that you can create environment variables with specially-crafted values before calling the Bash shell. These variables can contain code, which gets executed as soon as the shell is invoked. The name of these crafted variables does not matter, only their contents. As a result, this vulnerability is exposed in many contexts, for example:

  • ForceCommand is used in sshd configs to provide limited command execution capabilities for remote users. This flaw can be used to bypass that and provide arbitrary command execution. Some Git and Subversion deployments use such restricted shells. Regular use of OpenSSH is not affected because users already have shell access.
  • Apache servers using mod_cgi or mod_cgid are affected if CGI scripts are either written in Bash or spawn subshells. Such subshells are implicitly used by system/popen in C, by os.system/os.popen in Python, by system/exec in PHP (when run in CGI mode), and by open/system in Perl if a shell is used (which depends on the command string); see the sketch after this list.
  • PHP scripts executed with mod_php are not affected even if they spawn subshells.
  • DHCP clients invoke shell scripts to configure the system, with values taken from a potentially malicious server. This would allow arbitrary commands to be run, typically as root, on the DHCP client machine.
  • Various daemons and SUID/privileged programs may execute shell scripts with environment variable values set or influenced by the user, which would allow arbitrary commands to be run.
  • Any other application which hooks onto a shell or runs a shell script using Bash as the interpreter. Shell scripts which do not export variables are not vulnerable to this issue, even if they process untrusted content and store it in (unexported) shell variables and open subshells.
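
To make the CGI vector above concrete, here is a hypothetical sketch; the script path, hostname, and payload are illustrative only. mod_cgi copies request headers such as User-Agent into environment variables (HTTP_USER_AGENT) before invoking the script, which is exactly the exposure described above:

 #!/bin/bash
 # /var/www/cgi-bin/hello.sh: a CGI script interpreted by Bash
 echo "Content-type: text/plain"
 echo
 echo "hello"

A request with a crafted header then becomes an environment variable holding a malicious function definition in the script's environment:

 $ curl -H 'User-Agent: () { :;}; /bin/touch /tmp/shellshocked' http://victim.example.com/cgi-bin/hello.sh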

Like “real” programming languages, Bash has functions, though in a somewhat limited implementation, and it is possible to put these Bash functions into environment variables. This flaw is triggered when extra code is added to the end of these function definitions (inside the environment variable). Something like:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
 vulnerable
 this is a test
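
For background, this harmless sketch shows how Bash itself exports a function through the environment on versions predating the fixes (patched versions use a different, prefixed encoding); importing such definitions is exactly what the flaw abuses:

 $ myfunc() { echo hello; }
 $ export -f myfunc
 $ env | grep -A1 '^myfunc'
 myfunc=() {  echo hello
 }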

The patch used to fix this flaw ensures that no code is allowed after the end of a Bash function. So if you run the above example with a patched version of Bash, you should get output similar to:

 $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
 bash: warning: x: ignoring function definition attempt
 bash: error importing function definition for `x'
 this is a test

We believe this should not affect backward compatibility. It would, of course, affect any scripts which try to use environment variables created in the way described above, but doing so should be considered bad programming practice.

Red Hat has issued security advisories that fix this issue for Red Hat Enterprise Linux. Fedora has also shipped packages that fix this issue.

We have additional information regarding specific Red Hat products affected by this issue that can be found at https://access.redhat.com/site/solutions/1207723

Information on CentOS can be found at http://lists.centos.org/pipermail/centos/2014-September/146099.html.

Enterprise Linux 5.10 to 5.11 risk report

Red Hat Enterprise Linux 5.11 was released this month (September 2014), eleven months since the release of 5.10 in October 2013. So, as usual, let’s use this opportunity to take a look back over the vulnerabilities and security updates made in that time, specifically for Red Hat Enterprise Linux 5 Server.

Red Hat Enterprise Linux 5 is in its Production 3 phase, more than seven years after its general availability in March 2007, and will receive security updates until March 31st 2017.

Errata count

The chart below illustrates the total number of security updates issued for Red Hat Enterprise Linux 5 Server if you had installed 5.10, up to and including the 5.11 release, broken down by severity. It’s split into two columns, one for the packages you’d get if you did a default install, and the other if you installed every single package.

Note that during installation there actually isn’t an option to install every package; you’d have to manually select them all, and it’s not a likely scenario. For a given installation, the number of package updates and vulnerabilities that affected your systems will depend on exactly what you selected during installation and which packages you have subsequently installed or removed.

Security errata 5.10 to 5.11, Red Hat Enterprise Linux 5 Server

For a default install, from release of 5.10 up to and including 5.11, we shipped 41 advisories to address 129 vulnerabilities. 8 advisories were rated critical, 11 were important, and the remaining 22 were moderate and low.

For all packages, from release of 5.10 up to and including 5.11, we shipped 82 advisories to address 298 vulnerabilities. 12 advisories were rated critical, 29 were important, and the remaining 41 were moderate and low.

You can cut down the number of security issues you need to deal with by carefully choosing the right Red Hat Enterprise Linux variant and package set when deploying a new system, and ensuring you install the latest available Update release.

Critical vulnerabilities

Vulnerabilities rated critical severity are the ones that can pose the most risk to an organisation. By definition, a critical vulnerability is one that could be exploited remotely and automatically by a worm. However we also stretch that definition to include those flaws that affect web browsers or plug-ins where a user only needs to visit a malicious (or compromised) website in order to be exploited. Most of the critical vulnerabilities we fix fall into that latter category.

The 12 critical advisories addressed 33 critical vulnerabilities across just three different projects:

  • An update to NSS/NSPR: RHSA-2014:0916 (July 2014). A race condition was found in the way NSS verified certain certificates, which could lead to arbitrary code execution with the privileges of the user running the application.
  • Updates to PHP and PHP53: RHSA-2013:1813, RHSA-2013:1814 (December 2013). A flaw in the parsing of X.509 certificates could allow scripts using the affected function to potentially execute arbitrary code. An update to PHP: RHSA-2014:0311 (March 2014). A flaw in the conversion of strings to numbers could allow scripts using the affected function to potentially execute arbitrary code.
  • Updates to Firefox: RHSA-2013:1268 (September 2013), RHSA-2013:1476 (October 2013), RHSA-2013:1812 (December 2013), RHSA-2014:0132 (February 2014), RHSA-2014:0310 (March 2014), RHSA-2014:0448 (April 2014), RHSA-2014:0741 (June 2014), RHSA-2014:0919 (July 2014), where a malicious web site could potentially run arbitrary code as the user running Firefox.

Updates to correct 32 of the 33 critical vulnerabilities were available via Red Hat Network either the same day or the next calendar day after the issues were public.

Overall, for Red Hat Enterprise Linux 5 from release until 5.11, 98% of critical vulnerabilities have had an update to address them available from the Red Hat Network either the same day or the next calendar day after the issue was public.

Other significant vulnerabilities

Although not in the definition of critical severity, also of interest are other remote flaws and local privilege escalation flaws:

  • A flaw in glibc, CVE-2014-5119, fixed by RHSA-2014:1110 (August 2014). A local user could use this flaw to escalate their privileges. A public exploit is available which targets the polkit application on 32-bit systems although polkit is not shipped in Red Hat Enterprise Linux 5. It may be possible to create an exploit for Red Hat Enterprise Linux 5 by targeting a different application.
  • Two flaws in squid, CVE-2014-4115 and CVE-2014-3609, fixed by RHSA-2014:1148 (September 2014). A remote attacker could cause Squid to crash.
  • A flaw in procmail, CVE-2014-3618, fixed by RHSA-2014:1172 (September 2014). A remote attacker could send an email with specially crafted headers that, when processed by formail, could cause procmail to crash or, possibly, execute arbitrary code as the user running formail.
  • A flaw in Apache Struts, CVE-2014-0114, fixed by RHSA-2014:0474 (April 2014). A remote attacker could use this flaw to manipulate the ClassLoader used by an application server running Struts 1, potentially leading to arbitrary code execution under some conditions.
  • A flaw where yum-updatesd did not properly perform RPM signature checks, CVE-2014-0022, fixed by RHSA-2014:1004 (August 2014). Where yum-updatesd was configured to automatically install updates, a remote attacker could use this flaw to install a malicious update on the target system using an unsigned RPM or an RPM signed with an untrusted key.
  • A flaw in the kernel floppy driver, CVE-2014-1737, fixed by RHSA-2014:0740 (June 2014). A local user who has write access to /dev/fdX on a system with a floppy drive could use this flaw to escalate their privileges. A public exploit is available for this issue. Note that access to /dev/fdX is by default restricted to members of the floppy group.
  • A flaw in libXfont, CVE-2013-6462, fixed by RHSA-2014:0018 (Jan 2014). A local user could potentially use this flaw to escalate their privileges to root.
  • A flaw in xorg-x11-server, CVE-2013-6424, fixed by RHSA-2013:1868 (Dec 2013). An authorized client could potentially use this flaw to escalate their privileges to root.
  • A flaw in the kernel QETH network device driver, CVE-2013-6381, fixed by RHSA-2014:0285 (March 2014). A local, unprivileged user could potentially use this flaw to escalate their privileges. Note this device is only found on s390x architecture systems.

Note that Red Hat Enterprise Linux 5 was not affected by the OpenSSL issue, CVE-2014-0160, “Heartbleed”.

Previous update releases

We generally measure risk in terms of the number of vulnerabilities, but the actual effort in maintaining a Red Hat Enterprise Linux system is more related to the number of advisories we released: a single Firefox advisory may fix ten different issues of critical severity, but takes far less total effort to manage than ten separate advisories each fixing one critical PHP vulnerability.

To compare these statistics with previous update releases we need to take into account that the time between each update release is different. So looking at a default installation and calculating the number of advisories per month gives the following chart:

Security Errata per month to 5.10

This data is interesting for getting a feel for the risk of running Enterprise Linux 5 Server, but isn’t really useful for comparisons with other major versions, distributions, or operating systems; for example, a default install of Red Hat Enterprise Linux 4AS did not include Firefox, but 5 Server does. You can use our public security measurement data and tools to run your own custom metrics for any given Red Hat product, package set, time scale, and severity range of interest.

See also:
5.10, 5.9, 5.8, 5.7, 5.6, 5.5, 5.4, 5.3, 5.2, and 5.1 risk reports.

TLS landscape

Transport Layer Security (TLS) or, as it was known in the early days of the Internet, Secure Sockets Layer (SSL) is the technology responsible for securing communications between different devices. It is used every day by nearly everyone on the globe-spanning network.

Let’s take a closer look at how TLS is used by servers that underpin the World Wide Web and how the promise of security is actually executed.

Adoption

Hyper Text Transfer Protocol (HTTP) in versions 1.1 and older makes encryption (and thus the use of TLS) optional. Given that the upcoming HTTP 2.0 will require the use of TLS and that Google now uses HTTPS in its ranking algorithm, it is expected that many sites will become TLS-enabled.

Surveying the Alexa top 1 million sites shows that most domains still don’t provide a secure communication channel for their users.

Just under 40% of HTTP servers support TLS or SSL and present valid certificates.

Additionally, if we look at the version of the protocol supported by the servers, most don’t support the newest (and most secure) version of the protocol, TLSv1.2. Of more concern is the number of sites that support the completely insecure SSLv2 protocol.

Only half of HTTPS servers support TLS 1.2

(There are no results for SSLv2 for the first 3 months because of an error in the software that was collecting the data.)

One of the newest and most secure ciphers available in TLS is the Advanced Encryption Standard (AES) in Galois/Counter Mode (AES-GCM). These cipher suites provide good security, resiliency against known attacks (BEAST and Lucky13), and very good performance on machines with hardware acceleration for them (modern Intel and AMD CPUs, upcoming ARM).

Unfortunately, AES-GCM adoption is growing a bit more slowly than TLS adoption in general, which suggests that some newly deployed servers aren’t using new cryptographic libraries or are configured not to use all of their functions.

Only 40% of TLS web servers support AES-GCM ciphersuites.

Bad recommendations

A few years back, a weakness in TLS 1.0 and SSL 3 was shown to be exploitable in the BEAST attack. The recommended workaround was to use RC4-based ciphers. Unfortunately, we later learned that the RC4 cipher is much weaker than previously estimated. As the vulnerability that allowed BEAST was fixed in TLSv1.1, using RC4 ciphers with newer protocol versions was always unnecessary. Additionally, all major clients have now implemented workarounds for the attack, which makes using RC4 a bad idea.

Unfortunately, many servers prefer RC4 and some (~1%) actually support only RC4. This makes it impossible to disable this weak cipher on the client side in order to force the rest of the servers (nearly 19%) to use a different cipher suite.

RC4 is still used with more than 18% of HTTPS servers.
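
As a hypothetical check (the hostname is illustrative only), the openssl command line tool can show whether a server still accepts an RC4 handshake; a successful connection prints the negotiated cipher:

 $ openssl s_client -connect example.com:443 -cipher RC4 < /dev/null 2>/dev/null | grep 'Cipher '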

The other common issue is that many certificates are still signed using the obsolete SHA-1. This is mostly caused by backwards compatibility with clients like pre-SP2 Windows XP and old phones.

SHA-256 certificates only recently started growing in numbers

The sudden increase in SHA-256 certificates between April and May was caused by the re-issuance of certificates in the wake of Heartbleed.

Bad configuration

Many servers also support insecure cipher suites. In the latest scan, over 3.5% of servers support cipher suites that use AECDH key exchange, which is completely insecure against man-in-the-middle attacks. Many servers also support single DES (around 15%) and export-grade cipher suites (around 15%). In total, around 20% of servers support some kind of broken cipher suite.

While correctly implemented SSLv3 and later shouldn’t allow negotiation of those weak ciphers if stronger ones are supported by both client and server, at least one commonly used implementation had a vulnerability that did allow changing the cipher suite to an arbitrary one supported by both client and server. That’s why it is important to occasionally clean up the list of supported ciphers, on both the server and the client side.

Forward secrecy

Forward secrecy, also known as perfect forward secrecy (PFS), is a property of a cipher suite that makes it impossible to decrypt communication between client and server even when the attacker knows the server’s private key; it protects old communication in case the private key is leaked or stolen. That’s why it is such a desirable property.

The good news is that most servers (over 60%) not only support, but will actually negotiate, cipher suites that provide forward secrecy with clients that support them. The types used are split essentially between 1024-bit DHE and 256-bit ECDHE, accounting respectively for 29% and 33% of all servers in the latest scan. The number of servers that negotiate PFS-enabled cipher suites is also steadily growing.

PFS support among TLS-enabled HTTP servers
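
A similar hedged check for forward secrecy (the hostname again illustrative): connect normally and inspect the negotiated cipher; newer versions of s_client also print a "Server Temp Key" line for ephemeral key exchanges. An ECDHE or DHE cipher in the output indicates that a PFS-enabled suite was negotiated:

 $ openssl s_client -connect example.com:443 < /dev/null 2>/dev/null | grep -E 'Cipher |Temp Key'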

Summary

Most Internet-facing servers are badly configured. Sometimes this is caused by a lack of functionality in software, as in the case of old Apache 2.2.x releases that don’t support the ECDHE key exchange, and sometimes by side effects of using new software with an old configuration: many configuration tutorials suggested using !ADH in the cipher string to disable anonymous cipher suites, but that unfortunately doesn’t disable the anonymous Elliptic Curve version of DH (AECDH); for that, !aNULL is necessary.
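
The difference can be checked offline with the openssl ciphers tool; on an OpenSSL release of that era the first list still contains the anonymous AECDH suites while the second does not (exact suite names vary by version):

 $ openssl ciphers 'ALL:!ADH' | tr ':' '\n' | grep AECDH
 AECDH-AES256-SHA
 ...
 $ openssl ciphers 'ALL:!aNULL' | tr ':' '\n' | grep AECDH
 $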

Thankfully, the situation seems to be improving, though rather slowly.

If you’re the administrator of a server, consider enabling TLS. The performance issues from the days when encryption was slow and taxing on servers are long gone. If you already use TLS, double-check your configuration, preferably against the Mozilla guide to server configuration, as it is regularly updated. Make sure you enable PFS cipher suites and put them above non-PFS ciphers, and that you, as well as the Certificate Authority you’ve chosen, use modern crypto (SHA-2) and large key sizes (at least 2048-bit RSA).
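
For instance, here is a minimal sketch of an Apache mod_ssl configuration along those lines; the cipher string is an assumption for illustration, not an official recommendation, so prefer the regularly updated Mozilla guide for current values:

 SSLProtocol all -SSLv2 -SSLv3
 SSLHonorCipherOrder on
 SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:EECDH+AES:EDH+AES:!aNULL:!eNULL:!RC4:!DES:!EXPORT

SSLHonorCipherOrder makes the server prefer the PFS suites listed first over whatever the client proposes.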

If you’re a user of a server and you’ve noticed that the server doesn’t use a correct configuration, try contacting the administrator; they may simply have forgotten about it.

Is your software fixed?

A common query seen at Red Hat is “our auditor says our Red Hat machines are vulnerable to CVE-2015-1234, is this true?” or “Why hasn’t Red Hat updated software package foo to version 1.2.3?” In other words, our customers (and their auditors) are not sure whether we have fixed a security vulnerability, or whether a given package is up to date with respect to security issues. In an effort to help our security-conscious customers, Red Hat makes this information available in an easy-to-consume format.

What’s the deal with CVEs?

Red Hat is committed to the CVE process. To quote our CVE compatibility page:

We believe that giving our users accurate and complete information about security issues is extremely important. By including CVE names when we discuss security issues in our services and products, we can help users cross-reference vulnerabilities so they spend less time investigating and categorizing security events.

Red Hat has a representative on the CVE Editorial Board and declared CVE compatibility in April 2002.

To put it simply: if it’s a security issue and we fix it in an RHSA, it gets a CVE. In fact, we usually assign CVEs as soon as we determine a security issue exists (additional information on determining what constitutes a security issue can be found on our blog).

How to tell if your software is fixed?

A CVE can be queried at our public CVE page.  Details concerning the vulnerability, the CVSS v2 metrics, and security errata are easily accessible from here.

To verify your system is secure, simply check which version of the package you have installed; if the NVR of your installed package is equal to or higher than the NVR of the package in the RHSA, then you’re safe.

What’s an NVR?

The NVR is the Name-Version-Release of the package. The Heartbleed RHSA lists packages such as openssl-1.0.1e-16.el6_5.7.x86_64.rpm. From this we see a package name of “openssl” (then a hyphen), a version of 1.0.1e (then a hyphen), and a release of 16.el6_5.7. Assuming you are running RHEL 6 on x86_64, if you have openssl version 1.0.1e release 16.el6_5.7 or later you’re protected from the Heartbleed issue.
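
As an illustration (the output shown is the fixed Heartbleed NVR from above), the installed NVR can be read directly with rpm and compared against the NVR listed in the RHSA:

 $ rpm -q openssl
 openssl-1.0.1e-16.el6_5.7.x86_64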

Please note there is an additional field called “epoch” which actually supersedes the version number (and release). Most packages do not have an epoch number, but a package with a larger epoch number always overrides a package with a lower epoch. This can be useful, for example, if you need a custom modified version of a package that also exists in the RPM repos you are already using: by assigning an epoch number to your package RPM you can override same-name package RPMs from another repo even if they have a higher version number. Be aware, though, that once you use packages with the same name and a higher epoch number you will not get security updates unless you specifically create new RPMs with that epoch number and the security update.
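
A small sketch of how to inspect the epoch with rpm ("bash" is just an example package; the epoch prints as "(none)" when unset):

 $ rpm -q --qf '%{EPOCH}:%{NAME}-%{VERSION}-%{RELEASE}\n' bash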

But what if there is no CVE page?

As part of our process the CVE pages are automatically created if public entries exist in Bugzilla.  CVE information may not be available if the details of the vulnerability have not been released or the issue is still embargoed.  We do encourage responsible handling of vulnerabilities and sometimes delay CVE information from being made public.

Also, CVE information will not be created if the software we shipped wasn’t vulnerable.

How to tell if your system is vulnerable?

If you have a specific CVE or set of CVEs that you are worried about you can use the yum command to see if your system is vulnerable. Start by installing yum-plugin-security:

sudo yum install yum-plugin-security

Then query the CVE you are interested in, for example on a RHEL 7 system without the OpenSSL update:

[root@localhost ~]# yum updateinfo info --cve CVE-2014-0224
===============================================
 Important: openssl security update
===============================================
 Update ID : RHSA-2014:0679
 Release : 
 Type : security
 Status : final
 Issued : 2014-06-10 00:00:00
 Bugs : 1087195 - CVE-2010-5298 openssl: freelist misuse causing 
        a possible use-after-free
 : 1093837 - CVE-2014-0198 openssl: SSL_MODE_RELEASE_BUFFERS NULL
   pointer dereference in do_ssl3_write()
 : 1103586 - CVE-2014-0224 openssl: SSL/TLS MITM vulnerability
 : 1103593 - CVE-2014-0221 openssl: DoS when sending invalid DTLS
   handshake
 : 1103598 - CVE-2014-0195 openssl: Buffer overflow via DTLS 
   invalid fragment
 : 1103600 - CVE-2014-3470 openssl: client-side denial of service 
   when using anonymous ECDH
 CVEs : CVE-2014-0224
 : CVE-2014-0221
 : CVE-2014-0198
 : CVE-2014-0195
 : CVE-2010-5298
 : CVE-2014-3470
Description : OpenSSL is a toolkit that implements the Secure 
Sockets Layer

If your system is up to date or the CVE doesn’t affect the platform you’re on then no information will be returned.

Conclusion

Red Hat Product Security makes available as much information as we can regarding vulnerabilities affecting our customers.  This information is available on our customer portal as well as within the software repositories. As you can see it is both easy and quick to determine if your system is up to date on security patches with the provided information and tools.

The following checklist can be used to check if systems or packages are affected by specific security issues:

1) Check if the issue you’re concerned about has a CVE and check the Red Hat CVE page:

https://access.redhat.com/security/cve/CVE-2014-0224

2) Check to see if your system is up to date for that issue:

sudo yum install yum-plugin-security 
yum updateinfo info --cve CVE-2014-0224

3) Alternatively you can check the package NVR in the RHSA errata listed in the CVE page (in #1) and compare it to the packages on your system to see if they are the same or greater.
4) If you still have questions please contact Red Hat Support!

Fedora Security Team

Vulnerabilities in software happen. When they get fixed, it’s up to the packager to make those fixes available to the systems using the software. Duplicating much of the response effort that Red Hat Product Security performs for Red Hat products, the Fedora Security Team (FST) has recently been created to help packagers get vulnerability fixes downstream in a timely manner.

At the beginning of July, there were over 500 vulnerability tickets open* against Fedora and EPEL. Many of these vulnerabilities already had patches or releases available to remedy the problems, but not all. The Team has already found several examples where upstream did not know a vulnerability existed, and was able to get the issue fixed quickly. This is one of the reasons having a dedicated team to work these issues is so important.

In the few short weeks since the Team was created, we’ve already closed 14 vulnerability tickets and are working on another 150. We hope to be able to work in a more real-time environment once the backlog decreases. Staying in front of the vulnerabilities will not be easy, however. During the week of August 3rd, 27 new tickets were opened for packages in Fedora and EPEL. While we haven’t figured out a way to get ahead of the problem, we are trying to deal with the aftermath and get fixes pushed to the users as quickly as possible.

Additional information on the mission and the Team can be found on our wiki page.  If you’d like to get involved please join us for one of our meetings and subscribe to our listserv.

* A separate vulnerability ticket is sometimes opened for different versions of Fedora and EPEL resulting in multiple tickets for a single vulnerability.  This makes informing the packager easier but also inflates the numbers significantly.

Controlling access to smart cards

Smart cards are increasingly used in workstations as an authentication method. They are mainly used to provide public key operations (e.g., digital signatures) using keys that cannot be exported from the card. They also serve as data storage, e.g., for the certificate corresponding to the key. In RHEL and Fedora systems, low-level access to smart cards is provided by the pcsc-lite daemon, an implementation of the PC/SC protocol defined by the PC/SC industry consortium. In brief, the PC/SC protocol allows the system to execute certain pre-defined commands on the card and obtain the result. The pcsc-lite implementation uses a privileged process that handles direct communication with the card (e.g., using the CCID USB protocol), while applications communicate with the daemon using the SCard API. That API hides the underlying communication between the application and the pcsc-lite daemon, which is based on unix domain sockets.

However, there is a catch. As you may have noticed, there is no mention of access control in the communication between applications and the pcsc-lite daemon. That is because it is assumed that the access control included in smart cards, such as PINs, pinpads, and biometrics, is sufficient to counter most threats. That isn’t always the case. As smart cards typically contain embedded software in the form of firmware, there will be bugs that can be exploited by a malicious application, and these bugs, even if known, are neither easy nor practical to fix. Furthermore, there are often public files (e.g., without the protection of a PIN) present on a smart card that, while intended to be used by the smart card user, are not always desirable to be accessible by all system users. Even worse, certain smart cards allow any user of a system to erase all smart card data by re-initializing the card. All of this led us to introduce additional access control for smart cards, on par with the access control used for external hard disks. The main idea is to be able to provide fine-grained access control on the system, and specify policies such as “the user on the console should be able to fully access the smart card, but no other user”. For that we used polkit, a framework used by applications to grant access to privileged operations. The reason for this decision is mainly that polkit has already been successfully used to grant access to external hard disks, and unsurprisingly the access control requirements for smart cards share many similarities with those for removable devices such as hard disks.

The pcsc-lite access control framework is now part of pcsc-lite 1.8.11 and will be enabled by default in Fedora 21. The advantage it offers is that it can prevent unauthorized users from issuing commands to smart cards, and from reading, writing or (in some cases) erasing any public data from a smart card. The access control is imposed during session initialization, thus reducing any potential overhead to a minimum. The default policy in Fedora 21 will treat any user on the console as authorized, as physical access to the console implies physical access to the card, but remote users, e.g., via ssh, or system daemons will be treated as unauthorized unless they have administrative rights.

Let’s now see how the smart card access control can be administered. The system-wide policy for pcsc-lite daemon is available at /usr/share/polkit-1/actions/org.debian.pcsc-lite.policy. That file is a polkit XML file that contains the default rules needed to access the daemon. The default policy that will be shipped in Fedora 21 consists of the following.

  <action id="org.debian.pcsc-lite.access_pcsc">
    <description>Access to the PC/SC daemon</description>
    <message>Authentication is required to access the PC/SC daemon</message>
    <defaults>
      <allow_any>auth_admin</allow_any>
      <allow_inactive>auth_admin</allow_inactive>
      <allow_active>yes</allow_active>
    </defaults>
  </action>

  <action id="org.debian.pcsc-lite.access_card">
    <description>Access to the smart card</description>
    <message>Authentication is required to access the smart card</message>
    <defaults>
      <allow_any>auth_admin</allow_any>
      <allow_inactive>auth_admin</allow_inactive>
      <allow_active>yes</allow_active>
    </defaults>
  </action>

The syntax is explained in more detail in the polkit manual page. The parts relevant to pcsc-lite are the action IDs. The action with ID “org.debian.pcsc-lite.access_pcsc” contains the policy for accessing the pcsc-lite daemon and issuing commands to it, i.e., accessing the unix domain socket. The action with ID “org.debian.pcsc-lite.access_card” contains the policy for issuing commands to smart cards available to the pcsc-lite daemon. That distinction allows, for example, programs to query the number of readers and cards present without being able to issue any commands to them. Under both policies, only active (console) processes are allowed to access the pcsc-lite daemon and smart cards, unless they are privileged processes.

Polkit is quite a bit more flexible, though. With it we can provide even more fine-grained access control, e.g., to specific card readers. For example, if we have a web server that utilizes a smart card, we can restrict it to use only the smart cards under a given reader. These rules are expressed in JavaScript and can be added in a separate file in /usr/share/polkit-1/rules.d/. Let’s now see how the rules for our example would look.

polkit.addRule(function(action, subject) {
    if (action.id == "org.debian.pcsc-lite.access_pcsc" &&
        subject.user == "apache") {
            return polkit.Result.YES;
    }
});

polkit.addRule(function(action, subject) {
    if (action.id == "org.debian.pcsc-lite.access_card" &&
        action.lookup("reader") == 'name_of_reader' &&
        subject.user == "apache") {
            return polkit.Result.YES;
    }
});

Here we add two rules. The first one allows the user “apache”, which is the user the web server runs under, to access the pcsc-lite daemon. That rule explicitly allows access to the daemon because under our default policy only the administrator and the console user can access it. The second rule allows the same user to access the smart card reader identified by “name_of_reader”. The name of the reader can be obtained using the commands pcsc_scan or opensc-tool -l.
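
As a hypothetical check (assuming the opensc-tool utility is installed), you could verify the rule took effect by listing readers as the "apache" user; with the rules above in place this should succeed, while other unprivileged, non-console users should still be denied:

 $ sudo su -s /bin/sh apache -c "opensc-tool -l"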

With these changes to pcsc-lite we manage to provide reasonable default settings for smart card users that apply to most, if not all, typical uses. These defaults increase the overall security of the system by denying non-authorized users access to the smart card firmware, as well as to its data and operations.

Towards efficient security code audits

Conducting a code review is often a daunting task, especially when the goal is to find security flaws. They can be, and usually are, hidden in all parts and levels of the application: from the lowest-level coding errors, through unsafe coding constructs and misuse of APIs, to the overall architecture of the application. The size and quality of the codebase, the quality of (hopefully) existing documentation, and time restrictions are the main complications of the review. It is therefore useful to have a plan beforehand: know what to look for, how to find the flaws efficiently, and how to prioritize.

A code review should start by collecting and reviewing existing documentation about the application. The goal is to get a decent overall picture of the application: what is the expected functionality, what requirements can be expected from the security standpoint, and where the trust boundaries are. Not all flaws with security implications are relevant in all contexts; e.g. an effective denial of service against a server certainly has security implications, whereas a coding error in a command line application which causes excessive CPU load will probably have low impact. At the end of this phase it should be clear what the security requirements are and which flaws could have the highest impact.

Armed with this knowledge, the next step is to define the scope of the audit. It is almost always the case that conducting a thorough review would require far more resources than are available, so defining which parts will be audited and which vulnerabilities will be searched for increases the efficiency of the audit. It is, however, necessary to state all the assumptions made explicitly in the report; this makes it possible for others to review them or revisit them in future audits.

In general there are two approaches to conducting a code review; for lack of better terminology we shall call them bottom-up and top-down. Of course, real audits always combine techniques from both, so this classification is merely useful for putting them in context.

The top-down approach starts with the overall picture of the application and its security requirements and drills down towards lower levels of abstraction. We often start by identifying the components of the application and their relationships, and by mapping the flow of data. Drilling further down, we can choose to inspect potentially sensitive interfaces which components provide, how data is handled at rest and in motion, how access to sensitive parts of the application is restricted, and so on. From this point the audit quickly becomes very targeted: since we have a good picture of which components, interfaces and channels might be vulnerable to which classes of attacks, we can focus our search and ignore the other parts. Sometimes this will bring us down to the level of line-by-line code inspection, but this is fine; it usually means that, architecturally, some part of the security of the application depends on the correctness of the code in question.

The top-down approach is invaluable, as it makes it possible to find flaws in the overall architecture that would otherwise go unnoticed. However, it is also very demanding: it requires a broad knowledge of all classes of weaknesses and threat models, and the ability to switch between abstraction levels quickly. The cost of such an audit can be reduced by reviewing the application very early in the design phase; unfortunately, most of the time this is not possible due to the development model chosen or the phase in which the audit was requested. Another way to reduce the effort is to invest in documentation and reuse it in future audits.

In the bottom-up approach we usually look for indications of vulnerabilities in the code itself and investigate whether they can lead to exploitation. These indications range from outright dangerous code, misuse of APIs, dangerous coding constructs and bad practices to poor code quality; all of these may indicate the presence of a weakness in the code. The search is usually automated, as there is an abundance of tools to simplify this task, including static analyzers, code quality metric tools and the most versatile one: grep. All of these reduce the cost of finding potentially weak spots, so the real cost lies in separating the wheat from the chaff. The bane of this approach is the receiver operating characteristic curve: it is difficult to substantially improve it, so we are usually left with a tradeoff between false positives and false negatives.
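
As a minimal sketch of such an automated search (assuming a C codebase; the function list and directory are illustrative only), even plain grep can surface candidate weak spots for manual review:

 $ # flag calls to classically dangerous C APIs for closer inspection
 $ grep -rnE '\b(strcpy|strcat|sprintf|gets|system|popen)[[:space:]]*\(' src/ | head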

The advantages of the bottom-up approach are its relatively low resource requirements and its reusability. This means it is often easy and desirable to run such analyses as early and as often as possible. It also depends much less on the skill of the reviewer, since patterns can be collected into a knowledge base, aided by freely available resources on the internet. It is a good idea to create checklists to make sure all common types of weaknesses are audited for, and to make this kind of review more scalable. On the other hand, the biggest disadvantage is that certain classes of weaknesses can never be found with this approach; these usually include the architectural flaws which lead to the vulnerabilities with the biggest impact.

The last step in any audit is writing a report. Even though this is usually perceived as the least productive time spent, it is an important one. A good report can enable other interested parties to further scrutinize weak points, provides the information necessary to make potentially hard decisions, and is a good way to share and reuse knowledge that might otherwise stay private.