Evolution of the SSL and TLS protocols

The Transport Layer Security (TLS) protocol is one of the most widely used security protocols on the Internet today. If you have ever done an online banking transaction, visited a social networking website, or checked your email, you have most likely used TLS. Apart from wrapping the plain text HTTP protocol with cryptographic goodness, other application protocols like SMTP and FTP can also use TLS to ensure that all the data between client and server is inaccessible to attackers in the middle. This article takes a brief look at the evolution of the protocol and discusses why it was necessary to make changes to it.

Like many other standards used on the Internet today, the TLS protocol has a humble beginning and a rocky history. Originally developed by Netscape in 1994, it was initially called Secure Sockets Layer (SSL). The first version was said to be so insecure that “it could be broken in ten minutes” when Marc Andreessen presented it at an MIT meeting. Several iterations followed, leading to SSL version 2 and, later in 1995, SSL version 3. In 1996, an IETF working group was formed to standardize SSL. Even though the resulting protocol is almost identical to SSL version 3, the process took three years.

TLS version 1.0, renamed to avoid trademark issues, was published as RFC 2246 in January 1999. Versions 1.1 and 1.2 were published later and addressed several shortcomings and flaws in the earlier versions of the protocol.

Cryptographic primitives are based on mathematical functions and theories

The TLS protocol itself is based on several cryptographic primitives, including asymmetric key exchange protocols, ciphers, and hashing algorithms. Assembling these primitives into a secure whole is non-trivial, and it would not be practical to implement them individually in the way TLS does. For example, AES is a strong symmetric cipher, but like any other symmetric cipher it needs the encryption key to be securely exchanged between the client and the server. Without an asymmetric cipher there is no way to exchange keys on an insecure network such as the Internet. Hashing functions help authenticate the certificates used to exchange the keys and also ensure the integrity of data in transit. These hash algorithms, such as those in the SHA-2 family, are one-way functions and reasonably collision resistant. All these cryptographic primitives, arranged in a certain way, make up the TLS protocol as a whole.
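
As a small illustration of the hashing piece, the sketch below (plain Python, using only the standard hashlib module) shows how a one-character change in the input produces a completely different SHA-256 digest, which is what makes hashes useful for integrity checks.

    import hashlib

    # Hash two messages that differ by a single character.
    digest1 = hashlib.sha256(b"transfer $100 to alice").hexdigest()
    digest2 = hashlib.sha256(b"transfer $900 to alice").hexdigest()

    print(digest1)
    print(digest2)
    # The two digests share no obvious relationship, and there is no known
    # practical way to recover the original message from a digest alone.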

Key Exchanges

The reason two systems that have never met before can communicate securely is secure key exchange. Both systems must share the same secret before a symmetric cipher can protect the connection, and a key exchange protocol lets them agree on that secret over an insecure network without revealing it to anyone listening in.

The Rivest-Shamir-Adleman (RSA) cryptosystem is the most widely used asymmetric key exchange algorithm. Its security rests on the assumption that factoring large numbers is difficult: the public modulus n is the product of two large primes, n = p x q, and an attacker who could factor n back into p and q could easily use them to calculate the private key.
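
The toy sketch below (plain Python, with deliberately tiny textbook primes) shows why that assumption matters: the private exponent is trivial to compute once p and q are known, so the whole scheme stands or falls with the difficulty of factoring n. It is purely illustrative; real keys use primes hundreds of digits long.

    # Toy RSA with tiny primes -- for illustration only, never for real use.
    p, q = 61, 53
    n = p * q                      # public modulus, 3233
    phi = (p - 1) * (q - 1)        # 3120, secret once p and q are discarded
    e = 17                         # public exponent
    d = pow(e, -1, phi)            # private exponent, easy to compute *if* you know p and q

    message = 65
    ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
    recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
    assert recovered == message

    # An attacker who can factor 3233 back into 61 x 53 can recompute d the
    # same way; with 2048-bit moduli that factorization is currently out of reach.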

The Diffie-Hellman key exchange (DHE) relies on the discrete logarithm problem: given y = g ^ a mod p, it is assumed to be difficult to recover the private value a. Elliptic-Curve Diffie-Hellman key exchange (ECDHE) relies on the same Diffie-Hellman problem, but carried out in elliptic curve groups, where the corresponding hard problem is the elliptic curve discrete logarithm.
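
A minimal Diffie-Hellman sketch in plain Python follows, using a small 32-bit prime purely so the numbers stay readable; real TLS uses 2048-bit or larger groups, or elliptic curves, but the algebra is the same.

    import secrets

    p = 0xFFFFFFFB   # small prime (2^32 - 5), for illustration only
    g = 5

    a = secrets.randbelow(p - 2) + 1   # Alice's private value
    b = secrets.randbelow(p - 2) + 1   # Bob's private value

    A = pow(g, a, p)   # Alice sends A = g^a mod p
    B = pow(g, b, p)   # Bob sends   B = g^b mod p

    # Each side combines its own private value with the other's public value.
    shared_alice = pow(B, a, p)
    shared_bob = pow(A, b, p)
    assert shared_alice == shared_bob   # both arrive at g^(a*b) mod p

    # An eavesdropper sees p, g, A, and B, but recovering a or b from them
    # is the discrete logarithm problem.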

Symmetric algorithms

Symmetric algorithms used today, like the Advanced Encryption Standard (AES), have good confusion and diffusion properties, which means that the encrypted output is statistically indistinguishable from random data. ChaCha20 is a newer stream cipher that is starting to see some traction and may see additional use in the future as a faster alternative to AES on hardware without AES acceleration.
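
To see diffusion in action, this sketch (using the third-party cryptography package) encrypts two single 16-byte blocks that differ in only one bit and counts how many ciphertext bits change; roughly half of the 128 bits should flip. Raw single-block (ECB-style) encryption is used here purely for demonstration and is not how AES should be used to protect real data.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)

    def encrypt_block(block):
        # Raw single-block AES, for demonstration only -- never use ECB on real data.
        encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return encryptor.update(block) + encryptor.finalize()

    block1 = bytes(16)                    # sixteen zero bytes
    block2 = bytes([0x01]) + bytes(15)    # the same block with one bit flipped

    c1 = encrypt_block(block1)
    c2 = encrypt_block(block2)

    # Count how many of the 128 ciphertext bits differ; expect roughly 64.
    diff = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
    print(diff)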

Changes as time and technology progress

Faster computers are more accessible to the general public than they were ten years ago, thanks to cloud computing, GPUs, and dedicated FPGA devices. New computation methods have also become possible. Quantum computers are getting bigger, opening the door to attacks on the underlying mathematics of many algorithms used for cryptography. New research in mathematics also means that, as older theories are challenged and newer methods are invented, our previous assumptions about hard mathematical problems are losing ground.

New design flaws in the TLS protocol are also discovered from time to time. The POODLE flaw in SSL version 3 and the DROWN flaw in SSL version 2 showed that the previous versions of the protocol are not secure. We can likely expect currently deployed versions of TLS to show weaknesses as well, as research continues and computing power grows.
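
One practical defense against downgrade-style attacks such as POODLE is to refuse the legacy protocol versions outright. A minimal sketch using Python's standard ssl module is shown below; the certificate file names are hypothetical placeholders you would replace with your own.

    import ssl

    # A server context configured this way refuses SSLv3 and other legacy
    # versions, so a downgrade has nothing to fall back to.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")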

Attacks against cryptographic primitives and their future

RSA

The best known attack against RSA is still factoring n into its components p and q. The best known algorithm for factoring integers larger than 10^100 is the number field sieve. The current NIST recommendation is a minimum RSA key length of 2048 bits for information that needs to be protected until at least the year 2030. For secrecy beyond that year, larger keys will be necessary.

RSA’s future, however, is bleak! IETF recommended removal of static-RSA from the TLS version 1.3 draft standard stating “[t]hese cipher suites have several drawbacks including lack of PFS, pre-master secret contributed only by the client, and the general weakening of RSA over time. It would make the security analysis simpler to remove this option from TLS version 1.3. RSA certificates would still be allowed, but the key establishment would be via DHE or ECDHE.” The consensus in the room at IETF-89 was to remove RSA key transport from TLS 1.3.

DHE and ECC

As with RSA, the best known attack against DHE is the number field sieve. With the computing power currently available, a 512-bit DH key takes roughly 10 core-years to break. NIST recommends a private key size of 224 bits and a group size of 2048 bits for any information that needs to be protected until 2030.

Compared to DHE, ECC has stood its ground and is increasingly used in newer software and hardware implementations. Most of the known attacks against ECC work only on special hardware or against buggy implementations. NIST recommends a key size of at least 224 bits for ECC curves.
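
The sketch below shows an elliptic-curve Diffie-Hellman exchange using the X25519 curve via the third-party cryptography package; the curve choice here is just an example of a modern ECDH group, and both "sides" live in one process purely for demonstration.

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # Each side generates an ephemeral key pair.
    client_private = X25519PrivateKey.generate()
    server_private = X25519PrivateKey.generate()

    # Only the public halves cross the wire; each side combines its private
    # key with the peer's public key and arrives at the same shared secret.
    client_shared = client_private.exchange(server_private.public_key())
    server_shared = server_private.exchange(client_private.public_key())
    assert client_shared == server_shared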

However, the biggest threat to all of the above key exchange methods is quantum computing. Once viable quantum computing technology is available, all of the above public key cryptography systems will be broken. NIST recently conducted a workshop on post-quantum cryptography at which several alternatives to these public key schemes were discussed. It is going to be interesting to watch what these discussions lead to, and what new standards are formed.

Symmetric ciphers and hashes

All symmetric block ciphers are vulnerable to brute force attacks. The time needed for a brute force attack depends on the size of the key: the bigger the key, the more time and computing power it takes. The SWEET32 attack has already shown that small block sizes are a problem and has finally laid 3DES to rest. We already know that RC4 is insecure, and there have been several attempts to deprecate it.

The proposed TLS version 1.3 draft has provision for only two symmetric ciphers, namely AES and ChaCha20, and requires authenticated encryption with associated data (AEAD) modes of operation. ChaCha20 is paired with the Poly1305 authenticator, while AES is used in AEAD modes such as GCM.
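
To show what AEAD gives you in practice, here is a minimal sketch using the ChaCha20-Poly1305 construction from the third-party cryptography package; the payload and header strings are made up for the example.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = ChaCha20Poly1305.generate_key()
    nonce = os.urandom(12)          # 96-bit nonce; never reuse with the same key
    aead = ChaCha20Poly1305(key)

    # Associated data (e.g. a record header) is authenticated but not encrypted.
    ciphertext = aead.encrypt(nonce, b"confidential payload", b"record header")
    plaintext = aead.decrypt(nonce, ciphertext, b"record header")
    assert plaintext == b"confidential payload"
    # Flipping a single bit of the ciphertext or the header makes decrypt()
    # raise an InvalidTag exception instead of returning corrupted data.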

And in conclusion…

No one knows for sure what will happen next, but history has shown that older algorithms are at risk. That’s why it is so important to stay up to date on cryptography technology. Developers should make sure their software supports the latest versions of TLS while deprecating older versions that are broken (or weakened). System owners should regularly test their systems to verify which ciphers and protocols are supported, and stay educated on what is current and what the risks of using old cryptography are.
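
As a starting point for that kind of testing, the sketch below uses Python's standard ssl and socket modules to connect to a server and report which protocol version and cipher suite were actually negotiated; "example.com" is just a placeholder host.

    import socket
    import ssl

    hostname = "example.com"
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print(tls.version())   # e.g. 'TLSv1.3'
            print(tls.cipher())    # e.g. ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)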
