Modern Cryptology Tutorial Chapter 2

The purpose of a cryptosystem is to encipher an intelligible cleartext (also called plaintext), thus producing an unintelligible ciphertext (also called a cryptogram). The intended receiver must be able to decipher the ciphertext, thus recovering the cleartext, whereas eavesdroppers (also called cryptanalysts) must be unable to decrypt it. Notice the important difference between deciphering, which the legitimate receiver performs with the key, and decryption, which the cryptanalyst attempts without it.

There are several ways in which cryptosystems can be classified. We consider the following as most fundamental: 

• Restricted use cryptosystems
• General use cryptosystems
  - secret-key
  - public-key

A cryptosystem is restricted if its security is based on keeping secret the nature of the enciphering and deciphering algorithms. The simplest historic such system is the so-called Caesar cipher, which simply consists of replacing each letter in the plaintext with the third following letter in the alphabet (with wraparound when necessary). For instance, the word "cleartext" becomes "fohduwhaw". Restricted systems are usually designed by amateurs and are almost always child's play for professionally experienced cryptanalysts. More importantly, such systems are of no use in the modern context of a large number of users. Codes, which are instances of restricted cryptosystems, are not discussed here.
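The Caesar cipher described above can be sketched in a few lines (a toy illustration only; the shift of 3 is Caesar's historical choice):

```python
def caesar(text, shift=3):
    """Replace each letter with the letter `shift` positions later, wrapping past 'z'."""
    out = []
    for ch in text.lower():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return ''.join(out)

print(caesar("cleartext"))      # enciphers to "fohduwhaw"
print(caesar("fohduwhaw", -3))  # deciphering is just the opposite shift
```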

A cryptosystem is general if its security lies not in the secrecy of the enciphering and deciphering algorithms, but rather in a relatively short secret value known as the key. It should be easy for individual users to come up with their own keys, so that even the designer of the cryptosystem cannot break it without knowing which key has actually been used.

For some applications (mostly military, diplomatic and covert actions), there is no reason for the designer of a general cryptosystem to publicly disclose the nature of his algorithms. Some additional safety can be obtained by keeping this information confidential. It is however crucial not to rely on this secrecy, for one never knows when it may be compromised. For this reason, reliability analyses of such systems should always be carried out under the assumption that the potential enemy knows all about the system, except for the actual key being used. And if the enemy in reality does not have this knowledge, so much the better. For other types of applications, such as large scale financial ones, it is in fact better to disclose how the cryptosystem works. Otherwise, users will always suspect the possible existence of a secret method to break the system.

An obvious requirement for the security of a general cryptosystem is a very large number of possible keys, so as to discourage exhaustive search (trying to systematically decipher a given ciphertext using each possible key until meaningful cleartext emerges). For instance, one might naively consider Caesar's cipher as an instance (with key k = 3) of the "general" cryptosystem consisting of replacing each letter in the plaintext with the kth following letter in the alphabet, where k is the secret key. This generalization is worthless because it accommodates only 25 non-trivial keys, making exhaustive search easy for anyone who suspects the nature of the encipherment (at least if the enciphered message has enough redundancy to allow only one meaningful decryption). 
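The exhaustive search against this 25-key "general" shift cipher can be sketched as follows. The small `dictionary` set stands in for the redundancy check that recognizes meaningful cleartext; a real attack would use a word list or language statistics:

```python
def shift_decipher(ciphertext, k):
    """Undo a shift-by-k encipherment (lowercase letters only)."""
    return ''.join(chr((ord(c) - ord('a') - k) % 26 + ord('a')) for c in ciphertext)

def exhaustive_search(ciphertext, dictionary):
    """Try every non-trivial key; keep those producing meaningful cleartext."""
    return [(k, shift_decipher(ciphertext, k))
            for k in range(1, 26)
            if shift_decipher(ciphertext, k) in dictionary]

print(exhaustive_search("fohduwhaw", {"cleartext"}))  # [(3, 'cleartext')]
```

With only 25 candidates to examine, the search succeeds almost instantly, which is exactly why this key space is worthless.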

One should be aware, however, that there is no safety in large numbers alone. For instance, another generalization of Caesar's cipher consists of choosing as key an arbitrary permutation of the 26 letters of the alphabet, such as EROX...WM, and enciphering each plaintext letter according to this permutation (A → E, B → R, ..., Z → M) so that BAD DAY becomes REX XEW. Considering that there are 26! different permutations of the 26 letters, which is more than 4 × 10^26, one might feel that exhaustive search on the key space is not feasible: it would take over ten billion years to try each possible key at the rate of one billion keys every second! Nonetheless, this (mono-alphabetic) simple substitution cipher is rather easy to cryptanalyse, if only because of the variation in natural-language letter frequencies. Much safer cryptosystems have been designed with a significantly smaller key space.
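Both the substitution cipher and its frequency weakness can be sketched briefly. The key below begins E, R, O, X and ends W, M as in the text; the middle letters are arbitrary filler chosen here, since the original key is elided:

```python
from collections import Counter
import string

# Permutation key: position i gives the substitute for the i-th letter A..Z.
# Only E, R, O, X ... W, M are fixed by the text; the middle is arbitrary filler.
KEY = "EROX" + "ABCDFGHIJKLNPQSTUVYZ" + "WM"

def substitute(plaintext, key=KEY):
    """Mono-alphabetic substitution: A -> key[0], B -> key[1], ..."""
    return plaintext.translate(str.maketrans(string.ascii_uppercase, key))

def letter_frequencies(ciphertext):
    """Counting letters is the classic first step in breaking such a cipher."""
    return Counter(c for c in ciphertext.upper() if c.isalpha()).most_common()

print(substitute("BAD DAY"))          # REX XEW
print(letter_frequencies("REX XEW"))  # E and X stand out, mirroring A and D
```

Because each plaintext letter always maps to the same ciphertext letter, the frequency profile of the language leaks straight through, no matter how astronomically large the key space is.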

Coming back to the classification, a general cryptosystem is secret-key if some secret piece of information (the key) has to be agreed upon ahead of time between any two parties that wish to communicate through the cryptosystem. In our previous example, if A enciphers a message using key EROX...WM and sends the ciphertext to B, it had better be the case that B knows which key was used for the encipherment.

This need for secure key distribution was not an insuperable problem in the days when cryptography was for the few, although foresight was necessary to prevent prohibitive delays before secure communication could be established. Now that cryptography has gone public, however, it is unreasonable to set up a network in which each pair of potential users shares a secret key in advance, because the number of keys would grow quadratically with the number of users. In 1976, Diffie and Hellman laid the groundwork for overcoming this difficulty by proposing the notion of public-key cryptography. A similar idea was independently discovered by Merkle [t79]. This was soon followed by Rivest, Shamir and Adleman's first proposed practical implementation. Secure communication over insecure channels between two totally unacquainted parties was at last possible.

The key observation that led to public-key cryptography was that whoever enciphers a message does not need to be able to decipher it. In such systems, each user selects a private key from which she obtains a pair of algorithms. She makes one of them available to everyone as her public enciphering algorithm, whereas she keeps secret the other one, which is the corresponding deciphering algorithm. This allows even a complete stranger to use her public algorithm to encipher a message for her; yet only she can decipher it through the use of her private algorithm. It goes without saying that such systems can only be secure if it is infeasible to figure out a deciphering algorithm from the corresponding public enciphering algorithm.
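The public/private split can be made concrete with a toy version of the Rivest, Shamir and Adleman implementation mentioned above. The tiny textbook primes are chosen here purely for illustration and offer no security:

```python
# Toy RSA key generation with tiny primes (illustrative values, far too small for real use)
p, q = 61, 53
n = p * q                  # modulus, published
phi = (p - 1) * (q - 1)    # Euler's totient of n, kept secret
e = 17                     # public enciphering exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private deciphering exponent: d * e = 1 (mod phi)

def encipher(m):           # anyone can run this: (n, e) is public
    return pow(m, e, n)

def decipher(c):           # only the holder of d can run this
    return pow(c, d, n)

m = 65                     # a message encoded as a number smaller than n
assert decipher(encipher(m)) == m
```

Recovering `d` from the public pair `(n, e)` amounts to factoring `n`; with properly sized primes this is precisely what makes it infeasible to figure out the deciphering algorithm from the public enciphering algorithm.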

More recently, Goldwasser and Micali have set forward the notion of probabilistic encryption, which is a very interesting variation on the theme of public-key cryptography. When a message is enciphered with probabilistic encryption, it becomes essentially just as hard for a cryptanalyst to figure out any information on the message as it is for him to recover its entire contents. Moreover, there exists a probabilistic encryption scheme that is faster than the leading public-key encryption scheme proposed thus far (RSA). These cryptosystems are called "probabilistic" because enciphering the same cleartext message several times under the same key can give rise to completely different ciphertexts.
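The randomization property can be illustrated with a toy symmetric construction invented here for that purpose (it is not the Goldwasser and Micali scheme itself): a fresh random nonce is mixed into every encipherment, so repeating the same message under the same key yields different ciphertexts.

```python
import hashlib
import secrets

def randomized_encipher(key: bytes, message: bytes) -> bytes:
    """Prepend a fresh random nonce; messages up to 32 bytes (one SHA-256 keystream block)."""
    assert len(message) <= 32
    nonce = secrets.token_bytes(16)
    stream = hashlib.sha256(key + nonce).digest()
    return nonce + bytes(m ^ s for m, s in zip(message, stream))

def randomized_decipher(key: bytes, ciphertext: bytes) -> bytes:
    nonce, body = ciphertext[:16], ciphertext[16:]
    stream = hashlib.sha256(key + nonce).digest()
    return bytes(c ^ s for c, s in zip(body, stream))

key = secrets.token_bytes(16)
c1 = randomized_encipher(key, b"attack at dawn")
c2 = randomized_encipher(key, b"attack at dawn")
assert c1 != c2                                    # same message, different ciphertexts
assert randomized_decipher(key, c1) == b"attack at dawn"
```

Because the nonce travels with the ciphertext, the legitimate receiver can always decipher, yet an eavesdropper cannot even tell whether two ciphertexts conceal the same message.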

Other approaches to the key distribution problem have been proposed. For instance, Alpern and Schneider's keyless cryptography can be used effectively in a network that hides the origin (but not the contents) of messages. Shamir's identity-based cryptosystem removes the need for key distribution, but requires a trusted center to create private keys. We shall not discuss these concepts here. Finally, Bennett and Brassard built on the work of Wiesner to develop quantum cryptography, which proposes completely different foundations for cryptography and bases its claims of security on quantum physics rather than mathematics and computational complexity theory. The final chapter of this book is devoted to quantum cryptography.
