Sunday, January 27, 2013

Protocols for the Security Stickler

Data communications channels are often insecure, subjecting messages transmitted over them to passive and active threats (Barkley, 1994). Internet protocols connect disparate networks, and data packets are transmitted over them. Computers exchange messages over an entire protocol stack. For example, Web browsers send Hypertext Markup Language (HTML) documents over the Hypertext Transfer Protocol (HTTP), which sits on top of the TCP/IP stack. Additional protocols are now in place that create secure channels for such communication: Secure Sockets Layer (SSL) sits between HTTP and TCP/IP, so a secure Web-page transfer carries HTTP over SSL on port 443 rather than over the unsecured port 80 assigned to plain HTTP. Together this results in HTTPS (HTTP over SSL) communication. SSL and Transport Layer Security (TLS) are the security protocols primarily used for network transport of messages.
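
To make the layering concrete, here is a minimal sketch using Python's standard ssl module: a plain TCP socket to port 443 is wrapped in TLS before any HTTP is spoken over it. The host name is just a placeholder; any HTTPS-enabled server would do.

```python
# Minimal sketch: HTTPS is HTTP spoken over a TLS-wrapped TCP socket.
import socket
import ssl

context = ssl.create_default_context()  # loads trusted root CA certificates

with socket.create_connection(("example.com", 443)) as tcp_sock:
    # Wrap the plain TCP socket in TLS before any HTTP is exchanged
    with context.wrap_socket(tcp_sock, server_hostname="example.com") as tls_sock:
        print("Negotiated protocol:", tls_sock.version())  # e.g. 'TLSv1.3'
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(256))  # first bytes of the HTTP response
```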

Secure Sockets Layer

The Secure Sockets Layer (SSL) protocol provides communication privacy over the Internet by allowing client-server applications to communicate in a way that is designed to prevent eavesdropping, tampering, or message forgery (Freier, Karlton & Kocher, 1996). SSL is composed of a handshake protocol and a record protocol, which typically sit on top of a reliable transport protocol such as TCP. SSL's final version, 3.0, evolved into the Transport Layer Security protocol.

Transport Layer Security

The primary goal of the Transport Layer Security (TLS) protocol is to provide privacy and data integrity between two communicating applications; it is used to encapsulate various higher-level protocols (Dierks & Allen, 1999, p. 3). TLS is actually a combination of two layers: the TLS Record Protocol and the TLS Handshake Protocol. The TLS Record Protocol has two basic properties: connection privacy and reliability. The TLS Handshake Protocol has three basic properties: peer identity authentication, shared secret negotiation, and negotiation reliability.

One advantage of TLS is that it is independent of the application protocol (Dierks & Allen, 1999, p. 4). Higher-level protocols can be layered on top of it transparently, which leaves decisions about when to initiate the TLS handshake and how to exchange authentication certificates to the judgment of the higher-level protocol designers. The primary goals of the TLS protocol, thus, are to provide cryptographic security, interoperability, and extensibility. These are fundamental requirements of enterprise security.
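
A classic illustration of this design freedom is SMTP's STARTTLS command, where the application protocol itself decides the moment to begin the handshake. Below is a brief sketch using Python's standard smtplib; the mail host is a placeholder.

```python
# Sketch: the application protocol decides when to start the TLS handshake.
import smtplib
import ssl

context = ssl.create_default_context()

with smtplib.SMTP("mail.example.com", 587) as smtp:  # plain TCP at first
    smtp.ehlo()
    smtp.starttls(context=context)  # SMTP itself chooses this moment to begin TLS
    smtp.ehlo()                     # re-greet over the now-encrypted channel
    print("Channel upgraded to TLS")
```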


Sunday, January 20, 2013

Message Digests and Keys

A message digest is analogous to a handwritten signature in the physical world. Digests are a convenient and useful way of authenticating messages.

Webopedia defines a message digest as:

The representation of text in the form of a single string of digits, created using a formula called a one-way hash function. Encrypting a message digest with a private key creates a digital signature, which is an electronic means of authentication. (p. 1)

A message in its entirety is taken as input and a small fingerprint is created; the message is then sent along with its unique fingerprint. When the recipient is able to verify the fingerprint of the document, it ensures that the message did not change during transmission. A message may be sent in plain text along with a message digest in the same transmission; the idea is that the recipient can verify that the plain text arrived unaltered by examining the digital signature. The most popular algorithm for message digests is MD5 (IrnisNet.com, n.d.). Created at the Massachusetts Institute of Technology, it was published to the public domain as Internet RFC 1321.
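
A minimal sketch of this fingerprint idea, using MD5 from Python's standard hashlib module:

```python
import hashlib

message = b"Wire $100 to account 12345"
fingerprint = hashlib.md5(message).hexdigest()  # 128-bit digest as 32 hex chars

# ...message and fingerprint are transmitted together...

received_message = message
if hashlib.md5(received_message).hexdigest() == fingerprint:
    print("Digest matches: message unaltered in transit")
else:
    print("Digest mismatch: message was modified")

# Note: a bare digest only detects changes; to prevent deliberate forgery
# it must be signed with a private key or keyed with a secret (see MAC below).
```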

MD5

MD5, developed by Dr. Ronald L. Rivest, is an algorithm that takes as input a message of arbitrary length and produces as output a 128-bit "fingerprint" or "message digest" of the input (Abzug, 1991). While not mathematically proven, it is conjectured that it is not feasible to recreate a message from its digest. In other words, it is computationally infeasible to “produce any message having a given pre-specified target message digest” (Abzug, 1991).

MD5 is described in Request for Comments (RFC) 1321. Rivest (1992) summarized MD5 as:

The MD5 algorithm is an extension of the MD4 message-digest algorithm. MD5 is slightly slower than MD4, but is more "conservative" in design. MD5 was designed because it was felt that MD4 was perhaps being adopted for use more quickly than justified by the existing critical review; because MD4 was designed to be exceptionally fast, it is "at the edge" in terms of risking successful cryptanalytic attack. MD5 backs off a bit, giving up a little in speed for a much greater likelihood of ultimate security. (p. 3)

Message Digest 5 is an enhancement over MD4; Rivest (1992) describes this version as more conservative than its predecessor and easier to codify compactly. The algorithm provides a fingerprint of a message of any length. Coming up with two plain-text messages that resolve to the exact same fingerprint is on the order of 2 to the power of 64 operations, while reverse-engineering a plain-text message that matches a given fingerprint requires on the order of 2 to the power of 128 operations. Numbers this large make such attacks computationally infeasible.

SHA-1

The Secure Hash Algorithm 1 (SHA-1) is an advanced algorithm adopted by the United States as a Federal Information Processing Standard. SHA-1, as explained in RFC 3174, is employed for computing a condensed representation of a message or a data file (Jones, 2001). The algorithm accepts a message of any length (theoretically less than 2 to the power of 64 bits) and outputs a 160-bit message digest that is computationally unique to the given input. The digest can later be recomputed and validated against a previously stored value.

Demonstration. For example, if a user registers with the password “purdue1234”, applying the SHA-1 algorithm results in the 160-bit digest “8ad4d7e66116219c5407db13280de7b4c2121e23”. This digest can be saved in the database instead of the plain-text password. The next time the user signs on with the same plain-text password, it is converted to the same digest, which can then be compared to authenticate the user. If the user enters a different password, say “rohit1234”, SHA-1 digests it as “fb0f57cb70fbd8926f2912585854cbe4bcf83942”; the mismatch causes authentication to fail. The algorithm is guaranteed to generate the same 160-bit digest for a given plain text, and it is computationally infeasible to reverse the digest into the plain text. Therefore, even if the database is “hacked”, the passwords will not be usable. This is one of the most common techniques employed in the industry for storing sensitive data that only needs to be verified, never reused.
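
The demonstration above translates almost directly into Python's standard hashlib module; the printed digests should match the hex values cited in the text. (Modern practice would also add a per-user salt and a deliberately slow hash, but the core scheme is the same.)

```python
import hashlib

def sha1_hex(password: str) -> str:
    # 160-bit SHA-1 digest rendered as 40 hex characters
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

stored_digest = sha1_hex("purdue1234")  # saved at registration instead of the plain text

def authenticate(entered_password: str) -> bool:
    # Re-digest the entered password and compare with the stored value
    return sha1_hex(entered_password) == stored_digest

print(stored_digest)               # 8ad4d7e66116219c5407db13280de7b4c2121e23
print(authenticate("purdue1234"))  # True: digests match
print(authenticate("rohit1234"))   # False: digests differ
```
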
DSA

The Digital Signature Algorithm (DSA) originated at the National Security Agency (NSA) and was published by the National Institute of Standards and Technology (NIST) in the Digital Signature Standard (DSS) as part of the United States government’s Capstone project (RSA Laboratories, n.d.). To understand DSA, the discrete logarithm problem needs to be explained. RSA Laboratories’ documentation explains that for a group element g, multiplying g by itself n times is written g^n; the discrete logarithm problem is then: given two elements g and h of a finite group G, find an integer x such that g^x = h. The discrete logarithm problem is a complex one; it is considered a harder one-way function than the factoring problem on which other algorithms are based.
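
A toy illustration of the one-way nature of this problem: computing h = g^x mod p is a single fast modular exponentiation, while recovering x from h must, naively, try exponents one by one. The prime and base below are illustrative stand-ins; DSA uses far larger parameters.

```python
p = 2_147_483_647  # a small Mersenne prime standing in for a DSA modulus
g = 16807          # base (a primitive root of p), illustrative choice
x = 1_234_567      # the secret exponent

h = pow(g, x, p)   # easy direction: one modular exponentiation

def brute_force_dlog(g, h, p, limit):
    # Hard direction: no shortcut, so test g^1, g^2, ... in turn
    value = 1
    for candidate in range(1, limit):
        value = (value * g) % p
        if value == h:
            return candidate
    return None

print(brute_force_dlog(g, h, p, 2_000_000) == x)  # True, but only because p is tiny
```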

Quick implementations of the algorithm have emerged, with a big-O of O(n). Big-O notation is a theoretical measure of the execution of an algorithm, usually the time or memory needed, given the problem size n, which is usually the number of items (NIST, 1976). With DSA, signature generation is faster than signature verification, whereas with the RSA algorithm verification is much faster than generation of the actual digest itself (RSA Laboratories, n.d.). Initial criticism of the algorithm centered on its lack of flexibility compared with the RSA cryptosystem, its verification performance, adoption concerns cited by hardware and software vendors that had standardized on RSA, and the discretionary selection of the algorithm by the NSA (RSA Laboratories, n.d.). DSA has since been incorporated into several specifications and implementations and can now be considered a good choice for adoption by the enterprise.

Secret Keys

Two general types of cryptosystems have evolved over the decades: secret-key cryptography and public-key cryptography. In secret-key cryptography, as the name suggests, the key is kept secret from the public domain; only the recipient and the sender have knowledge of the key. This is also known as symmetric-key cryptography. In a public-key cryptosystem, two keys play a role in ensuring security: the public key is well published or can be requested, while the private key is kept secret by each party. This scheme requires a Certificate Authority so that tampering with public keys is prevented. The primary advantage of this scheme over the other is that no secure courier is needed to transfer a secret key; the main disadvantage is that broadcasting of encrypted messages is not possible.

Symmetric Keys

This scheme is characterized by the use of one single key that both encrypts and decrypts the plain-text message. Since the encryption and decryption algorithms themselves exist in the public domain, the scheme rests entirely on knowledge of the key. If the key is known only to the parties in a secured communication, secrecy can be provided (Barkley, 1994). When symmetric-key cryptography is used for communications and the messages are intercepted by a hacker, it is computationally infeasible to derive the key or decrypt the message from the cipher text even if the encryption algorithm is known. The cipher text can only be decrypted with the secret key. Because the secret key is known only by the message sender and the message receiver, the secrecy of the transmission can be guaranteed.
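
A short sketch of the symmetric scheme using the Fernet recipe from the third-party cryptography package (not part of the standard library): one shared secret key performs both transformations.

```python
# Requires: pip install cryptography
from cryptography.fernet import Fernet

secret_key = Fernet.generate_key()  # known only to sender and receiver
cipher = Fernet(secret_key)

token = cipher.encrypt(b"meet at noon")  # ciphertext, safe to intercept
print(token)

# Only a holder of the same secret key can recover the plain text
print(cipher.decrypt(token))             # b'meet at noon'
```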

MAC. While secrecy can be guaranteed, the integrity of the message cannot. To ensure that the message has integrity, a cryptographic checksum called a Message Authentication Code (MAC) is appended to the message. A MAC is a type of message digest: it is smaller than the original message, it cannot be reverse-engineered, and colliding messages are hard to find. The MAC is computed by the message originator, the sender, as a function of the message being transmitted and the secret key (Barkley, 1994).
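
Python's standard hmac module implements exactly this keyed-checksum idea; a minimal sketch:

```python
import hashlib
import hmac

secret_key = b"shared-secret-key"
message = b"transfer 100 units"

# Sender computes the MAC over the message and the secret key, and appends it
mac = hmac.new(secret_key, message, hashlib.sha1).hexdigest()

# Receiver recomputes the MAC over the received message and compares;
# compare_digest avoids timing side channels in the comparison itself
received_mac = mac
valid = hmac.compare_digest(
    hmac.new(secret_key, message, hashlib.sha1).hexdigest(),
    received_mac,
)
print("Message integrity verified:", valid)
```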

Asymmetric Keys

Asymmetric-key cryptography differs in that each party holds a pair of keys: a public key that is well known, and a private key that is kept secret. This scheme is also known as public-key cryptography. Each key generates a function that transforms text (Barkley, 1994). The private key is known only to its owner, whereas the public keys are meant to be distributed. The two keys form a pair, and either one can be deemed public and the other private. Because the public key is known, its transformation function can be derived and made known as well. In addition, the functions have an inverse relationship: if one function encrypts a message, the other can be used to decrypt it (Barkley, 1994). The transformation functions are used as follows: the sender requests the public key of the destination, transforms the data to be transmitted with it, and transmits the encrypted data to the intended recipient. The transmission can only be decrypted by the other key of the pair, namely the private key of the receiver. After receiving the encrypted message, the receiver uses the private key to decrypt it, after which the message can be consumed.
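
The inverse relationship between the two keys can be walked through with the classic textbook RSA numbers (tiny primes, insecure, for illustration only):

```python
p, q = 61, 53
n = p * q      # 3233, shared by both keys
e = 17         # public exponent  -> public key  (n, e)
d = 2753       # private exponent -> private key (n, d); e*d = 1 mod (p-1)*(q-1)

message = 65   # plain text encoded as an integer smaller than n

ciphertext = pow(message, e, n)    # sender uses the receiver's PUBLIC key
recovered = pow(ciphertext, d, n)  # receiver uses the matching PRIVATE key

print(ciphertext)            # 2790
print(recovered == message)  # True: the two transformations are inverses
```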

The advantage of such a scheme is that two users can communicate without having to share a common key, whereas symmetric-key cryptography requires a shared secret key to be stored. A secret key is not something that should be shared in the first place, and distribution of secret keys adds a layer of complexity to the security of the system. Public-key cryptography resolves this issue. Because it is computationally infeasible to derive the private key from the public key, it is likewise infeasible to decrypt a message encrypted with the public key without the private key. The convenience comes with inefficiency, however: encrypting plain text can take a long time, and the cipher text can be longer than the plain-text message itself. Distribution of a single encrypted message to many recipients is also not possible, because each private key is held by only one principal; the scheme therefore cannot be used for encrypted broadcasts. Applications of public-key cryptography are common in the enterprise: authentication, integrity, and non-repudiation.

Sunday, January 13, 2013

Cryptography–It’s the Key

Julius Caesar encrypted messages so that the messenger could not understand the cipher (Faqs.org, 2003). A “shift by 3” function was used, i.e., A was substituted by D, Z by C, and so on. Only the recipient, who knew the key (three in this case), could decipher the message. A cipher system is a way of disguising messages such that only recipients with knowledge of the ‘key’ can decipher them. Cryptography is the art of using cipher/crypto systems. Cryptanalysis is the art of deciphering encrypted messages without prior knowledge of the key, by means other than those intended.
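
Caesar's "shift by 3" cipher fits in a few lines of Python; shifting by the negative key reverses it:

```python
def caesar(text: str, shift: int) -> str:
    result = []
    for ch in text.upper():
        if ch.isalpha():
            # rotate within A..Z; A -> D, ..., Z -> C for shift = 3
            result.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            result.append(ch)
    return "".join(result)

cipher = caesar("ATTACK AT DAWN", 3)
print(cipher)              # DWWDFN DW GDZQ
print(caesar(cipher, -3))  # ATTACK AT DAWN: knowing the key (3) reverses the cipher
```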

[Image: cipher code wheel, from users.telenet.be/d.rijmenants]

A strong cryptosystem has a large key space, produces cipher text that appears random to all standard statistical tests, and resists all known previous attacks (Faqs.org, 2003). Several types of cryptography and standards exist today. Public Key Cryptography Standards (PKCS) is an important family of security standards; it defines a binary format that can be used for storing certificates. Both public-key and shared-key cryptography can also use message digests, which are one-way hash functions.

Sunday, January 6, 2013

Authentication–Who Are You?

Authentication

Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be (SearchSecurity.com, n.d.). Verifying an identity claim is more complex than it first appears. There are several authentication methodologies, security protocols, encryption schemes, and hashing algorithms, and there is no single “best” security solution. For every implementation, it is important to evaluate the best options available.

Authentication Types

Authentication has existed since the days of ancient human civilization. In the enterprise environment, a well-defined authentication architecture is increasingly important. It is common for the user to enter authentication credentials, and several types of authentication methods exist today. Entities can be authenticated based on secret knowledge (like a username and password combination), biometrics (like fingerprint scans), or digital certificates. In private and public computer networks (including the Internet), authentication is commonly done through the use of logon passwords (Faqs.org, 2003).

Knowledge-based authentication

The most common authentication type is knowledge-based. An identification key along with a secret pass code is required to access systems protected by such an authentication scheme. A user-id and password challenge screen is commonly seen in Web-based email systems. Knowledge of the password is considered secret and is deemed enough information to let the user in.

Both ends of a session must have the secret password (and/or key) in order for authentication to take place, and the password needs to be transmitted from the principal’s location to the principal authenticator’s location (Jaworski, Perrone & Chaganti, 2001). This leads to an obvious exposure: the link between the locations needs to be secured such that snooping attempts are not possible or data deciphering is infeasible. One way of securing such systems is by the use of Kerberos, a password-based authentication system where a secret symmetric key is used to cipher and decipher passwords.
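
Kerberos itself is considerably more elaborate, but the underlying idea, proving knowledge of a shared secret without sending it across the link, can be sketched as a simple challenge-response exchange using only the standard library:

```python
import hashlib
import hmac
import secrets

shared_secret = b"purdue1234"  # known to both ends beforehand

# Authenticator sends a fresh random challenge (a nonce)
challenge = secrets.token_bytes(16)

# Principal proves knowledge of the secret without transmitting it:
# only a keyed digest of the one-time challenge crosses the link
response = hmac.new(shared_secret, challenge, hashlib.sha256).hexdigest()

# Authenticator recomputes the expected response and compares
expected = hmac.new(shared_secret, challenge, hashlib.sha256).hexdigest()
print("Authenticated:", hmac.compare_digest(response, expected))
```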

Biometrics-based authentication

Authentication based on biometrics is still in its infancy. Unique attributes extracted from individuals are used for authentication: fingerprints, hand geometry, facial recognition, iris recognition, and dynamic signature verification are some of the more prominent biometric technologies. Biometrics by itself is not a fool-proof technology; there are several potential ways to hack into such systems, and this risk presents additional concerns about privacy protection. While research into revocable biometric tokens is in progress, this technology is not yet commercially implemented on a mass scale.

Certificate-based authentication

This technique has grown in popularity in recent years. What is a certificate? A certificate is simply data that identifies a principal. The important information contained in a certificate is the public key of the principal, the validity dates of the certificate, and the digital signature issued by the certificate issuer (Jaworski, Perrone & Chaganti, 2001). The signer uses its private key to generate a cipher text, called the signature, from a block of plain text. This cipher can only be decrypted using the signer’s public key, which ensures that the signature was actually produced by the signer, because the private key, as the name suggests, is secret.

This technology has significant advantages when encrypted text needs to be sent across otherwise unsecured connections. In a client-server architecture enabled with both server-side and client-side certificates, both parties can send encrypted information that each side knows came from the other, because only the public key can decrypt what the matching private key encrypted.
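
A sketch of this sign-with-private, verify-with-public pattern using the third-party cryptography package (the key pair here is generated on the fly rather than issued by a CA):

```python
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"principal: alice, valid: 2013-01-01 to 2014-01-01"

# Sign with the PRIVATE key...
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# ...verify with the PUBLIC key; verify() raises if the signature is forged
try:
    public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
    print("Signature genuine: produced by the private-key holder")
except InvalidSignature:
    print("Signature forged or document altered")
```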

A well-known certificate authority, often called the root CA, signs the server’s public key with its private key. This way no hacker can create a false public key and pretend to communicate with signed data under assumed keys. The distribution of the root public keys is done in a secure fashion; browsers come pre-configured with these keys.

Several certificate implementations have evolved, most significantly the X.509 v3 standard. This standard allows several different algorithms to be used for creating digital signatures. An X.509 certificate contains information about the version of the certificate, the serial number, the signature algorithm and its parameters, the CA identity and signature, the dates of validity, and the principal’s identity and public key.
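
These X.509 fields can be inspected directly. A sketch that fetches a live server certificate with the standard ssl module and parses it with the third-party cryptography package; the host is a placeholder.

```python
# Requires: pip install cryptography
import ssl
from cryptography import x509

pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode("ascii"))

print("Version:       ", cert.version)  # X.509 v3
print("Serial number: ", cert.serial_number)
print("Signature alg: ", cert.signature_algorithm_oid)
print("Issuer (CA):   ", cert.issuer)
print("Valid from/to: ", cert.not_valid_before, "/", cert.not_valid_after)
print("Subject:       ", cert.subject)
```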