Sunday, December 30, 2012

Guaranteed Integrity of Messages

The ability to guarantee the integrity of a document and authenticate its sender has been highly desirable since the beginning of human civilization. Even today, we are constantly challenged for authentication in the form of picture identification, personal hand signatures and fingerprints. Organizations need to authenticate individuals and other corporations before they conduct business transactions with them.


When human contact is not possible, the challenge of authentication, and consequently authorization, increases. Encryption technologies, especially public-key cryptography, provide a reliable way to digitally sign documents. In today’s digital economies and global networks, digital signatures play a vital role in information security.
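
To make this concrete, here is a minimal sketch of signing and verifying a document with public-key cryptography using Java's standard java.security API. The key size, algorithm and sample document below are illustrative choices, not a recommendation for any particular system.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignDemo {
    public static void main(String[] args) throws Exception {
        // Generate an RSA key pair; in practice the private key would live in a
        // protected key store and the public key would be distributed in a certificate.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        byte[] document = "Purchase order #1234".getBytes(StandardCharsets.UTF_8);

        // Sign: hash the document and encrypt the digest with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(document);
        byte[] signature = signer.sign();

        // Verify: anyone holding the public key can confirm integrity and origin.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(document);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}

Altering even one byte of the document or the signature makes the verification step return false, which is what guarantees integrity and authenticates the signer.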

Sunday, December 23, 2012

Security–the most important Quality Attribute

While digital signatures and encryption are old technologies, their importance is renewed with the rapid growth of the Internet. Online business transactions have been growing at a rapid pace, and more and more money changes hands electronically over the Internet. Non-repudiation is important when personal contact is not possible, and digital signatures serve that purpose. Encryption ensures that information intended for a particular party can be read only by that party and arrives unaltered. Several technologies support encryption.

The enterprise security model consists of domains that are protected from resources not permitted to access them or execute their functions. There is a clear distinction between authorizing a resource and authenticating one. When a person shows a driver’s license at a bar before he gets a drink, the bartender looks at it and compares the photograph with the person presenting it. This is authentication. When the bartender checks the date of birth against the legal drinking age, he has authorized the requester for the drink.

In the corporate environment, it is exceedingly important that the same forms of authentication and authorization take place digitally. With new business channels open on the Internet, web applications deployed on the intranet for employees, and business-to-business (B2B) commerce channels created on the extranet, millions of dollars’ worth of transactions occur.

Business-critical information is passed on the wire between computers, and if it is exposed to the general public or falls into the wrong hands the results could be disastrous for the company in question. For every business that exists there is a threat to the business. For e-business initiatives, the anonymity of the network, especially the Internet, brings new threats to information exchange. It is important that information is exchanged securely and confidentially.

DSV and Custody Chaining

Dynamic signature verification (DSV) is the process by which an individual’s signature is authenticated against a known signature pattern. The dynamics of creating a signature are initially enrolled into the authenticating system and then used as the baseline against which future signatures are compared. Several factors, including speed, pressure, acceleration, velocity and size ratios, are taken into account. These measurements are digitized and stored for later comparison.
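
As a simplified illustration of the comparison step, the sketch below (in Java) matches a digitized feature vector for a candidate signature against an enrolled template using a distance threshold. Commercial DSV products use far more sophisticated, proprietary matching; the feature set, values and threshold here are hypothetical.

import java.util.Arrays;

// Simplified illustration only: real DSV systems use proprietary matching.
// Feature names, values, and the threshold are hypothetical.
public class SignatureMatcher {
    // An enrolled template: averaged speed, pressure, acceleration, size ratio.
    private final double[] template;
    private final double threshold;

    public SignatureMatcher(double[] template, double threshold) {
        this.template = Arrays.copyOf(template, template.length);
        this.threshold = threshold;
    }

    // Euclidean distance between the candidate's features and the template.
    public boolean matches(double[] candidate) {
        double sum = 0.0;
        for (int i = 0; i < template.length; i++) {
            double d = candidate[i] - template[i];
            sum += d * d;
        }
        return Math.sqrt(sum) <= threshold;
    }

    public static void main(String[] args) {
        double[] enrolled = {1.20, 0.85, 0.40, 1.05};   // digitized at enrollment
        SignatureMatcher matcher = new SignatureMatcher(enrolled, 0.15);
        System.out.println(matcher.matches(new double[]{1.18, 0.88, 0.42, 1.02})); // accepted
        System.out.println(matcher.matches(new double[]{0.60, 0.30, 0.90, 1.60})); // rejected
    }
}

A real system would enroll several sample signatures, model the variability of each feature, and tune the threshold to balance false accepts against false rejects.
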
Signatures have long been used to authenticate documents in the real world; before the technology wave, signatures, seals and tamper-proof envelopes were used for secure and valid message exchange. With the onset of technology and digital document interchange, a growing need for authenticating digital documents has emerged.
Digital signatures emerged in the 1970s as a means of developing a cipher of fixed length from an input of theoretically unlimited length. The signature is expected to be collision-free and computationally infeasible to reverse into the original document. Both handwritten signatures and digital signatures have to comply with the basic requirements of authenticity, integrity, and non-repudiation (Elliott, Sickler, Kukula & Modi, n.d.).
In the information technology departments of corporations, documents are regularly exchanged between teams, companies, outsourced contract workers, internal consultants and executive management. These documents are often confidential and contain company secrets. However, due to resource constraints such documents are often shared with consultants and contract workers.

A viable solution, therefore, is to apply digital signatures to those documents using proper authentication protocols. One way to achieve this is through dynamic signature verification. An interface that can create a unique digital signature from the physical dynamic signature and apply it to the electronic document would be ideal.
A verifiable, trusted signature-creation technique is required for enterprise-wide document collaboration, and DSV is ideally suited for this purpose. Sensitive documents can be signed using a DSV module that electronically signs the e-document. The document can then be shared with confidence that it has not been altered in transit, and the recipient will be able to trust it.





Sunday, December 16, 2012

Fingerprinting and Biometrics at Airports

I was unpleasantly surprised to see longer-than-usual lines at the international port of entry at O’Hare this February. My flight connected to O’Hare International in Chicago from Schiphol Airport in Amsterdam, Netherlands. It was a long flight, and the reason for the delay in processing passengers was not apparent to me. A huge line of people with hand luggage zigzagged across what appeared to be a large hall, the end of the line fading into the distance. I was tired, I wanted to get to my apartment, and I did not believe I would ever get there at this rate.

In a 2004 article published in New Scientist, Will Knight reports that the Department of Homeland Security (DHS) initiated the installation of a fingerprinting system. A total of 115 airports have the biometric security equipment installed (Knight, 2004). A DHS officer told Knight that “each finger scan takes just three seconds and pilot schemes produced just one error in every thousand checks” (Knight, 2004).


The early morning long lines brought back memories of the traditional waits outside the U.S. Consulate General in India where visas are issued. It is said that neither heat, rain nor storms get in the way of ticket seekers to paradise itself – the United States of America. Visa applicants are happy to divulge their fingerprints for an entry permit into the USA.

Knight (2004) cites Bruce Schneier, founder of the US security consultancy firm Counterpane, who believes that gathering more information through this method only collects more data, while the real problem with security lies in a lack of intelligence, not the amount of data. Schneier believes that there is enough data already available but not enough intelligence to process it. He goes on to explain that the terrorists who crashed airplanes into buildings on September 11, 2001 had valid passports and were not on previous terrorist watch-lists.

The U.S. immigration officer asked me to wet my left and right index fingers and place them on the fingerprint sensor, just as the visa officer had asked me to do in India. The visa had been issued at the end of the day – a very long day. There was a camera placed alongside the fingerprint sensor. No pictures were taken in either place. I placed my finger, and the immigration officer instructed me to wait. The computer system looked up my fingerprint and compared it with its databases in what seemed like an eternity. Finally, the immigration officer smiled back at me and let me proceed. I still had to go to baggage collection and customs; I feared more divulgence of impressions from body parts. Thankfully there were none. After ninety minutes of baby steps through the immigration lines and multi-finger scans at Chicago O’Hare airport, I was free to step into the “Land of the free, home of the brave”.

Sunday, December 9, 2012

SOA 2004–a blast from the past or what I thought about it back then

I wrote up some views on Service Oriented Architecture in 2004. This was a time when XML was a buzzword and people were wondering and writing about SOA. I was implementing a leading-edge solution for a policy administration system using an ACORD XML interface and hosting Internet B2B services for independent agencies – a soup-to-nuts solution that included XML, SOAP, WSDL, Java EE, EJB and RDBMS + COTS.

I also wrote this unpublished paper:

Introduction

This is the most important decade for distributed computing. Reuse and interoperability are back in a big way for distributed applications. Over the years, several types of reuse methodologies have been proposed and implemented with little success: procedure reuse, object reuse, component reuse, design reuse, and so on. None of these methodologies tackled interoperable reuse. Enter web-services. Web-services are big, and everyone in the industry is taking them seriously. Web-services are reusable services based on industry-wide standards. This is significant because they could very well be the silver bullet for software reuse. Software can now be reused via web-services, and applications can be built leveraging Service Oriented Architectures. This paper describes Service Oriented Architecture and highlights its significance and relationship to web-services.

Distributed Software Applications

Software applications deployed across several servers and connected via a network are called distributed applications. Web-services promise to connect such applications even when they are deployed across disparate platforms in a heterogeneous application landscape. Cross-platform capability is one of web-services’ key attractions because interoperability has been a dream of the distributed-computing community for years (Vaughan-Nichols, 2002). In the past, distributed computing was complex and clunky. Previous technologies like CORBA (Common Object Request Broker Architecture), RMI (Remote Method Invocation), XML-RPC (Extensible Markup Language – Remote Procedure Calls), and IIOP (Internet Inter-ORB Protocol) were used for distributed applications and information interchange, but these were not based on strict standards.

Sun Microsystems’ RMI (Remote Method Invocation) over JRMP (Java Remote Method Protocol) was the next revolution in distributed computing. JRMP required both client and server to have a JRE (Java Runtime Environment) installed. It provided DGC (Distributed Garbage Collection) and advanced connection management. With the release of its J2EE specification, Sun introduced EJBs (Enterprise JavaBeans). EJBs promised to support both RMI over JRMP and CORBA IDL (Interface Definition Language) over IIOP (Internet Inter-ORB Protocol). Distribution of these beans (read: objects) and transaction management across topologies seemed to be a blue-sky dream that never materialized. In addition, the J2EE standard was not envisioned to be a truly enterprise standard, in the sense that integration with other object-oriented platforms was not “graceful”. Microsoft introduced .NET and C#, which directly compete with J2EE and Java. The continued disengagement between these two major platforms has reached its threshold. It became imperative that there be a common cross-platform, cross-vendor standard for interoperability of business services. Web-services seem to have bridged the gap in the distributed computing space that no other technology has in the past: they standardize the interoperability space.
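
For context, a minimal RMI-over-JRMP server looks roughly like the sketch below; the service name and method are made up for illustration, and, as noted above, both ends need a JRE.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// A minimal RMI-over-JRMP sketch: stubs and distributed garbage collection
// are handled by the Java runtime on both sides.
interface Greeter extends Remote {
    String greet(String name) throws RemoteException;
}

public class GreeterServer implements Greeter {
    @Override
    public String greet(String name) throws RemoteException {
        return "Hello, " + name;
    }

    public static void main(String[] args) throws Exception {
        Greeter stub = (Greeter) UnicastRemoteObject.exportObject(new GreeterServer(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("greeter", stub);
        // A client on another JVM would call:
        //   Greeter g = (Greeter) LocateRegistry.getRegistry("host", 1099).lookup("greeter");
        //   System.out.println(g.greet("world"));
        System.out.println("Greeter bound in RMI registry on port 1099");
    }
}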

Dublin Core Metadata Glossary defines interoperability as:

The ability of different types of computers, networks, operating systems, and applications to work together effectively, without prior communication, in order to exchange information in a useful and meaningful manner. There are three aspects of interoperability: semantic, structural and syntactical.

Vaughan-Nichols (2002) states that web-services enable interoperability via a set of open standards, which distinguishes them from previous network services such as CORBA’s Internet Inter-ORB Protocol (IIOP).

Web Services

The word “service” conjures up different connotations for different audiences. We need to understand what a service is not. One damaging assumption is that “service” is simply another term for “component” (Perrey & Lycett, 2004). Component-orientation, object-orientation and integration-based architectures occupy the same space and are often a source of confusion.

Service-Architecture defines a service: “A service is a function that is well-defined, self-contained, and does not depend on the context or state of other services.” Perrey and Lycett attempt to define “service” by unifying its usage context across business, technical, provider and consumer perspectives. They describe and contrast multiple perspectives on “service” in detail: “The concept of perspective is the key to reconciling the different understandings of service. Business participants view a service (read: business service) as a unit of transaction, described in a contract, and fulfilled by the business infrastructure.” They contrast this with the technical participant’s perception of a service as a “unit of functionality with the semantics of service described as a form of interface”. The authors go on to define a service: “Service is functionality encapsulated and abstracted from context”. They argue that the contrasting perceptions of services are not really an issue as long as there is commonality in the underlying perception. The commonality seems to lie in the reuse of services.

“Web services can be characterized as self-contained, modular applications that can be described, published, located and invoked over a common Web-based infrastructure which is defined by open standards.” (Zimmermann, Milinski, Craes, & Oellermann, 2004)

The Web Service Architecture

We are on the cusp of building “plug-compatible” software components that will reduce the cost of software systems while at the same time increasing their capabilities (Barry, 2003). Applications can be built on architectures which leverage these services. The goal is for a service-oriented architecture to be decoupled from the very services it invokes.

Service-oriented architecture leverages the interoperability of web-services to make distributed software reusable.

Web-services make the process more abstract than object request brokers by delivering an entire external service without users having to worry about moving between internal code blocks (Vaughan-Nichols, 2002). A recent Yankee Group survey showed that three out of four enterprise buyers plan on investing in SOA (Service-Oriented Architecture) technology within one year (Systinet, 2004).

Interoperability is driven by standards, specifications and their adoption. A service operates under a contract or agreement which sets expectations, and a particular ontological standpoint that influences its semantics (Perrey & Lycett, 2003). Applications that expose business processes as web-services are simpler to invoke and reuse by other applications because of the pre-defined contracts that the services publish. Web-services are interoperable and service-oriented architecture enables reuse; as a result, SOA and web-services have formed a natural alliance (Systinet, 2004).

The collection of web-service specifications enables a consortium of vendors, each with its own underlying implementation of these standards, to compete viably in the reuse and interoperability market. This is good because the competition is limited to the implementation level as opposed to the standards level. Vendors will enable a compliance-based marketplace for distributed applications which expose web-services. This would enable SOA-based web-services to consistently search for and leverage services in a business domain, via well-known public, private or protected registries, that are compliant with these standards.

Practitioners have used web-services for interoperability successfully in large systems:

“To achieve true interoperability between Microsoft (MS) Office™/.NET™ and Java™, and to implement more than 500 Web service providers in a short time frame were two of the most important issues that had to be solved. The current, second release of this solution complies with the Web Services Interoperability (WS-I) Basic Profile 1.0. Leveraging the Basic Profile reduced the development and testing efforts significantly” (Zimmermann et al., 2004).

The Communication Protocol

While web-services are primarily meant to communicate over HTTP (Hypertext Transfer Protocol), they can communicate over other protocols as well. SOAP (not an acronym), popularly misrepresented as an object-access protocol, is the primary message-exchange paradigm for web-services. SOAP is fundamentally a stateless, one-way message exchange paradigm (W3C, 2004).
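
As a rough sketch of what this looks like in practice, the Java (JAX-WS) example below publishes a single operation as a SOAP endpoint over HTTP; the service name, URL and business logic are invented for illustration.

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// A minimal sketch of exposing a function as a SOAP web-service with JAX-WS
// (part of the Java platform of that era). Names and values are illustrative.
@WebService
public class QuoteService {

    @WebMethod
    public double premiumFor(String policyType, int age) {
        // Toy calculation standing in for real business logic.
        return "AUTO".equalsIgnoreCase(policyType) ? 500.0 + age * 2.5 : 300.0;
    }

    public static void main(String[] args) {
        // Publishing generates the WSDL contract automatically at <URL>?wsdl
        // and exchanges SOAP messages over HTTP.
        Endpoint.publish("http://localhost:8080/quotes", new QuoteService());
        System.out.println("SOAP endpoint listening at http://localhost:8080/quotes?wsdl");
    }
}

The runtime generates the WSDL contract and handles the SOAP envelopes, so a consumer only needs the published contract, never the implementation.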

Interoperability is driven by standards, specifications and their adoption. True interoperability between platforms is achieved via SOAP (Zimmermann et al., 2004). Web-services are interoperable, and service-oriented architecture builds on that interoperability. Interoperable protocol binding specifications for exchanging SOAP messages are inherent to web-services (W3C, 2004).

The collection of specifications enables a pool of vendors, each with its own implementation of these standards. This is good because the competition is limited to the implementation level as opposed to the standards level. Standards-compliant vendors will enable a compliance-based marketplace for distributed applications, which would greatly support service-oriented architectures. This would enable SOAs to consistently search for and leverage services in a domain that are compliant with these standards.

The Description Language and Registry

While WSDL (Web Service Description Language) describes a service, a registry is the place where the locations of WSDLs can be searched. There are two primary models for a web-services registry (Sun Microsystems, 2003): UDDI and ebXML, each targeting a specific information space. While UDDI focuses more on technical aspects when listing a service, ebXML focuses more on business aspects. In a nutshell, SOAP, WSDL and UDDI fall short in their ability to automate ad-hoc B2B relationships and the associated transactions. None is qualified to address the standardization of business processes, such as the procurement process (Sun Microsystems, 2003).

The initial intent of UDDI was to create a set of public service directories that would enable and fuel the growth of B2B electronic commerce. Since the initial release of the UDDI specification, only a few public UDDI registries have been created. These registries are primarily used as test beds for web service developers.

Conclusion

Web-services in combination with service-oriented architecture have bridged the interoperability gap in the distributed computing space unlike any technology before them. Service-oriented architecture and web-services are a paradigm shift in the interoperability space because they are based on industry-accepted standards and are simpler to implement across disparate software deployments. This technology is certainly here to stay.

Sunday, December 2, 2012

Speech Recognition in Automobiles

I wrote this in 2004, when I purchased a car with voice-activated controls. It was amazing back then.

Speech Recognition in Automobiles

I am alone in my car, cruising from Carmel, Indiana to Purdue University in West Lafayette, Indiana for a weekend class. It’s early in the morning and I wonder if I will make it to class on time. After about ten minutes on Interstate 65, I ask impatiently, “How long to the destination?” Honda’s advanced navigation system gears into action; it promptly queries the Global Positioning System (GPS) satellites and local GPS repeaters for the vehicle’s current coordinates. It then estimates the expected speed based on current averages on the interstate, state roads and inner streets and responds in a pleasant, natural female voice: “It is about forty-two minutes to the destination.” I am definitely going to be late for class.

Speech recognition technology, once the domain of fantastic science fiction, is a reality today. This technology has begun to touch our lives on a daily basis in our automobiles. A recent article (Rosencrance, 2004) reports on the speech recognition technology in Honda automobiles. The system can take drivers’ voice commands for directions and then respond with “voice-guided turn-by-turn instructions, so they don't have to take their hands off the wheel” (Rosencrance, 2004), said Alistair Rennie, vice president of sales for IBM's pervasive computing division. Rennie added that this “goes significantly beyond what was done before in terms of being able to deliver an integrated speech experience in a car” (Rosencrance, 2004).

Using IBM's Embedded ViaVoice software, the system can recognize spoken street and city names across the continental United States (Rosencrance, 2004). The system supports almost every task a driver may want to accomplish while on the road, with commands that can operate the radio, compact disc (CD) player, climate control and defrost systems. It can recognize more than 700 commands and 1.7 million street and city names. All this is possible without the driver looking away from the road.

(Figure 1)

“Display on,” I prod along. The in-dash LCD screen lights up (see Figure 1). I glance at it for a second – there is a map of the state of Indiana and a symbol inching north towards the destination, a red bull’s-eye on the electronic map. I will get there soon. I say, “XM Radio Channel twenty.” The integrated satellite radio starts up and plays high-quality music.

Automobiles that leverage speech recognition technology not only make vehicles more attractive to car buyers but also make the roads safer by allowing the driver to keep their eyes on the road. Research conducted by the National Highway Traffic Safety Administration (NHTSA) found that automatic speech recognition (ASR) systems distracted drivers less than graphical user interfaces in vehicles performing the same function (Lee, Caven, Haake & Brown, n.d.).

Before long, the speech system fades down the music volume and then articulates in the same pleasant voice, “Exit approaching in two miles – stay to the right.” The exit-mile countdown continues every half mile until the car actually takes the exit. In about ten minutes I pull into the parking lot. I am running ten minutes late – the class has probably begun and the exam papers have probably been handed out to the cohort. Before I turn off the engine, I finally ask, “Will I make a good grade?” There is no response from the system this time.