Thursday, July 31, 2008

Identity Governance Framework

This post summarizes the white paper 'Identity Governance Framework' by Oracle.

The IGF initiative was first announced by Oracle in November 2006, and in November 2007 it was submitted to the Liberty Alliance.

The Identity Governance Framework (IGF) helps enterprises easily determine and control how identity-related information, including Personally Identifiable Information (PII), access entitlements, attributes, etc., is used, stored (in multiple sources), and propagated between their systems.

It enables organizations to define enterprise level policies to securely and confidently share sensitive personal information between applications that need such data, without having to compromise on business agility or efficiency.

In other words, IGF allows an enterprise to define a set of business rules around the usage of identity-related data.

For example, a user's SSN should only be used by an application that works on behalf of that user; in all other cases, no application should have access to that specific piece of data.

Further, IGF focuses on the following goals:

- Decouple applications from deployment infrastructure
- Include identity-related data stored inside and outside classic enterprise directory
- Access to user attributes and entitlement data
- Policy-driven to support management and governance

In other words, IGF defines a set of declarative contracts between suppliers and consumers of identity-related information, based on two standards: CARML and AAPML.

The Identity Governance Framework (IGF) is designed to allow:

1. Application developers to build applications that access identity-related data from a wide range of sources
2. Administrators and deployers to define, enforce, and audit policies concerning the use of identity-related data.

IGF has four components:

1. Identity attribute service: a service that supports access to many different identity sources and enforces administrative policy
2. CARML: a declarative syntax with which clients may specify their attribute requirements
3. AAPML: a declarative syntax that enables providers of identity-related data to express policy on the usage of information
4. Multi-language API (Java, .NET, Perl) for reading and writing identity-related attributes

Client Attributes Requirements Markup Language (CARML) is a declaration built by the developer during the development process (much as WSDL is for a web service).

In other words CARML is targeted towards the developers who develop applications which consume identity-related data.

This specification indicates the required and optional attributes, operations, and indexes the application will use when deployed.
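
As a rough sketch, such a declaration might look like the following. This is illustrative only; the element names here are hypothetical and not taken from the actual CARML schema:

```xml
<!-- Illustrative pseudo-CARML; element names are hypothetical -->
<AttributeRequirements application="PayrollApp">
  <Attribute name="givenName" usage="required"/>
  <Attribute name="mail" usage="required"/>
  <Attribute name="SsnLast4Digits" usage="optional"/>
  <Operations>
    <Read/>
  </Operations>
</AttributeRequirements>
```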

Attribute Authority Policy Markup Language (AAPML) is a contract between an attribute authority and an attribute service.

AAPML is targeted towards the identity providers and defines the constraints under which identity data is released.

In other words, it specifies the rules and constraints to access attributes brokered by the service.

These rules are expressed using a form of XACML specifically focused on attributes, based on XACML 2.0.

For example, the following rules and obligations might be considered by an identity provider when deciding whether to release identity-related data:

- Subject – characteristics of application, user, strength of authentication
- Resources – attribute names
- Actions – read or write
- Environment – Internet/Intranet/VPN/..
- Consent – availability of specific consent records
- Relationship between Subject and requested identity information
- Whether data can be cached or propagated further
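
To make this concrete, a heavily simplified XACML-style rule for the earlier SSN example might look like the sketch below. Real XACML 2.0 policies are far more verbose, and the identifiers here are hypothetical:

```xml
<!-- Simplified, hypothetical sketch; not valid XACML 2.0 as-is -->
<Rule RuleId="release-ssn-on-behalf-of-owner" Effect="Permit">
  <Target>
    <Resources><Resource>ssn</Resource></Resources>
    <Actions><Action>read</Action></Actions>
  </Target>
  <!-- Release only when the requesting application is acting
       on behalf of the user the SSN belongs to -->
  <Condition>requestor.on-behalf-of = resource.owner</Condition>
</Rule>
```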

The developer builds their application with minimal or no regard for where identity-related data comes from or how it is stored.

The application developer uses the CARML API to both declare the attribute data needed for the application and the operations needed to support the application.

The declaration is then used to either extract a CARML XML document (for manual configuration), or is automatically asserted when the application connects to the attribute service.

As far as the application developer is concerned, the attribute service manages how and where all identity-related data is processed.

The attribute authority works with the attribute service administrator to decide under what conditions specific data may be released.

Together, they define an AAPML document which specifies how and when the attribute service provides access to the authority's information and what operations are enabled.

Attribute service administrators take CARML requirements from applications and reconcile those requirements with available attribute data and existing policies.

They may be required to define mappings to handle differences between client applications and attribute sources.

For example, a client application might have an attribute SsnLast4Digits. While this doesn’t exist in any data store, they could create a mapping that simply allows matching against the last four digits of SSN in an appropriate attribute authority.

This allows for both schema conversion and enhancing usage of minimal information.
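
A sketch of such a mapping, in Python for illustration (the attribute names and the mapping-table shape are assumptions, not part of IGF):

```python
# Hypothetical mapping table: a virtual attribute name maps to a
# source attribute plus a transformation over its value.
ATTRIBUTE_MAPPINGS = {
    "SsnLast4Digits": ("ssn", lambda v: v.replace("-", "")[-4:]),
}

def resolve_attribute(requested, record):
    """Return the requested attribute, applying a mapping when the
    attribute does not exist directly in the source record."""
    if requested in record:
        return record[requested]
    source, transform = ATTRIBUTE_MAPPINGS[requested]
    return transform(record[source])
```

Here the client application never sees the full SSN; the attribute service hands back only the derived value.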

With IGF, enterprises will benefit in the following ways:

- Rapid deployment of applications without change to identity infrastructures
- Meet legal, regulatory and enterprise policies on managing identity data

Never they leave... just checking out...

Dr. Sanjiva, addressing the farewell party for Ruchith, Deepal, Saminda, Sanka, Sandakith, Dinesh, Diluka, Suran & Chandima, who left the company for higher studies, remarked, "You can check out WSO2 any time you like, but you can never leave!", echoing the famous Hotel California lyric.

I've been working closely with Ruchith during the last year or so; in fact, Ruchith is the one who interviewed me for WSO2.

To be honest, it's my privilege to work with such a talented person.

He's not just technically talented - but a great presenter, a patient teacher and a fantastic leader.

Ruchith has made himself famous in the open source community, especially in the web services security arena, and the WSO2 Identity Solution is his brainchild; yet he still finds time to answer any dumb question thrown at him.

I am still too new to WSO2 to do justice to his service there, but any WSO2er will.

I recently had an opportunity to go on a business trip to the UK and US with Ruchith. While we were in the US, we visited one of our clients there, and on the way back to the hotel Ruchith mentioned that he truly feels guilty about leaving WSO2 at this moment. He added that WSO2 has built him up over the last few years from zero to hero, and now, when he is in great shape to give back to WSO2, he has to leave; that is what he felt guilty about.

Also, once we were in London, due to extremely personal reasons Ruchith had to say 'no' to a request that came to him from the company to do a training on Axis2. It was a very valid reason, and he had no other option. Later, from my colleagues at WSO2, I learnt that this was the first time Ruchith had said 'no' to such a request. It was clearly reflected on his face just after conveying his decision back to the company; I have never seen Ruchith so upset before.

Once again, I am still too new to WSO2 to comment on his loyalty, but any WSO2er will.

That is... just one side of the story; let me add a few words on the other side as well.

There was a time when Ruchith felt he should postpone his admission to grad school by either a semester or a year.

Any company CEO would definitely admire this decision and probably throw a party over it.

But Dr. Sanjiva, knowing very well the value Ruchith could add to WSO2, encouraged him to leave the company and pursue his higher studies on time, to which Ruchith finally agreed.

This is fantastic - but not a surprise to anybody aware of WSO2 culture.

Ruchith: it's been great working with you for the last few months, and I wish you all the very best in your higher studies. Make your country and people proud of you...!!!

Wednesday, July 30, 2008

Autopost to Everywhere

The only thing that worries me here is that I have to give Posterous my username/password for every other service I want it to work with :(

Tuesday, July 29, 2008

Complex Event Processing with Esper and WSO2 ESB

Monday, July 28, 2008

Effective SOA deployment using an SOA Registry-Repository

This post summarizes the white paper 'Effective SOA deployment using an SOA Registry-Repository' by Sun Microsystems.

An SOA registry-repository is increasingly becoming an important infrastructure middleware solution for managing the rising complexities and meeting new requirements related to SOA.

Many different information artifacts describe a service component or relate it to other service components and information artifacts, such as:

- Multiple WSDL files may describe the various interfaces and protocol bindings of these interfaces for the service component.
- XML schema files may describe the documents exchanged by messages in the service protocol.
- Business process orchestration for the service component may be described by artifacts such as BPEL descriptions, and ebXML business process specification schemas.
- Metadata may describe the assembly structure and subcomponents of a composite service.
- XSLT stylesheets may be used as adapters between service components to handle impedance mismatch due to service version differences.
- WSRP descriptions may describe how service components are used by portals.
- Organizational policies, business rules, and procedures may define how service components and service information artifacts may be defined and used.

Governance is defined as the policies, rules, and regulations under which an organization functions as well as the processes that are put in place to ensure compliance with those policies, rules, and regulations.

It is required to have a point of control within the SOA infrastructure that provides governance of service components and artifacts by enforcing the organizational policies that govern them.

The need for a point of control and governance within the SOA deployment demands that service information artifacts be stored and managed in a consistent manner that allows enforcement of organizational policies.

This is precisely the role served by a registry-repository service within an SOA deployment.

The following example describes a registry-repository using a common metaphor:

- A registry-repository is like your local library.
- It has a repository that contains all types of electronic assets, much like the library book shelves contain all types of published content including books, magazines, videos, and so on.
- It has a registry that contains metadata describing the electronic artifacts, much like the library’s card catalog contains information describing the published content on its book shelves.
- The registry and repository are administered jointly. Within a library, the card catalog information and the books on the shelves are administered jointly.
- Any number of registry-repositories should be able to work together to offer a unified service, much like multiple libraries can participate in a cooperative network and offer a unified service.

A registry can only store links or pointers to service information artifacts.

The actual artifacts must reside outside the registry.

A registry-repository provides an integrated solution able to store metadata such as links or pointers to artifacts, as well as the actual artifacts.

An SOA registry-repository should provide governance capabilities that enable organizations to define and enforce organizational policies governing the content and usage of the artifacts throughout their life cycles.

Since organizational policies vary, an SOA registry-repository should enable organizations to enforce custom policies for the governance of any type of service information artifact throughout its life cycle.

A registry-repository should allow business rules to be enforced at the time of publishing. It should also allow such business rules to be defined by the organization and specialized for the types of artifacts. If an artifact fails publish-time validation checking, the registry-repository should either reject the artifact, or accept it as invalid and automatically notify responsible parties.
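
A minimal sketch of that publish-time flow, in Python (the validator and artifact shapes are assumptions for illustration):

```python
def publish(artifact, validators, notify):
    """Run each validator at publish time. If any fail, accept the
    artifact as invalid and notify the responsible parties; a stricter
    policy could reject it outright instead."""
    errors = [err for check in validators for err in check(artifact)]
    if not errors:
        return "published"
    notify(artifact["name"], errors)
    return "accepted-as-invalid"

# Example organization-defined rule: WSDL artifacts must declare
# a target namespace.
def has_target_namespace(artifact):
    return [] if artifact.get("targetNamespace") else ["missing targetNamespace"]
```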

Federated information management allows multiple registry-repositories to federate together and appear as a single, virtual registry-repository, while allowing individual organizations to retain local control over their own registry-repositories.

Governance requires enforcement of organization policies that are described in a machine-processable syntax, typically XML.

An SOA registry-repository will be used to manage and govern policies in much the same manner as other SOA artifacts.

The ebXML Registry standard supports federated policy management of access control policies expressed in the XML syntax defined by the OASIS Extensible Access Control Markup Language (XACML) standard.

A registry-repository should also support federated identity management features such as single sign-on, and integrate with identity and access management services using open standards such as Security Assertions Markup Language (SAML) and Liberty.

A registry-repository contains all service-related artifacts for service components available within an SOA deployment.

A registry-repository should provide discovery capabilities that are extensible and can accommodate the simplest to the most complex domain-specific discovery queries. Specifically, its discovery queries should not all be predefined.

Here are some examples of service artifact discovery queries expressed in plain English:

- Find all WSDL documents that use a specified namespace pattern
- Find all Service Bindings or Services that have a certain text pattern in their documentation
- Find all Service Bindings that are SOAP bindings AND use DOC Literal style AND do not use HTTP as transport
- Find all WSDLs with Service implementations that use a specified implementation platform (for example, Java 2 Platform, Enterprise Edition (J2EE), .NET, and so on)
- Find all WSDLs with Service implementations that use specified platform resources (such as database, Java Message Service or JMS, Java API for XML Registries or JAXR, and so on)

Cataloging of artifacts improves their discoverability and is essential to supporting such artifact-specific queries.

With cataloging, information is automatically converted to metadata at the time it is published.

For example, an organization may define cataloging policies for WSDL artifacts such that when a WSDL document is published, it is cataloged in a WSDL-specific manner to generate metadata that includes information on:

- The documents imported by the WSDL document (such as other WSDL documents and XML schema documents)
- The name spaces used by the WSDL document and documents imported by it
- The name and description of the bindings, interfaces, and types used by the WSDL document

Such metadata can then be used in WSDL-specific discovery queries.
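
A sketch of such WSDL-specific cataloging, in Python using only the standard library (the metadata shape is an assumption; a real registry-repository would catalog much more):

```python
import xml.etree.ElementTree as ET

WSDL = {"wsdl": ""}

def catalog_wsdl(wsdl_text):
    """Extract cataloging metadata from a WSDL document: its target
    namespace, imported documents, and the names of its port types
    and bindings."""
    root = ET.fromstring(wsdl_text)
    return {
        "targetNamespace": root.get("targetNamespace"),
        "imports": [(e.get("namespace"), e.get("location"))
                    for e in root.findall("wsdl:import", WSDL)],
        "portTypes": [e.get("name") for e in root.findall("wsdl:portType", WSDL)],
        "bindings": [e.get("name") for e in root.findall("wsdl:binding", WSDL)],
    }

# A minimal sample document, made up for illustration.
sample = """<definitions xmlns=""
                         xmlns:tns="urn:example:orders"
                         targetNamespace="urn:example:orders">
  <import namespace="urn:example:common" location="common.wsdl"/>
  <portType name="OrderPortType"/>
  <binding name="OrderSoapBinding" type="tns:OrderPortType"/>
</definitions>"""
```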

As more and more service components are reused within other service components, the task of tracking the network of dependencies between service components becomes more challenging and significant. A registry-repository should provide a set of standard relationship types, but also allow an organization to define additional relationship types based on its specific requirements.

Service components are deployed in phases. During each phase, organizational access control policies (ACPs) determine the controlled community of users which has access to the service.

A registry-repository should allow ACPs to be defined and enforced for service information artifacts. Since ACPs tend to be fairly specific to organizational needs, the registry-repository should allow for fine-grained access control policies that can accommodate specific needs.

Service components evolve over time for a variety of reasons, such as the need to fulfill new requirements. A service’s evolution may involve changes in its implementation and/or public interface.

A registry-repository should provide versioning capabilities that enable automatic version control of any type of service information artifact.

When a service evolves, it is important to notify its consumers of the changes to the service.

A registry-repository should provide a change notification capability that allows interested parties, such as system administrators, to create subscriptions to events within the registry-repository that may be of interest to them.
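
In essence this is a publish/subscribe mechanism. A minimal Python sketch (class and method names are illustrative, not from any registry standard):

```python
class ChangeNotifier:
    """Minimal sketch of registry change notification: interested
    parties subscribe to an event type and are called on changes."""

    def __init__(self):
        self.subscriptions = {}

    def subscribe(self, event, callback):
        self.subscriptions.setdefault(event, []).append(callback)

    def publish(self, event, detail):
        for callback in self.subscriptions.get(event, []):
            callback(detail)
```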

As IT organizations evaluate which registry-repository to deploy as part of their SOA infrastructures, the choices often fall into the following categories:

1. Proprietary registry-repository
2. UDDI registry without a repository
3. UDDI registry with a proprietary repository
4. ebXML registry-repository
5. Combination of UDDI registry and ebXML registry-repository

A UDDI registry offers a subset of capabilities offered by an ebXML Registry.

In particular, it provides only a registry and no repository.

What gets published in a UDDI registry are pointers to service artifacts such as WSDL.

What gets published to the ebXML Registry is not just pointers to service artifacts but the actual artifacts themselves.

Thus, an ebXML registry-repository can be used for governance of any type of service artifacts throughout their life cycles.

Finally, the white paper also provides a Registry Standards Comparison Matrix which compares the UDDI and ebXML Registry 3.0 standards across various categories.

Sunday, July 27, 2008

SOA Governance - Enabling Sustainable Success with SOA

This post summarizes the white paper 'SOA Governance - Enabling Sustainable Success with SOA' by webMethods, which is a must-read; the paper itself covers a lot of ground in SOA governance.

In most organizations, virtually every IT resource and process will have some level of governance associated with it in the form of policies, rules, and controls that define how a particular asset is managed and utilized, or parameters around how a certain IT function is performed.

The act of establishing and enacting these rules falls under the broad umbrella of IT governance, with the purpose being to institutionalize discipline and maturity in IT processes so as to gain greater control and economies.

SOA governance is a subset of IT governance related to establishing policies, controls, and enforcement mechanisms, within the context of the activities and constructs associated with SOA, similar to those that exist for managing and controlling other aspects of IT.

Initially, the concept of SOA governance was applied narrowly to the development and use of Web services, for example, validating the conformance of a Web service with specific standards or managing Web services in the SOA run-time environment.

The paper defines SOA Governance in the following way:

The art and discipline of managing outcomes consistent with measurable preconditions and expectations through structured relationships, procedures, and policies applied to the organization and utilization of distributed capabilities that may be under the control of different ownership domains.

Without effective SOA governance, organizations will experience some predictable challenges:

1. A fragile and brittle SOA implementation
2. Services that cannot easily be reused because they are unknown to developers or because they were not designed with reuse in mind
3. Lack of trust and confidence in services as enterprise assets, which results in a “build it myself” mentality (further compounding the lack of reuse with redundancy and unnecessary duplication of functionality)
4. Security breaches that cannot easily be traced
5. Unpredictable performance

The first requirement of SOA governance is architecture governance.

Architecture governance is necessary to ensure that SOA as architecture evolves by design and not by accident.

A key aspect of SOA architecture governance is defining a roadmap that will guide the smooth and orderly evolution of the architecture over time.

SOA exposes standalone application functionality at a fine level of granularity, thus necessitating a new form of governance: service-level lifecycle governance.

Service-level governance applies at the level of individual services and covers a wide gamut of requirements and situations.

Design-time governance is primarily an IT development function that involves the application of rules for governing the definition and creation of Web services.

Policies might include ensuring that services are technically correct and valid, and that they conform to relevant organizational and industry standards.

If an organization has an SOA governance infrastructure in place—in the form of software that facilitates the implementation of SOA governance practices—these checks can be invoked automatically when developers check services into a registry.

In addition, approval and notification workflows can be triggered by a governance-enabled registry to ensure that services pass through pre-defined review and approval steps so that they meet architectural and organizational standards for business function encapsulation, reusability, reliability, and so on.

Governance at run-time revolves around the definition and enforcement of policies for controlling the deployment, utilization, and operation of deployed services.

These run-time policies typically relate to non-functional requirements such as trust enablement, quality-of-service management, and compliance validation.

Examples of run-time governance include:

- Checking a service against a set of rules before it is deployed into production, for example, to ensure that only a certain message transport or specific schemas are used
- Securing services so that they are accessible only to authorized consumers possessing the appropriate permissions, and that data is encrypted if required
- Validating that services operate in compliance with prescribed corporate standards, in effect, to confirm that a service is not just designed to be compliant, but that its implementation is actually compliant

A more specific case of run-time governance involves service-level monitoring and reporting.

Change is inevitable and, at some point, services deployed in the run-time environment will have to be changed to adapt to new business requirements. Since the majority of services will be designed once and then modified several times over their lifespans, change-time governance, the act of managing services through the cycle of change, becomes essential.

The second section of the white paper describes the technologies behind SOA Governance.

At a basic level, an SOA governance system should facilitate service-level governance across the lifecycle from design-time to run-time to change-time. It should allow policies to be defined and created, and provide mechanisms for these policies to be enforced at each phase of the service lifecycle.

The main components of this system include:

- A registry, which acts as a central catalog of business services
- A repository, for storing policies and other metadata related to the governance of the services
- Policy enforcement points, which are the agents that enact the actual policy enforcement and control at design-time, run-time, and change-time
- A rules engine for managing the declaration of policies and rules and automating their enforcement
- An environment for configuring and defining policies and for managing governance workflows across the service lifecycle

A registry is usually identified as one of the first requirements of SOA adoption and registries play an important role in governance. In simple terms, a registry is a catalog or index that acts as the “system of record” for the services within an SOA. A registry is not designed to store the services themselves; rather, it indicates their location by reference.

As the place where services are made known within the SOA, a registry is also a natural management and governance point. For example, compliance requirements, such as conformance with the WS-I Basic Profile or the use of specific namespaces and schemas, might be imposed on services before they are allowed to be published in the registry. And as services are registered or changed, the registry can trigger approval and change notification workflows so that stakeholders are alerted to changes.

An SOA registry typically fulfills the following functions:

- Stores service descriptions, information about their end-points and other technical details that a consumer requires in order to invoke the service, such as protocol bindings and message formats
- Allows services to be categorized and organized
- Allows users to publish new services into the registry and to browse and search for existing services
- Maintains service history, allowing users to see when a service was published or changed

A governance repository should support the following capabilities:

- An information model or taxonomy for representing and storing organizational and regulatory policies that can be translated into rules enforced by the SOA governance system. It should be possible for policies and rules to be interpreted by people or machines (and sometimes both) as appropriate.
- Audit capabilities for tracking the trail of changes and authorizations applied to assets within the repository context.
- Identity management capabilities and role-based access controls to ensure that only appropriate parties have access to policies.
- A notification system and content validation capabilities to provide additional assurances that policies are well-formed, consistent, and properly applied.

In practice there are many benefits to combining both Registry and Repository into a single entity.

Implementing them as separate products creates the burden of duplicate data entry, sets up the need to synchronize information, and increases the risk of inconsistencies between the two.

The places where policies are actually applied and enforced—the policy enforcement points—change depending on the lifecycle stage. During design-time, the registry/repository itself is the point of enforcement. During run-time, policies are generally enforced by the underlying message transport system that connects service providers with consumers. Finally, during change-time, policies are typically enforced by the IT management system.

A rules engine is not strictly a requirement of an SOA governance system, but incorporating rules engine technology within the registry/repository enables a significant degree of flexibility and automation, while reducing the reliance on humans to perform mechanical governance tasks.

Saturday, July 26, 2008

SOA Governance - Introduction

This post summarizes the white paper 'SOA Governance - Introduction' by WebLayers.

Service-oriented architectures (SOAs) promise agility and organizational flexibility through a new layer of services that must be carefully created and managed.

These services are standards-based, reusable, platform independent, and easy to integrate.

The promised benefits of SOA for business include

1. Substantial IT cost reduction
2. Faster delivery on business requirements
3. Effective introduction of new and competitive business models

In moving towards SOA, companies want to

1. Ensure continuity of business operations
2. Manage security exposure
3. Align technology implementation with business requirements
4. Manage liabilities and dependencies
5. Reduce the cost of operations

In other words, SOA is about facilitating change, about gaining and leveraging agility for competitive advantage; SOA governance is about managing change to maintain that agility and to ensure that it consistently serves business objectives and delivers return on investment (ROI).

Another way to define SOA governance: it is the ability to ensure that all of the independent efforts (whether in the design, development, deployment, or operation of a service) come together to meet the enterprise's SOA requirements.

The failure to govern the evolving SOA can result in millions of dollars in costly service redesigns, maintenance, and project delays.

Gartner, Inc. says that in 2006 enterprises worldwide spent nearly $3 billion on failed and redesigned Web services projects because of poorly implemented service-oriented architectures.

Further, SOA requires a major shift in the way software is developed and deployed within enterprises. Companies will have to move from the “Develop Now, Integrate Later” view to a “Develop for Integration” paradigm. The new paradigm, technologies, and standards created to support this shift require companies to implement their SOA in a well planned, well coordinated, and effectively managed way – which raises the requirement of SOA Governance.

The following elements are required to achieve SOA governance:

1. Enterprise SOA Policies
2. Auditing & Conformance
3. Management: Track, Review & Improve
4. Integration

Policies set the goals that you use to direct and measure success. Some example policies:

- Customer name and contact information may not be transmitted as clear text
- Each message must carry information to uniquely identify the message source, destination, timestamp, and transaction ID, to meet mandatory archiving requirements
- Messages must contain an authorization token
- Password elements must be at least 6 characters long and contain both numbers and letters
- Every operation message must be uniquely identified and digitally signed
- Do not use RPC Encoded web service operations
- Do not use Solicit-Response style of operations
- Do not use XML ‘anyAttribute’ wildcards
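
Policies like the password rule above are straightforward to express as machine-checkable rules. A minimal Python sketch of that one check (the function name is illustrative):

```python
def password_policy_ok(password: str) -> bool:
    """Sample policy check: at least 6 characters, containing
    both letters and numbers."""
    return (len(password) >= 6
            and any(c.isalpha() for c in password)
            and any(c.isdigit() for c in password))
```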

Policies should not be left to documentation. Policies should be an active part of the operations of companies. Following the policy definition stage, policies should be put to work to detect, analyze, and audit compliance.

Friday, July 25, 2008

Passport vs OpenID vs Facebook Connect

A nice post by Dick Hardt, read it here...

Thursday, July 24, 2008

Introduction To Digital Identity

Wednesday, July 23, 2008

MySpace confirms OpenID support

MySpace announced yesterday that it will support OpenID by acting as an OpenID Provider.

This is the first leading social networking site to announce its support for OpenID.

Hope, Facebook will also follow...

Further details available here...

Deploying WSO2 Identity Solution in production with custom certificates

The WSO2 Identity Solution comes with a certificate for 'localhost' signed by a sample CA.

In a production environment you need to setup your own certificate to work with the WSO2 Identity Solution and this post explains how to do it.

These are the steps you need to follow:

1. Create a private/public key pair for your server [say, identity-provider]
2. Create a sample CA
3. Get your public key signed by your CA
4. Download and install WSO2 Identity Solution
5. Configure Identity Solution to use your certificate for identity-provider

In this case, we'll be creating the certificate for the host name 'identity-provider'. To test this scenario as it is, please add the following entry to the C:\windows\system32\drivers\etc\hosts file (assuming the server runs on your local machine):

 identity-provider

We use OpenSSL to build the required CA infrastructure. For Windows, you can download Win32 OpenSSL v0.9.8g from here. Once installed, make sure you add C:\OpenSSL\bin [i.e., [INSTALLED_LOCATION]\bin] to the PATH environment variable.

Create a folder "keystore" locally and inside that folder create two sub folders, "CA" and "IS".

From the "keystore" folder,

:\> cd IS

Creating private/public key pair for the server.

:\> keytool -genkey -alias identity-provider -keyalg RSA -sigalg MD5withRSA -keysize 1024 -dname "CN=identity-provider,L=SL,S=WS,C=LK" -keypass wso2is -keystore wso2is.jks -storepass wso2is

Creating a certificate signing request.

:\> keytool -certreq -v -alias identity-provider -file ../CA/csr.pem -keypass wso2is -storepass wso2is -keystore wso2is.jks

Building the CA infrastructure.

:\> cd ../CA

Creating CA public/private key pair - you need to give a password when requested.

:\> openssl req -x509 -newkey rsa:1024 -md5 -keyout wso2cakey.pem -out wso2cacert.crt

Signing the server certificate.

:\> openssl x509 -req -days 365 -md5 -in csr.pem -CA wso2cacert.crt -CAkey wso2cakey.pem -CAcreateserial -out ../IS/iscert.crt

:\> cd ../IS

Importing CA public certificate to the server keystore.

:\> keytool -import -alias wso2ca -file ../CA/wso2cacert.crt -keystore wso2is.jks -storepass wso2is

Importing the signed server certificate to the server keystore.

:\> keytool -import -alias identity-provider -file iscert.crt -keystore wso2is.jks -storepass wso2is -keypass wso2is

At the end of this process, you'll end up with a keystore, wso2is.jks, at [keystore]\IS. The password we provided for this keystore and its private key is wso2is.

Now let's download the WSO2 Identity Solution from here and unzip it to a local location.

You also need to download Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files 5.0 from here and copy the two jar files from the extracted jce directory (local_policy.jar and US_export_policy.jar) to $JAVA_HOME/jre/lib/security.

Now copy [keystore]\IS\wso2is.jks to [IS_UNZIPPED_LOCATION]\conf and replace the existing one.

Open the file [IS_UNZIPPED_LOCATION]\conf\server.xml, search for 'localhost', and replace all occurrences with 'identity-provider'.

That's all you need to do to get this working.

In any case, the following section in the same file is worth a look.

  <!-- Keystore file location-->
  <!-- Keystore type (JKS/PKCS12 etc.)-->
  <!-- Keystore password-->
  <!-- Private Key alias-->
  <!-- Private Key password-->

Now, you can start the server with [IS_UNZIPPED_LOCATION]\bin\wso2is.bat.

Just type https://identity-provider:12443 to access the Identity Provider home page.

You may see the browser display a warning here - that is because our CA is not trusted by the browser. To avoid it, simply add our CA certificate to the browser's trusted CA certificate store.

Tuesday, July 22, 2008

Facebook redesigned..!

Facebook's new design went live yesterday.

It is a much cleaner UI than its predecessor.

Further details available here.

Monday, July 21, 2008

Deploying WSO2 Identity Solution over an existing MySQL user store

WSO2 Identity Solution can be used as an Information Card provider as well as an OpenID Provider.

This post explains how you can customize WSO2 Identity Solution to expose an existing user base residing on a MySQL database - and facilitate them with Information Cards and OpenID logins.

Let me further explain this scenario.

You have a set of users with a set of attributes defined for each.

Now the requirement is this: your company wants you to assign each of your users an OpenID and also run an OpenID Provider yourself - with minimal changes to the existing system.

I'll explain everything you need to know here in a step-by-step manner.

Setting up the existing environment

- Download WampServer 2.0 from here and install it locally.

- Start the wampserver and run MySQL service.

- Add [WAMP_INSTALLED_LOCATION]\bin\mysql\mysql5.0.51b\bin to the PATH env variable.

:\>mysqladmin -u root password mysql

:\> mysql -u root -p

[type your password : mysql]



mysql>CREATE TABLE `users` (`uid` varchar(60) NOT NULL,`name` varchar(60) NOT NULL,`pass` varchar(32) NOT NULL,`mail` varchar(64) ,`openid` varchar(60) NOT NULL, `firstName` varchar(60) NOT NULL,`lastName` varchar(60) NOT NULL,PRIMARY KEY (`uid`));

mysql>INSERT INTO users VALUES ('prabath','prabath','prabath','','http://localhost:12080/user/prabath','prabath','siriwardena');


Now we are done with setting up the existing environment.

You may have already noticed that for my convenience I created the 'users' table with an 'openid' column - which you may not have in your existing 'users' table. In that case you need to alter the table 'users', add the new column 'openid' and populate that column with values derived from the 'uid' column - which will create unique OpenIDs for all your users.
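As a sketch of that derivation, the new openid value is simply the OpenID Provider's user endpoint plus the uid. The base URL below matches the sample row inserted above; in your deployment it would be your own OP's user URL prefix:

```java
public class OpenIdMigrationSketch {
    // Derive a unique OpenID for an existing user from the uid column value.
    // "http://localhost:12080/user/" matches the sample 'users' row above;
    // replace it with your OpenID Provider's user URL prefix.
    static String openIdFor(String uid) {
        return "http://localhost:12080/user/" + uid;
    }

    public static void main(String[] args) {
        System.out.println(openIdFor("prabath"));
    }
}
```

The equivalent UPDATE statement against the 'users' table would populate the new column for all existing rows in one pass.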

Building & deploying WSO2 Identity Solution from source

- Download the latest code from the SVN repo:

- Then, run the following from the root directory (say [Identity]) of the downloaded code.

[Make sure you have installed Maven2]

:\> mvn -Drelease clean install

- The above will create a zip file distribution at [Identity]\modules\distribution\target.

- Unzip the Zip file to a local folder.

- Download MySQL JDBC driver from here and copy the mysql-connector-java-5.1.6-bin.jar to [IS_INSTALLED_DIR]\lib

- You also need to download Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files 5.0 from here and copy the two jar files from the extracted jce directory (local_policy.jar and US_export_policy.jar) to $JAVA_HOME/jre/lib/security.

- Start WSO2 Identity Solution with [IS_INSTALLED_DIR]\bin\wso2is.bat

Configuring WSO2 Identity Solution to use MySQL user store

- Go to url : https://localhost:12443/admin and login with admin/admin [user/password] - then select 'User Stores'

- Click 'sampleRealm' link [Here we are using the JDBCRealm to connect to the MySQL database].

- Click 'Edit'

- Set the following properties appropriately and update.

UserCredentialColumn : pass
ConnectionPassword : mysql
ConnectionUserName : root
ColumnNames : mail,openid,firstName,lastName
DriverName : com.mysql.jdbc.Driver
UserNameColumn : uid
ConnectionURL : jdbc:mysql://localhost/COMPANY_DB
UserTable : users

- Click 'Set as Default' against 'sampleRealm'.

- Click on 'Define Claims' and select 'Given name', 'Surname' & 'Email address' [Don't uncheck any claims which are already selected]

- Click on 'Claim Mappings'.

- Click on 'Given name','Surname','Email address' and 'OpenID', and do the claim mapping appropriately.

- Once the claim mapping is done, it should look like the following.

- Try logging in to the Identity Solution with your credentials from the MySQL database [in our case prabath/prabath] - go to the url: https://localhost:12443

- To test your OpenID [http://localhost:12080/user/prabath], sign out first and, from the Home page [https://localhost:12443], click on OpenID and then type your OpenID.

You can find more documentation on WSO2 Identity Solution from here.

Sunday, July 20, 2008

CardSpace private desktop

Windows desktops provide isolation from code running on other desktops.

Once you login to Windows, what you see is the default desktop - where users run their applications.

Lets start with the most familiar private desktop you see on a Windows environment.

Just press Ctrl+Alt+Del - here comes the 'winlogon' desktop, which is a private desktop, isolated from the applications running on the default desktop.

Switching to the winlogon private desktop makes it more difficult for malicious applications running on the default desktop to steal sensitive information.

Now, lets go back to the subject.

Once the CardSpace pops up for card selection - it also creates a private desktop.

It looks like your machine is frozen - even the Windows clock appears to have stopped.

Actually, what you see behind the Identity Selector is an image of your default desktop, captured at the time the Identity Selector was invoked and set as the background image of the CardSpace private desktop.

For the reasons mentioned above, we get the following benefits by running the Identity Selector on a private desktop.

1. Protection for users when they enter confidential data while using Managed Information Cards.

2. Malicious applications running on the default desktop cannot access the Identity Selector to capture information regarding user's card usage.

CardSpace does not run on a private desktop all the time.

In some cases, CardSpace UI also runs on the default desktop.

Say for example, once the CardSpace pops up, click the link 'Restore Cards' and then the 'Browse' button.

This action will switch the user from CardSpace private desktop to the default desktop.

But even in this case the user won't feel that he's moving away from the private desktop - a trick used to give the user a consistent experience by setting a 'faded desktop' image in the background.

Private desktops will, in most cases, protect you from malicious applications, but you are still exposed to hardware-based attacks such as external keyloggers, which can intercept your keystrokes.

Mashup Server ready to ship with OpenID support

The latest WSO2 Mashup Server release, 1.5, ships next Monday with OpenID support.

OpenID relying party support on Mashup Server is powered by WSO2 Identity Solution relying party components.

Further details on this new release are available here.

Saturday, July 19, 2008

OpenID with PAPE in plain English

This post discusses how PAPE works and demonstrates its usage with WSO2 Identity Solution.

[You may also read this blog post by Nandana on "OpenID, Phishing & PAPE, Are we there yet? "]

Let me first explain what PAPE is.

PAPE stands for OpenID Provider Authentication Policy Extension - which is an extension to the OpenID Authentication.

An extension to OpenID Authentication is a protocol that "piggybacks" on the authentication request and response. Extensions are useful for providing extra information about an authentication request or response as well as providing extra information about the subject of the authentication response.

With PAPE, an OpenID Relying Party can add additional information into the OpenID Authentication request - such as;

1. preferred_auth_policies
2. max_auth_age

Let me explain what each one of them means.

With preferred_auth_policies, an RP can attach zero or more authentication policy URIs that the OP SHOULD conform to when authenticating the user. If multiple policies are requested, the OP SHOULD satisfy as many as it can.

Let me make this much clearer.

If the RP wants its users to be authenticated in a phishing-resistant manner, then the RP will attach the corresponding policy URI as preferred_auth_policies.

If the RP wants its users to be authenticated in both a phishing-resistant manner and a multi-factor way, then the RP will attach both policy URIs as preferred_auth_policies.

One thing I want to emphasize here...

The fact that the RP requested the OP to authenticate the user in a particular manner does not mean the OP will follow that exact authentication policy.

In other words, an RP could request the OP to authenticate its users in a phishing-resistant manner - but if the OP does not support phishing-resistant authentication, it will simply authenticate the user with an available method. The OP will, however, let the RP know which method it used to authenticate the user. So it becomes the RP's decision whether to let the user in or not.
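To make this concrete, here is a minimal Java sketch of the extra parameters an RP adds to the authentication request. The namespace and policy URIs are taken from the PAPE specification (verify them against the PAPE draft/version your OP supports); in practice a library such as openid4java would assemble the message for you.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PapeRequestSketch {
    public static void main(String[] args) {
        // URIs as defined in the PAPE specification (assumed PAPE 1.0 values)
        String papeNs = "http://specs.openid.net/extensions/pape/1.0";
        String phishingResistant =
            "http://schemas.openid.net/pape/policies/2007/06/phishing-resistant";

        Map<String, String> params = new LinkedHashMap<>();
        params.put("openid.ns.pape", papeNs);
        params.put("openid.pape.preferred_auth_policies", phishingResistant);
        // Optional: require an active authentication within the last hour
        params.put("openid.pape.max_auth_age", "3600");

        for (Map.Entry<String, String> e : params.entrySet()) {
            System.out.println(e.getKey() + "=" + e.getValue());
        }
    }
}
```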

Let's see how this works in a practical scenario.

We have hosted the WSO2 Identity Solution at and the PAPE demonstration is available at

Once you are at the demo site, find the section - "OpenID PAPE Demo" and type your Yahoo OpenID there.

Select "" as your authentication policy.

In this case an OpenID RP sends a PAPE request to an OP which does not support PAPE [Yes, Yahoo still does not support PAPE].

This is what you get as the response.

Authentication Policies: none
NIST Auth Level: 0
Auth Age: -1

For the time being let's only focus on "Authentication Policies" - here the Yahoo OP returns no policies. That is, Yahoo has ignored the PAPE request from the RP. So now the RP can decide whether to let the user in or not.

Let's try another example. This time we use an OpenID from WSO2 OpenID Provider. You can go there, register yourself and get an OpenID.

WSO2 OpenID Provider supports login with both the username/password and Information Card based logins.

First, directly log in to the OP and then register a self-issued Information Card with the OP. We'll be using this Information Card later on to log in.

Once you are at the demo site, find the section - "OpenID PAPE Demo" and type your WSO2 OpenID [] there.

Select "" as your authentication policy.

In this case an OpenID RP sends a PAPE request to an OP which supports PAPE.

So, once you are redirected to the OP for authentication, log in with your registered Information Card.

You'll get the following as the PAPE response.

Authentication Policies:
NIST Auth Level: 1
Auth Age: -1

This indicates you've been authenticated in a phishing-resistant manner.

PAPE by no means limits you to the following three authentication policies.


Additional policies can be specified elsewhere and used between OPs and RPs.

For example, myOpenID defines its own policy URI for its CallVerifID. In this post I blogged about how CallVerifID works.

I hope it's much clearer now how PAPE works.

There are a few things I skipped during the discussion.

Let's go back to them.

In a PAPE request, the RP can also add the parameter "max_auth_age".

This is an optional parameter in the PAPE request, which the RP may or may not send.

Once max_auth_age is set in the PAPE request, if the End User has not actively authenticated to the OP within the specified number of seconds [max_auth_age] in a manner fitting the requested policies, the OP SHOULD authenticate the End User for this request.

Let's go back to the PAPE response. I skipped explaining two parameters, NIST Auth Level and Auth Age.

If the RP's request included the "max_auth_age" parameter then the OP MUST include "auth_time" [Auth Age] in its response. If "max_auth_age" was not requested, the OP MAY choose to include "auth_time" in its response or just send "-1" as the value.

The NIST Auth Level is the Assurance Level as defined by the National Institute of Standards and Technology (NIST) corresponding to the authentication method and policies employed by the OP when authenticating the End User.

This value varies from 0 to 4 (inclusive).
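Putting the response parameters together, an RP's accept/reject decision can be sketched as below. The policy URI is from the PAPE specification; the acceptance thresholds are illustrative assumptions, not part of the spec:

```java
public class PapeResponseCheck {
    // Policy URI from the PAPE specification (assumed value)
    static final String PHISHING_RESISTANT =
        "http://schemas.openid.net/pape/policies/2007/06/phishing-resistant";

    /** RP-side decision: accept the login only if the OP reports a
     *  phishing-resistant authentication that is recent enough. */
    static boolean acceptLogin(String authPolicies, int nistAuthLevel,
                               long authAge, long maxAuthAge) {
        boolean phishingResistant = authPolicies != null
                && authPolicies.contains(PHISHING_RESISTANT);
        // authAge of -1 means the OP did not report auth_time
        boolean recentEnough = maxAuthAge < 0
                || (authAge >= 0 && authAge <= maxAuthAge);
        return phishingResistant && nistAuthLevel >= 1 && recentEnough;
    }

    public static void main(String[] args) {
        // Yahoo-style response from the post: no policies, NIST level 0, age -1
        System.out.println(acceptLogin("none", 0, -1, -1));
        // WSO2 OP response from the post: phishing-resistant, NIST level 1
        System.out.println(acceptLogin(PHISHING_RESISTANT, 1, -1, -1));
    }
}
```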

Friday, July 18, 2008

Moving to GoDaddy...!

I own three domain names,, & - all are managed by Yahoo!.

I bought all of these three years back for $1.99 each.

But, it seems now Yahoo! has increased the charges to $34.95 per year.

GoDaddy still charges $6.99 per year, and when you transfer your domain name from anywhere else to GoDaddy, you keep the time left on your existing registration plus a free 1-year extension.

This looks cool and I am transferring all my domain names to GoDaddy.

Thursday, July 17, 2008

Firefox 3 - Sinhala localized build

A press conference was held at Cinnamon Grand to announce the release of Sinhala Firefox 3 - more details are available here.

Sinhala Firefox 3 is available to download from here.

iPhone 3G Unlocked by Hackers from Brazil

Read more here...

Wednesday, July 16, 2008

Deploying Axis2 on GlassFish

Download GlassFish application server from here.

Go to [UNZIPPED_LOCATION]\glassfishv3-tp2\bin

:\> asadmin start-domain domain1

Go to the url http://localhost:8080/admin

Login as anonymous/[empty password]

Download the Axis2 WAR distribution from here.

Click the link, "Deployment\Deploy Web Application", and point the WAR location to the downloaded axis2.war.

Hit the link, http://localhost:8080/axis2/ and click on "validate" - you should see the "Axis2 Happiness Page".

This blog post by Charitha explains how to deploy Apache Axis2 on Resin and JBoss application servers.

Using WSO2 ESB with FIX - Supporting Financial Messaging

Tuesday, July 15, 2008

Let the rest discover your OpenID relying party

Let me first explain what OpenID Relying Party [RP] discovery is and what it is for.

This is a new feature introduced in OpenID Authentication 2.0.

With RP discovery, you let software agents/OpenID Providers discover your site as an OpenID relying party.

OpenID providers use this feature to automatically verify that a return_to URL in an OpenID request is an OpenID relying party endpoint for the specified realm.

Have you ever seen this warning by Yahoo! when trying to use a Yahoo OpenID?

"Warning: This website does not meet Yahoo!'s requirements for website address. Do not share any personal information with this website unless you are certain that it is legitimate. "

This happens because the relying party web site fails to meet OpenID RP discovery requirements.

As per the spec, the RP has to present an XRDS document in the following format, at a location where the OpenID Provider can discover it.

<Service xmlns="xri://$xrd*($v*2.0)">
The Yahoo OpenID Provider tries to find this XRDS document at the return_to url [the return_to url is included in the OpenID authentication request itself].
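For reference, a complete XRDS document of that shape looks roughly like the following. The service type URI is the one OpenID Authentication 2.0 defines for return_to verification; the <URI> value here is a placeholder for your own return_to endpoint:

```xml
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD>
    <Service xmlns="xri://$xrd*($v*2.0)">
      <Type>http://specs.openid.net/auth/2.0/return_to</Type>
      <URI>http://www.example.com/openidloggedin.jsp</URI>
    </Service>
  </XRD>
</xrds:XRDS>
```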

So, make sure you have RP discovery information available at your return_to url.

This is how you do it.

Say for example, if your return_to url is, when you set it in the OpenID authentication request, you need to set it as below, with an added parameter.

Also, you need to set your realm as

If you are using WSO2 OpenID Relying Party components, this is how you set your return_to url and the realm in the authentication request.

[This article explains how to add OpenID support to your RP web site with WSO2 OpenID RP components; please refer to the section "Adding OpenID Support with Simple Registration"]


Now, you can differentiate a 'login' request from an 'RP discovery' request.

Your openidloggedin.jsp page will have the logic to present the XRDS document for RP discovery, based on the request.

<%@page import=""%>
<%
String login = (String) request.getParameter("login");

if (login == null) {
    // RP discovery request - return the XRDS document
    String xrd = "<xrds:XRDS xmlns:xrds=\"xri://$xrds\" xmlns:openid=\"\" xmlns=\"xri://$xrd*($v*2.0)\">\n" +
        "<Service xmlns=\"xri://$xrd*($v*2.0)\">\n";

    PrintWriter writer = response.getWriter();
    writer.write(xrd);
} else {
    // User logs in... add your logic appropriately
}
%>

To see a demonstration of how this works, go to, and type your Yahoo OpenID at "OpenID Simple Registration Demo".

WSO2 Beefs Up SOA Identity Solution

Read the complete article on ebizQ...

Monday, July 14, 2008

Deploying your OpenID relying party behind a proxy

This post discusses how you can deploy your OpenID relying party behind an Apache front-end, which acts as a reverse proxy.

First, let's configure Apache to act as a reverse proxy. I assume your Apache server is running at identity-rp:12081 and your web application is running on Tomcat at http://localhost:12080/javarp. If you have different settings, please adjust accordingly.

Do the following changes in the httpd.conf.

LoadModule proxy_module modules/
LoadModule proxy_http_module modules/
LoadModule proxy_connect_module modules/

ProxyRequests Off
ProxyPreserveHost On

ProxyPass /javarp http://localhost:12080/javarp

<Location /javarp/>
     ProxyPassReverse /
     SetOutputFilter proxy-html
     RequestHeader unset Accept-Encoding
</Location>
Now let's download the latest code from the SVN repo:

Then, run the following from the root directory (say [Identity]) of the downloaded code.

[Make sure you have installed Maven2]

:\> mvn -Drelease clean install

You need to take the following two jars from the build and copy them to your classpath.


This article explains how you can develop an OpenID Relying Party web site with WSO2 OpenID RP components. Please refer to the section "Adding OpenID Support with Simple Registration".

You also need to do the following changes in addition to what is mentioned in the above document.

Set the return_to url;


Add the following to the web.xml of your web application.


All done and now you are set to run your web application.

Start both the Apache and your Tomcat servers and hit the url http://identity-rp:12081/javarp to access your web application.

Saturday, July 12, 2008

Next generation...

My sister's son, little Thamindhu, one year old, today...

Budusaranai... Puthata...

Friday, July 11, 2008

Microsoft SQL Server Data Services [SSDS]

This is a nice webcast on introducing SSDS.

MSDN July/2008 issue also has a good article on the subject.

Microsoft releases "Zermatt" Developer Framework for claims-based identity

Microsoft very recently released "Zermatt" Developer Framework for claims-based identity.

Zermatt is a set of .NET Framework classes; it is a framework for implementing claims-based identity in your applications. This can be used in any web application or web service that uses the .NET Framework version 3.5.

Zermatt helps you build externalized authentication capabilities for "relying party" applications and custom "identity providers" (STSes).

The Zermatt beta is available to download from here.

Zermatt also requires .NET 3.5 to be installed. It has been verified on Windows 2K3 SP2 with IIS 6.0, Windows Vista SP1, and Windows Server 2008 with IIS 7.0.

Thursday, July 10, 2008

Building & Deploying mod_cspace on Windows

mod_cspace is an Apache HTTPD module for processing Information Card based logins, which can be used with any web application that is hosted with Apache HTTPD.

This has a binary distribution for Ubuntu, but NOT for Windows.

This post explains everything you need to know to build mod_cspace for Windows from source.

I am using Visual Studio 2008 Express Edition to do the build and the IDE can be freely downloaded from here.

Also, make sure you have .NET Framework 3.5 installed on your machine, as well as IE 7.

First we need to download the latest code from the SVN repository. You may use TortoiseSVN client for this, which is freely available from here.

Download all the code from the SVN repository to your local repository [LOCAL_REPO].

Now share the folder [LOCAL_REPO]\build\win32\vc\lib and map the network drive 'W' to this shared folder.

The solution file [mod_cspace.sln] is available at [LOCAL_REPO]\build\win32\vc\apache2 - double-click the file to open it with VS 2008 Express Edition.

Add the following to your PATH env variable.


Add the following to your CLASSPATH env variable.


Now do the build in Debug mode with VS 2008 Express Edition.

You'll find mod_cspace.dll in [LOCAL_REPO]\build\win32\vc\apache2\Debug.

With this we complete building the module on Windows.

Now we need to configure SSL on WAMP. Please strictly follow the exact steps [with exact folder names and key names] in my previous post to do this. Don't miss a single step there.

Let's deploy our module in WAMP, now.

Copy mod_cspace.dll to c:\wamp\bin\apache\apache2.2.8\modules.

Now, let's edit httpd.conf [c:\wamp\bin\apache\apache2.2.8\conf]

Add the following to the file..

LoadModule cspace_module modules/mod_cspace.dll

#cspace_module configurations
<IfModule cspace_module>

#Make sure you give the absolute path here to cscafile
CardSpaceCAFile "c:/wamp/bin/apache/apache2.2.8/conf/cscafile"

#Enable Cardspace login for php-sample web application
<Location /php-sample/>
</Location>
</IfModule>

You are still missing two things.

- Download cscafile from here and copy it to c:\wamp\bin\apache\apache2.2.8\conf\.

- Download php-sample folder from here and copy it to c:\wamp\www.

All set... we are ready to GO...!!!

Start Apache server [if it is already running, stop and start] and type the URL https://identity-rp:12444/php-sample on your browser.

Okay... then... how do I know this works ???

We need to test our relying party web site with an Identity Provider.

Let's download WSO2 Identity Solution from here.

Unzip the downloaded ZIP file to a local folder [say [IS]].

Setting up the Identity Solution takes no more than 5 minutes... please follow the steps given here.

Startup the Identity Solution and go to the link https://localhost:12443 .

There you can register yourself and sign in. Once signed in, you can download an Information Card. This guide, a very short one, explains everything you need to know.

Now, you are almost done. But we still need to tell our Identity Provider that we trust php-sample as a relying party web site.

To do that I need to upload the certificate of this RP to my Identity Provider [IdP]. How to register a RP certificate with the IdP is explained here [look for "How to register your trusted Relying Party? "].

Still you have a question, I guess. How do I get the certificate of my php-sample [RP] ???

On IE 7, when you are at https://identity-rp:12444/php-sample - just right click the page --> Properties --> Certificates --> Details --> Copy to File --> Select 'DER' format --> Give a file name [e.g. site.cer] and save the certificate.

Are we done now? Almost... but there is still something I skipped.

Remember the file "cscafile" ??? - which you downloaded from here.

This file contains public certificates of all the Identity Providers, who are accepted by the RP web site.

In this case you don't need to do anything with this file - I have already added the default public certificate of our Identity Provider, which ships with the Identity Solution.

But, in case you want to make this work with any other IdP, you need to get its public certificate and add it to the cscafile file.

This is how you do it.

On IE 7 go to the IdP site --> just right click the page --> Properties --> Certificates --> Details --> Copy to File --> Select 'Base-64 Encoded' format --> Give a file name [e.g. site.cer] and save the certificate --> Open the saved certificate in notepad --> Copy and paste its content to cscafile.

Okay, finally we are done.

Hit the url, https://identity-rp:12444/php-sample and click the link "Login to this site" to initiate the InfoCard login.

Enabling SSL on WAMP

This step-by-step guide explains how you can enable SSL on WAMP.

1. Download WampServer 2.0 from here and install it to the default location [c:\wamp].

2. Now, we need to have a private/public key pair as well as a CA to sign our public key.

First, let's see how we can create a private/public key pair.

keytool -genkey -alias rpcert -keyalg RSA -keysize 1024 -dname "CN=identity-rp,L=SL,S=WS,C=LK" -keypass wso2key -keystore rpkeystore.jks -storepass wso2key

This will create a keystore [rpkeystore.jks] with public/private key pair.

My previous post explains how you can export your private key from the keystore. Just follow the steps given there and you'll end up with a file server.key, which is your private key.

Now, we need to sign our public certificate with a CA.

This requires us to create a sample CA - the following explains how to do that.

Here we use OpenSSL to build the required CA infrastructure. For Windows you can download Win32 OpenSSL v0.9.8g from here.

Once installed make sure you add C:\OpenSSL\bin [i.e [INSTALLED_LOCATION]\bin] to the PATH env variable.

openssl req -x509 -newkey rsa:1024 -keyout cakey.pem -out cacert.crt

The above will create a public/private key pair for our sample CA.

Now, we need to create a certificate signing request for our server.

Go to the folder where you created the keystore [rpkeystore.jks] and issue the following command.

keytool -certreq -v -alias rpcert -file csr.pem -keypass wso2key -storepass wso2key -keystore rpkeystore.jks

Now copy the csr.pem to the folder where you generated keys for the CA and issue the following command from there.

openssl x509 -req -days 365 -in csr.pem -CA cacert.crt -CAkey cakey.pem -CAcreateserial -out server.crt

By now we have all the required files.

cacert.crt --> CA public certificate
server.crt --> Server public certificate signed by the CA
server.key --> Server private key.

Copy all the above three files to c:\wamp\bin\apache\apache2.2.8\conf [assuming you installed WAMP to the default location].

Also edit the c:\WINDOWS\system32\drivers\etc\hosts file and add the following entry, mapping 'identity-rp' to your local machine: identity-rp

If you recall, when creating the public certificate for our server, we created it for the host name identity-rp.

3. Edit httpd.conf [C:\wamp\bin\apache\apache2.2.8\conf]

Uncomment the following two lines.

LoadModule ssl_module modules/

Include conf/extra/httpd-ssl.conf

Find Listen 80 and change it to Listen 12081 - that is our server is running on port number 12081.

Find ServerName and set it to ServerName identity-rp:12081.

4. Edit httpd-ssl.conf [C:\wamp\bin\apache\apache2.2.8\conf\extra]

Set Listen identity-rp:12444 - we are listening to port 12444 for secure communication.

Set <VirtualHost _default_:12444>

Set DocumentRoot "C:/wamp/www/"

Set ServerName identity-rp:12444

For the entire file find "C:/Program Files/Apache Software Foundation/Apache2.2" and replace with "C:/wamp/bin/apache/apache2.2.8".

Find SSLCertificateFile and set SSLCertificateFile "C:/wamp/bin/apache/apache2.2.8/conf/server.crt"

Find SSLCertificateKeyFile and set SSLCertificateKeyFile "C:/wamp/bin/apache/apache2.2.8/conf/server.key"

Find SSLCACertificateFile and set SSLCACertificateFile "C:/wamp/bin/apache/apache2.2.8/conf/cacert.crt"

5. Edit php.ini [C:\wamp\bin\apache\apache2.2.8\bin]

Uncomment the line extension=php_openssl.dll

6. Now we are done - do a syntax check and start the apache server.

:\> cd C:\wamp\bin\apache\apache2.2.8\bin
:\> httpd -t
:\> httpd --start

7. Type https://identity-rp:12444 in your browser - you'll see a certificate error in the browser; to avoid it, install the CA certificate in your browser.

Exporting keystore private key with WSAS

The Java keytool does not provide an easy way of exporting a keystore's private key.

The following is an alternative way to do it with WSAS.

First download WSAS from here.

Now, we need to create a keystore.

keytool -genkey -alias rpcert -keyalg RSA -keysize 1024 -dname "CN=identity-rp,L=SL,S=WS,C=LK" -keypass wso2key -keystore rpkeystore.jks -storepass wso2key

The above will create a keystore with the name rpkeystore.jks having wso2key as the keystore password and wso2key as the private key password.

Now, let's see how we can export our private key from the keystore we just created, using WSAS.

1. Start WSAS, type https://localhost:9443 in your browser, and sign in with admin/admin [user/password].


2. View available keystores

3. Upload your keystore [just created] - keystore password : wso2key

4. Private key password: wso2key

5. Click to finish

6. Click on the keystore rpkeystore.jks

7. Copy and paste your private key to a new file called server.key and that is your exported private key from the keystore.
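If you'd rather do it programmatically, a private key's encoded bytes can be written out in PEM form. The sketch below generates a fresh RSA key to stay self-contained; with the keystore above you would instead obtain the key via KeyStore.getKey("rpcert", "wso2key".toCharArray()). This emits a PKCS#8 PEM - whether that matches WSAS's output byte-for-byte is an assumption, but Apache accepts PEM-encoded keys.

```java
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.util.Base64;

public class ExportPrivateKey {
    public static void main(String[] args) throws Exception {
        // Self-contained stand-in for the key you would load from rpkeystore.jks
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(1024);
        PrivateKey key = kpg.generateKeyPair().getPrivate();

        // getEncoded() returns the PKCS#8 DER bytes; wrap them as PEM
        String pem = "-----BEGIN PRIVATE KEY-----\n"
            + Base64.getMimeEncoder(64, "\n".getBytes()).encodeToString(key.getEncoded())
            + "\n-----END PRIVATE KEY-----";
        System.out.println(pem.split("\n")[0]);
    }
}
```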

Thursday, July 3, 2008

Demand OpenID...!!!

Demand OpenID support from websites you sign in to every day, using a simple bookmarklet.

More info available here...

Web services security: Encryption with Rampart

This post discusses how to secure your web service with Rampart - through message encryption.

1. Setting up the environment

- Make sure you have installed Java and set your PATH env variable to C:\Program Files\Java\jdk1.5.0_06\bin [i.e : JAVA_HOME\bin]

- Download Axis2 1.4 - unzip and set the AXIS2_HOME env variable

- Download Rampart 1.4

- Copy [RAMPART_HOME]\lib\*.jar files to [AXIS2_HOME]\lib

- Copy [RAMPART_HOME]\modules\*.mar files to [AXIS2_HOME]\repository\modules

- Download Bouncy Castle jar and copy it to [AXIS2_HOME]\lib

- Download Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files 5.0 from here and copy the two jar files from the extracted jce directory (local_policy.jar and US_export_policy.jar) to $JAVA_HOME/jre/lib/security.

- Add all the jars inside [AXIS2_HOME]\lib to your CLASSPATH [I have created a batch file for this - so you can skip this step if you want]

2. Create keystores for the service and the client

- Assume you have a folder c:\keystores and two subfolders c:\keystores\service and c:\keystores\client

- For encryption, the client will use the service's public key and the service will use the client's public key.

- For decryption, the client will use its own private key and the service will use its own private key.

- So we need to have two key pairs for both the service and the client.

- Let's first create a key pair for the service and store it in a keystore.

\> cd c:\keystores\service
\> keytool -genkey -alias service -keyalg RSA -keysize 1024 -keypass servicekey -keystore service.jks -storepass servicestorekey

- This will create a keystore "service.jks" inside c:\keystores\service

- Password of the service.jks is "servicestorekey" [-storepass servicestorekey]

- Password of the private key of the service is "servicekey" [-keypass servicekey]

- Now let's create a key pair for the client and store it in a different keystore.

\> cd c:\keystores\client
\> keytool -genkey -alias client -keyalg RSA -keysize 1024 -keypass clientkey -keystore client.jks -storepass clientstorekey

- This will create a keystore "client.jks" inside c:\keystores\client

- Password of the client.jks is "clientstorekey" [-storepass clientstorekey]

- Password of the private key of the client is "clientkey" [-keypass clientkey]

- Now we are done creating two keystores with public/private key pairs for our service and the client

- But... as I mentioned earlier, the client needs to know the public key of the service and the service needs to know the public key of the client.

- So we need to export the public key of the service from the service.jks and import it to the client.jks

\> cd c:\keystores\service
\> keytool -alias service -export -keystore service.jks -storepass servicestorekey -file servicepublickey.cer
\> cd c:\keystores\client
\> keytool -import -alias service -file ../service/servicepublickey.cer -keystore client.jks -storepass clientstorekey

- Now we need to export the public key of the client from the client.jks and import it to the service.jks

\> cd c:\keystores\client
\> keytool -alias client -export -keystore client.jks -storepass clientstorekey -file clientpublickey.cer
\> cd c:\keystores\service
\> keytool -import -alias client -file ../client/clientpublickey.cer -keystore service.jks -storepass servicestorekey

- Now we are done and we have two keystores.

1. c:\keystores\client\client.jks
2. c:\keystores\service\service.jks

3. Write and deploy the service

- Download the sample and extract it from here.

- Copy c:\keystores\service\service.jks and [rampart-sample]\service\ to [AXIS2_HOME]

- Build the service

[rampart-sample]\> classpath.bat
[rampart-sample]\> cd service
\> javac org/apache/rampart/samples/sample05/*.java
\> jar cvf SimpleService.aar *

- Copy the [rampart-sample]\service\SimpleService.aar to [AXIS2_HOME]\repository\services

- Run Axis2 simple server [[AXIS2_HOME]\bin\axis2server.bat]

- Type http://localhost:8080/axis2/services/ in the browser - you will see SimpleService being deployed.

Now, let's highlight some of the Rampart-related configuration we used while creating the service.

The service should ideally use the public key of the client to encrypt the messages it sends, and use its own private key to decrypt the messages it receives.

By now we know both these keys are in the service.jks keystore, and we have copied it to the service classpath - where the service can pick up the required keys.

All the configuration properties related to the service.jks are written to the property file [AXIS2_HOME]\

If you open the file, you'll see the following two properties there.

Here we specify the name of the keystore to use and the password to access it [this is the -storepass value we gave while creating the keystore].
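For reference, a Rampart/WSS4J crypto property file of this kind typically looks like the sketch below. The property names come from WSS4J's Merlin crypto provider; the exact file name and contents in your download may differ:

```properties
org.apache.ws.security.crypto.provider=org.apache.ws.security.components.crypto.Merlin
org.apache.ws.security.crypto.merlin.keystore.type=JKS
# the keystore file and the -storepass value from step 2
org.apache.ws.security.crypto.merlin.file=service.jks
org.apache.ws.security.crypto.merlin.keystore.password=servicestorekey
```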

Now the question is, how does the service pick up this file?

That - we mention in the services.xml file [[rampart-sample]\service\META-INF\services.xml]

<parameter name="OutflowSecurity">
If you can recall, we used the alias 'client' when we imported the client's public key into service.jks - so the same name is used here for the encryptionUser.
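A minimal sketch of what such an OutflowSecurity parameter might contain is shown below. The element names (items, encryptionUser, encryptionPropFile) follow Rampart 1.x handler configuration, but the property-file name here is an assumption:

```xml
<parameter name="OutflowSecurity">
  <action>
    <items>Encrypt</items>
    <!-- the alias under which the client's public key was imported -->
    <encryptionUser>client</encryptionUser>
    <!-- the crypto property file discussed above (name assumed) -->
    <encryptionPropFile>service.properties</encryptionPropFile>
  </action>
</parameter>
```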

The above discussion covers encryption. But how does Rampart know the password of its own private key, which it needs to decrypt incoming messages?

<parameter name="InflowSecurity">
Here Rampart uses a callback mechanism to retrieve the password - the service author needs to implement a password callback class [[rampart-sample]\service\org\apache\rampart\samples\sample05\].
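The sketch below illustrates the callback pattern using only JDK classes. Rampart's real handler implements the same CallbackHandler interface but receives WSS4J's WSPasswordCallback, whose getIdentifier() carries the key alias rather than a prompt string; the alias and password here are just the ones from step 2.

```java
import java.io.IOException;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;

// Illustrative password callback handler - hands back the private-key
// password for the "service" alias created in step 2.
public class PWCBHandler implements CallbackHandler {
    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
        for (Callback cb : callbacks) {
            if (cb instanceof PasswordCallback) {
                PasswordCallback pc = (PasswordCallback) cb;
                // "service" is the key alias; "servicekey" was the -keypass value.
                if ("service".equals(pc.getPrompt())) {
                    pc.setPassword("servicekey".toCharArray());
                }
            } else {
                throw new UnsupportedCallbackException(cb);
            }
        }
    }
}
```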

4. Write and run the client

- Build the Client

[rampart-sample]\> classpath.bat
[rampart-sample]\> cd client
\> javac org/apache/rampart/samples/sample05/*.java
\> java org/apache/rampart/samples/sample05/Client http://localhost:8080/axis2/services/SimpleService.SimpleServiceHttpEndpoint C:\axis2-1.4\repository

- C:\axis2-1.4\repository is for [AXIS2_HOME]\repository

- If everything is fine, the client should run with no issues.

- The Rampart-related configuration is similar to what was discussed previously for the service.

- You'll find client.jks at [rampart-sample]\client - which is in the path of execution.

- You'll also find password callback class at the client end at [rampart-sample]\client\org\apache\rampart\samples\sample05\