Thursday, December 22, 2011

#Numbers 2011

  • World Population by end of 2011 is 7 Billion.
  • Population in China is 1.3 Billion.
  • Population in India is 1.19 Billion.

Facebook

  • 800 Million users by the end of 2011.
  • Expenses, 1 Billion US $ per year.
  • 3000 employees.
  • 10% of people have fewer than 10 friends, 20% have fewer than 25 friends, while 50% have over 100 friends.
  • Facebook's data-warehousing Hadoop cluster receives 12 TB of compressed data per day, scans 800 TB of compressed data per day, runs 25,000 map-reduce jobs per day, holds 65 million files in HDFS and serves 30,000 simultaneous clients to the HDFS NameNode.
  • Stores over 320 billion images, which translates to over 25 petabytes of data
  • Users upload one billion new photos (~60 terabytes) each week
  • In June 2011, Facebook hit one trillion page views that month, with 870 million unique visitors for the same period, giving the site a staggering 46.9% reach among all web surfers.

Microsoft

  • 330 Million Hotmail users.
  • 90,000 employees.
  • Bought Skype for $8.5 Billion.
  • In June 2011, Microsoft hit 250 million unique visitors, giving the site a 14.5% reach among all web surfers.
  • In June 2011, MSN hit 440 million unique visitors, giving the site a 25.8% reach among all web surfers.

Google

  • 260 Million GMail users.
  • 62 Million Google+ users.
  • 31,000 employees.
  • Britney Spears has the highest number of followers on Google+ with 1,096,945 followers, while Larry Page is second.
  • In June 2011, YouTube hit 800 million unique visitors, giving the site a 46.8% reach among all web surfers.
  • In June 2011, Blogspot hit 340 million unique visitors, giving the site a 19.6% reach among all web surfers.

Twitter

  • 250 Million Tweets per day.
  • 100 Million users.
  • 400 employees.
  • In June 2011, Twitter hit 160 million unique visitors, giving the site a 9.3% reach among all web surfers.

Yahoo!

  • 302 Million users.
  • 14,000 employees.
  • 42,000 nodes in the Hadoop cluster.
  • Flickr stores more than 5 Billion photos.
  • Flickr gets 100,000 queries per second.
  • In June 2011, Yahoo hit 590 million unique visitors, giving the site a 34.4% reach among all web surfers.

Apple

  • 61,000 employees.
  • 500,000+ apps available in Apple App Store
  • 18,000,000,000 downloads from the Apple App Store
  • iTunes Store had 225 million active users by June 2011

Tuesday, December 20, 2011

WSO2 Security Team - all back in Colombo office to celebrate Christmas

2011 was a very busy year for WSO2 and especially for the Security team. We have 6 members in the team, and there was hardly a day when all six were in the Colombo WSO2 office...

Thilina was in Denmark in March for a customer engagement. In May he was in Switzerland, and in July he was in Denver for the Cloud Identity conference. In November he was in Sweden for another customer engagement.

Amila was in California from February to April. He was back in the US, in Raleigh, NC, in September for a customer engagement, and in October he was in New York for another customer engagement.

Asela was out during March/April for a customer engagement in North Carolina. In July he was in Denver for the Cloud Identity conference and in New York once again for a customer engagement. In August he was at the WSO2 Palo Alto office and did the Cloud Security workshop there with Asanka and me. In November he left for San Diego for a customer engagement, and just one week after returning to Sri Lanka, he left for Germany for another customer engagement.

Hasini was out of the country in June - for a customer engagement in Indianapolis.

Manjula was in Thailand in November to present WSO2 Identity Server at APICTA, and he left for Germany soon after that for a customer engagement - which finished in December.

I was in New York, Washington & Dallas in March for the WSO2 SOA Security & Identity Workshops, and in July I was in Denver for the Cloud Identity conference and in New York once again for a customer engagement. In August I was at the WSO2 Palo Alto office and did the Cloud Security workshop there with Asanka and Asela. I was back in New York in October for another customer engagement and also for a workshop on Cloud Security. In November I was in Vancouver, Canada for ApacheCon.

It's great to see the entire team back in Colombo by the end of the year...

2-legged OAuth with OAuth 1.0 and 2.0

OAuth 1.0 emerged from the large social providers like Facebook, Yahoo!, AOL, and Google. Each had developed its own alternative to the password anti-pattern. OAuth 1.0 reflected their agreement on a single community standard.

In 2009, an attack on OAuth 1.0 was identified which relied on an attacker initiating the OAuth authorization sequence, and then convincing a victim to finish the sequence – a result of which would be the attacker’s account at an (honest) client being assigned permissions to the victim’s resources at an (honest) RS.

OAuth 1.0a was the revised specification version that mitigated the attack.

In 2009, recognizing the value of more formalized standardization, that community contributed OAuth 1.0 to the IETF. It was within the IETF Working Group that the original OAuth 1.0 was reworked and clarified to become the Informational RFC 5849.

In 2010, Microsoft, Yahoo!, and Google created the Web Resource Authentication Protocol (WRAP), which was soon submitted into the IETF WG as input for OAuth 2.0. WRAP proposed significant reworking of the OAuth 1.0a model.

Among the changes were the deprecation of message signatures in favor of SSL, and a formal separation between the roles of ‘token issuance’ and ‘token reliance.’

Development of OAuth 2.0 in the IETF consequently reflects the input of OAuth 1.0, OAuth 1.0a, and the WRAP proposal. It is fair to say that the very different assumptions about what are appropriate security protections between OAuth 1.0a and WRAP have created tensions within the IETF OAuth WG.

While OAuth 2.0 initially reflected more of the WRAP input, lately (i.e. fall 2010) there has been a swing in group consensus that the signatures of OAuth 1.0a that were deprecated by WRAP are appropriate and desirable in some situations. Consequently, signatures are to be added back as an optional security mechanism.

While many deployments of OAuth 1.0a survive, more and more OAuth 2.0 deployments are appearing – necessarily against a non-final version of the spec. For instance, Facebook, Salesforce, and Microsoft Azure ACS all use draft 10 of OAuth 2.0.

[The above paragraphs are direct extracts from the white-paper published by Ping Identity on OAuth]

OAuth provides a method for clients to access server resources on behalf of a resource owner (such as a different client or an end-user). It also provides a process for end-users to authorize third-party access to their server resources without sharing their credentials (typically, a username and password pair), using user-agent redirections.

In the traditional client-server authentication model, the client requests an access restricted resource (protected resource) on the server by authenticating with the server using the resource owner's credentials. In order to provide third-party applications access to restricted resources, the resource owner shares its credentials with the third-party. This creates several problems and limitations.

1. Third-party applications are required to store the resource owner's credentials for future use, typically a password in clear-text.

2. Servers are required to support password authentication, despite the security weaknesses created by passwords.

3. Third-party applications gain overly broad access to the resource owner's protected resources, leaving resource owners without any ability to restrict duration or access to a limited subset of resources.

4. Resource owners cannot revoke access to an individual third-party without revoking access to all third-parties, and must do so by changing their password.

5. Compromise of any third-party application results in compromise of the end-user's password and all of the data protected by that password.

OAuth addresses these issues by introducing an authorization layer and separating the role of the client from that of the resource owner.

The protocol centers on a three-legged scenario, delegating User access to a Consumer for resources held by a Service Provider. In many cases, a two-legged scenario is needed, in which the Consumer acts on behalf of itself, without any direct User involvement.

OAuth was created to solve the problem of sharing two-legged credentials in three-legged situations. However, within the OAuth context, Consumers might still need to communicate with the Service Provider using requests that are Consumer-specific. Since the Consumer has already established a Consumer Key and Consumer Secret, there is value in being able to use them for requests where only the Consumer's identity is being verified.

This specification defines how 2-legged OAuth works with OAuth 1.0, but it never became an IETF RFC.

With OAuth 1.0, 2-legged OAuth involves two parties: the Consumer and the Service Provider. In this case the Consumer also acts as the resource owner. The Consumer first needs to register a consumer_key and consumer_secret with the Service Provider. To access a Protected Resource, the Consumer sends an HTTP(S) request to the Service Provider's resource endpoint URI. The request MUST be signed as defined in OAuth Core 1.0 section 9, with an empty Token Secret.

All the requests to the Protected Resources MUST be signed by the Consumer and verified by the Service Provider. The purpose of signing requests is to prevent unauthorized parties from using the Consumer Key when making Protected Resources requests. The signature process encodes the Consumer Secret into a verifiable value which is included with the request.

OAuth does not mandate a particular signature method, as each implementation can have its own unique requirements. The protocol defines three signature methods: HMAC-SHA1, RSA-SHA1, and PLAINTEXT, but Service Providers are free to implement and document their own methods.

The Consumer declares a signature method in the oauth_signature_method parameter, generates a signature, and stores it in the oauth_signature parameter. The Service Provider verifies the signature as specified in each method. When verifying a Consumer signature, the Service Provider SHOULD check the request nonce to ensure it has not been used in a previous Consumer request.

The signature process MUST NOT change the request parameter names or values, with the exception of the oauth_signature parameter.
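The signing steps above can be sketched with Python's standard library. This is a rough illustration, not a full OAuth 1.0 implementation; the function and parameter names are mine, and nuances like percent-encoding edge cases are glossed over. Note how the empty Token Secret makes the 2-legged signing key end with a trailing '&'.

```python
import base64
import hashlib
import hmac
import secrets
import time
from urllib.parse import quote


def sign_request(method, url, params, consumer_key, consumer_secret):
    """Build the oauth_* parameters for a 2-legged HMAC-SHA1 signed request."""
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_nonce": secrets.token_hex(8),
        "oauth_version": "1.0",
    }
    # Normalize all parameters: percent-encode, sort, join as k=v pairs
    pairs = sorted({**params, **oauth}.items())
    normalized = "&".join(f"{quote(k, safe='')}={quote(v, safe='')}" for k, v in pairs)
    # Signature base string: METHOD & encoded-URL & encoded-parameters
    base = "&".join(quote(s, safe="") for s in (method.upper(), url, normalized))
    # 2-legged OAuth: the Token Secret is empty, so the key ends with '&'
    key = quote(consumer_secret, safe="") + "&"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    oauth["oauth_signature"] = base64.b64encode(digest).decode()
    return oauth
```

The Service Provider repeats the same computation with its stored copy of the consumer_secret and compares signatures.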

With 2-legged OAuth on OAuth 1.0, the request to the protected resource will look like the following (parameter values here are illustrative):
            GET /resource HTTP/1.1
            Authorization: OAuth realm="",
                oauth_consumer_key="dpf43f3p2l4k3l03",
                oauth_signature_method="HMAC-SHA1",
                oauth_timestamp="1191242096",
                oauth_nonce="kllo9940pd9333jh",
                oauth_version="1.0",
                oauth_signature="tR3%2BTy81lMeYAr%2FFid0kMTYa%2FWM%3D"

This blog post explains, with an example, how to use 2-legged OAuth with OAuth 1.0 to secure a RESTful service.

Now, let's look at OAuth 2.0 - still at the draft specification stage. The draft doesn't talk about 2-legged OAuth explicitly, but it can be implemented with different approaches suggested in OAuth 2.0.

Have a look at this & this - both talk about how to implement 2-legged OAuth with OAuth 2.0 - and those discussions are from the OAuth 2.0 IETF work group.

OAuth 2.0 defines four roles:

1. resource owner : An entity capable of granting access to a protected resource (e.g. end-user).

2. resource server : The server hosting the protected resources, capable of accepting and responding to protected resource requests using access tokens.

3. client : An application making protected resource requests on behalf of the resource owner and with its authorization.

4. authorization server : The server issuing access tokens to the client after successfully authenticating the resource owner and obtaining authorization.

In case of 2-legged OAuth, client becomes the resource owner.

We can, at a very high level, break the full OAuth flow into two parts.

1. Get a token from the authorization server
2. Use the token to access the resource server

Let's see how the above two steps work under 2-legged OAuth.

OAuth 2.0 defines a concept called - "authorization grant" - which is a credential representing the resource owner's authorization (to access its protected resources) used by the client to obtain an access token. This specification defines four grant types.

1. authorization code
2. implicit
3. resource owner password credentials
4. client credentials

"Client Credentials" is the grant type which goes closely with 2-legged OAuth.

With "Client Credentials" grant type, the client can request an access token using only its client credentials (or other supported means of authentication) when the client is requesting access to the protected resources under its control.

Once the client makes this request, the authorization server returns an access token to access the protected resource.
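As a sketch of the Client Credentials request, the pieces can be assembled as below. This is illustrative only; the URL, client_id, and client_secret values are hypothetical, and real deployments should send the request over TLS.

```python
import base64
from urllib.parse import urlencode


def client_credentials_request(token_url, client_id, client_secret, scope=None):
    """Assemble the parts of an OAuth 2.0 Client Credentials token request."""
    body = {"grant_type": "client_credentials"}
    if scope:
        body["scope"] = scope
    # The client authenticates as itself, here via HTTP Basic authentication
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {creds}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    return token_url, headers, urlencode(body)
```

Posting this body to the authorization server's token endpoint yields a JSON response carrying the access token.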

The access token returned to the client could be either of type Bearer or MAC.

The "mac" token type, defined in draft-ietf-oauth-v2-http-mac, is utilized by issuing a MAC key together with the access token; the client uses the key to sign certain components of its HTTP requests when accessing the protected resource.

The MAC scheme requires the establishment of a shared symmetric key between the client and the server. This is often accomplished through a manual process such as client registration.

The OAuth 2.0 specification offers two methods for issuing a set of MAC credentials to the client:

1. OAuth 2.0 in the form of a MAC-type access token, using any supported OAuth grant type. [This is what we discussed above - an access token with 'MAC' type]

2. The HTTP "Set-Cookie" response header field via an extension attribute.

When using MAC-type access tokens with 2-legged OAuth, the request to the protected resource will look like the following (values are illustrative):
GET /resource/1?b=1&a=2 HTTP/1.1
Host: example.com
     Authorization: MAC id="h480djs93hd8",
                    ts="1336363200",
                    nonce="dj83hs9s",
                    mac="bhCQXTVyfj5cmA9uKkPFx1zeOXM="
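A minimal sketch of how a client could compute such a MAC header follows. The exact normalized-string layout varies between revisions of the draft, so treat the field order and the helper's name as assumptions; the point is that only the holder of the shared MAC key can produce a valid mac value.

```python
import base64
import hashlib
import hmac


def make_mac_header(key_id, mac_key, method, uri, host, port, ts, nonce, ext=""):
    """Build a MAC Authorization header (normalized-string layout approximated)."""
    # Normalized request string: each element followed by a newline
    base = "\n".join([str(ts), nonce, method.upper(), uri, host, str(port), ext]) + "\n"
    digest = hmac.new(mac_key.encode(), base.encode(), hashlib.sha256).digest()
    mac = base64.b64encode(digest).decode()
    return f'MAC id="{key_id}", ts="{ts}", nonce="{nonce}", mac="{mac}"'
```

The resource server recomputes the same normalized string from the incoming request and rejects the call if the MAC (or the nonce/timestamp freshness check) fails.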
Bearer type is defined here. It's a security token with the property that any party in possession of the token (a "bearer") can use the token in any way that any other party in possession of it can. Using a bearer token does not require a bearer to prove possession of cryptographic key material (proof-of-possession).

When using Bearer type access tokens with 2-legged OAuth - the request to the protected resource will look like following.
GET /resource HTTP/1.1
   Authorization: Bearer vF9dft4qmT

Also, the access token issued by the Authorization Server to the client has a 'scope' attribute. [2-legged OAuth with OAuth 1.0 has neither this scope attribute nor the access token concept - so the resource server has to perform authorization separately, based on the resource the client is going to access]

The client should request access tokens with the minimal scope and lifetime necessary. The authorization server will take the client identity into account when choosing how to honor the requested scope and lifetime, and may issue an access token with fewer rights than requested.

When securing APIs with OAuth, this 'scope' attribute can be bound to different APIs, so the authorization server can decide whether or not to let the client access a given API.
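The scope-to-API binding can be sketched as a simple lookup. The API paths and scope strings below are hypothetical examples, not from any spec:

```python
# Hypothetical mapping from API paths to the scope each one requires
API_SCOPES = {
    "/api/orders": "orders:read",
    "/api/billing": "billing:read",
}


def authorize(api_path, granted_scopes):
    """Allow the call only if the token's granted scopes cover the requested API."""
    required = API_SCOPES.get(api_path)
    return required is not None and required in granted_scopes
```

A real gateway would extract `granted_scopes` from the validated access token before making this check.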

Thursday, December 15, 2011

The first Kolamba DZone Community Meetup

We successfully completed the first Kolamba DZone meetup with 50+ attendees from WSO2, University of Moratuwa, University of Colombo, Informatics and IFS.

We started the event by introducing DZone - since some attendees were new to it..

Then we had a very interesting panel discussion on Big Data - which was followed by a demo.. and we ended up with some music.. and some 'nice' food.. Hope everyone enjoyed..

Thanks a lot WSO2 for sponsoring this event.. and thanks DZone for helping us to make this event a success..

Also thanks Srinath, Tharindu, Senaka, Wathsala, Deep, Shankar, Anjana and Buddhika for taking part in the panel discussion..

Thank you very much Pradeeban, Udedhika, Flora for your help in different aspects...

Thanks Charitha, Dassa, Chamara, ChamaraA for the wonderful performance at the end of the event...

Last, but not least, thanks a lot Harindu for owning everything and making everything perfect..

Wednesday, December 14, 2011

A SMALL cross-section of BIG Data

Big data is a term applied to data sets whose size is beyond the ability of commonly used software tools to capture, manage, and process the data within a tolerable elapsed time. Big data sizes are a constantly moving target currently ranging from a few dozen terabytes to many petabytes of data in a single data set.

IDC estimated the digital universe to be around 1.8 zettabytes by 2011.

How big is a zettabyte? It's one billion terabytes. The current world population is 7 billion - so even if you gave a 250 GB hard disk to each person on Earth, that storage still wouldn't be sufficient.
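The arithmetic behind that claim is easy to check:

```python
ZETTABYTE = 10 ** 21        # bytes
GIGABYTE = 10 ** 9          # bytes
digital_universe = 1.8 * ZETTABYTE   # IDC's 2011 estimate
population = 7_000_000_000

# Bytes per person, expressed in gigabytes
per_person_gb = digital_universe / population / GIGABYTE
print(round(per_person_gb))   # roughly 257 GB for every person on Earth
```

So a 250 GB disk per person falls just short of holding the 1.8 ZB estimate.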

Many sources contribute to this flood of data...

1. The New York Stock Exchange generates about one terabyte of new trade data per day.
2. Facebook hosts approximately 10 billion photos, taking up one petabyte of storage.
3. The genealogy site stores around 2.5 petabytes of data.
4. The Internet Archive stores around 2 petabytes of data, and is growing at a rate of 20 terabytes per month.
5. The Large Hadron Collider near Geneva will produce about 15 petabytes of data per year.
6. Every day, people create the equivalent of 2.5 trillion bytes of data from sensors, mobile devices, online transactions & social networks.

Facebook, Yahoo! and Google found themselves collecting data on an unprecedented scale. They were the first massive companies collecting tons of data from millions of users.

They quickly overwhelmed traditional data systems and techniques like Oracle and MySQL. Even the best, most expensive vendors using the biggest hardware could barely keep up, and certainly couldn't give them tools to powerfully analyze their influx of data.

In the early 2000s they developed new techniques like MapReduce, BigTable and the Google File System to handle their big data. Initially these techniques were kept proprietary. But they realized that making the concepts public, while keeping the implementations hidden, would benefit them - more people would contribute to the ideas, and the graduates they hired would arrive with a good understanding of them.

Around 2004/2005 Facebook, Yahoo! and Google started sharing research papers describing their big data technologies.

In 2004 Google published the research paper "MapReduce: Simplified Data Processing on Large Clusters".

MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in this paper.

Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system. Google's implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable.
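The programming model described above can be sketched in a few lines of Python. This is only a single-process teaching toy (the classic word-count example), not a distributed runtime; the function names are mine:

```python
from collections import defaultdict


def map_fn(_, line):
    # map: emit an intermediate (word, 1) pair for every word in the line
    for word in line.split():
        yield word.lower(), 1


def reduce_fn(word, counts):
    # reduce: merge all intermediate values for one intermediate key
    yield word, sum(counts)


def run_mapreduce(records, mapper, reducer):
    groups = defaultdict(list)
    for key, value in records:
        for k, v in mapper(key, value):
            groups[k].append(v)          # the "shuffle": group by intermediate key
    result = {}
    for k, vs in groups.items():
        for rk, rv in reducer(k, vs):
            result[rk] = rv
    return result
```

In a real cluster the map and reduce calls run on different machines, and the shuffle moves intermediate pairs across the network, but the contract seen by the programmer is exactly this.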

A typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use. Hundreds of MapReduce programs have been implemented, and upwards of one thousand MapReduce jobs are executed on Google's clusters every day.

Doug Cutting, who worked on Nutch - an open-source search technology project now managed through the Apache Software Foundation - read this paper published by Google, and also another Google paper on its distributed file system [GFS]. He figured that GFS would solve their storage needs and MapReduce the scaling issues they had encountered with Nutch, and implemented both. They named the GFS implementation for Nutch the Nutch Distributed Filesystem [NDFS].

NDFS and the MapReduce implementation in Nutch were applicable beyond the realm of search, and in February 2006 they moved out of Nutch to form an independent sub-project of Lucene called Hadoop. NDFS became HDFS [Hadoop Distributed File System] - an implementation of GFS. Around the same time Yahoo! extended its support for Hadoop and hired Doug Cutting.

At a very high level, this is how HDFS works. Say we have a 300 MB file. [Hadoop also does really well with files of terabytes and petabytes.] The first thing HDFS does is split this file into blocks. The default block size on HDFS right now is 128 MB - so we end up with two blocks of 128 MB and one of 44 MB. HDFS then makes 'n' copies/replicas of each block ['n' is configurable - say 'n' is three] and stores them on different DataNodes of the HDFS cluster. We also have a single NameNode, which keeps track of the replicas and the DataNodes. The NameNode knows where a given replica resides - whenever it detects that a replica is corrupted [DataNodes keep running checksums on their replicas] or that the corresponding node is down, it finds out where else that replica lives in the cluster and tells other nodes to re-replicate it, so that 'n' replicas exist again. The NameNode is a single point of failure - to avoid that, we can have a secondary NameNode kept in sync with the primary, which can take control when the primary goes down. The Hadoop project is currently working on implementing distributed NameNodes.
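The splitting and replication described above can be sketched as follows. This is a deliberately naive model (round-robin placement, sizes in MB); real HDFS placement is rack-aware and far more sophisticated:

```python
def split_into_blocks(file_size_mb, block_size_mb=128):
    """Split a file into HDFS-style fixed-size blocks (sizes in MB)."""
    blocks = []
    remaining = file_size_mb
    while remaining > 0:
        blocks.append(min(block_size_mb, remaining))
        remaining -= block_size_mb
    return blocks


def place_replicas(blocks, datanodes, n=3):
    """Naive round-robin replica placement across DataNodes."""
    return {
        i: [datanodes[(i + r) % len(datanodes)] for r in range(n)]
        for i in range(len(blocks))
    }
```

For the 300 MB example this yields blocks of 128 MB, 128 MB and 44 MB, each placed on three distinct DataNodes.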

Again in 2006 Google published another paper on "Bigtable: A Distributed Storage System for Structured Data"

Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size, petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products. This paper describes the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and describes the design and implementation of Bigtable.

BigTable maps two arbitrary string values (row key and column key) and timestamp (hence three dimensional mapping) into an associated arbitrary byte array. It is not a relational database and can be better defined as a sparse, distributed multi-dimensional sorted map.
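That three-dimensional map can be sketched as a toy data structure. This is only a sketch of the data model (the class name and row/column values are mine), ignoring BigTable's column families, tablets and distribution:

```python
class TinyBigtable:
    """A toy sparse map of (row key, column key, timestamp) -> bytes."""

    def __init__(self):
        self.cells = {}

    def put(self, row, column, timestamp, value: bytes):
        self.cells[(row, column, timestamp)] = value

    def get_latest(self, row, column):
        # Return the newest version of the cell, or None if never written
        versions = [(ts, v) for (r, c, ts), v in self.cells.items()
                    if r == row and c == column]
        return max(versions)[1] if versions else None
```

Keeping multiple timestamped versions per cell is what lets BigTable serve "the page as it looked last week" style queries.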

Basically BigTable discussed how to build a distributed data store on top of GFS.

HBase, a Hadoop sub-project, is an implementation of BigTable. HBase is a distributed, column-oriented database which uses HDFS for its underlying storage and supports both batch-style computation using MapReduce and point queries.

Amazon published a research paper in 2007 on "Dynamo: Amazon's Highly Available Key-value Store".

Dynamo is a highly available key-value storage system that some of Amazon's core services use to provide an "always-on" experience. Apache Cassandra brings together Dynamo's fully distributed design and BigTable's data model. Written in Java, it was open-sourced by Facebook in 2008. It is a NoSQL solution initially developed by Facebook, which powered their Inbox Search feature until late 2010. In fact, much of the initial development work on Cassandra was performed by two Dynamo engineers recruited to Facebook from Amazon. However, Facebook abandoned Cassandra in late 2010 when they built the Facebook Messaging platform on HBase.

Besides BigTable's way of modeling data, Cassandra has properties inspired by Amazon's Dynamo: eventual consistency, the Gossip protocol, and a peer-to-peer way of serving read and write requests, with no single master. One important property, eventual consistency, means that given a sufficiently long period over which no changes are sent, all updates can be expected to propagate through the system and all the replicas will become consistent.

I used the term 'NoSQL' when talking about Cassandra. NoSQL (sometimes expanded to "not only SQL") is a broad class of database management systems that differ from the classic model of the relational database management system (RDBMS) in some significant ways. These data stores may not require fixed table schemas, usually avoid join operations, and typically scale horizontally.

The name "NoSQL" was in fact first used by Carlo Strozzi in 1998 as the name of a file-based database he was developing. Ironically, it's a relational database - just one without a SQL interface. The term re-surfaced in 2009 when Eric Evans used it to name the current surge in non-relational databases.

There are four categories of NoSQL databases.

1. Key-value stores : This is based on Amazon's Dynamo paper.
2. ColumnFamily / BigTable clones : Examples are HBase, Cassandra
3. Document Databases : Examples are CouchDB, MongoDB
4. Graph Databases : Examples are AllegroGraph, Neo4j

As per Marin Dimitrov, the following are the use cases for NoSQL databases - in other words, the cases where relational databases do not perform well.

1. Massive Data Volumes
2. Extreme Query Volume
3. Schema Evolution

With NoSQL, we get advantages like massive scalability, high availability, lower cost (than competitive solutions at that scale), predictable elasticity and schema flexibility.

For application programmers, the major difference between relational databases and Cassandra is its data model - which is based on BigTable. The Cassandra data model is designed for distributed data on a very large scale. It trades ACID-compliant data practices for important advantages in performance, availability, and operational manageability.

If you want to compare Cassandra with HBase, then this is a good one. Another HBase vs Cassandra debate is here.

References :

[1]: MapReduce: Simplified Data Processing on Large Clusters
[2]: Bigtable: A Distributed Storage System for Structured Data
[3]: Dynamo: Amazon’s Highly Available Key-value Store
[4]: The Hadoop Distributed File System
[5]: ZooKeeper: Wait-free coordination for Internet-scale systems
[6]: An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics
[7]: Cassandra - A Decentralized Structured Storage System
[8]: NOSQL Patterns
[9]: BigTable Model with Cassandra and HBase
[10]: LinkedIn Tech Talks : Apache Hadoop - Petabytes and Terawatts
[11]: O'Reilly Webcast: An Introduction to Hadoop
[12]: Google Developer Day : MapReduce
[13]: WSO2Con 2011 - Panel: Data, data everywhere: big, small, private, shared, public and more
[14]: Scaling with Apache Cassandra
[15]: HBase vs Cassandra: why we moved
[16]: A Brief History of NoSQL

Saturday, December 10, 2011

Possible bug in iPad push notifications?

Applications that support push notifications will prompt a message whenever there is something to notify - even though the user is not using the app at that time.

But what if I have secured the iPad with a password? These apps will still prompt the message when the iPad is in the locked state - which looks like a bug to me..

Another possible bug: when your iPad is locked and requires a password to unlock, anyone can still access your photos, just by clicking on the icon below..

Maybe the iPad has some options to handle these scenarios - but even in that case, this shouldn't be the default behavior when the iPad is protected with a password.

Tuesday, December 6, 2011

Symmetric/Asymmetric Encryption/Signature with Apache Rampart

What is meant by Symmetric? Both parties - client and server - use the same key to encrypt and sign.

Now the question is how to establish this key. Either of the two parties can generate the key, but how do we pass the generated key to the other end?

This is how it works in Web Services security...

1. Initiator generates a key
2. Signs/Encrypts the message with the generated key
3. Encrypts the key with the public key of the recipient
4. Builds an EncryptedKey element with the output from [3], associates an ID with that element and stores it in-memory using the ID as the key. [This is how Rampart stores it]
<xenc:EncryptedKey Id="EncKeyId-C1AFA8321D1093CA1913231781007902">
    <xenc:EncryptionMethod Algorithm="" />
    <ds:KeyInfo xmlns:ds="">...</ds:KeyInfo>
    <xenc:CipherData><xenc:CipherValue>...</xenc:CipherValue></xenc:CipherData>
</xenc:EncryptedKey>
5. The EncryptedKey element will be included in the Security header of the SOAP message going from the sender to the recipient.

Here, what you see under the CipherValue element is the encrypted generated key.

The value of KeyIdentifier - r3iHLvhEdbQLQGh0iuDzzJMBz40= - is the base64-encoded SHA1 value of the fingerprint of the recipient's public key. Looking at this fingerprint value, the recipient can pick the corresponding private key to decrypt the message and get the generated key out.
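Computing such a KeyIdentifier value is straightforward. A minimal sketch (the function name is mine; in practice the bytes hashed are the recipient's certificate/key material as defined by the WS-Security token profile in use):

```python
import base64
import hashlib


def key_identifier(key_material: bytes) -> str:
    """Base64-encoded SHA1 fingerprint, as carried in a KeyIdentifier element."""
    return base64.b64encode(hashlib.sha1(key_material).digest()).decode()
```

The recipient keeps a table of these fingerprints for its own key pairs, so an incoming KeyIdentifier directly selects the private key to use for unwrapping.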

Also, let's have a look at the Algorithm attribute of the EncryptionMethod element. This is the algorithm used to encrypt the generated key - which one to use is based on the Algorithm Suite defined in your security policy. In this case I have used Basic256 as the Algorithm Suite, so it uses rsa-oaep-mgf1p as the asymmetric key-wrapping algorithm.

Now let's see what happens at the recipient end...

1. Recipient gets the message
2. Finds the private key corresponding to the fingerprint value in EncryptedKey element
3. Decrypts the encrypted generated key
4. Decrypts the message and verifies the signature using the key from [3]
5. Generates the response
6. Signs/Encrypts the response with the same key from [3]
7. Generates the SHA1 of the EncryptedKey element it received from the client and adds the base64-encoded value to the response
8. Sends the response to the client

Once the client gets the message, it will perform following validations..

1. Client gets the message
2. Goes through all the stored EncryptedKey elements in-memory to check whether the SHA1 hash of any of them matches the hash value in the response. If a match is found, that's the EncryptedKey.
3. The EncryptedKey element in memory also maintains the generated key in clear text, so the client can find it
4. Using the key found in [3], the client validates the message

With Symmetric binding, only the recipient needs to have a public/private key pair.

But in Asymmetric binding both the parties should have their own key pairs.

Even though it's Asymmetric, the encryption happens with a generated symmetric key. The reason is that asymmetric encryption is resource-consuming and cannot operate on a large amount of data - so the WS-Security specification recommends using symmetric encryption with a generated key, even with the Asymmetric binding.

The major difference between the Asymmetric and the Symmetric binding is the way the signature is handled.

With Symmetric binding, both the request and the response are signed using the same generated key. But in Asymmetric binding, the request is signed using the sender's private key and the response is signed using the recipient's private key. In other words, Asymmetric binding provides a guarantee of non-repudiation while Symmetric binding does not.

Let's see how Asymmetric binding works in Web Services security..

1. Initiator/client generates a key
2. Encrypts the message with the generated key
3. Signs the message with its own private key
4. Encrypts the generated key with the public key of the recipient
5. Builds an encrypted key element with the output from [4] and associates an ID with that element. Do NOT store it in-memory as in the case of Symmetric.
<xenc:EncryptedKey Id="EncKeyId-C1AFA8321D1093CA1913231781007902">
    <xenc:EncryptionMethod Algorithm="" />
    <ds:KeyInfo xmlns:ds="">
        ...
    </ds:KeyInfo>
    ...
</xenc:EncryptedKey>
6. The EncryptedKey element is included in the Security header of the SOAP message going from the sender to the recipient.

The SOAP requests for Symmetric and Asymmetric bindings look alike - you cannot spot any difference just by inspecting them.

Now let's see what happens at the recipient end...

1. Recipient gets the message
2. Finds the private key corresponding to the fingerprint value in EncryptedKey element
3. Decrypts the encrypted generated key
4. Decrypts the message using the key from [3]
5. Verifies the signature of the message using the public key of the sender
6. Generates the response
7. Generates a new key
8. Encrypts the message with the generated new key
9. Signs the message with its own private key
10. Encrypts the generated key with the public key of the initiator [the client]
11. EncryptedKey element will be built with the encrypted generated key and included in the Security header of the SOAP message going from the service to the client.
12. Sends the response to the client

Once the client gets the message, it will perform the following validations:

1. Client gets the message
2. Finds the private key corresponding to the fingerprint value in EncryptedKey element
3. Decrypts the encrypted generated key with the key from [2]
4. Decrypts the message using the key from [3]
5. Verifies the signature of the message using the public key of the service

One of my colleagues recently asked me how to find whether someone is using Asymmetric or Symmetric binding just by looking at the SOAP messages...

This is not possible by looking at the SOAP request - but by looking at the SOAP response we can figure it out. When using Asymmetric binding, the SOAP response will have an EncryptedKey element inside the Security header - but not in the case of Symmetric binding.
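That check can be automated by looking for an xenc:EncryptedKey element in the response, using the standard XML Encryption namespace. A sketch; the sample responses below are hypothetical skeletons, not real SOAP envelopes:

```python
import xml.etree.ElementTree as ET

XENC_NS = 'http://www.w3.org/2001/04/xmlenc#'

def response_uses_asymmetric_binding(soap_response_xml):
    # Asymmetric binding: the response carries its own EncryptedKey element
    root = ET.fromstring(soap_response_xml)
    return root.find(f'.//{{{XENC_NS}}}EncryptedKey') is not None

asymmetric_response = (
    '<Envelope xmlns:xenc="http://www.w3.org/2001/04/xmlenc#">'
    '<Header><Security><xenc:EncryptedKey/></Security></Header>'
    '</Envelope>'
)
symmetric_response = '<Envelope><Header><Security/></Header></Envelope>'
```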

Let's summarize the differences in the behavior between Asymmetric and Symmetric bindings.

            Symmetric Binding                                       Asymmetric Binding
Client      Generates a key                                         Generates a key
Service     Uses the same key generated by the client               Generates a key
Client      Stores the generated key                                Does NOT store the generated key
Service     Does not generate keys                                  Does NOT store the generated key
Client      Encrypts with the generated key                         Encrypts with the generated key
Service     Encrypts with the client-generated key                  Encrypts with the service-generated key
Client      Signs with the generated key                            Signs with its own private key
Service     Signs with the client-generated key                     Signs with its own private key
Client      Adds the EncryptedKey element to the request            Adds the EncryptedKey element to the request
Service     Does NOT add the EncryptedKey element to the response   Adds the EncryptedKey element to the response
Client      Signature algorithm: hmac-sha1                          Signature algorithm: rsa-sha1
Service     Signature algorithm: hmac-sha1                          Signature algorithm: rsa-sha1

Friday, December 2, 2011

Creating RESTful APIs Using the WSO2 Platform

APIs have become an essential success factor for any business. Businesses do not operate as silos anymore; each business depends on B2B communications. In technical terms, different systems/applications need to communicate with each other to fulfill various business requirements. Publishing rich business APIs is the answer to these requirements.

Architects and developers who implement APIs prefer REST as the standard, given the simplicity and flexibility it provides for end-users of the API, along with lightweight message formats like JSON and POX. Yet most enterprises struggle to expose RESTful APIs due to various technical limitations, and spend too much time architecting and implementing them.

This half-day workshop focuses on how to expose your heterogeneous back-end services as a RESTful API in a quick and easy, but architecturally accurate, way using the WSO2 Platform. It is presented by Asanka Abeysinghe, Director, Solutions Architecture, WSO2.

Date : Thursday, 8 December - from 9.00 AM to 1.00 PM
Location : 4131, El Camino Real, Suite 200 Palo Alto, CA 94306
Admission : Free

Thursday, December 1, 2011

Kolamba DZone Community Meetup

The first ever DZone meetup in Sri Lanka happens on 15th December at the WSO2 #58 office.

Sri Lanka has the highest number of Apache Committers outside the USA. Further, in Google Summer of Code, University of Moratuwa, Sri Lanka was ranked as the top university worldwide in terms of the number of awards received by students over the five-year period since the program's inception in 2005. So, interest among the Sri Lankan tech community towards the DZone Community Meetup is undoubtedly expected to be very high.

We have picked "Big Data" as the theme for this meetup and have invited experts in the area to share their thoughts. Topics related to Big Data have lots of traction these days, and the DZone NoSQL Zone has a good collection of resources.

We would like to invite DZone members/users around Colombo to join this meetup and share their thoughts.

WSO2 is happily sponsoring the event, and we would expect DZone to send us some RefCards to share among the attendees.

Please confirm your attendance via

Location : WSO2, 5th Floor, 58 [ICIC Building], Dharmapala Mawatha, Colombo 07.

Friday, November 25, 2011

The depth of SAML [SAML Summary]

1. History

Security Assertion Markup Language (SAML) is an XML-based standard for exchanging authentication and authorization data between entities, and is a product of the OASIS Security Services Technical Committee.

- SAML 1.0 was adopted as an OASIS standard in Nov 2002
- SAML 1.1 was ratified as an OASIS standard in Sept 2003
- SAML 2.0 became an OASIS standard in Mar 2005

Liberty Alliance donated its Identity Federation Framework (ID-FF) specification to OASIS, which became the basis of the SAML 2.0 specification. Thus SAML 2.0 represents the convergence of SAML 1.1, Liberty ID-FF 1.2, and Shibboleth 1.3.

2. SAML base standards

SAML is built upon the following technology standards.

- Extensible Markup Language (XML)
- XML Schema
- XML Signature
- XML Encryption (SAML 2.0 only)
- Hypertext Transfer Protocol (HTTP)

3. SAML Components

Assertions: Authentication, Attribute and Authorization information
Protocol: Request and Response elements for packaging assertions
Bindings: How SAML Protocols map onto standard messaging or communication protocols
Profiles: How SAML protocols, bindings and assertions combine to support a defined use case

4. Assertions and Protocols for SAML v2.0

The Security Assertion Markup Language (SAML) defines the syntax and processing semantics of assertions made about a subject by a system entity. In the course of making, or relying upon such assertions, SAML system entities may use other protocols to communicate either regarding an assertion itself, or the subject of an assertion. This specification defines both the structure of SAML assertions, and
an associated set of protocols, in addition to the processing rules involved in managing a SAML system. This specification is considered the SAML Core specification, and these constructs are typically embedded in other structures for transport, such as HTTP form POSTs and XML-encoded SOAP messages.

5. Bindings for SAML v2.0

Bindings for SAML specifies SAML protocol bindings for the use of SAML assertions and request-response messages in communications protocols and frameworks.

Mappings of SAML request-response message exchanges onto standard messaging or communication protocols are called SAML protocol bindings (or just bindings). An instance of mapping SAML request-response message exchanges into a specific communication protocol <FOO> is termed a binding for SAML or a SAML <FOO> binding. For example, a SAML SOAP binding describes how SAML request and response message exchanges are mapped into SOAP message exchanges.

The intent of this specification is to specify a selected set of bindings in sufficient detail to ensure that independently implemented SAML-conforming software can interoperate when using standard messaging or communication protocols.

The following bindings are covered under this specification.

- SAML SOAP Binding
- Reverse SOAP (PAOS) Binding
- HTTP Redirect Binding
- HTTP POST Binding
- HTTP Artifact Binding
- SAML URI Binding

6. Profiles for SAML v2.0

Profiles for SAML specifies profiles that define the use of SAML assertions and request-response messages in communications protocols and frameworks, as well as profiles that define SAML attribute value syntax and naming conventions.

One type of SAML profile outlines a set of rules describing how to embed SAML assertions into and extract them from a framework or protocol. Such a profile describes how SAML assertions are embedded in or combined with other objects (for example, files of various types, or protocol data units of communication protocols) by an originating party, communicated from the originating party to a receiving party, and subsequently processed at the destination. A particular set of rules for embedding SAML assertions into and extracting them from a specific class of <FOO> objects is termed a <FOO> profile of SAML.

For example, a SOAP profile of SAML describes how SAML assertions can be added to SOAP messages, how SOAP headers are affected by SAML assertions, and how SAML-related error states should be reflected in SOAP messages.

Another type of SAML profile defines a set of constraints on the use of a general SAML protocol or assertion capability for a particular environment or context of use. Profiles of this nature may constrain optionality, require the use of specific SAML functionality (for example, attributes, conditions, or bindings), and in other respects define the processing rules to be followed by profile actors.

The following profiles are covered under this specification.

- SSO Profiles of SAML [Web Browser SSO Profile, Enhanced Client or Proxy (ECP) Profile, Identity Provider Discovery Profile, Single Logout Profile, Name Identifier Management Profile]
- Artifact Resolution Profile
- Assertion Query/Request Profile
- Name Identifier Mapping Profile
- SAML Attribute Profiles

7. Metadata for SAML v2.0

SAML profiles require agreements between system entities regarding identifiers, binding support and endpoints, certificates and keys, and so forth. A metadata specification is useful for describing this information in a standardized way. This specification defines an extensible metadata format for SAML system entities, organized by roles that reflect SAML profiles. Such roles include that of SSO Identity Provider, SSO Service Provider, Affiliation, Attribute Authority, Attribute Requester, and Policy Decision Point.

This specification further defines profiles for the dynamic exchange of metadata among system entities, which may be useful in some deployments.

8. Conformance Requirements for SAML v2.0

This normative specification describes features that are mandatory and optional for implementations claiming conformance to SAML V2.0 and also specifies the entire set of documents comprising SAML V2.0.

9. Web Services Security: SAML Token Profile 1.1

This specification describes how to use SAML V1.1 and V2.0 assertions with the Web Services Security SOAP Message Security V1.1 specification.

10. SAML 2.0 profile of XACML

The OASIS eXtensible Access Control Markup Language [XACML] is a powerful, standard
language that specifies schemas for authorization policies and for authorization decision requests and responses.

This profile defines how to use SAML 2.0 to protect, transport, and request XACML schema instances and other information needed by an XACML implementation.

11. Security and Privacy Considerations for SAML

This non-normative document describes and analyzes the security and privacy properties of SAML defined in the core SAML specification and the SAML bindings and profiles specifications.

12. SAML V2.0 Kerberos Attribute Profile

This specification defines an attribute profile for the Kerberos protocol. The SAML V2.0 Kerberos Attribute Profile describes a SAML attribute profile for requesting and expressing Kerberos protocol messages. In this version of the specification, this is constrained to the Kerberos KRB-CRED message type. The mechanisms that are used to generate the Kerberos message are outside the scope of this document and are described by IETF RFC 4120: 'The Kerberos Network Authentication Service (V5)'.

13. SAML V2.0 Change Notify Protocol

The SAML V2.0 Change Notify Protocol describes request and response messages for informing SAML endpoints about available changes to subjects and attributes associated with subjects.

Thursday, November 24, 2011

SAML Assertions and XML Signature

SAML assertions and SAML protocol request and response messages may be signed, with the following benefits:

1. An assertion signed by the SAML authority supports:
– Assertion integrity.
– Authentication of the SAML authority to a SAML relying party.
– If the signature is based on the SAML authority’s public-private key pair, then it also provides for non-repudiation of origin.

2. A SAML protocol request or response message signed by the message originator supports:
– Message integrity.
– Authentication of message origin to a destination.
– If the signature is based on the originator's public-private key pair, then it also provides for non-repudiation of origin.

Point [1] talks about signing only the Assertion, while [2] talks about signing the request and response messages, which also carry the Assertion.

A digital signature is not always required in SAML. For example, it may not be required in the following situations:

- In some circumstances signatures may be “inherited," such as when an unsigned assertion gains protection from a signature on the containing protocol response message. "Inherited" signatures should be used with care when the contained object (such as the assertion) is intended to have non-transitory lifetime. The reason is that the entire context must be retained to allow validation, exposing the XML content and adding potentially unnecessary overhead.

- The SAML relying party or SAML requester may have obtained an assertion or protocol message from the SAML authority or SAML responder directly (with no intermediaries) through a secured channel, with the SAML authority or SAML responder having authenticated to the relying party or SAML responder by some means other than a digital signature.

It is recommended that, in all other contexts, digital signatures be used for assertions and request and response messages. Specifically:

- A SAML assertion obtained by a SAML relying party from an entity other than the SAML authority SHOULD be signed by the SAML authority.
- A SAML protocol message arriving at a destination from an entity other than the originating site SHOULD be signed by the origin site.

XML Signatures are intended to be the primary SAML signature mechanism.

Unless a profile specifies an alternative signature mechanism, enveloped XML Digital Signatures MUST be used when signing. This is a bit different from the signature pattern recommended in the WS-Security specification.

WS-Security specification says..

"Because of the mutability of some SOAP headers, producers SHOULD NOT use the Enveloped Signature Transform defined in XML Signature. Instead, messages SHOULD explicitly include the elements to be signed. Similarly, producers SHOULD NOT use the Enveloping Signature defined in XML Signature".

Although this contrasts with what is recommended in the SAML specification, WS-Security has nothing to do with how SAML signs. SAML is just a token type for WS-Security, and the SAML specification has full control to define its own recommendations for signing.

Why does the SAML specification recommend enveloped signatures?

With an enveloped signature, the Signature element is inside the element being signed itself - which is the Assertion element.

An enveloped signature is useful when we have a signed XML document that we wish to insert into other XML documents - which is the case with a SAML Assertion. There you get the SAML Assertion from the issuer and include it in a request to the service provider.

SAML implementations also SHOULD use Exclusive Canonicalization [Excl-C14N], with or without comments, both in the <ds:CanonicalizationMethod> element of <ds:SignedInfo> and as a <ds:Transform> algorithm. Use of Exclusive Canonicalization ensures that signatures created over SAML messages embedded in an XML context can be verified independent of that context.

Exclusive Canonicalization tries to figure out what namespaces you are actually using and just copies those. Specifically, it copies the ones that are "visibly used", which means the ones that are a part of the XML syntax. However, it does not look into attribute values or element content, so the namespace declarations required to process these are not copied. For example if you had an attribute like xx:foo="yy:bar" it would copy the declaration for xx, but not yy. It also does not copy the xml: attributes that are declared outside the scope of the signature.

Exclusive Canonicalization allows you to create a list of the namespaces that must be declared, so that it will pick up the declarations for the ones that are not visibly used.

Exclusive Canonicalization is useful when you have a signed XML document that you wish to insert into other XML documents - as in a signed SAML assertion, which might be inserted as an XML token in the security header of various SOAP messages. The issuer who signs the assertion will be aware of the namespaces being used and will be able to construct the list. The use of Exclusive Canonicalization will ensure the signature verifies correctly every time.

In contrast, Inclusive Canonicalization copies all the declarations that are currently in force, even if they are defined outside the scope of the signature. It also copies any xml: attributes that are in force, such as xml:lang or xml:base. This guarantees that all the declarations you might make use of are unambiguously specified. The problem is that if the signed XML is moved into another XML document which has other declarations, Inclusive Canonicalization will copy them and the signature will become invalid. This can happen even if you simply add an attribute in a different namespace to the surrounding context.
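The namespace-copying difference can be modeled with a toy sketch. This illustrates only the declaration-selection rule; it is not a real C14N implementation, and all the names below are made up:

```python
def inclusive_declarations(in_scope, visibly_used):
    # Inclusive C14N: copy every declaration currently in force
    return dict(sorted(in_scope.items()))

def exclusive_declarations(in_scope, visibly_used):
    # Exclusive C14N: copy only the prefixes the element visibly uses
    return {p: in_scope[p] for p in sorted(visibly_used)}

assertion_prefixes = {'saml'}                    # prefixes the assertion uses
issuer_context = {'saml': 'urn:assertion'}       # context where it was signed
soap_context = {'saml': 'urn:assertion',         # context it was moved into
                'soapenv': 'urn:envelope'}

# Exclusive: the canonical namespace set is identical in both contexts,
# so the signature still verifies after the assertion is moved.
assert exclusive_declarations(issuer_context, assertion_prefixes) == \
       exclusive_declarations(soap_context, assertion_prefixes)

# Inclusive: the extra soapenv declaration changes the canonical form,
# so the signature made in the issuer's context breaks.
assert inclusive_declarations(issuer_context, assertion_prefixes) != \
       inclusive_declarations(soap_context, assertion_prefixes)
```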


Monday, November 21, 2011

Key Exchange Patterns with Web Services Security

With message-level security for web services, we achieve integrity and confidentiality through keys. Keys are used to sign and encrypt messages passed from the requestor to the recipient, or from the client to the service and vice versa.

In this blog post, we'll discuss different key exchange patterns and their related use cases.

1. Direct Key Transfer

If one party has a token and key and wishes to share them with another party, the key can be directly transferred. WS-Secure Conversation is a good example of this. Under WS-Secure Conversation, when the security context token is created by one of the communicating parties and propagated with a message, this pattern is used for the key exchange. This is accomplished by the initiator sending an RSTR (either in the body or header) to the other party. The RSTR contains the token and a proof-of-possession token that contains the key encrypted for the recipient.

The initiator creates a security context token and sends it to the other parties on a message using the mechanisms described in WS-Trust specification. This model works when the sender is trusted to always create a new security context token. For this scenario the initiating party creates a security context token and issues a signed unsolicited <wst:RequestSecurityTokenResponse> to the other party. The message contains a <wst:RequestedSecurityToken> containing (or pointing to) the new security context token and a <wst:RequestedProofToken> pointing to the "secret" for the security context token.

2. Brokered Key Distribution

A third party MAY also act as a broker to transfer keys. For example, a requestor may obtain a token and proof-of-possession token from a third-party STS. The token contains a key encrypted for the target service (either using the service's public key or a key known to the STS and target service). The proof-of-possession token contains the same key encrypted for the requestor (similarly this can use public or symmetric keys).

WS-Secure Conversation also has an example for this pattern when the security context token is created by a security token service – The context initiator asks a security token service to create a new security context token. The newly created security context token is distributed to the parties through the mechanisms defined here and in WS-Trust. For this scenario the initiating party sends <wst:RequestSecurityToken> request to the token service and a <wst:RequestSecurityTokenResponseCollection> containing a <wst:RequestSecurityTokenResponse> is returned. The response contains a <wst:RequestedSecurityToken> containing (or pointing to) the new security context token and a <wst:RequestedProofToken> pointing to the "secret" for the returned context. The requestor then uses the security context token when securing messages to applicable services.

3. Delegated Key Transfer

Key transfer can also take the form of delegation. That is, one party transfers the right to use a key without actually transferring the key. In such cases, a delegation token, e.g. XrML, is created that identifies a set of rights and a delegation target and is secured by the delegating party. That is, one key indicates that another key can use a subset (or all) of its rights. The delegate can provide this token and prove itself (using its own key – the delegation target) to a service. The service, assuming the trust relationships have been established and that the delegator has the right to delegate, can then authorize requests sent subject to delegation rules and trust policies.

For example a custom token is issued from party A to party B. The token indicates that B (specifically B's key) has the right to submit purchase orders. The token is signed using a secret key known to the target service T and party A (the key used to ultimately authorize the requests that B makes to T), and a new session key that is encrypted for T. A proof-of-possession token is included that contains the session key encrypted for B. As a result, B is effectively using A's key, but doesn't actually know the key.

4. Authenticated Request/Reply Key Transfer

In some cases the RST/RSTR mechanism is not used to transfer keys because it is part of a simple request/reply. However, there may be a desire to ensure mutual authentication as part of the key transfer. The mechanisms of WS-Security can be used to implement this scenario.

Specifically, the sender wishes the following:
- Transfer a key to a recipient that they can use to secure a reply
- Ensure that only the recipient can see the key
- Provide proof that the sender issued the key

This scenario could be supported by encrypting and then signing. This would result in roughly the following steps:

1. Encrypt the message using a generated key
2. Encrypt the key for the recipient
3. Sign the encrypted form, any other relevant keys, and the encrypted key

However, if there is a desire to sign prior to encryption then the following general process is used:

1. Sign the appropriate message parts using a random key (or ideally a key derived from a random key)
2. Encrypt the appropriate message parts using the random key (or ideally another key derived from the random key)
3. Encrypt the random key for the recipient
4. Sign just the encrypted key
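The sign-before-encrypt sequence above can be sketched with derived keys. The XOR keystream below is a stand-in for a real cipher and the labels are made up, so treat this purely as an illustration of the key derivation and ordering:

```python
import hashlib
import hmac
import secrets

def derive(random_key, label):
    # Derive independent signing/encryption keys from one random key
    # (the "ideally a key derived from a random key" in steps 1 and 2)
    return hmac.new(random_key, label, hashlib.sha256).digest()

def toy_cipher(key, data):
    # XOR keystream stand-in for a real symmetric cipher (illustration only)
    stream = b''
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, 'big')).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

message = b'<Body>purchase order</Body>'
random_key = secrets.token_bytes(32)

# Step 1: sign the message parts with a key derived from the random key
signature = hmac.new(derive(random_key, b'sign'), message, hashlib.sha256).digest()
# Step 2: encrypt the message parts with another derived key
ciphertext = toy_cipher(derive(random_key, b'encrypt'), message)
# Step 3 would RSA-encrypt random_key for the recipient;
# step 4 would sign just that encrypted key.
```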

Most of this blog post is extracted from the WS-Trust 1.4 specification.

Sunday, November 20, 2011

Understanding Entropy

This blog post is inspired by a question asked by one of my team mates - so here I am, trying to explain what entropy is and its role in web services security.

In information theory, entropy is a measure of the uncertainty associated with a random variable. In other words, entropy adds randomness to a generated key.

In WS-Trust, under the Holder-of-Key scenario, the Security Token Service has to generate a key and pass it to the client - which will later be used between the client and the service to secure the communication.

Let's see how this is done by having a look at part of the client request to the Security Token Service.
Here you can see, the Entropy element is included in the request.

This optional element allows a requestor to specify entropy that is to be used in creating the key. The value of this element should be either a <xenc:EncryptedKey> or <wst:BinarySecret> depending on whether or not the key is encrypted. Secrets should be encrypted unless the transport/channel is already providing encryption. The BinarySecret element specifies a base64 encoded sequence of octets representing the requestor's entropy.
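As a sketch, here is a requestor's entropy value as it would be carried in <wst:BinarySecret>. The 128-bit length follows the recommendation in the WS-Trust text; the variable names are made up:

```python
import base64
import secrets

requestor_entropy = secrets.token_bytes(16)   # 128 bits of randomness
binary_secret_value = base64.b64encode(requestor_entropy).decode()
# binary_secret_value is the text content placed inside <wst:BinarySecret>
```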

The keys resulting from a request are determined in one of three ways:

1. Specific
2. Partial
3. Omitted

In the case of specific keys, a <wst:RequestedProofToken> element is included in the response, which indicates the specific key(s) to use unless the key was provided by the requestor (in which case there is no need to return it). This happens if the requestor does not provide entropy or the issuer rejects the requestor's entropy.

In the case of partial, the <wst:Entropy> element is included in the response, indicating partial key material from the issuer (not the full key) that is combined (by each party) with the requestor's entropy to determine the resulting key(s). In this case a <wst:ComputedKey> element is returned inside the <wst:RequestedProofToken> to indicate how the key is computed. This happens if the requestor provides entropy and the issuer honors it. Here the response will carry an Entropy element which includes the issuer's entropy.

In the case of omitted, an existing key is used or the resulting token is not directly associated with a key. If the requestor provides entropy and the responder doesn't (the issuer uses the requestor's key), a proof-of-possession token need not be returned.

Following table summarizes the use of Entropy.

Requestor             Issuer                                         Result
Provides entropy      Uses requestor entropy as the key              No proof-of-possession token is returned
Provides entropy      Provides entropy                               No keys returned; key(s) derived from both sides' entropy according to the method identified in the response. The issuer's entropy is returned to the client, and the way the key was derived is specified in the ComputedKey element.
Provides entropy      Issues own key (rejects requestor's entropy)   Proof-of-possession token contains the issuer's key(s)
No entropy provided   Issues own key                                 Proof-of-possession token contains the issuer's key(s)
No entropy provided   Does not issue a key                           No proof-of-possession token

Tuesday, November 15, 2011

Subject Confirmation support with Apache Rampart : Holder-of-Key

The Subject Confirmation is the process of establishing the correspondence between the subject and claims of SAML statements (in SAML assertions) and SOAP message content by verifying the confirmation evidence provided by an attesting entity.

SAML 1.1 Token Profile talks about three subject confirmation methods.

1. Holder-of-key Subject Confirmation Method
2. Bearer Subject Confirmation Method
3. Sender-vouches Subject Confirmation Method

With Holder-of-key Subject Confirmation Method, the attesting entity demonstrates that it is authorized to act as the subject of a holder-of-key confirmed SAML statement by demonstrating knowledge of any key identified in a holder-of-key SubjectConfirmation element associated with the statement by the assertion containing the statement. Statements attested for by the holder-of-key method MUST be associated, within their containing assertion, with one or more holder-of-key SubjectConfirmation elements.

Let's see how this works..

First the client application needs to request a token from the Security Token Service or the STS. This request is known as RST [wst:RequestSecurityToken] and goes inside the SOAP Body. Following is a sample RST.
Let's have a look at some of the key elements in the RST.

1. AppliesTo

This is the endpoint against which the client is going to use this token.

2. KeyType :

Use Symmetric key when generating the key for the SubjectConfirmation.

3. KeySize

Use this key size when generating the key for the SubjectConfirmation.

4. Entropy/BinarySecret

WS-Trust allows the requestor to provide input to the key material via a wst:Entropy element in the request. The requestor might do this to satisfy itself as to the degree of entropy (cryptographic randomness if you will) of at least some of the material used to generate the actual key which is used for SubjectConfirmation.

5. Entropy/ComputedKeyAlgorithm :

The key derivation algorithm to use if using a symmetric key for P, where P is computed using client, server, or combined entropy.

Here the key is computed using P_SHA1 from the TLS specification to generate a bit stream using entropy from both sides. The exact form is:
key = P_SHA1 (EntREQ, EntRES)
It is RECOMMENDED that EntREQ be a string of at least 128 bits.
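The P_SHA1 expansion from the TLS 1.0 PRF (RFC 2246, section 5) can be sketched with the standard hmac module; the entropy values below are hypothetical:

```python
import hashlib
import hmac

def p_sha1(secret, seed, length):
    # P_SHA1(secret, seed) = HMAC(secret, A(1)+seed) + HMAC(secret, A(2)+seed) + ...
    # where A(0) = seed and A(i) = HMAC(secret, A(i-1))
    output = b''
    a = seed
    while len(output) < length:
        a = hmac.new(secret, a, hashlib.sha1).digest()
        output += hmac.new(secret, a + seed, hashlib.sha1).digest()
    return output[:length]

# key = P_SHA1(EntREQ, EntRES): both parties can compute the same key
# from the requestor's and issuer's entropy.
ent_req = b'\x01' * 16   # hypothetical requestor entropy (128 bits)
ent_res = b'\x02' * 16   # hypothetical issuer entropy
session_key = p_sha1(ent_req, ent_res, 32)   # 256-bit shared key
```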

Now let's see how this request is processed at the STS end.

Based on the KeyType in the request, the STS will decide whether to use Holder-of-key or not. For the following key types, the holder-of-key subject confirmation method will be used.

1. trust/200512/PublicKey
2. trust/200512/SymmetricKey

If it is SymmetricKey, the STS will generate a key, encrypt it using the public certificate corresponding to the endpoint in the AppliesTo element of the RST, and add that to the SubjectConfirmation element in the response.

Key generation is once again a bit tricky.

If the client provides entropy and the key computation algorithm is specified (P_SHA1), the key is generated as a function of the client entropy and the STS entropy.

If the client provides entropy but the key computation algorithm is NOT specified, the key is the same as the client entropy.

If neither of the above applies, the server generates an ephemeral key.

Whichever way the key is generated, it will be encrypted with the certificate corresponding to the AppliesTo endpoint and added to the SubjectConfirmation element in the response.

As per the above code, what you see inside the CipherValue element is the encrypted key, and it is encrypted with a certificate having the thumbprint reference Ye9D13/K1GFRvJjgw1kSr5/rYxE=. In other words, only the service which owns the certificate with that thumbprint reference would be able to decrypt the key - which is in fact the service endpoint attached to the AppliesTo element. BTW, can anybody in the middle fool the service endpoint just by replacing the SubjectConfirmation element? This is prevented by the STS signing the SubjectConfirmation element, along with its parent Assertion element, with its private key. So the SAML token is protected for integrity.
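The thumbprint reference used to locate the certificate is just the base64-encoded SHA1 of the certificate's DER bytes. A sketch; the certificate bytes below are a stand-in:

```python
import base64
import hashlib

def thumbprint_reference(cert_der_bytes):
    # ThumbprintSHA1 key identifier: base64(SHA1(DER-encoded certificate))
    return base64.b64encode(hashlib.sha1(cert_der_bytes).digest()).decode()

# The recipient computes this over each of its certificates and picks the
# one matching the reference in the message (e.g. Ye9D13/K1GFRvJjgw1kSr5/rYxE=).
```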

Okay... now the token is at the client end. In which ways is the client application going to use this token?

One way is to use it as a SupportingToken; the other is to use it as a ProtectionToken.

When we use the SAML token as a ProtectionToken, the client application can use it to encrypt/sign the messages going from the client to the service endpoint. The question then is which key the client would use to sign and encrypt - it's the same key added to the SubjectConfirmation by the STS. But that key is encrypted with the public key of the service endpoint, so the client won't be able to decrypt it and get access to the hidden key.

There is another way: the STS passes the generated key to the client. Look at the following element, also included in the response from the STS to the client - this is outside the Assertion element.
In the Entropy/BinarySecret, the STS passes the entropy it created to generate the key. The key is generated as a function of the client entropy and the STS entropy - the client already knows its own entropy and can find the STS entropy inside Entropy/BinarySecret in the response - so the client can derive the key from those.

Following would be the WS-Security Policy at the service end, which expects SAML token as a ProtectionToken.
<wsp:Policy wsu:Id="SgnOnlyAnonymous"
    xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
    xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
    xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702"
    xmlns:wsa="http://www.w3.org/2005/08/addressing"
    xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
  <wsp:ExactlyOnce>
    <wsp:All>
      <sp:SymmetricBinding>
        <wsp:Policy>
          <sp:ProtectionToken>
            <wsp:Policy>
              <sp:IssuedToken>
                <sp:Issuer>
                  <wsa:Address>http://localhost:8080/axis2/services/STS</wsa:Address>
                </sp:Issuer>
                <sp:RequestSecurityTokenTemplate>
                  <t:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1</t:TokenType>
                  <t:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/SymmetricKey</t:KeyType>
                  <t:KeySize>256</t:KeySize>
                </sp:RequestSecurityTokenTemplate>
                <wsp:Policy>
                  <sp:RequireInternalReference />
                </wsp:Policy>
              </sp:IssuedToken>
            </wsp:Policy>
          </sp:ProtectionToken>
          <sp:AlgorithmSuite>
            <wsp:Policy><sp:Basic128 /></wsp:Policy>
          </sp:AlgorithmSuite>
          <sp:Layout>
            <wsp:Policy><sp:Lax /></wsp:Policy>
          </sp:Layout>
          <sp:IncludeTimestamp />
          <sp:OnlySignEntireHeadersAndBody />
        </wsp:Policy>
      </sp:SymmetricBinding>
      <sp:SignedParts>
        <sp:Header Name="To" Namespace="http://www.w3.org/2005/08/addressing" />
        <sp:Body />
      </sp:SignedParts>
      <sp:Wss11>
        <wsp:Policy>
          <sp:MustSupportRefKeyIdentifier />
          <sp:MustSupportRefIssuerSerial />
          <sp:MustSupportRefThumbprint />
          <sp:MustSupportRefEncryptedKey />
          <sp:RequireSignatureConfirmation />
        </wsp:Policy>
      </sp:Wss11>
      <sp:Trust13>
        <wsp:Policy>
          <sp:MustSupportIssuedTokens />
          <sp:RequireClientEntropy />
          <sp:RequireServerEntropy />
        </wsp:Policy>
      </sp:Trust13>
    </wsp:All>
  </wsp:ExactlyOnce>
</wsp:Policy>
When we use the SAML token as a SupportingToken, we basically do nothing with it other than sending it as it is to the service end in the SOAP Security header. The SubjectConfirmation goes unused here, but the service end can still verify that the token was issued by a trusted STS, by verifying the signature.

Following is the WS-Security Policy at the service end, which expects the SAML token as a SupportingToken.
<wsp:Policy
    xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
    xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702"
    xmlns:wsa="http://www.w3.org/2005/08/addressing"
    xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512"
    xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity">
  <wsp:ExactlyOnce>
    <wsp:All>
      <sp:SupportingTokens>
        <wsp:Policy>
          <sp:IssuedToken>
            <sp:Issuer>
              <wsa:Address>http://localhost:8080/axis2/services/STS</wsa:Address>
            </sp:Issuer>
            <sp:RequestSecurityTokenTemplate>
              <t:Claims Dialect="http://schemas.xmlsoap.org/ws/2005/05/identity">
                <ic:ClaimType Uri="" />
              </t:Claims>
            </sp:RequestSecurityTokenTemplate>
            <wsp:Policy>
              <sp:RequireInternalReference />
            </wsp:Policy>
          </sp:IssuedToken>
        </wsp:Policy>
      </sp:SupportingTokens>
    </wsp:All>
  </wsp:ExactlyOnce>
</wsp:Policy>

Wednesday, November 9, 2011

Cross Domain Authentication Patterns - Kerberos with STS

Business Requirements :

1. Users from domain A - need to access a service in domain B
2. Not all the users from domain A should be able to access the service in domain B [only a given group of users]
3. Users are in a Windows domain and should not be asked again to enter any credentials to access the service in domain B

What we need to achieve is..

User logs in to his Windows machine and seamlessly accesses the service in domain B - with no additional authentication steps.

Pattern - as per the diagram above..

1 & 2 : User talks to the Kerberos KDC [AS] - authenticates and gets a Kerberos TGT. This communication with the KDC happens underneath when the user logs in to his Windows machine.

3 & 4 : The user program, using the TGT, gets a Kerberos service ticket [from the TGS] to access the STS.

5 & 6 : Using the Kerberos ticket issued to it, the user program authenticates to the STS and obtains a SAML token via WS-Trust. The STS also carries out an authorization check to see whether the user is eligible to access the service in domain B.

7 & 8 : The user program uses the obtained SAML token to authenticate to the service in domain B. The service validates that the token was issued by a trusted STS by verifying the signature.
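The eight steps can be wired together as follows. This is purely illustrative stub code - every function, name, and return value here is hypothetical (no real KDC, STS, or service is involved) - but it makes the ordering of the artifacts concrete:

```python
def kdc_authenticate(user):
    """Steps 1 & 2: AS exchange at Windows login; yields a TGT (stubbed)."""
    return f"TGT({user})"

def kdc_service_ticket(tgt, spn):
    """Steps 3 & 4: TGS exchange; yields a service ticket for the STS (stubbed)."""
    return f"ST({spn} via {tgt})"

def sts_issue(ticket, applies_to, user_in_permitted_group=True):
    """Steps 5 & 6: WS-Trust issue; the STS also authorizes the user (stubbed)."""
    if not user_in_permitted_group:
        raise PermissionError("user may not access the domain B service")
    return f"SAML(audience={applies_to})"

def call_service(saml_token, endpoint):
    """Steps 7 & 8: the service accepts the token after verifying the STS signature (stubbed)."""
    return f"response from {endpoint}"

tgt = kdc_authenticate("alice@DOMAINA")
st = kdc_service_ticket(tgt, "HTTP/sts.domain-a.example")
token = sts_issue(st, "https://service.domain-b.example")
print(call_service(token, "https://service.domain-b.example"))
```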

Tuesday, November 8, 2011

Claim based authorization with WSO2 Identity Server

This blog post explains how to set up WSO2 Identity Server to do claim based authorization with XACML.

1. Download WSO2 Identity Server latest version from here.

2. The default user store of WSO2 Identity Server runs on an embedded ApacheDS server. In case you need to point it to an external LDAP server, you can do so through a configuration change. This blog post explains how to integrate Oracle Directory Server as the user store of WSO2 Identity Server.

3. Start the WSO2 Identity Server from [IS_HOME]\bin

4. Let's now define our authorization policy in plain English.

"A given resource can be accessed only by any user having an email address from wso2 belonging to a particular role and all the requests to any other resource other than this should fail"

5. Save the following policy into a local file and import the file to the WSO2 Identity Server XACML engine: Main --> Entitlement --> Administration --> Import New Entitlement Policy --> File System, and import the policy. Then, from the policy list view, click the Enable button against the uploaded policy to enable it.

6. Looking at the policy, you might have noticed that I have used an email claim. This should map to the attribute id corresponding to the email in the underlying user store. If it is LDAP, then it should map to the 'mail' attribute id. This mapping is done through the Claim Management UI,

7. Configure --> Claim Management -->

8. Now you can see all the claims defined under the selected dialect

9. Click the Edit link against any claim you want to update, and set the "Mapped Attribute" value to the attribute id from the underlying user store.

10. You can try the policy we defined, from the Entitlement TryIt tool. Main --> Entitlement --> TryIt.
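The plain-English rule from step 4 can also be expressed as a simple predicate; here the resource URI and role name are placeholders of my own, not values from the actual XACML policy:

```python
PROTECTED_RESOURCE = "http://localhost:8280/services/echo"  # hypothetical resource
REQUIRED_ROLE = "admin"                                     # hypothetical role

def permit(resource: str, email: str, roles: list) -> bool:
    """Permit only the protected resource, and only for a wso2 email
    address in the required role; every other request fails."""
    if resource != PROTECTED_RESOURCE:
        return False
    return email.endswith("@wso2.com") and REQUIRED_ROLE in roles
```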

Monday, November 7, 2011

ApacheCon Vancouver : Training on Web Services Security

Today is the first day at ApacheCon 2011 @ Vancouver - Canada.

My training on Web Services Security started around 2.30 in the afternoon.

The first part was a presentation on different security patterns and standards - then we started digging into the Rampart code.

All the samples I used are available here. You can use a simple SVN client to get the code. It comes as Eclipse projects; when loaded into an Eclipse workspace, just set the SAMPLES_HOME environment variable in Eclipse to the root of the downloaded code - it should build fine then... Following are some of the resources that you can look into:

2. Understanding WS – Security Policy Language
3. Applying policies at binding hierarchy
4. Password Callback Handlers Explained
5. SAML Bearer Confirmation Method, Sender Vouches & Holder-of-Key
6. Identity Delegation in WS Trust 1.4
7. WS Security Policy – Asymmetric Binding Explained

Thursday, October 27, 2011

Cloud Security Videos