Understanding Logjam and making WSO2 servers safe

Logjam's discovery (May 2015) came as a result of follow-up investigations into the FREAK (Factoring RSA Export Keys) flaw, which was revealed in March 2015.

Logjam is a set of vulnerabilities in the Diffie-Hellman key exchange that affect browsers and servers using TLS. Before we delve deep into the vulnerability, let's have a look at how the Diffie-Hellman key exchange works.

How does Diffie-Hellman key exchange work?


Let's say Alice wants to send a secured message to Bob over an unsecured channel. Since the channel is not secured, the message itself has to be secured, so that it cannot be seen by anyone other than Bob.

Diffie-Hellman key exchange provides a way to exchange keys between two parties over an unsecured channel, so that the established key can be used to encrypt messages later. First, both Alice and Bob have to agree publicly on a prime modulus (p) and a generator (g). These two numbers need not be protected. Then Alice selects a private random number (a) and calculates g^a mod p, which is also known as Alice's public secret - let's say it's A.

In the same manner, Bob picks his own private random number (b) and calculates g^b mod p, which is also known as Bob's public secret - let's say it's B.

Now, both will exchange their public secrets over the unsecured channel, that is A and B - or g^a mod p and g^b mod p.

Once Bob receives A from Alice, he will calculate the common secret (s) as A^b mod p, and in the same way Alice will calculate the common secret as B^a mod p.

Bob's common secret: A^b mod p -> (g^a mod p ) ^b mod p -> g^(ab) mod p

Alice's common secret: B^a mod p -> (g^b mod p ) ^a mod p -> g^(ba) mod p

Here comes the beauty of modular arithmetic. The common secret derived at Bob's end is the same as the one derived at Alice's end. The bottom line is: to derive the common secret you must know either p, g, a and B, or p, g, b and A. Anyone intercepting the messages transferred over the wire would only know p, g, A and B - so they are not in a position to derive the common secret.
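The exchange above can be sketched in a few lines of Python. The numbers here are deliberately tiny toy values I picked to make the arithmetic easy to follow - a real deployment uses a prime of 2048 bits or more.

```python
# Toy Diffie-Hellman exchange with deliberately tiny numbers
# (real deployments use primes of 2048 bits or more).
p = 23  # public prime modulus, agreed by Alice and Bob
g = 5   # public generator

a = 6             # Alice's private random number
A = pow(g, a, p)  # Alice's public secret: g^a mod p

b = 15            # Bob's private random number
B = pow(g, b, p)  # Bob's public secret: g^b mod p

# Each party combines the other's public secret with its own private number.
s_alice = pow(B, a, p)  # B^a mod p = g^(ba) mod p
s_bob   = pow(A, b, p)  # A^b mod p = g^(ab) mod p

assert s_alice == s_bob  # both sides derive the same common secret
print(s_alice)           # -> 2
```

An eavesdropper who sees only p, g, A and B would have to solve the discrete logarithm problem to recover a or b - which is exactly what the prime length discussed below is protecting against.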

How does Diffie-Hellman key exchange relate to TLS? 

Let's have a look at how TLS works.


I will not go through each and every message flow in the above diagram here, but will only focus on the Server Key Exchange and Client Key Exchange messages.

The Server Key Exchange message will be sent immediately after the Server Certificate message (or the Server Hello message, if this is an anonymous negotiation).  The Server Key Exchange message is sent by the server only when the Server Certificate message (if sent) does not contain enough data to allow the client to exchange a premaster secret. This is true for the following key exchange methods:
  • DHE_DSS 
  • DHE_RSA 
  • DH_anon
It is not legal to send the Server Key Exchange message for the following key exchange methods:
  • RSA 
  • DH_DSS 
  • DH_RSA 
Diffie-Hellman is used in TLS to exchange keys based on the cipher suite agreed upon during the Client Hello and Server Hello messages. If it is agreed to use DH as the key exchange protocol, then in the Server Key Exchange message the server will send over the values of p, g and its public secret (Ys), keeping the private secret (Xs) to itself. In the same manner, using the p and g shared by the server, the client will generate its own public secret (Yc) and private secret (Xc), and will share Yc with the server via the Client Key Exchange message. In this way, both parties can derive their common secret.

How would someone exploit this, and what in fact is the Logjam vulnerability?

On 20th May, 2015, a group from INRIA, Microsoft Research, Johns Hopkins, the University of Michigan, and the University of Pennsylvania published a deep analysis of the Diffie-Hellman algorithm as used in TLS and other protocols. This analysis included a novel downgrade attack against the TLS protocol itself called Logjam, which exploits EXPORT cryptography.

In the DH key exchange, the cryptographic strength relies on the prime number (p) you pick - not, in fact, on the random numbers picked by the server side or the client side. It is recommended that the prime number be at least 2048 bits long. The following table shows how hard it is to break the DH key exchange, based on the length of the prime number.


Fortunately, no one is using 512-bit prime numbers - except in EXPORT cryptography. During the crypto wars of the 90's, it was decided to weaken the ciphers used to communicate outside the USA, and these weaker ciphers are known as EXPORT ciphers. This law was overturned later, but unfortunately TLS was designed before that and still carries support for EXPORT ciphers. Under the EXPORT restrictions, DH prime numbers cannot be longer than 512 bits. If the client wants to use DH EXPORT ciphers with a 512-bit prime number, then during the Client Hello message of the TLS handshake it has to send a DHE_EXPORT cipher suite.

No legitimate client wants to use a weak prime number, so a client will never suggest DHE_EXPORT to the server - but still, most servers do support the DHE_EXPORT cipher suite. That means if someone in the middle manages to intercept the Client Hello initiated by the client and change the requested cipher suite to DHE_EXPORT, the server will still accept it, and the key exchange will happen using a weaker prime number. These types of attacks are known as TLS downgrade attacks, since the cipher suite originally requested by the client gets downgraded by tampering with the Client Hello message.

But wouldn't this change ultimately be detected by the TLS protocol itself? TLS has a provision to detect whether any of the handshake messages were modified in transit, by validating a hash of all the messages sent and received by both parties - at both ends, at the end of the handshake. The client derives the hash of the messages it sent and received and sends it to the server, and the server validates that hash against the hash of all the messages it sent and received. Then the server derives the hash of the messages it sent and received and sends it to the client, and the client validates it in the same way. But since by this time the common secret is established, the hash is encrypted with the derived secret key - which, at this point, is known to the attacker, who can recover it by breaking the weak 512-bit key exchange. So the attacker can create a hash that will be accepted by both parties, encrypt it, and send it over to both the client and the server.

To protect against this attack, the server should not respond to any of the weaker ciphers - in this case, DHE_EXPORT.

How to remove the support for weaker ciphers from WSO2 Carbon 4.0.0+ based products?

The cipher set used in a Carbon server is defined by the embedded Tomcat server (assuming JDK 1.7.*).
  • Open the CARBON_HOME/repository/conf/tomcat/catalina-server.xml file. 
  • Find the Connector configuration corresponding to TLS. Usually there are only two connector configurations, and the one corresponding to TLS has the connector property SSLEnabled="true". 
  • Add a new property, "ciphers", inside the TLS connector configuration, with the value as follows.
    • If you are using Tomcat version 7.0.34:
      •  ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,SSL_RSA_WITH_DES_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA"
    • If you are using Tomcat version 7.0.59:
      •  ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA" 
  • Restart the server. 
Now, to verify that the configuration is set correctly, you can run TestSSLServer.jar, which can be downloaded from here.

$ java -jar TestSSLServer.jar localhost 9443 

In the output of the above command, there is a section called "Supported cipher suites". If all configurations are done correctly, it should not contain any EXPORT ciphers.

Firefox v39.0 onwards does not allow access to web sites that support DHE with keys shorter than 1023 bits (not just DHE_EXPORT). Key lengths of 768 bits and 1024 bits are assumed to be attackable, depending on the computing resources available to the attacker. Java 7 uses 768-bit keys even for non-export DHE ciphers. This will probably not be fixed until Java 8, so we cannot use these ciphers. It is recommended to remove not just the DHE_EXPORT cipher suites, but all the DHE cipher suites. In that case, use the following for the 'ciphers' configuration.
  • If you are using Tomcat version 7.0.34:
    • ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,SSL_RSA_WITH_DES_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA"
  • If you are using Tomcat version 7.0.59:
    • ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA" 
The above is also applicable to Chrome v45.0 onwards.
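As a quick sanity check on top of TestSSLServer, you can scan the cipher string you configured for the suites discussed above. This is just an illustrative sketch, not part of any WSO2 tooling - the `weak_suites` helper is my own invention. It flags EXPORT suites and, per the Firefox/Chrome note above, all DHE suites as well.

```python
# Illustrative check: flag cipher suites the Logjam advice says to drop -
# EXPORT suites, plus all DHE suites (which negotiate short DH primes on
# Java 7). The helper name here is hypothetical, not a WSO2 API.
def weak_suites(cipher_string):
    suites = cipher_string.split(",")
    return [s for s in suites if "EXPORT" in s or "_DHE_" in s]

# The hardened list recommended above for Tomcat 7.0.59:
hardened = ("SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,"
            "TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA")
print(weak_suites(hardened))  # -> [] (nothing left to flag)

# A list still carrying a DHE suite gets flagged:
old = "TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA"
print(weak_suites(old))       # -> ['TLS_DHE_RSA_WITH_AES_128_CBC_SHA']
```

This only inspects the configured string, of course; TestSSLServer remains the authoritative check of what the running server actually negotiates.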

How to remove the support for weaker ciphers from WSO2 pre-Carbon 4.0.0 based products?

  • Open CARBON_HOME/repository/conf/mgt-transports.xml
  • Find the transport configuration corresponding to TLS; usually this has the port 9443 and the name https.
  • Add the following new element:
    • <parameter name="ciphers">SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,SSL_RSA_WITH_DES_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA</parameter>

WSO2 Identity Server

The Inside Story


WSO2 was my second job since graduation.

Like anyone excited about his first day at work, I walked into this amazing, yet quite simple, office building on Flower Road, right opposite Ladies' College. I knew nothing yet about what I had to do - and was thrown into a bunch of people who were developing a product called 'WSO2 Identity Solution'.

WSO2 was nothing like as big as it is today. If I am not mistaken, we had no more than 30 engineers.

It was the 1st of November 2007, and the entire Identity Solution team was busy working on its 1.0.0 release. Oh, yes... I've been there since the 1.0.0 release! And I am the only one who remained in the team till its 5.0.0 release, which was in June 2014. This qualifies me enough to write this blog post. I've been with the team all the way throughout - all the way, with sweet gains and bitter pains.

We had only three people actively working on the Identity Solution product in 2007. Do not count me yet - I had just joined. They were Ruchith, Dimuthu and Dumindu. Nandana was there too, but he was mostly working on the Apache Rampart project.

Identity Solution 1.0.0 was released in December, 2007.

WSO2 Identity Solution 1.0.0 Released! http://blog.facilelogin.com/2008/12/wso2-identity-solution-10-released.html

I still remember Dimuthu arguing with Ruchith: 'Are you sure we want to release this now?'. Dimuthu was an amazing character in the Identity Solution team, and since then we have been extremely good friends. After some years she went on maternity leave and came back; I wrote the following email to the team list (email group), which is a clear reflection of who Dimuthu is :-). She is now a Director at WSO2 and also leads WSO2 App Factory.
The ring tone heard for a while with no one to pick - once again I hear from my next desk... 
The humming came around tea - comes once again and we know it's tea.... 
Sound of a 'punch' on the desk, hear we again - she has fixed an issue.. 
She is back - and she is a mother now - of a wonderful kid... 
She was a starting member of Axis2 and a key contributor to Rampart, WSO2 Carbon and Identity Server... 
More than anything else she is the 'Mother of User Manager'.. 
It's my utmost pleasure to welcome back DimuthuL - after the maternity leave..
(29th November, 2010)
Ruchith was the key architect behind the Identity Solution and also its first product lead. Interestingly, he is also the very first WSO2 employee. Most of the Apache Rampart code was written by Ruchith. WSO2 had a great foundation in SOAP and was actively involved in Apache Axis2 development. For anyone new to Rampart, it is the Axis2 module for SOAP security.

By December 2007 we had only a few products: Web Services Application Server (WSAS), Enterprise Service Bus (ESB), Data Services Solution (later became Data Services Server), Registry (later became Governance Registry) and the Identity Solution (later became Identity Server).

Identity Solution 1.0.0 only had support for Information Card (InfoCard). Information Card is an open standard, mostly pushed by Microsoft and led by Kim Cameron. Since WSO2 had a very strong foundation in WS-Security and WS-Trust, implementing the Information Card specification was quite straightforward. We were among the very few Java implementations that had support for Information Card at that time. We also actively participated in most of the interop events. On my first day at WSO2 I didn't meet Ruchith; he was in the USA participating in an Information Card interop event. Interop events definitely helped us steer the product in the right direction and validate our implementation. By the way, there was a popular joke at that time: an interop event is a place where all the other vendors test with Microsoft and fix anything that does not work with them :-).

This article, written by Dimuthu, explains how to add Information Card support to Java EE web applications: http://wso2.com/library/2994/

Identity Solution 1.0.0 was an Information Card provider. In addition to the product release, we also released a set of Information Card relying party components along with the product. One was a Java EE servlet filter and the other was an Apache module. Dumindu was the one who developed the Apache module; he was the C guy in the Identity Solution team.

How to setup an Information Card relying party with Apache: http://blog.facilelogin.com/2008/07/building-deploying-modcspace-on-windows.html

It was an era when a lot of changes started to happen in the field of Internet identity. Kim Cameron, the chief identity architect of Microsoft, was one of the pioneers who led the effort. He built the famous Seven Laws of Identity with community contributions. I was passionate about Kim's writings and he was a hero for me. Later, when I got an opportunity to visit Microsoft in April 2008 with Ruchith to participate in an Information Card design event, I was hoping I would be able to meet Kim - but unfortunately he didn't come to the event. After many years, I met Kim several times and got the opportunity to exchange a few ideas. Kim is arguably known as the father of modern Internet identity.

OpenID, in 2005, followed in the footsteps of SAML and started to challenge Information Card, which was the most prominent emerging standard by then, with a lot of buzz around it. OpenID was initiated by Brad Fitzpatrick, the founder of LiveJournal. The basic principle behind both OpenID and SAML is the same: both can be used to facilitate web single sign-on and cross-domain identity federation. OpenID was not competing with Information Card, even though many got confused about when to use which.

My first task was to add OpenID support to the WSO2 Identity Solution. It took almost 3+ months and shipped with Identity Solution 1.5.0 in April 2008, just before the April holidays. In addition to OpenID 1.1 and 2.0, we also added OpenID InfoCard support; the OpenID InfoCard specification talks about how to send OpenID claims in an Information Card. By the way, IS 1.5.0 was the first release for which I acted as the release manager. For both IS 1.0.0 and 1.5.0, Chanaka (Jayasena) did the complete UI theming.

Information Cards vs OpenID Information Cards: http://blog.facilelogin.com/2008/01/infocards-vs-openid-infocards.html

Both IS 1.0.0 and 1.5.0 used Struts to build their web interfaces. Most of the WSO2 products had gotten rid of Struts by then; IS was the only remaining product. There was a rumor that it was because of Ruchith that the Identity Solution was still able to go ahead with Struts :-).

There was a discussion going on at that time about building a common framework for all WSO2 products. With that, the ESB wouldn't have its own web interface, the App Server wouldn't have its own web interface - all the products would share the same web interface, the same look and feel, and the same object model. In early 2008 we had the WSO2 Carbon kickoff meeting at the Trans Asia hotel (now Cinnamon Lakeside), Colombo. Both Ruchith and I were there representing the Identity Solution team. If I remember correctly, I was the only one at that meeting who didn't speak a single word :-).

WSO2 Carbon later became an award winning framework to build servers - based on OSGi.

Just after the IS 1.5.0 release, we had a client interested in implementing OpenID support with it. This was my first client engagement, as well as the very first client for the Identity Solution. Ruchith and I had to fly to London. We were informed about the engagement a month or two in advance, but neither of us was keen to apply for a visa. At that time it took a minimum of three weeks to get a UK visa, but we applied just a week or two before the trip. Both of us were given an earful by our CTO, Paul. To everyone's surprise, both of us got visas within just 3 days :-). Even today we do not know how that magic happened!

It was my first trip out of the country, and I was lucky to be accompanied by Ruchith, who was an amazing guide! We met Paul (our CTO) in London, and all three of us went to the client. Just before we entered the client premises, Paul turned to us - first to Ruchith: 'You are the security expert' - then to me: 'You are the OpenID expert'. We went in.

After finishing our work with our first Identity Solution client, Ruchith and I flew to Seattle to participate in a technical design meeting at Microsoft around the future of Information Card. Then we had to return to London to finish some more work with our client. Interestingly, on our return trip to London, it was only at the reception of the Holiday Inn that we found there were no hotel bookings for us. We got Udeshika (now our Vice President of Administration) on the phone, and she settled everything for us.

A few weeks after returning to Colombo from the 3-week-long UK/USA trip, I had to get ready for the first ever Identity Solution webinar. Unlike nowadays, we did not have webinars frequently then. It was on 'Understanding OpenID', and I was overjoyed by the response!


Ruchith left the company in July 2008 for his higher studies. He joined Purdue University and, after completing his Ph.D. last year, now works for Amazon. The photo above was taken at Ruchith's farewell - from left to right: Dimuthu, Dumindu, Ruchith, Nandana and me. The image on Dumindu's shirt is an Information Card.

Never they leave.. just checking out.. : http://blog.facilelogin.com/2008/07/never-they-leave-just-checking-out.html

Everyone started to be fascinated by Carbon and OSGi. The plan was to build WSO2 WSAS and the ESB on top of WSO2 Carbon first. Focus on the Identity Solution was diluted a bit during this time. Security was a key part of the Carbon core, and the entire Identity Solution team had to work on the migration - to make the User Manager component OSGi compliant and make it a part of the Carbon platform.

WSO2 User Manager started as a WSO2 Commons project, with its own release cycle and its own product page. It was just a library, included in all WSO2 products. User Manager knows how to connect to an LDAP, Active Directory or JDBC based user store. Most of the User Manager code at that time was written by Dimuthu - she was called the User Manager Manager :-)

Nandana was the Rampart guy. He was actively contributing to the Apache Rampart and Rahas. During the initial Carbon release Nandana played a key role in bringing them into the Carbon platform.

After the initial release of WSO2 WSAS and ESB on top of Carbon, next came the Identity Solution, Registry and the Mashup Server. It was almost a complete rewrite. After Ruchith left the company for higher studies, I was appointed Product Manager of the Identity Solution and had to lead the migration to the Carbon platform. It was easier by then, since the complete pain had already been taken by the ESB and WSAS teams - we knew whom to meet when we hit an issue.


During the same period, in April 2009, Thilina joined the Identity Solution team. In the above photo, from left to right: Thilina, Nandana and me.

Thilina's addition to the team reduced our migration load. His first contribution was to implement SAML 2.0 token profile support for Apache Rampart. Rampart was a key component in the Identity Solution, and one of our customers, waiting for the next Identity Solution release, had requested SAML 2.0 token profile support. In addition, Thilina implemented XMPP based authentication for OpenID logins. XACML 2.0 support was another customer requirement. For anyone new to XACML, it is the de facto standard for policy based access control. I implemented that on top of the Sun XACML implementation (later we forked Sun XACML as WSO2 Balana and implemented XACML 3.0 on top of it).

After IS 1.5.0, it took more than a year to do the next IS release - which was IS 2.0.0.

Sumedha, who was leading the Data Services Solution by that time (now the Director of API Architecture), came up with a suggestion to rename that product to Data Services Server. We followed the same approach, and in July 2009 released Identity Server 2.0.0. IS 2.0.0 is the very first version of the Identity Server built on top of the WSO2 Carbon platform.

Even though we added XACML support in IS 2.0.0, it was at a very basic level. There was no editor; you simply had to write the policy by hand. The first comment from one of our customers was: 'Nice - but you've got to have a PhD in XACML to write policies with your Identity Server'. He was not kidding, and we took it dead seriously. Later, when Asela joined the IS team, he worked on developing one of the best XACML policy editors for the Identity Server.

A couple of months after the IS 2.0.0 release, in September 2009, Nandana left the company to pursue higher studies. He joined Universidad Politécnica de Madrid as a Ph.D. student. The following photo was taken during Nandana's farewell.


Soon after Nandana left, Thilina stepped up and filled the vacuum created by his absence. By then we had Dimuthu, Thilina and me in the Identity Server team. In October 2009, we released Identity Server 2.0.1. In addition to the bug fixes, we also added WS-Federation passive profile support in IS 2.0.1. Just a month after the 2.0.1 release, I did a demo and a talk on its WS-Federation passive profile support at ApacheCon 2009, in November. It was my first ApacheCon, and also the 10th anniversary of ApacheCon.

I met Nandana, who had left WSO2 in September, at the ApacheCon.


We didn't add much to the Identity Server 2.0.2 release in November 2009 or the Identity Server 2.0.3 release in February 2010; those were mostly bug fix releases. Focus on new Identity Server features faded a bit during this period, mostly due to the increased interest in the cloud.

Around mid 2009, during a company dinner with the board at the Hilton Hotel Colombo, I met Paul. He mentioned the company's interest in moving into the cloud, and that WSO2 Governance Registry and WSO2 Identity Server had been identified as the first offerings to start with as cloud services.

Governance as a Service (GaaS) was the first ever WSO2 cloud offering, in January 2010. Next to follow was Identity as a Service (IDaaS), in February 2010. Thilina played a major role in adding multi-tenancy support to the Identity Server. OpenID, Information Card and XACML were all made available as cloud services. In 2011, we were awarded the KuppingerCole European Identity Award in the cloud provider offerings category.

Still, the Identity Server team was just 3 members: Dimuthu, Thilina and me. Most of the time Dimuthu focused on the user management component. It was the time we got a few new faces: Amila (Jayasekara) joined the Identity Server team in March 2010, Asela in April and Hasini in September. We were six then. Around the same time Dimuthu went on maternity leave, and Amila and Hasini started looking into what Dimuthu did. We were back to 5 members.

Interestingly, Asela joined WSO2 as a QA engineer and straight away started testing the Identity Server. He is an amazing guy, and also a batchmate of Thilina from the University of Moratuwa. After a few months of him testing the Identity Server and reporting bugs, we felt it was enough to have him in QA and took him directly into the Identity Server development team. In a twist of fate, once Asela joined the development team he had to fix the bugs he himself had reported as a QA engineer :-)

Once Hasini joined the team, Amila shifted his focus from the user manager to Rampart and WSS4J improvements. He worked on upgrading Rampart to work with WSS4J 1.6.

Once done with the IDaaS deployment, we again started thinking about adding more ingredients to the Identity Server product. Initially GaaS had its own login and IDaaS had its own. There were also more cloud offerings to follow: ESB, App Server, Data Services Server and many more. One critical requirement that came up, due to the need to log in separately to each cloud service, was support for single sign-on (SSO) across all WSO2 cloud services. We already had support for OpenID, but we picked SAML over OpenID to cater to our needs there.

SAML was mostly used to facilitate web single sign-on, either within the same domain or between domains. SAML 2.0, in 2005, was built on the success of SAML 1.1 and other similar single sign-on standards. It unified the building blocks of federated identity in SAML 1.1 with inputs from the Shibboleth initiative and the Liberty Alliance's Identity Federation Framework. It was a very critical step towards full convergence of the federated identity standards.

Someone had to implement SAML and add it to the Identity Server. Naturally it was Thilina, who already had experience working with SAML - while implementing SAML 2.0 token profile support for Apache Rampart. Thilina implemented SAML 2.0 Web SSO and SLO profiles for Identity Server.

OAuth was another area of focus for us. OAuth 1.0 was the first step towards the standardization of identity delegation. I started adding 2-legged OAuth 1.0 support to the Identity Server, and in May 2010, when we released Identity Server 3.0.0, both the SAML 2.0 and OAuth 1.0 features were included.

2-legged OAuth with OAuth 1.0 and OAuth 2.0: http://blog.facilelogin.com/2011/12/2-legged-oauth-with-oauth-10-and-20.html

Asela played the release manager role for Identity Server 3.0.1, released in September 2010 as a bug fix release.

Even though we had support for XACML in the Identity Server, we were left with two issues: we didn't have a proper XACML editor, and we didn't have support for XACML 3.0. We had implemented XACML support on top of Sun XACML, which by that time was a dead project with only 2.0 support. Asela was our choice to work on this. His first serious task after joining the Identity Server team was to implement a XACML policy wizard. It was not easy; we did not want to build just another editor. After some iterations, Asela came up with one of the best policy editors out there for XACML. We included it for the first time in Identity Server 3.2.0, released in June 2011.

IS 3.2.0 also had another key feature addition apart from the XACML policy editor. So far we had shipped all WSO2 products with an H2 based user store. It was Amila who integrated Apache Directory Server LDAP with the Identity Server and all the other WSO2 products. Later, all the products except the Identity Server went back to using the H2 based user store. In addition, Amila also integrated the Kerberos KDC from Apache DS with the Identity Server, so Identity Server 3.2.0 could act as a Kerberos key distribution center.

While the team was working on the next Identity Server release, we had an interesting event in Sri Lanka: the National Best Quality ICT Awards 2011. WSO2 participated there for the first time in 2010, and WSO2 ESB won the overall gold. 2011 was the first time for WSO2 Identity Server. Along with the Identity Server, WSO2 Governance Registry, WSO2 Carbon and WSO2 Application Server were submitted for the awards, under different categories. Identity Server was submitted under the 'Research and Development' category. All four products were selected for the first round. Senaka (Governance Registry), Hiranya (Application Server), Sameera (Carbon) and I (Identity Server) presented each product before the judge panel. We went to the second round - and to the finals. We knew nothing about the results till they were announced at the awards ceremony on 8th October. I missed the awards night - I was with a customer in New Jersey, USA. It was Paul who passed me the message first, over chat, that the Identity Server had won a Gold. I was with Amila and Asanka - we were thrilled by the news. Governance Registry won a Silver award, Carbon won a Merit award and Application Server won the overall Gold.

In the following photo Thilina (third from the left) is carrying the award for the WSO2 Identity Server.


In November 2011, we did the Identity Server 3.2.2 release. It had more improvements to the existing feature set; one key improvement was support for 3-legged OAuth 1.0.

In December 2011, Identity Server 3.2.3 was released; Hasini was the release manager. One of the key improvements in IS 3.2.3 was a Thrift interface for the XACML PDP engine. Till then it was SOAP only, and we later found that Thrift was 16 times faster than SOAP over HTTP. IS 3.2.3 was a huge success. Even today, the largest deployment of the Identity Server is based on IS 3.2.3: one of our clients runs it as an OpenID Provider over a 4 million+ user base in Saudi Arabia.

The following photo was taken at the ELM office in Saudi Arabia; ELM implemented the OpenID support with WSO2 Identity Server 3.2.3, and later also did a case study with us.


After IS 3.2.3, it took almost a year to do the next release: Identity Server 4.0.0. During this time we got two new faces on the team, Suresh and Johann. I knew Suresh well, since he had done his internship at WSO2 and I had also supervised his final year university project, in which his team implemented OpenID based authentication for SOAP based services with WS-Security. They also implemented some of the WS-Federation features for Apache Rampart. Johann was totally new to the WSO2 Identity Server team.

IS 4.0.0 had major improvements and feature additions. Thilina developed the OAuth 2.0 support for IS 4.0.0, and it became a key part of the WSO2 API Manager's success. It was the time WSO2 made its entry into the API management market. Both Sumedha and I were initially involved in building the API Manager, and later Sumedha led it alone; I was mostly there since security is a key part of it. Thilina and Johann both got involved in the initial API Manager implementation, with Johann mostly working on integrating the API Manager with WSO2 Business Activity Monitor for statistics.

In July 2012, both Thilina and Amila left the company to pursue higher studies. Currently Thilina is a Ph.D. student at Colorado State University, and Amila is doing his Ph.D. at Indiana University Bloomington.

Asela was still busy with XACML. He is one of the top experts on it and writes the blog http://xacmlinfo.org/. It was high time we brought XACML 3.0 support to IS. The Sun XACML project was dead silent, so we made the decision to fork it and add XACML 3.0 support on top. We called the fork WSO2 Balana. Interestingly, Srinath came up with that name: Balana is a famous checkpoint close to Kandy, Sri Lanka, which protected the hill country from British invasion. Asela himself did almost all the development to add XACML 3.0 support to Balana.

Another feature we added to IS 4.0.0 was SCIM. One of the key standards for identity provisioning by that time was SPML. But it was too complex, bulky and biased towards SOAP, and people started to walk away from it. In parallel to the criticisms against SPML, another standard known as SCIM (Simple Cloud Identity Management - later changed to System for Cross-domain Identity Management) started to emerge. This was around mid 2010, initiated by Salesforce, Ping Identity, Google and others. WSO2 joined the effort sometime in early 2011.

SCIM is purely RESTful. The initial version supported both JSON and XML. SCIM introduced a REST API for provisioning and also a core schema (which can also be extended) for provisioning objects. SCIM 1.1 was finalized in 2012 and then donated to the IETF. Once in the IETF, the definition of SCIM had to be changed to System for Cross-domain Identity Management, and it no longer supports XML - only JSON.

Hasini was our in-house SCIM expert. Not only did she implement the SCIM support for the Identity Server, she was also a SCIM design team committee member. SCIM support was developed as a WSO2 Commons project, under the name WSO2 Charon. The name was suggested by Charith. Charon 1.0.0 was released in March 2012, just in time for the very first SCIM interop in Paris. Hasini represented WSO2 at the interop event.

One limitation we had in our SAML Web SSO implementation was that we did not support attributes. It was Suresh who implemented SAML Attribute Profile support for the Identity Server.

We also made more improvements to our XACML implementation, targeting the 4.0.0 release. Johann was the one who brought WS-XACML support to the Identity Server. In addition to the SOAP/HTTP and Thrift interfaces, we added a WS-XACML interface to our XACML PDP. This is one of the standard ways to communicate between a XACML PEP and a PDP. WS-XACML is quite heavy and has a huge impact on performance. If not for a strong customer requirement, we might not have added WS-XACML to the Identity Server.

We also further improved our Active Directory user store manager. It had been read-only; Suresh implemented read/write capabilities and later added support for Active Directory Lightweight Directory Services (AD LDS).

Another feature we added to IS 4.0.0 was Integrated Windows Authentication (IWA) support. With this, if you are already logged into your Windows domain, you need not re-login to the Identity Server. This was developed by Pulasthi, who was an intern then. After graduating, Pulasthi joined WSO2 in 2014 and became part of the Identity Server team. The IWA support in IS 4.0.0 was limited to the Identity Server's management console login; it was not available for SAML/OpenID based logins. Identity Server 5.0.0 later added that support.

With all these new features - Identity Server 4.0.0 was released in November 2012.

After the Identity Server 4.0.0 release we found an interesting client who was developing a mobile navigation app for a user base of more than 600 million. They were interested in using the Identity Server. Suresh, Tharindu and I were onsite for a week and came up with a design. Due to the large number of users, we all agreed to go ahead with a user store based on Apache Cassandra. In fact, the client suggested it, and Tharindu, who was an expert on Big Data, was with us to confirm it. We implemented a Cassandra based user store manager and plugged in Cassandra as a user store for the Identity Server. With this feature and some minor improvements, Identity Server 4.1.0 was released in February 2013. We also added support for multiple user stores, at a very basic level, to IS 4.1.0.

The Identity Server team by now had five members: Hasini, Asela, Johann, Suresh and me. Everyone lifted themselves to fill the gap created by the absence of Thilina and Amila. Darshana and Pushpalanka joined the Identity Server team a couple of months after the IS 4.1.0 release. I knew both of them even before they joined WSO2. I supervised the final year university project Darshana did - an interesting one, building a XACML policy engine based on an RDBMS. Pushpalanka was an intern at WSO2, and during her internship she did some interesting work around XACML.

The immediate next release after IS 4.1.0 was 4.5.0. The main focus of IS 4.5.0 was to enhance the user-friendliness of its user management UI and strengthen its multiple user store support. In addition, Suresh worked on adding OpenID Connect core support and Johann worked on implementing the SAML 2.0 grant type for OAuth 2.0.

Prior to IS 4.5.0, and after IS 4.1.0, the entire Identity Server team had to work hard on a customer project. We developed most of the features that went into IS 4.5.0 and bundled them into IS 4.2.0 (this version was never released). The entire team was desperate to make the project a success, but due to factors beyond our control we lost the project. This was the time Dulanja, Ishara, Venura and Dinuka joined the Identity Server team. Venura later left WSO2 to join a company in Singapore, and Dinuka moved to the USA with his wife, who had been admitted to a university there for higher studies. Dulanja and Ishara are still with WSO2 and later played a key role in the Identity Server 5.0.0 release.

In the following photo, Johann, Asela and I are at a Walmart store, on our way back from the above client, after successfully completing the deployment. Some customers we win, some we lose - that's life.


Hasini left the company in June 2013 for higher studies, a few months prior to the IS 4.5.0 release. She joined Purdue University. The following photo was taken at her farewell at the JAIC Hilton hotel in Colombo.


Darshana was the release manager for Identity Server 4.5.0, which was released in August 2013. Pushpalanka also played a key role in this release by developing the user store management component, which lets you add and configure multiple user stores from the Identity Server's management console.

WSO2 Identity Server 4.6.0 was released a few months after 4.5.0, in December 2013. It had only one new feature: identity provider initiated SAML SSO support. Johann played the release manager role for this release.

The 4.6.0 release was the end of one generation of the Identity Server. Nobody knew that till we released Identity Server 5.0.0 in May 2014. We took a completely new approach to IS 5.0.0. Till then we had been developing isolated features. We changed that approach and started to build user experiences instead of features. You still need to develop features to build user experiences - but the angle you look at it from is completely different. When you look at something from a different angle, what you see is different too.

Fifteen fundamentals behind WSO2 Identity Server 5.0.0: http://blog.facilelogin.com/2015/06/identity-brokerr-pattern-15-fundamentals.html

Building Identity Server 5.0.0 was not just a walk in the park. It went through several iterations. We had to throw away some work when we found better ways of doing things. In addition to the identity broker support with identity token mediation and transformation, IS 5.0.0 also introduced a new user interface for end users. Prior to that, both administrators and end users had to use the same management console, and the available functions were filtered by the role of the user. With IS 5.0.0 we built a whole new dashboard with Jaggery - a home-grown framework to write webapps and HTTP-focused web services covering all aspects of the application (front-end, communication, server-side logic and persistence) in pure JavaScript. This was initiated by Venura, who started developing the dashboard in JSP; after a couple of months Venura moved to the WSO2 App Manager product, since it required some security expertise. After Venura left the team, Ishara took over, and we changed everything from JSP to Jaggery. Venura left the company for Singapore after several months in the App Manager team.

WSO2 Identity Server 5.0.0 - Authentication Framework: http://blog.facilelogin.com/2014/10/what-is-new-in-wso2-identity-server-500.html 

The WSO2 Identity Server 5.0.0 release was a huge success. More Identity Server deployments are now on IS 5.0.0 than on any of its previous releases. The best thing about IS 5.0.0 is that it opened up a whole new world - being able to act as an identity broker. The 5.0.0 release is the first step in that direction, and an immensely sound foundation.

During the IS 5.0.0 release we got three new faces - Thanuja, Isura and Prasad. Even though they had very little experience at WSO2 by then, their contribution to the 5.0.0 release was fabulous.

The following photo was taken on the day we released Identity Server 5.0.0. From left: Suneth (who is from the QA team and worked on Identity Server testing with Pavithra and Ushani; Pavithra was the QA lead for IS testing for many releases), Isura, Ishara, Johann, me, Darshana, Dulanja, Thanuja and Prasad. Suresh, Pushpalanka and Chamath, who also contributed a lot to the 5.0.0 release, missed the shot.


On the same day Identity Server 5.0.0 was released, Johann was made the product manager. I was planning to move to the WSO2 Mountain View, USA office, and Johann had been carefully groomed for this position for several months. Even though I officially left the Identity Server team in May, I worked very closely with the team till I left the country for the USA on 14th January 2015.

The following token was given to me by the team on my last day at the WSO2 Colombo office. It's a representation of all the great memories I have had with the Identity Server team, since its inception.

OAuth 2.0 with Single Page Applications

Single Page Applications (SPAs) are known as untrusted clients. All the API calls from an SPA are made from JavaScript (or another scripting language) running in the browser.

The challenge is: how do you access an OAuth secured API from an SPA?

Here, the SPA is acting as the OAuth client (according to OAuth terminology), and it would be hard, or rather impossible, to authenticate the OAuth client. If we are to authenticate the OAuth client, the credentials would have to come from the SPA - the JavaScript itself, running in the browser - which is basically open to anyone who can see the web page.

The first fundamental in an SPA accessing an OAuth secured API is - the client cannot be authenticated in a completely legitimate manner.

How do we work around this fundamental?

The most common way to authenticate a client in OAuth is via the client_id and client_secret. If we are to authenticate an SPA, then we need to embed the client_id and client_secret in the JavaScript. This gives anyone out there the liberty to extract them from the JavaScript and create their own client applications.

What can someone do with a stolen client_id/client_secret pair?

They can use it to impersonate a legitimate client application and fool the user into giving his consent to access the user's resources on its behalf.

OAuth itself has security measures to prevent such illegitimate actions.

Both in the authorization code and the implicit grant types, the client can send the optional parameter redirect_uri in the grant request. This tells the authorization server where to redirect the user with the code (or the access token) after authentication and consent at the authorization server. As a countermeasure for the above attack, the authorization server must not blindly respect the redirect_uri in the grant request. It must validate it against the redirect_uri registered with the authorization server at the time of client registration. This can be an exact one-to-one match or a regular expression match.
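As a rough illustration, the registered redirect URL check could look like the following Python sketch. The client registry, its field names and the function name are all hypothetical, not any particular authorization server's API.

```python
import re

# Hypothetical registry of client registrations; the entries are illustrative.
REGISTERED_CLIENTS = {
    "client-123": {"redirect_uri": "https://app.example.com/callback"},
    "client-456": {"redirect_uri_pattern": r"https://app\.example\.com/.*"},
}

def validate_redirect_uri(client_id, requested_uri):
    """Accept the requested redirect URI only if it matches what was
    registered for the client: either exactly, or against a registered
    regular expression."""
    registration = REGISTERED_CLIENTS.get(client_id)
    if registration is None:
        return False
    exact = registration.get("redirect_uri")
    if exact is not None:
        return requested_uri == exact
    pattern = registration.get("redirect_uri_pattern")
    if pattern is not None:
        return re.fullmatch(pattern, requested_uri) is not None
    return False
```

With a check like this, a rogue client holding stolen credentials cannot redirect the response to a host it controls.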

In this way, even if the client_id and client_secret are stolen, the rogue client application will not be able to get hold of the access_token or any of the user's information.

Then what is the risk of losing the client_id and client_secret?

When you register an OAuth client application with an authorization server, the authorization server enforces throttling limits on the client application. For example, a given client application may only be allowed 100 authentication requests within any one minute interval. By stealing the client_id and client_secret, a rogue client application can impact the legitimate application by eating up all the available request quota - the throttling limit.

How do we avoid this happening in an SPA, where both the client_id and client_secret are publicly visible to anyone accessing the web page?

For an SPA there is no advantage in using the authorization code grant type - so it should use the implicit grant type instead. The authorization code grant type should only be used in cases where you can protect the client_secret. Since an SPA cannot do that, it should not use it.


One approach to overcome this drawback in an SPA is to make the client_id a one-time thing. Whenever the JavaScript is rendered, you get a new client_id, embed it in the JavaScript, and invalidate it at its first use. Each instance of the application rendered in the browser will have its own client_id, instead of all instances sharing the same client_id.

At the authorization server end, all these generated client_ids are mapped to a single parent client_id, and the throttling limits are enforced on the parent client_id.

Now, if a rogue client application still wants to eat all or part of the request quota assigned to the legitimate application, then for each request it has to load the legitimate web application and scrape through it to find the embedded client_id, and then use it. That means each authentication request going to the authorization server must be preceded by a request to the SPA to load the JavaScript. This can be protected against by enforcing denial of service protection measures at the SPA end, and possibly blacklisting the rogue client.
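The one-time client_id scheme could be sketched as follows, assuming a simple in-memory store at the authorization server end; all names here are illustrative, and a real server would persist this state.

```python
import secrets

# Illustrative in-memory state; a real authorization server would persist this.
parent_of = {}      # one-time client_id -> parent client_id
used = set()        # one-time client_ids that have already been spent

def issue_one_time_client_id(parent_client_id):
    """Mint a fresh client_id for one rendering of the SPA, mapped back
    to the parent client_id against which throttling is enforced."""
    one_time_id = secrets.token_urlsafe(16)
    parent_of[one_time_id] = parent_client_id
    return one_time_id

def redeem(one_time_client_id):
    """Resolve the parent client_id and invalidate the one-time id on
    first use; returns None if the id is unknown or already used."""
    if one_time_client_id in used or one_time_client_id not in parent_of:
        return None
    used.add(one_time_client_id)
    return parent_of[one_time_client_id]
```

The server would call issue_one_time_client_id each time it renders the SPA's JavaScript, and redeem when the grant request arrives.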


The next challenge is how to protect the access token. In an SPA, the access token is also visible to the end user. With the implicit grant type, the access token is returned to the browser as a URI fragment and is visible to the user. Now the user (who is a legitimate one) can use this access token himself (instead of the application using it) to access the back-end APIs, and eat up all the API request quota assigned to the application.

The second fundamental in an SPA accessing an OAuth secured API is - the access token cannot be made invisible to the end-user.

One lighter solution to this is to enforce a throttling limit at the per-client, per-end-user level, in addition to the per-client level. Then the rogue end user will just eat up his own quota and won't affect the other users accessing the same application.
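A per-client, per-user throttling check could be sketched like this, using a sliding window; the limits, store and function names are made up for illustration.

```python
import time
from collections import defaultdict, deque

# Illustrative limits; real values would come from client registration.
PER_CLIENT_LIMIT = 100       # requests per window across all users of a client
PER_CLIENT_USER_LIMIT = 10   # requests per window for one user of one client
WINDOW_SECONDS = 60

_hits = defaultdict(deque)   # key -> timestamps of recent admitted requests

def _has_headroom(key, limit, now):
    """Drop timestamps that fell out of the window, then check the limit."""
    q = _hits[key]
    while q and now - q[0] >= WINDOW_SECONDS:
        q.popleft()
    return len(q) < limit

def allow_request(client_id, user_id, now=None):
    """Admit the request only if both the per-client and the
    per-client-per-user sliding-window limits have headroom."""
    now = time.monotonic() if now is None else now
    client_key = ("client", client_id)
    user_key = ("user", client_id, user_id)
    if not (_has_headroom(client_key, PER_CLIENT_LIMIT, now)
            and _has_headroom(user_key, PER_CLIENT_USER_LIMIT, now)):
        return False
    _hits[client_key].append(now)
    _hits[user_key].append(now)
    return True
```

A user who burns through his own quota is rejected, while other users of the same client keep their headroom.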

Let's take another scenario - say it's not the end user, but someone else, who steals the user's access token from the URI fragment and then wants to use it to access resources on behalf of the legitimate user.

As a protection against this, we need to make the lifetime of the access token returned in the URI fragment extremely short, and also invalidate it on its first use. To get a new access token, the immediately preceding access token has to be provided. If the authorization server finds that the sequence of access tokens is broken at any point, it will invalidate all the access tokens issued against the original access token returned in the URI fragment. This pattern is known as 'rolling access tokens'.
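The rolling access token pattern could be sketched as follows; the in-memory store and names are illustrative only, not a description of any particular server's implementation.

```python
import secrets

# Illustrative token store; a real authorization server would persist this.
_chain_of = {}          # token -> chain id (the original fragment token)
_current_of = {}        # chain id -> the one token that is currently valid
_revoked_chains = set() # chains revoked after a broken sequence was detected

def issue_initial_token():
    """Token returned in the URI fragment; it starts a new chain."""
    token = secrets.token_urlsafe(16)
    _chain_of[token] = token
    _current_of[token] = token
    return token

def use_token(token):
    """Validate the token; on success, invalidate it and roll the chain
    forward to a freshly issued token. Replaying an already spent token
    breaks the sequence and revokes every token in the chain."""
    chain = _chain_of.get(token)
    if chain is None or chain in _revoked_chains:
        return None
    if _current_of[chain] != token:
        # Sequence broken: someone replayed a spent token.
        _revoked_chains.add(chain)
        return None
    new_token = secrets.token_urlsafe(16)
    _chain_of[new_token] = chain
    _current_of[chain] = new_token
    return new_token
```

A thief who replays the stolen fragment token after the legitimate client has already rolled forward takes down the whole chain, limiting the damage.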


In summary, this blog suggests three approaches to protect single page applications in accessing OAuth 2.0 secured APIs.
  • One-time client_ids mapped to a single parent client_id
  • Per application per user throttling policies
  • Rolling access tokens

[WSO2Con UK 2015] Connected Identity : The Role of the Identity Bus

http://www.slideshare.net/prabathsiriwardena/connected-identity-the-role-of-the-identity-bus

Identity Broker Pattern : 15 Fundamentals

Recent research by the analyst firm Quocirca confirms that many businesses now have more external users than internal ones: in Europe 58 percent transact directly with users from other businesses and/or consumers; for the UK alone the figure is 65 percent. Looking at history, most enterprises today grow via acquisitions, mergers and partnerships. In the U.S. alone, mergers and acquisitions volume totaled $865.1 billion in the first nine months of 2013, according to Dealogic. That's a 39% increase over the same period a year earlier - and the highest nine-month total since 2008.

Gartner predicts that by 2020, 60% of all digital identities interacting with enterprises will come from external identity providers.

I have written two blog posts highlighting in detail the need for an Identity Broker.
The objective of this blog post is to define fifteen fundamentals that should ideally be supported by an Identity Broker to cater to future identity and access management goals.

1st Fundamental 

Federation protocol agnostic :

  • Should not be coupled to a specific federation protocol like SAML, OpenID Connect, OpenID, WS-Federation, etc.
  • Should have the ability to connect to multiple identity providers over heterogeneous identity federation protocols. 
  • Should have the ability to connect to multiple service providers over heterogeneous identity federation protocols.
  • Should have the ability to transform ID tokens between multiple heterogeneous federation protocols.

2nd Fundamental

Transport protocol agnostic : 

  • Should not be coupled to a specific transport protocol, such as HTTP or MQTT.
  • Should have the ability to read from and write to multiple transport channels.

3rd Fundamental

Authentication protocol agnostic : 

  • Should not be coupled to a specific authentication protocol, such as username/password, FIDO or OTP. 
  • Pluggable authenticators.

4th Fundamental 

Claim Transformation :

  • Should have the ability to transform identity provider specific claims into service provider specific claims and vice versa.
  • Should support both simple and complex claim transformations. An example of a complex claim transformation would be to derive the age from the date-of-birth identity provider claim, or to concatenate the first name and last name claims from the identity provider to form the full name service provider claim.
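A complex claim transformation of the kind described above could be sketched like this; the claim URIs and function name are illustrative only.

```python
from datetime import date

def transform_claims(idp_claims, today=None):
    """Map identity-provider claims to service-provider claims: a
    concatenation (full name from first and last name) and a derived
    value (age from date of birth). Claim URIs are hypothetical."""
    today = today or date.today()
    dob = date.fromisoformat(idp_claims["http://claims.idp1.org/dateOfBirth"])
    # Subtract one if the birthday has not yet occurred this year.
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return {
        "http://claims.sp1.org/fullName":
            idp_claims["http://claims.idp1.org/firstName"] + " "
            + idp_claims["http://claims.idp1.org/lastName"],
        "http://claims.sp1.org/age": age,
    }
```

In a broker, a transformation like this would sit between the identity provider's response and the service provider's expected claim dialect.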

5th Fundamental 

Home Realm Discovery:

  • Should have the ability to find the home identity provider corresponding to the incoming federation request by looking at certain attributes in the request. 
  • The discovery process should be pluggable.
  • Filter based routing.

6th Fundamental 

Multi-option Authentication:

  • Should have the ability to present multiple login options to the user, per service provider. 
  • Based on the service provider that initiates the authentication request, the identity broker will present the login options to the user.

7th Fundamental

Multi-step Authentication:

  • Should have the ability to present multi-step authentication to the user, per service provider. 
  • Multi-factor Authentication (MFA) is an instance of multi-step authentication, where you plug authenticators that support multi-factor authentication into any of the steps.

8th Fundamental

Adaptive Authentication:

  • Should have the ability to change the authentication options based on the context. 
  • The identity broker should have the ability to derive the context from the authentication request itself as well as from other supportive data.

9th Fundamental

Identity Mapping:

  • Should have the ability to map identities between different identity providers. 
  • A user should be able to maintain multiple identities with multiple identity providers and switch between identities when logging into multiple service providers.

10th Fundamental

Multiple Attribute Stores:

  • Should have the ability to connect to multiple attribute stores and build an aggregated view of the end user's identity.

11th Fundamental

Just-in-time Provisioning:

  • Should have the ability to provision users to connected user stores in a protocol agnostic manner.

12th Fundamental

Manage Identity Relationships:

  • Should have the ability to manage identity relationships between different entities and make authentication and authorization decisions based on them. 
  • A given user can belong to a group or role, and be the owner of devices from multiple platforms.
  • A device could have an owner, an administrator, a user and so on.

13th Fundamental

Trust Brokering:

  • Each service provider should identify which identity providers it trusts.

14th Fundamental

Centralized Access Control:

  • Who gets access to which user attribute? Which resources can the user access at the service provider?

15th Fundamental

Centralized Monitoring:

  • Should have the ability to monitor and generate statistics on each identity transaction that flows through the broker. 
  • The connected analytics engine should be able to do batch, real-time and predictive analytics. 

Connected Identity: Benefits, Risks & Challenges - EIC 2015 Recording


Two Security Patches Issued Publicly for WSO2 Identity Server 5.0.0

Wolfgang Ettlinger (discovery, analysis, coordination) from the SEC Consult Vulnerability Lab contacted the WSO2 security team on 19th March and reported the following three vulnerabilities in WSO2 Identity Server 5.0.0.

1) Reflected cross-site scripting (XSS, IDENTITY-3280)

Some components of the WSO2 Identity Server are vulnerable to reflected cross-site scripting. The effect of this attack is minimal, because the WSO2 Identity Server does not expose cookies to JavaScript.

2) Cross-site request forgery (CSRF, IDENTITY-3280)

On at least one web page, CSRF protection has not been implemented. An attacker on the internet could lure a victim who is logged in to the Identity Server administration web interface to a web page, e.g. one containing a manipulated tag. The attacker is then able to add arbitrary users to the Identity Server.

3) XML external entity injection (XXE, IDENTITY-3192)

An unauthenticated attacker can use the SAML authentication interface to inject arbitrary external XML entities. This allows an attacker to read arbitrary local files. Moreover, since the XML entity resolver allows remote URLs, this vulnerability may allow an attacker to bypass firewall rules and conduct further attacks on internal hosts. This vulnerability had already been found before it was reported by Wolfgang Ettlinger, and all our customers were patched - but the corresponding patch was not issued publicly. Also, this attack is not as harmful as it sounds, since in all our production deployments WSO2 Identity Server runs as a less privileged process, which cannot be used to read arbitrary local files.

The WSO2 security team treats all vulnerabilities reported to security@wso2.com with the utmost importance, and we contacted the reporter immediately and started working on the fixes. The fixes to the reported components were made immediately - but we wanted to make sure we built a generic solution where all possible XSS and CSRF attacks are mitigated centrally.

Once that solution was implemented as a patch to Identity Server 5.0.0, we tested the complete product using OWASP Zed Attack Proxy and CSRFTester. After testing almost all the Identity Server functionality with the patch, we released it to all our customers two weeks prior to the public disclosure date. The patch for XXE had been released a few months earlier. I would also like to confirm that none of the WSO2 customers were exploited/attacked using any of these vulnerabilities.

On 13th May, in parallel to the public disclosure, we released both security patches publicly. You can download the following patches from http://wso2.com/products/identity-server/.
  • WSO2-CARBON-PATCH-4.2.0-1194 
  • WSO2-CARBON-PATCH-4.2.0-1095 
WSO2 thanks Wolfgang Ettlinger (discovery, analysis, coordination) from the SEC Consult Vulnerability Lab for responsibly reporting the identified issues and working with us as we addressed them. At the same time, we are disappointed with the exaggerated article published on Threatpost. The article was not brought to the attention of the WSO2 security team before it was published, although the team responded to the reporter's query immediately over email. We are fully aware that such reports are unavoidable and not under our control.

The WSO2 security team is dedicated to protecting all its customers and the larger community around WSO2 from all sorts of security vulnerabilities. We appreciate your collaboration - please report any security issues you discover related to WSO2 products to security@wso2.com. 

MQTT Security Fundamentals

Identity Mediation Language (IML) - Requirements Specification

Recent research by the analyst firm Quocirca confirms that many businesses now have more external users than internal ones: in Europe 58 percent transact directly with users from other businesses and/or consumers; for the UK alone the figure is 65 percent. Looking at history, most enterprises today grow via acquisitions, mergers and partnerships. In the U.S. alone, mergers and acquisitions volume totaled $865.1 billion in the first nine months of 2013, according to Dealogic. That's a 39% increase over the same period a year earlier - and the highest nine-month total since 2008.

Gartner predicts that by 2020, 60% of all digital identities interacting with enterprises will come from external identity providers.

I have written two blog posts highlighting in detail the need for an Identity Bus.
The objective of the Identity Mediation Language (IML) is to define a configuration language that would run in an Identity Bus to mediate and transform identity tokens between multiple service providers and identity providers in a protocol agnostic manner.

The objective of this blog post is to define the high-level requirements for the Identity Mediation Language. Your thoughts and suggestions are extremely valuable, and highly appreciated, in evolving this into a language from which the global identity management industry will benefit.

1. Transform identity tokens from one protocol to another.

For example, the Identity Mediation Language should have the provision to transform an incoming SAML request into an OpenID Connect request, and then the OpenID Connect response from the identity provider into a SAML response.

2. The language should have the ability to define a handler chain in the inbound-authentication-request flow, the outbound-authentication-request flow, the outbound-authentication-response flow, the inbound-authentication-response flow and any other major channels identified as this specification evolves.


3. The language should define a common syntax and semantics, independent from any of the protocols.

Having a common syntax and semantics for the language, independent of any specific protocol, will make it extensible. Support for specific protocols should be implemented as handlers. Each handler should know how to translate the common syntax defined by the language into its own syntax, and how to process messages in a protocol specific manner.

The following is a sample configuration (which will evolve in the future). The language is not coupled to any implementation. There can be global handlers for each protocol, but it should be possible to override them in each request flow if needed.


{
  "inbound-authentication-request" :
              { "protocol": "saml",
                 "handler" : "org.wso2.SAMLAuthReqRespHandler"
               },
  "outbound-authentication-request" :
              { "protocol": "oidc",
                 "handler" : "org.wso2.OIDCAuthReqRespHandler"
              }
}
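A handler chain like the one configured above could be modeled roughly as follows; the class and method names mirror the sample configuration but are illustrative sketches, not an actual WSO2 API.

```python
class Handler:
    """Base protocol handler; subclasses translate between the common,
    protocol-agnostic representation and their own wire format."""
    protocol = None

    def handle(self, message):
        raise NotImplementedError

class SAMLAuthReqRespHandler(Handler):
    protocol = "saml"

    def handle(self, message):
        # Translate a SAML request into the common representation (sketch).
        return dict(message, normalized_by="saml")

class OIDCAuthReqRespHandler(Handler):
    protocol = "oidc"

    def handle(self, message):
        # Translate the common representation into an OIDC request (sketch).
        return dict(message, normalized_by="oidc")

# Flows mirror the configuration sample: each flow names a protocol handler.
FLOWS = {
    "inbound-authentication-request": SAMLAuthReqRespHandler(),
    "outbound-authentication-request": OIDCAuthReqRespHandler(),
}

def mediate(flow, message):
    """Dispatch a message on a named flow to its configured handler."""
    return FLOWS[flow].handle(message)
```

In a real broker the FLOWS table would be built from the configuration file, so adding a protocol means registering a new handler rather than changing the language.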

4. The language should have the provision to define a common request/response path as well as override it per service provider.

5. The language should have the provision to identify the service provider by a unique name or by any other associated attributes. These attributes can be read from incoming transport headers or from the identity token itself. If read from the ID token, the way to identify that attribute must be protocol agnostic. For example, if the incoming ID token is a SAML token and we need to identify the issuer from an attribute in the SAML token, the language should define it as an XPath, without using any SAML specific semantics.

6. The language should have the provision to define whether an incoming request is just a pass-through.

7. The language should have the provision to define which transport headers should be passed through to the outbound-authentication-request path.

8. The language should have the provision to define which transport headers need to be added to the outbound-authentication-request path.

9. The language should have the provision to log all requests and responses. Logging should be configurable per service provider, and the log handler should also be configurable per service provider. Due to PCI compliance requirements we may not be able to log the complete message all the time.

10. The language should have the provision to retrieve attributes, and attribute metadata, required by a given service provider via an attribute handler. It should also have the provision to define attribute requirements inline.

11. The language should have the provision to define authentication requirements per service provider. Authentication can use one or more of: local authenticators, federated authenticators or request path authenticators. The local authenticators will authenticate the end user using a local credential store. The federated authenticators will talk to an external identity provider to authenticate users. The request path authenticators will authenticate users from the credentials attached to the request itself.

12. The language should have the provision to define multiple-option and multiple-step authentication, per service provider. In the multiple-option scenario, the relationship between the authenticators is an OR: the user needs to authenticate using only a single authenticator. With multiple steps, the relationship between the steps is an AND: the user must authenticate successfully in each step.

13. The language should have the provision to define multiple-option and multiple-step authentication, per user. The user can be identified by the username or by any other attribute of the user. For example, if the user belongs to the 'admin' role, then he must authenticate with multi-factor authentication.

14. The language should have the provision to define multiple-option and multiple-step authentication, per user, per service provider. The user can be identified by the username or by any other attribute of the user. For example, if the user belongs to the 'admin' role and accesses a high-privileged application, then he must authenticate with multi-factor authentication; if the same person accesses the user profile dashboard, then multi-factor authentication is not required.

15. The language should not define any authenticators by itself.

16. The language should have the provision to define authenticator sequences independently and then associate them with an authentication request path just by a reference. It should also be possible to define them inline.

17. The language should support defining requirements for adaptive authentication. A service provider may have a concrete authenticator sequence attached to it. At the same time, the language should have the provision to dynamically pick authenticator sequences in a context aware manner.

18. The language should have the provision to define, per service provider, authorization policies. An authorization policy may define the criteria under which an end user can access the given service provider. The identity provider should send back the authentication response only if the policy is satisfied by the authenticated user.

19. The language should have the provision to define how to transform a claim set obtained from an identity provider into a claim dialect specific to a given service provider. This can be a simple one-to-one claim transformation, for example http://claims.idp1.org/firstName --> http://claims.sp1.org/firstName, or a complex claim transformation such as http://claims.idp1.org/firstName + http://claims.idp1.org/lastName --> http://claims.sp1.org/fullName. The language should have the provision to do this claim transformation from the inboundAuthenticationRequest flow to the outboundAuthenticationRequest/localAuthenticationRequest flow, and from the outboundAuthenticationResponse/localAuthenticationResponse flow to the inboundAuthenticationResponse flow.
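
Both the one-to-one and the complex transformation above can be modeled as a mapping from target claim URIs to functions over the source claim set. A sketch under that assumption (the `transform_claims` name is invented for illustration):

```python
def transform_claims(claims, mapping):
    """mapping: target claim URI -> function computing it from the source claims."""
    return {target: fn(claims) for target, fn in mapping.items()}

idp_claims = {
    "http://claims.idp1.org/firstName": "Alice",
    "http://claims.idp1.org/lastName": "Liddell",
}

mapping = {
    # simple one-to-one transformation
    "http://claims.sp1.org/firstName":
        lambda c: c["http://claims.idp1.org/firstName"],
    # complex transformation: firstName + lastName -> fullName
    "http://claims.sp1.org/fullName":
        lambda c: c["http://claims.idp1.org/firstName"] + " " +
                  c["http://claims.idp1.org/lastName"],
}

print(transform_claims(idp_claims, mapping))
```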

20. The language should have the provision to authenticate service providers, irrespective of the protocol used in the inboundAuthenticationRequest.

21. The language should have the provision to accept authentication requests, provisioning requests, authorization requests, attribute requests and any other type of request from the service provider. The language should not be limited to just the above four types of requests and must have the provision to extend.

22. The language should define a way to retrieve identity provider and service provider metadata.

23. The language should have the provision to define rule-based user provisioning per service provider, per identity provider or per user, and per attributes associated with any of the above entities. For example, if the user is provisioned to the system via the foo service provider and his role is sales-team, provision the user to Salesforce. Another example could be: provision everyone authenticated against the Facebook identity provider to an internal user store.
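
The two examples above can be sketched as a rule table evaluated against the user's attributes. This is only an illustration of the rule model; the function name and target identifiers are hypothetical.

```python
def provisioning_targets(user, rules):
    """Return every outbound provisioning target whose rule matches the user."""
    return [target for condition, target in rules if condition(user)]

rules = [
    # provisioned via the foo SP with the sales-team role -> Salesforce
    (lambda u: u["via"] == "foo-sp" and "sales-team" in u["roles"], "salesforce"),
    # authenticated against the Facebook IdP -> internal user store
    (lambda u: u["idp"] == "facebook", "internal-user-store"),
]

user = {"via": "foo-sp", "idp": "facebook", "roles": ["sales-team"]}
print(provisioning_targets(user, rules))  # ['salesforce', 'internal-user-store']
```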

24. The language should have the provision to indicate to a policy decision point (for access control) from which service provider the access-control request was initiated, along with any other related metadata.

25. The language should have the provision to specify just-in-time provisioning requirements per service provider, per identity provider or per user, and per attributes associated with any of the above entities.

26. The language should have the provision to specify, per service provider, the user store and the attribute store used for local authentication and for retrieving user attributes locally.

27. The language should have the provision to specify an algorithm and a handler to force authentication for users already logged into the identity provider. There can be a case where a user is already authenticated to the system with a single factor; when the same user tries to log into a service provider that needs two-factor authentication, the algorithm handler should decide whether to force the user to re-authenticate, and if so, at which level. An algorithm handler can be defined at the global level and also per service provider request flow.
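
The core of such a handler can be reduced to comparing the assurance level of the existing session with the level the service provider demands. A minimal sketch, assuming numeric levels (1 = single factor, 2 = two-factor); the function name is hypothetical:

```python
def needs_step_up(session_level, required_level):
    """Decide whether to force re-authentication: return the target level
    when the current session is too weak, or None when it is sufficient."""
    if session_level >= required_level:
        return None            # existing session is strong enough
    return required_level      # force authentication at this level

print(needs_step_up(1, 2))  # 2 -> force two-factor authentication
print(needs_step_up(2, 2))  # None -> no forced authentication needed
```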

28. The language should have the provision to specify a home-realm-discovery handler, per service provider. This handler should know how to read the request to find the relevant attributes that can be used to identify the authentication options for that particular request.
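
A common attribute such a handler might read is the domain part of a login hint. The sketch below assumes exactly that; the domain-to-IdP table, the `login_hint` key and the IdP names are all invented for illustration.

```python
def discover_home_realm(request):
    """Pick an identity provider based on the email domain in the request;
    fall back to a default when no hint is present."""
    realm_by_domain = {"partner.com": "partner-idp", "example.com": "local-idp"}
    username = request.get("login_hint", "")
    domain = username.rsplit("@", 1)[-1] if "@" in username else None
    return realm_by_domain.get(domain, "default-idp")

print(discover_home_realm({"login_hint": "bob@partner.com"}))  # partner-idp
print(discover_home_realm({}))                                 # default-idp
```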

29. The language should have the provision to define attribute requirements for out-bound provisioning requests.

30. The language should have the provision to define claim transformations prior to just-in-time provisioning or out-bound provisioning.

31. The language must have the provision to define how to authenticate to external endpoints. The language should not be coupled to any specific authentication protocol.