Proposal : Resource Owner Initiated Delegation OAuth 2.0 Grant Type

Please see my previous blog post for more context on the $subject. After that, I was pointed to have a look at UMA. I've been following UMA for some time, and it once again builds a very useful/realistic set of use cases on top of OAuth 2.0. But still - it too is Client initiated, or Requester initiated.

I believe my use case can be addressed in OAuth 2.0 itself with a new grant type, or in UMA.

Let me break this down into four phases.

1. Client registers with the Resource Server via Authorization Server
2. Resource Owner grants access to a selected Client.
3. Client gets access_token from the Authorization Server.
4. Client accesses the protected resource on behalf of the Resource Owner.

Let's dig deep.

1. Client registers with the Resource Server via Authorization Server.

This is a normal OAuth request with any of the defined grant types, with the scope resource_owner_initiated.

At the time of Client registration on the Resource Server, the Client will be redirected to its Authorization Server.

In return the Resource Server gets an access_token along with the client_id. Now the Resource Server has a set of registered Clients with corresponding access_tokens.

2. Resource Owner grants access to a selected Client.

The Resource Owner logs in to the Resource Server and can see all the Clients registered with the Resource Server.

Resource Owner picks a Client.

Let's say the client_id we got from Step 1 was Foo, along with the access_token.

Now, the Resource Server will redirect the Resource Owner to the Authorization Server.

GET /authorize?grant_type=resource_owner_initiated&client_id=Foo&scope=Bar HTTP/1.1
Host: server.example.com

If the above is successful then the Authorization Server will send the following to the Resource Server.

HTTP/1.1 200 OK

After this step completes, the Authorization Server will create an access_token for the Client - to access a Protected Resource on behalf of the Resource Owner.

3. Client gets access_token from the Authorization Server.

The Client should be able to get all the access_tokens delegated to it, or a single one by specifying the Resource Owner Id.

To get all the delegated access_tokens - from the Client to the Authorization Server.

POST /token HTTP/1.1
Host: server.example.com
Authorization: Basic XfhgFgdsdsdkhewe
Content-Type: application/x-www-form-urlencoded

grant_type=resource_owner_initiated

Response from the Authorization Server to the Client.

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{
   "delegated_tokens" : [
      {  "access_token":"2YotnFZFEjr1zCsicMWpAA",
          "token_type":"example",
          "expires_in":3600,
          "refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA",
          "resource_owner_id":"Bob"
      },
     {  "access_token":"2YotnFZFEjr1zCsicMWpAA",
          "token_type":"example",
          "expires_in":3600,
          "refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA",
          "resource_owner_id":"Tom"   
      } 
    ]
}

To get a single delegated access_token - from the Client to the Authorization Server.

POST /token HTTP/1.1
Host: server.example.com
Authorization: Basic XfhgFgdsdsdkhewe
Content-Type: application/x-www-form-urlencoded

grant_type=resource_owner_initiated&resource_owner_id=Bob

Response from the Authorization Server to the Client.

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{
          "access_token":"2YotnFZFEjr1zCsicMWpAA",
          "token_type":"example",
          "expires_in":3600,
          "refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA",
          "resource_owner_id":"Bob"
}

4. Client accesses the protected resource on behalf of the Resource Owner.

This is the normal OAuth flow. The Client can now access the Protected Resource with the access_token.
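For example, assuming the access_token issued above is a bearer token and taking a hypothetical resource path, the request could look like this:

GET /photos/album1 HTTP/1.1
Host: resource.example.com
Authorization: Bearer 2YotnFZFEjr1zCsicMWpAA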

What OAuth lacks? Resource Owner initiated OAuth delegation

Irrespective of all the criticism against OAuth 2.0 - it has produced a very powerful, highly extensible authorization framework.

The use cases covered in the spec are not imaginary - rather very realistic.

At WSO2 we have integrated OAuth 2.0 support in to WSO2 Identity Server and WSO2 API Manager. Currently we do support all four grant types defined in the core specification. Also - we have very strong customer use cases for SAML2 grant types as well - which we have already started implementing.

If you look at the core specification - there are two key roles involved.

Resource Owner : An entity capable of granting access to a protected resource. When the resource owner is a person, it is referred to as an end-user.

Client : An application making protected resource requests on behalf of the resource owner and with its authorization. The term client does not imply any particular implementation characteristics (e.g. whether the application executes on a server, a desktop, or other devices).

Now if you look at the complete OAuth flow - you will notice that all flows are initiated by the Client. This ignores the use cases where the Resource Owner has to initiate the flow. Let me give a more concrete example.

I am a user of an online photo sharing site. There can be multiple Clients - Facebook applications, Twitter applications - registered with it. Now I want to pick some client applications from the list and give them access to my photos under different scopes. Validation of the trust/legitimacy of the registered applications can be carried out by other means.

That is just an example from social networking point of view. But there are more concrete enterprise use cases as well.

Let's take an access delegation use case.

I am an employee of Foo.com. I'll be going on vacation for two weeks - now I want to delegate some of my access rights to Peter, only for that time. Conceptually OAuth fits nicely here. But this is a use case initiated by the Resource Owner - which is not addressed in the OAuth specification.

How to import LDAP Groups but not Users to Liferay

When you configure Liferay to talk to an LDAP - it imports users and groups in two different ways.

1. Import users and while importing users look at the "Group" attribute of the user and import groups.
2. Import groups and while importing groups look at the "User" attribute of the group and import all the users of that group.

By default Liferay uses the first approach.

If we want to import only Groups - not the users - from the LDAP, then we need to use the second approach.

To enable the second approach, you need to add the following to the liferay_home/tomcat/webapps/ROOT/WEB-INF/classes/portal-ext.properties file.

ldap.import.method=group

Then you need to keep the "User" attribute of the group configuration blank.
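For reference, the related import settings in portal-ext.properties could look like the sketch below - only ldap.import.method is strictly required for this change; the other two entries are illustrative and may already be set in your environment.

ldap.import.enabled=true
ldap.import.on.startup=true
ldap.import.method=group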

Integrating WSO2 Identity Server with Liferay

Liferay has a highly extensible architecture. You decide what you want to override in Liferay - it has an extension somewhere. This blog post shows how to delegate Liferay's authentication and authorization functionality to WSO2 Identity Server.



One of the challenging parts we found in this integration is the LDAP Users/Groups import. You can connect an LDAP to Liferay. But, in that case, to authenticate users to Liferay against the underlying LDAP, it has to import all the users and groups to Liferay's underlying database, which by default runs on Hypersonic.

Since we only need to keep the user data in a single LDAP, we wanted to avoid this duplication. But it was not as easy as we thought. To avoid it, we would need to write the complete persistence layer. To understand this better, I need to elaborate a bit more. Let's take a step back and see how Authentication and Authorization actually work in Liferay.

Liferay has a chain of authenticators. When you enter your username/password the chain of authenticators will get invoked. This is the place where we plugged in WSO2ISAuthenticator.

auth.pipeline.pre=org.wso2.liferay.is.authenticator.WSO2ISAuthenticator
auth.pipeline.enable.liferay.check=false 
wso2is.auth.service.endpoint.primary=https://localhost:9443/services/  

The above configuration [which should be in the liferay_home/tomcat/webapps/ROOT/WEB-INF/classes/portal-ext.properties] tells Liferay to load our custom authenticator. The second entry says that once our authenticator is loaded, the rest of the chain should not be invoked - otherwise, the default Liferay authenticator would also get invoked. The third entry points to the AuthenticationAdmin service running in WSO2 Identity Server.

Now, the username/password goes into WSO2ISAuthenticator, which talks to WSO2 Identity Server over SOAP to authenticate the user. Once authentication is done, control is passed back to the Liferay container.

Now comes the tricky part. Liferay has its own permission model - who should be able to see portlets, who should be able to add portlets, and so on. For this it needs to find which Liferay roles are attached to the logged-in user, or which Liferay roles are attached to any group the logged-in user belongs to. To get these details it needs to talk to the underlying persistence layer - which loads details from Liferay's underlying database. This is why Liferay wants the users imported from the LDAP.

Even though it's possible, we decided not to write a persistence layer - but only to override authentication and authorization.

Even in the case of authorization - there are two types: the authorization model governed by Liferay to display/add portlets to the portal, and the authorization model used within the portlet itself to display content within the portlet.

The first type is done by assigning portlet management permissions to a given Liferay role and assigning members [groups/users] to that role from the underlying LDAP. We did not want to do that, because that is very much on the portal administration side - and very specific to Liferay. But the second model is the one that directly deals with the business functions. That is what we wanted to do in a fine-grained manner.

Let's dig deeper into this...

Even the second model can be done with Liferay's roles and permissions. Whenever you want to render something in the portlet for a restricted audience, then before rendering it you need to call req.isUserInRole("roleName") - see the sketch after the list below. This is compliant with the JSR too. But the disadvantages are..

1. Our business functionality in an SOA deployment should not be governed by Liferay roles. Liferay may be just one channel used to access the business functions.
2. We can achieve only role-based access control with this model.
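To make that concrete, here is a minimal sketch of the role-check approach inside a JSR-286 portlet - the portlet class and the role name are hypothetical:

import java.io.IOException;
import java.io.PrintWriter;

import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

// Hypothetical portlet: renders restricted content only for users
// in a given Liferay role, using the standard JSR-286 API.
public class ReportPortlet extends GenericPortlet {

    @Override
    protected void doView(RenderRequest req, RenderResponse res)
            throws PortletException, IOException {
        res.setContentType("text/html");
        PrintWriter out = res.getWriter();
        if (req.isUserInRole("report-viewer")) {
            // Only members of the Liferay role mapped to "report-viewer" see this.
            out.println("Confidential report content...");
        } else {
            out.println("You are not authorized to view this content.");
        }
    }
}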

Liferay also has its own way of permission checking within a portlet, via the PermissionChecker API. You may have a look at this for further details.

Our approach was to write a utility function called hasPermission(). If you extend your portlet from org.wso2.liferay.xacml.connector.SecuredGenericPortlet then this will be automatically available to you. Or else, you can directly call it through AuthzChecker.hasPermission(). These functions are available from the org.wso2.liferay.xacml.connector.jar file.

You can copy all the jar dependencies from here to liferay_home/tomcat/lib/ext.
The connection between the XACML connector deployed in Liferay and the WSO2 XACML engine is over Thrift. You need to add the following properties to the portal-ext.properties file.

wso2is.auth.thrift.endpoint=localhost
wso2is.auth.thrift.port=10500
wso2is.auth.thrift.connection.timeout=10000
wso2is.auth.thrift.admin.user=admin
wso2is.auth.thrift.admin.user.password=admin
wso2is.auth.thrift.endpoint.login=https://localhost:9443/


Since by default the Identity Server uses a self-signed certificate, you either have to import its public certificate into the trust store of Liferay, or set the following two properties in the portal-ext.properties file, pointing to the Identity Server's key store.

wso2is.auth.thrift.system.trusstore=/wso2is-3.2.3/repository/resources/security/wso2carbon.jks
wso2is.auth.thrift.system.trusstore.password=wso2carbon


Please note that the above configuration is tested with Liferay 6.1.1 and WSO2 Identity Server 3.2.3/4.0.0.

OAuth 1.0a Bounded Tokens

We had a very interesting discussion today on the $subject.

In fact, one of my colleagues came up with a question - "Why the hell does OAuth 1.0 use two keys to sign messages? Why not just the consumer secret?"

Well.. In fact - the exact reverse of this argument is one key point Eran Hammer highlights in his famous blog post - to emphasize why OAuth 1.0a is better...

"Unbounded tokens - In 1.0, the client has to present two sets of credentials on each protected resource request, the token credentials and the client credentials. In 2.0, the client credentials are no longer used. This means that tokens are no longer bound to any particular client type or instance. This has introduced limits on the usefulness of access tokens as a form of authentication and increased the likelihood of security issues."

Before I answer the question - let me very briefly explain the OAuth 1.0a flow..

1. To become an OAuth consumer you need to have a Consumer Key and a Consumer Secret. You can obtain these from the OAuth Service Provider. These two parameters can be stored in a database or in the filesystem.

Consumer Key: A value used by the Consumer to identify itself to the Service Provider.
Consumer Secret: A secret used by the Consumer to establish ownership of the Consumer Key.


2. Using the Consumer Key and Consumer Secret, you need to talk to the OAuth Service Provider and get an Unauthorized Request Token.

The request to get the Unauthorized Request Token will include the following parameters.
  • oauth_consumer_key
  • oauth_signature_method
  • oauth_signature
  • oauth_timestamp
  • oauth_nonce
  • oauth_version
  • oauth_callback
Here, certain parameters of the request are signed with the Consumer Secret. Signature validation at the OAuth Service Provider end confirms the identity of the Consumer.

As the response to the above request, you will get the following.
  • oauth_token
  • oauth_token_secret
  • oauth_callback_confirmed
Here, the oauth_token is per OAuth Consumer per Service Provider - but it can only be used once. If an OAuth Consumer tries to use it twice, the OAuth Service Provider should discard the request.

3. Now the OAuth Consumer has the Request Token - and it needs to exchange this token for an Authorized Request Token. The Request Token needs to be authorized by the end user. So, the Consumer will redirect the end user to the Service Provider with the following request parameter.
  • oauth_token
This oauth_token is the Request Token obtained in the previous step. There is no signature here - this token does not need to be signed. The token itself proves the identity of the OAuth Consumer.

As the response to the above, OAuth Consumer will get the following response.
  • oauth_token
  • oauth_verifier
Here, the oauth_token is the Authorized Request Token. This token is per User per Consumer per Service Provider.

The oauth_verifier is a verification code tied to the Authorized Request Token. The oauth_verifier and the Authorized Request Token must both be provided in exchange for an Access Token, and they both expire together. If the oauth_callback is set to oob in Step 2, the oauth_verifier is not included as a response parameter; instead it is presented once the User grants authorization to the Consumer Application. The Service Provider will instruct the User to enter the oauth_verifier code in the Consumer Application. The Consumer must ask for this oauth_verifier code to ensure OAuth authorization can proceed. The oauth_verifier is intentionally short so that a User can type it manually.

Both of the above parameters are returned to the OAuth Consumer via a browser redirect over SSL.

4. Now the OAuth Consumer has the Authorized Request Token, and it will exchange that for an Access Token. Once again, the Access Token is per User per Consumer per Service Provider. Here the communication is not a browser redirect but a direct communication between the Service Provider and the OAuth Consumer. This step is needed because, in step 3, the OAuth Consumer gets the token through the browser.

The request for the Access Token will include the following parameters.
  • oauth_consumer_key
  • oauth_token
  • oauth_signature_method
  • oauth_signature
  • oauth_timestamp
  • oauth_nonce
  • oauth_version
  • oauth_verifier
Here the oauth_token is the Authorized Request Token from step 3.

The signature here is generated using a combined key - the Consumer Secret and the Token Secret [from step 2] separated by an "&" character. I will explain later why two keys are used here for signing.
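As a sketch of how that combined key is used with the common HMAC-SHA1 signature method - percent-encoding of the secrets and the construction of the signature base string are omitted for brevity:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class OAuth10aSigner {

    // The HMAC key is the Consumer Secret and the Token Secret joined by "&".
    // An empty Token Secret still leaves the trailing "&" (as in step 2, before
    // any token exists). Real OAuth 1.0a percent-encodes both parts first.
    public static String sign(String baseString, String consumerSecret,
                              String tokenSecret) throws Exception {
        String key = consumerSecret + "&" + (tokenSecret == null ? "" : tokenSecret);
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        byte[] raw = mac.doFinal(baseString.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(raw); // becomes oauth_signature
    }
}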

As the response to the Access Token request, the Consumer will get the following...
  • oauth_token
  • oauth_token_secret
All four steps above should happen over TLS/SSL.

5. Now the OAuth Consumer can access the protected resource. The request will look like following.
  • oauth_consumer_key
  • oauth_token
  • oauth_signature_method
  • oauth_signature
  • oauth_timestamp
  • oauth_nonce
  • oauth_version
Here the oauth_token is the Access Token obtained in step 4. The signature is once again generated with a combined key - the Consumer Secret and the Token Secret [from step 4] separated by an "&" character.

Step 5 is not required to happen over TLS/SSL.

Let's get back to the original question...

Why the hell does OAuth 1.0 use two keys to sign messages? Why not just the consumer secret?

During the OAuth flow there are three places where the OAuth Consumer has to sign.

1. Request to get an Unauthorized Request Token. Here the request is signed with the Consumer Secret only. (Request for Temporary Credentials)

2. Request to get an Access Token. Here the request is signed with the Consumer Secret and the Token Secret returned with the Unauthorized Request Token in step 2. (Request for Token Credentials)

Why do we need two keys here? It's to tighten security. What exactly does that mean?

Have a look at step 3. There the Consumer only sends the oauth_token from step 2 to get User authorization, and as the response the Consumer gets the Authorized Request Token (oauth_token). Now in step 4 the Consumer uses this Authorized Request Token to get the Access Token. Here, the Consumer needs to prove that it is the same Consumer who got the Unauthorized Request Token that is now requesting the Access Token. To prove ownership of the Request Token, the Consumer signs the Access Token request with both the Consumer Secret and the Token Secret returned with the Unauthorized Request Token.

3. Request to the protected resource. Here the request to the protected resource is signed with the Consumer Secret and the Token Secret returned with the Access Token in step 4. (Request for Resources)

Once again we need two keys here to tighten security. If OAuth relied only on the Consumer Secret for the signature, then anyone who steals the Consumer Secret could capture an Access Token [as it goes to the protected resource over plain HTTP] and sign requests with the stolen secret. Then all the users of that OAuth Consumer would be compromised. Since we use two keys here, even if someone steals the Consumer Secret it cannot harm any of the end users - because the thief cannot get the Token Secret, which is sent to the Consumer over SSL in step 4. Also - you should never store Token Secrets at the Consumer end - that reduces the risk of Token Secrets being stolen. But you do need to store your Consumer Secret, as explained previously.

In any case, if you have to store the Token Secret, then you need to store it encrypted - and also make sure you store it in a different database from where you store your Consumer Secret. That is the same best practice we follow when we store hashed passwords and the corresponding salt values.

Apart from the above benefit, the Token Secret can also act as a salt value. Since the request to the protected resource goes over the wire, a hacker can carry out a known plaintext attack to guess the key used to sign the message. If we only use the Consumer Secret, then all the users of the OAuth Consumer will be compromised. But if we use the Consumer Secret together with the Token Secret, then only that particular user will be compromised - and not even the Consumer Secret is exposed.

Finally, OAuth 1.0a presents a bounded token protocol over HTTP for accessing the protected resource. That means with each request to the protected resource the Consumer should prove that it owns the token, as well as that it is who it claims to be. In the bounded token model, an issued token can only be used by the exact party the token was issued to. [Compare this with the Bearer token in OAuth 2.0.] To prove the identity of the Consumer itself, it signs the message with the consumer_secret. To prove that it also owns the token, it needs to sign the message with the token_secret too. Anyone can know the Access Token, but only the original Consumer knows the token_secret. So the key to sign with is derived by combining both keys.

Security Patterns : Decentralized Federated SAML2 IdPs

Say you are a globally distributed enterprise with thousands of branches. Each branch has its own user store and manages its own users. Also, each branch hosts its own web applications. The question is, how to enable single sign-on across branches in a highly scalable manner.

1. User in Domain Foo tries to access a Web App in the same domain.

2. Web App finds out that the user is not authenticated and redirects him to the SAML2 IdP in the same domain.

3. The SAML2 IdP finds out that the user does not have an authenticated session and prompts him to authenticate. Once authenticated, the SAML2 IdP redirects the user to the Web App with the SAML2 Response, and also writes a cookie to the user's browser under the Foo SAML2 IdP's domain.

4. Now the user from Domain Foo tries to access a Web App in Domain Bar. When the request to Domain Bar is initiated from Domain Foo, a proxy in Domain Foo adds a special HTTP header [federated-idp-domain] to indicate from which domain the request was generated.

5. The Web App in Domain Bar finds out the user is not authenticated and redirects him to the SAML2 IdP in its own domain [Domain Bar].

6. The SAML2 IdP in Domain Bar figures out that the user does not have an authenticated session, reads the federated-idp-domain HTTP header to find out where the request was initiated, and redirects the user to his home domain - Domain Foo. Here the Domain Bar IdP creates a SAML2 Request and sends it to the SAML2 IdP in Domain Foo.

7. The Domain Foo IdP checks whether the user has an authenticated session. From the cookie, the Foo IdP finds out that the user is already authenticated and sends back a SAML2 Response to the SAML2 IdP in Domain Bar.

8. The Bar IdP validates the SAML2 Response and redirects the user back to the Web App in Domain Bar. Now the user can log in to the Web App. The Bar IdP also writes a cookie to the user's browser under its own domain.

9. Now the user from Domain Foo tries to access another Web App in Domain Bar. When the request to Domain Bar is initiated from Domain Foo, the proxy in Domain Foo once again adds the federated-idp-domain HTTP header to indicate from which domain the request was generated.

10. The Web App in Domain Bar finds out the user is not authenticated and redirects him to the SAML2 IdP in its own domain [Domain Bar].

11. The SAML2 IdP in Domain Bar figures out that the user has an authenticated session, by checking the cookie it wrote in step 8. The user is redirected back to the Web App with the SAML2 Response. Now the user can log in to the Web App.

With this pattern, the SAML2 IdPs in the different domains trust each other and register with each other.

Each Web App in a given domain only trusts its own IdP - and only needs to register with its own IdP.

Security Patterns : Single Sign On across Web Applications and Web Services

The requirement is to have single sign-on across different web applications - once the user is authenticated, he should be able to access all the web applications with no further authentication [by himself]. Also, the web applications need to access a set of back-end services with the logged-in user's access rights, and the back-end services will authorize the user [end-user] based on different claims, like role.

1. The user hits the link to the WebApp.
2. The WebApp finds out the user is not authenticated and redirects him to the SAML2 IdP.
3. The SAML2 IdP checks whether the user has an authenticated session - if not, it will prompt for credentials. Once authenticated there, the user is redirected back to the WebApp with a SAML token carrying the set of claims requested by the WebApp.
4. Now the WebApp needs to access a back-end web service with the logged-in user's access rights. The WebApp passes the SAML token to the PEP based on WS-Trust and authenticates itself [the WebApp] to the PEP via the trusted subsystem pattern.
5. The PEP calls the XACML PDP to authorize the user, based on the claims provided in the SAML token.
6. The XACML PDP returns the decision to the PEP.
7. If it's a 'Permit', the PEP lets the user access the back-end web service.

WSO2 Identity Server : A flexible, extensible and robust platform for Identity Management

WSO2 Identity Server provides a flexible, extensible and robust platform for Identity Management. This blog post looks inside WSO2 Identity Server to identify different plug points available for Authentication, Authorization and Provisioning.

WSO2 Identity Server supports the following standards/frameworks for authentication, authorization and provisioning.

1. SOAP based authentication API
2. Authenticators
3. OpenID 2.0 for decentralized Single Sign On
4. SAML2 Web Single Sign On
5. OAuth 2.0
6. Security Token Service based on WS-Trust
7. Role based access control and user management API exposed over SOAP
8. Fine-grained access control with XACML
9. Identity provisioning with SCIM

1. SOAP based authentication API

WSO2 Identity Server can be deployed over an Active Directory, LDAP [ApacheDS, OpenLDAP, Novell eDirectory, Oracle DS.. etc..] or a JDBC based user store. It's a matter of a configuration change, and once that's done, end users/systems can use the SOAP based authentication API to authenticate against the underlying user store.

Identity Server by default ships with the embedded ApacheDS - but in a real production setup we would recommend you go for a more production-ready LDAP - like OpenLDAP - due to some scalability issues we uncovered in the embedded ApacheDS.

The connection to the underlying user store is made through an instance of org.wso2.carbon.user.core.UserStoreManager. Based on your requirement you can either implement the above interface or extend org.wso2.carbon.user.core.common.AbstractUserStoreManager.

2. Authenticators

By default, authentication to the WSO2 Identity Server Management console is via username/password. Although this is what we have by default, we never limit the user to username/password based authentication. It can be based on certificates or any other proprietary token types - the only thing you need to do is write your custom authenticator.

There are two types of Authenticators - Front-end Authenticators and Back-end Authenticators. Front-end Authenticators deal with the user inputs and figure out what exactly they need to authenticate a user. For example, the WebSEAL authenticator reads the basic auth header and the iv-user header from the HTTP request and calls its Back-end counterpart to do the actual validation.

A Back-end Authenticator exposes its functionality outside via a SOAP based service, and it can internally get connected to a UserStoreManager.

The Management console can also have multiple Authenticators configured. Based on applicability and priority, one Authenticator is picked at run-time.

3. OpenID 2.0 for decentralized Single Sign On

WSO2 Identity Server supports the following OpenID specifications.
  • OpenID Authentication 1.1
  • OpenID Authentication 2.0
  • OpenID Attribute eXchange
  • OpenID Simple Registration
  • OpenID Provider Authentication Policy Extension
OpenID support is built on top of OpenID4Java - and integrated with the underlying user store seamlessly. Once you deploy WSO2 Identity Server over any existing user store, all the users in the user store will get an OpenID automatically.

If you want to use one user store for OpenID authentication and another for the Identity Server management, that is also possible from Identity Server 4.0.0 onwards.

WSO2 Identity Server supports both dumb and smart modes, and if you would like, you can disable the dumb mode. Disabling dumb mode reduces the load on the OpenID Provider and forces relying parties to use smart mode. Identity Server uses a JCache based Infinispan cache to replicate Associations among different nodes in a cluster.

4. SAML2 Web Single Sign On

WSO2 Identity Server supports SAML2 Web Single Sign On.

OpenID and SAML2 are both based on the same concept of federated identity. Following are some of the differences between them.
  • SAML2 supports single sign-out - but OpenID does not.
  • SAML2 service providers are coupled with the SAML2 Identity Providers, but OpenID relying parties are not coupled with OpenID Providers.
  • OpenID has a discovery protocol which dynamically discovers the corresponding OpenID Provider, once an OpenID is given.
  • With SAML2, the user is coupled to the SAML2 IdP - your SAML2 identifier is only valid for the SAML2 IdP who issued it. But with OpenID, you own your identifier and you can map it to any OpenID Provider you wish.
  • SAML2 has different bindings, while the only binding OpenID has is HTTP.

5. OAuth 2.0

WSO2 Identity Server supports OAuth 2.0 Core draft 27. We believe there will not be any drastic changes between draft v27 and the final version of the specification, and we are keeping an eye on where it's heading.

Identity Server uses Apache Amber as the underlying OAuth 2.0 implementation.
  • Supports all four grant types listed in the specification, namely the Authorization Code grant, Implicit grant, Resource Owner Password grant and Client Credentials grant.
  • Supports refreshing access tokens with refresh tokens.
  • Supports the "Bearer" token profile.
  • Support for distributed token caching using the WSO2 Carbon Caching Framework.
  • Support for different methods of securing access tokens before persisting. An extension point is also available for implementing custom token securing methods.
  • Extensible callback mechanism to link the authorization server with the resource server.
  • Supports a range of different relational databases to serve as the token store.
6. Security Token Service based on WS-Trust

WSO2 Identity Server supports a Security Token Service based on WS-Trust 1.3/1.4. This is based on Apache Rampart.

The STS is seamlessly integrated with the underlying user store, and users can be authenticated against it when issuing tokens.

User attributes are by default fetched from the underlying user store - but the STS provides extension points where users can write their own attribute callback handlers. Once those callback handlers are registered with the STS, attributes can be fetched from any user store outside the default underlying user store.

STS can be secured with any of the following security scenarios.
  • UsernameToken
  • Signature
  • Kerberos
  • WS-Trust [Here the STS can act as a Resource STS]
7. Role based access control and user management API exposed over SOAP

WSO2 Identity Server can be deployed over an Active Directory, LDAP [ApacheDS, OpenLDAP, Novell eDirectory, Oracle DS.. etc..] or a JDBC based user store. It's a matter of a configuration change, and once that's done, end users/systems can use the SOAP based API to manage users [add/remove/modify] and check user authorizations against the underlying user store.

8. Fine-grained access control with XACML

WSO2 Identity Server supports XACML 2.0 and 3.0. All policies are stored in XACML 3.0 format, but the engine is capable of evaluating requests in either 2.0 or 3.0.

The XACML PDP is exposed via the following three interfaces.
  • SOAP based API
  • Thrift based API
  • WS-XACML
Identity Server can act as both a XACML PDP and a XACML PAP. These components are decoupled from each other.

By default all XACML policies are stored inside the Registry - but users can have their own policy store by extending the PolicyStore API.

9. Identity provisioning with SCIM

The SCIM support in WSO2 Identity Server is based on WSO2 Charon - an open source Java implementation of SCIM 1.0 released under the Apache 2.0 license.

WSO2 Identity Server can act as either a SCIM provider or a consumer. Based on configuration, WSO2 IS can provision users to other systems that have SCIM support.

OAuth 2.0 Playground with WSO2 Identity Server

WSO2 Identity Server adds OAuth 2.0 support from its very next release - hopefully by the end of this August. The OAuth 2.0 core specification defines four grant types.

1. Authorization Code Grant (authorization_code)
2. Implicit Grant
3. Resource Owner Password Credentials Grant (password)
4. Client Credentials Grant (client_credentials)

First you need to set up the sample web app. You can download it from here and host it in Tomcat. I assume it runs at http://localhost:8080/playground. If the Identity Server is not running on 9443, then you need to edit the web.xml of the web app appropriately.

Then you need to download the WSO2 Identity Server 4.0.0 server from here.

1. Start the server
2. Login with admin/admin
3. Main/Manage/OAuth/Register New Application

4. Select OAuth 2.0
5. Give an Application Name and any Callback Url. For the sample to work, it should be http://localhost:8080/playground/oauth2client



6. Once you click on "Add" you will be taken to the OAuth Management page
7. Click on the application you just created.



8. Copy the values of Client Id, Client Secret, Access Token Url and Authorize Url -- we need these values later at different stages in the web app.



That's it. We are done. Now go to the web app... http://localhost:8080/playground.

Authorization Grant Type : Select one of the four as per the OAuth spec.
Client Id : Client Id from the above image.
Client Secret : Client Secret from the above image.
Resource Owner User Name : Any valid user name from WSO2 IS.
Resource Owner Password : Password corresponding to the "Resource Owner User Name".
Scope : By default can be anything. No validation. You can override the functionality if needed.
Authorize Endpoint : Authorize Url from the above image.
Access Token Endpoint : Access Token Url from the above image.


Click on import photos... Then you can execute the OAuth flow by selecting the Grant Type you want.

You can download the complete code of sample web application from here.

From the root level type "mvn clean install" to build it.

Testing WSO2 Identity Server OAuth 2.0 support with Curl

WSO2 Identity Server adds OAuth 2.0 support from its very next release - hopefully by the end of this August. The OAuth 2.0 core specification defines four grant types.

1. Authorization Code Grant (authorization_code)
2. Implicit Grant
3. Resource Owner Password Credentials Grant (password)
4. Client Credentials Grant (client_credentials)

In this blog post we only talk about the last two grant types - since those can be directly executed via curl.

First you need to download the WSO2 Identity Server 4.0.0 server from here.

1. Start the server
2. Login with admin/admin
3. Main/Manage/OAuth/Register New Application

4. Select OAuth 2.0
5. Give an Application Name and any Callback Url [it need not be real in this case]

6. Once you click on "Add" you will be taken to the OAuth Management page
7. Click on the application you just created.

8. Copy the values of Client Id and Client Secret -- we need these values later.

Now let's see how we get an access token from the Identity Server via curl.

This is how it works under Resource Owner Password Credentials grant type.

This is useful when the end user, or the resource owner, trusts the application. I will not talk about the advantages and disadvantages of this grant type here - I'll have another blog post on that. Anyway, this is a grant type you should use with extra care.

$ curl --user Client_Id:Client_Secret  -k -d "grant_type=password&username=admin&password=admin" -H "Content-Type:application/x-www-form-urlencoded"  https://localhost:9443/oauth2/token

You need to replace Client_Id:Client_Secret with your values...

The response would be something like...

{"token_type":"bearer",
"expires_in":3600,
"refresh_token":"d78e445a78c9bdce17f349068495ebe",
"access_token":"3a1d3e2983fafc73eec3f894cb6eb4"}

Now you can use this access_token to access the protected resource.
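Since the response also carries a refresh_token, you can later renew the access token with the refresh_token grant. A sketch, using the refresh token from the sample response above:

$ curl --user Client_Id:Client_Secret -k -d "grant_type=refresh_token&refresh_token=d78e445a78c9bdce17f349068495ebe" -H "Content-Type:application/x-www-form-urlencoded" https://localhost:9443/oauth2/token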

Let's see how to execute curl to get an access_token with the Client Credentials grant type. Here the client itself becomes the resource owner. This is almost similar to the 2-legged OAuth we talked about under OAuth 1.0.

curl --user Client_Id:Client_Secret  -k -d "grant_type=client_credentials" -H "Content-Type:application/x-www-form-urlencoded"  https://localhost:9443/oauth2/token

You need to replace Client_Id:Client_Secret with your values...

The response would be something like...

{"token_type":"bearer",
"expires_in":3600,
"access_token":"9cdd18286e27dd768b74577276f217be"}

The 3rd Java Colombo Meetup

The third meetup of the Colombo Java User Group [JUG] was held on 19th July @ WSO2 #58 office...

Once again we had good attendance - more than 55 out of the 100 who registered turned up.

Also by this meetup Java Colombo exceeded 300 subscribers and became the most active Meetup in Sri Lanka with the highest number of members.

The keynote was delivered by Hiranya on "Handling I/O in Java". It was the best keynote we have had at the Java meetups so far and was very well received by the audience. The take-away for me from the keynote is "Hiranya's laws on I/O handling" :-). He came up with a nice set of best practices and things to avoid while doing I/O programming in Java. Hiranya has already blogged about this, and you can find his slides here.

After the keynote we had the panel discussion on "Web Services with Java". Amila and Hiranya joined the panel and I had the privilege of moderating.

We started by talking about the building blocks of SOAP based web services, like SOAP, WSDL and UDDI. Then we talked briefly about how to develop web services with Axis2 - where Amila also demonstrated with a sample. Further, we discussed WS-Discovery and how it makes discovery less complex compared to UDDI.

REST was another focus topic during the discussion. Hiranya explained the concepts associated with REST and how it differs from SOAP - when to use SOAP and when to use REST. We also discussed the limitations of REST and its coupling to the underlying transport. The discussion finally ended with how to secure SOAP and RESTful services - where we talked about transport-level security and message-level security.

All in all it was a successful meetup, and I need to thank everyone who supported it. I specially need to thank WSO2 for continuously sponsoring the meetups.

Extending JMeter with a WS-Trust/STS sampler

JMeter does not have any inbuilt support for WS-Security or WS-Trust, and that made me develop this STS Sampler for JMeter - which could make anyone's life easier while load testing an STS.

First you need to have the Apache JMeter distribution. I am using v2.7.

Then you can download sts.sampler.zip from here - unzip it and copy the "repo" directory directly to JMETER_HOME. Also copy all the jars inside the lib.ext directory to JMETER_HOME/lib/ext.

That's it - now start JMeter.

Under your thread group - right click - and add the Java Request Sampler...

Now, select org.wso2.apache.jmeter.sts.STSSampler as the classname - you will then see the following...

Let me briefly explain what exactly the different parameter names mean..

STS_End_Point : End point of the Security Token Service. If you are using the STS that comes with WSO2 Identity Server, then this would be https://localhost:9443/services/wso2carbon-sts

STS_Security_Policy : Location to the WS-Security Policy - that is being used to secure STS. It can be a security policy with UsernameToken and Sign & Encryption.

Applies_To : The service against which you are going to use the token obtained from the STS - or in other words, the scope of the token. This can be any URI known to the STS. The STS may use this URI to find the public key of that service and use it to encrypt the key issued. So, whatever you put should be meaningful to your STS.

Token_Type : It can be any one of the following...

1. http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0
2. http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1

Key_Type : It can be any one of the following...

1. /SymmetricKey : A symmetric key token is requested (default)
2. /PublicKey : A public key token is requested
3. /Bearer : A bearer token is requested. This key type can be used by requestors to indicate that
they want a security token to be issued that does not require proof of possession.

Key_Size : Size of the key. By default it is set to 256. This is an integer element that indicates the size of the required key, specified in number of bits.

Claim_Dialect : Claim dialect known to the STS. This can be used to group a set of claims together.

Required_Claims : URIs known to the STS which indicate the required set of attributes. This can be a comma separated list.

System_Trust_Store : When the URL of the STS is on https, this indicates the location of the JKS file which includes the public certificate corresponding to the STS endpoint.

System_Trust_Store_Password : Password to access System_Trust_Store

Username :  This is required when the STS is secured with UsernameToken security policy. This is the corresponding user name.

Password : Password corresponding to the above Username.

Encryption_Key_Store : This is required when the STS is secured with WS-Security Encryption. Location of the JKS where the public key of the STS endpoint is stored.

Encryption_Key_Store_Password : Password corresponding to the Encryption_Key_Store.

Encryption_Key_Alias : Alias from the Encryption_Key_Store corresponding to the STS endpoint. This helps to load the public key of the STS.

Signature_Key_Store : This is required when the STS is secured with WS-Security Signature. Location of the JKS where the private key of the STS client is stored.

Signature_Key_Store_Password : Password corresponding to the Signature_Key_Store.

Signature_Key_Alias : Alias from the Signature_Key_Store corresponding to the STS client. This helps to load the private key of the STS client.

Signature_Key_Password : Password corresponding to the private key of the STS client.

Following is an example configuration that I used to load test STS which ships with WSO2 Identity Server.

Extending JMeter with a password digest generator

Recently I had to work on loading an OpenLDAP instance with 50,000 user records and carry out some stress testing.

JMeter was the best choice to populate the LDAP.

But.. in my case, OpenLDAP was configured not to accept any clear text passwords.

So, I could not log in with any of the randomly generated passwords I added via the JMeter LDAP Request sampler.

This made me write this extension to JMeter, which can be used in a generic way to generate a message digest of a given text.

You can download the JAR file from here.

Then, you need to copy this to JMETER_HOME\lib\ext.

In your test plan, add a Java Request sampler to the thread group, just before where the digested text is needed. Now, select, org.wso2.apache.jmeter.message.digest.DigestGenerator.

Then you can set a hashing algorithm and the text to be digested - either a variable or literal text. That's it - once done, you can access the digest value via ${password}.
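For reference, this is essentially what such a digest generator does for an OpenLDAP-style {SHA} password - a minimal sketch, not the sampler's actual code:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class ShaPasswordDigest {

    // Produces "{SHA}" + Base64(SHA-1(password)), the userPassword
    // format OpenLDAP accepts when clear text passwords are disabled.
    public static String digest(String password) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-1");
        byte[] hash = sha.digest(password.getBytes(StandardCharsets.UTF_8));
        return "{SHA}" + Base64.getEncoder().encodeToString(hash);
    }
}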


OAuth 2.0 with Pet Care House

OAuth 2.0 Integration Patterns with XACML

Authentication and Authorization with multiple user stores with identity chaining

This blog post explains, step by step, how to configure WSO2 ESB and WSO2 Identity Server to do run-time authentication and authorization checks against multiple user stores.

First download the deployment.zip from here and unzip it to the local file system. All the files referred in this blog post are inside this zip file.

Setting up WSO2 ESB

- Make sure ESB runs on default ports

- Copy repositoy.components.lib/org.wso2.identity.connector.ad-1.0.0.jar to [ESB_HOME]/repository/components/lib

- Copy repositoy.components.plugins/org.wso2.carbon.security.mgt-3.2.3.jar to [ESB_HOME]/repository/components/plugins

- Copy repositoy.conf/ad.prop to [ESB_HOME]/repository/conf - You can add any number of AD connections there - please update the file with your settings and following semantics.

- Add the following to the [ESB_HOME]/repository/conf/carbon.xml - just under the root element.
<CustomServicePasswordCallback>
          <ClassName>org.wso2.identity.connector.ad.ADPasswordCallbackHandler</ClassName>
    </CustomServicePasswordCallback>

- Start ESB

- Replace the Synapse configuration [Main --> Service Bus --> Source View] with the content from synapse/synapse.xml. This will create a proxy called "test" with the Entitlement Mediator - connecting to the Echo service.

- Secure the "test" proxy with UsernameToken, following the wizard. Select 'Everyone' for the role.

Setting up WSO2 Identity Server

- Make sure IS runs on 9445 [if not, change the Entitlement Mediator configuration in the ESB]

- Copy repositoy.components.lib/* to [IS_HOME]/repository/components/lib

- Copy repositoy.conf/ad.prop to [IS_HOME]/repository/conf - You can add any number of AD connections there - please update the file with your settings and following semantics.

- Copy repositoy.conf/entitlement-config.xml to [IS_HOME]/repository/conf

- Start IS

- Go to Main --> Entitlement --> Administration --> Import New Entitlement Policy, import xacml/policy.xml from the file system and enable the policy. Change the policy appropriately.

All set. Now use TryIt from the ESB against the "test" proxy.

Notes :

1. Echo service is Unsecured.

2. Any attribute Id referred from the XACML policy must be declared in ad.prop in IS.

 e.g : user.attributes.1=mail,givenName

3. This also assumes IS has a user admin/admin. If not, change the Entitlement Mediator configuration in the ESB.

4. In IS, Decision caching and Attribute caching are disabled by default.

Running two OpenLDAP instances in the same machine under MAC OS X

This blog post explains how to run two OpenLDAP instances in the same machine under MAC OS X.

1. Setup the first instance of OpenLDAP as explained in my previous blog post.

2. Execute the following commands in the same order.

$ sudo cp -r /private/etc/openldap /private/etc/openldap.node2

$ sudo cp -r /var/db/openldap /var/db/openldap.node2

$ sudo rm -r  /var/db/openldap.node2/openldap-data/*db.*

$ sudo rm -r  /var/db/openldap.node2/openldap-data/*.bdb

$ sudo rm -r  /var/db/openldap.node2/openldap-data/log*.*

$ sudo rm -r  /var/db/openldap.node2/openldap-data/alock

$ sudo cp  -r  /var/db/openldap.node2/openldap-data/DB_CONFIG.example  /var/db/openldap.node2/openldap-data/DB_CONFIG

3. Open up /private/etc/openldap.node2/ldap.conf and change the port, say to 12389

4. Open up /private/etc/openldap.node2/slapd.conf and change all the references from /private/etc/openldap to /private/etc/openldap.node2

5.  Open up /private/etc/openldap.node2/slapd.conf and change all the references from /var/db/openldap to /var/db/openldap.node2

6. Start the first OpenLDAP server running on the default port.

$ sudo /usr/libexec/slapd -d3

7. Start the second OpenLDAP instance with the following command.

 $ sudo /usr/libexec/slapd -f /private/etc/openldap.node2/slapd.conf -h ldap://localhost:12389  -d3

Setting up OpenLDAP under MAC OS X

This blog post explains how to setup OpenLDAP under Mac OS X and I have tried this out successfully under OS X Lion.

First we need to install the correct Xcode version corresponding to the OS X and then the latest MacPorts. Once this is done installing OpenLDAP via MacPorts is quite simple.

% sudo port -d selfupdate

% sudo port install openldap

The above will install OpenLDAP with a Berkeley DB back-end.

You will find the OpenLDAP configuration files at /private/etc/openldap

We need to worry about two configuration files here - slapd.conf and ldap.conf. You will find these two config files as slapd.conf.default and ldap.conf.default, in that case rename those to be slapd.conf and ldap.conf. Also make sure you copy the /private/var/db/openldap/openldap-data/DB_CONFIG.example to /private/var/db/openldap/openldap-data/DB_CONFIG.
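For example, assuming the stock MacPorts layout described above, the copies could be done like this:

% sudo cp /private/etc/openldap/slapd.conf.default /private/etc/openldap/slapd.conf
% sudo cp /private/etc/openldap/ldap.conf.default /private/etc/openldap/ldap.conf
% sudo cp /private/var/db/openldap/openldap-data/DB_CONFIG.example /private/var/db/openldap/openldap-data/DB_CONFIG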

First let's open up ldap.conf. There you need to set the BASE for the LDAP tree - and also the URI of the LDAP server. That's all - change those settings and save the file.

BASE dc=wso2,dc=com
#URI ldap://ldap.example.com ldap://ldap-master.example.com:666
URI ldap://192.168.1.83:389

#SIZELIMIT 12
#TIMELIMIT 15
#DEREF  never
TLS_REQCERT demand 

Next we need to modify the slapd.conf file. This is one of the main LDAP configuration files.

Please make sure all related schema includes are there.. un-commented..

Then you need to set suffix, rootdn and rootpw.

suffix needs to be the same as what you defined for BASE in ldap.conf.

rootdn is the DN of the OpenLDAP root user. Here I have it as cn=admin,dc=wso2,dc=com.

Then the rootpw...

This is a bit tricky, and most people get it wrong.

If you just put any clear text value in rootpw - then when you try to do an ldapsearch and authenticate, it will fail with the following error.

ldap_bind: Invalid credentials (49)

The reason is that the default distribution which comes with MacPorts is built with clear text passwords disabled. So you need to generate the password in SHA first and then put it into slapd.conf. To generate the SHA password you can use the following command.

% slappasswd -s your-password
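It prints a salted SHA hash of the given password, which goes into slapd.conf as the rootpw value - for example (your hash will differ):

{SSHA}BqYQBS48EZlLu4XYJxEXaOlRdseW2D4Y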

Also make sure that the following two lines are un-commented...

modulepath /usr/libexec/openldap
moduleload back_bdb.la

Following is the complete slapd.conf file.

#
# See slapd.conf(5) for details on configuration options.
# This file should NOT be world readable.
#
include  /private/etc/openldap/schema/core.schema
include         /private/etc/openldap/schema/cosine.schema
include         /private/etc/openldap/schema/nis.schema
include         /private/etc/openldap/schema/inetorgperson.schema

# Define global ACLs to disable default read access.

# Do not enable referrals until AFTER you have a working directory
# service AND an understanding of referrals.
#referral ldap://root.openldap.org

pidfile  /private/var/db/openldap/run/slapd.pid
argsfile /private/var/db/openldap/run/slapd.args

# Load dynamic backend modules:
modulepath /usr/libexec/openldap
moduleload back_bdb.la
# moduleload back_hdb.la
# moduleload back_ldap.la

# Sample security restrictions
# Require integrity protection (prevent hijacking)
# Require 112-bit (3DES or better) encryption for updates
# Require 63-bit encryption for simple bind
# security ssf=1 update_ssf=112 simple_bind=64

# Sample access control policy:
# Root DSE: allow anyone to read it
# Subschema (sub)entry DSE: allow anyone to read it
# Other DSEs:
#  Allow self write access
#  Allow authenticated users read access
#  Allow anonymous users to authenticate
# Directives needed to implement policy:
# access to dn.base="" by * read
# access to dn.base="cn=Subschema" by * read
# access to *
# by self write
# by users read
# by anonymous auth
#
# if no access controls are present, the default policy
# allows anyone and everyone to read anything but restricts
# updates to rootdn.  (e.g., "access to * by * read")
#
# rootdn can always read and write EVERYTHING!

#######################################################################
# BDB database definitions
#######################################################################

database bdb
suffix  "dc=wso2,dc=com"
rootdn  "cn=admin,dc=wso2,dc=com"
# Cleartext passwords, especially for the rootdn, should
# be avoid.  See slappasswd(8) and slapd.conf(5) for details.
# Use of strong authentication encouraged.
rootpw  {SSHA}BqYQBS48EZlLu4XYJxEXaOlRdseW2D4Y
# The database directory MUST exist prior to running slapd AND 
# should only be accessible by the slapd and slap tools.
# Mode 700 recommended.
directory /private/var/db/openldap/openldap-data
# Indices to maintain
index objectClass eq

Once the above is done - we can start our OpenLDAP server...

% sudo /usr/libexec/slapd -d3

Now, we need to build our LDAP tree structure...

Save the following into a file called root-ou.ldif.

dn: dc=wso2,dc=com
objectClass: dcObject
objectClass: organizationalUnit
dc: wso2
ou: WSO2

Now run the following command...

% ldapadd -D "cn=admin,dc=wso2,dc=com" -W -x -f root-ou.ldif

"cn=admin,dc=wso2,dc=com" is the value of rootdn that we setup in slapd.conf. When prompted for password, you can give the rootpw.

Now, let's add an OU called people under this.

Once again, save the following to a file called people-ou.ldif.

dn: ou=people,dc=wso2,dc=com
objectClass: organizationalUnit
ou: people

Now run the following command...

% ldapadd -D "cn=admin,dc=wso2,dc=com" -W -x -f people-ou.ldif

If your OpenLDAP instance is running on a port other than the default one, you need to use the following command instead of the above.

% ldapadd -D "cn=admin,dc=wso2,dc=com" -H ldap://localhost:389 -W -x -f people-ou.ldif

This will create the OU structure under the base DN. You can connect Apache Directory Studio to your running OpenLDAP instance to view it.
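Alternatively, you can verify the tree from the command line with ldapsearch, binding as the rootdn:

% ldapsearch -x -D "cn=admin,dc=wso2,dc=com" -W -b "dc=wso2,dc=com" "(objectClass=*)"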

Everything should be fine by now...

OpenLDAP comes with a set of default schema files, which you can find inside /private/etc/openldap/schema. If you want your own schema loaded into OpenLDAP, write your schema file, copy it to /private/etc/openldap/schema and edit slapd.conf to add an include pointing to your schema file. Then you need to restart the OpenLDAP server.

To stop the OpenLDAP instance you can use the following command...

% sudo kill  $(cat /private/var/db/openldap/run/slapd.pid)

/private/var/db/openldap/run/slapd.pid is the place where the process id of the OpenLDAP process is stored - and this location can be configured in slapd.conf.

OWASP Sri Lankan chapter inaugural meeting...

During the last couple of months I got the opportunity to get involved in more community events.

This time it's the inaugural meeting of OWASP Sri Lankan chapter.

It's great to see such a forum being formed and being active. My session there was on 'Ethical Hacking'. Mostly I focused on demonstrating how silly mistakes by programmers can lead to catastrophic security breaches.

Also there was another session on “Security in Your Own Way,” which was presented by Rosita De Rose, Process Lead – 99X Technology.

All in all, kudos to the organizers - we need to keep this going...

You can read more details of the event from here...

WSO2 Charon - released in time for the SCIM interop @ IETF 83

The M1 build of WSO2 Charon was released last week just in time for the very first SCIM interop event scheduled to start this week in Paris.

By the time of this writing, Hasini Gunasinghe, the one who leads the SCIM effort on the WSO2 front, is in France to participate in IETF 83.

Simple Cloud Identity Management [SCIM] is an emerging open standard which defines a comprehensive REST API along with a platform neutral schema and a SAML binding to facilitate the user management operations across SaaS applications, placing specific emphasis on simplicity and interoperability as well.
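For a flavor of the API, a minimal SCIM 1.0 style user representation looks roughly like this (values are illustrative); a SCIM Consumer creates the user by POSTing this JSON to the Service Provider's /Users endpoint:

{
  "schemas": ["urn:scim:schemas:core:1.0"],
  "userName": "bjensen",
  "name": {
    "givenName": "Barbara",
    "familyName": "Jensen"
  },
  "emails": [
    { "value": "bjensen@example.com", "type": "work" }
  ]
}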



SCIM challenges the Service Provisioning Markup Language [SPML]. SPML is an XML-based framework, developed by OASIS, for exchanging user, resource and service provisioning information between cooperating organizations. SPML version 1.0 was approved in October 2003, and SPML version 2.0 in April 2006. So, it has been there for almost a decade but hardly ever caught the attention of the community. One good reason is that SPML has been very biased towards SOAP/XML. This also made different vendors implement their own provisioning APIs - this is what Google implemented for Google Apps.

"Major cloud service providers...have found that they need to become more agile when configuring customer access to these services. The ability to provision user accounts rapidly, accurately, and in standardized fashion helps both service providers and their enterprise customers to achieve productive, access-controlled service usage faster. To meet this goal, these service providers, along with vendors...have collaboratively developed the new draft protocol called Simple Cloud Identity Management (SCIM)," according to the Forrester Research, Inc. report, Understanding Simple Cloud Identity Management, July 15, 2011.

That's the birth of SCIM.

WSO2 was following the progress of the SCIM specification from the very beginning and was very keen to get involved. We have very close use cases for SCIM with our Stratos Platform as a Service [PaaS]. With SCIM we believe we can integrate better with Google Apps, Salesforce and other SaaS providers. Users from WSO2 Stratos will be able to provision their accounts to different SaaS providers who support SCIM. Not just in the cloud - we believe SCIM adds strong value to our standalone Identity Server product as well. Someone running WSO2 Identity Server behind a firewall would be able to provision its users to SaaS applications running in the cloud.

This thought process led us to the WSO2 SCIM implementation. And to date it's the only Java SCIM implementation available under the open source Apache 2.0 license.

Of course, we wanted a name to go with it - among the many names proposed we picked Charon - the one who ferries you to Hades - proposed by Charith Wickramarachchi, one of my colleagues at WSO2.

WSO2 Charon includes four main modules.
  • Charon-Core: The API implementation of the SCIM specification. It provides APIs for both the server side and the consumer side, so that a SCIM Service Provider or a SCIM Consumer can be developed based on Charon-Core.
  • Charon-Deployment: A reference implementation of a SCIM service provider. It is an Apache Wink based webapp that can be deployed in an application server to expose the SCIM service provider.
  • Charon-Samples: This contains a set of samples illustrating the SCIM Consumer side use cases which can be run against a SCIM server.
  • Charon-Utils: This contains a set of default implementations for the extension points made available in Charon-Core.
WSO2 Charon in its M1 release supports the following features from the SCIM specification (a sample request appears right after the list), and we are planning to be feature complete by May this year.
  • User operations
    • Create(POST)
    • Retrieve(GET)
    • Update(PUT)
    • Delete(DELETE)
    • List(GET)
    • User Schema. 
  •  Group operations
    • Create(POST)
    • Retrieve(GET)
    • Update(PUT)
    • Delete(DELETE)
    • List(GET)
    • Group Schema
  • Representation : JSON
  • SCIM Client API
  • Response Codes
  • Authentication : HTTP Basic Auth
  • SCIM Resource endpoints exposed as JAX-RS based REST resources using Apache Wink
  • In Memory User Store
  • JAX-RS Response handling
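
To give a sense of what these operations look like on the wire, here is a minimal sketch of a user Create request against a SCIM 1.0 service provider - the host, credentials and attribute values are purely illustrative.

POST /Users HTTP/1.1
Host: example.com
Authorization: Basic dXNlcjpwYXNzd29yZA==
Content-Type: application/json

{
   "schemas": ["urn:scim:schemas:core:1.0"],
   "userName": "bjensen",
   "name": {
      "givenName": "Barbara",
      "familyName": "Jensen"
   }
}
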
With all these features, WSO2 Charon is ready for the very first SCIM interop event, scheduled for tomorrow, 28th March, @ IETF 83 in Paris.

UnboundID, SailPoint, Technology Nexus, BCPSOFT, Ping, Gluu, Courion & Salesforce will be there for the first interop together with WSO2.

The Java Colombo Launch

The first meetup of the Colombo Java User Group [JUG] was held on 15th March @ WSO2 #58 office...

It was a huge success - we were able to get more than 70 to attend the event, while around 180 registered for the Colombo JUG with just one month's notice...



I was on a panel discussion about secure coding with Java - with Hiranya, Srinath and Amila...

Some key areas I would like to highlight from what we focused on during the panel discussion...
  • Security concerns in application development - authentication, authorization, integrity, non-repudiation, confidentiality - best practices to follow while designing a login method - the exception shielding pattern.
  • How does Java security architecture address the above concerns -  JAAS, JGSS, Java Security Manager.
  • What are the security concerns in a distributed environment? 
  • What are the common types of attacks, and their solutions? Attacks like Cross-site Scripting, Session Hijacking, SQL Injection and Log Injection were demonstrated during the session...
  • What are the security testing best practices?  - OWASP
After the panel, the brainstorming session on 'Future of Java - Would Oracle kill Java?' started...

It was quite interesting and was nicely moderated by Senaka.

Some key points highlighted during this session...
  • Oracle may not kill Java - but will look into the more commercial side of it, by providing patches only for paying customers.
  • Oracle's response time so far for critical Java security bugs is highly satisfactory.
  • People were afraid when Oracle acquired MySQL, and it had every reason to kill MySQL, but it did not. Further, Oracle has contributed to improving the performance of MySQL.
  • What will happen to J2ME? Most probably Android will kill J2ME.
  • Java 7 adoption is still slow.
  • No room for Java on iPad [iOS].
All-in-all it was a nice couple of hours with Java enthusiasts...

I need to thank everyone who contributed to the success of this event - especially WSO2, Dr. Sanjiva Weerawarana, Harindu, Hiranya and all other colleagues at WSO2.

Looking forward to the next Colombo JUG event, sometime around late April...

    MapReduce with MongoDB

    MapReduce is a software framework introduced by Google in 2004 to support distributed computing on large data sets on clusters of computers. You can read about MapReduce from here.

    MongoDB is an open source document-oriented NoSQL database system written in C++. You can read more about MongoDB from here.

    1. Installing MongoDB.

    Follow the instructions from the MongoDB official documentation available here. In my case, I followed the instructions for OS X and it worked fine with no issues.

    I used sudo port install mongodb to install MongoDB, and one issue I faced was with the Xcode version I had. Basically, I had installed Xcode while on OS X Leopard and didn't update it after moving to Lion. Once I updated Xcode, I could install mongodb with MacPorts with no issue. Another hint - sometimes your Xcode installation doesn't work properly when you install it directly from the App Store. What you can do is get Xcode from the App Store, then go to Launchpad, find Install Xcode, and install it from there.

    2. Running MongoDB

    Starting MongoDB is simple...

    Just type mongod in the terminal or in your command console.

    By default this will start the MongoDB server on port 27017, and it will use the /data/db/ directory to store data - yes, that is the directory you created in step 1.

    In case you want to change those default settings - you can do it while starting the server.

    mongod --port [your_port] --dbpath [your_db_file_path]

    You need to make sure that your_db_file_path exists and is empty when you start the server for the first time...

    3. Starting MongoDB shell

    We can start the MongoDB shell to connect to our MongoDB server and run commands from there.

    To start the MongoDB shell and connect to a MongoDB server running on the same machine on the default port, you only need to type mongo on the command line. If your MongoDB server is running on a different machine or port, use the following.

    mongo [ip_address]:[port]

    e.g : mongo localhost:4000

    4. Let's create a Database first.

    In the MongoDB shell type the following...
    > use library
    

    The above is supposed to create a database called 'library'.

    Now, to see whether your database has been created, type the following - which lists all the databases.
    > show dbs;

    You will notice that the database you just created is not listed there. The reason is that MongoDB creates databases on demand - a database gets created only when we first add something to it.

    5. Inserting data to MongoDB.

    Let's first create two books with the following commands.
    > book1 = {name : "Understanding JAVA", pages : 100}
    > book2 = {name : "Understanding JSON", pages : 200}

    Now, let's insert these two books in to a collection called books.
    > db.books.save(book1)
    > db.books.save(book2)

    The above two statements will create a collection called books under the database library. The following statement will list the two books we just saved.
    > db.books.find();
    
    { "_id" : ObjectId("4f365b1ed6d9d6de7c7ae4b1"), "name" : "Understanding JAVA", "pages" : 100 }
    { "_id" : ObjectId("4f365b28d6d9d6de7c7ae4b2"), "name" : "Understanding JSON", "pages" : 200 }

    Let's add a few more records.
    > book = {name : "Understanding XML", pages : 300}
    > db.books.save(book)
    > book = {name : "Understanding Web Services", pages : 400}
    > db.books.save(book)
    > book = {name : "Understanding Axis2", pages : 150}
    > db.books.save(book)

    6. Writing the Map function

    Let's process this books collection to find the number of books with fewer than 250 pages and the number with 250 pages or more.
    > var map = function() {
        var category;
        if (this.pages >= 250)
            category = 'Big Books';
        else
            category = 'Small Books';
        emit(category, {name: this.name});
    };

    The Map function emits a key/value pair per book, and MongoDB groups the emitted values by key into something like the following before handing them to Reduce.
    'Big Books' : [{name: "Understanding XML"}, {name: "Understanding Web Services"}]
    'Small Books' : [{name: "Understanding JAVA"}, {name: "Understanding JSON"}, {name: "Understanding Axis2"}]

    7. Writing the Reduce function.

    The Reduce function receives a category key along with the list of values emitted for it, and simply counts how many there are.
    > var reduce = function(key, values) {
        var sum = 0;
        values.forEach(function(doc) {
            sum += 1;
        });
        return {books: sum};
    };
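
    One thing worth noting: for larger data sets MongoDB may invoke the Reduce function more than once per key, feeding it partial results it produced earlier - so a Reduce function should be able to consume its own output. Our simple counter is fine for this tiny collection, but a more defensive version would look something like the following sketch.
    > var reduce = function(key, values) {
        var sum = 0;
        values.forEach(function(doc) {
            // on a re-reduce, doc can be a partial result like {books: n}
            sum += (doc.books ? doc.books : 1);
        });
        return {books: sum};
    };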

    8. Running MapReduce against the books collection.
    > var count  = db.books.mapReduce(map, reduce, {out: "book_results"});
    > db[count.result].find()
    
    { "_id" : "Big Books", "value" : { "books" : 2 } }
    { "_id" : "Small Books", "value" : { "books" : 3 } } 

    The above says, we have 2 Big Books and 3 Small Books.
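
    Incidentally, if you don't want the intermediate book_results collection, the shell can also return the results inline - mirroring what the Java client below does with OutputType.INLINE. A minimal sketch:
    > var res = db.books.mapReduce(map, reduce, {out: {inline: 1}});
    > res.results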

    Everything done above using the MongoDB shell can be done with Java too. The following is the Java client for it. You can download the required dependency jar from here.
    import com.mongodb.BasicDBObject;
    import com.mongodb.DB;
    import com.mongodb.DBCollection;
    import com.mongodb.DBObject;
    import com.mongodb.MapReduceCommand;
    import com.mongodb.MapReduceOutput;
    import com.mongodb.Mongo;

    public class MongoClient {

        public static void main(String[] args) {

            try {
                // Connect to the MongoDB server and pick the 'library' database.
                Mongo mongo = new Mongo("localhost", 27017);
                DB db = mongo.getDB("library");

                // The 'books' collection gets created with the first insert.
                DBCollection books = db.getCollection("books");

                BasicDBObject book = new BasicDBObject();
                book.put("name", "Understanding JAVA");
                book.put("pages", 100);
                books.insert(book);

                book = new BasicDBObject();
                book.put("name", "Understanding JSON");
                book.put("pages", 200);
                books.insert(book);

                book = new BasicDBObject();
                book.put("name", "Understanding XML");
                book.put("pages", 300);
                books.insert(book);

                book = new BasicDBObject();
                book.put("name", "Understanding Web Services");
                book.put("pages", 400);
                books.insert(book);

                book = new BasicDBObject();
                book.put("name", "Understanding Axis2");
                book.put("pages", 150);
                books.insert(book);

                // The same Map and Reduce functions we used in the shell,
                // passed to the server as JavaScript strings.
                String map = "function() { " +
                        "var category; " +
                        "if ( this.pages >= 250 ) " +
                        "category = 'Big Books'; " +
                        "else " +
                        "category = 'Small Books'; " +
                        "emit(category, {name: this.name});}";

                String reduce = "function(key, values) { " +
                        "var sum = 0; " +
                        "values.forEach(function(doc) { " +
                        "sum += 1; " +
                        "}); " +
                        "return {books: sum};}";

                // INLINE output returns the results directly in the command
                // response instead of writing them to an output collection.
                MapReduceCommand cmd = new MapReduceCommand(books, map, reduce,
                        null, MapReduceCommand.OutputType.INLINE, null);

                MapReduceOutput out = books.mapReduce(cmd);

                for (DBObject o : out.results()) {
                    System.out.println(o.toString());
                }

                mongo.close();
            } catch (Exception e) {
                // A real client would handle connection/command failures;
                // for this sample we just print the stack trace.
                e.printStackTrace();
            }
        }
    }
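
    To compile and run the client, you only need the MongoDB Java driver jar on the classpath - the jar name below is just an example; use whichever version you downloaded.

    javac -cp mongo-2.7.3.jar MongoClient.java
    java -cp .:mongo-2.7.3.jar MongoClient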