Bring Your Own IDentity (BYOID) vs Take Your Own IDentity (TYOID)

Many places talk about Bring Your Own IDentity (BYOID) - but Take Your Own IDentity (TYOID) is rarely discussed. What is the difference, or do both have the same meaning?

Both look almost the same. The difference lies in how you look at it.

BYOID is relying party centric - while TYOID is identity provider centric.

Let me explain this a bit more.

If you have an Identity Management system that supports BYOID, it should be able to mediate between different heterogeneous identity protocols and standards. In other words, you should let people authenticate with their own Facebook credentials (OAuth 2.0), Twitter credentials (OAuth 1.0), Google / Yahoo credentials (OpenID), Salesforce credentials (SAML 2.0) and many more. The overall credential management and mapping cost sits at the relying party end.

If you have an Identity Management system that supports TYOID, it is identity provider centric - and you need not rely on the capabilities of the relying party's Identity Management system. Say you have to talk to a partner system that only knows SAML based authentication. The partner service will trust your SAML IdP. When users from your domain want to authenticate, they will be redirected to your own IdP. Now the users can log in to the SAML IdP with their Kerberos credentials - and authenticate to the partner service via SAML.

Here, the partner service supported BYOID - by trusting the SAML2 IdP of an outside domain - and at the same time, the local Identity Management system of the outside domain supported TYOID by letting its users authenticate with their own Kerberos credentials. Similarly, it could also let users authenticate with any form of internal credentials or any social login.

BYOID with WSO2 Identity Server

Enterprises today grow through acquisitions, mergers and partnerships. Integrating systems that were never designed to work together is hard.

Recently we integrated WSO2 App Factory with Codenvy - so you can use Codenvy as the cloud IDE for the projects created and managed in WSO2 App Factory. The integration was quite easy, as both sides supported OAuth. Codenvy could act as the OAuth client, while WSO2 App Factory acted as the OAuth authorization server. With this, Codenvy lets WSO2 App Factory users log in with their own identities from the WSO2 domain.

Life is not always as easy as this.

We have also met customers who had requirements to integrate heterogeneous identity management systems. Once company Foo creates a partnership with company Bar, the Bar users coming from their own user store should be able to access the applications hosted in company Foo. The challenge is that the applications in company Foo do not know how to talk to the user store in company Bar. Ideally we should let the users from company Bar bring their own identities.

If we are to change either side, the cost is quite high. Enterprises today are looking for solutions that can mediate heterogeneous identity protocols - that is, solutions that permit 'Bring Your Own Identity (BYOID)'.

WSO2 Identity Server is capable of supporting BYOID with the Chained Collaborative Federation (CCF) pattern.

Identity Server can mediate identities between OpenID, OAuth 1.0, OAuth 2.0, SAML 2.0 and OpenID Connect - and has an extensible architecture to cater to your custom needs.

Social login is also a key part of BYOID.

Most enterprises let customers, and even employees, associate their social logins with their corporate credentials. In that way, they can bring the social identity into the enterprise.

WSO2 Identity Server has the capability to support OpenID association in its released version (so you can integrate your Google account), and in future releases we are planning to add seamless integration with Facebook, Twitter and LinkedIn accounts.

Fine-grained Access Control for APIs

OAuth is the de facto standard for API security and it's all about access delegation.

The resource owner delegates a limited set of access rights to a third party. In OAuth terminology, this is the “scope”. A given access token has a scope associated with it and it governs the access token’s capabilities.

XACML (eXtensible Access Control Markup Language) is the de facto standard for fine-grained access control. OAuth scope can be represented in XACML policies.

Say, for example, a user delegates access to his Facebook profile to a third party, under the scope “user_activities”. This provides access to the user's list of activities via the activities connection. To achieve fine-grained access control, this can be represented in a XACML policy.
<Policy>
     <Target>
          <AnyOf>
                <AllOf>
                     <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
                          <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">user_activities</AttributeValue>
                          <AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:scope:scope-id"
                                               Category="urn:oasis:names:tc:xacml:3.0:attribute-category:scope"
                                               DataType="http://www.w3.org/2001/XMLSchema#string"
                                               MustBePresent="false"/>
                     </Match>
                </AllOf>
          </AnyOf>
     </Target>
     <Rule Effect="Permit" RuleId="permit_rule">
     </Rule>
     <Rule Effect="Deny" RuleId="deny_rule">
     </Rule>
</Policy>
The above policy will be picked when the scope associated with the access token equals user_activities. The Authorization Server needs to find all the scopes associated with the given access token and build the XACML request accordingly. The Authorization Server first receives the following introspection request.

token=gfgew789hkhjkew87
resource_id=GET https://graph.facebook.com/prabathsiriwardena/activities

Authorization Server now needs to find the scope and the client id associated with the given token and build the XACML request.
<Request>
    <Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:oauth-client">
        <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:client:client-id">
            <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">32324343434</AttributeValue>
        </Attribute>
    </Attributes>
    <Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action">
        <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id">
            <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">GET</AttributeValue>
        </Attribute>
    </Attributes>
    <Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:scope">
        <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:scope:scope-id">
            <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">user_activities</AttributeValue>
        </Attribute>
    </Attributes>
    <Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource">
        <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id">
            <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">https://graph.facebook.com/prabathsiriwardena/activities</AttributeValue>
        </Attribute>
    </Attributes>
</Request>
The above request will match the policy we defined earlier and evaluate its rules. Each rule can define the criteria for whether to permit or deny the request.

  1. User / System accesses the API passing an access token.
  2. API Gateway intercepts the request - finds the access token and calls OAuth Authorization Server (Introspection endpoint) to validate it. 
  3. Authorization Server finds the scopes and the client id associated with the access token, builds a XACML request and calls the XACML PDP. 
  4. XACML PDP evaluates the XACML request against its policy set and returns a XACML response. 
  5. OAuth Authorization Server sends back an Introspection Response which indicates the validity of the token (a sample response is sketched below). 
  6. API Gateway validates Introspection Response and then invokes the backend business API.
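
To make the flow concrete, here is a minimal sketch of what an introspection response carrying the token metadata might look like. The field names follow the style of the OAuth Token Introspection Internet draft discussed later in this post; the exact response format of a given authorization server may differ.

 HTTP/1.1 200 OK
  Content-Type: application/json

  {
    "active":true,
    "client_id":"32324343434",
    "scope":"user_activities",
    "token_type":"Bearer",
    "exp":1419356238
  }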

Enterprise API Access / Security Patterns

Accessing an API secured with OAuth, on behalf of a user logged into the system, with SAML2 Web SSO.

  1. User from domain Foo tries to access a web app deployed in domain Bar. The web app is secured with SAML2 Web SSO. 
  2. Web App finds that the user does not have an authenticated session, finds out which domain the request was initiated from and redirects the user to the SAML2 IdP in his own domain. 
  3. User authenticates to the SAML2 IdP in his own domain. 
  4. SAML2 IdP from domain Foo sends a SAML response back to the web app in domain Bar. 
  5. Web app validates the SAML response; it has to trust the domain Foo SAML2 IdP. To access backend APIs on behalf of the logged-in user, the web app needs an OAuth access token. The web app talks to the OAuth Authorization Server in its own domain, passing the SAML token. 
  6. OAuth Authorization Server trusts the SAML2 IdP in domain Foo. Validates the SAML token and sends back an access token. 
  7. Web app invokes the API with the access token. 
  8. API Gateway intercepts the request - finds the access token and calls OAuth Authorization Server to validate it. 
  9. OAuth Authorization Server validates the token and sends back a JWT (JSON Web Token) which includes end user details to the API Manager. 
  10. API Gateway adds the JWT as an HTTP header and invokes the backend business API (a sketch of this call follows below). 
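
As a rough sketch of the last step, the gateway-to-backend call could look like the following. The header name X-JWT-Assertion is an assumption here (it is the convention WSO2 API Manager uses by default); your deployment may use a different header, and the JWT value is truncated.

 GET /orders/12345 HTTP/1.1
  Host: backend.internal.example.com
  X-JWT-Assertion: eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJwZXRlckBmb28uY29tIn0.signature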

Accessing an API secured with OAuth on behalf of a user/system authenticated to a SOAP service with WS-Trust.

  1. User / System from domain Foo authenticates to the WS-Trust STS in his own domain. 
  2. STS returns back a SAML token to access the SOAP service in domain Bar. 
  3. User/System authenticates to the SOAP service in domain Bar with the SAML token. 
  4. SOAP service validates the SAML token. It has to trust the domain Foo STS. To access backend APIs - on behalf of the logged in user, SOAP service needs an OAuth access token. SOAP service talks to the OAuth Authorization Server in its own domain, passing the SAML token.
  5. OAuth Authorization Server trusts the STS in domain Foo. Validates the SAML token and sends back an access token. 
  6. SOAP service invokes the API with the access token. 
  7. API Gateway intercepts the request - finds the access token and calls OAuth Authorization Server to validate it. 
  8.  OAuth Authorization Server validates the token and sends back a JWT (JSON Web Token) which includes end user details to the API Manager. 
  9. API Gateway adds the JWT as an HTTP header and invokes the backend business API. 

Fine-grained access control with XACML.

  1. User / System accesses the API passing an access token.
  2. API Gateway intercepts the request - finds the access token and calls OAuth Authorization Server to validate it. 
  3. Authorization Server finds the scopes and the client id associated with the access token, builds a XACML request and calls the XACML PDP. 
  4. XACML PDP evaluates the XACML request against its policy set and returns a XACML response. 
  5.  OAuth Authorization Server sends back a JWT (JSON Web Token) which includes end user details to the API Manager. 
  6. API Gateway adds the JWT as an HTTP header and invokes the backend business API.

The open source Identity Server which powers the Saudi Arabia National Unemployment Assistance Program with 4 Million Users

In Saudi Arabia, ELM is a trusted provider of secure electronic services. Providing deep expertise in innovative and specialized e-services for government transactions and e-government initiatives, it is the first company in Saudi Arabia to have successfully launched a fully compliant e-government process.

In 2011, the Unemployment Assistance Program project was launched to provide an allowance for all Saudi citizens who were unemployed and searching for a job. ELM implemented the system, which included a portal where citizens could submit their requests, as well as applications running in the background that managed all the processes for qualifying the users and handling payments. Initially, the project covered just one program.

However, a year later, as the Saudi government expanded services to include several programs, ELM recognized the need to streamline the administration for managing the user identities of everyone involved with the program. Recently ELM architected and implemented a system to manage a range of processes for the Saudi national Unemployment Assistance Program. Today, ELM relies on WSO2 Identity Server to manage some 4 million users of the Unemployment Assistance Program and ensure secure online transactions.

Interested in finding out more? Read the case study.

Achieving PCI-DSS compliance with WSO2 middleware

The Payment Card Industry (PCI) Data Security Standard (DSS) was developed to encourage and enhance cardholder data security and facilitate the broad adoption of consistent data security measures globally.

PCI DSS provides a baseline of technical and operational requirements designed to protect cardholder data. This applies to all entities involved in payment card processing – including merchants, processors, acquirers, issuers, and service providers, as well as all other entities that store, process or transmit cardholder data.

PCI DSS comprises a minimum set of requirements for protecting cardholder data, and may be enhanced by additional controls and practices to further mitigate risks.

There are six key areas addressed in PCI-DSS.
  1. Network security
  2. Card-holder data security
  3. Identifying and managing vulnerabilities
  4. Strong access control measures
  5. Regular monitoring and testing
  6. Information Security Policies
Network Security
  • All WSO2 servers should be running behind a firewall - which only lets in filtered traffic. 

  • Firewall rules should only permit a selected set of context paths to the WSO2 servers.

  • If a given server is deployed in the DMZ, make sure only the required components are installed. WSO2 Feature Manager supports removing all unnecessary features. For example, if you deploy WSO2 ESB as a Security Gateway in the DMZ, you can remove the complete UI Management Console from it.

  • Change the key stores. You can find the two key stores (.jks) that ship with WSO2 products at CARBON_HOME/repository/resources/security/. Out of those two, wso2carbon.jks is the primary key store and client-truststore.jks is the trust store, where you will have the public certificates of trusted Certificate Authorities (CAs). Once you change the key stores, you need to update the corresponding entries in the following configuration files.
 CARBON_HOME/repository/conf/tomcat/catalina-server.xml
 CARBON_HOME/repository/conf/carbon.xml
 CARBON_HOME/repository/conf/axis2/axis2.xml
  • Use Secure Vault to encrypt all the passwords in the following configuration files - and make sure all default passwords have been changed.
 CARBON_HOME/repository/conf/user-mgt.xml
 CARBON_HOME/repository/conf/carbon.xml
 CARBON_HOME/repository/conf/axis2/axis2.xml
 CARBON_HOME/repository/conf/datasources/master-datasources.xml
 CARBON_HOME/repository/conf/tomcat/catalina-server.xml
  • Change the default ports. By default, WSO2 ESB runs on HTTPS port 9443 and HTTP port 9763. Also, WSO2 ESB exposes services over ports 8243 and 8280. To change the ports you can update the value of <Offset> in CARBON_HOME/repository/conf/carbon.xml, as sketched below.
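
As a minimal sketch, assuming the default carbon.xml layout, setting the offset to 2 would shift 9443 to 9445 and 9763 to 9765:

 <!-- CARBON_HOME/repository/conf/carbon.xml -->
 <Ports>
     <!-- This offset is added to every default port the server opens -->
     <Offset>2</Offset>
 </Ports>
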
  • All connections to the user stores (LDAP / AD) should be over TLS.
Card-holder data security
  • Make sure any connection initiated from the WSO2 Carbon servers to the card holder data storage is protected by network/firewall rules or running over TLS.
  • WSO2 servers store client credentials as a salted hash. Make sure you configure a strong hashing algorithm.
  • Make sure your application data storage is well protected with encryption and access control rules.
  • Make sure data exposed via APIs hosted in WSO2 servers is available only over TLS.
  • Make sure access to the Management Console of WSO2 servers is protected with TLS.
  • WSO2 products do not maintain user credentials or any confidential data in cache.
Identifying and managing vulnerabilities
  • Make sure the operating system that the WSO2 middleware runs on is up to date. Enable automatic update checks.
  • Use and regularly update anti-virus software or programs.
  • Make sure all the latest security patches for the WSO2 products are applied. All WSO2 production customers are informed immediately whenever a security vulnerability is uncovered.
  • All WSO2 products go through static code analysis tools and OWASP-recommended tools to identify and mitigate security vulnerabilities at the code level.
  • WSO2 participates in standards bodies such as OASIS, W3C and IETF, and in other prominent open source communities. We are notified whenever a vulnerability is discovered, either at the code level or at the specification level.
Strong access control measures
  • Make sure you have proper access control rules at the network level and at the physical machine level where you have deployed the WSO2 middleware.
  • Use standards-based fine-grained access control to secure all your data-exposing APIs. This can be done with the XACML support in the WSO2 product stack.
  • Have strong Role Based Access Control for the Management Console of WSO2 products. Always adhere to the principle of least privilege.
  • WSO2 servers maintain audit logs of all privileged actions performed by end users.
  • Use WSO2 BAM and WSO2 CEP for fraud detection and to analyze access patterns.
Regular monitoring and testing
  • Use network level monitoring tools to detect any violations in access control rules.
  • Use WSO2 BAM and WSO2 CEP for fraud detection and to analyze access patterns.
Information Security Policies
  • Maintain a policy that addresses information security for all personnel.
  • Authentication and Access Control policies can be developed, governed, enforced and evaluated through the WSO2 product stack.

OAuth 2.0 vs. OpenID Connect

OpenID Connect is a profile built on top of OAuth 2.0. OAuth talks about access delegation while OpenID Connect talks about authentication. In other words, OpenID Connect builds an identity layer on top of OAuth 2.0.

Authentication is the act of confirming the truth of an attribute of a datum or entity. If I say I am Peter, I need to prove that. I can prove it with something I know, something I have or something I am. Once I have proven who I claim to be, the system can trust me. Sometimes systems do not want to identify end users just by name. A name helps to identify someone uniquely - but what about other attributes? Before you get through border control, you need to identify yourself - by name, by picture, and also by fingerprints and retina scans. These are validated in real time against the data from the visa office which issued your visa. That check makes sure it is the same person who was issued the visa entering the country.

That is proving your identity. Proving your identity is authentication. Authorization is about what you can do. Your capabilities.

You could prove your identity at border control by name, by picture, and also by fingerprints and retina scans - but it's your visa that decides what you can do. To enter the country you need a valid visa that has not expired. A valid visa is not a part of your identity - it is a part of what you can do. Also, what you can do inside the country depends on the visa type. What you can do with a B1 or B2 differs from what you can do with an L1 or L2. That is authorization.

OAuth 2.0 is about authorization. Not about authentication.

With OAuth 2.0, the client does not know about the end user (the only exception is the resource owner password credentials grant type). It simply gets an access token to access a resource on behalf of the user. With OpenID Connect, the client gets an ID Token along with the access token. The ID Token is a representation of the end user's identity. What does it mean to secure an API with OpenID Connect? Or is it totally meaningless? OpenID Connect is at the application (client) level - not at the API or Resource Server level. OpenID Connect helps the client or the application find out who the end user is, but for the API that is meaningless. The only thing the API expects is the access token. If the resource server, or the API, wants to find out who the end user is, it has to query the Authorization Server. The OAuth Token Introspection specification currently does not support sending back the end user identity in the introspection response, but it would be quite useful to have an ID Token in the response (as in OpenID Connect), and this was proposed to the OAuth IETF working group.
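
To make the distinction concrete, here is a minimal sketch of an OpenID Connect token response. The access_token is all the API cares about, while the id_token - a signed JWT, truncated here - is what tells the client who the end user is. The values are illustrative only.

 HTTP/1.1 200 OK
  Content-Type: application/json
  Cache-Control: no-store

  {
    "access_token":"SlAV32hkKG",
    "token_type":"Bearer",
    "expires_in":3600,
    "id_token":"eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL2lkcC5leGFtcGxlLmNvbSIsInN1YiI6InBldGVyIn0.signature"
  }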

Web Services (SOAP) Security

Here I am posting some of the presentations we did at WSO2 on XML Signature, XML Encryption, WS-Security, WS-SecurityPolicy and WS-Trust, for the benefit of anyone who is interested in learning web services security. These are from an instructor-led training, so not all the slides are self-explanatory.

Chained Collaborative Federation (CCF) with WSO2 Identity Server

The Chained Collaborative Federation (CCF) pattern implemented with WSO2 Identity Server provides the following features/benefits.

1. Build a single sign-on solution across multiple web applications supporting heterogeneous standards/protocols.

You may have a Liferay portal which supports OpenID based login, a Drupal server which requires SAML2 Web SSO, and a web application which relies on OpenID Connect. With WSO2 Identity Server, all these heterogeneous standards/protocols can be integrated to build a unified SSO platform. Once you log in to Liferay with OpenID, you will not be required to re-authenticate when accessing Drupal or the web application. This can be further extended to build a unified SSO platform between on-premise and SaaS applications (Google Apps, Salesforce).

2. Collaborative identity federation between multiple heterogeneous identity providers.

Say I have a web application that only supports OpenID based login. To let users from partner companies, outside my domain, access it, they have to use an OpenID. In other words, each partner company should have an OpenID Provider deployed over its respective enterprise user store. In a realistic scenario, this is not a reasonable requirement. We can't expect all our partners to have an OpenID Provider deployed in their respective domains. One may have an OpenID Provider, another a SAML2 IdP, another a CAS server, yet another an OpenID Connect authorization server, and so on.

With the CCF pattern, WSO2 Identity Server provides a platform to integrate all these heterogeneous identity providers.

The web application that supports OpenID login can redirect any unauthenticated user to the WSO2 Identity Server via OpenID Directed Identity. The Identity Server will then give the user an option to pick the Identity Provider he wants to authenticate against. This identity provider may support SAML, OpenID, OpenID Connect, CAS or even a proprietary protocol. The Identity Server will take care of bridging the requested protocol (OpenID in this case) with the one the user selected and initiate the flow. The user will be redirected to his own Identity Provider and come back to the Identity Server with the response. The Identity Server will then build an OpenID response from it and send it back to the web application.

Currently the Identity Server can integrate clients and identity providers that support OpenID, OpenID Connect, OAuth, SAML2 and WS-Federation (Passive). We are planning to add support for CAS in the future.

Some of the benefits of this approach:
  • The client application only needs to trust its own Identity Provider - it is not aware of any external Identity Providers.
  • The authentication protocol at the client side is completely decoupled from the Identity Provider. Each entity can select its own, independently.
  • The trust relationships between connected partners are maintained centrally.
3. Home realm discovery.

In the previous case, when an external user is redirected to the internal IdP (Identity Server), he has to pick his Identity Provider from there. With support for home realm discovery, the Identity Server is capable of deriving the user's home IdP from the request. So the complete redirection flow will be transparent to the end user.

4. Integrated Windows Authentication (IWA).

WSO2 Identity Server supports IWA. With this, we can facilitate zero-login for your web applications that rely on OpenID, SAML, OpenID Connect and WS-Federation (Passive).

Building an ecosystem for API security

Enterprise API adoption has gone beyond predictions. It has become the ‘coolest’ way of exposing business functionality to the outside world. Both your public and private APIs need to be protected, monitored and managed. Here we focus on API security. There are so many options out there that it is easy to get confused. When to select one over another is always a question – and you need to deal with it quite carefully to identify and isolate the tradeoffs.

Security is not an afterthought. It has to be an integral part of any development project – and the same goes for APIs. API security has evolved a lot in the last five years. The growth of standards has been exponential. OAuth is the most widely adopted standard - and almost the de facto standard for API security.

OAuth is the result of a community effort to build a common standards-based solution for identity delegation. Its design was well informed by pre-OAuth vendor-specific protocols like Google AuthSub, Yahoo BBAuth and Flickr Auth.

The core concept behind OAuth is to generate a short-lived temporary token under the approval of the resource owner and share it with the client who wants to access the resource on behalf of its owner. This is well explained by Eran Hammer using a parking valet key as an analogy. A valet key will let a third party drive, but with restrictions - say, only a mile or two. Also, you cannot use the valet key to do anything other than driving, like opening the trunk. Likewise, the temporary token issued under OAuth can only be used for the purpose it has been issued for - not for anything else. If you authorize a third party application to import photos from your Flickr account via OAuth, that application can use the OAuth key for that purpose only. It cannot delete or add new photos. This core concept remains the same from OAuth 1.0 to OAuth 2.0.

What makes OAuth 2.0 look different from OAuth 1.0?

OAuth 1.0 is a standard built for identity delegation. OAuth 2.0 is a highly extensible authorization framework. The best selling point of OAuth 2.0 is the extensibility it gains by being an authorization framework.

OAuth 1.0 is coupled with signature-based security. Although it has provisions to use different signature algorithms, it is still signature based. One of the key criticisms against OAuth 1.0 is the burden placed on OAuth clients for signature calculation and validation. This is not a completely valid argument; this is where proper tools come to the rescue. Why should an application developer have to worry about signature handling? Delegate that to a third party library and stay calm. If you think OAuth 2.0 is better than OAuth 1.0 because of the simplicity added by the OAuth 2.0 Bearer Token profile (versus the signature-based tokens in 1.0) – you've been misled.

Let me reiterate. The biggest advantage of OAuth 2.0 is its extensibility. The core OAuth 2.0 specification is not tightly coupled with a token type. There are several OAuth profiles being discussed under the IETF OAuth working group at the moment. The Bearer token profile is already a proposed IETF standard - RFC 6750.

The Bearer token profile is the most widely used one today for API security. The access token used under the Bearer token profile is a randomly generated string. Anyone who is in possession of this token can use it to access a secured API. In fact, that is what the name implies too. The protection of this token is facilitated by the underlying transport channel via TLS. TLS only provides security while the token is in transit. It's the responsibility of the OAuth Token Issuer (or the Authorization Server) and the OAuth client to protect the access token while it is stored. In most cases, the access token needs to be encrypted. Also, the token issuer needs to guarantee the randomness of the generated access token - and it has to be long enough to withstand brute-force attacks.

OAuth 2.0 has three major phases (to be precise, phase 1 and phase 2 could overlap based on the grant type).

1. Requesting an Authorization Grant.
2. Exchanging the Authorization Grant for an Access Token.
3. Access the resources with the Access Token.

The OAuth 2.0 core specification does not mandate any access token type. Also, the requester or the client cannot decide which token type it needs. It's purely up to the Authorization Server to decide which token type is returned in the Access Token response - which is phase 2.

The access token type provides the client the information required to successfully utilize the access token to make a request to the protected resource (along with type-specific attributes). The client must not use an access token if it does not understand the token type.

Each access token type definition specifies the additional attributes (if any) sent to the client together with the "access_token" response parameter. It also defines the HTTP authentication method used to include the access token when making a request to the protected resource.

For example, the following is what you get in the Access Token response irrespective of which grant type you use (to be precise, if the grant type is client credentials, there won't be any refresh_token in the response).

 HTTP/1.1 200 OK
  Content-Type: application/json;charset=UTF-8
  Cache-Control: no-store
  Pragma: no-cache

  {
    "access_token":"mF_9.B5f-4.1JqM",
    "token_type":"Bearer",
    "expires_in":3600,
    "refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA"
  }

The above is for Bearer, and the following is for MAC.

HTTP/1.1 200 OK
  Content-Type: application/json
  Cache-Control: no-store

  {
    "access_token":"SlAV32hkKG",
    "token_type":"mac",
    "expires_in":3600,
    "refresh_token":"8xLOxBtZp8",
    "mac_key":"adijq39jdlaska9asud",
    "mac_algorithm":"hmac-sha-256"
  }

The MAC Token Profile is very close to what we have in OAuth 1.0.

The OAuth authorization server will issue a MAC key, along with the signature algorithm to be used and an access token that can be used as an identifier for the MAC key. Once the client has access to the MAC key, it can use it to sign a normalized string derived from the request to the resource server. Unlike the Bearer token, the MAC key will never be shared between the client and the resource server. It's only known to the authorization server and the client. Once the resource server gets the signed message with MAC headers, it has to validate the signature by talking to the authorization server. Under the MAC token profile, TLS is only needed for the first step, during the initial handshake where the client gets the MAC key from the authorization server. Calls to the resource server need not be over TLS, as we never expose the MAC key over the wire.

The MAC Access Token response has two additional attributes: mac_key and mac_algorithm. Let me repeat this - "Each access token type definition specifies the additional attributes (if any) sent to the client together with the "access_token" response parameter".

The MAC Token Profile defines the HTTP MAC access authentication scheme, providing a method for making authenticated HTTP requests with partial cryptographic verification of the request, covering the HTTP method, request URI, and host. In the above response, access_token is the MAC key identifier. Unlike Bearer, the MAC token profile never passes its secret over the wire.

The access_token or the MAC key identifier is a string identifying the MAC key used to calculate the request MAC. The string is usually opaque to the client. The server typically assigns a specific scope and lifetime to each set of MAC credentials. The identifier may denote a unique value used to retrieve the authorization information (e.g. from a database), or self-contain the authorization information in a verifiable manner (i.e. a string consisting of some data and a signature).

The mac_key is a shared symmetric secret used as the MAC algorithm key. The server will not reissue a previously issued MAC key and MAC key identifier combination.

Phase-3 will utilize the access token obtained in phase-2 to access the protected resource.

The following shows how the Authorization HTTP header looks when a Bearer Token is being used.

Authorization: Bearer mF_9.B5f-4.1JqM

This adds very low overhead on the client side. It simply needs to pass the exact access_token it got from the Authorization Server in phase-2.
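
For completeness, here is a minimal sketch of a full API call carrying the Bearer token; the host and path are hypothetical.

 GET /friends/list HTTP/1.1
  Host: api.example.com
  Authorization: Bearer mF_9.B5f-4.1JqM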

Under the MAC token profile, this is how it looks.

Authorization: MAC id="h480djs93hd8",
                    ts="1336363200",
                    nonce="dj83hs9s",
                    mac="bhCQXTVyfj5cmA9uKkPFx1zeOXM="

id is the MAC key identifier or the access_token from phase-2.

ts is the request timestamp. The value is a positive integer set by the client for each request to the number of seconds elapsed from a fixed point in time (e.g. January 1, 1970 00:00:00 GMT).

nonce is a unique string generated by the client. The value is unique across all requests with the same timestamp and MAC key identifier combination.

The client uses the MAC algorithm and the MAC key to calculate the request mac.

Whether we use Bearer or MAC, the end user or the resource owner is identified using the access_token. Authorization, throttling, monitoring or any other quality-of-service operations can be carried out against the access_token irrespective of which token profile you use.

APIs are not just for internal employees. Customers and partners can access public APIs, where we do not maintain credentials internally. In that case we cannot directly authenticate them. So, we have to have a federated authentication setup for APIs, where we would trust a given partner domain, but not individuals. The SAML 2.0 Bearer Assertion Profile for OAuth 2.0 addresses this concern.

The SAML 2.0 Bearer Assertion Profile, which is built on top of the OAuth 2.0 Assertion Profile, defines the use of a SAML 2.0 Bearer Assertion as a way of requesting an OAuth 2.0 access token as well as a way of authenticating the client. Under OAuth 2.0, the way of requesting an access token is known as a grant type. Apart from decoupling the token type from the core specification, OAuth 2.0 decouples the grant type too. A grant type defines a protocol for getting the authorized access token from the resource owner. The OAuth 2.0 core specification defines four grant types - authorization code, implicit, client credentials and resource owner password. But it is not limited to four. A grant type is another way of extending the OAuth 2.0 framework. OAuth 1.0 was coupled to a single grant type, which is quite similar to the authorization code grant type in 2.0.

SAML2 Bearer Assertion Profile defines its own grant type (urn:ietf:params:oauth:grant-type:saml2-bearer). Using this grant type a client can get either a MAC token or a Bearer token from the OAuth authorization server.

A good use case for the SAML2 grant type is a SAML2 Single Sign On (SSO) scenario. A partner employee can log in to a web application using SAML2 SSO (we have to trust the partner's SAML2 IdP), and later the web application needs to access a secured API on behalf of the logged-in user. To do that, the web application can take the SAML2 assertion already provided and exchange it for an OAuth access token via the SAML2 grant type. For this we need an OAuth Authorization Server running inside our domain - one that trusts the external SAML2 IdP.
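
A minimal sketch of such a token request is shown below, assuming a hypothetical token endpoint; the assertion parameter carries the base64url-encoded SAML2 assertion (truncated here), and the client would normally also authenticate itself to the token endpoint.

 POST /oauth2/token HTTP/1.1
  Host: authorization-server.example.com
  Content-Type: application/x-www-form-urlencoded

  grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Asaml2-bearer&
  assertion=PHNhbWxwOl...ZT4&
  scope=read_orders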

Unlike the four other grant types defined in the OAuth 2.0 core specification, the SAML2 grant type needs the resource owner to define the allowed scope for a given client out-of-band.

The JSON Web Token (JWT) Bearer Profile is almost the same as the SAML2 Assertion Profile. Instead of SAML tokens, it uses JSON Web Tokens. The JWT Bearer profile also introduces a new grant type (urn:ietf:params:oauth:grant-type:jwt-bearer).

This provision for extensibility makes OAuth 2.0 far superior to OAuth 1.0. That does not mean it's perfect in every way.

To be the de facto standard for API security, OAuth 2.0 needs to operate in a highly distributed manner and still be interoperable. We need clear boundaries and well-defined interfaces between the client, the authorization server and the resource server. The OAuth 2.0 specification breaks this into two major flows. The first is the process of getting the access token from the authorization server - which is based on a grant type. The second is the process of using it in a request to the resource server. The way the resource server talks to the authorization server to validate the token is not addressed in the core specification. This has let vendor-specific APIs creep in between the resource server and the authorization server, which kills interoperability. The resource server becomes coupled with the authorization server, and this results in vendor lock-in.

The Internet draft OAuth Token Introspection, which is being discussed under the IETF OAuth working group at the moment, defines a method for a client or a protected resource (resource server) to query an OAuth authorization server to determine metadata about an OAuth token. The resource server needs to send the access token and the resource id (which is going to be accessed) to the authorization server's introspection endpoint. The authorization server can check the validity of the token, evaluate any access control rules around it, and send the response back to the resource server. In addition to the token validity information, it will also return the scopes, client_id and some other metadata associated with the token.

Apart from having a well-defined interface between the OAuth authorization server and the resource server, a given authorization server should also have the capability to issue tokens of different types. To do this, the client should indicate the token type it needs in the authorization request. But in the OAuth authorization request there is no token type defined. This limits the capability of the authorization server to handle multiple token types simultaneously, or it requires some out-of-band mechanism to associate token types with clients.

Both the authorization server and the resource server should have the ability to expose their capabilities and requirements through a standard metadata endpoint.

The resource server should be able to expose its metadata per resource - which type of token a given request expects, the required scope, and so on. Also, the requirements could change based on the token type. If it is a MAC token, then the resource server needs to declare which signature algorithm it expects. This could possibly be supported via an OAuth extension to WADL (Web Application Description Language). Similarly, the authorization server also needs to expose its metadata - the supported token types, grant types and so on.

User-Managed Access (UMA) Profile of OAuth 2.0 introduces a standard endpoint to share metadata at the authorization server level. The authorization server can publish its token end point, supported token types and supported grant types via this UMA authorization server configuration data endpoint as a JSON document.

The UMA profile also mandates a set of UMA-specific metadata to be published through this endpoint. This couples the authorization server to UMA - a profile that addresses a much bigger problem than the need to discover authorization server metadata. It would be more ideal to introduce the publishing/discovery of authorization server metadata through an independent OAuth profile, and extend that in UMA to address more UMA-specific requirements.

The problem addressed by UMA goes far beyond just exposing Authorization Server metadata. UMA is undoubtedly going to be one of the key ingredients in any ecosystem for API security.

UMA defines how resource owners can control protected-resource access by clients operated by arbitrary requesting parties, where the resources reside on any number of resource servers, and where a centralized authorization server governs access based on resource owner policy. UMA defines two standard interfaces for the Authorization Server. One interface is between the Authorization Server and the Resource Server (protection API), while the other is between Authorization Server and Client (authorization API).

To initiate the UMA flow, the resource owner has to introduce all his resource servers to the centralized authorization server. With this, each resource server will get an access_token from the authorization server - and that can be used by the resource servers to access the protection API exposed by the authorization server. The API consists of an OAuth resource set registration endpoint as defined by the OAuth Resource Registration draft specification, an endpoint for registering client-requested permissions, and an OAuth token introspection endpoint.

The client, or the requesting party, can be unknown to the resource owner. When it tries to access a resource, the resource server will provide the necessary details - so the requesting party can talk to the authorization server via the authorization API and get a Requesting Party Token (RPT). This API, once again, is OAuth protected - so the requesting party should be known to the authorization server.

Once the client has the RPT, it can present it to the Resource Server and get access to the protected resource. The Resource Server uses the OAuth introspection endpoint of the Authorization Server to validate the token.

This is a highly distributed, decoupled setup - and it can be further extended by incorporating the SAML2 grant type.

Token revocation is also an important aspect in API security.

Most OAuth authorization servers currently use vendor-specific APIs for revocation. This couples the resource owner to a proprietary API, leading to vendor lock-in. This aspect is not yet being addressed by the OAuth working group. The Token Revocation RFC 7009 addresses a different concern. It proposes an endpoint for OAuth authorization servers which allows clients to notify the authorization server when a previously obtained refresh or access token is no longer needed (a sketch of such a request follows below).
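
As a minimal sketch of an RFC 7009 style revocation call, with a hypothetical endpoint and illustrative token and client credentials:

 POST /oauth2/revoke HTTP/1.1
  Host: authorization-server.example.com
  Content-Type: application/x-www-form-urlencoded
  Authorization: Basic czZCaGRSa3F0MzpnWDFmQmF0M2JW

  token=45ghiukldjahdnhzdauz&token_type_hint=refresh_token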

In most cases, token revocation by the resource owner will be more prominent than token revocation by the client as proposed in this RFC. The challenge in developing a profile for revoking access tokens / refresh tokens by the resource owner is the lack of token metadata at the resource owner's end. The resource owner does not have visibility into the access token. In that case, the resource owner needs to talk to a standard endpoint at the authorization server to discover the clients it has authorized before. As per the OAuth 2.0 core specification, a client is known to the authorization server via the client-id attribute. Passing this back to the resource owner is not very meaningful, as in most cases it's an arbitrary string. This can be fixed by introducing a new attribute called “friendly-name”.

The model proposed in both OAuth 1.0 and OAuth 2.0 is client initiated. The client is the one who starts the OAuth flow, by first requesting an access token. How about the other way around - resource owner initiated OAuth delegation? Say, for example, I am a user of an online photo-sharing site. There can be multiple clients, like Facebook applications and Twitter applications, registered with it. Now I want to pick some client applications from the list and give them access to my photos under different scopes. Let's take another example: I am an employee of Foo.com. I'll be going on vacation for two weeks, and I want to delegate some of my access rights to Peter, only for that period of time. Conceptually OAuth fits nicely here. But this is a use case initiated by the Resource Owner - which is not addressed in the OAuth specification. It would require introducing a new resource owner initiated grant type. The Owner Authorization Grant Type Profile Internet draft for OAuth 2.0 addresses a similar concern by allowing the resource owner to directly authorize a relying party or a client to access a resource.

Delegated access control is about performing actions on behalf of another user. This is what OAuth addresses. Delegated “chained” access control takes this one step further. The OASIS WS-Trust specification (a specification built on top of WS-Security for SOAP) addressed this concern from its 1.4 version onwards, by introducing the “Act-As” attribute. The resource owner delegates access to the client, and the client uses the authorized access token to invoke a service residing in the resource server. This is OAuth so far. In a real enterprise use case it's a common requirement that the resource or the service needs to access another service, or a set of services, to cater to a given request. In this scenario the first service acts as the client to the second service, and it also needs to act on behalf of the original resource owner. Using the access token passed to it as it is, is not the ideal solution. The Chain Grant Type Internet draft for OAuth 2.0 is an effort to fix this. It defines a method by which an OAuth protected service or resource can use an OAuth token received from its client, in turn, to act as a client and access another OAuth protected service. This specification, still at its first draft, will need to mature soon to address these concerns in real enterprise API security scenarios.

The beauty of the extensibility offered by OAuth 2.0 should never be undermined by any of the above concerns or limitations. OAuth 2.0 is on the right track to become the de facto standard for API security and to address enterprise-scale security concerns.

Landscapes in Mobile Application Security

There are different aspects to cloud and mobile application security - and different angles you can look at it from.

Within the first decade of the 21st century, internet users worldwide increased from 350 million to more than 2 billion, and mobile phone subscribers from 750 million to 5 billion - today hitting the 6 billion mark, with a world population of around 7 billion. Most of the mobile devices out there - even the cheapest ones - can be used to access the internet.

Let me do a quick survey here. How many of you have password protected your laptops? The answer is obvious - almost all. But do you know that only 30% of mobile users password protect their mobile devices? That leaves 4.2 billion mobile devices out there unprotected. I use multi-factor authentication to secure my corporate email account on Google Apps. I use the world's deadliest, most complex password ever to protect my corporate Salesforce account. Now what? I leave my mobile phone unprotected. I am already logged in to Google Apps - I am already logged in to Salesforce. Now I leave all my confidential information accessible to anyone having access to my mobile device.

How about password reset? Google, Microsoft, Yahoo and almost all cloud service providers use mobile phone based password resets. With temporary access to your mobile device, someone can take over your accounts for good.

Multi-factor authentication for mobile applications is also not well thought out yet. That is mostly because of the false assumption: “My mobile is under my control always”. 113 cell phones are lost or stolen every minute in the U.S., and $7 million worth of smartphones are lost daily. Both Google 2-Step Verification and the Facebook Code Generator rely on a mobile phone for protecting web based access. But none of their mobile applications are protected with multi-factor authentication. Phone based multi-factor authentication won't work for mobile applications.

Why do we need to worry about all these at the corporate level? 62% of mobile workers currently use their personal smartphones for work.

These are well known facts or threats in the mobile world. Different vendors have their own solutions. Apple lets you lock your device over the Internet or even wipe all its data. And again, most mobile device management (MDM) solutions let you take control over your lost device. But then again - the longer the device stays in the wrong hands, the more damage can be done. The MDM solutions out there need to go beyond their simple definition and become an integral part of the corporate Identity Management system.

Over the last few years, almost all the cloud service providers have become mobile friendly. All of them have provided RESTful JSON based APIs. Amazon AWS, Google Cloud Storage, Salesforce and Dropbox all have REST APIs. Except for AWS, all the others are secured with OAuth 2.0. AWS uses its own authentication scheme.

OAuth 2.0 has ‘proven success’ in securing REST APIs. But for mobile applications OAuth 2.0 can be miserably misleading. It has four defined grant types: Authorization Code, Implicit, Resource Owner Password and Client Credentials.

The authorization code and implicit grant types are mostly used for browser based applications. It is a common misconception among application developers that the Resource Owner Password and Client Credentials grant types are for mobile applications. Those require you to provide your credentials to the application directly - as a practice, avoid it. If you develop a mobile application to access a secured cloud API using OAuth, use either the authorization code or the implicit grant type. There, your application needs to pop up the native browser to redirect the user to the OAuth authorization server (see the sketch below).
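
As a rough sketch, the authorization request fired through the browser could look like the following; the endpoint, client_id and the custom redirect_uri scheme are hypothetical.

 https://oauth.example.com/authorize?
     response_type=code&
     client_id=s6BhdRkqt3&
     redirect_uri=myapp%3A%2F%2Foauth-callback&
     scope=read_profile&
     state=af0ifjsldkj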

But still, that does not make you 100% safe from further attacks. Whenever there is a redirection through the browser, there is a possibility of a phishing attack. A rogue OAuth client application can have a “Login with Facebook” button which redirects you to a rogue OAuth authorization server that looks like Facebook - where you will mistake it for Facebook and give away your credentials.

There are many countermeasures that can be taken against phishing. But sadly, most OAuth authorization servers, including Facebook and Twitter, do not follow any. Your Facebook or Twitter account credentials can be phished more easily through your mobile phone than from a laptop computer. It's easier than you think.

If you have developed mobile applications with OAuth 2.0, you might have encountered another limitation. You need to bake the client key and the client secret into the mobile application itself. This is required in the first phase of the OAuth flow - it identifies the client to the OAuth authorization server. What happens if someone steals this from the device?

Let's have a look at an example. We have a mobile application which accesses the Facebook friend list of an end user and stores that friend list in Google Cloud Storage using its REST API. The Facebook friend list belongs to the end user, but the Google Cloud Storage belongs to the mobile application. The mobile application has to register with Facebook as an OAuth client and get a client key and a secret. Then, using the authorization code grant type, it can get an access token to access the end user's friend list on his behalf. To store this in Google Cloud Storage, the application has to use the client credentials grant type, where the authorization server is Google. To get this done, we need to bake the client key and the client secret into the application itself. Anyone getting access to these keys will get access to the Google Cloud Storage too. This is an area still under research, with no permanent solution yet. Solutions out there - like restrictions based on IP addresses and device IDs - will only make things a bit harder, but not impossible.

OAuth 2.0 has become the de facto standard for mobile application authentication. This in itself gives applications better failover capability in case of an attack. The recent attack against Buffer - a social media management service which lets users cross-post into social networking sites like Facebook and Twitter - is a very good example. Twitter and Facebook got flooded with posts from Buffer, but revoking the client key of Buffer sorted out the issue. Also, the attack on Buffer did not give attackers full control of users' Facebook and Twitter accounts, as it was not storing passwords.

It takes an average of 20 seconds for a user to log into a resource. Not having to enter a password each time a user needs to access a resource saves time and makes users more productive and also reduces the frustration of multiple log-on events and forgotten passwords. Users only have one password to remember and update, and only one set of password rules to remember. Their initial login provides them with access to all resources, typically for the entire day or the week.

What are the challenges in building a single sign-on solution for mobile applications?

If you provide multiple mobile applications for your corporate employees to install on their mobile devices, it's a pain to ask them to log in to each application separately. Possibly all of them share the same user store. This is analogous to the case where Facebook users log in to multiple third party mobile applications with their Facebook credentials.

In the mobile world, this can be done in two ways.

First, each native mobile application, when it needs to authenticate a user, should pop up the native browser and start the OAuth flow. Your company should have a centralized OAuth authorization server, running on top of the corporate user store. All your mobile applications will redirect the user to the same authorization server, creating a single login session under the domain of the centralized authorization server - which indirectly facilitates single sign-on.

The other approach is known as “Native SSO”. The user experience with Native SSO is much better than with browser based SSO. Here you need to have a native mobile application developed for the corporate identity provider (IdP) - or the authorization server - which will be invoked by the other applications to initiate the OAuth flow, instead of popping up the browser. Although Native SSO provides an improved user experience, it also makes phishing attacks much easier.

The other drawback of Native SSO is that it has a phase which is not standards based. Your application should know beforehand who your Identity Provider is, and should be programmed against its interface to initiate Native SSO. Currently there is an attempt by the OpenID Foundation to build a standard Single Sign On (SSO) model for native applications installed on mobile devices. This introduces an OpenID Connect client called an Authorization Agent, which can obtain tokens on behalf of other installed native applications - thereby provisioning tokens to those applications and enabling a Single Sign On experience for end users. The spec is at a very initial stage, and will require many more iterations before becoming a standard.

Let's take another example. In the previous case we assumed that we only have a single user store, which sits behind the centralized authorization server. Let's take that assumption out. We need users outside our domain - say, from federated partners - to access our mobile applications and consume services. There needs to be a bootstrap process to establish trust between those federated partners and our authorization server. To do this in a standard manner, the partners would need to support one of the federation standards out there. The best bet would be SAML. So we need to add the partner SAML IdPs as trusted IdPs to our authorization server. We can also define an authorization policy against each IdP, so that we know which rights they have in our authorization server. When the user is redirected to the authorization server - either through browser based or native SSO - he can pick which IdP he wants to authenticate against. Based on the choice, the user will be redirected to his home SAML IdP, and once authenticated, the authorization server will resume the OAuth flow. This is a one-time thing - for subsequent requests from other mobile applications, the flow will be seamless to the user, who will not need to be redirected to his home SAML IdP again.

With all the single sign-on use cases we discussed above, we are still left with one more assumption - that all the mobile applications share a centralized authorization server. Let's get rid of that too.

One key requirement for any single sign-on scenario is that we should be able to establish direct trust or brokered trust between applications and their users. In most cases this is established through IdPs. The first example we took was based on direct trust - while the second was based on brokered trust. To accomplish this use case we need to build a trust relationship between all the participating authorization servers - and also a middleman to mediate SSO. This use case is also highlighted in the Native SSO draft specification by the OpenID Foundation - but without much detail as of now.

Data in transit is another security concern. Forget about the NSA and Angela Merkel. The NSA has more than 5,000 highly capable computer scientists - and they have influence over security algorithm designs. So, let's take the NSA out of the picture. In most scenarios mobile applications depend on TLS for data confidentiality in transit. TLS has its own limitations, as it works at the transport level and the confidentiality of the data ends once it leaves the transport. Most of the data transport channels used within mobile applications use REST and JSON. The JOSE working group under the IETF is currently working to produce a standard for message-level encryption and signing of JSON payloads.
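
As a rough illustration of what message-level protection looks like, here is a sketch that signs a JSON payload as a JWS using the Nimbus JOSE+JWT library - my choice for the example, not something prescribed here. The key handling is simplified; the receiving end would verify the compact serialization with the matching MAC verifier before trusting the payload.

```java
import com.nimbusds.jose.JWSAlgorithm;
import com.nimbusds.jose.JWSHeader;
import com.nimbusds.jose.JWSObject;
import com.nimbusds.jose.Payload;
import com.nimbusds.jose.crypto.MACSigner;

import java.security.SecureRandom;

// Minimal sketch of message-level protection for a JSON payload using JWS,
// independent of the transport (TLS or not).
public class JsonMessageSigner {

    public static void main(String[] args) throws Exception {
        // A shared HMAC key; in practice this would come from a key store.
        byte[] sharedKey = new byte[32];
        new SecureRandom().nextBytes(sharedKey);

        String json = "{\"order\":\"12345\",\"amount\":250}";

        // Sign the JSON payload; the result is a compact JWS string that can be
        // carried over any channel and verified by the receiving end.
        JWSObject jws = new JWSObject(new JWSHeader(JWSAlgorithm.HS256), new Payload(json));
        jws.sign(new MACSigner(sharedKey));

        System.out.println(jws.serialize());
    }
}
```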

Let me take this discussion in another direction: managed cloud APIs. Amazon AWS, Google Cloud, Dropbox, and Salesforce all expose APIs over REST and JSON - and so do Twitter and Facebook. We develop mobile applications on top of these cloud APIs, to be used by our corporate employees. For simplicity, I'll take Twitter as the example. We have a corporate Twitter account - used to tweet events related to the company, mostly by the marketing team. To tweet through the corporate account we would need to share the official Twitter password with them - which is not ideal. Can't we let them tweet through the same corporate account, but still authenticate with their corporate LDAP credentials? We also need to enforce certain rules and policies - any tweet mentioning client names should be blocked immediately - and we need to collect statistics and do access control. In other words, to cater for all these requirements we need to turn the plain Twitter API into a managed API. Here we introduce an API gateway between your mobile application and the Twitter API. Through the API gateway we expose our own API, which wraps the Twitter API. Now the marketing team can authenticate to the Twitter wrapper API using their corporate credentials and tweet using the corporate Twitter account. The official Twitter credentials are never exposed - they are kept within the API gateway. Twitter is a simple example - but the same applies to any cloud API you want to turn into a managed cloud API, to be consumed by your mobile applications.
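
To make the pattern concrete, here is a highly simplified sketch of such a wrapper API - just enough to show that the corporate Twitter credentials stay inside the gateway while employees present their own corporate identity. The endpoints, header names, and policy check are all hypothetical, and a real gateway (or an API management product) would do proper LDAP authentication, OAuth signing towards Twitter, throttling, and statistics collection.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Simplified wrapper API: employees call /tweet with their corporate identity;
// the Twitter credentials never leave the gateway.
public class TweetGateway {

    // Held only inside the gateway; hypothetical value.
    private static final String CORPORATE_TWITTER_TOKEN = "stored-secret-token";
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/tweet", TweetGateway::handleTweet);
        server.start();
    }

    private static void handleTweet(HttpExchange exchange) throws IOException {
        try {
            String user = exchange.getRequestHeaders().getFirst("X-Corporate-User");
            String text = new String(exchange.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);

            // Placeholder checks: a real gateway would validate the caller against the
            // corporate LDAP and apply policies (e.g., block tweets naming clients).
            if (user == null || text.toLowerCase().contains("client")) {
                exchange.sendResponseHeaders(403, -1);
                return;
            }

            // Forward to the upstream API using credentials known only to the gateway.
            // (The real Twitter API requires OAuth 1.0a request signing; a bearer token
            // is used here purely to illustrate the wrapping pattern.)
            HttpRequest upstream = HttpRequest.newBuilder(URI.create("https://api.twitter.example/tweets"))
                    .header("Authorization", "Bearer " + CORPORATE_TWITTER_TOKEN)
                    .POST(HttpRequest.BodyPublishers.ofString(text))
                    .build();
            HttpResponse<String> response = CLIENT.send(upstream, HttpResponse.BodyHandlers.ofString());
            byte[] body = response.body().getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(response.statusCode(), body.length);
            exchange.getResponseBody().write(body);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            exchange.sendResponseHeaders(502, -1);
        } finally {
            exchange.close();
        }
    }
}
```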

Brief history of Identity Provisioning

I'll be doing a talk at the Open Source Developer Conference (OSDC) - Auckland - tomorrow, and thought of putting a few notes here related to my talk.

Let's explore the history of Identity Provisioning.


The OASIS Technical Committee for Service Provisioning was formed in 2001 to define an XML-based framework for exchanging user, resource, and service provisioning information. As a result, SPML (Service Provisioning Markup Language) came out in 2003 - based on three proprietary provisioning standards of that time. IBM and Microsoft played a major role in building SPML 1.0.

1. Information Technology Markup Language (ITML)

SPML 1.0 defined a request/response protocol as well as a couple of bindings. Requests and responses are all based on XML, and each operation has its own schema.

One of the bindings defined in SPML 1.0 is the SOAP binding. It specifies how to transfer SPML requests and responses wrapped in a SOAP message. All the SPML operations supported by the provisioning entity should be declared in the WSDL itself.

The other is the file binding. This binding refers to using SPML elements in a file, typically for bulk processing of provisioning data and for provisioning schema documentation.

In the closing stages of SPML 1.0, IBM and MSFT felt strongly that support for complex XML objects needed to be done differently. The OASIS TC voted to postpone this effort until 2.0. As a result, IBM unofficially stated that they wouldn't be implementing 1.0 and would wait on the conclusion of the 2.0 process.

IBM and Microsoft, who were part of the initial SPML specification, went ahead and started building their own standard for provisioning via SOAP-based services - WS-Provisioning. WS-Provisioning describes the APIs and schemas necessary to facilitate interoperability between provisioning systems in a consistent manner using web services. It includes operations for adding, modifying, deleting, and querying provisioning data. It also specifies a notification interface for subscribing to provisioning events. Provisioning data is described using XML and other types of schema, which facilitates the translation of data between different provisioning systems.

WS-Provisioning is part of the Service Oriented Architecture and has been submitted to the Organization for the Advancement of Structured Information Standards (OASIS) Provisioning Service Technical Committee.

The OASIS PSTC took both SPML 1.0 and the WS-Provisioning specification as inputs and developed SPML 2.0 in 2006.

SPML 1.0 has been called a slightly improved Directory Services Markup Language (DSML). SPML 2.0 defines an extensible protocol (through Capabilities) with support for a DSML profile (SPMLv2 DSMLv2), as well as XML Schema profiles. SPML 2.0 differentiates between the protocol and the data it carries.

SPML 1.0 defined file bindings and SOAP bindings that assumed the SPML 1.0 schema for DSML. The SPMLv2 DSMLv2 Profile provides a degree of backward compatibility with SPML 1.0: it supports a schema model similar to that of SPML 1.0 and may be more convenient for applications that mainly access targets that are LDAP or X.500 directory services. The XSD Profile may be more convenient for applications that mainly access targets that are web services.

The SPML 2.0 protocol enables better interoperability between vendors, especially for the Core capabilities (those found in 1.0). You can “extend” SPML 1.0 using ExtendedRequest, but there is no guidance about what those requests can be. SPML 2.0 defines a set of “standard capabilities” that allow you to add support in well-defined ways.

SPML definitely addressed the key objective behind forming the OASIS PSTC in 2001 - it solved the interoperability issues. But it was too complex to implement. It was SOAP-biased and addressed far more provisioning concerns than were actually needed.

It was around 2009 - 2010 that people started to talk about the death of SPML.

In parallel with the criticisms against SPML, another standard known as SCIM (Simple Cloud Identity Management) started to emerge. This was around mid-2010, initiated by Salesforce, Ping Identity, Google, and others. WSO2 joined the effort in early 2011 and has taken part in all the interop events held so far.

SCIM is purely RESTful. The initial version supported both JSON and XML. SCIM introduced a REST API for provisioning and also a core schema (which can be extended) for provisioning objects. SCIM 1.1 was finalized in 2012 and was then donated to the IETF. Once in the IETF, the definition of SCIM changed to System for Cross-domain Identity Management, and it no longer supports XML - only JSON.
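
To give a feel for how lightweight SCIM is compared to SPML, here is a sketch of creating a user over the SCIM 2.0 REST API. The endpoint and credentials are hypothetical; the schema URN, the /Users resource, and the application/scim+json media type come from the SCIM 2.0 specifications.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch: create a user over the SCIM 2.0 REST API.
public class ScimUserCreate {

    public static void main(String[] args) throws Exception {
        // Hypothetical SCIM endpoint.
        String scimUsersEndpoint = "https://idm.example.com/scim2/Users";

        String user = """
                {
                  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
                  "userName": "alice",
                  "name": { "givenName": "Alice", "familyName": "Adams" },
                  "emails": [ { "value": "alice@example.com", "primary": true } ]
                }
                """;

        HttpRequest request = HttpRequest.newBuilder(URI.create(scimUsersEndpoint))
                .header("Content-Type", "application/scim+json")
                .header("Authorization", "Basic YWRtaW46YWRtaW4=") // admin:admin, for illustration only
                .POST(HttpRequest.BodyPublishers.ofString(user))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // A successful create returns 201 with the stored representation, including the server-assigned id.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```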

As a result of the increasing pressure on the OASIS PSTC, around 2011 they started working on a REST binding for SPML, known as RESTPML. It is still based on XML and has not seen much activity so far.

Enterprise Integration with WSO2 ESB

Enterprise Integration with WSO2 ESB is my first book.

WSO2 ESB is one of the leading ESBs out there in terms of features, scalability, and performance. It has been battle-tested at eBay and many other Fortune 100 companies. At the time of this writing, eBay is handling more than 4 billion transactions per day through the WSO2 ESB. In this book, I cover some of the key features of WSO2 ESB most used in enterprise integration. Each feature is covered at an introductory level, to help anyone who is not that familiar with WSO2 ESB catch up and proceed.

I would first like to thank, Vinay Argekar, an Acquisition Editor at Packt Publishing who came up with the idea of writing a book on the most popular WSO2 product—undoubtedly the ESB. Then, I would like to thank Sneha Modi, Priyanka Shah, Subho Gupta, Romal Karani, Tanvi Bhatt, and all the others from Packt Publishing who helped me throughout to make this book a reality from the initial idea. Thank you very much for all your continuous support.

If not for Dr. Sanjiva Weerawarana and Paul Fremantle we wouldn't have had a WSO2 ESB today to talk about. They founded WSO2 in 2005 with a mission to build a new era for SOA, and WSO2 ESB is a key ingredient. Today WSO2 provides a fully open source platform with more than 16 products. I am truly grateful to both Dr. Sanjiva and Paul for everything they have done for this field and for the community. And also, for mentoring and moulding me with care and patience.

Also, I would like to thank Samisa Abeysinghe, who is our VP, Training at WSO2 for being a great mentor to me throughout all these years.

My beloved wife, Pavithra. She wanted me to write this book even more than I wanted to. If I say she is the driving force behind this book, I am not exaggerating. She simply went beyond just feeding me with all the encouragement, but also helped immensely in reviewing the book and developing samples. She was the first reader, as always. Thank you very much Pavithra.

Kasun Indrasiri, who is the Product Manager and the Architect of WSO2 ESB, added so much value to the book with his technical expertise by reviewing the book. Thank you very much Kasun. I would also like to thank Rajika, Kishanthan and Charitha for reviewing the book for technical accuracy. Your inputs are highly appreciated.

Miyuru Daminda, Dushan Abeyruwan, Isuru Udana, and all the members of the WSO2 ESB team helped me a lot, clarifying all my doubts related to the product internals. Thank you very much, I appreciate your help a lot.

Last, but not least, my parents and my sister are the driving force behind me all the time since my birth. If not for them I wouldn't be who I am today. I am so very grateful to them for leading my way to write my first book.

Although this sounds like a one-man effort, it's really a team effort. I would like to thank everyone who supported me in different ways.

The book is now available to purchase at http://www.packtpub.com/enterprise-integration-with-wso2-esb/book

Building a Manufacturing Service Bus (MSB) with WSO2 ESB

Before getting into the subject, I would like to introduce some terminology commonly used in the manufacturing industry.

The term Manufacturing Execution System (MES) was coined by AMR Research in 1990, and the MES concept has evolved over almost three decades from the development of advanced computer information systems for manufacturing. Following is the definition of an MES from the Manufacturing Execution System Association (MESA).
Manufacturing Execution Systems (MES) deliver information that enables the optimization of production activities from order launch to finished goods. Using current and accurate data, MES guides, initiates, responds to, and reports on plant activities as they occur. The resulting rapid response to changing conditions, coupled with a focus on reducing non value-added activities, drives effective plant operations and processes. MES improves the return on operational assets as well as on-time delivery, inventory turns, gross margin, and cash flow performance. MES provides mission-critical information about production activities across the enterprise and supply chain via bidirectional communications.
While in operation, an MES has to talk to multiple heterogeneous systems. Some are listed below.
  1. Product Lifecycle Management (PLM)
  2. Enterprise Resource Planning (ERP)
  3. Customer Relationship Management (CRM)
  4. Human Resource Management (HRM)
  5. Process Development Execution System (PDES)
  6. Supervisory Control And Data Acquisition (SCADA)
  7. Programmable Logic Controllers (PLC)
  8. Distributed Control Systems (DCS)
  9. Batch Automation Systems
Let's have a look at how information flows between MES and other connected systems.

From MES To PLM: production test results
From PLM To MES: product definitions, bill of operations (routings), electronic work instructions, equipment settings


From MES To ERP: production performance results, produced and consumed material
From ERP to MES: production planning, order requirements


From MES To CRM: product tracking and tracing information
From CRM To MES: product complaints


From MES To HRM: personnel performance
From HRM To MES: personnel skills, personnel availability


From MES To PDES: production test and execution results
From PDES To MES: manufacturing flow definitions


I haven't mentioned all the systems in the flow above. The reason is that the systems mentioned so far belong to a category called Level-4 systems, as defined by the ISA-95 standard.

ISA-95, as it is more commonly referred to, is an international standard for developing an automated interface between enterprise and control systems. The standard has been developed for global manufacturers, and it was designed to be applied in all industries and in all sorts of processes - batch, continuous, and repetitive.

A common data definition, B2MML (Business To Manufacturing Markup Language), has been defined based on the ISA-95 standard to link MES systems to these Level-4 systems.

The B2MML standard defines a format for exchange of ISA-95 information and defines the specific method (XML documents) for exchanges. B2MML is what makes the ISA-95 standards implementable. The schemas are freely available at www.mesa.org.

So, all the Level-4 Systems and the MES should understand B2MML.

The other systems - SCADA, PLC, DCS, and Batch Automation Systems - fall under ISA-95 Level-2 systems.

From MES To PLC: work instructions, recipes, set points
From PLC To MES: process values, alarms, adjusted set points, production results

Most MES systems include connectivity as part of their product offering. Direct communication of plant floor equipment data is established by connecting to the programmable logic controllers (PLCs). Often, plant floor data is first collected and diagnosed for real-time control in a Distributed Control System (DCS) or Supervisory Control and Data Acquisition (SCADA) system. In this case, the MES connects to these Level-2 systems to exchange plant floor data.

The industry standard for plant floor connectivity is OLE for process control (OPC).

Manufacturing Execution Systems (MES) deliver the information required for factory personnel to effectively manage the manufacturing process from order launch to the production of finished goods. The MES layer, which is responsible for managing the factory, sits below the ERP, which manages the business.

There are many common information requirements shared by the ERP and MES. An example is raw material inventory data. The ERP needs to know current raw material levels for inventory valuation purposes and for advanced planning. The MES needs to know current raw material inventory levels so that it can dispatch the correct raw materials to the correct work center at the right time. The difference has to do with the granularity of the information that is required. For the ERP, knowing the total on-hand inventory for each raw material is sufficient - it can use this data to calculate the current value of the inventory, and to plan future allocations of material to production. However, for the MES, this degree of detail is insufficient.

In order to optimize inventory usage, the MES needs to know each individual sub-lot of inventory, its quantity, its location, and its current status. 
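
The difference in granularity is easier to see with a small illustration - the field names below are mine, not from any standard - where the ERP view is just a roll-up of the sub-lots the MES tracks.

```java
import java.util.List;

// Illustrative only: the two views of the same raw-material inventory.
public class InventoryViews {

    // ERP-level view: the total on-hand quantity per material is enough for
    // inventory valuation and planning.
    record ErpMaterialStock(String materialCode, double totalOnHandKg) { }

    // MES-level view: each individual sub-lot, with quantity, location, and
    // status, so the right material can be dispatched to the right work center.
    record MesSubLot(String materialCode, String subLotId, double quantityKg,
                     String location, String status) { }

    public static void main(String[] args) {
        List<MesSubLot> subLots = List.of(
                new MesSubLot("RM-100", "LOT-001-A", 250.0, "WAREHOUSE-1/RACK-4", "RELEASED"),
                new MesSubLot("RM-100", "LOT-001-B", 150.0, "LINE-2/BUFFER", "QUARANTINED"));

        // The ERP figure is just the roll-up of the MES sub-lots.
        double total = subLots.stream().mapToDouble(MesSubLot::quantityKg).sum();
        System.out.println(new ErpMaterialStock("RM-100", total));
    }
}
```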

There are significant business benefits to well-implemented ERP-MES integration: lean business processes that flow seamlessly across the ERP-MES boundary; data synchronization, so that the plant is always making product according to current specifications and the ERP can always plan based on current and accurate information from the shop floor.

Let's summarize everything we discussed above in a diagram.

We already discussed how to use B2MML to connect Level-4 systems with the MES. Now, let's focus on OPC and see how to use it to connect the MES with Level-2 systems.

OLE for Process Control (OPC) - Object Linking and Embedding for Process Control - is the original name for a standards specification developed in 1996 by an industrial automation industry task force. The standard specifies the communication of real-time plant data between control devices from different manufacturers.

Later, the OPC Foundation officially renamed the acronym to mean "Open Platform Communications".

The change in name reflects the application of OPC technology in process control, discrete manufacturing, building automation, and many other areas. OPC has also grown beyond its original OLE (Object Linking and Embedding) implementation to include other data transport technologies, including XML, Microsoft's .NET Framework, and even the OPC Foundation's binary-encoded TCP format.

An OPC Server for one hardware device provides the same methods for an OPC Client to access its data as any and every other OPC Server for that same and any other hardware device. The aim was to reduce the amount of duplicated effort required from hardware manufacturers and their software partners, and from the SCADA and other HMI producers in order to interface the two.

Once a hardware manufacturer had developed an OPC Server for a new hardware device, their work was done to allow any 'top end' to access that device; and once the SCADA producer had developed their OPC Client, their work was done to allow access to any hardware, existing or yet to be created, that had an OPC-compliant server.

OPC servers provide a method for many different software packages (as long as they are OPC Clients) to access data from a process control device, such as a PLC or DCS. Traditionally, any time a package needed access to data from a device, a custom interface, or driver, had to be written. The purpose of OPC is to define a common interface that is written once and then reused by any business, SCADA, HMI, or custom software package.

In January 2004, the OPC Foundation tasked a working group with creating a new architecture that would take OPC to the forefront of technology and provide an interoperability framework viable for the next 10 years and beyond. The result was OPC UA.

OPC UA supports two protocols. This is visible to application programmers only via changes to the URL: the binary protocol uses opc.tcp://Server, while http://Server is for the Web Service binding. Otherwise, OPC UA is completely transparent to the API.

The binary protocol offers the best performance and least overhead, takes minimal resources (no XML parser, SOAP, or HTTP required, which is important for embedded devices), offers the best interoperability (binary is explicitly specified and allows fewer degrees of freedom during implementation), and uses a single, arbitrarily choosable TCP port for communication, which eases tunneling and enablement through a firewall.

The Web Service (SOAP) protocol is best supported by available tools, e.g., in Java or .NET environments, and is firewall-friendly, using the standard HTTP/HTTPS ports.

The WSDL for the SOAP binding can be found here.

All we discussed so far is background. What is the use of a Manufacturing Service Bus (MSB) in a manufacturing flow/process? Let's focus on that now.

Here the ESB/MSB acts as the connecting layer for the MES between Level-2 and Level-4 systems. In a typical manufacturing flow, the MSB carries out the instructions provided to it by the MES. The MES holds the information about the required recipes, the routes (the execution order of PLCs), and the material information.

After each PLC invocation from the MSB, the response data it receives is passed back to the MES. The stored data can then be consumed by the Level-4 ERP and CRM systems.

By now, the following information flow between MES and the ERP (which we mentioned at the beginning) will be more meaningful.

From MES To ERP: production performance results, produced and consumed material
From ERP to MES: production planning, order requirements


In a production system, the role of the MSB goes well beyond simply routing requests to PLCs or Level-2 systems. The MSB is also responsible for the following (a rough sketch of two of these concerns follows the list).

1. Handling and recovering from failures.
2. Handling transactions.
3. Performing under high load.
4. Load balancing between multiple PLCs.
5. Collecting operational statistics from PLCs.
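
As promised above, here is a rough sketch of two of those concerns - load balancing across multiple PLCs and retrying on failure. It is illustrative only; in a WSO2 ESB based MSB these would normally be handled through the ESB's own load-balance endpoints and fault sequences rather than hand-written code, and the actual PLC call would go through an OPC client instead of the placeholder method below.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Rough sketch: round-robin load balancing across PLC endpoints, with a simple
// retry when an invocation fails.
public class PlcDispatcher {

    private final List<String> plcEndpoints;
    private final AtomicInteger next = new AtomicInteger();

    public PlcDispatcher(List<String> plcEndpoints) {
        this.plcEndpoints = plcEndpoints;
    }

    public String dispatch(String workInstruction, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            String endpoint = plcEndpoints.get(Math.floorMod(next.getAndIncrement(), plcEndpoints.size()));
            try {
                return invokePlc(endpoint, workInstruction);   // the response goes back to the MES
            } catch (RuntimeException e) {
                System.err.println("PLC call to " + endpoint + " failed (attempt " + attempt + "): " + e.getMessage());
            }
        }
        throw new IllegalStateException("All PLC endpoints failed for: " + workInstruction);
    }

    // Placeholder for the actual protocol call (e.g., via an OPC client).
    private String invokePlc(String endpoint, String workInstruction) {
        return endpoint + " accepted " + workInstruction;
    }

    public static void main(String[] args) {
        PlcDispatcher dispatcher = new PlcDispatcher(
                List.of("opc.tcp://plc-1:4840", "opc.tcp://plc-2:4840"));
        System.out.println(dispatcher.dispatch("start-batch-42", 3));
    }
}
```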