LDAP vs Database

1. LDAP is mostly picked over a database if the read/write ratio is more than 10,000/1 (this number is a bit arguable), since it is more optimized for read operations. So, in general, LDAP is preferred for static data.

2. LDAP has scalability concerns - even Microsoft recommends MS SQL Server over Active Directory if the user base is more than 1.5 million. In most of the large-scale deployments I am aware of, a database is preferred over LDAP. There are cases where people use MySQL to handle 60 million users, and some US state governments use MS SQL Server to manage citizen records. So, the rule of thumb is that if the user base is less than 1 million, LDAP should be fine; if it exceeds that, it is better to go with a proven database. Also note that more than 90% of Fortune 500 companies use Microsoft Active Directory.

3. LDAP implementations provide more in-built functionality, such as password update/rotation policies, fine-grained access control via ACLs, account locking, groups, and so on. If you are using a database as the user store, then you either need to implement all of those yourself or use a third-party identity management system on top of the database.

4. LDAP has in-built support for managing hierarchical relationships between user entities. If this is a requirement and you go for a database, it has to be implemented from scratch.

5. Transactional data related to the user should not be stored in LDAP. It has to be maintained in the database and linked to the LDAP user via the entryUUID attribute.

6. LDAP has in-built support for multi-valued attributes. Say someone has multiple email addresses or phone numbers: LDAP by design supports keeping multiple values against the same attribute (see the sketch after this list). If this is a requirement and you go for a database, it has to be implemented from scratch.

7. LDAP has an in-built security model over the data it stores. You can define ACLs over attributes and sub-trees. A database does not give you this out of the box, so it has to be implemented from scratch.

8. In both LDAP and a relational database, the schema can be extended.

9. Integration with third-party applications is much more flexible with LDAP, since it is a standard.
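
As a minimal illustration of point 6, here is a small Java (JNDI) sketch that reads all values of the multi-valued mail attribute for a single user entry. The LDAP URL, bind credentials, and entry DN are placeholders, not taken from any particular deployment.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.InitialDirContext;

public class MultiValuedAttributeDemo {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://localhost:10389");    // placeholder LDAP server
        env.put(Context.SECURITY_PRINCIPAL, "uid=admin,ou=system"); // placeholder bind DN
        env.put(Context.SECURITY_CREDENTIALS, "secret");            // placeholder password

        InitialDirContext ctx = new InitialDirContext(env);
        // A single LDAP entry can hold many values under the same attribute, e.g. 'mail'.
        Attributes attrs = ctx.getAttributes("uid=john,ou=users,dc=example,dc=com", new String[] {"mail"});
        Attribute mail = attrs.get("mail");
        if (mail != null) {
            NamingEnumeration<?> values = mail.getAll();
            while (values.hasMore()) {
                System.out.println("mail: " + values.next());
            }
        }
        ctx.close();
    }
}

With a database, the equivalent usually means an extra table (for example, a user_email table with a foreign key to the user table) plus the join logic around it.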



Securing the Insecure - WSO2Con USA 2014

Securing the Insecure

The 33-year-old Craig Spencer returned to the USA on the 17th of October from Africa, after treating Ebola patients. Just a few days later, he tested positive for Ebola. Everyone was concerned - especially the people around him and the New Yorkers. The mayor of New York came in front of the media and gave an assurance to the citizens: that they have the world's top medical staff as well as the most advanced medical equipment to treat Ebola, and that they had been preparing for this for many months. That, for sure, might have calmed down most of the people.

Let me take another example.

When my little daughter was three months old, she would go to anyone's arms. Now she is eleven months old and knows who her mother is. Whenever she runs into any difficulty, she keeps crying till she gets to her mother. She only feels secure in her mother's arms.

When we type a password into the computer screen, we are very worried that it will be seen by our neighbors. But we never worry about our prestigious business emails being seen by the NSA. Why? Either it is totally out of our control, or we believe the NSA will only use them to tighten national security and for nothing else.

What I am trying to say with all these examples is that insecurity is a perception. It is a perception triggered by undesirable behaviors. An undesirable behavior is a reflection of how much a situation deviates from correctness.

It is all about perception, and about building that perception. There are no 100% secure systems on the earth. Most of the cryptographic algorithms developed in the 80s and 90s are now broken due to the advancements in computer processing power.

Correctness


In the computer world, most developers and operators are concerned about correctness. Correctness is about achieving the desired behavior. If you deposit $1,000 in your account, you would expect your savings to grow by exactly 1,000. You send a document to a printer, and you would expect the output to be exactly as you see it on the computer screen.

Security is concerned with preventing undesirable behaviors.

C-I-A

There are three security properties whose violation can lead to undesirable behaviors: confidentiality, integrity, and availability.

Confidentiality means protecting data from unintended recipients, both at rest and in transit. You achieve confidentiality by protecting transport channels and storage with encryption.

Integrity is a guarantee of data’s correctness and trustworthiness and the ability to detect any unauthorized modifications. It ensures that data is protected from unauthorized or unintentional alteration, modification, or deletion. The way to achieve integrity is twofold: preventive measures and detective measures. Both measures have to take care of data in transit as well as data at rest.

Making a system available for legitimate users to access all the time is the ultimate goal of any system design. Security isn’t the only aspect to look into, but it plays a major role in keeping the system up and running. The goal of the security design should be to make the system highly available by protecting it from illegal access attempts. Doing so is extremely challenging. Attacks, especially on public endpoints, can vary from an attacker planting malware in the system to a highly organized distributed denial of service (DDoS) attack.

Attacks

In March 2011, the RSA corporation was breached. Attackers were able to steal sensitive tokens related to RSA SecurID devices. These tokens were then used to break into companies that used SecurID.


In October 2013, the Adobe corporation was breached. Both source code and customer records, including passwords, were stolen.

Just a month after the Adobe attack, in November 2013, Target was attacked, and data for 40 million credit and debit cards was stolen.

How are all these attacks possible? Many breaches begin by exploiting a vulnerability in the system in question. A vulnerability is a defect that an attacker can exploit to effect an undesired behavior, with a set of carefully crafted interactions. In general, a defect is a problem in either the design or the implementation of the system, such that it fails to meet its desired requirements.

To be precise, a flaw is a defect in the design, and a bug is a defect in the implementation. A vulnerability is a defect that affects the security-relevant behavior of a system, rather than just its correctness.

If you take the RSA 2011 breach, it was based on a vulnerability in the Adobe Flash player. A carefully crafted Flash program, when run by a vulnerable Flash player, allowed the attacker to execute arbitrary code on the running machine - which was in fact due to a bug in the code.

To ensure security, we must eliminate bugs and design flaws and make them harder to exploit.

The Weakest Link

In 2010, it was discovered that since 2006, a gang of robbers equipped with a powerful vacuum cleaner had stolen more than 600,000 euros from the Monoprix supermarket chain in France. The most interesting thing was the way they did it. They found out the weakest link in the system and attacked it. To transfer money directly into the store’s cash coffers, cashiers slid tubes filled with money through pneumatic suction pipes. The robbers realized that it was sufficient to drill a hole in the pipe near the trunk and then connect a vacuum cleaner to capture the money. They didn’t have to deal with the coffer shield.



The takeaway here is that a proper security design should cover all the communication links in the system. Your system is no stronger than its weakest link.

The Defense in Depth

 
A layered approach is preferred for any system being tightened for security. This is also known as defense in depth. Most international airports, which are at high risk of terrorist attacks, follow a layered approach in their security design. On November 1, 2013, a man dressed in black walked into the Los Angeles International Airport, pulled a semi-automatic rifle out of his bag, and shot his way through a security checkpoint, killing a TSA screener and wounding at least two other officers. That checkpoint was the first layer of defense; in case someone gets through it, there has to be another layer to prevent the gunman from entering a flight and taking control of it. If there had been a security layer before the TSA checkpoint, perhaps just to scan everyone who entered the airport, it might have detected the weapon and probably saved the life of the TSA officer. The number of layers and the strength of each layer depend on which assets you want to protect and the threat level associated with them. Why would someone hire a security officer and also use a burglar alarm system to secure an empty garage?

Insider Attacks

Insider attacks are less complicated, but highly effective. From the confidential US diplomatic cables leaked by WikiLeaks to Edward Snowden's disclosures about the National Security Agency's secret operations, these are all insider attacks. Both Snowden and Bradley Manning were insiders who had legitimate access to the information they disclosed. Most organizations spend the majority of their security budget protecting their systems from external intruders, but approximately 60% to 80% of network misuse incidents originate from inside the network, according to the Computer Security Institute (CSI) in San Francisco.


Insider attacks are identified as a growing threat in the military. To address this concern, the US Defense Advanced Research Projects Agency (DARPA) launched a project called Cyber Insider Threat (CINDER) in 2010. The objective of this project was to develop new ways to identify and mitigate insider threats as soon as possible.

Security by Obscurity


Kerckhoffs' principle emphasizes that a system should be secured by its design, not because the design is unknown to an adversary. Microsoft's NTLM design was kept secret for some time, but when Samba engineers reverse-engineered it (to support interoperability between Unix and Windows), they discovered security vulnerabilities caused by the protocol design itself. In a proper security design, it's highly recommended not to use any custom-developed algorithms or protocols. Standards are like design patterns: they've been discussed, designed, and tested in an open forum. Every time you have to deviate from a standard, you should think twice - or more.

Software Security

Software security is only a branch of computer security. It is the kind of computer security that focuses on the secure design and implementation of software, using the best languages, tools, and methods. The focus of study in software security is the code. Most of the popular approaches to security treat software as a black box, and they tend to ignore software security.

In other words, it focuses on avoiding software vulnerabilities, flaws and bugs. While software security overlaps with and complements other areas of computer security, it is distinguished by its focus on a secure system's code. This focus makes it a white box approach, where other approaches are more black box. They tend to ignore the software's internals.

Why is software security's focus on the code important?

The short answer is that software defects are often the root cause of security problems, and software security aims to address these defects directly. Other forms of security tend to ignore the software and build up defenses around it. Just like the walls of a castle, these defenses are important and work up to a point. But when software defects remain, clever attackers often find a way to bypass those walls.

Operating System Security

Let's consider a few standard methods for security enforcement and see how their black box nature presents limitations that software security techniques can address.

When computer security was growing up as a field in the early 1970s, the operating system was the focus. To the operating system, the code of a running program is not what is important. Instead, the OS cares about what the program does, that is, its actions as it executes. These actions, called system calls, include reading or writing files, sending network packets and running new programs. The operating system enforces security policies that limit the scope of system calls. For example, the OS can ensure that Alice's programs cannot access Bob's files. Or that untrusted user programs cannot set up trusted services on standard network ports.


The operating system's security is critically important, but it is not always sufficient. In particular, some of the security-relevant actions of a program are too fine-grained to be mediated as system calls, and so the software itself needs to be involved. For example, a database management system, or DBMS, is a server that manages data whose security policy is specific to the application that is using that data. For an online store, for example, a database may contain security-sensitive account information for customers and vendors alongside other records, such as product descriptions, which are not security sensitive at all. It is up to the DBMS to implement security policies that control access to this data, not the OS.

Operating systems are also unable to enforce certain kinds of security policies. Operating systems typically act as an execution monitor, which determines whether to allow or disallow a program action based on the current execution context and the program's prior actions. However, there are some kinds of policies, such as information flow policies, that simply cannot be enforced precisely without considering potential future actions, or even non-actions. Software-level mechanisms can be brought to bear in these cases, perhaps in cooperation with the OS.

Firewalls and IDS

Another popular sort of security enforcement mechanism is a network monitor, such as a firewall or an intrusion detection system (IDS). A firewall generally works by blocking connections and packets from entering the network. For example, a firewall may block all attempts to connect to network servers except those listening on designated ports, such as TCP port 80, the standard port for web servers. Firewalls are particularly useful when there is software running on the local network that is only intended to be used by local users. An intrusion detection system provides more fine-grained control by examining the contents of network packets, looking for suspicious patterns. For example, to exploit a vulnerable server, an attacker may send a carefully crafted input to that server as a network packet. An IDS can look for such packets and filter them out to prevent the attack from taking place.

Firewalls and IDSs are good at reducing the avenues for attack and preventing known vectors of attack. But both devices can be worked around. For example, most firewalls will allow traffic on port 80, because they assume it is benign web traffic. But there is no guarantee that port 80 only runs web servers, even if that's usually the case. In fact, developers invented SOAP, which stands for Simple Object Access Protocol (no longer an acronym since SOAP 1.2), partly to work around firewalls that block ports other than port 80. SOAP permits more general-purpose message exchanges, but encodes them using the web protocol.


Now, IDS patterns are more fine-grained and more able to look at the details of what's going on than firewalls. But IDSs can be fooled as well, by inconsequential differences in attack patterns. Attempts to fill those gaps by using more sophisticated filters can slow down traffic, and attackers can exploit such slowdowns by sending lots of problematic traffic, creating a denial of service, that is, a loss of availability. Finally, consider anti-virus scanners. These are tools that examine the contents of files, emails, and other traffic on a host machine, looking for signs of attack. They are quite similar to IDSs, but they operate on files and have less stringent performance requirements as a result. But they too can often be bypassed by making small changes to attack vectors.

Heartbleed

Heartbleed is the name given to a bug in version 1.0.1 of the OpenSSL implementation of the Transport Layer Security (TLS) protocol. The bug can be exploited by getting a buggy server running OpenSSL to return portions of its memory. It is an example of a buffer overflow (more precisely, a buffer over-read). Let's look at black box security mechanisms, and how they fare against Heartbleed.


Operating system enforcement and anti-virus scanners can do little to help. For the former, an exploit that steals data does so using the privileges normally granted to a TLS-enabled server, so the OS can see nothing wrong. For the latter, the exploit occurs while the TLS server is executing, leaving no obvious traces in the file system. Basic packet filters used by IDSs can look for signs of exploit packets; the FBI issued signatures for the Snort IDS soon after Heartbleed was announced. These signatures should work against basic exploits, but attackers may be able to apply variations in packet format, such as chunking, to bypass the signatures. In any case, the ramifications of a successful attack are not easily determined, because any exfiltrated data goes back over the encrypted channel. Compared to these, software security methods aim to go straight to the source of the problem by preventing, or more completely mitigating, the defect in the software.

Threat Modeling

Threat modeling is a methodical, systematic approach to identifying possible security threats and vulnerabilities in a system deployment. First you need to identify all the assets in the system. Assets are the resources you have to protect from intruders. These can be user records/credentials stored in an LDAP, data in a database, files in a file system, CPU power, memory, network bandwidth, and so on. Identifying assets also means identifying all their interfaces and the interaction patterns with other system components. For example, the data stored in a database can be exposed in multiple ways. Database administrators have physical access to the database servers. Application developers have JDBC-level access, and end users have access to an API. Once you identify all the assets in the system to be protected and all the related interaction patterns, you need to list all possible threats and associated attacks. Threats can be identified by observing interactions, based on the CIA triad.


From the application server to the database is a JDBC connection. A third party can eavesdrop on that connection to read or modify the data flowing through it. That’s a threat. How does the application server keep the JDBC connection username and password? If they’re kept in a configuration file, anyone having access to the application server’s file system can find them and then access the database over JDBC. That’s another threat. The JDBC connection is protected with a username and password, which can potentially be broken by carrying out a brute-force attack. Another threat.

Administrators have direct access to the database servers. How do they access the servers? If access is open for SSH via username/password, then a brute-force attack is a likely threat. If it's based on SSH keys, where are those keys stored? Are they stored on the personal machines of administrators, or uploaded to a key server? Losing SSH keys to an intruder is another threat. How about the ports? Have you opened any ports to the database servers, where an intruder could telnet in and get control, or carry out an attack on an open port to exhaust system resources? Can the physical machine running the database be accessed from outside the corporate network, or is it only available over VPN?

All these questions lead you to identifying possible threats against the database server. End users have access to the data via the API. This is a public API, exposed outside the corporate firewall. A brute-force attack is always a threat if the API is secured with HTTP Basic/Digest authentication. Having broken the authentication layer, anyone could get free access to the data. Another possible threat is someone accessing the confidential data that flows through the transport channels; executing a man-in-the-middle attack can do this. DoS is also a possible threat: an attacker can send carefully crafted, malicious, extremely large payloads to exhaust server resources. STRIDE is a popular technique to identify threats associated with a system in a methodical manner. STRIDE stands for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege.

A Brief History of OpenID Connect

OpenID, which followed in the footsteps of SAML in 2005, revolutionized web authentication. Brad Fitzpatrick, the founder of LiveJournal, initiated it. The basic principle behind both OpenID and SAML is the same: both can be used to facilitate web single sign-on and cross-domain identity federation. OpenID is more community-friendly, user-centric, and decentralized. Yahoo added OpenID support in mid-January 2008, MySpace announced its support for OpenID in late July of that same year, and Google joined the party in late October. By December 2009, there were more than 1 billion OpenID-enabled accounts. It was a huge success as a web single sign-on protocol.

OpenID and OAuth 1.0 address two different concerns. OpenID is about authentication, while OAuth 1.0 is about delegated authorization. As both of these standards were gaining popularity in their respective domains, there was interest in combining them, so that one could authenticate a user and also get a token to access resources on the user's behalf in a single step. The Google Step 2 project was the first serious effort in this direction. It introduced an OpenID extension for OAuth, which basically takes the OAuth-related parameters in the OpenID request/response itself. The same people who initiated the Google Step 2 project later brought it into the OpenID Foundation.

The Google Step 2 OpenID extension for OAuth specification is available at: http://step2.googlecode.com/svn/spec/openid_oauth_extension/latest/openid_oauth_extension.html.

OpenID has gone through three generations to date. OpenID 1.0/1.1/2.0 is the first generation, while the OpenID extension for OAuth is the second. OpenID Connect is the third generation of OpenID.

Yahoo, Google, and many other OpenID Providers will discontinue their support for OpenID 2.0 by mid-2015 and migrate to OpenID Connect.

Unlike the OpenID extension for OAuth, OpenID Connect was built on top of OAuth. It simply introduces an identity layer on top of OAuth 2.0. This identity layer is abstracted into an ID token. An OAuth authorization server that supports OpenID Connect can return an ID token along with the access token.

OpenID Connect vs. OAuth 2.0: http://blog.facilelogin.com/2013/11/oauth-20-vs-openid-connect.html

OpenID Connect was ratified as a standard by its membership on February 26, 2014. OpenID Connect provides a lightweight framework for identity interactions in a RESTful manner. It was developed under the OpenID Foundation and has its roots in OpenID, but OAuth 2.0 influenced it tremendously.

The announcement by the OpenID Foundation regarding the launch of the OpenID Connect standard is available at: http://openid.net/2014/02/26/the-openid-foundation-launches-the-openid-connect-standard/

More details and applications of OpenID Connect are covered in my book Advanced API Security.

WSO2 Identity Server / Thinktecture - Identity Broker Interop

Today is the third and final day of the interop event happening right now in Virginia Beach, USA. Today we were able to successfully interop test a selected set of Identity Broker patterns with the Thinktecture Identity Provider.

In the first scenario, a .NET web application deployed in IIS talks to Thinktecture via WS-Federation. Thinktecture acts as the broker and asks the user to pick the Identity Provider. Then Thinktecture redirects the user to WSO2 IS via WS-Federation.


In the second scenario, WSO2 IS acts as the broker. Salesforce, which acts as the service provider, talks to WSO2 IS via SAML 2.0. WSO2 IS asks the user to pick the Identity Provider, and then redirects the user to Thinktecture via WS-Federation. On the return path, WSO2 IS converts the WS-Federation response into a SAML 2.0 response and sends it back to Salesforce.


AMAZON Still Uses OpenID!

Few have noticed that Amazon still uses (at the time of this writing) OpenID for user authentication. Check it out yourself: go to www.amazon.com and click the Sign In button. Then observe the browser address bar. You will see something similar to the following, which is an OpenID authentication request:

https://www.amazon.com/ap/signin?_encoding=UTF8&
openid.assoc_handle=usflex&
openid.claimed_id=http://specs.openid.net/auth/2.0/identifier_select&
openid.identity=http://specs.openid.net/auth/2.0/identifier_select&
openid.mode=checkid_setup&
openid.ns=http://specs.openid.net/auth/2.0&
openid.ns.pape=http://specs.openid.net/extensions/pape/1.0&
openid.pape.max_auth_age=0&
openid.return_to=https://www.amazon.com/gp/yourstore/home

WSO2 Identity Server / Microsoft ADFS - Identity Broker Interop

We are in the middle of an interop event happening right now in Virginia Beach, USA. Today and yesterday we were able to successfully interop test a selected set of Identity Broker patterns with Microsoft ADFS 2.0/3.0.

In the first scenario, a .NET web application deployed in IIS talks to ADFS via WS-Federation. ADFS acts as the broker and asks the user to pick the Identity Provider. Then ADFS redirects the user to WSO2 IS via WS-Federation.


In the second scenario, a .NET web application deployed in IIS talks to ADFS via SAML 2.0. ADFS acts as the broker and asks the user to pick the Identity Provider. Then ADFS redirects the user to WSO2 IS via SAML 2.0.


In the third scenario, WSO2 IS acts as the broker. Salesforce, which acts as the service provider, talks to WSO2 IS via SAML 2.0. WSO2 IS asks the user to pick the Identity Provider, and then redirects the user to ADFS via WS-Federation. On the return path, WSO2 IS converts the WS-Federation response into a SAML 2.0 response and sends it back to Salesforce.


POODLE Attack and Disabling SSL V3 in WSO2 Carbon 3.2.0 Based Products

Early this week, Google researchers announced that they had found a bug in the SSL 3.0 protocol. The exploit could be used to intercept critical data that’s supposed to be encrypted between clients and servers.

The exploit first allows attackers to initiate a “downgrade dance” that tells the client that the server doesn’t support the more secure TLS (Transport Layer Security) protocol and forces it to connect via SSL 3.0.

From there a man-in-the-middle attack can decrypt secure HTTP cookies.

POODLE

Google calls this the POODLE (Padding Oracle On Downgraded Legacy Encryption) attack.

This means that even if both your server and the client support TLS, due to the downgrade attack, both parties can be forced to use SSL 3.0. If either party disables its support for SSL 3.0, that will help mitigate the attack.

Both Chrome and Firefox have already announced that they are going to disable SSL 3.0 support by default. Firefox 34, with SSL 3.0 disabled, will be released on 25th November. If you want to disable SSL 3.0 on Firefox now, you can use the SSL Version Control plugin.

Chrome has already issued a patch to disable SSL 3.0.

Disable SSL V3 on WSO2 Carbon 3.2.0

The following explains how to disable SSL 3.0 support on WSO2 Carbon 3.2.0 based servers.
  1. Open [product_home]/repository/conf/mgt-transports.xml
  2. Find the transport configuration corresponding to TLS - usually this has the port 9443 and the name https.
  3. If you are using JDK 1.6, remove the parameter <parameter name="sslProtocol">TLS</parameter> from the above configuration and replace it with: <parameter name="sslEnabledProtocols">TLSv1</parameter>
The following explains how to validate the fix. You can download TestSSLServer.jar from here.

$ java -jar TestSSLServer.jar localhost 9443 

Output before the fix

Supported versions: SSLv3 TLSv1.0
Deflate compression: no
Supported cipher suites (ORDER IS NOT SIGNIFICANT):
  SSLv3
     RSA_EXPORT_WITH_RC4_40_MD5
     RSA_WITH_RC4_128_MD5
     RSA_WITH_RC4_128_SHA
     RSA_EXPORT_WITH_DES40_CBC_SHA
     RSA_WITH_DES_CBC_SHA
     RSA_WITH_3DES_EDE_CBC_SHA
     DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
     DHE_RSA_WITH_DES_CBC_SHA
     DHE_RSA_WITH_3DES_EDE_CBC_SHA
     RSA_WITH_AES_128_CBC_SHA
     DHE_RSA_WITH_AES_128_CBC_SHA
     RSA_WITH_AES_256_CBC_SHA
     DHE_RSA_WITH_AES_256_CBC_SHA
  (TLSv1.0: idem)


Output after the fix

Supported versions: TLSv1.0
Deflate compression: no
Supported cipher suites (ORDER IS NOT SIGNIFICANT):
  TLSv1.0
     RSA_EXPORT_WITH_RC4_40_MD5
     RSA_WITH_RC4_128_MD5
     RSA_WITH_RC4_128_SHA
     RSA_EXPORT_WITH_DES40_CBC_SHA
     RSA_WITH_DES_CBC_SHA
     RSA_WITH_3DES_EDE_CBC_SHA
     DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
     DHE_RSA_WITH_DES_CBC_SHA
     DHE_RSA_WITH_3DES_EDE_CBC_SHA
     RSA_WITH_AES_128_CBC_SHA
     DHE_RSA_WITH_AES_128_CBC_SHA
     RSA_WITH_AES_256_CBC_SHA
     DHE_RSA_WITH_AES_256_CBC_SHA

POODLE Attack and Disabling SSL V3 in WSO2 Carbon 4.2.0 Based Products

Early this week, Google researchers announced that they had found a bug in the SSL 3.0 protocol. The exploit could be used to intercept critical data that’s supposed to be encrypted between clients and servers.

The exploit first allows attackers to initiate a “downgrade dance” that tells the client that the server doesn’t support the more secure TLS (Transport Layer Security) protocol and forces it to connect via SSL 3.0.

From there a man-in-the-middle attack can decrypt secure HTTP cookies.

POODLE

Google calls this the POODLE (Padding Oracle On Downgraded Legacy Encryption) attack.

This means that even if both your server and the client support TLS, due to the downgrade attack, both parties can be forced to use SSL 3.0. If either party disables its support for SSL 3.0, that will help mitigate the attack.

Both Chrome and Firefox have already announced that they are going to disable SSL 3.0 support by default. Firefox 34, with SSL 3.0 disabled, will be released on 25th November. If you want to disable SSL 3.0 on Firefox now, you can use the SSL Version Control plugin.

Chrome has already issued a patch to disable SSL 3.0.

Disable SSL V3 on WSO2 Carbon 4.2.0

The following explains how to disable SSL 3.0 support on WSO2 Carbon 4.2.0 based servers.
  1. Open [product_home]/repository/conf/tomcat/catalina-server.xml
  2. Find the Connector configuration corresponding to TLS - usually this has the port 9443 and sslProtocol set to TLS.
  3. If you are using JDK 1.6, remove the attribute sslProtocol="TLS" from the above configuration and replace it with: sslEnabledProtocols="TLSv1"
  4. If you are using JDK 1.7, remove the attribute sslProtocol="TLS" from the above configuration and replace it with: sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2"
If you have enabled the pass-thru transport in any WSO2 product (ESB, API Manager), you also need to make the following configuration change.
  1. Open [product_home]/repository/conf/axis2/axis2.xml
  2. Find the transportReceiver configuration element for org.apache.synapse.transport.passthru.PassThroughHttpSSLListener
  3. If you are using JDK 1.6 - add the following parameter under transportReceiver.
  4.  <parameter name="HttpsProtocols">TLSv1</parameter> 
  5. If you are using JDK 1.7 - add the following parameter under transportReceiver.
  6.  <parameter name="HttpsProtocols">TLSv1,TLSv1.1,TLSv1.2</parameter> 
The following explains how to validate the fix. You can download TestSSLServer.jar from here.

$ java -jar TestSSLServer.jar localhost 9443 

To test the pass-thru transport, use the following command with the corresponding port.

$ java -jar TestSSLServer.jar localhost 8243 

Output before the fix

Supported versions: SSLv3 TLSv1.0
Deflate compression: no
Supported cipher suites (ORDER IS NOT SIGNIFICANT):
  SSLv3
     RSA_EXPORT_WITH_RC4_40_MD5
     RSA_WITH_RC4_128_MD5
     RSA_WITH_RC4_128_SHA
     RSA_EXPORT_WITH_DES40_CBC_SHA
     RSA_WITH_DES_CBC_SHA
     RSA_WITH_3DES_EDE_CBC_SHA
     DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
     DHE_RSA_WITH_DES_CBC_SHA
     DHE_RSA_WITH_3DES_EDE_CBC_SHA
     RSA_WITH_AES_128_CBC_SHA
     DHE_RSA_WITH_AES_128_CBC_SHA
     RSA_WITH_AES_256_CBC_SHA
     DHE_RSA_WITH_AES_256_CBC_SHA
  (TLSv1.0: idem)


Output after the fix

Supported versions: TLSv1.0
Deflate compression: no
Supported cipher suites (ORDER IS NOT SIGNIFICANT):
  TLSv1.0
     RSA_EXPORT_WITH_RC4_40_MD5
     RSA_WITH_RC4_128_MD5
     RSA_WITH_RC4_128_SHA
     RSA_EXPORT_WITH_DES40_CBC_SHA
     RSA_WITH_DES_CBC_SHA
     RSA_WITH_3DES_EDE_CBC_SHA
     DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
     DHE_RSA_WITH_DES_CBC_SHA
     DHE_RSA_WITH_3DES_EDE_CBC_SHA
     RSA_WITH_AES_128_CBC_SHA
     DHE_RSA_WITH_AES_128_CBC_SHA
     RSA_WITH_AES_256_CBC_SHA
     DHE_RSA_WITH_AES_256_CBC_SHA

POODLE Attack and Disabling SSL V3 in Apache Tomcat

Early this week, Google researchers announced that they had found a bug in the SSL 3.0 protocol. The exploit could be used to intercept critical data that’s supposed to be encrypted between clients and servers.

The exploit first allows attackers to initiate a “downgrade dance” that tells the client that the server doesn’t support the more secure TLS (Transport Layer Security) protocol and forces it to connect via SSL 3.0.

From there a man-in-the-middle attack can decrypt secure HTTP cookies.

POODLE

Google calls this the POODLE (Padding Oracle On Downgraded Legacy Encryption) attack.

This means that even if both your server and the client support TLS, due to the downgrade attack, both parties can be forced to use SSL 3.0. If either party disables its support for SSL 3.0, that will help mitigate the attack.

Both Chrome and Firefox have already announced that they are going to disable SSL 3.0 support by default. Firefox 34, with SSL 3.0 disabled, will be released on 25th November. If you want to disable SSL 3.0 on Firefox now, you can use the SSL Version Control plugin.

Chrome has already issued a patch to disable SSL 3.0.

Disable SSL V3 on Apache Tomcat

The following explains how to disable SSL 3.0 support on Apache Tomcat 7.x.x.
  1. Open [tomcat_home]/conf/server.xml
  2. Find the Connector configuration corresponding to TLS - usually this has the port 8443 and sslProtocol set to TLS.
  3. If you are using JDK 1.6, remove the attribute sslProtocol="TLS" from the above configuration and replace it with: sslEnabledProtocols="TLSv1"
  4. If you are using JDK 1.7, remove the attribute sslProtocol="TLS" from the above configuration and replace it with: sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2"
The following explains how to validate the fix. You can download TestSSLServer.jar from here.

$ java -jar TestSSLServer.jar localhost 8443 

Output before the fix

Supported versions: SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Deflate compression: no
Supported cipher suites (ORDER IS NOT SIGNIFICANT):
  SSLv3
     RSA_WITH_RC4_128_MD5
     RSA_WITH_RC4_128_SHA
     RSA_WITH_3DES_EDE_CBC_SHA
     DHE_RSA_WITH_3DES_EDE_CBC_SHA
     RSA_WITH_AES_128_CBC_SHA
     DHE_RSA_WITH_AES_128_CBC_SHA
     TLS_ECDHE_RSA_WITH_RC4_128_SHA
     TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
     TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
  (TLSv1.0: idem)
  (TLSv1.1: idem)
  TLSv1.2
     RSA_WITH_RC4_128_MD5
     RSA_WITH_RC4_128_SHA
     RSA_WITH_3DES_EDE_CBC_SHA
     DHE_RSA_WITH_3DES_EDE_CBC_SHA
     RSA_WITH_AES_128_CBC_SHA
     DHE_RSA_WITH_AES_128_CBC_SHA
     RSA_WITH_AES_128_CBC_SHA256
     DHE_RSA_WITH_AES_128_CBC_SHA256
     TLS_ECDHE_RSA_WITH_RC4_128_SHA
     TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
     TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
     TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256

Output after the fix

Supported versions: TLSv1.0 TLSv1.1 TLSv1.2
Deflate compression: no
Supported cipher suites (ORDER IS NOT SIGNIFICANT):
  TLSv1.0
     RSA_WITH_RC4_128_MD5
     RSA_WITH_RC4_128_SHA
     RSA_WITH_3DES_EDE_CBC_SHA
     DHE_RSA_WITH_3DES_EDE_CBC_SHA
     RSA_WITH_AES_128_CBC_SHA
     DHE_RSA_WITH_AES_128_CBC_SHA
     TLS_ECDHE_RSA_WITH_RC4_128_SHA
     TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
     TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
  (TLSv1.1: idem)
  TLSv1.2
     RSA_WITH_RC4_128_MD5
     RSA_WITH_RC4_128_SHA
     RSA_WITH_3DES_EDE_CBC_SHA
     DHE_RSA_WITH_3DES_EDE_CBC_SHA
     RSA_WITH_AES_128_CBC_SHA
     DHE_RSA_WITH_AES_128_CBC_SHA
     RSA_WITH_AES_128_CBC_SHA256
     DHE_RSA_WITH_AES_128_CBC_SHA256
     TLS_ECDHE_RSA_WITH_RC4_128_SHA
     TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
     TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
     TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256

A Brief History of TLS

TLS has its roots in SSL. SSL was introduced by Netscape Communications in 1994 to build a secured channel between the Netscape browser and the web server it connects to. This was an important need at that time, just prior to the dot-com bubble. The SSL 1.0 specification was never released to the public, because it was heavily criticized for the weak crypto algorithms that were used. In November 1994, Netscape released the SSL 2.0 specification with many improvements. Most of its design was done by Kipp Hickman, with much less participation from the public community. Even though it had its own vulnerabilities, it earned the trust and respect of the public as a strong protocol. The very first deployment of SSL 2.0 was in Netscape Navigator 1.1.

In January 1996, Ian Goldberg and David Wagner discovered a vulnerability in the random-number-generation logic in SSL 2.0. Mostly due to US export regulations, Netscape had to weaken its encryption scheme to use 40-bit long keys. This limited all possible key combinations to a million million, which were tried by a set of researchers in 30 hours with many spare CPU cycles; they were able to recover the encrypted data.

Because SSL 2.0 was completely under the control of Netscape, Microsoft responded to its weaknesses by developing its own variant of SSL in 1995, called Private Communication Technology (PCT). PCT fixed many security vulnerabilities uncovered in SSL 2.0 and simplified SSL handshaking with fewer round trips required to establish a connection.

SSL 3.0 was released in 1996 by Netscape, and Paul Kocher was a key architect. In fact, Netscape hired Paul Kocher to work with its own Phil Karlton and Allan Freier to build SSL 3.0 from scratch. SSL 3.0 introduced a new specification language as well as a new record type and new data encoding, which made it incompatible with SSL 2.0. It fixed issues in its predecessor, introduced due to MD5 hashing. The new version used a combination of the MD5 and SHA-1 algorithms to build a hybrid hash. SSL 3.0 was the most stable of all. In 1996, Microsoft came up with a new proposal to merge SSL 3.0 and its own SSL variant PCT 2.0 to build a new standard called Secure Transport Layer Protocol (STLP).

Due to the interest shown by different vendors in solving the same problem in different ways, in 1996 the IETF initiated the TLS working group to standardize all vendor-specific implementations. All the major vendors, including Netscape and Microsoft, met under the chairmanship of Bruce Schneier in a series of IETF meetings to decide the future of TLS. TLS 1.0 (RFC 2246) was the result; it was released by the IETF in January 1999. The differences between TLS 1.0 and SSL 3.0 aren’t dramatic, but they’re significant enough that TLS 1.0 and SSL 3.0 don’t interoperate. TLS 1.0 was quite stable and stayed unchanged for seven years, until 2006. In April 2006, RFC 4346 introduced TLS 1.1, which made few major changes to 1.0. Two years later, RFC 5246 introduced TLS 1.2, which is the latest at the time of this writing.

It's not Aladdin - it's Ali Baba :-)

Ali Baba is a character from the folk tale Ali Baba and the Forty Thieves. This story is included in many versions of the One Thousand and One Nights, to which it was added by Antoine Galland in the 18th century. It is one of the most familiar of the "Arabian Nights" tales.

In the story, Ali Baba is a poor woodcutter who discovers the secret of a thieves' den, entered with the pass phrase "open sesame".

Aladdin is also a Middle Eastern folk tale. It is one of the tales in the same book, One Thousand and One Nights, and one of the best known, although it was actually added to the collection in the 18th century by the Frenchman Antoine Galland.

It was Ali Baba - not Aladdin - who knew the pass phrase "open sesame".

It looks like in RFC 2617 (HTTP Authentication: Basic and Digest Access Authentication), Aladdin has stolen Ali Baba's pass phrase :-)
If the user agent wishes to send the userid "Aladdin" and password "open sesame", it would use the following header field:

Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
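
For completeness, the value in that header is simply base64("userid:password") with the "Basic " prefix. A minimal Java sketch (mine, not from the RFC) that reproduces the exact value above:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    public static void main(String[] args) {
        String userId = "Aladdin";
        String password = "open sesame";
        // RFC 2617 Basic scheme: base64 encode "userid:password" and prefix it with "Basic ".
        String token = Base64.getEncoder()
                .encodeToString((userId + ":" + password).getBytes(StandardCharsets.UTF_8));
        System.out.println("Authorization: Basic " + token);
        // Prints: Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
    }
}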

Dynamic Client Registration Profile

According to the OAuth 2.0 core specification, all OAuth clients must be registered with the OAuth authorization server and obtain a client identifier before any interactions. The aim of the Dynamic Client Registration OAuth 2.0 profile is to expose an endpoint for client registration in a standard manner to facilitate on-the-fly registrations.

The Dynamic Client Registration OAuth 2.0 profile is available at https://datatracker.ietf.org/doc/draft-ietf-oauth-dyn-reg

The dynamic-registration endpoint exposed by the authorization server can be secured or not. If it’s secured, it can be secured with OAuth, HTTP Basic Authentication, Mutual TLS, or any other security protocol as desired by the authorization server. The Dynamic Client Registration profile doesn’t enforce any authentication protocols over the registration endpoint, but it must be secured with TLS. If the authorization server decides that it should allow the endpoint to be public and let anyone be registered, it can do so.

To register a client, it must pass all its metadata to the registration endpoint. The following lists only a subset of the request attributes. In addition to these, the profile also has provision for introducing your own custom attributes.

POST /register HTTP/1.1 
Content-Type: application/json 
Accept: application/json 
Host: authz.server.com 


   "redirect_uris":["https://client.org/callback","https://client.org/callback2"],  
   "token_endpoint_auth_method":"client_secret_basic", 
   "grant_types": ["authorization_code" , "implicit"], 
   "response_types": ["code" , "token"], 


Let’s have a look at the definition of each parameter.

redirect_uris: An array of URIs under the control of the client. The user is redirected to one of these redirect_uris after the authorization grant.

token_endpoint_auth_method: The supported authentication scheme when talking to the token endpoint. If the value is client_secret_basic, the client sends its client ID and the client secret in the HTTP Basic Authorization header. If it’s client_secret_post, the client ID and the client secret are in the HTTP POST body. If the value is none, the client doesn’t want to authenticate, which means it’s a public client (as in the case of the OAuth Implicit grant type).

grant_types: An array of grant types supported by the client.

response_types: An array of expected response types from the authorization server.

Based on the policies of the authorization server, it can decide whether it should proceed with the registration. Even if it decides to go ahead with the registration, the authorization server need not accept all the suggested parameters from the client. For example, the client may suggest using both authorization_code and implicit as grant types, but the authorization server can decide what to allow. The same is true for the token_endpoint_auth_method: the authorization server can decide what to support.

The following is a sample response from the authorization server:

HTTP/1.1 200 OK 
Content-Type: application/json 
Cache-Control: no-store 
Pragma: no-cache 


"client_id":"iuyiSgfgfhffgfh", 
"client_secret": "hkjhkiiu89hknhkjhuyjhk", 
"client_id_issued_at":2343276600, 
 "client_secret_expires_at":2503286900, 
 "redirect_uris":["https://client.org/callback", "https://client.org/callback2"], 
 "grant_types": "authorization_code", 
 "token_endpoint_auth_method": "client_secret_basic"


Let’s have a look at the definition of each parameter.

client_id: A generated unique identifier for the client.

client_secret: The generated client secret corresponding to the client_id. This is optional. For the Implicit grant type, client_secret isn’t required.

client_id_issued_at: The time at which the client_id was issued, expressed as the number of seconds since January 1, 1970 UTC.

client_secret_expires_at: The time at which the client_secret will expire, expressed as the number of seconds since January 1, 1970 UTC.

redirect_uris: Accepted redirect_uris.

token_endpoint_auth_method: The accepted authentication method for the token endpoint.

The Dynamic Client Registration OAuth 2.0 profile is extremely useful in mobile applications. Mobile client applications secured with OAuth have the client ID and the client secret baked into the application. These are the same for all installations for a given application. If a given client secret is compromised, that will affect all installations, and rogue client applications can be developed using the stolen keys. These rogue client applications can generate more traffic on the server and exceed the legitimate throttling limit, hence causing a denial of service attack. With dynamic client registration, you need not set the same client ID and client secret for all installations. During the installation process, the application can talk to the authorization server’s registration endpoint and generate a client ID and a client secret per installation.
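
As a rough sketch of that per-installation flow (the endpoint URL and metadata values below are illustrative, not mandated by the profile), an installed application could post its metadata to the registration endpoint at first start-up and keep the returned client_id and client_secret locally:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class DynamicClientRegistration {
    public static void main(String[] args) throws Exception {
        // Placeholder registration endpoint of the authorization server.
        URL url = new URL("https://authz.server.com/register");
        String payload = "{"
                + "\"client_name\":\"My Mobile App\","
                + "\"redirect_uris\":[\"https://client.org/callback\"],"
                + "\"grant_types\":[\"authorization_code\"],"
                + "\"token_endpoint_auth_method\":\"client_secret_basic\""
                + "}";

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Accept", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(payload.getBytes(StandardCharsets.UTF_8));
        }

        // The response carries the generated client_id and client_secret for this installation;
        // a real application would parse the JSON and store both values securely.
        try (Scanner scanner = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
            System.out.println(scanner.useDelimiter("\\A").next());
        }
    }
}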

More details and applications of the OAuth 2.0 Dynamic Client Registration profile are covered in my book Advanced API Security.

Revamping WSO2 API Manager Key Management Architecture around Open Standards

WSO2 API Manager is a complete solution for designing and publishing APIs, creating and managing a developer community, and scalably routing API traffic. It leverages proven, production-ready integration, security, and governance components from the WSO2 Enterprise Service Bus, WSO2 Identity Server, and WSO2 Governance Registry. In addition, it leverages the WSO2 Business Activity Monitor for big data analytics, giving you instant insight into API behavior.

One of the limitations we have had in the API Manager so far is its tight integration with the WSO2 Identity Server. The WSO2 Identity Server acts as the key manager, which issues and validates OAuth tokens.

With the revamped architecture (still under discussion), we plan to make all integration points with the key manager extensible, so you can bring in your own OAuth authorization server. We will also ship the product with standard extension points, built around the corresponding OAuth 2.0 profiles. In case your authorization server deviates from the standard, you need to implement the KeyManager interface and plug in your own implementation.
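
A purely hypothetical sketch of what such an extension point could look like is given below; the actual interface name, method names, and signatures in WSO2 API Manager may well differ. It simply maps one operation to each of the standard endpoints discussed in the rest of this post:

import java.util.Map;

// Hypothetical sketch only; not the actual WSO2 API Manager interface.
public interface KeyManager {

    // Publish API metadata to the authorization server (OAuth Resource Set Registration endpoint).
    String registerResourceSet(String name, String[] scopes);

    // Register an OAuth client for an API Store application (Dynamic Client Registration endpoint).
    Map<String, String> registerClient(String clientName, String[] redirectUris, String[] grantTypes);

    // Validate an access token presented to the API Gateway (Token Introspection endpoint).
    Map<String, Object> introspect(String accessToken);

    // Fetch user attributes when generating the JWT (OpenID Connect UserInfo endpoint).
    Map<String, String> getUserInfo(String accessToken);
}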

[Figure: proposed key management architecture for WSO2 API Manager]

API Publisher 

The API developer first logs in to the API Publisher, creates an API with all the related metadata, and publishes it to the API Store and the API Gateway.

The API Publisher will also publish API metadata into the external authorization server via the OAuth Resource Set Registration endpoint [1].

Sample Request:


"name": "Photo Album", 
"icon_uri": "http://www.example.com/icons/flower.png", 
"scopes": [ "http://photoz.example.com/dev/scopes/view", 
                  "http://photoz.example.com/dev/scopes/all" ], 
"type": "http://www.example.com/rsets/photoalbum" 


name REQUIRED. A human-readable string describing a set of one or more resources. This name MAY be used by the authorization server in its resource owner user interface for the resource owner.

icon_uri OPTIONAL. A URI for a graphic icon representing the resource set. The referenced icon MAY be used by the authorization server in its resource owner user interface for the resource owner.

scopes REQUIRED. An array providing the URI references of scope descriptions that are available for this resource set.

type OPTIONAL. A string uniquely identifying the semantics of the resource set. For example, if the resource set consists of a single resource that is an identity claim that leverages standardized claim semantics for "verified email address", the value of this property could be an identifying URI for this claim.

Sample Response:

HTTP/1.1 201 Created
Content-Type: application/json
ETag: (matches "_rev" property in returned object)
...

{
  "status": "created",
  "_id": (id of created resource set),
  "_rev": (ETag of created resource set)
}


The objective of publishing the resources to the authorization server is to make it aware of the available resources and the scopes associated with them. An identity administrator can build the relationship between these scopes and the enterprise roles. Basically you can associate scopes with enterprise roles.

API Store 

The application developer logs in to the API Store, discovers the APIs he/she wants for the application, subscribes to those, and finally creates an application. Each application is uniquely identified by its client ID. There are two ways to associate a client ID with an application created in the API Store.

1. The application developer brings in the client ID.

The application developer creates a client ID out-of-band with the authorization server and associates it with the application he just created in the API Store. In this case, the Dynamic Client Registration endpoint of the authorization server is not used (no steps 3 & 4).

2. The API Store calls the Dynamic Client Registration endpoint of the external authorization server.

Once the application is created by the application developer (by grouping a set of APIs), the API Store will call the Dynamic Client Registration endpoint of the authorization server.

Sample Request (Step 3):

POST /register HTTP/1.1 
Content-Type: application/json 
Accept: application/json 
Host: authz.server.com 


"client_name": "My Application”, 
"redirect_uris":[" https://client.org/callback","https://client.org/callback2 "], "token_endpoint_auth_method":"client_secret_basic", 
"grant_types": ["authorization_code" , "implicit"], 
"response_types": ["code" , "token"], 
"scope": ["sc1" , "sc2"], 


client_name: Human-readable name of the client to be presented to the user during authorization. If omitted, the authorization server MAY display the raw "client_id" value to the user instead. It is RECOMMENDED that clients always send this field.

client_uri: URL of a web page providing information about the client. If present, the server SHOULD display this URL to the end user in a clickable fashion. It is RECOMMENDED that clients always send this field.

logo_uri: URL that references a logo for the client. If present, the server SHOULD display this image to the end user during approval. The value of this field MUST point to a valid image file.

scope: Space-separated list of scope values that the client can use when requesting access tokens. The semantics of the values in this list are service specific. If omitted, an authorization server MAY register a client with a default set of scopes.

grant_types: Array of OAuth 2.0 grant types that the client may use.

response_types: Array of the OAuth 2.0 response types that the client may use.

token_endpoint_auth_method: The requested authentication method for the token endpoint.

redirect_uris: Array of redirection URI values for use in redirect-based flows such as the authorization code and implicit flows.

Sample Response (Step 4):

HTTP/1.1 200 OK 
Content-Type: application/json 
Cache-Control: no-store 
Pragma: no-cache 


"client_id":"iuyiSgfgfhffgfh", 
"client_secret": "hkjhkiiu89hknhkjhuyjhk", 
"client_id_issued_at":2343276600, 
"client_secret_expires_at":2503286900, 
"redirect_uris":[" https://client.org/callback ", " https://client.org/callback2 "], 
"grant_types": "authorization_code", 
"token_endpoint_auth_method": "client_secret_basic"

OAuth Client Application 

This is outside the scope of the API Manager. The client application can talk to the external authorization server via any of the grant types it supports and obtain an access token [3]. The scope parameter is optional in all the token requests; when omitted by the client, the authorization server can associate a default scope with the access token. If no scopes are used at all, then the API Gateway can do an authorization check based on other parameters associated with the OAuth client, end user, resource, and action.

If the client sends a set of scopes with the OAuth grant request, then these scopes will be meaningful to the authorization server only if we have published the API metadata into the external authorization server via the OAuth Resource Set Registration endpoint, from the API Publisher. Based on the user's role and the scopes associated with that role, the authorization server can issue the access token for only a subset of the scopes requested by the OAuth client.

Client Credentials Grant Type Sample Request:

POST /token HTTP/1.1 
Host: server.example.com
Authorization: Basic Base64Encode(Client ID:Client Secret)
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials 

Sample Response:

HTTP/1.1 200 OK 
Content-Type: application/json;charset=UTF-8 
Cache-Control: no-store
Pragma: no-cache


"access_token":"2YotnFZFEjr1zCsicMWpAA", 
"token_type":"example", 
"expires_in":3600, 
"example_parameter":"example_value" 


Resource Owner Password Grant Type Sample Request:

POST /token HTTP/1.1
Host: server.example.com
Authorization: Basic Base64Encode(Client ID:Client Secret) 
Content-Type: application/x-www-form-urlencoded 

grant_type=password&username=johndoe&password=A3ddj3w

Sample Response:

HTTP/1.1 200 OK 
Content-Type: application/json;charset=UTF-8 
Cache-Control: no-store
Pragma: no-cache


"access_token":"2YotnFZFEjr1zCsicMWpAA", 
"token_type":"example", "expires_in":3600, 
"refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA", 
"example_parameter":"example_value" 

API Gateway 

The API Gateway will intercept all the messages flowing between the OAuth client application and the API and extract the access token that comes in the HTTP Authorization header. Once the access token is extracted, the API Gateway will call the Token Introspection endpoint [4] of the authorization server.

Sample Request:

POST /introspect HTTP/1.1
Host: authserver.example.com
Content-type: application/x-www-form-urlencoded
Accept: application/json
Authorization: Basic czZCaGRSa3F0Mzo3RmpmcDBaQnIxS3REUmJuZlZkbUl3

token=X3241Affw.4233-99JXJ

Sample Response:


"active": true, 
"client_id":"s6BhdRkqt3", 
"scope": "read write dolphin", 
"sub": "2309fj32kl", 
"user_id": "jdoe", 
"aud": "https://example.org/protected-resource/*", 
"iss": "https://authserver.example.com/" 


active REQUIRED. Boolean indicator of whether or not the presented token is currently active.

exp OPTIONAL. Integer timestamp, measured in the number of seconds since January 1 1970 UTC, indicating when this token will expire.

iat OPTIONAL. Integer timestamp, measured in the number of seconds since January 1 1970 UTC, indicating when this token was originally issued.

scope OPTIONAL. A space-separated list of strings representing the scopes associated with this token.

client_id REQUIRED. Client Identifier for the OAuth Client that requested this token.

sub OPTIONAL. Machine-readable identifier local to the AS of the Resource Owner who authorized this token.

user_id REQUIRED. Human-readable identifier for the user who authorized this token.

aud OPTIONAL. Service-specific string identifier or list of string identifiers representing the intended audience for this token.

iss OPTIONAL. String representing the issuer of this token.

token_type OPTIONAL. Type of the token as defined in OAuth 2.0

Once the API Gateway gets the token introspection response from the authorization server, it will check whether the client application (client ID) has subscribed to the corresponding API and will also validate the scope. The API Gateway knows the required scopes for the API, and the introspection response returns the scopes associated with the access token.
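
A rough sketch of that introspection call and scope check from the gateway side follows. The endpoint, the gateway credentials, and the naive string-based JSON handling are illustrative only; a real gateway would use a JSON parser, connection pooling, and caching:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Scanner;

public class TokenIntrospectionCheck {
    public static void main(String[] args) throws Exception {
        String accessToken = "X3241Affw.4233-99JXJ"; // token extracted from the HTTP Authorization header
        String requiredScope = "read";               // scope the API expects

        URL url = new URL("https://authserver.example.com/introspect"); // placeholder endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setRequestProperty("Accept", "application/json");
        // The introspection endpoint itself is protected; here with HTTP Basic authentication.
        String credentials = Base64.getEncoder()
                .encodeToString("gateway:gateway-secret".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + credentials);
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(("token=" + accessToken).getBytes(StandardCharsets.UTF_8));
        }

        String response;
        try (Scanner scanner = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
            response = scanner.useDelimiter("\\A").next();
        }

        // Naive checks for illustration only: the token must be active and carry the required scope.
        boolean active = response.contains("\"active\": true") || response.contains("\"active\":true");
        boolean scopeOk = response.contains(requiredScope);
        System.out.println(active && scopeOk ? "Token accepted" : "Token rejected");
    }
}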

If everything is fine, the API Gateway will generate a JWT and send it to the downstream API. The generated JWT can optionally include user attributes as well; in that case the API Gateway will talk to the UserInfo endpoint of the authorization server.
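
A minimal sketch of that JWT generation step could look like the following, assuming the Nimbus JOSE+JWT library and an RSA private key already loaded by the gateway; the issuer, audience, and claim values are placeholders rather than anything defined by WSO2 API Manager.

import com.nimbusds.jose.JWSAlgorithm;
import com.nimbusds.jose.JWSHeader;
import com.nimbusds.jose.crypto.RSASSASigner;
import com.nimbusds.jwt.JWTClaimsSet;
import com.nimbusds.jwt.SignedJWT;

import java.security.interfaces.RSAPrivateKey;
import java.util.Date;

public class GatewayJwtIssuer {

    // Builds a signed JWT carrying the end-user identity and scopes,
    // to be forwarded to the downstream API in place of the opaque access token.
    public static String issueJwt(RSAPrivateKey gatewayKey, String subject, String scopes) throws Exception {
        JWTClaimsSet claims = new JWTClaimsSet.Builder()
                .issuer("https://gateway.example.com")            // placeholder gateway issuer
                .subject(subject)                                  // e.g. "jdoe" from the introspection response
                .audience("https://api.example.com")               // placeholder downstream API
                .claim("scope", scopes)                            // scopes associated with the access token
                .expirationTime(new Date(System.currentTimeMillis() + 10 * 60 * 1000))
                .issueTime(new Date())
                .build();

        SignedJWT jwt = new SignedJWT(new JWSHeader(JWSAlgorithm.RS256), claims);
        jwt.sign(new RSASSASigner(gatewayKey));
        return jwt.serialize();
    }
}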

Alternatively, the API Gateway can simply pass through the access token without validating it or its associated scopes. In that case, the API Gateway will only do throttling and monitoring.

Secured Endpoints

In this proposed revamped architecture, WSO2 API Manager has to talk to the following endpoints exposed by the key manager.
  • Resource set registration
  • Dynamic client registration endpoint
  • Introspection endpoint
  • UserInfo endpoint
For the first three endpoints, the API Manager will just act as a trusted system. The corresponding KeyManager implementation should know how to authenticate to those endpoints. The OpenID Connect UserInfo endpoint will be invoked at runtime with the user-provided access token. This will work only if the corresponding access token has the privileges to read the user's profile from the authorization server.
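
For that UserInfo call, a minimal Java sketch could look like the following; the endpoint URL is a placeholder, and the access token is assumed to be the one presented by the user.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UserInfoClient {

    // Invokes the OpenID Connect UserInfo endpoint with the user-provided access token.
    // Works only if the token carries the privileges (e.g. the openid scope) to read the profile.
    public static String fetchUserInfo(String accessToken) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://authserver.example.com/userinfo")) // placeholder endpoint
                .header("Authorization", "Bearer " + accessToken)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // JSON document with the user's claims
    }
}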

References

[1]: http://tools.ietf.org/html/draft-hardjono-oauth-resource-reg-02
[2]: http://tools.ietf.org/html/draft-ietf-oauth-dyn-reg-19
[3]: http://tools.ietf.org/html/rfc6749
[4]: http://tools.ietf.org/html/draft-richer-oauth-introspection-06

OAuth 2.0 Chain Grant Type Profile


Once the audience restriction is enforced on OAuth tokens, they can only be used against the intended audience. You can access an API with an access token that has an audience restriction corresponding to that API. If this API wants to talk to another protected API to form the response to the client, the first API must authenticate to the second API. When it does so, the first API can’t just pass the access token it received initially from the client. That will fail the audience-restriction validation at the second API.

The audience (aud) parameter is defined in the OAuth 2.0: Audience Information Internet draft available at http://tools.ietf.org/html/draft-tschofenig-oauth-audience-00. This is a new parameter introduced into the OAuth token-request flow and is independent of the token type. 

The Chain Grant Type OAuth 2.0 profile defines a standard way to address this concern. As shown in the above figure, according to the OAuth Chain Grant Type profile, the API hosted in the first resource server must talk to the authorization server and exchange the OAuth access token it received from the client for a new one that can be used to talk to the other API hosted in the second resource server.

The Chain Grant Type for OAuth 2.0 profile is available at https://datatracker.ietf.org/doc/draft-hunt-oauth-chain. 

The chain grant type request must be generated from the first resource server to the authorization server. The value of the grant_type parameter must be set to http://oauth.net/grant_type/chain, and the request should include the OAuth access token received from the client. The scope parameter should express the required scopes for the second resource as a space-delimited string. Ideally, the scope should be the same as or a subset of the scopes associated with the original access token. If there is any difference, then the authorization server can decide whether to issue an access token. This decision can be based on an out-of-band agreement with the resource owner:

POST /token HTTP/1.1 
Host: authz.server.net 
Content-Type: application/x-www-form-urlencoded 

grant_type=http://oauth.net/grant_type/chain&oauth_token=dsddDLJkuiiuieqjhk238khjh&scope=read

This returns the following JSON response. The response includes an access token with a limited lifetime, but it should not have a refresh token. To get a new access token, the first resource server once again must present the original access token:

HTTP/1.1 200 OK 
Content-Type: application/json;charset=UTF-8 
Cache-Control: no-store 
Pragma: no-cache 

{ "access_token":"2YotnFZFEjr1zCsicMWpAA", "token_type":"Bearer", "expires_in":1800, } 

The first resource server can use the access token from this response to talk to the second resource server. Then the second resource server talks to the authorization server to validate the access token.
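The token exchange performed by the first resource server could look roughly like this in Java; the token endpoint URL and token values are placeholders, and this is only a sketch of the chain grant request, not a reference implementation.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ChainGrantClient {

    // Exchanges the access token received from the client for a new token
    // scoped to the second resource server, using the chain grant type.
    public static String exchange(String originalAccessToken) throws Exception {
        String body = "grant_type=" + URLEncoder.encode("http://oauth.net/grant_type/chain", StandardCharsets.UTF_8)
                + "&oauth_token=" + URLEncoder.encode(originalAccessToken, StandardCharsets.UTF_8)
                + "&scope=read";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://authz.server.net/token")) // placeholder token endpoint
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // JSON with the short-lived access token (no refresh token)
    }
}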

More details and the applications of the OAuth 2.0 Chain Grant Type Profile are covered in my book Advanced API Security.

OAuth 2.0 Token Introspection Profile

OAuth 2.0 doesn’t define a standard API for communication between the resource server and the authorization server. As a result, vendor-specific, proprietary APIs have crept in to couple the resource server to the authorization server. The Token Introspection profile for OAuth 2.0 fills this gap by proposing a standard API to be exposed by the authorization server, allowing the resource server to talk to it and retrieve token metadata.

The OAuth 2.0 Token Introspection Internet draft is available at https://datatracker.ietf.org/doc/draft-richer-oauth-introspection/.

A token-introspection request can be generated by any party in possession of the access token. The introspection endpoint can be secured with HTTP Basic Authentication:

POST /introspection HTTP/1.1 
Content-Type: application/x-www-form-urlencoded 
Host: authz.server.com 
Authorization: Basic czZCaGRSa3F0Mzo3RmpmcDBaQnIxS3REUmJuZlZkbUl3 

token=X3241Affw.4233-99JXJ&token_type_hint=access_token&resource_id=http://my-resource 

Let’s have a look at the definition of each parameter.
  • token: The value of the access-token 
  • token_type_hint: The type of the token (either the access_token or the refresh_token) 
  • resource_id: An identifier that represents the corresponding resource for introspection
This request returns the following JSON response:

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-store

{
  "active": true,
  "client_id":"s6BhdRkqt3",
  "scope": "read write dolphin",
  "sub": "2309fj32kl",
  "aud": "http://my-resource/*"
}

Let’s have a look at the definition of each parameter.
  • active: Indicates whether the token is active. To be active, the token should not be expired or revoked. The authorization server can define its own criteria for how to define active. 
  • client_id: The identifier of the client to which the token was issued. 
  • scope: Approved scopes associated with the token. 
  • sub: The subject identifier of the user who approved the authorization grant. 
  • aud: The allowed audience for the token. 
The audience (aud) parameter is defined in the OAuth 2.0: Audience Information Internet draft available at http://tools.ietf.org/html/draft-tschofenig-oauth-audience-00. This is a new parameter introduced into the OAuth token-request flow and is independent of the token type. 

While validating the response from the introspection endpoint, the resource server should first check whether the value of active is set to true. Then it should check whether the value of aud in the response matches the aud URI associated with the resource server or the resource. Finally, it can validate the scope. The required scope to access the resource should be a subset of the scope values returned in the introspection response. If the resource server wants to do further access control based on the client or the resource owner, it can do so with respect to the values of sub and client_id.
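Put together, the resource server's checks might look like the following Java sketch; it assumes the introspection response has already been parsed into simple values, and the audience and scope strings are placeholders taken from the example above.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class IntrospectionValidator {

    // Validates an introspection response: the token must be active, the audience
    // must match this resource server, and the required scopes must all be granted.
    public static boolean isValid(boolean active, String aud, String scope,
                                  String expectedAud, Set<String> requiredScopes) {
        if (!active) {
            return false;                           // token expired or revoked
        }
        if (!expectedAud.equals(aud)) {
            return false;                           // token was issued for a different audience
        }
        Set<String> granted = new HashSet<>(Arrays.asList(scope.split("\\s+")));
        return granted.containsAll(requiredScopes); // required scopes must be a subset of the granted ones
    }

    public static void main(String[] args) {
        boolean ok = isValid(true, "http://my-resource/*", "read write dolphin",
                "http://my-resource/*", new HashSet<>(Arrays.asList("read")));
        System.out.println(ok); // prints true
    }
}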

More details and the applications of the OAuth 2.0 Token Introspection Profile are covered in my book Advanced API Security.

Single Sign-On with the Delegated Access Control Pattern

 Suppose a medium-scale enterprise has a limited number of RESTful APIs. Company employees are allowed to access these APIs via web applications while they’re behind the company firewall. All user data is stored in a Microsoft Active Directory, and all the web applications are connected to a Security Assertion Markup Language (SAML) 2.0 identity provider to authenticate users. The web applications need to access back-end APIs on behalf of the logged-in user.

The catch here is this last statement: “The web applications need to access back-end APIs on behalf of the logged-in user.” This suggests the need for an access-delegation protocol: OAuth. However, users don’t present their credentials directly to the web application—they authenticate through a SAML 2.0 identity provider.

In this case, you need to find a way to exchange the SAML token received in the SAML 2.0 Web SSO flow for an OAuth access token, which is what the SAML grant type for OAuth 2.0 defines. Once the web application receives the SAML token, as shown in step 3 of the above figure, it has to exchange it for an access token by talking to the OAuth authorization server.

The authorization server must trust the SAML 2.0 identity provider. Once the web application gets the access token, it can use it to access back-end APIs. The SAML grant type for OAuth doesn't provide a refresh token. The lifetime of the access token issued by the OAuth authorization server must match the lifetime of the SAML token used in the authorization grant.
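
As a rough sketch, the token request issued in that exchange might look like the following, assuming the SAML 2.0 bearer assertion grant type (urn:ietf:params:oauth:grant-type:saml2-bearer); the token endpoint and client credentials are placeholders, not values from any particular deployment.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SamlBearerGrantClient {

    // Exchanges a SAML 2.0 assertion for an OAuth 2.0 access token.
    // The assertion is base64url-encoded, and the authorization server
    // must trust the identity provider that issued it.
    public static String exchange(String samlAssertionXml, String clientId, String clientSecret) throws Exception {
        String encodedAssertion = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(samlAssertionXml.getBytes(StandardCharsets.UTF_8));

        String body = "grant_type=" + URLEncoder.encode("urn:ietf:params:oauth:grant-type:saml2-bearer", StandardCharsets.UTF_8)
                + "&assertion=" + encodedAssertion;

        String basicAuth = Base64.getEncoder()
                .encodeToString((clientId + ":" + clientSecret).getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://authz.example.com/token")) // placeholder token endpoint
                .header("Authorization", "Basic " + basicAuth)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // JSON with the access token (no refresh token for this grant)
    }
}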

After the user logs in to the web application with a valid SAML token, the web app creates a session for the user from there onward, and it doesn’t worry about the lifetime of the SAML token. This can lead to some issues. Say the SAML token expires, but the user still has a valid browser session in the web application. Because the SAML token has expired, you can expect that the corresponding OAuth access token obtained at the time of user login has expired as well. Now, if the web application tries to access a back-end API, the request will be rejected because the access token is expired. In such a scenario, the web application has to redirect the user back to the SAML 2.0 identity provider, get a new SAML token, and exchange that token for a new access token. If the session at the SAML 2.0 identity provider is still live, then this redirection can be made transparent to the end user.

This is one of the ten API security patterns covered in my book Advanced API Security. You can find more details about this from the book.

WSO2 Identity Server 5.0.0 Provisioning Framework

WSO2 Identity Server 5.0.0 takes identity management in a new direction. No more will there be federation silos or spaghetti identity anti-patterns. The authentication framework we introduced in IS 5.0.0 powers all of this. Along with the authentication framework, we also introduced a provisioning framework. The objective of this blog post is to introduce the high-level concepts associated with the provisioning framework.

Inbound Provisioning

Inbound provisioning talks about how to provision users to the Identity Server. Out of the box, we support inbound provisioning via a SOAP-based API as well as the SCIM 1.1 API. Both APIs support HTTP Basic Authentication. If you invoke the provisioning API with Basic Authentication credentials, then where to provision the user (that is, to which user store) will be decided based on the inbound provisioning configuration of the resident service provider.

The SCIM API also supports OAuth 2.0. If the user authenticates to the SCIM API with OAuth credentials, then the system will load the configuration corresponding to the service provider that owns the OAuth client ID. If you plan to invoke the SCIM API from a web application or a mobile application, we highly recommend using OAuth instead of Basic Authentication. You simply need to register your application as a service provider in the Identity Server and then generate OAuth keys.
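
For example, a hypothetical SCIM 1.1 user-creation call with Basic Authentication could look like this in Java; the URL assumes the default Identity Server SCIM endpoint on localhost, and the admin credentials and user details are placeholders only.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ScimProvisioningClient {

    public static void main(String[] args) throws Exception {
        // Placeholder administrative credentials; with Basic Authentication the
        // resident service provider's inbound provisioning configuration decides the user store.
        String basicAuth = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));

        // Minimal SCIM 1.1 user representation.
        String scimUser = "{\"schemas\":[\"urn:scim:schemas:core:1.0\"],"
                + "\"userName\":\"johndoe\",\"password\":\"A3ddj3w\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://localhost:9443/wso2/scim/Users")) // assumed default SCIM endpoint
                .header("Authorization", "Basic " + basicAuth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(scimUser))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}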

Just-in-time (JIT) Provisioning

Just-in-time provisioning talks about how to provision users to the Identity Server at the time of federated authentication. A service provider initiates the authentication request, the user gets redirected to the Identity Server, and the Identity Server then redirects the user to an external identity provider for authentication. Just-in-time provisioning gets triggered in such a scenario when the Identity Server receives a positive authentication response from the external identity provider. The Identity Server will provision the user to its internal user store with the user claims from the authentication response.

You configure JIT provisioning against an identity provider - not against service providers. Whenever you associate an identity provider with a service provider for outbound authentication, if the JIT provisioning is enabled for that particular identity provider, then the users from the external identity provider will be provisioned into the Identity Server's internal user store. In the JIT provisioning configuration you can also pick the provisioning user store.

JIT provisioning happens while in the middle of an authentication flow. The provisioning can happen in a blocking mode or in a non-blocking mode. In the blocking mode, the authentication flow will be blocked till the provisioning finishes - while in the non-blocking mode, provisioning happens in a different thread.

Outbound Provisioning

Outbound provisioning talks about provisioning users to external systems. This can be initiated by any of the following:

1. Inbound provisioning request (initiated by a service provider or the resident service provider)
2. JIT provisioning (initiated by a service provider)
3. Adding a user via the management console (initiated by the resident service provider)
4. Assigning a user to a provisioning role (initiated by the resident service provider)

Out of the box, WSO2 Identity Server supports outbound provisioning with the following connectors. You need to configure one or more outbound provisioning connectors with a given identity provider and associate the identity provider with a service provider. All provisioning requests must be initiated by a service provider, and users will be provisioned to all the identity providers configured in the outbound provisioning configuration of the corresponding service provider.

1. SCIM
2. SPML
3. SOAP
4. Google Apps provisioning API
5. Salesforce provisioning API

Conditional Provisioning with Roles

If you want to provision a user to an external identity provider, say for example to Salesforce or Google Apps, based on the user's role, then you need to define one or more provisioning roles in the outbound provisioning configuration of the corresponding identity provider.