How to use Facebook as OAuth 2.0 Authorization Server with WSO2 API Manager

The WSO2 API Manager comes bundled with an API Gateway, an OAuth 2.0 Authorization Server, and the API Store and API Publisher Jaggery apps. To improve the first-time user experience, all these components ship in a single distribution that can run on a single JVM. However, the production recommendation is to deploy the four components (or at least three, with the Jaggery apps together) in a distributed setup.

One may have a requirement to use the WSO2 API Gateway with an external OAuth 2.0 Authorization Server, i.e. to decouple the resource server from the authorization server in OAuth 2.0 terms. The OAuth 2.0 specification is silent on this; it does not describe the interaction between the resource server and the authorization server, and WSO2 API Manager uses its own proprietary implementation for it. Nevertheless, this requirement can be achieved by configuring a new API handler in place of the default APIAuthenticationHandler. Each API published to the WSO2 API Gateway is equipped with a set of five default API handlers that perform authorization, throttling, usage monitoring, etc. It is important to note, however, that throttling and monitoring are based on the client's authorization keys. If authorization is decoupled from the API Gateway, we can no longer use the APIMgtUsageHandler, APIThrottleHandler, etc.

Let’s say we need to use Facebook as the OAuth 2.0 Authorization server with the WSO2 API Gateway. The following diagrams illustrate the current OAuth 2.0 access token validation model and the proposed new model.

Access Token Validation with WSO2 Authorization Server

Access Token Validation with Facebook Authorization Server
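To make the proposed model concrete, a custom handler could validate the incoming token against Facebook's Graph API debug_token endpoint. The sketch below shows only the URL construction and a naive response check; the class and helper names are hypothetical, and a real handler would use a proper JSON parser and an app access token obtained from Facebook:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the remote-validation step a custom API handler
// could perform against Facebook's token debug endpoint.
public class FacebookTokenValidator {

    // Builds the debug_token URL for a given user token and app token.
    static String buildDebugUrl(String inputToken, String appToken) {
        return "https://graph.facebook.com/debug_token?input_token="
                + URLEncoder.encode(inputToken, StandardCharsets.UTF_8)
                + "&access_token="
                + URLEncoder.encode(appToken, StandardCharsets.UTF_8);
    }

    // Naive check on the JSON response; a real handler would parse the JSON.
    static boolean isTokenValid(String jsonResponse) {
        return jsonResponse.replace(" ", "").contains("\"is_valid\":true");
    }
}
```

The handler would issue an HTTP GET to the built URL and allow the request through only when the response indicates a valid token.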

MTOM + WS-Encryption + Rampart + WSO2 ESB

Rampart's behavior when encrypting a message that carries attachments, such as MTOM attachments, is to Base64 encode the attachment, place it in the body, and encrypt it like any other SOAP body payload, which defeats the purpose of the optimization. There are two simple solutions for this:
1. Use SSL
2. Write the WS-Security policy to exclude attachments from being encrypted.

For some people these two might not be viable solutions. In this blog post I try to provide an alternative for them.

Rampart has a configuration option to optimize a particular part of the SOAP message, denoted using an XPath expression. This optimization runs after any encryption, so you can direct Rampart to optimize the data you have just encrypted.

E.g. if you have a binary payload in your SOAP body and you want to encrypt it and send it as an attachment, your WS-SecurityPolicy would contain the following, which says to encrypt the entire SOAP body.

<sp:EncryptedParts xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy">
    <sp:Body/>
</sp:EncryptedParts>

And your Rampart configuration would contain the following, which says to optimize the binary content found at the specified XPath location.

<rampart:optimizeParts>
    <rampart:expressions>
        <rampart:expression>//xenc:EncryptedData/xenc:CipherData/xenc:CipherValue</rampart:expression>
    </rampart:expressions>
    <rampart:namespaces>
        <rampart:namespace prefix="xenc" uri="http://www.w3.org/2001/04/xmlenc#"/>
    </rampart:namespaces>
</rampart:optimizeParts>

When working as above in the WSO2 ESB, we encounter another problem. Imagine you have created a proxy service in the WSO2 ESB for a backend service which accepts an MTOM attachment. If the proxy service is not secured with any WS-Encryption, the WSO2 ESB has no problems. You can try this out with WSO2 ESB sample 51. The sample only covers the unsecured proxy case, but you may go ahead and apply an out-of-the-box security policy which involves only signatures and no encryption (Non-repudiation) and find that the sample still works. However, if you turn on a security policy which includes encryption (e.g. Sign and Encrypt – X509 Authentication), the sample no longer works. The error at the backend service would look something like the following:

ContentID is null

This can happen for the reason explained earlier in the post: the client might be encoding the attachment into the SOAP body payload and encrypting it. If you have followed the workaround mentioned for that problem, then this error stems from another issue. The way Synapse handles MTOM is as follows: when a SOAP message is received at the proxy service, Synapse transforms the message by adding the binary attachment back into the relevant part of the payload and saves pointers, as XPath expressions, to the parts that were attachments. When the message leaves Synapse for the backend service, the pointers are read back and the binary payload is again optimized into MTOM attachments. In our case, the MTOM optimization is handled not by Synapse but by Rampart, which does not guarantee that the message leaving Synapse is optimized. In fact, Synapse does not even know that this message contained an attachment, because the Rampart module in the proxy service of the WSO2 ESB has taken care of the security processing and returned the SOAP body to a state where it no longer contains any security-related tags. That is, the part denoted by the XPath

//xenc:EncryptedData/xenc:CipherData/xenc:CipherValue

is no longer present in the SOAP body. Therefore, unless we use another WS-SecurityPolicy for the endpoint of the WSO2 ESB and explicitly perform the optimization there, the message leaving Synapse will not be optimized. However, there is an issue with that approach as well: it appears impossible in Rampart to have a WS-SecurityPolicy with only an OptimizedParts configuration and no real security applied. Rampart skips processing the policy entirely if it does not find certain minimum required configurations in it. Therefore this approach will not work either.

So the only workaround I found was to write a class mediator for the WSO2 ESB that explicitly MTOM-optimizes the binary payload.

How to try this

1. Setup sample 51 of WSO2 ESB.

2. The source code of the class mediator which does the explicit MTOM optimization can be found here. Add this to

<WSO2_ESB_HOME>/repository/components/dropins

3. The synapse configuration for the proxy service can be found here.
4. The custom security policy to be applied to the proxy service can be found here. Upload this security policy to the registry and apply it to the proxy service by selecting the “Policy From Registry” option.
5. The source code of the client program can be found here. Make sure the lib folder inside the project is added to the Java class path. Also you need to configure the client.properties file according to your environment.

This was tested with WSO2 ESB 4.5.1.

Signature verification with WSO2 API Manager

Digital signatures provide a means of authentication, integrity and non-repudiation. OAuth 1.0a used digital signatures both during the “OAuth Dance” (an unofficial term coined by Google developers for the set of steps performed to complete the full OAuth authentication and authorization process and receive an access token) and when accessing protected resources with an access token. OAuth 2.0 removed digital signatures from the “OAuth Dance”, citing the difficulty of signing for clients, and relies primarily on transport-layer security such as SSL over HTTP. OAuth 2.0 supports an extensible list of token profiles. The most widely used is the bearer token profile, which does not involve any signatures. Another popular profile is the MAC token profile. There may be situations where users prefer to have some kind of signature, and verification of it, to achieve non-repudiation during the access token validation process. The MAC token profile seems the ideal candidate for this. The difference between OAuth 1.0a and OAuth 2.0 with MAC access tokens is that the MAC token profile does not require signatures during the “OAuth Dance”.

The WSO2 API Manager currently supports only the OAuth 2.0 bearer token profile out of the box; it does not yet support the MAC token profile. But if you want non-repudiation during access to protected resources, there is a workaround: adding a new ‘API Handler’ before the default ‘Authentication Handler’. It mimics the MAC token profile but is not an exact implementation of it. This can be useful because no WSO2 API Manager code needs to change, yet you get signatures and signature verification during resource access.

This new handler is designed to work together with the default APIAuthenticationHandler, which expects a bearer token. It sits in front of the default handler, verifies the signature in the request, converts the Authorization header to what the default handler expects, and hands the request over to the default handler. The signature is calculated by signing the normalized request string with the consumer secret (this is where this implementation differs from the MAC token profile, because we are not signing with a MAC key received in the token response step). This ensures that even if the bearer token is compromised, an illegitimate user is denied access to the APIs, because he or she cannot calculate the correct signature without knowing the shared consumer secret.
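The signing step described above can be sketched in a few lines of Java. This is a minimal illustration assuming an HMAC algorithm such as HmacSHA256; the class and method names are hypothetical and not part of the handler's actual API:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical sketch: HMAC the normalized request string with the shared
// consumer secret and Base64-encode the result.
public class RequestSigner {

    public static String sign(String normalizedRequest, String consumerSecret,
                              String algorithm) {
        try {
            Mac mac = Mac.getInstance(algorithm); // e.g. "HmacSHA256"
            mac.init(new SecretKeySpec(
                    consumerSecret.getBytes(StandardCharsets.UTF_8), algorithm));
            byte[] raw = mac.doFinal(
                    normalizedRequest.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(raw);
        } catch (Exception e) {
            throw new RuntimeException("Failed to compute signature", e);
        }
    }
}
```

Both the client and the handler compute this value independently; the handler grants access only when the two match.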

Signature verification can be implemented as an API handler similar to the ‘APIAuthenticationHandler’ or ‘APIUsageHandler’. After the introduction of this feature, the access token that was provided earlier can now function as the Mac Identifier. The consumer secret is used as the Mac key, which is a shared secret between the consumer and the provider used to sign the normalized request string. Timestamps and nonce are added to prevent replay attacks.

Engaging the Handler to an API

Follow the steps below to engage the handler to an API.

Note:

For demonstration purposes, we use selected WSO2 API Manager sample and .jar files in the steps below. Similar steps apply to other user-specific samples as well.

1. A compiled binary of a sample signature verification handler can be found here. The source code for this project can be found here.
Open the .jar file and search for the ‘verifier.properties’ file. It contains 4 properties as follows:

allowed.time.delay – The allowed time difference between the timestamp sent and the current time.

timediff.map.max.size – Maximum size of the map that should be maintained to keep timestamps, in order to prevent replay attacks.

nonce.map.max.size – Maximum size of the map that should be maintained to keep nonce values, in order to prevent replay attacks.

hash.algorithm – Hashing algorithm supported by SunJCE. For example, “HMacSHA1”, “HMacSHA256”, etc.
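A minimal sketch of how such bounded maps could be kept follows. It illustrates the idea behind timediff.map.max.size and nonce.map.max.size; it is not the handler's actual code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative bounded map: a LinkedHashMap that evicts its eldest entry
// once the configured maximum size is exceeded, so the timestamp and nonce
// maps cannot grow without limit.
public class BoundedMap<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public BoundedMap(int maxSize) {
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // drop the oldest entry once the cap is hit
    }
}
```

For example, a `BoundedMap<String, Long>` keyed by Mac identifier could hold the last seen timestamp per client.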

2. Copy the provided jar file to <AM_HOME>/repository/components/dropins folder where <AM_HOME> is the root of the WSO2 API Manager distribution. This is the location where any custom libraries are added.

3. Start WSO2 API Manager and log in to its Management Console.

4. Build the API Manager ‘YouTube’ sample using the instructions given, up to the section ‘Invoking the API’. The instructions can be found here.

5. You can engage the developed handler to the API through the Management Console. Log in to the console and select ‘Main’ > ‘Service Bus’ > ‘Source View’.

6. In the ESB configuration that opens, add the following line as the first handler in the YouTube API, above ‘APIAuthenticationHandler’.

<handler class="org.wso2.carbon.apimgt.gateway.verifier.SignatureVerificationHandler"/>

SignatureVerificationHandler engaged to Youtube API

The class ‘org.wso2.carbon.apimgt.gateway.verifier.SignatureVerificationHandler’ is the handler that we have implemented by extending the ‘org.apache.synapse.rest.AbstractHandler’ class and packed into the .jar file.

Invoking the API

Now that you have engaged the developed handler to the API, let’s see how to invoke this API using a REST client such as cURL. Note that none of the steps until invoking the API in the ‘YouTube’ sample have changed due to engaging the handler. The change only occurs in the way the API is invoked. Note the following differences in the new REST calls.

Previous cURL request (As seen in API Manager 1.3.0 documentation):

curl -H "Authorization: Bearer 8f74ac7a87caee6967b75dcda51b8edc" http://localhost:8280/youtube/1.0.0/most_viewed

Previous authorization header:

Authorization: Bearer 8f74ac7a87caee6967b75dcda51b8edc

New cURL request:

curl -H "Authorization: MAC id=\"8f74ac7a87caee6967b75dcda51b8edc\",ts=\"1347023000\",nonce=\"a1b2c3d4e5\",mac=\"5X/zg3RnRSMP1JkaMJCaqWOk/srpw4ybGwIbPRVNUYA=\"" http://localhost:8280/youtube6/6.0.0/most_viewed

New authorization header:

Authorization: MAC id="8f74ac7a87caee6967b75dcda51b8edc",ts="1347023000",nonce="a1b2c3d4e5",mac="5X/zg3RnRSMP1JkaMJCaqWOk/srpw4ybGwIbPRVNUYA="

The difference between the REST calls lies only in the Authorization header, as follows:

The access token in the previous request has now become the ‘id’. ‘ts’ is the timestamp; it can be any positive integer value the client sends (ideally the number of seconds elapsed since 1970-01-01 00:00:00 UTC). However, timestamp verification is performed from the second request onwards for a particular ‘id’. The ‘nonce’ is also a string chosen by the client; it needs to be unique for each combination of timestamp and Mac identifier.
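On the gateway side, extracting these fields from the incoming header could be sketched as follows. This is a minimal illustration; the class name and the regex-based approach are assumptions, not the handler's actual code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: pull id, ts, nonce and mac out of a MAC-style
// Authorization header such as
//   MAC id="...",ts="...",nonce="...",mac="..."
public class MacHeaderParser {

    private static final Pattern FIELD = Pattern.compile("(\\w+)=\"([^\"]*)\"");

    public static Map<String, String> parse(String header) {
        Map<String, String> fields = new HashMap<>();
        Matcher m = FIELD.matcher(header);
        while (m.find()) {
            fields.put(m.group(1), m.group(2)); // key -> quoted value
        }
        return fields;
    }
}
```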

Considerations When Executing the new cURL Request

When invoking a cURL request with the new Authorization header, every request from the same Mac identifier (which is the access token in the earlier method) should have a different timestamp value, whose difference from the previous request's timestamp is greater than the actual time elapsed between the two requests. The Mac value is computed by creating the ‘Normalized Request’ string as shown in the ‘HMacGenerator’ console output, hashing it via the ‘SunJCE’ provider with the algorithm specified in the ‘verifier.properties’ file, using the ‘ConsumerSecret’ as the key, and encoding the result using Base64. The request string is the full incoming URL minus the transport protocol (http, https) and the hostname:port pair; in other words, it is the string from the context onwards. URL encoding is done only for the query parameter values, using the “UTF-8” encoding scheme.

A Java client to generate the HMac signature can be found here. All required parameters, including the algorithm to be used, can be given as console inputs. Any algorithm supported by the SunJCE cryptographic provider, such as “HMacSHA1”, “HMacSHA256”, etc., can be used. Before you run the client, ensure that the jar file in the ‘lib’ folder of ‘HMacGenerator.jar’ is added to the Java classpath.
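The request-string rules above can be sketched roughly as follows. This illustrates only the described rules; the class and method names are hypothetical, and it assumes the query parameter values arrive unencoded:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Illustrative sketch: strip the scheme and hostname:port pair, keep the
// string from the context onwards, and URL-encode only the query parameter
// values using UTF-8.
public class RequestStringBuilder {

    public static String build(String fullUrl) {
        // Drop "http(s)://host:port"; keep everything from the context on.
        int schemeEnd = fullUrl.indexOf("://");
        int pathStart = fullUrl.indexOf('/', schemeEnd + 3);
        String fromContext = fullUrl.substring(pathStart);

        int q = fromContext.indexOf('?');
        if (q < 0) {
            return fromContext; // no query string to encode
        }

        StringBuilder sb = new StringBuilder(fromContext.substring(0, q + 1));
        String[] params = fromContext.substring(q + 1).split("&");
        for (int i = 0; i < params.length; i++) {
            if (i > 0) sb.append('&');
            int eq = params[i].indexOf('=');
            if (eq < 0) {
                sb.append(params[i]); // parameter without a value
            } else {
                sb.append(params[i], 0, eq + 1)
                  .append(URLEncoder.encode(params[i].substring(eq + 1),
                          StandardCharsets.UTF_8)); // encode the value only
            }
        }
        return sb.toString();
    }
}
```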

Hello World!

This is my first ever blog post, and as per tradition I’ve named it “Hello World!”. I’ve been quite reluctant to start my own blog for some time now. Compared to my peers, this blog comes quite late in my life with computers and software, but as the phrase goes, better late than never. It was not until I realized how valuable sharing information about my work with fellow developers can be that I decided to start. For now I hope to share only technical content related to the work I do and to software engineering in general. My development work at the firm I work for is currently focused primarily on the security space of middleware, so you can expect most of my posts to be related to middleware security. But who knows.. I might catch you by surprise with a fun one. Needless to say, this blog could be boring for readers from a non-IT background. As for the IT folks, I will make sure my posts are short, precise and useful.

Hasta la vista!