The inspiration for OAuth was to standardize how users authorize a site or application (the client) to access data at another site (the resource server). Clients wanting to access data on a resource server would ask the user for their credentials so that they could call the API or scrape the site – a horrible practice from a security point of view.
Flickr, Microsoft, Yahoo!, and others came up with flows that allowed the client to redirect the user to the resource server to authorize release of the user's data, and then get a token to make API calls instead of using the user's password.
Each of these solved the same problem in a slightly different way, and client developers had to learn each mechanism and its terminology. Many saw a need to standardize these best practices to discourage falling back to asking for the user's password, and OAuth was born.
One of the design decisions in OAuth 1.0 was not to require SSL. While lowering the barrier for developers by not requiring SSL was admirable, it effectively meant the developer had to implement crypto. While this was wrapped up in libraries and usually worked, when it did not work – or worse, worked intermittently – it was difficult to debug. I know.
Another issue with OAuth 1.0 was that there was no separation between the token issuer (the authority) and the resource. This was not a problem for the original implementers, but as the cloud became important, the resource could be running in a completely different security context than the server granting authority.
OAuth 2.0 started life as a collaboration between Google, Microsoft, Salesforce.com and Yahoo! to address the issues with OAuth 1.0. The editor of OAuth did not see the work as building on OAuth, so we called it WRAP (Web Resource Authorization Protocol). When we described WRAP at an IIW meeting three years ago, the OAuth community asked us to have it be part of OAuth, and the work was contributed to the IETF OAuth working group.
The drama around OAuth continued, as people flopped back and forth. The specification was even broken into two parts (RFC 6749 and RFC 6750 edited by Mike Jones) because someone did not want to be associated with bearer tokens. But the community persevered, and we now have OAuth 2.0 RFCs that are simple to implement.
OAuth 2.0 brings three important enhancements:
- Simplicity: Client developers don’t need to do any cryptography or use a library to call OAuth 2.0 protected resources. The token can be passed in the HTTP headers or as a URL parameter. While HTTP headers are preferred, a URL parameter is simpler and allows API exploration with a browser.
- Token choice: implementers can use existing tokens that they already generate or consume. There are extension points so that the client can sign the token instead of it being a bearer token.
- Separation of roles: if the token is self-contained, then the resource can verify the token independently of the authorization server. Resources don’t have to call back to the authorization server to verify the token on each call, enabling higher performance and separation of security contexts.
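The simplicity point above can be sketched in a few lines. This is a minimal illustration, not a client library: the endpoint URL and token value are hypothetical, and it only shows the two ways RFC 6750 allows a bearer token to be presented to a protected resource.

```python
import urllib.request
from urllib.parse import urlencode

# Hypothetical endpoint and token, for illustration only.
API_URL = "https://api.example.com/v1/me"
ACCESS_TOKEN = "example-access-token"


def build_header_request(url: str, token: str) -> urllib.request.Request:
    """Preferred form: pass the bearer token in the Authorization header."""
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )


def build_query_url(url: str, token: str) -> str:
    """Simpler form: pass the token as a URL parameter.

    Handy for exploring an API with a browser, though the header form
    is preferred because URLs tend to end up in logs and referrers.
    """
    return f"{url}?{urlencode({'access_token': token})}"


req = build_header_request(API_URL, ACCESS_TOKEN)
browser_url = build_query_url(API_URL, ACCESS_TOKEN)
```

No signatures, nonces, or timestamps to compute – the client just attaches the token, which is the contrast with OAuth 1.0 the list above is drawing.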
Facebook, Google, Microsoft, and Salesforce.com deployed early drafts of OAuth 2.0. I was at f8 when Facebook released the Graph API, which uses an early draft of OAuth 2.0. Sitting in the audience, my colleagues and I were able to explore the API with our browsers as it was being described. With this work now complete, many of us can focus on the next layers in the identity stack.