OAuth 2.0 and OpenID Connect for existing APIs and Web Applications

TL;DR: deploy a reverse proxy with OAuth 2.0 and OpenID Connect capabilities in front of your existing APIs and web applications to secure those assets with modern security and access control standards, without having to touch them.

Organizations these days have an obvious choice when it comes to developing new applications and APIs and securing them: modern standards such as OAuth 2.0 and OpenID Connect provide an easy and standardized way to protect their assets in a future-proof and secure manner. Things become more complicated when dealing with the existing landscape of applications and APIs. Embedding them in the same modernized security environment would let them benefit from the same advantages in the areas of standardization, cost reduction and security. Yet existing applications are often hard to modify and extend with support for the modern security protocols, so how do you deal with that?

Enter the reverse proxy: by leveraging the well-known, mature and widely deployed concept of a reverse proxy in front of the assets you want to protect, you can make this happen. The reverse proxy doesn’t just allow you to put your sensitive existing applications/APIs behind it in an unmodified way, it also provides an architectural advantage: by externalizing access control from your application you can advance the implementation on either side independently. It allows you to upgrade your access control capabilities without touching your application, and to upgrade your applications without risking changes in the access control layer.

There are probably a lot of reverse proxy deployments in your organization today: you may be able to leverage the OAuth/OIDC capabilities that come with them to implement this scenario. When it comes to Apache and NGINX deployments, take a look at mod_auth_openidc and lua-resty-openidc respectively. These components can extend your existing (and unprotected or proprietarily secured) APIs with OAuth 2.0 capabilities and your web applications with OpenID Connect capabilities. Things become even better when you can deploy and manage these functions as components in your microservices environment, see also my previous post about that.

Note that this “enhanced” reverse proxy (or: gateway) can work for API use cases – implementing the OAuth 2.0 Resource Server capability to verify access tokens – as well as for Web Access Management and SSO use cases – implementing the OpenID Connect Relying Party functionality to consume ID tokens. One can deploy both capabilities side by side in a single instance, possibly reusing that instance for many APIs and/or web applications.
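To make this concrete, below is a minimal sketch of what such a combined deployment could look like with mod_auth_openidc; all host names, paths, client identifiers and secrets are placeholders, so treat it as an illustration rather than a drop-in configuration (and note that mod_proxy/mod_proxy_http need to be loaded for the ProxyPass directives):

# OpenID Connect Relying Party settings (Web SSO)
OIDCProviderMetadataURL https://provider.example.com/.well-known/openid-configuration
OIDCClientID my-web-client
OIDCClientSecret <client-secret>
OIDCRedirectURI https://gateway.example.com/app/redirect_uri
OIDCCryptoPassphrase <random-passphrase>

# OAuth 2.0 Resource Server settings (access token validation via introspection)
OIDCOAuthIntrospectionEndpoint https://provider.example.com/oauth2/introspect
OIDCOAuthClientID my-rs-client
OIDCOAuthClientSecret <rs-client-secret>

# existing web application, now protected with OpenID Connect
<Location /app/>
    AuthType openid-connect
    Require valid-user
    ProxyPass http://legacy-webapp.internal:8080/
</Location>

# existing API, now protected with OAuth 2.0 bearer access tokens
<Location /api/>
    AuthType oauth20
    Require valid-user
    ProxyPass http://legacy-api.internal:8081/
</Location>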

Also note that this concept, as it adheres to standards, is independent of the Authorization Server (for the API/OAuth 2.0 use case) or the OpenID Connect Provider (for Web SSO). The OAuth/OIDC Provider can be an on-premises deployment of a vendor-specific product, or even a cloud-based service, which allows you to hook up your on-premises applications to one of the various Identity-as-a-Service offerings through OAuth 2.0/OpenID Connect.

The microservice variant (e.g. a dockerized version of such an Apache/NGINX setup) is IMO the most attractive way forward for new deployments: you can have your lean-and-mean-Web-Access-Management-and-API-security-gateway-in-a-box today by leveraging mod_auth_openidc or lua-resty-openidc! Stay tuned for a follow-up post on how to deal with (fine-grained) access control in this reverse proxy deployment.


Configuration-Managed Web Access Policies

Traditional Web Access Management (WAM) is heavily centralized because of legacy implementation restrictions. Access policies are defined, created and maintained in a centralized way by the IT department that runs a so-called access policy server. The policies are executed at runtime by having endpoints call in to the policy server, possibly caching the result for a while for performance reasons. The policy itself however remains on the policy server, in a central location. Clustering techniques would be used to scale the policy server itself.

Yet there are a number of recent developments (I’m counting 4) that have brought changes to this world: the microservices paradigm, containerization technology like Docker, DevOps methodologies, and software configuration management tools such as Ansible, Chef, Puppet and Vagrant.

This allows one to make an interesting shift with respect to access policy execution. Since the vast majority of policies are fairly static, those policies can actually be pushed out at configuration time to the endpoints themselves. The endpoints can then execute and enforce policies on their own at runtime without needing to call out to an external server. This greatly benefits performance and resilience. Changes to a policy would involve re-creating an updated configuration file for the application, pushing it out via the software configuration management tool and restarting the application server. Some may even go as far as creating new binary container images from the configuration files and restarting updated containers with new policies that replace the original ones.
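As a sketch of what such a configuration-managed policy could look like with mod_auth_openidc (the paths and claim names are made-up examples), the policy simply becomes part of the httpd.conf that your configuration management tool templates and pushes out to each endpoint:

# static access policy, generated and pushed out by the configuration management tool
<Location /finance/>
    AuthType openid-connect
    # only users whose "groups" claim contains "finance" get in
    Require claim groups:finance
</Location>

<Location /hr/>
    AuthType openid-connect
    Require claim groups:hr
</Location>

A policy change then amounts to regenerating this file from a template and rolling it out with Ansible, Chef or Puppet, exactly as you would any other configuration change.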

Especially when running a bunch of reverse proxy servers in front of your services and APIs to handle security concerns such as OAuth 2.0 and/or OpenID Connect termination and token validation, this presents a pattern whose architectural and business advantages go beyond plain performance and resilience. It allows one to avoid getting locked into the vendor-specific protocols that have dominated the web access control market for the last decades because of the lack of standardization (or adoption, read: XACML) of open standards between Policy Enforcement Points (PEPs) and Policy Decision Points (PDPs).

The tooling around creating application-specific configuration files with access control policies is freely available and widely deployed for a range of reverse proxies and applications. See e.g.: https://hub.docker.com/r/evry/oidc-proxy/

Just to make sure: I’m well aware that static policies may not be everything a modern environment needs in the near future. Policies based on dynamic data are becoming more important as we speak, with user behavior analytics, aggregated log analysis etc. Runtime behavior can only be aggregated and analyzed in a centralized location and needs to be consulted at policy execution time. But that does not change the fact that static policies (say 95% today) can be realized in the way described.

So what does the future hold? Well, expect a blend. The vast majority of web access management policies deployed today is ready for a “distributed configuration managed” deployment using modern technologies as mentioned. We’ll see what the future brings for the dynamic part. For the record: I’m calling this paradigm C-WAM (pronounced as Shhwam!).

PS: I’m sure you’ve noticed how mod_auth_openidc and Apache httpd.conf fit into the world as described…


Token Binding for the Apache webserver (part 2)

Last year I blogged about some experiments with Token Binding support for the Apache webserver here. Token Binding essentially secures cookies by binding them to an HTTPS connection so they can’t be used or replayed outside of that connection. Recently I went a bit further and developed an Apache module that implements Token Binding here: https://github.com/zmartzone/mod_token_binding.

The Token Binding module verifies the token binding material from the TLS connection and the HTTP headers. It then sets environment variables with the results of that verification so that other modules can use them to bind their cookies to the so-called Token Binding ID. As mentioned before, mod_auth_openidc would greatly benefit from such a feature, hence I’ve added support for Token Binding to release 2.2.0. Both “state” and “session” cookies can now be bound to the TLS connection.
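For completeness, a sketch of what enabling this could look like in your httpd config; note that the OIDCTokenBindingPolicy directive name and its value are an assumption on my part based on the module’s sample configuration, so verify them against the auth_openidc.conf that ships with your release:

# bind the mod_auth_openidc state and session cookies to the Token Binding ID
# (directive name and value assumed; check auth_openidc.conf for your release)
OIDCTokenBindingPolicy required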

But it is not just “co-located” modules that may benefit from the Token Binding termination that an Apache server can now do. There’s a work-in-progress by Brian Campbell on exposing Token Binding information to backend services via headers, using the Apache server as a reverse proxy in front of those services. Implementing this proposal via one of the environment variables set by mod_token_binding is as easy as using mod_headers with the following snippet in your httpd config:

RequestHeader set Token-Binding-Context "%{Token-Binding-Context}e"
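To put that snippet in context, here is a hedged sketch of a reverse proxy virtual host that terminates Token Binding and forwards the Token Binding context to a backend service; the LoadModule line and module name are assumptions, so check the mod_token_binding README for the exact directives:

# assumed module name; see the mod_token_binding README for the exact LoadModule line
LoadModule token_binding_module modules/mod_token_binding.so

<VirtualHost *:443>
    ServerName www.example.com
    SSLEngine on
    # ... certificate and (patched) mod_ssl configuration ...

    # expose the verified Token Binding context to the backend service
    RequestHeader set Token-Binding-Context "%{Token-Binding-Context}e"

    ProxyPass        / http://backend.internal:8080/
    ProxyPassReverse / http://backend.internal:8080/
</VirtualHost>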

Support for Token Binding depends on the user’s browser, but it is available in recent versions of Chrome and Edge. In Chrome, for example, one can enable it by setting the flag chrome://flags/#enable-token-binding to “Enabled”.

The server-side prerequisites are: OpenSSL 1.1.x with a one-line patch, an Apache 2.4.x mod_ssl patched for handling the TLS token binding extension, and Google’s token_bind library with a patch that I’ve created a pull request for. If you’re interested in prototyping this, the README should get you started. There’s a sample Docker image to get you quickly to a functional server setup with all of the prerequisites listed above. Feel free to open an issue on the GitHub project if you have any feedback or questions.

PS: ping me for a binary package or a link to a running test instance.


OpenID Connect for Single Page Applications

OpenID Connect is an identity protocol that was designed not just for traditional Web SSO but also caters for modern use cases like native mobile applications and API access. Most of these use cases have a clearly defined and preferred pattern as to which “grant type” or “flow” applies to them. For a good overview you can visit https://alexbilbie.com/guide-to-oauth-2-grants/ which presents the flow chart below:

[Flow chart from the linked guide: choosing an OAuth 2.0 grant type based on the type of client]

When dealing with Single Page Applications (SPAs) things become a bit more blurry since they can be seen as a mix between traditional Web SSO and API access: the application itself is loaded from a – possibly protected – regular website and the JavaScript code starts calling APIs in another – or the same – domain(s). The spec, like the previous diagram, points to the so-called “Implicit grant type” for passing token(s) to the SPA since that grant was specifically designed with the “in-browser client” in mind.

Apart from some “Identerati” changing their minds about recommending Implicit over Authorization Code with a public Client for this use case – that’s probably a topic for a different post – implementing the Implicit grant type is not very popular from what I have seen in the field. Its main drawbacks are:

  • Implicit requires validation of the ID Token by the Client, which means that crypto is done in the browser; this requires relatively heavy-weight libraries and, more importantly, increases complexity
  • because it is not possible to keep a client secret in the browser, Implicit means using a public client, which is inherently less secure than a confidential client, see: https://tools.ietf.org/html/rfc6749#section-1.3.2

In other words, it is cumbersome, error-prone and less secure… That basically defeats the primary purpose of OpenID Connect, whose starting point is the Authorization Code grant type that allows for simple, zero-crypto (well, apart from TLS) and more secure flows. So I’ve been asked about using a server-side implementation of the Authorization Code grant type with a confidential Client for SPAs. I have added a draft implementation for that to the Apache web server through a so-called “session info hook” for mod_auth_openidc. It then works as follows (a minimal configuration sketch follows the list):

  1. The SPA is served from an Apache webserver that has the mod_auth_openidc module loaded and configured.
  2. Upon accessing the SPA, mod_auth_openidc will first send the user to the OpenID Connect Provider and run through the Authorization Code flow using confidential Client authentication, configured as a regular web SSO Client. It then creates an application session plus an associated cookie on the Apache server for the user, who can now load the SPA in his browser.
  3. The SPA can call the session info hook at any time by presenting a valid session cookie to obtain a JSON object with information such as the Access Token, the Access Token expiry, the claims from the ID Token, claims from the UserInfo endpoint etc.
  4. The SPA can extract information about the authenticated user from the session info object for its own purposes and can call APIs in another or the same domain using the Access Token contained in it.
  5. When the Access Token expires, the SPA can call the session info hook again to refresh the Access Token and mod_auth_openidc will refresh it using a refresh token that was stored as part of the session information.
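A minimal sketch of the corresponding server-side configuration is shown below; the OIDCInfoHook directive and the ?info=json query parameter on the redirect URI reflect how the session info hook ended up in the module, but verify the exact names against the auth_openidc.conf of your release – all other values are placeholders:

OIDCProviderMetadataURL https://provider.example.com/.well-known/openid-configuration
OIDCClientID my-spa-client
OIDCClientSecret <client-secret>
OIDCRedirectURI https://www.example.com/spa/redirect_uri
OIDCCryptoPassphrase <random-passphrase>

# data returned by the session info hook, which the SPA calls on the redirect URI with ?info=json
OIDCInfoHook iat access_token access_token_expires userinfo session

# the SPA itself is served from a protected location so the user is authenticated first
<Location /spa/>
    AuthType openid-connect
    Require valid-user
</Location>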

The end result is that the SPA uses a secure server-side OIDC flow to authenticate the user and to obtain & refresh an access token, without requiring any crypto or security-sensitive code in the SPA itself! Let me know if you’re interested in learning more: hans.zandbelt@zmartzone.eu

Thanks to John Bradley, Mark Bostley, Brian Campbell, Jonathan Hurd and George Fletcher for their input.


OpenID Connect Relying Party Certification for mod_auth_openidc

Good news on the OpenID Connect front: after creating a software certification program for OpenID Connect Provider implementations (http://openid.net/certification/), the OpenID Foundation recently added a similar capability for testing and certifying Relying Party (RP) implementations.

After putting in quite some work I was able to run mod_auth_openidc through the RP test suite and acquire the first official RP certification for it on December 13th. See the bottom of the page here: http://openid.net/certification/. One thing to be aware of is that RP testing is quite different from OP testing; as the RP certification test documentation puts it:

For each test, it is the responsibility of the tester to read and understand the expected result and determine whether the RP passed the test as described or not. This is because the RP test server typically doesn’t have enough information to determine whether the RP did the right thing with the information provided or not.

That means that the RP software developer needs to either a) write a lot of code to automate and interpret tests in a reproducible way, or b) do a lot of manual work each time the tests are run to prove that everything conforms to the specification. That makes it different from the OP certification process, where the standard functions are tested and the results are interpreted by the OP test suite itself, without requiring any changes to the OP software under test.

So why would an RP developer make this – rather significant – extra effort?

Firstly, the OpenID Foundation certification program aims to promote interoperability among implementations; this is certainly a great step in that direction and a significant boost for the OpenID Connect ecosystem. But secondly, and more importantly on a “local” scale, any self-respecting software product requires an integrated test suite for regression and continuous-integration type tests. Once the effort for RP certification has been made and integrated into an automated test suite for the RP software, it is a great tool to prevent bugs from creeping into new releases and a great help in improving the overall quality of the product.

Consider this a call to arms for all RP software developers out there to bite the bullet!


Apache module for OpenID Connect/OAuth 2.0 release 2.0

Last Friday, September 9th 2016, I released a brand new version of mod_auth_openidc, the module that implements OpenID Connect RP and OAuth 2.0 RS functionality for the Apache webserver. So what’s the big deal with it and why does it have a new major version number?

The biggest change is that all JSON Signing and Encryption (JOSE) operations are now handled through a separate 3rd-party library. That library is cjose, an excellent open source project started by Matt Miller from Cisco, which allows mod_auth_openidc to rely on a dedicated library for all crypto-related operations. Since that library will have its own update and release pace, this should result in better security overall for deployments: no backporting of JOSE/crypto fixes in the module is required, just update the cjose library.

The JOSE library also makes it easy to create JSON Web Tokens and use them to securely store all state that the module needs to keep track of. So now all state storage is protected by wrapping it into a JWT and encrypting and signing it with the pre-configured passphrase. This allows for a secure deployment in shared cache/storage environments. For example, one can now use a shared memcache cluster across a bunch of Apache servers without having to worry that the session state can be accessed or altered by some internal 3rd-party administrator who also happens to have access to the same memcache cluster because another application uses it. The same holds for Redis storage or file-based storage.
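As an illustration, configuring such a shared cache looks roughly like this (server names are placeholders); the passphrase is what the module uses to encrypt and sign the JWTs that wrap the cached state:

# shared (possibly multi-tenant) memcache cluster used for session and state storage
OIDCCacheType memcache
OIDCMemCacheServers "cache1.example.com:11211 cache2.example.com:11211"

# passphrase used to encrypt and sign the JWTs that protect the cached state
OIDCCryptoPassphrase <random-passphrase>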

On top of that, an encrypted and signed JWT stored in a temporary state cookie is used to keep track of the request state between the authentication requests that the module sends out and the corresponding authentication responses that it receives from the Provider.

One other change worth mentioning is the support for so-called “chunked” cookies. This means that a large cookie value is now split out over a number of smaller cookies. This in turn allows for client-side session management, i.e. storing all session-related state in client-side cookies rather than using any server-side storage/dependency, without the risk of running over session cookie size limits all too easily – which previously happened quite a lot with certain Providers out there.
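If you want to go fully stateless on the server side, a sketch of the relevant settings is shown below; the chunk size directive is my assumption of the tunable that goes with this feature, so verify it against your release:

# keep all session state in (encrypted, chunked) browser cookies instead of a server-side cache
OIDCSessionType client-cookie

# maximum size of a single session cookie chunk (assumed directive and default; verify for your release)
OIDCSessionCookieChunkSize 4000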

Well, there’s a lot more good stuff in 2.0.0 that you can see for yourself here: https://github.com/pingidentity/mod_auth_openidc/releases/tag/v2.0.0. Let me know if you have any feedback!


Token Binding for the Apache webserver

The vast majority of authenticated access from clients to servers on the Internet relies on tokens that are typically “bearer” tokens, so anyone who obtains the token can use it to access the protected resources associated with it. This holds for traditional cookies in browsers but also for identity tokens being transported between federated domains in SAML and OpenID Connect, and of course for OAuth 2.0 bearer tokens. The Token Binding (tokbind) Working Group in the IETF develops a set of specifications that allows for binding the token to a particular client so that even if a token gets stolen or lost, it cannot be used outside of the client that it was intended for.

In essence the Token Binding mechanism consists of the client generating a public/private key pair (on-the-fly, in memory) that is associated with a specific client/server pair, and of the “token issuance service” including the public key (aka the “Token Binding ID”) in the token at token creation time. Whenever the client then sends a request to the resource server it will also need to present some proof of ownership of the private key that corresponds to the Token Binding ID that is now embedded in the token.

The draft-ietf-tokbind-https document from the same Working Group maps that idea to HTTP requests running over TLS connections: the client needs to prove ownership of the private key by including a header in the HTTP request with a signed piece of data that consists of a unique reference (the so-called EKM) to the TLS connection that was used to set up the initial connection to the server. This essentially binds the token to the TLS connection that a specific client has with a specific server. That is interesting stuff that allows for far more secure implementations of WebSSO flows – amongst other scenarios. That is why I did some research into leveraging Token Binding with mod_auth_openidc. In my case there are three prerequisites to make this work:


  1. the Apache server protecting the resources needs to support the Token Binding TLS extension on the TLS (mod_ssl) level, needs to verify the secure header and needs to pass up the verified Token Binding ID to the upper layer
  2. the browser needs to support generating the key pair, sending the Token Binding ID on TLS connection establishment, signing the EKM, and sending the secure header that includes the EKM signature and the Token Binding ID
  3. mod_auth_openidc needs to include the Token Binding ID in the session cookie that it issues, and on subsequent requests needs to compare that against the secure header

For 1. I added preliminary support for Token Binding in the Apache HTTP web server in a fork on GitHub here, and for 2. I was able to use a nightly build of Chromium to prove that it actually works. This means that mod_auth_openidc’s session cookies can no longer be replayed outside of the original browser (without relying on the obscure low-security tricks that exist today, such as browser fingerprinting). I’m hoping to build out support to include “referred tokens” so that 3rd-party issued id_tokens and OAuth 2.0 tokens can also be bound to the browser/Apache server as specified by http://self-issued.info/?p=1577.

It may look like quite a bit of work but it actually wasn’t that hard to develop, so hopefully we’ll get extensive support for Token Binding in regular releases of web servers and browsers as soon as the specification matures.
