OpenID Connect for Single Page Applications

OpenID Connect is an identity protocol that was designed not just for traditional Web SSO but also for modern use cases like native mobile applications and API access. Most of these use cases have a clearly defined and preferred pattern as to which “grant type” or “flow” applies to them. For a good overview of that you can visit the page which presents the flow chart below:


When dealing with Single Page Applications (SPAs) things become a bit more blurry since they can be seen as a mix between traditional Web SSO and API access: the application itself is loaded from a – possibly protected – regular website, and the Javascript code then starts calling APIs in another – or the same – domain. The spec, like the previous diagram, points to the so-called “Implicit grant type” for passing token(s) to the SPA since that grant was specifically designed with the “in-browser client” in mind.

Apart from some “Identerati” changing their minds about recommending Implicit over Authorization Code with a public Client for this use case – that’s probably a topic for a different post – implementing the Implicit grant type is not very popular from what I have seen in the field. Its main drawbacks are:

  • Implicit requires validation of the ID Token by the Client, which means that crypto is done in the browser; this requires relatively heavy-weight libraries and, more importantly, increases complexity
  • because it is not possible to keep a client secret in the browser, Implicit means using a public Client, which is inherently less secure than a confidential Client, see:

In other words, it is cumbersome, error-prone and less secure… That basically defeats the primary purpose of OpenID Connect, whose starting point is the Authorization Code grant type that allows for simple, zero-crypto (well, apart from TLS) and more secure flows. So I’ve been asked about using a server-side implementation of the Authorization Code grant type with a confidential Client for SPAs. I have added a draft implementation of that to the Apache web server through a so-called “session info hook” for mod_auth_openidc. It works as follows:

  1. The SPA is served from an Apache webserver that has the mod_auth_openidc module loaded and configured.
  2. Upon accessing the SPA, mod_auth_openidc will first send the user to the OpenID Connect Provider and run through the Authorization Code flow using confidential Client authentication, configured as a regular web SSO Client. It then creates an application session plus an associated cookie on the Apache server for the user, who can now load the SPA in the browser.
  3. The SPA can call the session info hook at any time by presenting a valid session cookie to obtain a JSON object with information such as the Access Token, the Access Token expiry, the claims from the ID Token, claims from the UserInfo endpoint etc.
  4. The SPA can extract information about the authenticated user from the session info object for its own purposes and can call APIs in another or the same domain using the Access Token contained in it.
  5. When the Access Token expires, the SPA can call the session info hook again to refresh the Access Token and mod_auth_openidc will refresh it using a refresh token that was stored as part of the session information.
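
From the SPA’s side, steps 3–5 boil down to fetching the session info JSON and deciding when to re-fetch it. Here is a minimal sketch in Python (a real SPA would of course do this in Javascript); the JSON field names `access_token` and `access_token_expires` are assumptions about what the session info hook returns:

```python
import time

# A sketch of the SPA-side decision logic; the field names
# "access_token" and "access_token_expires" are assumptions about
# what the mod_auth_openidc session info hook returns.
def usable_access_token(session_info, now=None, skew=30):
    """Return the Access Token from the session info object, or None when
    it is (nearly) expired and the SPA should call the hook again so that
    mod_auth_openidc refreshes it with the stored refresh token."""
    now = now if now is not None else time.time()
    expires = session_info.get("access_token_expires", 0)
    if expires - skew <= now:
        return None  # expired or about to expire: re-fetch session info
    return session_info.get("access_token")
```

When this returns `None`, the SPA simply calls the session info hook again (step 5) and retries with the fresh token.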

The end result is that the SPA uses a secure server-side OIDC flow to authenticate the user and obtain & refresh an access token without requiring any crypto or security-sensitive code in the SPA itself! Let me know if you’re interested in learning more:

Thanks to John Bradley, Mark Bostley, Brian Campbell, Jonathan Hurd and George Fletcher for their input.


OpenID Connect Relying Party Certification for mod_auth_openidc

Good news on the OpenID Connect front: after creating a software certification program for OpenID Connect Provider implementations, the OpenID Foundation recently added a similar capability for testing and certifying Relying Party (RP) implementations.

After putting in quite some work I was able to run mod_auth_openidc through the RP test suite and acquire the first official RP certification for it on December 13th. See the bottom of the page here: One thing to be aware of is that RP testing is quite different from OP testing; as the RP certification test documentation puts it:

For each test, it is the responsibility of the tester to read and understand the expected result and determine whether the RP passed the test as described or not. This is because the RP test server typically doesn’t have enough information to determine whether the RP did the right thing with the information provided or not.

That means that the RP software developer needs to either a) write a lot of code to automate and interpret tests in a reproducible way or b) do a lot of manual work each time the tests are run to prove that everything conforms to the specification. That makes it different from the OP certification process, where the standard functions are tested and the results interpreted by the OP test suite itself, without any effect on the OP software under test.

So why would an RP developer make this – rather significant – extra effort?

Firstly, the OpenID Foundation certification program aims to promote interoperability among implementations; this is certainly a great step in that direction and a significant boost for the OpenID Connect eco-system. But secondly, and more importantly on a “local” scale, any self-respecting software product requires an integrated test suite for regression and continuous-integration type tests. Once the effort for RP certification has been made and integrated into an automated test suite for the RP software, it is a great tool to prevent bugs creeping into new releases and a great help in improving the overall quality of the product.

Consider this a call to arms for all RP software developers out there to bite the bullet!


Apache module for OpenID Connect/OAuth 2.0 release 2.0

Last Friday, September 9th 2016, I released a brand new version of mod_auth_openidc, the module that implements OpenID Connect RP and OAuth 2.0 RS functionality for the Apache webserver. So what’s the big deal with it and why does it have a new major version number?

The biggest change is that all JSON Object Signing and Encryption (JOSE) operations are now handled through a separate 3rd-party library. That library is cjose, an excellent open source project started by Matt Miller from Cisco, which allows mod_auth_openidc to rely on a dedicated library for all crypto-related operations. Since that library will have its own update and release pace, this should result in better security overall for deployments, e.g. no backporting of JOSE/crypto fixes in the module is required – just update the cjose library.

The JOSE library also makes it easy to create JSON Web Tokens and use them to securely store all state that the module needs to keep track of. So now all state storage is protected by wrapping it into a JWT and encrypting and signing it with the pre-configured passphrase. This allows for a secure deployment in shared cache/storage environments. For example, one can now use a public memcache cluster across a bunch of Apache servers without having to worry that the session state can be accessed or altered by some 3rd-party internal administrator who also happens to have access to the same memcache cluster because of another application using it. The same holds for Redis storage or file-based storage.
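
The idea of protecting shared-storage state with the pre-configured passphrase can be illustrated with a small sketch. This is not the module’s actual format – mod_auth_openidc wraps the state in an encrypted *and* signed JWT via cjose – the sketch below shows only the integrity part, using an HMAC keyed from the passphrase:

```python
import base64, hashlib, hmac, json

def protect_state(state, passphrase):
    """Serialize state and append an HMAC so that an administrator of the
    shared cache cannot alter it undetected. (mod_auth_openidc additionally
    *encrypts* the payload; this sketch covers integrity only.)"""
    key = hashlib.sha256(passphrase.encode()).digest()
    payload = base64.urlsafe_b64encode(json.dumps(state).encode())
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def unprotect_state(blob, passphrase):
    """Verify the HMAC and return the state, or raise on tampering."""
    key = hashlib.sha256(passphrase.encode()).digest()
    payload, sig = blob.rsplit(".", 1)
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("state has been tampered with")
    return json.loads(base64.urlsafe_b64decode(payload))
```

Anyone with access to the cache can read or overwrite the blob, but without the passphrase they cannot produce a blob that `unprotect_state` will accept.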

On top of that, an encrypted and signed JWT stored in a temporary state cookie is used to keep track of the request state between the authentication requests that the module sends out and the corresponding authentication responses that it receives from the Provider.

One other change worth mentioning is the support for so-called “chunked” cookies: a large cookie value is now split out over a number of smaller cookies. This in turn allows for client-side session management, i.e. storing all session-related state in a client-side cookie rather than using any server-side storage/dependency, without all too easily running into browser cookie size limits – which previously happened quite a lot with certain Providers out there.
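
Cookie chunking itself is a simple mechanism; a sketch, where the `name_0`, `name_1`, … naming scheme is an assumption for illustration and not necessarily the module’s actual scheme:

```python
def chunk_cookie(name, value, chunk_size=4000):
    """Split a large cookie value into numbered chunk cookies so that no
    single cookie exceeds typical browser limits (around 4KB)."""
    chunks = [value[i:i + chunk_size] for i in range(0, len(value), chunk_size)]
    return {f"{name}_{i}": c for i, c in enumerate(chunks)}

def unchunk_cookie(name, cookies):
    """Reassemble the original value from the numbered chunk cookies."""
    parts, i = [], 0
    while f"{name}_{i}" in cookies:
        parts.append(cookies[f"{name}_{i}"])
        i += 1
    return "".join(parts)
```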

Well, there’s a lot more good stuff in 2.0.0 that you can see for yourself here: Let me know if you have any feedback!


Token Binding for the Apache webserver

The vast majority of authenticated access from clients to servers on the Internet relies on tokens that are typically “bearer” tokens, so that anyone who obtains the token can use it to access the protected resources associated with it. This holds for traditional cookies in browsers but also for identity tokens being transported between federated domains in SAML and OpenID Connect, and of course OAuth 2.0 bearer tokens. The Token Binding (tokbind) Working Group in the IETF develops a set of specifications that allow for binding a token to a particular client so that even if the token gets stolen or lost, it cannot be used outside of the client that it was intended for.

In essence the Token Binding mechanism consists of the client generating a public/private key pair (on-the-fly, in memory) that is associated with a specific client/server pair, and of the “token issuance service” including the public key (a.k.a. the “Token Binding ID”) in the token at token creation time. Whenever the client then sends a request to the resource server it will also need to present proof of ownership of the private key that correlates to the Token Binding ID that is now embedded in the token.

The draft-ietf-tokbind-https document from the same Working Group maps that idea to HTTP requests running over TLS connections: the client proves ownership of the private key by including a header in the HTTP request with a signed piece of data that contains a unique reference (the Exported Keying Material, a.k.a. EKM) to the TLS connection that was used to set up the initial connection to the server. This essentially binds the token to the TLS connection that a specific client has with a specific server. That is interesting stuff that allows for far more secure implementations of WebSSO flows – amongst other scenarios. That is why I did some research into leveraging Token Binding with mod_auth_openidc. In my case there are three prerequisites to make this work:


  1. the Apache server protecting the resources needs to support the Token Binding TLS extension on the TLS (mod_ssl) level, needs to verify the secure header and needs to pass up the verified Token Binding ID to the upper layer
  2. the browser needs to support generating the key pair, sending the Token Binding ID on TLS connection establishment, signing the EKM, and sending the secure header that includes the EKM signature and the Token Binding ID
  3. mod_auth_openidc needs to include the Token Binding ID in the session cookie it issues and, on subsequent requests, needs to compare that against the value from the secure header
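
Step 3 amounts to a simple comparison at session-validation time. A sketch, assuming the verified Token Binding ID is passed up from the TLS layer in some (hypothetical) header or environment variable:

```python
import hmac

# Sketch of step 3; how mod_ssl hands the verified Token Binding ID to
# the module (header name, environment variable) is hypothetical here.
def session_request_allowed(session_tb_id, presented_tb_id):
    """Accept the request only when the Token Binding ID verified at the
    TLS layer matches the one stored in the session at login time."""
    if presented_tb_id is None:
        return False  # binding was established at login, so require it
    return hmac.compare_digest(session_tb_id, presented_tb_id)
```

A stolen session cookie replayed from another browser fails this check, because that browser cannot prove possession of the original private key and thus never presents the matching Token Binding ID.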

For 1. I added preliminary support for Token Binding to the Apache HTTP web server in a fork on Github here, and for 2. I was able to use a nightly build of Chromium to prove that it actually works. This means that mod_auth_openidc’s session cookies can no longer be replayed outside of the original browser (without relying on obscure low-security tricks that exist today such as browser fingerprinting). I’m hoping to build out support to include “referred tokens” so that 3rd-party-issued id_tokens and OAuth 2.0 tokens can also be bound to the browser/Apache server as specified by

It may look like quite a bit of work but it actually wasn’t that hard to develop so hopefully we get extensive support for Token Binding in regular releases of web servers and browsers as soon as the specification matures.


Client Certificates and REST APIs

In high-security environments the use of public key infrastructure (PKI) is wide-spread, for good reasons. A mechanism often found in more traditional client/server environments is to leverage X.509 client certificates to authenticate clients to the server at the TLS level. That achieves the high security that PKI promises us but comes with a number of drawbacks:

  1. it is cumbersome to deploy and maintain across clients
  2. the application development support on both the server side and the client side is non-existent or shaky
  3. the transport-level nature of this mechanism does not mix with less secure traffic on the same TCP port and makes SSL offloading (i.e. proxy deployments) harder

For the reasons above it is next to impossible to require TLS client certificate authentication in modern REST API environments. Yet it is often desirable from a security perspective, or perhaps even a regulatory requirement, to use PKI in those environments. Enter OAuth 2.0 and the token paradigm! OAuth 2.0 splits client/server interaction into a phase where the client *gets* a so-called access token from an Authorization Server and a phase where the client *uses* the access token to get access to the API.

In an OAuth 2.0 system it is fine to authenticate Clients to the Authorization Server using TLS client certificates and to then issue a (short-lived) access token *derived* from that initial authentication down to the Client, which the Client in its turn can use towards the API without ever having to present that client certificate to the API. This alleviates the pains described earlier for regular client/server interaction and pushes them on to the interaction of the Client with the Authorization Server. One could argue that the Authorization Server is a specialized “jumpbox” through which short-lived access to regular systems can be obtained. Well, as it happens, that is OK: that interaction is based on a much tighter relationship between the two actors, does not need to mix traffic, is there for authentication purposes only, has good security properties and support, and, more importantly, can be *changed* or upgraded more easily because of its controlled nature.

That last “change” statement is where the true value of OAuth 2.0 comes in: TLS client certificate authentication to the Authorization Server alleviates just a part of the pains mentioned earlier. To get rid of all the drawbacks we can actually use a different “grant” that allows us to present a signed “piece of data” to the Authorization Server at the HTTP/application level instead of doing TLS client certificate authentication at the TCP level. That interaction could be architected as described in the IETF RFC 7521 standard here: and the “piece of data” could take the form of a SAML assertion or a JSON Web Token as described in RFC 7522 and RFC 7523 respectively.
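
To make the RFC 7523 variant concrete, here is a sketch of building a JWT client assertion. Note the hedge: real deployments sign the assertion with the client’s *private* key (e.g. RS256); HS256 with a shared key is used here only so that the sketch runs on the Python standard library:

```python
import base64, hashlib, hmac, json, time, uuid

def b64url(data):
    """Base64url-encode without padding, as used in compact JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def client_assertion(client_id, token_endpoint, key):
    """Build an RFC 7523-style JWT client assertion. Real deployments use
    an asymmetric signature (e.g. RS256) with the client's private key;
    HS256 is used here purely for illustration."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    claims = {
        "iss": client_id,          # issuer: the client itself
        "sub": client_id,          # subject: the client itself
        "aud": token_endpoint,     # audience: the Authorization Server
        "exp": now + 300,          # short-lived assertion
        "jti": str(uuid.uuid4()),  # unique ID to prevent replay
    }
    signing_input = b64url(json.dumps(header).encode()) + "." + \
        b64url(json.dumps(claims).encode())
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)
```

The resulting compact JWT is POSTed to the token endpoint as the `client_assertion` parameter, replacing the TLS client certificate handshake entirely.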

In this way we can obtain PKI-strength access to APIs, leveraging X.509 certificates without ever doing TLS client certificate authentication on the wire. To achieve the appropriate level of security we can tune the access token expiry timeouts to ensure data “freshness” and to mitigate the effects of lost or stolen tokens.

Cliff-hanger: stay tuned for a future post on so-called “proof-of-possession” access tokens that tighten up security even more (at the cost of bringing back some of the drawbacks…) and allow us to relax requirements on token expiry…


The importance of Audience in Web SSO

When dealing with Single Sign On for Web Applications (Web SSO) the stakeholders involved in offering those applications, i.e. developers, architects, application owners and their administrators, tend to care about how to identify and authenticate the end user who is accessing the web application. In fact that used to be the only thing they really needed to care about in the early days of the web, when everything was built and deployed in-house. That has changed as time moved along:

  • the range of web applications offered by the same business party today is developed by different software development companies and contractors
  • that suite of applications is deployed on infrastructure belonging to different administrative domains, e.g. an Infrastructure-as-a-Service provider, a hosting party and/or a data center operator
  • those applications are often managed by different parties, e.g. hosting providers,  contractors, managed service providers

Now those developments brought along a challenge in Web SSO since the classic solutions were (and some still are) built on leveraging a single cookie across a set of applications. That cookie presents the identity of the user in a verifiable way, by consulting a policy server or by verifying a signature. That is all nice and well, except that the nature of an SSO mechanism leveraging a single cookie that can be used to access different applications works against itself when different parties are involved in offering those applications: suddenly all of those parties can get hold of that cookie, i.e. an application developer could obtain it through modified code, an infrastructure provider could get it from logs or introspection, and an application manager could get it from the application itself or a monitoring system.


But once they obtain that cookie (and they can, because their application needs it) they can also turn around and play it against any of the other applications connected to the same SSO system and thus impersonate the end user in those applications! That is an unacceptable risk for banking and financial businesses, but in fact for any company with a reasonable security and compliance policy. In addition to being exposed to this risk when leveraging different external parties to offer applications, they are also at risk of employees (application administrators) in one business unit turning bad and hijacking applications in other business units.

The way forward is as simple as it is elegant: make sure that every application receives its own token that cannot be used against another application. The security property of such a token that makes the difference here is called “audience”. The “audience” defines who the intended recipient of the token is, and in the modern web it is just as important as the “subject”, i.e. the authenticated end user. You may know the concept from the SAML and OpenID Connect protocols, where tokens are issued by a central entity called the Identity Provider but always target a specific recipient.
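
Checked in code, the audience rule is a one-line guard that every application should apply before trusting a token. A sketch using JWT-style claim names (`iss`, `aud`, `sub`):

```python
def validate_token(claims, expected_audience, expected_issuer):
    """Reject a token whose audience is not this application, even when the
    issuer and subject are perfectly valid: a token minted for application A
    must not be replayable against application B."""
    if claims.get("iss") != expected_issuer:
        raise ValueError("unexpected issuer")
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if expected_audience not in audiences:
        raise ValueError("token was not issued for this application")
    return claims["sub"]  # safe to use the authenticated subject now
```

The single-cookie legacy model fails precisely this check: the same cookie is “valid” everywhere, so any recipient can replay it anywhere.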


All of the above is one more reason (warning, long sentence…) to move away from previous-century, single-domain, cookie-based legacy Web Access Management systems towards modern standards-based, per-application, token-based security!


OpenID Connect for NGINX

I have added support for OpenID Connect and OAuth 2.0 to the NGINX web server and put it up on github here:

You’ll notice that it uses the scripting language Lua to realize those features. At first glance it may not sound attractive to use a scripting language in a high-performance web server like NGINX, but Lua can be compiled to byte-code so that it achieves “performance levels comparable with native C language programs both in terms of CPU time as well as memory footprint”, and the scripting powers make up for a great deal. As proof of that: a lot of widely used extensions to NGINX have been written in Lua.

Adding OpenID Connect support in this way was a lot easier than coding it in C as I did previously for the Apache mod_auth_openidc module. Lua comes with a wide range of standard and non-standard libraries that can be leveraged when implementing a simple REST/JSON extension like OpenID Connect, e.g.:

  • a JSON parser
  • crypto/random functions
  • server-wide caching based on shared memory
  • http client functions
  • encrypted session management framework

This allowed me to focus just on the core OpenID Connect functionality and to write a first version of this script in less than 2 days’ time, with most of that time spent learning Lua and NGINX… Also: robustness is a lot easier to achieve in a scripting language than in C (goodbye segmentation faults!). OK, the implementation of OpenID Connect is very basic (read: Basic Client Profile only) right now, but hopefully spec coverage will increase over the next months. And as a matter of fact this project shows how easy it is to write a basic OpenID Connect client in less than 400 lines of code.
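
As an indication of how small the Basic Client Profile really is, the first step – building the authentication request – fits in a few lines. A sketch in Python rather than Lua, with all parameter values purely illustrative:

```python
from urllib.parse import urlencode

def authorization_request_url(authz_endpoint, client_id, redirect_uri,
                              state, nonce):
    """Build an OpenID Connect Basic Client Profile authentication request:
    an authorization code request with the mandatory 'openid' scope."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid",
        "state": state,  # CSRF protection, echoed back on the redirect
        "nonce": nonce,  # ties the resulting id_token to this request
    }
    return authz_endpoint + "?" + urlencode(params)
```

The rest of the profile is similarly mechanical: redeem the returned code at the token endpoint, validate the id_token, and optionally call the UserInfo endpoint.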

Note that doing this in Lua also provides us with great flexibility in terms of access control: in fact all of the scripting power of Lua can be applied to create complex rules that act on the claims provided by the OpenID Connect id_token, the claims returned from the UserInfo endpoint, or the results of access token introspection.

Happy OIDC-NGINXing!


mod_auth_openidc 1 year anniversary

My project mod_auth_openidc, a module that implements OpenID Connect RP functionality for the Apache web server, is exactly 1 year old today so I thought it would be nice to describe the current status.

As always happens with these projects, it ended up being a bit more than originally planned, in both features and work… Apart from the OpenID Connect RP functionality, it is now also a full-fledged OAuth 2.0 Resource Server that can validate an access token against an Authorization Server in a so-called “introspection call”, or validate it locally if the access token is a JWT.
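
For a feel of what such an introspection call looks like on the wire, here is a sketch of the Resource Server side (this pattern was later standardized as RFC 7662; endpoint and parameter names may differ per Authorization Server):

```python
import base64
from urllib.parse import urlencode

def introspection_request(access_token, client_id, client_secret):
    """Sketch of an introspection call: the Resource Server POSTs the
    token to the Authorization Server, authenticated as a client, and
    inspects the JSON response (e.g. its 'active' flag)."""
    body = urlencode({"token": access_token})
    basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Content-Type": "application/x-www-form-urlencoded",
        "Authorization": "Basic " + basic,
    }
    return body, headers
```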

  • More than 75 configuration primitives and 4 types of supported caching/session-sharing backends (trust me, it is flexible, not over-engineered…)
  • Over 200 unique visitors and 1000 views to the project website every week
  • Over 50 people have starred the Github project, over 20 forks were created so far by other contributors, and about 20 clones per week are pulled for local compilation/development
  • Almost 2000 downloads of binary packages (provided for Debian/Ubuntu/Redhat/CentOS)
  • Included in the stock Ubuntu (Utopic) distribution and the upcoming stable release of Debian Jessie

Try it here:


OpenID Connect Session Management

In my last post I wrote about the problem of Single Logout (SLO) in federated Web SSO systems. Traditional SLO approaches would use either front-channel or back-channel mechanisms, each with their own challenges as described thoroughly by the Shibboleth people here.

OpenID Connect, the new kid on the SSO protocol block, has added a very interesting new approach to SLO and Session Management in general that is worth closer inspection. The proposal adds two hidden iframes, one served by the Relying Party (RP) and one served by the OpenID Connect Provider (OP), to each webpage presented by the RP to synchronize session state between OP and RP. Don’t stop reading yet. I can hear you think “what??, oh nooo!!”, which was also my initial reaction when I first read the draft specification… But I decided to implement it in mod_auth_openidc to get to know it better, and the more I worked with it, the more I started to like it. Yes, I agree it initially looks like a “hairy and obtrusive” approach (and in the end that may always be a pain point for some), but there are a number of advantages over traditional approaches:

  • Simplicity
    The 2 iframes contain almost trivial code and the backend handling code is relatively simple as well. Besides that, no per-RP state needs to be communicated or maintained at the OP, which greatly simplifies state management overall.
  • Lightweight on network traffic
    The mechanism is browser-side mostly: after the initial session establishment and iframe download the OP session state cookie is checked client-side only.
  • Fits modern web application development
    Today’s web applications rely heavily on client-side processing anyway; Session Management can be a natural part of that. Adding two iframes to every page is not difficult to do in modern application frameworks (or templates) and adds only marginal overhead to the Javascript code bundles that are already downloaded. (If you don’t believe me, just go and inspect browser traces for the world’s most popular sites… or consider the increasingly popular Single Page Applications.)
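
The client-side check rests on a `session_state` value that the OP computes and the RP iframe later recomputes. A sketch along the lines of the Session Management draft; the exact composition and ordering of the hash input here is an approximation of the draft, not a literal quote:

```python
import base64, hashlib, secrets

def session_state(client_id, origin, op_browser_state, salt=None):
    """Compute a session_state value roughly as in the OIDC Session
    Management draft: a salted hash over the client_id, the RP origin
    and the OP's browser-state cookie value."""
    salt = salt or secrets.token_hex(8)
    data = " ".join([client_id, origin, op_browser_state, salt]).encode()
    digest = base64.urlsafe_b64encode(
        hashlib.sha256(data).digest()).rstrip(b"=").decode()
    return digest + "." + salt

def session_unchanged(stored_session_state, client_id, origin,
                      op_browser_state):
    """RP-iframe check: recompute with the stored salt and compare; a
    mismatch means the OP session changed, e.g. the user logged out."""
    digest, salt = stored_session_state.rsplit(".", 1)
    return session_state(client_id, origin, op_browser_state,
                         salt) == stored_session_state
```

Because the RP iframe can recompute this purely from the OP browser-state cookie, the polling happens entirely client-side, which is exactly why the mechanism is so light on network traffic.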

The fact that this mechanism only works when the page is loaded in a browser and loses control once tabs or windows are closed may be mitigated by adding event listeners that act on close events. We’ll need to take care of robustness anyway, see below.

So does this mechanism bring us failsafe SLO? Well, no, not in itself, because it still does not guard against network connectivity interruptions and state synchronization failures. We must also implement relatively short timeouts on session state updates in both the RP and OP iframes to create a robust system. Although the point of the current draft is to avoid this type of check, I can’t see a reliable SLO system work without them. Work is ongoing in the OpenID Foundation to refine and extend SLO mechanisms, but in itself I think the current draft is a very interesting (and to me unexpected) twist to SLO that I wanted to call out here. Besides that, in its current form it is already as good as any SLO mechanism out there (for whatever that is worth…)


Single Logout: Why Not

Single logout (SLO) and centralized session management across a range of web applications connected to a Single Sign-On (SSO) system has traditionally been a tricky problem. Due to the nature of the web it cannot be guaranteed that all application sessions are terminated at the same time with 100% certainty. Failure may be deemed acceptable for SSO because it results in inconvenience (users will just try again or give up), but it is typically unacceptable in SLO since it leads to insecurity because of an unpredictable resulting session/logon state. Since the point of introducing SLO is usually to increase security (by killing sessions), we have actually achieved a worse result with SLO than without… i.e. a predictable logged-on state vs. an unpredictable logged-out state.

Notice that this means that users will always have to manually/visually inspect the results of a logout action to see if it succeeded. And if it did not succeed, they’d have to revert to removing session cookies in another way, most likely by closing their browser. Well, since closing the browser is the final-and-only remedy anyway, my position on SLO has always been to advise users to close their browser to achieve failsafe SLO. (I’m deliberately staying away from weird browser implementations/hacks that make session cookies survive browser restarts…)

The reason for writing this rant is actually to make a point about OpenID Connect’s proposed Session Management features; that will be a followup post though…
