The importance of Audience in Web SSO

When dealing with Single Sign-On for web applications (Web SSO), the stakeholders involved in offering those applications (i.e. developers, architects, application owners and their administrators) tend to care about how to identify and authenticate the end user who is accessing the web application. In fact, that used to be the only thing they really needed to care about in the early days of the web, when everything was built and deployed in-house. That has changed as time moved along:

  • the range of web applications offered by a single business party today is developed by different software development companies and contractors
  • that suite of applications is deployed on infrastructure belonging to different administrative domains, e.g. an Infrastructure-as-a-Service provider, a hosting party and/or a data center operator
  • those applications are often managed by different parties, e.g. hosting providers, contractors, managed service providers

Now those developments brought along a challenge for Web SSO, since the classic solutions were (and some still are) built on leveraging a single cookie across a set of applications. That cookie presents the identity of the user in a verifiable way, by consulting a policy server or by verifying a signature. That is all nice and well, except that the nature of an SSO mechanism that leverages a single cookie across different applications works against itself when different parties are involved in offering those applications: suddenly all of those parties can get hold of that cookie. An application developer could obtain it through modified code, an infrastructure provider could get it from logs or introspection, and an application manager could get it from the application itself or from a monitoring system.


But since they can obtain that cookie (and they can, because their application needs it), they can also turn around and replay it against any of the other applications connected to the same SSO system, and thus impersonate the end user in those applications! That is an unacceptable risk for banking and financial businesses, but in fact for any company with a reasonable security and compliance policy. In addition to being exposed to this risk when leveraging different external parties to offer applications, companies are also at risk of employees (application administrators) in one business unit turning bad and hijacking applications in other business units.

The way forward is as simple as it is elegant: make sure that every application receives its own token that cannot be used against another application. The security property of such a token that makes the difference here is called “audience”. The “audience” defines who the intended recipient of the token is, and in the modern web it is just as important as the “subject”, i.e. the authenticated end user. You may know the concept from the SAML and OpenID Connect protocols, where tokens are issued by a central entity called the Identity Provider but always target a specific recipient.
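To make the concept concrete, here is a minimal sketch of an audience check on a received JWT-style token (Node.js, illustrative only: a real Relying Party must also verify the token’s signature, expiry and issuer, preferably with a proper JWT library):

```javascript
// Minimal sketch of an audience ("aud") check on a received JWT.
// Illustration only: signature, expiry and issuer checks are omitted here.
function audienceMatches(jwt, myIdentifier) {
  // the payload is the base64url-encoded middle part of the JWT
  const payload = JSON.parse(
    Buffer.from(jwt.split('.')[1], 'base64url').toString('utf8'));
  // per RFC 7519, "aud" may be a single string or an array of strings
  const aud = Array.isArray(payload.aud) ? payload.aud : [payload.aud];
  return aud.includes(myIdentifier);
}
```

An application that rejects any token whose audience is not its own identifier can no longer be impersonated with a token stolen from a sibling application.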


All of the above is one more reason (warning, long sentence…) to move away from previous-century, single-domain, cookie-based legacy Web Access Management systems towards modern, standards-based, per-application token-based security!

Posted in Uncategorized | 7 Comments

OpenID Connect for NGINX

I have added support for OpenID Connect and OAuth 2.0 to the NGINX web server and put it up on github here:

You’ll notice that it uses the scripting language Lua to realize those features. At first glance it may not sound attractive to use a scripting language in a high-performance web server like NGINX, but Lua can be compiled to byte-code so that it achieves “performance levels comparable with native C language programs both in terms of CPU time as well as memory footprint”, and the scripting powers make up for a great deal. As proof of that: a lot of widely used extensions to NGINX have been written in Lua.

Adding OpenID Connect support in this way was a lot easier than coding it in C as I did previously for the Apache mod_auth_openidc module. Lua comes with a wide range of standard and non-standard libraries that can be leveraged when implementing a simple REST/JSON extension like OpenID Connect, e.g.:

  • a JSON parser
  • crypto/random functions
  • server-wide caching based on shared memory
  • http client functions
  • encrypted session management framework

This allowed me to focus just on the core OpenID Connect functionality and to write a first version of this script in less than 2 days’ time, with most of that time spent learning Lua and NGINX… Also: robustness in a scripting language is a lot easier to achieve than in C (goodbye, segmentation faults!). OK, the implementation of OpenID Connect is very basic (read: Basic Client Profile only) right now, but hopefully spec coverage will increase over the coming months. As a matter of fact, this project shows how easy it is to write a basic OpenID Connect client in less than 400 lines of code.

Note that doing this in Lua also provides us with great flexibility in terms of access control: in fact, all of the scripting power of Lua can be applied to create complex rules that act on the claims provided by the OpenID Connect id_token, the claims returned from the UserInfo endpoint, or the results of access token introspection.
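As an illustration of the kind of rule this enables (in the NGINX module such rules are written in Lua; Javascript is used here purely for illustration, and the claim names “hd”, “email_verified” and “groups” are hypothetical examples):

```javascript
// Illustrative claims-based access rule: allow only verified users from one
// domain that are a member of at least one required group. The claim names
// used here are hypothetical examples, not prescribed by any spec.
function allowAccess(claims) {
  return claims.hd === 'example.com' &&
         claims.email_verified === true &&
         Array.isArray(claims.groups) &&
         claims.groups.some(g => ['staff', 'admin'].includes(g));
}
```

The same kind of predicate can be evaluated per-request against whatever claim set the id_token, UserInfo response or introspection result provides.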

Happy OIDC-NGINXing!

Posted in Uncategorized | 5 Comments

mod_auth_openidc 1 year anniversary

My project mod_auth_openidc, a module that implements OpenID Connect RP functionality for the Apache web server, is exactly 1 year old today so I thought it would be nice to describe the current status.

As always happens with these projects, it ended up being a bit more than originally planned, in both features and work… Apart from the OpenID Connect RP functionality, it is now also a full-fledged OAuth 2.0 Resource Server that can validate an access token against an Authorization Server in a so-called “introspection call”, or validate it locally if the access token is a JWT.
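The decision a Resource Server makes on the result of such an introspection call can be sketched as follows (field names follow RFC 7662; the scope-checking logic is an illustrative assumption, not the module’s actual implementation):

```javascript
// Sketch of a Resource Server's decision on an RFC 7662-style introspection
// response. Field names ("active", "exp", "scope") follow that RFC; the
// required-scope handling is an illustrative assumption.
function tokenAcceptable(introspection, requiredScope, now) {
  now = now || Math.floor(Date.now() / 1000);
  if (introspection.active !== true) return false;  // revoked or unknown token
  if (introspection.exp !== undefined && introspection.exp <= now) return false;
  if (requiredScope) {
    const scopes = (introspection.scope || '').split(' ');
    if (!scopes.includes(requiredScope)) return false;
  }
  return true;
}
```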

  • More than 75 configuration primitives and 4 types of supported caching/session-sharing backends (trust me, it is flexible, not overengineered…)
  • Over 200 unique visitors and 1000 views of the project website every week
  • Over 50 people starred the Github project, over 20 forks were created so far by other contributors, and about 20 clones per week are pulled for local compilation/development
  • Almost 2000 downloads of binary packages (provided for Debian/Ubuntu/Redhat/CentOS)
  • Included in the stock Ubuntu (Utopic) distribution and the upcoming stable release of Debian Jessie

Try it here:

Posted in Uncategorized | 4 Comments

OpenID Connect Session Management

In my last post I wrote about the problem of Single Logout (SLO) in federated Web SSO systems. Traditional SLO approaches would use either front-channel or back-channel mechanisms, each with their own challenges as described thoroughly by the Shibboleth people here.

OpenID Connect, the new kid on the SSO protocol block, has added a very interesting new approach to SLO and Session Management in general that is worth closer inspection. The proposal adds two hidden iframes, one served by the Relying Party (RP) and one served by the OpenID Connect Identity Provider (OP), to each webpage presented by the RP, to synchronize session state between OP and RP. Don’t stop reading yet. I can hear you think “what??, oh nooo!!”, which was also my initial reaction when I first read the draft specification… But I decided to implement it in mod_auth_openidc to get to know it better, and the more I worked with it, the more I started to like it. Yes, I agree it initially looks like a “hairy and obtrusive” approach (and in the end that may always remain a pain point for some), but there are a number of advantages over traditional approaches:

  • Simplicity
    The 2 iframes contain almost trivial code and the backend handling code is relatively simple as well. Besides that, no per-RP state needs to be communicated or maintained at the OP, which greatly simplifies state management overall.
  • Lightweight on network traffic
    The mechanism is browser-side mostly: after the initial session establishment and iframe download the OP session state cookie is checked client-side only.
  • Fits modern web application development
    Today’s web applications rely heavily on client-side processing anyway, Session Management can be a natural part of that. Adding two iframes to every page is not difficult to do in modern application frameworks (or templates) and adds only marginal overhead to the Javascript code bundles that are already downloaded. (If you don’t believe me, just go and inspect browser traces for the world’s most popular sites… or consider the increasingly popular Single Page Applications)

The fact that this mechanism only works when the page is loaded in a browser and loses control once tabs or windows are closed may be mitigated by adding event listeners that act on close events. We’ll need to take care of robustness anyway, see below.

So does this mechanism bring us failsafe SLO? Well, no, not in itself, because it still does not guard against network connectivity interruptions and state synchronization failures. We must also implement relatively short timeouts on session state updates in both the RP and OP iframes to create a robust system. Although the point of the current draft is to avoid this type of check, I can’t see a reliable SLO system work without them. Work is ongoing in the OpenID Foundation to refine and extend SLO mechanisms, but in itself I think the current draft is a very interesting (and, to me, unexpected) twist on SLO that I wanted to call out here. Besides that, in its current form it is already as good as any SLO mechanism out there (for whatever that is worth…).

Posted in Uncategorized | Leave a comment

Single Logout: Why Not

Single logout (SLO) and centralized session management across a range of web applications connected to a Single Sign-On system has traditionally been a tricky problem. Due to the nature of the web, it cannot be guaranteed that all application sessions are terminated at the same time with 100% certainty. Failure may be deemed acceptable for SSO, because it merely results in inconvenience (users will just try again or give up), but it is typically unacceptable in SLO, since it leads to insecurity through an unpredictable resulting session/logon state. Since the point of introducing SLO is usually to increase security (by killing sessions), we have actually achieved a worse result with SLO than without: a predictable logged-on state versus an unpredictable logged-out state.

Notice that this means that users will always have to manually/visually inspect the results of a logout action to see if it succeeded. And if it did not succeed, they’d have to revert to removing session cookies in another way, most likely by closing their browser. Well, since closing the browser is the final-and-only remedy anyway, my position on SLO has always been to advise users to close their browser to achieve failsafe SLO. (I’m deliberately staying away from weird browser implementations/hacks that make session cookies survive browser restarts…)

The reason for writing this rant is actually to make a point about OpenID Connect’s proposed Session Management features; that will be a followup post though…

Posted in Uncategorized | 2 Comments

OpenID Connect and POST bindings

One of the interesting differences between OpenID Connect and SAML is that the core OpenID Connect specification does not specify a binding similar to SAML POST, where the IDP/OP uses HTTP POST to pass tokens to the SP/RP. There is an OAuth 2.0 extension that can be used to do that, by using response_mode=form_post, but there’s no guarantee that your partner supports it, since it is optional.

However, OpenID Connect Core does use the standard OAuth 2.0 “fragment” binding for front-channel flows. That binding is primarily meant for in-browser clients, because the data in the fragment is not supposed to leave the browser. Nevertheless, that flow can be turned into a POST binding for web server clients by using Javascript to extract the token from the fragment and POST it to the web server client. Serving such Javascript code on the redirect_uri will effectively implement a POST binding, but the actual POST code/trigger is served by the RP instead of the OP, as opposed to SAML POST where the IDP triggers the POST. Note also that such Javascript code is typically site-dependent.

As an improvement on that, I have developed some Javascript that is site-agnostic and does effectively the same as described above, except that it uses a plain old HTML form for POST-ing the data instead of an XMLHttpRequest. This very same code has the side effect that it will also make the server support the optional OAuth 2.0 form_post response mode. So by serving the Javascript code as pasted below on the redirect URI, and developing the server-side code to deal with the POST, you will effectively support both the form_post and fragment response modes for web server clients in one blow. How about that :-)! FWIW: I’m using this in mod_auth_openidc quite successfully.

    <html>
      <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
        <script type="text/javascript">
          function postOnLoad() {
            var encoded = location.hash.substring(1).split('&');
            for (var i = 0; i < encoded.length; i++) {
              encoded[i] = encoded[i].replace(/\+/g, " ");
              var n = encoded[i].indexOf("=");
              var input = document.createElement("input");
              input.type = "hidden";
              input.name = decodeURIComponent(encoded[i].substring(0, n));
              input.value = decodeURIComponent(encoded[i].substring(n + 1));
              document.forms[0].appendChild(input);
            }
            document.forms[0].action = window.location.href.substr(0, window.location.href.indexOf('#'));
            document.forms[0].submit();
          }
        </script>
      </head>
      <body onload="postOnLoad()">
        <form method="post" action=""><p><input type="hidden" name="response_mode" value="fragment"></p></form>
      </body>
    </html>
Posted in Uncategorized | 11 Comments

Importing InCommon metadata into PingFederate

Every now and then the question pops up: “how can I import InCommon metadata into PingFederate”?

InCommon distributes SAML 2.0 metadata of all participants in the InCommon federation in a flat file. The individual EntityDescriptors of the participants are included in an outer XML element named EntitiesDescriptor. Out-of-the-box, PingFederate won’t be able to consume this EntitiesDescriptor because it only knows how to deal with a (single) EntityDescriptor at a time.

The good news is that PingFederate has a number of APIs that can be leveraged to get around this current limitation. It is not difficult to build a script that parses the individual EntityDescriptors out of the outer EntitiesDescriptor and pushes them one-by-one over the API as single entities (i.e. IDPs or SPs). A sample script that I wrote using the SOAP-based Connection Management API can be found here:
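The parsing step can be sketched as follows (in Javascript here, merely to illustrate the idea; a production script should of course use a real XML parser rather than a regular expression):

```javascript
// Naive sketch of splitting an InCommon-style EntitiesDescriptor into its
// individual EntityDescriptor elements before pushing them one-by-one over
// the PingFederate API. Relies on EntityDescriptor elements not nesting
// (true for SAML metadata) and tolerates namespace prefixes like "md:".
function splitEntityDescriptors(xml) {
  const re = /<(?:\w+:)?EntityDescriptor[\s>][\s\S]*?<\/(?:\w+:)?EntityDescriptor>/g;
  return xml.match(re) || [];
}
```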

Here’s a screenshot of how the Admin GUI screen looks after the import of 1653 SPs and 349 IDPs from InCommon:

[Screenshot of the PingFederate Admin GUI after the import, 2014-04-14]

And yes, this script works against arbitrary EntitiesDescriptors, e.g. the ones from the UK Access Federation or the Dutch SURFconext.

But there’s more: recently PingFederate has been extended with an Admin REST API, which is even simpler to deal with than the SOAP API. As of today it only handles IDP connections, but that functionality can already be used by an SP to process the InCommon metadata and, for example, update all of the certificates that have been replaced or renewed. In this way no manual update of key material is needed when IDPs roll over, add or remove signing certificates. The script can be found here:

So after importing or manually configuring a bunch of InCommon IDPs in PingFederate, the SP administrator can now run this script as a cronjob to automatically incorporate changes that the configured IDPs make to their signing keys.

In summary, these APIs and scripts allow for scaling up federation to large numbers of partners by automating maintenance tasks, and, last but not least, they will make federation administrators sleep a lot better…

Posted in Uncategorized | 4 Comments