Using an OAuth 2.0 Resource Server with Certificate-Bound Access Tokens

ZmartZone has implemented OAuth 2.0 Resource Server functionality in Apache/NGINX modules so these components can be used as a reverse proxy in front of APIs or other backends. In such a setup the backend does not have to deal with security but outsources it to a proxy sitting in front of it in a similar way that TLS termination is often offloaded to a load-balancer.

Most OAuth 2.0 deployments today use so-called bearer access tokens, which are easy to deploy and use. This type of access token is not bound to the Client presenting it, which means that an attacker who intercepts an access token can simply use that token to access the resources/APIs/services as if it were the Client. So-called Proof-of-Possession semantics for access tokens prevent that type of attack and provide a more secure setup, but such a system is typically harder to implement, deploy and maintain.

A relatively simple variant of Proof-of-Possession for access tokens is specified in RFC 8705, OAuth 2.0 Mutual-TLS Client Authentication and Certificate-Bound Access Tokens. This specification leverages a (possibly self-signed) certificate held by the Client to cryptographically bind an access token to the private key associated with that certificate.

This specification is implemented in liboauth2 1.4.1, which is used in the Apache module mod_oauth2 3.2.1. This means that you can now require and verify OAuth 2.0 certificate-bound access tokens for your API in a very simple way that is easy to deploy. All it takes is an Apache server in front of your API, configured with something like:

AuthType oauth2
OAuth2TokenVerify jwk "{\"kty\":\"RSA\",\"kid\":\"one\",\"use\":\"sig\",\"n\":\"...\",\"e\":\"AQAB\" }" type=mtls&mtls.policy=optional
SSLVerifyClient optional_no_ca
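Under the hood, RFC 8705 binding works by embedding a `cnf` (confirmation) claim in the access token that carries the base64url-encoded SHA-256 thumbprint (`x5t#S256`) of the client certificate; the Resource Server compares that claim against the certificate presented on the mutual TLS connection. A minimal sketch of that comparison in Python (the function names are mine, not part of mod_oauth2):

```python
import base64
import hashlib

def x5t_s256(der_cert: bytes) -> str:
    """Base64url-encoded (unpadded) SHA-256 thumbprint of a DER certificate."""
    digest = hashlib.sha256(der_cert).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def token_bound_to_cert(cnf_claim: dict, der_cert: bytes) -> bool:
    """Check the token's cnf/x5t#S256 claim against the presented client cert."""
    return cnf_claim.get("x5t#S256") == x5t_s256(der_cert)

# Example with dummy certificate bytes: a token stolen by an attacker who
# cannot present the bound certificate (and its private key) is rejected.
cert = b"dummy-der-bytes"
cnf = {"x5t#S256": x5t_s256(cert)}
print(token_bound_to_cert(cnf, cert))           # binding matches
print(token_bound_to_cert(cnf, b"other-cert"))  # stolen token fails
```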

Acknowledgement: this work was performed in a partnership with Connect2ID. Thanks Vladimir Dzhuvinov.

Posted in Uncategorized | Leave a comment

MVP OAuth 2.x / OpenID Connect modules for NGINX and Apache

Last year I wrote about a new development of a generic C-library for OAuth 2.x / OpenID Connect.

It’s taken a bit longer than anticipated, due to various circumstances, but there’s now a collection of modules that is at the Minimum Viable Product stage, i.e. the code is stable and of production quality, but limited in its feature set.



Replacing legacy enterprise SSO systems with modern standards

I would like to highlight a digital transformation that I have witnessed and contributed to over the past few years. This transformation is about the adoption of standards-based software to achieve Single Sign-On (SSO) in enterprise environments, replacing legacy vendor systems that have been around for decades.

About 20 years ago companies started to implement intra-enterprise SSO based on vendor-proprietary mechanisms and protocols. This created a strong dependency on the vendor and, in most cases, resulted over the years in extremely high costs in licensing, infrastructure and professional services.

In addition, the legacy mechanisms are often based on enterprise-wide cookies, which makes them vulnerable to user impersonation attacks, especially from insiders, and also makes it hard to move applications into the cloud. See also:

On top of that, many legacy systems came with proprietary “agent” software that has a strong dependency on (a specific version of) the policy server from the same vendor, making upgrades very painful and time-consuming. Implementing an open standard reduces that dependency and makes it easier to apply upgrades or switch between vendors.

I have observed that the biggest recent uptake of the OpenID Connect standard is in enterprise SSO, replacing legacy vendor-proprietary systems, and I am happy to be able to contribute to that in the form of open source software for a variety of web servers and reverse proxies. Please reach out to me if you think this software can be useful for your business as well.


Deploy OpenID Connect and OAuth 2.0 with a Reverse Proxy Architecture

I haven’t had much time for blog posts lately, but if you are interested in my take on modern identity and how to roll out OpenID Connect and OAuth 2.0 for all of your applications without modifying them, please see the recording of my talk last June 26 at Identiverse 2019 in Washington DC, titled “Deploy OpenID Connect and OAuth 2.0 with a Reverse Proxy Architecture”.


OAuth 2.0 and OpenID Connect libraries for C

Just about 5 years ago I started to develop an OpenID Connect plugin for the Apache web server. Over the years it has become a pretty popular project, and lots and lots of input from real-world experience has come along to improve it. As with all software, there comes a time to revisit the initial idea, the design choices and the scope of the project, and that is exactly what I have done over the last 6 months.

I found that a lot of existing code could be refactored into a generic OAuth/OIDC library for C, and that that library could then conveniently be used to build C-based plugins for various web servers and reverse proxies, not just Apache. This is similar to the approach that mod_security version 3 took. Today marks the announcement of such libraries for OAuth 2.0 / OpenID Connect and the availability of some of these plugins. Here’s what I came up with so far:

And here is the first batch of plugins built on those libraries:

Note that over the course of 2019 I will rewrite mod_auth_openidc into version 3 that leverages liboauth2 and at the same time develop a native plugin (called ngx_openidc_module) for NGINX.

Please reach out to me if you want to be an early adopter of the new stuff and are willing to provide feedback.


Token Binding specs are RFC: deploy NOW with mod_token_binding

Update 10/18/2018: Google Chrome actually removed Token Binding support in version 70 so you’ll have to be on an older version to see this work in action, or use MS Edge, also see here. See here to learn how to run two Chrome versions side-by-side.

Today the so-called Token Binding specifications were formally promoted to Proposed Standard RFCs by the IETF. Essentially those specs define how to securely bind tokens to the communication channel running between a client and a server. As simple as that sounds, it is an extremely important step forward in web security: tokens are used everywhere across the Internet today (e.g. for SSO, APIs, Mobile Apps), and moreover, the ubiquitous HTTP cookie also falls into the security token category.

In this post we’ll focus on securing HTTP cookies in Web Applications. HTTP cookies are pervasive on the web, yet the vulnerability window for stealing such a cookie is usually larger than the window for stealing a security token, the damage is typically larger and abuse is harder to detect. Applying Token Binding to HTTP cookies closes a vulnerability in the web application landscape and prevents session hijacking through a stolen session cookie. See, for example, a class of XSS attacks that will now fail; retrieving a session cookie from application/server logs, from the network, or through malicious client software is no longer effective either.

Implementing the Token Binding protocol itself in your application by interfacing with the TLS layer is complex and can easily go wrong. So how can an application leverage Token Binding today without jumping through time-consuming and dangerous hoops? Well, applications on or behind Apache are catered for by mod_token_binding: they can leverage a header or environment value set by mod_token_binding in their session cookie (or in other tokens they generate, for that matter). The hard protocol security bits are dealt with by mod_token_binding and a trivial 2-step implementation process remains for the application itself. See below for a sample in PHP:

  1. At session creation time: put the Token Binding ID provided in the environment variable set by mod_token_binding into the session state.

$tokenBindingID = apache_getenv('Sec-Provided-Token-Binding-ID');
// apache_getenv() returns FALSE when the variable is not set
if ($tokenBindingID !== false) {
  $_SESSION['TokenBindingID'] = $tokenBindingID;
}

  2. On subsequent requests: check the Token Binding ID stored in the session against the (current) Token Binding ID provided in the environment variable, and reject the request on a mismatch.

if (array_key_exists('TokenBindingID', $_SESSION)) {
  $tokenBindingID = apache_getenv('Sec-Provided-Token-Binding-ID');
  if ($_SESSION['TokenBindingID'] !== $tokenBindingID) {
    // Token Binding ID mismatch: possible session hijack, reject the request
    http_response_code(401);
    exit;
  }
}

Et voilà, we’ve bound the PHP session cookie to the TLS channel with the Token Binding ID provided by mod_token_binding; that’s all there is to it!

Another example is mod_auth_openidc, the OpenID Connect Relying Party implementation for Apache HTTPd. This module already leverages this functionality for its own session cookie, its state cookie and the id_token, the latter only if supported by the Provider.

Anyone using mod_auth_openidc should look to deploy it in conjunction with the Token Binding module now! (But be aware that you need to upgrade your Apache server to >= 2.4.26 and build/deploy it with an OpenSSL >= 1.1.1 stack.)

Legacy applications may also use Apache HTTPd with mod_auth_openidc deployed as a Reverse Proxy in front of their origin server, so that their own session can only be started/accessed through an authenticated OIDC session cookie that leverages Token Binding. That effectively lets them use Token Binding to protect themselves against session hijacking without touching the application and/or its own application session cookie [1].

Looking forward to lots of happy mod_token_binding usage!

[1] Note that the user’s browser also needs to support Token Binding to leverage all of this good work, but Microsoft’s Edge and Google’s Chrome already do, and support in other browsers such as Firefox is near.


A Security Token Service client for the Apache webserver

I’ve recently been working on mod_sts, a so-called Security Token Service client for the Apache web server. This module allows for exchanging arbitrary security tokens by calling into a remote Security Token Service (STS). It operates as a so-called “active” STS client, i.e. a non-browser, non-interactive STS client. The STS function allows for creating a split between tokens presented from remote security domains (customers, clients, partners, consumers) and tokens generated and used within an internal (or in any case, a different) security domain. So why would you want to do that?

  • a split between external (“source”) tokens and internal (“target”) tokens may be enforced for security reasons, to separate external requests/tokens from internal requests/tokens whilst keeping “on-behalf-of-a-user” semantics; backend services would never see/obtain customer tokens directly, which may satisfy a compliance/regulatory obligation, or is simply good security practice because internal services would not be able to impersonate the original client
  • for legacy reasons in case your backend only supports consuming a proprietary/legacy token format/protocol and you don’t want to enforce support for that legacy onto your external clients that use a standardized token/protocol (or vice versa…)
  • when the backend service needs to call a 3rd-party service to fulfill its function it is good security practice to obtain a new token “on its own behalf” rather than calling the remote service with the source token obtained from its client; this use case is often handled with a static service credential but in that case the (verifiable) context of the original user is lost; the STS allows for obtaining a new token for calling a 3rd-party service whilst retaining the “on-behalf-of” semantics for the original client/user

This module can be used in scenarios where an Apache server is put in front of an origin server as a Reverse Proxy/Gateway that consumes “source” tokens presented by external clients but needs to forward those requests presenting a different “target” security token, turning around and acting as its own client to a backend service. Note that the backend service can also be an application that is hosted on the Apache server itself, e.g. a PHP application or a 3rd-party provided application.

It turned out to be a pretty generic and powerful module catering for the “delegation”, “impersonation” and “legacy wrapping” scenarios described above, dealing with:

  1. arbitrary incoming token formats and protocols, e.g. OAuth 2.0 access tokens, cookies, JWTs, legacy tokens in headers, etc.
  2. arbitrary outgoing token formats and protocols, e.g. OAuth 2.0 access tokens, vendor-specific or legacy cookies/tokens, JWTs, etc.
  3. a few types of (somewhat standardized) STS protocols, i.e. WS-Trust, the new OAuth 2.0 Token Exchange protocol (a work in progress at the time of writing), and even a “twisted” Resource Owner Password Credentials grant that I’ve seen used for this purpose a number of times in the field
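To make the OAuth 2.0 Token Exchange variant concrete: it is a plain form-encoded grant in which the “source” token is presented as `subject_token` to the STS token endpoint. A sketch of building such a request body in Python (the token values and audience are placeholders):

```python
from urllib.parse import urlencode

def token_exchange_body(subject_token: str, subject_token_type: str,
                        audience: str) -> str:
    """Build the form-encoded body for an OAuth 2.0 Token Exchange request."""
    return urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": subject_token_type,
        "audience": audience,
    })

body = token_exchange_body(
    "eyJhbGciOi...",  # the "source" token presented by the external client
    "urn:ietf:params:oauth:token-type:access_token",
    "https://backend.internal.example.com",
)
# POST this body to the STS token endpoint with
# Content-Type: application/x-www-form-urlencoded; the response contains
# the "target" token in its access_token member.
```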

You’ll be pleased to know that it plays nicely with mod_auth_openidc in OAuth 2.0 Resource Server mode, consuming a verified access token from an environment variable set by mod_auth_openidc.

Happy STS-ing with mod_sts and let me know if you have questions, suggestions or need support.


Access Control using Reverse Proxy XACML PEPs

Following the previous post that I wrote a while ago about authenticating reverse proxies in front of resources you want to protect with OpenID Connect or OAuth 2.0, this post is about the next step: access control using those proxies. Whilst the plugins that I talked about have basic access control capabilities built into them, you may want to integrate with a central XACML Policy Engine that your company already deploys. To facilitate that, I have developed plugins that implement the XACML 3.0 Policy Enforcement Point logic in NGINX and Apache HTTPd.

In this way you can write and maintain advanced access control logic in XACML policies using your XACML 3.0 Policy Administration Point and enforce those policies directly in your reverse proxy web servers that protect your business assets. The communication between the web server PEP and the PDP engine is done using the XACML 3.0 REST and JSON Profiles, so it has minimal overhead in terms of processing and payload.
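For illustration, a XACML 3.0 JSON Profile authorization request, roughly as a PEP would send it to the PDP’s REST endpoint, looks like this (the subject, action and resource values, as well as the PDP URL, are example placeholders):

```python
import json

# A minimal XACML 3.0 JSON Profile request: who (AccessSubject) wants to
# perform what (Action) on which resource (Resource).
request = {
    "Request": {
        "AccessSubject": {"Attribute": [
            {"AttributeId": "urn:oasis:names:tc:xacml:1.0:subject:subject-id",
             "Value": "alice"},
        ]},
        "Action": {"Attribute": [
            {"AttributeId": "urn:oasis:names:tc:xacml:1.0:action:action-id",
             "Value": "GET"},
        ]},
        "Resource": {"Attribute": [
            {"AttributeId": "urn:oasis:names:tc:xacml:1.0:resource:resource-id",
             "Value": "/api/orders"},
        ]},
    }
}
payload = json.dumps(request)
# POST the payload to the PDP (e.g. https://pdp.example.com/authorize) with
# Content-Type: application/xacml+json; the JSON response carries a Decision
# of Permit, Deny, NotApplicable or Indeterminate that the PEP enforces.
```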

Look here for the NGINX plugin:

There’s a similar plugin for Apache 2.x that can be purchased under a commercial agreement. For details contact:


OAuth 2.0 and OpenID Connect for existing APIs and Web Applications

TL;DR: deploy a reverse proxy with OAuth 2.0 and OpenID Connect capabilities in front of your existing APIs and web applications to secure those assets with modern security and access control standards, without having to touch them.

Organizations these days have an obvious choice when it comes to developing new applications and APIs and securing them: modern standards such as OAuth 2.0 and OpenID Connect provide an easy and standardized way to protect their assets in a future-proof and secure manner. Things become more complicated when dealing with the existing landscape of applications and APIs. Embedding them in the same modernized security environment would make them benefit from the same advantages in the areas of standardization, cost reduction and security. Yet existing applications are often hard to modify and extend with support for modern security protocols, so how to deal with that?

Enter the reverse proxy: by leveraging the well-known, mature and widely deployed concept of a reverse proxy in front of the assets you want to protect, you can make this happen. The reverse proxy doesn’t just allow you to put your sensitive existing applications/APIs behind it in an unmodified way, but also provides an architectural advantage: by externalizing access control from your application you can advance the implementation on either side independently. Thus you can upgrade your access control capabilities without touching your application, and you can upgrade your applications without risking changes in the access control layer.

There are probably a lot of reverse proxy deployments in your organization today: you may be able to leverage the OAuth/OIDC capabilities that come with them to implement this scenario. When it comes to Apache and NGINX deployments, take a look at mod_auth_openidc and lua-resty-openidc respectively. These components can extend your existing (unprotected or proprietarily secured) APIs with OAuth 2.0 capabilities and your Web Applications with OpenID Connect capabilities. Things become even better when you can deploy and manage these functions as components in your microservices environment; see also my previous post about that.
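As a sketch of what this looks like for Apache with mod_auth_openidc, protecting a legacy web application takes little more than the following (hostnames, client credentials and paths are placeholders for your own environment):

```apache
OIDCProviderMetadataURL https://provider.example.com/.well-known/openid-configuration
OIDCClientID my-client-id
OIDCClientSecret my-client-secret
OIDCRedirectURI https://www.example.com/protected/redirect_uri
OIDCCryptoPassphrase some-random-passphrase

<Location /protected/>
   AuthType openid-connect
   Require valid-user
   # forward authenticated requests to the unmodified legacy application
   ProxyPass http://legacy-app.internal:8080/
</Location>
```

The unmodified backend never sees the OIDC protocol; it only receives requests that have already passed an authenticated session at the proxy.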

Note that this “enhanced” reverse proxy (or: gateway) can work for API use cases (implementing the OAuth 2.0 Resource Server capability to verify access tokens) as well as for Web Access Management and SSO use cases (implementing the OpenID Connect Relying Party functionality to consume ID tokens). And one can deploy those capabilities side by side in a single instance, possibly reusing that instance for many APIs and/or web applications.

Also note that this concept, as it adheres to standards, is independent of the Authorization Server (for the API/OAuth 2.0 use case) or the OpenID Connect Provider (for Web SSO). The OAuth/OIDC Provider can be an on-premises deployment of a vendor-specific product, or even a cloud-based service, which allows you to hook up your on-premises applications to one of the various Identity-as-a-Service offerings through OAuth 2.0/OpenID Connect.

The microservice variant (e.g. a dockerized version of such an Apache/NGINX system) is IMO the most attractive way forward for new deployments: you can have your lean-and-mean Web-Access-Management-and-API-security-gateway-in-a-box today by leveraging mod_auth_openidc or lua-resty-openidc! Stay tuned for a follow-up post on how to deal with (fine-grained) access control in this reverse proxy deployment.


Configuration-Managed Web Access Policies

Traditional Web Access Management (WAM) is heavily centralized because of legacy implementation restrictions. Access policies are defined, created and maintained in a centralized way by the IT department that runs a so-called access policy server. The policies are executed at runtime by having endpoints call in to the policy server, possibly caching the result for a while for performance reasons. The policy itself however remains on the policy server, in a central location. Clustering techniques would be used to scale the policy server itself.

Yet there are a number of recent developments (I’m counting 4) that have brought changes to this world: the micro-services paradigm, containerization technology like Docker, Dev-Ops methodologies, and software configuration management tools such as Ansible, Chef, Puppet, Vagrant etc.

This allows one to make an interesting shift with regard to access policy execution. Since the vast majority of policies are fairly static, those policies can actually be pushed out at configuration time to the endpoints themselves. The endpoints can then execute and enforce policies on their own at runtime, without needing to call out to an external server. This greatly benefits performance and resilience. Changing a policy would involve re-creating an updated configuration file for the application, pushing it out via the software configuration management tool and restarting the application server. Some may even go as far as creating new binary container images from the configuration files and restarting updated containers with new policies that replace the original ones.
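To make this concrete: with a module like mod_auth_openidc the access policy itself can live in the pushed-out httpd.conf, e.g. as a claim-based authorization rule (the path, claim name and value below are examples):

```apache
<Location /api/admin/>
   AuthType openid-connect
   # the policy is evaluated locally by the endpoint at request time,
   # with no runtime callout to a central policy server
   Require claim groups:admins
</Location>
```

Updating the policy then simply means regenerating this file via your configuration management tool and reloading the server.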

Especially when running a bunch of reverse proxy servers in front of your services and APIs to handle security, such as OAuth 2.0 and/or OpenID Connect termination and token validation, this presents a pattern that has architectural and business advantages beyond the plain performance and resilience advantages. It allows one to avoid getting locked into the vendor-specific protocols that have dominated the web access control market for the last decades because of the lack of standardization (or adoption (read: XACML)) of open standards between Policy Enforcement Points (PEPs) and Policy Decision Points (PDPs).

The tooling around creating application specific configuration files with access control policies is freely available and widely deployed for a range of reverse proxies and applications. See e.g.:

Just to make sure: I’m well aware that static policies may not be everything a modern environment needs in the near future. Policies based on dynamic data, such as user behavior analytics and aggregated log analysis, are becoming more important as we speak. Runtime behavior can only be aggregated and analyzed in a centralized location and needs to be consulted at policy execution time. But that does not change the fact that static policies (say 95% today) can be realized in the way described.

So what does the future hold? Well, expect a blend. The vast majority of web access management policies deployed today is ready for a “distributed configuration managed” deployment using modern technologies as mentioned. We’ll see what the future brings for the dynamic part. For the record: I’m calling this paradigm C-WAM. (pronounced as Shhwam!)

PS: I’m sure you’ve noticed how mod_auth_openidc and Apache httpd.conf fit into the world as described…
