I _love_ JWTs for API authentication - one of the nicest APIs I ever consumed was essentially JSON RPC over JWTs. Unfortunately they represent a huge usability hit over API Keys for the average joe. Involving cryptography to sign a JWT per request makes an API significantly harder to consume with tools like Postman or CURL. You can no longer have nice click-to-copy snippets in your public docs. You either have an SDK ready to go in your customer's language or ecosystem of choice, or you're asking them to write a bunch of scary security-adjacent code just to get to their first successful request. No, I don't have a JWT library recommendation for Erlang, sorry.
Not that an API couldn't support both API Keys and JWT based authentication, but one is a very established and well understood pattern and one is not. Lowest common denominator API designs are hard to shake.
> sign a JWT per request

That's interesting - why do it this way rather than including a "reusable" signed JWT with the request, like an API token? Why sign the whole request? What does that give you?
Also what made that API so nice? Was this a significant part of it?
> That's interesting - why do it this way rather than including a "reusable" signed JWT with the request, like an API token? Why sign the whole request?
Supposedly bearer tokens should be ephemeral, which means either short-lived (say single-digit minutes) or one-time use.
This was the way bearer tokens were supposed to be used.

> What does that give you?

Security.

https://en.wikipedia.org/wiki/Session_hijacking
> Supposedly bearer tokens should be ephemeral, which means either short-lived (say single-digit minutes) or one-time use.
The desirable properties for tokens are that they have some means of verifying their integrity, that they are being sent by the authorized party, and that they are being consumed by the authorized recipient.
A "reusable" bearer JWT with a particular audience satisfies all three - as long as the channel and the software are properly protected from inspection/exfiltration. Native clients are often considered properly protected (even when they open themselves up to supply chain attacks by throwing unaudited third party libraries in); browser clients are a little less trustworthy due to web extensions and poor adoption of technologies like CSP.
A proof of possession JWT (or auxiliary mechanisms like DPoP) will also satisfy all three properties - as long as your client won't let its private key be exfiltrated.
It is when you can't have all three properties that you start looking at other risk mitigations, such as making a credential one time use (e.g. first-use-wins) when you can't trust it won't be known to attackers once sent, or limiting validity times under the assumption that the process of getting a new token is more secure.
Generally an extremely short lifetime is part of one-time-use/first-use-wins, because that policy requires the target resource to be stateful. Persisting every token ever received would be costly and have a latency impact. Policy compliance is an issue as well - it is far easier to just allow those tokens to be used multiple times, and non-compliance will only be discovered through negative testing. Five minutes is a common value here, and some software will reject a lifetime of over an hour because of the cost of enforcing the single use policy.
I haven't seen recommendations for single-digit-minute lifetimes for re-issuance of a multi-use bearer token though (such as for ongoing API access). Once you consider going below 10 minutes of validity there, you really want to reevaluate whatever your infrastructure requirements were that previously ruled out proof-of-possession (or whether your perceived level of risk-aversion is accurately represented in your budget).
> The desirable properties for tokens are that they have some means of verifying their integrity, that they are being sent by the authorized party, and that they are being consumed by the authorized recipient.
Those are desirable properties.
But session hijacking is a known problem. You have to expect a scenario where an attacker fishes one of your tokens and uses it on your behalf to access your things.
To mitigate that attack vector, you either use single-use tokens or short-lived tokens.
Also, clients are already expected to go through authentication flows to request tokens with specific sets of claims and/or scopes to perform specific requests.
Single-use tokens were expected to be the happy flow of bearer token schemes such as JWTs. That's how you eliminate a series of attack vectors.
> Generally an extremely short lifetime is part of one-time-use/first-use-wins, because that policy requires the target resource to be stateful.
Not quite.
Single-use tokens are stateful because resource servers need to track the IDs of tokens that have already been used. But "stateful" only means that you have to periodically refresh a list of IDs.
Short-lived tokens are stateless. A JWT features "issued at" time, "not before" time, and "expiration" time. Each JWT already specifies the time window when resource servers should deem it valid.
> Persisting every token ever received would be costly and have a latency impact.
No need. Per the JWT RFC (RFC 7519), JWTs support a JWT ID ("jti") claim. You only need to store the jti of a revoked token, not the whole token, and only for the window in which the token is valid.
> Policy compliance is an issue as well - it is far easier to just allow those tokens to be used multiple times, and non-compliance will only be discovered through negative testing.
I think "easier" is the only argument, and it's mostly about laziness.
Authentication flows already support emitting both access tokens and refresh tokens, and generating new tokens is a matter of sending a request with a refresh token.
Ironically, the "easy" argument boils down to arguing in favor of making session hijacking attacks easy to pull off. That's what developers do when they fish tokens from some source and send them around.
> I haven't seen recommendations for single-digit minute times for re-issuance of a multi-use bearer token though (such as for ongoing API access). Once you consider going below 10 minutes of validity there, you really want to reevaluate whatever your infrastructure requirements were that previously ruled out proof-of-possession (or whether your perceived level of risk-adversity is accurately represented in your budget)
This personal belief is not grounded in reality.
It's absurd to argue that clients having to make a request every 5 minutes is something that requires you to "reevaluate your infrastructure requirements". You're able to maintain infrastructure that handles all requests from clients, but you draw the line at sending a refresh request every few minutes?
It's also absurd to argue about proof-of-possession and other nonsense. The token validation process is the same: is the token signed? Can you confirm the signature is valid? Is the token expired or revoked? This is something your resource servers already do on each request. There are no extra requirements.
You're effectively talking about an attacker breaking HTTPS, aren't you? Unless you can detail another way to get at a user's token. I'm curious to hear about it.
I did, and XSS and session sniffing, both listed on the OWASP page, would be prevented by following OAuth flows. So that just leaves MITM, which, as I said, is effectively breaking HTTPS.
Each JWT was passed as a query param over a 307 redirect from my service to the other side, so the JWT itself was the whole request to prevent tampering from the browser. It was for an internal tool that did one thing, did it well, and never caused me any problems.
Back in the day I worked at a place that had HMAC signing on an http endpoint.
50% of the support issues were because people could not properly sign requests and it caused me to learn how to make http in all sorts of crap to help support them.
Easy to imagine that haha. That’s part of the reason I’d lean on a standard like JOSE and make signing happen automatically for users who prefer to use an SDK
> Unfortunately they represent a huge usability hit over API Keys for the average joe. Involving cryptography to sign a JWT per request makes an API significantly harder to consume with tools like Postman or CURL.
Just generate the JWT using, e.g. https://github.com/mike-engel/jwt-cli ? It’s different, and a little harder the first time, but not any kind of ongoing burden.
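For the curl crowd, minting a token really can be a few lines. A sketch using HS256 with Python's stdlib purely for illustration (real APIs typically want RS256/ES256, which needs a crypto library; the claims and secret here are placeholders):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_jwt(claims: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

# Paste the result into: curl -H "Authorization: Bearer $TOKEN" ...
token = mint_jwt({"sub": "me", "exp": int(time.time()) + 300}, b"shared-secret")
```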
IMO this is a tooling issue. You can make your SDK generate keys and even base64 encode them so they appear opaque to the uninitiated (like an API key)
Installing a dependency for myself is just a little harder the first time. Asking every developer who will ever consume my service over CURL to install a dependency is absolutely an ongoing burden.
I am pretty sure with the right tooling JWTs (or something similar) could be much easier to use and serve more needs/use cases than they tend to be used for today.
Even the very foundational libraries needed to create/sign/handle JWTs in many programming languages are kind of clunky. And I think subconsciously as developers when we encounter clunky (ie high accidental complexity) libraries/apis we sense that the overall project is kind of amateurish, or will take some trial and error to set up properly. Sometimes that's no big deal, but with auth you can't afford to risk your company or product on someone's side project.
For example, in Go, there is really only one major jwt implementation in use [0] and it's the side project of some guy with a day job working on protobufs [1,2]. Also, with all due respect to the contributors because it's a good library considering the level of support it has, it is just not easy to use or get started with.
Part of the problem is also that the JWT specification [3,4] is a bad mix of overly prescriptive and permissive regarding "claims". I actually think it needs to be replaced with something better because it's a serious problem: it adds a bunch of unnecessary fluff to deal with special claims like "email_verified" when that use case could easily just be treated like any other application-specific jwt payload data, AND it then adds a bunch of complexity because almost everything is optional.
Then of course there's the giant problem of handling your own private keys and identity/security infrastructure + all the associated risks. Nothing mature makes that easy, so everybody naturally prefers to delegate it to auth providers. But that tends to make it hard to fully leverage the underlying jwts (eg with custom claims) and might force you into an authorization model/impl that's less flexible than what JWTs actually support, because now you have to use your auth provider's apis.
I think there really needs to be some kind of all-in-one library or tool for developers to operate their own hmac/jwks/authn/authz safely with good enough defaults that typical use cases require ~no configuration. And maybe either a jwtv2 or second spec that strips out the junk from the jwt spec and prescribes basic conventions for authorization. That's actually the only realistic path to fully leveraging jwt for identity and authz, because you couldn't build something like that on top of auth providers' APIs since they're too restricted/disparate (plus the providers are incentivized to sneakily lock you in to their APIs and identity/authz).
Anyway, this is a project I've been toying with for about a year now and we have some funding/time at my company to start tackling it as an open source project. Hit me up if you're interested.
Interesting, so instead of OpenAI giving me an API key, I give them a public key, which they register. Sounds like what we already do with GitHub. I like it.
Which, unless I'm missing something, undercuts the entire article? The private key, in the generated keypair, is the thing that you can then never commit to your VCS.
When you "register" the public key with whatever the relying party is, you're also likely going to bind it to some form of identity, so you can't leak this private key to others, either. (And I'm curious, of course, how the relying party comes to trust the public key. That call would seem to require its own form of auth, though we can punt that same as it would be punted for an API key you might download.)
> Sorry, are you expecting some way to authenticate without any secrets?
I'm not. "It’s truly wild to me what some of y’all will tolerate." What, exactly, are we tolerating that is solved by asymmetric key pairs?
> The post is talking about simplifying things by eliminating all the back and forth. It’s not pretending to invent a secret-less auth system.
Well, then, I'm lost. What back & forth was eliminated?
In one system, we download an API key. In this system, we upload a public key. In both, we have a transfer; the direction doesn't really matter. Someone has to generate some secret somewhere, and bind it to the identity, which is what I was saying above, and is apparently the wildness that I'm tolerating.
Yes but when you have to do this 13 times, it gets really annoying to manage all those API keys. Especially if you need them in different processing contexts. If I could just have a single public/private key pair for my app it would simplify managing all the extra services I use.
> Visit our website. Create an account. Verify your email. Create a project. Add your credit card. Go to settings. Create an API key. Add it to your password manager. Drop it in your .env file. Download our SDK. Import it. Pass your env var in.
This is the pitch. But it seems like you fixated on the next part of the paragraph where it talks about api keys in version control.
I’ll agree with you in as much as this isn’t a massive change - but i like the approach for being an incremental improvement - and for challenging the status quo
In theory, I as the service provider know when my key database has been compromised. In theory. In practice, I will never know if a customer has been compromised; however, up to a point, a compromised user box can forward tokens to an attacker. So depending on whether you ever rotate the private keys, it's a matter of how long an attacker can retreat to a server they own to continue the attack.
In a way this reminds me a bit of SRP, which was an attempt to handle login without the server ever having your password. Which makes me think this is something to be integrated with password managers.
and it’s easy to do keypair generation in the browser using SubtleCrypto. That API doesn’t provide jwk generation, but it seems like it would be relatively easy to do even without the jose module. And the browser can “download” (really just save) the keypair using the Blob API. Keys need not leave the browser.
An api developer portal could just offer
- generate a new keypair?
- upload your existing public key?
…As a choice. Either way the public key gets registered for your account.
This is actually how GCP has always done service account authentication. A GCP service account key is an asymmetric keypair and Google stores the public key. AWS is somewhat similar, but they use a symmetric HMAC so they store the same secret key you use.
It's interesting to imagine taking the pubkey as identity concept to its full extents in situations like this, for example if you could create a cloud account, spin up resources, and authorize payment for them all programmatically without having to enter payment details on a form (because your keypair can authorize payment with the whatever payment method you use)
Even better: Imagine a world where you could just host your public keys on e.g. mydomain.com/.well-known/jwks.json, you register with a service provider with me@mydomain.com, then the service automatically pulls public keys from that. Then, all you have to do is sign new keys with an appropriate audience like aud:"serviceprovider.com".
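Sketching that hypothetical discovery flow (the URL convention mirrors OIDC-style .well-known paths, but nothing here is a real standard; the domains and claim values are made up):

```python
def jwks_url_for(identity: str) -> str:
    """Map an email-style identity to the key-discovery URL the service
    would fetch. Purely a convention for this thought experiment."""
    _, domain = identity.split("@", 1)
    return f"https://{domain}/.well-known/jwks.json"

def audience_ok(claims: dict, service: str) -> bool:
    """Require the token to be scoped to this service via the aud claim,
    which may be a single value or a list per RFC 7519."""
    aud = claims.get("aud")
    auds = aud if isinstance(aud, list) else [aud]
    return service in auds
```

The aud check is what stops a token you signed for serviceprovider.com from being replayed against some other service that trusts the same JWKS.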
And for the public email providers, a service like Gravatar could exist to host them for you.
I feel like I’m not understanding the target audience for this post: are there people/companies out there specifically paying other companies to be their key-holding party for JWT issuance purposes? I know about SSO providers of course, but that’s several layers of abstraction up.
(Maybe my confusion here is that these JWTs are being described as self-signed, as if there’s a JWK PKI cabal out there, like the bad old days of the Web PKI. There isn’t one that I know of!)
The key distinction I am getting at is: self-signed as in “signed with a self-issued key pair”, as opposed to using an API key/credential that has been issued to you
The model here feels not entirely dissimilar to Passkeys? Both are user provided auth tokens??
[Ed: allegations that the following is inaccurate! Probably checks out? Yes I meant the browser not the domain bound part, that seems solid.] Pity that Passkeys are so constrained in practice by browsers, that using them pretty much requires you trust the cloud providers absolutely with all your critical keys.
They're not constrained that way at all. The communication between browsers and various passkey-holding software and hardware is an open standard. There are open-source apps that can hold and sync passkeys. I don't know why everyone keeps repeating this obvious falsehood.
Not sure which way of constraint you're referring to, but WebAuthn credentials are bound to a domain via Relying Party ID.
There's a proposal for cross-domain usage via Related Origins, but that scheme depends on the authority of the relying party, meaning you can't say "I'd like to be represented by the same keypair across this set of unrelated domains"
> Pity that Passkeys are so constrained in practice by browsers, that using them pretty much requires you trust the cloud providers absolutely with all your critical keys.
Passkeys are not constrained so you have to trust cloud providers or anyone else with all your critical keys. The key is resident in whatever software or hardware you want to use, and anyone can create passkey software or hardware that will work with Chrome etc. I'm talking about (and I'm pretty sure the OP was referring to) the other side of WebAuthn: where the credentials surfaced to JavaScript via WebAuthn actually come from and how the browser relays requests that a challenge is signed.
Yeah, I am sort of a fan of Passkeys in principle, but they are domain bound (you can't use them across domains).
I wish there were something built into browsers that offered a scheme where your pubkey = your identity, but in short there are a lot of issues with that
This is already something in mainstream authentication applications you host yourself on your own domain. We use Keycloak. I don't know why anyone would install a JavaScript library to do this. It's not that difficult.
Fair. I assume you mean asymmetric key cryptography and not JWKs in particular? JOSE is a pretty good library if you need the latter and you’re already working in JS
is the author suggesting allowing the client to set their own claims and using that to auth whatever action they are going to take? I have to be misunderstanding what they are saying - that sounds fraught with risk
(Author here) The JWT signer should be the authority setting claims, so if your server is the authority and the client is untrusted, the server can provide the client a pre-signed JWT with the claims it needs, and the client can send that along with requests to the API.
But this scheme is flexible. You could also have the client send "requested" claims for the server to consider adding if allowed when getting a JWT.
You could also reverse-proxy client requests through your server, adding any claims the server allows.
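A sketch of that server-as-authority flow, where the server filters requested scopes against an allow-list before signing and hands the result to the untrusted client (HS256, the scope names, and the 5-minute lifetime are all illustrative):

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical set of scopes this server is willing to grant.
ALLOWED = {"read:docs", "write:comments"}

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def presign_for_client(requested_scopes: set[str], secret: bytes) -> str:
    """Grant only the intersection of requested and allowed scopes.
    The client never signs anything; it just forwards this token."""
    claims = {
        "scope": sorted(requested_scopes & ALLOWED),
        "exp": int(time.time()) + 300,
    }
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"
```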
On the B2B2C section, my mind immediately went to OAuth. For a developer like Bob giving his end users access to a service, wouldn't a standard OAuth flow where his users grant permission to his app would be the more conventional and secure solution?
It feels like that model handles key management, delegation, and revocation in a well-established way.
What am I missing here that makes this a better fit?
> What am I missing here that makes this a better fit?
From a cursory read, the answer is "it doesn't".
The blogger puts up a strawman argument to complain about secret management and downloading SDKs, but the blogger ends up presenting as a tradeoff the need to manage public and private keys, key generation on the client side, not to mention services having to do ad-hoc secret verification on each request.
This is already a very poor tradeoff, but to this we need to factor in the fact that this is a highly non-standard, ad-hoc auth mechanism.
I recall that OAuth1 had a token generation flow that was similar in the way clients could generate requests on the fly with nonces and client keys. It sucked.
Spot on. The burden and complexity of that cryptographic signing on the client is exactly what OAuth2 was created to avoid. Thanks for making that connection.
> Visit our website. Create an account. Verify your email. Create a project. Add your credit card. Go to settings. Create an API key. Add it to your password manager. Drop it in your .env file. Download our SDK. Import it. Pass your env var in. Never share your API key. Make sure you never commit it to source control.
None of this "BS" actually goes away with self-signed JWTs, right? Just replace mentions of "API Key" with public/private key and it's otherwise a similar process I think.
1. With self-signed JWTs, you could start consuming APIs with free tiers immediately, without first visiting a site and signing up. (I could see this pattern getting traction as it helps remove friction, especially if you want to be able to ask an LLM to use some API).
2. Compare this scheme to something like the Firebase SDK, where there's a separate server-side "admin" sdk. With self-signed JWTs, you just move privileged op invocations to claims – consuming the API is identical whether from the client or server.
3. The authority model is flexible. As long as the logical owner of the resource being accessed is the one signing JWTs, you're good. A database service I'm working on embeds playgrounds into the docs site that use client-generated JWKs to access client-owned DB instances.
Yeah that's not happening. In fact most services with free tiers still ask for a credit card number, and if not still ask for a lot of information. It is a marketing scheme after all.
For web dev, where, sadly, it's the norm to have about 13 different services for a website, it would greatly simplify having to herd 13 API keys around
> you could start consuming APIs with free tiers immediately, without first visiting a site and signing up
I’ve yet to see a website that provides an API and doesn’t have a ToS that you have to agree to. Unless you control both parties, or you expose your service only to pre-vetted customers, there is no legal department that is going to allow this.
you put, as part of the claims in the JWT, that you agree to the ToS (maybe something like { ... TOS: "www.service.com/tos.txt", TOSAgreed: true ... }), which you sign. Then this is an explicit agreement from you as a client.
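Concretely, such an agreement claim set might look like the following (the claim names are illustrative, not from any spec; the signature over the payload is what makes the agreement attributable):

```python
import json

# Hypothetical claims recording explicit ToS agreement inside the payload
# that gets signed. Because the client signs these claims, the service has
# an attributable record of the agreement.
claims = {
    "sub": "client-123",
    "tos": "https://www.service.com/tos.txt",
    "tos_agreed": True,
}
payload = json.dumps(claims)
```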
Correct, because to fully leverage self-signed (or distributed or decentralized or whatever) JWTs you have to handle identity in a different way than typical consumer auth works on the Internet today.
Right now, user identity is ~based on email, which is in practice very often delegated to a third party like gmail, because most people do not host their own email. So identity = foo@bobsemailservice.cool. To properly validate self-signed JWTs you'd have to instead either host or delegate distinct JWKS endpoints for each identity. Then identity = https://bobsjwkshost.fun/foo. You still have to create and verify an account through a third party, otherwise, your self-signed JWT is just as credible as a self-signed TLS cert.
In both cases the underlying identity model is actually based on DNS, IP, and internet domains - whoever controls those for your email/JWKS controls what gets served by those addresses. So to fully self-host email or JWKS you need to own your own domain and point email/http to IP addresses or other domains. And you need to have something servery running under that IP to handle email/https. That's a big hurdle to setup as is, but even so, all this really does is kick up identity delegation one more notch to the IANA, who decide who owns what domains and IP addresses, by delegating them to registrars and regional Internet registries, with further delegation to AS to actually dole out IP addresses to proles.
I recently started the process of becoming a registrar and securing my own IP addresses and realized THERE ARE STILL MORE LAYERS. Now I have to pay a whole bunch of money and fill out paperwork to prove my business' legal legitimacy and get the stupid magic Internet numbers we call IP addresses. Which then relies on my country's legal and financial systems... And by the way, this part is just completely infeasible for a regular user to actually follow through with (costs are high five digits USD or low six digits USD, IIRC, takes a ton of time).
Because the IANA is a stodgy bureaucratic Important Institution That I Cannot Change, IMO the best way to implement self-hosted auth is by making it as cheap and simple as possible to register a domain and serve email/jwks on it. If we pay them even more money they'll give us a TLD which we should be able to make available for $0.20 (IANA base registration fee because of course) or just eat the cost and make it free pending some other verification process. And then we can set up a portable "serverless" JWKS/SMTP handling tool that people can run.
I've been thinking about this self-hosted identity/auth problem quite a lot and I think there's no ideal solution. If you're fully decentralized and trustless you're probably using "blockchain" and won't have enough throughput, will lock people out of everything online if they lose a key, and will still probably make consumers pay for it to stave off sybil attacks. Also, to use the internet you have to trust the IANA anyway. So just upgrade every authenticated user into their own identity provider and make it extremely cheap and easy to operate, and at long last you can get those precious seconds back by signing up for Internet services with a JWKS path instead of an email address.
This article uses "ES256" for the alg, GitHub uses "RS256" as their alg and a very deranged few use "none".
The point here is this article is giving the developer lots of rope to hang themselves with the JOSE standard on JWT/K/S and it is a sure way to implement it incorrectly and have lots of security issues.
PASETO is a much better alternative to work with: https://paseto.io with none of the downsides of the JOSE standard.
Yes you can create unsigned JWTs. Don't do that and don't accept any such tokens as valid (which would be the even bigger facepalm worthy mistake).
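The corresponding defensive check is small: the verifier pins the algorithms it accepts and rejects everything else, including "none", before any signature work happens. A sketch (header parsing only; full signature verification would follow):

```python
import base64
import json

def parse_header(token: str) -> dict:
    """Decode the JOSE header (first dot-separated segment) of a JWT."""
    part = token.split(".")[0]
    padded = part + "=" * (-len(part) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

def require_allowed_alg(token: str,
                        allowed_algs: frozenset = frozenset({"ES256", "RS256"})) -> None:
    """Refuse tokens whose header names 'none' or any alg outside an explicit
    allow-list: the verifier, not the token, chooses the algorithm."""
    alg = parse_header(token).get("alg")
    if alg not in allowed_algs:
        raise ValueError(f"disallowed alg: {alg}")
```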
Just do it right (and at this point it is widely documented what the pitfalls are here), comply with the widely used and commonly supported standards, and follow the principle of the least amount of surprise. Which is kind of important in a world where things need to be cross integrated with each other and where JWTs, JOSE, and associated standards like OpenID connect are basically used by world+dog in a way that is perfectly secure and 100% free of these issues.
Honestly, it's not that hard.
The paradox with Paseto is that if you are smart enough to know what problem it fixes, you shouldn't be having that problem and also be smart enough to know that using "none" as an algorithm is a spectacularly bad idea. You shouldn't need Paseto to fix it if you somehow did anyway. And of course you shouldn't be dealing with the security layer in your product at all if that is at all confusing to you.
I _love_ JWTs for API authentication - one of the nicest APIs I ever consumed was essentially JSON RPC over JWTs. Unfortunately they represent a huge usability hit over API Keys for the average joe. Involving cryptography to sign a JWT per request makes an API significantly harder to consume with tools like Postman or CURL. You can no longer have nice click-to-copy snippets in your public docs. You either have an SDK ready to go in your customer's language or ecosystem of choice, or you're asking them to write a bunch of scary security-adjacent code just to get to their first successful request. No, I don't have a JWT library recommendation for Erlang, sorry.
Not that an API couldn't support both API Keys and JWT based authentication, but one is a very established and well understood pattern and one is not. Lowest common denominator API designs are hard to shake.
> sign a JWT per request
That's interesting - why do it this way rather than including a "reusable" signed JWT with the request, like an API token? Why sign the whole request? What does that give you?
Also what made that API so nice? Was this a significant part of it?
> That's interesting - why do it this way rather than including a "reusable" signed JWT with the request, like an API token? Why sign the whole request?
Supposedly bearer tokens should be ephemeral, which means either short-lived (say single-digit minutes) or one-time use.
This was supposed to be the way bearer tokens were supposed to be used.
> What does that give you?
Security.
https://en.wikipedia.org/wiki/Session_hijacking
> Supposedly bearer tokens should be ephemeral, which means either short-lived (say single-digit minutes) or one-time use.
The desirable properties for tokens is that they have some means of verifying their integrity, that they are being sent by the authorized party, and that they are being consumed by the authorized recipient.
A "reusable" bearer JWT with a particular audience satisfies all three - as long as the channel and the software are properly protected from inspection/exfiltration. Native clients are often considered properly protected (even when they open themselves up to supply chain attacks by throwing unaudited third party libraries in); browser clients are a little less trustworthy due to web extensions and poor adoption of technologies like CSP.
A proof of possession JWT (or auxiliary mechanisms like DPoP) will also satisfy all three properties - as long as your client won't let its private key be exfiltrated.
It is when you can't have all three properties that you start looking at other risk mitigations, such as making a credential one time use (e.g. first-use-wins) when you can't trust it won't be known to attackers once sent, or limiting validity times under the assumption that the process of getting a new token is more secure.
Generally an extremely short lifetime is part of one-time-use/first-use-wins, because that policy requires the target resource to be stateful. Persisting every token ever received would be costly and have a latency impact. Policy compliance is an issue as well - it is far easier to just allow those tokens to be used multiple times, and non-compliance will only be discovered through negative testing. Five minutes is a common value here, and some software will reject a lifetime of over an hour because of the cost of enforcing the single use policy.
I haven't seen recommendations for single-digit minute times for re-issuance of a multi-use bearer token though (such as for ongoing API access). Once you consider going below 10 minutes of validity there, you really want to reevaluate whatever your infrastructure requirements were that previously ruled out proof-of-possession (or whether your perceived level of risk-adversity is accurately represented in your budget)
> The desirable properties for tokens is that they have some means of verifying their integrity, that they are being sent by the authorized party, and that they are being consumed by the authorized recipient.
Those are desirable properties.
But session hijacking is a known problem. You have to expect a scenario where an attacker fishes one of your tokens and uses it on your behalf to access your things.
To mitigate that attack vector, you either use single-use tokens or short-lived tokens.
Also, clients are already expected to go through authentication flows to request tokens with specific sets of claims and/or scopes to perform specific requests.
Single-use tokens were expected to be the happy flow of bearer token schemes such as JWTs. That's how you eliminate a series of attack vectors.
> Generally an extremely short lifetime is part of one-time-use/first-use-wins, because that policy requires the target resource to be stateful.
Not quite.
Single-use tokens are stateful because resource servers need to track the IDs of tokens that have already been used (effectively revoking them). But "stateful" here only means that you have to periodically refresh a list of IDs.
Short-lived tokens are stateless. A JWT features "issued at" time, "not before" time, and "expiration" time. Each JWT already specifies the time window when resource servers should deem it valid.
> Persisting every token ever received would be costly and have a latency impact.
No need. Per the JWT RFC (RFC 7519), JWTs support the JWT ID (`jti`) claim. You only need to store the `jti` of a revoked token, not the whole token, and only for the time window during which it's valid.
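A sketch of what that `jti` bookkeeping can look like (names are illustrative, and an in-memory `Map` stands in for whatever shared cache a real resource server would use):

```javascript
// First-use-wins enforcement that stores only jti -> exp, and only while
// the token is still inside its validity window.
const seen = new Map(); // jti -> exp (seconds since epoch)

// Safe to forget expired entries: those tokens are rejected by `exp` anyway.
function pruneExpired(now = Math.floor(Date.now() / 1000)) {
  for (const [jti, exp] of seen) {
    if (exp <= now) seen.delete(jti);
  }
}

// Returns true the first time a (jti, exp, nbf) is presented inside its
// validity window; false on replay or on an expired/not-yet-valid token.
function acceptOnce({ jti, exp, nbf = 0 }, now = Math.floor(Date.now() / 1000)) {
  if (now >= exp || now < nbf) return false; // stateless time-window check
  if (seen.has(jti)) return false;           // first-use-wins
  seen.set(jti, exp);
  return true;
}
```

The state per token is a short string and an integer, pruned as tokens expire, which is why short lifetimes and single-use policies pair naturally.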
> Policy compliance is an issue as well - it is far easier to just allow those tokens to be used multiple times, and non-compliance will only be discovered through negative testing.
I think "easier" is the only argument, and it's mostly about laziness.
Authentication flows already support emitting both access tokens and refresh tokens, and generating new tokens is a matter of sending a request with a refresh token.
Ironically, the "easy" argument boils down to arguing in favor of making session hijacking attacks easy to pull off. That's what developers do when they fish tokens out of some source and send them around.
> I haven't seen recommendations for single-digit-minute validity on re-issuance of a multi-use bearer token, though (such as for ongoing API access). Once you consider going below 10 minutes of validity there, you really want to reevaluate whatever infrastructure requirements previously ruled out proof-of-possession (or whether your perceived level of risk aversion is accurately represented in your budget).
This personal belief is not grounded in reality.
It's absurd to argue that clients having to make a request every 5 minutes is something that requires you to "reevaluate your infrastructure requirements". You're able to maintain infrastructure that handles all requests from clients, but you draw the line at sending a refresh request every few minutes?
It's also absurd to argue about proof-of-possession and other nonsense. The token validation process is the same: is the token signed? Can you confirm the signature is valid? Is the token expired/revoked? This is something your resource servers already do on each request. There are no extra requirements.
>But session hijacking is a known problem.
You're effectively talking about an attacker breaking https aren't you? Unless you can detail another way to get at a user's token. I'm curious to hear about it.
> You're effectively talking about an attacker breaking https aren't you?
No. There are many ways to fish bearer tokens. Encryption in transit only addresses some of them.
I'm all ears, please provide one potential way.
> I'm all ears, please provide one potential way.
Just Google for session hijacking attacks. There's a wealth of information on the topic. It's a regular entry in OWASP top 10.
I did, and the vectors listed on the OWASP page (XSS and session sniffing) would be prevented by following OAuth flows. So that just leaves MITM, which, as I said, is effectively breaking HTTPS.
Each JWT was passed as a query param over a 307 redirect from my service to the other side, so the JWT itself was the whole request to prevent tampering from the browser. It was for an internal tool that did one thing, did it well, and never caused me any problems.
Back in the day I worked at a place that had HMAC signing on an http endpoint.
50% of the support issues were because people could not properly sign requests, and it caused me to learn how to make HTTP requests in all sorts of crap to help support them.
Easy to imagine that haha. That’s part of the reason I’d lean on a standard like JOSE and make signing happen automatically for users who prefer to use an SDK
> Unfortunately they represent a huge usability hit over API Keys for the average joe. Involving cryptography to sign a JWT per request makes an API significantly harder to consume with tools like Postman or CURL.
Just generate the JWT using, e.g. https://github.com/mike-engel/jwt-cli ? It’s different, and a little harder the first time, but not any kind of ongoing burden.
You can even get Postman to generate them for you: https://learning.postman.com/docs/sending-requests/authoriza..., although I have not bothered with this personally.
IMO this is a tooling issue. You can make your SDK generate keys and even base64 encode them so they appear opaque to the uninitiated (like an API key)
Installing a dependency for myself is just a little harder the first time. Asking every developer who will ever consume my service over CURL to install a dependency is absolutely an ongoing burden.
Instead do AWS SIGV4 request signing! It's built into curl these days.
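For reference, this is roughly what that looks like (a CLI fragment, not runnable as-is: the region, service, and endpoint are placeholders, and `--aws-sigv4` needs curl 7.75 or newer):

```shell
# curl computes the SigV4 signature itself; no SDK or signing script needed.
curl --aws-sigv4 "aws:amz:us-east-1:execute-api" \
     --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
     "https://example.execute-api.us-east-1.amazonaws.com/prod/hello"
```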
I am pretty sure with the right tooling JWTs (or something similar) could be much easier to use and serve more needs/use cases than they tend to be used for today.
Even the very foundational libraries needed to create/sign/handle JWTs in many programming languages are kind of clunky. And I think subconsciously as developers when we encounter clunky (ie high accidental complexity) libraries/apis we sense that the overall project is kind of amateurish, or will take some trial and error to set up properly. Sometimes that's no big deal, but with auth you can't afford to risk your company or product on someone's side project.
For example, in Go, there is really only one major jwt implementation in use [0] and it's the side project of some guy with a day job working on protobufs [1,2]. Also, with all due respect to the contributors because it's a good library considering the level of support it has, it is just not easy to use or get started with.
Part of the problem is also that the JWT specification [3,4] is a bad mix of overly prescriptive and permissive regarding "claims". I actually think it needs to be replaced with something better because it's a serious problem: it adds a bunch of unnecessary fluff to deal with special claims like "email_verified" when that use case could easily just be treated like any other application-specific jwt payload data, AND it then adds a bunch of complexity because almost everything is optional.
Then of course there's the giant problem of handling your own private keys and identity/security infrastructure + all the associated risks. Nothing mature makes that easy, so everybody naturally prefers to delegate it to auth providers. But that tends to make it hard to fully leverage the underlying jwts (eg with custom claims) and might force you into an authorization model/impl that's less flexible than what JWTs actually support, because now you have to use your auth provider's apis.
I think there really needs to be some kind of all-in-one library or tool for developers to operate their own hmac/jwks/authn/authz safely with good enough defaults that typical use cases require ~no configuration. And maybe either a jwtv2 or second spec that strips out the junk from the jwt spec and prescribes basic conventions for authorization. That's actually the only realistic path to fully leveraging jwt for identity and authz, because you couldn't build something like that on top of auth providers' APIs since they're too restricted/disparate (plus the providers are incentivized to sneakily lock you in to their APIs and identity/authz).
Anyway, this is a project I've been toying with for about a year now and we have some funding/time at my company to start tackling it as an open source project. Hit me up if you're interested.
[0] https://github.com/golang-jwt/jwt
[1] https://github.com/golang-jwt
[2] https://mfridman.com/
[3] https://datatracker.ietf.org/doc/html/rfc7519#section-4.1
[4] https://www.iana.org/assignments/jwt/jwt.xhtml
Interesting, so instead of OpenAI giving me an API key, I give them a public key, which they register. Sounds like what we already do with GitHub. I like it.
Which, unless I'm missing something, undercuts the entire article? The private key, in the generated keypair, is the thing that you can then never commit to your VCS.
When you "register" the public key with whatever the relying party is, you're also likely going to bind it to some form of identity, so you can't leak this private key to others, either. (And I'm curious, of course, how the relying party comes to trust the public key. That call would seem to require its own form of auth, though we can punt that same as it would be punted for an API key you might download.)
Sorry, are you expecting some way to authenticate without any secrets?
Could you describe how that would work? If two people have the same info, how on earth do you tell which is which?
The post is talking about simplifying things by eliminating all the back and forth. It’s not pretending to invent a secret-less auth system.
> Sorry, are you expecting some way to authenticate without any secrets?
I'm not. "It’s truly wild to me what some of y’all will tolerate." What, exactly, are we tolerating that is solved by asymmetric key pairs?
> The post is talking about simplifying things by eliminating all the back and forth. It’s not pretending to invent a secret-less auth system.
Well, then, I'm lost. What back & forth was eliminated?
In one system, we download an API key. In this system, we upload a public key. In both, we have a transfer; the direction doesn't really matter. Someone has to generate some secret somewhere, and bind it to the identity, which is what I was saying above, and is apparently the wildness that I'm tolerating.
Yes but when you have to do this 13 times, it gets really annoying to manage all those API keys. Especially if you need them in different processing contexts. If I could just have a single public/private key pair for my app it would simplify managing all the extra services I use.
> Visit our website. Create an account. Verify your email. Create a project. Add your credit card. Go to settings. Create an API key. Add it to your password manager. Drop it in your .env file. Download our SDK. Import it. Pass your env var in.
This is the pitch. But it seems like you fixated on the next part of the paragraph where it talks about api keys in version control.
I'll agree with you inasmuch as this isn't a massive change, but I like the approach for being an incremental improvement and for challenging the status quo.
In theory, I as the service provider know when my key database has been compromised. In theory. In practice, I will never know if a customer has been compromised; however, up to a point, a compromised user box can forward tokens to an attacker. So depending on whether you ever rotate the private keys, it's a matter of how long an attacker can retreat to a server they own to continue the attack.
In a way this reminds me a bit of SRP, which was an attempt to handle login without the server ever having your password. Which makes me think this is something to be integrated with password managers.
Yes.
and it's easy to do keypair generation in the browser using subtle crypto. That API doesn't hand you a JWK directly, but exporting one seems relatively easy to do even without the jose module. And the browser can "download" (really just save) the keypair using the Blob API. Keys need not leave the browser.
An API developer portal could just offer, as a choice:
- generate a new keypair, or
- upload your existing public key.
Either way the public key gets registered for your account.
The end. Easy.
This is actually how GCP has always done service account authentication. A GCP service account key is an asymmetric keypair, and Google stores the public key. AWS is somewhat similar, but they use a symmetric HMAC, so they store the same secret key you use.
It's interesting to imagine taking the pubkey-as-identity concept to its full extent in situations like this: for example, if you could create a cloud account, spin up resources, and authorize payment for them all programmatically, without having to enter payment details on a form (because your keypair can authorize payment with whatever payment method you use).
Even better if they would take a private CA cert.
Even better: Imagine a world where you could just host your public keys on e.g. mydomain.com/.well-known/jwks.json, you register with a service provider with me@mydomain.com, then the service automatically pulls public keys from that. Then, all you have to do is sign new keys with an appropriate audience like aud:"serviceprovider.com".
And for the public email providers, a service like Gravatar could exist to host them for you.
Wouldn't that be nice.
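Once a service has pulled that jwks.json, picking the verification key is just matching the token header's `kid`. A sketch with an illustrative RFC 7517-shaped document (key material elided, names made up):

```javascript
// Select a signing key from a JWKS by key ID, skipping encryption-only keys.
function findJwk(jwks, kid) {
  return (jwks.keys || []).find((k) => k.kid === kid && k.use !== 'enc') || null;
}

// What mydomain.com/.well-known/jwks.json might contain (x/y elided).
const jwks = {
  keys: [
    { kty: 'EC', crv: 'P-256', kid: 'key-2024', x: '...', y: '...' },
    { kty: 'EC', crv: 'P-256', kid: 'key-2025', x: '...', y: '...' },
  ],
};

console.log(findJwk(jwks, 'key-2025').kid); // matches the second key
```

Publishing multiple keys under distinct `kid`s is also what makes rotation painless: add the new key, switch signing over, drop the old one later.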
This is similar to ssh key auth. (Pubkey, privkey)
Github has a cool little article on making JWTs for their API. Very useful!
https://docs.github.com/en/apps/creating-github-apps/authent...
The JWT website is also super useful https://www.jwt.io/
I feel like I’m not understanding the target audience for this post: are there people/companies out there specifically paying other companies to be their key-holding party for JWT issuance purposes? I know about SSO providers of course, but that’s several layers of abstraction up.
(Maybe my confusion here is that these JWTs are being described as self-signed, as if there’s a JWK PKI cabal out there, like the bad old days of the Web PKI. There isn’t one that I know of!)
The key distinction I am getting at is: self-signed as in “signed with a self-issued key pair”, as opposed to using an API key/credential that has been issued to you
The model here feels not entirely dissimilar to Passkeys? Both are user provided auth tokens??
[Ed: allegations that the following is inaccurate! Probably checks out? Yes I meant the browser not the domain bound part, that seems solid.] Pity that Passkeys are so constrained in practice by browsers, that using them pretty much requires you trust the cloud providers absolutely with all your critical keys.
They're not constrained that way at all. The communication between browsers and various passkey-holding software and hardware is an open standard. There are open-source apps that can hold and sync passkeys. I don't know why everyone keeps repeating this obvious falsehood.
Not sure which way of constraint you're referring to, but WebAuthn credentials are bound to a domain via Relying Party ID.
There's a proposal for cross-domain usage via Related Origins, but that scheme depends on the authority of the relying party, meaning you can't say "I'd like to be represented by the same keypair across this set of unrelated domains"
I was referring to this:
> Pity that Passkeys are so constrained in practice by browsers, that using them pretty much requires you trust the cloud providers absolutely with all your critical keys.
Passkeys are not constrained so you have to trust cloud providers or anyone else with all your critical keys. The key is resident in whatever software or hardware you want to use, and anyone can create passkey software or hardware that will work with Chrome etc. I'm talking about (and I'm pretty sure the OP was referring to) the other side of WebAuthn: where the credentials surfaced to JavaScript via WebAuthn actually come from and how the browser relays requests that a challenge is signed.
Ah, yes I agree
Yeah, I am sort of a fan of Passkeys in principle, but they are domain bound (you can't use them across domains).
I wish there were something built into browsers that offered a scheme where your pubkey = your identity, but in short there are a lot of issues with that
This is already something in mainstream authentication applications you host yourself on your own domain. We use Keycloak. I don't know why anyone would install a JavaScript library to do this. It's not that difficult.
I wish someone would have used keycloak at my place. They decided to write it all by hand instead.
Fair. I assume you mean asymmetric key cryptography and not JWKs in particular? JOSE is a pretty good library if you need the latter and you’re already working in JS
> Fair. I assume you mean asymmetric key cryptography and not JWKs in particular?
There's some degree of confusion in your comment. JWK is a standard format for representing cryptographic keys; it stands for JSON Web Key (a JWKS is a JSON Web Key Set).
> JOSE is a pretty good library (...)
JOSE is a set of standards that form a framework to securely transfer claims.
We’re using JWKs.
Ah, and just the subtle crypto API to generate keys? Or are you not generating them on the client?
is the author suggesting allowing the client to set their own claims and using that to auth whatever action they are going to take? I have to be misunderstanding what they are saying - that sounds fraught with risk
(Author here) The JWT signer should be the authority setting claims, so if your server is the authority and the client is untrusted, the server can provide the client a pre-signed JWT with the claims it needs, and the client can send that along with requests to the API.
But this scheme is flexible. You could also have the client send "requested" claims for the server to consider adding if allowed when getting a JWT.
You could also reverse-proxy client requests through your server, adding any claims the server allows.
In some apps, the client may be the signing authority (e.g. it owns the resource it's accessing).
In that case, the client can possess the JWK keypair and do its own signing.
Some engineers forget the secret/salt part of generating the JWT. Sometimes you can just pack some claims in there, encode it, and it works!!
On the B2B2C section, my mind immediately went to OAuth. For a developer like Bob giving his end users access to a service, wouldn't a standard OAuth flow, where his users grant permission to his app, be the more conventional and secure solution?
It feels like that model handles key management, delegation, and revocation in a well-established way.
What am I missing here that makes this a better fit?
> What am I missing here that makes this a better fit?
From a cursory read, the answer is "it doesn't".
The blogger puts up a strawman argument to complain about secret management and downloading SDKs, but then ends up presenting as a tradeoff the need to manage public and private keys, key generation on the client side, and services having to do ad-hoc signature verification on each request.
This is already a very poor tradeoff, but to this we need to factor in the fact that this is a highly non-standard, ad-hoc auth mechanism.
I recall that OAuth1 had a token generation flow that was similar in the way clients could generate requests on the fly with nonces and client keys. It sucked.
Spot on. The burden and complexity of that cryptographic signing on the client is exactly what OAuth2 was created to avoid. Thanks for making that connection.
> Visit our website. Create an account. Verify your email. Create a project. Add your credit card. Go to settings. Create an API key. Add it to your password manager. Drop it in your .env file. Download our SDK. Import it. Pass your env var in. Never share your API key. Make sure you never commit it to source control.
None of this "BS" actually goes away with self-signed JWTs, right? Just replace mentions of "API Key" with public/private key and it's otherwise a similar process I think.
The things that change are:
1. With self-signed JWTs, you could start consuming APIs with free tiers immediately, without first visiting a site and signing up. (I could see this pattern getting traction as it helps remove friction, especially if you want to be able to ask an LLM to use some API).
2. Compare this scheme to something like the Firebase SDK, where there's a separate server-side "admin" sdk. With self-signed JWTs, you just move privileged op invocations to claims – consuming the API is identical whether from the client or server.
3. The authority model is flexible. As long as the logical owner of the resource being accessed is the one signing JWTs, you're good. A database service I'm working on embeds playgrounds into the docs site that use client-generated JWKs to access client-owned DB instances.
The problem I see with (1) is that it becomes a little bit too easy to regenerate public keys and circumvent free tier metering.
Yeah that's not happening. In fact most services with free tiers still ask for a credit card number, and if not still ask for a lot of information. It is a marketing scheme after all.
I guess that's easily addressed by requiring an account and a public key to access the free tier. Still better than having to get yet another API key.
Same difference to most people and dead on arrival.
For web dev, where, sadly, it's the norm to have about 13 different services for a website, it would greatly simplify having to herd 13 API keys around
For sure. Would likely need to be combined with another mechanism like IP rate limits
I assure you it's far too easy to get as many ip addresses as you want if your interest is in avoiding rate limits.
Valid
> you could start consuming APIs with free tiers immediately, without first visiting a site and signing up
I've yet to see a website that provides an API and doesn't have a ToS that you have to agree to. Unless you control both parties, or you expose your service only to pre-vetted customers, there is no legal department that is going to allow this.
You put, as part of the claims in the JWT, a statement that you agree to the ToS (maybe something like { ... "tos": "www.service.com/tos.txt", "tosAgreed": true ... }), which you sign. Then this is an explicit agreement from you as a client.
Correct, because to fully leverage self-signed (or distributed or decentralized or whatever) JWTs you have to handle identity in a different way than typical consumer auth works on the Internet today.
Right now, user identity is ~based on email, which is in practice very often delegated to a third party like gmail, because most people do not host their own email. So identity = foo@bobsemailservice.cool. To properly validate self-signed JWTs you'd instead have to either host or delegate distinct JWKS endpoints for each identity. Then identity = https://bobsjwkshost.fun/foo. You still have to create and verify an account through a third party; otherwise, your self-signed JWT is just as credible as a self-signed TLS cert.
In both cases the underlying identity model is actually based on DNS, IP, and internet domains - whoever controls those for your email/JWKS controls what gets served at those addresses. So to fully self-host email or JWKS you need to own your own domain and point email/http to IP addresses or other domains. And you need to have something server-y running under that IP to handle email/https. That's a big hurdle to set up as is, but even so, all this really does is kick identity delegation up one more notch to the IANA, who decide who owns what domains and IP addresses, by delegating them to registrars and regional Internet registries, with further delegation to ASes to actually dole out IP addresses to proles.
I recently started the process of becoming a registrar and securing my own IP addresses and realized THERE ARE STILL MORE LAYERS. Now I have to pay a whole bunch of money and fill out paperwork to prove my business' legal legitimacy and get the stupid magic Internet numbers we call IP addresses. Which then relies on my country's legal and financial systems... And by the way, this part is just completely infeasible for a regular user to actually follow through with (costs are high five digits USD or low six digits USD, IIRC, takes a ton of time).
Because the IANA is a stodgy bureaucratic Important Institution That I Cannot Change, IMO the best way to implement self-hosted auth is by making it as cheap and simple as possible to register a domain and serve email/jwks on it. If we pay them even more money they'll give us a TLD which we should be able to make available for $0.20 (IANA base registration fee because of course) or just eat the cost and make it free pending some other verification process. And then we can set up a portable "serverless" JWKS/SMTP handling tool that people can run.
I've been thinking about this self-hosted identity/auth problem quite a lot and I think there's no ideal solution. If you're fully decentralized and trustless you're probably using "blockchain" and won't have enough throughput, will lock people out of everything online if they lose a key, and will still probably make consumers pay for it to stave off sybil attacks. Also, to use the internet you have to trust the IANA anyway. So just upgrade every authenticated user into their own identity provider and make it extremely cheap and easy to operate, and at long last you can get those precious seconds back by signing up for Internet services with a JWKS path instead of an email address.
This article uses "ES256" for the alg, GitHub uses "RS256" as their alg and a very deranged few use "none".
The point here is this article is giving the developer lots of rope to hang themselves with the JOSE standard on JWT/K/S and it is a sure way to implement it incorrectly and have lots of security issues.
PASETO is a much better alternative to work with: https://paseto.io with none of the downsides of the JOSE standard.
Yes you can create unsigned JWTs. Don't do that and don't accept any such tokens as valid (which would be the even bigger facepalm worthy mistake).
Just do it right (and at this point the pitfalls are widely documented), comply with the widely used and commonly supported standards, and follow the principle of least surprise. Which is kind of important in a world where things need to be cross-integrated with each other and where JWTs, JOSE, and associated standards like OpenID Connect are basically used by world+dog in a way that is perfectly secure and 100% free of these issues.
Honestly, it's not that hard.
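One of those widely documented pitfalls is algorithm pinning: check the header against the one alg you actually issue before doing anything else. A sketch (helper name invented):

```javascript
// Reject any token whose header doesn't carry the pinned algorithm. This is
// the standard defense against "alg": "none" and RS/HS key-confusion tricks.
function assertPinnedAlg(token, expected = 'ES256') {
  const headerJson = Buffer.from(token.split('.')[0], 'base64url').toString();
  const header = JSON.parse(headerJson);
  if (header.alg !== expected) {
    throw new Error(`rejected token with alg=${header.alg}; only ${expected} is accepted`);
  }
  return header;
}
```

The key point is that the verifier decides the algorithm, never the token.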
The paradox with Paseto is that if you are smart enough to know what problem it fixes, you shouldn't be having that problem and also be smart enough to know that using "none" as an algorithm is a spectacularly bad idea. You shouldn't need Paseto to fix it if you somehow did anyway. And of course you shouldn't be dealing with the security layer in your product at all if that is at all confusing to you.
Haven't heard of PASETO, but I'll check it out. I'd say JOSE is an implementation detail of what I'm advocating for, so very open to alternatives.
JWTs and JOSE have a bad reputation for footguns and ignoring modern cryptographic principles.
PASETO is the “mostly fixed” version of JWTs, but if you’re looking for something with more features, biscuits are quite interesting:
https://www.biscuitsec.org
What's the drawback? Surely there must be a catch somewhere, right? If it's that easy, everyone would love to use this.
That site is blocked by Fortinet as "pornography."
Modern day AV software:
Did you contact Fortinet since you're the one that apparently utilizes them?
Bummer. Not sure what I can do about that, but I assure you it is not pornography!
Sounds like someone must have gotten their "graphies" mixed up