Open Redirect Vulnerability in KeystoneJS

Author Marc Wickenden

Date 12 September 2018

This post is about an interesting security issue I found in KeystoneJS, the Node.js/Express based content management framework.

It’s a simple open redirect weakness in the sign-in page of version 4, which is currently in beta but widely deployed out on the Internet. It was interesting to me for two reasons, which I shall explain after I’ve discussed the bug.

The bug

An open redirect simply means a user has control over the target of an HTTP redirect in the application. The most common place to find one, as in this example, is a login page. It’s very typical behaviour for an unauthenticated user to request a page which requires authentication, for example an admin dashboard, and be forwarded to the login page to authenticate. Once authenticated, the application redirects them back to their original target URL to provide a nicer user experience.
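To make that flow concrete, here is a minimal sketch of the remember-then-redirect pattern in plain JavaScript. The helper names are hypothetical, not KeystoneJS’s actual implementation:

```javascript
// Sketch of the typical login-redirect pattern (hypothetical helpers,
// not KeystoneJS's actual code).

// Unauthenticated request: remember the page the user asked for.
function buildSigninUrl(requestedPath) {
  return '/signin?from=' + encodeURIComponent(requestedPath);
}

// Successful login: send the user back to their original target.
// A naive version trusts `from` completely - which is exactly where
// open redirect bugs creep in.
function postLoginTarget(query) {
  return query.from || '/';
}

console.log(buildSigninUrl('/admin/dashboard'));
// -> /signin?from=%2Fadmin%2Fdashboard
console.log(postLoginTarget({ from: '/admin/dashboard' }));
// -> /admin/dashboard
```

The security question is entirely about what `postLoginTarget` is willing to accept in `from`.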

KeystoneJS v4 has reimplemented the admin functionality in React and the following shows the vulnerable piece of code.

{% gist e4029d88f29e3bc2596b3bfcf70c3d67 %}

I actually found this issue while I was reviewing the use of the qs NPM package in this application because the version bundled at the time of writing has a known weakness when parsing query string parameters. I couldn’t see a way to exploit that in this context, YMMV, but I did spot this issue.

This is client-side JavaScript found in the file /admin/client/Signin/index.js. It parses the URL query string on line 13 and uses the result to set the {from} value in the React template on line 20. Upon successful login, the user is redirected to the value of {from}.

As you can see, some validation of the from parameter is performed on line 14. First, it checks that the value is a string, which I believe is part of the mitigation for the use of a vulnerable version of qs. Second, it ensures the first character is a forward slash, the intention being to allow only relative paths. This is where the vulnerability lies.

By using a protocol-relative URL, one beginning with //, we can easily bypass this restriction and specify an off-host target.

Protocol-relative URLs inherit the scheme of the current page, so a from value beginning with // on an https page resolves to an https URL on the attacker’s host.
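A quick recreation of the check, based on the validation described above (not the exact KeystoneJS source), shows why the leading-slash test is not enough:

```javascript
// Recreation of the flawed validation: accept `from` only if it is a
// string whose first character is "/".
function isAllowedFrom(from) {
  return typeof from === 'string' && from.charAt(0) === '/';
}

console.log(isAllowedFrom('/admin/dashboard'));      // true - intended use
console.log(isAllowedFrom('http://evil.example'));   // false - blocked
console.log(isAllowedFrom('//evil.example/signin')); // true - the bypass
```

A browser asked to navigate to //evil.example/signin from an https page will happily request https://evil.example/signin.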

Abusing Open Redirect

Why is this an issue? The clearest problem with open redirect vulnerabilities is the credibility they lend to phishing attacks. Say I want to attack a company that runs KeystoneJS for its website. Phishing credentials for their site would be a fairly simple way to get access, and what better lure than a believable-looking email with a link that genuinely points to the target website?

Now we can perform a really credible looking attack, something like this.

  1. Generate a login failure page on the target site. Clone this and host it somewhere we control.
  2. Send a phishing email to a content editor with a target redirect of our malicious site.
  3. When (if) the user logs in to the valid site they are then redirected off to our “login failure” site.
  4. Assuming they do not spot the hostname change in the URL, they may believe they simply failed to log in, even though the login actually succeeded and they have just been navigated away.
  5. The victim re-enters their credentials on our site; we capture them and redirect them back to the original target site.
  6. Because they were already authenticated, they arrive at their destination as if nothing ever happened.
Demonstration of KeystoneJS Open Redirect

P.S. If you’d like a really simple handler for open redirect proofs of concept, check out our AWS Lambda function that you can throw up behind an API Gateway using Serverless. Head over to


I found this bug interesting for two reasons.

Firstly, protocol-relative URLs are surprisingly effective at bypassing restrictions in lots of places. Open redirect is definitely near the top of the list, but they also crop up where users have control of file naming in uploads and the like. Finding one in a well-used open source framework just goes to show you still need to exercise your own judgement over the security implementation.
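For comparison, a more defensive check (a sketch of the general idea, not the eventual KeystoneJS fix) rejects a second slash, and also a backslash, which some browsers normalise to a forward slash:

```javascript
// Accept only genuinely relative paths: a single leading "/" that is
// not followed by another "/" or by "\" (which browsers may treat
// as "/").
function isSafeRelativePath(from) {
  return typeof from === 'string' && /^\/(?![/\\])/.test(from);
}

console.log(isSafeRelativePath('/admin'));              // true
console.log(isSafeRelativePath('//evil.example'));      // false
console.log(isSafeRelativePath('/\\evil.example'));     // false
console.log(isSafeRelativePath('http://evil.example')); // false
```

An allow-list of known internal paths is stronger still, but the negative lookahead above at least closes the protocol-relative hole.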

Secondly, when reviewing the GitHub repository I was mildly surprised to see not only how long the bug had existed, well over a year (the original commit dates to December 2016), but, more interestingly, that the previous version of KeystoneJS, v3, was not vulnerable. In fact, the same developer responsible for committing the weakly protected code in v4 had committed a very robustly defended piece of code in previous versions.

It just goes to show that when we’re knee deep in a technical problem we can become experts for that period of time. The memory fades, however, and if we’re not keeping on top of our security awareness we can make mistakes that we would not have made in the past.

If this is something that worries you and you are looking for a security partner to help you write more resilient, more secure code more consistently, with training, consultancy and review services tailored around your improvement areas, get in touch.

Final thoughts

Getting this issue resolved has proven to be pretty painful. For whatever reason the original project owners (Thinkmill) have been busy with other (commercial) projects, and that has understandably taken priority. The impact on KeystoneJS is that there appears to be no traction on bug fixes or on getting v4 out of beta and into a stable release. See their GitHub issues for more on this, in particular this one -

I am by no means blaming anyone. Life moves on. Projects get born then die. Momentum is hard to maintain. What it does highlight is that, even with an open source project and a large, active community, security issues can present themselves and be left unresolved for some time.

I’m under no illusions here. Had this been an RCE or something of that nature, I reckon this bug would have been fixed weeks ago. It’s not anything like that serious and would require a user to be tricked to exploit it. That said, there was no clear policy for reporting security issues and a fairly unsatisfactory process when I did find someone to take a look.

If you’re maintaining an open source project, it can’t be stated enough times: make it really clear and obvious how you want security issues reported and what people can expect to happen if they do report one. Maybe something like the project can be used for guidance here? A file in your repo perhaps?

I’m a big fan of open source. The world runs on it and I genuinely think it is advantageous from a security point of view, but, like all things, we can’t assume security issues are getting picked up. A project may have thousands of users, but 99.9% of them are just that - users. They’re not security auditing the code, so if it’s critical, you need to do some due diligence on the way security is handled for that project.

For reference, I’ve posted the full disclosure timeline below.


All dates are in 2018 and times are UK local time. The third parties are in Australia.

| Date - Time | Action |
| --- | --- |
| 18 March 18:14 | Email sent to contact at |
| 20 March 08:39 | No response. Keystone Gitter suggested emailing the company behind the framework directly as support from them had been very lacking recently, something of a source of frustration by the sounds of it. Emailed hello at |
| 21 March 23:09 | Response from John and Ben, acknowledging my email and asking for details. |
| 21 March 23:18 | I reply with full details and a proof |
| 22 March 04:14 | Email from John acknowledging "community angst" and that the "issue does sound both important and fairly easy to address". Ben assigned to resolve. |
| 29 March 11:29 | Nothing heard so I send an email to John and Ben asking for a status update. |
| 29 March 12:17 | Email from Ben saying he'll be looking at it "tomorrow, and releasing a patch of it by Monday". |
| 29 March 12:19 | I email Ben and John to acknowledge this and offer help if required. |
| 18 April 22:13 | I email again as no commits have been made. |
| 20 April 02:51 | Reply from Ben saying he has a local branch with a potential fix; can I review it? |
| 20 April 06:08 | I acknowledge this and agree to review. |
| 22 April 03:23 | Ben emails me his potential patch. |
| 23 April 06:47 | I note a weakness in the patch and email Ben with an alternative regular expression. |
| 24 April 01:40 | Email from Ben saying he will check that and let me know when it gets released. |
| 23 May 20:41 | Still no commits. I email Ben asking for an update. |
| 24 May 16:37 | I comment on a GitHub issue tagging the new dev lead Jared asking for help to get this moving. |
| 24 May 17:38 | GitHub mention from Jared to me asking for more info at an email address. |
| 24 May 17:43 | I send all the info to the new dev lead requesting it. |
| 31 May 15:52 | New dev lead Jared replies to say they are looking at it. |
| 24 July 13:20 | I email Jared as I notice there's been activity on GitHub and a commit to the affected page. |
| 24 July 16:09 | Email back saying that another dev, Stephen, had taken the lead on fixing it, and looping me in with them. |
| 26 July 07:04 | Email from Stephen saying he believed the issue was fixed in commit 1c93aa293. |
| 12 September 19:00 | I finally get around to writing this blog post. |


About The Author

Marc Wickenden

Technical Director at 4ARMED, you can blame him for our awesome technical skills and business-led solutions. You can tweet him at @marcwickenden.
