At the Sunshine PHP conference in 2017, I presented a tutorial called "Build Security In". The tl;dr of the presentation was that, in order for secure development practices to be effective, they have to be integrated into the planning stages of your workflow. Unfortunately, the session itself took an interesting (and unfortunate) turn, so much of the content I was going to share didn't end up being presented.
So, I'm going to write up a series of articles that share the same information I had planned for that session, each focusing on different aspects of "starting it right" when building your applications. While the series focuses on steps to take when starting in on a new application, the theories still hold up in the land of legacy applications. You don't have to be starting fresh to apply secure development principles to your applications - you can start at any time. All it takes is a little time to integrate it into your workflow and let the effects trickle down the line. The key is to "push left" and get it as close to the start of the process as possible.
Later on in the series I'll be using a sample application I created for the tutorial to illustrate some of the concepts presented in these tutorials. If you'd like to jump ahead and see what that looks like you can head over to this repository and clone away.
Once you've cloned it, look through the README.md for information on how to set it up and get the simple Slim-based application up and running. You can even use PHP's own built-in webserver to run it - no separate web server required. I'll get into more detail on this application and this setup in the next part of this series, so if you can't get it up and running don't worry - there's more info coming!
I'm going to start this article off the way the presentation did - with the definition of a few basic security terms that you may or may not be familiar with. They'll be referenced during the rest of the articles in this series so I wanted to be sure everyone had a good understanding before we move into the actual functionality.
Threat modeling is something that a lot of developers might not be familiar with. There's something similar to it that I've seen in lots of meetings, though.
Threat modeling is a procedure for optimizing network security by identifying objectives and vulnerabilities, and then defining countermeasures to prevent, or mitigate the effects of, threats to the system. - searchsecurity.techtarget.com
While this sounds like it might be a complex technical document (and yet another one to keep up with), a threat model can be pretty simple and, fortunately, doesn't have to be done all at once. Here's an example of a super simple threat model for a basic web application:
In this example you'll notice a few things. First off, there's some notation to clarify here. You'll notice that there's a few different shapes for the different pieces of the model. In threat modeling there's a few main kinds of notations:
The arrows then show the flow of data between the points, helping you visualize how it moves through your application and, most importantly, where the data "changes hands". That's where that dotted red line comes in. That line is called a "trust boundary". It doesn't have to be red, but it helps when the rest of the diagram is black and white. This dotted line shows you a key point in the threat modeling exercise - it shows where you need some kind of security control, be it access verification, data filtering or validation.
If you're just getting started with threat modeling in your application, I suggest you start with something more at the level of the diagram above. Lay out the overall diagram of your application and how it relates to other systems. Then, with that defined, get down to the details, breaking it up into individual components and potentially even smaller pieces of functionality. How deep should you go down the rabbit hole? Well, that's up to you, but the more detailed you can make the threat model the better. A helpful hint I give people just starting out on the modeling is to think about the most used parts of your application and dig into those first. The most used portions of your application have a higher risk factor than the seldom used parts, making it more important to be sure they're as secure as possible.
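To make the trust-boundary idea concrete, here's a toy representation of a data-flow diagram like the one described above: nodes, the flows between them, and the trust zone each node lives in. The flows that cross a zone boundary are exactly the places that need a security control. This is a sketch in Python for illustration (all of the node and zone names are made up), but the same model maps onto any notation or tool you use.

```python
# Each node in the data-flow diagram belongs to a trust zone.
# Names here are hypothetical examples, not a prescribed scheme.
ZONES = {
    "browser": "external",
    "web_app": "internal",
    "database": "internal",
}

# Arrows in the diagram: data flowing from one node to another.
FLOWS = [("browser", "web_app"), ("web_app", "database")]

def boundary_crossings(flows, zones):
    """Return the flows that cross a trust boundary - the points
    where a security control (validation, access checks) is needed."""
    return [(src, dst) for src, dst in flows if zones[src] != zones[dst]]

# The browser-to-app flow crosses from "external" to "internal",
# so that's where the dotted red line would be drawn.
print(boundary_crossings(FLOWS, ZONES))
```

Even at this toy scale, the exercise of listing flows and zones forces you to decide where data "changes hands" - which is the whole point of the diagram.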
If you're interested in more on threat modeling and methods of assessing risk, check out this article about STRIDE and this one about DREAD. Both of these provide a good structure you can use to more effectively identify high risk areas in your application.
Once you have your threat model worked up you can then move on to other parts of the application. The first of these I want to cover are two concepts on the authentication/authorization side of things - how you're protecting your application. First, let's start with the "official" definition as pulled from the fount of all knowledge, Wikipedia:
In information security, computer science, and other fields, the principle of least privilege (also known as the principle of minimal privilege or the principle of least authority) requires that in a particular abstraction layer of a computing environment, every module (such as a process, a user, or a program, depending on the subject) must be able to access only the information and resources that are necessary for its legitimate purpose. - Wikipedia
There's a lot of terms in there you may or may not be familiar with, but here's the basic idea: don't give users more permission than they need. In a lot of applications I've seen, there's only two kinds of users: normal users of the system and the administrators. While this works for a lot of systems (a sort of on/off situation), it stops working when things get more complex. Any application of sufficient complexity is going to graduate away from this basic two-role system and need something a bit more industrial strength. There's going to come a time when those same basic users shouldn't be allowed to even see a certain part of the application, much less use the functionality it offers. This leads to the need for additional privileges and permissions.
So, where does the principle of Least Privilege play into this? Well, when you're creating your authentication system, you should keep simplicity in mind and lay out the permissioning in your application so that users only have what they require and nothing more. This gets a bit more tricky when you start introducing roles into your application where they contain groupings of permissions.
Usually what I recommend is a hybrid approach to implementing an RBAC system: allow users to be placed in groups but also allow the editing of their individual permissions. For example, say a new employee joins the HR department in your company and you need to add them into the system. Naturally you'd add them to the "HR" group and they'd get the set of permissions that come with it. A month down the road you might implement a feature that only certain members of the HR group should be able to access. If the feature is different enough from the existing functionality, you might add in a new permission to protect it. What happens if you only allow people to be in groups and don't allow their individual permissions to be edited? Not everyone in the HR group should get the new permission, right? You'd be stuck at that point, so do yourself a favor and set up a system where, when a role is assigned to a user, it sets all of the related permissions individually (more of a "permission template" than anything).
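The "permission template" approach above can be sketched in a few lines. This is a hypothetical, in-memory sketch (the role and permission names are made up, and a real system would back this with a database), shown in Python for brevity, but the shape carries straight over to a PHP implementation:

```python
# Role templates: assigning a role COPIES its permissions onto the user
# individually, rather than linking the user to the role itself.
ROLE_TEMPLATES = {
    "hr": {"view_employees", "edit_employees"},  # hypothetical names
}

class User:
    def __init__(self, username):
        self.username = username
        self.permissions = set()

    def assign_role(self, role):
        # Copy the template's permissions; later edits to this user
        # don't affect anyone else who was assigned the same role.
        self.permissions |= ROLE_TEMPLATES[role]

    def grant(self, permission):
        self.permissions.add(permission)

    def revoke(self, permission):
        self.permissions.discard(permission)

    def can(self, permission):
        return permission in self.permissions

# A new hire gets the standard HR permissions...
alice = User("alice")
alice.assign_role("hr")

# ...and a month later the new permission is granted to just this
# one user, not the whole group.
alice.grant("approve_payroll")
```

Because the role only acts as a template at assignment time, you're never "stuck" when one member of a group needs something the rest shouldn't have.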
Next up is another permissioning related topic, the concept of "Separation of Privilege". In the previous section we talked about only applying the minimum amount of access that a user needs to get their job done. This concept is slightly different and focuses less on the user and more on the application side. Here's what Wikipedia has to say on the topic:
In computer programming and computer security, privilege separation is a technique in which a program is divided into parts which are limited to the specific privileges they require in order to perform a specific task. This is used to mitigate the potential damage of a computer security attack. - Wikipedia
Previously I talked about permissions as they were applied to the user; this concept focuses on how they're applied to the application. This basically means using permissioning and identity validation to break up the functional pieces of the application and requiring different types of access depending on the requirements of each piece.
When most people think of access control in their software this is what they naturally think of. They ask themselves "how should I protect this?" or "what kind of permissions does a user need to perform this operation?" It's a pretty natural extension of access control and most developers probably didn't even know there was a formal name for it.
I'm not going to cover this one too much more as it's largely self-explanatory but I will offer one word of advice here. If you're working through your application and updating current features or adding new ones, one of the first questions to ask is who will be using it. This needs to be outlined from the very beginning (the planning stages) so that the enforcement of that access control can be correct. Pushing this "left" back into the planning process also allows you some time to work through any planned piece of development that might have additional access requirements.
The "Economy of Mechanism" relates back to a mantra I preach in just about every one of my security talks: "complexity is the enemy of security". The more complex the system the harder it is to understand. The harder it is to understand the more likely it is that bugs and vulnerabilities could creep in and be missed until an attacker stumbles on them.
The US-CERT definition is a bit more formal than mine but the basics are all there:
One factor in evaluating a system's security is its complexity. If the design, implementation, or security mechanisms are highly complex, then the likelihood of security vulnerabilities increases. [...] One strategy for simplifying code is the use of choke points, where shared functionality reduces the amount of source code required for an operation. Simplifying design or code is not always easy, but developers should strive for implementing simpler systems when possible. US-CERT (Computer Emergency Readiness Team)
They add in an interesting suggestion for applying this concept in your code: the use of "choke points" and shared functionality to make for a simpler system with less attack surface. Less attack points also means a lower level of risk overall and makes it easier to protect the application with solid, well-tested security controls.
An easy example of this is a pretty typical feature of most applications, the use of a login page. This acts as a "choke point" for the users of your application and forces them through a central place to identify themselves. This is also a great place to practice the "Economy of Mechanism" idea and keep the process as simple as possible. In this particular example, there's a few things that are pretty common to the login process:
Sounds pretty simple, right? If you're doing much more than this chances are there's more complexity than you need happening. There might be some other checks you need to do as a part of your login process (other authentication steps) but on the whole this is really all that's required. Any other enforcement, like checking permissions or roles, should come when the functionality or content is requested.
If there are other pieces of functionality you need to introduce to the process, I encourage you to abstract those out and put them into a security-specific location, standardizing the location of those controls in your application. This also adds to the overall simplicity of the application - having a single place to look for all controls rather than checks spread out across the application.
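The login "choke point" described above really is just a handful of steps. Here's a minimal sketch - in Python for brevity, though the same shape applies to the Slim-based PHP app (where you'd use `password_hash`/`password_verify`). The function names and the in-memory user store are hypothetical; the point is that one small, central function handles identification and nothing else:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash; a new random salt is generated if none given."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(digest, expected)

# Hypothetical in-memory user store for the example.
salt, digest = hash_password("correct horse battery staple")
USERS = {"alice": (salt, digest)}

def login(username, password):
    """The choke point: find the user, verify the password, done.
    Role/permission checks happen later, when content is requested."""
    record = USERS.get(username)
    if record is None:
        return False
    return verify_password(password, *record)
```

Notice what the choke point does *not* do: no role checks, no feature flags, no content decisions. Those belong at the point where the functionality is requested, keeping this central control small enough to audit thoroughly.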
This all sounds well and good if you're working with a new application but most developers don't have that luxury. They're working with a codebase they inherited with a large amount of legacy code to wrangle. I recommend making changes to simplify your security controls much like you would any other feature: incrementally. Whether it's doing some minor refactoring on a current feature or adding in a new one, include a review of the current security controls it uses and see if there's a way to reduce those down and make the overall process easier to understand and easier to maintain for the future.
I also have a two part series of posts with suggestions for simplification in legacy applications if you're not fortunate enough to be starting with a clean slate too. If you're in that situation I'd recommend checking those out as well as the slides from a presentation I gave discussing the same topic.
I like to start off my presentations with a quick reminder to all of those in attendance: there's no such thing as 100% secure. I know that's a bit counter-intuitive to what the goals of application security are but it's the truth. Ask any other security professional and they'll tell you the same thing. Fortunately there are things that you can do to help make it as secure as possible. Essentially what it boils down to is making your application more difficult to breach than the next guy.
Oftentimes attackers aren't actually after the contents of your database or trying to exploit the application and gain access to corporate secrets. If you're a smaller company, chances are they're mostly in it for the use of your application as a platform for other attacks or as some other piece of a larger attack puzzle.
One thing that they could abuse is your user permissioning and its evaluation. Authentication and authorization, especially in any moderately complex application, can get very complicated very quickly. It's easy to forget a check here or forget to modify a policy there. This leaves that particular part of the application open to exploitation. Another common flaw is to rely on cached credentials, probably pulled in when the user authenticated to the application. While this can be convenient for the developer (just check the user already in the session), it also builds a weakness into your system.
The idea of "Complete Mediation" runs counter to this. As in previous sections, let's go with the official definition first:
A software system that requires access checks to an object each time a subject requests access, especially for security-critical objects, decreases the chances of mistakenly giving elevated permissions to that subject. A system that checks the subject's permissions to an object only once can invite attackers to exploit that system. US-CERT (Computer Emergency Readiness Team)
Here the US-CERT group is making a simple recommendation (though it sounds more complex in the way they put it): every time a user accesses functionality or data in your system you should check their access levels and verify their pass/fail status. Simple, right? This on-demand checking prevents any kind of issues with the caching of credentials and any bypass this might allow.
If you're a performance-minded developer, you're probably cringing at this one. The thought of having to reach back into the database and re-evaluate the user's permissions each time can definitely cause a slight performance hit, but think about it this way: would you rather have a millisecond of extra load time or leave a security hole in your application for attackers to exploit?
I've seen a lot of applications that will, when the user logs in, cache the user and their permissions in the current session for easy access. I hinted at the main issue with this kind of functionality earlier, but let's get into a bit more detail. Imagine an attacker managed to get access to a privileged account in your system. You have good logging and alerting in place and it picks up the anomalous behavior, paging your security team and alerting them to the intruder. You diligently hop onto your system, lock the account and revoke all of its permissions. That'll take care of it, right? Unfortunately, if you've cached the user and their credentials in the session, no, you effectively haven't done anything. That attacker would still have the full level of access until their session expires (which could be never, depending on how you have your auth system set up).
The answer there is simple and I've already covered it, but I want to state it one last time - check every time. Any time a user accesses data, any time they access a new piece of functionality, and any time their permission level changes (like logging in or using an admin resource).
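The difference between session-cached permissions and complete mediation is easiest to see in code. In this sketch (Python for brevity; the store and account names are hypothetical stand-ins for your database), every check re-reads the authoritative store, so revoking an account's permissions takes effect on the very next request - no session to wait out:

```python
# The authoritative permission store - in a real app, your database.
PERMISSION_STORE = {"mallory": {"view_reports", "admin_panel"}}

def check_access(username, permission):
    """Complete mediation: consult the store on EVERY check rather
    than trusting a copy cached in the session at login time."""
    return permission in PERMISSION_STORE.get(username, set())

# An attacker compromises a privileged account...
assert check_access("mallory", "admin_panel")

# ...the security team revokes the account's permissions...
PERMISSION_STORE["mallory"].clear()

# ...and the very next check fails immediately.
assert not check_access("mallory", "admin_panel")
```

With a session-cached copy, that final check would still have passed, which is exactly the hole described above.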
There's a trend that I've seen in several different projects that I've worked on in the past. While it can make things more convenient, it also can potentially leave a system open to more abuse: the sharing of the main functionality of the application and its administrative interface. For example, a user of the system may be able to view another user's basic information on their profile but an admin may be able to see the same page and update the information too. While this in itself isn't a vulnerability, it can easily lead to one if all of the correct checks aren't in place.
This term is another one from the US-CERT list. Here's their summary of the issue:
Avoid having multiple subjects sharing mechanisms to grant access to a resource. For example, serving an application on the Internet allows both attackers and users to gain access to the application. Sensitive information can potentially be shared between the subjects via the mechanism. A different mechanism (or instantiation of a mechanism) for each subject or class of subjects can provide flexibility of access control among various users and prevent potential security violations that would otherwise occur if only one mechanism was implemented. US-CERT (Computer Emergency Readiness Team)
Their definition basically describes the situation above, the co-mingling of the administrative functionality with that of a normal user. They make the suggestion of splitting out this functionality to help protect from unintentional vulnerabilities caused by issues like forgotten (or incorrect) authorization checks.
If you need to keep administrative functionality as a part of the same application, it's highly suggested that you segment it out and thoroughly vet the protection of that section of the site for correctness. One example of this is having a split-off /admin directory that allows for more functionality and features that the normal users of the system just shouldn't have access to. This removes the burden of getting the correct checks - scattered all over your code for every piece of "admin" functionality - 100% correct, and drastically reduces the overall risk of there being a flaw somewhere that's not discovered until it's too late.
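Segmenting the admin area under one path prefix means a single guard can protect all of it. Here's a minimal sketch of that idea (Python for brevity; the request shape and role names are hypothetical) - in the Slim-based app from this series, the same logic would live in a piece of route middleware:

```python
def admin_guard(path, user_roles):
    """Return True if the request may proceed. Everything under the
    /admin prefix requires the admin role - one check, applied in one
    place, covers every current and future admin feature."""
    if path.startswith("/admin"):
        return "admin" in user_roles
    return True

# Normal pages are unaffected by the guard.
assert admin_guard("/profile", {"user"})

# Admin pages are closed to normal users by default...
assert not admin_guard("/admin/users", {"user"})

# ...and open only to those holding the admin role.
assert admin_guard("/admin/users", {"user", "admin"})
```

The win is that a new admin feature is protected the moment it's added under the prefix - there's no per-feature check for a developer to forget.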
If you wanted to be even safer, you could do what I've seen a few other (usually larger) projects do and have a completely separate application for the administration of your application. Usually this other application requires the same kind of higher privilege level that the user would need in the application but has several major advantages:
While this option allows for the most protection there's also more work involved in getting it up and running and essentially means that you'll have two applications to manage instead of just one. That being said, the extra work to get it up and running could be well worth it when it comes to the level of protection required for your application. Additionally, with the rise of tools like Composer for package and dependency management, it's easier than ever to modularize your application and pull in the parts you need across both projects.
Finally I want to talk about a topic that's been a point of contention between those wanting to secure an application and those working on the look and feel: usability and security. As anyone that's done work with various company sites or corporate tools can tell you, the design and user experience of a tool is very important and can make getting things done easier if done correctly. There's certain techniques that UX/UI designers have for making things clearer to users without having to hold their hand and explain the entire interface one thing at a time. The rub here comes in when security gets tossed into the mix.
Most users have been trained that one level of protection (like a username and password) is "secure enough" and don't consider it an imposition to be asked for it before entering an application. If you add additional security controls on top of that, like two-factor authentication, users tend to get a little twitchy and start complaining. They don't understand why they need to put in some other secret code just to get to their account when they use just usernames/passwords for everything else.
Here's what the US-CERT has to say about the idea of "psychological acceptability" (or what security controls users will find acceptable):
Accessibility to resources should not be inhibited by security mechanisms. If security mechanisms hinder the usability or accessibility of resources, then users may opt to turn off those mechanisms. Where possible, security mechanisms should be transparent to the users of the system or at most introduce minimal obstruction. US-CERT (Computer Emergency Readiness Team)
The key words in this statement as related to the security of web applications echo the situation above: "hinder the usability", "opt to turn off" and "should be transparent". While this may sound like a relatively easy task to accomplish, how easy it actually is depends entirely on the application. The best kind of security is "transparent security", where the user doesn't even know they're being protected. As a web developer, this means using things like correct password hashing practices or tested and verified authentication/authorization controls to protect the application. These don't have much to do with the usability of the application, but the effects of them being enforced do.
Take for example an application that defines a password policy. That's a security control that's built in to help make the user's choice of password more secure than just their pet's name or their date of birth. However, the UX/UI folks will still need to be involved to figure out the best way to show those requirements and what kind of error messaging is the most effective when sharing error messages with users. This is a pretty easy one for users to understand, fortunately. It's when you start getting into controls they may not be all that familiar with that things can get a bit more dicey.
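A password policy like the one just described is a good example of a control where the server-side rules and the user-facing messaging can be cleanly separated. In this hypothetical sketch (Python for brevity; the thresholds and wording are made up, not a recommended policy), each rule carries its own human-readable description, so the UX team can present failures however they like while the enforcement stays in one place:

```python
import re

# Each policy rule pairs the message shown to users with its check.
# Thresholds here are illustrative, not a recommendation.
POLICY = [
    ("at least 12 characters", lambda p: len(p) >= 12),
    ("at least one digit", lambda p: re.search(r"\d", p) is not None),
    ("at least one letter", lambda p: re.search(r"[A-Za-z]", p) is not None),
]

def password_errors(password):
    """Return the human-readable rules the password fails; an empty
    list means the password satisfies the policy."""
    return [desc for desc, test in POLICY if not test(password)]

print(password_errors("short1"))  # fails the length rule
```

Because the rules and their descriptions live together, adding or tightening a rule later automatically updates the error messaging too.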
Each application is different so there's not one easy fix for this kind of problem but there is one tip I can share that can help make things easier down the line. When you're thinking about a new security control and how to add it into your application with the least amount of resistance, the key is to make it secure by default. In theory this is an easy concept to grasp but making it work with your application and with your users may not be quite so easy. Think about the maximum level of acceptable behavior for the control when planning the feature and start there. If at all possible don't give users an "out" and let them disable the feature. A security control that falls into these categories will almost always be accepted by your users:
Always start with the maximum level of security that you can and then, if you need to, you can dial it back based on user feedback. Don't just assume that the users will hate it and be too hesitant to introduce it. Based on my experience that only causes more confusion and heartache down the road. The more assurance you can give your users and the better you can have them understand the reasoning for it the happier they'll be to accept it and move on. Adding the control is one thing, communicating it out to the customer is a completely different story.
I know this has been a lot of information to digest for the "introduction" to this new series but I wanted to be sure we were all on the same page with things before getting into the "meat" of the different topics involved in building security into your application from the start. The remainder of the series will cover a wide range of other topics and will include more examples on the code side of things and be a bit more hands-on for those worried that they're just in for another wall of text to read through and understand.
In the next part of the series we're going to start in on the "outside" of the typical web application and cover authentication and proving the identity of your user as much as possible. This includes the basic username/password protection, validation and other options to help ensure the person on the other end of the line is who they say they are. I hope you'll join me in this series and I hope that I've already provided you some food for thought to discuss during your next planning session and which controls you may need to include.
Also remember, check out the latest version of this application that we'll be using during this series to take some of the boilerplate work out and help us focus on the real issues in each section.
With over 12 years of experience in development and a focus on application security, Chris is on a quest to bring his knowledge to the masses, making application security accessible to everyone. He is also an advocate for security in the PHP community and provides application security training and consulting services.