We've come a long way in this series so far: we started off with some of the basics about creating our API and then built our initial prototype. In part three of the series we refactored the code and added an MVC structure to make it more maintainable in the future. This brings us up to date and ready for the next step in the process.
Up until now, we've basically only been sharing static data. Our API went from a simple Slim route that responded with a "Hello world!" message to...well...a more robust version that still responded with the same "Hello world!" message. Obviously, no useful API out there is going to just serve up static data. That's one of the main points of even having an API - allowing your users access to the real-time, latest data inside of your system in a programmatically accessible way.
It makes sense that the next step in the process is to bring in something to store this dynamic data and allow our API application to access it. A database fits this need perfectly. Thanks to the Phinx migration tool and the Eloquent model handling we've already included, using this database will be a snap.
One thing to note here: in the course of this tutorial I'll be making use of MySQL, but there won't be much that's specific to MySQL in the database functionality. You can migrate this functionality over to your database of choice pretty easily, be it PostgreSQL, Oracle or even Microsoft SQL Server.
The first thing we need to do is set up the database we're going to use in our application. I'm going to be using the MySQL command line for these examples, but there are plenty of other tools out there that can do these same tasks. I'm not going to cover the setup and configuration of the MySQL server itself here; I'm just assuming you already have that up and running at this point.
Here are the basic commands to create our sample database via the command line:
> create database myapi;
> grant all on myapi.* to 'myapiuser'@'localhost' identified by 'myapi42';
> flush privileges;
To keep things simpler and a bit more consistent, I'm going to be using the above credential information (username myapiuser, password myapi42 and database name myapi) so you can see how things fit together in other parts of the system. These same credentials will be reused later in the Phinx configuration.
If all goes well you should now have a database ready and waiting for your application to connect to it. There are two things we'll need to do to make sure our API code knows how to make the database connection. The first is updating the .env file with this new credential information. You'll remember that we created the .env file in the previous part of the series when we were refactoring things. The settings we put in there were just defaults, so let's go in and update that now with our actual connection details:
DB_HOST=localhost
DB_NAME=myapi
DB_USER=myapiuser
DB_PASS=myapi42
When the request comes in and the application bootstraps, the .env file will be loaded and all of these values will be available in the $_ENV superglobal. You'll also remember that we set up our database configuration in the bootstrap/db.php file to use these values. Now when the Eloquent models try to connect to the database they can use the correctly configured capsule to reach it.
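As a quick refresher, the capsule setup in bootstrap/db.php from part three looks roughly like this - treat it as a sketch, since your exact file from that article may differ a little:

<?php
// bootstrap/db.php (rough sketch - see part three for the real version)
use Illuminate\Database\Capsule\Manager as Capsule;

$capsule = new Capsule();
$capsule->addConnection([
    'driver'    => 'mysql',
    'host'      => $_ENV['DB_HOST'],
    'database'  => $_ENV['DB_NAME'],
    'username'  => $_ENV['DB_USER'],
    'password'  => $_ENV['DB_PASS'],
    'charset'   => 'utf8',
    'collation' => 'utf8_general_ci'
]);

// Make the capsule available to the Eloquent models
$capsule->setAsGlobal();
$capsule->bootEloquent();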
Before we get too much further, let's be sure this connection is working. We don't have any tables to connect a model to yet, but the kind of error message we get back will tell us whether Eloquent is having connection issues or just can't find the matching table. If all goes well, it should be the latter.
We'll need to do two things: create a simple model and try to use it in a controller. First, let's make the model in App\Model\Test.php:
<?php
namespace App\Model;

use Illuminate\Database\Eloquent\Model;

class Test extends Model
{
    // nothing to see, move along
}
Save this, then go over to the IndexController in the App\Controller\IndexController.php file and change the index method to look like this:
public function index()
{
    // Querying the (not yet created) table is enough to exercise the database connection
    $tests = \App\Model\Test::all();
    return $this->jsonSuccess('Hello world!');
}
The Eloquent functionality will see this use of the Test model, try to make the connection to the database and, hopefully, just report back that the table tests can't be found. Give that a shot and see if you get an error message similar to:
Base table or view not found: 1146 Table 'myapi.tests' doesn't exist (SQL: select * from `tests`)
If not, and you get something about the connection instead, you'll need to verify your web server can reach the MySQL server on the expected port and ensure you have all of the credentials correct in the .env configuration. You can remove these two changes now; they were just there to verify the database was accessible and that the application was functioning as expected.
With this connection in place, we'll move on to the next tool in the process, setting up and using Phinx.
First off, we'll need to initialize our project using the Phinx command line tool. This tool will set up a configuration file for us. We'll then create the migrations and seeds directories where our database files will be stored. Go into the root directory of the application (again, not public/ but one level up) and use the following commands:
> vendor/bin/phinx init
Phinx by Rob Morgan - https://phinx.org. 0.8.0
created ./phinx.yml
> mkdir db/seeds
> mkdir db/migrations
You should now have those two new directories in your db/ folder and a phinx.yml configuration file in your root directory. Phinx is a powerful tool that allows you to work with multiple environments from a single configuration file. For our purposes, however, we're only interested in using the development section. This is the default Phinx uses anyway, so that's one less configuration option to change.
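If you ever do want to run against one of the other sections, the Phinx commands accept an environment flag, for example:

> vendor/bin/phinx migrate -e development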
Open up the phinx.yml file and look for the development section. Then update these settings to match the ones we previously put into the .env file for our database configuration:
development:
    adapter: mysql
    host: localhost
    name: myapi
    user: myapiuser
    pass: 'myapi42'
    port: 3306
    charset: utf8
At this point we've already verified the database connection is good so we can assume that Phinx won't have any issues connecting either.
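If you'd like to double-check anyway, Phinx's status command has to connect to the database before it can report on migrations, so a bad connection will surface right away:

> vendor/bin/phinx status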
Alright, things are all configured and set up, so let's start getting into the functionality of the application. Since we're focusing on the database setup for this article, we're going to go ahead and create a few tables for some of the base API user handling. While many "API first" designs will have the user creation right alongside the rest of the API endpoints, I'd recommend, if possible, having the user or account creation off in a separate process.
User handling (authentication and authorization) is one of the more complex things you can do in applications (and not just web-based ones either). By splitting user creation and management off from the API, you're also reducing the attack surface of the API for features that could be abused. This is not to say that I think it should never be included; I'd just suggest that you start off with it elsewhere and migrate it over if it becomes required in the future.
I'm not going to cover secure user signups via an API in this set of articles. I might circle back around to that at the end but for now we're going to keep things simple.
Our first migration is to create the user table that we can use to store details about the users of our system. We're going to use Phinx to create the migration and then fill in the content to define the table:
> vendor/bin/phinx create CreateUserTable
This should create a file named something similar to db/migrations/20170508142420_create_user_table.php (the numbers will be different but the last part of the filename should be the same). Open up this file in your editor and put the following in the change method:
<?php

use Phinx\Migration\AbstractMigration;

class CreateUserTable extends AbstractMigration
{
    public function change()
    {
        $table = $this->table('users');
        $table->addColumn('username', 'string')
            ->addColumn('password', 'string')
            ->addColumn('password_reset_date', 'datetime')
            ->addColumn('email', 'string')
            ->addColumn('name', 'text')
            ->addColumn('status', 'text')
            ->addColumn('created_at', 'datetime')
            ->addColumn('updated_at', 'datetime')
            ->create();
    }
}
While we have some of the traditional columns in our user table definition, there are also a few that might need a bit of explaining. These other columns will work with security controls we'll be adding into the code to help protect the system:
password_reset_date: It's a good idea to have the passwords in your system rolled over from time to time, especially with users being as bad about passwords as people tend to be. This reset date, even if not initially used, can provide a reference point for the "age" of the password should you choose to enforce it (there's a rough sketch of that kind of check just after this list).
status: I see a lot of user management systems out there neglect to have any kind of status on the users in their system. By having a single "switch" on the user record itself you have instant control to disable an account if it's been compromised or if you suspect some kind of abuse. I usually recommend using strings for these values rather than "magic numbers" that require you to know the code to understand them. We'll be setting up class constants to handle these strings for us so we never have to reference them directly.
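For example, if you did decide to enforce a maximum password age later on, a check using that column might look something like this. The ninety-day window and the surrounding flow are purely hypothetical, and $user is assumed to be an instance of the User model we define a bit further down:

<?php
// Hypothetical age check - the 90-day limit is just an example value
$resetDate = new DateTime($user->password_reset_date);
$age = $resetDate->diff(new DateTime('now'));

if ($age->days > 90) {
    // Force the user through your password reset flow before continuing
}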
You can then run the migrate command and have this table created:
> vendor/bin/phinx migrate
Next is the matching model for the user table. In the App\Model\User.php file, put the following code:
<?php
namespace App\Model;

use Illuminate\Database\Eloquent\Model;

class User extends Model
{
    const STATUS_ACTIVE = 'active';
    const STATUS_DISABLED = 'disabled';

    protected $fillable = [
        'username', 'name', 'email', 'password', 'password_reset_date', 'status'
    ];

    public function keys()
    {
        return $this->hasMany('\App\Model\ApiKey', 'user_id', 'id');
    }
}
You can see that it sets up those constants previously mentioned for the user status and defines some fillable values. This is the special configuration that Eloquent uses to define which fields can be "filled" (set) when the object is created or updated via mass assignment. If a property other than these is passed in, it won't save into the database and will just kind of disappear into the ether of bits.
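For instance, with the fillable list above, a mass assignment only touches the whitelisted fields - anything else (the made-up is_admin flag here is just for illustration) is quietly dropped:

<?php
use App\Model\User;

$user = new User();
$user->fill([
    'username' => 'user1',   // in $fillable, so it gets set
    'is_admin' => true       // not in $fillable, silently ignored
]);

// $user->username is "user1"; $user->is_admin was never set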
In order to have some sample data to work with, we're going to seed some users into our system. These users will just have basic information and simple passwords. This information should only be used for testing and should never end up in production. Weak passwords like these can lead to compromise.
Make the new seeder with the Phinx command:
> vendor/bin/phinx seed:create UserSeeder
using seed base class Phinx\Seed\AbstractSeed
created ./db/seeds/UserSeeder.php
Now we fill in the seeder with our data:
<?php

use Phinx\Seed\AbstractSeed;

class UserSeeder extends AbstractSeed
{
    public function run()
    {
        $users = $this->table('users');
        $users->truncate();

        $defaultUsers = [
            [
                'username' => 'user1',
                'password' => password_hash('test1234', PASSWORD_DEFAULT),
                'name' => 'Active User #1',
                'email' => 'user1@example.com',
                'status' => 'active',
                'created_at' => date('Y-m-d H:i:s'),
                'updated_at' => date('Y-m-d H:i:s'),
                'password_reset_date' => date('Y-m-d H:i:s')
            ],
            [
                'username' => 'user2',
                'password' => password_hash('test5678', PASSWORD_DEFAULT),
                'name' => 'Active User #2',
                'email' => 'user2@example.com',
                'status' => 'active',
                'created_at' => date('Y-m-d H:i:s'),
                'updated_at' => date('Y-m-d H:i:s'),
                'password_reset_date' => date('Y-m-d H:i:s')
            ],
            [
                'username' => 'user3',
                'password' => password_hash('test9012', PASSWORD_DEFAULT),
                'name' => 'Inactive User #1',
                'email' => 'user3@example.com',
                'status' => 'disabled',
                'created_at' => date('Y-m-d H:i:s'),
                'updated_at' => date('Y-m-d H:i:s'),
                'password_reset_date' => date('Y-m-d H:i:s')
            ]
        ];

        $users->insert($defaultUsers)->save();
    }
}
This seeder will insert three users:
- user1, which is active
- user2, which is active
- user3, which is disabled
We'll be using user1 and user2 to show how to correctly connect to the API, and user3 will help us show how errors are handled and returned when exceptions are thrown. Now we run the seeder to insert our user data:
> vendor/bin/phinx seed:run -s UserSeeder
You'll notice the call to truncate there at the beginning - this is just an easy way to tell Phinx to remove all data from the table when the database seed is run to "reset" it. Otherwise, since Phinx doesn't check for uniqueness, we could end up with duplicate data in our users table and that could make things messy.
This completes the creation of the user storage for now - we'll come back to this later and use this model when we need to get at current user information.
With the basic user storage all set up, we're going to move into the next piece of security-related functionality: the handling of the API keys. If you'll remember from way back in part one of the series, we looked at how the API authentication was going to work. We're going to make use of API keys, along with the username on the account, during the initial connection to the API. As I mentioned earlier in the series, these keys are just randomly generated hashes created from data pulled from the random_bytes function. They're not based on the user's password or any other account-related data - just random data.
In several implementations, each user gets their own API key. This key is generated for them and can be used for requests acting as that account. This setup, however, provides the same kind of single point of failure as a password reused across services. Imagine if you're using the same API key in multiple places or for multiple services and one of those is compromised. For the same reason you should use different passwords on different accounts, API keys should also be able to be changed or, in this case, deleted and regenerated.
In our API we're going to create a multiple-key solution that gives the user the flexibility to generate as many keys as they might need and describe what each relates to with a brief description. For the purposes of these tutorials we're just going to create some random keys we can use to connect to the API instead of building a full-blown web UI for a user to manage them. Ultimately you'd want this functionality to live in your application - you could possibly have it as another set of endpoints in the API - but this keeps things simpler for now.
Another benefit of letting a user split things out across multiple keys is the ability to revoke them individually. Should a compromise somehow be detected on one key, the user can simply delete it, essentially revoking it. This offers quick and easy protection for the user and their applications without the complicated process of updating multiple other services that a single-key situation would require.
Now that we've defined what we want here, let's set up the database table and matching model. We'll then seed this table with values relating to the current user set. Use Phinx to create the new migration:
> vendor/bin/phinx create CreateApiKeyTable
using migration base class Phinx\Migration\AbstractMigration
using default template
created ./db/migrations/20170512114142_create_api_key_table.php
Then fill it in with the following:
<?php

use Phinx\Migration\AbstractMigration;

class CreateApiKeyTable extends AbstractMigration
{
    public function change()
    {
        $table = $this->table('api_key');
        $table->addColumn('key', 'string')
            ->addColumn('description', 'text')
            ->addColumn('user_id', 'integer')
            ->addColumn('status', 'string', ['default' => 'active'])
            ->addColumn('created_at', 'datetime')
            ->addColumn('updated_at', 'datetime')
            ->create();
    }
}
Let's briefly go over the columns in this table and what they'll be used for. The key, description and user_id fields are pretty self-explanatory. The status field provides the same kind of "on/off switch" that it did on the user table. This allows the user to disable a key rather than having to delete it if they want to make it unusable for a short amount of time. This is also a handy feature for the administrators of the site should they see unusual activity coming from the account using that key. They can disable it, notify the user and do the research they need before re-enabling it.
Run the Phinx migrate command to add this new table:
> vendor/bin/phinx migrate
== 20170512114142 CreateApiKeyTable: migrating
== 20170512114142 CreateApiKeyTable: migrated 0.0395s
All Done. Took 0.0779s
The table is added, so now let's create the model to match. In App\Model\ApiKey.php put the following:
<?php
namespace App\Model;

use Illuminate\Database\Eloquent\Model;

class ApiKey extends Model
{
    protected $fillable = [
        'key', 'description', 'user_id'
    ];

    protected $table = 'api_key';

    public function user()
    {
        return $this->hasOne('\App\Model\User', 'id', 'user_id');
    }
}
There are a few things in this model that are new versus some of the previous examples. The first is the $table class property. This is just a way for us to tell the Eloquent functionality which table to use to find API key information. By default Eloquent looks for the pluralized form of the model name, but our table is api_key, not api_keys, so we redefine the table.
The other new piece is the addition of the user() method. This is a special method that sets up a relationship between our API keys and the user table through Eloquent's hasOne relationships. This is just one of the many kinds of relationships Eloquent allows; check out the Laravel manual for information on this and others. In this case we're telling it that the key relates to a single user record through the connection from api_key.user_id to users.id. Then, when we have an instance of an ApiKey model, we can just call the user property to automagically get the related user (an example of this comes later).
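As a quick preview of what that relationship buys us, a lookup might go something like this - assume $token holds one of the key strings we'll be seeding in a moment:

<?php
use App\Model\ApiKey;

// Find the key record, then hop across the relationship to its owner
$apiKey = ApiKey::where('key', $token)->first();

if ($apiKey !== null) {
    $owner = $apiKey->user;    // lazy-loads the related User record
    echo $owner->username;
}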
Finally, we want to make some sample keys so we can make some connections to the API. As with the other data, we're going to build out a seeder to do the hard work for us. First, use Phinx to make the seed class:
> vendor/bin/phinx seed:create ApiKeySeeder
using seed base class Phinx\Seed\AbstractSeed
created ./db/seeds/ApiKeySeeder.php
Now open it up and put the following in it:
<?php

use Phinx\Seed\AbstractSeed;

class ApiKeySeeder extends AbstractSeed
{
    public function run()
    {
        $apiKey = $this->table('api_key');
        $apiKey->truncate();

        // Build out 6 randomized tokens and assign them to users, two for each
        $users = $this->fetchAll('SELECT * FROM users');

        foreach ($users as $user) {
            for ($i = 0; $i < 2; $i++) {
                $apiKey->insert([
                    'key' => $this->generateToken(),
                    'description' => 'Random token #'.$i,
                    'user_id' => $user['id'],
                    'created_at' => date('Y-m-d H:i:s'),
                    'updated_at' => date('Y-m-d H:i:s')
                ])->save();
            }
        }
    }

    /**
     * Generate a randomized token
     *
     * @return string Generated token
     */
    private function generateToken()
    {
        return hash('sha512', random_bytes(128));
    }
}
In the code above we're doing a few things to generate some as-random-as-possible tokens, making two for each user:
- Run a SELECT to get the current user information so we can use the IDs
- Generate SHA512 hash keys using the random_bytes function for seed data
- Insert records into the api_key table with the key, description and related user ID
I'm sure some of you are wondering "Why SHA512?" The simple answer is "for the length". The length of the string means that it would take an attacker more time to brute force than other, shorter hashes. Having a longer hash like this also has the side benefit of potentially tripping your "failed logins" monitoring. You do have log tracking and monitoring, right?
Now run the seeder:
> vendor/bin/phinx seed:run -s ApiKeySeeder
If you look in the api_key table you'll see a few rows with the data we just inserted, complete with randomly generated API keys, descriptions, the user ID they're related to and the status of the key. Currently they're all marked as active so they'd all work with any of the requests made to the API.
Why include a status column? Having this column allows us to immediately revoke keys if we find out they're compromised rather than just deleting them.
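We'll build the actual key management functionality later in the series, but as a sketch, flipping that switch on a suspect key could be as simple as this ($suspectKey is just a placeholder for the key string in question):

<?php
use App\Model\ApiKey;

// Mark the key as disabled without deleting the record
$apiKey = ApiKey::where('key', $suspectKey)->first();
$apiKey->status = 'disabled';
$apiKey->save();

Note that setting the property directly and calling save() works even though status isn't in the model's $fillable list - that list only restricts mass assignment.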
Later in the series we'll include functionality to manage these keys. I want to get into using them first, though. Having multiple keys gives us a bit more flexibility if we want to try out things for different users. Now that we have the keys in place, we need a way to handle the sessions that are created as a part of the API requests.
As was mentioned previously in the series (back in part one), the flow of a request goes roughly like this: the user logs in with their username and an API key, the API hands back a randomly generated token (the session hash), and that token is sent along with each following request. This token used for the validation/login has to be stored on the server side for validation, so we're going to make a new table to contain this information. With each request the user makes, this table will be consulted and the hash will be verified. Much like the other sections in this article, we're going to make a migration and model so we can use it in the application.
First we'll create the migration, once again using Phinx:
> vendor/bin/phinx create CreateApiSessionTable
This will create a new file in your db/migrations/ folder that will create the table when executed. Open up that file and put the following into it:
<?php

use Phinx\Migration\AbstractMigration;

class CreateApiSessionTable extends AbstractMigration
{
    public function change()
    {
        $table = $this->table('api_session');
        $table->addColumn('key_id', 'integer')
            ->addColumn('session_id', 'string')
            ->addColumn('expiration', 'datetime')
            ->addColumn('user_id', 'integer')
            ->addColumn('created_at', 'datetime')
            ->addColumn('updated_at', 'datetime')
            ->create();
    }
}
In this table structure we're setting up a few things:
- a key_id that will link the record back to our api_key table so we know which key it relates to
- a session_id that will contain the randomly generated session hash
- an expiration date for the token (they don't need to live forever, after all)
- the user_id that's associated with the session
There's plenty of other metadata that could be added around these basic fields, but this is a good place to start. Now we run our migration and add this new table:
> vendor/bin/phinx migrate
== 20170620203436 CreateApiSessionTable: migrating
== 20170620203436 CreateApiSessionTable: migrated 0.1338s
All Done. Took 0.3034s
There should now be an api_session table in your database.
Finally, we need a way to access the table and its data from inside the application - this calls for another model. In App\Model\ApiSession.php put the following:
<?php
namespace App\Model;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Capsule\Manager as DB;

class ApiSession extends Model
{
    protected $fillable = [
        'expiration', 'user_id', 'key_id', 'session_id'
    ];

    protected $table = 'api_session';

    public function user()
    {
        return $this->hasOne('\App\Model\User', 'id', 'user_id');
    }

    public function apiKey()
    {
        return $this->hasOne('\App\Model\ApiKey', 'id', 'key_id');
    }
}
This model links to the api_session table and provides some relationships we can reference to fetch the API key and the user related to the current session. In the following parts of this series we'll see how to populate this table and generate the randomized session hash that'll be shared on login to the API.
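Just to give you an idea of where this is headed, here's a very rough sketch of how a session record might be created at login and checked on later requests. The one-hour lifetime and the $user, $apiKey and $incomingHash variables are placeholders; the real login and verification code comes in the next part:

<?php
use App\Model\ApiSession;

// On a successful login: generate a session hash and store it (one-hour lifetime assumed)
$session = ApiSession::create([
    'session_id' => hash('sha512', random_bytes(128)),
    'user_id'    => $user->id,
    'key_id'     => $apiKey->id,
    'expiration' => date('Y-m-d H:i:s', strtotime('+1 hour'))
]);

// On later requests: the hash is only valid if it exists and hasn't expired
$current = ApiSession::where('session_id', $incomingHash)
    ->where('expiration', '>', date('Y-m-d H:i:s'))
    ->first();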
In this article we've gone through a lot of setup on the database side to prep for the next parts of our series: the implementation of the secure API requests. With these tables and the database laid out and created, we have a good foundation on which to build our API. Working through it like this also helps give a better understanding of how the data model is structured and where the links are between the data.
I'll wrap this part of the series up with a simple diagram showing you these relations for the more visually inclined:
I hope you've enjoyed this part of the series and will stay tuned for more to come. In the next article I'm going to get into some of the actual implementation of the API and get back to the PHP side. We'll create the endpoints involved in the login process and show how to generate the session hashes and send them back to the user.
With over 12 years of experience in development and a focus on application security, Chris is on a quest to bring his knowledge to the masses, making application security accessible to everyone. He is also an advocate for security in the PHP community and provides application security training and consulting services.