

AWS Parameter Store

Anyone with a moderate level of AWS experience will have learned that Amazon offers more than one way of doing something. Storing secrets is no exception. 

It is possible to spin up HashiCorp Vault on AWS using an official Amazon quick start guide. The downside of this approach is that you have to maintain it.

If you want an "AWS native" approach, you have two services to choose from: Secrets Manager and Systems Manager Parameter Store. As the name suggests, Secrets Manager provides some secrets management tools on top of the store. This includes automagic rotation of AWS RDS credentials on a regular schedule. For the first 30 days the service is free, then you start paying per secret per month, plus API calls.

The free option is Amazon's Systems Manager Parameter Store. This is what I'll be covering today.

Structure

It is easy when you first start out to store all your secrets at the top level. After a while you will regret this decision. 

Parameter Store supports hierarchies. I recommend using them from day one. Today I generally use /[appname]-[env]/[KEY]. After some time with this scheme I am finding that /[appname]/[env]/[KEY] feels like it will be easier to manage. IAM permissions support paths and wildcards, so either scheme will work.

If you need to migrate your secrets, use the Parameter Store namespace migration script.

Access Controls

Like most Amazon services, Parameter Store uses IAM to control access.

Parameter Store allows you to store your values as plain text or encrypted with a KMS key. For encrypted values the user must have access to both the parameter and the KMS key. For consistency I recommend encrypting all your parameters.

If you have a monolith, a key per application per environment is likely to work well. If you have a collection of microservices, a key per service per environment becomes difficult to manage. In this case share a key between several services in the same environment.

Here is an IAM policy for a Lambda function to access a hierarchy of values in Parameter Store:
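Something along these lines should work (the region, account ID and key ID are placeholders you will need to swap in):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameter",
        "ssm:GetParameters",
        "ssm:GetParametersByPath"
      ],
      "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/my-app-dev/*"
    },
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/your-key-id"
    }
  ]
}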

To allow your developers to manage the parameters in dev, you will need a policy that looks like this:
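A sketch of such a policy, again with placeholder ARNs (ssm:DescribeParameters does not support resource-level restrictions, so it is granted on *):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:DescribeParameters",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameter",
        "ssm:GetParameters",
        "ssm:GetParametersByPath",
        "ssm:PutParameter",
        "ssm:DeleteParameter"
      ],
      "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/my-app-dev/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt"
      ],
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/your-key-id"
    }
  ]
}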

Amazon has great documentation on controlling access to Parameter Store and KMS.

Adding Parameters

Amazon allows you to store almost any string up to 4KB in length in Parameter Store. This gives you a lot of flexibility.

Parameter Store supports deep hierarchies, but you will find deep nesting annoying to manage. Use hierarchies to group your values by application and environment, and keep the structure flat within the hierarchy. I recommend lower case letters with dashes between words for your paths, and upper case letters with underscores for the parameter keys, for example /my-app/dev/DB_PASSWORD. This makes it easy to differentiate the two when searching for parameters.

Parameter Store stores everything as strings. There may be cases where you want to store an integer as an integer, or a more complex data structure. You could use a naming convention to differentiate your types, but I found it easiest to encode everything as JSON and decode the values when pulling them from the store. The downside is that strings must be wrapped in double quotes. This is offset by the flexibility of being able to encode objects and use numbers.
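The round trip is straightforward in Python:

import json

json.dumps(42)          # '42' - numbers survive the round trip intact
json.dumps("s3cr3t")    # '"s3cr3t"' - strings gain the double quotes
json.loads('{"a": 1}')  # {'a': 1} - objects and lists work too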

It is possible to add parameters to the store using three different methods. I generally find the AWS web console easiest when adding a small number of entries. Rather than walking you through it here, I'll point you to Amazon's good documentation on adding values. Remember to always use "secure string" to encrypt your values.

Adding parameters via boto3 is straightforward. Once again it is well documented by Amazon.

Finally, you can maintain parameters with a little bit of code. In this example I do it with Python.
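A minimal sketch of the idea, using the JSON encoding convention from above; the parameter names are illustrative:

import json

import boto3

ssm = boto3.client('ssm')

parameters = {
    '/my-app-dev/DB_HOST': 'db.example.com',
    '/my-app-dev/DB_PASSWORD': 's3cr3t',
    '/my-app-dev/MAX_RETRIES': 3,
}

for name, value in parameters.items():
    # Overwrite=True makes the script safe to re-run when values change.
    ssm.put_parameter(Name=name, Value=json.dumps(value),
                      Type='SecureString', Overwrite=True)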

Using Parameters

I have used Parameter Store from Python and the command line. It is easier to use from Python.

My example assumes it is running as a Lambda function called my-app-dev, with the policy from earlier. This is what my code looks like:
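A minimal sketch of that code, assuming the /my-app-dev/ hierarchy and JSON encoded values:

import json

import boto3

ssm = boto3.client('ssm')

def load_config(path):
    # Fetch and decrypt every parameter under the path.
    config = {}
    paginator = ssm.get_paginator('get_parameters_by_path')
    for page in paginator.paginate(Path=path, Recursive=True,
                                   WithDecryption=True):
        for parameter in page['Parameters']:
            key = parameter['Name'].split('/')[-1]
            config[key] = json.loads(parameter['Value'])
    return config

def handler(event, context):
    config = load_config('/my-app-dev/')
    # ... use config['DB_PASSWORD'] and friends here.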

If you want to avoid loading your config each time your Lambda function is called, you can store the results in a global variable. This takes advantage of the fact that Amazon doesn't clear global variables between invocations of a warm function. The catch is that your function won't pick up parameter changes without a code deployment. Another option is to add logic that periodically purges the cache.
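The pattern looks something like this, reusing load_config() from the sketch above:

CONFIG = None

def handler(event, context):
    global CONFIG
    if CONFIG is None:
        # Only hit Parameter Store on a cold start.
        CONFIG = load_config('/my-app-dev/')
    # ... use CONFIG as before.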

On the command line things are a little harder to manage if you have more than 10 parameters. To export a small number of entries as environment variables, you can use this one liner:
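One way to do it, assuming the /my-app-dev/ hierarchy and JSON encoded values from earlier:

eval "$(aws ssm get-parameters-by-path --path '/my-app-dev/' --with-decryption | jq -r '.Parameters[] | "export " + (.Name | split("/") | last) + "=" + (.Value | fromjson | tostring | @sh)')"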

Make sure you have jq installed, and the AWS CLI installed and configured.

Conclusion

Amazon's Systems Manager Parameter Store provides a secure way of storing and managing secrets for your AWS based apps. Unlike HashiCorp Vault, Amazon manages everything for you. If you don't need the more advanced features of Secrets Manager, you don't have to pay for them. For most users Parameter Store will be adequate.

Switching Installation Profiles on Existing Drupal Sites

In my last blog post I outlined how to use per project installation profiles. If you read that post and want to use installation profiles to take advantage of site wide configuration changes and centralised dependency management, this post will show you how to do it quickly and easily.

The easiest way to switch installation profiles is using the command line with drush. The following command will do it for you:

$ drush vset --exact -y install_profile my_profile

An alternative way of doing this is by directly manipulating the database. You can run the following SQL on your Drupal database to switch installation profiles:


UPDATE variable SET value = 'my_profile' WHERE name = 'install_profile';
-- Clear the cache using MySQL only syntax, when DB caching is used.
TRUNCATE cache;

Before you switch installation profiles, you should check that you have all the required modules enabled in your site. If you don't have all of the modules required by the new installation profile enabled, you are likely to have issues. The best way to ensure you have all the dependencies enabled is to run the following one liner:

drush en $(grep dependencies /path/to/my-site/profiles/my_profile/my_profile.info | sed -n 's/^dependencies\[\] *= *\(.*\)/\1/p')

Even though it is pretty easy to switch installation profiles, I would recommend starting your project with a project specific installation profile.

Edit: Jaime Schmidt picked up a missing step in the instructions above. You need to enable the installation profile in the system table. The easiest way to do that is with this drush one liner:

echo "UPDATE system SET schema_version = 0 WHERE name = 'my_profile';" | drush sqlc && drush cc all

Further Edit: Marji Cermak picked up a typo in the dependencies one liner. It is one word I use a lot and always misspell.

Managing per Project Installation Profiles

Unbeknown to many users, installation profiles are what Drupal uses to install a site. The two profiles that ship with core are standard and minimal. Standard gives new users a basic, functional Drupal site. Minimal provides a very minimal configuration so developers and site builders can start building a new site. A key piece of a Drupal distro is an installation profile.

I believe that developers and more experienced site builders should be using installation profiles as part of their client site builds. In Drupal 7 an installation profile is treated like a special module, so it can implement hooks - including hook_update_N(). This means that the installation profile is the best place for enabling and disabling modules, switching themes, or making any other site wide configuration change that can't be handled by features or a module specific update hook.
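For example, a hypothetical update hook in the profile's .install file that enables a module across every site running the profile:

/**
 * Enable the views module.
 */
function my_profile_update_7100() {
  module_enable(array('views'));
}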

In an ideal world you could have one installation profile that is used for all of your projects and you just include it in your base build. Unfortunately installation profiles tend to evolve into being very project specific. At the same time you are likely to want a common starting point. I like to give my installation profiles unique names; rather than something generic like "my_profile", I prefer to use "[client_prefix]_profile". I'll cover project prefixes in another blog post.

After some trial and error, I've settled on a solution which I think works for having a common starting point for an installation profile that will diverge over time using a unique namespace. My solution relies on some basic templates and a bash script with a bit of sed. I could have written all of this in PHP and even made a drush plugin for it, but I prefer to do this kind of thing on the command line with bash. I'm happy to work with someone to port it to a drush plugin if you're interested.

Here is a simple example of the templates you could use for creating your installation profile. The version on github is closer to what I actually use for clients, along with the build script.

base.info

name = PROFILE_NAME
description = PROFILE_DESCRIPTION
core = 7.x
dependencies[] = block
dependencies[] = dblog

base.install

<?php
/**
 * @file
 * Install, update and uninstall functions for the PROFILE_NAME install profile.
 */

/**
 * Implements hook_install().
 *
 * Performs actions to set up the site for this profile.
 *
 * @see system_install()
 */
function PROFILE_NAMESPACE_install() {
  // Enable some standard blocks.
  $default_theme = variable_get('theme_default', 'bartik');
  $values = array(
    array(
      'module' => 'system',
      'delta' => 'main',
      'theme' => $default_theme,
      'status' => 1,
      'weight' => 0,
      'region' => 'content',
      'pages' => '',
      'cache' => -1,
    ),
    array(
      'module' => 'user',
      'delta' => 'login',
      'theme' => $default_theme,
      'status' => 1,
      'weight' => 0,
      'region' => 'sidebar_first',
      'pages' => '',
      'cache' => -1,
    ),
    array(
      'module' => 'system',
      'delta' => 'navigation',
      'theme' => $default_theme,
      'status' => 1,
      'weight' => 0,
      'region' => 'sidebar_first',
      'pages' => '',
      'cache' => -1,
    ),
    array(
      'module' => 'system',
      'delta' => 'management',
      'theme' => $default_theme,
      'status' => 1,
      'weight' => 1,
      'region' => 'sidebar_first',
      'pages' => '',
      'cache' => -1,
    ),
    array(
      'module' => 'system',
      'delta' => 'help',
      'theme' => $default_theme,
      'status' => 1,
      'weight' => 0,
      'region' => 'help',
      'pages' => '',
      'cache' => -1,
    ),
  );
  $query = db_insert('block')->fields(array('module', 'delta', 'theme', 'status', 'weight', 'region', 'pages', 'cache'));
  foreach ($values as $record) {
    $query->values($record);
  }
  $query->execute();

  // Allow visitor account creation, but with administrative approval.
  variable_set('user_register', USER_REGISTER_VISITORS_ADMINISTRATIVE_APPROVAL);

  // Enable default permissions for system roles.
  user_role_grant_permissions(DRUPAL_ANONYMOUS_RID, array('access content'));
  user_role_grant_permissions(DRUPAL_AUTHENTICATED_RID, array('access content'));
}

// Add hook_update_N() implementations below here as needed.

base.profile

<?php
/**
 * @file
 * Enables modules and site configuration for a PROFILE_NAME site installation.
 */

/**
 * Implements hook_form_FORM_ID_alter() for install_configure_form().
 *
 * Allows the profile to alter the site configuration form.
 */
function PROFILE_NAMESPACE_form_install_configure_form_alter(&$form, $form_state) {
  // Pre-populate the site name with the server name.
  $form['site_information']['site_name']['#default_value'] = $_SERVER['SERVER_NAME'];
}

Some developers might recognise the code above; it is from the minimal installation profile.

The installation profile builder script is a simple bash script that relies on sed.

build-profile.sh

#!/bin/bash
#
# Installation profile builder
# Created by Dave Hall http://davehall.com.au
#

FILES="base.info base.install base.profile"
OK_NS_CHARS="a-z0-9_"
SCRIPT_NAME=$(basename $0)

namespace="my_profile"
name=""
description="My automatically generated installation profile."
target=""

usage() {
  echo "usage: $SCRIPT_NAME -t target_path -s profile_namespace [-d 'project_descrption'] [-n 'human_readable_profile_name']"
}

while getopts  "d:n:s:t:h" arg; do
  case $arg in
    d)
      description="$OPTARG"
      ;;
    n)
      name="$OPTARG"
      ;;
    s)
      namespace="$OPTARG"
      ;;
    t)
      target="$OPTARG"
      ;;
    h)
      usage
      exit
      ;;
  esac
done

if [ -z "$target" ]; then
  echo ERROR: You must specify a target path. >&2
  exit 1;
fi

if [ ! -d "$target" -o ! -w "$target" ]; then
  echo ERROR: The target path must be a writable directory that already exists. >&2
  exit 1;
fi

# Strip any disallowed characters; if the result differs, the namespace was invalid.
ns_test=${namespace//[^$OK_NS_CHARS]/}
if [ "$ns_test" != "$namespace" ]; then
  echo "ERROR: The namespace can only contain lowercase alphanumeric characters and underscores ($OK_NS_CHARS)" >&2
  exit 1
fi

if [ -z "$name" ]; then
  name="$namespace";
fi

for file in $FILES; do
  echo Processing $file
  sed -e "s/PROFILE_NAMESPACE/$namespace/g" -e "s/PROFILE_NAME/$name/g" -e "s/PROFILE_DESCRIPTION/$description/g" $file > $target/$file
done

echo Completed generating files for $name installation profile in $target.


Place all of the above files into a directory. Before you can generate your first profile you must run "chmod +x build-profile.sh" to make the script executable.

You need to create the output directory; for testing we will use ~/test-profile, so run "mkdir ~/test-profile" to create the path. To build your profile run "./build-profile.sh -s test -t ~/test-profile". Once the script has run you should have a test installation profile in ~/test-profile.

I will continue to maintain this as a project on github.

Check Drupal Module Status Using Bash

When you run a lot of Drupal sites it can be annoying to keep track of all of the modules contained in a platform and ensure all of them are up to date. One option is to set up a dummy site with all the modules installed and email notifications enabled. This is OK, but then you need to make sure you enable the additional modules every time you add something to your platform.

I wanted to be able to check the status of all of the modules in a given platform from the command line. I started scratching the itch by writing a simple shell script that uses the Drupal updates server to check the status of all the modules. I kept polishing it until I was happy with it. There are some bits which are a little ugly, but that is mostly due to the limitations of bash. If I had to rewrite it, I would do it in PHP or some other language which understands arrays/lists and has HTTP client and XML libraries.

The script supports excluding modules using an extended grep regular expression pattern and nominating a major version of Drupal. When there is a version mismatch it will be shown in bold red, while modules where the versions match will be shown in green. The script filters out all dev and alpha releases; after all, it is designed for checking production sites. Adding support for per module update servers should be pretty easy to do, but I don't have modules to test this with.
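At its heart the script asks the updates server for the release history of each module. A stripped down sketch of that check (the module name and core version here are just examples):

MODULE=views
CORE=7.x
# Grab the release feed, pull out the version strings, drop dev and
# alpha releases and keep the newest of what is left.
curl -s "https://updates.drupal.org/release-history/$MODULE/$CORE" \
  | sed -n 's|.*<version>\([^<]*\)</version>.*|\1|p' \
  | grep -vE '(dev|alpha)' \
  | head -n 1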

To use the script, download it and save it somewhere handy, such as ~/bin/check-module-status.sh, then make it executable (run "chmod +x ~/bin/check-module-status.sh"). Now it is ready for you to run - ~/bin/check-module-status.sh /path/to/drupal - and wait for the output.