HashiCorp Vault is a tool for centralized secrets management. Secrets could be API keys, passwords, certificates, etc. A central secrets system enables locking down who can access secrets, rotating secrets automatically, keeping audit logs of secret access, revoking compromised secrets, and much more. In my opinion, the best way to learn a technology is to use it. For this reason, we are going to build a demo system in this blog post.

Demo Scenario

We will use Vault to manage three sets of credentials for PostgreSQL. One credential will have superuser rights and will be used for database administration. The other two will have read-only and write-only access, respectively. The write-only credential will be used by the “logger application”, which writes a row to a table in PostgreSQL every five seconds. The read-only credential will be used by the “server application”, which reads the same table the logger application writes to and returns the data as an HTML page.

Demo scenario overview


Installation

You will need Docker, Docker Compose and the Vault client to follow along. Everything else is in the demo git repository:

git clone https://github.com/Nick-Triller/vault-demo.git

All files that are referred to in commands are in the directory demo/files of the git repository.

Initialize Vault

Start the demo with docker-compose up -d. By default, the Vault client assumes Vault can be reached at https://127.0.0.1:8200; set the environment variable VAULT_ADDR to override this default. In my environment on Linux without a virtual machine for Docker, Vault can be reached at http://localhost:8200 (note the lack of TLS in contrast to the default).

export VAULT_ADDR=http://localhost:8200

Verify Vault can be reached with

vault status

The output should indicate Vault is sealed. Next, we initialize Vault. Initialization generates the master key that is used to encrypt and decrypt the underlying data encryption key. The master key is held in memory only, so it must be provided every time a Vault instance is restarted. However, it is possible to store the master key in a separate service that is responsible for providing it to Vault automatically. To learn more, search for “auto unseal”.

Vault uses Shamir’s Secret Sharing to split the master key into multiple shards. A configurable number of shards, which are also called unseal keys, is sufficient to reconstruct the master key. The idea is to distribute the unseal keys across multiple people. Losing a single unseal key is no problem as long as the minimum number of unseal keys remains available. However, there is no way to unseal Vault if too many keys are lost; all secrets would be lost forever, so choose the number of shards and the key threshold accordingly. For the demo, we generate three shards of which two are required to reconstruct the master key:

vault operator init -key-shares 3 -key-threshold 2

Make sure to store the unseal keys and root token generated by vault operator init securely. We can unseal our Vault instance with vault operator unseal. Each time we run this command, we will be asked to enter an unseal key. We need to provide two unseal keys to unseal successfully.

vault operator unseal
vault operator unseal
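The unseal workflow above (split the master key into three shards, reconstruct it from any two) can be illustrated with a toy Shamir's Secret Sharing implementation in Python. This is a sketch for intuition only; Vault's real implementation differs, and the prime field and function names here are invented for the example.

```python
import random

PRIME = 2 ** 127 - 1  # a Mersenne prime; the field must be larger than the secret

def split(secret, shares=3, threshold=2):
    """Split secret into `shares` points on a random polynomial of
    degree threshold-1; any `threshold` points recover the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    points = []
    for x in range(1, shares + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        points.append((x, y))
    return points

def combine(points):
    """Recover the secret (the polynomial evaluated at x=0)
    via Lagrange interpolation over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse of den
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Any two of the three shares reconstruct the secret, while a single share reveals nothing about it; this is why losing one unseal key is harmless but losing two is fatal.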

Root Token

The root token returned by vault operator init will be used to authenticate with Vault as a root user and configure it. The token auth method is the first method of authentication in Vault and the only authentication method that can’t be disabled.

Generally, tokens are similar to session IDs on websites. Authentication works by verifying your identity and then generating a token associated with that identity. Authorization, on the other hand, is handled with policies that govern the access privileges users have in Vault. The root policy is attached to root tokens, so root tokens are allowed access to everything in Vault.

Furthermore, root tokens never expire. We will revoke the initial root token manually once the demo system is set up. A new root token can be generated at any time with vault operator generate-root; unseal keys have to be provided to generate it.

To use the root token, run vault login and enter the root token.

Enable Auditing

Auditing is disabled initially. Audit devices must be enabled by a root user with vault audit enable. If any audit devices are enabled, Vault has to succeed in logging to at least one of them before it responds to a request. This means Vault stops responding if all audit devices are blocked.

We use stdout as the audit device. Logging to stdout makes sense if a centralized logging system collects all container logs. All authenticated requests, including errors, will be logged:

vault audit enable file file_path=stdout

Create Admin Role in PostgreSQL

Next, we create a new role in PostgreSQL. This role will be used for administrative purposes, e.g., for the creation of tables. We don’t create PostgreSQL roles for the demo applications; Vault creates those, as will be explained below.

docker-compose exec postgres bash
psql -U vault
CREATE ROLE admin WITH SUPERUSER LOGIN PASSWORD 'zOhT73BbC6W6fF2GO6MK';
\du
\q

The password will be rotated by Vault regularly. As we connect to PostgreSQL from inside the container, we don’t need to provide a password.

Create Policies in Vault

We create three policies that define which secrets users can access. The policy file applogger-policy.hcl specifies that any user with this policy can read the “database/creds/logger” secret:

path "database/creds/logger" {
  capabilities = [ "read" ]
}

Create the policy with the name applogger: vault policy write applogger applogger-policy.hcl

The other two policies look very similar. The appserver only needs read-only access to PostgreSQL, so its policy allows access to the “database/creds/readonly” secret. Create it with vault policy write appserver appserver-policy.hcl

path "database/creds/readonly" {
  capabilities = [ "read" ]
}

Finally, we create a policy for the database administrator with vault policy write dbadmin dbadmin-policy.hcl

path "database/static-creds/admin" {
  capabilities = [ "read" ]
}

We associate these policies with users once we create the users in Vault. You can list policies with vault read sys/policy and inspect one with vault read sys/policy/<policy name>.
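To build intuition for how policies gate access, the three policies above can be modeled as a toy lookup table. The data structure and helper function are hypothetical; real Vault policy evaluation also supports glob paths and additional capabilities.

```python
# Each policy maps exact secret paths to the capabilities it grants.
POLICIES = {
    "applogger": {"database/creds/logger": {"read"}},
    "appserver": {"database/creds/readonly": {"read"}},
    "dbadmin":   {"database/static-creds/admin": {"read"}},
}

def allowed(token_policies, path, capability):
    """A request succeeds if any policy attached to the token
    grants the requested capability on the requested path."""
    return any(
        capability in POLICIES.get(p, {}).get(path, set())
        for p in token_policies
    )
```

For example, a token carrying only the appserver policy can read the readonly credentials but not the logger credentials.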

Configure Auth Methods in Vault and Create Users

Auth methods are components in Vault that perform authentication and assign an identity and policies to users. Some auth methods are designed for authentication by machines, others for authentication by humans. We will use three auth methods in this demo. The first one, token auth, was already mentioned. Additionally, we will use the userpass and approle auth methods.

As indicated by the name, the userpass auth method allows authenticating to Vault with a username and password. We will use it for the database administrator user; in a real deployment, the LDAP auth method would make more sense for this use case. Enable userpass and create a user with the following commands. We specify the username ntriller, a password, and the policies that apply to the user.

vault auth enable userpass
vault write auth/userpass/users/ntriller password=rKDFmzKIdD8HgwvQguWk policies=dbadmin

Let’s verify that logging in with the new user works: vault login -method=userpass username=ntriller password=rKDFmzKIdD8HgwvQguWk. Log in with the root token again afterwards.

The approle auth method allows machines and applications to authenticate with Vault-defined roles. First, enable the auth method:

vault auth enable approle

Next, we create two approles, one for the server app and one for the logger app. We associate the appserver and applogger policies we created before with the respective approles.

vault write auth/approle/role/appserver policies="appserver"
vault write auth/approle/role/applogger policies="applogger"

To authenticate, we need the role_id of the approle and a secret_id. The role_id identifies the role; the secret_id acts like a password. Each instance of an application should get its own secret_id, while the role_id is the same across instances. We fetch the role_ids for the two roles and store them in files:

vault read -field=role_id auth/approle/role/appserver/role-id > readerapp/vault/role-id.txt
vault read -field=role_id auth/approle/role/applogger/role-id > loggerapp/vault/role-id.txt

Next, generate secret_ids and also store them in files.

vault write -field=secret_id -f auth/approle/role/appserver/secret-id > readerapp/vault/secret-id.txt
vault write -field=secret_id -f auth/approle/role/applogger/secret-id > loggerapp/vault/secret-id.txt

Each time we run vault write -f auth/approle/role/<roleName>/secret-id, a new secret id is generated. In reality, a configuration management or provisioning system would supply the role id and secret id to the application. We will mount the role id and secret id files into containers to use them.
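The approle login itself boils down to posting the role_id and secret_id to Vault's auth/approle/login endpoint. A sketch of composing that request from the files we just wrote; the helper function is hypothetical, while the API path is Vault's.

```python
def approle_login_request(role_id_file, secret_id_file):
    """Read role_id and secret_id from files (as mounted into the
    containers) and build the login request for Vault's HTTP API."""
    with open(role_id_file) as f:
        role_id = f.read().strip()
    with open(secret_id_file) as f:
        secret_id = f.read().strip()
    return {
        "path": "v1/auth/approle/login",
        "payload": {"role_id": role_id, "secret_id": secret_id},
    }
```

In the demo, Vault Agent performs this exchange for us and writes the resulting token to a file.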

Verify that login with role_id and secret_id works with the following command, inserting your role_id and secret_id from the files we created in demo/readerapp/vault/. You should receive a token from Vault.

vault write auth/approle/login role_id="<your role id>" secret_id="<your secret id>"

Configure Secrets Engine

Secrets engines are components that store, generate, or encrypt data. They can be enabled and disabled like auth methods. We will use the database secrets engine, which generates secrets dynamically: every time a secret is requested, the engine creates a new role in PostgreSQL and returns the username and password. Once the secret expires, the engine deletes the role in PostgreSQL. This minimizes the time a leaked credential can be used by an attacker.

A PostgreSQL role can’t be dropped while database objects it created still exist. For this reason, we use a static role for the database administrator. With static roles, the engine does not drop the role in PostgreSQL once the associated secret expires. Instead, the role’s password is changed regularly.
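The difference between the two role types can be sketched as a toy model: dynamic roles create a fresh database user per request and drop it on expiry, while a static role keeps one user and only rotates its password. All class and variable names here are invented for the illustration.

```python
import secrets

class DynamicRole:
    def __init__(self):
        self.db_users = {}   # stands in for roles inside PostgreSQL
        self._counter = 0

    def issue(self, ttl):
        """Create a brand-new database user with a random password."""
        self._counter += 1
        user = f"v-approle-logger-{self._counter}"
        self.db_users[user] = secrets.token_hex(8)
        return user, ttl

    def expire(self, user):
        """On lease expiry, the engine drops the PostgreSQL role."""
        del self.db_users[user]

class StaticRole:
    def __init__(self, user):
        self.user = user
        self.password = secrets.token_hex(8)

    def rotate(self):
        """Same user survives; only the password changes."""
        self.password = secrets.token_hex(8)
        return self.user, self.password
```

This is why the admin role, which owns tables, must be static: dropping it would fail, whereas the short-lived reader and logger users own nothing and can be dropped freely.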

Enable the databases secrets engine:

vault secrets enable database

It will be mounted on the path “/database” by default and can also be mounted multiple times under different paths. Next, we configure how the postgresql plugin connects to PostgreSQL:

vault write database/config/postgresql plugin_name=postgresql-database-plugin allowed_roles="*" connection_url=postgres://{{username}}:{{password}}@postgres:5432/postgres?sslmode=disable username="vault" password="SDUltVgaf110T0wnQOku"

Vault should be able to connect to PostgreSQL and manage roles now. Rotate the root credentials immediately to let Vault change the initial PostgreSQL root password:

vault write -force database/rotate-root/postgresql

Finally, we create roles in the database secrets engine. They map a secret path to a role in PostgreSQL and specify how Vault creates PostgreSQL roles or rotates their passwords (for static roles).

vault write database/roles/readonly db_name=postgresql creation_statements=@readonly.sql default_ttl=10m max_ttl=20m
vault write database/roles/logger db_name=postgresql creation_statements=@logger.sql default_ttl=10m max_ttl=20m
vault write database/static-roles/admin db_name=postgresql rotation_statements=@rotation.sql username="admin" rotation_period=86400

We use very short time-to-live values for testing purposes. The password of the static admin role is rotated every 24 hours, while the dynamic credentials expire after 10 minutes by default and 20 minutes at most. The files readonly.sql, logger.sql, and rotation.sql look like this:

-- readonly.sql
CREATE ROLE "{{name}}" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "{{name}}";

-- logger.sql
CREATE ROLE "{{name}}" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
GRANT INSERT ON ALL TABLES IN SCHEMA public TO "{{name}}";
GRANT USAGE, SELECT ON SEQUENCE record_id_seq TO "{{name}}";

-- rotation.sql
ALTER USER "{{name}}" WITH PASSWORD '{{password}}';

This is where we specify the access rights of the PostgreSQL roles: “readonly” can read all tables, while “logger” can insert into all tables and use the sequence “record_id_seq”, which auto-increments the id column. The admin role has the superuser attribute set and can therefore do anything in PostgreSQL.
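Conceptually, the engine fills in the {{name}}, {{password}}, and {{expiration}} placeholders for each issued credential before running the statements against PostgreSQL. A minimal sketch of that substitution step; the render helper is hypothetical, not Vault's actual code.

```python
def render(statement, name, password, expiration):
    """Substitute the placeholders Vault supports in
    creation_statements / rotation_statements."""
    return (statement
            .replace("{{name}}", name)
            .replace("{{password}}", password)
            .replace("{{expiration}}", expiration))
```

The generated username and password are what the application ultimately receives when it reads database/creds/readonly or database/creds/logger.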

Create Database Objects in PostgreSQL

Let’s get admin credentials for PostgreSQL to create database objects such as tables. We could read the database admin secret directly because we are using the root token and can access anything, but we authenticate with Vault as the dbadmin user anyway for testing purposes.

vault login -method=userpass username=ntriller password=rKDFmzKIdD8HgwvQguWk

Fetch the database admin credentials:

vault read database/static-creds/admin

Use the credentials to log in to PostgreSQL and create the table defined in the file “schema.sql”. I used DBeaver Community for this step.

CREATE TABLE record(
    id serial PRIMARY KEY, -- sequence will be created implicitly
    content TEXT NOT NULL,
    created_on TIMESTAMP NOT NULL DEFAULT NOW()
);

Integrate Applications with Vault

Vault and PostgreSQL are fully configured. The last missing piece is connecting Vault to the applications that need credentials to access PostgreSQL. Of course, it’s possible to add that code to the applications themselves: they would need to authenticate with Vault and manage secret renewal. Instead, we will use two tools from HashiCorp, Vault Agent and Consul Template, to handle authentication to Vault and secret renewal outside of our application code.

Vault Agent retrieves an auth token and stores it in a file. Consul Template provides a convenient way to populate values from Consul or Vault into the file system: it reads the Vault token and uses it to retrieve the secrets our applications need, so the applications can simply read the secrets from a file. Consul Template handles secret renewal automatically, so the application has to re-read the file when it changes. Consul Template can even restart the application once a secret changes, but we won’t make use of this feature in this demo. Instead, we re-read the secret file and reconnect to the database for each request. The picture below illustrates the responsibilities of the components.
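The per-request re-read can be sketched as a small helper that reloads the credentials file only when it changes on disk. The class name is made up for this sketch; the file path and [database] section mirror the demo apps.

```python
import configparser
import os

class CredsCache:
    """Reload the Consul Template output only when its mtime changes."""
    def __init__(self, path):
        self.path = path
        self.mtime = None
        self.creds = None

    def get(self):
        mtime = os.stat(self.path).st_mtime
        if mtime != self.mtime:
            config = configparser.ConfigParser()
            config.read(self.path)
            self.creds = dict(config["database"])
            self.mtime = mtime
        return self.creds
```

The demo apps skip even this optimization and simply parse the file on every request, which is fine at one write every five seconds.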

Integrating applications and Vault with Vault Agent and Consul Template


Vault Agent and Consul Template could be integrated as sidecar containers in Kubernetes. We use Docker Compose for this demo, therefore the Vault Agent, Consul Template and Application containers communicate via mounted files. The complete Docker Compose file looks like this:

version: '3.1'
services:

  vault:
    image: 'library/vault:1.3.0'
    restart: 'unless-stopped'
    ports: 
    - '8200:8200'
    volumes:
    - './files/vault.hcl:/vault/config/vault.hcl:ro'
    cap_add:
      - 'IPC_LOCK'
    command: ["server", "-log-level=debug"]

  postgres:
    image: 'library/postgres:12.0-alpine'
    restart: 'unless-stopped'
    ports:
    - '5432:5432'
    environment:
    - 'POSTGRES_PASSWORD=SDUltVgaf110T0wnQOku'
    - 'POSTGRES_USER=vault'

  #
  # LOGGER
  #
  agent-logger:
    image: 'library/vault:1.3.0'
    restart: 'unless-stopped'
    volumes:
    - './files/agent.hcl:/conf/agent.hcl:ro'
    - './loggerapp/vault:/conf/vault/:ro'
    - './volumes/logger-agent/:/out/'
    command: ["agent", "-config=/conf/agent.hcl"]

  template-logger:
    image: 'hashicorp/consul-template:0.23.0-alpine'
    restart: 'unless-stopped'
    volumes:
    - './files/template-conf.hcl:/conf/conf.hcl:ro'
    - './files/logger_creds.ctmpl:/conf/template.ctmpl:ro'
    - './volumes/logger-agent/:/conf/agent/:ro'
    - './volumes/logger-template/:/out/'
    command: ["consul-template", "-config=/conf/conf.hcl"]

  app-logger:
    restart: 'unless-stopped'
    build:
      dockerfile: Dockerfile
      context: ./loggerapp
    volumes:
    - './volumes/logger-template/:/app/creds:ro'

  #
  # SERVER
  #
  agent-server:
    image: 'library/vault:1.3.0'
    restart: 'unless-stopped'
    volumes:
    - './files/agent.hcl:/conf/agent.hcl:ro'
    - './readerapp/vault/:/conf/vault/:ro'
    - './volumes/reader-agent/:/out/'
    command: ["agent", "-config=/conf/agent.hcl"]

  template-server:
    image: 'hashicorp/consul-template:0.23.0-alpine'
    restart: 'unless-stopped'
    volumes:
    - './files/template-conf.hcl:/conf/conf.hcl:ro'
    - './files/reader_creds.ctmpl:/conf/template.ctmpl:ro'
    - './volumes/reader-agent/:/conf/agent/:ro'
    - './volumes/reader-template/:/out/'
    command: ["consul-template", "-config=/conf/conf.hcl"]

  app-server:
    restart: 'unless-stopped'
    build:
      dockerfile: Dockerfile
      context: ./readerapp
    ports:
    - '8000:8000'
    volumes:
    - './volumes/reader-template:/app/creds:ro'

The files with the *.ctmpl ending define which secrets Consul Template fetches and the format of the file it creates:

{{- with secret "database/creds/readonly" -}}
[database]
username={{ .Data.username }}
password={{ .Data.password }}
{{- end }}
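The rendered output is a small INI fragment, which is why the apps can parse it with Python's configparser. A sketch of the round trip; the render helper is hypothetical and merely mimics what Consul Template produces from the template above.

```python
import configparser

def render_creds(secret):
    """Mimic the output Consul Template writes for the *.ctmpl above."""
    return "[database]\nusername={username}\npassword={password}\n".format(**secret)

def parse_creds(text):
    """Parse the rendered file the way the demo apps do."""
    config = configparser.ConfigParser()
    config.read_string(text)
    return dict(config["database"])
```
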

The Applications

The applications themselves are fairly simple. The logger app code is located in the directory demo/loggerapp. Approximately every five seconds, the logger app creates a row in the record table in PostgreSQL. The database credentials are read anew, and a new database connection is established for each write. Each row contains a random string, a timestamp, and an ID; the timestamp and ID are added by PostgreSQL. This is the complete code:

import psycopg2
import configparser
import time
import string
import random
import os
import logging

log = logging.getLogger(__name__)
config_section = "database"

def read_conf():
  config = configparser.ConfigParser()
  config.read("./creds/creds.txt")
  if config_section not in config:
      config[config_section] = {}
  config[config_section]["host"] = os.environ.get("DB_HOST") or "postgres"
  return config

def connect(host, username, password):
  conn_str = f"dbname='postgres' user='{username}' host='{host}' password='{password}'"
  conn = psycopg2.connect(conn_str)
  return conn

def randomString(stringLength=10):
    """Generate a random string of fixed length"""
    letters = string.ascii_lowercase
    return ''.join(random.choice(letters) for i in range(stringLength))

def main():
  while True:
    config = read_conf()
    username = config[config_section]["username"]
    password = config[config_section]["password"]
    host = config[config_section]["host"]
    with connect(host, username, password) as conn:
      with conn.cursor() as cur:
        data = randomString()
        cur.execute("INSERT INTO record (content) VALUES (%s)", (data,))
        conn.commit()
        log.debug("Saved record in DB, data:" + data)
    time.sleep(5)

if __name__ == "__main__":
  logging.basicConfig(level=logging.DEBUG)
  main()

The reader app, located in the directory demo/readerapp, contains an HTTP server with one endpoint. Every time the endpoint is called, the last 100 entries from the record table are returned. This is the complete code:

import psycopg2
from psycopg2.extras import RealDictCursor
import configparser
import logging
import os
import json
import datetime
from flask import Flask

log = logging.getLogger(__name__)
app = Flask(__name__)
config_section = "database"

def read_conf():
  config = configparser.ConfigParser()
  config.read("./creds/creds.txt")
  if config_section not in config:
      config[config_section] = {}
  config[config_section]["host"] = os.environ.get("DB_HOST") or "postgres"
  return config

def connect(host, username, password):
  conn_str = f"dbname='postgres' user='{username}' host='{host}' password='{password}'"
  conn = psycopg2.connect(conn_str)
  return conn

@app.route('/')
def hello():
  config = read_conf()
  username = config[config_section]["username"]
  password = config[config_section]["password"]
  host = config[config_section]["host"]
  with connect(host, username, password) as conn:
    with conn.cursor(cursor_factory=RealDictCursor) as cur:
      cur.execute("SELECT * FROM record ORDER BY created_on DESC LIMIT 100")
      result = cur.fetchall()
      return "<pre>" + json.dumps(result, sort_keys=True, indent=2, default=default) + "</pre>"

def default(o):
  if isinstance(o, (datetime.date, datetime.datetime)):
    return o.isoformat()

if __name__ == '__main__':
  logging.basicConfig(level=logging.DEBUG)
  app.run(host='0.0.0.0', port=8000)

Let’s validate that everything works as expected. This is the result if we browse to <docker host ip>:8000/:

[
  {
    "content": "mrndtwxghi",
    "created_on": "2019-11-24T17:21:28.247688",
    "id": 8
  },
  {
    "content": "bmxxntflce",
    "created_on": "2019-11-24T17:21:23.237531",
    "id": 7
  },
  {
    "content": "kmwqcrpjcu",
    "created_on": "2019-11-24T17:21:18.227440",
    "id": 6
  },
  {
    "content": "oqquxnorwy",
    "created_on": "2019-11-24T17:21:13.217458",
    "id": 5
  },
  {
    "content": "idjyxxtbmz",
    "created_on": "2019-11-24T17:21:08.206553",
    "id": 4
  },
  {
    "content": "keadpwphce",
    "created_on": "2019-11-24T17:21:03.196565",
    "id": 3
  },
  {
    "content": "diygajnzwr",
    "created_on": "2019-11-24T17:20:58.184586",
    "id": 2
  },
  {
    "content": "tnqtblfmcn",
    "created_on": "2019-11-24T17:20:53.173890",
    "id": 1
  }
]

Audit Secret Access

Take a look at the audit log with

docker-compose logs vault

Revoke the Root Token

Finally, we revoke the root token.

vault token revoke <token>

The token can’t be used to authenticate with Vault after revocation.

Conclusion

Vault introduces some operational complexity, but the security advantages are significant. I imagine Vault is especially useful for managing secrets across multiple cloud providers. I like that Vault Agent and Consul Template make it possible to integrate Vault without changing the application itself. Vault is a great solution for high-risk environments such as finance.