
Baseline Playbook


Ansible playbook to create a baseline configuration after provisioning a new VM

Background

Traditionally, when a new host was provisioned, several playbooks were needed to configure things like monitoring and backups. This playbook repository unifies the changes needed on most new hosts. It uses the following roles to achieve this task:

Security

This playbook repository contains secrets that are encrypted using the Ansible vault. The passphrase to decrypt the vault lives in the GPG-encrypted file vault_passphrase.gpg. If you need access to the encrypted parts of this playbook or you want to be able to encrypt variables whilst setting up a new host, simply create an issue in this repository and we will review your request.
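Once you have access, encrypted values are stored as vaulted variables. A hypothetical sketch of what such an entry looks like in a host_vars file (the variable name and ciphertext are placeholders, not taken from this repository):

```yaml
# host_vars/example.fsfeurope.org.yml (hypothetical)
# Created with: ansible-vault encrypt_string 'secret' --name 'my_secret_token'
my_secret_token: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  <ciphertext produced by ansible-vault>
```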

Install

You need a host that runs at least Debian 10 and an up-to-date version of our inventory, the latter of which can be obtained by running:

git clone --recurse-submodules git@git.fsfe.org:fsfe-system-hackers/baseline.git

or when you have already cloned the repository without recursing into submodules:

git submodule update --remote inventory

Next, you need Ansible (at least version 2.10). The easiest way to install Ansible is pipx, which you can install via pip or your favourite package manager. After installing it, you can simply run:

pipx install --include-deps ansible

Usage

In order to configure a new host, you need to do the following:

  1. Apply the label baseline to the desired host in the inventory.
  2. Add a host configuration file in host_vars. Take a look at this example to get an idea.
  3. Then, simply run
ansible-playbook playbook.yml
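The host configuration file from step 2 can be as small as a few overrides. A minimal, hypothetical example using variables defined in group_vars/all.yml (see the Configuration section):

```yaml
# host_vars/example.fsfeurope.org.yml — hypothetical minimal host configuration
# Only variables that deviate from group_vars/all.yml need to be set here.
sshd_max_sessions: 5
unattended_automatic_reboot: false
```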

If you only want to run certain parts of the playbook, take a look at the available tags and limit the playbook run to tasks with those tags:

ansible-playbook playbook.yml -t hardening

Available tags are:

  • hardening
  • sshd
  • fail2ban
  • unattended-upgrades
  • monitoring
  • backup

Example: add/change configuration of client monitoring

To add a new server to the monitoring, or to update its client plugins, including the necessary changes on the monitoring server, run:

ansible-playbook playbook.yml -l example.fsfeurope.org -t monitoring

You do not have to define the icinga2 server; the playbook integrates it on its own.

If you are interested, also read the detailed wiki entries on how to add a new server to the monitoring and how to add client plugins.

Gotchas

If you run this playbook against multiple servers at once, add -f 1 as a parameter. Otherwise, the rewriting of the authorized_keys file that happens during backup initialisation on the remote storage can cause conflicts when accessed by multiple processes at once.
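For example, to run only the backup tasks serially across all targeted hosts (command shape assumed from the examples above):

```shell
ansible-playbook playbook.yml -t backup -f 1
```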

Configuration

To configure the behaviour of the roles for the host in question, take a look at group_vars/all.yml. This file specifies the default configuration for our hosts and looks as follows:

##########################################################################################
# hardening | sshd
##########################################################################################
sshd_accept_env: LANG LC_*
sshd_allow_agent_forwarding: "no"
sshd_allow_tcp_forwarding: "no"
sshd_authentication_methods: any
sshd_banner: /etc/issue.net
sshd_challenge_response_authentication: "no"
sshd_ciphers: chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
sshd_client_alive_count_max: 1
sshd_client_alive_interval: 200
sshd_compression: "no"
sshd_gssapi_authentication: "no"
sshd_hostbased_authentication: "no"
sshd_ignore_user_known_hosts: "yes"
sshd_kerberos_authentication: "no"
sshd_kex_algorithms: curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
sshd_log_level: VERBOSE
sshd_login_grace_time: 20
sshd_macs: hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512,hmac-sha2-256
sshd_max_auth_tries: 3
sshd_max_sessions: 3
sshd_max_startups: 10:30:60
sshd_password_authentication: "no"
sshd_permit_empty_passwords: "no"
sshd_permit_root_login: "yes"
sshd_permit_user_environment: "no"
sshd_port: 22
sshd_print_last_log: "yes"
sshd_print_motd: "no"
sshd_rekey_limit: 512M 1h
sshd_strict_modes: "yes"
sshd_subsystem: sftp internal-sftp
sshd_tcp_keep_alive: "no"
sshd_use_dns: "no"
sshd_use_pam: "yes"
sshd_x11_forwarding: "no"

##########################################################################################
# hardening | fail2ban
##########################################################################################
fail2ban_loglevel: INFO
fail2ban_logtarget: /var/log/fail2ban.log
fail2ban_ignoreself: "true"
fail2ban_ignoreips: "127.0.0.1/8 ::1"
fail2ban_bantime: 600
fail2ban_findtime: 600
fail2ban_maxretry: 5
fail2ban_destemail: "system-monitoring@lists.fsfe.org"
fail2ban_sender: root@{{ ansible_fqdn }}
fail2ban_jail_configuration:
  - option: enabled
    value: "true"
    section: sshd
  - option: mode
    value: "aggressive"
    section: sshd

##########################################################################################
# unattended-upgrades
##########################################################################################
unattended_origins_patterns:
  # security updates
  - "origin=Debian,codename=${distro_codename},label=Debian-Security"
  # updates including non-security updates
  - "origin=Debian,codename=${distro_codename},label=Debian"
unattended_autoclean_interval: 21
unattended_download_upgradeable: 1
unattended_automatic_reboot: true
unattended_verbose: 1
unattended_mail: "system-monitoring@lists.fsfe.org"
unattended_mail_only_on_error: true

##########################################################################################
# fsfe-backup
##########################################################################################
borgbackup_servers:
  - fqdn: u124410.your-storagebox.de
    user: u124410-sub2
    type: hetzner
    home: ""
    pool: servers
    options: ""
borgbackup_cron_day: "1-7"
borgbackup_cron_mailto: admin+backup@fsfe.org
borgbackup_pre_commands:
  - '[[ ! -f "/root/bin/backup.sh" ]] || /root/bin/backup.sh'
borgbackup_post_commands:
  - "/usr/local/bin/borg-wrapper list"
  - '[[ ! -f "/root/bin/post-backup.sh" ]] || /root/bin/post-backup.sh'

If you want to change those variables for a new host, say example.fsfeurope.org, simply copy the relevant entries from above into the corresponding file in host_vars (e.g. example.fsfeurope.org.yml) and amend them as you see fit. For more information on which variables take precedence over others, refer to the Ansible documentation on variable precedence.
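For example, to disable automatic reboots and shift the backup window for example.fsfeurope.org, the host_vars file could look like this (a hypothetical sketch; the variable names are copied from group_vars/all.yml above):

```yaml
# host_vars/example.fsfeurope.org.yml — hypothetical per-host overrides
# These take precedence over the defaults in group_vars/all.yml.
unattended_automatic_reboot: false
borgbackup_cron_day: "8-14"
```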