Baseline Playbook


Ansible playbook to create a baseline configuration after provisioning a new VM

Table of Contents

  • Background
  • Security
  • Install
  • Usage
  • Gotchas
  • Configuration

Background

Traditionally, when a new host was provisioned, several playbooks were needed to configure things like monitoring and backups. This playbook repository unifies the changes needed on most new hosts. It uses the following roles to achieve this task:

Security

This playbook repository contains secrets that are encrypted using the Ansible vault. The passphrase to decrypt the vault lives in the GPG-encrypted file vault_passphrase.gpg. If you need access to the encrypted parts of this playbook or you want to be able to encrypt variables whilst setting up a new host, simply create an issue in this repository and we will review your request.
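To encrypt a single variable so it can be pasted into a host configuration file, `ansible-vault`'s `encrypt_string` subcommand can be used. The invocation below is a sketch; the variable name and value are placeholders, and it assumes the vault passphrase is available, e.g. after running the repository's open_the_vault.sh:

```
# Encrypt a single value for use in a host_vars file.
# "my_secret" and "s3cret" are placeholders.
ansible-vault encrypt_string 's3cret' --name 'my_secret'
```

The resulting `!vault`-tagged block can then be copied into the relevant YAML file in host_vars.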

Install

You need a host running at least Debian 10 and an up-to-date version of our inventory. The latter can be obtained by running:

git clone --recurse-submodules git@git.fsfe.org:fsfe-system-hackers/baseline.git

or when you have already cloned the repository without recursing into submodules:

git submodule update --remote inventory

Next, you need Ansible. This playbook is tested with Ansible 2.11, which is installed into a virtual environment managed by pipenv. Install pipenv via pip or your favourite package manager, then run:

pipenv install
pipenv shell

Usage

In order to configure a new host, you need to do the following:

  1. Apply the label baseline to the desired host in the inventory.
  2. Add a host configuration file in host_vars. Take a look at this example to get an idea.
  3. Then, simply run
ansible-playbook playbook.yml

If you just want to run certain parts of the playbook, take a look at the available tags and then limit the playbook run to tasks with those tags.

ansible-playbook playbook.yml -t hardening

Available tags are:

  • hardening
  • sshd
  • fail2ban
  • unattended-upgrades
  • monitoring
  • backup

Example: add/change configuration of client monitoring

To add a new server to the monitoring, or to update its client plugins, including the necessary changes on the monitoring server, run:

ansible-playbook playbook.yml -l example.fsfeurope.org -t monitoring

You won't have to define the icinga2 server; the playbook integrates it on its own.

For more details, read the wiki entry on how to add a new server to the monitoring and how to add client plugins.

Gotchas

If you run this playbook for multiple servers at once, add -f 1 as a parameter. Otherwise, the rewriting of the authorized_keys file that happens during backup initialisation on the remote storage can cause issues when multiple processes access it at once.
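For example, a run of only the backup tasks against several hosts, serialised to one host at a time, could look like this (the tag and forks flag are standard ansible-playbook options; the combination shown is illustrative):

```
ansible-playbook playbook.yml -t backup -f 1
```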

Configuration

To configure the behaviour of the roles for the host in question, take a look at group_vars/all.yml. This file specifies the default configuration for our hosts and looks as follows:

##########################################################################################
# hardening | sshd
##########################################################################################
sshd_accept_env: LANG LC_*
sshd_allow_agent_forwarding: "no"
sshd_allow_tcp_forwarding: "no"
sshd_authentication_methods: any
sshd_banner: /etc/issue.net
sshd_challenge_response_authentication: "no"
sshd_ciphers: chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
sshd_client_alive_count_max: 1
sshd_client_alive_interval: 200
sshd_compression: "no"
sshd_gssapi_authentication: "no"
sshd_hostbased_authentication: "no"
sshd_ignore_user_known_hosts: "yes"
sshd_kerberos_authentication: "no"
sshd_kex_algorithms: curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
sshd_log_level: VERBOSE
sshd_login_grace_time: 20
sshd_macs: hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512,hmac-sha2-256
sshd_max_auth_tries: 3
sshd_max_sessions: 3
sshd_max_startups: 10:30:60
sshd_password_authentication: "no"
sshd_permit_empty_passwords: "no"
sshd_permit_root_login: "yes"
sshd_permit_user_environment: "no"
sshd_port: 22
sshd_print_last_log: "yes"
sshd_print_motd: "no"
sshd_rekey_limit: 512M 1h
sshd_strict_modes: "yes"
sshd_subsystem: sftp internal-sftp
sshd_tcp_keep_alive: "no"
sshd_use_dns: "no"
sshd_use_pam: "yes"
sshd_x11_forwarding: "no"

##########################################################################################
# hardening | fail2ban
##########################################################################################
fail2ban_loglevel: INFO
fail2ban_logtarget: /var/log/fail2ban.log
fail2ban_ignoreself: "true"
fail2ban_ignoreips: "127.0.0.1/8 ::1"
fail2ban_bantime: 600
fail2ban_findtime: 600
fail2ban_maxretry: 5
fail2ban_destemail: "system-monitoring@lists.fsfe.org"
fail2ban_sender: root@{{ ansible_fqdn }}
fail2ban_jail_configuration:
  - option: enabled
    value: "true"
    section: sshd
  - option: mode
    value: "aggressive"
    section: sshd

##########################################################################################
# unattended-upgrades
##########################################################################################
unattended_origins_patterns:
  # security updates
  - "origin=Debian,codename=${distro_codename},label=Debian-Security"
  # updates including non-security updates
  - "origin=Debian,codename=${distro_codename},label=Debian"
unattended_autoclean_interval: 21
unattended_download_upgradeable: 1
unattended_automatic_reboot: true
unattended_verbose: 1
unattended_mail: "system-monitoring@lists.fsfe.org"
unattended_mail_only_on_error: true

##########################################################################################
# fsfe-backup
##########################################################################################
borgbackup_servers:
  - fqdn: u124410.your-storagebox.de
    user: u124410-sub2
    type: hetzner
    home: ""
    pool: servers
    options: ""
borgbackup_cron_day: "1-7"
borgbackup_cron_mailto: admin+backup@fsfe.org
borgbackup_pre_commands:
  - '[[ ! -f "/root/bin/backup.sh" ]] || /root/bin/backup.sh'
borgbackup_post_commands:
  - "/usr/local/bin/borg-wrapper list"
  - '[[ ! -f "/root/bin/post-backup.sh" ]] || /root/bin/post-backup.sh'

If you want to change those variables for a new host, say example.fsfeurope.org, simply copy the relevant entries from above to the relevant file in host_vars (e.g. example.fsfeurope.org.yml) and amend them as you see fit. For more information on which variables take precedence over others, refer to the Ansible documentation on variable precedence.
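As a sketch, a hypothetical host_vars/example.fsfeurope.org.yml that moves SSH to a non-standard port and tightens the fail2ban limits might look like this (the variable names come from group_vars/all.yml above; the values are illustrative):

```yaml
# host_vars/example.fsfeurope.org.yml -- illustrative overrides
# Only the variables listed here diverge from group_vars/all.yml.
sshd_port: 2222
fail2ban_maxretry: 3
fail2ban_bantime: 1200
```

Because host_vars take precedence over group_vars, only the overridden variables need to appear in this file; everything else falls back to the defaults.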