This repository provides a comprehensive guide to creating an Ansible project and playbooks that automate the provisioning and configuration of an Ubuntu server. Whether you're setting up a cloud VPS or a local server, the included Ansible roles install and configure essential services like Docker, Caddy for reverse proxying, Fail2Ban for security, and more. Designed for developers and system administrators, this setup simplifies server management and automates best practices for secure, scalable infrastructure.
- Abstract
- Prerequisites
- Ansible Project Setup
- Server Configuration
- Resources
- Feedback
Welcome to the Ubuntu Server Setup with Ansible project, a comprehensive guide to automating the process of setting up and configuring an Ubuntu server from scratch using Ansible.
Originally developed as part of OmniOpenCon 2024, this project has evolved into a powerful resource for developers, system administrators, and DevOps engineers looking to automate server provisioning. We leverage Ansible to handle all tasks that typically require manual intervention, including security configurations, package installations, and service setups.
By following this guide, you will:
- Gain a clear understanding of Ansible's role in automating infrastructure setup.
- Learn how to write Ansible playbooks to define the desired state of your Ubuntu server.
- Set up a fully automated VPS environment, tailored for development and production use cases.
- Deploy services like Docker to run your applications in isolated environments.
- Use Caddy as a reverse proxy to manage both public and private services.
This project is designed to minimize manual setup and centralize configuration management within Ansible, allowing for reproducible, scalable infrastructure deployments. Whether you're setting up a cloud VPS or a local server, this project aims to streamline your DevOps workflow.
Let's dive into automation and build a more efficient, scalable infrastructure!
Before diving into the automation process, make sure you have the following:
- A VPS (Virtual Private Server) with root access.
  - You can use any cloud provider like AWS, DigitalOcean, Linode, or a local VPS provider.
  - SSH access to your VPS.
- Local Environment Setup:
  - Ansible installed on your local machine.
  - SSH key pair for secure access to your VPS.
  - Basic knowledge of Linux commands and system administration.
  - A text editor like VS Code or Vim.
- Domain Name & DNS Access:
  - A registered domain name that you control.
  - Access to manage the domain's DNS records (e.g., via your registrar or a DNS management service like Cloudflare).
  - You will need to set up an A or CNAME record pointing to your VPS's public IP address.
  - The domain will be used to set up SSL certificates and access public services.
If you don't have a VPS yet, you can follow our VPS Setup Guide to create one.
If you don't have an SSH key pair, you can follow our SSH Key Pair Generation Guide to create one.
If you don't have Ansible installed on your local machine, you can follow our Ansible Installation Guide.
If you are new to Linux commands, you can refer to our Basic Linux Commands Guide for a quick overview.
You will need to create the following DNS records:
- A Record: Points your domain to your VPS public IP.
  Example: example.com -> 123.45.67.89 (your VPS public IP)
- CNAME Records: Points subdomains to your domain.
  Example: www.example.com -> example.com
As we will have multiple services running on our VPS, it's good practice to use a subdomain for each service. You can therefore create the following subdomain CNAME records in advance:
- pgadmin.example.com -> example.com
- gitea.example.com -> example.com
- umami.example.com -> example.com
- yacht.example.com -> example.com
- notify.example.com -> example.com
- memo.example.com -> example.com
- semaphore.example.com -> example.com
Although we could use a wildcard (*) record, it's better to create a CNAME record for each subdomain you want to use.
Now that you have your VPS set up and Ansible installed, let's start with the Ansible configuration. We will create a new Ansible project to hold our playbooks and configurations.
- Create a new directory for your Ansible project:
mkdir ansible-project
- Change to the newly created directory:
cd ansible-project
Let's start with a simple Ansible playbook that prints "Hello, World!". Create a new file named site.yml in your project directory with the following content:
---
- name: Configure VPS
  hosts: localhost
  tasks:
    - name: Print Hello World
      ansible.builtin.debug:
        msg: "Hello, World!"
Now, run the playbook using the following command:
ansible-playbook site.yml
If you see the output Hello, World!, your Ansible setup is working correctly.
Ansible uses the inventory file to define the target hosts for the playbooks. The inventory file can be in various formats, including INI, YAML, and JSON. In this guide, we will use the YAML format for the inventory file.
Ansible looks for the inventory file in the following order:

- The file specified using the -i option in the ansible-playbook command.
- The default inventory file located at /etc/ansible/hosts.
- The default inventory file located in the project directory, named inventory.
Now, let's test the connection with our VPS. The first step is to create an inventory file that defines the VPS host. Create a new file named inventory.yml in your project directory with the following content:
---
vps:
  hosts:
    mycloud.com:
If you followed the steps in the SSH Key Pair Generation Guide, you can use mycloud.com as the host name. If you used a different host name, replace mycloud.com with the VPS host name that you defined in your SSH configuration file.
In this example, we defined a group named vps with a single host named mycloud.com. You can define multiple hosts and groups in the inventory file based on your requirements.
After creating the inventory file, update the site.yml playbook to use the VPS group or host's name as the target host:

---
- name: Configure VPS
  hosts: vps
  tasks:
    - name: Print Hello World
      ansible.builtin.debug:
        msg: "Hello, World!"
    - name: Ping
      ansible.builtin.ping:
After updating the playbook, run it using the following command:
ansible-playbook -i inventory.yml site.yml
🎉 If you see the output pong, the connection with your VPS is successful.
Ansible uses a configuration file to define settings like the default inventory file, remote user, and connection type. Create a new file named ansible.cfg in your project directory with the following content:
[defaults]
localhost_warning = False
interpreter_python = auto_silent

[inventory]
inventory_unparsed_warning = False
This configuration file disables the warning for using localhost as the target host and the warning for unparsed inventory files, and sets the Python interpreter to auto-detect.
Ansible looks for the configuration file in the following order:

- The file specified by the ANSIBLE_CONFIG environment variable.
- ansible.cfg in the current directory.
- ~/.ansible.cfg in the user's home directory.
- /etc/ansible/ansible.cfg.
If you want to use a different configuration file, you can set the ANSIBLE_CONFIG environment variable to the desired file path. It's usually best to keep the configuration file in your home directory if you don't have special per-project settings.
Ansible playbooks are YAML files that define a set of tasks to be executed on the target hosts. Playbooks can include multiple plays, each targeting different hosts or groups of hosts. Each play consists of tasks, built from modules, that define the actions to be performed on the target hosts.
An Ansible module is a reusable, standalone script that performs specific tasks on the target hosts. Modules can be used to manage files, install packages, configure services, and more.
A list with all the Ansible modules can be found here.
In Ansible there is also the notion of Collections. Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. Collections make it easier to share and reuse Ansible content.
When you install Ansible, you get a set of built-in modules that are part of the ansible.builtin collection. You can use these modules in your playbooks without any additional configuration. By default, Ansible also ships with some of the community collections.
A list of all the community collections can be found here.
Ansible roles are a way to organize playbooks and tasks into reusable units. Roles allow you to encapsulate the configuration of a specific component or service, making it easier to reuse and share across different projects.
Roles are stored in the roles directory within the Ansible project directory. Each role consists of a predefined directory structure containing tasks, handlers, variables, and other configuration files.
To create a new role, use the ansible-galaxy command:

ansible-galaxy init <role_name>
This command creates a new role directory structure with the following subdirectories:

- defaults: Contains default variables for the role.
- files: Contains files to be copied to the target hosts.
- handlers: Contains handlers that are triggered by tasks.
- meta: Contains metadata for the role.
- tasks: Contains tasks to be executed on the target hosts.
- templates: Contains Jinja2 templates to be rendered on the target hosts.
- vars: Contains variables specific to the role.
Roles can be included in playbooks using the roles directive:

---
- name: Configure VPS
  hosts: localhost
  roles:
    - role: <role_name>
This directive tells Ansible to include the specified role in the playbook.
You can also import a role from a different directory:

---
- name: Configure VPS
  hosts: localhost
  tasks:
    - import_role:
        name: /path/to/role
In this guide, we will create different roles to organize our playbooks and tasks. Typically, a role is created for each service or component you want to configure. According to the Ansible documentation, a role should be a self-contained collection of variables, tasks, files, templates, and modules that can be reused to configure a specific component or service.
Now that we have our Ansible project set up, let's move on to configuring our VPS. We will focus on the following areas:
- Package Installation: Install essential packages like Docker, Caddy, Git, and Python.
- Security Setup: Configure firewall rules, enable SSH hardening, and set up Fail2ban.
- Service Configuration: Create and configure Docker containers running different services.
The first step in setting up our VPS is to install the essential packages required for development. We will use Ansible to automate the package installation process, manually creating an Ansible role named packages to handle the package installation tasks.
- Create a new folder roles in your project directory:

mkdir roles

- Create a new role named packages, either using the ansible-galaxy command or manually:

Using ansible-galaxy:

ansible-galaxy init roles/packages

Manually:

mkdir -p roles/packages
mkdir -p roles/packages/tasks

- Inside roles/packages/tasks/, create a new file named apt.yml with the content from the following file:
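The exact task list lives in the linked file and isn't reproduced here. As a rough sketch of what such a task file might contain (the package names below are placeholders, not the project's actual list):

---
- name: Update apt cache
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: 3600

- name: Install essential packages  # placeholder list; use the packages from the linked file
  ansible.builtin.apt:
    name:
      - git
      - curl
      - python3
      - python3-pip
    state: present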
- Create a new file (if it doesn't exist) named main.yml in the roles/packages/tasks/ directory with the following content:

---
- import_tasks: apt.yml
- Update the site.yml playbook to include the packages role:

---
- name: Configure VPS
  hosts: vps
  tasks:
    # The other tasks are above
    - ansible.builtin.import_role:
        name: packages
- Run the playbook to install the required packages on your VPS:
ansible-playbook -i inventory.yml site.yml
Running the playbook in this form will fail. The reason is that we attempt to install packages with our regular user, which does not have the necessary permissions. To fix this, we need to tell Ansible to run the tasks with elevated privileges. We can do this by enabling privilege escalation for our import role task.
- Update the site.yml playbook to enable privilege escalation:

---
- name: Configure VPS
  hosts: vps
  tasks:
    - ansible.builtin.import_role:
        name: packages
      become: true
- Run the playbook again with privilege escalation enabled:
ansible-playbook -i inventory.yml site.yml --ask-become-pass
You will be prompted to enter the password for the sudo user on your VPS. Enter the password to proceed with the installation.
To avoid passing the --ask-become-pass flag every time you run the playbook, you can update ansible.cfg so that Ansible prompts for the become password by default. Add the following lines to the ansible.cfg file:
[privilege_escalation]
become_ask_pass = true
After updating the configuration file, you can run the playbook without the --ask-become-pass flag:
ansible-playbook -i inventory.yml site.yml
🎉 If the playbook runs successfully, you should see the required packages installed on your VPS.
🎯 At this point, our site.yml playbook should look like this:

---
- name: Configure VPS
  hosts: vps
  tasks:
    - name: Print Hello World
      ansible.builtin.debug:
        msg: "Hello, World!"
    - name: Ping
      ansible.builtin.ping:
    - ansible.builtin.import_role:
        name: packages
      become: true
Docker is a popular platform for developing, shipping, and running applications in containers. In this section, we will automate the installation of Docker on our VPS using Ansible.
- Create a new file named docker.yml in the roles/packages/tasks/ directory with the content from the following file:
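The linked file isn't reproduced here. A minimal sketch of a typical Docker installation flow on Ubuntu — assuming the official Docker apt repository and the username variable introduced later in this section — might look like this:

---
- name: Add Docker GPG key  # assumes the official Docker repository
  ansible.builtin.apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add Docker apt repository
  ansible.builtin.apt_repository:
    repo: deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable
    state: present

- name: Install Docker packages
  ansible.builtin.apt:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
    state: present
    update_cache: true

- name: Copy Docker daemon configuration
  ansible.builtin.copy:
    src: daemon.json
    dest: /etc/docker/daemon.json
    mode: "0644"
  notify: restart_docker

- name: Add the remote user to the docker group
  ansible.builtin.user:
    name: "{{ username }}"
    groups: docker
    append: true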
- Create a new file named daemon.json in the roles/packages/files/ directory with the following content:

{
  "default-address-pools": [
    {
      "base": "172.18.0.0/16",
      "size": 24
    }
  ],
  "experimental": false,
  "features": {
    "buildkit": true
  }
}
This file contains the Docker daemon configuration settings. In this example, we define a default address pool for container networking and enable the BuildKit feature.
- Create a new file main.yml in the roles/packages/handlers/ directory with the following content:

---
- name: restart_docker
  become: true
  ansible.builtin.systemd_service:
    name: docker
    state: restarted
This is an Ansible handler that restarts the Docker service after the configuration changes have been applied.
A handler is a task that is triggered by other tasks. Handlers are only executed once at the end of the play, after all the tasks have been completed.
- Update the site.yml playbook to include the docker.yml tasks from the packages role:

---
- name: Configure VPS
  hosts: vps
  tasks:
    # The other tasks are above
    - ansible.builtin.import_role:
        name: packages
        tasks_from: docker.yml
      become: true
Before running the playbook, we need to discuss briefly how to handle sensitive data in Ansible. As you may have noticed, our Docker tasks use a variable {{ username }} that is not defined anywhere. This is because we don't want to hardcode sensitive data in our playbooks. Instead, we can use Ansible Vault to encrypt sensitive data and store it securely.
Ansible Vault is a feature that allows you to encrypt sensitive data in your playbooks. You can use Ansible Vault to encrypt variables, files, and even entire playbooks. The encryption algorithm used by Ansible Vault is AES256.
To create a new encrypted file using Ansible Vault, you can use the following command:
ansible-vault create secrets.yml
You will be prompted to enter a password to encrypt the file. Make sure to use a strong password and keep it secure.
Add the following content to the secrets.yml file:

---
username: <your_vps_username>
Ansible will open the default text editor to enter the content of the file. After entering the content, save and close the file. The file will be encrypted using the password you provided.
Note: If you try to save the file without entering any content, you will get an error message.
To edit an existing encrypted file, use the edit command:

ansible-vault edit secrets.yml
To view the contents of an encrypted file, use the view command:

ansible-vault view secrets.yml
To run a playbook that uses an encrypted file, you need to provide the vault password using the --ask-vault-pass flag:
ansible-playbook -i inventory.yml site.yml --ask-vault-pass
However, typing the vault password every time you run a playbook can be cumbersome. To avoid this, you can store the vault password in a file and reference it in the ansible.cfg file.
Add the vault_password_file option under the [defaults] section, pointing to the text file that contains your vault password:

[defaults]
vault_password_file = ~/.vault-pass.txt
Make sure to set the correct permissions on the vault password file to keep it secure:
chmod 600 ~/.vault-pass.txt
Now we need to reference the secrets.yml file in our site.yml playbook. All the variables defined in the secrets.yml file will be available to all the tasks in the playbook.
Update the site.yml playbook to include the secrets.yml file:
---
- name: Configure VPS
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    # The other tasks are above
    - ansible.builtin.import_role:
        name: packages
        tasks_from: docker.yml
      become: true
After updating the playbook, run it using the following command:
ansible-playbook -i inventory.yml site.yml
🎉 If the playbook runs successfully, Docker will be installed on your VPS.
🎯 At this point, our site.yml playbook should look like this:

---
- name: Configure VPS
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    - name: Print Hello World
      ansible.builtin.debug:
        msg: "Hello, World!"
    - name: Ping
      ansible.builtin.ping:
    - ansible.builtin.import_role:
        name: packages
      become: true
    - ansible.builtin.import_role:
        name: packages
        tasks_from: docker.yml
      become: true
Caddy is a powerful, extensible web server that can be used to serve static websites, reverse proxy services, and more. In this section, we will use Ansible to install Caddy on our VPS.
- Create a new file named caddy.yml in the roles/packages/tasks/ directory with the content from the following file:
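The linked file isn't reproduced here. A minimal sketch — assuming Caddy's apt repository is already configured (the real file likely sets it up first) — might look like this:

---
- name: Install Caddy  # assumes the Caddy apt repository has been added beforehand
  ansible.builtin.apt:
    name: caddy
    state: present
    update_cache: true

- name: Deploy the Caddyfile from a template
  ansible.builtin.template:
    src: Caddyfile.j2
    dest: /etc/caddy/Caddyfile
    mode: "0644"
  notify: restart_caddy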
- Now we will create a basic Caddy configuration file. Create a new file named Caddyfile.j2 in the roles/packages/templates/ directory with the following content:

<your_domain> {
    root * /var/www/html
    file_server
}
This configuration file defines a simple Caddy server that serves files from the /var/www/html directory.
- Let's create a simple HTML file to serve with Caddy. Create a new file named index.html in the roles/packages/files/ directory and copy the content from the following file:
- Add the following content to the main.yml file in the roles/packages/handlers/ directory:

- name: restart_caddy
  become: true
  ansible.builtin.systemd_service:
    name: caddy
    state: reloaded
This is an Ansible handler that reloads the Caddy service after the configuration changes have been applied.
- Update the site.yml playbook to include the caddy.yml tasks:

---
- name: Configure VPS
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    # The other tasks are above
    - ansible.builtin.import_role:
        name: packages
        tasks_from: caddy.yml
      become: true
- Run the playbook to install Caddy on your VPS:
ansible-playbook -i inventory.yml site.yml
🎉 If the playbook runs successfully, Caddy will be installed on your VPS. You can access the Caddy server by visiting https://<your_domain> in your web browser and see the content of the index.html file.
🎯 At this point, our site.yml playbook should look like this:

---
- name: Configure VPS
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    - ansible.builtin.import_role:
        name: packages
        tasks_from: apt.yml
      become: true
    - ansible.builtin.import_role:
        name: packages
        tasks_from: docker.yml
      become: true
    - ansible.builtin.import_role:
        name: packages
        tasks_from: caddy.yml
      become: true
Security is a critical aspect of any infrastructure setup. In this section, we will focus on setting up basic security measures on our VPS using Ansible.
The SSH configuration file is located at /etc/ssh/sshd_config and contains settings related to the SSH server. We will use Ansible to update the SSH configuration file to enhance security.
First, we will create a new role named security to handle all the tasks related to security setup.
- Create a new role named security:
mkdir -p roles/security/tasks
mkdir -p roles/security/handlers
- Inside roles/security/tasks/, create a new file named ssh.yml with the content from the following file:
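The linked file isn't reproduced here. One common way to apply such settings is ansible.builtin.lineinfile with sshd's built-in validation; a partial sketch covering a few of the options listed below (not the project's actual file) could be:

---
- name: Harden sshd_config options  # partial sketch; the linked file sets the full option list
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: "^#?{{ item.key }} "
    line: "{{ item.key }} {{ item.value }}"
    validate: sshd -t -f %s
  loop: "{{ ssh_options | dict2items }}"
  vars:
    ssh_options:
      PermitRootLogin: "no"
      PasswordAuthentication: "no"
      AllowUsers: "{{ username }}"
      Port: "{{ ssh_port }}"
  notify: restart_ssh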
This task hardens the SSH configuration by setting the following options:

- Protocol 2: Specifies that only SSH protocol version 2 should be used.
- PermitRootLogin no: Disables root login via SSH.
- PasswordAuthentication no: Disables password authentication.
- AllowUsers {{ username }}: Specifies the user allowed to log in via SSH.
- Port {{ ssh_port }}: Specifies the SSH port to use.
- PubkeyAuthentication yes: Enables public key authentication.
- ClientAliveInterval 300: Sends a null packet to the client every 300 seconds.
- ClientAliveCountMax 0: Disables client alive messages.
- MaxAuthTries 3: Limits the number of authentication attempts.
- LogLevel VERBOSE: Sets the log level to VERBOSE.
- MaxStartups 10:30:60: Limits the number of concurrent unauthenticated connections.
- Create a new file main.yml in the roles/security/handlers/ directory with the following content:

---
- name: Restart SSH service
  ansible.builtin.systemd_service:
    name: ssh
    state: restarted
    enabled: yes
  listen: "restart_ssh"
- Update the site.yml playbook to include the security role:

---
- hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    # The other tasks are above
    - ansible.builtin.import_role:
        name: security
        tasks_from: ssh.yml
      become: true
Now there is one more thing to do. As you noticed, our ssh.yml tasks use a new sensitive variable called ssh_port that is not defined anywhere. We will define this variable in the secrets.yml file and use it in the ssh.yml tasks.
Use the ansible-vault command with edit to open the secrets.yml file and add the ssh_port variable:
ansible-vault edit secrets.yml
Add the following content to the secrets.yml file:
---
ssh_port: 2222
Another important aspect of Ansible handlers is that they are triggered only at the end of the play. This means that if multiple tasks notify a handler, the handler runs only once, after all the tasks in the play have been executed.
This can be useful when you want to restart a service only once, even if multiple tasks require it.
However, if a task in the playbook fails before the handler is triggered, the handler will not be executed.
In our scenario, we want the handler to be triggered right after the SSH configuration is updated. We do this because the SSH service must be restarted for the new configuration to apply, and the remaining tasks in the playbook must use the new SSH port.
To ensure that the handler runs at the right point, the ssh.yml tasks include the meta: flush_handlers directive:

---
- name: Force all notified handlers to run at this point
  ansible.builtin.meta: flush_handlers

- name: Update Ansible to use new SSH port
  ansible.builtin.set_fact:
    ansible_port: "{{ ssh_port }}"
The first task, meta: flush_handlers, forces all notified handlers to run at that point in the playbook rather than waiting for the normal synchronization points. This ensures the SSH service is restarted as soon as the configuration is updated.
The second task sets ansible_port to the new SSH port defined in the secrets.yml file, so Ansible uses the new port for SSH connections in all subsequent tasks.
Now, let's run the playbook to update the SSH configuration on your VPS:
ansible-playbook -i inventory.yml site.yml
At this point, the SSH port has been changed to 2222. We need to update our SSH client configuration (~/.ssh/config) to use the new port when connecting to the VPS.
Add the following content to the ~/.ssh/config file:

Host mycloud.com
    HostName <vps_ip_or_domain>
    User <your_username>
    Port 2222
🎉 Congratulations! You have successfully updated the SSH configuration on your VPS.
🎯 At this point, our site.yml playbook should look like this:

---
- name: Configure VPS
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    - name: Print Hello World
      ansible.builtin.debug:
        msg: "Hello, World!"
    - name: Ping
      ansible.builtin.ping:
    - ansible.builtin.import_role:
        name: packages
      become: true
    - ansible.builtin.import_role:
        name: packages
        tasks_from: docker.yml
      become: true
    - ansible.builtin.import_role:
        name: packages
        tasks_from: caddy.yml
      become: true
    - ansible.builtin.import_role:
        name: security
        tasks_from: ssh.yml
      become: true
A firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules. In this section, we will use Ansible to configure the Uncomplicated Firewall (UFW) on our VPS.
- Inside roles/security/tasks/, create a new file named firewall.yml with the content from the following file:
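The linked file isn't reproduced here. A sketch of UFW rules matching the ports this guide uses (SSH, HTTP, HTTPS), via the community.general.ufw module, might be:

---
- name: Allow SSH, HTTP, and HTTPS
  community.general.ufw:
    rule: allow
    port: "{{ item }}"
    proto: tcp
  loop:
    - "{{ ssh_port }}"
    - "80"
    - "443"

- name: Enable UFW with a default deny policy
  community.general.ufw:
    state: enabled
    policy: deny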
- Update the site.yml playbook to include the firewall.yml tasks:

---
- name: Configure VPS
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    # The other tasks are above
    - ansible.builtin.import_role:
        name: security
        tasks_from: firewall.yml
      become: true
After updating the playbook, run it using the following command:
ansible-playbook -i inventory.yml site.yml
🎉 If the playbook runs successfully, the UFW firewall will be configured to allow SSH, HTTP, and HTTPS connections on your VPS.
The UFW configuration file is located at /etc/ufw/ufw.conf. This file contains the settings for the UFW firewall, including the default policy and logging options.
Below are some useful commands to manage UFW:

- ufw status: Displays the status of the UFW firewall.
- ufw allow <port>/<protocol>: Allows incoming traffic on a specific port and protocol.
- ufw deny <port>/<protocol>: Denies incoming traffic on a specific port and protocol.
- ufw delete <rule_number>: Deletes a specific rule from the UFW configuration.
- ufw reload: Reloads the UFW configuration.
🎯 At this point, our site.yml playbook should look like this:

---
- name: Configure VPS
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    - name: Print Hello World
      ansible.builtin.debug:
        msg: "Hello, World!"
    - name: Ping
      ansible.builtin.ping:
    - ansible.builtin.import_role:
        name: packages
      become: true
    - ansible.builtin.import_role:
        name: packages
        tasks_from: docker.yml
      become: true
    - ansible.builtin.import_role:
        name: packages
        tasks_from: caddy.yml
      become: true
    - ansible.builtin.import_role:
        name: security
        tasks_from: ssh.yml
      become: true
    - ansible.builtin.import_role:
        name: security
        tasks_from: firewall.yml
      become: true
Fail2ban is an intrusion prevention software framework that protects computer servers from brute-force attacks. In this section, we will use Ansible to install and configure Fail2ban on our VPS.
- Inside roles/security/tasks/, create a new file named fail2ban.yml with the content from the following file:
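The linked file isn't reproduced here. Based on the description later in this section (installing Fail2ban and copying a custom SSH jail), a sketch might be:

---
- name: Install Fail2ban
  ansible.builtin.apt:
    name: fail2ban
    state: present

- name: Copy the custom jail configuration  # assumes a jail.local source file in the role
  ansible.builtin.copy:
    src: jail.local
    dest: /etc/fail2ban/jail.local
    mode: "0644"
  notify: restart_fail2ban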
- Append the following content to the main.yml file in the roles/security/handlers/ directory:

- name: Restart Fail2ban service
  ansible.builtin.systemd_service:
    name: fail2ban
    state: restarted
    enabled: yes
  listen: "restart_fail2ban"
This is an Ansible handler that restarts the Fail2ban service after the configuration changes have been applied.
- Update the site.yml playbook to include the fail2ban.yml tasks:

---
- name: Configure VPS
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    # The other tasks are above
    - ansible.builtin.import_role:
        name: security
        tasks_from: fail2ban.yml
      become: true
After updating the playbook, run it using the following command:
ansible-playbook -i inventory.yml site.yml
🎉 If the playbook runs successfully, Fail2ban will be installed and configured to protect your VPS from brute-force attacks.
Now, a few things about Fail2ban. Fail2ban is a service that monitors log files for failed login attempts and blocks the IP addresses of the attackers. The configuration file for Fail2ban is located at /etc/fail2ban/jail.local. This file contains the settings for the services that Fail2ban monitors and the actions to take when an attack is detected.
In our playbook, we copied a custom configuration for the SSH service. This configuration enables Fail2ban for the SSH service, sets the log file path to /var/log/auth.log, and specifies the maximum number of retries and the ban time.
Below are some useful commands to manage Fail2ban:

- fail2ban-client status: Displays the status of all jails.
- fail2ban-client status <jail_name>: Displays the status of a specific jail.
- fail2ban-client set <jail_name> unbanip <ip_address>: Unbans an IP address from a jail.
- fail2ban-client reload: Reloads the Fail2ban configuration.
🎯 At this point, our site.yml playbook should look like this:

---
- name: Configure VPS
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    - name: Print Hello World
      ansible.builtin.debug:
        msg: "Hello, World!"
    - name: Ping
      ansible.builtin.ping:
    - ansible.builtin.import_role:
        name: packages
      become: true
    - ansible.builtin.import_role:
        name: packages
        tasks_from: docker.yml
      become: true
    - ansible.builtin.import_role:
        name: packages
        tasks_from: caddy.yml
      become: true
    - ansible.builtin.import_role:
        name: security
        tasks_from: ssh.yml
      become: true
    - ansible.builtin.import_role:
        name: security
        tasks_from: firewall.yml
      become: true
    - ansible.builtin.import_role:
        name: security
        tasks_from: fail2ban.yml
      become: true
In this section, we will focus on setting up different services running in Docker containers on our VPS. We will use Ansible to automate the service configuration process.
Before we start, we need to think about how to manage our Docker container configuration. To avoid repeating ourselves, we can use Ansible variables to store each container's configuration. This way, we can update the configuration in one place and have it applied everywhere it is used.
- Create a new folder named group_vars in your project directory:

mkdir group_vars

- Inside the group_vars directory, create a new file named all.yml with the following content:

---
docker_networks:
  - name: public
    driver: bridge
  - name: private
    driver: bridge
The group_vars folder is a special directory that contains variables applying to groups of hosts in the inventory; the all.yml file applies to all hosts. In this case, we define a list of Docker networks that we will use in our services.
If you have multiple VPS hosts and want to define specific variables for each host, you can create a file named after the host in a host_vars directory. For example, for a host named mycloud.com, create host_vars/mycloud.com.yml with the host-specific variables.
The variables defined in the host-specific file override the variables defined in all.yml, which holds the variables that apply to all hosts in the inventory.
Now, let's move on to configuring the services.
Docker networks allow containers to communicate with each other securely. In this section, we will use Ansible to create Docker networks on our VPS.
Why do we want two networks? We will create a public network for services that need to be reachable from the internet, and a private network for services that should not be exposed. Using multiple networks also allows us to isolate services and control the traffic between them.
- Inside roles/services/tasks/, create a new file named networks.yml with the following content:

---
- name: Create Docker networks
  community.docker.docker_network:
    name: "{{ item.name }}"
    driver: "{{ item.driver }}"
  loop: "{{ docker_networks }}"

Note that the docker_network module lives in the community.docker collection, not in ansible.builtin.
- Update the site.yml playbook to import the networks.yml tasks:

---
- name: Configure VPS
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    # The other tasks are above
    - ansible.builtin.import_role:
        name: services
        tasks_from: networks.yml
As you noticed, this time we don't need the become: true directive, because the docker_network module does not require root privileges: it only needs access to the Docker socket, which our user has as a member of the docker group.
After updating the playbook, run it using the following command:
ansible-playbook -i inventory.yml site.yml
🎉 If the playbook runs successfully, the Docker networks will be created on your VPS.
To check the Docker networks, you can use the following command:
docker network ls
🎯 At this point, our site.yml playbook should look like this:

---
- name: Configure VPS
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    - name: Print Hello World
      ansible.builtin.debug:
        msg: "Hello, World!"
    - name: Ping
      ansible.builtin.ping:
    - ansible.builtin.import_role:
        name: packages
      become: true
    - ansible.builtin.import_role:
        name: packages
        tasks_from: docker.yml
      become: true
    - ansible.builtin.import_role:
        name: packages
        tasks_from: caddy.yml
      become: true
    - ansible.builtin.import_role:
        name: security
        tasks_from: ssh.yml
      become: true
    - ansible.builtin.import_role:
        name: security
        tasks_from: firewall.yml
      become: true
    - ansible.builtin.import_role:
        name: security
        tasks_from: fail2ban.yml
      become: true
    - ansible.builtin.import_role:
        name: services
        tasks_from: networks.yml
PostgreSQL is a powerful, open-source relational database management system. In this section, we will use Ansible to install and configure PostgreSQL in a Docker container on our VPS.
- Inside roles/services/tasks/, create a new file named postgresql.yml with the content from the following file:
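The linked file isn't reproduced here. Using the variables defined below and the community.docker collection, a sketch of what the tasks might look like is:

---
- name: Create the PostgreSQL data volume
  community.docker.docker_volume:
    name: "{{ postgres.data_volume }}"

- name: Run the PostgreSQL container
  community.docker.docker_container:
    name: "{{ postgres.container_name }}"
    image: "{{ postgres.container_image }}"
    hostname: "{{ postgres.container_hostname }}"
    networks:
      - name: "{{ postgres.network }}"
    volumes:
      - "{{ postgres.data_volume }}:/var/lib/postgresql/data"
    env:
      POSTGRES_USER: "{{ postgres_db_user }}"
      POSTGRES_PASSWORD: "{{ postgres_db_pass }}"
    restart_policy: unless-stopped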
- The next step is to define the variables used in the postgresql.yml tasks. Update the group_vars/all.yml file with the following content:

# PostgreSQL Container Configuration
postgres:
  data_volume: postgres_data
  container_name: postgres
  container_image: postgres:latest
  container_hostname: postgres
  network: private
For the postgres_db_user and postgres_db_pass variables, we will use the secrets.yml file to store the sensitive data.
Edit the secrets.yml file using ansible-vault edit secrets.yml and add the following content:
postgres_db_user: postgres
postgres_db_pass: mysecretpassword
- Update the site.yml playbook to include the postgresql tasks:

---
- name: Configure VPS
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    # The other tasks are above
    - ansible.builtin.import_role:
        name: services
        tasks_from: postgresql.yml
After updating the playbook, run it using the following command:
ansible-playbook -i inventory.yml site.yml
🎉 If the playbook runs successfully, PostgreSQL will be installed and configured in a Docker container on your VPS.
As we can see, our main playbook is getting bigger and bigger. To avoid running all the tasks every time we want to update a service, we can split the tasks into separate playbooks and include them in the main playbook.
Let's create three new playbooks, one for each type of configuration: packages.yml, security.yml, and services.yml.
Create a new file named packages.yml in the root of your project directory with the following content:

---
- name: Configure VPS Packages
  hosts: vps
  vars_files:
    - secrets.yml
  become: true
  tasks:
    - ansible.builtin.import_role:
        name: packages
    - ansible.builtin.import_role:
        name: packages
        tasks_from: docker.yml
    - ansible.builtin.import_role:
        name: packages
        tasks_from: caddy.yml
Create a new file named security.yml in the root of your project directory with the following content:

---
- name: Configure VPS Security
  hosts: vps
  vars_files:
    - secrets.yml
  become: true
  tasks:
    - ansible.builtin.import_role:
        name: security
        tasks_from: ssh.yml
    - ansible.builtin.import_role:
        name: security
        tasks_from: firewall.yml
    - ansible.builtin.import_role:
        name: security
        tasks_from: fail2ban.yml
Create a new file named services.yml in the root of your project directory with the following content:

---
- name: Configure VPS Services
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    - ansible.builtin.import_role:
        name: services
        tasks_from: networks.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: postgresql.yml
Now, update the site.yml playbook to include the new playbooks (note that import_playbook sits at the play level, alongside plays, not inside a task list):

---
- name: Configure VPS
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    - name: Print Hello World
      ansible.builtin.debug:
        msg: "Hello, World!"
    - name: Ping
      ansible.builtin.ping:

- name: Install packages
  ansible.builtin.import_playbook: packages.yml

- name: Configure security
  ansible.builtin.import_playbook: security.yml

- name: Configure services
  ansible.builtin.import_playbook: services.yml
After updating the playbook, run it using the following command:
ansible-playbook -i inventory.yml site.yml
PgAdmin is a popular open-source administration and development platform for PostgreSQL. In this section, we will use Ansible to install and configure PgAdmin in a Docker container on our VPS.
- Inside roles/services/tasks/, create a new file named pgadmin.yml with the content from the following file:
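The linked file isn't reproduced here. Since the Caddy configuration later proxies PgAdmin to localhost:8081, a sketch — with the host port binding as an assumption — might be:

---
- name: Run the PgAdmin container
  community.docker.docker_container:
    name: "{{ pgadmin.container_name }}"
    image: "{{ pgadmin.container_image }}"
    hostname: "{{ pgadmin.container_hostname }}"
    networks:
      - name: "{{ pgadmin.network }}"
    published_ports:
      - "127.0.0.1:8081:80"  # assumed mapping; Caddy proxies to localhost:8081
    volumes:
      - "{{ pgadmin.data_volume }}:/var/lib/pgadmin"
    env:
      PGADMIN_DEFAULT_EMAIL: "{{ pgadmin_email }}"
      PGADMIN_DEFAULT_PASSWORD: "{{ pgadmin_password }}"
    restart_policy: unless-stopped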
- The next step is to define the variables used in the pgadmin.yml tasks. Update the group_vars/all.yml file with the following content:

# PgAdmin Container Configuration
pgadmin:
  data_volume: pgadmin_data
  container_image: dpage/pgadmin4:latest
  container_name: pgadmin
  container_hostname: pgadmin
  network: public
For the pgadmin_email and pgadmin_password variables, we will use the secrets.yml file to store the sensitive data.
Edit the secrets.yml file using ansible-vault edit secrets.yml and add the following content:

pgadmin_email: admin@<your_domain>
pgadmin_password: secret
- Update the services.yml playbook to include the pgadmin tasks:

---
- name: Configure VPS Services
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    # The other tasks are above
    - ansible.builtin.import_role:
        name: services
        tasks_from: pgadmin.yml
After updating the playbook, run it using the following command:
ansible-playbook -i inventory.yml services.yml
If the playbook runs successfully, PgAdmin will be installed and configured in a Docker container on your VPS.
Now, we want to access the PgAdmin web interface. To do this, we need to configure Caddy to act as a reverse proxy for the PgAdmin container.
- Update the Caddyfile.j2 template file to include a reverse proxy configuration for PgAdmin:

<your_domain> {
    root * /var/www/html
    file_server
}

pgadmin.<your_domain> {
    reverse_proxy localhost:8081 {
        header_up X-Scheme {scheme}
    }
}
This configuration defines a reverse proxy for the PgAdmin container running on port 8081. The header_up directive sets the X-Scheme header to the scheme of the request.
To apply the changes, we need to run the packages.yml playbook, which includes the caddy.yml tasks:
ansible-playbook -i inventory.yml packages.yml
Now it's time to introduce another Ansible concept. We often change our Caddy configuration, yet we run the whole packages.yml playbook to apply the change. This is not a problem with a small number of tasks, but as the number of tasks grows, running the entire playbook becomes time-consuming.
Therefore, we can use tags to run only specific tasks in the playbook.
In the packages.yml playbook, add tags to the imported tasks, including a caddy tag for the caddy.yml tasks:

---
- name: Configure VPS Packages
  hosts: vps
  vars_files:
    - secrets.yml
  become: true
  tasks:
    - ansible.builtin.import_role:
        name: packages
      tags: packages
    - ansible.builtin.import_role:
        name: packages
        tasks_from: docker.yml
      tags: docker
    - ansible.builtin.import_role:
        name: packages
        tasks_from: caddy.yml
      tags: caddy
Now you can run the packages.yml playbook with the caddy tag to apply only the Caddy changes:
ansible-playbook -i inventory.yml packages.yml --tags caddy
After this, you should be able to log in to PgAdmin using the following URL:
http://pgadmin.<your_domain>
🎯 At this point, our services.yml playbook should look like this:

---
- name: Configure VPS Services
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    - ansible.builtin.import_role:
        name: services
        tasks_from: networks.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: postgresql.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: pgadmin.yml
Gitea is a self-hosted Git service that is similar to GitHub. In this section, we will use Ansible to install and configure Gitea in a Docker container on our VPS.
- Inside roles/services/tasks/, create a new file named gitea.yml with the content from the following file:
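The linked file isn't reproduced here. Combining the variables defined later in this section with the environment variables listed next, a sketch might look like this (the extra private network attachment and the port bindings are assumptions, not the project's actual file):

---
- name: Run the Gitea container
  community.docker.docker_container:
    name: "{{ gitea.container_name }}"
    image: "{{ gitea.container_image }}"
    hostname: "{{ gitea.container_hostname }}"
    networks:
      - name: "{{ gitea.network }}"
      - name: private  # assumed, so Gitea can reach the PostgreSQL container
    published_ports:
      - "127.0.0.1:3000:3000"  # assumed; Caddy proxies to localhost:3000
      - "222:22"               # assumed, matching GITEA__server__SSH_PORT=222
    volumes:
      - "{{ gitea.data_volume }}:/data"
    env:
      GITEA__database__DB_TYPE: postgres
      GITEA__database__HOST: postgres
      GITEA__database__NAME: "{{ gitea_database_name }}"
      GITEA__database__USER: "{{ gitea_database_user }}"
      GITEA__database__PASSWD: "{{ gitea_database_pass }}"
      GITEA__service__DISABLE_REGISTRATION: "true"
    restart_policy: unless-stopped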
Here is a list of the environment variables used in the Gitea container and what they do:

- GITEA__database__DB_TYPE=postgres: Specifies the database type (PostgreSQL).
- GITEA__database__HOST=postgres: Specifies the database host (the PostgreSQL container).
- GITEA__database__NAME={{ gitea_database_name }}: Specifies the database name.
- GITEA__database__USER={{ gitea_database_user }}: Specifies the database user.
- GITEA__database__PASSWD={{ gitea_database_pass }}: Specifies the database password.
- GITEA__service__DISABLE_REGISTRATION=true: Disables user registration.
- GITEA__service__EMAIL_DOMAIN_ALLOWLIST={{ gitea['email_domain'] }}: Specifies the allowed email domain.
- GITEA__service__DEFAULT_USER_VISIBILITY=private: Sets the default user visibility to private.
- GITEA__service__DEFAULT_ORG_VISIBILITY=private: Sets the default organization visibility to private.
- GITEA__server__SSH_PORT=222: Specifies the SSH port.
- GITEA__repository__DEFAULT_PRIVATE=private: Sets the default repository visibility to private.
- GITEA__repository__FORCE_PRIVATE=true: Forces repositories to be private.
- GITEA__openid__ENABLE_OPENID_SIGNIN=false: Disables OpenID sign-in.
- GITEA__openid__ENABLE_OPENID_SIGNUP=false: Disables OpenID sign-up.
- GITEA__cors__ENABLED=true: Enables CORS.
- GITEA__cors__ALLOW_DOMAIN={{ gitea['domain'] }}: Specifies the allowed domain for CORS.
- The next step is to define the variables used in the gitea.yml tasks. Update the group_vars/all.yml file with the following content:

# Gitea Container Configuration
gitea:
  data_volume: gitea_data
  container_image: gitea/gitea:latest
  container_name: gitea
  container_hostname: gitea
  network: public
  domain: gitea.<your_domain>
  email_domain: "<your_domain>"
For the gitea_database_name, gitea_database_user, and gitea_database_pass variables, we will use the secrets.yml file to store the sensitive data.
Edit the secrets.yml file using ansible-vault edit secrets.yml and add the following content:
gitea_database_name: gitea
gitea_database_user: gitea
gitea_database_pass: mysecretpassword
- Update the services.yml playbook to include the gitea tasks:

---
- name: Configure VPS Services
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    # The other tasks are above
    - ansible.builtin.import_role:
        name: services
        tasks_from: gitea.yml
- Now we need to expose the Gitea interface using Caddy as a reverse proxy. Update the Caddyfile.j2 template file to include a reverse proxy configuration for Gitea:

gitea.<your_domain> {
    reverse_proxy localhost:3000
}
- After updating the Caddyfile.j2 template file, run the packages.yml playbook to apply the changes:
ansible-playbook -i inventory.yml packages.yml -t caddy
- The last step is to run the services.yml playbook to install and configure Gitea:
ansible-playbook -i inventory.yml services.yml
🎉 If the playbook runs successfully, Gitea will be installed and configured in a Docker container on your VPS. To access Gitea using the domain name, you can use the following URL in your web browser:
https://gitea.<your_domain>
When we access Gitea for the first time, we need to configure the instance settings. You don't have to change anything in the database settings, because we already configured them in the gitea.yml tasks. All you have to do here is create the admin user in the Administrator Account Settings section. This is required because we disabled user registration by default.
🎯 At this point, our services.yml playbook should look like this:

---
- name: Configure VPS Services
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    - ansible.builtin.import_role:
        name: services
        tasks_from: networks.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: postgresql.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: pgadmin.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: gitea.yml
Umami is a simple, easy-to-use, self-hosted web analytics solution. In this section, we will use Ansible to install and configure Umami in a Docker container on our VPS.
- Inside roles/services/tasks/, create a new file named umami.yml with the content from the following file:
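The linked file isn't reproduced here. A sketch using the variables defined below — the port binding, the private network attachment, and the DATABASE_URL wiring are assumptions — might be:

---
- name: Run the Umami container
  community.docker.docker_container:
    name: "{{ umami.container_name }}"
    image: "{{ umami.container_image }}"
    hostname: "{{ umami.container_hostname }}"
    networks:
      - name: "{{ umami.network }}"
      - name: private  # assumed, so Umami can reach the PostgreSQL container
    published_ports:
      - "127.0.0.1:3001:3000"  # assumed; Caddy proxies to localhost:3001
    env:
      DATABASE_URL: "postgresql://{{ umami_database_user }}:{{ umami_database_pass }}@postgres:5432/{{ umami_database_name }}"
      APP_SECRET: "{{ umami_app_secret }}"
    restart_policy: unless-stopped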
- The next step is to define the variables used in the umami.yml tasks. Update the group_vars/all.yml file with the following content:

# Umami Container Configuration
umami:
  container_image: ghcr.io/umami-software/umami:postgresql-latest
  container_name: umami
  container_hostname: umami
  network: public
For the umami_database_name, umami_database_user, umami_database_pass, and umami_app_secret variables, we will use the secrets.yml file to store the sensitive data.
Edit the secrets.yml file using ansible-vault edit secrets.yml and add the following content:
umami_database_name: umami
umami_database_user: umami
umami_database_pass: mysecretpassword
umami_app_secret: mysecretappsecret
- Update the services.yml playbook to include the umami tasks:

---
- name: Configure VPS Services
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    # The other tasks are above
    - ansible.builtin.import_role:
        name: services
        tasks_from: umami.yml
- Now we need to expose the Umami interface using Caddy as a reverse proxy. Update the Caddyfile.j2 template file to include a reverse proxy configuration for Umami:

umami.<your_domain> {
    reverse_proxy localhost:3001
}
- After updating the Caddyfile.j2 template file, run the packages.yml playbook to apply the changes:
ansible-playbook -i inventory.yml packages.yml -t caddy
- The last step is to run the services.yml playbook to install and configure Umami:
ansible-playbook -i inventory.yml services.yml
🎉 If the playbook runs successfully, Umami will be installed and configured in a Docker container on your VPS. To access Umami using the domain name, you can use the following URL in your web browser:
https://umami.<your_domain>
By default, Umami creates a user with the following credentials:

- Username: admin
- Password: umami

When you access Umami for the first time, change the password for the admin user.
🎯 At this point, our services.yml playbook should look like this:

---
- name: Configure VPS Services
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    - ansible.builtin.import_role:
        name: services
        tasks_from: networks.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: postgresql.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: pgadmin.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: gitea.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: umami.yml
Yacht is a self-hosted web interface for managing Docker containers. In this section, we will use Ansible to install and configure Yacht in a Docker container on our VPS.
- Inside roles/services/tasks/, create a new file named yacht.yml with the content from the following file:
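The linked file isn't reproduced here. A sketch using the variables defined below — the port binding is an assumption, and the Docker socket mount is what lets Yacht manage containers — might be:

---
- name: Run the Yacht container
  community.docker.docker_container:
    name: "{{ yacht.container_name }}"
    image: "{{ yacht.container_image }}"
    hostname: "{{ yacht.container_hostname }}"
    networks:
      - name: "{{ yacht.network }}"
    published_ports:
      - "127.0.0.1:8000:8000"  # assumed; Caddy proxies to localhost:8000
    volumes:
      - "{{ yacht.data_volume }}:/config"
      - /var/run/docker.sock:/var/run/docker.sock  # lets Yacht manage Docker
    restart_policy: unless-stopped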
- The next step is to define the variables used in the yacht.yml tasks. Update the group_vars/all.yml file with the following content:

# Yacht Container Configuration
yacht:
  data_volume: yacht_data
  container_image: selfhostedpro/yacht:latest
  container_name: yacht
  container_hostname: yacht
  network: public
- Update Caddy to expose the Yacht interface using a reverse proxy. Update the Caddyfile.j2 template file to include a reverse proxy configuration for Yacht:

yacht.<your_domain> {
    reverse_proxy localhost:8000
}
- After updating the Caddyfile.j2 template file, run the packages.yml playbook to apply the changes:
ansible-playbook -i inventory.yml packages.yml -t caddy
- Update the services.yml playbook to include the yacht tasks:

---
- name: Configure VPS Services
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    # The other tasks are above
    - ansible.builtin.import_role:
        name: services
        tasks_from: yacht.yml
- Run the services.yml playbook to install and configure Yacht:
ansible-playbook -i inventory.yml services.yml
🎉 If the playbook runs successfully, Yacht will be installed and configured in a Docker container on your VPS. To access Yacht using the domain name, you can use the following URL in your web browser:
https://yacht.<your_domain>
You can log in to Yacht using its default credentials:

- Username: admin@yacht.local
- Password: pass

After you log in, you can update the password and user email in the Settings section.
🎯 At this point, our services.yml playbook should look like this:

---
- name: Configure VPS Services
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    - ansible.builtin.import_role:
        name: services
        tasks_from: networks.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: postgresql.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: pgadmin.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: gitea.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: umami.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: yacht.yml
In this section, we will install Ntfy (pronounced notify) on our VPS. Ntfy is a simple HTTP-based pub-sub notification service. We will use Ansible to install and configure Ntfy in a Docker container on our VPS.
For more details about this service and its documentation, check this link: Ntfy
Ntfy works by publishing messages to a topic and having a notification service consume those messages and deliver them to the user. There is a mobile app available for Ntfy that you can use to subscribe to a specific topic and get notifications on your phone.
Messages can be published with a simple curl command, via email, or by integrating Ntfy with other services like GitHub Actions or GitLab CI/CD pipelines.
Here is an example from the Ntfy documentation on how to publish a message using curl when you are low on space on your server:
#!/bin/bash

mingigs=10
avail=$(df | awk '$6 == "/" && $4 < '$mingigs' * 1024*1024 { print $4/1024/1024 }')
topicurl=https://notify.<your_domain>/mytopic

if [ -n "$avail" ]; then
    curl \
        -d "Only $avail GB available on the root disk. Better clean that up." \
        -H "Title: Low disk space alert on $(hostname)" \
        -H "Priority: high" \
        -H "Tags: warning,cd" \
        $topicurl
fi
For more examples and integrations, check the Ntfy documentation.
- Inside roles/services/tasks/, create a new file named notify.yml with the content from the following file:
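The linked file isn't reproduced here; it presumably also creates the admin account from notify_admin_user and notify_admin_pass. A sketch of the container task — port binding and environment settings are assumptions — might be:

---
- name: Run the Ntfy container
  community.docker.docker_container:
    name: "{{ notify.container_name }}"
    image: "{{ notify.container_image }}"
    hostname: "{{ notify.container_hostname }}"
    command: serve
    networks:
      - name: "{{ notify.network }}"
    published_ports:
      - "127.0.0.1:2586:80"  # assumed; Caddy proxies to localhost:2586
    volumes:
      - "{{ notify.data_volume }}:/var/lib/ntfy"
    env:
      NTFY_BASE_URL: "https://{{ notify.domain }}"
      NTFY_AUTH_FILE: /var/lib/ntfy/auth.db
      NTFY_AUTH_DEFAULT_ACCESS: deny-all
    restart_policy: unless-stopped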
- The next step is to define the variables used in the notify.yml tasks. Update the group_vars/all.yml file with the following content:

# Notify Container Configuration
notify:
  data_volume: notify_data
  container_image: binwiederhier/ntfy:latest
  container_name: notify
  container_hostname: notify
  network: public
  domain: notify.<your_domain>
For the notify_admin_user and notify_admin_pass variables, we will use the secrets.yml file to store the sensitive data.
Edit the secrets.yml file using ansible-vault edit secrets.yml and add the following content:
notify_admin_user: admin
notify_admin_pass: mysecretpassword
- Update Caddy to expose the Notify interface using a reverse proxy. Update the Caddyfile.j2 template file to include a reverse proxy configuration for Notify:

notify.<your_domain> {
    reverse_proxy localhost:2586

    @httpget {
        protocol http
        method GET
        path_regexp ^/([-_a-z0-9]{0,64}$|docs/|static/)
    }
    redir @httpget https://{host}{uri}
}
- After updating the Caddyfile.j2 template file, run the packages.yml playbook to apply the changes:
ansible-playbook -i inventory.yml packages.yml -t caddy
- Update the services.yml playbook to include the notify tasks:

---
- name: Configure VPS Services
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    # The other tasks are above
    - ansible.builtin.import_role:
        name: services
        tasks_from: notify.yml
- Run the services.yml playbook to install and configure Notify:
ansible-playbook -i inventory.yml services.yml
🎉 If the playbook runs successfully, Notify will be installed and configured in a Docker container on your VPS. To access Notify using the domain name, you can use the following URL in your web browser:
https://notify.<your_domain>
You can log in to Notify using the credentials you defined in the secrets.yml file.
🎯 At this point, our services.yml playbook should look like this:

---
- name: Configure VPS Services
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    - ansible.builtin.import_role:
        name: services
        tasks_from: networks.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: postgresql.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: pgadmin.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: gitea.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: umami.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: yacht.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: notify.yml
Memos is a simple, self-hosted note-taking service. In this section, we will use Ansible to install and configure Memos in a Docker container on our VPS.
- Inside roles/services/tasks/, create a new file named memos.yml with the content from the following file:
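The linked file isn't reproduced here. A sketch using the variables defined below — the port binding and the data path are assumptions — might be:

---
- name: Run the Memos container
  community.docker.docker_container:
    name: "{{ memos.container_name }}"
    image: "{{ memos.container_image }}"
    hostname: "{{ memos.container_hostname }}"
    networks:
      - name: "{{ memos.network }}"
    published_ports:
      - "127.0.0.1:5230:5230"  # assumed; Caddy proxies to localhost:5230
    volumes:
      - "{{ memos.data_volume }}:/var/opt/memos"
    restart_policy: unless-stopped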
- The next step is to define the variables used in the memos.yml tasks. Update the group_vars/all.yml file with the following content:

# Memos Container Configuration
memos:
  data_volume: memos_data
  container_image: neosmemo/memos:stable
  container_name: memos
  container_hostname: memos
  network: public
- Update Caddy to expose the Memos interface using a reverse proxy. Update the Caddyfile.j2 template file to include a reverse proxy configuration for Memos:

memo.<your_domain> {
    reverse_proxy localhost:5230
}
- After updating the Caddyfile.j2 template file, run the packages.yml playbook to apply the changes:
ansible-playbook -i inventory.yml packages.yml -t caddy
- Update the services.yml playbook to include the memos tasks:

---
- name: Configure VPS Services
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    # The other tasks are above
    - ansible.builtin.import_role:
        name: services
        tasks_from: memos.yml
- Run the services.yml playbook to install and configure Memos:
ansible-playbook -i inventory.yml services.yml
🎉 If the playbook runs successfully, Memos will be installed and configured in a Docker container on your VPS. To access Memos using the domain name, you can use the following URL in your web browser:
https://memo.<your_domain>
The first user created in Memos is the admin user. After you log in, you can disable registration of new users in the Settings section.
🎯 At this point, our services.yml playbook should look like this:

---
- name: Configure VPS Services
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    - ansible.builtin.import_role:
        name: services
        tasks_from: networks.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: postgresql.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: pgadmin.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: gitea.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: umami.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: yacht.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: notify.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: memos.yml
Semaphore is a simple, self-hosted CI/CD service backed by Ansible. In this section, we will use Ansible to install and configure Semaphore in a Docker container on our VPS.
- Inside roles/services/tasks/, create a new file named semaphore.yml with the content from the following file:
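The linked file isn't reproduced here. A sketch wiring Semaphore to PostgreSQL with the variables defined below — the port binding, the private network attachment, and the environment wiring are assumptions — might be:

---
- name: Run the Semaphore container
  community.docker.docker_container:
    name: "{{ semaphore.container_name }}"
    image: "{{ semaphore.container_image }}"
    hostname: "{{ semaphore.container_hostname }}"
    networks:
      - name: "{{ semaphore.network }}"
      - name: private  # assumed, so Semaphore can reach the PostgreSQL container
    published_ports:
      - "127.0.0.1:3002:3000"  # assumed; Caddy proxies to localhost:3002
    env:
      SEMAPHORE_DB_DIALECT: postgres
      SEMAPHORE_DB_HOST: postgres
      SEMAPHORE_DB_USER: "{{ semaphore_db_user }}"
      SEMAPHORE_DB_PASS: "{{ semaphore_db_pass }}"
      SEMAPHORE_DB: "{{ semaphore_db_name }}"
      SEMAPHORE_ADMIN: "{{ semaphore_admin_user }}"
      SEMAPHORE_ADMIN_NAME: "{{ semaphore_admin_name }}"
      SEMAPHORE_ADMIN_EMAIL: "{{ semaphore_admin_email }}"
      SEMAPHORE_ADMIN_PASSWORD: "{{ semaphore_admin_pass }}"
      SEMAPHORE_ACCESS_KEY_ENCRYPTION: "{{ semaphore_access_key_encryption }}"
    restart_policy: unless-stopped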
- The next step is to define the variables used in the semaphore.yml tasks. Update the group_vars/all.yml file with the following content:

# Semaphore Container Configuration
semaphore:
  container_image: semaphoreui/semaphore:latest
  container_name: semaphore
  container_hostname: semaphore
  network: public
For the sensitive data, we will again use the secrets.yml file.
Edit the secrets.yml file using ansible-vault edit secrets.yml and add the following content:

semaphore_db_user: semaphore
semaphore_db_pass: mysecretpassword
semaphore_db_name: semaphore
semaphore_admin_name: admin
semaphore_admin_user: admin
semaphore_admin_email: admin@<your_domain>
semaphore_admin_pass: mysecretpassword
semaphore_access_key_encryption: gs72mPntFATGJs9qK0pQ0rKtfidlexiMjYCH9gWKhTU= # generate a new key using `openssl rand -base64 32`
- Update Caddy to expose the Semaphore interface using a reverse proxy. Update the Caddyfile.j2 template file to include a reverse proxy configuration for Semaphore:

semaphore.<your_domain> {
    reverse_proxy localhost:3002
}
- After updating the Caddyfile.j2 template file, run the packages.yml playbook to apply the changes:
ansible-playbook -i inventory.yml packages.yml -t caddy
- Update the services.yml playbook to include the semaphore tasks:

---
- name: Configure VPS Services
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    # The other tasks are above
    - ansible.builtin.import_role:
        name: services
        tasks_from: semaphore.yml
- Run the services.yml playbook to install and configure Semaphore:
ansible-playbook -i inventory.yml services.yml
🎉 If the playbook runs successfully, Semaphore will be installed and configured in a Docker container on your VPS. To access Semaphore using the domain name, you can use the following URL in your web browser:
https://semaphore.<your_domain>
You can log in to Semaphore using the credentials you defined in the secrets.yml file.
🎯 At this point, our services.yml playbook should look like this:

---
- name: Configure VPS Services
  hosts: vps
  vars_files:
    - secrets.yml
  tasks:
    - ansible.builtin.import_role:
        name: services
        tasks_from: networks.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: postgresql.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: pgadmin.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: gitea.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: umami.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: yacht.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: notify.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: memos.yml
    - ansible.builtin.import_role:
        name: services
        tasks_from: semaphore.yml
- Ansible Documentation
- Ansible Galaxy
- Ansible Roles
- Ansible Tags
- Ansible Vault
- Docker Documentation
- Caddy Documentation
- PostgreSQL Documentation
- PgAdmin Documentation
- Gitea Documentation
- Umami Documentation
- Yacht Documentation
- Ntfy Documentation
- Memos Documentation
- Semaphore Documentation
Feel free to fork this repository, add more services, and create a pull request. If you have any questions or feedback, please create an issue. I would be happy to help you.
If you found this guide helpful, please give it a ⭐ on GitHub and share it with your friends.
Thank you for reading! 🙏