
feat: init

Karan Sharma · 2 years ago · commit b5a5cd3678
  1. .gitignore (+11)
  2. LICENSE (+21)
  3. README.md (+40)
  4. digitalocean-infra/README.md (+22)
  5. digitalocean-infra/firewalls.tf (+53)
  6. digitalocean-infra/main.tf (+35)
  7. digitalocean-infra/outputs.tf (+12)
  8. nginx/README.md (+19)
  9. nginx/conf.d/adguard.conf (+13)
  10. nginx/conf.d/ip.conf (+15)
  11. nginx/conf.d/nginxconfig.io/general.conf (+18)
  12. nginx/conf.d/nginxconfig.io/proxy.conf (+11)
  13. nginx/conf.d/nginxconfig.io/security.conf (+12)
  14. nginx/docker-compose.yml (+11)
  15. pi/Vagrantfile (+3)
  16. pi/ansible.cfg (+3)
  17. pi/inventory.sample (+9)
  18. pi/roles/k3s-agents/handlers/main.yml (+2)
  19. pi/roles/k3s-agents/tasks/agent.yml (+16)
  20. pi/roles/k3s-agents/tasks/main.yml (+5)
  21. pi/roles/k3s-agents/templates/k3s-agent.service.j2 (+17)
  22. pi/roles/k3s-agents/vars/main.yml (+3)
  23. pi/roles/k3s-common/handlers/main.yml (+2)
  24. pi/roles/k3s-common/tasks/common.yml (+48)
  25. pi/roles/k3s-common/tasks/main.yml (+5)
  26. pi/roles/k3s-common/vars/main.yml (+1)
  27. pi/roles/k3s-control/handlers/main.yml (+2)
  28. pi/roles/k3s-control/tasks/control.yml (+71)
  29. pi/roles/k3s-control/tasks/main.yml (+5)
  30. pi/roles/k3s-control/templates/k3s-control.service.j2 (+24)
  31. pi/roles/k3s-control/vars/main.yml (+1)
  32. pi/roles/raspbian/handlers/main.yml (+7)
  33. pi/roles/raspbian/tasks/apt.yml (+25)
  34. pi/roles/raspbian/tasks/hostname.yml (+7)
  35. pi/roles/raspbian/tasks/locale.yml (+4)
  36. pi/roles/raspbian/tasks/main.yml (+21)
  37. pi/roles/raspbian/tasks/ssh.yml (+80)
  38. pi/roles/raspbian/tasks/user.yml (+9)
  39. pi/roles/raspbian/vars/main.yml (+6)
  40. pi/roles/raspbian/vars/secret.sample (+3)
  41. pi/roles/raspbian/vars/secret.yml (+46)
  42. pi/setup.yml (+25)
  43. wireguard/README.md (+0)
  44. wireguard/inventory.example (+15)
  45. wireguard/playbook.yml (+3)
  46. wireguard/roles/wireguard/defaults/main.yml (+30)
  47. wireguard/roles/wireguard/handlers/main.yml (+6)
  48. wireguard/roles/wireguard/tasks/main.yml (+84)
  49. wireguard/roles/wireguard/tasks/setup.yml (+28)
  50. wireguard/roles/wireguard/templates/wg.conf.j2 (+67)

.gitignore (+11)

@@ -0,0 +1,11 @@
.vagrant
packer*
www
inventory
*.tfvars
.terraform.tfstate.lock.info
.terraform
*.tfstate
*.tfstate.backup
*.backup
*.env

LICENSE (+21)

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2019 Karan Sharma
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md (+40)

@@ -0,0 +1,40 @@
# Hydra
> Setup scripts for my home server setup, named "Hydra"
## Overview
I run a Kubernetes cluster using [k3s](https://k3s.io/) on two Raspberry Pi 4 nodes. I've written an Ansible playbook that prepares the base OS ([Raspbian Buster Lite](https://www.raspberrypi.org/downloads/raspbian/)), configures some sane defaults, and creates a k3s cluster.
## Getting Started
If you're interested in bootstrapping an RPi cluster with k3s, you can follow the instructions below to use the Ansible playbook.
### Prerequisites
- You need to flash the Raspbian OS image onto your RPi's SD card (a sketch follows below).
- You need to enable `SSH` access, which can be done with `sudo touch /boot/ssh`.
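A minimal sketch of these steps, assuming the SD card shows up as `/dev/sdX` and the image file name is illustrative (double-check the device before writing):

```sh
# Flash the Raspbian Lite image to the SD card (destructive; verify /dev/sdX first!)
unzip -p raspbian_buster_lite.zip | sudo dd of=/dev/sdX bs=4M conv=fsync status=progress

# Mount the card's boot partition and enable SSH
sudo mount /dev/sdX1 /mnt
sudo touch /mnt/ssh
sudo umount /mnt
```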
### Running the playbook
- Copy `inventory.sample` to `inventory` and replace the hosts with your RPi nodes.
- Copy `pi/roles/raspbian/vars/secret.sample` to `pi/roles/raspbian/vars/secret.yml` and fill in your own secrets. I generally prefer to use Ansible Vault to encrypt the secrets.
- Run the playbook using `ansible-playbook pi/setup.yml` (see the sample invocation after this list). This will configure your RPi nodes to:
  - Change the default password for `pi`.
  - Disable password-based SSH access.
  - Configure the hostname on all RPi nodes.
  - Enable `cgroups` features.
  - Reduce the GPU memory split to the lowest possible value (16 MB), since the Pis run headless as servers.
  - Bootstrap a k3s cluster on the control-plane node and join an agent node to it.
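A sample invocation, assuming the secrets are encrypted with Ansible Vault as mentioned above (the flags shown are illustrative):

```sh
# Verify connectivity to all nodes first
ansible -i inventory hydra -m ping

# Prompts for the vault password protecting secret.yml
ansible-playbook -i inventory pi/setup.yml --ask-vault-pass
```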
### Credits
> I referred to the following resources while creating the Playbook
- [https://blog.alexellis.io/test-drive-k3s-on-raspberry-pi/](https://blog.alexellis.io/test-drive-k3s-on-raspberry-pi/)
- [https://github.com/rancher/k3s/tree/master/contrib/ansible/](https://github.com/rancher/k3s/tree/master/contrib/ansible/)
## Services Hosted
> List of services I am running on Hydra.
WIP

digitalocean-infra/README.md (+22)

@@ -0,0 +1,22 @@
# Droplet Infra
Uses the DigitalOcean Terraform provider to provision a personal server where the VPN runs. A Kubernetes control plane is also deployed on this node, which acts as the master node.
## Getting Started
**Note**: Before you begin, export your API token; it is required for any operation against the DO API:
```sh
export DIGITALOCEAN_TOKEN=<>
```
```shell
tf init
```
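Here `tf` is presumably a shell alias for the `terraform` CLI (an assumption; the alias isn't defined in this repo):

```sh
# Hypothetical alias; add it to your shell rc
alias tf=terraform
```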
## Applying changes
```shell
tf plan
tf apply
```
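After an apply, the droplet metadata declared in `outputs.tf` can be read back:

```sh
tf output alphard_ipv4
```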

digitalocean-infra/firewalls.tf (+53)

@@ -0,0 +1,53 @@
resource "digitalocean_firewall" "web" {
  name        = "web-inbound"
  droplet_ids = [digitalocean_droplet.alphard.id]

  inbound_rule {
    protocol         = "tcp"
    port_range       = "80"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  inbound_rule {
    protocol         = "tcp"
    port_range       = "443"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }
}

resource "digitalocean_firewall" "ssh" {
  name        = "ssh-inbound"
  droplet_ids = [digitalocean_droplet.alphard.id]

  inbound_rule {
    protocol         = "tcp"
    port_range       = "22"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }
}

resource "digitalocean_firewall" "outbound-all" {
  name        = "allow-all-outbound"
  droplet_ids = [digitalocean_droplet.alphard.id]

  outbound_rule {
    protocol              = "tcp"
    port_range            = "1-65535"
    destination_addresses = ["0.0.0.0/0", "::/0"]
  }

  outbound_rule {
    protocol              = "udp"
    port_range            = "53"
    destination_addresses = ["0.0.0.0/0", "::/0"]
  }

  outbound_rule {
    protocol              = "icmp"
    destination_addresses = ["0.0.0.0/0", "::/0"]
  }
}

digitalocean-infra/main.tf (+35)

@@ -0,0 +1,35 @@
provider "digitalocean" {
  # You need to set this in your .bashrc:
  # export DIGITALOCEAN_TOKEN="Your API TOKEN"
}

# Create a new SSH key
resource "digitalocean_ssh_key" "mrkaran" {
  name       = "mrkaran.dev"
  public_key = file("~/.ssh/mrkaran_rsa.pub")
}

# Create a new droplet in the blr1 region (master node)
resource "digitalocean_droplet" "alphard" {
  image              = "ubuntu-18-04-x64"
  name               = "alphard"
  region             = "blr1"
  monitoring         = true
  size               = "s-1vcpu-2gb"
  ipv6               = true
  private_networking = true
  tags = [
    "hydra",
    "k8s-master",
    "vpn",
  ]
  ssh_keys = [digitalocean_ssh_key.mrkaran.fingerprint]
}

# Attach the floating IP to the droplet
resource "digitalocean_floating_ip" "alphard" {
  droplet_id = digitalocean_droplet.alphard.id
  region     = digitalocean_droplet.alphard.region
}

digitalocean-infra/outputs.tf (+12)

@@ -0,0 +1,12 @@
# Record some metadata about the created droplet
output "alphard_ipv4" {
  value = digitalocean_droplet.alphard.ipv4_address
}

output "alphard_droplet_name" {
  value = digitalocean_droplet.alphard.name
}

output "alphard_droplet_id" {
  value = digitalocean_droplet.alphard.id
}

nginx/README.md (+19)

@@ -0,0 +1,19 @@
# Nginx configuration for my domains
## Setup
`docker-compose up -d` spawns an `nginx` container with the config files mounted at `/etc/nginx/conf.d` inside the container.
### Servers
- [ip.mrkaran.dev](conf.d/ip.conf)
- [adguard.mrkaran.dev](conf.d/adguard.conf)
### SSL
All of my domains and subdomains are proxied through Cloudflare, which terminates SSL at its edge; that's why there are no `listen 443` directives in these `nginx` configurations. If, for whatever reason, a site is not behind the Cloudflare proxy, SSL certificate configuration will be added for it.
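A quick sanity check, assuming DNS for `ip.mrkaran.dev` already points at the Cloudflare proxy; it should print your public IP as forwarded in `CF-Connecting-IP`:

```sh
curl -s https://ip.mrkaran.dev
```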
### References
- https://github.com/trimstray/nginx-admins-handbook
- https://nginxconfig.io/

nginx/conf.d/adguard.conf (+13)

@@ -0,0 +1,13 @@
server {
    server_name adguard.mrkaran.dev;

    # security
    include conf.d/nginxconfig.io/security.conf;

    # reverse proxy
    location / {
        proxy_pass http://10.47.0.5:6000; # anchor IP in DO
        include conf.d/nginxconfig.io/proxy.conf;
    }
}

nginx/conf.d/ip.conf (+15)

@@ -0,0 +1,15 @@
server {
    server_name ip.mrkaran.dev;
    real_ip_header CF-Connecting-IP;

    # security
    include conf.d/nginxconfig.io/security.conf;

    location / {
        default_type text/plain;
        return 200 "$http_cf_connecting_ip\n";
    }

    # general
    include conf.d/nginxconfig.io/general.conf;
}

nginx/conf.d/nginxconfig.io/general.conf (+18)

@@ -0,0 +1,18 @@
# favicon.ico
location = /favicon.ico {
    log_not_found off;
    access_log off;
}

# robots.txt
location = /robots.txt {
    log_not_found off;
    access_log off;
}

# gzip
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml application/json application/javascript application/rss+xml application/atom+xml image/svg+xml;

nginx/conf.d/nginxconfig.io/proxy.conf (+11)

@@ -0,0 +1,11 @@
proxy_http_version 1.1;
proxy_cache_bypass $http_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;

nginx/conf.d/nginxconfig.io/security.conf (+12)

@@ -0,0 +1,12 @@
# security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

# . files
location ~ /\.(?!well-known) {
    deny all;
}

nginx/docker-compose.yml (+11)

@@ -0,0 +1,11 @@
version: '3.7'

services:
  nginx:
    image: 'nginx:latest'
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - '$PWD/conf.d:/etc/nginx/conf.d'
    restart: unless-stopped

pi/Vagrantfile (+3)

@@ -0,0 +1,3 @@
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/disco64"
end

pi/ansible.cfg (+3)

@@ -0,0 +1,3 @@
[defaults]
roles_path = ../roles
inventory = ../inventory

pi/inventory.sample (+9)

@@ -0,0 +1,9 @@
[hydra:children]
control
agent
[control]
hydra-control ansible_ssh_host=hydra-control.local ansible_ssh_user=pi ansible_ssh_port=22 remote_user=pi
[agent]
hydra-agent-1 ansible_ssh_host=raspberrypi.local ansible_ssh_user=pi ansible_ssh_port=22 remote_user=pi ansible_ssh_pass=raspberry

pi/roles/k3s-agents/handlers/main.yml (+2)

@@ -0,0 +1,2 @@
- name: reboot
  reboot:

pi/roles/k3s-agents/tasks/agent.yml (+16)

@@ -0,0 +1,16 @@
---
- name: Copy K3s service file
  template:
    src: "k3s-agent.service.j2"
    dest: "{{ systemd_dir }}/k3s-agent.service"
    owner: root
    group: root
    mode: 0755

- name: Enable and check K3s service
  systemd:
    name: k3s-agent
    daemon_reload: yes
    state: restarted
    enabled: yes

pi/roles/k3s-agents/tasks/main.yml (+5)

@@ -0,0 +1,5 @@
---
- import_tasks: agent.yml
  tags:
    - agent

pi/roles/k3s-agents/templates/k3s-agent.service.j2 (+17)

@@ -0,0 +1,17 @@
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
After=network-online.target

[Service]
ExecStart=/usr/local/bin/k3s agent --server {{ k3s_server_address }} --token {{ k3s_cluster_token }}
KillMode=process
Delegate=yes
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target

pi/roles/k3s-agents/vars/main.yml (+3)

@@ -0,0 +1,3 @@
systemd_dir: /etc/systemd/system
k3s_server_address: "{{ hostvars[groups['control'][0]].k3s_server_address }}"
k3s_cluster_token: "{{ hostvars[groups['control'][0]].k3s_cluster_token }}"

pi/roles/k3s-common/handlers/main.yml (+2)

@@ -0,0 +1,2 @@
- name: reboot
  reboot:

pi/roles/k3s-common/tasks/common.yml (+48)

@@ -0,0 +1,48 @@
- name: Set GPU memory split to 16 MB
  lineinfile:
    path: /boot/config.txt
    line: 'gpu_mem=16'
    create: yes

- name: Add cgroup directives to boot commandline config
  lineinfile:
    path: /boot/cmdline.txt
    regexp: '((.)+?)(\scgroup_\w+=\w+)*$'
    line: '\1 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory'
    backrefs: yes

- name: Download k3s binary armhf
  get_url:
    url: https://github.com/rancher/k3s/releases/download/{{ k3s_version }}/k3s-armhf
    dest: /usr/local/bin/k3s
    owner: root
    group: root
    # Octal notation; a bare 755 would be read as a decimal int
    mode: 0755
  when:
    - ansible_facts.architecture is search("arm")
    - ansible_facts.userspace_bits == "32"

- name: Create directory .kube
  file:
    path: /home/{{ ansible_user }}/.kube
    state: directory
    owner: "{{ ansible_user }}"

- name: Create kubectl symlink
  file:
    src: /usr/local/bin/k3s
    dest: /usr/local/bin/kubectl
    state: link

- name: Create crictl symlink
  file:
    src: /usr/local/bin/k3s
    dest: /usr/local/bin/crictl
    state: link

- name: Point hostname to localhost (k3s requirement)
  lineinfile:
    path: /etc/hosts
    line: "127.0.0.1 {{ inventory_hostname }}"
  notify:
    - reboot

pi/roles/k3s-common/tasks/main.yml (+5)

@@ -0,0 +1,5 @@
---
- import_tasks: common.yml
  tags:
    - common

pi/roles/k3s-common/vars/main.yml (+1)

@@ -0,0 +1 @@
k3s_version: v0.8.1

pi/roles/k3s-control/handlers/main.yml (+2)

@@ -0,0 +1,2 @@
- name: reboot
  reboot:

pi/roles/k3s-control/tasks/control.yml (+71)

@@ -0,0 +1,71 @@
---
- name: Copy K3s service file
  register: k3s_service
  template:
    src: "k3s-control.service.j2"
    dest: "{{ systemd_dir }}/k3s.service"
    owner: root
    group: root
    mode: 0755

- name: Enable and check K3s service
  systemd:
    name: k3s
    daemon_reload: yes
    state: restarted
    enabled: yes

- name: Wait for node-token
  wait_for:
    path: /var/lib/rancher/k3s/server/node-token

- name: Register node-token file access mode
  stat:
    path: /var/lib/rancher/k3s/server
  register: p

- name: Change file access node-token
  file:
    path: /var/lib/rancher/k3s/server
    mode: "g+rx,o+rx"

- name: Read node-token from control node
  slurp:
    src: /var/lib/rancher/k3s/server/node-token
  register: node_token

- name: Store control node-token
  set_fact:
    k3s_cluster_token: "{{ node_token.content | b64decode | regex_replace('\n', '') }}"

- name: Print cluster node token
  debug:
    msg: "{{ k3s_cluster_token }}"

- name: Restore node-token file access
  file:
    path: /var/lib/rancher/k3s/server
    mode: "{{ p.stat.mode }}"

- name: Copy config file to user home directory
  copy:
    src: /etc/rancher/k3s/k3s.yaml
    dest: /home/{{ ansible_user }}/.kube/config
    remote_src: yes
    owner: "{{ ansible_user }}"

- name: Set server address
  set_fact:
    k3s_server_address: "https://{{ ansible_default_ipv4.address }}:6443"
  run_once: yes

- name: Print server address
  debug:
    msg: "{{ k3s_server_address }}"

# k3s_server_address already carries the https:// scheme, so substitute the
# whole localhost URL with it rather than prefixing https:// a second time.
- name: Replace https://localhost:6443 with {{ k3s_server_address }}
  replace:
    path: /home/{{ ansible_user }}/.kube/config
    regexp: 'https://localhost:6443'
    replace: '{{ k3s_server_address }}'

pi/roles/k3s-control/tasks/main.yml (+5)

@@ -0,0 +1,5 @@
---
- import_tasks: control.yml
  tags:
    - control

pi/roles/k3s-control/templates/k3s-control.service.j2 (+24)

@@ -0,0 +1,24 @@
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
After=network-online.target

[Service]
Type=notify
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
    server
KillMode=process
Delegate=yes
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target

pi/roles/k3s-control/vars/main.yml (+1)

@@ -0,0 +1 @@
systemd_dir: /etc/systemd/system

pi/roles/raspbian/handlers/main.yml (+7)

@@ -0,0 +1,7 @@
- name: restart sshd
  service:
    name: ssh
    state: restarted

- name: reboot
  reboot:

pi/roles/raspbian/tasks/apt.yml (+25)

@@ -0,0 +1,25 @@
- name: Update apt cache and upgrade
  apt:
    update_cache: yes
    upgrade: yes

- name: Install dependencies
  apt:
    name: "{{ packages }}"
  vars:
    packages:
      - apt-transport-https
      - vim

- name: Install unattended-upgrades
  apt:
    name: "unattended-upgrades"
    state: present

- name: Remove useless packages from the cache
  apt:
    autoclean: yes

- name: Remove dependencies that are no longer required
  apt:
    autoremove: yes

pi/roles/raspbian/tasks/hostname.yml (+7)

@@ -0,0 +1,7 @@
---
- name: updating hostname to {{ inventory_hostname }} from {{ ansible_hostname }}
  hostname:
    name: "{{ inventory_hostname }}"
  notify:
    - reboot

pi/roles/raspbian/tasks/locale.yml (+4)

@@ -0,0 +1,4 @@
- name: Ensure US locale exists
  locale_gen:
    name: en_US.UTF-8
    state: present

pi/roles/raspbian/tasks/main.yml (+21)

@@ -0,0 +1,21 @@
---
- import_tasks: apt.yml
  tags:
    - apt

- import_tasks: locale.yml
  tags:
    - locale

- import_tasks: user.yml
  tags:
    - user

- import_tasks: ssh.yml
  tags:
    - ssh

- import_tasks: hostname.yml
  tags:
    - hostname

pi/roles/raspbian/tasks/ssh.yml (+80)

@@ -0,0 +1,80 @@
---
- include_vars: secret.yml

- name: upload authorized keys for user {{ ansible_ssh_user }}
  authorized_key:
    user: "{{ ansible_ssh_user }}"
    key: "{{ item }}"
    state: present
  with_items: "{{ ssh_public_keys }}"

- name: disable ssh remote root login
  lineinfile:
    dest: "{{ ssh_sshd_config }}"
    regexp: "^#?PermitRootLogin"
    line: "PermitRootLogin no"
    state: present
  notify:
    - restart sshd

- name: enable ssh strict mode
  lineinfile:
    dest: "{{ ssh_sshd_config }}"
    regexp: "^#?StrictModes"
    line: "StrictModes yes"
    state: present
  notify:
    - restart sshd

- name: disable X11 forwarding
  lineinfile:
    dest: "{{ ssh_sshd_config }}"
    regexp: "^#?X11Forwarding"
    line: "X11Forwarding no"
    state: present
  notify:
    - restart sshd

- name: disable ssh password login
  lineinfile:
    dest: "{{ ssh_sshd_config }}"
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
    state: present
  with_items:
    - regexp: "^#?PasswordAuthentication"
      line: "PasswordAuthentication no"
    - regexp: "^#?ChallengeResponseAuthentication"
      line: "ChallengeResponseAuthentication no"
    - regexp: "^#?UsePAM"
      line: "UsePAM no"
    - regexp: "^#?PermitEmptyPasswords"
      line: "PermitEmptyPasswords no"
  notify:
    - restart sshd

- name: set ssh allowed users to {{ ansible_ssh_user }}
  lineinfile:
    dest: "{{ ssh_sshd_config }}"
    regexp: "^#?AllowUsers"
    line: "AllowUsers {{ ansible_ssh_user }}"
  notify:
    - restart sshd

- name: add ssh banner info
  lineinfile:
    dest: "{{ ssh_sshd_config }}"
    regexp: "^#?Banner"
    line: "Banner /etc/issue.net"
    state: present
  notify:
    - restart sshd

- name: update ssh banner
  copy:
    content: "{{ ssh_banner }}"
    dest: /etc/issue.net
  when: ssh_banner != None
  notify:
    - restart sshd

pi/roles/raspbian/tasks/user.yml (+9)

@@ -0,0 +1,9 @@
---
- include_vars: secret.yml

- name: Update the default password of pi user
  become: yes
  user:
    name: "pi"
    password: "{{ pi_password | password_hash('sha512', pi_password_salt) }}"

pi/roles/raspbian/vars/main.yml (+6)

@@ -0,0 +1,6 @@
ssh_sshd_config: "/etc/ssh/sshd_config"
ssh_banner: Welcome to Hydra! The HomeLab Server.
# interface: eth0
# ip_address:
# routers:
# dns_servers:

pi/roles/raspbian/vars/secret.sample (+3)

@@ -0,0 +1,3 @@
pi_password: pi
ssh_public_keys: your-ssh-public-key
pi_password_salt: salt

pi/roles/raspbian/vars/secret.yml (+46)

@@ -0,0 +1,46 @@
$ANSIBLE_VAULT;1.1;AES256
38633337333137666632303434383435616335313665396334313235623933326330623438613966
6336323837323136323063396137316336313665323133370a653138656137353037386335346532
38623634353134643534326634653866653937343034316462633634393664633436623464393065
6534326131313066360a623134383135653563373938303436326331313735386634636463386238
62623034653533346463623431363338636666323437356635343338363733353133306232306366
33326537396139353963336539336635326364313938353563373933633865386135353361373138
38633134326134343830303433323533353635376230386332393662616262306365393065653632
34663432313033623337633038643031313438656664316237333464643731653836646235373463
36363764363861653935343337616333393034353462313233613763313965373038666263393964
34633830663533623066643531363063663038656136376265346234376263373062653865393336
34663262323634303863353936346663663066643564663462383931373839343634366133623637
30353832303463396561383462333135303565373364613865656133623161623633633235313434
33343965313239383064313732363563306132643263343063333364333536363430666231636365
61306236393237333365393639663866633462376631396135363731636263323634663030663436
36323935333565366561376138386436636636353166643930306538396264316533323332306166
32386438353666363862343235376530666464306438336166393962643936316661663633623430
30646362643438636633643265386137313739336337396363663137656466303334623537663332
62623432643039313563393566323061393232663236666139316430646161393339316334663330
31333339393234316436393739633261333030663633623532363935396434343930353933633365
61343963366461626634373332663437653332343164636438393535336261393062343835356335
34393530363231383136623131396537396630393736396237333264336530663439633434333133
37363730376436643138626138383039383965383436393336663037326433393130623635313161
39383965316434383432326263646432333565363932653863666132646363623366346339373865
66363362643437663733353837623435393162633166336631306163656561326437303139656461
35616666386234326562663934316333336333616261343435363232613436346538376666353135
35326635346533643538643239386539353933613836623330613162363064633538663264376135
32306663303833336433623661633137623662613762393864333338623638313863633766366365
31613134633137336438313233356232333034373963323437363466363665653565303833366537
32396362306239633132363066303061643861313034306561363364343337313961306532346363
62653435306463663538633762623833356236366136636464323465386535623539313366386630
38343639623136323037363364393235316239643630313833353734346139663361616635666564
64343834373536316134303835653666383235656664633130366638343961363938663734396330
35663136636134643366356663646162306437646138316633346434386464323662363564636238
66313834303831376466346262353135333534323436303033623837303565633535326461376531
30303830383964306138656263336466383736373830666163646137663932643065343863643462
32366137656161343762393830316431366566653136663665396438653664333839323436323965
32616238636337633662643637326365316463373838363635613635663038313333633465356433
34366562303130623338316431626330386334393332373064633163383064383131376531343166
33373561663030626435393761363161373535643438653133643439386335343462613736343835
61656636356633373136393061663663316330653963333763386532363830363735306539326234
38613532396165336132303238613766646231646539343165373531366633643662633739386136
39373734323561633436633235313936646162313439323663643438393239383038643536353738
61373265373935626435636139303433643264663030306464303764373361363566336630333232
36626131343034316535303239306138306461653034663865656236376166663363666537363930
35376234313963646463623539343531383238316538616665386663306164616661

pi/setup.yml (+25)

@@ -0,0 +1,25 @@
- name: Setup sane defaults for Raspbian
  hosts: hydra
  become: true
  roles:
    - role: raspbian

- name: Install k3s and configure cgroups
  hosts: hydra
  become: true
  roles:
    - role: k3s-common

- name: Configure k3s control plane
  hosts: control
  become: true
  roles:
    - role: k3s-control

- name: Provision agent nodes
  hosts: agent
  gather_facts: yes
  become: true
  roles:
    - role: k3s-agents

wireguard/README.md (+0)

wireguard/inventory.example (+15)

@@ -0,0 +1,15 @@
vpn:
  hosts:
    alphard:
      ansible_user: root
      ansible_host: alphard
      wireguard_address: 10.9.0.1/32
      wireguard_allowed_ips: "10.9.0.1/32, 192.168.2.0/24"
      wireguard_endpoint: multi.example.com
      peers:
        laptop-personal:
          public_key: lol
          wireguard_address: 10.9.0.2/32
          wireguard_allowed_ips: "10.9.0.2/32, 192.168.3.0/24"
          wireguard_persistent_keepalive: 15
          wireguard_endpoint: nated.example.com

wireguard/playbook.yml (+3)

@@ -0,0 +1,3 @@
- hosts: vpn
  roles:
    - wireguard

wireguard/roles/wireguard/defaults/main.yml (+30)

@@ -0,0 +1,30 @@
---
# Directory to store WireGuard configuration on the remote hosts
wireguard_remote_directory: "/etc/wireguard"

# The default port WireGuard will listen on if not specified otherwise.
wireguard_port: "51820"

# The default interface name that wireguard should use if not specified otherwise.
wireguard_interface: "wg0"

# The default address of the wireguard server for other peers to connect to.
wireguard_address: "10.200.200.1/24"

# wireguard_preup:
#   - echo 1 > /proc/sys/net/ipv4/ip_forward
#   - ufw allow 51820/udp

wireguard_postup:
  # Configure iptables to set up NAT on eth0 and forward packets (IPv4 and IPv6) from wg0 to eth0
  - iptables -A FORWARD -i wg0 -j ACCEPT
  - iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
  - ip6tables -A FORWARD -i wg0 -j ACCEPT
  - ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

wireguard_postdown:
  # Delete the rules, since wg0 no longer exists once wireguard is down
  - iptables -D FORWARD -i wg0 -j ACCEPT
  - iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
  - ip6tables -D FORWARD -i wg0 -j ACCEPT
  - ip6tables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

wireguard/roles/wireguard/handlers/main.yml (+6)

@@ -0,0 +1,6 @@
---
- name: restart wireguard
  service:
    name: "wg-quick@{{ wireguard_interface }}"
    state: restarted

wireguard/roles/wireguard/tasks/main.yml (+84)

@@ -0,0 +1,84 @@
---
- import_tasks: setup.yml
  tags: wg-install

- name: Set WireGuard IP (without mask)
  set_fact:
    wireguard_ip: "{{ wireguard_address.split('/')[0] }}"

- name: Register if config/private key already exists on target host
  stat:
    path: "{{ wireguard_remote_directory }}/{{ wireguard_interface }}.conf"
  register: config_file_stat
  tags:
    - wg-generate-keys
    - wg-config

- block:
    - name: Generate WireGuard private key
      shell: "wg genkey"
      register: wg_private_key_result
      tags:
        - wg-generate-keys

    - name: Set private key fact
      set_fact:
        private_key: "{{ wg_private_key_result.stdout }}"
      tags:
        - wg-generate-keys
  when: not config_file_stat.stat.exists

- block:
    - name: Read WireGuard config file
      slurp:
        src: "{{ wireguard_remote_directory }}/{{ wireguard_interface }}.conf"
      register: wg_config
      tags:
        - wg-config

    - name: Set private key fact
      set_fact:
        private_key: "{{ wg_config['content'] | b64decode | regex_findall('PrivateKey = (.*)') | first }}"
      tags:
        - wg-config
  when: config_file_stat.stat.exists

- name: Derive WireGuard public key
  shell: "echo '{{ private_key }}' | wg pubkey" # noqa 306
  register: wg_public_key_result
  changed_when: false
  tags:
    - wg-config

- name: Set public key fact
  set_fact:
    public_key: "{{ wg_public_key_result.stdout }}"
  tags:
    - wg-config

- name: Create WireGuard configuration directory
  file:
    dest: "{{ wireguard_remote_directory }}"
    state: directory
    mode: 0700
  tags:
    - wg-config

- name: Generate WireGuard configuration file
  template:
    src: wg.conf.j2
    dest: "{{ wireguard_remote_directory }}/{{ wireguard_interface }}.conf"
    owner: root
    group: root
    mode: 0600
  tags:
    - wg-config
  notify:
    - restart wireguard

- name: Start and enable WireGuard service
  service:
    name: "wg-quick@{{ wireguard_interface }}"
    state: started
    enabled: yes

wireguard/roles/wireguard/tasks/setup.yml (+28)

@@ -0,0 +1,28 @@
---
- name: Add WireGuard repository
  apt_repository:
    repo: "ppa:wireguard/wireguard"
    state: present
    update_cache: yes
  tags:
    - wg-install

- name: Install required packages
  package:
    name: "{{ packages }}"
    state: present
  vars:
    packages:
      - software-properties-common
      - linux-headers-{{ ansible_kernel }}
      - wireguard
  tags:
    - wg-install

- name: Enable IPv4 forwarding
  sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    reload: yes
  tags: wg-install

wireguard/roles/wireguard/templates/wg.conf.j2 (+67)

@@ -0,0 +1,67 @@
#jinja2: lstrip_blocks:"True",trim_blocks:"True"
[Interface]
# {{ inventory_hostname }}
Address = {{hostvars[inventory_hostname].wireguard_address}}
PrivateKey = {{private_key}}
ListenPort = {{wireguard_port}}
{% if hostvars[inventory_hostname].wireguard_dns is defined %}
DNS = {{hostvars[inventory_hostname].wireguard_dns}}
{% endif %}
{% if hostvars[inventory_hostname].wireguard_fwmark is defined %}
FwMark = {{hostvars[inventory_hostname].wireguard_fwmark}}
{% endif %}
{% if hostvars[inventory_hostname].wireguard_mtu is defined %}
MTU = {{hostvars[inventory_hostname].wireguard_mtu}}
{% endif %}
{% if hostvars[inventory_hostname].wireguard_table is defined %}
Table = {{hostvars[inventory_hostname].wireguard_table}}
{% endif %}
{% if hostvars[inventory_hostname].wireguard_preup is defined %}
{% for wg_preup in hostvars[inventory_hostname].wireguard_preup %}
PreUp = {{ wg_preup }}
{% endfor %}
{% endif %}
{% if hostvars[inventory_hostname].wireguard_predown is defined %}
{% for wg_predown in hostvars[inventory_hostname].wireguard_predown %}
PreDown = {{ wg_predown }}
{% endfor %}
{% endif %}
{% if hostvars[inventory_hostname].wireguard_postup is defined %}
{% for wg_postup in hostvars[inventory_hostname].wireguard_postup %}
PostUp = {{ wg_postup }}
{% endfor %}
{% endif %}
{% if hostvars[inventory_hostname].wireguard_postdown is defined %}
{% for wg_postdown in hostvars[inventory_hostname].wireguard_postdown %}
PostDown = {{ wg_postdown }}
{% endfor %}
{% endif %}
{% if hostvars[inventory_hostname].wireguard_save_config is defined %}
SaveConfig = true
{% endif %}
{% for peer in hostvars[inventory_hostname].peers %}
[Peer]
# {{ peer }}
PublicKey = {{ hostvars[inventory_hostname].peers[peer].public_key }}
{% if hostvars[inventory_hostname].peers[peer].wireguard_allowed_ips is defined %}
AllowedIPs = {{hostvars[inventory_hostname].peers[peer].wireguard_allowed_ips}}
{% else %}
AllowedIPs = {{hostvars[inventory_hostname].peers[peer].wireguard_ip}}/32
{% endif %}
{% if hostvars[inventory_hostname].peers[peer].wireguard_persistent_keepalive is defined %}
PersistentKeepalive = {{hostvars[inventory_hostname].peers[peer].wireguard_persistent_keepalive}}
{% endif %}
{% if hostvars[inventory_hostname].peers[peer].wireguard_port is defined and hostvars[inventory_hostname].peers[peer].wireguard_port is number %}
{% if hostvars[inventory_hostname].peers[peer].wireguard_endpoint is defined and hostvars[inventory_hostname].peers[peer].wireguard_endpoint != "" %}
Endpoint = {{hostvars[inventory_hostname].peers[peer].wireguard_endpoint}}:{{hostvars[inventory_hostname].peers[peer].wireguard_port}}
{% else %}
Endpoint = {{peer}}:{{hostvars[inventory_hostname].peers[peer].wireguard_port}}
{% endif %}
{% elif hostvars[inventory_hostname].peers[peer].wireguard_endpoint is defined and hostvars[inventory_hostname].peers[peer].wireguard_endpoint != "" %}
Endpoint = {{hostvars[inventory_hostname].peers[peer].wireguard_endpoint}}:{{wireguard_port}}
{% elif hostvars[inventory_hostname].peers[peer].wireguard_endpoint == "" %}
# No endpoint defined for this peer
{% else %}
Endpoint = {{peer}}:{{wireguard_port}}
{% endif %}
{% endfor %}