
---
date: 2025-04-04
updated: 2025-04-08
tags: [servers, infrastructure, homelab]
primaryTag: infrastructure
---

# Servers Overview

Documentation about how the servers are set up.

## Hardware

Currently there are three Mac minis that run Fedora Linux. One is owned by the company and two are my personal machines, and I try to segment services along that line: services I run primarily for personal use live on the servers I own, while services that support business functionality run on the company's server.

All of the servers run their services in Docker containers, which keeps the services isolated from the host system (server) and makes them more easily portable between servers if needed.

There is also a Raspberry Pi that runs Home Assistant, which is another one of my personal devices.

| Server         | DNS Name                | IP Address     |
| -------------- | ----------------------- | -------------- |
| mighty-mini    | mightymini.housh.dev    | 192.168.50.6   |
| franken-mini   | frankenmini.housh.dev   | 192.168.50.5   |
| rogue-mini     | roguemini.housh.dev     | 192.168.50.4   |
| home-assistant | homeassitant.housh.dev  | 192.168.30.5   |
| NAS            | nas.housh.dev           | 192.168.10.105 |
| Backup NAS     | nas.hhe                 | 192.168.1.10   |

You can read more about the network setup here.

**Note:** The backup NAS is used to back up our primary NAS. For now it is not easy to use, and it will also be used for camera / security footage in the future.

## Containers

Services run inside of Docker containers that are spread across several servers. The containers are deployed using a container orchestrator, currently komo.

Click here for komo's documentation.

All of the services have a corresponding repository for their configuration, hosted on an internal git server. The configuration consists of a Docker Compose file (generally named compose.yaml). There is often an example.env file for the service as well; these exist only for documentation and variable-naming purposes. The environment variables themselves are set up in the container orchestrator for the service, to prevent sensitive data from being "leaked".
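As an illustration, here is a minimal sketch of what one of these repositories' compose.yaml might contain; the service name, image, port, and variable names are hypothetical and not taken from the actual setup:

```yaml
# compose.yaml -- hypothetical service definition
services:
  example-app:
    image: ghcr.io/example/example-app:latest   # image pulled at deploy time
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      # Real values are injected by the container orchestrator,
      # never committed to the repository.
      - DATABASE_URL=${DATABASE_URL}
      - API_TOKEN=${API_TOKEN}
```

The accompanying example.env would then only document the variable names with placeholder values (e.g. `DATABASE_URL=postgres://user:pass@host:5432/db`), never real secrets.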

### Container orchestrator

The container orchestrator is where the actual configuration for each service is done. It configures which physical server the service will run on, is responsible for pulling the proper container images, pulls the configuration / compose.yaml file from the repository, sets up environment variables, and deploys the service onto the server. It also has some features for monitoring CPU and memory usage of the servers.
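This is not komo's real configuration format, but purely as an illustration, the information the orchestrator tracks per service (as described above) amounts to something like the following; all names and values are made up:

```yaml
# Illustrative only -- not komo's actual syntax.
service: example-app
server: rogue-mini                       # which physical server runs it
repository: <internal git URL>           # where compose.yaml lives
compose_file: compose.yaml
environment:                             # set here, not committed to git
  DATABASE_URL: "<secret>"
  API_TOKEN: "<secret>"
```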

The primary reason for the orchestrator is to automate the deployment of services when there is a change. It also has a web interface that allows all of it to be managed from a single "pane of glass", essentially.

All services are automated, except for the primary load balancer / reverse proxy (explained later).

## Service Overview

This section gives an overview of how the services are set up / routed.

### Caddy (Reverse Proxy)

All of the containers are accessed through the primary load balancer / reverse proxy (caddy). This is responsible for routing traffic to the appropriate service and server based on the domain name. It also handles / automates TLS certificates via Let's Encrypt.

Click here for caddy's documentation.
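To make the routing concrete, here is a hedged sketch of what a couple of site blocks in the primary Caddyfile could look like; the hostnames, upstream addresses, and ports are hypothetical and not taken from the real configuration:

```caddyfile
# Hypothetical site blocks -- Caddy obtains and renews the Let's Encrypt
# certificates for these hostnames automatically.
git.housh.dev {
	reverse_proxy 192.168.50.5:3000
}

media.housh.dev {
	reverse_proxy 192.168.50.6:8096
}
```

Each block simply maps a *.housh.dev hostname to the server and port where that service's container is listening.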

Each server also has its own reverse proxy, which allows services to be accessed on that server's domain name as well, but that is an implementation detail that doesn't matter very much. Most services are accessed via the primary reverse proxy at a *.housh.dev address.

Below is an overview:

*(Diagram: network-overview)*

**Note:** The primary caddy instance is the only service that is currently not automated when changes are made to it. Because it is what routes traffic to basically everything, the communication loop is broken during an update, which stalls the update. Currently the changes are pushed to the server properly, but I then have to ssh into the server and restart the caddy instance. This is on my list of things to fix / find a different solution for.
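The manual step is roughly the following; the username, hostname, and directory are placeholders, not the real values:

```sh
# Hypothetical manual restart after the updated config has been pushed.
ssh user@<server-running-caddy>
cd /path/to/caddy-deployment   # directory containing caddy's compose.yaml
docker compose restart caddy
```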

## DNS / Access

Currently services are only available when connected to our network; remote access may be implemented in the future. If access is required from outside our network, then using our VPN is required. The VPN setup is done automatically via unifi (our network router).

DNS is what translates domain names to IP addresses. Currently the public DNS records are handled by Cloudflare, which is used to validate that we own the housh.dev domain name so that Let's Encrypt can issue free TLS certificates. TLS is used to encrypt traffic over the web (https://).
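For services that are not reachable from the public internet, this validation is typically done with the ACME DNS challenge. Assuming that is the approach used here (the article doesn't spell it out), and assuming a Caddy build that includes the Cloudflare DNS module, the relevant piece of Caddy configuration would look roughly like this; the token variable name and site details are placeholders:

```caddyfile
git.housh.dev {
	# Ownership of housh.dev is proven via a DNS record created in Cloudflare,
	# so the service never has to be reachable from the public internet.
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}
	reverse_proxy 192.168.50.5:3000
}
```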

Internal DNS records are set up in our unifi router under Settings -> Routing -> DNS. The internal DNS is fairly simple and just needs to map host names to the servers appropriately (primarily just to the primary caddy instance, which then handles all the routing to the individual service that is requested). All devices that connect to the network will be able to use the internal DNS to resolve host names properly, meaning it all should just work automatically without any action from the user.