docs(self-hosted): document and simplify setup
Based on the work of @efahl with a bunch of simplifications.

* Drop `CONTAINER_SOCK` and `CONTAINER_HOST` in favour of
  `CONTAINER_SOCKET_PATH`.
* Install dependencies directly instead of using `pip`.
* Mention Podman socket via `systemd` or `podman service`.
* Drop `asu.env`: there are no good defaults since variables are not
  evaluated, the Podman socket path varies, etc.
* Drop outdated screenshot of `auc`.
* Some guidance on Squid cache setup.

Supersedes #1032

Signed-off-by: Paul Spooren <[email protected]>
aparcar committed Feb 15, 2025
1 parent 24f4b7a commit e01967f
Showing 7 changed files with 128 additions and 60 deletions.
6 changes: 4 additions & 2 deletions .github/workflows/podman.yml
@@ -31,8 +31,10 @@ jobs:
- name: Start the containers
run: |
podman system service --time=0 unix:///tmp/podman.sock &
cp misc/asu.env .env
export CONTAINER_SOCKET_PATH="/tmp/podman.sock"
podman system service --time=0 "unix://$CONTAINER_SOCKET_PATH" &
echo "PUBLIC_PATH=$(pwd)/public" > .env
echo "CONTAINER_SOCKET_PATH=$CONTAINER_SOCKET_PATH" >> .env
podman-compose up -d
- name: Let the containers start
4 changes: 2 additions & 2 deletions .github/workflows/test.yml
@@ -50,8 +50,8 @@ jobs:
- name: Test with pytest
run: |
podman system service --time=0 unix:///tmp/podman.sock &
export CONTAINER_HOST="unix:///tmp/podman.sock"
export CONTAINER_SOCKET_PATH="/tmp/podman.sock"
podman system service --time=0 "unix://$CONTAINER_SOCKET_PATH" &
poetry run coverage run -m pytest -vv --runslow
poetry run coverage xml
126 changes: 98 additions & 28 deletions README.md
@@ -41,16 +41,13 @@ re-install any packages.

### CLI

With `OpenWrt SNAPSHOT-r26792 or newer` the CLI app `auc` was replaced with [`owut`](https://openwrt.org/docs/guide-user/installation/sysupgrade.owut) as a more comprehensive CLI tool to provide an easy way to upgrade your device.
With `OpenWrt SNAPSHOT-r26792 or newer` (and in the 24.10 release) the CLI app
[`auc`](https://github.com/openwrt/packages/tree/master/utils/auc) was replaced
with [`owut`](https://openwrt.org/docs/guide-user/installation/sysupgrade.owut)
as a more comprehensive CLI tool to provide an easy way to upgrade your device.

![owut](misc/owut.png)

The [`auc`](https://github.com/openwrt/packages/tree/master/utils/auc) package
performs the same process as the `luci-app-attendedsysupgrade`
from SSH/the command line.

![auc](misc/auc.png)

## Server

The server listens for image requests and, if valid, automatically generates
@@ -61,60 +58,133 @@ immediately without rebuilding.
### Active server

* [sysupgrade.openwrt.org](https://sysupgrade.openwrt.org)
* Create a pullrequest to add your server here
* [ImmortalWrt](https://sysupgrade.kyarucloud.moe)
* Create a pull request to add your server here

## Run your own server

For security reasons each build happens inside a container so that one build
can't affect another. For this to work, a Podman container runs an API
service so that workers can themselves execute builds inside containers.

Please install Podman and test if it works:
### Installation

The server uses `podman-compose` to manage the containers. On a Debian-based
system, install the following packages:

```bash
sudo apt install podman-compose
```

A [Python library](https://podman-py.readthedocs.io/en/latest/) is used to
communicate with Podman over a socket. To enable the socket, either `systemd`
is required or the socket must be started manually using Podman itself:

```bash
# systemd
systemctl --user enable podman.socket
systemctl --user start podman.socket
systemctl --user status podman.socket

# manual (must stay open)
podman system service --time=0 unix:/run/user/$(id -u)/podman/podman.sock
```
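
Whether started via `systemd` or manually, you can verify that the socket
answers before continuing. This quick check is not part of the upstream
instructions; `_ping` is the Docker-compatible health endpoint that Podman is
expected to serve:

```bash
# rootless Podman socket path used throughout this README; adjust if yours differs
export CONTAINER_SOCKET_PATH="/run/user/$(id -u)/podman/podman.sock"

# should print "OK" once the API service is up
curl --unix-socket "$CONTAINER_SOCKET_PATH" http://d/_ping
```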

Now you can either use the latest ASU containers or build them yourself; run
one of the following two commands:

podman run --rm -it docker.io/library/alpine:latest
```bash
# use existing containers
podman-compose pull

Once Podman works, install `podman-compose`:
# build containers locally
podman-compose build
```
The services are configured via environment variables, which can be set in a
`.env` file:

pip install podman-compose
```bash
echo "PUBLIC_PATH=$(pwd)/public" > .env
echo "CONTAINER_SOCK=/run/user/$(id -u)/podman/podman.sock" >> .env
# optionally allow custom scripts running on first boot
echo "ALLOW_DEFAULTS=1" >> .env
```

Now it's possible to run all services via `podman-compose`:

# where to store images and json files
echo "PUBLIC_PATH=$(pwd)/public" > .env
# absolute path to podman socket mounted into worker containers
echo "CONTAINER_SOCK=/run/user/$(id -u)/podman/podman.sock" >> .env
podman-compose up -d
```bash
podman-compose up -d
```

This will start the server, the Podman API container and two workers. The first
run needs a few minutes since available packages are parsed from the upstream
server. Once the server is running, it's possible to request images via the API
on `http://localhost:8000`. Modify `podman-compose.yml` to change the port.
This will start the server, the Podman API container and one worker. Once the
server is running, it's possible to request images via the API on
`http://localhost:8000`. Modify `podman-compose.yml` to change the port.
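
As an illustration, an image can be requested with plain `curl`. The endpoint
and fields below follow the public sysupgrade.openwrt.org API; the version,
target, profile and package list are placeholders, and the API section further
down remains the authoritative reference:

```bash
# request a sysupgrade image for a (placeholder) device profile
curl -X POST http://localhost:8000/api/v1/build \
  -H "Content-Type: application/json" \
  -d '{
        "version": "24.10.0",
        "target": "ath79/generic",
        "profile": "tplink_archer-c7-v2",
        "packages": ["luci"]
      }'
```

The response should include a request hash that can then be polled on the same
endpoint until the build finishes.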

### Production

For production, it's recommended to use a reverse proxy like `nginx` or `caddy`.
You can find a sample Caddy configuration in `misc/Caddyfile`.
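
For a quick test, Caddy can also front the service straight from the command
line. This is only a sketch with a placeholder domain; the shipped
`misc/Caddyfile` remains the reference:

```bash
# terminate TLS for the placeholder domain and forward requests to the ASU server
caddy reverse-proxy --from sysupgrade.example.org --to localhost:8000
```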

If you want the containers to keep running after you log out of the server,
you must enable "linger" via `loginctl`:

```bash
loginctl enable-linger
```

#### System requirements

* 2 GB RAM (4 GB recommended)
* 2 CPU cores (4 cores recommended)
* 50 GB disk space (200 GB recommended)

#### Squid Cache

Instead of creating and uploading SNAPSHOT ImageBuilder containers every day,
only a container with installed dependencies and a `setup.sh` script is offered.
ASU automatically runs that script to set up the latest ImageBuilder. To speed
up the process, a Squid cache can be used to store the ImageBuilder archives
locally. To enable the cache, set `SQUID_CACHE=1` in the `.env` file.
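
For example, assuming the `.env` file created during installation, enabling the
cache is a single additional line; the matching Squid container ships as a
commented-out service in `podman-compose.yml`:

```bash
# serve ImageBuilder downloads from the local Squid cache
echo "SQUID_CACHE=1" >> .env
```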

To make the cache accessible from inside the build containers, the Squid port
3128 must be forwarded from the containers to the host. This can be done by
adding the following lines to the `~/.config/containers/containers.conf` file:

```toml
[network]
pasta_options = [
  "-a", "10.0.2.0",
  "-n", "24",
  "-g", "10.0.2.2",
  "--dns-forward", "10.0.2.3",
  "-T", "3128:3128"
]
```

> If you know a better setup, please create a pull request.

### Development

After cloning this repository, create a Python virtual environment and install
the dependencies:
After cloning this repository, install `poetry`, which manages the Python
dependencies:

```bash
apt install python3-poetry
poetry install
```

#### Running the server

poetry install
poetry run fastapi dev asu/main.py
```bash
poetry run fastapi dev asu/main.py
```

#### Running a worker

# podman unix socket (not path), no need to mount anything
export CONTAINER_HOST=unix:///run/user/1001/podman/podman.sock
poetry run rq worker
```bash
source .env # poetry does not load .env
poetry run rq worker
```

### API

2 changes: 1 addition & 1 deletion asu/config.py
@@ -53,7 +53,7 @@ class Settings(BaseSettings):
max_defaults_length: int = 20480
repository_allow_list: list = []
base_container: str = "ghcr.io/openwrt/imagebuilder"
container_host: str = "localhost"
container_socket_path: str = ""
container_identity: str = ""
branches: dict = {
"SNAPSHOT": {
2 changes: 1 addition & 1 deletion asu/util.py
@@ -228,7 +228,7 @@ def get_container_version_tag(input_version: str) -> str:

def get_podman() -> PodmanClient:
return PodmanClient(
base_url=settings.container_host,
base_url=f"unix://{settings.container_socket_path}",
identity=settings.container_identity,
)

7 changes: 0 additions & 7 deletions misc/asu.env

This file was deleted.

41 changes: 22 additions & 19 deletions podman-compose.yml
@@ -1,9 +1,3 @@
version: "2"

volumes:
redis:
# grafana-storage:

services:
server:
image: "docker.io/openwrt/asu:latest"
@@ -13,6 +7,8 @@ services:
restart: unless-stopped
command: uvicorn --host 0.0.0.0 asu.main:app
env_file: .env
environment:
REDIS_URL: "redis://redis:6379/0"
volumes:
- $PUBLIC_PATH/store:$PUBLIC_PATH/store:ro
ports:
@@ -28,40 +24,47 @@ services:
restart: unless-stopped
command: rqworker --logging_level INFO
env_file: .env
environment:
REDIS_URL: "redis://redis:6379/0"
volumes:
- $PUBLIC_PATH:$PUBLIC_PATH:rw
- $CONTAINER_SOCK:$CONTAINER_SOCK:rw
- $CONTAINER_SOCKET_PATH:$CONTAINER_SOCKET_PATH:rw
depends_on:
- redis

redis:
image: "docker.io/redis/redis-stack-server"
restart: unless-stopped
volumes:
- ./redis-data:/data/:rw
ports:
- "127.0.0.1:6379:6379"

# Optionally add more workers
# worker2:
# image: "docker.io/openwrt/asu:latest"
# restart: unless-stopped
# command: rqworker --logging_level INFO
# env_file: .env
# environment:
# REDIS_URL: "redis://redis:6379/0"
# volumes:
# - $PUBLIC_PATH:$PUBLIC_PATH:rw
# - $CONTAINER_SOCK:$CONTAINER_SOCK:rw
# - $CONTAINER_SOCKET_PATH:$CONTAINER_SOCKET_PATH:rw
# depends_on:
# - redis

redis:
image: "docker.io/redis/redis-stack-server"
restart: unless-stopped
volumes:
- redis:/data/:rw
ports:
- "127.0.0.1:6379:6379"

#
# Optionally add a Squid cache container when using `SQUID_CACHE`
# squid:
# image: "docker.io/ubuntu/squid:latest"
# restart: unless-stopped
# ports:
# - "127.0.0.1:3128:3128"
# volumes:
# - ".squid.conf:/etc/squid/conf.d/snippet.conf:ro"
# - "./squid/:/var/spool/squid/:rw"
# - "./squid-data/:/var/spool/squid/:rw"

# Optionally add a Grafana container when using `SERVER_STATS`
# grafana:
# image: docker.io/grafana/grafana-oss
# container_name: grafana
@@ -75,4 +78,4 @@
# GF_SERVER_ROOT_URL: https://sysupgrade.openwrt.org/stats/
# GF_SERVER_SERVE_FROM_SUB_PATH: "true"
# volumes:
# - grafana-storage:/var/lib/grafana
# - ./grafana-data:/var/lib/grafana
