diff --git a/.github/workflows/build_docs.yaml b/.github/workflows/build_docs.yaml
new file mode 100644
index 00000000..ca368652
--- /dev/null
+++ b/.github/workflows/build_docs.yaml
@@ -0,0 +1,86 @@
+name: Build documentation
+
+# Runs only when files under the docs/ folder change (see the paths filter below)
+
+on:
+ push:
+ branches: ["master"]
+ paths:
+ - 'docs/**'
+    # Allows the workflow to be run manually from the Actions tab on GitHub
+ workflow_dispatch:
+
+permissions:
+ id-token: write
+ pages: write
+
+env:
+ INSTANCE: Writerside/kc
+ ARTIFACT: webHelpKC2-all.zip
+ DOCS_FOLDER: ./docs
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+
+ steps:
+ - name: Checkout repository
+ uses: actions/checkout@v4
+
+ - name: Build Writerside docs using Docker
+ uses: JetBrains/writerside-github-action@v4
+ with:
+ instance: ${{ env.INSTANCE }}
+ artifact: ${{ env.ARTIFACT }}
+ location: ${{ env.DOCS_FOLDER }}
+
+ - name: Upload artifact
+ uses: actions/upload-artifact@v3
+ with:
+ name: docs
+ path: |
+ artifacts/${{ env.ARTIFACT }}
+ artifacts/report.json
+ retention-days: 7
+
+ test:
+ needs: build
+ runs-on: ubuntu-latest
+ steps:
+ - name: Download artifacts
+ uses: actions/download-artifact@v3
+ with:
+ name: docs
+ path: artifacts
+
+ - name: Test documentation
+ uses: JetBrains/writerside-checker-action@v1
+ with:
+ instance: ${{ env.INSTANCE }}
+
+ deploy:
+ environment:
+ name: github-pages
+ url: ${{ steps.deployment.outputs.page_url }}
+ needs: [build, test]
+ runs-on: ubuntu-latest
+ steps:
+ - name: Download artifacts
+ uses: actions/download-artifact@v3
+ with:
+ name: docs
+
+ - name: Unzip artifact
+ run: unzip -O UTF-8 -qq '${{ env.ARTIFACT }}' -d dir
+
+ - name: Setup Pages
+ uses: actions/configure-pages@v4
+
+ - name: Package and upload Pages artifact
+ uses: actions/upload-pages-artifact@v3
+ with:
+ path: dir
+
+ - name: Deploy to GitHub Pages
+ id: deployment
+ uses: actions/deploy-pages@v4
diff --git a/docs/Writerside/cfg/buildprofiles.xml b/docs/Writerside/cfg/buildprofiles.xml
new file mode 100644
index 00000000..eacb9238
--- /dev/null
+++ b/docs/Writerside/cfg/buildprofiles.xml
@@ -0,0 +1,14 @@
+
+
+
+
+ knight-crawler-logo.png
+
+
+
+ true
+
+
+
+
diff --git a/docs/Writerside/images/knight-crawler-logo.png b/docs/Writerside/images/knight-crawler-logo.png
new file mode 100644
index 00000000..1f87a43b
Binary files /dev/null and b/docs/Writerside/images/knight-crawler-logo.png differ
diff --git a/docs/Writerside/kc.tree b/docs/Writerside/kc.tree
new file mode 100644
index 00000000..07382e89
--- /dev/null
+++ b/docs/Writerside/kc.tree
@@ -0,0 +1,13 @@
+
+
+
+
+
+
+
+
+
+
+
diff --git a/docs/Writerside/topics/External-access.md b/docs/Writerside/topics/External-access.md
new file mode 100644
index 00000000..1a2d78e5
--- /dev/null
+++ b/docs/Writerside/topics/External-access.md
@@ -0,0 +1,57 @@
+# External access
+
+This guide outlines how to use Knight Crawler on other devices, such as your TV. By default, access is limited to the
+machine Knight Crawler is installed on, a restriction imposed by Stremio, as [explained here](https://github.com/Stremio/stremio-features/issues/687#issuecomment-1890546094).
+With a little extra effort, we'll show you how to make it accessible from your other devices.
+
+## What to keep in mind
+
+Before we make Knight Crawler available outside your home network, we need to talk about safety. No software is
+perfect, including ours. Knight Crawler is built from many different components, some maintained by other people, so
+keeping it restricted to your home network is the safer option. If you choose to expose it over the internet, keeping
+your devices secure is up to you. We won't be responsible for any problems or lost data if you use Knight Crawler that way.
+
+## Initial setup
+
+To enable external access for Knight Crawler, whether it's within your home network or over the internet, you'll
+need to follow these initial setup steps:
+
+- Set up Caddy, a powerful and easy-to-use web server.
+- Disable the open port in the Knight Crawler docker-compose.yaml file.
+
+
+### Caddy
+
+A basic Caddy configuration is included with Knight Crawler in the deployment directory, under
+`deployment/docker/optional-services/caddy`:
+
+```Generic
+deployment/
+└── docker/
+ └── optional-services/
+ └── caddy/
+ ├── config/
+ │ ├── snippets/
+ │ │ └── cloudflare-replace-X-Forwarded-For
+ │ └── Caddyfile
+ ├── logs/
+ └── docker-compose.yaml
+```
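+
+The included Caddyfile is a good starting point. As a rough idea of what a minimal reverse proxy for the addon looks
+like, here is a sketch (the hostname and the `knightcrawler-addon:7000` upstream are assumptions; match them to your
+own domain and to the service name and port used in your docker-compose.yaml):
+
+```Generic
+# Minimal Caddy v2 sketch: proxy an assumed hostname to the Knight Crawler addon container
+knightcrawler.example.com {
+    reverse_proxy knightcrawler-addon:7000
+}
+```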
+
+### Disable the open port
+
+In your Knight Crawler docker-compose.yaml, comment out (or remove) the following port mapping:
+
+```yaml
+ports:
+  - "8080:8080"
+```
+
+With the default port disabled, Knight Crawler is no longer published directly on the host; it is only reachable through the Docker network (for example, via Caddy), which adds a layer of security.
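+
+After editing the file, recreate the stack and confirm that the port is no longer published on the host. A quick check:
+
+```Bash
+# Recreate the containers with the new configuration
+docker compose up -d
+
+# The PORTS column should no longer show a published 8080 mapping
+docker compose ps
+```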
+
+## Home network access
+
+## Internet access
+
+### Through a VPN
+
+### On the public web
+
+## Troubleshooting?
+
+## Additional Resources?
diff --git a/docs/Writerside/topics/Getting-started.md b/docs/Writerside/topics/Getting-started.md
new file mode 100644
index 00000000..fdafdf0f
--- /dev/null
+++ b/docs/Writerside/topics/Getting-started.md
@@ -0,0 +1,192 @@
+# Getting started
+
+Knight Crawler is provided as an all-in-one solution. This means we include all the necessary software you need to get started
+out of the box.
+
+## Before you start
+
+Make sure that you have:
+
+- A place to host Knight Crawler
+- [Docker](https://docs.docker.com/get-docker/) and [Compose](https://docs.docker.com/compose/install/) installed
+- A [GitHub](https://github.com/) account _(optional)_
+
+
+## Download the files
+
+Installing Knight Crawler is as simple as downloading a copy of the [deployment directory](https://github.com/Gabisonfire/knightcrawler/tree/master/deployment/docker).
+
+A basic installation requires only two files:
+- deployment/docker/.env.example
+- deployment/docker/docker-compose.yaml
+
+For this guide, I will be placing them in a directory under my home directory: `~/knightcrawler`.
+
+Rename the .env.example file to .env:
+
+```Generic
+~/
+└── knightcrawler/
+ ├── .env
+ └── docker-compose.yaml
+```
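+
+If you prefer the command line, a minimal sketch of the above (the raw URLs are assumptions based on the repository layout on the master branch; adjust them if the files have moved):
+
+```Bash
+mkdir -p ~/knightcrawler && cd ~/knightcrawler
+
+# Fetch the two files straight from the repository
+curl -fsSL -o docker-compose.yaml https://raw.githubusercontent.com/Gabisonfire/knightcrawler/master/deployment/docker/docker-compose.yaml
+curl -fsSL -o .env.example https://raw.githubusercontent.com/Gabisonfire/knightcrawler/master/deployment/docker/.env.example
+
+# Rename the example environment file
+mv .env.example .env
+```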
+
+## Initial configuration
+
+Below are a few recommended configuration changes.
+
+Open the .env file in your favourite editor.
+
+> If you are using an external database, configure it in the .env file. Don't forget to disable the ones
+> included in the docker-compose.yaml.
+
+### Database credentials
+
+It is strongly recommended that you change the credentials for the databases included with Knight Crawler. This is best
+done before running Knight Crawler for the first time, as it is much harder to change the passwords once the services
+have already been started.
+
+```Bash
+POSTGRES_PASSWORD=postgres
+...
+MONGODB_PASSWORD=mongo
+...
+RABBITMQ_PASSWORD=guest
+```
+
+Here are a few options for generating a secure password:
+
+```Bash
+# Linux
+tr -cd '[:alnum:]' < /dev/urandom | fold -w 64 | head -n 1
+# Or you could use openssl
+openssl rand -hex 32
+```
+```Python
+# Python
+import secrets
+
+print(secrets.token_hex(32))
+```
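+
+If you want to script the whole step, here is a minimal sketch (assuming GNU sed and that the variable names shown above are unchanged in your .env):
+
+```Bash
+# Generate a random value and write it straight into .env
+NEW_PASSWORD=$(openssl rand -hex 32)
+sed -i "s/^POSTGRES_PASSWORD=.*/POSTGRES_PASSWORD=${NEW_PASSWORD}/" .env
+```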
+
+### Your time zone
+
+```Bash
+TZ=Europe/London
+```
+
+A list of time zones can be found on [Wikipedia](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones).
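+
+If you are not sure what your host's time zone is called, these commands usually reveal it on Linux (availability depends on your distribution):
+
+```Bash
+# systemd-based systems
+timedatectl | grep "Time zone"
+
+# Debian/Ubuntu
+cat /etc/timezone
+```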
+
+### Consumers
+
+```Bash
+JOB_CONCURRENCY=5
+...
+MAX_CONNECTIONS_PER_TORRENT=10
+...
+CONSUMER_REPLICAS=3
+```
+
+These values depend entirely on your machine and network capacity. The defaults above are fairly conservative and will
+work on most machines.
+
+`JOB_CONCURRENCY` is how many films and tv shows each consumer should process at once. Because it applies to every
+consumer, raising it multiplies the strain on your system. It's probably best to leave this at 5, but you can
+experiment with it if you wish.
+
+`MAX_CONNECTIONS_PER_TORRENT` is how many peers the consumer will attempt to connect to while collecting metadata.
+Increasing this value can speed up processing, but you will eventually reach a point where more connections are being
+made than your router can handle, causing a cascading failure where your internet stops working. If you are going to
+increase this value, try raising it by 10 at a time.
+
+> Increasing this value increases the maximum connections for every parallel job, for every consumer. For example,
+> with the default values above, Knight Crawler will on average be making `(5 x 3) x 10 = 150` connections at any
+> one time.
+>
+{style="warning"}
+
+`CONSUMER_REPLICAS` is how many consumers should be started initially. You can increase or decrease the number of
+consumers while the service is running with the command `docker compose up -d --scale consumer=<number>`, as shown below.
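+
+For example, assuming you want five consumers running:
+
+```Bash
+# Scale only the consumer service; the other services are left untouched
+docker compose up -d --scale consumer=5
+```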
+
+### GitHub personal access token
+
+This step is optional but strongly recommended. [Debrid Media Manager](https://debridmediamanager.com/start) is a media library manager
+for Debrid services. When a user of that service chooses to export or share their library publicly, it is saved to a public GitHub
+repository. This is, essentially, a repository containing a vast amount of ready-to-go films and tv shows. Knight Crawler can
+read these exported lists, but it needs a GitHub account to do so.
+
+Knight Crawler needs a personal access token with read-only access to public repositories. This means we cannot access any private
+repositories you have.
+
+1. Navigate to the [fine-grained token settings](https://github.com/settings/tokens?type=beta), or find them manually:
+   - Open your `GitHub settings`.
+   - Click on `Developer Settings`.
+   - Select `Personal access tokens`.
+   - Choose `Fine-grained tokens`.
+
+2. Press `Generate new token`.
+
+3. Fill out the form with the following information:
+ ```Generic
+ Token name:
+ KnightCrawler
+ Expiration:
+ 90 days
+ Description:
+
+ Repository access:
+ (checked) Public Repositories (read-only)
+ ```
+
+4. Click `Generate token`.
+
+5. Take the new token and add it to the bottom of the .env file:
+ ```Bash
+ # Producer
+ GITHUB_PAT=
+ ```
+
+## Start Knight Crawler
+
+To start Knight Crawler use the following command:
+
+```Bash
+docker compose up -d
+```
+
+Then we can follow the logs to watch it start:
+
+```Bash
+docker compose logs -f --since 1m
+```
+
+> Knight Crawler will only be accessible on the machine you run it on. To make it accessible from other machines, see [External access](External-access.md).
+>
+{style="note"}
+
+To stop following the logs, press Ctrl+C at any time.
+
+The Knight Crawler configuration page should now be accessible in your web browser at [http://localhost:7000](http://localhost:7000).
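+
+To check that everything came up cleanly, a couple of quick commands (the port assumes the default configuration):
+
+```Bash
+# All services should show a running/healthy state
+docker compose ps
+
+# The configuration page should return an HTTP status code
+curl -s -o /dev/null -w "%{http_code}\n" http://localhost:7000
+```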
+
+## Start more consumers
+
+If you wish to speed up the processing of the films and tv shows that Knight Crawler finds, you'll likely want to
+increase the number of consumers.
+
+The command below can be used to either increase or decrease the number of running consumers. Gradually increase the
+number until you encounter issues, then decrease it until things are stable.
+
+```Bash
+docker compose up -d --scale consumer=<number>
+```
+
+## Stop Knight Crawler
+
+Knight Crawler can be stopped with the following command:
+
+```Bash
+docker compose down
+```
diff --git a/docs/Writerside/topics/Overview.md b/docs/Writerside/topics/Overview.md
new file mode 100644
index 00000000..a42c2ea2
--- /dev/null
+++ b/docs/Writerside/topics/Overview.md
@@ -0,0 +1,30 @@
+# Overview
+
+
+
+Knight Crawler is a self-hosted [Stremio](https://www.stremio.com/) addon for streaming torrents via
+a [Debrid](Supported-Debrid-services.md "Click for a list of Debrid services we support") service.
+
+We are active on [Discord](https://discord.gg/8fQdxay9z2) for both support and casual conversation.
+
+> Knight Crawler is currently alpha software.
+>
+> Users are responsible for ensuring their data is backed up regularly.
+>
+> Please read the changelogs before updating to the latest version.
+>
+{style="warning"}
+
+## What does Knight Crawler do?
+
+Knight Crawler is an addon for [Stremio](https://www.stremio.com/). It began as a fork of the very popular
+[Torrentio](https://github.com/TheBeastLT/torrentio-scraper) addon. Knight Crawler essentially does the following:
+
+1. It searches the internet for available films and tv shows.
+2. It collects as much information as it can about each film and tv show it finds.
+3. It then stores this information to a database for easy access.
+
+When you choose a film or tv show to watch in Stremio, a request is sent to your installation of Knight Crawler.
+Knight Crawler queries the database and returns a list of all the copies it has stored, presented as Debrid links.
+This enables playback to begin immediately for your chosen media.
diff --git a/docs/Writerside/topics/Supported-Debrid-services.md b/docs/Writerside/topics/Supported-Debrid-services.md
new file mode 100644
index 00000000..847c81c6
--- /dev/null
+++ b/docs/Writerside/topics/Supported-Debrid-services.md
@@ -0,0 +1,3 @@
+# Supported Debrid services
+
+Start typing here...
diff --git a/docs/Writerside/writerside.cfg b/docs/Writerside/writerside.cfg
new file mode 100644
index 00000000..06ecc768
--- /dev/null
+++ b/docs/Writerside/writerside.cfg
@@ -0,0 +1,8 @@
+
+
+
+
+
+
+
+