WIP: Documentation time #119

Draft: wants to merge 18 commits into base: master
86 changes: 86 additions & 0 deletions .github/workflows/build_docs.yaml
@@ -0,0 +1,86 @@
name: Build documentation

on:
  push:
    branches: ["master"]
    paths:
      - 'docs/**'
  # Allow running this workflow manually from the Actions tab on GitHub
  workflow_dispatch:

permissions:
  id-token: write
  pages: write

env:
  INSTANCE: Writerside/kc
  ARTIFACT: webHelpKC2-all.zip
  DOCS_FOLDER: ./docs

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Build Writerside docs using Docker
        uses: JetBrains/writerside-github-action@v4
        with:
          instance: ${{ env.INSTANCE }}
          artifact: ${{ env.ARTIFACT }}
          location: ${{ env.DOCS_FOLDER }}

      - name: Upload artifact
        uses: actions/upload-artifact@v3
        with:
          name: docs
          path: |
            artifacts/${{ env.ARTIFACT }}
            artifacts/report.json
          retention-days: 7

  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Download artifacts
        uses: actions/download-artifact@v3
        with:
          name: docs
          path: artifacts

      - name: Test documentation
        uses: JetBrains/writerside-checker-action@v1
        with:
          instance: ${{ env.INSTANCE }}

  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    needs: [build, test]
    runs-on: ubuntu-latest
    steps:
      - name: Download artifacts
        uses: actions/download-artifact@v3
        with:
          name: docs

      - name: Unzip artifact
        run: unzip -O UTF-8 -qq '${{ env.ARTIFACT }}' -d dir

      - name: Setup Pages
        uses: actions/configure-pages@v4

      - name: Package and upload Pages artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: dir

      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4
14 changes: 14 additions & 0 deletions docs/Writerside/cfg/buildprofiles.xml
@@ -0,0 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<buildprofiles xsi:noNamespaceSchemaLocation="https://resources.jetbrains.com/writerside/1.0/build-profiles.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

<variables>
<header-logo>knight-crawler-logo.png</header-logo>
</variables>
<build-profile instance="kc">
<variables>
<noindex-content>true</noindex-content>
</variables>
</build-profile>

</buildprofiles>
Binary file added docs/Writerside/images/knight-crawler-logo.png
13 changes: 13 additions & 0 deletions docs/Writerside/kc.tree
@@ -0,0 +1,13 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE instance-profile
SYSTEM "https://resources.jetbrains.com/writerside/1.0/product-profile.dtd">

<instance-profile id="kc" name="Knight Crawler"
start-page="Overview.md">

<toc-element topic="Overview.md"/>
    <toc-element topic="Getting-started.md"/>
<toc-element topic="External-access.md"/>
<toc-element topic="Supported-Debrid-services.md"/>
</instance-profile>
57 changes: 57 additions & 0 deletions docs/Writerside/topics/External-access.md
@@ -0,0 +1,57 @@
# External access

This guide outlines how to use Knight Crawler on other devices, such as your TV. By default, Knight Crawler is only
reachable from the machine it is installed on. This limitation is imposed by Stremio, as
[explained here](https://github.com/Stremio/stremio-features/issues/687#issuecomment-1890546094), but with some extra
setup we can make it accessible from other devices.

## What to keep in mind

Before making Knight Crawler available outside your home network, a word on safety. No software is perfect, including
ours. Knight Crawler is built from many different components, some written by third parties, so keeping it restricted
to your home network is the safer option. If you choose to expose it to the internet, securing your devices is your
responsibility, and we accept no liability for any problems or lost data that result.

## Initial setup

To enable external access for Knight Crawler, whether it's within your home network or over the internet, you'll
need to follow these initial setup steps:

- Set up Caddy, a powerful and easy-to-use web server.
- Disable the open port in the Knight Crawler <path>docker-compose.yaml</path> file.


### Caddy

A basic Caddy configuration is included with Knight Crawler in the deployment directory.

<path>deployment/docker/optional-services/caddy</path>

```Generic
deployment/
└── docker/
└── optional-services/
└── caddy/
├── config/
│ ├── snippets/
│ │ └── cloudflare-replace-X-Forwarded-For
│ └── Caddyfile
├── logs/
└── docker-compose.yaml
```
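The supplied <path>Caddyfile</path> is where you map your domain to the addon. As a hypothetical sketch only (the
hostname is a placeholder, and the upstream service name `addon` and port `7000` are assumptions based on the default
setup; check the file shipped in the directory above for the real values), a minimal reverse-proxy entry looks like:

```Generic
knightcrawler.example.com {
    reverse_proxy addon:7000
}
```

Caddy will then obtain a TLS certificate for the hostname automatically and forward incoming requests to the addon
container.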

In the Knight Crawler <path>docker-compose.yaml</path>, comment out the addon's port mapping:

```yaml
# ports:
#   - "8080:8080"
```

By disabling the default port, Knight Crawler will only be accessible internally within your network, ensuring added security.

## Home network access

## Internet access

### Through a VPN

### On the public web

## Troubleshooting

## Additional resources
192 changes: 192 additions & 0 deletions docs/Writerside/topics/Getting-started.md
@@ -0,0 +1,192 @@
# Getting started

Knight Crawler is provided as an all-in-one solution. This means we include all the necessary software you need to get started
out of the box.

## Before you start

Make sure that you have:

- A place to host Knight Crawler
- [Docker](https://docs.docker.com/get-docker/) and [Compose](https://docs.docker.com/compose/install/) installed
- A [GitHub](https://github.com/) account _(optional)_


## Download the files

Installing Knight Crawler is as simple as downloading a copy of the [deployment directory](https://github.com/Gabisonfire/knightcrawler/tree/master/deployment/docker).

A basic installation requires only two files:
- <path>deployment/docker/.env.example</path>
- <path>deployment/docker/docker-compose.yaml</path>

For this guide I will be placing them in a directory in my home folder, <path>~/knightcrawler</path>.

Rename the <path>.env.example</path> file to <path>.env</path>:

```
~/
└── knightcrawler/
├── .env
└── docker-compose.yaml
```

## Initial configuration

Below are a few recommended configuration changes.

Open the <path>.env</path> file in your favourite editor.

> If you are using an external database, configure it in the <path>.env</path> file. Don't forget to disable the
> bundled database services in the <path>docker-compose.yaml</path>.

### Database credentials

It is strongly recommended that you change the credentials for the databases included with Knight Crawler. This is best done
before running Knight Crawler for the first time. It is much harder to change the passwords once the services have been started
for the first time.

```Bash
POSTGRES_PASSWORD=postgres
...
MONGODB_PASSWORD=mongo
...
RABBITMQ_PASSWORD=guest
```

Here are a few ways to generate a secure password:

```Bash
# Linux
tr -cd '[:alnum:]' < /dev/urandom | fold -w 64 | head -n 1
# Or you could use openssl
openssl rand -hex 32
```
```Python
# Python
import secrets

print(secrets.token_hex(32))
```
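Once you have a value, update the matching line in <path>.env</path>. A sketch using GNU `sed`, run here against a
scratch copy so nothing real is modified (point the substitution at your actual <path>~/knightcrawler/.env</path> in
practice):

```shell
# Work on a scratch file; use ~/knightcrawler/.env for real.
printf 'POSTGRES_PASSWORD=postgres\n' > /tmp/env.demo

# Generate a fresh password and swap it into the file in place.
NEW_PASS=$(openssl rand -hex 32)
sed -i "s/^POSTGRES_PASSWORD=.*/POSTGRES_PASSWORD=${NEW_PASS}/" /tmp/env.demo

# Confirm exactly one line now carries the new value.
MATCHES=$(grep -c "^POSTGRES_PASSWORD=${NEW_PASS}$" /tmp/env.demo)
echo "updated lines: ${MATCHES}"
```

Repeat the same substitution for `MONGODB_PASSWORD` and `RABBITMQ_PASSWORD`.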

### Your time zone

```Bash
TZ=Europe/London
```

A list of valid time zone names can be found on [Wikipedia](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones).
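Note that tz database names take the form `Area/City`, e.g. `Europe/London`. On most Linux systems you can check a
value before committing it to <path>.env</path>; this sketch assumes the tzdata files live in
<path>/usr/share/zoneinfo</path>, which is the common default:

```shell
# A zone name is valid if a matching file exists in the tzdata database.
TZ_VALUE="Europe/London"
if [ -e "/usr/share/zoneinfo/${TZ_VALUE}" ]; then
  RESULT="valid"
else
  RESULT="invalid"
fi
echo "TZ ${TZ_VALUE}: ${RESULT}"
```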

### Consumers

```Bash
JOB_CONCURRENCY=5
...
MAX_CONNECTIONS_PER_TORRENT=10
...
CONSUMER_REPLICAS=3
```

These depend entirely on your machine and network capacity. The defaults above are fairly conservative and will work on
most machines.

`JOB_CONCURRENCY` is how many films and TV shows each consumer should process at once. Because it applies to every
consumer, raising it multiplies the load on your system. It's probably best to leave this at 5, but you can experiment
with it if you wish.

`MAX_CONNECTIONS_PER_TORRENT` is how many peers a consumer will attempt to connect to when collecting metadata.
Increasing this value can speed up processing, but past a certain point you will open more connections than your router
can handle, causing a cascading failure in which your internet stops working. If you do increase it, do so in
increments of 10.

> Increasing this value raises the maximum number of connections for every parallel job, for every consumer. For
> example, with the default values above, Knight Crawler will on average be making `(5 x 3) x 10 = 150`
> connections at any one time.
>
{style="warning"}
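The arithmetic in the warning above can be reproduced directly from the three settings:

```shell
# Average concurrent connections = jobs per consumer x consumers x peers per torrent.
JOB_CONCURRENCY=5
CONSUMER_REPLICAS=3
MAX_CONNECTIONS_PER_TORRENT=10

TOTAL=$(( JOB_CONCURRENCY * CONSUMER_REPLICAS * MAX_CONNECTIONS_PER_TORRENT ))
echo "approximate concurrent connections: ${TOTAL}"
```

Re-run the sum whenever you tune any of the three values to check the result stays within what your router can handle.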

`CONSUMER_REPLICAS` is how many consumers are started initially. You can increase or decrease the number of consumers
while the service is running with the command `docker compose up -d --scale consumer=<number>`.

### GitHub personal access token

This step is optional but strongly recommended. [Debrid Media Manager](https://debridmediamanager.com/start) is a media library manager
for Debrid services. When a user of the service chooses to export/share their library publicly, it is saved to a public GitHub
repository: essentially, a repository containing a vast number of ready-to-go films and TV shows. Knight Crawler can read
these exported lists, but it requires a GitHub account to do so.

Knight Crawler needs a personal access token with read-only access to public repositories, so it cannot access any of
your private repositories.

1. Navigate to your [fine-grained token settings](https://github.com/settings/tokens?type=beta), or step through manually:
   - Open `GitHub settings`.
   - Click on `Developer Settings`.
   - Select `Personal access tokens`.
   - Choose `Fine-grained tokens`.

2. Press `Generate new token`.

3. Fill out the form with the following information:
```Generic
Token name:
KnightCrawler
Expiration:
90 days
Description:
<blank>
Repository access:
(checked) Public Repositories (read-only)
```

4. Click `Generate token`.

5. Take the new token and add it to the bottom of the <path>.env</path> file:
```Bash
# Producer
GITHUB_PAT=<YOUR TOKEN HERE>
```

## Start Knight Crawler

To start Knight Crawler use the following command:

```Bash
docker compose up -d
```

Then we can follow the logs to watch it start:

```Bash
docker compose logs -f --since 1m
```

> Knight Crawler will only be accessible on the machine you run it on. To make it accessible from other machines, see [External access](External-access.md).
>
{style="note"}

To stop following the logs press <shortcut>Ctrl+C</shortcut> at any time.

The Knight Crawler configuration page should now be accessible in your web browser at [http://localhost:7000](http://localhost:7000).

## Start more consumers

If you wish to speed up the processing of the films and TV shows that Knight Crawler finds, you'll likely want to
increase the number of consumers.

The command below can be used to either increase or decrease the number of running consumers. Gradually increase the
number until you encounter issues, then decrease until stable.

```Bash
docker compose up -d --scale consumer=<number>
```

## Stop Knight Crawler

Knight Crawler can be stopped with the following command:

```Bash
docker compose down
```
30 changes: 30 additions & 0 deletions docs/Writerside/topics/Overview.md
@@ -0,0 +1,30 @@
# Overview

<img alt="The image shows a Knight in silvery armour looking forwards." src="knight-crawler-logo.png" title="Knight Crawler logo" width="100"/>

Knight Crawler is a self-hosted [Stremio](https://www.stremio.com/) addon for streaming torrents via
a [Debrid](Supported-Debrid-services.md "Click for a list of Debrid services we support") service.

We are active on [Discord](https://discord.gg/8fQdxay9z2) for both support and casual conversation.

> Knight Crawler is currently alpha software.
>
> Users are responsible for ensuring their data is backed up regularly.
>
> Please read the changelogs before updating to the latest version.
>
{style="warning"}

## What does Knight Crawler do?

Knight Crawler is an addon for [Stremio](https://www.stremio.com/). It began as a fork of the very popular
[Torrentio](https://github.com/TheBeastLT/torrentio-scraper) addon. Knight Crawler essentially does the following:

1. It searches the internet for available films and TV shows.
2. It collects as much information as it can about each film and TV show it finds.
3. It stores this information in a database for easy access.

When you choose a film or TV show to watch in Stremio, a request is sent to your installation of Knight Crawler, which
queries the database and returns every copy it has stored as Debrid links. This enables playback to begin immediately
for your chosen media.