Unable to bootstrap cluster from PVC/Volume #4073
Comments
Hi @joryirving, sorry you're having trouble with this. One thing I did want to clarify is that
Additionally, in order to help you out further, I was hoping to clarify one detail about your use case. Are you looking to use the volume/NFS defined in
I'm essentially using it in a home Kubernetes cluster for DR/cluster bootstrapping. I currently back up to 3 repos, and I would like to get rid of repo3 so I can drop Minio as a dependency. How it currently works: I back up hourly to repo1/3 with long-term retention, so that if I accidentally blow up my homelab, I can wipe the entire k8s cluster and bootstrap where I left off from backups/WAL. The new cluster essentially picks up where the old one left off as a bootstrap, and then continues backing up to that same location. This works when I use Minio as the bootstrap/backup location, but it fails here since you can't bootstrap a cluster from a volume. I'm not using it to clone the cluster elsewhere, just for DR (which happens... often). I'm curious what the use case is for backing up to a PVC if you're unable to use it to pre-populate a DB, or am I missing something obvious there?
I'm doing this on a fresh Kubernetes cluster, so there's no lingering PV. Restores from PV/PVC work; bootstraps from PV/PVC do not.
I tested this and you're right: the proposed solution in the issue I mentioned does not work.
Overview
Hi team,
I'm trying to DR my Postgres cluster. I currently back up to 3 repos:
I've repeatedly and successfully restored Postgres clusters onto a new k8s cluster via Minio. However, I'm running into a scenario where doing so via PVC/Volume fails, because the pod/PVC postgres-repo-host-0 isn't created until after the cluster is up and running. If I remove and recreate the cluster it works fine, but from a "blank" install of the operator it fails.
Environment
Kubernetes: 1.32.1, bare metal install (Talos Linux)
PGO: ubi8-5.7.2-0
Postgres: ubi-16.6-1 (16)
Storage: local-hostpath (openebs)
Steps to Reproduce
1. Install the PGO operator from scratch.
2. Create a postgres cluster using dataSource.pgbackrest.repo.volume for the first time.
3. The pod fails to find data to restore from.
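For reference, the kind of manifest I mean can be sketched roughly as below. This is a minimal, hypothetical example, not my actual config: the cluster name, stanza, and storage sizes are placeholders.

```yaml
# Hypothetical sketch: a PostgresCluster that bootstraps from a
# pgBackRest repo stored on a volume. Names/sizes are placeholders.
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: example-cluster
spec:
  postgresVersion: 16
  dataSource:
    pgbackrest:
      stanza: db
      repo:
        name: repo1
        volume:
          volumeClaimSpec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
  backups:
    pgbackrest:
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 10Gi
```

On a fresh operator install, applying a spec like this is where the restore pod fails to find data.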
It appears to be a race condition where the PVC/Volume isn't created until after the cluster is successfully running.
EXPECTED
I'm able to successfully bootstrap a new cluster from a backup on an NFS system.
ACTUAL
The cluster hangs and is unable to bootstrap.
Logs
N/A, as I worked around it by bootstrapping from S3 to reduce downtime.
Additional Information
This is an example of my postgrescluster that tried (and failed) to restore from PVC:
https://github.com/joryirving/home-ops/blob/9614dc3d6bab8a53ddf7344890765e4f057c7827/kubernetes/main/apps/database/crunchy-postgres/cluster/cluster.yaml
Specifically here:
I'm manually creating the PV for the PVC to bind to here:
https://github.com/joryirving/home-ops/blob/9614dc3d6bab8a53ddf7344890765e4f057c7827/kubernetes/main/apps/database/crunchy-postgres/cluster/nfs-pvc.yaml
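For context, a manually created NFS-backed PV like the one linked above looks roughly like this. The sketch below is illustrative only; the server address, export path, and PV name are placeholders, not my real values.

```yaml
# Hypothetical sketch: a manually created NFS PV for the pgBackRest
# repo PVC to bind to. Server and path are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pgbackrest-repo
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  nfs:
    server: nfs.example.lan
    path: /volume1/pgbackrest
```

The repo PVC then binds to this PV, which works for restores of an existing cluster but not for the initial bootstrap described above.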