Accept dash in keeper uid? #319

Closed
LordFPL opened this issue Jul 20, 2017 · 4 comments

LordFPL commented Jul 20, 2017

Hello,

Because of the limit on the characters accepted for keeper uids, I left the uid to be generated randomly. Unfortunately, this leads to stale uids piling up in the list across restarts (thanks in advance for the "remove" command from PR #301), and it even caused instability because an old uid was still considered the master of the cluster (I had to re-init... sorry, I have no trace left to understand how this could happen...).

Since I use Nomad, there are several ways to give a task a fixed uid (${node.unique.name}, ${node.unique.id}, ${attr.unique.hostname}...), but they all contain dashes or dots.

Why are dashes forbidden? I understand it for dots, and underscores could also have been banned because of the DNS problems they can cause, but for dashes I admit I don't understand the problem.

sgotti commented Jul 20, 2017

@LordFPL Every keeper should have a persistent volume so that, if restarted, it will see its previous data. It looks like you are using ephemeral volumes for your keepers, which will lead to data loss if all your keepers are restarted at the same time.

The keeper uid, if not provided, is generated at first start and saved in the volume, so subsequent restarts will use the saved uid.

Regarding the uid format, it's limited to the characters accepted in PostgreSQL replication slot names (the keeper uid isn't actually used for that anymore, since the db uid generated by the sentinel is currently used for the persistent slot names).
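
Just as an illustration (not the actual stolon code), the behaviour is roughly this, assuming the allowed characters mirror PostgreSQL replication slot names (lowercase letters, digits, underscore) and that the generated uid is kept in a small file inside the keeper's data directory:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"os"
	"path/filepath"
	"regexp"
)

// Allowed uid characters in this sketch: lowercase letters, digits and
// underscore, mirroring PostgreSQL replication slot names (so "-" and "."
// are rejected).
var validUID = regexp.MustCompile(`^[a-z0-9_]+$`)

// loadOrGenerateUID returns the uid provided by the user, the uid saved by a
// previous start, or a freshly generated one persisted for future restarts.
func loadOrGenerateUID(dataDir, providedUID string) (string, error) {
	if providedUID != "" {
		if !validUID.MatchString(providedUID) {
			return "", fmt.Errorf("invalid uid %q: only lowercase letters, digits and underscores are accepted", providedUID)
		}
		return providedUID, nil
	}

	uidFile := filepath.Join(dataDir, "uid") // hypothetical file name

	// Reuse the uid saved by a previous start, if any.
	if data, err := os.ReadFile(uidFile); err == nil {
		return string(data), nil
	} else if !os.IsNotExist(err) {
		return "", err
	}

	// First start: generate a random lowercase hex uid and save it so that
	// restarts with the same volume keep the same identity.
	b := make([]byte, 4)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	uid := fmt.Sprintf("%x", b)
	if err := os.WriteFile(uidFile, []byte(uid), 0600); err != nil {
		return "", err
	}
	return uid, nil
}

func main() {
	// Example usage with an arbitrary data directory and no user-provided uid.
	uid, err := loadOrGenerateUID(os.TempDir(), "")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("keeper uid:", uid)
}
```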

LordFPL commented Jul 20, 2017

@sgotti I do have persistent storage with Nomad (see #101), but I think my mistake was a temporary test on another storage... and then a return to the initial storage... I think this is the cause of my problem.

I solved my problem with per-node meta vars in Nomad (https://www.nomadproject.io/docs/agent/configuration/client.html#meta).
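
Roughly, the workaround looks like this (the meta key name, its value, and the keeper flag below are examples only, adapt them to your stolon version and setup):

```hcl
# Per-node Nomad client configuration: a stable, node-specific meta key
# (the key name "keeper_uid" and its value are examples).
client {
  meta {
    keeper_uid = "keeper01"
  }
}

# In the job file, that value can then be interpolated into the keeper's
# arguments (the exact flag name depends on the stolon version in use):
#
#   args = ["--uid", "${meta.keeper_uid}"]
```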

IMHO, allowing dashes in uids (to use ${node.unique.id} for example) and making the ids of sentinels/proxies configurable (for a more readable stolonctl status) would be good things... but it costs time and I'm not a golang dev :(

Thanks for your reply anyway, I'm closing this issue :)

LordFPL closed this as completed on Jul 20, 2017

sgotti commented Jul 22, 2017

@LordFPL glad to know you figured out the problem.

> IMHO, allowing dashes in uids (to use ${node.unique.id} for example) and making the ids of sentinels/proxies configurable (for a more readable stolonctl status) would be good things... but it costs time and I'm not a golang dev :(

Sentinels and proxies don't accept a uid: to avoid possible uid collisions (the uid is used in various places, such as sentinel election and waiting for proxies to close connections), every instance generates a random one. Keepers, instead, must have a fixed (and unique) uid; it can be provided by the user (who must take care that no two keepers share the same uid) or generated at first start (and saved inside the keeper's persistent data).
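
A rough sketch of the difference (again, not the actual implementation): sentinels and proxies generate a fresh random uid on every start and never persist it, while keepers reuse the uid stored in their persistent data, as in the sketch above:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newRandomUID returns a short random identifier. Sentinels and proxies get a
// fresh one on every start and never persist it, so a restarted instance can
// never be confused with its previous incarnation; keepers instead reuse the
// uid stored in their data directory.
func newRandomUID() string {
	b := make([]byte, 4)
	if _, err := rand.Read(b); err != nil {
		panic(err) // crypto/rand failing is unrecoverable in this sketch
	}
	return fmt.Sprintf("%x", b)
}

func main() {
	fmt.Println("sentinel uid:", newRandomUID())
	fmt.Println("proxy uid:   ", newRandomUID())
}
```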

sgotti commented Jul 22, 2017

@LordFPL Opened PR #320 to better document this.
