# FAQ section of README.md updated #172
@@ -75,6 +75,46 @@ For our need to forcibly close connections to unelected masters and handle keepe
We are open to alternative solutions (PRs are welcome), like using haproxy, if they can meet the above requirements. For example, a hypothetical haproxy-based proxy needs a way to work with changing IP addresses, get the current cluster information, and be able to forcibly close a connection when an haproxy backend is marked as failed (as a note, to achieve the latter, a possible solution that needs testing would be to use the [on-marked-down shutdown-sessions](https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#5.2-on-marked-down) haproxy server option).
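
As an untested sketch of how that option would appear in an haproxy configuration (the backend name and server address are made up for illustration):

```
# hypothetical backend pointing at the current stolon master;
# on-marked-down shutdown-sessions closes established client sessions
# as soon as the server is marked as failed
backend stolon-master
    server pg1 192.168.0.1:5432 check on-marked-down shutdown-sessions
```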
### Does the proxy send read-only requests to standbys?
Currently the proxy redirects all requests to the master. There is a [feature request](https://github.com/sorintlab/stolon/issues/132) for using the proxy also for standbys, but it's low on the priority list. There is a workaround, though.
An application can learn the cluster configuration from the `stolon/cluster/mycluster/clusterdata` key. Consul allows subscribing to updates of this key like this:
> **Review comment:** This is also available with etcd. Not sure if this should be detailed, but we could just say that one can use the watching features of the store.
>
> **Reply:** OK, I'll fix this.
```
http://localhost:8500/v1/kv/stolon/cluster/mycluster/clusterdata?wait=0s&index=14379
```
... where 14379 is the ModifyIndex of the key as reported by Consul.
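
As a minimal sketch (not stolon's own client code) of a watch loop built on Consul's blocking queries, assuming a local agent on port 8500 and a cluster named `mycluster`:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	index := "0" // last seen X-Consul-Index; 0 returns immediately
	for {
		// Blocking query: Consul holds the request open until the key
		// changes or the wait timeout (here 5m) expires.
		url := fmt.Sprintf(
			"http://localhost:8500/v1/kv/stolon/cluster/mycluster/clusterdata?wait=5m&index=%s",
			index)
		resp, err := http.Get(url)
		if err != nil {
			time.Sleep(time.Second) // back off on transport errors
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// X-Consul-Index is the ModifyIndex to pass on the next query.
		if i := resp.Header.Get("X-Consul-Index"); i != "" {
			index = i
		}
		fmt.Printf("clusterdata changed: %s\n", body)
	}
}
```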
### How does stolon decide which standby should be promoted to master?
Currently it tries to find the best standby: the one whose xlog location is nearest to the master's latest known xlog location. If the master is down there's no way to know its latest xlog position (stolon fetches and saves it at regular intervals), so there's no way to guarantee that the chosen standby is not behind, only that it is the best of the available ones.
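
In essence the choice reduces to picking the standby with the highest known xlog position. A toy illustration of that selection (the types and names are hypothetical, not stolon's actual data structures):

```go
package main

import "fmt"

type standby struct {
	name    string
	xlogPos uint64 // last reported xlog (WAL) location
}

// bestStandby returns the standby with the highest known xlog position;
// it assumes at least one candidate is available.
func bestStandby(standbys []standby) standby {
	best := standbys[0]
	for _, s := range standbys[1:] {
		if s.xlogPos > best.xlogPos {
			best = s
		}
	}
	return best
}

func main() {
	candidates := []standby{
		{"postgres2", 0x2F000158},
		{"postgres3", 0x2F0000D0},
	}
	fmt.Println("promote:", bestStandby(candidates).name)
}
```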
### Does synchronous replication mean that I can't lose any data?
Since version 9.6, PostgreSQL supports [synchronous replication to a quorum of standbys](https://www.postgresql.org/docs/9.6/static/runtime-config-replication.html#GUC-SYNCHRONOUS-STANDBY-NAMES). Unfortunately stolon doesn't support this feature yet and configures replication like this:
```
# on postgres1 server
synchronous_standby_names = 'postgres2,postgres3'
```
According to the PostgreSQL documentation:
> Specifies a comma-separated list of standby names that can support synchronous replication, as described in Section 25.2.8. At any one time there will be at most one active synchronous standby; transactions waiting for commit will be allowed to proceed after this standby server confirms receipt of their data. The synchronous standby will be the first standby named in this list that is both currently connected and streaming data in real-time (as shown by a state of streaming in the pg\_stat\_replication view). Other standby servers appearing later in this list represent potential synchronous standbys.
This means that in case of a netsplit the synchronous standby may not be among the majority of nodes; in that case some recent changes will be lost. Although this is not a major problem for most web projects, you currently shouldn't use stolon to store data that must not be lost under any circumstances.
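
For reference, the 9.6 syntax mentioned above would allow requiring confirmation from more than one standby. A hypothetical entry (not something stolon writes today) might look like:

```
# hypothetical postgresql.conf entry using the 9.6 multi-standby syntax:
# the first two connected standbys from this list must confirm each commit
synchronous_standby_names = '2 (postgres2, postgres3, postgres4)'
```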
> **Review comment:** I won't add the concept of quorum here since it creates more confusion; the postgres doc doesn't talk about "quorum" either. In addition, the real problem here is not a netsplit (this is just one of the possible causes) but the fact that we let postgres choose the active synchronous standby, so the sentinel cannot know which one was the active synchronous standby when the master was declared dead. The only way the sentinel has is to find the "best" standby based on the last known xlog position; but if both the master and the active synchronous standby go down at the same time, another standby will be chosen and it may not be in full sync. I opened #173 with a description and a solution for this. It will also work with postgresql <= 9.5, but with the limitation of setting only one sync standby. Thoughts?
>
> **Reply:** I've realized that the answer here is not entirely true anyway: if the cluster size is 3, then the master plus one synchronous replica make a quorum, so in this case data can't be lost. I'll rewrite this. #173 looks good to me. Determining which version of PostgreSQL is running is simple, and knowing that, we know what to write to postgresql.conf if the user would like real consistency.
### Does stolon use Consul as a DNS server as well?
Consul (or etcd) is used only as a key-value store.
> **Review comment:** I don't completely get the meaning of this question. Do you mean registering a service in Consul? If so, which service (the proxies?)? I don't see why stolon should do this.
>
> **Reply:** Consul has a concept of services (API like …). You can also request an SRV record, in which case you will also receive port numbers. This is very convenient for applications that are not aware of Consul: all you need is a proper DNS setup without any caching (Consul's TTL is 0) and domain names like …. So basically the question is whether a client can determine where the current master and standbys are using the DNS protocol.
### Let's say I have multiple stolon clusters. Do I need a separate Consul / etcd cluster for each stolon cluster?
It depends on your architecture and where the different stolon clusters are located. In general, if two clusters live on completely different hardware, then to handle all possible corner cases (like netsplits) you need a separate Consul / etcd cluster for each stolon cluster.
## Contributing to stolon
stolon is an open source project under the Apache 2.0 license, and contributions are gladly welcomed!
> **Review comment:** I'll add an example (also if this is going to change with #160) of what to do with that data (i.e. get the clusterview.keeperole info).
>
> **Review comment:** The real problem with this is that, without more logic, it's not assured whether the standbys are in sync, dead, or something else.