I'd like to introduce a scaling tool for transferring libraries from one node to another, to help balance server load.
Use case
Node A has 100 relatively low-use libraries, but over time the load continues to grow beyond the vertical limits of the node. The admin needs to 'evict' a certain number of libraries to a new node in order to balance the load better. A reasonable number might be half or a quarter of them.
Requirements
- When a library transfer begins, it is not required that replicas connected to the library remain connected, but it is preferable if possible.
- Since no single replica is guaranteed to have a complete copy of the library, the server must be responsible for transferring data to the new node.
- During authorization, new replicas should be directed to the new node for their sync endpoint (a rough sketch of this follows below).
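To illustrate that last requirement, here's a minimal sketch of an authorization handler that looks up which node currently owns a library and hands the replica that node's sync endpoint. Everything here (`LibraryPlacement`, `signToken`, the claim names) is hypothetical, not an existing Verdant API:

```ts
// Hypothetical sketch only: route a new replica to whichever node
// currently owns its library at authorization time. None of these
// names are real Verdant APIs; they just illustrate the requirement.

interface LibraryPlacement {
  // maps libraryId -> base URL of the node currently hosting that library
  get(libraryId: string): Promise<string>;
}

async function authorizeReplica(
  libraryId: string,
  userId: string,
  placements: LibraryPlacement,
  signToken: (claims: Record<string, string>) => string, // e.g. a JWT signer
) {
  const syncEndpoint = await placements.get(libraryId);
  return {
    // the replica connects here for sync instead of assuming one fixed host
    syncEndpoint,
    token: signToken({ sub: userId, lib: libraryId, aud: syncEndpoint }),
  };
}
```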
Given this, there are two routes...
1: Immediately disconnect all replicas, begin a library transfer, and refuse replica connections at the new node until the transfer is complete (messy).
2: Begin the library transfer using normal sync, but between servers. Resync/rebroadcast new ops to the new node via transparent replication of traffic (just pipe incoming requests straight to the other server). After the initial sync is complete, drop the current replicas. Continue piping traffic until all replicas are gone, then delete the library.
Option 2 definitely seems more robust. It remains to be seen whether server ordering will still work in this arrangement, but I think it will.
This all depends on #263, as the "piping" aspect will definitely be async, and a complete round-trip will be required to get the server order.
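To make the piping idea concrete, here's a rough sketch of how the old node might forward incoming operation batches and wait for the new node's server order during a transfer. The types, endpoints, and `applyLocally` are all assumptions for illustration, not the actual server internals:

```ts
// Hypothetical sketch of route 2: while a library is mid-transfer, the old
// node forwards every incoming operation batch to the new node and waits
// for the new node's assigned server order before applying/rebroadcasting.
// Names, routes, and shapes here are illustrative only.

interface OperationBatch {
  libraryId: string;
  replicaId: string;
  operations: unknown[];
}

interface TransferState {
  targetBaseUrl: string; // e.g. "https://node-b.example.com"
  initialSyncComplete: boolean;
}

const transfers = new Map<string, TransferState>();

async function handleIncomingOperations(batch: OperationBatch) {
  const transfer = transfers.get(batch.libraryId);
  if (!transfer) {
    // normal path: this node owns the library and assigns server order itself
    return applyLocally(batch);
  }

  // piping path: the round-trip to the new node yields the authoritative
  // server order, which is why this flow has to be fully async (#263)
  const res = await fetch(`${transfer.targetBaseUrl}/sync/${batch.libraryId}`, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(batch),
  });
  const { serverOrder } = await res.json();

  // keep the local copy consistent while old replicas are still connected
  await applyLocally(batch, serverOrder);
  return { serverOrder };
}

// stand-in for the node's existing storage/broadcast logic
declare function applyLocally(
  batch: OperationBatch,
  serverOrder?: number,
): Promise<{ serverOrder: number }>;
```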
There's some overlap with #247 here, as the stated requirements of this issue imply we'd need to coordinate the transfer between the Verdant server and the authorization token generation, which currently seems kinda loose.
If tokens were already requested from the server (rather than generated from a shared secret), then the server could include the transferred endpoint in the token response.
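Something like this, purely as a sketch: the app backend asks the Verdant server for the real token, and the server answers with the post-transfer sync endpoint alongside it. The `/internal/token` route, auth header, and response shape are assumptions, not an existing API:

```ts
// Hypothetical sketch: delegate token generation to the Verdant server
// instead of signing with a shared secret, so the server can point fresh
// replicas at wherever the library lives now. Route and shape are made up.

async function getClientToken(libraryId: string, userId: string) {
  const res = await fetch('https://verdant.example.com/internal/token', {
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      authorization: `Bearer ${process.env.VERDANT_ADMIN_KEY}`,
    },
    body: JSON.stringify({ libraryId, userId }),
  });
  // the server knows where the library lives, so it can answer with the
  // post-transfer endpoint alongside the token itself
  const { token, syncEndpoint } = await res.json();
  return { token, syncEndpoint };
}
```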
But what I may be overlooking here is how multiple Verdant server nodes will agree on which node to send a library's replicas to. I'm thinking as if they have a shared database, but they don't (currently). Will this require something like Redis? Maybe ZeroMQ?
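If it did end up being Redis, one possible shape is a tiny shared placement registry that every node consults, with a lock to keep two nodes from moving the same library at once. This is only a sketch under that assumption; the key names and the choice of Redis itself are exactly the open question above:

```ts
// Hypothetical sketch: a Redis-backed registry all Verdant nodes consult
// to agree on where a library lives. Whether to use Redis at all is an
// open question; key names and URLs here are invented for illustration.
import { createClient } from 'redis';

const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect();

const DEFAULT_NODE_URL = 'https://node-a.example.com';

// which node currently hosts a library (unregistered = default node)
async function getLibraryNode(libraryId: string): Promise<string> {
  return (await redis.get(`library-node:${libraryId}`)) ?? DEFAULT_NODE_URL;
}

// take a short-lived lock before starting a transfer so two nodes don't
// try to move the same library at once (NX = only set if not present)
async function lockForTransfer(libraryId: string): Promise<boolean> {
  const locked = await redis.set(`library-transfer-lock:${libraryId}`, '1', {
    NX: true,
    EX: 600, // assume a transfer finishes well within ten minutes
  });
  return locked === 'OK';
}

// record the new placement so authorization starts directing fresh
// replicas at the destination node
async function setLibraryNode(libraryId: string, nodeUrl: string) {
  await redis.set(`library-node:${libraryId}`, nodeUrl);
}
```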