storage: track latest reported status and report on reconnect #30576

Merged
4 changes: 2 additions & 2 deletions src/storage/src/render.rs
@@ -402,7 +402,7 @@ pub fn build_ingestion_dataflow<A: Allocate>(
&health_stream,
crate::healthcheck::DefaultWriter {
command_tx: storage_state.internal_cmd_tx.clone(),
updates: Rc::clone(&storage_state.object_status_updates),
updates: Rc::clone(&storage_state.shared_status_updates),
},
storage_state
.storage_configuration
@@ -457,7 +457,7 @@ pub fn build_export_dataflow<A: Allocate>(
&health_stream,
crate::healthcheck::DefaultWriter {
command_tx: storage_state.internal_cmd_tx.clone(),
updates: Rc::clone(&storage_state.object_status_updates),
updates: Rc::clone(&storage_state.shared_status_updates),
},
storage_state
.storage_configuration
74 changes: 64 additions & 10 deletions src/storage/src/storage_state.rs
@@ -230,7 +230,9 @@ impl<'w, A: Allocate> Worker<'w, A> {
timely_worker.index(),
timely_worker.peers(),
),
object_status_updates: Default::default(),
shared_status_updates: Default::default(),
latest_status_updates: Default::default(),
reported_status_updates: Default::default(),
internal_cmd_tx,
internal_cmd_rx,
read_only_tx,
@@ -308,11 +310,20 @@ pub struct StorageState {
/// Statistics for sources and sinks.
pub aggregated_statistics: AggregatedStatistics,

/// Status updates reported by health operators.
/// A place shared with running dataflows, so that health operators can
/// report status updates back to us.
///
/// **NOTE**: Operators that append to this collection should take care to only add new
/// status updates if the status of the ingestion/export in question has _changed_.
pub object_status_updates: Rc<RefCell<Vec<StatusUpdate>>>,
pub shared_status_updates: Rc<RefCell<Vec<StatusUpdate>>>,

/// The latest status update for each object.
pub latest_status_updates: BTreeMap<GlobalId, StatusUpdate>,

/// The latest status update that has been _reported_ back to the
/// controller. This will be reset when a new client connects, so that we
/// can determine what updates we have to report again.
pub reported_status_updates: BTreeMap<GlobalId, StatusUpdate>,

Review comment (Contributor):
The existence of this map surprises me! Naively I'd have thought that we can just unconditionally send the contents of latest_status_updates on reconnect. Or, if that's somehow hard to set up, wouldn't an initial_status_reported flag suffice?
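
A minimal sketch of the alternative this comment describes, assuming a hypothetical `initial_status_reported: bool` field on `StorageState` in place of the `reported_status_updates` map (the flag would be cleared wherever the map is cleared in this PR, e.g. when a new client connects); this is not the PR's implementation, just the suggested shape:

```rust
// Hypothetical alternative sketched from the comment above: keep only
// `latest_status_updates` plus a boolean, instead of a second map that
// tracks what has already been reported.
pub fn report_status_updates(&mut self, response_tx: &ResponseSender) {
    // Drain the buffer shared with the health operators and remember the
    // latest status per object so it can be replayed after a reconnect.
    let fresh = self.storage_state.shared_status_updates.take();
    for update in &fresh {
        self.storage_state
            .latest_status_updates
            .insert(update.id, update.clone());
    }

    // On the first report after a (re)connect, replay the latest known
    // status of every object; otherwise forward only the fresh updates.
    let to_report = if !self.storage_state.initial_status_reported {
        self.storage_state.initial_status_reported = true;
        self.storage_state
            .latest_status_updates
            .values()
            .cloned()
            .collect()
    } else {
        fresh
    };

    if !to_report.is_empty() {
        self.send_storage_response(response_tx, StorageResponse::StatusUpdates(to_report));
    }
}
```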


/// Sender for cluster-internal storage commands. These can be sent from
/// within workers/operators and will be distributed to all workers. For
@@ -458,13 +469,7 @@ impl<'w, A: Allocate> Worker<'w, A> {
self.report_frontier_progress(&response_tx);
self.process_oneshot_ingestions(&response_tx);

// Report status updates if any are present
if self.storage_state.object_status_updates.borrow().len() > 0 {
self.send_storage_response(
&response_tx,
StorageResponse::StatusUpdates(self.storage_state.object_status_updates.take()),
);
}
self.report_status_updates(&response_tx);

if last_stats_time.elapsed() >= stats_interval {
self.report_storage_statistics(&response_tx);
@@ -831,6 +836,50 @@ impl<'w, A: Allocate> Worker<'w, A> {
}
}

/// Pumps latest status updates from the buffer shared with operators and
/// reports any updates that need reporting.
pub fn report_status_updates(&mut self, response_tx: &ResponseSender) {
let mut to_report = Vec::new();

// Pump updates into our state and stage them for reporting.
if self.storage_state.shared_status_updates.borrow().len() > 0 {
for shared_update in self.storage_state.shared_status_updates.take() {
let id = shared_update.id;

to_report.push(shared_update.clone());

self.storage_state
.latest_status_updates
.insert(id, shared_update.clone());
self.storage_state
.reported_status_updates
.insert(id, shared_update);
}
}

// We reset what we have reported when a new client/controller connects,
// such that we can re-send our latest status updates.
//
// And here we report any status where our latest known status differs
// from what has been reported.
for (id, latest_update) in self.storage_state.latest_status_updates.iter() {
let reported_update = self.storage_state.reported_status_updates.get(id);
if let Some(reported_update) = reported_update {
if reported_update == latest_update {
continue;
}
}
to_report.push(latest_update.clone());
self.storage_state
.reported_status_updates
.insert(id.clone(), latest_update.clone());
}

if to_report.len() > 0 {
self.send_storage_response(response_tx, StorageResponse::StatusUpdates(to_report));

Review comment (Contributor):
A place that's going to conflict with #31261 (fyi @petrosagg)

}
}

/// Report source statistics back to the controller.
pub fn report_storage_statistics(&mut self, response_tx: &ResponseSender) {
let (sources, sinks) = self.storage_state.aggregated_statistics.emit_local();
@@ -1135,6 +1184,9 @@ impl<'w, A: Allocate> Worker<'w, A> {
*frontier = Antichain::from_elem(<_>::minimum());
}

// Reset the reported status updates for the remaining objects.
self.storage_state.reported_status_updates.clear();
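// Clearing this map makes the next call to `report_status_updates` treat
// every entry in `latest_status_updates` as unreported, so the freshly
// connected controller receives the latest known status of each remaining
// object once more.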

// Execute the modified commands.
for command in commands {
self.storage_state.handle_storage_command(command);
@@ -1288,6 +1340,8 @@ impl StorageState {
self.ingestions.remove(&id);
self.exports.remove(&id);

let _ = self.reported_status_updates.remove(&id);

Review comment (Contributor):
Don't we need to remove the latest_status_updates entry too?
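
A minimal sketch of the cleanup this comment asks about, in the same removal path shown above; pruning both maps would keep a dropped object's status from being replayed on a later reconnect:

```rust
// Sketch: prune both maps when an object is dropped, so its stale status
// is neither kept around nor replayed to a newly connected controller.
let _ = self.reported_status_updates.remove(&id);
let _ = self.latest_status_updates.remove(&id);
```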


// This will stop reporting of frontiers.
//
// If this object still has its frontiers reported, we will notify the