Unclear how to use drop_result. #22
Thank you for your query. Indeed, it seems the documentation is unclear on this point. What is your use-case for discarding the result?
PS: I will try to fix it over the weekend.
Thanks for the quick reply! My use case is pretty simple: I'd like to perf-test Redis in a similar way to redis-benchmark, but with more customized control. Obviously, I'd like the client to lose as little time on parsing as possible, i.e. just to verify that the response which came back is «valid» (i.e. its type is retrievable) and drop the remaining data. After my investigation, I see bredis as a perfect fit.
Could you please try PR #24? How to use it is basically:

```cpp
using Policy = r::parsing_policy::drop_result;
...
c.async_read(rx_buff, read_callback, count, Policy{});
```

The last 2 parameters are mandatory for your purpose. The count could be equal to

Also, please share your benchmark results. I'm still not completely convinced that the PR should be merged, but I see another use case for
Thanks a lot for the incredible devotion and quick implementation. I gave the implementation a try, and here are my thoughts: generally, I'd like to skip the entire result payload but still see what the "high-level result" was. This allows me to draw a conclusion about the response, e.g.
What you are asking for is some kind of "partial result drop", whereas the
So, for your purposes, I'd suggest that you not extract results, but scan the existing markers, like this:

```cpp
template <typename Iterator>
class not_error : public boost::static_visitor<bool> {
public:
    // any non-error marker counts as success
    template <typename T> bool operator()(const T &value) const {
        return true;
    }
    // the error marker is the only failure case
    bool operator()(const markers::error_t<Iterator> &value) const {
        return false;
    }
};
...
c.async_read(rx_buff, [&](const auto &error_code, result_t &&r) {
    auto success = boost::apply_visitor(not_error<Iterator>(), r.result);
    if (!success) std::abort();
});
```

The markers will still be allocated, but they are quite lightweight. I'll think about the possibility of injecting a custom on-the-fly parsing policy, but that will surely be non-trivial.
Ivan, thanks a lot for your explanation. This is exactly what I was looking for. Over the next few days I'll gather some data to give you insights on the performance benefits (if there are any). I'll post them in this thread. Regarding
@ovanes Any news so far? I have updated performance testing against

where
Sorry for the delay. Below are my findings:

Test Setup
Test Results

Notes:
Actual Results:
Some Notes

When running the tests without pipelining and with a low number of connections, it is clearly observable that Redis CPU utilization stays under 90%, which lets the performance and efficiency of the benchmarking tools compete. With a higher number of connections or bigger pipelines, Redis CPU utilization reaches 100%. Given that, there is no real competition (or only a minimal one) between the benchmarking tools; it becomes more a question of which tool is lucky enough to get a faster response from Redis. Maybe it'd be a good idea to have a benchmark test which repeatedly reads the same key. Doing so would put that key into the cache and make Redis serve it in the fastest possible way. Finally, it can be even more advantageous to avoid real TCP sockets and use Unix domain sockets instead, which can result in much better throughput and lower latency.
@ovanes Thanks a lot for sharing the results. Let's keep that page, as it might be interesting to other people. I also have a few ideas on how to improve performance further.
I've put more thought into the test result interpretation...
Yes, please go ahead. In the current implementation, it performs double parsing: a first pass to determine the end of the expected reply (i.e. with the drop policy), and a second pass to deliver the reply to client code. It is also interesting how you got the numbers for
@basiliscos Unfortunately, I don't fully understand that question:
IMO
Just replace the values in
Documentation states that it's possible to use `drop_result` as part of the parsing policy: https://github.com/basiliscos/cpp-bredis#parse_result_titerator-policy

However, it's pretty unclear how to use it with the `Connection` object. I read through the source code, and `Connection` seems to have `keep_result` hard-coded as the policy type. How would I use `drop_result` with the `Connection` object?

Do I understand the intention of `drop_result` properly, that it would cause the parser to verify that the response isn't an error, while the payload itself is dropped?