Request queueing tracking #717
Just to give a bit more context: we're doing SSR, so we had to do something similar to Airbnb's https://medium.com/airbnb-engineering/operationalizing-node-js-for-server-side-rendering-c5ba718acfc9, and I'm sure more people run into the same difficulty. We want to be able to measure how much time is spent on queueing (i.e. the time it takes for a request to go from the HTTP server to the Node server).
It would be great if you could either give me some guidance on how I could do this in a custom way for my app, or on contributing it back to the library.
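For illustration, here is a minimal sketch (not from this thread) of the manual measurement described in the Airbnb article. It assumes the fronting proxy injects an `X-Request-Start` header containing a Unix epoch timestamp in milliseconds; the header name and unit depend entirely on your proxy configuration, so adjust the parsing accordingly.

```js
// Illustrative sketch only: compute queue time from a timestamp header set by
// the fronting proxy. Assumes X-Request-Start holds epoch milliseconds.
const express = require('express')

const app = express()

app.use((req, res, next) => {
  const raw = req.headers['x-request-start']
  if (raw) {
    // Some proxies prefix the value with "t=" -- strip it before parsing.
    const startMs = parseInt(String(raw).replace(/^t=/, ''), 10)
    if (!Number.isNaN(startMs)) {
      const queueTimeMs = Date.now() - startMs
      // Surface the value however you prefer: a log line, a custom metric, a span tag...
      console.log(`request queue time: ${queueTimeMs}ms`)
    }
  }
  next()
})

app.get('/', (req, res) => res.send('ok'))
app.listen(3000)
```

This only produces a standalone measurement; attaching it to the APM trace is what the rest of the thread is about.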
Thanks for the suggestion! This is definitely something we should support. It would be especially useful for managed services that cannot otherwise be instrumented. Depending on the use case, however, it may not even be necessary. For example, our NGINX integration lets you instrument NGINX itself, so you don't need to calculate this on the upstream service, and you also get visibility even if the upstream is never reached (for example on 502s and 504s).
I'm not sure I fully understand what that means. Could you describe it in more detail?
@rochdev good one. We're using HAProxy in a similar way to NGINX, so it only forwards requests if there's bandwidth to do so. I've noticed there's an integration for HAProxy that I could use, and it seems to give me what I want.
We are collecting these metrics, but they don't show up as part of the APM traces; they are collected separately. We would like the request queueing to be part of the trace, so the HAProxy integration is not sufficient.
In the case of NGINX we do have an APM integration, but not for HAProxy at the moment, so your best bet in this case would definitely be request queueing, at least for now. Unfortunately, since there is no built-in way to do this, you will have to rely on a workaround until we release this feature: the trace needs to be updated before it's flushed.
This (untested) snippet should do the trick:

```js
// right after tracer.init()
tracer.use('express', {
  hooks: {
    request: (span, req, res) => {
      // Time at which the proxy received the request, taken from a header it sets.
      const startTime = parseInt(req.headers['x-request-start']) // multiply/divide if needed

      // Create a span representing the time spent queueing, starting when the
      // proxy received the request.
      const queueSpan = tracer.startSpan('http.queue', {
        childOf: span,
        startTime
      })

      // Re-parent the spans so the queue span sits between the original parent
      // and the request span: parent -> http.queue -> express.request.
      const requestContext = span.context()
      const queueContext = queueSpan.context()

      queueContext._parentId = requestContext._parentId
      requestContext._parentId = queueContext._spanId

      // Report the queue span under a dedicated service so it shows up separately.
      queueContext._tags['service.name'] = `${queueContext._tags['service.name']}-haproxy`

      queueSpan.finish()
    }
  }
})
```

I know this is a pretty big workaround and is definitely not ideal, but for now this is the only way to get this to work without the actual feature implemented.
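As a side note on the `// multiply/divide if needed` comment above: proxies emit the timestamp in different units (seconds, milliseconds, or microseconds) depending on configuration. A hypothetical helper like the one below, which is not part of dd-trace, could normalize the value before passing it as `startTime`, assuming the tracer expects epoch milliseconds.

```js
// Hypothetical helper (not a dd-trace API): normalize an X-Request-Start value
// to epoch milliseconds, guessing the unit from the order of magnitude.
function parseRequestStart (headerValue) {
  const value = parseFloat(String(headerValue).replace(/^t=/, ''))
  if (Number.isNaN(value)) return undefined
  if (value > 1e14) return value / 1000 // microseconds -> milliseconds
  if (value > 1e11) return value        // already milliseconds
  return value * 1000                   // seconds -> milliseconds
}

// Usage inside the hook shown above:
// const startTime = parseRequestStart(req.headers['x-request-start'])
// if (startTime === undefined) return // header missing or malformed
```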
Bump on this. Request queueing metrics would be very helpful.
Any update on adding this feature to the tracer itself?
+1
Hello,
I see that in Ruby it's possible to track request queueing, but it doesn't seem possible in dd-trace-js; at least I couldn't find it in the docs or in the code.
Would it be possible to add that?
Thank you,