
Request queueing tracking #717

Open
guilherme opened this issue Oct 21, 2019 · 10 comments

@guilherme

Hello,
I see that in Ruby it's possible to track request queueing, but it doesn't seem to be possible in dd-trace-js; at least, I couldn't find it in the docs or in the code.

Is it possible to add that?

Thank you,

@guilherme
Author

guilherme commented Oct 22, 2019

Just to give a bit more context: we're doing SSR, so we had to do something similar to Airbnb's https://medium.com/airbnb-engineering/operationalizing-node-js-for-server-side-rendering-c5ba718acfc9. I'm sure more people face the same difficulty.

We want to be able to measure how much time is spent on queueing (e.g. the time it takes for a request to go from the HTTP server to the Node server).
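To illustrate the measurement we have in mind, here is a rough sketch, assuming the proxy stamps each request with an `X-Request-Start` style header containing an epoch-milliseconds timestamp (the helper name `queueTimeMs` is hypothetical, not part of any library):

```javascript
// Hypothetical helper: parse an "X-Request-Start" style header
// (epoch milliseconds, optionally prefixed with "t=") and return
// how long the request waited before reaching Node.
function queueTimeMs (headerValue, nowMs) {
  if (!headerValue) return null
  const start = parseInt(String(headerValue).replace(/^t=/, ''), 10)
  if (Number.isNaN(start)) return null
  return Math.max(0, nowMs - start)
}

// e.g. as Express middleware:
// app.use((req, res, next) => {
//   const queued = queueTimeMs(req.headers['x-request-start'], Date.now())
//   // report `queued` somewhere
//   next()
// })
```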

@guilherme
Author

It would be great if you could either give me some guidance on how I could do this in a custom way for my app, or on contributing it back to the library.

@rochdev
Member

rochdev commented Oct 22, 2019

Thanks for the suggestion! This is definitely something we should support. It would be especially useful for managed services that cannot otherwise be instrumented.

Depending on the use case, it's also possible however that this is not necessary. For example, our NGINX integration means that you can instrument NGINX itself, meaning you don't need to calculate this on the upstream service, and you also get visibility even if the upstream is never reached (for example 502s and 504s).

> We want to be able to measure how much time is spent on queueing (e.g. the time it takes for a request to go from the HTTP server to the Node server).

I'm not sure I fully understand what that means. Could you describe in more detail what the HTTP server and the Node server are and how they interact?

@guilherme
Author

@rochdev good one. We're using HAProxy in a similar way to NGINX, so it only forwards requests if there's bandwidth to do so.

I've noticed there's an integration for HAProxy that I could use, and it seems to give me what I want: haproxy.backend.queue.current. I am going to try it.

@guilherme
Author

guilherme commented Oct 22, 2019

We are collecting these metrics, but they don't show up as part of the APM traces; they are collected separately. We would like the request queueing to appear as part of the trace, so the HAProxy integration is not sufficient.

@rochdev
Member

rochdev commented Oct 22, 2019

In the case of NGINX we indeed also have an APM integration, but not for HAProxy at the moment. So your best bet in this case would definitely be the request queueing, at least for now.

Unfortunately, since there is no built-in way to do this, you will have to rely on a workaround until we release this feature.

The following steps will need to be taken in order to update the trace before it's flushed:

  1. Add a span hook on the request span. For the purpose of this workaround example, I will assume you are using Express.
  2. Create a new span with the timing received from HAProxy. It should be a child of the request span at this point so that it copies the shared internal trace state.
  3. Update the root span to use the new span as its parent.

This (untested) snippet should do the trick:

// right after tracer.init()
tracer.use('express', {
  hooks: {
    request: (span, req, res) => {
      // HAProxy must be configured to forward the request start time,
      // e.g. in an X-Request-Start header.
      const header = req.headers['x-request-start']

      if (!span || !header) return

      // multiply/divide if needed so the timestamp is in milliseconds
      const startTime = parseInt(header, 10)

      if (Number.isNaN(startTime)) return

      // Create the queue span as a child of the request span so that it
      // copies the shared internal trace state.
      const queueSpan = tracer.startSpan('http.queue', {
        childOf: span,
        startTime
      })
      const requestContext = span.context()
      const queueContext = queueSpan.context()

      // Swap parents: the queue span takes over the request span's parent,
      // and the request span becomes a child of the queue span.
      queueContext._parentId = requestContext._parentId
      requestContext._parentId = queueContext._spanId
      queueContext._tags['service.name'] = `${queueContext._tags['service.name']}-haproxy`
      queueSpan.finish()
    }
  }
})

I know this is a pretty big workaround and is definitely not ideal, but for now this would be the only way to get this to work without the actual feature implemented.
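For the header itself, HAProxy needs to be configured to stamp each request before forwarding it. A possible (untested) configuration sketch, assuming an HAProxy version where `http-request set-header` accepts a log-format string (frontend/backend names are placeholders):

```
frontend app
    bind *:80
    # %Ts = request timestamp in seconds, %ms = milliseconds part.
    # Adjust the parsing on the Node side to match whatever format you pick.
    http-request set-header X-Request-Start %Ts%ms
    default_backend node_servers
```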

@sahellebusch

Bump on this. Request queueing metrics would be very helpful.

@mintusah25

Any update on adding this feature to the tracer itself?

@AndrejGajdos

+1

@eddowh

eddowh commented Apr 3, 2024

+1
