chore(telemetry): integration exception tracking #11732
base: main
Conversation
Benchmarks: execution time 2025-01-16 19:15:54. Comparing candidate commit f613c49 in the PR branch: found 0 performance improvements and 0 performance regressions. Performance is the same for 394 metrics, 2 unstable metrics.
Collect and dedupe ddtrace.contrib logs, and send them to telemetry.
Report only errors, or exceptions with a stack trace. Added tags and stack trace (without redaction).
Add count
Force-pushed from b11966f to ec8f7ca.
class _TelemetryConfig:
It looks like we are introducing telemetry-specific logic into a logging source. Can we try to see if there is a different design that allows keeping the two separate, please?
Not really "introducing": some of this was already there to capture errors, and this change just extends it to exception tracking.
Alternatively, we would have to duplicate all the logging calls in the contrib modules just to get exception tracking, which is easy to forget and introduces code duplication in the instrumentation code.
I'll consider adding a separate telemetry logger if you think that's a better solution. It will probably need to be in the same package, because my attempt to put it in a telemetry package ended with
ImportError: cannot import name 'get_logger' from partially initialized module 'ddtrace.internal.logger' (most likely due to circular import)
I have introduced DDTelemetryLogger to separate concerns. Please let me know what you think about it.
Great, thanks. I think we really need to move all telemetry-related code to the already existing telemetry sources. For instance, we already parse DD_INSTRUMENTATION_TELEMETRY_ENABLED in dd-trace-py/ddtrace/settings/config.py (lines 509 to 510 at 0e31457):
self._telemetry_enabled = _get_config("DD_INSTRUMENTATION_TELEMETRY_ENABLED", True, asbool)
self._telemetry_heartbeat_interval = _get_config("DD_TELEMETRY_HEARTBEAT_INTERVAL", 60, float)
Thank you for the feedback! While I agree with the general concern about coupling software components, I would appreciate some clarification and guidance on how the proposed improvements can be implemented effectively. My previous attempts to achieve this didn’t succeed, so your input would be invaluable.
Could you elaborate on what you mean by "all telemetry-related code"? Moving DDTelemetryLogger to the telemetry module isn't straightforward because it is tightly coupled with DDLogger. Its primary functionality revolves around logging - extracting exceptions and passing them to the telemetry module. As a result, its logic and state are more closely tied to the logger than to telemetry itself.
Regarding the configuration, this is indeed a trade-off. Moving it to the telemetry module would result in circular dependency issues during initialization. Any suggestions on how to address these challenges while keeping the codebase clean and decoupled would be greatly appreciated.
Hey Yury,
In ddtrace/contrib/, we define 0 error logs, 49 warning logs, and 118 debug logs (GitHub search). This accounts for only a small fraction of the errors that occur.
In most cases, when ddtrace instrumentation fails at runtime, an unhandled exception is raised. These exceptions are not captured by ddtrace loggers.
If an exception escapes a user's application and reaches the Python interpreter, it will be captured by the TelemetryWriter exception hook. Currently, this hook only captures startup errors, but it could be extended to capture exceptions raised during runtime.
Rather than defining a ddtrace logger primarily for debug logs, we could capture critical runtime integration errors directly using the telemetry exception hook. This approach decouples the telemetry writer from the core loggers and ensures that one error per failed process is captured, eliminating the need for rate limiting.
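A minimal sketch of the kind of interpreter-level hook this refers to is shown below. It is not the actual TelemetryWriter hook, and the report_integration_error() helper is hypothetical; it only illustrates chaining sys.excepthook and filtering for unhandled exceptions whose traceback passes through ddtrace/contrib.

```python
import sys
import traceback


def report_integration_error(exc_type, exc_value, frames):
    # Hypothetical helper: stands in for handing the error to the telemetry writer.
    print(f"telemetry: integration error {exc_type.__name__}: {exc_value}")


_original_excepthook = sys.excepthook


def _telemetry_excepthook(exc_type, exc_value, exc_tb):
    frames = traceback.extract_tb(exc_tb)
    # Only report unhandled exceptions whose traceback passes through integration code.
    if any("ddtrace/contrib" in frame.filename for frame in frames):
        report_integration_error(exc_type, exc_value, frames)
    # Always defer to the previous hook so user-visible behaviour is unchanged.
    _original_excepthook(exc_type, exc_value, exc_tb)


sys.excepthook = _telemetry_excepthook
```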
Would this approach capture the errors you're concerned about?
Additionally, I’m a big fan of using telemetry metrics where possible. Metrics are easier to send and ingest, have lower cardinality, and are generally simpler to monitor and analyze. While a metric wouldn’t provide the context of tracebacks, it would be valuable if we could define telemetry metrics to track integration health.
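For illustration, here is a hedged sketch of what such an integration-health counter could look like; the TelemetryMetricsSketch class and its add_count() method are assumptions, not an existing dd-trace-py API.

```python
from collections import Counter


class TelemetryMetricsSketch:
    """Aggregates low-cardinality counters locally; a real writer would flush them on the heartbeat."""

    def __init__(self):
        self._counts = Counter()

    def add_count(self, name, tags=()):
        # Key the counter by metric name plus a small, sorted set of tags.
        self._counts[(name, tuple(sorted(tags)))] += 1


metrics = TelemetryMetricsSketch()
# One counter per (integration, error type) keeps cardinality low while still
# showing which integrations are unhealthy.
metrics.add_count("integration_errors", tags=(("integration", "flask"), ("error_type", "AttributeError")))
```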
Thanks for taking a look and sharing your thoughts, Munir!
I appreciate your suggestion, it makes perfect sense and would complement this effort well. Extending the telemetry exception hook to capture runtime errors in addition to startup errors would indeed provide valuable insight and ensure that critical errors are visible to us. It would be interesting to hear how the telemetry exception hook would need to be modified to do this, as I thought it already covered this.
However, I think this is a slightly different goal than the one addressed in this PR. Reporting caught exceptions in our instrumentation can still be valuable, even though most caught exceptions in the contrib code are currently logged at the debug level. While that keeps them largely invisible to customers (which makes sense), these exceptions can still be very useful to us internally, particularly for identifying and fixing potentially broken integration code.
Without this functionality, we remain unaware of the problems behind these caught exceptions, and that is what this PR is intended to address. The primary consumer of this data would be our team, not end users. Uncaught exceptions are visible to users; caught exceptions, while less severe, can still give us actionable insights to improve the product, and that is the idea behind this change. I hope this clarifies the intent and need behind the proposed changes.
This change is part of the cross-tracer Integration Exception Tracking initiative.
It is intended to capture integration-related errors and report them to the telemetry backend. It does this by implementing a DDTelemetryLogger that is used by the instrumentation code in the ddtrace.contrib package and only catches logs with level error or higher, or logs that have an exception attached.
The motivation for having DDTelemetryLogger is that the logger is the natural way to report integration-specific errors. This eliminates the need to duplicate the reporting logic and helps ensure that such exceptions are not forgotten when reporting to telemetry. Here are some numbers about the ddtrace.contrib package log statements (exc_info=True):
To avoid overloading the telemetry backend, and to minimize the impact of traversing and formatting the traceback, DDTelemetryLogger introduces a rate limiter that will not report the same error more than once per 60-second heartbeat interval. For the traceback, it also does some processing before reporting.
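A rough sketch of the filtering and rate-limiting behaviour described above is shown here. This is not the actual DDTelemetryLogger from the PR; the _report_to_telemetry() helper is a stand-in for the real telemetry call.

```python
import logging
import time

HEARTBEAT_INTERVAL = 60.0  # seconds, matching the telemetry heartbeat interval


class TelemetryLoggerSketch(logging.Logger):
    """Forwards error-level or exception-carrying records to telemetry, rate-limited per call site."""

    _last_reported = {}  # (pathname, lineno) -> monotonic time of last report

    def handle(self, record):
        # Report records at ERROR or above, or any record that carries exc_info.
        if record.levelno >= logging.ERROR or record.exc_info:
            key = (record.pathname, record.lineno)
            now = time.monotonic()
            if now - self._last_reported.get(key, float("-inf")) >= HEARTBEAT_INTERVAL:
                self._last_reported[key] = now
                self._report_to_telemetry(record)
        # Normal logging behaviour is unchanged.
        super().handle(record)

    def _report_to_telemetry(self, record):
        # Placeholder: the real implementation would build tags and a processed
        # stack trace and hand them to the telemetry writer.
        pass


# Usage sketch: have loggers created for ddtrace.contrib use this class.
logging.setLoggerClass(TelemetryLoggerSketch)
log = logging.getLogger("ddtrace.contrib.example")
```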
Jira ticket: AIDM-389
Checklist
Reviewer Checklist