Cannot test all fields of a Record
#5192
Comments
It is possible when constructing the "full" SDK logs pipeline (logger provider and simple processor). It has been designed this way to ensure encapsulation of the data that is supposed to be set by the SDK. Yet, I see that it can be inconvenient for testing. The same problem would exist for testing custom processors. Notice we do not want the processor to be able to modify the resource nor the scope.
Are you saying any external package that wants to test its transforms needs to install a full SDK? That seems less than ideal.
Sure, but we don't want processors to modify anything without cloning it, and they are still able to do that. This may be a sign that our API is not providing the interface we want.
It might make sense to migrate to something like […]. We could have a […].
I say that this is the current state. Thus this issue is not a bug but an enhancement.
I agree.
There is a missing set of fundamental functionality in the logs SDK. This deficit is why it is classified and tagged as a bug. Adding additional, enhancing functionality is not what this issue is about. It is asking to provide functionality that makes this package fundamentally usable.
I think we could do something similar to #5195. The name could additionally warn that it should be used only for testing exporters and processors, like […]. My other idea is providing […].
Using names of types to try to restrict use has a bad smell to it. It does not use the semantics of the Go language to enforce the intended restrictions. It relies on users knowing what a […] is. We could just as easily add methods to the […]. I would rather explore changes to the Record API itself, to see if we can design it in a way that supports the use we want to promote and allows the functionality users will need from the API.
Maybe it would be better to have such functionality under sdk/log/logtest. I believe that @MrAlias also mentioned this idea during the SIG meeting. I think we could do this by copying record.go and record_test.go to logtest and adding SetResource and SetInstrumentationScope (and tests) in separate files (e.g. testrecord.go and testrecord_test.go). It is possible to convert between types that have exactly the same fields. See: https://go.dev/play/p/rMq8vtYDx5O. From: #5200 (comment) EDIT: This technique does not work across packages if there are unexported fields. #5215
I unassigned myself, as currently I think I have no good ideas on how to follow up.
The only idea I currently have is to expand https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/sdk.md#readwritelogrecord with something more or less like […].
However, I think it may be in conflict with open-telemetry/opentelemetry-specification#3902 (comment). On the other hand, there is precedent from the C++ Logs SDK. @MrAlias, thoughts?
There will also be problems with limits, dropped attributes, and attribute deduplication. I am starting to feel that it is better (more trustworthy) to test custom processors and exporters by using them with a logger provider created in the test.
Notes from talk with @pellared
More things blocked by not allowing a way to set a record's limits: #5230 |
The Record has both methods to return the Resource and Scope it was created with by the SDK. However, when testing from export packages (i.e. otlploghttp) it is not possible to set these values. Therefore, it is not possible to test real data from multiple sources in an exporter. We are going to need to export a way to construct a Record with these values or add "Set" methods for them. This will also apply to limits and the DroppedAttributes method when #5190 merges.