Publish relevant crate names #5
Draft: r1viollet wants to merge 45 commits into `main` from `r1viollet/publish_crates`
Conversation
* Deliver shared library alongside static library
* Add exporter example
* Activate LTO
* Optimize for size instead of speed
* Reduce number of codegen units to further reduce output library size
* Make dynamic linking the default for libddprof_ffi.pc and provide libddprof_ffi-static.pc for static linking
* Remove RPATH/RUNPATH from cdylib on alpine-musl

Rust adds some weird RPATH/RUNPATH to cdylibs on alpine-musl (cf. https://git.alpinelinux.org/aports/tree/community/rust/alpine-target.patch, https://git.alpinelinux.org/aports/tree/community/rust/need-rpath.patch).
* Bump version to 0.4.0-rc.1
… macOS (#26)

* Upgrade to latest version of `standard` gem
* Cleanup old versions from Rakefile

We don't support using an up-to-date `Rakefile` to release a version other than the current one in `version.rb`, so it doesn't make sense to keep old versions around.

* Avoid packaging a few more useless files
* Refactor `Rakefile` to avoid repetition, add experimental support for macOS
* Add mechanism to exclude some libddprof tarball files from package
* Package libddprof shared library only, ignore static library

This is still WIP, since the 0.3.0 release doesn't actually ship a shared library.

* Code reformatter fixes
* Tweak experimental macOS packaging
* Add pkg-config variant that sets rpath

The `ddprof_ffi_with_rpath.pc` includes the linker flags `-Wl,-rpath,${libdir}` so that libddprof can be linked to and used in the directory it gets installed to. Without this, we would need to set the same flags on the libddprof user side, or use `LD_LIBRARY_PATH` or some other means to tell the OS how to find libddprof. These two links helped me figure this out:

* https://amir.rachum.com/blog/2016/09/17/shared-libraries/
* mesonbuild/meson#4027

Also included is a change to `ffi-build.sh` so that the macOS dynamic library is set up to be used correctly with rpath.
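A pkg-config variant along the lines described above might look like this; only the `-Wl,-rpath,${libdir}` flag is taken from the commit message, while the paths, version, and description are illustrative placeholders, not the actual shipped file:

```
# Hypothetical sketch of ddprof_ffi_with_rpath.pc (placeholder values)
prefix=/opt/libddprof
libdir=${prefix}/lib
includedir=${prefix}/include

Name: ddprof_ffi_with_rpath
Description: libddprof FFI (dynamic linking, rpath set to libdir)
Version: 0.0.0
Libs: -L${libdir} -lddprof_ffi -Wl,-rpath,${libdir}
Cflags: -I${includedir}
```

With the rpath baked into the link flags, binaries linked via `pkg-config --libs ddprof_ffi_with_rpath` can locate the shared library at runtime without `LD_LIBRARY_PATH`.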
By shipping the debug information in a separate file, downstream packages (e.g. the Ruby packages) can easily remove the debug information while packaging libddprof, but it's still very easy to restore it. This approach is documented in <https://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html> and in the `objdump` man pages. Thanks @nsavoire for the tips. The `strip`'d file retains the `build-id` information. I've tested with `gdb` and it seems to work:

```
$ gdb libddprof_ffi.so
GNU gdb (Ubuntu 11.1-0ubuntu2) 11.1
Reading symbols from libddprof_ffi.so...
Reading symbols from /working/build-linux-amd64/lib/libddprof_ffi.so.debug...
(gdb) info functions
All defined functions:

File /cargo/registry/src/github.com-1ecc6299db9ec823/addr2line-0.13.0/src/lib.rs:
966:  static fn addr2line::Function::parse_children<gimli::read::endian_slice::EndianSlice<gimli::endianity::LittleEndian>>();
549:  static fn addr2line::ResUnit::parse_functions<gimli::read::endian_slice::EndianSlice<gimli::endianity::LittleEndian>>();
480:  static fn addr2line::ResUnit::parse_lines<gimli::read::endian_slice::EndianSlice<gimli::endianity::LittleEndian>>();
660:  static fn addr2line::name_attr<gimli::read::endian_slice::EndianSlice<gimli::endianity::LittleEndian>>();
1162: static fn addr2line::{{impl}}::for_each_range::{{closure}}<gimli::read::endian_slice::EndianSlice<gimli::endianity::LittleEndian>,closure-0>();

File /cargo/registry/src/github.com-1ecc6299db9ec823/compiler_builtins-0.1.32/src/macros.rs:
262: fn compiler_builtins::int::udiv::__udivmodti4();
269: fn compiler_builtins::int::udiv::__umodti3::__umodti3();
...etc...
```
and if the `.debug` file is deleted:

```
(gdb) info functions
All defined functions:

Non-debugging symbols:
0x0000000000075d00  _init
0x0000000000075d30  __stack_chk_fail@plt
0x0000000000075d40  __tls_get_addr@plt
0x0000000000075d50  __fxstat64@plt
0x0000000000075d60  __gmon_start__@plt
0x0000000000075d70  _Unwind_Resume@plt
0x0000000000075d80  __cxa_finalize@plt
0x0000000000075dc0  deregister_tm_clones
0x0000000000075df0  register_tm_clones
0x0000000000075e30  __do_global_dtors_aux
0x0000000000075e70  frame_dummy
0x0000000000075ea5  std::error::<impl core::convert::From<E> for alloc::boxed::Box<dyn std::error::Error>>::from
0x0000000000075edd  std::error::Error::description
0x0000000000075eea  std::error::Error::cause
...etc...
```

This PR is an alternative to #28, which instead shipped two copies of the shared library binaries (one `strip`'d, and an untouched one).
… reporting of external data using libddprof_ffi (#30)

* RFC: Introduce `ddprof_ffi_Buffer_from_byte_slice` function to enable reporting of external data using libddprof_ffi

# Context:

The `ProfileExporterV3` high-level API enables libddprof_ffi callers to report profiling data without needing to know anything about the backend API. The current signature of `build` (used to prepare a request) is:

```rust
#[export_name = "ddprof_ffi_ProfileExporterV3_build"]
pub unsafe extern "C" fn profile_exporter_build(
    exporter: Option<NonNull<ProfileExporterV3>>,
    start: Timespec,
    end: Timespec,
    files: Slice<File>,
    timeout_ms: u64,
) -> Option<Box<Request>>
```

In particular, the payload data to be reported (usually pprof and JSON data) is provided using a `File`, which is defined as:

```rust
#[repr(C)]
pub struct File<'a> {
    name: ByteSlice<'a>,
    file: Option<NonNull<Buffer>>,
}
```

and further, a buffer is represented as:

```rust
/// Buffer holds the raw parts of a Rust Vec; it should only be created from
/// Rust, never from C.
#[repr(C)]
pub struct Buffer {
    ptr: *const u8,
    len: size_t,
    capacity: size_t,
}
```

It follows from the required usage of `Buffer` that libddprof_ffi callers can only provide `Buffers` that were created by libddprof_ffi. The only way a libddprof_ffi caller can get a `Buffer` containing profiling data is through `ddprof_ffi_Profile_serialize`, which returns an `EncodedProfile` containing a `Buffer` with the encoded data. Thus, libddprof_ffi callers **cannot provide their own data** to be reported, e.g. to report a pprof that was encoded using different means (for instance, if the runtime provided it), or to report the `code_provenance.json` or `metrics.json` extra files.

# Use case:

My use case is the integration of libddprof_ffi into the Ruby profiler:

1. I want to make this a staged introduction, so in the first version I will only be using libddprof_ffi to report data, not to encode it
2.
Ruby is already reporting `code_provenance.json` and this should continue to be reported even after moving to libddprof_ffi

# Alternatives considered:

I considered two main approaches to this problem:

1. Introduce a new `ddprof_ffi_Buffer_from_byte_slice` API. This API copies a `ByteSlice` (an array of bytes) into a `Buffer`, which we can then happily use. Advantage: simpler, as no other APIs need to change to support this. Disadvantage: introduces an extra copy of the data (even if temporary).
2. a) Modify `profile_exporter_build` to receive a `ByteSlice` inside `File`, not a `Buffer`. Advantage: resulting API simpler and more concise to use; avoids the extra copy from (1). Disadvantage: involves several breaking API changes.
   b) Leave the existing `profile_exporter_build` API, but add an extra one to receive `ByteSlice`s. Advantage: no breaking API changes; avoids the extra copy from (1). Disadvantage: adds complexity and possibly duplication.

In this PR, I decided to go with (1), as it seems to me the extra copy is not that problematic.

# Additional Notes:

This is my first time writing Rust code so please DO suspect that EVERYTHING IS OFF AND THAT THIS IS A TERRIBLE MISTAKE. And then help me figure out how to improve it :) This writeup may be a bit overkill for this small change, but this is 20% sharing with the rest of the team what I'm doing and 80% working through this myself and making sure I understand what's up :)

# How to test the change:

This is a new API; my plan is to integration-test it on the dd-trace-rb side: write a test that validates that the correct HTTP requests are done, with the correct payloads.

* Test that `byte_slice_to_buffer_conversion` copies input data
* Update and wrap comment

I try to keep comments and code to 80 columns despite Rust's preference for the 100-column limit. I don't make code worse just to make 80 cols, but for comments it's pretty easy.

Co-authored-by: Levi Morrison <[email protected]>
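The copying conversion chosen in (1) can be sketched in plain Rust. Names below are illustrative stand-ins, not the real `ddprof_ffi` symbols: `Vec<u8>` stands in for `Buffer`, and the point is only that the borrowed input is copied into an independently owned allocation.

```rust
// Illustrative sketch: `buffer_from_byte_slice` stands in for the real
// `ddprof_ffi_Buffer_from_byte_slice`; `Vec<u8>` stands in for `Buffer`.
fn buffer_from_byte_slice(slice: &[u8]) -> Vec<u8> {
    // The extra copy discussed above: the returned buffer owns its own
    // allocation, independent of the caller's memory.
    slice.to_vec()
}

fn main() {
    let caller_data = b"pprof bytes".to_vec();
    let buffer = buffer_from_byte_slice(&caller_data);
    drop(caller_data); // safe: `buffer` holds its own copy
    assert_eq!(buffer, b"pprof bytes");
}
```

Because the buffer owns a copy, the caller may free its original memory as soon as the call returns, which is exactly the simplicity-for-a-copy trade-off described above.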
Currently, Java uses microsecond precision for uploaded profile start/end, whereas Ruby, Python, Go and libddprof use second precision. While discussing with @flodav he made a good point that just truncating the start and end would mean that we could induce quite a bit of error. I decided to bump it to nanos, since we have it anyway. Open to change, if it's overkill :)
…er instead of a ByteSlice (#33)

* [PROF-4780] Breaking change: Change FFI File struct to contain a Buffer instead of a ByteSlice

In #30 we added the `ddprof_ffi_Buffer_from_byte_slice` function to allow FFI users to provide their own data to be reported using the `ProfileExporterV3` (via `ddprof_ffi_ProfileExporterV3_build`). Thus, to report some data, one first converted it from a `ByteSlice` into a `Buffer`, and then invoked `libddprof` with it. Here's a (simplified) example from the Ruby profiler:

```c
ddprof_ffi_File files[1];
ddprof_ffi_Slice_file slice_files = {.ptr = files, .len = 1};

ddprof_ffi_Buffer *pprof_buffer =
  ddprof_ffi_Buffer_from_byte_slice((ddprof_ffi_ByteSlice) {
    .ptr = /* byte array with data we want to send */,
    .len = /* ... */
  });

files[0] = (ddprof_ffi_File) {.name = /* ... */, .file = pprof_buffer};

ddprof_ffi_Request *request = ddprof_ffi_ProfileExporterV3_build(
  exporter, start, finish, slice_files, timeout_milliseconds);

ddprof_ffi_Buffer_free(pprof_buffer);
```

This approach had a few downsides:

1. It copied the data to be reported twice. It would first be copied into a `Buffer`, and then inside `ddprof_ffi_ProfileExporterV3_build` it would be copied again.
2. **Callers manually needed to clean up the `Buffer` afterwards using `ddprof_ffi_Buffer_free`**.

After discussing this with @morrisonlevi, we decided to go the other way: change the `File` to contain a `ByteSlice` directly. This avoids the extra copy in (1), as well as the caller needing to manage the `Buffer` objects manually in (2). This is a breaking API change for libddprof FFI users.

* Fix exporter.cpp example compilation by enabling C++11
* Update exporter.cpp example with new `File` struct API
* Use target_compile_features instead of cmake_cxx_standard to enable C++11
* Update examples/ffi/exporter.cpp

Minor cleanup as suggested during review.

Co-authored-by: Nicolas Savoire <[email protected]>
* Switch the http client from reqwest to hyper
* Create a Connector that can handle unix socket streams
* Add rustls hyper support
* Rename Client to HttpClient to work around cbindgen
* Better error message when the request times out
* Add tests for uds handling
* Skip uds feature on non-unix systems because tokio doesn't support it
* Switch the multipart implementation to hyper-multipart-rfc7578
* Enable tls1.2 on rustls

Co-authored-by: Levi Morrison <[email protected]>
…PIs, and adjust Slices (#34)

* Fix clippy lints and expose dependencies of API

hyper::Uri and chrono::{DateTime, Utc} are used in the public Rust API, so we need to `pub use` them so consumers can use these types and for them to be compatible.

* Update for Rust 1.56
* Downgrade ux to 0.1.3 to avoid const new (0.1.4 uses const new, which requires Rust 1.57+)
* Fix 3rdparty license paths and clippy lint
* Fix license file
* Remove unused Exporter APIs and adjust Slices (#35)

CharSlice -> intended to be a valid utf-8 string.
ByteSlice -> not intended to be a valid utf-8 string (like binary data).

* Fix quoting for CMake
* Fix exporter example, address CR
* Add helper for creating CharSlice from a string literal

It looks like everyone that uses libddprof has a variant of this macro, so I think it makes sense to just have it in `ffi.h`. This specific implementation is built to only work with string literals, aka there's no accidental/unexpected `strlen` going on.

* Improve error message when using ddprof_ffi_CharSlice_from_literal

This macro relies on a weird trick to fail compilation when you pass in a char* and not a literal (the trick is that `char *foo = "" "foo";` is valid but `char *bar; char *foo = "" bar;` is not), but the error message would be kinda cryptic. Since most compilers show the macro when the compilation fails, I've added a nice comment which gets shown by compilers:

```
foo.c: In function 'endpoint_from':
foo.c:133:27: error: expected '}' before 's'
   byte_slice_from_literal(s);
                           ^
foo.c:15:113: note: in definition of macro 'byte_slice_from_literal'
   /* NOTE: Compilation fails if you pass in a char* instead of a literal */ ((ddprof_ffi_CharSlice) {.ptr = "" string, .len = sizeof(string) - 1})
                                                                                                                ^~~~~~
foo.c:15:91: note: to match this '{'
   /* NOTE: Compilation fails if you pass in a char* instead of a literal */ ((ddprof_ffi_CharSlice) {.ptr = "" string, .len = sizeof(string) - 1})
```

* Change name to DDPROF_FFI_CHARSLICE_C

Co-authored-by: Levi Morrison <[email protected]>
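For comparison, the same literal-only guarantee can be approximated on the Rust side with a `macro_rules!` matcher that only accepts literals. This is a hypothetical sketch for illustration, not part of `ffi.h`; the `CharSlice` struct below is a stand-in for the generated FFI type:

```rust
// Hypothetical Rust analogue of DDPROF_FFI_CHARSLICE_C: the `literal` matcher
// rejects non-literal arguments at compile time, and the length comes from
// the literal itself, so there is no runtime strlen.
#[derive(Debug)]
struct CharSlice {
    ptr: *const u8,
    len: usize,
}

macro_rules! charslice_from_literal {
    ($s:literal) => {
        CharSlice {
            ptr: $s.as_ptr(),
            len: $s.len(),
        }
    };
}

fn main() {
    let s = charslice_from_literal!("hello");
    assert_eq!(s.len, 5);
    assert!(!s.ptr.is_null());
    // charslice_from_literal!(some_variable) would fail to compile.
}
```

As with the C macro, passing anything other than a literal is rejected by the compiler rather than silently invoking a length computation at runtime.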
Ugh, I forgot to check that CI was clean before merging a last-minute change to the PR.
…ge strategy (#39)

* Switch Ruby to use ddprof_ffi_with_rpath.pc

In a previous commit I added the code to generate `ddprof_ffi_with_rpath.pc` but forgot to update the Ruby helpers to use that file. (My testing did not flag this because my initial plan was to change `ddprof_ffi.pc` directly to add the rpath flags, and the rename came late in the branch.) I will include a matching change on the dd-trace-rb side.

* Package libddprof 0.5.0.rc1, including arm64 (aarch64) binaries
* Exclude libddprof debug info from packaged gems
* Simplify files_for helper
* Actually push built arm64 gem
* Change fallback package with no platform to contain all binaries

See comment in `Rakefile` for details on this decision.

* Update Cargo.lock for 0.5.0-rc.1
* Add Vec type, remove Buffer, add SendResult_drop

The Vec type is a generalization of Buffer. In particular, this PR exposes an API to create and manage Vec<Tag> and replaces existing uses of Buffer with Vec<u8>. It includes a helper to parse DD_TAGS as well. One of the APIs, ddprof_ffi_Buffer_reset, was removed. Previously this was required when using the failure case of the SendResult type, but now there is ddprof_ffi_SendResult_drop, which should be used instead.

* Add #[must_use] to some FFI functions
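DD_TAGS is conventionally a comma-separated list of `key:value` pairs. A rough standalone sketch of such a parser (not the actual libddprof implementation, which has its own validation rules) might look like:

```rust
// Rough sketch of DD_TAGS-style parsing; treat this as an approximation of
// the helper described above, not the real libddprof code.
fn parse_tags(input: &str) -> Vec<(String, String)> {
    input
        .split(',')
        .filter_map(|pair| {
            let mut parts = pair.splitn(2, ':');
            match (parts.next(), parts.next()) {
                // Keep only entries with a non-empty key and value.
                (Some(key), Some(value))
                    if !key.trim().is_empty() && !value.trim().is_empty() =>
                {
                    Some((key.trim().to_string(), value.trim().to_string()))
                }
                _ => None,
            }
        })
        .collect()
}

fn main() {
    let tags = parse_tags("env:prod, service:web,malformed");
    assert_eq!(
        tags,
        vec![
            ("env".to_string(), "prod".to_string()),
            ("service".to_string(), "web".to_string()),
        ]
    );
}
```

Entries without a `:` separator (like `malformed` above) are simply dropped; whether the real helper drops or reports such entries is up to its own validation.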
* Return SerializeResult for serializing a profile

This removes the direct `eprintln` usage and allows the caller to choose what to do with it instead. Adds these FFI functions:

- `ddprof_ffi_SerializeResult_drop`
- `ddprof_ffi_Vec_u8_as_slice` (the EncodedProfile has a Vec<u8>, but the struct ddprof_ffi_File requires a Slice<u8>)

And removes `ddprof_ffi_EncodedProfile_delete`.

* Clean up exporter
* Add #[must_use] to ddprof_ffi_Vec_u8_as_slice
* GitHub CI lint and tests
* Fix missing licenses
This started with enhancing tags, but when I was done I discovered a segfault (SIGSEGV) which could be triggered even from Rust code. I knew I had screwed up the lifetimes and unsafe code. So, I started working and I kept unravelling more and more. Slice and vec have been split into their own files, as lib.rs was getting large. Many From traits were changed to work with &T instead of T because of lifetime issues. In this PR, some C FFI APIs for tags have been removed. A subsequent PR will add them back and enhance them; I wanted to keep the PR size somewhat manageable. Some places using some form of string have changed to use `Cow<'static, str>`. This allows you to borrow static strings, and own all others. When calling from C, the difference is very small because they _should_ have been copied (probably; it was unsafe if not). I believe these are the changes which actually fixed the crash.
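The `Cow<'static, str>` pattern mentioned above can be illustrated with a small standalone sketch (the function and names are made up for illustration): static strings are borrowed at zero cost, while strings that arrive at runtime, e.g. copied in across the C boundary, are owned.

```rust
use std::borrow::Cow;

// Illustrative only: static strings are borrowed for free; runtime strings
// (such as data copied in over FFI) must be copied into owned storage.
fn service_name(from_caller: Option<&str>) -> Cow<'static, str> {
    match from_caller {
        // Runtime data: copy it so the result owns its storage and is not
        // tied to the caller's (possibly C-owned) lifetime.
        Some(name) => Cow::Owned(name.to_string()),
        // Static fallback: no allocation needed.
        None => Cow::Borrowed("unnamed-service"),
    }
}

fn main() {
    assert!(matches!(service_name(None), Cow::Borrowed(_)));
    assert!(matches!(service_name(Some("web")), Cow::Owned(_)));
    assert_eq!(service_name(Some("web")), "web");
}
```

The `'static` bound is what forces the owned copy for runtime data: a borrowed `&str` from C could not satisfy it, which is how this type helps rule out the lifetime bugs described above.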
* Proposal for refactoring of https connector to make root certificates optional
* Tentative fix (#48) (Co-authored-by: Nicolas Savoire <[email protected]>)
* Return error when HTTPS cannot be used but is requested
* Correctly handle error when no root certificates are found
* Use latest common multipart released package
* Cleanup implementation + add tests
* Fix linting errors
* Enable matrix build and test on all expected platforms
* Rename MaybeHttpsConnector to Connector
* Fix compilation on Windows
* Make pin_project_lite usage compile on windows
* Impl own pin project
* Settle on using pin_project

Co-authored-by: r1viollet <[email protected]>
Co-authored-by: Nicolas Savoire <[email protected]>
Co-authored-by: Ivo Anjo <[email protected]>
* Fix compilation of main branch

Invert order of debug and pin_project macros to fix compilation issue on Rust 1.56.

* Bump version to 0.6.0-rc.1
* Fix license check requirements
* Fix typos in error constant and message

* WIP: Asynchronous cancellation for ddprof_ffi_ProfileExporterV3_send

By default, Ruby (and Java) set a profile export timeout of 30 seconds, AKA in the worst case a call to `ddprof_ffi_ProfileExporterV3_send` can block for that amount of time. For Ruby in particular, we want to call `ddprof_ffi_ProfileExporterV3_send` from a Ruby thread, but there needs to be a way to interrupt a long-running `send` in case the application wants to exit (e.g. the user pressed ctrl+c). Without a way to interrupt a `send`, the Ruby VM will hang until the timeout is hit. To fix this, we make use of tokio's `CancellationToken`: this concurrency utility is built so that it can be used to signal from one thread to another that we want to return early. **NOTE**: I'm marking this PR as WIP since I need some help with the lifetime of the `CancellationToken`: right now it's getting dropped by the functions that use it, whereas it should only be dropped when `ddprof_ffi_CancellationToken_drop` manually does it.

* Simplify `send` function

* Fix crashes when using CancellationToken and add example

The insight needed to fix my first attempt at implementing cancellation is that the `Box` type automatically drops whatever it was pointing to when the current scope ends. Thus, to leak something out of Rust's control, we need to use `Box::into_raw(Box::new(ThingToBeLeaked()))`, which returns a `*mut ThingToBeLeaked`, aka a raw, unsafe pointer. Then, to borrow that something for use, we can use `unsafe { raw_pointer.as_ref() }` to get a `&ThingToBeLeaked` back. Finally, when we want to free that thing again, we can turn it back into a box; here I'm doing it implicitly, but one other way is to use `Box::from_raw`. Then `drop` can be used again, explicitly or implicitly.
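The leak/borrow/free cycle described above can be sketched with a stand-in type. `Token` below is illustrative, not the real tokio-util `CancellationToken`, and the function names are hypothetical analogues of the FFI entry points:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Stand-in for the real CancellationToken type.
struct Token {
    cancelled: AtomicBool,
}

// "new": leak the Box so Rust stops managing the value's lifetime.
fn token_new() -> *mut Token {
    Box::into_raw(Box::new(Token { cancelled: AtomicBool::new(false) }))
}

// "borrow": temporarily view the leaked value without taking ownership back.
unsafe fn token_cancel(token: *mut Token) {
    if let Some(token) = token.as_ref() {
        token.cancelled.store(true, Ordering::SeqCst);
    }
}

// "drop": rebuild the Box so the destructor runs exactly once.
unsafe fn token_drop(token: *mut Token) {
    if !token.is_null() {
        drop(Box::from_raw(token));
    }
}

fn main() {
    let token = token_new();
    unsafe {
        token_cancel(token);
        assert!((*token).cancelled.load(Ordering::SeqCst));
        token_drop(token); // after this, `token` is dangling
    }
}
```

The key property is that ownership crosses the FFI boundary exactly twice: once out of Rust via `Box::into_raw`, and once back in via `Box::from_raw`; every use in between is a borrow, never a second owner.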
* Remove irrelevant comment from ffi header
* Run `cargo fmt`
* Minor tweak to comment
* Make linter happy
* Add CancellationToken for sending HTTP requests
* Fix format
* Fix name on Windows
* Make `CancellationToken` optional when calling `Request::send`
* Introduce `ddprof_ffi_CancellationToken_clone` and update example to use it

A cloned CancellationToken is connected to the CancellationToken it was created from. Either the cloned or the original token can be used to cancel, or provided as an argument to send. The useful part is that they have independent lifetimes and can be dropped separately. Thus, it's possible to do something like:

```c
cancel_t1 = ddprof_ffi_CancellationToken_new();
cancel_t2 = ddprof_ffi_CancellationToken_clone(cancel_t1);

// On thread t1:
ddprof_ffi_ProfileExporterV3_send(..., cancel_t1);
ddprof_ffi_CancellationToken_drop(cancel_t1);

// On thread t2:
ddprof_ffi_CancellationToken_cancel(cancel_t2);
ddprof_ffi_CancellationToken_drop(cancel_t2);
```

Without clone, both t1 and t2 would need to synchronize to make sure neither was using the cancel before it could be dropped. With clone, there is no need for such synchronization; both threads have their own cancel and should drop that cancel after they are done with it.

Co-authored-by: Levi Morrison <[email protected]>
Following the merge of PR #45, the license file requires an update.
…ge interface implementation (#50)

* Import simple telemetry implementation without native_deps and language interface implementation
* Add License header to all new source files
* Move code from info module into a single file
* Implement ddcommon and rename ddtelemetry
* Fix testcases for ddcommon container_id
* Add missing licence files
* Add missing copyright headers
* Auto format new cargo.tomls
* Update 3rd party license file
* Allow mutex_atomic clippy lint (motivation: rust-lang/rust-clippy#4295)
* Add more lints to correspond with existing build system
* Remove empty trailing lines
* Fix cargo home directory in LICENSE-3rdparty file
* Export new license file on check fail

Co-authored-by: paullegranddc <[email protected]>
* Initial version of CODEOWNERS
* Update CODEOWNERS with new group names
* Update CODEOWNERS
Almost all other results:

* `NewProfileExporterV3Result`
* `SerializeResult`
* `PushTagResult`

expose a `...Result_Tag` named `..._RESULT_ERR` and an `err` field inside their definition, so it's weird to use `failure` just for `SendResult`, which otherwise looks exactly like all the others. Here's how `ffi.h` looks after this change:

```diff
 typedef enum ddprof_ffi_SendResult_Tag {
   DDPROF_FFI_SEND_RESULT_HTTP_RESPONSE,
-  DDPROF_FFI_SEND_RESULT_FAILURE,
+  DDPROF_FFI_SEND_RESULT_ERR,
 } ddprof_ffi_SendResult_Tag;

 typedef struct ddprof_ffi_SendResult {
@@ -150,7 +150,7 @@
     struct ddprof_ffi_HttpStatus http_response;
     struct {
-      struct ddprof_ffi_Vec_u8 failure;
+      struct ddprof_ffi_Vec_u8 err;
     };
   };
 } ddprof_ffi_SendResult;
```

The `ParseTagsResult` is still inconsistent (uses `error_message`), but that one is inconsistent in other ways (doesn't expose a tag, uses a pointer to a Vec_u8), so I think it's not a bad idea that `error_message` is different, as it needs to be handled differently.
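On the Rust side, a tag-plus-union result like this corresponds to a plain enum. The sketch below is a simplified model for illustration: the variant payload types (`u16`, `Vec<u8>`) are stand-ins for the real `HttpStatus` and `Vec_u8` types, and cbindgen is what produces the C tag/union layout shown in `ffi.h`:

```rust
// Simplified model of the SendResult shape above; payload types are
// illustrative stand-ins, not the real FFI types.
enum SendResult {
    HttpResponse(u16), // stand-in for ddprof_ffi_HttpStatus
    Err(Vec<u8>),      // stand-in for ddprof_ffi_Vec_u8
}

fn describe(result: &SendResult) -> String {
    match result {
        SendResult::HttpResponse(code) => format!("http {}", code),
        SendResult::Err(message) => String::from_utf8_lossy(message).into_owned(),
    }
}

fn main() {
    assert_eq!(describe(&SendResult::HttpResponse(202)), "http 202");
    assert_eq!(
        describe(&SendResult::Err(b"request timed out".to_vec())),
        "request timed out"
    );
}
```

Naming the error variant `Err` on the Rust side is what yields the consistent `..._RESULT_ERR` tag and `err` field in the generated header.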
I'm following the instructions in <ruby/README.md> on how to package newer versions of libddprof for release. I haven't actually been pushing every release to rubygems.org, since the PR on the Ruby side that will pull it in is still unmerged (<DataDog/dd-trace-rb#1936>) and thus there's no need to litter up the releases list. Nevertheless, I like being ready to release, so I decided to bump this version anyway.
bwoebi added a commit that referenced this pull request on May 24, 2024:
It apparently leads to race conditions if libpthread and libc aren't loaded at the same time. In this case a library linking against libpthread is dlopen()'ed dynamically from the trampoline. It led to interesting libc memory corruptions, like in getaddrinfo:

```
#2  0x00007fe45328df67 in __libc_message (do_abort=do_abort@entry=2, fmt=fmt@entry=0x7fe4533a05d0 "*** Error in `%s': %s: 0x%s ***\n") at ../sysdeps/unix/sysv/linux/libc_fatal.c:196
#3  0x00007fe453296329 in malloc_printerr (ar_ptr=0x7fe4535dc760 <main_arena>, ptr=<optimized out>, str=0x7fe4533a06d8 "double free or corruption (out)", action=3) at malloc.c:4967
#4  _int_free (av=0x7fe4535dc760 <main_arena>, p=<optimized out>, have_lock=0) at malloc.c:3843
#5  0x00007fe453283247 in _IO_new_fclose (fp=0x7fe448001d20) at iofclose.c:84
```

etc. on older glibc versions.

Signed-off-by: Bob Weinand <[email protected]>
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. To override this behavior, add the keep-open label or update the PR.
github-actions (bot) added the `stale` label (used by actions/stale to identify PRs that have been inactive for 90+ days) on Oct 27, 2024.
ivoanjo added the `keep-open` label (overrides actions/stale auto-closing stale PRs) and removed the `stale` label on Oct 28, 2024.
Added keep-open for this one; not sure exactly what we want to do with it, but I don't think we should close it at this point.
What does this PR do?
Publish all the relevant crates to crates.io
This branch can be useful to maintain comments / remember which crates have been secured.
Motivation
Securing the relevant crates for Datadog.
Additional Notes
We do not need to merge this PR.
How to test the change?
Check the crates here