IPEX v2.5.10+xpu pytorch-triton-xpu instructions does not work #754

Open
simonlui opened this issue Dec 21, 2024 · 4 comments
Describe the bug

According to https://intel.github.io/intel-extension-for-pytorch/xpu/2.5.10+xpu/tutorials/known_issues.html, the workaround for installing Triton for IPEX is the following command.

# Install correct version of pytorch-triton-xpu
pip install --pre pytorch-triton-xpu==3.1.0+91b14bf559  --index-url https://download.pytorch.org/whl/nightly/xpu

However, after installing that package and trying to run torch.compile, this is the truncated output.

ImportError: libsycl.so.7: cannot open shared object file: No such file or directory

The above exception was the direct cause of the following exception:
...
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
ImportError: libsycl.so.7: cannot open shared object file: No such file or directory

It seems like this command only worked with oneAPI Base Toolkit 2024 components. The oneAPI Base Toolkit 2025 components that IPEX v2.5.10+xpu installs use libsycl.so.8 instead of libsycl.so.7, as can be seen by running ldd on sycl-ls:

❯ ldd /opt/intel/oneapi/2025.0/bin/sycl-ls
...
        libsycl.so.8 => /opt/intel/oneapi/compiler/2025.0/lib/libsycl.so.8 (0x00007f017b600000)
...
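As a quick way to confirm which libsycl major version a binary actually requires, the SONAMEs can be pulled out of ldd output programmatically. A minimal sketch, assuming Linux-style ldd output; the helper name is mine, and the sample line is taken from the listing above:

```python
import re

def libsycl_sonames(ldd_output: str) -> set:
    """Return the set of libsycl SONAMEs (e.g. 'libsycl.so.8') mentioned in ldd output."""
    return set(re.findall(r"libsycl\.so\.\d+", ldd_output))

# Sample line from `ldd /opt/intel/oneapi/2025.0/bin/sycl-ls` above.
sample = "libsycl.so.8 => /opt/intel/oneapi/compiler/2025.0/lib/libsycl.so.8 (0x00007f017b600000)"
print(libsycl_sonames(sample))  # {'libsycl.so.8'}
```

Running the same parse over `ldd` output for the pytorch-triton-xpu shared objects would show the mismatch directly: the wheel wants libsycl.so.7 while the 2025.0 toolkit ships libsycl.so.8.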

So I am pretty sure the instructions are incorrect here, or I am doing something wrong. For reference, if you install the latest pytorch-triton-xpu by leaving out the version pin, you get the following backtrace instead.

!!! Exception during processing !!! must be called with a dataclass type or instance
Traceback (most recent call last):
  File "/home/simonlui/Code_Repositories/ComfyUI/execution.py", line 328, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simonlui/Code_Repositories/ComfyUI/execution.py", line 203, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simonlui/Code_Repositories/ComfyUI/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/home/simonlui/Code_Repositories/ComfyUI/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simonlui/Code_Repositories/ComfyUI/comfy_extras/nodes_torch_compile.py", line 17, in patch
    m.add_object_patch("diffusion_model", torch.compile(model=m.get_model_object("diffusion_model"), backend=backend))
                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simonlui/.conda/envs/comfyui/lib/python3.12/site-packages/torch/__init__.py", line 2447, in compile
    return torch._dynamo.optimize(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simonlui/.conda/envs/comfyui/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 716, in optimize
    return _optimize(rebuild_ctx, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simonlui/.conda/envs/comfyui/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 790, in _optimize
    compiler_config=backend.get_compiler_config()
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simonlui/.conda/envs/comfyui/lib/python3.12/site-packages/torch/__init__.py", line 2237, in get_compiler_config
    from torch._inductor.compile_fx import get_patched_config_dict
  File "/home/simonlui/.conda/envs/comfyui/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 49, in <module>
    from torch._inductor.debug import save_args_for_compile_fx_inner
  File "/home/simonlui/.conda/envs/comfyui/lib/python3.12/site-packages/torch/_inductor/debug.py", line 26, in <module>
    from . import config, ir  # noqa: F811, this is needed
    ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simonlui/.conda/envs/comfyui/lib/python3.12/site-packages/torch/_inductor/ir.py", line 77, in <module>
    from .runtime.hints import ReductionHint
  File "/home/simonlui/.conda/envs/comfyui/lib/python3.12/site-packages/torch/_inductor/runtime/hints.py", line 36, in <module>
    attr_desc_fields = {f.name for f in fields(AttrsDescriptor)}
                                        ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simonlui/.conda/envs/comfyui/lib/python3.12/dataclasses.py", line 1289, in fields
    raise TypeError('must be called with a dataclass type or instance') from None
TypeError: must be called with a dataclass type or instance
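This second failure reduces to `dataclasses.fields` being called on a class that is not a dataclass: with a mismatched Triton build, the `AttrsDescriptor` that torch's `hints.py` imports is apparently not the dataclass it expects. A minimal sketch of the failing pattern; `PlainAttrsDescriptor` is a stand-in of mine, not the real Triton class:

```python
from dataclasses import fields

class PlainAttrsDescriptor:
    """Stand-in for an AttrsDescriptor that is not defined as a @dataclass."""
    pass

try:
    # Mirrors hints.py: attr_desc_fields = {f.name for f in fields(AttrsDescriptor)}
    fields(PlainAttrsDescriptor)
except TypeError as e:
    print(e)  # must be called with a dataclass type or instance
```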

Versions

PyTorch version:   2.5.1+cxx11.abi
PyTorch CXX11 ABI: Yes
IPEX version:      2.5.10+xpu
IPEX commit:       90fdb70e7
Build type:        Release

OS:                Fedora Linux 40 (Workstation Edition) (x86_64)
GCC version:       (GCC) 14.2.1 20240912 (Red Hat 14.2.1-3)
Clang version:     18.1.8 (Fedora 18.1.8-1.fc40)
IGC version:       2025.0.4 (2025.0.4.20241205)
CMake version:     version 3.30.5
Libc version:      glibc-2.39

Python version:    3.12.5 | Intel Corporation | (main, Sep  9 2024, 23:35:37) [GCC 14.1.0] (64-bit runtime)
Python platform:   Linux-6.11.11-200.fc40.x86_64-x86_64-with-glibc2.39
Is XPU available:  True
DPCPP runtime:     2025.0
MKL version:       2025.0

GPU models and configuration onboard: 
N/A

GPU models and configuration detected: 
* [0] _XpuDeviceProperties(name='Intel(R) Arc(TM) A770 Graphics', platform_name='Intel(R) oneAPI Unified Runtime over Level-Zero', type='gpu', driver_version='1.3.30049.600000', total_memory=15473MB, max_compute_units=512, gpu_eu_count=512, gpu_subslice_count=32, max_work_group_size=1024, max_num_sub_groups=128, sub_group_sizes=[8 16 32], has_fp16=1, has_fp64=1, has_atomic64=1)

Driver version: 
* intel_opencl: 24.26.30049.6-1.fc40
* level_zero:   N/A

CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        48 bits physical, 48 bits virtual
Byte Order:                           Little Endian
CPU(s):                               32
On-line CPU(s) list:                  0-31
Vendor ID:                            AuthenticAMD
Model name:                           AMD Ryzen 9 5950X 16-Core Processor
CPU family:                           25
Model:                                33
Thread(s) per core:                   2
Core(s) per socket:                   16
Socket(s):                            1
Stepping:                             0
Frequency boost:                      enabled
CPU(s) scaling MHz:                   68%
CPU max MHz:                          5084.0000
CPU min MHz:                          550.0000
BogoMIPS:                             6800.15
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization:                       AMD-V
L1d cache:                            512 KiB (16 instances)
L1i cache:                            512 KiB (16 instances)
L2 cache:                             8 MiB (16 instances)
L3 cache:                             64 MiB (2 instances)
NUMA node(s):                         1
NUMA node0 CPU(s):                    0-31
Vulnerability Gather data sampling:   Not affected
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

Versions of relevant libraries:
[conda] common_cmplr_lib_rt       2025.0.0             intel_1169    https://software.repos.intel.com/python/conda
[conda] common_cmplr_lic_rt       2025.0.0             intel_1169    https://software.repos.intel.com/python/conda
[conda] dal                       2025.0.0              intel_957    https://software.repos.intel.com/python/conda
[conda] dpcpp-cpp-rt              2025.0.4                 pypi_0    pypi
[conda] dpcpp_cpp_rt              2025.0.0             intel_1169    https://software.repos.intel.com/python/conda
[conda] dpctl                     0.18.1                  py312_0    https://software.repos.intel.com/python/conda
[conda] dpnp                      0.16.0                  py312_0    https://software.repos.intel.com/python/conda
[conda] fortran_rt                2025.0.0             intel_1169    https://software.repos.intel.com/python/conda
[conda] icc_rt                    2024.2.1             intel_1100    https://software.repos.intel.com/python/conda
[conda] impi-devel                2021.14.0                pypi_0    pypi
[conda] impi-rt                   2021.14.0                pypi_0    pypi
[conda] impi_rt                   2021.14.0             intel_790    https://software.repos.intel.com/python/conda
[conda] intel-cmplr-lib-rt        2025.0.4                 pypi_0    pypi
[conda] intel-cmplr-lib-ur        2025.0.0             intel_1169    https://software.repos.intel.com/python/conda
[conda] intel-cmplr-lic-rt        2025.0.4                 pypi_0    pypi
[conda] intel-extension-for-pytorch 2.5.10+xpu               pypi_0    pypi
[conda] intel-fortran-rt          2025.0.0             intel_1169    https://software.repos.intel.com/python/conda
[conda] intel-gpu-ocl-icd-system  1.0.0                         1    https://software.repos.intel.com/python/conda
[conda] intel-opencl-rt           2025.0.4                 pypi_0    pypi
[conda] intel-openmp              2025.0.0             intel_1169    https://software.repos.intel.com/python/conda
[conda] intel-pti                 0.10.0                   pypi_0    pypi
[conda] intel-sycl-rt             2025.0.4                 pypi_0    pypi
[conda] intelpython               2025.0.0                      1    https://software.repos.intel.com/python/conda
[conda] ipp                       2022.0.0              intel_808    https://software.repos.intel.com/python/conda
[conda] llvm-spirv                14.0.0                        0    https://software.repos.intel.com/python/conda
[conda] mkl                       2025.0.1                 pypi_0    pypi
[conda] mkl-dpcpp                 2025.0.1                 pypi_0    pypi
[conda] mkl-service               2.4.2                   py312_0    https://software.repos.intel.com/python/conda
[conda] mkl_fft                   1.3.11          py312h3948073_81    https://software.repos.intel.com/python/conda
[conda] mkl_random                1.2.8           py312hd605fbb_101    https://software.repos.intel.com/python/conda
[conda] mkl_umath                 0.1.2           py312h481091c_111    https://software.repos.intel.com/python/conda
[conda] numpy                     1.26.4          py312h689b997_11    https://software.repos.intel.com/python/conda
[conda] numpy-base                1.26.4          py312h23d403b_11    https://software.repos.intel.com/python/conda
[conda] oneccl                    2021.14.1                pypi_0    pypi
[conda] oneccl-bind-pt            2.5.0+xpu                pypi_0    pypi
[conda] oneccl-devel              2021.14.1                pypi_0    pypi
[conda] onemkl-sycl-blas          2025.0.1                 pypi_0    pypi
[conda] onemkl-sycl-datafitting   2025.0.1                 pypi_0    pypi
[conda] onemkl-sycl-dft           2025.0.1                 pypi_0    pypi
[conda] onemkl-sycl-lapack        2025.0.1                 pypi_0    pypi
[conda] onemkl-sycl-rng           2025.0.1                 pypi_0    pypi
[conda] onemkl-sycl-sparse        2025.0.1                 pypi_0    pypi
[conda] onemkl-sycl-stats         2025.0.1                 pypi_0    pypi
[conda] onemkl-sycl-vm            2025.0.1                 pypi_0    pypi
[conda] opencl_rt                 2025.0.0             intel_1169    https://software.repos.intel.com/python/conda
[conda] python                    3.12.5          h2324612_5_cpython    https://software.repos.intel.com/python/conda
[conda] pytorch-triton-xpu        3.1.0+91b14bf559          pypi_0    pypi
[conda] scikit-learn-intelex      2025.0.0        py312_intel_957    https://software.repos.intel.com/python/conda
[conda] scipy                     1.13.1                  py312_8    https://software.repos.intel.com/python/conda
[conda] smp                       0.1.5                  py312_22    https://software.repos.intel.com/python/conda
[conda] tbb                       2022.0.0              intel_402    https://software.repos.intel.com/python/conda
[conda] tbb4py                    2022.0.0        py312_intel_402    https://software.repos.intel.com/python/conda
[conda] tcm                       1.2.0                 intel_589    https://software.repos.intel.com/python/conda
[conda] torch                     2.5.1+cxx11.abi          pypi_0    pypi
[conda] torchaudio                2.5.1+cxx11.abi          pypi_0    pypi
[conda] torchdiffeq               0.2.5                    pypi_0    pypi
[conda] torchsde                  0.2.6                    pypi_0    pypi
[conda] torchvision               0.20.1+cxx11.abi          pypi_0    pypi
[conda] transformers              4.47.1                   pypi_0    pypi
[conda] umf                       0.9.0                 intel_590    https://software.repos.intel.com/python/conda
[conda] xgboost                   2.1.1           0_gbcb4472py312_0    https://software.repos.intel.com/python/conda
@xiguiw xiguiw self-assigned this Dec 23, 2024
xiguiw (Contributor) commented Dec 23, 2024

@simonlui

Thanks for finding and reporting the issue.
This indicates it depends on libsycl.so.7 (oneAPI 2024), but the current oneAPI version provides libsycl.so.8 (2025.0).

It seems something in the documentation about the Triton XPU version is off.

Let me check it.


@xiguiw xiguiw removed their assignment Dec 30, 2024
@ZhaoqiongZ ZhaoqiongZ self-assigned this Dec 30, 2024
ZhaoqiongZ (Contributor) commented

Hi @simonlui, could you please share the steps you've taken?

I suspect you missed the step to install and activate dpcpp.
Here is the link to download dpcpp: https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-compiler-download.html
Then follow the feature guide to activate it with source {dpcpproot}/env/vars.sh
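After sourcing vars.sh, one way to sanity-check the activation is to scan the directories on LD_LIBRARY_PATH for a libsycl shared object. A small sketch, assuming a Linux layout; the helper function is hypothetical:

```python
import os
from pathlib import Path

def find_libsycl(search_dirs):
    """Return paths of libsycl.so.* files found in the given directories."""
    hits = []
    for d in search_dirs:
        p = Path(d)
        if p.is_dir():
            hits.extend(sorted(p.glob("libsycl.so.*")))
    return hits

# In an activated oneAPI environment this should report libsycl.so.8 (2025.0)
# or libsycl.so.7 (2024.x) somewhere on the library path.
dirs = os.environ.get("LD_LIBRARY_PATH", "").split(":")
print(find_libsycl(dirs))
```

If the list comes back empty, vars.sh was likely not sourced in the current shell, which would also explain the ImportError at dlopen time.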

simonlui (Author) commented Jan 2, 2025

I ran into it with ComfyUI while trying to use torch.compile there, but I can reproduce it with a minimal example. Using the same conda environment, if I save the inference example provided by https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/features/torch_compile_gpu.html as a separate Python file, test.py, and run it, it shows the same error.

❯ python test.py
/home/simonlui/.conda/envs/comfyui/lib/python3.12/site-packages/torchvision/io/image.py:14: UserWarning: Failed to load image Python extension: 'libjpeg.so.8: cannot open shared object file: No such file or directory'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
[W101 18:46:52.760029978 OperatorEntry.cpp:155] Warning: Warning only once for all operators,  other operators may also be overridden.
  Overriding a previously registered kernel for the same operator and the same dispatch key
  operator: aten::_cummax_helper(Tensor self, Tensor(a!) values, Tensor(b!) indices, int dim) -> ()
    registered at /build/pytorch/build/aten/src/ATen/RegisterSchema.cpp:6
  dispatch key: XPU
  previous kernel: registered at /build/pytorch/build/aten/src/ATen/RegisterCPU.cpp:30476
       new kernel: registered at /build/intel-pytorch-extension/build/Release/csrc/gpu/csrc/aten/generated/ATen/RegisterXPU.cpp:2971 (function operator())
/home/simonlui/.conda/envs/comfyui/lib/python3.12/site-packages/intel_extension_for_pytorch/xpu/amp/autocast_mode.py:22: UserWarning: torch.xpu.amp.autocast is deprecated. Please use torch.amp.autocast('xpu') instead.
  warnings.warn(
Traceback (most recent call last):
...
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
ImportError: libsycl.so.7: cannot open shared object file: No such file or directory

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True

I am pretty sure I have everything installed correctly, and I ran source /opt/intel/oneapi/compiler/latest/env/vars.sh on top of source /opt/intel/oneapi/setvars.sh. That did not solve the issue.

simonlui (Author) commented Jan 3, 2025

Using source /opt/intel/oneapi/compiler/2024.2/env/vars.sh instead gives me a segmentation fault with the above test.py test case; I'll post the top and bottom parts of the gdb backtrace.

#0  0x00007ffd92566321 in _pi_result sycl::_V1::detail::plugin::call_nocheck<(sycl::_V1::detail::PiApiKind)19, _pi_context*, unsigned long*>(_pi_context*, unsigned long*) const () from /opt/intel/oneapi/compiler/2024.2/lib/libsycl.so.7
#1  0x00007ffd92560536 in sycl::_V1::detail::context_impl::getNative() const () from /opt/intel/oneapi/compiler/2024.2/lib/libsycl.so.7
#2  0x00007ffe005d25cd in update(sycl::_V1::queue, std::unordered_map<sycl::_V1::queue, l0_resc_handles, std::hash<sycl::_V1::queue>, std::equal_to<sycl::_V1::queue>, std::allocator<std::pair<sycl::_V1::queue const, l0_resc_handles> > >&) () from /home/simonlui/.triton/cache/5a926a63a1a0972c6fabd6ce71015398/spirv_utils.so
#3  0x00007ffe005d3cd2 in initContext(_object*, _object*) () from /home/simonlui/.triton/cache/5a926a63a1a0972c6fabd6ce71015398/spirv_utils.so
#4  0x0000555555792c67 in cfunction_call (func=0x7ffd92a03830, args=<optimized out>, kwargs=<optimized out>) at /usr/local/src/conda/python-3.12.5/Objects/methodobject.c:548
#5  0x00005555557649db in _PyObject_MakeTpCall (tstate=0x555555c24ed8 <_PyRuntime+459704>, callable=0x7ffd92a03830, args=0x7ffff7fb87f8, nargs=<optimized out>, keywords=<optimized out>) at /usr/local/src/conda/python-3.12.5/Objects/call.c:240
#6  0x000055555576f940 in _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=<optimized out>, throwflag=<optimized out>) at Python/bytecodes.c:2714
#7  0x0000555555661503 in _PyEval_EvalFrame (tstate=0x555555c24ed8 <_PyRuntime+459704>, frame=0x7ffff7fb8788, throwflag=0) at /usr/local/src/conda/python-3.12.5/Include/internal/pycore_ceval.h:91
#8  _PyEval_Vector (tstate=0x555555c24ed8 <_PyRuntime+459704>, func=0x7ffea58ee660, locals=0x0, args=0x7fffffff34e0, argcount=<optimized out>, kwnames=0x0) at /usr/local/src/conda/python-3.12.5/Python/ceval.c:1683
#9  _PyFunction_Vectorcall (func=0x7ffea58ee660, stack=0x7fffffff34e0, nargsf=<optimized out>, kwnames=0x0) at /usr/local/src/conda/python-3.12.5/Objects/call.c:419
#10 _PyObject_FastCallDictTstate (tstate=0x555555c24ed8 <_PyRuntime+459704>, callable=0x7ffea58ee660, args=0x7fffffff34e0, nargsf=<optimized out>, kwargs=<optimized out>) at /usr/local/src/conda/python-3.12.5/Objects/call.c:133
...
#258 pymain_run_file (config=0x555555bc7ab8 <_PyRuntime+77720>) at /usr/local/src/conda/python-3.12.5/Modules/main.c:379
#259 pymain_run_python (exitcode=0x7fffffffbcf4) at /usr/local/src/conda/python-3.12.5/Modules/main.c:633
#260 Py_RunMain () at /usr/local/src/conda/python-3.12.5/Modules/main.c:713
#261 0x000055555581b3b7 in Py_BytesMain (argc=<optimized out>, argv=<optimized out>) at /usr/local/src/conda/python-3.12.5/Modules/main.c:767
#262 0x00007ffff7ce5088 in __libc_start_call_main (main=main@entry=0x55555581b2a0 <main>, argc=argc@entry=2, argv=argv@entry=0x7fffffffbf88) at ../sysdeps/nptl/libc_start_call_main.h:58
#263 0x00007ffff7ce514b in __libc_start_main_impl (main=0x55555581b2a0 <main>, argc=2, argv=0x7fffffffbf88, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffbf78) at ../csu/libc-start.c:360
#264 0x000055555581b1ed in _start ()

So yeah, I have no clue. It feels like something is off with my environment or configuration on a dnf/yum-based Linux distro, since other people on apt-based distros seem to have no issues. I will probably spin up a Docker container next to see what is going on.
