Adding cuda kernel (optimized for sm80) for block-wise 4b quantized float 16 GEMM. #21827

Triggered via pull request: February 28, 2024 21:01
Status: Success
Total duration: 4m 55s
Artifacts: none listed

Workflow: codeql.yml

on: pull_request
Matrix: Analyze
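
For context, a codeql.yml workflow that runs on pull_request with a matrix job named Analyze commonly follows the standard CodeQL Action pattern. The sketch below is an assumed illustration of that shape, not the repository's actual file; the branch filter and language matrix values are placeholders.

name: "CodeQL"

on:
  pull_request:
    branches: [ main ]   # assumed branch filter

jobs:
  Analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    strategy:
      fail-fast: false
      matrix:
        language: [ 'cpp', 'python' ]   # placeholder languages; the real matrix may differ
    steps:
      - uses: actions/checkout@v4
      # Set up CodeQL for the language selected by this matrix entry
      - uses: github/codeql-action/init@v3
        with:
          languages: ${{ matrix.language }}
      # Attempt an automatic build for compiled languages
      - uses: github/codeql-action/autobuild@v3
      # Run the CodeQL analysis and upload results
      - uses: github/codeql-action/analyze@v3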