
Adding CUDA kernel (optimized for sm80) for block-wise 4-bit quantized float16 GEMM #20939

Triggered via: pull request, January 30, 2024 18:20
Status: Success
Total duration: 5m 7s
Artifacts

codeql.yml

on: pull_request
Matrix: Analyze
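
For context, the operation named in the PR title (a GEMM over block-wise 4-bit quantized fp16 weights) can be sketched as a simple dequantization kernel that expands the packed weights back to fp16 before a regular fp16 GEMM. This is only a minimal illustration under assumed layouts (two 4-bit values per byte, one fp16 scale and one 4-bit zero point per block per column); it is not the sm80-optimized kernel from this PR, and the names below (`packed`, `scales`, `zeros`, `block_size`) are placeholders, not identifiers from the PR.

```cuda
// Illustrative sketch only: block-wise 4-bit dequantization to fp16.
// Layout assumptions (not from the PR): weights are column-blocked with
// `block_size` elements per block, packed two 4-bit values per byte,
// with one fp16 scale and one 4-bit zero point per block per column.
#include <cuda_fp16.h>
#include <cstdint>

__global__ void dequant_4bit_blockwise(
    const uint8_t* __restrict__ packed,  // (K/2) x N packed 4-bit weights
    const half*    __restrict__ scales,  // (K/block_size) x N per-block scales
    const uint8_t* __restrict__ zeros,   // (K/block_size) x N per-block zero points (0..15)
    half*          __restrict__ out,     // K x N dequantized fp16 weights
    int K, int N, int block_size)
{
    int n = blockIdx.x * blockDim.x + threadIdx.x;  // column index
    int k = blockIdx.y * blockDim.y + threadIdx.y;  // row index
    if (n >= N || k >= K) return;

    int  blk   = k / block_size;
    half scale = scales[blk * N + n];
    int  zero  = zeros[blk * N + n];

    // Two 4-bit weights share one byte: even k in the low nibble, odd k in the high.
    uint8_t byte = packed[(k / 2) * N + n];
    int q = (k & 1) ? (byte >> 4) : (byte & 0x0F);

    out[k * N + n] = __hmul(scale, __int2half_rn(q - zero));
}
```

An sm80-optimized kernel like the one this PR adds would typically fuse the dequantization into the GEMM main loop (shared-memory staging plus tensor-core mma) rather than materializing the full fp16 weight matrix as this sketch does.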