
Adding CUDA kernel (optimized for sm80) for block-wise 4-bit quantized float16 GEMM #20936


Triggered via pull request on January 30, 2024, 17:42
Status: Success
Total duration: 5m 0s
Artifacts

Workflow: codeql.yml
on: pull_request
Matrix: Analyze
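
The workflow run itself is only a CodeQL analysis, but for context on what PR #20936 adds, below is a minimal, hypothetical sketch of the block-wise 4-bit dequantization that such a fp16 GEMM kernel has to perform per weight element. This is not the PR's actual kernel: the function name `dequant_4b_blockwise`, the packing order (two nibbles per byte along K), the per-block fp16 scales, and the fixed zero point of 8 are all illustrative assumptions.

```cuda
#include <cuda_fp16.h>
#include <cstdint>

// Assumed layout: an N x K weight matrix packed two 4-bit values per byte
// along K; each contiguous block of `block_size` elements in K shares one
// fp16 scale (symmetric quantization with a fixed zero point of 8).
// K is assumed to be a multiple of both 2 and `block_size`.
__global__ void dequant_4b_blockwise(
    const uint8_t* __restrict__ packed,  // N * K / 2 packed bytes
    const __half* __restrict__ scales,   // N * (K / block_size) scales
    __half* __restrict__ out,            // N * K dequantized fp16 values
    int n, int k, int block_size) {
  int row = blockIdx.y;                             // one weight row per blockIdx.y
  int col = blockIdx.x * blockDim.x + threadIdx.x;  // element index along K
  if (row >= n || col >= k) return;

  // Two 4-bit values per byte: low nibble holds the even column.
  uint8_t byte = packed[(row * k + col) / 2];
  int q = (col & 1) ? (byte >> 4) : (byte & 0x0F);

  // Each block of `block_size` columns shares one scale.
  __half scale = scales[row * (k / block_size) + col / block_size];
  out[row * k + col] = __hmul(__int2half_rn(q - 8), scale);
}
```

An sm80-optimized kernel would typically not materialize the dequantized matrix at all: it would fuse this arithmetic into the GEMM main loop and feed Tensor Core MMA instructions from shared memory, but the per-element dequantization is the same idea.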