Issues: foundation-model-stack/fms-acceleration
- #112 Enable GPTQModel to handle GraniteMoeParallelExperts [help wanted], opened Nov 20, 2024 by fabianlim
- #110 Slowdown Observed for BigCode SantaCoder [bug, help wanted, question], opened Nov 13, 2024 by fabianlim
- #109 ScatterMoE Gradient Norm Needs to be Properly Computed When Used With FSDP [bug, help wanted], opened Nov 11, 2024 by fabianlim
- #108 Improve Documentation [documentation, help wanted], opened Nov 8, 2024 by fabianlim
- #105 Extract ScatterMoE Triton Kernels from Kernel Hyperdrive Fork [dependency, help wanted, triton], opened Nov 8, 2024 by fabianlim
- #104 Incorporate Liger [dependency, help wanted, triton], opened Nov 8, 2024 by fabianlim
- #103 ScatterMoE to support LoRA Adapters [help wanted, question], opened Nov 6, 2024 by fabianlim
- #101 ScatterMoE to support Quantized PEFT [help wanted, question], opened Nov 6, 2024 by fabianlim
- #100 Numba JIT TypingErrors Thrown on Multipack Functions [bug, question], opened Nov 6, 2024 by fabianlim
- #98 FOAK Cross Entropy Loss Will Not Work with New Loss Functions After Transformers 4.46 [future, urgent], opened Oct 29, 2024 by fabianlim
- #91 Register Kernels as AutoGrad Ops [future, help wanted], opened Oct 11, 2024 by fabianlim
- #84 Slowdown and Higher Memory Consumption for GPTQ-LoRA with Bfloat16 [question], opened Sep 12, 2024 by achew010
- #77 Ensure Model is Correctly Loaded for Augmentation Purposes [question], opened Aug 29, 2024 by fabianlim
- #76 Introduce Liger Fused Cross Entropy Kernel to FOAK Plugin [triton, tuning], opened Aug 29, 2024 by achew010
- #70 Inconsistency in Padding-Free Benchmarks with Different Transformers Versions [question], opened Aug 19, 2024 by achew010
- #50 Quantized PEFT Benchmark Experiments Run Out of Memory with Non-Zero LoRA Dropout [question], opened Jul 12, 2024 by achew010
- #39 Enable CUDA Unit Tests in GH Workflows [help wanted], opened Jun 24, 2024 by fabianlim
- #17 Release Upper Limit on Torch, Transformers, Accelerate, and Others [dependency], opened May 23, 2024 by fabianlim

Label key: bug (something isn't working); dependency (issue arises because of a dependency); documentation (improvements or additions to documentation); future (will be affected in future versions, e.g. deprecation); help wanted (extra attention is needed); question (further information is requested); triton (involves Triton kernels); tuning; urgent (time sensitivity involved).