
feat: add stable diffusion #10

Merged · 6 commits from inference-models into main · Apr 4, 2024
Conversation

@Cifko (Collaborator) commented Mar 22, 2024

No description provided.

@jorgeantonio21 (Contributor) left a comment

Looking good! I left a few comments to be addressed before merging.

Review threads (outdated, resolved):
- .vscode/settings.json
- atoma-inference/src/models/llama.rs
- atoma-inference/src/models/mod.rs
- atoma-inference/src/models/token_output_stream.rs
- atoma-inference/src/models/stable_diffusion.rs (5 threads)
@Cifko force-pushed the inference-models branch 3 times, most recently from 8f45df7 to 5cd6905, on March 27, 2024 15:48
@Cifko changed the title from "WIP add llama and stable diffusion" to "feat: add stable diffusion" on Mar 27, 2024
@Cifko force-pushed the inference-models branch 7 times, most recently from c531d5b to 5e55a46, on April 3, 2024 13:17
@Cifko force-pushed the inference-models branch from 5e55a46 to 2e748e8 on April 3, 2024 13:22
@Cifko mentioned this pull request on Apr 3, 2024
@jorgeantonio21 (Contributor) left a comment

Looking good, left a few comments!

Review threads (outdated, resolved):
- Cargo.toml
- atoma-inference/src/candle/mod.rs
```rust
pub fn device() -> Result<Device, candle::Error> {
    if cuda_is_available() {
        info!("Using CUDA");
        Device::new_cuda(0)
```
@jorgeantonio21 (Contributor):
Are we sure we want to initialize the Device with ordinal = 0?

@Cifko (Collaborator, Author):
Well, that's the first card in the system. If we do plan to support multi-GPU systems, we can add support for selecting a card.

@jorgeantonio21 (Contributor):
We should definitely allow for this, as this will be crucial to the network.
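A configurable ordinal could look roughly like the sketch below. This is a minimal illustration, not code from this PR: the `ordinal` parameter and the CPU fallback are assumptions, and it assumes the crate layout visible in the snippet above (candle re-exported as `candle`, logging via `tracing`).

```rust
use candle::{utils::cuda_is_available, Device};
use tracing::info;

/// Minimal sketch of device selection with a caller-chosen GPU.
/// The `ordinal` parameter is hypothetical; the PR hardcodes 0.
pub fn device(ordinal: usize) -> Result<Device, candle::Error> {
    if cuda_is_available() {
        info!("Using CUDA device {}", ordinal);
        // Initialize the CUDA device at the requested ordinal
        // instead of always taking the first card.
        Device::new_cuda(ordinal)
    } else {
        info!("CUDA unavailable, falling back to CPU");
        Ok(Device::Cpu)
    }
}
```

The ordinal could then come from node configuration rather than being compiled in.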

Review threads (outdated, resolved):
- atoma-inference/src/models/candle/mod.rs (3 threads)
- atoma-inference/Cargo.toml
- Cargo.toml
@jorgeantonio21 merged commit 3370fce into main on Apr 4, 2024 (1 check passed)
@Cifko deleted the inference-models branch on April 10, 2024 08:26
Labels: none yet
Projects: none yet
2 participants