diff --git a/index.md b/index.md
index ec8ef86..ec3a20e 100644
--- a/index.md
+++ b/index.md
@@ -33,10 +33,10 @@ JuliaGenAI is a new organization, and as such, it is still in the process of dev
 
 ### Model training and inference
 
-- [Lux.jl](https://github.com/LuxDL/Lux.jl). A modern deep learning framework for Julia, intended for training large models. It is a modern approach to Flux.jl, and will eventually supersede Flux. Lux supports [many predefined layers](https://lux.csail.mit.edu/dev/api/Lux/layers), including attention layers through [Boltz](https://lux.csail.mit.edu/dev/api/Domain_Specific_Modeling/Boltz).
-- [Flux.jl](https://github.com/FluxML/Flux.jl). Flux is a machine-learning library for Julia that is flexible and allows the building of complex models. However, at the time of writing, We're not aware of any Large Language Models (LLMs) that have been trained in Flux.
+- [Lux.jl](https://github.com/LuxDL/Lux.jl). A purely functional deep learning framework for Julia, in the spirit of [JAX](https://jax.readthedocs.io/en/latest/). Lux supports [many predefined layers](https://lux.csail.mit.edu/dev/api/Lux/layers), including attention layers through [Boltz](https://lux.csail.mit.edu/dev/api/Domain_Specific_Modeling/Boltz).
+- [Flux.jl](https://github.com/FluxML/Flux.jl). Flux is the most popular deep learning library for Julia. It is performant and flexible, and allows building complex models. However, at the time of writing, we're not aware of any Large Language Models (LLMs) that have been trained in Flux. Flux is to Lux what PyTorch is to JAX (stateful vs. stateless), although the two Julia libraries share very similar interfaces.
 - [Llama2.jl](https://github.com/cafaxo/Llama2.jl). Llama2.jl provides simple code for inference and training of llama2-based language models based on [llama2.c](https://github.com/karpathy/llama2.c). It supports loading quantized weights in GGUF format (`q4_K_S` variant). Training is only experimental at this stage.
-- [Transformers.jl](https://github.com/chengchingwen/Transformers.jl). Transformers.jl is a Julia package that provides a high-level API for using pre-trained transformer models. It also allows to download any model from the Hugging Face hub with `@hgf_str` macro string.
+- [Transformers.jl](https://github.com/chengchingwen/Transformers.jl). Transformers.jl is a Julia package built on top of Flux.jl that provides a high-level API for using pre-trained transformer models. It also allows downloading any model from the Hugging Face Hub with the `@hgf_str` string macro.
 
 ## Why Julia?
 
@@ -61,4 +61,4 @@ You can reach out to the organizing committee:
 - Cameron Pfiffer at [cameron@pfiffer.org](mailto:cameron@pfiffer.org)
 - Jan Siml (Slack: @svilup / Zulip: Jan Siml)
 
-You can reach us on the Julia [Slack](https://julialang.org/slack/) & [Zulip](https://julialang.zulipchat.com/) at the `#generative-ai` channel. You can also find us at the [Discord](https://discord.gg/mm2kYjB) server, or the discussion forum at [discourse.julialang.org](https://discourse.julialang.org/).
\ No newline at end of file
+You can reach us on the Julia [Slack](https://julialang.org/slack/) & [Zulip](https://julialang.zulipchat.com/) at the `#generative-ai` channel. You can also find us at the [Discord](https://discord.gg/mm2kYjB) server, or the discussion forum at [discourse.julialang.org](https://discourse.julialang.org/).