
中文  |  English



Llama3-Chinese



Introduction

Llama3-Chinese is a large language model based on Meta-Llama-3-8B, fine-tuned with the DoRA and LoRA+ training methods on 500k high-quality Chinese multi-turn SFT samples, 100k English multi-turn SFT samples, and 2k single-turn self-cognition samples.

Github: https://github.com/seanzhang-zhichen/llama3-chinese

DEMO

Download Model

Model | Download
Meta-Llama-3-8B | 🤗 HuggingFace · 🤖 ModelScope
Llama3-Chinese-Lora | 🤗 HuggingFace · 🤖 ModelScope
Llama3-Chinese (merged model) | 🤗 HuggingFace · 🤖 ModelScope
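
If you prefer not to clone with git lfs, the weights can also be fetched programmatically. A minimal sketch using the huggingface_hub library, with the merged model's repo id from the table above:

from huggingface_hub import snapshot_download

# Downloads every file of the repo into the local HF cache
# and returns the local directory path
local_dir = snapshot_download("zhichen/Llama3-Chinese")
print(local_dir)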

Merge LoRA Model (optional: you can skip this and download the merged model directly)

1. Download Meta-Llama-3-8B

git clone https://www.modelscope.cn/LLM-Research/Meta-Llama-3-8B.git

2. Download Llama3-Chinese-Lora

From ModelScope

git lfs install
git clone https://www.modelscope.cn/seanzhang/Llama3-Chinese-Lora.git

From HuggingFace

git lfs install
git clone https://huggingface.co/zhichen/Llama3-Chinese-Lora

3. Merge Model

python merge_lora.py \
    --base_model path/to/Meta-Llama-3-8B \
    --lora_model path/to/lora/Llama3-Chinese-Lora  \
    --output_dir ./Llama3-Chinese
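
The repo's merge_lora.py is not reproduced here; as a rough sketch, merging a LoRA adapter into a base model with the Hugging Face peft library typically looks like the following (paths are placeholders, and the actual script may differ):

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained(
    "path/to/Meta-Llama-3-8B", torch_dtype="auto"
)
model = PeftModel.from_pretrained(base, "path/to/lora/Llama3-Chinese-Lora")

# Fold the LoRA deltas into the base weights and save a standalone model
merged = model.merge_and_unload()
merged.save_pretrained("./Llama3-Chinese")
AutoTokenizer.from_pretrained("path/to/Meta-Llama-3-8B").save_pretrained("./Llama3-Chinese")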

Download Llama3-Chinese (Merged Model)

From ModelScope

git lfs install
git clone https://www.modelscope.cn/seanzhang/Llama3-Chinese.git

From HuggingFace

git lfs install
git clone https://huggingface.co/zhichen/Llama3-Chinese

Inference

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "zhichen/Llama3-Chinese"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "你好"},
]

# Build the Llama 3 chat prompt and append the assistant header so the
# model continues with a response
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
# Strip the prompt tokens, keeping only the newly generated response
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))

CLI DEMO

python cli_demo.py --model_path zhichen/Llama3-Chinese

WEB DEMO

python web_demo.py --model_path zhichen/Llama3-Chinese

VLLM WEB DEMO

1. Deploy the model with vLLM

python -m vllm.entrypoints.openai.api_server \
    --served-model-name Llama3-Chinese \
    --model ./Llama3-Chinese  # replace with your own merged model path

2. Run the following command from the CLI

python vllm_web_demo.py --model Llama3-Chinese
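
Besides the bundled web demo, the vLLM server exposes an OpenAI-compatible API (on port 8000 by default), so you can also smoke-test the deployment directly. A minimal sketch using the openai Python client; the base_url and api_key values are assumptions for a default local deployment:

from openai import OpenAI

# vLLM serves an OpenAI-compatible API; any non-empty api_key works locally
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="Llama3-Chinese",  # must match --served-model-name above
    messages=[{"role": "user", "content": "你好"}],
    temperature=0.7,
)
print(resp.choices[0].message.content)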

Train Dataset

deepctrl-sft-data

LICENSE

This project may only be used for research purposes, and the project developers accept no liability for any harm or loss caused by its use (including but not limited to data, models, and code). See DISCLAIMER for details.

The Llama3-Chinese project code is licensed under the Apache License 2.0. The code is free for commercial use; the model weights and data may only be used for research purposes. Please include a link to Llama3-Chinese and the license agreement in your product description.

Citation

If you use Llama3-Chinese in your research, please cite it as follows:

@misc{Llama3-Chinese,
  title={Llama3-Chinese},
  author={Zhichen Zhang and Xin Lu and Long Chen},
  year={2024},
  howpublished={\url{https://github.com/seanzhang-zhichen/llama3-chinese}},
}

Acknowledgement

meta-llama/llama3
hiyouga/LLaMA-Factory

