FastAPI Framework Setup Modification #9
Open
XXXJumpingFrogXXX wants to merge 2 commits into sugarlabs:main from XXXJumpingFrogXXX:fastapi
@@ -0,0 +1,125 @@
from fastapi import APIRouter
from pydantic import BaseModel
import re  # needed for extracting the "### Response:" section below

from unsloth import FastLanguageModel
import torch

max_seq_length = 2048  # Choose any! We auto support RoPE Scaling internally!
dtype = None  # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True  # Use 4bit quantization to reduce memory usage. Can be False.

alpaca_prompt = """Below is an instruction that describes a task, along with an input that provides additional context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""


class Question(BaseModel):
    query: str


router = APIRouter()


@router.post("/generate_answer")
def generate_answer(value: Question):
    try:
        # Load the llama model and tokenizer from the pretrained model
        llama_model, llama_tokenizer = FastLanguageModel.from_pretrained(
            model_name="Antonio27/llama3-8b-4-bit-for-sugar",
            max_seq_length=max_seq_length,
            dtype=dtype,
            load_in_4bit=load_in_4bit,
        )

        # Load the gemma model and tokenizer from the pretrained model
        gemma_model, gemma_tokenizer = FastLanguageModel.from_pretrained(
            model_name="unsloth/gemma-2-9b-it-bnb-4bit",
            max_seq_length=max_seq_length,
            dtype=dtype,
            load_in_4bit=load_in_4bit,
        )

        # Prepare llama model for inference
        FastLanguageModel.for_inference(llama_model)
        llama_tokenizer.pad_token = llama_tokenizer.eos_token
        llama_tokenizer.add_eos_token = True

        # Tokenize the input question for the llama model
        inputs = llama_tokenizer(
            [
                alpaca_prompt.format(
                    f'''
                    Your task is to answer children's questions using simple language.
                    Explain any difficult words in a way a 3-year-old can understand.
                    Keep responses under 60 words.
                    \n\nQuestion: {value.query}
                    ''',  # instruction
                    "",  # input
                    "",  # output - leave this blank for generation!
                )
            ], return_tensors="pt").to("cuda")

        # Generate output using the llama model
        outputs = llama_model.generate(**inputs, max_new_tokens=256, temperature=0.6)
        decoded_outputs = llama_tokenizer.batch_decode(outputs)

        # Extract the response text
        response_text = decoded_outputs[0]

        # Use regex to find the response section in the output
        match = re.search(r"### Response:(.*?)(?=\n###|$)", response_text, re.DOTALL)
        if match:
            initial_response = match.group(1).strip()
        else:
            initial_response = ""

        # Prepare gemma model for inference
        FastLanguageModel.for_inference(gemma_model)
        gemma_tokenizer.pad_token = gemma_tokenizer.eos_token
        gemma_tokenizer.add_eos_token = True

        # Tokenize the initial response for the gemma model
        inputs = gemma_tokenizer(
            [
                alpaca_prompt.format(
                    f'''
                    Modify the given content for a 5-year-old.
                    Use simple words and phrases.
                    Remove any repetitive information.
                    Keep responses under 50 words.
                    \n\nGiven Content: {initial_response}
                    ''',  # instruction
                    "",  # input
                    "",  # output - leave this blank for generation!
                )
            ], return_tensors="pt").to("cuda")

        # Generate adjusted output using the gemma model
        outputs = gemma_model.generate(**inputs, max_new_tokens=256, temperature=0.6)
        decoded_outputs = gemma_tokenizer.batch_decode(outputs)

        # Extract the adjusted response text
        response_text = decoded_outputs[0]

        # Use regex to find the response section in the output
        match = re.search(r"### Response:(.*?)(?=\n###|$)", response_text, re.DOTALL)
        if match:
            adjusted_response = match.group(1).strip()
        else:
            adjusted_response = ""

        # Return the final adjusted response in a success dictionary
        return {
            'success': True,
            'response': {
                "result": adjusted_response
            }
        }

    except Exception as e:
        return {'success': False, 'response': str(e)}
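For reference, a minimal sketch of how a client might call the new endpoint once the application is running. The request body follows the Question model and the response shape matches the dictionary returned above; the mount prefix /sugar-ai/chat comes from the main.py changes below, while the host and port are assumptions.

    # Hypothetical client call; assumes the server is reachable on localhost:8000
    # and that the router is mounted under /sugar-ai/chat (see main.py below).
    import requests

    resp = requests.post(
        "http://localhost:8000/sugar-ai/chat/generate_answer",  # assumed base URL
        json={"query": "Why is the sky blue?"},
    )
    data = resp.json()
    if data.get("success"):
        print(data["response"]["result"])
    else:
        print("Error:", data.get("response"))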
@@ -1,27 +1,26 @@
Removed (the old distilgpt2 helper, which now lives in the new file added below):

from transformers import GPT2Tokenizer, GPT2LMHeadModel


# We should rename this
class AI_Test:
    def __init__(self):
        pass

    def generate_bot_response(self, question):
        tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
        model = GPT2LMHeadModel.from_pretrained("distilgpt2")

        prompt = '''
        Your task is to answer children's questions using simple language.
        Explain any difficult words in a way a 3-year-old can understand.
        Keep responses under 60 words.
        \n\nQuestion:
        '''

        input_text = prompt + question

        inputs = tokenizer.encode(input_text, return_tensors='pt')
        outputs = model.generate(inputs, max_length=150, num_return_sequences=1)
        answer = tokenizer.decode(outputs[0], skip_special_tokens=True)

        return answer

Added (the new FastAPI application setup):

import os
import uvicorn
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from chat.router import router as chat_router
# from piggy.router import router as piggy_router

# Create a FastAPI application instance with a custom documentation URL
app = FastAPI(
    docs_url="/sugar-ai/docs",
)

# Include the chat router with a specified prefix for endpoint paths
app.include_router(chat_router, prefix="/sugar-ai/chat")
# Include the piggy router with a specified prefix for endpoint paths (currently commented out)
# app.include_router(piggy_router, prefix="/sugar-ai/piggy")

# Add CORS middleware to allow cross-origin requests from any origin
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # Allow requests from any origin
    allow_credentials=True,  # Allow sending of credentials (e.g., cookies)
    allow_methods=["*"],  # Allow all HTTP methods
    allow_headers=["*"],  # Allow all headers
)
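The new module imports uvicorn, but the hunk shown here does not include a run call. A typical way to serve the app during development would be something like the sketch below; the module location and port are assumptions, not part of this diff.

    # Hypothetical launcher; assumes this module is the project-root main.py
    # and that port 8000 is free. Adjust host/port as needed.
    if __name__ == "__main__":
        uvicorn.run(app, host="0.0.0.0", port=8000)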
@@ -0,0 +1,27 @@

from transformers import GPT2Tokenizer, GPT2LMHeadModel


# We should rename this
class AI_Test:
    def __init__(self):
        pass

    def generate_bot_response(self, question):
        tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
        model = GPT2LMHeadModel.from_pretrained("distilgpt2")

        prompt = '''
        Your task is to answer children's questions using simple language.
        Explain any difficult words in a way a 3-year-old can understand.
        Keep responses under 60 words.
        \n\nQuestion:
        '''

        input_text = prompt + question

        inputs = tokenizer.encode(input_text, return_tensors='pt')
        outputs = model.generate(inputs, max_length=150, num_return_sequences=1)
        answer = tokenizer.decode(outputs[0], skip_special_tokens=True)

        return answer
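For completeness, a minimal sketch of how this helper can be exercised locally (the review thread below notes it is slated for removal). The model is downloaded from the Hugging Face hub on first use; the sample question is only an illustration.

    # Quick local check of the distilgpt2 fallback helper.
    bot = AI_Test()
    print(bot.generate_bot_response("Why do birds sing?"))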
Reviewer:
This looks like an extended version of generate_bot_response in original_main.py below. If that's the case, you should delete that one, since it was based on your draft PR to the Chat activity. Also, does the prompt you've used here provide better responses than the one you used earlier?

Author:
For the first question: this is not an extended version of generate_bot_response from original_main.py. This version uses the Unsloth library within the Chat activity and is significantly different from original_main.py. Additionally, we plan to remove the original_main.py file in the final modifications.
For the second question: yes, the new prompt encourages the model to generate better responses.

Reviewer:
Yes, it's significantly different from original_main.py, but it achieves the same goal, so generate_bot_response from original_main.py needs to be deleted to avoid duplication.

Author:
Oh, I agree! I will delete that!

Reviewer:
Thank you! Can you also start working on Kshitij's part? I think you can work with what he has so far.