-
Hey @thisadev! I'm here to help you with any issues or questions you have. I'm a bot and can assist with bug fixes, answer your questions, and guide you on becoming a contributor. Let's take a look at the issue you're facing with the structured output. To use a Pydantic class called `Page` as the `output_cls` in your query engine, you can wire it up roughly as follows.
Here's an example:

```python
from pydantic import BaseModel

from llama_index.legacy.llms.openai import OpenAI
from llama_index.legacy.prompts.base import PromptTemplate
from llama_index.legacy.output_parsers.pydantic import PydanticOutputParser
from llama_index.legacy.program.llm_program import LLMTextCompletionProgram
from llama_index.legacy.core.query_engine.retriever_query_engine import RetrieverQueryEngine
from llama_index.legacy.core.base.base_retriever import BaseRetriever
from llama_index.legacy.schema import QueryBundle


class Page(BaseModel):
    """Data model for a page."""

    page: int
    summary: str


# Define the prompt template
prompt_template_str = "Tell me about the history of Python programming language."

# Create the LLM instance (replace with your Azure Hosted LLM)
llm = OpenAI(temperature=0, model="gpt-3.5-turbo-0613")

# Create the output parser
output_parser = PydanticOutputParser(output_cls=Page)

# Create the prompt template
prompt = PromptTemplate(prompt_template_str)

# Create the LLMTextCompletionProgram
program = LLMTextCompletionProgram(
    output_parser=output_parser,
    output_cls=Page,
    prompt=prompt,
    llm=llm,
    verbose=True,
)

# Create a retriever (assuming you have a concrete retriever implementation)
retriever = BaseRetriever()

# Create the RetrieverQueryEngine with the output_cls set to Page.
# lim_reorder, reranker and response_synthesizer are placeholders from
# your own setup and must be defined before this call.
query_engine = RetrieverQueryEngine.from_args(
    retriever=retriever,
    llm=llm,
    output_cls=Page,
    node_postprocessors=[lim_reorder, reranker],
    response_synthesizer=response_synthesizer,
)

# Query the engine and get the response (use the public query method)
response = query_engine.query(QueryBundle(query_str=prompt_template_str))

print(response.page)
# > 1
print(response.summary)
# > 'Python was created by Guido van Rossum and first released in 1991...'
```

This setup ensures that the response is structured according to the `Page` model.
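One caveat on the snippet above: `BaseRetriever` is abstract, so `BaseRetriever()` won't actually instantiate. A minimal sketch of obtaining a real retriever, assuming the same `llama_index.legacy` layout and a hypothetical local `data/` directory:

```python
from llama_index.legacy import SimpleDirectoryReader, VectorStoreIndex

# Build a vector index over local documents and expose it as a retriever
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
retriever = index.as_retriever(similarity_top_k=5)
```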
-
@dosu I've tried an approach like the one below with my CustomLLM, using `as_structured_llm`. However, I still get the Pydantic validation error. Below is the custom LLM (truncated):

```python
import json

class PageResults(BaseModel):
    ...

class AzureHostedLLM(CustomLLM):
    ...
```

And below is how I passed the Pydantic class as my `output_cls` in the response synthesizer (truncated):

```python
prompt_helper = PromptHelper(context_window=4096, num_output=320, chunk_overlap_ratio=.2, chunk_size_limit=2048)
```
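For reference, the structured-LLM path can be exercised on its own before it is wired into a response synthesizer; a minimal sketch, assuming `AzureHostedLLM` is a working `CustomLLM` subclass and the model returns valid JSON (the `PageResults` fields here are hypothetical, since the original definition was truncated):

```python
from llama_index.core.llms import ChatMessage
from pydantic import BaseModel

class PageResults(BaseModel):
    page: int     # hypothetical fields
    summary: str

llm = AzureHostedLLM()  # your CustomLLM subclass
sllm = llm.as_structured_llm(output_cls=PageResults)
output = sllm.chat([ChatMessage.from_str("Summarise page 1 of the document.")])
print(output.raw)  # the parsed PageResults instance; validation errors surface here
```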
-
@dosu In my retriever output, I have metadata which is
-
@dosu Where is the class
-
@dosu I get the below error.

In the LlamaIndex documentation, below is the `QueryFusionRetriever` class, which we need to customise (truncated):

```python
class QueryFusionRetriever(BaseRetriever):
    ...
```
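For orientation, a custom retriever usually only needs to implement `_retrieve`. Here is a minimal, illustrative sketch of fusing several retrievers, assuming the current `llama_index.core` package layout (this is not the real `QueryFusionRetriever` internals):

```python
from typing import Dict, List

from llama_index.core import QueryBundle
from llama_index.core.retrievers import BaseRetriever
from llama_index.core.schema import NodeWithScore


class SimpleFusionRetriever(BaseRetriever):
    """Illustrative retriever that merges results from several sub-retrievers."""

    def __init__(self, retrievers: List[BaseRetriever]) -> None:
        self._retrievers = retrievers
        super().__init__()

    def _retrieve(self, query_bundle: QueryBundle) -> List[NodeWithScore]:
        best: Dict[str, NodeWithScore] = {}
        for retriever in self._retrievers:
            for nws in retriever.retrieve(query_bundle):
                node_id = nws.node.node_id
                # keep the best-scoring copy of each node across retrievers
                if node_id not in best or (nws.score or 0.0) > (best[node_id].score or 0.0):
                    best[node_id] = nws
        return sorted(best.values(), key=lambda n: n.score or 0.0, reverse=True)
```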
-
@dosu The below is a reranking function. Can we use that in the above case rather than customising the retriever? (truncated)

```python
def reciprocal_rank_fusion(search_results_dict, k=60):
    ...
```
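Since the body above was truncated, here is one common formulation of reciprocal rank fusion, sketched in full. This assumes `search_results_dict` maps retriever names to ranked lists of document IDs, best first, with 0-based ranks (some variants use 1-based ranks):

```python
def reciprocal_rank_fusion(search_results_dict, k=60):
    """Fuse several ranked lists: each document accumulates 1 / (k + rank)."""
    fused = {}
    for ranked_ids in search_results_dict.values():
        for rank, doc_id in enumerate(ranked_ids):
            fused[doc_id] = fused.get(doc_id, 0.0) + 1.0 / (k + rank)
    # highest fused score first
    return sorted(fused.items(), key=lambda item: item[1], reverse=True)


# Example: "b" appears in both lists, so it outranks "a" and "c".
# reciprocal_rank_fusion({"bm25": ["a", "b"], "vector": ["b", "c"]})
```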
-
@dosu I have a Chroma DB collection with the metadata field `title`. How can I add a metadata filter in LlamaIndex in order to filter a list of similar titles matched using regex patterns? An example of what I'm trying is as follows (truncated):

```python
unwanted_patterns = [
    ...
]
```
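Chroma's metadata `where` filters support exact-match style operators but not regular expressions, so one workaround is to apply the regex after retrieval. A minimal sketch (the patterns here are hypothetical, since the original list was truncated):

```python
import re

# hypothetical patterns standing in for the truncated list above
unwanted_patterns = [r"^Draft\b", r"\(archived\)$"]
compiled = [re.compile(p, re.IGNORECASE) for p in unwanted_patterns]

def drop_unwanted_titles(nodes_with_scores):
    """Drop retrieved nodes whose 'title' metadata matches any pattern."""
    return [
        nws for nws in nodes_with_scores
        if not any(p.search(nws.node.metadata.get("title", "")) for p in compiled)
    ]

filtered = drop_unwanted_titles(retriever.retrieve("my query"))
```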
-
@dosu I have a set of nodes retrieved from a vector store, where I want to answer a question from each node iteratively. How can I improve the efficiency of this using LlamaIndex? Is there any method to generate answers without sending them to an LLM API?
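The per-node loop itself reduces to one completion call per node; a minimal sketch, assuming `nodes` is the `List[NodeWithScore]` from your retriever and `llm` is any configured LlamaIndex LLM (avoiding an LLM API entirely would mean pointing `llm` at a locally hosted model instead):

```python
question = "What does this section say about X?"  # hypothetical question

answers = []
for nws in nodes:  # nodes: List[NodeWithScore] from the retriever
    context = nws.node.get_content()
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    answers.append(llm.complete(prompt).text)
```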
-
@dosu I am trying to create a QASummaryQueryEngineBuilder using a ChromaDB vector store. My code is as below.
However, I got an error when running a query.
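For comparison, a minimal sketch of the builder in isolation, assuming the `llama_index.core.composability` module path and that `Settings.llm` / `Settings.embed_model` are already configured (the Chroma-specific wiring was not captured above):

```python
from llama_index.core.composability import QASummaryQueryEngineBuilder

# Builds a vector (QA) index and a summary index over the documents,
# then routes each query to whichever suits the question.
builder = QASummaryQueryEngineBuilder()
query_engine = builder.build_from_documents(documents)
response = query_engine.query("Give me a summary of the document.")
```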
-
@dosu I'm using LlamaIndex RAG with Python's joblib `Parallel`. I am using a custom LLM and a local embed model. However, when run in parallel, the `Settings` LLM and embed_model are not recognised, and I get an error telling me to set `embed_model` and `llm`. How can I use these LlamaIndex settings in parallel with joblib?
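One likely cause: `Settings` is module-level state, and joblib's default loky backend starts fresh worker processes that don't inherit what the parent configured. A sketch of one workaround, with hypothetical factory names, is to configure `Settings` inside the worker function:

```python
from joblib import Parallel, delayed
from llama_index.core import Settings

def run_query(query: str) -> str:
    # Fresh worker processes don't inherit the parent's Settings,
    # so configure them here before building anything.
    Settings.llm = AzureHostedLLM()            # hypothetical custom LLM
    Settings.embed_model = make_embed_model()  # hypothetical local embed factory
    engine = build_query_engine()              # hypothetical engine factory
    return str(engine.query(query))

results = Parallel(n_jobs=4)(delayed(run_query)(q) for q in queries)
```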
-
Hi, I'm using a RetrieverQueryEngine with a custom LLM in LlamaIndex. The custom LLM is based on an Azure-hosted LLM. I want to use a Pydantic class called `Page` as the `output_cls` inside the query engine or in the response synthesizer.

```python
query_engine = RetrieverQueryEngine.from_args(
    output_cls=Page,
    retriever=hybrid_retriever,
    llm=llm,
    node_postprocessors=[lim_reorder, reranker],
    response_synthesizer=response_synthesizer,
)
```

```python
from pydantic import BaseModel

class Page(BaseModel):
    """Data model for page number and summary"""
    ...
```