
[Bug]: Validation error for LLMMetadata when using SambaNovaCloud #17028

Closed
Ashish-Soni08 opened this issue Nov 21, 2024 · 1 comment · Fixed by #17029
Labels
bug Something isn't working triage Issue needs to be triaged/prioritized

Comments

@Ashish-Soni08

Bug Description

The code in the integration that I believe is causing the issue: the SambaNovaCloud integration source in the LlamaIndex repository on GitHub (see the traceback below for the exact file and line).

The error I get is a ValidationError for LLMMetadata, raised when I try to run the index.as_query_engine() method (full traceback below).

Version

0.12.0

Steps to Reproduce

pip install "llama-index-llms-sambanovacloud"
pip install sseclient-py

from llama_index.core import Settings
from llama_index.llms.sambanovacloud import SambaNovaCloud

LLM_MODEL: str = "Meta-Llama-3.2-3B-Instruct"

llm = SambaNovaCloud(
    api_key=config["SAMBANOVA_API_KEY"],
    model=LLM_MODEL,
    max_tokens=1024,
    temperature=0.2,
)

Settings.llm = llm

# `index` is a previously built index; `rerank_model` is a node postprocessor
query_engine = index.as_query_engine(
    similarity_top_k=10,
    node_postprocessors=[rerank_model],
)

Relevant Logs/Tracebacks

---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
Cell In[24], line 3
      1 # Create the query engine
----> 3 query_engine = index.as_query_engine(similarity_top_k=10,
      4                                      node_postprocessors=[rerank_model]
      5                                      )

File /workspaces/Google-Cloud-Gemini-Hackathon/samba-ai/lib/python3.12/site-packages/llama_index/core/indices/base.py:381, in BaseIndex.as_query_engine(self, llm, **kwargs)
    374 retriever = self.as_retriever(**kwargs)
    375 llm = (
    376     resolve_llm(llm, callback_manager=self._callback_manager)
    377     if llm
    378     else Settings.llm
    379 )
--> 381 return RetrieverQueryEngine.from_args(
    382     retriever,
    383     llm=llm,
    384     **kwargs,
    385 )

File /workspaces/Google-Cloud-Gemini-Hackathon/samba-ai/lib/python3.12/site-packages/llama_index/core/query_engine/retriever_query_engine.py:102, in RetrieverQueryEngine.from_args(cls, retriever, llm, response_synthesizer, node_postprocessors, callback_manager, response_mode, text_qa_template, refine_template, summary_template, simple_template, output_cls, use_async, streaming, **kwargs)
     79 """Initialize a RetrieverQueryEngine object.".
     80 
     81 Args:
   (...)
     98     streaming (bool): Whether to use streaming.
     99 """
    100 llm = llm or Settings.llm
--> 102 response_synthesizer = response_synthesizer or get_response_synthesizer(
    103     llm=llm,
    104     text_qa_template=text_qa_template,
    105     refine_template=refine_template,
    106     summary_template=summary_template,
    107     simple_template=simple_template,
    108     response_mode=response_mode,
    109     output_cls=output_cls,
    110     use_async=use_async,
    111     streaming=streaming,
    112 )
    114 callback_manager = callback_manager or Settings.callback_manager
    116 return cls(
    117     retriever=retriever,
    118     response_synthesizer=response_synthesizer,
    119     callback_manager=callback_manager,
    120     node_postprocessors=node_postprocessors,
    121 )

File /workspaces/Google-Cloud-Gemini-Hackathon/samba-ai/lib/python3.12/site-packages/llama_index/core/response_synthesizers/factory.py:63, in get_response_synthesizer(llm, prompt_helper, text_qa_template, refine_template, summary_template, simple_template, response_mode, callback_manager, use_async, streaming, structured_answer_filtering, output_cls, program_factory, verbose)
     57 callback_manager = callback_manager or Settings.callback_manager
     58 llm = llm or Settings.llm
     59 prompt_helper = (
     60     prompt_helper
     61     or Settings._prompt_helper
     62     or PromptHelper.from_llm_metadata(
---> 63         llm.metadata,
     64     )
     65 )
     67 if response_mode == ResponseMode.REFINE:
     68     return Refine(
     69         llm=llm,
     70         callback_manager=callback_manager,
   (...)
     78         verbose=verbose,
     79     )

File /workspaces/Google-Cloud-Gemini-Hackathon/samba-ai/lib/python3.12/site-packages/llama_index/llms/sambanovacloud/base.py:206, in SambaNovaCloud.metadata(self)
    204 @property
    205 def metadata(self) -> LLMMetadata:
--> 206     return LLMMetadata(
    207         context_window=None,
    208         num_output=self.max_tokens,
    209         is_chat_model=True,
    210         model_name=self.model,
    211     )

File /workspaces/Google-Cloud-Gemini-Hackathon/samba-ai/lib/python3.12/site-packages/pydantic/main.py:212, in BaseModel.__init__(self, **data)
    210 # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
    211 __tracebackhide__ = True
--> 212 validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
    213 if self is not validated_self:
    214     warnings.warn(
    215         'A custom validator is returning a value other than `self`.\n'
    216         "Returning anything other than `self` from a top level model validator isn't supported when validating via `__init__`.\n"
    217         'See the `model_validator` docs (https://docs.pydantic.dev/latest/concepts/validators/#model-validators) for more details.',
    218         category=None,
    219     )

ValidationError: 1 validation error for LLMMetadata
context_window
  Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.9/v/int_type
@Ashish-Soni08 Ashish-Soni08 added bug Something isn't working triage Issue needs to be triaged/prioritized labels Nov 21, 2024

dosubot bot commented Nov 21, 2024

@Ashish-Soni08 Hello there! I'm here to assist you with any coding bugs or questions you have. If you need any help, feel free to ask.

To resolve the validation error for LLMMetadata related to the context_window when using the SambaNovaCloud integration in LlamaIndex, ensure that the context_window is correctly set and is a valid integer. The error occurs because context_window is being set to None, which is not a valid integer.
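To see why the traceback ends with int_type, here is a minimal stdlib-only sketch (not pydantic's actual internals) of what a strict integer field check does: None is simply not an int, so a field typed as int rejects it outright.

```python
def validate_int_field(name: str, value) -> int:
    """Mimic a strict integer check like pydantic's int_type validator.

    bool is excluded because it is a subclass of int but not a valid
    integer field value in strict mode.
    """
    if not isinstance(value, int) or isinstance(value, bool):
        raise ValueError(
            f"{name}: Input should be a valid integer, got {value!r}"
        )
    return value

# A real integer passes the check unchanged.
validate_int_field("num_output", 1024)

# None fails, mirroring the context_window error in the traceback.
try:
    validate_int_field("context_window", None)
except ValueError as err:
    print(err)
```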

You should modify the metadata property in the SambaNovaCloud class to ensure that context_window is initialized with a valid integer value. Here's an example of how you might set it:

@property
def metadata(self) -> LLMMetadata:
    """LLM metadata."""
    return LLMMetadata(
        context_window=1024,  # Set this to a valid integer value
        num_output=self.max_tokens,
        is_chat_model=True,
        model_name=self.model,
    )

Make sure that the context_window value is appropriate for your model's capabilities and does not exceed its context length. Adjust the value as needed based on your specific requirements and the model's specifications [1].
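Until the integration itself is fixed, a user-side workaround is to subclass the LLM and override the metadata property so context_window is always a concrete integer. The sketch below uses stand-in classes (Metadata and BaseLLM are illustrative, not the real llama_index imports), and the 8192 window is an assumed limit you should replace with your model's documented context length:

```python
from dataclasses import dataclass


@dataclass
class Metadata:
    """Stand-in for LLMMetadata; context_window must be an int."""
    context_window: int
    num_output: int
    is_chat_model: bool
    model_name: str


class BaseLLM:
    """Stand-in for the SambaNovaCloud class from the integration."""

    def __init__(self, model: str, max_tokens: int):
        self.model = model
        self.max_tokens = max_tokens

    @property
    def metadata(self) -> Metadata:
        # In the real pydantic model, this None is what triggers
        # the ValidationError shown in the traceback.
        return Metadata(None, self.max_tokens, True, self.model)


class PatchedLLM(BaseLLM):
    """Subclass that overrides metadata with a valid context window."""

    @property
    def metadata(self) -> Metadata:
        return Metadata(
            context_window=8192,  # assumed limit; check your model card
            num_output=self.max_tokens,
            is_chat_model=True,
            model_name=self.model,
        )


llm = PatchedLLM("Meta-Llama-3.2-3B-Instruct", max_tokens=1024)
assert isinstance(llm.metadata.context_window, int)
```

The same override pattern applies to the real class: subclass SambaNovaCloud, redefine the metadata property, and use the subclass wherever the original was used.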

To continue talking to Dosu, mention @dosu.
