
AI Quality Really Drops Off When There Is a Lot of Context #269

Open
jlewi opened this issue Oct 1, 2024 · 1 comment
jlewi commented Oct 1, 2024

Here's a long notebook:
https://gist.github.com/jlewi/31857545ec62b36d2949ccd904918d53

The final markdown cell contained the following prompt:

Fetch the trace 6518ae12b7aa41271978d9da16403df2 and render the request as HTML
Use the GetLLMLogs endpoint to fetch it and write it to a file
Then use Foyle llms render to render it as HTML

The LLM gave a really terrible response. It ended up giving a curl command invoking GetTrace, which is a non-existent RPC.
The generated block id was 01J9545C1FHMBD9YNHFYTZGGSY.

I think the problem is that all of the context (both in the notebook itself and likely in the retrieved examples) is confusing the AI.

jlewi commented Oct 4, 2024

I think this could be partially due to the fact that our embeddings computation

func (v *Vectorizer) Embed(ctx context.Context, text string) (*mat.VecDense, error) {

tries to compute the embeddings of the full notebook. If the notebook exceeds the model's context window, then no embeddings are computed and we don't get any RAG results. So for really long notebooks we don't benefit from RAG.
