
BUG: error messages get swallowed up by dev agents #76

Open
rysweet opened this issue May 31, 2024 · 1 comment
rysweet commented May 31, 2024

     at Azure.Core.HttpPipelineExtensions.ProcessMessageAsync(HttpPipeline pipeline, HttpMessage message, RequestContext requestContext, CancellationToken cancellationToken)
     at Azure.AI.OpenAI.OpenAIClient.GetEmbeddingsAsync(EmbeddingsOptions embeddingsOptions, CancellationToken cancellationToken)
     at Microsoft.SemanticKernel.Connectors.OpenAI.ClientCore.RunRequestAsync[T](Func`1 request)
     --- End of inner exception stack trace ---
     at Microsoft.SemanticKernel.Connectors.OpenAI.ClientCore.RunRequestAsync[T](Func`1 request)
     at Microsoft.SemanticKernel.Connectors.OpenAI.ClientCore.GetEmbeddingsAsync(IList`1 data, Kernel kernel, CancellationToken cancellationToken)
     at Microsoft.SemanticKernel.Embeddings.EmbeddingGenerationExtensions.GenerateEmbeddingAsync[TValue,TEmbedding](IEmbeddingGenerationService`2 generator, TValue value, Kernel kernel, CancellationToken cancellationToken)
     at Microsoft.SemanticKernel.Memory.SemanticTextMemory.SearchAsync(String collection, String query, Int32 limit, Double minRelevanceScore, Boolean withEmbeddings, Kernel kernel, CancellationToken cancellationToken)+MoveNext()
     at Microsoft.SemanticKernel.Memory.SemanticTextMemory.SearchAsync(String collection, String query, Int32 limit, Double minRelevanceScore, Boolean withEmbeddings, Kernel kernel, CancellationToken cancellationToken)+System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult()

at Microsoft.AI.Agents.Orleans.AiAgent`1.AddKnowledge(String instruction, String index, KernelArguments arguments) in /Users/ryan/src/project-oagents/src/Microsoft.AI.Agents.Orleans/AiAgent.cs:line 68
at Microsoft.AI.DevTeam.ProductManager.CreateReadme(String ask) in /Users/ryan/src/project-oagents/samples/gh-flow/src/Microsoft.AI.DevTeam/Agents/ProductManager/ProductManager.cs:line 65
Microsoft.AI.DevTeam.ProductManager: Error: Error creating readme

Microsoft.SemanticKernel.HttpOperationException: This model's maximum context length is 4095 tokens, however you requested 5730 tokens (5730 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
Status: 400 (model_error)

Content:
{
  "error": {
    "message": "This model's maximum context length is 4095 tokens, however you requested 5730 tokens (5730 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}
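For the 400 itself, one likely mitigation is clamping the text handed to the embedding model before the request goes out. A rough sketch (the `MaxPromptTokens` constant and the chars-per-token heuristic are assumptions, not part of this repo; a real fix would count tokens with an actual tokenizer such as SharpToken):

```csharp
// Sketch only: keep the embedding input under the model's context window.
// The model limit in the error above is 4095 tokens; ~4 chars/token is a
// rough English-text heuristic, not an exact measure.
private const int MaxPromptTokens = 4000;

private static string ClampForEmbedding(string text)
{
    int maxChars = MaxPromptTokens * 4;
    return text.Length <= maxChars ? text : text.Substring(0, maxChars);
}
```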


The error is in the logs, but what the bot posts in the issue isn't helpful: "Sorry, I got tired, can you try again please?"
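A minimal sketch of what not swallowing the error could look like at the call site (hypothetical shape; `PostComment` and the exact catch location are assumptions about this codebase, not its actual API — `HttpOperationException` is the real Semantic Kernel exception type from the trace above):

```csharp
try
{
    await AddKnowledge(instruction, index, arguments);
}
catch (HttpOperationException ex)
{
    // Surface the failure class instead of a generic apology.
    // ex.StatusCode and ex.Message carry the "maximum context length" detail.
    await PostComment($"I hit an error while generating the readme " +
                      $"({(int?)ex.StatusCode}: {ex.Message}). " +
                      "This usually means the input was too large for the model.");
    throw; // still let the orchestrator see the failure
}
```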

rysweet commented May 31, 2024

I can understand not wanting to include the error in the comment, but could it maybe link to log analytics?
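One way that suggestion could look (a sketch under assumptions: `PostComment` and `_logger` are hypothetical members of the agent, and no verified Log Analytics deep-link format is shown — only a correlation id the maintainer can search for):

```csharp
// Sketch: post a correlation id instead of the raw error, so the issue
// comment stays clean but the matching log entry is findable.
var correlationId = Guid.NewGuid().ToString("N");
_logger.LogError(ex, "CreateReadme failed. CorrelationId={CorrelationId}",
                 correlationId);
await PostComment(
    "Sorry, I hit an error while working on this. A maintainer can find " +
    $"the details in the logs with correlation id `{correlationId}`.");
```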
