
Request errors on initial use: Missing valid openai response #11

Open
whoabuddy opened this issue Jun 2, 2023 · 8 comments

Comments

@whoabuddy

I am really excited about the concept here, but after the initial setup I ran into some errors on Linux Mint 20.3 (Ubuntu Focal).

Steps taken:

  • install with npm i -g smol-dev-js
  • run smol-dev-js setup
    • Chose OpenAI (on the waitlist for Anthropic!)
    • Entered API key
    • Set remaining settings for project I was in
  • run smol-dev-js run

Output from terminal:

$ smol-dev-js run
--------------------
🐣 [ai]: hi its me, the ai dev ! you said you wanted
         here to help you with your project, which is a ....
--------------------
CityCoins are cryptocurrencies that allow you to support your favorite cities while earning Stacks and Bitcoin.
--------------------
🐣 [ai]: What would you like me to do? (PS: this is not a chat system, there is no chat memory prior to this point)
✔ [you]:  … Suggest something please
🐣 [ai]: (node:147273) ExperimentalWarning: The Fetch API is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
## Unable to handle prompt for ...
{"model":"gpt-4","temperature":0,"max_tokens":7905,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"system","content":"You are an assistant, who can only reply in JSON object, reply with a yes (in a param named 'reply') if you understand"},{"role":"assistant","content":"{\"reply\":\"yes\"}"},{"role":"user","content":"[object Object]\n[object Object]\n[object Object]\n[object Object]\n[object Object]"}]}
## Recieved error ...
[invalid_request_error] undefined
## Unable to handle prompt for ...
{"model":"gpt-4","temperature":0,"max_tokens":7873,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"system","content":"You are an assistant, who can only reply in JSON object, reply with a yes (in a param named 'reply') if you understand"},{"role":"assistant","content":"{\"reply\":\"yes\"}"},{"role":"user","content":"[object Object]\n[object Object]\n[object Object]\n[object Object]\n[object Object]"},{"role":"user","content":"Please update your answer, and respond with only a single JSON object, in the requested format. No apology is needed."}]}
## Recieved error ...
[invalid_request_error] undefined
Error: Missing valid openai response, please check warn logs for more details
    at getChatCompletion (/home/whoabuddy/.nvm/versions/node/v18.12.1/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/openai/getChatCompletion.js:290:8)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Object.promiseGenerator (/home/whoabuddy/.nvm/versions/node/v18.12.1/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/AiBridge.js:249:11)
Last Completion null
Error: Missing valid openai response, please check warn logs for more details
    at getChatCompletion (/home/whoabuddy/.nvm/versions/node/v18.12.1/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/openai/getChatCompletion.js:290:8)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Object.promiseGenerator (/home/whoabuddy/.nvm/versions/node/v18.12.1/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/AiBridge.js:249:11)

Also noticed "Recieved" is misspelled in the error output above 🔬

How can I check the warn logs that were mentioned?
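(Aside: the "[object Object]" entries in the request payload above look like plain objects being coerced to strings somewhere upstream, which may be a separate bug from the API error. A minimal illustration of the coercion, not taken from the smol-dev-js codebase:)

```javascript
// Joining plain objects into a string invokes the default Object
// toString(), which produces "[object Object]" and loses the contents.
const parts = [
  { role: "user", text: "hello" },
  { role: "user", text: "world" },
];

// Implicit string coercion loses the contents:
const coerced = parts.join("\n"); // "[object Object]\n[object Object]"

// Serializing each part keeps them readable:
const serialized = parts.map((p) => JSON.stringify(p)).join("\n");

console.log(coerced);
console.log(serialized);
```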

@jojonoparat

Can you check whether your OpenAI account has access to the gpt-4 model from your API key? List all the available models for your account with curl:

curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"

and see whether the response contains the gpt-4 model.
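If you'd rather check in Node, here is a small sketch; the sample object mirrors the shape of the list-models payload, and hasModel is just an illustrative helper, not part of any library:

```javascript
// Given the parsed JSON from GET https://api.openai.com/v1/models,
// check whether a specific model id is available to this API key.
function hasModel(modelsResponse, modelId) {
  return (modelsResponse.data || []).some((m) => m.id === modelId);
}

// Example with a response shaped like the OpenAI list-models payload:
const sample = {
  object: "list",
  data: [{ id: "gpt-3.5-turbo" }, { id: "gpt-4" }],
};

console.log(hasModel(sample, "gpt-4")); // true
```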

@rowntreerob

rowntreerob commented Jun 3, 2023

+1, I have a subscription at OpenAI with an API key and access to a very long list of models, including "gpt-3.5" and "gpt-3.5-turbo".

However, no access to gpt-4.

The prerequisites "get an API key" and "set up a pay-as-you-go account with billing" are NOT enough for gpt-4 to be the configured OpenAI model.

@whoabuddy
Author

Thanks @jojonoparat, looking at the list gpt-4 isn't on there. I assumed it would be, but I just signed up for the waitlist. Is that the problem? Could this work with 3.5?

@jojonoparat

I have not tried it myself, but maybe you could hard-code it in that file by adding or changing a line like this:
model = "gpt-3.5-turbo" // <-- your available model name
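A slightly safer variant of that workaround would fall back only when the configured model is unavailable. This is an untested sketch; all names here (AVAILABLE, pickModel) are illustrative and not from ai-bridge:

```javascript
// Models your API key actually has access to (whatever /v1/models
// returned for your account).
const AVAILABLE = ["gpt-3.5-turbo"];

// Use the configured model if available, otherwise fall back.
function pickModel(configured) {
  return AVAILABLE.includes(configured) ? configured : "gpt-3.5-turbo";
}

console.log(pickModel("gpt-4")); // "gpt-3.5-turbo"
```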

@PicoCreator
Owner

The current prompts were not designed for gpt-3.5's context size, so they may produce very unexpected behaviours.

@whoabuddy
Author

whoabuddy commented Jun 8, 2023

Thank you both for the quick replies!

@jojonoparat that makes sense, and looking there you could also use a prompt (per L35) to override the config 😎

@PicoCreator in that case I'll leave it as-is and hope for access to Claude or GPT-4 soon!

It would be helpful if there was:

  • a small warning / set of instructions in the readme to check available models (I see the update now!)
  • an error message that indicates the gpt-4 model isn't found / available

@rowntreerob

While waiting for 4.0, I tried the override (hard-coded to "gpt-3.5-turbo-16k-0613").

No change, except the literal value in the error message...

- ## Recieved error ...
- [invalid_request_error] undefined
- ## Unable to handle prompt for ...
- {"model":"gpt-3.5-turbo-16k-0613","temperature":0,"max_tokens":3923,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"system","content":"You are an assistant, who can only reply in JSON object, reply with a yes (in a param named 'reply') if you understand"},{"role":"assistant","content":"{\"reply\":\"yes\"}"},{"role":"user","content":"[object Object]\n[object Object]\n[object Object]\n[object Object]\n[object Object]"},{"role":"user","content":"Please update your answer, and respond with only a single JSON object, in the requested

@liam-k

liam-k commented Nov 9, 2023

I'm having the same problem. The weird thing is, the prompt works at first and GPT correctly lays out the plan. But once I actually confirm the plan and it tries to execute it, it breaks. So yeah, it seems to be ai-bridge.

🐣 [ai]: Working on the plan ...
🐣 [ai]: Studying 0 dependencies (in parallel)
🐣 [ai]: Performing any required modules install / file moves / deletion
🐣 [ai]: Studying 0 dependencies (awaiting in parallel)
🐣 [ai]: Preparing summaries for smol-er sub-operations ...
## Unable to handle prompt for ...
{"model":"gpt-4","temperature":0,"max_tokens":5394,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"user","content":"You are an AI developer...."}]}

## Recieved error ...
[tokens] undefined
Error: Missing valid openai response, please check warn logs for more details
    at getChatCompletion (/Users/liam/.asdf/installs/nodejs/19.2.0/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/openai/getChatCompletion.js:290:8)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Object.promiseGenerator (/Users/liam/.asdf/installs/nodejs/19.2.0/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/AiBridge.js:249:11)



5 participants