
Check overhead for comment evaluation #174

Open
gentlementlegen opened this issue Oct 28, 2024 · 0 comments
> ```diff
> ! Failed to run comment evaluation. Error: 400 This model's maximum context length is 128000 tokens. However, your messages resulted in 148540 tokens. Please reduce the length of the messages.
> ```

<!--
https://github.com/ubiquity-os-marketplace/text-conversation-rewards/actions/runs/11463496789
{
  "status": 400,
  "headers": {
    "access-control-expose-headers": "X-Request-ID",
    "alt-svc": "h3=\":443\"; ma=86400",
    "cf-cache-status": "DYNAMIC",
    "cf-ray": "8d6a8635dd992009-IAD",
    "connection": "keep-alive",
    "content-length": "284",
    "content-type": "application/json",
    "date": "Tue, 22 Oct 2024 15:29:41 GMT",
    "openai-organization": "ubiquity-dao-8veapj",
    "openai-processing-ms": "375",
    "openai-version": "2020-10-01",
    "server": "cloudflare",
    "set-cookie": "__cf_bm=urRioyrKlQBCiRkxcgeZKjDpvmvjEQsjfq1o9zASCxs-1729610981-1.0.1.1-u3eEr.AKdcx2EGJuW2nauw6LA5zK0ZDXyOKJiCI01E_pfZOpnzWIJoxgLq_OlO8BDT_WFfSD_jFjjW6Fnmx_Mw; path=/; expires=Tue, 22-Oct-24 15:59:41 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None, _cfuvid=qIG5Ao6fOQ9MAWT6hlX2fjC8G.yTYmXl4vzXjH7Qqsg-1729610981415-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None",
    "strict-transport-security": "max-age=31536000; includeSubDomains; preload",
    "x-content-type-options": "nosniff",
    "x-ratelimit-limit-requests": "5000",
    "x-ratelimit-limit-tokens": "450000",
    "x-ratelimit-remaining-requests": "4999",
    "x-ratelimit-remaining-tokens": "83951",
    "x-ratelimit-reset-requests": "12ms",
    "x-ratelimit-reset-tokens": "48.806s",
    "x-request-id": "req_bb581eb70b2276ea9a9c563b12f6343b"
  },
  "request_id": "req_bb581eb70b2276ea9a9c563b12f6343b",
  "error": {
    "message": "This model's maximum context length is 128000 tokens. However, your messages resulted in 148540 tokens. Please reduce the length of the messages.",
    "type": "invalid_request_error",
    "param": "messages",
    "code": "context_length_exceeded"
  },
  "code": "context_length_exceeded",
  "param": "messages",
  "type": "invalid_request_error",
  "caller": "/home/runner/work/text-conversation-rewards/text-conversation-rewards/dist/index.js:291:6136492"
}
-->

@gentlementlegen perhaps we have too much overhead with each pull? And by that I mean headers and such, not the main content? Because I don't imagine that each pull actually has that much "body" content. This can easily be optimized, as I see some have barely any comments.

Originally posted by @0x4007 in ubiquity-os/ubiquity-os-kernel#80 (comment)
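
The numbers in the error (148,540 prompt tokens against a 128,000-token window) make it worth measuring where those tokens actually come from before trimming anything. Below is a rough sketch, not the plugin's code: it assumes `@octokit/rest` for fetching the comments and `js-tiktoken` for counting, and compares the token cost of the comment bodies alone with the cost of the raw serialized payload, which is roughly the "headers and such" overhead mentioned above.

```ts
// Minimal sketch, not the plugin's actual implementation: it assumes comments are
// fetched with @octokit/rest and counted with js-tiktoken. The 128k limit comes from
// the error above; adjust it and the encoding to whatever model the plugin really uses.
import { Octokit } from "@octokit/rest";
import { getEncoding } from "js-tiktoken";

const MODEL_CONTEXT_LIMIT = 128_000;
// cl100k_base is an approximation; the exact tokenizer depends on the model.
const encoder = getEncoding("cl100k_base");

function countTokens(text: string): number {
  return encoder.encode(text).length;
}

async function measurePromptOverhead(owner: string, repo: string, issueNumber: number) {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
  const comments = await octokit.paginate(octokit.rest.issues.listComments, {
    owner,
    repo,
    issue_number: issueNumber,
    per_page: 100,
  });

  // Tokens for the text we actually want evaluated...
  const bodyTokens = countTokens(comments.map((c) => c.body ?? "").join("\n"));
  // ...versus tokens for the raw payload: user objects, URLs, reactions, timestamps.
  const payloadTokens = countTokens(JSON.stringify(comments));

  console.log(`body-only tokens:    ${bodyTokens}`);
  console.log(`full-payload tokens: ${payloadTokens}`);
  console.log(`metadata overhead:   ${(((payloadTokens - bodyTokens) / payloadTokens) * 100).toFixed(1)}%`);

  if (bodyTokens > MODEL_CONTEXT_LIMIT) {
    // Trimming metadata is not enough here; the prompt would need chunking or summarization.
    console.warn("comment bodies alone exceed the model context window");
  }
}
```

If the gap between the two counts is large, stripping each comment down to its body (plus whatever minimal metadata the evaluation needs) before building the prompt should keep most runs under the limit; if the bodies alone still exceed it, the evaluation would have to be chunked or summarized instead.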
