Docs Request: Compare Average Latency & Cost #52
Comments
@bestickley: Based on my personal experience, page load time depends on how much traffic your website receives. The initial page load (a "cold start") typically carries about a 2-second overhead for me, but with consistent traffic, subsequent requests are faster. Load time is also influenced by whether or not you cache the response: combined with CloudFront and an efficient caching policy, page responses can be served from CloudFront's cache in under 250 ms. Most companies use Vercel directly, but CDN costs could be very high depending on the use case.
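A minimal sketch of the caching piece described above: emitting a `Cache-Control` header that a CloudFront cache policy can honor. The helper name and the specific directive values are illustrative assumptions, not part of any project's API; tune `s-maxage` to your own traffic.

```typescript
// Sketch: build a Cache-Control header so CloudFront can serve cached
// responses. Values are illustrative assumptions, not recommendations.
function buildCacheControl(
  sMaxAgeSeconds: number,
  staleWhileRevalidateSeconds: number
): string {
  // s-maxage controls the shared (CDN) cache; stale-while-revalidate lets
  // CloudFront serve a stale copy while refreshing in the background.
  return `public, s-maxage=${sMaxAgeSeconds}, stale-while-revalidate=${staleWhileRevalidateSeconds}`;
}

// In a Next.js API route or getServerSideProps, something like:
//   res.setHeader("Cache-Control", buildCacheControl(60, 300));
console.log(buildCacheControl(60, 300));
```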
@martg0, thank you for sharing your experience! That is very helpful.
I work on a medium-sized app. open-next and serverless-nextjs are so cheap that Vercel is hard to justify. Since costs scale linearly, my projections put it at around $50 per 10 million requests using CloudFront and Lambda@Edge, though it really depends on your infrastructure and your website. As for latency, nothing is stopping you from deploying the same application to both Vercel and AWS. I've found that they're pretty similar, at least in my experience, since the functions are hit constantly. One thing Vercel definitely has the upper hand on is deployment speed (time from merge to a live deployment). Another benefit of Vercel (and Seed, SST's deployment and management platform) is pull request previews.
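The linear-cost claim above can be made concrete with a back-of-envelope calculator. Everything here is an assumption for illustration: the unit prices should be checked against current AWS pricing for your region, the traffic profile is made up, and data transfer out is deliberately omitted even though (per the CDN-cost caveat earlier in the thread) it can dominate.

```typescript
// Back-of-envelope cost model for serving N requests via CloudFront +
// Lambda@Edge. All unit prices are illustrative assumptions -- verify
// against current AWS pricing. Data transfer out is NOT included.
const PRICE_PER_1M_EDGE_REQUESTS = 0.6;      // Lambda@Edge request charge (assumed)
const PRICE_PER_GB_SECOND = 0.00005001;      // Lambda@Edge duration charge (assumed)
const PRICE_PER_10K_HTTPS_REQUESTS = 0.0075; // CloudFront request charge (assumed)

function estimateCost(
  requests: number,
  avgDurationMs: number,
  memoryGb: number,
  cacheHitRatio: number
): number {
  // Only cache misses reach Lambda@Edge; hits are served by CloudFront.
  const originHits = requests * (1 - cacheHitRatio);
  const requestCharge = (originHits / 1e6) * PRICE_PER_1M_EDGE_REQUESTS;
  const gbSeconds = originHits * (avgDurationMs / 1000) * memoryGb;
  const durationCharge = gbSeconds * PRICE_PER_GB_SECOND;
  const cdnCharge = (requests / 1e4) * PRICE_PER_10K_HTTPS_REQUESTS;
  return requestCharge + durationCharge + cdnCharge;
}

// 10M requests, 50 ms average duration, 128 MB function, 80% cache hit ratio:
console.log(estimateCost(10_000_000, 50, 0.125, 0.8).toFixed(2));
```

The model also makes the thread's other point visible: raising the cache hit ratio shifts cost away from the Lambda@Edge charges entirely.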
To add to @juliocorzo's point, you can also set up your own PR previews with a GitHub Actions workflow that deploys each pull request.

With the new experimental streaming feature, TTFB is in the single-digit milliseconds, so it's extremely responsive.
Hi, thank you for OpenNext! I have a question that I think many others will have and that would be helpful to document: given a medium-sized Next.js app, what is the average cold start using OpenNext? This is an important question because the promise of SSR is improved performance, but if going serverless degrades performance significantly, that may be a blocker for development teams with strict latency requirements.
Cold start latency is a trade-off against cost, both runtime and operational. Serverless infrastructure is cheaper in runtime cost at small-to-medium scale and is usually cheaper from an operational perspective (less complex). If a development team needs lower latency than OpenNext provides but still wants to build on AWS, what do they do? They could look at AWS App Runner, which runs container workloads, meaning fewer cold starts while keeping operational complexity low. But it will likely be more expensive than the serverless compute OpenNext uses. All this to say, it would be helpful to have comparison tables like:
- Average Latency
- Cost
Has anyone created a similar comparison? If you could share it to benefit this project, that would be great!
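For anyone gathering the latency numbers such a table would need, here is a rough measurement sketch. The timings array below stands in for real samples you would collect yourself (e.g. timing `fetch()` calls against your own deployment); only the percentile helper is concrete, and the sample values are invented for illustration.

```typescript
// Sketch: compare cold vs warm latency by looking at percentiles of
// collected request timings. A cold start shows up as a high p95/p99
// while p50 reflects warm requests.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank percentile, clamped to valid indices.
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

// Hypothetical timings in ms -- note the single slow (cold) request:
const timings = [2100, 240, 180, 210, 190, 230, 200, 220];
console.log(`p50=${percentile(timings, 50)}ms p95=${percentile(timings, 95)}ms`);
```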