
Docs Request: Compare Average Latency & Cost #52

Open · bestickley opened this issue Mar 8, 2023 · 4 comments
Labels: documentation (Improvements or additions to documentation)

Comments

bestickley commented Mar 8, 2023

Hi, thank you for OpenNext! I have a question that I think many others will have, and it would be helpful to document the answer. Given a medium-sized Next.js app, what is the average cold start using OpenNext? This is an important question because the promise of SSR is improved performance, but if using serverless degrades performance significantly, that may be a blocker for some development teams given latency requirements.

Cold start latency is a trade-off with cost, both runtime and operational. Serverless infrastructure is less expensive in runtime cost at small-to-medium scale, and is usually less expensive operationally (less complex). If a development team needs lower latency than OpenNext provides but still wants to build on AWS, what do they do? They could look to AWS App Runner, which runs container workloads and so implies fewer cold starts, while keeping operational complexity small. But it will likely be more expensive than the serverless compute used by OpenNext. All this to say, it would be helpful to have comparison tables like:

Average Latency

| Infrastructure | Small App | Medium App | Large App |
| --- | --- | --- | --- |
| OpenNext (Lambda & Lambda@Edge) | X ms | X ms | X ms |
| App Runner | X ms | X ms | X ms |
| Vercel | X ms | X ms | X ms |
| Other? | X ms | X ms | X ms |

Cost

| Infrastructure | Small App | Medium App | Large App |
| --- | --- | --- | --- |
| OpenNext (Lambda & Lambda@Edge) | $X | $X | $X |
| App Runner | $X | $X | $X |
| Vercel | $X | $X | $X |
| Other? | $X | $X | $X |

Has anyone created a similar comparison? If you could share it to benefit this project, that would be great!

martg0 commented Mar 9, 2023

@bestickley: Based on my personal experience, page load time depends on the amount of traffic your website receives. The initial page load (a cold start) typically has an overhead of about 2 seconds for me. However, if you have consistent traffic, subsequent requests tend to be faster. Page load time is also influenced by whether or not you cache the response. Combined with CloudFront and an efficient caching policy, page responses can be served from a cached version at CloudFront in less than 250 ms. Most companies use Vercel directly, but CDN costs can be very high depending on the use case.
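
As a concrete illustration of the kind of caching policy mentioned above, here is a minimal sketch (not from this thread; the page and values are hypothetical) of an SSR page setting a Cache-Control header that CloudFront can honor:

```ts
// pages/products.tsx — hypothetical example page.
// CloudFront respects Cache-Control on the origin response, so s-maxage
// lets repeat requests be served from the edge instead of invoking Lambda.
import type { GetServerSideProps } from "next";

export const getServerSideProps: GetServerSideProps = async ({ res }) => {
  // Cache at the CDN for 60 s; serve stale for 30 s while revalidating.
  res.setHeader(
    "Cache-Control",
    "public, s-maxage=60, stale-while-revalidate=30"
  );
  return { props: { renderedAt: new Date().toISOString() } };
};

export default function Products({ renderedAt }: { renderedAt: string }) {
  return <p>Rendered at {renderedAt}</p>;
}
```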

bestickley (Author)

@martg0, thank you for sharing your experience! That is very helpful.

juliocorzo commented May 20, 2023

I work on a medium-sized app. open-next and serverless-nextjs are so cheap that Vercel is hard to justify. Costs scale roughly linearly; based on my projections, it would cost around $50 per 10 million requests using CloudFront and Lambda@Edge. But it really depends on your infrastructure and your website.
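
For anyone reproducing this kind of projection, a back-of-envelope sketch follows. Every unit price in it is an assumption for illustration (rough US-region figures); check the current AWS pricing pages before relying on it:

```ts
// Hypothetical cost model for 10M SSR requests via CloudFront + Lambda@Edge.
// All unit prices below are assumptions; verify against current AWS pricing.
const requests = 10_000_000;

const cloudFrontPerRequest = 0.01 / 10_000;    // ~$0.01 per 10k HTTPS requests
const lambdaEdgePerRequest = 0.60 / 1_000_000; // $0.60 per 1M Lambda@Edge requests
const lambdaEdgePerGbSecond = 0.00005001;      // duration price per GB-second

// Assume 128 MB memory and ~200 ms average duration per invocation.
const gbSeconds = requests * (128 / 1024) * 0.2;

const total =
  requests * (cloudFrontPerRequest + lambdaEdgePerRequest) +
  gbSeconds * lambdaEdgePerGbSecond;

console.log(`$${total.toFixed(2)}`); // ≈ $28.50 before data transfer out
```

Data transfer out (billed per GB) would add to this, which may account for the gap to the ~$50 all-in figure above; a high CloudFront cache-hit ratio pushes costs the other way.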

As for latency, nothing is stopping you from deploying the same application to Vercel and AWS. I've found that they're pretty similar, at least in my experience, since the functions are hit constantly.

One thing Vercel definitely has the upper hand on is deployment speed (time from merge, or npx sst deploy, to live). This is because, if you're using Lambda@Edge at least, updating CloudFront can take a long time. The problem is less relevant if you're using regular Lambda with function URLs.
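
As a sketch of that regional-Lambda alternative, assuming SST v2's NextjsSite construct (the edge option shown is the relevant knob; defaults may differ across versions):

```ts
// stacks/Web.ts — a minimal sketch, assuming SST v2's NextjsSite construct.
import { StackContext, NextjsSite } from "sst/constructs";

export function Web({ stack }: StackContext) {
  const site = new NextjsSite(stack, "site", {
    // edge: false serves SSR from a regional Lambda behind a function URL,
    // so deploys avoid the slow CloudFront distribution update that
    // edge: true (Lambda@Edge) triggers.
    edge: false,
  });

  stack.addOutputs({ url: site.url });
}
```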

Another benefit of Vercel (and Seed, SST's deployment and management platform) is pull request previews.

khuezy (Contributor) commented Sep 8, 2023

To add to @juliocorzo: you can also set up your own PR previews with a GitHub Actions workflow that deploys to --stage <SHA> or the branch name (see the sketch below).
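
A hypothetical sketch of such a workflow (stage naming and secrets setup are assumptions, not an official recipe):

```yaml
# .github/workflows/pr-preview.yml — hypothetical sketch.
name: PR preview
on: pull_request

jobs:
  deploy:
    runs-on: ubuntu-latest
    # Assumes AWS credentials are stored as repository secrets.
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      # One isolated stage per PR, e.g. "pr-123" (github.sha works too).
      - run: npx sst deploy --stage pr-${{ github.event.number }}
```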

With the new experimental streaming feature, TTFB is in the single-digit milliseconds, so it's extremely responsive.
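
For reference, a minimal sketch of enabling that streaming, assuming the later OpenNext v3 open-next.config.ts format (this comment predates it, and the feature was experimental):

```ts
// open-next.config.ts — a sketch assuming OpenNext v3's config format.
import type { OpenNextConfig } from "open-next/types/open-next";

const config = {
  default: {
    override: {
      // The streaming wrapper uses Lambda response streaming, which is
      // what brings TTFB down to single-digit milliseconds.
      wrapper: "aws-lambda-streaming",
    },
  },
} satisfies OpenNextConfig;

export default config;
```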

conico974 added the documentation label Sep 28, 2024