Performance optimisation #1140

Open
Wonshtrum opened this issue Oct 10, 2024 · 1 comment
Comments

@Wonshtrum
Member

Some recent analysis of flame graphs and call graphs has revealed some inefficiencies that leave room for significant improvement:

  • editor::HttpContext::on_headers can take up to 75% of the Http::readable time and 30% of Http::backend_readable, mainly due to expensive formatting (see the buffer-reuse sketch after this list)
  • Kawa::as_io_slice is up to 3 times slower than Kawa::prepare and takes up to 60% of Http::backend_writable time and 25% of Http::writable due to a Vec allocation
  • fmt::Display of certain structs seems really expensive, taking up to 23% of total runtime (I don't know if we can avoid it; most of these structs, like SocketAddr, come from std)
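
For the formatting hot spot, a generic mitigation is to reuse a scratch buffer and `write!` into it instead of building a fresh `String` for every edited header. This is only a sketch of the pattern, not Sozu's actual API: `HeaderEditor` and `forwarded_for` are hypothetical names, and the real `on_headers` logic is more involved.

```rust
use std::fmt::Write;
use std::net::SocketAddr;

/// Hypothetical header editor that keeps a reusable scratch buffer.
struct HeaderEditor {
    scratch: String,
}

impl HeaderEditor {
    fn new() -> Self {
        Self { scratch: String::with_capacity(256) }
    }

    /// Builds a forwarded-for header value in the reused buffer.
    /// `write!` appends into existing capacity, while `format!`
    /// allocates a brand new String on every call.
    fn forwarded_for(&mut self, peer: SocketAddr) -> &str {
        self.scratch.clear();
        write!(self.scratch, "X-Forwarded-For: {}", peer.ip())
            .expect("writing to a String cannot fail");
        &self.scratch
    }
}

fn main() {
    let mut editor = HeaderEditor::new();
    let peer: SocketAddr = "127.0.0.1:4242".parse().unwrap();
    println!("{}", editor.forwarded_for(peer));
}
```

Note that this only removes the per-call allocation; the `Display` cost of `SocketAddr`/`IpAddr` itself (the third point above) is still paid.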

It also revealed some broader problems. Here are the ones that stand out, with their percentage of total execution time in a highly optimized build with no logs nor access logs:

  • allocation related code: 50%
  • on_headers: 28%
  • as_io_slice: 14%
  • realloc: 14%
  • fmt related code: 14%
  • gettime: 11% (see the clock-caching sketch below)
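
As a generic illustration for the gettime share (not a description of Sozu's actual event loop), a common mitigation is to read the clock once per event-loop iteration and let everything handled in that iteration share the cached value:

```rust
use std::time::Instant;

/// Hypothetical cached clock, refreshed once per event-loop iteration
/// instead of querying the system clock for every log line or timeout check.
struct LoopClock {
    now: Instant,
}

impl LoopClock {
    fn new() -> Self {
        Self { now: Instant::now() }
    }

    /// One clock read at the top of each iteration.
    fn tick(&mut self) {
        self.now = Instant::now();
    }

    /// Cheap accessor shared by all sessions handled during this iteration.
    fn now(&self) -> Instant {
        self.now
    }
}

fn main() {
    let mut clock = LoopClock::new();
    for _ in 0..3 {
        clock.tick();
        // ... handle all ready sockets, each reusing clock.now() ...
        println!("iteration at {:?}", clock.now());
    }
}
```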
@Wonshtrum
Member Author

A related note, even if it doesn't impact performance noticeably: the port is still separated from the authority on the Sozu side rather than Kawa's. It is also done redundantly, with two different methods: once in frontend_from_request with hostname_and_port, and once in log_request with split_once.
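
For illustration, a single split_once-based helper could split the authority once and let both frontend_from_request and log_request reuse the result. This is just a sketch, not existing Sozu or Kawa API, and it deliberately ignores IPv6 literals:

```rust
/// Hypothetical helper: split an authority into host and optional port once,
/// so routing and access logging share the result instead of re-parsing it
/// with two different methods.
fn split_authority(authority: &str) -> (&str, Option<&str>) {
    // NOTE: a real implementation must special-case IPv6 literals such as
    // "[::1]:8080", which this naive split does not handle.
    match authority.rsplit_once(':') {
        Some((host, port)) => (host, Some(port)),
        None => (authority, None),
    }
}

fn main() {
    assert_eq!(split_authority("example.org:8443"), ("example.org", Some("8443")));
    assert_eq!(split_authority("example.org"), ("example.org", None));
}
```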
