diff --git a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
index 9db4040411..136644d208 100644
--- a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
+++ b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
@@ -7,7 +7,10 @@ agent and server settings, versions, and protocol.
 We tested several scenarios to help you understand how to size the APM Server so
 that it can keep up with the load that your Elastic APM agents are sending:
 
-* Using the default hardware template on AWS, GCP and Azure on {ecloud}.
+* Using the _CPU Optimized_ hardware template on AWS, GCP and Azure on {ecloud} with the following instances (for more details see {cloud}/ec-configure-deployment-settings.html#ec-hardware-profiles[Hardware Profiles]):
+** AWS: c6gd
+** Azure: fsv2
+** GCP: n2.68x32x45
 * For each hardware template, testing with several sizes: 1 GB, 4 GB, 8 GB, and 32 GB.
 * For each size, using a fixed number of APM agents: 10 agents for 1 GB, 30 agents for 4 GB, 60 agents for 8 GB, and 240 agents for 32 GB.
 * In all scenarios, using medium sized events. Events include
@@ -29,47 +32,47 @@ specific setup, the size of APM event data, and the exact number of agents.
 
 | *1 GB* (10 agents)
-| 9,000
+| 15,000
 events/second
-| 6,000
+| 14,000
 events/second
-| 9,000
+| 17,000
 events/second
 
 | *4 GB* (30 agents)
-| 25,000
+| 29,000
 events/second
-| 18,000
+| 26,000
 events/second
-| 17,000
+| 35,000
 events/second
 
 | *8 GB* (60 agents)
-| 40,000
+| 50,000
 events/second
-| 26,000
+| 34,000
 events/second
-| 25,000
+| 48,000
 events/second
 
 | *16 GB* (120 agents)
-| 72,000
+| 96,000
 events/second
-| 51,000
+| 57,000
 events/second
-| 45,000
+| 90,000
 events/second
 
 | *32 GB* (240 agents)
-| 135,000
+| 133,000
 events/second
-| 95,000
+| 89,000
 events/second
-| 95,000
+| 143,000
 events/second
 |====