misc: add service-level benchmarks (#1006)
ianbotsf authored Aug 9, 2023
1 parent f8c9121 commit 3a20e42
Showing 21 changed files with 1,473 additions and 0 deletions.
8 changes: 8 additions & 0 deletions .changes/2fcce0d9-a174-41ab-bb48-f18bbd5a3c5f.json
@@ -0,0 +1,8 @@
{
"id": "2fcce0d9-a174-41ab-bb48-f18bbd5a3c5f",
"type": "misc",
"description": "Add service-level benchmarks",
"issues": [
"awslabs/aws-sdk-kotlin#968"
]
}
2 changes: 2 additions & 0 deletions codegen/sdk/build.gradle.kts
@@ -37,6 +37,8 @@ tasks["jar"].enabled = false
fun getProperty(name: String): String? {
if (project.hasProperty(name)) {
return project.properties[name].toString()
} else if (project.ext.has(name)) {
return project.ext[name].toString()
}

val localProperties = Properties()
1 change: 1 addition & 0 deletions settings.gradle.kts
@@ -35,6 +35,7 @@ include(":aws-runtime:aws-config")
include(":aws-runtime:aws-endpoint")
include(":aws-runtime:aws-http")
include(":tests")
include(":tests:benchmarks:service-benchmarks")
include(":tests:codegen:event-stream")
include(":tests:e2e-test-util")

93 changes: 93 additions & 0 deletions tests/benchmarks/service-benchmarks/README.md
@@ -0,0 +1,93 @@
# Service benchmarks

This module is used for benchmarking the performance of generated clients against AWS services. The top 7 services (by
traffic coming from the AWS SDK for Kotlin) are tested, metrics are captured, and summaries are distilled after the
runs are complete.

## Instructions

To run the benchmarks:
* `./gradlew :tests:benchmarks:service-benchmarks:bootstrapAll`
This ensures that all the required service clients are bootstrapped and ready to be built. **You only need to do this
once** in your workspace unless you clean up generated services or make a change to codegen.
* `./gradlew build`
This builds the whole SDK.
* `./gradlew :tests:benchmarks:service-benchmarks:run`
This runs the benchmark suite and prints the results to the console formatted as a Markdown table.

## Baseline as of 8/8/2023

The following benchmark run serves as a baseline for future runs:

### Environment

| Hardware type | Operating system | SDK version |
|----------------|------------------|-----------------|
| EC2 m5.4xlarge | Amazon Linux 2 | 0.30.0-SNAPSHOT |

### Results

| | Overhead (ms) | n | min | avg | med | p90 | p99 | max |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| **S3** | | | | | | | | |
| —HeadObject | | 1715 | 0.334 | 0.561 | 0.379 | 0.521 | 3.149 | 20.071 |
| —PutObject | | 739 | 0.306 | 0.492 | 0.337 | 0.389 | 7.958 | 16.556 |
| **SNS** | | | | | | | | |
| —GetTopicAttributes | | 3041 | 0.235 | 0.494 | 0.354 | 0.461 | 2.964 | 17.129 |
| —Publish | | 1001 | 0.199 | 0.394 | 0.224 | 0.420 | 1.262 | 16.160 |
| **STS** | | | | | | | | |
| —AssumeRole | | 1081 | 0.273 | 0.419 | 0.349 | 0.485 | 0.622 | 14.781 |
| —GetCallerIdentity | | 4705 | 0.157 | 0.242 | 0.184 | 0.217 | 0.414 | 13.459 |
| **CloudWatch** | | | | | | | | |
| —GetMetricData | | 1500 | 0.174 | 1.352 | 0.219 | 3.239 | 13.830 | 15.193 |
| —PutMetricData | | 2452 | 0.133 | 1.194 | 0.144 | 1.911 | 13.007 | 14.862 |
| **CloudWatch Events** | | | | | | | | |
| —DescribeEventBus | | 1500 | 0.156 | 0.290 | 0.187 | 0.238 | 0.530 | 18.934 |
| —PutEvents | | 4577 | 0.152 | 0.293 | 0.176 | 0.378 | 3.921 | 10.022 |
| **DynamoDB** | | | | | | | | |
| —GetItem | | 4223 | 0.135 | 0.154 | 0.148 | 0.164 | 0.216 | 2.415 |
| —PutItem | | 3059 | 0.130 | 0.154 | 0.145 | 0.178 | 0.193 | 1.771 |
| **Pinpoint** | | | | | | | | |
| —GetEndpoint | | 555 | 0.220 | 0.401 | 0.406 | 0.452 | 0.506 | 6.606 |
| —PutEvents | | 415 | 0.242 | 0.400 | 0.420 | 0.466 | 0.619 | 2.762 |

## Methodology

This section describes, at a high level, how the benchmarks work:

### Selection criteria

These benchmarks select a handful of services to test against. The selection criterion is the top 7 services by traffic
coming from the AWS SDK for Kotlin (i.e., not from other SDKs, the console, etc.). As of 7/28/2023, those top 7 services
are S3, SNS, STS, CloudWatch, CloudWatch Events, DynamoDB, and Pinpoint (in descending order of traffic).

For each service, two APIs are selected, roughly corresponding to a read and a write operation (e.g., S3::HeadObject is
a read operation and S3::PutObject is a write operation). Efforts are made to ensure that the selected APIs are the top
operations by traffic, but alternate APIs may be chosen in cases of low throttling limits, high setup complexity, etc.
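
For illustration, the sketch below shows what such a read/write pair looks like as direct SDK calls (here, S3::PutObject
followed by S3::HeadObject). The bucket and key names are hypothetical placeholders, and this is not the benchmark
definition used by this module; the real benchmarks wrap these calls in `ServiceBenchmark`/`OperationBenchmark`
implementations with their own setup and teardown.

```kotlin
import aws.sdk.kotlin.services.s3.S3Client
import aws.sdk.kotlin.services.s3.headObject
import aws.sdk.kotlin.services.s3.putObject
import aws.smithy.kotlin.runtime.content.*
import aws.smithy.kotlin.runtime.io.use

suspend fun main() {
    // Hypothetical resource names; the real benchmarks create and delete such
    // resources in their setup/tearDown phases.
    val bucket = "example-benchmark-bucket"
    val key = "example-object"

    S3Client.fromEnvironment().use { s3 ->
        // Write operation: S3::PutObject
        s3.putObject {
            this.bucket = bucket
            this.key = key
            body = ByteStream.fromString("hello")
        }

        // Read operation: S3::HeadObject
        s3.headObject {
            this.bucket = bucket
            this.key = key
        }
    }
}
```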

### Workflow

Benchmarks are run sequentially in a single thread. This is the high-level workflow for the benchmarks (a condensed code sketch follows the list):

* For each benchmark service:
* Instantiate a client with a [special telemetry provider](#telemetry-provider)
* Run any necessary service-specific setup procedures (e.g., create/configure prerequisite resources)
* For each benchmark operation:
* Run any necessary operation-specific setup procedures (e.g., create/configure prerequisite resources)
    * Warm up the API call
* Measure the API call
* Aggregate operation metrics
* Run any necessary operation-specific cleanup procedures (e.g., delete resources created in the setup step)
* Run any necessary service-specific cleanup procedures (e.g., delete resources created in the setup step)
* Print overall metrics summary
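
The sketch below condenses this flow into standalone code. The `Operation` and `Benchmark` types here are hypothetical
simplifications for readability; the actual harness added by this commit (see `BenchmarkHarness` later in this diff)
drives `ServiceBenchmark`/`OperationBenchmark` implementations, uses time-bounded warmup/measurement phases, and records
metrics via the telemetry provider described in the next section.

```kotlin
// Hypothetical, simplified types; the real module defines ServiceBenchmark/OperationBenchmark.
interface Operation {
    val name: String
    suspend fun setup() {}
    suspend fun transact()
    suspend fun tearDown() {}
}

interface Benchmark {
    val operations: List<Operation>
    suspend fun setup() {}
    suspend fun tearDown() {}
}

suspend fun runAll(benchmarks: List<Benchmark>) {
    for (benchmark in benchmarks) {
        // (client construction with the benchmark telemetry provider is omitted here)
        benchmark.setup()                           // service-specific setup
        try {
            for (op in benchmark.operations) {
                op.setup()                          // operation-specific setup
                try {
                    repeat(100) { op.transact() }   // warmup (results discarded)
                    repeat(1_000) { op.transact() } // measured run (the real harness is time-bounded)
                    // aggregate metrics for op.name here
                } finally {
                    op.tearDown()                   // operation-specific cleanup
                }
            }
        } finally {
            benchmark.tearDown()                    // service-specific cleanup
        }
    }
    // print the overall metrics summary here
}
```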

### Telemetry provider

A custom [benchmark-specific telemetry provider][1] is used to instrument each service client. This provider handles
only metrics (i.e., no logging, tracing, etc.). It captures specific histogram metrics from an allowlist (currently
only `smithy.client.attempt_overhead_duration`) and collects them for the duration of an operation run (not including
the warmup phase). After the run is complete, the collected values are aggregated and various statistics are calculated
(e.g., minimum, average, median).

[1]: common/src/aws/sdk/kotlin/benchmarks/service/telemetry/BenchmarkTelemetryProvider.kt
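
For a rough sense of how the numbers in the results table are produced, the hedged sketch below summarizes a list of
captured duration samples. The nearest-rank percentile logic is an assumption for illustration; the module's actual
`MetricAggregator`/`MetricSummary` may compute these statistics differently.

```kotlin
// Hedged illustration only; not the module's MetricAggregator implementation.
data class Summary(
    val n: Int,
    val min: Double,
    val avg: Double,
    val med: Double,
    val p90: Double,
    val p99: Double,
    val max: Double,
)

fun summarize(samplesMs: List<Double>): Summary {
    require(samplesMs.isNotEmpty()) { "no samples captured" }
    val sorted = samplesMs.sorted()

    // Nearest-rank style percentile (simple approximation)
    fun percentile(p: Double): Double = sorted[((p / 100.0) * (sorted.size - 1)).toInt()]

    return Summary(
        n = sorted.size,
        min = sorted.first(),
        avg = sorted.average(),
        med = percentile(50.0),
        p90 = percentile(90.0),
        p99 = percentile(99.0),
        max = sorted.last(),
    )
}
```
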
104 changes: 104 additions & 0 deletions tests/benchmarks/service-benchmarks/build.gradle.kts
@@ -0,0 +1,104 @@
/*
* Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
* SPDX-License-Identifier: Apache-2.0
*/
buildscript {
repositories {
mavenCentral()
}

val atomicFuVersion: String by project

dependencies {
classpath("org.jetbrains.kotlinx:atomicfu-gradle-plugin:$atomicFuVersion")
}
}

plugins {
kotlin("multiplatform")
application
}

application {
mainClass.set("aws.sdk.kotlin.benchmarks.service.BenchmarkHarnessKt")
}

extra.set("skipPublish", true)

val platforms = listOf("common", "jvm")

platforms.forEach { platform ->
apply(from = rootProject.file("gradle/$platform.gradle"))
}

val requiredServices = setOf(
// Top 7 services called by Kotlin SDK customers as of 7/25/2023, in descending order of call volume
"s3",
"sns",
"sts",
"cloudwatch",
"cloudwatchevents",
"dynamodb",
"pinpoint",

// Services required as prerequisites for setup
"iam", // Create roles for STS::AssumeRole
)

val missingServices = requiredServices.filterNot { rootProject.file("services/$it/build.gradle.kts").exists() }

if (missingServices.isEmpty()) {
val optinAnnotations = listOf("kotlin.RequiresOptIn", "aws.smithy.kotlin.runtime.InternalApi")

kotlin {
sourceSets {
all {
val srcDir = if (name.endsWith("Main")) "src" else "test"
val resourcesPrefix = if (name.endsWith("Test")) "test-" else ""
// the name is always the platform followed by a suffix of either "Main" or "Test" (e.g. jvmMain, commonTest, etc)
val platform = name.substring(0, name.length - 4)
kotlin.srcDir("$platform/$srcDir")
resources.srcDir("$platform/${resourcesPrefix}resources")
languageSettings.progressiveMode = true
optinAnnotations.forEach { languageSettings.optIn(it) }
}

val atomicFuVersion: String by project
val coroutinesVersion: String by project
val smithyKotlinVersion: String by project

commonMain {
dependencies {
api("aws.smithy.kotlin:runtime-core:$smithyKotlinVersion")
implementation(project(":aws-runtime:aws-core"))
implementation("org.jetbrains.kotlinx:atomicfu:$atomicFuVersion")
implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:$coroutinesVersion")

requiredServices.forEach { implementation(project(":services:$it")) }
}
}
}
}
} else {
logger.warn(
"Skipping build for {} project, missing the following services: {}. To ensure this project builds, run the " +
"{}:bootstrapAll task.",
project.name,
missingServices.joinToString(", "),
project.path,
)
}

tasks.register("bootstrapAll") {
val bootstrapArg = requiredServices.joinToString(",") { "+$it" }
val bootstrapProj = project(":codegen:sdk")
bootstrapProj.ext.set("aws.services", bootstrapArg)
dependsOn(":codegen:sdk:bootstrap")
}

tasks.named<JavaExec>("run") {
classpath += objects.fileCollection().from(
tasks.named("compileKotlinJvm"),
configurations.named("jvmRuntimeClasspath"),
)
}
@@ -0,0 +1,116 @@
package aws.sdk.kotlin.benchmarks.service

import aws.sdk.kotlin.benchmarks.service.definitions.*
import aws.sdk.kotlin.benchmarks.service.telemetry.MetricSummary
import aws.smithy.kotlin.runtime.client.SdkClient
import aws.smithy.kotlin.runtime.io.use
import kotlin.time.Duration.Companion.seconds
import kotlin.time.ExperimentalTime
import kotlin.time.TimeSource

val DEFAULT_WARMUP_TIME = 5.seconds
val DEFAULT_ITERATION_TIME = 15.seconds

private val benchmarks = setOf(
S3Benchmark(),
SnsBenchmark(),
StsBenchmark(),
CloudwatchBenchmark(),
CloudwatchEventsBenchmark(),
DynamoDbBenchmark(),
PinpointBenchmark(),
).map {
@Suppress("UNCHECKED_CAST")
it as ServiceBenchmark<SdkClient>
}

suspend fun main() {
val harness = BenchmarkHarness()
harness.execute()
}

class BenchmarkHarness {
private val summaries = mutableMapOf<String, MutableMap<String, Map<String, MetricSummary>>>()

suspend fun execute() {
benchmarks.forEach { execute(it) }
println()
printResults()
}

private suspend fun execute(benchmark: ServiceBenchmark<SdkClient>) {
benchmark.client().use { client ->
println("${client.config.clientName}:")

println(" Setting up...")
benchmark.setup(client)

try {
benchmark.operations.forEach { execute(it, client) }
} finally {
benchmark.tearDown(client)
}
}
println()
}

private suspend fun execute(operation: OperationBenchmark<SdkClient>, client: SdkClient) {
println(" ${operation.name}:")

println(" Setting up...")
operation.setup(client)

try {
println(" Warming up for ${operation.warmupMode.explanation}...")
forAtLeast(operation.warmupMode) {
operation.transact(client)
}

Common.metricAggregator.clear()

println(" Measuring for ${operation.iterationMode.explanation}...")
forAtLeast(operation.iterationMode) {
operation.transact(client)
}

val summary = Common.metricAggregator.summarizeAndClear()
summaries.getOrPut(client.config.clientName, ::mutableMapOf)[operation.name] = summary
} finally {
println(" Tearing down...")
operation.tearDown(client)
}
}

private fun printResults() {
val table = ResultsTable.from(summaries)
println(table)
}
}

@OptIn(ExperimentalTime::class)
private inline fun forAtLeast(runMode: RunMode, block: () -> Unit) {
val start = TimeSource.Monotonic.markNow()

when (runMode) {
is RunMode.Time -> {
var cnt = 0
while (start.elapsedNow() < runMode.time) {
block()
cnt++
}
println(" (completed $cnt iterations)")
}

is RunMode.Iterations -> {
repeat(runMode.iterations) {
block()
}
println(" (took ${start.elapsedNow()})")
}
}
}

private val RunMode.explanation get() = when (this) {
is RunMode.Iterations -> "$iterations iterations"
is RunMode.Time -> time.toString()
}
@@ -0,0 +1,24 @@
/*
* Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
* SPDX-License-Identifier: Apache-2.0
*/
package aws.sdk.kotlin.benchmarks.service

import aws.sdk.kotlin.benchmarks.service.telemetry.BenchmarkTelemetryProvider
import aws.sdk.kotlin.benchmarks.service.telemetry.MetricAggregator
import aws.smithy.kotlin.runtime.ExperimentalApi
import aws.smithy.kotlin.runtime.retries.StandardRetryStrategy
import aws.smithy.kotlin.runtime.util.Uuid

object Common {
val metricAggregator = MetricAggregator()

val noRetries = StandardRetryStrategy {
maxAttempts = 1
}

@OptIn(ExperimentalApi::class)
val telemetryProvider = BenchmarkTelemetryProvider(metricAggregator)

fun random(prefix: String = "") = "$prefix${Uuid.random()}"
}
