JOSS: More references
perdelt committed Dec 23, 2024
1 parent f898fbe commit a2db7c2
Showing 2 changed files with 16 additions and 1 deletion.
15 changes: 15 additions & 0 deletions paper.bib
@@ -289,4 +289,19 @@ @misc{DBEngines

@article{Erdelt2022DBMSBenchmarker, nodoi = {10.21105/joss.04628}, url = {https://doi.org/10.21105/joss.04628}, year = {2022}, publisher = {The Open Journal}, volume = {7}, number = {79}, pages = {4628}, author = {Patrick K. Erdelt and Jascha Jestel}, title = {DBMS-Benchmarker: Benchmark and Evaluate DBMS in Python}, journal = {Journal of Open Source Software} }

@InProceedings{10.1007/978-3-031-68031-1_9,
author="Erdelt, Patrick K.",
editor="Nambiar, Raghunath
and Poess, Meikel",
title="A Cloud-Native Adoption of Classical DBMS Performance Benchmarks and Tools",
booktitle="Performance Evaluation and Benchmarking",
year="2024",
publisher="Springer Nature Switzerland",
address="Cham",
pages="124--142",
abstract="Classical DBMS benchmarks cover a variety of use cases, for example: microbatch in-line insertion and highly concurrent row-level access (YCSB), batch offline loading into a data warehouse and concurrently running complex analytical queries (TPC-H) and business transactions (TPC-C). These use cases are still relevant in the cloud era, where we build data pipelines of microservices. In this paper we adopt the above benchmarks and four popular tools to the cloud-native pattern. On the one hand, this helps in assessing the performance of data pipelines that have a DBMS at their core. On the other hand, it makes benchmarking a scalable, elastic and observable process that can be automated. In a series of experiments, we (1) inspect Kubernetes jobs and benchmarking tools and whether they are suitable for combination, (2) monitor resource consumption of all components, i.e., also the drivers, (3) inspect scaling behaviour and look for peak performance points. We show that tools and workloads respond differently to scale-out and that the cloud-native pattern is fruitful for benchmarking.",
isbn="978-3-031-68031-1"
}



2 changes: 1 addition & 1 deletion paper.md
@@ -28,7 +28,7 @@ It serves as the orchestrator [@10.1007/978-3-030-94437-7_6] for distributed par

See the [homepage](https://github.com/Beuth-Erdelt/Benchmark-Experiment-Host-Manager) and the [documentation](https://bexhoma.readthedocs.io/en/latest/) for more details.

Used by [@Erdelt2022DBMSBenchmarker; @10.1007/978-3-030-84924-5_6]
Used by [@Erdelt2022DBMSBenchmarker; @10.1007/978-3-030-84924-5_6; @10.1007/978-3-031-68031-1_9]

# Statement of Need

