From 5042f9279c9a3f66c3f67b968da7e6488f5e849d Mon Sep 17 00:00:00 2001
From: veedata
Date: Mon, 10 Jun 2024 16:05:16 +0000
Subject: [PATCH 1/2] Update citations

---
 _data/citations.yaml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/_data/citations.yaml b/_data/citations.yaml
index caccb82..4fbbd34 100644
--- a/_data/citations.yaml
+++ b/_data/citations.yaml
@@ -72,7 +72,7 @@
 - David H.C. Du
 publisher: ''
 date: '2023-11-06'
- link: https://ieeexplore.ieee.org/document/10360958/
+ link: https://doi.org/10.1109/ICCD58817.2023.00050
 conference: IEEE ICCD 2023
 plugin: sources.py
 file: sources.yaml
@@ -90,7 +90,7 @@
 - Fenggang Wu
 publisher: ''
 date: '2023-11-06'
- link: https://ieeexplore.ieee.org/document/10360961/
+ link: https://doi.org/10.1109/ICCD58817.2023.00041
 conference: IEEE ICCD 2023
 plugin: sources.py
 file: sources.yaml

From bd4b9e70e7af2d1d68e179b6c284a696c44bd7a6 Mon Sep 17 00:00:00 2001
From: Viraj Thakkar
Date: Tue, 11 Jun 2024 09:38:26 -0700
Subject: [PATCH 2/2] Update GPT Project title

---
 _research/gpt_project.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/_research/gpt_project.md b/_research/gpt_project.md
index a040bfb..bc94fa2 100644
--- a/_research/gpt_project.md
+++ b/_research/gpt_project.md
@@ -1,13 +1,13 @@
 ---
-title: LLM-Assisted Configuration Tuning for Log-Structured Merge-tree-based Key-Value Stores
-tags: LSM-KVS, Tuning, LLM
+title: LLM-Assisted Configuration Tuning for Storage and Memory Systems
+tags: Tuning, LLM
 ---
 
 Storage and Memory systems have undergone a variety of modifications and transformations, and are widely used in today's IT infrastructure. These systems usually have over 100 options (e.g. HBase and RocksDB) to tune performance for particular hardware (e.g., CPU, Memory, and Storage), software, and workloads (e.g., random, skewed, and read/write intensive).
 ASU-IDI focuses on developing an LLM-assisted auto-tuning framework for storage and memory systems to enhance performance.
 
-Tuning Storage and Memory systems Log-Structured Merge-tree-based Key-Value Stores (LSM-KVS) like RocksDB and HBase with appropriate configurations is challenging, usually requiring IT professionals with appropriate expertise to run hundreds of benchmarking evaluations. Existing related studies on tuning solutions are still limited, lacking generality, adaptiveness to the versions and deployments. We believe the recent advancements of Large-Language-Models (LLMs) like OpenAI's GPT-4 can be a promising solution to achieve auto-tuning:
+Tuning Storage and Memory systems, for example Key-Value Stores such as LSM-KVS and cache systems such as CacheLib, with appropriate configurations is challenging, usually requiring IT professionals with the appropriate expertise to run hundreds of benchmarking evaluations. Existing studies on tuning solutions are still limited, lacking generality and adaptiveness to different versions and deployments. We believe the recent advancements of Large Language Models (LLMs) like OpenAI's GPT-4 can be a promising solution to achieve auto-tuning:
 
-1. LLMs are trained using collections of LSM-KVS-related blog, publications, and almost all the open-sourced code, which makes the LLMs a real "expert";
+1. LLMs are trained using collections of tuning-recommendation blogs, publications, and almost all open-source code, which makes LLMs a real "expert";
 2. LLMs has the strong inferential capability to analyze the benchmarking results and achieve automatic and interactive adjustments on particular hardware and workloads.
 However, how to design the auto-tuning framework based on LLMs and benchmarking tools, how to generate appropriate prompts for LLMs, and how to calibrate the unexpected errors and wrong configurations are three main challenges to be addressed.