Merge pull request #12 from asu-idi/basic-info
Basic info
veedata authored Jun 11, 2024
2 parents 9280673 + cd4bce5 commit 5b3056e
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions _research/gpt_project.md
@@ -1,13 +1,13 @@
---
-title: LLM-Assisted Configuration Tuning for Log-Structured Merge-tree-based Key-Value Stores
-tags: LSM-KVS, Tuning, LLM
+title: LLM-Assisted Configuration Tuning for Storage and Memory Systems
+tags: Tuning, LLM
---

Storage and Memory systems have evolved considerably and are widely used in today's IT infrastructure. These systems (e.g., HBase and RocksDB) usually expose over 100 options for tuning performance to particular hardware (e.g., CPU, memory, and storage), software, and workloads (e.g., random, skewed, and read/write-intensive). ASU-IDI focuses on developing an LLM-assisted auto-tuning framework for storage and memory systems to enhance performance.
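
To give a sense of how large this tuning space is, consider a sketch that exhaustively benchmarks just three RocksDB options through db_bench. The option names are real db_bench flags, but the candidate values and workload are illustrative placeholders, not recommendations, and db_bench is assumed to be on the PATH:

```python
import itertools
import subprocess

# A tiny slice of RocksDB's tuning space; the real system exposes well
# over 100 such options. Values below are placeholders for illustration.
search_space = {
    "write_buffer_size": [64 << 20, 128 << 20, 256 << 20],  # memtable size (bytes)
    "max_background_jobs": [2, 4, 8],                        # flush/compaction threads
    "level0_file_num_compaction_trigger": [2, 4, 8],         # L0 -> L1 compaction trigger
}

# Even 3 options with 3 candidate values each already require 27 runs;
# with 100+ options the space explodes combinatorially, which is what
# makes expert-driven (or LLM-assisted) tuning necessary.
for combo in itertools.product(*search_space.values()):
    flags = [f"--{name}={value}" for name, value in zip(search_space, combo)]
    subprocess.run(
        ["db_bench", "--benchmarks=fillrandom,readrandom", "--num=1000000", *flags],
        check=True,
    )
```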

-Tuning Storage and Memory systems such as Log-Structured Merge-tree-based Key-Value Stores (LSM-KVS) like RocksDB and HBase with appropriate configurations is challenging, usually requiring IT professionals with the relevant expertise to run hundreds of benchmarking evaluations. Existing studies on tuning solutions remain limited, lacking generality and adaptiveness to different versions and deployments. We believe the recent advancements in Large Language Models (LLMs), such as OpenAI's GPT-4, can be a promising path to auto-tuning:
+Tuning Storage and Memory systems, for example, Key-Value Stores like LSM-KVS and cache systems like CacheLib, with appropriate configurations is challenging, usually requiring IT professionals with the relevant expertise to run hundreds of benchmarking evaluations. Existing studies on tuning solutions remain limited, lacking generality and adaptiveness to different versions and deployments. We believe the recent advancements in Large Language Models (LLMs), such as OpenAI's GPT-4, can be a promising path to auto-tuning:

-1. LLMs are trained on collections of LSM-KVS-related blogs, publications, and almost all open-source code, which makes them real "experts";
+1. LLMs are trained on collections of tuning-recommendation blogs, publications, and almost all open-source code, which makes them real "experts";
2. LLMs have the strong inferential capability to analyze benchmarking results and make automatic, interactive adjustments for particular hardware and workloads (a minimal sketch of such a loop follows this list).
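
A minimal sketch of such an interactive tuning loop, assuming db_bench as the benchmarking tool; the `query_llm` helper, the prompt wording, and the workload choice are hypothetical placeholders, not the actual framework:

```python
import json
import subprocess

def run_benchmark(config: dict) -> str:
    """Run db_bench with the given options and return its raw text output."""
    flags = [f"--{name}={value}" for name, value in config.items()]
    result = subprocess.run(
        ["db_bench", "--benchmarks=readrandomwriterandom", "--num=1000000", *flags],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def query_llm(prompt: str) -> dict:
    """Hypothetical wrapper around an LLM API (e.g., GPT-4) that returns
    a JSON object mapping option names to suggested values."""
    raise NotImplementedError("stand-in for a real LLM API call")

config = {"write_buffer_size": 64 << 20, "max_background_jobs": 2}
for _ in range(5):  # a few interactive refinement rounds
    report = run_benchmark(config)
    prompt = (
        "Given this db_bench output:\n" + report +
        "\nand the current options " + json.dumps(config) +
        ", suggest improved values as a JSON object."
    )
    config = query_llm(prompt)  # the LLM analyzes results and proposes adjustments
```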

However, three main challenges remain to be addressed: how to design the auto-tuning framework around LLMs and benchmarking tools, how to generate appropriate prompts for the LLMs, and how to catch and calibrate unexpected errors and wrong configurations.
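
For the third challenge, one possible approach, sketched below with made-up bounds rather than the framework's actual calibration logic, is to validate every LLM-suggested configuration against known-safe ranges before applying it:

```python
# Illustrative sanity bounds for a few options; a real framework would derive
# these from documentation or prior benchmarking rather than hardcoding them.
SAFE_RANGES = {
    "write_buffer_size": (8 << 20, 1 << 30),             # 8 MiB .. 1 GiB
    "max_background_jobs": (1, 16),
    "level0_file_num_compaction_trigger": (2, 16),
}

def calibrate(suggested: dict, last_good: dict) -> dict:
    """Clamp LLM-suggested values into safe ranges, dropping unknown options
    and falling back to the last known-good value on malformed input."""
    calibrated = {}
    for name, (low, high) in SAFE_RANGES.items():
        try:
            value = int(suggested.get(name, last_good[name]))
        except (TypeError, ValueError):  # e.g., the LLM answered "4 threads"
            value = last_good[name]
        calibrated[name] = min(max(value, low), high)
    return calibrated
```

Clamping out-of-range values instead of rejecting them outright means a single bad suggestion does not stall the tuning loop.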
