We are always working on exciting new ideas. If you are interested in collaborating with us, or if you have any questions, we would love to hear from you.
Brickyard Engineering is home to the Intelligent Data Infrastructure (IDI) Lab. You will often find one of our team members in the vicinity of the following address:
Street Address
699 S. Mill Ave.
Tempe, AZ 85281

Building Code
BYENG
The Intelligent Data Infrastructure (IDI) research lab delves into database systems, storage technologies, and next-generation data infrastructure. We tackle emerging challenges, designing cutting-edge solutions to advance data management and analysis in today’s dynamic digital landscape. Our work spans a wide range of topics, including distributed systems, cloud computing, machine learning, and data analytics. We are passionate about pushing the boundaries of data infrastructure.
Our Publications

Our Projects

Our Team
Zhichao Cao is an assistant professor in the School of Computing and Augmented Intelligence at Arizona State University. He leads the Intelligent Data Infrastructure (IDI) research lab, where he conducts research in the areas of database systems (e.g., key-value stores, graph databases, and time-series databases), storage systems (e.g., file systems, cloud storage, and deduplication systems), and next-generation data infrastructure (e.g., disaggregated infrastructure, computing-in-X, and wireless datacenters). His research interests also lie in the design and development of data management systems for new memory and storage technologies, such as SMR, IMR, NVM, CXL, RDMA, ZNS, and DNA. Prof. Cao’s research also encompasses big data systems, with a focus on query engines for large-scale scientific computing in HPC and storage solutions for AI/ML platforms.
Prior to joining ASU, Prof. Cao worked as a research scientist at Facebook, where he contributed to storage and database research from 2018 to 2021. He earned his bachelor’s degree in Automation from Tsinghua University in 2013 and his doctoral degree in Computer Science from the University of Minnesota, Twin Cities, in 2020.
Design and develop an LLM-assisted auto-tuning framework for Log-Structured Merge-tree-based Key-Value Stores (LSM-KVS) to achieve better performance.
Log-Structured Merge-tree-based Key-Value Stores (LSM-KVS) are widely used in today’s IT infrastructure and usually expose over 100 options (e.g., in HBase and RocksDB) for tuning performance on particular hardware (e.g., CPU, memory, and storage), software, and workloads (e.g., random, skewed, and read/write intensive). However, tuning an LSM-KVS with appropriate configurations is challenging, usually requiring IT professionals with LSM-KVS expertise to run hundreds of benchmarking evaluations. Existing studies on LSM-KVS tuning are still limited, lacking generality and adaptiveness across versions and deployments. We believe recent advancements in Large Language Models (LLMs) such as OpenAI’s GPT-4 can be a promising path to LSM-KVS auto-tuning: 1) LLMs are trained on collections of LSM-KVS-related blogs, publications, and almost all of the open-source code, which makes an LLM a real “expert” in LSM-KVS; and 2) LLMs have strong inferential capability to analyze benchmarking results and make automatic, interactive adjustments for LSM-KVS on particular hardware and workloads. However, three main challenges must be addressed: how to design the auto-tuning framework around LLMs and benchmarking tools, how to generate appropriate prompts for the LLM, and how to calibrate unexpected errors and wrong configurations.
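To illustrate the prompt-generation challenge, a minimal sketch of assembling an LLM prompt from the current options and the last benchmarking results might look as follows. All names (`build_prompt`, the option keys, the metric names) are hypothetical illustrations, not taken from the released prototype:

```python
# Hypothetical sketch: turn the current option file and the processed
# benchmark summary of the previous iteration into an LLM prompt.
def build_prompt(current_options: dict, bench_summary: dict, hardware: str) -> str:
    opts = "\n".join(f"{k}={v}" for k, v in current_options.items())
    stats = "\n".join(f"{k}: {v}" for k, v in bench_summary.items())
    return (
        "You are an expert in tuning RocksDB (an LSM-KVS).\n"
        f"Target hardware: {hardware}\n"
        "Current options:\n" + opts + "\n"
        "Benchmark results from the last run:\n" + stats + "\n"
        "Suggest revised option values (one KEY=VALUE per line) "
        "to improve throughput for this workload."
    )

# Example call with made-up options and metrics:
prompt = build_prompt(
    {"write_buffer_size": 67108864, "max_background_jobs": 2},
    {"ops_per_sec": 181000, "p99_latency_us": 950},
    "8-core CPU, 32 GB RAM, NVMe SSD",
)
```

A structured, line-per-option format like this keeps the LLM's reply easy to parse back into an options file for the calibration step.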
We propose to design and develop an LLM-assisted auto-tuning framework, shown in the figure, with the following workflow: 1) use the default options file and a collection of system and hardware information as the initial input; 2) run a feedback loop with the LLM API, creating new prompts from the option changes and the processed benchmarking results of previous iterations; 3) calibrate (clean and correct) the newly generated options from the LLM for a new round of benchmarking; and 4) after several iterations, once the benchmarking results have converged, output the final optimized option configuration. Note that the whole process is deployed and executed automatically, without human intervention. We implemented the framework prototype on RocksDB v8.8.1 and OpenAI’s GPT-4-1106 model, and open-sourced it. Our preliminary evaluations show that, with 5 iterations of auto-tuning, our framework achieves up to 20% throughput improvement over the default configuration.
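The feedback loop above can be sketched roughly as follows. This is a simplified illustration under assumptions: the function names, the integer-only option calibration, and the keep-best-configuration policy are ours, not necessarily how the prototype is implemented:

```python
# Sketch of the auto-tuning loop: benchmark -> ask LLM -> calibrate -> repeat.

def calibrate(raw_options: dict, valid_keys: set, defaults: dict) -> dict:
    """Guard against LLM mistakes: drop unknown option names and fall
    back to the last known-good value for malformed ones."""
    cleaned = {}
    for key, value in raw_options.items():
        if key not in valid_keys:
            continue  # hallucinated option name: discard it
        try:
            cleaned[key] = int(value)  # assume integer-valued options here
        except (TypeError, ValueError):
            if key in defaults:
                cleaned[key] = defaults[key]  # revert to last good value
    return cleaned

def autotune(options, run_benchmark, ask_llm, valid_keys, iterations=5):
    """Iterate: benchmark the current best options, ask the LLM for
    changes, calibrate them, and keep a candidate only if it improves."""
    best_options, best_score = dict(options), run_benchmark(options)
    for _ in range(iterations):
        suggestion = ask_llm(best_options, best_score)  # LLM proposes changes
        candidate = {**best_options,
                     **calibrate(suggestion, valid_keys, best_options)}
        score = run_benchmark(candidate)
        if score > best_score:  # keep only configurations that improve throughput
            best_options, best_score = candidate, score
    return best_options, best_score
```

In a real deployment, `run_benchmark` would wrap a tool such as RocksDB's `db_bench` and `ask_llm` would call the LLM API with a prompt built from the option changes and processed results; here both are left abstract.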