diff --git a/.nojekyll b/.nojekyll
index 68a0184..598faf8 100644
--- a/.nojekyll
+++ b/.nojekyll
@@ -1 +1 @@
-d52de7b8
\ No newline at end of file
+5bb44101
\ No newline at end of file
diff --git a/about.html b/about.html
index ab9e271..9a89ca4 100644
--- a/about.html
+++ b/about.html
@@ -95,11 +95,7 @@
Consider learning Stanford CS224U if you want more fundamental knowledge about LLM evaluation.
+Numerous leaderboards exist for Large Language Models (LLMs), each compiled from the benchmark results of these models. By examining these leaderboards, we can identify which benchmarks are particularly effective and informative for evaluating LLM capabilities.
diff --git a/notes/Diffusion Model/sd.html b/notes/Diffusion Model/sd.html
index 2c16161..ec51af0 100644
--- a/notes/Diffusion Model/sd.html
+++ b/notes/Diffusion Model/sd.html
@@ -97,11 +97,7 @@
Benchmarks for evaluating large language models come in various forms, each serving a unique purpose. They can be broadly categorized into general benchmarks, which assess overall performance, and specialized benchmarks, which evaluate proficiency in specific areas such as understanding the Chinese language or performing function calls.
Consider learning Stanford CS224U if you want more fundamental knowledge about LLM evaluation.
+Numerous leaderboards exist for Large Language Models (LLMs), each compiled from the benchmark results of these models. By examining these leaderboards, we can identify which benchmarks are particularly effective and informative for evaluating LLM capabilities.
diff --git a/notes/Large Language Model/moe.html b/notes/Large Language Model/moe.html
index c5243f3..a48f8f5 100644
--- a/notes/Large Language Model/moe.html
+++ b/notes/Large Language Model/moe.html
@@ -160,11 +160,7 @@