Commit
Correct llm pitfall in pitfalls.md (#1086)
This looks very much like a Freudian slip, as the abbreviation LLM was probably translated by GPT as “Language Learning Model”.
Chr1st1an02 authored Sep 25, 2023
1 parent 53fdd76 commit 8d87cda
Showing 1 changed file with 2 additions and 2 deletions: docs/basics/pitfalls.md
@@ -15,7 +15,7 @@ import Pitfalls from '@site/docs/assets/basics/pitfalls.svg';
 - Understand the biases and problems that LLMs have
 :::

-Language Learning Models (LLMs) are powerful tools that have revolutionized many aspects of technology, from customer service to content creation. However, like any technology, they are not without their flaws. Understanding these pitfalls is crucial for effectively using LLMs and mitigating potential issues. This article will explore some of the common pitfalls of LLMs, including issues with citing sources, bias, hallucinations, math, and prompt hacking.
+Large Language Models (LLMs) are powerful tools that have revolutionized many aspects of technology, from customer service to content creation. However, like any technology, they are not without their flaws. Understanding these pitfalls is crucial for effectively using LLMs and mitigating potential issues. This article will explore some of the common pitfalls of LLMs, including issues with citing sources, bias, hallucinations, math, and prompt hacking.

 ## Citing Sources

@@ -47,4 +47,4 @@ LLMs can be manipulated or "hacked" by users to generate specific content. This

 ## Conclusion

-In conclusion, while LLMs are powerful and versatile tools, they come with a set of pitfalls that users need to be aware of. Issues with accurately citing sources, inherent biases, generating false information, difficulties with math, and susceptibility to prompt hacking are all challenges that need to be addressed when using these models. By understanding these limitations, we can use LLMs more effectively and responsibly, and work towards improving these models in the future.
+In conclusion, while LLMs are powerful and versatile tools, they come with a set of pitfalls that users need to be aware of. Issues with accurately citing sources, inherent biases, generating false information, difficulties with math, and susceptibility to prompt hacking are all challenges that need to be addressed when using these models. By understanding these limitations, we can use LLMs more effectively and responsibly, and work towards improving these models in the future.

1 comment on commit 8d87cda

@vercel vercel bot commented on 8d87cda Sep 25, 2023

Successfully deployed to the following URLs:

learn-prompting – ./

learn-prompting-git-main-trigaten.vercel.app
learn-prompting.vercel.app
learn-prompting-trigaten.vercel.app
