From 57db3c039f641cc088b8214614cb02284fcd8645 Mon Sep 17 00:00:00 2001
From: Graham Odds
Date: Wed, 1 Nov 2023 10:05:16 +0000
Subject: [PATCH] adding video posts

---
 ...-07-05-gpt3-creativity-from-determinism.md.md | 16 ++++++++++++++++
 ...2023-10-31-mitigating-prompt-injections.md.md | 15 +++++++++++++++
 2 files changed, 31 insertions(+)
 create mode 100644 _posts/2023-07-05-gpt3-creativity-from-determinism.md.md
 create mode 100644 _posts/2023-10-31-mitigating-prompt-injections.md.md

diff --git a/_posts/2023-07-05-gpt3-creativity-from-determinism.md.md b/_posts/2023-07-05-gpt3-creativity-from-determinism.md.md
new file mode 100644
index 0000000000..1225f770ee
--- /dev/null
+++ b/_posts/2023-07-05-gpt3-creativity-from-determinism.md.md
@@ -0,0 +1,16 @@
+---
+title: "GPT3: How do you get creativity from determinism?"
+date: 2023-07-05 00:00:00 Z
+categories:
+- Tech
+summary: We've all now seen the apparently creative outputs of Generative AI, with
+  astonishing results that seem to border on human creativity. How does GPT-3 achieve
+  this? In this short talk, I lift the lid to reveal the probabilistic elements that
+  allow an otherwise deterministic model to give all the appearances of creativity.
+author: cprice
+video_url: https://www.youtube.com/embed/QCvoDgwDMNg
+short-author-aside: true
+layout: video_post
+---
+
+We've all now seen the apparently creative outputs of Generative AI, with astonishing results that seem to border on human creativity. How does GPT-3 achieve this? In this short talk, I lift the lid to reveal the probabilistic elements that allow an otherwise deterministic model to give all the appearances of creativity.
diff --git a/_posts/2023-10-31-mitigating-prompt-injections.md.md b/_posts/2023-10-31-mitigating-prompt-injections.md.md
new file mode 100644
index 0000000000..88bcce6e7c
--- /dev/null
+++ b/_posts/2023-10-31-mitigating-prompt-injections.md.md
@@ -0,0 +1,15 @@
+---
+title: Mitigating prompt injections on Generative AI systems
+date: 2023-10-31 00:00:00 Z
+categories:
+- Tech
+summary: We demonstrate our web app used for experimenting with different types
+  of prompt injection attacks and mitigations on LLMs and how easy it can be to
+  hack GPT through malicious prompts.
+author: dhinrichs
+video_url: https://www.youtube.com/embed/TD3RG9YPKEY
+short-author-aside: true
+layout: video_post
+---
+
+We demonstrate our web app used for experimenting with different types of prompt injection attacks and mitigations on LLMs and how easy it can be to hack GPT through malicious prompts.
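The first post's abstract refers to the probabilistic elements layered on top of an otherwise deterministic model. A minimal sketch of one such element is temperature sampling over the model's output logits; this is an illustrative toy in plain Python, not GPT-3's actual implementation, and the function name and example logits are hypothetical:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from raw logits.

    temperature < 1 sharpens the distribution (closer to deterministic argmax);
    temperature > 1 flattens it, making surprising, "creative" picks more likely.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability before exp
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution until we pass r
    r, cum = random.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

With a very low temperature the highest logit wins essentially every time, while a high temperature lets lower-scoring tokens through, which is where the appearance of creativity comes from.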