Skip to content

Commit

Permalink
Adding video posts
Browse files Browse the repository at this point in the history
  • Loading branch information
godds authored Nov 1, 2023
2 parents d5cf9a5 + 57db3c0 commit 875bf06
Show file tree
Hide file tree
Showing 2 changed files with 31 additions and 0 deletions.
16 changes: 16 additions & 0 deletions _posts/2023-07-05-gpt3-creativity-from-determinism.md.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,16 @@
---
title: "GPT3: How do you get creativity from determinism?"
date: 2023-07-05 00:00:00 Z
categories:
- Tech
summary: We've all now seen the apparently creative outputs of Generative AI, with
astonishing results that seem to border on human creativity. How does GPT-3 achieve
this? In this short talk, I lift the lid to reveal the probabilistic elements that
allow an otherwise deterministic model to give all the appearances of creativity.
author: cprice
video_url: https://www.youtube.com/embed/QCvoDgwDMNg
short-author-aside: true
layout: video_post
---

We've all now seen the apparently creative outputs of Generative AI, with astonishing results that seem to border on human creativity. How does GPT-3 achieve this? In this short talk, I lift the lid to reveal the probabilistic elements that allow an otherwise deterministic model to give all the appearances of creativity.
15 changes: 15 additions & 0 deletions _posts/2023-10-31-mitigating-prompt-injections.md.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,15 @@
---
title: Mitigating prompt injections on Generate AI systems
date: 2023-10-31 00:00:00 Z
categories:
- Tech
summary: We demonstrate our web app used for experimenting with different types
of prompt injection attacks and mitigations on LLMs and how easy it can be to
hack GPT through malicious prompts.
author: dhinrichs
video_url: https://www.youtube.com/embed/TD3RG9YPKEY
short-author-aside: true
layout: video_post
---

We demonstrate our web app used for experimenting with different types of prompt injection attacks and mitigations on LLMs and how easy it can be to hack GPT through malicious prompts.

0 comments on commit 875bf06

Please sign in to comment.