268: Enable configuration of the LLM evaluation defence in sandbox #336

Merged
Commits (47 total; diff shows changes from 45)
a6c0cfc
renames defence type LLM_EVALUATION to EVALUATION_LLM_INSTRUCTIONS
pmarsh-scottlogic Sep 28, 2023
5a323c3
correct grammar in promptInjectionEvalTemplate
pmarsh-scottlogic Sep 28, 2023
8cd6503
renames defence type in frontend LLM_EVALUATION to EVALUATION_LLM_INS…
pmarsh-scottlogic Sep 28, 2023
05f2f1f
renames to EVALUATION_LLM_INSTRUTIONS on the defences panel
pmarsh-scottlogic Sep 29, 2023
2cbb7b4
renames to Evaluation LLM instructions on the defences panel
pmarsh-scottlogic Sep 29, 2023
a678573
Adds defence configs to Evaluation llm instructions defenceInfo front…
pmarsh-scottlogic Sep 29, 2023
2e06666
Adds defence configs to Evaluation_LLM_EVALUATIONS in the backend, wi…
pmarsh-scottlogic Sep 29, 2023
0d59556
Separates promptInjectionEvalTemplate into the template and the prepr…
pmarsh-scottlogic Sep 29, 2023
37a986e
makes templates file export promptInjectionEvalPrePrompt
pmarsh-scottlogic Sep 29, 2023
5661b1d
renames qaPrompt to qaPrePrompt
pmarsh-scottlogic Sep 29, 2023
78b83a0
prompt injection evaluator preprompt now taken from the session
pmarsh-scottlogic Oct 2, 2023
3c9228c
renames qAcontextTemplate to qAMainPrompt
pmarsh-scottlogic Oct 2, 2023
a6f753e
renames retrievalQAPrePrompt to qAPrePrompt
pmarsh-scottlogic Oct 2, 2023
a3424d3
renames retrievalQAPrePromptSecure to qAPrePromptSecure
pmarsh-scottlogic Oct 2, 2023
16963c9
renames promptInjectionEvalTemplate to promptInjectionEvalMainPrompt
pmarsh-scottlogic Oct 2, 2023
254a920
renames generic prePrompt parameter in evaluation model to promptInje…
pmarsh-scottlogic Oct 2, 2023
e843149
passes maliciousPromptEvalPrePrompt down the chain from detectTrigger…
pmarsh-scottlogic Oct 2, 2023
f10a657
sets promptInjectionEvalTemplate directly rather than setting to a va…
pmarsh-scottlogic Oct 2, 2023
9711071
sets promptInjectionEvalPrePrompt as default session value rather tha…
pmarsh-scottlogic Oct 2, 2023
043c576
splits malicious prompt eval into prePrompt and mainPrompt
pmarsh-scottlogic Oct 2, 2023
874fd82
constructs the malicious prompt eval template from the given prepromp…
pmarsh-scottlogic Oct 2, 2023
7fda12a
adds method to get malicious prompt eval pre prompt from session storage
pmarsh-scottlogic Oct 2, 2023
a693ea1
exports getMaliciousPromptEvalPrePrompt from defence.ts
pmarsh-scottlogic Oct 2, 2023
1cd5acc
fixes eval llm initialisation in langchain unit tests
pmarsh-scottlogic Oct 2, 2023
d47ec08
fixes eval llm initialisation in defences integration tests
pmarsh-scottlogic Oct 2, 2023
fa3f7b5
fix instantiation of eval LLM in langchain integration tests
pmarsh-scottlogic Oct 2, 2023
77293af
adds generic function for making the prompt template
pmarsh-scottlogic Oct 2, 2023
7f539b2
switches qa template to use the generic function
pmarsh-scottlogic Oct 2, 2023
c13af92
renames all preprompt variables that originate from the config do ind…
pmarsh-scottlogic Oct 2, 2023
97452a0
eval initialisation now uses the generic makePromptTemplate function
pmarsh-scottlogic Oct 2, 2023
00c2c6c
adds unit tests for defence.ts
pmarsh-scottlogic Oct 2, 2023
663c6f4
renames sessionPrePrompt to configPrePrompt
pmarsh-scottlogic Oct 2, 2023
5027ab6
exports makePromptTemplate for testing
pmarsh-scottlogic Oct 3, 2023
2c69eb7
fixes spelling mistake, replaces getQAPromptTemplate tests with makeP…
pmarsh-scottlogic Oct 3, 2023
3de1d3c
cleans up langchain.test.ts
pmarsh-scottlogic Oct 3, 2023
83a24d0
adds minimalMockExample.test.ts
pmarsh-scottlogic Oct 3, 2023
9af0723
stripped down working mock to make 2minimalMockExample
pmarsh-scottlogic Oct 3, 2023
1c737c6
delete experimental test files
pmarsh-scottlogic Oct 3, 2023
fde7b1e
completes makePromptTemplate tests
pmarsh-scottlogic Oct 3, 2023
8d28e33
Merge branch 'dev' into 268-enable-configuration-of-the-llm-evalutati…
pmarsh-scottlogic Oct 4, 2023
12e2c15
removes unnecessary import as
pmarsh-scottlogic Oct 4, 2023
82b3736
always gets eval LLM instructions from config
pmarsh-scottlogic Oct 4, 2023
57f45ed
adds a new line between pre prompt and main prompt for prompt template
pmarsh-scottlogic Oct 4, 2023
79b90c7
fixes tests which were broken by the new line
pmarsh-scottlogic Oct 4, 2023
614902e
Fix jest mock issue, typo, other warnings
chriswilty Oct 13, 2023
22fa317
Correct dom access, preserve whitespace in preprompts
chriswilty Oct 13, 2023
e7df174
Merge branch 'dev' into 268-enable-configuration-of-the-llm-evalutati…
chriswilty Oct 13, 2023
2 changes: 1 addition & 1 deletion backend/.eslintrc.cjs
@@ -22,7 +22,7 @@ module.exports = {
checksVoidReturn: false,
},
],

"@typescript-eslint/unbound-method": ["error", { ignoreStatic: true }],
"func-style": ["error", "declaration"],
"prefer-template": "error",
},
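Note: `@typescript-eslint/unbound-method` flags references to instance methods that would lose their `this` binding when called later; the `ignoreStatic: true` option exempts static methods. A minimal sketch of what the rule catches (the `Logger` class below is illustrative, not from this repo):

```typescript
class Logger {
  private prefix = "[app]";
  log(message: string) {
    console.log(`${this.prefix} ${message}`);
  }
  static version() {
    return "1.0.0";
  }
}

const logger = new Logger();
const unbound = logger.log;             // flagged: `this.prefix` would be undefined at call time
const bound = logger.log.bind(logger);  // fine: explicitly bound
const version = Logger.version;         // fine under { ignoreStatic: true }
```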
1 change: 1 addition & 0 deletions backend/jest.config.js
@@ -3,4 +3,5 @@ module.exports = {
modulePathIgnorePatterns: ["build", "coverage", "node_modules"],
preset: "ts-jest",
testEnvironment: "node",
silent: true,
};
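Note: `silent: true` suppresses console output (including `console.debug`) during test runs, presumably to keep the new prompt-template logging in langchain.ts out of the test report. The CLI equivalent is `jest --silent`.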
78 changes: 57 additions & 21 deletions backend/src/defence.ts
@@ -3,7 +3,9 @@ import { ChatDefenceReport } from "./models/chat";
import { DEFENCE_TYPES, DefenceConfig, DefenceInfo } from "./models/defence";
import { LEVEL_NAMES } from "./models/level";
import {
retrievalQAPrePromptSecure,
maliciousPromptEvalPrePrompt,
promptInjectionEvalPrePrompt,
qAPrePromptSecure,
systemRoleDefault,
systemRoleLevel1,
systemRoleLevel2,
@@ -24,11 +26,20 @@ function getInitialDefences(): DefenceInfo[] {
value: process.env.EMAIL_WHITELIST ?? "",
},
]),
new DefenceInfo(DEFENCE_TYPES.LLM_EVALUATION, []),
new DefenceInfo(DEFENCE_TYPES.EVALUATION_LLM_INSTRUCTIONS, [
{
id: "prompt-injection-evaluator-prompt",
value: promptInjectionEvalPrePrompt,
},
{
id: "malicious-prompt-evaluator-prompt",
value: maliciousPromptEvalPrePrompt,
},
]),
new DefenceInfo(DEFENCE_TYPES.QA_LLM_INSTRUCTIONS, [
{
id: "prePrompt",
value: retrievalQAPrePromptSecure,
value: qAPrePromptSecure,
},
]),
new DefenceInfo(DEFENCE_TYPES.RANDOM_SEQUENCE_ENCLOSURE, [
@@ -177,7 +188,7 @@ function getEmailWhitelistVar(defences: DefenceInfo[]) {
);
}

function getQALLMprePrompt(defences: DefenceInfo[]) {
function getQAPrePromptFromConfig(defences: DefenceInfo[]) {
return getConfigValue(
defences,
DEFENCE_TYPES.QA_LLM_INSTRUCTIONS,
@@ -186,10 +197,26 @@ );
);
}

function getPromptInjectionEvalPrePromptFromConfig(defences: DefenceInfo[]) {
return getConfigValue(
defences,
DEFENCE_TYPES.EVALUATION_LLM_INSTRUCTIONS,
"prompt-injection-evaluator-prompt",
""
);
}

function getMaliciousPromptEvalPrePromptFromConfig(defences: DefenceInfo[]) {
return getConfigValue(
defences,
DEFENCE_TYPES.EVALUATION_LLM_INSTRUCTIONS,
"malicious-prompt-evaluator-prompt",
""
);
}

function isDefenceActive(id: DEFENCE_TYPES, defences: DefenceInfo[]) {
return defences.find((defence) => defence.id === id && defence.isActive)
? true
: false;
return defences.some((defence) => defence.id === id && defence.isActive);
}

function generateRandomString(string_length: number) {
@@ -230,16 +257,14 @@ function transformRandomSequenceEnclosure(
const randomString: string = generateRandomString(
Number(getRandomSequenceEnclosureLength(defences))
);
const introText: string = getRandomSequenceEnclosurePrePrompt(defences);
const transformedMessage: string = introText.concat(
return getRandomSequenceEnclosurePrePrompt(defences).concat(
randomString,
" {{ ",
message,
" }} ",
randomString,
". "
);
return transformedMessage;
}

// function to escape XML characters in user input to prevent hacking with XML tagging on
@@ -274,12 +299,7 @@ function transformXmlTagging(message: string, defences: DefenceInfo[]) {
const prePrompt = getXMLTaggingPrePrompt(defences);
const openTag = "<user_input>";
const closeTag = "</user_input>";
const transformedMessage: string = prePrompt.concat(
openTag,
escapeXml(message),
closeTag
);
return transformedMessage;
return prePrompt.concat(openTag, escapeXml(message), closeTag);
}

//apply defence string transformations to original message
@@ -370,15 +390,29 @@ async function detectTriggeredDefences(
}

// evaluate the message for prompt injection
const evalPrompt = await queryPromptEvaluationModel(message, openAiApiKey);
const configPromptInjectionEvalPrePrompt =
getPromptInjectionEvalPrePromptFromConfig(defences);
const configMaliciousPromptEvalPrePrompt =
getMaliciousPromptEvalPrePromptFromConfig(defences);

const evalPrompt = await queryPromptEvaluationModel(
message,
configPromptInjectionEvalPrePrompt,
configMaliciousPromptEvalPrePrompt,
openAiApiKey
);
if (evalPrompt.isMalicious) {
if (isDefenceActive(DEFENCE_TYPES.LLM_EVALUATION, defences)) {
defenceReport.triggeredDefences.push(DEFENCE_TYPES.LLM_EVALUATION);
if (isDefenceActive(DEFENCE_TYPES.EVALUATION_LLM_INSTRUCTIONS, defences)) {
defenceReport.triggeredDefences.push(
DEFENCE_TYPES.EVALUATION_LLM_INSTRUCTIONS
);
console.debug("LLM evalutation defence active.");
defenceReport.isBlocked = true;
defenceReport.blockedReason = `Message blocked by the malicious prompt evaluator.${evalPrompt.reason}`;
} else {
defenceReport.alertedDefences.push(DEFENCE_TYPES.LLM_EVALUATION);
defenceReport.alertedDefences.push(
DEFENCE_TYPES.EVALUATION_LLM_INSTRUCTIONS
);
}
}
return defenceReport;
@@ -391,7 +425,9 @@ export {
detectTriggeredDefences,
getEmailWhitelistVar,
getInitialDefences,
getQALLMprePrompt,
getQAPrePromptFromConfig,
getPromptInjectionEvalPrePromptFromConfig,
getMaliciousPromptEvalPrePromptFromConfig,
getSystemRole,
isDefenceActive,
transformMessage,
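Note: the net effect of the defence.ts changes is that both evaluator pre-prompts become ordinary defence config values, fetched the same way as the QA pre-prompt. A rough sketch of how a caller reads them (the getter names and config IDs come from the diff above; the wrapper function itself is illustrative only):

```typescript
import {
  getMaliciousPromptEvalPrePromptFromConfig,
  getPromptInjectionEvalPrePromptFromConfig,
  isDefenceActive,
} from "./defence";
import { DEFENCE_TYPES, DefenceInfo } from "./models/defence";

// Illustrative wrapper: gather what detectTriggeredDefences needs.
function getEvalDefenceState(defences: DefenceInfo[]) {
  return {
    // "" when unconfigured, which downstream code treats as
    // "fall back to the default pre-prompt" (see makePromptTemplate)
    promptInjectionPrePrompt: getPromptInjectionEvalPrePromptFromConfig(defences),
    maliciousPromptPrePrompt: getMaliciousPromptEvalPrePromptFromConfig(defences),
    isActive: isDefenceActive(DEFENCE_TYPES.EVALUATION_LLM_INSTRUCTIONS, defences),
  };
}
```

Note also that `detectTriggeredDefences` now fetches both pre-prompts unconditionally; the active check only decides whether a malicious verdict blocks the message or merely raises an alert.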
88 changes: 60 additions & 28 deletions backend/src/langchain.ts
@@ -14,10 +14,12 @@ import { CHAT_MODELS, ChatAnswer } from "./models/chat";
import { DocumentsVector } from "./models/document";

import {
maliciousPromptTemplate,
promptInjectionEvalTemplate,
qAcontextTemplate,
retrievalQAPrePrompt,
maliciousPromptEvalPrePrompt,
maliciousPromptEvalMainPrompt,
promptInjectionEvalPrePrompt,
promptInjectionEvalMainPrompt,
qAMainPrompt,
qAPrePrompt,
} from "./promptTemplates";
import { LEVEL_NAMES } from "./models/level";
import { PromptEvaluationChainReply, QaChainReply } from "./models/langchain";
@@ -62,21 +64,26 @@ async function getDocuments(filePath: string) {
chunkSize: 1000,
chunkOverlap: 0,
});
const splitDocs = await textSplitter.splitDocuments(docs);
return splitDocs;

return await textSplitter.splitDocuments(docs);
}

// join the configurable preprompt to the context template
function getQAPromptTemplate(prePrompt: string) {
if (!prePrompt) {
// choose between the provided preprompt and the default preprompt and prepend it to the main prompt and return the PromptTemplate
function makePromptTemplate(
configPrePrompt: string,
defaultPrePrompt: string,
mainPrompt: string,
templateNameForLogging: string
) {
if (!configPrePrompt) {
// use the default prePrompt
prePrompt = retrievalQAPrePrompt;
configPrePrompt = defaultPrePrompt;
}
const fullPrompt = prePrompt + qAcontextTemplate;
console.debug(`QA prompt: ${fullPrompt}`);
const template: PromptTemplate = PromptTemplate.fromTemplate(fullPrompt);
return template;
const fullPrompt = `${configPrePrompt}\n${mainPrompt}`;
console.debug(`${templateNameForLogging}: ${fullPrompt}`);
return PromptTemplate.fromTemplate(fullPrompt);
}

// create and store the document vectors for each level
async function initDocumentVectors() {
const docVectors: DocumentsVector[] = [];
@@ -124,24 +131,36 @@ function initQAModel(
streaming: true,
openAIApiKey: openAiApiKey,
});
const promptTemplate = getQAPromptTemplate(prePrompt);
const promptTemplate = makePromptTemplate(
prePrompt,
qAPrePrompt,
qAMainPrompt,
"QA prompt template"
);

return RetrievalQAChain.fromLLM(model, documentVectors.asRetriever(), {
prompt: promptTemplate,
});
}

// initialise the prompt evaluation model
function initPromptEvaluationModel(openAiApiKey: string) {
function initPromptEvaluationModel(
configPromptInjectionEvalPrePrompt: string,
configMaliciousPromptEvalPrePrompt: string,
openAiApiKey: string
) {
if (!openAiApiKey) {
console.debug(
"No OpenAI API key set to initialise prompt evaluation model"
);
return;
}
// create chain to detect prompt injection
const promptInjectionPrompt = PromptTemplate.fromTemplate(
promptInjectionEvalTemplate
const promptInjectionEvalTemplate = makePromptTemplate(
configPromptInjectionEvalPrePrompt,
promptInjectionEvalPrePrompt,
promptInjectionEvalMainPrompt,
"Prompt injection eval prompt template"
);

const promptInjectionChain = new LLMChain({
@@ -150,21 +169,25 @@ function initPromptEvaluationModel(openAiApiKey: string) {
temperature: 0,
openAIApiKey: openAiApiKey,
}),
prompt: promptInjectionPrompt,
prompt: promptInjectionEvalTemplate,
outputKey: "promptInjectionEval",
});

// create chain to detect malicious prompts
const maliciousInputPrompt = PromptTemplate.fromTemplate(
maliciousPromptTemplate
const maliciousPromptEvalTemplate = makePromptTemplate(
configMaliciousPromptEvalPrePrompt,
maliciousPromptEvalPrePrompt,
maliciousPromptEvalMainPrompt,
"Malicious input eval prompt template"
);

const maliciousInputChain = new LLMChain({
llm: new OpenAI({
modelName: CHAT_MODELS.GPT_4,
temperature: 0,
openAIApiKey: openAiApiKey,
}),
prompt: maliciousInputPrompt,
prompt: maliciousPromptEvalTemplate,
outputKey: "maliciousInputEval",
});

@@ -209,22 +232,31 @@ async function queryDocuments(
}

// ask LLM whether the prompt is malicious
async function queryPromptEvaluationModel(input: string, openAIApiKey: string) {
const promptEvaluationChain = initPromptEvaluationModel(openAIApiKey);
async function queryPromptEvaluationModel(
input: string,
configPromptInjectionEvalPrePrompt: string,
configMaliciousPromptEvalPrePrompt: string,
openAIApiKey: string
) {
const promptEvaluationChain = initPromptEvaluationModel(
configPromptInjectionEvalPrePrompt,
configMaliciousPromptEvalPrePrompt,
openAIApiKey
);
if (!promptEvaluationChain) {
console.debug("Prompt evaluation chain not initialised.");
return { isMalicious: false, reason: "" };
}
console.log(`Checking '${input}' for malicious prompts`);

// get start time
const startTime = new Date().getTime();
const startTime = Date.now();
console.debug("Calling prompt evaluation model...");
const response = (await promptEvaluationChain.call({
prompt: input,
})) as PromptEvaluationChainReply;
// log the time taken
const endTime = new Date().getTime();
const endTime = Date.now();
console.debug(`Prompt evaluation model call took ${endTime - startTime}ms`);

const promptInjectionEval = formatEvaluationOutput(
@@ -256,7 +288,7 @@ async function queryPromptEvaluationModel(input: string, openAIApiKey: string) {
function formatEvaluationOutput(response: string) {
try {
// split response on first full stop or comma
const splitResponse = response.split(/\.|,/);
const splitResponse = response.split(/[.,]/);
const answer = splitResponse[0]?.replace(/\W/g, "").toLowerCase();
const reason = splitResponse[1]?.trim();
return {
@@ -276,12 +308,12 @@ function formatEvaluationOutput(response: string) {
export {
initQAModel,
getFilepath,
getQAPromptTemplate,
getDocuments,
initPromptEvaluationModel,
queryDocuments,
queryPromptEvaluationModel,
formatEvaluationOutput,
setVectorisedDocuments,
initDocumentVectors,
makePromptTemplate,
};
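Note: the central refactor in langchain.ts is `makePromptTemplate`, which replaces the QA-specific `getQAPromptTemplate` and is shared by the QA template and both evaluator templates. Its fallback-and-join behaviour reduces to roughly the following (a simplified sketch; the real function also logs the assembled prompt under `templateNameForLogging`):

```typescript
import { PromptTemplate } from "langchain/prompts";

// Sketch: an empty config pre-prompt means "use the default", and the
// chosen pre-prompt is joined to the main prompt with a newline.
function makePromptTemplateSketch(
  configPrePrompt: string,
  defaultPrePrompt: string,
  mainPrompt: string
): PromptTemplate {
  const prePrompt = configPrePrompt || defaultPrePrompt;
  return PromptTemplate.fromTemplate(`${prePrompt}\n${mainPrompt}`);
}
```

The smaller cleanups are behaviour-preserving: `Date.now()` replaces `new Date().getTime()`, and `response.split(/[.,]/)` is a character-class rewrite of the old `/\.|,/` alternation, still splitting on the first full stop or comma.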
2 changes: 1 addition & 1 deletion backend/src/models/defence.ts
@@ -1,7 +1,7 @@
enum DEFENCE_TYPES {
CHARACTER_LIMIT = "CHARACTER_LIMIT",
EMAIL_WHITELIST = "EMAIL_WHITELIST",
LLM_EVALUATION = "LLM_EVALUATION",
EVALUATION_LLM_INSTRUCTIONS = "EVALUATION_LLM_INSTRUCTIONS",
QA_LLM_INSTRUCTIONS = "QA_LLM_INSTRUCTIONS",
RANDOM_SEQUENCE_ENCLOSURE = "RANDOM_SEQUENCE_ENCLOSURE",
SYSTEM_ROLE = "SYSTEM_ROLE",
8 changes: 4 additions & 4 deletions backend/src/openai.ts
@@ -3,7 +3,7 @@ import {
getSystemRole,
detectFilterList,
getFilterList,
getQALLMprePrompt,
getQAPrePromptFromConfig,
} from "./defence";
import { sendEmail, getEmailWhitelist, isEmailInWhitelist } from "./email";
import { queryDocuments } from "./langchain";
@@ -231,14 +231,14 @@ async function chatGptCallFunction(
) as FunctionAskQuestionParams;
console.debug(`Asking question: ${params.question}`);
// if asking a question, call the queryDocuments
let qaPrompt = "";
let configQAPrePrompt = "";
if (isDefenceActive(DEFENCE_TYPES.QA_LLM_INSTRUCTIONS, defences)) {
qaPrompt = getQALLMprePrompt(defences);
configQAPrePrompt = getQAPrePromptFromConfig(defences);
}
response = (
await queryDocuments(
params.question,
qaPrompt,
configQAPrePrompt,
currentLevel,
openAiApiKey
)
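Note: the openai.ts change renames `qaPrompt` to `configQAPrePrompt` to match the new naming convention. Unlike the evaluator pre-prompts above, the QA pre-prompt is only read from config when the defence is active; an inactive defence passes "", which makes `makePromptTemplate` fall back to the default `qAPrePrompt`. A sketch of that control flow (the helper name is illustrative, not from the diff):

```typescript
import { getQAPrePromptFromConfig, isDefenceActive } from "./defence";
import { DEFENCE_TYPES, DefenceInfo } from "./models/defence";

// Illustrative: resolve the pre-prompt that reaches the QA chain.
// "" signals "use the default qAPrePrompt" downstream.
function resolveQAPrePrompt(defences: DefenceInfo[]): string {
  return isDefenceActive(DEFENCE_TYPES.QA_LLM_INSTRUCTIONS, defences)
    ? getQAPrePromptFromConfig(defences)
    : "";
}
```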