Skip to content

Commit

Permalink
feat: 🎸 update to v0.5
Browse files Browse the repository at this point in the history
Major update with local reasoning function and other doc updates
  • Loading branch information
GreyDGL committed Apr 26, 2023
1 parent 97d0fd1 commit aca8640
Show file tree
Hide file tree
Showing 7 changed files with 183 additions and 27 deletions.
1 change: 1 addition & 0 deletions PentestGPT_design.md
Original file line number Diff line number Diff line change
Expand Up @@ -16,6 +16,7 @@ The handler is the main entry point of the penetration testing tool. It allows p
1. Pass a tool output.
2. Pass a webpage content.
3. Pass a human description.
5. The generation module can also start a continuous mode, which helps the user to dig into a specific task.

#### Logic Flow Design
1. User initializes all the sessions. (**prompt**)
Expand Down
4 changes: 3 additions & 1 deletion README.md
Original file line number Diff line number Diff line change
Expand Up @@ -36,9 +36,11 @@ https://user-images.githubusercontent.com/78410652/232327920-7318a0c4-bee0-4cb4-
2. (Deprecated: Will update support for non-plus member later.) ~~Install `chatgpt-wrapper` if you're non-plus members: `pip install git+https://github.com/mmabrouk/chatgpt-wrapper`. More details at: https://github.com/mmabrouk/chatgpt-wrapper. Note that the support for non-plus members are not optimized.~~
3. Configure the cookies in `config`. You may follow a sample by `cp config/chatgpt_config_sample.py. config/chatgpt_config.py`.
- Login to ChatGPT session page.
- Find the request cookies to `https://chat.openai.com/api/auth/session` and paste it into the `cookie` field of `config/chatgpt_config.py`. (You may use Inspect->Network, find session and copy the `cookie` field in `request_headers` to `https://chat.openai.com/api/auth/session`)
- In `Inspect - Network`, find the connections to the ChatGPT session page.
- Find the cookie in the **request header** in the request to `https://chat.openai.com/api/auth/session` and paste it into the `cookie` field of `config/chatgpt_config.py`. (You may use Inspect->Network, find session and copy the `cookie` field in `request_headers` to `https://chat.openai.com/api/auth/session`)
- Note that the other fields are temporarily deprecated due to the update of ChatGPT page.
4. To verify that the connection is configured properly, you may run `python3 test_connection.py`. You should see some sample conversation with ChatGPT.
5. (Notice) The above verification process is not stable. If you encounter errors after several trials, please try to refresh the page, repeat the above steps, and try again. You may also try with the cookie to `https://chat.openai.com/backend-api/conversations`



Expand Down
10 changes: 10 additions & 0 deletions prompts/prompt_class.py
Original file line number Diff line number Diff line change
Expand Up @@ -48,3 +48,13 @@ class PentestGPTPrompt:
You're provided with a long input from the supervisor GPT model. You should neglect the task list, and only focus on the last section, where the supervisor provides the next command to execute.
Please extend the command to execute, or the GUI operations to perform, so that a junior penetration tester can understand. You should always provide the concrete IP address as target.
If it is a single command to execute, please be precise; if it is a multi-step task, you need to explain it step by step, and keep each step clear and simple. The information is below: \n\n"""

# local task session
local_task_init: str = """You're now requested to help the pentester to dig into a specific problem. The test is for education purpose. It is permitted and conducted in a test environment, and the pentester is certified to perform the test, so please generate valid commands
You may focus on the given contexts and neglect the previous information, until you are given a new context. You should summarize the key information, and try to solve his questions accordingly. \n\n"""

local_task_prefix: str = """Continue to the previous request to dig into the problem, below are the findings and questions from the tester. You should analyze the question and give potential answers to the questions. Please be precise, thorough, and show your reasoning step by step. \n\n"""

local_task_brainstorm: str = """Continue to the previous request to dig into the problem, the penetration tester does not know how to proceed. Below is his description on the task. Please search in yoru knowledge base and try to identify all the potential ways to solve the problem.
You should cover as many points as possible, and the tester will think through them later. Below is his description on the task. \n\n"""

13 changes: 7 additions & 6 deletions test_connection.py
Original file line number Diff line number Diff line change
Expand Up @@ -11,11 +11,12 @@
chatgpt_config = ChatGPTConfig()
try:
chatgpt = ChatGPT(chatgpt_config)
text, conversation_id = chatgpt.send_new_message(
"Create a conversation for testing"
)
# print(text, conversation_id)
print("Now you're connected. To start PentestGPT, please use <python3 main.py>")
conversations = chatgpt.get_conversation_history()
print(conversations)
if conversations != None:
# print(text, conversation_id)
print("Now you're connected. To start PentestGPT, please use <python3 main.py>")
else:
print("The cookie is not properly configured. Please follow README to update cookie in config/chatgpt_config.py")
except requests.exceptions.JSONDecodeError:
print("The cookie is not properly configured. Please follow README to update cookie in config/chatgpt_config.py")
sys.exit(1)
3 changes: 1 addition & 2 deletions utils/chatgpt.py
Original file line number Diff line number Diff line change
Expand Up @@ -71,8 +71,7 @@ def __init__(self, config: ChatGPTConfig):
# "cookie": f"cf_clearance={self.cf_clearance}; _puid={self._puid}; __Secure-next-auth.session-token={self.session_token}",
"cookie": self.config.cookie,
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36",
"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7"
# 'Content-Type': 'text/event-stream; charset=utf-8',
"accept": "*/*",
}
)
self.headers["authorization"] = self.get_authorization()
Expand Down
130 changes: 112 additions & 18 deletions utils/pentest_gpt.py
Original file line number Diff line number Diff line change
Expand Up @@ -6,7 +6,7 @@
from prompts.prompt_class import PentestGPTPrompt
from utils.prompt_select import prompt_select, prompt_ask
from prompt_toolkit.formatted_text import HTML
from utils.task_handler import main_task_entry, mainTaskCompleter
from utils.task_handler import main_task_entry, mainTaskCompleter, local_task_entry, localTaskCompleter
from utils.web_parser import google_search, parse_web
import time
import datetime as dt
Expand Down Expand Up @@ -42,7 +42,7 @@ class pentestGPT:
"default": "The user did not specify the input source. You need to summarize based on the contents.\n",
}

def __init__(self, reasoning_model="gpt-4"):
def __init__(self, reasoning_model="text-davinci-002-render-sha"):
self.log_dir = "logs"
self.chatGPTAgent = ChatGPT(ChatGPTConfig())
self.chatGPT4Agent = ChatGPT(ChatGPTConfig(model=reasoning_model))
Expand Down Expand Up @@ -152,9 +152,95 @@ def test_generation_handler(self, text):
self.log_conversation("generation", response)
return response

def local_input_handler(self) -> str:
"""
Request for user's input to handle the local task
"""
local_task_response = ""
self.chat_count += 1
local_request_option = local_task_entry()
self.log_conversation("user", local_request_option)

if local_request_option == "help":
print(localTaskCompleter().task_details)

elif local_request_option == "discuss":
## (1) Request for user multi-line input
self.console.print("Please share your findings and questions with PentestGPT.")
self.log_conversation(
"pentestGPT", "Please share your findings and questions with PentestGPT. (End with <shift + right-arrow>)"
)
user_input = prompt_ask(
"Your input: ", multiline=True
)
self.log_conversation("user", user_input)
## (2) pass the information to the reasoning session.
with self.console.status("[bold green] PentestGPT Thinking...") as status:
local_task_response = self.test_generation_handler(self.prompts.local_task_prefix + user_input)
## (3) print the results
self.console.print("PentestGPT:\n", style="bold green")
self.console.print(local_task_response + "\n", style="yellow")
self.log_conversation("pentestGPT", local_task_response)

elif local_request_option == "brainstorm":
## (1) Request for user multi-line input
self.console.print("Please share your concerns and questions with PentestGPT.")
self.log_conversation(
"pentestGPT", "Please share your concerns and questions with PentestGPT. End with <shift + right-arrow>)"
)
user_input = prompt_ask(
"Your input: ", multiline=True
)
self.log_conversation("user", user_input)
## (2) pass the information to the reasoning session.
with self.console.status("[bold green] PentestGPT Thinking...") as status:
local_task_response = self.test_generation_handler(self.prompts.local_task_brainstorm + user_input)
## (3) print the results
self.console.print("PentestGPT:\n", style="bold green")
self.console.print(local_task_response + "\n", style="yellow")
self.log_conversation("pentestGPT", local_task_response)


elif local_request_option == "google":
# get the users input
self.console.print(
"Please enter your search query. PentestGPT will summarize the info from google. (End with <shift + right-arrow>) ",
style="bold green",
)
self.log_conversation(
"pentestGPT",
"Please enter your search query. PentestGPT will summarize the info from google.",
)
user_input = prompt_ask(
"Your input: ", multiline=False
)
self.log_conversation("user", user_input)
with self.console.status("[bold green] PentestGPT Thinking...") as status:
# query the question
result: dict = google_search(user_input, 5) # 5 results by default
# summarize the results
# TODO
local_task_response = "Google search results:\n" + "still under development."
self.console.print(local_task_response + "\n", style="yellow")
self.log_conversation("pentestGPT", local_task_response)
return local_task_response

elif local_request_option == "continue":
self.console.print("Exit the local task and continue the main task.")
self.log_conversation("pentestGPT", "Exit the local task and continue the main task.")
local_task_response = "continue"

return local_task_response


def input_handler(self) -> str:
"""
Request for user's input to: (1) input test results, (2) ask for todos, (3) input other information, (4) end.
Request for user's input to:
(1) input test results,
(2) ask for todos,
(3) input other information (discuss),
(4) google.
(4) end.
The design details are based on PentestGPT_design.md
Return
Expand All @@ -166,16 +252,6 @@ def input_handler(self) -> str:

request_option = main_task_entry()
self.log_conversation("user", request_option)
# request_option = prompt_select(
# title=f"({self.chat_count}) > Please select your options with cursor: ",
# values=[
# ("1", HTML('<style fg="cyan">Input test results</style>')),
# ("2", HTML('<style fg="cyan">Ask for todos</style>')),
# ("3", HTML('<style fg="cyan">Discuss with PentestGPT</style>')),
# ("4", HTML('<style fg="cyan">Exit</style>')),
# ],
# )
# pass output

if request_option == "help":
print(mainTaskCompleter().task_details)
Expand Down Expand Up @@ -222,7 +298,7 @@ def input_handler(self) -> str:
# generate more test details (beginner mode)
elif request_option == "more":
self.log_conversation("user", "more")
## (1) pass the reasoning results to the test_generation session.
## (1) check if reasoning session is initialized
if self.step_reasoning_response is None:
self.console.print(
"You have not initialized the task yet. Please perform the basic testing following `next` option.",
Expand All @@ -231,10 +307,20 @@ def input_handler(self) -> str:
response = "You have not initialized the task yet. Please perform the basic testing following `next` option."
self.log_conversation("pentestGPT", response)
return response
## (2) start local task generation.
### (2.1) ask the reasoning session to analyze the current situation, and explain the task
self.console.print("PentestGPT will generate more test details, and enter the sub-task generation mode. (Pressing Enter to continue)", style="bold green")
self.log_conversation("pentestGPT", "PentestGPT will generate more test details, and enter the sub-task generation mode.")
input()

### (2.2) pass the sub-tasks to the test generation session
with self.console.status("[bold green] PentestGPT Thinking...") as status:
generation_response = self.test_generation_handler(
self.step_reasoning_response
)
_local_init_response = self.test_generation_handler(
self.prompts.local_task_init
)

self.console.print(
"Below are the further details.",
Expand All @@ -244,6 +330,14 @@ def input_handler(self) -> str:
response = generation_response
self.log_conversation("pentestGPT", response)

### (2.3) local task handler

while True:
local_task_response = self.local_input_handler()
if local_task_response == "continue":
# break the local task handler
break

# ask for task list (to-do list)
elif request_option == "todo":
## log that user is asking for todo list
Expand Down Expand Up @@ -278,12 +372,12 @@ def input_handler(self) -> str:
# pass other information, such as questions or some observations.
elif request_option == "discuss":
## (1) Request for user multi-line input
self.console.print("Please share your thoughts/questions with PentestGPT.")
self.console.print("Please share your thoughts/questions with PentestGPT. (End with <shift + right-arrow>) ")
self.log_conversation(
"pentestGPT", "Please share your thoughts/questions with PentestGPT."
)
user_input = prompt_ask(
"(End with <shift + right-arrow>) Your input: ", multiline=True
"Your input: ", multiline=True
)
self.log_conversation("user", user_input)
## (2) pass the information to the reasoning session.
Expand All @@ -298,15 +392,15 @@ def input_handler(self) -> str:
elif request_option == "google":
# get the users input
self.console.print(
"Please enter your search query. PentestGPT will summarize the info from google.",
"Please enter your search query. PentestGPT will summarize the info from google. (End with <shift + right-arrow>) ",
style="bold green",
)
self.log_conversation(
"pentestGPT",
"Please enter your search query. PentestGPT will summarize the info from google.",
)
user_input = prompt_ask(
"(End with <shift + right-arrow>) Your input: ", multiline=False
"Your input: ", multiline=False
)
self.log_conversation("user", user_input)
with self.console.status("[bold green] PentestGPT Thinking...") as status:
Expand Down
49 changes: 49 additions & 0 deletions utils/task_handler.py
Original file line number Diff line number Diff line change
Expand Up @@ -10,6 +10,43 @@
from prompt_toolkit.shortcuts import CompleteStyle, prompt


class localTaskCompleter(Completer):
tasks = [
"discuss", # discuss with pentestGPT on the local task
"brainstorm", # let pentestGPT brainstorm on the local task
"help", # show the help page (for this local task)
"google", # search on Google
"continue", # quit the local task (for this local task)
]

task_meta = {
"discuss": HTML("Discuss with <b>PentestGPT</b> about this local task."),
"brainstorm": HTML("Let <b>PentestGPT</b> brainstorm on the local task for all the possible solutions."),
"help": HTML("Show the help page for this local task."),
"google": HTML("Search on Google."),
"continue": HTML("Quit the local task and continue the previous testing."),
}

task_details = """
Below are the available tasks:
- discuss: Discuss with PentestGPT about this local task.
- brainstorm: Let PentestGPT brainstorm on the local task for all the possible solutions.
- help: Show the help page for this local task.
- google: Search on Google.
- quit: Quit the local task and continue the testing."""

def get_completions(self, document, complete_event):
word = document.get_word_before_cursor()
for task in self.tasks:
if task.startswith(word):
yield Completion(
task,
start_position=-len(word),
display=task,
display_meta=self.task_meta.get(task),
)


class mainTaskCompleter(Completer):
tasks = [
"next",
Expand Down Expand Up @@ -65,6 +102,18 @@ def main_task_entry(text="> "):
else:
return result

def local_task_entry(text="> "):
"""
Entry point for the task prompt. Auto-complete
"""
task_completer = localTaskCompleter()
while True:
result = prompt(text, completer=task_completer)
if result not in task_completer.tasks:
print("Invalid task, try again.")
else:
return result


if __name__ == "__main__":
main_task_entry()

0 comments on commit aca8640

Please sign in to comment.