
AI Mentor memory integration #220

Merged: 34 commits merged into develop from feature/integrate-progress-mentor-chat on Jan 24, 2025

Conversation

@milesha (Contributor) commented Jan 10, 2025

⚠️ Important

No database migration is included in this PR (needs to be done after the initial review) -> I added it. -Felix
The typo in the name of the MentorResponse entity will be fixed in the follow-up PR 😅 -> Sorry, I forgot and already fixed it. -Felix

Motivation

To correctly persist the context of a conversation, the mentor requires memory: short-term memory for persistence within a single conversation and long-term memory for sharing information across sessions.
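
For context, a minimal sketch of how these two memory layers are typically wired up in LangGraph; the in-memory backends, node name, and IDs below are illustrative placeholders, not the PR's actual implementation (which persists to the shared database):

```python
# Hedged sketch: short-term memory via a checkpointer (per-conversation state)
# and long-term memory via a store (cross-session data). Backends and names
# here are placeholders; the real graph persists to the project database.
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.checkpoint.memory import MemorySaver   # short-term memory backend
from langgraph.store.memory import InMemoryStore      # long-term memory backend


def mentor(state: MessagesState):
    # Placeholder node; the real node calls the LLM with a prompt loaded from disk.
    return {"messages": []}


builder = StateGraph(MessagesState)
builder.add_node("mentor", mentor)
builder.add_edge(START, "mentor")

# Compiling with both a checkpointer and a store gives the graph
# per-thread (conversation) memory and cross-session memory at once.
graph = builder.compile(checkpointer=MemorySaver(), store=InMemoryStore())

# Each conversation is addressed by a thread_id; cross-session data is
# scoped by an identifier such as the user id.
config = {"configurable": {"thread_id": "session-1", "user_id": "user-42"}}
graph.invoke({"messages": [("user", "Here is my status update")]}, config)
```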

Description

https://confluence.ase.in.tum.de/x/zTMfDg - a summary of key concepts used in the PR with corresponding links to the official LangGraph documentation

  • Prompts are now stored as .txt files within a designated prompts directory and dynamically loaded by the graph nodes at runtime as required (a loading sketch follows this list).
  • After a conversation, a summary is generated and shown to the user to review and save -> added a chat-summary UI component
  • When the user starts a new conversation, the previous one is automatically closed; instead of an input field, past conversations show an alert -> updated the chat-input UI component
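
To illustrate the prompt loading mentioned above, a small sketch; the directory layout, file name, and helper name are assumptions, not the PR's exact code:

```python
# Illustrative helper: load a prompt template from the prompts directory at
# runtime. The layout and names are assumptions for the sake of the example.
from pathlib import Path

PROMPTS_DIR = Path(__file__).parent / "prompts"


def load_prompt(name: str) -> str:
    """Read prompts/<name>.txt when the node runs, so prompt edits
    take effect without touching the node code."""
    return (PROMPTS_DIR / f"{name}.txt").read_text(encoding="utf-8")


# Example use inside a graph node:
system_prompt = load_prompt("status_update")
```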

Testing Instructions

  • Start the intelligence-service, webapp, and application-server.
  • Assign your user the mentor_access role.
  • Click on the "AI Mentor" button in the header and start your session:
    - After the update about status/impediments/promises, you will receive a summary of the conversation.
    - You need to chat with the bot until it says goodbye and wishes you a great week.
    - Afterwards, when you start a new conversation, the context of the previous one should be included in the chat (when talking about the status update and impediments).

Screenshots (if applicable)

Screenshot 2025-01-21 at 01 12 13

Checklist

General

  • PR title is clear and descriptive
  • PR description explains the purpose and changes
  • Code follows project coding standards
  • Self-review of the code has been done
  • Changes have been tested locally
  • Screenshots have been attached (if applicable)
  • Documentation has been updated (if applicable)

Client (if applicable)

  • UI changes look good on all screen sizes and browsers
  • No console errors or warnings
  • User experience and accessibility have been tested
  • Added Storybook stories for new components
  • Components follow design system guidelines (if applicable)

Server (if applicable)

  • Code is performant and follows best practices
  • No security vulnerabilities introduced
  • Proper error handling has been implemented
  • Added tests for new functionality
  • Changes have been tested in different environments (if applicable)

@milesha milesha self-assigned this Jan 10, 2025
@github-actions bot added the client, application-server, feature, and size:XXL (This PR changes 1000+ lines, ignoring generated files.) labels on Jan 10, 2025
@milesha milesha marked this pull request as ready for review January 21, 2025 07:58
@iam-flo (Contributor) left a comment

Looks good. When I asked the bot about my last status, it couldn't recollect it. However, this was right after.

@milesha (Contributor, Author) commented Jan 24, 2025

Looks good. When I asked the bot about my last status, it couldn't recollect it. However, this was right after.

I have just tested the system one more time and it worked (from a technical perspective, the most important point is saving the information to memory and retrieving it), but of course when working with LLMs you can't be 100% sure 😅
I have changed the prompt a bit; maybe it will help the LLM better understand the context.
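
For reference, saving and retrieving cross-session information with a LangGraph store typically looks like the sketch below; the namespace, key, and state fields are illustrative, not the project's exact code:

```python
# Hedged sketch of long-term memory access inside graph nodes: LangGraph
# injects the compiled store into nodes that declare a `store` parameter.
# Namespace, key, and state fields are illustrative.
from langchain_core.runnables import RunnableConfig
from langgraph.store.base import BaseStore


def save_summary(state: dict, config: RunnableConfig, *, store: BaseStore):
    """Persist the conversation summary under a user-scoped namespace."""
    user_id = config["configurable"]["user_id"]
    store.put(("mentor_memories", user_id), "last_summary", {"summary": state["summary"]})
    return {}


def recall_summary(state: dict, config: RunnableConfig, *, store: BaseStore):
    """Fetch the previous summary (if any) so a new session can build on it."""
    user_id = config["configurable"]["user_id"]
    item = store.get(("mentor_memories", user_id), "last_summary")
    return {"previous_summary": item.value["summary"] if item else ""}
```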

@FelixTJDietrich (Collaborator) left a comment

I fixed some minor stuff in 4e36b82

Sorry for fixing the typo, I forgot 😅 Will merge it as soon as I've updated the environment.

For now we will just use the same database; it should be fine since the write operations on the tables are mutually exclusive 🤔

Really great work with this PR!! There are some quirks with the prompting now; I think we have to move the reflective part a bit back, to after the status table. It kind of hinders the flow, but let's fix it in a follow-up.

@FelixTJDietrich FelixTJDietrich merged commit 78baa94 into develop Jan 24, 2025
8 checks passed
@FelixTJDietrich FelixTJDietrich deleted the feature/integrate-progress-mentor-chat branch January 24, 2025 21:38
Labels
application-server, client, feature, intelligence-service, size:XXL (This PR changes 1000+ lines, ignoring generated files.)
3 participants