MeetingMind is an AI-powered meeting assistant that helps you capture, analyze, and act on your meeting insights effortlessly. It is built with Langflow, Next.js, and a fast Groq-based transcription service that analyzes your meetings and generates insights.
Check out this demo video to see MeetingMind in action:
MeetingMind.mp4
- Audio recording and file upload
- AI-powered transcription
- Automatic extraction of key information:
  - Tasks
  - Decisions
  - Questions
  - Insights
  - Deadlines
  - Attendees
  - Follow-ups
  - Risks
  - Agenda
- Node.js 14.x or later
- npm or yarn
- A LangFlow server running locally
- Git (for cloning the repository)
To compress your audio files further, you can use tools like:
- Online audio compressors
- FFmpeg (command-line tool for audio/video processing)
Ensure your compressed audio maintains sufficient quality for accurate transcription while staying under the 25 MB limit.
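For instance, FFmpeg can re-encode a recording to a smaller speech-friendly format. This is a sample command, not part of the repo; the bitrate and sample rate below are just reasonable starting points for voice audio:

```shell
# Re-encode to mono 64 kb/s MP3 at 16 kHz — usually sufficient for speech transcription.
ffmpeg -i meeting.wav -ac 1 -ar 16000 -b:a 64k meeting.mp3
```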
1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/meetingmind.git
   cd meetingmind
   ```
2. Install dependencies:

   ```bash
   npm install
   # or
   yarn install
   ```
3. Set up LangFlow:

   - Install and run the LangFlow backend server
   - Upload the flow provided in the repo at `utils/langflow_flow/Meeting Mind.json`
   - Note the URL of your LangFlow server
4. Create a `.env.local` file in the root directory and add the LangFlow URL:

   ```bash
   LANGFLOW_FLOW_URL="http://127.0.0.1:7860/api/v1/run/5781a690-e689-4b26-b636-45da76a91915"
   ```

   Replace the URL with your actual LangFlow server URL if different.
   In the file `app/api/transcribe/route.ts`, locate the `payload` object and update the Groq component name to match your LangFlow component name. For example:

   ```typescript
   const payload = {
     output_type: 'text',
     input_type: 'text',
     tweaks: {
       'YourGroqComponentName': {
         audio_file: filePath,
       },
     },
   }
   ```

   Replace `'YourGroqComponentName'` with the actual name of your Groq component in LangFlow.
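As a minimal sketch of how such a request body can be assembled before it is POSTed to `LANGFLOW_FLOW_URL`, the helper below builds the same shape with the component name and file path as parameters. The `buildTranscribePayload` function is illustrative and not part of the repo:

```typescript
// Illustrative helper that assembles the LangFlow "run" payload.
// The tweaks key must exactly match your Groq component's name in LangFlow.
interface TranscribePayload {
  output_type: string;
  input_type: string;
  tweaks: Record<string, { audio_file: string }>;
}

function buildTranscribePayload(
  componentName: string,
  filePath: string
): TranscribePayload {
  return {
    output_type: 'text',
    input_type: 'text',
    tweaks: {
      // Computed key: lets the caller supply the component name at runtime.
      [componentName]: { audio_file: filePath },
    },
  };
}

// Example: the object serialized as JSON and POSTed to LANGFLOW_FLOW_URL.
const payload = buildTranscribePayload('YourGroqComponentName', '/tmp/meeting-audio.m4a');
console.log(JSON.stringify(payload, null, 2));
```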
5. Set up the database:

   This project uses Prisma as an ORM. By default, it's configured to use SQLite as the database.

   a. To use the local SQLite database:

   - Ensure your `.env` file contains:

     ```bash
     DATABASE_URL="file:./dev.db"
     ```

   - Run the following commands to set up your database:

     ```bash
     npx prisma generate
     npx prisma migrate dev --name init
     ```
   b. To use a different database (e.g., PostgreSQL with Neon):

   - Update your `.env` file with the appropriate connection string:

     ```bash
     DATABASE_URL="postgresql://username:password@host:port/database?schema=public"
     ```

   - Update the `provider` in `prisma/schema.prisma`:

     ```prisma
     datasource db {
       provider = "postgresql"
       url      = env("DATABASE_URL")
     }
     ```

   - Run the Prisma commands as mentioned above to generate the client and run migrations.
6. Run the development server:

   ```bash
   npm run dev
   # or
   yarn dev
   ```
7. Open http://localhost:3000 with your browser to see the result.
- Navigate to the dashboard page.
- Upload an audio file.
- Wait for the AI to process and analyze the meeting.
- Review the extracted information in the Dashboard.
- `app/`: Contains the main application code
  - `components/`: Reusable React components
  - `api/`: API routes for server-side functionality
  - `dashboard/`: Dashboard page component
  - `page.tsx`: Home page component
- `public/`: Static assets
- `prisma/`: Database schema and migrations
- `utils/`: Utility functions and configurations
- `lib/`: Shared libraries and modules
- Langflow: For AI workflow management
- Next.js: React framework for building the web application
- React: JavaScript library for building user interfaces
- Tailwind CSS: Utility-first CSS framework
- Framer Motion: Animation library for React
- Axios: Promise-based HTTP client
- Prisma: ORM for database management
- SQLite: Default database (can be changed to PostgreSQL or others)
- Groq: AI model provider for transcription and analysis
- The project uses environment variables for configuration. Ensure all necessary variables are set in your `.env.local` file.
- Tailwind CSS configuration can be found in `tailwind.config.ts`.
- TypeScript configuration is in `tsconfig.json`.
- `/api/meetings`: Handles CRUD operations for meetings
- `/api/transcribe`: Handles audio file transcription and analysis
- Use the browser's developer tools to debug client-side issues.
- For server-side debugging, use `console.log` statements or attach a debugger to your Node.js process.
- Large audio files may take longer to process. Consider implementing a progress indicator for better user experience.
- Optimize database queries and indexes for improved performance as the number of meetings grows.
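As one example of the indexing advice above, if the schema has a model along these lines (the `Meeting` model and its fields here are hypothetical, not copied from `prisma/schema.prisma`), an index on the timestamp column used to sort the dashboard can be declared directly in the schema:

```prisma
model Meeting {
  id        Int      @id @default(autoincrement())
  title     String
  createdAt DateTime @default(now())

  // Speeds up "most recent meetings first" queries as the table grows.
  @@index([createdAt])
}
```

After editing the schema, apply the change with `npx prisma migrate dev`.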
These screenshots provide a visual representation of the application's main interfaces. The landing page showcases the initial user experience, while the dashboard displays the core functionality where users can upload audio files and view the AI-processed meeting information.
Contributions are welcome! Please feel free to submit a Pull Request. Here are some ways you can contribute:
- Report bugs and issues
- Suggest new features
- Improve documentation
- Submit pull requests with bug fixes or new features
Please read our contributing guidelines before submitting a pull request.
This project is licensed under the MIT License. See the LICENSE file for details.
If you encounter any problems or have questions, please open an issue on the GitHub repository.
- Thanks to the Langflow team for providing the AI workflow management tool.
- Special thanks to all contributors who have helped shape this project.