Complete Backend Configuration + Feature Wiring: AI Voice Agent (LiveKit, MongoDB, OpenAI, Deepgram)

Remote
About
🧠 Project Overview

I have a custom-built AI voice agent named Wendy, built with:
- LiveKit (real-time audio)
- OpenAI (LLM)
- Deepgram (STT)
- MongoDB Atlas (memory + training data)
- AWS (backend environment)

The frontend is deployed and working. The backend is partially set up but needs to be completed, connected, and stabilized. This job is strictly backend work + feature wiring.

🎯 Milestone 2: What You Will Complete

1. Backend Deployment
- Deploy the backend to AWS EC2 or AWS Lambda (your recommendation).
- Correct all environment variables: OPENAI_API_KEY, DEEPGRAM_API_KEY, MONGODB_URI, and the LiveKit keys (see the env-check sketch below).
- Confirm that the backend connects to the frontend reliably.

2. MongoDB Integration
I have a MongoDB cluster created. I need:
- One main collection, wendy_lessons, for scripts, training transcripts, lessons, and commands.
- A clean CRUD pipeline (see the MongoDB sketch below):
  - add new lessons/scripts via API
  - fetch based on tags (e.g., "script", "lesson", "roleplay", "presentation")
  - query by title, type, or date
- Confirmation that Osama's previously uploaded training data (10–15 lessons) is connected, or re-upload it.

3. Voice + Conversation Feature Fixes
These adjustments come from structured testing I performed (see the Deepgram sketch below):
- Fix tone/pace cut-ins.
- Smooth response latency.
- Improve silence endpointing.
- Add jitter tolerance.
- Improve dynamic timing across normal pace, fast pace, and slow reflective pace.
- Add a short "thinking pause" in reflective mode.
- Smooth emotion detection.
- Strengthen persona switching.

4. Command → Feature Wiring
Wendy already has internal "brains" (prompts, behaviors, modes), but the command triggers need to be connected. For example:
- "Wendy, enter Drift Sync Mode" → activates the drift loop
- "Wendy, load my presentation" → fetches from MongoDB
- "Wendy, evaluate my delivery" → triggers re-teaching mode
- "Start advisor roleplay" → switches persona
- "Memory Mode on" → reads scripts
- "Anchor this line" → saves to the memory collection

All backend routing + functions must be created and connected (see the command-router sketch below).

5. Optional (add to estimate separately)
- Add a simple UI input box on the frontend to manually add scripts/lessons (see the Express sketch below).
- Create a second collection for the future: wendy_leads, a lightweight CRM memory.

⚙️ Tech Stack
- Node.js backend
- LiveKit server SDK
- MongoDB Atlas
- AWS (EC2 or Lambda)
- Vercel frontend (already deployed)
- Deepgram / OpenAI

🧪 Acceptance Criteria
- Backend deploys cleanly and connects to the frontend without errors.
- All environment variables working.
- MongoDB collections created and data accessible.
- All command functions working on live Wendy.
- Conversation flow issues resolved.
- Tone, pacing, and role-play improvements implemented.
- Clear documentation on how to add lessons, how to maintain the backend, and how commands trigger Wendy's modes.

💵 Budget
Fixed price: $300–$450, depending on experience and speed. (You may propose your rate, but please be realistic.)

🕒 Timeline
4–7 days, depending on whether Lambda is used.

🔑 What I Will Provide
- All API keys
- Full frontend/backend repos
- Testing reports (9 tests)
- Details of Wendy's internal behavior modes
- Any script/lesson uploads needed

🙏 Looking For
- Someone who understands both LLM backend architecture and realtime audio agents
- Strong MongoDB experience
- Able to work independently
- Clear communication; no hand-holding needed

🚀 Ready to start immediately. Please send:
- your GitHub or sample projects
- confirmation that you've worked with LiveKit or similar audio frameworks
- your estimated timeline
- your rate within the requested range
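📎 Illustrative sketches (for clarity only, not deliverables). The snippets below show what some of the items above might look like in a Node/TypeScript backend. Anything not stated in the post (database name, exact LiveKit variable names, route paths, tuning values) is an assumption about the repo, not a spec.

First, a minimal fail-fast check for the environment variables in section 1, assuming dotenv is used; the LiveKit variable names are guesses:

```ts
// env.ts - fail fast at startup if any required key is missing.
import "dotenv/config";

const REQUIRED = [
  "OPENAI_API_KEY",
  "DEEPGRAM_API_KEY",
  "MONGODB_URI",
  "LIVEKIT_API_KEY",    // assumed name
  "LIVEKIT_API_SECRET", // assumed name
  "LIVEKIT_URL",        // assumed name
] as const;

const missing = REQUIRED.filter((key) => !process.env[key]);
if (missing.length > 0) {
  throw new Error(`Missing environment variables: ${missing.join(", ")}`);
}

// Typed access to the validated variables.
export const env = Object.fromEntries(
  REQUIRED.map((key) => [key, process.env[key] as string]),
) as Record<(typeof REQUIRED)[number], string>;
```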

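For section 2, a sketch of the wendy_lessons CRUD pipeline using the official mongodb Node driver. The database name and the field layout (title, type, tags, body, createdAt) are assumptions:

```ts
// lessons.ts - CRUD pipeline for the wendy_lessons collection.
import { MongoClient, type Filter, type Document } from "mongodb";

const client = new MongoClient(process.env.MONGODB_URI!);
const lessons = client.db("wendy").collection("wendy_lessons"); // db name assumed

export interface Lesson {
  title: string;
  type: "script" | "lesson" | "roleplay" | "presentation";
  tags: string[];
  body: string;
  createdAt: Date;
}

export async function connectDb() {
  await client.connect();
}

// Add a new lesson/script via the API.
export function addLesson(doc: Omit<Lesson, "createdAt">) {
  return lessons.insertOne({ ...doc, createdAt: new Date() });
}

// Fetch based on a tag, e.g. "script" or "roleplay".
export function byTag(tag: string) {
  return lessons.find({ tags: tag }).toArray();
}

// Query by title, type, or date.
export function search(q: { title?: string; type?: Lesson["type"]; since?: Date }) {
  const filter: Filter<Document> = {};
  if (q.title) filter.title = { $regex: q.title, $options: "i" };
  if (q.type) filter.type = q.type;
  if (q.since) filter.createdAt = { $gte: q.since };
  return lessons.find(filter).toArray();
}
```

At scale, a text index on title (or Atlas Search) would beat the regex lookup shown here.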

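For the endpointing, latency, and pacing items in section 3, much of the tuning lives in the STT connection options. A sketch using the Deepgram JS SDK's live transcription; the numeric values are illustrative starting points, not tested settings:

```ts
// stt.ts - Deepgram live connection with tunable endpointing.
import { createClient, LiveTranscriptionEvents } from "@deepgram/sdk";

const deepgram = createClient(process.env.DEEPGRAM_API_KEY!);

// Per-pace-mode silence thresholds (values are assumptions to tune in testing).
const ENDPOINTING_MS = { normal: 300, fast: 150, reflective: 800 };

export function openStt(mode: keyof typeof ENDPOINTING_MS = "normal") {
  const connection = deepgram.listen.live({
    model: "nova-2",                   // assumed model choice
    interim_results: true,             // lowers perceived latency
    endpointing: ENDPOINTING_MS[mode], // silence (ms) before finalizing
    utterance_end_ms: 1200,            // tolerance for longer mid-thought gaps
  });

  connection.on(LiveTranscriptionEvents.Transcript, (evt) => {
    if (evt.is_final) {
      const text = evt.channel.alternatives[0]?.transcript ?? "";
      // Hand the finalized transcript to the command router / LLM here.
      console.log(text);
    }
  });

  return connection;
}
```

A larger reflective-mode threshold is one cheap way to produce the "thinking pause"; jitter tolerance would more likely be handled on the LiveKit audio side.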

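Section 4 is essentially a transcript router. A sketch with placeholder handlers standing in for Wendy's existing modes (the handler bodies and mode identifiers are invented for illustration):

```ts
// commands.ts - maps spoken trigger phrases to backend handlers.
type Handler = (transcript: string) => Promise<void>;

const COMMANDS: Array<{ pattern: RegExp; handler: Handler }> = [
  { pattern: /enter drift sync mode/i,  handler: async () => activateMode("drift-sync") },
  { pattern: /load my presentation/i,   handler: async () => loadFromMongo("presentation") },
  { pattern: /evaluate my delivery/i,   handler: async () => activateMode("re-teaching") },
  { pattern: /start advisor roleplay/i, handler: async () => switchPersona("advisor") },
  { pattern: /memory mode on/i,         handler: async () => activateMode("memory") },
  { pattern: /anchor this line/i,       handler: async (t) => saveAnchor(t) },
];

// Called on every finalized transcript; returns true when a command fired,
// so the text can be kept out of the normal LLM conversation flow.
export async function routeCommand(transcript: string): Promise<boolean> {
  const match = COMMANDS.find((c) => c.pattern.test(transcript));
  if (!match) return false;
  await match.handler(transcript);
  return true;
}

// Placeholders; the real versions would call Wendy's existing mode APIs.
async function activateMode(mode: string) { console.log(`mode: ${mode}`); }
async function loadFromMongo(type: string) { console.log(`fetch: ${type}`); }
async function switchPersona(name: string) { console.log(`persona: ${name}`); }
async function saveAnchor(line: string) { console.log(`anchor: ${line}`); }
```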

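For the optional item in section 5, the backend half of the "add scripts/lessons" input box could be a single Express route that reuses addLesson from the MongoDB sketch above; the route path and port are assumptions:

```ts
// routes.ts - minimal endpoint behind the optional "add lesson" UI box.
import express from "express";
import { addLesson } from "./lessons"; // from the MongoDB sketch above

const app = express();
app.use(express.json());

app.post("/api/lessons", async (req, res) => {
  const { title, type, tags, body } = req.body ?? {};
  if (!title || !body) {
    res.status(400).json({ error: "title and body are required" });
    return;
  }
  const result = await addLesson({ title, type, tags: tags ?? [], body });
  res.status(201).json({ id: result.insertedId });
});

app.listen(3000); // port is an assumption
```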
