Build V1 of SaaS (Next.js + Supabase + Chrome Extension + AI Scoring + Scraper)

Upwork

Remote

2 hours ago

About

I’m building SearchFindr, a SaaS platform for ETA/search fund buyers to source and analyze small-business deals using AI. Users will be able to pull deals from on-market listings (via a Chrome extension), CIM PDFs, and off-market data. The system summarizes each business, extracts financials and key attributes, flags red flags, and scores the deal against each user’s personal search criteria.

The foundation is already working: a Next.js app deployed on Vercel, Supabase (Postgres + RLS) for auth and data, and a Chrome extension (Manifest V3, TypeScript) that scrapes on-market listings and sends them to the backend. An AI scoring pipeline generates a summary, extracts financials, identifies risks, and outputs a score and tier, which are shown on a dashboard and on individual deal pages. Supabase RLS is configured so users only see their own deals.

I’m looking for a full-stack developer to take this to a production-ready V1 with three main pieces: (1) per-user filters/preferences that drive personalized scoring; (2) a full CIM upload and scoring flow; and (3) a robust off-market engine that can discover and enrich companies from multiple sources, not just a single URL.

For on-market and CIM, I need you to:

• Implement a per-user “search preferences”/filters system (stored in Supabase) covering revenue and EBITDA/SDE ranges, industries, geography, owner involvement, recurring revenue, customer concentration tolerance, and red-flag tolerances.
• Build dashboard UI where users can set and update these preferences.
• Wire the preferences into the existing AI scoring pipeline so the score and “fit” explanation are personalized per user across on-market, CIM, and off-market deals.
• Add a CIM upload flow: users upload a PDF, the text is extracted and run through the same AI scoring logic, and the result is saved as a new company record that shows up on the dashboard and detail page like any other deal.
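To make the personalization requirement concrete, here is a minimal sketch of per-user preferences and a deterministic “fit” check. All type and field names (`SearchPreferences`, `maxCustomerConcentration`, etc.) are illustrative assumptions, not the existing schema; in the real pipeline the preferences would more likely shape the LLM scoring prompt than be scored by hand like this.

```typescript
// Hypothetical shape for per-user search preferences (illustrative only).
interface SearchPreferences {
  revenueMin: number;
  revenueMax: number;
  ebitdaMin: number;
  ebitdaMax: number;
  industries: string[];
  geographies: string[];
  maxCustomerConcentration: number; // tolerated top-customer revenue share, 0..1
}

// Attributes the AI pipeline extracts from a listing or CIM (illustrative).
interface DealAttributes {
  revenue: number;
  ebitda: number;
  industry: string;
  geography: string;
  customerConcentration: number;
}

// Toy "fit" score: the fraction of preference checks the deal passes.
function fitScore(prefs: SearchPreferences, deal: DealAttributes): number {
  const checks = [
    deal.revenue >= prefs.revenueMin && deal.revenue <= prefs.revenueMax,
    deal.ebitda >= prefs.ebitdaMin && deal.ebitda <= prefs.ebitdaMax,
    prefs.industries.includes(deal.industry),
    prefs.geographies.includes(deal.geography),
    deal.customerConcentration <= prefs.maxCustomerConcentration,
  ];
  return checks.filter(Boolean).length / checks.length;
}
```

Even if the LLM produces the narrative “fit” explanation, a deterministic check like this is useful as a cheap pre-filter and as a sanity check on the model’s output.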
For off-market, I do not want a toy script that hits one page and stops. I want a solid V1 that is actually useful to real searchers and generates referable results. That means:

• Designing and implementing an off-market pipeline that can search or crawl targeted sources (business directories, review sites, company registries, and similar) to discover candidate companies based on the user’s filters (industry, geography, etc.).
• Performing multi-source enrichment for each candidate: visiting multiple URLs related to that company (e.g. the official website if available, relevant directory pages, review sites, other public data sources, and, where feasible and allowed, professional or rating sites) to assemble a richer profile of the business.
• Normalizing that data into a consistent structure (name, location, industry/category, rough size signals, customer/reputation signals, etc.) and sending it through the AI scoring pipeline together with the user’s preferences.
• Saving these as off_market deals in Supabase, with deduplication, basic logging, and error handling, so that new off-market leads appear in the user’s dashboard, tagged appropriately.
• Making the off-market code modular and maintainable, so additional sources can be added later and changes in a source’s HTML structure are not catastrophic.

All of this has to be multi-tenant safe: each user has their own preferences and their own set of deals, with Supabase RLS correctly enforced. The UI should feel clean and responsive, with proper loading and error states. The whole path (Chrome extension / CIM upload / off-market job → backend → Supabase → dashboard and deal pages) should be stable enough that I can onboard real users (searchers, ETA buyers, small PE shops) and comfortably ask them for feedback and referrals.
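As one way to read the normalization and deduplication bullets above, here is a sketch of a normalized company record and a dedup key. The field names and the legal-suffix list are assumptions for illustration, not existing code or the Supabase schema.

```typescript
// Illustrative normalized shape for an off-market candidate company.
interface CompanyProfile {
  name: string;
  city: string;
  state: string;
  category: string;
  website?: string;
}

// Deduplication key: lowercase name stripped of punctuation and common
// legal suffixes, combined with location. Two scrapes of "Acme Plumbing LLC"
// and "ACME PLUMBING, L.L.C." from different sources collapse to one key.
function dedupKey(p: CompanyProfile): string {
  const name = p.name
    .toLowerCase()
    .replace(/[.,']/g, "")
    .replace(/\b(llc|inc|corp|co|ltd)\b/g, "")
    .replace(/\s+/g, " ")
    .trim();
  return `${name}|${p.city.toLowerCase()}|${p.state.toLowerCase()}`;
}
```

In practice this key could back a unique index on the off_market table (per user), so repeated crawls upsert rather than duplicate.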
The tech stack is: Next.js (App Router), React, TypeScript, Supabase/Postgres (with RLS), Vercel, Tailwind, Chrome Extension (MV3), Node-based scraping (Puppeteer/Playwright/Cheerio or similar), and OpenAI (or similar) for LLM integration. I want to keep this stack; your job is to extend and harden what exists rather than rebuild.

I’m looking for someone with real experience in Next.js + Supabase, Chrome extensions, web scraping and multi-source enrichment, and AI/LLM integrations, who cares about product quality, not just making it “work once.” Please share similar projects you’ve shipped, your estimated hours/total cost, and how you’d approach designing the off-market discovery + enrichment flow so that it is both reliable and extendable.
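To illustrate the kind of “reliable and extendable” design being asked about, one plausible shape is a pluggable source-adapter interface: each directory or registry implements the same discover/enrich contract, and the pipeline core tolerates individual source failures so one broken scraper is not catastrophic. Everything here is a hypothetical sketch, not existing code.

```typescript
// A discovered candidate, before enrichment (illustrative).
interface Candidate {
  name: string;
  url: string;
  source: string;
}

// Contract every source (directory, registry, review site) implements.
interface SourceAdapter {
  name: string;
  // Find candidate companies matching coarse user filters.
  discover(filters: { industry: string; geography: string }): Promise<Candidate[]>;
  // Pull additional detail for one candidate from this source.
  enrich(c: Candidate): Promise<Record<string, unknown>>;
}

// Fan out across all adapters; a rejected adapter (changed HTML, rate
// limit, network error) contributes nothing instead of failing the job.
async function discoverAll(
  adapters: SourceAdapter[],
  filters: { industry: string; geography: string },
): Promise<Candidate[]> {
  const results = await Promise.allSettled(adapters.map((a) => a.discover(filters)));
  return results.flatMap((r) => (r.status === "fulfilled" ? r.value : []));
}
```

Adding a new source then means writing one adapter and registering it, with no changes to the pipeline core; per-adapter logging and retry policy can hang off the same interface.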