
VIDUR RAMACHANDRAN

Business · Data · Product

Curious about systems
Careful with data

Available Immediately
Systems I Have Worked With
Profile

Systems thinker. Automation builder.

About me

2.5+ years of professional experience across consulting and an early-stage startup, supporting software validation, data systems, and automation alongside product and engineering teams. Comfortable in fast-paced environments that require structured problem-solving, documentation, and cross-functional coordination.

Currently freelancing as a Business Analyst and GenAI Automation Consultant. Founder of Mesh23.

Experience
Systems Impact
Play with tech, learn through fun.
Projects
Business Analyst · Automation Engineer · Associate PM
Your next hire is right here

2.5 years of real work across data systems, automation, and product operations. Available immediately. Open to full-time, contract, and remote roles globally.

Currently available · Hyderabad, IN · Open to relocation
01 / About

VIDUR
RAMACHANDRAN

Systems thinker. Automation builder. I take messy operational problems and turn them into documented, reliable systems that actually hold up.

Profile
Who I am

I'm Vidur Ramachandran, a Business Analyst and GenAI Automation Consultant with 2.5+ years of professional experience across consulting and an early-stage startup. I support software validation, data systems, and automation alongside product and engineering teams.

At Deloitte USI, I worked on enterprise CDP systems for Fortune 500 clients — conducting root cause analysis on data pipeline failures and building automation frameworks. Before that, at Acumen Connect, I owned product and ops tasks in a fast-paced startup.

I'm comfortable in fast-paced environments that demand structured problem-solving, documentation, and cross-functional coordination. Currently freelancing and building Mesh23, delivering AI-driven workflow solutions for SMBs.

Contact
vidur2002@gmail.com · +91 88978 05834 · linkedin.com/in/vidurr · Available Immediately
Location & Visa

Hyderabad, India
Open to remote & relocation
B1/B2 US Business Visa — valid · 10-year
Comfortable with EST / PST-aligned shifts & overnight hours

Skills
Data & Analytics
  • SQL — BigQuery, MySQL
  • Python · Java Scripting
  • Apache Airflow
  • JSON Schema Validation
  • Anomaly Detection
  • VBA / Macros
Platforms & Automation
  • Adobe Experience Platform (AEP)
  • Adobe Journey Optimizer (AJO)
  • BigQuery / Firebase
  • n8n / Lindy AI
  • GenAI Workflow Automation
Testing & Validation
  • Test Case Design
  • Functional & Automation Testing
  • Software Validation
  • Defect Analysis & Debugging
  • Root Cause Analysis
Business Analysis & Ops
  • Program Tracking & Milestone Monitoring
  • Risk & Issue Tracking
  • Dashboard Creation & Status Reporting
  • SOPs & Runbooks
  • Process Documentation
Collaboration & Communication
  • Cross-functional Coordination
  • Stakeholder Communication (Technical & Non-technical)
  • EST / PST-aligned Remote Work
  • Comfortable with Overnight & Global Shifts
Experience
Freelance
Business Analyst & GenAI Automation Consultant
Jul 2025 — Present
Hyderabad
  • Translated business requirements into structured user stories and collaborated with developers to deliver automation-driven workflows.
  • Led CX analysis and feature refinement initiatives, improving product usability and stakeholder alignment.
  • Designed and deployed GenAI + n8n automation solutions for SMB clients, streamlining manual processes and reducing operational effort.
  • Founded and operated Mesh23: a freelance automation initiative delivering AI-driven workflow optimization.
Deloitte USI
Consulting Associate Analyst (Engineering Support / Developer)
Feb 2024 — Jun 2025
Hyderabad
Fortune 500 clients: Wells Fargo (Feb–Nov 2024) & LG (Dec 2024–Jun 2025)
  • Performed root cause analysis and failure investigation on recurring data pipeline failures within Adobe Experience Platform, utilizing SQL-based audits and validation checks to trace issues across ingestion, transformation, and activation workflows — reducing incident resolution time by 28%.
  • Conducted data validation and error pattern analysis on large datasets (5–18M+ records) to identify schema mismatches and system inconsistencies impacting customer workflows — improving data reliability by 16%.
  • Contributed to the design and testing of Python-based validation automation to filter and verify live data before ingestion into downstream systems.
  • Supported QA, validation, and troubleshooting workflows using AI-assisted tools to accelerate checks during development; documented recurring issues into SOPs and runbooks.
  • Collaborated with engineering, product, and operations teams to validate fixes, monitor post-launch system behaviour, and track issue resolution — recommending improvements based on data insights.
Acumen Connect
Product Intern → Product Development Associate (Early-Stage Startup)
Jan 2023 — Feb 2024
Hyderabad
  • Operated in a fast-paced startup environment with high accountability, taking ownership of operational and product-related tasks with limited guidance.
  • Identified recurring operational inefficiencies and converted fixes into 6+ documented SOPs, improving interdepartmental coordination efficiency by 15%.
  • Collaborated with the marketing team to translate business requirements into structured tasks and feature validation scenarios for freelance developers.
Education
BCA
2020 – 2023
Bachelor of Computer Applications
Symbiosis Institute of Computer Studies & Research, Pune
CGPA 7.9/10
02 / Work

Systems Built

Automated systems, analytical frameworks, and workflow tools built across enterprise consulting and independent work. Each one starts with a real problem.

Deep Dive · Case Study
Enterprise Data Systems · Deloitte USI

Reducing Data Pipeline Failures in Adobe Experience Platform

01 Problem

A Fortune 500 client's Adobe AEP pipeline was producing silent failures. Customer segments weren't updating correctly and downstream marketing automations were firing on stale data. The team was manually checking logs with no structured process.

02 Investigation

Performed SQL audits across 18M+ records to identify where the pipeline was failing: schema mismatches, null propagation errors, and upstream ingestion delays. Mapped failure patterns across 8 weeks of pipeline logs to surface recurring root causes.

03 System Understanding

Built a mental model of the full data flow — from raw event ingestion through AEP identity resolution to segment activation. This revealed three structural failure points that were not visible in isolation.

04 Solution

Contributed to Python-based pre-ingestion validation scripts that caught malformed data before pipeline entry. Compiled each failure class into structured SOPs and worked with engineering to implement schema validation gates.
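
To make that concrete, here is a minimal, illustrative sketch of a pre-ingestion validation gate in Python using jsonschema. The field names, schema, and example records are placeholders for this write-up, not the client's actual data model.

validate_events.py (illustrative sketch)
from jsonschema import Draft7Validator

# Hypothetical event schema: field names are illustrative, not the client's model.
EVENT_SCHEMA = {
    "type": "object",
    "required": ["event_id", "customer_id", "event_ts", "amount"],
    "properties": {
        "event_id": {"type": "string"},
        "customer_id": {"type": "string"},
        "event_ts": {"type": "string"},
        "amount": {"type": "number", "minimum": 0},
    },
}
validator = Draft7Validator(EVENT_SCHEMA)

def split_events(events):
    """Quarantine malformed records so they never reach the pipeline."""
    valid, rejected = [], []
    for event in events:
        errors = [e.message for e in validator.iter_errors(event)]
        if errors:
            rejected.append({"event": event, "errors": errors})
        else:
            valid.append(event)
    return valid, rejected

# Example: only the well-formed record passes through to ingestion.
good = {"event_id": "e1", "customer_id": "c1", "event_ts": "2024-05-01T10:00:00Z", "amount": 120.0}
bad = {"event_id": "e2", "amount": -5}
valid, rejected = split_events([good, bad])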

05 Outcome

28% reduction in average incident resolution time. Repeat escalations dropped by around 22%. The runbooks were adopted team-wide, turning individual knowledge into institutional memory.

Environment
Adobe Experience Platform (AEP)
Tools Used
SQL · BigQuery · Python · Apache Airflow
Role
Associate Analyst — Investigation, Automation and Documentation
Scale
18M+ records · Fortune 500 client
28% faster resolution
22% fewer repeat incidents
16% data reliability improvement
6+ SOPs created
03 / Builder Lab

Vision Labs

Where I experiment with startup ideas and automation systems. Not polished products — honest exploration. The best way to understand a problem space is to build something in it.

Lab Projects
Builder Mindset
Build to Learn. Ship to Understand.

Vision Labs is a commitment to staying operationally sharp beyond formal job titles. Every project here is a real systems problem I've chosen to work on — an automation workflow, a SaaS prototype, or a business model stress-test. The experiments are messy. The thinking is real.

← Vision Labs / Concept Doc
Prototype Built
Built for EMELEX Solutions Pvt. Ltd.

CorpSim AI

A professional-grade, AI-powered corporate simulation platform that bridges the gap between campus life and the corporate world — built as a working prototype to give shape to a founder’s vision for what AI-native corporate readiness training could be.

My Role

EMELEX Solutions is building the next generation of corporate readiness training. Inspired by the founder’s vision, I wanted to build a demo prototype for him — my own understanding of what he was trying to create, translated into something tangible and explorable.

This prototype is something I built independently, thinking like a product person: mapping the user journey end-to-end, defining the information architecture, prototyping the core experience, and documenting how it all fits together. It’s not a final product — it’s a high-fidelity signal of what this kind of product could feel and function like.

  Deliverable: Prototype + Product Wireframe from a Product Person’s POV — a shareable signal of what the product architecture and experience could look like.
The Core Concept
The Problem
  • Students graduate with zero corporate context — no idea how to manage email, handle stakeholders, or prioritise under pressure.
  • Existing platforms are passive — videos and quizzes, not real work.
  • Internships are competitive and limited. Most students never get one before their first job.
  • Corporate readiness is untestable on a resume — it has to be demonstrated.
The Solution
  • CorpSim AI drops users into a Virtual Desktop Environment — their first day on the job.
  • AI-powered stakeholders respond to your emails, messages, and decisions dynamically.
  • Real tasks: write a BRD, analyse data, manage a Kanban board, present to a panel.
  • A proprietary 8-pillar Corporate Readiness Score (CRS) benchmarks performance against real corporate standards.
The Virtual Desktop Environment

The VDE is a multi-app interface containing everything a junior professional would need on day one. Users don’t “answer questions” — they actually work inside these tools simultaneously.

InboxAI

Intelligent email client where users manage AI-generated stakeholder communications. Claude responds dynamically to tone, clarity, and analytical depth.

SlackSim + TaskBoard

Real-time chat simulation for team collaboration, plus a Kanban system for managing deliverables — mirroring actual project workflows.

DocVault + MeetingRoom

Professional document editor for BRDs, specs, and reports — plus a video call simulator for stakeholder presentations and stand-ups.

DataStudio + TeamCalendar

Analytics and scheduling tools to round out the experience — modelling real analyst and PM workflows.

CRS Engine

Proprietary 8-pillar scoring: Communication, Analytical Thinking, Urgency, Stakeholder Management, and more — updated in real-time as you work.

AI Engine: Claude/Anthropic generates dynamic content (emails, documents) and provides objective managerial grading on all user outputs. The AI acts as your manager, colleague, and evaluator simultaneously.
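
As a rough illustration of that grading loop, the sketch below calls the Anthropic messages API and asks the model to score one submission against a few CRS pillars. The model id, prompt wording, and pillar list are assumptions for the demo, not the production configuration.

grade_submission.py (illustrative sketch)
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

GRADER_SYSTEM = (
    "You are the user's manager in a corporate simulation. "
    "Score the submission on communication, analytical thinking, urgency and "
    "stakeholder management from 0-10 each, and reply with JSON only."
)

def grade_submission(submission_text: str) -> str:
    """Return the model's CRS-style scores for one user deliverable."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model id
        max_tokens=400,
        system=GRADER_SYSTEM,
        messages=[{"role": "user", "content": submission_text}],
    )
    return response.content[0].text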

The User Journey
Designed to feel less like a test, more like your first week on the job.
01
Discovery
Landing → Simulation Catalog

User enters through a professional portal and browses a library of Experience Templates — e.g. “Business Analyst at JPMorgan” or “Risk Analysis at McKinsey”. Each shows difficulty, skills practiced, and partner certifications available.

02
Immersion
Enter the VDE — First Day Hook

User is dropped into their Virtual Desktop. An onboarding email arrives in InboxAI. A welcome message appears on SlackSim from their AI manager. Then the problem lands: “Our regulatory reporting system is failing — we need a BRD by Wednesday.”

03
Deep Work
Multi-App Workflow Execution

Draft requirements in DocVault. Analyse data in DataStudio. Track progress on the TaskBoard. Schedule stakeholder syncs in TeamCalendar. AI stakeholders respond dynamically to tone and clarity — CRS updates in real-time.

04
Curveball
Dynamic Complexity Injection

Halfway through, the AI triggers a disruption — a stakeholder change, a data error, a shifting deadline. How you prioritise under pressure directly impacts your CRS score.

05
Presentation
The MeetingRoom — Stakeholder Presentation

The journey culminates in a simulated video call where the user presents findings to a panel of AI stakeholders — practising verbal communication and executive presence.

06
Results
8-Pillar CRS Scorecard + Certification

A comprehensive debrief breaks down Professionalism, Urgency, Analytical Thinking, Communication, and more. Pass the threshold and earn a verified digital certificate shareable on LinkedIn or with partner recruiters.

Journey: Landing → Dashboard → Catalog → Virtual Work (VDE) → Real-time Feedback → Certified CRS Score

Tech Stack
Frontend
  • Next.js 14 (App Router)
  • TypeScript
  • Tailwind CSS — B2B SaaS light theme
  • Framer Motion for transitions
Backend
  • FastAPI (Python)
  • PostgreSQL
  • Redis for caching and queues
  • WebSockets for live score updates (sketched below)
AI + Infra
  • Claude / Anthropic API
  • Dockerised monorepo
  • Ready for high-scale deployment
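
For the live score updates, a minimal FastAPI WebSocket endpoint along these lines would be enough for the prototype; the route, payload shape, and polling interval are placeholders rather than the actual implementation.

crs_ws.py (illustrative sketch)
import asyncio
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

async def latest_crs(session_id: str) -> dict:
    # Placeholder: would read the session's current pillar scores from Postgres/Redis.
    return {"session": session_id, "communication": 7.5, "urgency": 6.0}

@app.websocket("/ws/crs/{session_id}")
async def crs_updates(websocket: WebSocket, session_id: str):
    """Push the Corporate Readiness Score to the client as the user works."""
    await websocket.accept()
    try:
        while True:
            await websocket.send_json(await latest_crs(session_id))
            await asyncio.sleep(2)  # naive polling interval for the demo
    except WebSocketDisconnect:
        pass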

Design aesthetic: Premium B2B SaaS — clean light theme, sophisticated slate-gray accents, high-readability typography. Deliberately moved away from typical educational platform aesthetics toward something closer to Linear or Notion.

Try the Prototype
Recommended walkthrough to experience the student POV
Step-by-step
  1. Click Get Started and enter any dummy information to create an account.
  2. Land on the Dashboard — your student home base.
  3. Select Corporate Simulation from the catalog.
  4. Choose Business Analyst at JPMorgan to experience the full VM-type student POV.
What to look for
  • How the multi-app VDE simulates a real work environment
  • The information architecture and user flow across tools
  • The CRS scoring panel updating in real-time
  • The premium B2B SaaS aesthetic vs. typical EdTech UI

This is a prototype — some flows may be incomplete. The goal is to convey the product vision and architecture, not production-ready polish.

Open CorpSim AI Prototype ↗
← Vision Labs / Concept Doc
Concept Phase — Not Built

HR SaaS

Lightweight people operations for early-stage startups — onboarding workflows, doc management, and headcount planning without enterprise overhead.

Why I'm Exploring This

At Acumen Connect, I watched a 15-person startup's onboarding collapse under Notion chaos. Policies were scattered, contracts were tracked in emails, and nobody was sure who'd signed what. I spent weeks converting that into documented SOPs, and the improvement was immediate.

That experience — combined with my BA work at Deloitte around process reliability — made me realise the problem is structural, not individual. Startups don't fail at HR because they're careless. They fail because the tools available are either too heavy or too unstructured. This is a systems problem, and it's one I know how to scope.

  This is a concept — not a live product. Documenting ideas publicly keeps me accountable to actually building them.
The Problem
What startups actually do
  • Onboarding lives in a shared Google Doc no one updates.
  • Offer letters are sent from a personal Gmail as Word attachments.
  • Leave tracking is a column in a team spreadsheet.
  • Headcount plans exist only in the founder's head.
  • Policy documents get lost after the third Notion reorganisation.
Why existing tools don't fit
  • Enterprise HRMS (Workday, SAP) — built for 500+ orgs, overkill and expensive.
  • Mid-market tools (BambooHR, Darwinbox) — still carry implementation overhead.
  • Notion / Airtable — flexible but no automation, no accountability layer.
  • Nothing is opinionated for a 10–50 person team that just needs things to not fall through the cracks.
Target User
The Ops Lead
Primary persona
The Founder
Secondary persona
Company profile
10–50 employees · Seed to Series A · No dedicated HR team · India / SEA · Tech or Tech-adjacent
Their core pain

They're spending 3–4 hours a week on HR admin that should take 20 minutes. Not because the tasks are hard — because there's no system. Every new hire means re-inventing the onboarding checklist. Every policy update means chasing people on Slack. They need structure, not software complexity.

Core Modules — Planned Scope
01
Onboarding Workflows
Day 0 → Day 30
  • Templated onboarding checklists per role
  • Automated task assignment to manager, IT, finance
  • New hire self-serve portal for form collection
  • Progress tracking with deadline nudges
02
Document Management
Contracts, Policies, Offer Letters
  • Template library for offer letters, NDAs, policies
  • e-Signature collection with audit trail
  • Version-controlled policy hub
  • Expiry tracking for contracts and visas
03
Headcount Planning
Hiring Tracker + Org View
  • Open roles pipeline with status tracking
  • Lightweight org chart linked to headcount plan
  • Budget per hire (basic cost modelling)
  • Founder-facing hiring dashboard
04
Leave & Attendance
Lightweight — not a timesheet
  • Leave request + approval in two clicks
  • Team calendar view for coverage visibility
  • Configurable leave policies per region
  • No biometrics, no punching — trust-first design
What It Won't Be

Design constraints — deliberately out of scope

✗ No Payroll

Payroll processing is a compliance minefield. Integrate with Razorpay or Zoho Payroll instead.

✗ No Performance Mgmt

Goal-setting and review cycles are culturally loaded. Keep it out of v1.

✗ No Biometrics

Attendance punching belongs in factories. Early-stage startups run on trust.

Tech Stack Thinking
Frontend
  • React + Tailwind
  • Radix UI components
  • Lightweight, no heavy UI frameworks
Backend / Data
  • Supabase (Postgres + Auth + Storage)
  • Edge Functions for automation triggers
  • Row-level security for multi-tenancy
Automation Layer
  • n8n for workflow orchestration
  • LLM for doc generation / summarisation
  • Email + Slack notification hooks

Stack rationale: Supabase gives a production-grade backend with zero infra ops. n8n keeps automation logic visible and modifiable without code changes. The goal is a 2-person team being able to ship and maintain this.
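
As a sketch of how thin that automation glue could be: a small trigger that starts the onboarding workflow in n8n when a new hire record lands. The webhook URL, field names, and calling context are hypothetical; the only assumption about n8n here is that its webhook nodes are triggered by a plain HTTP POST.

onboarding_trigger.py (concept sketch)
import requests

# Hypothetical n8n webhook node for the onboarding workflow.
N8N_WEBHOOK_URL = "https://automation.example.com/webhook/new-hire"

def trigger_onboarding(new_hire: dict) -> None:
    """Kick off the templated onboarding checklist for a new hire."""
    payload = {
        "employee_name": new_hire["name"],
        "role": new_hire["role"],
        "start_date": new_hire["start_date"],
        "manager_email": new_hire["manager_email"],
    }
    requests.post(N8N_WEBHOOK_URL, json=payload, timeout=10).raise_for_status()

# Would be called from a Supabase webhook / Edge Function when a row is inserted.
trigger_onboarding({
    "name": "A. Example",
    "role": "Ops Associate",
    "start_date": "2025-09-01",
    "manager_email": "manager@example.com",
})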

Status & Next Steps
Current Status
Problem framed
Scope documented
User interviews
Wireframes / IA
MVP prototype
Beta with 3 startups
Immediate Next Steps
  • 5 user interviews with ops leads at seed-stage startups
  • Validate: which module causes the most pain day-to-day?
  • Sketch information architecture and core user flows in Figma
  • Identify 1 anchor feature to build as a no-code prototype
Open Questions
  • Freemium vs flat-rate pricing for early adopters?
  • Is onboarding or doc management the right entry wedge?
  • How tight is the Slack / Google Workspace integration need?
← Vision Labs / Concept Doc
Prototype Ready

RoastMoney

A behavior-first financial assistant that uses "Judgmental AI" to bridge the gap between earning well and building a legacy. Not a tracker — a financial Tamagotchi for adults.

0
Data Sold
1
AI Companion
Roasts Possible
AES
256-bit Encryption
Why I'm Building This

Most personal finance apps are passive trackers. They show you pretty graphs of where your money went, but they don't help you change your behaviour in the moment. The result is the "Financial Ostrich Effect" — people avoid their own bank balances because opening the app feels like guilt, not growth.

I'm building RoastMoney because the emotional layer is completely missing from fintech. Behavioural economics tells us that friction and social cost are the most effective ways to interrupt impulse spending — but no one has built that in a way that's actually fun to use.

  Prototype is built and demoed. Watch the walkthrough below.
The Problem
What existing apps get wrong
  • Notification Fatigue — generic "You spent ₹200 at Swiggy" alerts are immediately ignored.
  • Complexity Overload — super-app clutter makes basic tracking feel like work.
  • No Emotional Connection — money is deeply personal, but banking apps are sterile and cold.
  • Passive by Design — weekly reports you forget to read change nothing about Monday's behaviour.
The gap RoastMoney fills
  • Introduces Emotional Friction — users don't just see numbers go down, they feel something.
  • Converts spending data into future cost — ₹300 today becomes ₹6,000 in compounded opportunity cost.
  • Consolidates credit card due dates into a single predictive timeline.
  • Answers contextual questions — "Can I afford this iPhone and still hit my SIP?"
Target User
The Earner
Primary persona
The Saver
Secondary persona
User profile
22–30 years old · First job or freelancing · UPI-first spender · Knows they should save more · India / Urban
Their real problem

They aren't broke — they're "fine." That's exactly the problem. Being "fine" is the enemy of being wealthy. They earn decently, spend without tracking, and avoid their balance because looking at it feels bad. They don't need a budgeting lecture; they need something that makes finance feel like it matters today, not in retirement.

Core Features
🔥
The Roast Engine

Converts dry transaction data into witty, hard-hitting commentary that actually sticks in your mind. The AI doesn't just tell you what happened — it gives you the emotional context your bank never would.

  • Spend-triggered roasts generated per transaction
  • Tone calibrated to your spending pattern history
  • Never generic — always contextual to your habits
🕰️
Money Time Machine

Translates today's spending into its compounded future cost. A ₹300 coffee today becomes ₹6,000+ in compounded opportunity cost: visible, concrete, impossible to ignore. (A rough sketch of the arithmetic follows the feature list below.)

  • Per-transaction future value calculation
  • Annual habit cost vs. a real-life goal (vacation, SIP milestone)
  • Converts "doing fine" users into wealth builders
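
A quick sketch of the arithmetic behind the Time Machine; the return rate and horizon are illustrative assumptions, not fixed product parameters.

time_machine.py (illustrative sketch)
def future_value(amount: float, annual_return: float = 0.12, years: int = 27) -> float:
    """Compound a single spend at an assumed long-term investment return."""
    return amount * (1 + annual_return) ** years

# A single Rs. 300 coffee, invested instead of spent:
print(round(future_value(300)))        # ~6,400 at the assumed 12% over 27 years

# The same habit for a year, crudely treated as one lump sum (the "annual habit cost" view):
print(round(future_value(300 * 365)))  # ~2.3 million under the same assumptions
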
💳
Pay Hub

Clubs multiple credit cards into a single chronological due-date timeline. No more hidden dues, no more missed payments ruining your credit score.

  • All credit card dues in one predictive timeline
  • Pre-due reminders with available balance context
  • Debt fragmentation solved at a glance
💬
AI Chat Modal

Chat with your money instead of searching for answers. Ask "Can I afford that iPhone if I want to hit my SIP goal?" and get a data-backed answer, not just feelings.

  • Contextual Q&A grounded in your actual balance
  • SIP / savings goal impact modelling
  • Subscription audit — catches "invisible leaks" automatically
The Marshmallow Avatar — Emotional Accountability
Other apps: Passive
RoastMoney: Alive

The Avatar is RoastMoney's standout differentiator — a Ditto-Baymax hybrid character (Marshmallow) that reacts physically to your spending in real time. Overspend and it gets visibly angry. Hit a savings goal and it celebrates with you.

Behavioural economics confirms this works: users don't just see a number go down — they feel like they're letting their AI companion down. This is the "Emotional Friction" moat that makes RoastMoney a coach, not just a spreadsheet.

Competitive Positioning
Competitors (Mint, YNAB) vs. the RoastMoney Edge
  • Tone: Helpful, Neutral, Corporate → Judgmental, Witty, Brutally Honest
  • Feedback: Static pie charts → Dynamic Avatar Emotions
  • Engagement: Check once a week → After every spend, to see the reaction
  • Utility: Manual budgeting → AI Budget Analysis + Time Machine
  • Bill Mgmt: View all accounts → Predictive Bill Timeline

Vision: To become the world's first financial app that users are "scared" to disappoint — resulting in the highest savings-rate-per-user in the industry.

Product Demo
Prototype Walkthrough
RoastMoney — Early Build

Early prototype — core flows demonstrated: Roast Engine · Pay Hub · Money Time Machine · AI Chat Modal

Privacy & Business Model
The Vault — Data Safety
  • AES-256 encryption at rest, TLS 1.3 in transit
  • AI Anonymisation — the LLM sees Merchant_ID, never your name or account
  • Zero data reselling — ever. Full stop.
  • Your financial data is your most private asset — we treat it that way
Why Paid — Alignment of Interests
  • If you aren't paying for the product, you are the product
  • One caught subscription or one avoided late fee pays for the annual plan
  • Our incentive is to keep you rich — not to push high-interest debt products
  • Trust-based business model, not ad-driven lead gen
Status & Next Steps
Current Status
Problem & market framed
Core features scoped
Prototype built & demoed
User testing round 1
Roast Engine fine-tuning
Beta launch — 50 users
Immediate Next Steps
  • User testing the prototype with 10 target-persona users
  • Validate: does the Roast Engine create behaviour change or just laughs?
  • Refine Avatar emotion triggers based on spending category, not just amount
  • Stress-test the Pay Hub UX with users who have 3+ active credit cards
Open Questions
  • What's the right roast intensity — brutal honesty vs. playful nudge?
  • Freemium (limited roasts) vs. flat monthly subscription?
  • How does the Avatar feel after 6 months — novelty or habit?
← Vision Labs / Concept Doc
Prototype Built
Powered by Gemini API — Full deployment pending

Validate Your Idea

An AI-powered idea validation platform that goes deeper than a basic search — analysing what people actually complain about across Reddit, LinkedIn, forums, and the open web to help you find real gaps before you build.

About This Prototype

This is an early-stage prototype built to demonstrate the core idea — not a finished product. The current version runs on a Gemini API key with limited quota, which means some analyses may be throttled or incomplete in the live demo.

When deployed properly with a production-grade API key and full data connectors (Reddit API, LinkedIn data, web scraping pipelines), the platform will deliver richer, real-time multi-source intelligence. This prototype is a proof of concept — the architecture and UX are real, the data depth will scale.

  Current stack: Gemini API · Next.js · Vercel — Production roadmap: full Reddit + LinkedIn + web crawl pipelines with dedicated API infrastructure.
The Problem
How founders validate today
  • Google search + scroll — surface-level, SEO-gamed results with no signal on real frustrations.
  • Ask ChatGPT — gets a generic “here are 5 competitors” list with no depth on what people actually hate about them.
  • Manually browse Reddit — hours of reading to find a handful of relevant threads.
  • Ask friends — confirmation bias from people who want to be supportive.
  • Skip validation entirely and just build — the most common and most expensive mistake.
What’s missing
  • No tool synthesises what real users are complaining about across forums, communities, and reviews at scale.
  • Existing AI tools don’t search the live web — they hallucinate market data from stale training.
  • No platform maps competitor weaknesses against your specific idea framing.
  • Founders need structured output they can act on — not raw search results to interpret themselves.
How It Works
Three inputs. One deep-analysis report. No manual research needed.
01
Input
Describe Your Idea + Context

Enter a plain-language description of your idea. Then select your industry (e.g. FinTech, HR Tech, EdTech), sector (B2B SaaS, Consumer App, Marketplace), and target audience (e.g. solo founders, enterprise ops teams, students). These selectors focus the analysis so you don’t get generic results.

02
Analysis
Multi-Source Deep Search

The platform runs queries across Reddit threads, LinkedIn discussions, product review sites, and the open web to find: what people are saying about this problem space, which companies already exist, and — critically — what users complain those companies are getting wrong.

03
Output
Structured Validation Report

A structured report covering: existing competitors and their gaps, recurring user frustrations in the space, signals of unsolved demand, and a positioning recommendation for how your idea can differentiate. Actionable — not just a list of links.

04
Edge
Deeper Than Existing AI Tools

Unlike asking ChatGPT, this platform is structured around searching live sources — not generating answers from training data. The goal is to give founders information that feels like they did 10 hours of research themselves.
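
A minimal sketch of the synthesis step as the prototype frames it: one structured prompt to the Gemini API that folds in the idea description and the context selectors. The model id and prompt template are illustrative, and the production multi-source connectors (Reddit, LinkedIn, crawling) are not shown.

validate_idea.py (illustrative sketch)
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")     # prototype runs on a limited-quota key
model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model id

PROMPT_TEMPLATE = """You are a startup idea validation analyst.
Idea: {idea}
Industry: {industry} | Sector: {sector} | Target audience: {audience}
Return a structured report covering: existing competitors and their criticised gaps,
recurring user frustrations, demand signals, and a build / revisit / pivot verdict."""

def validate_idea(idea: str, industry: str, sector: str, audience: str) -> str:
    prompt = PROMPT_TEMPLATE.format(idea=idea, industry=industry, sector=sector, audience=audience)
    return model.generate_content(prompt).text

report = validate_idea(
    "Lightweight HR ops tool for 10-50 person startups",
    industry="HR Tech", sector="B2B SaaS", audience="ops leads at seed-stage startups",
)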

Core Features
01
Idea Input + Context Selectors
  • Plain-language idea description field
  • Industry selector (FinTech, HR Tech, EdTech, SaaS, etc.)
  • Sector picker (B2B, B2C, Marketplace, Consumer)
  • Target audience definition (role, segment, geography)
02
Competitor Landscape Map
  • Identifies existing companies in the space
  • Surfaces what each competitor is criticised for
  • Flags underserved segments or missing features
  • Sourced from real forum discussion, not AI inference
03
Demand Signal Analysis
  • Finds threads where people ask for a solution like yours
  • Scores demand signals by recency and volume
  • Surfaces “has anyone built X yet?” style posts as proof of gap
  • Distinguishes nice-to-have from genuine pain
04
Positioning Recommendation
  • Suggests how to frame your idea vs. existing alternatives
  • Highlights the 1–2 differentiators with strongest signal
  • Flags red flags if the space is too crowded or demand is low
  • Gives a plain-language “build / revisit / pivot” verdict
Tech Stack
Frontend
  • Next.js (App Router)
  • Tailwind CSS
  • Clean SaaS-style UI
AI Layer (Prototype)
  • Gemini API for synthesis
  • Web search grounding enabled
  • Structured prompt pipeline
Production Roadmap
  • Reddit API + LinkedIn data
  • Custom web crawl pipeline
  • Dedicated LLM inference

Current limitation: The Gemini API key used in the prototype has rate limits and quota caps, so deep multi-source analysis may be throttled. This is a demo of the product vision — full deployment requires a production API setup and dedicated data pipeline infrastructure.

Try the Prototype
Recommended flow to experience the platform
Step-by-step
  1. Open the prototype and describe your startup idea in plain language.
  2. Select your industry, sector, and target audience from the dropdowns.
  3. Hit Validate and wait for the analysis to run.
  4. Review the competitor map, demand signals, and positioning recommendation.
What to keep in mind
  • API quota may limit depth — this is expected in the prototype
  • Try a specific niche idea for the most focused results
  • The report structure shows what the full product will output at scale
  • Production version will pull live Reddit, LinkedIn & web data directly

This is a prototype — built to prove the concept and explore the product architecture. Not production-ready, but the core idea is real.

Open Validate Your Idea ↗
← Projects / Project Doc
Complete

Automated Financial Transaction Monitoring

A rule-based pipeline that automatically cleans transaction data, detects anomalies, and generates daily risk summaries — replacing manual spreadsheet review entirely.

3
Pipeline Modules
3
Detection Rules
0
Manual Steps
4
Output Files
Scope of the Project
What this project does
  • Cleans and validates raw transaction data automatically
  • Calculates customer-level spending baselines
  • Flags abnormal transactions using rule-based logic
  • Generates structured daily summaries for risk teams
  • Tracks data quality issues for audit purposes
Deliberately out of scope
  • No real customer data — fully simulated transactions
  • No real-time streaming or live API connections
  • No customer-facing communication or alerting
  • Not a machine learning model — explicit rule logic only
The Problem This Solves
The Old Way
Manual review
This System
Automated pipeline

Real-time transaction data arrives every day. Manually checking it is slow, inconsistent, and error-prone — analysts miss patterns that only become visible in aggregate.

This pipeline runs automatically when new data arrives: cleaning it, identifying what's unusual, and producing a structured report — with no manual intervention required after initial setup.

Pipeline Architecture
01 Raw Data In (transactions.csv) → 02 Clean (clean_data.py) → 03 Detect (analyze_data.py) → 04 Report (report.py)
Module Breakdown
01
Data Cleaning
clean_data.py
Python · Pandas

Prepares raw data to ensure it's consistent and safe for analysis. Handles the messy reality of real transaction files.

  • Removes exact duplicate records
  • Converts and validates date/amount column formats
  • Fills missing amounts using median — robust to outliers
  • Drops transactions with missing IDs or merchant data
  • Removes zero and negative-value transactions
  • Flags rows where amounts were originally missing (audit trail)
clean_data.py
import pandas as pd

# 1. Load raw data
df = pd.read_csv('../data/transactions.csv')
# 2. Standardize column names
df.columns = df.columns.str.lower().str.strip()
# 3. Remove exact duplicate rows
df = df.drop_duplicates()
# 4. Convert date column to datetime
df['date'] = pd.to_datetime(df['date'], errors='coerce')
# 5. Handle missing or invalid amounts
df['amount'] = pd.to_numeric(df['amount'], errors='coerce')
df['amount_was_missing'] = df['amount'].isna()
median_amount = df['amount'].median()
df['amount'] = df['amount'].fillna(median_amount)
# 6. Remove transactions with missing critical identifiers
df = df.dropna(subset=['transaction_id', 'customer_id', 'merchant'])
# 7. Remove negative or zero-value transactions
df = df[df['amount'] > 0]
# 8. Sort for consistency (audit-friendly)
df = df.sort_values(by=['customer_id', 'date'])
# 9. Save cleaned data
df.to_csv('../output/cleaned_data.csv', index=False)
02
Anomaly Detection
analyze_data.py
Python · Rule Logic

Identifies transactions that don't match a customer's normal spending behavior. Three rules feed the final flag: a spend spike on its own triggers it, while a high-value transaction triggers it when combined with a high-risk merchant.

Rule 1 — Spend Spike

Transaction exceeds 2× the customer's average historical spend. Personalized baseline per customer_id.

Rule 2 — High Absolute Value

Transaction amount exceeds ₹20,000 threshold regardless of customer history.

Rule 3 — High-Risk Merchant

Merchant appears on a configurable high-risk list. Combined with Rule 2 for a final anomaly flag.

analyze_data.py
import pandas as pd

# 1. Load cleaned data
df = pd.read_csv('../output/cleaned_data.csv')
# 2. Customer-level baseline
avg_spend = df.groupby('customer_id')['amount'].mean()
df['avg_customer_spend'] = df['customer_id'].map(avg_spend)
# 3. Rule 1: Spend spike anomaly
df['spend_anomaly'] = df['amount'] > (2 * df['avg_customer_spend'])
# 4. Rule 2: High absolute transaction value
df['high_value_anomaly'] = df['amount'] > 20000
# 5. Rule 3: High-risk merchant
high_risk_merchants = ['Apple', 'Dell']
df['merchant_risk'] = df['merchant'].isin(high_risk_merchants)
# 6. Final anomaly flag (company logic)
df['final_anomaly'] = (
    df['spend_anomaly'] |
    (df['high_value_anomaly'] & df['merchant_risk'])
)
# 7. Save outputs
df[df['final_anomaly']].to_csv('../output/anomalies.csv', index=False)
03
Reporting
report.py
Python · CSV Output

Converts raw anomaly data into a clear, actionable daily summary for business and risk teams.

  • Total flagged transactions and unique customers impacted
  • Total and average value of risky transactions
  • Top merchants by anomaly frequency
  • Top customers by total anomalous spend
  • Saves structured daily_anomaly_summary.csv for downstream use
  • Handles empty anomaly sets gracefully — no crashes
report.py
import pandas as pd

# 1. Load anomaly data
df = pd.read_csv('../output/anomalies.csv')
# 2. Handle case where no anomalies exist
if df.empty:
    print("No anomalies detected for this period.")
else:
    # 3. Summary metrics
    total_anomalies = len(df)
    unique_customers = df['customer_id'].nunique()
    total_amount = df['amount'].sum()
    avg_anomaly_amount = df['amount'].mean()
    print(f"Total flagged transactions : {total_anomalies}")
    print(f"Unique customers impacted : {unique_customers}")
    print(f"Total amount at risk (₹)  : {total_amount:,.2f}")
    print(f"Average anomaly amount    : {avg_anomaly_amount:,.2f}")
    # 4. Top risky merchants
    print("\nTop merchants by anomaly count:")
    print(df['merchant'].value_counts().head())
    # 5. Top customers by total anomaly spend
    print("\nTop customers by anomaly spend:")
    print(
        df.groupby('customer_id')['amount']
          .sum()
          .sort_values(ascending=False)
          .head()
    )
    # 6. Save daily summary
    summary_df = pd.DataFrame([{
        'total_anomalies': total_anomalies,
        'unique_customers': unique_customers,
        'total_amount_at_risk': total_amount,
        'average_anomaly_amount': avg_anomaly_amount
    }])
    summary_df.to_csv('../output/daily_anomaly_summary.csv', index=False)
Generated Outputs
Output Files
  • cleaned_data.csv — validated, standardised transaction records
  • anomalies.csv — all flagged transactions with rule columns
  • daily_anomaly_summary.csv — aggregated metrics for reporting
  • Console report — human-readable summary printed at run time
Scalability Path
  • Logic can be scheduled daily via cron or Apache Airflow (see the DAG sketch below)
  • Each module is independent — swap or extend without breaking others
  • Output CSVs plug directly into dashboards or email automation
  • Detection rules are configurable — thresholds and merchant lists are easy to update
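
For the scheduling path, a minimal Airflow DAG along these lines would chain the three scripts in order; the dag_id, cron expression, and script paths are placeholders, and the snippet assumes a recent Airflow 2.x.

transaction_monitoring_dag.py (illustrative sketch)
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="transaction_monitoring",
    schedule="0 6 * * *",          # daily run; cron expression is a placeholder
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    clean = BashOperator(task_id="clean", bash_command="python clean_data.py")
    detect = BashOperator(task_id="detect", bash_command="python analyze_data.py")
    report = BashOperator(task_id="report", bash_command="python report.py")

    clean >> detect >> report
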
Tech Stack
Core Language
  • Python 3
  • pandas for data manipulation
  • numpy for numeric operations
Data & Storage
  • CSV-based file I/O
  • Structured /data and /output directories
  • Audit-ready column flagging
Automation Layer
  • Cron-schedulable scripts
  • Airflow-compatible pipeline structure
  • Zero manual steps after setup

Design rationale: Vanilla Python and pandas keeps the system readable, auditable, and easy to hand off. No framework dependencies means nothing breaks when an external library updates. The modular structure lets each script be tested, replaced, or extended independently.

Project Status
What's complete
  • Full three-module pipeline built and tested
  • All three anomaly detection rules implemented
  • Daily summary report generating correctly
  • Edge cases handled — empty datasets, missing values
Possible extensions
  • ML-based anomaly scoring (isolation forest / z-score)
  • Airflow DAG for scheduled runs
  • Email or Slack alert integration for flagged batches
  • Dashboard layer (Streamlit or Tableau)