Curious builder crafting
AI-powered systems
2.5+ years of professional experience across consulting and an early-stage startup, supporting software validation, data systems, and automation alongside product and engineering teams. Comfortable in fast-paced environments that require structured problem-solving, documentation, and cross-functional coordination.
Currently freelancing as a Business Analyst and GenAI Automation Consultant. Founder of Mesh23.
2.5 years of real work across data systems, automation, and product operations. Available immediately. Open to full-time, contract, and remote roles globally.
Systems thinker. Automation builder. I take messy operational problems and turn them into documented, reliable systems that actually hold up.
I'm Vidur Ramachandran, a Business Analyst and GenAI Automation Consultant with 2.5+ years of professional experience across consulting and an early-stage startup. I support software validation, data systems, and automation alongside product and engineering teams.
At Deloitte USI, I worked on enterprise CDP systems for Fortune 500 clients — conducting root cause analysis on data pipeline failures and building automation frameworks. Before that, at Acumen Connect, I owned product and ops tasks in a fast-paced startup.
I'm comfortable in fast-paced environments that demand structured problem-solving, documentation, and cross-functional coordination. Currently freelancing and building Mesh23, delivering AI-driven workflow solutions for SMBs.
Hyderabad, India
Open to remote & relocation
B1/B2 US Business Visa — valid · 10-year
Comfortable with EST / PST-aligned shifts & overnight hours
Automated systems, analytical frameworks, and workflow tools built across enterprise consulting and independent work. Each one starts with a real problem.
A Fortune 500 client's Adobe AEP pipeline was producing silent failures. Customer segments weren't updating correctly and downstream marketing automations were firing on stale data. The team was manually checking logs with no structured process.
Performed SQL audits across 18M+ records to identify where the pipeline was failing: schema mismatches, null propagation errors, and upstream ingestion delays. Mapped failure patterns across 8 weeks of pipeline logs to surface recurring root causes.
Built a mental model of the full data flow — from raw event ingestion through AEP identity resolution to segment activation. This revealed three structural failure points that were not visible in isolation.
Contributed to Python-based pre-ingestion validation scripts that caught malformed data before pipeline entry. Compiled each failure class into structured SOPs and worked with engineering to implement schema validation gates.
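The client scripts themselves are internal, but the shape of a pre-ingestion validation gate is simple enough to sketch. Column names and rules here are hypothetical stand-ins, not the actual AEP schema:

```python
import pandas as pd

REQUIRED_COLUMNS = {"event_id", "user_id", "timestamp"}  # hypothetical schema


def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of validation errors; an empty list means the batch may enter the pipeline."""
    errors = []

    # Schema mismatch check: required fields must be present
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        errors.append(f"schema mismatch: missing columns {sorted(missing)}")

    # Null propagation check: identity fields must not be null
    null_ids = df["user_id"].isna().sum() if "user_id" in df.columns else 0
    if null_ids:
        errors.append(f"null propagation risk: {null_ids} rows with null user_id")

    # Malformed timestamps break downstream identity resolution
    if "timestamp" in df.columns:
        parsed = pd.to_datetime(df["timestamp"], errors="coerce")
        if parsed.isna().any():
            errors.append("unparseable timestamps detected")

    return errors
```

Each failure class the gate catches maps onto one of the SOPs, so a rejected batch comes with a documented remediation path instead of a log-diving session.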
28% reduction in average incident resolution time. Repeat escalations dropped by around 22%. The runbooks were adopted team-wide, turning individual knowledge into institutional memory.
Where I experiment with startup ideas and automation systems. Not polished products — honest exploration. The best way to understand a problem space is to build something in it.
Vision Labs is a commitment to staying operationally sharp beyond formal job titles. Every project here is a real systems problem I've chosen to work on — an automation workflow, a SaaS prototype, or a business model stress-test. The experiments are messy. The thinking is real.
A professional-grade, AI-powered corporate simulation platform that bridges the gap between campus life and the corporate world — built as a working prototype to give shape to a founder’s vision for what AI-native corporate readiness training could be.
EMELEX Solutions is building the next generation of corporate readiness training. Inspired by the founder’s vision, I wanted to build a demo prototype for him — my own understanding of what he was trying to create, translated into something tangible and explorable.
This prototype is something I built independently, thinking like a product person: mapping the user journey end-to-end, defining the information architecture, prototyping the core experience, and documenting how it all fits together. It’s not a final product — it’s a high-fidelity signal of what this kind of product could feel and function like.
The VDE is a multi-app interface containing everything a junior professional would need on day one. Users don’t “answer questions” — they actually work inside these tools simultaneously.
Intelligent email client where users manage AI-generated stakeholder communications. Claude responds dynamically to tone, clarity, and analytical depth.
Real-time chat simulation for team collaboration, plus a Kanban system for managing deliverables — mirroring actual project workflows.
Professional document editor for BRDs, specs, and reports — plus a video call simulator for stakeholder presentations and stand-ups.
Analytics and scheduling tools to round out the experience — modelling real analyst and PM workflows.
Proprietary 8-pillar scoring: Communication, Analytical Thinking, Urgency, Stakeholder Management, and more — updated in real time as you work.
AI Engine: Claude/Anthropic generates dynamic content (emails, documents) and provides objective managerial grading on all user outputs. The AI acts as your manager, colleague, and evaluator simultaneously.
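A sketch of how that grading loop might be wired up. The pillar names come from the scoring description on this page; the model name, prompt wording, and rubric format are my illustrative assumptions, not the prototype's actual code:

```python
PILLARS = ["Communication", "Analytical Thinking", "Urgency", "Stakeholder Management"]


def build_grading_prompt(user_output: str) -> str:
    """Compose a rubric-based grading prompt from the CRS pillars."""
    rubric = "\n".join(f"- {p}: score 1-10 with a one-line justification" for p in PILLARS)
    return (
        "You are the user's manager. Grade the deliverable below against the rubric.\n"
        f"Rubric:\n{rubric}\n\nDeliverable:\n{user_output}"
    )


def grade_submission(user_output: str) -> str:
    """Send the deliverable to Claude for managerial grading.

    Requires the anthropic SDK and an ANTHROPIC_API_KEY in the environment.
    """
    from anthropic import Anthropic

    client = Anthropic()
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model choice
        max_tokens=500,
        messages=[{"role": "user", "content": build_grading_prompt(user_output)}],
    )
    return msg.content[0].text
```

The same prompt-plus-rubric pattern covers all three of the AI's roles — manager, colleague, evaluator — by swapping the system framing while keeping the pillar rubric constant.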
User enters through a professional portal and browses a library of Experience Templates — e.g. “Business Analyst at JPMorgan” or “Risk Analysis at McKinsey”. Each shows difficulty, skills practiced, and partner certifications available.
User is dropped into their Virtual Desktop. An onboarding email arrives in InboxAI. A welcome message appears on SlackSim from their AI manager. Then the problem lands: “Our regulatory reporting system is failing — we need a BRD by Wednesday.”
Draft requirements in DocVault. Analyse data in DataStudio. Track progress on the TaskBoard. Schedule stakeholder syncs in TeamCalendar. AI stakeholders respond dynamically to tone and clarity — CRS updates in real time.
Halfway through, the AI triggers a disruption — a stakeholder change, a data error, a shifting deadline. How you prioritise under pressure directly impacts your CRS score.
The journey culminates in a simulated video call where the user presents findings to a panel of AI stakeholders — practising verbal communication and executive presence.
A comprehensive debrief breaks down Professionalism, Urgency, Analytical Thinking, Communication, and more. Pass the threshold and earn a verified digital certificate shareable on LinkedIn or with partner recruiters.
Journey: Landing → Dashboard → Catalog → Virtual Work (VDE) → Real-time Feedback → Certified CRS Score
Design aesthetic: Premium B2B SaaS — clean light theme, sophisticated slate-gray accents, high-readability typography. Deliberately moved away from typical educational platform aesthetics toward something closer to Linear or Notion.
This is a prototype — some flows may be incomplete. The goal is to convey the product vision and architecture, not production-ready polish.
Lightweight people operations for early-stage startups — onboarding workflows, doc management, and headcount planning without enterprise overhead.
At Acumen Connect, I watched a 15-person startup's onboarding collapse under Notion chaos. Policies were scattered, contracts were tracked in email threads, and nobody was sure who'd signed what. I spent weeks converting that into documented SOPs — and the improvement was immediate.
That experience — combined with my BA work at Deloitte around process reliability — made me realise the problem is structural, not individual. Startups don't fail at HR because they're careless. They fail because the tools available are either too heavy or too unstructured. This is a systems problem, and it's one I know how to scope.
They're spending 3–4 hours a week on HR admin that should take 20 minutes. Not because the tasks are hard — because there's no system. Every new hire means re-inventing the onboarding checklist. Every policy update means chasing people on Slack. They need structure, not software complexity.
Design constraints — deliberately out of scope
Payroll processing is a compliance minefield. Integrate with Razorpay or Zoho Payroll instead.
Goal-setting and review cycles are culturally loaded. Keep it out of v1.
Attendance punching belongs in factories. Early-stage startups run on trust.
Stack rationale: Supabase gives a production-grade backend with zero infra ops. n8n keeps automation logic visible and modifiable without code changes. The goal is a 2-person team being able to ship and maintain this.
A behavior-first financial assistant that uses "Judgmental AI" to bridge the gap between earning well and building a legacy. Not a tracker — a financial Tamagotchi for adults.
Most personal finance apps are passive trackers. They show you pretty graphs of where your money went, but they don't help you change your behaviour in the moment. The result is the "Financial Ostrich Effect" — people avoid their own bank balances because opening the app feels like guilt, not growth.
I'm building RoastMoney because the emotional layer is completely missing from fintech. Behavioural economics tells us that friction and social cost are the most effective ways to interrupt impulse spending — but no one has built that in a way that's actually fun to use.
They aren't broke — they're "fine." That's exactly the problem. Being "fine" is the enemy of being wealthy. They earn decently, spend without tracking, and avoid their balance because looking at it feels bad. They don't need a budgeting lecture; they need something that makes finance feel like it matters today, not in retirement.
Converts dry transaction data into witty, hard-hitting commentary that actually sticks in your mind. The AI doesn't just tell you what happened — it gives you the emotional context your bank never would.
Translates today's spending into its compounded future cost. A ₹300 daily coffee becomes ₹6,000+ per month in opportunity cost — visible, concrete, impossible to ignore.
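The arithmetic behind the Time Machine is the standard future-value-of-annuity formula. The return rate and horizon below are my illustrative assumptions, not the app's actual parameters:

```python
def future_cost(monthly_spend: float, annual_return: float = 0.12, years: int = 10) -> float:
    """Future value of redirecting a recurring monthly spend into an investment.

    FV = P * ((1 + r)^n - 1) / r, where r is the monthly rate and
    n is the number of monthly contributions.
    """
    r = annual_return / 12
    n = years * 12
    return monthly_spend * ((1 + r) ** n - 1) / r

# A ₹6,000/month habit redirected for 10 years at an assumed 12% annual
# return compounds to roughly ₹13-14 lakh — the number the app surfaces.
```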
Clubs multiple credit cards into a single chronological due-date timeline. No more hidden dues, no more missed payments ruining your credit score.
Chat with your money instead of searching for answers. Ask "Can I afford that iPhone if I want to hit my SIP goal?" and get a data-backed answer, not just feelings.
The Avatar is RoastMoney's standout differentiator — a Ditto-Baymax hybrid character (Marshmallow) that reacts physically to your spending in real time. Overspend and it gets visibly angry. Hit a savings goal and it celebrates with you.
Behavioural economics suggests this works: users don't just see a number go down — they feel like they're letting their AI companion down. This is the "Emotional Friction" moat that makes RoastMoney a coach, not just a spreadsheet.
Vision: To become the world's first financial app that users are "scared" to disappoint — resulting in the highest savings-rate-per-user in the industry.
Early prototype — core flows demonstrated: Roast Engine · Pay Hub · Money Time Machine · AI Chat Modal
An AI-powered idea validation platform that goes deeper than a basic search — analysing what people actually complain about across Reddit, LinkedIn, forums, and the open web to help you find real gaps before you build.
This is an early-stage prototype built to demonstrate the core idea — not a finished product. The current version runs on a Gemini API key with limited quota, which means some analyses may be throttled or incomplete in the live demo.
When deployed properly with a production-grade API key and full data connectors (Reddit API, LinkedIn data, web scraping pipelines), the platform will deliver richer, real-time multi-source intelligence. This prototype is a proof of concept — the architecture and UX are real, the data depth will scale.
Enter a plain-language description of your idea. Then select your industry (e.g. FinTech, HR Tech, EdTech), sector (B2B SaaS, Consumer App, Marketplace), and target audience (e.g. solo founders, enterprise ops teams, students). These selectors focus the analysis so you don’t get generic results.
The platform runs queries across Reddit threads, LinkedIn discussions, product review sites, and the open web to find: what people are saying about this problem space, which companies already exist, and — critically — what users complain those companies are getting wrong.
A structured report covering: existing competitors and their gaps, recurring user frustrations in the space, signals of unsolved demand, and a positioning recommendation for how your idea can differentiate. Actionable — not just a list of links.
Unlike asking ChatGPT, this platform is structured around searching live sources — not generating answers from training data. The goal is to give founders information that feels like they did 10 hours of research themselves.
Current limitation: The Gemini API key used in the prototype has rate limits and quota caps, so deep multi-source analysis may be throttled. This is a demo of the product vision — full deployment requires a production API setup and dedicated data pipeline infrastructure.
This is a prototype — built to prove the concept and explore the product architecture. Not production-ready, but the core idea is real.
A rule-based pipeline that automatically cleans transaction data, detects anomalies, and generates daily risk summaries — replacing manual spreadsheet review entirely.
Real-time transaction data arrives every day. Manually checking it is slow, inconsistent, and error-prone — analysts miss patterns that only become visible in aggregate.
This pipeline runs automatically when new data arrives: cleaning it, identifying what's unusual, and producing a structured report — with no manual intervention required after initial setup.
Prepares raw data to ensure it's consistent and safe for analysis. Handles the messy reality of real transaction files.
import pandas as pd

# 1. Load raw data
df = pd.read_csv('../data/transactions.csv')

# 2. Standardize column names
df.columns = df.columns.str.lower().str.strip()

# 3. Remove exact duplicate rows
df = df.drop_duplicates()

# 4. Convert date column to datetime
df['date'] = pd.to_datetime(df['date'], errors='coerce')

# 5. Handle missing or invalid amounts
df['amount'] = pd.to_numeric(df['amount'], errors='coerce')
df['amount_was_missing'] = df['amount'].isna()
median_amount = df['amount'].median()
df['amount'] = df['amount'].fillna(median_amount)

# 6. Remove transactions with missing critical identifiers
df = df.dropna(subset=['transaction_id', 'customer_id', 'merchant'])

# 7. Remove negative or zero-value transactions
df = df[df['amount'] > 0]

# 8. Sort for consistency (audit-friendly)
df = df.sort_values(by=['customer_id', 'date'])

# 9. Save cleaned data
df.to_csv('../output/cleaned_data.csv', index=False)
Identifies transactions that don't match a customer's normal spending behavior. Three independent rules feed the final flag: a spend spike alone triggers it, or a high-value transaction combined with a high-risk merchant.
Transaction exceeds 2× the customer's average historical spend. Personalized baseline per customer_id.
Transaction amount exceeds ₹20,000 threshold regardless of customer history.
Merchant appears on a configurable high-risk list. Combined with Rule 2 for a final anomaly flag.
import pandas as pd

# 1. Load cleaned data
df = pd.read_csv('../output/cleaned_data.csv')

# 2. Customer-level baseline
avg_spend = df.groupby('customer_id')['amount'].mean()
df['avg_customer_spend'] = df['customer_id'].map(avg_spend)

# 3. Rule 1: Spend spike anomaly
df['spend_anomaly'] = df['amount'] > (2 * df['avg_customer_spend'])

# 4. Rule 2: High absolute transaction value
df['high_value_anomaly'] = df['amount'] > 20000

# 5. Rule 3: High-risk merchant
high_risk_merchants = ['Apple', 'Dell']
df['merchant_risk'] = df['merchant'].isin(high_risk_merchants)

# 6. Final anomaly flag (company logic)
df['final_anomaly'] = (
    df['spend_anomaly'] |
    (df['high_value_anomaly'] & df['merchant_risk'])
)

# 7. Save outputs
df[df['final_anomaly']].to_csv('../output/anomalies.csv', index=False)
Converts raw anomaly data into a clear, actionable daily summary for business and risk teams.
import pandas as pd

# 1. Load anomaly data
df = pd.read_csv('../output/anomalies.csv')

# 2. Handle case where no anomalies exist
if df.empty:
    print("No anomalies detected for this period.")
else:
    # 3. Summary metrics
    total_anomalies = len(df)
    unique_customers = df['customer_id'].nunique()
    total_amount = df['amount'].sum()
    avg_anomaly_amount = df['amount'].mean()

    print(f"Total flagged transactions : {total_anomalies}")
    print(f"Unique customers impacted  : {unique_customers}")
    print(f"Total amount at risk (₹)   : {total_amount:,.2f}")
    print(f"Average anomaly amount     : {avg_anomaly_amount:,.2f}")

    # 4. Top risky merchants
    print("\nTop merchants by anomaly count:")
    print(df['merchant'].value_counts().head())

    # 5. Top customers by total anomaly spend
    print("\nTop customers by anomaly spend:")
    print(
        df.groupby('customer_id')['amount']
        .sum()
        .sort_values(ascending=False)
        .head()
    )

    # 6. Save daily summary
    summary_df = pd.DataFrame([{
        'total_anomalies': total_anomalies,
        'unique_customers': unique_customers,
        'total_amount_at_risk': total_amount,
        'average_anomaly_amount': avg_anomaly_amount
    }])
    summary_df.to_csv('../output/daily_anomaly_summary.csv', index=False)
Design rationale: Vanilla Python and pandas keeps the system readable, auditable, and easy to hand off. No framework dependencies means nothing breaks when an external library updates. The modular structure lets each script be tested, replaced, or extended independently.
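The three stages can be chained with a lightweight runner so the pipeline fires as one unit when new data lands. A sketch — the script filenames are hypothetical placeholders for the three stages above:

```python
import subprocess
import sys
from pathlib import Path

# Ordered pipeline stages; each script reads the previous stage's output file.
STAGES = ["clean_data.py", "detect_anomalies.py", "daily_summary.py"]  # hypothetical names


def run_pipeline(script_dir: str = "scripts") -> None:
    """Run each stage in order, failing fast if any stage errors."""
    for stage in STAGES:
        script = Path(script_dir) / stage
        result = subprocess.run(
            [sys.executable, str(script)],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            # Downstream stages depend on this stage's output file, so stop here.
            raise RuntimeError(f"{stage} failed:\n{result.stderr}")
```

The fail-fast ordering matters: if cleaning dies, anomaly detection would otherwise run against yesterday's file and silently report stale results — the exact failure mode the pipeline exists to prevent.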