An omni‑channel chatbot platform built with Node.js/TypeScript, Express, Socket.io, RabbitMQ, PostgreSQL, SQLite, React (TypeScript), AWS, GitLab CI/CD, LangChain.js, Python (FastAPI), and prompt‑engineering best practices.
Executive Summary
Problem: Customer messages and comments arriving through Facebook Messenger and Instagram were handled manually, leading to slow first‑response times, inconsistent answers, and poor visibility across channels.
Approach: We built a webhook‑driven, omni‑channel chatbot and agent assist platform. A message ingestion layer normalizes events, a queue buffers spikes, LLM‑powered services craft responses (with guardrails), and a real‑time console enables human handoff.
Outcome: Routine inquiries are answered automatically with consistent tone, and agents step in seamlessly for complex cases. First‑response time drops to seconds even during peaks, while throughput scales predictably with demand.
Context & Goals
Context. Social channels became primary support and sales touchpoints, but workflows were fragmented: DMs and comment replies competed with email/chat tools, and there was no single view of customer interactions.
Goals.
Unify Facebook Messenger and Instagram DMs/comments via secure webhooks.
Automate high‑volume FAQs with reliable, brand‑safe responses.
Provide a real‑time agent console with live conversation view and takeover.
Maintain SLOs (low latency, high availability) and absorb traffic spikes.
Establish observability, CI/CD, and a path to add new channels without rewrites.
Solution Overview
We implemented a scalable, event‑driven architecture with explicit boundaries for ingestion, orchestration, and delivery. The platform is cloud‑agnostic in design and deployed on AWS.
Architecture Highlights
Webhook ingestion (Express/TypeScript): Verifies platform signatures, normalizes payloads from Messenger and Instagram (messages, comments, mentions), and publishes events to RabbitMQ (a minimal handler is sketched after this list).
Asynchronous orchestration (RabbitMQ): Queues absorb bursts and enable targeted retries with dead‑letter queues (DLQs) and backoff; consumers deduplicate on idempotency keys so redelivered events are processed exactly once.
LLM services:
Node.js + LangChain.js orchestrate prompt templates, tools, and policies.
FastAPI microservice (Python) handles classification, safety filtering, and optional enrichment tasks (e.g., keyword tagging) behind a stable HTTP contract.
Response generation with guardrails: Deterministic templates for critical flows; generative responses gated by safety checks, rate limits, and audit logs.
Real‑time console (React + Socket.io): Agents see live threads, confidence scores, and can claim/override the bot in one click; presence & typing indicators improve coordination.
Data stores: PostgreSQL as the system of record (conversations, messages, users, templates); SQLite used for lightweight edge/process‑local caches and dev tooling.
Delivery adapters: Channel‑specific senders post replies back to Messenger/Instagram, handle pagination and rate limits, and log message state transitions (see the Send API sketch after this list).
Infra & operations: AWS for runtime and storage, GitLab CI/CD for build/test/deploy, Infrastructure‑as‑Code, centralized logs/metrics/traces, SLO dashboards, and alerting.
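To make the ingestion bullet concrete, here is a minimal sketch of the webhook handler, assuming Meta's standard X‑Hub‑Signature‑256 scheme (HMAC‑SHA256 of the raw body with the app secret) and the amqplib client. Queue names, routes, and environment variables are illustrative, not the production values.

```ts
import crypto from "crypto";
import express from "express";
import amqp from "amqplib";

const app = express();
// Keep the raw body: the signature covers the exact bytes Meta sent.
app.use(express.json({ verify: (req: any, _res, buf) => { req.rawBody = buf; } }));

let chP: Promise<amqp.Channel> | null = null;
function channel(): Promise<amqp.Channel> {
  chP ??= amqp.connect(process.env.AMQP_URL!).then(async (conn) => {
    const ch = await conn.createChannel();
    await ch.assertQueue("inbound.events", { durable: true });
    return ch;
  });
  return chP;
}

function verifySignature(rawBody: Buffer, header?: string): boolean {
  if (!header) return false;
  const expected = "sha256=" + crypto
    .createHmac("sha256", process.env.META_APP_SECRET!)
    .update(rawBody)
    .digest("hex");
  // timingSafeEqual throws on length mismatch, so guard first.
  return header.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(header), Buffer.from(expected));
}

// Subscription handshake: echo hub.challenge when the verify token matches.
app.get("/webhooks/meta", (req, res) => {
  const ok = req.query["hub.verify_token"] === process.env.VERIFY_TOKEN;
  res.status(ok ? 200 : 403).send(ok ? String(req.query["hub.challenge"]) : "");
});

app.post("/webhooks/meta", async (req: any, res) => {
  if (!verifySignature(req.rawBody, req.header("X-Hub-Signature-256"))) {
    return res.sendStatus(403);
  }
  res.sendStatus(200); // ack fast; normalization and replies happen downstream
  const ch = await channel();
  ch.sendToQueue("inbound.events", Buffer.from(JSON.stringify(req.body)), {
    persistent: true,
  });
});

app.listen(3000);
```

Acknowledging immediately and deferring all work to the queue is what keeps webhook latency flat when traffic spikes.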
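A corresponding delivery‑adapter sketch, assuming the Messenger Send API over the Graph API; the version pin, token handling, and retry policy here are illustrative rather than the production values.

```ts
// Posts a text reply via the Send API and backs off on platform rate limits.
type SendResult = { message_id?: string; error?: { code: number; message: string } };

export async function sendMessengerReply(
  recipientId: string,
  text: string,
  attempt = 0,
): Promise<SendResult> {
  const res = await fetch(
    `https://graph.facebook.com/v19.0/me/messages?access_token=${process.env.PAGE_TOKEN}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        recipient: { id: recipientId },
        messaging_type: "RESPONSE",
        message: { text },
      }),
    },
  );
  if (res.status === 429 && attempt < 3) {
    // Rate limited: wait 1s, 2s, 4s before retrying, then give up.
    await new Promise((r) => setTimeout(r, 2 ** attempt * 1000));
    return sendMessengerReply(recipientId, text, attempt + 1);
  }
  return (await res.json()) as SendResult;
}
```

Instagram comment replies go through an analogous adapter, and every attempt logs a message state transition for the audit trail.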
Implementation
Phase 1 – Foundations
Defined the event schema (message, comment, reply, escalation) and idempotency keys; the types are sketched after this phase.
Built Express webhook handlers with signature verification and retries.
Containerized services; set up GitLab CI/CD (lint, tests, build, deploy).
Provisioned PostgreSQL; introduced Sequelize/Prisma‑style migrations; added SQLite for local/dev caches.
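One plausible reconstruction of the normalized event types and the idempotency‑key derivation from this phase; field names are assumptions for illustration, not the exact production schema.

```ts
import crypto from "crypto";

type Channel = "messenger" | "instagram";
type EventKind = "message" | "comment" | "reply" | "escalation";

interface NormalizedEvent {
  kind: EventKind;
  channel: Channel;
  conversationId: string;   // platform thread / comment-thread id
  senderId: string;         // platform-scoped user id
  text: string;
  receivedAt: string;       // ISO-8601 timestamp
  idempotencyKey: string;   // stable across webhook redeliveries
}

// Redeliveries carry the same platform message id, so hashing
// (channel, message id) yields the same key and duplicates are dropped.
function idempotencyKey(channel: Channel, platformMessageId: string): string {
  return crypto
    .createHash("sha256")
    .update(`${channel}:${platformMessageId}`)
    .digest("hex");
}
```

Consumers check the key against a processed‑events table before acting, which is what makes aggressive retry policies safe.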
Phase 2 – Orchestration & Intelligence
Introduced RabbitMQ with TTL, DLQs, and priority queues for urgent events (topology sketched after this phase).
Implemented LangChain.js chains (classifier → router → responder) with prompt templates and tool calling; a chain sketch follows this phase.
Added FastAPI service for moderation/classification and deterministic fallbacks for low confidence.
Created response template library (greetings, FAQs, order‑status placeholders) with brand tone controls.
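One way to express the TTL/DLQ/priority topology with amqplib is sketched below; exchange and queue names, TTL values, and the priority ceiling are illustrative.

```ts
import amqp from "amqplib";

// Work queue dead-letters failures into a retry queue; the retry queue's
// per-message TTL sends expired messages back to the work queue.
async function assertTopology(ch: amqp.Channel): Promise<void> {
  await ch.assertExchange("events", "direct", { durable: true });

  await ch.assertQueue("events.work", {
    durable: true,
    maxPriority: 10, // urgent events (e.g. escalations) jump the line
    deadLetterExchange: "events",
    deadLetterRoutingKey: "retry",
  });

  await ch.assertQueue("events.retry", {
    durable: true,
    messageTtl: 15_000, // wait 15s, then dead-letter back to the work queue
    deadLetterExchange: "events",
    deadLetterRoutingKey: "work",
  });

  // Parking lot: consumers inspect the x-death header to cap retry
  // counts and route poison messages here for manual inspection.
  await ch.assertQueue("events.dlq", { durable: true });

  await ch.bindQueue("events.work", "events", "work");
  await ch.bindQueue("events.retry", "events", "retry");
  await ch.bindQueue("events.dlq", "events", "dlq");
}
```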
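And a minimal classifier → router → responder chain in LangChain.js. The model name, the @langchain/openai package, and the prompt wording are assumptions; the production chains additionally wire tools and policy checks.

```ts
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableBranch, RunnableSequence } from "@langchain/core/runnables";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

// Classifier: label the inquiry so the router can pick a responder.
const classify = ChatPromptTemplate.fromMessages([
  ["system", "Label the customer message as one of: faq, order, other. Reply with the label only."],
  ["human", "{text}"],
]).pipe(model).pipe(new StringOutputParser());

const faqResponder = ChatPromptTemplate.fromMessages([
  ["system", "Answer the FAQ concisely in the brand voice."],
  ["human", "{text}"],
]).pipe(model).pipe(new StringOutputParser());

type Routed = { text: string; label: string };

// Router: branch on the label; anything unrecognized falls through to handoff.
const route = RunnableBranch.from<Routed, string>([
  [(x: Routed) => x.label.startsWith("faq"), faqResponder],
  async () => "HANDOFF", // default: escalate to a human agent
]);

const chain = RunnableSequence.from([
  async (input: { text: string }) => ({
    text: input.text,
    label: (await classify.invoke(input)).trim().toLowerCase(),
  }),
  route,
]);

// Usage: await chain.invoke({ text: "What are your opening hours?" });
```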
Phase 3 – Productization
React/TypeScript agent console with live transcripts, confidence scores, and a bot/agent toggle via Socket.io (real‑time layer sketched after this phase).
Observability: structured logs, request tracing, and key metrics (latency, error rate, queue depth, token usage); a metrics sketch follows this phase.
Load/spike/soak tests; autoscaling policies tuned from results; runbooks and on‑call playbooks documented.
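A sketch of the console's real‑time layer with Socket.io: agents join a conversation room, see every message live, and claiming a thread flips the bot off for that conversation. Event names and the claim model are illustrative.

```ts
import { Server } from "socket.io";

const io = new Server(3001, { cors: { origin: "*" } });

const claimed = new Map<string, string>(); // conversationId -> agentId

io.on("connection", (socket) => {
  socket.on("thread:join", (conversationId: string) => {
    socket.join(conversationId);
  });

  socket.on(
    "thread:claim",
    ({ conversationId, agentId }: { conversationId: string; agentId: string }) => {
      claimed.set(conversationId, agentId);
      // Everyone watching the thread sees the takeover instantly.
      io.to(conversationId).emit("thread:claimed", { conversationId, agentId });
    },
  );
});

// Called by the pipeline whenever a message (bot, agent, or customer) lands.
export function broadcastMessage(conversationId: string, message: unknown): void {
  io.to(conversationId).emit("thread:message", message);
}

// The responder consults this before auto-replying.
export function isBotActive(conversationId: string): boolean {
  return !claimed.has(conversationId);
}
```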
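For the metrics bullet, a sketch using prom-client (an assumption; the case study names the metrics, not the library). Metric names and buckets are illustrative.

```ts
import client from "prom-client";

const registry = new client.Registry();
client.collectDefaultMetrics({ register: registry });

// Webhook receipt to reply delivery, per channel.
export const replyLatency = new client.Histogram({
  name: "bot_reply_latency_seconds",
  help: "Webhook receipt to reply delivery",
  labelNames: ["channel"],
  buckets: [0.5, 1, 2, 5, 10, 30],
  registers: [registry],
});

// Sampled periodically from the broker; drives autoscaling alerts.
export const queueDepth = new client.Gauge({
  name: "events_queue_depth",
  help: "Messages waiting in the work queue",
  registers: [registry],
});

// Expose for scraping, e.g.:
// app.get("/metrics", async (_req, res) =>
//   res.type(registry.contentType).send(await registry.metrics()));
```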
Key Features
Omni‑channel: Messenger DMs and Instagram DMs/comments through one pipeline.
Automation with guardrails: Confidence thresholds determine auto‑reply vs. agent handoff, and all generative output passes through safety filters (gating logic sketched after this list).
Real‑time agent assist: Suggested replies, quick templates, and instant takeover.
Knowledge & prompts: Prompt‑engineering patterns ensure consistent voice; retrieval hooks are ready for future knowledge base integration.
Compliance & resilience: Signature verification, rate‑limit handling, idempotent processing, and audit trails.
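The auto‑reply/handoff gate can be sketched as below. The threshold value, the moderation endpoint path, and its response shape are assumptions standing in for the FastAPI service's HTTP contract.

```ts
const AUTO_REPLY_THRESHOLD = 0.85; // illustrative; tuned in production

interface Draft {
  conversationId: string;
  text: string;
  confidence: number; // classifier confidence for the matched intent
}

// Calls the moderation/classification microservice over its HTTP contract.
async function passesSafetyFilter(text: string): Promise<boolean> {
  const res = await fetch(`${process.env.MODERATION_URL}/classify`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  const { safe } = (await res.json()) as { safe: boolean };
  return safe;
}

// Low confidence or a failed safety check both route to a human;
// every decision is written to the audit trail.
export async function gate(draft: Draft): Promise<"auto_reply" | "handoff"> {
  if (draft.confidence < AUTO_REPLY_THRESHOLD) return "handoff";
  if (!(await passesSafetyFilter(draft.text))) return "handoff";
  return "auto_reply";
}
```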
Tech Stack
Backend: Node.js (TypeScript), Express, Socket.io, RabbitMQ
Data: PostgreSQL, SQLite
AI: LangChain.js, Prompt Engineering, Python (FastAPI) microservices
Frontend: React (TypeScript)
Infra/DevOps: AWS, GitLab CI/CD
Outcomes
Faster responses: Routine queries are answered within seconds; spikes are smoothed by queues and caching.
Higher consistency: Template‑backed prompts reduce variance in tone and content across channels.
Operational clarity: Unified logs/metrics and an agent console provide a single source of truth for social interactions.
Scalability by design: Horizontal scale at the API and worker layers; message throughput grows near‑linearly with added capacity.