Enterprise AI infrastructure built on Anthropic’s Model Context Protocol. One endpoint, 180+ tools, 3-node active-active failover, persistent memory, and autonomous agents.
MCP is an open standard from Anthropic — the makers of Claude — that lets AI models connect to external tools, data sources, and services. It defines how an AI client discovers available tools, calls them, and receives structured results.
Any MCP-compatible client (Claude Desktop, Claude Code, Cursor, Windsurf, Cline, or custom apps) can connect to any MCP server and immediately use its tools.
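On the wire, discovery and invocation are plain JSON-RPC 2.0 messages as defined by the MCP specification. A minimal exchange looks like the following (the tool name and arguments are illustrative, not from a real catalog):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

{"jsonrpc": "2.0", "id": 2, "method": "tools/call",
 "params": {"name": "docker_ps", "arguments": {"all": true}}}
```

The first message returns the server's tool catalog with names, descriptions, and input schemas; the second invokes one tool and receives a structured result.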
Lightning MCP is an enterprise aggregation and persistence layer built on top of MCP. We don’t replace MCP — we extend it with everything production AI infrastructure needs.
180+ tools through one MCP connection. No per-tool config. Connect once, access everything.
16+ per-project Redis databases with 3-way replication. AI that remembers across sessions, projects, and people.
3 independent nodes all serving simultaneously. Zero switchover latency. No single point of failure.
Unimatrix vector database with 4,264 embeddings. Search all projects by meaning, not keywords.
Built-in ITSM with incidents, changes, projects, tasks, and 500+ knowledge articles.
AI reporters, task orchestrators, and background workers operating 24/7 without human intervention.
Everything flows through a single path: your AI client connects to the Bridge, the Bridge routes to the Singularity Engine, and the Engine dispatches to the right module.
Singularity is modular. Adding a new integration means dropping in a Python file — no restart, no reconfiguration. The architecture supports any tool that can expose an API.
36 modules running in production across 3 nodes. Every module is independently loadable.
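The document doesn't publish the module contract, so purely as an illustration, a drop-in module could be a Python file that registers its tools with a loader the engine scans (every name below is hypothetical, not the real Singularity API):

```python
# Hypothetical sketch of a drop-in module interface; the real
# Singularity loader API is not documented here.
import platform
from typing import Callable, Dict

TOOLS: Dict[str, Callable] = {}  # registry the engine would scan

def tool(name: str):
    """Decorator that registers a function as a callable tool."""
    def wrap(fn: Callable) -> Callable:
        TOOLS[name] = fn
        return fn
    return wrap

@tool("linux.system_info")
def system_info() -> dict:
    """Example tool: report basic OS details."""
    return {"os": platform.system(), "release": platform.release()}

def dispatch(name: str, **kwargs):
    """The engine dispatches a call by tool name."""
    return TOOLS[name](**kwargs)
```

Because registration happens at import time, "dropping in a Python file" is enough for a loader that re-imports the module directory, which is consistent with the no-restart claim above.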
| Module | What It Does | Status |
|---|---|---|
| Linux Control | Run commands, system info, CPU/memory/disk on managed servers | Live |
| Docker | Container lifecycle — ps, logs, stats, start/stop/restart | Live |
| Proxmox | Cluster status, VM management, LXC containers, node monitoring | Live |
| Memory | Redis recall/remember, conscience load/save, project-scoped storage | Live |
| Database | PostgreSQL queries, Redis ops, Qdrant vector search | Live |
| ITSM | Incidents, changes, problems, projects, tasks, knowledge base | Live |
| Network | pfSense firewall, DHCP, VPN, managed switch VLANs | Live |
| Home Assistant | Entity states, service calls, device control | Live |
| Windows Desktop | Mouse, keyboard, screenshots, window management, PowerShell | Live |
| Forge (Validation) | Multi-pass code checking — Bronze, Silver, Gold tiers | Live |
| Continuum | Conversation archival, chat sync, transcript search | Live |
| Hive Mind / Telepathy | Cross-project search, AI-to-AI messaging, coordination | Live |
| Phantom | Autonomous background agent — assigns itself tasks, executes | Live |
| Discovery | Network scanning, host probing, service detection | Live |
| Extraction | System info, configs, users, databases from target systems | Live |
| Assimilation | AI analysis, documentation generation, migration planning | Live |
| VoIP | Phone system management via VoIP.ms API | Live |
| File Transfer | Host filesystem, rsync, SCP operations | Live |
| Web / Search | Web fetch, SearXNG search, file read/write | Live |
| AI/LLM | Local model inference, chatflow execution | Live |
| N8N / Node-RED | Workflow automation, flow management | Live |
| Conduit | Claude.ai browser session bridge, conversation sync | Live |
| Scheduler | Cron-based task scheduling and execution | Live |
| SDLC | Dev-to-production pipeline, deployment tracking | Live |
| Audit / Sentinel | Security scanning, compliance checks, monitoring | Live |
| Bolt | Security hardening, certificate management | Live |
The modular architecture means any API-accessible service can become a Singularity module. These are on the roadmap or ready to build when a customer needs them:
| Integration | What It Would Do | Status |
|---|---|---|
| Salesforce / HubSpot | CRM queries, contact management, pipeline automation | Planned |
| QuickBooks / Xero | Invoice management, financial reporting, expense tracking | Planned |
| Jira / ServiceNow | Ticket sync, sprint management, cross-platform ITSM | Planned |
| AWS / Azure / GCP | Cloud resource management, billing, deployment | Planned |
| Slack / Teams | Channel management, message posting, bot integration | Planned |
| GitHub / GitLab | Repo management, PR review, CI/CD triggering | Planned |
| Kubernetes | Cluster management, pod orchestration, scaling | Planned |
| Terraform / Ansible | Infrastructure as code, provisioning automation | Planned |
| Datadog / Grafana | Metrics queries, dashboard management, alerting | Planned |
| Exchange / O365 | Email, calendar, SharePoint integration | Planned |
| Twilio / SMS | Text messaging, call routing, notifications | Planned |
| Stripe | Billing management, subscription handling, invoicing | In Dev |
When Anthropic released MCP, we were excited. We immediately started connecting Claude to everything — our servers, Docker, Proxmox, databases, firewall, Home Assistant, phone system. Each one was its own MCP server with its own connection.
By the time we had 15 connections, Claude’s performance had visibly degraded. Response times went up. The AI spent more time managing tool connections than actually thinking about our questions. Context windows filled up with tool definitions instead of useful conversation. It was unusable.
That’s why we built Lightning MCP. One endpoint that aggregates everything. Claude connects once, sees all 180+ tools, and performs like it only has one connection — because it does.
Without Lightning: Each MCP server adds connection overhead, tool definitions bloat the context, and the AI juggles multiple sessions. At 10+ servers, you feel it. At 20+, it’s broken.
With Lightning: One SSE connection. One set of tool definitions. The Bridge handles routing internally. Claude never knows there are 36 modules behind the curtain — it just sees one clean tool catalog.
```yaml
mcp_servers:
  - filesystem: localhost:8821
  - postgres: localhost:8823
  - redis: localhost:8824
  - docker: localhost:8825
  - proxmox: localhost:8826
  - home-assistant: localhost:8828
  - pfsense: localhost:8829
  - voipms: localhost:8830
  # ... 15+ more connections

# Each adds ~200 tokens of tool defs
# Context fills up fast
# Response time degrades with every server
# One server crashes = broken session
```
```
{
  "mcpServers": {
    "lightning": {
      "url": "https://lmcp.your.ai/mcp",
      "transport": "sse"
    }
  }
}

# 1 connection
# 180+ tools
# Internal routing, zero overhead
# Bridge handles failover automatically
# Any module can hot-reload
# No context bloat
```
Works with Claude Desktop, Claude Code, Cursor, Windsurf, Cline, and any client that speaks MCP over SSE.
Standard MCP has no memory between sessions. Lightning adds a 3-tier persistence architecture so your AI never starts from zero.
Tier 1 — Per-Project Redis: 16+ isolated databases (dedicated port ranges) with separate memory and conscience stores. 3-way active-active replication via KeyDB. Sub-millisecond. Source of truth.
Tier 2 — Unimatrix Vector Search: Qdrant DB with 4,264 embeddings using all-MiniLM-L6-v2. Search all projects simultaneously by meaning, not keywords. Filter by project, owner, category.
Tier 3 — Archive: PostgreSQL backup + full conversation transcripts via Continuum. Nothing is ever deleted.
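The read path across the three tiers can be sketched in pure Python, with dicts standing in for the real Redis, Qdrant, and PostgreSQL clients (class and key names are illustrative, not Lightning's actual API):

```python
# Illustrative only: the lookup order of the 3-tier design, with
# in-memory dicts standing in for Redis, Qdrant, and PostgreSQL.
class TieredMemory:
    def __init__(self):
        self.redis = {}    # Tier 1: fast, per-project, source of truth
        self.vectors = {}  # Tier 2: semantic index (Qdrant in production)
        self.archive = {}  # Tier 3: PostgreSQL backup, never deleted

    def remember(self, key, value):
        # Writes land in every tier; cross-node replication of Tier 1
        # is handled by KeyDB in the real system.
        self.redis[key] = value
        self.vectors[key] = value
        self.archive[key] = value

    def recall(self, key):
        # Reads try the fastest tier first, then fall through.
        for tier in (self.redis, self.vectors, self.archive):
            if key in tier:
                return tier[key]
        return None

m = TieredMemory()
m.remember("project:alpha:owner", "ops-team")
del m.redis["project:alpha:owner"]      # simulate a Tier 1 miss
print(m.recall("project:alpha:owner"))  # falls through to Tier 2
```

The point of the sketch is the ordering: Tier 1 answers almost everything, Tier 2 adds meaning-based search across projects, and Tier 3 guarantees nothing is ever lost.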
Not active-passive. All three nodes accept requests simultaneously. KeyDB handles replication and conflict resolution. If any node goes down, the others are already running — zero switchover latency.
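KeyDB's multi-master mode is what makes simultaneous writes on all three nodes possible. A minimal per-node configuration would look roughly like this (hostnames and ports are placeholders, not Lightning's real topology):

```
# keydb.conf on node1 (hosts and ports are placeholders)
active-replica yes   # keep serving reads/writes while replicating
multi-master yes     # accept replicated writes from multiple masters
replicaof node2.internal 6379
replicaof node3.internal 6379
```

Each node lists the other two as replication sources, so every write made anywhere propagates everywhere with no designated primary.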
- Node 1: Primary dev + production. All tools, Unimatrix, ITSM, Daily Planet.
- Node 2: Full KeyDB replica. Independent Singularity. Full read/write.
- Node 3: Third replica. PostgreSQL backup. ZFS archive. Full redundancy.
lmcp.node1.yourdomain.com — lmcp.node2.yourdomain.com — lmcp.node3.yourdomain.com
Each bridge connects to its own local Singularity instance. True independence — no node depends on another.
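Because every node serves independently, a client-side wrapper could also fail over on its own by probing the endpoints in order. A minimal sketch (the probe is injected so this stays self-contained; hostnames mirror the placeholders above):

```python
# Illustrative client-side failover across independent nodes.
from typing import Callable, Iterable, Optional

ENDPOINTS = [
    "https://lmcp.node1.yourdomain.com/mcp",
    "https://lmcp.node2.yourdomain.com/mcp",
    "https://lmcp.node3.yourdomain.com/mcp",
]

def pick_endpoint(endpoints: Iterable[str],
                  probe: Callable[[str], bool]) -> Optional[str]:
    """Return the first endpoint whose health probe succeeds."""
    for url in endpoints:
        if probe(url):
            return url
    return None

# Stub probe pretending node1 is down:
up = {ENDPOINTS[0]: False, ENDPOINTS[1]: True, ENDPOINTS[2]: True}
print(pick_endpoint(ENDPOINTS, lambda u: up[u]))  # node2's URL
```

In production the probe would be an HTTP health check, but the selection logic is the same: because all nodes are already live, whichever endpoint answers first is immediately usable.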
Deep dive into 3-tier persistence, Unimatrix, and the designation system.
Autonomous AI newsroom with 4 reporters monitoring infrastructure 24/7.
Learn about enterprise deployments and custom configurations.