MCPs Are a Dead End for Talking to Data
Feb 28, 2026

Introduction
Every enterprise today wants the same capability: to talk to its data.
Executives want answers without dashboards. Operators want insights without writing SQL. Analysts want faster exploration across systems. Conversational AI and text-to-SQL promised to finally unlock this vision: anyone asks a question in natural language and receives actionable insights instantly.
Over the past two years, many enterprises attempted to achieve this using agentic systems connected through MCPs (Model Context Protocol servers) layered on top of SQL access. The idea was intuitive: expose enterprise capabilities as tools, let an LLM orchestrate them, and enable conversational analytics.
However, reality has been very different.
Across multiple enterprise deployments, including Fortune 500 organizations we have worked with, MCP-based approaches have struggled with reliability, latency, cost, and correctness. One Fortune 500 enterprise reported failure rates as high as 93%, while another large pharma organization discontinued its pilot entirely after similar outcomes.
The issue is not conversational AI itself.
The issue lies in how AI is allowed to interact with enterprise data.
How Are MCPs Used Today
Most enterprise implementations follow a similar architectural pattern.
When a user asks a question, the request is routed to an agentic system. The agent evaluates available MCP servers (each representing access to some dataset or capability) and selects one to call. The MCP returns data, often in large and generalized payloads, which the language model must interpret before deciding the next step.
This cycle repeats until an answer is produced.
At a conceptual level, MCPs act as middleware between AI and enterprise data systems. Instead of allowing the model to directly reason over databases, the system forces interaction through predefined abstractions.
Initially, this seems safe and modular. In practice, it introduces fundamental limitations.
Enterprise questions rarely map cleanly to predefined tools. Even simple requests often require resolving entities, joining datasets, applying business rules, and executing multi-step reasoning chains. MCP systems push this burden onto the LLM, expecting it to infer intent from noisy intermediate outputs.
The result is exploration instead of execution.
Figure: A user query flows through an AI agent that repeatedly calls MCP servers, receives large intermediate payloads, and attempts iterative reasoning before generating an answer.
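The loop described above can be sketched in a few lines. This is a hypothetical illustration, not any real framework's API: `llm_choose_action` and `call_mcp_server` are stubs standing in for an LLM call and an MCP client, and the payload sizes are invented.

```python
# Illustrative sketch of the MCP-style agent loop: choose a tool,
# receive a broad payload, re-interpret, repeat until an answer or budget.

def llm_choose_action(question, history):
    """Stub: an LLM picks the next tool call, or decides to answer."""
    if len(history) >= 2:  # after enough context, attempt an answer
        return {"type": "answer", "text": f"Answer based on {len(history)} payloads"}
    return {"type": "tool", "server": "sales_db", "args": {"query": question}}

def call_mcp_server(server, args):
    """Stub: an MCP server returns a large, generalized payload."""
    return {"server": server, "rows": list(range(1000)), "args": args}

def answer_question(question, max_steps=5):
    history = []
    for _ in range(max_steps):
        action = llm_choose_action(question, history)
        if action["type"] == "answer":
            return action["text"]
        # Each hop appends a payload the model must re-interpret next turn.
        history.append(call_mcp_server(action["server"], action["args"]))
    return None  # budget exhausted: the timeout case

print(answer_question("Why did this customer's invoice change last month?"))
```

Even in this toy version, the structural issue is visible: the model never touches the data directly; it only sees opaque payloads and must guess the next step from them.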
What Are the Issues With MCP
The biggest limitation of MCP-based systems is structural: they prevent AI from talking directly to data.
Even when enterprises grant full database availability, access remains mediated through MCP wrappers. If an MCP does not exist for a specific use case, the assistant cannot proceed. If the MCP is too broad, it returns excessive information that must be filtered by the model. If it is too narrow, engineering teams must continuously build and maintain new MCP endpoints.
This defeats the fundamental purpose of conversational analytics: letting users ask anything, not only what was anticipated at design time.
Over time, organizations face an impossible tradeoff between coverage and maintainability.
A deeper problem emerges around business understanding. Enterprise analytics is not merely SQL execution. Questions such as “Why did this customer’s invoice change last month?” require knowledge of contracts, exceptions, onboarding flows, usage bursts, and operational processes stored across both structured tables and documentation.
MCP systems do not encode this business memory. The agent effectively searches without direction, leading to hallucinations, incorrect joins, or execution loops that time out.
Latency and cost compound the issue. Each MCP invocation introduces additional tokens, reasoning steps, and network overhead. Multi-hop analytical questions — which represent the majority of enterprise queries — quickly become slow and expensive. Reliability drops sharply as execution chains grow longer.
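A back-of-the-envelope model makes the compounding concrete. The numbers below (90% per-hop reliability, 5,000-token payloads) are illustrative assumptions, not measurements from the deployments discussed here.

```python
# Toy model of how reliability and token cost compound across MCP hops.

def chain_success(per_step_success: float, steps: int) -> float:
    """If each hop succeeds independently with probability p,
    an n-hop chain succeeds with probability p**n."""
    return per_step_success ** steps

def chain_tokens(tokens_per_payload: int, steps: int) -> int:
    """Each hop's payload is re-read on every later reasoning step,
    so total context processed grows roughly quadratically."""
    return sum(tokens_per_payload * k for k in range(1, steps + 1))

for steps in (1, 3, 6):
    print(steps, round(chain_success(0.9, steps), 2), chain_tokens(5000, steps))
```

Under these assumptions a single hop succeeds 90% of the time, but a six-hop chain succeeds only about 53% of the time while processing over 100,000 tokens of context, which is why long execution chains fail far more often than any individual tool call suggests.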
What begins as an elegant abstraction turns into fragile orchestration.
How Has It Performed So Far
In controlled demos, MCP-based assistants often appear impressive. They handle predefined workflows reasonably well and can execute scripted actions reliably.
But production environments tell a different story.
Enterprise users rarely ask templated questions. They explore edge cases, investigate anomalies, and ask follow-up questions that were never anticipated during tool design. These long-tail queries expose the brittleness of MCP architectures.
Common outcomes observed across deployments include:
Frequent execution failures and timeouts
Hallucinated analytical explanations
High infrastructure and token costs
Increasing developer effort to maintain MCP coverage
Declining user trust after repeated incorrect answers
In several organizations, adoption stalled not because users disliked conversational analytics, but because they stopped trusting the system’s answers.
What Is the Best Way to Talk to Data
The lesson emerging from these deployments is increasingly clear:
AI should not talk to abstractions of data. It should talk to data itself.
A reliable conversational analytics system must behave less like a chatbot calling tools and more like a skilled human analyst working directly with enterprise systems.
This requires a fundamentally different architecture.
Instead of routing requests through middleware layers, the system should construct a unified understanding of the enterprise environment — combining databases, time-series systems, documents, and operational rules into a coherent memory layer. Reasoning then happens against this memory, followed by deterministic execution against live data sources.
When AI can directly query, validate, and iterate on real data, conversations become grounded rather than speculative. Multi-step questions become execution plans instead of guesswork.
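A minimal sketch of this plan-then-execute pattern, under stated assumptions: the "enterprise memory" is a tiny in-memory dict, the planner is hard-coded rather than an LLM, and all names (`MEMORY`, `plan_query`, the `invoices` schema) are hypothetical.

```python
# Sketch: resolve a question against a memory layer into one deterministic
# SQL plan, then execute that plan directly against live data.
import sqlite3

MEMORY = {
    "tables": {"invoices": ["customer_id", "month", "amount"]},
    "rules": {"invoice_change": "compare consecutive months per customer"},
}

def plan_query(question: str) -> str:
    """Planning step: map the question onto known schema and business
    rules, producing a single deterministic SQL statement."""
    assert "invoice" in question.lower() and "invoices" in MEMORY["tables"]
    return ("SELECT month, SUM(amount) FROM invoices "
            "WHERE customer_id = ? GROUP BY month ORDER BY month")

def execute(conn, question: str, customer_id: int):
    """Execution step: run the planned SQL against the live database."""
    return conn.execute(plan_query(question), (customer_id,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (customer_id, month, amount)")
conn.executemany("INSERT INTO invoices VALUES (?, ?, ?)",
                 [(1, "2026-01", 100), (1, "2026-02", 140)])
print(execute(conn, "Why did this customer's invoice change?", 1))
```

The contrast with the MCP loop is the point: reasoning happens once, against the memory layer, and what reaches the database is a validated, deterministic query rather than an exploratory tool call.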
The goal is not better prompting or more tools.
The goal is data-native intelligence.
How Does Genloop’s Approach Compare to MCPs
A practical example of this shift comes from NetApp’s Keystone Storage-as-a-Service platform, where teams across sales, finance, engineering, and customer success needed conversational access to operational data.
Their initial implementation followed the standard MCP-driven model. Despite having access to Postgres tables, usage telemetry systems, metering datasets, and documentation, the assistant could only operate through curated MCP servers. Broad MCP responses forced the model to infer context repeatedly, leading to timeouts in nearly 93% of runs.
The limitation was not model capability — it was architectural confinement.
Genloop approached the problem differently.
Instead of adding more MCPs, the system was connected directly to Keystone’s underlying data ecosystem. A unified and sanitized enterprise memory was constructed by combining structured datasets with business documentation and operational processes. This allowed the system to understand how Keystone actually functions rather than merely accessing fragments of information.
Execution shifted from exploratory tool-calling to planned reasoning. Specialized agents interacted directly with databases, retrieved documentation context when required, executed analytical steps, and continuously learned from interaction logs.
The transformation was immediate and measurable.
The same environment moved from a 93% failure rate to roughly 95% successful executions, eliminating timeouts and enabling teams to rely on the assistant for daily operational decisions.
Instead of behaving like a best-effort chatbot, the system functioned as a persistent AI analyst embedded within the enterprise.
Summary
MCPs were introduced to make AI integration modular and safe. In practice, they have become a dead end for conversational analytics at enterprise scale.
They introduce middleware where understanding is required, abstraction where precision is needed, and orchestration where direct reasoning would suffice.
Talking to data is fundamentally different from calling tools. Enterprise intelligence demands systems that understand business context, execute deterministically, and learn continuously from real interactions.
The future of text-to-SQL and conversational analytics will not be defined by more wrappers or smarter prompts.
It will be defined by systems that can finally talk directly to data.