Research · May 15, 2026 · 6 min read

Stop overstretching LLMs for enterprise work.

Our position paper with Microsoft Research and SAP Research on overstretched LLMs and task-native agents.

Kuldeep Singh

As organizations race to deploy generative AI, a curious paradox has emerged: while adoption is soaring, measurable enterprise-scale value remains limited. Most initiatives are stuck in the "pilot-to-production" gap, confined to localized experiments that fail to scale.

At Eka Labs, we've spent a lot of time thinking about why this is happening. Our latest research paper with Microsoft Research and SAP Research, "Position: Avoid Overstretching LLMs for every Enterprise Task," provides a formal answer: we are trying to force monolithic Large Language Models (LLMs) to perform tasks they are mathematically ill-equipped to handle reliably.

It is time to move away from overstretched, general-purpose models toward Task-Native Agents.

The Theoretical Mismatch: Why Scaling Isn't Solving Reliability.

The industry often treats LLMs as monolithic repositories of both world knowledge and reasoning procedures. We argue this is fundamentally flawed for enterprise workloads, which are dominated by deterministic, structured, and knowledge-dependent tasks.

In our paper, we back this position with three rigorous theoretical foundations:

1. The Information-Feasibility Condition

Every task has an intrinsic information complexity, and every model has a finite parametric capacity. We formally prove that a task is solvable by a model if and only if its capacity meets or exceeds the task's complexity. For multi-step enterprise procedures and evolving policies, the information complexity often exceeds what a bounded parametric model can encode robustly.
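In symbols, the condition can be sketched as follows (our illustrative notation, not necessarily the paper's):

```latex
% Illustrative notation: C(T) denotes the intrinsic information
% complexity of task T, and K(M) the parametric capacity of model M.
% The feasibility condition states that T is solvable by M
% if and only if capacity meets or exceeds complexity:
T \text{ is solvable by } M \iff K(M) \ge C(T)
```

When a multi-step procedure or an evolving policy pushes C(T) above K(M), no amount of prompting restores feasibility; the gap is structural.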

2. The Knowledge Bottleneck (I(Y;Z|X) > 0)

Enterprise tasks depend on external, proprietary facts (Z) not contained in a model's training data. If the correct output relies on this external information, a model relying only on its internal weights faces an irreducible error floor. You cannot "prompt" your way out of missing knowledge.
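This can be made concrete with a toy calculation (our own construction, not from the paper): when the answer Y depends on an external fact Z beyond the prompt X, the conditional mutual information I(Y; Z | X) is strictly positive, so a model seeing only X carries an irreducible error floor.

```python
# Toy illustration of the knowledge bottleneck: compute I(Y; Z | X)
# for a distribution where Y is fully determined by an external fact Z.
from collections import defaultdict
from math import log2

def conditional_mutual_information(joint):
    """I(Y; Z | X) in bits, for a dict {(x, y, z): probability}."""
    p_x = defaultdict(float)
    p_xy = defaultdict(float)
    p_xz = defaultdict(float)
    for (x, y, z), p in joint.items():
        p_x[x] += p
        p_xy[(x, y)] += p
        p_xz[(x, z)] += p
    mi = 0.0
    for (x, y, z), p in joint.items():
        if p > 0:
            # I(Y;Z|X) = sum p(x,y,z) * log[ p(x,y,z) p(x) / (p(x,y) p(x,z)) ]
            mi += p * log2(p * p_x[x] / (p_xy[(x, y)] * p_xz[(x, z)]))
    return mi

# X is a fixed question; Z is a proprietary policy flag the model has
# never seen; Y simply reports Z. Given X, knowing Z fully determines Y.
joint = {("q", "yes", "flag_on"): 0.5, ("q", "no", "flag_off"): 0.5}
print(conditional_mutual_information(joint))  # 1.0 bit of information X lacks
```

One full bit of the answer lives outside the prompt here, which is exactly the regime where parametric knowledge alone cannot close the gap.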

3. The Topological Obstacle

We model world knowledge as a space with infinite doubling dimension, while a model's parameter space is a finite Euclidean manifold. Mathematically, a finite model cannot represent the open-ended, dynamic space of world knowledge without loss. This makes external knowledge stores a principled requirement, not an optional augmentation.
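The core of the argument can be paraphrased compactly (our sketch of the standard doubling-dimension definitions, not the paper's exact statement):

```latex
% A metric space has doubling dimension d if every ball of radius r
% can be covered by at most 2^d balls of radius r/2.
\dim_{\mathrm{doub}}(\mathbb{R}^n) = \Theta(n) < \infty,
\qquad
\dim_{\mathrm{doub}}(\mathcal{K}_{\mathrm{world}}) = \infty
% Since low-distortion embeddings cannot increase doubling dimension
% beyond a bounded factor, no finite-dimensional parameter space can
% host world knowledge without unbounded distortion somewhere.
```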

From General SLMs to Task-Native Agents: The Eka Approach.

Rather than one massive model acting as an "oracle" for every subroutine, we advocate for modular decomposition. At Eka Labs, we take general small language models and transform them into Task-Native Agents.

Synthesis Over Orchestration

Our platform does not simply hand-assemble prompts and tools around a general model. Instead, we automatically synthesize task-specific agents from natural language commands. We take compact, general models (typically with far fewer parameters) and shape them with the specific workflows, operational constraints, and action structures required for a defined mission.
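A minimal sketch of this idea, with every name and structure being hypothetical rather than Eka Labs' actual API: a natural-language command is compiled offline into a fixed agent specification, whose workflow, constraints, and action schema a compact model then executes.

```python
# Hypothetical "synthesis over orchestration" sketch: the spec below is
# illustrative; the real synthesis pipeline is not described here.
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    mission: str                                 # the natural-language command
    workflow: list[str]                          # ordered, bounded steps
    constraints: list[str]                       # operational guardrails
    action_schema: dict[str, str] = field(default_factory=dict)

def synthesize_agent(command: str) -> AgentSpec:
    """Stand-in for offline synthesis: derive a fixed, auditable spec."""
    return AgentSpec(
        mission=command,
        workflow=["extract_fields", "validate_policy", "emit_structured_output"],
        constraints=["no free-form generation", "schema-validated outputs only"],
        action_schema={"invoice_id": "string", "amount": "decimal"},
    )

spec = synthesize_agent("Route incoming invoices to the right approver")
```

The point of fixing the spec ahead of time is that the online control loop becomes bounded and testable, rather than improvised per request.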

Specialized Computational Units

Our proposed architecture assigns these task-native models to narrow, bounded roles, such as extraction, routing, or verification, to minimize error propagation.

  • Task-Native Models as Interfaces: They map unstructured inputs into structured representations.
  • Externalized Computation: Substantive reasoning and retrieval are delegated to deterministic symbolic procedures and external knowledge bases.
  • Offline Frontier LLMs: We use high-capacity models offline as "oracles" to synthesize the schemas and rules that the online control loop then executes cost-effectively.
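The division of labor above can be sketched in a few lines (our illustration, not the paper's implementation): a task-native model acts only as an interface that maps unstructured text to a structured record, while the substantive decision is a deterministic rule checked against an external knowledge store.

```python
# Illustrative control loop: interface step + externalized computation.
# The "knowledge base" and policy limits below are invented for the example.
import re

APPROVAL_LIMITS = {"travel": 500.0, "hardware": 2000.0}  # external knowledge base

def extract(text: str) -> dict:
    """Interface step: unstructured request -> structured representation.
    (A task-native model would do this; regex stands in for the demo.)"""
    amount = float(re.search(r"\$([\d.]+)", text).group(1))
    category = "travel" if "flight" in text.lower() else "hardware"
    return {"category": category, "amount": amount}

def decide(record: dict) -> str:
    """Externalized computation: a deterministic, auditable policy check."""
    limit = APPROVAL_LIMITS[record["category"]]
    return "auto-approve" if record["amount"] <= limit else "escalate"

record = extract("Reimburse $420.00 for a flight to the Berlin summit")
print(decide(record))  # auto-approve: within the travel limit
```

Because the decision logic is symbolic, a failure surfaces as a wrong structured field, which is local, inspectable, and cheap to test, rather than as a free-form hallucination.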

The Path Forward for Industrial Intelligence.

To move from pilots to production, enterprises must move beyond "hallucination-prone" monolithic designs. By using task-native agents, organizations achieve:

  • Lower Costs: Replacing repeated high-cost LLM invocations with lower-cost specialized operations.
  • Improved Reliability: Deterministic components enforce governance, while failure surfaces remain local and testable.
  • Production Efficiency: Optimized latency and operational control for autonomous workloads.

Eka Labs · Position

The future of enterprise AI isn't one giant model stretched across every job; it's a modular ecosystem of task-native agents, each built for a clear responsibility in a physically constrained world.