How Ontology‑Driven Agents Enable Controllable Execution in Harness Engineering
The article analyzes why current AI agents often act unpredictably, defines a multi‑dimensional notion of safe and controllable execution, proposes an ontology‑driven semantic foundation with architecture constraints, context engineering, and feedback loops, and demonstrates the Knora implementation through concrete workflow examples.
1. From the Agent Boom to “Uncontrollable”
In 2024‑2025, agents became the primary form of enterprise AI, capable of planning, tool use, and multi‑step task execution. Yet in real‑world deployments they frequently misuse terminology, deviate from the intended reasoning logic, and produce results that violate corporate rules: they confidently do the wrong thing. The root cause is not model weakness but the absence of a "rule‑aware structure" that defines business boundaries.
2. Redefining “Safe Controllable Execution”
The article breaks down the safety‑controllability proposition into independent yet related dimensions, each with concrete engineering levers:
Permission & Isolation: who can do what, and may data cross domain boundaries? Addressed by RBAC/ABAC, API gateways, and data sandboxes.
Behavior Constraints: where do the agent's reasoning and tool invocation stop? Enforced via prompt constraints, tool whitelists, and ontology modeling.
Audit & Traceability: what was done, and can the decision path be reconstructed? Covered by operation logs, decision‑chain tracing, and explainability frameworks.
Exception Handling: how are errors downgraded or rolled back? Handled with circuit breakers, human‑review nodes, and idempotent design.
Result Validation: does the output obey business rules? Checked by rule engines, formal verification, and ontology‑based constraint checks.
Compliance Alignment: does the process satisfy industry regulations? Supported by compliance knowledge bases, approval flows, and auditable reports.
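As a minimal illustration of one lever in the Behavior Constraints dimension, a tool whitelist can be sketched as a per‑role lookup. The role and tool names below are hypothetical, not from the talk:

```python
# Hypothetical per-role tool whitelist: a Behavior Constraints lever.
ALLOWED_TOOLS = {
    "planner": {"search_orders", "read_inventory"},
    "executor": {"read_inventory", "update_order"},
}

def check_tool_call(agent_role: str, tool_name: str) -> bool:
    """Return True only if the agent's role whitelists this tool."""
    return tool_name in ALLOWED_TOOLS.get(agent_role, set())

assert check_tool_call("planner", "search_orders")
assert not check_tool_call("planner", "update_order")  # outside the whitelist
```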
The proposed ontology‑driven solution focuses on the “Behavior Constraints” and “Result Validation” dimensions, providing a semantic infrastructure layer rather than a set of ad‑hoc engineering patches.
3. Architecture Constraints: From “External Fences” to “Built‑in Skeleton”
Traditional engineering constraints work in simple scenarios but face three structural issues in complex business: rule explosion, natural‑language ambiguity, and implicit semantic links that hinder reuse. An ontology embeds business rules directly into the structural model, turning constraints into an intrinsic “skeleton.” Rules are no longer scattered in prompts; they become queryable, verifiable entities. Constraint enforcement occurs after the agent generates an intent but before execution: the intent is compared against the ontology, and any violation triggers an immediate retry rather than silent passage.
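The intercept point described above (intent generated, then checked against the ontology before execution, with a retry on violation) can be sketched as follows. The rule names and intent shape are assumptions for illustration:

```python
# Sketch of intent validation before execution, with retry on violation.
# Rule names and the intent dict shape are illustrative assumptions.
ONTOLOGY_RULES = {
    "no_negative_quantity": lambda i: i.get("quantity", 0) >= 0,
    "known_entity_type": lambda i: i.get("entity_type") in {"WorkOrder", "Inventory"},
}

def validate_intent(intent: dict) -> list[str]:
    """Return the names of all ontology rules the intent violates."""
    return [name for name, rule in ONTOLOGY_RULES.items() if not rule(intent)]

def execute_with_retry(agent_step, max_retries: int = 3):
    """Generate an intent, validate it, and retry on violation
    instead of letting a non-conforming action pass silently."""
    violations = []
    for attempt in range(max_retries):
        intent = agent_step(attempt)
        violations = validate_intent(intent)
        if not violations:
            return intent            # safed: safe to execute downstream
    raise RuntimeError(f"intent rejected: {violations}")
```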
4. Context Engineering: From “Memory Padding” to “Re‑architected Memory”
Agents lose context in long tasks, repeatedly asking for basic information. The underlying problem is linear text stacking without structure. By representing business data, processes, and relationships as a graph‑based ontology, the system can extract a relevant semantic sub‑graph for each task, inject only the necessary context, and guarantee consistency. This yields three concrete benefits:
Precise Retrieval Instead of Full Injection : Only the most relevant sub‑graph is loaded, eliminating context overflow.
Consistency Assurance : A unified semantic network resolves stale, conflicting, or redundant information before the agent reasons.
Cross‑Task Reuse : The same ontology serves multiple agents, avoiding repetitive context assembly.
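The sub‑graph extraction step behind these benefits can be sketched as a bounded breadth‑first expansion from the task's seed entities. The adjacency structure and node names below are illustrative assumptions:

```python
# Minimal sketch of per-task semantic sub-graph extraction:
# breadth-first expansion from seed entities, capped by hop depth.
from collections import deque

GRAPH = {  # node -> neighbouring nodes in a hypothetical business ontology
    "WorkOrder": ["Product", "Approver"],
    "Product": ["BOM"],
    "Approver": [],
    "BOM": ["Supplier"],
    "Supplier": [],
}

def extract_subgraph(seeds: list[str], max_depth: int = 1) -> set[str]:
    """Return only nodes within max_depth hops of the seeds,
    instead of injecting the whole graph into the context."""
    visited = set(seeds)
    queue = deque((s, 0) for s in seeds)
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for neighbour in GRAPH.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, depth + 1))
    return visited
```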
The approach also bridges symbolic knowledge‑graph reasoning (deterministic but limited) with LLM reasoning (flexible but opaque), assigning deterministic constraints where the ontology covers the domain and marking LLM‑generated conclusions with confidence states where it does not.
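The split between deterministic ontology‑backed conclusions and confidence‑tagged LLM conclusions could be represented as simply as this; the field names are assumptions, not Knora's actual schema:

```python
# Sketch of marking conclusions by provenance: deterministic where the
# ontology covers the domain, confidence-tagged where only the LLM does.
from dataclasses import dataclass

@dataclass
class Conclusion:
    statement: str
    source: str          # "ontology" or "llm"
    confidence: float    # 1.0 for deterministic ontology-derived facts

def mark(statement: str, from_ontology: bool, llm_confidence: float = 0.0) -> Conclusion:
    """Ontology-derived facts are certain; LLM conclusions carry a score."""
    if from_ontology:
        return Conclusion(statement, "ontology", 1.0)
    return Conclusion(statement, "llm", llm_confidence)
```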
5. Feedback Loop: From “Subjective Evaluation” to “Traceable Verification”
Current feedback mechanisms rely on a secondary evaluator model, which can be fooled by superficially plausible outputs and lacks business‑level judgment. The ontology enables objective verification: business judgments (e.g., quota limits, prerequisite checks) are codified as rules and can be automatically validated. Hard constraints are enforced directly; soft constraints are combined with LLM or human review. Each verification result is traceable to specific ontology nodes, providing auditable evidence required in regulated industries.
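The hard/soft split described above can be sketched as two rule sets with different consequences. The rule contents (a quota limit, a supplier preference) are hypothetical examples:

```python
# Sketch of hard vs. soft constraint verification: hard rules reject
# outright, soft rules are flagged for LLM or human review.
HARD_RULES = {
    "quota_limit": lambda r: r["quantity"] <= r["quota"],
}
SOFT_RULES = {
    "prefer_same_supplier": lambda r: r.get("supplier") == r.get("previous_supplier"),
}

def verify(result: dict) -> dict:
    """Return a traceable verdict naming the rule nodes that failed."""
    hard_failures = [n for n, rule in HARD_RULES.items() if not rule(result)]
    review_flags = [n for n, rule in SOFT_RULES.items() if not rule(result)]
    return {
        "accepted": not hard_failures,    # hard constraints enforced directly
        "hard_failures": hard_failures,   # each traceable to a specific rule
        "needs_review": review_flags,     # soft constraints go to review
    }
```

Because every verdict lists the rule names that fired, each rejection is traceable rather than a bare "the evaluator disliked it".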
The loop also drives ontology evolution: agents expose uncovered concepts and frequent error paths, prompting ontology updates and confidence‑driven human corrections, turning the system into a self‑improving knowledge base.
6. From Technical Controllability to Business Controllability – The Knora Path
Knora, the platform built by 悦点科技, implements this methodology as a layered architecture:
Ontology Layer (Knowledge Base) : Stored as a labeled‑property graph (LPG) with five core concepts – Entity, Relation, Event, Action, Logic – each defined in a meta‑schema.
Cognition Engine (Translation & Arbitration) : Extracts relevant knowledge from the ontology before agent activation, injects it into the reasoning context, and after generation validates the result against the ontology, rejecting non‑conforming outputs.
Agent Execution Layer : Executes user tasks, calls tools, and produces results, but tool selection, trigger conditions, and workflow are dictated by the ontology rather than ad‑hoc prompts.
Data flow: User task → Cognition Engine extracts sub‑graph → Agent reasons within this context → Result returned to Cognition Engine for ontology validation → If passed, output is released; otherwise, a structured error report is generated and an approval task is created.
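The data flow above can be sketched as a simple pipeline; every function body here is a placeholder assumption standing in for Knora's actual components:

```python
# Placeholder sketch of the Knora data flow described above.
def cognition_extract(task: str) -> dict:
    """Cognition Engine: pull the relevant sub-graph for the task."""
    return {"task": task, "context": ["WorkOrder", "approval_threshold"]}

def agent_reason(bundle: dict) -> dict:
    """Agent reasons only within the injected context."""
    return {"intent": "update_quantity", "context": bundle["context"]}

def cognition_validate(result: dict) -> tuple[bool, dict]:
    """Validate the result against the ontology before release."""
    ok = "approval_threshold" in result["context"]
    return ok, result if ok else {"error": "missing approval context"}

def run(task: str) -> dict:
    bundle = cognition_extract(task)
    result = agent_reason(bundle)
    ok, out = cognition_validate(result)
    if ok:
        return out  # released
    return {"blocked": True, "report": out, "approval_task_created": True}
```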
Concrete example – work‑order quantity change:
User requests to change WO‑2026‑0312 from 500 to 800.
Cognition Engine queries the ontology, finds the work order entity, its current state, and the defined approval threshold (20%).
Change magnitude (60%) exceeds the threshold, triggering an Action rule requiresApproval = true and establishing a relation chain to the approvers.
Constraint check fails because the required approvedBy relation is missing; the system blocks the write, generates a structured error, and creates an approval task.
After human approval, the approvedBy relation is added, the validation passes, and the change is committed.
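The walk‑through above reduces to a small piece of runnable logic. The 20% threshold and the requiresApproval/approvedBy names follow the example; everything else is an assumed shape:

```python
# The work-order change example as runnable validation logic.
APPROVAL_THRESHOLD = 0.20  # changes above 20% require approval

def change_magnitude(old_qty: int, new_qty: int) -> float:
    return abs(new_qty - old_qty) / old_qty

def validate_change(old_qty: int, new_qty: int, relations: set[str]) -> dict:
    """Block the write unless an approvedBy relation exists when required."""
    magnitude = change_magnitude(old_qty, new_qty)
    requires_approval = magnitude > APPROVAL_THRESHOLD
    if requires_approval and "approvedBy" not in relations:
        return {"committed": False, "requiresApproval": True,
                "error": f"change of {magnitude:.0%} needs an approvedBy relation"}
    return {"committed": True, "requiresApproval": requires_approval}

# 500 -> 800 is a 60% change: blocked until approval adds approvedBy
blocked = validate_change(500, 800, relations=set())
approved = validate_change(500, 800, relations={"approvedBy"})
```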
Knora also addresses ontology cold‑start by combining automated high‑confidence mapping (e.g., field‑to‑attribute alignment) with human‑in‑the‑loop verification for ambiguous concepts, gradually reducing manual effort.
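The automated high‑confidence mapping step could look like the following string‑similarity sketch; the attribute names and the 0.8 cutoff are illustrative assumptions, not Knora's actual matcher:

```python
# Sketch of cold-start field-to-attribute mapping: auto-map on high
# similarity, defer ambiguous fields to a human-review queue.
from difflib import SequenceMatcher

ONTOLOGY_ATTRS = ["order_quantity", "order_status", "approver_name"]

def map_field(db_field: str, cutoff: float = 0.8) -> dict:
    """Auto-map when similarity clears the cutoff; otherwise flag
    the field for human-in-the-loop verification."""
    scored = [(SequenceMatcher(None, db_field, attr).ratio(), attr)
              for attr in ONTOLOGY_ATTRS]
    score, best = max(scored)
    if score >= cutoff:
        return {"mapped_to": best, "confidence": score}
    return {"needs_human_review": True, "best_guess": best, "confidence": score}
```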
Real‑world deployments include energy‑transport rail inspection (reducing a 30‑person, 7‑day reporting process to a 3‑person, 30‑minute automated workflow, >70× efficiency) and electronics manufacturing quality‑traceability, demonstrating tangible business impact.
Conclusion
Enterprise AI is at a crossroads: either continue stacking tools and prompts without clear boundaries, or first construct a structured business ontology that defines the agent’s operating map. The latter path yields agents that know their limits, the rules they must obey, and the justification for every decision. Such a semantic foundation becomes a durable competitive moat, evolving with each agent execution and business iteration.
DataFunSummit