FINRA’s recent oversight materials make this issue concrete. Its 2026 Annual Regulatory Oversight Report includes a dedicated section on continuing and emerging trends in Generative AI, discussing use cases, governance expectations, and emerging risks tied to GenAI deployments (FINRA 2026 Annual Regulatory Oversight Report – GenAI). Regulatory Notice 24‑09 reminds firms that using Generative AI and Large Language Models does not change their regulatory obligations and emphasizes technology‑neutral application of rules on supervision, testing, monitoring, and recordkeeping (Regulatory Notice 24‑09). Across these materials, FINRA emphasizes accuracy, supervision, governance, testing, monitoring, documentation, human review, and controls around production AI use.
The problem is structural. Better prompts alone will not close the trust gap. Larger models alone will not solve it either.
To use Generative AI safely in banking, institutions need a Persistent Context Layer: a governed, reusable, interconnected layer that grounds AI in the institution’s actual data, entities, relationships, permissions, provenance, and analytical logic.
That is the role DataWalk software is built to play.
DataWalk provides a Commercial Off‑The‑Shelf (COTS) Contextual Intelligence Platform that turns fragmented banking data into persistent, governed context for investigations, analytics, decisioning workflows, applications, and AI agents. Instead of asking an LLM to infer reality from isolated files or retrieved text chunks, DataWalk grounds AI in a connected intelligence model that can be searched, traversed, audited, and extended over time.
Why Standard RAG Is Not Enough for Banking Investigations
The default enterprise response to hallucinations has been Retrieval‑Augmented Generation, or RAG.
Standard RAG converts documents into vector embeddings, stores them in a vector database, retrieves semantically similar chunks at query time, and passes those chunks to an LLM. For simple knowledge retrieval, this can be useful. A chatbot answering questions from a policy manual, procedure document, or product guide may perform well with RAG.
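To make the standard RAG loop concrete, here is a minimal, self-contained sketch. The `embed()` function below is a deliberately toy stand-in (word-length vectors, not a real embedding model), and the chunk texts are invented for illustration; a production system would call an embedding model and a vector database instead.

```python
# Minimal sketch of the standard RAG loop: embed chunks, retrieve the most
# similar ones at query time, and build a prompt for an LLM.
# embed() is a toy stand-in, NOT a real embedding model.

def embed(text: str) -> list[float]:
    # Toy "embedding": the lengths of the first 8 words.
    return [float(len(word)) for word in text.split()][:8]

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity over zero-padded, equal-length vectors.
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank document chunks by embedding similarity to the query.
    qv = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(qv, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Wire transfers above 10,000 EUR require enhanced due diligence.",
    "The cafeteria opens at 8:00 on weekdays.",
]
context = retrieve("What are the due diligence rules for large wires?", chunks, k=1)
prompt = "Answer using only this context:\n" + "\n".join(context)
# The prompt would then be passed to an LLM for generation.
```

Note what this pipeline returns: text chunks ranked by similarity, nothing more. There is no notion of entities, relationships, or provenance anywhere in the loop.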
But banking investigations are rarely simple document‑retrieval problems.
- They are relationship problems.
- They are multi‑hop reasoning problems.
- They are governance and auditability problems.
A financial crime investigator may need to trace funds across multiple accounts, shell companies, beneficial owners, jurisdictions, counterparties, devices, addresses, alerts, and previous cases.
A vector database can retrieve documents that mention a name. But it does not inherently know that Account A is owned by Person B, who controls Company C, which transferred funds to Entity D, which is linked to an earlier sanctions concern. Similarity is not the same as relationship understanding.
This is where standard RAG reaches its limits. It retrieves relevant text, but it does not provide a persistent, governed model of how entities are connected, how relationships were derived, which permissions apply, what provenance supports each fact, and how a conclusion can be reproduced.
For banking AI, that difference is critical.
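The difference between similarity and relationship understanding can be shown with a toy entity graph. The entities and edges below are illustrative assumptions, not real DataWalk structures; the point is that the Account A to sanctions-concern link emerges only by traversing relationships, something no single retrieved document states.

```python
# Toy sketch of multi-hop relationship reasoning over an entity graph.
# Entity names and edges are invented for illustration.

from collections import deque

# Directed edges: (source, relationship, target)
edges = [
    ("Account A", "owned_by", "Person B"),
    ("Person B", "controls", "Company C"),
    ("Company C", "transferred_funds_to", "Entity D"),
    ("Entity D", "linked_to", "Sanctions Concern 2019"),
]

graph: dict = {}
for src, rel, dst in edges:
    graph.setdefault(src, []).append((rel, dst))

def trace(start: str, target: str):
    """Breadth-first search returning the relationship path, if any."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for rel, nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None

path = trace("Account A", "Sanctions Concern 2019")
for src, rel, dst in path:
    print(f"{src} --{rel}--> {dst}")
```

Each hop in the returned path is an explicit, inspectable relationship, which is exactly what a similarity search over text chunks cannot provide.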
| Capability | Standard RAG | Persistent Context Layer |
|---|---|---|
| Reasoning Type | Limited (semantic similarity, text-chunk retrieval) | High (multi-hop relationship reasoning) |
| Data Logic | Isolated documents and extracts | Interconnected entities, events, and history |
| Auditability | Black-box "inference" from language | Deterministic logic and source-backed evidence |
| Scale Efficiency | One-off knowledge bases for each use case | Reusable foundation for all AI agents/apps |
Agentic AI Can Increase the Very Risk Banks Are Trying to Control
Banks are moving from chatbots toward agentic AI: systems that can plan, act, call tools, inspect data, and perform multi‑step workflows.
This increases both the opportunity and the risk.
A chatbot that hallucinates may produce a bad answer. An AI agent that hallucinates may take a bad action.
FINRA’s 2026 GenAI materials also discuss AI agents and related concerns such as autonomy, scope of authority, auditability, transparency, data sensitivity, domain knowledge, and the importance of human validation, explicitly flagging autonomous agents as an emerging risk area that demands new oversight mechanisms.
Forrester’s article FinovateEurope 2026: From AI Hype To Bank‑Ready Execution points in the same direction. It describes a shift from AI experimentation to bank‑ready execution, where stronger demos emphasized controlled autonomy, deterministic process logic combined with LLM reasoning, traceability to source data, and mechanisms for human inspection, validation, and override (FinovateEurope 2026 blog).
The lesson is clear: banking agents need more than access to documents.
They need access to governed contextual data. They need to know which entities are connected, which relationships are valid, which data they are allowed to access, which source supports each fact, which analytical functions are approved, and when a human must review the result.
Without that foundation, agentic AI can scale the very risk banks are trying to control.
It also creates an integration problem. Without a shared context layer, every new agent must be connected separately to every relevant data source, tool, and permission model. A Persistent Context Layer, exposed through governed APIs and Model Context Protocol (MCP), reduces this N × M integration tax by giving agents one controlled way to access trusted enterprise context.
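The N × M integration tax can be seen with simple arithmetic. The agent and source counts below are assumed figures for illustration:

```python
# Back-of-the-envelope: integration points with and without a shared context layer.
agents, sources = 10, 25

# Point-to-point: every agent wired separately to every data source.
point_to_point = agents * sources

# Shared layer: each agent and each source connects once to the context layer.
via_context_layer = agents + sources

print(point_to_point, via_context_layer)
```

With these assumed numbers, point-to-point wiring needs 250 integrations versus 35 through a shared layer, and the gap widens with every agent or source added.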
Figure: Scaling Intelligence vs. Scaling Risk: why agentic AI demands a Persistent Context Layer. The diagram plots AI architecture evolution against potential business impact, contrasting ungrounded agents (risk of taking "bad actions" based on hallucinations) with grounded agents (traceable logic and scoped authority via the Context Layer), and marks AI agents as carrying a high governance requirement. Estimate note: the "High Governance" requirement reflects Gartner's 2026 finding that successful AI leaders invest disproportionately in data foundations.
The Persistent Context Layer
A Persistent Context Layer is the foundation that allows AI to operate on governed enterprise reality instead of temporary extracts, disconnected documents, or one‑off knowledge bases.
It connects data to real‑world entities, relationships, events, permissions, provenance, history, business definitions, analytical functions, and operational workflows.
The context is persistent because it is not rebuilt from scratch for every use case. It is maintained, governed, reused, and extended as new sources, questions, typologies, workflows, and AI initiatives appear.
Gartner’s 2026 research points in the same direction: recaps of the Data & Analytics Summit emphasize that organizations reporting successful AI initiatives invest disproportionately in data and analytics foundations—data quality, governance, context, and AI‑ready skills—before scaling agentic AI (Gartner D&A Summit 2026 recap). Gartner also notes that AI governance and stewardship are moving rapidly up the priority list as boards and regulators focus on explainability and control (AI governance & stewardship – Gartner Hype Cycle).
For banking, this means AI should not operate on a pile of retrieved documents. It should operate on a governed contextual data layer that knows the difference between a customer, an account, a counterparty, a transaction, an alert, a beneficial owner, a device, an address, a case, and the relationships between them.
That is the difference between AI that sounds plausible and AI that can be checked.
How DataWalk Grounds AI in Context
DataWalk software creates a Persistent Context Layer through an ontology‑first, single‑state architecture.
The ontology maps fragmented banking data to real‑world business concepts such as customers, accounts, transactions, counterparties, companies, alerts, cases, devices, addresses, documents, and relationships.
The single‑state architecture keeps DataWalk’s core relational, search, and graph capabilities aligned on one schema, one data model, one database, and one compute layer.
The practical benefit is consistency and faster adaptation.
With DataWalk, search, graph traversal, scoring, investigation logic, application workflows, and AI access operate on the same governed intelligence model. This reduces the need to synchronize context across separate search indexes, graph databases, data marts, feature stores, and application layers. It also reduces the risk that different tools operate on different versions of reality.
When an AI agent uses DataWalk, it does not have to rely only on semantic similarity across text chunks. It can use governed APIs, Model Context Protocol (MCP), graph traversal, analytical functions, and approved logic to inspect the actual context around an entity, transaction, alert, or network.
In this model, the agents or LLMs are not expected to invent or infer the answer from language alone. They act as an interface and reasoning layer over governed contextual data. The facts, relationships, calculations, and evidence come from deterministic operations on the DataWalk intelligence model: graph traversal, approved analytical functions, reproducible queries, and governed access to source‑backed data. Grounded this way, an agent can reliably answer questions such as:
- Who or what is this connected to?
- What relationships explain this risk?
- Which transactions are part of the pattern?
- What changed when a new source was added?
- Which source supports this conclusion?
- Which evidence should a human reviewer inspect?
This does not remove the need for governance, testing, monitoring, or human oversight. But it materially reduces hallucination risk by grounding AI outputs in governed, connected, auditable context.
From Black Box AI to Auditable AI-Assisted Workflows
The goal is not to let AI improvise conclusions.
The goal is to let AI assist investigations and compliance workflows while remaining grounded in deterministic, inspectable logic.
In a DataWalk‑based workflow, an AI agent can generate a high‑level plan, call approved analytical functions, traverse governed relationships, calculate aggregations, retrieve supporting evidence, and produce a structured result for human review.
The key difference is that the LLM does not become the system of record or the source of truth. It helps interpret the task, orchestrate approved operations, and present results. The underlying knowledge is retrieved from the governed context layer, and the reasoning steps are tied back to data, relationships, logic, and lineage.
A reviewer can inspect which entities were traversed, which sources were used, which rules or analytical functions were applied, and which evidence supports the result.
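The workflow described above can be sketched as follows. Everything here is a hedged illustration, not DataWalk's actual API: the transaction records, the approved-function registry, and the audit log are invented stand-ins showing the pattern of deterministic, logged operations feeding a human-reviewable result.

```python
# Sketch of an auditable AI-assisted workflow: the LLM plans, but every fact
# comes from an approved deterministic function, and every call is logged.
# Function names and data are illustrative assumptions, not a real API.

from dataclasses import dataclass, field

@dataclass
class AuditLog:
    steps: list = field(default_factory=list)

    def record(self, function: str, args: dict, result) -> None:
        self.steps.append({"function": function, "args": args, "result": result})

# Source-backed stand-in data a real system would fetch from the context layer.
TRANSACTIONS = [
    {"id": "T1", "account": "ACC-1", "amount": 9500, "source": "core_banking"},
    {"id": "T2", "account": "ACC-1", "amount": 9800, "source": "core_banking"},
]

def sum_amounts(account: str) -> int:
    # Deterministic aggregation over governed data.
    return sum(t["amount"] for t in TRANSACTIONS if t["account"] == account)

def supporting_evidence(account: str) -> list:
    # Evidence list tying the result back to source records.
    return [f'{t["id"]} ({t["source"]})' for t in TRANSACTIONS if t["account"] == account]

# Registry of approved analytical functions; anything else is rejected.
APPROVED = {"sum_amounts": sum_amounts, "supporting_evidence": supporting_evidence}

def run_step(log: AuditLog, name: str, **kwargs):
    if name not in APPROVED:
        raise PermissionError(f"{name} is not an approved function")
    result = APPROVED[name](**kwargs)
    log.record(name, kwargs, result)
    return result

log = AuditLog()
total = run_step(log, "sum_amounts", account="ACC-1")
evidence = run_step(log, "supporting_evidence", account="ACC-1")

# Structured result handed to a human reviewer, with the full audit trail.
report = {"finding": f"ACC-1 aggregated amount: {total}", "evidence": evidence, "audit": log.steps}
```

The reviewer inspects `report`: the finding, the evidence records behind it, and the exact sequence of approved operations that produced it, rather than a free-form LLM narrative.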
That is what banks need from AI:
- not just fluency,
- not just speed,
- not just automation,
- but traceability, explainability, reproducibility, and control.
This turns Generative AI from an ungrounded risk multiplier into a governed analytical interface over trusted enterprise contextual data.
Why This Matters for Banking Leaders
For Chief Data Officers, AI strategy leaders, digital transformation leaders, financial crime executives, and risk teams, the central question is no longer:
Can we use Generative AI?
The real question is:
Can we trust Generative AI inside regulated banking workflows?
DataWalk’s Persistent Context Layer helps banks move from AI experimentation to governed AI execution by providing:
- Lower hallucination risk — AI outputs are grounded in governed entities, relationships, provenance, and analytical logic.
- Better auditability — investigators and reviewers can trace conclusions back to source data, relationship paths, evidence, and approved functions.
- More controlled agentic AI — agents operate through governed access, scoped authority, approved tools, and human review rather than open‑ended improvisation.
- Higher reuse across AI initiatives — each new AI assistant, workflow, or agent can reuse the same context layer instead of building another temporary knowledge base.
- Faster time‑to‑intelligence — teams start from connected context rather than raw data, flat documents, manual joins, and disconnected extracts.
- Stronger governance — data permissions, provenance, history, and business definitions are part of the context layer rather than being recreated project by project.
As AI‑first operating models evolve toward smaller cross‑functional teams and decision pods, Persistent Context also reduces the dependency on large, project‑specific data‑engineering efforts by giving business and technical teams a reusable foundation for governed AI work.
For banks, this is not only a technology issue. It is a trust, risk, and compliance issue.
Conclusion: Generative AI Needs Context It Can Trust
Generative AI cannot operate safely in a vacuum.
Without a Persistent Context Layer, banks risk scaling AI outputs that are difficult to verify, explain, or defend. Better prompts and larger models may improve outputs, but they do not solve the deeper problem: AI needs governed access to the institution’s real entities, relationships, permissions, provenance, and analytical logic.
DataWalk provides that foundation.
By combining ontology‑first modeling, graph analytics, search, analytical functions, AI access, and a single‑state architecture, DataWalk helps banks ground Generative AI and AI agents in persistent, governed, auditable context.
In this model, the LLM does not replace governed analytics or become the source of truth. It works over a context layer where relationships, calculations, evidence, and provenance can be inspected and reproduced.
The shift is simple:
- From ungrounded AI to grounded AI.
- From one‑off RAG pilots to reusable context infrastructure.
- From plausible answers to auditable intelligence.
For banking leaders, the path forward is clear:
Do not build AI on disconnected extracts. Build it on Persistent Context.