Masterclass: Here’s Martyn!

Martyn Rhisiart Jones and the Goodstrat editorial team, Madrid, 3rd February 2026

Introduction

The following is the redacted transcript of a conversation between the distinguished Sir Afilonius Rex of Cambriano Energy and the cordial Martyn Rhisiart Jones of goodstrat.com.

The informal session took place before an invited audience at the Welsh Academy’s alternative summer conference of July 2023 and featured a lively question-and-answer session with audience input.

According to our reliable sources, “The BBC, RTE and RTVE broadcast the session.”

The Session

The music fades, the stage lights come up, and the microphones are prepared.

Sir Afilonius: Welcome to this most “unteddish” of talks. I am your host, and it’s great to see the strong interest in the topics we’re examining today. My guest, who probably needs no introduction, is the son of Wales, Martyn Rhisiart Jones, and I will be asking him some questions that hopefully will generate light, heat, and greater understanding.

Welcome, Martyn.

Martyn: Thank you for having me, Sir Afilonius. It’s a pleasure to be here.

Sir Afilonius: Martyn, the first question we have for you is from Florence Welsch of the University of Oxford: What emerging architectural paradigms, beyond traditional relational databases and knowledge graphs, do you foresee revolutionising how AI systems integrate and query heterogeneous data sources at a planetary scale?

Martyn: [Ironically] Ah, a softball for starters?

Sir Afilonius: [With a smile] Start as we mean to proceed, eh?

Martyn: Look, if we’re being honest, the old-school dream of shoving everything into one giant relational database, or even a perfectly mapped knowledge graph, has hit a wall. At a planetary scale, the sheer messiness of data, the diversity of laws and languages involved, and the speed at which it all moves mean we need something more fluid.

Sir Afilonius: And where are we going with this?

Martyn: That’s an excellent question. I really believe we’re moving toward a world of “Cognitive Liquidity.” Instead of trying to force data into rigid tables, we’re seeing the rise of Neural Databases. Think of this as the database actually “learning” the data rather than just storing it. It turns everything, that is, satellite feeds, voice notes, spreadsheets, into a shared mathematical language (a latent space) where an AI doesn’t need a map to find the connection; it just senses the proximity of the concepts.
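The "latent space" idea can be sketched in a few lines. The embeddings below are invented placeholders (a real neural database would derive them from an encoder model); the point is that proximity in the shared space, not an explicit schema, surfaces the connection:

```python
import math

# Toy "latent space": hand-made vectors standing in for learned embeddings.
# The keys and values here are illustrative, not from any real system.
EMBEDDINGS = {
    "satellite_feed:drought_2023": [0.9, 0.1, 0.0],
    "voice_note:farmer_report":    [0.8, 0.2, 0.1],
    "spreadsheet:q3_revenue":      [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: how close two concepts sit in the shared space."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query_key):
    """Rank every other item by proximity, with no schema mapping at all."""
    q = EMBEDDINGS[query_key]
    others = [k for k in EMBEDDINGS if k != query_key]
    return sorted(others, key=lambda k: cosine(q, EMBEDDINGS[k]), reverse=True)

print(nearest("satellite_feed:drought_2023"))
# The farmer's voice note ranks above the revenue spreadsheet: the system
# "senses" conceptual proximity between the satellite feed and the report.
```
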

Sir Afilonius: [Looking thoughtful] And what do we mean by cognitive liquidity in this context?

Martyn: Cognitive liquidity is a fancy term, but it simply means how easily you can shift attention, think clearly, and make decisions without mental friction.

High cognitive liquidity means your mind feels flexible, focused, and responsive.
Low cognitive liquidity implies a mental drag, overload, or being “stuck” in thought.

In short, it’s the ease with which your thinking flows.

Sir Afilonius: I see. Are there any sticking points?

Martyn: Sure, there are. You can’t store all that in a single location due to digital sovereignty. That’s where the Federated Data Fabric and Service Mesh come in. It’s less like a library and more like a global nervous system. We aren’t moving the data anymore; we’re moving the “questions.” You have this Agentic Middleware, essentially a swarm of specialised AI agents, that goes out into the world, navigates different cloud providers and legal jurisdictions, and synthesises an answer without the raw data ever leaving its home.

Sir Afilonius: I see!

Martyn: The real kicker, though, is shifting away from the idea that data is either “right or wrong.” At this scale, data is noisy. The next big paradigm is the Probabilistic Data Fabric. Instead of a database returning a “Yes” or “No,” the architecture itself handles uncertainty. It tells the AI, “I’m 80% sure this is the answer based on these three conflicting sources.” This is huge because it finally gives AI a way to handle contradictions without just making things up.
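The “80% sure, three conflicting sources” behaviour can be sketched as a simple weighted vote. The source names and reliability weights below are invented for illustration; a real probabilistic fabric would derive them from provenance and calibration data:

```python
from collections import defaultdict

def probabilistic_answer(sources):
    """Aggregate conflicting claims into (answer, confidence).

    `sources` is a list of (claimed_value, source_reliability) pairs.
    Instead of a bare "Yes"/"No", we return the weighted winner plus
    the fraction of total evidence that supports it.
    """
    weight = defaultdict(float)
    for value, reliability in sources:
        weight[value] += reliability
    total = sum(weight.values())
    best = max(weight, key=weight.get)
    return best, weight[best] / total

# Three conflicting sources for a power plant's capacity (hypothetical):
answer, confidence = probabilistic_answer([
    ("450 MW", 0.9),   # sensor telemetry
    ("450 MW", 0.7),   # regulatory filing
    ("500 MW", 0.4),   # outdated press release
])
print(answer, round(confidence, 2))  # "450 MW" with confidence 0.8
```

The key design choice is that contradiction is surfaced as a confidence score rather than silently resolved, which gives a downstream AI something honest to reason with.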

Sir Afilonius: Fascinating. But weren’t certainty factors and probabilities too complex for the average eighties AI botherer?

Martyn: And they will be again.

Sir Afilonius: I see.

Martyn: We’re essentially moving from building “filing cabinets” to creating an “atmosphere” of information that AI breathes in. Don’t forget that a lot of this is highly tendentious, immensely speculative and cloyingly puerile.

Sir Afilonius: Fascinating. Thank you. Moving swiftly along… The next question is from Kate Bush of Imperial College. In an era of exponential data growth from IoT, social platforms, and generative AI, what governance frameworks best balance accessibility, privacy, and bias mitigation without stifling real-time decision-making?

Martyn: The struggle today is that traditional “red tape” governance is just too slow for real-time AI. To keep up, we have to stop treating policy as a document and start treating it as code.

The most promising shift is toward Computational Governance. Imagine if data carried its own “digital ruleset” that automatically enforces privacy and access rights whenever an AI attempts to access it. This is paired with Differential Privacy, where we use math to add “noise” to data, allowing an AI to learn global patterns from IoT or social feeds without ever seeing an individual’s private details.
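The Differential Privacy idea is concrete enough to sketch: clamp each reading to a known range, compute the aggregate, and add Laplace noise scaled to how much any single individual could shift the result. The heart-rate numbers and bounds below are invented for illustration:

```python
import math
import random

def dp_mean(values, lower, upper, epsilon):
    """Differentially private mean of bounded values (Laplace mechanism).

    Each value is clamped to [lower, upper]; noise scaled to the query's
    sensitivity is added so no individual reading is exposed.
    """
    n = len(values)
    clamped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clamped) / n
    # One person joining or leaving shifts the mean by at most this much:
    sensitivity = (upper - lower) / n
    # Sample Laplace(0, sensitivity/epsilon) via the inverse CDF:
    u = random.random() - 0.5
    noise = -(sensitivity / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_mean + noise

# Heart-rate readings from IoT wearables; the analyst only ever sees
# the noisy aggregate, never any individual's number.
readings = [72, 80, 65, 90, 77, 68, 84, 74]
print(dp_mean(readings, lower=40, upper=200, epsilon=1.0))
```

Smaller `epsilon` means more noise and stronger privacy; the AI still learns the global pattern, but any single wearable’s reading is deniable.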

Sir Afilonius: And bias?

Martyn: To handle bias without hitting the brakes, we’re moving toward Dynamic Guardrails. Instead of annual audits, we run “supervisor models” that monitor the primary AI in real time. If the data drifts toward a biased outcome, the supervisor applies a correction layer immediately, before the decision is made.

Essentially, we’re moving from an “inspect, detect, and correct” model to one where the architecture itself makes it mathematically impossible to break the rules.

Sir Afilonius: That’s absolutely fabulous! I think…

Martyn: Yes, and a whole new world of worlds.

Sir Afilonius: The next question is from Melanie Safka of New College, who asks, “How can knowledge management systems evolve to better handle the ‘hallucination’ problem in large language models, perhaps through hybrid symbolic-neural approaches that enforce verifiability and provenance tracking?”

Martyn: Cute! The fix for hallucinations isn’t just “more data”; it’s about giving AI a grounding wire. Right now, LLMs are like brilliant storytellers with no memory of where they read a fact. To fix this, we’re moving toward Neuro-Symbolic RAG (Retrieval-Augmented Generation).

Sir Afilonius: Neuro-Symbolic RAG is a retrieval-augmented generation approach that combines neural models (LLMs) with symbolic reasoning. Right?

Martyn: Right. Exactly. Instead of the AI just guessing the next word, it’s forced to consult a Symbolic Knowledge Base, a “source of truth” made of complex logic and verified facts. Think of the LLM as the “linguistic engine” and the symbolic layer as the “fact-checker.” Before the AI speaks, it must map its reasoning to a structured graph, ensuring every claim is anchored to a real-world entity.
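The “fact-checker” gate can be sketched as a lookup against a store of verified triples. The knowledge base below is a toy stand-in (real systems use large curated graphs), but the control flow is the point: unanchored claims get flagged, not spoken:

```python
# Toy symbolic knowledge base of verified (subject, relation, object) triples.
# Entries are illustrative placeholders for a curated knowledge graph.
KNOWLEDGE_BASE = {
    ("cardiff", "capital_of", "wales"),
    ("wales", "part_of", "united_kingdom"),
}

def verify_claims(claims):
    """Gate the language model's draft claims against the symbolic layer.

    Every claim must map to a verified triple; anything unanchored is
    flagged instead of being passed through to the user.
    """
    verified, flagged = [], []
    for claim in claims:
        (verified if claim in KNOWLEDGE_BASE else flagged).append(claim)
    return verified, flagged

# Two claims extracted from an LLM draft: one grounded, one hallucinated.
ok, bad = verify_claims([
    ("cardiff", "capital_of", "wales"),
    ("swansea", "capital_of", "wales"),   # hallucination: gets flagged
])
print("verified:", ok)
print("flagged :", bad)
```
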

Sir Afilonius: Echoes of Noam Chomsky?

Martyn: Well… We’re also baking Immutable Provenance into the architecture using specialised metadata. Every piece of information the system retrieves carries a “digital receipt”: an auditable paper trail. If the AI can’t trace a statement back to a verified “parent” document, the system flags it or blocks the output entirely.

Sir Afilonius: Immutable Provenance?

Martyn: Immutable provenance refers to a permanent, tamper-proof record of the origin and history of data or assets.
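A minimal sketch of that tamper-proof trail is a hash chain: each provenance record’s hash covers the record before it, so editing any link in the history breaks every link after it. The payload strings are invented examples:

```python
import hashlib
import json

def record(chain, payload):
    """Append a provenance record whose hash covers the previous link."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {"payload": payload, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)
    return chain

def is_intact(chain):
    """Re-derive every hash; any tampering breaks the chain."""
    prev = "genesis"
    for entry in chain:
        expected = hashlib.sha256(
            json.dumps({"payload": entry["payload"], "prev": entry["prev"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = []
record(chain, "ingested: satellite_report_v1")
record(chain, "summarised by model-x")
print(is_intact(chain))            # True: the receipts check out
chain[0]["payload"] = "edited!"    # tamper with history
print(is_intact(chain))           # False: the chain is visibly broken
```
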

Sir Afilonius: Okay.

Martyn: Essentially, we’re evolving from models that “sound right” to systems that are “verifiable by design.” We stop asking the AI to memorise the world and start teaching it to cite its sources in real-time.

Sir Afilonius: Thank you! This next question is from Vanessa Bell from the LSE. What are the most underrated risks in information architecture for global enterprises, such as over-reliance on cloud silos or the erosion of data sovereignty in cross-border regulations like GDPR and emerging AI legislation?

Martyn: Vanessa, that is a sharp question. While everyone is focused on the obvious “big” risks, such as data breaches, I think the most under-recognised danger is Architectural Ossification.

Large enterprises are effectively “marrying” their data to specific cloud providers. We call this Cloud Lock-in 2.0, where it’s not just about storage but also about the proprietary AI tools built on top of it. If a provider changes their terms or a geopolitical rift occurs, moving that “intelligence” is nearly impossible. You’re not just renting a server; you’re outsourcing your corporate memory.

Sir Afilonius: And there must be much more to it than that.

Martyn: Sure. The second “sleeper” risk is the Sovereignty Paradox: the idea that a system or entity can maintain complete control (sovereignty) only by giving up some control to others or to external systems. We talk about GDPR as a hurdle, but the real threat is “Regulatory Fragmentation.”

Sir Afilonius: Which is what exactly?

Martyn: In simple terms, Regulatory Fragmentation is the “splinternet” of data laws. It’s what happens when different countries or regions create their own unique, and often conflicting, rules for how data can be handled, stored, and processed by AI.

Sir Afilonius: Tell me more about it.

Martyn: As countries bake AI ethics directly into their data laws, we’re seeing the rise of Data Balkanization. An architecture that works in the EU might be functionally illegal in China or India by 2027. If your information architecture isn’t “regulatorily agile”, meaning it can physically and logically reconfigure itself based on the user’s GPS coordinates, you risk a total systemic shutdown in key markets.
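“Regulatorily agile” can be sketched as policy-as-data: a per-jurisdiction rule table consulted at request time. The regions and rules below are entirely hypothetical placeholders, not statements of actual law:

```python
# Hypothetical per-jurisdiction policy table. The rules and regions are
# invented for illustration; a real system would load these from a
# maintained legal/compliance source.
POLICIES = {
    "EU": {"store_locally": True,  "allow_profiling": False},
    "IN": {"store_locally": True,  "allow_profiling": True},
    "US": {"store_locally": False, "allow_profiling": True},
}

def route_request(region, operation):
    """Reconfigure data handling based on where the user is."""
    policy = POLICIES.get(region)
    if policy is None:
        return "REJECT: no policy for region"
    if operation == "profile" and not policy["allow_profiling"]:
        return "REJECT: profiling barred in this jurisdiction"
    target = "regional-store" if policy["store_locally"] else "global-store"
    return f"ALLOW: execute on {target}"

print(route_request("EU", "profile"))  # rejected under the EU's rules
print(route_request("US", "profile"))  # allowed, routed to the global store
```

The agility comes from the rules being data rather than code: when a jurisdiction changes its law, you update a table, not the architecture.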

Sir Afilonius: And what about the dangers of a misbehaved feedback loop?

Martyn: Well, with that, there’s the Erosion of Semantic Integrity. As enterprises flood their own silos with AI-generated summaries and synthetic data, we risk a “feedback loop” where the original, ground-truth data is buried. Without a rigorous Lineage Architecture, the enterprise starts making million-dollar decisions based on a “hallucination of a summary of a report,” rather than the facts.

Sir Afilonius: It’s all increasing complexity and obtuseness to all new levels of absurdity.

Martyn: Quite possibly. We’re moving toward a need for Sovereignty-Agnostic Fabrics, systems designed to function across silos while keeping the “keys” to the logic in-house.

Sir Afilonius: The next question, from Becci Lloyd of the Equation Experts, is: Looking ahead to 2030, how might decentralised technologies such as blockchain or federated learning reshape knowledge governance, ensuring equitable access while preventing monopolisation by a few tech giants?

Martyn: Becci, you’ve hit on the central tension of the next decade: how do we stop the “AI-Industrial Complex” from becoming the only gatekeeper of human knowledge? By 2030, I expect we’ll see a massive pivot toward Federated Intelligence Networks.

In this model, we move away from the “data vacuum” approach where tech giants suck all global info into a central brain. Instead, we use Federated Learning to keep data exactly where it’s created, whether that’s on a local hospital server or your personal phone. The AI “travels” to the data to learn, rather than the data travelling to the AI. This effectively breaks the giant data lakes’ monopoly by keeping the most valuable, real-time insights in-house.
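The “AI travels to the data” loop is essentially federated averaging (FedAvg). A toy sketch, fitting y = w·x across two “hospitals” whose raw data never leaves the function that trains on it; the data values are invented:

```python
def local_update(w, data, lr=0.1):
    """One pass of local training: fit y = w*x by gradient descent.

    The raw (x, y) pairs never leave this function; only the updated
    weight travels back, mimicking the model visiting the data.
    """
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def federated_round(global_w, silos):
    """Each silo trains locally; the server averages the results (FedAvg)."""
    local_weights = [local_update(global_w, data) for data in silos]
    return sum(local_weights) / len(local_weights)

# Two "hospitals" whose data both follow y = 2x, kept strictly on-site.
silo_a = [(1.0, 2.0), (2.0, 4.0)]
silo_b = [(3.0, 6.0), (1.5, 3.0)]

w = 0.0
for _ in range(20):
    w = federated_round(w, [silo_a, silo_b])
print(round(w, 2))  # converges toward the shared truth, w = 2.0
```

The server only ever sees weights, never patient records, which is exactly the property that breaks the “data vacuum” model.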

Sir Afilonius: What about the issues of quality?

Martyn: To ensure equity, we layer in Blockchain-based Incentive Protocols – digital “reward rules” baked into the software. Right now, if you contribute data to a platform, the giant keeps the value. By 2030, we’ll likely use decentralised ledgers to track “knowledge contributions.” If your unique data helps an AI solve a medical mystery or optimise a power grid, a smart contract ensures you (or your community) are compensated or given free access to the resulting model. It turns knowledge into a Common-Pool Resource rather than a private asset.

Sir Afilonius: And what about real gamechangers?

Martyn: The real game-changer, though, is Decentralised Model Governance (DAOs). Instead of a board of directors in Silicon Valley deciding what an AI is allowed to “know,” the weights and safety guardrails of these models are governed by a distributed collective. It’s “Democracy as an API”, ensuring that the AI reflects a global diversity of thought rather than just the values of a few billionaires.

We’re essentially building a “knowledge internet” that’s too fragmented for any single entity to dominate.

Sir Afilonius: This question is from me. From your experience, what metrics or benchmarks should organisations prioritise to measure the maturity of their data ecosystems, and how do these adapt to domains like scientific research versus commercial analytics?

Martyn: Sir Afilonius, it’s a pleasure. When we move away from the “infrastructure” talk and get into the “ecosystem” reality, we have to stop measuring how much data we have and start measuring how well it flows. In my experience, the most mature ecosystems don’t just have high uptime; they have high “Decision Velocity.”

Sir Afilonius: What do you mean by decision velocity?

Martyn: Decision velocity is how quickly you make decisions and act on them, without getting stuck overthinking.

Sir Afilonius: I see.

Martyn: However, the gold standard metric for 2026 is “Time to Insight.” How long does it take from the moment a new sensor fires or a new paper is published to that information actually changing an AI’s behaviour or a board member’s decision? If it’s weeks, you’re in a museum, not an ecosystem.

That said, the “benchmarks” for success look very different depending on whether you’re in a lab or a boardroom:

  • In Scientific Research, the North Star is “Reproducibility and Lineage.” A mature research ecosystem is one in which a peer can review an AI-generated hypothesis and trace it back through a “digital chain of custody” to the exact raw data point that sparked it. Success isn’t measured by ROI, but by Semantic Density: how many different researchers (or AI agents) can reuse the same data for different experiments without breaking context.
  • In Commercial Analytics, it’s all about “Liquidity and Trust.” The metric I prioritise here is “Data Product Adoption.” If you build a sophisticated “customer 360” view but the marketing team is still using their own separate spreadsheets, your ecosystem has zero maturity. We also look at “Governance as Code”: what percentage of your privacy and bias checks are automated versus manual?

The final benchmark that applies to both is “Interoperability Friction.” A mature system feels like an “atmosphere”; you don’t have to think about breathing; the data is just there. If your teams are spending 80% of their time “cleaning” data and only 20% analysing it, you’re still in the “Ad-hoc” phase, regardless of how much you spent on your cloud stack.

Sir Afilonius: Are we moving toward an “Everything, Everywhere, All at Once” mindset in which the ultimate metric is how invisible the infrastructure has become?

Martyn: That’s exactly it.

Sir Afilonius: The next question, from Lila de Alba at Iniciativa, is: How can we architect information systems to foster serendipitous discovery and bridge silos across interdisciplinary fields such as climate science and economics, while maintaining robust security and ethical standards?

Martyn: Lila, the mistake we often make is architecting for “efficiency,” which is the enemy of discovery. To bridge climate science and economics, we need to move from “Filing Cabinets” to “Associative Fabrics.”

The key is Active Cross-Domain Metadata. Instead of static labels, we use AI to “read” the data in real-time and create hidden bridges. For example, a climate sensor’s soil moisture data could be automatically tagged with its potential impact on local crop insurance premiums. This creates “Semantic Proximity”: the system basically taps a researcher on the shoulder and says, “Hey, this drought data you’re looking at is mathematically related to this economic volatility report from three silos over.”

Sir Afilonius: And what about security?

Martyn: To do this without compromising security, we use Functional Encapsulation. We don’t give everyone access to everything. Instead, we allow “Discovery Agents” to query the silos. These agents can confirm a correlation exists across disciplines without ever revealing the underlying sensitive or proprietary raw data. It’s like a blind date for data: the system tells you there’s a match, but you only get the “keys” once the ethical and security handshakes are verified via Smart Contracts.
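One way to sketch the “blind date for data” idea: each silo exposes only a narrow agent API, and the vetted discovery agent releases a single correlation coefficient, never the rows. The silo classes, series, and numbers below are invented for illustration:

```python
import math

class Silo:
    """A data silo that never exposes raw rows, only a narrow agent API."""
    def __init__(self, series):
        self._series = series  # private raw data stays home

    def run_agent(self, agent):
        # A vetted agent executes inside the silo's trust boundary.
        return agent(self._series)

def discovery_agent(climate_silo, econ_silo):
    """Confirm whether two silos correlate, releasing only one number."""
    def capture(series):
        return list(series)  # runs in-silo; stays inside the trusted agent

    x = climate_silo.run_agent(capture)
    y = econ_silo.run_agent(capture)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return round(cov / (sx * sy), 3)  # only the correlation leaves the agent

soil_moisture = Silo([0.31, 0.28, 0.22, 0.15, 0.12])            # climate silo
insurance_premiums = Silo([410.0, 430.0, 480.0, 560.0, 600.0])  # economics silo
print(discovery_agent(soil_moisture, insurance_premiums))
# A strongly negative correlation: there's a match, but neither party
# has seen the other's raw data.
```

In a production setting the agent would run in a sandboxed enclave or use secure multi-party computation; the sketch just shows the interface shape: raw data in, one statistic out.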

Sir Afilonius: This is a whole new world.

Martyn: Yes! We’re building a “Global Idea Exchange” where the architecture handles introductions, but humans (and their AI counterparts) drive breakthroughs.

Sir Afilonius: Finally, a question from MIT’s own Paula Jones, who asks: What lessons from historical data breaches or governance failures could inform the design of resilient, adaptive architectures for handling sensitive knowledge in high-stakes areas like healthcare or national security?

Martyn: Paula, history’s most painful lesson is that “The Perimeter is a Lie.” Almost every major breach, from the old-school SQL injection to recent cloud misconfigurations, has happened because we focused on building a “hard shell” around a “soft centre.” Once a gatekeeper’s credentials are stolen, the entire vault is wide open.

Sir Afilonius: These are massive bets, right?

Martyn: Yes! For high-stakes areas such as healthcare and national security, we need to move toward “Zero-Knowledge Architectures.” The biggest lesson from the past is that any data stored in a readable format is a liability. In a resilient system, the architecture itself shouldn’t be able to see the data it holds. We use Homomorphic Encryption, which allows AI to perform analytics on encrypted medical records without ever decrypting them. If a hacker breaks in, all they find is mathematical noise.
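The “compute on mathematical noise” claim is demonstrable with an additively homomorphic scheme. Below is a toy Paillier cryptosystem: the server multiplies two ciphertexts and thereby adds the plaintexts, without ever decrypting. The primes are tiny for readability; this is in no way secure, and real deployments use 2048-bit moduli and vetted libraries:

```python
import math
import random

# Toy Paillier keypair. NOT secure: tiny primes, for illustration only.
p, q = 1009, 1013
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)  # modular inverse of lambda mod n

def encrypt(m):
    """E(m) = g^m * r^n mod n^2, with random blinding factor r."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x-1)/n."""
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

# Two encrypted patient readings; the server adds them blindly.
c1, c2 = encrypt(12), encrypt(30)
c_sum = (c1 * c2) % n2   # homomorphic addition: E(a) * E(b) = E(a + b)
print(decrypt(c_sum))    # 42, computed without the server seeing 12 or 30
```

Paillier only gives addition; fully homomorphic schemes (which also support multiplication on ciphertexts) are what the richer analytics scenarios require, at a much higher computational cost.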

Sir Afilonius: Cool! Compute on data without ever seeing it? Tell me more.

Martyn: Another critical lesson is that “Centralisation is a Single Point of Failure.” Historically, when you put all the “state secrets” or “patient records” in one giant honeypot, you invite a catastrophic breach. We’re moving toward Atomic Data Fragmentation, where sensitive knowledge is broken into “shards” and distributed across different legal and physical jurisdictions. No single “key” or “node” can reconstruct the whole picture; it requires a multi-party consensus to assemble the truth.
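The sharding idea maps directly onto additive secret sharing: split a value into random shares such that any incomplete subset is statistically useless, and only full consensus reconstructs it. The record identifier below is an invented example:

```python
import random

PRIME = 2**61 - 1  # field modulus for the shares

def shard(secret, n_shards):
    """Split an integer secret into n additive shares mod PRIME.

    Any subset smaller than n reveals nothing about the secret; only
    the full set of shards, combined, reconstructs the value.
    """
    shares = [random.randrange(PRIME) for _ in range(n_shards - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

record_id = 123456789          # a sensitive value, e.g. a patient identifier
shards = shard(record_id, 3)   # one shard per legal jurisdiction
print(reconstruct(shards) == record_id)    # True: full consensus works
print(reconstruct(shards[:2]) == record_id)
# Almost surely False: two of three shards are just random field elements.
```

This is the "multi-party consensus to assemble the truth" property in its simplest form; threshold schemes like Shamir's generalise it so that any k of n shards suffice.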

Sir Afilonius: So, finally, we’ve learned that “Governance is not a snapshot”?

Martyn: Traditional audits fail because they check the system only once a year, while breaches occur in milliseconds. Resilient architecture must be Self-Healing. We use AI-driven “Immutable Ledgers” to track every single data access. If the system detects an anomalous pattern, such as a doctor suddenly downloading 1,000 records at 3 AM, the architecture doesn’t just “alert” someone; it logically “severs” that branch of the network instantly.
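The “sever, don’t just alert” behaviour can be sketched with a minimal access monitor. The threshold, user name, and record scheme below are invented; a real system would use anomaly models over an immutable access ledger rather than a fixed count:

```python
from collections import defaultdict

class AccessMonitor:
    """Minimal self-healing guard: log every access, sever on anomaly."""
    def __init__(self, max_per_window=100):
        self.max_per_window = max_per_window
        self.counts = defaultdict(int)
        self.severed = set()

    def access(self, user, record_id):
        if user in self.severed:
            return "DENIED: branch severed"
        self.counts[user] += 1
        if self.counts[user] > self.max_per_window:
            self.severed.add(user)   # cut the branch, don't just alert
            return "DENIED: anomaly detected, access severed"
        return f"OK: {record_id}"

monitor = AccessMonitor(max_per_window=100)
# A doctor suddenly pulling 1,000 records at 3 AM:
results = [monitor.access("dr_jones", f"rec-{i}") for i in range(1000)]
print(results[99])    # the last read inside the normal window
print(results[100])   # the moment the branch is severed
print(results[999])   # everything afterwards stays denied
```
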

Sir Afilonius: There are massive shifts ahead of us.

Martyn: We’re moving from “Fortress Design” to “Biological Resilience,” in which the system survives by assuming it’s already been compromised.

Sir Afilonius: That brings us to the end of the session. Any closing thoughts, Martyn?

Martyn: It’s been a privilege, Sir Afilonius. If I leave you with one final thought, it’s this: we are moving out of the era of “Data Management” and into the era of “Data Autonomy.”

For decades, we’ve treated information like a static resource to be mined, fenced in, and controlled. But at a planetary scale, that old “fortress” mentality creates the silos, the biases, and the catastrophic breaches we’ve discussed today. The architectures of 2030 won’t be about building better containers; they’ll be about creating better ecosystems.

Sir Afilonius: So, what is the new objective?

Martyn: The goal is to create a world where information is “intelligent” enough to protect itself, “liquid” enough to flow across disciplines like climate and economics, and “honest” enough to show its own legal receipts.

We’re not just building faster databases; we’re building a collective, verifiable memory for humanity.

The future belongs to organisations that stop trying to “own” the data and focus on how to orchestrate trust around it. This is a significant paradigm shift.

Thank you to Florence, Melanie, Vanessa, Becci, Lila, and Paula for such probing questions. It’s clear that while the technology is ready, the real work lies in our willingness to rethink the very foundations of how we define “knowledge.”

Sir Afilonius: Thank you, Martyn. And thanks to all our guests and participants. Until next time. Goodbye.
