I was reading an article written by Jeff Wilts and recommended by Bill Inmon. I got to this statement: “Teradata is a full-featured enterprise data warehouse.” For me, it went further downhill from there.
But this was the coup de grâce: “Databricks is a unified data platform that can behave like a data warehouse.”
One of the most famous anecdotes involving UNIVAC was its prediction of the outcome of the 1952 U.S. presidential election between Dwight D. Eisenhower and Adlai Stevenson. As the first computer used for commercial purposes, UNIVAC was tasked with predicting the election results. On live television, UNIVAC correctly predicted Eisenhower’s victory, despite widespread scepticism about the machine’s accuracy.
Many people come up to me in the street and ask me what big data is all about. It has happened numerous times before, and I am sure it might just happen to you as well. I am in the know, sort of thing; I read the big-data tea leaves. Nothing gets past me.
In data modelling and database design, keys play a fundamental role in uniquely identifying records and defining relationships between tables. One of the most widely used types of keys, especially in analytical systems and data warehouses, is the surrogate key.
A surrogate key is an artificial, system-generated identifier assigned to a record in a table. It is typically used as the primary key. It has no business meaning or semantic relationship to the real-world entity it represents. Common implementations include auto-incrementing integers or globally unique identifiers (GUIDs).
Surrogate keys exist purely to serve the needs of the database system: performance, stability, and simplicity.
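The two common implementations mentioned above can be sketched briefly. The following is a minimal illustration using SQLite via Python's standard library; the table and column names (`dim_customer`, `customer_sk`, and so on) are hypothetical, chosen only to show an auto-incrementing integer key alongside a GUID-style alternative.

```python
import sqlite3
import uuid

# In-memory database purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dim_customer (
        customer_sk   INTEGER PRIMARY KEY AUTOINCREMENT,  -- auto-incrementing surrogate key
        customer_guid TEXT NOT NULL,                      -- GUID-style surrogate alternative
        customer_name TEXT NOT NULL                       -- business attribute, never the key
    )
""")

# The database generates the surrogate key; the load process never supplies it.
for name in ("Alice", "Bob"):
    conn.execute(
        "INSERT INTO dim_customer (customer_guid, customer_name) VALUES (?, ?)",
        (str(uuid.uuid4()), name),
    )

rows = conn.execute("SELECT customer_sk, customer_name FROM dim_customer").fetchall()
print(rows)  # → [(1, 'Alice'), (2, 'Bob')]
```

Note that the surrogate values carry no business meaning: if a customer changes their name, the key is unaffected, which is precisely the stability property that makes surrogate keys attractive in data warehouses.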
The End of the Fortress: Why the Future of Data is “Liquid”
By Martyn Rhisiart Jones | Madrid, 3rd February 2026
In a landmark session at the Welsh Academy, Sir Afilonius Rex sat down with Martyn Rhisiart Jones. They aimed to dismantle our outdated obsession with “data management.” The verdict? The era of the digital fortress is dead. In its place, a new paradigm of Cognitive Liquidity and Data Autonomy is emerging, redefining how we integrate global knowledge.
Martyn Rhisiart Jones and the Goodstrat editorial team, Madrid, 3rd February 2026
Introduction
The following is the redacted transcript of a conversation between the distinguished Sir Afilonius Rex of Cambriano Energy and the cordial Martyn Rhisiart Jones of goodstrat.com.
The informal session took place before an invited audience at the Welsh Academy’s alternative summer conference of July 2023 and featured a lively question-and-answer session with audience input.
According to our reliable sources, “The BBC, RTE and RTVE broadcast the session.”
Happy Sunday to one and all. As many of you will know, I have been intimately involved in designing, building, and delivering data warehousing and advanced analytics initiatives for more than 35 years.
Today, I will take a deep dive into requirements gathering for a new iteration of an enterprise data warehouse and a new data mart.
Establishing the case for a new data warehouse iteration is part of the requirements-gathering phase of a project. It must remain at the forefront of the exercise, and we must continually ask ourselves: “To what ends?”
First, before diving into the core aspects of the iteration, we will examine the legitimate drivers and objectives for data warehouse initiatives, together with the prerequisite skills, knowledge, and experience needed to carry out this activity successfully. After that, we will look at the preparation required to align the personal, business, and technological dimensions of the initiative for effectiveness and success.