Hello, I am Ricky Jonesy-Innit, head of Becci Boo Investments. Last week we said goodbye to our best commodities trader. He had been working with us for years.
Building the Data Logistics Hub: Pieces and Parts – 2026/02/15 – Part 3
Guide
This episode provides a comprehensive framework for the third installment in the series on the Data Logistics Hub (DLH), which Martyn Jones conceptualised as a technology-agnostic, centralised platform for efficiently moving, governing, and distributing data across organisations. Building on Part 1 (Challenges and Opportunities) and Part 2 (The Strategy), this part focuses on the tangible “pieces and parts” of the DLH architecture: it outlines mandatory and optional elements, explores potential technologies, and examines key processes such as data pulling or pushing, translation from source to target, mapping, and data catalogues.
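The mapping and translation processes mentioned above can be pictured as a simple rule-driven transform between a source schema and a target schema. The sketch below is illustrative only; the names (`field_map`, `translate_record`, the field names) are assumptions for the example, not part of any DLH specification.

```python
# Illustrative sketch of source-to-target field mapping and translation,
# one of the hub processes described above. All names are assumptions.

field_map = {                      # source field -> target field
    "cust_nm": "customer_name",
    "ord_dt": "order_date",
    "amt": "amount_eur",
}

def translate_record(source: dict) -> dict:
    """Rename fields according to the mapping; drop anything unmapped."""
    return {target: source[src] for src, target in field_map.items() if src in source}

record = {"cust_nm": "ACME", "ord_dt": "2026-02-15", "amt": 99.5, "junk": 1}
out = translate_record(record)
# out == {"customer_name": "ACME", "order_date": "2026-02-15", "amount_eur": 99.5}
```

In a real hub the mapping would typically live in a data catalogue rather than in code, so the same translation rules can be governed, versioned, and reused across pipelines.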
Building the Data Logistics Hub: The Strategy – 2026/02/14 – Part 2
Before I begin, remember this: “All data roads lead to the Data Logistics Hub.” They also lead from it. It is the Rome of the age of data, information, knowledge, and wisdom. Be prepared!
Okay, we will now examine the Data Logistics Hub in terms of strategy, execution plans, and roadmaps.
A high-level blueprint for a successful Data Logistics Hub outlines several requirements. These include principles, guiding objectives, an imagined “better world”, and organisational alignment. Key trade-offs must also be considered, such as centralised versus federated and batch versus streaming, among others.
In this episode, we begin by honestly examining the pain points that make data logistics so difficult today: siloed data and systems, a proliferation of data interchange point solutions, inconsistent quality, security and compliance barriers, and exploding data volumes. We then explore the transformative opportunities: faster time-to-insight, seamless collaboration across teams and organisations, monetisable data products, and AI-ready flows.
Hold up there for a moment. Have I got something for you!
I may not be the father of Information Centres. I’m certainly not going to claim any of Bill Inmon’s achievements as my own. However, I have spent a professional lifetime wading in the data and information garlic. So, I do claim a rightful share of the credit.
And I am rightfully credited with founding the Data Logistics Hub design movement.
In an era where data is the lifeblood of organisations, fuelling decisions, powering AI, enabling innovation, and driving competitive advantage, the ability to move, integrate, share, and utilise that data efficiently has become a strategic imperative. Yet many enterprises still struggle with fragmented pipelines and siloed sources. They face compliance headaches, latency issues, and the sheer complexity of connecting data across clouds, on-premises systems, partners, and ecosystems.
In data modelling and database design, keys play a fundamental role in uniquely identifying records and defining relationships between tables. One of the most widely used types of keys, especially in analytical systems and data warehouses, is the surrogate key.
A surrogate key is an artificial, system-generated identifier assigned to a record in a table. It is typically used as the primary key. It has no business meaning or semantic relationship to the real-world entity it represents. Common implementations include auto-incrementing integers or globally unique identifiers (GUIDs).
Surrogate keys exist purely to serve the needs of the database system: performance, stability, and simplicity.
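The two common implementations mentioned above can be sketched in a few lines. This is a minimal illustration of surrogate key assignment during a load step, not a production pattern; the names (`assign_surrogate_key`, `guid_surrogate`, the sample business keys) are assumptions for the example.

```python
import itertools
import uuid

_next_id = itertools.count(1)        # auto-incrementing integer source
_key_map: dict[str, int] = {}        # business key -> surrogate key

def assign_surrogate_key(business_key: str) -> int:
    """Return a stable, meaning-free integer id for a business key.

    The same business key always maps to the same surrogate, so
    relationships between tables stay intact even if the business
    key's format or meaning changes later.
    """
    if business_key not in _key_map:
        _key_map[business_key] = next(_next_id)
    return _key_map[business_key]

def guid_surrogate() -> str:
    """Alternative: a GUID surrogate, unique without central coordination."""
    return str(uuid.uuid4())

customers = ["ACME-001", "GLOBEX-007", "ACME-001"]
keys = [assign_surrogate_key(c) for c in customers]
# The repeated business key "ACME-001" maps to the same surrogate: [1, 2, 1]
```

Auto-incrementing integers are compact and join-friendly, which suits a single warehouse; GUIDs avoid a central counter, which suits distributed loading at the cost of larger keys.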
Martyn Rhisiart Jones and the Goodstrat editorial team, Madrid, 3rd February 2026
Introduction
The following is the redacted transcript of a conversation between the distinguished Sir Afilonius Rex of Cambriano Energy and the cordial Martyn Rhisiart Jones of goodstrat.com.
The informal session took place before an invited audience at the Welsh Academy’s alternative summer conference of July 2023 and featured a lively question-and-answer session with audience input.
According to our reliable sources, “The BBC, RTE and RTVE broadcast the session.”
If anyone can turn data into knowledge, then that person is me.
Let me explain.
I am a data architecture and management professional. For more than three decades I have been acquiring knowledge and experience in the design and delivery of effective data and solutions architectures, across a wide range of projects and for a wide range of (mainly large global or regional) enterprise clients.
Therefore, I think I can reasonably claim to have built up quite a good personal body of knowledge when it comes to applied data architecture and management.
As well as being a professional in the management and architecture of data, I also have considerable knowledge and experience in the areas of information management, artificial intelligence, and knowledge management (structured intellectual capital).
So, what about turning data or information into knowledge?
Not all that glitters is Big Data, and Big Data has a long way to go before it can deliver anything like the same satisfying results, tangible benefits and organisational agility that a properly implemented Inmon Enterprise Data Warehouse can provide.
This weekend I read a piece on the Information Management website by Steve Miller titled “Big Data vs. the Data Warehouse”. It’s an old piece, from March 2014.
It was a response to a piece by Bill Inmon titled “Big Data or Data Warehouse? Turbocharge Your Porsche – Buy an Elephant”, in which Inmon singled out for criticism the ad campaign of a big-data and Hadoop promoter.