Sovereignty

A hot topic nowadays (mid 2026). I am starting a new series of articles about architecture choices that support it. None of them is fully detailed down to the implementation level; they are much more a line of thought that, when implemented, increases sovereignty in a fundamentally different way.

This first article introduces the ideas of DAS, Decentralized Autonomous Systems.

This article was written by a person. It may contain errors, and it contains my personal opinions. You may or may not agree with them. Such is life when people (instead of AIs) express themselves.

Introduction

The dependency of modern society on Big Tech has become obvious these days. Google, Amazon, Meta, and others deliver information in ways that previous generations could only dream of. In a kind of reverse movement, these companies are hoarding information about us. In doing so, an unbalanced state is created: the centralized information store is much more powerful and influential in society — let alone over its members — than society is over these tech giants, let alone its individual members.

This imbalance is well known. People like Reijer Passchier, Cory Doctorow 1, and Marietje Schaake 2, among others, have written extensively about it. In this article, I will focus on a single aspect of that imbalance: the handling of confidential personal information. I will explain the origin of that problem from an architectural viewpoint, identify patterns that lead to these problems, and look at proposed approaches to mitigation. I will also discuss different mitigation directions.

The key to mitigation is the recognition of an underlying pattern that, in certain cases, we should recognize as an antipattern — one that, in my opinion, we should actively counter in our designs.

The (anti-)pattern

The pattern I am referring to is the centralized solution pattern. This pattern manifests itself in many forms: single source of truth, hub and spoke, controller/orchestrator, and other commonly used principles come to mind. In enterprise architecture, they play an important role.

In IT, the pattern was first encountered in central mainframes, and it proliferated throughout general business when file servers emerged. This continued toward further centralized approaches, leading to the big-tech cloud solutions we see today.

It is in those cloud solutions that we see the current unbalanced state emerge. Because of the sheer size of centralized solutions, much of the data about us is stored there. And this creates the privacy and personal autonomy problems that previous writers have discussed. A recent example is a hack in the database of a major internet provider in the Netherlands. Data about 2 million clients, including passport information, bank accounts, and personal addresses, was retrieved by a third party.

Why the pattern creates a problem

The reason these hacks are such a major risk is that the data is centrally available. Hack the system once, and you get all the data from the centralized store.

Compare this to a situation where the data is decentralized: the third party would have to hack all decentralized stores to get the same number of customer records. And the probability of success would have been lower, given the social engineering attack that was used.

Another example where centralization creates possible issues is when the owner of a centralized solution — for example, a healthcare platform with patient data — decides to sell the business. Once sold, the very private data may move jurisdiction, and it may be used by the new owners in different ways. In both situations, the actual patients have no say in what happens with their data.

There are many more scenarios that reduce ownership of data, leading to reduced autonomy and privacy. See the mentioned writers for more details.

The essence of the problem

It is important to note that the problems we have addressed are not “big tech” problems. The problems originate from two facts:

1) Data belonging to many owners is centrally stored.

2) A single entity that is not the owner of the data can decide over this centrally stored data.

In that regard, it also does not matter who the single entity is: a commercial company, a governmental organisation, or a public institution. In all cases, central storage itself creates a risk, and central ownership does not offer any guarantees of the future secure use and storage of confidential data.

Why this is hard to overcome (or why current counter-approaches fail)

The approaches mentioned by some European authors to solve the indicated problems often address the centralization aspect by focusing on the owner of the central resource.

The rather naive approach is: let’s make a European cloud, where our laws govern its use. However, that has two drawbacks:

  • For a European cloud, huge investments have to be made up front. As there are no users at the start, and as there are US-based hyperscalers that operate at much lower cost, it is not very likely that an attractive solution can be offered in the short term. And because all investors in the EU know this, there is great reluctance to invest, leading to a downward spiral.

  • An EU-based cloud owned by a single entity still has all the problems associated with centralized solutions. Who is to say the owner will not give in to an offer from a major big-tech firm, for cash, for the benefit of his children up to the fifth generation?

Solution

A quote widely attributed to Einstein is:

“We cannot solve our problems with the same thinking we used when we created them.”

The above approach (creating a new “big tech” under different governance) is a typical example of that. As indicated in the previous paragraph, the likelihood of success is low, and there is no real guarantee for future developments.

If, however, we try to rise above the cloud-first approach and keep in mind that the solution is to avoid central ownership and central storage, can we still create meaningful IT solutions for confidential data?

I dare to answer this question with “yes, we can.” I will sketch two use cases of a fictional application, based on the old internet adage: autonomous systems connected to each other.

When the internet was designed, the entire idea focused on separate IT systems that connected to each other over a communications protocol and were thus able to exchange information. Our current way of using the internet, with the big 5 or big 7 hyperscalers storing almost all information centrally, is almost the direct opposite of that design idea. I think we sometimes should, and can, return in our designs to more autonomous systems (which I call Decentralized Autonomous Systems, or DAS). When considering this approach, we should take into account that the systems we currently use, even the laptop I am using to write this, are at least a factor of 100 more powerful than the systems the original designers used. Functionally, our current systems build on well-tested, very capable software as a commodity foundation, something the original designers could only dream of.

Given the foundation of well-running, highly powerful personal IT equipment, I am convinced that implementing DAS on top of that is feasible, even though it would require new software solutions.

Implementation

The key element in the application of DAS in confidential data handling is the creation of trust. I will illustrate the idea with a fictional use case.

DAS use case for healthcare data

Imagine a DAS system running on a user’s mobile phone and laptop, both with full internet connectivity. Both systems contain the user’s healthcare data in encrypted form.

Now, when the user visits their GP, a special handshake takes place between the user’s DAS system and the GP’s DAS system. First, the mobile phone detects that the user is in the presence of another DAS system. By querying that system, it learns that this is a healthcare system. Once the patient starts the conversation with the GP, the GP naturally wants access to the patient’s healthcare file, which is stored on the patient’s mobile phone. To access the file, the GP requests access through the DAS system, at least for that session. Once the patient grants access, the GP can view the file and add further information. Given the possibility of future references to the file, the GP could ask for an extended period of access to the patient’s file.
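The handshake above can be sketched in a few lines of Python. This is a minimal illustration only, under my own assumptions: the class and method names (`DasSystem`, `discover`, `request_access`, and so on) are hypothetical, consent is stubbed out as an automatic “yes,” and a real DAS would of course need authenticated identities and encrypted transport.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class AccessGrant:
    grantee: str
    expires: datetime

@dataclass
class DasSystem:
    owner: str
    system_type: str              # e.g. "patient" or "gp"
    grants: list = field(default_factory=list)

    def discover(self, other: "DasSystem") -> str:
        """Step 1: query a nearby system for its declared type."""
        return other.system_type

    def request_access(self, patient: "DasSystem", hours: int = 1) -> bool:
        """Step 2: a GP system asks the patient's system for session access."""
        if self.system_type != "gp":
            return False
        # Step 3: the patient must explicitly approve (stubbed here).
        if patient.owner_approves(self.owner):
            patient.grants.append(
                AccessGrant(self.owner, datetime.now() + timedelta(hours=hours))
            )
            return True
        return False

    def owner_approves(self, requester: str) -> bool:
        # Placeholder for a real consent prompt on the patient's device.
        return True

    def has_access(self, grantee: str) -> bool:
        """Check whether a grantee currently holds an unexpired grant."""
        now = datetime.now()
        return any(g.grantee == grantee and g.expires > now for g in self.grants)
```

An extended access period, as mentioned above, would simply be a grant with a longer expiry, still issued only after the patient approves.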

The key element in this use case is the rule that says: if two systems connect, one of which contains patient data and the other of which is a GP’s system, then connection requests are allowed and, when granted, access is provided to the patient’s medical file, which remains owned by the patient.

This rules system could easily be extended. Suppose we are dealing with a medical emergency. In that case, a first-aid responder will come to the user of the DAS system. The responder’s DAS system and the user’s DAS system connect automatically, giving the responder access to the full medical history, based on two facts:

1) the first-aid responder is nearby

2) the DAS system near the patient is verified as belonging to a first-aid responder.
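The emergency rule combines exactly the two facts listed above, and can be stated as a single predicate. A minimal sketch, with assumed parameter names and an assumed proximity threshold of my own choosing:

```python
def emergency_access_allowed(responder_verified: bool,
                             distance_m: float,
                             max_distance_m: float = 10.0) -> bool:
    """Automatic access only when both conditions from the use case hold:
    1) the responder's DAS system is physically nearby, and
    2) it carries a verified first-aid-responder credential."""
    return responder_verified and distance_m <= max_distance_m
```

Neither fact alone is enough: a verified responder far away, or an unverified system nearby, gets nothing.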

Key elements in the use case

The patient’s file does not leave the patient’s DAS system. The GP is allowed to read it and append to it, but not copy it to their own system. This is a key architectural element in the design of the system. Think of it as a notebook that you hand over to the GP, where they can write in it and then hand it back.
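The notebook idea can be made concrete as an interface that offers reading and appending but deliberately no bulk export. This is only an API-shape sketch with hypothetical names; actually preventing copies in a real DAS would need more than a convention, such as remote rendering on the patient's device.

```python
class MedicalFile:
    """Read-and-append only: entries can be viewed and added,
    but the interface exposes no operation that exports the file
    as a whole to another system."""

    def __init__(self):
        self._entries = []

    def append(self, author: str, note: str) -> None:
        """The GP writes in the notebook, then hands it back."""
        self._entries.append((author, note))

    def read_latest(self, n: int = 1):
        """Viewing happens entry by entry on the patient's own device."""
        return self._entries[-n:]
```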

The “verified fact” derives from the GP’s graduation record from a university, which shows beyond doubt that they are in fact a doctor.

The verified fact is only shown when the owner wants to show it (for example, during working hours). When it is switched off, there is no way to identify the owner as a doctor or first-aid responder.
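The owner-controlled switch can be sketched as a credential that only answers when disclosure is enabled. The structure and names below are my own illustration, not any existing credential format:

```python
from dataclasses import dataclass

@dataclass
class VerifiedFact:
    claim: str                # e.g. "licensed medical doctor"
    issuer: str               # e.g. the issuing university
    disclosed: bool = False   # the owner switches disclosure on and off

    def present(self):
        """Reveal the claim only while the owner has disclosure enabled;
        when switched off, the system reveals nothing at all."""
        return (self.claim, self.issuer) if self.disclosed else None
```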

The code that makes up the DAS system must be open source, to avoid the possibility that the creator at some point decides to add a “copy to central location” function in anticipation of a later sell-off to big tech.

How can we make it work?

DAS systems do not yet exist, but they are technologically well within reach of our current capabilities. Current developments like Autonomi, Solid, and Yivi are already moving in this direction. As a former colleague once said: “It’s just code, and you have to write it.”

Here, government and universities should take a leading role, initially by funding the design and understanding of DAS systems, then by defining standards and basic code, and finally by adding regulations to make sure DAS systems are actually used.

A second action area is verifiable facts. The fact we need in the use case is already available, but some kind of regulation must be in place to use it in the intended way — again, a governmental task.

Finally, there is the fact that the patient brings and holds their own data. This is a new element, needed to implement true autonomous systems. However, users need to follow certain guidelines (storing data more than once, etc.). This requires new ways of working, procedures, and probably a culture change.

Concluding

Here, a sketch has been given of a possible system that addresses a single problematic aspect of big tech. DAS-based systems will not solve disinformation problems, AI-environment problems, and similar issues. These are areas where further articles may be written.

  1. See for example: The Internet Con 

  2. See for example: The Tech Coup