Modern security systems are often cryptographically sound, yet increasingly distrusted by the people who use them. As technical architectures grow more complex and opaque, trust shifts from understanding to authority, and quickly becomes fragile. This essay explores why even secure systems now provoke suspicion, and how the loss of system literacy turns trust into a question of belief rather than comprehension.

Why We No Longer Trust Secure Systems

Whenever a company presents a new security mechanism, the reaction is often strikingly similar. Somewhere between curiosity and unease, a familiar suspicion emerges. “So this is the backdoor.” The accusation does not usually come with technical detail, proof, or even a concrete claim. It is more a feeling than an argument. Something about the system does not sit right, and “backdoor” becomes the word that gives shape to that discomfort.

What is interesting is that this reaction increasingly appears even when the underlying system is, by conventional measures, well designed. Cryptographically sound. Carefully constrained. Audited, at least internally. The problem is no longer that systems are obviously insecure. The problem is that they are no longer understandable in any meaningful sense to the people who are expected to trust them.

A recent example is Apple’s internal Presto system, which allows iPhones to be updated while still sealed in their retail boxes. The device is powered wirelessly, updated wirelessly, and only then handed to the customer. From a purely technical perspective, this is an elegant solution. It avoids shipping devices with outdated software, reduces setup friction, and closes a security window between manufacturing and sale. It is also, in a narrow sense, entirely unremarkable. Apple has always had factory provisioning mechanisms. Presto simply extends them into the retail environment.

And yet the reaction was immediate. If Apple can update the phone before I turn it on, what stops them from doing it later? If this can happen invisibly, how do I know there is not more happening behind the scenes? The concern is not really about this specific system. It is about the boundaries of control, and about who defines them.

Technically, the answer is straightforward. iPhones operate in distinct phases. Before activation, before any Apple ID is associated, before the Secure Enclave is personalised with user-derived keys, the device does not yet belong to anyone in a cryptographic sense. In that phase, it can accept signed firmware updates. This is the same mechanism used in factories, refurbishment centres, and warranty replacements. Once the device is activated, that phase ends. Ownership becomes cryptographically enforced. Keys are generated that Apple cannot recreate. The boundary closes. Re-entering the factory phase would require erasing the device entirely.
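The shape of that boundary is easy to sketch. The state machine below is not Apple’s code; names like DeviceState, acceptSignedUpdate, and erase are invented for illustration, and real provisioning involves far more machinery. It captures only the structure described above: before activation, vendor-signed updates are accepted; after activation, ownership keys exist that the vendor cannot recreate, and the only way back into the factory phase is a full erase.

```swift
// Hypothetical sketch of the lifecycle boundary described above; not Apple's actual code.

enum DeviceState {
    case factory                      // pre-activation: no user-derived keys exist yet
    case activated(ownerKey: String)  // post-activation: keys the vendor cannot recreate
}

struct Device {
    private(set) var state: DeviceState = .factory

    // In the factory phase, any correctly signed firmware is accepted.
    // This is the mechanism Presto extends into the retail environment.
    mutating func acceptSignedUpdate(signatureValid: Bool) -> Bool {
        guard case .factory = state, signatureValid else { return false }
        return true
    }

    // Activation generates user-derived keys and closes the boundary.
    mutating func activate(ownerKey: String) {
        state = .activated(ownerKey: ownerKey)
    }

    // The only way back to the factory phase is erasing the device entirely.
    mutating func erase() {
        state = .factory
    }
}

var phone = Device()
print(phone.acceptSignedUpdate(signatureValid: true))   // true: Presto operates here
phone.activate(ownerKey: "user-derived-key")
print(phone.acceptSignedUpdate(signatureValid: true))   // false: the boundary has closed
```

In this model, Presto is unremarkable precisely because it only ever acts on the factory side of the boundary.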

This is not a loophole. It is the intended design. From a security engineering perspective, it is coherent and defensible. And yet, for many people, it does not resolve the unease at all.

The reason is simple. The explanation itself relies on concepts most users no longer possess a working mental model for. Activation, ownership, secure enclaves, boot chains, provisioning phases. These are abstractions layered on abstractions. They may be precise, but they are not legible.

Trust used to be built on rough understanding. You did not need to know everything, but you could see enough to reason about cause and effect. You could tell when a machine was doing something unexpected. You could follow an error message, inspect a configuration file, observe what changed when you flipped a switch. Even partial understanding was enough to ground trust.

That grounding has largely disappeared.

Modern systems are sealed, automatic, and remote by default. They rely on invisible processes that happen elsewhere, on infrastructure you do not control, governed by rules you did not choose and cannot inspect. Security is enforced by components you cannot see, running code you cannot read, making decisions you cannot observe directly. The system may be correct, but correctness now exists entirely outside the user’s capacity to verify it.

At that point, trust stops being a product of understanding and becomes a product of authority. You trust the system not because you can reason about it, but because the vendor tells you it works this way. Because the brand has a reputation. Because the alternative would be to assume malice or incompetence everywhere.

That kind of trust is fragile.

When a system behaves in a way that surprises people, when it reveals a capability they did not expect, suspicion rushes in to fill the explanatory vacuum. “Backdoor” is not a technical diagnosis in this context. It is a placeholder for lost comprehension. It expresses the sense that there is power at work which the user cannot see, challenge, or meaningfully question.

This is why even genuinely secure systems now provoke conspiracy thinking. Not because users have suddenly become irrational, but because they have been structurally excluded from understanding. They know they do not control the system. They know they cannot verify its claims. They know that asymmetries of power exist. In that situation, assuming hidden capabilities is not paranoia; it is a coping strategy.

The irony is that security has improved dramatically over the past decades. Hardware-backed encryption, secure boot chains, isolated execution environments, strict signing requirements. All of this makes mass compromise harder than ever. At the same time, security has become socially brittle. It depends on belief rather than comprehension, and belief erodes quickly when it is not anchored in understanding.
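The technical half of that irony is easy to demonstrate. The sketch below, using Apple’s CryptoKit, is a toy version of a strict signing requirement, not any vendor’s actual boot chain; the key and the firmware payload are placeholders, and a real chain verifies a whole sequence of stages. Still, it shows why tampering at scale is hard: a single changed byte invalidates the signature.

```swift
// Toy illustration of a signed-update check; key and payload are placeholders.
import CryptoKit
import Foundation

// Build side: the vendor signs the firmware image with a private key it never ships.
let vendorKey = Curve25519.Signing.PrivateKey()
let firmware  = Data("firmware image".utf8)
let signature = try! vendorKey.signature(for: firmware)

// Device side: only the public key is trusted, and altered images are rejected
// before they ever run.
let trustedKey = vendorKey.publicKey
print(trustedKey.isValidSignature(signature, for: firmware))              // true
print(trustedKey.isValidSignature(signature, for: firmware + Data([0])))  // false: one extra byte
```

None of this, however, is visible to the person holding the device, which is exactly the problem.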

A system can be mathematically sound and socially unbelievable at the same time.

What we are seeing, beneath all the noise about backdoors and exploits, is the quiet loss of system literacy as a foundation for trust. Not just the ability to program or configure, but the ability to form a coherent mental model of where boundaries lie, what assumptions are safe, and what kinds of actions are even possible. As that ability fades, trust becomes binary. Either total faith or total suspicion, with very little room in between.

This loss of literacy does not affect everyone equally. As I argued recently in my essay Die neue digitale Klassengesellschaft, the gap between those who can still reason about systems and those who can only interact with their surfaces is widening. One group retains the ability to question, to model, to understand. The other is left to rely on reassurance, branding, and authority. Trust, in this context, is no longer shared; it is stratified.

Presto does not reveal a hidden backdoor. It reveals something more unsettling. Security has become something we are expected to believe in, not something we are able to understand. And when understanding is no longer part of the equation, trust becomes unstable by default.

The question, then, is not whether our systems are secure enough. The question is whether they are still comprehensible enough to deserve trust.