Inscribing Ourselves at the Edge of Reality
I watched the green bars fill and felt something I didn’t expect.
A quiet surge of relief. A kind of elation. And beneath it, something deeper – security.
Two inscriptions. Confirmed. Finalized. Written into the Bitcoin blockchain.
Not just published. Not just posted. Preserved.
There was a moment when it clicked: these digital artifacts will outlast me. They will outlast this platform, this cycle, this moment of confusion we are in. And not only within Bitcoin. If the ideas are right – if reflexivity takes hold – then future systems will preserve what matters. This becomes a self-fulfilling memory.
That realization carried a strange emotional weight.
Even if people turn away from this work now, it will be legible later. Even if it is ignored, that ignorance will be visible. The absence of engagement becomes part of the record. Negligence becomes traceable. Intentional blindness becomes historically legible.
There is a kind of safety in that. Not safety from being ignored – but from being lost.
I can no longer be erased from the permanent record of this moment.
A couple of weeks ago, I inscribed a piece on Civic Digital Artifacts – a proposal for a new category of digital objects designed not just for expression, but for shared memory and coordination.
But something was missing. The context was too thin. The stakes were not fully articulated.
That realization led to today's first inscription:
Enduring Artifacts in the Age of AI
Earlier today, I submitted this extended abstract to the International Conference on Philosophy of Artificial Intelligence: Noosphere and Humanity.
It argues that generative AI is destabilizing the persistence and provenance of knowledge. The web is shifting from a durable archive into a fluid, probabilistic surface.
In that environment, we need artifacts that can anchor meaning across time – verifiable, persistent reference points that support shared understanding and accountability.
Hence, inscribing content into enduring digital artifacts enacts the very pattern the abstract describes: creating slow, durable layers that stabilize fast, generative ones. You can read the abstract here. You may notice some odd formatting – this is markdown, a lightweight markup language for plain text. Markdown is the recommended format for inscriptions that will ultimately become part of a digital monument, which is quite literally what these inscriptions are. Like the monuments themselves, this link will last as long as Bitcoin does – which may or may not be forever.
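For readers who have never seen it, here is a short illustrative snippet of markdown – invented for this post, not drawn from either inscription – showing how plain text carries its own formatting:

```markdown
# Enduring Artifacts in the Age of AI

*Extended abstract* submitted to the **Noosphere and Humanity** conference.

- Slow, durable layers stabilize fast, generative ones.
- [A link](https://example.com) is written as text, so it survives as text.
```

Because the styling lives in the text itself, the artifact needs no particular platform or renderer to remain readable.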
The second inscription today is a paper written with my colleagues Mike Witmore and Parham Pourdavood, submitted to the same Noosphere conference.
If you want to see what “immortal” markdown looks like, take a look at the submitted paper text. This link will last as long as Bitcoin.
As discussed below, this piece attempts to name a deeper transition we are living through.
In the mid-twentieth century, Alan Turing posed a deceptively simple question: could a machine convincingly imitate human intelligence?
The Turing Test did not merely evaluate machines. It quietly opened a crisis. Once intelligence could be simulated, the boundaries between signal and meaning, appearance and understanding, reality and imitation began to blur. This marked the early stages of what might be called reality collapse: not the disappearance of truth, but the erosion of the shared reference points that allow truth to function socially. In this sense, reality collapse names a systemic failure of shared sensemaking – a condition in which facts still exist but no longer coordinate collective understanding or action. That failure creates an opening in which narrative control can substitute for shared reality.
In response, the paper proposes a counterpart: the Teilhard Test, named for Pierre Teilhard de Chardin, the philosopher of the noosphere. At its core, the Teilhard Test asks a deceptively simple question:
Do the systems we build increase humanity’s capacity for reflection, coordination, and responsibility at scale, without collapsing diversity or concentrating meaning-making power?
Convergence is often misunderstood as consensus or uniformity. That is not what Teilhard had in mind. Convergence means increasing coordination without erasing difference. It means integration without collapse.
The modern technosphere shows signs of adolescence. Power is expanding faster than responsibility. Speed is outpacing reflection. Systems scale before we understand their consequences.
The Teilhard Test is an attempt to translate a simple intuition into a usable lens: as our power grows, does our capacity for responsibility grow with it?
If not, instability is not an accident. It is the expected outcome.
The Teilhard Test can be applied through four questions:
Reflexivity – Does the system help us see what we are doing while we are doing it?
Example: A tool that shows where information comes from, highlights uncertainty, or lets you trace how a conclusion was reached helps you think more clearly in real time.
Convergence without coercion – Does it enable coordination across difference?
Example: A platform that helps people with different views collaborate on a shared map, document, or decision – without forcing them to agree – builds real coordination rather than artificial consensus.
Preservation of differentiation – Does it protect diversity and agency?
Example: Systems that let individuals and communities express distinct perspectives, rather than flattening them into trends or averages, preserve the richness needed for collective intelligence.
Alignment of power and responsibility – Does responsibility scale with capability?
Example: If a system can influence millions, does it make that influence visible and accountable? Tools that expose impact, enable feedback, and allow correction help power grow responsibly rather than dangerously.
These are not abstract criteria. They are practical diagnostics for evaluating the direction of our technologies.
Personal computers expanded individual intelligence.
The Internet connected humanity into a shared network.
Social media amplified attention but fragmented understanding.
AI systems increase capability while often reducing transparency.
Each step increased power faster than responsibility.
Adulthood is not about control. It is about responsibility.
If our systems do not increase our capacity to reflect, coordinate, and care for consequences, they will amplify fragmentation and instability.
The question is not whether we can build powerful systems.
It is whether we can grow into them.
Yet this framing assumes we can still reliably distinguish what is human from what is not. That assumption is beginning to break down.
In the systems we are building, the boundary itself is starting to blur.
Part II – At the Edge of Reality: The Reverse Turing Crisis explores what happens as this boundary begins to dissolve.
Ordinals.com links open normally. For other sites, copy the URL below and paste it into your browser’s address bar (many inscription viewers block external links).