Essays & Long-Form Writing

Questions I keep returning to — about spatial intelligence, systems that actually deploy, and what it means to build technology that respects human agency. Essays are published here as they're ready.

4 essays published · 6 queued · 4 categories

Research-to-Product Translation

From 50 prototypes to one research framework: what VeeRuby taught me · 2 min read

Featured

Every XR project starts with the same optimism. Define the use case, build the experience, deploy it, iterate. Simple. Repeatable.

We did this fifty times.

What I didn't expect was that the fifty-first prototype would feel exactly like the first. Not because we were less experienced — quite the opposite. But because every project started from zero. The user's spatial history: gone. Their cognitive load from two hours ago: invisible. The fact that they had done this task four times in different environments: unknown to the system.

The systems were competent. The interactions were polished. But they didn't know anything. They couldn't remember. They couldn't adapt.

I started calling this the reset problem. Every session in XR begins at neutral. The AI has no memory of you. The interface doesn't know if you're an expert or a first-time user. The environment can't distinguish between someone who's been here thirty times and someone who just put on a headset for the first time.

In every other medium we take context for granted. A colleague who's worked with you for a year doesn't over-explain. Your phone knows it's Tuesday and you're commuting. Only XR insists on beginning each session as if the past has no meaning.

The recognition that this was a structural problem — not a features problem — is what led to Harmony. The gap wasn't in the quality of individual XR applications. It was in the absence of connective tissue between them. No shared spatial memory. No cross-session adaptation. No system-level understanding of who the person was or what they needed.

Harmony is the connective tissue. Not another XR application — the infrastructure that lets XR applications remember, reason, and adapt. Together. Continuously. In a way no single app can achieve alone.
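To make "connective tissue" concrete, here is a minimal sketch of the cross-session state the reset problem erases. The names (UserContext, SpatialMemory, is_familiar) and the structure are illustrative assumptions, not Harmony's actual API.

```python
# A minimal sketch, not Harmony's API: all names here are assumptions.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class SpatialMemory:
    """What the system remembers about one environment across sessions."""
    environment_id: str
    visit_count: int = 0
    last_visit: Optional[datetime] = None

@dataclass
class UserContext:
    """Cross-session state that survives the headset coming off."""
    user_id: str
    task_completions: dict = field(default_factory=dict)  # task_id -> count
    spaces: dict = field(default_factory=dict)            # env_id -> SpatialMemory

    def record_visit(self, environment_id: str) -> None:
        space = self.spaces.setdefault(environment_id, SpatialMemory(environment_id))
        space.visit_count += 1
        space.last_visit = datetime.now()

    def is_familiar(self, environment_id: str, threshold: int = 5) -> bool:
        """A session that starts here no longer starts at neutral."""
        space = self.spaces.get(environment_id)
        return space is not None and space.visit_count >= threshold
```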

I needed fifty prototypes to understand that the thing I was actually trying to build had never existed. That's what VeeRuby taught me.

The gap between lab demonstrations and enterprise reality · 2 min read

New

A lab demonstration of an XR system is a controlled argument. The researcher controls the lighting, the user, the task, the hardware, the room. When it works, it proves that the thing is possible. That's all it proves.

Enterprise deployment is something else entirely. The lighting changes by hour and season. Users vary in height, body type, handedness, and prior experience with technology. Network conditions fluctuate. The IT department has policies. The hardware is two generations old. The task the system was designed for turns out to be only 40% of what people actually use it for. And the researcher isn't in the room.

I spent five years between these two worlds — running a research lab and deploying XR systems at scale for enterprise clients. The gap between them is not a failure of research. It's a structural property of how controlled experiments work. You eliminate variables to test hypotheses. Deployment is the return of every variable you eliminated.

What I learned is that the gap isn't bridged by making research less rigorous — it's bridged by making research anticipate deployment conditions. That means designing experiments that include variance. It means testing with populations that don't look like graduate students. It means building an evaluation methodology that holds up when the experimenter leaves the room.

Harmony's evaluation framework is designed with this explicitly in mind. RQ2 — can a unified system demonstrate measurable improvement over extended interaction timescales — is not a lab question. It's a deployment question phrased in research terms. That's the bridge. Not easier standards, but standards that were built for the right environment.
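As an illustration of what "a deployment question phrased in research terms" can look like operationally, here is a hypothetical RQ2-style metric: the trend in task completion time across sessions, rather than within a single lab visit. The function and the numbers are my own assumptions, not the framework's actual evaluation code.

```python
# Hypothetical sketch of an RQ2-style measurement: is the user getting
# measurably better across sessions? Not the actual evaluation framework.

def improvement_slope(session_times: list) -> float:
    """Least-squares slope of task completion time against session index.

    Negative: users improve over extended interaction timescales.
    Near zero: the system resets the learning curve every session.
    """
    n = len(session_times)
    if n < 2:
        raise ValueError("need at least two sessions to measure a trend")
    mean_x = (n - 1) / 2  # session indices are 0..n-1
    mean_y = sum(session_times) / n
    cov = sum((i - mean_x) * (t - mean_y) for i, t in enumerate(session_times))
    var = sum((i - mean_x) ** 2 for i in range(n))
    return cov / var

# Completion times in seconds across eight sessions (made-up data).
print(improvement_slope([212.0, 190.5, 184.0, 171.2, 169.8, 160.0, 158.3, 151.9]))
```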

1 more essay coming — Why research rigor and deployment experience are not in tension

Queued

Systems Thinking & Architecture

Why spatial intelligence needs system-level design, not feature-level thinking · 2 min read

In Progress

Feature-level thinking goes like this: add eye tracking. Add hand gestures. Add voice commands. Add biometric input. Each addition is defensible. Each is technically interesting. And each makes the system more complex without making it more intelligent.

Intelligence in XR isn't the sum of its sensors. It's what emerges when those sensors are integrated into a coherent model of who the person is, what they're doing, and what they need next. A system that can read your gaze but can't connect it to your current cognitive load or your history from last Tuesday isn't intelligent — it's well-instrumented.

The difference between a well-instrumented system and an intelligent one is architecture. You can't add architecture after the fact. You have to design for it before the first line of code, before the first sensor, before the first use case. This is what system-level design means in spatial computing: starting with the question of what the system should know and how it should reason, not with what features it should have.
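A sketch of the difference, under invented names: sensors write into one shared model of the person, and features consult the model rather than the raw sensors. PersonModel, the fusion weights, and the thresholds are all assumptions for illustration.

```python
# System-level sketch: every sensor updates one coherent model, and
# features query the model. All names and numbers are assumptions.
from dataclasses import dataclass

@dataclass
class PersonModel:
    expertise: float = 0.0       # 0 = first-timer, 1 = expert, learned over sessions
    cognitive_load: float = 0.0  # fused estimate, not any single sensor reading

    def ingest(self, source: str, reading: float) -> None:
        """Sensors write into a shared model instead of their own silos."""
        if source == "gaze_entropy":
            # Scattered gaze nudges the load estimate up (exponential smoothing).
            self.cognitive_load = 0.8 * self.cognitive_load + 0.2 * reading
        elif source == "task_history":
            self.expertise = max(self.expertise, reading)

    def guidance_level(self) -> str:
        """Features ask the model, not the sensors."""
        if self.expertise > 0.7 and self.cognitive_load < 0.5:
            return "minimal"
        return "step-by-step"

model = PersonModel()
model.ingest("task_history", 0.9)  # completed this task many times before
model.ingest("gaze_entropy", 0.2)  # calm, focused gaze
print(model.guidance_level())      # -> "minimal"
```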

Harmony is an attempt to operationalize this principle. Its architecture begins with context — not capabilities. The full argument is in progress.

2 more essays coming — The orchestration challenge · Architectural lessons from enterprise XR

Queued

XR, AI & Human-Computer Interaction

Most XR systems treat the user as a camera position — Harmony treats them as a collaborator · 2 min read

In Progress

The dominant metaphor for the XR user is a camera. A position in space. A viewport through which content is rendered. We optimize for what the camera can see, how quickly it can rotate, how accurately it can be tracked.

This is the right metaphor for graphics. It is the wrong metaphor for human-computer interaction.

A collaborator isn't a position. A collaborator has history, preferences, fatigue, intention, context — and a continuous relationship with the systems they use. A collaborator expects the system to notice when something is wrong and respond accordingly. A collaborator grows and changes, and expects their tools to grow with them.

Treating users as collaborators instead of camera positions changes everything about system design. It means the system must maintain a model of the person, not just a model of the scene. It means memory is as important as perception. It means "what does the user see?" is less important than "what does the user need?"

This is the architectural commitment Harmony makes. Every layer — Perception, Cognition, Memory, Action — is organized around who the person is and what they need. Not what the hardware can render. The full argument is being written.
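The four layer names below come from the essay; how they compose is my own illustrative guess. The point of the sketch is that Cognition reasons over what Memory knows about the person, not over the scene alone.

```python
# Illustrative only: the layer names are from the essay, the internals
# are invented assumptions about how they might fit together.

class Perception:
    def sense(self) -> dict:
        """A model of the scene: what is in front of the person right now."""
        return {"gaze_target": "valve_panel", "hand_pose": "reaching"}

class Memory:
    def recall(self, user_id: str) -> dict:
        """A model of the person: history that outlives the session."""
        return {"times_done": 18, "prefers_audio_guidance": True}

class Cognition:
    def decide(self, scene: dict, person: dict) -> str:
        """Reason over person and scene together, not the scene alone."""
        if person["times_done"] > 10:
            return "surface_shortcut"
        return "show_full_walkthrough"

class Action:
    def act(self, decision: str) -> None:
        print(f"adapting interface: {decision}")

perception, memory, cognition, action = Perception(), Memory(), Cognition(), Action()
action.act(cognition.decide(perception.sense(), memory.recall("user-42")))
```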

2 more essays coming — Memory-driven adaptation · Context is the interface

Queued

Ethics, Society & Human Impact

Adaptation must be explainable, opt-in, and auditable · 3 min read

New

Most AI adaptation is invisible. The system changes its behavior based on what it's learned about you, and you have no idea it's happening. In many contexts, this is fine — even desirable. You don't need to understand why Netflix changed its recommendations. The stakes are low and the relationship is transactional.

Spatial computing is not that context.

When an XR system adapts, it changes the environment you physically inhabit. It changes where UI elements appear in space, how guidance is delivered, what information is surfaced and when. It mediates your relationship with your physical surroundings. The stakes are not low, and the relationship is not transactional — it's continuous, embodied, and deeply personal.

This is why I argue that adaptation in spatial computing must satisfy three properties that go beyond standard AI ethics discourse.

Explainable means the system can show you why it adapted — not a probability distribution, but a human-readable reason. "I showed you less detail because you've completed this task eighteen times and your response times suggest you don't need the scaffolding anymore."

Opt-in means adaptation never happens without your awareness, and you can override or disable any adaptive behavior at any time without penalty.

Auditable means there is a complete, accessible record of every adaptation decision — what changed, when, and why — so that you can review, question, and contest the system's model of you.
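Here is a minimal sketch of what the three properties could look like as code, assuming an invented AdaptationRecord shape and consent mechanism; none of this is Harmony's actual design.

```python
# A minimal sketch, not Harmony's design: field names and the consent
# mechanism are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AdaptationRecord:
    """Auditable: every adaptation becomes a durable, reviewable record."""
    timestamp: datetime
    what_changed: str
    reason: str  # Explainable: a human-readable reason, not a probability

@dataclass
class AdaptationPolicy:
    consented: set = field(default_factory=set)  # Opt-in: nothing adapts by default
    audit_log: list = field(default_factory=list)

    def adapt(self, behavior: str, what_changed: str, reason: str) -> bool:
        if behavior not in self.consented:
            return False  # Never adapt without awareness and consent
        self.audit_log.append(AdaptationRecord(
            timestamp=datetime.now(timezone.utc),
            what_changed=what_changed,
            reason=reason,
        ))
        return True

policy = AdaptationPolicy(consented={"detail_level"})
policy.adapt(
    behavior="detail_level",
    what_changed="reduced step-by-step scaffolding",
    reason="task completed eighteen times; response times suggest the "
           "scaffolding is no longer needed",
)
```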

These aren't constraints on what the system can do. They're constraints on how it does it. A system that adapts explainably, with opt-in consent and a full audit trail, can be more aggressive in its adaptation than an opaque system — because the user has the information they need to trust it.

This is Branch 3 of the Harmony research agenda. Not an add-on. A first-class architectural requirement that shapes how the entire stack is designed.

1 more essay coming — Human agency as a design constraint, not an afterthought

Queued