This essay summarizes a series of discussions and design sessions held between February 2025 and February 2026. The ideas here emerged from conversations, sketches, dead ends, and occasional breakthroughs rather than from a single sitting. What follows is an attempt to give those fragments a coherent shape.


The exhaustion machine

In early 2026, Anthropic launched Cowork — an AI agent that can autonomously manage files, draft contracts, build presentations. Around the same time, OpenAI released a coding agent that works independently for hours, shipping production-ready code. These aren't incremental improvements. They're steps toward a world where software agents replace large portions of cognitive labor.

The reaction was predictable: excitement from technologists, anxiety from knowledge workers. But underneath both, there's something else — a kind of exhaustion that isn't about any particular job being threatened. It's broader than that.

Byung-Chul Han described it well over a decade ago: we've moved from a society that tells us "you must" to one that tells us "you can" — and the result is that we exploit ourselves more effectively than any external authority ever could. Every efficiency gain raises the baseline of expected output. Every tool that "saves time" creates new time that must be filled with more production.

This essay comes from a different question. It emerged from a design exploration — still early, still uncertain — that asks: What if computing were designed not to maximize output, but to sustain the conditions for human growth? What if artifacts were byproducts of that process rather than its goal?

The question isn't new. Douglas Engelbart proposed "augmenting human intellect" in 1962 — not a product, but a framework for amplifying how we think. Alan Kay conceived the Dynabook as a personal medium for expression, not a productivity device. Seymour Papert argued for environments where children program the computer, not the other way around. Ivan Illich wrote about tools that work with people, not tools that work for them.

These ideas were largely overtaken by commercial computing — the productivity application, the file system, the artifact as unit of value. The architectural choices that won weren't inevitable. They served commercial logic. And they gradually displaced a richer vision of what computing could be.

What I'm trying to think through is something I've been calling a cultivation architecture: a set of principles for redirecting the capabilities of current AI models — capabilities developed almost entirely for productivity — toward a different purpose. Not efficiency, but growth. Not automation, but amplification.

Principles

Seven principles have emerged from these design conversations. I don't present them as finished ideas — more as directions that seem worth exploring.

Gesture and space as cognitive operations

The way we interact with information is spatial and physical. In this kind of architecture, gestures — pinching, dragging, juxtaposing — wouldn't be shortcuts for software commands. They'd be editorial acts on your own thinking. What matters is at the center of your attention; what might matter lives at the edges; what no longer matters fades. This draws on Bret Victor's argument that computing should engage our full perceptual range, not reduce us to eyes-on-screen and fingers-on-keyboard.

Symbiotic infrastructure

This can't be an app sitting on top of a conventional OS. It needs to operate at the depth of an operating system — an event bus through which everything flows: input, background processes, memory, state inference. The distinction is between a layer that mediates everything and an application that competes for attention alongside other applications.
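To make the idea concrete, here is a minimal sketch of such an event bus in Python. Everything in it is an illustrative assumption, including the `EventBus` class, the topic names, and the event shapes; the point is only that input, memory, and state inference subscribe to one shared channel instead of living in separate applications.

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """One shared channel: input, memory, and inference all flow through it."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[dict[str, Any]], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict[str, Any]], None]) -> None:
        self._handlers[topic].append(handler)

    def publish(self, topic: str, event: dict[str, Any]) -> None:
        # Deliver the event to every component that cares about this topic.
        for handler in self._handlers[topic]:
            handler(event)

bus = EventBus()
bus.subscribe("input.keystroke", lambda e: print("memory layer saw:", e))
bus.subscribe("input.keystroke", lambda e: print("state inference saw:", e))
bus.publish("input.keystroke", {"char": "a", "pause_ms": 420})
```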

Radical transparency

Every inference the system makes about the user should be visible. Every adjustment to its behavior, legible. The system describes what it observes — "longer pauses between sentences," "accelerated writing rhythm" — but never interprets. It doesn't say "you are anxious." The user makes the interpretation. The user owns the model of themselves. Without this, everything else becomes a sophisticated form of manipulation.
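As a sketch of what this could mean structurally, consider a hypothetical `Observation` record: the system may write only a description and its own adjustment, while the interpretation field is reserved for the user. All names here are assumptions for illustration, not a specification.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Observation:
    description: str                 # e.g. "longer pauses between sentences"
    adjustment: str                  # e.g. "held back peripheral suggestions"
    timestamp: float = field(default_factory=time.time)
    user_interpretation: str | None = None   # only the user ever sets this

log: list[Observation] = []
log.append(Observation(
    description="accelerated writing rhythm over the last ten minutes",
    adjustment="entered silence mode",
))
# Later, the user, not the system, supplies the meaning:
log[-1].user_interpretation = "I was rushing to beat a deadline"
```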

Intent awareness

The system should understand intention, not just commands. It learns patterns over time — not to predict behavior for engagement metrics, but to calibrate the quality and timing of its own presence.

Evolutionary capacity

Two dimensions: the system gets better at serving the user, and the system expands what the user thinks is possible. The second is harder and more important. Most personalization optimizes for confirmation — recommendation loops, filter bubbles. A system that genuinely amplifies cognition must sometimes challenge, disrupt, and surprise.

Displaced productivity

This is the most counterintuitive principle. The focus isn't on producing outputs, but on sustaining the process of learning, thinking, and growing. Outputs — texts, analyses, artifacts — are natural byproducts. The metaphor is cultivation rather than manufacturing: you tend a garden not to extract vegetables, but because the garden is alive and so are you. The vegetables come.

Adaptability to the user as dynamic state

The system models the user not as a static profile but as a shifting configuration — attention, energy, focus, thematic stability. It calibrates itself continuously, adjusting not just content but its own mode of presence.
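A minimal sketch of what that might look like, assuming four illustrative dimensions normalized to [0, 1] and simple exponential smoothing; the dimensions, the ranges, and the `alpha` parameter are all assumptions, not a design commitment.

```python
from dataclasses import dataclass

@dataclass
class UserState:
    # Each dimension is a continuously re-estimated value in [0, 1].
    attention: float = 0.5
    energy: float = 0.5
    focus: float = 0.5
    thematic_stability: float = 0.5

def update(state: UserState, signal: UserState, alpha: float = 0.2) -> UserState:
    """Blend a fresh estimate into the running state; nothing is ever fixed."""
    def blend(old: float, new: float) -> float:
        return (1 - alpha) * old + alpha * new
    return UserState(
        attention=blend(state.attention, signal.attention),
        energy=blend(state.energy, signal.energy),
        focus=blend(state.focus, signal.focus),
        thematic_stability=blend(state.thematic_stability, signal.thematic_stability),
    )
```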

Together, these describe something that doesn't exist yet as a product category. Not an AI assistant, not a productivity suite, not a knowledge manager. Something closer to a cognitive environment — a space designed to sustain fertile mental states, where the system thinks alongside you.

Possible interfaces

What follows are ideas I've been exploring for how this might actually work. I want to be honest: I don't have full clarity on any of these. They're possible paths, not finished designs. Some may turn out to be impractical or wrong. But they feel worth sharing because they illustrate what "computing as cultivation" could look like in practice.

A central field instead of windows. Imagine a single continuous surface — no tabs, no application switching. Your current focus occupies the center. Past fragments, resonances surfaced by the system, and background results inhabit the periphery. The center is attention; the periphery is possibility. The field might have different states: empty and receptive, focused on one thing, constellated with multiple elements for comparison, or diffuse and exploratory. I'm drawn to this idea because it takes Mark Weiser's calm technology seriously — information in the periphery, shifting to the center only when relevant.
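Here is one hypothetical way to model such a field, with the four states above and an explicit center/periphery split. The class and method names are mine, chosen only to make the shape of the idea visible.

```python
from enum import Enum, auto

class FieldState(Enum):
    RECEPTIVE = auto()     # empty, waiting
    FOCUSED = auto()       # a single element at the center
    CONSTELLATED = auto()  # several elements held together for comparison
    DIFFUSE = auto()       # exploratory, periphery emphasized

class Field:
    def __init__(self) -> None:
        self.state = FieldState.RECEPTIVE
        self.center: list[str] = []     # what holds attention now
        self.periphery: list[str] = []  # past fragments, resonances, results

    def bring_to_center(self, fragment: str) -> None:
        """A gesture promotes a peripheral fragment into the center."""
        if fragment in self.periphery:
            self.periphery.remove(fragment)
        self.center.append(fragment)
        self.state = (FieldState.CONSTELLATED if len(self.center) > 1
                      else FieldState.FOCUSED)
```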

Memory as editorial process, not storage. This might be the most distinctive idea in the project. In conventional systems, you save a file and it persists forever with equal weight. What if memory worked more like curation? Every fragment enters with initial weight. Over time, that weight decays — like human memory. But what you revisit gains weight. What connects to new material gains retroactive relevance. What you abandon fades. The system forgets what you forget. And because it periodically generates narrative summaries — not just lists of facts, but the story of a thread of thinking — the memory has texture rather than flatness. The phrase that keeps coming back is: treat memory as an editorial process, not as storage.
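A toy sketch of the weighting scheme this implies, assuming exponential decay with an arbitrary 30-day half-life and ad hoc reinforcement constants; every number here is a placeholder, not a tuned value.

```python
import math
import time

HALF_LIFE_S = 30 * 24 * 3600  # assumed half-life: 30 days

class Fragment:
    def __init__(self, text: str) -> None:
        self.text = text
        self.weight = 1.0             # every fragment enters with initial weight
        self.last_touched = time.time()

    def current_weight(self, now: float | None = None) -> float:
        """Weight decays exponentially since the last interaction."""
        elapsed = (now if now is not None else time.time()) - self.last_touched
        return self.weight * math.exp(-math.log(2) * elapsed / HALF_LIFE_S)

    def revisit(self) -> None:
        """What you revisit gains weight: apply decay, then reinforce."""
        self.weight = self.current_weight() + 0.5
        self.last_touched = time.time()

    def linked_from_new_material(self) -> None:
        """A new connection confers retroactive relevance."""
        self.weight = self.current_weight() + 0.25
        self.last_touched = time.time()
```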

Modes of presence, not a single behavior. I've been thinking about a system that shifts between four modes. Silence: observing but not intervening, the default during flow states. Resonance: surfacing fragments from the past that connect to the present — echoes, not retrieval. Friction: questioning, pointing out tensions, generating counterarguments — never imposed, only by invitation. Mirror: showing you your own patterns over time, available on request. The transitions between modes would depend on what the system observes about your cognitive state — not emotions, but functional signals like focus level and thematic stability.
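One hedged way to sketch the transition logic, assuming the functional signals are normalized to [0, 1] and that friction and mirror are gated on explicit invitation. Every threshold is an illustrative guess.

```python
from enum import Enum, auto

class Mode(Enum):
    SILENCE = auto()    # observing, not intervening
    RESONANCE = auto()  # surfacing echoes from the past
    FRICTION = auto()   # counterarguments, by invitation only
    MIRROR = auto()     # patterns over time, on request only

def next_mode(focus: float, thematic_stability: float,
              friction_invited: bool, mirror_requested: bool) -> Mode:
    # Explicit requests always win; the system never imposes friction.
    if mirror_requested:
        return Mode.MIRROR
    if friction_invited:
        return Mode.FRICTION
    if focus > 0.8:
        return Mode.SILENCE      # deep flow: stay out of the way
    if thematic_stability < 0.4:
        return Mode.RESONANCE    # wandering between themes: offer echoes
    return Mode.SILENCE
```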

An artifact surface. When something is ready to leave the system and enter the world, it passes through this layer. The key principle: the artifact is a byproduct, not a goal. The system never pressures you to "finish" or "export." An interesting emergent possibility: because everything is logged and memory accumulates over time, the system could generate meta-accounts of the creative process itself — books about the subjects you've worked on, reports on how your thinking has evolved.

None of these are technical impossibilities. The foundation models we have today — with their reasoning, long-context comprehension, and tool use — already possess the capabilities needed. They've just been pointed almost entirely at productivity. The same model that autonomously writes legal briefs could, differently orchestrated, generate counterarguments to your developing thesis. The same reasoning that triages a sales pipeline could detect patterns in your intellectual development over months. The question is not one of power but of purpose.
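To illustrate "purpose, not power," here is a deliberately schematic sketch in which the only difference between the two orchestrations is the framing of the request. `call_model` is a stand-in for any foundation-model API, not a real client.

```python
def call_model(prompt: str) -> str:
    # Stand-in for any foundation-model API; not a real client.
    return f"<model response to: {prompt[:48]}...>"

def productivity_orchestration(draft: str) -> str:
    # The usual framing: finish the artifact.
    return call_model(f"Complete this document, production-ready:\n{draft}")

def cultivation_orchestration(draft: str) -> str:
    # Same capability, different purpose: strengthen the thinking instead.
    return call_model(
        "Do not complete or polish this draft. Surface the strongest "
        "counterarguments to its developing thesis, and name any tensions "
        f"the author may not have noticed:\n{draft}"
    )
```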

The ethics of observation

A system that infers cognitive states and adjusts its behavior is, by any reasonable definition, a surveillance system. The question isn't whether it watches — it must, to function — but for whom it watches, and toward what end.

The prevailing model is algorithmic determination: platforms observe behavior to predict and shape it, to increase engagement, to serve the platform's interests. The user is modeled but never shown the model.

What I'm proposing is the opposite: total transparency between user and system. Every inference visible. Every adjustment legible. The user owns their model — it's not a proprietary asset. They can inspect it, edit it, contest it, export it.

The system describes; it does not interpret. It reports "your writing rhythm has slowed," not "you are losing focus." This boundary between observation and prescription is the ethical architecture: not a policy added after the fact, but a design principle.

I'll be honest: this is commercially disadvantageous. Systems that interpret for the user — that diagnose, recommend, nudge — are more engaging, more monetizable. A system that merely describes and lets you interpret demands more of you and produces less measurable engagement.

The productization problem

A cultivation architecture fits none of the standard categories. Its purpose isn't to serve millions, but to serve one person deeply. Its value isn't in tasks executed but in something harder to measure: the growth of a person's capacity to think and create.

Can an architecture designed for cultivation be productized without contradicting its own principles? I genuinely don't know. Subscription models incentivize stickiness. Platform models incentivize extraction. Agent models incentivize throughput. This architecture, by design, incentivizes none of these.

My inclination is that it has to be open source. The principles of transparency and user ownership are structurally incompatible with proprietary control. But open source introduces its own tension: if the mechanisms work, they'll be available for appropriation by the very paradigm they resist. That history is well-documented — Linux powering cloud monopolies, the open web enabling surveillance advertising.

I don't have a resolution. What I can say is that the design principles themselves — transparency, user ownership, non-prescription — are also the best defense against cooptation. An architecture whose ethics are structural rather than cosmetic is harder to hollow out. But "harder" is not "impossible."

An invitation

This project exists, today, as design documents, architectural notes, and the record of conversations in which these ideas took shape. It's not a product in development. It's a question being explored: whether computing can be redesigned around human growth rather than output maximization, at a moment when AI capabilities make both paths possible.

The tension at the heart of it came from one of those early conversations:

"How to create an instrument that does not transform the user into a means?"

I don't have a definitive answer. But the direction — computing as cultivation, not extraction — feels worth pursuing. If it resonates, I'd welcome the conversation.

marcelo@collecto.com.br