There's an open-source tool called Pake that does something deceptively simple: it takes any URL and turns it into a standalone desktop application. Not a browser shortcut disguised as an app — an actual native window, with its own icon, its own profile, its own memory footprint. Built with Rust and Tauri, it uses the operating system's native WebView instead of bundling an entire Chromium instance. The result weighs a fraction of what an equivalent Electron app would.

It's a clean piece of engineering. But the interesting part isn't the tool itself. It's what it reveals about a gap in the current AI landscape — a gap that's about to close.

The format question nobody's asking

Right now, the dominant interaction model with AI is conversational. You type, the model responds. Sometimes it writes code. Sometimes it generates an image. Occasionally it calls a tool or runs a script. But the output is almost always text flowing through a chat interface, or at best a file dropped into a directory.

Consider what Pake does from a different angle. It solves a format problem. The content — a web application — already exists. What's missing is the right container: a desktop window instead of a browser tab. The intelligence required is minimal; the value created is real. Anyone who's drowned in thirty open tabs while trying to use three web tools simultaneously understands this instantly.

Now imagine an AI agent that reasons not just about what to produce, but about how to package it. An agent that, given a task, can decide whether the answer should be a CLI utility, a desktop widget, a browser extension, a notification daemon, a local API, a PDF report, or a full interactive application — and then build the appropriate one.

This is self-tooling. And it's the most underdeveloped capability in the current generation of AI agents.

Software as a variable, not a category

We've spent decades treating application types as fixed categories. You build a web app or a mobile app or a CLI tool. Each requires different frameworks, different deployment models, different expertise. The category is decided early and rarely questioned.

But the categories are artificial. They're artifacts of distribution constraints and development economics, not of user needs. A weather dashboard doesn't need to be a web app — it needs to show you the forecast. Whether that happens in a browser tab, a menu bar widget, a terminal command, or a morning briefing in your inbox is a packaging decision, not an architectural one.

When you give an agent the ability to reason about format — to treat the application type as a variable rather than a constant — the design space explodes. The same underlying logic (fetch weather data, apply user preferences, format output) can materialize as entirely different software depending on context, device, and individual workflow.
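The separation described above — one core, many materializations — can be sketched as a format-agnostic function paired with interchangeable renderers. Everything here is hypothetical and stubbed for illustration; a real tool would fetch live weather data and the renderer set would be chosen by the agent, not hard-coded:

```python
# Sketch: the same underlying logic materialized as different "applications".
# All names are invented; get_forecast is stubbed with static data.

def get_forecast(city: str) -> dict:
    # Core logic: fetch and normalize. A real version would call a weather API.
    return {"city": city, "temp_c": 21, "condition": "partly cloudy"}

def render_cli(f: dict) -> str:
    # Terminal one-liner for keyboard-driven users.
    return f"{f['city']}: {f['temp_c']}°C, {f['condition']}"

def render_menubar(f: dict) -> str:
    # Compact string suitable for a menu bar widget.
    return f"{f['temp_c']}° {f['condition']}"

def render_email(f: dict) -> str:
    # Morning-briefing paragraph for delivery to an inbox.
    return (f"Good morning! Today in {f['city']}: "
            f"{f['condition']}, around {f['temp_c']}°C.")

# The packaging decision selects a renderer; the core never changes.
RENDERERS = {"cli": render_cli, "menubar": render_menubar, "email": render_email}

def materialize(city: str, fmt: str) -> str:
    return RENDERERS[fmt](get_forecast(city))
```

The point of the sketch is where the variability lives: `get_forecast` is constant, and only the final packaging step differs per user.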

This isn't hypothetical. Every component already exists. LLMs generate working code in any mainstream language. Frameworks like Tauri, Electron, and platform-native toolkits are well-documented enough to be targets for code generation. Browser extension APIs, system tray utilities, CLI frameworks — all within reach of a capable model. The missing piece is the reasoning layer that connects a user's actual need to the right output format.

Personalization at a resolution that didn't exist

The implication goes further than convenience. It changes the unit of software distribution.

Traditional software serves millions of users with one interface. Customization exists, but within the boundaries the developer anticipated. You can change themes, toggle features, rearrange panels. You cannot fundamentally alter what the application is.

Self-tooling agents dissolve this constraint. The application is generated, not distributed. It can be shaped to a single user's workflow, preferences, and context — not as a configuration layer on top of a generic product, but as a distinct artifact built from scratch. Two users with the same underlying need might receive entirely different applications: one gets a keyboard-driven CLI tool because they live in the terminal; another gets a visual dashboard because they think spatially. Neither is a degraded version of some canonical app. Each is the right tool for its user.

This is personalization at a resolution that has never been economically viable. Building bespoke software for individual users required either enormous budgets or acceptance of the constraints of no-code platforms. Agents that reason about format and generate purpose-built tools make it viable at marginal cost.

The agents aren't there yet — but the trajectory is clear

Current AI coding assistants are impressive but format-blind. They'll write whatever code you ask for, in whatever framework you specify. The choice of what to build — web app, script, extension, widget — remains entirely with the user. The agent fills in the implementation; the human decides the form.

The next step is agents that participate in that decision. Given "I need to monitor three stock prices during my workday," a self-tooling agent would evaluate the user's platform, workflow patterns, and notification preferences — then produce a menu bar widget, or a terminal watcher, or a scheduled email, depending on which format actually fits. The format becomes part of the solution, not a prerequisite for asking the question.
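That decision step can be made concrete as a small mapping from user context to output format. The rules and context fields below are invented for illustration — a real agent would infer them from the user's environment and history rather than from a hand-written table:

```python
# Sketch of a format-decision step: map a task plus user context to a format.
# Field names and rules are hypothetical; this is the shape of the decision,
# not a proposed implementation.

from dataclasses import dataclass

@dataclass
class UserContext:
    platform: str            # e.g. "macos", "linux", "windows"
    lives_in_terminal: bool  # does the user work mostly in a shell?
    wants_notifications: bool

def choose_format(task: str, ctx: UserContext) -> str:
    # Monitoring tasks favor ambient, glanceable formats.
    if "monitor" in task.lower():
        if ctx.lives_in_terminal:
            return "terminal-watcher"
        if ctx.platform == "macos":
            return "menubar-widget"
        if ctx.wants_notifications:
            return "notification-daemon"
    # Fallback: a plain script the user runs on demand.
    return "cli-script"
```

For "I need to monitor three stock prices," the same task yields a terminal watcher for a shell-centric user and a menu bar widget for a macOS user who thinks spatially — the format is computed, not assumed.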

Several converging trends accelerate this. Cross-platform frameworks are maturing — Tauri 2.0, for instance, builds desktop and mobile from a single codebase with native performance. WebAssembly enables near-native execution in any environment. LLM function calling and tool use provide the plumbing for agents to interact with build systems, package managers, and deployment targets. And the models themselves keep improving at sustained, multi-file code generation.
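The "plumbing" of function calling usually takes the form of a JSON-schema description of a tool the model may invoke. A hypothetical `package_application` tool — nothing here is a real API, it only shows the shape — might be declared like this:

```python
# A hypothetical tool declaration in the common JSON-schema style used for
# LLM function calling. It illustrates how an agent could hand a build
# request to a toolchain; the tool name and fields are invented.

package_application_tool = {
    "name": "package_application",
    "description": "Build generated source code into a chosen output format.",
    "parameters": {
        "type": "object",
        "properties": {
            "format": {
                "type": "string",
                "enum": ["cli", "desktop", "browser-extension", "daemon"],
            },
            "entry_point": {"type": "string"},
            "target_platform": {"type": "string"},
        },
        "required": ["format", "entry_point"],
    },
}
```

Once a declaration like this is registered, the format decision becomes just another argument the model fills in — which is exactly what makes format a variable rather than a constant.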

The harder problem is taste. Choosing the right format requires understanding not just the task but the user — their habits, their environment, their tolerance for complexity. This is where self-tooling intersects with the broader challenge of agent personalization: the system must model the user, not just the problem.

What this means

Pake is a well-made wrench. It solves one specific format translation — web to desktop — and does it elegantly. But the pattern it represents is far more general. The ability to reshape software around individual needs, to treat the application format as a design decision made at runtime rather than at project inception — this is a capability that agents are uniquely positioned to deliver.

We're not there yet. But the components are in place, the economics are favorable, and the user need is obvious to anyone who has ever wished their tools fit their workflow instead of the other way around.

The era of one-size-fits-all software has been long. It's ending not because of a new framework or platform, but because the builder is becoming something that understands who it's building for.


Marcelo Kanhan writes about technology, AI, and the future of work. marcelo@collecto.com.br