Most AI products today are optimized for responses, not outcomes. They rely on chat, expose too much underlying structure, and require constant user direction. As models grow more capable and expectations rise with each breakthrough, users don't want to manage prompts or workflows; they want to review outcomes.
The challenge is not capability, but behavior. If an AI system can take action over time, what should it do without being asked? When should it surface decisions? How do you make its behavior understandable enough to trust, without overwhelming the user with internal complexity?
I designed an Assistant that operates as a system, not an interface. Users interact through intent, while the Assistant plans, executes, and adapts in the background. Instead of exposing agents or tools, the system absorbs that complexity and surfaces only what matters: actions taken, decisions made, context that informed those decisions, and items that require review.
The result is a shift from prompting to collaborating. The Assistant doesn't just respond; it moves work forward. It identifies opportunities, executes tasks, and brings the user in at the right moments, with enough context to understand and act, without needing to manage the system itself.
A system that operates before you arrive
The Assistant runs continuously, identifying opportunities, executing work, and surfacing what matters when you return.
Core beliefs
Users want to move from intent to outcome. Over time, as trust builds, the log of events that powers execution requires less review.
Users should interact with intent, not workflows
The Assistant is the primary interface to the system
Systems should act, not wait to be prompted
Autonomy should be graduated and interruptible
Complexity should be absorbed, not exposed
Systems should be legible without full transparency
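The beliefs above can be made concrete in code. Here is a minimal sketch of graduated, interruptible autonomy; the level names, task shape, and decision logic are illustrative assumptions for this example, not the prototype's actual implementation.

```typescript
// Sketch of graduated autonomy levels. All names here are assumptions
// for illustration, not taken from the real system.
type AutonomyLevel = "suggest" | "act_with_approval" | "act_and_report";

interface Task {
  description: string;
  level: AutonomyLevel;
  interrupted?: boolean; // the user can halt execution at any point
}

// Decide what the system does on its own versus what waits for the user.
function nextStep(task: Task): string {
  if (task.interrupted) return "halted: awaiting user direction";
  switch (task.level) {
    case "suggest":
      return "surface a proposal for review";
    case "act_with_approval":
      return "prepare the work, then request approval before executing";
    case "act_and_report":
      return "execute, then log the action for later review";
  }
}
```

The point of the gradient is that trust is earned per task type: work can start at "suggest" and graduate toward "act_and_report" as review burden drops.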
Explore the prototype, built with Claude Code
The system acts and surfaces what matters
The Assistant runs continuously, identifying opportunities and executing work. Instead of requiring prompts, it presents a prioritized view of what changed, what was done, and what needs attention. Home becomes a communication surface with prioritized messages from the Assistant, evoking how a user might start a day in the office with a coworker.
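The prioritized home surface could be sketched roughly as follows; the message categories and their ordering are assumptions for illustration, not the product's actual data model.

```typescript
// Illustrative sketch of the prioritized home feed. Categories and
// weights are assumptions for this example.
interface Message {
  summary: string;
  kind: "needs_attention" | "decision_made" | "work_completed" | "change_noticed";
}

// Items requiring the user sort first; completed work and background
// changes sort below, so the feed reads like a morning briefing.
const PRIORITY: Record<Message["kind"], number> = {
  needs_attention: 0,
  decision_made: 1,
  work_completed: 2,
  change_noticed: 3,
};

function prioritize(messages: Message[]): Message[] {
  return [...messages].sort((a, b) => PRIORITY[a.kind] - PRIORITY[b.kind]);
}
```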
Autonomy alone wasn't enough. Early versions still felt like a task list with AI layered on top. V4 shifts toward a more cohesive system that behaves like a coworker, surfacing and progressing work in context.
Execution, powered by context and knowledge
Each item reflects work the system has already progressed. Here, a fully prepared, hyper-personalized outreach campaign. The user reviews and intervenes at the right moment, rather than building from scratch.
Context is embedded directly into the interface: signals, reasoning, and supporting data are surfaced inline, allowing users to quickly understand why the system acted without exposing the full underlying complexity.
From prompting to delegation
As the system became more capable, the interaction model shifted from issuing commands to handing off work. Users define intent, and the system determines how to execute.
Users define intent and hand off work, allowing the system to plan, execute, and return at the right moments.
The user hands off a goal (monitoring and improving sequences) while the system operates in the background. It analyzes performance, proposes changes, and brings the user in only when input or approval is needed.
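The hand-off described here, where the system proposes changes and escalates only when approval is needed, can be sketched as a simple gate; the proposal shape and the approval flag are assumptions for this example.

```typescript
// Illustrative delegation gate: low-risk changes apply autonomously and
// are logged; anything flagged waits for the user. All names here are
// assumptions for illustration.
interface Proposal {
  change: string;
  requiresApproval: boolean;
}

function handleProposal(p: Proposal): string {
  if (p.requiresApproval) {
    return `queued for user approval: ${p.change}`;
  }
  // Applied in the background, surfaced later in the event log.
  return `applied and logged: ${p.change}`;
}
```

The design choice this encodes is the one the case study argues for: the system decides *how* to execute, and the user's attention is reserved for the moments that genuinely need it.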