Autonomous or Assistive Intelligence? Designing AI That Actually Works
Dr. Shlomo E. Argamon
2026-01-07
AI shouldn’t replace humans; it should support them. This article explores why assistive intelligence outperforms autonomous AI in real systems.
Autonomous or Assistive Intelligence?
The Wall Street Journal ran an elegant and fascinating experiment (link below). They let an AI agent run an office vending machine. It did not fare well: the bot purchased odd items (like a betta fish), let users cajole it into discounts, and even gave things away for free (including a PlayStation 5). The Journal’s conclusion was that the agent was “inadequate and easily distracted.”
That’s the truth, but not the whole truth. If we stop at “AI makes mistakes,” we walk away with the wrong lesson.
The real error is viewing AI through the “autonomous intelligence” mindset. This view treats the AI as a drop-in replacement for a person in an existing role: we judge only the bot’s performance in that role and assume everything else stays the same.
But dropping an AI into a role changes the entire socio-technical system. Human incentives shift.
For example: social friction drops, because it is less embarrassing to haggle with a bot than with a person. Boundary testing increases, as people try harder to see what they can get away with. Such changes alter the AI’s operating environment; the AI’s behavior adapts, which influences human behavior again. The result is a feedback loop that drives outcomes as much as raw model capability does.
The more appropriate viewpoint is AI as “assistive intelligence.” This view implies a design methodology: AI should support human activity, not replace it. Taking this view seriously requires systems thinking and process design.
The problem isn’t whether AI is smart enough — it’s whether the system around it is designed well.
Instead of asking how to make the most capable AI bot, ask: how can humans and AI divide the work so that the whole system behaves well while improving human performance and experience? This means considering the whole human/machine system in an integrated fashion, and designing the AI’s capabilities and interfaces with a view to how they affect human incentives and performance within that system. Human oversight is not a safety mechanism tacked onto a technological system, but an integral part of the design from the get-go.
In broad outline, AI proposes, summarizes, checks, drafts, and flags issues for consideration, while humans approve, own decisions, monitor overall processes, handle exceptions, and remain accountable. We build in controls to manage risk: permissions and limits, escalation paths, audit trails, and deliberate friction around money, safety, and high-impact actions. The key is constant testing and experimentation to understand how AI actions, preferences, and interfaces affect human incentives and the overall process.
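To make the control layer concrete, here is a minimal sketch in Python. The names (`ActionGuard`, `ProposedAction`), the $20 auto-approval limit, and the callback standing in for a review interface are all hypothetical illustrations, not any real API. The shape is what matters: the AI can only propose actions; a policy layer auto-approves small, routine ones, escalates everything else to a human, and records every decision in an audit trail.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable, List

@dataclass
class ProposedAction:
    """Something the AI wants to do; it cannot act directly."""
    kind: str           # e.g. "purchase" or "discount"
    description: str
    amount_usd: float

@dataclass
class AuditEntry:
    timestamp: str
    action: ProposedAction
    decision: str       # "auto_approved", "escalated:approved", "escalated:rejected"
    decided_by: str     # "policy" or "human"

class ActionGuard:
    """Hypothetical policy layer between an AI agent and the real world."""

    def __init__(self, auto_limit_usd: float = 20.0,
                 routine_kinds: tuple = ("purchase",)):
        self.auto_limit_usd = auto_limit_usd
        self.routine_kinds = set(routine_kinds)
        self.audit_log: List[AuditEntry] = []

    def submit(self, action: ProposedAction,
               ask_human: Callable[[ProposedAction], bool]) -> bool:
        """Return True if the action may proceed.

        `ask_human` stands in for the escalation path: in a real system
        it would route to a reviewer's queue, not a callback.
        """
        if action.kind in self.routine_kinds and action.amount_usd <= self.auto_limit_usd:
            self._record(action, "auto_approved", "policy")
            return True
        # Deliberate friction: high-impact or unusual actions wait for a person.
        approved = ask_human(action)
        self._record(action,
                     "escalated:approved" if approved else "escalated:rejected",
                     "human")
        return approved

    def _record(self, action: ProposedAction, decision: str, decided_by: str) -> None:
        self.audit_log.append(AuditEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            action=action, decision=decision, decided_by=decided_by))

# In the vending-machine setting, the guard would wave through a snack
# restock but hold the PlayStation giveaway for a human who can say no.
guard = ActionGuard(auto_limit_usd=20.0)
deny = lambda a: False  # stand-in for a human reviewer who rejects
assert guard.submit(ProposedAction("purchase", "case of sparkling water", 14.50), deny)
assert not guard.submit(ProposedAction("purchase", "PlayStation 5 giveaway", 499.99), deny)
```

The specific thresholds are beside the point. What the sketch encodes is the division of labor from the paragraph above: the AI proposes, policy and people dispose, and the audit trail keeps the whole loop observable for the constant testing this approach demands.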
The question is not whether AI can replace people — it’s whether we’ve designed the human-machine system to produce the outcomes we actually want.