AI agents and the legal meaning of “agent”

The word “agent” in “AI agent” borrows from a legal concept with two centuries of doctrine behind it. The borrowing is creating problems.

The word “agent” did not start its life in computer science. It started in the law. The law of agency is one of the older bodies of common law doctrine, and it answers a specific question: when one person acts on behalf of another, when is the principal bound by what the agent did, and when is the principal liable for the harm the agent caused? The product category called “AI agents” has imported the word without importing the doctrine, and the resulting confusion is becoming a real source of legal risk for the companies building and deploying these systems.

The agency-law question turns on three things: authority (was the agent authorized to act, and how broadly?), control (how closely did the principal direct the agent’s conduct?), and apparent agency (did the third party reasonably believe the agent was acting for the principal?). The answers determine whether the principal owns the consequences. The doctrine works because those answers usually have human witnesses, written authorizations, and corporate structures behind them.

When the agent is a piece of software, all three factors get harder.

Authority becomes a system prompt and a permissions configuration, written by an engineer at one moment and possibly modified by users, fine-tuners, or the model itself in later moments. The question of “what was this agent authorized to do” is not answered by reading a power of attorney. It is answered by tracing the configuration that was actually live at the time of the action, which is harder than it sounds and which most companies do not have a complete log of.
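Answering “what configuration was live at the time of the action” requires recording the configuration alongside each action, not reconstructing it afterward. A minimal sketch of that kind of authority log, using a hypothetical agent runtime and made-up configuration fields (the names and structure are illustrative, not any particular vendor’s API):

```python
import hashlib
import json
import time

def config_fingerprint(config: dict) -> str:
    """Stable hash of the exact configuration (system prompt,
    permissions, model version) that was live for an action."""
    canonical = json.dumps(config, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

class AuthorityLog:
    """Append-only record pairing every agent action with the
    configuration that authorized it at that moment."""

    def __init__(self):
        self.entries = []

    def record(self, action: str, config: dict) -> dict:
        entry = {
            "timestamp": time.time(),
            "action": action,
            "config_hash": config_fingerprint(config),
            # Store the full snapshot, not just the hash: the question
            # "what was this agent authorized to do" needs the content.
            "config_snapshot": json.loads(json.dumps(config)),
        }
        self.entries.append(entry)
        return entry

    def authorized_config_for(self, index: int) -> dict:
        """What was the agent authorized to do at a past action?"""
        return self.entries[index]["config_snapshot"]

# The configuration changes between actions; the log preserves both states.
log = AuthorityLog()
v1 = {"system_prompt": "You may schedule meetings.", "tools": ["calendar"]}
log.record("create_event", v1)
v2 = {"system_prompt": "You may schedule meetings.", "tools": ["calendar", "email"]}
log.record("send_email", v2)
```

The point of the snapshot copy is that a later mutation of the live config (by a user, a fine-tune, or the model itself) cannot silently rewrite what the record says was authorized at the time.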

Control is a much harder question. Traditional agency doctrine asks whether the principal had the right and ability to direct the agent’s conduct. With a model operating with tool use, browsing, code execution, and inter-agent communication, the principal’s control is real but bounded by the model’s actual behavior, which is not deterministic. Saying “we did not intend the agent to do that” is not, on its own, a defense, and the law will not be sympathetic if the answer to “did you have any control” is “no.”

Apparent agency is the place where third-party plaintiffs are starting to focus. If your product calls itself an “AI sales agent” or “AI scheduling agent” and operates externally on your customers’ behalf, third parties dealing with the agent reasonably believe it speaks for your customer. Those reasonable beliefs create binding commitments under existing law. The question of whether your customer is bound by what your agent committed to is a real legal question, not a hypothetical, and the contracts between the AI vendor and the customer often do not address it cleanly.

The practical takeaway for tech companies building or deploying agents is to read the customer-facing description of the product through agency-law eyes. If the product holds itself out as acting for the customer, the customer is probably bound by what it does. The contracts and disclosures need to allocate that risk explicitly. The default of “we are just software” is no longer a serious answer.