Second order thoughts on current AI agents
Many people are voicing first-order thoughts on AI agents as they become more widespread, more capable, and gain more permissions. A good recent example is Dr. Hannah Fry's YouTube video "Why AI agents are either the best or worst thing we've ever built." Skip past the cute/predictable "whoops, it gave away our credit card number" frame, and the video does ask good questions:

What happens when AI agents are ubiquitous and everyone has thousands of agents at their command?

Our human institutions depend on scarce human agency. What happens when everyone has effectively unlimited agency for complaints, grievances, queueing, bidding, research grant applications, and so on?

What legal model applies to an AI agent in relation to its human controller? Is it a child? A piece of equipment? A dog? Which interpretation prevails determines liability.

But while these are good questions, they aren't the right questions. Or rather, they're good "that sure is something interesting to think about" questions, but they don't get anyone closer to "and here is what to do about it."

The better questions are second-order: given that, what then?

Who gets access to agentic capacity first? Agentic capacity isn't broadly distributed now. It's in the hands of the frontier labs, large institutions, software developers, and enthusiasts. It's not in the hands of activists, lawyers, political grievance mongers, or ordinary people.

Who can deploy agentic capacity effectively? Just because a capacity exists doesn't mean it is being used well. A person using a staff of AI agents to get to the front of a ticket queue, to the best table at a restaurant, or to the fastest bid on a rare pair of sneakers on eBay isn't maximizing agency; they're maximizing dissolution and distraction.

Is there a durable first-mover advantage? Do the people who first got access, and first figured out how to deploy it effectively, maintain a durable lead as their 'share' of the active agent ecosystem is diluted?
I would expect early exposure and skill to