The first ATM in the United States stood in the lobby of a Chemical Bank on North Village Avenue in Rockville Centre, New York. It was 1969. The machine was called a Docuteller. It could dispense up to one hundred and fifty dollars in cash and log the transaction.
The advertisement Chemical Bank ran in the local paper that week read:
On September 2, our bank will open at 9:00 and never close again.
It was a sentence written by a marketing department that did not know it was describing the next sixty years.
The obvious prediction was that the teller would disappear. Whatever a teller did all day — count cash, dispense bills — the machine could do without lunch breaks or pension contributions. The prediction was reasonable. But it was off by forty years.
For more than three decades after the Docuteller arrived, bank teller employment in the United States rose, and kept rising. Branches needed fewer tellers — from twenty, on average, down to thirteen. But cheaper branches meant more branches. And more branches meant more tellers.
The ATM had taken one narrow piece of the bank’s authority.
Move money, within a limit, with a paper record. Everything else — the loan question, the disputed charge, the elderly widow settling her late husband’s accounts — still relied on a person, because the bank had not yet decided to let the machine do anything else.
The prediction had counted the tasks but missed what held them together — the authority the bank chose to give them.
I keep thinking about that lobby in Rockville Centre when people ask me what AI is going to do to work. An LP forwards me a job posting his daughter applied to that closed in nine hours with three thousand applicants. A friend sends me a contract a model drafted for him in fifteen minutes and asks what his old firm was billing for. The severity varies. The fear does not.
If intelligence becomes cheap, what happens to everyone whose value was built on being smart?
At the keyboard, the same kind of decision arrives in a permission box.
A code agent is running. The model wants to edit a file. Approve. The model wants to run the test suite. Approve. The model wants to install a package, touch a config, rewrite a function it just wrote ten minutes ago because the test failed. Approve. Approve. Approve.
Each request is small. The model writes. The user decides. The user lets the work in.
Intelligence flows from the model. Labor — the act of letting the work into the system — flows from the worker.
Then the box changes.
The label varies — Always Allow, yes don’t ask again, auto-approve, bypass permissions. The question does not. Will you keep deciding each step, or let the model act without asking? Will the model, having done the writing, be the one who lets it in? Will intelligence become labor?
Always Allow.
These words stand in for an almost imperceptible transfer of authority. Let the model act without asking. Let it operate inside the system where mistakes have consequences.
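The loop is small enough to sketch. Here is a minimal toy in Python (every name is invented for illustration, not any real agent framework): the gate is the worker's labor, and setting the flag removes it.

```python
# Hypothetical sketch of a gated agent loop. Each proposed action
# needs human approval until always_allow is set, at which point the
# approval step -- the act of letting the work in -- disappears.

def run_agent(actions, approve, always_allow=False):
    """Apply each proposed action only if the gate lets it in."""
    applied = []
    for action in actions:
        if always_allow or approve(action):
            applied.append(action)   # the work enters the system
    return applied

# With a human gate, each step is a decision.
approvals = iter([True, True, False])
print(run_agent(["edit file", "run tests", "install pkg"],
                approve=lambda a: next(approvals)))
# -> ['edit file', 'run tests']

# With Always Allow, the same model output enters unreviewed.
print(run_agent(["edit file", "run tests", "install pkg"],
                approve=None, always_allow=True))
# -> ['edit file', 'run tests', 'install pkg']
```

The interesting property is how little changes in the code and how much changes in the system: the model's behavior is identical in both calls; only the location of authority moves.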
That button contains more of the AI labor debate than any forecast.
The forecasts ask which jobs will go.
The question is not wrong. The unit is.
A job is a bundle. A tax practice is research, intake, planning, dispute resolution, signature, partner review, bar membership — dozens of functions held together by a name and a paycheck. Some of those functions are already being repriced. Some will outlive the people who do them. The question of whether the job survives is a question of how many functions get unbundled, repriced, and reassigned, and to what.
The teller did not lose her job in 1969. She lost one part of one job. Other parts grew, and the role survived for forty years on what was left.
The fear treats labor as something intelligence can replace in one piece. It cannot. What it replaces is narrower than a job.
The closest word people reach for is “task.” It is also too blunt.
Some tasks produce. Others act. A smaller set of actions change the state of the business: money moves, a record updates, a claim closes, a tax position gets filed. These are the moments where work stops being preparation and becomes consequence.
The commit point is the moment a recommendation becomes a business fact.
The invoice is posted. The order is updated. The loan is approved. The code is merged. Before that point, AI is preparation. The organization can take it or leave it. At that point, the organization has accepted the output as the action itself, not preparation for one.
That is the line where AI starts to become labor.
This is why organizations are better understood as permission systems than task graphs. Every consequential action has someone whose job is to decide whether it happens. A partner signs the return. An underwriter approves the loan. A senior engineer merges the code. A claims adjuster authorizes the payout.
The map of AI disruption is a map of commit rights — the set of decisions a company is willing to delegate, and to whom.
Consider a claim. A model can read the file, classify severity, find precedents, draft a reserve, and recommend a payout. By the standards of the last decade, this is impressive. By the standards of the company paying for it, it is preparation.
The work goes from preparation to consequence when the payment is approved, money is set aside in the ledger against the claim, the policyholder record is updated, and the case eventually returns a verdict — paid as predicted, disputed, audited, or litigated. The cases where the model is wrong get routed to a person.
That is the system around accountability: permission to act, a record to change, a verdict to learn from, and a path for repair when the action fails.
Some domains already have enough of that system in place. That is why code went first.
Code is one of the few places in the economy where all four pieces exist around the act of letting work in.
A developer who wants to change something opens the change as a pull request, not as a direct edit to the running system. A reviewer has to approve before the change is let in. Tests run automatically. Code that fails them does not get merged. If something breaks after the merge, a single command returns the system to the version that worked.
Permission, record, verdict, repair. All four, built into the workflow before anyone thought to ask whether a machine should participate.
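Those four pieces can be compressed into a toy model in Python (the class and names are invented for illustration, not any real tooling): a proposed change passes a reviewer's gate, lands in a history, gets judged by tests, and is reverted if the verdict is bad.

```python
# Hypothetical sketch of the four pieces code already has around a change:
# permission (review), record (history), verdict (tests), repair (revert).

class Repo:
    def __init__(self):
        self.history = [""]                 # record: every version is kept

    def propose(self, change, approved, tests_pass):
        if not approved:                    # permission: a reviewer gates it
            return "rejected"
        self.history.append(change)         # record: the merge is logged
        if not tests_pass(change):          # verdict: the system judges it
            self.history.pop()              # repair: revert to what worked
            return "reverted"
        return "merged"

repo = Repo()
ok = lambda c: "bug" not in c
print(repo.propose("add feature", approved=True, tests_pass=ok))  # merged
print(repo.propose("add bug", approved=True, tests_pass=ok))      # reverted
print(repo.history[-1])                                           # add feature
```

Nothing in the sketch cares whether the change came from a person or a model; that indifference is the whole point of the workflow.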
That infrastructure was not built for AI. It was built because human developers needed it — to coordinate across teams, to catch their own mistakes, to recover from each other’s. But it turns out to be exactly the system a non-human actor needs in order to be trusted with anything. The stack treats a model’s proposed change the same way it treats a human’s.
This is why code went first. A code change takes minutes to fail a test. The system itself can tell quickly when the work is wrong. Most of the economy runs on slower clocks. A claim can take weeks to settle. A loan, months to perform or default. A tax position, a year before the IRS responds. A venture career, a decade or two after the check is written. The faster the verdict, the faster trust accumulates around the system that judges it.
Code went first because software development had industrialized accountability.
Most of the economy has not.
Most of the economy runs on systems that were never built to be judged.
A model can know exactly what to do and still have nowhere to do it.
A patient calls about a denied insurance claim. The model can read the explanation of benefits, identify the denial code, find the policy language, draft the appeal, and address it to the right department. Every part of that is intelligence. But the appeal must still be filed through a system built for a human sitting at a keyboard. The person on the phone is doing what the model cannot do — not because they are smarter, but because the system accepts them as the actor.
This is the action gap: the distance between knowing what should happen and being the one the system lets do it.
Look underneath most of the work the economy depends on, and the four pieces are not there.
Most enterprise workflows lack the infrastructure required for any non-human actor to be trusted at a commit point. Records do not save versions that can be compared or rolled back. Actions are logged, but not in a way that produces a usable verdict for the next action. When something goes wrong, the case routes to a person whose judgment becomes the audit trail. There is no automated test that fails. There is no commit that reverts. There is no answer that arrives before the next action is taken.
The companies furthest along are not primarily building better models. They are building the missing infrastructure around the model — the permission layer, the record layer, the verdict layer, and the repair path — so that an agent can act inside a real system of record and the organization can judge the result.
Progress is limited by the infrastructure that must receive, judge, and act on model output — not by the model itself.
The same architecture shapes careers.
The domains with the fastest verdicts and the clearest paths to repair will move first — and they already are. Code is the visible front. Parts of customer support are next — refund authorization within scope, account changes, retention offers where the action is bounded and the rollback is cheap. Both have systems mature enough to absorb a non-human actor at the commit point.
But the rest of the economy will not face a sudden shock. It will face a slow erosion. The change will not strike at the commit point. It will arrive as a quiet thinning of the work beneath it.
A partner will not fire associates. She will simply stop hiring them. The drafting, reconciling, summarizing, preparing — the apprenticeship layer — gets absorbed into the model. The senior signs the same returns. The partner takes the same client meetings. The architecture above the commit point looks unchanged. The headcount underneath it thins one departure at a time.
This is the shape displacement will take in most professional services: not mass layoffs but structural non-hiring — fewer juniors, fewer outsourced roles, fewer apprenticeships, fewer entry points into the careers that produce the people who eventually hold commit rights.
The partner today holds a commit point because of the decade she spent as a junior. Her early drafts were the record. A senior’s review was the verdict. Her late-night revisions were the repair. That loop is how the firm eventually granted her permission. Compress the layer below, and the layer above keeps working — for now — but the supply of people the firm has reason to trust gets thinner every year.
The people closest to a commit point will be the last to feel it. The people who fed them work will be the first. And the people who would have fed those people, ten years from now, may not be hired at all.
The visible jobs survive. The invisible path to those jobs erodes.
Authority compounds through exposure to consequence.
The fear is not that intelligence becomes cheap. The fear is that the path to expensive judgment closes without anyone noticing.
Return to the lobby in Rockville Centre.
The teller kept her job in 1969 because the bank had not yet handed the machine the role. It had handed it one narrow permission — dispense cash, within a limit, with a record. Everything else stayed with a person. The role grew into the parts of banking the machine had not been allowed to reach — disputes, relationships, judgment about whether to bend the rules. Forty years later, when the rest of the role moved into software, a new generation of branches and bankers had grown around what was left.
The same is going to happen now. Not as fast, not as evenly, and not in the places the forecasts are pointing. AI will move the way the ATM moved — one workflow at a time. Permission by permission. Commit right by commit right.
Some work will disappear. Some will move. Some will be repriced. Some will become the infrastructure that makes the next layer of work judgeable. The next valuable work sits above what AI can commit — building the systems that judge it, holding the consequences when it fails, deciding which permissions should be granted next. Some of that work has names. Most of it does not yet.
The forecasts are right that the old map is being redrawn. They are wrong about how fast, and wrong about where.
AI does not become labor where the model is best. It becomes labor where the system already knows how to judge it.
The work moves to the edge of what the system cannot yet judge — which is exactly where it has always moved, every time a machine arrived in a lobby and asked to be trusted.
Thanks to Zhengyuan Zhou and Nazli Dakad for their comments on previous versions of this piece.