My experiments with AI

A software engineer's journal

The Human Lag Behind AI

AI Is Already Here. We’re Still Arguing About Whether to Let It In.

AI did not ask for permission.

It has already reshaped how people learn, diagnose illness, consume electricity, manufacture critical technology, and think about their own jobs. This is happening whether institutions approve of it or not.

The real issue is not how fast AI is moving.

The real issue is how slowly humans, governments, and institutions are adapting to what it has already changed.

Across sector after sector, the same story keeps playing out. Behavior changes first. Systems react later. Regulations follow damage, not anticipation. And the public is left absorbing consequences they were never prepared to understand.

That growing gap is the big AI caveat.

When Students Learn Faster Than the System Allows

Higher education is experiencing widespread AI anxiety, but not for the reason it claims.

The problem is not cheating.
The problem is not plagiarism.
The problem is not that students are “misusing” AI.

The problem is that schools are still teaching for a world that no longer exists, while students are already living in a world where AI is assumed.

As described in Built In's coverage of higher education, "AI Anxiety Is Reshaping Higher Education," faculty are struggling to respond to generative AI. Policies are being rewritten. Tools are being banned. Detection software is being deployed.

But this reaction misses the real shift.

Students already assume AI is available. Institutions are still debating whether it should be allowed.

Students are using AI to brainstorm, clarify concepts, draft outlines, and accelerate learning outside the classroom. Meanwhile, many institutions are trying to preserve assessment models built around information scarcity and individual output.

AI collapses that scarcity.

What becomes valuable instead is judgment, synthesis, critical thinking, and the ability to work with intelligent tools responsibly. Yet curricula, grading systems, and faculty incentives have barely moved.

Just like in every other domain, the people on the ground adapted faster than the institutions designed to serve them.


When Patients Show Up With ChatGPT Printouts

In the Marketplace episode "Dr. AI Will See You Now," a frontline physician makes an observation that quietly captures the future of healthcare:

Patients are already using tools like ChatGPT to interpret their symptoms and explore treatments before seeing a doctor.

This is not a theoretical future. It is the present.

Doctors now face a fork in the road. They can meet patients where they already are, evaluate the information together, correct what is wrong, and guide decisions. Or they can reject it outright and insist that patients should not be using AI at all.

Many institutions have chosen rejection. Not because it improves care, but because it preserves authority.

The result is predictable. Appointments become tense. Trust erodes. Time pressure increases. Healthcare is not struggling because AI exists. It is struggling because patients adapted faster than the system was willing to.

AI changed expectations. Training, workflows, and policy did not follow.


Your Electric Bill Is the First AI Tax You Didn’t Vote For

NPR's reporting on AI infrastructure, "What AI Data Centers Are Doing to Your Electric Bill," makes the next impact painfully tangible.

Massive AI data centers are being built at record speed. They consume electricity at the scale of small cities. Utilities are forced to upgrade grids, keep aging power plants online, and spread those costs across consumers.
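
To make that scale concrete, here is a quick back-of-envelope sketch. The wattage and household figures are rough assumptions of mine, not numbers from the NPR reporting, but they show why "the scale of small cities" is not hyperbole:

```python
# Back-of-envelope: comparing one large AI data center to household demand.
# All figures are illustrative assumptions, not reported measurements.

DATA_CENTER_MW = 300            # assumed draw of one large AI campus
US_HOME_KWH_PER_YEAR = 10_500   # rough US average household consumption

HOURS_PER_YEAR = 24 * 365
avg_home_kw = US_HOME_KWH_PER_YEAR / HOURS_PER_YEAR   # ~1.2 kW average draw

homes_equivalent = (DATA_CENTER_MW * 1_000) / avg_home_kw
print(f"A {DATA_CENTER_MW} MW data center draws roughly as much power as "
      f"{homes_equivalent:,.0f} homes, around the clock")
```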

Here is the disconnect:

People are feeling the cost of AI in their monthly bills long before they understand why those bills are rising.

Local communities hosting data centers were not prepared for the strain. Ratepayers never opted into subsidizing Big Tech infrastructure. Regulators are reacting one project at a time, not designing for systemic change.

Once again, AI moved first. Public understanding lagged behind.


“Made in America” Meets 18,000 Rules and a Desert Community

The promise of rebuilding advanced semiconductor manufacturing in the United States sounds straightforward on paper. In practice, it is anything but.

The New York Times’ The Daily podcast lays this bare in its episode on TSMC’s Arizona fab: “TSMC’s Arizona Factory and the Reality of ‘Made in America’”. The factory was pitched as a symbol of American industrial revival. A cornerstone of national security. A critical pillar for the AI economy.

What the reporting reveals instead is a collision between global-scale ambition and local-scale reality.

Building a cutting-edge semiconductor fab in Arizona is not just an engineering challenge. It is a regulatory, cultural, environmental, and social one. The project must navigate roughly 18,000 separate regulatory requirements, spanning federal, state, county, and municipal rules. Each approval introduces delays. Each delay compounds cost. None of it moves at software speed.
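
To see why "each delay compounds cost" is more than a figure of speech, here is a toy calculation. Every number in it is an assumption chosen purely for illustration; real fab economics are far more complicated:

```python
# Toy model: how schedule slip compounds the cost of a capital project.
# All inputs are made-up assumptions for illustration only.

base_cost_busd = 20.0    # assumed upfront fab cost, in billions of dollars
monthly_carry = 0.005    # assumed 0.5% per month in financing and escalation
delay_months = 18        # assumed cumulative permitting and approval slip

final_cost = base_cost_busd * (1 + monthly_carry) ** delay_months
print(f"{delay_months} months of slip turns a ${base_cost_busd:.0f}B project "
      f"into a ${final_cost:.1f}B one")
```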

Then there is the human factor.

The factory is being built next to established retiree communities whose residents moved to the area specifically for quiet, low-impact living. Suddenly, they are facing round-the-clock construction, increased traffic, water usage concerns, and industrial activity on a scale never envisioned when their neighborhoods were planned.

Predictably, resistance followed.

Local residents organized. Public meetings grew contentious. Lawsuits and appeals slowed progress further. What looked like a national priority from Washington became a neighborhood disruption on the ground.

Strategic importance does not automatically translate into local consent.

Even when progress is made, the challenges do not end. Operating a fab requires an extremely specialized workforce trained in precise processes and rigid discipline, often modeled after practices developed over decades in Taiwan. That culture cannot be imported overnight. Training pipelines take years. Mistakes are expensive. Yield losses are unforgiving.

The result is a sobering reality check.

We are not failing to build semiconductor factories because we lack money or motivation. We are struggling because industrial capability is built slowly, while AI demand is accelerating quickly.

Once again, the same caveat emerges.

AI assumes that supply chains, regulations, labor, and communities can pivot instantly. The physical world does not work that way.

The Arizona fab is not a failure. It is a warning. A reminder that readiness is not declared through policy or incentives alone. It is earned through coordination, patience, and sustained investment across systems that were never designed to move this fast.


Layoffs, Panic, and the Great AI Overcorrection

Nowhere has the confusion been louder than in the job market.

Over the last few years, companies laid off tens of thousands of workers, often with the same justification: AI would automate large portions of the workforce. Replacement was inevitable. Efficiency was imminent.

The confidence was striking.

The data now tells a more humbling story. Per LinkedIn, hiring is quietly returning and roles are being redefined. Companies are discovering that what AI can do in demos does not map cleanly to what it can replace in real organizations, at least not yet.

AI changes how work is done long before it eliminates who does it.

The layoffs were driven less by realized automation and more by expectation and fear. Organizations were not prepared to redesign work around AI, so they reduced people instead. Workers were not prepared for how quickly hiring signals and skill requirements would shift. Leaders were not prepared to admit that judgment, accountability, and context remain stubbornly human.

This is not the end of work. It is a correction phase.

The long-term impact of AI on employment may still be profound. But the short-term damage came from certainty without readiness.


The Pattern We Can No Longer Ignore

Across schools, healthcare, infrastructure, manufacturing, and labor, the same failure mode keeps repeating:

  • AI reshapes behavior faster than institutions adapt
  • Regulation follows impact instead of anticipation
  • The public absorbs costs before understanding tradeoffs
  • Resistance replaces preparation when systems feel overwhelmed

We keep mistaking technological progress for human readiness.

This is not a call to slow AI down. That debate is already over.


Readiness Is Not Optional. It Is Existential.

Here is the part we often overlook.

Humans do adapt.

Institutions feel slow not because adaptation is impossible, but because meaningful change requires redesign, not denial. Schools will change how they teach. Healthcare will learn how to work with AI-informed patients. Infrastructure will be planned with AI demand in mind. Jobs will stabilize around new definitions of value.

We are early in this transition. Awkwardly early.

What we are experiencing now is not failure. It is friction at the boundary between an old world and a new one.

AI is forcing us to rethink how we learn, work, govern, and care for one another. That process is uncomfortable, but it is also necessary.

Readiness is not a switch. It is a curve.

And while AI may be moving faster than we are today, history suggests we will catch up not by resisting change, but by learning how to live with it, shape it, and build systems that assume it is here to stay.

That is the real opportunity hidden inside the big AI caveat.
