FOUNDER'S SIGNAL

The Unreasonable Speed Problem: Why AEC Can’t Wait for AI to Fail Before Writing the Rules

🔑 Key Finding

Engineering has always learned through failure: every code, every standard, every regulation was written in blood. AI failures don't reveal themselves on a single jobsite; they propagate silently across thousands of projects simultaneously, and by the time the pattern is visible, it's too late for a code update.

✅ Action Item

Don't wait for the AEC industry's equivalent of Ronan Point. Start building AI governance frameworks, verification protocols, and incident documentation processes now — before the failures accumulate.

Why the construction industry’s 5,000-year instinct to learn from disaster may be its most dangerous habit in the age of AI

There is a phrase circulating in AI research circles right now that deserves more attention from engineers than it is getting.

“Move unreasonably fast.”

It comes from Markus Buehler at MIT, one of the most serious minds working at the intersection of materials science and artificial intelligence. His argument is not reckless. It is precise: the bottleneck in scientific discovery is no longer capability. It is the institutional drag between what we can discover and what we actually deploy. The gap between the laboratory and the jobsite. Between the model and the stamp.

He is right. And for AEC, that observation is simultaneously an opportunity and an existential warning.

How Engineering Actually Learns

There is an uncomfortable truth at the foundation of every building code, every structural standard, every approval checklist in the built environment.

Someone died first.

The Ronan Point collapse in 1968 gave us progressive collapse provisions. The Hyatt Regency walkway failure in 1981 rewrote connection design requirements. The L’Ambiance Plaza disaster in 1987 changed lift-slab construction forever. The Champlain Towers South collapse in 2021 is still reshaping condominium inspection requirements across North America.

Every line in every code is a scar. A formalized memory of something that went wrong, something that theory didn’t anticipate, something that only revealed itself under the specific, unpredictable conditions of the real world.

This is not a flaw in how engineering develops knowledge. It is the mechanism. Empirical systems — systems that deal with physical reality — learn through contact with that reality. You build the model, you test it against the world, the world pushes back, and you update the model.

Five thousand years of civil engineering has been one long iterative loop between human ambition and physical consequence.

It works. The built environment is, by any reasonable measure, extraordinarily safe relative to its complexity and scale. That safety was purchased through accumulated failure, codified caution, and professional accountability.

The question is whether that mechanism — learning through failure — is still viable when the failures are no longer local.

The New Failure Mode

A collapsed bridge is a tragedy. It is visible, bounded, and immediate. Investigators can examine the wreckage. Engineers can identify the failure mode. Codes can be updated. The knowledge is painful but transferable.

An AI system that has been quietly miscalculating structural assessments across thousands of projects simultaneously is a different category of problem entirely.

It is not a single event. It is a distribution of errors across a portfolio of decisions, many of which may not manifest as visible failures for years or decades. By the time the pattern is recognizable, the causal chain is buried under layers of subsequent decisions, design iterations, and professional sign-offs.
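To make that distinction concrete, here is a minimal sketch in Python, with made-up numbers, contrasting independent errors with a single correlated one. Every parameter is an assumption chosen for illustration, not data from any study.

```python
# A minimal sketch with made-up numbers: the same average error rate,
# two very different failure distributions.
import random

random.seed(0)
N_PROJECTS = 10_000
ERROR_RATE = 0.01  # assumed 1% chance of a significant error per project

# Scenario 1: independent human errors. Each failure is isolated,
# visible, and investigable on its own.
independent_errors = sum(random.random() < ERROR_RATE for _ in range(N_PROJECTS))

# Scenario 2: a shared tool with a latent systematic flaw. With the same
# 1% probability, the flawed logic touches every project that used it.
correlated_errors = N_PROJECTS if random.random() < ERROR_RATE else 0

print(f"Independent: {independent_errors:>6} errors out of {N_PROJECTS}")
print(f"Correlated:  {correlated_errors:>6} errors out of {N_PROJECTS}")
# Expected errors are roughly 100 in both scenarios. The first produces a
# steady trickle of lessons; the second produces nothing at all, until it
# produces everything at once, long after the causal chain is buried.
```

The expected number of errors is identical in both scenarios. What differs is whether the profession ever receives the early, local signals its feedback loop has always relied on.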

This is the failure mode that AEC is not discussing seriously enough.

The industry is correctly focused on the opportunity side of AI: faster design iteration, automated code compliance checking, generative structural optimization, predictive maintenance through digital twins. These capabilities are real, and the potential is significant.

But the accountability infrastructure for these tools is being built at a fraction of the speed of the tools themselves. We are deploying AI into professional workflows that were designed around human judgment, human error rates, and human accountability — and we are not asking hard enough questions about what happens when those assumptions no longer hold.

The Institutional Drag Problem

Buehler’s framing of institutional drag is precise for AEC because the drag is structural, not cultural.

It is not that engineers are resistant to change. It is that the professional frameworks within which engineers operate — licensing, liability, indemnity, code compliance, peer review — were designed for a world where the engineer is the reasoning agent and the tools are subordinate instruments.

CAD didn’t reason. FEA didn’t make judgments. BIM didn’t generate recommendations. They executed instructions and presented results that a qualified professional interpreted and validated.

AI does reason. Or at least, it produces outputs that are structurally indistinguishable from reasoning — fluent, confident, and frequently wrong in ways that are difficult to detect without deep domain expertise.

The professional frameworks have not caught up. The PE stamp still attaches accountability to a human who may or may not have the technical depth to critically verify what the AI produced. The liability chain still assumes a human reasoning process at its center. The approval process still treats the engineer’s judgment as the final filter — without asking what happens when that judgment is increasingly informed by systems the engineer does not fully understand.

This is the institutional drag. Not bureaucratic inertia. A fundamental mismatch between the accountability architecture of the profession and the actual nature of the tools now entering professional workflows.

Moving Fast Without Breaking Things

The answer is not to slow down AI adoption in AEC. That is neither realistic nor desirable. The capability gains are real and the industry’s productivity challenges are severe enough that leaving those gains on the table has its own costs — in project delays, in cost overruns, in the carbon embedded in inefficient construction.

The answer is to build the accountability framework at the same speed as the technology.

This means several things in practice.

It means professional bodies need to develop AI-specific competency requirements — not generic digital literacy, but domain-specific frameworks for what it means to critically verify AI output in structural engineering, in geotechnical assessment, in building envelope performance modelling.

It means liability frameworks need to explicitly address AI-assisted professional decisions — not to eliminate accountability, but to clarify where it sits when the reasoning chain includes systems that are not human.

It means the concept of the PE stamp needs to evolve. Not to be weakened — the accountability function is more important in an AI-augmented environment, not less — but to explicitly encompass the engineer’s responsibility to understand, verify, and be able to explain the basis for AI-generated recommendations they endorse.

It means firms need to treat AI governance as a professional risk matter, not an IT matter. The question of which AI tools are used, how their outputs are validated, and what documentation exists for AI-assisted decisions is a professional liability question, not a software procurement question.

And it means the industry needs to start building the incident database now — before the failures accumulate. Not waiting for the equivalent of Ronan Point, but proactively documenting near-misses, anomalous outputs, and cases where AI-generated recommendations were found to be incorrect before they reached construction.
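What might one entry in such a database look like? A minimal sketch follows, assuming hypothetical field names rather than any established industry schema.

```python
# A minimal sketch of an AI incident record. Field names are illustrative
# assumptions, not an established industry standard.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIIncidentRecord:
    """One documented case of anomalous or incorrect AI output."""
    incident_date: date
    tool_name: str              # which AI system produced the output
    tool_version: str           # model/software version, for traceability
    task_description: str       # what the tool was asked to do
    output_summary: str         # what it produced
    error_description: str      # what was wrong, and how it was detected
    detection_stage: str        # e.g. "design review", "peer check", "site"
    reached_construction: bool  # did the error survive review?
    corrective_action: str      # what was done about it
    reviewer: str               # the professional who caught and logged it

# Hypothetical usage: logged at detection time, before any consequence.
record = AIIncidentRecord(
    incident_date=date(2025, 3, 14),
    tool_name="generative-structural-optimizer",  # made-up tool name
    tool_version="2.1",
    task_description="Preliminary sizing of a transfer beam",
    output_summary="Recommended a section 30% lighter than the hand check",
    error_description="Ignored a point load defined in the input model",
    detection_stage="peer check",
    reached_construction=False,
    corrective_action="Section resized; issue reported to the vendor",
    reviewer="PE of record",
)
```

The design choice that matters is logging at detection time, not at consequence time, so the database accumulates near-misses long before any of them becomes a headline.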

Learning from failure is how engineering built five thousand years of safe infrastructure. It cannot be how we build the accountability framework for AI.

The Hard Part Nobody Is Talking About

The AI debate in AEC is dominated by two camps that are both, in different ways, missing the point.

The optimists are focused on capability. What can these tools do? How fast can we deploy them? What competitive advantage do early adopters gain?

The pessimists are focused on risk. What could go wrong? How do we prevent AI from replacing engineers? How do we protect the profession?

Neither camp is asking the question that actually matters: how do we build the accountability infrastructure that allows AEC to capture the genuine productivity and safety benefits of AI without inheriting failure modes that the profession has no established mechanism to detect or recover from?

That is the hard problem. It is not a technology problem. It is a governance problem, a professional standards problem, and ultimately a question about what the engineering profession believes its social contract with the built environment actually requires.

Moving unreasonably fast is the right frame. But speed without accountability architecture is not innovation. It is a gamble with consequences that won’t be visible until it is too late to update the code.

Written by

Marcin Kasiak

Structural engineer and digital transformation leader with 20+ years in AEC. PhD, IWE, PMP, PE. I write about where engineering practice ends and the future begins — AI in structures, digital twins, predictive analysis, and the tools that are actually changing how we build. The views expressed are my own.

AECO.digital →