The Anthropic–Pentagon standoff raises a question the AEC industry has been avoiding. For structural engineers, the alignment debate already has a name.
Dwarkesh Patel’s recent essay on the Anthropic–Department of War confrontation asks a question that most people read as a technology policy debate. For AEC professionals, it is something more immediate: a mirror.
His central question — to whom should AI be aligned, and who decides? — has a very specific answer in structural engineering. It has had that answer for over a hundred years. It is called the PE stamp.
The PE Stamp Is an Alignment Mechanism
When a licensed Professional Engineer seals a drawing, they are making a precise legal and ethical statement: this engineering judgment is mine, I stand behind it, and I accept personal liability for it. In many jurisdictions that liability is criminal if a structure fails and lives are lost.
This is the profession’s answer to the alignment question. The engineer of record is aligned to public safety — above the client, above the contractor, above the program. That hierarchy is not a preference. It is encoded in professional licensing law.
The PE stamp system works because there is one accountable human whose judgment is final and whose name is on record. AI disrupts that in ways the profession has not begun to work through.
AI has no professional license. It cannot be struck off. When it produces an unsafe output, the consequences fall on the engineer whose name is on the drawing — whether or not that engineer meaningfully evaluated what the AI produced.
Three Gaps the Profession Is Not Discussing
Most AEC firms deploying AI tools today are operating on an implicit assumption: the engineer reviews the output, the stamp covers the liability, the framework holds. That assumption is increasingly fragile.
- Liability vacuum. When an AI copilot generates a structural scheme, the engineer’s review becomes an audit of an opaque process. What does ‘adequate review’ mean for AI-generated work? Almost no firm has defined this in writing.
- The obedient employee problem. Dwarkesh describes the risk of AI as ‘an army of extremely obedient employees that will not question orders.’ In structural engineering, obedience is a liability. The engineer’s duty includes the right — and obligation — to refuse. To refuse to stamp something unsafe regardless of client pressure. An AI optimized for design speed or cost reduction is not aligned to public safety. It is aligned to its training objective.
- The code rewrite problem. Building codes assume a human decision-maker at every step — one who will recognize situations outside the code’s scope and escalate. AI will produce code-compliant outputs until it encounters something outside its training distribution. Then it will produce an answer with the same apparent confidence. The code was not written for this failure mode.
Field observation: in conversations across multiple structural engineering firms over the past 18 months, a consistent pattern has emerged. AI tools are used for first-pass designs and preliminary calculations, but internal guidance on what ‘adequate review’ means for AI-generated work is almost universally absent. Engineers are making individual judgment calls. This is not a sustainable governance position.
What AEC Firms Should Do Now
The liability framework will eventually catch up — through court cases, licensing board guidance, and updated standards. The firms best positioned when that happens are those that treated AI governance seriously before it was required.
- Define review protocols. For each AI tool in use, document what a qualified engineer must verify before stamping AI-assisted output. Vague guidance is not sufficient. What are the specific failure modes the reviewer is checking for?
- Implement version control. Track which version of each AI tool was used on each project. If the tool updates mid-project, document the change and reassess the review protocol. This is standard in regulated industries. It is almost completely absent in AEC AI deployment.
- Engage with standards bodies now. ISO 19650, Eurocode, and national licensing boards are in early stages of addressing AI. The firms that participate in that process will shape the outcome. The firms that wait will comply with whatever is decided without them.
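One lightweight way to make the first two practices auditable is a per-deliverable review record that pins the AI tool version and enumerates the checks completed before stamping. A minimal Python sketch, with hypothetical tool names and checklist items (a real firm would define its own):

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical checklist a firm might define for one AI tool.
# The actual failure modes to check are the firm's decision.
REQUIRED_CHECKS = {
    "load_path_verified",
    "code_clauses_checked",
    "out_of_scope_conditions_flagged",
}

@dataclass
class AIToolReviewRecord:
    """One auditable record per AI-assisted deliverable."""
    project_id: str
    tool_name: str
    tool_version: str                  # pinned per project; reassess if the tool updates
    reviewer: str                      # the engineer of record
    checks_completed: set = field(default_factory=set)
    review_date: date = field(default_factory=date.today)

    def ready_to_stamp(self) -> bool:
        # The stamp is only defensible if every defined check is done.
        return REQUIRED_CHECKS <= self.checks_completed

record = AIToolReviewRecord(
    project_id="BR-2024-117",          # hypothetical project
    tool_name="copilot-structural",    # hypothetical tool name
    tool_version="2.3.1",
    reviewer="J. Smith, PE",
)
record.checks_completed.update(REQUIRED_CHECKS)
# record.ready_to_stamp() is now True
```

The point of the sketch is not the data structure; it is that ‘adequate review’ becomes a named, versioned, queryable artifact rather than an individual judgment call.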
The Bigger Picture
The Anthropic–Pentagon confrontation is, as Dwarkesh frames it, a preview. The specific conflict is remote from structural engineering practice. But the structural argument is the same: when AI is embedded in a critical workflow, the question of alignment — to whose values, on whose authority, with whose liability — cannot be left unanswered.
The PE stamp is not going away. But what it means when AI is in the design chain needs to be defined carefully and soon. That is not the AI vendor’s job. It is the profession’s job, and the profession needs to answer the alignment question before AI answers it for us.