Are you building intelligence or abstracting yourself out of reality?

In February this year, an automated US military targeting system struck a girls' school in southern Iran, killing up to 180 children. It wasn't a malfunction, and it had nothing to do with Claude or generative AI going wrong: the system worked exactly as designed, processing a target list and executing strikes at a rate of a thousand decisions per hour.

A building that had been a school for over a decade remained listed as a military facility in an outdated database. No one checked. No one was going to. Kevin Baker's essay on how we got here is a chilling analysis of the automated systems responsible. The problem he identifies is what happens when you build abstraction layers between a decision-maker and reality: data accumulates, validations reference prior validations, and the loop closes with no outside check. The school looked like every other package in the queue. The system had no mechanism for doubt.

This story is truly horrific. And yet, despite the horror, I am optimistic about what technology, including AI, can do to improve our lives, the problems it can solve, and the value it can create. That's because most core technology is morally neutral: it's a tool, and the ethical onus lies with us, not the machine. We have the chance to make the right choices in how we apply it. The military-industrial complex is a bellwether for moral decision-making in the application of technology; what gets pioneered there casts light on what is happening in other domains and can help us make the right choices.

It worries me that we are building the same logic into marketing. Intent data, account scoring, AI-generated insight, automated outreach: all abstraction layers that add efficiency and remove proximity to the actual human on the other side. A recent analysis of Meta's Advantage+ creative and media automation service observed: "The system favours larger data sets and less human intervention. On those terms, it's working as designed." Sound familiar? Circular reporting means engagement data validates the model that generated the outreach that produced the engagement. Representational residue, the thickening file of signals and scores around a prospect, creates the appearance of understanding while potentially amplifying an original, false assumption. The stakes are obviously different, but the architecture is the same, and so are the warning signs. At Davos, Julie Sweet, CEO of Accenture, talked about AI systems and the principle of "human in the lead", not just "human in the loop". I hope we can make that a reality.

Kevin Baker's original article
https://artificialbureaucracy.substack.com/p/kill-chain