Why your AI strategy needs guidance from an 82-year-old computer

- The US Army built the world’s first electronic general-purpose computer during World War II — and with it came proof of concept for AI.
- The Army went on to explore the thinking processes behind “ideation,” which, when automated, yield generative AI.
- Ideation has its limits: Human creativity can’t currently be performed by any known arrangement of electronic transistors.
It was 1943. And the US Army had a plan to create the future faster.
The plan began with ENIAC [Electronic Numerical Integrator and Computer]. Commissioned by the Army Ordnance Corps at the midpoint of World War II, ENIAC was the world’s first electronic general-purpose computer. Built of metal cabinets packed with 17,468 vacuum tubes (descendants of the lightbulb that would, in later decades, be superseded by transistors), it could dash through 5,000 additions a second, at the cost of roughly 150 kilowatts of power, enough to run more than a hundred modern households at once.
ENIAC’s thirty-ton bulk can now be replicated by microgram circuits. But its infallible logic gates were proof of concept for artificial intelligence, hailed by 1940s futurists like John von Neumann as a replacement for the human brain — and then as something even more spectacular: the end of time. Time, after all, was simply the lag between past and future. That interval occurred because, in the physical world, it required minutes, hours, millennia for cosmic interactions to play out. Stars flew ponderously through space, needing eons to achieve their preordained collisions. Even light, which zipped at breathtaking pace across the dark, took many human lifetimes to traverse a single galaxy.

All this slowness would be eliminated by AI. Reducing interstellar bodies to numerical spreadsheets of mass and motion, it would calculate tomorrow from measurements today. And once the computer had finished witnessing the conclusion of the universe, it could predict the upshot of anything else, from markets to elections to wars. In an instant, the outcome of every global occurrence would be known.
So the US Army dreamed in 1943, the year that ENIAC was designed. But that same year, the Army’s digital conquest of the future got snagged by an unforeseen twist: human creativity.
Human creativity was brought to the Army’s attention by World War II fighter aces like Gabby Gabreski. Gabreski flew P-47 Thunderbolts for the Army Air Forces. And his combat aviation skills were good. Very good. By war’s end, Gabreski had shot down 28 German planes, more than any other American pilot in the European theater.
The Army wanted to know: How was Gabreski such an exceptional dogfighter? Initially, the generals hypothesized that Gabreski was simply better at following orders. Where rookie pilots panicked in battle, Gabreski functioned like a robot, dispassionately following the instructions laid out in his military flight manuals.
Upon closer inspection, however, Army scientists discovered that the opposite held true. The tighter that pilots stuck to rules they’d learned in fighter school, the faster they went down in flames. Textbook flying made them predictable, easy prey for Germany’s top aces who — like Gabreski — triumphed by being surprising, original, imaginative.

This discovery of the evolutionary fitness of human creativity seemed to doom the Army’s plan for ENIAC. ENIAC could accelerate time only if the future was a math equation, deterministic, driven by past data. And as Gabreski’s kill tally revealed: the future wasn’t. Tomorrow belonged to individuals who broke the rules, forsaking protocol to act inventively. To rely on ENIAC — and computers more generally — was to condemn America’s armed forces to perish like a noob pilot who put his faith in logical programming.
Yet in a second unanticipated twist, AI’s role as tomorrow-maker was salvaged by a middle-aged professor named J.P. Guilford. Guilford was a psychologist at the University of Southern California. And in 1943, Guilford informed the Army that he could distill the creativity of pilots like Gabreski into computational protocols. Intrigued, the Army funded Guilford’s research, eventually promoting him to colonel. Finally, Guilford reported the fruits of his investigation: human creativity was the product of two logical processes, one randomized and the other probabilistic.
Those two processes have since become known as divergent thinking and convergent thinking. Divergent thinking is random. It arbitrarily mixes and matches elements across categorical sets. (This free-associational activity is the basis of corporate brainstorming techniques.) Convergent thinking is probabilistic. It identifies patterns in the random associations. (It’s what happens at the end of a brainstorm when everyone votes on the ideas that are most likely to work.) Together, these two processes constitute ideation, the engine of design thinking. And when automated by computers, they yield generative AI.
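To make the two steps concrete, here is a minimal sketch in Python of ideation as described above: a divergent step that randomly recombines items from categorical sets, followed by a convergent step that keeps the combinations a crude frequency heuristic rates as most likely to work. The categories, the scoring rule, and the function names are illustrative assumptions for this sketch, not anything drawn from Guilford’s protocols or the Army’s research.

```python
import random

# Toy "categorical sets," standing in for the memories an ideator draws on.
# These categories are illustrative assumptions, not part of any real protocol.
MATERIALS = ["carbon fiber", "bamboo", "recycled plastic"]
OBJECTS = ["bicycle", "bridge", "drone"]
USES = ["disaster relief", "urban commuting", "crop monitoring"]


def diverge(n_ideas: int, rng: random.Random) -> list[tuple[str, str, str]]:
    """Divergent step: arbitrarily mix and match one item from each categorical set."""
    return [
        (rng.choice(MATERIALS), rng.choice(OBJECTS), rng.choice(USES))
        for _ in range(n_ideas)
    ]


def converge(ideas: list[tuple[str, str, str]], keep: int) -> list[tuple[str, str, str]]:
    """Convergent step: score each candidate by how often its parts recur in the
    random pool (a crude probabilistic stand-in for "most likely to work"),
    then keep the top-scoring few."""

    def score(idea: tuple[str, str, str]) -> int:
        material, obj, use = idea
        return sum(
            (material == m) + (obj == o) + (use == u)
            for m, o, u in ideas
        )

    return sorted(set(ideas), key=score, reverse=True)[:keep]


if __name__ == "__main__":
    rng = random.Random(42)                # fixed seed so the run is repeatable
    pool = diverge(n_ideas=50, rng=rng)    # "brainstorm": generate many random combinations
    winners = converge(pool, keep=3)       # "vote": keep the statistically favored candidates
    for material, obj, use in winners:
        print(f"A {material} {obj} for {use}")
```

In caricature, swap the random recombination for sampling from a learned distribution and the frequency heuristic for a trained probability model, and you have the rough shape of the generative systems described below.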
Generative AI was impossible for ENIAC. The machine’s vacuum hardware was too basic. But over the twentieth century, computers got faster at running ideation. By the late 2010s, the precursors of ChatGPT, Gemini, and DALL-E were spluttering to life, making real the future that the Army had envisioned in World War II. With the help of symbolic AI (the heir of ENIAC’s hand-coded circuitry), American soldiers would predict all the parts of tomorrow that followed the laws of math. And with the help of generative AI (the heir of Guilford’s two computational protocols), American soldiers would make all the parts of tomorrow that flowed from creativity. Combined, these two forms of artificial intelligence ushered in the age of “neurosymbolic AI,” spelling (at last!) the end of history.
Except, there was a hitch. In this story’s third and final twist, the Army realized during the early 2000s that ideation didn’t work. For years, US Army Special Operations had rigorously drilled its most elite recruits in divergent and convergent thinking. And it found: the practical creativity of those recruits declined. Their tactics were less imaginative, their strategies less effective. They were slower to invent original plans. They were more emotionally fragile when placed in fast-changing environments.
Seeking an explanation for this unexpected setback, Army Special Operations reached out in 2021 to my lab at Ohio State University. Working together, we discovered that human creativity is driven by mechanical processes that are natural for animal neurons — yet can’t be performed by any known arrangement of electronic transistors. These mechanical processes are narrative, not logical, which is to say, they are driven by thinking in actions (as synapses do) rather than by thinking in equations (as computers do).
“Thinking in actions” is detailed in my new book, Primal Intelligence, but as one example, there’s the human brain’s ability to spot what Special Operators refer to as “exceptional information,” which is defined in the Army manual Mission Command: “There is information that results from an extraordinary event, an unseen opportunity, or a new threat. This is exceptional information — specific and immediately vital information that directly affects the success of the current operation … Identifying exceptional information requires initiative.”
Initiative cannot be programmed. It is not reducible to pattern recognition. (It requires process recognition.) Nor can it be achieved via random mixing and matching.
Yet initiative is not magic. As my lab determined from studying Army Special Operators, it can be physically cultivated in human brains via narrative-based exercises like this one:
List people who are creative. Congratulations. You’ve just thought like a computer, using a keyword for fast memory search. Now escape the generic archetypes of logic and think like a novelist, remembering (or discovering) for each person on your list a specific story of a time when they were creative in their own distinct way. Stretch that original story into the future, imagining an individual action each person might take in response to a challenge or opportunity you see ahead.
Perhaps, in time, human engineers will build an artificial brain capable of performing this exercise. (That artificial brain won’t be a computer; it will require the invention of narrative-competent hardware that incorporates the synapse’s nonelectronic architecture.) Until then, however, your best chance at beating your competitors is to learn the lesson of the US Army: computer AI is only partly smart. No matter how quantum, neurosymbolic, or sentient the ENIAC, human creativity remains the future.