Why GPT3 is just a Wordcel Calculator
The hottest AI tech is just our generation's calculator
Paul Krugman’s Ghost
In 1998, Paul Krugman infamously predicted that “by 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.”
This quote gets bandied about any time Krugman (a Nobel laureate) has the gall to suggest he might know a thing or two about how the future will play out.
There’s a lesson here: predictions about the future impact of technology are a fool’s errand. No matter how successful you are otherwise, these predictions will be dredged up thirty years later to demonstrate that (1) you are dumb and (2) no one should take your predictions seriously.
And yet - I can’t help following in Krugman’s footsteps.
Okay enough throat clearing.
Here’s my prediction: when the dust settles, we will see that GPT3 - far from heralding the start of AGI and the singularity and whatever other dreams may come - has been as impactful to the world as the invention of the humble calculator.
This newsletter is totally free. But subscribing and sharing are appreciated :)
The Electronic Calculator
This is not actually a knock on GPT3.
Calculators, while commonplace today, were an incredibly important technology! And, like AI, they took a while to reach their present form. Mechanical calculators were invented in the 17th century by Blaise Pascal. An 1857 GQ review of the hot new Arithmometer celebrated:
“A multiplication of eight figures by eight others is made in eighteen seconds(!); a division of sixteen figures by eight figures, in twenty four seconds; and in one minute and a quarter one can extract the square root of sixteen figures, and also prove the accuracy of the calculation. [...] It is not matter producing material effects, but matter which thinks, reflects, reasons, calculates, and executes all the most difficult and complicated arithmetical operations with a rapidity and infallibility which defies all the calculators in the world.”
But these devices were not modern calculators. They were more like powerful abacuses.
Things became more interesting with the first mainframe computers in the 1940s. Now we had digital calculators – even if they were the size of a garage. By 1961, the first desk-sized electronic calculators were sold in the United States for a whopping $2,200.
This is where things start to mimic our moment. Because the digital calculator touched off a panic. The NYT wrote, in 1950, that these electric brains were going to eliminate the need for human labor. In February 1962, President Kennedy claimed that automation was the job problem of the 1960s. See, you might trust your handy calculator app now, but at the time – that robot brain was coming for human jobs!
But at least those computers were massive and inconvenient. Then in 1967 TI introduced the Cal Tech, a prototype calculator that could fit in the palm of your hand. By the early 1970s, the modern handheld calculator was available for a few hundred dollars.
By the end of the decade, the price had fallen to under $10.
Accountants, human computers, engineers – all must tremble before the mighty power of the calculator and its command of mathematics!
But the thing is – the automation apocalypse didn’t happen.
At least not really. Because the calculator augmented rather than supplanted human capabilities. It provided a simple, accessible tool to automate the rote work for workers in quant fields. It was no more and no less than a productivity tool.
Productivity tools are important. They should and do change the way we work, change the way we teach, and lay the groundwork for emergent revolutions. The calculator did all three.
In 1986, Connecticut became the first state in the Union to require the use of calculators on state exams. The discussions had been contentious. Many educators believed that American students were already numerically illiterate. In an echo of Plato’s critique of writing, they believed that students who used calculators would never learn to calculate themselves.
Parents joined the chorus – after all, the way they had learned math had been good enough for them! Their kids were just going to type in numbers and get all the answers?
No way. Not my kids. Not in my country. This is America.
But the State School Board saw it differently.
By allowing students to focus on problem solving rather than computation, schools could shift the focus of education to learning how and when to apply calculations.
The State’s Mathematics Department Head, Mr. Steven Leinwand, told the NY Times: “teachers will be able to move beyond pencil-and-paper drills that teach rote computation skills such as addition, subtraction, multiplication and division. Instead, they could focus on the more complex problems, such as problem-solving and estimation, which are areas in which national and state tests have indicated that students are in dire need of learning.”
Education had to change because the world had changed. Of course, this shows up again today as we debate whether GPT3 necessitates a change to how we assign college papers. Which… duh… of course it does.
There are certain writing skills – formulating an argument, summarizing a point – that can and should be tested in classrooms (just as certain math exams do not allow calculators). But the beauty of the new technology is that it accelerates the boring part of the work to help us focus on the parts that still require human experience. Indeed, it helps us focus on what actually adds value in modern writing – connecting disparate ideas, generating novel insights, and drawing on personal or social experience to make your text relatable.
Just like the calculator, the advent of LLMs allows us to get past the basics of writing and research and do the more thoughtful work underneath. The five-paragraph essay may be consigned to the dustbin of history. But would that really be a bad thing?
The Industry Impact
If you’ve seen Hidden Figures, you know that NASA once depended on literal human computers to get us to the moon. Astronauts didn’t fully trust the new machine calculations. They wanted humans doing the work and checking the machines’ outputs.
But this work wasn’t done by NASA’s top engineers. NASA’s brass considered themselves above mere calculations. They entrusted the work to the employees they called “computers.”
As electronic calculators and computers matured, those jobs died out. But it was not a pure loss. Many of the more talented “computers” moved into engineering, where their skills – often overlooked in favor of their calculating ability – could be put to better use.
Now - this is not a purely rosy story. Many jobs were eliminated. Talented engineers – who could now do the calculating work and the engineering work – were even more valuable than before. Lower-skilled workers who performed the automatable tasks lost their jobs. The rich became richer, and the poor poorer.
But the mix-shift is more complicated than that. The diffusion of calculating technology into related fields accelerated the growth of math-heavy service jobs. Between 1970 and 1990, the fastest-growing sectors of the US economy, in relative terms, were finance, insurance and real estate. The democratization of calculation helped power the financialization of the American workforce.
Those jobs were new and they were higher paying. Inequality increased as we eliminated a middle-class segment of knowledge workers, but unemployment did not.
We should expect a similar transition in a post-GPT3 world. Artists and writers who can produce high-quality work by combining technology, taste and the ability to connect with an audience will see their earnings and their impact rise. Those who crank out machine-quality output are likely to be displaced.
And increased inequality will likely result. But the rise of these technologies will also empower a new generation of careers – artists who lack fine-motor skills, writers who can build immersive universes of fully realized characters – that will boost overall economic well-being.
The Adjacencies Boom
Perhaps the most profound impact of the calculator boom was not about calculators at all.
It was about the other potential uses of their insides. In the race to develop better personal calculators, computing technology matured. The first use of transistors outside of audio engineering was in electronic calculators. TI and HP developed programmable pocket calculators that introduced a generation of kids to programming. And perhaps most importantly – the calculator era was critical to the rise of a startup called Intel.
The Japanese company Nippon Calculating Machine wanted a versatile chip that could be used in all sorts of calculators. They heard about a young, scrappy startup called Intel and asked if it could meet the design spec. Intel lied (they had never done anything like that before) and said that they could. But in trying to make good on that promise, they ended up inventing the first computer on a chip – or, as we know it today, the microprocessor.
Those microprocessors – made of silicon – would give their name to the region that would take the small dreams of the calculator and turn them into a computer revolution: Silicon Valley.
But it all started with a calculator.
So… what comes next? From Turing Test to Turing Completeness (Again).
All of this is to say, GPT3 is not the revolution itself. It is the starting gun. It is a taste of what is to come.
LLM advocates like to say that this is moving the goalposts. GPT3, after all, is probably the first artificial intelligence that can really be said to pass Alan Turing’s famous test.
But Turing’s test was itself a product of its time and of a limited view of human intelligence that focused on our capacity for language. The ability to generate “correct”-sounding language, while impressive, is not sufficient to really count as “intelligence.” Language is a set of symbolic representations that we use to communicate the underlying concepts that exist in our brains. It is a translation mechanism that transforms concepts into recognizable symbols and back again.
GPT3 has built that communication layer. But it still lacks understanding.
This objection is most famously formulated as Searle’s Chinese Room. Searle imagines a human concealed in a room, receiving questions written in Chinese. By mechanically following a rulebook of symbol-manipulation instructions, our hidden human can always produce a plausible answer without understanding anything of what he is saying. This, Searle argues, is the problem with Turing’s test: just because a machine can give you the right words in Chinese doesn’t mean it can speak Chinese.
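The argument can be caricatured in a few lines of code. This is a toy sketch of my own (the rulebook and phrases are hypothetical, not Searle’s): a program that returns plausible Chinese answers by pure symbol-matching, with no understanding anywhere in the loop.

```python
# A toy "Chinese room": the operator matches symbols to symbols.
# Nothing in this program "knows" Chinese.
rulebook = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
}

def room(question: str) -> str:
    # Look up the incoming symbols and emit the paired symbols.
    # Fall back to a stock reply: "Please say that again."
    return rulebook.get(question, "请再说一遍。")

print(room("你好吗？"))  # 我很好，谢谢。
```

From the outside, the room passes a (very small) Turing test; on the inside, there is only lookup.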
But how do we model understanding? Turing reduced the thinking question to a speaking question. What is a similar useful test to parse what comes next?
Years before the imitation game, Turing wrote another paper – “On Computable Numbers” – that gave us the concept we now call Turing Completeness. A system is Turing Complete if it can carry out any step-by-step algorithm that any computer can.
Turing Completeness is what separates a computer from a calculator. A calculator can perform a set of discrete mathematical functions – addition, multiplication, subtraction, division, and so on. But it cannot chain arbitrary symbolic transformations. For that it needs the ability to test conditions, to change variables, to iterate or recurse through functions.
A machine with those properties can perform any computation an algorithm can describe and is said to be Turing Complete. Every modern computer is Turing Complete.
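To make the distinction concrete, here is a toy illustration of my own (not from this essay): a minimal counter machine in the spirit of Minsky’s, which has exactly the three ingredients named above – conditional tests, mutable variables, and jumps – and is therefore Turing Complete, while a calculator’s fixed function keys are not.

```python
# Toy counter machine. Instruction set:
#   ("inc", r)       increment register r        (change a variable)
#   ("dec", r)       decrement register r (floor at 0)
#   ("jz", r, addr)  jump to addr if r == 0      (test a condition)
#   ("jmp", addr)    unconditional jump          (iterate)
#   ("halt",)        stop and return the registers

def run(program, registers):
    pc = 0  # program counter
    while True:
        op = program[pc]
        if op[0] == "inc":
            registers[op[1]] += 1
            pc += 1
        elif op[0] == "dec":
            registers[op[1]] = max(0, registers[op[1]] - 1)
            pc += 1
        elif op[0] == "jz":
            pc = op[2] if registers[op[1]] == 0 else pc + 1
        elif op[0] == "jmp":
            pc = op[1]
        elif op[0] == "halt":
            return registers

# Addition as a *program* the machine runs, not a function wired into it:
# drain register "b" into register "a".
add = [
    ("jz", "b", 4),   # 0: if b == 0, we're done
    ("dec", "b"),     # 1: b -= 1
    ("inc", "a"),     # 2: a += 1
    ("jmp", 0),       # 3: loop back
    ("halt",),        # 4
]

print(run(add, {"a": 3, "b": 4}))  # {'a': 7, 'b': 0}
```

The point is that addition here is data fed to a general machine; swap in a different program and the same machine computes something else entirely. A calculator cannot do that.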
To separate GPT3 and LLMs from the more general intelligence technology that could really change the way we do all work, we need a similar test. We need software that can understand – not because it can perform generative tricks – but because it can represent and manipulate underlying concepts with arbitrary precision.
I am not an AI research scientist. So I can’t tell you if an LLM with an order of magnitude more parameters could achieve this goal – but I tend to doubt it. Humans have multiple forms of intelligence. We can manipulate logic, we can use language, we can test and discern, we can learn through embodied physical experience.
My strong hunch is that we will need an LLM that can generate individual insights, paired with other intellectual capacities – classical logic and causal reasoning, skeptical adversarial testing, and yes, the capacity of language to translate and generate symbolic representations – to achieve the general-purpose intelligence that GPT3 and Stable Diffusion hint at.
The calculator is great, but the microprocessors it inspired and the computers that they powered – well, those really changed the world. They just took a little longer to arrive.
Thanks for reading all the way down here. If you got this far, you might like my other work. So please consider subscribing :)