March 1, 2026
While the serial novel got its start way back in the 17th century, it did not “get its sea legs” until Victorian-era England. In 1836 and 1837, a 24-year-old journalist began publishing a series of related stories called “The Pickwick Papers”. “The Papers” and its author – Charles Dickens – became a phenomenon amongst the English, spawning a theatrical version, joke books and other merchandise (Dickens, for his part, would go on to bore school kids for the next 200 years). And so was born an era marked by cliffhangers designed to titillate the reader and leave them wanting more from the “next episode”. This era would last well into the middle of the 20th century before gradually petering out (although Stephen King did do a great reboot in the mid-1990s with “The Green Mile” series) as other forms of entertainment – radio, comic books, television and movies – took up the mantle.
Not to be deterred, we are going to spend the month of March rebooting the serial with a series on the disruption caused by artificial intelligence. We did not set out to do this, but when three pages turned into six and ten into fifteen, we decided that rather than take a sledgehammer to the length of the piece (or subject our readers to a tome), we would break it up into bite-sized pieces. We do not promise weekly cliffhangers, but we will do our best to keep you wanting more. We should note that we use AI in our work now and expect it to play an ever-growing part in the coming years. In fact, some of the charts in this piece were done with an “AI-assist”.
With that in mind, let’s get to it.
We often like to think that this time is different. That is – a transformative technology has finally gone too far, millions of people will lose their jobs and the economy will be imperiled as the new technology displaces human workers. Economists call the underlying logical error the "lump of labor fallacy" — the assumption that there is a fixed amount of work (a “lump”) to be done, and that any task claimed by a machine is permanently lost to human workers. History has repeatedly demonstrated that this assumption is wrong, yet the fear resurfaces with each new technology because the initial disruption is very real, even when the long-term outcome is not. Let’s explore a few historic examples:
Skilled English textile workers (known collectively as “the Luddites”) — weavers and framework knitters who had spent years mastering their craft — faced genuine economic catastrophe as power looms mechanized what had been well-paid artisan work. After their appeals to employers and the government failed, the Luddites resorted to organized, nighttime raids on factories and mills to destroy the new machines, which they viewed as the source of their problems. The British government responded by making machine destruction a capital offense, and the Luddite movement effectively ended in 1813 with mass executions.
Henry Ford's moving assembly line, introduced in 1913, transformed automobile manufacturing by breaking complex work into simple, repetitive tasks. Critics argued this “deskilled” the work and would create an army of interchangeable, disposable workers.
Let’s start with a chart and then comment:

When the first ATM was installed at Chemical Bank on Long Island in 1969, industry observers predicted the machine would steadily eliminate the need for human tellers, as the ATM could do the most common teller tasks faster, cheaper, and around the clock.
Okay, with those out of the way, let’s pivot to a more recent historic example – the Internet. Given its proximity to current events, we will dedicate an entire section to this and use it as an opportunity to wind down part one of the piece.
Of all the historical analogues to AI, the commercialization of the Internet in the mid-1990s is the most instructive — not because it is perfectly analogous, but rather because it played out within living memory, the fears it generated were remarkably similar, and the gaps between predicted and actual outcomes were so dramatic.
The perfectly reasonable predictions …
By the late 1990s, serious analysts — not just modern-day Luddites — were predicting the Internet would hollow out entire professions:
Many of these predictions were directionally correct. Travel agent employment did fall by nearly half over the following decade. Stockbrokers on the floors of exchanges largely disappeared. Classified advertising was almost entirely displaced by digital platforms, devastating newspaper revenue and newsroom employment.
… that ultimately proved mostly wrong
Let’s again start with a chart and then comment:

What forecasters systematically underestimated was the volume of new categories of work the Internet would create — roles that simply did not exist and could not have been imagined in 1995:
Thus, the Internet ended up being not only a job destroyer, but also a massive job creator. While the human mind has the capacity to foresee the former – this new technology is going to displace a lot of jobs – it is less effective at imagining the latter – this new technology is going to create far more jobs that did not previously exist.
Okay, let’s end part one there as we can tell some eyes are probably glazing over. Next week (cliffhanger time), we will look at how AI is coming for your job and how something called “the J-curve” might predict what is going to happen.