Goldman is Wrong About AI
Blog

By Steve Shillingford
Once upon a time in tech, circa 1995-2001, when AOL CDs arrived in the mail like Tide samples, companies called Webvan and Pets.com burned through billions in venture capital, seemingly overnight, and then they vanished. Today, those ancient flameouts – and the lessons they carry – are lost in a fog of forgetting. I would bet a large quantity of BTC that the founders of DoorDash and Chewy have never heard of these companies. Likely their investors haven’t either. Neither realizes they are benefiting from “second mover advantage.”
For those who haven’t gone to the Wayback Machine yet, Webvan was “groceries on demand” and Pets.com was, wait for it, “buy your pet supplies online!” Everyone thought it was cool – until it wasn’t, and the money burned up. Interest rates were 9% and there was no “doubling down” in venture. As DoorDash, Chewy, and countless others have demonstrated, the death knell for companies isn’t whether the idea is good; it’s whether you run out of money while perfecting it.
The point is it’s way too early to “call the ball” in AI. I’d argue we’re in the first quarter, and maybe not even through it. Yes, some ideas and their companies are going to crash and burn along the way. Yes, billions and billions in fiat currency will be set on fire along the way. And many early leaders will get commoditized. But it’s simply ahistorical to assume that this means the tech itself lacks innovation, that the spend is foolhardy, and that we should just move on to something else. While we’ve seen some of the hyperscalers try to tamp down expectations, I am specifically calling out a recent Goldman Sachs analyst report. The doom and gloomers either suffer from recency bias – or worse, aren’t paying attention to where we are right now in the adoption cycle.
Where we are is early.
Marc Andreessen provides a more useful view: “Any new technology tends to go through a 25-year adoption cycle.” By that measure, I think we should mark the start of this cycle at 2017. And if Mr. Andreessen is correct (which of course he is), we’re barely through the first quarter.
Generative AI, Large Language Models, and the applications powered by them are based on concepts from artificial intelligence – which is not new – but they are nonetheless relatively recent innovations, born out of Google’s 2017 research paper “Attention Is All You Need.” It is considered by some to be a founding paper of modern artificial intelligence, as the transformer it introduced became the main architecture of large language models like those based on GPT.
The idea that anyone, let alone a financial analyst and academic, will accurately predict the usefulness, productivity, and future innovation here is wildly ambitious and likely plain wrong. Why?
Again: History.
I can still remember companies like Global Crossing in 1999, raising billions to lay fiber across the world’s oceans. At its peak the company was valued at $50B; it gave up the ghost two years later at a tenth of that price. But no one then could have forecast the need to go from dial-up to 100Gbps – over a wire!
Let’s also remember that Steve Ballmer told us “There’s no chance that the iPhone is going to get any significant market share…”
The point here is that while a progress report on the evolution of a new technology makes complete sense, having the temerity to think you can “call the ball” on GenAI after 4-5 years of spending is just plain lazy.
What have the Goldman analysts actually foretold? Here’s Daron Acemoglu’s crystal ball:
Only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years, implying that AI will impact less than 5% of all tasks.
Based on what?
He also questions whether AI adoption will create new tasks and products, saying these impacts are “not a law of nature.”
One guy at MIT is skeptical and now we should not expect any material productivity gains? Sounds very Ballmerish to me.
Another from Goldman’s Jim Covello:
AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.
Wait. Who said AI isn’t designed for complex problems? X and LinkedIn would beg to differ. So would Mark Zuckerberg. As would most tool and application vendors. Has he seen what Claude 3.5 is doing these days? Has he talked to any other firms deploying AI “everywhere”? Has he talked to Jamie Dimon lately?
Look, I’m not saying that there hasn’t been lots of money poured into the space or that the hype cycle isn’t full tilt. However, to publish a 10-year view based on very little forensic data or historical comps seems egregious.
As the founder of a GenAI application startup going on 5+ years, I can’t help but acknowledge that my viewpoint is necessarily biased. But my bias doesn’t mean I’m wrong to point out that we’ve seen this story before. We’re seeing companies looking aggressively to incorporate GenAI solutions (with human-in-the-loop oversight), grappling with regulatory over-reach, and figuring out the cost-benefits. They will solve this.
No question, the early days of innovation – especially anything meaningfully complicated – are teeming with snake oil, hucksters, and blind believers. That doesn’t make the larger promise a mere dream, or the actual facts wrong.
Here’s my suggestion for the doomsayers. Look harder, get curious, observe more than just the FAANGs, and get outside the building. Talk with “doers” versus salesmen. That’s the difference. Dig into those “complex” processes that AI supposedly isn’t good for…and then, if you’re still not seeing it, tell us exactly why.
Because otherwise, frankly, I just don’t believe you.
A different kind of future-seer, Arthur C. Clarke, formulated three laws, beginning with his 1962 essay “Hazards of Prophecy: The Failure of Imagination.” And they’re spot on:
- When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
- The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
- Any sufficiently advanced technology is indistinguishable from magic.
Goldman’s report runs afoul of all three.
Number 3 is especially relevant: in a world of tweets and a dearth of attention for long-form content, GenAI probably does look like magic. But in a world where people are building actual applications – solving complex business processes faster, better, and more safely – GenAI is only beginning its transformation of the regulated enterprise.
At a recent VB Transform event, Shri Santhanam, executive vice president and general manager of Software, Platforms, and AI at Experian North America, discussed the slow adoption of GenAI in financial services, rightly pointing out that “despite being a highly regulated industry, financial services is rife with potential generative AI applications.”
“AI in finance,” he explained, having actual real-world operational experience, “is expected to offer several benefits, such as increased efficiency, cost savings, improved risk management, personalized services and enhanced decision-making.”
Will billions and trillions be invested? Absolutely. Will there be more examples à la Global Crossing, Webvan, and Pets.com? Likely. But to say there’s no productivity gain to be had, or that AI isn’t suited for complex processes, is not only wrong; it reflects a complete disconnect from the active, innovative, and diverse reality of right now. And an utter disregard for the history of technology, going back to the invention of fire.
FWIW.