Still Haven't Seen Any AI Around Here
Billions of dollars, mountains of press, drama. And nothing has changed for me.
I’ve been tracking technology long enough to know when something isn’t right. Generative AI isn’t right. So far, it’s a chimera.
Generative AI is a huge batch system, which means it takes billions of dollars and megawatts of power to train large language models over some period of time before the LLM can mimic what appears to be human logic. But what it is actually doing is using that data to predict the next letter or word or set of pixels or line of code, in a way that appears magical. Batch systems have never been magical, though, going all the way back to when programming was done with punch cards.
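To make the predict-the-next-word idea concrete, here is a minimal sketch using a toy bigram model — a tiny, hypothetical stand-in for the statistics a real LLM learns at vastly greater scale, not anything resembling an actual transformer:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words most often follow it."""
    words = corpus.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Predict the most frequent follower of `word` (None if unseen)."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# Toy "training run" over a made-up corpus.
model = train_bigrams("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # "cat" — it followed "the" most often
```

The point of the toy: nothing here reasons about cats or mats; it just replays statistics of what came next in the training data, which is the same trick scaled up by billions of dollars of batch compute.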
I don’t know if this applies to coding, since I don’t know how to program, but everything put out by generative AI looks like mush to me. The images created by DALL-E and the others all have a similar look and feel. The text generated for emails and reports is anodyne. My son has been using various LLMs to create programs without coding, which blows his mind, but it brings to my mind The Last One from the 1980s. As you might imagine, The Last One wasn’t the last program you’d ever need, and it didn’t last long.
Eric Newcomer wrote a great newsletter titled “The Bear Case for OpenAI at $157 Billion,” in which he posited that “A core question as to whether this spending is sustainable is whether the money OpenAI is spending is building a deep technological moat and a permanent resource or if it’s merely reflective of the amount of money OpenAI is going to have to keep spending to stay ahead of the competition.”
Shortly after Newcomer published that piece, The Wall Street Journal published a story, “This AI Pioneer Thinks AI Is Dumber Than a Cat” (gift link). In it, Yann LeCun, among other things chief AI scientist at Meta, argued that Generative AI is on a fundamentally different level from Artificial General Intelligence (AGI), the holy grail of a computing system capable of reasoning like a human being. His point in calling current AI dumber than a cat is that it mimics human intelligence but doesn’t actually possess it.
Of course, I agree with him. I also suspect that Newcomer will get credit for predicting the coming AI bust, when it becomes obvious that the investment in data centers to process LLMs is no longer sustainable. I even think Jensen Huang, the CEO of Nvidia (whose valuation has exploded because GPUs happen to be really useful for batch-processing data for LLMs) agrees, at least according to something he said in a recent Bloomberg interview (gift link): “We’re able to drive incredible cost reduction for intelligence,” Huang said. “We all realize the value of this. If we can drive down the cost tremendously, we could do things at inference time like reasoning.”
Along with being skeptical about anything that is processed in batches, I’m trained by my experience of the last 45 years to believe that nothing important happens in technology until it benefits the individual. (I started with the Apple II and remember when VisiCalc was “the tail that wags the personal computer dog,” as Ben Rosen put it — the first spreadsheet got serious business people to buy Apple II computers just so they could use it.)
So far, I haven’t gotten anything useful out of Generative AI. Or should I say I haven’t found chatbots to be useful.
I don’t need ChatGPT to write for me. I’ve been writing professionally for 45 years. I still write better than a machine trained on everybody else’s writing. In fact, I haven’t opened ChatGPT for several months and am thinking maybe I can stop paying $20/mo for the privilege. Have you noticed how you can spot all the Generative AI images made by ChatGPT? Yes, they all look the same.
I’ve been to the Cerebral Valley AI Summit two years in a row (and plan to attend #3 in November), which is of course moderated by Eric Newcomer (see above). During the past two years, I’ve heard all the pitches for how helpful this stuff will be. But I’m still stuck using PowerPoint to make slides. Excel to make calculations and do data operations. Word, that lovely pig of an application, to read the documents people send me. (Although I prefer to make new documents in Google Docs, which is inherently better for collaborating.)
Best value so far? Actually, the Gemini summary in Google searches is nice. I am a master of Google search, including using Boolean operators and understanding the syntax for getting what I want. So it’s not earth-shaking, but it is nice to get a summary. My son likes Perplexity, which is focused on search and designed to supersede Google Search. I just bought a subscription to Perplexity, so maybe it will improve on Google Search’s Gemini summaries. I’ll let you know, right after I decide to cancel my ChatGPT subscription, since they both cost $20 a month. I’ll definitely cancel ChatGPT if OpenAI does, as is rumored, raise the price to $44 a month.
How could “AI” help me live a better life?
*Scheduling travel and appointments in multiple time zones. Why, in 2024, can my calendar (Apple, but true for Google and Outlook) not know that I am traveling to a different time zone? All my flights are on the calendar, but I still have to go into each item and set the time zone separately for departing and arriving. But once I’ve done that, why can’t it show me my appointments at the time in the other time zone, instead of showing me having dinner at 10am when I will be in London?
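What the calendar should do here is purely mechanical. A sketch using Python’s standard zoneinfo module, with an assumed example: a dinner entered as 6pm London time, viewed from a calendar pinned to US Pacific time (the date and cities are my illustration, not from the original):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A dinner entered as 6pm local time in London (assumed example date).
dinner = datetime(2024, 7, 10, 18, 0, tzinfo=ZoneInfo("Europe/London"))

# A home calendar pinned to US Pacific time renders the same instant as
# 10am — the "having dinner at 10am" confusion described above.
home_view = dinner.astimezone(ZoneInfo("America/Los_Angeles"))
print(home_view.strftime("%H:%M"))  # 10:00

# What a travel-aware calendar should show once a flight on the calendar
# puts you in London: the event in the zone you will actually be in.
print(dinner.strftime("%H:%M"))  # 18:00
```

The conversion itself is one library call; the missing piece is the calendar noticing from the flights it already has which zone its owner will be in.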
*My work email likes to send important emails to junk and the stupidest junk to my inbox. My work email is managed in Microsoft Office 365 (fka Exchange). I have hated using Microsoft Outlook since it was introduced in 1997. All bad on me for choosing it for our firm when we started in 2006. Everybody in the firm hates Microsoft Office equally. Even after 27 years, I still get stupid junk mail like this:

But I don’t get any of those emails in Gmail, which I use for all my personal and commercial email. Hmmm. Does that mean Google is actually using some form of AI, perhaps sophisticated machine learning, to recognize junk mail for what it is? Now that’s AI I can like, even if I never see it.
*Apple isn’t much better. Contacts. Calendar. Mail. They are all pigs and don’t sync well between iOS and macOS. But Apple just announced that iOS 18.1 is due for delivery on October 28, which will include Apple Intelligence.
I am really looking forward to Apple Intelligence. What I understand so far is that it will use device-based SLMs (I made that up to mean Small Language Models that can work on a device like the iPhone without having to go to the internet for data). What I imagine Apple is doing is using SLMs that draw on your personal data (still inside the Apple privacy tent) to optimize value for you, rather than for the organization. I really hope it works! Will it know about time zones? Can it pick contact information out of an email and update or create a contact automatically?
Bottom line: So far, all this hullabaloo about generative AI is invisible to this individual user. I’m not feeling the VisiCalc vibe or the Mosaic vibe or even the Instagram vibe. I’ll believe it when I see it, but I’m worried that I won’t last long enough to see real AI in action.
PS: If you want to listen in to my conversations with my son, Stewart Alsop III, check out Stewart Squared on Spotify.
Can you articulate further the ways in which things based on batch processing have not enabled outsized value creation? I’m trying to reflect deeply on whether what we’re seeing is a huge bubble about to burst, or the novelty phase portending a Cambrian explosion. In a way, I think both can be true. Foundation model companies can’t all be worth their last-round implied valuations; at the same time, the infrastructure being laid may usher in the next wave of new-era, AI-first approaches to pretty much everything. But I have not read anyone articulate this point about batch processing as you have.