Attenuating Innovation (AI)

In 2019, a very animated Bill Gates explained to Andrew Ross Sorkin why Microsoft lost mobile:

There’s no doubt that the antitrust lawsuit was bad for Microsoft. We would have been more focused on creating the phone operating system so that instead of using Android today, you would be using Windows Mobile. If it hadn’t been for the antitrust case, Microsoft would have…

You’re convinced?

Oh we were so close. I was just too distracted. I screwed that up because of the distraction. We were just three months too late with a release that Motorola would have used on a phone, so yes, it’s a winner-take-all game, that is for sure. Now nobody here has ever heard of Windows Mobile, but oh well. That’s a few hundred billion here or there.

This opinion is, to use a technical term favored by analysts, bullshit. Windows Mobile wasn’t three months late relative to Android; Windows Mobile launched as the Pocket PC 2000 operating system in, you guessed it, 2000, a full eight years before the first Android device hit the market.

The issue with Windows Mobile was, first and foremost, Gates himself: in his view of the world the Windows-based PC was the center of a user’s computing life, and the phone a satellite; small wonder that Windows Mobile looked and operated like a shrunken-down version of Windows: there was a Start button, and Windows Mobile 2003, the first version to have the “Windows Mobile” name, even had the same Sonoma Valley wallpaper as Windows XP.

If anything, the problem with Windows Mobile is that it was too early: Android, which originally looked like a BlackBerry, had the benefit of copying the iPhone; the iPhone, in stark contrast to Windows Mobile, looked nothing like the Mac, despite sharing the same internals. Instead, Steve Jobs and company started with a new interface paradigm — multi-touch — and developed a user interface that was actually suited to a handheld device. Jobs — appropriately! — called it revolutionary.

Fast forward four months from the iPhone introduction, and Jobs and Gates were together on stage for the D5 Conference, and Gates still didn’t get it; when Walt Mossberg asked him about what devices we would be using in five years, Gates still had a Windows device at the center:

I don’t think you’ll have one device. I think you’ll have a full-screen device that you can carry around and you’ll do dramatically more reading off of that. I believe in the tablet form factor. I think you’ll have voice, I think you’ll have ink, I think you’ll have some way of having a hardware keyboard and some settings for that. And then you’ll have the device that fits in your pocket, which the whole notion of how much function should you combine in there, there’s navigation computers, there’s media, there’s phone, technology is letting us put more things in there, but then again, you really want to tune it so people get what they expect. So there’s quite a bit of experimentation in that pocket-sized device. But I think those are natural form factors. The evolution of the portable machine, and the evolution of the phone, will both be extremely high volume, complementary; that is, if you own one you’re more likely to own the other.

In fact, in five years worldwide smartphone sales would total 700 million units, more than doubling the 348.7 million PCs that shipped that same year; yes, a lot of those smartphone sales went to people who already had PCs, but it was already apparent that for huge swathes of people — including in developed countries — the phone was the only device that you needed.

What is even more fascinating about this conversation, though, is the way in which it illustrated how Jobs and Apple were able to invent the future, while Microsoft utterly missed it.

Mossberg asked:

The core functions of the device form factor formerly known as the cellphone, whatever we want to call it — the pocket device — what would you say the core functions are five years out?

Gates’ answer was redolent of so many experts trying to predict the future: he had some ideas and some inside knowledge of new technology, but no real vision of what might come next:

How quickly all these things that have been somewhat specialized — the navigation device, the digital wallet, the phone, the camera, the video camera — how quickly those all come together, that’s hard to chart out, but eventually you’ll be able to make something that has the capability to do every one of those things. And yet given the small size, you still won’t want to edit your homework or edit a movie on a screen of that size, and so you’ll have something else that lets you do the reading and editing and those things. Now if we could ever get a screen that would just roll out like a scroll, then you might be able to have the device that did everything.

After a back-and-forth about e-ink and projection screens, Mossberg asked Jobs the same question, and his answer was profound:

I don’t know.

The reason I don’t know is because I wouldn’t have thought that there would have been maps on it five years ago. But something comes along, gets really popular, people love it, get used to it, you want it on there. People are inventing things constantly and I think the art of it is balancing what’s on there and what’s not — it’s the editing function.

That right there is the recipe for genuine innovation:

  • Embrace uncertainty and the fact that one doesn’t know the future.
  • Understand that people are inventing things — and not just technologies, but also use cases — constantly.
  • Remember that the art comes in editing after the invention, not before.

To be like Gates and Microsoft is to do the opposite: to think that you know the future; to assume you know what technologies and applications are coming; to proscribe what people will do or not do ahead of time. It is a mindset that does not accelerate innovation, but rather attenuates it.

A Cynical Read on AI Alarm

Last week in a Stratechery Interview with Gregory Allen about the chip ban we discussed why Washington D.C. suddenly had so much urgency about AI. The first reason was of course ChatGPT; it was the second, though, that set off alarm bells in my head. Here’s Allen:

The other thing that’s happened that I do think is important just for folks to understand is, that Center for AI Safety letter that came out, that was signed by Sam Altman, that was signed by a bunch of other folks that said, “The risks of AI, including the risks of human extinction, should be viewed in the same light as nuclear weapons and pandemics.” The list of signatories to that letter was quite illustrious and quite long, and it’s really difficult to overstate the impact that that letter had on Washington, D. C. When you have the CEO of all these companies…when you get that kind of roster saying, “When you think of my technology, think of nuclear weapons,” you definitely get Washington’s attention.

It turns out you get more than that: on Monday the Biden administration released an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This Executive Order goes far beyond setting up a commission or study about AI, a field that is obviously still under rapid development; instead it goes straight to proscription.

Before I get to the executive order, though, I want to go back to Gates: that video at the top, where he blamed the Department of Justice for Microsoft having missed mobile, was the first thing I thought of during my interview with Allen. The fact of the matter is that Gates is the single most unreliable narrator about why Microsoft missed mobile, precisely because he was so intimately involved in the effort.

By the time that interview happened in 2019, it was obvious to everyone that Microsoft had utterly failed in mobile, and that it cost the company billions of dollars along the way. It is exceptionally difficult, particularly for someone as intelligent and successful as Gates, to admit the obvious truth: Microsoft missed mobile because Microsoft approached the space with the entirely wrong paradigm in mind. Or, to be more blunt, Gates got it wrong. It is much easier to blame someone else than to face that failure, particularly when the federal government is sitting right there!

In short, it is always necessary to carefully examine the motivations of a self-interested actor, and that certainly applies to the letter Allen referenced.

To rewind just a bit, last January I wrote AI and the Big Five, which posited that the initial wave of generative AI would largely benefit the dominant tech companies. Apple’s strategy was unclear, but it controlled the devices via which AI would be accessed, and had the potential to benefit even more if AI could be run locally. Amazon had AWS, which held much of the data over which companies might wish to apply AI, but also lacked its own foundational models. Google likely had the greatest capabilities, but also the greatest business model challenges. Meta controlled the apps through which consumers might be most likely to encounter AI generated content. Microsoft, meanwhile, thanks to its partnership with OpenAI, was the best placed to ride the initial wave generated by ChatGPT.

Nine months later and the Article holds up well: Apple is releasing ever more powerful devices, but still lacks a clear strategy; Amazon spent its last earnings call trying to convince investors that AI applications would come to their data, and talking up its partnership with Anthropic, OpenAI’s biggest competitor; Google has demonstrated great technology but has been slow to ship; Meta is pushing ahead with generative AI in its apps; and Microsoft is actually registering meaningful financial impact from its OpenAI partnership.

With this as context, it’s interesting to consider who signed that letter Allen referred to, which stated:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

There are 30 signatories from OpenAI, including the aforementioned CEO Sam Altman. There are 15 signatories from Anthropic, including CEO Dario Amodei. There are seven signatories from Microsoft, including CTO Kevin Scott. There are 81 signatories from Google, including Google DeepMind CEO Demis Hassabis. There are none from Apple or Amazon, and two low-level employees from Meta.

What is striking about this tally is the extent to which the totals and prominence align with the respective companies’ current positions in the market. OpenAI has the lead, at least in terms of consumer and developer mindshare, and the company is deriving real revenue from ChatGPT; Anthropic is second, and has signed deals with both Google and Amazon. Google has great products and an internal paralysis around shipping them for business model reasons; urging caution is very much in their interest. Microsoft is in the middle: it is making money from AI, but it doesn’t control its own models; Apple and Amazon are both waiting for the market to come to them.

In this ultra-cynical analysis the biggest surprise is probably Meta: the company has its own models, but no one of prominence has signed. These models, though, have been gradually open-sourced: Meta is betting on distributed innovation to generate value that will best be captured via the consumer touchpoints the company controls.

The point is this: if you accept the premise that regulation locks in incumbents, then it sure is notable that the early AI winners seem the most invested in generating alarm in Washington, D.C. about AI. This despite the fact that their concern is apparently not sufficiently high to, you know, stop their work. No, they are the responsible ones, the ones who care enough to call for regulation; all the better if concerns about imagined harms kneecap inevitable competitors.

An Executive Order on Attenuating Innovation

There is another quote I thought of this week. It was delivered by Senator Amy Klobuchar in a tweet:

I wrote at the time in an Update:

In 1991 — assuming that the “dawn of the Internet” was the launch of the World Wide Web — the following were the biggest companies by market cap:

  1. $88 billion — General Electric
  2. $80 billion — Exxon Mobil
  3. $62 billion — Walmart
  4. $54 billion — Coca-Cola
  5. $42 billion — Merck

The only tech company in the top 10 was IBM, with a $31 billion market cap. Imagine proposing a bill then targeting companies with greater than $550 billion market caps, knowing that it is nothing but tech companies!

What doesn’t occur to Senator Klobuchar is the possibility that the massive increase in wealth, and even greater gain in consumer welfare, produced by tech companies since the “dawn of the Internet” may in fact be related to the fact that there hasn’t been any major regulation (the most important piece of regulation, Section 230, protected the Internet from lawsuits; this legislation invites them). I’m not saying that the lack of regulation is causal, but I am exceptionally skeptical that we would have had more growth with more regulation.

More broadly, tech sure seems like the only area where innovation and building is happening anywhere in the West. This isn’t to deny that the big tech companies are sometimes bad actors, and that platforms in particular do, at least in theory, need regulation. But given the sclerosis present everywhere but tech it sure seems like it would be prudent to be exceptionally skeptical about the prospect of new regulation; I definitely wouldn’t be celebrating it as if it were some sort of overdue accomplishment.

Unfortunately this week’s Executive Order takes the exact opposite approach to AI that we took to technology previously. As Steven Sinofsky explains in this excellent article:

This document is the work of aggregating policy inputs from an extended committee of interested constituencies while also navigating the law — literally what is it that can be done to throttle artificial intelligence legally without passing any new laws that might throttle artificial intelligence. There is no clear owner of this document. There is no leading science consensus or direction that we can discern. It is impossible to separate out the document from the process and approach used to “govern” AI innovation. Govern is quoted because it is the word used in the EO. This is so much less a document of what should be done with the potential of technology than it is a document pushing the limits of what can be done legally to slow innovation.

Much attention has been focused on the Executive Order’s ultra-specific limits on model sizes and attributes (you can exceed those limits if you are registered and approved, a game best played by large established companies like the list I just detailed); unfortunately that is only the beginning of the issues with this Executive Order, but again, I urge you to read Sinofsky’s post.

What is so disappointing to me is how utterly opposed this executive order is to how innovation actually happens:

  • The Biden administration is not embracing uncertainty: it is operating from an assumption that AI is dangerous, despite the fact that many of the listed harms, like learning how to build a bomb or synthesize dangerous chemicals or conduct cyber attacks, are already trivially accomplished on today’s Internet. What is completely lacking is anything other than the briefest of hand waves at AI’s potential upside. The government is Bill Gates, imagining what might be possible, when it ought to be Steve Jobs, humble enough to know it cannot predict the future.
  • The Biden administration is operating with a fundamental lack of trust in the capability of humans to invent new things, not just technologies, but also use cases, many of which will create new jobs. It can envision how the spreadsheet might imperil bookkeepers, but it can’t imagine how that same tool might unlock entire new industries.
  • The Biden administration is arrogantly insisting that it ought to have a role in dictating the outcomes of an innovation that few if any of its members understand, and almost certainly could not invent. There is, to be sure, a role for oversight and regulation, but that is a blunt instrument best applied after the invention, like an editor.

In short, this Executive Order is a lot like Gates’ approach to mobile: rooted in the past, yet arrogant about an unknowable future; proscriptive instead of adaptive; and, worst of all, trivially influenced by motivated reasoning best understood as some of the most cynical attempts at regulatory capture the tech industry has ever seen.

The Sclerotic Shoggoth

I fully endorse Sinofsky’s conclusion:

This approach to regulation is not about innovation despite all the verbiage proclaiming it to be. This Order is about stifling innovation and turning the next platform over to incumbents in the US and far more likely new companies in other countries that did not see it as a priority to halt innovation before it even happens.

I am by no means certain if AI is the next technology platform the likes of which will make the smartphone revolution that has literally benefitted every human on earth look small. I don’t know sitting here today if the AI products just in market less than a year are the next biggest thing ever. They may turn out to be a way stop on the trajectory of innovation. They may turn out to be ingredients that everyone incorporates into existing products. There are so many things that we do not yet know.

What we do know is that we are at the very earliest stages. We simply have no in-market products, and that means no in-market problems, upon which to base such concerns of fear and need to “govern” regulation. Alarmists or “existentialists” say they have enough evidence. If that’s the case then so be it, but then the only way to truly make that case is to embark on the legislative process and use democracy to validate those concerns. I just know that we have plenty of past evidence that every technology has come with its alarmists and concerns and somehow optimism prevailed. Why should the pessimists prevail now?

They should not. We should accelerate innovation, not attenuate it. Innovation — technology, broadly speaking — is the only way to grow the pie, and to solve the problems we face that actually exist in any sort of knowable way, from climate change to China, from pandemics to poverty, and from diseases to demographics. To attack the solution is denialism at best, outright sabotage at worst. Indeed, the shoggoth to fear is our societal sclerosis seeking to drag the most exciting new technology in years into an innovation anti-pattern.

Photo generated by Dall-E 3, with the following prompt: “Photo of a radiant, downscaled city teetering on the brink of an expansive abyss, with a dark, murky quagmire below containing decayed structures reminiscent of historic landmarks. The city is a beacon of the future, with flying cars, green buildings, and residents in futuristic attire. The influence of AI is subtly interwoven, with robots helping citizens and digital screens integrated into the environment. Below, the haunting silhouette of a shoggoth, with its eerie tendrils, endeavors to pull the city into the depths, illustrating the clash between forward-moving evolution and outdated forces.”