💭 Thinking about innovation
This essay first appeared for Premium members on 3 November 2019.
How should we manage our innovation? In a field like medicine, we’ve established a rigorous and scientifically-validated approach to testing and approving drugs, based on randomised controlled clinical trials. In the finance and insurance industries, new entrants must comply with strict rules that protect the consumer. Capital adequacy and anti-money laundering requirements supposedly ensure the integrity of the financial system.
The tech industry has had no such framework. It has enjoyed ‘permissionless innovation’ since the advent of the Internet, fifty years ago this week. The open platform enabled entrepreneurs to try new things without getting permission from regulators or, indeed, anyone else.
This is why we got streaming audio in the mid-nineties, while the traditional broadcast industry remained heavily regulated. And why we’ve had a flourishing ecology of remarkably large products like Skype or Wikipedia, alongside weird, wonderful niche products.
The permissionless approach served us well. Can you imagine getting a search engine as good as Google, if it had to be approved by even the most forward-thinking of regulators? If newspaper lobbyists had had their way, would blogging have ever existed, let alone flourished? But permissionless innovation also enabled its share of anti-delights, such as the unpleasant Usenet newsgroups, Chatroulette and 4Chan.
An alternative approach, one that sits between permissionless innovation and full-on regulation, is the precautionary principle (or PP). Since it emerged in the 1980s, PP has encouraged cautious action if there might be a threat of serious damage, particularly environmental damage, even if we don’t have robust scientific evidence for it. This approach could work in scenarios where there is a chance we might blow the whole place up.
But in general, I’m quite uncomfortable with the precautionary principle because it is so woolly. It eschews accepted norms of scientific evidence in favour of looser interpretations of what harm might be created. To that extent, it can be subjective, open to lobbying and to the ill-informed excesses of public opinion.
A few recent incidents remind us that public opinion is often not a great guide to complicated technical problems, especially where there are systemic effects.
Be Cautious with the Precautionary Principle: Evidence from Fukushima Daiichi Nuclear Accident, by Neidell et al., makes the case that more people died from the increase in electricity prices after Japan’s Fukushima nuclear reactor was put offline than from the incident at the power plant itself. (A summary of the paper is available from the Mercatus Center, a research group that is ideologically critical of regulation in general.) When the accident occurred, the Japanese government turned off nuclear power stations because of public fears, resulting in a 20-30 per cent spike in electricity prices. Consumers responded by cutting their heating during winter. The authors estimate that 4,500 people died as a result. The disaster itself has directly accounted for at least 1,500 deaths, mostly from the resultant evacuation; only one death has been attributed to radiation. (The bigger problem with the public response to nuclear power is that it attenuated our decarbonisation progress.)
The delays in introducing genetically modified ‘golden rice’ may have resulted in up to 2,000 child deaths per day in the developing world, as well as consigning millions of kids to blindness.
Permissionless innovation (PI) and its weird step-siblings, industry self-regulation and industries cosied up to regulators, are no panacea either.
PI has been superb, brilliant, tinkertastic. The “don’t ask permission” culture led to so many things we might never have created in the early days of the internet. I know first-hand how hard the incumbent telecoms and publishing industries fought for something worse than what the internet gave us.
Loosely-coupled dumb networks that put intelligence at the edge, aka the internet, allowed for innovation at that edge. In the 1990s, the phone companies hated it. Their power arose from centrally-controlled circuit switching. If they had had their way, the internet protocol would have played second fiddle to proprietary virtual circuits based on ATM or SDH. And, if they had succeeded, we would never have seen Geocities, Viaweb, Skype, Yahoo, eBay, Instagram, Amazon and most digital services you use daily. (Admission: I was an aficionado of telecoms protocols in the early nineties.)
But this week, we mark fifty years since the delivery of the first message across the proto-Internet, the ARPAnet. The Internet has been mainstream in most markets for more than two decades. And the entire environment within which technology innovation occurs is vastly different from that of twenty-five years ago:
The domains of operation are too important. Information access and self-publishing, the products of the mid-90s, are valuable but ultimately niche. But today’s technology innovators are tinkering with insurance, financial services, healthcare, even our DNA.
The tech industry today is simply too big and has tremendous access to capital. Entrepreneurs know how to ‘blitzscale’, that is, grow their companies globally very quickly. Capital markets are willing to support them however barefooted and Adam-Neumannish they are. (Listen to my discussion with Reid Hoffman on blitzscaling.) The biggest firms, Alphabet, Facebook, Amazon, Tencent and Baidu, are sovereign-state in scale. This actual or potential leviathanhood demands that we ask more prudence of them.
There are emergent effects from many of these innovations which, in a permissionless environment, are borne collectively by society or simply weigh on the vulnerable. For example, Uber and Lyft have increased congestion in cities while reducing driver wages. These firms’ founders and earliest investors make out like bandits.
Our societies are incredibly interconnected. A decade ago, the global financial crisis took a bunch of mortgage defaults by sub-prime-rated American homeowners and magnified them into the worst financial crisis since the Great Depression. This crisis brought General Motors to within hours of bankruptcy and debilitated factories around the world.
Getting from a sketchily-approved mortgage in Nevada to battered industrial supply chains meant those novel financial instruments travelled across the nervous system of the shadow banking sector, eviscerating Lehman Brothers and bringing America’s Bear Stearns, the UK’s HBOS and Germany’s Dresdner, amongst others, to their knees in the process.
The financial technologies of synthetic debt instruments emerged in an admittedly regulated environment (the story is complex), but their spread reminds me of the permissionless approach so common in the tech industry. (In another former life, I happened to hold short positions on the US housing market during 2007 and 2008, so I have some personal experience of that crisis.)
So what should the Goldilocks zone of regulation look like? How can we support fevered exploration while managing risks, especially systemic, runaway or existential ones?