Four Lessons Learned from the Digital Revolution | Human-Centered Change and Innovation
GUEST POST from Greg Satell
When Steve Jobs was trying to lure John Sculley from Pepsi to Apple in 1982, he asked him, “Do you want to sell sugar water for the rest of your life, or do you want to come with me and change the world?” The ploy worked and Sculley became the first major CEO of a conventional company to join a hot Silicon Valley startup.
It seems so quaint today, in the midst of a global pandemic, that a young entrepreneur selling what was essentially a glorified word processor thought he was changing the world. The truth is that the digital revolution, despite all the hype, has been something of a disappointment. Certainly it failed to usher in the “new economy” that many expected.
Yet what is also becoming clear is that the shortcomings have less to do with the technology itself (in fact, the Covid-19 crisis has shown just how amazingly useful digital technology can be) than with ourselves. We expected technology and markets to do all the work for us. Today, as we embark on a new era of innovation, we need to reflect on what we have learned.
1. We Live In a World of Atoms, Not Bits
In 1996, as the dotcom boom was heating up, the economist W. Brian Arthur published an article in Harvard Business Review that signaled a massive shift in how we view the economy. While markets have traditionally been made up of firms that face diminishing returns, Arthur explained that information-based businesses can enjoy increasing returns.
More specifically, Arthur spelled out that if a business had high up-front costs, network effects and the ability to lock in customers, it could enjoy increasing returns. That, in turn, would mean that information-based businesses would compete in winner-take-all markets, that management would need to become less hierarchical and that investing heavily to win market share early could become a winning strategy.
Arthur’s article was, in many ways, prescient and before long investors were committing enormous amounts of money to companies without real businesses in the hopes that just a few of these bets would hit it big. In 2011, Marc Andreessen predicted that software would eat the world.
He was wrong. The recent debacle at WeWork, as well as massive devaluations at firms like Uber, Lyft, and Peloton, shows that there is a limit to increasing returns for the simple reason that we live in a world of atoms, not bits. Even today, information and communication technologies make up only 6% of GDP in OECD countries. Obviously, most of our fate rests with the other 94%.
The Covid-19 crisis bears this out. Sure, being able to binge watch on Netflix and attend meetings on Zoom is enormously helpful, but to solve the crisis we need a vaccine. To do that, digital technology isn’t enough. We need to combine it with synthetic biology to make a real world impact.
2. Businesses Do Not Self-Regulate
The case Steve Jobs made to John Sculley was predicated on the assumption that digital technology was fundamentally different from the sugar-water sellers of the world. The Silicon Valley ethos (or conceit, as the case may be) was that while traditional businesses were motivated purely by greed, technology businesses answered to a higher calling.
This was no accident. As Arthur pointed out in his 1996 article, while atom-based businesses thrived on predictability and control, knowledge-based businesses facing winner-take-all markets are constantly in search of the “next big thing.” So teams that could operate like mission-oriented “commando units” on a holy quest would have a competitive advantage.
Companies like Google, which vowed not to “be evil,” could attract exactly the type of technology “commandos” that Arthur described. They would, as Mark Zuckerberg has put it, “move fast and break things,” but would also be more likely to hit on that unpredictable piece of code that would lead to massively increasing returns.
Unfortunately, as we have seen, businesses do not self-regulate. Knowledge-based businesses like Google and Facebook have proven to be every bit as greedy as their atom-based brethren. Privacy legislation, such as GDPR, is a good first step, but we will need far more than that, especially as we move into post-digital technologies that are far more powerful.
Still, we’re not powerless. Consider the work of Stop Hate For Profit, a broad coalition that includes the Anti-Defamation League and the NAACP, which has led to an advertiser boycott of Facebook. We can demand that corporations behave the way we want them to, not merely accept whatever the market will bear.
3. As Our Technology Becomes More Powerful, Ethics Matter More Than Ever
Over the past several years some of the sense of wonder and possibility surrounding digital technology gave way to no small amount of fear and loathing. Scandals like the one involving Facebook and Cambridge Analytica not only alerted us to how our privacy is being violated, but also to how our democracy has been put at risk.
Yet privacy breaches are just the beginning of our problems. Consider artificial intelligence, which exposes us to a number of ethical challenges, ranging from inherent bias to life and death ethical dilemmas such as the trolley problem. It is imperative that we learn to create algorithms that are auditable, explainable and transparent.
Or consider CRISPR, the gene editing technology, available for just a few hundred dollars, that vastly accelerates our ability to alter DNA. It has the potential to cure terrible diseases such as cancer and multiple sclerosis, but it also raises troubling issues such as biohacking and designer babies. If we worry about some hacker cooking up a harmful computer virus, what about a terrorist cooking up a real one?
That’s just the start. As quantum and neuromorphic computing become commercially available, most likely within a decade or so, our technology will become exponentially more powerful and the risks will increase accordingly. Clearly, we can no longer just “move fast and break things,” or we’re bound to break something important.
4. We Need a New Way to Evaluate Success
By some measures, we’ve been doing fairly well over the past ten years. GDP has hovered around the historical growth rate of 2.3%. Job growth has been consistent and solid. The stock market has been strong, reflecting robust corporate profits. It has, in fact, been the longest US economic expansion on record.
Yet those figures were masking some very troubling signs, even before the pandemic. Life expectancy in the US has been declining, largely due to drug overdoses, alcohol abuse and suicides. Consumer debt hit record highs in 2019 and bankruptcy rates were already rising. Food insecurity has been an epidemic on college campuses for years.
So, while top-line economic figures painted a rosy picture, there was rising evidence that something troubling was afoot. The Business Roundtable partly acknowledged this fact with its statement discarding the notion that creating shareholder value is the sole purpose of a business. There are also a number of initiatives designed to replace GDP with broader measures.
The truth is that our well-being can’t be reduced to a few tidy metrics, and we need more meaning in our lives than more likes on social media. Probably the most important thing that the digital revolution has to teach us is that technology should serve people and not the other way around. If we really want to change the world for the better, that’s what we need to keep in mind.
— Article courtesy of the Digital Tonto blog
— Image credit: Pexels