Web3, the Metaverse, and the Lack of Useful Innovation

So far, the year 2022 has certainly looked like a deflating technology bubble. After a decade of rising market caps, stocks for formerly hot “tech” companies fell far below their recent highs. By September 2022, exercise equipment maker Peloton was down 90 percent from a year before; ridesharing company Lyft had fallen 70 percent; videoconferencing firm Zoom, 70 percent; electric vehicle manufacturer Rivian, 60 percent; Meta (or Facebook), 60 percent; Netflix, 60 percent; the gory list goes on. Many recent new technologies have simply failed to meet expectations. For instance, despite predictions that the economic gains from AI would reach $15 trillion by 2030, the market for AI in 2021 was only $51.5 billion, and it was expected to reach just $62 billion in 2022.

This downturn is occurring at the end of record spending on innovation by venture capital firms and incumbents such as Google. Futuristic technologies such as quantum computing, nuclear fusion, bioelectronics, and synthetic biology have received massive funding in recent years. And while exuberance around a host of new technologies from the past decade—like self-driving cars, delivery apps, home flipping, and augmented reality—recedes, VCs are working to inflate new bubbles around other, much-hyped technologies, such as the Metaverse and Web3, the latter being part of the wider excitement around blockchain technologies. The shelf life of ebullience for the Metaverse and Web3 is, of course, unclear, but a much more important question is this: how do such technology bubbles affect the broader economy and society?

Answering this question requires looking at the broader economic and social context in which these bubbles develop. Of course, this broader context is large and complex, but here is one road in: For at least a century now, there has been a widespread faith that technological progress will improve human well-being, including via economic growth. For much of the twentieth century, new industries developed around new technologies. These industries created well-paying jobs and flourishing communities. Use of the new technologies improved quality of life and, by enhancing productivity in mass production industries, greatly reduced prices, so that even relatively poor people could afford increasing quantities of both necessities and modern conveniences. The period from the late nineteenth to the mid-twentieth centuries witnessed a remarkable era of innovation, perhaps the most significant in human history. Running water, electricity, mass production, the telephone, and the automobile provided improvements to our standard of living that have not been equaled by recent innovations.

But as economist Robert Gordon documented in his book The Rise and Fall of American Growth, the technological growth engine hit hard times beginning in the 1970s. With the brief exception of the period between 1994 and 2004, which we will examine in greater detail below, improvements to business efficiency, or productivity, have remained stubbornly low since the 1970s. Low productivity growth persisted right up through the technology bubble of the last decade, when technophiles were singing the praises of robots and AI. Indeed, contrary to expectations that the Covid-19 pandemic would spur ever wider adoption of automation in businesses, productivity growth was negative for the first two quarters of 2022.

Meanwhile, basic economic conditions have become more precarious for many people. For the past decade, the United Way’s ALICE program has attempted to measure how much of the population faces economic hardship, taking into account both the cost of living and available incomes. Working at the county level in about half of the United States, ALICE routinely finds that about 40 percent of the population struggles to make ends meet. While this reality hits some groups harder than others, it affects all races, genders, and other identities, from majority-white populations in, for example, dying manufacturing and mining towns in Appalachia to majority-black populations on the South Side of Chicago or in rural Alabama. The reality of hardship plays out in places with long-standing black poverty, examined in classics like William Julius Wilson’s When Work Disappears (1996), as well as in Anne Case and Angus Deaton’s study of the more recent rise of “deaths of despair.”

Our question is whether newly hyped technologies, like the Metaverse, Web3, and blockchain, have any chance of changing this basic picture. There are many reasons to be skeptical that they can. In many ways, the Metaverse and Web3 are merely a pivot by Silicon Valley, an attempt to regain control of a technological narrative that is now spiraling downward due to huge start-up losses and the financial failure of the sharing economy and many new technologies. These losses, along with the small markets for new technologies, have brought forth novel criticisms of Silicon Valley. If we are correct that the newest wave of hot technologies will do almost nothing to improve human welfare and productivity growth, then elected officials, policymakers, leaders in business and higher education, and ordinary citizens must begin to search for more fundamental solutions to our current economic and social ills.

In what follows, we will first review Web3 and the Metaverse. Multiple industry insiders claim that these technologies require far better infrastructure than currently exists, and that their constituent technologies of blockchain, crypto, and virtual and augmented reality (VR and AR) aren’t working well by themselves. Second, we examine the economic effects of bubbles by comparing the current technology bubble to past ones. The biggest difference is that some goods did emerge from the dot-com bubble, but not from the housing bubble, and probably not much will result from the current bubble either. Third, we describe changes in America’s system of basic and applied research that might be preventing new, more useful ideas from emerging, particularly those based on advances in science. Finally, we sketch out alternative roads for future technological and economic development. The current ecology of technology, including venture capital and both corporate and university R&D, is failing society. Together, we must look for other paths forward.

The Metaverse and Web3

Many technology suppliers have already thrown cold water on the overall concepts of the Metaverse and Web3. An Intel executive says, “Truly persistent and immersive computing, at scale and accessible by billions of humans in real time, will require even more: a 1,000-times increase in computational efficiency from today’s state of the art.” Even Meta admits that its grand ambition of building the ultimate Metaverse won’t be possible without drastic improvements in today’s telecom networks.

Indeed, vague terms like Web3 and the Metaverse seem designed to fool people, to convince them that companies using this marketing have come up with something new that hasn’t been tried before. After years of investment and promotion, two key parts of the Metaverse, virtual and augmented reality, are still not popular—probably because they don’t work well. Poor resolution, low brightness, bulky headsets, and a lack of additional sensory feedback have produced poor experiences for most VR users, including nausea induced by the goggles. Other aspects, including headset size, have not improved for decades. Apparently, it is difficult to make devices smaller without sacrificing field of view, which is a big reason why typical AR headsets have a field of view too small to be useful. Proponents don’t seem to realize that both VR and AR will require years if not decades of improvement. A recent leak of Meta documents, reported by the Wall Street Journal, revealed that Metaverse user numbers are far below expectations, that most users don’t return after the first month, and that virtual real estate trading volumes were down 98 percent in 2022.

Web3 has similar problems with its enabling technologies. While it isn’t our job to define the nebulous term Web3, we do know that two key constituent technologies, crypto and non-fungible tokens (NFTs), are not doing well. After years of proponents claiming it was a great hedge against inflation in so-called fiat currencies, some types of crypto have collapsed, exchanges have gone bankrupt, and even the most popular cryptocurrency, Bitcoin, has seen its price decline by more than half in the last six months. This has also pushed down the prices of digital tokens. The prices of NFTs plunged in the first half of 2022. So far, these seem to be products in search of an economic rationale.

Blockchain is also a disappointment, despite its underlying ideas having been developed decades ago and reintroduced fourteen years ago by an unknown person or persons under the pseudonym Satoshi Nakamoto. Journalist Izabella Kaminska, testifying on blockchain before the UK House of Commons Science and Technology Committee in mid-2022, stated that she “can’t think of a single successful deployment of blockchain” outside of financial speculation. During 2016 and 2017, there was a lot of hype surrounding blockchain technology, but “now in 2022 when we look back, almost nothing has come out of that hype.”

The problem is that most implementations must sacrifice some part of the blockchain concept, which can be summarized as “[a]n open-source technology that supports trusted, immutable records of transactions stored in publicly accessible, decentralized, distributed, automated ledgers.” The key points of differentiation from existing technologies here are “trusted,” “immutable,” “publicly accessible” (transparent), “decentralized,” and “distributed.” Because the term distributed in practice means replicated, there are typically many copies of the underlying datasets, across which updates must be synchronized via a convoluted and sometimes inordinately inefficient process known as “consensus.”

This makes blockchain highly inefficient, especially in energy usage, and it has hit insurmountable roadblocks in throughput and performance.1 As a result, constraints such as “trusted (by consensus)” and “decentralized” have been dropped in so-called permissioned blockchains—i.e., blockchains created and operated by some form of “central authority.” Having a central authority of course eliminates an important selling point: decentralization. Nevertheless, it was hoped that these “permissioned” blockchains would be much more efficient than permissionless versions, but unfortunately, relaxing these constraints only undermined a key rationale behind blockchain without delivering much success in other areas.
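
To make the trade-off concrete, here is a minimal Python sketch (our own illustration, not code from any of the systems discussed) of the data structure at issue: an append-only ledger in which each block commits to the hash of its predecessor. In a permissioned system, a single operator can simply append a block; in a permissionless one, every replica must first reach consensus before the same append can occur, and that agreement step is where the energy and throughput costs arise.

```python
import hashlib
import json
import time

def block_hash(block):
    # Hash the block's full contents; because each block stores the hash of
    # the previous one, changing any historical record breaks the whole chain.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class Ledger:
    """A toy append-only, hash-linked ledger representing one replica."""

    def __init__(self):
        genesis = {"index": 0, "prev": "0" * 64, "tx": [], "time": 0}
        self.chain = [genesis]

    def append(self, transactions):
        prev = self.chain[-1]
        block = {
            "index": prev["index"] + 1,
            "prev": block_hash(prev),
            "tx": transactions,
            "time": time.time(),
        }
        self.chain.append(block)
        return block

# A permissioned operator appends directly and distributes copies afterward.
# In a permissionless blockchain, every replica must agree (via proof of work,
# voting, or another consensus protocol) before this one line may run.
ledger = Ledger()
ledger.append([{"from": "alice", "to": "bob", "amount": 10}])
```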

One example of this permissioned approach was Facebook’s plan in 2019 to create a global digital currency, called Libra, which would be built upon a global blockchain and would supposedly empower millions of the world’s unbanked poor. After building an impressive proof of concept, Facebook faced resistance from legislators and regulators, and dropped the idea, though it lives on in a much-reduced form.2

In the following year, an analysis by a pro-blockchain organization, the British Blockchain Association, showed that the vast majority of the blockchain projects it surveyed had no well-described rationale, no predetermined criteria for achievement, and no analysis of success or failure. In other words, they were merely pie-in-the-sky ideas, based on hype rather than detailed analysis or justification. And none of the projects benchmarked themselves against existing, proven technologies.

For example, in cooperation with both small producers and large retailers, such as Walmart, IBM claimed to have built blockchain supply chains in the “food trust” area. But this solution was, in fact, centralized, driven by Walmart, with no consensus, and with transparency that could have been implemented using privacy functions built into existing database technology.

Similarly, the use of blockchain has been trumpeted in shipping supply chains, most prominently in a project led by Maersk and, again, IBM. Unfortunately, this logistics blockchain system looks like a conventional software system, with the only discernible use of blockchain being a list of document versions, which could be handled, perhaps even better, in a conventional database. Meanwhile, a similar, large blockchain-based shipping program operating as We.trade has recently closed down after running out of cash.

Perhaps the most notable example of blockchain being deployed as the wrong solution to the wrong problem is the CHESS replacement project undertaken by the Australian Securities Exchange (ASX). This project, to replace the venerable CHESS equities settlement system, was started at the height of the blockchain hype in 2017 by the ASX and a fledgling U.S. software company, Digital Asset. Unfortunately, the project has exceeded its budget by at least a factor of five, announced five separate delays in delivery, and is currently on hold, pending the outcome of yet another independent inquiry.3 In summary, many blockchains have been forced, by the realities of business requirements, to deviate significantly from their initial concepts and have been reduced to blockchains in name only.

Previous and Current Bubbles

There are arguably bubbles around new technologies like blockchain, the Metaverse, and Web3, but the larger question is how these bubbles will affect the broader economy and society. Here it is helpful to compare our current moment to two previous bubbles, the dot-com and housing bubbles, which burst in 2000 and 2008, respectively. What we find is that our current bursting bubble shares features with both, including layoffs, cutbacks, large losses in asset values, and potentially looming bankruptcies. But while the dot-com bubble led to significant advances in technology and business organization, our current bubble seems more like the housing bust, because few real gains are likely to be left once the bubble deflates.

Start-ups with big losses, no revenues, and large debt-to-income ratios completed successful IPOs during the dot-com bubble, just as they have again in recent years. From its peak in March 2000, the Nasdaq fell 60 percent in a single year, and hundreds of dot-com start-ups went bankrupt. During the housing bubble, banks provided subprime mortgages and repackaged them into seemingly low-risk investments that ended up proving very risky. Markets fell about 50 percent from their peak in late 2007, and the 2008 crash led to bankruptcies for 64,318 firms, including Lehman Brothers, and to government-brokered rescues from bankruptcy, as in the case of Merrill Lynch.

Yet in exploring such similarities, it is easy to lose sight of how many successful technologies and businesses came out of the dot-com years. The dot-com bubble gave us e-commerce, websites for news and other content, enterprise software such as customer relationship management and manufacturing resource planning, and the widespread use of mobile phones. These changes also quickly led to large markets. E-commerce, internet hardware, software, and mobile service revenues reached $446, $315, $282, and $230 billion, respectively, by 2000 (in 2020 dollars). PC revenues were $132 billion in 1990. Internet-connected personal computers also likely led to significant economic growth, with a period of high productivity gains between 1994 and 2004 that outpaced both the period from 1970 to 1994 and the period between 2004 and the present.

The dot-com bubble also gave us many successful start-ups. Those that have achieved top-100 market capitalization include Amazon, Cisco, Qualcomm, Yahoo!, eBay, Nvidia, PayPal, and Salesforce, several within ten years of their founding and most within twenty; some remain among the top 100 today.

The growth continued in the 2000s. Cloud computing had global revenues of $127 billion by 2010, and online advertising reached $81 billion in the same year (in 2020 dollars). Facebook had 550 million users by the end of 2010. The iPhone was introduced in 2007, the App Store in 2008, and Android phones also in 2008. The global revenues for smartphones reached $293 billion by 2012 and web browsing, navigation services, and other apps were widely used. Facebook, Netflix, and Google are three start-ups that benefited from these new technologies and are now among the top 100 in terms of market capitalization.

We see something different in the 2010s, a decade of growing markets for existing technologies but less so for new ones. Although revenues for e-commerce, cloud computing, smartphones, online advertising, and other technologies continued to grow, only one category of new digital technology achieved $100 billion in sales by 2021, and that was big data. Other “new technology” categories faltered. For instance, despite all those promised deliveries from drones, commercial drones had a market size of only $21 billion. The markets for VR and AR, expected to explode during the pandemic, reached only $6 billion and $25 billion, respectively. The market size for blockchain applications (not cryptocurrencies), the basis for much of Web3, was $4.9 billion. The market for AI software and services was bigger, at $58.3 billion, similar to that of OLED displays at $53 billion. What these smaller market sizes mean is that, at both work and home, people have thus far found the technologies of the 2010s less useful than the earlier technologies we mentioned. If the new technologies had been more useful, more people would have bought them, and their respective markets would have grown much faster.

Big data was the only one of these new digital technologies to exceed $100 billion in 2021, reaching $163 billion if we include analytics and AI. The backlash against these algorithms, however, has been huge, beginning at least as early as 2016 with Cathy O’Neil’s book Weapons of Math Destruction. Algorithms that predict crimes, determine sentencing, decide the parole of imprisoned convicts, identify criminals through photos, or aid social workers continued to be criticized in 2022. And the ones used to predict housing prices or insurance claims or fraud have led to big losses for start-ups. All in all, it is hard to claim that big data has brought benefits equal to those of the PCs, e-commerce, and smartphones of previous decades.

The small markets for new technologies are a big reason why today’s unicorn firms, or private start-ups with a valuation over $1 billion, are far less successful than those of the dot-com bubble. A simple metric for comparing them is cumulative losses, the losses accumulated prior to profitability. Sixteen of today’s unicorn start-ups now have more than $3 billion in cumulative losses; Uber has the biggest losses in the United States ($31.2 billion). At the end of 2021, there were seventy-seven publicly traded (ex-)unicorns with cumulative losses greater than annual revenues. Among past success stories, only Amazon even briefly approached such losses, with cumulative losses that peaked at $3 billion more than fifteen years ago.

Similar problems exist outside the United States, with many ex-unicorns having similarly large cumulative losses in China, India, and Singapore. Video-streaming service Kuaishou has the largest cumulative losses of any ex-unicorn as of mid-2022, $57.4 billion. Many others have cumulative losses higher than their 2020 revenues.

Then there are the market capitalizations of these start-ups, particularly the American ones. Yahoo! (founded in 1994) reached the top-100 global firms by year five, Google by year eight, and eBay by year ten. But among today’s unicorns, many of which were founded fifteen years ago, none are among the top 100. Only one (Airbnb) is among the top 200, and three others are among the top 300 (Uber, Moderna, Snowflake). Only Moderna is profitable.

If we focus on AI start-ups, the results are even worse. Only two publicly traded companies, SoundHound and C3.ai, can truly be defined as AI firms, and their market capitalizations are less than $2 billion each. If we expand the scope of AI to include big data and other aspects of software, the situation looks slightly better. Companies among the top 500 include Snowflake (data warehousing) and CrowdStrike (security), but neither is even close to being profitable.

To summarize, when the dot-com bubble deflated, we were left with lasting improvements such as e-commerce, digital media, and enterprise software, but our current bubble has involved investors running up the stock prices of firms working on technologies that have produced demonstrably less value. When the air goes out of this bubble, we very well may be left with hardly anything of value at all.

Changes in Basic and Applied Research

While venture capitalists are partly to blame, America’s system of basic and applied research has also undergone major changes since the early years of Silicon Valley’s success, changes that have likely caused fewer genuine science-based opportunities to emerge as candidates for commercialization. Silicon Valley was named for silicon-based semiconductors, one type of science-based technology, which powered much of America’s innovation in the last half of the twentieth century. Other important technologies that emerged from scientific research in the mid-twentieth century include polymers (i.e., plastics), nuclear power, lasers, jet engines, radar, LEDs, and glass fiber, many of which earned their developers Nobel Prizes.

The biggest change from that era is the decline of basic and applied research at companies. Until the 1970s, most of this research was done at corporate laboratories such as Bell Labs, RCA, and DuPont, research that led to both Nobel Prizes and real products and services such as transistors, integrated circuits, plastics, and radar. This dominance began to erode in the 1960s, for many reasons, one being increased funding for universities. That funding caused the number of PhD degrees awarded annually in the United States to rise more than eightfold since 1950. In some social circles, it is now considered much more prestigious to work for a university than for a corporate laboratory, and universities train PhD students for academic research rather than for commercial development. Students learn to do literature searches and write papers, while product commercialization is largely forgotten. Universities are proud of their PhD students who become professors; corporate research is seen as second class.

But there are many problems with higher education becoming the primary locus of basic and applied research, and thus the main source of new ideas to be commercialized. First, far too much emphasis is placed on publishing papers and not enough on developing new technologies to the point at which companies can commercialize them. There are now too many papers for researchers to read, so evaluation falls back on publication counts or dubious indices like the h-index, which are mostly useless as measures of quality. Counting publications or calculating h-factors has created an environment of quantity over quality, in which the status of the submitter often determines the outcome. Papers with more than one hundred authors are not uncommon. Some estimates say half of all peer-reviewed articles are read by no one other than the author, journal editor, and reviewers.

This publish-or-perish culture has also encouraged researchers to game the system, which undermines the usefulness of publication and citation counts. This is an example of Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure.” For science and research conducted at universities, a publication list determines every hire, grant application, promotion case, and salary review, and thousands of today’s professors have much higher h-indices than did Albert Einstein and Richard Feynman.
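
For readers unfamiliar with the metric, here is a short Python sketch of how the h-index is computed: an author has index h if h of their papers each have at least h citations. The two citation records below are invented purely for illustration; they show how the metric rewards steady volume while capping the credit given to a few landmark papers.

```python
def h_index(citations):
    # The h-index is the largest h such that the author has at least
    # h papers with at least h citations each.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Two invented researchers with identical h-indices:
prolific = [6, 6, 5, 5, 5, 2, 1, 1, 1, 1, 0, 0]  # many modest papers
landmark = [900, 850, 400, 6, 5]                 # a few transformative ones
print(h_index(prolific), h_index(landmark))      # both print 5
```

However many citations the landmark papers attract, the index can never exceed the number of papers published, which is one reason the metric nudges researchers toward publishing more rather than publishing better.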

A similar process of routinization has transformed other parts of the scientific landscape. Grant applications are more rigidly structured, elaborate, and hype-ridden, as are requests for research time at major observatories or national laboratories. And anything involving work with human subjects, or putting instruments in space, involves heaps of paperwork. Overall, there has been an enormous increase in the amount of paperwork university researchers must do. These professionalizing tendencies are an all but inevitable consequence of the explosive growth of modern science. Standardization makes it easier to manage large numbers of papers, applications, and people. But a lot of unproductive effort goes into jumping through bureaucratic hoops, and outsiders face barriers to entry at every turn. No wonder science and innovation have slowed.

One scientist argued in Nature that scientists must publish less, or good research will be swamped by the ever-increasing volume of poor work. A recent study found that a larger volume of papers impedes the rise of new ideas. Another problem is that mainstream scientific leaders increasingly accept that large bodies of published research are unreliable in almost every field, including cancer research; worse, papers that fail a replication experiment are often cited more than those that pass one, because they make bolder claims. Behind this unreliability is a destructive feedback loop between the production of poor-quality science, the responsibility to cite previous work, and the compulsion to publish.

Another result of the obsession with papers is more journals. If you want more papers, you need more journals, and we certainly have them. The numbers of researchers, journals, and journal articles worldwide have risen, increasing by about 60 percent in just the thirteen years between 1982 and 1995, with another 20 percent increase in the number of journals between 2002 and 2014. And it is not just more low-tier journals; the number of high-tier journals is also growing. The number of journals published by the Association for Computing Machinery has reached fifty-nine, while the number published by the American Chemical Society has hit thirty-nine; the Society of Mechanical Engineers, thirty-five; the Physical Society, fifteen; and the Medical Association, thirteen. The number of transactions, journals, and magazines published by the Institute of Electrical and Electronics Engineers exceeds two hundred, and the number published by Nature has reached 157 (up from one, fifty years ago), with each journal representing a different specialty.

Universities encourage this hyper-specialization. One recent paper analyzed the evaluation decisions for more than forty thousand academics. It found that multidisciplinary scholars are penalized in their careers because gatekeepers in scientific disciplines may perceive them as threats to the distinctiveness and knowledge domain of individual disciplines. Interestingly, the highest performers among multidisciplinary candidates are the ones most penalized. Those who are brilliant are also those who may bring disruption to a discipline, and hence they are penalized more than those with a middling track record.

Can we expect hyper-specialized researchers to develop something of use to society? Doing so requires integrating many different types of information, and when that information is spread across so many journals, who could possibly do it? Even if we ignore the more prosaic tasks of manufacturing, marketing, and accounting, just finding a new concept is difficult for hyper-specialized researchers.

What Should We Do?

The problems in America’s system of basic and applied research will not be easy to solve. Nor does returning to a system of the past guarantee success, as many past successes occurred for unknown or hard-to-explain reasons. Trying to recreate the past without understanding these reasons can easily lead to a worse situation.

There will also be a tendency to ignore these problems and just forge ahead with more government funding. But more money without reform will likely give us more PhD students, more papers, and bigger labs, without giving us more breakthrough products and services. Nor is there any guarantee that more government funding for development will improve the situation. VCs are currently giving record amounts of funding to nuclear fusion, superconductors, quantum computers, and bioelectronics (think of Theranos) without results, so more government funding, in and of itself, will not necessarily make things better.

Rather, the lack of success in these areas suggests that many if not all of these new technologies aren’t ready for development. They need significant improvements in performance and cost, something that does not appear to be coming from more papers, which place more emphasis on issues dear to academics, like theory and novelty, than on practical improvements. Somehow, we must change this paper mentality to an improvement mentality, in which university researchers work with corporations, perhaps alliances of them, to make the improvements.

Doing this will be difficult, not only because academics and funding agencies must be given new incentives, but also because the corporate side is weak. For instance, there are very few big electronics, materials, or other high-tech companies in America that still have large corporate laboratories (Big Pharma is an exception) and that can work with universities. America does not have a major supplier of displays, batteries, or solar cells, and outside of software, it has only Micron and Intel as suppliers of electronic hardware. Can we expect Facebook, Google, or Microsoft to do the basic and applied research that might lead to useful superconductors, nanotechnology, bioelectronics, or nuclear fusion?

Building stronger universities (and VCs and consulting firms) will require big changes in how we measure researchers. Counting publications or calculating h-factors is clearly insufficient. Since researchers are supposed to be a source of new ideas, we should measure them by how good their ideas are. Which ideas led to new products, services, or solutions? Who came up with them? Which journals published them? Which government agencies funded them?

Developing this type of research system requires us to move away from the simple metrics that have been pushed by bureaucrats. Bureaucrats like these types of measures because they help them retain their power and position. But we need metrics that require members of the research system to understand the technologies they are funding, including key measures of cost and performance. Unfortunately, many members of the research system do not understand these issues, and thus it may be necessary to pare down the extensive system of academics, bureaucrats, and public relations specialists that make it harder for good decisions to be made.

None of this will be easy. But when a system doesn’t work, we must try to come up with a better one. We devise new metrics, we test them, and we move forward. One thing we do know is that the existing system of publication counts and h-factors doesn’t work. These metrics distract us from the real challenges facing both start-ups and incumbents in commercializing new technologies that are actually useful and transformative.

This article originally appeared in American Affairs Volume VI, Number 4 (Winter 2022): 23–35.
1 The use of blockchain means that the Bitcoin application is consistently maxed out at around 350,000 transactions per day (roughly four per second), a pittance compared to other payment technologies and far less than would be needed in a Web3 application.

2 As an aside, experiments in so-called Central Bank Digital Currencies (CBDCs) have mostly rejected models based on blockchain, having witnessed the performance problems experienced elsewhere.

3 See notice of the latest delay from one of the ASX’s regulators, Australian Securities and Investments Commission, “22-204MR Delay to the ASX Chess Replacement Project and Independent Review,” news release, August 3, 2022.
