Innovation chief says “pressure test” your pet hypothesis. It’s guaranteed to be wrong.
Imagine trying to invent something as earth-shaking as the atomic bomb. That massive, ambitious project took place during World War II under Robert Oppenheimer’s leadership, a story now at the center of Christopher Nolan’s new film “Oppenheimer.” For Astro Teller — the grandson of Edward Teller, who was on Oppenheimer’s team and later helped create the even more powerful hydrogen bomb — this era contains a valuable lesson about the difficulty of steering innovation to socially beneficial ends.
Astro, who now leads Alphabet’s pioneering moonshot division known as X, believes that if we’d focused more on using nuclear energy for good, we might not be facing one of today’s greatest global crises.
“Ironically,” Teller says, “if we hadn’t let ourselves be clouded by the power of nuclear bombs, it would have allowed us to avoid what is now actually the problem, which is climate change” — an issue foreseen by his grandfather, whom he recalls being invested later in life in the ability of atoms to power cities, not destroy them.
Today, Teller is attempting to reclaim the magic of the midcentury, when the seemingly impossible projects that give “moonshots” their name made history. With some notable triumphs — like Waymo and Google Brain — and a few public missteps — Google Glass, anyone? — Teller knows better than perhaps any scientist alive what it’s like to work across disparate fields to try to capture that rarefied energy that changes not just one discipline, but the world.
In a conversation with Freethink, Teller spoke about what it takes to innovate, how to balance profit and purpose, and how we can guide innovation to benefit everyone. He also reflected on the power of stories — like the ones about Oppenheimer and his team — to shape how we see and understand the world of innovation, for better and worse.
Freethink: The term “moonshot” has been around since the late 1940s — the word originally described how hard, and perhaps impossible, it seemed to put humans on the moon. How do you define moonshots? And how do you and your colleagues at X decide if a moonshot is worth pursuing?
Astro Teller: For something to be a moonshot at X, it has to have three basic things:
One, there has to be a huge problem with the world.
Two, there has to be some radical proposed solution, some science-fiction-sounding product or service that — independent of whether we can make it — would make that huge problem go away, or at least take it down a couple notches.
And three, then there has to be some kind of technology core that makes this feel like a testable hypothesis. It doesn’t guarantee that we’re right, but it at least allows us to get started. It gives us some hope that the skills that we have here at X will be particularly applicable to making this product or service that would solve this huge problem. So the aspiration is really big.
Here’s the difference [between our work and moonshots of the past] — when Kennedy said we [are going to the moon], there was something powerful about the determination to do it, long before it was clear whether it could be done. [But] that is not necessarily efficient.
During wartime — Manhattan Project, Bletchley Park — or during pseudo-wartime, like the Cold War, the space program, efficiency is not the goal. But for a place like X, efficiency is absolutely the goal. It wouldn’t be rational for the world or for investors to continue to lean into something like X if we weren’t as serious about efficiency as we were about radical innovation.
Radical innovation, despite how it is usually put forward in the public, is not some lone genius who’s just like, “I thought of this thing, and it turns out I’m right. I’m the smartest person ever.” Like, that’s usually how it looks. That’s just not how it happens. This is lots of hard work across broad, transdisciplinary teams with lots of mess in the process. That’s how it actually happens.
And so we start on lots of things that have the aspiration of a moonshot, but we start really small then aggressively filter afterwards on the basis of evidence. The ethos of X is one of pressure testing. Pressure testing the tech and, you know, a little bit later, pressure testing, like, “Who wants this? What would they do with this? Do they really get from it the benefits that we were hoping for?”
All those things are opportunities for us to discover we are wrong. A lot of the time, that can’t be fixed, but we can find out we were wrong much less expensively than we might have otherwise done.
“Radical innovation, despite how it is usually put forward in the public, is not some lone genius who’s just like, ‘I thought of this thing, and it turns out I’m right. I’m the smartest person ever.’ Like, that’s usually how it looks. That’s just not how it happens. This is lots of hard work across broad, transdisciplinary teams with lots of mess in the process. That’s how it actually happens.”
Freethink: At this point, you’ve been involved in multiple moonshots that have entered the public consciousness, with a variety of results, from Google Glass to Google Brain to Waymo. You’ve also doubtless been involved in many, many more moonshots that never made it out of the idea phase. What have you learned about what drives innovation?
Teller: What I’ve learned, number one, is that this is way harder than you think it’s gonna be. There is an extreme flexibility that is required in order to get people to not only be creative — you know, color outside the lines a little bit — but also to encourage them to break the right assumptions in the right ways.
You have to have the right kind of disregard for how things are normally done, but it’s easy to get people who are like, “I don’t care about the rules.” If you want radical innovation [and] you don’t care about efficiency, just find really egotistical people who are pretty smart, give them lots of money, and look away. You’ll get some radical innovation, but it won’t be efficient.
It’s that shaping while giving them flexibility that’s hard. If you shape too hard, the flexibility goes away. You [can] give them a checklist they can follow, but the bad news is they will follow it, and then all they’re doing is going inside a lane you’ve made for them. But if you create too much space, often they’ll churn on the ambiguity.
I mean, I happened to just be wearing today a t-shirt that says, “Chaos pilot.” It’s another term that we use internally to try to help people understand: we’re not just causing chaos. That’s not our job. It’s to go into unknown places, but then to metabolize it with some purpose. I would not say we’re done. [laughs]
Freethink: How do you assess if you’re heading in the right direction with a project? I imagine that if you’re piloting through a chaotic space, you can’t necessarily look outward for guidance to figure out where you should be heading. Maybe you can, depending on what data you’re collecting, but how do you assess yourselves?
Teller: Our goal is to build the foundations for enduring businesses that are really valuable and really good for the world.
Given that that’s our job, if you work at X, you need to have — from a pretty early day — what you could think of as a moonshot story-hypothesis, what might be called on the outside an “investment thesis.”
We believe deeply at X that purpose and profit are aligned. If you make something that could be really good for the world, but it loses money, it’s not really gonna be good for the world. If you could make a profitable company, but we can’t be proud of what it does in the world, like, let’s not do that. And so let’s fish at the intersection of those things, where the profit the company makes is aligned with the goodness it’s doing in the world. Then we can feel really good about it.
And so you’re always testing that hypothesis. Let’s take our Free Space Optics efforts — so this is to connect the unconnected and the underconnected around the world using light. Wireless optical communications.
So far it’s looking really good, but one of the things we’re looking for in all of our projects is, “How is this being experienced?” and “Where might there be unintended consequences that we have to be on the lookout for?”
“We believe deeply at X that purpose and profit are aligned. If you make something that could be really good for the world, but it loses money, it’s not really gonna be good for the world. If you could make a profitable company, but we can’t be proud of what it does in the world, like, let’s not do that. And so let’s fish at the intersection of those things, where the profit the company makes is aligned with the goodness it’s doing in the world.”
So if you worked at X, that would be the back-and-forth that we would have, that constant pressure testing relative to your hypothesis.
I guarantee you your hypothesis is wrong and will need to evolve — that’s what we’re paying for over time. Not for you to be right, but for you to make your hypothesis better.
Freethink: Most Americans today are pessimistic about the economy — the costs of education, housing, and healthcare have all outpaced inflation over the past few decades. In difficult economic times, how do you justify investing in moonshots? Obviously, you’re using capital from a private source, not a public one, but how do you and your colleagues at X evaluate the tradeoff inherent in making risky bets, where costs are significant, but payoffs can be profound?
Teller: Let’s try a thought experiment for a second. I’m gonna offer you a lottery ticket. It costs $1 to buy the lottery ticket. You have a one in 1,000 chance of having the lottery ticket pay off in your favor. If it pays off, you won’t even find out for 10 years. And it pays off for a million dollars. $1 to buy in. One-in-1,000 odds to win a $1,000,000 prize, 10 years to wait. So that has $1,000 in expected value, but you have to wait 10 years to get it. Would you buy that lottery ticket?
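The arithmetic behind that ticket is simple enough to check directly. Here is a minimal sketch using only the illustrative figures Teller gives; the 5% annual discount rate in the second calculation is an added assumption, purely for illustration:

```python
# Teller's illustrative lottery ticket: $1 to play, a 1-in-1,000 chance
# of winning, a $1,000,000 prize, paid out 10 years from now.
cost = 1.00
p_win = 1 / 1_000
prize = 1_000_000

expected_value = p_win * prize - cost  # roughly $999 in expectation
print(f"Expected value: ${expected_value:,.2f}")

# Even discounted for the 10-year wait (5% per year is an assumed
# illustration, not a figure from the interview), it stays well above $0.
discounted = p_win * prize / (1.05 ** 10) - cost
print(f"Discounted expected value: ${discounted:,.2f}")
```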
Freethink: Well, absolutely, these are great odds — one in 1,000 for the lottery!
Teller: Right. Now let’s play a more complicated game. I don’t know exactly how good this lottery ticket is. It could be one in 1,000, it could be one in a million. I don’t really know. I don’t know what the prize is. It could be $1,000, it could be $1,000,000, but it’s only going to cost you $1 to find out and to start to tighten those bounds.
If you thought even some of the lottery tickets you were looking at were gonna be as good as that first lottery ticket I described to you, do you think you might pay $1 to see if you can learn more about that lottery ticket?
Freethink: I think so — from what you’re saying, is the majority of your time just spending that $1 to find out if there’s even any expected value to a project?
Teller: We’re shooting for radical innovation, but trying to get it efficiently. And I believe that what I’ve just described to you is something that’s actually available to everybody in the world.
I think the world is littered with opportunities like this, and because we don’t train people to go looking for those kinds of opportunities, we leave a lot of value and a lot of goodness for the world on the table.
I hate failing. I don’t wanna fail myself. I don’t wanna fail Alphabet. I don’t wanna fail the world. I don’t think anybody here wants to fail.
But the issue is, failure is so stigmatized in our society [but] learning is driven almost exclusively through moments where what we thought was going to happen doesn’t happen. That’s what drives learning. You learn nothing when you’re right.
Freethink: A number of commentators, including economist Tyler Cowen and entrepreneur Patrick Collison, have argued that we should be worried about the pace of progress — namely, that it’s slowing down. If you agree, what do you think has led us to this moment, when by some measures productivity is falling, despite universities investing more in research and producing more STEM PhDs than ever before? Do we need a moonshot for progress itself?
Teller: Let me make a distinction between the rate at which productivity is increasing and the rate at which the world is changing.
They are right that productivity growth has slowed almost to zero. So productivity isn’t getting worse, but it isn’t getting better very fast, and there are lots of parts of the modern world that are essentially depending on productivity to go up. You know, the national debt is going to be hard to service if we can’t grow our way out of it through productivity. So I understand that concern and I think it deserves attention.
But I’m not sure that that problem is at root an innovation or technology problem, because the rate at which the world is changing is speeding up, and the rate at which you can metabolize those changes is not speeding up as fast, which means that there’s a growing gap between the rate of change of technology and our ability to metabolize that change as a society. And that is very central to all of the angst that you feel out in the modern world.
“Failure is so stigmatized in our society [but] learning is driven almost exclusively through moments where what we thought was going to happen doesn’t happen. That’s what drives learning. You learn nothing when you’re right.”
So it is both simultaneously true that we have a not-changing-fast-enough problem and a changing-so-fast-it’s-causing-us-heartburn problem. And so I would argue that leaving at the doorstep of technology or innovation the idea that productivity isn’t going up fast enough is probably wrong. I would suggest that there are probably some public policy and social issues which need to be addressed so that productivity can hitch its wagon more effectively to the rate of change that the world has already experienced.
Freethink: Moonshots like the Apollo space program and the Manhattan Project occupy a huge space in the public consciousness. What do you see as the Manhattan Projects of today? Would something like Operation Warp Speed, which developed vaccines to combat the COVID-19 pandemic, qualify? And are there any scientific endeavors that you feel we should regard similarly but that have been overlooked?
Teller: I think the all-in-poker-chips equivalent of the Apollo space mission won’t happen again. The world’s moved on from a sort of government-centric solution for the world. That doesn’t mean governments can’t be involved or shouldn’t be involved. It means I don’t think they can solve problems by themselves.
It is also definitely not the case [for us] — there is no problem that X is going to solve by itself. That’s also not a thing. I think part of what’s happened is that the world is now complex enough that these moonshots have to be more distributed.
What happened during COVID? I don’t know if you could exactly call it a moonshot but the whole scientific community of the world — helped along by things like the mRNA vaccine, which had been brewing for a long time — started working together at literally 10 times the rate.
That was really inspiring. It makes me very sad that we seem to have gone back to normal, but it was a proof that it can be done, and it happened kind of bottom-up. There were things like Operation Warp Speed, but those were nuggets within a much bigger system that was fairly organic. So I take a lot of inspiration from that, that it can happen and I think that’s probably not a bad analogy.
“There’s a growing gap between the rate of change of technology and our ability to metabolize that change as a society. And that is very central to all of the angst that you feel out in the modern world.”
Let me give you one or two other ones. The electric grid is the largest, most complex machine humans have ever made. It’s having a hard time right now in this country and all over the world, and it is central to our ability to support humanity, to lift up humanity, and if we don’t fix how it works, it’s going to be central to destroying humanity just because of the climate change effects that sort of come along through the grid. But it’s also our opportunity to take a big bite out of the problem of climate change, so that is an opportunity for a moonshot.
Now at X we are working on a moonshot for the electric grid, but we can’t just be tinkering in a closet somewhere, like, “Aha, here’s the solution!” because we have to work with system operators, the transmission folks. So we’re trying to do our part, but I see that as another example of something that we need to be on as a society and no one of us could just solve it.
Freethink: Right. That’s very interesting to hear about the electrical grid moonshot. Is that something that you’ve spoken about publicly?
Teller: We’ve mentioned it, but the short of it, I would put this way. The electric grid was built at a time when the demand — I mean, sometimes Bob turned on the lights and sometimes Suzie turned on the lights, but it was stochastic enough that there were very predictable needs at different times of the day. And so it was this sort of system where you produce the energy, you turn it up and down — and it wasn’t simple. It was hard for the time.
But the world is getting complex very fast. Every time an EV is plugged in, you don’t even know whether it could give electrons back to the grid, or it’s going to try to pull them down. And, you know, the sun is shining. Hey, here’s some more electrons, a lot more electrons. Whoops, the clouds showed up. All those electrons just stopped. You know the wind blowing — same thing. So you’re getting these huge fluctuations in the grid and the grid wasn’t set up for that.
But the grid is so complex that there is no grid operator in the world that has a detailed enough map of their own machine that they can plan for how to fix it. So if you make a solar field and then you want to plug it into the grid, you will end up waiting, on average, in the United States, in about a seven-year line. And that’s not because the system operators are bad people or dumb people.
They have to, by hand, run all of these what-if simulations. Like, “OK, let’s say we plug this new solar field onto the grid and then the heat was really high and it was June and the sun was doing this.” And they have to plan out this very specific scenario. And then they kind of roll it forward and try to see, does anything really bad happen? That was one scenario. It took them weeks maybe. They have to do a bunch of those before they can have any confidence that you plugging your solar field onto the grid won’t be a disaster.
And they don’t have a simulator that would allow them to do that in a second instead of in a month. So, we have bigger aspirations, but we think that the sort of first really foundational piece is how do we help the system operators of the world have a virtualization of their grid that allows for them to deeply understand what they have, what they need to fix, and what would happen to it if they did various things, so that they can upgrade it much faster.
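As a purely illustrative sketch of the kind of what-if study Teller is describing — with hypothetical line names, capacities, and probabilities, not anything drawn from X’s actual work — the loop a fast simulator would accelerate looks roughly like this:

```python
import random

# Toy version of a what-if study: would connecting a new solar farm overload
# any transmission line in a stressed scenario (hot June afternoon, high
# demand, clouds coming and going)?
# Every name, capacity, and probability below is a made-up illustration.

LINE_CAPACITY_MW = {"line_a": 400, "line_b": 250}

def scenario_overloads(new_solar_mw: float, seed: int) -> bool:
    """Roll one scenario forward and report whether any line exceeds its limit."""
    rng = random.Random(seed)
    demand_mw = rng.uniform(500, 650)                # afternoon peak demand
    solar_mw = new_solar_mw * rng.uniform(0.2, 1.0)  # output varies with clouds
    net_load_mw = demand_mw - solar_mw
    # A real study solves power-flow equations; a fixed 60/40 split is only
    # here to show the shape of the loop.
    flows = {"line_a": 0.6 * net_load_mw, "line_b": 0.4 * net_load_mw}
    return any(flow > LINE_CAPACITY_MW[line] for line, flow in flows.items())

# Operators today run a handful of these studies largely by hand over weeks;
# a fast simulator could sweep thousands of scenarios in seconds.
bad = sum(scenario_overloads(new_solar_mw=120, seed=s) for s in range(10_000))
print(f"Scenarios with an overload: {bad:,} / 10,000")
```

The point of the sketch is the shape of the problem: each study is one scenario rolled forward and checked for trouble, and the value of the grid “virtualization” Teller describes is running thousands of such scenarios in seconds rather than one by hand over weeks.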
Freethink: Obviously, it’s not always easy to predict how new technologies will be used. When the written word came along, Socrates complained that writing would erode people’s memory — I can only imagine what he would say about cell phones. But I’m curious about what you see as the pitfalls of innovation — how do you evaluate the risks of your work at X?
Teller: Anyone who is doing any incremental innovation — and even more so, I suppose, for radical innovation — has an obligation to try to see what those problems might be and to deal with them as thoughtfully as possible.
Mostly our experience is that you can’t predict the stuff ahead of time. It means you have to get out into the world soon, and you have to do it in a sandboxed kind of way that doesn’t endanger anybody, but still allows you to learn in the process — the equivalent of having Waymo cars out in the world very early on, but with safety drivers with their hands right by the wheel.
We did that for a decade, and that was a way for us to be learning in the real world — because we couldn’t learn on a fake driveway — but to still keep everybody safe in the process.
“Technology can be misused. It’s the responsibility of the technologists, the inventors, as a group, to avoid that where they can. But as a society, we also need to make sure that we don’t let those fears cause us to miss out on all the benefits of the technology.”
We do that with a lot of the things that we do. Let me put it into three categories. I think there are things that are clearly the responsibility of the innovators as a group. Your drone for package delivery shouldn’t hit a person or hit the house. Your thing has to not do damage — direct bad stuff.
There’s a bit of a gray area where you still have a responsibility as the innovation team. So that might be more like the noise from a drone. You know, nothing really horrible is gonna happen, but that’s just rude and we’re part of a society.
And then there’s things which are not solvable by the innovation unit itself. So for example, most innovation causes some new jobs to come up and some jobs to go away. It’s almost impossible to do a new technology that doesn’t cause that to happen to some extent, but the ability to move people from one domain to another and the sort of job reskilling, that’s a public policy issue.
That’s not something that the tech innovation group can solve by themselves, but they still have a responsibility to go to the public-policy sphere and say, here’s what’s coming, here’s the shifts we imagine will happen, here’s the details of how this technology works, so you, the public policy makers, can make the best choices possible.
Freethink: Your grandfather, Edward Teller, who’s portrayed by Benny Safdie in Oppenheimer, helped invent the atomic bomb — perhaps the most famous example of an innovation that brought great risks and promise all at once. What did you learn from him about innovation?
Teller: His main interest for his whole life — certainly the part of his life where I knew him — was nuclear power. I don’t mean nuclear bombs. I mean nuclear power, like stations that could create electricity for us. And he was ahead of his time on this, because back in the ‘60s and ‘70s, he saw climate change coming.
That’s why he was so worked up about nuclear power plants. And because the destructive power of nuclear bombs was so disturbing to society — for understandable reasons — it created this sort of really destructive, confused panic. And it fed this broken narrative about nuclear power being bad for us, which, ironically, if we hadn’t let ourselves be clouded by the power of nuclear bombs, would have allowed us to avoid what is now actually the problem, which is climate change.
And I think that there’s a metaphor in there for lots of other technologies. Technology can be misused. It’s the responsibility of the technologists, the inventors, as a group, to avoid that where they can. But as a society, we also need to make sure that we don’t let those fears cause us to miss out on all the benefits of the technology.
Freethink: Given the way nuclear weapons seem to have sullied the promise of nuclear power, how important is storytelling to technological innovation?
Teller: I think it’s critical. They hear a lot about it from me here at X. I mean, part of the moonshot hypothesis I described — I called it before a moonshot story-hypothesis — it’s an architecture. A story isn’t marketing. A story is an intellectual architecture and an argument that helps you understand something. That’s really what a story is.
And I think we could do a much better job on lots of pieces of technology in understanding what it really is, in a sanguine way, explaining it to each other, building up sophistication about it in public. That we as a society have strong reactions to a thing without really understanding the thing makes it almost impossible for us as a society to get our arms around what we really should best do with that thing.
So I do think that storytelling is important and when wise, organized versions of it don’t happen, that doesn’t mean no storytelling happens. It just means that the shrillest voices on both the positive and negative side overwhelm what could be the thoughtful story in the middle — for any technology.