How Progress Compounds and Why It Matters
Gates’s Law
“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years.”
It’s unclear exactly who first made that statement, when they said it, or how it was phrased. The most probable source is Roy Amara, a Stanford computer scientist. In the 1960s, Amara told colleagues that he believed that “we overestimate the impact of technology in the short-term and underestimate the effect in the long run.” For this reason, variations on that phrase are often known as Amara’s Law. However, Bill Gates made a similar statement (possibly paraphrasing Amara), so it’s also known as Gates’s Law.
You may have seen the same phrase attributed to Arthur C. Clarke, Tony Robbins, or Peter Drucker. There’s a good reason why Amara’s words have been appropriated by so many thinkers—they apply to so much more than technology. Almost universally, we tend to overestimate what can happen in the short term and underestimate what can happen in the long term.
Thinking about the future does not require endless hyperbole or even forecasting, which is usually pointless anyway. Instead, there are patterns we can identify if we take a long-term perspective.
Let’s look at what Bill Gates meant and why it matters.
Moore’s Law
Gates’s Law is often mentioned in conjunction with Moore’s Law. This is generally quoted as some variant of “the number of transistors on a square inch of silicon doubles every eighteen months.” However, calling it Moore’s Law is misleading—at least if you think of laws as invariant. It’s more of an observation of a historical trend.
When Gordon Moore, co-founder of Fairchild Semiconductor and Intel, noticed in 1965 that the number of transistors on a chip doubled every year, he was not predicting that would continue in perpetuity. Indeed, Moore revised the doubling time to two years a decade later. But the world latched onto his words. Moore’s Law has been variously treated as a target, a limit, a self-fulfilling prophecy, and a physical law as certain as the laws of thermodynamics.
Moore’s Law held true for several decades but is now considered outdated. That doesn’t mean the concept has gone anywhere. Moore’s Law is often treated as a general principle in technological development: certain performance metrics have a characteristic doubling time, the opposite of a half-life.
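A fixed doubling time is easy to state but hard to internalize. As a minimal sketch (the starting count and the two-year doubling period below are illustrative assumptions, not a fit to historical data), projecting any doubling metric forward looks like this:

```python
def projected_value(initial, years, doubling_time_years):
    """Value of a metric after `years`, if it doubles every `doubling_time_years`."""
    return initial * 2 ** (years / doubling_time_years)

# Illustrative starting point: roughly 2,300 transistors (about the
# Intel 4004 of 1971), doubling every two years.
for years in (2, 10, 20):
    print(years, round(projected_value(2300, years, 2)))
# Two years gives 2x, ten years gives 32x, twenty years gives 1,024x.
```

The point is the shape, not the particular numbers: ten doublings multiply the metric by roughly a thousand, which is why intuition calibrated on a year or two transfers so poorly to a decade.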
Why Is Moore’s Law Related to Amara’s Law?
Exponential growth is a concept we struggle to grasp. As University of Colorado physics professor Albert Allen Bartlett famously put it, “The greatest shortcoming of the human race is our inability to understand the exponential function.”
When we talk about Moore’s Law, we easily underestimate what happens when a value keeps doubling. Sure, it’s not that hard to imagine your laptop getting twice as fast in a year. Where it gets tricky is imagining what that means on a longer timescale. What does it mean for your laptop in ten years? There is a reason your iPhone has more processing power than the computers that flew the first space shuttle.
One of the best illustrations of exponential growth is the legend of a peasant and the emperor of China. In the story, the peasant (sometimes said to be the inventor of chess) visited the emperor with a seemingly modest request: a chessboard with one grain of rice on the first square, two on the second, four on the third, and so on, doubling with each square. The emperor agreed to this idiosyncratic request and ordered his men to start counting out rice grains.
If you haven’t heard this story before, it might seem like the peasant would end up with, at best, enough rice to feed their family that evening. In reality, the request was impossible to fulfill. Doubling one grain 63 times (the number of squares on a chessboard, minus the first one that only held one grain) would mean the emperor had to give the peasant over 18 million trillion grains of rice. To grow just half of that amount, he would have needed to drain the oceans and convert every bit of land on this planet into rice fields. And that’s for half.
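The arithmetic is easy to verify. A few lines of Python make the scale concrete (the grain total is exact; the ocean-draining comparison is the story’s, not computed here):

```python
# Total grains on a 64-square board, doubling from a single grain:
# 1 + 2 + 4 + ... + 2**63 = 2**64 - 1.
total = sum(2**square for square in range(64))
print(f"{total:,}")  # 18,446,744,073,709,551,615 -- over 18 million trillion

# The first half of the board accounts for almost nothing by comparison.
first_half = sum(2**square for square in range(32))
print(f"first 32 squares: {first_half:,}")  # 4,294,967,295 -- about 4.3 billion
```

Halfway through the board, the emperor owes only about four billion grains; the second half of the board is where the request becomes impossible.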
In his essay “The Law of Accelerating Returns,” author and inventor Ray Kurzweil uses this story to show how we misunderstand exponential growth in technology. For the first few squares, the growth was inconsequential, especially in the eyes of an emperor. It was only past the halfway point that the numbers began to snowball dramatically. (It’s no coincidence that Warren Buffett’s authorized biography is called The Snowball; few people understand exponential growth better than Buffett.) By Kurzweil’s estimation, we’re at that inflection point in computing: since the creation of the first computers, computing power has doubled roughly 32 times. We may underestimate the long-term impact because this continued doubling is so tricky to imagine.
The Technology Hype Cycle
To understand how this plays out, let’s take a look at the cycle innovations go through after their invention. Known as the Gartner hype cycle, it primarily concerns our perception of technology—not its actual value in our lives.
Hype cycles are obvious in hindsight, but fiendishly difficult to spot while they are happening. It’s important to bear in mind that this model is one way of looking at reality and is not a prediction or a template. Sometimes a step gets missed, sometimes there is a substantial gap between steps, sometimes a step is deceptive.
The hype cycle happens like this:
- New technology: The media picks up on the existence of a new technology which may not exist in a usable form yet. Nonetheless, the publicity leads to significant interest. At this point, people working on research and development are probably not making any money from it. Lots of mistakes are made. In Everett Rogers’s diffusion of innovations theory, this is known as the innovation stage. If it seems like something new will have a dramatic payoff, it probably won’t last. If it seems we have found the perfect use for a brand-new technology, we may be wrong.
- The peak of inflated expectations: A few well-publicized success stories lead to inflated expectations. Hype builds and new companies pop up to anticipate the demand. There may be a burst of funding for research and development. Scammers looking to make a quick buck may move into the area. Rogers calls this the syndication stage. It’s here that we overestimate the future applications and impact of the technology.
- The trough of disillusionment: Prominent failures or a lack of progress break through the hype and lead to disillusionment. People become pessimistic about the technology’s potential and mostly lose interest. Reports of scams may contribute to this, as the media uses them as a reason to describe the technology as a fraud. If it seems like a new technology is dying, it may just be that its public perception has changed while the technology itself is still developing. Hype does not correlate directly with functionality.
- The slope of enlightenment: As time passes, people continue to improve technology and find better uses for it. Eventually, it’s clear how it can improve our lives, and mainstream adoption begins. Mechanisms for preventing scams or lawbreaking emerge.
- The plateau of productivity: The technology becomes mainstream. Development slows. It becomes part of our lives and ceases to seem novel. Those who move into the now saturated market tend to struggle, as a few dominant players take the lion’s share of the available profits. Rogers calls this the diffusion stage.
When we are cresting the peak of inflated expectations, we imagine that the new development will transform our lives within months. In the depths of the trough of disillusionment, we don’t expect it to get anywhere, even allowing years for it to improve. We typically fail to anticipate the significance of the plateau of productivity, even if it exceeds our initial expectations.
Smart people can usually see through the initial hype. But only a handful of people can—through foresight, stubbornness, or perhaps pure luck—see through the trough of disillusionment. Most of the initial skeptics feel vindicated by the dramatic drop in interest and expect the innovation to disappear. It takes far greater expertise to support an unpopular technology than to deride a popular one.
Correctly spotting the cycle as it unfolds can be immensely profitable. Misreading it can be devastating. First movers in a new area often struggle to survive the trough, even if they are the ones who do the essential research and development. We tend to assume current trends will continue, so we expect sustained growth during the peak and expect linear decline during the trough.
If we are trying to assess the future impact of a new technology, we need to separate its true value from its public perception. When something is new, the mainstream hype is likely to be more noise than signal. After all, the peak of inflated expectations often happens before the technology is available in a usable form. It’s almost always before the public has access to it. Hype serves a real purpose in the early days: it draws interest, secures funding, attracts people with the right talents to move things forward and generates new ideas. Not all hype is equally important, because not all opinions are equally important. If there’s intense interest within a niche group with relevant expertise, that’s more telling than a general enthusiasm.
The hype cycle doesn’t just happen with technology. It plays out all over the place, and we’re usually fooled by it. Discrepancies between our short- and long-term estimates of achievement are everywhere. Consider the following situations. They’re hypothetical, but similar situations are common.
- A musician releases an acclaimed debut album which creates enormous interest in their work. When their second album proves disappointing (or never materializes), most people lose interest. Over time, the performer develops a loyal, sustained following of people who accurately assess the merits of their music, not the hype.
- A promising new pharmaceutical receives considerable attention—until it becomes apparent that there are unexpected side effects, or it isn’t as powerful as expected. With time, clinical trials find alternate uses which may prove even more beneficial. For example, a side effect could be helpful for another use. It’s estimated that over 20% of pharmaceuticals are prescribed for a different purpose than they were initially approved for, with that figure rising as high as 60% in some areas.
- A propitious start-up receives an inflated valuation after a run of positive media attention. Its founders are lauded and extensively profiled and investors race to get involved. Then there’s an obvious failure—perhaps due to the overconfidence caused by hype—or early products fall flat or take too long to create. Interest wanes. The media gleefully dissects the company’s apparent demise. But the product continues to improve and ultimately becomes a part of our everyday lives.
In the short run, the world is a voting machine affected by whims and marketing. In the long run, it’s a weighing machine where quality and product matter.
The Adjacent Possible
Now that we know how Amara’s Law plays out in real life, the next question is: why does this happen? Why does technology grow in complexity at an exponential rate? And why don’t we see it coming?
One explanation is what Stuart Kauffman describes as “the adjacent possible.” Each new innovation adds to the number of achievable possible (future) innovations. It opens up adjacent possibilities which didn’t exist before, because better tools can be used to make even better tools.
Humanity is about expanding the realm of the possible. Discovering fire meant our ancestors could use the heat to soften or harden materials and make better tools. Inventing the wheel meant the ability to move resources around, which meant new possibilities such as the construction of more advanced buildings using materials from other areas. Domesticating animals meant a way to pull wheeled vehicles with less effort, meaning heavier loads, greater distances and more advanced construction. The invention of writing led to new ways of recording, sharing and developing knowledge which could then foster further innovation. The internet continues to give us countless new opportunities for innovation. Anyone with a new idea can access endless free information, find supporters, discuss their ideas and obtain resources. New doors to the adjacent possible open every day as we find different uses for technology.
Take the case of GPS, an invention that was itself built out of the debris of its predecessors. The system was developed by the US government for military use. In the 1980s, the government decided to allow other organizations and individuals to use it, and civilian access opened up possibilities that didn’t exist before. Since then, GPS has been incorporated into numerous innovations: self-driving cars, mobile phone tracking (useful for solving crimes or finding people in emergencies), tectonic plate trackers that help predict earthquakes, personal navigation systems, self-navigating robots, and many others. None of these would have been possible without some sort of global positioning system. With the invention of GPS, human innovation sped up a little more.
Steven Johnson gives one example of how this happens in Where Good Ideas Come From. In 2008, MIT professor Timothy Prestero visited a hospital in Indonesia and found that all eight of its incubators for newborn babies were broken. The incubators had been donated by relief organizations, but the staff didn’t know how to fix them. The machines were poorly suited to the humid climate, and the repair instructions only came in English. Prestero realized that donating medical equipment was pointless if local people couldn’t fix it. He and his team set out to design an incubator that would keep saving newborn lives for more than a couple of months.
Instead of continuing to tweak existing designs, Prestero and his team devised a completely new incubator built from car parts. While the local people didn’t know how to fix an incubator, they were extremely adept at keeping their cars working no matter what. Named the NeoNurture, it used headlights for warmth, dashboard fans for ventilation, and a motorcycle battery for power. Hospital staff just needed to find someone who was good with cars to fix it—the principles were the same.
Even more telling is the origin of the incubators Prestero and his team reconceptualized. The first incubator for newborns was designed by Stéphane Tarnier in the late 19th century. While visiting a zoo on his day off, Tarnier noticed that newborn chicks were kept in heated boxes. Tarnier was an obstetrician, working at a time when the infant mortality rate for premature babies was about 66%, so it’s not a big leap to imagine the problem was permanently on his mind. He must have been eager to try anything that could reduce that figure and its emotional toll. Tarnier’s rudimentary incubator immediately halved that mortality rate. The technology was right there, in the zoo. It just took someone to connect the dots and realize human babies aren’t that different from chicks.
Johnson explains the significance of this: “Good ideas are like the NeoNurture device. They are, inevitably, constrained by the parts and skills that surround them…ideas are works of bricolage; they’re built out of that detritus.” Tarnier could invent the incubator only because someone else had already invented a similar device. Prestero and his team could only invent the NeoNurture because Tarnier had come up with the incubator in the first place.
This happens in our lives, as well. If you learn a new skill, the number of skills you could potentially learn increases because some elements may be transferable. If you are introduced to a new person, the number of people you could meet grows, because they may introduce you to others. If you start learning a language, native speakers may be more willing to have conversations with you in it, meaning you can get a broader understanding. If you read a new book, you may find it easier to read other books by linking together the information in them. The list is endless. We can’t imagine what we’re capable of achieving in ten years because we forget about the adjacent possibilities that will emerge.
Accelerating Change
The adjacent possible has been expanding ever since the first person picked up a stone and started shaping it into a tool. Just look at what written and oral forms of communication made possible—no longer did each generation have to learn everything from scratch. Suddenly we could build upon what had come before us.
Some (annoying) people claim that there’s nothing new left. There are no new ideas to be had, no new creations to invent, no new options to explore. In fact, the opposite is true. Innovation is a non-zero-sum game. A crowded market actually means more opportunities to create something new than a barren one. Technology is a feedback loop. The creation of something new begets the creation of something even newer and so on.
Progress is exponential, not linear. So we overestimate the impact of a new technology during the early days when it is just finding its feet, then underestimate its impact in a decade or so when its full uses are emerging. As old limits and constraints melt away, our options explode. The exponential growth of technology is known as accelerating change. It’s a common belief among experts that the rate of change is speeding up and society will change dramatically alongside it.
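That over-then-under pattern falls straight out of the arithmetic. If we extrapolate linearly from early progress while the real process compounds, the forecast runs ahead at first and then falls hopelessly behind. The rates below are arbitrary assumptions, chosen only to show the shape:

```python
# Illustrative only: a linear forecast of +40 units/year versus
# actual compounding growth of 10% per year, both starting at 100.
def linear_forecast(year):
    return 100 + 40 * year

def compounding(year):
    return 100 * 1.10 ** year

for year in (1, 5, 15, 30):
    print(year, linear_forecast(year), round(compounding(year)))
# In year 1 the linear forecast overshoots (140 vs. 110); by year 30
# it badly undershoots (1300 vs. roughly 1745).
```

Swap in different rates and the crossover year moves, but the shape never changes: linear expectations overestimate compounding processes early and underestimate them late.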
In 1999, author and inventor Ray Kurzweil posited the Law of Accelerating Returns: that evolutionary systems develop at an exponential rate. While this is most obvious for technology, Kurzweil hypothesized that the principle applies in numerous other areas. Moore’s Law, initially referring only to transistor counts, has wider implications.
In an essay on the topic, he writes:
An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense “intuitive linear” view. So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). The “returns,” such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth.
Progress is tricky to predict or even to notice as it happens. It’s hard to notice things in a system that we are part of. And it’s hard to notice the incremental change because it lacks stark contrast. The current pace of change is our norm, and we adjust to it. In hindsight, we can see how Amara’s Law plays out.
Look at where the internet was just twenty years ago. A report from the Pew Research Center shows us how change compounds. In 1998, a mere 41% of Americans used the internet at all—and the report expresses surprise that users were beginning to include “people without college training, those with modest incomes, and women.” Less than a third of users had bought something online, email was predominantly for work, and only a third of users looked at online news at least once per week. That’s a third of the 41% using the internet, by the way, not of the general population. Wikipedia and Gmail didn’t exist. Internet users in the late nineties reported that their main problem was finding what they needed online.
That is perhaps the biggest change, and one we may not have anticipated: the move towards personalization. Finding what we need is no longer a problem. Most of us have the opposite problem and struggle with information overload. Twenty years ago, filter bubbles were barely a problem (at least, not online). Now, almost everything we encounter online is personalized to ensure it’s ridiculously easy to find what we want. Newsletters, websites, and apps greet us by name. Newsfeeds are organized by our interests. Shopping sites recommend other products we might like. This has increased the amount the internet does for us to a level that would have been hard to imagine in the late 90s. Kevin Kelly, writing in The Inevitable, describes filtering as one of the key forces that will shape the future.
History reveals an extraordinary acceleration of technological progress. Establishing the precise history of technology is problematic as some inventions occurred in several places at varying times, archaeological records are inevitably incomplete, and dating methods are imperfect. However, accelerating change is a clear pattern. To truly understand the principle of accelerating change, we need to take a quick look at a simple overview of the history of technology.
Early innovations happened slowly. It took us about 30,000 years to invent clothing and about 120,000 years to invent jewelry. It took us about 130,000 years to invent art and about 136,000 years to come up with the bow and arrow. But things began to speed up in the Upper Paleolithic period. Between 50,000 and 10,000 years ago, we developed more sophisticated tools with specialized uses—think harpoons, darts, fishing tools, and needles—early musical instruments, pottery, and the first domesticated animals. Between roughly 11,000 years ago and the 18th century, the pace truly accelerated. That period essentially led to the creation of civilization, with the foundations of our current world.
More recently, the Industrial Revolution changed everything because it moved us significantly further away from relying on the strength of people and domesticated animals to power means of production. Steam engines and machinery replaced backbreaking labor, meaning more production at a lower cost. The number of adjacent possibilities began to snowball. Machinery enabled mass production and interchangeable parts. Steam-powered trains meant people could move around far more easily, allowing people from different areas to mix together and share ideas. Improved communications did the same. It’s pointless to even try listing the ways technology has changed since then. Regardless of age, we’ve all lived through it and seen the acceleration. Few people dispute that the change is snowballing. The only question is how far that will go.
As Stephen Hawking put it in 1993:
For millions of years, mankind lived just like the animals. Then something happened which unleashed the power of our imagination. We learned to talk and we learned to listen. Speech has allowed the communication of ideas, enabling human beings to work together to build the impossible. Mankind’s greatest achievements have come about by talking, and its greatest failures by not talking. It doesn’t have to be like this. Our greatest hopes could become reality in the future. With the technology at our disposal, the possibilities are unbounded. All we need to do is make sure we keep talking.
But, as we saw with Moore’s Law, exponential growth cannot continue forever. Eventually, we run into fundamental constraints. Hours in the day, people on the planet, availability of a resource, smallest possible size of a semiconductor, attention—there’s always a bottleneck we can’t eliminate. We reach the point of diminishing returns. Growth slows or stops altogether. We must then either look at alternative routes to improvement or leave things as they are. In Everett Rogers’s diffusion of innovations theory, this is known as the substitution stage, when usage declines and we start looking for substitutes.
This process is not linear. We can’t predict the future because there’s no way to take into account the tiny factors that will have a disproportionate impact in the long run.
This article was originally published on the Farnam Street Blog. Reprinted with permission.