Is the Next Economy Taking Shape?

The United States needs to be preparing now for what it will do when the computer-driven new economy loses momentum.

Recent economic trends, including a massive trade deficit, declining median incomes, and relatively weak job growth, have been, to say the least, disheartening. But there is one bright spot: strong productivity growth. Starting in the mid-1990s, productivity rebounded after 20 years of relatively poor performance. Why has productivity grown so much? Why did it fall so suddenly in the 1970s and 1980s? Is this latest surge likely to last? The answers to these questions go to the heart of understanding the prospects for future U.S. prosperity.

Unfortunately, economists have provided few answers, largely because conventional neoclassical growth models ignore technological innovation. In contrast, a “neo-Schumpeterian” analysis suggests that the revival and stagnation of productivity are tied to the emergence and subsequent exhaustion of new techno-economic production systems. When an old economy reaches its limits in terms of innovation and the diffusion of the technology system, it becomes increasingly difficult to eke out productivity gains. Only when a new technology system becomes affordable enough and pervasive enough is it able to revitalize the engine of productivity. This analysis suggests that although the current information technology (IT)–based technology system is likely to continue to drive strong productivity growth for at least another decade, an innovation-exhaustion slowdown may be just over the horizon.

The old mass-production corporate economy emerged after World War II and prospered until the early 1970s. This was indeed a golden age, during which labor productivity grew on average 3% per year and real family incomes boomed (30% during the 1960s). Yet, starting in 1973, labor productivity growth fell precipitously to about 1.3% per year and income growth stagnated. Between 1973 and 1980, average family income did not grow at all, and it increased just 9% from 1981 to 1996.
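The difference between those two growth rates compounds dramatically over time. The following minimal Python sketch illustrates the point using the rates cited above; the 23-year horizon (roughly 1973 to 1996) and the starting index of 100 are assumptions chosen only to make the comparison concrete, not figures from the article.

```python
# Illustrative compounding of labor productivity under the two growth rates
# cited in the text: roughly 3% per year (postwar "golden age") versus
# roughly 1.3% per year (post-1973 slowdown). The 23-year horizon and the
# starting index of 100 are illustrative assumptions, not the article's data.

def compound(start_level: float, annual_rate: float, years: int) -> float:
    """Return the productivity index after compounding annual_rate for years."""
    return start_level * (1 + annual_rate) ** years

start, years = 100.0, 23
golden_age = compound(start, 0.030, years)  # ~3% per year
slowdown = compound(start, 0.013, years)    # ~1.3% per year

print(f"Golden-age path after {years} years: {golden_age:.0f}")   # ~197
print(f"Slowdown path after {years} years:   {slowdown:.0f}")     # ~135
print(f"Gap: {golden_age / slowdown - 1:.0%} higher output per hour")
```

Under these illustrative inputs, output per hour on the golden-age path ends up roughly 45 to 50% higher than on the slowdown path, which is the arithmetic behind the stagnant family incomes described above.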

Why, to borrow a phrase coined by economists Barry Bluestone and Bennett Harrison, did this “great U-turn” happen? Economists struggled to find answers, postulating that factors such as energy prices, interest rates, and taxes contributed to the decline. Economist Edward Denison conducted the most comprehensive analysis of the productivity slowdown and concluded that collectively these kinds of factors could explain at best only 40% of the slowdown. The remaining 60% was a mystery.

To this day, economists are not quite sure what happened. Alan Blinder, former vice chairman of the Board of Governors of the Federal Reserve System, states, “No one quite knows why productivity growth slowed down so much, although many partial explanations—higher energy costs, lagging investment, and deterioration in the skills of the average worker—have been offered.” Economist and columnist Paul Krugman confesses, “We do not really know why productivity growth ground to a near halt. Unfortunately, that makes it hard to answer the other question. What can we do to speed it up?”

It’s the technology, stupid

Economists had difficulty determining the causes because they were not looking at the underlying technological production system and how its core technologies were changing. When viewed through the lens of economic cycles, however, the puzzle of falling productivity begins to make sense. In its heyday, the mass-production corporate economy was able to take advantage of a number of key innovations in technology, scale economies, and the organization of enterprises to create significant new efficiencies. Numerous production innovations, including automated assembly lines, numerically controlled machine tools, automated process control systems, and mechanical handling systems, drove down prices in U.S. manufacturing and led to the production of a cornucopia of inexpensive manufactured consumer goods. In fact, the rise of mechanical automation was the truly great development of that era’s economy. The term “automation” was not even coined until 1945, when the engineering division of Ford used it to describe the operations of its new transfer machines that mechanically unloaded stampings from the body presses and positioned them in front of machine tools. But automation was not confined to autos, discrete parts, and other durable goods industries; it became widespread in commodity processing. Continuous-flow innovations date back to 1939, when Standard Oil of New Jersey created the first of the industry’s great fluid crackers. In these plants, raw material flowed continuously in at one end and finished product emerged at the other.

Even if all establishments adopted the core technologies, productivity could still keep growing as long as the technologies themselves continued to improve. Indeed, this is what happened for many years, from the end of World War II until the early 1970s. But by the late 1970s, the dominant electro-mechanical technological path was exhausted, and further gains came with increasing difficulty. Engineers, managers, and others who organize production had wrung out nearly all the efficiencies to be had from greater scale economies and from fully exploiting the existing technological system. Over time, virtually all enterprises had adopted the new technologies and ways of organizing work and firms: most manufacturers used assembly lines, most chemical companies adopted continuous-flow processing, and most companies sold through department stores.


This trend can be seen in a number of industries. In banking, for example, the limits of mechanical check-reading became apparent. In the early 1950s, IBM invented an automatic check-reading and sorting machine for use by banks. Every few years, IBM and other producers would come out with a better and somewhat cheaper machine that would process checks a little faster and with fewer errors. But by the early 1980s, the improvements slowed, because it is physically possible to move paper only so fast. At that point, efficiency gains became more difficult to achieve. The same trend can be observed in the auto industry. Numerically controlled machine tools and other mechanically based metalworking tools could not be made much more efficient. As a result, auto sector productivity growth declined from 3.8% per year between 1960 and 1975 to 2.2% between 1976 and 1995.

By the end of the 1970s, the only way to regain robust productivity growth rates was for the production system to get on a new S-curve path based on a new set of core technologies. Even though industry leaders recognized at the time that IT would be the basis of the new technology system, the transition would not happen overnight. Even as late as the early 1990s, this emerging IT-based techno-economic system was not well enough developed, was too expensive, and was too limited to exert a noticeable economywide effect on productivity and economic growth.

This was why, by the early 1990s, many economists began to question whether the new IT system was in fact going to be the savior of productivity. Emblematic of their doubts, Nobel Prize–winning economist Robert Solow famously quipped, “You can see the computer age everywhere but in the productivity statistics.” This conundrum—rapid developments in IT but no rapid growth in productivity—was labeled the “productivity paradox.” Because productivity growth had lagged since the early 1970s even as investments in IT grew, some concluded that IT did not affect productivity. For example, Bluestone and Harrison argued, “The first Intel chip, produced in late 1971, was capable of processing about 60,000 instructions per second. The latest, introduced in 1998, is capable of 300 million. Yet over that same period, productivity nose dived.”

In fact, IT was boosting productivity, but only in particular sectors. Since the 1970s, productivity has grown 1.1% per year in sectors investing heavily in computers and approximately 0.35% per year in sectors investing less. Between 1989 and 2001, productivity growth in IT-intensive industries averaged 3.03% per year, compared with only 0.42% per year in less–IT-intensive industries.

Why were computers not showing up in the overall productivity statistics? Drawing an analogy to the adoption of electric motors, Stanford University economic historian Paul David advanced the most widely cited explanation, claiming that it simply takes a long time to learn how to use a new technology. He pointed out that it took over 30 years for electric motors to be fully utilized by factories after they were first developed in the early 1900s, so we should not be surprised that it takes companies a long time to figure out how best to use information technologies and to reorganize their production systems around them. In contrast to the IT skeptics, David counseled patience.

Although David’s “learning” hypothesis seems reasonable, it suffers from two key problems. First, these technologies are actually not hard to learn. In fact, with “Windows” functionality, off-the-shelf software, and the easy-to-use Internet, information technologies are relatively easy for companies and people to adopt and use. Second, David’s learning story suggests that technologies come on the scene fully formed and that it takes years for recalcitrant organizations to finally adopt them and figure out how to use them. Yet electric motor technology itself took more than 25 years of improvement in power output, functionality, versatility, and ease of use before it was widely used and had a big impact. For example, in the 1920s, companies developed multivoltage motors, push-button induction motors, and smoother-running motors using ball bearings. In the 1930s, companies developed motors for low-speed, high-torque applications and motors with variable-speed transmissions.

IT has followed a similar development trend. Compared to today, the IT of even the early 1990s seems antiquated. The first popular Microsoft Windows platform (3.0) was not shipped until 1990, and even this was nowhere near as easy to use as Windows 95. Pentium computer chips were not introduced until 1993. The average disk drive held about 2 gigabits (roughly 250 megabytes) of storage. Few machines were networked, and before the mid-1990s, there was no functional World Wide Web.

One way of understanding how far IT has come is to realize that computer storage has become so cheap that companies give it away. For example, Google recently launched a free Web mail service called Gmail that gives users more than 2.6 gigabytes of free storage. If Google were to use 1975 storage technology, it would cost the company over $42 million in today’s dollars to provide a single user with that much storage. In short, until the mid-1990s most Americans were working on Ford Model Ts, not Ford Explorers.
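The arithmetic behind a figure like that is straightforward. The minimal Python sketch below reconstructs it; the assumed 1975 disk price per megabyte and the inflation multiplier are illustrative assumptions back-solved to land near the $42 million figure, not the article’s actual inputs.

```python
# Back-of-the-envelope reconstruction of the storage-cost comparison above.
# Both constants are illustrative assumptions chosen to be roughly consistent
# with the article's ~$42 million figure; they are not authoritative prices.

COST_PER_MB_1975 = 4_300        # assumed 1975 disk cost, dollars per megabyte
INFLATION_1975_TO_TODAY = 3.7   # assumed price-level multiplier, 1975 -> mid-2000s

gmail_storage_mb = 2.6 * 1024   # ~2.6 gigabytes of free storage per user

cost_in_1975_dollars = gmail_storage_mb * COST_PER_MB_1975
cost_in_todays_dollars = cost_in_1975_dollars * INFLATION_1975_TO_TODAY

print(f"1975 dollars:    ${cost_in_1975_dollars / 1e6:.1f} million")    # ~$11.4 million
print(f"Today's dollars: ${cost_in_todays_dollars / 1e6:.1f} million")  # ~$42.4 million
```

The precise dollar amount matters less than the order of magnitude: storage that once cost millions of dollars per gigabyte is now given away as a marketing perk.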

Yet compared to the original Apple II, which had no hard drive and only a few dozen kilobytes of memory, the machines of the early 1990s looked pretty impressive. Most economists, who found desktop computers to be extremely useful in their own work, could not understand why this marvelous device was not leading to gains in productivity. They could not have anticipated that 10 years later these remarkable computers would not be good enough to donate to an elementary school.

In short, the skeptics were expecting too much too soon, and when the miracle did not happen, they questioned the entire IT enterprise. In arguing that policymakers should not look to IT to boost incomes, Bluestone and Harrison wrote in 2000 that the information age was in its fourth decade and had yet to show returns. The reality is that the IT revolution is only in its second decade, and all the prior activity was merely a warm-up.

The 1990s productivity puzzle

No sooner did the idea of the productivity paradox become widely accepted than events overtook it. Between the fourth quarter of 1996 and the fourth quarter of 2004, productivity growth averaged more than 3.3% per year, more than twice as fast as during the stagnant transition period.

What happened? The reason productivity rebounded and continues its solid performance is that by the mid-1990s, the new IT system was affordable enough, powerful enough, and networked enough to open up a new set of productivity possibilities that could be tapped by a wide array of organizations. IT was particularly crucial in helping to boost productivity in the non-goods sector. Between 1973 and 1996, service sector productivity grew less than 0.4% per year. In an economy in which more than 80% of jobs are in the non-goods sector, even fast productivity growth in the goods sector is no longer enough to pull the overall economy along.

Yet until the mid-1990s, it was quite difficult for companies to automate processes such as phone calls, the handling of paper forms, and face-to-face service. As IT developed, however, it provided the ability to vastly improve efficiency and productivity in services. Just as mechanization let companies automate manufacturing, digitization is enabling organizations to automate a whole host of processes, including paper, in-person, and telephone transactions. For example, check processing may have reached its productivity ceiling, but the new technology system of electronic bill payment does away with checks completely and lets banks once again ride a wave of significant productivity gains. Likewise in the auto industry, the new technology and organization systems allowed carmakers to control parts delivery in real time, design cars on computers, machine parts with computerized numerically controlled machines, and do a host of other things to boost efficiency. The result was that auto productivity growth surged to 3.7% per year in the last half of the 1990s. The opening up of this new technology has had a similar transformative effect on a host of industries.

Although many IT skeptics grudgingly acknowledge that IT helped boost productivity, it is fashionable now to argue that this trend is over. Business professors Charles Fine and Daniel Raff argue that, with regard to the auto industry, IT provides only “a one-shot improvement in forecasting, communication and coordination.” Morgan Stanley chief economist Stephen Roach agrees that IT yields a one-time productivity gain, after which stagnation will set in. And Nicholas Carr, in a Harvard Business Review article entitled “IT Doesn’t Matter,” concludes that “As for IT-spurred industry transformations, most of the ones that are going to happen have likely already happened or are in the process of happening.”

I believe that just as the skeptics were wrong in the 1990s, they are wrong now. Although it is true that adopting a technology sometimes produces a one-time productivity gain for the adopter, as technologies diffuse to other adopters, overall productivity keeps going up. Moreover, the technologies themselves keep improving. In these and myriad other ways, “one-time” gains become continuous gains, at least until the technology system is mature and fully diffused. Indeed, the Institute for Supply Management recently found that 47% of manufacturing companies and 39% of nonmanufacturing companies believe they have achieved less than half the efficiency gains available from existing technology.

In order to achieve the full promise of the digital revolution, at least four things will have to happen to the technology system. First, the technology will need to be easier to use and more reliable. Americans do not think twice about plugging in an appliance because they know it will work. But in spite of considerable efforts to make them easier to use, most digital technologies remain complicated and less than fully reliable. Technologies will need to be so easy to use that they fade into the background. Luckily, the IT industry is working on this challenge, and each new generation of technology is getting closer to the ideal of “plug and play.”

Second, a variety of devices will need to be linked together. Although it is entertaining to watch a corporate road warrior in an airport security line juggling a cell phone, laptop, BlackBerry, and personal digital assistant, it is a spectacle that needs to end. At home, it is just as bad, with stereos, televisions, phones, laptops, desktops, printers, peripherals, and MP3 players existing in unconnected parallel digital universes. Moreover, an array of new devices such as smart cards, e-book readers, and ubiquitous sensors will need to be integrated into daily life and existing information systems. The IT and consumer electronics industries are well aware of the problem and are pushing toward convergence and integration.

Third, improved technologies are needed. A recent National Institute of Standards and Technology report articulated a number of cross-cutting generic technology needs in areas such as monitoring and control of large networks, distributed databases, data management, systems management, and systems integration. Other technologies, such as better voice, handwriting, and optical recognition features, would allow humans to interact more easily with computers. Better intelligent agents that routinely filter and retrieve information based on user preferences would make the Internet experience better. Expert system software would help in making decisions in medicine, engineering, finance, and other fields. Again, these improvements are being made. For example, Internet2, a consortium that includes more than 180 universities working with industry and government, is working to develop advanced network applications and technologies, accelerating the creation of tomorrow’s dramatically more powerful Internet.

Finally, we need more ubiquitous adoption. When roughly 75% of households are online, including 50% with true high-speed broadband connections, and 50% are using key applications such as electronic bill payment, a critical inflection point will occur. At that point the cyber world will begin to dominate, whereas now both activities—cyber business and traditional business—exist in parallel worlds. It is not just online ubiquity that we need; IT will need to be applied to all things we want to do, so that every industry and economic function that can employ digital technologies does. Government, health care, transportation, and many retail functions such as the purchase of homes and cars are some of the industries that lag behind.

In one sense, however, the skeptics are right. If past transformations provide a roadmap, the productivity gains from today’s IT-driven economy should continue for at least another decade or so, but they will not last forever. Eventually, most organizations will have adopted the technology, and the digital economy will simply be the economy. Moreover, the pace of innovation in the IT sector may eventually hit a wall. Indeed, many experts suggest that by 2015 the breakneck rate of progress in computer chip technology that has become known as Moore’s law will come to an end.

In Isaac Asimov’s Foundation series, the Foundations’ secret mission is to reduce the length of a galactic dark age by accelerating the reemergence of a new empire, in that case one based on microminiature technologies. Although the United States will not face a 1,000-year galactic dark age, it might face a 10- to 20-year period of slow growth, precisely at the time when it will need that growth more than ever: when the baby boomers go from being producers to consumers. This suggests that the nation needs to think now about what kind of technology system will power growth 20 to 25 years from now and to consider what steps, if any, might accelerate its development. In the 1960s, no one predicted the slowdown that was to come just a decade later. If anyone had, perhaps efforts to accelerate the IT revolution could have been stepped up.

Which technologies will form the core of the next wave is not yet clear, but it seems likely that one will be based on nanoscale advances, whether in pharmaceuticals, materials, manufacturing, or energy. Another could address the key need to boost productivity in human-service functions, which is difficult to do but where technology can play some role. For example, as Asimov speculated, robots could play an important part in the next economy, perhaps by helping to care for the elderly at home.

Although Congress and the administration have expanded research funding for biological sciences and established the National Nanotech Initiative, more needs to be done. One first step is to reverse the decline in research funding that the administration projects for the next three to four years. Another step would be to ask an organization such as the National Academy of Sciences to examine what the likely technology drivers will be by the year 2030 and what steps government and industry could take in the next decade to accelerate their development.

Harvard University economist F. M. Scherer has noted: “There is a centuries’ old tradition of gazing with wonder at recent technological achievements, surveying the difficulties that seem to thwart further improvements, and concluding that the most important inventions have been made and that it will be much more difficult to achieve comparable rates of advance. Such views have always been wrong in the past, and there is no reason to believe that they will be any more valid in the foreseeable future.” Such pessimism is especially misplaced now, given that we are in the middle of a technology-driven surge in productivity and can expect perhaps as many as two decades of robust growth until the current techno-economic system is fully utilized. Schumpeter got it right when he stated, “There is no reason to expect slackening of the rate of output through exhaustion of technological possibilities.” The challenge now is to make sure policymakers take the steps needed not only to advance the digital economy but also to put in place the conditions for the emergence of the next economy and its accompanying technology system.

Recommended Reading

  • Robert D. Atkinson, The Past and Future of America’s Economy: Long Waves of Innovation that Power Cycles of Growth (Northampton, MA: Edward Elgar, 2005).
  • Paul A. David, Computer and Dynamo: The Modern Productivity Paradox in a Not-Too-Distant Mirror (Stanford, CA: Center for Economic Policy Research, 1989).
  • Michael J. Mandel, Rational Exuberance: Silencing the Enemies of Growth (New York: Harper Business, 2004).
  • Carlota Perez, Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages (Northampton, MA: Edward Elgar, 2003).
  • Joseph A. Schumpeter, Capitalism, Socialism and Democracy (New York: Harper Perennial, 1942, 1975).

Cite this Article

Atkinson, Robert D. “Is the Next Economy Taking Shape?” Issues in Science and Technology 22, no. 2 (Winter 2006).
