
Feasting on the Giant Peach

Originally published at Forbes ASAP

WILL THE INTERNET COLLAPSE? NO WAY!

What is all this commotion in Massachusetts? The very source of the Arpanet at Bolt, Beranek & Newman — the cradle of the Internet — Massachusetts is falling to the forces of Auntie Spiker and Aunt Sponge.

These are the mingy ladies in the Roald Dahl story who rejoiced in James’s Giant Peach as long as it didn’t take flight. Now Massachusetts — the state that once barred Apple shares as a likely West Coast levitation scam — looks askance at the Giant Peach of the Internet, aloft in Silicon Valley and around the globe, with James Clark, James Gosling, Netscape and a series of thin-air IPOs.

Howard Anderson of Boston’s Yankee Group, long an Internet tout, thinks those wired yahoos on Wall Street and Sand Hill Road are blind to the inevitable sine waves of advance: What goes up must come down, he sternly avers, trying to bring some simple physics to the scene, as if the Internet has to obey the law of gravity.

And now Bob Metcalfe — Metcalfe himself! — inventor of Ethernet, pioneer of Arpanet and the founding father of the networking era. Here he is, prophesying lugubriously into every megaphone he can grasp, from the New York Times Magazine and PBS to U.S. News & World Report and InfoWorld, that the Internet will collapse in 1996. Metcalfe now predicts a general retreat to Intranets, shielded from the public system and unavailable to it.

Metcalfe was striking a blow against the very solar plexus of my prophecies. I had founded my confidence in the Internet on the continuing power of the law of the telecosm, an edict adapted from Metcalfe’s very own law of networks. Metcalfe’s Law ordains that the value of a network rises by the square of the number of terminals attached to it.

In its most basic form, this law merely captures the exponential rise in the value of any network device, such as a telephone, with the rise in the number of other such devices reachable by it. Metcalfe, however, shrewdly added in the declining cost of Ethernet adapters and other network gear as the Net expanded. In the law of the telecosm, I summed up these and other learning-curve factors by incorporating into Metcalfe’s Law the law of the microcosm.

Based on the power-delay product in semiconductors, the law of the microcosm ordains that the cost-effectiveness of the terminals will rise by the square of the number of additional transistors integrated on a single chip. Amplified by the law of the microcosm, the law of the telecosm signifies the rise in the cost-effectiveness of a network in proportion to the resources deployed on it and the number of potential nodes and routers available to it.
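To make the compounding concrete, here is a minimal sketch in Python; the figures are invented for illustration, and the functions are merely my shorthand for the two laws as stated above, not anyone's published model.

# A rough illustration of the two laws described above (invented figures).
def metcalfe_value(terminals):
    # Metcalfe's Law: network value rises roughly as the square of the
    # number of terminals attached to it.
    return terminals ** 2

def microcosm_effectiveness(transistors):
    # Law of the microcosm: terminal cost-effectiveness rises roughly as
    # the square of the transistors integrated on a single chip.
    return transistors ** 2

# A network growing tenfold while its chips gain tenfold more transistors:
for terminals, transistors in [(1_000, 1_000_000), (10_000, 10_000_000)]:
    print(terminals, metcalfe_value(terminals), microcosm_effectiveness(transistors))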

As the network expands, each new computer both uses it as a resource and contributes resources to it. This is the secret of the stability of the Internet. The very process of growth that releases avalanches of new traffic onto the Net precipitates a cascade of new capacity at Internet service providers (ISPs). They supply new servers and routers, open new routes and pathways for data across the Web, and buy new terminals and edge switches to upgrade their connections to the Network Access Points (NAPs), the Internet supernodes that in turn exert pressure on the backbone vendors to expand their own bandwidth.

Because all these routes and resources are interlinked, they are available to absorb excess traffic caused by outages, crashes or congestion elsewhere on the Net. Because all these resources are growing in cost-effectiveness at the exponential pace of the law of the microcosm, and total available bandwidth on the Net is rising at the still-faster pace of the law of the telecosm, the Internet has been able to double in size annually since 1970 and increase its traffic two times faster still, without suffering any crippling crashes beyond the Morris worm of 1988.

Impelling the growth of the largest interconnected network, the law of the telecosm means that the most open computer networks will prevail. Proprietary networks lose to a worldwide web.

LOADED FOR BEAR

I wanted to answer Metcalfe's challenge. As the apparent winner of a previous argument over ATM and Ethernet [see Forbes ASAP, "Metcalfe's Law and Legacy," Sept. 13, 1993], I thought I might have an edge (after all, Fast Ethernet outsells ATM at least 20 to 1). But when he met me on a rainy day late in May at his Boston townhouse on Beacon Street, where he looks benevolently across the Charles at the MIT campus, Metcalfe was loaded for Internet bear. At the peak of his influence, this smiling cover boy of June's IEEE Spectrum, winner of the 1996 IEEE Medal of Honor, was ready to explain.

“I am way out on a limb here,” he says over sushi and wasabi at a restaurant near his house. “I actually told a World Wide Web conference I would eat my column if the Internet didn’t collapse….

“What do I mean by a collapse? Well, the FCC requires telcos to report all outages that affect more than 50,000 lines for more than an hour. I mean something much bigger than that.” I suggested that with enough raw tuna and wasabi, his column would go down well. But Metcalfe was dead serious.

The Internet will collapse, Metcalfe insists, and it will be good for us, and for the Net. "The collapse has a purpose. The Internet is currently in the clutches of superstition, promoted by a bio-anarchic intelligentsia, which holds that the Net is wonderfully chaotic and brilliantly biological, and homeopathically self-healing by processes of natural selection and osmosis. The purpose of the collapse will be to discredit this ideology.

“What the Internet is — surprise, surprise — is a network of computers. It needs to be managed, engineered and financed as a network of computers rather than as an unfathomable biological organism.”

Metcalfe’s intellectual targets are not hard to find. He dubs them the “Wired intelligentsia, epitomized by Nicholas Negroponte,” and, one supposes, author/editor Kevin Kelly and hippie mystic seer John Perry Barlow, celebrating a “neo-biological civilization out of control.”

For example, at a recent meeting of NANOG (North American Network Operators Group), whenever Metcalfe brought up the problems of Internet management — the need for a settlements-and-payments process so that people who invest in the Net backbone can get their money back — "they kept telling me to get lost.

“They’d tell me, ‘You just don’t get it, do you?’ This is the worst possible charge of the politically correct: ‘You just don’t get it.’ The implication is that I am a clueless newbie.

“But I am not a newbie and I do get it: an accelerating pattern of wild behavior on the Internet [caused by] a breakdown of any relationship between supply and demand for Internet services, any way of metering usage, any method of paying back people who invest in the backbone. One thing is sure: They will not be paid by biofeedback loops.

“The result is bad — the deterioration of the public Internet and the rise of private Intranets. These are not really part of the Internet at all. Many of them use ‘hot potato routing,’ throwing any messages from nonsubscribers back into the pot. It is a tragedy of the commons, a shrinkage of the public network on which we all ultimately depend.”

Since I had frequently cited Metcalfe’s Law as an answer to “The Tragedy of the Commons” argument, this charge hit home.

Metcalfe warns that “back when Internet backbones carried 15 terabytes of traffic per month, the world’s Ethernet capacity was 15 exabytes per month, a million times higher.” (Exabytes, if you wonder, add up. While a terabyte is a 1 with 12 zeros after it, an exabyte commands 18 zeros.) But those were last year’s numbers. Carrier of some 40% of backbone traffic, MCI now reports 250 terabytes per month. Just a small shift in local traffic onto the public Net can create catastrophic cascades of congestion.

With private networks increasingly becoming TCP/IP Intranets that can use the Internet but shield their resources from it by “firewalls,” the likelihood of a crippling cascade from private to public Nets grows more acute every day. According to Metcalfe, one way or another, such a disaster is now at hand.

His primary evidence is data from the Routing Arbiter at Merit (the Michigan group that commands routing servers at every NAP and collects Internet statistics by “pinging” routers across the Net every few minutes). Merit’s pings yield an echo of chaos: “a dramatic, accelerating rise of packet losses, delays and routing instability. This data is available on the Net. But the Merit people are afraid of making waves, offending the big carriers, so they don’t really tell anyone how bad it is.

“I ask my readers [at InfoWorld], and they tell me they think the Net has already collapsed.”

What does NANOG, the North American guild of network operators, need? I asked. "One thing NANOG definitely needs," sums up Metcalfe, "is more people in suits." The trouble with NANOG is that it is full of biomystics with big beards and Birkenstocks who look like Bob Metcalfe did when he finally got his Ph.D. from Harvard after a dramatic setback the year before, when his thesis board flunked him at the last minute.

(Perhaps it was because he "hated Harvard" and spent all his time at MIT and Bolt, Beranek & Newman, laying the foundations for the Internet with Larry Roberts rather than sitting humbly at the feet of Crimson computer scientists refining their professorial perks and queues. Republished in June under the title Packet Communication, with a new introduction from the author, Metcalfe's thesis is now recognized as a classic text on networking that anticipated most of the evolution from the Arpanet to the Internet. In the front of the new edition is a picture of Metcalfe as a newbie at the Harvard commencement, with a big beard and a weird shirt and jacket, looking kind of like a bio-anarchic, Harvard-hating Hawaiian homeopath himself, ready to help start the Internet movement back in 1973.)

What does this all mean? The conversion of Bill Gates into an Internet obsessive. The jeremiads of Metcalfe, one of my favorite people in the industry, both as a technical seer and conservative economic voice in a webby-minded wilderness. What do I make of the descent into vapor of several of my favored technologies and the admitted biodegradation of the Net?

NEW SCARCITY, NEW ABUNDANCE

Rather than debating this apparent jumble of conjectures — and for a second time jousting with the Olympian Metcalfe — I would instead transcend the details in a larger theme: Marking every industrial and economic transformation are new forms of scarcity and new forms of abundance.

Economics has been termed the dismal science of scarcity. Indeed, scarcity is at the heart of most economic models; many of my critics still live in the grip of the dismal scarcities and zero sums of pre-Netic economics. But what is the controlling scarcity of an information age? In the Industrial Age, natural resources and real estate were scarce. But Julian Simon of the University of Maryland has shown that, as manifested by falling real prices, all natural resources, such as foodstuffs, minerals, clean air and available water and energy, have been increasing in abundance over the last century.

If conventional resources are becoming more abundant, what is the ruling scarcity of the information era? Is it information? Hardly. The information glut has become a ruling cliché. As all resources — from energy to information — become more abundant, the pressure of economic scarcity falls ever more heavily on one key residual, and that single shortage looms ever more stringent and controlling. The governing scarcity of the information economy is time: the shards of a second, the hours in a day, the years in a life, the latency of memory, the delay in aluminum wires, the time to market, the time to metastasis, the time to retirement.

The ruling scarcities in the economy of time, however, can be distilled to two commanding limits: the speed of light and the span of life. They form the boundaries of all enterprise.

The speed of light is the most basic constraint in information technology. As a key limit, the speed of light shapes the future architectures and topologies of computers and communications. For example, the light-speed limit dictates that the fastest computers will tend to be the smallest computers. Electrons move nine inches a nanosecond (a billionth of a second). As computers move toward gigahertz clock rates — a billion cycles a second — the longest data path must be decisively smaller than nine inches. Pulses of electromagnetic energy — photons — take some 20 milliseconds to cross the country and one-quarter second to reach a satellite in geostationary orbit (as you notice in a satellite phone call). At a gigabit per second, this means that as many as 250 megabits of data — many thousands of IP packets, for example — can be latent (or lost) in transit at any time, thus playing havoc with most prevalent network protocols, such as TCP.
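The arithmetic behind that figure is simply bandwidth multiplied by delay; a small sketch, using only the numbers cited in the paragraph above:

# Bits latent "in flight" on a link = data rate x one-way delay.
link_rate_bps = 1_000_000_000                      # one gigabit per second

for path, delay_seconds in [("coast to coast", 0.020), ("geostationary satellite", 0.25)]:
    bits_in_flight = link_rate_bps * delay_seconds
    print(path, bits_in_flight / 1_000_000, "megabits in transit")

# The satellite case gives 250 megabits, the figure cited above.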

Thus light speed is a centrifuge. It abhors concentration in one place, ordains that these small supercomputers will be distributed across the globe and will always be near to a network node. Although the networks will be global in reach, they will depend on the principle of locality: the tendency of memory or network accesses to focus on clusters of contiguous addresses at any one time. Light speed imposes limits on the pace of any one processor or conduit, and pushes both computer and communications technologies into increasingly parallel and redundant architectures.

As a governing scarcity in the new economy, no less important than the speed of light is the span of life. Just as light speed represents the essential limits of information technology, lifespan defines the essential shortage of human time. Although medical and other health-related advances have increased the span of life in the United States some 5 years in the last 25 — while the media focused on AIDS and cancer, and zero-sum pundits declared that our descendants, the scions of our science, will live less well than we do — the ultimate lifespan remains limited. Indeed, the modal economic activity of the information economy is exploitation of the technologies of the speed of light to increase the effective span of life by increasing efficiency in the use of time.

GDP and other economic numbers from the National Income and Product Accounts (NIPA) totally miss the minting of new time through innovation: the opening of parallel universes of choice in ideas, courses, arts, letters, entertainments, therapies and communities. Finding stagnation and poverty and agonizing over new wealth, Morgan Stanley gapologist Stephen Roach plumbs the shallows of NIPA for all the world like the CIA economists who found the Soviets exceeding the United States in growth for 17 years. Video teleconferencing, telecommuting, teleputing, digital wireless telephony, Internet mail, cybercommerce, telemedicine and teleducation all are in the process of compressing the span of life toward the increasingly thronged channels of the speed of light.

If time is scarce, what is the growing and defining source of abundance among all the material abundances in the information economy? Signifying the definitive abundance in any economic era is the plummeting price of a key factor of production. In order to grow fast, every new-era company must exploit the drop in the cost of the newly abundant resource. Companies that use the resource that is plummeting in cost will gain market share against all other companies and will come increasingly to dominate the economy.

FROM WATTS TO MIPS AND BITS

Over the last hundred years, there have been three such economic eras. The industrial era fed on the plummeting price of physical force or energy, best measured in watts. Some 30 years ago, with the regulatory sclerosis of the nuclear and natural gas industries, the price of watts began to plateau, dropping less than 0.7% per year for the last 35 years. The last 30 years brought the reign of the microcosm, which fed on the plummeting price of transistors, manifested in the exponential drop in the cost of computer MIPS (millions of instructions per second) and memory bits. For the last 30 years, the price of a bit of semiconductor memory has dropped 68% per year. With this year’s decline in DRAM prices, the trend line is being resumed after a four-year hiatus. The likely result is a sharp upside surprise in PC sales — and thus in chips — through 1997.
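How steep a 68 percent annual decline really is can be seen with a line of arithmetic (my own compounding of the figure quoted above):

# A 68% annual drop means each year's price is 32% of the previous year's.
retained_per_year = 1 - 0.68
after_a_decade = retained_per_year ** 10
print(after_a_decade)   # roughly 0.00001: a drop of about 90,000-fold per decade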

As fast as the price of MIPS and bits continues to drop, however, this Moore’s Law trend line will no longer dominate the economy. Like a great river headed for a falls, a new factor of production is racing toward a historic cliff of costs. Over the next 30 years, the spearhead of wealth creation will be the telecosm, marked by the plummeting cost of bandwidth — communications power — measured in gigabits per second.

This result means that the growth of bandwidth will outpace the growth of processor power. After an entire career keyed on Moore's Law, Bill Gates remains skeptical, foreseeing an era of middleband nets, with shared cables bogging down in gigabytes from tomcruise.com/vrml and fiber gushing into twisted copper cul-de-sacs. The usually savvy Network Computing columnist Bill Frezza believes that bandwidth is inherently a slower-moving technology than processing, because bandwidth has to be delivered at once to an entire area while processors can be sold one at a time. Robert Lucky, Bell System laureate, and Paul Green of IBM debated these points two decades ago. Stressing the dependence of bandwidth on the labor-intensive digging of trenches and stretching of wires across continents and under seabeds, Lucky doubted that communications could ever be truly cheap. Paul Green, a computer network man, thought that digital computer communications could join the Moore's Law learning curve.

The evidence mounts that Green was more than right. Impressed by Green’s own achievements in fiber optics, Lucky now acknowledges that communications power will grow at least tenfold more than computing power over the next decade. Using the rough metric of Moore’s Law, computer power doubles every 18 months. Bandwidth is now doubling at least every year. Over a 10-year period, this means a hundredfold rise in computer power and at least a thousandfold rise in bandwidth, measured at any point in the network from the home to the backbone.
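The hundredfold and thousandfold figures follow directly from the two doubling times; a quick check of the compounding:

# Compound the doubling rates quoted above over ten years.
years = 10
computer_power_growth = 2 ** (years / 1.5)    # doubling every 18 months: about 100x
bandwidth_growth = 2 ** (years / 1.0)         # doubling every 12 months: 1,024x
print(round(computer_power_growth), round(bandwidth_growth))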

The reason communications power has lagged behind computer power is not the difference in technology but in regulation. Moore’s Law in bandwidth has given way to what venture capitalist Roger McNamee calls Moron’s Law, the labyrinthine tangle of tariffs and rulings and FCC dockets that frustrate the implementation of communications advances. With an acceleration of technology and a tsunami of new Internet demand for bandwidth, this bottleneck is breaking at last.

Backbone capacity is leaping upward today. As TCP/IP coinventor Vinton Cerf of MCI told Forbes ASAP in December, his company correctly predicted its backbone bandwidth would increase from 45 megabits per second to 155 megabits per second this year, or by a factor of nearly four. But on March 11, MCI Vice-President of Enterprise Marketing Stephen VonRump told Gordon Cook of the Cook Report on the Internet that MCI will jack up the speeds to 622 megabits per second before the end of the year, or nearly fifteenfold in one year. Meanwhile, cable modems, telco Digital Subscriber Line technologies (from HDSL to ADSL and SDSL) and digital wireless advances promise even larger factors of expansion in the bandwidth to homes, though unlike the backbone expansion, the impact will be incremental.

Shaping the future, however, will be breakthroughs in laboratories. As "Into the Fibersphere" maintained, the ultimate source of bandwidth expansion is the immense capacity of optical fiber. The global installed base of fiber now comprises 40 million miles (some 25 million miles in North America), and each optical fiber, as Paul Green of IBM estimated to Forbes ASAP four years ago, commands an intrinsic available bandwidth of 25,000 gigahertz. At the time, the world record transmission over a significant distance was still approximately 20 gigabits per second, and the highest deployed capacity was just 2.5 gigabits per second. Moreover, the light pulses had to be converted to electronic pulses every 50 to 70 kilometers to amplify and regenerate the signal. This electronic bottleneck restricted the speed of long-distance transmission to the maximum speed of the optoelectronics, or some 10 gigahertz. So Green's projections provoked incredulity in many quarters.

Early this year, however, Green’s visions were becoming more plausible. On Feb. 26, 1996, at the conference on Optical Fiber Communication (OFC ’96) in San Jose, Calif., papers from Lucent Technologies’ Bell Labs, Fujitsu and NTT Labs all reported successful transmissions at a landmark rate of a terabit per second, one twenty-fifth of Green’s limit. For these terabit rates, Fujitsu and Bell Labs used between 50 and 55 separate bitstreams or wavelengths, each some 20 gigabits per second. NTT, which employed 10 separate bitstreams, also reported diffraction grating receivers that could resolve 64 different wavelengths at once.

At the same time, erbium-doped fiber amplifiers were smashing the electronic bottleneck. Impelled by a pump laser (light amplification by stimulated emission of radiation), these all-optical amplifiers are now being deployed in networks around the world. They open a new era. Simple broadband amplifiers made of a coiled fiber thread, they replace optoelectronic repeaters comprising nine custom bipolar microchips that must be duplicated for every frequency or modulation scheme used in the fiber. Thus the new amplifiers make possible the creation of vast broadband fiber networks bearing hundreds or even thousands of separate carriers, and permit the sending of thousands of separate messages around the globe or under the seas entirely on wings of light. The bandwidth of these all-optical amplifiers is now up to 4.5 terahertz, or close to 20% of Green’s estimated limit.

The ultimate capacity of fiber is not a merely academic issue. At a rate of 4,000 miles a day, fiber deployment is beginning to make a dent in neighborhoods. David Charlton of Corning estimates that over the past five years, the top 10% of U.S. households, comprising most of the early technology adopters, have drastically improved their access to fiber. Five years ago these homes were, on average, 1,000 households away from a fiber node; this year, they are just 100 households away. Milo Medin of @Home estimates that 15% of U.S. cable TV subscribers had systems directly connected to fiber nodes at the beginning of 1996. By the end of the year, that number will be close to one-third.

Hostility to cable TV remains high and politically useful; many city governments subsist on cable TV franchise fees and regulate these companies into a stupor. Eerily mimicking computer experts of the 1960s who attested that telephone wires were too beset with noise and interference to carry digital data, telco experts today pronounce cable plant entirely unsuited for Internet bits. But the fact remains that cable TV coax is the only truly broadband link already in most U.S. homes.

Flaws in the transmission of analog video, in which every glitch of interference is visible on the screen, fall away in digital systems that can deliver flawless images at a signal-to-noise ratio more than 1,000 times lower. Directv, for example, does not outperform cable TV because it is harder to send a signal a few thousand feet down a coaxial cable than to zap it to a satellite 23,600 miles away, beam it to an 18-inch dish on a roof, then send it down a coaxial cable to your TV. The superiority of Directv derives from its digital nature. Essentially, the picture is created in the set rather than at the station. Using a variety of new cable modems, possibly including Cisco and Terayon’s CDMA for upstream signals (CDMA finesses interfering frequencies by spreading codes through them all), cable TV plant will prove to be entirely adequate for digital transmissions, both upstream and downstream.

But what about switching, ask the critics of cable? Claude Shannon of MIT and Bell Labs, the inventor of information theory, had the answer in 1948: Bandwidth is a replacement for switching. Rather than performing the processing at some central point, you use routers in the Net and filters in the terminals. If you have adequate bandwidth, you can emulate any switching topology you want. Cable commands a potential of some 8 gigabits per second of two-way bandwidth. Linked to the potential terabits of fiber bandwidth, cable plant has as much chance of accommodating the explosive growth of the Internet as the telcos do.

The two essential models for the distribution of information are select-and-switch and broadcast-and-select. Select and switch is based on intelligence in central servers that search databases for desired material, and on intelligence in switches that channel the material to the desired address. Select and switch uses computer power at the servers and switches to compensate for the lack of bandwidth on the network and the lack of storage and processing power in the terminals. By contrast, broadcast and select is based on bandwidth in the network and on intelligence and storage in the terminals.
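A toy contrast between the two models, sketched in Python; the data, topics and function names are mine, invented purely for illustration.

# Select and switch: intelligence at the center. A terminal asks a server
# for one item, and switches carry the reply to that one address.
def select_and_switch(server_database, request):
    return server_database.get(request)

# Broadcast and select: intelligence at the edge. Everything rides the
# abundant bandwidth, and each terminal filters out what its user wants.
def broadcast_and_select(broadcast_stream, wanted_topics):
    return [item for item in broadcast_stream if item["topic"] in wanted_topics]

server = {"movie-42": "two hours of video"}            # invented data
stream = [{"topic": "news", "body": "..."},
          {"topic": "sports", "body": "..."}]
print(select_and_switch(server, "movie-42"))
print(broadcast_and_select(stream, {"news"}))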

Envisaging video servers, information warehouses and other centralized schemes, select and switch is the model pursued by much of the industry today. In some schemes, agents from networked terminals search through large banks of data looking for specified items, which are switched through the network to the terminals. Computers reach out and grab data they need from servers with large storage facilities.

Epitomized by the World Wide Web today, the marvels of this select-and-switch model are evident to us all. It is far superior for personal two-way communications and one-to-one file transfers. But it is not superior for everything, as companies trying to send movies point-to-point over ATM switches discovered in Orlando and elsewhere. The success of select and switch, using the storage of servers, has been too total for the health of the Internet.

In its extreme form, select and switch contravenes the laws of the microcosm and telecosm. These laws will increase storage at the terminals far faster than at the centralized server and will expand the network’s bandwidth faster than its switching capability. For many uses, the broadcast-and-select method is appropriate — indeed, inevitable — and its spread will relieve many of the pressures on Internet capacity.

Broadcast and select is the system used in wavelength division multiplexing systems in the new multi-bitstream terabit-per-second fiber tests. Broadcast and select conforms closely with the strengths of the cable system. In late April, for example, Wave Systems (where I serve as a director) launched its Cablepc project in the heart of Silicon Valley with the Palo Alto Cable Co-op. Supporting the test are 26 other companies with equipment and services, including cable modems from Com21 and En Technology; a merchandising engine from Zero.one; the Destination PC/TV from Gateway 2000; game machines from MAK Technologies; and content from Simon & Schuster Interactive, Network News, William Morris Agency, Microsoft's interactive software arm and an array of CD-ROM publishers.

This system uses cable bandwidth to broadcast huge amounts of digital information and entertainment. Originating anywhere from the World Wide Web and Directv satellites to CD-ROMs on a PC, the rush of bits will be filtered and downloaded by the PC at the programmable specifications of the viewer. Customers pay for material by the piece and only when they choose to decrypt it through an onboard “credit chip” or WaveMeter that may be periodically tapped over telephone lines from a transaction center. Rather than millions of people downloading new versions of Netscape one at a time over 28.8 modems, for example, you can program your machine to download all new Netscape browser releases to your hard drive when they are broadcast, probably late at night. The next day you can decide whether to buy, save or delete the program. Explains new Wave Systems president Steven Sprague, “This system creates a new channel where the customer pays only for what is used, when it is used, and the owner of intellectual property benefits from each use.”

Together with systems of mirroring, replicating and local buffering being pioneered by @Home, the Cablepc project is one of many ways to use cable bandwidth to relieve pressure on the Net and to exploit the ever-rising intelligence and storage in the terminals. Other broadcast-and-select systems include PointCast for the PC, which uses the screensaver as a way to display programmably filtered news and other information. Another large contribution of broadcast-and-select bandwidth for the WWW will come from digital satellite systems, such as Directv, that devote channels to Internet services.

Meanwhile, coming to the rescue of the Net backbone is an array of technologies incorporating asynchronous transfer mode (ATM), an elaborate set of standards for broadband switching. Supported by some 800 companies in the ATM Forum, ATM resembles RISC (reduced instruction set computer), which accelerates speeds by making all instructions the same length and processing them in silicon. Similarly, ATM breaks all data into 53-byte cells, small enough to be processed in a semiconductor chip at speeds fast enough to accept voice, video or data at once. Conceived as an end-to-end system from your phone or PC through the "cloud" to your Internet service provider and beyond, ATM seems a panacea for the protocol zoo emerging in data communications.
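The fixed cell size is what lets the work sink into silicon. A minimal sketch of the segmentation, assuming a placeholder 5-byte header rather than the real ATM header layout:

# ATM cells are 53 bytes: a 5-byte header plus a 48-byte payload.
CELL_PAYLOAD = 48
PLACEHOLDER_HEADER = b"\x00" * 5    # not a real ATM header, just 5 bytes

def segment_into_cells(data):
    cells = []
    for i in range(0, len(data), CELL_PAYLOAD):
        chunk = data[i:i + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")  # pad last cell
        cells.append(PLACEHOLDER_HEADER + chunk)                       # 53 bytes each
    return cells

cells = segment_into_cells(b"x" * 1500)                  # one Ethernet-sized frame
print(len(cells), "cells of", len(cells[0]), "bytes")    # 32 cells of 53 bytes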

ATM to the desktop faces dire challenges, however. Paul Green noticed that the most popular booth at the early May ATM Year '96 conference in San Jose was that of a company called Ipsilon. Now partnering with Hitachi, Ipsilon makes an IP switch that dispenses with all ATM software and uses ATM cells only for fast hardware switching. Similarly, NetStar [see Forbes ASAP, "Angst and Awe on the Internet," Dec. 4, 1995], now being purchased by Ascend Communications for $300 million in stock, offers an IP crossbar switch in gallium arsenide with a backplane throughput of 16 gigabits per second. Meanwhile, vendors of Fast Ethernet and Gigabit Ethernet attracted increasing attention. Why transform your network when you can get most of the advantages of ATM through new forms of Ethernet and TCP/IP? But in one form or another, ATM switches are still the fastest switches and use their advantage in silicon integration to dominate the top-of-the-line slots in the backbone of the Internet.

REVENGE OF BEARDS AND BIRKENSTOCKS

The readers of Forbes ASAP will recall Gordon Jacobson and Avi Freedman, East Coast ISPs who have graced these pages contemplating a national network. Tonight, in New York, they are debating the future of the Net with each other and with two executives from a San Diego company called AtmNet who have similar ambitions. I am there to get a view from the pits of the Internet on Metcalfe’s lament. Jacobson has a problem, though. He wants to take us to Le Colonial on East 57th Street, which he describes as the hottest bar and best French-Vietnamese restaurant in the city. But the bearded Avi Freedman has shown up in a green T-shirt and Birkenstock sandals, which won’t cut it at Le Colonial.

A second-floor hideaway, Le Colonial looks like Rick’s place in Casablanca, so they say, and it sounds like a bar on the Champs-Elysees. More important to guru Gordon, it allows him to flaunt a tycoon’s cigar, unlike P.J. Clarke’s, his other favored haunt, which has succumbed to nicotine correctness since Dan Jenkins’s novel on the Giants, Semi-Tough, celebrated its smoke and grit. Unlike P.J. Clarke’s with its elderly Irish trolls, Gordon tells us, Le Colonial offers “the most beautiful bartender in all New York. You got to see her.”

Avi, though, has more important things on his mind. Polynomials.

They’re a dilemma, those polynomials. But Gordon decides to act anyway. We will start out with dinner at Clarke’s and then move on to Le Colonial for after-dinner drinks. At the bar, Le Colonial will tolerate the T-shirt, and perhaps, with adequate lubrication, we will be able to relieve the pressure of the polynomials.

The Internet is in the process of a horizontal explosion, with new network exchange points popping up everywhere — two in L.A., one each in Tucson, Phoenix, Atlanta, Cincinnati — you name it, a hundred or more network exchange facilities coming online. AtmNet is beginning one in San Diego and has plans to participate in those in L.A.

Meanwhile, the P.J. Clarke’s waiter, delivering salad with home fries well done and a Diet Pepsi, is struggling with the demands of serving different meals to five customers (that’s 25 different possibilities). With mental buffers overflowing and packet losses mounting, he resolves on a polling algorithm, offering the plate to each of us around the table before settling on Avi. Gordon is looking worried; Avi is questioning his confidence that ATM switching can resolve most of the complexity problems on the Internet.

SEX AND POLYNOMIALS AT LE COLONIAL

I ask whether the problem arises from scanty RAM buffers in the Cisco routers. Avi says no. An entire global routing table still takes just 14 megabytes and virtually everyone on the Net can now handle that. Soon they will be able to handle a gigabyte of routes, no sweat, enough to deal with any foreseeable growth of the Net. Yes, I observe, it’s exponential; I talk about it all the time. No, Avi corrects me, complexity growth is not exponential. It is polynomial.

This problem will have to wait, however, says Gordon, hailing the waiter. It’s time to leave for Le Colonial. Gordon wants Jim Browning and John Mevi of AtmNet to explain how their ATM systems can transcend all these complexities.

AtmNet is visiting New York to consult with Gordon and Avi about AtmNet’s plans to create a new national 155-megabit backbone across the country. AtmNet already has a 155-megabit-per-second backbone on the West Coast connecting San Diego, L.A. and San Francisco. But they are dependent on the caprices of long-distance carriers to cross the country. Gordon pays the waiter and we’re off to Le Colonial.

After dinner, Avi’s T-shirt and sandals pass. But upon arrival, Gordon is crestfallen: The exponential bartender is off for the night. When Gordon recovers, we all troop to a table in the corner. Thronging the room are models in miniskirts — tall, lithe and pneumatic. Across the table, in front of a large framed photograph with a wraithlike image of Ho Chi Minh in a Huey Newton chair, a sleek young couple in black hungrily writhes through hot kisses. A sultry Asian waitress in a red kimono blouse emerges from behind palm leaves to take orders of port, Courvoisier and Diet Pepsi. Avi is worried that we still don’t get his point about the polynomials.

He wants to correct me: Strictly speaking it is not exponential (that would put the variable in the exponent), it is polynomial (the exponent stays fixed while the variable n rises). In this case, the complexity of the network rises by n nodes times n-1, which is not even quite the square of n. I got it. The growth of Internet complexity is polynomial. But the growth curve still rises toward the sky, okay?

Avi ignores my comment and cruises on. Cisco is selling 60,000 routers a month. It’s the low end of the Net that is exploding. Hierarchical segmentation through routers is the answer, reducing the n squared factor to the logarithm of n. “Log n is wonderful,” Avi says. “It shears off complexity.” The curves are relatively flat. But what about Metcalfe’s prediction of a whopping Internet crash in 1996? Avi will get to that. And what is the role of AtmNet’s ATM switches?
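Avi's distinction in numbers; a quick sketch (the comparison is my own, following his description of full-mesh complexity against hierarchical, log n routing):

import math

# A flat mesh of n nodes has on the order of n * (n - 1) pairwise routes;
# hierarchical segmentation pushes the burden per router toward log n.
for n in [100, 10_000, 1_000_000]:
    flat_mesh_routes = n * (n - 1)
    hierarchical_levels = math.log2(n)
    print(n, flat_mesh_routes, round(hierarchical_levels, 1))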

Indeed. Apparently joining Avi in ignoring these tantalizing questions, the girl across the table raises her legs and hooks them sinuously around the body of the sleek young man. The waitress leans forward to deliver the drinks, suffusing the table with exotic perfumes. The two AtmNet promoters insist that the router problem can be overcome through the interposition of ATM switches.

Avi dismisses the ATM argument. The complexity curve is still polynomial, he says. Whether routed or switched, the messages have to follow the same physical routes. The complexity is the same. Moreover, Avi’s cell phone is on the blink and he has been out of reach for three hours. Gordon offers a show-off Audiovox the size of a pack of cards, only lighter. Avi manages to put through a call.

The girl across the table shudders with pleasure as the man reaches out and cups her breasts in his hands. "The FIX is down," Avi sighs. "What does that mean?" I ask. That means, so I learn, a 45-megabit line is out of service and the Federal Internet Exchange, a Washington NAP, cannot trade routes or data with MAE East, Metropolitan Fiber Systems' Fiber Distributed Data Interface exchange point in Vienna, Virginia. This glitch ramifies, creating certain problems for some of Avi's new customers in Washington. The young man whispers something in the ear of the girl. She balks. "No, I'm getting embarrassed," she says. "Let's leave." "I'll call back in 10 minutes," says Avi. The pair unwrap their entangled limbs and stagger up from the table. Avi and the rest of us get up to go.

Thus ends the visit to the palmy domain of Le Colonial. Before I can pry in a question about Metcalfe, Avi is on the road back home to his wife and an Internet crisis at 11 p.m., enjoying life as a Diet Pepsi bon vivant polynomial ISP. Anyway, it was time for fresh air. John Mevi of AtmNet needs a break. “Avi talks so fast it makes my ears ring,” he explains. “You’ve got to understand. I’m from a telco environment.”

So it was that on a steamy evening in New York, on June 17, I returned to consult Avi again on Metcalfe’s predictions of a network crash. We met with Gordon Jacobson at Martini’s, an Italian restaurant near the Sheraton Hotel on the west side of Manhattan. While Avi and Gordon consume a lox pizza and several orders of pasta, I question these men who live on the Net from minute to minute, day to day, who live in a world of routing tables, TCP/IP address resolutions, and BGP (Border Gateway Protocol) and Gate Daemons, about what they make of the doom scenario.

Avi believes that Metcalfe has ascended to an elevation in the industry that takes him out of the loop. He really doesn’t get it. The Merit data is mostly irrelevant. Pings from the Routing Arbiter are weighted as lowest-priority packets. It is predictable and unimportant that many are dropped and re-sent. “That’s the way the Internet works. Like Ethernet, it is tolerant of failure. Undelivered packets are re-sent; they show up as a few milliseconds of delay.”

Metcalfe makes much of Merit’s index of router instability, measured by the number of routes announced and withdrawn. In the extreme, instability brings “route flaps,” in which waves of announcements and withdrawals spread across the Net in positive feedback loops that congest the system. Avi dismisses this effect. “There have been no significant route flaps in the last six months or more.”

A large portion of the instability problems is attributable to a bug in Cisco router software that is in the process of being fixed. He confirms the findings of Ken Ehrhart of Gilder Technology Group that show little correlation between the router instability number and the performance of the Net as measured by throughput at NAP switches. While all this "wild statistical behavior" went on, the Net continued to perform stably by using other routes, circumventing the congested paths. "That's how the Internet works."

Avi sums it up: “Metcalfe has become an elder statesman and now he is doing more harm than good, spreading fear and doubt while the rest of us solve the problems.” As Ehrhart puts it, “These Merit numbers bear bad names like ‘instability,’ ‘packet loss’ and ‘delay.’ Metcalfe says these bad things are growing wildly. But in back of these numbers what is really growing wildly is the Internet and that is a good thing.” Richard Shaffer’s ComputerLetter, for example, reports that MCI’s backbone traffic has risen fivefold in the last year. MCI reports that its traffic has grown a total of 5,000% since it opened the backbone in 1994.

Like Howard Anderson, Bill Gates, Andy Grove and other bandwidth skeptics, Metcalfe seems to find the explosive growth of his intellectual progeny — Ethernets and Internets — too good to be true. All the sages and titans seem to seek obsessively the worms in the Giant Peach as it hurtles through the air. The message from Avi, Gordon and the AtmNet crew is “Let them eat worms. We’ll feast on the peach.”

Metcalfe's economic arguments are largely true. As Michael Rothschild's Bionomics shows, growth in natural and economic systems depends on running a surplus in every cell. But the cells of the Internet are thriving today. From the creators of the backbone who lease their facilities, to the ISPs who are madly multiplying their points of presence, the leading companies are attracting so much investment and support that laggard behemoths such as AT&T, TCI and the RBOCs are rushing in.

The law of the telecosm depends on the principle that new computers and routers on the Net not only use its resources but also contribute new resources to it. If the recent upsurge in Intranets is parasitical to some degree (because these newcomers use the resources of the Net without contributing resources of their own), the ultimate parasite on the scene may be AT&T, which commands perhaps the world’s largest Intranet.

AT&T has attracted some 6 million orders from newcomers for Net service, with plans to have 20 million by the end of the year, while lagging far behind MCI and Sprint in contributing to the Internet system. Until recently, AT&T’s vast fiber backbones carried just 2% of Internet traffic. AT&T preens as the largest and lowest-cost ISP, but its traffic mostly travels the backbones of MCI, Sprint and other national Internet carriers. A key to clearing the current bogs and bit pits of the Internet and preventing a Metcalfe collapse is the enlisting of the full-fiber and switching resources of AT&T to relieve the pressure on the existing NAPs and backbones and to accommodate the Internet’s growth. AT&T is currently moving to supply such support.

The Intranets criticized by Metcalfe are crucial to the Internet’s growth. Like the corporate PCs that spearheaded the advance of PC technology, Intranets spread Internet technology through business, expand the market for high-powered gear, lower component prices, enlarge bandwidth, and bring new users and buyers onto the Net.

As for Metcalfe’s prediction of Internet crashes from private network overflows, the fact is that no computer memory system could work without the principle of locality — the tendency of memory accesses to focus on a contiguous region of addresses. The Internet is similar. Internal corporate e-mail, for example, is about 10 times as voluminous as remote e-mail. Metcalfe’s private Net overflow cascade is mostly a theoretical chimera. Like Ethernets, “the Internet works in practice but not in theory.”

A second key fact of the Internet is that nothing in modern computer systems could survive the combinatorial explosions of multimillion-line software programs and multimillion-node circuitry without the magic of the microcosm. Semiconductors sink the complexity into silicon, where it gives way to the exponential boon of the power-delay product. The performance of the circuit — measured by its speed and low power — improves roughly by the square of the number of transistors on the chip.

A microprocessor using separate components would be taller than the Empire State Building and cover most of New York state. Most of the problems of Internet complexity must be solved the same way that the microprocessor solves its complexity explosion, sucking the complexity into semiconductors and taming it on the chip. This means that Internet nodes would ideally be single-chip systems. Avi is correct; it makes little difference whether these systems are routers or switches. Today the backbone is being renewed by ATM switches because these devices integrate more of the process onto silicon than any other switches. In the future, broadband optics will likely prevail by integrating entire communications systems onto seamless webs of glass.

THE INTERNET AS LIFE EXTENDER

The Internet is a human contrivance requiring finance and physical renewal. In practice, this means capital from phone companies and other large institutions. In the real world, self-organizing systems rely on market incentives rather than bio-analogies. Metcalfe is right that these incentives must be protected and extended in order for the law of the telecosm to conquer the laws of entropy.

The key remaining obstacle to the fulfillment of the promise of the Internet is government regulation. This obstacle is being overcome at last by the brute force of bandwidth abundance, stemming from breakthroughs in fiber optics, smart radios, satellites and cable modems.

In every industrial transformation, businesses prosper by using the defining abundance of their era to alleviate the defining scarcity. Today this challenge implies a commanding moral imperative: to use Internet bandwidth in order to stop wasting the customer's time. Stop the callous cost of queues, the insolence of cold calls, the wanton eyeball pokes and splashes of billboards and unwanted ads, the constant drag of lowest-common-denominator entertainments, the lethal tedium of unneeded travel, the plangent buffeting of TV news and political prattle, the endless temporal dissipation in classrooms, waiting rooms, anterooms, traffic jams, toll booths and assembly lines, and the impertinent tyranny of the unneeded and afterwards ignored submission of forms, audits, polls, waivers, warnings and legal pettifoggery.

George Gilder

Senior Fellow and Co-Founder of Discovery Institute
George Gilder is Chairman of Gilder Publishing LLC, located in Great Barrington, Massachusetts. A co-founder of Discovery Institute, Mr. Gilder is a Senior Fellow of the Center on Wealth & Poverty, and also directs Discovery's Technology and Democracy Project. In his latest book, Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy (2018), Gilder waves goodbye to today's Internet. In a rocketing journey into the very near future, he argues that Silicon Valley, long dominated by a few giants, faces a "great unbundling," which will disperse computer power and commerce and transform the economy and the Internet.