
Gilder Meets His Critics

Published in Forbes ASAP

This article was first published in Forbes ASAP, February 27, 1995. It contains letters from various correspondents commenting on a wide variety of issues raised in George Gilder’s “Telecosm” series, which will be published in 1996 by Simon & Schuster as a sequel to Microcosm (1989) and Life After Television (Norton, 1992). Subsequent chapters of Telecosm will be serialized in Forbes ASAP.


ASAP contributing editor George Gilder ran into a buzz saw over recent bandwidth and big-bird articles.

Right after George Gilder took on feisty Tom Peters in the Battle of City vs. Country, we struck a low blow: We gave the exhausted futurist a mountain of mail he had to read and answer, quick. Here, we print a sampling of the responses to Gilder’s piece on the onrush of bandwidth (Forbes ASAP, Dec. 5); a letter from Steven Dorfman of GM Hughes about an earlier article (Oct. 10); and Gilder’s answers to all. Thanks, George! {Editors, ASAP}

George Gilder’s piece on bandwidth was good. But I don’t understand how Intel gets hurt unless it stops delivering the best price/performance microprocessors. The more network connectivity, the more MIPS we need. Andy Grove is right that DSPs are just a complex way of getting more MIPS. Just because bandwidth reduces some of the need for compression doesn’t mean bandwidth reduces demand for cycles.

In any case Gilder is very stimulating even when I disagree with him, and most of the time I agree with him.

Bill Gates – Chairman and CEO, Microsoft, Redmond, WA

Debunking Bandwidth

When our world is fibered, the planet is like a desktop. Earth is but a backplane for a single computer. True. But as mere humans, the bandwidth we’re really interested in is the one that exists between us and computers, be they the size of a cuff link or a country. That bandwidth is often one we want to be smaller, not bigger. Most of us, most of the time, want fewer bits, not more bits. Sure, we want gigabits, but only for a few millionths of a second at a time.

Remember the early days of computing, when stacks of fanfolded output were dropped on an executive’s desk? People caught on quickly; that was data, not information. Today, for some reason, we have forgotten some simple concepts about what constitutes meaning and understanding and where they come from: you. So while it is really easy to ship vast amounts of data and high-resolution images back and forth between computers, and while it is suddenly possible to ignore geographic constraints, let’s not forget that in many cases “less is more” when it comes to bandwidth.

Narrow channels force us to be smarter. Yes, bandwidth will be free, but so will computing. The future will not be driven by either MIPS or BPS, but information and entertainment content. Andy Grove does not need to worry about John Malone or Bill Gates. He has to worry about Michael Ovitz.

Nicholas Negroponte – Director, MIT Media Laboratory, Boston, MA

Bandwidth to Burn: Now What Do We Do?

Gilder has made a case for vastly expanded bandwidth overwhelming the influence of the steady march of computing power. [But] what new need will drive businesses to translate the inventions Gilder describes into significant new media opportunities?

Apparently, it’s the need for video-on-demand. [But] if this were a plausible mass market, the streets of New York would be filled with bicycle messengers delivering Tom Cruise with bags of Chinese food. No, during the next five to 10 years, bandwidth will certainly be consumed in much greater quantities — but for completely different reasons. We will dramatically extend ourselves and our social relationships with video-telephones. We will consume substantial bandwidth by substituting bandwidth for gasoline — through telecommuting. We will network to multimedia databases (such as the current Internet-based World Wide Web) and dramatically expand our range of social contacts — across borders, cultures and tribes.

Unfortunately for Gilder’s bandwidth braggarts, these enormous markets will be built using a telecommunications technology that began deployment over 10 years ago — ISDN — and in which none of them has any important financial stake today. Unglamorous, ungainly, even downright ugly, ISDN (integrated services digital network) will be supplied by old-time telephone companies (not cable companies), and it will be driven by the steady progress of personal computers — themselves now a 15- to 20-year-old industry. As has been widely noted, we tend to overestimate (sometimes dramatically) the near-term impact of new technologies and underestimate the long-range effects.

In this age, new technology hype has become an epidemic. Reality itself, as it turns out, is far more interesting.

Mark Stahlman – President, New Media Associates, New York, NY

Increasing bandwidth will provide computers with more information to process, and this will increase, not decrease, the computational requirements. Having high bandwidth makes it possible for the interface nodes to be less intelligent, but this is not necessarily desirable. Furthermore, the time frame must be considered; high-bandwidth WAN (wide area network) connections are not going to be widely available for years, and in the meantime, computational power will continue to be critical as a way to mitigate bandwidth limitations.

No matter how much bandwidth is available, it is still very desirable to have high-performance computational ability in desktop systems. Rendering of three-dimensional images from mathematical representations, for example, is something that has widespread application not only in games, but in other consumer applications (like home and garden design programs). Orders of magnitude more performance will result in direct improvements in such applications, and bandwidth is no substitute here.

Finally, with regard to the inclusion of signal-processing capabilities in general-purpose microprocessors, I disagree with Gilder’s conclusion that this will not occur. Minor extensions to general-purpose architectures, such as the ability to perform four 8-bit additions in parallel using the same hardware that normally performs a single 32-bit addition, will provide a significant boost for applications such as video decompression. The cost of adding these features is small, and the benefit is great. Sun and HP have already made such additions to their processors, and I expect Intel and other x86 vendors will do so in the future. Dedicated DSPs will always be able to provide higher performance, but the incremental cost/performance of adding functions to the host CPU is superior.
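
Slater’s packed addition can be sketched even in plain software, with no special hardware at all. The fragment below is a minimal C illustration (not Sun’s or HP’s actual instruction set) of four 8-bit additions performed by one 32-bit add, with the inter-lane carries masked off by hand:

    #include <stdio.h>
    #include <stdint.h>

    /* Four 8-bit additions in one 32-bit operation: clear the top bit
       of each lane so carries cannot ripple into the neighboring byte,
       add, then restore the top bits with an XOR. */
    static uint32_t packed_add8(uint32_t a, uint32_t b) {
        uint32_t low7 = (a & 0x7F7F7F7FU) + (b & 0x7F7F7F7FU);
        return low7 ^ ((a ^ b) & 0x80808080U);
    }

    int main(void) {
        uint32_t a = 0xFF010203U; /* lanes 255, 1, 2, 3 */
        uint32_t b = 0x02100101U; /* lanes 2, 16, 1, 1  */
        /* Prints 01110304: each lane wraps modulo 256. */
        printf("%08X\n", (unsigned) packed_add8(a, b));
        return 0;
    }

Hardware support of the kind Slater describes makes this lane isolation free, which is why the incremental cost of such extensions is so small.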

Michael Slater – President, MicroDesign Resources, and Editorial Director, Microprocessor Report, Sebastopol, CA

[Gilder’s] view is grounded in the narrow philosophy of technological determinism. It is a peculiar and persistent form of myopia based on the wobbly assertion that the best technology will win in the marketplace. He who rides the best technological wave will ascend to glory. Oh, if it were only so! If technology determined success, there would be no Microsoft today. By any reasonable standard, MS-DOS, the foundation of Mr. Gates’s empire, was an average technology when it was brought to market more than a decade ago. But Microsoft had all the other elements that created a compelling value proposition for its customers.

Value is what customers want. Intel has got what it takes and has been a value leader for many years. Andy Grove has already begun to direct Intel’s development portfolio toward communications opportunities. He has read the signals and made the call, just as he did several years ago when he vacated the memory-chip business, in advance of grinding competition and shrinking margins. With constant vigilance and change, Intel’s success can continue for years to come.

Michael E. Treacy – President, Treacy & Co., Cambridge, MA {Treacy is co-author of The Discipline of Market Leaders}

George Gilder’s analysis of the changes in the computing and communications tradeoff is brilliant, concise, analytical — and flawed. His portrait of the rapid changes in communications and the relative disadvantage of the old-line computer industry (Intel, etc.) does not overestimate the movement; it underestimates it. The next 10 years will be the decade of Bandwidth on Demand. Consider this:

From 1995-2005, the cost of bandwidth will drop faster than the cost of computing.
From 1995-2005, the cost of switching will drop faster than the cost of bandwidth.

Historical examples: The cost of a T1 line (1.54 megabits per second) coast to coast in 1985 was $40,000/month. Today? Under $2,000/month, a drop of 95%.

Assume the following: by 2000, computing is free, and bandwidth is free. Now — design the future!

The amount of money spent on ATM Research and Development (Source: Yankee Group ATM Planning Service):

1993: $335 million
1994: $550 million
1995: $950 million

So Gilder is right on about the impact of ATM. In fact, Fore Systems, where our sister company Battery Ventures is the second-largest outside stockholder, carries a market capitalization of $900 million — on a $60 million sales base — demonstrating that ATM’s value is well known within the industry.

This past year, the Yankee Group trained 5,000 end users on the use of ATM technology, and the most frequently asked question was, “How in the world am I going to use all that bandwidth?” But it was only 10 years ago that users thought they would fall off the end of the earth if they went faster than 2.4 kilobits per second!

Which leads to some immutable laws about networks, which Gilder alludes to:

Networks always grow.
Networks always become more complex.
Networks find applications that double the bandwidth needed every three years.
The cost of bandwidth is artificially high.
Andy Grove is right: “Only the paranoid survive.”

Howard Anderson – Managing Director, The Yankee Group, and General Partner, Battery Ventures, Boston, MA

George Gilder’s article goes yet another step in establishing him as the foremost “signal-to-noise processor” of information technology. Yet, I confess to being somewhat confused by it.

My dilemma resides in what I will call the “30-30 rule” — that we humans can take in information at only about 30 megahertz through our eyes or, even slower, at 30 kilohertz through our ears. The kind of bandwidth Gilder projects is important to machine-to-machine communications, i.e., to networks, but it is the computer (in some form, whether PC, PDA, digital phone or digital TV) that will continue to determine the “match” between bandwidth and the inherent limitation of the 30-30 rule.
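
The mismatch is easy to put in numbers. Here is a minimal C sketch that compares a network link against the letter’s own 30-30 figures; the 1-gigabit-per-second link speed is an assumption chosen purely for illustration:

    #include <stdio.h>

    int main(void) {
        /* The 30-30 rule figures, used as rough human input rates. */
        const double eye_hz   = 30e6; /* ~30 MHz through the eyes */
        const double ear_hz   = 30e3; /* ~30 kHz through the ears */
        const double link_bps = 1e9;  /* assumed 1 Gb/s link      */

        printf("The link outruns the eye by %.0fx and the ear by %.0fx.\n",
               link_bps / eye_hz, link_bps / ear_hz);
        return 0;
    }

On the letter’s units, the gap is roughly a factor of 33 for the eye and more than 30,000 for the ear; that gap is exactly what the computer must bridge.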

Bandwidth is important, because it will make the connection a richer one, but the fact remains that we humans lack a broadband input channel to access all that bandwidth directly. And it is the computer that must bridge that gap, keeping it in the driver’s seat as we enter the realm of ubiquitous, connected computing.

Gilder’s article makes an additional point, one that falls too often on deaf ears in Washington: that bandwidth scarcity, the basis for much of our telecommunications regulation, is an outdated concept. Only a major revamping of the government’s role in telecommunications will permit the natural competition between computing and communications to play out.

G. A. Keyworth II – The Progress and Freedom Foundation, Washington, D.C.

Gilder’s article does a wonderful job showing the potential impact of the bandwidth revolution. Let me give you two examples of approaches in computer systems to exploit enormous bandwidth increases:

The speed of light is not doubling every 18 months. There is a revolution in system design for small, fast machines just as significant as the one for broadband networks in your article. What we call today “large servers” will in fact have to become physically very small. We are now approaching “design for light speed” in computer systems, and we have to keep our handy ruler, measured in nanoseconds, ready for each new board design. Light travels about four inches in a nanosecond in today’s wires, so that, in a 500-megahertz (two-nanosecond) computer design, we have less than eight inches of room for our signals to travel in a synchronous processor design (as most are). This means that the fastest computers in our future will also have to be the smallest!
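
Schmidt’s nanosecond ruler reduces to a one-line calculation. A minimal C sketch, using his figure of about four inches of wire per nanosecond:

    #include <stdio.h>

    /* Maximum signal path in a synchronous design: the clock period
       multiplied by the propagation speed in the board wires. */
    int main(void) {
        const double inches_per_ns = 4.0;   /* signal speed in wires */
        const double clock_mhz     = 500.0; /* 500 MHz clock         */

        double cycle_ns = 1000.0 / clock_mhz;        /* 2 ns cycle   */
        double max_path = cycle_ns * inches_per_ns;  /* 8 inches     */

        printf("%.0f MHz: %.1f ns cycle, under %.0f inches of signal travel\n",
               clock_mhz, cycle_ns, max_path);
        return 0;
    }

Double the clock rate and the ruler halves, which is why the fastest machines must also be the smallest.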

The backplanes of these machines have to be physically very short. The limit of a single backplane makes it hard to keep up with the improvements in processor speeds, using traditional backplane designs.

Switching becomes a core strategy for computer systems. Two approaches that merge switching and architecture are now popular. One, called Distributed Shared Memory, uses a switching network to link cache-coherent memories together. In DSM computers, the power of shared-memory designs can be extended over very high-speed switched memory networks. The other, called clustering, has been around for at least 15 years, and uses a switching network to link computer systems. In this approach, applications are modified to share common disks, peripherals and software.

Small size and switching are the future of high-performance computing. Both are based on networking at their core. As the switched networks get faster, these architectures will come to dominate computing. The fastest-improving technology, in this case networking, always drives the architecture. The hollowing out of the computer occurs when high-performance computers truly span networks. ATM (asynchronous transfer mode), now in its infancy, is the likely network for us to bet on.

Eric Schmidt – Chief Technical Officer, Sun Microsystems, Mountain View, CA

In “Ethersphere” (Oct. 10) Gilder offers the view that high-powered geostationary satellites — the mainstay, high-capacity platforms of our past, current and future service offerings — are already antiques, and soon will be displaced entirely by thousands of low earth orbiting satellites. That these highly touted systems are nonexistent, unlaunched and unproven [and require major technical breakthroughs] is a detail that conveniently escapes Gilder’s scathing assault on geostationary systems.

Gilder should recognize that new technology products are designed and brought to market based on a host of considerations in addition to pure technical feasibility. Tradeoffs are — must be — made. But to Gilder, “tradeoff” would appear to be synonymous with “sellout.”

In the corporate world, this is business naivety. In deciding what form Hughes’s new Spaceway and DirecTV services, for example, should take, our goal was to deploy systems that: maximized technology insertion, thereby minimizing risk; provided a low-cost service for which there was demonstrated consumer demand; and faced minimal regulatory, technology-development, or financing delays, thereby expediting service introduction.

A Ka-band GEO system, evolved from U.S. defense communication satellite applications, Spaceway is the logical extension of Hughes’s universe of 120,000 very small aperture terminal antennas worldwide, used for private-network, two-way voice, data and video. Our multi-satellite regional approach provides global coverage at a cost of $3 billion. Because service can be rolled out incrementally, revenues can be generated before full system deployment. (By contrast, virtually all 840 Teledesic satellites must be operational — at a $9 billion system cost — before service can begin.)

Our comparatively low investment cost and highly efficient spot-beam architecture, whereby we cost-effectively target our capacity to the world’s most populated regions, yield significant savings and low user costs ... critical because developing nations with limited communications infrastructure are a key market.

For voice, we expect that developing regions without access to low-cost terrestrial voice service will embrace Spaceway despite the fractional time delay — at least until terrestrial infrastructure is available. This is a significant, revenue-generating window of opportunity for us. As for data applications, our VSAT experience has shown that custom-developed protocols provide totally acceptable throughput efficiency and seamless interactivity. In short, we believe “the delay issue” has been overstated. There is a different delay issue, however, that cannot be overstated. Gilder is, I believe, overly optimistic about how soon Teledesic’s technology will be ready — and hence, how soon service revenues can be generated.

I believe Spaceway is the best technological solution for this market at this time. But if tomorrow the technology and market are in place so that the LEO system makes sense, rest assured that Hughes will introduce an innovative LEO product of our own.

Gilder attaches far too little import and value (in the form of operating profits) to today’s technology. Nowhere is this more clear than in his assessment of satellite direct-to-home television programming. Gilder calls DBS “one-size-fits-all programming,” stressing its lack of sufficient consumer choice and absence of interactivity. But in holding out for a fiber solution, Gilder is making a poor business decision.

Today, Hughes’s two DirecTV GEO satellites are filled with 150 program channels. We are adding 3,000 subscribers a day, and will break even (three million subscribers) by mid-1996. With 10 million subscribers projected by 2000, DirecTV will be a $3 billion-a-year business, with $1 billion in operating profit.

Waiting for the future, Gilder, carries a price tag most CEOs can’t afford, and are not prepared to pay.

Steven Dorfman – President, Hughes Telecommunications and Space Co., GM Hughes, Los Angeles, CA

George Gilder Replies

I want to thank my correspondents for their alternately poetic, ironic, trenchant and pithy responses. So many of them, though, share the notion that I predicted dire straits for Intel that I must assume a lack of clarity in my treatment of the issue. I predicted that new and larger opportunities would arise in the field of communications processors and systems, and that central processing units would bear a declining share of total processing, not that they would in themselves decline in any absolute sense. Indeed, CPUs should continue to improve their cost-effectiveness apace with Moore’s Law, plus an increment for architectural advances in parallelism. Such advances, however, will fail to keep pace with the onrushing expansion of bandwidth, as further detailed by Howard Anderson’s intriguing letter. Bandwidth gains will be fed on the demand side, as Mark Stahlman incisively observes, more by the needs of teleconferencing and telecommuting than by the need for one-way video-on-demand.

Thus I agree with Bill Gates that Intel can continue to thrive as long as it continues to produce the most cost-effective microprocessors. I did raise the possibility, foreshadowed by Microunity’s new semiconductor lab process, that Intel’s existing technology might face rivals that could produce more MIPS or gigabits per second per watt. Power efficiency will be a crucial index in a time of seething CPUs and increasing demands for power-saving designs from producers of mobile appliances, such as the digital cellular communicators that will be the most common PCs of the next decade.

Focusing on gigabits per second as a prime spec, these devices may well eclipse CPUs in raw processing pace and find a wide range of applications in digital radio, real-time compression and decompression, pattern recognition, echo cancellation and other digital signal processing uses. The demands of these applications have already impelled an array of processing and architectural advances at Microunity, Texas Instruments and elsewhere in the pullulating field of DSP. Unconstrained by proprietary legacies and immense installed bases, other manufacturers may also find ways to excel the Moore’s Law pace of Intel’s majestic progress down the learning curve of three-volt CMOS technology.

Jay Keyworth and Nicholas Negroponte both eloquently point to the central paradox of the information age. While production systems of the industrial age use scarce resources, such as land, labor and capital, to create abundance, production systems of the information age use abundant resources, such as bits and bandwidth, to create knowledge scarce enough to fit the bandwidth of humans. This distillation function — delivering correct and useful data to human beings through their Keyworth window of roughly 30-kilohertz cochleas and 30-megahertz retinas — requires processing speeds orders of magnitude above the human rates, just to sample, quantize and codify the flow. To scan, select, recognize, correct, decompress, echo-cancel, visualize or otherwise manipulate the data entails still further accelerations of processing power.

Communications processors may well emerge as most efficient for many of these tasks. The idea that all such functions will be sucked into the CPU has a long history, but motherboards and their buses remain as crowded as ever. I suspect that the bandwidth explosion will offer many opportunities for processors specializing in communications.

Steven Dorfman of Hughes, I predict, will do better both for his company and his two-way communications from space by moving quickly rather than slowly to low earth orbits. I fully share his admiration for the point-to-multipoint powers of DBS, and I have long cited them as a prime reason for the obsolescence of cable TV regulations based on the assumption of monopoly. Indeed, I predict far more than 10 million users by 2000 if Hughes and its suppliers can meet the demand. But satellite and cable TV vendors will prosper best by providing two-way channels for the 110 million personal computers in the land. I expect that these channels — particularly CATV, not ISDN — will provide the dominant access channel for computers over the next decade.

Above all, I hope that whomever Andy Grove fears most, it is not Michael Ovitz. If Grove goes Hollywood, we’ll all be in trouble.

George Gilder

Senior Fellow and Co-Founder of Discovery Institute
George Gilder is Chairman of Gilder Publishing LLC, located in Great Barrington, Massachusetts. A co-founder of Discovery Institute, Mr. Gilder is a Senior Fellow of the Center on Wealth & Poverty, and also directs Discovery’s Technology and Democracy Project. In his latest book, Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy (2018), Gilder waves goodbye to today’s Internet. In a rocketing journey into the very near future, he argues that Silicon Valley, long dominated by a few giants, faces a “great unbundling,” which will disperse computer power and commerce and transform the economy and the Internet.