
The Information Factories

Originally published at Wired

THE DRIVE UP INTERSTATE 84, through the verdant amphitheatrical sweep of the Columbia River Gorge to the quaint Oregon town of The Dalles, seems a trek into an alluring American past. You pass ancient basalt bluffs riven by luminous waterfalls, glimpsed through a filigree of Douglas firs. You see signs leading to museums of native Americana full of feathery and leathery tribal relics. There are farms and fisheries, vineyards arrayed on hillsides, eagles and ospreys riding the winds. On the horizon, just a half hour’s drive away, stands the radiant, snowcapped peak of Mount Hood, site of 11 glaciers, source of half a dozen rivers, and home of four-season skiing. “I could live here,” I say to myself with a backward glance down the highway toward urban Portland, a sylvan dream of the billboarded corridor that connects Silicon Valley and San Francisco.

Then, as the road comes to an end, the gray ruin of an abandoned aluminum plant rises from a barren hillside. Its gothic gantries and cavernous smelters stand empty and forlorn, a poignant warning of the evanescence of industrial power.

But industry has returned to The Dalles, albeit industry with a decidedly postindustrial flavor. For it’s here that Google has chosen to build its new 30‑acre campus, the base for a server farm of unprecedented proportion.

Although the evergreen mazes, mountain majesties, and always-on skiing surely play a role, two amenities in particular make this the perfect site for a next-gen data center. One is a fiber-optic hub linked to Harbour Pointe, Washington, the coastal landing base of PC-1, a fiber-optic artery built to handle 640 Gbps that connects Asia to the US. A glassy extension cord snakes through all the town’s major buildings, tapping into the greater Internet through NoaNet, a node of the experimental Internet2. The other attraction is The Dalles Dam and its 1.8‑gigawatt power station. The half-mile-long dam is a crucial source of cheap electrical power – once essential to aluminum smelting, now a strategic resource in the next phase in the digital revolution. Indeed, Google and other Silicon Valley titans are looking to the Columbia River to supply ceaseless cycles of electricity at about a fifth of what they would cost in the San Francisco Bay Area. Why? To feed the ravenous appetite of a new breed of computer.

Moore’s law has a corollary that bears the name of Gordon Bell, the legendary engineer behind Digital Equipment’s VAX line of advanced computers and now a principal researcher at Microsoft. According to Bell’s law, every decade a new class of computer emerges from a hundredfold drop in the price of processing power. As we approach a billionth of a cent per byte of storage, and pennies per gigabit per second of bandwidth, what kind of machine labors to be born?

How will we feed it?

How will it be tamed?

And how soon will it, in its inevitable turn, become a dinosaur?
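
Bell’s decade cadence, at least, is simple arithmetic. A back-of-envelope sketch, assuming the textbook 18-month Moore’s-law doubling time (an assumption imported here, not a figure from the article):

```python
# Back-of-envelope: Bell's decade cadence from Moore's law.
# Assumes the classic 18-month doubling time -- a textbook rule of
# thumb, not a number taken from this article.
import math

doublings_for_100x = math.log2(100)        # ~6.64 doublings for a 100x drop
years = doublings_for_100x * 1.5           # 18 months per doubling
print(f"~{years:.0f} years per hundredfold price drop")   # ~10 years
```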

One characteristic of this new machine is clear. It arises from a world measured in the prefix giga, but its operating environment is the petascale. We’re all petaphiles now, plugged into a world of petabytes, petaops, petaflops. Mouthing the prefix peta (signifying numbers of the magnitude 10 to the 15th power, a million billion) and the Latin verb petere (to search), we are doubly petacentric in our peregrinations through the hypertrophic network cloud.

Just last century – you remember it well, across the chasm of the crash – the PC was king. The mainframe was deposed and deceased. The desktop was the data center. Larry Page and Sergey Brin were nonprofit googoos babbling about searching their 150-gigabyte index of the Internet. When I wanted to electrify crowds with my uncanny sense of futurity, I would talk terascale (10 to the 12th power), describing a Web with an unimaginably enormous total of 15 terabytes of content.

Yawn. Today Google rules a total database of hundreds of petabytes, swelled every 24 hours by terabytes of Gmails, MySpace pages, and dancing-doggy videos – a relentless march of daily deltas, each larger than the whole Web of a decade ago. To make sense of it all, Page and Brin – with Microsoft, Yahoo, and Barry “QVC” Diller’s Ask.com hot on their heels – are frantically taking the computer-on-a-chip and multiplying it, in massively parallel arrays, into a computer-on-a-planet.

The data centers these companies are building began as exercises in making the planet’s ever-growing data pile searchable. Now, turbocharged with billions in Madison Avenue mad money for targeted advertisements, they’re morphing into general-purpose computing platforms, vastly more powerful than any built before. All those PCs are still there, but they have less and less to do, as Google and the others take on more and more of the duties once delegated to the CPU. Optical networks, which move data over vast distances without degradation, allow computing to migrate to wherever power is cheapest. Thus, the new computing architecture scales across Earth’s surface. Ironically, this emerging architecture is interlinked by the very technology that was supposed to be Big Computing’s downfall: the Internet.

In the PC era, the winners were companies that dominated the microcosm of the silicon chip. The new age of petacomputing will be ruled by the masters of the remote data center – those who optimally manage processing power, electricity, bandwidth, storage, and location. They will leverage the Net to provide not only search, but also the panoply of applications formerly housed on the desktop. For the moment, at least, the dawning era favors scale in hardware rather than software applications, and centralized operations management rather than operating systems at the network’s edge. The burden of playing catch-up in this new game may be what prompted Bill Gates to hand over technical leadership at Microsoft to Craig Mundie, a supercomputer expert, and Ray Ozzie, who made his name in network-based enterprise software with Lotus and Groove Networks.

Having clambered well up the petascale slope, Google has a privileged view of the future it is building – a perspective it’s understandably reticent to share. Proud of their front end of public search and advertising algorithms, the G-men hide their hardware coup behind an aw-shucks, bought-it-at-Fry’s facade. They resist the notion that their advantage springs chiefly from mastering the intricate dynamics of a newly recentralized computing architecture. This modesty may be disingenuous, of course, but amid the perpetual onrush of technological innovation, it may well be the soul of wisdom. After all, the advantage might turn out to be short-lived.

BACK IN 1993, in a midnight email to me from his office at Sun Microsystems, CTO Eric Schmidt envisioned the future: “When the network becomes as fast as the processor, the computer hollows out and spreads across the network.” His then-employer publicized this notion in a compact phrase: The network is the computer. But Sun’s hardware honchos failed to absorb Schmidt’s CEO-in-the-making punch line. In which direction would the profits from that transformation flow? “Not to the companies making the fastest processors or best operating systems,” he prophesied, “but to the companies with the best networks and the best search and sort algorithms.”

Schmidt wasn’t just talking. He left Sun and, after a stint as CEO of Novell, joined Google, where he found himself engulfed by the future he had predicted. While competitors like Excite, Inktomi, and Yahoo were building out their networks with SPARCstations and IBM mainframes, Google designed and manufactured its own servers from commodity components made by Intel and Seagate. In a 2005 technical article, operations chief Urs Hölzle explained why. The price of high-end processors “goes up nonlinearly with performance,” he observed. Connecting innumerable cheap processors in parallel offered at least a theoretical chance for a scalable system, in which bang for the buck didn’t erode as the system grew.

Today, Schmidt’s insight has been vindicated, and he’s often seen on Google’s Mountain View, California, campus wearing his comp-sci PhD’s goofy dimpled grin. The smile has grown toothier since he announced the plant in The Dalles, a manifestation of what he trumpets as “some of the best computer science ever performed.” When it’s finished, the project will spread tens of thousands of servers across a few giant structures. By building its own infrastructure rather than relying on commercial data centers, Schmidt told analysts in May, Google gets “tremendous competitive advantage.”

The facility in The Dalles is only the latest and most advanced of about two dozen Google data centers, which stretch from Silicon Valley to Dublin. All told, it’s a staggering collection of hardware, whose constituent servers number 450,000, according to the lowest estimate.

The extended Googleplex comprises an estimated 200 petabytes of hard disk storage – enough to copy the Net’s entire sprawling cornucopia dozens of times – and four petabytes of RAM. To handle the current load of 100 million queries a day, its collective input-output bandwidth must be in the neighborhood of 3 petabits per second.

Of course, these numbers are educated guesses. One of the unstated rules of the new arms race is that all information is strategic. Even the once-voluble Chairman Eric now hides behind PR walls. I had to battle hordes of polite but steadfast flacks to reach him, but he finally replied to my queries with a cordial email. Cloud computing, he confirmed, has indeed succeeded the old high-performance staples: mainframes and client-server, both of which require local-area networks. This is very much last year’s news. “In this architecture, the data is mostly resident on servers ‘somewhere on the Internet’ and the application runs on both the ‘cloud servers’ and the user’s browser. When you use Google Gmail, Maps, Yahoo’s services, many of eBay’s services, you are using this architecture.” He added: “The consequence of this ‘architectural shift’ is the return of massive data centers.”

This change is as momentous as the industrial-age shift from craft production to mass manufacture, from individual workers in separate shops turning out finished products step by step to massive factories that break up production into thousands of parts and perform them simultaneously. No single computer could update millions of auctions in real time, as eBay does, and no one machine could track thousands of stock portfolios made up of offerings on all the world’s exchanges, as Yahoo does. And those are, at most, terascale tasks. Page and Brin understood that with clever software, scores of thousands of cheap computers working in parallel could perform petascale tasks – like searching everything Yahoo, eBay, Amazon.com, and anyone else could shovel onto the Net. Google appears to have attained one of the holy grails of computer science: a scalable massively parallel architecture that can readily accommodate diverse software.
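
The architectural idea is easy to sketch, if not to build. A toy illustration in Python – not Google’s code, just the shape of the technique: shard the data across many cheap workers, scan the shards in parallel, and merge the partial results.

```python
# Toy sketch of petascale-style parallelism: shard the corpus, let
# one worker scan each shard, then merge the partial hit lists.
from concurrent.futures import ProcessPoolExecutor

def search_shard(shard, term):
    # Each worker scans only its own slice of the corpus.
    return [(doc_id, text) for doc_id, text in shard if term in text]

def parallel_search(shards, term):
    with ProcessPoolExecutor() as pool:
        partials = pool.map(search_shard, shards, [term] * len(shards))
    return [hit for partial in partials for hit in partial]

if __name__ == "__main__":
    corpus = list(enumerate([
        "cheap commodity computers", "petascale search machine",
        "massively parallel arrays", "auctions in real time",
    ]))
    shards = [corpus[:2], corpus[2:]]   # pretend each lives on its own box
    print(parallel_search(shards, "parallel"))
```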

Google’s core activity remains Web search. Having built a petascale search machine, though, the question naturally arose: What else could it do? Google’s answer: just about anything. Thus the company’s expanding portfolio of Web services: delivering ads (AdSense, AdWords), maps (Google Maps), videos (Google Video), scheduling (Google Calendar), transactions (Google Checkout), email (Gmail), and productivity software (Writely). The other heavyweights have followed suit.

Google’s success stems from more than foresight, ingenuity, and chutzpah. In every era, the winning companies are those that waste what is abundant – as signaled by precipitously declining prices – in order to save what is scarce. Google has been profligate with the surfeits of data storage and backbone bandwidth. Conversely, it has been parsimonious with that most precious of resources, users’ patience.

The recent explosion of hard disk storage capacity makes Moore’s law look like a cockroach race. In 1991, a 100-megabyte drive cost $500, and a 50-megahertz Intel 486 processor cost about the same. In 2006, $500 buys a 750-gigabyte drive or a 3-gigahertz processor. Over 15 years, that’s an advance of 7,500 times for the hard drive and 60 times for the processor. By this crude metric, the cost-effectiveness of hard drives grew 125 times faster than that of processors.
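
The arithmetic in the paragraph above, reproduced directly:

```python
# The article's own numbers: $500 buys either device in each year.
drive_gain = 750_000 / 100      # 100 MB (1991) -> 750 GB (2006): 7,500x
cpu_gain = 3_000 / 50           # 50 MHz (1991) -> 3 GHz (2006): 60x
print(drive_gain / cpu_gain)    # 125.0 -- drives improved 125x faster
```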

But the miraculous advance of disk storage concealed a problem: The larger and denser the individual disks, the longer it takes to scan them for information. “The little arm reading the disks can’t move fast enough to handle the onrush of seeks,” explains Josh Coates, a 32-year-old storage entrepreneur who founded Berkeley Data Systems. “The whole world stops.”

The solution is to deploy huge amounts of random access memory. By the byte, RAM is some 100 times more costly than disk storage. Engineers normally conserve it obsessively, using all kinds of tricks to fool processors into treating disk drives as though they were RAM. But Google understands that the most precious resource is not money but time. Search users, it turns out, are sorely impatient. Research shows that they’re satisfied with results delivered within a twentieth of a second. RAM can be accessed some 10,000 times faster than disks. So, measured by access time, RAM is 100 times cheaper than disk storage.
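
The trade-off, in the rough numbers the text supplies:

```python
# RAM vs. disk, by the article's rough figures.
price_premium = 100        # RAM costs ~100x more per byte than disk
speed_advantage = 10_000   # RAM access is ~10,000x faster than disk
# Per unit of capacity-per-access-time, RAM comes out far cheaper:
print(speed_advantage / price_premium)   # 100 -- i.e., 100x cheaper
```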

But it’s not enough to reach users quickly. Google needs to reach them wherever they are. This requires access to the Net backbone, the long-haul fiber-optic lines that encircle the globe. In the last decade, the speed of backbone traffic has accelerated from 45 Mbps to roughly a terabit per second. That’s a rise of more than 20,000 times. Google interconnects its hundreds of thousands of processors with gigabit Ethernet lines. Placing gigantic data centers near major fiber-optic nodes is well worth the expense.

Wasting what is abundant to conserve what is scarce, the G-men have become the supreme entrepreneurs of the new millennium. However, past performance does not guarantee future returns. As large as the current Google database is, even bigger shocks are coming. An avalanche of digital video measured in exabytes (10 to the 18th power, or 1,000 petabytes) is hurtling down from the mountainsides of panicked Big Media and bubbling up from the YouTubian depths. The massively parallel, prodigally wasteful petascale computer has its work cut out for it.

THE FASTEST-GROWING search engine – besides Google – isn’t Microsoft or Yahoo or AOL. It’s Ask.com, which has seen its total searches grow 20 percent this year. Like Google, Ask.com has built a petascale computer out of commodity CPUs, hard disks, and RAM chips. And while Google doesn’t permit outsiders to ogle the hardware inside its data centers, Ask.com is eager for the attention.

The East Coast branch of Ask.com’s machine occupies a 500,000-square-foot concrete structure at the end of a long and winding suburban road. The driveway runs a gauntlet of pylons bearing heavy gray power lines and festooned with smaller yellow fiber-optic cables. The windowless facility crouches behind a 10-foot-high chain-link fence in a drab tan camouflage that suggests military-level security. The building holds the central nervous system of not only Ask.com, which occupies more than half the space, but also other well-known information technology companies. Corporate logos are conspicuously absent.

The facility is run by telco giant Verizon. It was designed not for supercomputing but for communications, steering photons through glass threads and mostly copper switches toward their telephonic destinations. MCI, a Verizon acquisition, built it to accommodate UUNet, the premier high-end Internet service provider.

Arriving at a diminutive brown door discreetly designated MAIN ENTRANCE, I pass inside to be vetted by earnest security guards. Metal gates clank behind me as I step into a sirocco of lukewarm air blowing up from the floor. A sudden roar of machinery – fans, air conditioners, and power supplies all whirring together in a tangle of white noise – assaults my ears. The few workers attending to the endless rows of cabinets protect their eardrums with hulking orange muffs.

These cabinets once held as many as eight UUNet routers apiece. Now each 10-foot frame houses 42 black Dell PowerEdge servers, interconnected by a Rastafarian tangle of wires – tens of thousands of computers in total. Hovering above the cabinets like a midday emanation over Death Valley, a shimmering haze of heat signifies an awesome consumption of power.

If it’s necessary to waste memory and bandwidth to dominate the petascale era, gorging on energy is an inescapable cost of doing business. Ask.com operations VP Dayne Sampson estimates that the five leading search companies together have some 2 million servers, each shedding some 300 watts of heat, a total of 600 megawatts. These are linked to hard drives that dissipate perhaps another gigawatt. Fifty percent again as much power is required to cool this searing heat, for a total of 2.4 gigawatts. With a third of the incoming power already lost to the grid’s inefficiencies, and half of what’s left lost to power supplies, transformers, and converters, the total electricity consumed by major search engines in 2006 approaches 5 gigawatts.
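
The estimate, stepped through with the article’s own figures (the final leap to roughly 5 gigawatts folds in the grid and conversion losses described above):

```python
# Sampson's estimate, step by step (all figures from the text).
servers = 2_000_000
server_heat_gw = servers * 300 / 1e9       # 300 W each -> 0.6 GW
disk_heat_gw = 1.0                         # "perhaps another gigawatt"
cooling_gw = 0.5 * (server_heat_gw + disk_heat_gw)   # fifty percent again
facility_gw = server_heat_gw + disk_heat_gw + cooling_gw
print(facility_gw)   # 2.4 GW at the data centers themselves; grid and
# power-supply/conversion losses push the generation total toward the
# article's "approaching 5 gigawatts."
```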

That’s an impressive quantity of electricity. Five gigawatts is almost enough to power the Las Vegas metropolitan area – with all its hotels, casinos, restaurants, and convention centers – on the hottest day of the year. So the operation of the world’s petascale search machines constitutes a Vegas-sized power sump. In the next year or so, it could add a dog-day Atlantic City. Air-conditioning will be the prime cost and conundrum of the petascale era. As energy analysts Peter Huber and Mark Mills projected in 1999, the planetary machine is on track to be consuming half of all the world’s output of electricity by the end of this decade.

Google’s Hölzle noticed the high electric bills after taking his post in 1999. At 15 cents per kilowatt-hour, power dominated his calculus of costs. “A power company could give away PCs and make a substantial profit selling power,” he says. (At The Dalles, the huge protuberances on top are not giant disk drives, climbing to the rooftop for a smoke while the RAM below does the work, but an array of eight hulking cooling towers.)

The struggle to find an adequate supply of electricity explains the curious emptiness that afflicts some 30 percent of Ask.com’s square footage. Why is the second-fastest-growing search engine one-third empty? “We ran out of power before we ran out of space,” says search operations manager James Snow, a ponytailed refugee from an IBM acquisition. Not only does the Verizon facility lack a cheap power source, it struggles to get any further power at all; designed for the more modest needs of Internet switching, the building has already maxed out the local grid. Consequently, Ask.com’s Sampson has followed Google’s trail to the Columbia River, where he’s scoping out properties. Perhaps by moving farther up the river into the Washington headwaters he can get even cheaper power than Google will get in The Dalles.

Microsoft and Yahoo are a few steps ahead of him, building me-too data centers in Quincy and Wenatchee, Washington, respectively. There they can take advantage of rock-bottom electricity prices as well as dark fiber laid by the Bonneville Power Administration. Patterning itself on Ronald Reagan’s cold war strategy against the Soviet Union, Microsoft is headed toward spending two dollars on data centers and online services for every dollar spent by Google. As Microsoft Live operations chief Debra Chrapaty tells me, her company “added a Google” last year in search capability.

Microsoft’s power consumption has risen tenfold in the last three years as it has come to serve Hotmail’s 260 million users, MSN Messenger’s 240 million, and Live’s 320 million. Chrapaty projects a further tenfold rise in the next five years as the company’s nine data centers, most scattered around the US, are joined with others around the globe. With a great sucking sound audible in Redmond as the Windows desktop disappears into the cloud, Microsoft has no choice but to damn the expense and forge ahead.

The catch-up crew at Ask.com may be wise to hold out for alternatives beyond the remote corners of the Pacific Northwest. After all, hydropower is a limited and localized resource, while nuclear power promises centuries of nearly limitless energy that can be produced almost anywhere. China is moving forward with plans to build as many as 30 new nuclear plants; perhaps the next wave of data centers will be sited in Shenzhen.

UNTIL HUMANKIND devises an inexhaustible font of electricity that can be situated wherever it’s most convenient, the best hope for cooling overheated data centers is to make computers themselves more efficient. The dire state of data center economics (as well as customer demand for portable computers with a reasonable battery life) has driven chipmakers to throw all their weight behind efforts to design low-power chips. AMD’s Opteron CPU, which debuted in 2003, consumed significantly less power than its predecessors, reversing the trend toward higher speed and greater power consumption that had held since the microprocessor was invented. The Opteron upgrade this past summer brought an additional 30 percent reduction in power usage. Intel, introducing its competing Core architecture, recently acknowledged that the market now values energy efficiency over clock speed. But with the Internet’s expansion and the migration of desktop applications online, these improvements won’t be enough to avert a meltdown.

An even more daunting roadblock stands ahead: Further dramatic gains in efficiency may be physically impossible without radical breakthroughs in chip design. Microsoft’s Craig Mundie bluntly describes the predicament. “We have now run into a brick wall,” he says. “What brought all of us faster computing was raising the CPU’s clock rate, which increased power consumption. Raising the clock rate without consuming more power was only possible because we could lower the voltage. We can’t do that anymore because we’re down into electron volts. If you can’t lower the voltage, you can’t raise the clock rate without using a lot more power.”
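
Mundie’s wall can be restated in the standard first-order relation for CMOS dynamic power, P ≈ C·V²·f – textbook physics rather than anything from the article:

```python
# First-order CMOS dynamic power: P ~ C * V^2 * f.
def dynamic_power(c, v, f):
    return c * v * v * f

print(dynamic_power(1.0, 1.0, 1.0))   # baseline
print(dynamic_power(1.0, 0.5, 4.0))   # 4x the clock at half voltage: power flat
print(dynamic_power(1.0, 1.0, 2.0))   # voltage at its floor: 2x clock costs 2x power
```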

If CPUs won’t run much cooler, maybe the rest of the computer can be redesigned to keep power consumption to a minimum. That’s the goal of Andy Bechtolsheim, who’s plotting the compact, low-power future of the data center from his workbench at Sun. Bechtolsheim has fundamentally reconceived the way computers are put together, coupling CPUs directly to drives, angling drives for maximum air flow, and connecting fans directly to motherboards. He aims to bring data center computing capacity to the back office – without oversize air-conditioning trolls on the roof.

Some industry vets believe that Bechtolsheim doesn’t count much in the era of cloud computing. He counted, though, back in 1998 when he supplied the first outside money for Brin and Page. Prior to that, he had made successive fortunes as Sun’s founder and employee number one, as a major early investor in Microsoft, and as progenitor of Granite Systems, a maker of gigabit Ethernet switches ultimately snapped up by Cisco. Cisco, Google, Microsoft, and Sun give him sort of a royal flush, making him perhaps the supreme investor-entrepreneur in Silicon Valley history.

Speaking at double data rate in a German accent, Bechtolsheim acknowledges that the move from search to more ambitious services plays to Google’s advantage. “To deliver video, maps, and all the rest on the fly, optimized for a specific customer’s needs, to get the maximum benefit for the advertiser – that requires tremendous hardware, storage, and memory. It takes hundreds of computers, free to each end user. The next tier down doesn’t have the economics to build this stuff.”

I ask, “So is the game over?” Bechtolsheim’s reply: “Only if no one changes the game.”

Earlier this year, Sun presented new products that can dispense the entire Internet from a few bread boxes – using, curiously enough, industry-standard AMD Opteron processors, cheap hard disks, and industry-standard RAM. The Sun Fire X4600 is a modular hybrid data server and storage facility. Stacking 655 of these machines together, the Tokyo Institute of Technology created a 38-teraflop machine that has been recognized as one of the world’s fastest supercomputers. And with 1-terabyte drives, available next year, Bechtolsheim will be able to pack the Net into three cabinets, consuming 200 kilowatts and occupying perhaps a tenth of a row at Ask.com. Replicating Google’s 200 petabytes of hard drive capacity would take less than one data center row and consume less than 10 megawatts – enough electricity to power thousands of US homes.

Leaning back in his chair in a Sun conference room, Bechtolsheim observes, “The last few years have been disappointing for people who want to accelerate progress in technology. But now the world is moving faster again.”

FOR THE MOMENT, at least, the power of massive parallelism has far outstripped the promise of alternative computing architectures. But reliance on massively parallel computing may come to define the limits of what can be accomplished by a computer-on-a-planet. Two decades ago, Carver Mead, the former head of computer science at Caltech and key contributor to several generations of chip technology, pointed out that a collection of chips arrayed in parallel can’t do everything a computer might be called upon to do. “Parallel architectures,” he noted, “are inherently special-purpose.”

Hölzle admits as much. Since he arrived at Google, he says, the company has been through six or seven iterations of its search software and perhaps as many versions of the hardware backend. “It’s impossible to decouple the two,” he explains. “The search programs have to fit with the hardware systems, and the hardware systems have to work with the software.”

All previous parallel architectures, from Danny Hillis’ Thinking Machines to Seymour Cray’s behemoth supercomputers to Jim Clark’s Silicon Graphics workstations, have fallen before this problem. Their software and hardware became too specialized to keep up with the pace of innovation in computing. Scalability is also an issue: As the number of processors grows, the balance of activity between communications and processing skews toward communications. The pins that connect chips to boards become bottlenecks. With all the processors attempting to access memory at once, the gridlock becomes intractable. The problem has an acronym – NUMA, for nonuniform memory access – and it has never been solved.

Google apparently has responded by replicating everything everywhere. The system is intensively redundant; if one server fails, the other half million don’t know or care. But this creates new challenges. The software must break up every problem into ever more parallel processes. In the end, each ingenious solution becomes the new problem of a specialized, even sclerotic, device. The petascale machine faces the peril of becoming a kludge.
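
In miniature, the redundancy scheme looks something like the sketch below – every name and number here is illustrative, not a description of Google’s actual system.

```python
# Toy sketch of "replicate everything everywhere": each shard lives on
# several machines, and a dead replica is silently skipped.
import random

REPLICAS = 3

def ask_replica(shard, replica, term):
    if random.random() < 0.1:                  # simulate a failed server
        raise ConnectionError(f"shard {shard} replica {replica} down")
    return f"results for {term!r} from shard {shard}, replica {replica}"

def ask_shard(shard, term):
    for replica in range(REPLICAS):            # fall over to the next copy
        try:
            return ask_replica(shard, replica, term)
        except ConnectionError:
            continue
    raise RuntimeError(f"all {REPLICAS} replicas of shard {shard} down")

print(ask_shard(0, "petascale"))
```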

Could that happen to Google and its followers?

Google’s magical ability to distribute a search query among untold numbers of processors and integrate the results for delivery to a specific user demands the utmost central control. This triumph of centralization is a strange, belated vindication of Grosch’s law, the claim by IBM’s Herbert Grosch in 1953 that computer power rises by the square of the price. That is, the more costly the computer, the better its price-performance ratio. Low-cost computers could not compete. In the end, a few huge machines would serve all the world’s computing needs. Such thinking supposedly prompted Grosch’s colleague Thomas Watson to predict a total global computing market of five mainframes.
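
Grosch’s claim formalizes neatly: if power rises as the square of price, then price-performance – power per dollar – rises linearly with price, so the costliest machine always wins. A quick check:

```python
# Grosch's law: power = k * price**2, so power/price = k * price.
k = 1.0
for price in (1, 10, 100):
    power = k * price ** 2
    print(price, power, power / price)   # price-performance grows with price
```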

The advent of personal computers dealt Grosch’s law a decisive defeat. Suddenly, inexpensive commodity desktop PCs were thousands of times more cost-effective than mainframes.

In this way, the success of the highly centralized computer-on-a-planet runs counter to the current that has swept the computer industry for decades. The advantages of the new architecture may last only until the centripetal forces pulling intelligence to the core of the network give way, once again, to the silicon centrifuge dispelling it to the edges. Google has pioneered the miracle play of wringing supercomputer performance from commodity CPUs, and this strategy is likely to succeed as long as microchip progress remains in the doldrums. But semiconductor and optical technologies are on the verge of a new leap forward.

The next wave of innovation will compress today’s parallel solutions in an evolutionary convergence of electronics and optics: 3-D and even holographic memory cells; lasers inscribed on the tops of chips, replacing copper pins with streams of photons; and all-optical networks in which thousands of colors of light travel along a single fiber. As these advances find their way into an increasing variety of devices, the petascale computer will shrink from a dinosaur to a teleputer – the successor to today’s handhelds – in your ear or in your signal path. It will access a variety of searchers and servers, enabling participation in metaverses beyond the ken of even Ray Kurzweil’s prophetic imagination. Moreover, it will link to trillions of sensors around the globe, giving it a constant knowledge of the physical state of the world, from traffic conditions to the workings of your own biomachine.

Such advances promise to transform the calculus of storage, bandwidth, and power that gives centralization its current advantage. As the redoubtable Bell Labs engineer turned giga-investor Andy Kessler tells me, “It’s sure to happen. It always has. Because all the creativity, customer whims, long tails, and money are at the network’s edge. That’s where chipmakers find the volumes that feed their Moore’s law margins. That’s where you can find elastically ascending revenues and relentlessly declining costs.”

Amid the beckoning fantasies of futurism, the purpose of whatever comes next – like that of today’s petapede – will be to serve the ultimate, and still the only general-purpose, petascale computer: the human brain. The brain demonstrates the superiority of the edge over the core: It’s not agglomerated in a few air-conditioned nodes, but dispersed far and wide and interconnected via myriad sensory and media channels. The test of the new global ganglia of computers and cables, worldwide webs of glass and light and air, is how readily they take advantage of unexpected contributions from free human minds, in all their creativity and diversity. Search and you shall find.

George Gilder is a senior fellow at the Discovery Institute and publishes the Gilder Technology Report.
