Eight False Promises of the Internet
In early 2003 I had a conversation with Dee Hock, founder and former CEO of VISA. At the time we were interested in hiring him to be the keynote speaker at our upcoming Future of Money Summit, an event that would take place in November of that year.
Ten years earlier, in March of 1993, Hock gave a dinner speech at the Santa Fe Institute where he described the unusual organizational theories behind his management of VISA, calling them “chaordic,” a term that roughly translates to “ordered chaos.”
In 1996 he formed the Chaordic Alliance, later renamed the Chaordic Commons, for the purpose of furthering his notions that businesses can run more effectively when they are based on a “vital set of living beliefs” distributed through an organization, essentially replacing top-down command and control.
As we talked, his powers of persuasion were quite evident as he artfully described his “chaordic” theories, and by the end of the conversation I was a true believer, wanting to become a disciple of this new business gospel.
But as with many things that sound too good to be true the first time you hear them, Hock’s “chaordic” theories, which somehow worked within VISA, proved non-reproducible in other settings and have now largely been abandoned after numerous attempts to implement them at other companies.
As we enter the second decade of the new millennium, we find ourselves in a similar quandary, trying to separate fact from fallacy about what works and what doesn’t on the Internet. With that in mind, I’ve put together a list of eight founding theories of the Internet that have proved similarly deceptive.
1. “No one controls the Internet” – False!
In the 1950s, the U.S. military was concerned about the country’s ability to survive a nuclear first strike given its hub-and-spoke communications systems. Paul Baran of the RAND Corporation concluded that the best solution would be a computer network capable of breaking messages into units and routing them along redundant pathways, so that they could be reassembled into a coherent communication at the final destination.
This architecture later evolved into the TCP/IP packet switching that forms the basis of today’s Internet.
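Baran’s core idea can be sketched in a few lines of Python. This is a toy illustration, not real TCP/IP: the message is broken into numbered packets, the packets may arrive in any order over any path, and the destination reassembles them by sequence number.

```python
import random

def packetize(message: str, size: int = 8) -> list[tuple[int, str]]:
    """Split a message into (sequence_number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets: list[tuple[int, str]]) -> str:
    """Rebuild the original message regardless of packet arrival order."""
    return "".join(payload for _, payload in sorted(packets))

packets = packetize("Messages survive even when paths fail.")
random.shuffle(packets)  # simulate packets taking different routes and arriving out of order
assert reassemble(packets) == "Messages survive even when paths fail."
```

The resilience Baran was after comes from the fact that no single path, and no single intermediate node, is essential: as long as all the numbered packets eventually arrive, the message can be reconstructed.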
Core to this design was the principle of eliminating central control to remove all possible failure points. But there remains one point of control – the root servers.
There are actually 13 named hubs in the root server system, each consisting of multiple physical servers in multiple locations. They organize the names and numbering used to route traffic.
Domain names and numbering assignments are managed by ICANN. However, the root zone is still controlled by the United States Department of Commerce, which must approve all changes to the root zone file requested by ICANN.
The Internet’s First Hostile Takeover Attempt
By way of a DARPA contract with UCLA in the late 1960s, Jon Postel was hired to create the original numbering scheme for the Internet and remained the principal control person through 1998.
In 1990 Network Solutions was put in charge of managing the root servers and in 1995 they began charging for domain names starting at $50 per year. The money and control quickly gave Network Solutions execs the reputation of being bullies and their less-than-tactful style of management tended to grate on the early pioneers of the Internet, including people like Jon Postel and Vint Cerf.
In the early 1990s a new international organization called the Internet Society was formed in Geneva to handle issues related to Internet policy with ambitions of eventually assuming root authority – name and numbering control. Both Postel and Cerf were members of the Internet Society.
After numerous meetings with U.S. government officials and deteriorating relationships with Network Solutions, an incident happened that would forever define control over the Internet.
In 1998, after considerable prodding from fellow Internet Society Board Members, Jon Postel sent a memo to all of the key people in charge of the eight root servers at the time, asking them to redirect primary control to his server at UCLA. Since he was the original architect of the system, a well-respected authority, and a personal friend of those receiving the message, all of them complied.
Users of the Internet didn’t notice anything had changed because Postel set his computer to replicate the one at Network Solutions.
However, the reaction by officials in the U.S. government was swift, with national security advisers rousting Ira Magaziner, a key Clinton appointee, out of bed at 1:00 am on a trip to Switzerland to deal with the matter. Magaziner immediately called Postel and conferenced in an official at UCLA for added pressure. After a clear threat of legal action by the U.S. government, Postel agreed to redirect control back to Network Solutions.
Labeling the 1998 root authority incident a hostile takeover attempt may be overstating what actually happened, but the intentions were clear.
Since that time, control over root authority has remained firmly in the hands of the U.S. government.
CORRECTION: I was informed that the U.S. does not have total control over the Internet’s root authority. Only partial control. See comments below for more detail.
2. “Borders don’t matter” – False!
In the 1990s the Internet was greeted as the New New Thing: It would erase national borders, give rise to communal societies that invented their own rules, and undermine the power of governments. But not so fast!
In February 2000, anti-Nazi activist Marc Knobel was sitting in Paris, scouring the Internet for Nazi artifacts being sold online. Knobel described what he found during that search: “page after page of swastika arm bands, SS daggers, concentration camp photos, and even replicas of the Zyklon B gas canisters.” They were all for sale on Yahoo’s auction site.
In April 2000 a French anti-racism group filed a lawsuit against Yahoo! claiming the Internet search company hosted illegal auctions of Nazi-related paraphernalia.
Since France has strict laws against selling or displaying anything that incites racism, the sale of Nazi artifacts is illegal there.
No such items were offered on Yahoo’s French auction site, but French users were able to access Yahoo’s U.S.-based auction site, where sales of Nazi memorabilia were allowed.
In November 2000, French Judge Jean-Jacques Gomez gave Yahoo! 90 days to bar French residents from viewing auctions of Nazi memorabilia at its U.S.-based auction site, or face fines of US$13,000 for each day it exceeded the deadline.
Quick to respond, Yahoo! CEO Jerry Yang issued a statement declaring, “France wants to impose a judgment in an area over which it has no control.”
Many people agreed. French laws did not apply to Yahoo, a company based in the United States, home of the First Amendment – freedom of speech! The rules would be impossible to enforce! How would Yahoo know which viewers came from France? It would hamstring international commerce!
In December 2000, Yahoo! filed documents in U.S. federal court declaring that the French government had no right to make the company bar French residents from seeing auctions of Nazi paraphernalia on its U.S.-based Web site.
Yahoo insisted that this task was impossible, but the prosecution brought in experts who demonstrated technology capable of identifying content by geographical sources. Experts also showed the court that Yahoo had set up a mirror server in Stockholm—well beyond the protections of the US First Amendment.
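The technology those experts demonstrated amounts to looking up a visitor’s IP address in a table of regional address allocations. A minimal sketch of the idea in Python follows; the prefix table here is a tiny hypothetical stand-in (the blocks and country codes are illustrative only), whereas real geolocation services of the kind Yahoo later bought use databases with millions of entries.

```python
import ipaddress

# Hypothetical prefix-to-country table for illustration only.
# Real services derive these mappings from regional registry allocations.
GEO_TABLE = [
    (ipaddress.ip_network("2.0.0.0/12"), "FR"),
    (ipaddress.ip_network("8.8.8.0/24"), "US"),
]

def country_for(ip: str) -> str:
    """Return the country code whose prefix contains this address."""
    addr = ipaddress.ip_address(ip)
    for network, country in GEO_TABLE:
        if addr in network:
            return country
    return "UNKNOWN"

def should_block(ip: str, restricted=frozenset({"FR"})) -> bool:
    """True if restricted-item auctions must be hidden from this visitor."""
    return country_for(ip) in restricted

print(should_block("2.1.2.3"))  # True  – falls in the "FR" block above
print(should_block("8.8.8.8"))  # False – falls in the "US" block above
```

The lookup is imperfect (proxies and VPNs defeat it), which is why the court required only a good-faith filtering effort rather than perfection.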
This convinced an already-persuaded Judge Gomez to make the order permanent. Yahoo insisted that it would ignore the decision. But behind the scenes, its executives faced a serious brick-and-mortar problem.
By June of 2001, Yahoo had jumped on the geographical ID bandwagon, purchasing services from Akamai that allowed it to deploy locationally relevant advertising. A year later it was doing business with China, and had signed that country’s Public Pledge on Self-Discipline for the Chinese Internet Industry.
Even though the Internet began as a utopian dream of a unified world without government intervention, today’s Internet is moving toward the opposite end of the spectrum. In many cases, Internet companies not only welcome governmental restrictions; they are being used as agents of government policy.
The future Internet will see a move toward even more border sensitivity, with hyper-localized services that both improve the relevancy of the user experience and put providers in good standing for regional business and government contracts.
3. “Information wants to be free” – False!
In 1984, at a Hackers Conference, Silicon Valley futurist Stewart Brand was the first to use the phrase “Information wants to be free,” in response to a point made by Apple co-founder Steve Wozniak.
Brand continued by saying, “On the one hand, information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life.”
John Perry Barlow, lyricist for the Grateful Dead, keyed in on the first half of the phrase, “Information wants to be free” in a keynote speech at an Open Source Internet Symposium in 1992.
Gifted in his ability to pen catch-phrases and sound bites, Barlow set the stage for an entirely new era of free-thinking “free” advocates.
Closely tracking this theme in his 2009 book “Free: The Future of a Radical Price,” Wired Magazine editor Chris Anderson discusses why he believes that “Sooner or later, every company is going to have to figure out how to use free or compete with free, one way or another.”
But there is always a cost to “free.” While it may not extract a payment from your bank account, there is always a “time” cost involved.
Without some amount of friction, the volume of information you have to sift through skyrockets, and even with good search technology, your time costs climb dramatically.
As global data centers swell with video content, the cost of storage will become a serious consideration and will need to be passed along.
The days of “free” thinking are numbered, and no, at least in my book, information is never truly free.
4. “Information from the many is better than information from the few” – It depends!
I’ve often thought that it would be great to create a global history book written with the combined submissions of 10,000 people all adding their own perspective to how current events are affecting their own communities.
However a project like this is only feasible if contributors are all properly vetted first. A handful of bad contributors can taint, even contaminate, the entire project.
The crowdsourcing fallacy is the belief that by tapping the collective brainpower of the masses you can solve virtually any problem. But without proper controls, the input can be of very low quality, creating distracting mental clutter.
While it is true that the Internet is eliminating many of the gatekeepers, people trying to break into a field without going through them find it far harder to gain credibility and foster a “trust” relationship with their audiences.
In the end it still boils down to trust. Can I trust the person I am reading or listening to? Are they an accurate source of information? Will it be worth the time and brainpower I’m investing?
5. “First mover advantage” – Rarely!
Is there such a thing as first mover advantage?
While many in the investing community initially thought the first companies to enter an emerging Internet market had an advantage, the first mover advantage has turned out to be as elusive as it was exaggerated.
Some now believe the true advantage goes to those who come late to the game – last mover advantage.
The first companies to enter an emerging market have four primary advantages:
- Head start on the learning curve
- First to cultivate talent
- Ability to define the marketplace
- First to capture mindshare and build customer loyalty
The last mover advantage can be summed up in four countervailing points:
- Able to learn from the mistakes of the first movers
- Able to steal the talent already cultivated by the first mover
- Able to refine the strategies that defined the marketplace
- Since the first mover has already set the bar for customer service and customer loyalty, it’s not hard to raise it
A well-executed, properly funded company can be tough to beat. But there are a lot of “ifs” in that statement.
In addition to the inherent lack of first mover advantage in most Internet sectors, the structure of the market has made it even more difficult to protect competitive positions. There are huge uncertainties surrounding the Internet and the strategies that will win, and first mover advantages are largely illusory.
6. “Disintermediation” – It depends!
Disintermediation is a term used to describe the removal of intermediaries in a supply chain – “cutting out the middleman.” Instead of going through traditional distribution channels, each with some type of middleman, companies can now deal with every customer directly, for example via the Internet.
A prime example of a disintermediation company is Dell, which sells products directly to consumers, bypassing traditional retail stores.
But while disintermediation has proven effective for some products and industries, it has had little effect on others.
For products that are easily quantifiable commodities, such as computers, books, hotel rooms, and baseball tickets, the middlemen have had to find other jobs. But for any product requiring a touch-and-feel experience, such as furniture, groceries, lumber, and pet food, people prefer to check it out personally before making a purchase.
Virtually everything else lies in the grey area of partial disintermediation.
Counter to some of the early thinking is the concept of “reintermediation” – the creation of a new set of middlemen. As an example, rather than selling products in retail stores, it’s now far easier to sell them on Amazon.
The grand “disintermediation” experiment is far from over. One by one, every product from here on out will begin to define its own rules for how it enters the marketplace.
7. “The radical transparency advantage” – False!
Can you feel the layers being lifted? Transparency is entering our lives in unusual ways, and much like having individual veils lifted from a multi-veiled garment, we are now able to see the world around us with far greater clarity.
Recently several luminaries have begun asserting that the more transparent our society becomes, the better off we’ll be. This is simply not a universal truth.
Transparency today is composed of a series of data points, all of which become clearer and more viral with each improvement in our communication and social networks. Knowing the details is only part of the equation. Having the ability to expose the details and organize a response adds an entirely new dimension.
Leveraging the power of the Internet, a single response by a person today can have a profound impact. And an impact in one part of the world can create a ripple effect around the world.
We live in an increasingly fluid society, with money and ideas being transferred around the world at the speed of light. Even the movement of people is being handled with far greater efficiency.
The greatest danger of too much transparency is that we will become consumed by watching each other, and somewhere along the way, we will lose sight of the big picture. Each day will be filled with constant drama as we exhaust ourselves trying to right every wrong, and solve every problem.
We are all terminally human and have very limited ability to improve who we are simply because someone else may be watching.
So is there some optimal level of privacy we should be aspiring to?
8. “People are basically good, so the fewer rules the better” – False!
The 1990s marked the early days of the Internet with people like Julian Dibbell and John Perry Barlow articulating a libertarian vision that gained wide currency in the public imagination. The Electronic Frontier Foundation was later formed to protect the Internet from regulation in the belief that a free online community might unite people and melt government away.
eBay was founded by Pierre Omidyar on the ideology that people are generally good, as depicted in social contract theory. Unfortunately for Omidyar, this concept didn’t always hold up during eBay’s periods of exponential growth. The virtual community began to be contaminated by fraud, and eBay soon realized that a feedback forum and a dispute-resolution service could not contain or put a stop to the fraudulent activities that were disrupting the utopianism of eBay.
Although eBay was generally able to maintain order without outside intervention, some eBay users still seized upon the opportunity to steal from their fellow auctioneers. These corrupt few made it necessary to invoke the coercive power of government to keep order.
While far from optimal, a minority of wrongdoers can drive the need for over-regulation of the rest of us.
The Internet so far has not proven capable of governing itself. But therein may lie the greatest opportunity of all.
I will readily admit to having gotten sucked into many of the false notions these early Internet gurus have proposed. With so many things changing around us, it becomes a difficult exercise to separate the truth from the blustery noise of well-intentioned people.
In the end we did not hire Dee Hock to speak at our 2003 “Future of Money Summit.” It wasn’t because we didn’t invite him. Rather it was because he bowed out after we had reached an agreement. His numerous experiments in this area were failing, and my guess is that he was having trouble defending his theories in public.
The Internet is a rich, vibrant community of communities, constantly changing and constantly evolving. While many of the original theories have proven illusory, new chapters in the rulebook are being written every day.
There is no doubt in my mind that the final chapters will prove far better than the first.