Is It “Web 2.0”?

First, if you think that the terms “World Wide Web” and “Internet” are synonymous, let me expand your worldview a bit, in the same way that Copernicus did for people’s view of our Solar System back in the mid-1500s. The “World Wide Web” is basically one network service that runs on a much larger, more complex thing called the Internet. The web is a simple client-server system based on HTTP (HyperText Transfer Protocol) and HTML (HyperText Markup Language). Because of serious limitations and inefficiencies in these standards, both have been enhanced and extended numerous times. The result is still not particularly elegant to real network software designers or engineers, but it has clearly had a major impact on the world. The technology of the web was a refinement and convergence of several ideas and technologies that were in use before HTML and HTTP were created by Tim Berners-Lee in the late 1980s at CERN. But there is a lot to the Internet beyond the web (e-mail, instant messaging, video conferencing, VoIP, file transfer, peer-to-peer (P2P), VPNs, IPTV, etc.). There are thousands of Internet protocols, of which the web uses one (HTTP).
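
To get a feel for just how simple this client-server exchange is, here is a minimal sketch of an HTTP client in C, using POSIX sockets. It sends a bare-bones HTTP/1.0 GET request over a TCP connection and prints the raw response. The host name example.com is just a placeholder, and most error handling is omitted for brevity:

    /* Minimal HTTP/1.0 client sketch (POSIX sockets). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(void) {
        struct addrinfo hints = {0}, *res;
        hints.ai_family = AF_INET;             /* IPv4, for simplicity */
        hints.ai_socktype = SOCK_STREAM;       /* TCP */
        if (getaddrinfo("example.com", "80", &hints, &res) != 0)
            return 1;
        int s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        connect(s, res->ai_addr, res->ai_addrlen);
        const char *req = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
        write(s, req, strlen(req));            /* the entire request */
        char buf[4096];
        ssize_t n;
        while ((n = read(s, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, (size_t)n, stdout); /* status line, headers, HTML */
        close(s);
        freeaddrinfo(res);
        return 0;
    }

The entire request is two short lines of plain text, and everything a browser renders comes back the same way: a status line, some headers, and the HTML itself.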

HyperText, WAIS/SGML and Gopher

The terms HyperText and HyperMedia were coined by Ted Nelson in 1965, at Brown University. These terms referred to online text documents (or rich media, including pictures, sound, and other content) that contained links from any word or phrase in the document to other parts of the same document, or to parts of other documents that were also online. In August 1987, Apple Computer released HyperCard for the Macintosh, one of the first commercial HyperText-based applications. There were already document storage and retrieval systems on the early Internet, such as WAIS (Wide Area Information Server). WAIS was based on the ANSI Z39.50:1988 standard, and was developed in the late 1980s by a group of companies including Thinking Machines, Apple Computer, Dow Jones and KPMG Peat Marwick. As with the web, there were both WAIS servers and clients. A later version of WAIS was based on ANSI Z39.50:1992, which included SGML (Standard Generalized Markup Language, ISO 8879:1986) for more professional-looking documents. There was another Internet application called Gopher (University of Minnesota, circa 1991) that could distribute, search for, and retrieve documents. Gopher was also primarily text-based, and imposed a very strict hierarchical structure on information.

HTML and HTTP

Tim Berners-Lee combined these three concepts (HyperText, WAIS/SGML and Gopher document retrieval) to create HTTP and HTML. HTML was a very watered-down and limited markup language compared to SGML, which is capable of producing highly sophisticated, professional-looking books. In comparison, HTML allows very limited control over the final appearance of the document on the client’s screen. HTTP was a very simple protocol designed to serve HTML documents to HTTP client programs called web browsers. A basic HTTP server can be written in one afternoon, and consists of about half a page of the C programming language (I’ve done it, and retrieved documents from it with a standard browser). Early browsers such as Lynx (1992) were very limited (text only, but including hypertext links). In 1993, at the National Center for Supercomputing Applications (NCSA) at the University of Illinois, the first Mosaic web browser was created (running under the X Window System on UNIX). Because it was created for X (a platform with good support for computer graphics), many graphics capabilities were added. With the release of web browsers for the PC and Macintosh, the number of web servers grew from 500 in 1993 to 10,000 in 1994. The World Wide Web has since grown to millions of servers and many versions of the web client (Internet Explorer, Mozilla Firefox, Safari, Opera, Chrome, etc.). It’s been so successful that a lot of people today think that the World Wide Web is the Internet.
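
For the curious, here is roughly what such a half page of C can look like: a bare-bones sketch with no error handling or concurrency, answering every request with the same fixed HTML page. The port number 8080 is an arbitrary choice:

    /* Minimal HTTP server sketch (POSIX sockets): serves one fixed page. */
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void) {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);  /* any local address */
        addr.sin_port = htons(8080);               /* arbitrary port */
        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, 5);
        for (;;) {
            int cli = accept(srv, NULL, NULL);
            char req[2048];
            read(cli, req, sizeof(req) - 1);       /* read and ignore the request */
            const char *resp =
                "HTTP/1.0 200 OK\r\n"
                "Content-Type: text/html\r\n"
                "\r\n"
                "<html><body><h1>Served from half a page of C</h1></body></html>";
            write(cli, resp, strlen(resp));
            close(cli);             /* HTTP/1.0: one request per connection */
        }
    }

Point any standard browser at http://localhost:8080/ and the page appears, which gives some feel for why web servers multiplied so quickly: the barrier to writing one was (and is) remarkably low.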

Web 2.0

The term Web 2.0 was coined by Darcy DiNucci in a 1999 magazine article. The current usage dates from an annual conference called “Web 2.0” that began in 2004, organized and run by Tim O’Reilly (founder of O’Reilly Media, publisher of many excellent computer books).

Many of the promoters of the term Web 2.0 characterize what came before (which they call Web 1.0) as “Web as Information Source”. Web 1.0 is based on technologies such as PHP, Ruby, ColdFusion, Perl, Python and ASP (Active Server Pages). In comparison, Web 2.0 is “Network as Platform”, or the “participatory web”. It uses some or all of the technologies of Web 1.0, plus new things such as Ajax (Asynchronous JavaScript and XML), Adobe Flash and Adobe Flex. Typical Web 2.0 applications are MediaWiki (which powers the world’s biggest wiki, Wikipedia), blogging sites, social networking sites like Facebook, video publishing sites like YouTube, photo publishing sites like Flickr, Google Maps, etc.

Andrew Keen (British-American entrepreneur and author) claims that Web 2.0 has created a cult of digital narcissism and amateurism, which undermines the very notion of expertise. It allows anyone anywhere to share their own opinions and content, regardless of their talent, knowledge, credentials, or bias. It is “creating an endless digital forest of mediocrity: uninformed political commentary, unseemly home videos, embarrassingly amateurish music, unreadable poems, essays and novels.” He also says that Wikipedia is full of “mistakes, half truths and misunderstandings”. Perhaps Web 2.0 has made it too easy for the mass public to participate. Tim Berners-Lee’s take on Web 2.0 is that it is just a “piece of jargon”. In the finest tradition of Web 2.0, these comments, which were found in the Wikipedia article on Web 2.0, probably include some mistakes, half-truths and misunderstandings.

Basically, Web 2.0 does not introduce any revolutionary new technology or protocols; it is more a refinement of what was already being done on the web, combined with a new emphasis on end users becoming not just passive consumers but also producers of web content. The Second Internet will actually help make Web 2.0 work better, because it removes the barriers to anyone becoming a producer of content that have existed in the First Internet since the introduction of NAT. If anything, on the Second Internet these trends will be taken even further by decentralizing things. There will be no need for centralized sites like YouTube or Flickr to publish your content, just more sophisticated search engines or directories that will allow people to locate content scattered all over the world. Perhaps that will be the characterizing feature of Web 3.0?

Web 2.0 is a really minor thing compared to the Second Internet. What isn’t pure marketing hype is an evolutionary development of one of the major services (the World Wide Web), out of perhaps a dozen, that the Second Internet will host. These include global telephony, newer forms of communication like decentralized instant messaging, major new peer-to-peer applications (not just file sharing), global broadcast entertainment via multicast IPTV, connectivity between essentially all consumer electronic products, personal healthcare sensor nets, smart building sensor nets, etc.