
The Adoption of TCP/IP

The superiority of TCP/IP as an internetworking standard is not enough to explain its success. If mere technical superiority, or novelty, were sufficient, the QWERTY keyboard, which reflected the layout of a typewriter keyboard, might have sunk into oblivion many years ago (David 1985). It is true that the slow pace of development of OSI standards, as a replacement for X.25, was a factor and that until Internet (initially ARPANET + TCP/IP) was "demobbed" it was not even a candidate (Abbate 1999, 211). But these explanations are not sufficient to explain why organisations turned their backs on considerable investments in X.25 or why TCP/IP took off when it did - the real reasons are more complex.

In the 1980s, a seismic shift in the balance of power within the IT industry took place - a shift from East Coast to West Coast, from the Hudson Valley to Silicon Valley. At the end of it, seemingly indestructible pillars of the industry were posting huge losses [1] and facing extinction (Heller 1994, 1). At the same time another shift of power was taking place within businesses, ebbing away from the central IT departments towards the user departments. The technologies which evolved in the triangle whose points were Berkeley, San Francisco and San Jose would usher in a new distributed model of computing. This model appealed to departmental heads looking for new ways to address their needs as well as to CIOs (Chief Information Officers) looking to restore flagging reputations. It was the Trojan horse in which TCP/IP rode to victory.

Business Needs. At the beginning of the 1980s most large companies ran their information systems on expensive centralised mainframe computers, mostly supplied by IBM, which dominated the market. These computers were large and were cosseted in cavernous computer rooms with air conditioning and sometimes even water cooling. Users interacted with systems through "green screen" visual display units (VDUs) which were character-based rather than graphical. The IBM 3270 display was 80 characters across by 24 high - 80 characters because that's how many columns there were on a punched card!

The mainframe-centred business model entailed high costs of marketing, sales and support. Sales staff cultivated senior management and board-level relationships while IBM customer engineers and systems engineers provided dedicated support, often full-time on site. The office accommodation provided for them by customers was often known as the "IBM Room" even when it was in use by other vendors. The limited challenges to the IBM hegemony came largely in niche markets: supercomputing, where Cray led, and science and engineering, where Digital had a strong foothold based on their popular PDP machines (Heller 1994, 149). With their VAX range, Digital were beginning to compete with low-end mainframes.

The customers for all this computing power had been led to expect great things from it but they had been disappointed. With the centralised computer facility went a centralised organisation, responsible for selecting and operating the equipment as well as building applications for it. In the opinion of the users, it simply wasn't delivering. James Martin, a popular lecturer, writer and commentator on Information Technology, described the situation as the "crisis in data processing" (J. Martin 1984, 4). Applications were taking inordinately long, sometimes years, to build and then failing to meet expectations once delivered. Due to a lack of precision in design specifications, users often had no clear idea of what to expect anyway. Subsequent changes also took unacceptably long and there was a resource conflict between building new applications and changing old ones (euphemistically known as maintenance). The backlog of applications was long and getting longer.

There was no silver bullet that would make these problems go away. Certainly, better development methods, processes and automated support for them would help, but there was a growing feeling that the mainframe computer, and the organisation that managed it, were the problems. For many, this suspicion was confirmed by the appearance of personal computers which brought users face-to-face with easy to use applications like spreadsheets. The idea of departmental computing began to gather momentum.

Technological Change. The changes which took place in the 1980s were facilitated by three main technologies, all originating with established East Coast companies who failed, for different reasons, to exploit them[2]. The companies were IBM, Xerox and AT&T and the technologies were RISC, Ethernet and UNIX. A fourth addition to the mix would be the intelligent workstation, primarily the personal computer. The changes wrought by the adoption of these technologies were profound and would not be fully recognised before the end of the decade, if then.

Reduced Instruction Set Computing (RISC) was originally the brainchild of John Cocke, a computer scientist working at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York. The IBM System/360 architecture had over 100 instructions, but Cocke realised that a well-chosen smaller set of instructions could deliver very high performance at considerably lower cost. Cocke developed his ideas during the 1970s, achieving almost unbelievable speeds relative to the computers then on the market (Carroll 1994, 201). In 1980, he finally produced an experimental computer, the IBM 801, which demonstrated his ideas, but IBM's business was thriving and there was no enthusiasm for a radical new technology. The word was out, however, and groups at Stanford and at Berkeley started work on RISC (Ceruzzi 2003, 289).

Robert Metcalfe left PARC in 1979 to found 3Com, which would market Ethernet (Metcalfe 2006 - 2007, 38). He was able to broker a deal under which Xerox, DEC and Intel collaborated to get Ethernet accepted as a standard - IEEE 802.3.

The UNIX operating system originated in Bell Labs in the 1970s, to meet internal requirements, but soon engaged the interest of computer scientists in universities (Naughton 1999, 175). However, AT&T was still bound by restrictions arising from the US Government's antitrust suit and could not enter the computer business and offer UNIX as a commercial product. In 1974, they opted to provide it to academic institutions at a nominal cost, complete with source code - the human-readable program text from which the operating system is built. This meant that programmers were free to modify and experiment with it. UNIX became ubiquitous in university computer science departments (Naughton 1999, 17).

One programmer who tinkered with UNIX was Bill Joy of the Computer Science Research Group (CSRG) at the University of California at Berkeley (Ceruzzi 2003, 283). Joy and his colleagues, using a DEC PDP-11 machine, improved UNIX in a number of ways including adding support for VDUs where previously only teletype terminals were supported. By 1978 they were distributing their version, Berkeley Software Distribution (BSD), to interested groups[3] (Ceruzzi 2003, 284).

By the time of the second release, BSD had come to the attention of ARPA, which saw it as a candidate host for TCP/IP, and in 1979 a deal was struck for ARPA to fund the work to integrate TCP/IP into BSD. The official version, 4.2BSD, was finally completed in 1983 with Ethernet support also built in (Gillies and Cailliau 2000, 43 & 69). This initiative by ARPA was the single most important factor in ensuring TCP/IP's proliferation and ultimate success because UNIX would be at the centre of the coming departmental and end-user computing revolution (Kirstein 2012).
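
For programmers, the practical legacy of this work was the Berkeley sockets interface, through which 4.2BSD exposed TCP/IP to applications and which survives largely unchanged today. The following sketch, in Python, whose standard socket module mirrors the Berkeley calls, is offered purely as an illustration of how small that interface is; the host address, port number and echo behaviour are invented for the example.

    import socket

    # A minimal sketch of the socket interface that 4.2BSD introduced; Python's
    # socket module mirrors the Berkeley calls (socket, bind, listen, accept,
    # connect). Address, port and echo behaviour are invented for illustration.

    HOST, PORT = "127.0.0.1", 5050   # hypothetical address and port

    def serve_once() -> None:
        """Accept a single TCP connection and echo back whatever arrives."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))        # claim the local address
            srv.listen(1)                 # wait for one client
            conn, _addr = srv.accept()    # block until a client connects
            with conn:
                conn.sendall(conn.recv(1024))   # read up to 1 KiB and echo it

    def ask(message: bytes) -> bytes:
        """Open a TCP connection, send a message and return the reply."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))
            cli.sendall(message)
            return cli.recv(1024)

Running serve_once() in one process and ask(b"hello") in another exercises the same connect-accept-send-receive cycle on which every TCP/IP application, from remote login to the Web, has been built.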

By the time 4.2BSD was completed, Bill Joy had left to join a start-up called Sun Microsystems in Mountain View, not far to the south of Palo Alto (Gillies and Cailliau 2000, 43). Sun (for Stanford University Network) were marketing high-performance workstations running under their operating system, SunOS, based on BSD UNIX. As Christopher Cooper, of the UK academic network JANET, was later to remark:

A Sun workstation came complete with an Ethernet interface and, moreover, it came almost from the start with a derivative of BSD Unix which not only incorporated Ethernet support but TCP/IP software, as recently adopted by ARPANET. Other manufacturers followed suit, as for example DEC with Ultrix (Cooper 2010, Sec2:90).

DEC didn't introduce Ultrix on their VAX range until 1984. When CERN, the European research organisation, was considering UNIX, they found the licence from AT&T inordinately expensive. But, in 1983, they bought a DEC VAX 11/780, the machine used to develop 4.2BSD. They found they could buy a BSD licence quite cheaply and it also came with TCP/IP, unlike the Bell Labs version of UNIX (Gillies and Cailliau 2000, 82).

Hewlett-Packard, with headquarters in Palo Alto, was also to enter the high-performance workstation market in 1985, with the HP 9000 running their UNIX variant, HP-UX (Ceruzzi 2003, 281). Over the next two years both HP and Sun would introduce RISC technology into their products, PA-RISC (Precision Architecture RISC) and SPARC (Scalable Processor Architecture) respectively. HP 9000 servers would be based on PA-RISC technology from 1987. Sun entered the server market in 1991 with its SPARCserver products, around the time that IBM released the RS/6000 series of servers based on their own POWER RISC technology, a decade after John Cocke had demonstrated it.

The new package of UNIX, TCP/IP and Ethernet was certainly a hit with the academic and research communities, the early adopters of these smaller-scale technologies. At CERN,

[...] in 1985 the group responsible for the laboratory's new flagship accelerator, LEP [Large Electron Positron collider], adopted UNIX computers and used TCP/IP to network them (Gillies and Cailliau 2000, 84).

In 1987 CERN took delivery of a Cray X-MP supercomputer and decided to run UNICOS, Cray's version of UNIX, on it. That meant it came with TCP/IP built in, so that is how it was networked (Gillies and Cailliau 2000, 86).

The final piece of the jigsaw began to fall into place in 1984, when Cisco Systems was founded. Cisco routers were used at CERN to meet the tight security constraints that the US government insisted upon for computers such as the Cray. Later that year CERN were able to advise EUnet that there was a Cisco router that would enable them to run TCP/IP over their X.25 lines (Gillies and Cailliau 2000, 87), proof positive of the internetworking capabilities of TCP/IP.

Gateways between TCP/IP and X.25 had been pioneered before Cisco's off-the-shelf products became available. In 1984, Peter Kirstein's group at University College London (UCL) replaced their satellite link to ARPA with TCP/IP running over BT's X.25-based IPSS and on through Telenet (Braden and Higginson 1981). The same approach was adopted by CSNET, an initiative of the National Science Foundation (NSF) to connect US university computer science departments which had not been able to connect to ARPANET. Institutions were able to connect to CSNET via X.25 services like Telenet and onwards to ARPANET via CSNET's link (Cooper 2010, 108). TCP/IP began to spread across the existing global X.25 infrastructure almost by osmosis. In addition, new IP backbones were implemented: NSF's NSFNET from 1985 and, from 1991, the conversion of the UK academic backbone JANET to IP. Others followed.

Government-funded organisations, like CERN, might have been expected to adhere to international standards but

[...] use of Unix and its rich set of applications was beginning to spread to other communities, encouraged by increasing deployment of workstations, all of which came with UNIX and TCP/IP. However, government (including the US) and non-US PTTs subscribed, in principle, to international standards - which did not include TCP/IP - for at least the first half of the 1980s (Cooper 2010, Sec2:91).

Unfortunately, given the slow pace of development of the ISO/OSI standards making process, the inevitable happened, namely the rapid acceptance of the open TCP/IP protocol suite in the late 1980s thanks to their implementation on diverse hardware and software platforms and despite numerous devious political manipulations to prevent the adoption of US protocols by Europe[4] (O. H. Martin 2012, 12).

As Martin suggests, there were those in Europe prepared to go to the stake for an international standard in preference to one that had originated in the US, even if it meant a long wait. However, most did not feel that way and they voted with their feet.

These developments were accompanied by the widespread adoption of the IBM Personal Computer (PC). Its success owed much to the power that IBM still wielded in the business marketplace. The combination of the PC and some of the powerful and (relatively) low cost servers available was proving a highly attractive proposition. By mid-1989 "IBM's growth had slowed as PCs and minis had become the Number 1 and Number 2 products, vaulting past the mainframes that had been its mainstay" (Heller 1994, 260). But what, aside from price/performance, were the benefits of these new technologies to the businesses that adopted them?

Business Solutions. The success of the UNIX servers, and with them TCP/IP, was built on the applications which they enabled and the relative ease with which they could be implemented. An application consists of three components: a user interface, some data and some logic to connect the two. Implementation of the first two components was being radically altered by intelligent workstations and by new database technologies on affordable servers.

Within the business community, "intelligent workstation" came to mean the IBM PC. The PC was different from IBM's other products - its processor and operating system had been outsourced, to Intel and Microsoft respectively, and its architecture was published so that third parties could contribute both hardware and software add-ons[5]. In consequence, the rather miserable, mainframe-like command line interface presented by Microsoft's MS-DOS operating system was soon joined by rich graphical ones, including finally Windows itself in 1985. Soon, the front ends of applications would benefit from buttons, drop-down lists, text boxes and all the other innovations pioneered at PARC. A new generation of programming languages was making it easier and faster to build these graphical user interfaces (GUIs).

This new generation of languages, called object-oriented languages, was about to help fulfil the software engineers' dream of being able to build from reusable components, just as electronic engineers build circuits. These languages included Smalltalk from PARC, C++ from Bell Labs and, later, Java from Sun. Soon, a developer could drag components, like buttons and drop-down lists, from a palette onto a screen design, tailor their size, colour and other properties, and then "wire" them together to form a complete GUI.
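
To give a flavour of the component model being described, the sketch below wires three ready-made components together: a text box, a button and a list. Python's standard Tkinter toolkit is used here simply as a stand-in for the GUI builders of the period, and the widget names and layout are invented for illustration.

    import tkinter as tk
    from tkinter import ttk

    # Three off-the-shelf components "wired" together through a callback,
    # rather than drawn and programmed from scratch.

    def add_item() -> None:
        """Copy the text box contents into the list when the button is pressed."""
        text = entry.get().strip()
        if text:
            listbox.insert(tk.END, text)
            entry.delete(0, tk.END)

    root = tk.Tk()
    root.title("Component demo")

    entry = ttk.Entry(root, width=30)                         # a text box
    button = ttk.Button(root, text="Add", command=add_item)   # wired to add_item
    listbox = tk.Listbox(root, height=6)                      # a list

    entry.pack(padx=8, pady=4)
    button.pack(pady=4)
    listbox.pack(padx=8, pady=8)

    root.mainloop()

The developer's effort goes into choosing the parts and connecting them, not into building them, which is exactly the reuse that the object-oriented languages promised.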

Along with these developments, simpler approaches to data management were becoming available. The relational database was invented by an IBM scientist, Ted Codd, at their San Jose lab (Codd 1970), but a relational database management system was released by RSI (now Oracle) in 1979 (Oracle 2007), a full five years before IBM released their DB2 product. The simplicity of the relational approach came from modelling data as a set of two-dimensional tables corresponding to the subjects businesses wanted to keep data about: products, customers, orders, employees, etc. There was also a query language, SQL, for extracting data from the tables and updating them. SQL, which became an industry standard, was an English-like language based on formal mathematical logic (Codd 1971). In due course, graphical techniques made it possible to generate the SQL for simple queries without programming.
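
As a rough illustration of what the relational model and SQL offered, the sketch below uses Python's built-in SQLite engine in place of the commercial products of the day; the customers and orders tables, their columns and the sample figures are all invented for the example.

    import sqlite3

    # Two classic business tables, modelled as simple rows and columns.
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders    (id INTEGER PRIMARY KEY,
                                customer_id INTEGER REFERENCES customers(id),
                                amount REAL);
        INSERT INTO customers VALUES (1, 'Acme Ltd'), (2, 'Globex');
        INSERT INTO orders    VALUES (10, 1, 250.0), (11, 1, 99.5), (12, 2, 40.0);
    """)

    # An English-like query: total order value per customer, joining two tables.
    rows = con.execute("""
        SELECT c.name, SUM(o.amount) AS total
        FROM customers c JOIN orders o ON o.customer_id = c.id
        GROUP BY c.name
        ORDER BY total DESC
    """).fetchall()

    for name, total in rows:
        print(f"{name}: {total:.2f}")   # e.g. Acme Ltd: 349.50

The query reads almost as a sentence, which is precisely what made SQL attractive to end users and report writers as well as to professional developers.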

The combination of affordable servers running relational databases and PCs with attractive graphical interfaces made it easier to generate appealing new applications. For large companies, mission-critical applications remained on mainframes, where performance and security could be managed, but increasingly snapshots and summaries of corporate data were made available to these new applications. Departmental and end-user computing went from strength to strength as applications like spreadsheets were added to the toolkit. These possibilities were evident in the first half of the 1980s and gradually took hold over the following decade. Some of the technological milestones of this eventful decade are illustrated below.

Another important spin-off from relational database technology was the wider availability of application packages. It became popular to build data model diagrams in which relational tables were represented by boxes connected by lines representing the relationships between them. It soon became clear, as many had suspected, that most businesses were very similar in terms of the core data they maintained and that building bespoke systems for mundane applications like order processing represented a waste of scarce resources. Companies such as SAP, Baan and J D Edwards sprang up, joined later by Oracle, providing suites of applications which could then be tailored to an individual customer's needs. These were targeted at mid-range servers, first UNIX machines and later servers based on the Intel PC processor and running Windows NT.

By the mid-1990s, the client-server model, workstations interworking with databases on servers, was ubiquitous. The application vendors had also adopted this model. And the servers came equipped with TCP/IP. All that was needed to finally cement it in place was a "killer app" and, in 1991, it appeared at CERN in the shape of the World Wide Web. Where the Internet is a network of computers connected by communication links, the Web is a network of documents connected by hyperlinks. This simple innovation for locating and retrieving documents has led to numerous unexpected applications which have driven the second stage of Internet growth.
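
To make the idea of a network of documents concrete, the sketch below fetches a single page over HTTP, which itself runs over TCP/IP, and lists the hyperlinks that page contains. It is only an illustration: the example URL and the class name are placeholders, not anything described in the text.

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        """Collect the href attribute of every <a> (hyperlink) tag encountered."""
        def __init__(self) -> None:
            super().__init__()
            self.links: list[str] = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def links_in(url: str) -> list[str]:
        """Fetch one document over HTTP and return the documents it points to."""
        with urlopen(url) as response:                 # an HTTP GET over TCP/IP
            html = response.read().decode("utf-8", "replace")
        parser = LinkCollector()
        parser.feed(html)
        return parser.links

    if __name__ == "__main__":
        for href in links_in("http://example.com/"):   # placeholder address
            print(href)

Follow each of those links, collect their links in turn, and the result is the web of documents on which search engines and countless other unexpected applications have since been built.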

The Internet, as a wide area networking technology, might very well have been condemned simply to carrying emails and academic documents. But the Web heralded a second wave of Internet adoption as businesses were able to connect over wide area IP networks to their suppliers and customers, including, most significantly, individual consumers. The browser enabled the user interface to be downloaded rather than resident on the workstation, thus avoiding the problems of keeping workstation software up to date. Applications within companies began to be implemented using browsers operating over so-called "intranets" and the manufacturers' proprietary networking software began to lose ground. Some workstations, such as those at supermarket checkouts, would have the browser as their only application. The public switched networks, based on X.25, also fell into disuse and the telecommunications companies turned to providing Internet services, like ADSL, instead.

Notes

1. In 1992, IBM posted a $2.8 billion loss.

2. All had also been the object of attention from the US antitrust authorities.

3. There are conflicting accounts as to whether these copies were free, attracted a nominal charge or were only available to those who already had a license from AT&T. For present purposes, it is unimportant - they were cheaper than AT&T's version.

4. This seems to have consisted largely of groups within the educational and research network organisations trying to block the adoption of TCP/IP (O. H. Martin 2012, 33-34). The disagreements became so heated that they became known, colourfully, as the "protocol wars".

5. Initially CERN used a free version of TCP/IP from MIT on their PCs (Segal 1995).

Bibliography

Abbate, Janet. Inventing the Internet. Cambridge, MA: The MIT Press, 1999.

Braden, Robert T, and Peter L Higginson. "Development of UK/US Network Services at University College London." RFC Editor. 26 May 1981.

Carroll, Paul. Big Blues: The Unmaking of IBM. London: Weidenfeld & Nicolson, 1994.

Ceruzzi, Paul E. A History of Modern Computing. Cambridge, MA: The MIT Press, 2003.

Codd, E F. "A Relational Model of Data for Large Shared Data Banks." Communications of the ACM 13, no. 6 (1970): 377-387.

Codd, E F. "A Data Base Sublanguage founded on the Relational Calculus." Proc. ACM SIGFIDET. San Diego, CA, 1971.

Cooper, Christopher S. JANET: The First 25 Years. The JNT Association, 2010.

David, Paul A. "Clio and the Economics of QWERTY." The American Economic Review 75, no. 2 (May 1985): 332-337.

Gillies, James, and Robert Cailliau. How the Web was Born: The Story of the World Wide Web. Oxford: Oxford University Press, 2000.

Heller, Robert. The Fate of IBM. London: Warner, 1994.

Kirstein, Peter T, interview by Clive Mabey. Interview of Peter Kirstein (24 July 2012).

Martin, James. An Information Systems Manifesto. Englewood Cliffs: Prentice-Hall, 1984.

Martin, Olivier H. "The "hidden" Prehistory of European Research Networking." ictconsulting.ch. May 2012.

Metcalfe, Robert, interview by Len Shustek. Oral History of Robert Metcalfe. Mountain View: Computer History Museum, 2006-2007.

Naughton, John. A Brief History of the Future: The Origins of the Internet. London: Weidenfeld & Nicolson, 1999.

Oracle. "Profit: Anniversary Timeline." Oracle. May 2007.

Segal, Ben. A Short History of Internet Protocols at CERN. April 1995.