Chapter 2

Overview of Network Systems and Services

A network is an interconnected system of computing devices that provides shared, economical access to computer services. The task of managing access to shared services is given to a specialized type of software known as a network operating system (NOS). There are many NOSs available in the marketplace today; the major players are covered in detail in part III, "Software Platforms: NOSs and Clients." This chapter provides a high-level view of the two main types of local area networks (LANs): client/server and peer-to-peer. It also examines the basic hardware structure that comprises the modern LAN, and looks at some of the features and services furnished by this combination of networking hardware and software. In the process, a good many of the basic networking terms and concepts used throughout this book will also be introduced.

The Client/Server Network

Client/server computing is a buzzword that has been bandied about a great deal in the computer press, often without being specifically defined. Basically, the client/server concept describes a computing system in which the actual processing needed to complete a particular task is divided between a centralized host computer, the server, and a user's individual workstation, the client. The two are connected by cables (or infrared, radio, or microwaves) for communication (see fig. 2.1). (Note that the connecting lines in the figure represent the network's pattern of data traffic, and not the physical cabling system).

Figure 2.1 This is the logical architecture of a typical client/server network.

Although they are both PCs with the same basic architecture, the client and server computers usually have very different hardware and software configurations. The primary function of each in relation to the other can be stated in simple terms: the client requests services from the server, and the server responds by providing those services. A few examples of client/server operations might make this distinction clear:

- A workstation requests a data file stored on a file server, and the server locates the file and delivers it over the network.
- A user sends a document to a shared printer, and a print server queues the job and feeds it to the printer.
- A database client submits a query to a database server, which searches its records and returns only the matching results.

In each case, it is clear which system is the client, and which is the server. It is also clear that the computer operating as the server is providing a service to the client that is essential to the completion of the task at hand. Indeed, the server could be providing these services to dozens or even hundreds of clients simultaneously. It is not surprising, therefore, that a server is generally a more powerful, complex, and expensive machine, running more powerful, complex, and expensive software, than the clients with which it interacts.

The differences in function and ability between the server and the client are reflected in their hardware. A server is typically outfitted with large, fast disk drives, high-quality printers, high-speed modems, and other expensive items. Concentrating the most expensive and important pieces of the network at a server allows those items to be protected and maintained by trained professionals, while allowing many more people to use them.

This centralized location of shared equipment is, of course, nothing new to computing. It is the essence of the host-based system, in which a mainframe holds all of the data and runs all of the applications, and users interact with the host using terminals to input data and view results. Clearly, another aspect is necessary for a system to be considered a client/server network, and that is the distributed processing previously mentioned. In a host-based system, all the important processing happens on the mainframe. The application running on the mainframe even controls most of the functions of the users' terminals, telling them where to display certain characters, when to beep and for how long, and when to accept user input.

In a client/server relationship, the server does some of the necessary data processing, and the client does some. The degree to which the processing tasks are separated between the two machines can vary greatly, and this is a source of confusion to many. When a user launches an application at his workstation, it may be a spreadsheet whose software is stored and operated solely within the workstation computer, or it may be a database client that interacts with a server to bring information to that workstation. If the data file being opened by the spreadsheet is stored on a network file server, both of these instances can, by strict definition, be called client/server applications. Server processes are needed to provide both the spreadsheet and the database with the data files that they need to operate.

However, there is a question of degree here that cannot be overlooked. Once the file server has delivered the spreadsheet's data file to the workstation, its participation in the process is ended, until it is time to write the modified file back to the server. The database application, on the other hand, requires the continuous participation of both sides of the client/server paradigm to function. The database client is useless (and sometimes cannot even be launched) unless the server process is active.

This is what is really meant by client/server computing. Instead of the entire functionality for multiple users being carried out by a single computer, as in the mini/mainframe situation, the processing capabilities of the server and all of the client machines are combined into a whole that is inherently more efficient. This is because of reductions in the amount of data being transmitted over the network, as well as the combined increase in pure computing power.
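The division of labor described above can be made concrete with a small sketch. The following minimal illustration is written in Python for brevity; the host, port, and toy lookup protocol are invented for the example and are not taken from any real NOS.

```python
# A minimal sketch of the client/server division of labor, using
# Python's standard socket library. HOST, PORT, and the toy "query"
# protocol are illustrative assumptions.
import socket

HOST, PORT = "127.0.0.1", 5000
RECORDS = {"smith": "555-0114", "jones": "555-0187"}  # server-side data

def run_server():
    # The server does the heavy data processing: it holds the records,
    # performs the lookup, and returns only the result to the client.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _ = srv.accept()
        with conn:
            query = conn.recv(1024).decode().strip()
            reply = RECORDS.get(query, "not found")
            conn.sendall(reply.encode())

def run_client(query):
    # The client does its own share of the work: building the request
    # and presenting the reply; it never sees the whole data set.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(query.encode())
        print(cli.recv(1024).decode())
```

To try it, run run_server() in one session and run_client("smith") in another. The point is that the record search happens on the server, while the client handles only the request and the display, echoing the database example above.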

There are several types of systems that can be considered servers in a client/server environment. At the most basic level is a file server that performs high-performance data storage duty for multiple clients, and perhaps provides shared print services as well. There can also be large application servers, running high-volume applications (such as database access, updating, indexing, selection, and retrieval) on behalf of less powerful clients. Smaller, special-purpose servers may provide fax services, electronic mail pickup and delivery, or modem sharing.

Even though many servers have very powerful, expensive hardware, it is the software differences between clients and servers that really distinguish them. In some client/server networks, the server hardware may not be all that different from that of the client. Most of these servers are ordinary IBM-compatible PCs, as are most of the clients. The server may have 16M, 32M, or more of RAM, 1G or more of disk space, and perhaps a tape backup system, but it is not unusual today to find PCs with similar hardware resources being used as clients or stand-alone systems.

The server, though, is running a NOS, such as Novell NetWare, Microsoft's Windows NT Advanced Server, or Banyan VINES, or it is running a server application on top of a high-performance general purpose operating system (OS), like IBM LAN Server or Microsoft LAN Manager running on OS/2. In any case, the software has special features: extensive security measures designed to protect the system and the data it contains from unauthorized access, enhanced file system software with features to protect files from damage or loss while handling simultaneous requests from multiple users, and a communications system capable of receiving, sorting out, and responding correctly to hundreds or thousands of varied requests each second from different clients.

To support the demands placed on a server in a client/server environment, the server software usually runs on a computer dedicated solely to the purpose of hosting that software and supporting its services. It might be possible to use the server computer to run a regular software application or to support a client session at the same time as the server software is functioning, but server processes and functions have priority access to the system's resources.

The Peer-To-Peer Network

While client/server networks are distinguished by how different the clients and servers are, each with a clearly defined role, peer-to-peer networks are just the opposite. There are still clients and servers in a peer-to-peer system, but generally, any fully functioning client may simultaneously act as a server. The resources of any computer on the network (disk drives, files, applications, printers, modems, and so on) can be shared with any other computer, as shown in figure 2.2 (again displaying the pattern of data communications between the nodes and not the physical cabling diagram).

Figure 2.2 This is the logical architecture of a typical peer-to-peer network.

Peer-to-peer networking software might be included in the base client OS, or it might be purchased separately as an add-on feature. Until recently, it was common for all of the nodes in a peer-to-peer network to be running the same client OS: DOS, Windows for Workgroups, OS/2 Warp Connect, Windows NT, or Macintosh System 7. Some exceptions to this rule have always existed. A PC running DOS can be a client in a Windows for Workgroups network, but it cannot be a server. With TOPS, both PCs and Macintoshes can share resources as clients and servers. Artisoft's LANtastic peer-to-peer systems can accommodate Macintoshes as clients, but servers must be DOS-based. A PC running OS/2 usually can be configured to be either a client or server in a DOS-based peer-to-peer network by running the DOS software in one or more DOS virtual machines, although the network resources may not be available to OS/2 applications.

The release of Windows 95, however, signals the achievement of Microsoft's goal of providing peer-to-peer interoperability between all of its Windows OSs. Windows for Workgroups, Windows 95, and Windows NT all ship with a fully integrated peer-to-peer networking functionality that allows all three OSs to share resources with any of the others. The ease and simplicity with which basic networking functions can be configured and enabled in these OSs, not to mention the economy of their inclusion in the standard OS package, bodes ill for add-on products like LANtastic and Personal NetWare, which offer little in the way of additional features or performance.

Peer-to-peer networks often comprise only a few workstations, perhaps sharing a printer or two and the data files on each other's hard drives. The practical upper limit on the number of nodes that can function as both clients and servers, while keeping performance at reasonable levels, is usually somewhere between 10 and 25. Beyond this number, a peer-to-peer machine can be outfitted with additional high-performance hardware and used as a dedicated server. It then can handle as many as 50 or 100 clients that aren't too demanding. When networks grow beyond this point, though, it is time to migrate to a full client/server architecture.

The distinctions between client/server and peer-to-peer NOSs can get blurry at times. It is very common for a computer to be more or less dedicated to the role of a server in a peer-to-peer network to provide good performance and responsiveness to clients' requests. Conversely, the server OS in most client/server networks allows one or more client sessions to be run on the same computer as the server. In NetWare 2.x, the server can be set up in non-dedicated server mode, which allows a single DOS client session on the server. NetWare 4.x is usually thought of as a stand-alone client/server NOS, but it can be run as an application atop OS/2, allowing multiple OS/2, Windows, and DOS applications to be run on the same computer. LAN Manager, LAN Server, and Windows NT Advanced Server all allow one or more client sessions to be active on the server. Older computers such as IBM XTs and ATs can live on in a client/server environment as specialized print, mail, and fax servers, long after their usefulness as client workstations has ended.

Further confusing the picture is the fact that the client software in a peer-to-peer system is often nearly the same as the client software in a client/server system. Computers running Windows for Workgroups, Windows 95, or Windows NT in a peer-to-peer system can be configured to simultaneously access resources on many client/server systems, such as NetWare, Windows NT Advanced Server, or LAN Manager. LANtastic clients can also use the file and print services of NetWare. In fact, Artisoft resells a modified version of NetWare 4.x as the basis of its CorStream Server client/server product. Novell's Personal NetWare uses the same basic client software as the latest versions of its client/server packages, NetWare 3.12 and NetWare 4.1. Using this software, a DOS or Windows PC can easily access resources on both a Personal NetWare server and a dedicated NetWare 3.x or 4.x server.

Back in the days when most corporations regarded mainframes and minicomputers as their only strategic computing platforms, individual branches and departments put in PC networks to quickly and economically share information and services. As these networks have grown and become connected with each other, and now function as the strategic computing platform in many corporations, peer-to-peer networking software is being used within larger networks to continue sharing resources and information on a local level.

Client/Server versus Peer-To-Peer

Without question, the primary advantage of the peer-to-peer network is its low cost. For little more than the price of network interface cards and cabling, a handful of stand-alone PCs can be assembled into a basic functional small business network, often without the need for high-priced consultants or administrative talent.

This last point should not be taken lightly. A small business that chooses to install a client/server network (perhaps anticipating future growth) is often placed in the difficult position of being unable to run the network itself, while at the same time being unable to justify employing a full-time network administrator. Such a business is usually forced, as a result, to engage consultants at a high hourly fee.

The administrative factor, along with the high cost of the NOS software itself (while the peer-to-peer NOS is, in many cases, free with the client OS), presents a good reason for the small business to stick with peer-to-peer. This is fine, as long as the requirements of the business remain within the scope of the functionality that peer-to-peer systems provide. File and print services are sufficient for many types of operations, but if the business has the potential for generating truly large amounts of data that will have to be organized and retrieved on a regular basis, then the judicious course might be to run a client/server network from the very beginning.

Fortunately, the task of migrating from a peer-to-peer system to a client/server one has become much simpler over the years. Indeed, in the case of a Microsoft Windows network, simply adding a machine running Windows NT Advanced Server to an existing network of Windows for Workgroups or Windows 95 machines all but accomplishes the task. Adding a NetWare server to such a peer-to-peer network is not terribly difficult either, although it will be necessary to reconfigure all of the workstations to accommodate both network types.

Choosing between a peer-to-peer and a client/server network is an important decision. The pros and cons of both are presented in table 2.1. It is, however, not nearly as crucial as some of the other decisions presented in this chapter, and throughout this book. If the selection of the networking hardware and infrastructure is made well, then a low-cost peer-to-peer network can be upgraded to a client/server system with virtually no expenditures wasted.

Table 2.1 Advantages and Disadvantages of Peer-To-Peer and Client/Server Networks

Peer-To-Peer

Advantages: Low cost; networking software is often included with the client OS; simple enough to install and administer without a full-time professional.
Disadvantages: Reasonable performance only up to roughly 10 to 25 fully active nodes; services largely limited to file and print sharing; limited security.

Client/Server

Advantages: High performance for many simultaneous clients; extensive security and data-protection features; centralized administration, protection, and backup of shared resources; scales to large networks.
Disadvantages: Expensive NOS software and server hardware; normally requires a dedicated server machine; administration usually demands a trained professional or outside consultants.

Fundamental Networking Concepts

Having defined the roles of clients and servers in a LAN, we shall now consider the network itself; that is, the medium that links all of the component computers and the data communications conducted through it. Computer networking is a highly complex and technical subject. What appears to be a simple network transaction, such as the transfer of a file from server to workstation, is actually an extraordinarily complicated procedure, encompassing many different forms of communication on many levels.

Fortunately, the average LAN administrator does not usually need to be an expert on all facets of the communication that occurs between computers. This section is designed to serve as an introduction to some of the more in-depth discussions occurring later in this book. The basic vocabulary of networking will be introduced, and we will take a brief look at the hardware and network types covered elsewhere in greater depth.

Protocols

In any discussion of the technologies and methods underlying network communications, the term network protocol will be used often (perhaps too often). A protocol is nothing more than a set of rules and specifications governing how two entities communicate. Human societies develop protocols at many levels. Languages are protocols that are formulated to allow people to understand each other. Improper use of language results in misunderstanding, or prevents any communication at all. Later, more advanced protocols evolve that specify what is considered proper and polite behavior in the use of language. When someone violates these rules, they are often considered rude or disruptive and may be ignored or perhaps even punished.

Computers also have certain defined protocols specifying proper behavior for communication between them. When any hardware or software violates these rules, proper communication may not be able to take place over the network, even with systems that are following the rules. Messages generated by a machine not conforming to accepted protocols probably won't be acknowledged by other computers. As with humans, such messages will be considered disruptive noise and will be ignored.

All of the rules and standards that make computer-to-computer communication possible are properly called protocols. As with humans, however, communication occurs at many levels, so it is helpful to distinguish among several different types of protocols. This is where a construction such as the OSI network model comes in handy. The OSI model is designed to illustrate the seven basic levels of network communication, ranging from the physical medium of the network all the way up to the application interface appearing on the workstation. It is covered in much greater depth in chapter 3, "The OSI Model: Bringing Order to Chaos."

NCP (NetWare Core Protocol), TCP/IP, Ethernet, and UTP are all technically considered protocols, even though they refer to very different communication levels that have little in common beyond being components of a network transmission. A quick look at each protocol, in reference to the OSI network model, will help make the distinctions clear.

In accordance with the OSI network model, unshielded twisted pair (UTP) is a protocol that refers to the physical layer, specifying a type of cable that can be used to connect computers, how the connectors need to be wired together, and the electrical characteristics of the signals on the wire. To avoid confusion when talking about this and other physical layer protocols, people usually refer to cabling systems, cabling types, or other terms clearly identifying that they mean the physical link between systems. Since the physical layer need not take the form of an actual cable and may use infrared or radio transmissions, for example, a general term that covers all possibilities is transmission medium.

Ethernet is a protocol that functions at the data link layer, providing the basic rules for how the signals transmitted over the network medium must be formatted to contain data, how physical stations on the network are uniquely identified, and how access to the physical medium is governed so as to allow all of the stations on the network an equal opportunity to transmit. Ethernet is one of several data link layer protocols in use today and will function properly with UTP, or one of several other physical layer protocols. The protocols operating at physical and data link layers of the OSI model are often considered together because they are dependent on each other. As explained fully in the chapter on the OSI model, the use of a particular protocol at the data link layer imposes certain limitations on the physical layer and vice versa.

Transmission Control Protocol/Internet Protocol (TCP/IP) is a broad set of rules and standards, sometimes called a protocol suite, that works at the network, transport, and session layers of the OSI model. These rules control how data on the network is sent on the correct path to reach its intended destination, how some communication errors are handled, and how logical connections between nodes on the network are established, maintained, and ended. There are several other protocols of this type in use today. Partially because these protocols cover several adjacent layers of the OSI model, they are sometimes referred to as protocol stacks.

The NetWare Core Protocol (NCP) is a set of rules that govern how applications and computer systems communicate at the presentation and application layers to carry out common data processing functions such as creating, reading, writing, and erasing files, sending and accepting print jobs, and logging on and off a server. Other common protocols of this type are Server Message Blocks (SMB), the Network File System (NFS), and AppleShare/AppleTalk.

As you can see from figure 2.3, these four examples encompass all of the layers of the OSI model, moving from bottom to top. The four protocols discussed above need not be, and usually are not, used together. They are provided merely as common examples of the kinds of protocols running at the different levels of network communication. The rest of this chapter covers some basic concepts concerning these different levels, ranging from hardware to software as we progress again from the physical to the application layer.

Figure 2.3 These are four examples of networking protocols.
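To make the layering concrete, here is a toy Python sketch of encapsulation: each layer wraps what it is handed from above with its own header before passing it down. The string tags are purely illustrative; real headers are binary structures defined by each protocol.

```python
# A toy illustration (not any real protocol's format) of how each
# layer wraps the data handed down from the layer above.
def encapsulate(payload: bytes) -> bytes:
    app   = b"NCP|" + payload          # application/presentation (e.g., NCP)
    trans = b"TCP|" + app              # transport/session (e.g., TCP)
    net   = b"IP|"  + trans            # network (e.g., IP)
    frame = b"ETH|" + net + b"|FCS"    # data link (e.g., an Ethernet frame)
    return frame                       # the physical layer carries these bits

print(encapsulate(b"READ file.txt"))
# b'ETH|IP|TCP|NCP|READ file.txt|FCS'
```

The receiving station performs the same steps in reverse, with each layer stripping off its own header and handing the remainder up.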

Cable Types

The first computer networks were constructed on a foundation of serial ports and cables, like those used to connect a modem or serial printer to a PC. Serial networks are hampered by severe speed and distance limitations but are still used today. In fact, DOS 6.2 comes with Interlink, a simple peer-to-peer network application that allows two PCs to be connected with serial or parallel cables and redirects drives or printers to the other PC.

Coaxial Cable

The first big advance in networking came when coaxial cable, similar to the cable that carries radio and TV signals over long distances in cable TV systems, was adapted for data communications. Coaxial cable contains a single conducting wire, surrounded by an insulating material, which in turn is surrounded by a metal sheath or braid that can be grounded to help shield the signal on the conductor from interference, and then still more insulation. All of these components share a single axis at the center of the cable, hence the term coaxial.

The first systems to use coaxial cable borrowed the signaling techniques of cable TV systems. The stream of data bits being transmitted was converted into a radio wave signal of a specific frequency and broadcast on the wire. This system, called broadband signaling, has the advantage of allowing multiple signals, or channels, to be carried on a single cable at the same time, using a technique called frequency-division multiplexing (FDM) that assigns different base frequencies to different channels. The ability of a single cable to carry as many as 12 simultaneous TV channels must have seemed at the time like an unlimited amount of bandwidth that could never be used up, especially when compared to the slow teletype speeds of the serial connections that were then the standard.

Although analog TV signals can be carried for several miles on coaxial cable before acceptable signal quality is compromised, such distances could not be achieved with digital data. Any electrical signal carrying information can be thought of as a series of waves. The rate at which the waves repeat is the frequency of the signal, and the height of the waves is the strength, or amplitude, of the signal. The information in a signal, whatever the data type, is carried in small variations of either the frequency or amplitude. While a signal might start out well-defined, as it travels over the wire the variations that carry the information become blurred and distorted. The effect is similar to looking at an object through multiple panes of window glass. The image that passes through one pane is almost indistinguishable from looking directly at the actual object. As more panes are added, the object becomes blurry and indistinct, until finally it is unrecognizable. This progressive weakening and blurring of the signal is called attenuation.
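Attenuation is conventionally measured in decibels by comparing the power put into the cable with the power that comes out the far end. The formula below is standard signal engineering rather than something defined by this chapter:

$$A_{\mathrm{dB}} = 10 \log_{10}\!\left(\frac{P_{\mathrm{in}}}{P_{\mathrm{out}}}\right)$$

By this measure, a signal that emerges with half the power it started with has suffered about 3 dB of attenuation.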

Digital data is much more susceptible than analog to the noise and signal distortions that are introduced as signals travel greater distances. The amount of distortion that would cause an annoying but acceptable amount of "snow" in a TV signal will utterly destroy the information carried in a digital data stream. Because of this, network systems using broadband coaxial cable are limited to runs of a few dozen (or at most a few hundred) feet, unless signal repeaters are used to regenerate the signal periodically. Simple amplifiers would not suffice, as they would only amplify the noise and distortion that the signal picks up as it travels down the wire.

The next advance in networking used similar coaxial cable, but instead of encoding the stream of digital data bits into a radio signal, the bits were directly transmitted on the cable with different voltage levels corresponding to different values, each level being maintained for a small but consistent time period. This is known as baseband signaling, the method still used in the vast majority of LANs today. Rather than decoding small variations in amplitude or frequency from a larger signal to decipher information, a receiving station now simply measures the voltage level on the wire at specific intervals.

This method of signaling allows much cheaper transmitting and receiving apparatus to be used on a network. It does raise a problem of signal collisions, though: only one station is able to successfully transmit at a time. If two or more nodes transmit at the same time, neither signal will be intelligible, but this problem is dealt with by the next level in the OSI model. With the development and use of baseband signaling, the limiting factor in network distance has become not how far a clean signal of a certain frequency can be transmitted, but how fast the signal can get from one end of the cable to the other. Signal quality considerations still come into play, of course, particularly in limiting the data speed, or signaling rate, of the network, which decreases as distance goes up for any particular type of cable.
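A toy model may help here: in baseband signaling, each bit becomes a voltage level held for one bit time, and the receiver samples the wire at the same rate. The two-level scheme in this Python sketch is a deliberate simplification; real Ethernet actually uses Manchester encoding, which puts a voltage transition in every bit time.

```python
# A toy model of baseband signaling: one voltage level per bit time,
# sampled by the receiver at the same fixed rate. The +5V/0V scheme
# is an illustrative assumption, not a real line code.
def transmit(bits):
    return [(5.0 if b else 0.0) for b in bits]    # hold one level per bit

def receive(samples, threshold=2.5):
    return [1 if v > threshold else 0 for v in samples]

signal = transmit([1, 0, 1, 1, 0])
print(signal)            # [5.0, 0.0, 5.0, 5.0, 0.0]
print(receive(signal))   # [1, 0, 1, 1, 0]
```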

Coaxial cable with a single conductor is an unbalanced transmission medium, as opposed to balanced media that use two similar wires carrying signals of opposite polarity or voltage. Balanced media are more resistant to noise and interference. Some IBM systems use a balanced cable type, similar to coaxial, that has two conductors inside the shielding braid; this is called twinax.

Most baseband coaxial network systems have a maximum signaling rate of 10 megabits per second or less. Depending on the specific type of cabling used, they work at this speed over distances of more than a mile when simple repeaters are used to regenerate the signal. For a long time, coaxial was the only economical choice for use in high-speed LANs. The drawbacks to setting up and maintaining a coaxial cable system include the fact that the cable is difficult and expensive to manufacture, is difficult to work with in confined spaces, can't be bent too sharply around tight corners, and is subject to frequent mechanical failure at the connectors. Because a failure at any point usually renders the entire cable system unusable, the overall reliability of coaxial systems has always been less than stellar.

Twisted-Pair Cable

The biggest network in the world, the telephone network, originally used only twisted-pair cabling and still does for most local connections. The name comes from the fact that each individually insulated conductor is part of a pair, making this a balanced medium, and that each pair is twisted together along its length, which helps further protect it from interference. Twisted-pair cabling comes shielded, with a metal sheath or braid around it, similar to coaxial, or unshielded. The two are commonly known as STP (shielded twisted pair) and UTP (unshielded twisted pair). All signaling over twisted pair is of the baseband type. STP is commonly used in token-ring networks, but UTP is by far the most popular LAN cabling protocol today, used in the majority of Ethernet networks and in many token-ring networks as well.

Fiber-Optic Cable

A relatively new technology, fiber-optic cable uses light signals transmitted over thin glass fiber to carry data. Fiber-optic cable offers higher speed, greater reliability, and longer distances than other media. It is more expensive to install than coaxial or twisted pair but is usually cheaper to maintain. It often is used at larger companies to interconnect servers or departmental networks at high speed, over a long distance, or where high security is desired.

A high-speed link dedicated to the connection of servers and other selected network components (as opposed to workstations) is called a backbone. Fiber-optic cable is completely resistant to electromagnetic interference, and signals cannot be intercepted without breaking the cable. It is also far less susceptible to attenuation than copper cable. Ethernet over fiber-optic cable transmits at 10 Mbps at distances of over two miles, much farther than over any other medium. FDDI is a standard network topology that allows 100 Mbps communications between nodes up to a mile or more apart.

Network Topologies

A topology is the pattern by which the cabling medium is used to interconnect the various computers that form the network. Like the cable types discussed above, the topology used is intimately connected with the data link layer protocol. One cannot simply choose a cable type and wire it according to any topology desired. The mechanism by which the data link layer protocol passes data from the computer onto the network imposes definite restrictions on the way that the cable is wired. The amount of attenuation inherent to the medium, the speed of the signal, and the length of the cable segments are all factors that must be accounted for by the data link protocol. A topology is therefore selected in most cases as the result of a data link protocol decision.

The Bus Topology

There are several general categories of LAN physical topologies, which really are just different ways of stringing cable from station to station. The simplest, and the first true LAN topology, is the bus topology (see fig. 2.4), which is a single long cable with unconnected ends, to which all of the networked computers (sometimes called nodes) are attached along its length. The bus topology is most often used with coaxial cable, although other computing interfaces that utilize components wired in series (such as SCSI) are sometimes called a bus.

Figure 2.4 This is a representation of the bus topology.

Depending on the width (and resulting unwieldiness) of the cable, the bus may extend directly from computer to computer, as in thin Ethernet, or it may be wired to the vicinity of the computer, with a short cable used to attach the computer to the bus cable, as in thick Ethernet. Both ends of a bus must always be terminated; otherwise, signals reaching the end of the cable may echo back along its length, corrupting transmissions.

Like any circuit wired in series, the bus topology is inherently unreliable in that a break anywhere in the cable disrupts service to all of the stations on the far side of the break. Like old-fashioned Christmas tree lights, a single blown bulb can affect the entire string, or the entire network.

The Star Topology

Although developed later, the star topology has become the most popular topology in networking, due in no small part to its overcoming the Christmas-tree light effect. In a star topology, each machine on the network has its own dedicated connection to a hub or concentrator. A hub is a device, installed in a central location, that functions as a wiring nexus for the network. Servers and workstations alike are attached to the hub, as shown in figure 2.5, and if any particular cable segment fails, only the computer attached by that segment is affected. All of the others continue to operate normally. The star topology is used mostly with twisted-pair cabling, usually in an Ethernet or token-ring network environment.

Figure 2.5 This is a representation of the star topology.

If there is a drawback to the star topology, it is the additional cost imposed by the purchase of one or more hubs. Usually, though, this expense is offset by the fact that twisted-pair cable is easier to install and cheaper than coaxial.

The Ring Topology

Essentially, a bus topology where the two ends are connected forms a ring topology (see fig. 2.6). This is primarily a theoretical construct, though, for very few networks today are actually wired in a ring. The popular token-ring network type is actually wired using a star topology, but there can be a logical topology that differs from the physical topology. The logical topology describes how the network signaling works, rather than how the cable looks. In the case of a token-ring network, special hubs are used to create a data path that passes signals from one workstation to the next in a procession that concludes at the station where the transmission originated. Thus, the topology of this network is that of a physical star but a logical ring.

Figure 2.6 This is a representation of the ring topology.

The Mesh Topology

The mesh topology is another example of a cabling system that is almost never used. Describing a network in which every computer contains a dedicated connection to every other computer, it is another theoretical construction that overcomes one networking problem while creating another. One of the fundamental problems of networking at the data link layer of the OSI model is the method by which signals from many different workstations are to be transmitted over a shared network medium without interference. The mesh network eliminates this problem by not sharing the medium.

As shown in figure 2.7, each workstation has its own link to every other workstation, allowing it to transmit freely to any destination at any time when it is not actually receiving a transmission. Of course, problems quickly arise as the number of workstations grows. Even a modest 10-node network would require 45 separate cable runs and 90 NICs, making it by far the most expensive network ever created, both in terms of hardware costs and in maintenance. We won't even speak about finding computers that can run nine NICs each.
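The arithmetic behind these figures is worth a line. In a full mesh of n nodes, every node needs a link (and a NIC) for each of the other n - 1 nodes, and each link is shared by the two nodes it joins:

$$\text{links} = \frac{n(n-1)}{2}, \qquad \text{NICs} = n(n-1)$$

For n = 10, that works out to 45 cable runs and 90 NICs, with 9 NICs in every machine; the cost grows with the square of the number of nodes.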

Figure 2.7 This is a representation of the mesh topology.

Hybrid Topologies

In many cases, the network topologies described above are combined to form hybrids. For example, multiple hubs, each the center of a star, are often connected using a bus segment. In other cases, hubs are added to the ends of segments extending out from another hub, forming what is sometimes called a tree topology. As always, care must be taken when using these techniques to ensure that the requirements set down by the data link layer protocol are not violated too severely. There is a certain amount of leeway in many of the specifications, but wanton disregard of the network's signaling requirements can be severely detrimental to the performance of the network.

Data Link Standards

Several data link standards in common use today are briefly introduced in the following sections. This list is by no means exhaustive; there are other possible topologies that have been used on LANs in the past, are still being used today, and will continue to be used in the future. However, the vast majority of new networks installed today utilize one or more of these protocols. The following sections are brief introductions to the data link protocols and some of the concepts that they introduce. They are all covered in greater detail in chapter 7, "Major Network Types."

In any data link protocol, a quantity of data is packaged into a frame, or packet, which has addressing, routing, control, and other information added to it so that other stations on the network can recognize the data that is meant for them, and know what to do with data meant to go elsewhere.
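As a schematic example, a frame can be pictured as a packed header followed by the data. The field sizes in this Python sketch echo common practice (6-byte addresses, a 2-byte length) but are an illustrative assumption rather than a reproduction of any particular standard's frame format.

```python
# A schematic frame layout: addressing and control fields are added
# around the data so stations can recognize traffic meant for them.
import struct

def build_frame(dest: bytes, src: bytes, payload: bytes) -> bytes:
    # 6-byte destination, 6-byte source, 2-byte payload length
    header = struct.pack("!6s6sH", dest, src, len(payload))
    return header + payload

frame = build_frame(b"\x02\x00\x00\x00\x00\x01",   # destination address
                    b"\x02\x00\x00\x00\x00\x02",   # source address
                    b"hello")
print(len(frame))   # 14-byte header + 5-byte payload = 19
```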

Ethernet

Originally developed by the Digital Equipment, Intel, and Xerox corporations in the late 1970s, Ethernet has since been adopted as an international standard and has become the predominant LAN data link protocol in use today.

Ethernet was originally specified with a 10 Mbps signaling rate, and uses a method of network access control known as Carrier Sense Multiple Access/Collision Detection (CSMA/CD). Instead of passing a token from station to station to allow access to the network for transmissions, any Ethernet workstation is allowed to transmit a frame at any time, as long as the network is not occupied by transmissions from other stations.

When two or more stations do begin transmitting at the same time, a collision occurs. The stations involved usually detect the collision and stop transmitting. After a randomly chosen period of time (only a few milliseconds at most) each of the stations attempts to transmit again, after first listening to make sure that the line is clear. Small numbers of collisions and other errors are normal on an Ethernet network.

Because it is impossible to determine with precision which node will be the next one to transmit and equally impossible to guarantee that the transmission will be successful, Ethernet is called a probabilistic network. There is a small, but very real, chance that a particular frame will not be able to be transmitted successfully within the time required because of the possibility of collisions happening at every try. As the amount of traffic increases on an Ethernet network, the number of collisions and errors increases, and the chance of a particular frame being communicated successfully decreases.

The bottom line is that as an Ethernet network grows, it reaches a point where adding active stations causes the maximum data throughput on the network to decrease. When a particular network will reach this saturation point is hard to predict with certainty. The mix of traffic, applications, and activity varies greatly from one network to another and also changes on a network over time. To avoid saturation problems, strict limits on cable segment lengths and the allowed number of nodes are specified for each of the cable types that can be used for Ethernet.
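The saturation behavior can be illustrated with a simple contention model. The Python sketch below is not the real 802.3 algorithm (it ignores carrier sense, backoff timing, and frame lengths); it simply shows that when stations transmit independently, the fraction of usable time slots first rises and then falls as active stations are added.

```python
# A toy contention model: in each slot, every station independently
# tries to send with some probability. A slot carries a frame only if
# exactly one station tries; two or more is a collision, zero is idle.
import random

def throughput(stations, attempt_prob, slots=100_000):
    successes = 0
    for _ in range(slots):
        senders = sum(random.random() < attempt_prob
                      for _ in range(stations))
        if senders == 1:       # exactly one sender: frame goes through
            successes += 1
    return successes / slots

random.seed(42)
for n in (5, 20, 80):
    print(n, "stations:", round(throughput(n, 0.05), 3))
# Useful throughput rises with more stations, then collapses as
# collisions dominate: the saturation effect described above.
```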

Due to continued development of the standard during its long history, Ethernet has evolved into a very flexible communications protocol that can be used with a wide range of transmission media. When referring to the various Ethernet cabling standards, abbreviations like 10Base2 are used. As shown in figure 2.8, the first number, 10, refers to the signaling rate for the medium in megabits per second (Mbps). The word after the first number is either "Base" or "Broad," indicating a baseband or broadband transmission. The last number refers to the length, in units of 100 meters, to which a single cable segment is limited. Later additions to the Ethernet standards began using letters in this position (like "T" for twisted pair) to indicate cable type.

Figure 2.8 Here is the explanation for the 10BaseX cable naming standard.
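The naming convention lends itself to a small decoder. The following Python sketch is a convenience for readers, not an official IEEE tool, and it applies the "100 meters per unit" rule exactly as stated above; in practice the numbers are rounded (a 10Base2 segment actually tops out at 185 meters).

```python
# Decode designations of the 10Base5/10Base2/10BaseT style described
# in the text. The parsing rules come from the naming convention
# itself; the function name is our own.
import re

def parse_ethernet_name(name: str) -> dict:
    m = re.fullmatch(r"(\d+)(Base|Broad)([0-9]+|[A-Z]+)", name)
    if m is None:
        raise ValueError(f"unrecognized designation: {name}")
    speed, signaling, tail = m.groups()
    info = {"speed_mbps": int(speed),
            "signaling": "baseband" if signaling == "Base" else "broadband"}
    if tail.isdigit():
        info["max_segment_m"] = int(tail) * 100   # length in 100 m units
    else:
        info["media_code"] = tail                 # e.g., "T" = twisted pair
    return info

print(parse_ethernet_name("10Base2"))
# {'speed_mbps': 10, 'signaling': 'baseband', 'max_segment_m': 200}
print(parse_ethernet_name("10BaseT"))
# {'speed_mbps': 10, 'signaling': 'baseband', 'media_code': 'T'}
```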

The most common forms of Ethernet are 10Base5, 10Base2, and 10BaseT, which are known colloquially as thick Ethernet, thin Ethernet, and UTP, respectively. The latter type uses telephone cabling, as discussed above, while the others both use forms of coaxial cable. Several less common Ethernet topologies are also available, as well as some new varieties running at 100 Mbps that are rapidly gaining in popularity. Many Ethernet networks use a combination of two or more topologies. For example, a thick Ethernet cable may be used as a backbone running throughout a large facility, or multiport thin Ethernet repeaters may be used to branch off segments connected to 10BaseT hubs, which are used for the final connection to the workstations. These mixed-media networks are allowed as long as each segment type adheres to the Ethernet standards for that medium.

Ethernet NICs, hubs, and cabling equipment are readily available for all media types and all PC buses. There are even pocket-sized external adapters that attach to a laptop's parallel port or to a Macintosh's SCSI port to enable network access for computers without expansion slots. Universally recognized as an economical data link protocol offering good performance for traditional network applications, Ethernet can be used for networks running virtually any of the NOSs and transport protocols available today, including NetWare, Windows NT, LAN Server, VINES, and many UNIX variants.

Token Ring

Token Ring was developed in the mid-1980s by IBM, and accepted as an industry standard shortly thereafter. Stations on a token-ring network are allowed access to the network medium only when they are in possession of a token, a special frame that is passed from one station to the next. Token ring is physically wired in a star configuration, with cables running from each station to a central access point called the multistation access unit (MSAU). The MSAU attaches each station only to the one preceding it in the ring (the upstream neighbor), from which it receives transmissions, and the one after it in the ring (the downstream neighbor), to which all its transmissions are sent. MSAUs are interconnected by a cable running from the Ring In port of one MSAU to the Ring Out port of another. The last MSAU's Ring Out is connected to the first MSAU's Ring In. This forms a logical ring topology because a frame originating at a particular workstation is passed to every other station in turn, finally returning to its origin, where it is removed from the network.

The original Token Ring signaling rate was 4 Mbps. Today, many token-ring networks run at 16 Mbps. Most token-ring equipment such as NICs and MSAUs can be configured to operate at either rate, although all stations on a ring must be set to the same speed. In a token-ring network, it can always be precisely determined who will next be able to transmit, and their transmission is practically guaranteed to be successful as long as there are no hardware errors on the network. This certainty leads to token ring being called a deterministic network, unlike Ethernet. Also unlike Ethernet, collisions are not an expected occurrence. Consistently occurring collisions on a token-ring network are a matter of concern to the network administrator.
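The deterministic order of access can be shown with a toy model: the token circulates from each station to its downstream neighbor, and only the holder may transmit. Everything in this Python sketch (station names, the queue of pending frames) is invented for illustration; real Token Ring adds priority bits, an active monitor, and the error-recovery machinery discussed below.

```python
# A toy model of token passing: the token visits each station in ring
# order, and only the station holding it may transmit one frame.
def token_ring(stations, queued, rounds=2):
    sent = []
    holder = 0
    for _ in range(rounds * len(stations)):
        name = stations[holder]
        if queued.get(name):                      # holder has a frame queued
            sent.append((name, queued[name].pop(0)))
        holder = (holder + 1) % len(stations)     # pass token downstream
    return sent

stations = ["A", "B", "C", "D"]
queued = {"A": ["frame1"], "C": ["frame2", "frame3"]}
print(token_ring(stations, queued))
# [('A', 'frame1'), ('C', 'frame2'), ('C', 'frame3')]
```

Because the token's path is fixed, the next transmitter is always known in advance: the deterministic behavior described above.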

Because a token-ring station needs to be able to discover who its upstream and downstream neighbors are, each station needs to be able to determine and modify the overall configuration of the ring. The ring needs to be able to cope if the station with the token gets turned off or otherwise removed from the ring before passing the token on. There are many more kinds of errors that can disrupt the normal functioning of the ring, and each station must be able to detect and recover from these errors. This requires token-ring NICs to have a great deal of error detection and correction circuitry that is not needed or present on Ethernet or ARCnet adapters.

The rules governing the design of a token-ring network are much more complex than those for an Ethernet network. Early token-ring networks used proprietary cables sold only by IBM, but these were thick and unwieldy and had to be purchased in predetermined lengths and strung together to span long distances. Modern token-ring networks all use either STP or UTP cable. The type of cable used determines the segment-length and node-count limitations that are imposed on the network, with STP providing the greater flexibility in both areas.

For a long time, IBM only supported the use of STP cabling on token-ring networks. This, along with a smaller number of token-ring equipment suppliers when compared to Ethernet, and the additional "intelligence" required of token-ring NICs, made token-ring equipment much more expensive than Ethernet or ARCnet. In spite of the higher costs, however, its reliability and IBM's reputation and support have helped make token ring the second most popular network topology in use today. UTP cable is now allowed at either speed if a media filter is used at the workstation, reducing the cost of a token-ring installation. In addition, token-ring NICs and MSAUs today are available from many vendors, and the increasing competition has helped lower equipment costs dramatically.

FDDI/CDDI

Fiber Distributed Data Interface (FDDI) is a high-speed, fault-tolerant topology that supports transmission speeds of 100 Mbps over a network size of up to 100 kilometers (60 miles). The maximum distance between nodes is two kilometers. It uses dual strands of fiber-optic cable to attach stations together, with the overall topology being a counter-rotating dual ring, that is, two separate rings with traffic moving in opposite directions. Nodes can be attached to one fiber-optic cable (a single-attached station) or both (a dual-attached station). A failure of any one station or a break in one or both cables will still allow communication to continue among most of the remaining stations. The dual-attached stations closest to the break will detect the break and shunt signals from one ring to the other, maintaining the overall ring topology. Two or more simultaneous cable breaks would result in two or more isolated but still functioning networks.

FDDI uses a token-passing method of network access. Its high speed, reliability, and fault-tolerance have made it a popular choice for large backbone networks (internetworks providing connections between multiple LANs in an enterprise). Its complete resistance to electromagnetic interference also makes it a popular choice for connections between buildings. Routers then provide connections between the FDDI backbone and local Ethernet or token-ring networks. FDDI equipment and administration are very expensive when compared to the other technologies, which limits FDDI's acceptance for other uses such as direct workstation connections.

A somewhat cheaper alternative to FDDI that retains its reliability is Copper Distributed Data Interface (CDDI), which uses the FDDI communication standards over UTP or STP cable. The distance between nodes is reduced from 2,000 meters to 100 meters with UTP. Comparable performance, although at shorter distances and with less reliability, can be achieved relatively cheaply using TCNS (a 100 Mbps version of ARCnet from Thomas Conrad), 100BaseT, or 100VG AnyLan.

LocalTalk

LocalTalk is used almost exclusively with Macintoshes. Every Mac has an integrated LocalTalk network interface, and comes with AppleTalk networking software included in the OS. Apple's LocalTalk specification calls for nodes to be connected in a daisy-chain or bus configuration, using STP cabling. A company named Farallon has also developed a connector that enables LocalTalk to work over UTP cabling. This enables a Macintosh network to be wired together using a single extra pair in existing telephone wiring; this technique is called PhoneNet.

LocalTalk was originally developed to enable easy sharing of peripherals such as printers. Most Macintosh-compatible printers use a LocalTalk connection to one or more Macintoshes on a simple network. The signaling rate of LocalTalk is a relatively slow 230.4 Kbps. This is faster than the printer ports on an IBM-compatible PC, however, making LocalTalk a good choice for printer sharing. It is quite slow when used for file sharing, running at about the same speed that floppy disk drives read and write.

No more than 32 nodes can be connected to a LocalTalk network, which can be a maximum of 1,000 feet long. Repeaters, bridges, or routers can be used to connect LocalTalk networks, enabling communications over a much wider area. Using LocalTalk for file sharing once required purchasing additional software, such as AppleShare software from Apple or TOPS from Sun Microsystems. Recent versions of the Macintosh OS, System 7.x, include peer-to-peer networking capabilities.

ARCnet

One of the first network communication topologies to be developed, ARCnet (Attached Resource Computing Network) was commercially released in 1977 and is still occasionally used today. Originally, it used a star topology constructed of the same type of coaxial cabling used to connect IBM 3270 terminals to mainframes. ARCnet is limited to a signaling rate of 2.5 Mbps. There can be no more than 255 active nodes on an ARCnet network, and node addresses are a single byte in length. Unlike those of the other network types considered here, ARCnet node addresses are manually set with jumpers or switches on the NIC, while those for Ethernet and token-ring NICs are assigned at the factory. Care must be taken in an ARCnet network to make sure that no two nodes are given the same address.

ARCnet also uses a token-passing method of network access control; only the station possessing the token is allowed to transmit a data packet onto the network. The token is passed along the network as in a logical bus, from the first station to the last.

ARCnet has always had some advantages: the components were quite inexpensive when compared to other networking systems; signals could travel over relatively long distances of up to 2,000 feet; and ARCnet provides a high level of resistance to external electrical interference and noise, which makes it popular in manufacturing environments. The relatively slow signaling rate of the original ARCnet specification has been viewed as a disadvantage by some, but the actual performance of ARCnet in real-world networks is often surprisingly high. Because of its token-passing architecture, ARCnet handles traffic levels close to its theoretical maximum very efficiently. ARCnet is also a much simpler protocol than IBM's Token Ring, so less of the available bandwidth is taken up with managerial communications, leaving more for actual data throughput.

ARCnet today can use a star, a bus, or a combination of topologies, over coaxial, STP, UTP, or fiber-optic cabling. Signaling rates today can be a mix of the original 2.5 Mbps and a newer 20 Mbps. A variation of ARCnet known as TCNS (Thomas Conrad Network System) is available that can go as high as 100 Mbps on STP, coaxial, or fiber-optic.

Inexpensive ARCnet network adapters and hubs are available from many sources. All manufacturers of ARCnet equipment are required by licensing agreements to adhere to the standards set by DataPoint, the original developer of ARCnet, so usually there are no problems with mixing equipment from different sources. Still and all, the user base for ARCnet equipment has dwindled considerably, to the point at which it can no longer be considered a major competitor in the network marketplace.

Repeaters, Bridges, Routers, and Switches

The individual data link segments that we have considered thus far are limited in the distance they can cover and the number of workstations they can support. This section, then, will deal with the ways in which these segments can be connected together to form the large internetworks that are commonplace today.

Repeaters

Repeaters are used to interconnect network segments into larger LANs. These devices do just what the name implies: they repeat every signal received on one port out to all of their other ports. Because of this, repeaters spread not only good data across networks but also collisions and other errors. Repeaters operate at the physical layer of the OSI model, as shown in figure 2.9, and have no traffic filtering or packet translation ability. They can be used only to connect similar or identical network segments within a relatively small area. For example, a repeater can be used to connect a 10BaseT Ethernet segment using UTP to a 10BaseF Ethernet segment using fiber-optic cable. A repeater cannot be used to connect an Ethernet 10BaseT segment to a 100BaseX segment, or a token-ring segment to any kind of Ethernet. Extending a network's reach beyond the strict limits of repeaters requires a different type of device to interconnect two or more networks, one that operates at a higher OSI layer.

Figure 2.9 Repeaters are used to connect network segments at the physical layer.

Bridges

The simplest of these is called a bridge. A bridge operates at the data link layer of the OSI model (see fig. 2.10). Early bridges required address tables to be hand-entered, but most bridges today are learning bridges. The technique used most often with Ethernet is known as transparent bridging. As a transparent bridge operates, it monitors traffic on all of its ports and stores the node addresses of the sending stations connected to each port in a table in memory. When it receives a packet on one port that is destined for a node address on a different port, it forwards the packet to that port so that it can reach its destination. If it receives a packet destined for an unknown node, it broadcasts the packet on all ports and listens for a response. When it receives a response on one port, it adds that node address to the table for that port, and also passes the response back to the original sender. In this way, each bridge eventually learns through which of its ports each node on the network can be reached. Special procedures are used to deal with unusual situations like multiple paths to a destination.

Figure 2.10 Bridges are used to connect network segments at the data link layer.
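The learning algorithm described above reduces to a small table lookup. This Python sketch follows the transparent-bridging steps from the preceding paragraph; spanning tree, table aging, and the special multiple-path procedures mentioned there are omitted for brevity.

```python
# A sketch of transparent (learning) bridge logic: learn which port
# each source address lives on, forward known destinations to one
# port, and flood unknown destinations to all other ports.
class LearningBridge:
    def __init__(self, ports):
        self.ports = ports
        self.table = {}                          # node address -> port

    def receive(self, in_port, src, dst):
        self.table[src] = in_port                # learn the sender's port
        if dst in self.table:
            if self.table[dst] == in_port:
                return []                        # same segment: don't forward
            return [self.table[dst]]             # forward to the known port
        return [p for p in self.ports if p != in_port]   # flood unknowns

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive(1, src="A", dst="B"))   # B unknown: flood -> [2, 3]
print(bridge.receive(2, src="B", dst="A"))   # A learned on port 1 -> [1]
print(bridge.receive(1, src="A", dst="B"))   # B learned on port 2 -> [2]
```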

The bridging technique used most often on token-ring networks is called source-route bridging. In this method, a bridge is not required to learn where every node resides on the network in relation to itself. However, a station that wants to send a packet to a station on another network must know the path to that station, including all intervening bridges that the data must cross. This information is included in the packet. Each intermediate bridge looks at a received packet to see where it has to go next, modifies the addressing information in the packet accordingly, and sends it on its way to the next bridge or to its final destination.

Stations learn the path to another node by sending out discovery packets to the destination address. Bridges add their own address to these packets and forward them out of one or all bridge ports. When a discovery packet reaches its destination, it contains the addresses of all of the bridges that it has passed through. This allows the destination node to send a packet back to the originating station containing the route of the shortest or fastest path between that origin and destination.
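Here is a toy trace of that discovery process, with invented bridge names rather than the real token-ring routing-information format:

```python
# Each bridge appends its identifier to a discovery packet, so every
# copy that reaches the destination carries the full path it took.
def forward_discovery(packet, bridge_id):
    return packet + [bridge_id]      # bridge adds itself to the route

# Three copies of one discovery packet cross different sets of bridges:
paths = [
    forward_discovery(forward_discovery([], "B1"), "B4"),
    forward_discovery([], "B2"),
    forward_discovery(forward_discovery(forward_discovery([], "B3"),
                                        "B5"), "B6"),
]
# The destination replies with the shortest recorded route, which the
# originating station then embeds in subsequent packets.
print(min(paths, key=len))   # ['B2']
```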

Almost any network protocol can pass over a bridge because the only requirements are node addresses for the destination and source. Bridges can even be used to connect dissimilar networks, such as ARCnet and Ethernet, or to provide LAN connections to a WAN, provided that the traffic on each side of the bridge uses the same higher-level protocols such as IPX, NetBEUI, or TCP/IP.
