Chapter 7

Major Network Types

Now that you have examined the components of the average network server and several different types of workstations, you have come to the fabric that incorporates them all into what is known as a local area network (LAN).

A network may, at first glance, appear to be deceptively simple. A series of computers are wired together so that they can communicate with each other. However, each device on the network must be able to communicate with any other device at many different levels, even to perform the simplest tasks. Further, this must all be accomplished using a single transmission channel for all of the devices. Multiply these myriad levels of communication by dozens, hundreds, or even thousands of devices on a network, and the logistics become staggeringly complex.

In this chapter, you will concentrate on an examination of the basic network types in use today. You will learn about the actual physical medium that connects networked devices, but you will also learn about the basic communications methods used by each network type at the bottom two levels of the OSI reference model. These methods are integrally related to the physical medium, as they impose numerous restrictions on the way that you can construct the network fabric. You will also read about some of the new network technologies that are just coming into general acceptance in the marketplace. These offer increased network throughput to accommodate the greater demands of today's application software and data types in a variety of ways. Even if your network has no need for these technologies today, it is important to keep a finger on the pulse of the industry in order to facilitate the performance of network upgrades at a later time.

The selection of a network type is one of the first major decisions to be made in setting up a network. Once constructed, a network type can be very difficult and expensive to change later, so it is a decision that should be made correctly the first time. Consideration must be paid to the needs of your organization right now, as well as the future of the network. Knowing more about the basic communications methods utilized by a network gives you a greater understanding of the hardware involved and the problems that it may be subject to. Network communications problems can be very difficult to troubleshoot, even more so when you are unaware of what is actually going on inside the network medium.

Understanding Terminology

As you have seen in chapter 3, "The OSI Model: Bringing Order to Chaos," the basic OSI model for network communications consists of seven layers, each of which has its own set of terms, definitions, and industry jargon. It can be very difficult to keep track of all of the terminology used in networking at the various levels, and this chapter will hopefully help you understand many of the terms that are constantly used.

First of all, keep in mind that this chapter is concerned primarily with the lowest levels of the OSI reference model: the physical and data link layers. Everything discussed here is completely independent of any concerns imposed by applications and operating systems (OS) either at the server or workstation level. An Ethernet LAN, for example, can be used to connect computers running NetWare, Windows NT, numerous flavors of UNIX, or even minicomputer OSs. Each of these has its own communications protocols at higher levels in the OSI model, but Ethernet is completely unconcerned with them. They are merely the "baggage" that is carried in the data section of an Ethernet packet. Network types such as Ethernet, token ring, and FDDI are simply transport mechanisms; postal systems, if you will, that carry envelopes to specific destinations irrespective of the envelopes' contents.

Fundamentals of Network Communications

The packet is the basic unit used to send data over a network connection. At this level, it is also referred to as a frame. The network medium is essentially a single electrical or optical connection between a series of devices. Data must ultimately be broken down into binary bits that are transmitted over the medium using one of many possible encoding schemes designed to use fluctuations in electrical current or pulses of light to represent 1s and 0s. Since any device on the network may initiate communications with any other device at any time, it is not practical for a single device to be able to send out a continuous stream of bits whenever it wants to. This would monopolize the medium, preventing other devices from communicating until that one device is finished or, alternatively, corrupting the data stream if multiple devices were trying to communicate simultaneously.

Instead, each networked device assembles the data that it wants to transmit into packets of a specific size and configuration. Each packet contains not only the data that is to be transmitted but also the information that is needed to get the packet to its destination and reconstitute it with other packets into the original data. Thus, a network type must have some form of addressing, that is, a means by which every device on the network can be uniquely identified. This addressing is performed by the network interface located in each device, usually in the form of an expansion card known as a network adapter or a network interface card (NIC). Every network interface has a unique address (assigned either by the manufacturer or the network administrator) that is used as part of the "envelope" it creates around every packet. Other mechanisms are also included as part of the packet configuration, including error-checking information and the data necessary to assemble multiple packets back into their original form once they arrive at their destination.
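To make the envelope idea concrete, the following Python sketch shows how a data stream might be broken into addressed, sequenced packets and reassembled at the destination. It is purely illustrative; the field names and sizes are hypothetical and do not correspond to any actual frame format:

```python
import zlib
from dataclasses import dataclass

@dataclass
class Packet:
    dest: str       # unique address of the receiving interface
    src: str        # unique address of the sending interface
    seq: int        # position of this packet in the original data stream
    total: int      # total number of packets in the stream
    payload: bytes  # the slice of data carried by this packet

    def checksum(self) -> int:
        # error-checking value computed over the payload
        return zlib.crc32(self.payload)

def packetize(data: bytes, src: str, dest: str, size: int = 1500) -> list:
    """Split a data stream into fixed-size packets, each carrying the
    addressing and sequencing information needed for reassembly."""
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    return [Packet(dest, src, n, len(chunks), c) for n, c in enumerate(chunks)]

def reassemble(packets: list) -> bytes:
    """Rebuild the original data at the destination, regardless of the
    order in which the packets arrived."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))
```

Note that the receiver can rebuild the stream even when packets arrive out of order, because each envelope carries its own sequence number.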

The other responsibility of the network, at this level, is to introduce the packets onto the network medium in such a way that no two network devices are transmitting onto the same medium at precisely the same time. If two devices should transmit at the same time, a collision occurs, usually damaging or destroying both packets. The mechanism used to avoid collisions while transmitting packets is called media access control (MAC). This is represented in the lower half of the data link layer of the OSI reference model, also known as the MAC sublayer. A MAC mechanism allows each of the devices on a network an equal opportunity to transmit its data, as well as providing a means of detecting collisions and resending the damaged packets that result.

Thus, the network types covered in this chapter each consist of the following attributes:

- A packet (or frame) format into which transmitted data is assembled
- An addressing scheme by which every device on the network can be uniquely identified
- A media access control mechanism that arbitrates access to the shared network medium

These network types utilize widely different means of realizing these three attributes. Although they are all quite capable of supporting general network use, each is particularly suited to a different set of network requirements. It is also possible to connect networks of differing types into what is technically known as an internetwork, that is, a network of networks. You may find, therefore, that while Ethernet is completely suitable for all of the networked workstations within a particular building, an FDDI link (which actually comprises a network in itself) would be a better choice for use as a network backbone connecting all of the servers, which require higher speeds.

The growing trend today is towards heterogeneous networks, an amalgam of varying network types interconnected into a single entity. This and the increasing popularity of wide area network (WAN) links between remote sites have made it necessary for the LAN administrator to have knowledge of all of these network types. It is only in this way that the proper ones can be chosen to satisfy the particular needs of an installation.

Ethernet and Its Variants

With over 40 million nodes installed around the world, Ethernet is, by far, the most commonly found network type in use today. As an open standard from the very outset, its huge popularity has led to a gigantic market for Ethernet hardware, thus keeping the quality up and the prices down. The Ethernet standards are mature enough for them to be very stable, and compatibility problems between Ethernet devices produced by different manufacturers are comparatively rare.

Originally conceived in the 1970s by Dr. Robert M. Metcalfe, Ethernet has had a long history and has been implemented using a number of different media types and topologies over the years, which makes it an excellent platform with which to learn about low-level networking processes. One of the keys to its longevity was a number of remarkably foresighted decisions on the parts of its creators. Unlike other early network types that ran at what are today perceived to be excessively slow speeds, such as StarLAN's 1 Mbps and ARCnet's 2.5 Mbps, Ethernet was conceived from the outset to run at 10 Mbps.

It is only now, 20 years later, that a real need for greater speed than this has been realized, and the Ethernet specifications are currently being revised to allow network speeds of 100 Mbps as well. A number of competing standards are vying for the approval of the marketplace in this respect, but it is very likely that "Fast Ethernet," in some form or another, will be a major force in the industry for many years to come.

Ethernet Standards

Of course, as so often seems to be the case in the computing industry, nomenclature is never easy, and what is generally referred to as Ethernet actually bears a different and more technically correct name. The original Ethernet standard was developed by a committee composed of representatives from three large corporations: Digital Equipment Corporation, Intel, and Xerox. Published in 1980, this standard has come to be known as the DIX Ethernet standard (after the initials of the three companies). A revision of the standard was later published in 1982, which is known as Ethernet II. This document was then passed to the Institute of Electrical and Electronics Engineers (IEEE) for industry-wide standardization. The IEEE is a huge organization of technical professionals that, among other things, sponsors a group devoted to the development and maintenance of electronic and computing standards. The resulting document, ratified in 1985, was officially titled the "IEEE 802.3 Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications." This should make it clear why most people in the industry retained the name Ethernet despite the fact that nearly all of the hardware sold today is actually 802.3 compliant.

In most ways, the 802.3 standard is a superset of the DIX Ethernet standard. While the original standard specifies only the use of thick Ethernet coaxial cabling and Ethernet II adds the thin coaxial variant, 802.3 adds the capability for using other cable types, such as unshielded twisted pair (UTP) and fiber optic, which have all but eclipsed thick Ethernet, or thicknet, in common network use. Other aspects of the physical layer remain the same in both standards, however. The data rate of 10 Mbps and the Baseband Manchester signaling type (the way 1s and 0s are conveyed over the medium) remain unchanged, and the physical layer configuration specs for thicknet and thinnet are identical in both standards.
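The Manchester signaling mentioned above can be illustrated with a few lines of Python. This is a sketch of the encoding rule only; in a real transceiver the work is done in analog circuitry, not software:

```python
def manchester_encode(bits):
    """Encode a bit sequence using Manchester encoding as employed by
    802.3: every bit is represented by a transition in the middle of
    its bit period. A 1 is sent as low-then-high and a 0 as
    high-then-low, so the signal carries its own timing information."""
    signal = []
    for bit in bits:
        signal += [0, 1] if bit else [1, 0]
    return signal
```

Because every bit period is guaranteed to contain a transition, the receiver can recover the sender's clock directly from the data stream, at the cost of doubling the signaling rate on the wire.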

One source of confusion, however, is the existence of the SQE Test feature in the 802.3 standard, which is often mistakenly thought to be identical to the heartbeat feature defined in the Ethernet II document. Both of these mechanisms are used to verify that the medium access unit (MAU) or transceiver of a particular Ethernet interface is capable of detecting collisions, or signal quality errors (SQE). A test signal is sent from the MAU or transceiver to the Ethernet interface following every packet transmission. The presence of this signal verifies the functionality of the collision detection mechanism, and its absence can be logged for review by the administrator. No signal is sent out over the common, or network, medium. Use of SQE Test and heartbeat are optional settings for every device on the network, and problems have often been caused by their use, particularly when the two are mixed on the same network. The essential difficulty is that the heartbeat function was only defined in the Ethernet II standard. It does not exist in the original Ethernet standard, and equipment of that type may not function properly when transceivers using heartbeat are located on the same network. In addition, the 802.3 standard specifically states that the heartbeat signal should not be used by transceivers connected to 802.3-compliant repeaters. In other words, an Ethernet II NIC connected to an 802.3 repeater must not utilize the heartbeat feature or a conflict with the repeater's jam signal may occur.

There are other differences between the two standards, but they are, for the most part, not consequential in the actual construction and configuration of a network. The original DIX Ethernet standards cover the entire functionality of the physical and data link layers, while the IEEE standard splits the data link layer into two distinct sublayers: logical link control (LLC) and media access control (see fig. 7.1).

Figure 7.1 The OSI data link layer has two sublayers.

The Logical Link Control Sublayer

The top half of the OSI model's data link layer, according to the IEEE specifications, is the LLC sublayer. The function of the LLC is to effectively isolate all of the functions that occur below this layer from all of the functions occurring above it. The network layer (that is, the layer just above the data link layer in the OSI model) must be sent what appear to be error-free transmissions from the layer below. The protocol used to implement this process is not part of the 802.3 standard. In the IEEE implementation of the OSI model, the LLC is defined by the 802.2 standard, which is utilized by the 802.3 standard as well as by the other network types defined in IEEE standards. Utilizing a separate frame within the data field of the 802.3 frame, the LLC defines error-recovery mechanisms other than those specified in the MAC sublayer, provides flow control that prevents a destination node from being overwhelmed with delivered packets, and establishes logical connections between the sending and receiving nodes.

The Media Access Control Sublayer

The other half of the data link layer, the MAC sublayer, as mentioned earlier, arbitrates access to the network medium by the individual network devices. For both of the standards, this method is called Carrier Sense Multiple Access with Collision Detection (CSMA/CD). This protocol has remained unchanged ever since the original DIX standard and is equally applicable to all media types and topologies used in Ethernet installations. The way that this protocol works is discussed in the following section.

CSMA/CD: The Ethernet MAC Mechanism

When a networked device, also referred to as a station, a node, or a DTE (data terminal equipment), wants to send a packet (or series of packets), it first listens to the network to see if it is currently being utilized by signals from another node. If the network is busy, the device continues to wait until it is clear. When the network is found to be clear, the device then transmits its packet.

The possibility exists, however, for another node on the network to have been listening at the same time. When both nodes detect a clear network, they may both transmit at precisely the same time, resulting in a collision (also known in IEEEspeak as an SQE, or signal quality error). In this instance, the transceiver at each node is capable of detecting the collision and both begin to transmit a jam pattern. This can be formed by any sequence of 32 to 48 bits other than the correct CRC value (cyclic redundancy check, an error-checking code) for that particular packet. This is done so that notification of the collision can be propagated to all stations on the network. Both nodes will then begin the process of retransmitting their packets.

To attempt to avoid repeated collisions, however, each node waits for a randomly selected delay interval before retransmitting. If further collisions occur, the competing nodes will then begin to back off, that is, increase the range of delay intervals from which one is randomly chosen. This process is called truncated binary exponential backoff. As a result of its use, repeated collisions between the same two packets become increasingly unlikely. The greater the number of values from which each node may select, the lesser the likelihood that they will choose the same value.
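The backoff calculation can be sketched in a few lines of Python. This is an illustration of the algorithm, assuming the 51.2-microsecond slot time of 10 Mbps Ethernet and the standard 16-attempt limit; it is not a model of any particular adapter's implementation:

```python
import random

SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet, in microseconds
MAX_ATTEMPTS = 16    # a packet is discarded after 16 failed transmissions

def backoff_delay(attempt: int) -> float:
    """Return the delay, in microseconds, before retransmission attempt
    number `attempt`. After the nth collision, a slot count is drawn at
    random from 0 .. 2**min(n, 10) - 1; the range stops doubling at
    1,023 slots, which is why the backoff is called 'truncated'."""
    if attempt >= MAX_ATTEMPTS:
        raise RuntimeError("packet discarded after 16 attempts")
    slots = random.randrange(2 ** min(attempt, 10))
    return slots * SLOT_TIME_US
```

After the first collision a node waits either zero or one slot; after the second, zero to three slots; and so on, which is why two contending nodes rapidly become unlikely to collide again.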

It should be noted that collisions are a normal occurrence on a network of this type, and while they do result in slight delays, they should not be cause for alarm unless their number is excessive. Perceptible delays should not occur on an Ethernet network until the utilization of the medium approaches 80%. This figure means that a transmission is occurring somewhere on the network 80% of the time. A typical Ethernet network should have an average utilization rate of 30-40%, meaning that the cable is idle 60-70% of the time, allowing stations to initiate transmissions with relative ease.

A collision can only occur during the brief period after a given station has begun to transmit and before the first 64 bytes of data are sent. This is known as the slot time, and it represents the amount of time that it takes for the transmission to be completely propagated around the network segment. 64 bytes will completely fill the wire, end-to-end, ensuring that all other stations are aware of the transmission in progress and preventing them from initiating a transmission themselves. This is why all Ethernet packets must be at least 64 bytes in length. A transmitted packet smaller than 64 bytes is called a runt and may cause a collision after the packet has completely left the host adapter, which is known as a late collision. The packets involved in a late collision cannot be retransmitted by the normal backoff procedures. It is left to the protocols operating at the higher levels of the OSI model to detect the lost packets and request their retransmission, which they often do not do as quickly or as well, possibly resulting in a fatal network error.
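The slot time follows directly from the minimum frame length and the data rate, as this short sketch shows (the figures are those of 10 Mbps Ethernet):

```python
DATA_RATE_BPS = 10_000_000   # 10 Mbps
MIN_FRAME_BYTES = 64         # minimum legal Ethernet frame length

# The slot time is the time needed to transmit a minimum-length frame:
# 64 bytes = 512 bits, which at 10 Mbps takes 51.2 microseconds.
slot_time_us = MIN_FRAME_BYTES * 8 / DATA_RATE_BPS * 1_000_000

def is_runt(frame: bytes) -> bool:
    """A frame shorter than the 64-byte minimum is a runt."""
    return len(frame) < MIN_FRAME_BYTES
```

A station therefore holds the wire for at least 51.2 microseconds per frame, long enough for the signal to reach every other station on a maximum-length segment before the transmission ends.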

Late collisions do not always involve runts. They can also be caused by cable segments that are too long, faulty Ethernet interface equipment, or too many repeaters between the transmitting and receiving stations. When a segment displays an inordinately large number of collisions for the traffic that it is carrying, it is likely that late collisions are a cause of the excess. While a certain number of collisions are expected on an Ethernet network, as we have discussed, late collisions indicate the existence of a serious problem that should be addressed immediately.

A packet can be retransmitted up to 16 times before it is discarded and actual data loss occurs. Obviously, the more highly trafficked the network, the more collisions there will be, with network performance degrading accordingly. This is why the Ethernet standards all impose restrictions on the number of nodes that can be installed on a particular network segment, as well as on the overall length and configuration of the medium connecting them. A segment that is too long can cause a packet's transmission to its destination to take longer than the slot time prescribed in the specifications, thus causing the collision detection mechanisms to react when, in fact, no collision has occurred. Too many nodes will naturally result in an increased number of collisions, and in either case, network performance will degrade considerably.

The Capture Effect

Although large numbers of collisions can be dealt with by an Ethernet network with no loss of data, performance can be hampered in ways beyond the simple need to repeatedly transfer the same packets. One phenomenon is known as the capture effect. This can occur when two nodes both have a long stream of packets to be sent over the network. When an initial collision occurs, both nodes will initiate the backoff process and, eventually, one of them will randomly select a lower backoff value than the other and win the contention. Let's say that Node A has done this, and has successfully retransmitted its packet, while Node B has not. Now, Node A attempts to transmit its second packet while Node B is still trying to retransmit its first. A collision occurs again, but for Node A, it is the first collision occurring for this packet, while for Node B, it is the second. Node A will randomly select from the numbers 0 and 1 when calculating its backoff delay. Node B, however, will be selecting from the numbers 0, 1, 2, and 3 because it is the second collision for this packet, and the number of possible values increases with each successive backoff.

Already, the odds of winning the contention are in Node A's favor. Each iteration of the backoff algorithm causes longer delay times to be added to the set of possible values. Therefore, probability dictates that Node B is likely to select a longer delay period than Node A, causing A to transmit first. Once again, therefore, Node A successfully transmits and proceeds to attempt to send its third packet, while Node B is still trying to send its first packet for the third time, increasing its backoff delay factor each time. With each repeated iteration, Node B's chances of winning the contention with Node A are reduced, as its delay time is statistically increasing, while that of Node A remains the same. It becomes increasingly likely, therefore, that Node B will continue to lose its contentions with Node A until either Node A runs out of packets to send, or Node B reaches 16 transmission attempts, after which its packet is discarded and the data lost.

Thus, in effect, Node A has captured the network for 16 successive transmissions, due to its having won the first few contentions. Various proposals for a means to counteract this effect are currently being considered by the IEEE 802 committee, among them a different backoff algorithm called the binary logarithmic access method (BLAM). While the capture effect is not a major problem on most Ethernet networks, it is discussed here to illustrate the complex ways that heavy traffic patterns can affect the performance of an Ethernet network. Other MAC protocols, such as that used by token-ring networks, are not subject to this type of problem, as collisions are not a part of their normal operational specifications. This is another reason why it is important for the proper network type to be selected for the needs of a particular organization.
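Node A's statistical advantage can be demonstrated with a short simulation. The draws below follow the standard backoff ranges; this is an illustration of the probabilities involved, not a model of any particular implementation:

```python
import random

def contend(a_attempt: int, b_attempt: int) -> str:
    """One contention round: each node draws a backoff slot count from
    0 .. 2**min(n, 10) - 1, and the smaller draw transmits first
    (equal draws produce another collision, reported as a tie)."""
    a = random.randrange(2 ** min(a_attempt, 10))
    b = random.randrange(2 ** min(b_attempt, 10))
    if a == b:
        return "tie"
    return "A" if a < b else "B"

def capture_probability(b_attempt: int, trials: int = 100_000) -> float:
    """Estimate how often Node A (on the first collision for its current
    packet, so n = 1) beats Node B, whose collision count for its
    stalled packet has already grown to b_attempt."""
    wins = sum(contend(1, b_attempt) == "A" for _ in range(trials))
    return wins / trials
```

Running this shows Node A winning roughly five times out of eight when Node B is on its second collision, and better than nine times out of ten by Node B's fifth, which is the capture effect in miniature.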

The Ethernet/802.3 Frame Specification

The process of sending data from one device on a network to another is, as stated earlier, a very complex affair. A request will originate at the highest level of the OSI model and, as it travels down through the layers to the physical medium itself, be subject to various levels of encoding, each of which adds a certain amount of overhead to the original request. Each layer accepts input from the layer above and encases it within a frame designed to its own specifications. This entire construction is then passed on to the next layer, which includes it, frame and all, in the payload area of its own specialized frame. When the packet reaches the physical layer, it consists of frames within frames within frames and has grown significantly in size. An average packet may ultimately contain more bits devoted to networking overhead than to the actual information being transmitted.

By the time that a request has worked its way down to the data link layer, additional data from every layer of the network model has been added. The upper layers add information regarding the application generating the request. Moving farther down, information concerning the transport protocol being utilized by the network operating system (NOS) is added. The request is also broken down into packets of appropriate size for transport over the network, with another frame added containing the information needed to reassemble the packets into the proper order at their destination. The LLC sublayer adds its own frame to provide error and flow control. Several other processes are interspersed throughout, all adding data that will be needed to process the packet at its destination. Once it has reached the MAC sublayer, all that remains to be done is to see that the packet is addressed to the proper destination, the physical medium is accessed properly, and the packet arrives at its destination undamaged.
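The nesting of frames within frames described above can be sketched as follows. The bracketed headers are purely illustrative stand-ins, not actual protocol formats:

```python
def encapsulate(payload: bytes, layers) -> bytes:
    """Wrap a payload in successive frames as it descends the stack:
    each layer encloses everything it received from the layer above
    within its own header and trailer."""
    for header, trailer in layers:
        payload = header + payload + trailer
    return payload

request = b"GET /index.html"                  # application-level data
frame = encapsulate(request, [
    (b"[TCP|", b"]"),   # transport-level framing: sequencing, ports
    (b"[IP|",  b"]"),   # network-level framing: logical addresses
    (b"[LLC|", b"]"),   # LLC sublayer: flow and error control
    (b"[MAC|", b"]"),   # MAC sublayer: physical addresses, FCS
])
# The result is frames within frames within frames; the accumulated
# overhead can easily exceed the size of the original request.
```

Each layer treats everything handed down from above as opaque payload, which is exactly why Ethernet can carry the traffic of any higher-level protocol without knowing anything about it.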

The composition of the frame specified by the IEEE 802.3 standard is illustrated in figure 7.2. The frame defined by the original Ethernet specification is slightly different but only in the arrangement of the specified bits. The functions provided are identical to those of the 802.2 and 802.3 specifications in combination. The functions of each part of the frame are explicated in the following list:

- Preamble: seven bytes of alternating 1s and 0s that allow the receiving interface to synchronize its clock with the incoming signal
- Start frame delimiter (SFD): a single byte (10101011) that marks the beginning of the frame proper
- Destination address: the six-byte address of the interface that is to receive the packet
- Source address: the six-byte address of the transmitting interface
- Length: two bytes specifying the number of bytes in the data field (in the DIX Ethernet frame, this field instead holds a type code identifying the upper-layer protocol)
- Data: 46 to 1,500 bytes of payload, padded out to the 46-byte minimum when necessary
- Frame check sequence (FCS): a four-byte CRC value used by the receiver to detect transmission errors

Figure 7.2 The fields that comprise the IEEE 802.3 Frame Format.
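As a rough sketch of how these fields fit together, the following Python function assembles an 802.3 frame from its parts. The CRC used here (zlib's CRC-32) is a stand-in for the actual FCS computation, which differs in its bit-level details:

```python
import struct
import zlib

def build_8023_frame(dest: bytes, src: bytes, payload: bytes) -> bytes:
    """Assemble the fields of an IEEE 802.3 frame: preamble, start frame
    delimiter, six-byte destination and source addresses, two-byte
    length, data padded to the 46-byte minimum, and a four-byte frame
    check sequence computed over the addresses, length, and data."""
    preamble = b"\xAA" * 7             # 10101010 repeated seven times
    sfd = b"\xAB"                      # start frame delimiter: 10101011
    length = struct.pack(">H", len(payload))
    data = payload.ljust(46, b"\x00")  # pad short payloads to 46 bytes
    body = dest + src + length + data
    fcs = struct.pack(">I", zlib.crc32(body))
    return preamble + sfd + body + fcs
```

With a payload below the minimum, the assembled frame comes to 72 bytes: the 64-byte minimum frame plus the 8 bytes of preamble and delimiter that precede it on the wire.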

Ethernet Physical Layer Types and Specification

The 802.3 standard has been revised over the years to add several different media types as they have come into popularity. Only thick Ethernet is part of the original DIX Ethernet specification. The IEEE standard defines not only the types of cabling and connectors to be used but also imposes limitations on the length of the cables in an individual network segment, the number of nodes that can be installed on any one segment, and the number of segments that can be joined together to form a network.

For these purposes, a network is defined as a series of computers connected so that collisions generated by any single node are seen on the network medium by every other node. In other words, when Node A attempts to transmit a packet to Node Z and a collision occurs, the jam pattern is completely propagated around the network and may be seen by all of the nodes connected to it. A segment is defined as a length of network cable bounded by any combination of terminators, repeaters, bridges, or routers. Thus, two segments of Ethernet cabling may be joined by a repeater (which is a signal amplifying and retiming device, operating purely on an electrical level, that is used to connect network segments), but as long as collisions are seen by all of the connected nodes, there is only one network involved. This sort of arrangement may also be described as forming a single collision domain, that is, a single CSMA/CD network where a collision occurs if two nodes transmit at the same time.

Conversely, a packet-switching device such as a bridge or a router may be used to connect two disparate network segments. These devices, while they allow the segments to appear as one entity at the network layer of the OSI model and above, isolate the segments at the data link layer, preventing the propagation of collisions between the two segments. This is more accurately described as an internetwork, or a network of networks. Two collision domains exist because two nodes on opposite sides of the router can conceivably transmit at the same moment without incurring a collision.

Thick Ethernet

The original form in which Ethernet networks were realized, thick Ethernet, is also known colloquially as thicknet, "frozen yellow garden hose," or by its IEEE designation: 10Base5. This latter is a shorthand expression that has been adapted to all of the media types supported by the Ethernet specification. The "10" refers to the 10 Mbps transfer rate of the network, "Base" refers to Ethernet's baseband transmitting system (meaning that a single signal occupies the entire bandwidth of the medium), and the "5" refers to the 500 meter segment length limitation.
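The IEEE shorthand is regular enough to be unpacked mechanically, as this small sketch shows. Note that the suffix is only a rough length code for the coaxial variants (10Base2's segments, covered elsewhere, are actually 185 meters), and for later media types such as 10BaseT it names the medium rather than a length:

```python
def decode_designation(name: str) -> dict:
    """Unpack an IEEE media shorthand such as '10Base5' into its parts:
    data rate in Mbps, signaling type, and, for coaxial variants, the
    approximate segment length in hundreds of meters."""
    rate, suffix = name.split("Base")
    return {
        "rate_mbps": int(rate),
        "signaling": "baseband",  # one signal occupies the whole medium
        "segment_m": int(suffix) * 100 if suffix.isdigit() else suffix,
    }
```

So "10Base5" decodes to 10 Mbps, baseband signaling, and 500-meter segments, exactly as described above.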

Thicknet is used in a bus topology. The topology of a network refers to the way in which the various nodes are interconnected. A bus topology means that each node is connected in series to the next node (see fig. 7.3). At both ends of the bus there must be a 50-ohm terminating resistor, so that signals reaching the end of the medium are not reflected back.

Figure 7.3 This is a basic 10Base5 thicknet network.

The actual network medium of a thicknet network is 50-ohm coaxial cable. Coaxial cable is so named because it contains two electrically separated conductors within one sheath (see fig. 7.4). A central core consisting of one conductor is wrapped with a stiff insulating material and then surrounded by a woven mesh tube that is the second conductor. The entire assembly is then encased in a tough PVC or Teflon insulating sheath that is yellow or brownish-orange in color. The Teflon variant is used for plenum-rated cable, which may be required by fire regulations for use in ventilation ducts, also known as plenums. The overall package is approximately 0.4 inches in diameter and more inflexible than the garden hose it is often likened to.

Figure 7.4 Here is a cutaway view of a coaxial cable.

As a network medium, coaxial cable is heavy and difficult to install properly. Installing the male N-type coaxial connectors at each end of the cable can be a difficult job, requiring the proper stripping and crimping tools and a reasonable amount of experience. With all coaxial cables, the installation is only as good as the weakest connection, and problems may occur as the result of bad connections that can be extremely subtle and difficult to troubleshoot. Indeed, with thicknet, it is usually recommended that the cable be broken in as few places as possible and that all of the segments used on a single network come from the same cable lot (that is, from a single spool or from spools produced in the same batch by a single manufacturer). When forced to use segments of cable from different lots, the 802.3 specification recommends that the segments used should be either 23.4, 70.2, or 117 meters long, to minimize the signal reflections that may occur due to variations in the cable itself. The specification also calls for the network to be grounded at only one end. This causes additional installation difficulties, as care must be taken to prevent any of the other cable connectors from coming in contact with a ground.

The sheer size of the thicknet cable makes it an excellent conducting medium. The maximum length for a thicknet segment is 500 meters, much longer than any other copper medium. It also provides excellent shielding against electromagnetic interference and suffers relatively little attenuation, making it ideal for industrial applications where other machinery may inhibit the functionality of thinner network media. Thicknet has also been used to construct backbones connecting servers at distant locations within the same building. Electrical considerations, however, preclude its use for connections between buildings, as is the case with any copper-based medium.

Media Access Components

All Ethernet types utilize the same basic components to attach the network medium to the Ethernet interface within the computer. This is another area in which the 802.3 standard differs from the DIX Ethernet standard, but the differences are only in name. The components are identical, but they are referred to by different designations in the two documents. Both are provided here, as the older Ethernet terminology is often used, even when referring to an 802.3 installation.

Thicknet is an exemplary model for demonstrating the different components of the interface between the network cable and the computer. The relative inflexibility of the cable prevents it from being installed so that it directly connects to the Ethernet interface, as most of the other medium types do. Components that are integrated into the network adapter in thinnet or UTP installations are separate units in a thicknet installation.

The actual coaxial cable-to-Ethernet interface connection is made through a medium-dependent interface (MDI). Two basic forms of MDI exist for thicknet. One is known as an intrusive tap because its installation involves cutting the network cable (thereby interrupting network service), installing standard N connectors on the two new ends, and then linking the two with a barrel connector that also provides the connection that leads to the computer. This method is far less popular than the non-intrusive tap, which is installed by drilling a hole into the coaxial cable and attaching a metal and plastic clamp that provides an electrical connection to the medium. This type of MDI can be installed without interrupting the use of the network, and without incurring any of the signal degradation dangers that highly segmented thicknet cables are subject to.

The MDI is, in turn, directly connected to a medium attachment unit (MAU). This is referred to as a transceiver in the DIX Ethernet standard, as it is the unit that actually transmits data to and receives it from the network. In addition to the digital components that perform the signaling operations, the MAU also has analog circuitry that is used for collision detection. In most thicknet installations, the MAU and the MDI are integrated into a single unit that clamps onto the coaxial cable.

The 802.3 specification allows for up to 100 MAUs on a single network segment, each of which must be separated from the next by at least 2.5 meters of coaxial cable. The cabling often has black stripes on it to designate this distance. These limitations are intended to curtail the amount of signal attenuation and interference that can occur on any particular area of the network cable.
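As a quick sanity check (a sketch, not a rule from the standard), the two limits can be combined to show that the full complement of MAUs fits easily within a maximum-length segment:

```python
MAX_MAUS = 100          # 802.3 limit per thicknet segment
MIN_SPACING_M = 2.5     # minimum cable between adjacent MAUs
MAX_SEGMENT_M = 500.0   # maximum thicknet segment length

# 100 MAUs at minimum spacing create 99 gaps between adjacent taps.
cable_needed = (MAX_MAUS - 1) * MIN_SPACING_M
print(f"Minimum cable for {MAX_MAUS} MAUs: {cable_needed} m")
print(f"Fits in a {MAX_SEGMENT_M} m segment: {cable_needed <= MAX_SEGMENT_M}")
```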

The thicknet MAU has a male 15-pin connector that is used to connect to an attachment unit interface (AUI) cable, also known as a transceiver cable. This cable, which can be no more than 50 meters long, is then attached, with a similar connector, to the Ethernet interface on the computer, from which it receives both signals and power for the operation of the MAU. Other AUI cables are also available that are thinner and more manageable than the standard 0.4 inch diameter ones, but they are limited to a shorter distance between the MAU and the Ethernet interface, often 5 meters or less.

While thicknet does offer some advantages in signal strength and segment length, its higher cost, difficulty of installation and maintenance, and limited upgrade capabilities have all but eliminated it from use except in situations where its capabilities are expressly required. As with thinnet, the other coaxial network type in use today, thicknet is and always will be limited to 10 Mbps. The new high-speed standards being developed today are designed solely for use with twisted-pair or fiber-optic cabling. Despite the obsolescence of the medium itself, however, it is a tribute to the designers of the original Ethernet standard that the underlying concepts of the system have long outlived the original physical medium on which it was based.

Thin Ethernet

Thin Ethernet, also known as thinnet, cheapernet, or 10Base2 (despite the fact that its maximum segment length is 185 and not 200 meters), was standardized in 1985 and quickly became a popular alternative to thicknet. Although still based on a 50-ohm coaxial cable, thinnet, as the name implies, uses RG-58 cabling, which is much narrower (about 3/16 of an inch) and more flexible (see fig. 7.5), allowing the cable to be run to the back of the computer where it is directly attached to the Ethernet interface. The cable itself is composed of a metallic core (either solid or stranded), surrounded by an insulating, or dielectric layer, then a second conducting layer made of aluminum foil or braided strands, which functions both as a ground and as insulation for the central conductor. The entire construction is then sheathed with a tough insulating material for protection. Several different types of RG-58 cable exist, and care should be taken to purchase one with the appropriate impedance (approximately 50 ohms) and velocity of propagation rating (approximately 0.66). A network adapter for a thinnet network has the AUI, MAU, and MDI integrated into the expansion card, so there are no separate components to be purchased and accounted for.

Figure 7.5 This is a thinnet cable with a BNC connector attached.

Unlike thicknet, which may be tapped for attachment to a computer without breaking the cable, individual lengths of thinnet cabling are used to run from one computer to the next in order to form the bus topology (see fig. 7.6). At each Ethernet interface, a "T" connector is installed. This is a metal device with three Bayonet Neill-Concelman-type connectors (BNC): one female for attachment to the NIC in the computer, and two males for the attachment of two coaxial cable connectors (see fig. 7.7). The cable at each machine must have a female BNC connector installed onto it, which is attached to the "T." Then a second length of cable, similarly equipped, is attached to the third connector on the "T" and used to run to the next machine. There are no guidelines in the standard concerning cable lots or the number of breaks that may be present in thinnet cabling. The only rule, in this respect, is that no cable segment may be less than 0.5 meters long.

Figure 7.6 This is a basic 10Base2 thinnet network.

Figure 7.7 This is a thinnet BNC "T" connector.

Thinnet cables of varying lengths can be purchased with connectors already attached to them, but it is far more economical to buy bulk cable on a spool and attach the connectors yourself. Some special tools are needed, such as a stripper that exposes the bare copper of the cabling in the proper way and a crimper that squeezes the connectors onto the ends of the cable, but these can be purchased for $50 to $75 or less. Attaching the connectors to the cable is a skill best learned by watching the procedure performed. It requires a certain amount of practice, but it is worth learning if you are going to be maintaining a thinnet network, because the single largest maintenance problem with this type of network is faulty cable connections.

Since thinnet requires no hub or other central connection point, it has the advantage of being a rather portable network. The simplest and most inexpensive Ethernet network possible can be created by installing NICs into two computers, attaching them with a length of thinnet cable, and installing a peer-to-peer operating system such as Windows for Workgroups. This sort of arrangement can be expanded, contracted, assembled, and disassembled at will, allowing a network to be moved to a new location with little difficulty or expense.

Thinnet cabling can be installed within the walls of an office, but remember that there always must be two wires extending to the T connector at the back of each computer. This often results in installations that are not as inconspicuous as might be desired in a corporate location. Thinnet cabling can also be left loose to run along the floor of a site, allowing for easy modification of the cabling layout, but this exposes the connectors to greater abuse from everyday foot traffic. Loose connectors are a very common cause of quirky behavior on thinnet networks, and it can often be extremely difficult to track down the connection that is causing the problem. The purchase of a good quality cable tester is highly recommended.

It is also important to note that, unlike thicknet, the thinnet cabling must extend directly to the NIC on the computer. A length of cabling running from the T connector to the NIC, also known as a stub cable, is not acceptable in this network type, although it may seem to function properly at first. The 802.3 specification calls for a distance of no more than 4 centimeters between the MDI on the NIC and the coaxial cable itself. The use of stub cables causes signal reflections on the network medium, resulting in increased numbers of packet retransmissions, thus slowing down the performance of the network. On highly trafficked segments, this can even lead to frame loss if the interference becomes too great.

Like thicknet, thinnet must be terminated at both ends of the bus that comprises each segment. 50-ohm terminating resistors built into a BNC connector are used for this purpose. The final length of cable is attached to the last machine's T connector along with the resistor plug, effectively terminating the bus. Although it is not specified in the standard, a thinnet network can also be grounded, but as with thicknet, it should only be grounded in one place. All other connectors should be insulated from contact with an electrical ground.

Due to the increased levels of signal attenuation and interference caused by the narrower gauge cabling, thinnet is limited to a maximum network segment length of 185 meters, with no more than 30 MAUs installed on that segment. As with thicknet, repeaters can be used to combine multiple segments into a single collision domain, but it should be noted that the MAUs within the repeater count towards the maximum of 30.

Unshielded Twisted Pair

In the same way that thinnet overtook thicknet in popularity in the late 1980s, so the use of unshielded twisted-pair cabling (UTP) has come to be the dominant Ethernet medium since its addition to the 802.3 standard in 1989. This revision of the standard is known as the 802.3i 10BaseT specification. Other UTP-based solutions did, however, exist prior to the ratification of the standard, most notably LattisNet, a system developed by Synoptics that at one time was on its way toward becoming an industry standard itself. LattisNet is not compatible with 10BaseT, though, as the latter synchronizes signals at the sending end and the former at the receiving end.

A UTP or 10BaseT Ethernet network is an adaptation of the cabling commonly used for telephone systems to LAN use. The T in 10BaseT refers to the way in which the two or more pairs of wires within an insulated sheath are twisted together throughout the entire length of the cable. This is a standard technique used to improve the signal transmission capabilities of the medium.

The greatest advantages to 10BaseT are its flexibility and ease of installation. Thinner than even thinnet cable, UTP cabling is easily installed within the walls of an existing site, providing a neat, professional-looking installation in which a single length of cable attaches each DTE device to a jack within a wall plate, just as a telephone is connected. Some sites have even adapted existing telephone installations for the use of their computer networks.

Many different opinions exist concerning the guidelines by which 10BaseT cabling should be installed. For example, the EIA/TIA-569 standard for data cable installation states that data cable should not be run next to power cables, but in most cases, this practice does not show any adverse effect on a 10BaseT network. This is because any electrical interference will affect all of the pairs within the cable equally. Most of the interference should be negated by the twists in the cable, but any interference that is not should be ignored by the receiving interface because of the differential signaling method used by 10BaseT.

Another common question is whether or not the two pairs of wires in a standard four-pair UTP cable run that are unused by Ethernet communications may be used for another purpose. The general consensus is that these may be used for digital telephone connections but not for standard analog telephone because of the high ring voltage. Connections to other resources (such as minicomputers or mainframes) are also possible, but using the cable for other connections may limit the overall length of the segment. The only way to know this for sure is to try using the extra pairs under maximum load conditions, and then test to see if problems occur.

Unlike both thicknet and thinnet, 10BaseT is not installed in a bus topology. Instead, it uses a distributed star topology, in which each device on the network has a dedicated connection to a centralized multiport repeater known as a concentrator or hub (see fig. 7.8). The primary advantage to this topology is that a disturbance in one cable affects only the single machine connected by that cable. Bus topologies, on the other hand, are subject to the "Christmas light effect," in which one bad connection will interrupt network communications not only to one machine but to every machine down the line from that one. The greater amount of cabling needed for a 10BaseT installation is offset by the relatively low price of the cable itself, but the need for hubs containing a port for every node on the network adds significantly to the overall price of this type of network. Two devices can be directly connected with a 10BaseT cable that provides signal crossover, without an intervening hub, but only two, resulting in an effective, if minimal, network.

Figure 7.8 This is a basic 10BaseT UTP network.

While the coaxial cable used for the other Ethernet types is relatively consistent in its transmission capabilities, allowing for specific guidelines as to segment length and other attributes, the UTP cable used for 10BaseT networks is available in several grades that determine its transmission capabilities. Table 7.1 lists the various data grades and their properties. IBM (of course) has its own cable designations. These are listed in the section on Token Ring networks later in this chapter. The 802.3i standard specifies the maximum length of a 10BaseT segment to be 100 meters from the hub to the DTE, using Category 3 UTP cable, also known as voice grade UTP. This is the standard medium used for traditional telephone installations, and the 802.3i document was written on the assumption that many sites would be adapting existing cable for network use. This cable typically is 24 AWG (American Wire Gauge, a standard for measuring the diameter of a wire) tinned copper, with solid conductors, a 100- to 105-ohm characteristic impedance, and a minimum of 2 twists per foot.

Table 7.1 UTP Cable Types and Their Speed Ratings

Category    Speed             Used For
2           Up to 1 Mbps      Telephone Wiring
3           Up to 16 Mbps     Ethernet 10BaseT
4           Up to 20 Mbps     Token Ring, 10BaseT
5           Up to 100 Mbps    10BaseT, 100BaseT

In new installations today, however, the use of Category 5 cabling and attachment hardware is becoming much more prevalent. A Category 5 installation will be much less susceptible to signal crosstalk (the bleeding of signals between the transmit and receive wire pairs within the cable) and attenuation (the signal lost over the length of the cable) than Category 3, allowing for greater segment lengths and, more importantly, future upgrades to the 100 Mbps Fast Ethernet standards now under development. The 100 meter segment length is an estimate provided by the specification, but the actual limiting factor involved is the signal loss from source to destination, measured in decibels (dB). The maximum allowable signal loss for a 10BaseT segment is 11.5 dB, and the quality of the cable used will have a significant effect on its signal carrying capabilities.
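To illustrate the loss-budget idea, the following sketch checks a proposed segment length against the 11.5 dB limit. The per-100-meter attenuation figures used here are illustrative assumptions, not values taken from the 802.3 standard:

```python
# A sketch of a 10BaseT loss-budget check. The 11.5 dB maximum comes
# from the text; the per-100-meter attenuation figures below are
# illustrative assumptions, not values from the 802.3 standard.
MAX_LOSS_DB = 11.5

# Assumed attenuation in dB per 100 meters at 10 MHz (hypothetical).
attenuation_per_100m = {"category3": 10.0, "category5": 6.5}

def segment_ok(length_m, category):
    loss = attenuation_per_100m[category] * (length_m / 100.0)
    return loss <= MAX_LOSS_DB

print(segment_ok(100, "category3"))  # nominal 100-meter run: True
print(segment_ok(130, "category3"))  # over budget on Category 3: False
print(segment_ok(150, "category5"))  # longer run on better cable: True
```

The point of the sketch is that the better cable grade buys headroom: the same loss budget stretches over a longer run.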

10BaseT segments utilize standard 8-pin RJ-45 (RJ stands for registered jack) telephone type connectors (see fig. 7.9) both at the hub and at the MDI. Usually the cabling will be pulled within the walls or ceiling of the site from the hub to a plate in the wall near the computer or DTE. A patch cable is then used to connect the wall socket to the NIC itself. This provides a connection between the two MAUs on the circuit, one integrated into the hub and the other integrated into the network interface of the DTE. Since UTP cable utilizes separate pairs of wires for transmitting and receiving, however, it is crucial that the transmit pair from one MAU be connected to the receive pair on the other, and vice versa. This is known as signal crossover, and it can be provided either by a special crossover cable or it can be integrated into the design of the hub. The latter solution is preferable because it allows the entire wiring installation to be performed "straight through," without concern for the crossover. The 802.3i specification requires that each hub port containing a crossover circuit be marked with an "X" to signify this.

Figure 7.9 An RJ-45 connector looks like a telephone cable connector.

While existing Category 3 cable can be used for 10BaseT, for new cable installations, the use of Category 5 cable is strongly recommended. Future developments in networking will never give cause to regret this decision, and the savings on future upgrades will almost certainly outweigh the initial expense. In addition, if cost is a factor, considerable savings can be realized by pulling Category 5 cable and utilizing Category 3 hardware for the connectors. These can later be upgraded without the need for invasive work.

The hubs used for a 10BaseT network may contain up to 132 ports, enabling the connection of that many devices, but multiple hubs can be connected to each other using a 10Base2 or other type of segment, or the 10BaseT ports themselves (as long as signal crossover is somehow provided). Up to three mixing segments connecting multiple hubs can be created, supporting up to 30 hubs each. Thus, as with the 10BaseF variants, to be examined later, it is possible to install the Ethernet maximum of 1024 nodes on a single network, without violating any other of the 802.3 configuration specifications. Hubs that conform to the standard will also contain a link integrity circuit that is designed to ensure that the connection between the hub port and the DTE at the other end of the cable remains intact.

Every 1/60th of a second, a pulse is sent out of each active port on a hub. If the appropriate response is not received from the connected device, then most hubs will be able to automatically disable the functionality of that port. Green LEDs on both the hub port and the NIC in the DTE will also be extinguished. This sort of link integrity checking is important for 10BaseT networks because, unlike the coaxial Ethernets, separate transmit and receive wire pairs are used. If a DTE were to have a non-functioning receive wire (due to a faulty interface, for example), it may interpret this as a quiet channel when network traffic is, in fact, occurring. This may cause it to transmit at the wrong times, perhaps even continuously, a condition known as jabber, resulting in many more collisions than the system is intended to cope with.
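The link-integrity behavior described above can be sketched roughly as follows; the pulse interval comes from the text, while the disable threshold is a hypothetical value chosen for illustration:

```python
# A rough sketch of hub link-integrity checking: a link pulse is
# expected from each connected device about every 1/60th of a second.
# The disable threshold below is a hypothetical value for illustration.
PULSE_INTERVAL_S = 1 / 60   # roughly 16.7 milliseconds

def port_state(time_since_last_pulse_s, tolerance=4):
    # Disable the port if no pulse has arrived within several
    # pulse intervals; otherwise keep it active.
    if time_since_last_pulse_s > tolerance * PULSE_INTERVAL_S:
        return "disabled"
    return "active"

print(port_state(0.010))  # pulse seen 10 ms ago: active
print(port_state(0.500))  # half a second of silence: disabled
```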

One of the frequent causes of problems on 10BaseT networks stems from the use of improper patch cables to connect computers to the wall socket. The standard satin cables used to connect telephones will appear to function properly when used to connect a DTE to a UTP network. However, these cables lack the twisting that is a crucial factor in suppressing the signal crosstalk that this medium is subject to. On a twisted-pair Ethernet network, collisions are detected by comparing the signals on the transmit and receive pairs within the UTP cable. When signals are detected at the same time on both pairs, the collision mechanism is triggered.

Excessive amounts of crosstalk can cause phantom collisions, which occur too late to be retransmitted by the MAC mechanisms within the Ethernet interface. These packets are therefore discarded and must later be detected and retransmitted by the upper layers of the OSI model. This process can reduce network performance considerably, especially when multiplied by a large number of computers.

Fiber-Optic Ethernet

The use of fiber-optic cable to network computers has found, with good reason, great favor in the industry. Most of the common drawbacks of copper media are virtually eliminated in this new technology. Since pulses of light are used instead of electrical current, there is no possibility of signal crosstalk and attenuation levels are far lower, resulting in much greater possible segment lengths. Devices connected by fiber-optic cable are also electrically isolated, allowing links between remote buildings to be safely created. Conducting links between buildings can be very dangerous, due to electrical disturbances caused by differences in ground potential, lightning, and other natural phenomena.

This sort of link between buildings is also facilitated by the fiber-optic cable's narrow gauge and high flexibility. Other means of establishing Ethernet connections between buildings are available, many of them utilizing unbounded, or wireless, media such as lasers, microwaves, or radio, but these are generally far more expensive and much less reliable. Fiber-optic cable is also capable of carrying far more data than the 10 Mbps defined by the Ethernet standards. Its primary drawback is its continued high installation and hardware costs, even after years on the market. For this reason, fiber-optic technology is used primarily as a backbone medium, to link servers or repeaters over long distances rather than for connections to the desktop, except in environments where electromagnetic interference (EMI) levels are high enough to prevent the use of other media.

FDDI is a fiber optic-based network standard that supports speeds of 100 Mbps; it will be examined later in this chapter. There is also an Ethernet alternative known as 10BaseF that utilizes the same medium. Cabling can be installed to run at the 10 Mbps provided by the Ethernet standard and later upgraded to higher speeds by the replacement of hubs and adapters. Like 10BaseT, fiber-optic Ethernet uses separate cables for transmitting and receiving data, but the two are not combined in one sheath, as UTP is, nor is there any reason for them to be twisted. The Ethernet fiber standards allow the use of a MAU that is external to the NIC, such as is used by thicknet networks. The fiber-optic MAU (FOMAU) is connected to the MDI using the same type of AUI cable used by thicknet MAUs. Other 10BaseF interfaces may integrate the MAU onto the expansion card, as with the other Ethernet variants.

The first fiber optic standard for Ethernet was part of the original DIX standard of the early 1980s. Known as the Fiber Optic Inter-Repeater Link (FOIRL) segment, its purpose, as the name implies, was to link repeaters at locations up to 1,000 meters away, too distant for the other Ethernet media types to span. This also provided a convenient method for linking different network types, once the thinnet and 10BaseT media standards came into use. As prices for the fiber-optic hardware came down, however (from outlandish to merely unreasonable), some users expressed a desire to use fiber links directly to the desktop. Some equipment allowing this was marketed before there was a standard supporting such practices, but the 10BaseF additions to the 802.3 specification provide a number of fiber-based alternatives to a simple repeater link segment.

When discussing the configuration of multiple Ethernet segments, the terms link segment and mixing segment are used to describe the two fundamental types of connections between repeaters. A link segment is one that has only two connections on it; that is, a link between two DTEs only, most often used to span the distance between two remotely located networks. A mixing segment is one that contains more than two connections, usually for the purpose of attaching nodes to the network. Thus, a standard thick or thin Ethernet bus connecting any number of computers would be a mixing segment. Technically, each connection on a 10BaseT network is a link segment because there are no intervening connections between the MAU in the hub and the MAU on the NIC.

Several sub-designations are specified by the 10BaseF standard, and the primary difference between them is the type of segment for which they are intended.

Broadband Ethernet

Although it is not often used, there is a broadband standard for Ethernet networks. A broadband network is one in which the network bandwidth is divided or split to allow multiple signals to be sent simultaneously. This is called multiplexing, and the Ethernet variant uses a method of multiplexing called frequency division multiplexing. The concept and the cable itself are similar to those used for a cable television network. Multiple signals are all transmitted at the same time, and the receiving station chooses the appropriate one by selecting a certain frequency to monitor. This form of Ethernet is known as 10Broad36 because the maximum segment length allowed by the standard is 3,600 meters. This is far longer than any other allowable segment in the entire specification, obviously providing the capability to make connections over extremely long distances. Fiber-optic cable has become much more popular for this purpose, however, and 10Broad36 installations are few and far between.

Configuration Guidelines for Multiple Ethernet Segments

As touched upon earlier, the key to the successful operation of an Ethernet network is the proper functioning of the media access control and collision detection mechanisms. Signals must be completely propagated around a collision domain according to specific timing specifications for the system to work reliably. The two primary factors controlled by these specifications are the round-trip signal propagation delay and the inter-packet gap.

The larger an Ethernet network is and the more segments that comprise a specific collision domain, the greater the amount of signal delay incurred as each packet wends its way to the far ends of the network medium. It is crucial to the efficient operation of the system that this round-trip signal propagation delay not exceed the limits imposed by the 802.3 specification.

When consecutive packets are transmitted over an Ethernet network, the specification calls for a specific inter-packet gap; that is, a required minimum amount of space (amounting to 9.6 microseconds) between packet transmissions. The normal operations of a repeater, when combined with the standard amount of signal disturbance that occurs as a packet travels over the network medium, can lead to a reduction in the length of this gap, causing possible packet loss. This is known as tailgating. A typical Ethernet interface is usually configured to pause for a brief period of time after reading the end of a packet. This blind time prevents the normal noise at the end of a packet from being treated as the beginning of a subsequent packet. Obviously, the blind time must be less than the inter-packet gap; it usually ranges from 1 to 4 microseconds. Should the inter-packet gap time be reduced to a value smaller than the blind time, an incoming packet may not be properly recognized as such and may therefore be discarded.
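The relationship between the inter-packet gap and the blind time reduces to a simple comparison; the figures in this sketch come from the text above:

```python
# Timing figures from the text: the required inter-packet gap and a
# typical receiver blind time after the end of a packet.
INTER_PACKET_GAP_US = 9.6    # required minimum gap, microseconds
TYPICAL_BLIND_TIME_US = 4.0  # upper end of the typical 1-4 range

def packet_at_risk(actual_gap_us, blind_time_us):
    # A following packet may be missed when the (shrunken) gap is
    # smaller than the interface's blind time.
    return actual_gap_us < blind_time_us

print(packet_at_risk(INTER_PACKET_GAP_US, TYPICAL_BLIND_TIME_US))  # False
print(packet_at_risk(3.0, TYPICAL_BLIND_TIME_US))  # gap shrunk in transit: True
```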

The 802.3 specification provides two possible means of determining the limitations that must be imposed on a particular network to maintain the proper values for these two attributes. One is a complex mathematical method in which the individual components of a specific network are enumerated and assigned values based on segment lengths, number of connections, and the number and placement of repeaters; calculations are then performed to yield the precise signal propagation delay and inter-packet gap figures for that installation. It is then easy to determine just what can be done to that network while still remaining within the acceptable range of values provided by the specification.

This procedure is usually only performed on networks that are a great deal more complex than the models provided as the other method for configuring a multi-segment network. This is sometimes known as the 5-4-3 rule, and it provides a series of simple guidelines to follow in order to prevent an Ethernet network from becoming too large to manage its own functions.

The basic 5-4-3 rule states that a transmission between any two devices within a single collision domain can pass through no more than five network segments, connected by four repeaters, of which no more than three of the segments are mixing segments. The transmission can also pass through no more than two MAUs and two AUIs, excluding those within the repeaters themselves. The repeaters used must also be compliant with the specifications in the 802.3 standard for their functions. When a network consists of only four segments, connected by three repeaters, then all of the segments may be mixing segments, if desired.
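The rule lends itself to a simple validity check. The following sketch encodes only the segment and repeater counts described above; the MAU and AUI limits are omitted for brevity:

```python
# A sketch of the 5-4-3 rule as a validity check on the path between
# two DTEs in one collision domain. It encodes only the segment and
# repeater counts; the MAU and AUI limits are omitted for brevity.
def path_valid(segments, repeaters, mixing_segments):
    if segments > 5 or repeaters > 4:
        return False
    if segments == 5 and mixing_segments > 3:
        return False
    return True

print(path_valid(segments=5, repeaters=4, mixing_segments=3))  # True
print(path_valid(segments=5, repeaters=4, mixing_segments=4))  # False
print(path_valid(segments=4, repeaters=3, mixing_segments=4))  # True: all may mix
```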

On a 10BaseT network, two segments are always utilized to connect the communicating machines to their respective hubs. These two segments are both link segments, because there are no connections other than those to the MAUs in the hub and the host adapter, which leaves up to three mixing segments for use in interconnecting multiple hubs.

A number of exceptions to this basic rule are defined as part of the 10BaseF standards, and these are among the primary advantages of these standards on an Ethernet. On a network composed of five segments with four repeaters, fiber-optic link segments (whether FOIRL, 10BaseFB, or 10BaseFL) can be up to 500 meters long, while fiber-optic mixing segments (10BaseFP) can be no longer than 300 meters. On a network with only four segments and three repeaters, fiber-optic links between repeaters can be up to 1,000 meters long for FOIRL, 10BaseFB, and 10BaseFL segments and 700 meters long for 10BaseFP segments. Links between a repeater and a DTE can be no longer than 400 meters for 10BaseFL segments and 300 meters for 10BaseFP segments.
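These limits are easier to consult as a lookup table. The sketch below simply collects the figures quoted above, keyed by segment type and by the number of repeaters on the path; the table structure itself is just an illustration:

```python
# The 10BaseF length limits quoted above, collected into a lookup
# table keyed by (segment type, repeaters on the path). Lengths are
# in meters.
MAX_FIBER_LENGTH_M = {
    ("link", 4): 500,    # FOIRL, 10BaseFB, or 10BaseFL; five segments
    ("mixing", 4): 300,  # 10BaseFP; five segments, four repeaters
    ("link", 3): 1000,   # four segments, three repeaters
    ("mixing", 3): 700,
}

print(MAX_FIBER_LENGTH_M[("link", 4)])    # 500
print(MAX_FIBER_LENGTH_M[("mixing", 3)])  # 700
```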

Obviously, these specifications provide only the broadest estimation of the actual values that may be found on a particular network. These rules define the maximum allowable limits for a single collision domain, and Ethernet is a network type that functions best when it is not pushed to its limits. This is not to say, however, that exceeding these limitations in any way will cause immediate problems. A segment that is longer than the recommended limit, or a segment with a few more DTEs than specified in the standard, will probably not cause your network to grind to a screeching halt. It can, however, cause a slight degradation of performance that will only be exacerbated by further expansion.