Chapter 6

The Workstation Platform

In the last chapter, when examining the hardware involved in assembling a network server, we narrowed our vision to the high end of the current PC market. In this chapter, we discuss the network workstation, and our survey is necessarily much wider in scope. For the purposes of this chapter, the term workstation refers to a PC based on one of the Intel x86 microprocessors. Other types of desktop computers, such as UNIX and Macintosh machines, are also sometimes referred to as workstations. The networking aspects of Macintoshes are covered in appendix C, "Adding Macintosh Access." As to UNIX, the subject is sufficiently enormous to preclude its inclusion in a book of this type.

As we have discussed, a file server and a workstation are, from the hardware perspective, more alike than they are different. Both contain the essential elements of a network PC: a microprocessor, memory chips, usually some form of storage medium, and, of course, the network interface that links the computer to all the other resources scattered around the enterprise. However, network workstations encompass a far greater range of assets and capabilities than servers do. While it is safe for me to say that a file server should be a 486-based machine or better, workstations can range from the original IBM PCs and XTs all the way to top-of-the-line Pentiums that could adequately function as servers themselves, given the proper software.

Obviously, the tasks that can be accomplished with a networked XT are substantially different from those that can be performed on a Pentium. Both have retained their usefulness in the workplace, however, and this chapter surveys the wide range of hardware that might be found in PCs that have been adapted for network use. Since most of the information already presented in the discussion of network servers is equally valid when examining a network workstation, it will not be repeated here. We will, however, cover the hardware that was glossed over in the last chapter, discussing in detail some of the technology that is more obviously suited to workstation use.

The lines between server and workstation hardware configurations often blur, so we will cover some material that could easily be germane in a further discussion of server configurations, including a detailed examination of the entire Intel microprocessor line. Some of the newer technologies on the market, such as Enhanced IDE hard drives and other peripherals, could well become standard equipment in smaller servers, and we will also discuss network interface cards (NICs) in greater depth. The topic of NICs was deferred from the last chapter because similar or even identical cards can be used in both the server and workstation platforms. Unless stated otherwise, any discussion of network interface hardware in this chapter is equally applicable to a server.

We also will discuss the viability of hardware upgrades on the workstation platform. In many cases, the most important thing is knowing which machines are worth upgrading at all and which components can be replaced or augmented without investing money in an obsolete technology that is destined for the high-tech junkyard.

Workstation Types and Specifications

Although faster and more capable computers are available every year, many networks in the corporate world continue to operate productively despite a reliance on technology that dates back to the early 1980s. When a tool is created to accomplish a particular task, and the task does not change over the years, many LAN administrators see no reason to change the tool. Some companies that rely heavily on particular DOS-based programs, either commercial applications or custom-designed ones, are still running workstations as old as the original IBM XTs. This is not to say that their tasks couldn't be accomplished more quickly and efficiently with a faster machine, for they unquestionably could, but the economic factor often tempers the drive for new technology. A shop that has hundreds of users running a simple application on a fleet of XTs may not realize a great enough gain in production to warrant replacing so many machines with newer models.

On the other hand, the problem with this philosophy is that very often the services required of the network do change and existing hardware might not be up to the task. A mass replacement of an entire fleet of machines not only incurs a large financial expense at one time, but also requires significant amounts of system downtime as well as retraining of personnel to use the new equipment. The alternative to this sort of procedure is the gradual upgrade or replacement of workstations as new technologies become available. Many shops have replaced all their workstation computers several times in the past decade. This certainly provides users with better tools to perform their tasks more efficiently, but it can be extremely expensive to regularly replace older equipment with newer equipment when the marketing pattern of the computer industry places such a high premium on the latest technology, and relegates the old to the scrap heap. If you were to replace a fleet of XTs today with new machines, you very likely would have to pay someone to haul away the old hardware. Warehouses across the country have old computers stored away gathering dust because they have been replaced with newer machines and there is no market for the old ones. Occasionally, a company may engage in a project or division that requires only limited workstation capabilities, which may allow this old equipment to be put to further use. Some organizations also participate in programs that coordinate the donation of older computers to educational or charitable institutions, but this is usually the exception, not the rule.

In the following sections, we will take a walk through the Museum of Workstation Technology and examine some of the old machines that you may still find in use in network shops today. Obviously, as each day passes, fewer and fewer of the old computers remain usable, but you may someday find yourself tasked with maintaining machines such as these. We will also try to determine the point at which upgrading these legacy machines becomes economically and technologically impractical. In most cases, workstations based on the 80386 or earlier processors are nearing the end of their usefulness, and upgrading them is like living in a rented apartment: you don't want to make any changes that you can't take with you when you leave. Indeed, the very oldest machines are nearly always not worth the effort; it usually becomes a matter of cannibalizing some machines for parts to keep the others running. Not every user needs to be equipped with a 486 or a Pentium (as much as the industry spin doctors would have you believe otherwise). A great many companies continue to use older technology to great effect, and having the knowledge and ability to maintain these machines demonstrates a sense of economic practicality that is lacking in many network administrators today.

The IBM PC

In 1981, when IBM released the original PC, what had been a hobbyist's toy was transformed overnight into a practical business tool. Technologically, it was not the best PC available at the time, but it was a marketing coup that set the standard for the way the PC business is conducted to this day. Built from readily available components and easily upgradable, the PC design allowed for maximum marketability and minimal financial risk on IBM's part. No one, however (IBM included), had any clue that the concept would be as successful as it was. Suddenly, the PC was a business tool, and the basic designs created by IBM for those original machines became industry standards that persist even now.

The original IBM PCs and XTs were based on the Intel 8088 microprocessor. Throughout the 1970s, Intel had steadily built increasingly powerful processors that were designed more for general-purpose use than for installation in specific computers. The 8086 chip, released in 1978, was an expansion of the earlier 8080 design and was the first Intel microprocessor to have a full 16-bit design. The processor had 16-bit registers, a 16-bit-wide data bus, and a 20-bit-wide address bus, allowing it to address a full megabyte (M) of memory (a huge amount at the time, considering that the original PC shipped with only 16K). That 1M was effectively divided into sixteen 64K segments, however, so that each segment looked much like the 64K address space of the earlier Intel 8080, for compatibility purposes.
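
To make the arithmetic concrete, the following C fragment sketches how the 8086/8088 forms a physical address: a 16-bit segment value is shifted left four bits and added to a 16-bit offset, yielding a 20-bit result and thus a 1M address space viewed through overlapping 64K windows. The numbers used here are illustrative only.

#include <stdio.h>
#include <stdint.h>

/* The 8086/8088 forms a 20-bit physical address from two 16-bit values:
 * the segment register is shifted left 4 bits and added to the offset.
 * This yields 2^20 bytes (1M) of addressable memory, viewed through
 * overlapping 64K segments. */
static uint32_t physical_address(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;
}

int main(void)
{
    /* 1M address space = 2^20 bytes */
    printf("Addressable memory: %lu bytes\n", 1UL << 20);

    /* Two different segment:offset pairs can name the same byte. */
    printf("0040:0017 -> %05X\n", (unsigned)physical_address(0x0040, 0x0017));
    printf("0000:0417 -> %05X\n", (unsigned)physical_address(0x0000, 0x0417));
    return 0;
}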

Although our history begins with the PC, it was not the first personal computer on the market by any means. IBM's design was constrained by the need for backward compatibility, even at that early date. A significant number of applications already existed for the eight-bit computers of the time, such as the Apple II, the Radio Shack TRS-80, and other machines that used the CP/M operating system (OS). By today's standards, these applications were few and very rudimentary, but IBM was attempting to protect its investment in every possible way in this venture, and it ultimately decided not to alienate that existing base by releasing a fully 16-bit computer.

The 8088 processor was released by Intel after the 8086. The two were nearly identical, except that the 8088 had a data bus that was only eight bits wide. Use of this processor allowed the PC to be built using eight-bit components (which were much more readily available) and allowed for the possibility of software conversion from the CP/M OS to the BASIC that was included in the PC's system ROM. The 16-bit internal registers of the 8088 processor allowed IBM to market the PC as a "16-bit computer" without alienating the existing user base. Some of the later low-end IBM PS/2 models utilized the 8086 processor.

The original 8088 processor ran at 4.77MHz and took approximately 12 clock cycles to execute a typical instruction. This is glacial performance by today's standards, but speed was not a major issue at that time. Desktop computers were utterly new to the business world, and the issue was having one or not having one, as opposed to how capable a machine was. Later models of the 8088 ran at 8MHz, providing some increased performance, but that was the limit for this microprocessor design.

The original PC was marketed for $1,355 in a base configuration that included 16K of memory and no storage medium other than its built-in ROM. An interface port for a typical audio cassette drive was included, but a floppy disk drive would cost you an extra $500. It's astonishing to think that all those obsolete computers now taking up warehouse space were just as expensive when they were new as today's far more capable machines. It was also the original PC that saddled us with the 640K conventional memory limitation that remains an albatross around our necks to this day. At the time, 640K was considered to be far more than any program would ever need. In fact, the PC could only support up to 256K on its system board; additional memory had to be added through the use of an expansion card. The 640K boundary was therefore somewhat arbitrarily chosen by IBM as the place where OS-usable memory would end and system resources such as video memory and BIOS would begin. In fact, these early machines used only a small fraction of the 384K allotted for these purposes, but the standard was set, and we are still living with it.
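
The resulting real-mode memory map can be sketched as a small C table; the exact adapter ranges varied from machine to machine, so treat the middle regions as typical rather than definitive.

#include <stdio.h>

/* A simplified view of the real-mode memory map established by the
 * original PC.  Only the first 640K is available to DOS and its
 * applications; the remaining 384K is reserved for video buffers,
 * adapter ROMs, and the system BIOS. */
struct region { const char *name; unsigned long start, end; };

int main(void)
{
    struct region map[] = {
        { "Conventional (DOS) memory",  0x00000, 0x9FFFF },  /* 640K */
        { "Video memory",               0xA0000, 0xBFFFF },  /* 128K */
        { "Adapter ROM / upper memory", 0xC0000, 0xEFFFF },  /* 192K */
        { "System BIOS",                0xF0000, 0xFFFFF },  /*  64K */
    };
    for (int i = 0; i < 4; i++)
        printf("%-28s %05lX-%05lX (%3luK)\n", map[i].name,
               map[i].start, map[i].end,
               (map[i].end - map[i].start + 1) / 1024);
    return 0;
}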

The XT

We have examined the original IBM PC because of its place in the history of desktop computing, but the first IBM model that can still be considered even a remotely usable network workstation today is the XT, which was first released in 1983. Still based on the Intel 8088 microprocessor, the XT is more recognizable as the prototypical business computer in many ways. From a hardware design standpoint, the placement of the XT's components and the layout of its motherboard are more akin to today's computers than the PC was. A 360K floppy drive and a whopping 10M (or later, 20M) hard disk drive were standard equipment, and the venerable ISA bus was already in place. A full 640K of memory could be mounted directly on the motherboard in later models, and serial and parallel connectors were available for connection to modems, printers, and other peripherals.

Although its original hard drive has barely the capacity to store today's entire DOS, the XT can easily be outfitted with a NIC and attached to a LAN. Obviously, its performance seems extremely slow to a sophisticated user, as its memory has a 200 nanosecond (ns) access time, and its hard drive has a transfer rate of only 85K per second, but for use with small, simple applications it can be a viable workstation. Even a reasonably sophisticated program such as WordPerfect 5.1 for DOS (still a highly useful application, despite having been overshadowed by its bloated successors) runs on an XT.

This is not to say, however, that the XT is a suitable general-use computer by today's standards. It most definitely is not. I have worked in shops, though, where a company's old XTs have been put to profitable use. One such case involved the introduction of a proprietary software package that ran on an OS/2 application server and was designed for use with dumb terminals connected to the server by a "roll-your-own" network of serial connections and multiport serial concentrators. Rather than spend money on new terminals for what amounted to an experimental business venture, the company used its fleet of obsolete XTs to run a terminal emulation program over the existing LAN. This arrangement worked out very nicely: the company was able to outfit an entire department with workstations at virtually no cost, they managed to clear out some valuable storage space where the old XTs were kept, and they had a number of extra machines left over that could be used for parts to repair the operational ones. Most of the repairs needed, by the way, were minor, involving worn-out keyboards, blown monitors, and the like. The XTs had held up remarkably well, despite years of previous use, plus several more years collecting dust.

That was, of course, an isolated case where older machines were put to good use in a modern workplace. For today's LANs, the XT is rarely worth using as a workstation, for almost any worker becomes more productive on a faster machine. These old warhorses can be put to productive use as dedicated network print or fax servers, though, or even as routers, on less demanding networks. However, if asked whether someone should purchase old XT machines that they might find, I unhesitatingly answer no, unless that person has a specific purpose that the machines are suited to and the price is extremely low. If asked, however, whether a company's fleet of XTs should be stored or discarded, I almost always say to hang onto them. Donating them or even distributing them to employees for home use would be preferable to throwing them away, especially with today's environmental concerns, for it might be more expensive to dispose of them properly than it would be to keep them.

As to the prospect of upgrading XTs, don't even think about it. Like most older computing technologies, the XT is a unified whole whose parts have been selected to work together. The ST-506 hard drive that was standard equipment at the time (discussed later in this chapter) is incredibly slow by today's standards, for example, but moves data faster than the XT can operate on it. There is virtually no single component in the machine that could be upgraded to increase the overall performance of the system once it is fully populated with memory. Parts can be replaced easily from other machines (providing at least one good reason to buy any old units you come across), but the XT is essentially a dead end that is not worth an extensive amount of time, effort, or expense.

The AT

In 1984, IBM released the AT computer. Based on the Intel 80286 processor, the AT was a great step forward from a hardware standpoint but was not utilized by software developers to anything approaching its capabilities. On the whole, the AT was treated like a better, faster XT, and on this basis, it became the prototype for a huge number of imitators to emulate, giving rise to the vast IBM clone market that has since overtaken the originator in sales by a huge margin.

While the 8088 mixed an eight-bit data bus with 16-bit registers, the 80286 was a 16-bit processor in every way. By that time, the 16-bit components used to build the ancillary circuitry of a PC were readily available, and IBM realized that its earlier fears concerning the compatibility of a 16-bit data bus had been unfounded. The PC was now a going concern for all involved, and even Intel began to design its microprocessors more specifically for use in computers. Components that had previously remained on separate chips were beginning to be incorporated into the microprocessor itself. Intel's earlier chips had deliberately avoided doing this to facilitate their use in devices other than computers. Other customers would not want to pay extra for circuitry that would go unused in another application, but now that there was a practically guaranteed market for the 80286 chip, microprocessors for use in PCs rapidly became the focus of Intel's development efforts.

The explosive growth of the PC industry was evident even in the various versions of the 80286 chip that were released. Originally designed to run at 6MHz, the chip was eventually offered in faster and faster versions, up to 20MHz. This, combined with the doubled width of the data bus, yielded a machine that was already far faster than the XT. The 286 also increased the number of address lines from 20 to 24, allowing up to 16M of physical memory to be addressed, as opposed to the 1M of the 8088.

The 286 was also the first Intel processor that could utilize virtual memory. Virtual memory is hard disk storage space that can be used by the processor as a substitute for actual memory chips. Data had to be swapped from the hard drive to RAM before it could be operated upon, but this technique allowed a 286-based machine to address up to 1G of total memory (16M of actual RAM chips and 1,008M of virtual memory). This was, of course, a far greater capacity than other hardware could accommodate at the time. The original AT could only mount 512K worth of DRAM chips on the motherboard, and the idea of a 1G-capacity hard drive for a desktop computer was completely absurd. The most obvious limitation, though, was that there was no OS available that could adequately make use of these capabilities. Once again, the specter of backward compatibility had risen, forcing the industry to attempt the all but impossible task of satisfying an existing user base that demanded greater performance without sacrificing its existing investment.

Real Mode versus Protected Mode

In order to make it compatible with earlier software, the 80286 microprocessor was designed to run in two different modes: real mode and protected mode. Real mode exactly emulates the functionality of the 8086 processor (not the 8088, as the chip still uses a 16-bit data bus), including the ability to address only the first megabyte of memory. When the computer is powered up, it initially boots into real mode and is completely capable of running any software that was written for earlier IBM PCs. The processor's protected mode is where the real advances in its architecture are evident. Once switched into this mode by a software command, the processor can access all the memory capabilities, both physical and virtual, that are present in the machine. Protected mode also gives the processor the ability to multitask: the now commonplace practice of running several programs at once, each protected from interference by the others.

Several problems impacted the use of this protected mode, however. The first was that despite the increased amount of memory that could be addressed, that memory was still broken up into 64K blocks, just as it had been with the 8086 and 8088 processors. This left programmers with the same memory segmentation difficulties that they had always had to work around. The only difference was that they now had more segments to work with. The other major problem was that the 286 processor, once it had been switched from real mode into protected mode, could not be switched back again except by resetting the processor, in effect restarting the computer.

It was here that a familiar pattern in the development of the microcomputer industry first emerged. Despite the extended capabilities of the 80286 chip, it was a full three years before an OS was developed that could take advantage of the chip's protected mode. This OS was OS/2, which in its early versions was a collaborative effort between IBM and Microsoft. OS/2 could effectively multitask programs written to take advantage of this feature and could access the entire range of the computer's memory, but then, as now, few applications were written specifically for it, and the OS never achieved a significant market share.

It was not until some time later, when Windows 3.0 was released by Microsoft, that a commercially successful operating environment could make use of the 80286's protected mode. Windows' Standard mode was specifically designed to take advantage of existing 286 systems, and while it could address all the memory in the machine, it could not multitask DOS programs. Only native Windows applications could run simultaneously. Besides, the Intel 80386 processor was already available by that time, and this new processor could make far better use of the Windows environment. Due to the lack of software support, the 286-based computer was essentially relegated to the role of a somewhat faster version of the XT.

The Clone Wars

While the XT remained primarily an IBM platform, the AT was the first microcomputer to be duplicated (or "cloned") in large numbers by other manufacturers. Hundreds of thousands of 80286-based computers were sold during the mid to late 1980s, and new systems were still widely available as late as 1992. The original IBM models ran at 6MHz, introduced the high density 1.2M 5 1/4-inch floppy disk drive, and were equipped with 20M or 30M hard disk drives of the same ST-506 variety as in the XT.

By the time that their popularity began to wane, however, due to the arrival of the 80386 processors, many manufacturers had substantially improved on the capabilities of the basic AT. Most shipped with a full 640K on the motherboard (while IBM's could only fit 512K; an expansion board was needed for more), ran the processor at faster speeds (up to 20MHz), and included larger and faster hard disks, including some of the first IDE drives. The 1.44M, high-density 3 1/2-inch floppy drive was also a popular addition. Video options ranged from the monochrome display adapter (MDA) of the original PC, to the Hercules monochrome graphics adapter, to the later color graphics adapter (CGA) and enhanced graphics adapter (EGA) color standards.

As a result, the 286 machines still found in network use can have a wide range of capabilities. Some are little more than slightly accelerated XTs, while others might have color graphics and enough hard drive space to actually be functional in a modern environment.

Networking ATs

Like the XT, AT-compatible computers are easily adaptable to network use. An ISA bus NIC can be easily inserted and configured to connect the system to a network. The primary drawback with the networking of 286-based and earlier computers is their general inability to load drivers into the upper memory blocks above 640K. Certain chipsets (such as those manufactured by Chips & Technologies) do allow for this possibility, but for most ATs and AT clones, all network drivers have to be loaded into conventional memory. Given the size of most network requesters, which can run up to 100K or more, this can seriously diminish the capability of the workstation to run programs. As with the XT, only simpler programs can be considered for regular use on an AT. Despite the ability of Windows 3.x to run on a 286 machine (in Standard mode only), the AT most definitely is not a suitable Windows platform for everyday use.

Upgrading ATs

Given the wide range of possible hardware configurations available on 80286-based PCs, upgrades of certain components certainly are a possibility, but the question remains whether the process is worth the effort and expense. Intel, for example, marketed a replacement chip for the 80286 called the "snap-in 386," which, because of socket and signaling differences, was the only way a 286 microprocessor could be upgraded. This upgrade could easily be applied if you could still locate the chip (which I doubt), but the performance gain would probably not be worth the effort.

On the other hand, certain upgrades are worth the effort if you are committed to using machines such as these in a production environment. For example, if you value the eyesight of your users, any machines utilizing a CGA display system should be upgraded. A video graphics array (VGA) card and monitor, or even an EGA package, would be a vast improvement, applied with very little effort. Even a monochrome solution is an improvement over CGA, but it has been some time since I have seen new monochrome monitors and display adapters available through a conventional source (a fact that I find infuriating when I end up installing 1M VGA cards and monitors on file servers).

Hard drives can be added or replaced in AT machines, although I would say that locating the hardware for anything other than an IDE or SCSI installation would not be worth the effort. See the "IDE" section, later in this chapter, for more information on this process.

I spent many years working primarily with 286-based machines, and while they can be sorely lacking in many of the advanced features that are taken for granted today, they are quite capable of running many DOS applications with satisfactory performance. WordPerfect and Lotus 1-2-3 were the mainstays of the shrink-wrapped software industry at the time of the AT's heyday, and both performed adequately on these machines. The current versions of these products contain capabilities that were unheard of in DOS applications of the late 1980s, but their publishers pride themselves on retaining backward compatibility with a vast installed user base. This is not to say that the average secretary or bookkeeper should have to make do with a 286 (anyone would be more productive with a newer, faster machine), but, as with the XT, there are places in the corporate world where this antiquated technology can be put to productive use.

The Intel 80386

By the beginning of 1987, systems built around the next generation of Intel microprocessors had begun to appear on the market. The advent of the 386 was a watershed in personal computing in many ways. First of all, you may notice that this section is not named for a particular IBM computer. By this time, "clone" manufacturers were selling millions of systems around the world, and IBM had stopped being the trendsetter it had always imagined itself to be. In fact, the term "clone" could no longer be considered pejorative. Rival system manufacturers such as Compaq had long since proven themselves to be much more than makers of cheap knockoff versions of IBM machines. Compaq was, in fact, the first systems manufacturer to release an 80386-based PC.

People realized that the IBM systems, while still technologically competitive, added an extra premium to their prices for a sense of brand recognition that was of diminishing value in the real world. Therefore, while IBM's PS/2 systems did utilize the 80386 processor to great effect, and while a great many companies remained IBM-only shops for many years afterward, this was the real beginning of the commercial home computer market. Hundreds of manufacturers began turning out 386-based systems at a phenomenal rate and selling them by means other than franchised dealerships and traditional corporate sales calls.

The 80386 processor was a technological breakthrough as well as a marketing one. It was not simply a case of Intel creating a faster chip with a wider bus, although they did do this. The 386 increased the power of personal computers through some fundamental architectural changes that might not be immediately apparent to some users. Indeed, the 386 operates at about the same efficiency level as the 80286, taking approximately 4.5 clock cycles to execute a single instruction. An 80386-based system running a DOS program at the same clock speed as a 286 system will not be tremendously faster. The real innovation behind the 386 was the capability to move personal computing into the age of multitasking.

The 80386 processor was made available by Intel in two basic flavors: the 80386DX and the 80386SX, the latter designed more as an entry-level processor aimed at the home user or less-demanding business user. The DX chip was, first of all, a full 32-bit processor in every way. The internal registers, data bus, and memory address lines were all 32-bit. This doubled the width of the pathway in and out of the processor when compared to the 286. In addition, this meant that a 386-based system could address up to 4G of actual, physical memory chips, and up to 64 terabytes (1 terabyte=1,000G) of total (that is, physical and virtual) memory. Obviously, this is a great deal more RAM than any desktop computer can hold, even today.
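
The relationship between address lines and addressable memory is simple powers-of-two arithmetic, as the short C program below illustrates for the processors discussed so far; the figures are theoretical maximums, not what any system board of the era could actually hold.

#include <stdio.h>

/* Physical address space is determined by the number of address lines:
 * each additional line doubles the amount of memory the processor can
 * reach.  These figures match those quoted in the text. */
int main(void)
{
    struct { const char *chip; int lines; } cpus[] = {
        { "8088/8086", 20 },   /*    1M */
        { "80286",     24 },   /*   16M */
        { "80386SX",   24 },   /*   16M */
        { "80386DX",   32 },   /* 4096M (4G) */
    };
    for (int i = 0; i < 4; i++) {
        unsigned long long bytes = 1ULL << cpus[i].lines;
        printf("%-10s %2d address lines -> %lluM physical\n",
               cpus[i].chip, cpus[i].lines, bytes / (1024 * 1024));
    }
    return 0;
}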

Although originally offered at speeds of 12.5MHz and 16MHz, Intel quickly acceded to the demands of users and began producing chips that ran at speeds up to 25MHz and 33MHz. These two became the flagship processors of the Intel line, although other chip manufacturers such as Advanced Micro Devices (AMD) later manufactured 386-compatible chips that ran at speeds up to 40MHz.

The 80386SX chip ran at speeds of 16MHz and 20MHz and was identical to the DX version of the processor in every other way except that its external data bus was only 16 bits wide and that it had only 24 address lines, giving it the same memory-handling capacities as the 286: 16M of physical memory and up to 1G of total physical and virtual memory.

Operational Modes

The real innovation behind the 386, though, can be found in the modes that it can operate in. Like the 286, the 386 has a real mode and a protected mode similar in functionality to those of the earlier chip. The system always boots into real mode, which still emulates the 8086 processor exactly, for compatibility with existing DOS programs. The system can then be shifted into protected mode by the OS, just as the 286 can. However, unlike the 286, the 386 chips can be switched back from protected mode to real mode without resetting (that is, power cycling) the microprocessor. In addition, another operating mode was added, called virtual real mode. This mode allowed existing DOS programs to be multitasked without any alteration whatsoever to their code.

Virtual real mode allows individual virtual machines to be created on a single system, each of which functions like a completely independent DOS session. Attributes such as environment variables can be individually modified without affecting the rest of the system, and, should a program in one virtual machine crash, the others can continue running normally (in theory). This is done by distributing the processor's clock cycles evenly among the virtual machines in a rotational manner. This is the basis of multitasking and the fundamental innovation of Microsoft Windows.
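
The following C sketch illustrates the round-robin idea in its simplest possible form; it is a toy model of rotating time slices among virtual machines, not a representation of how Windows or any other multitasker actually schedules its work.

#include <stdio.h>

#define NUM_VMS 3

/* A minimal sketch of round-robin time slicing: the processor's time is
 * handed to each virtual machine in turn, so every DOS session advances
 * without being aware of the others. */
struct vm { const char *name; unsigned long work_done; };

int main(void)
{
    struct vm machines[NUM_VMS] = {
        { "VM 1 (word processor)", 0 },
        { "VM 2 (spreadsheet)",    0 },
        { "VM 3 (e-mail)",         0 },
    };

    for (int slice = 0; slice < 9; slice++) {
        struct vm *current = &machines[slice % NUM_VMS]; /* rotate to the next VM */
        current->work_done++;                            /* run it for one time slice */
    }

    for (int i = 0; i < NUM_VMS; i++)
        printf("%-24s ran %lu time slices\n",
               machines[i].name, machines[i].work_done);
    return 0;
}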

Obviously, this function is as dependent on the OS as it is on the hardware. In this respect, the 80386 microprocessor must be considered alongside of Windows because together they completely changed the face of personal and business computing on the desktop. Within months of their release, 386-based systems had almost completely replaced 286s as the workstation of choice. With SX computers at very attractive entry-level prices and DXs positioned as the power user's platform, nearly every PC vendor on the planet signed an OEM agreement with Microsoft to bundle Windows with their computers. Other OSs also supported the multitasking capabilities of the 386, such as Quarterdeck's DESQview, IBM's OS/2, and various UNIX types, but none of these had the marketing push that Windows did, and they never caught on as suitable for everyday business use. Although the transition to Windows as the primary business platform of choice took a few years, this was the beginning of the revolution.

Networking 386s

As you can well imagine, the 386 machine became the preeminent business PC quite quickly and was just as rapidly introduced into network use. What Windows helped the 386 do to the desktop was equaled by what NetWare 386 did to the file server and the network. NOSs were able to take advantage of the chip's multitasking capabilities just as desktop OSs could, and the vastly improved memory handling of the processor allowed workstation network drivers to be loaded into the upper memory blocks between 640K and 1M, with the proper software support. This meant that most of the 640K of conventional memory could be used for running applications, instead of devoting a substantial part of it to network overhead.

The later microprocessor improvements that resulted in the 80486 chip were more of an incremental improvement than a revolutionary one. It was the 386 that set the foundation upon which today's most popular workstation processors are built. This leaves network administrators in a difficult situation, however. Millions of 386 computers were sold around the world, but now that 486s and Pentiums have garnered nearly the entire new computer market, everyone wonders what to do with the old machines.

The real problem is that these 386 workstations are not relics of another age, suitable only for use with archaic programs. The 286 and earlier machines simply are incapable of running today's applications and OSs satisfactorily. This is not the case with 386s. Although somewhat slower (or even a great deal slower), a well-equipped 386-based PC can run any of the current productivity applications in use today. Their marketing cachet is totally gone, however, and many computer users today express indignation at the prospect of being asked to use a 386 for general business use.

This is an unfortunate byproduct of the tremendous marketing efforts undertaken by Intel and other corporations to promote the latest PC technologies, especially the 486 and Pentium processors. Reaching out of the computer trade press and into the mainstream media, including high-tech television commercials, they have effectively convinced computer users and non-users alike that nothing less than a 486-based machine is acceptable. Now, the message is changing to emphasize the Pentium. With the release of the Pentium Pro, even the 486 is suddenly a generation older. By the end of 1995, Intel had all but ceased production of 486 processors, and now nothing less than a Pentium is available in a new model computer. For the corporate LAN administrator, though, this should not be a deciding factor. Vendors that cater to corporate clients can still supply 486 machines in quantity and are likely to continue doing so for as long as the demand persists.

Clearly, a pattern is beginning to emerge here. Every so often, a new level of PC technology is introduced and, once the kinks are worked out, the industry tries to persuade the public that their older products must be abandoned or upgraded if they are to remain productive or competitive. Add to this the fact that product cycle times have been diminishing steadily for several years. Software upgrades are delivered every twelve or eighteen months, and Intel's primary defense against competitive processor manufacturers is no longer litigation, but simply an accelerated development cycle for its next wave of technology.

Notice also that the well-trumpeted upgradability of Intel processors has turned out to be far more limited than we were originally led to believe. The P24T Pentium Overdrive has only recently made it to market after many months of empty promises, and even this is not a true 64-bit Pentium. In fact, while some non-Intel products do exist that can effectively upgrade 386 systems to 486 performance levels, a 486 cannot be upgraded to a full Pentium, and due to architectural changes, there will be no upgrade platform at all from the Pentium to the Pentium Pro.

For the network administrator in a corporate environment, it is obviously not practical to junk an entire fleet of PCs every time a new processor is released. The 386 is really the first case in which truly usable technology is in danger of being discarded due to sales pressure applied by the computer industry. The fact remains that, for a great many PC users in the business world today, a 386-based PC with sufficient memory and hard drive space is a completely satisfactory production machine. For Windows-based e-mail, standard word processing and spreadsheet use, and even Internet access, this is quite sufficient. This is not to say that I recommend purchasing 386 machines today (even if you could find them), but to warehouse or give away existing units because they are not 486s is lunacy.

I consider 386s to be the most upgradable of PCs. Most of the system hardware packages sold when the 386 was most popular are deficient in the memory and storage capacities that are recognized as essential for business use today. Fortunately, RAM upgrades and hard drives are two components that can be added easily to an underpowered system to make it into a practical workstation. Moreover, both can be removed easily from the 386 when it is finally time to retire the machine from duty. The installation of additional system memory is covered in chapter 5, "The Server Platform," while hard drive upgrades are discussed later in this chapter.

The Intel 80486

Intel marketed its first microprocessor, the 4004, in 1971. Designed for use in the first hand-held calculators, it had a 4-bit bus that made it capable of handling numerical input but very little else. During the next two decades, Intel continued to develop and refine its line of microprocessors, and in 1989, the company released its first model in the 80486 line.

Improvements of the 80486 Processor

The 486 has a full 32-bit bus, meaning that both the data and address buses of the I/O (input/output) unit are 32 bits wide. This allows up to 4G of physical memory to be addressed by the processor and up to 64 terabytes (1 terabyte=1,000G) of virtual memory. Virtual memory is a technique in which storage space on a disk drive can be utilized like actual memory through the swapping of data to and from the computer's memory chips.

When compared to the innovations of the 80386 processor, the 486 is more of an evolutionary step forward than a radical new design. The silicon of the chip is etched with finer details, as small as 0.8 microns (1 micron=1/1,000,000 of a meter), and faster clock speeds of up to 100MHz are supported. The capabilities of the 486's I/O unit are also significantly enhanced over the earlier processor models, allowing for off-chip memory accesses in burst modes that can deliver data to the processor at a rate of up to 32 bits per single clock cycle.

A clock cycle is the smallest unit of time recognized by the processor. Electrical current is applied to a quartz crystal in the system's clock circuitry, causing it to vibrate at a predetermined frequency that is used as a baseline for the timing of all processor operations. Therefore, a chip running at a clock speed of 100MHz is actually operating at 100 million clock cycles per second. Improvements in the architecture of the 486 allow it to execute a typical instruction in two clock cycles, while the 386 needed four or more.

These are the two fundamental ways in which processor design can be improved: increase the speed of the clock, or execute more instructions per clock cycle.
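
A back-of-the-envelope comparison makes the point; the cycles-per-instruction figures below are the approximate ones quoted in this chapter, not measured benchmarks, and the example is written in C purely for illustration.

#include <stdio.h>

/* The two levers named above: raise the clock, or lower the number of
 * cycles each instruction needs (CPI).  Rough instruction throughput is
 * clock rate divided by CPI. */
int main(void)
{
    struct { const char *chip; double mhz, cpi; } cpus[] = {
        { "8088",  4.77, 12.0 },
        { "80386", 33.0,  4.5 },
        { "80486", 66.0,  2.0 },
    };
    for (int i = 0; i < 3; i++)
        printf("%-6s %6.2f MHz / %4.1f cycles per instruction "
               "= ~%5.1f million instructions per second\n",
               cpus[i].chip, cpus[i].mhz, cpus[i].cpi,
               cpus[i].mhz / cpus[i].cpi);
    return 0;
}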

The architectural improvements of the 486, made possible in part by the increased number of transistors that can be packaged on a single chip, allow for several important resources to be built into the processor itself, as opposed to being located in separate units connected by the computer's system bus. Any operation that can be performed without accessing off-chip resources greatly enhances the overall speed and efficiency of the system.

Always remember that, although we are dealing with minute amounts of time and fantastic speeds, computing is all relative. Data that is moved about within the processor travels at a far greater speed (and over a much shorter distance) than that which must travel to the system's memory chips. Similarly, the memory is much faster than a hard drive, a hard drive is faster than a tape drive, a tape drive is faster than a floppy drive, and a floppy drive is faster than a pad and pencil.

The math coprocessor (sometimes called the floating point unit or FPU), for example, is now an integrated part of the microprocessor, as opposed to the separate chip that was required in the 80386 and earlier models. There is also now an 8K on-board cache that significantly increases the efficiency of the processor's I/O unit. This is a write-through cache of four-way set-associative design, meaning that any given memory location can be held in one of four possible locations within the cache. This reduces contention for cache space when several processes are active, making the design particularly effective for multitasking OSs like NetWare and Windows.

A write-through cache means that when the processor receives a command to read from the computer's memory, it first consults the cache to see if the desired data is present there. If it is, then I/O from the memory chips is not necessary, and processing can begin immediately using the cached data. When the processor writes to memory, however, it immediately sends its data to both the cache and the memory chips. By caching in only one direction, a processor sacrifices a measure of additional speed for the sake of data integrity. Additional off-processor RAM caching, called Level 2 or L2 cache, can also be used to great effect with the 486 chip, without interfering with the on-board cache. This sort of static RAM cache is discussed in chapter 5, "The Server Platform."
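
The behavior can be sketched in a few lines of C. This toy cache bears no resemblance to the 486's actual geometry; it exists only to show the write-through rule that every write goes to memory immediately, while reads can be satisfied from the cache.

#include <stdio.h>

#define CACHE_LINES 8     /* tiny toy cache, not the 486's real geometry */

/* Write-through behavior: reads are satisfied from the cache when the
 * data is present; every write goes to the cache AND immediately to main
 * memory, so memory is never stale. */
struct line { int valid; unsigned addr; int data; };
static struct line cache[CACHE_LINES];
static int memory[256];

static int read_word(unsigned addr)
{
    struct line *l = &cache[addr % CACHE_LINES];
    if (l->valid && l->addr == addr)          /* cache hit: no memory I/O */
        return l->data;
    l->valid = 1; l->addr = addr;             /* miss: fetch from memory and fill */
    l->data = memory[addr];
    return l->data;
}

static void write_word(unsigned addr, int value)
{
    struct line *l = &cache[addr % CACHE_LINES];
    l->valid = 1; l->addr = addr; l->data = value;  /* update the cache */
    memory[addr] = value;                           /* ...and memory, every time */
}

int main(void)
{
    write_word(10, 42);
    printf("read 10 -> %d (hit), memory[10] = %d\n", read_word(10), memory[10]);
    return 0;
}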

The 80486 Processor Line

The 80486DX processor is available in speeds ranging from 25MHz to 50MHz. Intel also markets an 80486SX line of processors that differ primarily by lacking an integrated math coprocessor. The intention behind this effort was to emphasize the upgradability of the 486 chip. Users were able to purchase a system with a relatively inexpensive 80486SX chip and later upgrade to the full DX version. The strategy was primarily aimed at the home computer and workstation market and is discussed in greater detail in the "Upgrading Processors" section, later in this chapter. For use in a server, you should consider nothing less than a 33MHz 80486DX processor.

Another innovation of the 486 design over the 386 was the capability for clock-doubling or clock-tripling processors. These processors, called the DX2 and DX4 versions of the 80486, are basically 25MHz or 33MHz chips that have been altered to operate at double or triple their rated speed. Thus, the maximum speed rating achieved by the Intel 80486 family is the DX4 version of the 33MHz processor, which is tripled to run at 100MHz (the native speed of the chip is actually 33.3MHz). The silicon chip itself is quite capable of performing at these high speeds, but there are two important considerations when evaluating these processors.

The first is the fact that these clock-doubled processors only run at double speed within the processor itself. Thus, when a 33MHz chip is doubled to operate at 66MHz, the internal math coprocessor and on-board cache are effectively doubled, but the communication between the I/O unit of the chip and the rest of the computer is still conducted at only 33MHz. It is the I/O unit itself that has been given the extra capability to translate between the two clock speeds, and that is therefore the buffer between the processor and the motherboard. Actually, this arrangement works out quite well because it is unnecessary to alter the motherboard or any of the computer's other hardware to accommodate the increased speed of the processor. This is, again, part of Intel's "upgradable processor" marketing strategy. DX2 and DX4 processor chips are also available on the retail market as Intel Overdrive processors. Identical in pinout configuration to the DX chips, which means that they can be installed into the same type of socket as the DX, these are designed to be installed in systems as a replacement for, or an addition to, an existing DX processor.

It should be noted, however, that on older system boards containing an extra processor socket for the installation of the upgraded chip, the old processor is completely disabled once the new one has been installed. While this is one of the few "chip-only" processor upgrades that I ever recommend be performed in a file server or workstation, I also recommend installing the upgrade chip in the primary socket rather than the second overdrive socket, so that the original processor chip can be removed and used elsewhere or kept as a spare.

The second area of concern with clock-doubled chips is heat. The faster a microprocessor runs, the more heat it generates, and excessive heat can turn this exquisitely etched piece of silicon technology into a high-priced guitar pick with amazing rapidity. For this reason, most 486DX2 and 486DX4 chips, as well as all Pentium processors, come with a heat sink attached to the top of the chip. A heat sink is simply a piece of metal or plastic with protruding fins or fingers that increases the chip's surface area through which heat can be dissipated. Many computer manufacturers are now using specially-designed fans, about an inch in diameter, that are mounted directly atop the processor chip to provide additional cooling. These fans are also available as add-on kits that can be easily installed on existing machines; the kits attach to the power supply, as opposed to factory-installed models which draw power directly from the processor socket. The use of one or both of these methods for cooling down processors is recommended, particularly in a file server that might contain a greater number of heat-generating components than the average PC.

Intel Pentium

Intel released the first generation of its next processor line, the Pentium, in early 1993. The Pentium represents a major step forward in microprocessing, while retaining full backward compatibility with all previous Intel designs. This step forward is not completely without repercussions to the rest of the industry. While the chip indeed runs existing software faster and more efficiently than the 486, its most revolutionary innovations will require some effort from software developers to be fully utilized.

What's in a Name?

Intel chose not to continue using the x86 naming scheme for its microprocessors after extended litigation failed to prevent rival chip manufacturers from using the term "486" to describe their own products. A number cannot be trademarked, so a suitable name was chosen for Intel's next generation microprocessor and duly protected as a trademark. Pentium-compatible chips by Cyrix and AMD are now appearing on the market, but they cannot use the Pentium name, and Intel has remained a jump ahead by bringing their newest processor, dubbed the Pentium Pro, to market with unprecedented speed.

The primary improvement of the Pentium is that it utilizes superscalar technology. Unlike the 486 and all previous Intel processors, which could only execute one instruction at a time, the Pentium has dual instruction pipelines, a feature that previously has been available only in high-speed RISC microprocessors. This gives the Pentium the capability to execute two simple integer instructions simultaneously, within a single clock cycle, under certain conditions.

The primary data path, called the u-pipe, can execute the full range of instructions in the processor's command set. A secondary path, called the v-pipe, has been added. The v-pipe is not as fully functional as the u-pipe. It can only execute a limited number of instructions in the processor's command set under particular conditions, but it can do so at the same time that the u-pipe is functioning.

Since each pipe has its own arithmetic logic unit (ALU), certain combinations of instructions can be "paired" to execute simultaneously, with the results appearing exactly the same as if the instructions had been performed sequentially. Other commonly used instructions have been hardwired into the processor for enhanced performance.

To take full advantage of these innovations, however, software developers have had to recompile their programs to ensure that this parallel processing capability is utilized to its fullest. By organizing software to make instructional calls using pairs that the Pentium can run utilizing both pipelines, developers ensure that a greater number of instructions are executed in the same number of clock cycles. This results in tremendous speed benefits.
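
The idea behind such reorganization can be sketched with a deliberately simplified model of the pairing rules, written in C for illustration; the real restrictions on what the v-pipe can accept are considerably more detailed than the two checks shown here.

#include <stdio.h>

/* A greatly simplified model of Pentium instruction pairing: two
 * consecutive simple instructions can issue together (one in the u-pipe,
 * one in the v-pipe) provided the second does not depend on the result
 * of the first.  This only shows why compilers reorder code to keep
 * adjacent instructions independent. */
struct instr { const char *text; int dest_reg, src_reg; int simple; };

static int can_pair(struct instr *u, struct instr *v)
{
    if (!u->simple || !v->simple) return 0;   /* both must be simple instructions */
    if (v->src_reg == u->dest_reg) return 0;  /* v must not read u's result */
    if (v->dest_reg == u->dest_reg) return 0; /* no write conflict */
    return 1;
}

int main(void)
{
    struct instr a = { "add eax, 1",   0, 0, 1 };
    struct instr b = { "add ebx, 1",   1, 1, 1 }; /* independent of a */
    struct instr c = { "add ecx, eax", 2, 0, 1 }; /* reads a's result */

    printf("a + b pair: %s\n", can_pair(&a, &b) ? "yes (1 cycle)" : "no (2 cycles)");
    printf("a + c pair: %s\n", can_pair(&a, &c) ? "yes (1 cycle)" : "no (2 cycles)");
    return 0;
}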

Some NOSs that run on the Intel processor, such as NetWare 4.x and Windows NT, have already been recompiled to take advantage of the Pentium's capabilities. Many desktop applications will also be recompiled as they are ported to 32-bit versions designed to take advantage of newer 32-bit desktop OSs such as Windows 95 and OS/2.

Another improvement found in the Pentium is the presence of two separate 8K memory caches within the processor. Each of the two caches is of two-way set-associative design, split into two 4K sections and using 32-byte lines (the 486 used 16-byte lines for its cache). One cache is utilized strictly for code, and therefore deals only with data traveling into the processor from the system bus. This prevents any delay of instructions arriving at the processor because of conflicts with the data traveling to and from the twin instruction pipelines.

The other cache is a data-only cache that has been improved over its 486 counterpart by being write-back capable (the older model was strictly a write-through cache). A write-back cache stores data on its way to and from the processor. Thus, output data remains in the cache and is not written to memory until subsequent usage forces a portion of the cache to be flushed. The write-through cache of the 486 stores data only on its way to the processor; all output is immediately written to memory, a process that can cause delays while the processor waits for memory chips to signal their readiness to accept data. The Pentium data cache can also be configured by software commands to switch from write-through to write-back mode as needed, holding its output in on-board memory buffers when the programmer deems it necessary. This helps to eliminate any possible delays caused by multiple calls to system memory. The data cache also has two separate interfaces to the system board to accommodate the two instruction pipelines of the processor, thus enabling it to deliver data to both pipes simultaneously.
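
The practical difference between the two policies can be shown with a trivial C sketch in which simple counters stand in for bus traffic; it is an illustration of the principle, not of the Pentium's actual cache logic.

#include <stdio.h>

/* Contrast of the two write policies described above.  In write-through
 * (the 486 data cache), every write is pushed to memory immediately.  In
 * write-back (the Pentium data cache), the write stays in the cache and
 * the line is marked "dirty"; memory is only updated when the line is
 * later evicted. */
static int memory_writes_through = 0;
static int memory_writes_back = 0;
static int dirty = 0;

static void write_through(void) { memory_writes_through++; }        /* cache + memory */
static void write_back(void)    { dirty = 1; }                       /* cache only */
static void evict(void)         { if (dirty) { memory_writes_back++; dirty = 0; } }

int main(void)
{
    for (int i = 0; i < 100; i++) write_through();  /* 100 writes -> 100 memory accesses */
    for (int i = 0; i < 100; i++) write_back();     /* 100 writes -> cache absorbs them */
    evict();                                        /* one memory access at eviction */

    printf("write-through: %d memory accesses\n", memory_writes_through);
    printf("write-back:    %d memory access(es)\n", memory_writes_back);
    return 0;
}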

Like the 486, the Pentium has a 32-bit address bus, giving it the same memory addressing capabilities as the earlier chip. However, the data bus has been increased to 64 bits, doubling the bandwidth for data transfers to memory chips on the system board. Some of the on-chip data paths have even been widened to 256 bits to accommodate the Pentium's burst-mode capabilities, which can send 256 bits into the cache in one clock cycle. These attributes combined allow the chip to transfer data to and from memory at up to 528M per second, while the maximum transfer rate of a 50MHz 486 is only about 160M per second.
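
Those peak figures fall out of simple multiplication (bus width in bytes times bus clock), as the following illustrative C fragment shows; the 160M figure quoted above for the 486 reflects real-world overhead rather than the theoretical peak.

#include <stdio.h>

/* Peak transfer rate is bus width times bus clock.  A 64-bit (8-byte)
 * bus clocked at 66MHz moves 528 million bytes per second at its
 * theoretical peak; a 32-bit (4-byte) bus at 50MHz peaks at 200M per
 * second. */
int main(void)
{
    double pentium = 8.0 * 66.0;   /* bytes per transfer x MHz = M per second */
    double i486    = 4.0 * 50.0;

    printf("Pentium, 64-bit bus at 66MHz: %.0fM per second peak\n", pentium);
    printf("486,     32-bit bus at 50MHz: %.0fM per second peak\n", i486);
    return 0;
}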

Improvements have also been made in the FPU, the processor's error detection protocols, and its power management features. The FPU of the Pentium has been completely redesigned. It now utilizes an eight-stage pipeline and can consistently perform floating point calculations in one clock cycle. Error detection is performed by two separate mechanisms: parity checking at the interface with the system board, and an internal procedure that checks the caches, buffers, and microcode on the chip itself.

As mentioned earlier, Intel in 1993 released the first generation of Pentium microprocessors. Running at 60MHz and 66MHz, these chips possessed all the capabilities previously described but were hampered by some aspects of their design that caused a number of problems.

First, the large number of transistors on the chip (3.1 million, up from 1.2 million on the 486), combined with Intel's continued use of the three-layer 0.8 micron complementary metal oxide semiconductor (CMOS) manufacturing technology from the 80486DX-50, required a very large die. This caused complications in the manufacturing process that severely hampered Intel's ability to deliver the chips in the quantities needed. In addition, this design caused the resulting chips to use a large amount of power, thereby generating tremendous heat. Moreover, the 60MHz version of the processor was nothing more than a 66MHz chip that had exhibited instability problems during the quality control process when run at 66MHz. Reports like these were not received well by consumers, and this, in combination with the initial high prices of the chips, caused informed buyers to be very cautious when considering the use of the new processor.

By March 1994, though, when the second generation of Pentiums came to market, the manufacturing techniques had been modified extensively. The chips were then made using a four-layer 0.6 micron bipolar complementary metal oxide semiconductor (BiCMOS) technology that had already been adopted by other chip manufacturers, and they required significantly less power than their earlier counterparts (3.3v, as compared to 5v for the earlier Pentium models, despite an increase in the number of transistors from 3.1 to 3.3 million).

Versions of the chip running at 90MHz and 100MHz were released, along with a 75MHz version designed for use in lower-end machines and portables. Unlike the first-generation chips, in which the processor ran at the same speed as the bus, the second-generation chips run at 150% of the bus rate. Thus, if a chip runs at 100MHz internally, communication with the system bus is actually conducted at 66MHz. At this time, 66MHz is still the fastest possible communication rate with the system bus.

Extensive power management capabilities were also added to the second-generation Pentium processor. The chip is capable of placing itself into one of several low power consumption modes, depending on the activities being conducted, and can even be used to control the suspension of power to other devices in the computer when they are not in use. While features of this sort are of more concern to laptop configurations than to those of file servers, bear in mind that reduced power also means reduced heat, which is beneficial to any system.

In March of 1995, Intel introduced the first chips in its next generation of Pentium microprocessors. Running at 120 and 133MHz, the newest Pentiums are manufactured using a 0.35 micron process that allows for a manufacturing die one-half the size of the previous generation's die, and one-fourth the size of the original Pentium's die. Still operating at 3.3 volts and utilizing four layers of metal between silicon BiCMOS wafers, these chips not only increase Pentium performance levels still further but also allow for more efficient manufacturing processes, which means lower costs and ready availability. These are now the processors of choice in high-end Pentium machines, and prices have dropped considerably with the advent of the Pentium Pro, which now occupies the position as the premium (read: most expensive) Intel processor on the market. Indeed, there are even rumors to the effect that the 150 and 160MHz Pentiums are ready for market, waiting only for an opportune release time that will not jeopardize Pentium Pro sales.

Rival Pentiums

After extended legal battles with AMD and other rival microprocessor manufacturers, Intel has effectively been forced to concede that other companies may manufacture Intel-compatible processors, as long as those designs do not infringe upon Intel's own.

Creating a Clean Copy

Rival manufacturers usually design their chips using a clean room technique. First, a team of technicians examines the target technology (in this case, an Intel chip) and documents its capabilities in great detail. Then a second team that has never examined the original technology is given the materials generated by the first team and tasked with the creation of a component that can do everything specified. This way, a product is created with the same capabilities as the original but realized in a completely independent way.

Several companies have created processors that rival the 486, some of which exceed the capabilities of Intel's 80486 line, and now the Pentium clones have begun to hit the market. NexGen's Nx586 is currently available, as are Cyrix's 6x86 and AMD's Am5x86. These manufacturers claim performance levels comparable to a 133MHz Pentium, but at this time their chips offer few real advantages over a true Pentium. The NexGen chip, in the first systems using it, has so far failed to provide a persuasive reason not to use Intel. Several of the larger systems manufacturers, among them Compaq, have expressed great interest in using these new processors, but their motivations are certainly more economic than technological. Until these chips are thoroughly tested in real-world situations, I would not recommend their use, especially in servers; even if they prove stable, their performance or price would have to be substantially better than that of their Intel counterparts to justify the switch.

Intel Pentium Pro

While the BiCMOS manufacturing method has yielded a Pentium of even greater speed than its predecessors, it has also set the stage for the next level in the Intel microprocessor family. Code-named P6 during development, and finally named the Pentium Pro for its release, this processor was developed with one primary goal in mind, according to an Intel press release: "To achieve twice the performance of [the] Pentium processor while being manufactured on the same semiconductor process." In order to do this, a new method of executing instructions had to be developed.

All microprocessors, up to and including the Pentium, are dependent on the system bus and memory chips of the computer to deliver instructions and data to the processor for calculation. Because these components operate at slower speeds than the internal workings of the processor, there have always been times when processing halts for short periods while data is being fetched from memory chips. These memory latency delays result in underutilization of the processor, and because the speed of memory devices has increased over the years at a rate far less than that of processors, simply requiring faster memory is not an adequate solution.

Intel's initial attempt to address this problem came in the form of a component introduced in the Pentium processor, the branch target buffer (BTB), which attempts to intelligently anticipate the next instruction that will be required in a string of commands and execute that instruction before it is actually received. When an instruction called for a branch (that is, a direction to access a command from a particular memory address), this address, along with the command, was stored in the Pentium's 256-entry BTB. The next time the same branch was called, this memory address would be located in the buffer, and its corresponding command executed before the instruction could actually be accessed from system memory. If the BTB had correctly anticipated the desired command, then the processor delay time caused by memory latency was partially offset by the immediate availability of the command's result; in other words, the command was executed while it was being accessed. If the BTB's guess was wrong, then a process called branch recovery was initiated, in which the results of the buffered command were discarded. The correct instruction was then executed with no significant loss of processor time.
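
The following fragment is a purely conceptual sketch, in C, of what a branch target buffer lookup amounts to. The 256-entry size comes from the text, but the field names, the direct-mapped indexing, and the update policy are simplifying assumptions made for illustration; they are not a description of Intel's actual circuitry.

    #include <stdint.h>
    #include <stdio.h>

    #define BTB_ENTRIES 256                  /* entry count cited for the Pentium's BTB */

    struct btb_entry {
        uint32_t branch_address;             /* address of the branch instruction       */
        uint32_t predicted_target;           /* address the branch jumped to last time  */
        int      valid;                      /* has this entry been filled yet?         */
    };

    static struct btb_entry btb[BTB_ENTRIES];

    /* Look up a branch; returns 1 and fills *target if a prediction is available. */
    static int btb_predict(uint32_t branch_address, uint32_t *target)
    {
        struct btb_entry *e = &btb[branch_address % BTB_ENTRIES];

        if (e->valid && e->branch_address == branch_address) {
            *target = e->predicted_target;   /* begin fetching here speculatively */
            return 1;
        }
        return 0;                            /* no prediction; wait for the fetch */
    }

    /* After the branch actually resolves, record (or correct) the entry. */
    static void btb_update(uint32_t branch_address, uint32_t actual_target)
    {
        struct btb_entry *e = &btb[branch_address % BTB_ENTRIES];

        e->branch_address   = branch_address;
        e->predicted_target = actual_target;
        e->valid            = 1;
    }

    int main(void)
    {
        uint32_t target;

        btb_update(0x1000, 0x2000);          /* the branch at 0x1000 jumped to 0x2000 */
        if (btb_predict(0x1000, &target))    /* the next time it is seen, predict     */
            printf("Predicted target: 0x%x\n", (unsigned)target);
        return 0;
    }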

The design of the Pentium Pro processor leaps ahead of this technique by means of dynamic execution. Dynamic execution is a method in which chains of commands are stored on the processor in an instruction pool and executed out of order during lag periods.

For example, the fetch/decode unit of the processor accesses a series of instructions from the system bus using a multiple branch prediction algorithm and a 512-entry BTB, and places them in the pool. The dispatch/execute unit then begins to perform the calculations of instruction #1, but the data required by the instruction cannot be found in the processor's cache, so the instruction cannot be completed until the required data arrives via the system bus. The processor then begins a dataflow analysis procedure that looks ahead to the next command in line for execution, instruction #2, but finds that it cannot be executed yet either, because it depends on the results of the uncompleted instruction #1. Analyzing the next waiting instructions, the processor finds that instructions #3 and #4 do not rely on earlier results and that all the data required for their completion is already present in the cache. Therefore, they can be executed during the idle time while the processor waits for the data needed by instruction #1.

The results of instructions #3 and #4 cannot yet be returned to the permanent machine state, as they are still the result of a speculative procedure, so they are stored in the retire unit, where they wait until the results of instructions #1 and #2 are available. The retire unit then is responsible for reordering the results of all four instructions into the correct sequence, and sending them back to the programmer-visible registers where normal operations can continue, oblivious to the machinations performed within the processor. Thus, instructions enter and exit the processor in the correct order, but are actually executed in whatever order serves to most efficiently minimize idle time at the processor core.
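
To make the example concrete, here is how such a sequence might look in ordinary source code. The variable names and values are invented purely for illustration, and the comments map each statement onto the numbered instructions above.

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical values; x stands in for data that must be fetched
           from main memory, while y and z are already in the cache.      */
        volatile double x = 1.0;
        double y = 2.0, z = 3.0;

        double a = x * 1.5;      /* instruction #1: stalls waiting for x          */
        double b = a + 10.0;     /* instruction #2: depends on a, so it must wait */
        double c = y * y;        /* instruction #3: independent of #1 and #2      */
        double d = z + 5.0;      /* instruction #4: independent of #1 and #2      */

        /* A dynamic-execution processor can compute c and d during the stall,
           then retire all four results in their original program order.       */
        printf("%.1f %.1f %.1f %.1f\n", a, b, c, d);
        return 0;
    }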

The Intel developers predict that with their new 0.35 micron manufacturing technology they will be able to realize Pentium Pro processors running at speeds of 200MHz or more with far greater efficiency than that of any microprocessor available today. The first Pentium Pro, though, will run at 133MHz and utilize the 0.6 micron BiCMOS technology of the second-generation Pentiums. The chip will integrate 5.5 million transistors into its design, require a reduced power supply of 2.9v, and include, within the same package, a Level 2 system cache on a separate die, connected to the processor by a dedicated high-speed bus. Intel is relying heavily on PCI bus mastering controllers to provide the system bus speeds necessary to accommodate the faster chip, which will, at its peak, execute three instructions per clock cycle. These fundamental architectural changes mean that, while the Pentium Pro will be 100% software compatible with all previous Intel processors, there will be absolutely no upgrade path from earlier processor families without motherboard replacement.

The first systems utilizing the Pentium Pro hit the market near the end of 1995, and unfortunately, they do not yet demonstrate a marked improvement in processing speed over the Pentium. Some observers have even suggested that Intel is holding up the release of its 150 and 166MHz Pentiums for fear of eclipsing the performance of its own "next generation." Particularly in light of the relative inadequacy of the first 60/66MHz Pentiums, it is safe to say that these are early days for the Pentium Pro. It offers little substantial improvement over the Pentium while carrying the high-ticket prices that are natural for a newly released technology. For now, the chip is too expensive for a workstation and too untried for a server; we can only hope that the next "next generation" will show the same refinement that was eventually achieved in the Pentium line.

It is also important to note that the primary competition to Intel in the desktop market comes from companies whose chips are used mainly in high-end workstations costing anywhere from 10 to 50 times as much as the average PC. Those processors also are much more expensive, and because the market is much smaller, they tend to be manufactured in numbers counted by the thousands, while Intel manufactured over 30 million 486 processors in 1994 and claims to be on track to surpass that number with the Pentium in 1995! The efficiency and reliability of the manufacturing process is therefore a crucial element of any Intel processor design. The industry has now progressed to the point where Intel is marketing products comparable to the high-end RISC processors at a much lower price and in quantities that probably make its competitors salivate.

With these considerations in mind, it is no wonder that Intel holds a lock on the world microprocessor market that extends far beyond PC use and into devices found in nearly every American home. It is very likely that this hold on the market will continue for some time to come.

Pentium FPU Flaws

In late 1994, the detection of a flaw in the FPU of the Pentium processor was publicized to a previously unheard-of degree, not only in trade publications, but in the mainstream press as well. The existence of a flaw, in itself, is not a terribly unusual occurrence. Indeed, many experts were quick to comment that it would be virtually impossible to produce a microprocessor of such complexity that it didn't contain a flaw at some level. The problem was caused by an error in a script that was created to download a software lookup table into a hardware programmable lookup array (PLA). Five entries were omitted from the table, with the result that division operations performed on certain combinations of numbers by the Pentium's FPU return results with what Intel refers to as "reduced precision." (That means the answer is wrong.)

Determining whether or not your Pentium contains the flaw is a simple procedure. Use a spreadsheet to perform the following calculation:

    (4195835 / 3145727) x 3145727

Obviously, the result should be the original number: 4195835. A flawed Pentium chip, though, generates a result that is off by 256. Incidentally, while many sources have introduced software "fixes" for the problem, most of these do nothing more than disable the floating point logic in the processor, slowing down all floating point calculations considerably, including those unaffected by the flaw. Some software vendors also have made patches or switches available that disable floating point calculations for their individual applications, allowing mission-critical data to be protected and the FPU to function normally elsewhere. Intel has made public a patch that intercepts only the offending floating point calculations and executes them in a different manner to assure correct results. This patch is in the form of code that is intended to be incorporated into application software by compilers and developers. It is not available as an executable program for use with software that has already been compiled.
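
The same check can also be run outside a spreadsheet. The short C program below is a sketch using the widely circulated test values for this flaw, and it assumes the compiler performs the division on the FPU at run time rather than folding the constants at compile time.

    #include <stdio.h>

    int main(void)
    {
        double x = 4195835.0;
        double y = 3145727.0;

        /* On a correct FPU, (x / y) * y returns x exactly for these values.
           A flawed Pentium FPU returns a result that is off by 256.         */
        double result = (x / y) * y;
        double diff   = x - result;

        printf("Result: %.0f (expected 4195835)\n", result);
        if (diff != 0.0)
            printf("Difference of %.0f: this FPU exhibits the flaw.\n", diff);
        else
            printf("No error detected by this particular test.\n");
        return 0;
    }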

No one with any experience in the computer industry can deny that products sometimes ship with flaws. It might even be safe to say that all products do, to some degree. The tremendous backlash of publicity regarding the Pentium flaw was due not so much to the problem itself but to Intel's response to industry criticism. The mathematician who discovered the problem attempted to ascertain just how serious the situation was by computing the probability of the error's recurrence during normal processor use. Intel immediately responded with its own figures and estimates that appeared to demonstrate that the occurrence of errors should be far less frequent than the mathematician's figures seemed to indicate. This argument continued in the press with increasing amounts of anger and statistics bandied about by all interested parties. It was also revealed that Intel had been aware of the problem for some time but had been very careful not to publicize it.

To clarify a few points, it should be noted that the problem occurs only in the FPU of the processor and that only a relatively small number of applications utilize the FPU. In the commercial desktop software field, it is primarily in spreadsheets, CAD, and similar financial and graphic design applications that the FPU is used. Network OSs are not affected, although some server-based financial and database engines might be. In addition, many of the processor calculations performed by these applications are dedicated to file management, screen display, and other "overhead" tasks that use only integer calculations and have nothing to do with the floating point calculations that potentially yield incorrect results. In other words, Intel was correct in its steadfast declaration that the vast majority of Pentium users will never be in a situation in which it is possible for the flaw to manifest itself. Where Intel began to go astray, however, was in the company's dealings with those users who do, in fact, utilize the FPU.

All the elaborate mathematical arguments presented by various parties were based on attempts to predict the number of divide calculations performed by particular applications under normal conditions. This was then used to calculate a probability of the flaw being manifested within a certain period of time. Intel attested, for example, that the average spreadsheet user was likely to encounter one divide error for every 27,000 years of normal spreadsheet use. This may be comforting to some, but in fact, there is no less chance that the payroll figures you calculate tomorrow will be wrong than there is that your income tax figures in the year 2296 will be wrong. There is no reason why the one time in 27,000 couldn't occur right now, no matter how many charts, graphs, and white papers attempt to prove otherwise. The bottom line is that there is a flaw, and your calculation results could be wrong.

On the basis of its probability arguments, Intel refused to implement a blanket exchange policy for purchasers of the flawed chips. The company proposed instead a program in which users would have to prove that they needed a processor that functioned properly before they were given one. After massive marketing campaigns in which Intel earnestly tried to get a Pentium into every home in America, this announcement, which effectively branded the vast majority of its users as unworthy of a product that performs as advertised, was an act of staggering gall, and industry commentators proceeded to tell the company so.

The final result of the controversy was that Intel suffered a tremendous amount of bad publicity and ultimately instituted a program that provides free replacement of the flawed chips to any user requesting one. The general consensus of the press was that people in the industry did not fault Intel so much for the flawed product as for the way the company attempted to cover up the problem and the way it reacted once the problem was exposed. Indeed, as of this writing, only about 10 percent of the flawed Pentium processors have been returned to Intel. There are, however, a great many users of the Pentium upon whose calculations rest the stability of bridges, the safety of automobiles, the lives of medical patients, and so on.

We can only hope that, as a result of the incident, a lesson was learned by some vendors in this industry. Consumers finally are becoming more conscious of the games that have been run on them successfully for many years, and they are becoming harder to fool. If they stay the course, it may eventually be cheaper and easier for vendors to be forthright about their shortcomings than to become clever enough to fool us again.

To obtain a replacement Pentium chip, call Intel at (800) 628-8686. You will be required to furnish a credit card number so that, if you fail to return the flawed processor chip within 30 days, you can be charged for the replacement. Intel representatives say that the company is doing this so that there is no way for flawed chips to remain in use or to be submitted for repeated replacements.

486 and Pentium Workstations

In chapter 5, "The Server Platform," we examined in depth the architecture and capabilities of the Intel 80486 and Pentium microprocessors. Naturally, processors that are acceptable for use in file servers also offer superb performance in a workstation. At this point, the standard network workstation configuration offered by most corporate-oriented vendors is an 80486DX2 processor running at 50 or 66MHz, 16M of memory, and approximately 300M of hard drive space. Pentiums at the workstation level are more often reserved for users with special needs, such as desktop publishing and other graphics work, CAD, and the like.

However, by the time the Pentium Pro comes into general release, Pentium prices will certainly have plummeted, and the Pentium eventually will become the standard desktop platform of choice. Of course, it will then be time to throw away all your company's 486s, right?

Upgrading Workstations

One of the secrets to administering a large fleet of PCs (or even one PC, actually) is knowing that the processor is not the be-all and end-all of a workstation's existence. A well-designed PC can run better and more efficiently than a badly designed one with a faster processor. In today's business world, it is often more economically realistic to purchase machines that can easily be upgraded in other ways to accommodate the ever-expanding software packages that will have to run on them.

Windows is the business environment of choice today, and is likely to remain so for the next few years. However, Windows applications are becoming more and more demanding of workstation resources such as memory and hard disk space. 16M of RAM is now a standard, whereas it was quite a luxury only a year or two ago. Power users and advocates of other OSs such as OS/2, Windows NT, or Windows 95 feel that 32M is now preferable. The office software suites that have sold so well recently require anywhere from 75M to 100M of storage for a full installation. (Wouldn't you love to go back in time and say this to that guy who was so proud of his new 10M hard drive in 1983?) Networking a PC, of course, can eliminate the need to have all resources present on the workstation's local drives, but there should be enough room for the OS and the user's most frequently used applications.

The key, then, to practical workstation administration is to categorize users on the basis of their computing needs and allot microprocessors accordingly. After that, make sure that all your computers have memory and disk storage sufficient for today's applications. This probably means adding RAM and hard drive space to virtually every 386 that you still own and assigning those machines to the users with the most basic needs.

The only real problem with this philosophy is that the pecking order in the corporate world sometimes does not allow for this sort of resource distribution. By purely practical standards, assistants who deal with extensive amounts of correspondence, mail-merges, and so on, should get the 486s or Pentiums, while executives who only use the computer to check their e-mail should get the older machines. I leave to the individual network administrator the task of explaining this to the executives.

For the remainder of this chapter, therefore, we examine the hardware and procedures necessary to attach a PC to the network and to keep it there as a usable resource for as long a period as possible. This means we must look at the memory and processor upgrades that are practical on the workstation platform, the addition or replacement of the hard drive types most commonly found in workstations, and the purchase and installation of NICs. The ability to perform procedures like these allows you to preserve the hardware investment your company has already made, while keeping your users productive and, if not happy, at least satisfied.

Processor Upgrades

As stated earlier, the Intel 80486 was more of an incremental development than a revolutionary one. It is primarily for this reason that processor upgrades from a 386 to a 486 are even remotely possible. Intel, however, would rather see you purchase an entirely new 486-based computer, so it was left to the company's competitors to produce such upgrade products.

80386 Upgrades

Cyrix markets a line of microprocessor upgrade kits for 386s that offer 486 performance levels for nearly any 386 machine. It should be noted at the outset, however, that this is not a true 486 upgrade, but rather a means of accelerating the performance of a 386 machine to levels approaching that of a 486. Cyrix is the first to admit that an upgraded 386 machine does not equal the capabilities of a true 486, whether a genuine Intel or one of Cyrix's own, but they do promise an overall performance increase of approximately 100% for less than $300, which may be helpful in keeping those 386 machines useful in the business network environment.

Two different upgrade kits are available from Cyrix, depending on the existing processor in the 386 machine. 80386DX-based machines running at 16MHz, 20MHz, 25MHz, and 33MHz require a complete processor replacement, and Cyrix includes a chip-pulling tool for removing the old processor. The microprocessors in 80386SX-based machines are soldered to the motherboard, and the Cyrix kit for these includes a specially designed chip that snaps in place over the existing one. Both kits are designed for installation by the average user, and make the process a rather simple one.

It should be noted that there are certain 80386 microprocessors that cannot be upgraded with these products. 16MHz 386SX machines manufactured before 1991 lack a float pin that is required for the new chip to function; 33MHz 386SX and 40MHz 386DX machines also cannot be upgraded. Older 387 math coprocessor chips might be incompatible with the upgrades, requiring replacement with a newer model. Cyrix has made available a listing of specific manufacturers and models that can be upgraded, as well as a free software utility that can be used to determine whether a specific machine is a viable candidate for an upgrade.

As far as software compatibility is concerned, Cyrix has certified their processor upgrades for use with all the major desktop OSs, including DOS, Windows, Windows NT, OS/2, and several varieties of UNIX. They have also certified their upgrades for use in NetWare, Banyan, and LAN Manager client workstations. Software is required to initialize the on-board 1K cache of the processor, and this is included in the kit.

Although clearly not a replacement for a true 486 workstation, an upgrade such as this provides a simple and economical way to preserve the extensive investments many companies have made in 80386 technology.

80486 Upgrades

It is only with 486 and higher-level machines that actual processor chip replacement becomes a practical upgrade alternative. The practice of upgrading the microprocessor on a PC's motherboard is one that should be approached with a good deal of caution. Basically, there are two fundamental problems involved in the process: the actual chip replacement and hardware compatibility.

Replacing the Chip

The physical act of replacing the microprocessor chip on a motherboard can be very difficult or ridiculously simple, depending on the hardware. Changing the processor in a machine with a traditional socket can be a miserable experience. First of all, because inserting the new chip into the socket requires a force of 100 pounds (or 60 pounds for the "low insertion force" socket), it will likely be necessary to remove the motherboard from the case. Most computers utilize plastic spacers at the corners to hold the motherboard away from the computer's case. Pressing down hard on the center of the board could easily crack it, so depending on the design of your PC, you might have to disassemble virtually your entire computer to get the motherboard out, unless you can manage to provide support for the board from beneath.

Once you have done this, you will need a chip puller (or a small screwdriver and a lot of courage) to pry the old processor out of the socket with even pressure from all sides so that it is lifted vertically away from the motherboard. Next, you must position the new processor over the socket so that all 273 or 296 pins are precisely aligned with their respective pinholes, then place the heel of your hand atop this expensive piece of silicon, metal, and plastic, and press down firmly with all your weight until the chip is well-seated. There should be approximately 1/16 inch of space between the chip and the socket. If you bend one of the pins, you might be able to bend it back with a pair of fine-nosed pliers. If you break off even one pin, however, the chip is ruined.

As noted earlier, you might find that your motherboard already has a second, vacant Overdrive processor socket on it. In cases like this, you need to perform only the latter half of the above procedure, but the machine's original processor will be disabled once the new chip is in place. Worse yet, damaging the original socket in an attempt to remove the old processor could render both sockets unusable.

You probably have gathered that I generally do not recommend replacing microprocessor chips on this type of motherboard. This is true, but not only for the reasons outlined above. The process can be difficult and is not recommended for the uninitiated, but it can be done with the right tools and a lot of confidence. The primary reason I hesitate to recommend upgrading processors is that there generally is more risk in the process than gain in the result.

On the other hand, most new motherboards utilize a zero insertion force (ZIF) socket for the processor. This is a plastic construction with a lever on the side that, when engaged, locks the processor chip into place in the socket (see fig. 6.1). In these machines, replacing a microprocessor chip is simply a matter of flipping open the lever, taking out the old chip (no tools needed), inserting a new one, and closing the lever again. The chip fits so loosely into the pinholes that you would worry about it falling out if the lever wasn't there. The replacement procedure is so simple that most motherboard manufacturers have opted to include a single ZIF socket, rather than two of the conventional type, for overdrive capability. The only things you can possibly do wrong are to insert the processor the wrong way, or insert the wrong processor into the socket. The first problem has been eliminated by the pin distribution in Intel's socket designs; there is only one possible way to insert the chip. The second, more complicated problem is explained in the following section.

Fig. 6.1. This is an empty ZIF microprocessor socket.

Microprocessor Interchangeability

By far, the more pervasive problem in upgrading processors is knowing which models can be safely upgraded to which other models in what has become an increasingly bewildering array of chips in the Intel processor line.

If a workstation is running any chip from the 80386 family, you can forget about a true processor replacement; 386 motherboards simply cannot handle the increased requirements of the more advanced processors. (The Cyrix accelerator kits described earlier are the only practical option.)

If your workstation is running any chip from the 80486SX or DX family, then you definitely can upgrade to a comparable DX2 Overdrive processor. If your computer's motherboard contains the original 169-pin Overdrive socket (Socket 1), then you can install the appropriate Overdrive chip for your original processor. For example, if your original CPU was a 486DX-25, you can install the 486DX2-50 Overdrive processor. Do not try to install a DX2-66, as your motherboard is configured only to run with a processor that communicates with the system bus at 25MHz.

Newer motherboards may contain the 238-pin socket, designated Socket 2 by Intel, which provides some added processor upgrade flexibility. Still running at 5v, like Socket 1, the second socket supports the P24T, the Pentium Overdrive processor, in addition to the chips mentioned above. Be aware, however, that this processor is not a full 64-bit Pentium. It is a 32-bit knockdown version that might provide some additional speed but will not result in the dramatic improvement that you would expect given the earlier description of the Pentium chip's capabilities. Depending on the price and availability of the Pentium Overdrive, which, after repeated delays, is now on the market, you might find it more economical to upgrade to the fastest possible 486 and save your money for a true Pentium system later.

Another alternative is the clock-tripled version of the 486, called the 80486DX4. Although not originally available on the retail market, an Overdrive version of the DX4 can now be purchased that triples the internal speed of the processor, just as the DX2 doubles it. ("Overdrive" is simply a marketing term for these processors when they are released into the retail market. Except for the P24T, they are indistinguishable from the chips found as original equipment in preassembled systems.) The primary architectural difference is that the DX4 runs at 3.3v, while all the other 486s run at 5v. For this reason, you can install a DX4 chip into a Socket 1 or Socket 2 motherboard only with the use of a voltage-regulating adapter. If you plug a DX4 directly into a 5v socket, you will ruin the chip and probably produce a very unpleasant smell.

The Intel Socket 3 designation is the one most conducive to successful upgrades. With 237 pinholes and designed to operate at either 3.3v or 5v, Socket 3 accommodates the entire 486 family as well as all the Pentium Overdrive chips. It is extremely important to determine which socket is installed on the motherboard that you wish to upgrade (see table 6.1). Some motherboard manufacturers also incorporate DIP switches into their upgradable processor designs. It is always safest to check the documentation of the motherboard or to call the manufacturer to determine whether or not a particular upgrade is advisable, and what additional adjustments to the motherboard might be necessary.

Table 6.1 Intel 486/Pentium CPU Socket Types and Specifications

Socket Number Number of Pins Pin Layout Voltage Supported Processors
Socket 1 169 17 x 17 PGA 5v SX/SX2, DX/DX2*
Socket 2 238 19 x 19 PGA 5v SX/SX2, DX/DX2*, Pentium Overdrive
Socket 3 237 19 x 19 PGA 5v/3.3v SX/SX2, DX/DX2, DX4, Pentium Overdrive, DX4 Pentium Overdrive
Socket 4 273 21 x 21 PGA 5v Pentium 60/66, Pentium 60/66 Overdrive
Socket 5 320 37 x 37 SPGA 3.3v Pentium 75/90/100/120/133, Pentium 90/100 Overdrive
Socket 6 235 19 x 19 PGA 3.3v DX4, DX4 Pentium Overdrive

PGA=Pin Grid Array
SPGA=Staggered Pin Grid Array
*DX4 can also be supported with the addition of a 3.3v voltage regulator adapter.

An upgrade from a 486 to a full Pentium chip is not possible without replacement of the motherboard, due to the differences in their pinouts and voltage requirements. This is just as well, in a way, because a good Pentium machine requires a motherboard that has been built to accommodate the needs of the faster processor. For the same reasons, the first-generation Pentiums running at 60MHz and 66MHz and requiring the 273-pin, 5v Socket 4, cannot be upgraded to the second-generation chips, running at 90MHz, 100MHz or faster, which use the 320-pin, 3.3v Socket 5.

The currently available family of Pentium processors, running at 75, 90, 100, 120, and 133MHz, all utilize this same socket, allowing them to be interchangeable on many motherboards, as long as the heat considerations of the faster chips are accounted for, which should not be a problem in most of today's Pentium systems. The Pentium Pro, however, as stated earlier, does not present a practical upgrade path from lesser processors, due to its fundamental architectural differences, particularly the integrated L2 cache.

Memory Upgrades

When considering a business-oriented workstation running Windows 3.1 or one of the newer 32-bit OSs, no other upgrade yields as immediate an increase in productivity as an installation of additional memory. A faster processor can speed up individual tasks, but memory is the lifeblood of a multitasking environment. The more RAM that is available, the greater the number of applications that can be opened and used simultaneously. For the business user, this can be critical, as the workflow of a typical day at the office often consists of numerous interruptions and digressions that require access to various resources and documents. The ability to open an additional program without having to close others can add up to tremendous time savings over long periods.

The capability of the OS to utilize disk drive space as virtual memory is useful as a safety buffer. When all the system's RAM is in use, further operations are carried out by swapping memory blocks to and from the hard drive. Once a system is loaded to the point at which virtual memory comes into play, however, performance levels drop precipitously, because hard drives are far slower than RAM chips; hard drive access times are measured in milliseconds (thousandths of a second), while RAM access times are measured in nanoseconds (billionths of a second). It is best, therefore, to avoid the use of virtual memory whenever possible, and the only way to do this without limiting the capabilities of the machine is to install more RAM.
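
A rough calculation illustrates the gap. Assuming a hard drive access time of about 10 milliseconds and RAM rated at 70 nanoseconds, both typical figures of the day rather than measurements of any particular system, 10 ms divided by 70 ns is roughly 140,000; an operation that must be satisfied from the swap file can therefore take on the order of a hundred thousand times longer than one satisfied from RAM.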

In chapter 5, "The Server Platform," we examined the way that system memory is organized into banks that determine the possible ways in which RAM can be upgraded. To reiterate the basic rule: a bank must be completely filled with identical RAM chips or must be left completely empty. You can consult table 6.2 to learn how most workstation computers have their memory organized, but the best way to find out what type and capacity of memory modules may be safely added to a particular PC is to check the documentation for the motherboard.

Fig. 6.2. This is the IDE master-slave relationship.

Table 6.2 Memory Bank Widths on Different Systems

Processor Data Bus Bank Size (w/Parity) 30-Pin SIMMs per Bank 72-Pin SIMMs per Bank
8088 8-bit 9-bits 1 1 (4 banks)
8086 16-bit 18-bits 2 1 (2 banks)
286 16-bit 18-bits 2 1 (2 banks)
386SX, SL, SLC 16-bit 18-bits 2 1 (2 banks)
386DX 32-bit 36-bits 4 1
486SLC, SLC2 16-bit 18-bits 2 1 (2 banks)
486SX, DX, DX2, DX4 32-bit 36-bits 4 1
Pentium 64-bit 72-bits 8 2

The primary problem with upgrading memory in 386-based machines is that many vendors, at the time of the 386's popularity, utilized 1M SIMMs in their computers. These machines usually shipped in 4M or 8M memory configurations, and the existing SIMMs might have to be removed from the system to make room for larger capacity modules in order to bring the system up to the 16M that is currently the typical amount of RAM for a networked Windows workstation. For example, a typical 386DX system might have eight memory slots broken into two banks of four each. The only way for the manufacturer to populate the machine with 8M of RAM would be to use 1M SIMMs in all eight slots. To perform a memory upgrade, at least one bank has to be cleared in order to install 4M SIMMs. The problem might be that you have no ready use for the 1M modules in this day and age, but letting those SIMMs go to waste is better than saddling a user with an underpowered machine.

Consult chapter 5, "The Server Platform," for more information on the actual installation of additional memory modules. The procedures are the same for a workstation as they are for a server. With memory prices hovering in the range of $30 to $40 per megabyte, it might cost $500 or more to bring a machine up to 16M of RAM, but this will make the difference between a barely usable relic and a viable productivity platform. Also, remember that the SIMMs can be removed when it is finally time to retire the machine.
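
As a rough check of that estimate, take the midpoint price of $35 per megabyte (an assumed example figure, not a quote): filling one bank of the 386DX described above with four 4M SIMMs yields 16M at a cost of about 16 x $35 = $560, which is consistent with the $500-or-more figure.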

Workstation Storage Subsystems

As with every other component in the personal computer, the minimum requirement for hard disk drive storage has greatly increased over the past several years. While the average 386-based machine sold in the early 1990s may have had a 120M drive as a standard component, a typical workstation configuration today may have 300M, 500M, or even more storage. Even more astonishing is the way that hard drive prices have dropped. Just five years ago, it was common to pay $2 or more per megabyte of storage (we won't even discuss the $1,500 10M drives of the early 1980s). Prices are now often less than fifty cents per megabyte, and some drives pack several gigabytes into the same form factor that could not hold one-tenth that amount only a few years ago.

This is, of course, a direct reaction to the needs of computer users. The average Windows application today requires from 5M to 20M of drive space just for installation, and the advent of multimedia and true color graphics has made multi-megabyte data files commonplace. In addition, the increasing use of full motion video on the desktop promises to expand these requirements to an even greater degree.

Attaching computers to a network, however, mitigates these requirements to some degree. It can often be difficult to decide how much disk space is truly necessary for a networked workstation. Some administrators swear by the use of a server for virtually all of a workstation's storage needs, including the OS. It is quite possible to run an entire Windows workstation without any local hard drive at all, but the burden that this places on the network, along with the decrease in performance seen at the workstation, is hardly justified by the small savings on the price of even a modest workstation hard drive. Even a 100M drive allows for the installation of DOS, Windows, a good-sized permanent Windows swap file, and some basic applications. In today's market, though, a 100M drive costs more per megabyte than a larger unit.

Other factors also affect your decision of what hard drive size is ideal for a networked workstation. The type of work being done, the applications used, backup procedures, and the security of the data files all must be considered. Once the decision is made, however, it may be necessary to augment or replace the hard disk drives in some older or underpowered workstations, for reasons of increased capacity, increased speed, or even drive failure. The following sections examine some of the hard drive technologies that might be found in various workstations, paying particular attention to the ATA interface (also known as IDE), which is unquestionably the most popular workstation hard disk drive interface in use today. We also explore the latest enhancements to this interface that are contributing greatly to its speed and versatility. Unlike some older technologies, ATA hard drive upgrades are easy to perform, and the units can readily be moved to other machines when necessary. With this knowledge, a LAN administrator should be able to configure workstations to the needs of their users in a simple and economical manner.