Monday, August 31, 2009

Intel 80386, also known as the i386


The Intel 80386, also known as the i386, or just 386, was a 32-bit microprocessor introduced by Intel in 1985. The first versions had 275,000 transistors and were used as the central processing unit (CPU) of many personal computers and workstations. As the original implementation of the 32-bit extensions to the 8086 architecture, the 80386 instruction set, programming model, and binary encodings are still the common denominator for all 32-bit x86 processors. This is termed x86, IA-32, or the i386 architecture, depending on context.
The 80386 could correctly execute most code intended for earlier 16-bit x86 processors such as the 80286; following the same tradition, modern 64-bit x86 processors are able to run most programs written for older chips, all the way back to the original 16-bit 8086 of 1978. Over the years, successively newer implementations of the same architecture have become several hundreds of times faster than the original 80386 (and thousands of times faster than the 8086). A 33 MHz 80386 was reportedly measured to operate at about 11.4 MIPS.
The 80386 was launched in October 1985, and full-function chips were first delivered in 1986. Mainboards for 80386-based computer systems were at first expensive to buy, but prices fell once the 80386 reached mainstream adoption. The first personal computer to make use of the 80386 was designed and manufactured by Compaq.
In May 2006, Intel announced that production of the 80386 would cease at the end of September 2007. Although it has long been obsolete as a personal computer CPU, Intel and others had continued to manufacture the chip for embedded systems. Embedded systems that use an 80386 or one of its derivatives are widely used in aerospace technology.
The processor was a significant evolution in the x86 architecture, and the latest of a long line of processors that stretched back to the Intel 8008. The predecessor of the 80386 was the Intel 80286, a 16-bit processor with a segment-based memory management and protection system. The 80386 added a 32-bit architecture and a paging translation unit, which made it much easier to implement operating systems that used virtual memory. It also had support for hardware debugging.
The 80386 featured three operating modes: real mode, protected mode and virtual mode. The protected mode which debuted in the 286 was extended to allow the 386 to address up to 4 GB of memory. The all new virtual 8086 mode (or VM86) made it possible to run one or more real mode programs in a protected environment, although some programs were not compatible.
The 32-bit flat memory model of the 386 would arguably be the most important feature change for the x86 processor family until AMD released x86-64 in 2003.
Chief architect in the development of the 80386 was John H. Crawford. He was responsible for the 32-bit extension of the 80286 architecture and instruction set, and he then led the microprogram development for the 80386 chip.
The 80486 and Intel Pentium line of processors were descendants of the 80386 design.

Intel 80286 Microprocessor


The Intel 80286, introduced on February 1, 1982 (also called iAPX 286 in the programmer's manual), was an x86 16-bit microprocessor with 134,000 transistors. It was the first Intel processor that could run all the software written for its predecessor.
It was widely used in IBM PC compatible computers during the mid-1980s to early 1990s, starting when IBM first used it in the IBM PC/AT in 1984.
After the 6 and 8 MHz initial releases, it was subsequently scaled up to 12.5 MHz. (AMD and Harris later pushed the architecture to speeds as high as 20 MHz and 25 MHz, respectively.) On average, the 80286 had a speed of about 0.21 instructions per clock. The 6 MHz model operated at 0.9 MIPS, the 10 MHz model at 1.5 MIPS, and the 12 MHz model at 1.8 MIPS.
An interesting feature of this processor is that it was the first x86 processor with protected mode. Protected mode enabled up to 16 MB of memory to be addressed by the on-chip linear memory management unit (MMU), with a 1 GB logical address space. The MMU also provided some degree of protection against (crashed or ill-behaved) applications writing outside their allocated memory zones. However, the 286 could not revert to the basic 8086-compatible "real mode" without resetting the processor, which imposed a performance penalty (though some very clever programmers did figure out a way to re-enter real mode via a series of software instructions that executed the reset while retaining active memory and control). The Intel 8042 keyboard controller in the IBM PC/AT had a function to initiate a "soft boot" that reset only the host CPU.
This limitation led to Bill Gates famously referring to the 80286 as a 'brain-dead chip', since it was clear that the new Microsoft Windows environment would not be able to run multiple MS-DOS applications on the 286. It was arguably responsible for the split between Microsoft and IBM, since IBM insisted that OS/2, originally a joint venture between IBM and Microsoft, run on a 286 (and in text mode). To be fair, when Intel designed the 286, it was not meant to multitask real-mode applications; real mode was intended as a simple way for a bootstrap loader to prepare the system and then switch to protected mode.
In theory, real mode applications could be directly executed in 16-bit protected mode if certain rules were followed; however, as many DOS programs broke those rules, protected mode was not widely used until the appearance of its successor, the 32-bit Intel 80386, which was designed to go back and forth between modes easily. See Protected Mode for more info.
The 80286 provided the first glimpse of the protection mechanisms then exclusive to the world of mainframes and minicomputers. This paved the way for the x86 and the IBM PC architecture to extend from the personal computer all the way to high-end servers, pushing competing architectures out of every market except the highest-end servers and mainframes; presumably, this ambition gave the IBM PC/AT (Advanced Technology) its name.

Intel 80188 microprocessor and Intel 80186 microprocessor


The Intel 80188 is a version of the Intel 80186 microprocessor with an 8-bit external data bus instead of a 16-bit one. This made it less expensive to connect to peripherals. Since the 80188 is otherwise very similar to the 80186, it had a comparable throughput of about 1 million instructions per second.
Like the 8086, the 80188 featured four 16-bit general registers, which could also be accessed as eight 8-bit registers. It also included six more 16-bit registers, among them the stack pointer, the instruction pointer, the index registers, and a status word (flags) register whose bits were set, for example, by comparison operations.
Just like the 8086, the processor also included four 16-bit segment registers that enabled the addressing of more than the 64 KB reachable with a single 16-bit address: the value of a segment register, shifted left 4 bits, was added to a 16-bit offset taken from another register. This addressing system provided a total of 1 MB of addressable memory, a value that, at the time, was considered to be very far away from the total memory a computer would ever need.
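The segment:offset calculation described above can be sketched in a few lines of Python (the function name is mine, for illustration only):

```python
def linear_address(segment, offset):
    """Compute the 20-bit physical address from a 16-bit segment
    and a 16-bit offset, as on the 8086/8088/80186/80188: the
    segment is shifted left 4 bits and added to the offset."""
    return ((segment << 4) + offset) & 0xFFFFF  # address wraps at 1 MB

# Example: segment 0xF000, offset 0xFFF0 -> physical address 0xFFFF0
print(hex(linear_address(0xF000, 0xFFF0)))

# A side effect of this scheme: many different segment:offset pairs
# alias the same physical address.
assert linear_address(0x1234, 0x0005) == linear_address(0x1000, 0x2345)
```

Because the segment contributes only 4 extra bits, the scheme tops out at 2^20 bytes, which is exactly the 1 MB limit mentioned above.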
The Intel 80186 is a microprocessor and microcontroller introduced in 1982. It was based on the Intel 8086 and, like it, had a 16-bit external data bus multiplexed with a 20-bit address bus. It was also available as the Intel 80188, with an 8-bit external data bus.
The 80186 and 80188 series was generally intended for embedded systems, as microcontrollers with external memory. Therefore, to reduce the number of chips required, it included features such as clock generator, interrupt controller, timers, wait state generator, DMA channels, and external chip select lines.
The initial clock rate of the 80186 and 80188 was 6 MHz, but due to more hardware available for the microcode to use, especially for address calculation, many individual instructions ran faster than on an 8086 at the same clock frequency. For instance, the common register+immediate addressing mode was significantly faster than on the 8086, especially when a memory location was both (one of the) operand(s) and the destination. Multiply and divide also showed great improvement and were several times as fast as on the original 8086. Multi-bit shifts were done in a single pass through the ALU rather than one pass per bit of shift.
A few new instructions were introduced with the 80186 (referred to as the 8086-2 instruction set in some datasheets): enter/leave (replacing several instructions when handling stack frames), pusha/popa (push/pop all general registers), bound (check array index against bounds), ins/outs (input/output of string). A useful immediate mode was added for the push, imul, and multi-bit shift instructions. These instructions were included in the 80286 and successor chips.
The (redesigned) CMOS version, 80C186, introduced DRAM refresh, a power-save mode, and a direct interface to the 8087 or 80287 floating point numeric coprocessor.

The Intel 8088 microprocessor


The Intel 8088 microprocessor was a variant of the Intel 8086 and was introduced on July 1, 1979. It had an 8-bit external data bus instead of the 16-bit bus of the 8086. The 16-bit registers and the one megabyte address range were unchanged, however. The original IBM PC was based on the 8088.
The 8088 was targeted at economical systems by allowing the use of an 8-bit data path and 8-bit support and peripheral chips; complex circuit boards were still fairly cumbersome and expensive when it was released. The prefetch queue of the 8088 was shortened to four bytes, from the 8086's six bytes, and the prefetch algorithm was slightly modified to adapt to the narrower bus.

Variants of the 8088 with more than 5 MHz maximum clock frequency include the 8088-2, which was fabricated using Intel's new enhanced NMOS process called HMOS and specified for a maximum frequency of 8 MHz. Later followed the 80C88, a fully static CHMOS design, which could operate from 0 to 8 MHz. There were also several other, more or less similar, variants from other manufacturers. For instance, the NEC V20 was a pin compatible and slightly faster (at the same clock frequency) variant of the 8088, designed and manufactured by NEC. Successive NEC 8088 compatible processors would run at up to 16 MHz.
Depending on the clock frequency, the number of memory wait states, and the characteristics of the particular application program, the average performance of the Intel 8088 ranged from approximately 0.33 to 1 million instructions per second. Meanwhile, the mov reg,reg and ALU reg,reg instructions, taking 2 and 3 cycles respectively, yielded an absolute peak performance of between 1/3 and 1/2 MIPS per MHz, that is, roughly 3.3 to 5 MIPS at 10 MHz.
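The peak figures above are just the clock rate divided by the instruction's cycle count. A quick Python sketch (function name mine, purely illustrative) makes the arithmetic explicit:

```python
def peak_mips(clock_mhz, cycles_per_instruction):
    """Peak instruction rate in MIPS if the CPU did nothing but
    execute one instruction type back to back (an idealized bound,
    ignoring prefetch stalls and memory wait states)."""
    return clock_mhz / cycles_per_instruction

# At 10 MHz: a 2-cycle mov reg,reg peaks at 5 MIPS,
# while a 3-cycle ALU reg,reg peaks at about 3.3 MIPS.
print(peak_mips(10, 2), round(peak_mips(10, 3), 1))
```

Real-world throughput (0.33 to 1 MIPS) fell well below these peaks because of the narrow bus, the four-byte prefetch queue, and memory wait states.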
The original IBM PC was the most influential microcomputer to use the 8088. It used a clock frequency of 4.77 MHz (4/3 the NTSC colorburst frequency). Some of IBM's engineers and other employees wanted to use the IBM 801 processor, some preferred the new Motorola 68000, while others argued for a small and simple microprocessor similar to what had been used in earlier personal computers. However, IBM already had a history of using Intel chips in its products and had also acquired the rights to manufacture the 8086 family. Another factor was that the 8088 allowed the computer to be based on a modified 8085 design, as it could easily interface with existing, and quite economical, 8085-type components.

64-bit Microprocessors


Sixty-four-bit processors have been with us since 1992, and in the 21st century they have started to become mainstream. Both Intel and AMD have introduced 64-bit chips, and the Mac G5 sports a 64-bit processor. Sixty-four-bit processors have 64-bit ALUs, 64-bit registers, 64-bit buses and so on.
One reason why the world needs 64-bit processors is their enlarged address spaces. Thirty-two-bit chips are often constrained to a maximum of 2 GB or 4 GB of RAM. That sounds like a lot, given that most home computers currently use only 256 MB to 512 MB of RAM. However, a 4-GB limit can be a severe problem for server machines and machines running large databases. And even home machines will start bumping up against the 2 GB or 4 GB limit pretty soon if current trends continue. A 64-bit chip has none of these constraints, because a 64-bit address space is essentially infinite for the foreseeable future -- 2^64 bytes of RAM is on the order of 16 billion gigabytes.
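The size difference is easy to verify with a couple of lines of Python (variable names mine, for illustration):

```python
GIB = 2 ** 30  # one gibibyte, in bytes

addr_32 = 2 ** 32  # bytes addressable with a 32-bit address
addr_64 = 2 ** 64  # bytes addressable with a 64-bit address

# 32 bits reach 4 GiB; 64 bits reach about 17 billion GiB (16 EiB).
print(f"32-bit address space: {addr_32 // GIB} GiB")
print(f"64-bit address space: {addr_64 // GIB:,} GiB")
```

So widening the address from 32 to 64 bits does not double the reachable memory; it multiplies it by 2^32, about four billion times.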

With a 64-bit address bus and wide, high-speed data buses on the motherboard, 64-bit machines also offer faster I/O (input/output) speeds to things like hard disk drives and video cards. These features can greatly increase system performance.

Servers can definitely benefit from 64 bits, but what about normal users? Beyond the larger RAM capacity, it is not clear that a 64-bit chip offers "normal users" any real, tangible benefits at the moment. They can process very complex data featuring lots of real numbers faster. People doing video editing and people doing photographic editing on very large images benefit from this kind of computing power. High-end games will also benefit, once they are re-coded to take advantage of 64-bit features. But the average user who is reading e-mail, browsing the Web and editing Word documents is not really using the processor in that way.