Appendix: Foreign Literature and Translation

Progress in Computers

The first stored-program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.

These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesse. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted, and the term logic family came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering, and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognized as an activity in its own right. I remember that we had some difficulty in organizing a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s
By the late 1950s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased, and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high-level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip-flops. This led to a range of chips known as the 7400 series. The gates and flip-flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.
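To make that concrete, here is a minimal sketch, in Python, of the kind of building block the 7400 series offered: independent gates, each with its own pins, wired together externally into a storage element. The gate model and the wiring are illustrative assumptions, not a description of any particular board.

```python
# Minimal sketch: model 7400-style NAND gates and wire two of them,
# "off-chip", into an SR latch (a simple flip-flop). Illustrative only.

def nand(a: int, b: int) -> int:
    """One gate of a 7400-style quad two-input NAND package."""
    return 0 if (a and b) else 1

def sr_latch(set_n: int, reset_n: int, q: int, q_n: int):
    """Two cross-coupled NAND gates; inputs are active-low.
    Iterate until the feedback loop settles."""
    for _ in range(4):  # a few passes are enough for this tiny circuit
        q, q_n = nand(set_n, q_n), nand(reset_n, q)
    return q, q_n

q, q_n = 0, 1
q, q_n = sr_latch(0, 1, q, q_n)   # pulse Set (active low): Q becomes 1
print(q, q_n)                     # 1 0
q, q_n = sr_latch(1, 0, q, q_n)   # pulse Reset: Q becomes 0
print(q, q_n)                     # 0 1
```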
These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organization, a business or a university was able to have a minicomputer for each major department. Before long, minicomputers began to spread and become more powerful. The world was hungry for computing power, and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry going the other way. As time goes on, people get more for their money, not less.

Research in Computer Hardware

The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level, and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks. Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.

The RISC Movement and Its Aftermath
Early computers had simple instruction sets. As time went on, designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done, and on the whole the choice of features depended upon the designers' intuition.

In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled "The Case for the Reduced Instruction Set Computer". Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers.
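As a rough illustration of why pipelining pays, here is a minimal sketch assuming a classic five-stage pipeline in which a new instruction can be started every cycle. The stage names and the timing model are textbook simplifications, not a description of any particular machine.

```python
# Minimal sketch of the pipelining idea: with a five-stage pipeline, several
# instructions are in different stages of execution at the same time, so
# total time grows far more slowly than with strictly sequential execution.

STAGES = ["fetch", "decode", "execute", "memory", "writeback"]

def pipelined_cycles(n_instructions: int) -> int:
    """Cycles needed when a new instruction can enter the pipe every cycle."""
    return len(STAGES) + (n_instructions - 1)

def sequential_cycles(n_instructions: int) -> int:
    """Cycles needed if each instruction must finish before the next starts."""
    return len(STAGES) * n_instructions

for n in (1, 5, 100):
    print(n, sequential_cycles(n), pipelined_cycles(n))
# For 100 instructions: 500 cycles sequentially versus 104 pipelined;
# throughput approaches one instruction per cycle.
```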
The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out in practice.

Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one. They refer to such a roomful by the attractive name of computer farm.
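The flavour of such performance estimation can be suggested with a toy timing model: feed a workload's instruction mix through per-design cycle costs and compare the resulting cycles per instruction (CPI). The mix and the cost figures below are invented for illustration; real simulators model a design in far greater detail.

```python
# Minimal sketch of design-by-simulation: estimate CPI for two candidate
# designs from an instruction mix. All numbers are invented for illustration.

mix = {"alu": 0.45, "load": 0.25, "store": 0.10, "branch": 0.20}

conventional = {"alu": 4, "load": 5, "store": 4, "branch": 3}   # multi-cycle
risc = {"alu": 1, "load": 1.4, "store": 1, "branch": 1.5}       # pipelined;
# stall penalties are folded into the per-class averages

def cpi(mix, costs):
    """Weighted average cycles per instruction for the given mix."""
    return sum(frac * costs[kind] for kind, frac in mix.items())

print(f"conventional CPI: {cpi(mix, conventional):.2f}")   # 4.05
print(f"RISC-style   CPI: {cpi(mix, risc):.2f}")           # 1.20
```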
The x86 Instruction Set

Little is now heard of pre-RISC instruction sets, with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set, and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.

This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research-oriented people would like to think otherwise, high-level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel's major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.

There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set, as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor, where the critical execution is performed.
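A minimal sketch of that conversion step, with invented mnemonics: a register-to-register x86-style instruction maps to a single internal operation, while one with a memory operand is split into a load followed by an ALU operation. Real decoders are, of course, far more elaborate.

```python
# Minimal sketch: "massage" an x86-style instruction into simple RISC-style
# internal operations. The micro-op names and the one decoding rule shown
# here are invented for illustration.

def decode_to_micro_ops(instr: str) -> list[str]:
    op, dst, src = instr.replace(",", "").split()
    if src.startswith("["):            # memory operand: split into load + ALU op
        addr = src.strip("[]")
        return [f"uLOAD  t0, {addr}",
                f"u{op.upper()}  {dst}, {dst}, t0"]
    return [f"u{op.upper()}  {dst}, {dst}, {src}"]  # register form maps 1:1

for line in decode_to_micro_ops("add eax, [ebx]"):
    print(line)
# uLOAD  t0, ebx
# uADD  eax, eax, t0
```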
In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson's book on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp. 146, 151-4, 157-8.

The IA-64 Instruction Set

Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognized need for a 64-bit address space. In this, it followed the lead of the designers of the MIPS R4000 and the Alpha. However, one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.

Moreover, built into the design of IA-64 is a feature known as predication which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.
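A minimal sketch of what predication buys: a compare sets a pair of predicate registers, and each subsequent instruction names the predicate under which it takes effect, so an if/else needs no branch at all. The register names and operations below are invented for illustration; on IA-64 the 6 extra bits per instruction select one of its 64 one-bit predicate registers.

```python
# Minimal sketch of predication: every instruction carries a predicate and
# is simply discarded when that predicate is false, so no branch is needed.

regs = {"r1": 7, "r2": 9, "r3": 0}
preds = {"p1": False, "p2": False}

def cmp_lt(p_true, p_false, a, b):
    """Set p_true to (a < b) and p_false to its complement."""
    preds[p_true] = regs[a] < regs[b]
    preds[p_false] = not preds[p_true]

def add_if(pred, dst, a, b):
    if preds[pred]:                 # predicate false: result is thrown away
        regs[dst] = regs[a] + regs[b]

def sub_if(pred, dst, a, b):
    if preds[pred]:
        regs[dst] = regs[a] - regs[b]

# if (r1 < r2) r3 = r1 + r2; else r3 = r1 - r2;  with no branches:
cmp_lt("p1", "p2", "r1", "r2")   # p1 = (r1 < r2), p2 = its complement
add_if("p1", "r3", "r1", "r2")   # takes effect only under p1
sub_if("p2", "r3", "r1", "r2")   # takes effect only under p2
print(regs["r3"])                # 16, since r1 < r2 here
```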
In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant. Chips for the latest IA-64 processor, namely the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.

Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost. Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.

Shortage of Electrons

Although a shortage of electrons has not so far appeared as an obvious limitation, in the long term it may become so. Perhaps this is where the exploitation of non-conventional CMOS will lead us. However, some interesting work has been done, notably by Haroon Ahmed and his team working in the Cavendish Laboratory, on the direct development of structures in which a single electron more or less makes the difference between a zero and a one. However, very little progress has been made towards practical devices that could lead to the construction of a computer. Even with exceptionally good luck, many tens of years must inevitably elapse before a working computer based on single-electron effects can be contemplated.