
A Short History of Computers and Computing

Robert Mannell, Department of Linguistics

One of the earliest machines designed to assist people in calculations was the abacus, which is still being used some 5000 years after its invention. In 1642, Blaise Pascal, a famous French mathematician, invented an adding machine based on mechanical gears, in which numbers were represented by the cogs on the wheels.

The Englishman Charles Babbage invented a "Difference Engine" in the 1830s, made out of brass and pewter rods and gears, and also designed a further device which he called an "Analytical Engine".

His design contained the five key characteristics of modern computers. An American, Herman Hollerith, developed around 1890 the first electrically driven device. It utilised punched cards and metal rods which passed through the holes to close an electrical circuit and thus cause a counter to advance. This machine was able to complete the calculation of the 1890 U.S. census.

The Harvard Mark 1 was completed in 1944 and was 8 feet high and 55 feet long. Despite its size, this machine was functionally just a calculator. In 1943, as part of the British war effort, a series of vacuum tube based computers named Colossus were developed to crack German secret codes.

The Colossus Mark 2 series consisted of 2400 vacuum tubes. John Mauchly and J. Presper Eckert of the University of Pennsylvania developed these ideas further by proposing a huge machine, the ENIAC, consisting of 18,000 vacuum tubes. It was a huge machine with an enormous power requirement and two major disadvantages: maintenance was extremely difficult, as the tubes broke down regularly and had to be replaced, and there was a big problem with overheating.

The most important limitation, however, was that every time a new task needed to be performed the machine had to be rewired. In other words, programming was carried out with a soldering iron.

This development allowed programs to be read into the computer, and so gave birth to the age of general-purpose computers. The main defining feature of the first generation of computers was that vacuum tubes were used as internal computer components. This generation is often described as starting with the delivery of the first commercial computer to a business client, and it lasted until about the end of the 1950s, although some machines stayed in operation much longer than that.

Vacuum tubes are generally about 5-10 centimeters in length and the large numbers of them required in computers resulted in huge and extremely expensive machines that often broke down as tubes failed.

The Second Generation (1959-1964): In the mid-1950s, Bell Labs developed the transistor. Transistors were capable of performing many of the same tasks as vacuum tubes but were only a fraction of the size. The first transistor-based computer was produced in 1959. Transistors were not only smaller, enabling computer size to be reduced, but they were faster, more reliable and consumed less electricity. The other main improvement of this period was the development of computer languages.

Assembler languages, or symbolic languages, allowed programmers to specify instructions in words (albeit very cryptic words), which were then translated into a form the machines could understand, typically a series of 0s and 1s. Higher-level languages also came into being during this period. Whereas assembler languages had a one-to-one correspondence between their symbols and actual machine functions, higher-level language commands often represent complex sequences of machine code.
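This gap between one high-level command and the several machine-level steps it stands for can still be seen directly in a modern language. As a minimal sketch (the function below is invented for illustration), Python's standard dis module shows the sequence of low-level bytecode instructions generated for a single line of source:

```python
import dis

def average(a, b, c):
    # One high-level statement...
    return (a + b + c) / 3

# ...expands into a sequence of several low-level instructions
dis.dis(average)
print(len(list(dis.get_instructions(average))))
```

Running this prints the instruction listing, making the one-to-many correspondence between high-level code and machine-level operations visible.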

Two higher-level languages developed during this period, Fortran and Cobol, are still in use today, though in a much more developed form. The Third Generation (1965-1970): In 1965 the first integrated circuit (IC) was developed, in which a complete circuit of hundreds of components was able to be placed on a single silicon chip 2 or 3 mm square.

Computers using these ICs soon replaced transistor-based machines. Again, one of the major advantages was size, with computers becoming more powerful and at the same time much smaller and cheaper. Computers thus became accessible to a much larger audience. An added advantage of smaller size is that electrical signals have much shorter distances to travel, and so the speed of computers increased. Another feature of this period is that computer software became much more powerful and flexible, and for the first time more than one program could share the computer's resources at the same time (multi-tasking).

The majority of programming languages used today are often referred to as 3GLs (third-generation languages), even though some of them originated during the second generation. The Fourth Generation (1971-present): The boundary between the third and fourth generations is not very clear-cut at all.

Most of the developments since the mid-1960s can be seen as part of a continuum of gradual miniaturisation. In 1970 large-scale integration was achieved, where the equivalent of thousands of integrated circuits were crammed onto a single silicon chip.

This development again increased computer performance, especially reliability and speed, whilst reducing computer size and cost. Around this time the first complete general-purpose microprocessor became available on a single chip. Complete computer central processors could now be built into one chip.

The microcomputer was born. Fourth-generation languages (4GLs) are a step further removed from the computer hardware, in that they use language much like natural language.
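SQL, often cited as an example of a fourth-generation language, illustrates this declarative style: the programmer states what result is wanted rather than how to loop through records to compute it. As a minimal sketch, using Python's built-in sqlite3 module (the table, names and data are invented for illustration):

```python
import sqlite3

# An invented in-memory table for the example
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (name TEXT, dept TEXT)")
conn.executemany(
    "INSERT INTO staff VALUES (?, ?)",
    [("Ada", "Maths"), ("Alan", "Computing"), ("Grace", "Computing")],
)

# One declarative statement replaces an explicit search-and-sort loop
rows = conn.execute(
    "SELECT name FROM staff WHERE dept = ? ORDER BY name", ("Computing",)
).fetchall()
print(rows)  # [('Alan',), ('Grace',)]
```

The SELECT statement says nothing about iteration or sorting algorithms; the database engine decides how to carry the request out.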

Many database languages can be described as 4GLs. They are generally much easier to learn than 3GLs. The Fifth Generation (the future): The "fifth generation" of computers was defined by the Japanese government in 1980, when it unveiled an optimistic ten-year plan to produce the next generation of computers.

This was an interesting plan for two reasons.

Firstly, it is not at all clear what the fourth generation is, or even whether the third generation has finished yet. Secondly, it was an attempt to define a generation of computers before they had come into existence. The main requirements of the fifth-generation machines were that they incorporate the features of Artificial Intelligence, Expert Systems, and Natural Language.

The goal was to produce machines that are capable of performing tasks in similar ways to humans, are capable of learning, and are capable of interacting with humans in natural language, preferably using both speech input (speech recognition) and speech output (speech synthesis).

Such goals are obviously of interest to linguists and speech scientists as natural language and speech processing are key components of the definition.

As you may have guessed, this goal has not yet been fully realised, although significant progress has been made towards various aspects of these goals.

Parallel Computing

Until recently, most computers were serial computers. Such computers had a single processor chip containing a single processor. Parallel computing is based on the idea that if more than one task can be processed simultaneously on multiple processors, then a program will be able to run more rapidly than it could on a single processor.

Parallel computing has also been carried out on clusters of networked computers. By 2008, most new desktop and laptop computers contained more than one processor on a single chip.

Having multiple processors does not necessarily mean that parallel computing will work automatically. The operating system must be able to distribute programs between the processors. An individual program will only be able to take advantage of multiple processors if the language it is written in allows tasks within the program to be distributed between multiple processors.
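As a minimal sketch of this idea (the prime-counting task and the numbers are invented for illustration), Python's standard multiprocessing module can farm independent tasks out to separate processes, one per available processor:

```python
from multiprocessing import Pool

def count_primes(limit):
    """Count primes below limit (deliberately naive, CPU-bound work)."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limits = [10_000, 20_000, 30_000, 40_000]
    # The four tasks are independent, so the pool can run them
    # simultaneously on separate processes.
    with Pool() as pool:
        results = pool.map(count_primes, limits)
    print(results)
```

Because each call to count_primes needs nothing from the others, the four tasks can genuinely run in parallel; a task whose steps depend on one another would gain nothing from this arrangement.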