The ENIAC was the first machine to have fully functional memory elements, arithmetic units and controls. John von Neumann, who frequently visited the Moore School, had heard about the ENIAC project but was too late to become involved, so he joined the early discussions of the initial ideas for the new machine, the EDVAC. The machine was to have an internal memory system and the capability to receive instructions via “tape”.
Construction of the machine was paused because of the ongoing war, and it was not until 1946 that research on the EDVAC resumed, with agreement on the changes that were required for the project to progress.
The Moore School suggested three alternatives:
Von Neumann then suggested that it should be a binary machine, and on that basis it was decided that the EDVAC 1 proposal would work best, with the added suggestion of a hardware facility for a complete check of the arithmetic operations. With this addition, the proposal became the EDVAC 1.5.
The assembly of the machine began in 1949, and once constructed it measured 30.5 by 14 feet. The EDVAC ran its first application program in 1951, though it was not considered reliable enough to run a large calculation until 1952. By 1960 other technologies had started to become available, and in 1963, after the EDVAC was shut off for a short period and failed to work when turned back on, the decision was made to scrap the machine. Today nothing remains of the EDVAC but a few of the plug-in circuit elements and a section of electronics stored away at the Moore School.
The EDVAC was the first stored-program computer to be designed, and the use of memory in digital computers to store both sequences of instructions and data was an enormous breakthrough in early computer technology.
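To illustrate the stored-program idea, here is a minimal sketch in Python of instructions and data sharing one memory. The three opcodes and the memory layout are invented for illustration only; they are nothing like the EDVAC's actual instruction set.

```python
# Minimal stored-program machine sketch: instructions and data live in
# the SAME memory, which is the core of the stored-program concept.
# The opcodes and layout are invented, not the EDVAC's real design.
LOAD, ADD, HALT = 0, 1, 2

memory = [
    (LOAD, 7),   # addr 0: load the value at address 7 into the accumulator
    (ADD, 8),    # addr 1: add the value at address 8 to the accumulator
    (HALT, 0),   # addr 2: stop
    0, 0, 0, 0,  # addrs 3-6: unused
    40,          # addr 7: data
    2,           # addr 8: data
]

acc = 0          # accumulator register
pc = 0           # program counter
while True:
    op, operand = memory[pc]   # fetch the next instruction from memory
    pc += 1
    if op == LOAD:
        acc = memory[operand]
    elif op == ADD:
        acc += memory[operand]
    elif op == HALT:
        break

print(acc)  # prints 42
```

Because the program itself sits in ordinary memory, it can be loaded, replaced or even modified like any other data, which is what made the concept such a breakthrough.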
The capacitor collects and quickly releases electricity, and the diode acts as a stopper, preventing any more electricity from passing through. Before the transistor came about, engineers used vacuum tubes, which looked and worked much like a light bulb but were large and bulky and had a tendency to burn out. In 1947 a viable replacement component arrived in the form of the transistor, which was far more reliable than the vacuum tubes it superseded.
Prior to the integrated circuit, manufacturers assembled circuits by hand, connecting the metal wires and then soldering the components in place. Engineers soon realised that manufacturing circuits by hand, with all their tiny components, was unreliable. Speed was another factor, depending on the size of the circuit: if the wires were too long, as in large computers, the electric current could not travel fast enough, and the computer was too slow to be effective.
These issues became known as the ‘tyranny of numbers’: circuits of the time contained so many components and connections that, beyond a certain complexity, they were considered impossible to produce.
While Jack Kilby was one of the inventors of the integrated circuit, his 1958 prototype was not the only attempt to integrate components on a single chip, though he played an important role in its development. Kilby acknowledged this in his Nobel Prize acceptance speech (Nobel Media, 2000), citing “the contributions of thousands of engineers and scientists in laboratories and production facilities all over the world.”
Once his colleagues returned, he presented his idea to them and was granted permission to build a test version, which worked successfully. Kilby’s circuit led to a major shift in manufacturing methods: the requirement to assemble small components by hand was removed, and the process went on to become automated.
Several months later, in 1959, Robert Noyce (Nobel Media, 2003) came up with his own idea for the integrated circuit, solving many of the issues Kilby’s circuit had, one of which was connecting all of the components on the chip. He suggested adding metal as a final layer and then removing parts of it, allowing the wires connecting the components to form. This made the integrated circuit far more suitable for mass production.
Noyce went on to be one of the co-founders of Intel, which today is one of the largest manufacturers of integrated circuits, now better known as microchips. Kilby’s and Noyce’s ideas introduced revolutionary changes to the industry and are among the main elements behind our modern computerised society. Today the most advanced microchips contain millions of components in an area no larger than a fingernail.
Cray built the CDC 6600 between 1960 and 1964 with business clients in mind, and many people consider it to be the first supercomputer. Its design transformed the computing industry: earlier computers had a single processing unit, whereas the 6600’s central processing unit was surrounded by ten small peripheral processors, each with its own input/output functions. This freed the main central processing unit to perform more complex tasks, making the 6600 the first reduced instruction set computer.
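A loose software analogy for that arrangement, purely illustrative since the real machine used ten dedicated hardware peripheral processors rather than threads, is a central computation loop that hands slow I/O off to workers:

```python
# Loose analogy for the CDC 6600's design: peripheral workers handle
# slow I/O so the "central processor" can keep computing.
import queue
import threading

io_requests = queue.Queue()

def peripheral_worker(worker_id):
    """Stands in for one of the 6600's peripheral processors."""
    while True:
        request = io_requests.get()
        if request is None:          # shutdown signal
            break
        print(f"worker {worker_id} handling I/O: {request}")
        io_requests.task_done()

# Start a few "peripheral processors" (the real 6600 had ten).
workers = [threading.Thread(target=peripheral_worker, args=(i,)) for i in range(3)]
for w in workers:
    w.start()

# The "central processor" does the arithmetic and merely queues I/O work.
total = 0
for i in range(5):
    total += i * i                   # the compute-heavy part
    io_requests.put(f"write partial result {total}")

io_requests.join()                   # wait for all queued I/O to finish
for _ in workers:
    io_requests.put(None)            # stop the workers
for w in workers:
    w.join()
print("final result:", total)
```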
Seymour Cray invented a number of other technologies that were patented by the companies he worked for, including the Cray-1’s vector register technology, the cooling technologies for the Cray-2, the CDC 6600’s Freon cooling system, and a magnetic amplifier for ERA. He also contributed to the design of the Cray-1’s cooling technology.
The concept of switching small blocks of data was first introduced by Paul Baran at the RAND Corporation in the US, who called it ‘distributed adaptive message block switching’. In 1965 Donald Davies, at the National Physical Laboratory in the UK, independently developed the same message routing concept and introduced the name ‘packet switching’, a simpler name than Baran’s. Davies envisioned building a nationwide network in the UK.
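To illustrate the idea, here is a minimal sketch of a message being split into small numbered blocks that could travel independently and be reassembled in order at the destination. The field names and the eight-byte packet size are invented for illustration and do not correspond to any historical protocol.

```python
# Minimal packet-switching sketch: split a message into small numbered
# blocks, deliver them out of order, and reassemble them at the other
# end. Field names and the 8-byte payload size are illustrative only.
import random

PACKET_SIZE = 8  # bytes of payload per packet (illustrative)

def packetize(message: bytes) -> list:
    """Split a message into sequence-numbered packets."""
    return [
        {"seq": i, "payload": message[offset:offset + PACKET_SIZE]}
        for i, offset in enumerate(range(0, len(message), PACKET_SIZE))
    ]

def reassemble(packets: list) -> bytes:
    """Reorder packets by sequence number and rebuild the message."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

message = b"Packets can take different routes through the network."
packets = packetize(message)
random.shuffle(packets)        # simulate packets arriving out of order
assert reassemble(packets) == message
print(f"{len(packets)} packets reassembled correctly")
```

The sequence numbers are what allow each block to take its own route through the network yet still arrive as a coherent message, which is the property that made the approach so robust.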
Baran’s work then influenced Lawrence Roberts to adopt the concept in developing the ARPANET. In 1969 (The Centre for Computing History, 2015) ARPANET launched as the world’s first packet-switched computer network, designed by the British scientist Donald Davies and by Lawrence Roberts of the Lincoln Laboratory. The first connection was a logon request sent to the Stanford Research Institute in California. At that point it was not yet a fully fledged system, but it was the basis for today’s Internet, changing how people communicate personally and in business, how knowledge is accumulated, and reducing the need for travel.
ARPANET was then developed further by the Advanced Research Projects Agency of the U.S. Department of Defense (DARPA), which was then known simply as ARPA. The agency envisioned a reliable computer network that would provide communications between its nodes as well as remote access to shared computing resources.
ARPANET began as a network of just four computers (Deffree, 2015), located at four different sites:
Over the years this platform for information sharing and digital communication has expanded greatly, and it continues to do so today.
The basic idea came to him in 1961 while he was sitting in a conference on computer graphics, thinking about how to make interactive computing more efficient. It occurred to him that, using a pair of small wheels on a tabletop, one wheel turning horizontally and one turning vertically, the computer could track their movements and move the cursor on the display accordingly.
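As a rough sketch of that tracking idea: each wheel’s rotation becomes a change along one axis, accumulated into a cursor position. The wheel readings and screen size below are invented for illustration; the real hardware reported rotations as electrical signals.

```python
# Rough sketch of the two-wheel tracking idea: one wheel's rotation
# maps to horizontal movement, the other's to vertical movement.
# The readings and screen size are invented for illustration.
SCREEN_W, SCREEN_H = 640, 480

def move_cursor(x, y, h_wheel_delta, v_wheel_delta):
    """Accumulate wheel rotations into a cursor position, clamped to the screen."""
    x = max(0, min(SCREEN_W - 1, x + h_wheel_delta))
    y = max(0, min(SCREEN_H - 1, y + v_wheel_delta))
    return x, y

# Simulated wheel readings: (horizontal delta, vertical delta) per tick.
readings = [(5, 0), (3, 2), (0, -1), (-4, 6)]

x, y = SCREEN_W // 2, SCREEN_H // 2   # start at the centre of the screen
for h, v in readings:
    x, y = move_cursor(x, y, h, v)
    print(f"cursor at ({x}, {y})")
```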
Engelbart first designed it in the 1960s (The Centre for Computing History, 2015) in his research lab at the Stanford Research Institute, hiring a small research team to help. By this time there were a few off-the-shelf solutions for moving a cursor on a screen, but none met Engelbart’s ‘high performance’ requirement.
The first prototype was built in 1964 with the help of Bill English: a hand-held device with perpendicular wheels mounted in a carved-out wooden block, with a button on top. This was the first mouse. In 1965, Engelbart and his team reviewed all their solutions, including the mouse, and decided which was the most efficient.
After trying out different alternatives, they also settled on a ‘keyset’ with five piano-like keys. Both devices were introduced to the public in 1968, and they became an important development on the way to the desktop computers we see today, as people still use a modern mouse and keyboard.
Prior to this, the CPU functions of a computer were spread across a handful of chips. The Intel 4004 (I-programmer.info, 2011) came about when Ted Hoff, an employee at Intel, discovered he could adapt the Digital PDP-8 he had been working with by adding memory and making it programmable. However, Hoff and his team found the microprocessor difficult to build, and little progress was made until Federico Faggin joined Intel in 1970.
Intel designed (Intel, 2015) a set of four chips called the MCS-4 (commonly known as the 4004). It included a CPU chip along with a supporting ROM chip for custom application programs and a RAM chip for the data being processed, all produced on two-inch wafers, compared with the 12-inch wafers commonly used for today’s products. Most striking of all, the circuit line width was 10,000 nanometres; an average human hair is 100,000 nanometres wide.
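As a loose sketch of that division of labour between the chips, the program lives in a read-only area while working data lives in read-write memory. The two opcodes below are invented for illustration and bear no resemblance to the 4004’s real instruction set.

```python
# Loose sketch of the MCS-4 division of labour: the program is held in
# ROM (read-only) while working data is held in RAM (read-write).
# The opcodes are invented, nothing like the 4004's real instructions.
rom = (                    # a tuple: immutable, like a mask-programmed ROM
    ("INC", 0),            # increment RAM cell 0
    ("INC", 0),
    ("COPY", (0, 1)),      # copy RAM cell 0 into RAM cell 1
)
ram = [0, 0]               # a list: mutable working storage

for op, arg in rom:        # the CPU fetches instructions from ROM...
    if op == "INC":
        ram[arg] += 1      # ...and reads/writes data only in RAM
    elif op == "COPY":
        src, dst = arg
        ram[dst] = ram[src]

print(ram)  # [2, 2]
```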
Once Intel released the microprocessor, it became the first general-purpose programmable processor on the market: engineers could purchase it and then customise it with software to perform different functions in a wide variety of electronic devices. This microprocessor led to Intel designing, and dominating the market with, the processors we use today, as its approach has been the standard processor architecture for over four decades.
Craig (2013, pp. 16-17) further states that it is important to remember the ‘reality’ part: you stay in the real world. The augmentation is not there to make you think you are somewhere else, as this would fall more within the realms of virtual reality. Rather, you use the same senses as you normally would, but with digital information “superimposed” on your view of the world. This is achieved through hardware such as mobile devices, computers and other smart devices with a visual display. Craig also mentions that AR can be produced by using projectors to place digital objects within a physical space.
This type of technology, alongside or even built into a tablet device, together with the relevant software, could become an invaluable asset to a designer or developer. Whilst travelling, it would remove the need to carry heavier equipment, such as a laptop.
During a TED Talks seminar, Matt Mills (2012) and Tamara Roukaerts gave a compelling presentation showing how AR can be used to animate physical objects. Using software called Aurasma, Mills demonstrated how a painting could come alive: when Roukaerts held up a mobile phone with the application installed and pointed it at the painting, it changed into a video of an actor talking, overlaid on the original painting.
With this already possible, it is easy to conceive of far more interactive websites in the future, opening up a whole world of possibilities. Imagine a game studio’s website: on a page about its latest game, a user viewing the page through AR could watch a character from the game step out of the web page and into the user’s room.
Microsoft (2015) goes on to suggest numerous commercial possibilities in engineering, science, education and design, as well as noting various corporations that have already invested in the technology. It even states that NASA’s Jet Propulsion Laboratory will be using the technology in its research into exploring the surface of Mars.
In a press release, the Microsoft News Center (2015) reported that the HoloLens would ship to developers in early 2016, albeit at a high price point of $3,000 per unit.
In an article for Fortune.com, Gaudiosi (2015) writes that projections by Digi-Capital suggest the augmented and virtual reality market could generate $120 billion in revenue by 2020, well above the current, and possibly future, revenue generated by other digital media devices. The article goes on to suggest, according to Digi-Capital’s managing director Tim Merel, that both augmented and virtual reality should become mainstream by the same point. Just as today’s market is filled with smartphones and tablets, a similar surge in wearable technology is expected, providing a launch pad for both AR and VR technologies to slide into place on the mainstream market.