Computer Graphics History (1950s to 2020s)

Do you enjoy playing video games? Do you enjoy exploring virtual worlds? In this post, we will look at how computer graphics have evolved over time, enabling the rich interfaces we interact with today.

The 1950s

In the early 1950s, projects such as Whirlwind and SAGE used cathode-ray tube (CRT) displays to show visual output, with a light pen serving as an input device for drawing directly on the screen. William Higinbotham created the first video game with interactive graphics, Tennis for Two, in 1958 to entertain visitors at Brookhaven National Laboratory. The decade also saw the construction of MIT's TX-2 computer, on which Ivan Sutherland would later develop Sketchpad, software that let users draw simple shapes on the screen with a light pen and save them for later use.

The 1960s

William Fetter coined the phrase "computer graphics" in 1960. In 1963, E.E. Zajac of Bell Telephone Laboratories made a film that used computer animation to depict how a satellite's orientation changes as it orbits the Earth. IBM entered the field in the mid-1960s; its IBM 2250, released in 1965, was the first commercially available graphics terminal. Ralph Baer created a basic video game in 1966 in which players could manipulate dots of light on a screen. Around 1968, Ivan Sutherland built the first head-mounted display, which presented a separate wireframe image to each eye to produce a 3D effect.

The 1970s

The 1970s brought significant developments that made practical computer graphics possible, such as MOS LSI technology. Edwin Catmull animated the opening and closing of his hand, and fellow student Fred Parke produced an animation of his wife's face. Another early pioneer, John Warnock, went on to co-found Adobe Systems, whose software remains among the most widely used tools for image editing, including Photoshop. In 1977, significant progress in 3D computer graphics made it possible to generate 3D representations of objects on screen, which served as the foundation for most subsequent advances.

Many iconic 2D arcade video games were released in this era, such as Pong, Speed Race, Gun Fight, and Space Invaders.

The 1980s

Such advances led to modernization and commercialization in the 1980s. High-resolution displays and personal computer systems began to transform computer graphics in the early 1980s, and many microprocessors and microcontrollers were developed that would eventually lead to dedicated graphics-processing chips. In 1982, Osaka University in Japan built a supercomputer that used 257 Zilog Z8001 microprocessors to render realistic 3D images.

The 1980s are often dubbed the "golden age of video games." Firms such as Atari, Nintendo, and Sega brought computer graphics to the public with entirely new interfaces, while home computers such as the Apple Macintosh and Commodore Amiga let users create their own games. Arcade games achieved real-time advances in 3D graphics. During this decade, computer graphics also spread into sectors such as automotive manufacturing, vehicle design, vehicle simulation, and chemistry.

The 1990s

The 1990s were the decade in which 3D computer graphics thrived and became widely used. 3D models were prominent in gaming, multimedia, and animation during this period. One of the first computer-animated TV series, La Vie des Bêtes, was broadcast in France in the early 1990s. Toy Story, Pixar's first fully computer-animated feature film, was released in 1995 and was a tremendous success both commercially and as a milestone in computer graphics.

During the same decade, 3D racing games, first-person shooters, and fighting games such as Virtua Fighter and Tekken, along with the landmark Super Mario 64, began to entice audiences with their interfaces and gameplay. Since these developments, computer graphics have grown steadily more realistic and have extended their application across a variety of industries.

The 2000s

Video games and film remained the dominant applications of computer graphics. CGI was frequently used in television commercials in the late 1990s and early 2000s and drew large audiences. With the widespread popularity of computer graphics, 3D graphics became a common feature in almost every industry. Rendering in films and video games grew steadily more realistic in order to attract larger audiences, and animated features such as Ice Age, Madagascar, and Finding Nemo were fan favorites that dominated the box office.

Fully computer-generated feature films such as Final Fantasy: The Spirits Within and The Polar Express were made in this period, while the Star Wars prequels relied heavily on CGI; all garnered a great deal of attention.
With the debut of the Sony PlayStation 2 and 3, as well as the Microsoft Xbox, video games saw a significant leap forward. Titles such as Grand Theft Auto, Assassin's Creed, Final Fantasy, BioShock, Kingdom Hearts, and Mirror's Edge helped grow the video game business and continue to impress the public.

The 2010s

CGI extended its reach in the 2010s, delivering real-time visuals at ultra-high resolutions up to 4K. Most animated films are now fully CGI, featuring animated imagery and 3D characters. The Microsoft Xbox One and Sony PlayStation 4 came to dominate 3D gaming and remain among the most popular consoles with consumers today.
Since then, the world of computer graphics has continued to evolve, earning acclaim for improving the user experience and offering a fantastic platform for interacting with high-resolution imagery.

The 2020s

The 2020s saw real-time ray tracing become a practical reality in games. Ray tracing is a rendering technique that makes light behave as it does in real life: an algorithm simulates rays of light and traces the paths they would travel in the physical world. Game designers can use this approach to make virtual beams of light bounce off objects, cast realistic shadows, and generate lifelike reflections.
Because it models how light actually propagates, ray tracing is a highly realistic 3D rendering approach, capable of making even blocky games like Minecraft appear near photo-realistic under the right conditions.

There is only one problem: it is incredibly expensive to compute. Recreating how light behaves in the real world is difficult and time-consuming, demanding massive amounts of processing power. That is why current ray-tracing solutions in games, such as Nvidia's RTX-powered ray tracing, are not full simulations in which every light path is traced. Instead, the GPU "cheats" with hybrid rendering: it combines traditional rasterization with a limited number of traced rays and a set of clever approximations to achieve a similar visual impression while putting less strain on the hardware. This will very likely change in future GPU generations, but for now it is a step in the right direction.

So what do you think brought the biggest jump in computer graphics?