The Beginnings of 3D Rendering

Soma Cube

Let’s start with a small colored cube.

(Exploded 3D Rendered Soma Cube, 1968)

The Soma Cube rendered by Gordon Romney in 1967 is considered the first rendering of a complex, manipulable, colored 3D virtual object.1

Gordon Romney was the first Ph.D. student in computer graphics under David Evans, a pioneer in the field. Evans's idea was to use raster scan technology (similar to that used in televisions) to generate realistic 3D images, and Romney's Soma Cube was created with this method.

However, this technology was very different from modern 3D rendering methods: it essentially drew the image pixel by pixel, rather than constructing 3D scenes from points, lines, and surfaces as we do today. This brings us to a third figure: Edwin Catmull.
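To make the contrast concrete, here is a minimal sketch of what "drawing each pixel one by one" means: the image is just a grid of color values filled in scanline by scanline, with no geometric model behind it. The resolution and colors are arbitrary illustration, not Romney's actual hardware or code.

```python
# A toy framebuffer: every pixel is written individually.
# This only illustrates the per-pixel idea; it does not
# reproduce the original raster-scan hardware.

WIDTH, HEIGHT = 64, 64
framebuffer = [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def set_pixel(x, y, color):
    """Write a single RGB color into the framebuffer."""
    framebuffer[y][x] = color

# Fill a colored square by visiting each pixel one by one,
# scanline after scanline -- there is no mesh, only pixels.
for y in range(16, 48):
    for x in range(16, 48):
        set_pixel(x, y, (200, 60, 60))
```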

The Birth of Points, Lines, and Surfaces

Edwin Catmull, also a student of David Evans, proposed an alternative way to generate 3D images.

(Edwin Catmull’s A Computer Animated Hand, 1972)

This hand is taken from a computer-animated short film created by Edwin Catmull and Fred Parke. It shows the hand in motion and is the first animated 3D object generated by a computer.2

As you can see, this hand is quite similar to modern 3D models: it is made up of a polygonal mesh consisting of triangular faces and vertices. However, unlike today's 3D software, the process involved building a physical model, drawing the triangular faces on its surface, measuring the coordinates of each vertex, and entering them into the computer to construct the digital model.3
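The same idea still underpins how meshes are stored today: a list of shared vertex positions plus faces that index into it. The sketch below uses made-up coordinates purely for illustration; it is not the data digitized from Catmull's model.

```python
# A minimal triangle-mesh representation: shared vertices plus
# faces that refer to them by index. The numbers are illustrative.

vertices = [
    (0.0, 0.0, 0.0),   # vertex 0
    (1.0, 0.0, 0.0),   # vertex 1
    (1.0, 1.0, 0.0),   # vertex 2
    (0.0, 1.0, 0.5),   # vertex 3
]

# Each face is a triple of vertex indices forming one triangle.
faces = [
    (0, 1, 2),
    (0, 2, 3),
]

def face_corners(face):
    """Return the three 3D points that make up a triangular face."""
    return [vertices[i] for i in face]

for f in faces:
    print(face_corners(f))
```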

This project laid the foundation for 3D computer graphics and had a profound impact on the future of computer animation.

From Trials to Industry

By the 1980s, 3D rendering entered a period of rapid development and commercial application.

During this time, Pixar's RenderMan rendering software and the adoption of ray tracing algorithms drove rendering technology forward. RenderMan supported complex lighting calculations and later became a core tool for animated films such as Toy Story.4

RenderMan simulates the behavior of light using physical rules modeled on the real world, such as reflection, refraction, and scattering. By simulating these phenomena, it can generate highly realistic images.
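To give a flavor of what "simulating the behavior of light" can look like in code, here is a toy ray-tracing sketch: one ray per pixel is tested against a single sphere, and the hit point is shaded with Lambert's cosine law for one light. This is only an illustration of the general idea, not RenderMan's actual implementation or shading language, and the scene values are arbitrary.

```python
import math

# Toy illustration of simulating light with a physical rule:
# cast one ray per pixel, intersect it with a sphere, and shade
# the hit point with Lambert's cosine law for a single light.

WIDTH, HEIGHT = 40, 20
SPHERE_CENTER = (0.0, 0.0, 3.0)
SPHERE_RADIUS = 1.0
LIGHT_DIR = (0.577, 0.577, -0.577)  # roughly normalized direction toward the light

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def hit_sphere(origin, direction):
    """Return the distance along the ray to the sphere, or None if it misses."""
    oc = tuple(origin[i] - SPHERE_CENTER[i] for i in range(3))
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_RADIUS**2
    disc = b*b - 4*c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        # Map the pixel to a ray direction through a simple pinhole camera.
        u = (x / WIDTH) * 2 - 1
        v = 1 - (y / HEIGHT) * 2
        d = (u, v, 1.0)
        length = math.sqrt(dot(d, d))
        d = tuple(c / length for c in d)
        t = hit_sphere((0.0, 0.0, 0.0), d)
        if t is None:
            row += " "                      # background
        else:
            p = tuple(t * d[i] for i in range(3))
            n = tuple((p[i] - SPHERE_CENTER[i]) / SPHERE_RADIUS for i in range(3))
            brightness = max(0.0, dot(n, LIGHT_DIR))   # Lambertian shading
            row += ".:-=+*#%@"[int(brightness * 8)]
    print(row)
```

Running it prints an ASCII-shaded sphere; real renderers apply the same intersect-and-shade loop to millions of rays, materials, and light paths.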

After that, with the launch of commercial software such as Maya and 3ds Max, 3D animation became more widely used across various industries.

Summary

Looking back at the history of 3D rendering, we can see that its core has always been simulation: reproducing the physical laws of reality inside a computer has been a central focus of the technology's development.

Building on an understanding of those physical rules, engineers have developed ever more efficient ways to construct similar physical environments in the computer and create realistic 3D models. This idea is not just part of the past; it will continue to play an important role in future innovation and development.

References

1. First Rendering: A history of 3D rendering

2. The rise of 3D animation—a journey through its evolution

3. A Computer Animated Hand – Wikipedia

4. Milestones: The Development of RenderMan® for Photorealistic Graphics, 1981-1988 – ETHW
