A Critical History of Computer Graphics and Animation

Section 18:
Scientific Visualization


Visualization, in its broadest sense, encompasses any technique for creating images to represent abstract data. By that measure, much of what we do in computer graphics and animation falls into this category. One specific area of visualization, though, has evolved into a discipline of its own. We call this area Scientific Visualization, or Visualization in Scientific Computing, although the field encompasses other areas as well, for example business (information visualization) and computing (process visualization).

 

In 1973, Herman Chernoff introduced a visualization technique for illustrating trends in multidimensional data. His Chernoff Faces were especially effective because they mapped the data to facial features, something we are naturally skilled at differentiating. Different data dimensions were mapped to different facial features, for example the width of the face, the level of the ears, the radius of the ears, the length or curvature of the mouth, and the length of the nose. An example of Chernoff faces is shown to the right; the facial features represent trends in the values of the data, not the specific values themselves. While this is clearly a limitation, knowledge of the trends in the data could help determine which sections of the data were of particular interest.
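The mapping idea is easy to sketch in code. The fragment below is purely illustrative: the feature ranges and the five-dimensional sample data are invented, not Chernoff's original parameterization. Each data dimension is normalized and assigned to a facial-feature parameter that a drawing routine could then consume.

```python
import numpy as np

def chernoff_features(row, lo, hi):
    """Map one data record to facial-feature parameters.
    Feature ranges are illustrative, not Chernoff's original choices."""
    t = (row - lo) / (hi - lo)              # normalize each dimension to [0, 1]
    return {
        "face_width":  0.5 + 0.5 * t[0],    # narrow .. wide
        "ear_level":   0.2 + 0.6 * t[1],
        "ear_radius":  0.05 + 0.15 * t[2],
        "mouth_curve": -1.0 + 2.0 * t[3],   # frown .. smile
        "nose_length": 0.1 + 0.3 * t[4],
    }

# Two made-up five-dimensional records; each row becomes one face.
data = np.array([[3.0, 1.0, 0.5, 9.0, 2.0],
                 [7.0, 4.0, 2.5, 1.0, 6.0]])
lo, hi = data.min(axis=0), data.max(axis=0)
faces = [chernoff_features(r, lo, hi) for r in data]
```

Comparing the two resulting parameter sets shows the trend encoding at work: the record with the larger first dimension yields the wider face.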

In general the term "scientific visualization" is used to refer to any technique involving the transformation of data into visual information, using a well understood, reproducible process. It characterizes the technology of using computer graphics techniques to explore results from numerical analysis and extract meaning from complex, mostly multi-dimensional data sets. Traditionally, the visualization process consists of filtering raw data to select a desired resolution and region of interest, mapping that result into a graphical form, and producing an image, animation, or other visual product. The result is evaluated, the visualization parameters modified, and the process run again. The techniques which can be applied and the ability to represent a physical system and the properties of this system are part of the realm of scientific visualization.

Visualization is an important tool often used by researchers to understand the features and trends represented in the large datasets produced by simulations on high performance computers.

 

 

 

 

 


 


Chernoff faces

Herman Chernoff, The use of faces to represent points in k-dimensional space graphically, Journal of the American Statistical Association, V68, 1973.

 

 

From the early days of computer graphics, users saw the potential of this image-making technology as a way to investigate and explain physical phenomena and processes, many from space physics or astrophysics. Ed Zajac of Bell Labs produced probably one of the first visualizations with his animation titled A two gyro gravity gradient attitude control system. Nelson Max at Lawrence Livermore used the technology for molecular visualization, making a series of films of molecular structures. Bar graphs and other statistical representations of data were commonly generated as graphical images. Ohio State researchers created a milestone visualization film on the interaction of neighboring galaxies in 1977.


Interacting Galaxies

 

 

 

One of the earliest color visualizations was produced in 1969 by Dr. Louis Frank of the University of Iowa. He plotted the energy spectra of spacecraft plasma by plotting energy against time, with color representing the number of particles per second measured at a specific point in time.


Composite of frames from A two gyro gravity gradient attitude control system


Frame from Nelson Max DNA visualization


Frame from Interacting Galaxies, Ohio State - 1977


Visualization of spacecraft plasma - Frank, 1969

 

The most well-known example of an early process visualization is the film "Sorting Out Sorting", created by Ronald Baecker at the University of Toronto in 1980 and presented at Siggraph '81. It explained the concepts involved in sorting an array of numbers, illustrating the comparisons and swaps in various algorithms. The film ends with a race among nine algorithms, all sorting the same large random array of numbers. The film was very successful, and is still used to teach the concepts behind sorting. Its main contribution was to show that algorithm animation, using computer generated images, can have great explanatory power.

To see a Java-based demonstration of sorting algorithms, similar to the visualization done by Baecker, go to
http://www.cs.ubc.ca/spider/harrison/Java/sorting-demo.html

 

Three-dimensional imaging of medical datasets was introduced shortly after clinical CT (computed axial tomography) scanning became a reality in the 1970s. The CT scan process images the internals of an object by obtaining a series of two-dimensional x-ray axial images. The individual axial slice images are taken using an x-ray tube that rotates around the object, taking many scans as the object is gradually passed through a gantry. The multiple scans from each 360 degree sweep are then processed to produce a single cross-section.

 


The goal of the visualization process is to generate visually understandable images from abstract data. Several steps must be performed during the generation process. These steps are arranged in the so-called Visualization Pipeline.


Data is obtained either by sampling or measuring, or by executing a computational model. Filtering is a step which pre-processes the raw data and extracts the information to be used in the mapping step. Filtering includes operations like interpolating missing data or reducing the amount of data; it can also involve smoothing the data and removing errors from the data set. Mapping is the core of the visualization process: it transforms the pre-processed, filtered data into 2D or 3D geometric primitives with appropriate attributes like color or opacity. The mapping process is critical for the later visual representation of the data. Rendering uses the geometric primitives from the mapping process to generate the output image. There are a number of different filtering, mapping and rendering methods used in the visualization process.
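The filter-map-render pipeline can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the interpolation of missing samples, the smoothing kernel, the blue-to-red colormap, and the "rendering" stand-in are all invented choices, not any particular system's implementation.

```python
import numpy as np

def filter_step(raw):
    """Filtering: interpolate missing samples, then smooth."""
    x = np.arange(len(raw))
    good = ~np.isnan(raw)
    filled = np.interp(x, x[good], raw[good])   # fill in missing data
    kernel = np.array([0.25, 0.5, 0.25])        # simple smoothing kernel
    return np.convolve(filled, kernel, mode="same")

def map_step(values):
    """Mapping: scalars -> RGB attributes (blue = low .. red = high)."""
    rng = values.max() - values.min()
    t = (values - values.min()) / (rng if rng > 0 else 1.0)
    return np.stack([t, np.zeros_like(t), 1.0 - t], axis=-1)

def render_step(colors):
    """Rendering stand-in: tile the 1-D color ramp into an image array."""
    return np.tile(colors[None, :, :], (16, 1, 1))

raw = np.array([0.0, 1.0, np.nan, 3.0, 4.0, np.nan, 6.0])
image = render_step(map_step(filter_step(raw)))
```

In a real system the result would be evaluated, the parameters of each step adjusted, and the pipeline run again, exactly as described above.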

 

Gabor Herman was a professor of computer science at SUNY Buffalo in the early 1970s, and produced some of the earliest medical visualizations, creating 3D representations from the 2D CT scans, and also from electron microscopy. Early images were polygons and lines (e.g., wireframe) representing three-dimensional volumetric objects. James Greenleaf of the Mayo Clinic and his colleagues were the first to introduce methods to extract information from volume data, a process called volume visualization, in 1970 in a paper that demonstrated pulmonary blood flow.

Mike Vannier and his associates at the Mallinckrodt Institute of Radiology also used 3D imaging as a way of abstracting information from a series of transaxial CT scan slices. Not surprisingly, many early applications involved the visualization of bone, especially in areas like the skull and craniofacial regions (regions of high CT attenuation, and anatomic zones less affected by patient motion or breathing). According to Elliot Fishman of Johns Hopkins, although most radiologists at the time were not enthusiastic about 3D reconstructions, referring physicians found them extremely helpful in patient management decisions, especially in complex orthopedic cases. In 1983, Vannier adapted his craniofacial imaging techniques to uncover hidden details of some of the world's most important fossils.

There are other early examples that used graphics to represent isolines and isosurfaces, cartographic information, and even some early computational fluid dynamics. But the area that we now call scientific visualization really didn't come into its own until the late 1980s.

 

 

 

Herman, G.T., Liu, H.K.: Three-dimensional display of human organs from computed tomograms, Computer Graphics and Image Processing 9:1-21, 1979

 

 

Volume visualization (Kaufman, 1992) - a direct technique for visualizing volume primitives without any intermediate conversion of the volumetric data set to surface representation.

 

 

Michael W. Vannier , Jeffrey L. Marsh , James O. Warren, Three dimensional computer graphics for craniofacial surgical planning and evaluation, Proceedings of SIGGRAPH 83, Detroit, Michigan

 

In 1987 a SIGGRAPH panel released a report done for the National Science Foundation, Visualization in Scientific Computing, that was a milestone in the development of the emerging field of Scientific Visualization. As a result, the field was now on the radar screen of funding agencies, and conferences, workshops and publication vehicles soon followed.

 

 

"Special Issue on Visualization in Scientific Computing," Computer Graphics, Publication of ACM/SIGGRAPH, Vol. 21, No. 6, November 1987.
Bruce McCormick, Maxine Brown, and Tom DeFanti

 

Publication of this NSF report prompted researchers to investigate new approaches to the visualization process and also spawned the development of integrated software environments for visualization. Besides several systems that only addressed specific application needs, such as computational fluid dynamics or chemical engineering, a few more general systems evolved. Among these were IBM's Data Explorer, Ohio State University's apE, Wavefront's Advanced Visualizer, SGI's IRIS Explorer, Stardent's AVS and Wavefront's Data Visualizer. Two lesser known but important systems were Khoros (from the University of New Mexico) and PV-WAVE (Precision Visuals' Workstation Analysis and Visualization Environment), originally from Precision Visuals, Inc., but now owned by Visual Numerics, Inc. (VNI).

These visualization systems were designed to take the burden of producing the visualization image off the shoulders of the scientist, who often knew little about the graphics process. The most usable systems therefore employed a visual programming style of interface and were built on the dataflow paradigm: software modules were developed independently, with standardized inputs and outputs, and were visually linked together in a pipeline. They were sometimes referred to as modular visualization environments (MVEs). MVEs allowed the user to create visualizations by selecting program modules from a library and specifying the flow of data between modules using an interactive graphical networking or mapping environment. Maps or networks could be saved for later recall. General classes of modules included:

  • data readers - input the data from the data source
  • data filters - convert the data from a simulation or other source into a form which is more informative or less voluminous
  • data mappers - convert the data into a completely different domain, such as 2D or 3D geometry or sound
  • viewers or renderers - render the 2D and 3D data as images
  • control structures - examples include initialization of the display device, control of recording devices, opening of graphics windows, etc.
  • data writers - output the original or filtered data
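A toy version of such a dataflow network might look like the following sketch, where each module class above becomes a function and a "map" is just an ordered wiring of modules. All names and behavior here are illustrative inventions, not code from apE, AVS, or any actual MVE.

```python
# Toy MVE dataflow: each module is an independent function with a
# standardized input/output; a "map" is an ordered wiring of modules.

def data_reader(source):                # data reader: input the data
    return list(source)

def data_filter(data, keep_every=2):    # filter: reduce the amount of data
    return data[::keep_every]

def data_mapper(data):                  # mapper: scalars -> 2D line geometry
    return [(i, 0, i, v) for i, v in enumerate(data)]

def renderer(geometry):                 # renderer: geometry -> text "image"
    return ["|" * max(1, int(y2)) for (_, _, _, y2) in geometry]

# Wire the modules into a pipeline (a "map" in MVE terms) and run it.
network = [data_reader, data_filter, data_mapper, renderer]
result = range(8)
for module in network:
    result = module(result)
```

Because each module only agrees on its input and output format, modules can be developed, swapped, and re-wired independently, which is exactly the extensibility advantage listed below.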

Advantages of MVEs included:

  • Required no graphics expertise
  • Allowed for rapid prototyping and interactive modifications
  • Promoted code reuse
  • Extensible - allowed new modules to be created
  • Reasonably powerful and complete for a broad range of problems
  • Often allowed computations to be distributed across machines, networks and platforms


The problems with such packages included poor performance on large data sets; they were more restrictive than general programming environments, were often not built on accepted graphics standards, and their ease of use sometimes promoted poor visualizations (often with a high "glitz" factor).


The Dangers of Glitziness and Other Visualization Faux Pas

Wayne Lytle, who worked with the Cornell Theory Center, produced this parody of scientific visualizations for Siggraph 93, called The Dangers of Glitziness and Other Visualization Faux Pas, using fictitious software named "Viz-o-Matic." The video documents the enhancement and subsequent "glitz buffer overload" of a sparsely data-driven visualization trying to parade as a data-driven, thoughtfully rendered presentation.

 


 

 

 

 

 

 

 


MVE dataflow diagram. The boxes represent process modules, which are linked by lines representing the flow of data between the modules.

 

 

 

 

 

 

 


 

In the mid-1980s, Stellar Computer was marketing a graphics supercomputer. To demonstrate the capabilities of their hardware, they developed a software package called the Application Visualization System (AVS) that they gave away with the hardware. AVS was one of the first integrated visualization systems, and was developed by Digital Productions veteran Craig Upson and others. Over time, Stellar merged with Ardent Computer to become Stardent Computer (see Note 1 below). When business conditions changed, some of the engineering staff and management of Stardent formed a new company called Advanced Visual Systems, Inc. to continue development of the AVS product line.

The computational model of AVS was based on a collection of parametric modules, that is, autonomous building blocks which could be connected to form larger data processing networks. Each module had definite I/O dataflow properties, specified in terms of a small collection of data structures such as field, colormap, or geometry. The Network Editor, operating as part of the AVS kernel, offered interactive visual tools for selecting modules, specifying connectivity, and designing convenient GUIs to control module parameters. A set of base modules for mapping, filtering, and rendering was built into the AVS kernel. The user extensibility model was defined at the C/Fortran level, allowing new modules to be constructed and appended to the system in the form of independent UNIX processes, supported by appropriate dataflow interfaces.

Craig Upson, et al, The Application Visualization System: A Computational Environment for Scientific Visualization, IEEE CG&A, July 1989, pp 30-42

 


 


Screen shots from the AVS system

 

 

Other visualization systems came out of the commercial animation software industry. The Wavefront Advanced Visualizer was a modeling, animation and rendering package which provided an environment for interactive construction of models, camera motion, rendering and animation without any programming. The user could use many supplied modeling primitives and model deformations, create surface properties, adjust lighting, create and preview model and camera motions, do high quality rendering, and save the resulting images for writing to video tape. It was more of a general graphics animation system, but was used for many scientific visualization projects.

 

 


Iris Explorer was a data-flow, object-oriented system for data visualization developed by G. J. Edwards and his colleagues at SGI. The product is now marketed by NAG (the Numerical Algorithms Group). Like other dataflow systems, it allowed the user to connect pre-built modules together using a "drag and drop" approach. The modules were X Windows programs developed in C or FORTRAN, and the system was built around the OpenGL standard.

Iris Explorer had three main components:

  • The Librarian module contained the list of modules and previously created maps
  • The Map Editor was the work area for creating and modifying maps
  • DataScribe was a data conversion tool for moving data between Explorer and other data formats

A map was a dataflow network of modules connected or "wired" together. The user wired the modules together by connecting the appropriate ports, e.g. the output port of one module to the input port of the next. Each module accepted data, processed it, and then output it to the next module. A map could be created and then stored for future use. It could also be made part of another map.

Explorer Data Types

  • Parameter (scalar)
  • Lattice (array, including images)
  • Pyramid (irregular grid)
  • Geometry (Inventor-based)
  • Pick (user interaction with geometry)
  • in addition, users can define their own types with a typing language

 


Wavefront Advanced Visualizer


G. J. Edwards. The design of a second generation visualization environment. In J.J. Connor, S. Hernandez, T.K.S. Murthy, and H. Power, editors, Visualization and Intelligent Design in Engineering and Architecture, pages 3-16.

In 1984, Ohio State University competed for an NSF supercomputer center, but was unsuccessful. The University then took the proposal to the state legislature, which established the Ohio Supercomputer Center as a state center in 1987. One of the reasons for the success of the proposal was the connection with Ohio State's highly regarded Computer Graphics Research Group, which became the Advanced Computing Center for the Arts and Design (ACCAD) at about the same time. The connection was made formal when the Ohio Supercomputer Graphics Project (OSGP) of ACCAD was made part of the OSC structure. Researchers from OSGP set out to develop a visualization package based on the dataflow paradigm, and in 1988 the apE (animation production Environment) software was released. It was originally distributed free of charge to any users who wanted it, and it allowed these users to write their own modules to extend its capabilities.

In 1991 OSC decided to commercialize this popular free software, and contracted with Taravisuals, Inc. to maintain and distribute it. Unfortunately, at about the same time, Iris Explorer was released and freely bundled with the SGI workstation, one of the more popular apE platforms, and the apE effort was discontinued.

D. Scott Dyer. A dataflow toolkit for visualization. IEEE Computer Graphics and Applications, 10(4):60--69, July 1990

 


Screen shot from apE from the Ohio Supercomputer Graphics Project at Ohio State


Like most of the other systems of the time, Data Explorer (DX) was a general-purpose visualization application in which the user created visualizations by combining existing modules into a network. It was discipline independent and easily adaptable to new applications and data. The program provided a full set of tools for manipulating, transforming, processing, realizing, rendering and animating data.

DX used visual programming and a data flow approach to visualization and analysis. The data flow modules were connected together to direct the data flow. This was analogous to creating a flow chart, and the resulting diagrams were called networks.

Users could work solely with the modules that came with Data Explorer or they could create macros. A macro was a network that was named and used in place of a module. There grew a large public collection of these macros that users could download.

DX provided visualization and analysis methods based on points, lines, areas, volumes, images or geometric primitives in any combination. It worked with 1-, 2-, and 3-dimensional data and with data which was rectangularly gridded, irregularly gridded, gridded on a non-rectangular grid, or scattered.

 

 


Window and dataflow visual editor from IBM Data Explorer


NGDC - Boulder


Temperatures in atmosphere - NASA Goddard


Registered 3D MRI and Magnetoencephalographic Scans - NYU Medical Center

 

B. Lucas et al. An architecture for a scientific visualization system. In Proceedings of Visualization '92, pages 107--114. IEEE Computer Society Press, 1992

 

 

 

Most of the early visualization techniques dealt with 2D scalar or vector data that could be expressed as images, wireframe plots, scatter plots, bar graphs or contour plots. Contour plots are images of multivariate data that represent thresholds of the data values, for example f_i(x, y) <= c_i. The contours c_i are drawn as curves in 2D space, and the shading between two adjacent curves represents values that are less than the one contour value and greater than the lower one. Often, contour plots are redrawn over time to produce an animated sequence of a phenomenon. For example, Mike Norman of NCSA created an animation of a gas jet using sequential shaded contour plots.
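The banded shading between contour values c_i amounts to classifying every sample into the interval it falls in. A minimal sketch (the scalar field f and the contour levels here are invented for illustration):

```python
import numpy as np

# Discretize a 2D scalar field into contour bands: each sample gets the
# index of the band its value falls in, so the shading between adjacent
# contours c_i becomes a simple integer image.
y, x = np.mgrid[0:32, 0:32]
f = np.sin(x / 5.0) * np.cos(y / 7.0)   # an invented field f_i(x, y)
levels = [-0.5, 0.0, 0.5]               # contour values c_i
bands = np.digitize(f, levels)          # band index 0 .. len(levels)
```

Redrawing such a banded image for each time step of a simulation gives exactly the kind of animated contour sequence described above.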


Norman's animation of a gas jet - NCSA, 1992


Another gas jet - NCSA

Vector or flow fields provide an effective means of visualizing certain phenomena, such as wind, gases, smoke, etc. By giving a direction to data within a certain interval, one can easily determine patterns within the data. For example, a stream plot is a vector field representation which can be used to depict wind over the United States, as in the image to the right. The image below the stream plot shows cool air entering a kitchen through ceiling vents, and the vectors show how the direction and temperature of the air change as they are influenced by a gas-fired appliance, like a stove.
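The stream lines in such plots are traced by integrating a particle's position through the vector field. A minimal sketch, assuming a made-up rotational flow and simple Euler integration (real systems use higher-order integrators):

```python
import numpy as np

def velocity(p):
    """Toy 2D flow field: solid-body rotation about the origin."""
    x, y = p
    return np.array([-y, x])

def streamline(p0, dt=0.01, steps=628):
    """Trace a stream line by Euler integration through the field."""
    pts = [np.asarray(p0, float)]
    for _ in range(steps):
        pts.append(pts[-1] + dt * velocity(pts[-1]))
    return np.array(pts)

# Seed a particle at (1, 0); it should sweep out (roughly) a circle.
line = streamline((1.0, 0.0))
```

Seeding many such particles on a grid and drawing the resulting polylines yields a stream plot like the wind-over-the-US image.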

Any of these techniques can be combined, as is shown in the visualization of flow over the space shuttle to the right and below. In this case, stream lines represent one data domain, while shaded contours represent another.

 

 

 


Contour plot (wind speeds over US) - Unisys


Animation of a gas jet - Norman, NCSA


Stream plot (wind directions over US) - Unisys


Kitchen ventilation - ACCAD, Ohio State


Momentum is shown as streamlines colored by magnitude and the density on the shuttle surface is shown as shaded contours - NASA Ames

 

 

3D visualization presented a more difficult problem. Data that is obtained in 3D usually needs to be converted to an alternative geometric form in order to send it to the rendering part of the pipeline. Early researchers, like Herman (mentioned above) and Harris, used primitive techniques in the late 1970s to map the "values" of CT scans into 3D volumes, but the computational overhead was tremendous. One approach, which utilized the "lofting" algorithm developed by Henry Fuchs et al., involved tracing the important boundaries from the planar scans and then joining the adjacent traces with triangles to create a 3D surface.
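The lofting idea can be sketched as follows. Note that Fuchs et al. compute an *optimal* tiling between contours of arbitrary length; this fragment simply zig-zags triangles between two equal-length traces for illustration, with made-up contour data.

```python
# Join two adjacent planar contour traces with a strip of triangles
# (a simplified stand-in for the optimal surface reconstruction of
# Fuchs, Kedem and Uselton; rings are assumed to have equal length).
def loft(ring_a, ring_b):
    n = len(ring_a)
    tris = []
    for i in range(n):
        j = (i + 1) % n                      # wrap around the closed contour
        tris.append((ring_a[i], ring_a[j], ring_b[i]))
        tris.append((ring_b[i], ring_b[j], ring_a[j]))
    return tris

ring0 = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]   # trace on slice z=0
ring1 = [(2, 0, 1), (0, 2, 1), (-2, 0, 1), (0, -2, 1)]   # trace on slice z=1
surface = loft(ring0, ring1)
```

Stacking the strips produced for each pair of adjacent slices yields the closed 3D surface that is then passed to the renderer.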

Harris, Lowell D., R. A. Robb, T. S. Yuen, AND E. L. Ritman, "Non-invasive numerical dissection and display of anatomic structure using computerized x-ray tomography," Proceedings SPIE 152 pp. 10-18 (1978).

H. Fuchs , Z. M. Kedem , S. P. Uselton, Optimal surface reconstruction from planar contours, Communications of the ACM, v.20 n.10, p.693-702, Oct. 1977

William E. Lorensen , Harvey E. Cline, Marching cubes: A high resolution 3D surface construction algorithm, ACM SIGGRAPH Computer Graphics, v.21 n.4, p.163-169, July 1987

The Visualization Toolkit -- An Object-Oriented Approach to 3D Graphics, by Will Schroeder, Ken Martin and Bill Lorensen, Prentice Hall, 1996

 

Probably the most important 3D geometry conversion algorithm was presented by Bill Lorensen and Harvey Cline of General Electric in 1987. The Marching Cubes algorithm forms cubes between two adjacent planar data scans. It then takes a particular density value, or contour value, and "marches" from cube to cube, finding the portions of each cube that match the contour value and subdividing the cube as necessary in the process. When all the cubes have been processed, the resulting polygons form a "level" surface, or isosurface, which can then be rendered. One of the most famous examples of isosurfaces in visualization was done by Wilhelmson and others at NCSA at the University of Illinois in 1990. It was an animated visualization of a severe storm, and besides the isosurfaces it used other techniques, like contour shading, flow lines, stream lines and ribbons, to tell the scientific story of the storm.
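The cell-classification step at the heart of the algorithm can be sketched as follows. The per-case triangle tables that generate the actual polygons are omitted, and the distance-field data is synthetic; only the "which cells does the isosurface pass through" test is shown.

```python
import numpy as np

# Marching-cubes core idea: walk cell by cell through adjacent scan
# planes and keep the cells whose 8 corner values straddle the chosen
# iso-value -- those are the cells that contribute surface polygons.
def surface_cells(volume, iso):
    above = volume > iso
    # Gather the 8 corner flags of every cell (cells indexed by low corner).
    corners = np.stack([above[:-1, :-1, :-1], above[1:, :-1, :-1],
                        above[:-1, 1:, :-1],  above[1:, 1:, :-1],
                        above[:-1, :-1, 1:],  above[1:, :-1, 1:],
                        above[:-1, 1:, 1:],   above[1:, 1:, 1:]])
    some = corners.any(axis=0)
    every = corners.all(axis=0)
    return some & ~every         # mixed corners => surface passes through

z, y, x = np.mgrid[-8:8, -8:8, -8:8]
vol = np.sqrt(x**2 + y**2 + z**2)    # distance field: isosurfaces are spheres
cells = surface_cells(vol, iso=5.0)
```

In the full algorithm, each marked cell's corner pattern (one of 256 cases) indexes a table of triangles, which are emitted to build the isosurface mesh.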


Modeling of a Severe Storm - NCSA 1990

 

 


Model of a severe storm, using isosurfaces - NCSA 1990


The isosurfaces were supplemented with flow lines and stream lines to depict the details of the storm.

 

 

As mentioned above, data acquisition can be accomplished in various ways (CT scans, MRI scans, ultrasound, confocal microscopy, computational fluid dynamics, etc.). Another acquisition approach is remote sensing. Remote sensing involves gathering data and information about the physical "world" by detecting and measuring phenomena such as radiation, particles, and fields associated with objects located beyond the immediate vicinity of the sensing device(s).

It is most often used to acquire and interpret geospatial data for features, objects, and classes on the Earth's land surface, oceans, and atmosphere, but can also be used to map the exteriors of other bodies in the solar system, and other celestial bodies such as stars and galaxies.

Data is obtained via aerial photography, spectroscopy, radar, radiometry and other sensor technologies. The filtering and mapping steps of the visualization pipeline vary, depending on the type of data acquisition method. One of the most famous remote sensing related visualizations is seen in the animation L.A. the Movie produced by the Visualization and Earth Sciences Application group at JPL. The VESA group has performed data visualization at JPL since the mid 1980's.

L.A. The Movie is a 3D perspective rendering of a flight around the Los Angeles area, starting off the coast behind Catalina Island, and includes a brief flight up the San Andreas Fault line. Since then the group has produced other animations, including flights around Mars, Venus, Miranda (a moon of Uranus) and more. L.A. the Movie was created in 1987 utilizing multispectral image data acquired by the Landsat earth orbiting spacecraft. The remotely sensed imagery was rendered into perspective projections using digital elevation data sets available for the area within a Landsat image.

 

 


Scene from L.A. the Movie
The Rose Bowl is in the foreground and JPL is in the foothills of the San Gabriel Mountains.


Scene from Mars the Movie

For more information about remote sensing, go to
http://rst.gsfc.nasa.gov/start.html

To read more about the L.A. the Movie process see Animation and Visualization of Space Mission Data in the August 1997 issue of Animation World Magazine


L.A. the Movie

Flight over Miranda


Another major approach to 3D visualization is Volume Rendering, which allows us to display information throughout a 3D data set, not just on its surface. Several influential methods of volume rendering should be discussed. The first was developed at Pixar for the Pixar Image Computer in 1988 by Robert Drebin and others. The algorithm used independent 3D cells within the volume, called "voxels". The basic assumption was that the volume was composed of voxels that each had a uniform property, such as density. A surface would occur between groups of voxels with two different values. The algorithm used color and intensity values from the original scans, and gradients obtained from the density values, to compute the 3D solid.
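One piece of this approach, estimating surface orientation from the density data itself, can be sketched with gradients: the gradient of the density field serves as a surface normal for shading. The density field (a soft blob) and the light direction here are invented for illustration.

```python
import numpy as np

# Use the gradient of a density volume as a shading normal, as in
# gradient-based volume shading. Data and light are made up.
z, y, x = np.mgrid[-8:8, -8:8, -8:8].astype(float)
density = np.exp(-(x**2 + y**2 + z**2) / 32.0)     # a soft spherical blob

gz, gy, gx = np.gradient(density)                  # per-axis density gradients
mag = np.sqrt(gx**2 + gy**2 + gz**2) + 1e-12
normals = np.stack([gx, gy, gz]) / mag             # unit normal per voxel

light = np.array([0.0, 0.0, 1.0])                  # invented light direction
shade = np.clip(np.tensordot(light, normals, axes=1), 0.0, 1.0)
```

Weighting these per-voxel shades by per-voxel color and opacity, then compositing, is the essence of the Drebin-style pipeline.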

 

The second approach used ray-tracing. The basic idea was to cast rays from screen pixel positions through the data, obtain the desired information along each ray, and then display this information. The information could be the data in a single cell, or "voxel", an average over all cells intersected by the ray, or some other such measure. This approach is used in areas such as medical imaging and the display of seismic data. The first major contribution was by Marc Levoy of the University of North Carolina in 1988. Since then there have been many variations on the ray-tracing volume rendering approach.
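The front-to-back compositing at the core of ray casting can be sketched for the simplest case: axis-aligned rays marching through a regular grid. The linear transfer function mapping sample value to opacity, and the data, are invented stand-ins, not Levoy's actual choices.

```python
import numpy as np

# Minimal ray-casting volume renderer: one ray per pixel, cast straight
# down the z axis, compositing emission and opacity front-to-back.
def raycast(volume, opacity_scale=0.1):
    h, w, depth = volume.shape
    color = np.zeros((h, w))
    alpha = np.zeros((h, w))
    for z in range(depth):                    # advance every ray one step
        sample = volume[:, :, z]
        a = np.clip(sample * opacity_scale, 0.0, 1.0)   # transfer function
        color += (1.0 - alpha) * a * sample   # emission weighted by coverage
        alpha += (1.0 - alpha) * a            # accumulate opacity
    return color, alpha

vol = np.zeros((4, 4, 8))
vol[1, 1, 2:6] = 1.0                          # a small dense rod in the volume
img, cov = raycast(vol)
```

Rays that pierce the rod accumulate opacity; rays through empty space stay fully transparent, so the rod appears in the output image.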

 

A third approach, splatting, was developed by Lee Westover of UNC. Splatting is a volume rendering algorithm that combines efficient volume projection with a sparse data representation. In splatting, the final image is generated by computing, for each voxel in the volume dataset, its contribution to the final image. Only voxels with values inside a determined iso-range need to be considered, and these voxels can be projected via efficient rasterization schemes. Each projected voxel is represented as a radially symmetric interpolation kernel, equivalent to a fuzzy ball. Projecting such a basis function leaves a fuzzy impression, called a footprint or splat, on the screen. The algorithm works by virtually "throwing" the voxels onto the image plane. The computation proceeds by virtually "peeling" the object space in slices and accumulating the result in the image plane. Splatting traditionally classifies and shades the voxels prior to projection, so each voxel footprint is weighted by the assigned voxel color and opacity.
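A minimal sketch of the footprint accumulation follows. The Gaussian kernel size, the sample voxels, and the orthographic projection that simply drops z are illustrative simplifications of Westover's method.

```python
import numpy as np

def footprint(radius=2, sigma=1.0):
    """A radially symmetric kernel: the 'splat' a voxel leaves on screen."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()                       # normalized footprint

def splat(voxels, shape=(16, 16)):
    """Throw each (x, y, z, value) voxel at the image plane and accumulate."""
    img = np.zeros(shape)
    k = footprint()
    r = k.shape[0] // 2
    for (x, y, _z, value) in voxels:         # orthographic: ignore z
        img[y - r:y + r + 1, x - r:x + r + 1] += value * k
    return img

# Two made-up voxels near the image center.
image = splat([(8, 8, 0, 1.0), (9, 8, 1, 0.5)])
```

Because the kernel is normalized, each splat deposits exactly its voxel's weight into the image, and overlapping footprints blend smoothly.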


 

Drebin, R. A., Carpenter, L., AND Hanrahan, P. Volume rendering. Computer Graphics 22, 4 (Aug. 1988), 65--74.

 

Marc Levoy, Display of Surfaces from Volume Data, IEEE Computer Graphics and Applications, v.8 n.3, p.29-37, May 1988

 


Image generated by splatting, Ohio State 1996

Westover, L., Splatting: A Parallel, Feed-Forward Volume Rendering Algorithm. PhD Dissertation, July 1991

As previously discussed, the representation of the data draws from many disciplines, such as computer graphics, image processing, art, graphic design, human-computer interfaces, cognition, and perception. Donna Cox, of the School of Art and Design and the National Center for Supercomputing Applications at the University of Illinois, Urbana-Champaign, understood the potential of bringing scientists and visual design artists together. In 1987 Cox developed the concept of "Renaissance Teams," teams of domain experts and visualization experts whose goal was to determine visual representations which both appropriately and instructively presented domain-specific scientific data. As Vibeke Sorensen said in a 1989 essay,

"the accumulated knowledge of the fine arts can be extremely useful to Scientific Visualization, a field which will rely more and more on visual skills and ideas. A study of art history can help to gain insights into visual form giving and unique ways of solving problems which could enhance the scientific research environment in new and unexpected ways."

A similar approach was the basis for the OSGP group at Ohio State, at Cornell, and at many other visualization centers.

 


Cox, D.  "Renaissance Teams and Scientific Visualization: A Convergence of Art and Science", Collaboration in Computer Graphics Education, SIGGRAPH '88 Educator's Workshop Proceedings, August 1-5, 1988, p. 81 - 104.
http://www.ncsa.uiuc.edu/People/cox/

 

Sorensen, Vibeke, The Contribution of the Artist to Scientific Visualization, 1989
http://visualmusic.org/Biography/Index.html


Notes:

1. Dana Computer Inc. was founded by Allen Michels in Sunnyvale, California in the early 1980s. The company was renamed Ardent Computer in December 1987 because another company named Dana Computer already existed. Ardent was financed by venture capital and Kubota Ltd. (Kubota paid $50,000,000 for 44% of Ardent). In 1989 Ardent merged with Newton, Mass. based Stellar Computer to become Stardent Computer. The Sunnyvale facility was closed in 1990, followed soon after by the Massachusetts facility. Kubota Pacific Computer then gained the intellectual property from Stardent. Kubota Pacific Computer became Kubota Graphics Corporation and lasted until February 1, 1995. Several industry notables worked for Ardent, including Gordon Bell of DEC fame, who was VP of Engineering.

 


Next: The Quest for Visual Realism and Synthetic Image Complexity