A Critical History of Computer Graphics and Animation

Section 17:
Virtual Reality


During the late 1980s and 1990s, virtual reality was touted as a new and emerging technology that promised to revolutionize interactivity and human-computer interfaces. In fact, VR is much older than the 1980s; it is nearly as old as the computer graphics field itself.

In 1956, Morton Heilig began designing the first multisensory virtual experiences. Resembling one of today's arcade machines, his Sensorama combined projected film, audio, vibration, wind, and odors, all designed to make the user feel as if he were actually in the film rather than simply watching it. Patented in 1961, the Sensorama placed the viewer in a one-person theater where, for a quarter, he could experience one of five two-minute 3D full-color films with ancillary sensations of motion, sound, wind in the face, and smells. The five "experiences" included a motorcycle ride through New York, a bicycle ride, a ride on a dune buggy, a helicopter ride over Century City in 1960, and a dance by a belly dancer. Since real-time computer graphics were many years away, the entire experience was prerecorded and played back for the user.

Heilig also patented an idea for a device that some consider the first Head-Mounted Display (HMD). He first proposed the idea in 1960 and applied for a patent in 1962. It used wide field-of-view optics to view 3D photographic slides, and had stereo sound and an "odor generator." He later proposed an idea for an immersive theater that would permit the projection of three-dimensional images without requiring the viewer to wear special glasses or other devices. The audience would be seated in tiers, and the seats would be connected with the film track to provide not only stereographic sound but also the sensation of motion. Smells would be created by injecting various odors into the air conditioning system. Unfortunately, Heilig's "full experience" theater was never built.

Comeau and Bryan, employees of Philco Corporation, constructed the first fabricated head-mounted display in 1961. Their system, called Headsight, featured a single CRT element attached to a helmet and a magnetic tracking system to determine the direction of the head. The HMD was designed to be used with a remote-controlled closed-circuit video system for viewing dangerous situations from a distance. While these devices contributed intellectual ideas for display and virtual experiences, the computer and image generation were yet to be integrated.

 

The field we now know as virtual reality (VR), a highly multidisciplinary field of computing, emerged from research on three-dimensional interactive graphics and vehicle simulation in the 1960s and 1970s. Not surprisingly, the development of the discipline can be traced to early work at MIT and the University of Utah, and to none other than Ivan Sutherland.

Two of the necessary foundations of VR were being addressed at MIT by Larry Roberts and Sutherland, among others: the research and development that allowed the CRT to serve as an affordable and effective device on which to create a computer-generated image, and the interactive interfaces that showed that a user could manipulate the CRT image to accomplish some desired task.

As we mentioned in an earlier section, Roberts wrote the first algorithm to eliminate hidden or obscured surfaces from a perspective picture in 1963. His solutions to this and other related problems prompted attempts over the next decade to find faster hidden-surface algorithms. Among the important activities of Sutherland and his colleagues and students at the University of Utah were efforts to develop fast algorithms for removing hidden surfaces from 3D graphics images, a problem identified as a key computational bottleneck.

Students of the Utah program made two important contributions in this field: an area-search method by Warnock (1969) and a scan-line algorithm by Watkins (1970), which was later developed into a hardware system. One of the most important breakthroughs was Henri Gouraud's simple scheme for continuous shading (1971). Unlike flat shading, in which an entire polygon (a standard surface representation) was rendered at a single level of gray, Gouraud's scheme interpolated intensities between points on a surface to produce continuous shading across each polygon, thus achieving a closer approximation of reality. The effect made a surface composed of discrete polygons appear continuous. This ability is essential to generating visual images of the quality necessary to present a believable VR environment.
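To make the interpolation idea concrete, here is a minimal Python sketch of Gouraud-style shading along one scanline. It illustrates the principle only and is not Gouraud's original implementation; the function names and the simple Lambert term are assumptions made for the example.

```python
# Illustrative sketch of Gouraud-style shading: intensities are computed at the
# polygon's vertices and linearly interpolated across the polygon, so a faceted
# mesh appears smoothly shaded.

def lambert(normal, light_dir):
    """Diffuse intensity at a vertex from its unit normal and the light direction."""
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    return max(0.0, nx * lx + ny * ly + nz * lz)

def lerp(a, b, t):
    return a + (b - a) * t

def shade_scanline(i_left, i_right, width):
    """Interpolate intensity across one scanline between two edge intensities."""
    if width <= 1:
        return [i_left]
    return [lerp(i_left, i_right, x / (width - 1)) for x in range(width)]

# Example: different vertex normals on the two edges, light shining along +z.
light = (0.0, 0.0, 1.0)
i_a = lambert((0.0, 0.0, 1.0), light)   # intensity where the left edge crosses the scanline
i_b = lambert((0.0, 0.8, 0.6), light)   # intensity where the right edge crosses it
print(shade_scanline(i_a, i_b, 8))      # smooth ramp from 1.0 down to 0.6, no faceting
```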

Each of these efforts provided part of the foundation for early attempts at addressing the concept of a virtual environment. The other part was the earlier work, described above, that led to the development of the head-mounted display. Sutherland described "The Ultimate Display" in 1965, and the head-mounted display he went on to build was important because it was stereoscopic (one CRT element for each eye). The HMD had a mechanical tracking system, and Sutherland later experimented with an ultrasonic tracker. As was discussed in the National Academy of Sciences report Funding a Revolution: Government Support for Computing Research, the HMD was the central research component of the emerging field.

Work on head-mounted displays illustrates the synergy between the applications-focused environments of industry and government-funded (both military and civilian) projects and the fundamental research focus of university work that spills across disciplinary boundaries. Work on head-mounted displays benefited from extensive interaction and cross-fertilization of ideas among federally funded, mission-oriented military projects and contracts as well as private-sector initiatives. The players included NASA Ames, Armstrong Aerospace Medical Research Laboratory of the Air Force, Wright-Patterson Air Force Base, and, more recently, DOD programs on modeling and simulation, such as the Synthetic Theater of War program. Each of these projects generated a stream of published papers, technical reports, software (some of which became commercially available), computer-animated films, and even hardware that was accessible to other graphics researchers. Other important ideas for the head-mounted display came from Knowlton and Schroeder's work at Bell Laboratories, the approach to real-time hidden-line solutions by the MAGI group, and the GE simulator project (Sutherland, 1968).

Early work on head-mounted displays took place at Bell Helicopter Company. Designed to be worn by pilots, the Bell display received input from a servo-controlled infrared camera, which was mounted on the bottom of a helicopter. The camera moved as the pilot's head moved, and the pilot's field of view was the same as the camera's. This system was intended to give military helicopter pilots the capability to land at night in rough terrain. The helicopter experiments demonstrated that a human could become totally immersed in a remote environment through the eyes of a camera.

The power of this immersive technology was demonstrated in an example cited by Sutherland (1968). A camera was mounted on the roof of a building, with its field of view focused on two persons playing catch. The head-mounted display was worn by a viewer inside the building, who followed the motion of the ball, moving the camera by using head movements. Suddenly, the ball was thrown at the camera (on the roof), and the viewer (inside the building) ducked. When the camera panned the horizon, the viewer reported seeing a panoramic skyline. When the camera looked down to reveal that it was "standing" on a plank extended off the roof of the building, the viewer panicked!

In 1966, Ivan Sutherland moved from ARPA to Harvard University as an associate professor in applied mathematics. At ARPA, Sutherland had helped implement J.C.R. Licklider's vision of human-computer interaction, and he returned to academe to pursue his own efforts to extend human capabilities. Sutherland and a student, Robert Sproull, turned the "remote reality" vision systems of the Bell Helicopter project into VR by replacing the camera with computer-generated images. (Other head-mounted display projects using a television camera system were undertaken by Philco in the early 1960s, as discussed by Ellis in 1996.) The first such computer environment was no more than a wire-frame room with the cardinal directions--north, south, east, and west--initialed on the walls. The viewer could "enter" the room by way of the "west" door and turn to look out windows in the other three directions. What was then called the head-mounted display later became known as VR.

Sutherland's experiments built on the network of personal and professional contacts he had developed at MIT and ARPA. Funding for Sutherland's project came from a variety of military, academic, and industry sources. The Central Intelligence Agency provided $80,000, and additional funding was provided by ARPA, the Office of Naval Research, and Bell Laboratories. Equipment was provided by Bell Helicopter. A PDP-1 computer was provided by the Air Force, and an ultrasonic head-position sensor was provided by MIT Lincoln Laboratory, also under an ARPA contract.

Sutherland outlined a number of forms of interactive graphics that later became popular, including augmented reality, in which synthetic, computer-generated images are superimposed on a realistic image of a scene. He used this form of VR in attempting a practical medical application of the head-mounted display. The first published research project deploying the 3D display addressed problems of representing hemodynamic flow in models of prosthetic heart valves. The idea was to use calculations based on the physical laws of fluid mechanics and a variety of numerical analysis techniques to generate a synthetic object that one could walk toward and move into or around (Greenfield, Harvey, Donald Vickers, Ivan Sutherland, Willem Kolff, et al. 1971. "Moving Computer Graphic Images Seen from Inside the Vascular System," Transactions of the American Society of Artificial Internal Organs, 17:381-385).

As Sutherland later recalled, there was clearly no chance of immediately realizing his initial vision for the head-mounted display. Still, he viewed the project as an important "attention focuser" that "defined a set of problems that motivated people for a number of years." Even though VR was impossible at the time, it provided "a reason to go forward and push the technology as hard as you could. Spin-offs from that kind of pursuit are its greatest value." (Ivan Sutherland in "Virtual Reality Before It Had That Name," a videotaped lecture before the Bay Area Computer History Association.)

 

 

 

 

 




Virtual Reality — a three dimensional, computer generated simulation in which one can navigate around, interact with, and be immersed in another environment
(John Briggs - The Futurist)

Virtual Reality — the use of computer technology to create the effect of an interactive three-dimensional world in which the objects have a sense of spatial presence.
(Steve Bryson - NASA Ames)

The term "Virtual Reality" itself is attributed to Jaron Lanier of VPL, who used it in 1986 in a conversation regarding the work of Scott Fisher. Fisher, of NASA Ames, had been referring to the field as "Virtual Environments." Myron Krueger labeled the activity "Artificial Reality" in 1983, the title of his book, and a year later William Gibson coined the term "Cyberspace" in his novel Neuromancer.
http://www.cyberedge.com/info_r_lex01.html

 


Heilig's Sensorama


Heilig's head mounted display from his patent application

http://www.artmuseum.net/w2vr/timeline/Heilig.html#

VR is one of those fields that Ivan Sutherland would christen "holy grails"--fields involving the synthesis of many separate, expensive, and risky lines of innovation in a future too far distant and with returns too unpredictable to justify the long-term investment.

 

http://www.nap.edu/readingroom/books/far/contents.html


Sutherland (1965) The Ultimate Display

The ultimate display would, of course, be a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal. With appropriate programming such a display could literally be the Wonderland into which Alice walked.


"The Ultimate Display," Sutherland, I.E., Proceedings of IFIPS Congress 1965, New York, May 1965, Vol. 2, pp. 506-508.

 


Sutherland, Ivan E. 1968. "A Head-Mounted Three Dimensional Display," pp. 757-764 in Proceedings of the Fall Joint Computer Conference. AFIPS Press, Montvale, N.J.

 


Ellis, S. "What Are Virtual Environments?" IEEE Computer Graphics & Applications.

Ellis, S. (1996). "Virtual Environments and Environmental Instruments," in Simulated and Virtual Realities. Taylor & Francis.

 

 

 

 

 

 


Around the same time, Thomas A. Furness, a scientist at Wright-Patterson Air Force Base in Ohio, began to work on better cockpit technology for pilots. "I was trying to solve problems of how humans interact with very complex machines," said Furness. "In this case, I was concerned with fighter-airplane cockpits." Aircraft were becoming so complicated that the amount of information a fighter pilot had to assimilate from the cockpit's instruments and command communications had become overwhelming. The solution was a cockpit that fed 3-D sensory information directly to the pilot, who could then fly by nodding and pointing his way through a simulated landscape below. Today, such technology is critical for air wars that are waged mainly at night, since virtual reality replaces what a pilot can't see with his eyes.

"To design a virtual cockpit, we created a very wide field of vision," said Furness, who now directs the University of Washington's Human Interface Technology (HIT) Lab. "About 120 degrees of view on the horizontal as opposed to 60 degrees." In September of 1981, Furness and his team turned on the virtual-cockpit projector for the first time. "I felt like Alexander Graham Bell, demonstrating the telephone," recalled Furness. "We had no idea of the full effect of a wide-angle view display. Until then, we had been on the outside, looking at a picture. Suddenly, it was as if someone reached out and pulled us inside."

The Human Interface Technology Laboratory is a research and development lab in virtual interface technology. HITL was established in 1989 by the Washington Technology Center (WTC) to transform virtual environment concepts and early research into practical, market-driven products and processes. HITL research strengths include interface hardware, virtual environments software, and human factors.

 

 

 

Furness, T. 1986. "The Super Cockpit and Its Human Factors Challenges," Proceedings of the Human Factors Society 30th Annual Meeting, pp. 48-52.

 

While multi-million-dollar military systems have used head-mounted displays in the years since Sutherland's work, the notion of a personal virtual environment system as a general-purpose user-computer interface was largely neglected for almost twenty years. Beginning in 1984, Michael McGreevy created the first of NASA's virtual environment workstations (also known as personal simulators and Virtual Reality systems) for use in human-computer interface research. With contractors Jim Humphries, Saim Eriskin and Joe Deardon, he designed and built the Virtual Visual Environment Display system (VIVED, pronounced "vivid"), the first low-cost, wide field-of-view, stereo, head-tracked, head-mounted display. Clones of this design, and extensions of it, are still predominant in the VR market.

Next, McGreevy configured the workstation hardware: a Digital Equipment Corporation PDP-11/40 computer, an Evans and Sutherland Picture System 2 with two 19" monitors, a Polhemus head and hand tracker, video cameras, custom video circuitry, and the VIVED system. With Amy Wu, McGreevy wrote the software for NASA's first virtual environment workstation. The first demonstrations of this Virtual Reality system at NASA were conducted by McGreevy in early 1985 for local researchers and managers, as well as visitors from universities, industry, and the military. Since that time, over two dozen technical contributors at NASA Ames have worked to develop Virtual Reality for applications including planetary terrain exploration, computational fluid dynamics, and space station telerobotics. In October 1987 Scientific American featured VIVED--a minimal system, but one which demonstrated that a cheap immersive system was possible.

 

Two other early projects deserve mention:

In 1978 Andy Lippman and a group of researchers from MIT (including Michael Naimark and Scott Fisher) developed what is probably the first true hypermedia system. The Aspen Movie Map was a surrogate travel application that allowed the user to take a simulated drive through the city of Aspen, Colorado.

The system used a set of videodisks containing photographs of all the streets of Aspen. Recording was done with four cameras, each pointing in a different direction, mounted on a truck. Photos were taken every 3 meters. The user could continue straight ahead, back up, or turn left or right.

Each photo was linked to the other photos relevant to these movements. Since the photos were spaced 3 meters apart, displaying 30 images per second would in theory simulate a speed of about 200 mph (330 km/h); the system was deliberately slowed to at most 10 images per second, or about 68 mph (110 km/h).

To make the demo more lively, the user could stop in front of some of the major buildings of Aspen and walk inside. Many buildings had also been filmed inside for the videodisk. The system used two screens, a vertical one for the video and a horizontal one that showed the street map of Aspen. The user could point to a spot on the map and jump directly to it instead of finding her way through the city.
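The link structure and the playback arithmetic described above can be sketched roughly as follows; the class and function names are hypothetical and stand in for whatever the MIT system actually used.

```python
# Rough sketch of the Movie Map idea: each videodisk frame is a node linked to
# the frames reachable by continuing ahead, backing up, or turning, and the
# apparent travel speed follows from the 3-meter photo spacing.

from dataclasses import dataclass, field
from typing import Dict, Optional

PHOTO_SPACING_M = 3.0  # photographs were taken every 3 meters

@dataclass
class Frame:
    frame_id: int
    links: Dict[str, Optional[int]] = field(default_factory=dict)  # 'ahead', 'back', 'left', 'right'

def playback_speed_kmh(frames_per_second: float) -> float:
    """Apparent travel speed for a given playback rate."""
    return frames_per_second * PHOTO_SPACING_M * 3.6  # m/s converted to km/h

# An intersection frame with a left turn available but no right turn.
intersection = Frame(42, {"ahead": 43, "back": 41, "left": 7, "right": None})

# 30 frames/s gives roughly 324 km/h (about 200 mph); capping playback near
# 10 frames/s gives roughly 108 km/h (about 68 mph).
print(playback_speed_kmh(30), playback_speed_kmh(10))
```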


The Aspen Map

 


Working on human-computer interaction at the University of Wisconsin in the late 1960s and early 1970s, Myron Krueger experimented with and developed several computer art projects.

After several other experiments, Krueger created VIDEOPLACE. The computer had control over the relationship between the participant's image and the objects in the graphic scene, and it could coordinate the movement of a graphic object with the actions of the participant. While gravity affected the participant's physical body, it did not control or confine his image, which could float if needed. A series of simulations could be programmed based on any action. VIDEOPLACE offered over 50 compositions and interactions (including Critter, Individual Medley, Fractal, Finger Painting, Digital Drawing, Body Surfacing, Replay, and others).

In the installation, the participant faced a video-projection screen while the screen behind him was backlit to produce high contrast images for the camera (in front of the projection screen), allowing the computer to distinguish the participant from the background.

The participant's image was then digitized to create silhouettes which were analyzed by specialized processors. The processors could analyze the image's posture, rate of movement, and its relationship to other graphic objects in the system. They could then react to the movement of the participant and create a series of responses, either visual or auditory reactions. Two or more environments could also be linked.

In 1983 Krueger published his now-famous book Artificial Reality, which was updated in 1990.

 

 

 

 

 

 

 

 

 

 

 

 


Scenes from Krueger's VIDEOPLACE

Krueger, M.W. Artificial Reality. Reading, Mass.: Addison-Wesley, 1983.

 

 

   

One of the first instrumented gloves described in the literature was the Sayre Glove, developed by Tom DeFanti and Daniel Sandin in a 1977 project for the National Endowment for the Arts. (In 1962, Uttal of IBM patented a glove for teaching touch typing, but it was not general-purpose enough to be used in VR applications.) The Sayre Glove used light-based sensors: flexible tubes with a light source at one end and a photocell at the other. As the fingers were bent, the amount of light that hit the photocells varied, providing a measure of finger flexion. The glove, based on an idea by colleague Rich Sayre, was an inexpensive, lightweight device that could monitor hand movements by measuring the bending of the metacarpophalangeal joints of the hand. It provided an effective method for multidimensional control, such as mimicking a set of sliders.

The first widely recognized device for measuring hand positions was developed by Dr. Gary Grimes at Bell Labs. Patented in 1983, Grimes' Digital Data Entry Glove had finger-flex sensors, tactile sensors at the fingertips, orientation sensing, and wrist-positioning sensors; the positions of the sensors themselves were changeable. It was intended for entering "alphanumeric" characters by examining hand positions, primarily as an alternative to keyboards, but it also proved effective as a tool for allowing non-vocal users to "finger-spell" words.
  
This was soon followed by an optical glove that would later become the VPL DataGlove. It was built by Thomas Zimmerman, who also patented the optical flex sensors used by the gloves. Like the Sayre Glove, these sensors used fiber optic cables with a light at one end and a photodiode at the other. Zimmerman had also built a simplified version, called the Z-glove, which he attached to his Commodore 64. This device measured the angles of each of the first two knuckles of the fingers using the fiber optic sensors, and was usually combined with a Polhemus tracking device. Some versions also measured abduction. This was really the first commercially available glove; however, at about $9,000, it was prohibitively expensive.


The DataGlove (originally developed by VPL Research) was a neoprene fabric glove with two fiber optic loops on each finger. Each loop was dedicated to one knuckle, which occasionally caused problems: if a user had extra large or small hands, the loops would not correspond well to the actual knuckle positions, and the user could not produce very accurate gestures. At one end of each loop was an LED and at the other end a photosensor. The fiber optic cable had small cuts along its length. When the user bent a finger, light escaped from the cable through these cuts; the amount of light reaching the photosensor was measured and converted into a measure of how much the finger was bent. The DataGlove required recalibration for each user, and often for the same user over the course of a session. Coupled with a problem of fatigue (because of the stiffness), it failed to reach the market penetration that was anticipated.
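A minimal sketch of the per-user calibration just described, with hypothetical names and made-up readings (this is not VPL's actual software): raw photosensor values for one knuckle are mapped to bend angles using a linear fit taken from an open-hand pose and a closed-fist pose.

```python
# Per-user calibration sketch: map a raw photosensor reading to an estimated
# joint-bend angle using readings captured with the hand open and closed.

def calibrate(raw_open, raw_closed, angle_open=0.0, angle_closed=90.0):
    """Return a function converting a raw sensor reading into a bend angle in degrees."""
    span = raw_closed - raw_open
    def to_angle(raw):
        if span == 0:
            return angle_open
        t = (raw - raw_open) / span
        t = min(1.0, max(0.0, t))          # clamp to the calibrated range
        return angle_open + t * (angle_closed - angle_open)
    return to_angle

# Example: one knuckle's sensor reads 820 with the hand open and 310 with a fist.
index_knuckle = calibrate(raw_open=820, raw_closed=310)
print(index_knuckle(565))   # halfway between the two readings, about 45 degrees
```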

 

Sturman, D.J. and Zeltzer, D., "A Survey of Glove-Based Input," IEEE Computer Graphics and Applications, Vol. 14, No. 1, Jan. 1994, pp. 30-39.


Grimes' Digital Data Entry Glove

   

Again from the National Academy of Sciences report:

The basic technologies developed through VR research have been applied in a variety of ways over the last several decades. One line of work led to applications of VR in biochemistry and medicine. This work began in the 1960s at the University of North Carolina (UNC) at Chapel Hill. The effort was launched by Frederick Brooks, who was inspired by Sutherland's vision of the ultimate display as enabling a user to see, hear, and feel in the virtual world. Flight simulators had incorporated sound and haptic feedback for some time. Brooks selected molecular graphics as the principal driving problem of his program. The goal of Project GROPE, started by Brooks in 1967, was to develop a haptic interface for molecular forces. The idea was that, if the force constraints on particular molecular combinations could be "felt," then the designer of molecules could more quickly identify combinations of structures that could dock with one another.

GROPE-I was a 2D system for continuous force fields. GROPE-II expanded this to a full six-dimensional (6D) system with three forces and three torques. The computer available for GROPE-II in 1976 could produce forces in real time only for very simple world models — a table top; seven child's blocks; and the tongs of the Argonne Remote Manipulator (ARM), a large mechanical device. For real-time evaluation of molecular forces, Brooks and his team estimated that 100 times more computing power would be needed. After the GROPE-II system was built and tested, the ARM was mothballed and the project was put on hold for about a decade, until 1986, when VAX computers became available. GROPE-III, completed in 1988, was a full 6D system. Brooks and his students then went on to build a full molecular-force-field evaluator and, with 12 experienced biochemists, tested it in the GROPE-IIIB experiments in 1990. In these experiments, the users changed the structure of a drug molecule to get the best fit to an active site by manipulating up to 12 twistable bonds.

The test results on haptic visualization were extremely promising. The subjects saw the haptic display as a fast way to test many hypotheses in a short time and set up and guide batch computations. The greatest promise of the technique, however, was not in saving time but in improving situational awareness. Chemists using the method reported better comprehension of the force fields in the active site and of exactly why each particular candidate drug docked well or poorly. Based on this improved grasp of the problem, users could form new hypotheses and ideas for new candidate drugs.

The docking station is only one of the projects pursued by Brooks's group at the UNC Graphics Laboratory. The virtual world envisioned by Sutherland would enable scientists or engineers to become immersed in the world rather than simply view a mathematical abstraction through a window from outside. The UNC group has pursued this idea through the development of what Brooks calls "intelligence-amplifying systems." Virtual worlds are a subclass of intelligence-amplifying systems, which are expert systems that tie the mind in with the computer, rather than simply substitute a computer for a human.

In 1970, Brooks's laboratory was designated an NIH Research Resource in Molecular Graphics, with the goal of developing virtual-world technology to help biochemists and molecular biologists visualize and understand their data and models. During the 1990s, UNC collaborated with industry sponsors such as HP to develop new architectures incorporating 3D graphics and volume-rendering capabilities into desktop computers (HP later decided not to commercialize the technology).

Since 1985, NSF funding has enabled UNC to pursue the Pixel-Planes project, with the goal of constructing an image-generation system capable of rendering 1.8 million polygons per second and a head-mounted display system with a lag time under 50 milliseconds. This project is connected with GROPE and a large software project for mathematical modeling of molecules, human anatomy, and architecture. It is also linked to VISTANET, in which UNC and several collaborators are testing high-speed network technology for joining a radiologist who is planning cancer therapy with a virtual-world system in his clinic, a Cray supercomputer at the North Carolina Supercomputer Center, and the Pixel-Planes graphics engine in Brooks's laboratory.

With Pixel-Planes and the new generation of head-mounted displays, the UNC group has constructed a prototype system that enables the notions explored in GROPE to be transformed into a wearable virtual-world workstation. For example, instead of viewing a drug molecule through a window on a large screen, the chemist wearing a head-mounted display sits at a computer workstation with the molecule suspended in front of him in space. The chemist can pick it up, examine it from all sides, even zoom into remote interior dimensions of the molecule. Instead of an ARM gripper, the chemist wears a force-feedback exoskeleton that enables the right hand to "feel" the spring forces of the molecule being warped and shaped by the left hand.

In a similar use of this technology, a surgeon can work on a simulation of a delicate procedure to be performed remotely. A variation on and modification of the approach taken in the GROPE project is being pursued by UNC medical researcher James Chung, who is designing virtual-world interfaces for radiology. One approach is data fusion, in which a physician wearing a head-mounted display in an examination room could, for example, view a fetus by ultrasound imaging superimposed and projected in 3D by a workstation. The physician would see these data fused with the body of the patient. In related experiments with MRI and CT scan data fusion, a surgeon has been able to plan localized radiation treatment of a tumor.


Fetal VR surgery

The term haptic refers to our sense of touch. It consists of input from mechanoreceptors in the skin (neurons that convey information about texture) and the sense of proprioception, which interprets information about the size, weight, and shape of objects via feedback from muscles and tendons in the hands and other limbs. Haptic feedback refers to the way we attempt to simulate this haptic sense in a virtual environment, by assigning physical properties to the virtual objects we encounter and designing devices to relay these properties back to the user. Haptic feedback devices use vibrators, air bladders, heated and cooled materials, and titanium-nickel alloy transducers to provide a minimal sense of touch.

 



UNC uses a ceiling-mounted ARM (Argonne Remote Manipulator) to test receptor sites for a drug molecule. The researcher, in virtual reality, grasps the drug molecule and holds it up to potential receptor sites. Good receptor sites attract the drug, while poor ones repel it. Using a force-feedback system, scientists can easily feel where the drug can and should go.
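The sketch below illustrates the general principle behind that attraction and repulsion, not UNC's actual force-field code: the force sent to the haptic arm is the negative gradient of an interaction energy between drug and receptor, so the user's hand is pulled toward good fits and pushed back from collisions. A single Lennard-Jones pair stands in here for the full molecular force field that GROPE evaluated over many atoms.

```python
import numpy as np

def lennard_jones_force(r_vec, epsilon=1.0, sigma=1.0):
    """Force on the drug atom from one receptor atom separated by vector r_vec."""
    r = np.linalg.norm(r_vec)
    # dU/dr for U = 4*eps*((sigma/r)**12 - (sigma/r)**6); force = -dU/dr * r_hat
    dU_dr = 4 * epsilon * (-12 * sigma**12 / r**13 + 6 * sigma**6 / r**7)
    return -dU_dr * (r_vec / r)

# At moderate range the force points toward the receptor atom (a gentle pull);
# when the fit is too tight it points away (a strong push) -- which is what the
# chemist feels through the manipulator.
print(lennard_jones_force(np.array([1.5, 0.0, 0.0])))   # pulls toward the site
print(lennard_jones_force(np.array([0.9, 0.0, 0.0])))   # pushes back out
```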

http://www.cs.unc.edu/Research/

   

In 1979, F.H. Raab and others described the technology behind what has been one of the most widely used tracking systems in the VR world — the Polhemus. This six-degree-of-freedom electromagnetic position tracker was based on the application of orthogonal electromagnetic fields. Two varieties of electromagnetic position trackers have been implemented: one uses alternating current (AC) to generate the magnetic field, and the other uses direct current (DC).

In the Polhemus AC system, three mutually perpendicular emitter coils sequentially generate AC magnetic fields that induce currents in the receiving sensor, which consists of three passive, mutually perpendicular coils. Sensor location and orientation are therefore computed from the nine induced currents, by calculating the small changes in the sensed coordinates and updating the previous measurements.
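A simplified Python sketch of the principle (not Polhemus' proprietary algorithm; the function names and calibration constant are assumptions): the nine induced currents form a 3x3 signal matrix, and because a dipole field falls off with the cube of distance, the overall signal strength alone already yields range. The pattern of the nine entries then constrains the direction to the sensor and its orientation, which the real tracker refines iteratively from measurement to measurement.

```python
import numpy as np

def signal_matrix(samples):
    """samples[i][j]: current induced in receiver coil j while transmitter coil i is driven."""
    return np.asarray(samples, dtype=float)

def estimate_range(S, calibration_constant):
    """Range estimate from total signal power; the constant is measured once at a known distance."""
    power = np.linalg.norm(S)                             # Frobenius norm of the 3x3 matrix
    return (calibration_constant / power) ** (1.0 / 3.0)  # dipole field falls off as 1/r^3

# Example with made-up numbers, calibrated so that power == 1 at a range of 1 meter.
S = signal_matrix([[0.020, 0.003, 0.001],
                   [0.002, 0.018, 0.004],
                   [0.001, 0.005, 0.021]])
print(estimate_range(S, calibration_constant=1.0))  # distance in meters (illustrative only)
```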

In 1964, Bill Polhemus started Polhemus Associates in Ann Arbor, Michigan, a 12-person engineering studies company working on navigation projects for the U.S. Department of Transportation and similar European and Canadian agencies. His research focused on determining an object's position and orientation in three-dimensional space.

He relocated the company to Malletts Bay, Vermont, in 1969, and the company moved beyond studies to focus on hardware. In late 1970, after an influx of what Polhemus called "a very clever team from a division of Northrop Corp. (now Northrop Grumman Corp.) that had a lot of experience in development of miniaturized inertial and magnetic devices," the firm changed its name to Polhemus Navigation Sciences, later shortened to Polhemus, and incorporated in Vermont.

"The Polhemus system is used to track the orientation of the pilot's helmet," Polhemus said of the electromagnetic technology he pioneered. "The ultimate objective is to optically project an image on the visor of the pilot's helmet so he can look anywhere and have the display that he needs. ... It's critical to know, in a situation like that, where the pilot's helmet is pointed, so you know what kind of a display to put up on the visor," he added before comparing the system to a "heads-up display" or "gun sight," which projects similar data onto an aircraft's windshield.

Polhemus was supported for a few years in the early 1970s by Air Force contracts. But by late 1973, "in the absence of any equity capital to speak of, we just ran dry," in his words. "By that time, however, the device looked attractive to a number of companies, and there were several bids for it. We finally wound up selling to the Austin Company," a large conglomerate with headquarters in Cleveland, Ohio.

The next few years saw the company change hands to McDonnell Douglas Corp. of St. Louis, Mo., and then to the current owner, Kaiser Aerospace and Electronics Corp. of Foster City, Calif., in 1988.



Ernie Blood was an engineer and Jack Scully a salesman at Polhemus. Blood and Scully created the digitizer used for George Lucas' groundbreaking Star Wars series, which won an Academy Award for Polhemus. Blood's name is on the patent. They had been discussing possible expanded commercial uses for the Polhemus motion tracking technology, possibly in the entertainment field, in training situations, or in the medical field. However, Polhemus was focused on military applications, and was not interested in any other markets. When they took the idea of a spinoff company to their superiors at McDonnell-Douglas, the parent company of Polhemus, they were fired in 1986.

Still convinced that there were commercial possibilities, Blood and Scully started a new company in 1986, which they called Ascension. The first few years were lean, but Blood improved upon the head-tracking technology for fighter pilots and Scully eventually negotiated a licensing agreement with GEC (the General Electric Company of Great Britain). The contract was put on hold for two years when Polhemus, which had been purchased by Kaiser Aerospace, a direct competitor of GEC, sued Ascension for patent infringement. Polhemus dropped the case shortly before it went to trial, and Ascension, with the financial backing of GEC, was able to stay afloat. The licensing agreement was finalized with GEC, and Ascension's sales of equipment based on the technology took off, particularly in the medical field.

When the virtual reality revolution erupted in the early 1990s, Ascension played a part in it, developing motion trackers that could be used in high-priced games. "We decided from the beginning that we were not going to go after a single-segment application," Blood said. "From day one, we've always made sure we were involved in a lot of different markets." As the VR market declined, this philosophy helped Ascension's sales stay constant.

A constant for Ascension has been its work in the field of animation. Scully says Ascension deserves some of the credit for inventing real-time animation, in which sensors capture the motions of performers for the instant animation of computerized characters. Ascension's Flock of Birds product has been used to capture this motion and define the animated characters. Over the years, Ascension technology has been used in the animation of characters in hundreds of television shows (MTV's CyberCindy, Donkey Kong), commercials (the Pillsbury Doughboy, the Keebler elves), video games (Legend, College Hoops Basketball, SONY's The Getaway) and movies (Starship Warriors, pre-animation for Star Wars films).

Ascension Technology serves six markets from its facility: animation, medical imaging, biomechanics, virtual reality, simulation/training, and military targeting systems. Using DC magnetic, AC magnetic, infrared-optical, inertial, and laser technologies, Ascension provides turnkey motion capture systems for animated entertainment as well as custom tracking solutions for original equipment manufacturers to integrate into their products.

 

 

 

 

F. Raab, E. Blood, T. Steiner, and H. Jones, Magnetic position and orientation tracking system, IEEE Transactions on Aerospace and Electronic Systems, Vol. 15, No. 5, 1979, pp. 709-718.

 

 

 

 


Polhemus FASTrak and VISIONTrak tracking systems

 

 

 

 

 

 

 

 

 


Ascension’s Flock of Birds motion tracker in use in Puma helicopter repair training

   

 

Work on the wide-angle lenses behind the LEEP optical system for 3-D still photography began in 1975. The Large Expanse, Extra Perspective (LEEP) optics were designed by Eric Howlett in 1979 and provide the basis for most of the virtual reality helmets available today. The combined system gave a very wide field-of-view stereoscopic image, and users of the system were impressed by the sensation of depth in the scene and the corresponding realism. The original LEEP system was redesigned for the NASA Ames Research Center in 1985 for its first virtual reality installation, the VIEW (Virtual Interface Environment Workstation) by Scott Fisher. The system was built according to lessons learned with the earlier LEEP display, and proved to be quite impressive. It already featured many techniques that are standard today: a Polhemus tracker, 3D audio output, gesture recognition using VPL's DataGlove, a remote camera, and a BOOM-mounted CRT display.

 

 

 

In 1988, Fakespace began building a telepresence camera system for the Virtual Environment Workstation (VIEW) project at NASA Ames Research Center. The complete system combined a teleoperated camera platform and a 3D viewing system. To increase image quality, Fakespace invented the BOOM (Binocular Omni-Orientation Monitor). Very small monitors are mounted on a mechanical arm, and users look into the monitors as they would look into a pair of binoculars. Tracking occurs when the user moves the arm, which changes the perspective. When a user releases the BOOM, another person can look at the same scene from the same perspective, which is an advantage over HMDs. Since real monitors are used, the resolution is quite good.
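As a hedged illustration of arm-based tracking (a planar two-link toy model, not Fakespace's six-joint design), the joint angles reported by the arm's encoders can be run through forward kinematics to recover where the display sits and which way it points.

```python
import math

def forward_kinematics(theta1, theta2, l1=0.5, l2=0.5):
    """Viewpoint position and facing angle for a planar two-link arm (angles in radians)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    heading = theta1 + theta2            # direction the binocular display points
    return x, y, heading

# Example: the two joint encoders read 30 and -45 degrees.
print(forward_kinematics(math.radians(30), math.radians(-45)))
```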

 

 

 


The concept of a room with graphics projected from behind the walls was invented at the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago in 1992. The images on the walls were displayed in stereo to give a depth cue. The main advantage over ordinary graphics systems is that the users are surrounded by the projected images, which means that the images are in the users' main field of vision.

This environment is called the CAVE (CAVE Automatic Virtual Environment). The first CAVE (as well as the concept) was created by Carolina Cruz-Neira, Dan Sandin, and Tom DeFanti, along with other students and staff of EVL. Since then, this back-projection method of virtual reality has gained a strong following.

The CAVE is a surround-screen, surround-sound, projection-based virtual reality (VR) system. The illusion of immersion is created by projecting 3D computer graphics into a 10'x10'x10' cube composed of display screens that completely surround the viewer. It is coupled with head and hand tracking systems to produce the correct stereo perspective and to isolate the position and orientation of a 3D input device. A sound system provides audio feedback. The viewer explores the virtual world by moving around inside the cube and grabbing objects with a three-button, wand-like device.

Unlike users of the video-arcade type of VR system, CAVE dwellers do not wear helmets to experience VR. Instead, they put on lightweight stereo glasses and walk around inside the CAVE as they interact with virtual objects. Multiple viewers often share virtual experiences and easily carry on discussions inside the CAVE, enabling researchers to exchange discoveries and ideas. One user is the active viewer, controlling the stereo projection reference point, while the rest of the users are passive viewers.

The CAVE was designed from the beginning to be a useful tool for scientific visualization; EVL's goal was to help scientists achieve discoveries faster, while matching the resolution, color, and flicker-free qualities of high-end workstations. Most importantly, the CAVE can be coupled to remote data sources, supercomputers, and scientific instruments via high-speed networks. It has obvious benefits: it is easy for several people to be in the room simultaneously and therefore see images together, and it is easy to mix real and virtual objects in the same environment. Also, because users see, for example, their own hands and feet as part of the virtual world, they get a heightened sense of being inside that world.

Various CAVE-like environments exist all over the world today. Most of these have up to four projection surfaces, with images usually projected on three walls and the floor. Adding projection on the ceiling gives a fuller sense of being enclosed in the virtual world, and projection on all six surfaces of a room allows users to turn around and look in all directions. Their perception and experience are then never limited, which is necessary for full immersion. The PDC Cube at the Center for Parallel Computers at the Royal Institute of Technology in Stockholm, Sweden, was the first fully immersive CAVE.
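The head-tracked perspective the CAVE depends on can be sketched as a standard off-axis (asymmetric) viewing frustum: the tracked eye position and the fixed wall define a frustum whose window is the physical screen. This is a generic illustration of the technique, not EVL's CAVE library; the returned values correspond to glFrustum-style left/right/bottom/top/near/far parameters, computed once per eye per wall every frame.

```python
def off_axis_frustum(eye, screen_center, half_width, half_height, near, far):
    """Frustum extents at the near plane for one wall, in wall coordinates.

    The wall lies in the z = 0 plane and the viewer stands on the +z side;
    eye is the tracked eye position, screen_center the center of the wall.
    """
    dx = screen_center[0] - eye[0]
    dy = screen_center[1] - eye[1]
    dist = eye[2]                      # perpendicular distance from eye to wall
    scale = near / dist                # project the wall edges onto the near plane
    left   = (dx - half_width)  * scale
    right  = (dx + half_width)  * scale
    bottom = (dy - half_height) * scale
    top    = (dy + half_height) * scale
    return left, right, bottom, top, near, far

# Example: a roughly 10' x 10' wall (~3 m), viewer 1.5 m back and 0.4 m left of center.
eye = (-0.4, 0.0, 1.5)
print(off_axis_frustum(eye, screen_center=(0.0, 0.0), half_width=1.52,
                       half_height=1.52, near=0.1, far=100.0))
```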

 

 

 

 

 





For a discussion of the current state of the art in VR devices, see

Review of Virtual Environment Interface Technology, IDA Paper P-3186
by Christine Youngblut, Rob E. Johnson, Sarah H. Nash, Ruth A. Wienclaw, and Craig A. Will at

http://www.hitl.washington.edu/scivw/IDA/


 

