Above; Jaron Lanier and George Zachary of VPL Research, at SIGGRAPH 1992.

Click here to see the Table of Contents of Virtual Reality 1.0 – the 90’s


by Dr. Tom Furness

In the 60’s I became one of the original inventors of Virtual Reality, although we didn’t call it that at the time. I was joined by Ivan Sutherland, then a professor at the University of Utah. Ivan’s motivation was to build an ‘ultimate interface’ that would allow people to interface with computers by being inside 3D computer graphics and to use direct interaction with those graphics through hand-held devices. My motivation was different. I was trying to solve several problems in fighter cockpits for the United States Air Force. These problems centered on cockpit complexity, night vision and weapon-aiming issues in military aircraft. In our separate ways both of us pioneered what we know today as Virtual Reality… or the idea that people can experience as real an alternative reality of a computer-generated world that only appears to exist.

Unbeknownst to us at the time was an earlier pioneer: Mort Heilig. Mort was a cinematographer and filmmaker. He felt that traditional films and theaters did not involve the ‘whole person’ from a sensory standpoint. Driven by his desire to create experiences that involved the whole body, Mort built the Sensorama, an arcade ‘ride’ that would propel the customer through an immersive three-dimensional world that included wide field-of-view visual effects, sound, smell and vibration, which together made the user’s experience more realistic, engaging and enjoyable. He was way ahead of his time.

We were joined later in the 70’s and 80’s by such legends as Jaron Lanier, Fred Brooks, Henry Fuchs, Myron Krueger, Michael McGreevy, Scott Fisher, Jonathan Waldern and others who, like us, became infected with VR fever. Sadly, I can attest that such an addiction never subsides; it makes one wish and hope for better and better quality virtual worlds to inhabit.

For two decades after those beginnings, I orchestrated for the Air Force and other military services the development of many configurations of helmet tracking and display systems while also venturing into 3D binaural sound, speech and gesture input. My work in the Air Force culminated in the development of the super cockpit concept: “a cockpit that the pilot wears.” It essentially was a control/display medium that organized and fused information from aircraft subsystems and portrayed that information in the form of a virtual visual, auditory and tactile circumambience for rapid assimilation by the pilot.

The simulators we built in support of the super cockpit project (such as the ‘Darth Vader’ helmet) represented the first viable multisensory interactive Virtual Reality. But others were also doing amazing things. Jaron Lanier was working on a way to program computers with a visual programming language that used ‘eyephones’ and ‘data gloves’ to manipulate objects. In fact, it was Jaron who gave us our name: Virtual Reality. This, at least, is the name that stuck. About the same time, Jonathan Waldern was making arcade games through his company, W Industries. Bob Stone was working on virtual interfaces for robots. Those early days were heady times. When we began to realize the implications of what we were doing, we became even more intoxicated by this VR thing.

In 1989 I left the Air Force to become a professor at the University of Washington and start the Human Interface Technology Laboratory. I wanted to use my education, which had cost the taxpayers millions of dollars, to teach the next generation of students about Virtual Reality and its power. About this time Howard Rheingold authored his book Virtual Reality, and many of us were ushered into the public spotlight. We speculated about futures where Virtual Reality could be used for everything: medicine, training, education, design and, not least, entertainment. We were on fire. Amidst this excitement Fred Brooks, Jaron Lanier and I were asked to testify before the Senate regarding the future of VR within the context of the Information Highway that was being promoted by Senator Al Gore.

During the early 90’s Ben Delaney emerged as a bellwether for our art. Ben took the satellite view across the sandboxes of our collective Virtual Reality community. Through the pages of the CyberEdge Journal we began to network and feel a part of a larger community of not only believers, but doers. This became a boon for us, especially as a forum for those involved in the business side of this fledgling revolution. In 1991 Tom Sheridan (MIT) and I started the journal Presence: Teleoperators and Virtual Environments, which was the first academic journal for serious investigators of virtual environments. In parallel, I worked with colleagues David Mizell and Tom Caudell in 1993 to start the IEEE Virtual Reality Annual International Symposium (VRAIS), which morphed into the IEEE Virtual Reality conference that still exists today.

At the turn of the millennium things began to cool down… we entered into ‘VR winter’. Unfortunately, in our enthusiasm of the 80s and 90s, we had set high expectations for VR and its applications. The reality was that we were a long way from anything practical. The delivery technology could not keep up with the worlds that we could build. When you went into an immersive virtual world, you were practically blind, and there was a high probability that you would experience motion sickness brought about by long latencies and slow update rates of the virtual scene. These were showstoppers.

Many of us kept working on these more difficult problems. Fred Brooks, Henry Fuchs and others worked on the computing and tracking technology while my colleagues and I were working on better ways to make worlds and the display delivery mechanisms using retinal scanning. From my HITLab we were launching start-up companies from inventions and via graduates who also had become infected. Many VR applications were emerging. The Virtual Worlds Consortium that helped to support my lab had a membership of 50 companies. The Media Lab at MIT, who were also players in this world, had many more.

Even though we had many ups and downs, we learned a lot. We felt that we had split the atom. We explored what it took to create a sense of presence in virtual worlds and discovered that in the process of making our worlds immersive we created a deep coupling to memory: that by putting people in virtual places, we ended up putting those places into people. Our studies showed a huge impact on learning and retention and the ability to teach complex subjects experientially. We also found new ways to design spaces, visualize data, and manipulate atoms and molecules. Equally important, we discovered the problems of sensory conflicts between visual and vestibular cues that bring on motion sickness. In spite of all of the good applications we envisioned for Virtual Reality, we found that Hollywood and filmmakers would create negative scenarios… like splitting the atom, there could be destructive applications and consequences.

Today… with the timely publishing of Ben’s book, we are experiencing a rediscovery of virtual reality. This reawakening has been fueled by the march of computer graphics and imaging technology that solve many of the problems we first experienced. Most of the new adventurers in virtual space are probably not aware that we ‘gray ones’ have been there before. As an old head, I don’t want to dampen their enthusiasm, because they will be the ones that really make it happen. They too will be pioneers. But it might help for these youngsters to learn from the crude maps we made… and this is what this book provides: a window into an equally exciting time when we were on a steep learning curve, developing and promoting VR as a paradigm shift that would change the world forever. Ours may be an old performance but the music lingers on.

Tom Furness
October 10, 2014

Dispatch from the Virtual Front:

Battlespace Visualization

By Penny Weiss, R. Bowen Loftin, and Rob Johnston

One of the largest audiences taking advantage of virtual reality technology is the military, which is developing visualization tools to augment reality for training and mission-specific rehearsal. Unlike other professions, where the practitioner can perform the actual task daily, the soldier, except in times of war, is restricted to games and simulation. Again, unlike most other professions, in an entire career a soldier may never actually employ the skills for which he/she has been trained. Since the military cannot create a war at will for training purposes, methods are needed for instilling and refining the cognitive skills necessary for battlespace management, and for developing the appropriate visualization skills for actual operations. VR allows soldiers to simulate battles, manage battlespace, and visualize dynamic information in new ways.

An area where this technology may be of great benefit to the military is in the development of Battlespace Planning for Joint Service Operations (JSO). Joint Service Operations are military operations that rely on coordinating the actions of the Air Force, Army, Navy and Marines. They may also include coordinating a multi-national force like the one seen in the Gulf War.

The concept of "Battlespace" is an enlargement and modernization of the "theater of war" idea. However, Battlespace is not only the physical space in which a battle occurs. It also involves the management of time, human resources, information, equipment, communications, and the strategic plans necessary to win a war. In a battle, a commander is responsible for managing a variety of specialized teams (e.g., tanks, ground troops, aircraft, munitions, artillery, communications, intelligence). Each of these teams is trained to perform a specific task; the commander, or officer in charge, needs a way to visualize the tasks as a whole system in order to make sense of the battlespace. In an actual battle, a commander uses terrain maps, satellite imagery, signals, and human intelligence to create a conceptual model of the battlefield. The existing methodology is outdated because commanders have more areas of responsibility, and more technical requirements, than traditional maps, images, or visualization devices can support. The battlefield is no longer a two-dimensional plane but is now a three-dimensional space. Commanders must manage tactical forces (e.g., ground troops and tanks) as well as strategic forces (e.g., communications, logistics, and intelligence). They must network with the other services (e.g., coordinating close air support for ground troops), deal with large increases in battle information, and worry about logistics and supply; all of which has to be done in much less time than ever before.

One of the most critical developments in military training was the development of SimNet (which we have discussed many times in CEJ). SimNet is a distributed interactive simulation for training heavy equipment crews (e.g., tanks, Bradley Fighting Vehicles, etc.) for war fighting. This training tool allows instructors and students to make sense of mistakes that would cost them their lives in a real battle. It also allows them to experience a variety of scenarios, so that they can practice war fighting in multiple terrains, environments, conditions, and with a variety of friendly and opposing forces. This training utilizes and builds the requisite pattern recognition skills that commanders need in an actual battle.

Another of the developments in Battlespace planning and training is the Synthetic Theater of War (STOW). STOW is a project funded by DARPA to create a synthetic battlespace to improve mission rehearsal and training. One STOW project, in partnership with the United States Marine Corps (USMC), is LeatherNet. LeatherNet is a military simulation tool designed to train individual combatants, improve lower-echelon command and control, enhance existing training tool-kits, and integrate the four services.

The use of VR in this area will allow military personnel to simulate conditions that could not normally be created. This will be useful as a training tool, as well as an abstract problem-solving aid. Like SimNet, artificial battles in LeatherNet can be created to practice combat maneuvers. The difference lies in the combatants that are integrated in these battles. LeatherNet can actually engage individual human participants in these simulated battles. One way in which individuals can be integrated is through I-Port, a portal to SimNet and Distributed Interactive Simulation (DIS) for individual ground troops. For the first time, individuals can be trained to perform their specific combat-related tasks in a virtual environment.

LeatherNet will be able to simulate an entire battle, from machinery to manpower, thereby creating the multitude of scenarios and conditions necessary for realistic training. This tool, in combination with other existing battlespace simulations, offers an opportunity to train commanders in the combat decision-making skills they will need in war, in a safe, realistic environment where errors don't equal death.

Penny Weiss, R. Bowen Loftin, and Rob Johnston are all working at the Virtual Environment Technology Laboratory at NASA's Johnson Space Center (UH/NASA-JSC). Weiss and Johnston also work with the Institute for Defense Analyses.

A first take on W-Industries

By Myron W. Krueger

I saw the W-Industries Virtuality system at the Imagina Conference in Monte Carlo in early February. While waiting in line to try it, my first impression formed: this is a manufactured product. The Visette (their term for the goggles), the power pack, the cabling and the gloves are all stylishly designed and extremely rugged.

Initially, the Visette appears quite massive. However, it is much lighter than expected. It is made of foam and magnesium, and clamps to the head to provide optimum tracking. The injection-molded polymer glove fits around the back of your hand and attaches to your fingers through rings. Your palm is exposed, making the glove more comfortable and more hygienic in situations where many users must share the same equipment. The glove has a single bend sensor per finger, less than other products, but enough for the point and grasp functions that are typically employed. Finally, an instrument pack, attached by a steel-reinforced cable, is worn around the participant's waist.

When I tried the equipment, I was represented by a graphic robot and could move around in a simple graphic scene containing a single dwelling and one other participant. Since our actual body movements were not measured in any way, the movements of the graphic body were fabricated. Your graphic representation shuffled around in the direction you were pointing. In theory, you could talk to the other participant through the microphone in your Visette and hear them in quadrophonic sound. Unfortunately, I was unable to make audio contact with my robot companion, even though I walked through its body.

Perhaps because the graphics world was simply rendered, its responses to my movements were noticeably rapid compared to comparable goggle and glove systems. When I turned my head, there was not a long lag before the graphic world caught up with me. When I moved my hand, the graphic hand did not hang suspended for a noticeable fraction of a second before it moved in response. The system was quick enough that I was aware of a new problem: a little jitter in the world as I moved my head.

The system is faster than other systems because it is designed from the ground up. It contains several closely-coupled processors and a special, dual-channel graphics board. It also properly abbreviates the graphics experience in favor of real-time response.

In addition to this early product designed for standing participants, a new arcade system for seated participants has been introduced. At what is, for the moment, the low end, this US$40,000 product may be a smash, because it is currently unopposed. Whether it threatens high-end research systems remains to be seen. (The new $17,000 tactile feedback glove using pneumatic pressure bladders should definitely be of interest to researchers.) What is certain is that customers now have an alternative that will force them to think more clearly about their needs.

Myron Krueger, Ph.D., is President of Artificial Reality Corporation, inventor of the famous VideoPlace installation, researcher in scientific visualization, and responsible for the term "artificial reality". He is an artist and scientist who has been working on artificial reality since 1969. His book, Artificial Reality II, has just been released by Addison-Wesley.