Superpowers of Computer Graphics People

Valerie Hau
Sep 29, 2019 · 8 min read


I have this conversation a lot:

So what exactly is computer graphics? Well, let me tell you dear reader, the field of computer graphics is a world steeped in sorcery, illusion, and wonderful wizardry. Those who train in this field perform unbelievable conjuring acts and enable mind-boggling feats. Don’t believe me? Read on and see for yourself.

Wielders of Light, Water, Fire, and Earth

Explosions! Raging rivers! Crumbling towers and falling cities! These are some examples of the first, and perhaps most well-known, class of computer graphics power. You see this power at work in almost every major blockbuster film. From the rippling construction of Iron Man’s sweet nanotech suit in Avengers: Endgame¹ to the crystal blue water of Moana², all of these special effects are computer generated. So, how do computer scientists and artists create such hyperrealistic visuals?

It begins with tracing light. Literally. The whole point of computer-generated imagery is to make our eyes see something as if it is really there, right? Well then, a good place to start would be to simulate how our eyes capture information in the real world. It starts with a light source like the sun, a street lamp, or your phone screen. Light is emitted from that source and bounces around, hitting objects in the real world. Some of the light is absorbed while the rest continues to travel and bounce around. For example, if light from the sun hits a red flower petal, most of the blue and green light is absorbed, and the red light is reflected off. If that light happens to enter our eyes — voila! — we see the red flower petal.

Therefore, if we wish to render an image based on a 3D scene, we simply simulate this process in reverse. As described in Turner Whitted’s seminal paper³, we “shoot” a ray of light backwards from the camera through a pixel of our desired image plane and trace it as it hits objects and bounces around the scene. Based on the properties of the objects it hits and the light source it eventually reaches, we can calculate that ray’s contribution to the final pixel color. Now just repeat that for every pixel and we have our image!
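If you like seeing the idea in code, here is a toy sketch of that loop. To be clear, this is my own illustration, not a real renderer: one diffuse sphere, one directional light, no reflections or shadows, and names like Sphere and trace invented just for the example.

```python
import numpy as np

# A toy scene object: a sphere with a center, radius, and diffuse color.
class Sphere:
    def __init__(self, center, radius, color):
        self.center = np.array(center, dtype=float)
        self.radius = radius
        self.color = np.array(color, dtype=float)

    def intersect(self, origin, direction):
        # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0.
        oc = origin - self.center
        b = 2.0 * np.dot(oc, direction)
        c = np.dot(oc, oc) - self.radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            return None
        t = (-b - np.sqrt(disc)) / 2.0
        return t if t > 1e-4 else None

def trace(origin, direction, spheres, light_dir):
    # Find the closest object this backwards ray hits.
    hit, hit_t = None, np.inf
    for s in spheres:
        t = s.intersect(origin, direction)
        if t is not None and t < hit_t:
            hit, hit_t = s, t
    if hit is None:
        return np.zeros(3)                       # missed everything: black background
    point = origin + hit_t * direction
    normal = (point - hit.center) / hit.radius
    # Simple diffuse shading: surfaces facing the light are brighter.
    brightness = max(np.dot(normal, -light_dir), 0.0)
    return hit.color * brightness

def render(width, height, spheres, light_dir):
    image = np.zeros((height, width, 3))
    camera = np.zeros(3)
    for y in range(height):
        for x in range(width):
            # "Shoot" a ray backwards from the camera through this pixel.
            px = (2 * (x + 0.5) / width - 1) * (width / height)
            py = 1 - 2 * (y + 0.5) / height
            direction = np.array([px, py, -1.0])
            direction /= np.linalg.norm(direction)
            image[y, x] = trace(camera, direction, spheres, light_dir)
    return image

spheres = [Sphere([0, 0, -3], 1.0, [1.0, 0.2, 0.2])]     # one red sphere
light = np.array([0.0, -1.0, -1.0])
light /= np.linalg.norm(light)
img = render(160, 120, spheres, light)                   # a small 160x120 render
```

Whitted’s actual algorithm layers reflection, refraction, and shadow rays on top of this same skeleton, which is where the photorealism starts to come from.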

That’s a super brief and extremely simplified overview of raytracing, but it demonstrates the importance of physics in computer graphics. Achieving a photorealistic rendering means simulating how light moves in the real world, and simulating how light moves in the real world requires understanding how light behaves. How does light act when it hits a liquid surface? How does light travel through smoke and clouds? How does light interact with skin and strands of hair? The principle of using physics and maths to visually emulate the real world extends to simulations of natural elements like water, fire, and other phenomena. For example, Mike Seymour has an incredible article on fxguide detailing the simulation work done on Game of Thrones⁴. It is an excellent example of how people can take physical laws and bend them into the digital world. So yeah, those who work on raytracing and simulation essentially have the power of photokinesis and wield control over other elemental forces. Don’t mess with them.
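To make that “physical laws in a digital world” idea a bit more concrete, here is a minimal height-field ripple simulation driven by the 2D wave equation. Again, this is a toy of my own, not how any production water simulation works; the grid size, wave speed, and damping are arbitrary values chosen so the numbers stay well behaved.

```python
import numpy as np

# A tiny height-field "water" surface stepped with the 2D wave equation.
# h holds the surface height at each grid cell; h_prev is the previous
# step, so the difference between them acts as the surface velocity.
N = 128                       # grid resolution (illustrative)
c = 0.3                       # wave speed in grid units per step (kept small for stability)
h = np.zeros((N, N))
h_prev = np.zeros((N, N))
h[N // 2, N // 2] = 1.0       # a single "drop" disturbs the middle of the pool

def step(h, h_prev, c):
    # Discrete Laplacian: how much each cell differs from its four neighbors.
    lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
           np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4.0 * h)
    # Verlet-style update of d^2h/dt^2 = c^2 * laplacian(h).
    h_next = 2.0 * h - h_prev + (c ** 2) * lap
    return h_next * 0.998, h   # slight damping so the ripples eventually die out

for _ in range(200):
    h, h_prev = step(h, h_prev, c)
# h now holds a rippled surface a renderer could treat as geometry.
```

Run it for a few hundred steps and the single drop spreads outward into rings, exactly what the wave equation predicts. Production simulations solve much richer equations over far more data, but the spirit is the same: encode the physics, then step it forward in time.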

Master Illusionists

The second class of computer graphics power falls into the category of illusionists. Illusionists create the impression of seeing something or being somewhere that doesn’t exist. An excellent example of people with this ability is Disney’s Imagineering team; there’s hardly a better example of people seamlessly merging the digital and physical realms to deliver breathtaking experiences. Imagineers draw you fully into make-believe worlds by combining digital technology with intricate set design and stunning animatronics. Of course, they aren’t the only ones weaving illusions into our lives.

You’ve probably heard the buzz about virtual and augmented reality. Virtual (VR) and augmented (AR) realities are another form of computer graphics illusion. VR completely immerses the user in a virtual world. Headsets with built-in displays cover the user’s field of view, track the user’s head motion, and render the appropriate view to give the user the visual illusion of being in a different place. Augmented reality technically means any technology that composites digital information with the real world. Pokémon Go⁵ is an example of a mobile phone AR experience, and there are more immersive experiences available with headsets like Microsoft’s Hololens⁶ and Magic Leap’s Magic Leap One⁷. There are a ton of incredibly interesting graphics problems in the VR/AR space, stemming from the fact that as humans, we are incredibly sensitive to visual stimuli. Any deviation from the expected view, say from lag or incorrect head tracking in VR, or an improperly occluded object in AR, breaks the illusion and shatters the suspension of disbelief.
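As a taste of what “render the appropriate view” involves, here is a small sketch of the per-eye setup a VR renderer performs every single frame: take the tracked head pose, offset it left and right by half the interpupillary distance, and build a view matrix for each eye. The helper name and the IPD constant are my own illustration, not pulled from any particular headset SDK.

```python
import numpy as np

IPD = 0.064  # interpupillary distance in meters (a typical adult value, assumed here for illustration)

def eye_view_matrices(head_pos, head_rot):
    """Return 4x4 view matrices for the left and right eye given a tracked head pose."""
    views = []
    for side in (-1.0, +1.0):                      # left eye, right eye
        # Each eye sits half an IPD to the side of the head, expressed in head space.
        eye_pos = head_pos + head_rot @ np.array([side * IPD / 2.0, 0.0, 0.0])
        # A view matrix is the inverse of the eye's world transform.
        view = np.eye(4)
        view[:3, :3] = head_rot.T                  # inverse rotation
        view[:3, 3] = -head_rot.T @ eye_pos        # inverse translation
        views.append(view)
    return views

# Example: a head 1.7 m off the floor, turned 10 degrees to the left.
angle = np.radians(10.0)
head_rot = np.array([[np.cos(angle), 0.0, np.sin(angle)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(angle), 0.0, np.cos(angle)]])
left_view, right_view = eye_view_matrices(np.array([0.0, 1.7, 0.0]), head_rot)
```

If the pose fed into this computation is even a frame or two stale, the rendered view no longer matches where your head actually is, which is exactly the kind of mismatch that breaks the illusion and, in VR, can make people queasy.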

Illusionists continue to find new ways to merge the digital and the real and bring the make-believe into our physical space. There are startups working on everything from creating holographic displays (Light Field Lab)⁸ to turning plain 3D objects into vibrant interactive art (Lightform, Inc.)⁹. It seems every year a new dream becomes a reality.

Speedsters

There’s another class of graphics power that’s all about speed. The goal for those who possess this power is to deliver those beautiful pixels to your eyes as quickly as possible. The movie industry cares about this because it reduces the time and friction — and therefore cost — that goes into developing a feature film. Given the vast level of detail packed into every shot, a single frame of a movie can take hours to render. The true golden ideal is to achieve full render quality in real time, and every step toward it is an improvement: the faster you can render a frame, the more quickly artists can iterate and tweak the final result. It’s a huge boon to the creative process. But it’s not just the movie industry that cares. Games rely even more heavily on speed — I mean, what kind of interactive game makes the user wait hours, minutes, or even seconds for a single frame? Speed is literally the name of the game here.
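To put rough numbers on it, here is the frame-budget arithmetic. The ten-hour offline figure is a placeholder I picked for illustration, not a quote from any studio, but it gives a sense of the gap between offline and real-time rendering.

```python
# Milliseconds available per frame at common real-time targets,
# compared against an illustrative offline film render measured in hours.
for fps in (30, 60, 90):                  # console, PC, and VR refresh rates
    print(f"{fps:>3} fps -> {1000.0 / fps:6.2f} ms per frame")

offline_hours = 10                        # hypothetical hours to render one film-quality frame
print(f"offline  -> {offline_hours * 3600 * 1000:>12,.0f} ms per frame")
```

At 90 fps you have barely eleven milliseconds to produce a frame, which is why real-time rendering leans on every trick it can find.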

One trick is that a lot of graphics rendering is pretty repetitive — whether raytracing or rasterizing, we’re basically just doing the same steps over and over again for millions of pixels. This basic fact fueled the development of Graphics Processing Units (GPUs). As detailed in this Nvidia blog post by Kevin Krewell, GPUs are specialized hardware that is highly efficient at churning through huge workloads in parallel, which makes them far better suited than CPUs for graphics rendering¹⁰. They are the true powerhouses behind those stunning visuals on your phone and computer, and the industry continues to research new ways to optimize and increase their efficiency.
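Here is a tiny illustration of why that repetitiveness matters. NumPy on a CPU is only a stand-in for what a GPU does, but the shape of the work is the same: instead of visiting pixels one at a time, you express the shading of every pixel as one big array operation that the hardware can chew through in parallel.

```python
import time
import numpy as np

# The same per-pixel brightness computation written two ways.
h, w = 540, 960                                    # a modest resolution so the slow loop finishes
normals = np.random.randn(h, w, 3)
normals /= np.linalg.norm(normals, axis=2, keepdims=True)
light = np.array([0.0, 0.0, 1.0])

# 1) One pixel at a time, like a naive single-threaded CPU loop.
t0 = time.perf_counter()
out_loop = np.empty((h, w))
for y in range(h):
    for x in range(w):
        out_loop[y, x] = max(normals[y, x] @ light, 0.0)
t_loop = time.perf_counter() - t0

# 2) Every pixel at once, as a single data-parallel array operation.
t0 = time.perf_counter()
out_vec = np.maximum(normals @ light, 0.0)
t_vec = time.perf_counter() - t0

assert np.allclose(out_loop, out_vec)
print(f"pixel-by-pixel: {t_loop:.3f}s   all-at-once: {t_vec:.3f}s")
```

On a real GPU the same idea shows up as thousands of shader invocations all running the same little program over different pixels, which is exactly the workload the hardware is built for.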

Reality Warpers

This last classification is perhaps the most mind-boggling: there are some among the computer graphics community who have the power to warp reality itself. Take, for example, this work by Fried, et al.¹¹ on editing video. The paper¹² details a system in which, given a video of a person saying some dialogue, a user can edit the transcript of the video — say, by replacing a series of words or inserting a new word — and the system will automatically produce new video frames to match the new script and splice them into the video. The result, seen in their technical video summary¹³, looks completely seamless: there is no obvious jump or inconsistency marking the beginning and end of the altered segment. Unless I know exactly where the input has been modified, I can’t even tell that anything was changed.

There are many interesting applications of such technology. We can alter content so that it’s more to our artistic liking. We can edit out undesirable content. We can create content that would be difficult to produce in the real world. This technology is an entirely new creative medium.

However, with the improvement of such image and video editing technology, people have raised the concern that such alteration will make it difficult to discern truth from fakery, as Lisa Eadicicco points out in her article on Business Insider¹⁴. It’s not an unjustified concern. Visual storytelling is an incredibly compelling form of persuasion. Just think of how some of the best advertisements utilize creative visual content to influence what we crave and what we buy. To further compound the problem, Drew Harwell points out in his Washington Post article that although people with this reality warping power are also striving to develop ways to detect deceit, consistently identifying fakes remains a challenging and ever-changing task¹⁵. This is an incredibly important technical and ethical problem that the computer graphics community must tackle.

So there you have it: four superpowers of computer graphics people. I hope I’ve been able to give you a glimpse into this crazy, magical world. It’s a field that touches films, games, education, medicine, robotics, and so much more. At the end of the day, it’s the art of turning fantasy into reality through code, physics, math, and a solid dose of imagination.

References

  1. Avengers: Endgame. Directed by Anthony Russo and Joe Russo, Marvel Studios, 2019.
  2. Moana. Directed by Ron Clements and John Musker, Walt Disney Studios Motion Pictures, 2016.
  3. Whitted, Turner. “An Improved Illumination Model for Shaded Display.” ACM SIGGRAPH 2005 Courses, ACM, 2005, doi:10.1145/1198555.1198743.
  4. Seymour, Mike. “The Simulation of Game of Thrones.” fxguide, 28 May 2019, www.fxguide.com/fxfeatured/the-simulation-of-game-of-thrones/.
  5. Pokémon Go. Niantic, 2016.
  6. “Hololens 2.” Microsoft, www.microsoft.com/en-us/hololens.
  7. “Magic Leap One.” Magic Leap, Inc., www.magicleap.com/magic-leap-one.
  8. Light Field Lab. Light Field Lab, Inc., www.lightfieldlab.com/.
  9. Lightform. Lightform, Inc., lightform.com/.
  10. Krewell, Kevin. “What’s the Difference Between a CPU and a GPU?” Nvidia, 16 Dec. 2009, blogs.nvidia.com/blog/2009/12/16/whats-the-difference-between-a-cpu-and-a-gpu/.
  11. Fried, Ohad, et al. “Text-based Editing of Talking-head Video.” Project page, https://www.ohadf.com/projects/text-based-editing/.
  12. Fried, Ohad, et al. “Text-based Editing of Talking-head Video.” ACM Transactions on Graphics, vol. 38, no. 4, July 2019, pp. 68:1–68:14, doi:10.1145/3306346.3323028.
  13. “Text-based Editing of Talking-head Video (SIGGRAPH 2019).” Youtube, uploaded by Ohad Fried, 4 Jun 2019, www.youtube.com/watch?v=0ybLCfVeFL4.
  14. Eadicicco, Lisa. “There’s a terrifying trend on the internet that could be used to ruin your reputation, and no one knows how to stop it.” Business Insider, 10 Jul. 2019, www.businessinsider.com/dangerous-deepfake-technology-spreading-cannot-be-stopped-2019-7.
  15. Harwell, Drew. “Top AI researchers race to detect ‘deepfake’ videos: ‘We are outgunned’.” The Washington Post, 12 June 2019, www.washingtonpost.com/technology/2019/06/12/top-ai-researchers-race-detect-deepfake-videos-we-are-outgunned/.
