Surface Tension: MFA Thesis

Categories
Design Research
Prototyping
MFA Thesis
Tools
Unity3D
HTC VIVE
Adobe Illustrator
Adobe After Effects
Agisoft PhotoScan
p5.js
Topics
Embodiment
Human-Computer Interaction
GUI for VR/MR

Immersive technologies (especially VR) propose new means of displacing our bodies through technology, and photogrammetry recreates these flesh and object bodies for virtual space. SURFACE TENSION is a year-long research project that uses diverse design approaches to explore how scanning, photogrammetry, and touchscreen interactions mediate the relationship between our bodies and interfaces.

This design project takes multiple experimental approaches to answering these initial questions:

↑ Chapters of cumulative video:

Analyzing Immersion

An initial point of interest for me is the swimming pool as an analog of immersive technologies: designed as a literally "immersive" space filled with water, the pool is a deliberate space carved out for leisure and play (an analogy prompted by the observation that most current applications of VR/AR are for gaming).

Over the course of the year, I created a diagrammatic drawing/visual diary of various immersive technologies, notes, spatial layouts, and documentation and sketches of prototypes. The drawing documents different immersive set-ups, such as Microsoft HoloLens’ spatial mapping, photogrammetry, and HTC VIVE’s play space:

This fantastical landscape of sensors, actions, and objects shows that as untethered as we might imagine VR to be, it is still supported by a system of concrete things that map our surroundings, capturing and recreating our “real world.”

FACEID

The first chapter in this video thinks about immersion as recreating the physical as digital, and is a reinterpretation of Apple's FaceID. We are already scanning ourselves every day when we use our thumbprints or facial recognition to unlock our phones.

While Apple’s version uses infrared sensors to perform facial recognition from arm’s length, these prototypes ask what might result if we “scan” our faces in by directly pressing our flesh against the screen. How might scanning become less impersonal and less “objective” (as in, performed from a distance)?

↑ Video stills.

Our bodies are at times at odds with scanning devices—for instance, the phone cannot recognize your thumbprint if it is sweaty. I considered how this gesture of scanning a face directly on the phone might produce residue that is usually considered "incompatible" with technological devices, such as sweat, grease, and face oils.

↑ References from art and design; from top left, clockwise: Ana Mendieta, Apple's infrared sensor, Zach Blas, Evan Roth, Unity's ARKit (image: Jeremy Bailey), JK Keller.

Scanning objects and people into virtual space is an increasingly common practice as we create more elaborate immersive experiences, and photogrammetry is one of the most common solutions for this.

Photogrammetry involves taking multiple photos from all angles around an object or person, then processing these photos through software to generate a 3D model. For this project, I opted for more accessible modes of photogrammetry, i.e. using smartphones as the camera:

But what is each individual's process of capturing or recreating real objects? How can we "read" each person's scan?

Earlier in the project, I set up a series of user tests using a VIVE controller with a camera attached to it, as well as a corresponding scene in Unity, to see if I could physicalize the trace or action of photogrammetry. I asked each user to scan a bicycle and a coffee cup.

↑ Images of the low-fidelity prototyping tool.

↑ One example video showing the real-world action and the VR trace.

As shown in the video, I designed multiple systems for visualizing the user interaction, including real-time documentation, symbols to serialize their scanning patterns, and a script that leaves a trace behind the VIVE controller's movements.
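The core of that trace behavior can be sketched quite simply: keep a sampled controller position only when it has moved far enough from the last kept point, producing a polyline "trail" of the scanning motion. The sketch below is in JavaScript for illustration (the actual project used a Unity script; the names here are my own assumptions):

```javascript
// Euclidean distance between two 3D points.
function distance(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Build a trail from per-frame position samples, keeping only points
// that have moved at least minDistance from the last kept point.
function buildTrail(samples, minDistance) {
  const trail = [];
  for (const p of samples) {
    const last = trail[trail.length - 1];
    if (!last || distance(last, p) >= minDistance) {
      trail.push(p);
    }
  }
  return trail;
}

// Example: a slow sweep around an object, sampled every frame.
const samples = [
  { x: 0,    y: 1,   z: 0 },
  { x: 0.01, y: 1,   z: 0 },  // barely moved: skipped
  { x: 0.2,  y: 1,   z: 0 },
  { x: 0.4,  y: 1.1, z: 0.1 },
];
console.log(buildTrail(samples, 0.1).length); // 3 kept points
```

Thinning the samples this way keeps the rendered trace legible even when the controller lingers in one spot, which matters when the trail is meant to be "read" as a record of each user's scanning pattern.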

SKINSPACE

SKINSPACE is a slow pan in Unity in, around, and through a 3D model of a face. This is a closer exploration of a 3D model and its texture as inhabitable space, because one of the affordances of immersion is the mismatch of embodied and experienced scales.

↑ Video stills.

Much of the design research in Surface Tension centers on photogrammetry as a process, and on the materials generated by photogrammetry. In addition to exploring the interactions and gestures that scanning allows, my design studies also looked at texture maps—the skin that lies on the model mesh.

↑ Screen view of model being generated in Agisoft PhotoScan, with source photos mapped onto model.

↑ Photogrammetry models of faces in Unity3D. The choice of faces rather than other arbitrary objects is deliberate, since faces question how scanning appropriates biometrics and potentially displaces identity.

↑ Early studies looking at aesthetics of texture maps.

∞FEED

∞FEED looks at how immersion is a result of activity, rather than technology. If we think of immersion as a mindset or state created by the interactions we perform on our phones, what might this look like? This video prototype is a means of bringing immersive activity down to the everyday, to our smartphone gestures.

↑ Video stills.

In this short vignette, the screen remains a very digital green—the screen is left open to imagination, and viewers can easily project an Instagram feed, text message stream, Tinder profiles, etc. onto that screen.

↑ Diagramming basic smartphone gestures, which the actor performed in an endless stream.
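For illustration, the basic gestures in such a diagram could also be encoded as data, classifying each touch from its start point, end point, and duration. This is a hypothetical JavaScript sketch; the names and thresholds are my own assumptions, not part of the project:

```javascript
// Classify a single touch as one of the basic smartphone gestures.
// Coordinates are in screen pixels; duration is in milliseconds.
function classifyGesture(start, end, durationMs) {
  const dx = end.x - start.x;
  const dy = end.y - start.y;
  const dist = Math.hypot(dx, dy);
  if (dist < 10) {
    // The finger barely moved: tap or long-press, depending on duration.
    return durationMs < 500 ? "tap" : "long-press";
  }
  // Otherwise a swipe, along whichever axis dominates.
  if (Math.abs(dx) > Math.abs(dy)) {
    return dx > 0 ? "swipe-right" : "swipe-left";
  }
  return dy > 0 ? "swipe-down" : "swipe-up";
}

// e.g. an endless scroll reads as repeated upward swipes:
console.log(classifyGesture({ x: 160, y: 500 }, { x: 162, y: 200 }, 120)); // "swipe-up"
```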

SURFACE/INTERFACE

Water, skin, fabric, screen—SURFACE/INTERFACE explores the surfaces of immersion, and speculative proposals for immersive interfaces.

↑ Video stills.

This chapter shows a few hands performing gestures (like the smartphone gestures in ∞FEED) stroking and caressing billowing fabric. The fabric itself is a simulation using a photogrammetry texture map.

↑ Early concept illustration collage: what if the interface responds to interactions, to direct touching, e.g. it smears and melts?

What this study argues for is the parallel between scanning and touching. While touching is already an everyday behavior we perform on our phones, scanning could be an emergent behavior as VR/AR/MR becomes more widely used. Both acts can be carried out in an almost intimate, careful way. Could intimacy and care become behaviors we design into new immersive interfaces?

GOOEY¹

The video ends with GOOEY, a chapter on gooeyness as a materiality for immersion. These prototypes of gooeyness are designed in contrast to linear modes of entering immersive spaces, i.e. you're in the "real world," you put on a head-mounted display, and (bam!) you're immersed. Can this barrier between physical/digital be less defined, perhaps less normative?

↑ Video stills.

Early experiments used agar and alginate to create molds and casts of existing interfaces, such as iPhones. The resulting texture is wet, cold, soft, and gooey: observing users interacting with these prototypes, I noticed that there was generally more care, intrigue, repulsion, and gentleness in their approach (although some users were drawn to interact violently with the material).

↑ An interactive prototype for the final installation.

Summary + Reflections

Surface Tension explores multiple facets of immersion.

The project is by no means conclusive, but remains open-ended, as these research questions and design outcomes can lead to multiple applications.

One example is how Surface Tension is in dialogue with sensory design. I focused my explorations and studies around VR/MR and photogrammetry as these tools potentially open up new ways of engaging and creating physically, or sensorially, for the digital world.

A question I am keen to keep exploring is: how can we design for sensory (and inclusive) experiences when working with primarily visual or screen interfaces?

The paper counterpart to this project can be read here.

Exhibitions

↑ Screenshot of SURFACE TENSION for online exhibition #cyborgs for Peripheral Forms.

¹ Inspired by American Artist's essay, "Black Gooey Universe".