Gaze to the Stars

Type

Installation

Location

Boston, US.

Year

2025

Description

Gaze to the Stars is a participatory media installation that transforms MIT’s Great Dome into a collective storytelling device. Bridging computational design, spatial projection, artificial intelligence (AI), and human-computer interaction (HCI), the project invites participants to share personal narratives inside a pod that captures close-up recordings of their eyes. These intimate expressions are then mapped onto the dome, turning private reflection into public display. Gaze to the Stars resignifies an iconic institutional landmark as a vessel for vulnerability, resilience, and shared cultural memory.

Role

As a graduate researcher in the MIT Media Lab's Critical Matter Group, I led a small team of researchers in the computational design and digital fabrication of the pod enclosure. I also contributed to the message-encoding pipeline and to research publications on the project.

Authors

Critical Matter Group, MIT Media Lab

Credits

Behnaz Farahi

Julian Ceipek

Sergio Mutis

Suwan Kim

Chenyue Dai

Frank Cong

Haolei Zhang

Yalou Wang

Nebus Kitessa

Krystal Jiang

Linda Xue

Yaqi Li

JD Hagoof

Milin T.

Pria Sawhney

Jiaji Li

Technical Description

Pod Design

The pod consists of a spherical shell containing a mirrored dodecahedron with an embedded 4K microscope, iPad display, and directional audio. Components are arranged for precise eye alignment and immersive audiovisual feedback.

Enclosure Design

A surrounding structure made of 150 parametrically arranged polycarbonate tubes creates acoustic and visual isolation. Integrated lighting and dry ice mist enhance the sensory experience and delineate the interaction space.
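The parametric arrangement described above can be sketched in a few lines. The enclosure's actual dimensions and layout rules are not published, so everything below except the tube count of 150 is an illustrative assumption: tubes placed on staggered concentric rings around the pod.

```python
import math

def tube_layout(n_tubes=150, radius=1.2, rings=3, ring_gap=0.08):
    """Place tube centres on concentric rings around the pod.

    Hypothetical parameters: n_tubes matches the count in the text;
    radius, rings, and ring_gap are assumptions for illustration.
    """
    per_ring = n_tubes // rings
    centres = []
    for r in range(rings):
        ring_radius = radius + r * ring_gap
        # Stagger alternate rings by half a step so tubes interleave,
        # tightening the visual/acoustic screen.
        offset = (math.pi / per_ring) * (r % 2)
        for i in range(per_ring):
            theta = 2 * math.pi * i / per_ring + offset
            centres.append((ring_radius * math.cos(theta),
                            ring_radius * math.sin(theta)))
    return centres

centres = tube_layout()  # 150 (x, y) tube positions
```

A script like this would feed a CAD or fabrication toolchain with tube positions; varying `ring_gap` or the stagger changes the density of the screen.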


Pod Interaction (ML)

Inside the pod, users speak with an AI embodied as the MIT Dome. The system guides a meditative, theme-driven conversation to elicit personal stories.

Eye Segmentation

Each eye video is processed using AI-powered segmentation to isolate the iris. This creates a dynamic canvas for visual storytelling.
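Once a detector (whether a learned segmentation model or a classical circle fit) has located the iris, isolating it reduces to masking each frame with a disc. The sketch below assumes the centre and radius are already known; the installation's actual model and parameters are not published.

```python
import numpy as np

def iris_mask(frame, cx, cy, r):
    """Zero out everything outside the iris disc.

    cx, cy, r are assumed to come from an upstream detector
    (e.g. a segmentation model run on each video frame).
    """
    h, w = frame.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    out = np.zeros_like(frame)
    out[mask] = frame[mask]  # keep pixels inside the disc only
    return out, mask

# Toy frame: uniform grey, iris assumed at (50, 50) with radius 20.
frame = np.full((100, 100), 200, dtype=np.uint8)
iris, mask = iris_mask(frame, 50, 50, 20)
```

The masked region then serves as the "dynamic canvas" onto which the encoded stories are composited.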

Story Encoding

Narratives are distilled, translated into Braille, and animated as orbiting point clouds within the segmented iris—each story encoded in light.
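The Braille-to-orbit step can be illustrated with a small sketch. The dot patterns below are standard grade-1 Braille for a handful of letters; the orbit geometry (angular spacing, nested radii) is an assumption, since the installation's actual animation parameters are not published.

```python
import math

# Grade-1 Braille dot numbers for a few letters. Dots are numbered
# 1-6: left column top-to-bottom (1,2,3), then right column (4,5,6).
BRAILLE = {
    "s": {2, 3, 4},
    "t": {2, 3, 4, 5},
    "a": {1},
    "r": {1, 2, 3, 5},
}

# Dot number -> (column, row) position within a Braille cell.
DOT_POS = {1: (0, 0), 2: (0, 1), 3: (0, 2),
           4: (1, 0), 5: (1, 1), 6: (1, 2)}

def orbit_points(word, radius=1.0, cell_arc=0.3):
    """Lay each letter's Braille dots along a circular orbit.

    Each letter gets an angular slot; cell rows map to slightly
    nested orbits so the six-dot cell reads as a small cluster.
    """
    points = []
    for i, ch in enumerate(word):
        base = i * cell_arc  # angular slot for this letter's cell
        for dot in sorted(BRAILLE[ch]):
            col, row = DOT_POS[dot]
            theta = base + col * (cell_arc / 3)
            r = radius - row * 0.05  # inner orbits for lower rows
            points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

pts = orbit_points("star")  # one (x, y) point per Braille dot
```

Animating the base angle over time makes the clusters orbit within the segmented iris, which is the effect the text describes.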

Projection

Final animations are composited and projected onto the Dome using MadMapper. A livestream overlay displays each participant's ID, name, and encoded message, extending the installation to digital audiences.

2-Minute Project Overview
