
Environments, Animals, Technology 

B L O G  D O C U M E N T A T I O N


P A R T  I  -  S O U N D

Section A: Thoughts on 'Listen to the World' by New York Times

Before engaging with the New York Times audio feature, I prompted myself with a few questions:

  • What sounds did I hear growing up?

  • How did sound affect my experience?

  • Where have I traveled and what did I feel?


Growing up in Houston, Texas, I had to consider the residential space that I was in. Before moving to Texas at the age of seven, I had lived in four places: Effingham, Illinois; Greenville, South Carolina; Los Angeles, California; and Neenah, Wisconsin. With the exception of Effingham, I had grown up in the suburbs. And although moving from one dense neighborhood to another might’ve felt familiar, each environment’s geography produced distinct sounds and experiences. The wind chimes of a Neenah winter were different from the steady December breeze of Houston. Sound played a crucial part in perception. And it still does.


For the most part, Houston would transition from harsh summers to chilly winters, with no buffer in between. The suburb of Katy (part of the Houston area, and where I grew up) was always met with quietness. I’d hear cars passing, lawns being mowed, and the stillness of the white walls I inhabited. It was as if I lived in a quiet getaway bubble hidden from reality. My family and I would travel frequently to nature-dense locations: Yellowstone, Hawaii, Alaska, and many more. There, I’d be met with sounds that put me at a momentary ease: water flowing, leaves rustling, and feet marching. For the longest time, my auditory perception would transition between these two environments – home and adventure – until I moved to Austin, Texas and eventually New York, where sound served first as a constant obstruction and now as a comfort.

 

So, when I first engaged with the New York Times audio feature, hearing ‘lava’ for the first time took me aback. “Of course I’ve heard this sound before!” I told myself before pressing play again. But no, I had not. Instantly, my auditory perception shifted. What haven’t I heard? And through this hearing, what haven’t I seen? As I continued through the experience, it was as if I had been transported to a different space altogether. Well, I guess that was the intent, wasn’t it? I thought about sound as a language in the Maromizaha Forest, Madagascar story and was impressed by the animals’ ability to communicate through sound. But isn’t that how humans communicate also? Is it more of a semantic sound, or have we defined it as semantic only because we understand it in its entirety? How do animals perceive us, and how is that perception affected by temporal resolution? For example, although New York City is considered one of the noisiest cities in the world, the infamous New York rat is at ease. How, in turn, is a rat’s perception of a place adjusted based on its auditory input? How does that perception affect its overall story? Maybe I’m heading towards more questions than I have answers to at the moment!
 

I believe, however, that many of these curiosities can be met with the power of storytelling and sound’s role in it. After all, many of the sounds in cinema are inspired by and acquired from natural phenomena, as cited in the Northern Italy story. All of this made me think of the art of the narrative – narratives inspired by reality and the abstract. How do the people in the Iceland story connect with their environment? If humans are a result of nature, then we must ask why we feel there is a separation between the two.


This all brings me to my storyboard for an interactive sound experience augmenting human experience with inevitable nature. How can a story be told that augments technology with its parent, nature? Can we see technology as a result of animal intuition? Can we see animal intuition as a result of nature? Then, is technology a result of nature?
 

Section B: Storyboarding Sound


P A R T  II  -  S O U N D  H A B I T A T

P A R T  III  -  M O T I O N  A S  A  P R A C T I C E

Summarizing Max Forum's Event on the Art of Movement Mapping

Somehow I always end up at 42nd Street, or at least quite close to it. But this time, the flashing cacophony of LED screens served as a perfect backdrop for the momentary journey I was about to embark on – learning about motion as a practice...


7:00 pm, the event begins. I attended the Art of Movement Mapping at ONX Studio, a partnership between NEW INC and Onassis USA. The event featured three key figures: Lisa Jamhoury, Mimi Yin, and Dr. Ryan York as moderator. All of them had distinct and qualified backgrounds for the topic at hand – motion capture as both a practice for art and science. Much of the discussion pertained to body mapping with topics including asymmetry, body structure, and how our bodies are represented in the digital world.


The first project that caught my eye was Lisa Jamhoury’s, in which a body’s motion was captured to produce generative art. The specific project, titled Maquette, is a prototype under development for the MAXlive 2023-24 festival. Having experimented with motion capture myself, I’ve always seen it through one strict lens – tracking real-time human movement for animated characters. Using this technique to create interpretive art was completely new to me. As a result, I dove deeper into Lisa’s work, finding that many of her projects share one central theme: using creative art to learn more about human kineticism. At the event, Lisa discussed using Omni Tracking to collect data about people’s movement within a digital space. Based on this data, the designer can better understand how a person is perceived in the virtual world.


Maquette by Lisa Jamhoury 

This leads into Mimi Yin’s work on the interaction of movement. In her project showcase, aptly titled The Interaction of Movement, Mimi designs a space that studies the relationship between human interaction and generative form. This interpretive dance piece further abstracted my understanding of human-spatial interaction – how does the body move in relation to space, and how can an abstraction be made practical to better understand movement?


Overall, my knowledge of motion capture was heightened by this event. My intent is to take what I absorbed from my time near 42nd Street and apply it to my own practice of computational media.

This ties back to the CDP Colloquium project I had done for my final summer crit. Although I'm not continuing with motion capture as a component of my capstone project, the findings from Thursday's event will better inform my journey into motion capture moving forward!

P A R T  IV  -  H U M A N S  A S  A N I M A L S

Measuring the Morning Routine With Data


P A R T  V  -  N A T U R A L  A L G O R I T H M S

Abstracting Pigeon Biology onto Streets/Sidewalks

The algorithm will be designed in Rhino+Grasshopper.

Reference images: ANAE-positive peripheral blood cells in the domestic pigeon; 3D graph animation.

Steps (inspired by my daily walks to Avery Hall):
1) The user walks on the streets/sidewalks of Manhattan, or in pigeon-dense locations.
2) They open the AR application.
3) They view a cellular automaton of the animals that occupy that specific space (i.e., pigeons on sidewalks).
4) Information is overlaid onto the cellular generation.

Tools:
Rhino+Grasshopper
Blender
AR software (location specific)

P A R T  V I  -  A R  P R O P O S A L

Abstracting Pigeon Biology onto Streets/Sidewalks

My plan is to make an augmented reality (AR) experience that abstracts pigeon cells onto their most populous place(s) in Manhattan. The purpose of this experience is to inform the user about pigeon biology and to piggyback off of pigeons’ recent contribution to meme culture (i.e., Where do pigeons come from? Are pigeons even real? What even is a pigeon?). Through this experience, I want to tell the story of pigeons as an invasive species and how they traveled to North America, with overlaid infographics. I plan on approaching this project with three possible outcomes:

  1. Placing the pigeon cell automata onto pigeon-dense places in Manhattan

  2. Creating a scavenger hunt with the pigeon cell automata. With every discovery, the iteration grows larger.

  3. Placing the automata in one distinct location.
     

Tools/Software

The tools/software needed for this project are Rhino+Grasshopper to create the cellular automata, Blender for adding colors and possibly manipulating the geometries further, and location-based AR software (TBD).

Current Progress

The pigeon cell geometry will be a loose abstraction of how pigeon cells appear under a microscope.

ANAE-positive peripheral blood cells in the domestic pigeon

Below are captures of the pigeon cell automata I created in Rhino+Grasshopper. The plan is to take this abstraction, bake out its components, and experiment with it in an AR software. Seen are 1000 iterations of an octagonal prism generated from three surface points.
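As a point of reference, here is a minimal GhPython-style sketch of that growth logic. It is not the actual Grasshopper definition (which is node-based); the seed coordinates, prism radius, and height are illustrative placeholders.

```python
# Minimal sketch, assuming a GhPython component with rhinoscriptsyntax
# available. The real definition is a Grasshopper node graph; the seed
# points, radius, and height here are placeholders.
import math
import random
import rhinoscriptsyntax as rs

def octagon_points(center, radius):
    """Eight points around a center, closed back to the start."""
    cx, cy, cz = center
    pts = [(cx + radius * math.cos(2 * math.pi * i / 8.0),
            cy + radius * math.sin(2 * math.pi * i / 8.0),
            cz) for i in range(8)]
    return pts + [pts[0]]

def add_prism(center, radius=0.4, height=2.5):
    """One elongated octagonal prism, echoing the cylindrical cell form."""
    outline = rs.AddPolyline(octagon_points(center, radius))
    return rs.ExtrudeCurveStraight(outline, (0, 0, 0), (0, 0, height))

seeds = [(0, 0, 0), (15, 5, 0), (5, 20, 0)]   # the three "surface points"
iterations = 1000                             # e.g. the largest bake

prisms = []
for i in range(iterations):
    sx, sy, sz = seeds[i % len(seeds)]
    spread = 6.0 * i / float(iterations)      # cluster slowly spreads outward
    center = (sx + random.uniform(-spread, spread),
              sy + random.uniform(-spread, spread),
              sz)
    prisms.append(add_prism(center))

a = prisms   # Grasshopper output parameter
```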

P A R T  V I I  -  P I G E O N  C E L L  A U T O M A T A  A R  I N S T A L L A T I O N

Abstraction of pigeon cells onto the built environment. How can we be one with the pigeons? (;
https://adobeaero.app.link/Hghyb79Ooub

As noted in “Part VI,” the automata sequence in Rhino+Grasshopper is a loose abstraction of pigeon cells. The elongated octagonal prisms are based on the cylindrical form of their cells. From there, 1000 random iterations are generated from three specified surface points (infinite generations exist for this algorithm; I just chose to render one of the automata models). The process of creating the AR map included baking out the automata in Rhino+Grasshopper at different iteration numbers to represent observed pigeon density in Manhattan (there is very little data on pigeon movement, so this was purely speculative and based on human observation), recoloring the render in Blender, and using Adobe Aero to create the AR experience. The mapping is listed below, followed by a small sketch of how the counts could drive the bake step:

  • 1000 octagonal prisms = Central Park

  • 800 octagonal prisms = Bryant Park

  • 600 octagonal prisms = Washington Square Park

  • 400 octagonal prisms = High rooftops

  • 200 octagonal prisms = Miscellaneous areas
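A plain-Python sketch of that mapping – automaton_centers here is a hypothetical stand-in for the Grasshopper growth step, and the dictionary keys are labels rather than geocoded locations:

```python
# Hypothetical bake plan: how many prisms to grow per mapped location.
# The counts mirror the speculative density list above; the real bake
# happened inside Rhino+Grasshopper before Blender and Adobe Aero.
import random

DENSITY_TO_PRISMS = {
    "Central Park": 1000,
    "Bryant Park": 800,
    "Washington Square Park": 600,
    "High rooftops": 400,
    "Miscellaneous areas": 200,
}

def automaton_centers(count, spread=6.0, seed=None):
    """Stand-in for the growth step: jittered prism centers on a plane."""
    rng = random.Random(seed)
    return [(rng.uniform(-spread, spread), rng.uniform(-spread, spread), 0.0)
            for _ in range(count)]

for place, count in DENSITY_TO_PRISMS.items():
    centers = automaton_centers(count, seed=place)
    print(place, "->", len(centers), "octagonal prisms to bake")
```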


The point of this installation is to immerse users amongst pigeons and advocate for a more cordial human-pigeon interaction. Instead of peering down or up at pigeons as we usually do, how can humans co-exist amongst these animals? Rather than having a combative relationship, how can we live with them? That is why the installation sits at eye level – to immerse the user alongside them. By moving around the map, users can view the different points at which the automata are placed onto pigeon-dense locations in Manhattan. The buildings are green to symbolize nature, and the pigeon cells are light blue, well, to represent the color of pigeons.


Questions that could further develop this installation include: how can users interact with the automata across the physical-digital realm, and what can they learn from interacting with these abstractions?
 


P A R T  V I I I  -  D O N N A  H A R A W A Y
S U M M A R Y

“There are a lot of ways to begin to think about the need for situated knowledges.” This was the first statement that stood out upon listening to Donna Haraway’s interview on the For The Wild podcast. “Situated knowledge” is a meta term defined as “the idea that all forms of knowledge reflect the particular conditions in which they are produced, and at some level reflect the social identities and social locations of knowledge producers [1].” In the context of her conversation, Haraway discusses approaching this term as “work[ing] out a notion of situated knowledge to affirm strong truths, not weak truths. Truths that you would live and die for. Truths that are about who lives and who dies, and whether the planet will flourish or not [2].” Speaking in metaphor, Haraway grounds situated knowledges in objective truths such as the rapidity of climate change and coral reef decay, eventually shifting the conversation to her renowned 1985 essay, A Cyborg Manifesto – a metaphor on socialist feminism.


Recounting her work, Haraway touches on cyborgs coming into the world “in particular historical conjunction” and “injunctions” [2], circling back to her interest in situated knowledges being grounded in objectivity. That said, this objective truth is communicated once again through the lens of a metaphor. In A Cyborg Manifesto, Haraway argues that the lines between human and machine have been blurred, making the rigid form of the “cyborg” definition obsolete. She then compares this to traditional notions of feminism, criticizing their rigidity and calling for the movement to move beyond its then-current norms, such as gender.


Many critics of her work have argued that Haraway outlines a dichotomy of the “cyborg feminist” and the “goddess,” referring to her line, “I would rather be a cyborg than a goddess” [3]. Therefore, I end with the following questions:

  • How does Donna Haraway’s work correlate with Kimberlé Crenshaw’s writings on intersectionality? Does it?

  • Is Haraway truly advocating for a binary approach to feminism, or does the cyborg metaphor invite varying syntheses?

P A R T  I X  -  1 0 x 1 0  B R A I N S T O R M I N G 
F O R  F I N A L

Ideas


Variations


P A R T  X  -  F I N A L  P R O P O S A L

P A R T  X I  -  P R O T O T Y P E  &   S T O R Y B O A R D

P A R T  X I I  -  P R O J E C T  R E F L E C T I O N

There can be something ugly in beautiful things.

The leading research question for my final project is as follows: “How can motion data link us to the narrative of wildlife affected by chemical pollutants?” In this scenario, the context is wildlife affected by human-made oil spills in the oceanic biome.

 

With this project, my aim was to create an installation that allowed users to empathize with wildlife affected by hydrocarbon poisoning – hydrocarbons are a component of crude oil that, when ingested, cause hallucinations, neurologic problems, lung irritation, coughing, choking, and shortness of breath [1]. I also studied photographs of animals affected by crude oil disasters; beyond these pictures, however, I wanted to create an experience that allowed humans to see through the eyes of these animals. From studying the effects of hydrocarbon poisoning on wildlife, two words came to mind: weight and drag. How, then, could “weight and drag” be simulated in the context of this situation? This presented the opportunity to use motion data as a method of visualizing and understanding the experience. Through motion, my intent was to simulate a “discombobulating” experience that “dragged the weight of the user” on screen and served as a metaphor for environmental awareness. My goal was to make this metaphor as playful as possible so that, through the medium of play, users gained a heightened understanding of the animals’ plight.


My first test was to experiment with motion capture as a method of engagement. I traveled to NYU’s motion capture studio in the Brooklyn Navy Yard, and Todd, the director of the studio, let me play around in the mocap suit. Although the experience was exhilarating, I felt that there was an opportunity to curate my metaphor without the technical glitches and constraints of traditional motion capture – I wanted to create something that was approachable and accessible at a high level.


Therefore, I resorted to testing my idea in TouchDesigner – a node-based visual programming environment for working with real-time motion data. Admittedly, however, my first tests were in Blender 3D. And although those visuals were eventually scrapped, sketching the early stages of the project there helped me storyboard and flesh out my idea.


My final idea was to use my laptop camera to simulate “weight and drag” in TouchDesigner. When I entered the software, I used some of the previous images of oil spills and wildlife as references. I used the discoloration of water when it interacts with crude oil as a source for simulating drag on screen. I then used an instancing node to repeat the user’s body on screen as they moved around the room.
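To give a rough sense of that effect outside TouchDesigner, here is a small Python/OpenCV sketch of a webcam feedback loop. It only approximates the idea – the actual piece is a TouchDesigner node network, and the blend weights and tint values below are illustrative assumptions.

```python
# Rough stand-in for the "weight and drag" effect: a webcam feedback
# loop in Python/OpenCV (assumes opencv-python and numpy are installed).
import cv2
import numpy as np

cap = cv2.VideoCapture(0)            # laptop camera
ok, frame = cap.read()
if not ok:
    raise RuntimeError("could not open camera")

trail = frame.astype(np.float32)     # accumulation buffer holding the "drag"

while True:
    ok, frame = cap.read()
    if not ok:
        break
    current = frame.astype(np.float32)
    # heavy weighting toward the buffer makes new movement feel weighed down
    trail = cv2.addWeighted(trail, 0.92, current, 0.08, 0.0)
    # push the image toward murky, oil-slick tones (crude discoloration)
    tinted = trail * np.array([0.9, 0.75, 0.55], dtype=np.float32)
    cv2.imshow("weight and drag (sketch)", np.clip(tinted, 0, 255).astype(np.uint8))
    if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```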


The project was then presented in the final gallery walkthrough for class. At the time, I presented the image-instance file as the final product. The feedback for the experience was exactly what I was hoping for; words such as “playful” and “discombobulating” were used to describe the activity. However, a lack of context was apparent, with the “blurry screen” being the final product. What did this experience represent? How could it be associated with the context of hydrocarbon poisoning? I wanted the experience to be a metaphor, yet it was as if there was no starting point from which to dissect its narrative.


After the final presentation, I refined the project by creating another visual that could play in tandem with the “blurry screen.” I still wanted it to serve as an illusory abstraction, but this time give users a starting point for what was being represented. In TouchDesigner, I created an image/video outline effect that traced moving objects in the frame – in this case, a stock video of gulls interacting with a lake, which was then manipulated by the TouchDesigner nodes. I toyed with the coloring filters and matched them to the ocean discoloration mentioned earlier. “There can be something ugly in beautiful things” was my personal leading quote for this singular story.
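As a stand-in for that outline pass, a Python/OpenCV sketch might look like the following. The clip name gulls.mp4 is a hypothetical placeholder, and the thresholds and line color are illustrative rather than the values used in the TouchDesigner network.

```python
# Sketch of the outline pass: frame differencing plus edge detection
# traces whatever moved between frames (e.g. the gulls), then recolors
# the resulting lines toward an oily ocean palette.
import cv2
import numpy as np

cap = cv2.VideoCapture("gulls.mp4")          # hypothetical stock clip
ok, prev = cap.read()
if not ok:
    raise RuntimeError("could not open video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev_gray)    # where the gulls moved
    edges = cv2.Canny(motion, 40, 120)       # trace outlines of that motion
    prev_gray = gray

    outline = np.zeros_like(frame)
    outline[edges > 0] = (180, 140, 60)      # BGR: murky teal line color
    cv2.imshow("outline pass (sketch)", outline)
    if cv2.waitKey(30) & 0xFF == 27:         # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```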

Installation diagram for the refined experience

My ideal scenario for this project would be to present it as an installation similar to the diagram seen above. This way, users are engaged with its discombobulating experience from all directions and perspectives.
