This is where my documentation and process will live over the next few weeks as I explore my thesis research.
How can digital avatars and virtual worlds open up new ways to deliver messages, stories, and information? How do they become reflections of the real world, creating feedback loops with each other?
This focus has grown out of a few classes at ITP that I found myself completely immersed in (Performative Avatars, Motion Capture, and Computational Approaches to Narrative). The skills and conceptual frameworks I picked up there have led me to explore how I can apply them to reflect on and critically examine the present world we live in. I am curious about avatars as extensions of digital human communication and representation, and I want to explore the uncanny valley and digital anthropomorphism. What does it mean to use yourself as a dataset, creating a virtual likeness of you? What are the tradeoffs between staying true to your physical appearance and the idealized self-representation that lives virtually, forever? How do we address these emerging virtual worlds and the digital representations of ourselves that reside in them? The internet has become a new form of social contact and a laboratory of experimentation, allowing constructions and re-constructions of the self that characterize and explore our different identities and personalities.
At this point, my thesis will be a series of experiments with a possible installation.
La Turbo Avedon
Neural Story Generator
Avatars and Computer-Mediated Communication
Interactive Realistic Digital Avatars
How to Build a Data Set For Your Machine Learning Project
I plan to build three different digital versions of myself: one captured with a Structure Sensor, one made with a photo/headshot morph in iClone, and one built through photogrammetry, using Agisoft Metashape to combine the photos into a 3D model. Each approach will have its pros and cons, and I am interested in how representing oneself with different technologies produces subtly different results. I will also rig each avatar.
The second part of production would be training/fine-tuning an NLP model (using GPT-2, or possibly a chatbot) that I would then link to each rigged avatar in Unreal. At the very least I would have one avatar linked this way, but my goal is to link each avatar to an NLP model fine-tuned on a slightly different dataset, in order to show how datasets can shape different outcomes and, therefore, decisions.
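The dataset-dependence idea can be demonstrated even before bringing in GPT-2. The sketch below is a toy stand-in, not the actual GPT-2 pipeline: a minimal bigram Markov model where the same seed prompt produces different continuations depending on the training text. The `formal` and `casual` corpora are invented examples standing in for the "slightly different datasets" each avatar would be trained on.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Build a bigram table: each word maps to the list of words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, seed, length=8, rng=None):
    """Walk the bigram table from a seed word, sampling each next word."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Two toy "datasets" with different tones (invented for illustration).
formal = "the avatar is a representation the avatar is a mirror"
casual = "the avatar is me lol the avatar is my twin"

model_a = train_bigram_model(formal)
model_b = train_bigram_model(casual)

# Same seed word, same random state: the continuation still diverges,
# because the training data differs.
print(generate(model_a, "the", rng=random.Random(0)))
print(generate(model_b, "the", rng=random.Random(0)))
```

A fine-tuned GPT-2 works very differently under the hood, but the same principle applies: holding the prompt fixed while varying the training corpus makes the data's influence on the output visible, which is the point each avatar is meant to embody.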