Inspired by Situationist principles of re-contextualizing lived experience, this project uses machine learning to classify urban sounds and to highlight the juxtaposition between digital and physical spaces, and the notions of distance within each. It aims to create an environment where movement through a physical space enables interaction with, and navigation of, a latent algorithmic space, in a collaborative and performative context.
It draws on roughly 4,000 collected, filtered, and tagged urban sound samples, using their tags as inputs to an NLP model (Stanford's GloVe) that places each sample in a high-dimensional vector space. There, the samples form distinct neighborhoods and clusters: a digital morphology that mirrors the idea of neighborhoods in an urban, physical setting. Walks are initiated within these neighborhoods of sounds and then passed to SuperCollider, which recomposes the sound fragments into generative compositions; each neighborhood thus acquires its own sonic signature. A future goal for this framework is to generalize its functionality so that it can be applied to other cultural data artifacts, whether manually tagged or not.
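The pipeline above, tags embedded into a vector space and then traversed by walks, can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the tiny hand-made vectors stand in for real GloVe embeddings, and all sample names and tags are hypothetical.

```python
import numpy as np

# Stand-in for GloVe: in the real project each tag would be looked up
# in a pretrained embedding (e.g. 50- or 300-dimensional vectors).
TAG_VECS = {
    "traffic":  np.array([0.90, 0.10, 0.00]),
    "horn":     np.array([0.80, 0.20, 0.10]),
    "engine":   np.array([0.85, 0.15, 0.05]),
    "birds":    np.array([0.10, 0.90, 0.20]),
    "park":     np.array([0.20, 0.80, 0.30]),
    "fountain": np.array([0.15, 0.70, 0.60]),
}

def embed(tags):
    """A sample's position is taken as the mean of its tag vectors."""
    return np.mean([TAG_VECS[t] for t in tags], axis=0)

# Hypothetical tagged samples; similarly tagged ones end up near each
# other, forming the "neighborhoods" described above.
samples = {
    "cars_01.wav":  ["traffic", "horn"],
    "cars_02.wav":  ["engine", "traffic"],
    "park_01.wav":  ["birds", "park"],
    "water_01.wav": ["fountain", "park"],
}
names = list(samples)
points = np.stack([embed(samples[n]) for n in names])

def walk(start, steps, k=2, rng=None):
    """Random walk over the k-nearest-neighbor graph: each step moves
    to a randomly chosen nearby sample, staying within a neighborhood."""
    rng = rng or np.random.default_rng(0)
    path, i = [start], names.index(start)
    for _ in range(steps):
        dists = np.linalg.norm(points - points[i], axis=1)
        nearest = np.argsort(dists)[1:k + 1]  # skip the sample itself
        i = rng.choice(nearest)
        path.append(names[i])
    return path

print(walk("cars_01.wav", steps=4))
```

In the actual system, each step of such a walk would be forwarded to SuperCollider (e.g. as an OSC message naming the sample), which layers and recomposes the fragments into a generative piece.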
On one end of the spectrum, I hope to see this used as a tool that enables a convivial human/AI context for work and inspiration, e.g. for musicians and AV artists. On the other, it could serve as a tool to examine variance and highlight bias within algorithmic models. From a post-structuralist perspective, it investigates the following provocation: can heterogeneously tagged data artifacts that sit on opposite sides of a latent space yield the same sensed experience when encountered or invoked in physical space?
Some sonic artifacts from these experiments can be found here.