30 Mar 2017

self.

The art installation [self.] was created by Oeyvind Brandtsegg and Axel Tidemann to explore artificial intelligence and its relation to humans. It is a robotic head with speakers, microphones, a camera, and video projection. [self.] learns from people talking to it by analyzing incoming sound and images and comparing the similarity and context of sound segments. Its utterances are made solely from learned segments. Audio processing in [self.] is done in Csound, supported by Python for the artificial intelligence. A video of the project: https://www.youtube.com/watch?v=HErOfnqREBQ
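The similarity-based recall described above could be sketched as follows. This is a hypothetical illustration only, assuming simple feature vectors and cosine similarity; the names and data structures are invented for the example and are not the project's actual code or API.

```python
# Illustrative sketch (not [self.]'s real implementation): pick the learned
# sound segment whose feature vector best matches an incoming utterance.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def most_similar_segment(query, memory):
    """Return the learned segment whose features best match the query."""
    return max(memory, key=lambda seg: cosine_similarity(query, seg["features"]))

# Toy memory of learned segments; the feature values stand in for whatever
# audio analysis data the system might extract.
memory = [
    {"id": "seg-a", "features": [0.9, 0.1, 0.0]},
    {"id": "seg-b", "features": [0.1, 0.8, 0.3]},
]
best = most_similar_segment([0.85, 0.15, 0.05], memory)
```

In the installation itself, such matching would feed the selection of segments for the robot's responses, with Csound handling the actual audio analysis and playback.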

A limited version of [self.] attended the International Csound Conference in St. Petersburg in 2015, and a video of its memories from the conference can be seen here: https://www.youtube.com/watch?v=E7fYV4K-9_s
