This experimental toolkit makes one sound more similar to another by applying audio effects whose parameters are controlled dynamically, based on audio features analyzed from the input sound and/or the other sound. The features are mapped to effect parameters by artificial neural networks evolved with the NeuroEvolution of Augmenting Topologies (NEAT) algorithm. The toolkit includes a web app that interactively visualizes the results of a NEAT run with various charts and graphs and lets you listen to the output sounds. Once you have evolved an audio effect you like, the toolkit can generate a Csound (csd) file implementing that effect, which can then be applied to real-time audio input.
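To illustrate the core idea of feature-to-parameter mapping, here is a minimal sketch in Python. It computes one audio feature (RMS energy) from a frame of samples and passes it through a single hand-wired neuron standing in for an evolved network; in the real toolkit, NEAT would evolve both the weights and the network topology, so the weights, bias, feature choice, and cutoff range below are purely illustrative assumptions.

```python
import math

def rms(frame):
    # Root-mean-square energy of one audio frame
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def sigmoid(x):
    # Squashing activation, keeps the output in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def map_features_to_param(features, weights, bias):
    # One neuron: weighted sum of features, squashed to (0, 1).
    # NEAT would evolve these weights and the topology; the
    # values used here are made up for illustration.
    activation = sum(w * f for w, f in zip(weights, features)) + bias
    return sigmoid(activation)

# Synthetic input: one 512-sample frame of a 440 Hz sine at 44.1 kHz
frame = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(512)]

features = [rms(frame)]  # the analyzed audio feature(s)
cutoff_norm = map_features_to_param(features, weights=[4.0], bias=-2.0)

# Scale the normalized value to a hypothetical effect parameter range,
# e.g. a low-pass filter cutoff between 200 Hz and 8000 Hz
cutoff_hz = 200 + cutoff_norm * (8000 - 200)
print(round(cutoff_hz, 1))
```

Running this per frame yields a cutoff value that tracks the input's energy, which is the cross-adaptive principle in miniature: analysis of one signal steering an effect applied to a signal.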
Audio and video demos are available at iver56.github.io/cross-adaptive-audio/