FFT stands for Fast Fourier Transform, an efficient algorithm for computing the Fourier transform. The Fourier transform is a mathematical operation that works out which frequencies a signal is made from. In the case of audio, it splits a signal into samples, each relating to a frequency.
With it a user can build analytical software such as EQs and their visualisers. FFT is a very complicated subject, and not much more detail is required here other than the fact that it can be used to split the spectrum into frequency bands at any scale required. For example, an audio sample covering a frequency range of 0 Hz to 20,000 Hz could be split into 20,000 bands, one for each frequency along the spectrum. The same range could instead be split into fewer, wider bands for broader control over them.
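The band-splitting idea above can be sketched in code. This is a hedged illustration, not the project's actual script — the method and variable names are my own — showing how a raw FFT magnitude array can be grouped into a smaller number of averaged bands:

```csharp
// Sketch: group a raw FFT magnitude spectrum into a smaller set of bands.
// "spectrum" would normally come from an FFT (e.g. Unity's GetSpectrumData);
// here it is simply an array of per-bin magnitudes.
public static class SpectrumBands
{
    public static float[] GroupIntoBands(float[] spectrum, int bandCount)
    {
        float[] bands = new float[bandCount];
        int binsPerBand = spectrum.Length / bandCount;

        for (int band = 0; band < bandCount; band++)
        {
            float sum = 0f;
            for (int i = 0; i < binsPerBand; i++)
                sum += spectrum[band * binsPerBand + i];

            // Average magnitude across the band's bins gives one control value.
            bands[band] = sum / binsPerBand;
        }
        return bands;
    }
}
```

With 512 FFT bins, asking for 8 bands would average 64 bins into each, giving eight broad control values instead of hundreds of narrow ones.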
Here is an Interactive Guide to Understanding the Fourier Transform (19W).
For my personal project I have Peer Play on YouTube to thank for my initial guide to audio visualisation. By following his tutorial series (11DL) I was able to learn how Unity takes sample data, and how I could use that sample data to script and animate based on my own audio sources, although his visualisations are of a more recognisable style than mine.
I had to adapt his code to my requirements, as it did not work correctly for what I needed. His code targeted audio coming from the master channel only, so a modification was required so that the applied scripts took audio from the same asset they were attached to, not the master channel. This worked in conjunction with my naming convention across the project, using the output of each audio source to connect to its own channel within the Unity Mixer.
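The per-source change described above maps onto a distinction in Unity's API: `AudioSource.GetSpectrumData` reads only the source it is called on, while `AudioListener.GetSpectrumData` reads the master mix. A minimal sketch of per-source capture (the class and field names are my own, not the project's):

```csharp
using UnityEngine;

// Attach to the same GameObject as the AudioSource so the script reads
// that source alone, rather than the master channel.
[RequireComponent(typeof(AudioSource))]
public class SourceSpectrumCapture : MonoBehaviour
{
    // Buffer length must be a power of two (Unity requires 64-8192).
    public float[] Samples { get; } = new float[512];

    AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    void Update()
    {
        // Reads spectrum data from this source only (channel 0),
        // unlike AudioListener.GetSpectrumData, which reads the master mix.
        source.GetSpectrumData(Samples, 0, FFTWindow.Blackman);
    }
}
```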
With this set up I am able to apply any audio file to an object as an audio source, with its output connected to the mixer. I can then attach an FFT capture script which reads the sample data from that source. The data can then be used in various ways depending on the requirements. I wanted to use the data to animate the string section bows, cutting out manual animation and scripting, which saves time once the code is perfected. It also allows any object with similar motion to use the same scripts to animate to its defined audio source.
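One way the captured samples could drive a bow — sketched under my own assumptions, since the real script's internals are not shown here — is to average one FFT band into a single level, smooth it, and move the bow along its stroke axis by that amount:

```csharp
using UnityEngine;

// Hypothetical sketch (names and parameters are my own): drive a bow's
// stroke from the amplitude of one FFT band of the AudioSource on the
// same GameObject, not the master channel.
[RequireComponent(typeof(AudioSource))]
public class BowAnimator : MonoBehaviour
{
    public int bandStart = 0;         // first FFT bin of the band to follow
    public int bandWidth = 32;        // number of bins averaged into the band
    public float strokeLength = 0.5f; // bow travel at full amplitude
    public float smoothing = 8f;      // higher = snappier response

    readonly float[] samples = new float[512];
    AudioSource source;
    Vector3 restPosition;
    float level;

    void Awake()
    {
        source = GetComponent<AudioSource>();
        restPosition = transform.localPosition;
    }

    void Update()
    {
        // Read this source's spectrum only.
        source.GetSpectrumData(samples, 0, FFTWindow.Blackman);

        // Average the chosen band into a single control value.
        float sum = 0f;
        for (int i = bandStart; i < bandStart + bandWidth; i++)
            sum += samples[i];
        float target = sum / bandWidth;

        // Smooth the value so the bow does not jitter frame to frame.
        level = Mathf.Lerp(level, target, smoothing * Time.deltaTime);

        // Slide the bow along its local right axis by the band level.
        transform.localPosition = restPosition + Vector3.right * (level * strokeLength);
    }
}
```

Because each instance reads its own AudioSource, the same script can sit on any number of objects and each will follow its own part.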
For example, if the scene has 4 Violas with separate musical parts, the script can be used to animate those separate parts. They will animate independently from one another to create more immersive movement. This is great for musical elements that have their own playing styles, such as Cellos and Violas in comparison to percussive parts like the Concert Bass Drum or Timpani. Below are some screenshots of the script in action.
Please look at the video examples found here.
This sample data can then be sent to an animation script. Below are the parameters that I scripted for use across multiple instruments in the scene.