What's often confusing about the FFT is that the transform seems to have numbers that are too big. Let's call this the “FFT scale factor” problem (it's a reasonable google search). The frequency domain seems to have much more energy (much higher values) than one might expect from the time-domain waveform from which it originated.
When trying to understand this, the uninitiated user will google something like “FFT scale factor” and get lots of really good explanations about how the math produces values that are N times larger than one might expect, N being the size of the FFT.
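You can see the factor of N directly with a quick numpy sketch (numpy is just my choice here; any unnormalized FFT library behaves the same way): transform a constant signal of amplitude 1 and the DC bin comes back as N, not 1.

```python
import numpy as np

N = 8
x = np.ones(N)       # constant signal, amplitude 1
X = np.fft.fft(x)

print(X[0].real)     # 8.0 -- the DC bin is N times the amplitude
```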
But really, these users are trying to figure out why, not how. I mean, isn't that wrong? Why is it N times larger? Shouldn't someone be dividing by N?
The answer is yes, sort of. First off, remember what the first “F” in FFT stands for: Fast. Maybe users of standard FFT libraries don't care about the absolute scale; they care about the spectral shape or changes in the spectral shape. If the standard library divided by N, it would be less fast for everyone. The point being: if you want to take the performance hit of dividing N frequency bins by N, you do it yourself.
If you really do care about the absolute power in the frequency-domain bins, you might want to divide by N. Maybe you care about the total energy. Oh wait, you should sum the bins, then divide by N one time! Now it's faster for you, too. Maybe you only care about the energy of the strongest bin. Just divide that one bin by N and ignore the others!
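Both shortcuts above can be sketched in numpy (the signal and bin choices here are just illustrative assumptions): summing the power bins and dividing by N once recovers the time-domain energy (this is Parseval's theorem), and the strongest bin can be rescaled on its own.

```python
import numpy as np

N = 1024
t = np.arange(N)
x = np.sin(2 * np.pi * 50 * t / N)   # a 50-cycle sine, amplitude 1

X = np.fft.fft(x)
power = np.abs(X) ** 2

# Total energy: sum all the bins, then divide by N just once
total_energy = power.sum() / N
print(total_energy)                  # matches (x ** 2).sum()

# Strongest bin only: divide that single bin by N and ignore the rest
k = np.argmax(power[:N // 2])
peak_amplitude = 2 * np.abs(X[k]) / N   # ~1.0 for a real sinusoid
print(peak_amplitude)
```

(The extra factor of 2 on the peak is because a real sinusoid's energy is split between a positive- and a negative-frequency bin.)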
As you may realize, even many people who do care about the absolute power don't care about it enough to divide every single bin by N.
So *why* do FFT implementations seem to be off by a factor of N? It's for performance - the FFT is an intermediate result and you should divide by N if and when you need it.
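Some libraries now let you make that choice explicitly. numpy, for instance, exposes it through the `norm` parameter of `np.fft.fft` (the `"forward"` option requires a reasonably recent numpy): the default leaves the factor of N for you to handle if and when you need it, while `norm="forward"` asks the library to divide for you.

```python
import numpy as np

x = np.ones(8)

# Default ("backward"): no scaling on the forward transform
print(np.fft.fft(x)[0].real)                   # 8.0

# norm="forward": the library divides by N for you
print(np.fft.fft(x, norm="forward")[0].real)   # 1.0
```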
How is an FFT implementation off by N? Google “FFT scale factor” and read some explanations of the neato math behind it, written far better than I could manage.