Reading it. So far it looks legit. Twiddle factors are not rational, so it is an algorithm to come up with approximations to twiddle factors with limited precision without losing reversibility and orthogonality. A possible trade-off is that the approximation may distort the spectrum a little, depending on the number of bits used. Consider the fast Walsh-Hadamard transform, which is kind of a 1-bit FFT.
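To make the 1-bit analogy concrete, here is a minimal sketch of an in-place fast Walsh-Hadamard transform in C (my own toy version, not anything from the paper): the butterflies are pure integer adds and subtracts, so the "twiddle factors" are all +/-1 and the transform is exactly reversible in integer arithmetic.

```c
#include <stdio.h>

/* In-place fast Walsh-Hadamard transform; n must be a power of 2.
 * Pure integer add/subtract butterflies: every "twiddle factor" is +1 or -1,
 * so nothing is ever rounded and the transform is exactly invertible. */
void fwht(int *a, int n)
{
    for (int h = 1; h < n; h *= 2)
        for (int i = 0; i < n; i += 2 * h)
            for (int j = i; j < i + h; j++) {
                int x = a[j], y = a[j + h];
                a[j]     = x + y;
                a[j + h] = x - y;
            }
}

int main(void)
{
    int a[8] = {1, 0, 1, 0, 0, 1, 1, 0};
    fwht(a, 8);                       /* forward transform */
    fwht(a, 8);                       /* FWHT is its own inverse up to a factor of n */
    for (int i = 0; i < 8; i++)
        printf("%d ", a[i] / 8);      /* prints the original sequence */
    printf("\n");
    return 0;
}
```

Run it twice and divide by n and you get the input back exactly, which is the kind of integer-exact reversibility the paper is chasing for the FFT's irrational twiddle factors.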
BTW, this is general enough to do real and complex FFTs. There are ways to do a real FFT with a half size complex FFT efficiently (see Numerical Recipes).
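Not the Numerical Recipes code itself, just a sketch of the packing trick it describes, assuming you already have some in-place complex FFT (cfft below is a placeholder name, not a real library call, and rfft is my own name for the wrapper):

```c
#include <complex.h>
#include <math.h>

/* Assumed: some in-place complex FFT of length m that you already have. */
void cfft(double complex *z, int m);

/* Pack N real samples into an N/2-point complex signal, run one complex FFT,
 * then untangle. x: N real samples; X: bins 0..N/2 (the rest are conjugates). */
void rfft(const double *x, double complex *X, int N)
{
    const double pi = acos(-1.0);
    int m = N / 2;
    double complex z[m];                  /* z[n] = x[2n] + i*x[2n+1] */
    for (int n = 0; n < m; n++)
        z[n] = x[2 * n] + I * x[2 * n + 1];
    cfft(z, m);
    X[0] = creal(z[0]) + cimag(z[0]);     /* DC bin */
    X[m] = creal(z[0]) - cimag(z[0]);     /* Nyquist bin */
    for (int k = 1; k < m; k++) {
        /* FFTs of the even and odd samples, recovered by conjugate symmetry */
        double complex e = 0.5 * (z[k] + conj(z[m - k]));
        double complex o = -0.5 * I * (z[k] - conj(z[m - k]));
        X[k] = e + cexp(-2.0 * I * pi * k / N) * o;
    }
}
```

The untangling step just splits Z into the FFTs of the even and odd samples and recombines them with one extra twiddle per bin, so you pay for an N/2-point complex FFT plus O(N) extra work instead of a full N-point one.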
I guess I'm mostly lost because I'm currently only in the continuous-time systems and signals class, and won't be taking the discrete-time version for a while, but for my final project in my digital class I wanted to implement a real-time FFT on my FPGA.
So I'm scrambling to learn the DFT/FFT on my own, as well as figure out this integer version, since I only know how to do integer math in digital, heh.
Once you get a handle on the DFT, read up on the FFT (which is an efficient implementation of the DFT). Then move on to pipeline FFTs; radix-2 should suffice. Then move on to this integer FFT, which handles the twiddle factors whatever the radix.
I think they are concerned about irrational numbers, so they approximate them with rational ones. The catch is that a number with a finite representation in decimal may not have a finite one in binary. Therefore, while something like 0.25 has an exact binary representation (0.01), something like 0.7 does not.
The motivation is that anything with a finite base-2 representation has a fixed-point representation depending on where you set the binary point: 011 = 3.0, 01.1 = 1.5, 0.11 = 0.75 ...
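To see what that bookkeeping looks like in practice, here is a quick Q15-style multiply in C (the Q15 format and the helper names are mine, just to illustrate the implied binary point):

```c
#include <stdint.h>
#include <stdio.h>

/* Q15 fixed point: a 16-bit integer with an implied binary point after the
 * sign bit, so 0x4000 represents 0.5 and 0x2000 represents 0.25. */
typedef int16_t q15;

/* Multiply two Q15 numbers: the 32-bit product carries 30 fraction bits,
 * so shift right by 15 to land back on the Q15 binary point. */
static q15 q15_mul(q15 a, q15 b)
{
    return (q15)(((int32_t)a * b) >> 15);
}

int main(void)
{
    q15 x = 0x4000;                           /* 0.5  */
    q15 y = 0x6000;                           /* 0.75 */
    printf("%f\n", q15_mul(x, y) / 32768.0);  /* prints 0.375000 */
    return 0;
}
```

The right shift is the only place the binary point shows up; everything else is plain integer hardware, which is why this style maps so well onto an FPGA.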
Agreed. But they make a specific point of mentioning fixed point implementations and how this integer one is better. Or maybe I’m confused about the difference
Fixed point does not necessarily mean that the number cannot be fractional. The main difference between floating-point and fixed-point implementations is that floating point uses a special number representation such as IEEE 754-1985, which usually includes a sign bit, exponent bits, and fraction bits.
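If you want to see those fields, here is a quick C snippet that pulls apart a single-precision float, assuming the usual 32-bit IEEE 754 layout (1 sign bit, 8 exponent bits, 23 fraction bits):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = -0.7f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* reinterpret the bits safely */
    printf("sign=%u exponent=%u fraction=0x%06x\n",
           (unsigned)(bits >> 31),
           (unsigned)((bits >> 23) & 0xFF),
           (unsigned)(bits & 0x7FFFFF));
    return 0;
}
```

For -0.7 the fraction field comes out as 0x333333, a truncated repeating 0110 pattern, which is the same "no finite binary representation" issue again.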
An FPGA can do floating point, but the operations are a bit more involved; most folks use the FPGA vendor's floating-point libraries. Fixed point does not need any of that, but you can bet there is an assumed binary point in there.
I know, but they said it was integer, not fixed point; that's why I'm confused. Oh, I know they can, I just haven't learned about implementing floating-point logic, so I don't want to use any. IEEE 754 is interesting though; I learned about that in my previous digital classes.
Fixed-point math is always going to be faster than floating point - to a huge extent on small embedded chips like the ARM7, which must use soft float, but even on DSPs that are optimized with lots of floating-point hardware, dedicated MAC units, etc.
Yeah. I think I understand what they are getting at, but the paper is not perfect. Read the other one I linked in this thread, Soup. It's a classic, and it's not BS; I think I've used it before to implement pipeline FFTs in an FPGA. As far as DSPs go, those are different animals.
Don't get too stuck on the details of that paper; conference papers tend to have a few issues. Get whatever is useful out of them. If you need any help or have some questions, let me know. I might be able to help you.
Read these "rules" AND introduce
yourself before your first post
Being true to what the artists intended
(opinion / entertainment piece)
Comments on Profile Post by SoupRKnowva