Comments on Profile Post by SoupRKnowva

  1. SoupRKnowva
    Nov 29, 2017
  2. ultrabike
    Reading it. So far it looks legit. Twiddle factors are not rational, so it is an algorithm for approximating the twiddle factors with limited precision w/o losing reversibility and orthogonality. A possible trade-off may be that the approximation distorts the spectrum a little, depending on the number of bits used. Consider the fast Walsh-Hadamard transform, which is kind of a 1-bit FFT (see the sketch below).
    Nov 29, 2017
    SoupRKnowva likes this.
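    A minimal sketch of that fast Walsh-Hadamard transform (illustration only; the function name is mine, and this code is not from the paper). Every "twiddle" is +1 or -1, so the whole transform runs in exact integer arithmetic with no multiplies:

      # In-place fast Walsh-Hadamard transform: only adds and subtracts,
      # so it is exact in integer arithmetic. Length must be a power of two.
      def fwht(x):
          x = list(x)
          h = 1
          while h < len(x):
              for i in range(0, len(x), 2 * h):
                  for j in range(i, i + h):
                      a, b = x[j], x[j + h]
                      x[j], x[j + h] = a + b, a - b  # butterfly with +/-1 "twiddles"
              h *= 2
          return x

      # Applying it twice returns the input scaled by len(x), so it is reversible:
      # fwht(fwht([1, 0, 1, 0])) == [4, 0, 4, 0]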
  3. ultrabike
    BTW, this is general enough to do real and complex FFTs. There are ways to do a real FFT efficiently with a half-size complex FFT (see Numerical Recipes, and the sketch below).
    Nov 29, 2017
    SoupRKnowva likes this.
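    A hedged sketch of that half-size trick (my code, not Numerical Recipes' listing; numpy is used only for the reference FFT): pack the even samples into the real part and the odd samples into the imaginary part of a half-length complex vector, take one complex FFT, then untangle the two interleaved spectra.

      import numpy as np

      def real_fft_via_half_size(x):
          """First N/2 bins of the FFT of a real array x (N even),
          computed with one N/2-point complex FFT."""
          N = len(x)
          z = x[0::2] + 1j * x[1::2]            # even samples -> real, odd -> imag
          Z = np.fft.fft(z)                      # one half-size complex FFT
          M = N // 2
          Zc = np.conj(Z[(-np.arange(M)) % M])   # Z*[-k mod M]
          Xe = 0.5 * (Z + Zc)                    # spectrum of the even samples
          Xo = -0.5j * (Z - Zc)                  # spectrum of the odd samples
          return Xe + np.exp(-2j * np.pi * np.arange(M) / N) * Xo

      x = np.random.rand(8)
      assert np.allclose(real_fft_via_half_size(x), np.fft.fft(x)[:4])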
  4. ultrabike
    The twiddle multiplication with 3 multiplies instead of the usual 4 also seems legit; a sketch of the idea follows below.
    Nov 29, 2017
    SoupRKnowva likes this.
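    For reference, a sketch of the classic 3-multiply complex product (my code; the paper's lifting-based butterflies are in the same spirit). The direct form needs 4 multiplies and 2 adds; this rearrangement trades one multiply for three extra adds, a win when hardware multipliers are the scarce resource:

      def cmul_3mult(ar, ai, br, bi):
          """(ar + j*ai) * (br + j*bi) using 3 real multiplies instead of 4."""
          t = br * (ar + ai)         # multiply 1
          real = t - ai * (br + bi)  # multiply 2: equals ar*br - ai*bi
          imag = t + ar * (bi - br)  # multiply 3: equals ar*bi + ai*br
          return real, imag

      # Check against the textbook formula:
      assert cmul_3mult(1.0, 2.0, 3.0, 4.0) == (1*3 - 2*4, 1*4 + 2*3)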
  5. ultrabike
    Very nice paper! Thanks!
    Nov 29, 2017
    SoupRKnowva likes this.
  6. SoupRKnowva
    But isn't there more to it than just the approximation of the twiddle factor, since they also ditched the twiddle factor multiplication?
    Nov 29, 2017
  7. SoupRKnowva
    I guess I'm mostly lost because I'm currently only in the continuous-time signals and systems class and won't be taking the discrete-time version for a while, but for my final project in my digital class I wanted to implement a real-time FFT on my FPGA.
    Nov 29, 2017
  8. SoupRKnowva
    So I'm scrambling to learn the DFT/FFT on my own, as well as figure out this integer version, since I only know how to do integer math in digital, heh.
    Nov 29, 2017
  9. ultrabike
    As far as I can tell it's just the twiddle factor approximation. But that's not trivial at all. FFT quantization noise is a very real problem (see the quick demo below).
    Nov 29, 2017
    SoupRKnowva likes this.
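    A quick, hedged illustration of that noise (my example, not the paper's method): build a naive DFT whose twiddle factors are rounded to a given number of fractional bits, as if read from a fixed-point ROM, and compare against the full-precision FFT.

      import numpy as np

      def dft_quantized_twiddles(x, frac_bits):
          """Naive O(N^2) DFT with twiddles rounded to `frac_bits` fractional bits."""
          N = len(x)
          n = np.arange(N)
          W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # exact twiddle matrix
          s = 2.0 ** frac_bits
          Wq = np.round(W.real * s) / s + 1j * np.round(W.imag * s) / s
          return Wq @ x

      x = np.random.rand(64)
      exact = np.fft.fft(x)
      for bits in (4, 8, 12, 16):
          err = np.max(np.abs(dft_quantized_twiddles(x, bits) - exact))
          print(f"{bits:2d} fractional bits -> max spectral error {err:.2e}")
      # The error floor drops roughly 16x for every 4 extra twiddle bits.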
  10. ultrabike
    Once you get a handle on the DFT, read up on the FFT (which is an efficient implementation of the DFT; see the minimal radix-2 example below). Then move on to pipeline FFTs; radix-2 should suffice. Then move on to this integer FFT, which handles the twiddle factors of the FFT whatever the radix.
    Nov 29, 2017
    SoupRKnowva likes this.
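    To make the starting point concrete, a minimal recursive radix-2 decimation-in-time FFT (the textbook form, not pipelined and not integer; a hardware version unrolls this into butterfly stages):

      import cmath

      def fft_radix2(x):
          """Recursive radix-2 DIT FFT; len(x) must be a power of two."""
          N = len(x)
          if N == 1:
              return list(x)
          even = fft_radix2(x[0::2])  # FFT of even-indexed samples
          odd = fft_radix2(x[1::2])   # FFT of odd-indexed samples
          out = [0] * N
          for k in range(N // 2):
              w = cmath.exp(-2j * cmath.pi * k / N)  # twiddle factor W_N^k
              out[k] = even[k] + w * odd[k]
              out[k + N // 2] = even[k] - w * odd[k]
          return out

      # Matches the DFT definition, e.g. fft_radix2([1, 2, 3, 4])
      # gives [10, -2+2j, -2, -2-2j].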
  11. ultrabike
    You will want to look into bit reversal as well (see the snippet below).
    Nov 29, 2017
    SoupRKnowva likes this.
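    For example, a minimal bit-reversal permutation (my sketch): a decimation-in-time radix-2 FFT consumes its input in bit-reversed index order, and in hardware this is just a rewiring of the address lines.

      def bit_reverse_indices(n_bits):
          """Input permutation for a 2**n_bits-point radix-2 FFT."""
          out = []
          for i in range(1 << n_bits):
              r = 0
              for b in range(n_bits):
                  r = (r << 1) | ((i >> b) & 1)  # reflect the bits of i
              out.append(r)
          return out

      # 8-point case: index 3 = 011b lands at 110b = 6, and so on.
      assert bit_reverse_indices(3) == [0, 4, 2, 6, 1, 5, 3, 7]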
  12. ultrabike
    For HW implementation that is.
    Nov 29, 2017
    SoupRKnowva likes this.
  13. SoupRKnowva
    I'm just seriously lost as to how they think 0.7 or 0.4 are integers, haha. That's my main confusion about the paper.
    Nov 30, 2017
  14. ultrabike
    I think they are concerned about irrational numbers, so they approximate them with rational ones. The catch is that a number with a terminating expansion in decimal may not have one in binary (and the other way around). Therefore, while something like 0.25 has an exact binary representation (0.01), something like 0.7 does not; its binary expansion repeats forever (see the demo below).
    Nov 30, 2017
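    A quick demo of that point (my example): truncating 0.7 to more and more fractional bits never makes the error zero, while 0.25 is exact from two bits on.

      def to_fixed(value, frac_bits):
          """Round `value` to the nearest multiple of 2**-frac_bits."""
          return round(value * 2**frac_bits) / 2**frac_bits

      for bits in (2, 4, 8, 16):
          print(bits, to_fixed(0.25, bits), abs(to_fixed(0.7, bits) - 0.7))
      # 0.25 is exactly 0.01 in binary; 0.7 = 0.1011001100110011... repeats
      # forever, so its fixed-point error shrinks but never reaches zero.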
  15. ultrabike
    The idea seems to be there, but the execution would have to be in base 2, not base 10, methinks.
    Nov 30, 2017
  16. ultrabike
    The motivation is that something with a finite base-2 expansion has a fixed-point representation, depending on where you set the binary point: 011 = 3.0, 01.1 = 1.5, 0.11 = 0.75, and so on (see the sketch below).
    Nov 30, 2017
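    A small sketch of that idea in code (function name is mine): the same bit pattern encodes different values depending only on where you declare the binary point to be.

      def from_fixed(bits, frac_bits):
          """Interpret an unsigned bit pattern with an assumed binary point."""
          return bits / (1 << frac_bits)

      pattern = 0b011
      print(from_fixed(pattern, 0))  # 011. -> 3.0
      print(from_fixed(pattern, 1))  # 01.1 -> 1.5
      print(from_fixed(pattern, 2))  # 0.11 -> 0.75
      # The hardware only ever sees the integer 3; the binary point is a
      # convention the designer keeps track of.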
  17. SoupRKnowva
    Agreed. But they make a specific point of mentioning fixed-point implementations and how this integer one is better. Or maybe I'm confused about the difference.
    Nov 30, 2017
  18. ultrabike
    Fixed point does not necessarily mean that the number cannot be fractional. The main difference between floating-point and fixed-point implementations is that floating point uses special number representations such as IEEE 754-1985. Floating-point representations usually include a sign bit, exponent bits, and fraction bits (see the sketch below).
    Nov 30, 2017
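    For instance, a sketch that pulls those fields out of a 32-bit IEEE 754 single-precision float (1 sign bit, 8 exponent bits, 23 fraction bits):

      import struct

      def ieee754_fields(x):
          """Split a float, packed as IEEE 754 single precision, into its fields."""
          raw = struct.unpack(">I", struct.pack(">f", x))[0]  # raw 32-bit pattern
          sign = raw >> 31
          exponent = (raw >> 23) & 0xFF  # biased by 127
          fraction = raw & 0x7FFFFF      # 23 bits, implicit leading 1
          return sign, exponent, fraction

      print(ieee754_fields(-0.75))
      # -> (1, 126, 4194304): -0.75 = -1 * 2**(126 - 127) * (1 + 4194304 / 2**23)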
  19. ultrabike
    An FPGA can do floating point, but the operations are a bit more involved; most folks use the FPGA vendor's floating-point libraries. Fixed point does not need all of that, but you bet there is an assumed binary point there.
    Nov 30, 2017
  20. SoupRKnowva
    I know, but they said it was integer, not fixed point; that's why I'm confused. Oh, I know they can, I just haven't learned about implementing floating-point logic, so I don't want to use any. IEEE 754 is interesting though; I learned about that in my previous digital classes.
    Nov 30, 2017
  21. bengo
    Fixed-point math is always going to be faster than floating point: to a huge extent on small embedded chips like the ARM7, which must use soft float, but even on DSPs that are optimized with lots of floating-point hardware, dedicated MAC units, etc.
    Nov 30, 2017
  22. ultrabike
    Yeah. I think I understand what they are getting at, but the paper is not perfect. Read the other one I linked in this thread, Soup; it's a classic, and it's not BS. I think I've used it before to implement pipeline FFTs in an FPGA. As far as DSPs go, those are different animals.
    Dec 1, 2017
  23. ultrabike
    Don't get stuck on the details of that paper; conference papers tend to have a few issues. Get whatever is useful out of them. If you need any help or have questions, let me know; I might be able to help you.
    Dec 1, 2017
  24. ultrabike
    IEEE papers tend to be of higher quality.
    Dec 1, 2017
  25. ultrabike
    BTW Soup, what are your requirements? Is a radix-2 FFT sufficient? How many points: 8-point, 256-point?
    Dec 1, 2017