
Faster-Than-Fast Fourier Transform

samzenpus posted more than 2 years ago | from the greased-lightning dept.

Math | 271 comments

First time accepted submitter CanEHdian writes "MIT news reports on research done resulting in a Faster-than-fast Fourier Transform algorithm. 'At the Association for Computing Machinery's Symposium on Discrete Algorithms (SODA) this week, a group of MIT researchers will present a new algorithm that, in a large range of practically important cases, improves on the fast Fourier transform. Under some circumstances, the improvement can be dramatic — a tenfold increase in speed. The new algorithm could be particularly useful for image compression, enabling, say, smartphones to wirelessly transmit large video files without draining their batteries or consuming their monthly bandwidth allotments.'"


transmit large video files (1, Insightful)

TaoPhoenix (980487) | more than 2 years ago | (#38759102)

In before _____.

Key phrase:

"smartphones to wirelessly transmit large video files without ... consuming their monthly bandwidth allotments."

Everyone see the connection to the Copyright Mayhem this week?

Bueller?

Another use -- Emergency News Broadcasting (2)

Taco Cowboy (5327) | more than 2 years ago | (#38759812)

Another use that I can see for this technology is emergency and impromptu news broadcasting, especially when faced with dictatorial authorities (like Iran/China/Syria) that will do anything to stop unfavorable news from getting out to the world.

Can it be done effectively without an FPU? (0, Interesting)

Anonymous Coward | more than 2 years ago | (#38759104)

One of the things that might be cool to do with this is use it to do DSP with Arduino and similar processors. However, these chips are very simple: they don't have a math coprocessor and are somewhat slow at floating-point math. Anything to help improve that would be a boon.

Re:Can it be done effectively without an FPU? (5, Insightful)

Alex Belits (437) | more than 2 years ago | (#38759210)

DSP with Arduino

What is wrong with this picture?

Processors used in Arduino (Atmel 8-bit AVR series) are minimal, general-purpose microcontrollers, a replacement for the earlier PIC microcontrollers. Using them for DSP is only slightly less stupid than building DSP boards entirely out of individual discrete transistors. There is A WHOLE FUCKING CATEGORY OF IC dedicated to DSP, plenty of microcontroller-style yet high-performance SoCs suitable for DSP, and FPGAs with DSP-specific blocks.

But noooo, ignorant people such as yourself would rather recommend the most un-DSP-ish device in the world just because it comes bundled with a Java-based IDE that runs on your beloved wiiiiiiiiindows and has the ugliest editor ever written since xedit.

Re:Can it be done effectively without an FPU? (4, Interesting)

rev0lt (1950662) | more than 2 years ago | (#38759266)

While I'd argue with your point, there are a ton of devices that would benefit from this. Why do you think FFT implementations for 8/16-bit microcontrollers such as the 8051 are so popular? DSPs are popular in applications that make heavy use of the FFT, but there are a ton of general-purpose applications that need FFT algorithms, ranging from multitone generation/decoding to D/A conversion. Why should I use 2 ICs (plus assorted external logic) on a given circuit, if I can easily do it in software on my general-purpose microcontroller, which besides the integrated RAM and ROM also gives me a serial port, multiple I/O ports and a ton of "extra" features, at a fraction of the price?

Re:Can it be done effectively without an FPU? (3, Interesting)

dintech (998802) | more than 2 years ago | (#38759540)

The spectrum analyser. One of the most common uses is staring you right in the face in almost every mp3 player.

The interesting thing is that the greater the time window over which the FFT operates, the finer the frequency detail you can observe within that window, at the expense of how quickly the graph or bars can change over time (in simplistic terms). I wonder how this new algo will change the frequency detail/transient time detail trade-off. Do we see more detail in both domains? Less? The same?

Re:Can it be done effectively without an FPU? (4, Informative)

introcept (1381101) | more than 2 years ago | (#38759682)

The interesting thing is that the greater the time window over which the FFT operates, the finer the frequency detail you can observe within that window, at the expense of how quickly the graph or bars can change over time (in simplistic terms). I wonder how this new algo will change the frequency detail/transient time detail trade-off. Do we see more detail in both domains? Less? The same?

This is a property of the discrete Fourier transform itself. For a given window size there is only a finite number of frequencies you can calculate: the larger your window, the more frequencies you can see (at the expense of a longer sampling time).

FFT algorithms simply try to compute the DFT quickly; this advance is significant for practical applications but doesn't change the underlying mathematics.
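
The window-size/bin-spacing relationship is easy to demonstrate. A pure-Python sketch (naive O(N^2) DFT, illustrative only; the sample rate and window length are arbitrary choices for the demo):

```python
import cmath
import math

def dft(x):
    # Naive O(N^2) DFT, enough to illustrate bin spacing.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

fs = 64.0  # sample rate in Hz (arbitrary for the demo)
N = 32     # window length: bins are spaced fs/N = 2 Hz apart
x = [math.sin(2 * math.pi * 6.0 * n / fs) for n in range(N)]  # a 6 Hz tone
mags = [abs(X) for X in dft(x)]
peak = max(range(N // 2), key=lambda k: mags[k])  # positive frequencies only
print(peak * fs / N)  # -> 6.0 (the tone lands exactly in bin 3)
# Doubling N to 64 would halve the bin spacing to 1 Hz: finer frequency
# detail, but the window now covers twice as much time.
```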

Re:Can it be done effectively without an FPU? (4, Interesting)

SuricouRaven (1897204) | more than 2 years ago | (#38759744)

It's more than a property of the DFT - it's a much deeper mathematical truth than that. The same fundamental mathematics is used to derive the uncertainty principle in quantum mechanics: for a signal to be perfectly defined in frequency it must be infinite in time, and vice versa. When you apply the same to quantum wave-functions... the uncertainty limit isn't a limit on measurements. It's a limit on what the universe is capable of representing.

Re:Can it be done effectively without an FPU? (1)

Bill Currie (487) | more than 2 years ago | (#38759694)

I fully expect it to be the same, as it comes down to information content. It's just not possible to extract information that isn't there, no matter what algorithm you use. Same problem as with "Enhance!" in the movies.

Re:Can it be done effectively without an FPU? (2)

Rising Ape (1620461) | more than 2 years ago | (#38759758)

The interesting thing is that the greater the time window over which the FFT operates, the finer the frequency detail you can observe within that window, at the expense of how quickly the graph or bars can change over time (in simplistic terms). I wonder how this new algo will change the frequency detail/transient time detail trade-off. Do we see more detail in both domains? Less? The same?

That's a consequence of a fundamental mathematical relationship between the frequency and time domains and has nothing to do with the algorithm itself. A signal of length L can be exactly represented by a Fourier series of frequencies 0, 1/L, 2/L, 3/L... etc., with nothing in between, so there's no finer detail to see.

Re:Can it be done effectively without an FPU? (4, Interesting)

mikael_j (106439) | more than 2 years ago | (#38759276)

Well, if for some reason someone is already using Arduino boards to do a bunch of stuff it might make sense if it was just fast enough. But yeah, if you have a choice it doesn't make much sense to pick an Arduino for this task.

So far the only "usefulness" I've seen in the Arduino has been for that retro kick, being artificially limited to the kind of environment I haven't had to work in for a long time, but if I wanted to build something useful I'd probably at least want a 16-bit processor these days, the difference in price is negligible for any hobby project (hell, a 32-bit processor probably wouldn't be a major budget concern considering what you're likely to spend on the rest of your hardware when building something)...

Re:Can it be done effectively without an FPU? (3, Interesting)

Anonymous Coward | more than 2 years ago | (#38759600)

A year ago, there was still easily an order of magnitude difference in price going from AVR to 32-bit microcontroller land in base dev hardware alone. You say that for one hobby item the price difference (say $10 to $100) is negligible, because they are both smallish numbers relative to income... 1) You need to learn multiplication. 2) A lot of hobbyists have the desire to do their own version of "ubiquitous" computing. Your "negligible" price difference theory breaks down in all the cases I'm familiar with.

And please spare me the examples of ultra-cheap 32-bit systems, which I know always exist at any given moment. I still remember that time when the TI eval robot had a leaked promo code. That thing took down TI's website and then of course was promptly gone... When assisting a local youth group with a robotics project I looked at numerous "inexpensive" ARM options, only to find that at any given time product availability in this domain is *inconsistent*. Say what you will, but there is almost always an Arduino clone around, and if you fry it, you don't care. If you get a special on a 32-bit ARM system, good luck buying another in 3 months.

In my business, if you absolutely *NEED* a dev board you must still be prepared to pay $300-1000 for the base board alone. I'm a day-job embedded product designer who also works with hobbyists at a hackerspace, so I am not hung up on any one usage, and I can tell anybody who thinks AVRs are irrelevant that they are still full of shit.. Where is your raspberry pi available for sale, fucker?

Re:Can it be done effectively without an FPU? (1)

SuricouRaven (1897204) | more than 2 years ago | (#38759748)

They do well when you need something a little more powerful and extensible than a PIC but still need to watch the power consumption - and the shield system and standardised communication allow for economies of scale in peripherals. You can take an Arduino and just wire it to an ethernet interface, or a bluetooth interface, or an 802.11g interface, or an SD card reader, or all sorts of other forms of IO, without having to worry overly about compatibility. Plug and play for embedded computing.

Re:Can it be done effectively without an FPU? (2)

thegarbz (1787294) | more than 2 years ago | (#38759794)

Actually a very large number of projects you see out there in the hobby arena are the opposite: people throwing 8-bit microcontrollers at simple problems that could be solved more easily, or better, with some discrete parts, logic ICs, or even analogue circuits.

Currently I use 8-bit AVRs (not Arduinos) for most of my projects, but when I look around I find I have the opposite opinion to you. I have a digital radio complete with a 128x64 pixel screen and a menu system that goes 3-4 levels deep, all running on an 8-bit microcontroller. There are projects out there that collect data from various sources and then transmit wirelessly to a datalogger. There are even entire web servers running on 8-bit microcontrollers. What I'm really wondering is what kind of incredibly complex project you have that requires more than what's available in most common $10 8-bit microcontrollers?

Also, 16-bit microcontrollers, and ARMs even more so, come with a larger feature set than the standard AVR that powers the Arduino. This has some repercussions, including more complicated software (more registers to juggle, which could be good or bad), often a more complicated footprint (how many 16-bit microcontrollers come in a 12-pin DIP package?), and sadly, for several companies, no decent free developer tools or a crippled compiler.

So ... just what kind of supercomputer are you building with your fancy 16 bits? :-)

Re:Can it be done effectively without an FPU? (2)

digitig (1056110) | more than 2 years ago | (#38759438)

I agree with the AC -- it would be cool. [S]he didn't suggest using it in commercial applications, just as a fun thing to do. With a possible ten-fold increase in speed, DSP that was out of the Arduino's range comes within it, so there are now more things the Arduino can do. That's cool. If you want to simply buy a purpose-built DSP chip that's fine, but in this place of all places don't knock the geeks who want to build the thing at home from the lowest-level components practicable.

Re:Can it be done effectively without an FPU? (-1)

Anonymous Coward | more than 2 years ago | (#38759534)

Well. So much for that hobby.

Maybe you can go home and kick your dog a few times now.

Re:Can it be done effectively without an FPU? (5, Insightful)

ThatsLoseNotLoose (719462) | more than 2 years ago | (#38759592)

Slashdot could really use a +1 Insightful but Unnecessarily Dickish mod.

Re:Can it be done effectively without an FPU? (4, Funny)

turing_m (1030530) | more than 2 years ago | (#38759974)

Slashdot could really use a +1 Insightful but Unnecessarily Dickish mod.

What is wrong with this picture?

Moderation categories already encompass both categories mentioned, and are an improvement on a simple "like" or "dislike". Adding multiple combinations of moderation (e.g. Insightful + Flamebait or Informative + Troll or any of the other 90 combinations you would needlessly add to the slashdot moderation system) is only slightly less stupid than allowing the herd of cats that is the slashdot moderator community to just create arbitrary moderation categories out of thin air any time they feel like it. There ARE TWO WHOLE FUCKING CATEGORIES OF MODERATION already dedicated to "Insightful but Unnecessarily Dickish", namely Insightful and Flamebait. The general idea is that on average the +1 from the Insightful and the -1 from the Flamebait cancel each other out, and people checking the moderations on their comments over time will come to realize they should be less of a dick but retain their level of insight.

But noooo, ignorant people such as yourself would rather recommend slashdot implement the most arbitrary and poorly architected moderation system in the world just so you can get the +5 insiiiiiiiiightful. (And even if you would argue that Dickish != Flamebait, you are proposing a new 11th moderation category for "Insightful but Unnecessarily Dickish" worth a +1, which would result in the sizable misanthropic subset of slashdot users actually TRYING to get their posts moderated as your wondrous new category, thinking it an improvement over plain old Insightful.)

Re:Can it be done effectively without an FPU? (-1)

Anonymous Coward | more than 2 years ago | (#38759622)

Dick.

Re:Can it be done effectively without an FPU? (2)

Fraggy_the_undead (758495) | more than 2 years ago | (#38759722)

No reason to shout. Let people have their hobby! I think the AVR and similar inexpensive microprocessors are a great way to get started in "not purely software" hackery. And if Arduino and the likes make it even easier, all the better! Just because not everyone was born with a keyboard in their hand, typing into emacs, doesn't mean they don't have the right to explore the realm of programming with whatever editor and platform they like. Sure, there are a lot of applications these 8-bit MCUs are ill-suited for, but let them figure that out by themselves. It's a learning process, but you in your arrogance shout at the AC because he/she wished for something you think is impossible. And maybe it is, but the way you communicate it is needlessly harsh. There are plenty of paths to becoming an educated hacker, and if it's via a Java-based IDE on Windows, that's just as well.

Re:Can it be done effectively without an FPU? (4, Insightful)

introcept (1381101) | more than 2 years ago | (#38759742)

While I too am sick of seeing dinky little Arduino projects, there are plenty of good reasons to be doing FFTs on low-power micros. For example, the company I work for designs battery-powered wireless sensors: they have to be compact, consume minimal power and use minimal components for manufacture. We've got a single ATMega processor that sits between a sensor and a radio, doing, among other things, FFTs and other 'DSP' functions to compress the data before we spend our precious milliamps transmitting it. It really is the cheapest, lowest-power way to get the job done; sometimes a hardware DSP is overkill.

Re:Can it be done effectively without an FPU? (0)

Anonymous Coward | more than 2 years ago | (#38759770)

Um, and how much do those chips cost? You're seriously pitching an FPGA with DSP-specific blocks against an 8-bit AVR? That's like criticizing a Golf driver because IF YOU WANT TO GET SOMEWHERE FAST YOU USE A FERRARI.

And hey, if the Arduino has poor DSP performance then maybe a new faster algorithm is of interest to that community.

Note that there is a segment of Arduino users who like to do as much as possible given the limitations. It's a form of creativity. Sorry it doesn't meet your approval.

Re:Can it be done effectively without an FPU? (-1)

Anonymous Coward | more than 2 years ago | (#38759884)

I liked xedit

Re:Can it be done effectively without an FPU? (0)

Anonymous Coward | more than 2 years ago | (#38759950)

Do you think the IT-lifers that frequent slashdot think of these things? Obviously not.

Re:Can it be done effectively without an FPU? (1)

Doc Ruby (173196) | more than 2 years ago | (#38759996)

Except when your device is already made with an Arduino, and then it needs to do something new that is DSP. Then you need DSP with Arduino. Even if it's "stupid" it might be necessary. Most application development must use the HW already present in its installed base, or die trying.

DSP (or rather NSP, since "Native Signal Processing" is what non-DSP CPUs had to do when there was no DSP but only a CPU in the installed base) isn't impossible on microcontrollers, just somewhat impractical on many of them. And the AVR ATMega that is common in Arduino [google.com] includes a HW multiplier that can be used [atmel.com] for multiply-accumulate, the core operation of DSP:

The component that makes a dedicated digital signal processor (DSP) specially suitable for signal processing is the Multiply-Accumulate (MAC) unit. This unit is functionally equivalent to a multiplier directly connected to an Arithmetic Logic Unit (ALU). The megaAVR microcontrollers are designed to give the AVR family the ability to effectively perform the same multiply-accumulate operation. This application note will therefore include examples of implementing the MAC operation.

Meanwhile, new Arduino Due boards include ARM processors with HW multipliers [diydrones.com] .

Even if the processor/microcontroller has no HW MAC, it can do DSP - just less efficiently. If its application needs DSP, it can do it.

Unless the developer just insists it's stupid. Then the Arduino cannot do it. But not because of the chip.
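
The MAC operation that application note describes is the inner loop of, for example, an FIR filter. A minimal Python model of the pattern (illustrative only, not AVR code; integer-only math of the kind the AVR's HW multiplier handles):

```python
def fir_filter(samples, coeffs):
    # An FIR filter is nothing but repeated multiply-accumulate (MAC):
    # each output sample is sum(coeffs[k] * samples[n - k]).
    out = []
    for n in range(len(samples)):
        acc = 0  # the accumulator a HW MAC unit keeps in a register
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * samples[n - k]  # one MAC per filter tap
        out.append(acc)
    return out

# Integer-only 3-tap moving-sum filter applied to a step input.
print(fir_filter([0, 0, 3, 3, 3], [1, 1, 1]))  # -> [0, 0, 3, 6, 9]
```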

Re:Can it be done effectively without an FPU? (-1)

Anonymous Coward | more than 2 years ago | (#38759240)

>However, being very simple, they don't have a math coprocessor and are somewhat slow at floating point math. Anything to help improve that would be a boon.

Like not using an arduino to begin with.

Re:Can it be done effectively without an FPU? (3, Interesting)

rev0lt (1950662) | more than 2 years ago | (#38759474)

Not every FFT implementation is floating-point (I believe fixed-point implementations are far more common than floating-point ones), and software-based floating point is not "somewhat slow", it is "slow as hell". For many applications, fixed-point and good ol' lookup tables (either complete tables, or sample points with interpolation) do the trick rather well.
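
The lookup-table-plus-interpolation trick can be made concrete. Here's a Python model of a Q15 fixed-point sine using a 256-entry table and linear interpolation, integer math only (a sketch; on a real MCU this would be C with the table in flash):

```python
import math

# Q15 fixed-point sine via a small lookup table with linear interpolation,
# the kind of trick used when software floating point is too slow.
TABLE_BITS = 8
TABLE = [round(math.sin(2 * math.pi * i / (1 << TABLE_BITS)) * 32767)
         for i in range(1 << TABLE_BITS)]

def fixed_sin(phase):          # phase: 16-bit angle, 0..65535 == 0..2*pi
    idx = phase >> 8           # top 8 bits index the table
    frac = phase & 0xFF        # low 8 bits interpolate between entries
    a = TABLE[idx]
    b = TABLE[(idx + 1) & 0xFF]  # wrap around at the end of the table
    return a + (((b - a) * frac) >> 8)   # Q15 result, integers only

# Compare against math.sin at an arbitrary angle.
angle = 12345
approx = fixed_sin(angle) / 32767
exact = math.sin(2 * math.pi * angle / 65536)
print(abs(approx - exact) < 0.001)  # -> True
```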

Re:Can it be done effectively without an FPU? (1)

YoopDaDum (1998474) | more than 2 years ago | (#38759750)

A math coprocessor is designed to speed up floating-point computations. But an FFT can be applied to integers too, and there are plenty of integer-based FFT applications (telecom, audio...). In these cases a basic FPU will not help you. On a processor as small as the Arduino's, any use of the FFT will likely be integer based, with a small FFT size (think coarse spectrum analyzer, for example).

The reason I say "basic FPU" above is that it seems to me most vector engines can also operate on integers, not only on floats, so they're not strictly FPUs anymore. Such vector units can typically also be used to speed up integer FFTs, using an FFT-specific addressing scheme (search for "butterfly diagram" on Wikipedia if you're interested). But then we're far from an Arduino ;)

Security (2)

TaoPhoenix (980487) | more than 2 years ago | (#38759112)

Doesn't some part of Internet Security (of ____, I'll leave that to my betters to explain) rely on Fourier properties?

So if we can bust those "Faster than Fast", what next?

Re:Security (4, Informative)

SuricouRaven (1897204) | more than 2 years ago | (#38759168)

Nope. FFTs are used in many fields, but cryptography is not one of them. Not since the old analog radio scramblers of decades past.

Re:Security (4, Informative)

gnasher719 (869701) | more than 2 years ago | (#38759394)

Nope. FFTs are used in many fields, but cryptography is not one of them. Not since the old analog radio scramblers of decades past.

Informative and wrong. When you calculate powers of 4096 bit numbers, the fastest way to calculate the 4096 bit products is convolution via FFT.

Re:Security (4, Informative)

Anonymous Coward | more than 2 years ago | (#38759504)

Nope. FFTs are used in many fields, but cryptography is not one of them. Not since the old analog radio scramblers of decades past.

Informative and wrong. When you calculate powers of 4096 bit numbers, the fastest way to calculate the 4096 bit products is convolution via FFT.

These matrices are not sparse, though, and thus the algorithms under discussion are not suitable, or any faster than FFT.

Re:Security (1)

SuricouRaven (1897204) | more than 2 years ago | (#38759688)

I was not aware that the mathematics of multiplication were equivalent to the FFT. Interesting. I'm not sure how to multiply via convolution though, other than the obvious trivial cheat of a 1x1 matrix.

Re:Security (3, Informative)

Damnshock (1293558) | more than 2 years ago | (#38759726)

Convolution in the time domain = multiplication in the transformed domain (and vice versa).

Sometimes it's easier to go to the transformed domain, multiply there, and then go back to the time domain :)
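
That is exactly how FFT-based big-number multiplication works: treat the digit lists as polynomial coefficients, transform, multiply pointwise, and transform back to get the convolution. A toy pure-Python sketch (illustrative only; real bignum libraries use heavily optimized variants):

```python
import cmath

def fft(a, invert=False):
    # Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)  # the "butterfly" twiddle
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def poly_mul(p, q):
    # Multiply polynomials (= convolve coefficient lists) via the FFT:
    # pointwise product in the frequency domain is convolution in "time".
    n = 1
    while n < len(p) + len(q) - 1:
        n *= 2
    fp = fft([complex(c) for c in p] + [0] * (n - len(p)))
    fq = fft([complex(c) for c in q] + [0] * (n - len(q)))
    prod = fft([a * b for a, b in zip(fp, fq)], invert=True)
    return [round(c.real / n) for c in prod[:len(p) + len(q) - 1]]

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2; with digit carries added, this is
# how FFT-based big-integer multiplication works.
print(poly_mul([1, 2], [3, 4]))  # -> [3, 10, 8]
```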

Re:Security (4, Informative)

misof (617420) | more than 2 years ago | (#38759868)

Yes, FFT may be used in cryptography. But this is unrelated, as the first post in this thread talks about security. FFT has no connection to the security of cryptosystems.

As far as I'm aware, the security of *absolutely no* cryptosystem used in practice depends in any way on the FFT.

Yes, FFT gives us a way to multiply big integers quickly. But all cryptosystems that use big integers already *do* assume that everyone *can* multiply big integers quickly. Even if there was a ten-times speedup, this would mean absolutely no danger to their security.

(And one final nitpick: FFT is not the fastest way to multiply 4096-bit integers; those are still considered fairly short, and you would probably get better performance from a simpler but asymptotically slower algorithm.)
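
One such simpler, asymptotically slower algorithm is Karatsuba (about O(n^1.585)), which typically wins over FFT-based methods at "short" sizes like 4096 bits. A minimal sketch (illustrative; real libraries tune the base-case cutoff far more carefully):

```python
def karatsuba(x, y):
    # Karatsuba multiplication of non-negative integers: replaces the four
    # half-size products of schoolbook splitting with three.
    if x < 10 or y < 10:
        return x * y  # small enough: multiply directly
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)   # x = hi_x * 2^m + lo_x
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)
    a = karatsuba(hi_x, hi_y)
    b = karatsuba(lo_x, lo_y)
    c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b  # cross terms, one mult
    return (a << (2 * m)) + (c << m) + b

print(karatsuba(12345678901234567890, 98765432109876543210) ==
      12345678901234567890 * 98765432109876543210)  # -> True
```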

Re:Security (1)

Anonymous Coward | more than 2 years ago | (#38759424)

The fastest algorithms for multiplying large integers employ FFT. Some cryptosystems multiply large integers...

soft computing (-1)

Anonymous Coward | more than 2 years ago | (#38759114)

is the future.

Doesn't sound THAT useful (5, Interesting)

tempmpi (233132) | more than 2 years ago | (#38759120)

It doesn't improve the regular FFT, only sparse variants. Image and audio compression nearly always use the regular non-sparse FFT.

Re:Doesn't sound THAT useful (3, Interesting)

Anonymous Coward | more than 2 years ago | (#38759256)

The paper claims the opposite: "Images and audio data are equally sparse. This sparsity provides the rationale underlying compression schemes such as MPEG and JPEG."

Re:Doesn't sound THAT useful (5, Informative)

MojoRilla (591502) | more than 2 years ago | (#38759306)

This isn't right. The proposed algorithm is better on sparser signals, but is a regular FFT. The point is that many things we want to compress are sparse, or at least have pieces that are sparse. In JPEG, for example, images are broken into 8x8 pixel blocks and then DCT'ed (which is similar to FFT). Many of those blocks might be sparse, and benefit from this new approach, even if the most complex blocks aren't and don't benefit.
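
That sparsity is easy to check on a toy block: the 2-D DCT of a smooth gradient concentrates essentially all its energy in a handful of coefficients. A naive pure-Python sketch (real JPEG codecs use fast factorized DCTs; the gradient block is just an example input):

```python
import math

def dct2(block):
    # Naive 2-D DCT-II of an 8x8 block (the transform JPEG applies per block).
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            out[u][v] = cu * cv * s
    return out

# A smooth gradient block: almost all energy lands in a few coefficients.
block = [[8 * x for _ in range(8)] for x in range(8)]
coeffs = dct2(block)
big = sum(1 for row in coeffs for c in row if abs(c) > 1e-3)
print(big)  # -> 5: the other 59 of the 64 coefficients are (numerically) zero
```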

Re:Doesn't sound THAT useful (1)

fph il quozientatore (971015) | more than 2 years ago | (#38759514)

Yeah, but a big-O improvement may do you no good if you only need n=8...

Re:Doesn't sound THAT useful (1)

dkf (304284) | more than 2 years ago | (#38759568)

Yeah, but a big-O improvement may do you no good if you only need n=8...

Depends on what sort of complexity it is. When the complexity function is larger than polynomial, even n=8 can be worrying. (I remember working with algorithms that were O(EXP^2(n)) at one point, many years ago; they had combinatorial searches over a system where the test function was itself an NP-hard problem. Nasty, and we couldn't use the structure of the problem to restrict anything much either. Suffice it to say that at the time we were having problems finding supercomputers large enough to run our code...)

Re:Doesn't sound THAT useful (2)

fph il quozientatore (971015) | more than 2 years ago | (#38759700)

The complexity of the FFT is O(n log n).

Re:Doesn't sound THAT useful (-1)

Anonymous Coward | more than 2 years ago | (#38759972)

I helped your mom by giving her a big-O

Re:Doesn't sound THAT useful (5, Interesting)

Anonymous Coward | more than 2 years ago | (#38759316)

I'm only awake due to a bathroom break, so I may not be the most clear-minded at the moment, but subtracting coefficients from the bins seems like a pretty obvious permutation of algorithmic techniques to try, and a 2/3 probability of a correct FT analysis doesn't seem like an inspiring trade-off (I skimmed it... perhaps they explain ways to improve this?). Is this anything more than a highly specialized, not particularly practical version of the algorithm? I'm used to working on algorithms that are much more general-case (e.g., new solutions for ScaLAPACK problems).

Whoa (5, Funny)

HappyClown (668699) | more than 2 years ago | (#38759406)

Let me get this straight - you're saying you woke up in the middle of the night intending to take a dump, and somehow ended up posting about complex mathematical algorithms on the Internet instead? Respect.

Re:Whoa (2)

K. S. Kyosuke (729550) | more than 2 years ago | (#38759552)

Let me get this straight - you're saying you woke up in the middle of the night intending to take a dump, and somehow ended up posting about complex mathematical algorithms on the Internet instead? Respect.

If the dump takes a long time (which it sometimes does), what else is one supposed to do in the meantime, sitting on the toilet bowl with a laptop in one's lap?

It could be huge (1)

Anonymous Coward | more than 2 years ago | (#38759436)

In a cell phone or wi-fi, there are FFTs all over the place. They use them for things you would never suspect ... even if you have taken a DSP course. In modern radios (cell phones or anything else connected wirelessly) the DSP is the main chip.

Every time you find a computationally efficient way to do something, you improve battery life. So, phones can stay the same size and take advantage of even more streaming content.

Re:It could be huge (2)

YoopDaDum (1998474) | more than 2 years ago | (#38759858)

It's actually not very applicable to telecommunications. Yes, FFTs are a key item in all OFDM/OFDMA-based protocols like WiFi and WiMAX/LTE. But the content is definitely not sparse at all: each FFT bin is a carrier frequency, and in OFDM all carriers are used when transmitting, so it's clearly not sparse.
In OFDMA only some carriers may be used if the network is lightly loaded, but this can change in real time and you may not know in advance whether the signal is sparse (in LTE, for example, a device only gets its own allocation just ahead of time). The worst case will be common, and less efficient than a plain FFT.
That's why the authors mention pictures, video, or instrument-based sounds. There you can have significant cases with sparse content in the frequency domain, and be quite sure to win on average.

It's not the algo that's sparse, it's the signal (1)

Nicolas MONNET (4727) | more than 2 years ago | (#38759502)

Saying that compression uses the regular "non-sparse" algorithm is rather meaningless; implementers use what is available, and I don't believe there was a sparse-optimized algorithm until now.

Wonder how it'd "work" on cats. (5, Funny)

sethstorm (512897) | more than 2 years ago | (#38759146)

So the cat gets transformed even faster [xkcd.com] .

(apologies to XKCD)

Re:Wonder how it'd "work" on cats. (4, Funny)

Anonymous Coward | more than 2 years ago | (#38759392)

So the cat gets transformed even faster [xkcd.com] .

(apologies to XKCD)

Be glad it's not the furrier transform of your cat :)

Re:Wonder how it'd "work" on cats. (1)

sethstorm (512897) | more than 2 years ago | (#38759698)

Mine's dead, so I'd be kind of worried about any kind of spontaneous transforms.

Potentially huge digital A/V benefits (4, Informative)

sound+vision (884283) | more than 2 years ago | (#38759156)

Lossy compression such as MP3, AAC, and according to TFS also video compression, is fundamentally based on the FFT (fast Fourier transform). Depending on how well this new method works, we could see decreases in the time it takes to encode all this stuff. Which can be quite a long time... have you ever tried to do a 2-pass MPEG-4 conversion of a feature-length digital video? Video encoding doesn't scale well to multiple cores either, so nowadays, with performance increases coming mostly from adding more cores, video encoding hasn't been getting faster at the same pace it used to. I'm thinking this new FFT algorithm could make a big difference in encoding speeds.

Additionally, I know lots of audio (and I'm assuming video) effects, DSPs of the kind you find in Adobe Audition, Audacity, et al., rely on FFTs. Computing such effects could get a lot faster. With increases in both the speed of adding effects, and compression afterwards, we might be seeing a significant speedup in the overall digital audio/video-editing process.

Re:Potentially huge digital A/V benefits (1)

Anonymous Coward | more than 2 years ago | (#38759344)

Who holds the patents for this? That's the most important question.

the MAFIAA obviously (1)

RotateLeftByte (797477) | more than 2 years ago | (#38759550)

They will try to keep this under wraps, as anything that speeds up video encoding/decoding will be classed as anti-Hollywood and a tool for piracy.
Stand by for a few mysterious disappearances of any public copies of the algorithm.

Re:the MAFIAA obviously (2)

semi-extrinsic (1997002) | more than 2 years ago | (#38759772)

Please, enlighten me, how do public copies of algorithms "disappear"? Counterpoint: deCSS, the release of (master) keys for AACS and HDCP.

Anyway, we can just publish it as a poem [cmu.edu] and it's protected under free speech:

Now help me, Muse, for
I wish to tell a piece of
controversial math.

Re:Potentially huge digital A/V benefits (0)

Anonymous Coward | more than 2 years ago | (#38759346)

"Fast Fourier Transform (FFT)" IS NOT equal to "Discrete Fourier Transform (DFT)"

Digital videos/photos use the "Modified Discrete Fourier Transform (MDFT)", using only integer arithmetic (not floating-point numbers, not complex numbers).

For video/photos, a 2-D MDFT over 8x8 pixel blocks is often used; that's a very small number of samples for a Fourier transform.
For sound, a discrete variant of the 1-D FFT over 1024 samples is often used, but it's better to look at the MP3 internals, or any lossy Advanced Audio Coding implementation, for the technical details.

JCPM

Re:Potentially huge digital A/V benefits (0)

Anonymous Coward | more than 2 years ago | (#38759404)

"Fast Fourier Transform (FFT)" IS NOT equal to "Discrete Fourier Transform (DFT)"

Correct. FFT is used to compute the DFT. What was your point again?

Re:Potentially huge digital A/V benefits (1)

Anne Thwacks (531696) | more than 2 years ago | (#38759672)

Well, maybe not "IS", but "can be", under some circumstances, when data is dynamic. If data is static, there is often no benefit.

And yes, I have done 8-bit integer FFTs. Loads of them!

Re:Potentially huge digital A/V benefits (5, Insightful)

drinkypoo (153816) | more than 2 years ago | (#38759506)

Video encoding doesn't scale well to multiple cores either,

Welp, that is patently false. Video encoding is embarrassingly parallel in any case where the video has keyframes and is sufficiently long to hand a roughly equal number of chunks of video (keyframe to keyframe) to each core.

Re:Potentially huge digital A/V benefits (2)

alexhs (877055) | more than 2 years ago | (#38759526)

I'm thinking this new FFT algorithm could make a big difference in encoding speeds.

I'm not so sure. It has the potential to make a big difference in decoding speed:
I would think that the input signal is not that sparse, but rather that it has plenty of small components. The goal of the compression is to remove the smallest components as unnoticeable noise. Only after that step do you get a sparse signal. So what you can actually improve significantly is the inverse FFT (which uses basically the same algorithm as the FFT), used for decoding.
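A quick Python sketch of that thresholding step (illustrative only; real codecs use a DCT plus perceptual quantization rather than a bare FFT threshold):

```python
import numpy as np

# A signal with two strong tones buried in low-level noise.
rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
signal = np.sin(2 * np.pi * 50 * t / n) + 0.5 * np.sin(2 * np.pi * 120 * t / n)
signal += 0.01 * rng.standard_normal(n)

spectrum = np.fft.fft(signal)

# "Compression": drop the small components as unnoticeable noise.
threshold = 0.1 * np.abs(spectrum).max()
sparse_spectrum = np.where(np.abs(spectrum) >= threshold, spectrum, 0)

# The decoder only ever sees the sparse spectrum -- exactly the case
# where a sparse inverse FFT could pay off.
kept = np.count_nonzero(sparse_spectrum)
reconstructed = np.fft.ifft(sparse_spectrum).real
error = np.max(np.abs(reconstructed - signal))
print(kept, "of", n, "coefficients kept; max error", error)
```

The full spectrum has noise in every bin; only after thresholding does it become sparse, which is the parent's point about where the speedup would land.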

Re:Potentially huge digital A/V benefits (3, Informative)

Anonymous Coward | more than 2 years ago | (#38759558)

I have implemented (baseline) H.264 on a GPU, and I tuned it for at least 6 months. The FFT (actually a fast-DCT variant) only took 20% of the runtime: because it was done in 4x4 blocks, it was the most parallel part of the codec and the easiest to implement. By far, the lossless compression and the motion estimation are more complicated. Especially motion estimation, which is very important for good bitrate/quality in a video codec.
So this wouldn't really speed up H.264-like video codecs (only a little bit).

Re:Potentially huge digital A/V benefits (1)

SuricouRaven (1897204) | more than 2 years ago | (#38759784)

Maybe on audio, but on video the FFT itself is only a tiny part of the encoding time. The really slow part is finding motion vectors, which is a completely separate step, and usually the slowest one.

Video encoding actually should scale fantastically to multiple cores - it's almost linear, in theory. It's just that very few encoders support it, due to the complexity of multithreading what is already a very complicated program.

Wish I could understand the details of FFTs (4, Interesting)

Viol8 (599362) | more than 2 years ago | (#38759218)

I'm sure I'm not the only person who completely understands what an FFT does but simply can't get his head around the actual implementation details. I simply don't get how a signal of many mixed frequencies, which essentially means the amplitude is moving all over the place, can travel through some equations and have a neat list of the fundamental and its harmonics pop out. It's mathematical magic to me. Wish I could "get" it. But I guess I'll just have to settle for using it.

Re:Wish I could understand the details of FFTs (0)

Anonymous Coward | more than 2 years ago | (#38759290)

i share your trouble

Re:Wish I could understand the details of FFTs (4, Informative)

lachlan76 (770870) | more than 2 years ago | (#38759292)

In the DFT case the signal is merely a finite number of samples, so you can forget about the time-dependence and look at it as a long vector. If you sit down and work out the maths, you find that the vectors corresponding to the constant function and the various sinusoids are linearly independent.

All you're really doing then is solving a system of linear equations---they don't just have to be sinusoids, you could find the coefficients for any linearly independent set of functions. You could just as easily replace one of the sinusoids with an exponential function and still get coefficients out such that you could do your weighted sum of vectors and get back your original function.

Re:Wish I could understand the details of FFTs (1)

Viol8 (599362) | more than 2 years ago | (#38759328)

I'm afraid you lost me at "long vector"

"sit down and work out the maths"

Ah, if only I could. But I'd settle for understanding it first! :o)

Re:Wish I could understand the details of FFTs (5, Informative)

lachlan76 (770870) | more than 2 years ago | (#38759426)

"Sit down and work out the maths" is really just code for "here's one I prepared earlier". If you're keen on this sort of thing, read a bit about solving systems of linear equations, and you will hopefully be able to look at the problem, exclaim "this is trivial!", and live forever happy in the knowledge that it is indeed theoretically soluble (as I tried to describe above), without having to concern oneself with the computational subtleties that suck all the fun out of it.

MIT OCW have a set of videos on linear algebra if your curiosity is sufficient to justify chewing up some time on it. Probably some of the most useful partially-post-high-school maths that one can learn. Here [ups.edu] is a free textbook on it.

Re:Wish I could understand the details of FFTs (1)

scum-e-bag (211846) | more than 2 years ago | (#38759510)

It's beautiful, isn't it?

I read the linked articles and it dawned on me. So very beautiful.

I'm glad there are others. :))

Re:Wish I could understand the details of FFTs (0)

Anonymous Coward | more than 2 years ago | (#38759466)

Key phrases: "the various sinusoids are linearly independent" and "solving a system of linear equations".

These algorithms are essentially a very specialized way to compute a set of values that, when applied to a set of equations, gives the original waveform.

The new algorithm is even more specialized: it's optimized to only compute the values that matter.

Re:Wish I could understand the details of FFTs (1)

lachlan76 (770870) | more than 2 years ago | (#38759588)

Rereading your comment, I think I may have slightly misunderstood what you were saying. If you have produced a digital signal by sampling an analogue one (i.e. if you look at the values of a signal at various points) over a finite interval, the DFT (or the FFT, its most popular algorithm) will not tell you what the fundamental frequency is.

You can get an approximation by making assumptions about the nature of the signals that you sample, but this is not the best way to do it, and those assumptions are normally that the signal contains only frequencies below half the sample rate.

Re:Wish I could understand the details of FFTs (1)

Damnshock (1293558) | more than 2 years ago | (#38759738)

Well, you can write any sinusoid in form of exponentials, that's why it still works ;)

Re:Wish I could understand the details of FFTs (1)

lachlan76 (770870) | more than 2 years ago | (#38759776)

Sorry, I meant to say a real exponential. It's probably true for more than one as well, but that means more than one line of Octave code to check (or some kind of conscious thought).

Re:Wish I could understand the details of FFTs (1)

expatriot (903070) | more than 2 years ago | (#38759420)

I only got the FFT by working forward from different-frequency sine waves to triangle, sawtooth, square, and other waves. After you do this, you realize it must be possible to go the other way.

A very coarse FFT can be calculated by multiplying the waveform by two square waves that are 90 degrees out of phase and summing each sequence. If there is no component corresponding to the square wave frequency, the two totals will be near zero.
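That square-wave trick can be checked numerically (a rough Python sketch of the parent's description; the two sums only approximate a single DFT bin):

```python
import numpy as np

n = 1000          # samples per period of the probe frequency
cycles = 8        # whole cycles, so other frequencies cancel out
t = np.arange(n * cycles)

# In-phase and quadrature square waves at the probe frequency.
phase = 2 * np.pi * t / n
sq_i = np.sign(np.sin(phase))
sq_q = np.sign(np.cos(phase))

def coarse_bin(signal):
    """Multiply by the two square waves and sum each product sequence."""
    return np.hypot(np.sum(signal * sq_i), np.sum(signal * sq_q))

present = coarse_bin(np.sin(phase))          # same frequency: big total
absent = coarse_bin(np.sin(3.5 * phase))     # unrelated frequency: near zero
print(present, absent)
```

The two square waves 90 degrees apart catch the component regardless of its phase, which is why both sums are needed.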

Re:Wish I could understand the details of FFTs (1)

gnasher719 (869701) | more than 2 years ago | (#38759430)

Well, that is not what the FFT does. You have to look at the plain old FT (Fourier transform) first, not the FFT (fast Fourier transform); better still, start with the CFT (continuous Fourier transform).

The CFT transforms a continuous signal into continuous frequencies, and it does that by integrating the input signal multiplied by a complex sine wave. The mathematics for that is reasonably understandable; actually doing the calculations is horribly difficult.

The DFT (discrete Fourier transform) approximates the CFT by first changing the input signal to discrete values (like 44,100 per second for music), calculating only discrete frequencies (440 Hz, 441 Hz, 442 Hz...), and replacing the integral with a sum. So you calculate, for each frequency, the sum of (input signal times sine wave). For each of n outputs you add n values, for a total of n^2 products.

The FFT, or fast Fourier transform, uses a clever algorithm and properties of the sine function to reduce the number of calculations to about n * log(n). Understanding the FFT has nothing to do with your problem any more; it is just understanding a clever optimization.
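The n^2 sum described above is only a few lines of code; here is a naive DFT sketch in Python (purely illustrative, not how real FFT libraries work):

```python
import cmath
import math

def naive_dft(x):
    """O(n^2) DFT: for each output bin, sum the input times a complex sine wave."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A pure tone three cycles long shows up as a conjugate pair of spikes.
n = 8
x = [math.cos(2 * math.pi * 3 * t / n) for t in range(n)]
bins = [abs(c) for c in naive_dft(x)]
print([round(b, 6) for b in bins])   # spikes of height n/2 at bins 3 and 5
```

The FFT computes exactly these same outputs, just with the n^2 products factored into shared sub-sums.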

Re:Wish I could understand the details of FFTs (0)

Anonymous Coward | more than 2 years ago | (#38759444)

Start by looking at the continuous Fourier transform. It is a function that translates a time-based (or spatial) function into a frequency (or spatial-frequency) function. For each frequency point you are integrating the product of the time/space function with a kernel over all time/space. For a given frequency, the kernel represents the magnitude and phase (in complex form) of a sinusoid of that frequency. In essence, the Fourier transform is computing a complex correlation coefficient (magnitude and phase) for each frequency. That coefficient tells you how much of each frequency is present in the original signal (magnitude) and how much it must be shifted (phase). By summing the scaled and shifted sinusoids over all possible frequencies you can (in theory) reconstruct the original signal.

Now on to the FFT: The FFT is a fast implementation of the Discrete Fourier Transform (DFT). The DFT is a discrete representation of the continuous Fourier transform derived using sampling theory and discrete time signal processing. If you understand the continuous case, you understand the discrete case, at least in concept. There is a lot of math that goes into deriving how continuous signals are sampled and processed using DFTs/FFTs, which is the basis for Digital Signal Processing.

Hope this helps.

Re:Wish I could understand the details of FFTs (5, Insightful)

davidoff404 (764733) | more than 2 years ago | (#38759528)

The most important bit of practical advice I ever received was given to me by an advisor during the first week of my PhD programme:

"You might think you understand the mathematics behind a model but until you can sit down and code it up on a computer, you can't say you really get it."

This was mentioned within the context of building a model for gravitational wave signatures for binary black hole systems, but I've found it applies to everything. You can't claim to understand a piece of mathematics properly until you can break it down into steps that a computer can process.

If you're interested in applying this to Fourier transforms there are a few really good books that explain the procedure in detail. I'd start with this [amazon.com] , which focuses mainly on wavelets but also includes a really good popular account of the importance of Fourier transforms and FFT in particular. Then I'd go and get myself a copy of this [amazon.com] , which explains FFTs in much more detail. It's even got code for the FFTs in BASIC!

Re:Wish I could understand the details of FFTs (0)

Anonymous Coward | more than 2 years ago | (#38759548)

Here's a pretty good explanation on how Fourier Transforms work: http://altdevblogaday.com/2011/05/17/understanding-the-fourier-transform/

Re:Wish I could understand the details of FFTs (2)

serviscope_minor (664417) | more than 2 years ago | (#38759574)

I simply don't get how a signal of many mixed frequencies which essentially means the amplitude is moving all over the place can travel through some equations and a neat list of the fundamental and harmonics can pop out.

But it's easiest to start with the O(n^2) DFT, not the FFT.

As always, the only way to truly understand it is to work out the maths. But, if it helps, let's try a simple one.

Imagine you have some vectors in 4D called a, b, c, d, which are respectively [1 0 0 0], [0 1 0 0], [0 0 1 0] and [0 0 0 1].

You can see by inspection that a.a = 1, b.b = 1 etc., but a.b = 0 and so on. I.e. if you dot a vector with itself you get exactly 1; if you dot it with any of the others you get exactly 0. I will call this Property 1.

Now imagine you have 4 numbers, e, f, g, h. With suitable choices, you can make up any 4D vector by doing v = ae + bf + cg + dh.

Using Property 1, you can see v.a = a.(ae + bf + cg + dh) = e. In other words, if you dot the new vector with one of the original vectors, the amount of that original vector pops out.

You can trivially verify this. Clearly in this case v = [e f g h], so it's obvious that e corresponds to the amount of [1 0 0 0] in v.

Now here's the fun part. Everything except the trivial verification above used only Property 1. The only requirement is on the dot products, and it still works.

Try it with the four vectors [1 1 1 1]/sqrt(4), [1 1 -1 -1]/sqrt(4), [1 -1 0 0]/sqrt(2) and [0 0 1 -1]/sqrt(2). You can verify Property 1 very easily. You can now take any 4D vector and find out how much of [1 1 -1 -1] there is in it, using the same method as above.

We still haven't done the DFT (or FFT) yet. Actually, we've just covered the Haar wavelet transform.

Nothing above is limited to 4 dimensions. It will work equally well in 100, 1000 or 1024 dimensions. You just need more vectors. You can at this point almost certainly invent all sorts of patterns of vectors (in 2^N dimensions) that fit Property 1.

Well, here's a special one. Imagine that i is the index of a point in the vector, and so goes from 0 to 2^N-1.

You can make vectors whose elements are cos(n*i*pi / (2^N-1)) and sin(n*i*pi / (2^N-1)) for various values of n. You will find that they all satisfy Property 1. So now you can find out how much of those cos and sin vectors is in your original vector v.

Or, in other words, you can find out how much of a certain frequency is in there.

And that's the DFT in a nutshell.

If you do some manipulation, you can see that this maths is just a matrix times a vector:

[e]   [a_vector]   [v_0]
[f] = [b_vector] * [v_1]
[g]   [c_vector]   [v_2]
[h]   [d_vector]   [v_3]

In general there is no O(N log N) method to perform this multiplication. But for certain problem structures (like the FFT's) there are faster algorithms, which are still doing the same maths.

Another fun fact is that any matrix whose rows satisfy Property 1 is a rotation matrix. So, you're just rotating your data. And you can get a Fourier transform by basically making everything bigger until it's continuous. So there you see, a Fourier transform is just a rotation in an infinite-dimensional space.

Ta da.
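The dot-product check and the coefficient extraction above can be tried directly with NumPy (a sketch using the same four Haar vectors from the comment):

```python
import numpy as np

# The four orthonormal vectors from the comment (the 4-point Haar basis).
basis = np.array([
    [1, 1, 1, 1],
    [1, 1, -1, -1],
    [1, -1, 0, 0],
    [0, 0, 1, -1],
], dtype=float)
basis[0:2] /= np.sqrt(4)
basis[2:4] /= np.sqrt(2)

# Property 1: dot of a vector with itself is 1, with any other is 0.
assert np.allclose(basis @ basis.T, np.eye(4))

# Build v from known coefficients, then recover them by dotting with the basis.
coeffs = np.array([3.0, -1.0, 0.5, 2.0])
v = coeffs @ basis               # v = 3a - 1b + 0.5c + 2d
recovered = basis @ v            # each dot product pops its coefficient back out
print(recovered)
```

Swapping the rows of `basis` for sampled sines and cosines turns this same code into the DFT.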

Re:Wish I could understand the details of FFTs (1)

Splab (574204) | more than 2 years ago | (#38759790)

Why is it that math books, and people into math, use words like "obvious" or "you can very easily"? It really, really destroys your reader's enthusiasm for following what you are doing.

Yes, it might very well be obvious and trivial to you, but it sure as hell makes me feel dumb when I don't get the obvious on the first pass.

Other than that, nice intro to FT.

Ok, I guess I'm mathematically a bit thick but... (1)

Viol8 (599362) | more than 2 years ago | (#38759910)

... how do you get from a load of, say, 16-bit samples to vectors? If I have sine wave sample data in an array such as the following:

0,5,10,5,0,-5,-10,-5,0

how does that then become a sequence of vectors to calculate the FT?

Re:Ok , I guess I'm mathematically a bit thick but (1)

Rockoon (1252108) | more than 2 years ago | (#38759994)

0,5,10,5,0,-5,-10,-5,0

You can consider them as vectors, but normally they are considered complex numbers.

0 + 0i, 5 + 0i, 10 + 0i, 5 + 0i, ....
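For the concrete samples in question, a library FFT shows what pops out (a sketch; note the trailing 0 repeats the start of the next cycle, and the data is really a triangle wave rather than a pure sine, so smaller odd harmonics appear alongside the main spike):

```python
import numpy as np

samples = [0, 5, 10, 5, 0, -5, -10, -5, 0]
# Drop the final 0: it is the start of the next period, and we want the
# DFT to see exactly one cycle over 8 samples.
x = np.array(samples[:-1], dtype=complex)   # real samples, imaginary part 0

mags = np.abs(np.fft.fft(x))
print(mags.round(3))   # dominant conjugate-pair spike at bins 1 and 7
```

The whole array of samples is the "long vector" the earlier comments were talking about; no per-sample conversion is needed beyond promoting each value to a complex number.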

Re:Wish I could understand the details of FFTs (0)

Anonymous Coward | more than 2 years ago | (#38759638)

I think intuitively of the Discrete Fourier Transform as wrapping the amplitude signal (like a thread, with amplitude encoded, say, as the mass at each point of the thread) around a unit circle (like a wheel). At any point on this wheel, amplitudes (masses) are additive. At equally spaced points on this wheel, we measure the total amplitude. If, for example, the frequency pi/2 is prominent in the original signal, many points of the string with large amplitudes will end up on top of the pi/2 point on the unit circle, thus resulting in a large pi/2 spike in the spectrum.

Re:Wish I could understand the details of FFTs (1)

Viol8 (599362) | more than 2 years ago | (#38759882)

Ok , I think I understand what you're on about but if amplitudes are additive wouldn't they cancel out over the course of N whole waveform samples because you'd be adding the positive and negative values together? Or do you just sample N+1/2 waveforms?

Re:Wish I could understand the details of FFTs (1)

Delgul (515042) | more than 2 years ago | (#38759958)

It is actually quite simple provided you have at least some basic math skills. Don't try to wrap your head around the math involved just yet. Just do this:

1) Have a look at http://en.wikipedia.org/wiki/Fourier_transform [wikipedia.org] and read just the Definition and Introduction.
2) Get a tool like Matlab or Octave (the latter is OSS)
3) Generate a pure sine wave signal and put it through the formulas you found in 1). You should get a single spike in your results.
4) Now add a second sine wave with a different frequency to the first signal, and put that signal through the same formulas. You should find two spikes.
5) Experiment with this, adding signals and seeing how the amplitude and frequency affect the spike height and position.

When you have a feeling for this, THAT is the time to read the entire article. You will find it easier to understand.

You now have a reasonable understanding of the Fourier transform. The fast Fourier transform, or classic FFT, is no more than a mathematical trick to make these calculations faster, and actually it has its drawbacks, like your number of samples must be a power of two, and some other stuff I won't go into here, although these are acceptable in most practical cases. This new FFT seems to be a lossy variant of the FFT which will affect the resulting signal, discarding the contribution of frequency areas with low energy content. It could be especially useful in situations where that loss of information is acceptable, as it is in sound and video to some degree. However, I did not really read up on this new method, so I could be off the mark...
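Steps 3 and 4 of the recipe above, sketched with Python/NumPy instead of Matlab or Octave (same experiment):

```python
import numpy as np

fs = 1000                      # sample rate in Hz
t = np.arange(fs) / fs         # one second of samples

# Step 3: a pure 50 Hz sine gives a single spike (plus its mirror image).
one_tone = np.sin(2 * np.pi * 50 * t)
mags = np.abs(np.fft.fft(one_tone))
peaks = np.flatnonzero(mags > mags.max() / 2)
print(peaks)                   # bins 50 and 950

# Step 4: add a second tone and a second spike appears.
two_tones = one_tone + 0.8 * np.sin(2 * np.pi * 120 * t)
mags2 = np.abs(np.fft.fft(two_tones))
peaks2 = np.flatnonzero(mags2 > mags2.max() / 2)
print(peaks2)                  # bins 50, 120 and their mirror images
```

The mirror-image spike above half the sample rate is the conjugate symmetry you get for any real-valued input.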


Autotune (1)

X.Coward (1757734) | more than 2 years ago | (#38759440)

Does this mean we get better and faster autotune?

Faster Mersenne Prime Calculations? (3, Interesting)

DarkHelmet (120004) | more than 2 years ago | (#38759472)

From what I know, the Great Internet Mersenne Prime Search [mersenne.org] (GIMPS) uses a Fast Fourier Transform to quickly find the square of a number. This is a required part of the Lucas-Lehmer test (the test that determines if the number is prime).

If this form of FFT can do fast squaring, it will reduce the amount of time taken to find new, large primes.

This is a potentially exciting development in this field.
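For reference, the Lucas-Lehmer test itself is short; the repeated squaring is where FFT-based big-number multiplication earns its keep (a plain-Python sketch, illustrative only, since Python's built-in ints don't use FFT multiplication):

```python
def lucas_lehmer(p):
    """Return True iff 2^p - 1 is prime, for an odd prime p.
    The squaring of s each iteration is the step GIMPS does via FFT."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Known Mersenne primes: 2^p - 1 for p = 3, 5, 7, 13; composite for p = 11.
result = [p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)]
print(result)   # [3, 5, 7, 13]
```

For the multi-million-digit candidates GIMPS tests, each of those p - 2 squarings is a huge FFT-based multiplication, which is why FFT speed dominates the runtime.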

Re:Faster Mersenne Prime Calculations? (1)

gnasher719 (869701) | more than 2 years ago | (#38759606)

If this form of FFT can do fast squaring, it will reduce the amount of time taken to find new, large primes.

It wouldn't, because it only works when your data contains many zeroes. The products in GIMPS are of pseudo-random data, so I would expect half the bits to be non-zero, and almost none of the numbers involved to be zero.

Re:Faster Mersenne Prime Calculations? (0)

Anonymous Coward | more than 2 years ago | (#38759620)

I'm afraid it can't because the transforms used in bigint arithmetic generally aren't sparse.

Nice. More tuning! :) (1)

msobkow (48369) | more than 2 years ago | (#38759482)

No matter how good you think the latest and greatest algorithm is, there is usually someone who can come up with a way to improve on it.

For example, I DO know how to structure the probes done by MSS Code Factory such that you could stuff the rule base in Oracle and use some very Oracle-specific syntax to do the probes in one query, taking the first record from a sorted result set as your "answer" to the probe. I've written similar queries for other databases, but they're constrained by how many levels deep you want to write the query for. As there is no telling how deep the query would be for MSS Code Factory, and I don't want to tie myself to an expensive product like Oracle, what I have now is pretty much the best I can do at the moment. Maybe some day I'll come up with more tuning ideas, but not right now.

Kudos to the FFT team. I look forward to reading the details, though my memory of FFT math is pretty old, fuzzy, and shaky nowadays.

I used to be heavily into computer graphics and audio processing in the university days, but I've spent decades focusing on business programming instead. Unless you really LOVE graphics and sound, and want a job in a very narrow field (including video games), I think it's inevitable that you put away the toys and pick up the tools of industry that will earn you a pay cheque.

Well put, & REALISTIC... apk (0)

Anonymous Coward | more than 2 years ago | (#38759692)

Same deal here: I was "into" other areas of programming (mainly systems programming) but found that the job market was far more active in what you're calling "business programming" (i.e.-> MIS/IS/IT work & mostly DB-related areas, which is why "SQL's your pal").

* QUESTION - Does Oracle still provide the "OO40" middleware & API's to communicate with it (vs. ADO/RDO etc.)?

See... Last time I coded around it 1999-2001, the company I worked for used a combination of ADO (faster on READS from Oracle's DB devices) + OO40 (faster on WRITE, going BACK to the Oracle DB's devices) from a 32-bit VB6 front-end (calling stored procs from Oracle).

(That was a decade++ ago though... so, just curious & asking!)

APK

P.S.=> Nice to see a realist around here... & you're right about "engines/algorithms" too - sooner or later, someone finds a better way of doing things, even IF only in "exception" situations with specific data/conditions, etc./et al... apk

Re:Well put, & REALISTIC... apk (1)

msobkow (48369) | more than 2 years ago | (#38759938)

I've no idea. I've been using JDBC for the past 15 years, or ODBC for .Net, and it's been 3-4 years since I last touched Oracle. The tuning tricks I was taught 20 years ago still seem to work though, and for most databases, not just Oracle.

I have 30+ years of RDBMS experience, having cut my teeth on the very earliest releases of Oracle, Sybase, and Ingres back when I was working for NorTel in around '89. I was on a project tasked to write real life applications with each of them so the company could compare performance. So three of us each developed our approaches independently, focusing on one database, but working together to make sure we were implementing the same business solution in each case.

While my database assignment was Ingres 1.0, I learned a lot about the differences from our design meetings, and went from there to Florida, where I soon became an Oracle DBA-level resource, brought on as a query tuning expert by Orange County's GIS department, AT&T's Payroll Processing department, and a couple others. I tuned a 27 hour GIS department query to run in 2-3 hours, and took AT&T's payroll processing down to around 20% of the run time.

When it comes to databases, I know my schite, but that doesn't mean I know ALL the tricks by any means. The more I learn, the more I realize how much I don't know.

For example, I've never used Oracle's C/C++ OCI APIs.

"Tenfold"? (1)

KramberryKoncerto (2552046) | more than 2 years ago | (#38759652)

TFA trivializes the result. A reduction of up to 90% the speed is not theoretically interesting at all - it could always be outweighed by differences in architecture and implementation, which often make older algorithms faster. There are many better ways to describe the reduction in time complexity to a layman.

Re:"Tenfold"? (1)

KramberryKoncerto (2552046) | more than 2 years ago | (#38759658)

Typo: A reduction of up to 90% the "time"...

FTS applications... (1)

geogob (569250) | more than 2 years ago | (#38759762)

As a scientist in the field of Fourier transform spectroscopy, I find this perspective quite interesting, even though it is not that different from other techniques we have tried to apply in the field to improve performance. Needless to say, we are quite dependent on FFT performance in this field, even more so since the deployment of imaging FTS systems. Any improvement in FFT performance is noteworthy.

Let's see if it's one of those "fields" where drastic performance improvements can be had! Regardless, it's quite an interesting break for signal processing.

SOPA SODA (0)

peawormsworth (1575267) | more than 2 years ago | (#38759786)

This is all very interesting. But can you explain to me how SODA relates to SOPA?

The Endochronic Properties of Resublimated Fourine (-1)

Anonymous Coward | more than 2 years ago | (#38759888)

Perhaps, in time, and with sufficiently careful resublimation, an FFT technique could be devised that could return the FT before the signal itself even existed!

(Asimov is duly acknowledged.)

Theoretical Minimum Joules Per Bit? (1)

Doc Ruby (173196) | more than 2 years ago | (#38759894)

Has research established a theoretical minimum amount of joules required to transmit one bit of info, at the limits of computation?
