
Next-Gen Low-Latency Open Codec Beats HE-AAC

timothy posted about 3 years ago | from the ogg-still-rocks-anyhow dept.


Aldenissin writes "From the Xiph.org developers, Opus is a non-patent-encumbered codec designed for interactive uses that require very low latency, such as VoIP, telepresence, and remote jamming. When they started working on Opus (then known as CELT), they used the slogan 'Why can't your telephone sound as good as your stereo?', and they weren't kidding. Now, test results pit Opus against HE-AAC, one of the strongest (but highest-latency) codecs at this bitrate: Opus beat two of the most popular and respected encoders for the format on the majority of individual audio samples, and received a higher average score overall. Hydrogenaudio conducted a 64kbit/sec multiformat listening test including Opus, aoTuV Vorbis, two HE-AAC encoders, and a 48kbit/sec AAC-LC low anchor, comparing 30 diverse samples using the highly sensitive ABC/HR methodology. In the test, Opus was running with 22.5ms of total latency, but the codec can go as low as 5ms."


timothy beats his meat (-1)

Anonymous Coward | about 3 years ago | (#35824020)

my first post beats yours.

Re:timothy beats his meat (1)

by (1706743) (1706744) | about 3 years ago | (#35824588)

"Low-Latency" is in the summary title, and that's really the best you can do for a first post?

Re:timothy beats his meat (0)

Anonymous Coward | about 3 years ago | (#35824966)

4.56×10^6 milliseconds of latency and that's the best you can do?

Next level beats (3, Funny)

MadAhab (40080) | about 3 years ago | (#35824028)

This will be perfect for my next level beats.

Re:Next level beats (1)

davester666 (731373) | about 3 years ago | (#35825972)

The only possible way they can actually make a legally truthful claim of a "non-patent encumbered codec" is if the codec were designed and specified, if not actually implemented, far enough in the past that any patents granted at the time have all expired.

Perhaps they could switch to "has not yet been challenged in court for any possible patent infringement". But who would use a codec like that? Besides Google, of course.

HE-AAC is worse than LE-AAC in terms of quality (0, Troll)

carlhaagen (1021273) | about 3 years ago | (#35824052)

HE-AAC uses SBR to reduce its data footprint. This results in worse reproduction of the source audio than LE-AAC at same bitrate (and often even lower bitrate). The whole deal with HE is that it can maintain good quality at very low bitrate, by giving up accuracy. So far, Apple's LE-AAC encoder in their Core Audio framework is the best choice for digitally non-lossless compression.

Re:HE-AAC is worse than LE-AAC in terms of quality (1)

Anonymous Coward | about 3 years ago | (#35824098)

...in your opinion.

Re:HE-AAC is worse than LE-AAC in terms of quality (-1)

Anonymous Coward | about 3 years ago | (#35824186)

So long as you are only using it the way Jobs has allowed you to use it.

Re:HE-AAC is worse than LE-AAC in terms of quality (0)

Anonymous Coward | about 3 years ago | (#35824216)

So, TFA brings up some actual tests of Opus and HE-AAC. Do you have anything similar?

Re:HE-AAC is worse than LE-AAC in terms of quality (1)

tuxgeek (872962) | about 3 years ago | (#35824830)

Of course parent doesn't have anything substantial to support his opinion.
This is slashdot, where opinions bear more credibility than point-of-fact

Re:HE-AAC is worse than LE-AAC in terms of quality (0)

Anonymous Coward | about 3 years ago | (#35824362)

At 64kbit/sec there is basically no comparison: HE-AAC is much stronger. If you had LC-AAC in the test it would have had results similar to the vorbis results.

Re:HE-AAC is worse than LE-AAC in terms of quality (5, Insightful)

woolpert (1442969) | about 3 years ago | (#35824424)

HE-AAC uses SBR to reduce its data footprint. This results in worse reproduction of the source audio than LE-AAC at same bitrate (and often even lower bitrate). The whole deal with HE is that it can maintain good quality at very low bitrate, by giving up accuracy. So far, Apple's LE-AAC encoder in their Core Audio framework is the best choice for digitally non-lossless compression.

While your rant appears informative if not insightful on its face, it is completely missing the point.

This is a test of audio codecs at low bitrates.

I don't know what this "LE-AAC" is you speak of (and rather suspect you don't either) but AAC-LC was actually in this test, as the low anchor.

At these bitrates (~64kbps) HE-AAC (despite its "low-accuracy" as you put it) is perceptually better sounding than AAC-LC. Lossy audio codecs (even the LE-AAC [sic] encoder in Apple's Core Audio framework you love) can only be judged by how they sound, not how they look. "Accuracy" is not a metric very worthy of discussion.

Re:HE-AAC is worse than LE-AAC in terms of quality (-1)

Anonymous Coward | about 3 years ago | (#35825020)

While your rant appears informative if not insightful on its face, it is completely missing the point.

False dilemma. [wikipedia.org]

This is a test of audio codecs at low bitrates.

Appeal to hypocrisy. [wikipedia.org]

"Accuracy" is not a metric very worthy of discussion.

Appeal to popularity. [wikipedia.org]

I know that many Slashdot users have trouble with identifying logical fallacies, but that was just ridiculous. Your post was absolutely filled with them. You should be more careful in the future.

Re:HE-AAC is worse than LE-AAC in terms of quality (3, Informative)

Anonymous Coward | about 3 years ago | (#35825322)

He was discussing accuracy as being irrelevant because perception is more important in a medium designed to be perceived by a human. You've now apparently converted it to "because fewer people care about accuracy", which was in no way his point. Or: Straw man. [wikipedia.org]

Re:HE-AAC is worse than LE-AAC in terms of quality (0)

Anonymous Coward | about 3 years ago | (#35825354)

Appeal to incorrectness. Logical fallacies don't make for a very good argument.

Re:HE-AAC is worse than LE-AAC in terms of quality (1)

RightSaidFred99 (874576) | about 3 years ago | (#35825400)

Jesus dude, did you just get done reading a high school debate book or something? None of what you posted makes any sense. I call your method of argument "Non Sequitur Appeal to Misused Logical Fallacies", I just haven't added it to the Wiki yet so dunces like you can misapply and mangle it.

Re:HE-AAC is worse than LE-AAC in terms of quality (-1)

Anonymous Coward | about 3 years ago | (#35825548)

Jesus dude, did you just get done reading a high school debate book or something?

Straw man. [wikipedia.org]

I call your method of argument "Non Sequitur Appeal to Misused Logical Fallacies", I just haven't added it to the Wiki yet so dunces like you can misapply and mangle it.

Appeal to ignorance. [wikipedia.org]

Why do so many people think that logical fallacies make for a good argument? This is getting ridiculous.

Re:HE-AAC is worse than LE-AAC in terms of quality (0)

Anonymous Coward | about 3 years ago | (#35825582)

That is not a straw man. WTF do you think you are doing berating others if you don't know that?

Re:HE-AAC is worse than LE-AAC in terms of quality (0)

Anonymous Coward | about 3 years ago | (#35825624)

That is not a straw man.

Appeal to authority. [wikipedia.org]

You should read up on logical fallacies. Then you'll be able to avoid ones like the one you just used, and, as a consequence, your argument will be more valid.

Re:HE-AAC is worse than LE-AAC in terms of quality (0)

Anonymous Coward | about 3 years ago | (#35826034)

That is not a straw man.

Appeal to authority. [wikipedia.org]

You should read up on logical fallacies. Then you'll be able to avoid ones like the one you just used, and, as a consequence, your argument will be more valid.

troll [urbandictionary.com]

Re:HE-AAC is worse than LE-AAC in terms of quality (3, Interesting)

parlancex (1322105) | about 3 years ago | (#35824542)

That's the whole idea behind any lossy codec. You're trading mathematical accuracy for psycho-acoustical accuracy; personally, I don't care if the root mean square error is higher, I just need it to sound like the original.

Anyway, if this really IS an improvement over HE-AAC, which uses some very sophisticated techniques, I'll be extremely impressed, and quite pleased that it's patent free.

Re:HE-AAC is worse than LE-AAC in terms of quality (3, Interesting)

woolpert (1442969) | about 3 years ago | (#35824630)

The sad thing is it shouldn't be better than HE-AAC. Being low latency does tend to mean one is better at the kind of time-domain issues many find so objectionable, but outside that OPUS is really packing a MUCH smaller toolkit than HE-AAC.

This is really egg on AAC's face, IMHO, and quite the upset. OPUS is so immature the bitstream isn't even stable yet.

Re:HE-AAC is worse than LE-AAC in terms of quality (1)

pseudonomous (1389971) | about 3 years ago | (#35825302)

what kind of latency do you get with AAC? Do you know? (I'm trying to find out now via google)

Re:HE-AAC is worse than LE-AAC in terms of quality (0)

Anonymous Coward | about 3 years ago | (#35825614)

The absolute minimum HE-AAC can do is around 94 ms, but most encoders will add more delay for analysis, and you'll have a delay of about 250ms or so.

Re:HE-AAC is worse than LE-AAC in terms of quality (-1)

Anonymous Coward | about 3 years ago | (#35824648)

waaaaankerrrrrrrrr. he's a fucking waaaaannnnkerrrrrr!

Re:HE-AAC is worse than LE-AAC in terms of quality (1)

plonk420 (750939) | about 3 years ago | (#35824670)

i rather am ok with the sound of AAC-HE opposed to MP3 (and MP3Pro), WMA9Prowhatever, and older OGG. haven't heard OGG in a while, tho.

however, my first test with Opus/CELT blew me away...

http://www.multiupload.com/HTIGW82UD0 [multiupload.com]


i suppose in hindsight i could provide the original lossless clip (cut after the fact) to play with... http://www.multiupload.com/XVRXX256WO [multiupload.com]

Re:HE-AAC is worse than LE-AAC in terms of quality (0)

Anonymous Coward | about 3 years ago | (#35824856)

Er, this post appears to be linking to some site that tries to install malware.

Re:HE-AAC is worse than LE-AAC in terms of quality (1)

Aldenissin (976329) | about 3 years ago | (#35824936)

Not sure what AC is talking about, seems to be what the site says it does. I downloaded a sound file. This is the first time I have heard the codec, and it does sound extremely good. I don't know much about these matters, but I liked that it was "open", and could be relevant to my interests somehow.

Re:HE-AAC is worse than LE-AAC in terms of quality (4, Interesting)

jmv (93421) | about 3 years ago | (#35824702)

If we were talking about a 96 kb/s test, I'd agree with you. But at 64 kb/s, HE-AAC sounds much better than AAC-LC. The guys who organized this test picked the best AAC implementation they could find at the rate the test was run at.

Re:HE-AAC is worse than LE-AAC in terms of quality (2)

Wannabe Code Monkey (638617) | about 3 years ago | (#35824724)

Apple's LE-AAC encoder in their Core Audio framework is the best choice for digitally non-lossless compression

Yes, but for digitally re-un-non-illossless compression I would go with the Foobar Audio Framework.

Re:HE-AAC is worse than LE-AAC in terms of quality (0)

Anonymous Coward | about 3 years ago | (#35824914)

non-lossless == lossy

Re:HE-AAC is worse than LE-AAC in terms of quality (1)

Skuto (171945) | about 3 years ago | (#35825606)

Parent post is complete bullshit. HE-AAC greatly outperforms LC-AAC at 64kbps. This can be seen in several previous listening tests, including the ITU ones that standardized the format itself.

And this 'SILK' codec? (1)

countertrolling (1585477) | about 3 years ago | (#35824058)

Patent free? Or royalty free?

Re:And this 'SILK' codec? (4, Informative)

jmv (93421) | about 3 years ago | (#35824264)

To be exact, there *are* patents, but they will be available without fee in a way that is compatible with FOSS licences such as the GPL. The main idea behind these patents is that your license terminates if you sue someone by claiming Opus infringes your patents. Almost like a copyleft, but for patents (of course the details are different because copyright != patent).

Re:And this 'SILK' codec? (1)

countertrolling (1585477) | about 3 years ago | (#35824510)

Just a lot of mumbo-jumbo [skype.com] to me.. I'd stay away. Not safe..

Re:And this 'SILK' codec? (3, Informative)

jmv (93421) | about 3 years ago | (#35824548)

This is the license for the "old" SILK codec. The patent licenses for Opus have nothing to do with that. Please read them:

Xiph.Org IPR statement: https://datatracker.ietf.org/ipr/1524/ [ietf.org]
Broadcom IPR statement: https://datatracker.ietf.org/ipr/1526/ [ietf.org]
Skype IPR statement: https://datatracker.ietf.org/ipr/1525/ [ietf.org]

Re:And this 'SILK' codec? (0)

countertrolling (1585477) | about 3 years ago | (#35824622)

It still appears that Skype can call a halt to further development of Opus at any future date. I wouldn't want to take that chance.

Re:And this 'SILK' codec? (4, Informative)

jmv (93421) | about 3 years ago | (#35824672)

What makes you say that? If you find a real issue, please raise it -- either on the mailing list: codec@ietf.org, or to me privately (jmvalin@jmvalin.ca). Skype is on the good side on this one. The technology they have contributed is very useful and they're open about resolving any licensing issue.

Re:And this 'SILK' codec? (1)

countertrolling (1585477) | about 3 years ago | (#35824744)

Maybe my main issue is that it validates software patents. Regardless of how generous the license might be, I would prefer that not happen.

Re:And this 'SILK' codec? (0)

Anonymous Coward | about 3 years ago | (#35824980)

They wouldn't be software patents. Codec patents seldom are, which is why there are so many codec patents issued in Europe, for example.

remote jamming? (4, Informative)

mirix (1649853) | about 3 years ago | (#35824084)

and remote jamming

Took me a while to figure out they meant in a band. I was wondering how they were going to jam some sort of signal with this codec.

Re:remote jamming? (1)

jd (1658) | about 3 years ago | (#35824956)

I thought "remote jamming" meant jamming of remotes. Which would be awesome to do, next time the neighbors channel-surf at high volume.

HE-AAC is worse than LE-AAC in terms of quality (-1, Redundant)

carlhaagen (1021273) | about 3 years ago | (#35824088)

HE-AAC uses SBR to reduce its data footprint. This results in worse reproduction of the source audio than LE-AAC at the same (and often even lower) bitrate. The whole deal with HE is that it can maintain good quality at very low bitrate, by giving up accuracy. The article is misleading in the sense that it leads people to believe that HE-AAC is the best choice for audio regardless of its kind and purpose, including what most people think of when hearing the term audio (music), and so far no digitally lossy encoder can rival Apple's Core Audio LE-AAC codec when it comes to that case.

sorry for being dense, but... (0)

Anonymous Coward | about 3 years ago | (#35824090)

Who really would need such low latency? Even back when I used to play games online with voice chat, I could call out "incoming enemy!" and the other players on my team would have plenty of time to react.

Re:sorry for being dense, but... (2)

tepples (727027) | about 3 years ago | (#35824166)

Even back when I used to play games online with voice chat

Imagine Rock Band with voice chat. Or imagine actually making real music with voice chat.

Re:sorry for being dense, but... (4, Insightful)

Anaerin (905998) | about 3 years ago | (#35824182)

As mentioned, it's needed for VoIP systems. With a full-duplex system, more than 150ms of lag is audible and noticeably uncomfortable, breaking the flow of conversation (As the apparent lag is doubled in a "conversation", with the delay at each end adding cumulatively). For simple half-duplex systems like gaming, more lag is not really noticeable.
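The cumulative-delay point above can be sketched with some illustrative numbers. Only the 150 ms comfort threshold and the 22.5 ms Opus figure come from the thread; the network and jitter-buffer values below are assumptions for illustration:

```python
# Sketch: in a full-duplex call, delay accumulates at each end, so the
# conversational gap you perceive is roughly twice the one-way latency.

def one_way_ms(codec_ms, network_ms, jitter_buffer_ms):
    """One-way latency budget: codec delay + network + jitter buffer."""
    return codec_ms + network_ms + jitter_buffer_ms

def perceived_gap_ms(one_way):
    """Apparent pause in a conversation is about double the one-way delay."""
    return 2 * one_way

# Assumed network (40 ms) and jitter buffer (20 ms) figures, for illustration:
low = one_way_ms(codec_ms=22.5, network_ms=40, jitter_buffer_ms=20)
high = one_way_ms(codec_ms=100, network_ms=40, jitter_buffer_ms=20)

print(perceived_gap_ms(low))    # 165.0 -- near the 150 ms comfort limit
print(perceived_gap_ms(high))   # 320 -- clearly uncomfortable
```

With a 100 ms codec the budget is blown before the packets even leave the building, which is the commenter's point about why codec latency matters at all.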

Re:sorry for being dense, but... (1)

sahonen (680948) | about 3 years ago | (#35825406)

For simple half-duplex systems like gaming, more lag is not really noticeable.

The only practical difference between gaming VOIP and Skype is having to hit a push-to-talk button. Latency issues like people stepping on each other crop up in gaming VOIP in much the same way that they pop up in high-latency cell phone or Skype conversations.

Re:sorry for being dense, but... (1)

Anaerin (905998) | about 3 years ago | (#35825492)

For simple half-duplex systems like gaming, more lag is not really noticeable.

The only practical difference between gaming VOIP and Skype is having to hit a push-to-talk button. Latency issues like people stepping on each other crop up in gaming VOIP in much the same way that they pop up in high-latency cell phone or Skype conversations.

Not really. You're not (typically) having a back-and-forth conversation while gaming, just announcing your information and clearing the channel. So there is little difference, conversationally speaking, if your burst is delayed by half a second or so. It's not a conversation, it's a series of announcements. With noticeable lag in a phone call, however, you'll find yourself (and the caller/callee) tripping over each other's sentence beginnings as you both play the "no, after you" game, as the lag causes you (and your train of thought) to be interrupted. Add to that the highly distracting nature of hearing your own words back after a half-second delay (try it, it's very confusing and distracting) when echo cancellation is lacking, and you have a recipe for conversational disaster.

Re:sorry for being dense, but... (1)

sahonen (680948) | about 3 years ago | (#35825542)

The problem arises when two people have an announcement to make at the same time, usually when they're both waiting for another person to finish making their own announcement. Also don't forget that gaming VOIP software is quite often used for social purposes (VOIP use in public server TF2 is very very rarely related to the game at hand), and occasionally used by casters for commentating as well. It absolutely needs to live up to the same demands that "conversational" VOIP software needs to live up to.

Re:sorry for being dense, but... (2)

parlancex (1322105) | about 3 years ago | (#35824566)

In something like an actual telephone conversation, it creates awkward pauses of a length equal to twice the latency whenever each person finishes speaking. High latency codecs also greatly encumber echo cancellation algorithms and hardware, which is extremely important in VoIP, as anyone who has had to deal with it would know.

Total Latency (1)

TubeSteak (669689) | about 3 years ago | (#35824114)

Is that 5~22.5ms of latency on top of network latency?

Re:Total Latency (5, Informative)

jmv (93421) | about 3 years ago | (#35824198)

Yes, 5 to 22.5 ms is the algorithmic delay of the codec. By comparison, codecs like AAC/MP3/Vorbis have more than 100 ms algorithmic delay (you need to give the encoder side more than 100 ms of audio before the decoder side gives you any audio back).
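The comment above can be made concrete with a little bookkeeping. Only the 22.5 ms and 5 ms totals come from the summary; the 20 ms frame / 2.5 ms lookahead split is an assumption consistent with those figures:

```python
# Sketch: algorithmic delay is the audio a codec must buffer before the
# decoder can emit anything -- roughly one frame plus any encoder lookahead.

def algorithmic_delay_ms(frame_ms, lookahead_ms):
    """Total one-way codec delay, independent of network latency."""
    return frame_ms + lookahead_ms

# Assumed breakdown: 20 ms frames + 2.5 ms lookahead = the 22.5 ms default
print(algorithmic_delay_ms(20.0, 2.5))   # 22.5
# Assumed breakdown: 2.5 ms frames + 2.5 ms lookahead = the 5 ms minimum
print(algorithmic_delay_ms(2.5, 2.5))    # 5.0
```

By the same accounting, a codec needing 100 ms of audio up front (as with AAC/MP3/Vorbis per the comment) can never get its one-way delay below 100 ms, no matter how fast the network is.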

Re:Total Latency (0)

Anonymous Coward | about 3 years ago | (#35824570)

I demand 4ms.

Re:Total Latency (2)

jmv (93421) | about 3 years ago | (#35824596)

There are "custom modes" that can do that. With those you can go as low as 2.5 ms. The only down side is that you can't switch frame size dynamically when you use these custom modes.

Re:Total Latency (1)

Malc (1751) | about 3 years ago | (#35825746)

I wonder if this is the same concept as the decoding delay in video, where there's a difference between decode time and presentation time, perhaps due to re-ordering of frames? Sometimes you can get higher quality doing this, but latency is obviously undesirable in an interactive application like telephony.

Re:Total Latency (0)

Anonymous Coward | about 3 years ago | (#35824652)

In custom modes it can get down to 2.6ms, but the required bitrates for good quality with that size are stupid.

Re:Total Latency (0)

Anonymous Coward | about 3 years ago | (#35824754)

5ms should be enough for anyone

So just to be clear... (1)

TangoMargarine (1617195) | about 3 years ago | (#35825056)

The end user has no reason to care about this, right? It's just an implementation thing?

Re:So just to be clear... (1)

thegarbz (1787294) | about 3 years ago | (#35825142)

If someone migrates from an analogue source the lag starts becoming apparent. Between the codec, the transmission system, and the decoder on the other side it can create a noticeable delay in a conversation. People not used to this will find themselves interrupting each other.

It's much the same as a digital radio dilemma. Analogue 2-way conversations sounded quite reasonable if the transmitter and one of the receivers were in the same room. Worst case, if the volume was cranked to 11 you got feedback, but mostly it just sounded like the transmitter was talking through a microphone. However in a digital 2-way system, such as the one we use at work with 200ms lag, having a transmitter talk with a receiver standing next to them sounds like mad incomprehensible garbage and has in many instances caused a pause in conversation. Imagine having someone say everything back to you with a slight delay while you're trying to talk to them.

Why users care... (1)

xiphmont (80732) | about 3 years ago | (#35825178)

...is at the top of the first Opus/CELT demo page:

http://people.xiph.org/~xiphmont/demo/celt/demo.html [xiph.org]

The low latency makes more interactive applications possible. By way of illustration, the total algorithmic delay of an Opus or CELT stream is approximately equivalent to the time it takes sound to travel from you to someone standing five feet away.
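The distance analogy can be checked arithmetically. The "five feet" claim is from the comment; the ~343 m/s speed of sound is a standard figure, and `delay_to_feet` is a hypothetical helper for illustration:

```python
# Sketch: how far does sound travel in air during one codec delay?

SPEED_OF_SOUND_M_S = 343.0   # speed of sound in air at ~20 C
M_TO_FT = 3.28084

def delay_to_feet(delay_ms):
    """Equivalent acoustic distance for a given delay in milliseconds."""
    return delay_ms / 1000.0 * SPEED_OF_SOUND_M_S * M_TO_FT

print(round(delay_to_feet(5.0), 1))    # 5.6 -- matches the "five feet" figure
print(round(delay_to_feet(22.5), 1))   # 25.3 -- the default 22.5 ms mode
```

So the five-feet comparison fits the 5 ms low-latency mode; the default 22.5 ms mode is more like standing across a room.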

And that isn't just important online (2)

Sycraft-fu (314770) | about 3 years ago | (#35825060)

When you are dealing with audio signals in the home, low latency can be needed too. If you are doing something like playing prerecorded video then no, the system can find out the delays of the screen, audio, codecs, etc and insert delays as needed to sync it all up. However not if you are doing something live, like games. That's the reason for stuff like Dolby Digital Live and DTS Interactive. They are made so that you can get low latency encoding so the sound from a game console syncs up with the video.

It is also important for mobile phones. There's only so much latency you can tolerate in a conversation before things start to sound strange to the people using it. Of course there's already latency from the phone network, so codec latency matters. That is part of the reason why new phone standards aren't using something like AAC to get better sounding audio out of the bandwidth available.

As such this project has a lot of really cool potential. If it not only offers better per-bit perceptual sound but is also extremely low latency, it can be used in situations the others can't.

That's all fine and dandy, but.... (2, Insightful)

adolf (21054) | about 3 years ago | (#35824192)

Who cares what codec is being used for my VoIP phone at home or on my desk, when anyone I call is still most likely to be connected over the PSTN with g.711 or g.723, or (far worse) a cell phone?

And don't get me wrong: I want to care; I really do. And maybe I did care, at one point. I was going to build an Asterisk system for home -- I even collected some of the hardware to make it work.

But I stopped caring when the boy got old enough to properly want a cell phone, the wife got a cell phone, and I had a cell phone. After that, I dropped the home phone line altogether, since it was just a waste of money.

I have no interest, at this moment, in having any sort of telephony tied to my premises.

And while I could, I suppose, run some manner of VoIP client on my Droid over cellular, I think that's a complete non-starter at the moment: I had trouble earlier today getting a 64kbps MP3 to stream correctly over 3G Verizon (even though I controlled both ends of the stream), but that was just an inconvenience.

It'd be a lot more than simply inconvenient if my phone calls were that spotty. I don't care how good it sounds if it doesn't work.

Is there any good and practical use for this new codec?

Re:That's all fine and dandy, but.... (4, Insightful)

nog_lorp (896553) | about 3 years ago | (#35824254)

Lol what? You're crazy. I suppose it is never worth inventing a new codec ever, since everyone uses old codecs! /fail argument

Re:That's all fine and dandy, but.... (1)

adolf (21054) | about 3 years ago | (#35825576)

I think my point was more that it is currently seldom worth using a new codec, since the folks in the middle are using old codecs. And when I say old, I mean it: Many decades old, in some cases.

I can feed pristine 96KHz 24-bit audio into the PSTN, and still will never get anything better than g.711 out of the other end, because it gets ruined in the middle.

Re:That's all fine and dandy, but.... (0)

Anonymous Coward | about 3 years ago | (#35824636)

Is there any good and practical use for this new codec?

You mean like a ground breaking low latency interactive video game from the cloud kind of thing? No, it never could happen, impossible the "experts" say....

Re:That's all fine and dandy, but.... (1)

parlancex (1322105) | about 3 years ago | (#35824690)

Disregarding the bandwidth your service provider may or may not provide you, VoIP clients on mobile devices are difficult or impossible to use due to the reliance of even modern VoIP protocols such as SIP on RTP which uses UDP for media transport, and every 3G provider I've ever seen deploys wide scale NATing to all their connected devices. They could make a legitimate argument for a lack of addresses, but there's kind of a conflict of interest there too.

Re:That's all fine and dandy, but.... (1)

pseudonomous (1389971) | about 3 years ago | (#35825274)

But, to reply more to the parent than to you: it's not like you're only going to be using this over wireless internet; some people actually have DSL or better connections with less than 40ms of latency. At rates like that, a codec latency of 4ms is still 20% of the total latency. At that kind of total latency (45ms) you could play music with someone without driving yourself crazy because you both sound like you're lagging behind the beat (you WILL both appear to be lagging to each other, but 45ms is a small enough amount of latency that it won't completely destroy the performance). In fact, even 100ms of total latency is probably survivable in a mid-tempo song (but will be very noticeable); anything you can do at the codec level to chop that down improves the experience.

Re:That's all fine and dandy, but.... (2)

adolf (21054) | about 3 years ago | (#35825530)

So, it's something that might be useful for musicians. Maybe.

100ms of total, round-trip, end-to-end-to-end latency (remember to count both hypothetical DSL connections) is the same as two musicians trying to play together when they are about 56 feet apart. It might be practical, but it doesn't sound very fun for many types of informal "jam"-oriented music: There's a reason the bass player often stands next to the drummer, and it's usually not because he wants more hearing damage.

I just listened to some Beatles (just because of their typical hard-panned stereo separation) with the left channel delayed by 100ms, and found it to be fairly bothersome.

If I were playing bass with that sort of delay, I'd expect either myself or the drummer to become very annoyed very quickly.

But at least you answered my question. :)

Thanks.

Re:That's all fine and dandy, but.... (1)

EdIII (1114411) | about 3 years ago | (#35825314)

This is absolutely important.

There are a lot of PSTN providers using g711 or g723. Quite a number of them offer g729.

The big difference here is:

1) Not being limited to 8khz sampling rate which sucks
2) Not being limited by the phone manufacturers to g729 for the better bandwidth usage
3) It HANDLES MUSIC.

If you are using g729 in a system, hold music is horrible, especially when you are trying to use your own or a radio. This is because g729 works best on speech, not music. Music sounds messed up when transcoded into g729. They are working on a variant which treats music better, but nobody has it or supports it yet.

Let's not forget that they also said it handles chat and video. That means the codec itself is capable of transmitting more types of data.

If the sound quality and abilities are that much better, you will see some phone manufacturers incorporate it into their supported codecs. And why not? It does not have patent fees associated with it and delivers better features and better quality.

Asterisk will catch up and start offering the codec. The PSTNs will only be a matter of time.

Don't forget that we are moving to an LTE network in the US with better bandwidth available. Supposedly voice and data will be the same. That does not rule out the carriers using a different codec as well. So cell phones might get better sound quality simply because of the arrival of LTE.

There is a lot to consider here and this is progress in the right direction. I can't wait till the day a phone call is end to end 44khz sampling.

Re:That's all fine and dandy, but.... (1)

afidel (530433) | about 3 years ago | (#35825494)

UMTS uses a max 12.2Kbps for voice channels and since LTE allows seamless fallback to UMTS towers I can't see how that changes until providers go pure LTE and remove the voice terminal class from phones (a decade from now, maybe?). Btw the difference in bandwidth between a UMTS voice channel and the bandwidth this codec was tested at is the same as the difference between this codec and 320Kbps CBR just to put into perspective how much bandwidth this thing is using compared to conventional cellphone calls, and THAT is why our phones can't sound like our stereo =)

Re:That's all fine and dandy, but.... (1)

Air-conditioned cowh (552882) | about 3 years ago | (#35825446)

Is there any good and practical use for this new codec?

Yes. Live audio applications such as digital radio mics. Before, the only viable option was ADPCM, which slew-rate limits horribly and sounds awful. Either that or find enough RF bandwidth to send uncompressed PCM. For live applications a 3ms delay is needed or drummers start playing out of time, etc. If it can be tweaked to less than 5ms then it's got a future in this application.

Re:That's all fine and dandy, but.... (1)

adolf (21054) | about 3 years ago | (#35825648)

Good example.

And to think that for all this time, I've been giving guitarists two choices: Either plug in with a real wire, or prepare to be strangled with that cheap-shit wireless kit.

Sometimes they plug in, and other times the show gets delayed while we hunt around looking for enough air to blow up a backup guitar player. (It's their head that's the problem -- it takes forever to inflate it to the correct size.)

Perhaps this new codec will help save a guitarist.

Re:That's all fine and dandy, but.... (0)

Anonymous Coward | about 3 years ago | (#35826006)

Well, yes, there are lots of uses for this CODEC:

Massive online game chat: WoW, Mumble/Murmur...
Free WiFi audio/video telephony: Ekiga...
Radio Amateur digital voice over satellite.
Digital voice over HF radio.
Digital voice over any existing data channel that is already 'full'.
Digital voice chat and telephony over a LAN, without clogging up the network.

ok but how is dtmf detection? (1)

NynexNinja (379583) | about 3 years ago | (#35824268)

if the codec can't reliably do DTMF detection, then it's no good -- I'll stick with ulaw: disallow=all allow=ulaw

Re:ok but how is dtmf detection? (3, Interesting)

parlancex (1322105) | about 3 years ago | (#35824712)

You do realize that most modern VoIP hardware and software supports out-of-band DTMF? In fact, most modern software demands it.
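For context: out-of-band DTMF in RTP is typically carried as RFC 4733 (formerly RFC 2833) "telephone-event" payloads rather than as audio, so the codec never touches the tones. A minimal sketch of packing one such 4-byte payload (field layout per the RFC; the helper name is made up):

```python
import struct

# RFC 4733 event codes: digits 0-9 map to 0-9, '*' to 10, '#' to 11
EVENTS = {**{str(d): d for d in range(10)}, '*': 10, '#': 11}

def telephone_event(digit, end=False, volume=10, duration=800):
    # Payload layout: event (8 bits) | E bit + reserved bit + volume (6 bits) | duration (16 bits)
    second = (0x80 if end else 0x00) | (volume & 0x3F)
    return struct.pack('!BBH', EVENTS[digit], second, duration)

payload = telephone_event('5', end=True)
print(payload.hex())  # 058a0320
```

The receiver regenerates (or just signals) the digit from the event code, so lossy compression can't smear the tones.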

Re:ok but how is dtmf detection? (1)

afidel (530433) | about 3 years ago | (#35825504)

I was about to say the same thing, out of band is the only way to make DTMF work reliably with VoIP IME.

Re:ok but how is dtmf detection? (0)

Anonymous Coward | about 3 years ago | (#35825588)

WTF is DTFM?

We need a new Wikipedia article, STAT!

Re:ok but how is dtmf detection? (1)

adolf (21054) | about 3 years ago | (#35825722)

WTF is DTFM?

Dual-Tongued Female Mutants

We need a new Wikipedia article, STAT!

Forget Wikipedia. I'm registering dtfm.xxx.

Re:ok but how is dtmf detection? (1)

Sique (173459) | about 3 years ago | (#35825792)

It's DTMF, dual-tone multi-frequency signaling. It's the pair of tones you hear when you press the dial keys on your phone.
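Each key is the sum of two sine tones, one from a row of low frequencies and one from a column of high ones -- that's the "dual-tone" part. A quick illustrative sketch using the standard DTMF frequency grid (the function name is made up):

```python
import math

ROWS = [697, 770, 852, 941]      # Hz, one per keypad row
COLS = [1209, 1336, 1477, 1633]  # Hz, one per keypad column
KEYS = ["123A", "456B", "789C", "*0#D"]

def dtmf_samples(key, rate=8000, ms=100):
    # Find the key's row/column tone pair, then mix the two sines
    for r, row in enumerate(KEYS):
        c = row.find(key)
        if c != -1:
            f1, f2 = ROWS[r], COLS[c]
            break
    else:
        raise ValueError("not a DTMF key")
    n = rate * ms // 1000
    return [0.5 * (math.sin(2 * math.pi * f1 * t / rate)
                   + math.sin(2 * math.pi * f2 * t / rate))
            for t in range(n)]

print(len(dtmf_samples('1')))  # 800 samples = 100 ms at 8 kHz
```

Pressing '1', for instance, mixes 697 Hz and 1209 Hz; the detector at the far end looks for exactly that pair.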

Curious (1)

gaelfx (1111115) | about 3 years ago | (#35824320)

To be honest, I didn't click most of those links in the summary, but I did check out the codec's website, and it made me wonder where I can find an app that actually uses this codec. I would be really interested in trying this out or participating in any kind of testing they might be doing since I live in China, Skype is uber-slow here and I do enjoy jamming from time to time. Anyone know how to put this codec to use yet?

Re:Curious (1)

True Vox (841523) | about 3 years ago | (#35824784)

Well, I have no idea about the newest implementations, but I know Mumble uses CELT (0.11.0 as of the current release). I would suspect that Opus is on the roadmap.

Re:Curious (1)

Aldenissin (976329) | about 3 years ago | (#35824952)

Good to know. I use Mumble and was curious whether it might take advantage of this. My understanding is that Mumble is encrypted, but that's all the more reason for lower latency, right?

Technically... (2)

jd (1658) | about 3 years ago | (#35824468)

...it can't have been "then known as CELT", since it is a merge of two codecs, of which CELT is one and SILK the other. It's good that it's an IETF standard, as that will help some with adoption. It will also help with getting other implementations. (Hell, Dirac is a great codec for video, but because it's not a recognized standard for anything it's not getting used.)

Re:Technically... (1)

countertrolling (1585477) | about 3 years ago | (#35824538)

*danger Will Robinson* SILK is patented, so the summary is somewhat misleading.

Not so— (3, Informative)

Anonymous Coward | about 3 years ago | (#35824562)

Skype will release their patents under a free software compatible license if the codec is standardized by the IETF: https://datatracker.ietf.org/ipr/1525/

Re:Not so— (0)

Anonymous Coward | about 3 years ago | (#35825544)

Skype is partially owned by ebay. Look at how they manage themselves and paypal.

latent neogod inbred abuse victims in charge of us (-1)

Anonymous Coward | about 3 years ago | (#35824922)

so, the requests to tone fatal friday down a bit make sense. how about fateful friday? fearful frightful friday? fulfilling friday? finally leaving friday? fed up friday? & as evile never sleeps, we get two more days per week to bid it out of existence. even more manure saturday..... satan's sunday best....

it's all in the genuine native elders teepeeleaks etchings.

Another headline you could write from that: (0)

Anonymous Coward | about 3 years ago | (#35826044)

"Open-Source Vorbis codec uses higher bitrates than a proprietary HE-AAC to provide worse quality"

What's more, that headline is actually backed up by statistically significant results. Those bars are 95% confidence -- 2 sigma. So we're saying that Opus is near as damn it 2 sigma better than Apple's HE-AAC (and at lower bitrates, which I'd have expected the summary to comment on if, well, this wasn't Slashdot). That's good and they should certainly be pleased, but it's not really *that* statistically significant: 2 sigma results crop up all the time in science and are typically rejected as fluctuations. The thing that surprised me, since I've always liked Vorbis, is how it's about four sigma worse than Opus, and perhaps two and a half to three sigma worse than HE-AAC. At a higher bitrate. That's a fair old fail for Vorbis there, to go along with what is at least a nice result for Opus. I don't think they'll take much joy from the fact that they've been demonstrated to be as shit as Nero's HE-AAC implementation. While using a higher bitrate.
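For what it's worth, the tail probabilities behind those sigma figures can be checked against the standard normal distribution (a sketch; the function name is made up):

```python
import math

def one_sided_p(z):
    # One-sided tail probability of a standard normal beyond z sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

print(round(one_sided_p(2), 5))  # 0.02275: 2-sigma "fluctuation" territory
print(one_sided_p(4))            # ~3e-5: much harder to dismiss
```

That's why a 2-sigma gap reads as "probably real but not conclusive", while the 4-sigma Vorbis gap is hard to write off as chance.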
