
Comments


Canonical Explains Decision to License H.264 For Ubuntu

kidjan Re:H.264 IS OPEN SOURCE!!!! (372 comments)

I don't agree. If I tell you that you can use my software, but only under the condition that you distribute the source code, or make code that utilizes my code open source as well, then that is obviously stipulating "what terms they can give out the software or derived works to other people."

more than 4 years ago

Canonical Explains Decision to License H.264 For Ubuntu

kidjan Re:H.264 IS OPEN SOURCE!!!! (372 comments)

We can argue semantics till the end of time, but isn't a patented, open-source piece of software an oxymoron? I mean, I am not exactly jumping for joy and screaming yay that I can use it, because I might have the patent trolls jump all over me.

No, it's not even remotely an "oxymoron"; open source isn't about giving up your property rights. It's about _respecting_ property rights. This is why open source projects _include a license_, and that license stipulates in detail how people may use the project. How is requiring that people open-source projects which use my property any different from requesting that they pay me to use my property? In either scenario, I am putting forth the stipulations for use. If you're against paying to use property, so be it, but don't make the mistake of thinking open source code is devoid of property rights.

more than 4 years ago

Canonical Explains Decision to License H.264 For Ubuntu

kidjan That article is wrong. (372 comments)

First of all, H.264 is not a "closed-source...codec"--this is complete nonsense. The standard itself is completely published and documented, and there is nothing stopping open source projects from creating H.264 encoders and decoders. And they have: hands down, the best H.264 encoder implementation today is x264, which is licensed under the GPL. The patent issue is totally separate, but let's not conflate the patent question with the open-source question.

The real issue with H.264 is who will pay royalties for the patents. For Windows 7 and OS X, Microsoft and Apple pay those royalties. In the case of Ubuntu, it is easier for commercial entities to distribute Ubuntu if they know the royalties and licensing fees are already being handled. So, to be honest, this just makes Ubuntu an easier sell to PC manufacturers, because they aren't liable for royalty costs or hidden "gotchas."

more than 4 years ago

Can Faraday Cages Tame Wi-Fi?

kidjan Ummm..... (145 comments)

....wouldn't it just be easier to use a wire rather than construct a building in such a manner? Or use a powerline network instead? Nobody worth their tin-foil hat would ever think such a drastic measure was worthwhile.

more than 8 years ago

Submissions


Should there be standard for HD video?

kidjan kidjan writes  |  more than 4 years ago

kidjan (844535) writes "A posting about VP8 quality and HD video:

"what really irks me about this WebM debate isn't that there's a legitimate competitor to H264--competition is always a good thing, and I welcome another codec to the fold. What bothers me is that we would let bogus metrics and faulty data presentation dominate the debate. At this point, I see no indication that VP8 is even remotely on par with a good implementation of H264--perhaps it can be sold on other merits, but quality simply isn't one of them. This could change as the VP8 implementation improves, but even the standard itself is roughly equivalent to baseline profile of H264.

Putting that whole debate aside and returning to the notion of a high definition video standard, using these methods--and in particular, a box plot--one can establish that a video at least meets some baseline requirement with regard to encoder quality. The Blu-ray shots above are a pretty clear indication that this industry needs such a standard. More importantly, consumers deserve it--it's not right for people to shell out cash for some poorly-encoded, overly-quantized, PSNR-optimized mess.""
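For reference, the PSNR metric mentioned above is just a log-scaled mean squared error between the source frame and the encoded frame, which is why optimizing an encoder for it alone tends to produce the over-smoothed output being complained about. A minimal sketch of the computation (assuming 8-bit samples and two frames of equal size):

#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Peak signal-to-noise ratio for 8-bit samples: 10 * log10(255^2 / MSE).
// Assumes both frames contain the same number of samples.
double psnr(const std::vector<std::uint8_t>& reference,
            const std::vector<std::uint8_t>& encoded)
{
    double mse = 0.0;
    for (std::size_t i = 0; i < reference.size(); ++i) {
        const double diff = static_cast<double>(reference[i]) - encoded[i];
        mse += diff * diff;
    }
    mse /= static_cast<double>(reference.size());
    if (mse == 0.0) return INFINITY;  // identical frames
    return 10.0 * std::log10((255.0 * 255.0) / mse);
}

Higher numbers are "better," but the formula says nothing about how errors are distributed, which is exactly the gap a box plot over many frames is meant to expose.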

Link to Original Source

Journals


Why dual cores are going to rock domes.

kidjan kidjan writes  |  more than 9 years ago Although there is plenty of enthusiasm over dual cores, I also get the sense that many people don't see them as being very useful.

While I can certainly see where they're coming from, I think it's much more productive to see examples of software that can make use of dual cores this very instant. A few examples perhaps beyond what people may be envisioning:

1. MP3/WMV/AAC/whatever encoding rates can be effectively doubled, whether working on a single track or multiple tracks. The multiple-track case is easy, because you just have two encoding threads leapfrogging one another (see the sketch after this list). A single track would be a bit more involved, but basically a file could be broken into two prior to encoding. Either that, or both threads could pull from the same buffer and leapfrog audio samples (or chunks).

2. Video re-encoding. Imagine being able to both VIEW and ENCODE a DVD all at once. One CPU is devoted to playback of the audio and video, and the other is devoted to ripping and re-encoding.

3. The application I'm currently working on is video surveillance (more on this in a future article when I'm not under NDA). Not only can I easily eat all the CPU Intel and AMD are currently throwing at me, but I can potentially eat it for years to come when re-encoding content for the Internet, local area devices, and so forth. Audio and video applications are an almost endless source of dual core perks, in my opinion.

4. Once games have been properly threaded, I think increases of anywhere from 30-45% can be expected. Additionally, one could do other things in the background while playing a game.
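To make #1 concrete, here is a rough sketch of the two-track case in modern C++ (std::thread for brevity; encode_track() is a hypothetical stand-in for whatever MP3/WMA/AAC library would actually be doing the work):

#include <chrono>
#include <string>
#include <thread>

// Hypothetical encoder entry point. The stub just sleeps to simulate work;
// a real version would read the WAV, compress it, and write the output file.
void encode_track(const std::string& input_wav, const std::string& output_file)
{
    (void)input_wav;
    (void)output_file;
    std::this_thread::sleep_for(std::chrono::seconds(2));  // pretend to encode
}

int main()
{
    // Two independent tracks, one encoding thread each. On a dual core the OS
    // can schedule one thread per core, so the batch finishes in roughly half
    // the time of encoding the tracks back to back.
    std::thread first(encode_track, "track01.wav", "track01.mp3");
    std::thread second(encode_track, "track02.wav", "track02.mp3");

    first.join();
    second.join();
    return 0;
}

The single-track case would just replace the two input files with two halves of the same buffer, with a little care taken at the seam.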

From a developer standpoint, there are some HUGE advantages to having extra parallel cycles on an end user's computer. I think it's going to usher in a new era of audio/video capabilities, and also enable users' computers to be more like "servers" and offer them things over the Internet that they never would have dreamed possible before.


Dual Core Mumbo Jumbo

kidjan kidjan writes  |  more than 9 years ago The other day, I came across a very nasty article which claimed (among other things) that many hardware review sites were biased and bought out by industry.

This came as little surprise to me; however, the reasoning behind all of this had to do with the recent reviews of Intel dual core chips.

The article in question:

http://www.theinquirer.net/?article=22332

Charlie Demerjian states: "To step sideways a bit, the current dual core chips are all going to suck on games regardless of whether they come from Intel or AMD. Both are heat limited and will debut several clock bins below their single core counterparts. The Intel side also takes a step backward in bus speed because of the added loads on the bus. All these are engineering realities, and in no way diminish the really great jobs both companies are doing to bring dual cores to the masses.
It does mean however that until software catches up, most likely not this year, that gaming is going to suck on them. They will cost more, take more power, and be a status symbol for the rich and stupid, but their frame rates will blow dead goats. On multitasking and multithreaded apps, they will shine like the sun, but how many of these are there? How many times do you encode a movie while typing a document, zipping your C drive, doing some heavy CFD work all while listening to a few MP3s? Yeah, me neither, but at least 3DSMax and photoshop will rock on the new chips.
Getting back to reality, imagine my surprise when I saw that this new preview studiously avoided games. They are testing two of the most popular gaming chips out there, and the heir to the throne, and they did not put in one single game benchmark. Not one, think about that.
In the rebuttal to this, there will be the usual cries of 'we were not testing gaming performance' or some such bullsh*t ass covering, but here is the truth, if you are going to multitask and do anything that tasks both of the CPUs, one of those is going to be a game.
If you read up on the benchmarks posited by the current crop of reviews, how many are things you do regularly? How many fit a scenario that you have ever found yourself in? How many of you would do seven things concurrently if you had seven things to do rather than do one or two at a time, and probably end up at the finish line first? The human mind does not multitask well, so 19 active windows is 17 or 18 more than you really can use at once."

The only real-world evidence provided to support the claim that reviewers are being bought out is regarding dual core processors, so that's what I'll address.

Charlie claims that because benchmarks for dual cores tested things no typical user would ever do regularly, and reviewers avoided benchmarks where dual cores performed poorly, the reviewers must have been bought out.

Regarding the multitasking that no users would ever do: Charlie first points out that multi-core CPUs cannot be fully utilized until the software industry as a whole begins threading properly, and then moves on to browbeat reviewers because their multitasking test suites don't properly emulate real-world behavior. This is a ridiculous catch-22. Reviewers are doing a bad job because they didn't test real-world software usage, despite there being very few applications currently available that can actually make use of a dual core properly? I'm sorry, but to me that just smacks of hypocrisy.

Regarding reviewers not testing benchmarks that make a product look bad: This is, again, a half-truth at best. Given that:

1. Applications that are single-threaded aren't going to benefit from dual cores, and
2. Most games are currently single-threaded (mostly, anyway),

...why would anyone benchmark current single-threaded games? It's pretty obvious what the results would be: the dual core is going to be slower than the single core that's clocked faster. There isn't any point in benchmarking a single-threaded application on two chips built around the same core where one is clocked faster than the other. It'd be like comparing a 2.8 GHz P4 to a 3.6 GHz P4--the result should be pretty obvious.

One site that is probably guilty in the eyes of this article would be Anandtech. Their first dual-core article clearly omitted gaming performance, and they state the following:

Anandtech wrote: (For plain-jane single threaded application performance), the Pentium Extreme Edition or the Pentium D will simply perform identically to the equivalently clocked Pentium 5xx series CPU. The second core will go unused and the performance of the first core is nothing new. Given the short lead time on hardware for this review, we left out all of our single threaded benchmarks given that we can already tell you what performance is like under those tests - so if you're looking for performance under PC WorldBench or any of our Game tests, take a look at our older reviews and look at the performance of the Pentium 4 530 to get an idea of where these dual core CPUs will perform in single threaded apps. There are no surprises here; you could have a 128 core CPU and it would still perform the same in a single threaded application. Closer to its launch, we will have a full review including all of our single and multithreaded benchmarks so that you may have all of the information that will help determine your buying decision in one place.

Did Anandtech omit benchmarking games? Yes! Is there a good reason to do so? Yes!

However, in Anandtech's second review, they compare apples to apples from a price perspective--a dual core 2.8 GHz proc and a 3.0 GHz single processor P4. The dual core is clearly a better processor in this scenario, given the price. Applications that are threaded properly are going to utterly smoke the faster clocked single core processor.

An example of this is the application I'm currently writing, which is heavily threaded and includes multiple video encodes and decodes. The things I could do with a dual core architecture make me giddy like a schoolboy. Not only would my application be able to re-encode content on the fly, but it would be capable of doing so with minimal interference to whatever else the computer is doing. If I can get dual core, hyperthreaded performance, that's even better.
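Not my actual application code, but the general shape is no secret: one thread decodes frames into a shared queue, another pulls them off and re-encodes, and on a dual core each stage can own a core. A rough sketch in modern C++, with decode_next_frame() and encode_frame() as hypothetical stand-ins for the real codec calls:

#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>
#include <vector>

using Frame = std::vector<unsigned char>;  // placeholder for a real decoded-frame type

// Hypothetical codec hooks; a real application would call into the actual
// decoder and encoder libraries here.
std::optional<Frame> decode_next_frame()
{
    static int produced = 0;
    if (produced == 100) return std::nullopt;  // pretend the clip has 100 frames
    ++produced;
    return Frame(640 * 480, 0);                // dummy "frame"
}

void encode_frame(const Frame&) { /* real encoder work would go here */ }

int main()
{
    std::queue<Frame> pending;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // Producer: decode on one core, push frames onto the shared queue.
    std::thread decoder([&] {
        while (auto frame = decode_next_frame()) {
            {
                std::lock_guard<std::mutex> lock(m);
                pending.push(std::move(*frame));
            }
            cv.notify_one();
        }
        {
            std::lock_guard<std::mutex> lock(m);
            done = true;
        }
        cv.notify_one();
    });

    // Consumer: re-encode on the other core as frames become available.
    std::thread encoder([&] {
        for (;;) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return !pending.empty() || done; });
            if (pending.empty() && done) break;
            Frame frame = std::move(pending.front());
            pending.pop();
            lock.unlock();
            encode_frame(frame);  // runs while the decoder keeps producing
        }
    });

    decoder.join();
    encoder.join();
    return 0;
}

The point of the queue is that neither stage ever waits on the other unless it has to, which is exactly where the second core pays off.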

It's going to take time for the software industry to catch up with dual cores, but when it does, the end user is going to see noticeable benefits.


Why video conferencing isn't going to happen any time soon.

kidjan kidjan writes  |  more than 9 years ago Before I'm savagely beaten for the title alone, let me first say a bit about what I do for a living.

I currently work for a company that is designing and implementing an IP-based security system. As such, we stream video over an IP based network using an RTP library that I adopted and implemented. I have extensive experience with video compression technology and streaming media over IP networks--especially lossy, erratic networks--from a programmatic perspective.

That being said, there are some fundamental roadblocks to having video conferencing. The common retort to this claim is "Well, we currently have audio--what's so different about audio and video transmission over the Internet?" Quite a bit, actually--we'll go into that a bit more.

The problem involves several factors:
1. A lack of bandwidth on people's upload connections to the Internet. While many ISPs provide people with three and four megabit connections to the Internet, their upload rate is often no more than 256 kb/s (kb being kilobit, KB being kilobyte). While audio is perfectly happy residing in 30-50 kb/s, video (in any acceptable quality) is not.

2. The Internet itself is unsuitable for delivering large video in a timely manner. The Internet is a big, lossy, trashy, inconsistent network. Packets arrive out of order. Packets are lost. Data arrives at inconsistent intervals. TCP does a lot to correct this, but while TCP has respect for guaranteed delivery, it has very little respect for timeliness of delivery. For a real-time application, like video conferencing, this is bad. If retransmitting lost data ends up taking 200 ms, on top of the rest of the time, the quality is degraded.

3. Lastly, while there have been huge advances in video compression technology, there are tradeoffs for everything. An MPEG4 stream is going to look better than an MJPEG stream at half the bandwidth, but the MPEG4 stream has qualities that make it significantly worse in several respects. Codecs have what we call "loss tolerance," or their ability (or lack thereof) to lose data mid-stream and still be able to render. Because MJPEG is just a series of frames with no relationship to previous or future frames, losing a frame isn't a big deal. In fact, it's not a problem at all. You skip the frame and move on. With MPEG4, however, you have frames that rely on previous frames--losing one isn't an option.

Combine #2 with #3 and you have a big, big problem. How do you reconcile a trashy, lossy network that doesn't deliver stuff on time with a codec that's completely intolerant to loss? Furthermore, even an MPEG4 stream will need 300+ kb/sec to look decent (we're talking 320*240--nothing spectacular), so we're already in conflict with #1.
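To make the loss-tolerance difference in #3 concrete, here is roughly what the receive path looks like in each case. This is only a sketch: it pretends one packet carries one whole frame (in reality a frame usually spans several RTP packets), and the render/decode calls are hypothetical stand-ins.

#include <cstdint>
#include <vector>

struct Packet {
    std::uint16_t sequence;             // RTP-style sequence number
    bool is_keyframe;                   // only meaningful for the inter-frame codec
    std::vector<unsigned char> payload;
};

// Hypothetical decode/display hooks.
void render_jpeg(const std::vector<unsigned char>&) { /* hand off to a JPEG decoder */ }
void decode_and_render_mpeg4(const std::vector<unsigned char>&) { /* hand off to an MPEG4 decoder */ }

// MJPEG: every frame stands alone, so a gap in the sequence numbers is just a
// skipped picture. Decode whatever actually arrives.
void on_mjpeg_packet(const Packet& p)
{
    render_jpeg(p.payload);
}

// MPEG4 (or any inter-frame codec): a gap breaks the reference chain, so
// everything up to the next keyframe has to be thrown away.
void on_mpeg4_packet(const Packet& p, std::uint16_t& expected, bool& waiting_for_key)
{
    if (p.sequence != expected)
        waiting_for_key = true;         // lost data; current reference frames are invalid
    expected = static_cast<std::uint16_t>(p.sequence + 1);

    if (waiting_for_key && !p.is_keyframe)
        return;                         // can't decode deltas against frames we never got

    waiting_for_key = false;
    decode_and_render_mpeg4(p.payload);
}

With MJPEG the cost of a lost packet is one missing picture; with the inter-frame codec the cost is every frame until the next keyframe, which on a trashy network adds up fast.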

My prediction: video conferencing will not take off until loss-tolerant, low-bandwidth codecs are developed (not likely), or the Internet itself sees a huge increase in reliability and/or throughput.


Ah....member number 844535....

kidjan kidjan writes  |  more than 9 years ago ....I feel so privileged. :D My interests include computers, C++, video compression and streaming, amplifiers and speakers, and other obviously geeky interests. Perhaps I'll post my $.02 about various topics here from time to time, starting with a nice article about why WMV9 is going to dominate over MPEG4. :D
