WebP vs. JPEG vs. JPEG XR

kidjan (844535) writes "A comparison of the three formats across varying quality settings and source material. WebP fares well, depending on the source. JPEG XR...let's just say it has problems."
While I can certainly see where they're coming from, I think it's much more productive to look at examples of software that can make use of dual cores this very instant. A few examples, perhaps beyond what people may be envisioning:
1. MP3/WMV/AAC/whatever encoding rates can be effectively doubled, whether working on a single track or multiple tracks. Multiple tracks is easy, because you just have two encoding threads leapfrogging one another. On a single track, it'd be a bit more involved, but basically a file could be broken into two prior to encoding. Either that, or they could both pull from the same buffer and leapfrog audio samples (or chunks).
2. Video re-encoding. Imagine being able to both VIEW and ENCODE a DVD all at once. One cpu is devoted to playback of the audio and video, and the other CPU is devoted to ripping and re-encoding.
3. The application I'm currently working on is video surveillance (more on this in a future article when I'm not under NDA). Not only can I easily eat all the CPU Intel and AMD are currently throwing at me, but I can potentially eat it for years to come when re-encoding content for the Internet, local area devices, and so forth. Audio and video applications are an almost endless source of dual-core perks, in my opinion.
4. Once games have been properly threaded, I think increases of anywhere from 30-45% can be expected. Additionally, one could do other things in the background while playing a game.
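The "leapfrogging" idea in point 1 can be sketched with a thread pool: two workers pull tracks off a shared list, and whenever one finishes it grabs the next, so both cores stay busy. This is a minimal illustration, not a real encoder; `encode_track` is a hypothetical stand-in (a real encoder would do its heavy lifting in native code, which is also what lets threads run truly in parallel in Python).

```python
import concurrent.futures
import time

def encode_track(track):
    """Hypothetical stand-in for an MP3/AAC encoder."""
    time.sleep(0.01)  # pretend to crunch samples
    return track.replace(".wav", ".mp3")

tracks = ["song_a.wav", "song_b.wav", "song_c.wav", "song_d.wav"]

# Two worker threads leapfrog through the track list: as soon as one
# finishes a track, it picks up the next pending one.
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    encoded = list(pool.map(encode_track, tracks))

print(encoded)
```

With four tracks and two workers, total wall time is roughly halved versus encoding serially, which is the whole point of the second core.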
There are some HUGE advantages, from a developer standpoint, to having extra parallel cycles on an end user's computer. I think it's going to usher in new eras of audio/video capabilities, and also enable users' computers to act more like "servers," offering things over the Internet that they never would have dreamed possible before.
This came as little surprise to me; however, the reasoning behind all of this had to do with the recent reviews of Intel dual core chips.
The article in question:
Charlie Demerjian states:
"To step sideways a bit, the current dual core chips are all going to suck on games regardless of whether they come from Intel or AMD. Both are heat limited and will debut several clock bins below their single core counterparts. The Intel side also takes a step backward in bus speed because of the added loads on the bus. All these are engineering realities, and in no way diminish the really great jobs both companies are doing to bring dual cores to the masses.
It does mean however that until software catches up, most likely not this year, that gaming is going to suck on them. They will cost more, take more power, and be a status symbol for the rich and stupid, but their frame rates will blow dead goats. On multitasking and multithreaded apps, they will shine like the sun, but how many of these are there? How many times do you encode a movie while typing a document, zipping your C drive, doing some heavy CFD work all while listening to a few MP3s? Yeah, me neither, but at least 3DSMax and photoshop will rock on the new chips.
Getting back to reality, imagine my surprise when I saw that this new preview studiously avoided games. They are testing two of the most popular gaming chips out there, and the heir to the throne, and they did not put in one single game benchmark. Not one, think about that.
In the rebuttal to this, there will be the usual cries of 'we were not testing gaming performance' or some such bullsh*t ass covering, but here is the truth, if you are going to multitask and do anything that tasks both of the CPUs, one of those is going to be a game.
If you read up on the benchmarks posited by the current crop of reviews, how many are things you do regularly? How many fit a scenario that you have ever found yourself in? How many of you would do seven things concurrently if you had seven things to do rather than do one or two at a time, and probably end up at the finish line first? The human mind does not multitask well, so 19 active windows is 17 or 18 more than you really can use at once."
The only real-world evidence provided to support the claim that reviewers are being bought out is regarding dual core processors, so that's what I'll address.
Charlie claims that because benchmarks for dual cores tested things no typical user would ever do regularly, and reviewers avoided benchmarks where dual cores performed poorly, the reviewers must have been bought out.
Regarding the multitasking that no users would ever do: Charlie first points out that multi-core CPUs cannot be fully utilized until the software industry as a whole begins threading properly, then browbeats reviewers for not testing with multithreaded software and for using suites that don't emulate real-world behavior. This is a ridiculous catch-22. Reviewers did a bad job because they didn't test real-world software usage, despite there being very few applications currently available that can actually make use of a dual core properly? I'm sorry, but to me that just smacks of hypocrisy.
Regarding reviewers not testing benchmarks that make a product look bad: This is, again, a half-truth at best. Given that:
1. Single-threaded applications aren't going to benefit from dual cores, and
2. Most games are currently single threaded
One site that is probably guilty in the eyes of this article would be Anandtech. Their first dual-core article clearly omitted gaming performance, and they state the following:
Anandtech wrote: (For plain-jane single threaded application performance), the Pentium Extreme Edition or the Pentium D will simply perform identically to the equivalently clocked Pentium 5xx series CPU. The second core will go unused and the performance of the first core is nothing new. Given the short lead time on hardware for this review, we left out all of our single threaded benchmarks given that we can already tell you what performance is like under those tests - so if you're looking for performance under PC WorldBench or any of our Game tests, take a look at our older reviews and look at the performance of the Pentium 4 530 to get an idea of where these dual core CPUs will perform in single threaded apps. There are no surprises here; you could have a 128 core CPU and it would still perform the same in a single threaded application. Closer to its launch, we will have a full review including all of our single and multithreaded benchmarks so that you may have all of the information that will help determine your buying decision in one place.
Did Anandtech omit benchmarking games? Yes! Is there a good reason to do so? Yes!
However, in Anandtech's second review, they compare apples to apples from a price perspective--a dual core 2.8 GHz proc and a 3.0 GHz single processor P4. The dual core is clearly a better processor in this scenario, given the price. Applications that are threaded properly are going to utterly smoke the faster clocked single core processor.
An example of this is the application I'm currently writing, which is heavily threaded and includes multiple video encodes and decodes. The things I could do with a dual-core architecture make me giddy like a schoolboy. Not only would my application be able to re-encode content on the fly, but it would be capable of doing so with minimal interference to the rest of the computing being done on the machine. If I can get dual-core hyperthreaded performance, that's even better.
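The shape of a heavily threaded encode pipeline like the one described above is essentially producer/consumer: one thread captures (or decodes) frames and another re-encodes them, with a bounded queue in between so neither side runs away. This is a hand-rolled sketch under assumed names (`capture`, `encode` are illustrative, not from any real codebase); on a dual core, each thread can land on its own core.

```python
import queue
import threading

frames_in = queue.Queue(maxsize=8)  # bounded: capture blocks if encode falls behind
results = []

def capture(n_frames):
    # Producer: simulates pulling frames off a camera or decoder.
    for i in range(n_frames):
        frames_in.put(f"frame-{i}")
    frames_in.put(None)  # sentinel marks end of stream

def encode():
    # Consumer: simulates re-encoding each frame as it arrives.
    while True:
        frame = frames_in.get()
        if frame is None:
            break
        results.append(frame + ":encoded")

t1 = threading.Thread(target=capture, args=(5,))
t2 = threading.Thread(target=encode)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)
```

The bounded queue is the design choice that matters: it decouples the two stages without letting memory grow unbounded when one stage is slower than the other.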
It's going to take time for the software industry to catch up with dual cores, but when it does, the end user is going to see noticeable benefits.
I currently work for a company that is designing and implementing an IP-based security system. As such, we stream video over an IP based network using an RTP library that I adopted and implemented. I have extensive experience with video compression technology and streaming media over IP networks--especially lossy, erratic networks--from a programmatic perspective.
That being said, there are some fundamental roadblocks to video conferencing over the Internet. The common retort to this claim is "Well, we currently have audio--what's so different about audio and video transmission over the Internet?" Quite a bit, actually--we'll go into that in more detail.
The problem involves several factors:
1. A lack of bandwidth on people's upload connections to the Internet. While many ISPs provide people with three- and four-megabit connections to the Internet, oftentimes their upload rate is no more than 256 kb/s (kb being kilobits, KB being kilobytes). While audio is perfectly happy residing in 30-50 kb/s, video (in any acceptable quality) is not.
2. The Internet itself is unsuited to delivering large video streams in a timely manner. The Internet is a big, lossy, trashy, inconsistent network. Packets arrive out of order. Packets are lost. Data arrives at inconsistent intervals. TCP does a lot to correct this, but while TCP has respect for guaranteed delivery, it has very little respect for timeliness of delivery. For a real-time application like video conferencing, this is bad. If retransmitting lost data ends up taking an extra 200 ms on top of the normal transit time, the quality is degraded.
3. Lastly, while there have been huge advances in video compression technology, there are tradeoffs for everything. An MPEG-4 stream is going to look better than an MJPEG stream at half the bandwidth, but the MPEG-4 stream has qualities that make it significantly worse in several respects. Codecs have what we call "loss tolerance," or their ability (or lack thereof) to lose data mid-stream and still be able to render. Because MJPEG is just a series of frames with no relationship to previous or future frames, losing a frame isn't a big deal. In fact, it's not a problem at all. You skip the frame, and move on. With MPEG-4, however, you have frames that rely on previous frames--losing one isn't an option.
Combine #2 with #3 and you have a big, big problem. How do you reconcile a trashy, lossy network that doesn't deliver data on time with a codec that's completely intolerant to loss? Furthermore, even an MPEG-4 stream will need 300+ kb/s to look decent (and that's at 320x240--nothing spectacular), so we're already in conflict with #1.
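The bandwidth conflict with #1 is simple arithmetic, using the figures from the text (all in kilobits per second):

```python
# Budget check for a video call over a typical residential upload link.
upload_kbps = 256   # common residential upload cap (from point 1)
audio_kbps = 40     # middle of the 30-50 kb/s range audio is happy in
video_kbps = 300    # rough minimum for passable 320x240 MPEG-4

total = audio_kbps + video_kbps
print(f"{total} kb/s needed vs {upload_kbps} kb/s available")
print("fits" if total <= upload_kbps else "does not fit")
```

Audio plus minimally acceptable video already overshoots the upload pipe before any packet loss or retransmission overhead is accounted for.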
My prediction: video conferencing will not be solved until low-bandwidth, loss-tolerant codecs are developed (not likely), or the Internet itself sees a huge increase in reliability and/or throughput.