NVIDIA Launches Tegra K1-Based SHIELD Tablet, Wireless Controller
According to http://techreport.com/news/268... :
- It can stream games from a PC
- It has a stylus that enables handwriting recognition
- It has enough processing power to emulate a Nintendo Wii (see https://www.youtube.com/watch?... )
Qualcomm Announces Next-Gen Snapdragon 808 and 810 SoCs
"I know there's a lot of noise because Apple did [64-bit] on their A7. I think they are doing a marketing gimmick. There's zero benefit a consumer gets from that," -Anand Chandrasekher, former Qualcomm CMO
According to AnandTech ( http://www.anandtech.com/show/... ):
"Integer performance: The AES and SHA1 gains are a direct result of the new cryptographic instructions that are a part of ARMv8. The AES test in particular shows nearly an order of magnitude performance improvement. This is similar to what we saw in the PC space with the introduction of Intel's AES-NI support in Westmere. The Dijkstra workload is the only real regression. That test in particular appears to be very pointer heavy, and the increase in pointer size from 32 to 64-bit increases cache pressure and causes the reduction in performance. The rest of the gains are much smaller, but still fairly significant if you take into account the fact that we're just looking at what you get from a recompile. Add these gains to the ones you're about to see over Apple's A6 SoC and A7 is looking really good from a performance standpoint.
If the integer results looked good, the FP results are even better: The DGEMM operations aren't vectorized under ARMv7, but they are under ARMv8 thanks to DP SIMD support so you get huge speedups there from the recompile. The SFFT workload benefits handsomely from the increased register space, significantly reducing the number of loads and stores (there's something like a 30% reduction in instructions for the A64 codepath compared to the A32 codepath here).
The conclusion? There are definitely reasons outside of needing more memory to go 64-bit."
Intel's New Desktop SSD Is an Overclocked Server Drive
There is a new standard that will increase SATA speeds ( http://en.wikipedia.org/wiki/S... ).
Currently, Apple computers use PCIe SSDs, which improves their performance:
"I'm very pleased with Apple's PCIe SSD, at least based on Samsung's new PCIe controller. Sequential performance is up considerably over last year's 6Gbps SATA drive. Go back any further and the difference will be like night and day, especially if you were one of the unfortunate few with an older Toshiba drive. Internal transfers are quicker, but to actually use the new SSD to its potential you'll really need a very fast external Thunderbolt array - even USB 3.0 can't completely tax it. There's still a lot more investigating that I want to do on Samsung's new controller, but my early results look very promising. It's sort of crazy that Apple now ships a mainstream consumer notebook with a PCIe SSD capable of almost 800MB/s. Now that Apple is off SATA, scaling storage performance should be much easier to do going forward. "
Dell Is Now a Private Company Again
Since they are no longer bound to shareholders, they can innovate for the sake of innovation. No need to bow down to the MS Overlords and do as they or the so-called markets please. They can afford to lose a billion bucks chasing their own dreams.
Well... the MS Overlords have lent Dell $2 billion.
"Dell on February 5 announced that Michael Dell and investment firm Silver Lake had offered $24.4 billion, or $13.65 per share, to buy out the company. The offer, subject to shareholder approval, included a $2 billion loan from Microsoft, and debt financing from Bank of America, RBC Capital Markets, Merrill Lynch, and Barclays."
The Chip That Changed the World: AMD's 64-bit FX-51, Ten Years Later
I was hoping to find a current review of the processor against current CPUs...
However, in AnandTech's Bench you can compare an AMD Athlon X2 4450e (2.3GHz, 1MB L2) with current CPUs. If you compare it to an Intel Core i7 4770K (3.5GHz, 1MB L2, 8MB L3; one of the best CPUs right now), you will find that the Intel CPU is between 3 and 9 times faster, most of the time about 6-7 times faster.
However, if you could compare an AMD FX-51 with a 66 MHz Pentium (the best CPU in September 1993), I think the difference would be far greater.
CPU development is currently focused on efficiency and lower power. In the ARM field, however, you can still find progress in raw CPU performance.
Ask Slashdot: Is iOS 7 Slow?
"When asked whether you should install iOS 6 on an iPhone 3GS, we can say "yes" without hesitation or condition. When it comes to the iPhone 4 and iOS 7, our response is a more measured "do it if you like the new features, but have you considered a newer phone?"
iOS 7 on Apple's oldest-supported hardware is hardly a disaster, but it's apparent that the only reason Apple issued this update was because they were selling the iPhone 4 free with contract up until September 10. It has been their value option for a year, and in the Apple ecosystem, even people who bought a new iPhone 4 on September 9 will get at least a year's worth of updates. The A4 simply isn't up to the task of rendering iOS 7 as Apple intended, and the upgrade in general performance and apparent smoothness between even the iPhone 4 and year-newer 4S is significant (to say nothing of the iPhone 5, 5C, and 5S).
When it comes to launching apps, the iPhone 4's general slowness is only exacerbated by the too-long animation durations in iOS 7. This is also a problem on the faster phones and tablets, but at least there you've got faster underlying hardware to keep everything moving at a steady clip.
It's great that Apple isn't abandoning older iPhone owners really. People buying an iPhone 4 free with contract were still getting a phone that felt reasonably fast with iOS 6, and they weren't necessarily aware that they were getting an older single-core SoC with an older, slower GPU that would be ill-suited for Apple's new direction. At least they have the option to upgrade. That said, the iPhone 4 and iOS 7 just can't quite provide an experience that's up to Apple's usual standard. Apply the update if there's an iOS 7 feature (or an iOS 7-only app) that you need in your life, but our recommendation now would either be to wait for potential performance boosts in a future iOS 7 update or to start looking into a new iPhone 5C or 5S."
Intel Shows 14nm Broadwell Consuming 30% Less Power Than 22nm Haswell
There is a good comparison of ARM vs x86 power efficiency at anandtech.com: http://www.anandtech.com/show/6536/arm-vs-x86-the-real-showdown
"At the end of the day, I'd say that Intel's chances for long term success in the tablet space are pretty good - at least architecturally. Intel still needs a Nexus, iPad or other similarly important design win, but it should have the right technology to get there by 2014."
"As far as smartphones go, the problem is a lot more complicated. Intel needs a good high-end baseband strategy which, as of late, the Infineon acquisition hasn't been able to produce. (...) As for the rest of the smartphone SoC, Intel is on the right track."
The future of CPUs is going to be focused on power consumption. The new Atom core is twice as powerful at the same power level as the current Atom core. See http://www.anandtech.com/show/7314/intel-baytrail-preview-intel-atom-z3770-tested:
" Looking at our Android results, Intel appears to have delivered on that claim. Whether we’re talking about Cortex A15 in NVIDIA’s Shield or Qualcomm’s Krait 400, Silvermont is quicker. It seems safe to say that Intel will have the fastest CPU performance out of any Android tablet platform once Bay Trail ships later this year.
The power consumption, at least on the CPU side, also looks very good. From our SoC measurements it looks like Bay Trail’s power consumption under heavy CPU load ranges from 1W - 2.5W, putting it on par with other mobile SoCs that we’ve done power measurements on.
On the GPU side, Intel’s HD Graphics does reasonably well in its first showing in an ultra mobile SoC. Bay Trail appears to live in a weird world between the old Intel that didn’t care about graphics and the new Intel that has effectively become a GPU company. Intel’s HD graphics in Bay Trail appear to be similar in performance to the PowerVR SGX 554MP4 in the iPad 4. It’s a huge step forward compared to Clover Trail, but clearly not a leadership play, which is disappointing."
Apple Renews Contract With Samsung Over A-Series Processors
Take it with a pinch of salt
Nvidia Tegra 4 Benchmark Results
Could these recent mobile phone processors be adapted to the desktop?
You can find Cortex A15 processors in the Samsung Chromebook. According to AnandTech ( http://www.anandtech.com/show/6422/samsung-chromebook-xe303-review-testing-arms-cortex-a15/6 ):
"The Cortex A15 is fast. Across the board we're seeing a 40 - 65% increase in performance over a dual-core Atom. Although it's not clear how performance will be impacted as companies work to stick Cortex A15 based SoCs in smartphones with tighter power/thermal budgets, in notebooks (and perhaps even tablets) the Cortex A15 looks capable of delivering a good 1 - 2 generation boost over Intel's original Atom core.
Windows Software Coming To Android Via Wine
How about GNU software? I'd love to run KDE on my phones and tablets.
More info at http://ruedigergad.com/2012/12/21/plasma-active-for-nexus-7-running-the-touch-optimized-plasma-active-linux-distribution-on-nexus-7/
NVIDIA Unveils GRID Servers, Tegra 4 SoC and Project SHIELD Mobile Gaming Device
According to a study of power efficiency focused on tablets with different CPUs (NVIDIA Tegra 3, Qualcomm Krait APQ8060A, Samsung Exynos Cortex A15) from anandtech.com ( http://www.anandtech.com/show/6536/arm-vs-x86-the-real-showdown and http://www.anandtech.com/show/6529/busting-the-x86-power-myth-indepth-clover-trail-power-analysis ), NVIDIA's Tegra 3 is less efficient than Intel's Clover Trail platform:
* Intel Clover Trail vs NVIDIA Tegra 3: "Ultimately I don't know that this data really changes what we already knew about Clover Trail: it is a more power efficient platform than NVIDIA's Tegra"
* Intel Clover Trail vs Qualcomm Krait: "We already know that Atom is faster than Krait, but from a power standpoint the two SoCs are extremely competitive. At the platform level Intel (at least in the Acer W510) generally leads in power efficiency. Note that this advantage could just as easily be due to display and other power advantages in the W510 itself and not necessarily indicative of an SoC advantage."
* Intel Clover Trail vs Samsung Exynos Cortex A15: "The Cortex A15 data is honestly the most intriguing. I'm not sure how the first A15 based smartphone SoCs will compare to Exynos 5 Dual in terms of power consumption, but at least based on the data here it looks like Cortex A15 is really in a league of its own when it comes to power consumption. Depending on the task that may not be an issue, but you still need a chassis that's capable of dissipating 1 - 4x the power of a present day smartphone SoC made by Qualcomm or Intel. Obviously for tablets the Cortex A15 can work just fine, but I am curious to see what will happen in a smartphone form factor"
The Wii Mini Is Real, Arrives December 7 — In Canada
According to The Verge (see http://www.theverge.com/2012/11/27/3698060/nintendo-wii-mini-no-internet-no-future ), "the Wii Mini comes with some unfortunate compromises, most notably the lack of any sort of online connectivity. Instead of being a media streamer-killing Netflix box that can also play a huge library of games, the Wii Mini feels more like a missed opportunity."
ARM Announces 64-Bit Cortex-A50 Architecture
AnandTech has a better article:
According to them, the ARM Cortex A57 core is a tweaked Cortex A15 core with 64-bit support, and the Cortex A53 core is a tweaked Cortex A7 core with 64-bit support. It is possible to mix A57 and A53 cores on the same die to improve efficiency.
What I would like to see is this kind of approach in the x86 world. Imagine having an AMD processor with two fast cores (Piledriver's successor, Steamroller) for heavy processing and two low-power cores (Bobcat's successor, Jaguar) for longer battery life.
Or Intel with their future Haswell and Silvermont architectures...
Report: Apple To Switch From Samsung to TSMC For ARM CPU Production
According to AnandTech, Intel's 2013 Core ULV processors will start at 10W:
" Finally, at IDF Intel showed a demo of Haswell running the Unigen Heaven benchmark at under 8W.
The chain of events tells us two things: 1) Intel likes to play its cards close to its chest, and 2) the sub-10W space won't be serviced by Atom exclusively.
Intel said Haswell can scale below 10W, but it didn't provide a lower bound. It's too much to assume Haswell would go into a phone, but once you get to the 8W point and look south you open yourself up to fitting into things the size of a third generation iPad. Move to 14nm, 10nm and beyond then it becomes more feasible that you could fit this class of architecture into something even more portable."
AMD Trinity APUs Stack Up Well To Intel's Core i3
AMD allowed websites to publish a preview of the benchmarks before the estimated date if they only focused on graphics performance. This is an unfair move by AMD.
Read http://techreport.com/blog/23638/amd-attempts-to-shape-review-content-with-staged-release-of-info for more details
(maybe in a couple of weeks you will find that AMD Trinity APUs have abysmal x86 performance compared to Intel CPUs)
Disclaimer: I own a laptop with an AMD CPU inside
Startup Aims For $99, Android-Powered TV Game Console
... and I could continue with differences in the GPU (NVIDIA Tegra vs. an unknown GPU).
These things matter for a game console.
ARM Publishes 64-bit "AArch64" Linux Kernel Support
I think that what is really awesome is that adding just 23k lines of code gives you support for a new CPU architecture!
Tegra 4 Likely To Include Kepler DNA
It is possible to use the GPU effectively to speed up some scientific simulations, usually fluid mechanics problems that can be solved by time marching (or any physics governed by hyperbolic differential equations). But working with the GPU is a real PITA. There is no standardization, and no real support for any high-level language. Of course they have bullet points saying "C++ is supported", but you dig in and find you have to link with their library, manage the memory yourself, and manage the data pipeline, fetch, and cache, and the actual amount of code you can fit in their "processing" unit is trivially small. All it could store turns out to be about 10 or so double-precision solution variables and the flux vector splitting for Navier-Stokes for just one triangle. About 40 lines of C code.
On top of everything, the binary is a mishmash of compiled executable chunks sitting in the interpreted code. Essentially, if a competitor or hacker gets the "executable", they can reverse engineer every bit of innovation you put into cramming your code into these tiny processors and recover your scientific algorithm at a very fine grain.
Then their sales critters create "buzz", making misleading, almost dishonest presentations about GPU programming and how it is going to achieve world domination.
According to Wikipedia, there are frameworks (like OpenCL: http://en.wikipedia.org/wiki/OpenCL ) for programming in high-level languages with compatibility across multiple platforms.
The Consoles Are Dying, Says Developer
Adobe Makes Flash on GNU/Linux Chrome-Only
Is Gnash a viable alternative to Flash? According to http://gnashdev.org/ , the latest version is 0.8.9, published in March 2011.