Java Vs. C#: Which Performs Better In the 'Real World'?
I would say that async/await turned out to be a lot less useful than it could have been. The .Net team figured out how to write very clever and efficient iterators by letting the compiler construct a state machine under the hood, a great hammer in the .Net toolbox.
The risk with figuring out something really clever is that you want to apply it to more things, and the .Net team applied their compiler trick to the wrong thing. Sure, it will work for GUI programs that might create 100 or even 1,000 concurrent tasks with async. But unless I have completely misunderstood how they implemented this, async/await in C# 5 will not scale to 100,000 concurrent tasks, as it puts way too much pressure on the GC and the memory bus due to the way it works (lots of memcpy from the stack into the heap when an async task needs to block).
A description of how it works under the hood can be found here
The trick applied here is to put all the automatic variables on the stack just as expected, but a method marked as "async" might "lift" the local variables into a Task instance (i.e. allocate a piece of memory and copy the needed context from the stack into that memory area). This won't scale well in scenarios where you have hundreds of thousands of concurrent clients to track, and they are doing a lot of calls that need to block for some time.
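The "lifting" described above can be sketched by hand in Go, just to illustrate the idea (the names asyncState and beginAdd are made up, and this is not the actual C# machinery): instead of living on the stack, the locals are copied into a heap allocation that survives across the suspension point.

```go
package main

import "fmt"

// asyncState plays the role of the compiler-generated state machine:
// the locals a and b are "lifted" off the stack into this heap
// allocation so they survive while the operation is suspended.
type asyncState struct {
	a, b   int
	resume func(s *asyncState) int
}

// beginAdd captures its locals into a heap-allocated state and
// returns it; the caller later invokes resume, the way an awaiter
// would after the blocking part completes.
func beginAdd(a, b int) *asyncState {
	return &asyncState{
		a: a, b: b,
		resume: func(s *asyncState) int { return s.a + s.b },
	}
}

func main() {
	st := beginAdd(2, 3) // locals copied from the stack to the heap here
	// ... the task could block here; its context lives on the heap ...
	fmt.Println(st.resume(st)) // prints 5
}
```

One such allocation per suspended call is exactly the per-task heap traffic the comment is complaining about.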
Erlang and Go got this right: they don't allocate large numbers of memory areas and copy stuff around, they just ensure that every new process/goroutine (the equivalent of a Task in Erlang/Go) starts out with a _very_ small stack (like 200-300 bytes), and switching to a new process/goroutine just involves switching the CPU registers (including the stack pointer).
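A minimal sketch of what that buys you in practice (runWorkers is a made-up name; note that current Go starts goroutines with a few kilobytes of stack rather than the 200-300 bytes quoted above, which is closer to Erlang's figure):

```go
package main

import (
	"fmt"
	"sync"
)

// runWorkers spawns n goroutines that each "block" on a channel send,
// then collects and sums their ids. Each goroutine starts with a
// small, growable stack, so even very large n stays cheap compared
// to OS threads or heap-lifted task contexts.
func runWorkers(n int) int {
	var wg sync.WaitGroup
	results := make(chan int, n) // buffered so no send ever blocks forever
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			results <- id // stand-in for real blocking work
		}(i)
	}
	wg.Wait()
	close(results)
	sum := 0
	for v := range results {
		sum += v
	}
	return sum
}

func main() {
	const n = 100000
	fmt.Println("sum from", n, "goroutines:", runWorkers(n))
}
```

Spawning 100,000 goroutines like this completes in well under a second on commodity hardware, which is precisely the scale the comment argues C# 5's implementation won't reach comfortably.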
C# got another thing wrong in the process: marking a method returning T as async changes the signature of that function. It now returns Task<T> instead. That is bad because you have to decide upfront which methods you want to run asynchronously and which to run synchronously, unless you are prepared to break your API later. Yet again something Erlang and Go got right: the signature of a function is always the same, as you always implement it as if it would run synchronously. The _caller_ of your function determines whether he/she wants to make a synchronous ("normal") call or an asynchronous call (just wrap it in a process/goroutine).
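The caller-decides point looks like this in Go (fetchUser is a made-up example function):

```go
package main

import "fmt"

// fetchUser is written as a plain synchronous function; nothing in
// its signature says whether it will be run synchronously or
// asynchronously, so no "async" keyword can ever leak into the API.
func fetchUser(id int) string {
	return fmt.Sprintf("user-%d", id)
}

func main() {
	// Synchronous call: just call it.
	fmt.Println(fetchUser(1))

	// Asynchronous call: the *caller* wraps it in a goroutine and
	// uses a channel to get the result back. The signature of
	// fetchUser never changed.
	ch := make(chan string)
	go func() { ch <- fetchUser(2) }()
	fmt.Println(<-ch)
}
```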
They have rewritten the async API three times in .Net now; they need at least one more try to get it right.
Samsung's Galaxy S III Steals Smartphone Crown From iPhone
Well, both Apple and Samsung got smoked by the famous company Others that sold 127.6 million units.
Eat that Apple/Samsung!
Intel Details Power Management Advancements in Haswell
Well, comparing Atom N570 based system vs some Cortex A9 SoC isn't really a fair comparison, is it?
The Atom system has to power things like PCI busses, SATA-controllers etc.
How about redoing that comparison using Medfield (an Atom-based SoC), which still uses an Atom CPU (the Bonnell core) that can hit 1.6 GHz but uses FAR less power when looking at the system as a whole?
The Linux-Proof Processor That Nobody Wants
Unless Intel starts making load and store instructions that can function in big endian mode (we can only dream), data loading in an emulator/JIT will always be a huge execution burden.
You mean like the movbe instructions supported by Atom and Haswell?
All current x86 CPUs do have support for the bswap instruction, so they can emulate movbe with one bswap and one mov.
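In higher-level terms, a big-endian load on a little-endian machine is just a plain load plus a byte swap, which is all movbe fuses into one instruction. A sketch in Go (loadBE32 is a made-up name; math/bits.ReverseBytes32 is a real standard-library function that compilers can typically lower to a single bswap, or to movbe where available):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math/bits"
)

// loadBE32 emulates what movbe does in one instruction: load 32 bits
// and byte-swap them. On x86 without movbe this is exactly the
// one-mov-plus-one-bswap sequence described above.
func loadBE32(b []byte) uint32 {
	le := binary.LittleEndian.Uint32(b) // plain little-endian load (mov)
	return bits.ReverseBytes32(le)      // byte swap (bswap)
}

func main() {
	data := []byte{0x12, 0x34, 0x56, 0x78}
	fmt.Printf("0x%08X\n", loadBE32(data)) // prints 0x12345678
}
```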
Cops Can Crack an iPhone In Under Two Minutes
Creating tools is perfectly legal.
Not according to 17 USC 1201(a)(2) and 17 USC 1201(b)(1) it isn't.
From the article: "Micro Systemation, a Stockholm-based company..."
Instead of re-inventing something, I'll just reuse a quote from The Pirate Bay:
"As you may or may not be aware, Sweden is not a state in the United States
of America. Sweden is a country in northern Europe.
Unless you figured it out by now, US law does not apply here."
Intel Relying On Ice Cream Sandwich For Tablet Push
The Z6xx-based SoC has a TDP of 1.3-3 W. TDP does not equal idle power consumption; TDP is much closer to the maximum power usage, since TDP (Thermal Design Power) is the amount of heat that the system must be able to move away from the chip for it to operate reliably.
The performance figures that AnandTech posted for Medfield seem to suggest that Atom has an IPC (instructions per clock) much closer to Cortex A9 than A8. A9 should be up to 25% faster at the same clock speed, according to ARM. The power figures for Medfield don't look too shabby either, but they were measured by Intel, so I guess they need to be taken with a grain of salt.
Link to the Medfield article: http://www.anandtech.com/show/5365/intels-medfield-atom-z2460-arrive-for-smartphones
Intel's Plans For X86 Android, Smartphones, and Tablets
The mistake most people seem to make here is to compare ARM to IA32, when they should be comparing ARM to Intel64/AMD64 (x86_64), since even Atom can run 64-bit code these days.
Going to 64-bit does increase code size a bit, but one of the good things about x86/x86_64 code is that it is VERY dense. This document suggests that 64-bit x86 code is actually even denser than ARM Thumb code in most cases (which in turn is denser than "normal" ARM code).
High code density means more cache hits, which means better performance and lower power consumption.
x86_64 has the same amount of integer registers as ARM: 16.
Every single x86_64 CPU has support for SSE, which means that floating point operations can be (and are) handled by the 16 SSE registers instead of the old x87 FPU stack.
The fact is that the 64-bit specification for x86 fixed a large number of problems that the 32-bit specification had, making x86_64 a really good architecture without any significant flaws.
What Is the Most Influential Programming Book?
I used a lot of the techniques described in "The Dilbert Principle" combined with "The Pragmatic Programmer", and I must say that it has done wonders for my career as a programmer/software architect so far (and I really DO mean that).
A Real World HTML 5 Benchmark
Built in browser, HTC Desire, 1GHz ARM Cortex A8, Android 2.2: 2404 (137/1/1690)
Chrome 8 (64-bit), Dell E4300, 2.4 GHz Core2Duo, Kubuntu 10.10: 11306 (652/61/7057)
Rekonq (64-bit), Dell E4300, 2.4 GHz Core2Duo, Kubuntu 10.10: 10006 (576/27/6634)
Chrome 8 (32-bit), MBP 2010, 2.4 GHz Core i5, OS X 10.6: 11475 (630/56/7686)