top Nokia's N1 Android Tablet Is Actually a Foxconn Tablet
Still, the only advantage is if someone manages to run Linux on it. Being able to run full-featured Linux might make up for the extra heat and lower battery life.
Um, you know Android is Linux, right? There's not much special about the Linux kernel in Android, just a few tweaks to the stock kernel to suit the environment in which it runs.
Almost all the Android special sauce is the user space, so the main difference between an Android system and a regular Linux distribution is what happens when PID 1 is executed. Change init, and you change the system entirely.
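The "change init, and you change the system" point can be caricatured in a few lines. This is an illustrative sketch, not how any real init is written: whatever userland PID 1 launches defines what the "distro" looks like, while the kernel underneath stays the same.

```python
import subprocess
import sys

def tiny_init(userland_cmd):
    # A toy stand-in for PID 1: launch the userland, then wait on it.
    # Point this at Android's service stack and you get Android; point
    # it at a regular distro's init and you get that distro instead.
    child = subprocess.Popen(userland_cmd)
    return child.wait()
```

A real init also reaps orphaned processes and respawns services, but the core idea is the same: the kernel is shared, the userland is the system.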
Run a Debian chroot under Android if you want a regular-looking Linux. Install a terminal app on Android, use a BT keyboard for input, and you'd be hard pressed to tell the difference from a regular Linux distro command line.
top Will HP's $200 Stream 11 Make People Forget About Chromebooks?
- return Windows to Microsoft for a $100 refund
Seriously - if you can score this hardware for $100 and run some other OS, they'll sell like hotcakes.
Except you'd more likely get around the ~$15 (guesstimate, no citation) OEM unit license cost as a refund. We're not talking the retail license here, and it highlights what a rip-off the retail license cost actually is (assuming the extra cost is for the "support" that comes with the retail license.)
top Fuel Efficiency Numbers Overstate MPG More For Cars With Small Engines
Top Gear had an interesting experiment where they raced a Prius against a BMW M3. But what they did was have the Prius go all out and the M3 just paced it. Then they measured the actual gas consumed and found that the BMW had better mileage under those circumstances.
This is what I'd expect, to be honest. I'd expect efficiency losses from the mixed drivetrain of the Prius, as there are energy losses when charging the battery under braking. Plus, most of the kinetic energy of the Prius is lost as heat in the brakes, because the battery can't handle the power of hard braking under regenerative braking, so the physical brakes take over. Under normal driving conditions, much more of the kinetic energy would have been captured, thanks to much less aggressive braking.
This scenario was pretty much the worst case for the Prius, whereas the BMW was very far from the worst case (being a faster car generally, it was cantering.)
But Jezza did make the point that it is largely how you drive that determines mileage, and even quite big cars can get excellent mileage with highway driving, especially if they use efficient turbo diesel engines (which have much better part-throttle efficiency than comparable gas/petrol engines.) Get up to speed, stick the cruise control on, then relax.
top Update: At Least 31 People Feared Dead After Japan Volcano Erupts
Perhaps you should read the articles you link.
The scientists were well aware of the earthquake! But they did not issue proper warnings! And that *is manslaughter*!
According to the article, the scientists were well aware of the tremors that preceded the major earthquake, but as Italy is seismically very active anyway, didn't see any cause for concern over and above all the other tremors Italy gets. And as none of them were alive in the 14th, 15th and 18th centuries (for which I doubt accurate, detailed seismic records were kept), they had no previous data on which to base their findings other than the ongoing seismic activity.
The article points to nothing else other than a travesty of justice.
top Mark Zuckerberg Throws Pal Joe Green Under the Tech Immigration Bus
Dick Cheney brought us the current mess. He set the bar. W was just his sock-puppet.
Oh yeah, that makes sense. The son of a former President, former CIA Director, grandson of a U.S. Senator, and great-grandson of one of the 19th century's rail barons was merely a sock puppet serving the interests of the son of a minor bureaucrat with the Department of Agriculture. You know, people should look at the nature of history before they start building conspiracy theories.
The son of a bastard nobody changed millions of lives with war.
If you think "stock" makes people great or powerful, then you're no better than all the various monarchies overthrown in the last few hundred years. Nepotism only goes so far.
top The Quiet Revolution of Formula E Electric Car Racing
A lot of car makers left F1 in the past, Mercedes has returned, but Honda, BMW and Toyota have left.
Only because they were having their ass handed to them on a plate. Toyota achieved literally nothing in their F1 stint, BMW did get some wins, but weren't competitive enough to justify the investment. Honda ditto, but left at the wrong time (the post-Honda Brawn team won the 2009 championship with the Honda designed car.)
And there are other racing series, which may be more road relevant. The Audi R18 e-tron has a diesel hybrid drivetrain with flywheel-based energy storage. Very road relevant and innovative in the field.
Motor Racing does help drive innovation but in a sport where the FIA have virtually done away with any concept of innovation, it'll be difficult to see how this new formula will enhance the sport or spur innovation in day to day cars. Fans are leaving, sponsors are worried and that means no money and a dead series coming soon.
It's not all about innovation. It's also about the grunt work of refining what you have. That's why Mercedes are dominating even the other identically powered cars. They've done the best job within the rules defined.
And there are lots of ways to innovate in chassis and aerodynamic design. The current crop of F1 cars have a very diverse array of front end designs.
And let's be honest, most F1 innovations don't translate to road cars anyway. The biggest influence of F1 and other motor racing has been in the engine management and fuel injection areas. Racing aerodynamics? Moot. Suspension design? Not applicable to most road cars. Sequential gearboxes? Came from bikes anyway. Tires? Irrelevant unless you only want your tires to last a week.
top Research Shows RISC vs. CISC Doesn't Matter
20 years ago, RISC vs CISC absolutely mattered. The x86 decoding was a major bottleneck and transistor budget overhead.
As the years have gone by, the x86 decode overhead has been dwarfed by that of other structures: functional units, reorder buffers, branch prediction, caches, etc. The years have been kind to x86, making the decode overhead look like noise in performance terms, just an extra stage in an already long pipeline.
All of which paints a bleak picture for Itanium. There is no compelling reason to keep Itanium alive other than existing contractual agreements with HP. SGI was the only other major Itanium holdout, and they basically dumped it long ago. And Itaniums are basically just glorified space heaters in terms of power usage.
top UK To Allow Driverless Cars By January
I suggest government ministers, who seem to be driven round everywhere anyway. Might as well save a few drivers' salaries in these times of austerity.
top Greenpeace: Amazon Fire Burns More Coal and Gas Than It Should
How far is it from Amsterdam to Luxembourg anyway?
A four-hour drive by car (359.5 km), according to Google. I'm curious to know if taking a plane is more energy efficient than a car or train.
Depends on how full the plane, train and car are. A single person in a largish car (he's a CEO, remember) probably won't beat a full short haul flight for the same distance.
Trains are among the most efficient transportation methods (hard wheels on smooth rails = low rolling resistance) but the journey may not be the fastest nor most direct.
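The "depends on how full they are" point reduces to simple per-passenger arithmetic. A rough sketch, where every consumption figure is an assumed, illustrative guess (not a measurement), using the ~360 km distance from the comment above:

```python
# Back-of-the-envelope fuel per passenger for the Amsterdam-Luxembourg trip.
# All litres-per-100km figures below are rough assumptions for illustration.
TRIP_KM = 360  # roughly Amsterdam -> Luxembourg by road

def litres_per_passenger(litres_per_100km, passengers):
    # Total fuel burned over the trip, split across everyone on board.
    return TRIP_KM / 100 * litres_per_100km / passengers

car   = litres_per_passenger(9.0, 1)      # largish car, lone CEO driving
plane = litres_per_passenger(350.0, 150)  # short-haul jet, fairly full
```

With those assumed numbers the lone driver burns about 32 litres while each passenger on the full flight accounts for about 8, which is the comment's point: occupancy dominates the comparison.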
top Nearly 25 Years Ago, IBM Helped Save Macintosh
PowerPC had good performance for several years. When the 603 and 604 were around they had better performance than x86 did. The problems started when the Pentium Pro came out. Even then it was not manufactured in enough numbers to be a real issue. Then the Pentium II came out...
No, I think it was more the Pentium 4 era when Intel overtook Motorola. The PPC G4 design had started to hit up against clock speed walls, and couldn't scale the FSB up either. While Netburst was a disaster for performance per watt, it did scale clock-speed wise and had a very fast FSB and memory subsystem, and while everyone else was hovering around the sub-2GHz mark, Intel got plenty of practice at high clock frequencies.
Once the Netburst FSB was moved to the P6 architecture in the form of the Pentium M, Intel had a winner on their hands, one that has kept them ahead of everyone else to this day.
top Nearly 25 Years Ago, IBM Helped Save Macintosh
No, this is stupid, wasteful, unoptimized software that performs like feces compared to a platform optimized piece of software.
Eh? What are you on about?
Yeah, those hand tweaked 16bit binaries performed really well on the pipelined i486 processors of the time. Really extracted all the potential out of the advances that were taking the CPU industry by storm.
In case you missed it, I was being sarcastic. "Platform optimised" (read: DOS) programs held the industry back at least a decade, and it's only after we left the 16-bit shackles^W^Wplatform optimized software behind that x86 based platforms started to reach parity with their RISC based peers.
After all, Doom was famously written in C on a NeXT cube, then ported to x86/DOS and "platform optimised" as a final step.
The whole myth of software portability I've heard most of my life has never borne fruit that didn't need tweaks for different platforms.
The software I write has had no tweaks since we stopped supporting HP-UX 10. The biggest headache is GUI code, but libraries such as Qt take care of that.
The only performance tweaks we do is upgrading compilers, and ensuring we use efficient algorithms (ie, not O(N^2) when O(NlogN) is available.)
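The algorithm point above is easy to make concrete. A minimal, illustrative example (not from any real codebase): both functions below answer the same question, but one does O(N^2) work and the other O(N log N), and no amount of platform tweaking closes that gap on large inputs.

```python
# Same answer, different growth rates: the win comes from the algorithm,
# not from hand-tuning either function for a particular CPU.

def has_duplicates_quadratic(items):
    # O(N^2): compare every pair of elements.
    n = len(items)
    return any(items[i] == items[j]
               for i in range(n) for j in range(i + 1, n))

def has_duplicates_nlogn(items):
    # O(N log N): sort once, then scan adjacent elements.
    s = sorted(items)
    return any(a == b for a, b in zip(s, s[1:]))
```

On a million elements the quadratic version does ~5x10^11 comparisons versus ~2x10^7 for the sort-based one; that ratio dwarfs anything a compiler upgrade buys you.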
The whole notion in the first place was to expand programming to the masses by giving the appearance of the elimination of the need of specialists.
A good intention, to be sure, except for the specialists. The problem was that a specialist with knowledge of how the hardware operates could write software that took better advantage of, and/or performed better on, a given platform. Things like CPU instruction set options, memory alignment, etc.
There is now a resurgence of platform-optimized specialization thanks to big data. Do you want your humungous data sets processed and analyzed in months or years by the average programmer, or do you want it in days and weeks from the programmer who really, really knows how to squeeze the hardware?
That's right, the demand for hand-optimizing assembly programmers is through the roof.
Do you want your big data software written in months or years, as the programmer tries to squeeze every ounce of performance from the CPU, while your competitor has had the software running for months already, compensating for the lack of optimization by buying an extra rack of servers?
Big data is processed faster by better algorithms, not platform tweaking.
Facebook optimized their platform by JIT compiling their PHP, but the stuff was still written in PHP in the first place by "non-specialists" and the optimization was a relatively small final step. As an added bonus, they're also porting to ARM by basically re-implementing just the JIT compiler for ARM. So not really optimized for any particular platform, just x64 by virtue of being their primary target platform.
Google use C++, Java and Python, and I'd bet there isn't any Google hand optimized assembler in any of that mix. They kicked big data butt by using clever, scalable algorithms.
top Study: Why the Moon's Far Side Looks So Different
So.... At the risk of stating the obvious: modern man has been on this planet for around 50,000 years;
Australia has been colonised by "modern man" for longer than 50,000 years. Modern man left Africa more like 100,000 years ago, and if you lifted one of those babies out and plonked him in the "modern world", no one would notice the difference.
Anatomically modern humans are more like 200,000 years old, and I dare speculate that their predecessors gazed at the stars and moon.
top Samsung Release First SSD With 3D NAND
Parent poster here. I use the 840 Pros I mentioned above in my laptop. I already have extensive caching going on, with about 12GB of my 32GB of RAM, but it still saturates the SATA bus due to the mostly random nature of the I/Os. It's basically a giant 300GB B+tree with 2MB leaf nodes and roughly a 40% insertion, 40% lookup and 20% deletion ratio.
Wouldn't something like this or another enterprise drive be a better match for you?
top HP Unveils 'The Machine,' a New Computer Architecture
Everything should be by reference. Copying crap all over is bullshit.
How do you atomically validate and use data that is passed by reference? You might validate the data, then use it, but in between, the source of the data might change it in nefarious ways, leaving you open to a time-of-check-to-time-of-use (TOCTOU) attack. Some copies are unavoidable, and single or multiple address spaces make no difference whatsoever in this case.
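The race described above can be sketched in a few lines. This is a toy illustration (shared dict, contrived attacker), not a real exploit: the unsafe version reads the shared value twice, once to check and once to use, and the defensive copy is exactly the "unavoidable copy" the comment refers to.

```python
# Validate-then-use on by-reference data vs. a defensive copy.
import threading

shared = {"size": 10}  # data "passed by reference" to the victim

def check_then_use_unsafe():
    # Time-of-check:
    if shared["size"] <= 100:
        # Time-of-use: another thread may have mutated the dict in
        # between, so this second read can return a different value.
        return shared["size"]
    return None

def check_then_use_safe():
    snapshot = dict(shared)          # copy first ...
    if snapshot["size"] <= 100:      # ... then validate and use the copy
        return snapshot["size"]
    return None
```

Under a concurrent writer, the safe version can only ever return None or a validated value; the unsafe version has a window where it returns a value that never passed the check.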
top HP Unveils 'The Machine,' a New Computer Architecture
As I pointed out above, go check out the design specs for OS/400 (System i). It's got a flat address space and was one of, if not the first, mid-range systems to achieve C2 certification. But I suppose you're talking about a flat address space with an open-source system - you're probably right.
OS/400 is a bit different in that programs are not native to the CPU, and are compiled into native code at installation time. In that sense, OS/400 can enforce security and separation statically at compile time, and so doesn't need isolated address spaces.
For native processes requiring any semblance of isolation, processes would have to be tagged to determine which addresses in a flat address space they can access, which implies some sort of segmentation or page tagging, and once we're validating page accesses anyway, we might as well have a full MMU.
top HP Unveils 'The Machine,' a New Computer Architecture
From what I gather, memory management, which is a large part of what an OS does, would be completely different on this architecture as there doesn't seem to be a difference between RAM and disk storage. It's basically all RAM. This eliminates the need for paging. You'd probably need a new file system, too.
Paging provides address space isolation and protection, and separation of instructions and data (unless you advocate going back to segmented memory). It won't be going away anytime soon.
A single flat address space would be a disaster for security and protection.
Still, it would make the filesystem potentially simpler, making non-mmap reads/writes basically a memcpy in kernel space.
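That "reads/writes become basically a memcpy" idea already exists in miniature today with memory-mapped files. A small sketch (file name and contents are arbitrary examples): once the storage is mapped, a read is a byte copy out of the mapping and a write is a byte copy into it, with no explicit read()/write() calls.

```python
# With storage mapped into the address space, file I/O is just slicing
# bytes in and out of memory - the kernel-side memcpy the comment describes.
import mmap
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "machine_demo.bin")
with open(path, "wb") as f:
    f.write(b"persistent bytes live here")

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)
    head = bytes(mm[:10])    # a "read": copy bytes out of the mapping
    mm[0:4] = b"PERS"        # a "write": copy bytes into the mapping
    mm.flush()
    mm.close()
```

On an architecture where all storage is memory, the whole filesystem could in principle look like this mapping, minus the open/mmap ceremony.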
top UK May Kill the EU's Net Neutrality Law
And buy European porn. Love the common market.
top Finally, Hi-Def Streaming Video of the ISS's View of Earth
... All Japanese, although TFA is at pains to point out that there was some seemingly minor American involvement too. Are there any major camera manufacturers left in the US?
Well, it is the International Space Station, not the American Space Station (though that would have a much better initialism.)
top Why Portland Should Have Kept Its Water, Urine and All
People get into high positions by rising as those above are destroyed in the public eye. Those above are destroyed in the public eye when they fail to respond to every absurd panic with equal panic and alarm. A rational leader is soon removed from power.
Or, more succinctly, shit floats!
top SSD-HDD Price Gap Won't Go Away Anytime Soon
Doesn't creating a striped RAID make up most of the performance issues from using a HDD over a SSD? At that point, it's more the bus or CPU that's a limiting factor?
When doing anything random-IO intensive, even an SSD won't saturate a SATA 3 link. But it'd still be orders of magnitude faster than a mechanical disk.
Consider: a 15,000 RPM enterprise disk might top out at perhaps 250 random IOPS. The lowest of low-end SSDs will beat that by perhaps 10x. So to get performance equivalent to an entry-level SSD, you might need 10+ enterprise 15K RPM disks. Once your SSD storage becomes big enough, it might actually be more cost effective to have SSDs instead of the 10x HDDs and the associated management and enclosure costs.
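The spindle arithmetic above is worth writing out. Both IOPS figures are the comment's own guesstimates, not vendor specs:

```python
# How many 15K spindles does it take to match one cheap SSD on random I/O?
import math

HDD_RANDOM_IOPS = 250    # high-end 15K RPM enterprise disk (guesstimate)
SSD_RANDOM_IOPS = 2500   # very low-end SSD, ~10x the disk (guesstimate)

spindles = math.ceil(SSD_RANDOM_IOPS / HDD_RANDOM_IOPS)
```

Ten spindles per entry-level SSD, before counting the extra enclosures, controllers and admin time, which is why the crossover point arrives sooner than raw per-GB prices suggest.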
In fact, in the TPC benchmarks so favoured by DB vendors, a large proportion of the cost of the rig is the massive storage required to fulfill the IO rates demanded (no, I don't have a citation offhand), even if the vast majority of the storage space is not actually used (HDDs used in these scenarios tend to be "short stroked").
However, for most people a compromise probably works best. Bulk storage on slow HDD, with fast random IO like hot data and journals on SSD. ZFS is a prime example, using L2ARC and ZIL on SSD for hot data.