
Comments

Fuel Efficiency Numbers Overstate MPG More For Cars With Small Engines

Christian Smith Re:Top Gear had an interesting experiment (403 comments)

Top Gear had an interesting experiment where they raced a Prius against a BMW M3. But what they did was have the Prius go all out and the M3 just paced it. Then they measured the actual gas consumed and found that the BMW had better mileage under those circumstances.

This is what I'd expect, to be honest. I'd expect efficiency losses from the mixed drivetrain of the Prius, as there are energy losses when charging the battery under braking. Plus, under hard braking most of the Prius's kinetic energy is lost as heat, because the battery can't absorb charge at the rate hard braking delivers it, so the friction brakes take over. Under normal driving conditions, much more of the kinetic energy would have been captured with much less aggressive braking.
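
To put rough, illustrative numbers on that (assuming a ~1,400 kg Prius and a battery charge-acceptance limit of a few tens of kW, which is the right ballpark): at 100 km/h (~28 m/s) the car carries about 0.5 × 1400 × 28² ≈ 550 kJ of kinetic energy. Shedding that in a 3-second racing stop means absorbing ~180 kW, far beyond what the battery can take, so the friction brakes turn most of it into heat. A gentle 15-second stop needs only ~37 kW, most of which regenerative braking can capture.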

This scenario was pretty much the worst case for the Prius, whereas the BMW was very far from the worst case (being a faster car generally, it was cantering.)

But Jezza did make the point that it is largely how you drive that determines mileage, and even quite big cars can get excellent mileage with highway driving, especially if they use efficient turbo diesel engines (which have much better part-throttle efficiency than comparable gas/petrol engines.) Get up to speed, stick the cruise control on, then relax.

about three weeks ago
Update: At Least 31 People Feared Dead After Japan Volcano Erupts

Christian Smith Re:No warning? (54 comments)

Perhaps you should read the articles you link.

The scientists were well aware of the earthquake! But they did not issue proper warnings! And that *is manslaughter*!

According to the article, the scientists were well aware of the tremors that preceded the major earthquake, but as Italy is seismically very active anyway, they didn't see any cause for concern over those tremors above and beyond all the other tremors Italy gets. And as none of them were alive in the 14th, 15th and 18th centuries (for which I doubt accurate, detailed seismic records were kept), they had no previous data on which to base their findings other than the ongoing seismic activity.

The article points to nothing other than a travesty of justice.

about a month ago
Mark Zuckerberg Throws Pal Joe Green Under the Tech Immigration Bus

Christian Smith Re:Stop using Facebook (261 comments)

Dick Cheney brought us the current mess. He set the bar. W was just his sock-puppet.

Oh yeah, that makes sense. The son of a former President, former CIA Director, grandson of a U.S. Senator, and great-grandson of one of the 19th century's rail barons was merely a sock puppet serving the interests of the son of a minor bureaucrat with the Department of Agriculture. You know, people should look at the nature of history before they start building conspiracy theories.

This son of a bastard nobody changed millions of lives with war.

If you think "stock" makes people great or powerful, then you're no better than all the various monarchies overthrown in the last few hundred years. Nepotism only goes so far.

about a month ago
The Quiet Revolution of Formula E Electric Car Racing

Christian Smith Re:quiet = powerful (116 comments)

A lot of car makers have left F1 in the past; Mercedes has returned, but Honda, BMW and Toyota have left.

Only because they were having their asses handed to them on a plate. Toyota achieved literally nothing in their F1 stint; BMW did get some wins, but weren't competitive enough to justify the investment. Honda ditto, but left at the wrong time (the post-Honda Brawn team won the 2009 championship with the Honda-designed car.)

And there are other racing series which may be more road relevant. The Audi R18 e-tron has a diesel hybrid drivetrain with flywheel-based energy storage. Very road relevant and innovative in the field.

Motor Racing does help drive innovation but in a sport where the FIA have virtually done away with any concept of innovation, it'll be difficult to see how this new formula will enhance the sport or spur innovation in day to day cars. Fans are leaving, sponsors are worried and that means no money and a dead series coming soon.

It's not all about innovation. It's also about the grunt work of refining what you have. That's why Mercedes are dominating even the other identically powered cars. They've done the best job within the rules defined.

And there are lots of ways to innovate in chassis and aerodynamic design. The current crop of F1 cars have a very diverse array of front end designs.

And let's be honest, most F1 innovations don't translate to road cars anyway. The biggest influence of F1 and other motor racing has been in the engine management and fuel injection areas. Racing aerodynamics? Moot. Suspension design? Not applicable to most road cars. Sequential gearboxes? Came from bikes anyway. Tires? Irrelevant unless you only want your tires to last a week.

about 2 months ago
Research Shows RISC vs. CISC Doesn't Matter

Christian Smith Final nail in the Itanium coffin (161 comments)

20 years ago, RISC vs CISC absolutely mattered. The x86 decoding was a major bottleneck and transistor budget overhead.

As the years have gone by, the x86 decode overhead has been dwarfed by that of other structures: functional units, reorder buffers, branch predictors, caches, etc. The years have been kind to x86, making the decode overhead look like noise in the performance picture. Just an extra stage in an already long pipeline.

All of which paints a bleak picture for Itanium. There is no compelling reason to keep Itanium alive other than existing contractual agreements with HP. SGI was the only other major Itanium holdout, and they basically dumped it long ago. And Itaniums are basically just glorified space heaters in terms of power usage.

about 2 months ago
UK To Allow Driverless Cars By January

Christian Smith First Customers? (190 comments)

I suggest government ministers, who seem to be driven round everywhere anyway. Might as well save a few drivers' salaries in these times of austerity.

about 2 months ago
Greenpeace: Amazon Fire Burns More Coal and Gas Than It Should

Christian Smith Re: Clever editors. (288 comments)

How far is it from Amsterdam to Luxembourg anyway?

A four-hour drive by car (359.5 km), according to Google. I'm curious to know if taking a plane is more energy efficient than a car or train.

Depends on how full the plane, train and car are. A single person in a largish car (he's a CEO, remember) probably won't beat a full short-haul flight over the same distance.

Trains are among the most efficient transportation methods (hard wheels on smooth rails = low rolling resistance), but the journey may not be the fastest or most direct.

about 3 months ago
Nearly 25 Years Ago, IBM Helped Save Macintosh

Christian Smith Re:Another misleading headline (236 comments)

PowerPC had good performance for several years. When the 603 and 604 were around they had better performance than x86 did. The problems started when the Pentium Pro came out. Even then it was not manufactured in enough numbers to be a real issue. Then the Pentium II came out...

No, I think it was more the Pentium 4 era when Intel overtook Motorola. The PPC G4 design had started to hit clock speed walls, and couldn't scale the FSB up either. While NetBurst was a disaster for performance per watt, it did scale clock-speed-wise and had a very fast FSB and memory subsystem, and while everyone else was hovering around the sub-2GHz mark, Intel got plenty of high clock frequency practice.

Once the NetBurst FSB was moved to the P6 architecture in the form of the Pentium M, Intel had a winner on their hands, one that has kept them ahead of everyone else to this day.

about 3 months ago
Nearly 25 Years Ago, IBM Helped Save Macintosh

Christian Smith Re:Pairing? (236 comments)

No, this is stupid, wasteful, unoptimized software that performs like feces compared to a platform optimized piece of software.

Eh? What are you on about?

Yeah, those hand-tweaked 16-bit binaries performed really well on the pipelined i486 processors of the time. Really extracted all the potential out of the advances that were taking the CPU industry by storm.

In case you missed it, I was being sarcastic. "Platform optimised" (read: DOS) programs held the industry back at least a decade, and it's only after we left the 16-bit shackles^W^Wplatform optimised software behind that x86-based platforms started to reach parity with their RISC-based peers.

After all, Doom was famously written in C on a NeXT cube, then ported to x86/DOS and "platform optimised" as a final step.

The whole myth I've heard about software portability most of my life has never borne fruit that didn't need tweaks for different platforms.

The software I write has had no tweaks since we stopped supporting HP-UX 10. The biggest headache is GUI code, but libraries such as Qt take care of that.

The only performance tweaks we do are upgrading compilers and ensuring we use efficient algorithms (i.e., not O(N^2) when O(N log N) is available.)
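
As a toy illustration of that last point (a made-up example, nothing from our actual codebase), take duplicate detection:

    // Checking a vector for duplicates two ways.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // O(N^2): compare every pair. Fine for tiny inputs, hopeless at scale.
    bool hasDuplicatesQuadratic(const std::vector<int>& v) {
        for (std::size_t i = 0; i < v.size(); ++i)
            for (std::size_t j = i + 1; j < v.size(); ++j)
                if (v[i] == v[j]) return true;
        return false;
    }

    // O(N log N): sort a copy, then one pass over adjacent elements.
    bool hasDuplicatesSorted(std::vector<int> v) {
        std::sort(v.begin(), v.end());
        return std::adjacent_find(v.begin(), v.end()) != v.end();
    }

At a million elements that's the difference between roughly 5×10^11 comparisons and a few times 10^7, with no hand tuning anywhere in sight.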

The whole notion in the first place was to expand programming to the masses by giving the appearance of the elimination of the need of specialists.
A good intention, to be sure, except for the specialists.
The problem was that a specialist with knowledge of how the hardware operates could write software that took more advantage and/or better performed on a given platform. Things like CPU instruction set options, memory alignment, etc.

There is now a resurgence of platform optimized specialization thanks to big data. Do you want your humungous data sets processed and analyzed in months or years by the average programmer, or do you want it in days and weeks by the programmer that really, really knows how to squeeze the hardware?

That's right, the demand for hand-optimizing assembly programmers is through the roof.

Do you want your big data software written in months or years, as the programmer tries to squeeze every ounce of performance from the CPU, while your competitor has had the software running for months already, having compensated for the lack of optimization by buying an extra rack of servers?

Big data is processed faster by better algorithms, not platform tweaking.

Facebook optimized their platform by JIT compiling their PHP, but the stuff was still written in PHP in the first place by "non-specialists", and the optimization was a relatively small final step. As an added bonus, they're also porting to ARM by basically re-implementing just the JIT compiler for ARM. So it's not really optimized for any particular platform; it just targets x64 by virtue of that being their primary platform.

Google use C++, Java and Python, and I'd bet there isn't any hand-optimized Google assembler in any of that mix. They kicked big data butt by using clever, scalable algorithms.

about 3 months ago
Study: Why the Moon's Far Side Looks So Different

Christian Smith Re:"Very Long Time?" (79 comments)

So.... At the risk of stating the obvious: modern man has been on this planet for around 50,000 years;

Australia has been colonised by "modern man" for longer than 50,000 years. Modern man left Africa more like 100,000 years ago, and if you lifted one of those babies out and plonked him in the "modern world", no one would notice the difference.

Anatomically modern humans are more like 200,000 years old, and I dare speculate that their predecessors gazed at the stars and moon.

about 4 months ago
Samsung Release First SSD With 3D NAND

Christian Smith Re:USD/GB? (85 comments)

Parent poster here. I use the 840 Pros I mentioned above on my laptop. I already have extensive caching going on, using about 12GB of my 32GB of RAM, but it still saturates the SATA bus due to the mostly random nature of the I/Os. It's basically a giant 300GB B+tree with 2MB leaf nodes and roughly a 40% insertion, 40% lookup and 20% deletion ratio.

Wouldn't something like this or another enterprise drive be a better match for you?

about 4 months ago
HP Unveils 'The Machine,' a New Computer Architecture

Christian Smith Re:Run a completely new OS? (257 comments)

Everything should be by reference. Copying crap all over is bullshit.

No.

How do you atomically validate and use data that is passed by reference? You might validate the data, then use it, but in between the source of the data might change it in nefarious ways, leaving you open to a time-of-check-to-time-of-use (TOCTOU) attack. Some copies are unavoidable, and single versus multiple address spaces makes no difference whatsoever here.
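
A minimal sketch of the race (toy code, not any real OS's API; Request and both handlers are made-up names):

    #include <cstring>

    struct Request { unsigned len; char payload[256]; };

    // Unsafe: len is read once for the check and again for the copy. If the
    // sender still owns 'shared', it can grow len between the two reads.
    void handle_unsafe(const Request* shared, char* out) {
        if (shared->len <= sizeof(shared->payload))
            std::memcpy(out, shared->payload, shared->len);  // len re-read here
    }

    // Safer: snapshot into private memory first, then validate and use only
    // the snapshot. The copy is the price of making check-and-use atomic.
    void handle_safe(const Request* shared, char* out) {
        Request local;
        std::memcpy(&local, shared, sizeof(local));
        if (local.len <= sizeof(local.payload))
            std::memcpy(out, local.payload, local.len);
    }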

about 5 months ago
HP Unveils 'The Machine,' a New Computer Architecture

Christian Smith Re:Run a completely new OS? (257 comments)

As I pointed out above, go check out the design specs for OS/400 (System i). It's got a flat address space and was one of, if not the first mid-range system to achieve C2 certification. But I suppose you're talking about a flat address space with an open-source system - you're probably right.

OS/400 is a bit different in that programs are not native to the CPU, and are compiled into native code at installation time. In that sense, OS/400 can enforce security and separation statically at compile time, and so doesn't need isolated address spaces.

For native processes requiring any semblance of isolation, processes would have to be tagged to determine which addresses in a flat address space they can access, which implies some sort of segmentation or page tagging; and once we're validating page accesses anyway, we might as well have a full MMU.
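
To see why, here's a toy model of per-process page tags (purely illustrative; real hardware does this lookup in the MMU via page tables, cached in the TLB):

    #include <cstdint>
    #include <unordered_map>

    constexpr int kPageShift = 12;  // 4 KiB pages

    // Which process may touch each page of the flat address space.
    std::unordered_map<std::uint64_t, int> page_owner;

    // Every single load/store would need a check like this...
    bool may_access(int pid, std::uint64_t addr) {
        auto it = page_owner.find(addr >> kPageShift);
        return it != page_owner.end() && it->second == pid;
    }

...which is exactly the per-page check an MMU already does in hardware, essentially for free.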

about 5 months ago
HP Unveils 'The Machine,' a New Computer Architecture

Christian Smith Re:Run a completely new OS? (257 comments)

From what I gather, memory management, which is a large part of what an OS does, would be completely different on this architecture as there doesn't seem to be a difference between RAM and disk storage. It's basically all RAM. This eliminates the need for paging. You'd probably need a new file system, too.

Paging provides address space isolation and protection, and separation of instructions and data (unless you advocate going back to segmented memory). It won't be going away anytime soon.

A single flat address space would be a disaster for security and protection.

Still, it would make the filesystem potentially simpler, making non-mmap reads/writes basically a memcpy in kernel space.
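
Roughly what such a read path could shrink to (a hypothetical sketch; Inode and fs_read are made-up names, and real persistent memory would still need fault handling and write ordering):

    #include <algorithm>
    #include <cstddef>
    #include <cstring>

    struct Inode {
        const char* data;   // file contents live directly in the address space
        std::size_t size;
    };

    // No block layer, no page cache fill: a bounds check and a memcpy.
    std::size_t fs_read(const Inode& ino, std::size_t off,
                        void* buf, std::size_t len) {
        if (off >= ino.size) return 0;
        std::size_t n = std::min(len, ino.size - off);
        std::memcpy(buf, ino.data + off, n);
        return n;
    }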

about 5 months ago
Finally, Hi-Def Streaming Video of the ISS's View of Earth

Christian Smith Re:Useless (97 comments)

... All Japanese, although TFA is at pains to point out that there was some seemingly minor American involvement too. Are there any major camera manufacturers left in the US?

Well, it is the International Space Station, not the American Space Station (though that would have a much better initialism.)

about 6 months ago
Why Portland Should Have Kept Its Water, Urine and All

Christian Smith Re:Don't tell them that... (332 comments)

People get into high positions by rising as those above are destroyed in the public eye. Those above are destroyed in the public eye when they fail to respond to every absurd panic with equal panic and alarm. A rational leader is soon removed from power.

Or, more succinctly, shit floats!

about 6 months ago
SSD-HDD Price Gap Won't Go Away Anytime Soon

Christian Smith Re:RAID? (256 comments)

Doesn't creating a striped RAID make up for most of the performance loss from using an HDD over an SSD? At that point, isn't it more the bus or CPU that's the limiting factor?

When doing anything random-IO intensive, even an SSD won't saturate a SATA 3 link. But it'll still be orders of magnitude faster than a mechanical disk.

Consider: a 15,000 RPM enterprise disk might top out at perhaps 250 random IOPS. The lowest of low-end SSDs will beat that by perhaps 10x. So to match an entry-level SSD you might need 10+ enterprise 15K RPM disks, and once your storage gets big enough, it might actually be more cost effective to buy SSDs instead of 10x the HDDs plus the associated management and enclosure costs.
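
Where that 250 IOPS figure comes from, roughly: at 15,000 RPM the platter completes 250 revolutions per second, i.e. 4 ms per revolution, so the average rotational latency is half a revolution, 2 ms. Add a short seek of around 2 ms and each random IO costs about 4 ms, and 1000 ms / 4 ms ≈ 250 IOPS. An SSD has no moving parts to wait on, which is why even cheap ones manage thousands of random IOPS.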

In fact, in the TPC benchmarks so favoured by DB vendors, a large proportion of the cost of the rig is the massive storage required to meet the IO rates (no, I don't have a citation to hand), even though the vast majority of the storage space is never actually used (HDDs in these scenarios tend to be "short-stroked").

However, for most people a compromise probably works best: bulk storage on slow HDDs, with fast random IO (hot data, journals) on SSD. ZFS is a prime example, with the L2ARC read cache and the ZIL on SSD.

about 6 months ago
Heartbleed OpenSSL Vulnerability: A Technical Remediation

Christian Smith Re:Thank you for the mess (239 comments)

Yes, there are some people who are incapable of compiling their own software who will have to wait until the patch comes through. Those people shouldn't be managing security for a large website (or any website really, in an ideal world).

Nonsense. I'd want only vendor-supplied fixes applied, unless the vendor is so slow as to be incompetent (but then, why would you be using them?)

Why? Because user-applied fixes tend to be forgotten, and if the library isn't managed by the package system (you've uninstalled the package you're overwriting, right?), you might miss subsequent important updates.

An example from a far-from-fuckwitted user:
http://marc.info/?l=sqlite-use...

Yes, the author of the SQLite library fell prey to this very issue. Let the package manager track packages.

Of course, you could also build binary packages from source, but then that assumes the upstream source packages have been patched, or you're happy to patch the source packages yourself.

about 7 months ago
Should Microsoft Be Required To Extend Support For Windows XP?

Christian Smith Re:No. (650 comments)

Wow, you have *no* fucking idea what you're talking about, do you?

Maybe not....

Let's see... there is simply nothing equivalent to the Mandatory Integrity Control system in NT versions before 6.0 (Vista/Server 2008). You can't build that on top of the existing ACL system, because the existing ACL system didn't support anything that behaves that way.

The ACL system was a user-space component? I'm not overly familiar with how security is implemented in NT or later Windows, but doesn't the application say "I want to do this" and the response is either OK or not? The application doesn't give a hoot how that decision is implemented. It's allowed or denied.

ASLR is a major change in the way processes start and load libraries.

You're kidding, right? ASLR is a change in how the loader chooses a base address. The loader in XP already has to handle relocations in DLLs; ASLR is an extension of that.
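
You can even watch that base-address choice vary between runs with a few lines of code (Linux shown for convenience; assumes a PIE build with randomization enabled):

    #include <cstdio>
    #include <sys/mman.h>

    static int marker;  // static data also gets a randomized base in a PIE binary

    int main() {
        // Passing nullptr lets the kernel pick the mapping address;
        // ASLR is just that pick (and the image base) being randomized.
        void* p = mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        std::printf("mapping: %p  static data: %p\n",
                    p, static_cast<void*>(&marker));
        return 0;
    }

Run it twice: with ASLR on, both addresses move between runs.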

The "split token" model for UAC - where the same account can usually be a non-Admin but sometimes be an Admin without actually changing to a different user - is also completely new and wasn't possible before, because that kind of group membership used to be tied to the user's identity.

A neat feature, but again, does this require changes to the API/ABI? Or is it just implemented in the background? Anything that operates as a black box *should* be relatively simple to change.

Then there's all the tons of other stuff that changed. One good example is the removal of the global scheduler lock, which substantially improves performance on machines with multiple hardware threads when making frequent context switches (as desktop OSes often do).

Implementation detail. The Linux kernel made a similar change between 2.4 and 2.6, and it had zero impact from an API/ABI point of view.

The switch to user-mode drivers for most things - including video drivers, which were one of the primary causes of BSODs on XP - is another big deal; the video driver model of XP requires kernel-mode drivers and it was a major effort to re-architect the driver model so that the kernel could simply restart a crashed video driver.

User-mode drivers are available for WinXP too:

http://msdn.microsoft.com/en-u...

But perhaps not for video drivers. I haven't checked.

Full IPv6 support required substantial changes to the network driver interface.

What? IPv6 sends Ethernet packets differently to IPv4?

Does that also mean WinXP doesn't have full IPv6 support? Perhaps not. Can it not support IPv6? I doubt it.

The fact that the ABI hasn't changed *more* is a testament to Microsoft's backward compatibility efforts - usually in the form of leaving legacy interfaces in place for legacy code to use, but deprecating them for new code - but it has definitely changed. Leaving aside the stuff that is purely additions to the ABI, you still have things like the updated NDIS requirement causing some legacy WiFi drivers to be unable to get IP addresses, and the removal of the XP video driver model in Win8+ makes anything pre-WDDM incompatible at the binary level.

Most of the work in security patching appears to be in the user-level components: things like .NET updates, browser updates, Silverlight updates. These transcend Windows versions and don't rely on changes to driver models and the like.

Kernel level updates are usually bug fixes in things like standard drivers. Sure, a driver bug could open a privilege escalation hole, but access to the machine is required first, and that usually comes from attacking the user space components or duping the user into running a trojan.

All the stuff you listed is just fluff from a user point of view.

My personal opinion is that an OEM Windows license, which is not transferable, should be perpetual and tied only to the machine it's running on, not to a specific Windows version. If the original release goes out of support, the license should allow the installation of a supported Windows release.

about 7 months ago

Submissions

'Modern' computers 60 years old

Christian Smith writes  |  more than 6 years ago

Christian Smith writes "Stored-program computers are 60 years old on Saturday. The Small Scale Experimental Machine, or "Baby", first ran on 21 June 1948 in Manchester. While not the first computer, nor even the first programmable computer, it was the first to store its program in its own memory. Luckily, transistors have shrunk the one tonne required for this computing power to something more manageable. Full story at the BBC website"

Journals

Christian Smith has no journal entries.
