Slashdot: News for Nerds





Greenpeace: Amazon Fire Burns More Coal and Gas Than It Should

Christian Smith Re: Clever editors. (281 comments)

How far is it from Amsterdam to Luxembourg anyway?

A four-hour drive by car (359.5 km), according to Google. I'm curious to know if taking a plane is more energy efficient than a car or train.

Depends on how full the plane, train and car are. A single person in a largish car (he's a CEO, remember) probably won't beat a full short haul flight for the same distance.

Trains are among the most efficient transportation methods (hard wheels on smooth rails = low rolling resistance) but the journey may not be the fastest nor most direct.

2 days ago

Nearly 25 Years Ago, IBM Helped Save Macintosh

Christian Smith Re:Another misleading headline (236 comments)

PowerPC had good performance for several years. When the 603 and 604 were around they had better performance than x86 did. The problems started when the Pentium Pro came out. Even then it was not manufactured in enough numbers to be a real issue. Then the Pentium II came out...

No, I think it was more the Pentium 4 where Intel overtook Motorola. The PPC G4 design had started to hit up against clock speed walls, and couldn't scale the FSB up either. While NetBurst was a disaster for performance/watt, it did scale clock speed wise and had a very fast FSB and memory subsystem, and while everyone else was hovering around the sub-2GHz mark, Intel got plenty of high clock frequency practice.

Once the NetBurst FSB was moved to the P6 architecture in the form of the Pentium M, Intel had a winner on their hands that has kept them ahead of everyone else to this day.

about two weeks ago

Nearly 25 Years Ago, IBM Helped Save Macintosh

Christian Smith Re:Pairing? (236 comments)

No, this is stupid, wasteful, unoptimized software that performs like feces compared to a platform optimized piece of software.

Eh? What are you on about?

Yeah, those hand tweaked 16bit binaries performed really well on the pipelined i486 processors of the time. Really extracted all the potential out of the advances that were taking the CPU industry by storm.

In case you missed it, I was being sarcastic. "Platform optimised" (read: DOS) programs held the industry back at least a decade, and it's only after we left the 16-bit shackles^W^Wplatform optimized software behind that x86 based platforms started to reach parity with their RISC based peers.

After all, Doom was famously written in C on a NeXT cube, then ported to x86/DOS and "platform optimised" as a final step.

The myth of software portability I've heard most of my life has never borne fruit that didn't need tweaks for different platforms.

The software I write has had no tweaks since we stopped supporting HP-UX 10. The biggest headache is GUI code, but libraries such as Qt take care of that.

The only performance tweaks we do are upgrading compilers, and ensuring we use efficient algorithms (i.e., not O(N^2) when O(N log N) is available).
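To illustrate that last point, here's a minimal sketch (my own example, not from any real codebase) of the same job done both ways: detecting duplicates by comparing every pair is O(N^2), while sorting first makes duplicates adjacent and brings it down to O(N log N).

```python
def has_duplicates_quadratic(items):
    # O(N^2): compare every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_sorted(items):
    # O(N log N): after sorting, any duplicates sit next to each other.
    s = sorted(items)
    return any(a == b for a, b in zip(s, s[1:]))
```

Both give the same answer; on a few million items the difference is hours versus seconds, with no platform-specific tweaking involved.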

The whole notion in the first place was to expand programming to the masses by giving the appearance of eliminating the need for specialists.
A good intention, to be sure, except for the specialists.
The problem was that a specialist with knowledge of how the hardware operates could write software that took more advantage of, and/or performed better on, a given platform. Things like CPU instruction set options, memory alignment, etc.

There is now a resurgence of platform optimized specialization thanks to big data. Do you want your humongous data sets processed and analyzed in months or years by the average programmer, or do you want it in days and weeks by the programmer that really, really knows how to squeeze the hardware?

That's right, the demand for hand optimizing assembly programmers is through the roof.

Do you want your big data software written in months or years, as the programmer tries to squeeze every ounce of performance from the CPU, while your competitor has had the software running already for months and compensated for the lack of optimization by buying an extra rack of servers?

Big data is processed faster by better algorithms, not platform tweaking.

Facebook optimized their platform by JIT compiling their PHP, but the stuff was still written in PHP in the first place by "non-specialists" and the optimization was a relatively small final step. As an added bonus, they're also porting to ARM by basically re-implementing just the JIT compiler for ARM. So not really optimized for any particular platform, just x64 by virtue of being their primary target platform.

Google use C++, Java and Python, and I'd bet there isn't any Google hand optimized assembler in any of that mix. They kicked big data butt by using clever, scalable algorithms.

about two weeks ago

Study: Why the Moon's Far Side Looks So Different

Christian Smith Re:"Very Long Time?" (79 comments)

So.... At the risk of stating the obvious: modern man has been on this planet for around 50,000 years;

Australia has been colonised by "modern man" for longer than 50,000 years. Modern man left Africa more like 100,000 years ago, and if you lifted one of those babies out and plonked him in the "modern world" no one would notice the difference.

Anatomically modern humans are more like 200,000 years old, and I dare speculate that their predecessors gazed at the stars and moon.

about three weeks ago

Samsung Releases First SSD With 3D NAND

Christian Smith Re:USD/GB? (85 comments)

Parent poster here. I use the 840 Pros I mentioned above on my laptop. I already have extensive caching going on, with about 12GB of my 32GB of RAM, but it still saturates the SATA bus due to the mostly random nature of the I/Os. It's basically a giant 300GB B+tree with 2MB leaf nodes and about a 40% insertion, 40% lookup and 20% deletion ratio.

Wouldn't something like this or another enterprise drive be a better match for you?

about a month ago

HP Unveils 'The Machine,' a New Computer Architecture

Christian Smith Re:Run a completely new OS? (257 comments)

Everything should be by reference. Copying crap all over is bullshit.


How do you atomically validate and use data that is passed by reference? You might validate the data, then use it, but in between, the source of the data might change it in nefarious ways, leaving you open to a timing based security attack. Some copies are unavoidable, and single or multiple address spaces don't make any difference whatsoever in this case.
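A minimal sketch of the copy-then-validate discipline (the function name and size limit here are hypothetical, not from any particular OS): snapshot the shared data first, then validate and use only the snapshot, so the sender can't change it between the check and the use.

```python
import copy

def handle_message(shared_buf):
    # Copy before validating: checking shared_buf in place would let the
    # sender mutate it between the check and the use (a TOCTOU race).
    buf = copy.deepcopy(shared_buf)
    if not (0 < len(buf) <= 4096):   # validate the private copy...
        raise ValueError("bad message length")
    return bytes(buf)                # ...and only ever use that copy
```

The deep copy is exactly the "unavoidable copy" mentioned above; a shared flat address space doesn't remove the need for it.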

about a month and a half ago

HP Unveils 'The Machine,' a New Computer Architecture

Christian Smith Re:Run a completely new OS? (257 comments)

As I pointed out above, go check out the design specs for OS/400 (System i). It's got a flat address space and was one of, if not the first mid-range system to achieve C2 certification. But I suppose you're talking about a flat address space with an open-source system - you're probably right.

OS/400 is a bit different in that programs are not native to the CPU, and are compiled into native code at installation time. In that sense, OS/400 can enforce security and separation statically at compile time, and so doesn't need isolated address spaces.

For native processes requiring any semblance of isolation, processes would have to be tagged to determine which addresses in a flat address space they can access, which implies some sort of segmentation or page tagging, and once we're validating page accesses anyway, we might as well have a full MMU as well.

about a month and a half ago

HP Unveils 'The Machine,' a New Computer Architecture

Christian Smith Re:Run a completely new OS? (257 comments)

From what I gather, memory management, which is a large part of what an OS does, would be completely different on this architecture as there doesn't seem to be a difference between RAM and disk storage. It's basically all RAM. This eliminates the need for paging. You'd probably need a new file system, too.

Paging provides address space isolation and protection, and separation of instruction and data (unless you advocate going back to segmented memory). It won't be going away anytime soon.

A single flat address space would be a disaster for security and protection.

Still, it would make the filesystem potentially simpler, making non-mmap reads/writes basically a memcpy in kernel space.

about a month and a half ago

Finally, Hi-Def Streaming Video of the ISS's View of Earth

Christian Smith Re:Useless (97 comments)

... All Japanese, although TFA is at pains to point out that there was some seemingly minor American involvement too. Are there any major camera manufacturers left in the US?

Well, it is the International Space Station, not the American Space Station (though that would have a much better initialism.)

about 3 months ago

Why Portland Should Have Kept Its Water, Urine and All

Christian Smith Re:Don't tell them that... (332 comments)

People get into high positions by rising as those above are destroyed in the public eye. Those above are destroyed in the public eye when they fail to respond to every absurd panic with equal panic and alarm. A rational leader is soon removed from power.

Or, more succinctly, shit floats!

about 3 months ago

SSD-HDD Price Gap Won't Go Away Anytime Soon

Christian Smith Re:RAID? (256 comments)

Doesn't creating a striped RAID make up most of the performance issues from using a HDD over a SSD? At that point, it's more the bus or CPU that's a limiting factor?

When doing anything random IO intensive, even an SSD won't saturate a SATA 3 link. But it'd still be orders of magnitude faster than a mechanical disk.

Consider: a 15,000 RPM enterprise disk might top out at perhaps 250 random IOPS. The lowest of low-end SSDs will beat that by perhaps 10x. So to get the equivalent performance to an entry level SSD, you might require 10+ enterprise 15K RPM disks. Once your SSD storage becomes big enough, it might actually be more cost effective to have SSD instead of the 10x HDD and associated management and enclosure costs.
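Back-of-the-envelope version of that comparison (the IOPS figures are the rough ones quoted above, not benchmarks):

```python
hdd_iops = 250            # ~15K RPM enterprise disk, random IOPS
ssd_iops = 10 * hdd_iops  # even a low-end SSD: roughly 10x that
# HDDs needed to match one entry-level SSD (ceiling division)
disks_needed = -(-ssd_iops // hdd_iops)
print(disks_needed)       # 10
```

Ten spindles, plus the enclosure, power, and management overhead, just to match the cheapest SSD on random IO.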

In fact, in the TPC benchmarks so favoured by DB vendors, a large proportion of the costs of the rig are the massive storage requirements to fulfill the IO rates required (no, I don't have a citation off hand) even if the vast majority of the storage space is not actually used (HDD used in these scenarios tend to be "short stroked").

However, for most people a compromise probably works best. Bulk storage on slow HDD, with fast random IO like hot data and journals on SSD. ZFS is a prime example, using L2ARC and ZIL on SSD for hot data.

about 3 months ago

Heartbleed OpenSSL Vulnerability: A Technical Remediation

Christian Smith Re:Thank you for the mess (239 comments)

Yes, there are some people who are incapable of compiling their own software who will have to wait until the patch comes through. Those people shouldn't be managing security for a large website (or any website really, in an ideal world).

Nonsense. I'd want only vendor supplied fixes applied, unless the vendor is so slow as to be incompetent (but then, why would you be using them?)

Why? Because user applied fixes tend to be forgotten, and if the library isn't managed by the package system (you've uninstalled the package you're overwriting, right?) you might miss subsequent important updates.

An example from a far from fuckwitted user:

Yes, the author of the SQLite library fell prey to this very issue. Let the package manager track packages.

Of course, you could also build binary packages from source, but then that assumes the upstream source packages have been patched, or you're happy to patch the source packages yourself.

about 4 months ago

Should Microsoft Be Required To Extend Support For Windows XP?

Christian Smith Re:No. (650 comments)

Wow, you have *no* fucking idea what you're talking about, do you?

Maybe not....

Let's see... there is simply nothing equivalent to the Mandatory Integrity Control system in NT versions before 6.0 (Vista/Server 2008). You can't build that on top of the existing ACL system, because the existing ACL system didn't support anything that behaves that way.

The ACL system was a user space component? I'm not overly familiar with how security is implemented in NT and later Windows, but doesn't the application say "I want to do this" and the response is either OK or not? The application doesn't give a hoot how that decision is implemented. It's allowed or denied.

ASLR is a major change in the way processes start and load libraries.

You're kidding, right? ASLR is a change in how the loader chooses a base address. The loader in XP already has to be able to handle relocations in DLLs. ASLR is an extension of that.

The "split token" model for UAC - where the same account can usually be a non-Admin but sometimes be an Admin without actually changing to a different user - is also completely new and wasn't possible before, because that kind of group membership used to be tied to the user's identity.

A neat feature, but again, does this require changes in the API/ABI? Or is it just implemented in the background? Anything that operates as a black box *should* be relatively simple to change.

Then there's all the tons of other stuff that changed. One good example is the removal of the global scheduler lock, which substantially improves performance on machines with multiple hardware threads when making frequent context switches (as desktop OSes often do).

Implementation detail. The Linux kernel did a similar change between 2.4 and 2.6, but it had zero impact from an API/ABI point of view.

The switch to user-mode drivers for most things - including video drivers, which were one of the primary causes of BSODs on XP - is another big deal; the video driver model of XP requires kernel-mode drivers and it was a major effort to re-architect the driver model so that the kernel could simply restart a crashed video driver.

User mode drivers are available for WinXP too:

But perhaps not for video drivers. I haven't checked.

Full IPv6 support required substantial changes to the network driver interface.

What? IPv6 sends ethernet packets differently to IPv4 packets?

Does that also mean WinXP doesn't have full IPv6 support? Perhaps not. Can it not support IPv6? I doubt it.

The fact that the ABI hasn't changed *more* is a testament to Microsoft's backward compatibility efforts - usually in the form of leaving legacy interfaces in place for legacy code to use, but deprecating them for new code - but it has definitely changed. Leaving aside the stuff that is purely additions to the ABI, you still have things like the updated NDIS requirement causing some legacy WiFi drivers to be unable to get IP addresses, and the removal of the XP video driver model in Win8+ makes anything pre-WDDM incompatible at the binary level.

Most of the work in security patching appears to be in the user level components. Things like .NET updates, browser updates, Silverlight updates. These transcend the Windows versions, and don't rely on any changes in driver models and such like.

Kernel level updates are usually bug fixes in things like standard drivers. Sure, a driver bug could open a privilege escalation hole, but access to the machine is required first, and that usually comes from attacking the user space components or duping the user into running a trojan.

All the stuff you listed is just fluff from a user point of view.

My personal opinion is that an OEM Windows license, which is not transferable, should be perpetual and not tied to a specific Windows version, but tied only to the machine it's running on. If the original release goes out of support, then it should allow the installation of a supported Windows release.

about 4 months ago

Should Microsoft Be Required To Extend Support For Windows XP?

Christian Smith Re:No. (650 comments)

Not all reasons to upgrade are obvious to the user. Like security. A lot has happened to the underlying OS since XP. The only way to back port that to XP is to... upgrade XP to use the Vista/7/8 kernel. Which would introduce the same compatibility problems the users may want to avoid.

I'd be surprised if much has changed beyond the shell. Win Vista/7/8 use basically the same kernel as XP. Sure, it's had some enhancements to support newer tech, but nothing ground breaking. It's more like the changes between Linux 2.4 and 2.6+. Sure, there are some implementation changes, but the ABI is still the same, and the Linux userspace from 2001 would most likely work unchanged on a Linux 3.14 kernel.

The biggest change I can see between XP and subsequent Windows is in security policy. But that's policy built on top of the same underlying mechanisms.

about 4 months ago

Samsung SSD 840 EVO MSATA Tested

Christian Smith Re:I would like to know (76 comments)

As others have said, it must help out or Samsung's engineers wouldn't have put it there, so what's it for?

The embedded controller is just a computer, like any other computer. In the case of the Samsung controller, it's got 3 ARM processors (citation needed, blah blah blah, google it): one for SATA IO, and one each for reading and writing to FLASH (I think). The controller needs firmware to make it do anything useful, and it needs RAM for its working data and firmware code.

So the firmware just acts like a regular, multi-threaded embedded program. It probably contains an embedded general purpose kernel, which manages and dispatches the threads, and the threads cooperate to manage all the translations to and from FLASH and the SATA bus.

All this translation requires large lookup tables, to map from SATA LBA to FLASH blocks. If it's mapping at a 4K sector level, a 1TB drive would require ~250 million lookup entries, which is a lot of data, requiring a lot of RAM. I'd imagine the lookup tables are trie based, with the table backed on FLASH, and demand loaded. Still, the working set of lookup data will be many 10s or 100s of megabytes.
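A rough sizing sketch for such a mapping table (the 4-byte entry size is my assumption; real flash translation layers vary):

```python
drive_bytes = 10**12             # a 1TB (decimal) drive
map_granularity = 4096           # mapping at the 4K sector level
entries = drive_bytes // map_granularity
print(entries)                   # 244140625 -- roughly 250 million

entry_bytes = 4                  # assume 4 bytes per FLASH address
table_mib = entries * entry_bytes // 2**20
print(table_mib)                 # ~931 MiB if the whole table were in RAM
```

Hence the demand-loaded, FLASH-backed trie: only the hot part of the table needs to live in the controller's limited RAM.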

Note, the early SATA controllers were probably single-core controllers, and suffered from stuttering as a result of writes hogging the CPU while erasing and writing to FLASH.

about 4 months ago

Invention Makes Citibikes Electric

Christian Smith Re:What is wrong with pedals? (166 comments)

I frequently do 30-50 miles in the dead of summer in Phoenix. The temperature is literally 110 degrees Fahrenheit during those times. It really isn't as bad as it sounds; when you're cycling you've got the wind to keep you cool.

Isn't somewhere like Phoenix as flat as a pancake? I'd take heat over hills any day.

about 5 months ago

Rolls Royce Developing Drone Cargo Ships

Christian Smith Re:until someone hacks it (216 comments)

Support pylons of the Golden Gate Bridge, have several of them collide at the entrance to the Long Beach shipping terminal, blocking access for a few weeks, run over the deep water loading ports for crude oil. Run over a deep water drilling rig. I can think of any number of terrorist activities one could do. And remember, time and time again, no one really thinks of security until that "oh s___, we've been hacked" moment.

Except they could already do that with a manned vessel if it was at all feasible.

about 5 months ago

Ford Dumping Windows For QNX In New Vehicles

Christian Smith Re:Having used both (314 comments)

The SYNC system has nothing to do with the powertrain. It's only used for infotainment and climate control.

So you're saying it'll still have a shit transmission.

Ford seems to have their priorities seriously screwed up if that is the case. Shouldn't they make sure the powertrain works before working on the infotainment system?

Just curious, did you lift your foot from the gas while changing gear? I've never driven a (semi-)automatic gearbox car, and wondered if keeping the foot to the floor, gas wise, affected its behaviour on when to change gear.

Personally, I'm happy driving stick, clutch and all. No computer to blame for bad gear changes.

about 5 months ago

Hubble Telescope Snaps Images of Tarantula Nebula

Christian Smith Feel small? (32 comments)

I do!

about 6 months ago



'Modern' computers 60 years old

Christian Smith Christian Smith writes  |  more than 6 years ago

Christian Smith writes "Stored program computers are 60 years old on Saturday. The Small Scale Experimental Machine, or "Baby", first ran on the 21st June 1948 in Manchester. While not the first computer, nor even the first programmable computer, it was the first that stored its program in its own memory. Luckily, transistors shrank the one tonne required for this computing power to something more manageable. Full story at the BBC website"


Christian Smith has no journal entries.
