
Comments


Is LTO Tape On Its Way Out?

bored Re:Tape is dead (284 comments)

The problem is that cross-vendor replication ends up taking the form of rdiff-backup or rsnapshot. Otherwise you end up with a single point of failure when your stupid NAS vendor pushes a firmware update that corrupts all your backups.

about three weeks ago

Is LTO Tape On Its Way Out?

bored Re:Shyeah, right. (284 comments)

Yah, you are 100% right; too bad you don't work for the tape vendors.

The one little niggle is that your numbers fail to account for the price of the library, which anyone writing more than one tape's worth of data is going to want very quickly. Add in the price of a small/medium-sized autoloader and it doubles the break-even point.

Hence my own personal use of cloud backup and small portable eSATA/USB RAID arrays. Just don't drop them, and make sure to transport them in very well-padded containers.

about three weeks ago

Is LTO Tape On Its Way Out?

bored Re:its all about selling Autoloaders (284 comments)

I use LTO3-6 all the time; they are pretty much bulletproof. I can't remember the last time there was an actual hardware problem. All the tape problems I've had over the last few years have been PEBKAC, or should that be PEBKAL(ibrary).

LTO and modern tape aren't like the old formats (1990 and earlier) with dropped leaders, read errors, etc. Besides running at many hundreds of MB/sec per drive, they have layers and layers of ECC, embedded servo tracks, etc., etc.: all the reliability engineering you expect from modern hardware. Also, while your vendors will cringe if you drop them on the floor, I've actually thrown them around, cracked the cases, etc., and the data is still recoverable; heck, even fingerprints on the media are tolerated.

It's just too bad that they are dragging their feet on new generations, because it's a fantastic way to take a snapshot and bury it somewhere in case of disaster.

about three weeks ago

Is LTO Tape On Its Way Out?

bored Re:its all about selling Autoloaders (284 comments)

The T10kd, which came out over a year ago, puts 8.5TB uncompressed on a tape (reusing the previous generation's media, no less). So basically the vendors are holding LTO back for unspecified reasons: probably a vain attempt to recoup the last generation's investment, or maybe to boost sales of their "enterprise" drives (T10kd, 3592).

about three weeks ago

Multiple Manufacturers Push Hydrogen Fuel Cell Cars, But Can They Catch Tesla?

bored Re: GM will have a model come out. (293 comments)

Hydrogen can be produced from the electrolysis of water

And it is hideously inefficient, generally less than 50%, although high-temperature electrolysis (HTE) can apparently get as high as 64%:

https://en.wikipedia.org/wiki/...

And that is just one step in the chain; now you need to catalyze or pressurize it, transport it, etc.

about three weeks ago

What To Expect With Windows 9

bored Re:So what's Metro? (545 comments)

? Are you trying to tell me that GDI wasn't directly accelerated in pre-Vista, non-Win9x OSes?

Because I believe you to be strongly mistaken. From the Windows 2000 XDDM reference, "GDI Functions Implemented by Printer and Display Drivers":

http://msdn.microsoft.com/en-u...

You will notice that everything from font rendering to curve drawing to path filling (yes, with XOR) _CAN_ be implemented, although only a small subset is required. _BUT_ I would say that most were implemented by the better hardware manufacturers for the common video resolutions.

You will also notice that the documentation has been updated and now says "The functions documented in this section are implemented by printer drivers and by drivers in the Windows 2000 display driver model, but they are not implemented by drivers in the Windows Vista display driver model." This is directly noticeable in GDI benchmarks between the two OSes (especially when run on slower CPUs, or while monitoring CPU usage). There are also a fair number of YouTube videos (for example https://www.youtube.com/watch?...) of people showing things like scrolling speeds in Explorer on XP vs. Vista. 7 improved the situation slightly, but as of a few years ago the benchmarks I remember seeing were still strongly tilted in XP's favor if one monitors CPU usage during the benchmark.

about 3 months ago

What To Expect With Windows 9

bored Re:Bring back windows XP. (545 comments)

You said nearly everything I was going to say, except that it is possible to bypass the 4GB _license_ limit in XP32.

about 3 months ago

What To Expect With Windows 9

bored Re:So what's Metro? (545 comments)

Standard desktop apps were accelerated on OSes that predate Windows Vista. In fact, GDI was hardware accelerated all the way back to the Windows 3.0 days.

This is actually one of the reasons older Windows releases often feel snappier, since probably >50% of Windows applications use GDI or a toolkit built on GDI.

about 3 months ago

If Tesla Can Run Its Gigafactory On 100% Renewables, Why Can't Others?

bored For the same reason you don't have a solar panel. (444 comments)

There are dozens of reasons. Let's start with: the costs go up. It's the free market, after all; if a company could _ACTUALLY_ reduce its power bill, it would.

Second, lots (most?) of companies lean strongly toward OPEX, meaning they are already shifting their CAPEX to OPEX wherever they can, and investing in solar/wind is overwhelmingly CAPEX (or it's going to drive up their debt).

Third, most companies are busy worrying about their next product, and a long list of other issues.

I could probably list another dozen things, but I'm betting that combination covers 99% of US companies.

about 3 months ago

Fedora To Get a New Partition Manager

bored Re: So.... (170 comments)

I considered moderating you, but I think this is really a case of <whine> "C++ is haaardddd, learning it enough to understand how to plug in a new module is going to take me months. Instead I'm going to rewrite it" </whine>

Or similar bullshit by people who think "scripting" languages are appropriate for base system tools. Now you will have Python dependency hell every time you want to do something simple like repartition your disks. Oh, and is that project Python 2 or Python 3? On and on...

Frankly, it's fsking stupid, and it's another sign that Red Hat is jumping the shark.

Plus, do you really want to depend on the skills of some "leet" hacker who thinks Python is an appropriate tool for this?

about 3 months ago

Amazon's Plan To Storm the Cable Industry's Castle

bored Re:Russian revolution? (85 comments)

It will never be a real battle until Amazon starts providing last-mile services. The cable companies and the content providers (Amazon in this case) need each other too much to actually have a battle to the death.

So, much like the "blackouts" and other BS that happen once in a while, the end result is not positive for the consumer. The cable bills never go down or even stay the same; instead they go up, and both sides get to blame the other, all while making record profits for Wall Street.

Nothing will change until we start actually regulating the last-mile providers in meaningful ways. That includes a more à la carte channel selection where the _CONSUMER_ chooses which media/content providers they wish to subscribe to. I don't mind the content providers bundling things (say, National Geographic, Fox News, FX, etc. as a block); it's just that I want to be the one making the choices rather than having to give Fox money when all I want is to watch a couple of HBO channels.

about 4 months ago

Intel's Haswell-E Desktop CPU Debuts With Eight Cores, DDR4 Memory

bored Re:DDR2/3/4 (181 comments)

CAS latency hasn't been measured directly in nanoseconds for some time now. It is now measured in clock cycles.

Yah, so to compare two different sticks of RAM you have to multiply the time per cycle by the number of cycles. Which gives you (wait for it...) time!

Which the parent did, to point out that all these "new" memory technologies haven't been decreasing RAM latency much at all. RAM latency is still a _VERY_ important part of overall execution performance, particularly for single-threaded code reading RAM in unpredictable patterns. Cache misses are overwhelmingly the single largest optimization variable for modern applications.

about 4 months ago

Intel's Haswell-E Desktop CPU Debuts With Eight Cores, DDR4 Memory

bored Re:*drool* (181 comments)

Yes, and for a desktop machine probably 90% of what I do is limited by single-thread performance, which is why I haven't upgraded in a while myself.

So I do welcome faster machines; what I don't welcome is the fact that the vast majority of machines being sold today are actually _SLOWER_ than what was available a few years ago.

This happened at work: we replaced a couple of older machines that cost a fortune with a couple of newer, far less expensive ones, and the performance was actually worse.

about 4 months ago

Intel's Haswell-E Desktop CPU Debuts With Eight Cores, DDR4 Memory

bored Re:*drool* (181 comments)

There are LOTS of applications outside of gaming where more speed is appreciated.

But a lot of those applications are also runnable on networked clusters. I stopped compiling code on my desktop probably 15 years ago and haven't looked back. Buying a single machine with 32 cores and a super-fast RAID, shared between a dozen or so developers, both improves individual compile times and saves a bunch of money over buying everyone a faster desktop. Edit the code locally, save to a network share, compile remotely.

Same thing for VMs, ray tracing, transcoding, scientific computing, etc., etc.

There are still a few "workstation"-level applications, but it's questionable whether the i7 line is more appropriate in those circumstances than just buying multi-socket Xeon configurations (which provide even more cores and memory bandwidth).

All that said, don't get me wrong: I really like my single-threaded performance, which is where I think people have been sort of missing the boat for the desktop. That is, I would pick a dual-core machine over a 16-core one if the cores were even 2x as fast at single-threaded operations.

about 4 months ago

Intel's Haswell-E Desktop CPU Debuts With Eight Cores, DDR4 Memory

bored Re:*drool* (181 comments)

It's not even about optimizing the code; it's about making choices that, from the beginning, cannot result in faster code. People like to focus on the overhead of JITs, GCs, hidden object copies, etc. in many "modern" languages, but frankly, while those have an effect, the mindset they bring is a worse problem.

Modern machines can lose a lot of performance to poor memory placement/allocation in a NUMA configuration, cache-line ping-ponging, and on and on. Things that are simply not controllable if your language cannot even guarantee a consistent location for the data in question.

Lets not even talk about the horrors of HTML/javascript/CSS/AJAX/etc.

Now, all that said, a huge percentage of applications are going to be "fast enough" even if they were written in bash, running on an emulated x86, in JavaScript, in Firefox, on a $50 tablet. Simply because even the slowest thing available today has 100x the performance of the machines of 15 years ago, which somehow managed to be useful without storing all their data in the "cloud" for the NSA to peruse.

about 4 months ago

IBM Gearing Up Mega Power 8 Servers For October Launch

bored Re:Are they available in the cloud? (113 comments)

The problem is that trying to fab a processor without a foundry seems to be a big disadvantage. For IBM's mainframe business it's probably not a critical problem, as those parts aren't as performance-sensitive.

But for something like POWER, which directly competes with x86, I suspect they will have an even harder time selling their processors if they follow the AMD (or SPARC, MIPS, etc.) route. The ARM vendors seem to do fine without foundries, but the best-performing parts seem to regularly come from companies that actually have their own in-house fabs.

about 4 months ago

IBM Gearing Up Mega Power 8 Servers For October Launch

bored Re:"2-socket system" (113 comments)

If the workload naturally fits into more nodes of smaller size, it frequently makes sense to opt for the higher node count. There are of course different break points depending on judgment calls, but most places seem to think of two sockets as about the sweet spot.

That describes the problem I work on: the throughput scales pretty nicely as the machine size grows, but the costs of the larger machines grow much faster than their performance. So it is far more cost effective to ship a few two-socket machines with higher-clocked processors than to try to cram it all into one or two large machines.

But! While the throughput of the larger machines scales, their latency does not. In fact, for the latency-sensitive portions of our application we are far better off with smaller machines with faster RAM, faster-clocked CPUs, and closer I/O buses. At some points it's actually impossible to buy better latency than we get for just a couple grand in our mid-range machine.

about 4 months ago

IBM Gearing Up Mega Power 8 Servers For October Launch

bored Re:That ship has already sailed. (113 comments)

The pricing I saw a couple of months ago didn't even approach what we are paying for our machines. Sure, the machines in question _may_ have been ~30% faster, but they cost literally 4x as much.

For customers buying larger Intel-platform machines (4 sockets or more) the POWER8s are possibly competitive, but compared with the mid-range dual-socket machines it wasn't even close.

Maybe IBM has adjusted the pricing since then; they keep telling me it's going to be better than x86, but I have yet to see that for our use cases. Plus, I suspect that Intel will adjust their pricing in a few months if POWER is actually competitive; they have a habit of doing that. Just taking back the 4-socket "tax" they added a few years ago when AMD stopped being competitive will probably blow a hole in IBM's model.

about 4 months ago

IBM Gearing Up Mega Power 8 Servers For October Launch

bored Re:Are they available in the cloud? (113 comments)

If you go to IBM conferences you will find a fair amount of talk on this very topic by 3rd party vendors. There are probably a dozen vendors that want to provide AS400/iSeries cloud instances, but IBM won't let them because it violates the terms of the IBM i license which is tied to a hardware instance.

Plus, the whole software ecosystem piggybacks on the same idea (licensing is often based on machine capabilities). This means that even if you can rent an iSeries for an hour, it's likely your software vendor won't license you their application.

So, while it is entirely possible, IBM seems to be dragging their feet on the license issues, and the vendors seem to be stuck in a chicken-and-egg situation.

about 4 months ago

Linus Torvalds: 'I Still Want the Desktop'

bored Re:Nobody else seems to want it (727 comments)

Not sure what the GP actually intended, but I'm convinced that the kernel and a few thousand drivers all having to be bug-free simultaneously for any given "release" is a serious problem. Should your hardware hit a driver problem, you get to roll the dice again and hope the next version fixes it without breaking something else. Good luck, especially if you have a couple dozen different hardware configurations to contend with, and especially if any of them are not x86.

It's futile. The drivers and the kernel should be separate, and there should be a stable API, if not a full-blown ABI, between them. Linux has been evolving for ~20 years now; it's probably time to start trying to maintain some kind of actual kernel-mode API. That way the _USER_ can pick and choose the kernel and any given set of drivers independently of one another. If kernel X happens to be "good" but you need a driver newer than that kernel, you shouldn't have to upgrade to the latest buggy kernel just to get support for a more recent piece of hardware.

Android avoids this problem because the OEM spends time ensuring that the driver set for their device is working and stable before shipping it, and then the drivers are rarely upgraded for anything other than bug fixes.

about 4 months ago

