
Comments


Touring a Carnival Cruise Simulator: 210 Degrees of GeForce-Powered Projection

unrtst Re:882 foot Titanic (42 comments)

At 882 feet, the modern 1100 foot super cruise ship doesn't kill it.

You don't have to read the article, but if you just glance at it, one of the first things you'll see is a rendering of one of their cruise ships next to the Titanic. I'd agree with the author: "Compared to a modern cruise ship, however, Titanic was a pipsqueak."

2 days ago

Unity 8 Will Bring 'Pure' Linux Experience To Mobile Devices

unrtst Re:Ugh (125 comments)

Laptops with touchscreens make perfect sense.

Some of us... Some of us... Some of us...

You're not even trying to pretend there's a majority who feel that way, let alone that the group who does want a touchscreen is too small to make supporting one viable.

Some of us don't like holding our arm out in mid-air just to move the pointer and to select things.

I know of no desktop nor laptop with a touchscreen that lacks a secondary pointing device. Sure, you could make one that way, but you'd have to do so purposefully. Augment your pointer usage with a touchscreen and it can be very useful, especially on a laptop.

On a laptop sans touchscreen, there are many times I just want to jab at the screen to hit some button or notification, rather than have to move my mouse around to get to it (via crappy touchpad or nub). Even if you have a mouse attached, a quick jab at the screen right where the button is will be faster than moving your hand to the mouse, moving it around, clicking, and then coming back to the keyboard. It's perfectly workable to live without a touchscreen, but let's not pretend that having one is a negative.

AFAICT, marks on the screen are the only real downside to adding a touchscreen. I don't eat cheetos while typing, so it's not much of a problem for me, and certainly nothing that a quick wipe down won't cure/mitigate.

That said, it'd be useless on my desktop because, as you noted, it's too far away. Dual 30" monitors aren't really the norm either though.
On a tablet or phone, I think we're all fine with the touchscreen (though I still prefer a hardware keyboard... I wish more phone models had them).

about two weeks ago

Why Elon Musk's Batteries Frighten Electric Companies

unrtst Re:Are they really that scared? (460 comments)

I wonder about the value of capturing power during off peak hours and providing it back during peak hours.

Hot damn that sounds like a great idea!
While I'm confident it wouldn't be profitable - between the price of the batteries, their round-trip efficiency loss, and a day/night rate spread too small to cover both - it's still a fun thought (a toy sketch of the arithmetic is at the end of this comment).

Your load avg would look crazy, especially if you had solar during the day feeding excess back to the grid - massive negative usage during the day, massive usage at night... ramp it up as high as you can.

If it were profitable, the gigafactory itself could do that.
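
For fun, here's a toy sketch of the arbitrage arithmetic. Every figure in it is a made-up assumption (battery price, capacity, efficiency, rates), so treat it as illustration only, not a real quote:

    # Toy arbitrage model: charge off-peak, discharge at peak.
    # Every figure is a hypothetical assumption, not a real price or rate.
    battery_cost = 3_500      # $ for a home battery pack (assumed)
    kwh_charged = 10          # energy drawn from the grid per cycle, in kWh (assumed)
    round_trip_eff = 0.90     # fraction of stored energy you get back (assumed)
    off_peak_rate = 0.08      # $/kWh paid to charge at night (assumed)
    peak_rate = 0.16          # $/kWh avoided during the day (assumed)

    saved_per_cycle = kwh_charged * (round_trip_eff * peak_rate - off_peak_rate)
    print(f"${saved_per_cycle:.2f} saved per full cycle")
    print(f"~{battery_cost / saved_per_cycle:,.0f} daily cycles to pay off the battery")

With those made-up numbers it's roughly fifteen years of daily cycling just to recover the battery cost, before counting the fact that the pack degrades along the way.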

about two weeks ago

Consumer-Grade SSDs Survive Two Petabytes of Writes

unrtst Re:Most people write far less. (125 comments)

What's the math to be applied to LBAs? How big is an LBA? A 512 byte sector?

My nearly 4 year old Samsung shows just under 2 TB written if I multiply the SMART-provided Total LBAs written against a 512 byte block.

Correct.
Though there could be differences depending on the model of drive you have, it's very likely 512B LBAs:
http://www.samsung.com/global/...

Since you said you have a Samsung, you can run Samsung Magician 4.0 and it'll do the conversions for you (assuming you're running Windows or Mac; AFAIK, Magician isn't available for Linux).
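
If you'd rather do the conversion by hand, here's a minimal sketch; the LBA count below is a hypothetical SMART value, and the 512-byte block size is the assumption discussed above:

    # Convert the SMART "Total LBAs Written" attribute to terabytes,
    # assuming 512-byte logical blocks (check your drive's spec sheet).
    total_lbas_written = 3_859_270_000   # hypothetical raw SMART value
    lba_size_bytes = 512                 # assumed logical block size

    bytes_written = total_lbas_written * lba_size_bytes
    print(f"{bytes_written / 1e12:.2f} TB written (decimal)")
    print(f"{bytes_written / 2**40:.2f} TiB written (binary)")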

about two weeks ago

Consumer-Grade SSDs Survive Two Petabytes of Writes

unrtst Re:HDD endurance? (125 comments)

Let's do some math here, shall we? At 200 MB/s, you can overwrite a 1 TB drive in an hour. 1 PB you can reach in a month. The hard drives are a few times larger than the SSDs, so you'd need ~ 10 TB instead of 2, which means 10 months.

Include all the actual variables, and you might get a usable answer. Just blowing data on the disk isn't the only thing this is doing (AFAIK). You've gotta detect errors, so you've gotta read back the data and validate it. This page goes through their full testing methodology (hint: they're using Anvil, a static file collection that includes a copy of a windows install, some applications, some movies, and some incompressible data, among other things, and every file has its md5sum checked after writing): http://techreport.com/review/2...

An easier calculation would be to scale their timelines to the HDD stats. For example:
Samsung 840 Pro sequential read/write: 540MB/s / 520MB/s (390MB/s for 128GB)
WD Caviar Black: about 180MB/s read/write (ex. http://www.storagereview.com/w...)
Rough math: 520 / 180 = 2.89, so it'll take 2.89 times as long to run the test on a drive of the same size.

Samsung 840 Pro size in the article: 256GB
Assuming a WD Caviar Black 1TB, that's 4x the size.
2.89 * 4 = 11.55, so it would take that many times as long to do the same operations they've done thus far.

Their test has been running for over a year. So it'd take (roughly) over 11.5 years to do the same on the WD Caviar Black. I understand that's a very very rough estimate, but I think it's MUCH closer to the ballpark than 10 months!
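
The same rough scaling in a few lines, using the spec-sheet figures quoted above (ballpark only; it ignores the read-back/verification overhead and everything else in their methodology):

    # Ballpark: how much longer the same endurance test would take on the HDD.
    ssd_write_mb_s = 520    # Samsung 840 Pro 256GB sequential write (from above)
    hdd_write_mb_s = 180    # WD Caviar Black rough sequential figure (from above)
    size_factor = 4         # 1TB HDD vs 256GB SSD, rounded as in the text

    speed_factor = ssd_write_mb_s / hdd_write_mb_s   # ~2.89x slower per byte
    total_factor = speed_factor * size_factor        # ~11.6x longer overall

    test_years_so_far = 1.0                          # "over a year" of testing
    print(f"~{total_factor:.1f}x longer, i.e. roughly "
          f"{test_years_so_far * total_factor:.1f}+ years on the HDD")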

My bet: the WD will be dead long before that time. I've had drives last longer than that, but they got VERY VERY little use and were simply powered on all the time. I've had some that lasted longer than that and got a fair bit of use (ex. db servers), but they were never filled to capacity, they were enterprise drives, and some of their neighbors did die (RAID).

about two weeks ago

Gangnam Style Surpasses YouTube's 32-bit View Counter

unrtst Re:32 bit signed integer, obviously (164 comments)

If they'd used a 32 bit unsigned integer they might have bought another 6 months or something.

You could say the same of the unix time_t problem, which is a signed 32-bit int. If it were unsigned, it'd go to 2106 instead of 2038. Either way, that's not really the solution. The solution, as YouTube has done, is to move to a 64-bit int.

Personally, I'm amazed at the hit count!
There are 2^31 seconds between 1970-01-01 and 2038-01-19.
If this video was watched once every second since 1970, it'd still have 24 years before it rolled over that counter.
By comparison, it hasn't been available very long. How many views a second is that thing getting? On average, more than 28 hits a second!!!

28 hits/sec may not seem outrageous for a very popular file on a very popular site, but that's averaged since July 2012 until today. That, IMO, is nuts.
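
For anyone who wants to check the back-of-envelope numbers, here's the arithmetic; the release and rollover dates below are approximations, not exact figures:

    # Average views/second needed to overflow a signed 32-bit counter
    # between the video's release and when the rollover was reported.
    from datetime import datetime

    views = 2**31                        # signed 32-bit overflow point
    released = datetime(2012, 7, 15)     # approximate release date (assumption)
    rolled_over = datetime(2014, 12, 1)  # approximate rollover date (assumption)

    elapsed_s = (rolled_over - released).total_seconds()
    print(f"{views / elapsed_s:.1f} views per second, on average")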

about two weeks ago

Chromebooks Overtake iPads In US Education Market

unrtst Re:simple (193 comments)

There are certainly costs associated with ruggedizing things; but those ruggedization costs apply to any laptop(so if it's more expensive than a chromebook now the ruggedized version is going to be more expensive than the ruggedized chromebook);

The ruggedizing is, essentially, a flat cost. As such, the price increase relative to the cost of the original device would be much greater on a chromebook. E.g.:
$200 chromebook + $200 to ruggedize it = 2x the base cost, or 100% more
$900 laptop + $200 to ruggedize it = 1.22x the base cost, or 22% more

When you're getting a bunch of them, that significantly changes the number of them you can get.
$20,000 = 100x $200 chromebooks ... or = 50 ruggedized $400 chromebooks
$90,000 = 100x $900 laptops ... or = 82 ruggedized $1100 laptops

This is the key point I think the others were making. You'll still get broken ruggedized ones, but fewer of them. How many of the cheap model need to break before it's worth getting the ruggedized ones? With chromebooks being so cheap, there would have to be a phenomenal number of broken ones before you'd break even.
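
A tiny break-even sketch of that argument, using the hypothetical prices above (and deliberately ignoring that ruggedized units can break too):

    # Hypothetical break-even: how many cheap units must break before
    # buying ruggedized ones would have been the better deal?
    budget = 20_000
    cheap_price = 200        # plain chromebook (hypothetical)
    rugged_price = 400       # ruggedized chromebook (hypothetical)

    cheap_units = budget // cheap_price    # 100
    rugged_units = budget // rugged_price  # 50

    # The ruggedized buy only wins if breakage leaves you with fewer
    # working cheap units than you'd have had ruggedized ones.
    print(f"More than {cheap_units - rugged_units} of {cheap_units} "
          f"cheap units would have to break")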

Car analogy... it'd be like getting full coverage insurance on a used 1986 honda civic that you own outright. It'd be cheaper to pay for a new one with cash than deal with the deductible + high rate when they total it!

about two weeks ago

A Mismatch Between Wikimedia's Pledge Drive and Its Cash On Hand?

unrtst Re:I don't think you know what that word means (274 comments)

To quote sribe, "It would only be circular if in turn the higher reserves led to higher expenses"

The theory being suggested is one of linear growth, not circular. As they grow (or as time moves forward), they continue to spend more on stuff (staff, operations, management, etc). This has an effect on anything tied to revenue: larger investments should return proportionally larger returns; taxes, if any, will increase proportionately; and, yes, if they want one year's worth of reserve cash, that value will increase in proportion to spending.

That is not circular. Perhaps if someone proposed a theory that the higher reserves encourage higher spending, and had any backing for that claim, then there'd be a circular condition, but that is not what was stated.

about two weeks ago

CoreOS Announces Competitor To Docker

unrtst Re:Where Docker failed (71 comments)

Please correct me if I'm wrong (I've read loads of docs on Docker, but have not used it yet).
From what I've read, the problem you describe is not a technical limitation/implementation detail of Docker, but is simply a symptom of how it is generally being used.

Only, the implementation brings with it the same flawed approach as Solaris Zones. Do we really need a full OS image running in a container? ...

I think what Rocket really represents is a way to do containers right. Containers should run a single process. We shouldn't look at containers as a more efficient VM. We should see containers as a way to increase security and reduce overhead. ...

From what I've read, a Docker container can have as few things in it as you want (or as much as you want, up to everything but the kernel). If you were doing an Apache container, you might put apache, mod_ssl, the ssl libs, mod_php, perl, libperl, mod_perl, etc in there, but you don't have to put glibc or other libs in there. It'll use the host's libs and apps as needed/configured. As far as I can tell, you don't even have to put all that in there... you could leave openssl, libperl, etc outside on the host and configure the container to use the host's versions. Again, please correct me if I'm mistaken.

I get the feeling that many are embracing Docker as a way to distribute containers for certain tasks. As such, they include everything the services within the container need, so that it will run on any host with Docker support, which makes them easier to distribute (somewhat like VMware's Virtual Appliance exchange). The fact that it can function this way does not mean (AFAICT) that it must function this way.

Docker containers can also stack, where one container may just be a diff on top of one (or more?) other containers. There's a whole lot of flexibility. That flexibility does make it somewhat difficult to approach, and I think the result is what we're seeing - containers being distributed that tend to look a lot more like virtual appliances with a nearly complete OS stack included. AFAIK, that's not the only way to do it; it's just the best fit when you're offering a container for anyone to download and use.

Personally, I'd like to see some examples where a normal OS install is altered to use containers in all the places that chroots are currently used, and do so with a similarly light-handed approach. For example, see default bind installs, where a chroot is often set up by default by the distro. I imagine it would be quite trivial to stick bind in a Docker/LXC/Rocket container with almost no OS parts included. I think this is the sort of solution you were referring to as "do containers right". Can this not be done with Docker today? If not, why not?

about two weeks ago

Firefox 34 Arrives With Video Chat, Yahoo Search As Default

unrtst Re:video chat (237 comments)

What I don't get are these two comments directly from the first article linked:

1. "Not only do you not have to sign up for a service, but you also don’t need the same software or hardware as the person you want to call, since WebRTC is compatible with Chrome and Opera browsers as well."

2. "... by sharing the generated callback link. To call you, they’ll naturally need Firefox 34."

So which is it? Something's wrong there.

As others have said, this should be an add-on. That said, I doubt it introduces much of any bloat when you're not using it (at least I really really hope it doesn't do anything at all unless you use it).

about two weeks ago

Shale: Good For Gas, Oil...and Nuclear Waste Disposal?

unrtst Re: the best use (138 comments)

This thread is surprisingly short, and mostly has people either agreeing that fast breeders or something similar are a great solution (maybe with some bickering on the finer points), or off topic arguing about the total investments made in various tech. FWIW, I'm 100% on board with reprocessing. I can only guess that either:
a) most people are also fine with this, so no need to post to agree... let's just post in places where we can argue
b) the proliferation risks make the conversation untouchable to them

This seems to happen on every nuclear thread on slashdot. I really really really don't understand why the US doesn't just set up one plant to reprocess waste. I'm very much against burying all the existing waste anywhere (Yucca, shale, or any other hole). As it is, it simply has too long a life for me to accept that it'll be fine - we're really bad at thinking on such scales. If it were reprocessed first into something with a MUCH MUCH shorter half life, then I'd be fine burying that stuff - I think we might be able to handle managing a big dump of material for 100-200 years, though that's still a stretch.

The point I'm getting at is, if we had a fast breeder reprocessing all our nuclear waste, I think many of the other concerns about waste would just about disappear. The topic would change to protecting the much smaller amount of weapons-grade waste. Since it's small and in one place, I think that's not only feasible, but much easier than dealing with protection and maintenance of more than a hundred piles of nasty waste spread all around the country. I'm not a nuclear engineer, but it seems like a no-brainer to me, and the only argument I've heard against it is the nuclear proliferation laws and concerns regarding plutonium. To those, I say WTF - that's very minor red tape in comparison to things like the Yucca Mountain debate.

about three weeks ago

Ask Slashdot: Best Drone For $100-$150?

unrtst Re:Gaaa! (116 comments)

Drone:
        - Can fly out of line of sight.
        - Transmits video in real time.
        - Can accept high level commands, such as position and heading, and handles pitch/roll/yaw itself.
        - You control it by looking at the video.

Check out the Hubsan H107D FPV X4. It's in a bit of a grey area regarding your definitions. It can fly out of line of sight and transmits video in real time, and it's only about $150. It doesn't have GPS and can not be programmed, so it fails the full-out drone spec. I'd still call it an RC toy, but if you've got FPV video it's still a great stepping stone before spending buttloads on something high-end!

about three weeks ago

Is LTO Tape On Its Way Out?

unrtst Re: Value for money (284 comments)

Now backup that dataset weekly for two months, tape wins easily. Even without the need for archives a minimal useful backup strategy favors tape.

This can be done if you plan it correctly for the medium of choice. If you're doing full snapshot backups weekly, you're rigging the game for tape to win.

Just one example that can work and achieve similar or greater levels of data integrity and more fine-grained backups (ex. daily):

* BackupPC server
* two external raid arrays (cheap-ish USB or ESATA things with JBOD and software raid)
* take one offsite
* do backups on the other
* swap periodically (weekly, per your requirement)

It'll use far less storage space due to file level dedupe and compression, so you don't actually need the same amount of raw storage space.
Availability is faster, especially for random file restores.
You can do far more frequent backups.
Total cost will be less.

Granted, while they're both backups, they have some fundamental differences. BackupPC, for example, is not suited to doing bare metal restores. That's not its purpose, though, and what it does do, it does very well (as do similar commercial products).

Tape wins easily given specific requirements that favor it, and those requirements may be justified. However, for a very large share of backup needs, even in the enterprise, disk can win in many ways. There are really way too many factors to just say one is best, and there's a lot of middle ground where a blend of the two is better, or either one may be fine. When you add in the cloud (ex. Amazon Glacier), it becomes really easy to consider dropping tape from the mix.

about three weeks ago

Is LTO Tape On Its Way Out?

unrtst Re:Shyeah, right. (284 comments)

It's 2014, you can just run your backups to low cost cloud storage that is replicated across the world.
  And when an array dies and you need to load all 5 TB of data from backup, let us know what your boss says when you tell him it'll take a week to restore, assuming a 100Mbit internet connection.

1. He included keeping a local copy, so unless the production RAID and the local backup system both failed, he'd just pull it from the local copy.

2. It won't take a week if you're using the right cloud service. Ex. Amazon Glacier has an Import/Export service and can ship drives with your data: http://aws.amazon.com/importex...
They also have a Direct Connect option, so you could establish a high speed dedicated network connection from you to them, bypassing the internet at large, going up to 10 Gbps.
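
For scale, here's the raw transfer-time arithmetic for the 5 TB example, assuming the link runs flat out (real-world throughput would be lower, and this says nothing about the service's own retrieval delays):

    # Rough best-case time to move 5 TB at two link speeds.
    data_bits = 5 * 1e12 * 8   # 5 TB expressed in bits

    for label, bits_per_s in [("100 Mbit/s", 100e6), ("10 Gbit/s", 10e9)]:
        hours = data_bits / bits_per_s / 3600
        print(f"{label}: ~{hours:.1f} hours")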

FWIW, I wouldn't rely on it as the only backup storage. However, based on your statement, I'm assuming you're restoring from local media as well, so all is equal there (he said he'd have a local copy). How well does your offsite deal with restores?

Disclaimer: I've yet to use Glacier. I just really like the design, pricing, and features, and I want to use it at some point. For my personal data, it's a non-starter because I have insufficient upstream bandwidth (sneaker net FTW, ugh). For work, we already have a bunch of data centers with fat dedicated pipes between them (I'm still hoping to move to Glacier to greatly reduce (not eliminate) the crap we have to maintain).

about three weeks ago

Revisiting Open Source Social Networking Alternatives

unrtst Re:cross compatability (88 comments)

I don't think a law will be needed, but IMO you are exactly right that cross compatibility will be key.

Personally, I'm hoping that HTML5/AJAX/etc gets to be such a big deal that all data going to/from facebook is done that way. It's then a fairly clean API others can use (even if there are legal issues with that). It could be done now with a mix of that and screen scraping, but it'd be difficult to keep up.

If, at some point, someone created a client-based application (probably browser based and in javascript) that had a plugin for facebook, and turned those streams into a common format (pick one of the better open source distributed/federated social networks and use that format), then it could effectively federate facebook into said distributed network.

One thing I'm curious about, but not enough to research right now, is the compatibility of the existing federated social networks! I'm kind of amazed that wasn't the whole point of the main article. If they're federated, can they talk to each other? If not, why not? I don't care if they don't share internal API's, but the first thing they should make (during or after working out their internals) is a way to talk to each other in a common way. Do that, and all the ones listed on the main article (and more) become one big network - still probably not enough to sway a significant part of facebook users, but that doesn't really matter. This has to come first. Then add a plugin (possibly unofficial due to legal reasons) to plug in facebook.

Maybe/hopefully, facebook will take up that charge. They won't gain those external users, but they'd be giving their users access to the other networks where some small group of more security/privacy/just-plain-paranoid people reside.

I like to think of it somewhat like email. "You've got mail"... AOL is more-or-less dead, but not because they allowed users to interact via email with external networks. That may be the only thing that kept them alive as long as they did. Of course, email was designed from the ground up to work that way, so we'll have to work backwards.

This post is getting too long, but one last thing... I'm really disappointed in Google Hangouts. They had Google Talk, and it was federated, and anyone with an XMPP/Jabber server could federate with them, but they're cutting that off. This is not just a disappointment with Google, but with all these types of networks. IM is SOOOO much easier than social, and yet MSN, AIM, Jabber/XMPP, Google Hangouts, Yahoo, MS Lync... they can't talk to each other**. That's just stupid. The Google move is a step backward, and does not bode well for integration of social networks.

** I know there are ways to do this, such as with XMPP bridges, but they're ugly and generally unsupported. AFAIK, I can't search for an MSN user while on AIM, and in this day and age, that's stupid.

about three weeks ago

A Toolbox That Helps Keep You From Losing Tools (Video)

unrtst Re:checking out stuff? (82 comments)

You could fit every tool with an RFID tag and put a small computer with an RFID reader in the tool box. ...

This was one of the best ideas I ever saw when I read it in one of Cory Doctorow's books. I think the book was "Makers", and here's the excerpt where it was introduced: http://www.iconeye.com/404/ite...

about three weeks ago

Ask Slashdot: Best Practices For Starting and Running a Software Shop?

unrtst Re: Mod parent up. (176 comments)

Job security has its own value, along with enjoyment. You can't base everything off of the pay.

Job security is a myth.
Your only chance at job security in a large company is to do what the GP stated, "so they can hide in a corner doing minimal work while collecting a mediocre salary".

Those tireless technical people will actually have better job security at a start-up or small shop. They often need things that large companies frown upon: flexible hours (to support binge sessions of 18-hour days busting out some new thing), some level of control/authority over design and business decisions that would otherwise get delayed by red tape (ex. purchasing several additional servers to support some new thing/design/etc; if it's delayed, it'll ruin their momentum), lower overhead from stupid rules and meetings so they can focus, etc. These *can* be had at a larger company, but it's rare, and it's not often tolerated.

I can't speak for the GP, but I doubt he was basing everything off of pay. That sounds like one of the least important factors, but it doesn't hurt the decision.

about three weeks ago

Ask Slashdot: Best Practices For Starting and Running a Software Shop?

unrtst Re:First and foremost (176 comments)

Why is a pay check important? Having a portfolio of work, be it class projects, contributions to an open source project, perhaps having a patent granted, etc. should count just as much as earning a pay check for a few years working as an assistant code monkey to the junior developer of some corporate sub-project.

While I agree with the other replies to your comment (ie. it is quite different and very important), none of them seem to mention the grey area where you are right. That is, having a portfolio of unpaid work can be plenty to get a low- to mid-level developer job.

However, the question was about a lead developer position. At that level, you can disregard the "developer" part in order to answer this question as it applies to practically all professions. It doesn't matter how awesome you are at the core task (translating ideas into code that works) if you have zero experience with all the other duties that make up a head/chief/lead position.

This also goes the other direction. An applicant could have years of experience being a very effective manager in another field but, if they do not have any development experience, they shouldn't get the lead developer position. The position bridges two areas of expertise and requires experience in both.

The dual role has one relaxed restriction - you do not need to be the hottest code monkey there is, nor do you need to be a six sigma black belt. You do need to understand both and how to communicate across the domains.

about three weeks ago

Former Police Officer Indicted For Teaching How To Pass a Polygraph Test

unrtst Re: First Post (328 comments)

but they weren't clearly criminal things either (letter of the law, maybe, but it's not like he was knowingly training terrorists or killing people etc)

Umm, did you read the indictment? One of his would be clients told him that he was worried about his polygraph because he had engaged in smuggling while employed for Homeland Security. He proceeded to assist that would be client (actually an Undercover LEO) with the falsification of his testimony to the Federal Government. Mens rea was clearly evident on the part of Mr. Williams.

Assuming that's all true, yeah, those things are illegal. My opinion, and I realize this doesn't follow all the laws on the books, is that the main thing he was doing is not illegal at all. The polygraph is not accurate, and he shows people how it works (or doesn't work). Who cares why someone wants to beat one if they aren't admissible anyway? Yes, technically he should have stopped the guy before he knew the motivation, or aborted once he knew it, but that extra fluff really shouldn't matter. He's getting strung up on technicalities.

If they busted a drug mule with a ton of coke, but only got him for speeding, lying to an officer, and transporting goods across a state line without the right documentation, I'd say the same thing... the stuff they're busting him for isn't all that awful. The difference is, in this case the actual thing he was doing was perfectly fine, thus all the "suppress speech that the U.S. government dislikes" type of reactions.

Greed? It was just $5k.
Federal crimes? They're some of the weakest, most generic white-collar ones that exist. It's Martha Stewart-level bullshit. Technically illegal, and he'll almost certainly go to prison for it, but these aren't the sort of things that directly threaten society.

Here's a question for you (or anyone): do you think he is horrible and evil, or do you think he just made some stupid mistakes?

about a month ago

Former Police Officer Indicted For Teaching How To Pass a Polygraph Test

unrtst Re: First Post (328 comments)

I didn't realise there was so much money involved.
Looks like Scam VS Scam.

I didn't read the article. Are you referring to the same figure Shakrai quoted - $5000?

Sorry, but that's NOT a lot of money. If he had one $5k client a month, that's only $60k/year. Sure, he *could* have more clients, but I doubt the demand is all that high, and I suspect the training takes a fair bit of time, even if it is very simple in theory. It's not like he's got a bunch of employees and is making millions.

As far as greed goes, this is more like greed to have enough money to live on, not greed to have piles of surplus cash.

He made a couple statements/choices that could (and did) get him in trouble, but they weren't clearly criminal things either (letter of the law, maybe, but it's not like he was knowingly training terrorists or killing people etc).

about a month ago

