
Comments


Seagate Bulks Up With New 8 Terabyte 'Archive' Hard Drive

WuphonsReach Re:these are WORM drives (219 comments)

Modern versions of Linux don't need "noatime" for ext4. The "defaults" mount option now implies "relatime", which only updates atime when it's older than the file's mtime/ctime (or about a day stale), so it doesn't break applications that look for atime.
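
For illustration, the corresponding /etc/fstab entries might look like this (device names and mount points are placeholders):

    # relies on the kernel's relatime default - no atime option needed
    /dev/sda2  /      ext4  defaults         0 1
    # explicit noatime, only if you're sure nothing reads atime
    /dev/sdb1  /data  ext4  defaults,noatime 0 2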

4 days ago

French Publishers Prepare Lawsuit Against Adblock Plus

WuphonsReach Re:They can go bite a donkey (683 comments)

I've taken vendors to task for making changes to their site that required that I allow dozens of sites. They had a simple choice - fix things, or I take my business elsewhere.

If you don't complain, then they will keep doing what they are doing.

about a week ago

Samsung SSD 850 EVO 32-Layer 3D V-NAND-Based SSD Tested

WuphonsReach Re:Why no 2tb model? (127 comments)

You're young. Early PCs cost $4-5k, and individual hard drives were in the $1,000 range back in the '80s.

For someone who absolutely needs 10TB of zero-wait storage in a 2.5" form factor, $4-5k is not a big deal. And pretty soon it will be $2,000, then $1,000, then $500.

Inexpensive enterprise SSD is having a big impact on how you spec out servers now. Do you build something with a bunch of 15k RPM drives in a short-stroked RAID 0+1 array and end up with about 1TB of useful space? Or do you simply put 2x 1TB SSDs in a RAID-1 array in a much smaller unit?

I paid about $650 per drive last week for 1TB enterprise-quality SSDs. I expect them to be below $400 by this time next year. By 2016, I suspect you will not be able to buy a 15k RPM SAS drive, as enterprise SSDs are crushing them from above on price/performance.

about a week ago

Consortium Roadmap Shows 100TB Hard Drives Possible By 2025

WuphonsReach Re:100k employees making 100k a day in email (215 comments)

(shrugs) Your IT is definitely stuck in the 2000s (i.e. 5+ years ago).

Cost per TB of bulk storage (the raw drives, the hardware to hold them, plus the backup tapes / disks) is definitely more like $800-$1,000 these days, not $10k. The sweet spot for bulk storage right now is the 3TB 3.5" enterprise SATA drive at about $230 each. Add in the capacity lost to RAID plus server costs and you're at about $500 per TB of actual storage.
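
As a rough worked sketch of how those numbers hang together (the array size and RAID level below are my own illustrative assumptions):

    8 x 3TB enterprise SATA at $230 each  =  $1,840 for 24TB raw
    RAID-6 (two drives' worth of parity)  =  18TB usable, ~$102/TB in drives
    add the server, controller, and backup tapes / disks,
    and the all-in cost lands around $400-$500 per usable TB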

Primary storage is still much more expensive at $1,500-$2,000 per TB. But primary storage uses SSDs (around $1/GB) or 15k SAS drives (about $0.35-$0.50/GB), not the relatively inexpensive 3TB enterprise drives at $0.08/GB.

about three weeks ago

Windows 10 To Feature Native Support For MKV and FLAC

WuphonsReach Re:About time (313 comments)

> After all those years of the big sweaty one, Nadella is just the breath of fresh air that MSFT needed!

I'll believe that once they spin off some divisions and simplify licensing costs for corporate users. And release all of their applications on Android + iOS + OS X.

This is just a retrenchment. Their game plan is still "lock-in lock-in lock-in", also known as "Embrace, Extend, Extinguish".

about three weeks ago

Consortium Roadmap Shows 100TB Hard Drives Possible By 2025

WuphonsReach Re:How about transfer rate and reliability? (215 comments)

In practice, SSDs have only 20-100x the IOPS of a similar number of spinning-platter drives. That's still a huge improvement, but not three orders of magnitude (1000x). The bigger advantage is that as more workers access the drive, latency doesn't dive off a cliff the way it does with spinning-platter drives; it degrades gracefully on SSDs.

SSDs are definitely edging 15k SAS drives out of the market. SSDs do everything that 15k SAS drives can do, with at least an order of magnitude more IOPS per drive, for only about 2-4x the cost of the 15k SAS drive. And putting a writeback SSD cache in front of a spinning-platter drive array is even more economical.
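
If you want to put numbers on that for your own hardware, fio is the standard benchmark; a minimal 4k random-read job might look like this (the file path and sizes are placeholders):

    # 4k random reads, 4 workers, queue depth 32, direct I/O
    fio --name=randread --filename=/mnt/test/fio.dat --size=4G \
        --rw=randread --bs=4k --direct=1 --ioengine=libaio \
        --iodepth=32 --numjobs=4 --runtime=60 --time_based \
        --group_reporting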

about three weeks ago

How Intel and Micron May Finally Kill the Hard Disk Drive

WuphonsReach Re:What about long-term data integrity? (438 comments)

> A powered-down SSD that has been written once should be able to retain data for ~10 years or so. Longer if kept in a cool place.

Nope. Most MLC SSDs will lose their data after about a year powered off, and TLC SSDs after about 6 months. (Don't confuse older flash media, which was probably SLC or built on larger feature sizes, with newer MLC/TLC media.)

As the size of the feature that stores your bits shrinks, so does the archival lifetime before something bad happens to one or more of the bits. That holds true for everything from tape to hard drives to CDs to flash drives.

about three weeks ago

How Intel and Micron May Finally Kill the Hard Disk Drive

WuphonsReach Re:Empty article.. (438 comments)

> Also an incorrect assertion that drives don't go faster than 7200 (there are 15k drives, they're just pointless for most people now that SSD caching strategies are available).

With enterprise SSD prices hitting $1/GB (granted, some are still $2-3/GB), the days of 15k RPM drives are definitely numbered. You get 50-100x the IOPS out of SSDs compared to 15k RPM SAS drives, which means that for a given level of IOPS you can use far fewer drives by switching to SSDs.

I'd argue that if you are short-stroking your 15k SAS drives to get increased IOPS out of the array, it's past time to switch to enterprise SSDs.

about three weeks ago

Sony Pictures Computer Systems Shut Down After Ransomware Hack

WuphonsReach Re:How do WE fight this? (155 comments)

Using rdiff-backup, rsnapshot, or rsync across the LAN via SSH in a "pull" configuration is safest: the server pulls the files from the client PC. Alternately, you could do the above in a "push" configuration and limit where the origin PC can write on the backup server. Even in a push configuration, I don't know of any malware currently capable of figuring out that there is an rdiff-backup script which stores data on a different server.
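
A minimal sketch of the pull arrangement with rdiff-backup, run from the backup server (the hostname and paths are placeholders):

    # pull /home from the client over SSH into a local increment store
    rdiff-backup user@client.example.com::/home /srv/backups/client/home
    # expire reverse increments older than 8 weeks
    rdiff-backup --remove-older-than 8W /srv/backups/client/home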

The server then sends files to tape / disk / offsite.

Basically - you need to have a centralized backup solution with multi-generation removable media.

For immediate restores, you pull the files back off the backup server. The next level after that is pulling files off of removable media which has been kept offsite or disconnected.

about three weeks ago

Highly Advanced Backdoor Trojan Cased High-Profile Targets For Years

WuphonsReach Re:Microsoft Windows only (143 comments)

> microsoft is one price and you get a server and tools and all the features

That's a good one, go ahead and pull my other leg while you're trying to spin that for Microsoft.

Microsoft licensing is a nightmare. Just look at the market segments for the desktop operating system. Or try to figure out which version of MS Office you need and whether a volume license will save you money (and whether you'll be in compliance). The server side is no different, with varying restrictions across the editions of Windows Server, SQL Server, etc.

(They're still a babe in the woods compared to some other vendors like Oracle, but they're trying to catch up.)

about three weeks ago

Highly Advanced Backdoor Trojan Cased High-Profile Targets For Years

WuphonsReach Re:Microsoft Windows only (143 comments)

The "security through obscurity" meme only really applies in cases of improper reliance on obscurity: once the secret is known, the system is insecure and anyone can access it.

Examples would be a hand-rolled encryption algorithm hidden in a black box, secret handshakes, or back doors that are left unlocked.

about three weeks ago

BitTorrent Unveils Sync 2.0

WuphonsReach Re:FOSS solution available (60 comments)

I'd argue for Seafile as another option. It does what it says on the tin (file sharing / sync) and does it well.

about a month ago

Launching 2015: a New Certificate Authority To Encrypt the Entire Web

WuphonsReach Re:quick question (212 comments)

Short answer: no, it's not possible to detect that with the current system.

The slightly longer answer is either browser pinning of certificates or, better, DANE. With a system like DANE, it's much harder to impersonate large swathes of domains the way you can today.
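
For reference, DANE works by publishing a TLSA record in DNS (protected by DNSSEC) that binds a name to a certificate; a sketch of such a record (the hostname and digest are placeholders):

    ; usage 3 = DANE-EE (match the server's own cert),
    ; selector 1 = public key, matching type 1 = SHA-256 digest
    _443._tcp.www.example.com. IN TLSA 3 1 1 aabbccddeeff0011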

about a month ago

Joey Hess Resigns From Debian

WuphonsReach Re:I will be changing to FreeBSD too (450 comments)

There are definitely going to be some teething pains, which is why I'm not rolling out anything production on RHEL7 until 7.2 or 7.3 comes out next year.

But I am looking forward to having one log file to dig through instead of two dozen or more, and to being able to easily pull that to a centralized log server (pull is more secure than push). I'm also looking forward to not having to write monit / nagios scripts to restart services when other services restart.
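
Two small sketches of what that looks like in practice with systemd (the service names are placeholders, not from the original comment):

    # one journal to query instead of two dozen log files
    journalctl -u httpd.service --since "1 hour ago"

    # in a unit file: systemd restarts the service itself,
    # no monit / nagios wrapper script required
    [Service]
    Restart=on-failure
    RestartSec=5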

about a month ago

Bounties vs. Extreme Internet Harassment

WuphonsReach Re:Longstanding Police Tactic (716 comments)

They don't need to be effective. The reason all the bigwigs get up there and smile is that it gets them re-elected: they can be shown to be tough on crime by supporting things like Crime Stoppers.

about a month and a half ago

The Fight Over the EFF's Secure Messaging Scoreboard

WuphonsReach Re:OpenPGP (63 comments)

The problem with Perfect Forward Secrecy (PFS) in the case of GPG/PGP-encrypted messages is that PFS requires two-way communication between the endpoints at the start, to securely agree on an ephemeral key for that session.

That's not practical in the case of sending an encrypted email/file to someone. There is no "session" to speak of. There's no two-way conversation at the start before the file/information is transmitted.

GPG/PGP is designed to defend against disclosure of data at rest (i.e. an email body sitting on someone's server, or a file sitting on your hard drive). It just so happens that, because it protects data at rest, it can also help protect the contents in transit. It's very good at what it does, but trying to use it in a situation where you want PFS is a misapplication of the technology.
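
You can see the store-and-forward model in the basic usage; there is no handshake step at all (the recipient address and filenames are placeholders):

    # encrypt to the recipient's long-lived public key; no session setup
    gpg --encrypt --recipient alice@example.com report.pdf
    # the recipient decrypts whenever convenient - minutes or years later
    gpg --output report.pdf --decrypt report.pdf.gpg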

(So yeah... the EFF folks are idiots and are lumping together apples and oranges.)

about a month and a half ago

Android 5.0 Makes SD Cards Great Again

WuphonsReach Re:Still a second class citizen (214 comments)

In general, if a device supports 64GB microSD cards, larger cards will work fine past that point.

The original SD spec was limited in size. SDHC came out in 2006 and allowed for card capacities of up to 32GB. Most devices made in 2013 or earlier are SDHC with a 32GB limit (such as my Thinkpad T61p laptop and my Asus TF700T tablet). That means putting a 64GB card into an SDHC slot is a bad idea (it will probably corrupt the data once it tries to write past the 32GB mark).

SDXC was introduced three years later, in 2009, and allows for cards up to 2TB. Often a manufacturer will only certify up to the size that was available when the device was released, so larger cards may very well work, up to the limits of the spec.

about a month and a half ago

The Effect of Programming Language On Software Quality

WuphonsReach Re:I have just one word for you (217 comments)

A lot of Java boilerplate code (and not just getters/setters) can be eliminated with a bit of AspectJ (Spring Roo leverages this heavily). With good use of AspectJ, your Java objects look like POJOs (plain old Java objects), with all of the extra machinery added at compile time by the .aj files.
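
As a small sketch of the technique (the Customer class and its name field are hypothetical), an inter-type declaration in a .aj file adds the accessors at compile time while Customer.java stays a bare POJO:

    // Hypothetical example: adds a field plus getter/setter to Customer
    // at compile time, keeping Customer.java free of boilerplate.
    public aspect CustomerAccessors {
        private String Customer.name;
        public String Customer.getName() { return this.name; }
        public void Customer.setName(String name) { this.name = name; }
    }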

about a month and a half ago

New Atomic Clock Reaches the Boundaries of Timekeeping

WuphonsReach Re:Old saying (249 comments)

Best practice in the real world is four reference clocks, or only one. With just three configured, you end up in the "just two clocks" situation (one source unreachable) more often than not, and with only two sources NTP cannot tell which one is the falseticker; it is likely to oscillate between the two remaining candidates (without the "prefer" keyword).

How you choose to configure NTP is a tricky art, depending on how resilient you want to be and whether you have a local time source or need better than 5ms accuracy. For most situations (99% of servers), being within 500ms of "internet time" is enough; your goal is mostly to avoid the clock being off by tens of seconds or worse.
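
Following the four-clock rule, a minimal ntp.conf sketch (the public pool hostnames are just common examples; substitute your own sources):

    # four independent sources so one falseticker can be outvoted
    server 0.pool.ntp.org iburst
    server 1.pool.ntp.org iburst
    server 2.pool.ntp.org iburst
    server 3.pool.ntp.org iburst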

about a month and a half ago

Ask Slashdot: How Useful Are DMARC and DKIM?

WuphonsReach Re:I send bulk email.. (139 comments)

> I send bulk email for an opt-in list with mailman (opt-in as in you have to walk into the store and physically write your email on our sign-up sheet).

It's not opt-in unless you send a verification email to the address on the sign-up sheet. You have zero guarantee that the person writing down the address has the permission of the person who actually receives mail at that address. That verification email should explain how you obtained the address and require action on the recipient's part in order to remain on the list. If you get no response, or the recipient takes no action, you should throw away that record.

No, you're not allowed to do advertising in that initial mailing either. And those "asking permission" emails should go out sooner (within a week) rather than later (months+).

about a month and a half ago
