
Comments


Backblaze's 6 TB Hard Drive Face-Off

Fweeky Re:Meaningless? (173 comments)

I find it odd that the WD drives, at the 5400rpm speed, were able to write data faster than the 7200rpm Seagate drives.

Maybe the Seagates are more sensitive to vibration, either because they produce more of it when you shove 45 of them into a cheap metal box, or because they tolerate it less well since they're pushed harder.

4 days ago

Seagate Bulks Up With New 8 Terabyte 'Archive' Hard Drive

Fweeky Re:Just in time. (219 comments)

I recall reading that the uncorrectable read error rate tends towards the 2TB mark.

12.5TB, assuming the 1-in-10^14-bits uncorrectable-read-error rate specified for most consumer drives is accurate. I certainly don't see error rates anywhere near that high with my consumer drives, but I could just be lucky.
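
The arithmetic behind that figure, for anyone who wants to sanity-check it (just a back-of-the-envelope conversion, nothing drive-specific):

    # one unrecoverable bit error per 10^14 bits read, per the datasheets
    bits_between_errors = 10**14
    bytes_between_errors = bits_between_errors / 8    # 1.25e13 bytes
    print(bytes_between_errors / 1e12)                # 12.5 (terabytes)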

about a week ago

Windows 10 To Feature Native Support For MKV and FLAC

Fweeky Re:Rather late (313 comments)

I still have two or three recent (i.e. last four or five years) devices that have problems seeking VBR files or displaying the proper duration.

Even foobar2000 has issues with seeking in MP3s. From the FAQ:

Why is seeking so slow while playing MP3 files?

The MP3 format doesn't natively support sample-accurate seeking, and sample-accurate seeking is absolutely required by some features of foobar2000 (such as .CUE playback). MP3 seeking can't be optimized either for CBR files (frame sizes aren't really constant because of padding) or for VBR files (both Xing and VBRI headers in those files contain only approximated info and are useless for sample-exact seeking). Therefore MP3 seeking works by brute-force walking the MPEG stream chain and is appropriately slow (this gets faster when you pass through the same point of the file a second time, because seek tables have been built in RAM).
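
Roughly what that walk looks like - only a sketch, with parse_frame_header standing in for real MPEG header parsing, not foobar2000's actual code:

    def seek_to_sample(f, target_sample, seek_table, parse_frame_header):
        # seek_table: (sample_position, byte_offset) pairs for every frame
        # boundary walked so far, built up lazily in RAM.
        if seek_table and seek_table[-1][0] >= target_sample:
            # We've already walked past this point once: reuse the cache.
            for sample, offset in reversed(seek_table):
                if sample <= target_sample:
                    return offset
        sample, offset = seek_table[-1] if seek_table else (0, 0)
        # Otherwise brute-force walk frame by frame up to the target sample.
        while sample < target_sample:
            frame_bytes, frame_samples = parse_frame_header(f, offset)
            if not frame_bytes:
                break  # end of stream
            offset += frame_bytes
            sample += frame_samples
            seek_table.append((sample, offset))
        return offset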

about three weeks ago

Windows 10 To Feature Native Support For MKV and FLAC

Fweeky Re:Where does this leave independant media players (313 comments)

You need weird-ass buggy fb2k plugins, but are only missing format support in WMP? Do you play a lot of ancient tracker music or something?

If you find the fb2k interface so intimidating perhaps you'd be better off with its much simpler cousin, Boom. Not sure if it's got much support for particularly oddball media formats though.

about three weeks ago

Multi-Process Comes To Firefox Nightly, 64-bit Firefox For Windows 'Soon'

Fweeky Re:Tempting (181 comments)

Multi-process architecture... I've not really noticed a problem with the threaded one, and Firefox already sticks flash objects in a separate process. So what's the real draw?

Isolation. The same reason you want different apps to have their own processes instead of having the whole of userspace in one big blob. You can give processes reduced privileges to reduce the scope of exploits, hangs and crashes don't take down more than they have to, and leaks don't force you to restart the entire system to recover resources.

Plus it makes for simpler concurrency. Kind of handy when you've got a stop-the-world garbage collector if you can just split the world into many smaller independent units, each able to run at the same time and each with an order of magnitude less work to do and no synchronisation to worry about.

64bit... again, bragging points about how many bits you use, no functional difference to anyone

ASLR is a fuckload more effective when it has a reasonably sized address space to work with, and 2^32 is miles away from being reasonable. It's the difference between an attacker having to guess one of 8 locations and one of 8 billion. Plus, memory mapping things is awesome, and also a fuckload easier with a reasonably sized address space.
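
To put rough numbers on that (the region sizes and alignments below are made up purely for illustration, not actual OS constants):

    region_32 = 2**31     # roughly the usable chunk of a 32-bit address space
    region_64 = 2**47     # user-mode portion of a typical 64-bit address space
    coarse_align = 2**28  # coarse alignment leaves very few candidate slots
    fine_align = 2**14

    print(region_32 // coarse_align)  # 8 possible base addresses
    print(region_64 // fine_align)    # 8589934592 -- about 8 billion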

And hey, some of us actually use our browsers quite a lot. Mine's eating 5.5G right now. So many windows and tabs, and absolutely no fucking reason whatsoever why that should be considered even slightly unreasonable.

about a month ago

Passwords: Too Much and Not Enough

Fweeky Re:Per-user salting (223 comments)

How many people do per-user salting of the password hash?

People spouting things like this are precisely why we have tens of millions of web apps using shitty password storage solutions that boil down to HASH(salt + password) and are thus borderline fucking useless. It's like asking whether someone's home-grown encryption algorithm uses an IV - that might be an important part of it, but it's kind of missing the point.

If you're using passwords for authentication in your app, use a recognised key derivation function. Use PBKDF2 or bcrypt and tune them to take at least 100ms to run. If you're extra paranoid, use scrypt and tune it to take 100ms and 16MB of memory. If you're doing anything else without a well-received, peer-reviewed academic paper describing it, you might want to reconsider.
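
For the PBKDF2 route the Python standard library already has everything you need - a rough sketch (the iteration count is only a starting point; time it on your own hardware and raise it until a call takes at least 100ms):

    import hashlib, hmac, os

    def hash_password(password, iterations=600000):
        # PBKDF2-HMAC-SHA256 with a fresh random salt per user.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return salt, iterations, digest

    def verify_password(password, salt, iterations, expected):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return hmac.compare_digest(candidate, expected)  # constant-time compare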

about 2 months ago

Choose Your Side On the Linux Divide

Fweeky Re:My opinion on the matter. (826 comments)

FreeBSD uses init.d

FreeBSD uses rcNG, acquired from NetBSD (basically shell scripts and a binary for resolving dependency order defined in magic comments), on top of a simple BSD-style init. There's some vague movement towards porting launchd, but I don't think anyone's holding their breath.

about 4 months ago

Facebook Seeks Devs To Make Linux Network Stack As Good As FreeBSD's

Fweeky Re:This does pose the question: (195 comments)

pkgng's made port upgrading much less burdensome - even fairly complex dependency changes can be handled automatically as of 1.3, and the official package repositories are a lot more useful now. They even have stable security-fix-only branches.

I still make my own customised builds, but I make binary packages in an isolated jail using poudriere. 99% of upgrades are a matter of updating its ports tree, running rebuild-packages, and running pkg upgrade on all my machines.

You couldn't pay me to go back to portupgrade/portmaster/portmanager.

about 4 months ago

How long ago did you last assemble a computer?

Fweeky Re:so, I'm in the more than 8 yrs ago camp (391 comments)

If you're actually that bothered about the data integrity benefits of ZFS, it'd probably have been a good idea to go for ECC memory. Pools can pretty much self-destruct in the face of memory corruption, and memory failure rates are not that much different from disk failure rates.

Such bullshit that it's so rare and poorly supported. The actual material cost is tiny - a few more motherboard traces and 1 extra memory chip for every 8. With AMD at least it's mostly a case of finding a good motherboard vendor, instead of the server/workstation board and CPU combo Intel demand.

about 5 months ago

FreeBSD 9.3 Released

Fweeky Re:What is BSD good for? (77 comments)

Not really - ports doesn't even have a *concept* of upgrading; it's just uninstall/reinstall and hope you can work out how to handle all the dependencies. This is why FreeBSD's got so many tools for managing them - portupgrade, portmanager, portmaster, all with their own little (and not so little) quirks.

We do have an apt-alike these days, in the form of pkgng. pkgsrc also has pkgin.

about 5 months ago

FreeBSD 9.3 Released

Fweeky Re:What is BSD good for? (77 comments)

It's stable enough for general use, but maturity counts for a lot with filesystems, especially when they're as complex as ZFS. It's also a third-party add-on rather than an official part of the OS, which does raise some issues.

Conversely it's practically the default on FreeBSD, and it's been available since 2008.

about 5 months ago

Ask Slashdot: Practical Alternatives To Systemd?

Fweeky Re:Accept, don't fight, systemd (533 comments)

Every release seems to take the system one step closer to exactly what you describe

Erm, like what?

about 7 months ago

Ask Slashdot: Practical Alternatives To Systemd?

Fweeky Re:I've been toying with rolling my own distro (533 comments)

pkgng's still missing the ability to track certain changes automatically, so you occasionally have to force-remove a package or manually change an origin as per /usr/ports/UPDATING. I think they're expecting to resolve that in 1.3 fairly soon.

I've been using it for about 18 months across a small group of machines with about 1400 packages between them, and it's pretty much entirely demolished any apt-envy I've had.

about 7 months ago

OpenSSL Cleanup: Hundreds of Commits In a Week

Fweeky Alternatively (379 comments)

You can also track the changes in a somewhat friendlier format using FreshBSD. Full commit messages (up to a point) upfront, more useful Atom feed, breakdown by committer etc.

about 8 months ago

How Data Storage Has Grown In the Past 60 Years

Fweeky Re:How long id a song (100 comments)

Reality disagrees with you. The user-data portion of a sector is normally a power of two for convenience, since it's used on computers with power-of-two page sizes, but drives themselves are no more limited to a power-of-two number or size of sectors than your computer is limited to power-of-two array or structure lengths. This is readily confirmed by the existence of disks with 520-byte sectors (and somewhat different physical sizes) and an irritatingly diverse range of sector counts.

about 9 months ago

How Data Storage Has Grown In the Past 60 Years

Fweeky Re:How long id a song (100 comments)

Hard disk drives use sectors which at some basic level have to be addressed by a powers of two binary addressing system. This means that no matter what else you do with sector sizes or block sizes, the binary counting system *always* comes into the picture.

Right, they're addressed using LBA48, which happens to be encoded in binary because that's how we build computers. That doesn't imply disks naturally only support powers of two for sector counts or sizes - they evidently don't.

CDs and DVDs have 2,352- and 2,418-byte physical sectors. Some Fibre Channel HDs support 520-byte sectors, and of course, like optical discs, all HDs have substantially bigger physical sectors internally for error detection and correction. A quick sampling of some of my HDs reveals drives with 732,566,646, 3,907,029,168, 500,118,192 and 312,581,808 sectors (at least they're all even?).
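
A quick sanity check on those counts, assuming 512-byte logical sectors (the power-of-two test is the usual n & (n - 1) trick):

    sector_counts = [732566646, 3907029168, 500118192, 312581808]

    for n in sector_counts:
        is_pow2 = n & (n - 1) == 0
        capacity_gb = n * 512 / 1e9
        print(n, "power of two:", is_pow2, "~%.0f GB" % capacity_gb)

None of them are powers of two, but they all land on tidy decimal capacities.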

Ethernet is even more flexible, supporting frame sizes anywhere from 64 bytes to over 9KB, hardware permitting. Note that 9KB is not a power of two.

Wrong, and wrong again. *All* computer peripherals transmit data to and from computers encoded in binary signals. It means that all computer based addressing is essentially binary

Um. Yes, the numbers are encoded in binary. No, this doesn't mean computers can only handle maximums that are a power of two. Memory happens to be like that because it has to be insanely low latency, and simple bit operations like masking off the lower portion of an address are very efficient; not everything is so restricted.
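
The masking trick in question, for what it's worth - it only works because the page size is a power of two:

    PAGE_SIZE = 4096                       # power of two
    addr = 0x7F3ABC12                      # arbitrary example address
    page_base = addr & ~(PAGE_SIZE - 1)    # round down to the page boundary
    offset_in_page = addr & (PAGE_SIZE - 1)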

about 9 months ago

How Data Storage Has Grown In the Past 60 Years

Fweeky Re:How long id a song (100 comments)

Why always picking on the HD manufacturers? Your GigE network runs at 1,000,000,000 bits per second, not 1,073,741,824, what a scam!

Memory is measured in multiples of powers of two because that's how the addressing works. Disks and network have no such fundamental limitations - they count in sectors and frames, which are themselves not necessarily powers of two.

about 9 months ago

NVIDIA GeForce GTX 780 Offers 2,304 Cores For $650

Fweeky Re:Bitcoin / Litecoin mining? (160 comments)

It'll perform a bit worse than a GTX Titan, which gets in the region of 330Mhash/sec. For comparison, an AMD HD5870 from 2009 managed about 400Mhash/sec.

about a year and a half ago

Btrfs Is Getting There, But Not Quite Ready For Production

Fweeky Re:ZFS (268 comments)

8GB isn't hefty by any stretch of the imagination, especially not when you're messing with dedup. For decent performance the recommendation is somewhere along the lines of 20-30GB per TB, though you can mitigate that somewhat by using an SSD for L2ARC.

about a year and a half ago

Submissions

Fweeky hasn't submitted any stories.

Journals

Fweeky has no journal entries.
