
OpenBSD 4.9 Released

timothy posted more than 2 years ago | from the release-the-kraken dept.

Operating Systems

An anonymous reader writes "The release of OpenBSD 4.9 has been announced. New highlights since 4.8 include: NTFS enabled by default (read-only), the vmt(4) driver enabled by default for VMware tools, SMP kernels that can now boot on machines with up to 64 cores, support for the AES-NI instructions found in recent Intel processors, improvements in suspend and resume, OpenSSH 5.8, MySQL 5.1.54, LibreOffice 3.3.0.4, and bug fixes." Also in BSD news, an anonymous reader writes "DragonFly BSD 2.10 has been released! The latest release brings data deduplication (online and at garbage-collection time) to the HAMMER file system. Capping off years of work, the MP lock is no longer the main point of contention in multiprocessor systems. It also brings a new version of the pf packet filter, support for 63 CPUs and 512 GB of RAM, and switches the system compiler to gcc 4.4."

137 comments

Why is NTFS read only. (4, Funny)

jack2000 (1178961) | more than 2 years ago | (#35993294)

Why is NTFS always read-only? It shouldn't be so hard to make a proper file system driver, what the hell?

Re:Why is NTFS read only. (2, Insightful)

Anonymous Coward | more than 2 years ago | (#35993312)

If it's so easy, and you seem to care, can we expect your diff on misc@ in the next few days?

Re:Why is NTFS read only. (2, Insightful)

JamesP (688957) | more than 2 years ago | (#35993592)

They can post, but TDR will never accept it. NEVER!!11 (insert maniac laughter)

OpenBSD is known for things like throwing away wireless drivers, for example.

Re:Why is NTFS read only. (1)

Anonymous Coward | more than 2 years ago | (#35994270)

Throwing out closed source binary blobs you mean. Yes.

Re:Why is NTFS read only. (3, Informative)

the_B0fh (208483) | more than 2 years ago | (#35995626)

Eh? The last wireless fiasco I remember was one of the Linux wireless guys stealing OpenBSD's reverse-engineered code and re-releasing it as his own. I guess you could say they threw away encumbered code as they reverse engineered and re-wrote it.

Re:Why is NTFS read only. (1)

SlashV (1069110) | more than 2 years ago | (#35997806)

That wouldn't have been a problem, since that is what BSD'ed code is for. The other way around (BSD people 'stealing' Linux GPL'ed code) is a problem, so my guess is that's what happened, and that's also the way I remember it.

Re:Why is NTFS read only. (-1)

Anonymous Coward | more than 2 years ago | (#35993316)

Why is NTFS always read-only? It shouldn't be so hard to make a proper file system driver, what the hell?


Re:Why is NTFS read only. (1)

webmistressrachel (903577) | more than 2 years ago | (#35993960)

This post makes me ashamed to troll slashdot. I hope you get run over by a bus full of Jamaican tourists before you can post another... blind justice and all that...

Re:Why is NTFS read only. (3, Informative)

Phibz (254992) | more than 2 years ago | (#35993336)

You do realize that NTFS is completely closed source right? All the work on it has been done through reverse engineering.

Re:Why is NTFS read only. (0)

Anonymous Coward | more than 2 years ago | (#35993388)

You do realise NTFS has been writable by Unix/Linux for a number of years, so his question is a bloody good one. Why the hell does anyone persist in read-only nonsense?

Re:Why is NTFS read only. (3, Informative)

phantomcircuit (938963) | more than 2 years ago | (#35993610)

NTFS is only writable on Linux through NTFS-3G; the write support in the kernel only works if the file size doesn't change.

Re:Why is NTFS read only. (2)

Tarlus (1000874) | more than 2 years ago | (#35997102)

And even then, writing with NTFS-3G isn't 100% perfect. It is progressing very nicely, but it's far from being bulletproof.

Re:Why is NTFS read only. (2)

rubycodez (864176) | more than 2 years ago | (#35993916)

You do realize NTFS-3G had horrible bugs until this month; look at the fixed-bug list of the April 11, 2011 release. I wouldn't have touched that shit with a ten-foot pole until two weeks ago. And it might still have some major problems.

Re:Why is NTFS read only. (4, Informative)

DarkOx (621550) | more than 2 years ago | (#35993424)

Add to that a few other fun things

1. Multiple versions of NTFS with subtle changes.
2. It's a complex file system with lots of features, some of which are not even used by Windows, but you still have to handle the on-disk data correctly.
3. The security scheme does not cleanly map onto Unix-style rules, even with ACL support and such.

NTFS is by no means avant guard, but it's also by no means simple, and without documentation, figuring out its internals completely and correctly is a BIG job. Now, why they can't glean a lot of that from the Linux source I don't know. I know they can't use the Linux source because the GPL is incompatible with the BSD license; maybe there is a contamination concern.

Re:Why is NTFS read only. (3, Insightful)

hedwards (940851) | more than 2 years ago | (#35993466)

Contamination isn't normally an issue for kernel code; they can always cram it in its own corner of the tree and not include it in binaries by default.

Without being involved in the discussions it's hard to say, but I've personally found Linux filesystem code to be less than reliable. There's also the issue that it would have to pass their auditing to be included in the base install; there's a reason why they have so few base exploits.

Re:Why is NTFS read only. (4, Funny)

Hognoxious (631665) | more than 2 years ago | (#35993570)

NTFS is by no means avant guard

Just like your knowledge of French, it would seem.

Re:Why is NTFS read only. (1)

poetmatt (793785) | more than 2 years ago | (#35993920)

Wha? He was exactly right. NTFS isn't innovative; it's deliberately not cross-compatible.

Re:Why is NTFS read only. (0)

Anonymous Coward | more than 2 years ago | (#35994556)

That doesn't make his spelling and grammar any more correct.

Re:Why is NTFS read only. (2)

billstewart (78916) | more than 2 years ago | (#35995030)

Those crafty French persons not only provide cliche'd phrases that we're expected to adopt as binary blobs, they deliberately obfuscate them by using letters that aren't supported in normal open-source ASCII.

Re:Why is NTFS read only. (0)

CharmElCheikh (1140197) | more than 2 years ago | (#35995280)

The appropriate word is "avant-garde". Nothing non-ASCII in there. (Not that anyone cares, but well... you're either a surrender-monkey snail-eater spelling nazi or you're not; and I am.)

Re:Why is NTFS read only. (1)

the_B0fh (208483) | more than 2 years ago | (#35995660)

What in the world are you smoking? People who've looked at the disk structures say that NTFS is just like the VMS filesystem; you know, the OS that Dave Cutler wrote at |D|I|G|I|T|A|L|.

Re:Why is NTFS read only. (5, Informative)

mlts (1038732) | more than 2 years ago | (#35993746)

Those are important items, especially #1. There are a lot more which make life hell for someone trying to get NTFS to work fully as a supported filesystem for a UNIX based OS. A few more:

4: Alternate data streams. It is common for malware to add an ADS onto a file, a directory, a junction point, or even the C: drive object itself. Without a dedicated utility that sniffs these out, they are essentially invisible.

5: Like #1 above, NTFS changes in undocumented [1] ways. For example, EFS changed to add different encryption algorithms between Windows XP and Windows XP Service Pack 3. So, not knowing that may bring someone a world of hurt.

6: Similar to #3, NTFS's ACLs are hard to reimplement in the UNIX world. U/G/O permissions can be mapped (Cygwin does this).

7: For a filesystem to be usable as a production one, it needs a filesystem checking utility that can go through the whole filesystem and check/repair integrity on every part of it, be it mostly unimplemented/unused items (transactional NTFS), features of the filesystem (NTFS compressed files, EFS), or many other items. Yes, there are ways to run Windows's chkdsk.exe utility, but that is a hack at best.

One of the biggest problems with operating systems today is that there are no compatible filesystems beyond FAT and FAT32 (perhaps UFS). Either a filesystem has too much patent encumbrance to be used, or its license is the problem.

I wonder how much easier life would be if we had a standard filesystem that could replace the LVM (similar to ZFS) and offer modern features (deduplication, encryption, 64-bit checksumming [2], compression at various levels, snapshotting [3]). On the LVM level, it would be nice to have mountable disk images similar to OS X's sparse bundles: if something changes on the encrypted drive, only a few bands change, as opposed to having to back up the whole file.

Life would be easier if every OS out there had a common filesystem with modern features. A good example of how useful this would be is antivirus scanning: unpresent a LUN from a Windows server, scan it on a Solaris box for malware, then re-present it.

[1]: Undocumented unless you are elite enough to have the MS source code handed to you; all work on the filesystem is reverse engineering.

[2]: Backup programs would have it easy and would not need to rely on dates or archive bits: just look for files where the checksum has changed and back those up, like the -c option in rsync.
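The checksum-driven backup idea above can be sketched in a few lines of C. This is only an illustration: the FNV-1a hash here is a stand-in for whatever per-file checksum the filesystem would store (rsync -c actually uses MD4/MD5), and `needs_backup` is a hypothetical helper, not part of any real tool.

```c
#include <stdint.h>
#include <stddef.h>

/* 64-bit FNV-1a: an illustrative stand-in for the per-file checksum
 * a backup program might keep from its previous pass. */
static uint64_t fnv1a64(const void *buf, size_t len) {
    const unsigned char *p = buf;
    uint64_t h = 14695981039346656037ULL;   /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 1099511628211ULL;              /* FNV prime */
    }
    return h;
}

/* Back a file up iff its stored checksum no longer matches its
 * contents, instead of trusting mtimes or archive bits. */
static int needs_backup(uint64_t stored, const void *buf, size_t len) {
    return fnv1a64(buf, len) != stored;
}
```

With filesystem-level checksumming as described in [2], `stored` would simply be read out of the file's metadata rather than recomputed on the previous backup pass.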

Re:Why is NTFS read only. (0)

Anonymous Coward | more than 2 years ago | (#35994428)

The DIR /R command works just fine for showing alternate data streams.

Stuff NTFS hides in Alternate Data Streams (1)

billstewart (78916) | more than 2 years ago | (#35995046)

I briefly used Kaspersky anti-virus, and now lots of my files have :KAVICHS: or something like that tacked onto them as alternate data streams. When I copy those files to devices that don't support them (e.g. memory sticks using FATxx), Windows has to pop up dialog boxes to warn me that it'll be unable to copy the extra baggage. [Insert snarky comments here...]

Re:Why is NTFS read only. (0)

Anonymous Coward | more than 2 years ago | (#35997802)

One of the biggest problems with operating systems today is that there are no compatible filesystems beyond FAT and FAT32 (perhaps UFS). Either a filesystem has too much patent encumbrance to be used, or its license is the problem.

That is wrong. UDF up to version 2.01 is supported read/write by all modern operating systems. There are a few smaller issues but generally it works.

Re:Why is NTFS read only. (1)

ChunderDownunder (709234) | more than 2 years ago | (#35993738)

if ReactOS would adopt a BSD filesystem-to-rule-them-all then ntfs goes the way of the dodo.
Some clever soul then slipstreams that code into your Win8 installer as a root fs driver and world domination ensues. :-)

Re:Why is NTFS read only. (0)

Anonymous Coward | more than 2 years ago | (#35994004)

You do realize that NTFS is completely closed source right? All the work on it has been done through reverse engineering.

I'm happy for you. The point stands that the driver is perfectly capable of writing to an NTFS partition. Why is it read only by default? To frustrate users is the only reason I can come up with.

Re:Why is NTFS read only. (2)

grub (11606) | more than 2 years ago | (#35994248)


Why is it read only by default? To frustrate users is the only reason I can come up with.

If a driver is known to be potentially flaky and may put data at risk, I think making the user knowingly enable RW, with that caveat, is safe and decent.

Re:Why is NTFS read only. (0)

Anonymous Coward | more than 2 years ago | (#35993452)

licensing issues.

Re:Why is NTFS read only. (2)

DaMattster (977781) | more than 2 years ago | (#35993562)

This is done so that there is no risk of corrupting the NTFS filesystem. If you ask me, this is a good idea. What is so bad about simply copying the data you need onto your BSD 4.4 filesystem?

Re:Why is NTFS read only. (0)

Anonymous Coward | more than 2 years ago | (#35993728)

That's a bullshit excuse and you know it.

Re:Why is NTFS read only. (2)

drolli (522659) | more than 2 years ago | (#35993698)

a) yes, it is hard to make a proper (*) file system "driver"

b) it's not made any easier by the file system being closed source

(*) proper here means: will under no circumstances behave in a way that you loose data trough silent corruption, as opposed to: will not normally loose data obviously after using it for a few hours.

Re:Why is NTFS read only. (1)

jack2000 (1178961) | more than 2 years ago | (#35994106)

"Driver", daemon, kernel module, whatever the hell you want to call it.
You know the source for that OS was leaked. I'm not saying developers should just outright copy it; just look at the source off the record and make your own implementation of it.

Re:Why is NTFS read only. (1)

leamanc (961376) | more than 2 years ago | (#35994430)

Can't you just run a script to tighten up that loose data? It's not like you would *lose* data, would you?

Re:Why is NTFS read only. (0)

Anonymous Coward | more than 2 years ago | (#35994978)

No, the data's not loose, it's the data trough that's loose -- it wobbles, and some data slops over the side. You will indeed lose the data that sloshes out of your loose data trough.

Re:Why is NTFS read only. (1)

clang_jangle (975789) | more than 2 years ago | (#35993850)

Why is NTFS always read-only? It shouldn't be so hard to make a proper file system driver, what the hell?

In FreeBSD you can enable write support and recompile your kernel; not sure about OpenBSD. I've always wondered why default kernels in BSD and Linux don't just enable all well-supported filesystems to be available rw by default. Let the burden be on people who want the two-second advantage in booting or something, rather than on those of us who are trying to do something as basic as accessing our data.

Re:Why is NTFS read only. (5, Insightful)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#35994184)

The specifications for NTFS are completely closed; NTFS is whatever Windows produces when told to format a volume as NTFS. There are reverse-engineered attempts (NTFS-3G being the most practical, if rather slow), but they aren't entirely to the point where you'd want to trust vital data to them.

In the specific case of OpenBSD, I suspect that the read-only support is because the OpenBSD team has very low tolerance for what they see as crap. If they can't support something the way that they want to, they can and will just toss it (see the Adaptec RAID driver case, or some wireless chipsets). They don't do binaries, they don't do NDAs, they don't do blobs. They also don't like software they consider to be of inadequate quality. Thus, since the state of full NTFS support is a bit dodgy, it is entirely in character for them to drop it.

More broadly, NTFS read/write isn't really something the OSS world has a strong incentive to polish to a high sheen. NTFS-3G is pretty much good enough for dual-booters and rescue disks. NTFS doesn't have any points of superiority strong enough that building top-notch reverse-engineered support would be competitive with spending the same effort implementing a non-secret design. Also, for the sorts of purposes that pay the bills for a lot of Linux development, NTFS support is largely irrelevant. You don't dual-boot servers, and any halfway serious network setup is going to either use SMB/NFS (which makes the local filesystem irrelevant to all other hosts) or some filesystem with concurrent-access support or other esoteric features that isn't NTFS.

NTFS R/W is really just a convenience feature for sneakernet and dual-boot scenarios. Neither of those really pay for enough development to get a fully baked reverse engineering of a (quite complex) filesystem.

Re:Why is NTFS read only. (0)

Anonymous Coward | more than 2 years ago | (#35994434)

Why is NTFS always read-only? It shouldn't be so hard to make a proper file system driver, what the hell?

You should ask on the mailing list if that's a feature you would like, but they have their own priorities and limited developers.
If you don't like it, fix it or don't use it.
Although, I suspect you're just looking for something to complain about.

missing some key features... (0)

Anonymous Coward | more than 2 years ago | (#35993296)

wake me when they have:

1) start/stop scripts, so I don't have to ps|grep|kill|...crap, what were those flags for the daemon again... to manage running processes or daemons

2) easy way to patch, so I don't have to set up a full development environment and compile everything on production servers

Re:missing some key features... (5, Informative)

snowgirl (978879) | more than 2 years ago | (#35993464)

wake me when they have:

1) start/stop scripts, so I don't have to ps|grep|kill|...crap, what were those flags for the daemon again... to manage running processes or daemons

Well, for this one:

New rc.d(8) [openbsd.org] for starting, stopping and reconfiguring package daemons:
The rc.subr(8) framework allows for easy creation of rc scripts. This framework is still evolving.
Only a handful of packages have migrated for now.
rc.local can still be used instead of or in addition to rc.d(8).

Re:missing some key features... (1)

MichaelSmith (789609) | more than 2 years ago | (#35993756)

Cripes, they could have just copied from NetBSD ten years ago.

Re:missing some key features... (3, Insightful)

rubycodez (864176) | more than 2 years ago | (#35993996)

Why? They're not necessary. The flags for starting a daemon are in /etc/rc.conf and /etc/rc.conf.local, and the pids of running daemons are in /var/run, or use ps ea for them. Simple and clean with no cruft is why I like OpenBSD for appliances and routers so much.

Re:missing some key features... (1)

MichaelSmith (789609) | more than 2 years ago | (#35994678)

But every application has its own interpretation of signals. For some, a HUP may reload the configuration or force an update; for others, not so. For some there may be a safe way to request a shutdown; for others, no. I have written start/stop scripts for daemons I wrote; I think it is a lot more consistent that way, and I bet many OpenBSD users have improvised their own rc.d mechanisms.

Re:missing some key features... (2)

rubycodez (864176) | more than 2 years ago | (#35996548)

Actually, various unofficial rc.d projects by various people have been available for OpenBSD for at least 10 years, including a port of the NetBSD one. Most OpenBSD users say "ick" because of how OpenBSD is normally used...

OpenBSD primarily gets used on boxes with a very focused purpose, so there are just a few daemons to manage, and I'd rather have a single file to control them than runlevels and rc.d.

Re:missing some key features... (0)

Anonymous Coward | more than 2 years ago | (#35993468)

wake me when they have:

1) start/stop scripts, so I don't have to ps|grep|kill|...crap, what were those flags for the daemon again... to manage running processes or daemons

2) easy way to patch, so I don't have to set up a full development environment and compile everything on production servers

1) They do have this now.

Major disrespect (0)

Anonymous Coward | more than 2 years ago | (#35993306)

Some minor bugfixes get their own news article here, but two major releases of BSD based OSes are bundled together in the same news article?! WTF, dude, what's next, the /. BSD news digest posted once a year?

Re:Major disrespect (1)

Kjella (173770) | more than 2 years ago | (#35993770)

Some minor bugfixes get their own news article here, but two major releases of BSD based OSes are bundled together in the same news article?! WTF, dude, what's next, the /. BSD news digest posted once a year?

Because that never happens on Linux [slashdot.org].

Re:Major disrespect (0)

Anonymous Coward | more than 2 years ago | (#35993852)

Because that never happens on Linux.

You do realize that both Slackware and Ubuntu are just Linux distributions, right?
OpenBSD and DragonflyBSD may derive from the same BSD code base, but they have evolved into totally different operating systems.

Re:Major disrespect (1)

Beelzebud (1361137) | more than 2 years ago | (#35994082)

I'm not sure what the big distinction is. Couldn't it be said that OpenBSD and DragonflyBSD are just different distributions of BSD? I mean, yeah Slack and Ubuntu are Linux distributions, but as operating systems they have major differences as well.

Re:Major disrespect (2)

shutdown -p now (807394) | more than 2 years ago | (#35994118)

Couldn't it be said that OpenBSD and DragonflyBSD are just different distributions of BSD?

It couldn't, because they have very different kernels and base systems (source-code-wise). They descended from the same codebase, yes, but that was a very long time ago.

Slack and Ubuntu use the same Linux kernel, albeit with a certain combination of patches in case of Ubuntu.

Way too f***ing complicated. (0)

Anonymous Coward | more than 2 years ago | (#35993340)

Why? Just give me a simple OS that can make it into the mainstream. Something I can program, and script, and alter to my taste. This has become too unwieldy and way too over done.

Re:Way too f***ing complicated. (1)

Anonymous Coward | more than 2 years ago | (#35994262)

Why? Just give me a simple OS that can make it into the mainstream. Something I can program, and script, and alter to my taste. This has become too unwieldy and way too over done.

OpenBSD is about as simple as you can get.

Stop me if you've heard this one (0)

SilverHatHacker (1381259) | more than 2 years ago | (#35993342)

All three OpenBSD users are thrilled.

Re:Stop me if you've heard this one (1)

Anonymous Coward | more than 2 years ago | (#35993382)

Stop, HAMMER time!

Re:Stop me if you've heard this one (-1)

Anonymous Coward | more than 2 years ago | (#35993842)

So they removed Theo de Raadt's write privileges, Theo will try to take credit for someone else's project and pretend it's the fault of his operating *system*, and people will confuse HappyBSD with something usable on any contemporary hardware for the next 10 years.

OpenBSD's much-lauded security comes from its lack of features and component integration. Its instability after installation is because you have to build your own toolchain to do *anything*!!! And its most famous component, OpenSSH, was actually written by Tatu and published under the GPL. Most of the actual feature development for it now occurs in the Linux world but takes at least five years to roll back into the mainline. Chroot cages for scp and ssh are still not properly implemented, and sftp and scp still have no concept of symlinks.

63 CPUs? (1)

Anonymous Cowar (1608865) | more than 2 years ago | (#35993394)

Why 63? Because 64 would be just too KA-RAY-ZEE?

Re:63 CPUs? (0)

Anonymous Coward | more than 2 years ago | (#35993408)

63 CPUs is enough.

Re:63 CPUs? (1)

carlosap (1068042) | more than 2 years ago | (#35993440)

63 CPUs is enough.

also 640GB would be enough

Re:63 CPUs? (0)

Anonymous Coward | more than 2 years ago | (#35993688)

63 CPUs is enough.

also 640GB would be enough

And 63 million CPUs would be enough, by analogy of how much you overstated Bill Gates' (alleged) remark

Re:63 CPUs? (0)

Anonymous Coward | more than 2 years ago | (#35997952)

(Whispers) It was a real remark by Steve Jobs who said "64K is enough" but what with /. and the technology media being anti-Portsistas ... .

Re:63 CPUs? (1)

Bengie (1121981) | more than 2 years ago | (#35993908)

AMD is releasing a 20-core CPU next year; let's hope they don't sell a quad-socket mobo, because this OS won't support it.

Re:63 CPUs? (1)

dbIII (701233) | more than 2 years ago | (#35994900)

SuperMicro sells quad-socket boards that take the AMD twelve-core CPUs. I'm not even the first in my small city to get one of those 48-CPU beasts.
I have CentOS 5.5 on it because that's what the commercial software it has to run likes, and they won't support it on anything newer. That's a pity, because I can't get the most recent version of Blender to build on the thing to play with out of hours.

Re:63 CPUs? (3, Interesting)

m.dillon (147925) | more than 2 years ago | (#35994998)

The basic mobo support for large N-way configurations has gotten cheap. Power management still has a long ways to go on these beasts, though. Our monster.dragonflybsd.org box is using the quad-socket supermicro mobo with four 12-core opterons (48 cores) and 64G of ram, and I think all told cost around $8000 or so.

The limitation for these sorts of boxes is basically just power now. The 12-core opterons are effectively limited to 2GHz due to power issues, and these big beasts are really only high performers in environments where all the cores can be used concurrently with very little conflict.

By comparison, a PhenomII x 6 or an Intel I7 runs 6 cores for the PhenomII and 4 x 2 cores for the I7 but automatically boosts the base ~3.2GHz clock to almost 4 GHz when some of the cores are idle. These single chip solutions also have a MUCH faster path to memory than multi-chip solutions, particularly the Intel Sandybridge cpus, and much faster bus locked instructions. So if your application is only effectively using ~4-6 cores concurrently it will tend to run at least twice as fast as it would on a high-core-count monster.

That means that for most general server use a single-chip multi-core solution is what you want. The latest single-chip mobos for Intel and AMD support 16G-32G of ram and 5 or more SATA-III (6Gb/s) ports. Throw in a small SSD and you are suddenly able to push 400MBytes/sec+ in random-accessed file bandwidth out your network using just ONE of those SATA-III ports. That's in a desktop configuration! So today's modern desktop mobos are equivalent to last year's server mobos at 30-50% the power cost.

A modern high-end configuration as above eats ~60W idle, whereas the absolute minimum power draw on a 48-core Supermicro box w/ 64G of ram (the ram eating most of the power) is ~250-300W. Big difference.

So lots of cores is not necessarily going to be the best solution. In fact, probably the only really good fit for a 48+ core box is going to be for virtualization purposes.

-Matt

Re:63 CPUs? (1)

the linux geek (799780) | more than 2 years ago | (#35993476)

I'm kind of surprised that large-hardware support is that low. There are plenty of large RISC or mainframe commercial systems with more than 64 cores and 512GB.

Re:63 CPUs? (0)

Anonymous Coward | more than 2 years ago | (#35994340)

yes. all of which are being donated to these projects for people to compile/develop/test on.

Re:63 CPUs? (1)

Anonymous Coward | more than 2 years ago | (#35993486)

The two sides almost came to blows over whether to allocate four bits or eight bits in the descriptor for the CPU ID. Finally, after weeks of infighting they compromised at six bits, with one reserved value.

from "Difficult: the Inside Story of OpenBSD" (unpublished)

Re:63 CPUs? (0)

Anonymous Coward | more than 2 years ago | (#35993518)

"Taking x86-64 to 63 cpus was easy because we have 64-bit bit instructions. I'm stealing a bit in the cpumask for a pmap spinlock, or it would be 64."

http://leaf.dragonflybsd.org/mailarchive/users/2010-12/msg00079.html

Re:63 CPUs? (0)

Anonymous Coward | more than 2 years ago | (#35993714)

"Taking x86-64 to 63 cpus was easy because we have 64-bit bit instructions. I'm stealing a bit in the cpumask for a pmap spinlock, or it would be 64."

OMG, theft! I'm calling 911 right now!

911: "911, state the nature of your emergency."
Me: "Yes... I would like to report a theft... Matt Dillon stole a bit from my cpumask..."
911: "Matt Dillon, the actor?"
Me: " No, not that Matt Dillon, the Dragonfly BSD guy!"
911: "Uh... Ok... Do you still have the said mask?"
Me: "Yes, but the stolen bit is being used in a spinlock, Dillon admitted it, he even bragged about it on the Internet!"
911: "Do you know the location of the stolen item?"
Me: "For the sake of all things BSD, send the police over to Matt Dillon's house and return my bit, I need it!"
911: "Calm down, madam, I'm dispatching a unit as we speak..."
Me: "Who are you calling a madam?! I'm a guy!"
911: "Uhh... I said - calm down, mad man..."
Me: "Oh... Ok... Let me know when you get my bit back, K THX BYE."

Re:63 CPUs? (2)

m.dillon (147925) | more than 2 years ago | (#35994904)

Atomic ops are limited to 64 bits for the most part (though maybe 128 bits w/ FP insns, we can't really depend on that). There are several subsystems in the kernel which rely on atomic ops to test and manipulate cpu masks and would have to be reformulated.

The main issue there is one of performance. We don't want to have to use a spinlock for cases where cmpxchg solves the problem because spinlock collisions can get VERY expensive once you have more than 8 cpus in contention.

Similarly the stolen bit for the pmap spinlock (reducing the limit from 64 to 63) is there to deal with a race where one thread needs to do a SMP invtlb style operation just as a new cpu tries to switch-in a thread using the same pmap (adding another cpu to the mask of cpus that need their TLBs to be invalidated). It's a fairly rare race but it has to be dealt with properly. Also fixable with some work.
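The stolen-bit scheme can be sketched with C11 atomics. This is an illustration of the idea only, not DragonFly's actual code: the top bit of a 64-bit mask is reserved as a lock (hence the 63-CPU limit), and a cpu is added to the mask with a cmpxchg loop instead of a spinlock.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative only: reserve the top bit of a 64-bit cpumask as a
 * pmap lock, leaving 63 usable CPU bits. */
#define PMAP_LOCK_BIT (1ULL << 63)
#define CPU_BIT(n)    (1ULL << (n))   /* valid for n in 0..62 */

/* Add cpu n to the mask via compare-exchange.  If the lock bit is
 * held (say, by a TLB-invalidation pass that needs a stable mask),
 * fail and let the caller retry. */
static int cpumask_add(_Atomic uint64_t *mask, int n) {
    uint64_t old = atomic_load(mask);
    do {
        if (old & PMAP_LOCK_BIT)
            return 0;                 /* locked: caller must retry */
    } while (!atomic_compare_exchange_weak(mask, &old, old | CPU_BIT(n)));
    return 1;
}
```

The cmpxchg loop is the cheap common case described above; only the rare race where the lock bit is actually set forces the caller onto a slower retry path.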

The 512GB memory limit only exists because we still populate the DMAP entries manually and it is currently hard-coded for 512G (one DMAP pte). A good programmer could fix that issue in about 2 hours, but we're not going to worry about it unless we actually get hardware to test with that has > 512G of dram populated. That much dynamic ram is a bit beyond our budget, not to mention the 1000W+ (~8A) of power it would eat.

-Matt

Using 1000 watts of power for DRAM? (1)

billstewart (78916) | more than 2 years ago | (#35995132)

1000 watts is about what you need for a toaster. And the usual operating system for various toasters was always BSD, so what's not to like there?

Re:Using 1000 watts of power for DRAM? (2)

m.dillon (147925) | more than 2 years ago | (#35995522)

Well, you don't run your toaster 24x7. In fact, most residential homes use less than 1000W of power averaged 24x7 for the entire home.

Running 1000W 24x7 is ~$180-$240/month in electricity depending on where you live. Commercial power isn't much cheaper (and due to improvements in density, most colo facilities now also charge for power or include only a few amps in the lowest tier of service).
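Those figures are straight arithmetic: 1000 W for a 30-day month is 720 kWh, which lands in the $180-$240 range at an assumed $0.25-$0.33 per kWh (the rate is an assumption; it varies widely by region).

```c
/* 1000 W running 24x7 for a 30-day month is 720 kWh; multiply by an
 * assumed per-kWh rate to get the monthly bill. */
static double monthly_cost_usd(double watts, double usd_per_kwh) {
    double kwh = (watts / 1000.0) * 24.0 * 30.0;   /* 720 kWh at 1000 W */
    return kwh * usd_per_kwh;
}
```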

It adds up fairly quickly. The DragonFly project has 7 core production machines. Six of those in my machine room together eat around ~3.4A of power 24x7 (idle), and a lot more when they are busy. The last one is colocated and eats ~2A. There are another 2-3 essentially dedicated colocated boxes which are donated and another ~12 boxes on the third tier which donate mirroring and bandwidth. And DragonFly is a very small project.

For small projects... and here I'm not talking just about BSD projects but also many Linux projects, running your own machines requires either a fat purse somewhere or a sponsor. FreeBSD gets a lot of sponsorship to help cover continuing costs.

For DragonFly we get some sponsorship in the form of a few remote colocated boxes with reasonable bandwidth, but mostly there are just two of us funding ongoing operations. I also fund getting ~4 new almost-bleeding-edge single-socket machines every year to keep us up-to-date on hardware and post the old boxes to various developers in need as they get replaced by new boxes, in a sort of pipeline. But it's taken 3 years to build that pipeline. New boxes come in and operate as test machines for ~1 year, then production machines for ~1-2 years, then get rotated out.

This situation has gotten a little better over the years as small projects can now run their boxes on real machines at home with a reasonable amount of upstream bandwidth, then drill a VPN through to a colocated IP service to route the IPs without having to deal with ISP filters (ISP-allocated static IPs tend not to work very well because AT&T and COMCAST's stateful filters can mess up your TCP connections when you have a lot of concurrency).

Even so it seems to me that a lot of projects don't even have that... they either rent time on virtual machines or depend on sharing space with other larger projects. It's possible to do a lot with virtualized resources, up to a point, but rented virtualized resources tend to have very non-deterministic performance and you can wind up in trouble if you get a demand spike.

-Matt

No thanks, I'll wait for OpenBSD 5. (0)

Anonymous Coward | more than 2 years ago | (#35993798)

Everyone knows it's no good until at least version 5.

What about FBI back doors? (-1)

Anonymous Coward | more than 2 years ago | (#35993914)

I didn't see anything in the release notes about FBI back doors. Are these still supported? Am I at risk of losing this important feature if I upgrade?

Re:What about FBI back doors? (1)

Anonymous Coward | more than 2 years ago | (#35994246)

IPsec stack audit was performed, resulting in:

        * Several potential security problems have been identified and fixed.
        * ARC4 based PRNG code was audited and revamped.
        * New explicit_bzero kernel function was introduced to prevent a compiler from optimizing bzero calls away.

-- http://www.openbsd.org/49.html
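For the curious, the usual trick behind an explicit_bzero-style function can be sketched portably like this (an illustrative stand-in, not OpenBSD's actual implementation): routing the call through a volatile function pointer keeps the compiler from proving the stores dead and eliminating them.

```c
#include <stddef.h>
#include <string.h>

/* The compiler cannot assume a volatile function pointer still points at
 * memset, so it cannot elide the call even if the buffer is never read again. */
static void *(*const volatile memset_ptr)(void *, int, size_t) = memset;

/* Hypothetical portable stand-in for explicit_bzero(3). */
static void explicit_bzero_sketch(void *buf, size_t len)
{
    memset_ptr(buf, 0, len);
}
```

A plain `bzero(buf, len)` just before `buf` goes out of scope is a classic dead-store-elimination target, which is exactly the case the audit was guarding against.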


At the risk of being modded flamebait, etc (1)

00_NOP (559413) | more than 2 years ago | (#35994376)

Back in the day - or rather the last time I was paying a lot of attention to /. - all BSD articles were flooded by that "BSD is dying... blah, blah confirms it" story (I believe the kids call it a "meme" but I am too old for that).

Now they are not here: is this because they are blocked before they get posted or because it was one obsessive who died/finally had a beer/discovered masturbation or because the idea just, errr, died?

Really interested to know what the answer to this one is.

Re:At the risk of being modded flamebait, etc (0)

Anonymous Coward | more than 2 years ago | (#35994528)

Because BSD now -is- dead so it's not really worth posting about.

Really, any time I see "new (foo)BSD released" the only proper reaction is "so what? We've got Linux". There's nothing the BSDs can do that Linux can't do better.

In before this gets modded Troll or Flamebait by fanboys, despite it being truthful and accurate.

Re:At the risk of being modded flamebait, etc (0)

Anonymous Coward | more than 2 years ago | (#35994702)

It isn't true.

DragonFly has the HAMMER filesystem, which is considerably ahead of anything available on Linux right now. (btrfs looks exciting, but isn't ready yet; HAMMER has been running for ~3 years). HAMMER gives you fine-grained snapshots, CRCs all the way, replication, and dedup. Any Linux FSes close?

Re:At the risk of being modded flamebait, etc (0)

Anonymous Coward | more than 2 years ago | (#35994810)

Solaris called. It wanted to say, "been there, done that".

Re:At the risk of being modded flamebait, etc (1)

Anonymous Coward | more than 2 years ago | (#35994946)

Absolutely! And Solaris's ZFS is a superb filesystem. Definitely superior to HAMMER in a number of ways. Miles ahead of anything when its integrated volume management is considered.

But HAMMER offers a few interesting design points to contrast with ZFS. HAMMER was born with support for what would be called 'bprewrite' in ZFS -- dedup, for example, can be carried out just fine in the garbage-collection phase in HAMMER, something which has no analogue in ZFS. HAMMER's performance under write streams is solid, since it can stream writes to contiguous sections of disk and repack them at idle time.
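A toy illustration of dedup at garbage-collection time (nothing like HAMMER's actual on-disk logic; all names here are made up): scan blocks after the fact and repoint any logical block whose contents match an earlier physical copy, reclaiming the duplicate.

```c
#include <stdint.h>
#include <string.h>

#define BLK_SIZE 8

/* Toy volume: each logical block holds data plus a "physical" block index.
 * Before dedup, every logical block points at its own physical copy. */
struct blk {
    uint8_t data[BLK_SIZE];
    int     phys;            /* index of the physical block backing this data */
};

/* Offline dedup pass, garbage-collection style: for each block, look for an
 * earlier block with identical contents and repoint to it. A real filesystem
 * would compare checksums first and byte-verify only on a match. */
static int dedup_pass(struct blk *blks, int n)
{
    int freed = 0;
    for (int i = 1; i < n; i++) {
        for (int j = 0; j < i; j++) {
            if (blks[j].phys == j &&     /* j still owns its physical copy */
                memcmp(blks[i].data, blks[j].data, BLK_SIZE) == 0) {
                blks[i].phys = j;        /* share j's copy; i's is reclaimed */
                freed++;
                break;
            }
        }
    }
    return freed;                        /* physical blocks reclaimed */
}
```

The point being that none of this has to happen on the write path: the dedup decision can be deferred entirely to an idle-time scan, which is the 'bprewrite'-style capability being contrasted with ZFS above.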

The DragonFly analogue to ZFS's L2ARC is swapcache (neither has a really good analogue in Linux... dm-flashcache would be your best bet, I think). swapcache is more general than the L2ARC, I think, though in practice the L2ARC may be the better design.

Unrelated, but if you're willing to look into the platform (something the BSDs and Illumos integrate; I'll use glibc for the Linux platform here), there is a lot of interesting work in malloc() in the BSDs and Solaris: FreeBSD and NetBSD have the well-engineered jemalloc (Jason Evans's allocator), while DragonFly and Solaris run variants of the Bonwick/Adams magazine/slab allocator. glibc runs a considerably weaker (imho) design: a hacked-up older version of ptmalloc, itself hacked up from Doug Lea's design. You might see recommendations on Linux systems to use Google's tcmalloc to improve MySQL scaling (for example); you won't see those recommendations on {Free,Net,DragonFly}BSD and Solaris.
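The magazine idea mentioned above can be sketched in a few lines (a toy front-end over malloc, not Bonwick's actual implementation; all names are made up): each thread or cpu keeps a small stack of ready objects, so the hot path allocates and frees without touching any shared state or lock.

```c
#include <stdlib.h>

#define MAG_ROUNDS 4      /* objects cached per magazine */
#define OBJ_SIZE   64     /* this magazine serves one fixed size class */

/* A magazine is a small per-cpu/per-thread stack of ready objects. */
struct magazine {
    void *rounds[MAG_ROUNDS];
    int   count;
};

static void *mag_alloc(struct magazine *m)
{
    if (m->count > 0)
        return m->rounds[--m->count];   /* fast path: pop from the magazine */
    return malloc(OBJ_SIZE);            /* miss: fall back to the depot/heap */
}

static void mag_free(struct magazine *m, void *p)
{
    if (m->count < MAG_ROUNDS) {
        m->rounds[m->count++] = p;      /* fast path: keep it for reuse */
        return;
    }
    free(p);                            /* magazine full: return to the heap */
}
```

In the real design, full and empty magazines are exchanged with a shared "depot" in bulk, so lock traffic is amortized over MAG_ROUNDS operations instead of paid per allocation.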

Anyway, the point I originally wanted to make was not 'boo Linux', but that there is interesting engineering going on in other communities! Engineering that is at least worth a look...

Re:At the risk of being modded flamebait, etc (5, Informative)

m.dillon (147925) | more than 2 years ago | (#35996552)

ZFS has a large team of people behind it and resources that I don't have. That said, HAMMER wasn't really designed to compete against it: it was designed to solve similar problems, but not to replace RAID as ZFS was. And ZFS is no panacea, as anyone who uses it can tell you. The IP is now owned by Oracle, and the license isn't truly open-source. ZFS itself is an extremely heavyweight filesystem and essentially requires its ARC cache and relatively compatible workloads to work efficiently... and a veritable ton of memory.

HAMMER has a tiny footprint by comparison, gives you fine-grained automatic snapshots, and most importantly gives you near real-time queueless mirroring streams that makes creating backup topologies painless. Among many other features. Frankly ZFS might be the filesystem of choice if you are running dozens of disks but HAMMER is a much better fit otherwise.

People scream the RAID mantra all the time but the vast majority of people in the open-source world don't actually need massive RAID arrays to put together a reliable service. Often it takes just one 2TB HD and one 80G SSD x a few servers and in DragonFly HAMMER + swapcache fits that bill extremely well.

Our ultimate goal is real-time multi-master clustering. HAMMER doesn't get us quite there, primarily owing to the topology mismatch between HAMMER's B-Tree and OS filesystem cache topologies (mostly the namecache), but as the work progresses it will eventually achieve that.

In any case, there's a huge difference between the people who do the actual design and implementation of these filesystems and the people who merely use them. Our goals as designers and programmers are not necessarily going to match the goals of the typical end-user, who wants a magical black box that does everything under the sun with maximal performance in all respects and works without having to lift a finger. Even ZFS can't achieve that!

-Matt

Dragonfly is looking interesting (1)

AbrasiveCat (999190) | more than 2 years ago | (#35994908)

I have been running Open for quite a while (since 2.8, I believe), but DragonFly has gotten to the point where it has some interesting things to try: the HAMMER file system and the MP lock work. I may have to give it a spin. Gotta love the BSD crowd, different groups trying it their way! :)

What's the point of dragonfly again? (1)

Anonymous Coward | more than 2 years ago | (#35995088)

Yes, I'm a FreeBSD person, and I'm asking an honest question. I've read the info on DragonFly, but really, where is the result? It split off at the same point that I started on FreeBSD with 5.0, except DragonFly forked from the older 4.x base. The guy who forked DragonFly wanted to take a different approach to multiprocessing. I have to assume he knew something of what he was talking about. The project is mature enough; it's had years. Multicore processors are common now. So where are the benchmarks proving his point? There has to be some specific test case and set of compiled applications that performs better on his architecture.

If not, then why is he continuing the project? I can respect his going his own way. But it seems to me he was proven wrong, so why continue?
