
Linux 3.7 Released

timothy posted about 2 years ago | from the under-the-radar dept.


The wait is over; diegocg writes "Linux kernel 3.7 has been released. This release adds support for the new ARM 64-bit architecture; ARM multiplatform — the ability to boot into different ARM systems using a single kernel; support for cryptographically signed kernel modules; Btrfs support for disabling copy-on-write on a per-file basis using chattr; faster Btrfs fsync(); a new experimental 'perf trace' tool modeled after strace; support for the TCP Fast Open feature on the server side; experimental SMBv2 protocol support; stable NFS 4.1 and parallel NFS; a vxlan tunneling protocol that allows transferring Layer 2 Ethernet packets over UDP; and support for the Intel SMAP security feature. Many small features, new drivers, and fixes are also available. Here's the full list of changes."
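
As an illustration of the server-side TCP Fast Open support mentioned in the summary, here is a minimal C sketch, assuming a 3.7+ kernel with the server bit of net.ipv4.tcp_fastopen enabled; the port number and queue length below are made-up illustration values, not anything from the release notes:

    /* Minimal sketch: opt a listening socket into TCP Fast Open.
     * Assumes Linux 3.7+ with the server side of net.ipv4.tcp_fastopen
     * enabled; port and queue length are illustrative only. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #ifndef TCP_FASTOPEN
    #define TCP_FASTOPEN 23   /* older libc headers may not define it yet */
    #endif

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }

        /* Queue length for connections whose SYN carried data but whose
         * Fast Open cookie has not been validated yet. */
        int qlen = 16;
        if (setsockopt(fd, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen)) < 0)
            perror("setsockopt(TCP_FASTOPEN)");  /* fails on older kernels */

        listen(fd, 128);
        /* accept() as usual; data sent in the SYN may already be readable. */
        close(fd);
        return 0;
    }

On older kernels the setsockopt() call simply fails and the server falls back to ordinary three-way handshakes.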


151 comments


Linux 3.7 Released (-1)

Anonymous Coward | about 2 years ago | (#42251261)

Out, proud and I just came in my pants.

Re:Linux 3.7 Released (2, Funny)

Anonymous Coward | about 2 years ago | (#42251319)

Just proves what a wanker you are, then

Who cares? (-1)

Anonymous Coward | about 2 years ago | (#42251311)

Who cares?

Re:Who cares? (1, Offtopic)

smitty_one_each (243267) | about 2 years ago | (#42251379)

They guy on First.

Re:Who cares? (0)

Anonymous Coward | about 2 years ago | (#42253737)

typo ruined it...

Linux 3.7 has bugs! (1, Funny)

For a Free Internet (1594621) | about 2 years ago | (#42251315)

Beware everyone!

Last night I upgraded my boxen to Linux 3.7. AWERSORME!

But when, proud of my accomplishments, I went to a well-deserved sleep in my bed, I discovered a horrible problem!

BEDBUGS!

These little critters are gross and almost impossible to get rid of. Since the only thing I did yesterday was upgrade my Linuxes, it had to come from them.

Dear friends on slashdort, Play it safe and wait for the next point release!

Re:Linux 3.7 has bugs! (0, Offtopic)

smitty_one_each (243267) | about 2 years ago | (#42251405)

We got politicians who will claim to fix all your problems. First, give them your liberty and your vote.

Improved SAMBA client support? (5, Informative)

CajunArson (465943) | about 2 years ago | (#42251337)

experimental SMBv2 protocol support;

This can't come soon enough for Linux clients. SAMBA already has SMBv2+ server-side support, with SAMBA 4 apparently even supporting SMB 3.0. This is especially true for high-latency connections through a VPN, where the reduced chattiness of the newer SMB protocols gives a nice performance bump.

You can post all day & all night about how NFS/CODA/GlusterFS/etc./etc. is better, but at the end of the day the CIFS protocols are supported by every Windows machine out there and should be supported by Linux too. Plus, if you are a free-software purist, then you could set up a 100% GPL'd installation with SAMBA servers and Linux clients, so it would totally make sense for the Linux clients to actually support the modern protocols.

Re:Improved SAMBA client support? (4, Funny)

Anonymous Coward | about 2 years ago | (#42251501)

This is exactly what Linux was missing: Super Mario Brothers version 2.

Re:Improved SAMBA client support? (2)

Ynot_82 (1023749) | about 2 years ago | (#42251543)

the ending's disappointing, though...

Re:Improved SAMBA client support? (1)

ByOhTek (1181381) | about 2 years ago | (#42251585)

Are you talking about "The Lost Levels" Mario 2, or "Doki-Doki Panic" Mario 2?

Re:Improved SAMBA client support? (0)

Anonymous Coward | about 2 years ago | (#42251829)

The Lost Levels was v1.1.
The official numbered 2 was really v1.1 of another separate product.
Super Mario Bros 3 was the official numbered 3, and was actually labelled v3.0.

So we never really had an SMBv2 before.

Re:Improved SAMBA client support? (0)

Anonymous Coward | about 2 years ago | (#42252085)

He's talking about the real SMB2 released in Japan, before it was re-released as The Lost Levels.

SMB2: The name sounds so dirty (2)

tepples (727027) | about 2 years ago | (#42251915)

Super Monkey Ball 2 [wikipedia.org] .

Re:Improved SAMBA client support? (4, Insightful)

rubycodez (864176) | about 2 years ago | (#42251627)

Purists can also get Linux in the door at clients with Windows desktops; the basics of authentication, file, and print sharing are enough for most small/medium businesses. I've done that a few times over the last five years. The clients are still happy because the server just works, and they're adopting more Linux boxes, including some desktops.

Re:Improved SAMBA client support? (1)

hobarrera (2008506) | about 2 years ago | (#42251663)

Why do you state that CIFS "should be supported by Linux"? And why should I, as a *nix user, care about what Windows supports?

Integrate *nix with Windows (1)

tepples (727027) | about 2 years ago | (#42251925)

And why should I, as a *nix user, care about what Windows supports?

Because you may end up having to integrate the *nix that you use with the Windows that an employer, client, etc. uses.

Re:Integrate *nix with Windows (1)

hobarrera (2008506) | about 2 years ago | (#42253085)

Windows has supported WebDAV since Windows 98, IIRC. And I think *nix users tend to avoid Windows employers/clients. There are plenty of jobs, so you can be picky about the ones you choose.

Jobs not evenly distributed geographically (1)

tepples (727027) | about 2 years ago | (#42253385)

There are plenty of jobs, so you can be picky about the ones you choose.

Unless you happen to have grown up in an area where there aren't plenty of jobs and need a job to save money so that you can move to where there are plenty of jobs.

Re:Improved SAMBA client support? (2)

Shaman (1148) | about 2 years ago | (#42251993)

Uhm, since deployed Windows systems largely don't support SMB 2.x, much less SMB 3.x, I fail to see how this is a major failing on the part of Linux. Although I am of course entirely for supporting the current protocols.

Re:Improved SAMBA client support? (2)

kelemvor4 (1980226) | about 2 years ago | (#42252179)

Uhm, since deployed Windows systems largely don't support SMB 2.x, much less SMB 3.x, I fail to see how this is a major failing on the part of Linux. Although I am of course entirely for supporting the current protocols.

Windows 8 supports SMB3, and MS claims to have sold 40 million copies already. Sources: http://www.reuters.com/article/2012/11/27/us-microsoft-windows-idUSBRE8AQ18W20121127 [reuters.com] and https://en.wikipedia.org/wiki/Server_Message_Block#SMB_3.0 [wikipedia.org]

Re:Improved SAMBA client support? (1)

rsmith-mac (639075) | about 2 years ago | (#42253231)

And everything since Vista/Server2K8 supports SMB 2.x. Unless you're still running XP machines (in which case your time is quickly approaching), your systems are probably already using SMB 2.x.

DRM (1)

leromarinvit (1462031) | about 2 years ago | (#42251371)

Signed modules? Yay for tivoization!

Re:DRM (1)

ssam (2723487) | about 2 years ago | (#42251485)

except you control the keys

Re:DRM (4, Interesting)

leromarinvit (1462031) | about 2 years ago | (#42251673)

Only when you control the kernel/boot loader. I have a feeling that this will be used a lot by vendors to lock you out of your own devices, e.g. Android phones etc.

I'm as paranoid as the next geek, and the idea of secure boot etc. appeals a lot to me if done correctly. As in, if it's MY device, then I get to decide what runs on it, and no one else. But it's a tool, and as such it can be used both for you and against you. There can't be a technical solution; technology is dumb. We need a legal solution, either in the form of regulation or widespread adoption (and enforcement) of the GPLv3.

Re:DRM (0)

Anonymous Coward | about 2 years ago | (#42253851)

"technology is dumb. We need a legal solution"

You think our lawmakers AREN'T dumb? These are the same people who think that the way to get out of a fiscal cliff that is caused by massive debt is to take on more debt.

Re:DRM (1)

dpilot (134227) | about 2 years ago | (#42251611)

Signed modules are a two-edged sword. They can be used for Tivoization, as you say. They can also be used by you to secure your own system.

Really, it's too bad that none of the major distributions have set this up. I've had TPMs on my past two work laptops. I've rather wanted to "take ownership" of them, principally to prevent anyone else from doing so. But it's rather a pain: supported, but only in an expert-only mode, so I've never had the time.

Module signing would be the same type of thing. If RedHat and Ubuntu put module-signing infrastructure in place in a user/owner-empowering way, it would help security for everyone, and they'd occupy that space, making it just a bit harder for someone else to move into the vacuum and take it over.

Re:DRM (4, Informative)

Microlith (54737) | about 2 years ago | (#42251623)

Module signing has been in place with Fedora 18 and Ubuntu 12.10 as it's required to be compliant and get a signature on the bootloader for Secure Boot. I assume the code was backported.

Re:DRM (1)

Anonymous Coward | about 2 years ago | (#42251651)

Signed modules are a two-edged sword. They can be used for Tivoization, as you say. They can also be used by you to secure your own system.

If root is inserting untrusted modules into his kernel, he has bigger problems than module signing can fix.

Re:DRM (1)

Bengie (1121981) | about 2 years ago | (#42252471)

It takes little effort to sign, but it adds more security to your system. Maybe not a lot more, but more nonetheless.

Re:DRM (1)

Anonymous Coward | about 2 years ago | (#42252511)

It's a security-in-depth measure. If an attacker gets root access to a machine, they'll often load a rootkit as a kernel module.
If they can't load kernel modules, they may have to do something more intrusive, with a greater risk of discovery.

Re:DRM (1)

wolrahnaes (632574) | about 2 years ago | (#42252833)

You're absolutely correct that if an attacker is performing actions as root you have a big problem, but if that attacker is able to succeed in injecting modules into the kernel, you have much bigger problems. Root's actions can still be monitored, logged, etc., whereas a malicious kernel module can hide any evidence of its existence from the running system.

Having this feature enabled (and of course keeping the private key elsewhere if you build your own modules) means that turning a root exploit into a rootkitted box requires a kernel bug rather than just insmod.

UDP ... (0)

Anonymous Coward | about 2 years ago | (#42251385)

Why does vxlan transfer L2 packets using UDP and not TCP? I have also seen this on other L2 protocols like L2TP and PPTP ... just curious ...

Re:UDP ... (5, Interesting)

vlm (69642) | about 2 years ago | (#42251533)

Why does vxlan transfer L2 packets using UDP and not TCP? I have also seen this on other L2 protocols like L2TP and PPTP ... just curious ...

TCP has a feedback loop when packets are lost... So you'd have that at both layers, the actual session and the tunnel.

It's an engineering thing: if you embed a feedback loop inside a feedback loop, things will be OK if you're VERY careful, but most people are not, and you'll make a lovely oscillator and just blow it all to bits.

Fundamentally, UDP doesn't guarantee delivery, so it's OK to shove it inside UDP, and TCP has its own repair mechanism so you don't need to guarantee its sub-layers; it's not like you're missing anything.

Finally, tunneling over TCP just kills performance, because TCP loves big buffers for each connection, so you need megatons of RAM until you start dropping packets and letting TCP police itself, which meanwhile results in horrific latency. But if you tunnel over UDP, you don't really need much of a buffer on the tunneler itself, and you'll end up with better latency overall. So it's cheaper and works better. Hard to beat that combo...

Re:UDP ... (1)

seyfarth (323827) | about 2 years ago | (#42251965)

Nice discussion! I have run OpenVPN over port 80 TCP in order to get past a firewall. It worked but a little later I tried port 80 UDP. It worked better. I was happy to discover an unblocked UDP port for my needs.

Re:UDP ... (1)

TheLink (130905) | about 2 years ago | (#42252031)

UDP port 53 or port 500 are often unblocked.

UDP port 53 might be redirected to a local server in many places though.

Re:UDP ... (4, Informative)

Anonymous Coward | about 2 years ago | (#42252061)

I agree with your comments but want to add a clarification to your last paragraph for the benefit of all /. readers.

TCP needs enough buffer that it can hold a copy of each packet sent until it receives an acknowledgment because it may need to re-transmit the packet if it gets lost. Once the packet is acknowledged as having been received, TCP frees up the space. As such, there is a straightforward way of computing how much buffer TCP needs if you want to fully utilize the bandwidth of the bottleneck link along the path.

The amount of buffer is twice the round-trip time multiplied by the bandwidth of the bottleneck link (aka "the bandwidth-delay product"). More than this is a waste, as it won't be used. Now, the effective round-trip time will increase if you have packet loss along the path. And congestion in the network (possibly made worse by the buffer bloat the previous post points out) will also increase the round-trip time. And the bandwidth of the bottleneck link is probably not directly knowable by the end hosts (although it can be reasonably estimated). Thus the amount of buffer space can only be estimated a priori.

Note: you will still have to have this much buffer space to achieve full performance even if you tunnel TCP through UDP. It is just that you won't have to have much more than that amount. Also, having inner and outer TCP connections results in them fighting against each other, as you point out. (That is why it is not a good idea to tunnel TCP over TCP, not primarily because of buffer concerns.)

Note: you do need to have sufficient space for the inner TCP or it won't be able to operate at full speed. But you won't need double the space as you would with TCP within TCP (assuming you could solve the fighting among themselves issue).
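
To put some made-up illustration numbers on the bandwidth-delay product formula described above:

    /* Back-of-the-envelope bandwidth-delay product, per the formula above.
     * The link speed and RTT are made-up illustration values. */
    #include <stdio.h>

    int main(void)
    {
        double bottleneck_bps = 100e6;  /* 100 Mbit/s bottleneck link */
        double rtt_s = 0.050;           /* 50 ms round-trip time */
        double buffer_bytes = 2.0 * rtt_s * (bottleneck_bps / 8.0);

        printf("buffer needed: %.0f bytes (about %.2f MB)\n",
               buffer_bytes, buffer_bytes / 1e6);  /* prints ~1.25 MB */
        return 0;
    }

So a tunnel endpoint that just forwards UDP datagrams needs nothing like that much buffering; only the TCP endpoints do.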

Re:UDP ... (1)

TheNinjaroach (878876) | about 2 years ago | (#42252589)

TCP needs enough buffer that it can hold a copy of each packet sent until it receives an acknowledgment because it may need to re-transmit the packet if it gets lost.

Thanks for the explanation!

Re:UDP ... (1)

dpilot (134227) | about 2 years ago | (#42251551)

I haven't RTFA, but looking at the things you want to transport, it looks as if you're tunneling other stuff - potentially including TCP.

Tunneling TCP over TCP is generally a Bad Thing. The flow control of the tunnel and the flow control of the tunneled can interact in really ugly ways. By using UDP to create the tunnel, when you send TCP over that tunnel there will be only one flow control.

This is from the ancient days of "PPP over SSH/Telnet", when it used to be possible to get a shell account, but not IP access, from many "internet providers".

Re:UDP ... (2)

advantis (622471) | about 2 years ago | (#42251575)

At that point you don't need the reliability and retransmission features of TCP. Once you stack the layers up, TCP will take care of that anyway, without running it over TCP again. Think IP: unreliable datagrams; you put TCP on it and presto: reliable, ordered, everything. Run a VPN over UDP and you end up with something like IP -> UDP -> TCP, and then TCP again does its thing, without a care in the world about the layers below. The same principles apply to this new thing too. If your underlying layers are flaky, you can't make them less flaky by adding more TCP to your cake. In effect, you make them even more flaky, as each TCP layer tries to do its own retransmission and floods your line.

Re:UDP ... (4, Interesting)

vlm (69642) | about 2 years ago | (#42251701)

I forgot to mention one real-life situation where UDP over TCP does not work. UDP conceptually works pretty well with real-time live streaming: "Here's 5 seconds of audio of the ball game." Five seconds later, if lost, that packet is meaningless; don't bother re-sending it, the RX will just output 5 secs of silence or whatever. TCP does not understand that at all, so you can get serious problems with live streaming if you try to stick that inside TCP and experience significant network congestion. Buffers get bigger until they pop, "live" becomes randomly "tape delayed" based on the recipient... Also, TCP doesn't understand variable bit rate, so its ideas about buffer allocation bear little resemblance to what the codec actually wants to do.

Re:UDP ... (1)

elfprince13 (1521333) | about 2 years ago | (#42251971)

Presumably the reduced overhead of UDP was considered by the developers to be a worthwhile tradeoff against the convenience and stronger guarantees afforded by TCP.

Re:UDP ... (3, Insightful)

petermgreen (876956) | about 2 years ago | (#42252059)

TCP tries to (and usually succeeds in) transferring a stream of bytes reliably and in the right order over an unreliable packet-based system.

To achieve this, two things have to happen:
1: the sender must resend lost packets
2: the recipient must hold packets that arrive after a lost packet until the lost packet has been retransmitted

However, there is no way for a sender to determine whether a packet has actually been lost or just delayed. So the sender must use a timeout to deem a packet lost and retransmit it.

Now suppose someone builds a tunnel using TCP and runs TCP over that tunnel so your stack looks something like.

Application
TCP (inner)
IP
Tunneling protocol
TCP (outer)
IP
underlying network

Everything works fine as long as no packets are lost. However, when a packet is lost by the underlying network, the outer TCP layer freezes all transmissions through the tunnel until it has retransmitted the packet. During this time it is likely that the inner TCP layer will also deem the packet(s) lost and try to retransmit them (possibly more than once, due to the auto-adjusting timeouts used by TCP). Then, when the outer TCP does recover, it will deliver both the original packet and the retransmission from the outer TCP. This behaviour is very similar to what happens when a network is congested and may cause the inner TCP to unnecessarily back off the data rate.

Re:UDP ... (1)

petermgreen (876956) | about 2 years ago | (#42252069)

it will deliver both the original packet and the retransmission from the outer TCP

That should have said

it will deliver both the original packet and the retransmission from the inner TCP

kernel in c++? (1, Funny)

Anonymous Coward | about 2 years ago | (#42251423)

kernel in c++? no? ill move on then,

Re:kernel in c++? (5, Informative)

advantis (622471) | about 2 years ago | (#42252217)

And you need a kernel in C++ why? Because you can't get your head around objects that aren't enforced by the language? Or you can't get your head around doing error cleanup without exceptions enforced by the language? The Linux kernel even does reference counting without explicit support from the language.

Just to get a complete picture, I looked at some competing kernels (I skimmed over the source really quickly):

FreeBSD kernel - C, with objects and refcounts, similar to Linux
OpenBSD kernel - C, but I have a hard time finding their equivalent to objects and refcounts, and I gave up looking
GNU Hurd - C, and I'm not even going to bother looking around too much
XNU - C, but with I/O Kit in C++ - works only with Apple software?
Haiku kernel - C++, which is interesting in itself - but supports only IA-32?
Plan9 kernel - C
OpenSolaris kernel - C

I think it's pointless to look at the rest. All the others listed by Wikipedia are even more obscure than some of the above.

C seems to dominate the kernel arena, so next time you post, I'd like to know what you think C++ would bring to the party. No, really. I've seen many people dismiss Linux because it isn't written in C++, but I haven't seen a single one of these trolls (yes, I'm feeding you) say what that would accomplish, and I'm really really really curious. I'll throw a bone from the XNU Wikipedia article: "helping device drivers be written more quickly and using less code", and that seems to be the only bit written in C++, yet Linux does pretty well without it, and apparently so do the majority (see above).
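
For readers wondering what "reference counting without explicit support from the language" looks like, here is a hypothetical userspace C sketch of the kref-style pattern (the struct and function names are made up for illustration; the kernel's own helpers live in include/linux/kref.h):

    /* Hypothetical userspace sketch of a kref-style embedded reference count:
     * an atomic counter inside the object plus get/put helpers, no language
     * support required. Names here are made up for illustration. */
    #include <stdatomic.h>
    #include <stdlib.h>

    struct buffer {
        atomic_int refcount;
        size_t len;
        char *data;
    };

    static struct buffer *buffer_new(size_t len)
    {
        struct buffer *b = malloc(sizeof(*b));
        if (!b)
            return NULL;
        atomic_init(&b->refcount, 1);   /* creator holds the first reference */
        b->len = len;
        b->data = calloc(1, len);
        return b;
    }

    static void buffer_get(struct buffer *b)
    {
        atomic_fetch_add(&b->refcount, 1);
    }

    static void buffer_put(struct buffer *b)
    {
        /* Last reference dropped: release the object, much as kref_put()
         * would via its release callback. */
        if (atomic_fetch_sub(&b->refcount, 1) == 1) {
            free(b->data);
            free(b);
        }
    }

    int main(void)
    {
        struct buffer *b = buffer_new(64);
        buffer_get(b);   /* hand a second reference to another "subsystem" */
        buffer_put(b);   /* that subsystem is done with it */
        buffer_put(b);   /* creator drops its reference: freed here */
        return 0;
    }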

Re:kernel in c++? (4, Informative)

petermgreen (876956) | about 2 years ago | (#42252403)

IIRC modern Windows is a mixture of C and C++.

As to what C++ achieves, it's the automation of tedious and error-prone boilerplate. Rather than manually incrementing and decrementing reference counts, you can have it happen automatically as values are copied and overwritten. Rather than manually building procedure address tables for polymorphism, you can get the compiler to do it for you.

Re:kernel in c++? (0)

Anonymous Coward | about 2 years ago | (#42253237)

Yes, but there is a cost for that automation in the form of processor cycles and memory utilization.

Re:kernel in c++? (3, Insightful)

maxwell demon (590494) | about 2 years ago | (#42253669)

For any reasonable C++ compiler and well-written program, the cost is exactly the same as if you do it manually.

In some cases it will even be less because the compiler knows what's going on and can use that knowledge in optimization, e.g. replace indirect calls by direct calls where it knows exactly the dynamic type of an object, which is generally not possible for hand-written call tables.

Re:kernel in c++? (0)

Anonymous Coward | about 2 years ago | (#42253817)

C++ is pretty good about only paying for what you use.

Re:kernel in c++? (2)

Rhacman (1528815) | about 2 years ago | (#42253677)

I can't speak to kernel development but I did develop a data processing engine in C that incorporated design features more traditionally suited to C++ development like polymorphism, interfaces, run-time loadable components, etc. The choice of C was meant to aid with future porting to systems for which C++ compilers were believed not to exist. The system worked but not without encountering instances where someone who developed a component for the system misunderstood some aspect of the architecture and implemented something incorrectly or deliberately took a shortcut that broke the model. To help keep things clean and orderly we had a very rigid coding standard that (most) people followed but it was still not as clear to follow as it would have looked coded in C++. Training developers to code for it who were familiar with C++ took a bit more effort as well. All that said, while I wish we had done it in C++ I don't relish the thought of re-writing it from scratch to use C++ constructs.

Re:kernel in c++? (4, Informative)

Anonymous Coward | about 2 years ago | (#42252623)

Haiku kernel - C++, which is interesting in itself - but supports only IA-32?

Haiku has active ports to PowerPC, ARM, and x86-64 in progress.

Re:kernel in c++? (0)

Anonymous Coward | about 2 years ago | (#42253153)

c++ brings hookers and blackjack to the party. then the party gets wilder and wilder. then the police (Linus) shows up, lays a beatdown on everyone and bans hookers and blackjack for the foreseeable future. so from now on we all have nice, well-maintained parties.

but you always think back to that time you had that one really really awesome party. :)

pity comment (-1)

Anonymous Coward | about 2 years ago | (#42251439)

pity comment

Next up 64 bit Raspberry PI? (1)

cod3r_ (2031620) | about 2 years ago | (#42251461)

Yes please.

Re:Next up 64 bit Raspberry PI? (1)

HaZardman27 (1521119) | about 2 years ago | (#42251563)

Why? It's nowhere near the 32-bit memory limit; does it have a shortage of registers or something?

Re:Next up 64 bit Raspberry PI? (1)

Luyseyal (3154) | about 2 years ago | (#42251711)

Nah, but 64-bit gets work done twice as fast as 32-bit! Didn't you know? ;)

-l

Re:Next up 64 bit Raspberry PI? (1)

cod3r_ (2031620) | about 2 years ago | (#42251769)

Well they've got 512MB ram now. The next logical jump is support for 8GB. Duh.

Re:Next up 64 bit Raspberry PI? (2)

SuricouRaven (1897204) | about 2 years ago | (#42251857)

There is some interest in ARM for low-power servers and server appliances. Support for more than 4GB of ram would come in useful there.

Re:Next up 64 bit Raspberry PI? (1)

HaZardman27 (1521119) | about 2 years ago | (#42251911)

But we're talking about the Raspberry Pi, a $25-$35 USD computer that currently has 512MB of RAM, and that's in the more expensive model.

Re:Next up 64 bit Raspberry PI? (1)

fnj (64210) | about 2 years ago | (#42252159)

The Raspberry Pi is just the meme. Consider what the Raspberry Pi can do for 1/8 of the cost the big players were charging us. Now imagine a 64-bit server for 1/8 of what one costs now.

Re:Next up 64 bit Raspberry PI? (1)

SuricouRaven (1897204) | about 2 years ago | (#42252323)

I'm thinking NAS boxes. You want low-power, so they are mostly ARM already - but with 64-bit ARM, you could also throw lots and lots and lots of RAM in for disk cache.

Re:Next up 64 bit Raspberry PI? (1)

micheas (231635) | about 2 years ago | (#42252143)

It depends on the workload.

IIRC, on AMD64 most programs are about five to ten percent larger if they are compiled for 64-bit instead of 32-bit, with a slight slowdown. However, SSL and other programs that extensively use numbers larger than 32 bits tend to be about twice as fast on 64-bit as on 32-bit. So if you are doing mostly authentication or SSL on your Pi, then 64-bit would make sense.

Is Btrfs for real yet? (2, Interesting)

Anonymous Coward | about 2 years ago | (#42251463)

Does it `Just Work' (tm)? I really want rolling snapshots à la NetApp.

Sorry to be obtuse. Not much time for experiments.

Re:Is Btrfs for real yet? (3, Informative)

ssam (2723487) | about 2 years ago | (#42251527)

SUSE Enterprise Linux has offered Btrfs as a supported option since February.

Conservative folks won't touch it until they know it's been used by millions of people for many years.

I use it, with backups on ext4.

Re:Is Btrfs for real yet? (2, Interesting)

Anonymous Coward | about 2 years ago | (#42251625)

The SUSE implementation of Btrfs is quite good. It's quite a bit ahead of the Btrfs support I've seen on other distributions and setting it up is pretty much automated by the installer. I agree Btrfs isn't stable yet and so shouldn't be used in production yet, but it looks like it is getting closer.

Re:Is Btrfs for real yet? (2)

Andy Prough (2730467) | about 2 years ago | (#42251653)

Last I looked, a couple of weeks ago, the openSUSE support forums were still advising that Btrfs should not be used on production machines (experimental only). I don't know if SUSE Enterprise is giving different advice, but I doubt it.

Re:Is Btrfs for real yet? (0)

Anonymous Coward | about 2 years ago | (#42251783)

Re:Is Btrfs for real yet? (1)

Andy Prough (2730467) | about 2 years ago | (#42251807)

That's great news!

Re:Is Btrfs for real yet? (-1)

Anonymous Coward | about 2 years ago | (#42251901)

Btrfs is developed by Oracle aka SCO 2.0. I would "rather" rimjob a FAT SWEATY graybeard than use their "software."

APK

PS: => If you "are" a => fat sweaty graybeard, consider this an offer for a "rimjob"!

apk...

Meh (-1, Flamebait)

Anonymous Coward | about 2 years ago | (#42251493)

I'll stick with Windows 8 and actually get some work done.

Re:Meh (0)

HaZardman27 (1521119) | about 2 years ago | (#42251569)

Try harder, troll.

Re:Meh (-1)

Anonymous Coward | about 2 years ago | (#42252693)

Troll? If this was an article about Windows 8 and he said "I'll stick with Linux and actually get some work done" you would be singing his praises. Go fuck yourself, asshole.

Re:Meh (1)

HaZardman27 (1521119) | about 2 years ago | (#42253419)

That wouldn't be trolling, because Slashdot is a largely pro-Linux community. If he went to an MSDN forum and posted that, then he would be trolling. Whoever posted this is trolling because he/she knows that Slashdot is pro-Linux.

Btrfs finally ready? (1, Interesting)

javilon (99157) | about 2 years ago | (#42251557)

Is it finally ready for prime time? Anyone with experiences/horror stories?

Re:Btrfs finally ready? (2)

GeniusDex (803759) | about 2 years ago | (#42251645)

I ran btrfs for half a year, roughly a year ago, and had no issues with data integrity etc. whatsoever. The downside at that time was that performance when working with loads of small files was noticeably worse than with ext4. The result of this was that a dist-upgrade took more than 4 hours instead of the expected 1.5 to 2 hours it takes with ext4. Apart from that I had no issues whatsoever; performance on other loads was decent.

I occasionally look for benchmarks showing that the small-file performance is up to par, but so far I have been unable to find them.

Re:Btrfs finally ready? (3, Interesting)

diegocg (1680514) | about 2 years ago | (#42251703)

a dist-upgrade took more than 4 hours instead of the expected 1.5 to 2 hours it takes with ext4.

That's not due to poor small file performance in Btrfs, it's due to poor fsync() performance (which package tools like rpm and dpkg use quite a lot). In this new kernel version the Btrfs fsync() implementation is a lot faster.
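
For context on why package tools call fsync() so much: they generally unpack each file to a temporary name, fsync it, and then rename it over the target, so a crash leaves either the old or the new file, never a half-written one. A rough userspace C sketch of that pattern, with made-up paths:

    /* Rough sketch of the durable-replace pattern package managers use:
     * write a temporary file, fsync it, then rename over the target.
     * Paths and contents are made-up illustration values. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *tmp = "/tmp/example.conf.new";
        const char *dst = "/tmp/example.conf";
        const char *data = "key = value\n";

        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, data, strlen(data)) < 0)
            perror("write");
        fsync(fd);        /* historically the slow part on Btrfs, much faster in 3.7 */
        close(fd);

        rename(tmp, dst); /* atomic replace once the data is on disk */
        return 0;
    }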

Re:Btrfs finally ready? (0)

Anonymous Coward | about 2 years ago | (#42251657)

I've been using it for two years now. Haven't lost any data, but it is not ready for prime time.

Re:Btrfs finally ready? (0)

Anonymous Coward | about 2 years ago | (#42252049)

I had a happy time with btrfs for six months or so on my home computer. When I wanted to upgrade my kernel for even fresher btrfs goodness I figured the new kernel wouldn't be able to read the old btrfs partition, since the binary format had changed. I moved my data to ext4 and it has lived there since.

Re:Btrfs finally ready? (1)

Bill Dimm (463823) | about 2 years ago | (#42252207)

Apparently SUSE Enterprise Linux [linux.com] thinks so, as of last week.

How fractured is ARM? (2)

timeOday (582209) | about 2 years ago | (#42251637)

The ability to boot into different ARM systems using a single kernel is kind of cool, but the need to do it is kind of scary. Is ARM not actually a single instruction set architecture, and if so, what is it?

Re:How fractured is ARM? (4, Informative)

Burdell (228580) | about 2 years ago | (#42251809)

There are variants in the instruction set (just like there are in the x86 world, where i686 is a superset of i386, for example). However, that isn't the big problem with ARM; there isn't a single standard way of booting like there is with x86 (where most things are IBM PC BIOS compatible, with some now moving to EFI/UEFI). Also, there's no device enumeration like ACPI; lots of ARM vendors build their own kernel with a static compiled-in list of devices, rather than having an easy way to probe the hardware at run-time.

Re:How fractured is ARM? (2, Informative)

Anonymous Coward | about 2 years ago | (#42251819)

It's not the instruction set, it's the differences in boards [lwn.net] .

Re:How fractured is ARM? (0)

Anonymous Coward | about 2 years ago | (#42253339)

It's not the instruction set, it's the differences in boards [lwn.net] .

Ya, I don't get that. I'd rather have a bunch of board files than a single SoC file. It seems that if you get rid of the multiple board files then you need multiple device tree files. Instead of nice, tightly focused board files just for that board, you need a more bloated, generalized kernel to parse a device tree for that board. To me, Linux on ARM means embedded systems, and every byte saved is a penny earned. I don't want to bother with parsing a device tree file! Am I missing something?

Re:How fractured is ARM? (0)

Anonymous Coward | about 2 years ago | (#42252781)

ARM isn't fractured. It's just a sprained wrist.

nice... (0)

Anonymous Coward | about 2 years ago | (#42251649)

now let's work on making Linux work on a desktop/workstation ;-)

Re:nice... (1)

kthreadd (1558445) | about 2 years ago | (#42251883)

now let's work on making Linux work on a desktop/workstation ;-)

Works quite well already on my workstation. Any particular areas of interest where it needs improvement in order to work?

Re:nice... (2)

marcello_dl (667940) | about 2 years ago | (#42251999)

Ironically enough, the problematic area was games, and the Linux detractors never brought it up. Let us see what Valve comes up with.

Pffft (1, Funny)

Anonymous Coward | about 2 years ago | (#42251693)

Windows is up to 8. Obviously, it is more than twice as good.

Re:Pffft (2)

fibonacci8 (260615) | about 2 years ago | (#42253103)

So it's still a regression from 98?

Re:Pffft (2)

maxwell demon (590494) | about 2 years ago | (#42253701)

Not to mention the massive regression from 2000.

The Kernel Newbies site isn't accessible for me (1)

crivens (112213) | about 2 years ago | (#42251959)

The Kernel Newbies site isn't accessible for me; clearly they're using 3.7. :)

Linux-libre 3.7 released (-1)

Anonymous Coward | about 2 years ago | (#42251977)

If you value your freedom, say no to Linus. [wikipedia.org]

Re:Linux-libre 3.7 released (1)

petermgreen (876956) | about 2 years ago | (#42252621)

For many classes of device, the choice comes down to either proprietary firmware in a ROM on the card or proprietary firmware included with the operating system. Do you really believe the former is better for freedom? If so, why?

Re:Linux-libre 3.7 released (1)

turbidostato (878842) | about 2 years ago | (#42253149)

"Do you really belive the former is better for freedom? if so why?"

Certainly it's better. Because once bought the device is static while the OS is not.

In other words: you don't want not to be able to upgrade to 3.8 just because the vendor dropped support for your otherwise perfectly working device.

Active directory (0)

Anonymous Coward | about 2 years ago | (#42252447)

Great. But let me know when it supports http://en.wikipedia.org/wiki/Active_Directory better.

In the meantime...

http://lordandhooks.com/blog/likewise-open-6-and-samba/

ftape? (0)

Anonymous Coward | about 2 years ago | (#42253127)

Yes, but does it support ftape? That's the burning question in my mind. BRING BACK FTAPE. I USE IT.

Support for trim on software raid? (1)

TheSunborn (68004) | about 2 years ago | (#42253645)

Does: "MD: TRIM support for linear (commit), raid 0 (commit), raid 1 (commit), raid 10 (commit), raid5 (commit)"

meen that if I run a software raid-1 on sdd disk, then Linux can do Trim on the disks?

IPV6 NAT support (0)

Anonymous Coward | about 2 years ago | (#42253977)

Great, IPv6 can start to be used.
