Linux 3.7 Released

The wait is over; diegocg writes "Linux kernel 3.7 has been released. This release adds support for the new ARM 64-bit architecture; ARM multiplatform — the ability to boot into different ARM systems using a single kernel; support for cryptographically signed kernel modules; Btrfs support for disabling copy-on-write on a per-file basis using chattr; faster Btrfs fsync(); a new experimental 'perf trace' tool modeled after strace; support for the TCP Fast Open feature on the server side; experimental SMBv2 protocol support; stable NFS 4.1 and parallel NFS; a vxlan tunneling protocol that allows Layer 2 Ethernet packets to be carried over UDP; and support for the Intel SMAP security feature. Many small features, new drivers, and fixes are also available. Here's the full list of changes."
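For the curious, server-side TCP Fast Open is opt-in per listening socket. Here's a minimal C sketch of how a server might enable it on 3.7 (the port number and queue length are arbitrary choices, and the kernel must also allow server-side TFO via the net.ipv4.tcp_fastopen sysctl):

    /* Minimal sketch: enabling server-side TCP Fast Open on Linux >= 3.7.
     * Bit 2 of the net.ipv4.tcp_fastopen sysctl must also be set.
     * The port number and queue length here are arbitrary choices. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    #ifndef TCP_FASTOPEN
    #define TCP_FASTOPEN 23            /* not yet in older libc headers */
    #endif

    int tfo_listen_socket(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(8080),
            .sin_addr   = { .s_addr = htonl(INADDR_ANY) },
        };
        int qlen = 16;                 /* max queued TFO connection requests */

        bind(fd, (struct sockaddr *)&addr, sizeof(addr));
        /* Set before listen() so the listener will accept TFO cookies. */
        setsockopt(fd, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen));
        listen(fd, 128);
        return fd;
    }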
  • by CajunArson ( 465943 ) on Tuesday December 11, 2012 @12:33PM (#42251337) Journal

    experimental SMBv2 protocol support;

    This can't come soon enough for Linux clients. Samba already has SMBv2+ server-side support, with Samba 4 apparently even supporting SMB 3.0. This is especially true for high-latency connections through a VPN, where the reduced chattiness of the newer SMB protocols gives a nice performance bump.

    You can post all day & all night about how NFS/CODA/GlusterFS/etc. is better, but at the end of the day the CIFS protocols are supported by every Windows machine out there and should be supported by Linux too. Plus, if you are a free-software purist, you could set up a 100% GPL'd installation with Samba servers and Linux clients, so it would totally make sense for the Linux clients to actually support the modern protocols.
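    If I'm reading the new code right, the experimental SMB2 client is selected with the vers= mount option (something like vers=2.1), normally via mount.cifs. A bare-bones C sketch of the raw mount(2) equivalent, with a made-up server, mount point, and credentials (the kernel also needs the experimental SMB2 support compiled in):

        /* Hypothetical sketch: mounting a share with the experimental SMB2
         * client via mount(2). Normally you'd use mount.cifs(8); the server,
         * mount point, and credentials below are made up. */
        #include <stdio.h>
        #include <sys/mount.h>

        int main(void)
        {
            if (mount("//fileserver/share", "/mnt/share", "cifs", 0,
                      "vers=2.1,user=alice,pass=secret") != 0) {
                perror("mount");
                return 1;
            }
            return 0;
        }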

    • by Anonymous Coward on Tuesday December 11, 2012 @12:46PM (#42251501)

      This is exactly what Linux was missing: Super Mario Brothers version 2.

    • by rubycodez ( 864176 ) on Tuesday December 11, 2012 @12:57PM (#42251627)

      Purists can also get Linux in the door at clients with Windows desktops; the basics of authentication, file and print sharing are enough for most small/medium businesses. I've done that a few times over the last five years. Clients are still happy, as the server just works, and they are adopting more Linux boxes, including some desktops.

    • Why do you state that CIFS "should be supported by Linux"? And why should I, as a *nix user, care about what Windows supports?

      • And why should I, as a *nix user, care about what Windows supports?

        Because you may end up having to integrate the *nix that you use with the Windows that an employer, client, etc. uses.

        • Windows has supported WebDAV since Windows 98, IIRC. And I think *nix users tend to avoid Windows employers/clients. There are plenty of jobs; you can get picky about the ones you choose.

          • There are plenty of jobs; you can get picky about the ones you choose.

            Unless you happen to have grown up in an area where there aren't plenty of jobs and need a job to save money so that you can move to where there are plenty of jobs.

            • by spazdor ( 902907 )

              You mean people *aren't* like virtual machine instances? You can't just kill one here and bring up another in a different availability zone?

              • Well, you can, but the provisioning time for the new instance is ridiculous.
              • by tepples ( 727027 )
                Remember before the 1990s when there was no public Internet, long distance telephone calls used to cost a lot of money, and the Postal Service was the cheapest way to move a lot of data around? Human beings are like that, except worse. It costs a lot of money to get someone relocated to another availability zone.
    • by Shaman ( 1148 )

      Uhm, since deployed Windows systems largely don't support SMB 2.x, much less SMB 3.x, I fail to see how this is a major failing on the part of Linux. Although I am of course entirely for supporting the current protocols.

    • by Rich0 ( 548339 )

      I don't care much about native linux support for Windows. However, the sad thing is that in many ways SMB is probably the best networked filesystem on linux just the same, even though it doesn't support half of POSIX. The closest competitor is NFS, and that is full of security issues.

      Linux really needs a SIMPLE network filesystem solution that is secure and functional in all routine modes of operation. No, I don't want to set up a kerberos realm and openafs/etc.

      • by dbIII ( 701233 )
        Fuck simple - parallel NFS4 offers far more promise than you can get with simple. RAID just stepped off the single machine with redundant disks and moved into a pool of redundant servers, which offers a pile of improved latency, bandwidth and data redundancy options.
        • by Rich0 ( 548339 )

          Lovely - kerberos...

          I just love the thought of getting linux to boot with an nfs root filesystem using kerberos for authentication... If you don't implement kerberos, then it is insecure, which was half my complaint with nfs in the first place.

  • Signed modules? Yay for tivoization!

    • by ssam ( 2723487 )

      except you control the keys

      • Re:DRM (Score:5, Interesting)

        by leromarinvit ( 1462031 ) on Tuesday December 11, 2012 @01:02PM (#42251673)

        Only when you control the kernel/boot loader. I have a feeling that this will be used a lot by vendors to lock you out of your own devices, e.g. Android phones etc.

        I'm as paranoid as the next geek, and the idea of secure boot etc. appeals a lot to me if done correctly. As in, if it's MY device, then I get to decide what runs on it, and no one else. But it's a tool, and as such it can be used both for you and against you. There can't be a technical solution; technology is dumb. We need a legal solution, either in the form of regulation or widespread adoption (and enforcement) of the GPLv3.

        • Unfortunately, secure booting is linked so tightly with vendor lockdown, tracking, and DRM concerns that I never expect it to be embraced by any open-source community. Hysteria over treacherous computing [gnu.org] has so far been overblown. For example, the potential abuse of the unique ID features of the TPM chips was not sufficient reason for the boycott they generated against using them when available--especially if you're booting into an open-source OS.

          It's pretty ridiculous that software like trusted grub [sourceforge.net] isn'

          • by epyT-R ( 613989 )

            It has so far been overblown only because it is just now becoming a real threat. This is mainly due to the introduction of new platforms that have become insanely popular. Think about it: all the new computing devices we have are, for the most part, locked out of the box... and while most people don't care to mess with their devices, they're still affected negatively, because they do benefit from the efforts of those who do.

        • I think there can easily be a technical solution. Just put a switch on every computer. If it's in the UNLOCKED position, you may install a new operating system; if it is in the LOCKED position, you may not, and the whole boot process is locked down.

    • by dpilot ( 134227 )

      Signed modules are a two-edged sword. They can be used for Tivoization, as you say. They can also be used by you to secure your own system.

      Really, it's too bad that none of the major distributions have set this up. I've had TPMs on the past two work laptops. I've rather wanted to "take ownership" of them, principally to prevent anyone else from doing so. But it's rather a pain: supported, but in more of an expert-only mode, so I've never had the time.

      Module signing would be the same type of thing. If RedHat

      • Re:DRM (Score:5, Informative)

        by Microlith ( 54737 ) on Tuesday December 11, 2012 @12:57PM (#42251623)

        Module signing has been in place with Fedora 18 and Ubuntu 12.10, as it's required to be compliant and get a signature on the bootloader for Secure Boot. I assume the code was backported.

      • by Anonymous Coward

        Signed modules are a two-edged sword. They can be used for Tivoization, as you say. They can also be used by you to secure your own system.

        If root is inserting untrusted modules into his kernel, he has bigger problems than module signing can fix.

        • by Bengie ( 1121981 )
          It takes little effort to sign, but it adds more security to your system. Maybe not a lot more, but more nonetheless.
        • You're absolutely correct that if an attacker is performing actions as root you have a big problem, but if that attacker is able to succeed in injecting modules into the kernel you have much bigger problems. Root's actions can still be monitored, logged, etc., whereas a malicious kernel module can hide any evidence of its existence from the running system.

          Having this feature enabled (and of course keeping the private key elsewhere if you build your own modules) means that a root exploit turning into a rootkit

  • by Anonymous Coward

    kernel in C++? no? I'll move on then.

    • Re:kernel in c++? (Score:5, Informative)

      by advantis ( 622471 ) on Tuesday December 11, 2012 @01:54PM (#42252217)

      And you need a kernel in C++ why? Because you can't get your head around objects that aren't enforced by the language? Or you can't get your head around doing error cleanup without exceptions enforced by the language? The Linux kernel even does reference counting without explicit support from the language.

      Just to get a complete picture, I looked at some competing kernels (I skimmed over the source really quickly):

      FreeBSD kernel - C, with objects and refcounts, similar to Linux
      OpenBSD kernel - C, but I have a hard time finding their equivalent to objects and refcounts, and I gave up looking
      GNU Hurd - C, and I'm not even going to bother looking around too much
      XNU - C, but with I/O Kit in C++ - works only with Apple software?
      Haiku kernel - C++, which is interesting in itself - but supports only IA-32?
      Plan9 kernel - C
      OpenSolaris kernel - C

      I think it's pointless to look at the rest. All the others listed by Wikipedia are even more obscure than some of the above.

      C seems to dominate the kernel arena, so next time you post, I'd like to know what you think C++ would bring to the party. No, really. I've seen many complain that Linux isn't written in C++, but I haven't seen a single one of these trolls (yes, I'm feeding you) say what that would accomplish, and I'm really, really, really curious. I'll throw a bone from the XNU Wikipedia article: "helping device drivers be written more quickly and using less code"; that seems to be the only bit written in C++, yet Linux does pretty well without it, and apparently so do the majority (see above).

      • Re:kernel in c++? (Score:5, Informative)

        by petermgreen ( 876956 ) <plugwash.p10link@net> on Tuesday December 11, 2012 @02:11PM (#42252403) Homepage

        IIRC modern Windows is a mixture of C and C++.

        As to what C++ achieves: it's the automation of tedious and error-prone boilerplate. Rather than manually incrementing and decrementing reference counts, you can have it happen automatically as values are copied and overwritten. Rather than manually building procedure address tables for polymorphism, you can get the compiler to do it for you.
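        To make that concrete, here's a userspace C sketch of the two hand-rolled idioms being discussed: manual reference counting (in the spirit of the kernel's kref, which uses atomic counters rather than a plain int) and an explicit ops table standing in for a C++ vtable. All names are invented for illustration:

            /* Userspace sketch of two idioms the kernel hand-rolls in C:
             * manual refcounting (cf. struct kref, which uses atomics) and
             * an explicit ops table in place of a C++ vtable. Names invented. */
            #include <stdio.h>
            #include <stdlib.h>

            struct device;

            struct device_ops {                /* hand-built "vtable" */
                void (*release)(struct device *);
            };

            struct device {
                int refcount;                  /* maintained by hand, not RAII */
                const struct device_ops *ops;
            };

            static void dev_get(struct device *d) { d->refcount++; }

            static void dev_put(struct device *d)
            {
                if (--d->refcount == 0)        /* last reference frees the object */
                    d->ops->release(d);
            }

            static void dev_release(struct device *d)
            {
                printf("releasing device\n");
                free(d);
            }

            static const struct device_ops default_ops = { .release = dev_release };

            int main(void)
            {
                struct device *d = calloc(1, sizeof(*d));
                d->ops = &default_ops;
                d->refcount = 1;               /* creator holds the first reference */

                dev_get(d);                    /* e.g. handed to another subsystem */
                dev_put(d);
                dev_put(d);                    /* count hits zero; release runs */
                return 0;
            }

        In C++, a shared_ptr copy and a virtual call would do both jobs implicitly; in C, every get/put pair is on the programmer.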

          • I can't speak to kernel development, but I did develop a data processing engine in C that incorporated design features more traditionally suited to C++ development, like polymorphism, interfaces, run-time loadable components, etc. The choice of C was meant to aid future porting to systems for which C++ compilers were believed not to exist. The system worked, but not without encountering instances where someone who developed a component for the system misunderstood some aspect of the architecture and imp
      • Re:kernel in c++? (Score:4, Informative)

        by Anonymous Coward on Tuesday December 11, 2012 @02:28PM (#42252623)

        Haiku kernel - C++, which is interesting in itself - but supports only IA-32?

        Haiku has active ports to PowerPC, ARM, and x86-64 in progress.

    • Why? It's nowhere near 32-bit memory limitations; does it have a shortage of registers or something?
      • by Luyseyal ( 3154 )

        Nah, but 64-bit gets work done twice as fast as 32-bit! Didn't you know? ;)

        -l

      • There is some interest in ARM for low-power servers and server appliances. Support for more than 4GB of RAM would come in useful there.

        • But we're talking about the Raspberry Pi, a $25-$35 USD computer that currently has 512MB of RAM, and that's in the more expensive model.
          • by fnj ( 64210 )

            Raspberry Pi is just the meme. Consider what the Raspberry Pi can do for 1/8 of the cost the big players were charging us. Now imagine a 64-bit server for 1/8 of what one costs now.

            • I'm thinking NAS boxes. You want low-power, so they are mostly ARM already - but with 64-bit ARM, you could also throw lots and lots and lots of RAM in for disk cache.

                It wasn't very long ago that one of the machines I was looking after was 300MHz and mostly doing a decent job as a small business mail server and web proxy. Handling large mailboxes (6GB+) in webmail was the one thing it couldn't do well, and the solution was a system with more memory; the actual clock speed, even so slow, wasn't really a problem (the replacement that is ten times the speed is not ten times as effective for most tasks).
                A 1GHz ARM system with bucketloads of RAM (16GB is cheap these days) woul
                • by fnj ( 64210 )

                  My BeagleBone with 256MB is dandy serving as DNS, DHCP server, cvs server, web and some other stuff.

                  • by dbIII ( 701233 )
                    Yes, I was just pointing out a nice application for a relatively slow machine with more than 4GB of memory. Of course there is a lot you can do with a lot less.
      • by micheas ( 231635 )

        It depends on the work load.

        IIRC, on AMD64 most programs are about five to ten percent larger if they are compiled for 64-bit instead of 32-bit, with a slight slowdown. However, SSL and other programs that extensively use numbers larger than 32 bits tend to be twice as fast on 64-bit as on 32-bit. So if you are doing mostly authentication or SSL on your Pi, then 64-bit would make sense.
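        To illustrate the kind of arithmetic meant here, a toy C function over 64-bit values (an invented example, not from any real crypto library): on a 64-bit build the multiply-accumulate in the loop is a couple of instructions, while a 32-bit build has to synthesize each 64-bit multiply from several 32-bit operations.

            /* Toy illustration of 64-bit-heavy code (think bignum limbs in SSL).
             * Invented example: on x86-64 the multiply-accumulate is one mul
             * plus an add; on i686 each iteration needs several instructions. */
            #include <stddef.h>
            #include <stdint.h>

            uint64_t mac64(const uint64_t *limbs, size_t n, uint64_t k)
            {
                uint64_t acc = 0;
                for (size_t i = 0; i < n; i++)
                    acc += limbs[i] * k;   /* single 64-bit mul on x86-64 */
                return acc;
            }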

  • by Anonymous Coward

    Does it `Just Work' (tm)? I really want rolling snapshots, à la NetApp.

    Sorry to be obtuse. Not much time for experiments.

    • Re: (Score:3, Informative)

      by ssam ( 2723487 )

      SUSE Enterprise Linux has offered Btrfs as a supported option since February.

      Conservative folk won't touch it until they know it's been used by millions of people for many years.

      I use it, with backups on ext4.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        The SUSE implementation of Btrfs is quite good. It's quite a bit ahead of the Btrfs support I've seen on other distributions, and setting it up is pretty much automated by the installer. I agree Btrfs isn't stable and so shouldn't be used in production yet, but it looks like it is getting closer.

      • Last I looked, a couple of weeks ago, the openSUSE support forums were still advising that Btrfs should not be used on production machines: experimental only. I don't know if SUSE enterprise is giving different advice, but I doubt it.
    • by Lennie ( 16154 )

      Have you considered Ceph?

  • Btrfs finally ready? (Score:2, Interesting)

    by javilon ( 99157 )

    Is it finally ready for prime time? Anyone with experiences/horror stories?

    • I ran btrfs for half a year, ending roughly a year ago, and had no issues with data integrity etc. whatsoever. The downside at that time was that performance when working with loads of small files was noticeably worse than with ext4. The result of this was that a dist-upgrade took more than 4 hours instead of the expected 1.5 to 2 hours it takes with ext4. Apart from that I had no issues whatsoever; performance on other loads was decent.

      I occasionally look for benchmarks showing that the small files performance

      • by diegocg ( 1680514 ) on Tuesday December 11, 2012 @01:04PM (#42251703)

        a dist-upgrade took more than 4 hours instead of the expected 1.5 to 2 hours it takes with ext4.

        That's not due to poor small-file performance in Btrfs; it's due to poor fsync() performance (which package tools like rpm and dpkg use quite a lot). In this new kernel version the Btrfs fsync() implementation is a lot faster.

        • by Lennie ( 16154 )

          You can use libeatmydata to disable the many fsyncs in dpkg, which will obviously solve that problem. It might be smart to make a btrfs snapshot first, so if something bad does happen, you can go back to a working snapshot.

          • by Rich0 ( 548339 )

            How important are the fsyncs? I think a lot of software uses them due to some implementation decisions with ext4 (though Linus's decision to override the default settings set by the ext4 team alleviated many of them). However, with btrfs being copy-on-write, I would think that you'd be far less vulnerable to issues if you modify a file in place without fsyncing. With btrfs you'll end up with either the original file intact or the modified file intact. With ext4 and some journal settings I think you could
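            For context, the fsync-heavy pattern the package managers rely on is the classic write-a-temp-file-then-rename dance; a minimal userspace C sketch (paths are made up and error handling is simplified):

                /* Classic crash-safe file replacement: the pattern dpkg/rpm lean
                 * on, and the reason fsync() speed matters to them. Paths are
                 * made up and error handling is abbreviated. */
                #include <fcntl.h>
                #include <stdio.h>
                #include <unistd.h>

                int replace_file(const char *path, const char *data, size_t len)
                {
                    char tmp[4096];
                    snprintf(tmp, sizeof(tmp), "%s.tmp", path);

                    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
                    if (fd < 0)
                        return -1;
                    if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
                        close(fd);             /* give up; original is untouched */
                        unlink(tmp);
                        return -1;
                    }
                    close(fd);
                    return rename(tmp, path);  /* atomically swap the new file in */
                }

            Without the fsync, a crash just after the rename can leave a zero-length file on some ext4 configurations, which is the failure mode being discussed; on a copy-on-write filesystem the window is different, but the tools still play it safe.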

    • Apparently SUSE Enterprise Linux [linux.com] thinks so, as of last week.

  • by timeOday ( 582209 ) on Tuesday December 11, 2012 @12:58PM (#42251637)
    The ability to boot into different ARM systems using a single kernel is kind of cool, but the need to do it is kind of scary. Is ARM not actually a single instruction set architecture, and if so, what is it?
    • by Burdell ( 228580 ) on Tuesday December 11, 2012 @01:14PM (#42251809)

      There are variants in the instruction set (just like in the x86 world, where i686 is a superset of i386, for example). However, that isn't the big problem with ARM; there isn't a single standard way of booting like there is with x86 (where most things are IBM PC BIOS compatible, with some now moving to EFI/UEFI). Also, there's no device enumeration like ACPI; lots of ARM vendors build their own kernel with a static compiled-in list of devices, rather than having an easy way to probe the hardware at run time.
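      To picture what "a static compiled-in list of devices" looks like, here's a kernel-side C sketch in the style of a classic ARM board file (this is kernel code, not a standalone program; the addresses, IRQ, and names are invented):

          /* Sketch of an old-style ARM board file: devices hard-coded in C
           * and registered at boot, instead of being discovered or described
           * by firmware. Addresses, IRQ, and names are invented. */
          #include <linux/init.h>
          #include <linux/ioport.h>
          #include <linux/kernel.h>
          #include <linux/platform_device.h>

          static struct resource myuart_resources[] = {
              { .start = 0x101f1000, .end = 0x101f1fff, .flags = IORESOURCE_MEM },
              { .start = 12,         .end = 12,         .flags = IORESOURCE_IRQ },
          };

          static struct platform_device myuart_device = {
              .name          = "myboard-uart",
              .id            = 0,
              .resource      = myuart_resources,
              .num_resources = ARRAY_SIZE(myuart_resources),
          };

          static struct platform_device *myboard_devices[] __initdata = {
              &myuart_device,
          };

          static void __init myboard_init(void)
          {
              platform_add_devices(myboard_devices, ARRAY_SIZE(myboard_devices));
          }

      Every board needed its own variant of this baked into the kernel image, which is why one kernel couldn't boot another vendor's board; the multiplatform work (together with device tree descriptions of the hardware) is what removes that limitation.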

    • Re: (Score:2, Informative)

      by Anonymous Coward
      It's not the instruction set, it's the differences in boards [lwn.net].
  • Pffft (Score:1, Funny)

    by Anonymous Coward

    Windows is up to 8. Obviously, it is more than twice as good.

  • The Kernel Newbies site isn't accessible for me; clearly they're using 3.7. :)

  • Does: "MD: TRIM support for linear (commit), raid 0 (commit), raid 1 (commit), raid 10 (commit), raid5 (commit)"

    meen that if I run a software raid-1 on sdd disk, then Linux can do Trim on the disks?
