
How Google Uses Linux

timothy posted more than 4 years ago | from the rebasing-not-freebasing dept.


postfail writes 'lwn.net coverage of the 2009 Linux Kernel Summit includes a recap of a presentation by Google engineers on how they use Linux. According to the article, a team of 30 Google engineers is rebasing to the mainline kernel every 17 months, presently carrying 1208 patches to 2.6.26 and inserting almost 300,000 lines of code; roughly 25% of those patches are backports of newer features.'


155 comments

Frosty Piss! (-1, Troll)

Anonymous Coward | more than 4 years ago | (#30016618)

What's the difference between the guy who mods this down and a bucket of shit? The bucket!

Re:Frosty Piss! (-1)

Anonymous Coward | more than 4 years ago | (#30016966)

Funniest frosty ever.
But you're still a loser and will die a virgin.

A New Culture (3, Funny)

Anonymous Coward | more than 4 years ago | (#30016658)

Hmmm... Techno-Amish? (i.e. "We'll use your roads, but not your damned cars!")

Wow! (-1, Troll)

Anonymous Coward | more than 4 years ago | (#30017940)

Omgz! Google AND Linux in a story!!! Every slashbot's wet dream!!!

Release the patches already (5, Interesting)

Dice (109560) | more than 4 years ago | (#30016662)

They monitor all disk and network traffic, record it, and use it for analyzing their operations later on. Hooks have been added to let them associate all disk I/O back to applications - including asynchronous writeback I/O.

I. Want. This.

Re:Release the patches already (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#30016734)

Funny that Google's privacy policy doesn't mention this storing of user data.

Re:Release the patches already (3, Informative)

Darkness404 (1287218) | more than 4 years ago | (#30016842)

It's kinda common sense that Google would see how much disk space is used or how much CPU time is used. I mean, what admin -doesn't- know that 2 gigabytes of space is used by xxxx@gmail.com? Even if all the data were super-encrypted you would still know how large the file is.

Re:Release the patches already (0)

Anonymous Coward | more than 4 years ago | (#30018358)

The situation here is actually like stripping the address down to "gmail user in eastern US". It's not like there is one server per person; even in applications where it might make sense to do it that way, you can't do it and be cost effective with the required redundancy for reliability.

Paranoid AC, do you feel similarly threatened when the subway reports total ridership counts by route and day of week?

Re:Release the patches already (1)

Rip Dick (1207150) | more than 4 years ago | (#30016868)

Should I take the time to actually read Google's privacy policy and verify this or just take it on AC's good word?

Re:Release the patches already (0)

Anonymous Coward | more than 4 years ago | (#30018162)

He's wrong; Google's privacy policy seems to cover this use of data with the following:

- Log information – When you access Google services, our servers automatically record information that your browser sends whenever you visit a website. These server logs may include information such as your web request, Internet Protocol address, browser type, browser language, the date and time of your request and one or more cookies that may uniquely identify your browser.
- User communications – When you send email or other communications to Google, we may retain those communications in order to process your inquiries, respond to your requests and improve our services.

Breaking news: IT people like to keep logs and like to parse them. Film at eleven.
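Those quoted policy items map one-to-one onto a stock web-server access log. A minimal sketch of parsing one such line (Apache/nginx "combined" format assumed; the sample line is fabricated):

```python
import re

# Common/combined access-log layout (assumed), e.g.:
# 1.2.3.4 - - [10/Nov/2009:13:55:36 -0700] "GET /search?q=linux HTTP/1.1" 200 2326
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
)

def parse_line(line):
    """Return the named fields as a dict, or None if the line doesn't match."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

sample = ('1.2.3.4 - - [10/Nov/2009:13:55:36 -0700] '
          '"GET /search?q=linux HTTP/1.1" 200 2326')
fields = parse_line(sample)
print(fields["ip"], fields["status"])  # → 1.2.3.4 200
```

Real pipelines aggregate millions of such records; the point here is only that the policy's "server logs" are exactly this kind of structured line.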

Re:Release the patches already (5, Informative)

Anonymous Coward | more than 4 years ago | (#30016768)

Try iotop.

http://guichaz.free.fr/iotop/ [guichaz.free.fr]
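iotop surfaces the kernel's per-task I/O accounting; the same class of counters is exposed textually in /proc/&lt;pid&gt;/io. A small sketch of parsing that file's format (field names assumed per the Linux proc documentation; values below are made up):

```python
def parse_proc_io(text):
    """Parse the 'key: value' lines of /proc/<pid>/io into an int-valued dict."""
    counters = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        if value:
            counters[key.strip()] = int(value)
    return counters

# Example content in the /proc/<pid>/io format (fabricated values):
sample = """\
rchar: 3741
wchar: 0
syscr: 10
syscw: 0
read_bytes: 45056
write_bytes: 0
cancelled_write_bytes: 0"""

io = parse_proc_io(sample)
print(io["read_bytes"])  # → 45056
```

On a live Linux box the input would come from opening /proc/&lt;pid&gt;/io for each process (other users' entries need appropriate privileges); iotop itself actually pulls these numbers over the taskstats netlink interface rather than from this file.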

Re:Release the patches already (5, Funny)

Anonymous Coward | more than 4 years ago | (#30017212)

Can we donate some money and buy these people a site that _doesn't_ look like a goatse link?

Re:Release the patches already (1)

rdnetto (955205) | more than 4 years ago | (#30018622)

I like the way the site is designed. Nice and simple, not like some sites where you have to turn to Google to find a single page.

Tough (3, Informative)

Anonymous Coward | more than 4 years ago | (#30016850)

Google does not distribute the binaries, so they are not obliged to publish the source.

Re:Tough (2, Interesting)

MichaelSmith (789609) | more than 4 years ago | (#30016918)

TFA does suggest, though, that Google has gotten itself into a horrible mess with its local changes and would be better off offloading its stuff to the community and taking properly integrated releases.

Re:Tough (1, Informative)

Anonymous Coward | more than 4 years ago | (#30017076)

I think TFA also notes how stupid it is to base all your work on an old kernel because it's supposed to be the well-known stable release used in the organization, and then waste lots of human resources backporting features from newer kernels. This is what Red Hat and SUSE used to do years ago, and avoiding it is the main reason why Linus set up the new development model. Google could learn from the distros; they could probably use all those human resources to follow kernel development more closely. Switching to git would probably help a lot.

Re:Tough (5, Insightful)

pathological liar (659969) | more than 4 years ago | (#30017436)

Yeah great work Linus.

The distros STILL stick with older versions and backport fixes, because who in their right mind is going to bump a kernel version in the middle of a support cycle? It's even MORE broken because the kernel devs rarely identify security fixes as such, and often don't understand the security implications of a fix, so fixes don't always get backported as they should.

The Linux dev model is NOT something to be proud of.

Re:Tough (5, Funny)

grcumb (781340) | more than 4 years ago | (#30017592)

The Linux dev model is NOT something to be proud of.

Indeed:

"The Linux dev model is the worst form of development, except for all those other forms that have been tried from time to time." - Winston Churchill

... Oh wait, no. That was me, actually.

Re:Tough (3, Insightful)

Anonymous Coward | more than 4 years ago | (#30017770)

Oh, actually I think the form of development used by the BSDs is a lot better. At least it is a lot more efficient. They don't just crap out software and deprecate it as soon as it remotely works (HAL).

Re:Tough (2, Interesting)

grcumb (781340) | more than 4 years ago | (#30018028)

The Linux dev model is NOT something to be proud of.

Indeed:

"The Linux dev model is the worst form of development, except for all those other forms that have been tried from time to time." - Winston Churchill

... Oh wait, no. That was me, actually.

Holy humour-impaired down-modding, Batman! How is the above a troll?

For those too dense to get the joke: I actually agree that the Linux development model has significant weaknesses. It's just that, despite its shortcomings, it actually has proven workable for many years now.

I'm not implying that there aren't better community-driven coding projects in existence. Nor do I want to suggest that critiquing the community is unwarranted (or even unwanted). It's just that, for all its warts, it has produced consistent results over the years.

Re:Tough (0, Offtopic)

X3J11 (791922) | more than 4 years ago | (#30018484)

I got it, and I chuckled a bit. I'd mod you back up, but alas I am unable.

Now I'll just wait for my off topic.

Re:Tough (0)

Anonymous Coward | more than 4 years ago | (#30019460)

The distros STILL stick with older versions and backport fixes, because who in their right mind is going to bump a kernel version in the middle of a support cycle?

They stick with SLIGHTLY older versions because of all of their extra, uselessly custom 'patches', just like Google.

Who in their right mind would do THAT!?

I use nothing but the latest vanilla kernel and report the few regressions that I come across. The Linux kernel development model is the best for anyone who is not a masturbating monkey.

Re:Release the patches already (0)

Anonymous Coward | more than 4 years ago | (#30017824)

You won't get it.

DTrace (2, Informative)

Anonymous Coward | more than 4 years ago | (#30018008)

They monitor all disk and network traffic, record it, and use it for analyzing their operations later on. Hooks have been added to let them associate all disk I/O back to applications - including asynchronous writeback I/O.

I. Want. This.

DTrace code:

#pragma D option quiet

/* On each I/O start, sum bytes (b_bcount) by device, application, and PID. */
io:::start
{
        @[args[1]->dev_statname, execname, pid] = sum(args[0]->b_bcount);
}

/* On exit (e.g. Ctrl-C), print the aggregation as a table. */
END
{
        printf("%10s %20s %10s %15s\n", "DEVICE", "APP", "PID", "BYTES");
        printa("%10s %20s %10d %15@d\n", @);
}

Output:

# dtrace -s ./whoio.d
^C
    DEVICE                  APP        PID           BYTES
     cmdk0                   cp        790         1515520
       sd2                   cp        790         1527808

More examples at:

http://wikis.sun.com/display/DTrace/io+Provider

Open source is the coat tails that Google rides. (-1, Troll)

Anonymous Coward | more than 4 years ago | (#30016920)

They take and take from open source and throw back a couple of table scraps and you people all kiss their ass for it.

Amazingly short sighted.

Re:Open source is the coat tails that Google rides (1, Insightful)

Anonymous Coward | more than 4 years ago | (#30017114)

For Free Software, 'take' is fine. 'Provide but restrict' is not.

Re:Open source is the coat tails that Google rides (5, Insightful)

IamTheRealMike (537420) | more than 4 years ago | (#30017158)

Hmm, you realize that Android alone is over 10 million lines of code, right? That's a pretty big open source contribution right there. But then there's also over a million lines of code across 100+ smaller projects too. So I am not sure what your definition of "table scraps" is, but it's significantly more lines of code than most companies contribute.

Re:Open source is the coat tails that Google rides (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#30017268)

So bloat would be a good thing by your standards? Using lines of code as a positive metric is kind of sad.

Re:Open source is the coat tails that Google rides (1)

Rockoon (1252108) | more than 4 years ago | (#30017308)

That versus the other metric: how many anonymous posters downplay their massive contributions.

I'm not a big fan of Google, but god damn, man. These guys are a huge player no matter what they do.

Re:Open source is the coat tails that Google rides (1)

thetoadwarrior (1268702) | more than 4 years ago | (#30017936)

Using the "I only count the bits I want them to release the source to" metric is also a shit way to gauge their contribution to open source.

Re:Open source is the coat tails that Google rides (1)

tyrione (134248) | more than 4 years ago | (#30018564)

Hmm, you realize that Android alone is over 10 million lines of code, right? That's a pretty big open source contribution right there. But then there's also over a million lines of code across 100+ smaller projects too. So I am not sure what your definition of "table scraps" is, but it's significantly more lines of code than most companies contribute.

I see millions of lines of code from the Apache Foundation's various Java projects in Android.

So about 1/10th Sun's contribution (1)

saleenS281 (859657) | more than 4 years ago | (#30018576)

That's a drop in the bucket compared to what Sun has contributed to open source. Of course, slashdot appears to be perversely against Sun for some reason I cannot fathom.

Re:So about 1/10th Sun's contribution (4, Funny)

Again (1351325) | more than 4 years ago | (#30019252)

That's a drop in the bucket compared to what Sun has contributed to open source. Of course, slashdot appears to be perversely against Sun for some reason I cannot fathom.

Names are very important. The name Sun reminds us of that place on the other side of the door where, if we go, our skin gets red and burns. Google reminds us of that friendly homepage that would load in under 5 seconds on dial-up.

Re:Open source is the coat tails that Google rides (1)

nloop (665733) | more than 4 years ago | (#30018704)

Android is not GPL'd; it is released under the Apache license. As of Android 2.0, Google has opted not to release the code [google.com] to the Android Open Source Project. Those 10 million lines of code are for the most part closed. Sure, they have to release the kernel itself, but "Android" is theirs and they are keeping it.

I'm assuming this is to give Verizon exclusivity, with their "Droid" phone being the only one running 2.0. I don't think they anticipated projects like CyanogenMod taking off quite like they have. Why buy a Droid if your cheap G1 can run the latest software?

Do No Evil?

Re:Open source is the coat tails that Google rides (1)

jo_ham (604554) | more than 4 years ago | (#30017328)

Is "Amazingly short sighted" your sig, a self-referential thing you need to tack onto everything you write? Seems very apt.

Are you nuts (3, Insightful)

Anonymous Coward | more than 4 years ago | (#30017968)

I'm not a huge Goog fan; I never take their cookies, so I don't use anything but search. But JUST search is way more "give back" than table scraps. If they announced tomorrow that their search would now cost x dollars a year, as long as it was somewhat reasonable, like an extra 5 bucks a month on top of my ISP bill, I'd pay for those table scraps. Google search has done more than anything else to make the web actually *useful* since the invention of the hyperlink.

Sure, there are other search engines, but if you actually learn to *use* the features and filters present with Google's, it just stomps all the others flat.

Whatever they give back in terms of code is just gravy on top of that.

Is it worth it? (2, Interesting)

ToasterMonkey (467067) | more than 4 years ago | (#30016976)

The whole article sounds so painful, what do they actually get out of it?

Google started with the 2.4.18 kernel - but they patched over 2000 files, inserting 492,000 lines of code. Among other things, they backported 64-bit support into that kernel. Eventually they moved to 2.6.11, primarily because they needed SATA support. A 2.6.18-based kernel followed, and they are now working on preparing a 2.6.26-based kernel for deployment in the near future. They are currently carrying 1208 patches to 2.6.26, inserting almost 300,000 lines of code. Roughly 25% of those patches, Mike estimates, are backports of newer features.

In the area of CPU scheduling, Google found the move to the completely fair scheduler to be painful. In fact, it was such a problem that they finally forward-ported the old O(1) scheduler and can run it in 2.6.26. Changes in the semantics of sched_yield() created grief, especially with the user-space locking that Google uses. High-priority threads can make a mess of load balancing, even if they run for very short periods of time. And load balancing matters: Google runs something like 5000 threads on systems with 16-32 cores.
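An aside on the user-space locking mentioned above: such locks are typically spin-then-yield loops, which is why sched_yield() semantics matter so much. A hypothetical minimal sketch of the pattern (an assumed illustration, not Google's actual code):

```python
import os
import threading

class SpinYieldLock:
    """Spin on a try-lock a bounded number of times, then call
    sched_yield() to give up the timeslice. Illustrative only."""

    def __init__(self, spins=100):
        self._lock = threading.Lock()
        self._spins = spins

    def acquire(self):
        while True:
            for _ in range(self._spins):
                if self._lock.acquire(blocking=False):  # cheap try-lock
                    return
            # Contended: yield the CPU. How much this helps depends on
            # sched_yield() semantics, which is what changed under CFS.
            os.sched_yield()

    def release(self):
        self._lock.release()

counter = 0
lock = SpinYieldLock()

def worker():
    global counter
    for _ in range(10_000):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # → 40000
```

Python's GIL makes this a demonstration of the control flow rather than of real contention; a C version would spin on an atomic compare-and-swap instead of a try-lock.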

Google makes a lot of use of the out-of-memory (OOM) killer to pare back overloaded systems. That can create trouble, though, when processes holding mutexes encounter the OOM killer. Mike wonders why the kernel tries so hard, rather than just failing allocation requests when memory gets too tight.

Ooooh... efficiency. I'm curious what the net savings is compared to buying more cheap hardware.

So what is Google doing with all that code in the kernel? They try very hard to get the most out of every machine they have, so they cram a lot of work onto each.

(30 * kernel engineer salary) / (generic x86 server + cooling + power) = ?
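Plugging illustrative numbers into that ratio (every figure below is an assumption for the arithmetic, not a sourced Google number):

```python
# Hypothetical cost comparison: a 30-person kernel team vs. extra servers.
# All values are assumptions for illustration only.
engineers = 30
loaded_salary = 200_000      # assumed annual fully-loaded cost per engineer (USD)
server_tco = 3_000           # assumed annual cost per server: amortized
                             # hardware, power, cooling (USD)

team_cost = engineers * loaded_salary            # 6,000,000 USD/year
servers_equivalent = team_cost / server_tco      # servers the same money buys

# Against the ~1,000,000-server fleet estimated elsewhere in the thread,
# the team breaks even once it saves this fraction of total capacity:
fleet = 1_000_000
break_even = servers_equivalent / fleet

print(servers_equivalent, break_even)  # → 2000.0 0.002
```

Under these assumed numbers, a 0.2% fleet-wide efficiency gain pays for the whole team, which is why even a 1% win is discussed as a big deal further down the thread.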

Re:Is it worth it? (5, Insightful)

Rockoon (1252108) | more than 4 years ago | (#30016994)

This company had about a million servers last time I cared to find out. I don't think 'more cheap hardware' means the same thing to you as it does to Google.

Re:Is it worth it? (1)

kjart (941720) | more than 4 years ago | (#30017116)

This company had about a million servers last time I cared to find out.

How did you manage that?

Re:Is it worth it? (1)

Rockoon (1252108) | more than 4 years ago | (#30017218)

Re:Is it worth it? (1)

TheSunborn (68004) | more than 4 years ago | (#30017320)

Funny, but that search doesn't give any reliable source for the 1 million servers estimate. The only source is an estimate by Gartner, and if you believe them, you also believe that Itanium II is the best-selling 64-bit server chip.

Re:Is it worth it? (0)

Anonymous Coward | more than 4 years ago | (#30018494)

Mayyyyybe this [slashdot.org] source will be trustworthy enough for you?

Re:Is it worth it? (0)

Anonymous Coward | more than 4 years ago | (#30019452)

From YFA: "Google never says how many servers are running in its data centers."

Re:Is it worth it? (4, Insightful)

Sir_Lewk (967686) | more than 4 years ago | (#30017002)

They are already running absolutely absurd amounts of cheap hardware. "Just buying more" is something that I'm sure they are already doing all the time but clearly that only goes so far.

(30 * kernel engineer salary) / (generic x86 servers + cooling + power) = ?

I suspect the answer to that is a very very small number.

Re:Is it worth it? (1)

onepoint (301486) | more than 4 years ago | (#30017718)

Well, it's most likely worth it (the investment in people). Think about it: Google is trying to manage how much cycle consumption happens per request; save a few cycles in the right area and you no longer need that extra machine, or you have an energy savings.
Looks like a long-term payoff.

Re:Is it worth it? (1)

Stewie241 (1035724) | more than 4 years ago | (#30019126)

Another way to put it... say you can make each server produce 1% extra performance.

A *very* conservative estimate of 100,000 servers (I'd be shocked if they didn't have many times that) means that you now have the capacity of an extra 1,000 servers, which means 1,000 fewer servers that have to be purchased, deployed and maintained.

Re:Is it worth it? (4, Insightful)

coolsnowmen (695297) | more than 4 years ago | (#30017004)

You are clearly not an engineer or scientist. Aside from the fact that some people just like to solve technical problems, I am betting Google's logic goes something like this:
We have a problem that is only costing us $0.01 × 10,000 computers/day. While that seems low, we plan on staying in business a long time, so we could pay someone to solve the problem. Then there is that X factor: if you stop innovating, your competitors won't, and they will get more and you will get less from the pool of money that is out there. In addition, the CS person you paid to solve it is now worth more to your company (if you employ them), because they now have a better understanding of a complex bit of code (the Linux kernel) that you rely on heavily.

Re:Is it worth it? (5, Insightful)

Rockoon (1252108) | more than 4 years ago | (#30017178)

Also consider that Google has been deploying new servers non-stop for many, many years. They are already purchasing cheap hardware at a very high rate. Even a tiny 1% improvement in efficiency for the existing and future servers is a huge, huge win for them.

That could amount to hundreds of millions of dollars saved over the next decade, and it doesn't take a genius to realize that a couple dozen programmer salaries will be a hell of a lot less than that.

Re:Is it worth it? (4, Interesting)

LordNimon (85072) | more than 4 years ago | (#30017214)

Porting patches from one kernel version to another is not innovation.

A while back I got an invitation to work for Google as a kernel developer. I declined to interview, because I already had a job doing just that. This article makes me glad I never accepted that offer. I feel sorry for those kernel developers at Google. Porting all that code back-and-forth over and over again. Now *that's* a crappy job.

Re:Is it worth it? (0)

Anonymous Coward | more than 4 years ago | (#30018736)

Don't worry about them, worry about yourself.

They're working at Google... and putting it on their resume.

Re:Is it worth it? (0)

Anonymous Coward | more than 4 years ago | (#30017414)

I'd say it's more like $0.10 × 1,000,000 servers/day. $36.5 million a year is chicken feed, but it doesn't cost $1.2 million a year to pay an engineer. Or I'm in the wrong profession.

Re:Is it worth it? (1, Informative)

Anonymous Coward | more than 4 years ago | (#30017056)

Mike wonders why the kernel tries so hard, rather than just failing allocation requests when memory gets too tight.

Wait, what? Has Google seriously never heard of vm.overcommit_memory [kernel.org]?

Re:Is it worth it? (5, Interesting)

dingen (958134) | more than 4 years ago | (#30017072)

Ooooh... efficiency.. I'm curious what the net savings is.. compared to buying more cheap hardware.

We're talking about Google here. They have dozens of datacenters all over the globe, filled with hundreds of thousands of servers. Some estimates even put it at a million servers or more.

So let's assume they do have a million servers and they need 5% more efficiency out of their server farms. Following your logic, it would be better to add 50,000 (!) cheap servers, which consume space and power and require cooling and maintenance, but I'll bet that paying a handful of engineers to tweak your software is *a lot* cheaper. Especially since Google isn't "a project" or something. They're here for the long run. They're here to stay, and to make that happen they need to get as much from their platform as possible.

Re:Is it worth it? (2, Interesting)

Taur0 (1634625) | more than 4 years ago | (#30017272)

I really hope you're not an engineer, because your solution to a problem should never be: "Screw the most efficient solution, we'll just go out and buy more and waste more energy!" These incremental increases in efficiency drastically change a product over time; look at cars, for example. The countless engineers working at GM, Toyota, Ford, etc. could have easily said, "meh, whatever, just make them buy more gas." The modern combustion engine is only about 30% efficient, but that's far better than when the combustion engine was first conceived, when it was somewhere around 0.4%.

Re:Is it worth it? (1)

Tharald (444591) | more than 4 years ago | (#30019294)

Sorry to burst your bubble, but the combustion engine still only has about 20% efficiency.

ICE [wikipedia.org]

Re:Is it worth it? (0)

Anonymous Coward | more than 4 years ago | (#30017330)

Oh really? Running that big a server park of x86 servers would be ridiculously slow and resource-hungry at best.

Re:Is it worth it? (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#30017434)

In essence, Google is doing exactly what they have always done to make their money: steal someone else's work, tweak it at very little cost, and then rebrand it. Oh, poor Open Source. Open Source code used by Google to make MONEY! And here I thought M$ft was the bad guy. Could I be wrong???? But if I were, then I would not be reading /., would I? Or do I read /. to see how much MSFT haters are lulled by Google, Apple and anything else which is not MSFT? Remember: it is ALL about business and making money. Including using Open Source code for your own profit.

Re:Is it worth it? (1)

cyber-vandal (148830) | more than 4 years ago | (#30017756)

I'm not quite sure where it says in the whole OSS ethos that making a profit from OSS is against the rules. Redhat have been doing it for a while, as have IBM, and I'm sure Dell et al wouldn't be selling Linux PCs and servers if they weren't making money from doing so. Google have released the source to loads of different stuff as well so again I'm not sure exactly where you're coming from or why the insightful mod was awarded.

Re:Is it worth it? (1)

itzdandy (183397) | more than 4 years ago | (#30017868)

Google has more than 15,000 servers. A well-tuned system can outperform a poorly tuned system 2:1 for very specialized apps like Google's. Don't you think that having 15,000 vs. 30,000 servers is worth maybe $2 million in wages and power bills? Google had a $2 million power bill per month. Those developers are starting to look pretty cheap.

Increasing the efficiency of their code, from memory management and the scheduler to proxy servers, can save huge amounts of CPU time, which in turn lowers electricity requirements and the number of servers needed.

I am not surprised at all by this and wonder when Google will look at using small-form-factor, DC-powered ARM systems. A fairly recent platform they used ran a custom motherboard and power supply, and they started out on some SPARC, x86, and an RS/6000, so they are not afraid of some custom hardware. Cutting that power bill can be a very significant improvement in the cost structure, just like improving the performance of the OS.

http://en.wikipedia.org/wiki/Google_platform#Server_types [wikipedia.org]

Re:Is it worth it? (1)

epine (68316) | more than 4 years ago | (#30017952)

People seem to be ignoring in this equation that this team of engineers becomes deeply familiar with the Linux kernel and likely participates in a lot of problem solving and strategic work on the side. Knowing Google, they are confronting this patch migration problem from a high level and generally thinking about *all* the problems in the Linux kernel development and maintenance space. I'm sure this mess also counts toward code review against their mission critical infrastructure and their general handling of SCM issues at all levels of software development.

Just buy more hardware: none of the above. They're already world class at scalability. Wouldn't surprise me if they had 3000 engineers making contributions counted generously toward scalability, across algorithms, services, and hardware. One percent of their engineers engaged in a high-level chess game with "grin and bear it" is entirely justified.

Re:Is it worth it? (1)

Youngbull (1569599) | more than 4 years ago | (#30018046)

Let's assume that they have about a million servers already; an improvement in overall load of 0.01% would then save them from buying 100 machines... that makes spending time on making the operating system run smoothly a lot more profitable.

Re:Is it worth it? (1)

jelle (14827) | more than 4 years ago | (#30018896)

"Mike wonders why the kernel tries so hard, rather than just failing allocation requests when memory gets too tight."

I realize this is formulated in a negative way, with no prior reservation of resources, but erm, it was fast and easy right now and gave a sufficient response to the thread with the lowest possible latency, and if and when it ever becomes important I'll reformulate it nicely right before it's needed; until that time those resources stay available for other uses. So be warned, here it comes: probably that is because Mike doesn't know what lazy allocation means, why it is used, and that it means there is no allocation request to fail when the OOM condition happens?

Hmm, I sound so arrogant in this post that I'm probably wrong... but I can't help feeling that I'm pretty close to being right...

Low memory conditions (5, Interesting)

jones_supa (887896) | more than 4 years ago | (#30017000)

Google makes a lot of use of the out-of-memory (OOM) killer to pare back overloaded systems. That can create trouble, though, when processes holding mutexes encounter the OOM killer. Mike wonders why the kernel tries so hard, rather than just failing allocation requests when memory gets too tight.

This is something I have been wondering too. Doesn't it just lead to applications crashing more often, instead of normally reporting that they cannot allocate more memory?

Re:Low memory conditions (4, Insightful)

IamTheRealMike (537420) | more than 4 years ago | (#30017036)

Well, most programs are not OOM safe. It turns out to be really hard to write programs that behave gracefully in OOM scenarios. Killing a sacrificial process when the system is out of memory works OK if you have a pretty good idea of priority ordering of the processes, which Google systems do.
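A toy sketch of what "handling" OOM looks like at a single call site (illustrative only; real OOM safety also demands that the recovery path itself allocates nothing, which is the genuinely hard part):

```python
def load_chunk(nbytes):
    """Try to allocate a working buffer, degrading gracefully on failure.

    Illustrative only: every allocation site in an OOM-safe program needs
    a branch like this, and the fallback branch must not allocate.
    """
    try:
        return bytearray(nbytes)
    except MemoryError:
        return None  # caller must cope with a missing buffer

buf = load_chunk(4096)
print(buf is not None)  # → True (a 4 KiB allocation should succeed)
```

Note that under Linux's default overcommit, the except branch may never even run: the failure arrives later, on first touch of the memory, as the OOM killer, which is exactly the behavior the quoted passage complains about.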

The Win32 Way (0)

Anonymous Coward | more than 4 years ago | (#30017060)

Under Windows, if you commit memory, it's yours and it will be there. If the system can't make that promise, it will fail to commit the memory and return an error.

Re:The Win32 Way (1)

ettlz (639203) | more than 4 years ago | (#30017520)

So what does it do if I allocate a couple of hundred megabytes and then don't use them?

Re:The Win32 Way (1)

0ld_d0g (923931) | more than 4 years ago | (#30018890)

So what does it do if I allocate a couple of hundred megabytes and then don't use them?

Nothing. Other apps continue to use all of the memory that you aren't using. You OTOH, just burned a hole in your virtual address space.

Re:The Win32 Way (2, Interesting)

Sam Douglas (1106539) | more than 4 years ago | (#30017578)

In Unix, if malloc returns NULL, then the memory allocation failed and you don't have the memory. A well-written program should check for that. Overcommitting memory can have efficiency advantages, but things can also turn out badly. Linux has heuristics to determine how much memory to overcommit, or overcommit can be disabled entirely.

http://utcc.utoronto.ca/~cks/space/blog/unix/MemoryOvercommit [utoronto.ca]

http://utcc.utoronto.ca/~cks/space/blog/linux/LinuxVMOvercommit [utoronto.ca]
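The overcommit behavior those links describe is controlled by ordinary sysctls; a sketch of pinning them in configuration (mode meanings as documented in the kernel's vm sysctl documentation):

```ini
# /etc/sysctl.d/90-overcommit.conf -- illustrative fragment, not a recommendation
# vm.overcommit_memory: 0 = heuristic overcommit (Linux default),
#                       1 = always overcommit,
#                       2 = strict accounting (allocations can actually fail)
vm.overcommit_memory = 2
# In mode 2, the commit limit is swap + overcommit_ratio percent of RAM.
vm.overcommit_ratio = 50
```

Mode 2 makes malloc() failures real again, at the cost of refusing allocations that a heuristically overcommitting kernel would have allowed.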

Re:The Win32 Way (1, Informative)

Anonymous Coward | more than 4 years ago | (#30018394)

Unless you run Linux!

"By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer. In case Linux is employed under circumstances where it would be less desirable to suddenly lose some randomly picked processes, and moreover the kernel version is sufficiently recent, one can switch off this overcommitting behavior using a command like:

        # echo 2 > /proc/sys/vm/overcommit_memory
"

Does Google give code back (4, Insightful)

TorKlingberg (599697) | more than 4 years ago | (#30017010)

Does Google give any code and patches back to the Linux kernel maintainers? Since they probably only use it internally and never distribute anything, they are not required to by the GPL, but it would still be the right thing to do.

Re:Does Google give code back (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#30017226)

yeah, so that Microsoft can use some of it in Windows 7.. :)

Re:Does Google give code back (5, Informative)

MBCook (132727) | more than 4 years ago | (#30017234)

Yes, they do. Since they use older kernels and have... unique... needs, they aren't a huge contributor like Red Hat, but they do a lot.

During 2.6.31, they were responsible for 6% [lwn.net] of the changes to the kernel.

Re:Does Google give code back (1, Informative)

Anonymous Coward | more than 4 years ago | (#30017376)

During 2.6.31, they were responsible for 6% [lwn.net] of the changes to the kernel.

That's 6% of non-author signoffs. It's not 6% of changes. I'm not saying they don't contribute, but the manner of their contribution isn't what you're suggesting.

Re:Does Google give code back (4, Interesting)

marcansoft (727665) | more than 4 years ago | (#30017380)

Andrew Morton, Google employee and maintainer of the -mm tree, contributed the vast majority of the changes filed under "Google" (and most of those changes aren't Google-specific - Andrew has been doing this since before he was employed there). If you subtract Andrew, Google is responsible for a tiny part of kernel development last I heard, unfortunately.

Re:Does Google give code back (3, Informative)

CyrusOmega (1261328) | more than 4 years ago | (#30017658)

A lot of companies will also use a single employee for all of their commits. I know the company I used to work for made one man look like a code factory to a certain open source project, but in fact it was a team of 20 or so devs behind him doing the real work.

Re:Does Google give code back (4, Informative)

marcansoft (727665) | more than 4 years ago | (#30017726)

Andrew has been doing a large amount of kernel work for some time now, before his employment with Google. Note that the 6% figure is under non-author signoffs - people that patches went through, instead of people who actually authored them. Heck, even I submitted a patch that went through Andrew once (and I've submitted like 5 patches to the kernel). Andrew does a lot of gatekeeping for the kernel, but he doesn't write that much code, and he certainly doesn't appear to be committing code written by Google's kernel team under his name as a committer.

Google isn't even on the list of actual code-writing employers, which means they're under 0.9%. I watched a Google Tech Talk about the kernel once (I forget the exact name) where it was mentioned that Google (minus Andrew) was somewhere around 40th place among companies contributing changes to Linux.

Re:Does Google give code back (0)

Anonymous Coward | more than 4 years ago | (#30018836)

Who is this Andrew guy and how do we pat him on the back?

Re:Does Google give code back (1)

tyrione (134248) | more than 4 years ago | (#30018572)

A lot of companies will also use a single employee for all of their commits. I know the company I used to work for made one man look like a code factory to a certain open source project, but in fact it was a team of 20 or so devs behind him doing the real work.

You clearly know nothing about Linux Kernel development if you think Morton is a face for a team of hidden coders.

Re:Does Google give code back (2, Insightful)

ibwolf (126465) | more than 4 years ago | (#30017710)

most of those changes aren't Google-specific

Why would they submit "Google-specific" patches?

It would make sense for them to only submit those patches that they believed to be of general utility. Other stuff would likely not be accepted.

Re:Does Google give code back (3, Informative)

marcansoft (727665) | more than 4 years ago | (#30017896)

By that I meant "developed for Google, useful to other people".

We can divide Andrew's potential kernel work into 4 categories:

  1. Private changes for Google, not useful for other people.
  2. Public changes for Google, deemed useful to other people but originally developed to suit Google's needs.
  3. Public changes of general usefulness. Google might find them useful, but doesn't drive their development.
  4. Maintaining -mm and signing off and merging other people's stuff

Points 1 and 2 can be considered a result of Andrew's employment at Google. Points 3 and 4 would happen even if he weren't employed there. From my understanding, the vast majority of Andrew's work is point 4 (which is why he's listed, along with Google, at 6% under non-author signoffs). Both Andrew's and Google's commit-author contributions are below 0.9%.

So what we can derive from the data in the article, assuming it's accurate, is:

  • Google's employees as a whole authored less than 0.9% of the changes that went into 2.6.31
  • Andrew authored less than 0.8% of the 2.6.31 changes
  • Andrew signed off on 6% of the 2.6.31 changes
  • Besides Andrew, 3 other changes were signed off by Google employees (that's like .03%)

So no, Google doesn't contribute much to the kernel. Having Andrew on board gives them some presence and credibility in kernel-land, but they don't actually author much public kernel code. Hiring someone to keep doing what they were already doing doesn't make you a kernel contributor.
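The author-versus-signoff distinction above is visible directly in git metadata. A toy illustration (a throwaway repo with made-up addresses like google.example, not real kernel data): one commit counts once for its author but once per Signed-off-by trailer, which is why signoff-based statistics inflate a maintainer's employer.

```shell
# Throwaway repo: one commit authored by "alice" but carrying a
# sign-off from a maintainer at a fictional google.example address.
dir=$(mktemp -d) && cd "$dir" && git init -q .
git -c user.name=alice -c user.email=alice@example.com \
    commit -q --allow-empty -m 'feature: add something

Signed-off-by: alice <alice@example.com>
Signed-off-by: akpm <akpm@google.example>'

# Patches *authored* at google.example: prints 0.
git log --pretty='%ae' | grep -c '@google.example' || :

# Sign-offs from google.example: prints 1.  Counting these trailers is
# what produces figures like the 6% discussed above.
git log | grep -c 'Signed-off-by: .*@google.example'
```

The same two greps over a real kernel release range would separate what Google's employees wrote from what merely passed through Andrew's tree.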

Re:Does Google give code back (4, Informative)

farnsworth (558449) | more than 4 years ago | (#30017764)

Google is responsible for a tiny part of kernel development last I heard, unfortunately.

I don't know that much about Google's private modifications, but the question of "what to give back" does not always have a clear default answer. I've modified lots of OSS in the past and not given it back, simply because my best guess was that I am the only person who will ever want feature x. There's no point in cluttering up mailing lists or documentation with something extremely esoteric. It's not because I'm lazy or selfish or greedy -- sometimes the right answer is to just keep things to yourself. (Of course, there are times when I've modified something hackishly and been too lazy or embarrassed to send it back upstream :)

Perhaps google answers this question in a different way than others would, but that doesn't necessarily conflict with "the spirit of OSS", whatever that might be.

Real example... (4, Interesting)

Fished (574624) | more than 4 years ago | (#30018456)

Back in the '90s, we had a customized patch to Apache to make it forward tickets within our intranet, as supplied by our (also customized) Kerberos libraries, for our (also customized) build of Lynx. It was all part of a very robust system for managing customer contacts that ran with virtually no maintenance from 1999 to 2007, when it was scrapped for a "modern" and "supportable" solution that (of course) requires a dozen full-time developers and crashes all the time. I was the only person who understood it, because I wrote it as the SA.

Not really bitching too much, because that platform was a product of the go-go 90's, and IT doctrine has changed for the better. No way should a product be out there with all your customer information that only one person understands. But it was a sweet solution that did its job and did its job well for a LONG time. Better living through the UNIX way of doing things!

But, anyway, I never bothered to contribute any of the patches from that back to the Apache tree (or the other trees) because they really only made sense in that particular context and as a group. If you weren't doing EXACTLY what we were doing, there was no point in the patches, and NOBODY was doing exactly what we were doing.

Re:Does Google give code back (1)

jelle (14827) | more than 4 years ago | (#30019000)

"simply because my best guess was that I am the only person who will ever want feature x"

You may have been underestimating 'the others'... "Release early, release often" means release it, even if you think it's (still) useless junk. Just label it as that, and perhaps others will find it better than useless junk, or if needed maybe clean it up and turn it into something you never even thought it could be.

At least send a message 'listen guys, this is what I threw together for myself and here is why', or put up a webpage on a blog or wiki somewhere with your patches and mention the site once on the mailing list.

Do it for the others, who may surprise you.

A lot of programmers with good intentions end up never releasing what they've made, and what could have turned into something great, just because they want to 'clean it up first', or because they think 'nobody would want it' (they wanted it, so somebody did, which makes it hardly unlikely that somebody else wants it too). Release it, and just be honest and say that even you, the creator, think it's dirty and useless. Perhaps others disagree about the 'useless', or are better/faster than you at cleaning it up, or maybe it inspires others to make something similar, or more advanced, 'the right way'.

Re:Does Google give code back (3, Insightful)

itzdandy (183397) | more than 4 years ago | (#30017890)

If you subtract search engines, Google is responsible for a tiny portion of the internet. Andrew gets bennies from Google, so I suppose they do get some credit for the quantity of his work; he needs to eat and pay rent so that he can code.

kernel development (0)

Anonymous Coward | more than 4 years ago | (#30017020)

This is very interesting, but I have a question. As I understand it, Google has its own kernel development line?

Expected (-1, Redundant)

Anonymous Coward | more than 4 years ago | (#30017280)

I am with Linus on this one
Linus is right
The man makes sense
He is absolutely correct on this one

wtf? (2, Funny)

Anonymous Coward | more than 4 years ago | (#30018434)

Oh sorry...title had me thinking this was penguin porn

Reminds me of Android (2, Insightful)

cycoj (1010923) | more than 4 years ago | (#30019350)

Somehow I'm reminded of the whole Android thing. Google really seems to have the urge to do everything its own way. It was the same with Android, where they threw out the whole "Linux" userspace to reinvent the wheel (only not as well; see Harald Welte's blog for a rant about it). Here it's the same: they just do their own thing without merging back, disregarding experience others might have had.

On a side note, their problems with the Completely Fair Scheduler should be a good argument for pluggable schedulers. It shows one scheduler can't fit all use cases, but I doubt Linus will listen.

Solaris (1)

kriston (7886) | more than 4 years ago | (#30019360)

It's amazing how many of these problems, especially with regard to multi-threading and multiple cores, had already been solved and implemented in Sun Solaris. In 1994. Fifteen years ago.

