Linux Needs Resource Management For Complex Workloads

Soulskill posted about 3 months ago | from the dirty-job-but-somebody's-gotta-do-it dept.

Data Storage

storagedude writes: Resource management and allocation for complex workloads have been needed in open systems for some time, but no one has ever followed through on making open systems look and behave like an IBM mainframe, writes Henry Newman at Enterprise Storage Forum. Throwing more hardware at the problem is a costly solution that won't work forever, he notes.

Newman writes: "With next-generation technology like non-volatile memories and PCIe SSDs, there are going to be more resources in addition to the CPU that need to be scheduled to make sure everything fits in memory and does not overflow. I think the time has come for Linux – and likely other operating systems – to develop a more robust framework that can address the needs of future hardware and meet the requirements for scheduling resources. This framework is not going to be easy to develop, but it is needed by everything from databases and MapReduce to simple web queries."

This obsession with everything in RAM needs to end (0)

Anonymous Coward | about 3 months ago | (#47492609)

I know you're afraid of the garbage collector, but it won't bite. I promise.

Re: This obsession with everything in RAM needs to (0)

Anonymous Coward | about 3 months ago | (#47492627)

So then what should we be obsessed with? Light weight, shiny screens, and rounded corners?

Re: This obsession with everything in RAM needs to (5, Funny)

JMJimmy (2036122) | about 3 months ago | (#47492697)

Boobs.

Re: This obsession with everything in RAM needs to (1)

Anonymous Coward | about 3 months ago | (#47493255)

So, rounded corners then.

Re: This obsession with everything in RAM needs to (0)

Anonymous Coward | about 3 months ago | (#47492631)

12345678910+×÷=%_@$!#/\&*()

next-generation (0)

Anonymous Coward | about 3 months ago | (#47492637)

next-generation is a word that should be forbidden

Re:This obsession with everything in RAM needs to (5, Insightful)

Lisias (447563) | about 3 months ago | (#47492649)

I know you're afraid of the garbage collector, but it won't bite. I promise.

Yes, it will. It's not common, but it happens - and when it happens, it's nasty. Pretty nasty.

But not so nasty as micromanaging the memory by myself, so I keep licking my wounds and moving on with it.

(but sometimes would be nice to have fine control on it)

Re:This obsession with everything in RAM needs to (1)

K. S. Kyosuke (729550) | about 3 months ago | (#47492877)

I guess that's why Azul hired all those smart people, to make that go away for good.

Re:This obsession with everything in RAM needs to (3, Insightful)

Tough Love (215404) | about 3 months ago | (#47492975)

Garbage collector with no overhead, hmm? Easy peasy with no satanic complexity I suppose. And of course no obnoxious corner cases. Equivalently in engineering, when your bridge won't stay up you just add a sky hook. Easy.

Re:This obsession with everything in RAM needs to (0)

Anonymous Coward | about 3 months ago | (#47493007)

No need to be facetious. If computer system engineering had no obnoxious corner cases, half of the problems not associated with GC would disappear as well. It's not like you can magically solve everything anyway.

And yes, a garbage collector with zero overhead. Who would have thought? Well, pretty much anyone in the know, I guess.

Re:This obsession with everything in RAM needs to (2)

Lisias (447563) | about 3 months ago | (#47493979)

And yes, a garbage collector with zero overhead. Who would have thought? Well, pretty much anyone in the know, I guess.

MARK / RELEASE from the Pascal days used to work pretty well - this is the lowest-overhead "garbage collector" possible.

It's impossible to have a Garbage Collector without some kind of overhead - all you can do is try to move the overhead to a place where it's not noticed.

There's no such thing as Free Lunch.
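
To make the MARK/RELEASE idea concrete, here is a minimal sketch of a bump-pointer arena, written in Python purely for illustration (the Arena class and its method names are made up): mark() records the current allocation point and release(mark) discards everything allocated after it in one step, which is why this style of "collection" has almost no overhead.

# Hypothetical sketch of MARK/RELEASE-style region allocation.
class Arena:
    """A bump-pointer arena: alloc() hands out slices of one buffer,
    mark() records the current high-water mark, and release(mark)
    frees everything allocated since that mark in O(1)."""

    def __init__(self, size):
        self._buf = bytearray(size)
        self._top = 0

    def alloc(self, n):
        if self._top + n > len(self._buf):
            raise MemoryError("arena exhausted")
        view = memoryview(self._buf)[self._top:self._top + n]
        self._top += n
        return view

    def mark(self):
        return self._top          # remember the current allocation point

    def release(self, mark):
        self._top = mark          # one assignment discards everything after the mark


arena = Arena(1024)
m = arena.mark()
tmp = arena.alloc(100)            # scratch space for some computation
tmp[:5] = b"hello"
arena.release(m)                  # "garbage collection" with near-zero overhead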

Re:This obsession with everything in RAM needs to (1)

Anonymous Coward | about 3 months ago | (#47492667)

Why not map everything in RAM? These days even Windows gives every process 128 terabytes of address space. TERA BYTES.

Re:This obsession with everything in RAM needs to (1)

jones_supa (887896) | about 3 months ago | (#47493337)

Microsoft's Virtual Address Space (Windows) [microsoft.com] page claims that it is 8 terabytes (with a special feature to allocate just a full 2 gigabyte chunk).

Re:This obsession with everything in RAM needs to (0)

Anonymous Coward | about 3 months ago | (#47494023)

That page has a comment on the bottom indicating that the information is out-of-date, with a pointer to more recent information: "Windows 8.1 and Windows Server 2012 R2: 128 TB"

dom

Re:This obsession with everything in RAM needs to (1)

jones_supa (887896) | about 3 months ago | (#47494139)

Ah yes, I missed that. Apparently my microsecond-long attention span wasn't good enough to read the full page properly.

Re: This obsession with everything in RAM needs to (1)

loufoque (1400831) | about 3 months ago | (#47492723)

Garbage collection necessarily wastes memory by a factor of 1.5 to 2.
The collection itself also slows down the program, and in some languages cannot even happen asynchronously.

Finally, the most important aspect for program performance is locality and memory layout, something you cannot even optimize for in a language where every object is a pointer to some memory on a garbage-collected heap.

Re: This obsession with everything in RAM needs to (1, Insightful)

K. S. Kyosuke (729550) | about 3 months ago | (#47492889)

Garbage collection necessarily wastes memory by a factor of 1.5 to 2.

And manual memory management on a similar scale wastes CPU time. And the techniques that alleviate one also tend to help the other, or not?

Finally, the most important aspect for program performance is locality and memory layout, something you cannot even optimize for in a language where every object is a pointer to some memory on a garbage-collected heap.

There's not a dichotomy here. Oberon and Go are garbage collected without everything being a heap pointer.

Re: This obsession with everything in RAM needs to (0)

Anonymous Coward | about 3 months ago | (#47493691)

> And manual memory management on a similar scale wastes CPU time.
No, it doesn't. Manual memory management could include no heap allocation at all, but let's suppose you mean RAII style. GC necessarily wastes more than this, and stores that extra work up for some undefined point in the future. Why? Because if you have perfectly matched creation/deletion, it uses what it needs to. If you have a pooled RAII allocator for use in a specific call tree, it can benefit from the huge economy of scale of discarding an entire pool. Unlike GC, though, RAII memory management does not need to throw away the information it has at the point of discarding a reference, only to reconstruct it later.

Re:This obsession with everything in RAM needs to (-1)

Anonymous Coward | about 3 months ago | (#47493685)

Just because you're too dumb to understand C++ doesn't mean everybody is...

Re:This obsession with everything in RAM needs to (0)

Anonymous Coward | about 3 months ago | (#47494269)

And I promise to pull out in time, honey.

Just lay back and think of England.....

Oblig XKCD (0)

Anonymous Coward | about 3 months ago | (#47492633)

http://xkcd.com/619/

Re:Oblig XKCD (0)

Anonymous Coward | about 3 months ago | (#47492653)

That's so painfully true because Linux still has choppy playback of Flash/HTML5 video on low-performance hardware. It still is mostly a server OS (a very good one though).

Re:Oblig XKCD (0)

Anonymous Coward | about 3 months ago | (#47492689)

Firefux still has choppy playback of HTML5 video on Windows. Give me Flash or go to hell.

Re:Oblig XKCD (1)

Stumbles (602007) | about 3 months ago | (#47493039)

I still get that problem with firefox + flash. They all suck.

Re:Oblig XKCD (1)

Zero__Kelvin (151819) | about 3 months ago | (#47494105)

"It still is mostly a server OS ..."

Yes. I just answered a call on my Samsung S3 server a little while ago in fact. I also watched some TV on my Comcast Server Set-top box. I'm thinking you either don't know very much about Linux, or what a server is.

From the "is it 2005? department" (2)

dbIII (701233) | about 3 months ago | (#47492651)

"next-generation technology like non-volatile memories and PCIe SSDs"

That generation has been going on for a while, storagedude. People have been scaling according to load to deal with it.

Re:From the "is it 2005? department" (1)

Anonymous Coward | about 3 months ago | (#47492693)

"next-generation technology like non-volatile memories and PCIe SSDs"

That generation has been going on for a while, storagedude. People have been scaling according to load to deal with it.

He just woke up from a coma you insensitive clod.

Re:From the "is it 2005? department" (1)

K. S. Kyosuke (729550) | about 3 months ago | (#47492897)

Uh, no. PCIe SSDs are just coming into regular use in many places, and I haven't even heard of non-volatile memories being on the market (GB-sized, mind you - not tiny FRAMs for embedded applications).

Re:From the "is it 2005? department" (1)

viperidaenz (2515578) | about 3 months ago | (#47493059)

Fusion-io's ioDrive has been around since 2007. It's been in regular use for those who need it - like 4k video editing.
The original 7 year old drive is still faster than any SATA SSD you can find today.

Re:From the "is it 2005? department" (2)

K. S. Kyosuke (729550) | about 3 months ago | (#47493177)

That's the former, not the latter, but OK. (I also said "in many places", one would have thought it obvious that these things sort of trickle down from the top over time, especially given the initial limitations on the technology.)

Re:From the "is it 2005? department" (1)

swb (14022) | about 3 months ago | (#47493229)

Yeah, but how many people were editing 4k video in 2007? I'm sure the 3 people at the time weren't worrying about scheduling their Fusion ioDrives across workloads, either, just pounding them into submission. Wider adoption usually means mixed workloads where scheduling scarce resources matters more and is more complicated.

FWIW I don't know if I agree with the article premise -- it seems like most of these resource scheduling decisions/monitoring/adjustments are being made in hypervisors now (think VMware DRS, as only one example). And a lot of storage resource allocation isn't even done at the hypervisor level, it's done in the SAN, which simply allocates maximum storage bandwidth to the host and figures out on its own which storage to use.

Re:From the "is it 2005? department" (1)

dbIII (701233) | about 3 months ago | (#47493239)

Uh, no. PCIe SSDs are just coming into regular use in many places

OCZ seem to have been selling them via retail outlets for three years or more - let alone high end use.
There were various PCI things before the PCIe interface came into use.

Re:From the "is it 2005? department" (1)

EETech1 (1179269) | about 3 months ago | (#47493885)

IBM has DIMMs with flash memory already.

www-03.ibm.com/systems/x/options/storage/solidstate/exflashdimm/

mainframe is old crap for geezers (-1)

Anonymous Coward | about 3 months ago | (#47492657)

Big Iron is dead dead dead. Cloud is where it's at! Get with the times, jerks.

Re:mainframe is old crap for geezers (1)

Anonymous Coward | about 3 months ago | (#47492839)

Cloud???
Isn't that a mainframe connected over the internet to dumbed-down terminals, which require little complexity because the real complexity is located at a central point?

To clarify: cloud services act as the modern equivalent of the classic mainframe; only the communication channels between the core system and the terminals have changed.

Re:mainframe is old crap for geezers (1)

K. S. Kyosuke (729550) | about 3 months ago | (#47492909)

When you go cloudy, you can do the same things on a somewhat higher level. As in, when you go Google-sized, the allocation and management of resources at the granularity of a computing node probably doesn't bother you much, because you have tens or hundreds of thousands of them. Trying to solve these problems on the single-system level might be a waste of time for many applications. This is more of a problem for on-site big iron. It's an interesting problem, and if solved, could be of use to many people, but it would be much less useful for cloud providers.

Re:mainframe is old crap for geezers (3, Informative)

viperidaenz (2515578) | about 3 months ago | (#47493075)

On the contrary, if you can increase the performance of each node by 2x with 100,000 nodes, you've just saved 50,000 of them.

That's a pretty big cost saving.

The larger the installation, the more important resource management is. If you need to add more nodes, not only do you need to buy them, increase network capacity and power them, you also need to increase your cooling capacity and floor space. Your failure rate goes up too. The higher the failure rate, the more staff you need to replace things.

Re:mainframe is old crap for geezers (2)

K. S. Kyosuke (729550) | about 3 months ago | (#47493163)

I don't dispute the possible savings and their value at large scale, but in general, it seemed to me that these proposals (what TFA describes) covered inter-application interactions, not intra-application performance management. That's what I had in mind. With application-dedicated nodes (in cloud systems), improving performance is still of paramount importance, but you do that with better data structures, careful application design, internal domain knowledge and so on, not with some sort of generic app/OS resource allocation protocols. Or did I miss something?

Re:mainframe is old crap for geezers (1)

Stumbles (602007) | about 3 months ago | (#47493033)

Why yes, yes it is nothing more than a rehash of the old days where dumb terminals connected to a mainframe. Sometimes those dumb terminals were connected via terrestrial microwaves or phone lines. Now where did I put my 3270, and where did I put my modified termcap file for a vt220?

Re:mainframe is old crap for geezers (2, Informative)

Anonymous Coward | about 3 months ago | (#47493015)

Yeah - the sky is the limit!!!
Use your Microsoft cloud capabilities without hesitation....

This message was brought to you by your friendly NSA..

Re:mainframe is old crap for geezers (0)

Anonymous Coward | about 3 months ago | (#47494439)

And when those clouds swell and rise to the heights too fast, there will be a corporate hail storm and thunder. Possibly even a business tornado.

Re:mainframe is old crap for geezers (1)

blackjackshellac (849713) | about 3 months ago | (#47493605)

Are you being intentionally obtuse? It would seem so, but sometimes it's hard to tell on /.

This belongs in the cluster manager (4, Informative)

Animats (122034) | about 3 months ago | (#47492699)

That level of control probably belongs at the cluster management level. We need to do less in the OS, not more. For big data centers, images are loaded into virtual machines, network switches are configured to create a software defined network, connections are made between storage servers and compute nodes, and then the job runs. None of this is managed at the single-machine OS level.

With some VM system like Xen managing the hardware on each machine, the client OS can be minimal. It doesn't need drivers, users, accounts, file systems, etc. If you're running in an Amazon AWS instance, at least 90% of Linux is just dead weight. Job management runs on some other machine that's managing the server farm.

Re:This belongs in the cluster manager (0)

Anonymous Coward | about 3 months ago | (#47492715)

This exactly ... this stuff does not belong in the OS itself ... the OS needs to have the appropriate hooks to support this kind of external configuration/administration ...

Re:This belongs in the cluster manager (2)

K. S. Kyosuke (729550) | about 3 months ago | (#47492915)

Honestly, in MVS (z/OS), it probably makes perfect sense to have this in an OS, especially if you're paying through the nose for the hardware already. But solving it on the VM level surely makes it a huge win for everyone.

Re:This belongs in the cluster manager (2)

Tough Love (215404) | about 3 months ago | (#47492983)

If you're running in an Amazon AWS instance, at least 90% of Linux is just dead weight

Which 90% would that be, and in what way would it be dead weight? If you don't mind my asking.

Re:This belongs in the cluster manager (4, Interesting)

Lennie (16154) | about 3 months ago | (#47493377)

Yes and no.

No, large (Linux using) companies like Google, Facebook, Twitter have always used some kind of Linux container solution, not virtualization.

Yes, policy is controlled by the cluster manager.

But for example Google uses nested CGroups for implementing those policies for controlling resources/priorities on their hosts.

Virtualization is very inefficient, and Docker/Linux containers are a perfect example of how people are starting to see that again:
https://www.youtube.com/watch?... [youtube.com] / https://www.youtube.com/watch?... [youtube.com]

Supposedly, CPU utilization on AWS is very low, maybe even only 7%:
http://huanliu.wordpress.com/2... [wordpress.com]

The reason for that is that VMs get allocated resources they never end up using, because the host kernel/hypervisor doesn't know what the VM (kernel) is going to do or need.

For their own services Google doesn't use VMs, but Google does offer VMs to customers, and to control the resources used by a VM they run the VM inside a container.

Here are some talks Google did at DockerCon that mentions some of the details of how they work:
https://www.youtube.com/watch?... [youtube.com]
https://www.youtube.com/watch?... [youtube.com]

Linux Cgroups (3, Informative)

corychristison (951993) | about 3 months ago | (#47492711)

Is this not what Linux Cgroups is for?

From wikipedia (http://en.m.wikipedia.org/wiki/Cgroups):
cgroups (abbreviated from control groups) is a Linux kernel feature to limit, account, and isolate resource usage (CPU, memory, disk I/O, etc.) of process groups.

From what I understand, LXC is built on top of Cgroups.

I understand the article is talking about "mainframe" or "cloud" like build-outs but for the most part, what he is talking about is already coming together with Cgroups.
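
As a concrete illustration of what cgroups already provide, here is a minimal sketch that caps memory and CPU share for a group of processes. It assumes the cgroup v1 layout typically mounted at /sys/fs/cgroup around the time of this article; the group name "batch" and the limits are arbitrary examples, and it needs root.

# Minimal sketch: cap memory and CPU weight for a group of processes with
# cgroups (v1 layout). Group name and limits are arbitrary; requires root.
import os

CG = "/sys/fs/cgroup"

def write(path, value):
    with open(path, "w") as f:
        f.write(str(value))

def make_group(name, mem_limit_bytes, cpu_shares):
    mem_dir = os.path.join(CG, "memory", name)
    cpu_dir = os.path.join(CG, "cpu", name)
    os.makedirs(mem_dir, exist_ok=True)
    os.makedirs(cpu_dir, exist_ok=True)
    write(os.path.join(mem_dir, "memory.limit_in_bytes"), mem_limit_bytes)
    write(os.path.join(cpu_dir, "cpu.shares"), cpu_shares)
    return mem_dir, cpu_dir

def add_pid(group_dirs, pid):
    for d in group_dirs:
        write(os.path.join(d, "tasks"), pid)   # move the process into the group

if __name__ == "__main__":
    # 512 MB memory cap, 1/4 of the default CPU weight (default cpu.shares is 1024)
    dirs = make_group("batch", 512 * 1024 * 1024, 256)
    add_pid(dirs, os.getpid())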

Re:Linux Cgroups (2, Informative)

Anonymous Coward | about 3 months ago | (#47492797)

the article is not about "mainframe" or "cloud"... it is "advertising" for IBM... a company in the middle of multi-billion dollar deals with apple, all the while fighting to remain even slightly relevant.

IBM has the magic solution to finally allow the world to run simple web queries.

FUCK OFF

Re: Linux Cgroups (0)

Anonymous Coward | about 3 months ago | (#47492807)

Not an expert on cgroups, but yes, some of what he wants certainly seems to be covered already, like I/O bandwidth and per-group memory resources.

What's He trying to sell ? (0)

Anonymous Coward | about 3 months ago | (#47492717)

Load-balancing clustering, JIT storage, cloud services, mainframe offloading, dedicated database servers, high-availability redundant networking, etc....

The whole world is a nail to the man with a hammer....

So who is paying his salary (or this trip)?

Re:What's He trying to sell ? (-1)

Anonymous Coward | about 3 months ago | (#47492831)

IBM is in bed with apple. apple has a propaganda machine that could go to war with russia.

get ready to learn why everything you know is wrong, and IBM is the answer. YOU WERE SO DUMB BEFORE.

THINK DIFFERENT.

Look better it's already there (1)

dutchwhizzman (817898) | about 3 months ago | (#47492737)

KVM, Xen and other hypervisors make Linux systems look like IBM mainframes. The whole "Virtual Machine" hype, where we have guest operating systems running on hypervisors, is just like IBM's Z series.

Vista got this (1)

eyjeryjertj (3755333) | about 3 months ago | (#47492751)

This feature was introduced in Windows Vista, and as we all know, this is the best OS ever because of that. Can't wait until Linux becomes more like Vista.

Is this real or fantasy? (3, Interesting)

m00sh (2538182) | about 3 months ago | (#47492781)

I read the article and I can't tell if this is a real problem that is really affecting thousands of users and companies, or a fantasy that the author wrote up in 30 minutes after having a discussion with an old IBM engineer.

Sure, IBM has all this resource prioritization in mainframes because mainframes cost a lot of money. Nowadays, hardware is so cheap you don't have to do all that stuff.

If some young programmer undertook the challenge and created the framework, would anyone use it and test it? Will there be an actual need for something like this?

My point is: is this insider information about what is really going on in cutting-edge usage of Linux, or just some smoke being blown around for an obligatory write-up?

Re:Is this real or fantasy? (1)

Kjella (173770) | about 3 months ago | (#47492955)

These resources are all being managed today, there already are priorities for CPU, QoS for network bandwidth, ionice and quotas for storage and so on with a lot of specialization in each. He wants to build some kind of comprehensive resource management framework where everything from CPU time, memory, storage, network bandwidth etc. is being prioritized. It sounds extremely academic to me, particularly when I read the line:

I will make the assumption that everything at every level is monitored and tracked (...)

Besides, resource management isn't something that happens only on this level, for example if I have an SQL server then clearly who gets priority there matters, these are order transactions that should have millisecond latency and here's the consolidated monthly report we need by noon tomorrow. Load balancers, cache servers, read-only slaves, thread pools, TCP congestion logic, it's like you took something that you can write a whole library about and said "we need a framework for it". Good luck writing a framework that can balance anything in any situation, yes I suppose that from a galaxy away it might look like everything is a resource and we have consumers who need prioritization but the specifics of the situation matter a lot. Which is why there are many, many specialized systems that all do their specialized kind of resource management.
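
For illustration, here is a small sketch of the kind of independent per-resource knobs described above, using standard Linux facilities (os.nice and CPU affinity from the Python standard library, plus util-linux's ionice for the I/O scheduling class); the tar command is just a stand-in for some bulk job.

# Sketch of per-resource knobs, each set independently: CPU priority (nice),
# CPU placement (affinity), and disk I/O class via util-linux's ionice.
import os
import subprocess

os.nice(10)                                  # lower our own CPU priority
os.sched_setaffinity(0, {0, 1})              # restrict this process to CPUs 0-1

# Run a bulk job in the "idle" I/O scheduling class (ionice -c3) so it
# yields the disk to interactive work. The tar command is just an example.
subprocess.run(["ionice", "-c", "3", "tar", "czf", "backup.tgz", "/var/data"])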

Re:Is this real or fantasy? (1)

Anonymous Coward | about 3 months ago | (#47493903)

Nowadays, hardware is so cheap you don't have to do all that stuff.

Instead of spending a bit of those resources to allocate the rest with good efficiency, the standing assumption is that resources are effectively free anyway and so wasting them with gay abandon is worth it. This is the assumption, but it's not really true.

At sufficient scale even the smallest cost becomes non-negligible. This isn't just for the few of us who write "truly web-scale" or whatever the term is today. Even in something as simple as an end-user application like, oh, a video player, "saving" programmer time effectively moves the cost onto the end user. This should be multiplied by the number of users as well as the frequency with which the end user gets hit with it. Especially that per-user multiplication we often forget. As an example, VLC has an estimated 30-odd million users, so shaving, say, one second off the start-up time yields almost a year of not forcing your users to sit and wait. It's not just start-up, it's hiding in just about everything computers do.

While it's true that some optimisations simply aren't worth it, what I'm on about is the reverse: deliberately not taking even reasonable care to avoid wasting resources wantonly. Consequently, that wasting does happen a lot.

If some young programmer undertook the challenge and created the framework, would anyone use it and test it? Will there be an actual need for something like this?

Personally I don't believe in fabricating frameworks. It's mainly self-serving make-work so the programmer can kid himself he's being useful to the world. More often than not it results in slow gloopy bloat that needs to be carried around for its own sake possibly much more so than because it's useful.

There are ways to avoid this, but it's basically never by setting out to write "a framework" before you've written a few applications that could use it.

My point is: is this insider information about what is really going on in cutting-edge usage of Linux, or just some smoke being blown around for an obligatory write-up?

No idea. But the state of computing is such that there's a lot to be improved yet.

Have your cake, and eat it too (0)

Anonymous Coward | about 3 months ago | (#47492789)

Mainframes have always looked massively expensive, so we made do with cheap commodity crap. And crappy it was. You can see it everywhere, from (lack of, or bolted on as an afterthought) management features, to single points of failure everywhere, to being cheaply made and so prone to breakage and very hard to diagnose. Most of us have never worked with anything else so have no idea that things could be massively better. Resource management in the OS is but a small thing lacking in comparison.

What's most amazing is that this status quo is gospel, that nobody saw fit to sit back and really think about the whole thing and perhaps start a project or two to try and do something about it. Instead we see marginal fiddling that really isn't innovating at all. From the poetteringware that's deliberately but unnecessarily breaking compatibility in the name of progress but hardly progressing at all, to a bright new "standard" in rack sizes, right smack dab between the previous two(!) existing standards in size while still managing to fail to seize the chance to go metric, with a lot of cheap more-of-the-same software and hardware in between. The larger theme in computing is that it's not progressing much at all. It's not even baby steps, it's fiddling, doodling, not going anywhere at all.

Re:Have your cake, and eat it too (1)

K. S. Kyosuke (729550) | about 3 months ago | (#47492999)

right smack dab between the previous two(!) existing standards in size

That reminds me of the (rejected) compromise that suggested that we index arrays starting with 0.5. :)

Re:Have your cake, and eat it too (0)

Anonymous Coward | about 3 months ago | (#47493109)

What do you mean "rejected"?? That's basically exactly what you do in OpenGL when you (mis-)use a texture as an array! ;)
How does that saying go - no idea is too stupid not to be implemented?

This isn't just "Taken care of" by a hypervisor (0)

Anonymous Coward | about 3 months ago | (#47492827)

This really can be a user-visible problem.
For example, the scheduling of things like SSD trims really needs to be stepped up.

Right now you can get unexpected blocking behaviour, for up to a whole second.
And there's no way for user-land to see it's going to happen, or even really to know what level of storage it is going to be using.

Maybe this stuff wants to be done as cluster management, rather than as part of the core kernel; but from a user's point of view - it just needs to be done.

Re:This isn't just "Taken care of" by a hypervisor (1)

jones_supa (887896) | about 3 months ago | (#47492863)

Why is the I/O layer team of Linux not taking responsibility to make TRIM work properly? Linux still sends individual TRIM sector commands instead of TRIM ranges. This creates unnecessary traffic in the bus and is especially nasty for everything before SATA 3.1, because then the TRIM command has to be executed synchronously, meaning that the device command queue has to be completely flushed first.

Re:This isn't just "Taken care of" by a hypervisor (1)

Anonymous Coward | about 3 months ago | (#47493693)

Because maintaining lists of blocks and having algorithms to coalesce them and flush to disk from time to time sounds simple, but is actually very complicated, almost as complicated as the rest of the driver. It is basically implementing garbage collection in a disk driver, which introduces all sorts of asynchrony and plays havoc with latencies. Love doing that sort of thing in kernel space, no? The spec is fine, but doing that sort of thing in a driver is asking too much. It should be done in user space.

Re:This isn't just "Taken care of" by a hypervisor (1)

jones_supa (887896) | about 3 months ago | (#47493813)

That is true.

Lotta work for an OS nobody uses (0)

Anonymous Coward | about 3 months ago | (#47492847)

What is it, like 2% share? I mean, it was cool when 1% used it but now it's just an old, desperate OS looking for something, ANYTHING, to keep it from dying completely.

Re:Lotta work for an OS nobody uses (2)

Z00L00K (682162) | about 3 months ago | (#47493115)

2% may be the desktop share for Linux, but when it comes to servers and handheld devices like Android it's a different story.

So ... (0)

Anonymous Coward | about 3 months ago | (#47492875)

Linux grew because when people wanted/needed something, they wrote it themselves. Companies helped with money/manpower because they got some benefits.
So, if there's something missing, then it's probably not needed, or the other solutions cover it well enough.

Straw Proposals? (1)

Anonymous Coward | about 3 months ago | (#47492901)

I thought the title wanted to talk about something revolutionary, so I read through the details.

What I discovered was that the title was bullshit, so were the concerns surrounding Linux's capabilities. Some of them make sense for general all-purpose computation, some of them don't. I don't see why anybody should take these proposals too seriously for kernel inclusions.

The portion on primary memory management is perfect. Hadoop does suffer from a lack of cache-aware code; so far, only modified kernels have been in use with systems such as Azul's C4-based Virtual Machine.

The portion on user-driven resource management (CPU/disk) is a very thorny issue. Most people don't use big monolithic computers, but provisioned, distributed systems. This leads to better separation of concerns and better diagnostics. For most people this may be a non-issue, rather than a reason to create complicated, entangled scheduler code.
The portion on User Accounting generally does not make sense for most Linux machines in production today. Most people who favor lower latencies do not want context switches.

Linux is not a product, but a meta-product. It is up to the implementor to take a variety of components, put them together in a logical way, and configure the modules/userland to work with them correctly. If IBM feels that they want to bring in that prickly complexity to run Linux on a boilerplate expensive mainframe computer, it's their headache.

64-bit address space.. (0)

Anonymous Coward | about 3 months ago | (#47492913)

..ought to be enough for everyone. I mean, 2^64 could address all atoms in the solar system. How much porn do you expect to be able to store anyway?

Re:64-bit address space.. (0)

Anonymous Coward | about 3 months ago | (#47493005)

The amount of available information is only a fraction of what is possible information at any point of time. Since there is time, anything requiring to document information is proportional to the product of space and time. Since time is infinite unless YOU prove it otherwise, you need a lot of space to store all of that.

Re:64-bit address space.. (0)

Anonymous Coward | about 3 months ago | (#47493175)

It is "only" 2^48 currently though.

Re:64-bit address space.. (1)

Anonymous Coward | about 3 months ago | (#47493711)

I mean, 2^64 could address all atoms in the solar system.

False. It could almost address all atoms in a milligram of matter, though.

complex application example (4, Insightful)

lkcl (517947) | about 3 months ago | (#47492919)

i am running into exactly this problem on my current contract. here is the scenario:

* UDP traffic (an external requirement that cannot be influenced) comes in
* the UDP traffic contains multiple data packets (call them "jobs") each of which requires minimal decoding and processing
* each "job" must be farmed out to *multiple* scripts (for example, 15 is not unreasonable)
* the responses from each job running on each script must be collated then post-processed.

so there is a huge fan-out where jobs (approximately 60 bytes) are coming in at a rate of 1,000 to 2,000 per second; those are being multiplied up by a factor of 15 (to 15,000 to 30,000 per second, each taking very little time in and of themselves), and the responses - all 15 to 30 thousand - must be in-order before being post-processed.

so, the first implementation is in a single process, and we just about achieve the target of 1,000 jobs but only about 10 scripts per job.

anything _above_ that rate and the UDP buffers overflow and there is no way to know if the data has been dropped. the data is *not* repeated, and there is no back-communication channel.

the second implementation uses a parallel dispatcher. i went through half a dozen different implementations.

the first ones used threads, semaphores through python's multiprocessing.Pipe implementation. the performance was beyond dreadful, it was deeply alarming. after a few seconds performance would drop to zero. strace investigations showed that at heavy load the OS call futex was maxed out near 100%.

next came replacement of multiprocessing.Pipe with unix socket pairs and threads with processes, so as to regain proper control over signals, sending of data and so on. early variants of that would run absolutely fine up to some arbitrary limit then performance would plummet to around 1% or less, sometimes remaining there and sometimes recovering.

next came replacement of select with epoll, and the addition of edge-triggered events. after considerable bug-fixing a reliable implementation was created. testing began, and the CPU load slowly cranked up towards the maximum possible across all 4 cores.

the performance metrics came out *WORSE* than the single-process variant. investigations began and showed a number of things:

1) even though it is 60 bytes per job the pre-processing required to make the decision about which process to send the job to was so great that the dispatcher process was becoming severely overloaded

2) each process was spending approximately 5 to 10% of its time doing actual work and NINETY PERCENT of its time waiting in epoll for incoming work.

this is unlike any other "normal" client-server architecture i've ever seen before. it is much more like the mainframe "job processing" that the article describes, and the linux OS simply cannot cope.

i would have used POSIX shared memory Queues but the implementation sucks: it is not possible to identify the shared memory blocks after they have been created so that they may be deleted. i checked the linux kernel source: there is no "directory listing" function supplied and i have no idea how you would even mount the IPC subsystem in order to list what's been created, anyway.

i gave serious consideration to using the python LMDB bindings because they provide an easy API on top of memory-mapped shared memory with copy-on-write semantics. early attempts at that gave dreadful performance: i have not investigated fully why that is: it _should_ work extremely well because of the copy-on-write semantics.

we also gave serious consideration to just taking a file, memory-mapping it and then appending job data to it, then using the mmap'd file for spin-locking to indicate when the job is being processed.

after all of these crazy implementations, i basically have absolutely no confidence in the linux kernel nor in the GNU/Linux POSIX-compliant implementation of the OS on top - i have no confidence that it can handle the load.

so i would be very interested to hear from anyone who has had to design similar architectures, and how they dealt with it.

Re:complex application example (0)

Anonymous Coward | about 3 months ago | (#47492935)

It's an interesting combination when you are clearly a very knowledgeable guy but don't use capital letters to begin sentences. :)

Re:complex application example (0)

Anonymous Coward | about 3 months ago | (#47492957)

Here? Try stack overflow.

Re:complex application example (1)

sonamchauhan (587356) | about 3 months ago | (#47492989)

Try putting a load balancer (Cisco ACE, Citrix NetScaler) on a virtual IP and load balancing the UDP packets across several nodes behind the balancer.

complex application example (0)

Anonymous Coward | about 3 months ago | (#47493017)

Switch to Go. 15 scripts to re-write is not the end of the world. Use goroutines. profit ?

Re:complex application example (0)

Anonymous Coward | about 3 months ago | (#47493019)

If you've got no confidence in the Linux kernel then why don't you port your code to some alternative OSes (Solaris, FreeBSD, OS X, Windows etc.) to compare performance?
Reading your post there's a few oddities that occur to me, though obviously I'm probably missing a lot of the relevant information.
If you're trying to achieve maximum performance I'm wondering why you're coding with python.
Why are all of your processes waiting for epoll? Surely you've got one process reading the network data and spawning the required threads?
You might find you don't need much resource locking at all with the right design.
Have you worked out the theoretical maximum performance you could achieve with the hardware configuration you've chosen? How close to this are you getting with your current implementation? Maybe it would be more practical to scale your system horizontally rather than spending more time and money trying to squeeze more performance out of your current architecture.

Re:complex application example (1)

Anonymous Coward | about 3 months ago | (#47493867)

...If you're trying to achieve maximum performance I'm wondering why you're coding with python...

That was my Daily WTF too

1) even though it is 60 bytes per job the pre-processing required to make the decision about which process to send the job to was so great that the dispatcher process was becoming severely overloaded

So the OP is using 1 thread, even though each incoming UDP packet can be "pre-processed" in an embarrassingly parallel fashion? The main issue I see with the OP's design is that each UDP packet is being worked on by at least 17 threads/processes:
      1) the dispatcher (pre-processing) thread
      2) all the ## "scripts" (OP said 15)
      3) then the "post-processing" thread

That is a hell of a lot of inter-process (or inter-thread) communication for EACH UDP packet. How about this design:

      1) thread to handle incoming UDP packets; i.e. just put them into a queue that worker threads pull from -- if the queue is full, start dropping packets
      2) a thread-pool of 'workers', who:
              a) "pre-processes" the UDP packet
              b) does the 'work' of the 'scripts' single-threadedly (each worker thread can handle ANY UDP packet)
              c) post-processes and return results to where ever they go

Now each UDP packet is touched by 2 threads, not 17.

captcha: calmness
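
A minimal sketch of that receiver-plus-worker-pool layout follows; the port number and handle_job() body are placeholders, and for CPU-bound work in CPython the workers would be processes rather than threads, but the structure is the same.

# Sketch: one thread drains the UDP socket into a bounded queue (dropping
# when full), and a pool of workers does pre-processing, per-job work and
# post-processing, each in a single thread.
import queue
import socket
import threading

JOBS = queue.Queue(maxsize=100_000)

def receiver(port=9999):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, _addr = sock.recvfrom(2048)
        try:
            JOBS.put_nowait(data)          # never block the socket reader
        except queue.Full:
            pass                           # explicit, visible packet drop

def handle_job(data):
    # placeholder: decode the ~60-byte job, run the per-job work that the
    # 15 "scripts" did, then post-process the result
    return data

def worker():
    while True:
        data = JOBS.get()
        handle_job(data)
        JOBS.task_done()

threading.Thread(target=receiver, daemon=True).start()
for _ in range(8):                         # size the pool to the core count
    threading.Thread(target=worker, daemon=True).start()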

Re:complex application example (0)

Anonymous Coward | about 3 months ago | (#47493149)

You might benefit from looking at zeromq, which can simplify this type of coordinated processing, both in single- and multi-node systems. There's a Python binding, so you should be able to give it a go quite quickly. Not guaranteed that this is the right approach for your particular requirements, but it does sound similar to stuff I've worked on in the past, and in my opinion zmq does simplify away a lot of the complexity in a reliable way. Performance is pretty amazing too! See the zeromq guide for details [zeromq.org]

Re:complex application example (5, Insightful)

Mr Thinly Sliced (73041) | about 3 months ago | (#47493195)

> the first ones used threads, semaphores through python's multiprocessing.Pipe implementation.

I stopped reading when I came across this.

Honestly - why are people trying to do things that need guarantees with python?

The fact you have strict timing guarantees means you should be using a realtime kernel and realtime threads with a dedicated network card and dedicated processes on IRQs for that card.

Take the incoming messages from UDP and post them on a message bus should be step one so that you don't lose them.

Re:complex application example (4, Informative)

lkcl (517947) | about 3 months ago | (#47493359)

> the first ones used threads, semaphores through python's multiprocessing.Pipe implementation.

I stopped reading when I came across this.

Honestly - why are people trying to do things that need guarantees with python?

because we have an extremely limited amount of time as an additional requirement, and we can always rewrite critical portions or later the entire application in c once we have delivered a working system, which means that the client can get some money in and can therefore stay in business.

also i worked with david and we benchmarked python-lmdb after adding in support for looped sequential "append" mode and got a staggering performance metric of 900,000 100-byte key/value pairs, and a sequential read performance of 2.5 MILLION records. the equivalent c benchmark is only around double those numbers. we don't *need* the dramatic performance increase that c would bring if right now, at this exact phase of the project, we are targeting something that is 1/10th to 1/5th the performance of c.

so if we want to provide the client with a product *at all*, we go with python.

but one thing that i haven't pointed out is that i am an experienced linux python and c programmer, having been the lead developer of samba tng back from 1997 to 2000. i simply transferred all of the tricks that i know involving while-loops around non-blocking sockets and so on over to python. ... and none of them helped. if you get 0.5% of the required performance in python, it's so far off the mark that you know something is drastically wrong. converting the exact same program to c is not going to help.

The fact you have strict timing guarantees means you should be using a realtime kernel and realtime threads with a dedicated network card and dedicated processes on IRQs for that card.

we don't have anything like that [strict timing guarantees] - not for the data itself. the data comes in on a 15 second delay (from the external source that we do not have control over) so a few extra seconds delay is not going to hurt.

so although we need the real-time response to handle the incoming data, we _don't_ need the real-time capability beyond that point.

Take the incoming messages from UDP and post them on a message bus should be step one so that you don't lose them.

.... you know, i think this is extremely sensible advice (which i have heard from other sources) so it is good to have that confirmed... my concerns are as follows:

questions:

* how do you then ensure that the process receiving the incoming UDP messages is high enough priority to make sure that the packets are definitely, definitely received?

* what support from the linux kernel is there to ensure that this happens?

* is there a system call which makes sure that data received on a UDP socket *guarantees* that the process receiving it is woken up as an absolute priority over and above all else?

* the message queue destination has to have locking otherwise it will be corrupted. what happens if the message queue that you wish to send the UDP packet to is locked by a *lower* priority process?

* what support in the linux kernel is there to get the lower priority process to have its priority temporarily increased until it lets go of the message queue on which the higher-priority task is critically dependent?

this is exactly the kind of thing that is entirely missing from the linux kernel. temporary automatic re-prioritisation was something that was added to solaris by sun microsystems quite some time ago.

to the best of my knowledge the linux kernel has absolutely no support for these kinds of very important re-prioritisation requirements.
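
For reference, here is a rough sketch of the kind of sequential-append benchmark mentioned a few paragraphs up, assuming the py-lmdb binding; the key layout, record count and map size are arbitrary, and the numbers will depend entirely on hardware.

# Rough sequential-append write/read benchmark with the py-lmdb binding.
import time
import lmdb

N = 1_000_000
env = lmdb.open("/tmp/bench.lmdb", map_size=2 * 1024**3)

value = b"x" * 100
t0 = time.time()
with env.begin(write=True) as txn:
    for i in range(N):
        # append=True is only valid for keys written in sorted order,
        # which is what makes the sequential-append path fast
        txn.put(i.to_bytes(8, "big"), value, append=True)
print("writes/s:", int(N / (time.time() - t0)))

t0 = time.time()
with env.begin() as txn:
    count = sum(1 for _ in txn.cursor())   # sequential scan of all records
print("reads/s :", int(count / (time.time() - t0)))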

Re:complex application example (4, Informative)

Mr Thinly Sliced (73041) | about 3 months ago | (#47493435)

First - the problem with python is that because it's a VM you've got a whole lot of baggage in that process out of your control (mutexes, mallocs, stalls for housekeeping).

Basically you've got a strict timing guarantee dictated by the fact that you have incoming UDP packets you can't afford to drop.

As such, you need a process sat on that incoming socket that doesn't block and can't be interrupted.

The way you do that is to use a realtime kernel and dedicate a CPU using process affinity to a realtime receiver thread. Make sure that the only IRQ interrupt mapped to that CPU is the dedicated network card. (Note: I say realtime receiver thread, but in fact it's just a high priority callback down stack from the IRQ interrupt).

This realtime receiver thread should be a "complete" realtime thread - no malloc, no mutexes. Passing messages out of these realtime threads should be done via non-blocking ring buffers to high (regular) priority threads who are in charge of posting to something like zeromq.

Depending on your deadlines, you can make it fully non-blocking but you'll need to dedicate a CPU to spin lock checking that ring buffer for new messages. Second option is that you calculate your upper bound on ring buffer fill and poll it every now and then. You can use semaphores to signal between the threads but you'll need to make that other thread realtime too to avoid a possible priority inversion situation.

> how do you then ensure that the process receiving the incoming UDP messages is high enough priority to make sure that the packets are definitely, definitely received

As mentioned, dedicate a CPU, mask everything else off from it, and make the IRQ point to it.

> what support from the linux kernel is there to ensure that this happens

With a realtime thread the only other thing that could interrupt it would be another realtime priority thread - but you should make sure that situation doesn't occur.

> is there a system call which makes sure that data received on a UDP socket *guarantees* that the process receiving it is woken up as an absolute priority over and above all else

Yes, IRQ mapping to the dedicated CPU with a realtime receiver thread.

> the message queue destination has to have locking otherwise it will be corrupted. what happens if the message queue that you wish to send the UDP packet to is locked by a *lower* priority process

You might get away with having the realtime receiver thread do the zeromq message push (for example) but the "real" way to do this would be lock-free ring buffers and another thread being the consumer of that.

> what support in the linux kernel is there to get the lower priority process to have its priority temporarily increased until it lets go of the message queue on which the higher-priority task is critically dependent

You want to avoid this. Use lockfree structures for correctness - or you may discover that having the realtime receiver thread do the post is "good enough" for your message volumes.

> to the best of my knowledge the linux kernel has absolutely no support for these kinds of very important re-prioritisation requirements

No offense, but Linux has support for this kind of scenario, you're just a little confused about how you go about it. Priority inversion means you don't want to do it this way on _any_ operating system, not just Linux.
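
A small sketch of the pinning described above, using Linux-only calls from the Python standard library; the IRQ number is a placeholder (look up the real one for your NIC in /proc/interrupts), the whole thing needs root, and isolating the CPU from the general scheduler (e.g. the isolcpus boot parameter) would be done separately at boot.

# Sketch: dedicate a CPU to the receiver, give it SCHED_FIFO priority, and
# steer the NIC's IRQ to the same CPU. Linux-only; needs root.
import os

RECEIVER_CPU = 3

# Pin this process to the dedicated CPU...
os.sched_setaffinity(0, {RECEIVER_CPU})

# ...and make it a realtime (SCHED_FIFO) task so only other RT tasks can preempt it.
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))

# Route the NIC's interrupt to the same CPU (hex bitmask, so CPU 3 -> 0x8).
# "42" is a placeholder IRQ number.
with open("/proc/irq/42/smp_affinity", "w") as f:
    f.write("%x" % (1 << RECEIVER_CPU))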

Re: complex application example (1)

rkit (538398) | about 3 months ago | (#47493567)

You should look up mutex attributes, in particular priority inheritance. Also, I think you are experiencing the "thundering herd" effect. Maybe the leader/follower pattern could be effective here.

Re:complex application example (1)

anon mouse-cow-aard (443646) | about 3 months ago | (#47493779)

Given this problem, there are several options for fanout... I'm assuming that hardware can be added, so adding a load balancer and then three or four machines to cope with the load behind the load balancer might be the quickest (least code change) way to address the issue. Especially if there is no global state needed, this is likely the most expedient.

An option that might be a bit more flexible on a single box, while still scalable, would be to have a task that parses each incoming job and posts it to a rabbitmq instance (AMQP bus.) rabbitmq works very well out of the box, with little tweaking. you then have the fifteen scripts called in subscriber instances as separate processes. You are essentially farming out all the IPC to the broker, and the broker does this sort of thing very well. The scripts are now isolated processes, and their memory management etc... now become separate issues (if one misbehaves, you can always have the subscription management wrapper around it restart it from time to time.)

Pika would be the preferred python bindings appropriate for speaking with the broker. You might still be beyond what can be done with a single node, but growing things with AMQP/rabbit is straight-forward.

Re:complex application example (1)

Alef (605149) | about 3 months ago | (#47493563)

Honestly - why are people trying to do things that need guarantees with python?

Oh, you got that far at least? What I wonder is, why are people trying to do things that need guarantees using UDP with no back-communication, no redundancy built in to the protocol, and not even detection of lost packets? External requirement my ass, why do you accept a contract under those conditions? The correct thing to say is "this is broken, and it's not going to work". If they still want the turd polished, it should be under very clear conditions of not accepting responsibility for the end result, and they should be known and understood by all decision makers at the customer. And even so I would be wary.

Otherwise, you're in a prime position for getting hit by the blame when shit hits the fan, either because it doesn't work, or because you didn't tell them that in the first place, since you are supposed to be the expert.

Re:complex application example (1)

Mr Thinly Sliced (73041) | about 3 months ago | (#47493623)

FWIW I agree vis-a-vis using UDP for a business critical thing. I'd want exemption from responsibility for any missed packets purely due to the infrastructure in between.

Re:complex application example (1)

hyc (241590) | about 3 months ago | (#47494525)

Totally agreed. The lack of guarantees re: UDP is built into the UDP spec, it's not a failing of the Linux kernel (nor any other OS) that it won't tell you about dropped packets. Luke, you should know better than this.

Re:complex application example (1)

Gothmolly (148874) | about 3 months ago | (#47493635)

a) Your UDP buffers probably suck. OOB RedHat gives you 128K, and each packet takes up 2304 bytes of buffer space. Try 100MB, or whatever YOUR_RATE/2304 works out to.
b) Pull off the queue and buffer in RAM as fast as you can
c) Have a second thread read from RAM
d) Don't invoke scripts to process each packet, you're spinning all your time in process creation. In fact, don't use interpreted scripts at all.
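
A sketch of point (a), asking for a much larger UDP receive buffer: the kernel silently caps the request at net.core.rmem_max, so that sysctl usually has to be raised first (e.g. sysctl -w net.core.rmem_max=104857600), and the 100 MB figure just follows the parent's suggestion.

# Sketch: enlarge the UDP receive buffer and check what the kernel granted.
import socket

WANT = 100 * 1024 * 1024

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, WANT)
sock.bind(("0.0.0.0", 9999))

# Linux reports roughly double the granted size; compare it to WANT to see
# whether net.core.rmem_max clipped the request.
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("asked for", WANT, "got", granted)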

Re:complex application example (1)

raxx7 (205260) | about 3 months ago | (#47494651)

Interesting. It sounds a bit like an application I have.
Like yours, it involves UDP and Python.
I have 150,000 "jobs" per second arriving in UDP packets. "Job" data can be between 10 and 1400 bytes and as many "jobs" are packed into each UDP packet as possible.

I use Python because, intermixed with the high performance job processing, I also mix slow but complex control sequences (and I'd rather cut my wrists than move all that to C/C++).
But to achieve good performance, I had to reduce Python's contribution to the critical path as much as possible and offload to C++.

My architecture has 3 processes, which communicate through shared memory and FIFOs.
The shared memory is divided into fixed size blocks, each big enough to contain the information for a maximum-size job.

Processs A is C++ and has two threads.
Thread A1 receives the UDP packets, decodes the contents, writes the decoded job into a shared memory block and stores the block index number into a queue.
Thread A2 handles communication with process B. This communication consists mainly of sending process B block index numbers (telling B where to get job data) and receiving block index numbers back from process B (telling A that the block can be re-used).

Process B is a single threaded Python.
When in the critical loop, its main job is to forward block index numbers from process A to process C and from process C back to process A.
(It also does some status checks and control functions, which is why it's in the middle).
In order to keep the overhead low, the block index numbers are passed in batches of 128 to 1024 (each block index number corresponding to a job).

Process C is, again, multi-threaded C++.
The main thread takes the data from the shared memory, returns the block index numbers to process B and pushes the jobs through a sequence of processing modules, in batches of many jobs.
Within each processing module, the module hands the batch of jobs out to a thread pool and collects them back, while preserving the order.
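
A compressed sketch of that block-index scheme, collapsed into a single Python file for illustration: a fixed-size shared-memory region divided into blocks, with only block index numbers travelling through queues between producer and consumer. It uses multiprocessing.shared_memory (Python 3.8+), which postdates this discussion, in place of the C++ processes, and the block size and count are arbitrary.

# Sketch: fixed-size shared-memory blocks, with only block indexes queued.
from multiprocessing import Process, Queue
from multiprocessing import shared_memory

BLOCK_SIZE = 1500          # big enough for a maximum-size job
NUM_BLOCKS = 4096

def producer(shm_name, full_q, free_q):
    shm = shared_memory.SharedMemory(name=shm_name)
    for job_id in range(10_000):
        idx = free_q.get()                         # wait for a free block
        start = idx * BLOCK_SIZE
        payload = b"job-%d" % job_id
        shm.buf[start:start + len(payload)] = payload
        full_q.put((idx, len(payload)))            # only the index crosses the queue
    full_q.put(None)
    shm.close()

def consumer(shm_name, full_q, free_q):
    shm = shared_memory.SharedMemory(name=shm_name)
    while (item := full_q.get()) is not None:
        idx, length = item
        start = idx * BLOCK_SIZE
        _job = bytes(shm.buf[start:start + length])  # process the job here
        free_q.put(idx)                              # hand the block back
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=BLOCK_SIZE * NUM_BLOCKS)
    full_q, free_q = Queue(), Queue()
    for i in range(NUM_BLOCKS):
        free_q.put(i)
    p = Process(target=producer, args=(shm.name, full_q, free_q))
    c = Process(target=consumer, args=(shm.name, full_q, free_q))
    p.start(); c.start(); p.join(); c.join()
    shm.close(); shm.unlink()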

"open systems" vs closed systems (0)

Anonymous Coward | about 3 months ago | (#47492947)

"Resource management and allocation for complex workloads has been a need for some time in open systems"

Not that mainstream closed systems like microsoft corporation's so-called "windows" product and apple's "macos" system had anything like this.
And in microsoft corporation's "windows" operating system it is even impossible to implement this, due to buggy system design and already existing tons of issues related to resource management that would require rewriting windows from scratch. Since microsoft corporation's "windows" is just a toy for (rather stupid) children, this will never happen.

Just a note for people misunderstanding open and closed systems.

disk, memory access and cpu usage (1)

Mister Liberty (769145) | about 3 months ago | (#47493079)

Weren't they added in Linux 0.01 around 1991?

Parallel Processing Super Computers (0)

Anonymous Coward | about 3 months ago | (#47493265)

Parallel processing super computers are a cost effective way of managing complex resources. The new technologies mentioned in Mr. Newman's article will make these super computers all the more efficient.

Re:Parallel Processing Super Computers (1)

anon mouse-cow-aard (443646) | about 3 months ago | (#47493843)

uh, no, just the opposite. Many supercomputing applications are about getting access to compute/memory/IO bandwidth with as little intermediation as possible. Job allocation methods on supercomputers typically allocate entire nodes, so the sort of fine-grained prioritization prescribed here is rather irrelevant. The whole article looks a bit anachronistic; maybe it is sensible to people from a mainframe background, where reliability/predictability trumps other requirements, but when performance is an important concern, the kind of intrusive over-monitoring described would not be wanted.

Where is the market demand? (1)

Stonefish (210962) | about 3 months ago | (#47493277)

There is a solution that does this: it's called a mainframe, and they're hideously expensive. Cooked a motherboard recently? 1.2 million. Want a 10G network card? $20,000. Now you can buy an awful lot of commodity hardware much cheaper, so that you have excess resources: need a dedicated system for a database, buy one; run the other applications on a shared resource; and you'll still end up with spare change if you dump a mainframe contract. You can replace a mainframe with commodity items, you just need to plan for it. The cost of this scheduling is higher than deploying a couple of dedicated components.
The last time that I looked, the number of cycles being performed on mainframes had been decreasing for over 25 years, i.e. there's not a great deal of market demand in this area and most of this market is legacy systems.
The other litmus test is to look at how many successful IT companies founded in the last 20 years use a mainframe. I suspect that it is zero. Do Google, Facebook, Amazon etc. use mainframes?
Scheduling and resource control on systems is a bit like QoS: if you can buy fat pipes, just buy fat pipes; it's a better solution and it makes all of the problems go away. Introduce scheduling and you'll be employing goons from now to eternity trying to sort out which application is king, and performance will still suck.

Linux ALREADY has it! (2)

Cyberax (705495) | about 3 months ago | (#47493285)

Really. Author is an idiot. He should actually read something that is not a documentation volume for his beloved IBM mainframe.

Linux has cgroups support, which allows you to partition a machine into multiple hierarchical containers. Memory and CPU partitioning work well, so it's easy to give only a certain percentage of CPU, RAM and/or swap to a specific set of tasks. Direct disk IO support is getting into shape.

Lots of people are running cgroups in production on very large scales. There are still some gaps and inconsistencies around the edges (for example, buffered IO bandwidth can't be metered), but kernel developers are working on fixing them.

_why_ can't we keep throwing hardware at it? (1)

fygment (444210) | about 3 months ago | (#47493825)

Moore's Law speaks to computational horsepower per unit cost. But even if the computational abilities do not continue to increase, the costs will keep coming down.

Hardware is cheap. It's not an elegant solution, but it's cheap. And getting cheaper.

Focus on the UX, because without that, who cares what your kernel can do? Machines are plenty powerful enough; what you want to do is get your OS into the hands of the most users possible .... right?

Re:_why_ can't we keep throwing hardware at it? (1)

Jeremi (14640) | about 3 months ago | (#47494121)

Hardware is cheap. It's not an elegant solution, but it's cheap. And getting cheaper.

Right, but if your company comes up with an elegant solution that gets 10x better performance out of a given piece of hardware, and your competitors cannot (or do not) do the same, then you've got a cost advantage over your competitors and can use that to get customers to choose to buy your product rather than theirs.

That will always be true, no matter how fast and cheap the hardware gets. Either your customers will be able to do 10 times more work with your product, or (if there isn't 10 times more work to actually do), they can get the job done with 10 times less hardware (and thus 10 times less expense).

Focus on the UX, because without that, who cares what your kernel can do?

There is a whole world of software out there that runs in the background and doesn't require much (if any) UX. Think of the software that generates your credit card statement every month.

tuned (1)

bill_mcgonigle (4333) | about 3 months ago | (#47494095)

I don't have hard data yet, but I'm finding that EL7 is much much faster than EL6 on the same hardware for the workloads I've tried so far.

I don't know that tuned [fedorahosted.org] is most responsible, but I can see that it's running and that's what it's supposed to do.

I realize that the kernel is better and perhaps XFS helps, but those alone seem insufficient to realize the difference.

Anyway, it's somewhat along the direction people are talking about, even if only minimally.
