
Comments


Systemd's Lennart Poettering: 'We Do Listen To Users'

naasking Re:Lennart, do you listen to sysadmins? (551 comments)

That is such an idiotic statement that I won't even bother continuing the discussion. This link is the wikipedia page.

Perhaps you should actually read the link you specified. Linus himself describes the term "hybrid" kernel as simple marketing, which is exactly what I said.

Pulseaudio is a brain damaged [beastwithin.org] piece of software and one of the first things to be removed in any distribution.

And yet it hasn't been removed: it's still around and still quite popular.

5 days ago

Systemd's Lennart Poettering: 'We Do Listen To Users'

naasking Re:Lennart, do you listen to sysadmins? (551 comments)

Yet for machines like home computers it is simply not possible. It is only with the relative advance of hardware that the microkernel can actually get close to a monolithic kernel in terms of performance.

This is not true. The L4 kernel ran Linux as a guest OS with only 5% overhead. This was the birth of hypervisors like Xen, which are really just a sort of microkernel. Virtualization is everywhere now, and no one seems to be complaining about overhead. If a whole guest OS can run virtualized with so little overhead, then programs running natively on the microkernel itself would be at least as fast.

It's perhaps too easy to introduce performance problems via a poor choice of decomposition, but that doesn't entail that decomposed systems must necessarily perform poorly.

It is this performance consequence that made Windows NT, originally designed on a microkernel architecture, move towards a hybrid kernel.

Poorly designed systems perform poorly. The NT kernel might have been a nice kernel design from some people's standpoint, but then people thought the Mach kernel was a good microkernel too. Both opinions are incorrect.

There are far more Mac and Windows installations than all Linux distributions combined. These are all microkernel or hybrid-kernel architectures.

There is no such thing as a hybrid. You are either on fire, or you are not. You are either a microkernel, or you are a monolithic kernel. Macs may use the Mach microkernel, but it's a poor kernel, and the BSD subsystem provides most of the system calls, all of which execute in kernel space. This is a monolithic kernel that happens to have a poorly designed microkernel as its ancestor.

Systemd is all about marketing, and nothing about engineering. It too will fail and be replaced, just like PulseAudio by ALSA.

Except PulseAudio hasn't been replaced, it's still used by most distributions.

about two weeks ago

Systemd's Lennart Poettering: 'We Do Listen To Users'

naasking Re:Lennart, do you listen to sysadmins? (551 comments)

In the end the real debate was HOW to accomplish the modularity, not whether to make the kernel modular or not

Right, so either make the kernel modular via isolation, which provides fault tolerance in your most critical piece of code, or make it monolithic, i.e. everything lives in the kernel address space.

There is no reason on earth that an init system would need a specific journal daemon

There is no reason on earth device drivers need to live in kernel space either. The performance arguments are simply false; they have been disproven many times over.

Of course, arguments and hard data aren't meaningful in these discussions, and monolithic has clearly won in terms of market share. Once again, why fight the tide of history instead of being more constructive? You are going to lose.

about two weeks ago

Systemd's Lennart Poettering: 'We Do Listen To Users'

naasking Re:Lennart, do you listen to sysadmins? (551 comments)

But systemd isn't actually monolithic, it's monolithic by fiat because the daemons refuse to play well with others, which (again) is against the Unix Way.

Microkernels are arguably more "Unixy" than monolithic kernels. Each device driver is simply a process that has a well-defined stream interface that can be piped to any other process, not just the kernel itself. Microkernels are Unix taken to the extreme.

So again: this argument failed for microkernels, so why should it succeed here? Perhaps some core functionality beyond system calls also belongs in a monolithic service rather than in a set of composable subsystems.

It's taken until recently for Minix to become even vaguely usable as anything other than a learning operating system, it's lagged behind everything else in terms of features always.

Tanenbaum and others weren't arguing that Minix was the microkernel OS that should be adopted, merely that some microkernel should be preferred over monolithic kernels. The high-performance L3 and EROS microkernels both existed at the time, albeit in early stages like Linux.

about two weeks ago

Systemd's Lennart Poettering: 'We Do Listen To Users'

naasking Re:Lennart, do you listen to sysadmins? (551 comments)

Yes, that is an excellent reason to add even more vulnerability vectors!

Granted, but more granular fault isolation wasn't convincing when Tanenbaum and Linus were arguing microkernels vs. monolithic kernels, so why should it be convincing now? I'm certain your other complaints are fixable within the current framework, assuming there aren't other complicating issues.

about two weeks ago

Systemd's Lennart Poettering: 'We Do Listen To Users'

naasking Re:Lennart, do you listen to sysadmins? (551 comments)

I have no opinion on systemd. However, more granular fault isolation wasn't convincing when Tanenbaum and Linus were arguing microkernels vs. monolithic kernels, so why should it be convincing now?

Every condemnation leveled against the monolithic systemd is just a rehashed argument from the monolithic vs. microkernel debate. Monolithic kernels clearly dominate, and chances are systemd will similarly dominate, so instead of wasting your time battling the tide of history, perhaps you should be more constructive.

about two weeks ago

Systemd's Lennart Poettering: 'We Do Listen To Users'

naasking Re:Lennart, do you listen to sysadmins? (551 comments)

What a lot of people are concerned about is that this entirely new and largely untested (in the 'wild', as it were) and very very large, complex piece of software which runs at a very very privileged level in the operating system is going to become the main source of security vulnerabilities in Linux.

Linux has almost two orders of magnitude more code than systemd, and it changes all the time. Security vulnerabilities are far more likely to be in the monolithic kernel.

about two weeks ago

An Open Letter To Everyone Tricked Into Fearing AI

naasking Re:"AI" vs Strong AI (227 comments)

It's still limited by the FPGA's gate count, which is pretty low by CPU standards.

about two weeks ago

An Open Letter To Everyone Tricked Into Fearing AI

naasking Re:Killer AI will kill journalists for slandering (227 comments)

We cannot build a computer that can model a bug's brain activity

Actually, I believe IBM emulated a rabbit's brain sometime in the past couple of years.

about two weeks ago

Tumblr Co-Founder: Apple's Software Is In a Nosedive

naasking You can have yearly releases (598 comments)

You can have yearly releases as long as you're willing to ruthlessly cut features that aren't sufficiently stable. If frequent updates are more important than features, then that's achievable.

The problem would be if marketing had a hand in both direction AND quality control. That's the recipe for disaster.

about three weeks ago

MIT Unifies Web Development In Single, Speedy New Language

naasking Re:Frames in 2014 (194 comments)

The demo site uses frames. FRAMES.

This is the researcher's website. A RESEARCHER. Who cares if he sucks at web design? Ur/Web can generate any HTML you want.

about 1 month ago

CIA Lied Over Brutal Interrogations

naasking Re:Enlightening... (772 comments)

That gave me some hope for the world. At least some stood up and said "No" and likely ended their careers over it.

Yup, it's a grand, sickening Milgram experiment writ large.

about a month and a half ago

CIA Lied Over Brutal Interrogations

naasking Re:From Jack Brennan's response (772 comments)

I think that it's more protective of citizens to behave in a way that isn't morally reprehensible.

That's an excellent and underappreciated point. The only difference between a morally reprehensible government-sanctioned action against a terrorist and the same action against you is an easily manufactured excuse. If morally reprehensible actions were never permitted, then citizens would never need to fear their own government, which was the whole point of the Constitution to begin with.

about a month and a half ago

CIA Lied Over Brutal Interrogations

naasking Re:Really? .. it comes with the job (772 comments)

you can separate them and ask them questions then torture them when their answers don't match.

Except that doesn't work, because people being tortured will say anything to make it stop. At no point when they change their stories can you be certain they're now telling the truth. Even if their stories suddenly match, it could be a complete fluke, or as a result of the interrogator asking leading questions. Torture is useless.

about a month and a half ago

Halting Problem Proves That Lethal Robots Cannot Correctly Decide To Kill Humans

naasking Can't decide WITH CERTAINTY (335 comments)

One curious corollary is that if the human brain is a Turing machine, then humans can never decide this issue either, a point that the authors deliberately steer well clear of.

It's not curious at all. The goal was to determine whether a computer can decide with certainty that another agent intends to do harm. This is obviously unsolvable, even for humans. Of course, we don't require humans to be absolutely certain in all cases before pulling the trigger; we just expect a reasonable belief that they or others are in danger (for a self-defence argument). Reasonable belief is even easier to establish for computers, since the internal states leading to that conclusion are fully available to us (unlike those of the human mind).

about 2 months ago

Joey Hess Resigns From Debian

naasking Re:Gnome3, systemd etc. (450 comments)

When Debian pushed Gnome3 and the community didn't like it [...] Now there is the systemd debacle. A large number of people have voiced their disapproval [...]

You seem to be speaking for "the community", but I don't see any hard numbers suggesting that the majority of said community actually shares your opinions. The fact that many voices cry out, and cry loudly, does not make those voices representative of anything meaningful.

about 3 months ago

The Effect of Programming Language On Software Quality

naasking Re:More factors to normalise out. (217 comments)

scanning the whole address space multiple times and guessing

You don't understand how garbage collection works.

about 3 months ago

The Effect of Programming Language On Software Quality

naasking Re:Other factors. (217 comments)

While they do have the necessary language support for functional programming, the fact that they are impure means that even when you're following the functional paradigm you can't count on the rest of the program playing by the same rules. Any call to external code may perform I/O or depend on or modify global mutable state.

Sure, but triggering side effects during a map or fold can be perfectly sensible, and it doesn't make functional programming languages any less functional. Find me one person who considers this program non-functional, as your definition would:


let () =
  let list = [10; 2; 99; 30; 3] in
  ignore (List.map (fun x -> Printf.printf "%d\n" x) list);;

about 3 months ago

The Effect of Programming Language On Software Quality

naasking Re:More factors to normalise out. (217 comments)

Oh, I also forgot to mention:

and that your resources are freed deterministically the instant you are done with them, rather than "at some time in the future, maybe".

Except this can lead to extremely high latency due to cascading deletions, which is another potential source of performance problems in C/C++. If you bound the amount of teardown work done at once to avoid this problem, you necessarily introduce reclamation latency of your own. Reclamation latency isn't necessarily a bad thing.

about 3 months ago

The Effect of Programming Language On Software Quality

naasking Re:More factors to normalise out. (217 comments)

If you use smart pointers to manage your dynamic allocations, you'll find that memory management in C++ isn't any harder than in a garbage-collected language.

Because smart pointers are a degenerate form of garbage collection: reference counting.

about 3 months ago

Submissions

naasking hasn't submitted any stories.

Journals

naasking has no journal entries.
