
Linux Kernel Git Repositories Add 2-Factor Authentication

samzenpus posted about a month and a half ago | from the locking-things-down dept.

Security 49

LibbyMC writes: For a few years now, Linux kernel developers have followed a fairly strict authentication policy for those who commit directly to the git repositories housing the Linux kernel. Each is issued their own ssh private key, which then becomes the sole way for them to push code changes to the git repositories hosted at While using ssh keys is much more secure than passwords alone, there are still a number of ways for ssh private keys to fall into malicious hands. So they've further tightened access requirements with two-factor authentication using YubiKeys.
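As a rough sketch of the ssh-key half of this scheme (the exact procedure is up to the kernel.org admins; the filename and key comment below are illustrative, not the project's actual convention):

```shell
# Generate an ed25519 key pair. The public half (.pub) is what
# gets registered with the git server; the private half should
# never leave the developer's machine. Filename and comment are
# illustrative only.
ssh-keygen -t ed25519 -f kernel_push_key -N '' -C 'illustrative kernel push key'

# Two files result: the private key and the shareable public key.
ls kernel_push_key kernel_push_key.pub
```

The point of the second factor is that even if the private key file leaks, a push still requires the one-time code from the YubiKey in the developer's hand.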




Oh no. (4, Funny)

i kan reed (749298) | about a month and a half ago | (#47698059)

Someone might commit code to our open source project. We can't have that.

Re:Oh no. (1)

Anonymous Coward | about a month and a half ago | (#47698083)

Yes, malicious actors. The Linux kernel repos have never been a free-for-all.

Re:Oh no. (5, Funny)

i kan reed (749298) | about a month and a half ago | (#47698115)

Okay, so once again I have to be reminded that no one is allowed to joke about the Linux kernel, because the distros are responsible for packaging a sense of humor.

Re:Oh no. (0)

Anonymous Coward | about a month and a half ago | (#47698137)

Jokes are defined by being funny.

Re:Oh no. (2, Funny)

i kan reed (749298) | about a month and a half ago | (#47698169)

As long as we're being humorless assholes:
Jokes are defined by the intention of humor. Lots of things are funny that aren't jokes, like, say, if you died, it'd be hilarious. Lots of things are jokes that fail at being funny: see the complete works of Carlos Mencia.

Re:Oh no. (0, Offtopic)

Zero__Kelvin (151819) | about a month and a half ago | (#47698187)

"Jokes are defined by the intention of humor."

So you're saying that when your parents had you you weren't a joke? Then how did you become one?

Re:Oh no. (-1, Offtopic)

i kan reed (749298) | about a month and a half ago | (#47698199)

A debilitating STD I caught from your mom.

Re:Oh no. (0)

Zero__Kelvin (151819) | about a month and a half ago | (#47698239)

My mom says you were weak and infirm before she met you.

Re:Oh no. (-1)

Anonymous Coward | about a month and a half ago | (#47698247)

I didn't say all things that are funny are jokes. I said jokes are defined by being funny.

Re:Oh no. (-1)

Anonymous Coward | about a month and a half ago | (#47698261)

Funny to who? Anyone? There's no such thing as being objectively funny.

Re:Oh no. (0)

i kan reed (749298) | about a month and a half ago | (#47698329)

Look, use a dictionary next time you want to talk about definitions. Your own are just wrong.

Re:Oh no. (1)

PopeRatzo (965947) | about a month and a half ago | (#47698469)

Carlos Mencia

Ain't he that four-star prospect who's tearing up Triple-A in the Cubs farm system? I saw him play with Iowa, and he's a terrific-looking second baseman who can hit for power.

I didn't know he's a comedian too.

Re:Oh no. (1)

ArhcAngel (247594) | about a month and a half ago | (#47698579)

As long as we're being humorless assholes: Jokes are defined by the intention of humor. Lots of things are funny that aren't jokes, like, say, if you died, it'd be hilarious. Lots of things are jokes that fail at being funny: see the complete works of Joe Rogan.


Re:Oh no. (0)

sillybilly (668960) | about a month and a half ago | (#47701635)

And Unix is defined by being simple. Which Linux no longer is.

They should worry less about authenticating who contributes and then finding the scapegoat to blame for the mess-ups; instead they should try to go back to core principles, clear up the mess, and establish a system where mess-ups are impossible. It's not the individual programmers who are messing up, but the leadership at the top, who have failed to implement core principles and allowed themselves to stray far from them, under the pressure of features, patching the patches that patch the patches that patched we don't even remember what anymore. The herd simply follows the command of the shepherd through his dogs. You can't blame the ewe, and you can't blame the dogs, if both follow commands as they are supposed to. That's how a military works: chain of command.

The Battle of Jutland is a good read on military discipline and controlling chaos into musical, dance-like order. Jellicoe's formation of the ships, where they almost hit each other while assuming positions, "flying" by each other at only a few miles per hour. The battle about-turn to starboard by Scheer: a motion executed in unison, from prior practice, by completely mess-prone, chaos-prone beings. That is the way to beat down chaos in the middle of a messy battle, which by definition is chaos itself. Top-down chain of command, following orders, everyone moving in unison.

The basic problem with Linux is complexity. I stopped using Linux around kernel 2.6.26 or so; anything new that boots does just way, way too much. It's obvious what a hopeless mess it is just from the boot-up messages. Damn Small Linux is trying to get back to core principles, but it's hopeless with the present code size of the kernel. The basic principle of Unix is the KISS principle. Quoting from the Wikipedia page:

The principle most likely finds its origins in similar concepts, such as Occam's razor, Leonardo da Vinci's "Simplicity is the ultimate sophistication", Mies Van Der Rohe's "Less is more", or Antoine de Saint Exupéry's "It seems that perfection is reached not when there is nothing left to add, but when there is nothing left to take away". Colin Chapman, the founder of Lotus Cars, urged his designers to "Simplify, and add lightness". Rube Goldberg's machines, intentionally overly-complex solutions to simple tasks or problems, are humorous examples of "non-KISS" solutions.

An alternative view - "Make everything as simple as possible, but not simpler." - is attributed to Albert Einstein.
That is a warning that even the KISS principle should not be abused, though it should be maximized as much as possible.

I did a google search on "core principles of unix," and I came up with this: [] [] []
etc, etc.

In all of them the basic principle of Unix is simplicity, clarity, modularity, human readability, beating complexity down with a club anywhere you can. If you find clever ways to get something accomplished, forget about them; that's too complex. Do it cleanly, neatly, simply, even by brute force. Don't be clever, be stupid, and expect everyone to be stupid. In Unix, every program does one thing, and does it extremely well. If you need features, you write a different program. Then these programs come together and interact through extremely simple interfaces, and this soup of experts interacting simply to accomplish any needed complex task in the world is what you call Unix. The swiss army knife of software. Which also goes for C, as C and Unix are the same thing.

The first thing the Linux developers have to accomplish is to beat down the complexity mess they've created: gut the whole thing to bare bones, throw away "clever" things, and substitute "dumbed-down" things for them. Sound the tornado siren or air-raid siren in the bazaar, where everyone runs to shelter or picks up defensive positions. Practice emergency evasive maneuvers in unison, where everyone follows orders and can be relied upon to follow orders by their sub-superiors. You have to organize yourselves like [] This requires lots of existing volunteers signing up for various functions, just like during WW2, when the US military had an influx of volunteers who picked up specific roles. Responsibility trickles down from the top through the chain of command.

Once the war or campaign to fix up Linux, and gcc, and the like, is over, then you can disintegrate again, like the baby boomers in peace time, and everyone goes back to bazaar mode, coming up with random ideas they can toss into the common pile, but every time things get out of hand, and complexity, like a big boa constrictor, gets loose, and starts choking the whole system, you have to fight it off to get a breath of fresh air.

I'm not a programmer, but here are some ideas from someone who has no clue:
A milestone of progress would be a compiled Linux core kernel of 50 KB, running in VESA-mode graphics (nobody uses serial printers anymore). You may have to throw away gcc for this and hand-code the whole thing in assembler, per architecture. But 50 KB is a human-readable amount of machine code. I know hardware has become complicated, where you have to take care of its features and performance benefits. See what features you can ignore, and if you cannot, pressure the manufacturers to make them ignorable. Code size above all is the measure of complexity. The ability to hand-troubleshoot simple machine code that's not clever, or even efficient, but extremely simple and plain English, is like gold.

Then organize the add-ons that tie into the minimalist core based on levels of complexity. The level-0, 50 KB compiled kernel should boot on anything with a BIOS; level 1 should boot on most modern architectures, but without performance features; level 2 adds performance features to video, disk and memory; level 3 network performance; level 4 encryption and security to all levels below it; level 5 everything else. You probably understand these things better. You could send the kernel into different run levels to easily find out what makes it crap out. And by this we don't mean System V runlevels of what processes to run, but the level of complexity the kernel is attempting to fight with. Every benefit comes with a complexity cost, and you should be able to pick and choose as you need. In fact performance might be better with less complexity, to where hardware manufacturers could design hardware to better fit and adapt to simple code, as opposed to you having to design software to manage the complexity needed to get extra performance out of that hardware. Security is only possible through utter simplicity, and even then you get all kinds of surprises. A very important thing is humor. Old-school Unix was very simple, yet there was a place for humor in it. It is difficult to not be clever yet humorous, but in fact being utterly dumb, and humorous about it at the same time, glitters like gold in degree of cleverness. In essence, the dumber you can afford to get, the smarter you have proven yourself to be.

A favorite game of mine is Go (Weiqi, Baduk). It takes 5 minutes to learn the rules and the game, and 30 years to become a good go player. The guiding principles: cut where you can cut, connect where you can connect, enlarge your eyespace and reduce your opponent's, the enemy's key point is your key point, do not use thickness to build territory, drive weak stones against your thickness to attack them, recognize bad shape and how to avoid it, force your opponent into bad shape, etc. By far the most important principle is the sente/gote pair: fighting for sente, the initiative, sacrificing it only when there is clear and tangible profit, and then doing everything to gain it back again, to dictate the moves around the board. In that, the most important thing is being able to ignore your opponent's last move and play somewhere else on the board, where the economics prove it's worth more than faithfully and blindly replying to an attack. This quest for the ability to ignore has distilled into the very being of players, and you recognize a strong player by his ability to ignore moves you yourself would not have ignored, because of the grave economic sacrifice that creates. This ability to ignore, to ignore unnecessary features to their maximum possible extent and find them unnecessary instead, this utter terseness, could come in handy when designing the core of Unix, and the first parts of any new add-on, before one lets loose and adds the remaining desired features. But there has to be a core, a small stepping stone built on rock, a foundation for any new features, as Unix philosophy dictates: extremely simple, robust, clean, dumb, terse, secure, transparent, easy to build on, and easy to fall back on, into manual mode, when the automation part messes up.
The guiding principles of Go are simple; applying them in practice is the difficult part, and someone who is 7 dan runs circles around someone who is 5 kyu: there is a gaping abyss between them. This kind of skill-rating system could be used to rank coders in the kernel, with Amazon-like purchase-review stars, by anyone who revises anyone else's code. I know that's a great source of flame wars, but direct criticism like that, and achieving a higher rank, such as 4 kyu from 5 kyu, or 3 dan from 1 dan, is a great incremental ego booster. And anyone who achieves 7d on IGS, Tokyo, gets their games "observed" by many other players; they can teach by example, and it's very clear from the rankings who the experts are, or how many handicap stones a new game should get, from simple game results. There is no simple feedback mechanism to rate coders in Linux, no black-belt achievement awards, so the ranking system from Go could be pirated. It is much simpler and plainer English than an Arpad Elo chess score of 2473 or 2945; that's too many numbers, unbeautiful in design compared to dan and kyu rankings. In Go you start at 30 kyu, which is -30, or BC (beginner category), and then you escalate through the ranks to 2 kyu, then 1 kyu, and 1 dan means you have grasped some rudimentary basics of the art. The highest dan ever is 9d, for amateurs and professionals alike, but few players reach it. Nevertheless, 6 dan amateur ~ 1 dan professional. A good Go book to start with, for 30 kyu beginners and 9 dan professionals alike to constantly re-meditate on and review, is Kageyama Toshiro's Lessons in the Fundamentals of Go. By the way, he never reached 9 dan in his life, maxing out at 7 dan, but he could teach much better than the 9 dan professionals, who can't really verbalize and express their intuition.
There was a recent post on Slashdot about how the Japanese still have telegrams as an option. STOP. When even the Brits, known for their robustness, have abandoned them. STOP. But the Japanese abandoned Morse-coding them in the '60s too, unfortunately. STOP. I mentioned how in Japan they understand the KISS principle, when you look at the furniture in a paper room: there is almost none, just a low-lying coffee or tea table and two floor pillows to sit on. That's what you call a functional room, and simplicity is the ultimate sophistication. There is almost nothing left to take away, and you can find beauty in that. I have to add that, personally, I could not live like that; I need some complexity in my living environment to feel good, as in the baroque or rococo lush exuberance of features, but I used that to highlight what Unix is about, if it is Unix that you're trying to create. And it comes down to personal preferences: some pussy is complicated rococo, and some pussy is straightforward, simple Unix. When it comes to pussy, both kinds are beautiful, but when it comes to software, only the simple one, only Unix, is beautiful. You can get the features out of Unix through add-ons and combinations of known things, variations on the same topic, and thus you can pile on the complexity, similar to how life follows a simple design of A C T G and then varies that topic to create butterflies, flowers, you and me, your cat, an elephant or a bird, grass, trees, bacteria, fish. That's how Unix, a swiss-army-knife software, is supposed to function: extremely simple design at the core, but anything desirable can easily be accomplished with it.

OK, the paragraphs above may not have been very useful, coming from a person with zero programming experience. But let me quote some things from the page links I posted above:

This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.

Rule of Modularity: Write simple parts connected by clean interfaces.
Rule of Clarity: Clarity is better than cleverness.
Rule of Composition: Design programs to be connected to other programs.
Rule of Separation: Separate policy from mechanism; separate interfaces from engines.
Rule of Simplicity: Design for simplicity; add complexity only where you must.

Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else will do. (I'm adding that this is Einstein's principle, and you must follow it, as in give me 19 parameters and I can fit an elephant with an algebraic curve, give me 20 and I can fit the tail accurately too if necessary (it may not be necessary), but I cannot fit an elephant with only 2 parameters, no matter how badly I wish to do so.)

Rule of Transparency: Design for visibility to make inspection and debugging easier.
Rule of Robustness: Robustness is the child of transparency and simplicity.
Rule of Representation: Fold knowledge into data so program logic can be stupid and robust.
Rule of Least Surprise: In interface design, always do the least surprising thing.
Rule of Silence: When a program has nothing surprising to say, it should say nothing.
Rule of Repair: When you must fail, fail noisily and as soon as possible.
Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.
Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.
Rule of Optimization: Prototype before polishing. Get it working before you optimize it.
Rule of Diversity: Distrust all claims for “one true way”.
Rule of Extensibility: Design for the future, because it will be here sooner than you think.

If you're new to Unix, these principles are worth some meditation. Software-engineering texts recommend most of them; but most other operating systems lack the right tools and traditions to turn them into practice, so most programmers can't apply them with any consistency. They come to accept blunt tools, bad designs, overwork, and bloated code as normal — and then wonder what Unix fans are so annoyed about.
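The "programs that work together through text streams" rule above can be made concrete with a classic pipeline; everything here is standard POSIX userland:

```shell
#!/bin/sh
# Each stage does one small thing; the pipe is the universal
# text-stream interface that composes them into a word counter.
printf 'to be or not to be\n' |
  tr ' ' '\n' |   # split the stream into one word per line
  sort |          # bring identical words together
  uniq -c |       # collapse each run into "count word"
  sort -rn        # most frequent words first
```

None of these tools knows anything about the others; swapping the printf for `cat somefile.txt` re-targets the whole pipeline, which is the Rule of Composition in action.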

By the way, in Go, the proverbs, the guiding principles, are extremely simple, but putting them effectively into practice is a superhuman feat even for the cleverest of the clever. And the rule to every rule is that there is no rule: every proverb has an exception, and when you can find those exceptions to be the correct things to do given the circumstances, you're perverting and subverting a principle, which is the highest beauty in itself. For instance, pros know how to avoid bad shape, and how to force the opponent into bad shape, but sometimes the world champion will intentionally seek out a horrible-looking bad-shape group whose beauty only shines after long meditation on the possibilities, showing that they are all fake and the bad shape is untouchable. Thus the guiding principle of "avoid bad shape" has been perverted, or subdued, and is a source of BDSM-like gratification, so to speak. But only when done by the 9 dan pros, not the clumsy 30 kyu or 15 kyu beginners, who haven't even grasped the concept of good shape, what it looks like. There has to be a similar concept in coding: an expert can glance at a position and rate it for shape in under a second, on appearance alone, and know instantly the rank of the players, to within 2 kyu accuracy. There has got to be a similar thing with code: one quick glance from the experts, and every programmer could get an initial ranking that gets fine-tuned as things go along.

Linux needs to be gutted, in a seasonal cleanup fest, and all the garbage that wants to be clever thrown away and rewritten in dumb ways.

Re:Oh no. (1)

sillybilly (668960) | about a month and a half ago | (#47701897)

One common dumb phrase in Unix is "Everything is a file." How can you say a piece of hardware is like a file on a disk, when it obviously has various features? That's the whole point: ignore, ignore, ignore the features, to utter terseness in the interface. Beat it down with a club into a uniform, system-wide dumb mode first, then add the features and complexities later, but preserve the uniform text interface in everything; don't customize into a myriad of unique and non-compliant standards. If I remember right, in later versions of Linux I had issues treating items under /dev and especially /proc as regular files; they forgot about implementing the core principles of behavior. But I don't claim to understand the reasoning behind it, which may be reasonable. It is still nice to be able to dd if=/dev/cdrom of=./file.iso from hardware. You can't do that in Windows, or DOS, from a simple command line, but in Unix everything is a file that you can read from and possibly write to: precious data. That's what a computer is for: helping me massage my data. And in that, I'd like to reserve my 1 TB portable disk for pictures of butterflies, or videos, and would prefer if the code that got everything accomplished stayed under 150 KB. I realize that's not possible, but what's happening in today's computing environment, when it comes to multi-megabyte code sizes that massage the 1 TB of data, is simply a joke. There was this saying back in the day, at IBM, that 1 MB should be enough for everyone. And by that I think they meant code size, not data, as I wish to store months and months worth of compressed videos, and I would have no problem with code that could massage it to all my needs AND stay under that 1 MB. If nothing else, self-generated code from simple rules could accomplish it, starting from an under-1-MB seed. However, self-generated code brings about the danger of mutations and AI.
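A minimal sketch of the "everything is a file" point above, assuming a Linux system with /dev and /proc mounted: the same plain file tools read kernel interfaces exactly as they read disk files.

```shell
#!/bin/sh
# Read 16 bytes from the kernel's random-number device with dd,
# the same tool the comment uses for imaging a CD-ROM.
dd if=/dev/urandom of=rand.bin bs=16 count=1 2>/dev/null
wc -c < rand.bin            # reports 16

# Kernel state under /proc answers to ordinary text tools too.
head -c 60 /proc/version    # "Linux version ..." on a Linux box

rm -f rand.bin
```

The uniform interface is the payoff: dd, wc, and head were written with no knowledge of RNG devices or the /proc filesystem, yet they work on them unchanged.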
And AI, along the lines of the 1996 movie Screamers, is the biggest threat humanity is facing today, bigger than biotech or nuclear holocaust, because you can run away from those two into outer space, but not from an AI cleverer than you, set out to chase you and hunt you down. That movie might seem funny at first, but it's really not funny. Everyone who programs computers and automates designs, similar to how life automates mutations as new Monte Carlo designs and selects among them, should always keep that movie in the back of their minds. And in this sense they should set limits on the computing power available in laptops and desktops, and on the relentless drive for more supercomputing and more bloated software like KDE 4 and higher, when even KDE 3's biggest problem is code size and performance compared to Windows. For instance, when copying files in Konqueror, Windows 2000 ran circles around KDE 3, though not at the command prompt, of course. So how do I get user friendliness out of Unix, and performance at the same time? All I see is bloat, bloat, bloat these days, bloat that refuses to run on older hardware, when in my mind progress in software science would mean increased speed even on older hardware, because, as far as the features go, I'm not seeing any tremendous advances; it's all just the same thing with a different face slapped on it, 10x slower than 10 years ago. MS Office 97 can accomplish pretty much everything that Office 2010 can, and even do so more efficiently, and the new way I find awkward because I got used to the old ways, so why should I get used to the new ones when I get no benefit at all from it? In fact I have to click twice to get to a menu item that used to take one click on the toolbar. Similar gripes apply to Linux: I used to be able to do this and that, and all of a sudden they took it away and got something "better"? Really? That much better that it's worth constantly switching ways of doing things?
How are businesses to rely on that kind of Unix, one that won't last the full 30-year career of one of their freshly hired college grads? They never get the chance to become expert at it, or to create software that lasts 30 years and becomes a good investment for the business, because everything is constantly in the fast lane. It's like I'm used to doing computations with paper and pencil using Indian/Arabic numerals, and know what a horrible ordeal it would be to use Roman ones; but anyone who keeps inventing new systems for doing the same computations, while not providing me a better way to do it, only something in between Roman and Arabic, something worse than Arabic, is not very enticing to me. We have not found better ways to deal with numbers than Indian/Arabic to this date. In software, there may come such a time too, when it's time to rest and become good at a status quo that lasts centuries, instead of having the rug flipped out from under you every two years.

Re:Oh no. (1)

sillybilly (668960) | about a month and a half ago | (#47707649)

Also, computing already follows a similar simplicity at the core as life does. Everything in life is A C T G, and everything in computing is 0,1. But we're dealing with the complexity issues arising at levels higher than 0,1. In particular, the closer you are to 0,1, the simpler and more straightforward, the more "Unix" the procedures should be; the farther away you are, the more variety, the greater flamboyance, the greater exuberance of rococo, such as menu options, flamboyant colors, and richness of decoration. For instance, a colorful lizard is flamboyant at the user interface, its appearance to the external world, but adheres to the principles of all eukaryotes in how it functions closer to the ACTG level. Similarly, KDE should be flamboyant at the user interface level, or VB Classic/VBA macros can be flamboyant in what all is implemented in them, at the user interface level, but simple at the vbrun60.dll level. All eukaryotes adhere to the principle of having a cellular unit with a nucleus, mitochondria, etc., and in this, trees and grass and lizards and you and I are similar. It does not mean the principle is a correct one; for instance, a totally mindless and dumb prokaryote might evolve that does not have a nucleus, and chew off and digest Alien's face, digest up all trees, all other lifeforms, and in a sense prevail in the competition to survive by becoming the new top predator. Being a eukaryote means you believe it's efficient, and stick to it, but that belief is not a guarantee, and you can have raging debates about it; there might even be some specialized human cells where it would make economic sense to be prokaryote instead of eukaryote, but in the meantime everyone sticks to this guiding principle, not because it's correct, or good, but because it is what it is.
Similarly, in an aerobic, oxygen atmosphere, for multicellular life other than bugs (which have no blood but a tracheal lung system that penetrates all the way to individual cells and brings oxygen directly), so for multicellular lifeforms that do have blood, hemoglobin based on iron has arisen as the dominant and only standard, being the most efficient life could come up with. There are exceptions with blue-blooded molluscs, like octopi, which sometimes live at depths of great oxygen deprivation, and at those oxygen concentrations the copper-based blue blood is more efficient, but all fish, all reptiles, all birds and mammals have red blood. Red blood might be complex in and of itself, but it became a standard everyone follows because we don't know anything more efficient. In the biotech future there might be artificial synthesis of something more efficient, and human-derived artificial people based on it, such as green blood or orange blood, who can win Olympics, but for now life is stuck at hemoglobin, which is both complex and efficient, the maximum state of the art. Similarly, all vision systems are based on a vitamin-A-derived molecule, already at the limits of theoretical efficiency, as in 4 photons being able to trigger a visual response in the brain. There might be inventions where a single photon does it, and it may not be the molecule itself (which in theory changes conformation from a single photon) but the overhead circuitry, which may be damping out and filtering single-photon detection on purpose. But once you've got single-photon detection, you know you can't get any better than that, only maybe less expensive molecules. In number systems around the world we use the positional Hindu/Arabic system as opposed to the Roman, and we have nothing better so far, so it's a universal standard.

(By the way, the positional system is encoded instinctively into words such as 10, 20, 30, 40, and such things as 31, 32, 33, ... then 41, 42, 43, so it's weird that the Greeks and Romans could not come up with 4-something to represent 40, with zero. But the French have quatre-vingt-huit for 88, literally meaning 4-20-8, as in 4x20+8, so some number systems don't follow the simple rules that would yield zero; near 10, English does not either, as in eleven, twelve, instead of oneteen, twoteen, thirteen, fourteen, meaning the dozen, divisible by 2, 3, 4, 6, 12, was often preferred to ten, divisible by 2, 5, 10, especially lacking divisibility by 3, which is often needed as the next level after division by 2.)

So in Unix you have a eukaryote-like guiding principle: specialized, individualized objects that can easily be separated out from the whole, like cell nuclei, that do one thing and do it well; abstain from complexity and cleverness near the 0,1 level, or at the compiler level, and add complexity the farther away you are from the core and the closer to the user experience, like the outer appearance of a lizard compared to its molecular processes. In that, there might be some hemoglobin-like security features that are complex but the most efficient known in present circumstances, or light-sensor-like setups: good-vision octopi and humans both developed the exact same eye, fairly complex and simple at the same time, which seems better than the bug's compound eye. The compound eye, on the other hand, has even higher color sensitivity than the human and octopus eye, and it fits well in a circumstance where you have to go from flower to flower 2 feet apart, compared to the distance vision out to a couple hundred yards required by the human and octopus predators, where motion sensitivity tops color acuteness, except for monkeys trying to decide whether to hop branch to branch to pick a ripe or unripe banana, which is very color dependent.

In KDE you get, or at least used to get, a flamboyance of features, all based on the ease of use of the Qt C++ library. Qt C++ is much easier to program than the Win32 C API, which is more verbose, but, unfortunately, when it comes to user experience, Win32 runs circles around anything invented yet, except command-line Unix. C++ was available for a long time for Windows, and you even had experiments such as the Microsoft Foundation Classes, which were horrible and overly complex to debug, too abstract without getting enough bang for the buck, given the complexity assumed at the core for the benefit of ease of use through abstraction. So far, simpler ways to ride on top of Win32, such as Delphi 5 and 7 Pascal, or VB Classic (which lacks critical features such as multithreading), have proven more efficient than C++, and Windows has stuck to a C API core, its developers and guiding leaders, such as Charles Simonyi, deliberately not taking on the standard C++ libraries' way of doing things, and instead rolling their own specialized class libraries we call the Win32 C API. To every complexity there is a cost, and a performance hit. KDE Konqueror can never hope to achieve the speed of the Windows Explorer of Windows 95 or 2000 for the simplest of computing tasks, such as a file-copy dialog, even though it has features like automatic skipping of files; it's hopeless in speed, because of the C++ library's ease of programmer use, which sacrifices the speed of the software. In Linux, if you have to copy lots of files, you have to drop back to typing in a console, and then you can beat Win32 Windows Explorer, but not by very much. So there is a huge gaping problem at the rococo, user-interface level in Linux, because they have not achieved harmony, a balance between complexity and efficiency, to the level Simonyi achieved by avoiding C++.
Of course you cold fix up C++ Qt Api to be better than the Win32 C object Api, and still call it C++, you should be agnostic about these things, but you might have head on clashes with the ISO C++ standardization committee on certain issues. Sometimes it almost does not matter if the kernel itself is super good when it comes to an end user, if you lack user friendliness and efficiency. Win32 has both efficiency at the core, and user friendliness efficiency at the flamboyant, rococo user interface end of things, at least like it used to be in Windows XP. I for one want features that overflow me in software, as when I walk through all the standard menu options of Office 97, compared to newer software of recent years, that gives me a single menu, with a single button to click, with a message above saying choose the button you want to click, and my only choice is that single button. I feel cheated under such circumstances. I like freedom, I like options, I like variety, but not at the cost of internal complexity, at the rule, the mechanism level, to where the whole thing collapses internally.

As far as the designers go, you could have a ranking system for programmers, just like you do in the military. Just because someone is an inept new recruit with no clue does not mean he's useless; to the contrary. But you're not gonna make him an admiral, or even a platoon leader: he has to start where everyone starts and advance through the ranks, through established procedures. Then the whole gang functions in unison, utilizing both the incompetent and the competent to their maximum talents. The biggest problem in anything is incompetence, and this includes the military, industry, and education. The military makes the best of what it gets, and lucky is the army that has leaders like Alexander (never met his match), Hannibal, Scipio (his match), Napoleon (his match being Arthur Wellesley, 1st Duke of Wellington, teamed with Gebhard Leberecht von Blücher), Jellicoe, or Nimitz, but each of these had to rely on an army of incompetent recruits. You always walk knee deep in incompetence, whether it's the military or software programming, and you have to find a way to make the best of it: how to best use your incompetent people, and have procedures in place where experience enhances competence.

For instance, in Linux you could have a kyu/dan based ranking system, and you don't really have to authenticate programmers for who they really are; they could contribute anonymously under a user name/password that carries a certain rank. Once they abandon that user name and pick a new one, it may be some time before they rise through the ranks to their previous level, but that's an easy system that self-manages leadership based on competence. Then any piece of code in Linux could carry a "touched by highest rank" tag. In this sense some beginner-category, 30 kyu programmer could submit a piece of code for some new esoteric hardware device that nobody has touched yet; then anybody already in the kernel at, say, the 15 kyu level could seek out code fragments tagged below 15 kyu, such as code written by a 20 kyu, and automatically get their modifications accepted into the kernel, while proposed changes to pieces of code touched by 10 kyus would hang around as "requires approval at the 10 kyu or better level" and not get automatically accepted. The really important features would of course be touched by the 9 dans, who'd trump anybody out there. Above 5 dan in amateur Go you start to notice the appearance of style: maintaining top efficiency while taking on flair, playfulness, becoming a show-off, as in look what I can do, and while I do it, I do it with style, still maintaining the core principles to their max, such that amateur coders "just can't touch this", hammer time. In martial arts such things arise as the drunken monkey sub-style of monkey-style kung fu, added as flair by black belts while maintaining top efficiency, with variations such as Dragon and Mantis, for when you have become so good and efficient at what you do that you have wiggle room to add style with flair.
9 dan kernel gurus probably each have their own style, and can recognize each other's code simply by style, and newbies instantly, by their not only lacking style but not even having assimilated the basic principles of efficiency, economy, and common sense. Just as in martial arts, in software too it takes lots of discipline and fighting practice to become an expert black belt, and not everyone has the talent for it.
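The rank-gating rule described above could be sketched in a few lines. This is a hypothetical model, not anything from an actual kernel workflow; the rank encoding and function name are mine:

```python
# Hypothetical sketch of the kyu/dan gating rule described above.
# Ranks as integers: 30 kyu = -30 ... 1 kyu = -1, 1 dan = 1 ... 9 dan = 9.
# A change to a code fragment is auto-accepted only if the submitter
# strictly outranks the highest rank that has ever touched the fragment.

def auto_accepted(submitter_rank: int, fragment_tag: int) -> bool:
    """Auto-merge iff the submitter strictly outranks the fragment's tag."""
    return submitter_rank > fragment_tag

# A 15 kyu (-15) contributor may auto-modify 20 kyu (-20) code ...
assert auto_accepted(-15, -20)
# ... but their changes to 10 kyu (-10) code wait for review.
assert not auto_accepted(-15, -10)
```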

Still, even that ranking system is vulnerable to internal collapse, as in having copperheads: extremely good programmers, programmer spies with an agenda to destroy, advancing through the ranks to 9 dan levels, then pulling stunts that bring down the whole system. Supposedly they'd be big fans of commercial software, or of that ideal, or they may even be paid by commercial software agencies who see Linux and the blocking GPL license eat at their bottom line compared to a BSD-style license. So Linux is under constant attack, constant war, even stronger than the commercial-competitor abuse Wikipedia gets, because on Wikipedia it's easy to fix mess-ups, and even when they are present, everyone using Wikipedia is aware of the contribution process, of the plain-wrong-facts issues, and even of subtle sabotage by experts. Yet nothing comes close to Wikipedia in breadth and scope on most topics. Software is much different, because the barrier to entry is humongous: 99,999 out of 100,000 people can't grok the Linux kernel code, whereas 99,999 out of 100,000 know how to grok Wikipedia pages. And in a sense, Wikipedia-style plain-wrong-facts issues are accepted by end users who are aware of the contribution process, as long as the overall picture, the overall benefit, is positive. If the Linux kernel used the VBA Macro/VB 6 Classic style of Basic programming, even just for the end user interface part, as the very limited in scope but very easy to use Gambas is attempting, with super efficient translation methods to get tight and efficient code, maybe you could get 9,500 per 100,000 people able to contribute, which is 9,500 times more than 1 out of 100,000 of the general population.
Even Linus constantly says that the kernel is usually not a big deal, he's got that covered with a few colleagues, but the end user interfaces, the games he can play, lack richness, such as xsoldier that he'd play on an airplane flight. He constantly complains about the lack of good application software, and a VB-like, Gambas-like programming interface might turn the app side of Unix into a Wikipedia-style thing, where anyone can add features and others quickly review the changes and undo them, for instance. But the barrier to access, the difficulty of grokking the code, is huge, both with the abstracted C++ complexity that Qt is, and with the verbose Win32-API-style GTK, both of which already suck on performance; put something like gtk-python on top of them and you'd end up with horrible, horrible bloat and speed hindrance compared to what even Gambas or FreeBASIC can accomplish. VB Classic could ride on top of the Win32 C API without much speed penalty or bloat, compared to the bloat and library sizes of VB.NET and C#. FreeBASIC can sometimes produce code on par with C and Pascal in language comparison tests, leaving Java, Python, and C#/VB.NET in the dust, yet there is no easy-access, easy-to-use interface to let developers contribute Wikipedia-style in plain-English Basic, as opposed to C, Java, Python, and the like.

Re:Oh no. (1)

sillybilly (668960) | about a month and a half ago | (#47707789)

Yeah, I forgot the part where the copperheads advance to 9 dan, and destroy the system from within. The situation then becomes similar to the French Revolution, where the 30 kyu peasantry rebels against the 9 dan monarch and 5 dan nobility and guillotines their heads. A 9 dan copperhead might get away with things for a long time, just like the nobles and kings could in France, but eventually there'd be massive flame wars about it, rebellion style, where it becomes obvious you have to roll out the guillotine and have some heads roll. Of course this situation can be abused too, to where someone like Linus or Alan Cox could be ousted, and then it's up in the air whether the rebels are paid by Microsoft to kill Linux or there is a true problem with a rotten leadership. Of course it's hard to see Linus intentionally damaging Linux; on the other hand he does look like he could be Bill Gates's cousin, and he lives in Oregon too, not too far away. So ya never know these things. That's right, I pick on everyone equally, without exception; nobody is safe from the terror of my words, not even Linus. But if it bothers you, you can ignore it and get on with your life. Nobody forces you to read all this.

Re:Oh no. (1)

sillybilly (668960) | about a month and a half ago | (#47708389)

I just had this idea that in programming, Roman numerals might be more efficient than Arabic/Indian, but only when you can guarantee gaps, or granularity. Such as when dealing with bytes, each being 8 bits, or 2^8=256, and wasting space, granularity, when you only need to represent 17, which fits in 2^5=32 but not in 2^4=16. A similar thing is FAT cluster sizes, which waste space on clustering in the name of Roman-numeral-style efficiency. In this sense, in Roman numerals you represent the centuries by just a few characters, such as MM for 2000, MCM for 1900, MDCCC for 1800, MDCC for 1700, MD for 1500, MCD for 1400, MCCC for 1300, etc., and MC 1100, M 1000, CM 900, DCCC 800, etc. You could cut this down to 3 digits, to lose even more granularity; your granularity is maximum near the cluster size, at the KB or MB levels, so you'd instead have the options of MM, MCM, MDC, MD, MCD, MC, M, CM, DC, D, CD, which work near the 500 level, with some granularity nearby, but are inefficient away from the 500 cluster level. If you could get more terse like that in expression, as in military units, one unit would be M, another D, another C, another L, another X, then V, and the individual soldier I, then by cutting down to one or two characters instead of the 4 required by Indian/Arabic, you could gain some bandwidth, terseness of conveyed information, and need fewer parallel lines to carry it, to memory for instance. This is illustrated in datatypes, which also lose granularity, and are the equivalents of M, D, C, L, X, V, I in most typed programming languages. So I'm just adding this thought of how Roman numerals are stupid for doing math calculations, but in certain circumstances they make sense, such as counting available military units.
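To make the century examples above concrete, here is a minimal Roman-numeral encoder; a throwaway sketch, not from any library:

```python
# A minimal Roman-numeral encoder, to check the century examples above
# (MCM = 1900, MDCCC = 1800, and so on). Valid for 1..3999 only.

PAIRS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int) -> str:
    if not 0 < n < 4000:
        raise ValueError("classic Roman numerals cover only 1..3999")
    out = []
    for value, glyph in PAIRS:
        while n >= value:   # greedy: take the largest glyph that still fits
            out.append(glyph)
            n -= value
    return "".join(out)

assert to_roman(1900) == "MCM"
assert to_roman(1800) == "MDCCC"
```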

I just got curious what the letter for 5000 would be, so I looked it up in Wikipedia. There is no such letter; the ancient system is only good up to 3999, and there are two methods to get higher numbers. One is to draw a bar above a number, meaning it's multiplied by M, or 1000, so you can represent up to 3,999,000, almost 4,000,000 (most millionaires could not count their money like that), and you can keep adding more lines to get higher multiples. The other method, more complicated, descended from the Etruscan tradition that Roman numerals came from, cannot be printed here, because Slashdot eats the funky backward-C characters, so you have to read it on Wikipedia.

So these are the two known methods of number representation. There is also the option of factoradic, with full granularity, where 1!+2!+3!+...+n!=(n+1)!-1, n!=1x2x3...xn, 1!=1, 2!=2, 3!=1x2x3=6, 4!=1x2x3x4=24, 5!=120, 6!=720, etc., but it's not more efficient than Arabic/Indian, because that uses the rule for geometric progressions, (b-1)x(b^0+b^1+b^2+...+b^n)=b^(n+1)-1, and as long as you have your base b, there is no reason at each position not to go all the way up to the highest digit. For instance when we say 3528 we mean 8x10^0+2x10^1+5x10^2+3x10^3=8x1+2x10+5x100+3x1000=8+20+500+3000=3528. In factoradic representation you can't use the full base at each position; reading 3528's digits as factoradic gives 8x1!+2x2!+5x3!+3x4!=8x1+2x2+5x6+3x24=8+4+30+72=114, a very slow and inefficient way to store numbers, because you could have used each position at the highest base available, such as base 10, implicitly understood, and gotten much better storage capacity while still keeping full granularity of representation for each natural number, unlike with chopped Roman numerals, or even scientific notation for Indian/Arabic numbers, which we call floating point representation, which is full of holes and cannot represent every single integer up to its highest limits. With factoradic the place value at position n is n!, while with Indian/Arabic it is base^n, and only for very high numbers do factorials surpass the geometric terms, as seen from the Stirling approximation n! ~ sqrt(2xPi) x sqrt(n) x (n/e)^n, where e=2.71828..., the base of natural logarithms: (n/e)^n surpasses base^n only once n grows well past the base, as in n^n > 10^n for n greater than the base.
So maybe factoradics could be used in computing, where you need the full granularity that floating point cannot provide, plus terseness, such as when describing terabyte- and petabyte-scale values, the representation mechanism understood to be factoradic for the values coming through the 168 parallel lines to a memory stick, instead of binary Indian/Arabic geometric progression. Where am I wrong in this argument?
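For the curious, factoradic encoding and decoding can be sketched in a few lines, assuming the usual convention (the digit at the k! place runs 0..k, so every natural number has exactly one representation); function names are mine:

```python
# A sketch of factoradic (factorial-base) encoding, under the usual
# convention: the digit at place value k! ranges over 0..k, and
# sum(k * k! for k = 1..n) == (n+1)! - 1, so representations are unique.

from math import factorial

def to_factoradic(n: int) -> list[int]:
    """Return digits [d1, d2, d3, ...] with n == sum(d[k-1] * k!)."""
    digits, base = [], 2
    while n:
        digits.append(n % base)  # digit for place value (base-1)!
        n //= base
        base += 1
    return digits or [0]

def from_factoradic(digits: list[int]) -> int:
    return sum(d * factorial(k) for k, d in enumerate(digits, start=1))

# The maximal 3-digit value: 1*1! + 2*2! + 3*3! = 23 = 4! - 1.
assert from_factoradic([1, 2, 3]) == 23
assert to_factoradic(23) == [1, 2, 3]
```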

Re:Oh no. (1)

sillybilly (668960) | about a month and a half ago | (#47708413)

I just figured that one out, or more like I just got the divine revelation, even though "they" teased me with the same thing before, in 2009, but mind controlled me not to realize it.

Re:Oh no. (1)

sillybilly (668960) | about a month and a half ago | (#47708501)

It's more complicated than I made it sound, because each position has a variable base, as opposed to the fixed base b of a geometric progression, so you could in theory reserve more room than a single base for the higher numbers. The situation is similar to binary-coded decimal, where your base is 16 but you only go up to 9, losing the representation space between 10 and 15. With a fixed base, factoradic loses the representation space available at each digit; on the other hand it takes up a fixed amount of memory per digit. So there are various mixes and matches between fixed-base geometric, fixed-base factoradic, and variable-base factoradic. Variable-base factoradic might be the most efficient with full granularity encompassing all natural numbers in a range for huge numbers, while fixed-base factoradic might be more efficient for intermediate numbers, and fixed-base geometric-series Indian/Arabic for small numbers.

Re:Oh no. (1)

sillybilly (668960) | about a month and a half ago | (#47708649)

Now I went to the Wikipedia page, and it says you're allowed digits only up to the position number at each position in the factoradic, such as:

    Radix:               8     7     6     5     4     3     2     1
    Place value:         7!    6!    5!    4!    3!    2!    1!    0!
    Place value (dec):   5040  720   120   24    6     2     1     1
    Highest digit:       7     6     5     4     3     2     1     0

I thought since both 0! and 1! equal 1, you start from 1, not 0, and also that you could go up to 5039 in digits at the 7! position, right before you hit 5040, in a variable base, just like you can go up to 9, right before you hit 10, in a fixed-base geometric progression. Or up to 23 in digits right before you hit 4!, as in 1,2,3,4,5,6,7,8,9,A,B,C,D,E,F (15, hexa), G, H, I, J, K (20, icosa), L, M, N (23, tricosa), so you could have a number like L511 meaning Lx4!+5x3!+1x2!+1x1!. Something is not right here, because the 1x2! and 1x1! only allow 1 as the highest digits, hmm. So anyway the number then is 23x24+5x6+1x2+1x1=552+30+2+1=585, which is much higher than the next place value up, 5!=120, so you'd end up with multiple representations for the same number above 121, plus you'd need variable storage space, as you'd need 120 digit symbols by the next round, and the 26 letters of the alphabet are not enough, so it may not be better after all. The important thing is to try, and keep trying for better. And I'm not smart enough to figure all this out anyway. In any case you can't go up to 9999 in digits at the second position when you represent 10000, only up to 9, or doing so does not cut down on storage costs by the time you're done representing those digits.

Re:Oh no. (1)

sillybilly (668960) | about a month and a half ago | (#47709073)

Well, the important thing is that when doing 168-pin RAM sticks, and looking at the pipe of data coming through them on each line: if values come through in, say, decimal representation, with 10 voltage levels per line, then you can get values 0 to 10^168-1 out of it (like the 0 to 9999 of a 4-digit bicycle lock, not 10,000; or 0-255 from 8 bits, not 256). But with variable-base factoradic representation, the highest allowed digit on the 168th line is 168, as in 1,2,3...A,B,C (you run out after 10 digits plus 26 letters, so you go alpha, beta, gamma, etc., piling on everything you know till you have 168 digit symbols), so you'd have a number like gamma x 168! + L x 167! + ... + 3x3! + 2x2! + 1x1!, according to the rules in Wikipedia, which is still fully granular but a much huger number, as 168! > 10^168 by a whole lot, and the situation becomes even more stark when you compare 168! against 2^168, as in binary. The only issue is that a digit with up to 168 possible values takes 8 bits in binary (256 values, with room to spare from 168 to 255), so you divide 168/8=21, meaning you can really only go up to 21!, which might still be bigger than 10^168 while still giving full granularity of integers. But it's not: 21! = 5.11x10^19 according to Google, which is a whole lot less than 2^168-1 = 3.74x10^50. So at 168 pins the variable-base factoradic representation is not more efficient than the fixed-base Indian/Arabic geometric progression. Hmm.

But that gives me the idea that you don't have to divide 168 by 8, since the initial digits need less room and 8 bits are wasted on them. For digits 1,2,3,4,5,6,7,8,9,10,11... you need (bits/digit number) 1/1, 2/2, 2/3, 3/4, 3/5, 3/6, 3/7, 4/8, 4/9, 4/10, 4/11, 4/12, 4/13, 4/14, 4/15, 5/16, 5/17 ... 5/31, 6/32, 6/33, 6/34, etc. So if you use binary geometric-progression encoding of the factoradic digits, you only need 7 bits for numbers under 128, 6 under 64, 5 under 32, 4 under 16, 3 under 8, 2 under 4, and 1 under 2, and the pin budget adds up as 1 + 2x2 + 3x4 + 4x8 + 5x16 + 6x32, etc., as far as you can go while staying under the 168 available pins on the RAM stick. I made a Microsoft Works spreadsheet, which came with my HP Mini recovery DVD, that looks like this:

    n    2^n    (n+1)*2^n    Partial sum
    0    1      1            1
    1    2      4            5
    2    4      12           17
    3    8      32           49
    4    16     80           129
    5    32     192          321

So it shows that 129 is the highest partial sum under 168, so 129 pins carry all the variable-base factoradic digits up through the 5-bit ones. Now I'm confused, so let's go back: you need 1, then 2 twice, then 3 four times, then 4 eight times, etc., so I'm going back to make a different spreadsheet:

Numeral Pins Subtotal
1 1 1
2 2 3
3 2 5
4 3 8
5 3 11
6 3 14
7 3 17
8 4 21
9 4 25
10 4 29
11 4 33
12 4 37
13 4 41
14 4 45
15 4 49
16 5 54
17 5 59
18 5 64
19 5 69
20 5 74
21 5 79
22 5 84
23 5 89
24 5 94
25 5 99
26 5 104
27 5 109
28 5 114
29 5 119
30 5 124
31 5 129
32 6 135
33 6 141
34 6 147
35 6 153
36 6 159
37 6 165
38 6 171

So you can get up to 37! with 165 pins, which is 1.38x10^43, much less than 2^168-1=3.74x10^50. Hmm. Eventually it's got to get better with factoradic, which tends to infinity much faster than a fixed base raised to a power, as n^n > a^n over a certain threshold of n. But 165 pins is not past that critical point. Who can find the critical point where factoradic becomes a more efficient way to store numbers than geometric-progression Indian/Arabic binary representation? Even at the expense of complex logic, you could get better throughput through a communication pipe, while maintaining full granularity of integers up to huge numbers, something that floating point, or scientific representation, cannot achieve. 64-bit operating systems are not there yet, and neither would 165-bit ones be. Maybe 1024-bit. Let's calculate the case for 512. I continued the above spreadsheet, and I could drag it to 510 pins at 90 factoradic digits, so now 2^510-1=3.35x10^153, and 90!=1.48x10^138, so I'm obviously on the wrong track here. Hmm. Oh well, I tried. We don't have anything better than Indian/Arabic for now, do we?

This whole thing should have gone into a blog, but in a blog you don't get ideas thrown at you to reply to, from which you can meander off into various thoughts, unlike on Slashdot.
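For what it's worth, the spreadsheet comparison above can be automated, and it suggests the answer to the question is "never": packing factoradic digit k (values 0..k) into ceil(log2(k+1)) bits always spends at least log2((n+1)!) bits to reach a capacity of (n+1)!-1, and the ceilings waste a little more on top, so the bit-packed factoradic can never out-represent plain binary on the same pins. A minimal sketch (the function name is mine, not from any library):

```python
# Sketch of the pin-count comparison attempted in the spreadsheets above:
# factoradic digit k (values 0..k) is packed into ceil(log2(k+1)) bits;
# how does the representable range compare with plain binary on the same pins?

from math import ceil, factorial, log2

def factoradic_capacity(pins: int) -> int:
    """Largest value representable when digits 1..n are bit-packed into <= pins."""
    used, n = 0, 0
    while True:
        need = ceil(log2(n + 2))  # digit n+1 takes values 0..n+1
        if used + need > pins:
            break
        used += need
        n += 1
    return factorial(n + 1) - 1   # capacity of an n-digit factoradic

# Binary wins at every width tried, and by the argument above, at every width.
for pins in (64, 168, 512, 1024, 4096):
    assert factoradic_capacity(pins) < 2 ** pins - 1
```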

Re:Oh no. (1)

sillybilly (668960) | about a month and a half ago | (#47709145)

How about applying Roman numerals instead of Indian/Arabic binary representation for each digit? Would it get terser as an overall sum, so that the factoradic variable-base progression could surpass the Indian/Arabic fixed-base geometric progression? After all, you only need one character to represent 1000, M, from a lookup table, so you have M, D, C, L, X, V, I, or 7 different symbols, and a maximum craziness of MMMDCCCLXXXVIII, 3888, requiring 15 characters, instead of a uniform 4 digits for everything with Indian/Arabic numerals, with space to spare up to 9999; but 1900 is only MCM, 3 characters, so near the focal points you get terser, at the cost of being more verbose at the extreme of 3888. Roman numerals need about 2.8 bits per character for their 7 possibilities, while Arabic/Indian needs 10 symbols, so about 3.3 bits per digit. And overall, Roman numerals are much busier, with more characters even if fewer bits per character, across the space up to 3999, than Arabic ones. But right now I feel too stupid, tired and lazy to keep thinking like this. Maybe someone can carry on the philosophical discussion, and tell me why I feel like a dog chasing his tail? I keep thinking maybe one day I might catch it :)

Re:Oh no. (1)

sillybilly (668960) | about a month and a half ago | (#47709233)

Correction: It's not 1!+2!+3!=4!-1, but 1x1!+2x2!+3x3!=4!-1, as in 1x1+2x2+3x6=1+4+18=23, which is 24-1. My memory failed me there, but the point stands: there is a way to represent numbers with full granularity of integers, without gaps, other than the geometric-progression Indian/Arabic/Mayan method.

Re:Oh no. (1)

sillybilly (668960) | about a month ago | (#47727641)

By the way I found illustrative examples of what Unix should be like, on the pussy analogy:

First of all, here is a rococo-pussy (this is not what you want Unix to look like): []
It's dark, mysterious, full of features, difficult to debug.
Here it is for closer inspection, under the hood, it's still dark, mysterious and difficult to find the fleas in it: []

Compare the above to this unix-pussy:
    http://content8.pureandsexy.or... []
It's light, simple, low on features, easy to debug.
And for closer inspection, under the hood:
  http://content8.pureandsexy.or... []
It's still not that complicated, streamlined, and follows Einstein's principle of make it as simple as possible, but not simpler. It still has all the features of a pussy, properly implemented, headache free. You cannot get any simpler than that, or it's no longer a pussy.

When it comes to pussy, both are equally gorgeous, and functional. Variety is the spice of life. But when it comes to Unix, only the light, simple, easy to see, understand and debug, nonmysterious, low on complications but still getting everything done variety is what's beautiful.

Shitty analogy time (0)

Anonymous Coward | about a month and a half ago | (#47698293)

Do you invite friends into your house from time to time?
Then by your logic it's totally okay to leave the door wide open to perfect strangers and burglars too.

Re:Shitty analogy time (1)

i kan reed (749298) | about a month and a half ago | (#47698335)

Yep, exactly. That's how I won a free meth lab.

Malware (-1)

Anonymous Coward | about a month and a half ago | (#47698117)

It sounds like they were having problems with malware injection, which may explain the bugginess of the last few Linux kernel releases.

Re:Malware (1)

i kan reed (749298) | about a month and a half ago | (#47698179)

Did they have a problem with that? Or are they operating on the possibility, instead? Do you have a third source of information?

Because the summary isn't clear, and the article is a how-to.

Re:Malware (1)

jcochran (309950) | about a month and a half ago | (#47698197)

Well, malware injection to the linux kernel isn't a mere possibility. The incident that happened back in late 2003 comes to mind.

Re:Malware (4, Informative)

Zero__Kelvin (151819) | about a month and a half ago | (#47698377)

"Well, malware injection to the linux kernel isn't a mere possibility. The incident that happened back in late 2003 comes to mind."

I don't think you are intentionally trying to misrepresent the facts, but before others take the misrepresentation of the facts and run with it ...

I think you are confusing a failed attempt with a successful injection. The checks and balances in place stopped it sans two-factor auth. This just makes it even more unlikely.

Re:Malware (1)

jcochran (309950) | about a month and a half ago | (#47699167)

Agreed, the path that was taken for that attempt wouldn't have worked. However, if someone had been able to compromise the credentials that would authorize a check in to the main repository, it most definitely would have worked. Adding in two factor authentication just makes it that much harder.

Yubikeys? (1)

ArcadeMan (2766669) | about a month and a half ago | (#47698201)

You mean I can no longer use my Authenticator for my commits?

Re:Yubikeys? (1)

Mr_Icon (124425) | about a month and a half ago | (#47698375)

The authenticator uses TOTP, so yes, you can. :)

Finally as secure as MMO games (5, Funny)

Zan Lynx (87672) | about a month and a half ago | (#47698227)

Finally the Linux kernel which runs almost the entire Internet is as secure as my MMORPG accounts. About time. :P

How does it work without a clock? (0)

Anonymous Coward | about a month and a half ago | (#47698271)

Can someone explain how non-challenge/response tokens without internal clocks work?

What prevents someone from just hitting the button 100 times while you're not looking and saving the codes? Or using a code you used before? Where is this information kept? They don't have an onboard processor, there's no timestamping.

Thank you!

Re:How does it work without a clock? (5, Informative)

ChadL (880878) | about a month and a half ago | (#47698365)

I have a Yubikey that I use for encrypting my password stores (using the private ID as one of several components passed to a PBKDF). It detects replays by verifying that every token has a larger counter than all previously used tokens (and the timer, depending on the application).
A Yubikey token looks like 'ficrtvulktgnerhddigbhcudufurijghfcckvchhjfli' and is modhex-encoded (16 characters picked to be the same across keyboard layouts) and contains the following:
1) A public ID to identify the key
2) AES128 encrypted 128 bits containing the following:
a. Secret ID
b. Insertion counter (how many times its been plugged into a computer)
c. Token counter (within one insertion)
d. Timestamp (A counter counting the time since the token was inserted into the computer)
e. Random number
f. Checksum of the above
Their website [] has full specifications and documentation.
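The modhex alphabet mentioned above can be decoded in a couple of lines. This is a sketch assuming Yubico's published mapping c,b,d,e,f,g,h,i,j,k,l,n,r,t,u,v -> 0x0..0xf:

```python
# Decoding Yubico's "modhex" alphabet, assuming the published mapping
# c,b,d,e,f,g,h,i,j,k,l,n,r,t,u,v -> 0x0..0xf. A 44-character OTP splits
# into a 12-char public ID plus 32 chars (the 16 AES-encrypted bytes).

MODHEX = "cbdefghijklnrtuv"
TO_NIBBLE = {ch: i for i, ch in enumerate(MODHEX)}

def modhex_decode(s: str) -> bytes:
    nibbles = [TO_NIBBLE[ch] for ch in s]
    # pair up high and low nibbles into bytes
    return bytes(hi << 4 | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

otp = "ficrtvulktgnerhddigbhcudufurijghfcckvchhjfli"  # the example token above
public_id, body = otp[:12], otp[12:]
assert len(modhex_decode(body)) == 16  # one AES-128 block, as described
```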

Re:How does it work without a clock? (3, Informative)

Mr_Icon (124425) | about a month and a half ago | (#47698581)

Yubikeys also support the HOTP standard, which produces 6-digit codes. This is what actually uses, not yubikey's own implementation.

Re:How does it work without a clock? (3, Interesting)

jcochran (309950) | about a month and a half ago | (#47698413)

Well, you could have answered your own question by simply using google to look up Yubikey and reading a bit. But to give you a partial answer, the token generates an AES encrypted value and passes that value to the server for authentication. During authentication, the server decrypts the value. (the shared secret between the token and the server is the AES encryption key). The decrypted value includes a counter. And if the counter isn't greater than the previously used counter, the authentication attempt is invalid. So if you were to hit the button 100 times and record those codes, you could authenticate using any of those codes, but as soon as I hit the button and authenticated using the resulting code, all of the codes you recorded would become instantly invalid.
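The counter check described here boils down to something like this toy sketch (the AES decryption step is omitted, and all names are mine):

```python
# Toy sketch of the replay check described above: the server decrypts the
# token (shared-AES-key step omitted here) and rejects any counter that is
# not strictly greater than the last counter accepted for that key.

last_seen: dict[str, int] = {}  # public key id -> highest accepted counter

def accept(key_id: str, counter: int) -> bool:
    if counter <= last_seen.get(key_id, -1):
        return False            # replayed or stale token
    last_seen[key_id] = counter
    return True

assert accept("devkey", 5)      # first use is accepted
assert not accept("devkey", 5)  # the same code replayed is rejected
assert not accept("devkey", 3)  # so is any older recorded code
assert accept("devkey", 6)      # the legitimate next press still works
```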

Re:How does it work without a clock? (0)

Anonymous Coward | about a month and a half ago | (#47703957)

The monotonically increasing counter is a form of clock, it's just not tied to wall-clock time, and the intervals between 'ticks' aren't constant in real time. The Earth can spin through arbitrary degrees between ticks :-)

Thumbs up (1)

Mister Liberty (769145) | about a month and a half ago | (#47699231)

Let's see:
- Authenticated kernel source.
- Checksummed distro ISO.
- Eeprom subverted thumb drive.

keys are not issued to someone they are generated (3, Insightful)

tota (139982) | about a month and a half ago | (#47701477)

The user is not issued a key; he generates one and gives the public half to the repository administrator to get ssh access. This process is called *generating* a key, and you can publish the public key to anyone, including the repository administrator, who will then use it to grant you access. The private key, however, should remain private.

The point is that only *you* should ever have access to the private key; having someone else generate it (as the wording in this article suggests) would be very unusual, as you would not want to use this key for anything else, and someone else would have your private key for no good reason. Someone could even potentially use this key to fake your identity in commits.

The problematic wording is here: "Each is issued their own ssh private key".

Re:keys are not issued to someone they are generat (0)

Anonymous Coward | about a month and a half ago | (#47702695)

No. SSH keys are not used to sign commits; GnuPG (gpg) 4096-bit RSA keys are used to sign commits and tags. The SSH keys mentioned are used for git-over-ssh transport only, not even for remote shell access (which doesn't exist anymore on, I am told).

Re:keys are not issued to someone they are generat (1)

Mr_Icon (124425) | about a month and a half ago | (#47704449)

There is no mistake here -- the ssh private keys are generated on the provisioning system, encrypted to the developer's PGP key (which is verified using the PGP web of trust) and then emailed out. The developer then decrypts the ssh private key on their workstation using their own PGP private key. Our copy of the ssh private key is destroyed in the process, so we only keep the ssh public key. PGP web of trust is king in the world.

Re:keys are not issued to someone they are generat (0)

Anonymous Coward | about a month and a half ago | (#47704599)

I think his point is that you should NEVER use a private key created by a 3rd-party. Security 101. There's always exceptions to the rules, even in the case of "never", but it should be very rare and well understood. Still considered bad security practice.

Re:keys are not issued to someone they are generat (1)

tota (139982) | about a month and a half ago | (#47709225)

Really? That's interesting, and I stand corrected. Thanks.
I still fail to see the benefit of generating the key on the system and sending it PGP-encrypted. Why not just generate it yourself and send it to the administrators PGP-encrypted instead? (As per the reply below, using third-party-generated private keys is bad practice.)

Re:keys are not issued to someone they are generat (1)

Mr_Icon (124425) | about a month and a half ago | (#47713381)

Least amount of back-and-forth between the developer and the admin ("sorry, your key has to be at least 2048 bits", "you forgot to sign your mail", "sorry, I sent you guys the wrong key"), plus it helps assure it's a dedicated SSH key and isn't shared between many other projects and therefore copied across workstations. Mostly, though, it reduces hassle.

Meh (0)

Anonymous Coward | about a month and a half ago | (#47701875)

As someone who has to use a smartcard to even login to their workstation and every message is digitally signed, I'm not particularly impressed.