
33-Year-Old Unix Bug Fixed In OpenBSD

kdawson posted more than 6 years ago | from the yet-another-stack-overflow dept.

Programming

Ste sends along the cheery little story of Otto Moerbeek, one of the OpenBSD developers, who recently found and fixed a 33-year-old buffer overflow bug in Yacc. "But if the stack is at maximum size, this will overflow if an entry on the stack is larger than the 16 bytes leeway my malloc allows. In the case of C++ it is 24 bytes, so a SEGV occurred. Funny thing is that I traced this back to Sixth Edition UNIX, released in 1975."

162 comments

Time to patch (5, Funny)

Anonymous Coward | more than 6 years ago | (#24108651)

Wouldn't want to let anyone take over your system with yacc. Seriously.

Re:Time to patch (5, Funny)

slew (2918) | more than 6 years ago | (#24108701)

Wouldn't want to let anyone take over your system with yacc. Seriously.

But ./ is already taken over with yak. Seriously.

Re:Time to patch (4, Funny)

Anonymous Coward | more than 6 years ago | (#24108811)

Who cares about OpenBSD yacc? BSD is dying and Netcraft confirms it. The world has moved to GNU/Linux and Bison.

Re:Time to patch (1)

dosius (230542) | more than 6 years ago | (#24109391)

Ugh. I still use byacc as my yacc of choice because Bison, like all GNUware, is bloated.

-uso.

Re:Time to patch (3, Funny)

msuarezalvarez (667058) | more than 6 years ago | (#24110899)

So you are including bison in your own apps and its `bloatedness' becomes a problem? Maybe you should read the manpage...

Re:Time to patch (3, Interesting)

setagllib (753300) | more than 6 years ago | (#24111795)

Who cares? Like GCC versus TinyCC, being bloated means it can produce more useful output. GNUware can be faulted for being heavy compared to traditional Unix tools, but the functionality and flexibility provided more than make up for it.

Except for autotools. What the HELL were they thinking.

Re:Time to patch (1)

cryptoluddite (658517) | more than 6 years ago | (#24111217)

Wouldn't want to let anyone take over your system with yacc. Seriously.

I think if you've installed yacc with setuid bit then you have other problems to worry about. Seriously.

Re:Time to patch (1)

Schraegstrichpunkt (931443) | more than 6 years ago | (#24111711)

What, so in the "Web 2.0" world, it would be inconceivable that somebody would provide a web-accessible yacc service to the world?

Re:Time to patch (3, Funny)

setagllib (753300) | more than 6 years ago | (#24111833)

Ah, but it would be written as a J2EE application. And the input wouldn't be .y, it'd be an XML document. And the output wouldn't be C, it'd be another XML, passing through a terabyte of XSLT. Then you pass this compiled parser XML, only a gigabyte in size, and your language file to a parser web service and it returns even more XML representing the parse tree.

Ahh, progress.

Re:Time to patch (1)

wylderide (933251) | more than 6 years ago | (#24112605)

Better than having your system taken over by Yahoo. Seriously.

From back when (4, Funny)

Yold (473518) | more than 6 years ago | (#24108669)

Unix beards were Unix stubble

Dupe! (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#24108681)

Dupe de dupe dupe dupe!

bad omen (4, Funny)

spir0 (319821) | more than 6 years ago | (#24108709)

a 33-year-old bug, plus a 25-year-old bug (http://it.slashdot.org/article.pl?sid=08/05/11/1339228)....

if we keep going backwards, will the world implode? or will daemons start spewing out of cracks in time and space?

Re:bad omen (4, Funny)

je ne sais quoi (987177) | more than 6 years ago | (#24108835)

Nah! What this means is that they are fixing bugs faster than they're making new ones. If they weren't, they'd spend all their time chasing the newest ones. :)

Re:bad omen (1)

CptChipJew (301983) | more than 6 years ago | (#24108911)

That isn't necessarily true. It's just as possible people are wasting time fixing unimportant issues and ignoring more important ones.

I'm not trying to disparage the OpenBSD team or anything. It's just that no development team is perfect.

Re:bad omen (1, Insightful)

Anonymous Coward | more than 6 years ago | (#24109467)

Just because I hate it when people do this:

WHOOOOSH!!!

Sorry about that...

Re:bad omen (4, Funny)

Dunbal (464142) | more than 6 years ago | (#24110041)

It's just as possible people are wasting time fixing unimportant issues and ignoring more important ones.

      We're talking programmers here, not politicians...

Re:bad omen (0)

Anonymous Coward | more than 6 years ago | (#24112673)

That's right. A good programmer knows to ignore both.

Re:bad omen (3, Insightful)

incripshin (580256) | more than 6 years ago | (#24111763)

Well, they're not checking yacc for bugs for the hell of it. They're reimplementing malloc to be more efficient, but it broke buggy code. Is there any other option than to fix yacc?

Re:bad omen (2, Insightful)

p0tat03 (985078) | more than 6 years ago | (#24110185)

Or we're so painfully slow with fixing bugs that we JUST got around to 1975 :P There are always multiple views :P

Re:bad omen (0)

Anonymous Coward | more than 6 years ago | (#24113099)

Actually, we're fixing bugs so fast that they're traveling back in time.

Re:bad omen (5, Funny)

exley (221867) | more than 6 years ago | (#24108935)

a 33-year-old bug, plus a 25-year-old bug (http://it.slashdot.org/article.pl?sid=08/05/11/1339228)....

if we keep going backwards, will the world implode?

Well since time began only 38.5 years ago we should find out the answer very soon!

Re:bad omen (3, Funny)

Dunbal (464142) | more than 6 years ago | (#24110003)

or will daemons start spewing out of cracks in time and space?

      I finally figured out what the UAC were doing on the Mars colony... and it had nothing to do with those artifacts!

      Thank god there's a division of Space Marines there...

Re:bad omen (4, Interesting)

K. S. Kyosuke (729550) | more than 6 years ago | (#24110199)

First it was a fourth of a century, then it was a third of a century. The only logical consequence is that the next bug they find will be a memory leak in McCarthy's Lisp interpreter from '59 or some strange corner case in the Fortran I compiler. (Oh, and after careful consideration, I am leaving the *next* bug as an exercise to the reader.)

Re:bad omen (2, Funny)

cryptoluddite (658517) | more than 6 years ago | (#24111179)

Well since bugs before the epoch [wikipedia.org] were actual insects, judging by past precedent they'll get super powers... like wall-climbing ability or maybe spidey senses ??

Re:bad omen (1)

Zencyde (850968) | more than 6 years ago | (#24112827)

Not a Snopes link but good enough: http://tafkac.org/faq2k/compute_86.html [tafkac.org] I'm sure there's a Snopes article on this that I'm too lazy to find. Now to put to rest this idea of "bugs" originating as actual bugs.

Re:bad omen (4, Funny)

menace3society (768451) | more than 6 years ago | (#24111445)

The next bug will be in Boolean logic. After that, OpenBSD devs will start fixing structural engineering errors in the Tower of Pisa.

Great! (5, Interesting)

Anonymous Coward | more than 6 years ago | (#24108713)

Any word on when they're going to fix the even older "Too many arguments" bug?

Sorry, but any modern system where a command like "ls a*" may or may not work, based exclusively on the number of files in the directory, is broken.

Re:Great! (5, Funny)

The Master Control P (655590) | more than 6 years ago | (#24108927)

I too was devastated to learn that my poor Linux box can only handle 128KB of command line arguments [in-ulm.de] . How can I possibly finish typing in that uncompressed bitmap...

Re:Great! (1)

Dunbal (464142) | more than 6 years ago | (#24109881)

128k should be enough for anyone.

Re:Great! (1)

Malevolyn (776946) | more than 6 years ago | (#24110195)

Some of us require, maximum, 640k.

Re:Great! (0)

Anonymous Coward | more than 6 years ago | (#24110699)

I have no sympathy for you. If you exceed the 128k on the command line, you're doing it wrong. And if this problem causes you grief, then you've no business on the command line.

Re:Great! (2, Interesting)

Anonymous Coward | more than 6 years ago | (#24110811)

So, as an example, let's say I want to archive a bunch of files, then remove them from my system, to save space. I packed them up, using:

        tar cf archive.tar dir1 dir2 file1 file2 file3

and, because I'm extremely paranoid, I only want to delete files I'm sure are in the archive. How would I do that? Could I use:

        rm `tar tf archive.tar`

How about:

        tar tf archive.tar | xargs rm

I'm pretty sure neither of those will work in all cases. The first will fail if there are more than a few thousand files in the archive, and the second will fail if the files in the archive contain spaces or special characters. Can you give me one command that will work in all cases?

Re:Great! (1)

The Master Control P (655590) | more than 6 years ago | (#24111493)

Have a script loop over your directories, adding them to the archive before firing the Are-Em Star at them.

/The power to destroy an entire filesystem is insignificant next to the power of the Farce

Re:Great! (4, Funny)

menace3society (768451) | more than 6 years ago | (#24111523)

Burn the contents of the tar archive onto a CD. Mount the CD over the original directory structure. Use find(1)'s -fstype option to locate all the files that aren't on the CD, copy them to an empty disk image, then eject the CD. Remount the disk image over the original directory, delete all the files in the directory, then unmount the disk image. The files identical in name to those that were on the disk image (which are those that weren't on the CD) won't be deleted thanks to the peculiarities of mount(2).

You're welcome.

Re:Great! (0)

Anonymous Coward | more than 6 years ago | (#24109981)

A better example is everyone's favorite:

find /usr -type f | xargs grep foo

It's really annoying knowing you can't use this in a crontab script because it might fail.

Re:Great! (0)

Anonymous Coward | more than 6 years ago | (#24110883)

find /usr -type f | xargs grep foo

Works with the GNU versions of these tools (eg on every Linux distribution):
find /usr -type f -print0 | xargs -0 grep foo

Re:Great! (4, Interesting)

Craig Davison (37723) | more than 6 years ago | (#24110285)

If "ls a*" isn't working, it's because the shell is expanding a* into a command line >100kB in size. That's not the right way to do it.

Try "find -name 'a*'", or if you want ls -l style output, "find -name 'a*' -exec ls -l {} \;"

Re:Great! (-1, Redundant)

Anonymous Coward | more than 6 years ago | (#24110633)

If "ls a*" isn't working, it's because the shell is expanding a* into a command line >100kB in size.

Umm... duh. I know this.

Try "find -name 'a*'", or if you want ls -l style output, "find -name 'a*' -exec ls -l {} \;"

Instead of "ls a*"? Seriously? Hopefully, someone will mod you funny.

Re:Great! (2, Informative)

drinkypoo (153816) | more than 6 years ago | (#24110987)

Instead of "ls a*"? Seriously? Hopefully, someone will mod you funny.

Unix has extremely low overhead spawning processes. If you prelink and have a little cache this is plenty fast :P

Seriously though, this is a serious annoyance in the way Unix does business. Shell globbing is very convenient for programmers, but not so convenient for users in an awful lot of situations.

Re:Great! (4, Informative)

Just Some Guy (3352) | more than 6 years ago | (#24111601)

if you want ls -l style output, "find -name 'a*' -exec ls -l {} \;"

Yeah, because nothing endears you to the greybeards like racing through the process table as fast as possible. Use something more sane like:

$ find -name 'a*' -print0 | xargs -0 ls -l

which only spawns a new process every few thousand entries or so.

Re:Great! (2, Informative)

QuoteMstr (55051) | more than 6 years ago | (#24113097)

On modern systems, find -name 'a*' -exec ls -l {} +

Personally, however, I prefer find -name a\* -exec ls -l {} +

Also, you probably want to add a -type f before the -exec, unless you also want to list directories.

Either that, or make the command ls -ld to not list the contents of directories.

The Problem is *why* it's the wrong way to do it (3, Interesting)

billstewart (78916) | more than 6 years ago | (#24112601)

You're correct that it's not the right way to do it. The problem is *why* it's not the right way to do it. It's not the right way to do it because the arg mechanism chokes on it due to arbitrary limits, and/or because your favorite shell chokes on it first, forcing you to use workarounds. Choking on arbitrary limits is bad behaviour, leading to buggy results and occasional security holes. That's separate from the question of whether it's more efficient to feed a list of names to xargs or use ugly syntax with find.

Now, if you were running v7 on a PDP-11, there wasn't really enough memory around to do everything without arbitrary limits, so documenting them and raising error conditions when they get exceeded is excusable, and if you were running on a VAX 11/780 which had per-process memory limits around 6MB for some early operating systems, or small-model Xenix or Venix on a 286, it's similarly excusable to have some well-documented arbitrary limits. But certainly this stuff should have been fixed by around 1990.

In Defense of Limits (2, Interesting)

QuoteMstr (55051) | more than 6 years ago | (#24113147)

Soft limits can actually mitigate bugs. If we limit processes by default to 1,024 file descriptors, and one of them hits the limit, that process probably has a bug and would have brought the system to its knees had it continued to allocate file descriptors. Programs designed to use more descriptors can increase the limit.
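
For illustration, a minimal sketch of a program raising its own soft descriptor limit, assuming a POSIX system with getrlimit(2)/setrlimit(2); the printed types are simplified:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Read the current soft (rlim_cur) and hard (rlim_max) limits
       on open file descriptors. */
    if (getrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("getrlimit");
        return 1;
    }
    printf("soft: %llu, hard: %llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* A program that knows it needs more descriptors raises its soft
       limit, up to the hard limit, before allocating them. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}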

Re:Great! (1)

GuyverDH (232921) | more than 6 years ago | (#24110425)

it was fixed years ago....

find . -name "a*" -prune -exec ls -ld {} \;

(note: this command line was generated by reading the man page for gnu find - may not work on all unix/linux variants)

Re:Great! (0)

Anonymous Coward | more than 6 years ago | (#24110747)

Well, strictly speaking the limit isn't exclusively the number of files. It's a combination of filename length and number of files.

(modern_system != infinite_memory) (1, Interesting)

Zero__Kelvin (151819) | more than 6 years ago | (#24110943)

"Sorry, but any modern system where a command like ls "a*" may or may not work, based exclusively on the number of files in the directory, is broken."

It is not broken. The fact that it complains "too many arguments" is evidence that it is not broken, since the program (ls) is doing bounds checks on the input. If it was broken, you wouldn't get the message; there would be a buffer overflow because the programmer didn't do constraints checking.

ERRATA (2, Insightful)

Zero__Kelvin (151819) | more than 6 years ago | (#24111425)

I'll catch myself before someone else does. Everything I said above is true, except that ls isn't complaining. The OS, specifically exec() and friends, is complaining because the command line length when the shell expands the wildcard exceeds ARG_MAX. Increase ARG_MAX if you want to allow more files, or use a variation of find with the -exec option or xargs, etc.
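
For reference, a minimal sketch of querying that limit from C, assuming a POSIX system (when the expanded command line exceeds it, execve(2) fails with E2BIG and the shell prints the familiar complaint):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* ARG_MAX: the maximum combined size of argv[] and the environment
       that execve(2) accepts before failing with E2BIG. */
    long arg_max = sysconf(_SC_ARG_MAX);

    if (arg_max == -1)
        printf("ARG_MAX is indeterminate here\n");
    else
        printf("ARG_MAX = %ld bytes\n", arg_max);
    return 0;
}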

Re:Great! (1)

Jeffrey Baker (6191) | more than 6 years ago | (#24110971)

Actually, a patch was recently added to Linux to dynamically allocate the command line, so your argument length is now bounded only by available memory.

Re:Great! (0)

Anonymous Coward | more than 6 years ago | (#24111449)

You sure that would be an OS patch? Wouldn't that be something the bash/csh/ksh/whatever-shell maintainers would be responsible for?

Re:Great! (4, Informative)

Jeffrey Baker (6191) | more than 6 years ago | (#24111613)

It's both. The kernel is responsible for setting up the execution environment, and in the past it used a fixed 32 pages for the arguments. 32 pages on an ordinary PC is 128KiB, which is the old limit. The new limit is that any one argument can be up to 32 pages, and all the arguments taken together can be 0x7FFFFFFF bytes, which is ~2GiB.

Here's the diff: http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=b6a2fea39318e43fee84fa7b0b90d68bed92d2ba;hp=bdf4c48af20a3b0f01671799ace345e3d49576da [kernel.org]

After that, it was up to libc people to fix the globbing routines. Ulrich Drepper, taking some time off from his full-time job of being an asshole on mailing lists, managed to work this into glibc 2.8:

http://sourceware.org/ml/libc-alpha/2008-04/msg00050.html [sourceware.org]

Re:Great! (1)

Lord Kano (13027) | more than 6 years ago | (#24111365)

Sorry, but any modern system where a command like "ls a*" may or may not work, based exclusively on the number of files in the directory, is broken.

Have you ever tried to get the contents of a directory on NT* when there are a buttload of files in it?

We encountered that a few months ago at work. A few hundred thousand log files in a single directory is not a good thing.

LK

Re:Great! (1)

rakslice (90330) | more than 6 years ago | (#24111797)

heh... FWIW on Windows people are stuck with only a few kB of command line and no shell wildcard expansion at all, and they don't seem to be crying in their beers (... it's the market leader last time I checked)

The (not-so-)secret is to not do things by passing big lists around using command line arguments. Back in unix land, you can do glob-filtered listings like the one you suggested with the find command. And even the basic commands like ls can take parameters via xargs instead of regular command line arguments. (As always, see the find and xargs manual pages for more information.)

Fixed in Linux (1)

blitzkrieg3 (995849) | more than 6 years ago | (#24113007)

Any word on when they're going to fix the even older "Too many arguments" bug?

Use linux instead.
CHANGELOG [kernelnewbies.org]
git commit [kernel.org]

maybe it's just me (1, Interesting)

Anonymous Coward | more than 6 years ago | (#24108891)

But this code just seems wrong. What is C code doing referencing the stack pointer directly?

Yeah, it's probably you. (3, Informative)

Estanislao Martínez (203477) | more than 6 years ago | (#24108943)

I bet you they're not talking about the system stack pointer. Remember, yacc is a parser generator; parsing algorithms always use some sort of stack data structure. So, the "stack pointer" in question is just a plain old pointer, pointing into a stack that yacc's generated code uses.
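
For the curious, a minimal sketch of that kind of value stack (modelled on, but not copied from, the generated code; the names are merely yacc-flavoured):

#include <stdio.h>

#define YYSTACKSIZE 16

int  yyvs[YYSTACKSIZE];  /* the parser's semantic value stack */
int *yyvsp = yyvs;       /* the "stack pointer": a plain C pointer */

int main(void)
{
    *++yyvsp = 10;       /* push */
    *++yyvsp = 20;       /* push */

    /* On a reduction by a rule with yym right-hand-side symbols, the
       generated code reads yyvsp[1-yym]. For an empty rule yym == 0,
       so this reads yyvsp[1]: one slot ABOVE the current top. With
       slack left in the array that merely yields a stale value; with
       the stack at its maximum size it walks off the allocation. */
    int yym   = 0;
    int yyval = yyvsp[1 - yym];

    printf("read %d from one slot above the top of stack\n", yyval);
    return 0;
}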

Re:Yeah, it's probably you. (1, Informative)

Anonymous Coward | more than 6 years ago | (#24109351)

Exactly. The code:

        yym = yylen[yyn];
        yyval = yyvsp[1-yym];

This is one of the reasons that I hate C code (but I love it most of the time). If your stack was an object (preferably an STL vector), bugs like this wouldn't arise in a way that they could be exploited (your program would instead terminate with an uncaught exception that would point you exactly to where your bug was).

Re:Yeah, it's probably you. (2, Informative)

Skrapion (955066) | more than 6 years ago | (#24110061)

Actually, the [] operator of an STL vector doesn't throw any exceptions, and will happily allow you to reference an index which is out of bounds.

That's not a bad thing, because it's more efficient when you already know that your index is in range. But if you don't know that, you're better off using the at() function.

Re:Yeah, it's probably you. (1)

EvanED (569694) | more than 6 years ago | (#24111759)

Actually, the [] operator of an STL vector doesn't throw any exceptions, and will happily allow you to reference an index which is out of bounds.

It's entirely possible that your STL implementation IS doing bounds checking on [] in debug builds, which means that if you test under a debug build, you're more likely to find problems even if you use [], so you're still in better shape than if you had used arrays.

For instance, the following program compiled with Visual Studio 2008 under release mode performs an illegal operation (I don't really understand why it does, but it does), but under debug build fails an assertion that says "vector subscript out of range".

#include <iostream>
#include <vector>
 
using namespace std;
 
int main()
{
    vector<int> v;
    v.reserve(10);
    v[0] = 5;
 
    cout << v[0] << endl;
}

GCC doesn't seem to do this, but I only tried under Cygwin and could be mistaken.

(I also tend to think that the bounds checking properties of [] and at() should be reversed, but that's just me. I think that 95% of the time you should include checking, and [] is easier to read, more natural, and fits better with templates than at().)

Re:Yeah, it's probably you. (2, Informative)

setagllib (753300) | more than 6 years ago | (#24111949)

Best of all, even if you use assert() (or similar) for really explicit bounds checking, GCC will omit it from code paths where it's deemed to be unused. So if your accesses are being inlined (and if they're not, take a long hard look at your life) then the already-safe paths won't have the check overhead even in a debug build.

Yes, I've tested it. Yes, it's impressive.

Re:Yeah, it's probably you. (1)

EvanED (569694) | more than 6 years ago | (#24111957)

For instance, the following program compiled with Visual Studio 2008 under release mode performs an illegal operation (I don't really understand why it does, but it does)

Figured it out.

Even under release builds, VS will by default do range checks on []. (This *is* allowed by the C++ standard, even if it's a little outside of the spirit.)

Adding the following to the top of the file:

#define _SECURE_SCL 0

(see here [microsoft.com] ) will cause it to run to completion, with "5" as the output.

Re:Yeah, it's probably you. (1)

tomhudson (43916) | more than 6 years ago | (#24110929)

There's a bug in the explanation.

From the linky (emphasis mine):

yypv =- yyr2[n];
yyval=yypv[1];
access an item above the stack pointer. If yyr2[n] is zero, this is a potential access outside the stack. Note the archaic use of =-, we write -= these days.

"-=" is NOT the same as "=-".

Example:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[], char* env[]) {
        int a = 10;
        int b = 10;

        a =- 5;
        b -= 5;

        printf("a=%d, b=%d\n", a, b);

        return EXIT_SUCCESS;
}

returns a=-5, b=5

BIG difference.

Re:Yeah, it's probably you. (1)

trb (8509) | more than 6 years ago | (#24111379)

No, it's probably you. In days of old, C's =- operator was equivalent to its present-day -= operator, as is clearly shown in the example. For conclusive evidence, see Dennis Ritchie's article, The Development of the C Language. [bell-labs.com] See the section headed "More History." It discusses =+, which was a sibling of =- . Ritchie says it was fixed in 1976 (by allowing +=), but I remember compilers also accepting the deprecated =- until 1980 or so.

Re:maybe it's just me (1)

DogAlmity (664209) | more than 6 years ago | (#24109217)

A stack pointer, not THE stack pointer. Just the generic data structure from CS 201.

Re:maybe it's just me (1)

Zero__Kelvin (151819) | more than 6 years ago | (#24111513)

"What is C code doing referencing the stack pointer directly?"

Because it is C, and C is designed to be able to do so? How do you think the Linux kernel gets implemented? (Though it also has assembly, to be sure.) C was designed to allow the implementation of operating systems. The ability to reference the stack pointer and do other assembly-ish things via the asm keyword [gnu.org] is part of its charm ;-)
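
For example, a minimal sketch that really does read the machine stack pointer, assuming x86-64 and GCC's inline-assembly extension:

#include <stdio.h>

int main(void)
{
    void *sp;

    /* GNU extension, not portable C: ask for the current value of %rsp. */
    __asm__("movq %%rsp, %0" : "=r"(sp));

    printf("the stack pointer is near %p\n", sp);
    return 0;
}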

Dupe (-1, Redundant)

Nimey (114278) | more than 6 years ago | (#24108905)

No, damn (0, Redundant)

Nimey (114278) | more than 6 years ago | (#24108917)

Damn. Ignore that, it's a different ancient Unix bug.

Re:No, damn (1)

MichaelSmith (789609) | more than 6 years ago | (#24110721)

Let's have a BSD article for every bug in lex and yacc.

Re:Dupe (-1, Troll)

Anonymous Coward | more than 6 years ago | (#24109077)

Geesh, what an idiot... NO it is a DIFFERENT bug.
Why don't you do some research BEFORE you post stupid remarks? Although looking at the posting history of "Nimey (114278)" you seem to have a long history of stupid posts so I'm guessing you're a 14-year old boy, probably an only child, who hasn't discovered his complete sexuality yet because of the various filters and nanny software installed on his Windows-ME computer that's in the living room of the house because his parents don't trust him alone in his bedroom with a computer.
Yeah, I run BSD. Suck it.

Semi-OT (0, Offtopic)

pxc (938367) | more than 6 years ago | (#24108987)

Does anyone know if there's a way to make bsd.slashdot.org show up as a section on the main lefthand menu?

Re:Semi-OT (0)

Anonymous Coward | more than 6 years ago | (#24109245)

Yes [userscripts.org]

Was it really a bug back then? (4, Interesting)

Just Some Guy (3352) | more than 6 years ago | (#24109057)

Was this a bug when it was originally written, or is it only because of recent developments that it could become exploitable? For instance, the summary mentions stack size. I could imagine that a system written in 1975 would be physically incapable of reaching the process limits we use today, so maybe the program wasn't written to check for them.

Does your software ensure that it doesn't use more than an exabyte of memory? If it doesn't, would you really call it a bug?

Re:Was it really a bug back then? (5, Insightful)

QuantumG (50515) | more than 6 years ago | (#24109711)

If you overflow a buffer then it's a bug, whether it is exploitable or not.

Re:Was it really a bug back then? (5, Funny)

russlar (1122455) | more than 6 years ago | (#24110217)

If you overflow a buffer then it's a bug, whether it is exploitable or not.

If you can overflow an exabyte-sized memory buffer, you deserve a fucking medal.

Re:Was it really a bug back then? (1)

JoshJ (1009085) | more than 6 years ago | (#24112271)

int *buffer; /*Pointer to exabyte-sized buffer*/

while(1){
  *buffer=1;
  buffer++;
}

/*Where's my medal?*/

Re:Was it really a bug back then? (4, Funny)

AJWM (19027) | more than 6 years ago | (#24112621)

/*Where's my medal?*/

You'll get it when the buffer overflows. If you're running it on a system that processes a billion of those loops per second, that should be in a bit over 31 years. Scale accordingly for your processor and memory speed.

Re:Was it really a bug back then? (0)

Anonymous Coward | more than 6 years ago | (#24112947)

Too long. I want my medal in a month using a 1 MHz processor...

while (1) *(buffer += 31*12*1000) = 1;

Re:Was it really a bug back then? (2, Interesting)

Just Some Guy (3352) | more than 6 years ago | (#24111427)

If you overflow a buffer then it's a bug, whether it is exploitable or not.

It is today, but my question is whether it was even overflowable (is that a word?) when it was written. For example, suppose it was written for a 512KB machine and had buffers that could theoretically hold 16MB; then it wasn't really a bug. The OS itself was protecting the process by its inability to manage that much data, and it wouldn't have been considered buggy to not test for provably impossible conditions.

I'm not saying that's what happened, and maybe it really was just a dumb oversight. However, I think there's a pretty strong likelihood that it was safe to run in the environment where it was written, and the real bug was in not addressing that design characteristic when porting it to a newer platform.

See also: Ariane 5. Its software worked great in the Ariane 4, but had interesting behavior when dropped into a faster system.

Re:Was it really a bug back then? (1)

QuantumG (50515) | more than 6 years ago | (#24111679)

Failure to check for a buffer overflow is an error. It doesn't matter if someone else will do it for you and, as such, the error will never result in a problem for someone. It's simply wrong.

Re:Was it really a bug back then? (2, Informative)

jd (1658) | more than 6 years ago | (#24110085)

It would have been a bug, but not necessarily one that would have security implications, though that could be system-dependent. The summary mentions a specific malloc was used to get a segfault. Another malloc library may well not have faulted. That would only matter if it was possible via the buffer overflow to get yacc to do something (such as run your code) with privileges other than those you would ordinarily have had.

Now, looking at it just as a bug, if the yacc script overflowed the buffer, yacc can either stop cleanly or crash untidily. It has the same effect - nothing much happens - unless, for some weird reason, the kernel holds onto the memory. That would be a kernel bug, though; the yacc bug would merely be a catalyst for exposing it.
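
To make that concrete, a deliberately buggy sketch (undefined behaviour either way; whether it visibly faults depends entirely on the malloc underneath):

#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Ask for 16 bytes, write 24: a heap buffer overflow regardless.
       An allocator that rounds the request up will usually appear to
       work; one that places small blocks hard against an unmapped
       page, as OpenBSD's malloc can, turns the same bug into a SEGV. */
    char *p = malloc(16);
    if (p == NULL)
        return 1;

    memset(p, 0, 24);    /* 8 bytes past the end of the allocation */

    free(p);
    return 0;
}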

And Next Week (-1)

Anonymous Coward | more than 6 years ago | (#24109075)

And next week some clever clogs fixes where humanity fucked up with Free Will.

Re:And Next Week (0, Redundant)

Pazy (1169639) | more than 6 years ago | (#24109143)

Sorry that message was me, didn't seem to want to keep me signed in :|

Re:And Next Week (1)

X0563511 (793323) | more than 6 years ago | (#24109363)

Sorry that message was me, didn't seem to want to keep me signed in :|

Re:And Next Week (0)

Anonymous Coward | more than 6 years ago | (#24109769)

Sorry that message was me, doesn't seem to want to keep me signed in :|

Re:And Next Week (1)

setagllib (753300) | more than 6 years ago | (#24111981)

Sorry that ^U I am Spartacus.

Other Unixes (2, Interesting)

jasonmanley (921037) | more than 6 years ago | (#24109109)

Forgive me if this is obvious, but if the bug goes that far back, will it not affect all other Unixes that are based on this same source code, not just OpenBSD?

Re:Other Unixes (5, Informative)

X0563511 (793323) | more than 6 years ago | (#24109313)

Yes. But OpenBSD fixed it, so they get credit for the fix. It's up to the maintainers of the other unix(ish) versions to implement the fix.

Meh, real men use Bison - EOM (-1)

Anonymous Coward | more than 6 years ago | (#24109149)

Meh, real men use Bison...

*BSD is Dying (-1, Troll)

Anonymous Coward | more than 6 years ago | (#24109829)

It is now official. Netcraft confirms: *BSD is dying

One more crippling bombshell hit the already beleaguered *BSD community when IDC confirmed that *BSD market share has dropped yet again, now down to less than a fraction of 1 percent of all servers. Coming on the heels of a recent Netcraft survey which plainly states that *BSD has lost more market share, this news serves to reinforce what we've known all along. *BSD is collapsing in complete disarray, as fittingly exemplified by failing dead last [samag.com] in the recent Sys Admin comprehensive networking test.

You don't need to be the Amazing Kreskin [amazingkreskin.com] to predict *BSD's future. The hand writing is on the wall: *BSD faces a bleak future. In fact there won't be any future at all for *BSD because *BSD is dying. Things are looking very bad for *BSD. As many of us are already aware, *BSD continues to lose market share. Red ink flows like a river of blood.

FreeBSD is the most endangered of them all, having lost 93% of its core developers. The sudden and unpleasant departures of long time FreeBSD developers Jordan Hubbard and Mike Smith only serve to underscore the point more clearly. There can no longer be any doubt: FreeBSD is dying.

Let's keep to the facts and look at the numbers.

OpenBSD leader Theo states that there are 7000 users of OpenBSD. How many users of NetBSD are there? Let's see. The number of OpenBSD versus NetBSD posts on Usenet is roughly in ratio of 5 to 1. Therefore there are about 7000/5 = 1400 NetBSD users. BSD/OS posts on Usenet are about half of the volume of NetBSD posts. Therefore there are about 700 users of BSD/OS. A recent article put FreeBSD at about 80 percent of the *BSD market. Therefore there are (7000+1400+700)*4 = 36400 FreeBSD users. This is consistent with the number of FreeBSD Usenet posts.

Due to the troubles of Walnut Creek, abysmal sales and so on, FreeBSD went out of business and was taken over by BSDI who sell another troubled OS. Now BSDI is also dead, its corpse turned over to yet another charnel house.

All major surveys show that *BSD has steadily declined in market share. *BSD is very sick and its long term survival prospects are very dim. If *BSD is to survive at all it will be among OS dilettante dabblers. *BSD continues to decay. Nothing short of a miracle could save it at this point in time. For all practical purposes, *BSD is dead.

Fact: *BSD is dying

You do realize.. (-1, Troll)

ruinevil (852677) | more than 6 years ago | (#24110663)

OpenBSD was using GCC as their default compiler until just a few years ago. They, however, wanted a BSD/ISC licensed C-compiler, so they got an ancient open-source one [wikipedia.org] and started hacking it to get it up to modern C standards. They also wanted a secure compiler, since OpenBSD is all about security, and GCC is way too complex to figure out all the possible security issues that might pop up.

Re:You do realize.. (1, Informative)

vbraga (228124) | more than 6 years ago | (#24110957)

Mod parent -1 Bullshit.

yacc is not a compiler, go read the link you posted.

This [thejemreport.com] links to what you probably mean, but yacc has nothing to do with it.

Re:You do realize.. (4, Informative)

QuantumG (50515) | more than 6 years ago | (#24111405)

yacc is not a compiler,

Excuse me?

Yet Another Compiler Compiler most definitely is a compiler.

Re:You do realize.. (1)

vbraga (228124) | more than 6 years ago | (#24111437)

Sorry, I'm not a native speaker. Not a C compiler, as GP said.

Re:You do realize.. (1)

menace3society (768451) | more than 6 years ago | (#24111563)

Mod parent -1 Horseshit.

yacc is a compiler, what do you think the two c's stand for?

Re:You do realize.. (4, Informative)

wb8wsf (106309) | more than 6 years ago | (#24111815)

OpenBSD still uses GCC, version 3.3.5 on i386. I can't say which version is used on the other platforms.

You are talking of PCC, which is being worked on by some of the OpenBSD developers, but I think it's a parallel project; see http://pcc.ludd.ltu.se/ for more information.

Jem Matzen talked of this too, see http://www.thejemreport.com/mambo/content/view/369/

Re:You do realize.. (4, Interesting)

incripshin (580256) | more than 6 years ago | (#24111859)

gcc still is the default. pcc isn't ready yet, and I don't expect it to be for at least a couple years, and I say that with zero confidence (I'm just an OpenBSD user; I have no idea how the progress is going on pcc).

Note to Self... Core memory (1)

gearloos (816828) | more than 6 years ago | (#24111535)

Note to self: Don't worry about having that extra 2k of core memory as the buffer overrun does not work anymore. Sweet.

Hilarious! (5, Funny)

BollocksToThis (595411) | more than 6 years ago | (#24112023)

Funny thing is that I traced this back to Sixth Edition UNIX, released in 1975

My sides are completely split! Invite this guy to more parties.

Bringing down the product for ones own fame (1)

nikanth (1066242) | more than 6 years ago | (#24112991)

Will any proprietary business be ready to accept that their product had a bug for the past three decades which was identified only now? Does this mean the community is not very active? Somehow this does not seem to leave a negative impression in the minds of people, as most open source consumers know it means that even the old code is continuously tuned, and open source guys have realistic, sensible expectations.