
Full-Disclosure Wins Again

CmdrTaco posted about 7 years ago | from the mommy-where-do-patches-come-from dept.

Security 122

twistedmoney99 writes "The full-disclosure debate is a polarizing one. However, no one can argue that disclosing a vulnerability publicly often results in a patch — and InformIT just proved it again. In March, Seth Fogie found numerous bugs in EZPhotoSales and reported them to the vendor, but nothing was done. In August the problem was posted to Bugtraq, which pointed to a descriptive article outlining numerous bugs in the software — and guess what happened? Several days later a patch appeared. Coincidence? Probably not, considering the vendor stated "..I'm not sure we could fix it all anyway without a rewrite." Looks like they could fix it, but just needed a little full-disclosure motivation."


122 comments


A bug only exists... (4, Insightful)

InvisblePinkUnicorn (1126837) | about 7 years ago | (#20237481)

A bug only exists if the public knows about it.


Re:A bug only exists... (-1)

Anonymous Coward | about 7 years ago | (#20237657)

No, a bug only exists if the black hats know about it.

Re:A bug only exists... (2, Insightful)

Billosaur (927319) | about 7 years ago | (#20237877)

Incorrect. A bug exists if a bug exists. A bug only gets fixed if the public knows about it, specifically the computer savvy segment of the population, since the average user can't tell a bug from a feature.

Re:A bug only exists... (1)

InvisblePinkUnicorn (1126837) | about 7 years ago | (#20237957)

My statement was made from the point of view of a company. I thought that was obvious.

I thought wrong.

Re:A bug only exists... (4, Insightful)

xappax (876447) | about 7 years ago | (#20238253)

It's unfortunately not that hard to imagine that your sarcastic remark was serious - we constantly hear the same sentiment echoed very seriously in relation to computer security, electronic voting machines, even terrorism and criticism of the Iraq War.

Sadly, we live in a world where most people in power actually believe that anyone who points out problems is just as bad as someone who causes and exploits problems.

Re:A bug only exists... (2, Insightful)

TubeSteak (669689) | about 7 years ago | (#20239117)

Sadly, we live in a world where most people in power actually believe that anyone who points out problems is just as bad as someone who causes and exploits problems.
Look at it from their point of view:
Anyone who points out problems is creating a problem.

A lot of times, if you don't officially know about it, you don't have to officially do anything about it.

Re:A bug only exists... (2, Insightful)

dmpyron (1069290) | about 7 years ago | (#20240815)

Except that they officially knew about the problem. Assuming he had taken the time to sign his email. When they said they didn't know if they could fix it without a major rewrite, that was a tacit admission that they had known about it.

At least he went to the company first and sat on it for a while. Lots of people publish first, then notify the maker. That definitely makes him a white hat in my book.

Re:A bug only exists... (5, Funny)

Thuktun (221615) | about 7 years ago | (#20240095)

Sadly, we live in a world where most people in power actually believe that anyone who points out problems is just as bad as someone who causes and exploits problems.
NARRATOR: Fortunately, our handsomest politicians came up with a cheap, last-minute way to combat global warming. Ever since 2063 we simply drop a giant ice cube into the ocean every now and then. Of course, since the greenhouse gases are still building up, it takes more and more ice each time. Thus solving the problem once and for all.

GIRL: But--

NARRATOR: Once and for all!

Re:A bug only exists... (1, Funny)

Anonymous Coward | about 7 years ago | (#20240723)

Sadly, we live in a world where most people in power actually believe that anyone who points out problems is just as bad as someone who causes and exploits problems.

IMHO, knowing about specific software flaws is an advantage to everyone but the company that makes the flawed software. The people in power only "think" the way you describe because they get their power from the same companies that lose out when someone finds a flaw in that company's software.
Hand a policeman a $20 bill to help you get around a law and you will go to jail for bribery. Hand a politician a $20 bill for the same thing and you will get your favorable treatment and get invited to dinner.

Re:A bug only exists... (4, Funny)

mrchaotica (681592) | about 7 years ago | (#20238373)

He fell into the sarchasm.

Re:A bug only exists... (1)

Morgor (542294) | about 7 years ago | (#20240819)

That is probably the closest to the truth. However, I keep wondering whether, by releasing the full details of a bug or security hole to the public, you force the developers to rush out patches that fix that specific exploit but leave the underlying hole open. That protects the software from script kiddies browsing security sites in search of ready-made exploits, but not from being cracked again in the future. What if the company had some truth in saying that the bug could only be thoroughly fixed by rewriting the software from scratch? I'm not saying that this is true in every case, but I think it is worth considering in this debate.
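The "fix the exploit, not the hole" worry is easy to sketch. Here's a hypothetical example in Python (EZPhotoSales is a web photo gallery, but this is not its actual code; the root path, function names, and exploit string are all invented): a file-serving routine "patched" by blacklisting the exact pattern from the public advisory, next to one that actually closes the traversal hole.

```python
import posixpath

PHOTO_ROOT = "/var/gallery/photos"  # hypothetical install path

def serve_patched(filename: str) -> str:
    """The rushed 'patch': reject the exact pattern from the advisory's PoC."""
    if filename.startswith("../"):
        raise PermissionError("known exploit blocked")
    # The underlying hole is untouched: the path is still joined blindly.
    return PHOTO_ROOT + "/" + filename

def serve_fixed(filename: str) -> str:
    """The real fix: resolve the path, then verify it stays inside the root."""
    resolved = posixpath.normpath(PHOTO_ROOT + "/" + filename)
    if not resolved.startswith(PHOTO_ROOT + "/"):
        raise PermissionError("path escapes the photo root")
    return resolved

# A trivially reshuffled request sails past the blacklist and still
# resolves to a file outside the photo root:
print(serve_patched("album/../../../etc/passwd"))
```

The "patch" stops anyone replaying the advisory verbatim, which is exactly the script-kiddie protection described above, while leaving the software just as crackable as before.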

Re:A bug only exists... (1)

Billosaur (927319) | about 7 years ago | (#20240943)

Sure, but the patch buys them time so that they can fix the actual hole. Usually a hole is a sign of a bigger problem, and certainly any developer would want to re-write vulnerable sections to close the hole up permanently. Of course the other issue is the development cycle; if you're coming out with a new version of the software, do you really want to invest that much time in re-writing the old code to eliminate the bug? Probably not. You'd want to patch the hole and then make sure it did not recur in the new software.

Re:A bug only exists... (2, Funny)

grumpy_old_troll (1049646) | about 7 years ago | (#20238637)

A time bomb doesn't exist if it hasn't exploded yet.

Who's next? (0)

Anonymous Coward | about 7 years ago | (#20240381)

Ten to one [slashdot.org], we hear next week that some large repository of student papers is vulnerable too.

does it have to be turned into law? (4, Insightful)

toQDuj (806112) | about 7 years ago | (#20237503)

I believe there is a system that forces a company into action if it delivers faulty products.

Why then, should software be any different? Do we have to force companies to take action once a bug is submitted to them?

B.

The difference (3, Insightful)

InvisblePinkUnicorn (1126837) | about 7 years ago | (#20237529)

Somehow I don't think that too many lives are being put at risk if EZPhotoSales has a bug in its software. Now a seat buckle on a car, that's a different story...

Re:The difference (2, Interesting)

toQDuj (806112) | about 7 years ago | (#20237573)

Sure, seatbelts are a prime example, but I've also seen recalls for much more mundane stuff, such as Ikea furniture and kiddie toys. A bug in software could really cause problems, albeit probably indirectly.

B.

Re:The difference (1)

owlstead (636356) | about 7 years ago | (#20238259)

Many software products contain many bugs. Because software systems contain so many lines of source code, you can be almost certain that there are bugs. This is especially true if languages like C++ are used to create relatively undemanding applications. Many of these bugs will never show up; if they were going to, they would have been discovered during testing. And if they do show up, they may not do much harm. For example, a memory leak or buffer overflow in a graphics application won't matter too much.

I don't think that this goes for physical products as much. If there is a weak point in a kiddie toy, it might break the product, or a piece of product could be swallowed (ugh, just thought about the Simpsons: could somebody please think about the children?!). Also, if a bug in a software product gets exploited, it can still be fixed afterwards. Sure, the bug does some damage, but it might be much more economical this way.

Of course, there is one serious problem with this scheme: manufacturers won't be pushed to create better, more bug-free products. Then again, too much exposure of bugs to the buying public might push them to, and you might piss off existing customers as well. You can see that happening in this case. Maybe the programmer that created the bugs will be more closely monitored, or will have to write a unit test or two more for his next project.

Re:The difference (1)

Alphager (957739) | about 7 years ago | (#20238417)

Many software products contain many bugs. Because software systems contain so many source code lines, you are almost certain that there are bugs. This is especially true if languages like C++ are used to create relatively undemanding applications. Many upon many of these bugs will never show up, if they were, they would have been discovered during testing. And if they show up, they may not do much harm. For example, a memory leak or buffer overflow in a graphics application won't matter too much.

Yeah, that's the reason the highly important security-bugs do not exist: if they were important, they would have surfaced during the testing phase.

Re:The difference (2, Insightful)

owlstead (636356) | about 7 years ago | (#20238619)

Those "highly important security-bugs" will most likely be found in OS or server components. Sure, that's a lot of software, but they aren't general purpose applications. I tried to exclude those products by writing "undemanding applications" and "memory leak or buffer overflow in a graphics application". So either I was still not clear enough or you were misunderstanding me for some other reason.

Buffer overflows and such tend not to surface in normal applications, because you would have to go out of your way to exploit them or to find them during testing, and would gain nothing at all. Of course, underlying libraries such as JPEG software might be used by either OS or application software, so general-purpose libraries should be rather bug-free or they could cause serious problems (as in Java or in browsers).
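The overflow class in image libraries usually comes from trusting a length field in a file header. Here's a toy sketch in Python (a made-up two-byte-length "comment" record, not real JPEG parsing): Python's slicing silently truncates where a naive C memcpy would overflow, but the missing bounds check is the same.

```python
import struct

def read_comment_unchecked(blob: bytes) -> bytes:
    """Toy parser: 2-byte big-endian length, then that many comment bytes.
    It trusts the attacker-controlled length field; in C, a copy sized
    by this field is where a heap overflow would live."""
    (length,) = struct.unpack_from(">H", blob, 0)
    return blob[2:2 + length]

def read_comment_checked(blob: bytes) -> bytes:
    """Same parser with the one-line check that closes the bug class."""
    (length,) = struct.unpack_from(">H", blob, 0)
    if length > len(blob) - 2:
        raise ValueError("declared length exceeds the data actually present")
    return blob[2:2 + length]
```

A well-formed record passes through both. A record claiming 65535 comment bytes while carrying two is quietly truncated by the unchecked version (and would smash memory in a careless C port), and rejected outright by the checked one.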

I hope this makes things more clear for you.

Re:The difference (1)

toQDuj (806112) | about 7 years ago | (#20239051)

But in a way, a piece of software is just like a complicated design document for an actual product. Faults can slip into either. I think programs should be more like Volkswagen designs than Yugos. Instead of only looking at the exterior look and feel of the software, more attention needs to be spent on designing the internal components well.

For software companies, that means putting more manpower into the code beyond the outer shell. Well-thought-out interior functions are just as important as the outside look.

B.

Re:The difference (2, Interesting)

xmarkd400x (1120317) | about 7 years ago | (#20238371)

The difference might not be as big as you think. Most hardware that can harm people has a very specific application. If you modify it or use it outside of its intended use, the manufacturer has no liability. For instance: your seatbelt won't fit around your belly, so you cut it and sew some cloth in to make it longer. You get in an accident and die because the seatbelt broke. This is by no means the fault of the manufacturer.

Now, here's how this applies to software: if vendors became liable for any bug whatsoever, they would start making similar claims. Their software could be used only on certified operating systems, and only with certified software. Any attempt to modify the source, binaries, or memory during execution would cause the user to assume all liability. I don't think those scenarios are all that different.

Software failing isn't necessarily harmless (4, Insightful)

Hemogoblin (982564) | about 7 years ago | (#20238689)

I was thinking of moderating, but I'll reply instead:

It's possible to be injured in ways other than just physically. What about fraud and identity theft? It could be very damaging to thousands of people if one of the software applications your company is using has flaws that allow fraud or identity theft to occur on a massive scale.

To quote "Going Postal" by Terry Pratchett: "You have stolen, embezzled, defrauded, and swindled without discrimination. You have ruined businesses and destroyed jobs. When banks fail, it is seldom bankers who starve. In a myriad small ways you have hastened the deaths of many. You do not know them, but you snatched bread from their mouths and tore clothes from their backs."

There's a reason why fraud and theft can carry as harsh a punishment as assault. (In Canada, at least.)

Maybe EZPhoto Editor isn't going to put anyone at great risk if it fails, but I'm sure you could think of some software that might.

Re:The difference (1)

Kingrames (858416) | about 7 years ago | (#20239395)

What if the security vulnerability ended up compromising the personal information of someone in the witness protection program?

Certainly that would qualify as a life at risk.

Re:does it have to be turned into law? (4, Insightful)

hateful monkey (1081671) | about 7 years ago | (#20237729)

The biggest reason this wouldn't work well right now is that so many pieces of software are written by small companies that couldn't afford a massive change in liability laws. This would turn software into a business that needs an enormous amount of money to enter, which would essentially destroy small startups and leave the field to large, well-funded corporations. Open source software would never be usable outside of a very narrow range of applications that present little to no legal liability, unless a large company were willing to absorb the liability costs (insurance, etc.).

As it stands, even Microsoft states in its EULA that it does not warrant Windows or Office to be good for any purpose. If every student or business person could sue Microsoft for losing their important document minutes before their presentation, even Microsoft, with their billions in the bank, would not stay in business long.

In addition, the reason companies fix publicly disclosed bugs is not liability; it is that a known bug makes them look bad to prospective customers. If they had to worry about the sort of liability you are talking about, they would be hesitant to fix any bug that didn't open them to a lawsuit, just in case the FIX created an issue they could be sued for.

big deal it needs to change (-1, Troll)

Anonymous Coward | about 7 years ago | (#20238385)

the entire industry has grown up around coding practices knowing it can use a get-out-of-jail-free EULA, so fast and sloppy is the norm, the evil "buffer overflow" is an expected feature they don't care about too much, and using active scripting on websites is considered cool. You know what, a lot of consumers don't care, because they are the ones suffering while the multi-hundred-billion-a-year industry cashes the checks. Enough's enough; now it is time for them to grow up and enter the adult world where warranties for consumer products are normal. Don't you think half a century of helping get the industry off the ground is ENOUGH training-wheels time? How much of "you can get patents on your precious IP, making it a legal product" don't you understand? Either give up patents and high prices, or openly admit your stuff is experimental betaware, much closer to artwork than to engineered products, and charge a pittance or nothing for it. One or the other; charging big bucks for no warranties got old a long time ago for the 99.9999% of the public who AREN'T coders but have to toe the line when it comes to whatever they "produce".

Software is SO bad that people throw away perfectly functioning hardware, thinking their computer is "broken".

That's NUTZ! It's a blatant ripoff, too, something ya'all chuckle about I am sure.

Now, I run FOSS that is free for the taking, and I know full well it is always experimental beta, even if it is called "full stable release", BUT, if I was running some expensive proprietary "solution" that some billion buck obscene profits software company "licensed" to me so they could "leverage" more quarterly profits and it got hosed and cost me the big bucks, you'd bet the farm their ass would be in court and I'd be challenging that damn "no fault" EULA, and patents and business models would be a big part of the case, and I'd invite any and all interested parties to join in a class action, all the way to the supreme court if necessary.

And I bet it happens sometime. Eventually some rich PHB who ISN'T in the software industry, but who has to adhere to consumer warranties with whatever he makes, is just gonna be pissed off enough from hosed software and ridiculously stupid computers that are more trouble than they are worth, and he'll have access to on-staff lawyers and other lawyers who can get the ball rolling, and he'll have enough cash to push the issue, because it NEEDS to happen. As long as the industry calls it a product and treats it like a product, instead of "art" (as in copyrights only, like other typed-up stuff), then fuck 'em, they need a normal consumer warranty and it needs to be reasonably free from defects and damn suitable for purpose. Remember, all other industries went through this phase. A long time ago we had zero warranties on anything; the entire world of manufacturing fought having warranties, but you know what? They adapted. It's not perfect now, and yes, sometimes there are recalls big and small (just look at what Nokia is having to suck up now), but it still works, we still have manufacturing, profits are still being made, proving their cries of "end of civilization" if warranties were enforced were totally bogus alarmist lies. The only place snake-oil caveat emptor still exists is with software "products". So tough luck: either start doing the right thing voluntarily or wake up one day being FORCED to do it, because it IS going to happen. There's only so much of a free ride consumers and the voting public will allow before they get mightily annoyed.

And it will only take *one pissed off rich guy* to get this going, just *one*. Feeling lucky? You've pushed the envelope on expensive crapware for decades now, eventually you'll get called on it. Joe public is hard to get moving, but once moving, they are hard to stop, and things DO change then.

why drag lawyers and the government into this? (1, Interesting)

Anonymous Coward | about 7 years ago | (#20238013)

Here is a PERFECT example where
a) change was needed
b) public was unaware
c) individual wanted change
d) individual alerted a portion of the public
e) change was made.

No lawyers, no State, no violation of freedoms, no taxes, no fines.

Call me sceptical (2, Interesting)

RingDev (879105) | about 7 years ago | (#20239385)

I'm not familiar with the software in question, but are they meaning to say that the company did nothing for a month, then they posted the vulnerabilities publicly, and in less than 7 days the company became aware of the post, tested the vulnerabilities, designed a solution, corrected the code, and had a software update tested and ready for deployment?

If so, that is some AMAZING response time. But I would venture a guess that they had already been working on the corrections. The public posting may have made a couple of coders work overtime and cut the testing phase out of the cycle, but for them to do the whole thing in less than 7 days is highly unlikely.

Not only that, but since they would have had to cut short, or cut out entirely, the testing phase of the release, it is MORE likely that security issues remain, or that new security issues have been created and not found.

I'm not sure I'd call this one a "win" just yet.

-Rick

Adopt the cryptographer threat model (5, Insightful)

Ckwop (707653) | about 7 years ago | (#20237521)

In the threat-models used by cryptographers, the attacker is assumed to know everything except cryptographic keys and other pre-defined secrets. These secrets are small in number and small in size. Their size and their limited distribution means we can trust protocols based on these secrets.

Software that is used by millions of people is the very antonym of a secret. Compiled code is routinely reverse engineered by black hats. Web sites are routinely attacked using vectors such as SQL injection. In short, you can't assume that any of the source code is secret. Taken to its logical conclusion, you must therefore assume the worst: that the black hats know of far more bugs than you do. Strictly speaking, you assume they know every bug that exists in your software.

In light of such a severe threat model, the argument over full disclosure is a non-debate. Black hats with sufficient resources probably already know of the bug. The only people aided by disclosing it widely and publicly are the people who run the software, who can take evasive action. The black hats, in contrast, were only told what they already know.

Simon

Re:Adopt the cryptographer threat model (1)

AnonymousCactus (810364) | about 7 years ago | (#20237895)

It's still a debate. Business is about likelihood, not absolute truth, and definitely not the idealized world that cryptographers make up. Sure, if someone tells you about a bug in your software, you risk your software being responsible for damages to your customers. That's a potential cost. If you fix it, that's also a cost. Perhaps you simply disagree with the company's assessment of the relative costs. Keep in mind that from the company's viewpoint, for the bugs to have a real effect, someone has to do something illegal. That gives them additional incentive not to fix it: if something goes wrong, it was the hacker, not them, who is responsible (so goes their spin). Not to mention, for every bug out there known by someone legitimate, there are potentially many more known by people who aren't. One more reason why it could be considered not cost-effective to drop everything to fix the latest bug someone pointed out.

Re:Adopt the cryptographer threat model (4, Insightful)

Otter (3800) | about 7 years ago | (#20237905)

Taken to its logical conclusion, you must therefore assume the worst; that the black-hats know of far more bugs than you do. In fact, strictly speaking you assume they know every bug that exists in your software.

But that's a ridiculous assumption! It makes sense in the context of cryptography research, but you're turning it into an assertion that publicizing software vulnerabilities doesn't have any negative consequences, which is absurd. There *are* two genuine conflicting sides here, and you can't just wave one of them away.

Re:Adopt the cryptographer threat model (4, Insightful)

Ckwop (707653) | about 7 years ago | (#20238145)

But that's a ridiculous assumption! It makes sense in the context of cryptography research, but you're turning it into an assertion that publicizing software vulnerabilities doesn't have any negative consequences, which is absurd. There *are* two genuine conflicting sides here, and you can't just wave one of them away.

It's a ridiculous assumption until you try to work out how you can usefully weaken the assumption! Ask yourself this, how do you know how good the attacker is? They're not going to share their successes with you, in fact, they will probably never make contact with you.

You are only as strong as your weakest link, but with the vast distribution that's possible these days you have to expect to be up against the very best attackers. So what, then, is the plausible attacker you're meant to be up against?

Incidentally, this is why cryptographers choose such a harsh threat-model in which to place their protocols and ciphers. Only by designing against an attacker who is effectively omniscient can you truly get security. You need to look no further than Diebold to see what happens when you don't do this.

Sure in the real world, disclosing vulnerabilities has an impact! Of course it does, but to say it decreases the security of the users of the software is simply nonsense. It may well do in the very short term, but in the longer term it is absolutely vital that full disclosure occurs if security is to improve.

Simon

Re:Adopt the cryptographer threat model (1)

Otter (3800) | about 7 years ago | (#20238397)

Sure in the real world, disclosing vulnerabilities has an impact! Of course it does, but to say it decreases the security of the users of the software is simply nonsense. It may well do in the very short term, but in the longer term it is absolutely vital that full disclosure occurs if security is to improve.

Yes, that'd be the entire point! When you're talking about the field of cryptography research that calculation is obvious. But users of software can't be expected to put up with increased vulnerability in "the very short term" (i.e. months or years) even if it results in improved security over decades. (Which it doesn't, anyway. Cryptography builds from one generation to the next. User-facing software keeps implementing the same vulnerabilities over and over. Letting Mosaic users get ravaged "in the very short term" wouldn't have kept IE and Mozilla developers from making the same mistakes.)

Re:Adopt the cryptographer threat model (1)

LeafOnTheWind (1066228) | about 7 years ago | (#20238777)

I think a very old quote sums up the response to this quite well: Security through obscurity is NOT security.

Re:Adopt the cryptographer threat model (0)

Anonymous Coward | about 7 years ago | (#20239615)

Ask yourself this, how do you know how good the attacker is?
You don't.

You can do some risk analysis, though. I can tell you that the vast majority (meaning all) of attacks I see on my servers are random, IP-based, scripted attacks. I suspect that most organizations are subject to very few focused attacks. Partly this follows from the fact that, of all 'hackers,' few are good, and most are script-kiddies. Presumably, "not good" attackers use scripts that exploit widely-known vulnerabilities, and "good" attackers use those plus vulnerabilities they have personally (or within a small circle) discovered. In any case, few attackers use undisclosed vulnerabilities.

Discovery of vulnerabilities is hard, so any hints are helpful, even just knowing that a particular package has a vulnerability. The more information you add, the more false leads you remove from the uninformed attacker's path and the easier you make discovery. True, anyone who already knows about the hole is able to continue exploiting it, but those people are also motivated not to communicate the vulnerability, in order to minimize the likelihood that a defense will be found.

Therefore, a period of nondisclosure allows an ethical vendor time to make repairs even while clients are vulnerable to a small number of attackers. Immediate disclosure makes it a race, and the attackers are likely to win, leaving clients vulnerable to a large number of attackers for however long it takes to create a patch or workaround. An ethical vendor should get that window of low threat due to low disclosure, but clients with persistently vulnerable systems have the right to be warned.

How many? (2, Interesting)

benhocking (724439) | about 7 years ago | (#20238365)

There *are* two genuine conflicting sides here and you can't just wave one of them away.
I can count at least 3, and I wouldn't be surprised if there aren't a lot more. Between only telling the company about a discovered security flaw and immediately announcing it to the entire world is a whole range of possibilities. To name a few:
  • Initially tell only the company. If they do nothing, then release it to everyone.
  • Initially tell only the company, but tell them that you will release it to everyone in X days.
  • Initially tell the company and CC a few other white hats that you trust.
  • Initially tell the company and CC the better business bureau, etc.
(By "CC" I'm implying that you're letting the company know that you're telling other people.)

Re:How many? (1)

MarkGriz (520778) | about 7 years ago | (#20240311)

There *are* two genuine conflicting sides here and you can't just wave one of them away.
I can count at least 3, and I wouldn't be surprised if there aren't a lot more.
That's ridiculous! This is slashdot, where there are only 2 ways to do something... your way, and the wrong way.

Re:Adopt the cryptographer threat model (2, Interesting)

MostAwesomeDude (980382) | about 7 years ago | (#20238825)

I went back and looked at some statistics for my Subversion logs and bug tracker. I find that roughly 11% of bugs were "discovered" (that is, filed first) by me. That means a whopping 89% of programming errors went unnoticed by me and were found by the community. Now, I may be a lone maintainer of code, but even in a team, bugs will still get past you. The assumption that the public, or at a minimum the black-hat community, knows more about your bugs than you do is not unreasonable. It is just as valid in the context of SQL injections in PHP scripts as in the context of buffer overflows in hardware DVD players.

For example, read up on the ongoing attacks on AACS. The black hats (and yes, they are black hats) working on breaking AACS have exploited all kinds of software and hardware bugs and shortcomings in order to gather more information and cryptographic secrets. They have the upper hand because they are not fully disclosing their work. If they were to fully disclose the bugs in various tabletop HD-DVD players and software tools that they use to garner keys, you can bet that the problems would be fixed. As is, though, they are still ahead of the AACSLA.

That's a bad example. (1)

Kadin2048 (468275) | about 7 years ago | (#20240741)

For example, read up on the ongoing attacks on AACS. The black hats (and yes, they are black hats) working on breaking AACS have exploited all kinds of software and hardware bugs and shortcomings in order to gather more information and cryptographic secrets. They have the upper hand because they are not fully disclosing their work. If they were to fully disclose the bugs in various tabletop HD-DVD players and software tools that they use to garner keys, you can bet that the problems would be fixed. As is, though, they are still ahead of the AACSLA.
I'm not sure I'd go so far as to say that. DRM is a poor example for any security model, because there's no real security there, just obscurity. In the long term, it doesn't really matter what the hackers release, because there's no long-term way for the AACSLA to stop them (well, aside from putting them all in jail, which is doubtless what they'd love to do). You can't give someone both enciphered information, and the key to the cipher, and expect them to not be able to combine the two -- that's exactly what DRM does. It's fundamentally a shell game.

The reason the attackers keep their work secret is usually twofold: (1) they want an advantage over other attackers, to be the first to break it really thoroughly, and (2) they don't want the AACSLA to plug any holes before they can find a break that will be impossible to plaster over.

Also, that's a poor example because a lot of the AACS hacking goes on in the open. When a break is found it's usually documented (at least if you go to the right forums). They're not sending it to the AACSLA to fix, but it's not really all that 'secret,' it's more like academic research where the research is conducted behind closed doors, but the findings get published when there's something significant.

Re:Adopt the cryptographer threat model (1)

10101001 10101001 (732688) | about 7 years ago | (#20239175)

Taken to its logical conclusion, you must therefore assume the worst; that the black-hats know of far more bugs than you do. In fact, strictly speaking you assume they know every bug that exists in your software.
But that's a ridiculous assumption! It makes sense in the context of cryptography research, but you're turning it into an assertion that publicizing software vulnerabilities doesn't have any negative consequences, which is absurd. There *are* two genuine conflicting sides here, and you can't just wave one of them away.

You're right, it is a ridiculous assumption. Now, consider this: once information about a bug is released to the public, you *know* black hats are aware of it, so everyone involved can react accordingly without the assumption being "ridiculous". And given that users can completely nullify the effect of any bug by not using the software, can nullify most bugs with workarounds and patches, or can take a calculated risk by continuing to use the software as-is, this rather nullifies the "other side". In short, the most reasonable position would seem to be to release bug information as soon as possible to as many people as possible. In any other situation, one is generally left to make wildly inaccurate assumptions (all code is buggy, so you can't use any code; or all code is safe, so you can continue to use it without any risk).

Re:Adopt the cryptographer threat model (1)

Otter (3800) | about 7 years ago | (#20239477)

Yeah, that's a great plan.

I can't remember if I turned the stove off when I left for work this morning -- I'd better call my neighbor and ask him to set my house on fire!

Re:Adopt the cryptographer threat model (0)

Anonymous Coward | about 7 years ago | (#20238071)


This reasoning is whack.

you only told black-hats what they already know.

This is only true under your assumption that the black hats know about all of your bugs, which is probably not the case. You are only assuming they know all of the bugs as part of a threat model.

If a government said "The government of Evilland will try to get spies to find out our secrets. Let us assume that they have found out all of our secrets. The logical conclusion to this is that we should make all of our secrets available to all governments." you would think they were idiots.

The true logical conclusion is to patch your damn bugs as soon as you find them.

FWIW I'm for full disclosure after giving vendors a reasonable time to fix bugs.

Re:Adopt the cryptographer threat model (1)

Poltras (680608) | about 7 years ago | (#20238395)

This reasoning is whack.

you only told black-hats what they already know.
This is only true under your assumption that the black hats know about all of your bugs, which is probably not the case. You are only assuming they know all of the bugs as part of a threat model.

If a government said "The government of Evilland will try to get spies to find out our secrets. Let us assume that they have found out all of our secrets. The logical conclusion to this is that we should make all of our secrets available to all governments." you would think they were idiots.

Actually, that statement is stupid because you just put forward a fallacy. You should NOT make all your secrets available to all governments, but at the very least you should, as a government, take action based on the assumption that the enemy knows everything about you.

It is the same for security and disclosure. You should assume that the blackhats know about the bugs (and in reality they do, not all blackhats but most probably at least one), then you can take decisions based on the maximum risk, not just a probability.

You don't secure safes by assuming dynamite is hard to find. Likewise, you secure your software taking into account that the worst will happen, and then you judge cost versus damage and security level.

Require login, forbid any subdirectory access. (4, Interesting)

Spy der Mann (805235) | about 7 years ago | (#20238093)

I saw the vulnerability page. They don't have access restriction to subdirectories.

Here's how I've solved this problem:

1) Modify the .htaccess files (or even better, httpd.conf) so that ANY access to the subdirectories of the main app is forbidden. The only exceptions are: a) submodule directories, whose PHP files do a login check, or b) directories for common images (e.g. logos), CSS, XSLT, and JavaScript.

2) The only way to view your files is through the web application's PHP file lister and downloader. This should be child's play for anyone with PHP knowledge: PHP has the fpassthru() function, or if you're memory-savvy, use standard fopen(). Make sure the lister doesn't accept directories above the ones you want to list, and run requested filenames through basename() to strip any directory components.

3) Any file in the PHP application MUST include() your security file (which checks if the user has logged in and redirects them to the login page otherwise). For publicly-available pages, add an anonymous user by default.

4) For log in (if not for the whole app), require https.

4a) If you can't implement https, use a salted login, with SHA-256 (or at the very least MD5) for password hashing.

5) Put the client's IP in the session variables, so that any access to the session from a different IP gets redirected to the login page (with a different session id, of course).

6) After log in, regenerate the session id.

7) Put ALL the session variables in the SESSION array, don't use cookies for ANYTHING ELSE.

I consider these measures to be the minimum standard for web applications. It shocks me that commonly used apps still fail to implement them properly.
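For what it's worth, points 2 and 4a above can be sketched in a few lines. This is a hypothetical illustration in Python rather than PHP (the closest PHP equivalents would be basename() and hash('sha256', ...)); the helper names and the PHOTO_ROOT path are made up:

```python
import hashlib
import hmac
import os
import secrets

# Hypothetical root directory that the lister is allowed to serve from.
PHOTO_ROOT = "/var/www/gallery/photos"

def safe_photo_path(requested: str) -> str:
    """Serve files only from PHOTO_ROOT: strip any directory components
    the client may have injected (../../etc/passwd and friends)."""
    filename = os.path.basename(requested)  # analogous to PHP's basename()
    return os.path.join(PHOTO_ROOT, filename)

def hash_password(password: str) -> tuple[str, str]:
    """Per-user random salt plus SHA-256 digest, hex-encoded for storage."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt.hex(), digest

def verify_password(password: str, salt_hex: str, digest_hex: str) -> bool:
    salt = bytes.fromhex(salt_hex)
    candidate = hashlib.sha256(salt + password.encode()).hexdigest()
    # Constant-time comparison so timing doesn't leak matching prefixes.
    return hmac.compare_digest(candidate, digest_hex)
```

With this, a request for "../../etc/passwd" resolves harmlessly to a file inside PHOTO_ROOT, and a single precomputed dictionary can't be reused against the whole password table because every user has a unique salt.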

Re:Require login, forbid any subdirectory access. (1)

Dirtside (91468) | about 7 years ago | (#20240621)

This is a good general method, but there are some problems in certain environments. My company, for example, runs a massive load-balanced server farm; we can't really use PHP sessions because two successive requests from the same user may go to separate servers.

Locking to the IP address is a non-starter because some ISPs rotate their visible IP range dynamically, so that user A might appear to be coming from IP X on one request and from IP Y on the subsequent request. That user is then screwed.

Re:Require login, forbid any subdirectory access. (2, Informative)

Rich0 (548339) | about 7 years ago | (#20241041)

If you store your session IDs in a central database you'd be covered. Maybe under extremely high load this might be an issue, but often these bugs crop up in software that doesn't face these sorts of high-demand applications.
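A rough sketch of that approach, with SQLite standing in for the central database (the class and table names here are invented for illustration; a real farm would point every front-end at the same shared database server):

```python
import secrets
import sqlite3
import time

class SessionStore:
    """Central session store shared by all web servers in the farm."""

    def __init__(self, path: str = ":memory:", ttl: int = 3600):
        self.db = sqlite3.connect(path)
        self.ttl = ttl
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS sessions"
            " (id TEXT PRIMARY KEY, user TEXT, expires REAL)")

    def create(self, user: str) -> str:
        sid = secrets.token_urlsafe(32)  # unguessable session id
        self.db.execute("INSERT INTO sessions VALUES (?, ?, ?)",
                        (sid, user, time.time() + self.ttl))
        return sid

    def lookup(self, sid: str):
        row = self.db.execute(
            "SELECT user, expires FROM sessions WHERE id = ?",
            (sid,)).fetchone()
        if row is None or row[1] < time.time():
            return None  # unknown or expired session
        return row[0]

    def regenerate(self, sid: str):
        """Issue a fresh id after login, invalidating the old one
        (defeats session fixation)."""
        user = self.lookup(sid)
        if user is None:
            return None
        self.db.execute("DELETE FROM sessions WHERE id = ?", (sid,))
        return self.create(user)
```

Because every server consults the same table, it no longer matters which machine handles a given request, and regenerating the id on login also covers point 6 from the earlier checklist.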

Re:Adopt the cryptographer threat model (0)

Anonymous Coward | about 7 years ago | (#20240473)

There is a large difference between assuming that attackers know all the bugs and handing them the bugs directly. You know that this assumption does not hold up in the real world, especially since there are a large number of different "Attackers" with different skill levels.

This is like assuming that all criminals own a gun, and therefore giving everyone a gun will just let honest people defend themselves! What about the criminals who were only using knives? What about the generally honest people who, with access to the gun, turn to a life of crime?

The cryptographer threat model is a fine mindset, in that it dictates that security through obscurity will never work, and the bug needs to be fixed asap. It however does not mean that you should hand all of your known bugs out to anyone and everyone. At best, it means you should provide critical users of your software who would be affected with the bug reports so they can defend themselves, which is much different that disclosing to anyone and everyone.

Incentives (5, Insightful)

gusmao (712388) | about 7 years ago | (#20237537)

It was always clear to me that full disclosure is the better option, simply because people react to incentives, and bad publicity creates a strong incentive for vendors to fix and patch their systems.
Nothing like fear of losing sales and yearly bonus to motivate higher management.

Re:Incentives (1, Insightful)

Anonymous Coward | about 7 years ago | (#20238775)

Well not exactly. Publicly reporting a security bug simply changes an engineering group's priorities. Other bugs don't get fixed, new features won't get added. We can debate whether or not that's a bad thing, but that's all it is - the publicly disclosed bug will just get fixed first.

Depends What You Mean (1)

JamesRose (1062530) | about 7 years ago | (#20237607)

Full disclosure could mean finding a bug and then posting it to the first 1337 haxxor forum you can find, which most people would agree is wrong. But full disclosure after giving the software company warning can't do any harm: either they'll have fixed it, or they won't bother fixing it until forced, or not at all.

Re:Depends What You Mean (0)

Anonymous Coward | about 7 years ago | (#20238139)

Or they'll sue your ass. Reporting problems to the vendor first has definite risks.

Re:Depends What You Mean (1)

xappax (876447) | about 7 years ago | (#20238667)

but full disclosure after giving the software company warning

It's debatable whether that's considered true, good-faith full disclosure. If you discover a security vulnerability, you are suddenly burdened with a moral imperative. You know that many people are in danger, but the people don't. Every day that you delay telling them is another day they're in danger, and quite possibly being exploited. It's important to remember that by denying people the knowledge of the insecurities in their systems, you are effectively protecting the interests of the attackers, regardless of your intentions.

Furthermore, there's the question of whether the company deserves a free pass for designing insecure software. If they're given special advance knowledge of the problem, they're able to create a fix that they otherwise wouldn't have. Then, when the vulnerability is publicized, they can have the issue already patched, and claim credit for being a responsible company when in fact it took a third-party volunteer to fix their shit for them, and they only acted on it because they knew it would become public.

Telling companies in advance about security vulnerabilities ironically creates an incentive for them to design even less secure code. After all, if an army of volunteers will spot and confidentially help them fix any errors that are discovered later at no cost to the company reputation or payroll, why would they do it themselves? There always needs to be a negative cost to the company associated with having insecure code, and full disclosure preserves this.

Re:Depends What You Mean (1)

dgatwood (11270) | about 7 years ago | (#20239077)

It's important to remember that by denying people the knowledge of the insecurities in their systems, you are effectively protecting the interests of the attackers, regardless of your intentions.

IMHO, the proper thing to do is to do a partial disclosure with selective full disclosure. Disclose all the details to the company. Disclose all details to CERT, who will issue a bulletin available only to certified vendors and will wait a period of time before public disclosure. Simultaneously post a public statement that says that you have informed the company about [n] vulnerabilities. Be sure to provide the date when CERT will disclose them in detail to the general public, and make it clear that you have reported it in the proper manner. Describe the threat level of the vulnerability (don't say "remote execution" unless you have a working test case or are darn sure that one exists), and tell what steps (if any) you can take to secure your copy of the software without giving away the details necessary to reproduce the vulnerability.

By doing this, you ensure that A. the public knows about it, and B. if the public is worried about it, they can stop using the software, use your workaround (if one exists), or take precautions about opening files from strangers or whatever. If the public doesn't care (and 99.999% of people won't), they will wait for the vendor to release a fix, but you have put the choice in their hands. Best of all, by submitting it to CERT and thus setting a full disclosure date, you have set a deadline for the company to fix the problem. This will ensure that a fix comes promptly instead of the company sitting on its hands.

Telling companies in advance about security vulnerabilities ironically creates an incentive for them to design even less secure code. After all, if an army of volunteers will spot and confidentially help them fix any errors that are discovered later at no cost to the company reputation or payroll, why would they do it themselves? There always needs to be a negative cost to the company associated with having insecure code, and full disclosure preserves this.

No, disclosure---even delayed full disclosure---preserves that negative cost. Immediate full disclosure amplifies that cost to dangerous proportions. Immediate full disclosure should be a felony. Everyone makes mistakes, including you. I guarantee that there is not a single seasoned programmer out there who writes in C who has never written a single piece of code with a potential security hole. Using that as a way to shame the company is no different than taking pictures of someone committing an affair and posting them on the Internet to shame a politician (except that hopefully not all politicians have affairs). It basically is a way to harm the company for your own personal joy, which is not the same thing as protecting the public, no matter how you twist it.

Immediate full disclosure harms the public. It does not help it. In the long run, companies with lots of security bugs find themselves having to explain to their users why their software is so insecure regardless of whether those vulnerabilities were disclosed immediately or were sat upon for a month to give the vendor time to protect the public from the threat. However, by disclosing things fully immediately, you are giving the bad guys---black hats who probably would not have discovered that vulnerability yet---all the information they need to crack someone's system, but you are not giving the average user enough information to protect themselves against it because most users are incapable of figuring out how to reverse engineer software and fix a security bug. Thus, in effect, you are putting the entire computer ecosystem in jeopardy just to "teach the company a lesson". It's no different than taking out bolts in the landing gear of every airplane at a major airport to teach the airlines that frequent inspections are important.

There is NO excuse for immediate full disclosure. It is reckless, unethical, and harmful to not only the computer industry, but also to computer users. Immediate disclosure is a good thing, so long as it is done without sufficient detail to reproduce the exploit. Immediate full disclosure is not.

Re:Depends What You Mean (1)

xappax (876447) | about 7 years ago | (#20239821)

disclosure---even delayed full disclosure---preserves that negative cost. Immediate full disclosure amplifies that cost to dangerous proportions.

In a way, you're right. Full disclosure makes the cost to the company significant enough that it's a danger to the company's interests. This is as it should be - there's no reason to make selling insecure code less dangerous.

Everyone makes mistakes, including you.

Damn right, and when I make a mistake, I have to face the consequences. When I mess up, I don't get to secretly go back and fix everything up like I never made a mistake - I get called out on it and have to explain myself, and demonstrate why I won't make that mistake again. And to be honest, I'm a lot more likely to be careful in the future if I have to suffer through that ordeal than if I was able to correct it before anyone important noticed.

Using that as a way to shame the company is no different than taking pictures of someone committing an affair and posting them on the Internet to shame a politician

Yeah, and running with that idea, "responsible disclosure" is like continuing to lie to the politician's wife about what he's doing on those business trips so that he can have a chance to maybe change his ways. That's the "less messy" way of handling it, since she doesn't find out until after the problem has been corrected, but the damage is now twofold, because the damage was still done, and you're now complicit in the affair by deceiving her about it.

black hats who probably would not have discovered that vulnerability yet

Probably? What's the probability here? Probably doesn't apply in security design. A system is not secure because there's only a 2% chance that someone knows the secret to breaking in, a system is secure because there is no secret to breaking in. I don't want to play odds with my security, and other people playing those odds for me without my knowledge is unethical.

It's no different than taking out bolts in the landing gear of every airplane at a major airport to teach the airlines that frequent inspections are important.

Actually, it's more like finding out that it's possible to routinely gain access to planes and remove the bolts from their landing gear, and then letting the public know that they are in danger because of this. And I sure fucking hope that if someone found that out they'd tell me right away, because I'd far prefer to know about it and just take the train than obliviously keep flying, my life depending on the odds that no malicious individuals have found out about the problem before the bureaucratic security apparatus can figure out some way to address it.

Re:Depends What You Mean (1)

Bloodoflethe (1058166) | about 7 years ago | (#20239073)

That isn't considered Full Disclosure. Posting on a script kiddie forum is not the same as disclosing to a public information disclosure service like Bugtraq. The former is considered malicious disclosure of security threats.

The Government (2, Insightful)

Sunrise2600 (1142529) | about 7 years ago | (#20237619)

It works in software, and it works in government too. Only slimy bastards hide behind a veil of secrecy from their customers/public. Maybe one day we will have open source voting machines.

Re:The Government (1)

Pojut (1027544) | about 7 years ago | (#20237855)

ENTIRELY off topic, I know, but why is it so difficult to make a secure (both digitally and physically) electronic voting machine that actually WORKS? We can put people on the moon, travel miles below the ocean, build computers the size of fingernails, and yet we can't create an electronic voting machine that doesn't break when you so much as look at it?

Re:The Government (1)

sobachatina (635055) | about 7 years ago | (#20238505)

Because, it turns out, humans are pretty smart when we put our minds to things.

In the examples you cited the people who have exposure to those systems are motivated to see them succeed. I imagine the space shuttle would be easy to break if malicious individuals had access to it.

If ALL of the users of voting machines were motivated to see them succeed- what we have would work wonderfully. Unfortunately finding solutions that other people can't break when they are trying hard is not so easy.

Of course there is the other issue that the company making those voting machines has a reputation for being greedy and sloppy. That might have something to do with it.

Re:The Government (1)

Pojut (1027544) | about 7 years ago | (#20238593)

My point is that it really cannot be that hard to make a system that is physically and digitally secure. I know that people are smart and that they will circumvent something if they want to, but seriously. Come on. Is it really that difficult to make something that people can't fuck with within the 2 minutes that they are standing there?

Re:The Government (1)

caerwyn (38056) | about 7 years ago | (#20238767)

Why? Because, as it stands now, it's much harder to build something unbreakable than it is to break something. This applies to digital and physical security alike - especially when you have the perpetually weak link - human interaction - in the mix.

We give the electronic voting makers a lot of crap for making insecure systems, and rightly so- knowing they're insecure, they shouldn't put them on the market to be used in something so important. But it's easy to forget that it really is a hard problem. The fact that they make it harder for themselves with their attempts to cut corners, of course, only makes it worse.

Re:The Government (1)

Pojut (1027544) | about 7 years ago | (#20238835)

That's just it...it's not hard. No ports except for power and for the little card, and make the port for the cards in the voting machines write-only...only have the readers in one central location for each district where the cards are sent to. That solves your physical problem AND your electronic problem (short of the cards being hijacked on the way to the readers, of course)

Re:The Government (1)

caerwyn (38056) | about 7 years ago | (#20240015)

That's just it. Many of the hacks of existing voting machines don't necessarily have to do with the machine itself - they have to do with grabbing the card and modifying it externally in some fashion. So now you have to worry about physical card security at all times - and you also have the same unsolvable problem that the RIAA is dealing with when it comes to DVDs - you have to have an encrypted card, but you've got to have the key to decryption buried somewhere in the machine...

It really isn't as trivial as it might seem.

Re:The Government (1)

Pojut (1027544) | about 7 years ago | (#20240115)

Again, easily solved. The folks sitting in the voting booths insert a card into the machine that they remove from a sealed package, and the card stays completely internal in the machine. After the person is done selecting their votes, the card remains in the machine and is not retrieved until the voting booth is closed.

I know that the human element always exists, but it is drastically reduced if the person voting A. sees the card coming out of a sealed package into the machine and B. never actually touches the card

Re:The Government (0)

Anonymous Coward | about 7 years ago | (#20240541)

No offense, but you don't seem to know dick about voting machines, security, software, hardware, or the potential threats that you trivialize.

None of your "security" measures would mean anything if I were the person in charge of counting the votes and chose to alter the database. Or if I developed the voting machine and inserted code to skew the counts in some way.

Paper counting is (somewhat) secure because ballots can be recounted and verified. Your measures do nothing to permit verification of ANYTHING, which is where the insecurities lie in the current systems.

Re:The Government (0, Troll)

adisakp (705706) | about 7 years ago | (#20237875)

It works in software, it works in government too. Only slimy bastards hide behind their veil of secrecy to their customers/public.

But the current administration has held all its policy meetings in secrecy and has failed to disclose details of its inner workings to Congress, even in numerous private sessions, due to "executive privilege". Are you calling our great leader a slimy bastard?

Two basic problems (2, Interesting)

cdrguru (88047) | about 7 years ago | (#20237673)

Full disclosure results in announcing a bug not to the world, but only to the people who are paying attention. Does this include all the users of that software? No, not even most of them. So who gets informed? People looking for ways to do bad things. The users do not hear about the defect, the potential exploit, or the fix that corrects it.

They are just left in their ignorance with the potential for being exploited.

The "I want to do bad things" community has the information and is paying attention. Their community gets advance information before there even is a fix and they get to evaluate if it is worth their efforts to exploit it.

The other group that gets to benefit from full disclosure is the media. Starved for news of any sort, bad news is certainly good news for them.

All in all, full disclosure is simply blackmail. Unfortunately, no matter what the result is, the user of the affected product gets all of the negative consequences. Their instance of the product isn't fixed, because unless they are paying attention they don't know about it. They get to lose support if the company decides to pull the product rather than kneel to the blackmail. If the bug is exploited, the end user gets to suffer the consequences.

You might think this would justify eliminating exclusions for damages for software products. There isn't any way this would fly in the US, because while we like to think we're as consumer-friendly as the next country, the truth is this would expose everyone to unlimited liability for user errors. Certainly unlimited litigation, even if it was finally shown to be a user error, which is by no means certain. And do not believe for a moment that you could somehow exclude software given away for free from damages. If you had an exclusion for that, you would find all software being free - except it would be costly to connect to the required server for a subscription or something like that. Excluding free software would be a loophole that you could drive a truck through.

Re:Two basic problems (4, Insightful)

garett_spencley (193892) | about 7 years ago | (#20237869)

This is a very odd point of view.

First of all, if the users of the software aren't paying attention, whose fault is that?

Secondly, you would think and hope that the software manufacturers would be paying attention and that they would inform their users, who may or may not be paying attention.

Full disclosure doesn't just imply disclosure to a small, specific group of people. It involves making information PUBLICLY available to EVERYONE. If someone isn't paying attention then that's their own fault. But if you don't feel like end users who are too worried with other things to be paying attention to Bugtraq are getting a fair break then point the finger at the software manufacturer instead. After all, they're the ones who sold faulty software and they're often the ones who continue to sell faulty software when bugs are not disclosed to the public, because they take the mind set of "what they don't know can't hurt them".

Unfortunately, what "they" don't know CAN hurt them. Because those same people you were talking about who are "interested in doing harm" are usually the ones to find the bugs to begin with. So they already know and those end users that you are so adamant about protecting are already at risk.

So IMO it's the responsibility of the software manufacturers to pay attention, fix bugs, release patches and inform their users that they need to apply said patches ASAP.

I mean, are you really advocating keeping information from people ? What if you had cancer, would you prefer that your doctor not inform you ? As I already stated, full disclosure is all about making information publicly available to absolutely everyone, so that absolutely everyone can make whatever choices they feel like with that information. Your argument is that full disclosure is selective about who it makes the information available to. I have to disagree. At the very least it makes the information available to the developers who made the buggy software to begin with, and competent admins who follow those lists so they know what kind of bugs are running on their servers (I used to be one of those).

Yep yep (1)

Bloodoflethe (1058166) | about 7 years ago | (#20239195)

I'm glad you said that and not me. I was about to write a dissertation on security and disclosure based on the SEC's stance and requirements. Citations and everything.

Re:Two basic problems (0)

Anonymous Coward | about 7 years ago | (#20238223)

This is patently ridiculous. The people who spend the *most* time reading the full-disclosure mailing lists, bugtraq feeds, et al. are *security professionals*: people who work on security products (firewalls, IPS, anti-virus, so forth and so on). I'm not talking about the high-profile folks like Dave Aitel or Dan Kaminsky et al. I'm talking about the people who actually write the signatures and detection routines that go into AV releases, the people at Symantec or Kaspersky who do little else but analyze malware as fast as they can. The faster those people get information, the faster they can decide whether or not it goes into their product. Even if you don't like antivirus and the concept of "enumerating badness", you have to admit that these products do at least attempt to institute fixes and coverage faster than the vulnerable product vendor can.

The concept that "full disclosure is blackmail" is propagated by the tragically misinformed.

Re:Two basic problems (1)

dgatwood (11270) | about 7 years ago | (#20239239)

Sorry, but you're simply wrong. Most full disclosure mailing lists are open to the public, therefore there are black hats reading those lists. The fact that the majority of people are legitimate researchers is not relevant. It only takes one black hat to put a nail in the coffin of full disclosure. Disclosing it on those lists is like posting a paper on the Internet about how to make a "cool" new kind of explosive that can't be detected by the airlines. Sure, the security researchers can find out ways to detect it, but by that time, somebody has already blown up a plane and you're ten seconds away from Gitmo.

Disclosing to CERT, by contrast, ensures that ONLY certified antivirus vendors and people packaging and distributing the software to others (e.g. RedHat) get the information. That's why they disclose to vendors, then disclose to the public a couple of weeks later. A staged disclosure is simply the only means of disclosure that makes sense.

Re:Two basic problems (1)

99BottlesOfBeerInMyF (813746) | about 7 years ago | (#20238355)

Full disclosure results in announcing a bug not to the world, but only to people that are paying attention.

Yes, but the group that is paying attention includes the people with the greatest need to maintain security.

The "I want to do bad things" community has the information and is paying attention. Their community gets advance information before there even is a fix and they get to evaluate if it is worth their efforts to exploit it.

True, although sometimes this community already knows some of it.

The other group that gets to benefit from full disclosure is the media. Starved for news of any sort, bad news is certainly good news for them.

This is a good thing. First it informs the people. Second, it gives people a bad impression of vendors who have security holes and encourages them to move to more secure vendors. That's the free market improving security.

All in all, full disclosure is simply blackmail.

Nope. Offering to not release the vulnerability for cash is blackmail.

Unfortunately, no matter what the result is the user of the product affected gets all of the negative attributes.

Look, I'm a huge advocate of responsible disclosure. But sometimes immediate disclosure is the best option. Some companies don't see security as a concern at all. They ignore any bugs you submit to them and don't bother trying to prevent new ones. The best you can do in some of those cases is immediate, partial disclosure. In other cases, where there are better alternatives to the software in question or a workaround, immediate full disclosure is best. Eventually the company will lose enough customers because of it that they either reform or have too few customers to cause a large problem anymore.

I agree with you that litigation for damages is not a good way to clean up the problems. The real problem is lack of competition for the free market to deal with the problem. Legislation encouraging or even mandating open standards for file formats and protocols and cleaning up the travesty that is Microsoft's ongoing illegal actions would be the best way to fix the security problem in software today.

mommy-where-do-patches-come-from (0)

Anonymous Coward | about 7 years ago | (#20237705)

Well, sometimes a programmer and a program are very much in love dear.

Occasionally, software has bugs, and they try to practice "safe bugs" and keep them secret, but sometimes the secrecy breaks - that's called full disclosure. Due to this accident, 9 months later, a patch will be born.

no, this is responsible disclosure (4, Insightful)

Lord Ender (156273) | about 7 years ago | (#20237803)

This is not about full disclosure. This is responsible disclosure. Full disclosure would be if he had gone to Bugtraq before contacting the vendor. Responsible disclosure is where a responsible security researcher goes to the vendor FIRST, and only goes to the public after the vendor has had a reasonable amount of time to fix the problem.

Responsible disclosure allows responsible companies to get a fix before a flaw is used maliciously, but the researchers still get credit. With responsible disclosure everyone wins except black hats.

Full disclosure benefits black hats more than it does anyone else.

Re:no, this is responsible disclosure (1)

griffjon (14945) | about 7 years ago | (#20238015)

This is spot on. Many companies don't see the business interest in responding to security flaws until it hits full disclosure. It doesn't logically follow that we should just go straight to full disclosure. Let the company know that there's a flaw, and that you will disclose said flaw in some reasonable timeframe that balances the patch time with the severity of the flaw. Insightful companies will get to work patching; the rest will be gruff or nonresponsive ... and then you disclose and they get around to patching. Long-run, companies will learn to patch before disclosure to reduce bad press, or risk losing customers to companies that do, which will get better reputations as being more secure.

I agree with you for the most part, but... (1)

daveschroeder (516195) | about 7 years ago | (#20238491)

...responsible disclosure would also include:

- the timeline for full disclosure being given to the vendor (I don't know whether that did or didn't happen in this case), and

- reaching some mutual or community agreement on what a "reasonable amount of time to fix the problem" is for the problem in question.

That said, I definitely agree this wasn't "full disclosure", since the vendor was informed, but it wasn't necessarily responsible disclosure, either. To me, "responsible disclosure" implies that a patch is made available BEFORE the detailed disclosure of the vulnerability happens, and the discoverer/reporter and the vendor work in concert on the disclosure.

Then, the debate becomes: What if the vendor doesn't fix the problem in a reasonable amount of time? What is a "reasonable amount of time"? Is that amount of time necessarily the same for every issue in every product? (Arguably, no.) And so on.

Re:I agree with you for the most part, but... (2, Informative)

Lord Ender (156273) | about 7 years ago | (#20239007)

To me, "responsible disclosure" implies that a patch is made available BEFORE the detailed disclosure of the vulnerability happens
No. Wrong. It's not a matter of opinion. With responsible disclosure, a security researcher notifies a vendor before publishing his research. It absolutely DOES NOT imply that a patch is made available before the researcher publishes his findings. A vendor is still free to shoot itself in the foot under responsible disclosure.

The only gray area is determining just how much time is reasonable to release a patch. The standard accepted period these days seems to be between two weeks and two months. Mozilla's CEO would say "ten fucking days." Escaping part of an SQL string or recompiling code with a buffer overflow check doesn't take all that long to do.

If a vendor chooses to ignore a researcher, it does not change that fact that the researcher acted responsibly by providing the vendor with the courtesy of a "heads up" warning.
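The "escaping part of an SQL string" fix mentioned above really is a small change in most codebases. As a minimal sketch (a generic illustration in Python, not code from EZPhotoSales or any vendor discussed in this thread), the fix is usually just replacing string splicing with a bound parameter:

```python
import sqlite3

# In-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# Input crafted to break out of a quoted SQL string.
malicious = "x'; DROP TABLE users; --"

# Vulnerable pattern: splicing input into the query text lets it
# alter the statement itself.
#   query = "INSERT INTO users VALUES ('%s')" % malicious

# Fixed pattern: a bound parameter is treated strictly as data.
conn.execute("INSERT INTO users VALUES (?)", (malicious,))

stored = conn.execute("SELECT name FROM users").fetchone()[0]
print(stored == malicious)  # True: stored verbatim, never executed as SQL
```

For the buffer overflow case the comment mentions, the analogous quick mitigation is a compiler hardening flag such as GCC's `-fstack-protector`; neither change is a rewrite-scale effort.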

Re:I agree with you for the most part, but... (2, Insightful)

Bloodoflethe (1058166) | about 7 years ago | (#20239867)

Someone mod the parent up!

Re:I agree with you for the most part, but... (1)

daveschroeder (516195) | about 7 years ago | (#20240275)

No. Wrong. It's not a matter of opinion. With responsible disclosure, a security researcher notifies a vendor before publishing his research. It absolutely DOES NOT imply that a patch is made available before the researcher publishes his findings. A vendor is still free to shoot itself in the foot under responsible disclosure.

I didn't say it implied that; I said, "To me, "responsible disclosure" implies that a patch is made available BEFORE the detailed disclosure of the vulnerability happens". And it is a matter of opinion; it is NOT simply any notification of the vendor before full release of vulnerability details.

To this end, the length of time between vendor notification and disclosure becomes critical. Just as "vendor is still free to shoot itself in the foot under responsible disclosure", the discoverer/reporter is "free" to work with the vendor a little bit longer by waiting before disclosure. Two weeks? Two months? I'd say some could reasonably be longer. Some shouldn't be longer than 10 days. But it's often a lot more than just a purely technical fix: who gets to decide?

So, as I said, it all comes back to the time limit. I don't mean to say "responsible disclosure" ALWAYS requires a patch be made available by the vendor first; just that IDEALLY it does. If there is a 48 hour period between vendor notification and disclosure (assuming no patch), is that "responsible"? I'd say it's not. Two weeks? Maybe, depending on the nature of the issue. Two months? Almost absolutely. Six months or longer (as some have been)? Absolutely.

But it's not a clear cut issue, and it most certainly is a matter of opinion. "The only gray area", as you note, is actually a huge gray area and that's often the critical difference between a patch being available before disclosure and not: someone chooses to disclose in a timeframe that is still within zero to two months: is that acceptable? That's my point.

Re:I agree with you for the most part, but... (1)

Lord Ender (156273) | about 7 years ago | (#20240987)

I didn't say it implied that; I said, "To me, "responsible disclosure" implies that
This is a contradiction. The phrase "to me" prepended to a factual predicate does not change the meaning of the statement. If you aren't a native English speaker, and I am misunderstanding what you mean to say, I apologize.

It is every vendor's dream to have security researchers work as free consultants, hand-holding them through fixing security problems. The reality is that researchers are under no obligation to do anything other than publish directly to bugtraq--aka full disclosure.

If they give vendors lead-time on the publication, the researchers are being somewhat responsible--an act of charity. It is a continuum--some durations are more responsible than others, but they all fall under "responsible disclosure."

Waiting TOO LONG starts becoming less responsible. The users of a vendor's software would want to know if there are flaws the vendor is neglecting to fix.

Whether a disclosure is somewhat responsible or optimally responsible can be debated on a case-by-case basis, but it is all best described by the term "responsible disclosure."

Re:I agree with you for the most part, but... (1)

daveschroeder (516195) | about 7 years ago | (#20241205)

This is a contradiction. The phrase "to me" prepended to a factual predicate does not change the meaning of the statement.

No, it is not. It means that is what "responsible disclosure" implies to me, which is exactly what I said. That isn't necessarily what it means to everyone, nor am I claiming it's what it should mean to everyone, and I understand that.

That is what responsible disclosure means to me, and that is a valid viewpoint; it most certainly is a matter of opinion.

Tied to that, obviously, is the notion that the timeframe to wait cannot be unlimited. Some would err on the side of allowing the vendor more time to patch before disclosing. Some wouldn't. Exactly what this amount of time should be is what is up in the air, and very likely should be variable depending on the nature, scope, and impact of the problem, the complexity of the solution, the product, and so on.

The nature and notion of responsible disclosure isn't as clear cut as you make it out to be. A cursory examination of thoughts on responsible disclosure in articles, blogs, and elsewhere on the web would quickly confirm that.

Re:I agree with you for the most part, but... (1)

Lord Ender (156273) | about 7 years ago | (#20241483)

"Two plus two equals four" and "To me, two plus two equals four" are equivalent statements.

The word "responsible" refers ENTIRELY to the researcher, not to the vendor. Any definition of responsible disclosure which depends on whether or not a vendor chooses to act is therefore an invalid definition.

In your own cursory examination of articles and blogs, what term did you find the industry uses for disclosures in which the researcher gave a company advance notice of a publication, but not as much lead time as some would prefer? If the term "responsible disclosure" does not fit, which term does fit?

Re:no, this is responsible disclosure (3, Insightful)

xappax (876447) | about 7 years ago | (#20238943)

With responsible disclosure everyone wins except black hats.

Black hats win too. You ask 4cId_K1LL3R whether he'd like you to "fully" or "responsibly" disclose the 0day buffer overflow that he discovered a week ago and has been using to break into systems. I'm sure he'd far prefer that you keep the public in the dark about the issue for a month or so while the company leisurely gets around to patching it.

Black hats win, but software companies win most of all - which, after all is why software companies invented and promoted "responsible disclosure" in the first place. "Responsible" disclosure allows a company to improve their reputation and their software at little to no cost, thanks to volunteers who fix their security problems without telling the public. This, in turn, enables them to continue using the same irresponsible software engineering practices as they always have, with no impact on their bottom line.

Re:no, this is responsible disclosure (1)

Lord Ender (156273) | about 7 years ago | (#20239343)

Realistically, if two people discover a vulnerability independently, one of them is likely to know about it long before the other. In such cases, one additional month is a negligible amount of time compared to the overall time the initial discoverer had free rein of the affected systems.

Additionally, most companies can't immediately implement work-arounds on the day of a 0-day publication. They have to wait until a patch is released from a vendor. In such cases, the black hat has the same amount of time to hack the target systems, and a thousand other black hats have a window of opportunity to attack which they would not have had under responsible disclosure.

What you are saying is correct--but only for some rare and contrived scenario, and not when you consider the bigger picture.

Re:no, this is responsible disclosure (1)

Bloodoflethe (1058166) | about 7 years ago | (#20240021)

Additionally, the scenario given is typical of the /. responses I see:

This, in turn, enables them to continue using the same irresponsible software engineering practices as they always have, with no impact on their bottom line.
Compiling and testing doesn't find everything. It's easy to accuse an ace coder or a crack team of programmers of sloppiness when you don't know the people. Sure, some companies push an overly aggressive time frame, but not all of them do and (from what I can tell) not most of them, either. People make mistakes; give them a little slack, but not much. Responsible developers will have the fix out by the time you remember to put the full disclosure of the bug in your calendar (exaggerating here: 7-14 days is realistic, depending on the severity of the issue).

The Beatles said it best - (1)

Recovering Hater (833107) | about 7 years ago | (#20237825)

"Got a good reason for taking the easy way out..." - "Day Tripper"

How Software Works (5, Funny)

mfh (56) | about 7 years ago | (#20237841)

1. Bug is reported.
2. Secretly, a team of crack programmers (or programmers on crack) develop the patch.
3. The patch sits in a repository until public outcry.
4. Public outcry.
5. Patch released... LOOK HOW FAST WE ARE!

Is it just me or (1, Insightful)

Anonymous Coward | about 7 years ago | (#20237941)

have Slashdot stories become more openly biased? I wouldn't even call this a story; it's an opinion.

Current climate... (0)

Anonymous Coward | about 7 years ago | (#20238083)

I hope whoever decides to post a bug has enough common sense to remain anonymous.

Patchy code? (1)

Valdrax (32670) | about 7 years ago | (#20238151)

Coincidence? Probably not considering the vendor stated "..I'm not sure we could fix it all anyway without a rewrite." Looks like they could fix it, but just needed a little full-disclosure motivation.

They might not have been lying. Fixing it properly might have required a rewrite; instead, they may have been forced to ship a number of slapped-together kludges with Lord-knows-what side effects under extreme time pressure. I know what kind of code *I* write under that kind of constraint, and in every project I've worked on, other people coding under an unreasonable schedule produced code I wanted to completely scrap upon examination (but couldn't, thanks to my own time constraints and lack of testing resources). Frankly, I've had bugs I couldn't fix without major rewrites thanks to the fixes that other people have put in.

So, who knows? Maybe these quick patches will result in other vulnerabilities, or just nigh-impossible-to-maintain code. I've seen it before. While it's good to put a little pressure on to see bugs fixed, I wouldn't say public disclosure is a secret recipe for correct and functional software when it works by embarrassing a company into getting a fix -- ANY fix -- into place ASAP.

I do realize, though, that the alternative may be the company deprioritizing the bug and never fixing it at all. Companies are lazy that way. It seems like a lose-lose scenario.

False assumptions? (2, Interesting)

mmeister (862972) | about 7 years ago | (#20238543)

There seem to be some false assumptions here. It is assumed the company did not look at the bug and potential fixes until after it was "fully disclosed". If they released a fix a couple of days later, the more likely scenario is that they had been looking at the problem and assessing what options they had to address it.

Ironically, the full disclosure probably forced them to put out the solution before it was ready, leaving the risk of new bugs. IMHO, forcing a company to rush a fix is not the answer. If you work for a real software company, you know that today's commercial software often has thousands of bugs lurking, though many are edge cases that are often more dangerous to fix than to leave alone (esp. if there is a workaround).

There should be enough time given to a company to address the issue. Some can argue whether or not 5 months is enough time, but that's a different argument. I think forcing companies to constantly drop everything for threat of full disclosure will end up doing more harm than good.

Re:False assumptions? (2, Insightful)

Minwee (522556) | about 7 years ago | (#20239243)

If the company was indeed looking at the problem, then they lied about it. Their response to being notified of the problems, as described in the article, was to say "Gee, we're not going to bother fixing that. Instead we're going to work on a new product and just sell it as an upgrade to everybody."

When someone tells you flat out that they aren't going to do anything, why is assuming that they aren't doing anything false?

Re:False assumptions? (1)

TubeSteak (669689) | about 7 years ago | (#20239337)

There seem to be some false assumptions here. It is assumed the company did not look at the bug and potential fixes until after it was "fully disclosed".
I don't think you RTFA.
He told them about the problems.
Their response: We're not fixing it because we have a new client coming up.
5 months later, no new client, so he went public.

If you read the other article linked in the summary, it seems like they could have trivially done a lot to secure things server side, like not making the password hash file web-readable and not allowing user-uploaded scripts to run on the server.
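For the curious, both of those server-side mitigations are cheap if the gallery runs under Apache. This is only a sketch under that assumption -- the file and directory names below are hypothetical, not taken from EZPhotoSales (shown in Apache 2.2 syntax, in httpd.conf or a vhost block):

```apacheconf
# Block all web access to the password hash file
# ("users.dat" is a made-up name; substitute the app's real file)
<Files "users.dat">
    Order allow,deny
    Deny from all
</Files>

# Treat everything under the upload directory as inert data:
# uploaded .php/.cgi files get served as plain files, never executed
<Directory "/var/www/gallery/uploads">
    Options -ExecCGI -Includes
    RemoveHandler .php .php3 .phtml .cgi .pl
    php_admin_flag engine off   # requires mod_php; not valid in .htaccess
    AllowOverride None
</Directory>
```

The point isn't the exact directives; it's that neither hole needed a product rewrite to close.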

Re:False assumptions? (2, Interesting)

mmeister (862972) | about 7 years ago | (#20240169)

Sorry, I was trying to make a more generic argument, and clearly flubbed that. My original point is that we will likely do more long-term damage if all we do is bully companies. Believe it or not, there is more going on than just folks sitting around waiting to fix bug reports that come in from some random guy. And smaller companies don't have a team on standby to chase down reported vulnerabilities.

I didn't see the original email he sent to the company. Nor did I see mention of follow-ups to try to push them. That makes a difference as well, because I've seen plenty of "your stuff doesn't work" bug reports from folks.

What fully disclosing probably did was put the company in fire mode. They had to stop everything else to attend to this. This can really hurt smaller companies long term. Most can't afford teams that sit around waiting to attack these flaws.

I do think full disclosure can be an important tool when you've tried again and again to get an important security issue addressed. But it should never come as a surprise to the company. There should be communication with the company throughout the process from the first report to the alert that you will be making this public in a month's time.

I think it's more harmful than not to play "gotcha" with companies.

Now mind you, I'm not sure their one time fee model will last all that long -- but that's a separate issue.

Several days later a patch appears. (3, Insightful)

Trillan (597339) | about 7 years ago | (#20239201)

"Coincidence? Probably not considering..."

Yeah, everyone knows that patching security holes is an instant process. What other explanation could there possibly be? The public found out about the bugs, and the vendor waved a magic wand, and presto-changeo, they were fixed.

Okay, now let's be real here.

That the patch appeared almost immediately afterward is the surest sign that the vendor was already working on the fixes. It probably also indicates the vendor wasn't confident the fixes were finished, and rushed to get them out after only a couple of days of public disclosure.

So enjoy your half-baked patch.

True economics (1, Interesting)

Anonymous Coward | about 7 years ago | (#20239331)

There seems to be this strange notion that blackhats benefit from full disclosure.

The thinking seems to be something like this: when a bug is disclosed, blackhats that were unaware of the bug become informed and have a window of exploitation until the bug is patched.

This seems absurd to me. As soon as the bug is disclosed, users become aware and can immediately block the vulnerability. If there is no other solution, they could at least stop using the vulnerable software. So the window of exploitation is the amount of time from the disclosure to widespread awareness and shutdown of buggy software.

Some would say that it is over-simplistic to think that you can just shut down vulnerable software. Some might claim that it just isn't practical. I think what this argument really means is that it could be very costly for some enterprises to shut down vulnerable systems. The system administrators would have to weigh the costs of shutting down against the costs of being exploited.

Full disclosure is really just an economic issue. Full disclosure highlights the costs of using buggy software. Distributors of more buggy software may not appreciate the reflection on the total costs of using their software. Some businesses and people may not appreciate the forced realization of the total costs of the software that they use.

Some people may tweak their bathroom scales to make them feel better about the total costs of their dietary habits. But they shouldn't rant about standard, untweaked scales being unethical in their methods of disclosure.

If the truth about the software you develop or use is uncomfortable, don't try to cover it up. Hiding your eating disorder doesn't solve the problem.

You can't make the best economic decisions unless you recognize the true economics of your software choices.

morality (2, Funny)

Anonymous Coward | about 7 years ago | (#20240111)

Forget morality for a minute... Making the bigwigs at some major company cry out "OH SHIT" in unison is one of the few sources of free entertainment I have left.