
Researcher Discloses New Batch of MySQL Vulnerabilities

samzenpus posted about 2 years ago | from the protect-ya-neck dept.

Open Source 76

wiredmikey writes "Over the weekend, a security researcher disclosed seven security vulnerabilities related to MySQL. Of the flaws disclosed, CVE assignments have been issued for five of them. The Red Hat Security Team has opened tracking reports, and according to comments on the Full Disclosure mailing list, Oracle is aware of the zero-days but has not yet commented on them directly. Researchers who have tested the vulnerabilities themselves state that all of them require that the system administrator failed to properly set up the MySQL server, or the firewall installed in front of it. Yet they admit that the disclosures are legitimate and need to be fixed. One disclosure included details of a user privilege elevation vulnerability which, if exploited, could allow an attacker with file permissions to elevate their privileges to those of the MySQL admin user."


You deserve it! (3, Insightful)

Anonymous Coward | about 2 years ago | (#42169549)

When you leave 3306 open on the internet.

Re:You deserve it! (0)

Hizonner (38491) | about 2 years ago | (#42170607)

You deserve it when you run crappy software that needs a firewall in front of it to be minimally safe.

Especially when that software has to enforce internal permissions and boundaries.

Sorry, but I'm pretty sick of these excuses for garbage code.

Re:You deserve it! (1)

Dwonis (52652) | about 2 years ago | (#42171599)

Indeed. Firewalls break end-to-end connectivity and incentivise a protocol-encapsulation arms race that is bad for the Internet. It's 2012; you have no business writing more code that speaks the Internet Protocol unless it can actually handle being on the Internet.

Re:You deserve it! (1)

Anonymous Coward | about 2 years ago | (#42172285)

Car analogy time! Ever get stuck behind a tractor driving 5 mph, carrying a fully loaded manure spreader dripping all over? Then you try to pass him but all the fumes have made you high and you crash your car. You wake up and find yourself chained up in a farmer's sex dungeon and he proceeds to sodomize you for 3 months until you finally die of an impaled rectum.

Well, maybe your car shouldn't be on the road either!

My God (1)

Safety Cap (253500) | about 2 years ago | (#42172821)

You wake up and find yourself chained up in a farmer's sex dungeon and he proceeds to sodomize you for 3 months until you finally die of an impaled rectum.

You describe perfectly what a seasoned, experienced developer feels like when he (or she) has to wade through a typical *MP "application" in order to fix and extend it.

Proof Cthulhu is real: *MP kiddies believe view logic is Best Logic.

Re:You deserve it! (0)

Anonymous Coward | about 2 years ago | (#42175623)

You deserve it when you run crappy software that needs a firewall in front of it to be minimally safe.

Especially when that software has to enforce internal permissions and boundaries.

Sorry, but I'm pretty sick of these excuses for garbage code.

Hey Hizonner,

Hopefully you don't work in IT, because if you did I suggest you find another line of work. Your attitude about firewalls would put your entire company at risk. DB servers should never be left open to the internet. They are designed to be accessed via applications not by everyone on the planet that wants a login to your corporate database.

Joe

Re:You deserve it! (0)

Anonymous Coward | about 2 years ago | (#42175815)

Did you hear that whooshing sound? No? Well, that was the whole point of the GP flying right over your head.

Yes, you should have a properly configured and restricted firewall in place.

BUT, the lack of one does not excuse bad protocols and implementations on the part of the software lingering behind that firewall. Besides, a firewall does nothing to protect you against a malicious employee or someone that has otherwise penetrated your "protected" network through some other means.

There is no excuse for shitty, vulnerable software. If, as a developer, you insist bugs in your code can be "fixed" by a firewall, you deserve to be dragged in front of the nearest tree and strung up by your nuts.

Re:You deserve it! (1)

ls671 (1122017) | about 2 years ago | (#42177077)

You do not need a firewall, just listen on local IP addresses, not the public Internet one ;-)
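A minimal sketch of that idea in Python (binding a listener to the loopback address only, so remote hosts cannot reach it; the port is chosen by the OS here and is purely illustrative):

```python
import socket

# Bind only to the loopback interface; connections from other hosts
# never reach this socket, no firewall rule required.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
srv.listen(1)

host, port = srv.getsockname()
print(host)  # 127.0.0.1 -- only local clients can connect
srv.close()
```

For MySQL itself the equivalent is the `bind-address` setting in my.cnf, pointed at 127.0.0.1 or another internal address instead of 0.0.0.0.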

Re:You deserve it! (0)

Anonymous Coward | about 2 years ago | (#42180179)

Isn't that like a firewall rule, then?

Re:You deserve it! (0)

Anonymous Coward | about 2 years ago | (#42178113)

You can close the port to direct connections, but if you host crappy stuff that allows SQL injections then you are just as fucked.
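The classic injection the parent alludes to, sketched in Python with sqlite3 standing in for MySQL (the table and values are made up; the principle is identical for any SQL backend):

```python
import sqlite3

# In-memory toy database standing in for a real MySQL instance.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

evil = "' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the query.
leaked = db.execute(
    "SELECT secret FROM users WHERE name = '%s'" % evil).fetchall()

# Safe: a bound parameter is treated as data, never as SQL.
safe = db.execute(
    "SELECT secret FROM users WHERE name = ?", (evil,)).fetchall()

print(leaked)  # [('s3cret',)] -- injection succeeded
print(safe)    # []            -- injection neutralized
```

No closed port helps here: the web application itself hands the attacker's string to the database.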

At the risk of getting modded down... (0, Troll)

Viol8 (599362) | about 2 years ago | (#42169657)

... is someone who spends their working day just trying to poke holes and find vulnerabilities in software a "researcher"? Glorified tester maybe, but that's about it. I somehow don't think these people hang around in white labcoats in clean rooms with clipboards looking at the latest results. More like some fat guy slouching with a pizza, running yet another penetration program that someone else wrote.

Re:At the risk of getting modded down... (1)

Anonymous Coward | about 2 years ago | (#42169763)

Agreed. If it weren't for 'researchers' like Slouches-With-Pizza, here, we wouldn't have to worry about computer crime at all.

Re:At the risk of getting modded down... (2)

mcgrew (92797) | about 2 years ago | (#42169779)

You don't need a lab coat or even a lab to research.

Re:At the risk of getting modded down... (5, Funny)

ducomputergeek (595742) | about 2 years ago | (#42170061)

No, but you need a bow tie to be the doctor...because bow ties are cool.

Re:At the risk of getting modded down... (-1)

Anonymous Coward | about 2 years ago | (#42172329)

"The problem with your [ducomputergeek's] face is that it's ugly" - Everyone.

Re:At the risk of getting modded down... (5, Insightful)

K. S. Kyosuke (729550) | about 2 years ago | (#42169787)

is someone who spends their working day just trying to poke holes and find vulnerabilities in software a "researcher"?

Yes. Much like people trying to poke holes in other people's scientific research are scientists.

Re:At the risk of getting modded down... (0)

Anonymous Coward | about 2 years ago | (#42170171)

is someone who spends their working day just trying to poke holes and find vulnerabilities in software a "researcher"?

For sure, he's searching for something.

Re:At the risk of getting modded down... (-1)

Anonymous Coward | about 2 years ago | (#42169819)

I poked your mom's hole with my 10-inch missile. Then I blew a big sloppy load in her snatch while your dad subsequently lapped it up like a good little cuck.

Re:At the risk of getting modded down... (0)

Anonymous Coward | about 2 years ago | (#42169885)

Actually, it's more like the person writing the penetration programs.

Re:At the risk of getting modded down... (5, Insightful)

Anonymous Coward | about 2 years ago | (#42169905)

If it's so easy, why aren't you doing it and making money turning in Chrome flaws to Google? Or Firefox flaws to Mozilla? Or IE flaws to Microsoft? They all pay for real vulnerabilities.

The answer: It's not easy. There is no magic "penetration program". It requires detailed knowledge of processors, compilers, and software architecture. It requires skills that you won't learn in most colleges (R/E). It requires patience. It requires methodical documentation to be good at it. And at the end of the day, there is absolutely zero guarantee that you will find any vulnerabilities or that a vulnerability even exists.

Re:At the risk of getting modded down... (0)

Anonymous Coward | about 2 years ago | (#42176403)

Unfortunately the companies that you mention do not pay anything close to what you can sell a weaponized 0day for to a state spy agency or one of their proxies.

Perhaps because its boring? (1)

Viol8 (599362) | about 2 years ago | (#42177273)

Creating something is a lot more fun than picking it apart.

Re:At the risk of getting modded down... (-1)

Anonymous Coward | about 2 years ago | (#42169929)

... is someone who spends their working day just trying to poke holes and find vulnerabilities in software a "researcher"? Glorified tester maybe but thats about it. I somehow don't think these people hang around in white labcoats in clean rooms with clipboards looking at the latest results. More like some fat guy slouching with a pizza running yet another penetration program that someone else wrote.

Don't worry, I'm certain a few mods will come by and claim (via ACs) that their "job" is exactly that, refuse to elaborate on how they get paid for it, make thinly-veiled threats if you press the question, and conveniently "correct" your facts for you by downmodding your clearly flawed facts.

Re:At the risk of getting modded down... (1)

Anonymous Coward | about 2 years ago | (#42169945)

... is someone who spends their working day just trying to poke holes and find vulnerabilities in software a "researcher"? Glorified tester maybe but thats about it. I somehow don't think these people hang around in white labcoats in clean rooms with clipboards looking at the latest results. More like some fat guy slouching with a pizza running yet another penetration program that someone else wrote.

So you are unwilling to qualify a fat guy slouching with pizza dripping down his face who finds 7 vulnerabilities in MySQL during the weekend as a "researcher", and give the title to a Monday-Friday 9:00AM-5:00PM labcoat wearer who (probably hates his job and) believes MySQL is secure? Why?

FWIW the fat guy also found 4 other vulnerabilities in other software.

If running some other person's software to find these vulnerabilities is so damn easy, how come the guys with the fancy labcoats didn't find them sooner?

Re:At the risk of getting modded down... (1)

lennier (44736) | about 2 years ago | (#42173419)

If running some other person's software to find these vulnerabilities is so damn easy, how come the guys with the fancy labcoats didn't find them sooner?

That's the question that the survivors picking their way through the rubble of the Internet will be asking in a few years.

It's not like these vulnerabilities are hard to find, as evidenced by the constant flood of discoveries by tiny private research groups. Yet our current best-of-breed million-dollar industrial-strength software development industry swears it's absolutely impossible / impractical to do it at any cost. And the academic software engineering community apparently agrees.

Something does not add up here. It should not be possible for these low-budget hackers to beat the entire world's programming experts at their own game. And yet, here we are.

What's the explanation?

Re:At the risk of getting modded down... (0)

Anonymous Coward | about 2 years ago | (#42175895)

What's the explanation?

The explanation is that developers tend to be rather inept at spotting flaws in *their own* code. The developer typically has an expectation of "sunny day" inputs and acts accordingly. In C/C++ code, it's disgusting how often you see lack of bounds checking (probably the #1 source of exploitable flaws) because the developer was lazy, ignorant, or incompetent. People, developers included, are too trusting by default. The danger comes in software when developers trust input from users, the network or even internally. If you're developing a library, your public interface should do boundary checking, precondition/postcondition checking; even in release builds. In debug builds, your internals should be more pedantic and assert if their conditions are not met (and hopefully provide meaningful information for debugging along the way). But, to fall back to undefined, unspecified or implementation-specific (without properly understanding the implementations you're utilizing) behavior is abhorrent.

Couple the above with often ill-defined or ambiguous requirements, lack of sufficient code reviews, the need to rush to market, cutting the QA department because they cost too much, and you can very quickly come to see how so many flaws squeak through to production.

While I applaud Google's quick turnaround time on some of the zero-days of which they've been made aware, it also raises the same question: with a 24-hour turnaround, how thorough are those fixes? What's the quality of the fix? Is it a quick hack that just blocks that particular attack? Or is it a well-thought-out solution that actually nullifies that attack vector? I don't know, so I can't pass judgement, but I do wonder.
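The boundary-checking discipline described above can be sketched as follows (Python; `crop` is a hypothetical library entry point, not anything from MySQL):

```python
def crop(data: bytes, start: int, length: int) -> bytes:
    """Return a slice of `data`, validating every input at the public boundary."""
    if not isinstance(data, (bytes, bytearray)):
        raise TypeError("data must be bytes")
    if start < 0 or length < 0:
        raise ValueError("start and length must be non-negative")
    if start + length > len(data):
        raise ValueError("requested range exceeds input size")
    return bytes(data[start:start + length])

print(crop(b"hello world", 6, 5))  # b'world'

# Hostile input fails loudly at the boundary instead of corrupting state:
try:
    crop(b"hello", 3, 99)
except ValueError as e:
    print(e)  # requested range exceeds input size
```

The point is that the "rainy day" cases are handled at the interface, in release builds too, rather than assumed away.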

Memory Safe Languages (0)

Anonymous Coward | about 2 years ago | (#42177617)

About 50% of listed exploits (in things like the CVE database) rely on a stressed/lazy/incompetent programmer using C/C++: buffer overruns, double free()s, dereferencing bad pointers, etc. I would argue that, due to human and organizational nature, we cannot expect human programmers to eliminate these issues. There will always be a boss around with an agenda of "delivering product to superior management on time, on budget". That means some poor developers cannot be as diligent as they should be.

But these 50% of CVE issues could be dealt with using technology: automatic bounds checking, strong typing (no more funny insecure casts), reference-counted memory management (much more to my taste than garbage collection, as it happens at defined points in time and is incremental/controlled by the programmer). Of course now comes the "C++ Expert" crowd crowing that "this is inefficient and we are so perfect, we don't need it either". Neither argument holds water.

First, even experienced guys run their code on valgrind, boundschecker or Purify, because they make mistakes, too. Secondly, even simplistic implementations of bounds checking and refcounting do typically reduce runtime efficiency by about 10 to 20%. Below you will find the URL of a language of mine which proves that. But even that moderate inefficiency could be dramatically reduced by compilers which can prove that a certain check must be done only once instead of for each loop iteration.

http://freecode.com/projects/sappeur-compiler
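For contrast, here is what the automatic bounds checking described above looks like in a checked runtime (Python here, purely as an illustration; any memory-safe language behaves similarly):

```python
buf = [0] * 4

# An out-of-range write raises a defined error instead of silently
# scribbling over adjacent memory, as unchecked C code would.
try:
    buf[10] = 1
except IndexError as e:
    print("caught:", e)

# The buffer is untouched; no state was corrupted by the failed write.
print(buf)  # [0, 0, 0, 0]
```

The cost of the check is what the 10–20% figure above refers to; the benefit is that an entire class of CVEs becomes a clean runtime error.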

Re:Memory Safe Languages (0)

Anonymous Coward | about 2 years ago | (#42178327)

Automatic bounds checking, strong typing (no more funny insecure casts), reference-counted memory management (much more to my taste than Garbage Collection, as it happens at defined points in time and is incremental/controlled by programmer). Of course now comes the "C++ Expert" crowd crowing that "this is inefficient and we are so perfect, we don't need it either".

Bullshit. Any competent, let alone expert, C++ programmer will be using those things all the time.

Re:Memory Safe Languages (0)

Anonymous Coward | about 2 years ago | (#42178507)

Your "my shit doesn't stink" mentality is the reason we still have buggy insecure code.

Re:Memory Safe Languages (0)

Anonymous Coward | about 2 years ago | (#42179403)

So let's follow your thought process here:

1) the GGP recommends a bunch of coding techniques and bashes C++ programmers for not using them
2) I point out that decent C++ programmers do in fact use those techniques
3) you somehow conclude that I think the code I write is perfect, and blame me for personally ruining the entire IT industry

Yeah, you're a fucktard.

Om nom nom (2)

Cid Highwind (9258) | about 2 years ago | (#42170147)

The troll eats well today...

Re:At the risk of getting modded down... (1)

NotBorg (829820) | about 2 years ago | (#42170333)

If they had said "hacker" there would have been another debate about whether that word was used correctly. Neither debate matters, because everyone but the stupid pedant gallery understood what was meant, and language is a malleable medium that relies heavily on context.

Also, the stereotyping certainly doesn't make your argument stronger. It simply makes you look like a clueless outsider who gets his bearings from Hollywood and Internet memes.

Re:At the risk of getting modded down... (0)

Anonymous Coward | about 2 years ago | (#42178477)

I don't think you truly understand the amount of work that goes into creating some of these exploits. The guy who released this stuff is a fucking legend. I doubt there are many people in the world that find vulnerabilities at the rate he does, let alone publicly release PoC.

Just look at the list of public exploits he has released.

http://1337day.com/author/1154
http://1337day.com/author/1632

If you think this is some 19 year old kid living in his moms basement sucking down pizza, you are terribly, terribly wrong. This dude could most likely code circles around 99.9% of the people reading this.

Re:At the risk of getting modded down... (1)

GameboyRMH (1153867) | about 2 years ago | (#42179349)

Hey you know what's under a labcoat? A pizza-stained shirt. From slouching and eating it while running an experiment with someone else's discoveries.

Researchers use responsible disclosure (1)

raymorris (2726007) | about 2 years ago | (#42170001)

"Researcher", inasmuch as it implies a level of professionalism, should be reserved for those with a modicum of professionalism, such as responsible disclosure. I could have had my 15 minutes of fame with a vulnerability I discovered that could have been used to take down Wikipedia and many other sites, but instead I reported it through the proper channels so it was fixed, not exploited. Perhaps "security attention-seeker" would be a better term.

Re:Researchers use responsible disclosure (1)

greg1104 (461138) | about 2 years ago | (#42170975)

A look at the twitter feed [twitter.com] of the submitter and his associated web site--"Farlight Elite Hackers Legacy"--does not give the impression of responsible disclosure. But this is the same guy who released the 2011 “Apache Killer” [tallpoppygroup.com]; calling attention to problems with exploit code is this guy's method. I'd rather see that than no disclosure at all. He does appear to be a professional penetration tester at work, who does things like speak at conferences [wordpress.com] on his methods too.

Re:Researchers use responsible disclosure (2)

Anonymous Coward | about 2 years ago | (#42171041)

As someone whose product he released a vulnerability for this weekend, and the person responsible for the security of the product in question...

I WOULD VERY MUCH LIKE IF HE WOULD NOTIFY US (the affected vendor) AT LEAST AT THE SAME TIME HE PUBLICLY RELEASES IT!!!

We found out via support cases coming in from clients who were reading FullDisclosure before I got into the office to check my morning emails. NOT COOL!

Re:Researchers use responsible disclosure (1, Insightful)

Cid Highwind (9258) | about 2 years ago | (#42171225)

We found out via support cases coming in from clients who were reading FullDisclosure before I got into the office to check my morning email

...and you think it's somehow reasonable for a "person responsible for security" to sit back and wait for vulnerability reports to find their way through product support channels, instead of monitoring FullDisclosure?

Re:Researchers use responsible disclosure (0)

Anonymous Coward | about 2 years ago | (#42171447)

Well, it was after 11PM local time, I'm sorry if I sleep.

Oh, wait, I am not.

Re:Researchers use responsible disclosure (1, Insightful)

Cid Highwind (9258) | about 2 years ago | (#42171571)

Well, it was after 11PM local time, I'm sorry if I sleep.

Now you want advance notice based on your timezone... this is called "moving the goalpost".

Originally you said: "I WOULD VERY MUCH LIKE IF HE WOULD NOTIFY US (the affected vendor) AT LEAST AT THE SAME TIME HE PUBLICLY RELEASES IT!!!"

There's an easy solution to that:
1: Subscribe to FD.
2: There, now you're being notified at the same time as the public.

Re:Researchers use responsible disclosure (0)

Anonymous Coward | about 2 years ago | (#42172205)

I am subscribed to FD, and I have a rule set up to flag anything with our company or product names, label them high priority, etc. Since I have a life, a body that needs sleep, etc., response was delayed until the first of either 1) tech support cases come flooding in, or 2) I wake up and check my emails. Still, he didn't even make an attempt to contact us, instead just posting everywhere he could. We had a workaround within an hour of finding out about the vulnerability, and we are in the process of deploying the patch to customers, so a 24-hour turnaround for us.
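The kind of mail rule described here is simple enough to sketch in Python (the watched names below are placeholders, not the poster's actual products):

```python
# Hypothetical watch list; in practice this would hold your own company
# and product names, matched against incoming mailing-list traffic.
WATCHED = ("acmedb", "acme server")

def is_high_priority(subject: str) -> bool:
    """Flag a mailing-list subject that mentions any watched name."""
    s = subject.lower()
    return any(name in s for name in WATCHED)

print(is_high_priority("[FD] Multiple vulnerabilities in AcmeDB 5.1"))  # True
print(is_high_priority("[FD] Unrelated advisory"))                      # False
```

A real deployment would hang this off procmail, Sieve, or the mail client's filter engine and page the on-call person instead of printing.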

Still, I consider it VERY bad form to not at least attempt to notify us; we've had customers compromised, something which could have been easily avoided, or at least the risk reduced, with an organized disclosure, either by contacting us or by contacting a CERT team or similar. We have no problem giving credit, and agree that public disclosure is the correct thing to do, as it is the best way to enhance the security of the user base. Despite that, damn, at least send mail to support@company.tld when you announce an exploit and provide working exploit code against their product! Ideally, give them a heads-up and the code beforehand so they can verify and develop a patch! Industry standard for responsible disclosure used to be 3 months advance notice, or when a fix is available and agreed upon (if sooner), which is completely understandable for companies with huge products and a lack of ability to quickly pinpoint problems. Honestly, if we would always be given 1 week notice, that would be great for how we work. As is, a lot of overtime here, but we got it done in 24 hours, instead of including it in our planned release later this week.

TL;DR:
security research... good
public disclosure... good
giving credit... good
this disclosure... NOT GOOD!

Re:Researchers use responsible disclosure (0)

Anonymous Coward | about 2 years ago | (#42172605)

Don't know about your code, but I don't think this person wrote it or released it. If you released bug-free code there wouldn't have been an issue. There was public disclosure, which according to you is good. You wanted individual notification prior to the public, or at least simultaneous. If public disclosure is good, then any timing on that notification is a matter of opinion as to whether it is "responsible". Given public notification, you have access to that notification; any additional notification is "good will" and, frankly, unnecessary. He did you a favor by publicizing the vulnerability. Your company is responsible for the bug, not him. End of story. Save your righteous anger for your own QA (and "C" code for using null-terminated strings...)

Re:Researchers use responsible disclosure (0)

Anonymous Coward | about 2 years ago | (#42173535)

If you released fairy dust and unicorn eggs there wouldn't have been an issue.

That's about as realistic as your version....

Re:Researchers use responsible disclosure (1)

Cid Highwind (9258) | about 2 years ago | (#42173527)

THREE MONTHS?!?

That's insane. Maybe it was appropriate in the 1980s when "security researchers" and "black-hat hackers" were sets of bored grad students with slightly different moral compasses, but now with various governments and criminal enterprises buying up exploits, one should probably just assume that anything disclosed publicly is also for sale from another "vendor" or already packaged into as-yet-undetected malware.

As for sleep, it sounds like somebody has created an expectation of 24x7 on-call security support, without funding positions for more security people. Don't let the company make that your problem!

Re:Researchers use responsible disclosure (0)

Anonymous Coward | about 2 years ago | (#42173645)

I'd have to say, the workaround he provided probably helped in fixing them. Maybe, despite his poor venue for releasing it, offer him a full-time job doing just this. He obviously is very dedicated, likes his work (even if he probably has an arrogance that he found something your teams missed, or in this case 7 somethings), and seems to be better than the rest of your team. I haven't reviewed these, as I didn't read the article and just started reading the comments (because ones like this are entertaining, to say the least), but honestly you have to consider that maybe he didn't want to take the time to contact you, instead opening up the information for would-be hackers and Oracle to read at the same time.

Tell me, do you think that if he had sent these items to support@oracle.com or support@mysql.com an hour, a day, or even a week before releasing on FD, A) he would have gotten as much credit, B) they would have been taken seriously, or read by someone who mattered instead of someone in India who sat on them for weeks on end, and C) if not, I go back to A. They wouldn't be as big a deal, and he would have been forgotten in obscurity instead of having something he could put on his resume (or, if he runs a company, a proof-in-the-pudding example that his company is on the front line and is a first responder to bugs). Lastly, regarding your post on /.: it is bad form to bitch as an employee in this sector on a public forum. I'd be equally as pissed, but you are a spokesman for your company when you claim to work for them, which is why you put @company.tld instead of @oracle.com and posted as an AC. Lucky you don't work under me; I can typically discern my employees' writing styles and reactions, and, simply said, you probably wouldn't be fired, but a second incident after the first discussion would lead me to firing you.

In the future, don't claim to work for the company; instead state "as a worker in this field, I find it bad form for someone to post in this method". Because, as a worker in this field, I also find it bad form not to give at least a minor heads-up to the company. Heck, I'd expect a heads-up with a bit of a king's ransom attached to it (given that he provided one of many, most likely the most important), in which case I would take it to my VP or executive and work with the individual in question.

Though, going back to my point B, have you contacted your support desk to see if you were alerted beforehand by him, or in any fashion? Did you check to see if someone tossed it in the "not credible" box and you were never alerted?

TL;DR
If you actually work for the company, don't post like this. It's bad PR.
If you are a manager in this format, are you sure that you were not informed? And if you were not, can you seriously blame the guy?
I agree, he should have contacted the company AND demanded a ransom for the information before going to FD.

Re:Researchers use responsible disclosure (1)

lennier (44736) | about 2 years ago | (#42173531)

There's an easy solution to that:
1: Subscribe to FD.
2: There, now you're being notified at the same time as the public.

There's an even easier solution:

0. Don't introduce security vulnerabilities into your own product to start with.

We have compilers and testing suites for a reason. Use them. And if your language and testing toolchain are insufficient to the job of making sure your product does not endanger the entire Internet, then use a better one. If your architecture doesn't allow you to write provably secure code, write a better one.

It's 2012. There's no excuse for this anymore. Do it right, or don't put your code on the Internet.

Re:Researchers use responsible disclosure (1)

philip.paradis (2580427) | about 2 years ago | (#42176179)

The first rule of software is that all software beyond the barest of trivial examples will have bugs. Compilers are software, and have the same long and sordid history of bugs. Since compilers have been mentioned specifically, you might be interested in the classic work Reflections on Trusting Trust [bell-labs.com] (it was apparently written by a guy who knows a thing or two about the topic, some Ken Thompson fellow). The same goes for test suites. In many cases, bugs translate to security vulnerabilities. In some cases, perfectly rational behavior demonstrated by entities known as programs results in unexpected behavior when they are made to exchange data. This phenomenon is referred to as "novel outcomes" in some circles, and "wow, that's some fucked up shit" in others. There is a reason the field of information security is as broad as it always has been, is, and always will be.

Your post proves you have never worked as a professional developer, or for an organization where your role was deeply connected to systems or development work. Heck, it proves you've never worked on any major open source project either, for that matter. I suppose we should all stop using anything resembling software immediately to prevent the planet from caving in under the weight of its own failure. Or perhaps you should take your obviously extremely advanced software engineering skills and produce the one true invulnerable platform for everyone, one layer and application at a time.

As Bruce Schneier famously said, "security is a process, not a product." That process never ends, and involves complexities I believe could be delicately framed as things that aren't exactly your area of expertise. That's okay, though; you can always start educating yourself [schneier.com] immediately. We're all looking forward to your next batch of brilliant revelations on infosec strategy.

Re:Researchers use responsible disclosure (3, Insightful)

greg1104 (461138) | about 2 years ago | (#42171291)

If Oracle doesn't have someone reading FullDisclosure every day, including the weekends, you deserve to be embarrassed and shamed by your customers. Hint: someone from the MariaDB team was adding to the discussion [seclists.org] already by Sunday.

Re:Researchers use responsible disclosure (0)

Anonymous Coward | about 2 years ago | (#42171509)

I'm not with Oracle, but rather one of the dozen-or-so-other companies he posted vulnerabilities with on Sunday.

Re:Researchers use responsible disclosure (1)

Hizonner (38491) | about 2 years ago | (#42172099)

I used to handle ALL of these issues for a very large vendor. Yes, people did wake me up over things, until I wised up that my employer's problems during my off hours were in fact my employer's problems, not mine, and that my employer as an institution didn't give a fuck about anything but saving face.

I quit about the time vendors started trying to dodge responsibility by talking about other people's "responsible disclosure".

You are not entitled to know about a problem before those who are actually affected (hint: that's the users, not you). Your company's unwillingness to staff 24-hour incident response does not entitle it to special consideration. Maybe early disclosure to you would help the customers you've already failed... if you could turn a patch around in, say, a week. So few companies do that that it's not really worth the discoverer's time to think about the possibility. The usual "responsible disclosure" demand for weeks or months to accommodate internal laziness, bureaucracy, incompetence, and spin control is ridiculous and helps nobody but the vendors themselves.

Any vulnerability could very well already be known to some bad guys somewhere... and most vendors leak the information like crazy once they have it, long before they get their patches out. So waiting around for vendors just creates more risk. It's end user self-help or nothing.

Your company really lost all right to act wronged the minute it released the buggy code. If somebody wants to give you a few extra hours, I won't fault that person, but I won't say it's good, either.

Toughen up. Maybe you should try to release fewer bugs.

Re:Researchers use responsible disclosure (0)

Anonymous Coward | about 2 years ago | (#42173235)

Your company really lost all right to act wronged the minute it released the buggy code.

Whereas you have a 100% perfect track record in that area, I take it?

Re:Researchers use responsible disclosure (1)

Hizonner (38491) | about 2 years ago | (#42173335)

Nope, but I didn't whine about it.

Re:Researchers use responsible disclosure (1)

lennier (44736) | about 2 years ago | (#42173481)

As someone for whom he released a vulnerability this weekend, and the person responsible for the security of the product in question...

... shouldn't you be apologising for not finding the vulnerability in your own product yourself?

You've got the source code, all the architecture notes, the people who wrote it, the comprehensive testing suites... and yet you still let a critical security error get through that some random guy on the street with a $10 fuzzer found by accident.

There's a problem here, and it's not with the security researcher. Sorry.

Re:Researchers use responsible disclosure (1)

idontgno (624372) | about 2 years ago | (#42180545)

Hold it. You don't monitor full disclosure security websites yourself?

It's called "Intel". It's worth the effort.

Full disclosure is only a problem if you don't take advantage of it yourself. Otherwise, it's embarrassing when your customers do your job for you, or when the blackhats do a little personal disclosure on your assets.

Yeah, yeah, I know. There aren't enough hours in the day, you don't have enough staff, etc., etc. That just means management isn't prioritizing and allocating correctly. That still means "you" are doing it wrong, in the collective organizational sense of "you".

You're proactive, or you're a victim. This is reality. You're just lucky your customers feel invested in helping you, even if out of self-defense.

War on full disclosure (0)

Anonymous Coward | about 2 years ago | (#42171645)

"Responsible disclosure" my ass. Call it what it is: delayed disclosure.

I don't trust Oracle, and I'd rather that they didn't know how to hack my servers before I do.

Re:Researchers use responsible disclosure (0)

Anonymous Coward | about 2 years ago | (#42173151)

I've talked to kcope on several occasions and hang out in the same IRC channels as him. He's a solid and intelligent guy. A lot of us have tried "responsible disclosure" and it fails more often than it works. We notify vendors just to get ignored, or told it's not serious, or it takes months to fix. After being told that numerous times, we've said fuck it and just release stuff. Personally, I don't release 100% working exploits and generally keep the cool ones to myself and a small group of people that exchange them. I'll release a PoC that will cause a DoS or something else to demonstrate the vuln, and only release working exploits to people I trust. By releasing a PoC you force the vendor's hand. They can't tell you it's not serious and they can't wait months to fix an issue. They have to fix it now and admit they have a problem, or face negative PR.

Privilege Elevation bug not much of a bug (2)

detain (687995) | about 2 years ago | (#42170201)

From what I'm reading, the privilege elevation bug requires that you, as a non-root user, be able to write files to a /var/lib/mysql// directory. I don't remember ever seeing a setup where those directories are world-writable, or where normal non-root users would be added to the mysql group.
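A quick way to audit for exactly that misconfiguration is to scan the data directory for group- or world-writable entries. A minimal sketch (the path and the policy here are illustrative, not anything MySQL itself ships):

```python
import os
import stat

def writable_by_others(path):
    """True if the group or world write bit is set on path."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))

def find_loose_entries(datadir):
    """Walk a MySQL-style data directory (e.g. /var/lib/mysql)
    and collect entries that users other than the owner could
    write to -- the precondition for this privilege escalation."""
    loose = [datadir] if writable_by_others(datadir) else []
    for root, dirs, files in os.walk(datadir):
        for name in dirs + files:
            full = os.path.join(root, name)
            if writable_by_others(full):
                loose.append(full)
    return loose
```

Anything this reports under the data directory is worth a look; a `chmod 700` on the per-database directories closes the hole described above.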

Re:Privilege Elevation bug not much of a bug (0)

Anonymous Coward | about 2 years ago | (#42170405)

Well, here are 104,000 examples of that vulnerability being valid (and who knows how many people have followed this advice) --
http://lmgtfy.com/?q=%22chmod+777+/var/lib/mysql%22

Re:Privilege Elevation bug not much of a bug (4, Interesting)

greg1104 (461138) | about 2 years ago | (#42170819)

Right, suggestions like the Zenoss commenter [zenoss.org] who says "f you dont want to frack around, just chmod those puppies 777" are the reason why this is a problem. It's sadly common advice in the "I want setup to be easy" land of MySQL priorities.

Note that if you change the directory a PostgreSQL server writes to so that other users are allowed to write there, too, the server will refuse to start until you fix the permissions so that isn't the case. New database installations [postgresql.org] made with initdb have the right permissions, but the code checks against people "fracking" themselves by making them less secure later. The only way around this is to modify the source code [nabble.com] to disable the check!
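The PostgreSQL behaviour described above boils down to a stat of the data directory at startup. A rough Python rendering of the idea, assuming the strict owner-only (0700) policy; the real check lives in C inside the server, so treat this as a sketch:

```python
import os
import stat

def check_datadir_perms(datadir):
    """Refuse to start if anyone besides the owner has any
    access to the data directory -- roughly the check that
    stops a 'chmod 777'-ed PostgreSQL cluster from booting."""
    mode = stat.S_IMODE(os.stat(datadir).st_mode)
    if mode & 0o077:
        raise SystemExit(
            "data directory %r has group or world access "
            "(mode %04o); permissions should be u=rwx (0700)"
            % (datadir, mode))
    return mode
```

Failing hard at startup, rather than warning, is the design choice that matters: the admin cannot "frack" the permissions later without the server refusing to come back up.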

Re:Privilege Elevation bug not much of a bug (1)

Gothmolly (148874) | about 2 years ago | (#42171097)

Just like the world is full of "developers" who write everything to c:\temp, the world is full of Unix hacks who chmod 777 everything "because then it works".

Re:Privilege Elevation bug not much of a bug (1)

defcon-11 (2181232) | about 2 years ago | (#42172071)

I haven't done much windows development, are the semantics of C:\temp different from /tmp? Why is writing to it a bad idea?

Re:Privilege Elevation bug not much of a bug (1)

turbidostato (878842) | about 2 years ago | (#42172179)

"I haven't done much windows development, are the semantics of C:\temp different from /tmp? Why is writing to it a bad idea?"

It's a bad idea for at least two reasons:
1) You should never write to C:\temp, or /tmp for that matter. You should write to %TEMP% or $TMPDIR instead.
2) Even if you write to %TEMP%, think twice about what you write there and exactly where within it: i.e. use a non-predictable file name.
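Python's standard library, for example, does both of these things for you: tempfile consults the environment for the location, and mkstemp generates an unpredictable name opened with O_CREAT|O_EXCL. A small sketch:

```python
import os
import tempfile

# gettempdir() checks the TMPDIR, TEMP and TMP environment
# variables before falling back to a platform default, so the
# code never hard-codes /tmp or C:\temp.
scratch_dir = tempfile.gettempdir()

# mkstemp() picks a random name and creates the file with
# O_CREAT | O_EXCL and mode 0600, so an attacker can't have
# pre-created or symlinked the path out from under us.
fd, path = tempfile.mkstemp(prefix="myapp-", suffix=".dat")
try:
    os.write(fd, b"scratch data")
finally:
    os.close(fd)
    os.unlink(path)
```

The O_EXCL flag is what defeats the classic symlink race: if something already exists at the chosen path, creation fails instead of following the link.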

Re:Privilege Elevation bug not much of a bug (0)

Anonymous Coward | about 2 years ago | (#42172253)

You failed to answer the question. Man up and admit that you don't know why!

Re:Privilege Elevation bug not much of a bug (1)

drkstr1 (2072368) | about 2 years ago | (#42173153)

Writing to tmp breaks encapsulation, and so it is considered more "dangerous" than setting up your own internal temporary storage mechanism. File name collisions are the most obvious issue to arise from this. In worse cases, you can leak sensitive information (I remember one of the GUI terminals in Gnome was dumping the buffer as plain text to tmp, even when using SSL).

Re:Privilege Elevation bug not much of a bug (1)

lennier (44736) | about 2 years ago | (#42173823)

Writing to tmp breaks encapsulation, and so it is considered more "dangerous" than setting up your own internal temporary storage mechanism.

Race conditions like c:\temp and /tmp are an example of why the current 40-year-old operating system model, with lots of secure processes all using one big shared filesystem, needs a long-overdue rethink. And we're missing the chance to do it with the best opportunity we have - tablets - because they're inheriting the same fundamentally broken OS design.

Another big example of why our OSes need a rethink is virtualisation. It shouldn't take simulating an entire CPU, motherboard and OS just to get provable separation of shared processes. That sort of thing is exactly what an OS was invented to do in the first place - but our shared-filesystem model simply doesn't allow it, so we have to virtualise the hard and slow way, creating entire virtual machines when all we'd need in a well-designed system is processes. That's nuts.

Yet another example is installation. We write software at the process level that's neatly encapsulated into objects which don't overwrite each other's memory space, and we learned since the 1980s that "global variables" are bad. But those objects only exist in process RAM, we implement them with subtly different semantics for each language, and we don't persist them to long-term OS storage in any kind of consistent manner. And when it comes time to write the installer, we throw out everything we learned in software development school and shove a bunch of files and directories and registry keys into that big ol' global variable we call "the filesystem" (plus databases, net-attached services, and on Windows, the COM object state). Then we put a thin layer of access permissions over the top to cover up the shared-everything fail underneath. And so every worm that comes along, once it gets access to the filesystem or, worse, our network credentials, can do whatever it wants. So no matter how pretty and clean our high-level security abstractions, underneath we're pretty much still right back in 1960s-era shared-memory COBOL mainframes with GOTO statements and global shared databases. /facepalm.

How about we take those functional and OO design principles we love so dearly and build an OS on them? I seem to recall that was the promise of the entire 90s generation of OSes, from OS/2 to Taligent/Pink to Windows NT/Cairo. Did any of it eventuate? Nope. At least not for security. We added secure object capabilities on top of an insecure substrate which is still there - but security is about removing capabilities, and then proving that you removed them. That's why we can't do security.

Half True (0)

Anonymous Coward | about 2 years ago | (#42177381)

While most installers do what you say, Google has shown that you don't need admin privileges to install large software packages (namely Chrome).

But yeah, M$ should kick developers in their asses to get this right.

Re:Privilege Elevation bug not much of a bug (1)

defcon-11 (2181232) | about 2 years ago | (#42174679)

I guess most of the tempfile stuff I've worked with has been in Python and Ruby, where the std lib has functions to create tempfiles, and you don't have to worry much about file paths or name collisions or having to set the correct file permissions. Presumably there are similar third party libs for languages lacking these features in the std lib.
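For reference, the Python helpers mentioned above also handle cleanup: NamedTemporaryFile and TemporaryDirectory are context managers that remove what they created when the block exits, which avoids the classic stale-litter-in-/tmp problem:

```python
import os
import tempfile

# The file gets an unpredictable name and mode 0600, and is
# deleted automatically when the with-block exits.
with tempfile.NamedTemporaryFile(mode="w+", suffix=".log") as tmp:
    tmp.write("scratch\n")
    tmp.flush()
    name = tmp.name

# Same idea for a whole working directory: everything inside
# is removed when the context manager closes.
with tempfile.TemporaryDirectory() as workdir:
    with open(os.path.join(workdir, "work.txt"), "w") as f:
        f.write("more scratch\n")
```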

Re:Privilege Elevation bug not much of a bug (1)

WuphonsReach (684551) | about 2 years ago | (#42172177)

Just like the world is full of "developers" who write everything to c:\temp, the world is full of Unix hacks who chmod 777 everything "because then it works".

Or the Linux hacks who disable SELinux because they can't figure out how to create exceptions or fix the file system labeling problems.

Re:Privilege Elevation bug not much of a bug (1)

Anonymous Coward | about 2 years ago | (#42170509)

C:\>dir /var/lib/mysql//
Invalid switch - "var".

What is going on here? Is my system vulnerable or not?

Re:Privilege Elevation bug not much of a bug (5, Funny)

TheSpoom (715771) | about 2 years ago | (#42170731)

If you're running Windows, you can default to "yes".

Re:Privilege Elevation bug not much of a bug (0)

Anonymous Coward | about 2 years ago | (#42172133)

This bug is not about file system permissions.
The user in this case is a mysql user who has the FILE privilege.
You connect to mysql, then use "select into outfile... " and mysqld writes the file you told it to.
This can be used to create a file that contains a trigger definition.
Now, the problem with triggers in mysql is that which user it should run as is defined in the trigger itself.
If you in the previous step wrote "root@localhost" in the trigger file, guess what, the trigger will run as root@localhost.

The fix to this is not to fix file permissions, it is to revoke FILE privileges.
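Auditing for that mostly means looking at SHOW GRANTS output for each account and flagging anyone holding FILE, which, being a global privilege, only appears on *.* and is also implied by ALL PRIVILEGES at that level. A rough sketch that just inspects grant strings; in practice you'd fetch them with a MySQL client, and the grant texts in the test are illustrative:

```python
import re

def grants_with_file_priv(grant_statements):
    """Given GRANT statements as strings (one per SHOW GRANTS
    row), return those that confer the global FILE privilege,
    either directly or via ALL PRIVILEGES ON *.*."""
    risky = []
    for grant in grant_statements:
        text = grant.upper()
        # FILE is global-only, so only GRANT ... ON *.* matters;
        # \bFILE\b avoids false hits on INFILE/OUTFILE keywords.
        if re.search(r"\bON \*\.\*", text) and (
                re.search(r"\bFILE\b", text)
                or "ALL PRIVILEGES" in text):
            risky.append(grant)
    return risky
```

REVOKE FILE ON *.* from anything this flags (that isn't a genuine admin account) is the mitigation the parent describes.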

Security researchers are not welcome in Brazil (0)

Anonymous Coward | about 2 years ago | (#42170613)

Ms. President of Brazil just signed a law that outlaws the development, distribution, dissemination or anything related to "tools to break into computer systems".

There is no provision for researchers. If you are a security researcher, feel free to visit Brazil but bring a lawyer.

7 at once? (-1)

Anonymous Coward | about 2 years ago | (#42170893)

even oracle ain't *that* bad...

must be a case of the "researcher"-couldn't-find-a-buyer-at-the-right-price-so-might-as-well-go-public

Only improperly configured installations? (4, Insightful)

Anonymous Coward | about 2 years ago | (#42173145)

I don't quite agree that this only affects improperly configured installs. If you leave 3306 open to the Internet, yes, that would be an improper configuration and you kind of get what you deserve there.

However, imagine the case of having a webserver open to the world hosting $RANDOM_PHP_APP_OF_THE_DAY, with a MySQL server backend on a separate private network it must talk to. Everything is properly firewalled, only the webapp can access MySQL on 3306, and it only has access to its own database(s), nothing more. Now a random exploit for the PHP app happens, which gives the attacker the ability to run their own SQL commands, gain user shell access, whatever. These exploits are common.

Instead of limiting the attacker to the database credentials within the app itself, this now gives the attacker full access to the entire MySQL infrastructure, bypassing any local ACLs you have in place. Instead of just your application database, they can now access any other databases set up on the same server.

Most exploits these days are fairly innocuous by themselves; it's when you string them together that they become important. Any attacker worth their salt has lists of thousands of exploitable webapps saved for just such days, when a new backend zero-day hits. Then they fire up their tools to take advantage first of the known hole in the web application they already scanned for months ago, and then exploit the more severe underlying vulnerability that's "behind the firewall so we don't care".

Security is a multi-layered thing. You cannot be secure in a bubble, and you cannot say something doesn't affect you just because an attacker can't directly exploit the problem from a random wireless access point in a coffee shop somewhere. Very few exploits I see these days are that "easy" to pull off; nearly all of them require multiple exploits chained together to gain the needed access to the target.

Re:Only improperly configured installations? (1)

jhol13 (1087781) | about 2 years ago | (#42176113)

If you leave 3306 open to the Internet, yes, that would be an improper configuration and you kind of get what you deserve there.

Why? Why do you "deserve" that? Shouldn't the database be safe to use that way? If not, why shouldn't it be?

I think this is the main reason why every fucking application, from browsers to document viewers to databases to webapps to firewalls to PHP apps, is buggy: developers assume "it will be protected by other means, I don't need to check my code or sanitize input. They'll use AppArmor. It's not that serious a hole."

Re:Only improperly configured installations? (0)

Anonymous Coward | about 2 years ago | (#42176367)

I actually don't disagree; "deserve" was a strong word. However, you pretty much nailed the reason why in your post.

I think it's utterly stupid that you can get such a security hole *pre-authentication*. Inexcusable, really. If this were a privilege escalation after the fact, I could at least understand it.

I've simply learned over the years that people are human, and you'll be a lot happier in life if you recognize that fact and act pragmatically. I spent my younger years screaming from a mountaintop at how criminally stupid the average programmer is when it comes to security, but that was a pretty shitty life overall.

However, keep in mind there are a lot more things on the Internet than software exploits. Firewalling off unneeded services to the public is best practice simply due to the human factor - if I have an employee who adds a grant for root@"%" - at least I have an added layer of protection there to save us from his stupidity. Yes, this has happened before, and we thankfully caught it relatively quickly in a standard audit.

This also doesn't even touch on lesser problems such as your everyday DDoS - I don't think I should expect the authors of MySQL to have to put DoS protection in their code; I'd rather they work on, you know, database features. The very few use cases that require 3306 open to the Interwebs mean I can focus my own personal resources on such an endeavor, rather than wasting the time of people who will never use such a feature, should I ever need to.

Nothing wrong with using firewalling as another piece of the toolkit. Relying on it as some form of ultimate protection is stupid, and unfortunately that seems to be the direction the industry has moved over the past decade. In my opinion it's due both to laziness and to the sheer number of applications out there that eventually get compromised in some fashion or another, which the average IT folk cannot realistically keep up with. The result is a completely unsustainable security model, which is simply "matter of time" territory until someone gets behind the firewall and then more or less has free access to whatever they like. If I had a dollar for every time someone has said "oh, it's behind the firewall, who cares" when I tell them to update something, I'd be a rich, rich man :)
