
Study Shows Many Sites Still Failing Basic Security Measures

Unknown Lamer posted more than 2 years ago | from the remember-stack-smashing dept.

Security

Orome1 writes with a summary of a large survey of web applications by Veracode. From the article: "Considered 'low hanging fruit' because of their prevalence in software applications, XSS and SQL Injection are two of the most frequently exploited vulnerabilities, often providing a gateway to customer data and intellectual property. When applying the new analysis criteria, Veracode reports eight out of 10 applications fail to meet acceptable levels of security, marking a significant decline from past reports. Specifically for web applications, the report showed a high concentration of XSS and SQL Injection vulnerabilities, with XSS present in 68 percent of all web applications and SQL Injection present in 32 percent of all web applications."


103 comments


Citicorp Hack (5, Interesting)

Anonymous Coward | more than 2 years ago | (#38292106)

Then there is the Citicorp hack, where they don't even bother hashing the account numbers in the URL...

Re:Citicorp Hack (3, Insightful)

tomhudson (43916) | more than 2 years ago | (#38293080)

The *real* Citicorp hack was getting bailed out with $308 billion in loan guarantees, and NOBODY going to jail.

Re:Citicorp Hack (1)

FoolishOwl (1698506) | more than 2 years ago | (#38294342)

It might be interesting to compare the total amount of losses to bank robbery and this sort of hacking to the amount pocketed by execs in the bailout.

Re:Citicorp Hack (3, Informative)

tomhudson (43916) | more than 2 years ago | (#38294826)

Latest stats for the US - 2nd quarter of 2010 from the FBI: 1,007 bank robberies (includes credit unions, savings and loans, as well as the "too big to fail" commercial banks). [fbi.gov]

Total loot: $7,820,347.96 in cash, $298.88 in cheques. So far, they've gotten back $1,801,073.18, for a net loss of $6,019,573.66

Extrapolated to an entire year, that would still be under $25 million net. A rounding error compared to all the US bank bail-outs.

Re:Citicorp Hack (0)

Anonymous Coward | more than 2 years ago | (#38293532)

Then there is the Citicorp hack, where they don't even bother hashing the account numbers in the URL...

Yes, but did they get Always The Low Price[TM]????

Fixed (0, Troll)

masternerdguy (2468142) | more than 2 years ago | (#38292120)

And they can be improved with Norton Internet Security! (Preparing for the new age of ask slashdot)

Re:Fixed (-1, Offtopic)

forkfail (228161) | more than 2 years ago | (#38292644)

Buy McAfee Enterprise with at least 1000 licenses today, and get a free iTouch!

200 (5, Insightful)

badran (973386) | more than 2 years ago | (#38292152)

I wonder how they test. Some sites that I manage return the user to the homepage on a hack attempt or unrecoverable error, resulting in a 200 return. Would they consider such a system hacked, since they got a 200 OK return, or not?

Re:200 (0)

Anonymous Coward | more than 2 years ago | (#38292292)

I think your question illustrates one of the many shortcomings of site scanners. I have personally seen 'Certified Hacker Proof' and other garbage badges on various sites, that were very obviously vulnerable to these types of attacks.

Even mysql.com can't get it right.

Re:200 (5, Interesting)

slazzy (864185) | more than 2 years ago | (#38292418)

One of the sites at a company I worked for provides fake data back when people attempt SQL injection, sort of a honeypot to keep hackers interested long enough to track them down.

Re:200 (1)

TheSpoom (715771) | more than 2 years ago | (#38294638)

That's awesome. They should open source that component.

Re:200 (2)

phorm (591458) | more than 2 years ago | (#38294874)

Hmmm, how about:
a) Have a secondary instance running with dummy/fake data
b) Have a wrapper around queries that checks for attempted injections (perhaps a pre/post sanitization check); if the query is an injection attempt, grab data from the fake DB (see the sketch below)
c) Watch for people using data from the fake DB; attempts to use a fake (but realistic enough to pass a smell test) CC# are fraud attempts flagged to Visa...
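A very rough sketch of (b) in Python; the pattern list, database file names, and run_query() helper are purely illustrative assumptions, not anyone's real implementation:

import re
import sqlite3

# Crude, illustrative signatures only; real detection would be far more involved.
INJECTION = re.compile(r"('|--|;|\bunion\b|\bor\s+1\s*=\s*1\b)", re.IGNORECASE)

def looks_like_injection(params):
    return any(INJECTION.search(str(v)) for v in params)

def run_query(sql, params):
    # Suspicious input gets the decoy database full of fake data;
    # everything else hits the real one.
    db = "honeypot.db" if looks_like_injection(params) else "production.db"
    conn = sqlite3.connect(db)
    try:
        return conn.execute(sql, params).fetchall()
    finally:
        conn.close()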

Re:200 (1)

Fallingcow (213461) | more than 2 years ago | (#38292482)

Why wouldn't you return a more appropriate code (something from 4xx or 5xx) in those cases? Since you can always send whatever content you want along with (almost) any code, might as well give standards-compliant HTTP feedback.

Re:200 (1)

Fallingcow (213461) | more than 2 years ago | (#38292546)

I should add that appropriate error codes can help drive off traffic from automated scanners of various sorts, looking for open proxies and other problems. Things like your 404 or 401 pages should definitely not return a 200 OK, for that reason if no other.

Re:200 (1)

badran (973386) | more than 2 years ago | (#38292676)

Because I have no control over the specifications.

Re:200 (1)

Fallingcow (213461) | more than 2 years ago | (#38292866)

Haha, yeah, there's always that I suppose.

Re:200 (0)

Anonymous Coward | more than 2 years ago | (#38293020)

Why _would_ you? Is there incentive to be standards-compliant, friendly, and heterogeneous-mix-of-clients interoperative with attackers?

Re:200 (2)

Fallingcow (213461) | more than 2 years ago | (#38293294)

Aside from feel-good "adhering to the standards" crap, it makes your site look less inviting to attackers (a 4xx page returning a 200 OK looks, and is, sloppy as hell) which isn't a bad thing. It discourages automatic scanners from marking vulnerabilities that don't actually exist, which can get your site on all sorts of lists that can drive even more (often automated) traffic your way, wasting cycles and bandwidth. I'd much rather Chinese proxy and Wordpress installation scanners get a 4xx FUCK OFF than an erroneous 200 OK.

With accurate status codes, end users' web clients can (possibly) provide them with better information. It makes it way the hell easier to convert your content or processes into consumable RESTful services (maybe you want to expose your site resources to a mobile app, say) if you're already reliably and universally slinging appropriate HTTP status codes. Makes writing quick-n-dirty remote unit tests or QoS monitors easier. Apache (or whatever) error logs remain useful.
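As a small illustration of "send whatever content you want along with an accurate code", here's a minimal sketch using Flask; the route, field name, and the toy "suspicious input" check are assumptions for the example, not a real filter:

from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

SUSPICIOUS = ("'", "<script", " union select ")

@app.route("/search")
def search():
    q = request.args.get("q", "")
    if any(s in q.lower() for s in SUSPICIOUS):
        # Friendly page for humans, but an honest 403 for logs, bots, and monitors.
        return "<h1>Request rejected</h1><p>That input looks hostile.</p>", 403
    return f"Results for {escape(q)}"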

Lots of reasons. I'm sure there are others that aren't coming to mind, or that I don't know about.

Incidentally, we do need a 400-range "FUCK OFF" status code.

Re:200 (1)

Anonymous Coward | more than 2 years ago | (#38296436)

Incidentally, we do need a 400-range "FUCK OFF" status code.

I'll admit: I wanted to write something snarky. So, I went to RFC 2616 looking for a preexisting code that would say something like, "The request was understood, but the manner of presentation raised suspicions. The client SHOULD NOT repeat the request." There is none, and you're dead-on right about the need for a 400-range FUCK OFF status code.

The specs assume that the server will always transmit information unless the request is malformed or the resource is protected/missing: there needs to be an *error* on the client or server's part, not an intention of abuse. The server is supposed to be helpful, not suspicious. So, you're right: there is a real need for an error code meaning "The client's manner of making a request raises suspicions that it may attempt to subvert the server. The client SHOULD NOT repeat the request." That would be more in line with real-world implementations (at least for servers with some sort of watchdog), and error codes should include space for non-ideal situations (that is, specifications should take into account security and the possibility of abuse). I move for consideration of 499 FUCK OFF.

Re:200 (1)

Fallingcow (213461) | more than 2 years ago | (#38296676)

Looks like nginx uses a non-standard code 444 No Response for that purpose. May have to modify my Apache config to start using that, if I can...
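For reference, the nginx side of that is a one-liner; the location block here is just an illustration:

location /wp-login.php {
    # 444 is nginx-specific: close the connection without sending any response.
    return 444;
}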

Re:200 (5, Informative)

jc42 (318812) | more than 2 years ago | (#38293628)

Why _would_ you [send valid content with a 4xx or 5xx code]? Is there incentive to be standards-compliant, friendly, and heterogeneous-mix-of-clients interoperative with attackers?

Perhaps because you know that the "attacks" are coming from sites that don't know they're attacking you, but are merely asking for content.

The specific cases I'm thinking of are some sites that I'm responsible for, which can deliver the "content" information in a list of different formats such as HTML, PDF, PS, EPS, RTF, GIF, PNG (and even plain text ;-). The request pages list the formats that are available; a client clicks on the one(s) that they want and presses the "Send" button, and gets back the information in the requested format(s). The data is stored in a database, of course, and converted on the fly to whatever format is requested. Things like PS and PDF are huge in comparison, so we don't save them. The required disk space would be exorbitantly expensive.

There is a real problem with such an approach: The search sites' bots tend to hit your site with requests for all of your data in all of your formats. Some of them do this from several addresses simultaneously, hitting the poor little server with large numbers of conversion requests per second, bringing the server to its knees. Converting plain text to all the above formats can be quite expensive.

How I handled this was, first (as an emergency measure), to simply drop requests from an "attacker" IP address. This gave breathing space while I implemented the rest. What's in place now is code that honors single requests but, if it sees multiple such requests in the same second coming from a single address or a known search-site address block, replies to just one of them and sends the rest an HTML page explaining why their request was rejected.
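A toy version of that throttle in Python; the one-request-per-second rule and the serve_request()/rejection_page() callables are only stand-ins for whatever the real code does:

import time

last_honored = {}  # ip -> timestamp of the last request we actually served

def handle(ip, serve_request, rejection_page):
    # Serve at most one request per IP per second; everything else gets
    # the human-readable page explaining why it was rejected.
    now = time.time()
    if now - last_honored.get(ip, 0.0) >= 1.0:
        last_honored[ip] = now
        return serve_request()
    return rejection_page()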

Over time, this tends to get the message through to the guys behind the search bots, and they add code on their side to be nicer to smaller sites like ours.

I've also used this approach to explain to search-site developers why they should honor a nofollow attribute. After all, they get no information from the expensive formats like PS, PDF or PNG that's not in the plain-text or HTML file, so there's no real reason for a search site to request them.

Note that, in this case, we do actually refer to such misbehaved search bots as "attackers". They're clearly DOSing us, for no good reason. But the people responsible aren't actually malevolent; they just don't realize what they're doing to small sites. If you can defuse their attacks gently, with human-readable explanations, they'll usually relent and become better neighbors. This helps their site, too, since they no longer waste disk space and CPU time dealing with duplicate information in formats that are expensive to decode and eat disk space.

It's yet another case where the usual simplistic approach to "security" doesn't match well with reality.

(It should be noted that the above code also has a blacklist, which lists addresses that are simply blocked, because the code at that site either doesn't relent, or attempts things like XSS or SQL attacks, which are recognized during the input-parsing phase. Those sites simply get a 404. But those are a minority of our rejections. We don't mind being in the search sites' indexes; we just don't like being DOS'd by their search bots.)

Re:200 (1)

Anonymous Coward | more than 2 years ago | (#38298664)

How about using a robots.txt crawl delay?

User-agent: *
Crawl-delay: 10

See http://en.wikipedia.org/wiki/Robots_exclusion_standard#Crawl-delay_directive

Re:200 (2)

lonecrow (931585) | more than 2 years ago | (#38299762)

I have been in similar circumstances and there are a few other ways to handle it.
#1 solution: use a link for the main format that you want the search engines to read (HTML), then instead of links for the other versions use forms. You can still use GET, and you can style the submit button to look like a link. Sure, it's a bit more HTML than a simple link, but as a solution it is simple and effective.
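Roughly, the markup being described; the paths and class name are invented for the example, and most crawlers will follow the link but not submit the form:

<!-- Crawlable link for the cheap format -->
<a href="/doc/42.html">HTML</a>

<!-- Expensive formats behind a GET form; crawlers generally won't submit it -->
<form action="/doc/42" method="get">
  <input type="hidden" name="format" value="pdf">
  <button type="submit" class="looks-like-a-link">PDF</button>
</form>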

I work at Veracode; here's how we test. (5, Informative)

Anonymous Coward | more than 2 years ago | (#38293540)

I work at Veracode, and can share how we test. I'll be brief and technical here, as there's lots of marketing material available other places. In short, we scan web sites and web applications that our customers pay us to scan for them; the "State of Software Security" report is the aggregate sanitized data from all of our customers. We provide two distinct kinds of scans: dynamic and static.

With dynamic scans, we perform a deep, wide array of "simulated attacks" (e.g. SQL Injection, XSS, etc.) on the customer's site, looking for places where the site appears to respond in a vulnerable way. For example, if the customer's site has a form field, then our dynamic scanner might try to send some javascript in that field, and then can detect if the javascript is executed. If so, that's an XSS vulnerability. As you might imagine, the scanner can try literally hundreds of different attack approaches for each potentially vulnerable point on the site.
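In spirit (this is not Veracode's code), a reflected-XSS probe can be as simple as submitting a marker payload and checking whether it comes back unescaped; the URL and field name below are assumptions, and the requests library is used for illustration:

import requests

PAYLOAD = "<script>alert('xss-probe-1337')</script>"

def probe_reflected_xss(url, field="comment"):
    # If the raw payload appears in the response (not HTML-escaped),
    # the parameter is a likely reflected-XSS candidate worth a closer look.
    resp = requests.post(url, data={field: PAYLOAD}, timeout=10)
    return PAYLOAD in resp.text

# probe_reflected_xss("https://example.com/guestbook")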

The static scans are a little fancier. The customer uploads to Veracode a copy of the executable binary build of their application (C/C++, Java, .NET, iPhone app, and a couple of other platforms). From the executable binary, the Veracode systems then create a complete, in-depth model of the program, including control flow, data flow, program structure, stack and heap memory analysis, etc.. This model is then scanned for patterns of vulnerability, which are then reported back to the customer. For example, if the program accepts data from an incoming HTTP request, and then if any portion of that data can somehow find its way into a database query without being cleansed of SQL escape characters, then the application is vulnerable to SQL Injection attacks. There are hundreds of other scans, including buffer overflows, etc.
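The "data from a request finds its way into a query" check is, at its core, taint propagation. Here's a toy sketch over a made-up instruction list; it has nothing to do with the real binary-level analysis, which also has to model control flow, the heap, indirect calls, and so on:

# Flag any sink call whose argument can be traced back to an HTTP parameter.
PROGRAM = [
    ("source", "name"),                  # name comes from the request
    ("assign", "query", "name"),         # query is derived from name
    ("sink",   "execute_sql", "query"),  # query reaches the database
]

def find_tainted_sinks(program):
    tainted, findings = set(), []
    for instr in program:
        if instr[0] == "source":
            tainted.add(instr[1])
        elif instr[0] == "assign" and instr[2] in tainted:
            tainted.add(instr[1])
        elif instr[0] == "sink" and instr[2] in tainted:
            findings.append(instr[1])
    return findings

print(find_tainted_sinks(PROGRAM))  # ['execute_sql'] -> reported as SQL injection risk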

Personally, I think what we do at Veracode is pretty amazing, particularly the static binary scans. I mean: you upload your executable, and you get back a report telling you where the flaws are and what you need to fix. The technical gee-whiz factor is pretty high, even for a jaded old-timer like me.

Re:I work at Veracode; here's how we test. (3, Informative)

kriegsman (55737) | more than 2 years ago | (#38293596)

Oops, I wasn't logged in. The above comment is from me, Mark Kriegsman, Director of Engineering at Veracode.

Re:I work at Veracode; here's how we test. (1)

Just Some Guy (3352) | more than 2 years ago | (#38294746)

Thanks for posting, Mark. I'm curious, though: how do you check for stupid mistakes like that in languages that allow first-class functions? For instance, in Python I could write something like:

>>> def foo(x): print x
...
>>> arguments = ['hello, world']
>>> def call_func_with_args(func, args): func(*args)
...
>>> call_func_with_args(foo, arguments)
hello, world

Your scanner would have to determine that 1) call_func_with_args executes the passed-in function, and 2) there's some possibility that it gets executed with an SQL query as the first argument and unsafe data in the second. That seems on the order of solving the halting problem in trickiness. The article doesn't mention Python, but C# will happily let you pass functions around. How do you handle that?

Re:I work at Veracode; here's how we test. (0)

Anonymous Coward | more than 2 years ago | (#38295078)

Don't know how they do it.. since I don't work there.. but you could check if the program reads any data from stdin... then see if any changes are made to the data before it's sent to the DB. Simply passing the data around, wouldn't change anything... so it would still fail the check. And you could add a little more checking: is the data checked for single quotes, etc. Doesn't seem too complicated.

Re:I work at Veracode; here's how we test. (1)

Just Some Guy (3352) | more than 2 years ago | (#38295216)

How do you know whether the data will ever be sent to the DB? That's the problem. You can't simply mock the DB connection and watch for bad inbound queries because there might be an unsafe query that only gets executed once every 10,000,000 page views. The hard part is telling for sure whether any given piece of data can possibly get passed to a given function, especially when you can pass functions around as arguments to other functions.

At any rate, no, you don't ever have to check the data for single quotes, etc. at all. If data is ever used to create an SQL query that gets executed (or passed back to visitors without being stripped of HTML tags), then you have a security vulnerability, period.

Re:I work at Veracode; here's how we test. (0)

Anonymous Coward | more than 2 years ago | (#38295650)

The hard part is telling for sure whether any given piece of data can possibly get passed to a given function, especially when you can pass functions around as arguments to other functions.

1) you can see if a program is sending a piece of data to a database without actually running the program. Keep in mind, they don't have the source code.. so we're talking about disassembling the program first.

If data is ever used to create an SQL query that gets executed

2) You're kidding me, right? User data is turned into SQL queries all the time. How do you think slashdot comments are stored?

Re:I work at Veracode; here's how we test. (0)

Anonymous Coward | more than 2 years ago | (#38298632)

2) You're kidding me, right? User data is turned into SQL queries all the time. How do you think slashdot comments are stored?

You're not qualified to have this conversation.

Re:I work at Veracode; here's how we test. (0)

Anonymous Coward | more than 2 years ago | (#38299912)

I've been writing web apps for 15 years... so if you really think user data never gets passed to the db, then you're an idiot.

Re:I work at Veracode; here's how we test. (0)

Anonymous Coward | more than 2 years ago | (#38303414)

Re-read what he said. Do you use visitor data to create SQL queries, or do you pass user data to the DB library as an argument to a parameterized query like everyone sane does? Do you write query = "select * from users where username = '" & form.username & "'" and send that to the DB, or do you write query = "select * from users where username = @username"? If you're doing it the first way, you're an idiot. If you're doing it the second way, you're agreeing with him.
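To make the "second way" concrete, here's a minimal Python sqlite3 example; placeholder syntax varies by driver (?, %s, or named parameters), but the idea is identical:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

username = "alice' OR '1'='1"  # hostile input

# Unsafe: builds SQL out of user data (don't do this).
#   query = "SELECT * FROM users WHERE username = '" + username + "'"

# Safe: the value is sent separately from the SQL text.
rows = conn.execute("SELECT * FROM users WHERE username = ?", (username,)).fetchall()
print(rows)  # [] -- the quote is just data, not SQL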

Re:I work at Veracode; here's how we test. (4, Informative)

kriegsman (55737) | more than 2 years ago | (#38295894)

That is a GREAT question, and the full answer is complicated and partially proprietary. But basically, you've touched on the problem of indirect control flow, which exists in C (call through a function pointer), C++ (virtual function calls), and in Java, .NET, ObjC, etc. The general approach is that at each indirect call site, you "solve for" what the actual targets of the call could possibly be, and take it from there. The specific example you gave is actually trivially solved, since there's only one possible answer in the program; in large scale applications it is what we call "hard." And yes, in some cases we (necessarily) lose the trail; see "halting problem" as noted. But we do a remarkably good job on most real world application code. I've been working with this team on this static binary analysis business for eight or nine years, and we still haven't run out of interesting problems to work on, and this is definitely one of them.

Re:I work at Veracode; here's how we test. (1)

Just Some Guy (3352) | more than 2 years ago | (#38299244)

Sounds like you're doing some really cool stuff, and I admit that I'm kind of jealous because it seems like a lot of fun. Thanks again for the information!

Re:I work at Veracode; here's how we test. (0)

Anonymous Coward | more than 2 years ago | (#38295152)

I was just thinking to myself... I wish I had mod points for this, as it's unusual to see a response from someone who works at the company and is willing to tie their name to it.

Luckily, I somehow got mod points today :)

Re:I work at Veracode; here's how we test. (0)

Anonymous Coward | more than 2 years ago | (#38301126)

Hey Mark, how does your technology (or your company, whatever) compare to Fortify, in your opinion?

Re:200 (2)

jc42 (318812) | more than 2 years ago | (#38293926)

Another related problem I've had is that XSS seems to have a wide range of definitions, and is such a vaguely-defined concept that it applies to a lot of valid web applications.

I've seen a number of definitions of XSS that include all cases where a CGI program gets a URL for a third site, and sends an HTTP request there. I have a number of sites whose CGI software is designed to work exactly this way. The data is distributed across several hundred other sites, only a few of them mine. My main sites have small databases where they can look up parts of requests, figure out where the data can be found, pass the request over to that server, wait for the replies, and combine the results into the web page that the client wants.

I don't especially like the idea that self-styled security experts would classify such setups between cooperating sites to be security violations. And I suspect that the folks who did this study would classify our distributed database (and probably google's ;-) as implementing XSS attacks. Our main "public" web sites would be classified as doing an XSS attack on our database sites.

So is there any sensible way to figure out what any given security researcher means by an "XSS attack"? Is there a reasonable way to argue for a more restrictive definition that would permit a flock of cooperating web servers to bounce requests back and forth like ours do, without being classified as an insecure "XSS attack" site?

(Actually, we've known from the start that, in a weak sense, our CGI software can be used to "attack" other sites. Just call it with a random URL; it'll send a GET request to that site. It'll then find that the data isn't in the expected format, drop it after the first data packet, and send you a "failed" reply. If you do this more than N times, you'll end up in our blacklist, and you'll get a reply explaining why you're blacklisted. An actual attack process would be over in a few seconds, so this isn't very useful as a way of DDOSing some victim site. ;-)

what do you expect? (3, Insightful)

Nyder (754090) | more than 2 years ago | (#38292308)

This is capitalism/corporations. It's all about profit, and spending extra on IT cuts into the bottom line.

Economy is bad, so companies make cuts. Personnel, IT, Security, and everything but the CEO's bonuses get cut.

Re:what do you expect? (4, Interesting)

Anonymous Coward | more than 2 years ago | (#38292454)

It also seems to come down to ridiculous timescales. A project is declared, a release date is set in stone. The client overruns their allotted time to come up with requirements/content, the release date stays in stone. The legal teams take forever to draw up and agree on contracts, the release date stays in stone. The IA/UX people miss their deadlines for producing the wireframes, the release date stays in stone. The design team go through a million iterations of whether the drop shadow on the footer text should be mauve or fuchsia and overrun their deadline, the release date stays in stone. The client pops up again with dozens of last minute change requests, the release date stays in stone. Then it hits development's desk and suddenly the three month project has to be done in two weeks. Development is almost always the last link in the chain and, as such, always the department under constant crunch time. Developing a complex site with vague specs across half a dozen minds isn't easy, but unlike all the other parts of the chain leading up to this point, it's the part where the client can be most punished if it's not done right, yet nobody ever sees the benefit of allowing sufficient time (and doing sufficient testing).

Re:what do you expect? (2, Insightful)

Anonymous Coward | more than 2 years ago | (#38292960)

If I gave you enough time to do development right, the competition would beat us to market, drive us out of business, and you would be out of a job.

Don't think it is any different working for one of our competitors, they will overwork you just as hard for fear of US beating THEM to the market.

The market has shown a surprisingly high tolerance for bugs and security gaps, so we simply can't afford to proactively fix those.

And if you don't like my high bonus....go start your own company. After realizing just how hard and risky it all is, you will feel like you deserve a nice fat bonus too.

Re:what do you expect? (1)

ksd1337 (1029386) | more than 2 years ago | (#38293372)

Not all markets show that tolerance. Video game markets, for example. A lot of them have those "set in stone" release dates, and the games don't come out very well. (Of course, my gaming taste is stuck in the '90s where it belongs, so don't take my word for it.)

Re:what do you expect? (0)

Anonymous Coward | more than 2 years ago | (#38293772)

After realizing just how hard and risky it all is, you will feel like you deserve a nice fat bonus too.

Risky for whom, exactly? For you personally? Or for your shareholders - whom you could not care less about? You have no personal risk, especially with that bonus you get, even if the company tanks due to your poor leadership.

And after understanding that, why would anyone hand their money over to you to risk it? When you get paid regardless of what happens to the company?

I find it odd that economic elites rail about personal responsibility and how there are no guarantees and such--then turn around and negotiate contracts that provide themselves with just that.

Re:what do you expect? (1)

Thing 1 (178996) | more than 2 years ago | (#38293974)

Development is almost always the last link in the chain and, as such, always the department under constant crunch time.

In my experience, QA is the last link in the chain; however, it is the Build team that gets crunched when development overruns. (And, as you pointed out, it's not always development's fault that they overrun.)

Hey, that's the company I work for .. (1)

roguegramma (982660) | more than 2 years ago | (#38296046)

Strange, and I thought I knew all the software developers working at the company.

Re:what do you expect? (1)

Mashiki (184564) | more than 2 years ago | (#38293102)

I can make wild-eyed inaccuracies too. I mean, it couldn't have anything to do with laws ensuring that failing at data security means less than a slap on the wrist. Wait, it means exactly that: you can cut everything and then simply offer an apology. This of course really won't change until either the laws or case law catch up to the theft of consumer data.

Re:what do you expect? (3, Insightful)

Ramley (1168049) | more than 2 years ago | (#38293120)

I am sure your point is a part of the problem, but in my (many years) of experience, this has a lot more to do with a myriad of factors, none of which really outweigh the other by much.

I am an independent developer who works on projects with security in mind from the ground up. Time/budget be damned, as it's my reputation on the line. If they can't pay for what it is worth, I tell them to find another developer.

They tend to learn the hard way — it was a better option to stick with a security minded developer in the first place. 85% of them return as customers.

The problem seems to be that most of the developers I have worked with, be they corporate employees or indies like myself, generally fall into one of two (very general) categories:

1. Lacking knowledge of how to deal with the most common security threats.
2. Lazy, and don't care enough to implement safeguards, etc.

Most of the other excuses boil down to one of the above.

That's my experience out there in the field, working with lots and lots of diverse companies. Of course profit and time to complete enter the picture, but over time, this can be overcome with a lot of experience and a lot of [code] libraries which can be easily implemented, no time lost.

Re:what do you expect? (0)

Anonymous Coward | more than 2 years ago | (#38293122)

That makes perfect sense, since there has never been a security problem in any site developed except for ones for corporations. Amazing.

Re:what do you expect? (1)

jc42 (318812) | more than 2 years ago | (#38296794)

I've seen comments that to a lot of management, the IT department is conceptually similar to the janitorial department, except that the latter keeps the physical facilities clean while the former keeps the data clean (and does a poorer job at its task ;-). Both are pure operational costs that bring in no income, so their cost should be minimized.

It's funny that I've seen this attitude even when the company's products depend in large part on their software people. But the people who build the software are still considered an overhead cost, while the credit for sales goes to the marketers. We've seen this in physical manufacturing, too, where many companies have historically treated their assembly-line workers as "overhead", giving them no credit for sales of the products.

There's gotta be an economics term for this attitude ...

That's why we outsourced our IT to the Cloud (4, Funny)

Anonymous Coward | more than 2 years ago | (#38292336)

Now it's not my problem, it's my Cloud provider's problem.

Re:That's why we outsourced our IT to the Cloud (1)

Anonymous Coward | more than 2 years ago | (#38293688)

Not sure if you're serious.... but if the cloud provider drops the ball you're the one losing clients.

Nothing new here (3, Interesting)

vikingpower (768921) | more than 2 years ago | (#38292354)

I am on a project smoke-testing the core app of a major European airport. Same problems there. Management, after having been informed, said: "Not a priority". I guess only their bonuses are "a priority"? I am seriously thinking of giving pointers to the whole project to Anonymous.

Re:Nothing new here (1)

Anonymous Coward | more than 2 years ago | (#38292498)

If you do, that gives new meaning to the name "Anonymous tipster".

Re:Nothing new here (1)

Anonymous Coward | more than 2 years ago | (#38292974)

If you do, that gives new meaning to the name "Anonymous tipster".

Not only new light, also a Slashdot nick, an email address, a homepage, a picture and a pretty good estimate of your nationality. All stored in one of the world's most privacy conscious companies. Oh the irony...

Re:Nothing new here (5, Insightful)

delinear (991444) | more than 2 years ago | (#38292554)

The problem is that the media seem to be in the pocket of big corporations, so when Anonymous inevitably find one of these exploits and steal a bunch of data, the media never seem to hold the businesses who left the door open to account. The lack of security should be a massive topic of debate right now, but instead, outside of certain circles, it's a complete non-issue. During the coverage over here of the various exploits of Anonymous, I don't think I once heard any searching questions asked of the global corporations who allowed a bunch of teenagers to make their security look like the equivalent of a balsa wood door on Fort Knox. That includes the BBC, who should be the least biased since they're not privately owned, but still either don't want to offend the PR departments of companies who feed them half of their content, or just believe the company line and don't bother digging deeper for the real stories.

Re:Nothing new here (1)

Shifty0x88 (1732980) | more than 2 years ago | (#38293548)

THANK YOU!!!

I can't believe companies aren't held responsible for their (lack of) actions when it comes to security!!! It makes me mad!!!!

It seems like we just make the people who find and exploit the security hole out to be the bad guys, even though it was the company's fault in the first place for having the security hole! We are in a cyber world now, and web security should be a higher priority, especially if you store personal information (credit card numbers come to mind).

Now maybe LulzSec and Anonymous aren't going about this the right way, but at least they are pointing at companies and saying, "Hey, this is a huge security hole, fix that sh!t!!"

You would think these companies would care about our information, but they don't. All they care about is that if we don't put extra security in our web site, the CEO gets a bonus, or the dev team gets a bonus for completing it ahead of schedule (even if the product isn't fully ready).

And why are we taking this?? Why aren't we asking, hey wait, why didn't this company protect our information? They collected it, told us it would be safe, and it isn't. We need a public outcry (greater than Anonymous themselves) that will wake companies up and make them do more. Or make it a law that it is the company's fault the information was stolen, and possibly make them pay us for our pain and suffering, as well as fix the problem!!!!!!!!

Re:Nothing new here (0)

Anonymous Coward | more than 2 years ago | (#38294534)

Creating that public outcry will require more awareness-raising than posting on a geek-only news site.

Re:Nothing new here (0)

Anonymous Coward | more than 2 years ago | (#38294872)

If I cared about this sort of thing, I would target companies that must maintain PCI compliance and hold onto the exploit until a few days before major business days for them (e.g. the week before Black Friday for consumer oriented businesses), then notify VISA/MasterCard about the vulnerability with a fully documented method for breaking into the site. That'd make them sit up and take notice. I can just imagine the teeth gnashing and panic when an online store stops being allowed to process credit cards during the most important shopping day of their year.

Re:Nothing new here (3, Interesting)

Just Some Guy (3352) | more than 2 years ago | (#38294834)

the media never seem to hold the businesses who left the door open to account.

To a point, I understand their logic: you don't blame the victim. But a company shipping SQL injection holes in 2011 should be dragged through the mud and humiliated. Maybe someone needs to start a newsroom consulting company where reporters call for technical clarification:

Reporter: Hey, Amalgamated Bookends got hacked by someone who replaced the BIOS on their RAID cards with a webserver. Who's in the wrong?
Consultant: Wow! That's a pretty ingenious trick. I hope they catch that hacker!

Reporter: Hey, Shortcake, LTD got hacked by someone who added "?admin=true" to their website's URL. Is that bad?
Consultant: See if Shortcake's sysadmin is somehow related to the owner. I bet it's his nephew.

Reporter: Hey, Sony...
Consultant: LOL dumbasses

Uh huh (5, Insightful)

TheSpoom (715771) | more than 2 years ago | (#38292360)

Security auditing company produces report that conveniently shows that their services are desperately needed. News at eleven.

Re:Uh huh (1)

Oxford_Comma_Lover (1679530) | more than 2 years ago | (#38292508)

Just because they're biased doesn't mean the report is untrue--it just means there's bias.

Re:Uh huh (1)

moderatorrater (1095745) | more than 2 years ago | (#38292548)

The report seems suspect to me, but the other way. I deal with security at my job, and most applications of any complexity are likely to be open to SQL injection and XSS, especially in PHP, which dominates the web right now. So, if anything, their numbers seem low unless they have a large amount of static HTML sites that they're scanning.

Re:Uh huh (0)

Anonymous Coward | more than 2 years ago | (#38292604)

if anything, their numbers seem low unless they have a large amount of static HTML sites that they're scanning.

In custom applications, the scanners don't really check GET/POST variables properly, for starters.

Any monkey can click a 'Scan Now!' button. Hey, no vulnerabilities found. Bonus check please!

Re:Uh huh (2)

justdiver (2478536) | more than 2 years ago | (#38292614)

I've seen this response before on other articles. But who else other than a security auditing company is going to do security audits? A company's internal IT may do this, and I say MAY do this, but they're certainly not going to publish their results to the public. Should we discredit companies that do automobile crash tests because they find that cars are inherently unsafe and need crash testing done to make them safer?

Yeah but in this case they're probably right... (5, Interesting)

ray-auch (454705) | more than 2 years ago | (#38292788)

Where I work, every time we get told to put our details into some new provider system for expenses, business travel or whatever (happens regularly with corporate changes), we see who can hack it first. We're developers, it's our personal data, why wouldn't we check?

The fraction that are hacked in minutes is probably near 50%, and 32% for SQL injection is probably about right.

I'm not sure which is more depressing - the state of the sites or that even though we have a "security" consultancy practice in house, we get corporate edicts to put our data into sites that we haven't even bothered to audit to the extent of sticking a single quote in a couple of form fields or changing the userid in the url...
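The kind of two-minute check being described, sketched with Python's requests library; the base URL, paths, and field names are invented for the example:

import requests

BASE = "https://newprovider.example.com"

# A single quote in a form field: a raw database error back is a very bad sign.
r = requests.post(BASE + "/search", data={"name": "O'Brien'"})
print(r.status_code, "SQL syntax" in r.text)

# Change the user id in the URL: seeing someone else's record is even worse.
r = requests.get(BASE + "/expenses", params={"userid": "1234"})
print(r.status_code)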

Re:Yeah but in this case they're probably right... (0)

Anonymous Coward | more than 2 years ago | (#38293796)

Where I work, every time we get told to put our details into some new provider system for expenses, business travel or whatever (happens regularly with corporate changes), we see who can hack it first. We're developers, it's our personal data, why wouldn't we check?

Because it's probably illegal under our brain-dead justice system?

Re:Uh huh (1)

TheSpoom (715771) | more than 2 years ago | (#38293536)

Just wanted to clarify with my sibling posts that I'm not even saying that the report is wrong, just that it's incredibly biased. As a professional web developer, I'm quite certain there are many sites with XSS / CSRF / SQL injection issues.

Re:Uh huh (0)

Anonymous Coward | more than 2 years ago | (#38294096)

Yeah, thanks to Soulskill and the rest of the BoingBoing refugees that are taking over here, get ready for -plenty- more of this shit. Oh, sorry, "sponsors" fielding questions that totally won't be paying /. for the ad space.

Re:Uh huh (1)

Jaime2 (824950) | more than 2 years ago | (#38297106)

Yup. However, having just had one of my applications scanned by one of these tools, I can say that if you fail one of these scans, your app is worse than it says it is. I got a mostly clean bill of health, but the feedback I got was ridiculous. For example, the security department says that all pages of all publicly facing web apps should use SSL. Fine. But the scan dinged me for caching pages delivered by SSL. So, do I violate the mandate to use SSL on trivial data? Do I violate the common sense approach of adding cache-control directives to static trivial elements like company logos? All the scan did for me is make me spend 4 hours justifying why the scan was worthless.
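One way to satisfy both the "SSL everywhere" mandate and the scanner is to mark only the genuinely sensitive responses as uncacheable; a sketch with hypothetical Flask handlers (routes and file names invented):

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/logo.png")
def logo():
    resp = make_response(app.send_static_file("logo.png"))
    resp.headers["Cache-Control"] = "public, max-age=86400"  # trivial static asset
    return resp

@app.route("/statement")
def statement():
    resp = make_response("account statement for the logged-in user")
    resp.headers["Cache-Control"] = "no-store"  # sensitive, never cached
    return resp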

Re:Uh huh (0)

Anonymous Coward | more than 2 years ago | (#38304290)

And mechanics almost always find problems. And programmers can always find a "better" way to write someone else's code. Etc.

ObXKCD (0)

Nighttime (231023) | more than 2 years ago | (#38292408)

Let's get Bobby Tables [xkcd.com] out of the way.

Re:ObXKCD (-1)

Anonymous Coward | more than 2 years ago | (#38293112)

Let's get Bobby Tables [xkcd.com] out of the way.

Fuck you.

reddit is down (0)

Anonymous Coward | more than 2 years ago | (#38292460)

why aren't there more comments

It's the Same Everywhere (5, Interesting)

derrickh (157646) | more than 2 years ago | (#38292504)

You have to realize that somewhere on the net there's a surveillance camera forum with guys saying 'businesses are too cheap to invest in multiple cam setups to cover exploitable deadzones'... and there's a locksmith forum with guys saying 'These companies are still relying on double bolt slide locks, when everyone knows they can be bypassed with a simple Krasner tool!'... and there's a car security forum wondering why companies still use basic LoJack instead of the new XYZ system... and don't forget the personnel consulting forum where everyone complains that companies don't invest enough in training to recognize grifting attempts on employees.

It's a never ending list, and to expect everyone to be on top of all of them at all times isn't realistic.

D

Re:It's the Same Everywhere (2)

naranek (1727936) | more than 2 years ago | (#38292668)

It's a valid point, but on the other hand you can't routinely try breaking into random houses or cars with little chance of getting caught, and then use them undetected for your personal gains. Your crappy lock will do unless someone from your neighbourhood personally targets your house. With computer security there is a constant global crime spree trying all the locks all the time. This is why I think that computer security needs to be handled with extra care.

Re:It's the Same Everywhere (0)

Anonymous Coward | more than 2 years ago | (#38295308)

I disagree.

If you wanted to, you could probably drop everything you own right now... Buy a Van or truck... and start driving around and breaking into random houses.

Moving from state to state, and only pawning stolen goods in a different state than where you got it, you would probably be just as safe as the online criminals.

It is pretty easy to print up a fake license plate online for the state you are in. Double points if you first drive around a mall to find a license plate number that will match your car's description.

Better yet, just steal the car to do the break in (i.e. someone using a zombie PC to run a SQL injection attack).

Only the dumb criminals get caught. The smart criminals know how hide, AND when to call it quits.

Re:It's the Same Everywhere (1)

bodangly (2526754) | more than 2 years ago | (#38292710)

It's one thing when physical goods are at stake, but another entirely when private data of customers is at stake. The former is a calculated risk; the latter should be considered sacred. Furthermore, it's not like you can just automate walking down the street and trying to open every lock, but the same thing can be and is easily automated on a computer. Take a look at your firewall logs; chances are you have a fair bit of attempted "break ins" that are just bots scanning an IP range for vulnerabilities. I'd be willing to bet the number of attempted online break-ins absolutely dwarfs the number of attempted physical break-ins.

Re:It's the Same Everywhere (2)

mjr167 (2477430) | more than 2 years ago | (#38293200)

Why isn't private data also a calculated risk, same as physical goods? Both have a cost associated with securing them. Both have a cost associated with losing them. Security is, always has been, and always will be a cost/benefit analysis. If losing data costs less than securing it, then why bother? It's cheaper to clean up the mess than prevent it. Until losing data has a higher cost than security, you aren't going to see it treated well. This idea that virtual things are somehow different from real things needs to go away.

Re:It's the Same Everywhere (1)

bodangly (2526754) | more than 2 years ago | (#38294274)

In regards to things like trade secrets, company information, I agree, it is a calculated risk. But, for companies with customer data (and doubly so for companies storing financial data) they aren't just losing property, they are losing property that really isn't theirs to lose.

Re:It's the Same Everywhere (2)

mjr167 (2477430) | more than 2 years ago | (#38294676)

Then we need to hold them accountable for losing it. We should not expect other people to safeguard our things out of the goodness of their hearts. When you give your physical goods to another party for safekeeping, you sign a contract stating what they are and are not responsible for. When you give packages to UPS, UPS accepts a certain amount of liability if they should damage or lose the package. When you place things in storage, the storage company accepts a certain amount of liability. Before you entrust your physical goods to another entity, you sign contracts stating what your expectations of their safeguarding the goods are and the penalties you will levy against them should they fail in that trust.

We have failed to impose the same standards on those we trust with virtual goods.

The level of effort taken to secure an item should directly correlate to cost and the consequences of losing/damaging it. Some of the tests for shipping containers for spent nuclear fuel rods include surviving a 100 ft drop onto a 6 inch spike, 12 hours in burning jet fuel, and being shot with an RPG. That is all obviously overkill for a package containing my niece's Christmas present.

I can choose to have a letter sent certified/registered mail to be hand delivered by a professional, bonded courier. Or I can choose to ask some kid to drop it off on his way to school. My decision will involve a cost/benefit analysis. Before you can convince people to invest in data security, you need to demonstrate that breaches have real costs that justify the expenses. Network security is really no different than physical security. Some places are going to have broken security cameras that have never actually recorded anything, and others are going to have armed guards monitoring live feeds. The difference between the two is a business decision based on a cost/risk analysis.

One of the biggest problems with data theft is proving who is responsible for the loss. It is easy for me to prove that someone lost my credit card numbers. It's much harder to prove who lost it.

Re:It's the Same Everywhere (2)

bodangly (2526754) | more than 2 years ago | (#38296238)

I think you hit the nail on the head with your first sentence. Obviously, companies aren't securing data out of the goodness of their hearts. I'm really not one for adding more laws, but it seems to me there need to be legal repercussions for negligence in regards to customer data. Of course, the issue is far deeper than a merely technical one. The US government isn't exactly known for holding corporations accountable; they much prefer to hold an individual's feet to the fire. So hold the whole damn company accountable. Even I'm not crazy enough to suggest criminal repercussions for negligent managers; after all, each company is basically its own government, so who's to say which individual is responsible, as you mentioned. So make breaches costly. And I'm not talking about fining a multibillion dollar company $100,000. Base the fines on a (sizable) percentage of gross income per data breach. I can't think of any other reasonable way to give a company incentive to invest in security when it's not crucial to their own business model.

Re:It's the Same Everywhere (1)

cusco (717999) | more than 2 years ago | (#38305088)

Finland had to do something similar to deal with traffic ticket scofflaws. People would get a ton of tickets for running red lights, speeding, etc. and since they had money they just paid them and kept doing the same thing. Now a speeding ticket for a person who makes $1 million/year is 20x the cost of the same ticket for someone who makes $50,000/year. Of course if we did that here in the States the Bush twins would have bankrupted their families in no time (not necessarily a bad thing).

Re:It's the Same Everywhere (1)

azalin (67640) | more than 2 years ago | (#38300970)

In my personal happy place we would have an organization to which people could report security vulnerabilities. The responsible company would be contacted and given some fixed period of time (e.g. three weeks, plus maybe a bonus week if they provide a good reason) to respond and fix it. After that, the information is published and the company faces charges of gross negligence if bad things happen to them and their data.

This would provide some interesting metrics (number of failures, severity, dumbness, response time, ...) and force companies to act. You could even come up with a rating system through this.
Also, all security breaches concerning customer data should be required by law to be made public.
Not going to happen anytime soon, but one can dream...

Re:It's the Same Everywhere (1)

theshowmecanuck (703852) | more than 2 years ago | (#38294652)

Why isn't private data also a calculated risk

It is. It's just a question of what value you put on your 'goods', physical or informational. I lock my place with a regular dead bolt when I leave, the building is secure, and there is a concierge/security. On the other hand, Fort Knox [wikipedia.org] has steel and concrete walls and an entire army base around it, guarding it. It's a question of what level of security you need. Make the calculation. Most people figure information is far more important, since quite often you can lose more when someone steals your identity than if they just stole your car. And if you're responsible for thousands, hundreds of thousands or even millions of peoples' sensitive identity and finance related data, I'd say Fort Knox is more along the lines of the model you are looking for. If the calculation of the company holding my information does not match that, I tend not to use their services in the first place. It would be nice to have open security audits so we could all make those decisions easily.

Re:It's the Same Everywhere (1)

mjr167 (2477430) | more than 2 years ago | (#38295172)

And you then are a sane, rational person. As more people begin making that same choice, companies will adjust their risk models and we will get better security. Unfortunately, it's a slow migration. Look at the number of people still giving information to Sony.

Security ratings would be useful. Pretty much everything else has some kind of consumer rating nowadays.

Re:It's the Same Everywhere (1)

theshowmecanuck (703852) | more than 2 years ago | (#38295424)

And you then are a sane, rational person.

Well, I think so. But I'm sure there are a number of people around here who would argue with you about that. :) But thanks none-the-less. ;)

Re:It's the Same Everywhere (1)

cusco (717999) | more than 2 years ago | (#38305202)

The other day there was a thread about someone who had tested the security of the online payment system they had signed up for and found it disastrously bad, something which I do myself. About a quarter of the posts in the thread were people pounding on him for 'trespassing' and the like, which made no sense to me at all. If I'm going to give them my personal information and/or credit card number then their web site had better be able to handle the very, very basic attack vectors that I know. A surprising number of them over the years couldn't (including an online flight booking system that didn't even use SSL for the payment page), and I generally inform friends/family of that fact. I used to tell the web site owners until one threatened me with a lawsuit (for defamation, not hacking).

Re:It's the Same Everywhere (1)

oPless (63249) | more than 2 years ago | (#38293070)

I'm interested to hear more about this Krasner tool..... (I have a friend who picks locks as his party piece and it sounds like the perfect Xmas present ;)

Re:It's the Same Everywhere (1)

Herkum01 (592704) | more than 2 years ago | (#38294048)

Little Bobby [xkcd.com] does not expect you to be on top of everything. The basic stuff (lock your car doors, use placeholders in SQL statements) should be a reasonable expectation.

Re:It's the Same Everywhere (0)

Anonymous Coward | more than 2 years ago | (#38294542)

The difference is, we are seeing a surge in exactly these types of attacks being used in the wild to perform successful compromises.

false positives (0)

Anonymous Coward | more than 2 years ago | (#38292576)

I have dealt with a number of Veracode reports in the past. There are a lot of false positives - at least in the .NET code I have seen. The application is exploitable, but only if the attacker has access to the server-side code and can call some methods directly. If they can do that, we're past XSS and SQL injection already. On most occasions there is no way to trigger the vulnerability no matter what you post from the browser.

Since most of Veracode's customers rely only on the automated tests they perform, false positives are expected and they can be justified with comments. However, they are most certainly included in this report to make it more "sensational".

Re: sensational (1)

roguegramma (982660) | more than 2 years ago | (#38296160)

However, they are most certainly included in this report to make it more "sensational".

Ehm, no, they are included because it is hard to tell what the program is doing. Not all things can be resolved with rules, e.g. a chain of regex replaces. And most of the time you cannot brute-force it by checking all inputs, either.
All you can do then is determine the possible outputs by some rules, so a false positive is reported whenever your rules are not exact.

Just those? (0)

Anonymous Coward | more than 2 years ago | (#38292610)

Cross site scripting and SQL Injection? Not even cross site request forgery, buffer overflows, cookie poisoning, cache poisoning, clickjacking, clearjacking, or rainbow-table password hacks? So nothing more esoteric like man-in-the-middle attacks by packet injection (TCP spoofing). Talk about low hanging fruit!

What's the real story? (2)

dreemernj (859414) | more than 2 years ago | (#38292834)

The precipitous drop in the "pass" rate for applications was caused by the introduction of new, tougher grading guidelines, including a "zero tolerance" policy on common errors like SQL injection and cross site scripting holes in applications, Veracode said.

Is the story that SQL Injection and XSS are still a problem or that Veracode just recently took a "zero tolerance" stance on SQL Injection and XSS in the applications they test?

Re:What's the real story? (0)

Anonymous Coward | more than 2 years ago | (#38292882)

The story is: sex sells in times of plenty, fear sells in times of famine.

Re:What's the real story? (0)

Anonymous Coward | more than 2 years ago | (#38294988)

If they are so intolerant now, why don't they publish the names of the vulnerable applications? If the vendors of those applications don't react, will they publish details?

Re:What's the real story? (1)

tjarrett (162732) | more than 2 years ago | (#38304526)

Our cofounders (I'm director of product management at Veracode) helped to coauthor the responsible disclosure standard, and it's linked on our web site [veracode.com] . Short version: we don't disclose details about customer findings.

68% isn't hard (1)

rsilvergun (571051) | more than 2 years ago | (#38293006)

since the definition of XSS is ridiculously broad. It took me a while to wrap my head around it when I was starting out, because when you're looking up how to avoid XSS attacks on your page you come across some books that talk about preventing code injection on your forums and others talking about code running in the wrong security context.

no one cares about security (0)

thetoadwarrior (1268702) | more than 2 years ago | (#38293360)

Everyone would rather have their site cheap and delivered straight away than secure. It's no surprise lots of sites are insecure.

Serious Question from AC (0)

Anonymous Coward | more than 2 years ago | (#38293682)

HYPOTHETICALLY SPEAKING FOR ALL OF THIS:

I worked in IT for several years and moved up to lower-middle management (outside of IT); that's where I'm residing in my career currently. My company does a lot of B2B work. I remotely log in to one of our contracted companies. The login to their system has two layers. You log in to the first layer with an RSA token and username, then you log in to the second layer with the same username and your regular password.

The usernames are generated with numbers at the end (COMPANYREP01, COMPANYREP02, COMPANYREP03, etc.); this is how I found out about this flaw in logging in to the system.

On the external site you have to use your username and RSA token password to log in; this part works. The second layer has a major flaw: you can type in any username and it logs you in under that username, regardless of the password you entered. Note above that's how I found the bug; I entered the wrong number for my username and came up with the wrong e-mail. I have LIGHTLY tried this on several other companies that I know contract with this company, and the bug works on all of them. Lightly meaning I logged in to check if it was possible, saw the programs they had access to in order to assure it was that user, and logged out.

Now since there are two layers, I'm assuming they can see how you logged in the first time, but it's really nerve-wracking to know this is out there. What is the best way to let this security flaw get known to the right people?

I don't want to outright report it and possibly get in trouble; I have a good career going right now. I have also read enough horror stories on Slashdot and elsewhere to think this is a bad way to approach it. I do think someone should know so the problem can be fixed. Their IT department is located in another country, as is their helpdesk. Their headquarters is also located several hundred miles away, so I can't just slip a note under someone's door.

Again, this is only a HYPOTHETICAL SITUATION and I have in no way found a problem with that company's system.

My personal favourite (0)

Anonymous Coward | more than 2 years ago | (#38295232)

is the property management company formerly responsible for my apartment. They happily solicit credit card numbers over plaintext HTTP:

http://www.crosbypm.com/forms/realtors-owners-request/

They didn't believe me when I told them, and then I moved...
