
BBC Builds Smartphone Malware For Testing Purposes

Soulskill posted more than 3 years ago | from the mainstream-security-research dept.

Cellphones | 60 comments

siliconbits writes "BBC News has shown how straightforward it is to create a malicious application for a smartphone. Over a few weeks, the BBC put together a crude game for a smartphone that also spied on the owner of the handset. The application was built using standard parts from the software toolkits that developers use to create programs for handsets. This makes malicious applications hard to spot, say experts, because useful programs will use the same functions."

60 comments

"Please turn on JavaScript... (4, Funny)

The MAZZTer (911996) | more than 3 years ago | (#33206934)

Please turn on JavaScript. Media requires JavaScript to play.

OK I'll just....

...heeeey wait a minute. You almost had me there, but you'll have to try harder than that!

Re:"Please turn on JavaScript... (0)

Anonymous Coward | more than 3 years ago | (#33207090)

Please turn on JavaScript. Media requires JavaScript to play.

OK I'll just....

...heeeey wait a minute. You almost had me there, but you'll have to try harder than that!

It's really uncommon for JavaScript alone to infect your machine.

Re:"Please turn on JavaScript... (1)

tokul (682258) | more than 3 years ago | (#33213872)

Please turn on JavaScript. Media requires JavaScript to play.

Actually, it is not. You can display Flash applets and play sounds without JavaScript, even without the HTML5 stuff.

Problem right off the bat.... (1)

davidwr (791652) | more than 3 years ago | (#33206946)

"Please turn on JavaScript. Media requires JavaScript to play."

Must...not...play...must...avoid...infection.

Re:Problem right off the bat.... (2, Funny)

couchslug (175151) | more than 3 years ago | (#33211082)

"Must...not...play...must...avoid...infection."

The nuns told me the same thing.

Is this going to be the new trend? (2, Insightful)

Anonymous Coward | more than 3 years ago | (#33206968)

Same thing that happens on a regular desktop computer.... BUT ON A PHONE! So it's new news!

Re:Is this going to be the new trend? (3, Insightful)

0123456 (636235) | more than 3 years ago | (#33207236)

Someone should have patented installing a trojan... ON A PHONE... and then they could sue anyone else who did so.

Re:Is this going to be the new trend? (0)

Anonymous Coward | more than 3 years ago | (#33207802)

I wouldn't be surprised if Sony has a generic patent for something like that.

Re:Is this going to be the new trend? (1)

von_rick (944421) | more than 3 years ago | (#33208720)

Plus you would need a proprietary cable to have it transferred from another device.

How it is news (2, Insightful)

SuperKendall (25149) | more than 3 years ago | (#33208646)

Same thing that happens on a regular desktop computer.... BUT ON A PHONE! So it's new news!

The news is that phone OSes are being shipped in 2010 that don't prevent the common security problems we've seen on desktops for the past few decades.

Re:How it is news (3, Interesting)

BasilBrush (643681) | more than 3 years ago | (#33210094)

In many ways mobile phones are more secure than desktops. Sandboxes for apps, strong permissions schemes, app certification etc. But to counterbalance that, they have new facilities as standard that are more dangerous if compromised. Mobile phone charges, SMS, GPS, microphone, camera etc.

Yes, but no (1)

SuperKendall (25149) | more than 3 years ago | (#33210218)

In many ways mobile phones are more secure than desktops.

I agree with what you said.

I agree mobile apps are already more secure by default than desktop apps, but that is why the process of allowing the user to remove some security blocks is even more crucial to get right: a mobile app is only as secure as the degree to which you maintain the security blocks around it.

Re:Is this going to be the new trend? (0, Flamebait)

MobileTatsu-NJG (946591) | more than 3 years ago | (#33209136)

Same thing that happens on a regular desktop computer.... BUT ON A PHONE! So it's new news!

You and the idiot that modded you up don't really understand that this isn't a patent story.

Re:Is this going to be the new trend? (2, Interesting)

Snaller (147050) | more than 3 years ago | (#33209770)

On a computer you can have a firewall - you can't on a phone.
Also, for Android, because of Google's stupid design, if an app wants to include ads it needs to have internet access. So everything wants to go on the internet.

What they should have done was have an OS module which returns the ads, so the app didn't need internet access.

Re:Is this going to be the new trend? (1)

vlueboy (1799360) | more than 3 years ago | (#33210276)

What they should have done was have an OS module which returns the ads, so the app didn't need internet access.

Yes, they should have... but "Hindsight is 20/20" applies here: Android is only 2 years old, so it couldn't be held accountable until Apple came out with the idea as a major OS/phone paradigm and loosed it on the public a couple of months ago.

That's Kind of what Pete Townshend Said (1)

syntap (242090) | more than 3 years ago | (#33206982)

And what the civilian/press airport "security testers" said. Will the press be brought to justice too?

Not iPhone or Android (3, Funny)

LinuxIsGarbage (1658307) | more than 3 years ago | (#33207000)

We know it's impossible for Apple or Linux to get malware, so clearly it was only done for Windows Mobile.

I didn't see them mention it, but I think it was actually a BlackBerry?

Re:Not iPhone or Android (1)

Vandilzer (122962) | more than 3 years ago | (#33207580)

Right because nobody had every jailbroke an iPhone with a PDF exploit

Re:Not iPhone or Android (0)

Anonymous Coward | more than 3 years ago | (#33207974)

it's impossible for Apple or Linux to get malware

nobody had every jailbroke an iPhone with a PDF exploit

Woosh!

Also: nobody has ever jailbroken

I can understand you not getting the joke, but really now... 3 words out of 12? That's abysmal even by slashdot standards.

what (1)

The MAZZTer (911996) | more than 3 years ago | (#33207044)

"A very obvious tell-tale sign on the phone is all of a sudden your battery life is deteriorating," he said. "You wake up one morning and your battery has been drained then that might indicate that some of the data has been taken off your phone overnight."

*snicker* Quick! Put more data in my phone to charge it back up!

I can see where he's coming from with that, though: a smart smartphone would conserve battery whenever possible by powering down components, so constant active use could drain the battery more quickly. But any GOOD malware would only send out data at regular intervals, not all the time, so this would be a useless check. A BAD malware author would learn this pretty quickly after he DDoSes his own servers.

No defense (4, Insightful)

Caerdwyn (829058) | more than 3 years ago | (#33207298)

What's the difference between "malicious" and "beneficial", when it comes to software?

Just about every "malicious" action that malware takes is not "malicious" for what it actually does (set cookies, record passwords, send data in response to user actions, create accounts, encrypt things). All of these are also functions you sometimes want software to do. The maliciousness is in who the data gets sent to, in doing one thing while presenting another in the UI, or in not being announced at all. Therefore, how can you programmatically tell malware from not-malware? You can't. And therefore, if the user has the ability to install software, all you have to do to get malware onto a device is lie about it.

Malware isn't defined by what it does. It's defined by deception and lack of consent, and only by deception and lack of consent.

And if you want widespread adoption of your malware? Just wait. Make the "trojan" part of the malware (the game, app, etc.) useful, and do ONLY that part, for a while. Don't start stealing passwords until 6 months later. Include the encryption-extortware in the 3.2 update. Cache the keystrokes and send them only when you embed a keyphrase in your product website, and upload them during an "expected" transaction such as an upgrade or content download. Build the reputation for trust and the block of reviews saying "it's never caused me trouble", then cash it in all at once.

Short of human review of the software in question prior to general availability, you're screwed. (Even then you might be, as human review isn't infallible, but it's certainly not useless.) With this in mind, whether or not you agree that it's worth the hassle and restrictions, isn't Apple's App Store strategy just a little more understandable from an objective point of view?

Maybe it's not ALL about moustache-twirling and staking out new liver donors. Maybe, just maybe, at least part of Apple's "walled garden" motives are benevolent. Maybe it's not a simple question, but a complex one, requiring not simple answers, but complex and rigorous thought. And maybe it's not black-and-white, but shades of gray with the weighting different for every user.

Re:No defense (3, Informative)

Anonymous Coward | more than 3 years ago | (#33207442)

Apple's walled garden does nothing to prevent the kind of malware you described. They don't actually inspect an app's code, they just run it (in an emulator presumably) and see if it does anything they don't like. Getting hidden malicious functionality through the approval process would be a cinch.

Re:No defense (3, Interesting)

VortexCortex (1117377) | more than 3 years ago | (#33207586)

Apple's walled garden does nothing to prevent the kind of malware you described.
Getting hidden malicious functionality through the approval process would be a cinch.

Yep, even teenagers can get trojan apps past Apple's approval process [techtipsgeek.com].

Re:No defense (1)

BasilBrush (643681) | more than 3 years ago | (#33209802)

Of course it's possible to hide malware in an application and get it into an App Store. However the value of the single App Store approach is demonstrated by the very example you use. The app was only on the web store for a few hours. It was removed when the fact that it wasn't as described was made known. And presumably the developer has been permanently barred from the store.

In a more open system, where anyone can run an app store, it would be practically impossible to stop the malware appearing and reappearing on the stores.

Though this particular malware was only harmful to networks' business models, not to consumers, obviously the same argument applies to malware that is malicious towards consumers.

The threat model (2, Insightful)

tepples (727027) | more than 3 years ago | (#33207504)

What's the difference between "malicious" and "beneficial", when it comes to software?

From the user's point of view, the threats are modeled rawther well on the Bitfrost page [laptop.org]. But from a platform owner's (e.g. Apple, Microsoft, Sony, Nintendo) point of view, the threats are anything that would either tarnish the brand or compete with the platform owner.

Re:No defense (3, Interesting)

erroneus (253617) | more than 3 years ago | (#33207564)

Why does this remind me of Bonzi buddy?

I gave my sons their own computer when they were in elementary school. At the time, it was somewhat rare and they were excited by it. They had internet access which I vaguely watched... (meaning checking for porn) and all seemed well.

Keep in mind that I had NEVER had problems with pop-ups and malware or any of that before simply because I instinctively knew better as do many people here on slashdot. (Not many of us had to learn the hard way... we pretty much already knew... what? install this program to see the video? WE don't fall for that one... but many do!) So it didn't occur to me that my sons were not yet as skeptical as I.

So yeah... Bonzi buddy. They found this cute thing and installed it and it was fun for them to play with. It told jokes and they could type things in for it to say. Before long, the computer was doing things they didn't tell it to do. I remember the first time my younger son rushed downstairs to tell on his older brother for having naked pictures on the computer screen! The older followed behind closely and explained that they just started appearing out of nowhere! (Pop-ups! I had HEARD about them but never saw them before at the time!)

So I reloaded the machine, let them install Bonzi buddy again, and before long it was happening again. It didn't take me long to realize what Bonzi buddy was up to. The sad part was that Bonzi buddy attracted kids and exploited them right along with the adults.

In short, there's nothing new or revolutionary in your idea. It has been done a lot already.

In fact, Microsoft did that too. They could have secured their OSes from being copied from the very beginning. Instead, they used piracy (free copying) as a means of distribution to choke out the competition. Then, once they achieved the "critical mass" their revealed secret documents spoke of, they started locking their software down more and more. It's not like free copying wasn't a problem from the beginning... it's just that it was also useful in the beginning and stopped being useful once their ends were achieved.

Re:No defense (3, Interesting)

scamper_22 (1073470) | more than 3 years ago | (#33207666)

How about requiring all software be written and approved and digitally signed by licensed engineers with legal responsibility.

That way, if malware gets in, you have someone to blame.

Pardon me for combining job protection with societal benefit :P ... you know, like doctors and lawyers do.
Sure, it stifles open access... but with the benefit of quality and job protection...

Re:No defense (1)

mrnobo1024 (464702) | more than 3 years ago | (#33208342)

Yes, let's hand over to the government the power to dictate to everyone what they can and can't do with the hardware they bought. Governments never abuse their power!

Re:No defense (1)

bsDaemon (87307) | more than 3 years ago | (#33210598)

Yeah, and neither to private insurance companies, retailers, etc. Face it -- everyone is going to exploit whatever power they can when ever they can for as long as they can and only children believe otherwise. Some people are just way better at it than others.

Re:No defense (1)

Abcd1234 (188840) | more than 3 years ago | (#33219270)

Yes, let's hand over to the government the power to dictate to everyone what they can and can't do with the hardware they bought.

Err, that's not what he said.

What he said was that if I release a piece of software, it's digitally signed so that it can be tracked back to me. If the application does something malicious, I can be identified and am thus on the hook.

This doesn't stop anyone from installing software. All it does is facilitate accountability.

'course, the real problem with this scheme is it stifles freedom of speech. After all, what if I release a piece of software that lets you cut a hole through the Great Firewall of China? With this scheme, they'd be able to track me down as the author.

Re:No defense (0)

Anonymous Coward | more than 3 years ago | (#33208404)

good luck with that. I hope you'll enjoy paying $2000 for MS Word.

That's the Apple iPhone development model (1)

SuperKendall (25149) | more than 3 years ago | (#33208574)

How about requiring all software be written and approved and digitally signed by licensed engineers with legal responsibility.

You sign up for iPhone development and give them your name and address.

Apart from the licensing thing, it's pretty much the same - if you tried something bad you would be legally liable and Apple could find you.

I don't know that the licensing thing really buys anyone but Apple anything, making it easier for them to determine whether they should register you or not. It may go that way someday, I suppose, but imagine the hue and cry if they do that...

Re:That's the Apple iPhone development model (2, Interesting)

R3d M3rcury (871886) | more than 3 years ago | (#33209166)

You sign up for iPhone development and give them your name and address.

And I'm certain that Apple checks to make sure that those names and addresses are completely legit.

Of course, I also believe in the Easter Bunny.

A couple of years ago, I used one of my developer discounts to buy a machine for a co-worker. We had it shipped to his house. For the next six months, when I signed on, my account listed my first name and his last name.

Oh, but you can always look up the info? Here's a copy of Hitchhiker's Guide to the Galaxy [apple.com] [Redirects to iTunes]. Go click on "Jeffrey Beyer Web Site". Hell, if Apple can't even catch things like that in their own store, I don't hold much stock in them being able to ferret out a clever hacker.

Re:That's the Apple iPhone development model (2, Interesting)

SuperKendall (25149) | more than 3 years ago | (#33209472)

And I'm certain that Apple checks to make sure that those names and addresses are completely legit.

Why is that so hard to believe?

If you are selling any app, they have to get bank contacts from you, and it cannot be just any bank - it has to support SWIFT codes, which means a pretty large bank. Between the two, Apple has a pretty good lock on who you are.

For free apps they do not require a bank account but they do verify your address.

A couple of years ago, I used one of my developer discounts to buy a machine for a co-worker. We had it shipped to his house. For the next six months, when I signed on, my account listed my first name and his last name.

Right, but they don't have the same degree of controls around developer accounts as they do iPhone developer accounts. It's a different level of checking (as in they actually do some). They take knowing who you are much more seriously.

Re:That's the Apple iPhone development model (2, Interesting)

BasilBrush (643681) | more than 3 years ago | (#33209884)

In addition to the bank checks that the other poster mentioned, you also have to supply them with tax information, and company incorporation documents if applicable. The process took a few weeks for us, and entailed a few phone calls and physical mail in both directions.

Apple certainly knew we were more than a made up name before we were allowed to upload our first app.

So maybe that Easter Bunny is more real than you originally thought.

Re:No defense (1)

stagg (1606187) | more than 3 years ago | (#33208708)

What's the difference between "malicious" and "beneficial", when it comes to software?

Malware isn't defined by what it does. It's defined by deception and lack of consent, and only by deception and lack of consent.

This is an extremely important sentiment, and it cannot be repeated enough.

You can tell right away... (2, Insightful)

yttrstein (891553) | more than 3 years ago | (#33207494)

When someone's been to Blackhat recently. There were at least half a dozen step-by-step presentations about every aspect of cellphone malware.

Finally a good title (1)

ninjacheeseburger (1330559) | more than 3 years ago | (#33207540)

I saw this story on a Google News feed, and the headline read "Smart phone app written by BBC reporter steals data". Now, how many people will read that title and never download any apps by the BBC?

Fearmongering Bullshit... (4, Insightful)

Jahava (946858) | more than 3 years ago | (#33207616)

I'll open with a disclaimer: most of my smartphone experience and awareness is centered around Android phones. That said, this article is yet another with a standard theme: "Remember, you stupid public, that smartphones are still computers". This is another in a set of articles about people who write phone applications requesting a smorgasbord [wikipedia.org] of permissions, receiving them from the user, and using them maliciously. Put simply, this is another in the formulaic series:

Mystique of Computers * Fear of Malware * Novelty of Phones = Profit

Chris Wysopal, co-founder and technology head at security firm Veracode, which helped the BBC with its project, said smartphones were now at the point the PC was in 1999.

No offense, but Chris Wysopal is an idiot. Modern smartphones run every application in a sandboxed per-application environment with fine-grained permission controls that are, to some degree, opaque to the user. These applications, by a well-defined default, must exist in a central repository managed by a powerful authority and receive realtime user reviews. This is nothing like PCs in 1999 (remember, that was Windows 98). Then again, he's certainly quite biased, as his company [veracode.com] makes a living certifying applications.

All of the information-stealing elements of the spyware program were legitimate functions turned to a nefarious use.

Yes, of course they were. The BBC didn't actually do anything innovative, like finding an exploit or breaking out of the sandbox. They just abused the privileges the OS granted them to the fullest extent. Is this actually a problem? Given any set of privileges and any degree of fine-grained control, you can still abuse whatever you're given to the fullest extent.

At least one fundamental thing failed here: the user installed a phone game that requested privileges [android.com] such as:

  • SEND_SMS: Allows an application to send SMS messages.
  • INTERNET: Allows applications to open network sockets.
  • READ_CONTACTS: Allows an application to read the user's contacts data.
  • READ_OWNER_DATA: Allows an application to read the owner's data.
  • ... to name a few
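For readers unfamiliar with the platform: on Android, an app declares these capabilities up front in its manifest, and the user must accept the whole list at install time as an all-or-nothing choice. A hypothetical manifest for a game like the BBC's might look something like this (the package and label are invented for illustration; the permission names are the ones listed above):

```xml
<!-- AndroidManifest.xml (sketch): a "game" declaring far more access
     than a single-player game plausibly needs. The user sees this
     list exactly once, at install time. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.crudegame">
    <uses-permission android:name="android.permission.SEND_SMS" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.READ_CONTACTS" />
    <uses-permission android:name="android.permission.READ_OWNER_DATA" />
    <application android:label="Crude Game" />
</manifest>
```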

As the owner and user of the device, it is ultimately your responsibility to determine what software you install on your phone. If you are downloading a single-player game that asks for these kinds of permissions, you had damned well better check out the source of that game. If it's not a company that you are comfortable trusting and you still install it, then you are (frankly) stupid. BBC does, of course, presume that its users are stupid.

But that's the problem ... no amount of protection will let stupid people have free access to a computer and remain protected. You have to strip away something from one of these factors ... either whittle down free access or reduce the base of stupid users. Better design models only serve to decrease the threshold required for either.

Is there an inherent issue with those kinds of permissions being available and grantable? Sure, there is! Applications, especially closed-source ones, are effectively black boxes. The permissions that I am presented with at installation-time are, in fact, my only real insight as to what the application is capable of doing. Arguing for a finer grain of control is pointless, though. Regardless of what permissions are grantable, you will never circumvent the fundamental problem that stupid users will blindly install applications. Presenting them with more information will not change that fact.

It is the job of the OS vendor (Apple, Google, RIM, etc.) to declare a set of permissions that reasonably balances the dangers of overly-general capabilities against the nuisance of overly-verbose permission sets. Walk too far to the left and you can't tell what the application is doing. Walk too far to the right and you'll go the way of EULAs; people will not read (or understand) the lists. That said, I give Google (see disclaimer) mad props for their choices... I think they do a great job walking that line.

However, it can be difficult to separate malicious programs from legitimate ones because the connectedness of a mobile means many applications need access to contact lists and location data. For example, gamers might want to brag to their friends about achievements, post high scores to Facebook or play with a friend if they are close by. All of which would need legitimate access to those sensitive details.

How about no? Granted, there are many possible legitimate uses games can have for Internet access, SMS access, etc., but I am the user. I don't have to install these games. If there is an application that asks for combinations of privileges like "read user data" and "connect to the Internet", I should not install it unless I trust the vendor. In a perfect environment, the farther an application's privilege requests deviate from its core purpose, the fewer people should install it and the more its market share should be hurt. We're not at "perfect" yet, but the state of things isn't that bad.

What needs to happen is a general education. Fearmongering proofs of concept don't help; they just cause policy to be created to supplement vendors. The only effective way to eliminate stupid users is to impose consequences on them for their ignorance, and fortunately, in this ecosystem, the malware takes care of that for us.

Re:Fearmongering Bullshit... (1)

SQLGuru (980662) | more than 3 years ago | (#33208282)

If it's not a company that you are comfortable trusting

To be fair, the BBC is one company that even a lot of skeptical, careful people would think they could trust. I don't have the app, so I'm not sure how it was listed, but if it said BBC, I could see how people would tend to trust it.

Re:Fearmongering Bullshit... (2, Interesting)

Jahava (946858) | more than 3 years ago | (#33208564)

If it's not a company that you are comfortable trusting

To be fair, the BBC is one company that even a lot of skeptical, careful people would think they could trust. I don't have the app, so I'm not sure how it was listed, but if it said BBC, I could see how people would tend to trust it.

Absolutely, and that is a wonderful part of the system. If BBC actually released this application maliciously under their trusted name, and anyone found out what it was doing, then BBC would face a hailstorm of complaints, bad press, and lost trust. This would almost certainly affect its bottom line.

Users trust the BBC precisely because it has a lot to lose by betraying that trust.

Re:Fearmongering Bullshit... (1)

HybridJeff (717521) | more than 3 years ago | (#33210740)

As this is Slashdot, I guess I shouldn't be surprised that no one RTFA. This was an app written by a BBC journalist for the express purpose of testing how vulnerable the platform was, so that he could write a story about it. It was never uploaded to any app store; he only tested it on one single development phone: his own.

Cannot rely on education as solution (1)

SuperKendall (25149) | more than 3 years ago | (#33208494)

What needs to happen is a general education.

If you have to rely on that, the system will not work. Users don't want to be "educated", and will not be. They want to buy and use something. You can't make users do something they don't want to do, any more than you can force everyone to carefully listen to the flight attendants on an airline explain the safety procedures beforehand.

And frankly, I do not see that as unreasonable.

I like the Android security model with fine-grained permissions, but I don't like how you agree to all of them up front. In a long list it's too easy to slip in some permission that seems meaningless to the user; in fact, I daresay to most people they all would be.

Instead, when a user opens an app, they should be asked at the time of access to a resource whether it's OK to access that resource. Now, I'm sure you're starting to be reminded of Vista UAC and its innumerable "Are you sure?" dialogs. But I don't mean every time, I mean only once or twice, after which the app is granted that permission permanently.

Yes it means that an app could potentially do something later on after being granted some permission. But it also would block a lot of obviously wrong things from working, like opening a media player and then being asked if it's OK to SMS a big ol' number you do not recognize.

This change gets you a much longer way to letting users do what they want without terrible implications for them.

Also, one final change: all security dialogs, instead of saying something like "allow" and "deny", should be worded more like "allow" and "Hell No".
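The grant-once scheme proposed above can be sketched in a few lines. This is a toy model, not any real Android API; the PermissionBroker name and the prompt callback are invented purely for illustration:

```python
# Toy sketch of an ask-on-first-use permission broker (hypothetical,
# not a real phone OS API). The first time an app touches a resource,
# the user is prompted; the decision is then cached permanently, so
# the user is not nagged on every subsequent access.

class PermissionBroker:
    def __init__(self, prompt):
        # prompt(app, resource) -> bool; on a real OS this would be a
        # system dialog (worded "Allow" / "Hell No", per the comment).
        self.prompt = prompt
        self.grants = {}  # (app, resource) -> bool, remembered forever

    def check(self, app, resource):
        key = (app, resource)
        if key not in self.grants:      # first access: ask the user once
            self.grants[key] = self.prompt(app, resource)
        return self.grants[key]         # later accesses: cached decision


# A media player asking to send SMS looks obviously wrong at the moment
# of access, which is exactly the point of the scheme.
asked = []

def fake_prompt(app, resource):
    asked.append((app, resource))
    return resource != "SEND_SMS"       # the user denies the odd request

broker = PermissionBroker(fake_prompt)
broker.check("media_player", "INTERNET")   # prompted, then granted
broker.check("media_player", "INTERNET")   # cached, no second prompt
broker.check("media_player", "SEND_SMS")   # prompted, then denied
```

The tradeoff the thread goes on to discuss is visible in the cache: once a grant is stored, the app can use it forever, so a patient trojan only has to behave well until the prompt has been answered.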

Re:Cannot rely on education as solution (2, Interesting)

Jahava (946858) | more than 3 years ago | (#33208782)

Instead, when a user opens an app, they should be asked at the time of access to a resource whether it's OK to access that resource. Now, I'm sure you're starting to be reminded of Vista UAC and its innumerable "Are you sure?" dialogs. But I don't mean every time, I mean only once or twice, after which the app is granted that permission permanently.

Yes it means that an app could potentially do something later on after being granted some permission. But it also would block a lot of obviously wrong things from working, like opening a media player and then being asked if it's OK to SMS a big ol' number you do not recognize.

You mentioned the shortcomings yourself; this wouldn't stop any serious malware author. They would either wait out whatever "trial period" you impose, or find a clever way [computerweekly.com] to masquerade their malice as something innocent. With application models like these, you really can't beat around the bush, and solutions that try to mitigate will only find their limits probed, explored, and worked around.

If you have to rely on that, the system will not work. Users don't want to, and will not be "educated" to. They want to buy and use something. You can't make users do something they don't want to, any more than force everyone to carefully listen to the flight attendants on an airline explain the safety procedures beforehand.

Education isn't as impossible as you seem to think it is. It is a compromise between the vendors and the users. I'll use browsers as an example: you'll never get Joe Averageuser to validate SSL certificate roots of trust by clicking through dialogues. You will, however, get very far by giving him a simple piece of advice, like "check the color of the bar before you use a banking website" [mozilla.com].

That is what phone OS's need to be designed to do (and they are, hence the "bullshit" in my title). They need to simplify the absurdly-complex system that is a mobile phone down to a manageable set of qualities that everyday users can handle and make intelligent decisions based on. You will always find your idiots, but smart OS / UI design can put the top 99% [wikipedia.org] of people in a position to make the right call, and that's very powerful.

Existing mobile phone UIs certainly have plenty of room to grow, but the vendors understand the psychological and intellectual landscape, and I believe strongly that they are moving in the right direction at a very respectable pace.

Re:Cannot rely on education as solution (1)

SuperKendall (25149) | more than 3 years ago | (#33208986)

You mentioned the shortcomings yourself; this wouldn't stop any serious malware author.

Just because it cannot stop all attacks does not mean it should stop none. It's still a better solution.

The key to security is defense in depth, and that means improving any one system when you can, because the system as a whole benefits from it. And it gives the user a much greater tangible awareness of particular points of access to resources, which in turn inadvertently provides the very education you wanted to give the user in the first place!

you'll never get Joe Averageuser to validate SSL certificate roots of trust by clicking through dialogues. You will, however, get very far giving him a simple piece of advice, like check the color of the bar before you use a banking website.

You'll never get anywhere with that either. Ask any user in real life if they are looking for that; the answer 99 times out of 100 (for Firefox users) would be no. A fair number of people may know to look for the lock at this point, though, since it is ubiquitous. Probably not a majority though.

The whole reason it works is because banks use it, not because users look for it. Users don't have to look for or think about anything; it simply protects them.

That is what phone OS's need to be designed to do (and they are, hence the "bullshit" in my title). They need to simplify the absurdly-complex system that is a mobile phone down to a manageable set of qualities that everyday users can handle and make intelligent decisions based on.

And my suggestion provided a way to give a user a real context in which to make a security choice, instead of presenting a laundry list of security items which the average user will not have means to judge.

the vendors understand the psychological and intellectual landscape, and I believe strongly that they are moving in the right direction at a very respectable pace.

They are moving that way but I would call it more fits and starts (or perhaps lurching) rather than a "pace", which suggests constant and steady forward progress.

Security systems will inadvertently take a step back sometimes, but then two steps forward when they learn. Over time it does ratchet up if vendors are really serious about security.

Re:Cannot rely on education as solution (1)

Jahava (946858) | more than 3 years ago | (#33209234)

Just because it cannot stop all attacks does not mean it should stop none. It's still a better solution.

The key to security is defense in depth, and that means improving any one system when you can, because the system as a whole benefits from it. And it provides a much greater tangible awareness from the user about particular points of access to resources, which in turn is inadvertently providing the very education you wanted to give the user in the first place!

If I create a system with known limitations, someone seeking to exploit that system will take those limitations into account. Your idea serves to change the behavior of the malware, not to inhibit it or diminish its effectiveness. You're effectively placing hurdles in the path of the malware and hoping it gets tired of jumping them. Even worse, you are making your user base jump those same hurdles! Your entire premise is based on the idea that the software (or malware authors) will get tired before your users, and I can assure you this will never be the case.

You'll never get anywhere with that either. Ask any user in real life if they are looking for that, the answer 99 times out of 100 (for firefox users) would be no. A fair number of people may know to look for the lock though at this point since it is ubiquitous. Probably not a majority though.

I can only offer my own experiences here, but I have had near 100% success getting my friends, family, girlfriends, and coworkers to check that icon. "Look at the color, look at the lock, and make sure the name is right, and you'll be secure." I think it is a very effective and well-designed UI modification. I'm pretty proud of them :) YMMV, I guess.

And my suggestion provided a way to give a user a real context in which to make a security choice, instead of presenting a laundry list of security items which the average user will not have means to judge.

This is also pretty subjective, but I don't think these kinds of lists [androidtapp.com] are overly-complex or obscure. They very clearly spell out exactly what the application is doing in words that everybody can understand. I believe that the issue, if any, is that users just aren't used to being presented with that kind of useful information, and they have to get accustomed to it.
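For reference, those install-time lists are generated from the permissions an application declares in its manifest. A hypothetical excerpt (the package name and exact permission set are illustrative, not taken from the BBC's app):

```xml
<!-- Hypothetical AndroidManifest.xml excerpt: each <uses-permission> entry
     becomes one line in the list the user sees before installing. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.tictactoe">
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.SEND_SMS" />
    <uses-permission android:name="android.permission.READ_CONTACTS" />
</manifest>
```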

Also only offering my own experiences here, but I know a lot of non-technical people who would refuse an app because of weird permissions. I also know someone who will install an app no matter what (my brother), but he can be an idiot sometimes.

Anyways, not to make this into a flamewar; I suppose we'll have to agree to disagree.

Re:Cannot rely on education as solution (1)

SuperKendall (25149) | more than 3 years ago | (#33209626)

If I create a system with known limitations, someone seeking to exploit that system will take those limitations into account.

Which is why it's better to replace a worse system with one somewhat less worse, ASAP.

Because someone seeking to exploit Android has, as you said, taken the current LACK of limitations into account.

Your idea serves to change the behavior of the malware

It ALSO eliminates malware vectors from many classes of applications, instead of easily attaching SMS exploits to every kind of app under the sun.

You're effectively placing hurdles in the path of the malware

I am taking hurdles that are already there and making the user remove them when it makes sense, instead of making them sign a paper saying they are OK with hurdles located somewhere distant being removed because they are an eyesore.

You are never PLACING hurdles in front of anything. Instead you should only ever be selectively removing system access controls when and where it makes sense.

Your entire premise is based on the idea that the software (or malware authors) will get tired before your users

And your premise seems to be that means we shouldn't even bother to tire the software even though it has many beneficial side effects.

They very clearly spell out exactly what the application is doing in words that everybody can understand.

Yes they do. And they are impossible to judge. As I said in response to another post somewhere, you are asked to choose before you have even used an app. Well how do you know its request to send SMS is unreasonable? I cannot think of a single application type that couldn't plausibly share some information via SMS. And the same is true for many users - in a media player they would be like "cool, I can SMS movies I like to friends".

Also only offering my own experiences here, but I know a lot of non-technical people who would refuse an app because of weird permissions.

SMS is weird to no-one though. That's the problem. You have no context for the choice you are being asked to make. You only have to have an app where the choices offered do not seem weird at all, and that is easy to do.

I am saying this model is trivial to bypass, so put in something that is less trivial and also makes the user think about what they are granting when it matters.

Anyways, not to make this into a flamewar; I suppose we'll have to agree to disagree.

We can disagree but it's very dangerous for Android to continue doing this - there will be many more, and many worse exploits. That's why I'm so fired up about this because I see it as a huge looming threat and I don't want to repeat the mistakes Microsoft made for any mobile OS moving forward.

Android has a generally good security model but it's undermined almost wholly by the presentation of user interaction for adjusting app controls.

Re:Cannot rely on education as solution (1)

Jahava (946858) | more than 3 years ago | (#33210542)

I am taking hurdles that are already there and making the user remove them when it makes sense, instead of making them sign a paper saying they are OK with hurdles located somewhere distant being removed because they are an eyesore. You are never PLACING hurdles in front of anything. Instead you should only ever be selectively removing system access controls when and where it makes sense.

I see what you're saying, but that methodology has the serious limitation that it annoys the user. Furthermore, the security derived from your implementation is directly proportional to how many times you annoy the user - namely, the threshold that you set. At an extreme (pop-ups every time an SMS gets sent) it is much more secure than the current implementations, since the user will immediately notice unsolicited SMS messages. However, anything short of that adds basically nothing, as all the malware has to do is ride out the threshold. And, unfortunately (speaking to user behavior), whatever the threshold is, if it's not very low, the users will likely get frustrated with the interruptions and automate through it (e.g., Internet Explorer + ActiveX installation popups, or UAC).

It's sad, because a handful of power users could definitely take advantage of the system, but I am of the general opinion that those power users are not typically the ones that need protection from malware.

Yes they do. And they are impossible to judge. As I said in response to another post somewhere, you are asked to choose before you have even used an app. Well how do you know it's request to SMS is unreasonable? I cannot think of a single application type I could not see some potential in sharing information from via SMS. And the same is true for many users - in a media player they would be like "cool, I can SMS movies I like to friends".

...

We can disagree but it's very dangerous for Android to continue doing this - there will be many more, and many worse exploits. That's why I'm so fired up about this because I see it as a huge looming threat and I don't want to repeat the mistakes Microsoft made for any mobile OS moving forward.

Android has a generally good security model but it's undermined almost wholly by the presentation of user interaction for adjusting app controls.

I can only speak to Android, but I think I can say something particularly useful in this context. Specifically, on an Android platform, an application that wants to send text messages will likely do so not by requesting permissions to send SMS messages, but rather by soliciting Android's assistance via Intents [android.com]. In this manner, if my application has text or media that it wants to send to someone else, it simply wraps it in an Intent describing the nature of it and passes it to Android, which proceeds to connect it to the appropriate service, oftentimes with user interaction.

Using Intents, applications can send data using services like Internet or SMS without, themselves, needing to request permissions to directly access the SMS system. Through Intents and IntentFilters, a type of ad-hoc network-of-trust is established: if I start only with non-malicious services, and never install apps that can send SMS messages, then malicious SMS messages cannot be sent; any app that wants to send SMS messages has to go through a trusted path, which (in a reasonable installation) involves things like pre-populating text fields and presenting the user with a "Send SMS" dialog.

As such, your theoretical SMS-sending media player should exist without needing to request a single permission. It'll work wonderfully and automatically integrate with the software on the phone to send media updates via SMS, Facebook, GMail, and any other paths that are present on your phone. Any application that requests "Send SMS" capabilities is a huge red flag; it wants to send SMS messages outside of the scope of the existing infrastructure. Unless I have a good reason to allow this (e.g., Google Voice), it's an automatic "no", no confusion necessary.
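To make the Intent path concrete, here's a minimal sketch (Android Java; the clip URL is made up) of how an app can hand an SMS off to the user's own messaging app without ever holding the SEND_SMS permission:

```java
import android.content.Intent;
import android.net.Uri;

// Inside an Activity: share a clip over SMS without requesting SEND_SMS.
// The Intent is handed to the user's SMS app, which pre-populates the
// message; nothing is sent until the user presses Send themselves.
Intent share = new Intent(Intent.ACTION_SENDTO, Uri.parse("smsto:"));
share.putExtra("sms_body", "Check out this clip: http://example.com/clip");
startActivity(share);
```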

Re:Cannot rely on education as solution (1)

SuperKendall (25149) | more than 3 years ago | (#33210808)

I see what you're saying, but that methodology has the serious limitation in that it annoys the user.

Only very slightly - I have direct experience with the system, since it's what iOS uses to grant apps permission to use location. It works well because you only see the dialog twice; after that it's silent. That system is a great balance between over-tasking the user with questions and simply putting up a dialog that people are going to accept no matter what. And since you see it around the task you are trying to perform, it makes a lot of sense to you what is being asked.

Furthermore, the security derived from your implementation is directly proportional to how many times you annoy the user

Not at all. The security of the system mainly comes from the aspect of asking at the time a resource is needed and after a user has seen the application running, as opposed to asking before the user knows if it's reasonable to grant that permission.

I could see leaving some things in the initial dialog box - like "access internet". Things people are just always going to agree to, just leave 'em out (frankly, I don't see the need to even have some of those permissions). But any resource that deserves more attention, like SMS or address book or location, that is much better done at the time the app wants to use it.

Through Intents and IntentFilters, a type of ad-hoc network-of-trust is established: if I start only with non-malicious services, and never install apps that can send SMS messages, then malicious SMS messages cannot be sent

I am quite familiar with the Android SDK and intents.

That whole system doesn't matter though because the user already agreed to send SMS messages up front. The user already bypassed that whole infrastructure, without knowing if the application really should be sending SMS messages or not.

The whole system of Intents is perfect for the system I am proposing, because you could simply ask when an Intent is used.

Any application that requests "Send SMS" capabilities is a huge red flag;

Why? In these days of texting and instant-messaging everything, including movie URLs, SMS from a media player actually makes a lot of sense to me (it could, for example, transcode a short clip to send to someone that way). It's way too easy to socially engineer the user into agreeing that SMS is a fine thing to allow any application, because (once more) without ever having run the application it's easy to imagine cool ways in which that ability MIGHT be used. But you are asked before you ever know. That is the broken part: you are being asked to make a security choice with no basis on which to judge the appropriateness of the request. So in the end, because the user cannot know, they will pretty much always agree.

If people just block any app that has SMS permissions out of fear, that can be really bad too - you are blocking potentially useful applications.

There is no drawback beyond single-time slight annoyance in my proposed system, and a ton of real benefit including acceptance of more functionality across the app space.
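The "ask on first use, then remember" flow being proposed here can be modeled in a few lines of plain Java. This is a toy sketch of the policy only (hypothetical class and method names, not any real phone API): the prompt fires the first time each resource is touched, and the answer is cached from then on.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// Toy model of "ask on first use" permission granting. Instead of approving
// a laundry list at install time, the app asks the first time it actually
// touches a resource; the user's answer is remembered afterwards.
class RuntimePermissionGate {
    private final Map<String, Boolean> decisions = new HashMap<>();
    private final Predicate<String> askUser; // stands in for a UI prompt

    RuntimePermissionGate(Predicate<String> askUser) {
        this.askUser = askUser;
    }

    // Returns true if the action may proceed; prompts only on first use.
    boolean request(String permission) {
        return decisions.computeIfAbsent(permission, askUser::test);
    }
}

public class Demo {
    public static void main(String[] args) {
        final int[] prompts = {0};
        RuntimePermissionGate gate = new RuntimePermissionGate(perm -> {
            prompts[0]++;                   // count how often the user is interrupted
            return perm.equals("SEND_SMS"); // user allows SMS, denies everything else
        });

        System.out.println(gate.request("SEND_SMS"));      // true  (user was asked)
        System.out.println(gate.request("SEND_SMS"));      // true  (remembered, no prompt)
        System.out.println(gate.request("READ_CONTACTS")); // false (user was asked, denied)
        System.out.println("prompts=" + prompts[0]);       // prompts=2
    }
}
```

Compare this with Android's install-time list: the same permissions are involved, but the moment of decision moves to the point where the user has context for it.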

Re:Fearmongering Bullshit... (0)

Anonymous Coward | more than 3 years ago | (#33212186)

You are completely missing the point. The point is that it is feasible for an application that *does* appear to legitimately need these permissions (e.g. an improved SMS application, of which there are many). There is no way for the user to specify that the only SMS messages that can be sent are those ones that they wish to send, or that the OWNER_DATA permission should only be used to read data required for the application to do its job. The fact that it's just a tic-tac-toe application is beside the point, a more complicated application could claim to need these permissions and all kinds of users are going to trust that the application is not doing the wrong thing.

This is not "fearmongering bullshit", this is a legitimate concern for smartphones these days. I'm really confused why your comments got moderated up.

Re:Fearmongering Bullshit... (1)

JamesRing (1789222) | more than 3 years ago | (#33212208)

Accidentally posted this anonymously, actually wanted to spend some karma on this one.

You are completely missing the point. The point is that it is feasible for an application that *does* appear to legitimately need these permissions (e.g. an improved SMS application, of which there are many). There is no way for the user to specify that the only SMS messages that can be sent are those ones that they wish to send, or that the OWNER_DATA permission should only be used to read data required for the application to do its job. The fact that it's just a tic-tac-toe application is beside the point, a more complicated application could claim to need these permissions and all kinds of users are going to trust that the application is not doing the wrong thing.

This is not "fearmongering bullshit", this is a legitimate concern for smartphones these days. I'm really confused why your comments got moderated up.

Re:Fearmongering Bullshit... (1)

Jahava (946858) | more than 3 years ago | (#33223622)

You are completely missing the point. The point is that it is feasible for an application that *does* appear to legitimately need these permissions (e.g. an improved SMS application, of which there are many).

Is it? Android lets you share data via SMS without needing your application to request SMS privileges, via Intents (see my last comment to SuperKendall for way too much more information on that). This covers the vast majority of SMS-related use cases.

Granted there are still applications that may want to send SMS on their own (like Google Voice), but those applications ought to be scrutinized heavily when they request that particular permission. Who is their author? What do they do? Does it make sense for such an application to request that permission?

For any given capability, there will be legitimate uses. Ultimately, it's up to the user to make the call. The best the operating system can do is provide the user with:

  • The opportunity to make such a decision, and
  • As much knowledge as possible to make the right decision.

Android (specifically) provides both to a very significant degree. Prior to installing any application, I know who wrote it, how popular it is, what other people think about it, and specifically what capabilities it will have. It's up to the user to weigh those qualities against each other and determine if they trust it with those capabilities, and you'll never find a security model (save total vendor control) that removes that responsibility, especially because most users want that kind of control.

There is no way for the user to specify that the only SMS messages that can be sent are those ones that they wish to send, or that the OWNER_DATA permission should only be used to read data required for the application to do its job. The fact that it's just a tic-tac-toe application is beside the point, a more complicated application could claim to need these permissions and all kinds of users are going to trust that the application is not doing the wrong thing.

It is the point, though. There may be legitimate uses for combinations of permissions, but the user can still make the call to avoid dangerous combinations of permissions (e.g., SMS text + user data, contacts + Internet access, etc.), just as application developers should do their best to avoid such dangerous combinations. Android provides very powerful capabilities in the form of Intents to minimize the need for an application to have its own capabilities.

If a user sees an application requesting numerous permissions, even if those permissions make sense, they can still weigh the consequences against the respectability of the author. The real risk to the security plan is a malicious application from a well-respected source (such that trust in that source outweighs concern over the application's requested capabilities), and fortunately the market works powerfully to minimize that risk, in the form of financial damage. The security model leverages the free market to gain strength, and that is, indeed, quite powerful.

This is not "fearmongering bullshit", this is a legitimate concern for smartphones these days. I'm really confused why your comments got moderated up.

Maybe it's because I'm awesome? Seriously, don't be an ass.

I will explain my title:

  1. The security model in Android is a very well-designed middle ground between opening up the device's potential to legitimate applications and giving the user the discretionary insight and power to make decisions. In fact, from a security point of view, I have never seen a better one (and feel free to point one out). The BBC doesn't offer any insight as to what they feel is wrong with the model. They just present a stylized, selective narrative isolating a worst-case scenario. Hence the "fearmongering".
  2. The BBC's article presumed that the user already installed the malicious application. This removed the most powerful and significant safety mechanism from the narration: the user's discretion. Most of the fundamental security layers imposed by Android serve to give the user the best possible information to make the choice to install, and enforce that the application is limited to those capabilities throughout its lifecycle. By neglecting that critical step, the BBC presents an egregiously false image of the threat to the reader. Hence the "bullshit".

Definitely not Symbian OS then? (1)

CockMonster (886033) | more than 3 years ago | (#33209760)

Seeing as how any app that's unsigned cannot do this sort of stuff without the user being asked (probably several times) if it's ok? But hey, Symbian OS isn't Linux based so it must be crap, eh?

Sad state of the BBC (1)

kegon (766647) | more than 3 years ago | (#33210156)

This is totally a non-story. Man tries to write proof-of-concept malicious phone app. There is so little content in the story that the BBC can easily re-use it again and again without worrying about it losing relevance. Any vaguely competent programmer could have easily done whatever they did (don't bother checking the article; they don't explain anything). The sad fact is, there probably really are thousands of "hackers" out there trying to write malicious apps and we should all be careful with security blah blah blah, but instead of leading to any actual news in this area the BBC only want the "big bad Internet" angle.

The BBC have never quite "got it" when it comes to technology and technology stories. Everything has to be dumbed down enough so that the technical content is zero, but I don't think this is because they are trying to make it easy to understand; it's because they never understood it themselves in the first place. Therefore, they eat up promo stories like this one, fed to them by companies in the IT security business saying how "scary" things are, and amping up the FUD.

At the end of the day, you don't need to go to the trouble of writing a malicious app; as Kevin Mitnick would say, you just ask people for the information you want. But c'mon BBC, a 14-year-old would be able to write a much better, easy-to-understand, technically competent story with some detail. I'm so glad I'm not paying a TV licence fee any more.
