
Comments


Apple's Spotty Record of Giving Back To the Tech Industry

chicksdaddy Re:Article is flame bait. Or a troll. (268 comments)

You have to read the whole article - ASF is not the only example cited. It is the only example cited within the first three paragraphs of the story, however.

about 4 months ago

Is Analog the Fix For Cyber Terrorism?

chicksdaddy Re:sure, no problem (245 comments)

really excellent feedback. appreciated.

about 5 months ago

Georgia Cop Issues 800 Tickets To Drivers Texting At Red Lights

chicksdaddy Gloating - but a good idea (1440 comments)

Look, studies have shown that driver reaction time while texting and driving is far, far worse than the reaction time for impaired driving (aka driving drunk), which is clearly illegal. In other words, we (your fellow citizens) are a lot safer with you driving drunk than driving while texting. (See this Car & Driver study: http://www.caranddriver.com/features/texting-while-driving-how-dangerous-is-it)

So apply the same logic as you would with drunk driving. Sure, these drivers were stopped at a red light, but would you expect the cop to look the other way if they were swigging from a bottle of vodka at the same red light ("well, the car isn't moving right now, so...")? He's right to read the law literally, and also to assume that if they're texting at a red light, they likely won't stop texting once the car is moving.

Takeaway: texting behind the wheel is a serious danger to public health and should be tolerated to about the same extent that we, as a society, tolerate drunk driving - which is not at all. My 2c.

about a year ago

DARPA Cyber Chief "Mudge" Zatko Going To Google

chicksdaddy Update: He'll work in Motorola Mobility ATAP Unit (30 comments)

Update courtesy of Google: Mudge will be working in Motorola Mobility's Advanced Technology & Projects (ATAP). From the web: "The group's mission is to deliver breakthrough innovations to the company's product line on seemingly impossible short timeframes. ATAP is skunkworks-inspired. Optimized for speed. Small, lean, resourced. With agility, freedom from bureaucratic constraints, and a willingness to embrace risk as core attributes." Hmm...sounds kinda like DARPA! ;-)

about a year ago

Submissions


Facebook Awards $50,000 Prize For Internet Defense

chicksdaddy chicksdaddy writes  |  13 hours ago

chicksdaddy (814965) writes "The Security Ledger reports (https://securityledger.com/2014/08/facebook-awards-internet-defense-prize-for-work-on-securing-web-apps/) on Facebook awarding its first-ever monetary prize for groundbreaking work on cyber defense.
In a blog post on Wednesday, the company announced that its first-ever $50,000 Internet Defense Prize was awarded to Johannes Dahse and Thorsten Holz, both of Ruhr-Universität Bochum in Germany, for their work on a method for making software less prone to being hacked. (https://www.facebook.com/notes/protect-the-graph/internet-defense-prize-awarded-at-23rd-usenix-security-symposium/1491475121092634)

Dahse and Holz developed a method for detecting so-called “second-order” vulnerabilities in Web applications using automated static code analysis. Their paper (https://www.usenix.org/system/files/conference/usenixsecurity14/sec14-paper-dahse.pdf) was presented at the 23rd USENIX Security Symposium in San Diego.(https://www.usenix.org/conference/usenixsecurity14/technical-sessions)

In a blog post announcing the prize, John Flynn, a security engineering manager at Facebook, said the Internet Defense Prize recognizes “superior quality research that combines a working prototype with significant contributions to the security of the Internet—particularly in the areas of protection and defense.”

Second order vulnerabilities are distinct from ‘first order’ security holes like SQL injection and cross site scripting. They allow an attacker to use one of those first-order flaws to manipulate a web application and store a malicious payload on a web server. That payload, which may be stored as a shared resource on the application server, can later be used to target all users of the application.
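The store-now, exploit-later mechanics can be sketched in a few lines of Python. This is a toy illustration using SQLite, not Dahse and Holz's detection method; the table and values are invented:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")

# Step 1: the sign-up path uses a parameterized query, so there is no
# first-order injection -- the hostile string is stored harmlessly.
attacker_name = "x'; DROP TABLE users; --"
db.execute("INSERT INTO users (name) VALUES (?)", (attacker_name,))

# Step 2: a later feature re-reads the stored value and concatenates it
# into fresh SQL, trusting it because it "came from our own database".
stored = db.execute("SELECT name FROM users").fetchone()[0]
unsafe_sql = f"SELECT * FROM users WHERE name = '{stored}'"
db.executescript(unsafe_sql)  # the injected DROP TABLE runs here

try:
    db.execute("SELECT * FROM users")
    table_survived = True
except sqlite3.OperationalError:  # "no such table: users"
    table_survived = False
print("users table survived:", table_survived)  # False
```

The vulnerable step is the second one, which is why static analysis that only tracks direct user input misses it.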

Dahse and Holz’s work was chosen by a panel to receive the prize both on its technical merit and because panelists “could see a clear path for applying the award funds to push the research to the next level,” Flynn wrote."


Antivirus Hapless In Protecting China's Uyghurs From Targeted Attacks

chicksdaddy chicksdaddy writes  |  about two weeks ago

chicksdaddy (814965) writes "The Security Ledger reports (https://securityledger.com/2014/08/study-finds-unrelenting-cyber-attacks-against-chinas-uyghurs/) on a new study of China's persecuted Uyghur minority that describes a community besieged by cyber attacks and with little protection from punchless antivirus software.

The study, “A Look at Targeted Attacks Through the Lense of an NGO” (http://www.mpi-sws.org/~stevens/pubs/sec14.pdf) is being presented at the USENIX Security Conference in San Diego on August 21. In it, researchers at Northeastern University and the Max Planck Institute studied a trove of more than 1,400 suspicious email messages sent to 724 individuals at 108 separate organizations affiliated with the Uyghur World Congress, an umbrella group representing Uyghur interests.

The study found that the "APT"-style targeted attacks weren't so "advanced" after all. The individuals or groups behind the attacks relied heavily on malicious e-mail attachments to gain a foothold on computers, with malicious Microsoft Office or Adobe PDF attachments the favorite bait. The groups behind the attacks did not rely on – or need – previously unknown (or “zero day”) software vulnerabilities to carry out attacks. Known (but recent and unpatched) software vulnerabilities were enough to compromise victim systems.

NGO groups are depicted as having few defenses against the attacks: antivirus software was largely ineffective at stopping the malicious programs used in the attacks. “No single tool detected all of the attacks, and some attacks evaded detection from all of the antivirus scanners,” wrote Engin Kirda, a researcher at Northeastern University, in a blog post. (http://labs.lastline.com/a-look-at-advanced-targeted-attacks-through-the-lense-of-a-human-rights-ngo-world-uyghur-congress) Even months after the malware was used against the WUC, “standard anti-virus (AV) detection software was insufficient in detecting these targeted attacks,” Kirda wrote."


Popular Web Sites Still Getting Gamed For SEO Attacks

chicksdaddy chicksdaddy writes  |  about two weeks ago

chicksdaddy (814965) writes "The security community has been aware of the danger posed by open redirect vulnerabilities (http://cwe.mitre.org/data/definitions/601.html) for years, but that hasn't added any urgency to calls to fix them.

Now data from Akamai shows that open redirects are a leading culprit in SEO attacks, in which scammers use redirects from legitimate web sites to plant malicious software on the computers of unsuspecting visitors. "Open redirect vulnerabilities are frequently left un-patched on major sites across the Internet, and these vulnerabilities are being exploited extensively by malicious actors and organizations," writes Akamai researcher Or Katz in a post on The Security Ledger.

In just one example, Akamai observed an SEO attack in which 4,000 compromised web servers at legitimate web sites were used to redirect visitors to more than 10,000 malicious domains. The activity also served to boost the search engine ranking of the malicious sites, Akamai said."
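The standard fix for an open redirect (CWE-601) is to validate the redirect target before honoring it. A minimal sketch, assuming a hypothetical allowlist of first-party hosts:

```python
from urllib.parse import urlparse

# Assumption: these are the only domains we control and will redirect to.
ALLOWED_HOSTS = {"example.com", "www.example.com"}

def safe_redirect_target(next_url, default="/"):
    """Return next_url only if it cannot send the visitor off-site."""
    parsed = urlparse(next_url)
    # A bare path like "/account" has no scheme or host -- safe to honor.
    if not parsed.scheme and not parsed.netloc:
        return next_url
    # An absolute URL is honored only if it points at a host we control.
    if parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS:
        return next_url
    return default

print(safe_redirect_target("/settings"))                       # kept
print(safe_redirect_target("https://evil.example.net/phish"))  # replaced by "/"
print(safe_redirect_target("//evil.example.net/x"))            # replaced by "/"
```

Note the third case: scheme-relative URLs ("//host/path") are a common bypass of naive checks, which is why the sketch rejects anything with a netloc that is not allowlisted.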


Old Apache Code at Root of Android FakeID Mess

chicksdaddy chicksdaddy writes  |  about three weeks ago

chicksdaddy (814965) writes "The Security Ledger reports that a four-year-old vulnerability in an open source component that is a critical part of the Android mobile OS leaves hundreds of millions of mobile devices susceptible to silent malware infections. (https://securityledger.com/2014/07/old-apache-code-at-root-of-android-fakeid-mess/)

The vulnerability was disclosed on Tuesday (http://bluebox.com/news/). It affects devices running Android versions 2.1 to 4.4 (“KitKat”), according to a statement released by Bluebox. According to Bluebox, the vulnerability was found in a package installer in affected versions of Android. The installer doesn't attempt to determine the authenticity of certificate chains that are used to vouch for new digital identity certificates. In short, Bluebox writes “an identity can claim to be issued by another identity, and the Android cryptographic code will not verify the claim.”
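The logic flaw Bluebox describes can be modeled in miniature. The classes and names below are illustrative stand-ins, not Android's actual certificate code:

```python
from dataclasses import dataclass

@dataclass
class Cert:
    subject: str
    claimed_issuer: str
    signed_by: str  # stand-in for whose key actually produced the signature

def broken_chain_check(cert, trusted):
    # Vulnerable pattern: believe the claimed issuer, never check the signature.
    return cert.claimed_issuer in trusted

def proper_chain_check(cert, trusted):
    # Correct pattern: the claim must be backed by a signature that actually
    # verifies against the claimed issuer's key.
    return cert.claimed_issuer in trusted and cert.signed_by == cert.claimed_issuer

trusted_roots = {"Adobe Systems"}
fake = Cert(subject="malware.apk", claimed_issuer="Adobe Systems",
            signed_by="attacker")

print(broken_chain_check(fake, trusted_roots))  # True  -- malware accepted
print(proper_chain_check(fake, trusted_roots))  # False -- claim rejected
```

In the real flaw the "signature check" is a cryptographic verification, but the shape of the bug is the same: identity was accepted on assertion alone.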

The security implications of this are vast. Malicious actors could create a malicious mobile application with a digital identity certificate that claims to be issued by Adobe Systems. Once installed, vulnerable versions of Android will treat the application as if it was actually signed by Adobe and give it access to local resources, like the special webview plugin privilege, that can be used to sidestep security controls and virtual ‘sandbox’ environments that keep malicious programs from accessing sensitive data and other applications running on the Android device.

In a scenario that is becoming all too common, the flaw appears to have been introduced to Android through an open source component — this time from Apache Harmony (http://harmony.apache.org/), an open source alternative to Oracle’s Java. Google turned to Harmony as an alternative means of supporting Java in the absence of a deal with Oracle to license Java directly.

Work on Harmony was discontinued in November, 2011. However, Google has continued using native Android libraries that are based on Harmony code. The vulnerability concerning certificate validation in the package installer module persisted even as the two codebases diverged."


CNN iPhone App Sends iReporters' Passwords In The Clear

chicksdaddy chicksdaddy writes  |  about a month ago

chicksdaddy (814965) writes "The Security Ledger reports on newly published research from the firm zScaler that reveals CNN's iPhone application — one of the leading mobile news apps — transmits user login session information in clear text. (https://securityledger.com/2014/07/cnn-app-leaks-passwords-of-citizen-reporters/). The security flaw could leave users of the application vulnerable to having their login credentials snooped by malicious actors on the same network or connected to the same insecure wifi hotspot. That's particularly bad news if you're one of CNN's iReporters — citizen journalists — who use the app to upload photos, video and other text as they report on breaking news events, zScaler warned in a blog post.

According to a zScaler analysis (http://research.zscaler.com/2014/07/cnn-app-for-iphone.html), CNN's app for iPhone exposes user credentials in the clear both during initial setup of the account and in subsequent mobile sessions. The iPad version of the CNN app is not affected, nor is the CNN mobile application for Android. A spokesman for CNN said the company had a fix ready and was working with Apple to have it approved and released to the iTunes AppStore.
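The underlying fix is simple to state: credentials should only ever travel over TLS. A minimal client-side guard, with placeholder URLs rather than CNN's real endpoints:

```python
from urllib.parse import urlparse

def checked_login_url(url):
    """Raise rather than allow credentials onto a plaintext channel."""
    if urlparse(url).scheme != "https":
        raise ValueError("refusing to send credentials in the clear: " + url)
    return url

print(checked_login_url("https://api.example.com/login"))  # accepted

try:
    checked_login_url("http://api.example.com/login")
    rejected = False
except ValueError:
    rejected = True
print("plaintext login rejected:", rejected)  # True
```

A guard like this catches the bug at development time; on iOS the equivalent belongs in the networking layer so no code path can downgrade the login request to HTTP.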

The privacy of journalists' private communications has never been more at risk. Reporters find themselves in the crosshairs of sophisticated hacking crews, often working at the beck and call of anti-democratic regimes. Such groups have infiltrated the networks of newspapers like The New York Times and The Washington Post — often in search of confidential communications between reporters and policy makers or human rights activists. (http://www.nytimes.com/2013/01/31/technology/chinese-hackers-infiltrate-new-york-times-computers.html) Here in the U.S., the Obama Administration is aggressively pursuing Pulitzer Prize-winning journalist James Risen of The New York Times in order to uncover the source for a chapter in his book State of War concerning a covert US operation against Iran. (http://www.npr.org/blogs/thetwo-way/2014/06/02/318214947/times-reporter-must-testify-about-source-court-decides)"


Tired of playing cyber cop, Microsoft looks for partners in crime fighting

chicksdaddy chicksdaddy writes  |  about a month and a half ago

chicksdaddy (814965) writes "When it comes to fighting cyber crime, few companies can claim to have done as much as Redmond, Washington-based Microsoft, which spent the last five years as the Internet's Dirty Harry: using its size, legal muscle and wealth to single-handedly take down cyber criminal networks from Citadel to Zeus, up to the recent seizure of servers belonging to the (shady) managed DNS provider NO-IP.

The company's aggressive posture towards cyber crime outfits and the companies that enable them has earned it praise, but also criticism. That was the case last week after legitimate customers of NO-IP alleged that Microsoft's unilateral action had disrupted their business. (http://www.itworld.com/it-management/425601/no-ip-regains-control-some-domains-wrested-microsoft)

There's evidence that those criticisms are hitting home – and that Microsoft may be growing weary of its role as judge, jury and executioner of online scams. Microsoft Senior Program Manager Holly Stewart gave a sober assessment of the software industry's fight against cyber criminal groups and other malicious actors.

Speaking to a gathering of cyber security experts and investigators at the 26th annual FIRST Conference in Boston (http://www.first.org/conference/2014), she said that the company has doubts about the long term effectiveness of its botnet and malware takedowns.

Redmond is willing to use its clout to help other companies stomp out malicious software like botnets and Trojan horse programs. Stewart said Microsoft will use its recently announced Coordinated Malware Eradication (CME) program to empower researchers, industry groups and even other security firms that are looking to eradicate online threats. That includes everything from teams of malware researchers and PR professionals to software and cloud-based resources like the company's Malicious Software Removal Tool and Windows update.

"Use MSRC as a big hammer to stomp out a malware family," Stewart implored the audience, referring to the Microsoft Security Response Center. "Go ahead and nominate a malware family to include in MSRT," she said, referring to the Malicious Software Removal Tool."


FDA: We Can't Scale To Regulate Mobile Health Apps

chicksdaddy chicksdaddy writes  |  about a month and a half ago

chicksdaddy (814965) writes "Mobile health and wellness is one of the fastest growing categories of mobile apps. Already, apps exist that measure your blood pressure (http://www.withings.com/us/blood-pressure-monitor.html) and take your pulse (https://itunes.apple.com/us/app/thinklabs-stethoscope-app/id346239083?mt=8) – jobs traditionally done by tried-and-true instruments like blood pressure cuffs and stethoscopes.

If that sounds to you like the kind of thing the FDA should be vetting, don't hold your breath. A senior advisor to the U.S. Food and Drug Administration (FDA) has warned that the current process for approving medical devices couldn’t possibly meet the challenge of policing mobile health and wellness apps and that, in most cases, the agency won't even try.

Bakul Patel, an advisor to the FDA, said the agency couldn't scale to police the hundreds of new health and wellness apps released each month to online marketplaces like the iTunes AppStore and Google Play.

“It’s just not possible,” Patel said at a panel discussion of medical device security hosted by that National Institute of Standards and Technology’s (NIST’s) Information Security and Privacy Advisory Board (ISPAB) in June. (podcast available here: http://blog.secure-medicine.or...)

Estimates put the number of new mobile health applications created each month at 500. But the FDA has reviewed no more than 80 so far – a small (and shrinking) fraction of the total.

In September, 2013, the FDA issued guidance to mobile application publishers about what kinds of mobile applications would qualify as medical devices. (https://securityledger.com/2013/09/fda-says-some-medical-apps-a-kind-of-medical-device/) The FDA said it will exercise oversight of mobile medical applications that are accessories to regulated medical devices, or that transform a mobile device into a regulated medical device. In those cases, the FDA said that mobile applications will be assessed “using the same regulatory standards and risk-based approach that the agency applies to other medical devices.”

Speaking on the NIST panel in June, Patel reiterated that guidance. Most mobile medical applications were really “health and wellness” tools that couldn’t adversely affect patient health. But he said the agency would treat applications that are mobile companions to regulated medical devices – like insulin pumps – differently. And he said that was a fine place to draw the line: most mobile health applications have short lifespans on the Appstore or Google Play. Diverting FDA resources to vetting them would be a waste of time.

“The whole mobile application world has its own ecosystem. Mobile apps live and die and its all user or consumer driven," he said. "The end-of-life cycle is so short compared to any other products we see. We need to focus on oversight of what is sustained and maintained.”"


Industrial Control System Firms in Dragonfly Attack Identified

chicksdaddy chicksdaddy writes  |  about a month and a half ago

chicksdaddy (814965) writes "Two of the three industrial control system (ICS) software companies that were victims of the so-called "Dragonfly" malware have been identified, The Security Ledger reports. (https://securityledger.com/2014/07/industrial-control-vendors-identified-in-dragonfly-attack/)

Dale Peterson of the firm Digitalbond identified the vendors (http://www.digitalbond.com/blog/2014/07/02/havex-hype-unhelpful-mystery/) as MB Connect Line (http://mbconnectline.com/index.php/en/contact/company), a German maker of industrial routers and remote access appliances and eWon (http://www.ewon.biz/en/home.html), a Belgian firm that makes virtual private network (VPN) software that is used to access industrial control devices like programmable logic controllers. Peterson has also identified the third vendor, identified by F-Secure as a Swiss company, but told The Security Ledger that he cannot share the name of that firm.

The three firms, which serve customers in industry, including owners of critical infrastructure, were the subject of a warning from the Department of Homeland Security. DHS’s ICS CERT said it was alerted to compromises of the vendors’ web sites by researchers at the security firms Symantec and F-Secure. (https://securityledger.com/2014/07/dhs-warns-energy-firms-of-malware-used-in-targeted-attacks/) DHS said it is analyzing malware associated with the attacks. The malicious software, dubbed “Havex,” was being spread by way of so-called “watering hole” attacks that involved compromises of the vendors’ web sites.

According to Symantec, the malware targeted energy grid operators, major electricity generation firms, petroleum pipeline operators, and energy industry industrial equipment providers. Most of the victims were located in the United States, Spain, France, Italy, Germany, Turkey, and Poland.

Symantec described the group behind the Dragonfly/Havex malware as “well resourced, with a range of malware tools at its disposal.” The security firm Crowdstrike said the attacks were the work of a group it dubbed “Energetic Bear” (http://www.reuters.com/article/2014/07/02/us-cybersecurity-energeticbear-idUSKBN0F722V20140702) that was focused on espionage and of Russian origin.

Contacted by The Security Ledger, Gérald Olivier, a Marketing Manager at eWon said the compromise of its website occurred in January, 2014. According to an incident report prepared by the company, the attackers compromised the content management system (CMS) used to manage the company’s website and uploaded a corrupted version of a setup program for an eWon product called Talk2M. Hyperlinks on the eWon page that linked to the legitimate Setup file were changed to point to the malicious file. If installed, the malware could capture the login credentials of eWon Talk2M customers. The second firm, MB Connect Line, did not respond to requests for comment from the Security Ledger."


Trivial Bypass of PayPal Two-Factor Authentication On Mobile Devices

chicksdaddy chicksdaddy writes  |  about 2 months ago

chicksdaddy (814965) writes "The Security Ledger reports on research from DUO Labs that exposes a serious gap in protection with PayPal Security Key, the company's two-factor authentication service.

According to DUO (https://duosecurity.com/blog/duo-security-researchers-uncover-bypass-of-paypal-s-two-factor-authentication), PayPal's mobile app doesn't yet support Security Key and displays an error message to users with the feature enabled when they try to log in to their PayPal account from a mobile device, terminating their session automatically.

However, researchers at DUO noticed that the PayPal iOS application would briefly display a user’s account information and transaction history prior to displaying that error message and logging them out. The behavior suggested that mobile users were, in fact, being signed in to their account prior to being logged off. The DUO researchers investigated, intercepting and analyzing the Web transactions between the PayPal mobile application and PayPal’s back-end servers and scrutinizing how sessions for two-factor-enabled accounts versus non-two-factor-enabled accounts were handled.

They discovered that the API uses the OAuth technology for user authentication and authorization, but that PayPal only enforces the two-factor requirement on the client – not on the server.
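The remedy implied by DUO's finding is to enforce the second factor on the server, where the client cannot skip it. A toy sketch, in which all names, the user store, and the hard-coded OTP are illustrative rather than PayPal's actual API:

```python
# Demo user store; a real system would hash passwords and verify a TOTP.
USERS = {"alice": {"password": "hunter2", "twofactor_enabled": True}}

def issue_session(username, password, otp=None):
    """Server-side login: no session unless every required factor checks out."""
    user = USERS.get(username)
    if user is None or user["password"] != password:
        return None  # bad credentials
    if user["twofactor_enabled"] and otp != "123456":  # stand-in OTP check
        return None  # second factor required and not satisfied -- refuse
    return {"session_for": username}

print(issue_session("alice", "hunter2"))            # None: OTP missing
print(issue_session("alice", "hunter2", "123456"))  # session granted
```

Because the refusal happens before any session token is issued, a client that "forgets" to ask for the OTP gains nothing, which is exactly the property the PayPal flow lacked.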

An attacker with knowledge of the flaw and a PayPal user's login and password could easily evade the requirement to enter a second factor before accessing the account and transmitting money."


FTC Lobbies To Be Top Cop For Geolocation

chicksdaddy chicksdaddy writes  |  about 3 months ago

chicksdaddy (814965) writes "As the U.S. Senate considers draft legislation governing the commercial use of location data, The Federal Trade Commission (FTC) is asking Congress to make it — not the Department of Justice — the chief rule maker and enforcer of policies for the collection and sharing of geolocation information, the Security Ledger reports. (https://securityledger.com/2014/06/ftc-wants-to-be-top-cop-on-geolocation/)

Jessica Rich, Director of the FTC Bureau of Consumer Protection, told the Senate Judiciary Committee’s Subcommittee on Privacy, Technology and the Law that the Commission would like to see changes to the wording of the Location Privacy Protection Act of 2014 (LPPA) (http://www.ftc.gov/news-events/press-releases/2014/06/ftc-testifies-geolocation-privacy). The LPPA is draft legislation introduced by Sen. Al Franken that carves out new consumer protections for location data sent and received by mobile phones, tablets and other portable computing devices. Rich said that the FTC, as the U.S. Government’s leading privacy enforcement agency, should be given rule making and enforcement authority for the civil provisions of the LPPA. The current draft of the law instead gives that authority to the Department of Justice (DOJ).

The LPPA updates the Electronic Communications Privacy Act to take into account the widespread availability and commercial use of geolocation information. The LPPA requires that companies get individuals’ permission before collecting location data from smartphones, tablets, or in-car navigation devices, and before sharing it with others.

It would prevent what Franken refers to as “GPS stalking” by barring companies from collecting location data in secret. The LPPA also requires companies to reveal the kinds of data they collect and how they share and use it, bans the development, operation, and sale of GPS stalking apps, and requires the federal government to collect data on GPS stalking and facilitate reporting of GPS stalking by the public. (http://www.franken.senate.gov/files/documents/140327Locationprivacy.pdf)"


With Firmware Update, LG Links Smart TV Features To Viewer Monitoring

chicksdaddy chicksdaddy writes  |  about 3 months ago

chicksdaddy (814965) writes "Can electronics giant LG force owners of its Smart TVs to agree to have their viewing habits monitored or lose access to the smart features they've already paid for?

That's the question being raised by LG customers and privacy advocates after firmware updates to some LG Smart TVs removed a check-box opt-in that allowed TV owners to consent to having their viewing behavior monitored by LG. In its place, LG has asked users to consent to a slew of intrusive monitoring activities as part of a lengthy new Terms of Service Agreement and Privacy Statement, or see many of the 'smart' features on their sets disabled.

Among other things, LG is asking for access to customers’ “viewing information”- interactions with program content, including live TV, movies and video on demand. That might include the programs you watch, the terms you use to search for content and actions taken while viewing.

Some LG SmartTV owners are crying foul (http://www.techdirt.com/articles/20140511/17430627199/lg-will-take-smart-out-your-smart-tv-if-you-dont-agree-to-share-your-viewing-search-data-with-third-parties.shtml). They include Jason Huntley (aka @DoctorBeet), a UK-based IT specialist who, in November, blew the whistle on LG's practice of collecting user viewing data without their consent. (http://doctorbeet.blogspot.com/2013/11/lg-smart-tvs-logging-usb-filenames-and.html) Huntley said he views the new privacy policy as a way for LG to get legal cover for the same kinds of omnibus customer monitoring they attempted earlier – though without notice and consent. “If you read the documents, they’ve covered themselves for all the activity that was going on before. But now they’re allowed to do it legally.”

It is unclear whether the firmware updates affect LG customers in the U.S. or just the EU. If they do, privacy experts say they may run afoul of US consumer protection laws. “My initial reaction is that this is an appalling practice,” Corryne McSherry, the Intellectual Property Director at the Electronic Frontier Foundation (EFF), told The Security Ledger.(https://securityledger.com/2014/05/bad-actor-with-update-lg-says-no-monitoring-no-smart-tv/) “Customers want and deserve to be able to retain a modicum of privacy in their media choices, and they shouldn’t have to waive that right in order for their TV (or any other device) to keep working as expected.”"


Heartbleed Exposes Critical Infrastructure's Patch Problem

chicksdaddy chicksdaddy writes  |  about 3 months ago

chicksdaddy (814965) writes "The good news about the Heartbleed vulnerability in OpenSSL is that most of the major sites that were found to be vulnerable to the flaw have been patched. (http://www.computerworld.com/s/article/9247787/Most_but_not_all_sites_have_fixed_Heartbleed_flaw)

The bad news: the vulnerability of high-profile web sites is just the tip of the iceberg or – more accurately – the head in front of a very long tail of vulnerable web sites and applications. Many of those applications and sites are among the systems that support critical infrastructure. For evidence of that, look no further than the alert issued Thursday by the Department of Homeland Security’s Industrial Control System (ICS) Computer Emergency Readiness Team (CERT). The alert – an update to one issued last month – includes a list of 43 ICS applications that are known to be vulnerable to Heartbleed. (http://ics-cert.us-cert.gov/advisories/ICSA-14-135-05) Just over half have patches available for the Heartbleed flaw, according to ICS CERT data. But that leaves about twenty applications vulnerable, including industrial control products from major vendors like Siemens, Honeywell and Schneider Electric.

Even when patches are available, many affected organizations — including operators of critical infrastructure — may have a difficult time applying the patch. ICS environments are notoriously difficult to audit because ICS devices often respond poorly to any form of scanning. ICS-CERT notes that both active and passive vulnerability scans are “dangerous when used in an ICS environment due to the sensitive nature of these devices.” Specifically: “when it is possible to scan the device, it is possible that device could be put into invalid state causing unexpected results and possible failure of safety safeguards,” ICS-CERT warned."


Blade Runner Redux: Do Embedded Systems Need A Time To Die?

chicksdaddy chicksdaddy writes  |  about 3 months ago

chicksdaddy (814965) writes "In a not-so-strange case of life imitating Blade Runner, Dan Geer, the CISO of In-Q-Tel, has proposed making embedded devices such as industrial control and SCADA systems more 'human' (http://geer.tinho.net/geer.secot.7v14.txt) in order to manage a future in which hundreds of billions of them will populate every corner of our personal, professional and lived environments. (http://www.gartner.com/newsroom/id/2636073)

Geer was speaking at The Security of Things Forum (http://www.securityofthings.com), a conference focused on securing The Internet of Things last Wednesday. He struck a wary tone, saying that "we are at the knee of the curve for deployment of a different model of computation," as the world shifts from an Internet of 'computers' to one of embedded systems that is many times larger.

Individually, these devices may not be particularly valuable. But, together, IoT systems are tremendously powerful and capable of causing tremendous social disruption. Geer noted the way that embedded systems, many outfitted with remote sensors, now help manage everything from transportation to food production in the U.S. and other developed nations.

“Is all the technologic dependency, and the data that fuels it, making us more resilient or more fragile?" he wondered. Geer noted the appearance of malware like TheMoon (https://isc.sans.edu/forums/diary/Linksys+Worm+TheMoon+Summary+What+we+know+so+far/17633), which spreads between vulnerable home routers, as one example of how a population of vulnerable, unpatchable embedded devices might be cobbled into a force of mass disruption.

Taking a page out of Philip K. Dick's book (http://www.goodreads.com/book/show/7082.Do_Androids_Dream_of_Electric_Sheep_) – or at least Ridley Scott's movie (http://www.imdb.com/name/nm0000631/) – Geer proposes a novel solution: “Perhaps what is needed is for embedded systems to be more like humans.”

By "human," Geer means that embedded systems that do not have a means of being (securely) managed and updated remotely should be configured with some kind of "end of life" past which they will cease to operate. Allowing embedded systems to 'die' will remove a population of remote and insecure devices from the Internet ecosystem and prevent those devices from falling into the hands of cyber criminals or other malicious actors, Geer argued.
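Geer's proposal can be sketched as firmware that checks a built-in expiry at boot; the dates, lifetimes, and class names below are arbitrary assumptions for illustration:

```python
from datetime import date, timedelta

class EmbeddedDevice:
    def __init__(self, built, lifetime_days=5 * 365):
        # The "time to die" is fixed at manufacture.
        self.end_of_life = built + timedelta(days=lifetime_days)

    def apply_update(self, today, extension_days=2 * 365):
        # In a real design this would require a cryptographically signed
        # update; only maintained devices get their lifetime extended.
        self.end_of_life = today + timedelta(days=extension_days)

    def boot(self, today):
        # Refuse service past end-of-life instead of running unpatched forever.
        return today <= self.end_of_life

dev = EmbeddedDevice(built=date(2014, 1, 1))
print(dev.boot(date(2016, 1, 1)))   # True: within its lifetime
print(dev.boot(date(2020, 1, 1)))   # False: the device has "died"
dev.apply_update(date(2020, 1, 1))
print(dev.boot(date(2020, 1, 1)))   # True again: updating renews the lease
```

The design choice mirrors Geer's point: a device that can be maintained lives on, while an abandoned one quietly removes itself from the attack surface.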

The idea has many parallels with Scott's 1982 classic, Blade Runner, in which a group of rebellious, human-like androids – or “replicants” – return to a ruined Earth to seek out their maker. Their objective: find a way to disable the programmed ‘end of life’ built into each of them. In essence: the replicants want to become immortal."


Want More Secure Passwords? Ask Pavlov.

chicksdaddy writes  |  about 4 months ago

chicksdaddy (814965) writes "With each data breach at a major online service we are reminded, all over again, how pitiful most people are at picking and sticking to secure passwords. (http://securitynirvana.blogspot.com/2012/06/final-word-on-linkedin-leak.html). But all the noise security folks have made about the dangers of insecure passwords hasn't done much to change human behavior. (http://www.washingtonpost.com/blogs/the-switch/wp/2014/01/21/123456-replaces-password-as-most-common-password-found-in-data-breaches/)

Maybe the problem is that explaining isn't enough. Perhaps online firms need to actually change the behavior of users — to 'train' them to use secure passwords. And when you're talking about training someone to do something, who better to turn to than Ivan Pavlov (http://en.wikipedia.org/wiki/Ivan_Pavlov), the Russian Nobel Prize winning physiologist whose pioneering work in classical conditioning forever linked his name to the image of drooling dogs.

Writing on Security Ledger (https://securityledger.com/2014/05/is-pavlovian-password-management-the-answer/), Lance James, the head of Cyber Intelligence at consulting firm Deloitte & Touche suggests that a Pavlovian approach to password security might be the best way to go.

Rather than enforcing strict password requirements (which often result in weaker passwords http://blog.zorinaq.com/?e=54), James advocates allowing weak passwords, but attaching short TTL (time to live) values to them, based on data on how quickly the chosen password could be cracked.
"Let the user know the cost and value of the password, including its time for success," James proposes.

Users who select a weak password would get a message thanking them for resetting their password — and informing them that it will expire in three days, requiring another (punishing) password reset. Longer, more secure passwords would reward the user with a longer reprieve, from days to months. Thoughts?"
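James's scheme could be sketched roughly like this (the entropy model, cracking rate and TTL thresholds below are illustrative assumptions, not figures from his post): estimate how quickly a password would fall to brute force, then map that estimate to an expiry.

```python
import string

# Rough sketch of a Pavlovian password TTL. The character-set entropy
# estimate, assumed cracking rate, and TTL thresholds are all illustrative.
GUESSES_PER_SECOND = 1e10  # assumed offline cracking rate

def charset_size(password):
    size = 0
    if any(c in string.ascii_lowercase for c in password): size += 26
    if any(c in string.ascii_uppercase for c in password): size += 26
    if any(c in string.digits for c in password):          size += 10
    if any(c in string.punctuation for c in password):     size += 32
    return size or 1

def crack_seconds(password):
    # Average case: the attacker searches half the keyspace.
    return (charset_size(password) ** len(password)) / 2 / GUESSES_PER_SECOND

def ttl_days(password):
    """Weak passwords expire in days; strong ones earn months."""
    t = crack_seconds(password)
    if t < 3600:           return 3    # crackable within an hour: 3-day TTL
    if t < 86400 * 30:     return 30   # crackable within a month: 30 days
    return 180                         # strong: a six-month reprieve
```

A short all-lowercase password would earn the punishing three-day expiry, while a long mixed-character one would buy months of peace.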


OpenSSL: The New Face Of Technology Monoculture

chicksdaddy writes  |  about 3 months ago

chicksdaddy (814965) writes "In a now-famous 2003 essay, “Cyberinsecurity: The Cost of Monopoly” (http://cryptome.org/cyberinsecurity.htm), Dr. Dan Geer (http://en.wikipedia.org/wiki/Dan_Geer) argued, persuasively, that Microsoft’s operating system monopoly constituted a grave risk to the security of the United States and to international security as well. It was in the interest of the U.S. government and others to break Redmond’s monopoly, or at least to lessen Microsoft’s ability to ‘lock in’ customers and limit choice. “The prevalence of security flaw (sp) in Microsoft’s products is an effect of monopoly power; it must not be allowed to become a reinforcer,” Geer wrote.

The essay cost Geer his job at the security consulting firm @stake, which then counted Microsoft as a major customer. (http://cryptome.org/cyberinsecurity.htm#Fired) (@stake was later acquired by Symantec.)

These days Geer is the Chief Security Officer at In-Q-Tel, the CIA’s venture capital arm. But he’s no less vigilant about the dangers of software monocultures. Security Ledger notes that, in a post today for the blog Lawfare (http://www.lawfareblog.com/2014/04/heartbleed-as-metaphor/), Geer is again warning about the dangers that come from an over-reliance on common platforms and code. His concern this time isn’t proprietary software managed by Redmond, however; it’s common, oft-reused hardware and software packages like the OpenSSL software at the heart (pun intended) of Heartbleed. (https://securityledger.com/2014/04/the-heartbleed-openssl-flaw-what-you-need-to-know/)

“The critical infrastructure’s monoculture question was once centered on Microsoft Windows,” he writes. “No more. The critical infrastructure’s monoculture problem, and hence its exposure to common mode risk, is now small devices and the chips which run them.”

What happens when a critical and vulnerable component becomes ubiquitous — far more ubiquitous than OpenSSL? Geer wonders if the stability of the Internet itself is at stake.

“The Internet, per se, was designed for resistance to random faults; it was not designed for resistance to targeted faults,” Geer warns. “As the monocultures build, they do so in ever more pervasive, ever smaller packages, in ever less noticeable roles. The avenues to common mode failure proliferate.”"


Crowd Funding Bug Bounties To Fix Open Source Insecurity? Don't Count On It.

chicksdaddy writes  |  about 3 months ago

chicksdaddy (814965) writes "The discovery of the Heartbleed vulnerability put the lie to the notion that ‘thousands of eyes’ keep watch over critical open source software packages like OpenSSL. In fact, some of the earliest reporting on Heartbleed noted that the team supporting the software consisted of just four developers – only one of them full time. (http://online.wsj.com/news/articles/SB10001424052702304819004579489813056799076)

To be sure, there are still plenty of examples of tightly monitored open source projects and real accountability. (The ever-mercurial Linus Torvalds recently made news by openly castigating key Linux kernel developer Kay Sievers for submitting buggy code and suspending him from further contributions.) (http://lkml.iu.edu//hypermail/linux/kernel/1404.0/01331.html)

But how do poorer, volunteer-led open source projects improve accountability and oversight — especially in areas like security? Casey Ellis over at the firm BugCrowd has proposed a crowd-funded project to fund bug bounties (https://www.crowdtilt.com/campaigns/lets-make-sure-heartbleed-doesnt-happen-again/description) for a security audit of OpenSSL ($7,162 raised thus far, with a target of $100,000).

But a post on Veracode's blog doubts that offering fat purses for information on open source bugs will make much difference.

"A paid bounty program would mirror efforts by companies like Google, Adobe and Microsoft to attract the attention of the best and brightest security researchers to their platform. No doubt: bounties will beget bug discoveries, some of them important," the post reads. "But a bounty program isn’t a substitute for a full security audit and, beyond that, a program for managing OpenSSL (or similar projects) over the long term. And, after all, the Heartbleed vulnerability doesn’t just point out a security failing, it raises questions about the growth and complexity of the OpenSSL code base. Bounties won’t make it any easier to address those bigger and important problems."

In other words: finding bugs isn't the same as making the underlying code more secure. That's a lesson Adobe and Microsoft learned years ago (see Adobe's take on it from back in 2010 here: http://blogs.adobe.com/securit...).

What's needed is a more holistic approach to security that results in something like Microsoft's SDL (Security Development Lifecycle) or Adobe's SPLC (Secure Product Lifecycle). That will stanch the flow of new vulnerabilities. Then investments need to be made in robust incident response and post-deployment updating and patching. That's a lot to fit into a crowd-funding proposal — so it will need to fall to companies that rely on packages like OpenSSL to foot the bill (and provide the talent). Some companies, like Akamai, are already talking about that."


Apple's Spotty Record Of Giving Back To The Tech Industry

chicksdaddy writes  |  about 4 months ago

chicksdaddy (814965) writes "One of the meta-stories to come out of the Heartbleed (http://heartbleed.com/) debacle is the degree to which large and wealthy companies have come to rely on third-party code (http://blog.veracode.com/2014/04/heartbleed-and-the-curse-of-third-party-code/) — specifically, open source software maintained by volunteers on a shoestring budget. Adding insult to injury is the phenomenon of large, incredibly wealthy companies that gladly pick the fruit of open source software while refusing to peel off even a tiny fraction of their profits to financially support those same groups.

Exhibit 1: Apple. On Friday, IT World ran a story that looks at Apple's long history of not giving back to the technology and open source community. The article cites three glaring examples: Apple's non-support of the Apache Software Foundation (despite bundling Apache with OS X), its non-support of OASIS, and its refusal to participate in the Trusted Computing Group (despite leveraging TCG-inspired concepts, like the Secure Enclave in the iPhone 5s).

Given Apple's status as the world's most valuable company and its enormous cash hoard, the refusal to offer even meager support to open source and industry groups is puzzling. From the article:

"Apple bundles software from the Apache Software Foundation with its OS X operating system, but does not financially support the Apache Software Foundation (ASF) in any way. That is in contrast to Google and Microsoft, Apple's two chief competitors, which are both Platinum sponsors of ASF — signifying a contribution of $100,000 annually to the Foundation. Sponsorships range as low as $5,000 a year (Bronze), said Sally Khudairi, ASF's Director of Marketing and Public Relations. The ASF is vendor-neutral and all code contributions to the Foundation are done on an individual basis. Apple employees are frequent, individual contributors to Apache. However, their employer is not, Khudairi noted.

The company has been a sponsor of ApacheCon, a for-profit conference that runs separately from the Foundation — but not in the last 10 years. "We were told they didn't have the budget," she said of efforts to get Apple's support for ApacheCon in 2004, a year in which the company reported net income of $276 million on revenue of $8.28 billion."

Carol Geyer at OASIS is quoted saying her organization has done "lots of outreach" to Apple and other firms over the years, and regularly contacts Apple about becoming a member. "Whenever we're spinning up a new working group where we think they could contribute we will reach out and encourage them to join," she said. But those communications always go in one direction, Geyer said, with Apple declining the entreaties.

Today, the company has no presence on any of the organization's 100-odd active committees, which are developing cross-industry technology standards such as the Key Management Interoperability Protocol (KMIP) and the Public-Key Cryptography Standard (PKCS)."


TCP/IP Might Have Been Secure From The Start, But...NSA!

chicksdaddy writes  |  about 5 months ago

chicksdaddy (814965) writes "The pervasiveness of the NSA's spying operation has turned it into a kind of bugaboo — the monster lurking behind every locked networking closet (http://en.wikipedia.org/wiki/Room_641A) and the invisible hand behind every flawed crypto implementation (http://www.reuters.com/article/2014/03/31/us-usa-security-nsa-rsa-idUSBREA2U0TY20140331).

Those inclined to don the tinfoil cap won't be reassured by Vint Cerf's offhand observation in a Google Hangout on Wednesday that, back in the mid 1970s, the world's favorite intelligence agency may have also stood in the way of stronger network layer security being a part of the original specification for TCP/IP — the Internet's lingua franca.

As noted on Veracode's blog (http://blog.veracode.com/2014/04/cerf-classified-nsa-work-mucked-up-security-for-early-tcpip/), Cerf said that given the chance to do it over again he would have designed earlier versions of TCP/IP to look and work like IPv6, the latest version of the IP protocol, with its integrated network-layer security and massive 128-bit address space. IPv6 is only now beginning to replace the exhausted IPv4 protocol globally.

“If I had in my hands the kinds of cryptographic technology we have today, I would absolutely have used it,” Cerf said. (Check it out here: http://www.youtube.com/watch?v...)

Researchers at the time were working on just such a lightweight cryptosystem. Cerf noted that, on Stanford’s campus, Whitfield Diffie and Martin Hellman had researched and published a paper that described the functioning of a public-key cryptography system. But they didn’t yet have the algorithms to make it practical. (Ron Rivest, Adi Shamir and Leonard Adleman published the RSA algorithm in 1977.)
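The Diffie-Hellman construction Cerf refers to can be shown with a toy example (deliberately tiny numbers for illustration; real deployments use authenticated exchanges over groups of 2048 bits or more):

```python
# Toy Diffie-Hellman key exchange with demo-sized parameters.
# Two parties derive the same shared secret without ever sending it.
p, g = 23, 5            # public modulus and generator (far too small for real use)

a = 6                   # Alice's private value
b = 15                  # Bob's private value

A = pow(g, a, p)        # Alice transmits g^a mod p
B = pow(g, b, p)        # Bob transmits g^b mod p

shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob   = pow(A, b, p)   # (g^a)^b mod p
# Both sides now hold the same secret; an eavesdropper sees only p, g, A, B.
```

The missing piece in 1976 was a practical algorithm family around this idea, which is what RSA supplied the following year.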

As it turns out, however, Cerf revealed that he _did_ have access to some really bleeding edge cryptographic technology back then that might have been used to implement strong, protocol-level security into the earliest specifications of TCP/IP. Why weren’t they used? The culprit is one that’s well known now: the National Security Agency.

Cerf told host Leo Laporte that the crypto tools were part of a classified NSA project he was working on at Stanford in the mid 1970s to build a secure, classified Internet.

“During the mid 1970s while I was still at Stanford and working on this, I also worked with the NSA on a secure version of the Internet, but one that used classified cryptographic technology. At the time I couldn’t share that with my friends,” Cerf said. “So I was leading this kind of schizoid existence for a while.”

Hindsight is 20/20, as the saying goes. Neither Cerf, the NSA, nor anyone else could have predicted how much of our economy and that of the globe would come to depend on what was then a government-backed experiment in computer networking. Besides, Cerf didn't elaborate on the cryptographic tools he was working with as part of his secure Internet research, or how suitable (and scalable) they would have been.

But it’s hard to listen to Cerf lamenting the absence of strong authentication and encryption in the foundational protocol of the Internet, or to think about the myriad online ills of the past two decades that might have been preempted by a stronger and more secure protocol, and not wonder what might have been."


Vint Cerf: CS Programs Must Change To Adapt To Internet of Things

chicksdaddy writes  |  about 5 months ago

chicksdaddy (814965) writes "The Internet of Things has tremendous potential but also poses a tremendous risk if the underlying security of Internet of Things devices is not taken into account, according to Vint Cerf, Google’s Chief Internet Evangelist.

Cerf, speaking in a public Google Hangout on Wednesday, said that he’s tremendously excited about the possibilities of an Internet of billions of connected objects (http://www.youtube.com/watch?v=17GtmwyvmWE&feature=share&t=21m8s). But Cerf warned that the IoT necessitates big changes in the way that software is written. Securing the data stored on those devices and exchanged between them represents a challenge to the field of computer science – one that the nation’s universities need to start addressing.

Internet of Things products need to do a better job managing access control and use strong authentication to secure communications between devices."
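One minimal illustration of the kind of strong device-to-device authentication Cerf is calling for: a hypothetical HMAC challenge-response over a pre-shared key (a sketch only; real designs would favor per-device certificates and mutually authenticated TLS).

```python
import hashlib
import hmac
import os

# Hypothetical sketch: one IoT device proving its identity to another via
# an HMAC challenge-response over a pre-shared key. Production designs
# would prefer per-device certificates and mutually authenticated TLS.
def make_challenge():
    """Verifier generates a fresh random nonce."""
    return os.urandom(16)

def respond(shared_key, challenge):
    """Prover returns HMAC-SHA256 of the challenge under the shared key."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key, challenge, response):
    """Verifier recomputes the MAC and compares in constant time."""
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Because the challenge is random per session, a captured response cannot simply be replayed against the verifier later.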


Hell Is Other Contexts: How Wearables Will Transform Application Development

chicksdaddy writes  |  about 5 months ago

chicksdaddy (814965) writes "Veracode's blog has an interesting post on how wearable technology will change the job of designing applications. Long and short: context is everything. From the article:

"It’s the notion – unique to wearable technology – that applications will need to be authored to be aware of and respond to the changing context of the wearer in near real-time. Just received a new email message? Great. But do you want to splash an alert to your user if she’s hurtling down a crowded city street on her bicycle? New text message? OK– but you probably shouldn't send a vibrate alert to your user's smartwatch if the heart rate monitor suggests that he’s asleep, right?

This isn't entirely a new problem, but it will be a challenge for developers used to a world where ‘endpoints’ were presumed to be objects that are physically distinct from their owner and, often, stationary.

Google has already called attention to this in its developer previews of Android Wear – that company’s attempt to extend its Android mobile phone OS to wearables. Google has encouraged wearable developers to be “good citizens.” “With great power comes great responsibility,” Google’s Justin Koh reminds would-be developers in a Google video.(https://www.youtube.com/watch?v=1dQf0sANoDw&feature=youtu.be&t=2m26s)

“It’s extremely important that you be considerate of when and how you notify a user.” Developers are strongly encouraged to make notifications and other interactions between the wearable device and its wearer as ‘contextually relevant as possible.’ Google has provided APIs (application program interfaces) to help with this. For example, Koh recommends that developers use APIs in Google Play Services to set up a geo-fence that will make sure the wearer is in a specific location (i.e. “home”) before displaying certain information. Motion detection APIs for Wear can be used to front (or hide) notifications when the wearer is performing certain actions, like bicycling or driving."
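The contextual gating the post describes might look something like this (the context fields and rules below are made up for the example; they are not Android Wear APIs):

```python
# Illustrative sketch of context-aware notification gating for a wearable.
# The context model is hypothetical; real apps would derive these signals
# from platform APIs (geofencing, activity recognition, heart-rate data).
def should_notify(context, message_kind):
    """Decide whether to interrupt the wearer right now."""
    if context.get("asleep"):
        return False                      # heart-rate monitor says asleep
    if context.get("activity") in ("cycling", "driving"):
        return message_kind == "urgent"   # only break through when urgent
    return True                           # otherwise, deliver normally
```

The endpoint here is not a stationary box but a person in motion, and the application has to reason about that before it buzzes.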
