MS Says All Sidekick Data Recovered, But Damage Done

nandemoari writes "T-Mobile is taking a huge financial hit in the fallout over the Sidekick data loss. But Microsoft, which bears at least part of the responsibility for the mistake, is paying the price with its reputation. As reported earlier this week, the phone network had to admit that some users' data had been permanently lost due to a problem with a server run by Microsoft-owned company Danger. The handset works by storing data such as contacts and appointments on a remote computer rather than on the phone itself. BBC News reports today that Microsoft has in fact recovered all data, but a minority are still affected (out of 1 million subscribers). Amidst this, Microsoft appears not to have suffered any financial damage. However, it seems certain that its relationship with T-Mobile will have taken a major knock. The software giant is also the target of some very bad publicity as critics question how on earth it failed to put in place adequate back-ups of the data. That could seriously damage the potential success of the firm's other 'cloud computing' plans, such as web-only editions of Office."
This discussion has been archived. No new comments can be posted.


  • by sakdoctor ( 1087155 ) on Thursday October 15, 2009 @03:02PM (#29760873) Homepage

    Not just buzz, it's the future bro.

    • Re:Cloud computer (Score:5, Insightful)

      by fahrbot-bot ( 874524 ) on Thursday October 15, 2009 @03:24PM (#29761141)

      it's the future bro

      Perhaps for people who don't care about their data... Privacy, security, accountability and reliability cannot be ensured by a third party. I'll keep my data in-house thank you.

      • Re:Cloud computer (Score:5, Informative)

        by Krneki ( 1192201 ) on Thursday October 15, 2009 @03:46PM (#29761369)

        it's the future bro

        Perhaps for people who don't care about their data... Privacy, security, accountability and reliability cannot be ensured by a third party. I'll keep my data in-house thank you.

        If you can set up offline synchronization and data encryption, there is no reason not to use cloud computing.

        If your provider does not support this, then it's time to change it.
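
        A minimal sketch of that client-side approach, assuming Python and the third-party cryptography package; the key location, file names, and sync directory below are made up for illustration. The point is that the provider only ever stores ciphertext, so a server-side loss costs you availability, not privacy.

```python
# Sketch: encrypt locally before anything leaves the machine, so the
# cloud provider only ever sees ciphertext. Assumes the third-party
# "cryptography" package (pip install cryptography); paths are made up.
from pathlib import Path
from cryptography.fernet import Fernet

KEY_FILE = Path("~/.cloud-backup.key").expanduser()

def load_or_create_key() -> bytes:
    """Keep the key local; losing it means losing the data."""
    if KEY_FILE.exists():
        return KEY_FILE.read_bytes()
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)
    return key

def encrypt_for_upload(plain_path: str, out_dir: str = "sync-outbox") -> Path:
    """Write an encrypted copy into the directory a sync client watches."""
    fernet = Fernet(load_or_create_key())
    data = Path(plain_path).read_bytes()
    out = Path(out_dir) / (Path(plain_path).name + ".enc")
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_bytes(fernet.encrypt(data))
    return out

if __name__ == "__main__":
    print(encrypt_for_upload("contacts.vcf"))
```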

        • Re: (Score:3, Interesting)

          One reason not to use Cloud Computing is that I can avoid the Ribbon interface crapola (as introduced in Office 2007) and just keep using my older software. Or I can ignore Vista/ME and just keep using the older XP/98 operating systems. With cloud computing, using older programs won't be an option, because the new version will be forced upon you.

        • >>>If your provider does not support synchronization and data encryption, then it's time to change it.

          Well, I could, but my local government gave Comcast a monopoly. There is nothing else to "change" to. Thanks, politicians, for taking away my freedom of choice. I'm losing my liberty.

        • Re:Cloud computer (Score:4, Insightful)

          by TubeSteak ( 669689 ) on Thursday October 15, 2009 @04:25PM (#29761859) Journal

          If you can set up offline synchronization and data encryption, there is no reason not to use cloud computing.

          All a local backup will give me is reliability.
          If I can't encrypt my data on their servers I don't really have privacy or security.

      • Re: (Score:3, Insightful)

        by elnyka ( 803306 )

        it's the future bro

        Perhaps for people who don't care about their data... Privacy, security, accountability and reliability cannot be ensured by a third party. I'll keep my data in-house thank you.

        Dude, organizations use third party data centers (or data centers that they physically own but are managed by a 3rd party) all the time w/o a glitch. Unless you are a software giant (like eBay or Amazon) that can build your own data center, or are a small/midsize operation (or are just a guy with a home computer), you will inevitably have a large part of your stuff either running on someone else's infrastructure or operating on someone else's watch.

        It is done all the time, by many, for years now.

        • Re: (Score:3, Interesting)

          by fahrbot-bot ( 874524 )

          Dude, organizations use third party data centers (or data centers that they physically own but are managed by a 3rd party) all the time w/o a glitch.

          Ya, "all the time". I worked for a company that outsourced its data center to IBM. They "accidentially" deleted our Oracle database - twice - and it often took two weeks to get things simple done on the servers, like add an entry added to the /etc/hosts file. I was hired as the senior Unix SA and we purchased our own equipment ($2 million worth), brought th

      • Re: (Score:3, Funny)

        by 4D6963 ( 933028 )
        Yes, if I'm gonna lose personal data I want it to be to my own flawed backup strategy! To hell with professionals whose job and business is to do just that!
    • by corbettw ( 214229 ) on Thursday October 15, 2009 @03:41PM (#29761313) Journal

      Storing all of your data and the lion's share of your processing on a remote machine, with only the bare minimum stored and run locally? Sounds a lot more like the past to me.

  • by wiredog ( 43288 ) on Thursday October 15, 2009 @03:05PM (#29760905) Journal

    As noted here [washingtonpost.com], the damage to T-Mobile is compounded by their tone-deafness on customer support.

    Uh, T-Mobile, can I offer a hint here? This is not the time to nickel-and-dime cranky customers. Let them go now, and maybe they won't spend the next nine months telling everybody they know to avoid your service -- instead, if you're lucky, they'll find a new hobby after only two months.

  • by SuiteSisterMary ( 123932 ) <slebrunNO@SPAMgmail.com> on Thursday October 15, 2009 @03:06PM (#29760911) Journal

    Well, to be fair, whoever said 'All data is lost' to the press should have been dragged out back and shot. They should have said 'We're looking into how long it will take to restore data, and to see if there will be any problems' and left it at that for a few days.

    • So they had actual backups of the data?
      Still, it's a sad state of affairs that it's taking so long.
      • by rtfa-troll ( 1340807 ) on Thursday October 15, 2009 @03:10PM (#29760985)
        From the stories circulating, it looks as if they are doing this by recovering the structure of the database, not restoring from backup. Note that they say most customers should have all data restored, not just "data up to last week" or something similar. Of course, this could all just be misplaced speculation and misunderstanding.
        • That's kind of my point. As was pointed out in the original /. discussion, the data wasn't 'lost' per se; even then it seemed likely that lots of it could be recovered, though it would be very inconvenient and possibly not worth the effort.

  • by cookie23 ( 555274 ) on Thursday October 15, 2009 @03:07PM (#29760921) Homepage
    It is hard for me to blame T-Mobile for the MS/Danger server/backup failure. Danger both makes the phones and runs the service, whereas T-Mobile appears to be little more than the common carrier and the customer service department. It is a bit unreasonable to suggest that T-Mobile could have prevented the outage. I mean, it's not like they could host the data somewhere else, right? Sure, they could have done a much better job handling the failure after it happened, much much better, but I just don't think they could have prevented it.
    • Re: (Score:3, Insightful)

      by LMacG ( 118321 )

      But Johnny SidekickUser can't contract directly with Danger, he has to deal with T-Mobile. T-Mobile has some responsibility for making sure the service they're reselling operates as advertised. This shouldn't be a "best-effort" service.

    • by sirwired ( 27582 ) on Thursday October 15, 2009 @03:32PM (#29761219)

      If T-Mobile plasters their name on the contract, the device, and the service, then the buck stops there. Period. Internally, T-Mobile can choose to blame the Easter Bunny if they like, but ultimately, it was T-Mobile's responsibility to ensure that their customers' data was properly protected. This absolutely could have been prevented by audits of Microsoft's/Danger's operations, checks of backup integrity, tighter contracts, etc. T-Mobile can go try and sue MS to get their damages back, but in the meantime, customers can, and should, be blaming (and suing) T-Mobile.

      SirWired

    • They are ultimately responsible. Personally, I want to hear a recording of the conference call that went on in this maintenance window. I bet the "oh shit" moment was pretty intense.
    • by Thing 1 ( 178996 )

      Danger both makes the phones and runs the service [...]

      You'd think that their name would at least give customers pause as to the safety of their data... (Or, perhaps their name gives them some legal wiggle room?)

  • by elrous0 ( 869638 ) * on Thursday October 15, 2009 @03:07PM (#29760939)
    Hey, at least this fiasco took the heat off their crappy network for a while.
  • by Aurisor ( 932566 ) on Thursday October 15, 2009 @03:07PM (#29760941) Homepage

    But Microsoft, which bears at least part of the responsibility for the mistake, is paying the price with its reputation.

    Wow, this is a terrible blow for Microsoft. This might make people think that they produce unreliable products!

    • This whole sorry saga just shows the two companies in stark contrast. The data loss was directly caused by Microsoft & their shoddy stuff

      T-Mobile, who only sells this Microsoft stuff, on hearing of the problems, immediately issues a statement & offers advice & compensation.
      Microsoft, who caused this, "Not me Guv! 'onest!"
    • Wow, this is a terrible blow for Microsoft. This might make people think that they produce unreliable and shoddy products!

      There, fixed that for you.
  • by rtfa-troll ( 1340807 ) on Thursday October 15, 2009 @03:08PM (#29760947)

    Worth repeating every time: nobody cares whether you back up your data; what matters is whether you can restore it. Take a blank server; take whatever it is that you store offsite. If you can turn the blank server into your production system, then you are fine. If you can't, then your strategy is failing. If you never try it, then you are an amateur.

    This incompetence is something far beyond serious for MS. T-Mobile is a much bigger customer than almost anyone short of Vodafone can ever hope to be. MS have been moving strategically into hosting services such as Exchange for many customers. If you're a CEO, you should be calling your CIO in and asking him when he plans to be free of MS services. If you are a CIO, you want to be able to answer "there's nothing business-critical relying on MS services" by the time that meeting comes.
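
    A minimal sketch of that "blank server" drill, assuming a PostgreSQL dump and the stock createdb/pg_restore/psql tools; the scratch host, dump path, and smoke-test query below are hypothetical. The drill only passes when real application data comes back, not when the restore command merely exits cleanly.

```python
# Sketch of the "blank server" drill: restore last night's dump onto a
# scratch host and prove the application's data is actually there.
# Assumes PostgreSQL's createdb/pg_restore/psql CLIs; names are made up.
import subprocess
import sys

SCRATCH_HOST = "restore-test.internal"   # hypothetical throwaway server
DUMP_FILE = "/backups/prod-latest.dump"  # hypothetical nightly dump

def run(cmd: list) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def restore_drill() -> None:
    run(["createdb", "-h", SCRATCH_HOST, "drill"])
    run(["pg_restore", "-h", SCRATCH_HOST, "-d", "drill", DUMP_FILE])
    # A smoke test you actually care about, not just "restore exited 0".
    out = subprocess.run(
        ["psql", "-h", SCRATCH_HOST, "-d", "drill", "-t", "-A",
         "-c", "SELECT count(*) FROM subscribers;"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    if int(out) == 0:
        sys.exit("Restore 'succeeded' but the data is not there.")
    print(f"Drill OK: {out} subscriber rows restored.")

if __name__ == "__main__":
    restore_drill()
```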

    • by Aurisor ( 932566 )

      I think you're overstating your point. Unless you are saving your data in a truly useless format, having a practiced procedure for getting that data back into production only lets you get the data back up faster. We have one backup system in particular at my office - although we have never built a production machine from it, we do (manually and automatically) test the data to ensure that everything from production made it in. Will restoring that data be slow and sketchy? Sure. Is it fair to say that nobody will care if we have the data backed up? No.

        • There are astounding stories of whole databases where it turned out that the database had never been written from memory to disk. There are many people who make the mistake of believing that their MySQL files on disk are consistent (you are supposed to dump the database). Even applications like Office can have corrupt files on disk if a document is open. I know of situations where it turned out that the heads in the backup system were misaligned and so the tape only read back on the system they were backing up on.
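
          For the MySQL case above, the consistent alternative to copying live table files is a logical dump; a minimal sketch, with the database name and backup path made up:

```python
# Sketch: a consistent logical dump instead of copying live table files.
# --single-transaction gives a consistent snapshot for InnoDB tables
# without locking the whole server. Database name and paths are made up.
import datetime
import subprocess

def dump_mysql(db: str = "appdb", out_dir: str = "/backups") -> str:
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    out_file = f"{out_dir}/{db}-{stamp}.sql"
    with open(out_file, "w") as fh:
        subprocess.run(
            ["mysqldump", "--single-transaction", "--routines",
             "--triggers", db],
            stdout=fh, check=True,
        )
    return out_file

if __name__ == "__main__":
    print("wrote", dump_mysql())
```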
          • Sorry, and I should have said: not testing breaks the fundamental principle of KISS. If you have to think about whether your backup is correct, then your system is too complex. You should know it's correct because you know it works.
        • by timster ( 32400 )

          I know of situations where it turned out that the heads in the backup system were misaligned and so the tape only read back on the system they were backing up on

          As I recall, this was essentially true all the time for DDS-3 drives. Remember kids, Just Say No to helical tape.

      • by timster ( 32400 ) on Thursday October 15, 2009 @03:30PM (#29761209)

        No, he's NOT overstating his point. Unless your data is a bunch of flat text files or Word documents or whatever, the restore is a critically difficult process.

        Enterprise data like this often has never been in a flat or "dead" state since the original implementation. Complex applications frequently have delicate interactions between the live application and the contents of the database at any particular moment. Having a bunch of database tables on a tape somewhere doesn't do you much good if the application can't actually start from the state contained on the tapes, and it's a two-week manual process to clean up the issues.

        If you can afford a "slow and sketchy" restore process, or your application is just not that complicated, then by all means, don't test your restore, and don't create a department with responsibility for backups and nothing else. It's still amateur work.

      • Re: (Score:3, Informative)

        by Ephemeriis ( 315124 )

        I think you're overstating your point. Unless you are saving your data in a truly useless format, having a practiced procedure for getting that data back into production only lets you get the data back up faster. We have one backup system in particular at my office - although we have never built a production machine from it, we do (manually and automatically) test the data to ensure that everything from production made it in. Will restoring that data be slow and sketchy? Sure. Is it fair to say that nobody will care if we have the data backed up? No.

        The point isn't to have a practiced procedure that your technicians can run through with their eyes closed... The point is to actually test your backups and know whether they are working, whether the data is usable, and whether it is possible to get a production server up and running from that backup.

        Most backups aren't going to be as easy as insert tape, walk away, come back to a working production server an hour later. Most backups will involve some kind of re-pointing or importing or configuration or w

      • Will restoring that data be slow and sketchy? Sure.

        So what is supposed to happen while "slow and sketchy" is taking place? All business stops?

        And what is meant by "sketchy"? "Sketchy" is not the adjective I like to hear used to describe the accuracy and consistency of my data.

        Even if you're just backing up a bunch of flat files, how do you know that your backup is a consistent snapshot? Or are you OK with your data just being invalid in unpredictable ways?

        Where are your backups located? On-site? I sure hope not. Fires happen. Floods happen.

        Backups an
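
        One cheap, partial answer to the consistency question is a checksum manifest written at backup time and re-verified later against the stored (ideally off-site) copy; a minimal sketch, with paths made up. This catches bit rot and bad media, not application-level consistency, which still needs an actual restore test.

```python
# Sketch: write SHA-256 checksums at backup time, then re-verify the
# stored copy against that manifest later. Paths are hypothetical; this
# checks that the copy is intact, not that the application can start
# from it, which only a real restore test can prove.
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(backup_dir: str) -> Path:
    root = Path(backup_dir)
    manifest = {str(p.relative_to(root)): sha256(p)
                for p in root.rglob("*")
                if p.is_file() and p.name != "MANIFEST.json"}
    out = root / "MANIFEST.json"
    out.write_text(json.dumps(manifest, indent=2))
    return out

def verify(backup_dir: str) -> bool:
    root = Path(backup_dir)
    manifest = json.loads((root / "MANIFEST.json").read_text())
    return all(sha256(root / name) == digest
               for name, digest in manifest.items())

if __name__ == "__main__":
    write_manifest("/backups/nightly")
    print("intact:", verify("/backups/nightly"))
```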

    • by cptdondo ( 59460 ) on Thursday October 15, 2009 @03:26PM (#29761153) Journal

      This incompetence is something far beyond serious for MS. T-Mobile is a much bigger customer than almost anyone short of Vodafone can ever hope to be. MS have been moving strategically into hosting services such as Exchange for many customers. If you're a CEO, you should be calling your CIO in and asking him when he plans to be free of MS services. If you are a CIO, you want to be able to answer "there's nothing business-critical relying on MS services" by the time that meeting comes.

      Hehe. I raised this issue when this broke. We have a huge amount of critical data outsourced to a hosting company. I sent this fiasco up the food chain, asking what our backup strategy is should this happen to our host.

      I got back some pablum about "well, they have 2 geographically separate datacenters, blah blah blah" from the guy who administers the contract.

      Maybe they did at one point, but I know the folks we use fired most of their devs, including the lead developer, back in March as a cost-cutting measure, and I wouldn't be surprised if one of the "two data centers" disappeared along with the developers. Regardless, no one on our end seems to be concerned and no one is taking any precautions (like local backups).

      Maybe one day I'll get to say, "I Told You So."

    • The issue has nothing to do with Microsoft. The issue has to do with a failure by T-Mobile with a vendor (that happens to be Microsoft). How T-Mobile ever approved a contract without the appropriate backup software, hardware, DR plan, and testing is the bigger issue. Microsoft likely provided exactly the level of support that T-Mobile paid for, and I'm willing to bet that T-Mobile balked at these proposed charges from Microsoft and went with the cheaper option without the backup expenses. If you're a CIO you use

    • Re: (Score:3, Insightful)

      This point absolutely cannot be overstated: A backup that has never been through a restore/recovery test is just as bad as having no backups at all.
      Your admin team or hosting company should be able to tell you what is involved to get from your backup to a fully functioning production system (a truly well thought-out backup scheme will have a step-by-step recovery checklist), and they should be able to provide a worst-case data loss estimate based on your backup scheme.

      This isn't a failure of "cloud compu
  • Not likely (Score:4, Interesting)

    by pongo000 ( 97357 ) on Thursday October 15, 2009 @03:10PM (#29760973)

    "That could seriously damage the potential success of the firm's other 'cloud computing' plans, such as web-only editions of Office."

    I can't tell whether this is spin put on the summary by the submitter or some other third-party (because we all know submitters are, absent any editorial constraints on /., free to post what they want without attribution). That said, it's highly unlikely Microsoft will suffer from this. Wisely, they offloaded all responsibility the moment they created this entity known as Danger. They've effectively washed their hands of the entire affair, because it wasn't really a Microsoft problem in the end, but a problem with an affiliated company.

    It is simply wishful thinking on the part of the submitter (or whomever) that Microsoft will be tainted by this deal. In all likelihood, Microsoft will simply walk away from their relationship with Danger, and it will be business again as usual.

    • Re:Not likely (Score:4, Insightful)

      by nedlohs ( 1335013 ) on Thursday October 15, 2009 @03:43PM (#29761343)

      Except that people make decisions and don't really care if something is just "affiliated".

      Microsoft and Google bid for the "cloud computing" "office" contract at some company. Do you really think Google isn't going to mention, with a bunch of references, this screw up?

      With quotes from press releases like:

      We have determined that the outage was caused by a system failure that created data loss in the core database and the back-up.

      Roz Ho
      Corporate Vice President
      Premium Mobile Experiences, Microsoft Corporation

      in big bold blocks.

  • by jollyreaper ( 513215 ) on Thursday October 15, 2009 @03:14PM (#29761027)

    The worrisome part about cloud computing is putting your trust in someone else's hands. But keeping your backup process internal to the company is no panacea either. Bad management practice is what led to the cloud screwing up, just like bad management practice led to in-house data losses at other companies.

    How many of you guys generate your own power 24x7? C'mon, you're really going to place the fate of your business in the hands of people running off the wire? Wire power. Feh! That wire could be going anywhere. Real men run their own generators!

    Sounds silly, right? Of course, that's only because we're used to power companies running like utilities, government-regulated monopolies allowed to exclusively service the public with a healthy, dependable profit in return for low rates and universal service. In such an environment having your own generators for anything other than emergencies is paranoia. But wow, you start deregulating things and let the businessmen go nuts and it almost seems like you'd have to.

    The real question with cloud computing is whether the companies are going to operate in a fashion that brings to mind steady, sober, dependable service like a local utility, like a giant rapacious corporation uncaring of human concerns, or like a fly-by-night dotcom. My personal opinion is that I don't trust these fuckers. My current company's situation is that we have a major software product we run our business on; the publisher got gobbled up by a bigger company, and that company got gobbled up by a bigger one. The big company has decided to discontinue the product and has been slowly dismantling the team that supports it. We know we're going to have to make a jump eventually, but the conglomerate could pull the plug tomorrow and we'd still be in operation. If it were a cloud app, we could be dead in the water.

    • The worrisome part about cloud computing is putting your trust in someone else's hands.

      I don't get this part (privacy issues aside). Cloud services aren't generally touted as incremental backup solutions.

      I mean, it's the same exact thing as running a RAID server in your own home. Simply having a system that is designed to mitigate downtime a backup system does not make.

      Even if you have a system that separates data across multiple disks in different locations, user error or a database bug could wipe the "live

    • Well, actually my employer does in fact generate their own power. Or rather, they can. Our data center runs off a UPS bank, which is fed by the mains feed and a generator. If mains power fails, the UPS has enough capacity to keep the data center running until the genny starts up and starts supplying power. It's more convenient to run off mains, but we don't assume mains power is totally reliable (or even completely clean, the UPS bank filters and stabilizes it).

      And yes, they run monthly tests of the genny t

    • by Chris Mattern ( 191822 ) on Thursday October 15, 2009 @03:57PM (#29761501)

      The real question with cloud computing is whether the companies are going to operate in a fashion that brings to mind steady, sober, dependable service like a local utility, [or] like a giant rapacious corporation uncaring of human concerns

      Man, what fantasyland are your utilities located in? I wanna move there! In my experience, utilities *are* "giant rapacious corporation uncaring of human concerns".

      • Man, what fantasyland are your utilities located in? I wanna move there! In my experience, utilities *are* "giant rapacious corporation uncaring of human concerns".

        I live under the benevolent heel of LIPA, so I'll add "unreliable" and "dirty" to your list of adjectives...

    • If you trusted the power company that much, you wouldn't have UPS and power generators in data centers.

    • The worrisome part about cloud computing is putting your trust in someone else's hands. But keeping your backup process internal to the company is no panacea either. Bad management practice is what led to the cloud screwing up, just like bad management practice led to in-house data losses at other companies. How many of you guys generate your own power 24x7? C'mon, you're really going to place the face of your business in the hands of people running off the wire? Wire power. Feh! That wire could be going a

    • I actually don't trust the power company to provide electricity 24x7, because they don't... at least not reliably. In addition to a 3-day outage in the middle of winter several years ago, I lose power for more than a minute - often several hours - at least half a dozen times each year. So yes: in addition to a UPS for my TiVo and other electronic essentials, I have a generator big enough to run my servers, router, etc. as long as I keep feeding it gasoline.

      What third-world country do I live in tha

  • Stormy weather (Score:3, Interesting)

    by surfdaddy ( 930829 ) on Thursday October 15, 2009 @03:18PM (#29761059)
    Given how much of our internet access is being spied on by the government, how could ANYBODY want to trust their critical data to a cloud service? Sounds like Microsoft has Cumulonimbus clouds.
    • but that can work in your favor!

      'talk' about sensitive stuff in the middle of a file upload (so to speak) and you'll be sniffed.

      then simply file a FOIA to get your data back from the government 'backups'.

      (yeah, right.)

  • by Anonymous Coward on Thursday October 15, 2009 @03:19PM (#29761087)

    Years of BSODS.

    Years of viruses.

    Years of trojans.

    Yet THIS "damages Microsoft's reputation"?!?!?!

    • It's kind of like having a reference customer. It's all very well showing that they are incompetent in theory. It's good to be able to set up the production servers and run load tests. Here we have a real-life demo that MS can really damage loads of customers' data. There are always cynics who say "yes, but they won't be able to do it in production". Now nobody will be able to claim that MS can't do an up-to-date, full-scale cloud screw-up.
  • by Teun ( 17872 ) on Thursday October 15, 2009 @03:27PM (#29761165)
    Cloud computing and remote storage are not necessarily the same.

    What we see here is a small device storing its data remotely, and I wonder why.
    Considering how cheap a couple of GB of memory is and how precious wireless bandwidth is, this can mean only one thing: having, and thus being able to exploit, that data is worth more than the cost of the bandwidth.
  • by pubwvj ( 1045960 ) on Thursday October 15, 2009 @03:34PM (#29761235)
    A company called Danger? Responsible for data and servers? Yowsa! Red alert time!
  • Microsoft? No. (Score:5, Insightful)

    by snspdaarf ( 1314399 ) on Thursday October 15, 2009 @03:38PM (#29761281)
    I don't see this as having a big effect on Microsoft. T-Mobile on the other hand....
    I don't believe that customers care if your service providers have problems. They have an agreement with you, not your providers.
  • All data recovered? (Score:4, Interesting)

    by Carik ( 205890 ) on Thursday October 15, 2009 @03:50PM (#29761415)

    So here's what confuses me... "BBC News reports today that Microsoft has in fact recovered all data, but a minority are still affected." If all the data has been recovered, wouldn't NO ONE still be affected? I mean... being affected by this means your data was lost in such a way that it couldn't be recovered. So...

  • by bkaul01 ( 619795 ) on Thursday October 15, 2009 @04:31PM (#29761907)
    Who decides that a server farm called "Danger" is a safe place to store backups?
  • Wrong wrong wrong! (Score:5, Insightful)

    by LWATCDR ( 28044 ) on Thursday October 15, 2009 @05:24PM (#29762773) Homepage Journal

    "But Microsoft, which bears at least part of the responsibility for the mistake, is paying the price with its reputation."
    Microsoft bears ALL THE RESPONSIBILITY FOR THE MISTAKE!
    They own Danger and they run the data center that stores the data!
    It was their fault 100%.

  • Not So Fast (Score:5, Informative)

    by NuttyBee ( 90438 ) on Thursday October 15, 2009 @05:55PM (#29763193)

    I have a Sidekick.

    I still, a week later, can't get e-mail on it. My contacts were never lost, but the damn thing still doesn't work! I'm getting tired of waiting.

    My contract is up in August and I'm going to find a phone that stores everything locally AND a new provider. I have learned my lesson.

  • by QuestorTapes ( 663783 ) on Thursday October 15, 2009 @06:33PM (#29763569)

    > "The outage was caused by a system failure that created data loss in the core database and the back up,"
    > [Microsoft Corporate Vice President Roz Ho] wrote in an open letter to customers.

    It sounds like their "backup" was a replica on another connected server.

    No actual offline backups at all.

    When JournalSpace was destroyed, one Slashdot thread was "Why Mirroring Is Not a Backup Solution".

    My favorite comment was by JoelKatz:

    >> The whole point of a backup is that it is *stable*. Neither copy is stable, so there is no
    >> "backup on the hardware level". There are two active systems.
    >>
    >> If you cannot restore an accidentally-deleted file from it, it's not a backup.
    >> ... if the active copy of the data is corrupted, there is no backup.
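
    In the spirit of that quote, the property a backup needs is stable, point-in-time copies that nothing overwrites in place; a minimal sketch of keeping timestamped generations rather than one live mirror, with the paths and retention below made up:

```python
# Sketch: a backup is a *stable* point-in-time copy, not a second live
# mirror. Keep N timestamped generations so yesterday's deletion or
# corruption is still recoverable. Paths and retention are hypothetical.
import datetime
import shutil
from pathlib import Path

KEEP = 14  # hypothetical: two weeks of daily generations

def snapshot(src: str, dest_root: str = "/backups/generations") -> Path:
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(dest_root) / stamp
    shutil.copytree(src, dest)          # copy, never sync-in-place
    prune(Path(dest_root))
    return dest

def prune(root: Path) -> None:
    generations = sorted(p for p in root.iterdir() if p.is_dir())
    for old in generations[:-KEEP]:
        shutil.rmtree(old)

if __name__ == "__main__":
    print("snapshot at", snapshot("/srv/app-data"))
```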

  • Reputation? (Score:3, Insightful)

    by cheros ( 223479 ) on Friday October 16, 2009 @06:33AM (#29766975)

    "But Microsoft, which bears at least part of the responsibility for the mistake, is paying the price with its reputation"

    Just out of curiosity, what reputation might that be? :-)

  • by Ilgaz ( 86384 ) on Friday October 16, 2009 @08:54AM (#29767569) Homepage

    I think this is more like the Amazon Kindle "1984" scandal; it will have a long-term impact on cloud computing and the general direction of things to come.

    Even if you invent an e-ink reader/store system tomorrow which has NOTHING to do with the Amazon Kindle, you will still be asked "but will you delete my books remotely?" In the same way, some dead tech acquired by MS and managed poorly will end up costing sales even for IBM's mainframe department.

    If one is a hopeless conspiracy theorist, one could easily suggest MS did it on purpose to lower the general public's trust in the cloud, in which they have almost no stake. The cloud is all open-source territory right now; Apache Hadoop and the like are what get talked about, not some MS enterprise server or technology.
