Federal Government Removes 7 Americans From No-Fly List
It has been several years since I last flew domestically within the US, but I have never been allowed to board any aircraft larger than a Cessna I was piloting myself without the holy trinity: passport or acceptable photo ID, ticket, and boarding pass (the last only issued after presenting the ticket plus passport/photo ID).
A few weeks ago, I was at the gate in Frankfurt when a very Aryan-looking German gentleman was refused leave to board a flight to London Heathrow because he could only find his boarding pass, having lost/misplaced his passport at some point after passing through security.
(Coincidentally, there was a spare German passport lying on the ground next to the chair he had been sitting in, and luckily it had his picture and name in it, so he was able to board the flight after stressing for 15 minutes... but the "No ID, no flight" thing is a pretty hard and fast rule in Europe, it seems)
Ask Slashdot: Dealing With an Unresponsive Manufacturer Who Doesn't Fix Bugs?
Most vendor contracts in my experience contain long, obtuse, legally dense clauses that seek to prevent customers from publicly discussing issues with the product, and that cap compensatory relief for lost business and costs at the initial purchase cost of the solution.
However, many of those clauses are not enforceable under the specific state/federal (or, in the case of Europe, EU) laws. The only real way to know what your recourse is within the terms of the contract is to get advice from a contract lawyer: first about which legal jurisdiction would be available or need to be used when seeking redress, and second, from a contract law expert in that specific jurisdiction, about what your legal options are.
The only way the vendor is going to give a damn about you as a client is if they are facing some kind of legal action for not addressing the problems. Their EULA / Vendor Supply Contract will include a clause stating that problems with the system are not grounds for legal action or compensation, but such clauses are almost always worth less than the ink used to print them. If the threat of legal action does not work, and the cost of pursuing actual action and compensation exceeds the cost of the solution, then probably the only courses open to you are junking the system and paying the remainder of the contract/early termination fees, or living with it for another 12 months.
Either way, more thorough and extensive pre-acceptance testing next time might be in order. Learn where your client went wrong with the evaluation of the existing solution, and correct those mistakes when evaluating the next.
Why the FCC Will Probably Ignore the Public On Network Neutrality
Network Neutrality is a great concept for the consumer, but not for the provider. So given that there are millions of comments broadly in favour of NN in the "Public Consultation" phase and a small group of lobbyists/back-room power brokers against NN, we get to see where the power lies - with the public who vote into power the politicians who set direction for the FCC, or the corporate interests behind the scenes.
The biggest part of the problem, though, is that there is no real choice in the domestic internet provider markets in the US. There is certainly the illusion of choice, but in each market the vast majority of consumers have access to a single incumbent backbone provider who also provides "last mile" connectivity, or to one of a small number of alternatives - either clients of that backbone provider reselling its last-mile capability, or alternative access methods offering a service which is inferior or significantly more expensive.
The traditional capitalist approach to this is for a smaller, hungrier competitor to the incumbent to set up shop and offer a better service for a lower cost, enticing customers away from the incumbent and providing the new competitor with the revenue to expand services. In this scenario, centrally enforced Network Neutrality is not required - if one provider chooses to prioritize traffic in a way that its customers do not like, they can leave in favour of the alternative. However, the massive initial infrastructure cost of setting up as a backbone ISP with last-mile connectivity (so that the new competitor is not dependent on the existing incumbent) breaks the model, and you end up needing high-value independent actors, such as Google, to go in and set up their own networks, because only they can absorb the huge initial capital outlay.
The alternative to having several "backbone plus last-mile" providers with broad or total coverage in each region (which would be eye-wateringly expensive) would be for the backbone elements to be treated as utilities/managed by independent Not For Profit entities, and for all ISPs to be resellers of bandwidth competing on services and price.
Once you have genuine competition, Net Neutrality becomes something that individual providers (resellers) can offer to their customers or not (although verifying that a provider actually IS offering Net Neutrality would probably be beyond Joe Public, and most of them would not know or care anyway). A customer can choose to sign up with a service provider who guarantees low latency for online gaming, or one with high video streaming bandwidth, or the odd one who offers a life-size Lara Croft blowup doll, because a free market with a low barrier to entry encourages providers to provide the services that the customer wants and is willing to pay for.
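As a purely illustrative aside on why that verification is hard: even a crude check means collecting throughput samples per service and comparing them. A minimal Python sketch, where the service names, the sample numbers, and the 50% threshold are all invented for illustration:

```python
# Hypothetical sketch: flag services whose measured throughput falls far
# below the connection-wide median -- a possible (not conclusive) sign of
# traffic shaping. Service names and sample values are made up.
from statistics import median

def suspect_throttling(samples, ratio=0.5):
    """samples: {service: [throughput_mbps, ...]}.
    Return services whose median throughput is below `ratio` times the
    median taken across all services."""
    per_service = {svc: median(vals) for svc, vals in samples.items()}
    overall = median(per_service.values())
    return sorted(svc for svc, m in per_service.items() if m < ratio * overall)

samples = {
    "video_streaming": [4.8, 5.1, 4.9],    # suspiciously slow
    "web_browsing":    [48.0, 51.2, 50.3],
    "file_download":   [47.5, 49.9, 52.0],
}
print(suspect_throttling(samples))  # -> ['video_streaming']
```

Even this only flags suspicious asymmetry; it cannot prove throttling, which is rather the point - Joe Public is not going to do even this much.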
Anonymous Peer-review Comments May Spark Legal Battle
Except he can't defend himself against someone who can continue to make posts whether or not those posts are accurate.
He could spend all day, every day, trying to respond each time a comment is made. That would be pretty wasteful.
The person making the comment could actually go through normal peer review channels.
BTW, AC comments aren't actually peer review.
Have you ever tried to defend yourself against one or more people making AC comments? It is not possible.
As I mentioned in the post, I have some sympathy for him regarding defending against the AC comments, but he does not appear to have made any attempt to defend the papers' data against any comments, AC or named. Making a cover-all defensive post to engage the named reviewers and encourage the AC reviewers to post under their own names would, in my opinion, be a good middle ground between defending against all negative posts and defending against none.
It would also have given him some discussion points with the faculty recruitment people from UoM, which may or may not have helped. But the "I am going to ignore criticism, head in the sand, and then threaten to sue when that criticism causes or plays a part in me not getting a new job" approach is really not good for a researcher in the publish-or-perish world they live in.
Anonymous Peer-review Comments May Spark Legal Battle
Sorry for the wall of text... summary and comments :P
From the Science article and PubPeer discussion on the topic, but not the comments on the papers by the aggrieved scientist (Dr Fazlul Sarkar), a broad summary would be that he was a tenured researcher at Wayne State University, who was offered a tenured position at University of Mississippi.
He resigned from Wayne State, then was informed by UoM that the offer was revoked. Dr Sarkar's lawyer comments that it is "crystal clear" the offer was revoked because of the PubPeer comments on approximately 10% of Dr Sarkar's published and peer-reviewed papers (more than 50 papers out of the more than 500 he is listed as authoring), where the comments indicate that images used in specific papers look remarkably similar to images used in other papers relating to different experiments.
Wayne State agreed to take him back but did not offer him a tenured position. But how many other employees who resign and then say "I changed my mind, can I come back?" would be welcomed back?
Some of the negative comments on those papers then allegedly (I cannot comment directly, as I have not read the comments, many of which have apparently been removed by PubPeer moderators) veered into insinuations of deliberate misconduct. Dr. Sarkar's lawyers are, of course, going to claim malice/intent in the posts, and their removal is likely legal expedience, not an admission that the posts were inappropriate.
It seems to me that the logical approach would have been for Dr. Sarkar to engage in a process of defending his work against the negative comments. Granted, that defense may take some time - time that would be better spent researching cancer cures, or figuring out how he will spend that huge salary he isn't going to get any more (trying not to laugh at this point...).
But according to his lawyer, "his client has no responsibility to critics who refuse to put a name to their accusations" - in other words, anonymous cowards will be ignored. I can sympathize with that approach, but at a time when scientific papers have taken a battering over experimental repeat-ability and interpretation of results, I would assume that anyone publishing a paper who is confident in their work would be willing to defend it, especially in an area with such life-changing possibilities as cancer research.
It is akin to a social media consultant smartening up their LinkedIn profile and then wondering why they do not get a job interview when their Facebook profile is a constant boast about their party lifestyle and their Twitter feed is a racist/homophobic diatribe.
As Dr. Sarkar has a Ph.D., I would assume that he is familiar with the process of authoring a paper or a thesis and then having to defend that work against examination, in a "viva voce" examination where multiple subject matter experts basically poke holes in the work and try to uncover any areas where the preparation and execution are sub-standard. It is just a shame that Dr. Sarkar feels that process need not apply now that he has his Ph.D.
I think next winter will be:
The winter here in Sweden was, for most of it, non-existent. By the time the snow started coming down properly, it was almost into Spring.
This, by the way, is a country where in Stockholm (about 1/3rd of the way up the country, not talking about the far north) it can and does snow from the start of November through to the end of March. If you go up to the far north - Kiruna, or even further if you want to make a point - you can have snow for 10-11 months a year.
The Growing Illusion of Single Player Gaming
Granted, there are outliers to my argument - MMOs have "content" in them, and they are a pretty good definition of the current logical extent of multiplayer gaming. But in many cases, the current design trend seems to be huge open worlds where the majority of the space is filled with nothing, or with procedurally generated content (thinking of Diablo III) whose goal seems to be simply to add hours to the time it takes to get through the story portions of the game.
Bioware/EA's MMO Star Wars The Old Republic is a counter to that - 8 huge personal (single player, in a multiplayer world) storylines, one for each basic character class, with minimal and entirely optional multiplayer content until you get to the endgame, at which point it becomes almost totally about multiplayer in a traditional MMO grind-fest for gear. World of Warcraft has a similar setup. Those "theme-park" games are typically a linear exploration of content that the developers have implemented.
At the other end of the scale, the sandbox MMOs (EVE Online, or reaching back in time to the pre-"New Game Enhancements" Star Wars Galaxies, and even titles like Minecraft) can be played alone but are much more entertaining when experienced as part of a group or larger community, because there is typically little or no story-driven content designed for solo play.
Sandbox versus theme-park is not in itself a good/bad argument - I spent an insane amount of time in Star Wars Galaxies and EVE Online, and loved the completely open freedom to "write my own story" they offered. I also enjoy the theme park games and the chance to experience a well-crafted story.
However, with the sandbox, the developer does not need to spend as much time creating content as for the theme park, because the sandbox players' creative tendencies will generate more stories than the developer ever could, and those stories will be personal to the player and therefore more compelling. That means the developer can be lazy if they want to, or it can free them up to refine other areas without having to devote time and effort to developing content.
To Really Cut Emissions, We Need Electric Buses, Not Just Electric Cars
The problem is the manpower to operate it just doesn't scale well to something as small as a ship.
Why is it then possible and viable to have nuclear powered submarines but not ships?
Economically, it should not be, because the value metrics and usage requirements for a submarine are vastly different to those for a surface ship. Both go on water, but when a submarine is underwater it needs a controlled, non-toxic-emission propulsion and power system - older and smaller subs use electric batteries, charged on the surface by a diesel engine exhausting into the air, so they have very limited underwater endurance. A sub with a nuclear reactor does away with the electric battery element and has no need of diesel engines, so it can stay underwater for months at a time - even to the point of completing an entire tour of duty without breaking the surface of the water.
That ability to stay underwater and (probably) undetected gives the ability to project power into areas and in ways where highly visible surface ships just would not work.
The reason it works is that submarines are not used for economic activity - their value to the Navies that have them falls into the "money is no object" category and profit is irrelevant in the face of security and force projection.
What Do You Do When Your Mind-Numbing IT Job Should Be Automated?
Most of the IT jobs (emphasis on the "jobs" part) that I see cannot be automated - or, if they can, the automation needs a level of oversight and constant tweaking that makes automating the process economically unviable.
Almost without exception, an IT "job" can be split into discrete "tasks", where some of the tasks can be and should be automated for various reasons, but in terms of the W.W.W.W.H. (What, Where, When, Why, How) aspects, the reason for automating (Why) has a significant bearing on whether it would be a good idea to even try automating.
Automating the tasks which can be automated within a job makes sense in many cases - relieving the employee of the trivial and repetitive tasks to tackle the higher-value elements of the job. From a commercial perspective, if you are spending most of your time on the high-value tasks, you are probably earning more money for your company or providing better value. As long as the boss recognizes that fact, your job should be more secure and your pay packet should, at some point, see an increase to recognize the higher value that you represent. OK, you might need to leave the company and parlay that higher-value experience at a new employer to see the increase in your salary, but if your CV can show a successful sequence of task automation leading to higher productivity, then you will probably be more in demand.
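To make the "automate the trivial tasks" point concrete, here is a hedged Python sketch of the kind of small, repetitive check that is worth scripting - the directory layout, file naming, and size limit are all invented for the example:

```python
# Hypothetical example of a trivial, repetitive task worth automating:
# scan a directory for oversized *.log files so a human only has to
# look at the exceptions. The paths and the limit are invented.
import os

def oversized_logs(directory, limit_bytes):
    """Return (filename, size) pairs for *.log files exceeding limit_bytes."""
    results = []
    for name in sorted(os.listdir(directory)):
        if not name.endswith(".log"):
            continue
        size = os.path.getsize(os.path.join(directory, name))
        if size > limit_bytes:
            results.append((name, size))
    return results
```

Hooked up to a scheduler, something like this is exactly the sort of chore that frees up time for the higher-value work.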
If you have either a role that can be automated to the point where you are irrelevant, or a manager who thinks that your role can be automated to the point where you are irrelevant, then my advice would be to start looking for a new job where either you are more stretched or your manager appreciates your contributions more.
Verizon Throttles Data To "Provide Incentive To Limit Usage"
I cannot decide on the best response to that:
"You will never find a more wretched hive of scum and villainy. We must be cautious."
"Who's the more foolish; the fool, or the fool who follows him?"
"If they follow standard Imperial procedure, they'll dump their garbage before they go to light-speed. Then we just float away." "With the rest of the garbage."
However, whatever the response is, Verizon will come back with one of these:
"So what I told you was true from a certain point of view."
"Only at the end do you realize the power of the Dark Side."
UK Spy Agency Certifies Master's Degrees In Cyber Security
So the spy agency that (a) admits to sharing data with the NSA, and (b) has pretty much admitted that it wants to be able to hack into any system it likes in search of information, is now certifying information security courses that would, in theory, make its job harder...
What could possibly go wrong?
Passport Database Outage Leaves Thousands Stranded
...in case my other article did not make it clear, we always ask if they have a backout plan, and they always say they do.
I used to deal with a lot of Indian outsourced IT groups, and the only way to handle this is to either follow up the "Yes, we have a backout plan" response with "Tell me what your backout plan is" or just to skip straight to that without bothering to ask the "Do you have a plan?" question.
Things still got screwed up, but after the first occurrence we completely cut their access to the servers and re-enabled it on demand, forcing their people to update a specific server first to show that they could do it on a system which was not mission-critical.
However, that approach really only works when the client does not turn into a whining tub of lard when the vendor starts putting pressure on.
Ask Slashdot: Is Running Mission-Critical Servers Without a Firewall Common?
If the POS (point of sale... although if the vendor is as lax about their quality assurance as they are about network security, it might just as well stand for "piece of shit") and the back office PC are completely isolated from the internet, then I would agree there is no need for a firewall. However, retail POS systems almost always now come with a built-in credit card payment system instead of having separate terminals for that... so the POS cannot be guaranteed an airgap from the internet unless the POS vendor is also supplying a separate credit card payment system, with separate hardware, residing on a completely separate network from the POS and back office system.
My advice to the OP would be to register their extreme dissatisfaction with the setup verbally with the client, and in writing/email to the client and vendor, detailing the concerns about data security. That way, it at least limits the OP's liability for the inevitable fuck-up and loss of customer credit card data to the time and effort involved in hiring a lawyer and producing said documentation when the shit hits the fan and lawsuits alleging incompetence start flying.
From experience, I know that as the 3rd party implementation consultant, you are nothing more than an annoying buzzing sound to the vendor unless you get the client on board, and even then it will still not work unless there are break clauses around client satisfaction built into the vendor-client contract. All OP can really do is cover his/her own ass, do their best to educate the client about the dangers involved, and leave it at that.
The lack of a firewall is probably because the vendor is too lazy to figure out how to configure the POS firewall so that they can still connect to the system for remote support/maintenance tasks.
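For what it is worth, a basic restrictive configuration is not hard. A sketch using iptables on a hypothetical Linux-based POS box, where the vendor address (a TEST-NET documentation address) and the port are placeholder assumptions, not anything from the story:

```shell
# Hypothetical iptables sketch: default-deny inbound, allow loopback,
# allow established/related return traffic, and allow the vendor's
# remote-support host in on one port. 203.0.113.10 (a TEST-NET-3
# documentation address) and port 5500 are placeholder values.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -s 203.0.113.10 --dport 5500 -j ACCEPT
```

Four rules is hardly an engineering marathon, which rather underlines the laziness point.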
Linus Torvalds: "GCC 4.9.0 Seems To Be Terminally Broken"
There is an article on /. every few months about how Linus Torvalds was abrasively to-the-point about something, or about how a kernel developer responded to one of his abrasive episodes with a reasoned "dude, not helpful, be nice..." argument.
From my recollection, Torvalds does not often get involved beyond the initial message, but when he does I seem to recall that his response is "My sand pit, my rules. You don't like it, go make your own."
While the GCC compiler may not be a part of his Linux sand pit, it goes a long way toward defining the quality of the executable it produces: even if the code is perfect, a shit compiler will still produce a shit executable, in the same way that a perfect compiler will produce a shit executable from shit code. The difference is that a shit compiler cannot produce a good executable, whereas shit code can be improved to good code with time and effort; and if a coder whose executable ends up being shit tries to turn around and blame the compiler, everyone else is going to respond with "a bad workman always blames his tools - therefore the code is shit".
99 times out of 100, the code is shit, because generally the compiler devs are much better coders than the rest of us mortals, so we probably assume that executable errors are introduced by our own code (or is it just that I am a crap programmer??).
Microsoft's CEO Says He Wants to Unify Windows
So many negative comments here... as if people think that a unified OS must also mean a unified UI.
A single core codebase for the OS will have a few problems with performance on different hardware, but that is a separate discussion... and who expects Microsoft stuff to run quickly anyway?
However, incorporating a different UI for each target device means that you should not need to see the craptastic Metro UI on a desktop system or workstation, while touchscreen and small screen systems are not compromised by a need to develop elements for discrete keyboard and mouse input.
Snowden Seeks To Develop Anti-Surveillance Technologies
Securing the technology is one thing - that in itself will be a huge job, because depending on how far you want to take it, you can end up needing to sandbox each application and harden each layer of the communication stack.
You might need a complete new protocol ecosystem based only on systems which are open source (not just because I like open source, but so that everything can be audited and peer-reviewed at the code level), built with compilers which are themselves not only trusted but also auditable as matching their published source code, and using communication protocols which are themselves open source and audited.
Put all of that together, and you still have the biggest security/privacy threat to deal with - the ID-10-T (aka the user sitting at the computer). Until users of a computer system are educated - not necessarily to the extent that they can themselves audit source code, but at least to the point where they can recognize compromised behaviour of a computer system - then they will always be the weak link in a security/privacy model for IT systems. Getting away from the Windows/local admin culture would be a huge step, but until the most idiotic and incompetent user of a given computer system is either isolated from the ability to do anything or educated to prevent them doing dumb stuff, the computer they use must be considered compromised and all users of that computer must be considered at risk.
Verizon's Accidental Mea Culpa
Too big to fail, too arrogant to concede, too greedy to care. This news is all the more reason to regulate.
But, but, but... regulation is the antithesis of the Capitalist way that our republican Democracy has weaned its children on since it was formed!!
I do tend to agree though - regulation of ISPs is probably the only way to deal with this.
Capitalist theory says that if an incumbent merchant/provider is too inefficient to provide a good service, or if another potential merchant/provider thinks they can do a better job for a lower price, then that new provider will step in and provide said service. The threat of that is what keeps the incumbent lean and competitive, and the result is a competitive environment that is generally good for the consumer, as rival providers seek to offer better deals to entice custom away from their competitors.
However, that theory assumes that there is a very low or non-existent barrier to entry into that competitive marketplace. Given the initial infrastructure setup costs and, in many cases, exclusivity contracts between providers and the municipal areas which would provide the profits to drive services out into more marginal areas, the barriers to entry into the Tier 1 ISP market are prohibitive - to the point where you need to be a corporate entity the size of Google to be able to reasonably make the capital investment required.
As such, the local markets for each ISP more closely resemble non-competitive monopolies, with the illusion of choice provided by third-party suppliers who typically have to buy access to the resources from the incumbent monopoly - they get wholesale prices, and the consumer sees some small price reductions if the third parties can make enough money to operate by charging the consumer slightly less than the discount they got from the incumbent. But fundamentally, everything is still controlled by that original monopolistic provider, so services suck, progress is stifled because there is no incentive for change, innovation is discouraged, and the level of capacity/reliability is never going to be any more than "just barely enough so that we can maximise our profit margins".
The Hacking of NASDAQ
'We've seen a nation-state gain access to at least one of our stock exchanges, I'll put it that way, and it's not crystal clear what their final objective is,' says House Intelligence Committee Chairman Mike Rogers
Ummm, to make money or destabilize our economy?
Makes one feel good that this guy is the head of the Intelligence Committee.
The problem with determining the final objective is that Nasdaq's IT security was (and probably still is) pretty incompetent: once the bad guys were past the outer defences, there was very little internal auditing of unusual activity. The BusinessWeek article uses the analogy of physically breaking into a bank versus breaking into a private home - the bank will have internal security sections, cameras, password-protected doors, and so on, so when determining what was taken, you can look at which areas the bad guys had access to and where they went. In a private home, there is only the external alarm - once that is down, you have no way of knowing where the intruders went unless they leave a physical trail. In this case, while it might be expected that Nasdaq would be the IT security equivalent of a bank, they were apparently the equivalent of a home owner who left the alarm deactivation code on a piece of paper taped next to the alarm console.
Let's try a few plausible options, based on the article. Determining the probable source of the hack/attack will help there.
The core of the malware used was a 0-day exploit kit that had previously been attributed to a team within the Russian FSB's electronic warfare group, suggesting that the Russians may be behind this. At the approximate time the hack took place, the Russians were combining their two domestic stock exchanges into what they planned as a single super-exchange to rival Nasdaq, the NYSE, the LSE in London and the Hang Seng in Hong Kong - probably with the dual purpose of (a) increasing international prestige and economic diversification, and (b) preparing to pressurise large Russian companies whose stocks were listed on international exchanges to draw back and list exclusively on the new Russian exchange, thus reducing the potential leverage and influence that US and international governments would have over those Russian companies (thinking of sanctions, as with the current situation in Ukraine). For the Russians, therefore, a plausible action would be to hack the Nasdaq exchange servers and copy the software code that powers the exchange, so that they could use or modify it for their own exchange - believe it or not, the code for the Nasdaq exchange is generally considered to be world-beating, so that would be a viable target.
Second, the CIA apparently found some information in the real world suggesting Chinese connections - the Chinese Peoples' Liberation Army certainly had electronic warfare capabilities, and conceivably might plant an electronic bomb in the Nasdaq systems for use at a later date if it proved convenient. Equally, with the Chinese approach to IP and industrial espionage, hacking to steal the code in a similar way to the Russian scenario is possible.
Both of those governments' bureaucrats are often known to be corruptible and to have links to organised crime, so there is another possible source for the attack, with the goal of either blackmailing Nasdaq or gaining access to the not-yet-public information stored on the compromised systems to give them advance knowledge of information that would move stock markets and prices (financial gain).
In determining the source of the attack, the origin of the malware used is not the greatest indicator - malware kits can be copied as easily as any other software, so either an actor within the FSB may have sold a copy to someone, or another hacker may have hacked a completely different system infected with that malware kit and downloaded the elements of the kit they could find, reverse-engineering the rest. So just because the FSB are credited with creating a previous version of this specific kit does not mean they are involved.
Lastly, looking at the capabilities of the payload may give some insight into the objective - a malware kit with a keylogger and dial-out facility to a C&C server is generally not going to be paired with a logic bomb to fry the infected system. So a system with a keylogger will be used for industrial espionage, while a logic bomb is an offensive, destructive weapon. The NSA's original analysis of the malware apparently indicated all sorts of interesting/terrifying capabilities. Given their extreme interest in surveillance of computer systems, if they chose to deliberately scare-monger and make this breach out to be more serious than it may otherwise have seemed, they could use that as leverage to expand their intelligence remit to be the gatekeepers of data security and cyberwarfare within the US - expanded influence, and also a much more free hand to conduct their own domestic surveillance. Plus, it is definitely conceivable that they would already have laboratory copies of the FSB malware kit that they could use when hacking Nasdaq.
So, there you have 4 other possible actors and objectives:
Russia: Domestic economic control over large businesses to reinforce geopolitical strength, and industrial espionage.
China: Industrial espionage, or the future possibility of electronic sabotage.
Organised crime: Extortion or industrial espionage for financial gain.
NSA: Scare-mongering over the severity of the breach to expand their remit as gatekeepers of data security and cyberwarfare, and to free their hand for domestic surveillance.
This is not to suggest that any of those groups actually did do this, or that, if they did, they did it for the reasons I have suggested. But it does indicate that there are a lot of possibilities out there, and Mike Rogers is a politician, so he is not going to start slinging mud at someone unless it gives him a good quote as justification.
Washington Redskins Stripped of Trademarks
What do you call people from India, Pakistan, Bangladesh, Afghanistan and that region?
Being from the UK myself, I asked some of my American colleagues who also work here ("here" being Sweden... more about that in a moment).
The response from two of the Americans was that they had no idea what to call people from that region, as they had no real idea of where those countries were. The other 3 promptly came up with "Terrorist", and were apparently not joking, judging by the lack of humour in voice or demeanour.
Anyway, regarding Sweden, this country currently has a degree of nationalist racism against "Invandrare" - effectively immigrants, but used as a catch-all for those immigrants who are obviously not Swedish, have poor language skills or education, and typically who come from near/middle eastern countries or central/eastern Europe, but Asians can also be included. Broadly speaking, immigrants from other Nordic/Scandinavian countries are ok, and immigrants from the UK or USA are loved unless they are complete assholes.
Historically, however, there has never been a huge problem with racism, particularly against "coloured" people - and in this sense I use the term "coloured" to refer to anyone who does not have the typical Nordic/Scandinavian/Aryan light skin/light hair/blue eyes combination, not specifically people of African descent. So up until very recently (10-20 years ago), it was possible to buy "negerbollar" - literally "Nigger Balls" - a small chocolate-based pastry typically dusted with coconut, and many people still call them negerbollar without feeling any discomfort or embarrassment. Now, though, their official name is "chokladboll" to avoid any problems.
Canadian Court Orders Google To Remove Websites From Its Global Index
That's part of the problem of expanding into other countries, you have to either accept their rules or stay out. Consider Google or Yahoo in the case of China...
Compare to an example of a court order that forbids a third party railroad line from transporting a particular product into the country.
This is the part that I have a problem with - if a Canadian judge wants to mandate that all discussions of the health benefits of eating less Maple Syrup are blocked in Canada, I have no problem with that. If I live in Canada or if I live in China, then I expect what I see on the Internet to have to comply with local laws, and while I expect censorship in both Canada and China, I expect a hell of a lot more of it in China.
The precedent it sets, though, could allow a fundamentalist Islamic cleric to order Google to not index (and therefore censor) discussions about the interpretation of Islamic Sharia law so that his interpretation is dominant, not just in his country, but around the world as well.
This instance of the problem - a couple of embittered former employees of a company selling knock-off products - is not a bad one to rule on. While I would like to know, if I am looking to buy said equipment, that they used to sell these goods, I do not need to be able to see the actual site they were using as a sales portal. But the precedent it sets is a dangerous one.
Consider (not trying to derail the topic, honestly) the recent EU ruling that establishes the "right to be forgotten". If you look at it as the right for a woman who, as a dumb teenager, posted naked pictures of herself to show off a new tattoo, who now wants to see those pictures fade into obscurity, then it is a good thing. But many of the requests Google are receiving are from people who want to hide criminal convictions or other information which can legitimately fall under the heading of "in the Public Interest to know", so while Google can use that as a way to refuse the request, it shows that "good idea" precedents are often used to justify "bad idea" changes.