No RIF'd Employees Need Apply For Microsoft External Staff Jobs For 6 Months
I'm guessing it's an acronym so let's see what might fit...
Reading is fun!
Resistance is futile
Resource interchange format
Royal Irish Fusiliers
None of those seem to have any bearing on the context of the article (other than a tenuous reference to Borg Gates). Any editors around to perhaps explain what it means?
Dwarf Fortress Gets Biggest Update In Years
Gnomoria is also inspired by DF, and arguably is much closer to the spirit of DF than Minecraft is, and the graphics and interface are (IMHO) far superior to OotB Dwarf Fortress.
If you enjoyed Minecraft but don't yet feel ready for the mind-bogglingly insane brilliance of DF, then Gnomoria is a good stepping stone. I became aware of it during Aavak's (also a DF player) Let's Play and picked it up soon after. Whilst it doesn't have anywhere near the depth of DF (traps/mechanisms are much more limited, for example), if you only have a few hours it's much easier to dip in and out of, whereas with DF I usually have to play for days at a time... ;)
FWIW when I play DF I do so with a tileset and all the rest of the gubbins one might find in the Lazy Newb Pack. It's a sublime game but the complexity and inconsistency of its interface can be one of its biggest frustrations.
Nathan Myhrvold's Recipe For a Better Oven
The burning occurs because once all the water has evaporated from the top of the crust, it'll burn incredibly quickly. It's typically difficult to gauge from a quick glance how much moisture remains in the top layers of the bread - although it's much easier to judge from the amount of steam you see when you open the oven door.
The easiest way to tackle the evaporation is to brush a little oil over the top. Water evaporates; oil soaks in instead, and the hot oil helps the crust brown quicker and prevents it drying out as quickly, yet makes it very crispy. If you want the bread cooked more evenly, re-wrap the whole thing in foil - the inside will stay soft but the crust will be more pliable too.
This isn't the sort of thing you can do in advance - if you apply oil to soon-to-be-pre-packaged garlic bread, it'll soak throughout the bread, negating the effect, so if you must use the premade stuff, crack open the foil and get busy with the pastry brush. Lots of people will say use olive oil, but at 200°C rapeseed or groundnut oil will cook better and won't spoil the taste of the garlic butter.
Can't believe I just had a minor geek-out about garlic bread, something I don't even like that much. But I've done it this way for others and none of the survivors have complained yet :)
Microsoft Takes Down No-IP.com Domains
Most people I know that use no-ip are people setting up their own Minecraft servers; it's not a hotbed of criminal activity like MS claims.
I looked up this "minecraft" of which you speak, and it seems to be some crudely archaic simulation where you wander round indiscriminately smashing rocks together and killing animals - basically a terrorism simulator. I fail to see why anyone would support the use of this software.
Lots of terrorism-simulator apologists say it's something called an Indy game, but it bears absolutely no comparison with any of the Harrison Ford films (and in any case, an Indy game would require royalty payments to LucasArts which we can find no record of). It doesn't have a proper company behind it like EA or Zynga but only a nebulous cloud of anonymous people known as "notch".
Not only that, it seems that the hacker group "notch" had their paypal account suspended several years ago due to money laundering and other suspicious activities.
Frankly anyone who uses this simulator or supports the filth behind it deserves everything they get.
Hackers Ransom European Domino's Customer Data (including Favourite Toppings)
Real, proper, mozzarella di bufala campana, even spelt correctly this time :) There's a couple of other restaurants that get it fresh as well but they're very pricey.
Hackers Ransom European Domino's Customer Data (including Favourite Toppings)
As a Brit with an Italian SO, that's about the size of it. In London at least there are lots of good (and some great) pizzerias, almost all of which are owned and run by Italians. The superb Franco Manca in Brixton is quite probably the best pizza I've ever had, and this includes some truly excellent pizza restaurants in and around Naples (pizza napoli is, to me, the only style of pizza worth emulating; the SO is from Rome and dislikes the "local" style of pizza as well). That said, they're also one of the few places in London where you can consistently get proper fresh mozzarella (I can virtually guarantee it's a completely different beast to all the mozzarella you've ever had). Residents of Campania will rarely eat a mozzarella more than a day old.
Never been too keen on the other styles of pizza - as you mention, I find chicago style too bready and too greasy and bears quite a resemblance to pizza sicilia. Pizza romana/lazio is, once again, too thick for me. Sure, there's plenty of places that do perfectly passable thick-crust pizzas but not really to my taste. When I eat pizza I want to taste the ingredients, not bread.
And then you have places like Pizza Hut and Domino's. Not only do they have a heavy, pre-cooked breadlike crust, but they also use heavily processed ingredients, and unsurprisingly they're very popular with people who are more used to the taste of processed foods and ready meals than to pizza made with fresh ingredients. Given that they don't require fresh food and can be easily thrown together and shoved in an oven, they're also ubiquitous and cheap - so much easier to store and prepare that many people now think of this as pizza rather than the traditional styles.
That said, we're snobby enough to keep a sourdough culture in the fridge so we can throw some pizza dough together and have a passable pizza from scratch within an hour if we like. Damn, I'm hungry now.
Google Starts Blocking Extensions Not In the Chrome Web Store
That's easy enough to do. Just require that in order to enable the "incredibly risky" developer mode, you must be registered as a developer with Google, and flipping the button requires google+ integration. After all, we need to look after Chrome users, and this means cracking down on dodgy app development; I'm sure you're not one of those developers, but we just need to check, for the greater good, OK?
London Black Cabs Threaten Chaos To Stop Uber
All of these posts and so few mentions of The Knowledge. It takes about 3 years on average to train for and pass, and is widely renowned as extremely tough. There's a reason so many cabbies are ex-beat coppers - they're some of the few people who know the streets well enough to even begin. You need exceptional spatial awareness and an excellent memory for names and place details*. I've not been to NY so I can't draw any parallels, but from a cursory glance at a map it looks like it has a vastly simpler road network; understandable, as London is less a city and more the product of a thousand years of congealed towns, with only the occasional fire or war giving the opportunity for large-scale redevelopment of limited areas.
I've been here long enough to call myself a Londoner, and have been in love with black cabs for years precisely because of the regulation and excellent training. The result of this is that you can give cabbies excessively vague directions (e.g. "a pub about ten minutes' walk from station X that has a huge beer garden", "that theatre that was showing Generic West End Musical last year") and they'll still know what you're talking about, will get you there by the quickest route, and will be aware of any roadworks or whether such-and-such a road en route is likely to be busy at the time you're travelling. I only ever take cabs when walking, tube or bus aren't acceptable (usually due to time constraints), and I've found them unfailingly reliable.
My experience with minicabs has been a whole different kettle of fish; they all rely on satnav exclusively and are useless without a postcode or street/place name - and even then rarely have enough background to distinguish one King's Head pub from another. They'll frequently say "it'll cost you X quid" at the start of the journey and then hold you to ransom for "X plus 10 quid" at the end because they ran into traffic or roadworks that cabbies know how to avoid. Given my requirements for timely transport when using cabs, I'll often end up with slightly more money in my pocket but 15 mins late using a minicab. YMMV of course.
It's not just a matter of the black cabbies protecting their turf - as well as the Knowledge, their job comes with the fairly onerous legal requirements of buying a specially adapted vehicle (hackney carriages are required to have a turning circle of 8m) as well as spot-checks and CRB checks, which it sounds like the GLC is exempting Uber drivers from on the basis that the meter isn't tethered to the vehicle. This doesn't really seem fair to me - it's a bit like the government saying that company X has to comply with industry regulations but company Y doesn't because their frob has the dooberry widget on the side rather than the top.
No affiliation with any cabbies, cab firms, cab car companies, cabinets, cab franc, or the Citizens Advice Bureau.
* There are even computer programs for that. On one cab journey I made at about 2 in the morning from Liverpool Street to Crystal Palace, the driver asked if I'd had a good night and I said no, I'd only just got out of work. Why's that? Ah, you work in computers? Wonder if you could have a look at my laptop? It keeps crashing when I hit a speed bump. It was a crummy little netbook, but it was running some kind of vastly complicated "Knowledge" application that looked like the bastard offspring of a mind map, the A-Z and M. C. Escher - took about two seconds to see the machine had been through enough abuse that one of the SODIMMs had worked a little loose. Stuck a bit of tape on it and gave it a shoogle, all fine. The cab driver was over the moon, as he'd been quoted two hundred quid to have it fixed (I suspect more than the computer was worth), so the journey that would have left me forty to fifty quid lighter ended up being free.
Autonomous Car Ethics: If a Crash Is Unavoidable, What Does It Hit?
this would rapidly create an underclass of socially-blacklisted, uninsurable, embittered expendables who are considered net liabilities to their culture
I believe you misunderstand my point, dear sir. Yes, we would rapidly create said subclass, but we would also be equipping automotive transport with the technology at our disposal to rapidly and automatically elevate them into an age-old, time-honoured superclass that will never need to pay (or fail to pay) insurance premiums ever again. Only this way can we ensure the survival of the right kind of people!
Autonomous Car Ethics: If a Crash Is Unavoidable, What Does It Hit?
This sounds like a great opportunity for a fantastic and potentially lifesaving system better suited to the preservation of preferential congenital traits.
Your mobile phone acts as a personal identifier, and from your contact list, browsing habits, online shopping, app purchases etc. your $provider knows if you're a family man, the state of your finances, that sort of stuff - and most importantly, the GPS will be able to ascertain how careful you are when crossing a busy road: compare the time you spend waiting at the side of the road before you cross, infer whether you look both ways from small changes in the accelerometer. As well as being able to better selectively provide you with exclusive money-saving offers, $provider will then also be able to forge a synergistic relationship with your insurance company.
Now the insurance company has a vested interest in this, as they can now better analyse who the highest-risk customers are. They've got a substantial pecuniary stake in making sure some people are never injured, and make these sorts of judgements all the time; similarly, every insurance company has a list of members who, analysts will show you, will never return a profit and would be better off the company books. This information would be more valuable still when correlated with detailed medical analysis, such as genetic predisposition to inheritable diseases or lifestyle choices relating to enhanced susceptibility to fiscally immoderate claim payments in later life.
The best part is that the insurance companies can now create a market for providing this sort of information to automotive computing manufacturers, so that car guidance systems can come pre-loaded with an active list of unique identifiers. Realtime monitoring of GPS signals and cell telemetry will give you a good idea of their positions at any one time, so that when a car does enter such a "Dilemma-Inducing Ethical Situation" it will be immediately aware of who in the inevitable proximity of the Premium Optimal Opportunity Recovery zone are the most ethical targets to avoid, and who the least financially viable person(s) to involve in any involuntary Sudden Cessation of Universal Motion situation that may arise as a result.
As well as giving a way to improve survival statistics of those best suited to provide for their insurance premiums, this would also result in improved market realisation and increased financial savings to all of the following job creators:
Mobile phone/OS manufacturers
I think I'll file the above as a patent.
Can You Tell the Difference? 4K Galaxy Note 3 vs. Canon 5D Mark III Video
Filmmaker Shane Carruth (the budget auteur behind time-travelling mindbender Primer, filmed for just $7000) shot his latest film Upstream Color on a hacked Panasonic GH2 for monetary reasons.
HP Server Killer Firmware Update On the Loose
TBH, I suspect this is just getting publicity since it's the first super-dodgy HP firmware patch since they adopted their "no updates for YOU!" mentality - the explanation from HP being, I guess, that they'd sunk a lot of money into their patching process and people shouldn't get to use it for free. This won't be the last time it happens either.
As a sysadmin that's dealt with dozens of these "killer firmwares", there's often an identified need. We make extensive use of the HP SPPs at work and they come with a list of fixes and known issues as long as your arm; it's part of my job to go through the advisories to see if we're at risk, and if we are, to analyse the risk of updating versus not updating. Many of them aren't security vulns or emergency fixes and are often extremely obscure, but once in a while you'll encounter something like a NIC locking up on receiving a certain type of packet, or the BIOS repeatedly saying a DIMM has failed when it hasn't, or RAID controller Z dropping the whole array if you mix hard drives with firmware X and firmware Y while it's running firmware... er... A - lots of little issues that can severely impact running systems if left unchecked. And then when you upgrade one component you'll frequently have to upgrade others to stay within the compatibility support matrix, until eventually you just run the damned SPP to make sure everything in that server is at a "known good compatible" level.
Sure, we don't just flash as if it were patch tuesday and no-one ever should - we wait for at least 2 months of testing on non-production boxes before we patch any prod kit with firmware unless it's an emergency fix - but lots of people use the HP SPP to automatically download the latest updates; we've had enough problems with them that we'd never do this (and in any case 97% of our servers have no net access). But the whole point of the SPP is meant to be that HP should have already done most of the regression testing for you.
That said, we've had nothing but trouble with Broadcom NICs for ages and I'm sure there's many admins here who have fond memories of the G6 blades, broadcom NICs, ESX and virtual connect from a few years back. Think HP switched much of their kit to Emulex after that debacle. Also, the latest web-based HP SPP (as opposed to the last one where you just ran a binary) is a complete train wreck on windows for ad-hoc updates, largely due to the interface being handed over to people who seemed to want to make it a User eXperience rather than a tool.
How much use would you get from a 1 gigabit internet connection?
Assuming I got at least 100Mb/s up (preferably a gigabit), this would make online backups of any more than a few GB feasible. A friend and I had been mulling this over for a decade, before "cloud" became a thing and before even commercial online backups became viable - but it would be effective for those too.
I have a NAS. He has a NAS. We can both set up encrypted containers that the other doesn't have the key to. We both require offsite backups and have the nous to run rsync through an SSH tunnel or a VPN. Wouldn't it be great if I could just set up a cron job and do a weekly sync to each other?
Of course it would, but even on a "decent" downstream connection (let's take my 24Mb/s ADSL2 connection with its ~2.5Mb/s upload) I wouldn't even get halfway through my deltas for the week (of which disc images of my Windows boxes are a major component, size-wise). Sure, I could periodically pop on the train with a hard drive once a month or so, but even then, let's say I need to restore a 5GB file pronto when he's off on holiday - the restore would still take hours, possibly days.
If we all had a full gigabit up/down, I'd build NASes for friends and family for free if I could use them as backup locations which would also have the nice side effect of finally letting me put the oblig. XKCD http://xkcd.com/949/ to rest.
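To put numbers on the upload bottleneck described above, here's a quick back-of-envelope sketch. The speeds are the ones mentioned in the comment (2.5Mb/s ADSL2 upload vs a symmetric gigabit); protocol overhead and real-world line conditions are ignored, so treat the figures as illustrative only.

```python
def transfer_hours(size_gb, uplink_mbps):
    """Hours to move size_gb gigabytes over an uplink of uplink_mbps megabits/s."""
    megabits = size_gb * 8000  # 1 GB = 8000 megabits (decimal units)
    return megabits / uplink_mbps / 3600

adsl = transfer_hours(5, 2.5)       # ~2.5 Mb/s ADSL2 upload
gigabit = transfer_hours(5, 1000)   # symmetric gigabit

print(f"5 GB over 2.5 Mb/s: {adsl:.1f} hours")             # ~4.4 hours
print(f"5 GB over gigabit:  {gigabit * 3600:.0f} seconds")  # ~40 seconds
```

Even before counting SSH/rsync overhead, that's the difference between "restore overnight" and "restore before the kettle boils" - and a week's worth of disc-image deltas scales the ADSL figure into days.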
ARIN Is Down To the Last /8 of IPv4 Addresses
Hmmm... sounds like there's a market for selling hardware to mine IPv6 addresses. Just need to set up some sort of exchange...
How Apple's Billion Dollar Sapphire Bet Will Pay Off
Ha, I was hoping someone would post that quote (I think the only pop culture reference to Moissanite anywhere) - I work within a stone's throw (pun not intended) of Hatton Garden, where Doug the Head's shop is, and his pub Ye Olde Mitre. Which reminds me, rather a nice day for a pint...
WRT54G Successor Falls Flat On Promises
From a few posts along in the thread https://lists.openwrt.org/pipe...:
Quick update on this subject: Linksys has now posted a GPL source for
the WRT1900AC, and it contains the wifi driver sources.
It appears to me, that this driver was properly licensed under GPL, with
proper license headers in all source files.
This means that work on supporting this device can theoretically
continue, although I expect it to take quite a bit of time. As I
anticipated, the code quality of the driver source code is abysmal.
This looks like rewrite (not cleanup) material, ugly enough to cause eye
cancer or frighten small children ;)
There are also still some pieces missing: Since this driver does not use
standard Linux Wireless APIs, it can only properly function with custom
hostapd/wpa_supplicant hacks. I don't see those in the release.
Update 2: Those can be found in the OpenWrt SDK for this device on
GitHub. Same comments regarding code quality apply here.
Can anyone more au fait with OpenWRT verify that this is correct?
WRT54G Successor Falls Flat On Promises
I can't even begin to imagine a chain of events that resulted in shipping an OpenWRT router without any OpenWRT support.
It starts with "B" and rhymes with Whelkin.
How Apple's Billion Dollar Sapphire Bet Will Pay Off
Silicon carbide at least has potential; there's a synthetic polymorph of it called Moissanite that's transparent, harder (9.5 vs. 9) and stronger than sapphire/corundum. I imagine it costs a shedload more to make than sapphire glass however.
Ask Slashdot: System Administrator Vs Change Advisory Board
This. Absolutely, 100% this.
As I've alluded to in my other posts, as soon as I graduated from cowboy sysadmin to a "proper" sysadmin that files change requests and writes project documentation, I've come to love change managers for precisely the reasons above. Change managers are under continual bombardment from non-technical project managers and developers that might well have deep, deep insight into a certain area but can't see past the end of their nose. A good change manager will often trot up to us sysadmins and say "So-and-so has submitted this change but doesn't think it needs approvals from you guys, can you take a gander?" to be met with either a "yeah that's fine" or a "Holy crappingon what-the-fuck in a god-buggered handbasket NO!". Good sysadmins in a constructive environment see a bigger picture than the project managers and the developers and, as far as CAB is concerned, submit better change requests as a result - because risk analysis is such an innate part of our job that most of us don't even realise we're doing it. But change managers see a bigger picture still because they're exposed to the sysadmins, network admins, security admins, user admins, mail admins, storage admins, admin admins, admin users, sysadmin networks, bread, eggs and breaded eggs.
Change managers exist to protect the business. Sysadmins exist to run the business' IT. Change managers realise that sysadmins are often asked to do dangerous or even outright impossible things by powerful people with only an inkling of what consequences such an action might have; it's a change manager's job to communicate with and understand the sysadmin (and everyone else) in such respects, just as it's the sysadmins' responsibility to communicate to the business why change X is crucial or dangerous. In a properly functioning IT dept, sysadmins and change managers protect both each other and the business from stupidity, mis-co-ordination and lack of oversight. As a sysadmin, change managers are almost always on your side - either pushing for that change that's so essential, or holding you back where there's a risk. They're a highly valuable ally. When something goes to shit, they're the first people to step in and say "no, the sysadmins had nothing to do with this incident".
I'm MrNemesis and in the last three years I've learned to love my change managers.
SSD-HDD Price Gap Won't Go Away Anytime Soon
It depends how you measure "speed". If you measure speed by things like sequential read or write speed like so many people do, it's possible to match SSD speed with as few as two platter-based hard drives.
But in the real world (of servers at least) there's not really any such thing as sequential reads/writes any more, and when you throw VMs backed onto a SAN into the mix it's safe to say that there is no such thing as a sequential transfer - all I/O, by the time it hits the SAN controller, will look random simply because it's the aggregated reads and writes of dozens or hundreds or thousands of different servers.
So going back to the original premise - if you instead measure speed in IOPS rather than throughput, you'll need something approaching at least twenty spindles (probably with a bunch of expensive battery-backed RAM as cache sitting in front of them) to even get close - platter-based drives basically just suck at random IO, and it's not unusual for them to be orders of magnitude slower in throughput when doing 4kB random as opposed to 4kB sequential; I've seen drives that can do >150MB/s sequential drop to less than 1MB/s random (something you can easily try out yourself with iometer if you so wish). It's why so many SAN technologies now use tiering, where incoming writes first get written to RAM, then the SAN controller does some IO coalescing before sending it down to the fifty or so spindles - or, increasingly these days, to an intermediate NAND layer. This way, whilst the data hitting the SAN is inherently random, your SAN controller has the smarts to write it to the spindles in as sequential a manner as it can.
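A rough back-of-envelope illustration of the IOPS-versus-throughput gap described above. The drive figures are illustrative assumptions (a generous ~100 random IOPS for a 7200rpm spindle, a plausible 50k 4kB random IOPS for a SATA SSD), not benchmarks:

```python
def random_throughput_mb_s(iops, block_kb=4):
    """Throughput in MB/s achieved doing random I/O at `iops` with block_kb blocks."""
    return iops * block_kb / 1000  # decimal MB

hdd = random_throughput_mb_s(100)      # assumed random IOPS for a 7200rpm drive
ssd = random_throughput_mb_s(50_000)   # assumed 4kB random IOPS for a SATA SSD

print(f"HDD, 4kB random: {hdd} MB/s vs ~150 MB/s sequential")
print(f"SSD, 4kB random: {ssd} MB/s")
print(f"Spindles needed to match the SSD on raw IOPS: {50_000 // 100}")
```

Raw spindle arithmetic says hundreds of disks to match one SSD on IOPS; the "twenty spindles plus battery-backed cache" figure above gets away with far fewer precisely because the cache coalesces and serialises the random writes before they hit the platters.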
If it's IOPS you're after and you don't have a fancypants SAN, it's now frequently cheaper to shell out for a limited amount of NAND than it is to buy enough spindles to support a peak IO load, even if you shell out the big bucks for FusionIO or those ludicrously pricey SAS SSDs. If you need speed and capacity, you can now buy "application accelerators" or suchlike that will automatically promote hot blocks into a local NAND cache rather than going straight to the platters (although I don't know how well these work in practice). If you do have a fancypants SAN, you can make it an even more fancypants SAN by plugging a layer of NAND in between the controller cache and the spindles themselves and still have oodles of relatively cheap platter capacity.
Of course at home I still use an SSD for the OS and programs and I keep my static media on platters, because that's one environment where I do know accesses will be mostly sequential and I need the capacity-per-quid that only platters can give at present. But I've just added an SSD writeback cache to my NAS and it's noticeably faster already.
TLDR: Throughput and capacity aren't the only measures of storage, and an SSD can improve performance massively whilst costing less than the equivalent platters as long as you're aware of your IO workload.