Debian's Systemd Adoption Inspires Threat of Fork
The init process is a critical stage: a failure there tends to leave you with no access to the system to diagnose it. I tend to break it into two parts: the first is what's necessary to allow at least a single-user login on the console, and the second is everything that happens after that point to bring up the full multi-user system and server processes.
I don't like systemd for the first part. It's overly complex, and the more parts and interdependencies there are, the more things there are to fail. To quote an engineer: "The more they overthink the plumbing, the easier it is to stop up the works." Shell scripts and plaintext log files may be primitive, but they have the advantage of being easy to read with minimal access and of not requiring complex stuff to run (mainly they just require that basic binaries be available in the path). Until I've got at least a basic system up and running enough to log in and work, I want the init process to be as simple and straightforward as possible, with as few points of failure in init itself (as opposed to the things it's starting) as possible. I want this stage to be as hard to break and as easy to debug and fix as possible, and I don't want to depend on complex tools at the point where I'm working in the most primitive environment.
I don't particularly like systemd for the second part, but it isn't quite as much of a problem here as in the first part. By this point I've got a basic system up, I can log in and work, and most of the tools are available. Obviously nothing graphical will work, but text-based tools will probably run to decipher binary logfiles and modify configurations. I still prefer not depending on such tools, because they're one more point where things can fail to work and leave me scratching my head trying to figure out what broke without access to basic diagnostic information, but at this stage I can likely fix any tool-related problems and get back on track.
The one part I think systemd works for is in the later stages of the second part. There are a lot of server processes with interdependencies that typically start after the multi-user system's up and running, and I think systemd's a good thing for getting those running: it can do a better job of parallelizing that process than shell scripts can. The only change I'd make is to have systemd use syslogd like everything else, so log files end up in the expected place in plain text, where basic tools like more and tail and grep work on them as-is. Binary logfiles offer no great advantage over plaintext, and going through syslogd means not depending on two sets of tools or managing two configurations to get logging directed where it needs to go.
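For what it's worth, journald does have a knob for the syslog forwarding described above. A minimal fragment for /etc/systemd/journald.conf, assuming a traditional syslog daemon is installed and running to receive the messages:

```ini
[Journal]
# Hand every message to the local syslog daemon so logs land in the
# usual plaintext files under /var/log.
ForwardToSyslog=yes
# Optionally keep the journal itself in RAM only, so the binary
# journal files never become the system of record on disk.
Storage=volatile
```

With this in place the familiar text tools work on /var/log as before; the journal remains available for systemd's own use but isn't the thing you depend on during recovery.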
One last bit has me twitchy about systemd: history. SysV init scripts may be clunky and primitive, but they've been around a long time. People know how to manage them, the kinks have been worked out of them and best practices are established. systemd doesn't have that. I do not want to make my servers (which have to run for anything to work right) dependent on something until I've had time to work with it and get familiar with it and, more importantly, until it's been out there in use long enough for people to find and fix the problems, work out the oddities and figure out the best ways to use it without shooting yourself in the foot. I'd prefer to stick with SysV init on my production systems, enable systemd only on testbed systems at the start, and then enable it on a server-by-server basis so unexpected failures don't completely kill me (eg. if systemd blows up on my primary mailserver, the backup still on SysV can keep things under control until I either fix the problem or revert and restart the primary).
Torvalds: I Made Community-Building Mistakes With Linux
As a software developer, frankly I'd rather deal with Linus or someone like him than most of the management I've had to deal with at my jobs. At least with Linus you know exactly where he stands and exactly where you stand with him, and why. When he calls something "stupid", he's usually very clear about his reasons for thinking it's stupid. I can deal with that. I can argue my position with him because I know what his position and his reasoning are. And he won't take my arguing with him personally, or even particularly badly, as long as I can trot out facts and hard numbers to back up my positions and counter his. Better that than management that won't tell you what they want, won't say what they mean and tries to deny their own decisions in the face of copies of their own e-mails and memos.
Confidence Shaken In Open Source Security Idealism
The main difference is that you aren't leaving the trust to the open-source community. You can, but you don't have to. If you're affected by a problem, you have the option of legally fixing the problem yourself if it's that critical for you. You can discuss the problem with others without risking a vendor's legal department jumping down your throat. You can test your systems to determine whether they're vulnerable (eg. Debian-based Linux systems weren't normally affected by the recent bash bug even though the bug existed on them because of the way Debian configured their shells). Ultimately you've got options you just don't have with closed-source software.
And think about this. How many problems of a similar severity have we seen in closed-source software? And how many of those have the vendors known about for years and deliberately left in place because fixing them would involve admitting they existed and cause PR problems? It seems to me that open-source software still has a much better track record on these issues than closed-source software.
Fighting the Culture of 'Worse Is Better'
The problem is that you don't have to consider just benefits, desirability, goals and values. You have to consider the other side of the equation: cost. And for programming languages like C++, the cost of breaking compatibility can be huge. If you change C++ in a way that isn't compatible with existing behavior, you have to check every single program written in C++ for bugs, errors and just plain failure to compile anymore. The cost of that's going to outweigh just about any benefit short of "creates a literal Heaven on Earth for every single person in the world". It'd be simpler to create a new language based on C++ with the desired changes made to it, which is what tends to happen. The same applies to software frameworks and architectures.
Xen Cloud Fix Shows the Right Way To Patch Open-Source Flaws
This is the right way to handle things, yes. The problem is that most researchers are used to dealing with proprietary software vendors whose reaction to any bug report is at best to ignore and bury the report and deny there's any problem, at worst to attack and sue the researchers. The only sane reaction to that situation is to handle things the way Heartbleed and Shellshock were: immediately publicly disclose all the details so that there are too many independent confirmations and too much publicity for the vendor to ignore the situation or deny the problem. When 99% of the time you need to follow one course, it's easy to lose track of when you're dealing with the other 1%.
To Fight $5.2B In Identity Theft, IRS May Need To Change the Way You File Taxes
The IRS could do a lot with a few simple checks:
- Is the refund going to the same bank account as the last refund for this taxpayer? If yes, there's a minimal risk of fraud. The taxpayer would've complained if last year's refund was stolen.
- Is the taxpayer's name on the account the refund's going to, and has the account been open more than a year? If yes, the risk of fraud's low. Banks typically don't let you open accounts without checking some physical ID.
- Is the refund check being mailed to the same address as used on last year's return? If so, the taxpayer likely hasn't moved and the risk is low.
- If any of the checks fail, compare the contact information on this year's return to last year's. If they're the same, contact the taxpayer to confirm where the refund should go. If they aren't, check any supporting filings from other sources (W-2, W-4, 1099, etc.) and see if any of those sources has contact information. Use that to contact the taxpayer.
It'll slow things down a bit when there's a problem, but it'll also let the effort be concentrated on returns with the highest chance of identity fraud.
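The checks above can be sketched as a simple screening function. This is a hypothetical illustration of the logic only; the field names and record layout are made up for the sketch and are not actual IRS data structures:

```python
def refund_risk(current, previous):
    """Classify a refund request as 'low' risk or 'review'.

    `current` and `previous` are dicts for this year's and last year's
    returns. Keys used (all hypothetical): 'bank_account',
    'account_holder_matches', 'account_age_years', 'mailing_address'.
    """
    # Check 1: refund goes to the same bank account as last year's.
    # The taxpayer would've complained if last year's refund was stolen.
    if current.get("bank_account") and \
            current.get("bank_account") == previous.get("bank_account"):
        return "low"
    # Check 2: taxpayer's name is on the account and it's been open
    # more than a year (banks check physical ID when opening accounts).
    if current.get("account_holder_matches") and \
            current.get("account_age_years", 0) > 1:
        return "low"
    # Check 3: paper check mailed to the same address as last year's
    # return, so the taxpayer likely hasn't moved.
    if current.get("mailing_address") and \
            current.get("mailing_address") == previous.get("mailing_address"):
        return "low"
    # All checks failed: flag for manual contact, using last year's
    # contact info or contact details from W-2/1099 filings.
    return "review"
```

Only returns that fail all three cheap checks would get the slower manual-contact treatment, which is the point: concentrate the effort on the returns with the highest chance of identity fraud.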
And I mean, really. "Identity theft" is just a fancy new name for impersonating someone, and impersonation for the purposes of fraud is not new.
College Students: Want To Earn More? Take a COBOL Class
And in many cases they probably can't do it over. We're talking about major financial and operational programs that weren't designed so much as they evolved along with the business over the course of the last half-century (since the introduction of the IBM System/360). The specs and requirements, if they exist at all, are buried in the back of a warehouse the size of Warehouse 13 and have probably been turned into nests for the mutant rats. The source code in many cases doesn't match the binaries or doesn't exist at all thanks to errors in migrating data and mistakes in editing files. The running binaries may literally be the only authoritative statement of what rules the company's accounting follows. There's a reason every single IBM mainframe since the S/360 has been capable of emulating an S/360 down to the hardware level, after all.
Does Learning To Code Outweigh a Degree In Computer Science?
The difference between someone with a Computer Science degree and one who's learned practical coding is the difference between a residential-home architect and a construction-oriented master carpenter. The first can design your home and tell you why it's designed that way. The second can actually build it, tell you what goes into the construction and why, and when certain design elements are going to muck up the physical realities. In the end, you're going to need both skillsets unless you restrict yourself to just building cookie-cutter copies of existing house plans. And ideally your senior people should have both skillsets so their designs take into account the grungy details of turning them into working code.
The absolute worst situation is senior architects/designers with no practical experience; they tend to turn out beautiful, elegant masterpieces that're a nightmare to actually implement. A close second is pure practical programmers trying to do high-level design while ignorant of the overarching principles and abstract concepts that guide you toward the best way to approach a problem.
Getting IT Talent In Government Will Take Culture Change, Says Google Engineer
I'd note that most software engineers aren't philosophically opposed to dressing well, or to reasonable dress codes. They're mostly opposed to stupid dress codes that make them uncomfortable while working for no good reason. Reasonable dress for a meeting with outside customers is different from that for a group of engineers banging out a solution to a code problem, and what's reasonable when you've hauled someone in on their day off to deal with an emergency isn't the same as what they'd wear during a normal workday. Management tends to lose sight of all this because their jobs are much different from the engineers', so their routine situations, and therefore their dress norms, are different too.
Ask Slashdot: Is Running Mission-Critical Servers Without a Firewall Common?
The problem there is that the Windows firewall itself creates its own attack surface. You have such a large range of internal machines that need access to so many different services on the servers for monitoring, administration, deployment, support and so on, and so many of those services are either so poorly documented or multiplex so many different functions over the same port, that it's difficult to write specific rules for them. In the end your firewall rules for the servers end up unmanageably complex. They don't protect you nearly as much as you think they do, and they actually cause problems and contribute to failures (I could count on spending at least half a day every week diagnosing firewall-rule-related problems, and every release tended to result in several rollbacks and re-deployments over the course of a couple of days because of errors or omissions in firewall rule changes, which we also had to diagnose). Plus, for all that cost, the primary threat wasn't from other compromised servers; it was from internal machines that legitimately had access to the servers (ie. the desktops belonging to DBAs, sysadmins, managers and so on) and were compromised by malware coming in via other vectors that bypassed all the firewalls.
Ask Slashdot: Is Running Mission-Critical Servers Without a Firewall Common?
You said they disabled the local firewall. That's how I'd run most Windows servers on a network of any size, because the local firewall just eats up resources on the server that could be better used for the server's actual job. The firewalls should be proper hardware firewalls built into the networking infrastructure located a) between the outside world and the client networks to control access to the network in general, b) between the POS terminal segment and the server segment to control what access the terminals have to the servers and to block the servers from unnecessary access back to the POS terminals, and c) between the two client networks you mention to control what access each client has to the other's network.
The Windows Firewall itself is fairly useless in a large network because as far as incoming connections go it can't control things any better than a hardware firewall can, and for outgoing connections it's pointless because any malware that might try making unwanted outbound connections has to be assumed to have enough access to disable or bypass the Windows Firewall.
People Who Claim To Worry About Climate Change Don't Cut Energy Use
People who're worried about climate change would likely be people who've already started cutting electricity usage. If you've been doing things to cut down for several years already, how likely are you to still be able to make big gains? Not very. It's a lot easier to get those when you haven't cared and can still do the easy things like replacing burned-out incandescent bulbs with CFLs or LEDs, or replacing an old less-efficient refrigerator with a new one when remodeling the kitchen. It's not so easy when you did all those things, and replaced the windows with double-pane insulated ones and had the heating/cooling system upgraded to a modern unit, several years ago, and now all that's left is very-big-ticket items like a solar power system or infeasible stuff like completely rebuilding the house using modern materials and construction.
Airbus Patents Windowless Cockpit That Would Increase Pilots' Field of View
It's a good idea as long as everything's working perfectly, but the failure mode in the event of avionics problems makes it unacceptable.
Amazon Fighting FTC Over In-App Purchases Fine
Amazon is confusing users by making it so that setting the parental controls to "no in-app purchases allowed" leaves the game in a condition where in-app purchases are still allowed. If I get in a car, put the car into Reverse to back out of a parking spot, then put it in Drive to go forward, a reasonable person would expect the car to go forward. They wouldn't expect it to continue to act as if it were in Reverse for another few minutes before the Reverse setting expired and it began to act in accordance with the gearshift setting. Similarly when you set the parental controls in an app you'd expect the app to act according to the controls, not to ignore your setting for several more minutes because you've entered the password recently (as part of setting the parental controls, not to authorize purchases).
Amazon Fighting FTC Over In-App Purchases Fine
I think Amazon's problem is going to be that just refunding the purchases doesn't help the parents. If the kid maxes out the credit card on in-app purchases, the parents have to deal not just with those purchases but with the fees and interest from over-limit charges on the card and/or the additional costs associated with any declined charges (eg. if I pay a bill on-line using my card and the charge is declined, I get hit for late fees and possibly service disconnections). It's even worse when this happens while you're out of town (eg. the kid does this while the family's on vacation, and when you go to check out of the hotel you can't pay your hotel bill and have to figure out why without being able to check your accounts on-line to see what unexpected charges are there). The only acceptable way of handling things is what Amazon should've done from the start: once parental controls are turned on in an app, all actions that would cause a charge or affect parental controls always require a PIN (and ideally there'd be an option to say "don't allow charges period until parental controls are turned off again").
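The behavior argued for here is simple enough to sketch. This is a hypothetical illustration, not Amazon's actual implementation; all class and field names are made up:

```python
class PurchaseGate:
    """Gate every charging action behind the parental-control PIN.

    Deliberately has no "recently authenticated" grace window: the PIN
    is checked on every charge, even seconds after it was last entered
    (eg. as part of turning parental controls on).
    """

    def __init__(self, pin, controls_on=True, block_all=False):
        self._pin = pin
        self.controls_on = controls_on
        self.block_all = block_all   # "don't allow charges period"

    def authorize_charge(self, entered_pin):
        if not self.controls_on:
            return True              # controls off: no gate at all
        if self.block_all:
            return False             # charges disabled outright
        # No cached authentication, no timer: just check the PIN now.
        return entered_pin == self._pin
```

The key design choice is the absence of any stored "last authenticated at" timestamp, which is exactly the state that creates the confusing grace-period behavior in the first place.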
Prisoners Freed After Cops Struggle With New Records Software
That assumes they're paying their Excel programmers. More likely they don't have any programmers on staff to pay, they subcontract that tedious and non-core-business detail out to an outsourcing firm in India or China or somewhere.
Prisoners Freed After Cops Struggle With New Records Software
Sounds like a typical bollix-up: the system was a drastic change from the existing one and difficult to use, and has performance problems on top of that, but management still sent it live and turned the old system off without making sure everyone had thorough training. On top of that they didn't have any extra resources on hand to help with the extra workload as people learned the new program on the job and didn't have anybody familiar with the program on hand to help the users. End result: the entirely predictable train wreck occurred. But of course the management responsible for this will never be held accountable for it. Instead the blame will be put on "the software", instead of the management who signed off on the software being acceptable when it manifestly was not.
Chinese-Built Cars Are Coming To the US Next Year
I'm reminded by earlier cases of problems with Chinese-sourced products that the Chinese attitude is very much "It's the buyer's responsibility to make sure they're getting what they ordered and paid for. If they don't check, it's their fault for being so gullible." Not exactly the attitude I'd be looking for out of a manufacturing center.
Microsoft Fixing Windows 8 Flaws, But Leaving Them In Windows 7
This is just an extension of the kind of coerced upgrade Microsoft's attempted before. With Vista and then with Win7, when they didn't take off on their own MS tried to force the issue by making the latest versions of IE and DirectX and such only available for Vista/7, not XP. This is the same thing: "Upgrade to Win8 or take the heat for running a vulnerable OS.". Thing is, it'll backfire the same way the "no latest DirectX on XP" did. Win7's such a large base that developers can't afford to write code that won't run on it, so they won't be able to use the new Win8-only safe functions. Which means applications will remain vulnerable on Win8, just like on Win7 where they also run.
AT&T To Use Phone Geolocation To Prevent Credit Card Fraud
If they're going to track your cell phone, that means they're assuming you have your cell phone on you. So why not send the authorization code to your cell phone and let you give it to the merchant? That way it doesn't matter if the card's stolen; the merchant can't get an auth code if you aren't present with your phone. Or better yet, have an app that'll let you punch in the merchant's ID and transaction number and initiate the payment from your end, rather than having the merchant handle your card. That makes stealing the card pointless, because just having the card isn't enough to let you make a charge.