
Comments


Debian Talks About Systemd Once Again

RavenLrD20k Re:Remove It (517 comments)

On my servers, the current business week's logs stay in plain text and aren't compressed and archived until 11pm Sunday night, ready for the next week. I keep a month's worth of archived logs. Here's why: if a system goes down for some reason, the only logs with anything immediately useful are the uncompressed ones that can easily be cat-dumped or vi'd for initial troubleshooting. You're most likely going to need only the last few lines of the log to find out what went wrong. If troubleshooting goes deeper than that and you find a longer history of problems that culminated in the panic, any liveCD distro will have the tools necessary to crack open your archives.
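For what it's worth, that rotation scheme is nothing exotic; a logrotate stanza roughly like the following would give you the same behavior (the paths and exact options here are illustrative, not my actual config):

/var/log/messages /var/log/secure {
    weekly           # rotate once a week; cron timing controls the exact hour
    rotate 4         # keep roughly a month of archives
    compress         # gzip the rotated copies...
    delaycompress    # ...but leave the most recent rotation uncompressed for quick cat/vi
    missingok
    notifempty
}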

Binary log systems are a Disaster Recovery nightmare. The only reason you have a log system is that something went wrong and you need to do some form of troubleshooting/recovery. If your core system is still working fine and the native systemd is able to read the binary, great. What happens when a system partition crashes and won't boot back up? Please enlighten me on how a binary log file can be read on a system that won't boot itself. Can any liveCD using a systemd-based distro read the binary file and translate it to a human-readable format? Also, it's been said that using a config file, the journal system of systemd can write to a plaintext file. Please explain how that works. Using the config file, does the journal system completely turn off so that each component individually writes to syslog, generating its own log file or adding to one of the already created pertinent log files, as it does with System V? Or does each program send its message to the journal system, and it's this system that sends a message to syslog to write? If it's the latter case, what happens if during a system panic the journal system corrupts the data being written? What if the journal system itself craps out in a failure?
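The closest I've gotten to an answer on the plaintext question so far is secondhand: supposedly the knob lives in /etc/systemd/journald.conf, with something roughly like the following (unverified on my part, and the crash behavior is exactly the part I still don't know):

[Journal]
Storage=volatile       # or Storage=none to keep the binary journal off the disk entirely
ForwardToSyslog=yes    # journald still collects every message, but hands it on to a classic syslog daemon

If that's accurate, it's the latter of my two cases: programs still talk to the journal, and it's the journal that forwards to syslog, which is precisely the coupling I'm worried about in a panic.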

These are all questions that I legitimately do not have answers to yet, and I haven't had time to research them. Before I consider updating my systems to a systemd-based distribution these questions MUST be answered satisfactorily, and as I draw closer to that point I will be making time to research them. I don't have time for FUD, fanboyism, or anything else of that sort from either side. I have specific requirements that must be completely answered. If the answers are not forthcoming, I, and many, many sysadmins like me, will be keeping System V init on my servers by whatever means necessary.

3 days ago

Google Announces Motorola-Made Nexus 6 and HTC-Made Nexus 9

RavenLrD20k Re:phablet (201 comments)

I'm waiting for a Pip-Boy-style mount for my phone, with hard inputs on top of the touchscreen. Telephone-style communication can be had through Bluetooth headsets.

5 days ago

Google Announces Motorola-Made Nexus 6 and HTC-Made Nexus 9

RavenLrD20k Re:Same cell modems? (201 comments)

Unfortunately, I don't know of any cell modem manufacturers other than Qualcomm, so there's not much of an option for an open modem platform. If you happen to have information on other platforms that are open and mass-marketed, please enlighten me.

5 days ago

Password Security: Why the Horse Battery Staple Is Not Correct

RavenLrD20k Re: symbols, caps, numbers (546 comments)

Ms., Miss, and Mrs. are all abbreviations for the same word: Mistress. So by pedantic technicality, no matter which abbreviation is used they all mean the exact same thing, and there's no real distinction between them.

And to the AC child that stated "Maybe she remarried": maiden always refers to the name a lady was born with, not any of the names she took on through any number of subsequent marriages.

about a week ago

Ubisoft Claims CPU Specs a Limiting Factor In Assassin's Creed Unity On Consoles

RavenLrD20k Re:More childhood (337 comments)

...whether you choose to believe it or not, you suffer from your self-imposed PC-centrism.

"I know this steak doesn't exist. I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious. After nine years, you know what I realize? Ignorance is bliss." --Cypher, The Matrix

I'm PC-centric. I know this and I embrace it. Unlike the "rest of the world," as you put it, I see that there are more than enough good titles on PC to occupy my limited entertainment time. Those games that want to be console exclusive? Bye. I don't need you. There's only a handful of franchises that make me consider buying a console, and they tend to be on Nintendo anyway since it's mostly nostalgia-driven impulses. Even there, the newer titles seem to break and disjoint the franchises. On the other consoles everything seems generic enough that there's usually a PC title that can mimic the fix if I really want it. One case in point: Dishonored vs. Assassin's Creed; also, just about any FPS can fill in for another. Destiny? Far Cry. Call of Dookie (any of them). Crysis (again, any of them). Deus Ex: HR. Hell, old-school Deus Ex can fill this niche satisfactorily, and with none of them do I have to have an active internet connection to play. The only FPS series for which I haven't been able to find a PC game to stand in is Metroid Prime.

about two weeks ago

What's Been the Best Linux Distro of 2014?

RavenLrD20k Re:systemd (302 comments)

Tool for the job. OOP has its place in programming, as does procedural. The same can be said of systemd. On my desktop system I'm probably not going to care when the upgrade forces me onto systemd, since it's not a system where I need to be concerned about long-term uptime and stability. My servers, on the other hand, are a completely different story. I've got another year or so on my LTS, and in that time I intend to put systemd through its paces on a test machine, force it to fail, and observe the ways in which it fails. If it does what my hypothesis says it will, based on its generally monolithic design strategy, then I will be looking for a distribution that hasn't drunk the systemd Kool-Aid for my next LTS. If it surprises me and performs well during the fail testing, then I will consider using it.
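The poking I have in mind is nothing fancy; roughly this sort of thing on a throwaway VM (the unit names here are just placeholders for whatever services the test box happens to run):

# kill a service's main process out from under systemd and watch what it does
systemctl kill --signal=SIGKILL sshd.service
systemctl status sshd.service      # does it restart, flap, or just sit there dead?

# yank a dependency and see what cascades
systemctl stop network.service
journalctl -u sshd.service -b      # what did the journal capture for this boot?

If a single failing unit can take the whole init process sideways, that's what I expect this kind of abuse to surface.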

I believe that this is where the core hatred for systemd is really coming from. It's not so much that it's a bad init system or that people are resistant to change, though the latter is certainly a factor, with good reason. It's the fact that distributions are forcing this change on system administrators, removing their choice between continuing with a technology they have working and understand at its core, and installing and learning a whole new system they may not know their way around yet. Has Red Hat or Debian offered the old System V init package as an alternative on their default install? Not that I've seen. This is where the ire is coming from. The defense for systemd is "if you don't like it, it's open source; change it in the source." The people who say this don't realize that in a corporate production environment this is rarely a feasible argument. Sysadmins are not usually programmers, and these operating system distributions -- Red Hat, CentOS, Debian, etc. -- have just sold them out and made more work for them: testing the ways systemd can fail on their systems and reanalyzing their disaster recovery plans to account for this new anomaly. Or management may just decide to go get a service contract with Microsoft and dump Linux altogether, deeming the MS option cheaper and easier to maintain long term.

Fast blanket changes to new technologies by the distributions, without accounting for the slower-moving but gigantic enterprise user base and without offering the option to keep their current technologies, was probably one of the dumbest moves they could have made. Why did most enterprises that use Windows software stay on XP until Windows 7 came out, skipping Vista? Why am I using a Windows 7 desktop at my employer right now when Windows 8.1 is out there and mature enough for the next phase of the OS to have been announced? For the same reason you're not going to get enterprise customers jumping on board with the next release of the Linux OS they're using... if they're using one of those that is dropping System V.

about two weeks ago

Ask Slashdot: How Would You Build a Home Network To Fully Utilize Google Fiber?

RavenLrD20k Re:Combine the 2 (279 comments)

Are you saying here that 1) You don't punch both ends with the proper wiring (straight through) (you also seem to think it doesn't matter) and 2) that you are seriously suggesting wiring wallports to RJ-45 ends as opposed to a proper patch panel?

For your first point, he's saying don't bother wiring the house with crossover lines; just use straight-through cable to go from the jacks in each room (and if you noticed the line after what you quoted, he is running jacks in the room, not RJ-45 crimps) to the central switch. Don't forget that this is for residential use: one jack per room is usually sufficient. He is right that a Gigabit Ethernet switch should automatically switch a port between straight-through and crossover operation (auto MDI-X) depending on what device is connected, so it technically doesn't matter; it's just better practice to treat it like it does. In my own residential setup, I ran straight-through to one jack in each room, up into the attic space, and plugged them all into a gigabit switch mounted near the access door. If any room had multiple computers in it, I would just plug a crossover patch cable into the jack, run it to another gigabit switch in the room, and have all the computers connect into that.

There are two ways that your second point can be interpreted. You either think that he's crimping an RJ-45 end on the cable where it comes out of the wall and leaving enough cable in the wall to be able to pull it out and make the connection in the room (not what he's doing; again, read his next sentence after what you quoted), or you think that he's wiring the jack in the room back to a switch that he's connecting into using an RJ-45 connector (this is exactly what he's doing) and don't understand why he'd do it this way as opposed to running it to a patch panel.

While patch panels are a veritable necessity in large environments, where cable runs can be complex, need to be labeled, and have to accommodate quick changes to network topology, in a residential system with maybe 5-10 single-line runs that aren't going to move or change much they're an added cost and complexity that doesn't need to be purchased when each run can be connected directly into the mounted switch and left alone. Also, every time you jump from cable to patch panel to cable to device there's a performance hit on the network. Granted, over the residential runs we're talking about this hit would be nigh negligible, but if it's part of a network plan that adds a touch more complexity that isn't needed and poses no real benefit, it's just one more reason to do direct-to-switch runs. Remember, we're talking about spaces where you have a single line coming from the switch and you can most likely trace it to the room it's running to just by standing in one place in the attic and following the line with your eyes.

about two weeks ago

Ross Ulbricht's Lawyer Says FBI's Hack of Silk Road Was "Criminal"

RavenLrD20k Re:I'm still waiting for the defense lawyer that s (208 comments)

I'm still waiting for the defense lawyer that says..."Your honor my client is a scoundrel criminal and you should give him the maximum punishment for his crime."

It will never happen, because it would be illegal for the lawyer to do so. The worst a lawyer can do to his client, if he knows his client is guilty and cannot bring himself to put forth his best efforts in defense, is recuse himself from the case... and even this can have repercussions for the lawyer, such as having his conduct reviewed by the Bar Association.

about two weeks ago

Ross Ulbricht's Lawyer Says FBI's Hack of Silk Road Was "Criminal"

RavenLrD20k Re:Nice try, it's called a WARRANT (208 comments)

One part you missed in this whole thing, as mrchaotica pointed out in his subject below: no warrant was sought, let alone signed. The feds performed a potential act of war to gather the data by hacking into a server on foreign sovereign soil without direct authorization from either Congress or the President, and most certainly without the prior authorization of the country where the server is located. In this case the three-letter organization involved went rogue and, imho, completely botched this case, and Ross's lawyers are right in their attempt to get the evidence suppressed. In reality heads need to roll for this within the organisation that overstepped its jurisdictional bounds, and the rolling of heads must be done in full view of the public.

about two weeks ago

GNOME 3 Winning Back Users

RavenLrD20k Re:Quality of Slashdot discourse in death-spiral (267 comments)

For me (and likely a good portion of people out there) it's a matter of time. If we're small network administrators, we don't always have time to roll our own distribution, or to program the components to fit our network by hand. The servers out in the cloud that I run OpenCloud from have been on CentOS 6.5. I've also been running this distro on my home intranet for media storage and network management. Being able to use yum to keep the system up to date with patches and updates saves me loads of time over compiling and patching by hand. I'm also not likely to upgrade any time soon, as I tend to prefer the init system I know over systemd, which I haven't been able to test yet. I also dual boot Linux Mint KDE and Win7 on my desktop, and on this system I'm not likely to care as much about the sysvinit/systemd debate. I'll probably continue to use the newer versions of Mint to keep the administrative ease of apt.

If I do wind up having to upgrade server systems, it looks like I'll have to give up the feature that kept me preferring Red Hat/Debian-based distributions over Slack/Gentoo-based ones, just to keep the more transparent System V init system. All the package-based systems that made server administration faster and easier are sliding to the systemd blob. Even using the current LFS book and compiling the entire system myself, I'll have to go off script to keep sysvinit (I'll probably be doing this for my intranet management servers over the weekend, since those are also my personal tinker toys). Nothing more than a mild annoyance, granted, but I don't like the fact that I'm not being given a choice beyond "you can have fast binary package management with a binary blob managing all the core hardware initialization at once, with little transparency and added complexity, or you can have your init scripts that boot things transparently with a lot of feedback on each subsystem... but your packages need to be compiled for your system as they're upgraded through Portage/make install. You didn't need your server to do much of anything else for the next 2-8 hours, did you?" No offense, but this is still a situation of "better the Devil I know than the Angel I don't." I'd rather give up some time to component administration right now, knowing the system is stable and knowing how to keep it that way, than use a new init system whose inner workings I don't know and run the risk of losing the stability that I require.

Systemd is fine for my desktop system, as that's where I'll want the faster boot time it affords and I won't necessarily need to scrounge the logs, require the utmost in stability, or get as in-depth with the system's operation; but on my servers I need to be able to check logs in the event of a failure to boot (binary format for your logs? Great idea for when there's a kernel panic and you need to see what happened by reading logs through the boot loader! /s), and I don't necessarily want the system to panic at the catastrophic failure of a single subsystem when the rest of the system would be perfectly capable of limping along until repair. In full disclosure, I don't know enough about systemd yet to know for a fact that this is a problem; I'm going on hunches based on my understanding of the theory. But before I put it on any system that I will rely on in more than a tinker-toy fashion, I will be tinkering with it to see in what ways it fails and how it behaves when it does. Unfortunately we're back to the whole time thing, where I have to manage work projects, personal projects already started, and the platforms those projects run on before I can start putting time into testing systemd for my configurations.
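To be fair, on the log-reading complaint specifically, my reading so far (untested by me) is that journalctl can be pointed at a dead system's journal from a live CD or rescue environment, roughly like this (device and mount paths are just examples):

# mount the dead system's root from the rescue environment, then read its journal
mount /dev/sda2 /mnt
journalctl --directory=/mnt/var/log/journal -e    # -e jumps to the end, where the panic should be

Whether that holds up when the journal itself was being written during the crash is the part I still intend to test.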

about two weeks ago

US Says It Can Hack Foreign Servers Without Warrants

RavenLrD20k Re:But what if I'm in a boat, submarine, airplane? (335 comments)

I know you're making a joke here, but just to be a pedant: it can be and has been argued that feet touch the soil by extension of whatever clothing, platform, or cushion is providing support between the foot and the dirt. This includes bodies of water, concrete, shoes, trees... etc.

about two weeks ago

US Says It Can Hack Foreign Servers Without Warrants

RavenLrD20k Re:So what they are saying... (335 comments)

US is short for USA, which in full is the United States of America, indicating that the United States is a single country residing on the continent of America (more specifically North America), and thus the sovereign limits of our laws extend only to the country that is the United States, not the entire continent of North America. We as a nation cannot dictate policy for Canada or Mexico (though we sure as hell try, and often succeed in, influencing it), with whom we share the North American continent.

Given this fact, the DoJ does not have legal jurisdiction over anything outside the borders of the United States or its territories without the express permission of agencies of those other nations. Only the United States military, by order of Congress or through declaration of a police action by the President of the United States (until this power is successfully challenged by the other branches of government, it is a power the office is allowed), can be officially authorized to perform actions against other sovereign territories, which includes hacking computer infrastructure not within US territory. While it can be argued that the NSA and CIA have performed such actions and are not part of the military, unless they're acting in the capacity of military consultants they are very rarely given any "official" operations off of US soil. The DoJ doesn't have that option. Either they drop the evidence because it was collected as part of an operation that doesn't exist (therefore the evidence can't exist), or the operation did exist and the DoJ went rogue by going against foreign policy... and thus performed an illegal operation against the doctrine set forth in the Constitution (Article I, Section 8), and the evidence should not be used (a decent defense lawyer should be able to make this argument successfully and turn my ambiguous "should not" into a firm "cannot").

Other nations could legitimately see this action as an act of war perpetrated by the United States as a whole and would be well within their rights to call us on it by whatever means they deem necessary. We should be eating crow for this, and heads in the DoJ need to roll for going rogue, but it's very unlikely that any other country would be brave enough to step up against us on this... unless it was perpetrated against a second-world country (Russia, China, etc.), in which case you can bet we're going to see some form of retaliation.

about two weeks ago

US Says It Can Hack Foreign Servers Without Warrants

RavenLrD20k Re:So what they are saying... (335 comments)

Um, last I checked the DoJ isn't part of our military, and as such it does not have the authority to perform acts of war such as invading sovereign territories not controlled by US law; which is exactly what hacking a server not controlled by the US, without the permission of said sovereign government, is considered. Hell, the DoJ isn't even a spy organisation -- we already have other departments for that -- so it can't legitimately use the excuse that this was just SOP for intelligence gathering.

The US needs to eat some serious crow for this and the DoJ needs to be smacked down hard. My cynicism says that either of those things happening is nigh unlikely given the current political climate.

about two weeks ago

Belkin Router Owners Suffering Massive Outages

RavenLrD20k Re:Oh hey, consumers! (191 comments)

Why organize a campaign to DDoS it when all you have to do is post the address on Slashdot during prime time?

...oh wait.

about two weeks ago

Complain About Comcast, Get Fired From Your Job

RavenLrD20k Re:So, it has come to this. (742 comments)

As noted in TFA, a good portion of the US is "at-will employment," where an employer or employee can terminate the relationship at any time for any reason. This essentially means that an employer can fire someone for just about any reason, down to "you have a speck of dust on your shoe." If an employee can prove discrimination this becomes a bit hairy for the employer, but generally there's not much recourse available to the employee.

about two weeks ago

Apple Sapphire Glass Supplier GT Advanced Files For Bankruptcy

RavenLrD20k Re:How can you (171 comments)

Yeah... you missed the non-partisan view of the logic stream. Allow me to show how things played out without the Republican or Democrat filters. Bush looked at Solyndra and tried to push to invest in it... but the administration ran out of time during a political time sink. Obama's administration took over and continued the previous administration's efforts to invest. So basically the idea was started by one round of idiots and pushed forward (or at the very least, not stopped) by the next round of idiots, and through both rounds of idiots the American people got shafted. Same story as always in politics: the supporters of idiot circle A blame idiot circle B for the issue, and idiot circle B's supporters blame idiot circle A, perpetuating the huge-ass political circle jerk of finding someone to blame for a problem instead of working on actual solutions, letting things continue to spiral out of anyone's control, all the while blaming the other guy for not taking responsibility for the control stick and letting the flat spin continue.

Round and round she goes; where she stops, nobody knows. So goes American Roulette.

about two weeks ago

Why the FCC Will Probably Ignore the Public On Network Neutrality

RavenLrD20k Re:Changes require systematic, reliable evidence.. (336 comments)

Trying to assert that the internet is like "a series of UPS trucks", as you do, is not in any way an apt analogy, and you know it (or should, at any rate, if you're hanging out on a site like Slashdot).

Actually, that is EXACTLY what the internet is like. Routers are like traffic lights and route signs; switches and hubs are like stop signs, yield signs, and roundabouts. With both infrastructures, the method of transport does not care one bit about what company owns the transporter, despite your claims to the contrary. The only way the two infrastructures differ (aside from the obvious, physical cars on asphalt vs. electrical/light impulses over cable) is that streets, roadways, highways, and Interstates are all public infrastructure (with the exception of privately maintained roadways, which are few and far between by comparison), while America generally made the piss-poor decisions to 1) allow our information infrastructure to be laid out by private carriers that were also our primary content providers at the time, 2) make deals that ensured a single private carrier per medium type had domain over an area's network, and 3) trust these carriers to provide adequate infrastructure at competitive rates while disregarding the fact that our line provider and content provider are one and the same.

Both modes of infrastructure have another thing in common: traffic wear. Municipalities deal with heavy traffic by introducing more control points into a route (more routers), widening the roadways to accommodate increased flow (more bandwidth), creating alternate routes (routing/switching), creating new on-ramps to the Interstate (routing to a higher tier), and adjusting speed limits (throttling). These are all the same methods an ISP has for managing its internet traffic.

To put what the ISPs are trying to pull into perspective, let's look at it this way. Say that instead of municipalities, counties, and states owning the roadways, we had Comcast, Cox, and TWC owning them. They all charge people $50 a month for 200 miles' worth of 25 mph roadway access, with higher prices allowing for more miles and higher-speed roadways (e.g. 450 miles and access to roadways with speed limits of 45 mph or less). These companies also make agreements with courier services to give them a fast lane on their roadway; say Comcast makes an agreement with DHL that allows DHL to travel in a 55 mph lane on a normally 25 mph roadway, with no mileage restriction, to any house on its roadway. All other couriers have to use the regular 25 mph roadway with the 200-mile limit per vehicle... unless they pay a premium to Comcast for better access. Now, I have issues with DHL because there's a funky internal routing loop they use that adds a week to their package delivery even using the fast lanes, and thus I much prefer FedEx... but in this case, since Comcast made an exclusive deal with DHL and won't allow any other courier in the fast lane or increase their mileage limits per vehicle, I wind up having to wait a week and a half for the package through FedEx. So now... do I go for artificially faster service through DHL with a horrible customer service system, or wait longer for the package but deal with an awesome customer service system? I can and prefer to deal with the latter, but in this day and age... who else would? DHL now has an unfair advantage and FedEx can't be as competitive. This is what we have to look forward to on our internet if we let private companies run it. I won't even get into the nightmare that emergency services could become.

For someone with such a low ID you should remember the days of dial-up. Yes, it wasn't perfect, since we were essentially having to pay twice for Internet service -- once for the line and once for the actual ISP -- but there was actual competition between the ISPs of the time. When I lived in a metropolitan area, there were roughly 20 options to choose from, stumbling over themselves to get more customers, which meant adding more and more modem banks and finding ways to undercut the competition to get noticed for the best prices and quality. Even when I lived in a rural community for a couple of years while going to college, there were 4 ISP options, and the rates, customer service, and actual service were always top notch. Whenever there were issues they tended to lie with the phone carrier... of which, of course, there was only one.

Then we stepped into the realm of cable and DSL broadband/high-speed, where the line carriers became the ISPs. We launched ourselves into technology, devouring media at record speeds and always wanting more... and in this desire to consume more and more we made our gravest mistake. We let the dial-up ISP model die in favor of trusting singular companies to feed our addiction. Granted, it wasn't overnight, because the "killer app" to warrant the increased cost of broadband hadn't really been created yet. Online gaming was still niche. Remote connections into a work computer used CLI, if they were needed at all. Hell, even porn images over dial-up and very short low-quality movs were still faster to consume than waiting for the monthly Playboy, Club, or Hustler mag. But once video became more popular, and the likes of MySpace started demanding more bandwidth for the special effects that dazzled the kiddies... it was the beginning of the end.

Instead of finding a way to bring the dial-up model to the broadband network and keeping things neutral and competitive, we entrusted our broadband needs to the same people who were providing the lines their own content was coming down. Now we are paying the price for this lack of foresight, and the rut we're in is only getting deeper. Municipalities are being sued to stop them from rolling out their own cable/fiber networks, networks that could serve as a jumping-off point for a throwback to the dial-up model of business -- a model that would look much more like the neutral streets, roadways, highways, and Interstates of our other major infrastructure.

about two weeks ago

Experiment Shows Stylized Rendering Enhances Presence In Immersive AR

RavenLrD20k Re: Porn ... (75 comments)

I am 35 and completely agree with your assessment. I regularly interact with my wife's cousin's fiancee, who is in her early 20s, and I have to say that I am completely baffled by her naivete and gullibility. I look back to my own time at that age and I just can't understand it. I don't remember 20-somethings of the late '90s and early 2000s being nearly as damaged as she, her friends, and others I've seen appear to be, and still able to make it through life.

Let me clarify that a bit. Yes, I saw plenty of ditzes back then, but they never really made anything of themselves, nor were they expected to. They usually wound up getting out of high school and becoming the trophy wife riding on the coattails of a successful arm to hang from... and that was if they looked decent. God help them if they didn't fit the mold of what society deemed even mildly attractive. The majority of women who did have the intelligence would go on to college and invest in learning something that would give them a good career they could live off of without having to rely on a man to keep them afloat.

Fast forward to the girls I see today, and there are few who live up to the expectations of what I saw in my time... and they are usually in school for law or on course for a doctorate. The far greater majority I see are more like the trophy-wife-bound type of my day in mentality... but without fitting into what was then the required mold of attractiveness. This is the group I see going for nursing and advanced nursing degrees and the other career paths that the intelligent ones were going for in my day. I keep seeing this and I try to relate to it. Would we have made the same stupid mistakes these girls are repeatedly making without learning? Without thinking? Was it as difficult for us to keep out of trouble as it seems to be for these girls? Why did we have the concept of spending on living first, then on wants... while the average girl I see these days doesn't seem to have this basic concept? I'm not saying we always made the perfect decision, but it just seems that life was a whole lot easier for us to live day to day than it is for the current generation, and I just can't understand why.

Is it "No child left behind" that we have to thank for this because that's the generation we're starting to see hit their 20's now. I observe this generation coming into their own, both in the specific anecdotes that are part of my day to day interactions as well general observations of behaviors of people from all walks of life while I'm out and about and traveling and I can't help but wonder; In leaving no child left behind and bringing everyone down to the same level of mediocrity, have we left our entire nation behind?

about two weeks ago

First Shellshock Botnet Attacking Akamai, US DoD Networks

RavenLrD20k Re: Only the beginning (236 comments)

Go through the scripts you need and delete the 'ba' if the shebang is #!/bin/bash. Delete the scripts you don't need. Your problem is solved, provided that your login shell isn't set to bash. With software freedom comes software responsibility. If you want to run an operating system without a support contract for someone else to fix things for you, you need to learn to fix things yourself. It's like with cars: cars give you freedom, but you're responsible for their upkeep and responsible operation to ensure they keep giving you that freedom.
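If you want to do the 'delete the ba' part in bulk instead of by hand, something along these lines should do it (back up first, and note that any script that actually uses bashisms will break under plain sh; the path below is just a placeholder):

# find scripts whose shebang points at bash and repoint them at sh
grep -rl '^#!/bin/bash' /path/to/scripts | while read -r f; do
    sed -i '1s|^#!/bin/bash|#!/bin/sh|' "$f"    # only touches the first line
done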

about three weeks ago

Remote Exploit Vulnerability Found In Bash

RavenLrD20k Re:Test string here: (399 comments)

If it outputs:

bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
This is a test

or just

This is a test

then your server is patched successfully. Whether or not the error message displays depends on your bash configuration. I have three CentOS 6.5 servers that I manage in my house and one in the cloud. On the three 64-bit machines, which were original installs, it generates the error message after patching. On the one 32-bit machine, which was upgraded from a previous version of CentOS, I just get the "This is a test" message with no error after patching.
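For anyone reading this without the parent in view, the test string making the rounds is (roughly) this one-liner; a vulnerable bash prints "vulnerable" before "This is a test", while a patched one gives the output above:

env x='() { :;}; echo vulnerable' bash -c "echo This is a test"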

The idjit part of the statement is running CGI scripting on an exposed web server. It was also said mostly tongue in cheek, hence the deliberate misspelling of idiot. The reason for the statement is that it can easily be inferred, since he's (presuming male from the 'Lord' in his handle) directly able to administer a router, that either this server is production for a business or he's at home.

If it's the former, all sympathy goes out the window, as does the tongue-in-cheek intent, since he's getting paid to administer these systems and keep them secure; which also means he should know how to patch bash even if his distro provider hasn't put the patch in their upstream. A production server is that important.

On the other hand, if it's the latter, a great deal more leeway can be given, and we can safely assume he's a hobbyist. In that case, it would be prudent for him to turn this into a learning situation. First, yes, take down port 80. Second, if your distro doesn't have the patch upstream yet, this would be a good opportunity to learn how to patch by hand. The tarball can be downloaded from the gnu.org site (not giving a direct link, since the point of a hobbyist server is to do your own research). Read the documentation to learn how to manually apply the patch. Once your system is patched and you pass the test, you can open up port 80 again. Finally, you'll want to learn other options for processing your forms or dynamic pages than relying on CGI scripting. It's an old methodology that leaves your machine open to a host of problems, as this Shellshock vulnerability makes evident.
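Once you've tracked down the tarball and its official patches on gnu.org yourself, the by-hand flow is roughly the following (version and patch numbers here are placeholders; use whatever is current for your install):

tar xzf bash-4.3.tar.gz
cd bash-4.3
patch -p0 < ../bash43-025     # apply each official patch in sequence, lowest number first
./configure
make
make install                  # as root; keep the distro's bash handy as a fallback until you've tested

Then re-run the test string above against the freshly built bash to confirm it no longer executes the trailing command.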

On a final note: I guess I'm just old now, but way back when I was using Red Hat 5, whenever I needed to patch something on the box I'd dial in on, I wouldn't normally wait for the distro's upstream but would instead install from the source project myself. Not only did this keep my systems current, it also helped my understanding of that system and every way it operated. There's just a whole lot of understanding that seems to get lost when you're not dealing with levels that low.

about three weeks ago

Submissions


FCC Demands States Get Out of the Way of Municipal Broadband

RavenLrD20k RavenLrD20k writes  |  about 4 months ago

RavenLrD20k (311488) writes "I hope that Mr. Wheeler has the ability and the stones to actually act on this, but I'm not holding my breath. From the article:

While his recent waffling on Net Neutrality is still cause for concern, Tom Wheeler's recent statements in support of municipal broadband are worth cheering. In a statement posted to the FCC site, Wheeler said that: "If the people, acting through their elected local governments, want to pursue competitive community broadband, they shouldn't be stopped by state laws promoted by cable and telephone companies that don't want that competition."

That's about as strong a statement as one can expect from the head of a regulatory body. Plus, it's a pretty blunt challenge to both the industry he once lobbied on behalf of, and the government officials many believe are in their back pockets. In particular he cited the case of Chattanooga, TN which built out its own gigabit per-second fiber network out of frustration with the options offered by the incumbent Comcast. The trouble is, Tennessee's state government passed a law restricting municipal broadband projects...

"

Link to Original Source

Journals


The First day of the rest of my days...

RavenLrD20k RavenLrD20k writes  |  more than 9 years ago Well, I suppose I can give this a try. Everyone else and their mother seems to have a blog, so I might as well give it a go. If anyone cares to, perhaps I can get some feedback on what people would want to know about me... or maybe wonder what in the universe I have to offer.
