Alva Noe: Don't Worry About the Singularity, We Can't Even Copy an Amoeba

mcrbids Exponential growth (446 comments)

Assume for a second that you have a pond, and a new type of algae has been introduced into it. Algae grows quickly, so let's assume a doubling time of a day: 24 hours. The concern is that this new algae is gross and smells bad, and nobody wants a pond full of this disgusting algae. Unfortunately, treating the algae is expensive, and nobody wants to treat the entire pond.

The question is: one week before the pond is entirely covered in algae, would enough have appeared that you would even notice? At a "gut instinct" level, we'd guess that perhaps a quarter, a third, or at least a tenth of the pond would be covered, but that gut-level instinct would be completely wrong. Just 0.78% of the pond (1/128th, seven doublings short of full coverage) would be covered - right about the point where it becomes noticeable at all.
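The arithmetic is easy to check for yourself; a quick sketch (any language would do, this happens to be Python):

```python
# Coverage of a pond whose algae doubles daily, counted back from the
# day it reaches 100%. Each day earlier means half the coverage.
for days_before_full in range(8):
    coverage = 100 / 2 ** days_before_full
    print(f"{days_before_full} day(s) before full: {coverage:.2f}% covered")
```

Seven days out, the pond is only about 0.78% covered - invisible to the gut.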

The point is this: information processing capability, globally, isn't just growing exponentially: the rate of growth is itself also growing exponentially. Just about exactly the time we notice actual, verifiable machine intelligence of any kind is just about exactly the time we have to assume its ubiquity.

Previous discussions talk about the number of cross-connects and how far away we are from the mark, without noting that the Internet itself allows for an effectively unlimited number of cross-connects - my laptop can connect directly to billions of resources immediately, with an average 10-25ms delay. Now, it's very likely that "cross-connects" in the context of AI means something substantially different from the "cross-connect" capability that global networking enables, but it's equally true that people generally fail at understanding exponential growth. It's why 401(k)s are so universally underutilized, why credit cards are such big business, and why the concept of the "singularity" seems like such hocus pocus at the gut level.
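The 401(k) point is the same failure of gut instinct in reverse. A toy compounding sketch, where the $5,000/year contribution and 7% return are assumptions purely for illustration:

```python
def future_value(annual=5000.0, rate=0.07, years=30):
    """Balance after contributing `annual` at the start of each year,
    compounding at `rate` annually."""
    total = 0.0
    for _ in range(years):
        total = (total + annual) * (1 + rate)
    return total

# 30 years of $150,000 in total contributions grows to roughly half a
# million dollars at an assumed 7% - far beyond most gut-level guesses.
print(round(future_value()))
```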

3 days ago

Coding Bootcamps Presented As "College Alternative"

mcrbids Lovin' that smell of BIAS (226 comments)

See, anybody who has a CS degree will be motivated to HATE boot camp guys. Employers who want more (cheaper) labor will be motivated to LOVE any force that lets them hire more people at less cost.

As a self-taught programmer managing a highly profitable, 10+ year project, I'll let you guess which side of that divide you'll tend to find me on.

about two weeks ago

Data Center Study Reveals Top 5 SMART Stats That Correlate To Drive Failures

mcrbids Re:The measurements in question: (142 comments)

Your later comments about ignoring RAID controller warnings for a *year* strike me as callous. But we all have our standards, and standards vary greatly from place to place, as the needs that drive those standards also vary greatly (financial institutions care much more about transactional correctness than Reddit does).

After months of testing, our organization has wholeheartedly adopted ZFS. We've found that not only is it technically far superior to the other storage technologies we've used, it's significantly faster in many contexts, more stable than even EXT4 under continuous heavy read/write loads, and brings capabilities to the table that even expensive hardware RAID controllers have a tough time matching. Best of all, since it runs on JBOD, the cost is somewhere between insignificant and irrelevant.

I was wondering if you had investigated ZFS at all, and if so, why you aren't using it?

about two weeks ago

Denmark Faces a Tricky Transition To 100 Percent Renewable Energy

mcrbids THIS problem solved long ago... (488 comments)

Large scale internal combustion engines are extremely efficient and can run on just about anything burnable: vegetable oil, powdered coal, agricultural dust, wood gas from trees, dried leaves, etc. Yes, you can literally run an engine on banana peels. The trick is getting the fuel/air mixture balanced correctly.

From the perspective of a generator for a hospital, it would be relatively straightforward to design a generator running an engine like this with whatever renewable fuel is most convenient and readily available locally. Large scale wood gas installations typically work with fuel pre-processed into pellets.

about two weeks ago

The Students Who Feel They Have the Right To Cheat

mcrbids Re:Ok... just turned two score, but... (438 comments)

You make it sound like it was paradise in the 80s. It had its suckiness, just like today does.

1) There were constant threats of terrorism in the media in the 80s. Take a look at the "Libyans" in "Back to the Future".

2) Helicopter parents were definitely a thing in the 80s.

3) There were plenty of poor example adults in the 80s.

4) I'll 100% grant that entry level jobs are *much* harder to find now.

5) NSA and FBI watched us in the 80s. Ma Bell logged every call ever made. What was that you were saying on the CB Radio, back when the FCC actually gave a damn?

6) Granted: massive student debt, partially offset by the relative ease of getting into school. Yes, debt is a problem, especially when you pick a lame degree. It was always a problem; more so now.

7) There was no "online", so no posting stupid stuff online, and no online bullying. Bullying back then wasn't some insult posted in a chat room; it was a broken jaw. I remember well facing my bully with a stick in my hand, being knocked flat repeatedly by a kid with 30 pounds on me, while I cursed defiantly and got up to face him again.

8) Education system was "declining" then too.

9) I'd argue that the cold war and the constant threat of total, global annihilation far outweighs a few school shootings. Or did you forget that little detail?

about two weeks ago

President Obama Backs Regulation of Broadband As a Utility

mcrbids Re:They ARE a utility. (706 comments)

The only reason the airline industry is not a natural monopoly is the massive public infrastructure provided by the US government's FAA: public-use airports and the related flight control infrastructure. In every meaningful sense, an airport solves the "last mile problem" for airplanes. Why wouldn't we expect a similar investment in the "last mile problem" for Internet service?

SouthWest doesn't own the Oakland Airport; they merely lease a terminal. Can you imagine what would have happened if Delta had owned the airports too?

about two weeks ago

Ask Slashdot: How Useful Are DMARC and DKIM?

mcrbids Re:Here we go again (139 comments)

I've seen this lame list for 10 years; it's pretty much trolling bait. But based on your responses, I wonder whether you even know how DKIM works.

(X ) It will stop spam for two weeks and then we'll be stuck with it

Pretty tough to forge a legitimate cryptographic signature.

(X ) Requires immediate total cooperation from everybody at once

Not at all. You can use it, or not. If you don't use it, you essentially give permission for black hats to spoof your identity. Also, if you are an admin, you can choose what you do with DKIM.

(X ) Many email users cannot afford to lose business or alienate potential employers

How is being able to protect your account from being spoofed going to affect business?

(X ) Lack of centrally controlling authority for email

Why would you need one? DKIM is done via DNS and is under the control of the record holder.

(X) Asshats
(X ) Huge existing software investment in SMTP
(X ) Armies of worm riddled broadband-connected Windows boxes
(X ) Eternal arms race involved in all filtering approaches

Do you actually know how DKIM works? Each of these points is either effectively mitigated by DKIM or is irrelevant to it.

(X ) Ideas similar to yours are easy to come up with, yet none have ever
been shown practical

Care to name one?

(X ) Whitelists suck
(X ) Countermeasures should not involve sabotage of public networks
(X ) Why should we have to trust you and your servers?
(X ) Killing them that way is not slow and painful enough

How is DKIM a whitelist? You really have no idea how this works, do you? Did you just fill in some boxes at random?

I'll address a single point on here, to show how DKIM works rather well even in the worst of the points:

(X ) Mailing lists and other legitimate email uses would be affected

One of the products my company provides for schools is a "mailing list reflector" that in practice works very much like your average mailing list. In order to ensure delivery, all outbound email is signed with DKIM, even though we're really just forwarding the original message to the mailing list recipients.

How is this done? We put a dummy address in our own domain in the "From" field (standing in for the original sender's address, e.g. "originaluser@gmail.com") and set the Reply-To field to match the original sender. Thus, DKIM passes because we provide the keys for mycompany.com and the message is "From" mycompany.com, and the end user can reply and reach the original sender without involving our mail server at all.
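A rough sketch of that header rewrite using Python's standard email library (the function and domain names here are hypothetical illustrations, not our production code):

```python
from email.message import EmailMessage
from email.utils import parseaddr

def rewrite_for_reflector(msg: EmailMessage, list_domain: str) -> EmailMessage:
    """Point Reply-To at the original sender, then move the From address
    into our own domain so our DKIM signature's d= domain matches it."""
    _, original_addr = parseaddr(msg["From"])
    del msg["Reply-To"]          # deleting a missing header is a no-op
    msg["Reply-To"] = original_addr
    del msg["From"]
    msg["From"] = f"{original_addr.split('@')[0]}@{list_domain}"
    return msg
```

Replies go straight back to the original sender; only the forwarded copy carries our signed domain.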

It's a compromise, but it works well and we've had virtually no complaints.

about three weeks ago

Boo! The House Majority PAC Is Watching You

mcrbids Re:Here's why (468 comments)

Voters worry about irrelevant issues like abortion, gay marriage, inequality, and racism, while not worrying enough about the stuff that matters, like banking regulation, tax policy, nepotism, and crony capitalism.

And, in my opinion, that's largely because of the Centrally Controlled Media in the United States. And if you think "Main Stream Media" doesn't include Faux[sp?] News, you're also a victim of this control.

about a month ago

Vulnerabilities Found (and Sought) In More Command-Line Tools

mcrbids For all the idiots (87 comments)

... to the masses of the sarcastic "I thought Open Source was more secure!" crowd: in an Open Source forum, when vulnerabilities are found, they are patched. Since it's a public forum, the vulnerabilities are disclosed, and patches / updates made available. The poor, sorry state of the first cut gets rapidly and openly improved.

With closed source, the vulnerabilities can simply stay hidden and undisclosed, and you have no ability to know about them or fix them yourself; the poor, sorry state of the first cut may never improve. Yes, there are some closed-source shops whose culture takes security seriously - but you have no way of knowing which ones.

This, right here, is what "more secure" looks like: public notification of the vulnerabilities and patches to distribute.

about a month ago

How Apple Watch Is Really a Regression In Watchmaking

mcrbids Re:How big a fuss is it, really? (415 comments)

I haven't worn a watch in well over a decade. Why should I start wearing one now?

about a month ago

OEM Windows 7 License Sales End This Friday

mcrbids Re:Time to "stock up" from NewEgg ... (242 comments)

Linux is free because it is open source, but that can have its own associated restrictions (associated with the time input required to bring it to a certain level of functionality, depending on your Linux expertise.)

I guess you haven't set up a recent Linux distro? Using Fedora, I can have a workstation up and running, fully updated, in 30 minutes. Compare that with Windows and its day-long update/reboot/install cycle. At the very least, let's talk about the current state of Linux, and not its state as of 2001, OK?

about a month ago

Ask Slashdot: Unlimited Data Plan For Seniors?

mcrbids Re:t-mobile $50 (170 comments)

Happy MetroPCS customer here. Seriously, they rock. Coverage isn't fabulous but isn't bad either.

about a month ago

Ask Slashdot: Smarter Disk Space Monitoring In the Age of Cheap Storage?

mcrbids Re:We have more but we USE more. (170 comments)

With today's 4-8 TB drives, it's easy to keep billions of files on a single disk, so you could potentially keep data for many thousands of customers on a single disk. But if you do that, you quickly run into an entirely new type of constraint: IOPS.

The dirty secret of the HD industry is that while disks have become far bigger, they haven't really become much faster. 7200 RPM is still par for the course for a "high performance" desktop or NAS drive, and at 7200 RPM simple physics limits you to something on the order of 100 random requests per second.
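The physics works out to roughly this back-of-the-envelope calculation, where the 8.5 ms average seek time is an assumed typical desktop-drive figure, not a measurement:

```python
def max_random_iops(rpm=7200, avg_seek_ms=8.5):
    """Rough ceiling on random IOPS for a spinning disk: each request
    pays roughly one average seek plus half a rotation of latency."""
    half_rotation_ms = 60_000 / rpm / 2   # ms per rotation, halved
    return 1000 / (avg_seek_ms + half_rotation_ms)

print(round(max_random_iops()))  # on the order of 80 for a 7200 RPM disk
```

Faster spindles help (a 15k RPM drive with a shorter seek roughly doubles this), but nothing like the capacity curve.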

Spinning disks are already a non-starter for many scenarios, and this is a trend that will only accelerate as HDDs basically become the modern equivalent of tape backup.

about a month ago

Ask Slashdot: Aging and Orphan Open Source Projects?

mcrbids Re:Retired developers (155 comments)

Have you considered starting a company around the OSS Project? It's typical for a project in your position to spawn a commercial support entity to satisfy support needs, the $$ for which is also used to develop/support the project.

about a month ago

Can the Sun Realistically Power Datacenters?

mcrbids Re:Obligatoriness Extraordinaire (237 comments)

Sadly, there just aren't enough places with lakes to store anything like the amount of power we'd need to store, and you also have to deal with transmission loss between the solar site and the point of use. There was a proposal a while back to use massive carved granite/stone blocks to store power as gravitational potential energy, but it doesn't seem to have gotten much attention beyond its initial proposal.

about a month and a half ago

Tesla Announces Dual Motors, 'Autopilot' For the Model S

mcrbids Re:Read speed limit signs (283 comments)

Never mind highway 505...

about a month and a half ago

Vax, PDP/11, HP3000 and Others Live On In the Cloud

mcrbids The value is the software (62 comments)

Up until about the year 2000, I ran a small hardware shop for customers. Gradually, it became clear to me that the value of computers isn't in the hardware, it's in the software and data that they hold.

In response, I reinvented myself and co-founded a company that hosts data for (now) hundreds of clients and tens of thousands of users. Comparing the total hardware value of all our servers to our annual revenue puts hardware expenses (roughly) at petty-cash levels. Servers host a *lot* of data; it's the data, and the software used to manage it, that's valuable.

about 1 month ago

Belkin Router Owners Suffering Massive Outages

mcrbids Go for the 100% Open Source option (191 comments)

I've had issues with my last several routers, so I recently bought the very first 100% OSS router. My thinking is that if it's open source, it's probably high quality code, and it's more likely to get updated than proprietary firmware, where the vendor has a cash incentive to have you buy a new router rather than fix old bugs.

As far as hardware goes, it's mid-range router hardware, N300 Wifi with respectable antennas and a ho hum 100 Mbit hardware switch. The UI was a little odd, more complex and far more options than your typical Wifi router interface.

However, in the month or so that I've had it, it's been the least problematic Wifi I've had in a few years. I live in a densely populated area with quite a few other hotspots in sight, and I haven't noticed any issues where restarting the router made a difference.

I haven't had the chance yet to hack it, but even as just a router, this is a winner. Also, support products that are consumer friendly like this one. It's not even more expensive! (Currently just $52)

about 1 month ago

Belkin Router Owners Suffering Massive Outages

mcrbids Re:Live by the cloud, die by the cloud. (191 comments)

Fake Internet connectivity is when some WiFi access point hijacks all DNS requests to take you to some login web page or ad.

So my company presents at trade shows. Trade shows often have Internet service available at ridiculous prices, and frequently, performance is horrible. Often, rather than pay that ridiculous price, we have a laptop set up with the same configuration as our servers, and run with a recent backup copied onto the laptop. This lets us demonstrate our products with a "sandbox" - same as we use for development - without having to bother with the on site Internet.

Our mobile "server" is set up to wildcard DNS to a locally hosted copy of our website. Other vendors, of course, see our hot spot and figure they can use it to get Internet service on somebody else's dime. When they find that all they can get to is our website and product, it's typical for them to get upset - more than once we've been accused of hacking!

Now we set up the hot spot with an SSID like "NoInternetHere" as a way of discouraging trouble.

about 1 month ago

Grooveshark Found Guilty of Massive Copyright Infringement

mcrbids Re:Some content should be avoided... (171 comments)

Raise your hand if you honestly think that Mickey Mouse as a trademark will enter the public domain in 2023. ....

Me neither.

about 2 months ago



Comcast blocking DNS for BitTorrent users?

mcrbids mcrbids writes  |  more than 2 years ago

mcrbids (148650) writes "It appears that Comcast is killing BitTorrent use by blocking DNS to BitTorrent users.

For the past week, I've been having issues with my Comcast cable where everything "works fine" except DNS. Even setting up my own caching name server did not work, since UDP port 53 to the public Internet was a black hole for me. Resetting the modem/router fixed it, only for the problem to recur anywhere from a few hours to a day later.

Last Friday I noticed BitTorrent running on my Mac, sharing only a CentOS ISO image, and killed it. I haven't had a problem since. Can anybody corroborate this apparently new tactic being used by Comcast to censor BitTorrent use?"

Apache webserver vulnerable to "slow get", too

mcrbids mcrbids writes  |  more than 3 years ago

mcrbids (148650) writes "About a month ago, a story broke that HTTP servers (Apache, IIS, and everything else out there) were susceptible to a "slow POST" attack, where a malicious client starts a connection to a web server, sends headers indicating a very large upload via POST, and then sends that upload very slowly, starving resources and eventually causing a denial of service.

Well today, doing some research to see how effective this attack was (hint: VERY EFFECTIVE) I tried the same thing using http GET as well, and saw very similar results. With a simple, 20-line PHP script run from my laptop, I was able to take a fairly beefy internal webserver (8 core, 12 GB RAM, CentOS 5) offline in just under a minute, and keep it that way for as long as I wanted to. The technique was simple: send "GET /" and then append letters, 1 or 2 every second or so. After several hundred simultaneous connections were achieved, the web server was no longer responsive. I don't have an IIS server to test against, and don't feel like using any "unwitting volunteers".

It doesn't take a large botnet to take most hosts offline. It takes only a single, relatively low-powered laptop and a 20-line script hacked up in PHP 5. Given that the "slow POST" attack is already well known, it's only a matter of time before a black hat discovers that even disabling form POST won't protect anybody, either!"

Disable Advertising? No way!

mcrbids mcrbids writes  |  more than 4 years ago

mcrbids (148650) writes "Dear Slashdot,

This is the only way I can think of to actually send a communication to you. I noticed tonight a checkbox labelled: "As our way of thanking you for your positive contributions to Slashdot, you are eligible to disable advertising."

Well, I'm not going to check it. I've spent years writing my often +modded posts, and have enjoyed doing it! Your advertising is subtle enough not to detract needlessly from the experience, you get a few pennies from my daily views, and I have purchased more than one item due to an ad posted on Slashdot. It's a win/win/win situation, and I will not be checking the button, nor do I steal content from websites by using products like Adblock. If a website posts ads intrusively, I avoid that site rather than legitimize an offensive website by giving it the benefit of my eyeballs.

Thank you Slashdot, for maintaining a high quality, highly relevant site for over 10 years now! I've not paid a thin dime for any of your content, and I have spent countless hours pontificating finer points; you have more than deserved whatever revenue you get from your classy, unobtrusive ad impressions!"

Root hole found in Linux

mcrbids mcrbids writes  |  more than 5 years ago

mcrbids (148650) writes "Looks like a pretty serious hole has been found in Linux — affecting 32 and 64 bit versions, with and without SELinux, using a creative exploitation of null pointer dereferences. You can check it out yourself. As of this writing, there are no patches available, making it a potential zero-day exploit."

Rockstar squelches connection to Michael Savage

mcrbids mcrbids writes  |  more than 5 years ago

mcrbids writes "While poking around online I found this article, which details an easily verified connection between Rockstar Energy Drinks and Michael Savage, the "shock jock" commonly found on ultra-conservative talk radio. Michael Savage has been banned from entering the United Kingdom due to the hateful nature of his monologues. Strangely, he broadcasts from the highly liberal San Francisco on KNEW AM. Rockstar has responded with the standard C&D route, lawyers et al. Is this going to be another example of a company that hasn't discovered the Streisand Effect, or is there legitimately no connection between Michael Savage and Rockstar Energy Drinks, even if they are at the same address and share the same CFO? (Michael's wife, the Rockstar CEO's mother)"

Best javascript framework?

mcrbids mcrbids writes  |  more than 5 years ago

mcrbids (148650) writes "For the past 6 years or so, we've been heavily developing a proprietary, custom vertical application based on Linux, Apache, PHP, and PostgreSQL in a home-rolled PHP framework based loosely on Horde. We've been quite successful in the marketplace with our relatively classic technology based on HTML 3.x.

After investing heavily in fully redundant server clustering over the past year or so, we're finding that we'd like to improve our look and feel, improve response time, etc., and the natural way to do this is by incorporating javascript/ajax into our product. We've already begun using some ajax(y) techniques in a few areas where very large tasks need to be coordinated over a long period of time — EG: longer than a typical browser timeout.

But we don't want to re-invent the wheel. There is a bewildering array of javascript frameworks, and with any framework, there's the risk of getting stuck trying to do something not anticipated by the framework developers.

So, which is the best, and why? Which should be avoided? Here are some of the frameworks I've seen so far:

Dojo, Ext JS, Fleejix.js, jQuery, Mochikit, Modello, Mootools, Prototype, Qooxdoo, Rico, and Scriptio. So far, in my research, jQuery and/or Prototype seem to be front runners, Dojo perhaps a close second.

I'd be most interested in the opinions of people who have switched from one to another, and why?"

Turbo-charging logging?

mcrbids mcrbids writes  |  more than 6 years ago

mcrbids (148650) writes "I'm revamping our web-based application and am currently reviewing options as far as logging, particularly with redundant, clustered hosting solutions. I've run into a few problems that it seems no amount of online searching seems to have found.

My first concern is about scalability — our application writes directly to local log files. Unfortunately, many of the log entries are quite large, and so cannot be piped through syslogd. Other options are much heavier, carry significant administration overhead, or introduce bottlenecks. Is there a syslogd replacement that allows for very large (tens of KB or larger) log entries?
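One workaround I've been toying with is to skip syslog's datagram transport entirely and ship length-prefixed entries over a TCP stream, which has no practical size cap. A sketch of the framing (my own ad hoc format, not any standard):

```python
import struct

def frame(entry: bytes) -> bytes:
    """Prefix a log entry with its length as a 4-byte big-endian int,
    so arbitrarily large entries survive a TCP stream intact."""
    return struct.pack("!I", len(entry)) + entry

def unframe(stream: bytes) -> bytes:
    """Recover the first entry from a framed stream."""
    (length,) = struct.unpack("!I", stream[:4])
    return stream[4:4 + length]
```

A tens-of-KB entry round-trips cleanly, where UDP syslog would truncate or drop it.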

My next question is about logfile integrity. A perfect log file is write-only: data goes in, gets saved, and never gets deleted or rewritten. But any log file is essentially just a file, and a single # echo "" > /path/to/log will kill the log file dead. Yes, you can log remotely, but this increases complexity and therefore the chances of failure - and what if your remote log server is also compromised? I've been considering the use of a CD-R, specifically its ability to recover from a buffer underrun during a write sequence. I've simulated a few underruns by tying up the HDD with I/O while burning a CD-ROM; the drive under-ran, renegotiated, then resumed writing without incident. Why not use this capability, leave the drive in a sort of permanent underrun, and renegotiate for each log entry? Wouldn't doing so create a file that could not effectively be erased, even if the host was compromised?"

