
Comments


Help ESR Stamp Out CVS and SVN In Our Lifetime

m.dillon Re:Git Is Not The Be All End All (233 comments)

What unbelievable nonsense, but I suppose I shouldn't expect too much from an anonymous coward. You don't even realize that you proved my point with your response.

-Matt

yesterday

Help ESR Stamp Out CVS and SVN In Our Lifetime

m.dillon Re:A lot of to-do about $700 (233 comments)

Over $900, and he will match the donations with his own funds so... that's definitely enough for a pretty nice machine. And with the slashdotting, probably a lot more now.

The bigger problem is likely network bandwidth to his home if he's actually trying to run the server there. He'd need both uplink and downlink bandwidth, so if he doesn't have FIOS or Google Fiber, that will be a bottleneck.

-Matt

yesterday

Help ESR Stamp Out CVS and SVN In Our Lifetime

m.dillon Re:Git Is Not The Be All End All (233 comments)

A single point of failure is a big problem. The biggest advantage of a distributed system is that the main repo doesn't have to take a variable client load that might interfere with developer pushes. You can distribute the main repo to secondary servers and have the developers commit/push to the main repo, but all readers (including web services) can simply access the secondary servers. This works spectacularly well for us.

The second biggest advantage is that backups are completely free. If something breaks badly, a repo will be out there somewhere (and for readers one can simply fail-over to another secondary server or use a local copy).

For most open source projects... probably all open source projects frankly, and probably 90% of the in-house commercial projects, a distributed system will be far superior.

I think people underestimate just how much repo searching costs when one has a single distribution point. I remember the days when FreeBSD, NetBSD, and other CVS repos would be constantly overloaded due to the lack of a distributed solution. And the mirrors generally did not work well at all, because cron jobs doing updates would invariably catch a mirror in the middle of an update and completely break the local copy. So users AND developers naturally gravitated to the original and subsequently overloaded it. SVN doesn't really solve that problem if you want to run actual repo commands, versus grepping one particular version of the source.

That just isn't an issue with git. There are still lots of projects not using git, and I had a HUGE mess of cron jobs that had to try very hard to keep their cvs or other trees in sync without blowing up and requiring maintenance every few weeks. Fortunately most of those projects now run git mirrors, so we can supply local copies of the git repo and broken-out sources for many projects on our developer box, which developers can grep through on our own I/O dime instead of on other projects' I/O dime.
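The mirror setup described above is easy to sketch with stock git: a `--mirror` clone on the secondary plus a periodic `remote update` from cron. A minimal self-contained example (all paths here are made up for illustration):

```shell
# Set up a bare "main" repo and push one commit to it (illustration only).
rm -rf /tmp/main.git /tmp/work /tmp/secondary.git
git init -q --bare /tmp/main.git
git clone -q /tmp/main.git /tmp/work
cd /tmp/work
echo hello > README
git add README
git -c user.email=dev@example.com -c user.name=dev commit -qm "initial"
git push -q origin HEAD

# Secondary server: a mirror clone carries all refs and is safe for readers.
git clone -q --mirror /tmp/main.git /tmp/secondary.git

# The cron job on the secondary is a single fetch; unlike a cvsup-style
# file copy, it never leaves the mirror half-updated for readers.
git -C /tmp/secondary.git remote update --prune
```

Developers push only to the main repo; web services and readers hit the mirrors, which can be refetched atomically at any interval.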

-Matt

yesterday

Help ESR Stamp Out CVS and SVN In Our Lifetime

m.dillon Re:SVN and Git are not good for the same things (233 comments)

This isn't quite true. Git has no problem with large repos as long as system RAM and the kernel's caches can scale to the data footprint the basic git commands need. However, git *DOES* have an issue with scaling to huge repos in general... it requires more I/O, certainly, and you can't easily operate on just a portion of a repo (a feature which I think Linus knows is needed). So repos well in excess of the RAM and OS resources required for basic commands can present a problem. Google has precisely this problem, and it is why they are unable to use git despite the number of employees who would like to.

Any system built for home or server use by a programmer/developer in the last 1-2 years is going to have at least 16G of ram. That can handle pretty big repos without missing a beat. I don't think there's much use complaining if you have a very old system with a tiny amount of ram, but you can ease your problems by using a SSD as a cache. And if you are talking about a large company... having the repo servers deal with very large git repos generally just requires ram (but client-side is still a problem).

And, certainly, I do not know of a single open source project with this problem that couldn't be solved by a measly 16G of RAM.
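A quick way to weigh a repo against available RAM is `git count-objects -vH`, which reports the object store's on-disk footprint. A tiny self-contained demo (the throwaway repo path is made up):

```shell
# Make a tiny throwaway repo so the command has something to report on.
rm -rf /tmp/sizedemo
git init -q /tmp/sizedemo
cd /tmp/sizedemo
echo data > f
git add f
git -c user.email=dev@example.com -c user.name=dev commit -qm x

# "size" and "size-pack" are the numbers to compare against system RAM;
# if they fit comfortably, basic git commands will run from cache.
git count-objects -vH
```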

-Matt

yesterday

Help ESR Stamp Out CVS and SVN In Our Lifetime

m.dillon It's not that big a deal (233 comments)

It's just that ESR has an old decrepit machine to do it on. A low-end Xeon w/16-32G of ECC ram and, most importantly, a nice SSD for the input data set, and a large HDD for the output (so as not to wear out the SSD), would do the job easily on repos far larger than 16GB. The IPS of those cpus is insane. Just one of our E3-1240v3 (haswell) blades can compile the entire FreeBSD ports repo from scratch in less than 24 hours.

For quiet, nothing fancy is really needed. These cpus run very cool, so you just get a big copper cooler (with a big variable but slow fan) and a case with a large (fixed, slow) 80mm input fan and a large (fixed slow) 80mm output fan and you won't hear a thing from the case.

-Matt

yesterday

Ask Slashdot: Why Can't Google Block Spam In Gmail?

m.dillon Google seems to do a good job for me (261 comments)

Google filters out ~100-200 spams a day from my email box (which I universally forward all my domain mail through) and leaves me with (usually) only one or two that I have to specifically mark as spam. I've never been able to do better running my own spam filter.

-Matt

about a week ago

Ask Slashdot: VPN Setup To Improve Latency Over Multiple Connections?

m.dillon Mobile links (174 comments)

For mobile internet connections... for dual mobile internet connections, I haven't done that, but I have used VPNs over mobile hotspots extensively. There is just no way to get low latency, even over multiple mobile links. The main problem is that the bandwidth capabilities of the links fluctuate all the time, and if you try to dup the packets you will end up overloading one or the other link randomly as time progresses, because the TCP protocol will get acks from the other link and thus not back off as much as it should. An overloaded mobile link will drop out, POOF. Dead for a while.

For VPN over mobile links, the key is to NOT run the VPN on the mobile devices themselves. Instead, run it on a computer (laptop etc) that is connected to the mobile devices. Then use a standard link aggregation protocol with a ~1 second ping and a ~10 second timeout. You will not necessarily get better latency but it should solve the dropout problem... it will glitch for a few seconds when it fails over but the tcp connections will not be lost.
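Those ping/timeout numbers map directly onto, for example, OpenVPN's `keepalive` directive (my choice of VPN software, not necessarily the author's). A client-config fragment, not a complete working setup:

```
# OpenVPN client fragment (sketch): ping the peer every 1 second and
# restart the tunnel if nothing is heard for 10 seconds, so a dead
# mobile link fails over without losing the tunneled TCP connections.
keepalive 1 10
persist-tun      # keep the tun device (and its IP) across restarts
persist-key
```

Because the tunneled connections are bound to the VPN's IP, a restart over the surviving link glitches for a few seconds but does not reset them.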

-Matt

about two weeks ago

Ask Slashdot: VPN Setup To Improve Latency Over Multiple Connections?

m.dillon It can be done but... (174 comments)

I run a dual VPN link over two telcos (Comcast and U-Verse in my case), between my home and a colo. I don't try to repeat the traffic on both links, however, because they have different bandwidth capabilities and it just doesn't work well if the line becomes saturated. Instead I use PF and FAIRQ in both directions to remove packet backlogs at border routers in both directions, and to ensure that priority traffic gets priority. Either an aggregation-with-failover or a straight failover configuration works the best (the TCP connection isn't lost since it's a VPN'd IP). That way if you lose one link, the other will take over within a few seconds.

The most important feature of using a VPN to a nearby colo is being able to prioritize and control the bandwidth in BOTH directions. Typically you want to reserve at least 10% for pure TCP acks in the reverse direction, and explicitly limit the bandwidth to just below the telco's capabilities to avoid backlogging packets on either your outgoing cablemodem/u-verse/etc router or on the telco's incoming router (which you have no control over without a VPN). Then use fair queueing or some other mechanism to ensure that bulk connections (such as streaming movies) do not interfere with the latency for other connections.
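A pf.conf sketch of that shaping, written from memory against FreeBSD's ALTQ `fairq` scheduler; the interface name and rates are assumptions, and DragonFly's pf differs in details, so treat this as the shape of the config rather than something to copy-paste:

```
# Cap outbound just under the uplink's real capacity so the queue forms
# here, where we control it, not in the telco's router.
altq on em0 fairq bandwidth 9Mb queue { q_ack, q_bulk }
queue q_ack  bandwidth 10% priority 7 fairq        # reserved for pure TCP acks
queue q_bulk bandwidth 90% priority 1 fairq(default)

# Small packets (acks) go to the priority queue, everything else to bulk.
pass out on em0 proto tcp from any to any queue (q_bulk, q_ack)
```

The mirror-image rules run at the colo end to control the download direction, which is the half you can't shape at all without the VPN.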

In any case, what you want to do just won't work in real life when you are talking about two different telco links. I've tried it with TCP (just dup'ing the traffic), and it doesn't improve anything. The reason is that one of the two is going to have far superior latency over the other. If you are talking Comcast cable vs U-Verse, for example, the Comcast cable will almost certainly have half the latency of the U-Verse. If you are talking about Comcast vs Verizon FIOS, then it is a toss-up. But one will win, and not just some of the time... 95% of the time. So you might as well route your game traffic over the one that wins.

-Matt

about two weeks ago

First Shellshock Botnet Attacking Akamai, US DoD Networks

m.dillon Bash needs to remove env-based procedure passing (236 comments)

It's that simple. Even with the patches, bash is still running the contents of environment variables through its general command parser in order to parse the procedure. That's ridiculously dangerous... the command parser was never designed to be secure in that fashion. The parsing of env variables through the command parser, whether to pass sh procedures OR FOR ANY OTHER REASON, should be removed from bash outright. Period. End of story. Someone light a fire under the authors. It was stupid to use env variables for exec-crossing parameters in the first place. No other shell that I know of does it.
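The widely circulated Shellshock probe makes the point concisely: put a function definition plus a trailing command in an environment variable and see what bash does with it on startup.

```shell
# A vulnerable bash imports x as a function AND executes the trailing
# "echo vulnerable" while merely parsing the variable at startup.
# A patched bash prints only "test" -- but it is still feeding the
# variable through its command parser to decide that.
env x='() { :;}; echo vulnerable' bash -c 'echo test'
```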

This is a major attack vector against linux. BSD systems tend to use bash only as an add-on, but even BSD systems could wind up being vulnerable due to third party internet-facing utilities / packages which hard-code the use of bash.

-Matt

about a month ago

Apple Will No Longer Unlock Most iPhones, iPads For Police

m.dillon Re:"unlike competitors" ??? (504 comments)

It's built into Android as well, typically accessible from the Setup/Security & Screen Lock menu. However, it is not the default in Android, the boot-up sequence is a bit hokey when you turn it on, it really slows down access to the underlying storage, and the keys aren't stored securely. Also, most telcos load crapware onto your Android phone that cannot be removed, and that often includes backing up to the telco or phone vendor... and those backups are not even remotely secure.

On Apple devices the encryption keys are stored on a secure chip, the encryption is non-optional, and telcos can't insert crapware onto the device to de-secure it.

The only issue with Apple devices is that if you use iCloud backups, the iCloud backup is accessible to Apple with a warrant. They could fix that too, and probably will at some point. Apple also usually closes security holes relatively quickly, which is why the credit card companies and banks prefer that you use an iOS device for commerce.

-Matt

about a month ago

Comcast Allegedly Asking Customers to Stop Using Tor

m.dillon VPN is the only way to go, for those who care (418 comments)

I read somewhere that not only was Comcast doing their hotspot crap, but that they will also be doing javascript injection to insert ads on anyone browsing the web through it.

Obviously Comcast is sifting whatever data goes to/from their customers, not just for 'bots' but also for commercial and data broker value. Even this relatively passive activity is intolerable to me.

Does anyone even trust their DNS?

Frankly, these reported 'Tor' issues are just the tip of the iceberg, and not even all that interesting in terms of what customers should be up in arms about. It is far more likely to be related to abusing bandwidth (a legitimate concern for Comcast) than to actually running Tor.

People should be screaming about the level of monitoring that is clearly happening. But I guess consumers are mostly too stupid to understand just how badly their privacy is being trampled.

There is a solution. Run a VPN. If Comcast complains, cut the T.V. service and change to the business internet service (which actually costs less).

-Matt

about a month ago

Facebook Seeks Devs To Make Linux Network Stack As Good As FreeBSD's

m.dillon High perf SMP coding is in a category of its own (195 comments)

Designing algorithms that play well in an SMP environment under heavy loads is not easy. It isn't just a matter of locking within the protocol stack... contention between cpus can get completely out of control even from small 6-instruction locking windows. And it isn't just the TCP stack which needs to be contention-free. The *entire* packet path, from the hardware all the way through to the system calls made by userland, has to be contention-free. Plus the scheduler has to be able to optimize the data flow to reduce unnecessary cache mastership changes.

It's fun, but so many kernel subsystems are involved that it takes a very long time to get it right. And there are only a handful of kernel programmers in the entire world capable of doing it.

-Matt

about 2 months ago

NSA Considers Linux Journal Readers, Tor (And Linux?) Users "Extremists"

m.dillon That's nothing (361 comments)

In the 80's it was well known that the CIA was monitoring the USENET. Apparently there was a list of keywords that they searched for that became well known, so we used them all the time. We had it on good authority that the CIA had become amused by our antics. It probably relieved the boredom.

-Matt

about 4 months ago

Researchers Claim Wind Turbine Energy Payback In Less Than a Year

m.dillon Stupid argument (441 comments)

It's hilarious watching people argue over a topic that has already been shown to be a non-issue. The EIA (US) and German statistics show that, in aggregate, wind-energy sources produce a relatively steady amount of power. Individual turbines and even whole wind farms might not be deterministic, but all the wind farms taken together... are.

-Matt

about 4 months ago

Improperly Anonymized Logs Reveal Details of NYC Cab Trips

m.dillon Re: Data Security Officer (192 comments)

Except you can decode the salt trivially if you took a cab ride that happens to be in the data set and you recorded the license and medallion number. At which point the salt is useless.
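A sketch of that attack, under the hypothetical assumption that the salt is a short numeric prefix and that you hold one known medallion/hash pair from your own ride. The "published" hash is synthesized here so the script is self-contained; every name and value is made up:

```shell
medallion="9Y64"        # hypothetical: you recorded this from your own cab
secret_salt="1234"      # unknown to the attacker; used here only to
                        # synthesize the dataset's published hash
published=$(printf '%s%s' "$secret_salt" "$medallion" | md5sum | awk '{print $1}')

# Brute force: a 4-digit salt space is only 10,000 md5 computations.
for salt in $(seq 0 9999); do
    h=$(printf '%s%s' "$salt" "$medallion" | md5sum | awk '{print $1}')
    if [ "$h" = "$published" ]; then
        echo "salt recovered: $salt"
        break
    fi
done
```

Once the salt is known, every hashed medallion in the dataset can be inverted the same way, since the medallion keyspace itself is tiny.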

-Matt

about 4 months ago

Endurance Experiment Writes One Petabyte To Six Consumer SSDs

m.dillon Re:And the winners are... (164 comments)

And... that's it? What did SMART say? Did you actually wear the SSDs out as-per the wear indicator? Or did you hit a bug in the samsung controller before the wear-indicator maxed out?

To be fair, the precise situation you describe, particularly if you did not retune the RAID-6 setup or the mysql server, and if the server was fsync()ing on every transaction (instead of e.g. syncing on a fixed time-frame as postgres can be programmed to do)... that could result in el-cheapo Samsungs not being able to do any write-combining and cause a 256:1 write-amplification of the data.

With proper tuning the write-amplification could easily be reduced to 4:1, and you would probably be able to run the server with SSDs. Maybe use Intel or Crucial though, and not Samsung. It isn't just the controller that matters... just using stock firmware doesn't really net you a good, robust SSD, and there aren't many real vendors who work on the firmware versus just OEMing whatever was supplied with the controller. Intel is probably one of the better ones. They actually fix bugs, as does Crucial. Samsung... I dunno.
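The 256:1 and 4:1 figures work out from assumed (not measured) numbers: a ~1 KiB row fsync()'d individually against a 256 KiB flash erase block rewrites the whole block per row, while batching ~64 rows per flush divides that by 64:

```shell
row_bytes=1024                 # assumed database row size
block_bytes=$((256 * 1024))    # assumed flash erase-block size

# One fsync per row: the SSD must rewrite a whole erase block per 1 KiB row.
echo "per-row fsync:  $((block_bytes / row_bytes)):1"

# Batching ~64 rows per flush lets the controller write-combine them.
echo "64-row batches: $((block_bytes / (row_bytes * 64))):1"
```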

-Matt

about 4 months ago

Endurance Experiment Writes One Petabyte To Six Consumer SSDs

m.dillon Re:IO pattern (164 comments)

Yes, but it's a well-known problem. Pretty much the only thing that will write inefficiently to a SSD (i.e. cause a huge amount of write amplification) is going to be a database whose records are updated (effectively) randomly. And that's pretty much it. Nearly all other access patterns through a modern filesystem will be relatively SSD-efficient (keyword: modern filesystem).

In the past, various issues could cause excessive write amplification: filesystems in partitions that weren't 4K-aligned, filesystems using too small a block size, less efficient write-combining algorithms in earlier SSD firmwares. All of those issues, on a modern system, have basically been solved.
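The 4K-alignment check is just divisibility: with 512-byte sectors, a partition is 4 KiB-aligned when its start sector is a multiple of 8. The start sectors below are examples (`fdisk -l` shows your real ones):

```shell
# 2048 is the modern partitioning default (a 1 MiB offset); 63 is the old
# DOS/CHS default that caused misaligned writes on 4 KiB-page SSDs.
for start in 2048 63; do
    if [ $((start % 8)) -eq 0 ]; then
        echo "start sector $start: aligned"
    else
        echo "start sector $start: MISALIGNED"
    fi
done
```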

-Matt

about 4 months ago

Endurance Experiment Writes One Petabyte To Six Consumer SSDs

m.dillon All still going (164 comments)

I have around 30, ranging from 40G to 512G, and all of them are still intact, including the original Intel 40G SSDs I bought way back at the beginning of the SSD era. Nominal linux/bsd use cases, workstation-level paging, some modest-but-well-managed SSD-as-a-HDD-cache use cases. So far the wear-out rate is far lower than originally anticipated.

I'm not surprised that some people complain about wear-out problems; it depends heavily on the environment and use cases, and heavy users who are not cognizant of how they are using their SSDs could easily get into trouble.

For the typical consumer however, the SSD will easily outlast the machine. Even for a pro-sumer doing heavy video editing. Which, strangely enough, means that fewer PCs get sold because many consumers use failed or failing HDDs as an excuse to buy a new machine, and that is no longer the case if a SSD has been stuffed into it.

A more pertinent question is what the unpowered shelf life of a typical SSD is. I don't know of anyone who's done good tests (storing a SSD unpowered in a hot area to simulate a longer shelf time). Flash has historically been rated for 10 years of data retention, but as the technology gets better it should presumably be possible to retrieve the data after a long period from a freshly written (only a few erase cycles) SSD. HDDs which have been operational for a while have horrible unpowered shelf lives... a bit unclear why, but any HDD I've put on the shelf for 6-12 months and then put back into a machine will typically spin up, but then fail within a few months after that.

-Matt

about 4 months ago

New Permission System Could Make Android Much Less Secure

m.dillon A little surprised. (249 comments)

Google must know by now what a bad light its broken permission system casts on Android. I can't run half the android apps I want to run on any of my Android devices any more because of the permissions they want. And a lot of the ones that I intentionally do not upgrade no longer work. It's making my three android devices useless and almost worthless.

I'm flabbergasted that there are full-on idiots in the Google command chain who are unwilling to address such a severe and obvious problem. Truly flabbergasted. Has Google gone insane?

I've already stated this, but I will again... when the iPhone 6 comes out, I'll be moving over to it from my perfectly working but horribly insecure Motorola Razr. At least then I can browse my facebook account from my phone without it sucking up all the stuff I've tried so hard to keep partitioned off of it. As it stands now, I can't even run customized UIs on my Android, because the g*d* program insists on advertising on my notifications screen even though I bought the paid-for version.

At least with iOS I don't have to worry about all this in-the-face crap ruining the experience.

-Matt

about 4 months ago

Submissions

m.dillon hasn't submitted any stories.

Journals

m.dillon has no journal entries.
