
Comments


Harvard Scientists Say It's Time To Start Thinking About Engineering the Climate

dnavid Re:Global warming is bunk anyway. (310 comments)

We shouldn't be fooling around like this. It's obvious we don't understand, or are too corrupt and greedy to admit, that there's no problem.

It's ironic that one of the potential benefits of geoengineering research is that it will force many climate change deniers to admit that it's possible for human activity to have major deleterious effects on Earth's climate.

2 days ago

Rooftop Solar Could Reach Price Parity In the US By 2016

dnavid Re:They WILL FIght Back (495 comments)

Yes, many US states require free net metering and power resale. It's the law, so utilities have to do it. But all you're doing at the time being is transferring the solar-generators' share of the infrastructure costs onto the non-solar-generators share. So when you report that these people can "break even", is that really a fair comparison?

It is true that net-metering customers are often using infrastructure they are effectively not paying for, or not paying the true cost of, when their net-metering contracts allow them to offset generation and usage one for one. However, it's also true that the statement "solar could reach price parity" can hold even granting that fact, for two reasons: electric utilities overstate the costs of the infrastructure, and the per-customer costs of that infrastructure rise as customers defect from the grid.

In Hawaii, where I live, the electric utility recently proposed a plan to deal with the high growth rate of residential solar, which it had been delaying on the grounds that the grid was unprepared for the volume (likely a mostly true claim, I concede). Under the proposed cost structure, a customer currently paying about $200/month (what their documents stated was the residential average), who could with solar reduce that bill to close to $15/month today, would instead pay about $150/month, and that is for someone who deployed a net-zero solar system. The difference comes from the proposal's $55/month infrastructure fee and from crediting customer-generated power at only about half the rate charged for the power the customer uses.
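To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch. Only the $55/month fee and the roughly half-rate export credit come from the proposal as described above; the retail rate and the implied usage are my own illustrative assumptions.

```python
# Rough sketch of a net-zero solar customer under the proposed structure.
# retail_rate is an assumed figure chosen so the pre-solar bill lands near
# the ~$200/month residential average; it is not from the utility's filing.
retail_rate = 0.35                 # $/kWh, assumed
export_credit = retail_rate / 2    # generation credited at ~half the retail rate
fixed_fee = 55.0                   # $/month infrastructure fee in the proposal

usage_kwh = 200 / retail_rate      # ~570 kWh/month implied by a ~$200 bill
generation_kwh = usage_kwh         # net-zero system: generation equals usage

old_bill = usage_kwh * retail_rate
new_bill = fixed_fee + usage_kwh * retail_rate - generation_kwh * export_credit
print(f"today: ${old_bill:.0f}/mo, under the proposal: ${new_bill:.0f}/mo")
# -> today: $200/mo, under the proposal: $155/mo (close to the ~$150 above)
```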

Given those numbers, it's entirely possible that the continued drop in costs for residential solar could bring the cost of a completely off-grid system of batteries and an emergency generator (for stretches of low sunlight) down to a point where it becomes competitive with that $1800/yr of "infrastructure" costs. Moreover, even if the costs don't reach full parity, if they just get close it's possible that enough customers could choose to defect off the grid to make the per-customer cost of maintaining the grid even higher, since those costs are largely fixed and don't go down when there are fewer customers.

Falling costs for completely off-grid solar systems, combined with rising per-customer utility costs driven by defections, could still meet in the middle, with solar reaching price parity even for systems that don't need a utility connection. Maybe not by 2016, but I wouldn't bet a lot of money against it either.

The problem with the "solar customers are not paying their fair share of the grid" argument is not that it is untrue: it's that the best way to resolve that problem in the long run, from the customer's side, will be for solar customers to defect off the grid entirely. If electric utilities continue to pursue the strategy of making solar customers "pay their fair share," eventually, and it may take time, the technology will reach the point where that fair share is zero, because those customers will stop using the grid. Once the technology reaches that point, it will be too late for the electric utilities to do anything except watch customers leave.

They should be working now to find a relationship between the utility and residential solar customers that isn't adversarial. In the short run the utilities have a lot of power over the situation, but in the long run they have very little. They should use that power while they can to create a future in which their customers still need them. There are lots of ways to do that. Demanding solar customers pay huge amounts for the privilege of being utility customers is not one of them.

3 days ago

Launching 2015: a New Certificate Authority To Encrypt the Entire Web

dnavid Re:Replace Cisco, and Akamai and then maybe.. (202 comments)

Replace Cisco, and Akamai and then maybe I'll be convinced it's better than the current situation. But it's still oxymoronic service: A central authority that *REQUIRES* trust for people who don't trust anybody.

First, if you don't trust Cisco and Akamai to that extent, how do you intend to avoid transporting any of your data on networks that use any of their hardware or software?

Second, I think a lot of people really have no idea how SSL/TLS actually works. There are two forms of trust involved with SSL certificate authorities. The first involves server operators: they have to trust that CAs behave reasonably when it comes to protecting the process of acquiring certs for a domain name. But that trust has nothing to do with actually using the service. Whether you use a particular CA or not, you have to trust that *all* trusted CAs behave accordingly. If Let's Encrypt, or Godaddy, or Network Solutions is compromised or acts maliciously, they can generate domain certs that masquerade as you whether you use them or not. As a web server operator, if you don't trust Let's Encrypt, not using their service does nothing to improve the situation, because they can issue certs on your behalf regardless - and so can Mozilla, so can Microsoft, so can Godaddy.

The real trust is actually on the end-user side: end users, or rather their browsers, trust CAs based on which signing certs are in their trust stores. It's really end users who have to decide whether they trust a server and a server identity or not, and the SSL cert system is designed to assist them, not server operators, in making a reasonable decision. If you, as an end user, decide not to trust Let's Encrypt, you can remove their signing cert from your browser's trust store. Your browser will then no longer trust Let's Encrypt certs and will generate warnings when communicating with any site using them, and you as the end user can decide what to do next, including deciding not to connect at all.
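You can see that the trust decision lives in the client by looking at how TLS libraries are configured. As a minimal sketch (nothing specific to Let's Encrypt; the CA bundle path is hypothetical), a Python client trusts exactly the CAs it loads into its own context, and a server cert chaining to any CA outside that bundle fails verification:

```python
import socket
import ssl

# The client decides which CAs to trust by what it loads into its context.
# "my_trusted_cas.pem" is a hypothetical bundle: leave a CA out of it and
# certs issued by that CA are simply not trusted by *this* client.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations(cafile="my_trusted_cas.pem")

with socket.create_connection(("example.org", 443)) as sock:
    try:
        with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
            print("trusted:", tls.getpeercert()["subject"])
    except ssl.SSLCertVerificationError as err:
        # Raised when the server's chain doesn't lead to a CA in our bundle.
        print("not trusted by this client:", err)
```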

Seeing as the point of the service is to improve the adoption of TLS for sites that don't currently use it, refusing to trust a Let's Encrypt-protected website that was going pure cleartext last week seems totally nonsensical to me, unless you don't trust plain HTTP sites either and refuse to connect to anything that doesn't support HTTPS.

Lastly, if you literally don't trust anybody, I don't know how you could use the internet in any form in the first place. You have to place a certain level of trust in the equipment manufacturers, the software writers, and the transport networks. If they act maliciously, you can't trust anything you send or do.

I don't necessarily trust the Let's Encrypt people enough to believe they will operate the system perfectly, and I don't believe they are absolutely immune from compromise. But I do think the motives of people adding encryption to things currently not encrypted at all are likely to be reasonable, because no malicious actor would be trying to make it easier to encrypt sites that have lagged, and would otherwise continue to lag, behind adopting any protection at all. Even if they were capable of compromising the system, that would be quixotic at best: in the best-case scenario they would be making things a lot harder for themselves, and in the long run getting people accustomed to using encryption with a system like this can only accelerate the adoption of even stronger encryption down the road.

4 days ago

Interviews: Warren Ellis Answers Your Questions

dnavid Re:Who is this guy (15 comments)

Notable works? Most popular recognizable work? I'm not gonna go googling so do your fucking job editors

If you have no idea who Warren Ellis is and have no intention of actually doing any research, what possible benefit would a list of his works be toward appreciating or having any context for the interview?

In any case, three of his best-known works are actually *mentioned* in the interview; namely Planetary, Transmetropolitan, and his run on Hellblazer. I'm also fond of his work on The Authority, which is also very well known. I also know he scripted the animated Justice League Unlimited episode "Dark Heart", the one about the nanotech weapon that landed on Earth.

And jeez, learn to use the rest of the internet, or unlearn the little random bits.

5 days ago

Ask Slashdot: Can You Say Something Nice About Systemd?

dnavid Re:Reliable servers don't just crash (928 comments)

Actually that statement is 100% correct since the definition of a reliable server is one that does not crash.

It's trivially easy to make server software that doesn't crash: just send all exceptions into an infinite loop. Not crashing is a common prerequisite, but far from a sufficient requirement, for a reliable server. In fact, for some software like filesystems and databases, crashing is almost irrelevant to reliability: crashing to prevent data corruption is exactly what reliable systems do in some contexts.

"Reliability" is when the software does what it is designed to do. That can include protective crash dumping. "Availability" is when the software is always running. Well designed and implemented software is both reliable and available, which is another way of saying its always running, and always running correctly.

about three weeks ago

Tim Cook: "I'm Proud To Be Gay"

dnavid Re:Gay? (764 comments)

I don't see why it should be a reason to be "proud". Gay is the way he is rather than something he has chosen but it does not confer some form of superiority on him.

I have no idea why people keep saying this, as if the only valid reasons to express pride are to claim superiority or to request acknowledgement of accomplishment. The way I use the word, and the way the dictionary defines it, allows pride to express positive feelings of association, and to express self-esteem, particularly in contrast with the expectation of the opposite. For example, during and immediately following World War II, many Japanese Americans expressed pride in being Japanese Americans. Some served in the military during the war and were proud of their accomplishments, but others did not and were only expressing pride in being associated with a demographic that was often denigrated but did noteworthy things. I don't recall hearing people ask why a Japanese American would express pride in being Japanese when that was not their choice: the reason for making the statement was obvious at the time, just as the reasons for declaring pride in being gay are obvious today, except to people being deliberately obtuse.

about three weeks ago

Making Best Use of Data Center Space: Density Vs. Isolation

dnavid Re:Blades (56 comments)

The SAN is usually less of a single point of failure because they usually have a lot of redundancy built-in, redundant storage processors, multiple backplanes, etc. You're right that off-site replication is still important, but usually more for whole site loss than storage loss.

People assume the biggest source of SAN failures is hardware failure, and believe hardware redundancy makes SANs less likely to fail. In my experience, that's false. The biggest source of SAN failures is problems, usually human-induced, coming from outside the array. Plug in the wrong FC card with the wrong firmware, knock out the switching layer. Upgrade a controller incorrectly, bring down the SAN. Perform maintenance incorrectly, wipe the array. SANs go down all the time, and often for very difficult-to-predict reasons. I saw a SAN that no one had made any hardware or software changes to in months suddenly crap out when the network connecting it to its replication partner began flapping, which noticeably affected no one except the SAN, which decided a world without reliable replication was not worth living in and committed suicide by blowing away half its LUNs.

Keep in mind that the last time I saw a hard drive die and take out a RAID array was so long ago I can't remember it. However, the last time I saw a RAID *controller* take out a RAID array, and blow away the data on the array, was only a couple of years ago. It's important to understand where the failure points in a system are, particularly when it comes to storage. These days they often are not where most people are trained to look, and unless you are experienced with larger-scale storage, you're not trained to look where the problems tend to be.

Storage fails. Any vendor that tells you they sell one of something that doesn't fail is lying through their teeth. Anything you have one of, you will one day have to deal with having zero of, no matter how "reliable" the parts in that one thing are. You should plan accordingly.

about a month ago

Ask Slashdot: Remote Support For Disconnected, Computer-Illiterate Relatives

dnavid Re:You could lock down Windows (334 comments)

For the purposes of the discussion, I'm assuming they are on Windows 7. If they aren't on Windows 7, they need to get there, at least. If they are still on XP that just sucks because a lot of the below stuff isn't there.

Something to look at which works for both Windows XP and Windows 7 is software restriction policies, a form of whitelisting built into Windows. With Windows 7 Enterprise or Ultimate editions, you can also use AppLocker, which is a more sophisticated version of software restriction policies. I'm not an expert on SRP or AppLocker, but I believe both can be used to lock down a desktop and prevent users from running, or somehow causing to run, any executables except the ones you whitelist. That won't prevent all possible malware from infecting the system, but between that and Malwarebytes I think you'd have significant protection for this specific use case, and you wouldn't have to retrain the users to switch from Windows to a Linux desktop.

about 2 months ago

Why Is It Taking So Long To Secure Internet Routing?

dnavid Re:It's a production system (85 comments)

The internet is in production. No one wants to touch anything that's already in production unless they literally can't make it any worse. Otherwise we would have IPv6 as well.

Lots of people want to touch production systems. In the case of the internet and BGP, however, evolution has weeded out the people who like to touch production systems, and the only people with administrative rights are still getting over having to support 32-bit AS numbers and wondering where their pet dinosaur went.

about 2 months ago

New Details About NSA's Exhaustive Search of Edward Snowden's Emails

dnavid Re:Again? (200 comments)

Organized crime had "NDAs" as well. The agreement is worth the organization you're agreeing with.

The word of someone who believes they can break their word whenever the people they gave it to are not worthy of it is completely valueless at all times.

about 2 months ago

The State of ZFS On Linux

dnavid Re: Magic (370 comments)

I was just reading up on Ceph a bit. One thing that does have me concerned is that it does not appear to do any kind of content checksumming. Of course, if you store the underlying data on btrfs or zfs you'll benefit from that checksumming at the level of a single storage node. However, if for whatever reason one node in a cluster decides to store something different than all the other nodes in a cluster before handing it over to the filesystem, then you're going to have inconsistency.

The problem you're describing is one that neither ZFS nor btrfs can handle either. Both checksum data on disk, but both are vulnerable to errors that occur anywhere else in the write path, from the network clients through the OS. That's why ECC or otherwise fault-tolerant memory is explicitly recommended for enterprise ZFS servers; a bit flip in memory is impossible for ZFS to detect or correct in most cases.
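A toy way to see why on-disk checksums can't help here: if the data is corrupted in memory before the checksum is computed, the filesystem faithfully checksums, and will later happily verify, the already-corrupted bytes. A minimal sketch (sha256 is just a stand-in for whatever checksum the filesystem uses):

```python
import hashlib

payload = bytearray(b"important application data")

# Bit flip in RAM *before* the filesystem ever sees the buffer --
# the kind of fault ECC memory is meant to catch.
payload[3] ^= 0x01

# The filesystem checksums whatever it is handed, so the corrupted
# buffer verifies perfectly on every later read or scrub.
stored_checksum = hashlib.sha256(payload).hexdigest()
assert hashlib.sha256(bytes(payload)).hexdigest() == stored_checksum
print("scrub reports no errors:", stored_checksum[:16], "...")
```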

about 2 months ago

The State of ZFS On Linux

dnavid Re: Magic (370 comments)

Sure, but even in a mirrored btrfs configuration you don't have to add drives in pairs. Btrfs doesn't do mirroring at the drive level - it does it at the chunk level. So, chunk A might be mirrored across drives 1 and 2, and chunk B might be mirrored across drives 2 and 3. For the most part you can add a single n GB drive at a time and expand your usable storage capacity by n/2 GB. You don't have to rebalance anything when you add a new drive - it will just be used for new chunks in that case. However, in most cases you'll want to force a rebalance.

That's a good thing and a bad thing. It's what allows btrfs to get n+1 redundancy per data chunk on odd numbers of drives, or in arrays where the number of drives changes, because the mirroring isn't geometry-specific. But with disk-mirrored vdevs you can lose either half of the mirror for any vdev with no data loss. In other words, with four drives organized as two sets of mirrors, I can lose any two drives as long as they are not both members of the same mirror. With chunk-based mirroring, once you lose a single drive you can't be certain the next drive failure anywhere won't fail the array, unless you know exactly how the chunks are mirrored. That's not what people generally expect when they hear "mirroring", and that mismatch can cause problems in maintaining arrays. Honestly, if ZFS could do that kind of mirroring with metaslabs, I would personally turn it off.
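To make the difference concrete, here is a small sketch, with toy numbers and a toy allocator rather than a model of either filesystem's real behavior, enumerating which two-drive failures lose data on four drives under fixed mirror pairs versus chunks mirrored across arbitrary drive pairs:

```python
from itertools import combinations
import random

drives = [0, 1, 2, 3]

# ZFS-style fixed geometry: two mirrored vdevs, (0,1) and (2,3).
paired_mirrors = [{0, 1}, {2, 3}]

# Chunk-style mirroring: each chunk lands on some arbitrary pair of drives.
random.seed(1)
chunks = [set(random.sample(drives, 2)) for _ in range(50)]

def data_lost(mirror_sets, failed):
    # Data is lost if some mirror set lies entirely within the failed drives.
    return any(m <= failed for m in mirror_sets)

for name, mirror_sets in (("paired mirrors", paired_mirrors),
                          ("chunk mirroring", chunks)):
    fatal = [f for f in combinations(drives, 2) if data_lost(mirror_sets, set(f))]
    print(f"{name}: {len(fatal)} of 6 two-drive failures lose data")
# Paired mirrors: only 2 of the 6 possible two-drive failures are fatal.
# Chunk mirroring: with enough chunks, essentially every pair is fatal.
```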

If I were going to look at more advanced device redundancy and management, I would probably jump past btrfs and go to Ceph. There I can not only add storage devices however I want, I can also scale the number of storage servers any way I want. It's still a developing system, but then again so is btrfs.

about 2 months ago

The State of ZFS On Linux

dnavid Re: Magic (370 comments)

Agree - I phrased my original question poorly. My point was that raidz was not as flexible as the roadmap raid5 support for btrfs (which behaves like raidz, not like raid5 in zfs). I'm interested in being able to add/remove individual drives to a parity array.

Although people ask for it often, I'm unaware of that feature being on anyone's implementation roadmap. Part of the problem, I think, is that there's a difference between supporting a feature and having that feature be practical to use. It's unclear to me how useful btrfs rebalance is in RAID5/6 arrays with large drives; it could impact performance enough to make it annoying to use in the general case. That's one of the reasons even hardware RAID5/6 is not used much in servers with high performance requirements: the cost of rebuilding makes recovering from a drive failure prohibitive.
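For a rough sense of why rebuilds hurt at modern drive sizes, here is a back-of-the-envelope sketch; both the drive size and the sustained rebuild rate are assumptions for illustration, and real arrays are often slower because the rebuild competes with production I/O:

```python
# Time one failed drive leaves the array exposed while rebuilding.
drive_tb = 4                 # assumed drive size, TB
rebuild_mb_per_s = 100       # assumed sustained rebuild throughput
hours = drive_tb * 1e12 / (rebuild_mb_per_s * 1e6) / 3600
print(f"~{hours:.0f} hours of degraded redundancy per rebuild")  # ~11 hours
```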

Even for a home array, I'm more inclined to use mirroring than parity RAID, and ZFS does allow you to add mirror pairs to pools composed of mirrored vdevs. In other words, you can make a pool with four drives set up as two pairs of mirrored vdevs, then later add drives in mirrored pairs and dynamically resize the entire array across the new drives. That's not what you're looking for, but it's what ZFS users typically do instead, which is why the pressure to support dynamically adding disks to RAIDZ vdevs is not as high as you might expect.

about 2 months ago

The Documents From Google's First DMV Test In Nevada

dnavid Re:Who would have thought (194 comments)

Not quite. Chernobyl was caused by operators not understanding the reactor's function in low power situations which happened to be during a test to see if the cooling system would work in the time after the core shut down but before the diesel generators were back up. They brought the power levels too low to where the reactor was nearly shut down, then to raise it back up they brought the control rods all the way out creating hot spots. Then when the power came on too strong, the response was to lower the control rods, which in a very hot reactor actually does the opposite- causes a huge reaction.

That's true in broad strokes, but you're overlooking key details. First, the automatic cooling systems were shut off. Second, when the power dropped to very low levels, the operators, instead of allowing the reactor to bring power back up normally, disabled the control rod systems designed to ensure the reactor couldn't "bounce critical" when they withdrew control rods to raise power. And finally, when the test did not go as planned, and after several previous failures, the operators went off-script and began overriding other safety features, such as the coolant override systems, in an attempt to drive the reactor to the desired test parameters.

You're not correct about the behavior of the control rods. The problem was a design flaw in the rods themselves: their graphite tips displace cooling water, which in an RBMK acts as a net neutron absorber, as the rods are inserted, so there is a short lag during which an inserted rod actually increases the reaction in the lower core before its absorbing section arrives. This always happens, not just in "hot" reactors, but in a normally functioning reactor it is not a problem because the momentary increase in reaction is not significant. All of the operators' previous actions, however, had put the reactor into an extremely unstable state. In particular, had they not disabled the automatic control rod safety systems, the reactor would have automatically moderated itself out of the unstable situation. But because they had essentially taken control away from the systems managing most of the control rods, the reactor could not effectively stop the problem the operators were introducing.

System logs show the operators did not simply try to lower the control rods back into the reactor to slow it down; they attempted an emergency scram, which drops all the control rods into the reactor at once. They would only have done that in response to an emergency condition, which means that by the time they did what you are pointing to as the cause of the accident, they were already in the middle of an accident. What they did not know was that they had already put the reactor into a condition beyond the ability of a scram to fix.

about 2 months ago

The State of ZFS On Linux

dnavid Re: Magic (370 comments)

Do you have a link. The last time I looked into this, you could not add a disk to a raid-z. You could add disks to a zpool, or add another raid-z to a zpool. However, a raid-z was basically immutable. This is in contrast to mdadm where you can add/remove individual disks from a raid5.

Google seems to suggest that this has not changed, however I'd certainly be interested in whether this is the case. The last time I chatted with somebody who was using ZFS in a big way they indicated that this was a limitation. He was using it for very large storage systems, and I could see how many of the ZFS features made it much more appropriate in these kinds of situations, especially with things like write intent log on seperate media, having many independent storage units which are individually redundant but otherwise behaving like a big array of disks (which helps to distribute IO which reduces some of the penalties with RAID), etc. I'm more familiar with btrfs and it seems to be evolving more towards being an ext4 replacement, where smaller arrays are the norm, etc. That isn't to say that many of the features on either aren't potentially useful for both.

MightyYar's process isn't adding a disk to a RAID-Z; it's addressing your original question of how to replace 1TB drives with 3TB drives. His process uses an external USB drive to kickstart things: add the USB drive, tell ZFS to logically replace one of the older drives with the newer (bigger) USB drive, let it rebuild onto the USB drive, and once the old drive has been replaced in the array, remove that old drive, physically replace it with a new 3TB drive, and ask the array to rebuild again. You don't even need the USB drive; you could replace the disks in the array directly, but unless you are at RAIDZ2 or higher, for a long time during the process you would not have drive redundancy. MightyYar's process avoids ever running without at least n+1 redundancy in the array. Once all the original drives have been physically replaced with 3TB drives, you can ask ZFS to expand the array to use all the space.

about 2 months ago

The Documents From Google's First DMV Test In Nevada

dnavid Re:Who would have thought (194 comments)

Ofcourse it is not 100% ready for the real world. It does not mean it should not be deployed though.

We need the power they said, it will be fine they said, don't worry they said.

The citizens of Chernobyl

Interesting analogy, since the Chernobyl accident was not caused by the power plant's automated systems, but by human beings who overrode the safety systems designed to prevent exactly such an accident. Interestingly, the Three Mile Island accident occurred for essentially the same reason: humans prevented the automatic systems from functioning correctly and averting an accident.

about 2 months ago

Does Learning To Code Outweigh a Degree In Computer Science?

dnavid Re:Is Coding Computer Science? Of Course! (546 comments)

I'm assuming the vast majority of programming jobs require the ability to code, and no further domain specific knowledge. This is just based on my reading of many, many programming job listings over the years.

I'm sure there are jobs that require CS knowledge, just as I'm sure there are (programming-related) jobs that require Biology knowledge or Architecture knowledge or whatever. But all of those are niches: a very small subset of all programming jobs require those specific areas of knowledge. ALL programming jobs require coding though, and even among the ones that require domain-specific knoweldge, I'd imagine the bulk involve a lot more coding than anything else.

You don't need "domain specific knowledge" to code, but I think most programmers without it are subpar. Code is like writing: you only need to know English (or your native language) to write, but if that's all you know you're not going to be a particularly useful writer. Code implements algorithms, algorithms solve problems, and knowledge of the problem space is not just valuable but often the difference between uninteresting scribbles and a best-selling novel.

A lot of the time, code supports other code; it's code designed explicitly to address computer-system issues, and knowledge of how computer systems work is essential to being able to write, debug, or sanity-check reasonable code. Sometimes code directly tackles a non-computer problem, like code to analyze data in another field. It's not *mandatory* to understand that field, but it is extremely limiting for a developer to write code analyzing data about a subject they know nothing about. They will always need someone else to translate every little thing for them, and they will never be able to tell whether their code is actually doing something useful. If it goes awry, someone else will have to tell them.

You have to be an extremely stellar programmer to be worth it, if you don't understand what you're coding about.

about 3 months ago

Cause of Global Warming 'Hiatus' Found Deep In the Atlantic

dnavid Re:Every week there's a new explanation of the hia (465 comments)

The problem with climatologists is that they are climatologists; they are not sociologists, politicians, economists, or ethicists. Anybody who advocates following the advice of climatologists on climate change is either a charlatan or a liar.

The problem with sociologists, politicians, economists, and ethicists is that they know nothing about climate change. Therefore, anyone following *their* advice about climate change is an idiot. I guess we just throw darts at a board, because everyone qualified to understand the subject matter doesn't know how to use it, and everyone who knows how to use subject matter knowledge doesn't possess any of it. Given the choice, I'll go back to the world of lying charlatans, and you can go back to living in a cave, waiting for lightning to strike a tree and make the hot glowy thing.

about 3 months ago

Underground Experiment Confirms Fusion Powers the Sun

dnavid Re:Thought that was obvious... ? (141 comments)

Of course deductions carry scientific weight, but they don't serve as meaningful evidence and instead as the basis of a hypothesis.

Genuine logical deductions carry as much or more scientific weight than experiments. Logical deduction is part of the process of scientific analysis; it is in fact the glue that connects otherwise disconnected scientific theories. Without logical deduction, scientific theories would be disconnected semantic dust.

Without the rules of math and logic, you can't do scientific analysis. Experiments are the data; logic and math are the engine. It's logic that tells us that if the Earth is spherical it's not cubical. No one does experiments to prove the Earth is not every other possible shape. It's logic that tells us that if the Earth has one shape, it cannot have another; no experiments are necessary. No one has tried to experimentally confirm the Earth is not a dodecahedron, or a torus, because those are logical impossibilities.

You are probably confusing genuine deductive logic with what people sometimes call deductions but are actually inductions or "common sense." Those fail often, but they are not true logical deductions; "Holmesian deduction" is generally not real logical deduction. When you say science uses experiments to support a conclusion, on what basis do you declare that those experiments support anything? Why does seeing X support Y? Without logical deduction, you can't get from here to there. Experiments don't tell you that X supports Y; experiments generate the X, and logical deduction connects X to Y. In fact, two logical deductions underpin two of the foundations of science. If an assertion always leads to X, and an experiment demonstrates that X is false, then the original assertion cannot be entirely true: that's the principle of falsification. Conversely, if an assertion predicts a set of circumstances S, and S is distinct from the predictions of all other similar assertions, then as experiments confirm the elements of S, the probability that the original assertion is true increases with the size of S: that's the principle of confirmation. Try to do science without variations of those deductions.
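In symbols, as a standard textbook rendering rather than anything from the comment above, the two deductions look like this:

```latex
% Falsification (modus tollens): if H entails X and X is observed to be false,
% then H cannot be entirely true.
(H \Rightarrow X) \land \lnot X \;\vdash\; \lnot H

% Confirmation: if H entails each prediction X_1, ..., X_n, then confirming
% another X_k can never lower, and generally raises, the probability of H.
P(H \mid X_1, \ldots, X_k) \;\ge\; P(H \mid X_1, \ldots, X_{k-1})
```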

about 3 months ago

Underground Experiment Confirms Fusion Powers the Sun

dnavid Re:And there is the matter of (141 comments)

But, again, neutrino oscillation can't nullify these results, because oscillation only makes neutrinos harder to detect (by changing their "flavor"). It doesn't create neutrino signals where none originally existed (at least not in this sense).

Sure it can: By "oscillating" other flavors of neutrino into the type they're looking for, when they weren't there in the first place (or not in sufficient number).

They'll need to look at the ratio of the various types and back-calculate to eliminate other possible signals, or combinations of them, to see if there is a way for other (possibly unexpected) reactions to produce a signal that looks like the ones expected and/or observed.

Yes and no. Yes, it's possible for neutrino oscillation to take a neutrino of a different flavor than expected and oscillate it into the type you were looking for. But neutrino oscillation doesn't alter energy, and as a practical matter, I don't believe there exists a particle interaction that generates large numbers of muon or tau neutrinos at coincidentally the same energy as the proton-proton-generated electron neutrinos.
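For reference, the standard two-flavor vacuum oscillation probability (a textbook form, not anything specific to this experiment's analysis) makes the point: the flavor can change as a function of baseline L and energy E, but the energy of the detected neutrino is the energy it was produced with.

```latex
% Two-flavor vacuum oscillation: flavor-change probability depends on E,
% but oscillation does not change the neutrino's energy.
P_{\nu_e \to \nu_x}(L, E) = \sin^2(2\theta)\,
    \sin^2\!\left(\frac{\Delta m^2\, L}{4E}\right)
```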

about 3 months ago

