
Comments

top

Gamma-ray Bursts May Explain Fermi's Paradox

dnavid Re:WTF (236 comments)

We used to wonder what in the hell was making these ultra-bright quasars; now we believe that they are "active" galactic cores in the process of forming a supermassive black hole at their centers. It's possible that two such black holes might form and orbit their mutual center of gravity, but eventually they would merge. This merging is probably the source of the gamma ray bursts.

Most GRBs have signals that are inconsistent with that scenario because of the size of the black holes involved: basically, their signals are consistent with much smaller objects than galactic-core black holes.

The original theory, and one which still explains some GRBs, is gamma ray emission from two neutron stars merging. Binary stars are common, and in some cases both stars eventually become neutron stars. When their orbits decay, they can merge to form black holes and in the process convert a huge amount of mass into gamma ray energy. But the prevailing theory that best explains the majority of the rest is a special class of supernova that emits a huge amount of its energy in two narrow jets. When those jets happen to be pointed in our general direction, they appear to us as a GRB.

Some GRBs emit so much energy that for a while astronomers couldn't reconcile their energy output with the limits on their size: even total conversion of all the matter in an object of that size into energy seemed insufficient to generate the kind of energy a GRB produces. When it was discovered that supernovae can sometimes emit jets rather than exploding outward equally in all directions, that provided a way for something that small to appear to emit more energy than should be possible. Astronomers had been calculating the energy reaching us from GRBs and assuming the object sent that much energy in all directions. Astronomers now think we only see a small fraction of all GRBs that detonate, and most jet their energy in directions we can't see.
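For reference, a sketch of the standard beaming correction (the 5-degree opening angle is purely an illustrative number, not a measurement of any particular burst): a two-sided jet of half-opening angle \theta_j covers only a fraction

    f_b = 1 - \cos\theta_j \approx \theta_j^2 / 2

of the sky, so the true energy release is E_\gamma = f_b \, E_{\rm iso} rather than the isotropic-equivalent E_{\rm iso}. For \theta_j = 5^\circ \approx 0.087\ \mathrm{rad}, f_b \approx 3.8\times10^{-3}, so the true energy is roughly 1/260 of the naive isotropic estimate, and by the same geometry only about one burst in ~260 with that opening angle happens to be pointed at us.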

2 days ago
top

Proposed Disk Array With 99.999% Availability For 4 Years, Sans Maintenance

dnavid Re:Power Costs (255 comments)

Let me know when you can find a way to dispatch a tech to swap a hard drive in a tier 1 datacenter for a buck.

It doesn't cost a buck. It costs 5 bucks, but has a 20% chance of occurring in the 4 year lifetime of each HDD. Also, you would not "dispatch a tech". Instead you would send out a tech with a cart of, say, 50 HDDs. Then the tech would walk down the aisles, pulling and inserting disks. That would be his full time job. If he could do 50 in an 8 hour shift, and is paid $30/hour, that is about $5/disk.
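(A quick sanity check of the arithmetic in that quote, using the poster's own assumed numbers; none of them are measured data:

    # Per-disk cost of the "tech with a cart" approach described above.
    hourly_wage = 30.0      # $/hour
    shift_hours = 8
    disks_per_shift = 50

    cost_per_swap = hourly_wage * shift_hours / disks_per_shift
    print(f"Labor cost per swap: ${cost_per_swap:.2f}")          # $4.80

    # Expected cost per deployed drive, given a ~20% chance it ever
    # needs a swap during its 4-year service life.
    failure_probability = 0.20
    print(f"Expected cost per drive: ${cost_per_swap * failure_probability:.2f}")  # ~$0.96

which matches both the ~$5/swap figure above and the ~$1/drive estimate being debated.)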

No, you would not. The problem with this is that the whole point of the paper was to analyze ways to improve the reliability of disk arrays. You could do what you're describing if there were no specific timeframe in which hard drives needed to be replaced: you'd just replace them whenever you got around to it, rather than soon after they fail. But that only works in environments where actual disk reliability is not important. In environments where array reliability does matter, delaying the swapping of drives widens the window of vulnerability for an array, even one with hot spares, because of the need to survive the rare cases of multiple drives failing in a short span. That isn't likely, but when you're dealing with a five-nines uptime requirement, those unlikely events have to be accounted for.
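As a rough illustration of how that window grows (the group size, failure rate, and delays are made-up illustrative numbers, and real arrays also have rebuild times and correlated failures):

    # Probability that at least one *additional* drive in the same group fails
    # while a failed drive waits to be replaced, assuming independent failures
    # at a constant annual rate (simple exponential approximation).
    import math

    def prob_second_failure(group_size, annual_failure_rate, delay_days):
        remaining = group_size - 1
        expected_failures = remaining * annual_failure_rate * delay_days / 365.0
        return 1 - math.exp(-expected_failures)

    for delay in (1, 7, 30):   # next-day swap vs. weekly vs. monthly cart sweep
        p = prob_second_failure(group_size=12, annual_failure_rate=0.05, delay_days=delay)
        print(f"{delay:>2}-day delay: {p:.3%} chance of a second failure in the window")

Stretching the replacement delay from a day to a month raises that per-incident risk by roughly a factor of thirty, which is exactly the kind of tail risk a five-nines design has to budget for.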

I'm also trying to imagine implementing a system whereby a $30/hr tech just walks down aisles and pulls blinking drives and replaces them, and I'm thinking anyone who does that deserves the uptime they get.

2 days ago
top

Gamma-ray Bursts May Explain Fermi's Paradox

dnavid Re:WTF (236 comments)

One of the possible conclusions is we may actually be the first intelligent species to hit space flight in our galaxy.

So we're the older more advanced civilization we're looking for? That's so amazing and depressing at the same time.

Or we're just the oldest in the neighborhood. The really cool kids might simply be too far away for us to have met them yet.

On the subject of books, Stephen Baxter wrote a series of novels designed to tackle the Fermi Paradox, each containing similar characters but set in a different universe with a different answer to the paradox. In Manifold Time, the answer to the Fermi Paradox is basically "we were the first, and because we take over the universe quickly enough, we also become the only." In Manifold Space, the answer to the Fermi Paradox is comparable to the subject being discussed: advanced civilizations pop up all the time, but they get wiped out by massive cosmic disasters repeatedly throughout the history of the universe. Manifold Origin is a bit harder to summarize, but I guess the best way to put it is that the answer to the Fermi Paradox is that evolution to advanced civilizations is, contrary to our guesses, so improbable that even we shouldn't be here; we didn't arrive by chance, we were "shepherded" into being here.

2 days ago
top

Gamma-ray Bursts May Explain Fermi's Paradox

dnavid Re:WTF (236 comments)

But once a civilization has achieved the Iron Age of technology, such a civilization is likely to achieve space faring status within a thousand years

It took humans 3200 years. Why do you assume that the average species is *way better* than humans?

In any case though, I thought it was pretty clearly talking about nipping things in the bud, sterilizing all life at any point in the massive timeline between the first self-replicator and a civilization capable of avoiding or defending against gamma ray bursts. The amount of time that actually takes is probably some random variable, and all things considered, how long it took us is probably around average. Earth life existed 3.5 billion years or more before we came along.

Yeah, it's a statistical thing. The point isn't that GRBs obliterate all technological civilizations. It's that by reducing the odds of a planet achieving a technological civilization capable of surviving GRBs to very low levels, the odds of us having ever encountered one drop from very likely to extremely unlikely. And that's all that's necessary to resolve Fermi's Paradox.
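A toy illustration of that statistical point (every number here is invented for illustration; nothing comes from the paper):

    # Expected number of contactable civilizations as a simple product of
    # candidate systems and per-system odds.
    candidate_systems = 1e9   # candidate systems within reach (made up)
    p_civilization = 1e-7     # per-system odds of a contactable civilization arising (made up)
    grb_survival = 1e-4       # fraction of those not wiped out by repeated GRB sterilization (made up)

    print("Expected contacts, ignoring GRBs:", candidate_systems * p_civilization)                 # ~100
    print("Expected contacts, with GRBs:    ", candidate_systems * p_civilization * grb_survival)  # ~0.01

An expected count of ~100 means we'd almost certainly have met someone by now; an expected count of ~0.01 means we almost certainly haven't, which is the whole resolution.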

2 days ago
top

Proposed Disk Array With 99.999% Availability For 4 Years, Sans Maintenance

dnavid Re:Power Costs (255 comments)

Now, what does it cost to swap it? Let's say the chance of failure is 20%, it takes ten minutes, and you pay the admin $30/hour (I just made up all these numbers). ($30/hour * 1/6 hour * 0.2 failures) = $1.

So unless I made a mistake in either my math or my assumptions, it looks like swapping is still a win, unless the number of additional disks is less than 5%.

Let me know when you can find a way to dispatch a tech to swap a hard drive in a tier 1 datacenter for a buck.

2 days ago
top

Apple Posts $18B Quarterly Profit, the Highest By Any Company, Ever

dnavid Re:It is hard to know what to think (525 comments)

Apple arguably makes the best phones, and when using Android phones you notice little things here and there that aren't quite as nice, but these are rather rare and mostly insignificant. It feels strange that Apple is making such a profit with a rather smallish share that may be 12% of the market, and no particularly eye-popping new products since the Steve Jobs era, just a series of well-engineered refinements. Then again, certain shoe and apparel companies do this and have done this for decades. It seems odd to see this in a technology sector that historically has been very market-share, volume and dominance oriented. However, historically this was the method employed since the early days of Apple (premium pricing).

Well, first I think you're underestimating Apple's marketshare. If you measure Apple's marketshare relative to all smartphones everywhere on Earth, its marketshare could be that low. But Apple doesn't directly compete with most of those smartphones: it competes in the high-end smartphone market where it still has substantial marketshare. Supposedly, Apple shipped more phones in China last quarter than any other smartphone manufacturer, which means its marketshare is not insubstantial.

But I think more importantly, people - especially within the technology space - tend to look at Apple as a vanilla consumer tech company, when it is anything but. Asking why people are willing to pay so much for Apple when at best their products are marginally better is like asking why people pay so much for Air Jordans which are marginally better, or why people are willing to pay hundreds of dollars for dresses that have no practical benefit over a $79 dress from Target. It comes down to brand loyalty and brand consciousness.

People keep saying Apple was going to fail in China because their phones were twice as expensive as the competition; that if they didn't make a budget-priced phone Apple would be irrelevant in China. What those people are completely clueless about is that much of the sales of Apple iPhones in China came *because* it's the expensive phone, not *in spite of* the fact it's the most expensive phone. It's the Cadillac of phones, the Ferrari of phones. But as expensive as an iPhone is, it's a lot easier to buy than a Cadillac, and far more useful day to day.

People have been misunderstanding Apple's strategy for years now, and show no signs of learning their lesson. The correct way to look at Apple is not to compare them to Samsung or Microsoft or Dell. Rather, it's to compare them to Michael Kors or Calvin Klein or Nike. Imagine if Nike was the *only* company making shoes with celebrity athletes endorsing the product, and *everyone else* was marketing budget sneakers. They would probably never sell as many shoes as the budget sneaker companies, but think of the stranglehold Nike would have on the high-end shoe market, on mindshare, and most importantly on profits. Now look at Apple, saying "we're stylish and pricey," and everyone else, for the most part, saying "we are just as good but a lot cheaper." In the traditional tech space, Apple would be doomed. But in the high-end shoe market, they would be king. Apple isn't winning because they are playing the game better; Apple is winning because they are playing a completely different game and no one else is even bothering to suit up and play them.

3 days ago
top

Apple Posts $18B Quarterly Profit, the Highest By Any Company, Ever

dnavid Re:Tax (525 comments)

Which is the amount they couldn't find a way to avoid. Their profits would be taxed at around 40% in the US, but they funnel them to Ireland (the famous "Double Irish") and pay 0% on them. What they do pay is made up of sales tax, employment taxes, and tax on things like property that they can't pretend does not exist in any taxable jurisdiction.

This is somewhat misleading. The US is special in that it's the only country that taxes income that isn't even earned in the country. Most countries will tax someone on the income generated in the country, but not tax income generated outside the country. That applies to both corporations and *people*. If you are a US citizen and you go outside the country and earn income, you're required to pay US income tax on that income, even though it was earned entirely outside the country. To repeat: that's something practically unique to the US.

The US does provide an exception: if that income was already taxed by another country, you're allowed to declare that because you already paid taxes on that money to someone else, you don't have to *also* pay taxes to the US. Again: that's not just for corporations like Apple, but also for individuals.

Apple is required to, and does, pay US income taxes on net income it earns in the US: it cannot simply "funnel" that income to another country to dodge taxes, and everyone saying so is simply confused or mistaken. If that were possible, every corporation would do it and no one would pay any taxes. What companies like Apple *can* do, however, is a) pay taxes on income in another country, particularly one with a much lower tax rate, and b) not bring the money back to the US, where they would then have to pay US tax on it.

Is this "dodging taxes?" Yes. But not really *US* taxes. The countries with the biggest beef with Apple are really European countries for whom Apple doesn't pay taxes on income generated in those countries because that revenue is funneled into Ireland. But that money would *not* be "taxed at around 40% in the US" because if Apple didn't funnel that income to Ireland (where it has special sweetheart deals that Ireland gave to many companies, to the chagrin of many other EU countries), it would then be taxed in the individual countries it was earned, and the US would still not see a dime of it.

As to dodging taxes by not explicitly transferring that money earned overseas back to the US? It has yet to be explained why a company that earns money overseas has an obligation to transfer that money to the US explicitly for the sole purpose of paying US taxes on it, and for no other reason. That's simply illogical.

But as to the income Apple generates in the US by its business operations in the US: for the most part, it pays taxes on that income just like every other corporation.

3 days ago
top

How One Small Company Blocked 15.1 Million Robocalls Last Year

dnavid Re:Let's get this straight (145 comments)

The NSA has metadata (and most likely recordings) of most of the phone calls in the entire world. The FBI (and a bunch of other unnamed government agencies) can and do tap phones without court orders. Cell phones can be used to track individuals 24/7. And yet somehow between the FCC and all the phone companies no one can figure out who is making robocalls. Really?

What's actually going on is that phone companies love robocalls because they make money on them and the FCC doesn't give a damn and/or is too "pro-business" to do anything for consumers.

Just stop lying and pretending that this is a hard problem. It's bad enough that this crap goes on in the first place. Pretending that nothing can be done is adding insult to injury. STFU and admit that it happens on purpose and nothing will change because you like the status quo. Stop lying to us!

Who said it's hard to figure out who is making robocalls? It's not difficult to figure that out. The problem is that it is not illegal to make robocalls. The concern is that some robocalls violate the law by calling people that are registered on do-not-call lists without a valid legal reason for calling, while other robocalls are perfectly legal but the recipient doesn't want to answer them anyway. Services like nomorobo and others are intended for people who want to control the kinds of telemarketing and other robocalls they receive.

I use nomorobo myself, and given that it's a free service I don't have to maintain myself, I think it works very well. It doesn't catch and block everything, but it blocks a surprising number of them, and every one it blocks is a call I don't have to answer or listen to ring over and over until voicemail picks it up. It doesn't block all robocalls, but it's not supposed to. When my drugstore calls to tell me my prescriptions are ready for pickup, that's a robocall, but not one I want blocked. If the service just blocks the worst offenders, that's still plenty valuable, just as anti-spam filters don't have to block 100% of email spam to be worthwhile.

3 days ago
top

Gamma-ray Bursts May Explain Fermi's Paradox

dnavid Re:WTF (236 comments)

From TFS:

They further estimate that GRBs prevent complex life like that on Earth in 90% of the galaxies.

So, life being possible in 10% of galaxies means there are none at all? What about our own? This smells of clickbait.

The Fermi paradox basically states that if life on Earth is the typical result of similar conditions, the probability is far higher that there are older, more advanced civilizations, and on timescales far smaller than the age of the universe we should eventually have bumped into one of them as they spread throughout the galaxy, even the universe.

The paper suggests two effects of gamma ray bursts that alter that calculation. First, a given location was more likely to be exposed to a gamma ray burst at earlier times in the universe, when the population of large hot stars was higher and the overall density of the universe was higher. Therefore, it's possible that even though the universe is 14 billion years old, during a significant fraction of that time the universe was too dense and the frequency of gamma ray bursts too high to allow a sufficiently advanced technological civilization to arise. That's why there aren't any really old civilizations, or alternatively why there are so few that they tend, statistically, to be very far away. Second, even after the universe had expanded enough to make gamma ray bursts less likely to completely sterilize all planets everywhere, it's still the case that most parts of most galaxies are too dense to avoid getting hit by them.

So it's possible the reason we have not yet seen a very old, highly advanced civilization is that the actual probability of one being old enough, and close enough, for us to have bumped into (or rather for them to have bumped into us) is a lot lower than we might assume, even if the conditions to initiate life are pretty common. Nearly all of them have been wiped out before they could advance to the point of being able to colonize on an interstellar level and avoid being driven to extinction by gamma ray bursts.

4 days ago
top

"Mammoth Snow Storm" Underwhelms

dnavid Re:jessh (397 comments)

when did we become a nation of wimps? we dealt with snowstorms for decades without shutting down at the mere hint of a blizzard. this country is going soft catering to whiners.

We dealt with cholera for even longer without all this public sanitation bling.

4 days ago
top

Systemd's Lennart Poettering: 'We Do Listen To Users'

dnavid Re:I agree with Lennart (551 comments)

Why would LibreOffice or GIMP ever be dependent on systemd? They have nothing to do with the startup or shutdown of the system - they are plain vanilla applications (same most likely goes for JBoss and KDE, though they may provide some 'system-like' services). Seriously, folks. It's just this kind of hyperbole from systemd haters that makes me think it must be good...

The root of the worry comes from the fact that the history of systemd's development is not "let's address this one thing" but rather "wouldn't it be great if?" systemd doesn't adapt to or reuse other things as often as a lot of people are comfortable with: they displace syslog with journald because it's better for them, because they then control logging rather than having to work with anyone else to get the features they want. The issues with Gnome are due to systemd saying "wouldn't it be great if" they could extend boot startup services to session startup services, which then gets you to login and window managers and so on. And while they assert - correctly - that *technically* speaking these things aren't dependent on systemd, it's clear they are trying to displace the other pieces because they think they can do them better.

And maybe they can. The most pernicious thing about systemd is that in many respects what it displaces it improves on (at least to an extent): that's what makes it possible at all for them to expand the system. But it's still unnerving to many. The answer to the question "why would LibreOffice ever be dependent on systemd" is "if the systemd developers decide they can offer a service to LibreOffice that would be beneficial, there doesn't seem to be anything in their DNA that says maybe we should stay in our lane." Maybe one day systemd will want to act as a gateway to special network file services, or whatever. I can't imagine what that could be, but then again I never imagined that an init replacement would be replacing system logging and hooking into session startup. That's the problem: they've already defied expectations about how much reach systemd is intended to have.

Honestly, the gist I get from the interview itself is "if we think we can do it better, why not go for it." That's an understandable point of view, but it's also very provocative to many in the open source community.

about two weeks ago
top

Reaction To the Sony Hack Is 'Beyond the Realm of Stupid'

dnavid Re:Yup, Hegel 101 (580 comments)

Anyone believing the "terrorist" propaganda must somehow also believe that the DPRK has millions of bomb strapping terrorists stationed in the US ready to flock into Star and AMC to bomb people for watching a comedy.

Anyone who thinks that is nuts. But that's the problem. There's no way to predict how many nuts out there think this would actually happen, and decide they want to get in on the action. 13,000 theaters being attacked by North Koreans is a really bad sequel to Red Dawn, not a credible threat. But a couple dozen crazy people attacking theaters because they heard about the North Korean threat is still a lot of potential casualties. That is a legitimate risk worth thinking about.

about a month and a half ago
top

Time To Remove 'Philosophical' Exemption From Vaccine Requirements?

dnavid Re:Religious is better than philosophical? (1051 comments)

Here's my Philosophical objection: if people can be exempt based on religious beliefs I can be exempt because I feel vaccines are bad.

Stupidity is not a philosophy, it's a lifestyle choice.

about a month and a half ago
top

'Moneyball' Approach Reduces Crime In New York City

dnavid Re:A tech gloss over racial profiling? (218 comments)

I would very much like to know the racial makeup of that list. Given it came from the police themselves, it certainly leads to questions about how such individuals end up on those lists.

1. The article links to a document that describes the system and how it works in detail.

2. The article and linked document describe the function of the lists as prioritization lists for prosecutors after arrest, not used by the police to target people for arrest. If you think about it, that's logical: the police do not need to help the city make a list of people to target for arrest. If the police are biased in targeting people for arrest, they can still stop and arrest those people whether they are on the list or not.

3. Given the finite resources of the NYC criminal justice system, any data-driven system designed to focus more resources on critical repeat offenders will by necessity reduce the focus on everyone else. That means that while any system, including this one, can have racial bias, it's more likely to reduce the impact of that bias simply due to the practical reality of reducing the need for dragnet-style law enforcement. Even if the impact is small, more time spent on publicly vetted priorities is less time available to do anything else.

about 2 months ago
top

Complex Life May Be Possible In Only 10% of All Galaxies

dnavid Re:Let's do the math (307 comments)

What evidence is there of an infinite universe that had no beginning?

Bear in mind also that if an infinite universe exists which had no beginning, then light would also have had an infinite amount of time to travel here from absolutely everywhere else. And although the intensity of radiation that reaches a point is inversely proportional to the square of the distance to that point, the volume of space that is on average some given distance away from a point grows by more than the square of that distance, and so the number of radiating things in that volume would be correspondingly greater, more than cancelling out the inverse-square falloff in the intensity reaching a point some fixed distance away. Every point in the universe would be perpetually saturated in radiation reaching it from every other point in the universe, infinitely far away, and things like life-bearing planets certainly could not exist.

Critical observation suggests that the universe is finite.

This is known as Olbers' paradox, and it is not valid for expanding universes, where redshift stretches the light from distant sources to longer and longer wavelengths (out of the visible band, with its energy driven toward zero) and there ends up being an observable horizon, even in an otherwise infinite space with an unbounded lifetime.

Though your point about distance and volume is wrong for other mathematical reasons: the number of light sources at a given distance grows as the square of the radius, not as the volume. The *total* number of sources out to a given radius is proportional to the cube of the radius, but using that is double counting: the volume includes the closer sources already counted, plus the new sources being added as the radius increases. Olbers' paradox doesn't rely on that error in math.
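For reference, the standard shell-counting version of the argument (assuming a uniform source density n and a common luminosity L, which is the usual idealization): the flux from a thin shell of radius r and thickness dr is

    dF = (n \cdot 4\pi r^2 \, dr) \cdot \frac{L}{4\pi r^2} = n L \, dr,

so every shell contributes the same amount, and \int_0^\infty n L \, dr diverges in a static, eternal, infinite universe. Expansion (redshift) and the finite lookback time are what cut the integral off.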

about 2 months ago
top

Harvard Scientists Say It's Time To Start Thinking About Engineering the Climate

dnavid Re:Global warming is bunk anyway. (367 comments)

We shouldn't be fooling around like this. It's obvious we don't understand, or are too corrupt and greedy to admit, that there's no problem.

It's ironic that one of the potential benefits of geoengineering research is that it will force many climate change deniers to admit that it's possible for human activity to have major deleterious effects on Earth's climate.

about 2 months ago
top

Rooftop Solar Could Reach Price Parity In the US By 2016

dnavid Re:They WILL FIght Back (516 comments)

Yes, many US states require free net metering and power resale. It's the law, so utilities have to do it. But all you're doing for the time being is transferring the solar generators' share of the infrastructure costs onto the non-solar generators. So when you report that these people can "break even," is that really a fair comparison?

It is a true statement that net-metering customers are often using infrastructure that they are effectively not paying for, or not paying the true costs of, when their net-metering contracts allow them to offset generation and usage one for one. However, it's also true that the statement "solar could reach price parity" can still hold while acknowledging that fact, for this reason: electric utilities overstate the costs of the infrastructure, and the costs of that infrastructure are affected by grid defections.

In Hawaii, where I live, the electric utility recently proposed a plan to deal with the high growth rate of residential solar, which it had been delaying due to its assertion that the grid was unprepared for the volume (which I concede is likely a mostly true statement). Their plan proposed a cost structure under which a customer currently paying about $200/month (what their documents stated was the residential average), who today could reduce that bill with solar to close to $15/month, would under the new system be paying about $150/month. That's for someone who deployed a net-zero solar system. The difference comes from the proposed $55/month infrastructure fee and from crediting customer-generated power at only about half the rate charged for customer-used power.
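A rough reconstruction of that arithmetic, under a simplifying assumption of mine (that a net-zero customer's entire usage is bought at retail and an equal amount of exported generation is credited at half the retail rate; the actual proposed tariff is more complicated):

    # Rough sketch of the proposed rate structure described above.
    avg_bill = 200.0           # $/month, stated residential average at full retail
    infrastructure_fee = 55.0  # $/month fixed charge in the proposed plan
    export_credit_ratio = 0.5  # exported kWh credited at roughly half the retail rate

    energy_charges = avg_bill                        # usage billed at retail
    export_credit = avg_bill * export_credit_ratio   # credit for an equal amount exported
    monthly_bill = infrastructure_fee + energy_charges - export_credit

    print(f"Estimated monthly bill: ${monthly_bill:.0f}")  # ~$155, in line with the ~$150 figure
    print(f"Annualized: ${monthly_bill * 12:.0f}")         # ~$1860/yr, the '$1800/yr' mentioned below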

Given those numbers, it's entirely possible that the continued drop in costs for residential solar could bring down the cost of a completely off-grid system of batteries and an emergency generator (for stretches of low sunlight) to a point where it becomes competitive with that $1800/yr of "infrastructure" costs. Moreover, even if the costs don't reach full parity, if they just get close, it's possible that enough customers could choose to defect off the grid to make the per-customer costs of maintaining the grid even higher - since those costs are largely fixed and don't go down when there are fewer customers.

The combination of lowering costs of completely off-grid solar systems combined with the rising costs of utility power per customer due to defections could still cause a meet-in-the-middle where solar reaches price parity even for systems that don't need utility connections. Maybe not by 2016, but I wouldn't bet a lot of money against it either.

The problem with the "solar customers are not paying their fair share of the grid" statement is not that it is not true: its that the best way to resolve that problem in the long run will be for solar customers to defect off the grid. If electric utilities continue to pursue the strategy of making solar customers "pay their fair share" eventually, and it may take time, the technology will reach the point where that fair share will be zero, because they will stop using the grid. Once the technology reaches that point, it will be too late for the electric utilities to do anything except watch customers leave. They should be working now to find a more suitable relationship between the utility and solar residential customers that isn't adversarial. Because in short run the utilities have a lot of power over the situation, but in the long run they have very little. They should use their power while they can to create something that ensures a future where their customers still need them. There are lots of ways to do that. Demanding solar customers pay huge amounts for the privilege of being utility customers is not one of them.

about 2 months ago
top

Launching 2015: a New Certificate Authority To Encrypt the Entire Web

dnavid Re:Replace Cisco, and Akamai and then maybe.. (212 comments)

Replace Cisco and Akamai and then maybe I'll be convinced it's better than the current situation. But it's still an oxymoronic service: a central authority that *REQUIRES* trust, for people who don't trust anybody.

First, if you don't trust Cisco and Akamai to that extent, how do you intend to avoid transporting any of your data on networks that use any of their hardware or software?

Second, I think a lot of people really have no idea how SSL/TLS actually works. There are two forms of trust involved with SSL certificate authorities. The first involves the server operators. Server ops have to trust that CAs behave reasonably when it comes to protecting the process of acquiring certs for a domain name. But that trust has nothing to do with actually using the service. Whether you use a particular CA or not, you have to trust that *all* trusted CAs behave accordingly. If Let's Encrypt, or GoDaddy, or Network Solutions is compromised or acts maliciously, they can generate domain certs that masquerade as you whether you use them or not. As a web server operator, if you don't trust Let's Encrypt, not using their service does nothing to improve the situation, because they can issue certs for your domain whether you use them or not - and so can Mozilla, so can Microsoft, so can GoDaddy.

The real trust is actually on the end-user side: end users, or rather their browsers, trust CAs based on which signing certs are in their trust stores. It's really end users that have to decide whether they trust a server and a server identity or not, and the SSL cert system is designed to assist them, not server operators, in making a reasonable decision. If you, as an end user, decide not to trust Let's Encrypt, you can remove or distrust their root cert in your browser. Then your browser will no longer trust Let's Encrypt certs and will generate warnings when communicating with any site using them, and you as the end user can then decide what to do next, including deciding not to connect at all.
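A minimal sketch of that client-side mechanism (generic Python, nothing Let's Encrypt specific; the host name is just a placeholder): validation succeeds or fails purely on the basis of which CA roots the *client* holds.

    # Client-side trust: the server's chain must terminate in a root present
    # in the local trust store, or the handshake fails.
    import socket, ssl

    def check_trust(host, port=443, cafile=None):
        # cafile=None uses the platform's default CA bundle; pass a restricted
        # bundle to simulate having removed a CA you no longer trust.
        ctx = ssl.create_default_context(cafile=cafile)
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    print(host, "chains to a trusted root;", tls.version())
        except ssl.SSLCertVerificationError as err:
            # This is the programmatic equivalent of a browser warning.
            print(host, "does NOT chain to a trusted root:", err)

    check_trust("example.com")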

Seeing as how the point of the service is to improve the adoption of TLS for sites that don't currently use it, refusing to trust a Let's Encrypt protected website that was going pure cleartext last week seems totally nonsensical to me, unless you also don't trust HTTP sites as well and refuse to connect to anything that doesn't support HTTPS.

Lastly, if you literally don't trust anybody, I don't know how you could even use the internet in any form in the first place. You have to place a certain level of trust in the equipment manufacturers, the software writers, the transport networks. If all of them acted maliciously, you can't trust anything you send or do.

I don't necessarily trust the Let's Encrypt people enough to believe they will operate the system perfectly, and I don't believe they are absolutely immune from compromise. But I do think the motives of people adding encryption to things currently not encrypted at all are likely to be reasonable, because no malicious actor would be trying to make it easier to encrypt sites that have lagged, and would otherwise continue to lag, behind adopting any protection at all. Even if they are capable of compromising the system, that would be quixotic at best: even in the best-case scenario they would be making things a lot harder for themselves, and in the long run getting people accustomed to using encryption with a system like this can only accelerate the adoption of even stronger encryption down the road.

about 2 months ago
top

Interviews: Warren Ellis Answers Your Questions

dnavid Re:Who is this guy (15 comments)

Notable works? Most popular recognizable work? I'm not gonna go googling so do your fucking job editors

If you have no idea who Warren Ellis is and have no intention of actually doing any research, what possible benefit could listing any of his works be to having any appreciation or context to an interview?

In any case, three of his best-known works are actually *mentioned* in the interview; namely Planetary, Transmetropolitan, and his run on Hellblazer. I'm also fond of his work on The Authority, which is also very well known. And he scripted the animated Justice League Unlimited episode "Dark Heart," the one about the nanotech weapon that landed on Earth.

And jeez, learn to use the rest of the internet, or unlearn the little random bits.

about 2 months ago
top

Ask Slashdot: Can You Say Something Nice About Systemd?

dnavid Re:Reliable servers don't just crash (928 comments)

Actually that statement is 100% correct since the definition of a reliable server is one that does not crash.

It's trivially easy to make server software that doesn't crash: just send all exceptions into an infinite loop. Not crashing is a common prerequisite for a reliable server, but far from a sufficient requirement. In fact, for some software like filesystems and databases, crashing is almost not relevant to reliability: crashing to prevent data corruption is what reliable systems do in some contexts.
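A deliberately silly sketch of that "never crash" anti-pattern (hypothetical handler, not real server code): the process stays up forever, yet it's anything but reliable.

    # Anti-pattern: swallow every exception and retry forever. Maximum
    # "availability" (the process never dies), zero reliability (it never
    # surfaces the fault and may never make progress).
    import time

    def handle_request(request):
        raise RuntimeError("corrupt state")   # stand-in for a genuine failure

    def serve_forever(requests):
        for request in requests:
            while True:
                try:
                    handle_request(request)
                    break
                except Exception:
                    time.sleep(1)   # loop instead of crashing or reporting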

"Reliability" is when the software does what it is designed to do. That can include protective crash dumping. "Availability" is when the software is always running. Well designed and implemented software is both reliable and available, which is another way of saying its always running, and always running correctly.

about 3 months ago

Submissions

dnavid hasn't submitted any stories.

Journals

dnavid has no journal entries.
