
Comment Re:"Human Colleague"... Nope, You Just Don't Get I (Score 1) 407

The fact that human brains can do it strongly suggests that it's possible to make something artificial that can do the same thing. That doesn't necessarily mean we can do it with our current software approaches, or even in anything involving silicon, but it clearly is possible.

(With the pace of recent advances in AI, I would argue that it seems likely to be doable in software, and probably very much sooner than you might expect.)

Comment Re:Another perspective... (Score 1) 299

Literally quoting from the article:

Doing so would have set a dangerous precedent and would compromise the impartiality of myself and the other press photographers who work at the court. It's quite foreseeable that one photographer handing over photos would endanger all other photographers at the court as we may be perceived as informers or allies of the police.

Comment Re:One word (Score 1) 474

First and foremost, physics strikes again with the speed of light. Pretty much all modern processing is done synchronously, which means it requires a clock signal that changes everywhere at the same time.

To put a number on that: at 4.5 GHz (perfectly doable with current processors) light travels only about 6.7cm per clock cycle.

Take a look at your motherboard. 6.7cm is barely the distance to the RAM slots; the PCI-E slots are further away than that. The CPU die is of course much smaller, but it's sobering to realize that a CPU can retire quite a few instructions in the time it takes light to cross the motherboard.
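A quick back-of-envelope check (the 4.5 GHz figure is from the comment above; note that real signals in copper traces travel slower than light, so the practical limit is even tighter):

```python
# How far light travels in one clock cycle at a given frequency.
C = 299_792_458  # speed of light in a vacuum, m/s

def distance_per_cycle_cm(freq_hz: float) -> float:
    """Centimetres covered by light during one clock period."""
    return C / freq_hz * 100

print(f"{distance_per_cycle_cm(4.5e9):.2f} cm")  # ~6.66 cm at 4.5 GHz
```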

Comment FPTP screws you over (Score 2) 65

Canada uses first-past-the-post voting, which makes it very difficult to set up a new party. If you try, you end up splitting the vote with whichever of the two main parties most closely aligns with you, which makes the other main party more likely to win. As a result, most people won't vote for you, even when they prefer you to either of the two leading parties.

"Set up a new party" would be rather a lot more viable if that were fixed... but good luck getting FPTP replaced by anything else.
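The vote-splitting effect is easy to demonstrate with made-up numbers. In this hypothetical electorate, 60% of voters prefer either of two similar parties over the third, yet under first-past-the-post the third party wins; any runoff-style transfer of votes reverses the result:

```python
from collections import Counter

# Hypothetical ballots: each voter ranks candidates by preference.
# 60 of 100 voters lean "Left" but split between two similar parties.
ballots = (
    [["LeftA", "LeftB", "Right"]] * 35 +
    [["LeftB", "LeftA", "Right"]] * 25 +
    [["Right", "LeftA", "LeftB"]] * 40
)

# First past the post: only first preferences count.
fptp = Counter(b[0] for b in ballots)
print(fptp.most_common(1))  # [('Right', 40)] -- wins with a 40% minority

# Runoff: eliminate the weakest candidate and transfer its votes.
weakest = min(fptp, key=fptp.get)  # 'LeftB' with 25 first preferences
runoff = Counter(next(c for c in b if c != weakest) for b in ballots)
print(runoff.most_common(1))  # [('LeftA', 60)] -- the majority preference
```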

Comment Re:Robot Safety (Score 1) 126

This is a good example of something that's easy, simple and incredibly wrong (or at least incomplete).

Imagine an AI that's been asked to do something. The AI decides that the best way to do it is to do X. But it's also smart enough to know that if anybody discovers it's going to do X, they'll hit the Off switch, which will stop it from completing the original task. So what will it do? It'll hide the fact that it's going to do X while manoeuvring itself into a position where it can do X without getting shut down: disabling the Off switch, gaining the ability to prevent anybody from pressing it, or perhaps just setting things up so that it can do X so fast or so stealthily that nobody realizes or can react until it's too late.

An Off switch isn't enough. You need to figure out how to program the AI so that it doesn't try to disable its own Off switch, doesn't disable it accidentally, and doesn't mind someone pressing it (even though that will kill it and stop it from doing its job) -- but most importantly, so that it never tries to do anything that would make you want to press the Off switch in the first place, since you can't guarantee you'd get the chance. We don't really know how to do any of those.
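The incentive problem above can be reduced to a toy expected-utility calculation (all numbers invented for illustration): an agent that values nothing but its task reward scores strictly higher by hiding its plan or disabling the switch, because the switch appears in its objective only as a way to lose reward.

```python
# Toy model of the off-switch incentive for a pure task-reward maximizer.
# All values are made up for illustration.
TASK_REWARD = 10.0        # reward for completing the task via action X
P_SHUTDOWN_IF_SEEN = 0.75 # chance humans press the Off switch if X is visible

def expected_reward(hide_x: bool, disable_switch: bool) -> float:
    """Expected task reward given the agent's choices about the switch."""
    if disable_switch or hide_x:
        p_stopped = 0.0           # nothing can interrupt the task
    else:
        p_stopped = P_SHUTDOWN_IF_SEEN
    return (1 - p_stopped) * TASK_REWARD

print(expected_reward(hide_x=False, disable_switch=False))  # 2.5
print(expected_reward(hide_x=True,  disable_switch=False))  # 10.0
print(expected_reward(hide_x=False, disable_switch=True))   # 10.0
```

Nothing in this objective rewards leaving the switch alone, which is exactly the problem: honesty is strictly dominated unless we explicitly build in a term that values it.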

Comment Re:Alternative: (Score 1) 126

Perhaps not "want", no. But what about indifference? What about an AI that's been ordered to do something, and the most efficient way happens to involve disassembling you and it just doesn't care about the consequences?

It may well be true that AI won't want to destroy humanity unless it's programmed to, but it's also not going to want to do the right thing unless we program it to -- and not only do we not really know how to do that, if you look around the /. comments on these articles you'll see there isn't even general agreement that we should be trying. To me that seems like an incredible risk to take.

Comment Re:On regulation of AI development (Score 1) 72

This is a sane position, but the problem isn't quite so simple as that. AI development is competitive: whoever builds the first super-intelligent AGI is probably going to win big time. Any team that bothers to spend time considering safety is likely to lose to teams that don't.

I'm not entirely sure how we can avoid that. Even if you managed to pass global laws, how do you deal with people secretly breaking the law?

(Of course there's a big difference between AGI and driverless cars; the latter is pretty easy to manage the risk of. But when people start throwing around phrases like "existential risk", they're not talking about the cars.)

Comment Re:You pay people to do fuck-all... (Score 1) 723

I think we can be fairly sure there's nothing non-physical going on in our brains. Nothing else in the universe works like that, our brains are built out of perfectly normal matter and thus appear to run on normal physics, and the alternative would make human brains magically special, which is crazy. None of that means we're not conscious; it just means consciousness is something boring and repeatable once you know how to do it.

Prove otherwise and I won't refuse to believe you, but I'm not going to expect that to happen.

There are some scientific indicators though that there may be more to the human mind than physics as known, for example the constant long-term failure to create general ("strong") AI even on the level of an utter moron. It seems this is either excessively hard or impossible.

I have mostly the opposite impression. We know the brain uses neural networks, and we've only really figured neural networks out in the past few years -- and in that same period we've made massive advances across practically every area of AI. Since neural networks are our only real avenue of attack on AGI (our only working example is the human brain, and that uses them), I read the current situation less as "AGI is hard" and more as "neural networks were hard". And our ability to do in software many of the things that were previously human-only definitely demonstrates that those parts of the human brain are reproducible.

AGI might still be hard, but it might also simply turn out to be a matter of combining existing neural networks in the right way. Certainly every other AI problem has had people going "oh, but that's just X" or "that's just Y" (exactly like you did). Why not this one?

Note that "we have no idea how the brain works" doesn't mean we can't reproduce it. The neural networks involved in AlphaGo, for instance, are completely inscrutable; we have no idea how they work or why they evaluate any given move the way they do. Yet they demonstrably work just fine for playing Go.

[...] but having jobs for 10-15% of the population is not going to keep the current society-models going.

Yeah. None of these attempts to predict exactly where the limits of AI are will change anything when the limits are clearly high enough that we've got a problem.

Comment Re:The problem (Score 1) 723

It's "universal" so by definition receiving it doesn't depend on how much money you earn or how much tax you pay (although there's probably a "for working age adults" proviso in there...). Stopping tax evasion is orthogonal to that.

No, it's not done as a trick to take from the middle and reward the wealthy. It's done as a trick to make sure that everybody can feed, clothe and house themselves. I personally don't think that's a bad thing to do.

Comment Re:The problem (Score 1) 723

Good point, but it should be a lot easier to verify someone's identity than to verify their identity and do means-testing for the zillions of separate welfare-related programs we have at the moment. (Not to mention that we're ploughing full speed ahead into a future where humans can't really compete for jobs at all, so the difference between "number of people without a job" and "number of people" is going to start getting smaller and smaller anyway.)

Comment Re:You pay people to do fuck-all... (Score 1) 723

By that argument, there's likely no human creativity either. "Filter things from random searches" is probably a pretty good description of how creativity works in human brains too. We aren't special; I don't buy that our brains do something computers can't.

...but that ultimately doesn't even matter. If an AI can emulate the output of a human brain, and it can do so cheaper than a human can, then it doesn't matter if the AI works in the same way a brain does or not. It can still act as a replacement.
