
Comments

Hawking Warns Strong AI Could Threaten Humanity

ldbapp Re:Assumptions define the conclusion (574 comments)

I agree.

You said, "By that logic we want nothing either...". That is a key point. We know what it means to want something, but we *don't* know how that desire, or our awareness of it, arises in practice in our brains. That is, we don't know how to implement it, even if we had the ability to fabricate actual neurons. You can, and people do, define "Strong AI" as the attempt to "create an artificial living mind". In that case, you've defined it as something we don't know how to do (yet). Hence my comment about drawing conclusions from a starting point that is not in line with reality.

As you said, "What qualities [the strong AI we eventually do build] shares with us will likely be one of those things that can't be answered for certain until we actually create it." Totally agree.

about two weeks ago
Hawking Warns Strong AI Could Threaten Humanity

ldbapp Re:Assumptions define the conclusion (574 comments)

So you construct a fantasy world with whatever you imagine is or will be, and then want to discuss what will happen in that world. Fine, it's a fun thing to do, but you can't then bring your conclusions back to the real world.

I think we're arguing along different lines here. You want to posit a scenario and then discuss what happens within that scenario. I'm saying that the conclusions you draw from such a discussion only apply to reality insofar as the initial scenario matches reality. Your scenario doesn't. You start with "create a true, self-aware, synthetic mind ...". That's nowhere near reality, so whatever conclusions you draw are also nowhere near reality.

And that's my point. It's useful to consider "what would happen if" because people do have the goal of creating a "strong AI", but it is speculation. The reality is that all we know how to do now, and in the foreseeable future, is build specialized, though flexible, algorithms to perform complex tasks. Talking about these as if they are "intelligent", or "want" things, or can "think" just makes it difficult to be productive. There is already real danger in having autonomous cars, autonomous planes, autonomous soldiers, and other complex computer-controlled machines. We'd be better served discussing the real risks than fretting over some sci-fi world in which machines have become super-human fictional CyberMen.

Our autonomous cars will be faced with situations like the trolley problem (do nothing and it kills five; divert it and it kills just one). That problem needs to be faced and an answer provided without resorting to pretending that the autonomous car has "will" or "morality" or a "desire" to minimize some mathematical function related to the number of deaths caused. Autonomous cars, as much as they may seem to have a "goal" of taking us to our requested destination, are just algorithms we created, tied to machines we created. We designed them with a goal in mind, but we have to understand what they *are*, not what we wanted them to be.

about three weeks ago
Hawking Warns Strong AI Could Threaten Humanity

ldbapp Re:Assumptions define the conclusion (574 comments)

I'm allowed hyperbole. Pout.

But seriously, AIs also want nothing. They are simply machines, too. More complex, of course, but still machines. That's my point. You imbue your hypothetical AI with all the qualities of a human, plus extra. You called it a synthetic mind. So we're starting the discussion by presuming something that doesn't exist, and then concluding basically whatever we want. We then try to say that conclusion applies to the real world. That's what Hawking did. He assumed an AI that can supersede us, concluded that it will supersede us, and then inferred that AI is a threat to humanity. It's a baseless argument built on something that doesn't exist, and that we don't know how to build.

about three weeks ago
Hawking Warns Strong AI Could Threaten Humanity

ldbapp Re:Assumptions define the conclusion (574 comments)

This is like saying: I'm afraid of automobiles, because eventually they will want to travel at the speed of light and will therefore suck up all the energy in the universe in the attempt. An automobile will almost certainly "want" to travel as fast as possible, because to be useful as an automobile it needs to go fast.

about three weeks ago
Hawking Warns Strong AI Could Threaten Humanity

ldbapp Re:Assumptions define the conclusion (574 comments)

We benefit daily from programs that are nowhere near as intelligent as us. Why is it "that the only way we're likely to benefit from creating an AI is if it's vastly more intelligent than us"? We benefit from non-intelligent machines of all sorts. We benefit from Google. We benefit from Roombas. We benefit from autonomously driven mining equipment. The list goes on for pages.

In any event, you are conflating the premise with the conclusion.

about three weeks ago
Hawking Warns Strong AI Could Threaten Humanity

ldbapp Re:Assumptions define the conclusion (574 comments)

Here's a trivial algorithm: int add(int a, int b) { return a + b; }.

No matter how much RAM you give the computer running this algorithm, it will never be faster. No matter how fast you make the clock speed of your CPU, this algorithm will never be able to subtract numbers. No matter how much electricity you allow this algorithm to consume, it will never add three numbers at the same time.

Those are situations in which having more resources doesn't help.

You then suggest ways in which algorithms could be improved to use more resources. Fine, that's engineering. The hope/goal of AI is that we can find the kind of algorithms you hypothesize about. But we don't currently have algorithms that "merely require more resources" to get smarter.

I think having a "what-if" conversation can be very useful. (I particularly enjoy them, in fact.) However, my point is that the conclusion that AI will supersede humans is based on the assumption that we have an AI that *could* supersede a human. We don't have any such AI, and we don't know how to build one. So that hypothetical conclusion is effectively the tautological implication of assuming the outcome.

My point is that speculation does not result in being able to draw actual conclusions about our actual future. If we can't achieve the pre-conditions, we won't suffer the conclusions.

about three weeks ago
Hawking Warns Strong AI Could Threaten Humanity

ldbapp Re:Assumptions define the conclusion (574 comments)

Clearly, I am a poor author. My point, which has mostly gotten lost, is that speculating about what an AI is or will be and then drawing conclusions about what it will do tells us nothing about what might happen *in reality*. That is because, *in reality* we do not have AIs anywhere near the capability given to them in such hypothetical scenarios as the paperclip maximizer. Moreover, we do not know how to build such AIs. Thus, with speculative premises, the conclusions are just as speculative.

There can be value in "what-if" conversations, but if the premises are unlikely to ever be realized, then so are the conclusions.

about three weeks ago
Hawking Warns Strong AI Could Threaten Humanity

ldbapp Re:Assumptions define the conclusion (574 comments)

a) No. b) No. c) No. d) No.

All of your points are the kind of uninformed assumptions I'm pointing out, in addition to some of them being just wrong.

Getting more resources does not necessarily make an algorithm smarter. It doesn't even always make it faster. Assuming you have some magical algorithm that "merely require[s] more resources" is just wishful thinking. Show me the algorithm. No such algorithm currently exists.

You can, if you will, define AI as you do in c). However, then there is no AI now, and may never be. You're speculating. And the self-aware requirement is very unlikely to be satisfied in our lifetimes. We literally don't even know how self-awareness/consciousness is implemented in ourselves, let alone how it would be implemented in something we create.

When you say, "I don't see why", and "it would likely", you're just speculating.

There's nothing much to be gained by positing unrealistic CyberMen with hypothetical powers and then trying to draw conclusions about what life with AI will be like. All the powers people like to hypothesize do not exist, and we don't currently know how to make them exist. So whatever conclusions you draw are just speculative fiction. Fun, and perhaps a useful philosophical/ethical pursuit, but it's ultimately fiction.

about three weeks ago
Hawking Warns Strong AI Could Threaten Humanity

ldbapp Assumptions define the conclusion (574 comments)

Much commentary on robotics and AI is based on unknowable assumptions about capabilities that may or may not exist. These assumptions leave the commentator the freedom to arrive at whatever conclusion they want, be it utopian, optimistic, pessimistic or dystopian. Hawking falls into that trap. From TFA: "It would take off on its own, and re-design itself at an ever increasing rate," he said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded." This assumes a lot about what a "super-human" AI would and could do. All the AI so far sits in a box that we control. That won't supersede us.

So commentary like this usually assumes the AI has become some form of Superman/Cyberman in a robot body, basically like us, only arbitrarily smarter to whatever degree you want to imagine. That's just speculative fiction, and not based on any reality.

You have to imagine these Cybermen have a self-preservation motivation, a goal to improve, a goal to compete, independence, a soul. AIs have none of that, nor any hints of it. Come back to reality, please.

about three weeks ago
$500k "Energy-Harvesting" Kickstarter Scam Unfolding Right Now

ldbapp Re:Actual PhD students getting slandered? (448 comments)

He did not confirm a device, though I didn't ask. He confirmed his involvement in biz.dev, and said it was only part time. He expressed personal confidence in the project, but that's all.

There was one odd thing. I sent email to him @ucla. He replied from wetaginc.com, explaining that it is because iFind isn't related to UCLA. Then he offered to send an empty message from the UCLA account. I glanced at the headers of his email and found references to eigbox.net, which seems to be implicated in spam-related activity. It could be innocent. He may just be careful to separate his professional activities, and his email provider/ISP may use eigbox. Or there could be a MITM thing going on. A group looking to KS-scam $1/2M could certainly be savvy enough to impersonate the people whose names were stolen.

My level of curiosity isn't high enough to pursue any further. ;)

about 6 months ago
$500k "Energy-Harvesting" Kickstarter Scam Unfolding Right Now

ldbapp Re:Actual PhD students getting slandered? (448 comments)

After my post mentioning my skepticism about his involvement, I had an email exchange with him where he confirmed what the KS page says.

about 6 months ago
$500k "Energy-Harvesting" Kickstarter Scam Unfolding Right Now

ldbapp Re:Actual PhD students getting slandered? (448 comments)

Similarly: Wotao Yin, according to Google, is a mathematician working on the mathematics of optimization. Yet he is listed as a biz-dev guy who "leads the business and marketing strategy development". That's a leap right there. I'd guess his name has also been used without his knowledge.

about 6 months ago
Physicists Turn 8MP Smartphone Camera Into a Quantum Random Number Generator

ldbapp Seed (104 comments)

What's the universe's seed?

about 7 months ago
Why Bitcoin Is Doomed To Fail, In One Economist's Eyes

ldbapp Re: Offer/Demand Law (537 comments)

A gov't-created crypto coin would have the advantage of being "official". That's all, but that's big. Perhaps it's not enough, but being "official" sets it apart from other coins.

1 year,22 days
Why Bitcoin Is Doomed To Fail, In One Economist's Eyes

ldbapp Re: Offer/Demand Law (537 comments)

I second this idea. If at some point the gov't becomes convinced Bitcoin is viable, they could just start their own new block chain. If they bless it with some sort of official "approval" (e.g., their coin is legal tender for taxes), then that one can supplant any other. And as the parent comment mentioned, any sufficiently large market force can do the same. Eventually, Bitcoin won't be the only crypto coin. And if the gov't can create its own competing coin, it can create two, or N, new block chains. Thus, it can mint crypto currency just as with fiat money.

1 year,23 days
Ask Slashdot: Video Streaming For the Elderly?

ldbapp Sony BD-S3100 (165 comments)

This is a combo DVD/Blu-ray, Wi-Fi, internet-connected device. I got it solely for the Blu-ray player, but discovered how convenient the internet connection is. A Netflix interface is built in; the remote control even has a "Netflix" button. There's a tiny bit of setup that you can do, and after that, my over-70 mother can operate it just fine. It also has built-in interfaces for Hulu, Vudu, and music services like Pandora and Slacker. I used to hook my laptop up to the TV to watch Netflix, but no more. There's a selection of other lesser-known services available in the interface, too.

about a year and a half ago
Accessorize Your Phone With Another Phone

ldbapp Re:It's backwards (171 comments)

Yep, that's what I posted, umm, before I read your post. (Premature posticulation.) Let's start our own hardware company. There are no barriers to entry, right? ;)

about 2 years ago
Accessorize Your Phone With Another Phone

ldbapp A nugget with a menu of optional interfaces (171 comments)

What I want is a computing nugget that I can carry in my pocket (on a necklace, whatever), and then carry any number of different task-specific interfaces to it. You don't even have to carry them. Just walk up to your desk, and your keyboard and monitor connect and you have a desktop. Pick up your "smart-phone" interface, and go. Pick up your candybar interface and go. But all the computing and storage stays the same. It's your cloud in your pocket. Sell me that, HTC.

about 2 years ago
Smart Guns To Stop Mass Killings

ldbapp Ballistic Malware (1388 comments)

Yeah, that's all we need. Ballistic malware.

about 2 years ago

Submissions

Variable Focus Images

ldbapp writes  |  more than 2 years ago

ldbapp (1316555) writes "The company’s technology allows a picture’s focus to be adjusted after it is taken. While viewing a picture taken with a Lytro camera on a computer screen, you can, for example, click to bring people in the foreground into sharp relief, or switch the focus to the mountains behind them."

Journals

ldbapp has no journal entries.
