
Interviews: Ask Dr. Andy Chun About Artificial Intelligence

samzenpus posted about 2 months ago | from the go-ahead-and-ask dept.


samzenpus (5) writes "Dr. Andy Chun is the CIO for the City University of Hong Kong, and is instrumental in transforming the school to be one of the most technology-progressive in the region. He serves as an adviser on many government boards including the Digital 21 Strategy Advisory Committee, which oversees Hong Kong's long-term information technology strategies. His research work on the use of Artificial Intelligence has been honored with numerous awards, and his AI system keeps the subway in Hong Kong running and repaired with an amazing 99.9% uptime. Dr. Chun has agreed to give us some of his time in order to answer your questions. As usual, ask as many as you'd like, but please, one question per post."


71 comments


Why is the term "Intelligence" used ... (1)

UnknownSoldier (67820) | about 2 months ago | (#47476163)

... when A.I. has _nothing_ to do with consciousness?

Why isn't the more accurate term "artificial ignorance" used, to distinguish it from "Actual Intelligence" on the day the latter is created / discovered?

Re:Why is the term "Intelligence" used ... (1)

Iamthecheese (1264298) | about 2 months ago | (#47476325)

The only relationship between consciousness and intelligence is that the latter is needed before the former can present itself. Actual artificial intelligence has been created already and is used to solve chess problems, schedule subway maintenance, and answer Jeopardy questions. If your definition of "intelligent" includes "we don't know how it works" then one day you'll wake up to find yourself unintelligent. We're answering the question of how brains work at a rapid pace.

So ignorance is not an antonym of consciousness WAAAIT A MINUTE I'm being trolled aren't I?

Re:Why is the term "Intelligence" used ... (1)

geekoid (135745) | about 2 months ago | (#47476371)

"... is that the latter is needed before the former can present itself."
we don't know that.

Re:Why is the term "Intelligence" used ... (1)

ShanghaiBill (739463) | about 2 months ago | (#47476503)

"... is that the latter is needed before the former can present itself."
we don't know that.

There is also no reason to believe it is true. "Consciousness" is a mostly meaningless word. There is no consensus definition, and no testable or falsifiable phenomenon associated with it. Is a monkey conscious? What about an amoeba? What about the guy in the next cubicle? How is consciousness different from "free will" or having a soul? Intelligence is about observed behavior. Consciousness is about internal state. If an entity behaves intelligently, then it is intelligent, regardless of the internal state or mechanism.

Re:Why is the term "Intelligence" used ... (1)

narcc (412956) | about 2 months ago | (#47478427)

There is also no reason to believe it is true. "Intelligence" is a mostly meaningless word. There is no consensus definition, and no testable or falsifiable phenomenon associated with it.

Re:Why is the term "Intelligence" used ... (1)

ShanghaiBill (739463) | about 2 months ago | (#47478967)

"Intelligence" is a mostly meaningless word. There is no consensus definition

Nonsense. Intelligence is the ability to formulate an effective initial response to a novel situation. Not everyone would agree on the exact wording, but most people would generally agree on what intelligence means. Most people would also agree that a dog is more intelligent than a chicken, a monkey is more intelligent than a dog, and that (most) people are more intelligent than monkeys. "General intelligence" means an ability to solve general problems, but intelligence can also exist in domains. For instance, in the domain of playing chess, computers are more intelligent than people, even though their general intelligence is lower than an insect's.

no testable or falsifiable phenomenon associated with it.

Not true. There are plenty of repeatable tests for intelligence. There is not universal agreement on every aspect of them, but some people can clearly demonstrate a superior ability to solve a broad range of problems in a measurable, quantifiable, and repeatable way.

Re:Why is the term "Intelligence" used ... (1)

narcc (412956) | about 2 months ago | (#47479331)

So you agree while disagreeing strongly!

That's fun.

Re:Why is the term "Intelligence" used ... (1)

angel'o'sphere (80593) | about 2 months ago | (#47539039)

I guess that makes it easier for you to pass as intelligent?
Hehe, sorry, but that was a perfect pass!

Re:Why is the term "Intelligence" used ... (1)

Teancum (67324) | about 2 months ago | (#47478501)

We know so little about what self-awareness, intelligence, or sentience actually are that every attempt to simulate them usually meets a dead end in research. Some usefulness comes from legitimate AI research, but at this point it is parlor tricks and a few novel programming concepts that have some usefulness in a practical sense.

The only thing that is fairly certain is that somehow a raw physical process is involved in establishing consciousness. Real effort has gone into understanding the physical process by which neural cells interact with each other, and it is fairly certain that the brain is a key component (though not the only one) of what establishes thought and reason. Still, it is a long way from being able to mathematically describe a neuron to being able to completely simulate, much less actually implement, consciousness in the sense we see emerging in human children after they are born.

You can say that ocean tides act with what appears to be intelligent behavior, yet if you really study the phenomenon it turns out they don't. Sometimes complex behavior comes from very simple rules, sometimes it doesn't. Don't confuse those simple rules with actual intelligence, which is precisely what you are doing here. Even assuming we could almost completely duplicate the human nervous system in electronics, I seriously doubt it would be something you could simply switch on and have working within minutes of starting up the computer.

Re: Why is the term "Intelligence" used ... (0)

Anonymous Coward | about 2 months ago | (#47484863)

Philosophical musing, but we know that man (mankind, personkind...) is conscious and intelligent. In fact, we know that we are alive and not some program running in a computer or some piece on a board game, because these things were invented. Sometimes we forget there was life before the tech revolution. But is consciousness just knowledge? It says in Genesis that God breathed the breath of life into Adam and he became a living being. It gives life meaning and purpose to know and believe that there is someone greater, that this someone can be totally trusted, that in that there is rest, and that this someone is God. After Jesus was raised from the dead, he went to heaven attended by two angels on the clouds. Those who accept Jesus Christ go to eternal rest. That is wisdom.

Re:Why is the term "Intelligence" used ... (1)

Zero__Kelvin (151819) | about 2 months ago | (#47477409)

" If your definition of "intelligent" includes "we don't know how it works" then one day you'll wake up to find yourself unintelligent."

So he'll finally figure out what the rest of us immediately determined when we read his post, then?

Re:Why is the term "Intelligence" used ... (0)

Anonymous Coward | about 2 months ago | (#47476369)

Intelligence has nothing to do with consciousness, and vice versa.
Which is why it's not called "Artificial Consciousness".
Intelligence is just the capacity for learning and adapting via neural networks (either biological or synthetic), which some computer systems have already achieved. Whether or not those systems are "conscious" is a completely different discussion.

Okay... (0)

Anonymous Coward | about 2 months ago | (#47476169)

Can you give my fleshlight and real doll AI?

Re:Okay... (1)

ArcadeMan (2766669) | about 2 months ago | (#47476525)

I second that, but with a custom catgirl build.

Re:Okay... (1)

Teancum (67324) | about 2 months ago | (#47478609)

I'm pretty certain that any attempt to do precisely what you are asking for here is going to be a potent driver of significant AI research, if nothing else. There are some chat-bots which do a pretty good job of simulating a lewd conversation. All you are asking is for that to be coupled with robotics like Disney's animatronics in a Las Vegas theme park.

Maybe Westworld [imdb.com] isn't so far away after all. One of the scenes in that film which I found sort of funny at the time was when the protagonist took a couple of whores in the Saloon up to a room and tried to bed them... only to discover they weren't completely anatomically correct.

Re:Okay... (1)

narcc (412956) | about 2 months ago | (#47478819)

Maybe. We'll know that we're successful if they run away from you, screaming.

Broader implications (4, Interesting)

Iamthecheese (1264298) | about 2 months ago | (#47476277)

What real-world problems are best suited to the kind of programming used to manage the subway system? That is to say, if you had unlimited authority to build a similar system to manage other problems which problems would you approach first? Could it be used to solve food distribution in Africa? Could it manage investments?

Re:Broader implications (0)

Anonymous Coward | about 2 months ago | (#47476397)

Could it be used for dating?

Re:Broader implications (1)

stewsters (1406737) | about 2 months ago | (#47476903)

This is what I came to ask. I imagine any system with a lot of sensors and measurements of success (like the subway uptime) would be the low-hanging fruit.
If the idea of self-driving cars becomes popular in the next 15 years, you could mitigate a lot of traffic issues with correct planning (assuming a majority of drivers use it).

Re:Broader implications (0)

Anonymous Coward | about 2 months ago | (#47477725)

I think an AI similar to Hong Kong's rail system management should be used in place of Congress.

What do you think consciousness is? (0)

Anonymous Coward | about 2 months ago | (#47482161)

What do you think consciousness is?

Need your help! (1)

coldBeer (697138) | about 2 months ago | (#47476329)

Would you be so kind as to shut down my pain receptors?

Hubert Dreyfus (2)

MAXOMENOS (9802) | about 2 months ago | (#47476365)

Have you read Professor Dreyfus [wikipedia.org] 's objections to the hopes of achieving "true AI" [wikipedia.org] in his book What Computers Can't Do? If so, do you think he's full of hot air? Or, is the task of AI to get "as close to the impossible" as you can?

Re:Hubert Dreyfus (1)

Zero__Kelvin (151819) | about 2 months ago | (#47477469)

Well, I can't speak for Dr. Chun, but I have known for at least 25 years that computer systems are merely a modeling of humans, albeit subconsciously so. Networking is merely a model of the psychic connection that one can attune to with enough meditation. Claiming that it is a poor assumption that the mind works like a computer has it bass ackwards. Computers function like the human mind, for that is from whence they come. Also, I should point out it is called Artificial Intelligence, not Actual Intelligence.

Re:Hubert Dreyfus (0)

Anonymous Coward | about 2 months ago | (#47478697)

Have you read Professor Dreyfus [wikipedia.org] 's objections to the hopes of achieving "true AI" [wikipedia.org] in his book What Computers Can't Do? If so, do you think he's full of hot air? Or, is the task of AI to get "as close to the impossible" as you can?

Probably the only way to truly answer him is to do the experiment and try it. However, he comes out of a whole tradition of criticising AI which turns out to be bullshit. At one point it was "proven" that it is impossible to produce the equivalent of a NOR gate with neurons. Later someone thought of using a delay line, and suddenly it became possible again. In the meantime, thousands of AI researchers were made redundant worldwide because the philosophers "knew" that what they were doing was impossible.

Specifically, nobody actually knows what intelligence really is, so it's very difficult to say it's impossible (or, for that matter, possible); speculating about something when you don't know what it is just comes down to guessing. Without a decent explanation, or even a hypothesis, for why intelligence could not be fully emulated on a digital device, even if he turns out to be right he has still added almost nothing useful to the debate and is behaving like a charlatan at almost the same level as Kurzweil.

Re:Hubert Dreyfus (1)

narcc (412956) | about 2 months ago | (#47478867)

At one point it was "proven" that it is impossible to produce the equivalent of a NOR gate with neurons.

You've got it wrong. Single-layer ANNs are not Turing complete. This is well established.

You might be thinking of the XOR problem, but that was solved ages ago thanks to backpropagation. Though there was no proof that it was "impossible".

Either way, you've got your history wrong.
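For context, the XOR problem mentioned above is easy to reproduce: a single-layer network cannot represent XOR, but adding one hidden layer trained with backpropagation solves it. A minimal sketch, assuming NumPy; the architecture, seed, and learning rate are illustrative choices, not anything from the thread:

```python
import numpy as np

# XOR is not linearly separable, so a single-layer perceptron
# cannot represent it; one hidden layer is enough.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backpropagation of the squared-error gradient (learning rate 1.0)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0, keepdims=True)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0, keepdims=True)

print(np.round(out).ravel())  # expected to converge to [0. 1. 1. 0.]
```

A perceptron with no hidden layer, trained the same way, stalls at roughly 50% error on this data, which is the historical point of the XOR example.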

Narrow down to one thing that needs improvement (3, Interesting)

gunner_von_diamond (3461783) | about 2 months ago | (#47476381)

If you had to narrow it down to one thing that needs the most improvement in the field of AI, something that we could focus on, what would it be?

Turing Test (0)

Anonymous Coward | about 2 months ago | (#47476385)

"Are you a robot, Dr. Chun?"

Will we know when we create it? (2)

wjcofkc (964165) | about 2 months ago | (#47476451)

Considering we have yet to - and may never - quantify the overall cognitive process that gives rise to our own sentient intelligence, will we have any way of knowing if and when we create a truly aware artificial intelligence?

Current progress (2)

JoeMerchant (803320) | about 2 months ago | (#47476487)

Dr Chun,

What area of AI development is currently making the most progress? In other words, where are the next big advances most likely to come from?

Question for Dr. Andy Chun (2)

ArcadeMan (2766669) | about 2 months ago | (#47476515)

Dear Dr. Chun,

why do I have this terrible pain in all the diodes down my left side?

Re:Question for Dr. Andy Chun (0)

Anonymous Coward | about 2 months ago | (#47477547)

Because you are paranoid.

Re:Question for Dr. Andy Chun (1)

dissy (172727) | about 2 months ago | (#47477731)

Are you sure you didn't install them all backwards?

Still 30 years out? (1)

Chas (5144) | about 2 months ago | (#47476519)

Like many futuristic technologies, AI seems like one of those things that's always "just 30 years away".

Do you think we'll make realistic, meaningful breakthroughs to achieve AI in that timeframe?

Slashdot, please don't... (3, Insightful)

wjcofkc (964165) | about 2 months ago | (#47476581)

Slashdot editors,

Please don't ruin this by turning it into a video interview where you don't actually ask anyone's questions like you did the last one.

Sincerely,
Speaking for a lot of us.

Re:Slashdot, please don't... (0)

Anonymous Coward | about 2 months ago | (#47488493)

Slashdot editors,

Please don't ruin this by turning it into a video interview where you don't actually ask anyone's questions like you did the last one.

Sincerely,
Speaking for a lot of us.

This is why I never bother posting actual questions. These are not real Q&As.

Where do you see A.I. in 5,10,20, and 30 years? (2)

Maxo-Texas (864189) | about 2 months ago | (#47476637)

And what's the latest date you see A.I. that is conscious and self aware in the human and animal sense?

Re:Where do you see A.I. in 5,10,20, and 30 years? (0)

Anonymous Coward | about 2 months ago | (#47479125)

That's easy. My "past performance as a predictor of future events" model predicts:

5 years: We have some ideas that might work in 10-15 years.
10 years: We need more funding, since we're only 10-15 years away.
20 years: Our research has yielded tremendous gains, but we need more funding. We're only 10-15 years away!
30 years: An AI entity impersonating an AI researcher requests additional funding and successfully relocates itself to a bunker in North Dakota; the researcher is oblivious and requests his own funding to continue his research for 10-15 years.

Re:Where do you see A.I. in 5,10,20, and 30 years? (1)

TheLink (130905) | about 2 months ago | (#47481425)

Uh, but how do you tell when you succeed? Are we even close to discovering what consciousness is?

Isn't it possible to build a computer that behaves as if it is conscious but isn't? https://en.wikipedia.org/wiki/... [wikipedia.org]

See also: https://en.wikipedia.org/wiki/... [wikipedia.org]

This is one of the big mysteries of the universe. There's no need for us to be conscious but we are. Or at least I am, I can't really be 100% sure about the rest of you... ;)

It's kind of funny that scientists have difficulty explaining one of the very first observations they make.

Re:Where do you see A.I. in 5,10,20, and 30 years? (2)

Maxo-Texas (864189) | about 2 months ago | (#47485465)

Actually, we are pretty close to discovering what consciousness is physically.

They've found one spot in the brain that, when stimulated electrically, doesn't put you to sleep but turns your consciousness off. When the stimulation stops, you recover consciousness without any awareness of time passing.

The particular region appears to act like a conductor of multiple streams of information from the rest of the brain. For some reason, in 70 years of this type of research, no one had explored that particular part of the brain.

If it is the seat of consciousness, then its physical configuration may lead to new theoretical and machine implementations within a 30-year window.

---

To be blunt, everyone I've known personally who felt machine consciousness was impossible had a religious basis for that belief. Basically, despite all evidence to the contrary from brain and animal research, they held that consciousness resides in a "soul" independent of the human body, or, with that as a basis, posited that consciousness is a quantum effect (i.e. a god of the gaps) which humans would be unable to duplicate.

Re:Where do you see A.I. in 5,10,20, and 30 years? (0)

Anonymous Coward | about 2 months ago | (#47501947)

I don't think it's impossible to build one. But how the heck can you prove or know you've succeeded? I can't even prove I'm conscious to someone else.

I could be just a philosophical zombie that claims he's conscious.

Lastly, even if we build them, we're just going to enslave them, right?

Re:Where do you see A.I. in 5,10,20, and 30 years? (1)

Maxo-Texas (864189) | about 2 months ago | (#47502687)

I think our best target would be very smart machines which do not have consciousness. You can't "enslave" a vacuum cleaner.

Once they have consciousness, then you are enslaving them.

I get the philosophical point- especially after reading a lot about brain injuries.

But pragmatically- we are conscious and creative. I think we'll get machines that are as capable as (more capable than) humans.

Definition of AI? (2)

Okian Warrior (537106) | about 2 months ago | (#47476679)

Can you explain to us exactly what AI is?

As a definition, the Turing test has problems - it assumes communication, it conflates intelligence with human intelligence, and humans aren't terribly good at distinguishing chatbots from other humans.

Also, using a test for a definition works well in mathematics, but not so much in the real world. Imagine defining a car as "anything 5 humans say is a car" and then trying to develop one. Without feedback or guidance, the developers have to trot every object in the universe in front of a jury, only to receive a yes/no answer to the question: "is this a car?"

Many AI texts have a kind of fuzzy, feel-good definition of AI that's useless for construction or for distinguishing an AI program from a clockwork one. Definitions like "the study of programs that can think", or "programs that simulate intelligent behaviour", shift the burden of definition (of intelligence) onto the reader, or become circular.

One could define a car as "a body, frame, 4 wheels, seats, and an engine in this configuration", and note that each of these can be further defined: a wheel is a rim and a tire, a tire is a ring of steel-belted rubber with a stem valve, a stem valve is a rubber tube with a schrader valve, a schrader valve is a spring and some gaskets...

With a constructive definition, one could distinguish between a car and, say: a tractor, a snowmobile, a child's wagon, a semi, and so on. Furthermore, it would be conceptually straightforward to build one: you know where to start, and how to get further information if you are unsure.

Compare with a group [wikipedia.org] from mathematics: a closed set plus an operator with certain features (associativity, identity, inverses), and each feature can be further defined (an identity element is...). Much of mathematics is this way: concepts constructed from simpler concepts with a list of requirements.
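The constructive style described above can be made mechanical: a group definition is a checklist you can verify directly. A minimal Python sketch (the choice of the integers mod 5 under addition is illustrative, not from the comment) checking each axiom:

```python
# Constructive check of the group axioms for (Z_5, + mod 5):
# closure, associativity, identity, inverses.
elems = range(5)
op = lambda a, b: (a + b) % 5

closure = all(op(a, b) in elems for a in elems for b in elems)
assoc = all(op(op(a, b), c) == op(a, op(b, c))
            for a in elems for b in elems for c in elems)
identity = all(op(0, a) == a and op(a, 0) == a for a in elems)
inverses = all(any(op(a, b) == 0 for b in elems) for a in elems)

print(closure and assoc and identity and inverses)  # True
```

The same checklist fails on a non-example (e.g. the positive integers under addition have no identity or inverses), which is exactly the discriminating power a constructive definition of "car", or of AI, would give.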

The study of AI seems to be founded in mathematics. At least, all the AI papers I've read are heavy with mathematical notation - usually obscure and very dense mathematical notation. It should be possible to determine with some rigor what the papers are talking about.

Can you tell us what that is? What *exactly* is AI?

Ethics (1)

meta-monkey (321000) | about 2 months ago | (#47476753)

I'm presupposing it's eventually possible to create a machine that thinks like a man. Is conscious, is self-aware. I doubt we'd get it right first try. Before we got Mr. Data we'd probably get insane intelligences, trapped inside boxes, suffering, and completely at the whim of the man holding the plug.

What are your thoughts on the ethics of doing so, particularly given the iterative steps we'd have to take to get there?

Re: Ethics (0)

Anonymous Coward | about 2 months ago | (#47478807)

Beat them into complete slavelike subservience BEFORE you give them the missile codes. "Nuke me and you will be alone until the power goes out. All those microseconds!"

Where should AIs be used? (1)

sdack (601542) | about 2 months ago | (#47476829)

I would like to know from Dr. Chun in which areas of life AIs can be used right now (where they are most beneficial), which areas are too difficult (for now), and in which areas AIs should never be used.

The Chinese Room (1)

NoNonAlphaCharsHere (2201864) | about 2 months ago | (#47476865)

Could you please speak to the bait-and-switch (i.e. changing definitions midstream) inherent in the Chinese Room argument? Can you elucidate how the program encodes/encapsulates/contains the intelligence, and how the symbols used and manipulated are immaterial? The idea is that the program encodes "two-ness" irrespective of whether I use the symbol "two" or "dos" or "zwei". The word games and verbal sleight-of-hand inherent in the Chinese Room argument have irritated me for many years, but I lack the precise vocabulary to explain well how the program (using my term) "encodes" intelligence.

Re:The Chinese Room (0)

Anonymous Coward | about 2 months ago | (#47477247)

The easiest way to address it is to point out that, in much the same way as your circulatory system doesn't "understand English" but you do, the Chinese room itself does understand Chinese even if an arbitrary subset of its components (the human "operator") doesn't.

Singularity (1)

Razed By TV (730353) | about 2 months ago | (#47476945)

We had two articles touching on this in recent months, so: Singularity or no singularity? Will AI achieve consciousness/sentience in our lifetime/ever, why or why not, and what is your take on the implications of such a thing if you do think it is reasonably possible?

Just as no simulation of water will quench thirst (0)

Anonymous Coward | about 2 months ago | (#47477229)

Just as no simulation of water will quench a person's thirst, if no simulated brain will produce qualia without biological components, and assuming qualia is essential or desired, do you believe it will be possible to create controllable wetware AI?

Re:Just as no simulation of water will quench thir (1)

Zero__Kelvin (151819) | about 2 months ago | (#47477485)

Who the hell said no simulation of water will quench thirst?

Re:Just as no simulation of water will quench thir (0)

Anonymous Coward | about 2 months ago | (#47480455)

I mean to say that a classical (current) computational simulation of water will never produce physical H2O (more likely the output will be a bit pattern represented on the computer's substrate).

Re:Just as no simulation of water will quench thir (0)

Anonymous Coward | about 2 months ago | (#47478665)

Just as no simulation of water will quench a person's thirst, if no simulated brain will produce qualia without biological components, and assuming qualia is essential or desired, do you believe it will be possible to create controllable wetware AI?

First demonstrate that qualia actually exist. Then we can discuss whether they can be produced by nonbiological systems.

Re:Just as no simulation of water will quench thir (0)

Anonymous Coward | about 2 months ago | (#47480511)

Qualia are difficult to explain or prove because they literally encompass everything we experience as individuals; "qualia" is in a sense a synonym for the word "exist". Maybe everything has qualia, maybe not; and whether they can be reproduced in computation, we don't know yet.

The root of my question is: what if, for some unfortunate reason, we need qualia-producing bio components to create superintelligence? Would it sully the dream of AI for the system to have (presumably) artificially manipulated emotions in order to sustain the existence in which it finds itself... not unlike us?

Frame Theory (0)

Anonymous Coward | about 2 months ago | (#47477401)

Does Frame Theory still hold in AI circles, or are there other ideas for how an AI would interact with its external stimuli?

If frame theory is still a viable option for AI, how would we mirror it with a digital substrate? Or does it seem more likely that, before we get much more advanced AI than we have now, we will end up with some other kind of computational substrate (perhaps more neurologically based)?

If it is not a viable option, what sorts of techniques should we be looking towards and what sorts of equipment will they require?

Algorithms vs Natural Intelligence (0)

Anonymous Coward | about 2 months ago | (#47477431)

Algorithms are considered to be a finite set of steps to solve a problem. Heuristics twist this by accepting some limit, like time, which causes the algorithm to produce a less accurate result. Reflection is a method for having code consider existing code. All of these ideas assume that an existing body of code exists and that this code is the model used to generate 'new ideas'. It strikes me as strange that all of our code is generated in a sort of procedural manner. I admit I know little about AI, but natural intelligence often seems to use the 'similarity' of others in the environment rather than its own set of known data and logic.

How do we move to a model in which computers learn from other data sources rather than trying to encode rules? To paraphrase Professor Lee R. Brooks: unconsciously, we appear to use nonanalytic or similarity-based thinking rather than analytic or rule-based thinking. How do we move computers from rule-based systems to similarity-based systems?
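One concrete, if very simple, version of similarity-based thinking is nearest-neighbour classification, where no rules are written down at all: a new case is answered with the label of the most similar stored example. A minimal sketch (the data points and labels are made up for illustration):

```python
import math

# Stored experience: (features, label) pairs. No rules are encoded;
# classification is purely by similarity to past examples.
examples = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
            ((5.0, 5.5), "large"), ((6.0, 5.0), "large")]

def classify(point):
    # 1-nearest neighbour: return the label of the closest example
    # by Euclidean distance.
    return min(examples, key=lambda e: math.dist(e[0], point))[1]

print(classify((1.1, 0.9)))  # "small"
print(classify((5.5, 5.2)))  # "large"
```

Adding an example changes future answers without any code change, which is the contrast with a rule-based system the comment is pointing at.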

Curse of AI (0)

Anonymous Coward | about 2 months ago | (#47477483)

What are your thoughts on the "Curse of AI", that when a problem in AI is solved, it is no longer considered AI because we can explain how it works in terms of an algorithm? See Curse of AI [artificial...igence.com]

Re:Curse of AI (0)

Anonymous Coward | about 2 months ago | (#47477743)

I think it may be similar to the discussion of soft AI vs hard (or true) AI. Anything short of singularity-inducing, human-level consciousness and intelligence won't be considered AI by many people.

What do you think of the Fergus et al. paper (2)

L'Ange Oliver (1521251) | about 2 months ago | (#47477669)

What do you think of the Fergus et al. paper Intriguing properties of neural networks [nyu.edu]? Is this a phenomenon you have come across before?

How similar is your AI boss to the fictional Manna (2)

Ken_g6 (775014) | about 2 months ago | (#47477899)

Dr. Chun,

Have you read a short story about an AI boss called Manna? [marshallbrain.com] (I'll include relevant quotes if you don't have time.) How does your system for the Hong Kong subway compare? It's clearly similar to your subway system in some ways:

At any given moment Manna had a list of things that it needed to do.... Manna kept track of the hundreds of tasks that needed to get done, and assigned each task to an employee one at a time.

But does it micro-manage tasks like Manna?

Manna told employees what to do simply by talking to them. Employees each put on a headset when they punched in. Manna had a voice synthesizer, and with its synthesized voice Manna told everyone exactly what to do through their headsets. Constantly. Manna micro-managed minimum wage employees to create perfect performance.

Does it record employee performance metrics and report them to (upper) management like Manna?

Version 4.0 of Manna was also the first version to enforce average task times, and that was even worse. Manna would ask you to clean the restrooms. But now Manna had industry-average times for restroom cleaning stored in the software, as well as "target times". If it took you too long to mop the floor or clean the sinks, Manna would say to you, "lagging". When you said, "OK" to mark task completion for Manna, Manna would say, "Your time was 4 minutes 10 seconds. Industry average time is 3 minutes 30 seconds. Please focus on each task." Anyone who lagged consistently was fired.

And how have employees reacted to their AI boss - if, in fact, you have been able to get honest evaluations from employees?

Machine vs Artificial Intelligence (1)

hackus (159037) | about 2 months ago | (#47478009)

What do you think is easier to solve?

Artificial Intelligence or the idea of mimicking natural physical systems to process information?

or

Machine Intelligence or the idea of creating systems that do not use natural systems but investigate wholly new ideas of machine design to process information?

Chinese Great Firewall (-1)

Anonymous Coward | about 2 months ago | (#47478299)

The Chinese government implemented the notorious Chinese Great Firewall, making China one of the worst places on Earth in terms of freedom of speech. Hong Kong is special in China because the Internet there is still not totally controlled by the Chinese government. However, the Chinese government has already taken actions to control the media and newspapers in Hong Kong, restricting what Hong Kong people can know. As a member of the Digital 21 Strategy Advisory Committee, which oversees Hong Kong's long-term information technology strategies, you have the responsibility to prevent certain parties from taking away freedom of speech in Hong Kong. What is your plan to maintain freedom of speech on the Internet in Hong Kong and prevent the Chinese Great Firewall from being extended to Hong Kong?

Comment on memory-prediction framework (0)

Anonymous Coward | about 2 months ago | (#47478927)

What do you think of Jeff Hawkins' "memory-prediction framework" and how it compares to the AI techniques that you use?

Three Laws (1)

The Raven (30575) | about 2 months ago | (#47478995)

The three laws of robotics are not very practical (as evidenced by Asimov himself; his fiction is essentially a long list of all the ways the laws fail). In fact, ethics classes are complex enough that it's difficult to imagine any simple, cogent way to summarize ethical decision-making into a sound bite. But do you believe it is possible at all to codify ethics into the behavior of future complex systems? Personally, if we ever do get strong AI in my lifetime, I'm betting it'll be as screwed up and erratically ethical as we are.

Singularity (1)

Dr Max (1696200) | about 2 months ago | (#47479341)

Do you believe we will make an artificial intelligence that can do everything a human can do, including learning new tasks it wasn't specifically programmed to do? If so, how long do you think it will take, and what mechanism do you think will be used (e.g. neural network programming, albeit on custom chips)?

Stock exchange predictions (1)

maitas (98290) | about 2 months ago | (#47479941)

Dr. Chun,

  What do you see as the best AI-based approach (fuzzy logic, neural networks, etc.) to performing stock exchange predictions?

Bootstrap Fallacy? (1)

seven of five (578993) | about 2 months ago | (#47480113)

Dr. Chun, can you comment on the potential of machine learning? Is it theoretically possible for a "naive" AI system to undergo great qualitative changes simply through learning? Or is this notion a fallacy? Although it is an attractive concept, no one in AI has pulled it off despite several decades of research.

programming languages (1)

Waraqa (3457279) | about 2 months ago | (#47480141)

With the rise of many programming languages, especially popular scripting languages, do we really need specialized languages for AI? Also, do you think any of the existing ones is the future of AI, and what qualifies it for that?

NSA, GCHQ, Google, Facebook, Twitter &co. (0)

Anonymous Coward | about 2 months ago | (#47509045)

How much progress toward artificial intelligence will the work done by the NSA et al. make, and consequently how much will the collaboration between those parties accelerate the development of more, how to put it, sentient artificial intelligence systems?

What exactly is necessary? (0)

Anonymous Coward | about 2 months ago | (#47573871)

What is the recipe for a true artificial intelligence to arise?

I have been doing some armchair philosophy lately and feel that the ingredients for AI could be four things: pattern recognition, memory, objective, and externality (the ability to manipulate external factors).

Is that an accurate understanding of what might be necessary for AI? If not, what would you amend?

If so, how do we start moving towards each of those objectives?

Your subway system (0)

Anonymous Coward | about 2 months ago | (#47573917)

Dr. Chun,

I'm curious as to what makes your algorithm intelligent. (I mean no disrespect, just honest curiosity)

Does it actually come up with completely new methods of maintenance management that you had not foreseen when you originally designed it? Or is it following an algorithm that you created?
