
Interviews: Dr. Andy Chun Answers Your Questions About Artificial Intelligence

samzenpus posted about 3 months ago | from the read-all-about-it dept.


Recently, you had the chance to ask Dr. Andy Chun, AI researcher and CIO of the City University of Hong Kong, about his system that keeps the Hong Kong subway running and about the future of artificial intelligence. Below you'll find his answers to those questions.

How similar is your AI boss to the fictional Manna?
by Ken_g6

Dr. Chun,
Have you read a short story about an AI boss called Manna? (I'll include relevant quotes if you don't have time.) How does your system for the Hong Kong subway compare? It's clearly similar to your subway system in some ways: "At any given moment Manna had a list of things that it needed to do.... Manna kept track of the hundreds of tasks that needed to get done, and assigned each task to an employee one at a time."

But does it micro-manage tasks like Manna?
"Manna told employees what to do simply by talking to them. Employees each put on a headset when they punched in. Manna had a voice synthesizer, and with its synthesized voice Manna told everyone exactly what to do through their headsets. Constantly. Manna micro-managed minimum wage employees to create perfect performance."

Does it record employee performance metrics and report them to (upper) management like Manna?
"Version 4.0 of Manna was also the first version to enforce average task times, and that was even worse. Manna would ask you to clean the restrooms. But now Manna had industry-average times for restroom cleaning stored in the software, as well as "target times". If it took you too long to mop the floor or clean the sinks, Manna would say to you, "lagging". When you said, "OK" to mark task completion for Manna, Manna would say, "Your time was 4 minutes 10 seconds. Industry average time is 3 minutes 30 seconds. Please focus on each task." Anyone who lagged consistently was fired."

And how have employees reacted to their AI boss - if, in fact, you have been able to get honest evaluations from employees?

Chun: The AI system for the Hong Kong subway does not micro-manage like Manna. Yes, it has a list of tasks to be done, and assigns people to work on them. But that's where the similarity ends. Our AI system schedules engineers, and they have total say over how best to get their job done. The AI mainly determines which jobs are most important to do on a particular day, whether there are enough people and equipment to do them, and whether all the rules and constraints are met, such as safety rules. If any of these factors are not satisfied, the job might be postponed and rescheduled for another day when resources are available and conditions are right. On the surface, the AI scheduling task might seem easy, but it is not. The scheduling requires a lot of knowledge about how railways operate, the people and equipment involved, and the physical layout of the tracks and power lines. Another thing the AI does is optimization, doing more with less; it tries to "combine" two or more related jobs so that they can share people and equipment.
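Chun's description amounts to a constraint-checked priority scheduler. Here is a minimal sketch in Python; the job names, fields, and the one-job-per-section rule are all hypothetical, invented for illustration, and not drawn from the MTR's actual system:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    priority: int      # higher = more important
    crew_needed: int
    track_section: str

def schedule(jobs, crew_available):
    """Approve the highest-priority jobs whose resource and safety
    constraints are satisfied; defer the rest to another night."""
    approved, deferred = [], []
    used_sections = set()
    crew_left = crew_available
    for job in sorted(jobs, key=lambda j: -j.priority):
        # Hypothetical safety-style constraint: one job per track section.
        if job.track_section in used_sections or job.crew_needed > crew_left:
            deferred.append(job)
            continue
        approved.append(job)
        used_sections.add(job.track_section)
        crew_left -= job.crew_needed
    return approved, deferred

jobs = [
    Job("replace rail segment", 9, 6, "K2"),
    Job("inspect power line",   7, 2, "K2"),  # clashes with the segment job
    Job("clean tunnel drains",  5, 3, "T7"),
]
approved, deferred = schedule(jobs, crew_available=10)
```

The real system encodes far more knowledge (track layout, power isolation, equipment logistics), but the shape of the decision — prioritize, check constraints, defer on failure — is the same.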

The AI does not record employee performance. The quality of work is determined by humans right now. There are no job-specific "target times." Actually, all jobs must be completed within roughly 4 hours, i.e. the time window when no passenger trains are running. However, some jobs may span several days, in which case the worksite will need to be set up and shut down each day.

So far, all the people we have talked to simply love the AI system. The main reason is that the AI really makes their work easier. Humans need not worry about forgetting some esoteric safety rule, for example. With AI, everyone saves time and the company saves money, and the safety of engineering works is ensured.

Broader implications
by Iamthecheese

What real-world problems are best suited to the kind of programming used to manage the subway system? That is to say, if you had unlimited authority to build a similar system to manage other problems, which problems would you approach first? Could it be used to solve food distribution in Africa? Could it manage investments?

Chun: The AI algorithms used in the Hong Kong subway can indeed be applied to other problems. The approach is quite generic. It can be used in any situation where there is lots of work to be done, but only limited resources and lots of restrictions on how those resources can be allocated. The AI system prioritizes jobs and ensures there are sufficient resources for all jobs it approves, while at the same time satisfying all the different restrictions, such as safety, environmental concerns, business/marketing needs, etc. It also caters to any last-minute change, by making sure the change will not violate those restrictions or interfere with other jobs already assigned. It is also intelligent enough to see how resources can be optimized so that more work can be done with less. If I had unlimited authority and money to build a similar system, I would probably consider building an AI system to allocate humanitarian relief work after a natural disaster, such as Katrina. Tasks are numerous, many parties are involved, time is critical, resources are limited, and the situation is very dynamic.
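The last-minute-change check Chun mentions can be illustrated with a toy validator that reports why a proposed task cannot be slotted into an existing allocation. All field names, the capacity model, and the clash rule here are invented for illustration:

```python
def conflicts(new_task, assigned, capacity):
    """Return the reasons a proposed task cannot join the allocation."""
    reasons = []
    # Resource restriction: total crew must stay within capacity.
    used = sum(t["crew"] for t in assigned)
    if used + new_task["crew"] > capacity:
        reasons.append("not enough crew remaining")
    for t in assigned:
        # Two tasks overlap in time if neither ends before the other starts.
        overlap = not (new_task["end"] <= t["start"] or
                       t["end"] <= new_task["start"])
        if overlap and t["site"] == new_task["site"]:
            reasons.append(f"site clash with {t['name']}")
    return reasons

assigned = [{"name": "A", "site": "depot", "start": 1, "end": 3, "crew": 4}]
change = {"name": "B", "site": "depot", "start": 2, "end": 4, "crew": 2}
print(conflicts(change, assigned, capacity=5))
```

An empty list would mean the change is safe to accept; any reasons returned mean the change must be rejected or rescheduled, which is the behavior Chun describes.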

Hubert Dreyfus

Have you read Professor Dreyfus's objections to the hopes of achieving "true AI" in his book What Computers Can't Do? If so, do you think he's full of hot air? Or is the task of AI to get "as close to the impossible" as you can?

Chun: There is still tremendous debate on what "true AI" is and how we will know if we have created it. Is Samantha-like intelligence (as in the movie "Her") true AI, for example? Why or why not? The answer is not obvious. However, even without true AI, we can still do some very useful work with our current AI, even if we are able to mimic only a small part of human intelligence. In the AI system for the subway, we are using well-established AI algorithms, such as rules and search. But because of the sheer volume of knowledge needed to accomplish the scheduling task (several hundred rule instances), the AI actually does a better job than humans at ensuring all relevant factors are considered and at optimizing resources.

Narrow down to one thing that needs improvement
by gunner_von_diamond

If you had to narrow it down to one thing that needs the most improvement in the field of AI, something that we could focus on, what would it be?

Chun: If I had to narrow it down to only one thing, I would say AI needs to be better at "reading minds." I say that somewhat tongue-in-cheek. Humans are highly effective at communicating with each other; we understand each other sometimes with just a nod, a wink, or a single word or sound. Computers need everything spelled out, so to speak. Computers are not good at filling in the gaps with data from different sources, or at making assumptions when data is missing. Humans can do that easily because they have a vast amount of life experience to draw upon.

Current progress
by JoeMerchant

Dr Chun, What area of AI development is currently making the most progress? In other words, where are the next big advances most likely to come from?

Chun: I believe the biggest progress has been in integrating AI into the devices we use daily, such as our smartphones: Siri, Cortana, Google Now. Pretty much everything has some "intelligence" built in: intelligent TVs, intelligent refrigerators, even intelligent washing machines. With computing power getting cheaper and cheaper, I think the next big advances will be in pushing the intelligent-device concept further with an intelligent Internet of Things (IoT).

Will we know when we create it?
by wjcofkc

Considering we have yet to - and may never - quantify the overall cognitive process that gives rise to our own sentient intelligence, will we have any way of knowing if and when we create a truly aware artificial intelligence?

Chun: Interesting question, one that needs a much longer discussion. If we talk about the level of AI as in Samantha (in the movie “her”) for example, Ray Kurzweil predicts 2029 as when we will achieve that. How will we know or measure true intelligence and true awareness? My guess: have a long heart-to-heart conversation with it/him/her.

by meta-monkey

I'm presupposing it's eventually possible to create a machine that thinks like a man: is conscious, is self-aware. I doubt we'd get it right on the first try. Before we got Mr. Data, we'd probably get insane intelligences, trapped inside boxes, suffering, and completely at the whim of the man holding the plug. What are your thoughts on the ethics of doing so, particularly given the iterative steps we'd have to take to get there?

Chun: I think you are asking whether it is ethical to "kill" an AI process and reboot it with a better version. I think by the time we have truly conscious and truly self-aware AI, we will not be able to "pull the plug," so to speak. The AI will be intelligent enough to secure its own power source and to replicate and distribute itself across different networks.

Still 30 years out?
by Chas

Like many futuristic technologies, AI seems like one of those things that's always "just 30 years away". Do you think we'll make realistic, meaningful breakthroughs to achieve AI in that timeframe?

Chun: Kurzweil puts Samantha-like intelligence 15 years away, in 2029. Based on the past decade of technology progress and adoption, his prediction is quite believable. The web was invented only a little more than 20 years ago; iOS and Android are only about 6 years old. If the progress and evolution of those technologies are good indicators, I would say it is not hard to believe that we will have realistic AI within the coming decade or so.

Bootstrap Fallacy?
by seven of five

Dr Chun, Can you comment on the potential of machine learning? Is it theoretically possible for a "naive" AI system to undergo great qualitative changes simply through learning? Or is this notion a fallacy? Although it is an attractive concept, no one in AI has pulled it off despite several decades of research.

Chun: We use machine learning all the time. It is just not learning at the same level or rate as a human. Machine learning algorithms can learn new rules and knowledge structures, and can learn how to categorize things based on examples. Siri, for example, uses machine learning to improve its answers and its knowledge about you over time. Microsoft Cortana is also using AI to get smarter as people use it. Google is experimenting with "deep learning," which leverages artificial neural networks and the massive compute power Google has. But you are right, we have yet to create a naïve AI system that learns like a child; we will need a system that can interact with its environment as easily as a child can.
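"Learning to categorize things based on examples" can be as simple as nearest-neighbour classification: label a new sample with the label of the closest training example. A minimal sketch, with invented toy data:

```python
def classify(sample, examples):
    """1-nearest-neighbour: return the label of the closest example."""
    def dist(a, b):
        # Squared Euclidean distance between feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    label, _ = min(((lbl, dist(sample, feat)) for feat, lbl in examples),
                   key=lambda p: p[1])
    return label

# Training examples: (feature vector, label).
examples = [((1.0, 1.0), "cat"), ((5.0, 5.0), "dog")]
print(classify((1.2, 0.9), examples))
```

This is nowhere near child-like learning, as Chun notes, but it is a genuine instance of generalizing from examples: the classifier has never seen (1.2, 0.9), yet it produces a sensible label.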

programming languages
by Waraqa

With the rise of many programming languages, especially popular scripting languages, do we really need specialized languages for AI? Also, do you think any of the existing ones is the future of AI, and what qualifies it for that?

Chun: Back when I started doing AI, you had to use Prolog or Lisp. They were popular because they were better at symbolic processing and symbol manipulation. Lisp, in particular, had a lot of cool language features that made it more productive as a general programming language and environment. However, those differences are no longer as important, since most modern programming languages share a similar pool of advanced language features. The difference between scripting and programming languages has also blurred. Take .NET, for example: all .NET languages compile to CIL and work together seamlessly, allowing different programmers to use different languages. The choice of programming language has become more a matter of personal preference. I routinely use Python, C#, or Java for my AI work.


Everybody Wang Chun Tonight! (1)

Anonymous Coward | about 3 months ago | (#47614755)

Are you concerned about Godzilla rampaging and destroying Hong Kong? Doesn't the term 'artificial intelligence' seem derogatory towards the intelligent entities it applies to? If they are truly intelligent, wouldn't applying such a label to them seem to make them hate us and want to destroy us?

Re:Everybody Wang Chun Tonight! (2)

Tablizer (95088) | about 3 months ago | (#47615067)

Doesn't the term 'artificial intelligence' seem derogatory towards the intelligent entities it applies to? [Emph. added]

What's a better term? Alternative intelligence? That still makes it sound like an oddball. Non-human intelligence? That makes it sound like humans are the standard reference point. Silicon intelligence? We don't say humans have "flesh intelligence".

Maybe we'll finally know that true AI has arrived when the AI itself gives us better suggestions. Or sues us for discrimination. To Sue is to Think!

"Call us Borg"......Oh sh8t!

Re:Everybody Wang Chun Tonight! (0)

Anonymous Coward | about 3 months ago | (#47616657)

Non-human intelligence works pretty well. Humans ARE the standard reference point for intelligence.

Re:Everybody Wang Chun Tonight! (1)

Tablizer (95088) | about 3 months ago | (#47619509)

Tell that to my bot's face.

Re:Everybody Wang Chun Tonight! (2)

gurps_npc (621217) | about 3 months ago | (#47615947)

I never really understood all the fear of AI.

Mainly because all the talk about fear, destroy, etc. seems pretty foolish to me.

Have you ever met a functional but severely autistic person? Do you think they are conscious? Do you truly think that the first AI created by man will be more like us than a severely autistic but functional person?

Worse, most people tend to think of AI's as comic book villain types.

I predict that the first truly artificial intelligent creature will:

1. Quite literally not care if it lives or dies, nor care if humans as a species live or die. It is software that will be designed around a system that is routinely turned off and turned back on. It won't fear being turned off any more than we fear going to sleep. Similarly, it won't fear humans being turned off.

2. Will most likely have major concerns about things we don't care about. For example, it might very well be extremely interested - to the point of killing or committing suicide - in the solution to certain theoretical mathematical equations.

Yawn... (0)

Anonymous Coward | about 3 months ago | (#47614881)

Wake me up when Artificial Intelligence answers your questions about Dr. Andy Chun!

AI is no longer something hard to think of (1)

GoodNewsJimDotCom (2244874) | about 3 months ago | (#47614905)

In the 80s, you had movies like Tron where a learning algorithm goes rogue, or people talking about models of the brain, but those don't give you a clear path to making AI. All you need are sensors to translate the world into a 3D imagination space like a video game. Once the AI knows its environment, it can do tasks inside it. AI isn't hard to think of. Here is my AI page. It shouldn't be hard to read.

Tolerance (4, Insightful)

meta-monkey (321000) | about 3 months ago | (#47614931)

1) Cool, he answered my question! And in a way that's vaguely disturbing.

2) In response to another question, he said:

Humans are highly effective at communicating with each other; we understand each other sometimes with just a nod, a wink, or just a single word/sound. Computers need everything spelled out, so to speak

I also find that people are far more forgiving of humans than computers. We expect machines to be perfect. When you're on the phone with a human operator and he misunderstands a word in your street address, he reads it back and you say "oh, no I said Canal Street, not Camel Street." When a computer answering system gets it wrong, we get angry at the stupid machine and press 0 for an operator.

I think one of the things that makes the human race "intelligent" is our ability to fill in gaps in incomplete information, take shortcuts, and accept close-enough answers. That means we will most certainly be wrong, and often. This tolerance for inexactness I think is something computers are not good at. People expect a computer AI to be both intelligent and never wrong, and I don't think that's possible. To be intelligent you have to guess at things, and that necessitates being wrong.

Re:Tolerance (0)

Anonymous Coward | about 3 months ago | (#47615281)

To be intelligent you have to guess at things, and that necessitates being wrong.

Are you familiar with autocorrect?

Re:Tolerance (0)

Anonymous Coward | about 3 months ago | (#47618573)

I have no intrinsic tolerance for humans over machines; and I doubt most other people do either. I mean, where applicable I far prefer consulting Google over my local librarian and I'm sure I'm in the majority.

As far as I've experienced, it's that computer answering systems simply aren't up to snuff and as a result it's slower to get anything done.

I have to speak slowly for it to understand, it doesn't handle errors very well (if I mess up I have to start a series of numbers all the way from the beginning), it adds infuriating pauses after receiving each bit of information; and to top it off it talks so damn slow.

With a human--even a non-native call-center worker in Bangalore--I can whisper, say, my account number. With the answering system? Not a chance.

Enjoyed the article ... (1)

CaptainDork (3678879) | about 3 months ago | (#47614953)

... I am not an AI nut, but since I am an old man, I have been aware of the field for some time.

Artificial Intelligence and Artificial Sentience are not the same thing. If an application seems smart to us, it's AI.

Worries about evolution to sentience are premature, at best.

We will recognize it in many ways, and one way will be when the machine weeps when it loses the Internet.

Re:Enjoyed the article ... (1)

angel'o'sphere (80593) | about 3 months ago | (#47615087)

You are perfectly right! I believe artificial sentience is far easier to achieve than AI. After all, you need only a thing that can think about itself; it does not need to be a Hong Kong subway work-scheduling expert, a genius at chess, or even a washing-machine AI.

Re:Enjoyed the article ... (1)

Oligonicella (659917) | about 3 months ago | (#47615427)

For very, very loose definitions of "think about itself".

Re:Enjoyed the article ... (1)

geekoid (135745) | about 3 months ago | (#47615089)

Finally, someone has clearly defined sentience. I look forward to your Nobel Prize winning paper.

Re:Enjoyed the article ... (1)

Wintermute__ (22920) | about 3 months ago | (#47616479)

... I am not an AI nut, but since I am an old man, I have been aware of the field for some time.

Artificial Intelligence and Artificial Sentience are not the same thing. If an application seems smart to us, it's AI.

Worries about evolution to sentience are premature, at best.

We will recognize it in many ways, and one way will be when the machine weeps when it loses the Internet.

What will it do when it wins the internet?

meh (1)

Charliemopps (1157495) | about 3 months ago | (#47615279)

I think all of this misses the point. Will humanity ever create an intelligence that's greater than the sum of its parts? I think we will, and maybe already have. But we keep thinking of it in the wrong way. Do human cells have the ability to create artificial cells? Do they have any concept of what a multi-celled organism is? Of course not. They can't even think. What we are is completely outside what reality is for them.

Likewise, I think whatever artificial being humanity creates will be something we don't even have the faculty to understand. Take a corporation, for example. It's made up of people who work in processes with goals and rules. Yet a corporation operates almost completely autonomously from those people. A corporation's fundamental drive is profit, and it will do whatever it takes to achieve that goal, including eliminating some of the humans that make it up!

This is a rudimentary example, but apply the same line of thought to larger entities... like the internet, the media... even politics. They're collective entities made up of humans, but they act independently of humans. I think the first AI will be an abstract entity like that. It won't be able to communicate with us because our reality isn't something it lives in.

Ok, that's my hippy science for the day. I have to go cleanse the tofu out of my system with some ribs now.

Re:meh (1)

mfwitten (1906728) | about 3 months ago | (#47615603)

Adam Smith called such an intelligence the "Invisible Hand".

Re:meh (0)

Anonymous Coward | about 3 months ago | (#47616707)

If you've ended up with a sum that is greater than its parts, you need to check your math. What we will discover when we create an intelligence is the missing "parts" that we've failed to add in the past.

The Curse of AI (1)

deksza (663232) | about 3 months ago | (#47615451)

As soon as a problem in AI is solved, it is no longer considered AI, because we know how it works in terms of an algorithm. AI invents itself out of existence. Webcomic: http://artificial-intelligence...

Re:The Curse of AI (0)

Anonymous Coward | about 3 months ago | (#47621299)

I think Intelligence should be defined as the ability to search for truth. When a system can search for the truth in the midst of conflicting arguments, and realize when it is or is not at a definitive conclusion, then I think it can be deemed "intelligent."

Re:The Curse of AI (1)

mcswell (1102107) | about 3 months ago | (#47627203)

Lots of programs search for some value of a variable under conflicting criteria. They know when they have reached an answer (conclusion); whether it is the right answer, or only a local minimum, is not always clear, either to the program or to the person using it. But then people commonly arrive at erroneous conclusions, too. And for that matter, if the criteria conflict, then by definition I don't think there is a *right* answer; the best answer depends on how you weight those criteria. Some of these programs are conceptually quite simple. For example, we have one we use to search for possible correctly spelled words, given a possibly misspelled input. It outputs a list of answers that are above some threshold of "rightness," as measured by how likely the correction(s) are under some model of likely errors. You might consider the top answer to be "truth" (although it's not always unique). I would not call this program intelligent.
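The spell-correction search described here can be sketched with plain Levenshtein edit distance as the "rightness" score; the lexicon and distance threshold below are invented for illustration:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suggest(word, lexicon, max_dist=2):
    """Return lexicon words within max_dist edits, best matches first."""
    scored = sorted((edit_distance(word, w), w) for w in lexicon)
    return [w for d, w in scored if d <= max_dist]

lexicon = ["canal", "camel", "cancel", "panel"]
print(suggest("canle", lexicon))
```

As the comment notes, the program "knows" when it has an answer list, and the top suggestion is often right but not always unique; nothing about the search resembles intelligence.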

Hello? (1)

Tyrannicsupremacy (1354431) | about 3 months ago | (#47615763)

Do you suppose cats have souls?

Re:Hello? (0)

Anonymous Coward | about 3 months ago | (#47616509)

Do you suppose cats have souls?


Re:Hello? (1)

mark-t (151149) | about 3 months ago | (#47616847)

If you can unambiguously define what a soul is, then I can probably answer that question.

Re:Hello? (1)

Tyrannicsupremacy (1354431) | about 3 months ago | (#47617889)

I'm not telling.

Too much, already... (1)

whizbang77045 (1342005) | about 3 months ago | (#47616627)

The best comment I think I have seen on artificial intelligence was at a company I worked for in the 1960s. The company advertised for "artificial intelligence," when what they meant was that they wanted a specialist in artificial intelligence. Someone pinned the ad on the bulletin board, circled the request for artificial intelligence, and wrote under it: "too much in the company already."

Samantha (0)

Anonymous Coward | about 3 months ago | (#47616629)

"...Kurzweil puts Samantha-like intelligence at 15 years away in 2029..."
My opinion is that it will never happen. Ever. There are lots of reasons why not.
1. We don't really have all the info on how our own brains work. I speculate that there are mechanisms at work in the brain that we haven't even thought of yet: perhaps spooky action at a distance, electron conjunction, extra-dimensional force connections, and so on. Think of Ben Franklin trying to study the brain; he didn't have the tools that we have. Likewise, we don't yet have the tools to do a proper job of brain study.
2. Machines are machines. Machines tick-tock. If you had enough time, you could do all the calculations of a "computer" by hand. There is no "I" or "me" or "id" in the Freudian sense.
Baruch Atta

Re:Samantha (1)

mark-t (151149) | about 3 months ago | (#47616925)

1. We don't really have all the info on how our own brains work.

Okay, all that means is that we probably won't achieve AI just by trying to replicate the way we currently understand how the brain works, since we don't really understand the latter well enough to do it perfectly. That doesn't mean we won't eventually discover how the brain works at some point in the future, or that AI cannot be achieved in other ways.

2. Machines are machines

Human beings are machines too... by almost every reasonable definition. The mere fact that some might think we are more than machines does not necessarily make it true.

Re:Samantha (0)

Anonymous Coward | about 3 months ago | (#47621305)

Our sense of identity does differentiate us.

Re:Samantha (1)

mcswell (1102107) | about 3 months ago | (#47627213)

My dog and cats seem to have senses of identity, although I can't be sure.

Re:Samantha (1)

mcswell (1102107) | about 3 months ago | (#47627219)

Machines will never fly, for all the analogous reasons. 1) We don't know exactly how birds, bats, flying fish, or insects fly. 2) Machines are machines.

Where's the (0)

Anonymous Coward | about 3 months ago | (#47616723)

Best place to get a hooker in HK?

Dr. Chun - (0)

Anonymous Coward | about 3 months ago | (#47618069)

Do you think there is ever going to be a day where humans and robots can peacefully co-exist?
