
A Peek Inside DARPA's Current Projects

ScuttleMonkey posted more than 7 years ago | from the al-gore-unavailable-for-comment dept.

Science 94

dthomas731 writes to tell us that Computerworld has a brief article on some of DARPA's current projects. From the article: "Later in the program, Holland says, PAL will be able to 'automatically watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon.' At that point, perhaps DARPA's PAL could be renamed HAL, for Hearing Assistant That Learns. The original HAL, in the film 2001: A Space Odyssey, tells the astronauts how it knows they're plotting to disconnect it: 'Dave, although you took thorough precautions in the pod against my hearing you, I could see your lips move.'"


The Real Issue (3, Insightful)

Atlantic Wall (847508) | more than 7 years ago | (#17713536)

The real issue is some idiot programming a computer to defend itself against someone just trying to turn it off. A computer should be programmed to know it can make mistakes.

Re:The Real Issue (2, Interesting)

Eagleartoo (849045) | more than 7 years ago | (#17713744)

[HAL] The only mistakes a computer makes is due to human error [/HAL]

I don't think computers are capable of making mistakes, because they are incapable of thinking; they can process and store, but that does not entail thought. Define thought for me.
Thought -- 1. to have a conscious mind, to some extent of reasoning, remembering experiences, making rational decisions, etc.
2. to employ one's mind rationally and objectively in evaluating or dealing with a given situation

I guess what we're looking for in thought is self-awareness. My computer science teacher once said to me that a computer is basically on the same level of intelligence as a cockroach. It evaluates in positives and negatives, 1s and 0s.

Please feel free to blow me out of the water here =)

Re:The Real Issue (2, Insightful)

exp(pi*sqrt(163)) (613870) | more than 7 years ago | (#17713852)

What does conscious thought have to do with mistakes? Mistakes are when you have some kind of expectation and those expectations fail to be met. If I buy some accounting software, and the figures don't add up correctly, then the software has made a mistake (which in turn is probably caused by a mistake made by a developer). If you make the definition of 'mistake' hinge on concepts like 'conscious mind' and other tricky philosophical ideas then it's a wonder you can get anything done in your day.

My computer science teacher once said to me that a computer is basically on the same level of intelligence as a cockroach. It evaluates in positives and negatives, 1s and 0s.
You have digital cockroaches in your part of the world?

Re:The Real Issue (2, Insightful)

Eagleartoo (849045) | more than 7 years ago | (#17714010)

/. is probably the reason I DON'T get anything done during the day =). I was somewhat responding to "teaching computers that they make mistakes." But if you were going to do that, they would have to evaluate whether what they do is a mistake or not; they would begin to evaluate their purpose. If they are instructed that they make mistakes ("You are a computer, you make mistakes," the man told the computer) and they "learn" that they make mistakes, would it not follow that they start making mistakes intentionally? I mean, they've been told to make mistakes; in essence they are fulfilling their purpose, which gets me into the whole original sin argument ---- THIS IS WHY I NEVER GET ANYTHING DONE!!!!!

Re:The Real Issue (3, Insightful)

Ucklak (755284) | more than 7 years ago | (#17714776)

Conscious thought has everything to do with mistakes.

A machine is mechanical and is incapable of mistakes as it can't set expectations.

From your quote, "Mistakes are when you have some kind of expectation and those expectations fail to be met.", machines aren't capable of setting expectations, only following a basic 'to do' list.

If a machine adds 1+1 and returns 3 to the register, then it didn't fail, it added 1+1 in the way it knows how to.

AI today is nothing more than a bunch of IF..THEN possibilities run on fast processors to make it seem instantaneous and 'alive'.

You can program a machine to be aware of it's power (Voltage) and you can have a program set to monitor that power with cameras and laser beams and whatever else with commands to shoot laser beams at any device that comes near that power but the machine still isn't aware of what power is.

Not to get philosophical here but IMO, AI won't ever be real until a machine has a belief system and part of that belief system relies upon it's own ability to get energy, just like animals do.

It's possible that a program can be written so that a machine is aware of it's own requirements but then we're back to a bunch of IF..THEN statements written by programmers.
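To make that concrete, here is a toy Python sketch of the laser-guarding example above (the names and thresholds are invented purely for illustration); the whole "behavior" is nothing but a couple of IF..THEN checks:

# Toy sketch only: the "guard the power supply" machine described above,
# reduced to the IF..THEN checks it really is. Names and thresholds made up.
def guard_step(voltage, intruder_distance_m):
    if voltage < 11.5:
        return "raise low-power alarm"
    if intruder_distance_m < 2.0:
        return "fire laser at intruder"
    return "idle"

print(guard_step(12.0, 1.5))   # fire laser at intruder
print(guard_step(11.0, 10.0))  # raise low-power alarm

The machine "protects its power" without any notion of what power is.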

Re:The Real Issue (2, Interesting)

kurzweilfreak (829276) | more than 7 years ago | (#17716906)

Prove that you or I are conscious and more than just an incredibly complicated series of IF..THEN statements. :)

No need to! (1)

eMbry00s (952989) | more than 7 years ago | (#17721386)

If (Me.Arguments(Me.Arguments.Length-1).Status = Defeat){
    Break() //Let's just ignore this thread even though we started it.
}

Re:The Real Issue (1)

Walter Carver (973233) | more than 7 years ago | (#17721540)

Is there something that makes you think that we are not conscious? Or that we are just an incredibly complicated series of IF..THEN statements?

Re:The Real Issue (1)

kurzweilfreak (829276) | more than 7 years ago | (#17724500)

What makes you think that a sufficiently complicated computer isn't conscious? What about a monkey? A dog? A cow? An insect? A worm? Keep going down and find the point where you say "ok, this is conscious, but this isn't." We can't even come up with a rigorous definition of consciousness yet. Jeff Hawkins has some very interesting theories, though.

Re:The Real Issue (1)

Walter Carver (973233) | more than 7 years ago | (#17729260)

Nothing. There is no evidence. I am not in a position to know if a computer, animal, or insect can be conscious. Personally, I think that consciousness is gained gradually, in proportion to the complexity of the brain (so no black-or-white examples like the one you gave). No idea about artificial consciousness.

There is no way I can experience what it is like to be an animal, an insect, or a machine. Therefore, I cannot say. But I know that if a machine ever achieves consciousness, it will only be because a human programmer has (successfully) applied a psychological model made by a human psychologist/psychiatrist. Or humans will set in motion the first machine able to expand beyond its capacity and thereby reach consciousness. Again, the credit goes to the humans that set it up. And there is one final possibility, that of malfunction: humans set a machine in motion, and only by mistake does that machine expand itself. In any case, humans are responsible for the new consciousness.

What matters, in order to convince humans, is that it will look, feel, and "smell" conscious, not that it will actually have to be.

Re:The Real Issue (1)

Ucklak (755284) | more than 7 years ago | (#17728198)

There is one thing that is innate within all living things as we understand them, and that is the need to multiply.
That belief system, present within all our cells and throughout the organism, gives us a need that doesn't have to be taught.

In order to fulfill that need, we (organisms and individual cells) need food and water and we'll do whatever it takes to get it.

You could add that we will do whatever it takes to get food and water OR we'll die. That is the understanding that we will cease to exist if our needs aren't met.
Computers and AI aren't that sophisticated yet, if they ever will be (I tend to think eventually, once another way of skinning the cat comes along, but not now).

Computers don't understand, they follow instructions.
They need power and don't do a damn thing about it if it is threatened. They have no desire to learn; they only observe and record, and make no correlations among sets of data.

A computer isn't self-aware; a program has to be written to record uptime and keep a tally just to simulate self-awareness.

We can decide a favorite color, shape, and musical pattern that isn't passed down within our instructions (DNA).
We question, machines only maintain and report.
We have the ability to learn, machines can only record.

So, if you'd like to say that we are a complex set of IF..THEN statements, then go ahead. The difference between our IF..THEN and a machine's is that we have the conditional OR that rules us and the ability to change expectations and adapt. Machines don't. Machines can be programmed to adapt, but so far the best machines are proxies for humans and none are autonomous.

Re:The Real Issue (1)

exp(pi*sqrt(163)) (613870) | more than 7 years ago | (#17732370)

There is one thing that is innate within all living things as we understand them, and that is the need to multiply.
I'm married but have no kids. We have no intention of having kids. We employ various forms of technology to prevent us from having kids. What are you talking about?

Re:The Real Issue (1)

Ucklak (755284) | more than 7 years ago | (#17733024)

The fact that you are making a conscious effort to not reproduce.

The parent implied that we are nothing but a complex array of IF..THEN statements to which I replied that we are driven by the conditional OR.
Machines in their current state aren't capable of understanding; their purpose is to follow instructions, not to alter expectations, since expectations can't yet be set mechanically and autonomously.

Existence of living organisms is about consuming food, water, and reproducing.
There are those that will put a religious spin on the purpose of life but that doesn't take away the need for food, water, and shelter.

Your existence is about maintaining food, water, and shelter. Whatever you learn, record, etc., will die with you. You have a companion who shares the same expectation, which provides comfort. Machines are incapable of wanting, to which the parent would reply that the condition simply isn't programmed.

I say BS on that: by the fact that you have a want that is a condition of shelter (comfort), you have altered your biologically programmed purpose and changed the expectation of your being.

Re:The Real Issue (1)

exp(pi*sqrt(163)) (613870) | more than 7 years ago | (#17717478)

You can program a machine to be aware of it's power (Voltage) and you can have a program set to monitor that power with cameras and laser beams and whatever else with commands to shoot laser beams at any device that comes near that power but the machine still isn't aware of what power is.
Few people know what power is. They get hungry, and then they eat.

Re:The Real Issue (0)

Anonymous Coward | more than 7 years ago | (#17720956)

Please learn the difference between "it's" and "its".

Re:The Real Issue (3, Insightful)

AK Marc (707885) | more than 7 years ago | (#17715712)

I don't think computers are capable of making mistakes, because they are incapable of thinking; they can process and store, but that does not entail thought.

Then they can't make mistakes, but can make errors. What do you call it when a brownout causes a computer to flip a bit and give an incorrect answer to a math problem? How about when it is programmed incorrectly so that it gives 2+2=5? How about when a user intends to add 2 and 3, but types in 2+2? In all cases, the computer outputs a wrong answer. Computers can be wrong and are wrong all the time. The wrong answers are errors. A "mistake" isn't an assigning of blame. I agree that computers can be blameless, but that does not mean that mistakes involving computers can't be made. I think your definition of "mistake" is not the most common, and would be the point of contention in this discussion, not whether computers have ever outputted an erroneous answer.
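For what it's worth, the brownout case fits in three lines of Python (illustration only, not how real hardware faults are modeled): the arithmetic is done correctly, and then a single flipped bit in the stored result turns the right answer into a wrong one.

# The computation is correct; one bit of the stored result then gets flipped,
# as a brownout or cosmic ray might do. Nobody "made a mistake", yet the
# output is wrong.
result = 2 + 2            # 4 (binary 100)
corrupted = result ^ 1    # flip the least significant bit
print(result, corrupted)  # 4 5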

Re:The Real Issue (1)

G-funk (22712) | more than 7 years ago | (#17718380)

The term you need to read up on is "emergent behaviour" my friend.

The Real Project: Warp Drive (0)

Anonymous Coward | more than 7 years ago | (#17713748)

The most interesting project at DARPA is not a cheap imitation of the M-5 computer. The most interesting project is building the first warp engine [scotsman.com] that can finally enable commercial space travel to Mars -- and planet Vulcan for 1st contact.

Re:The Real Issue (2, Insightful)

exp(pi*sqrt(163)) (613870) | more than 7 years ago | (#17713774)

Computers shouldn't be programmed to know they can make mistakes. They should observe themselves making mistakes and learn that they can. Sheesh. Next you'll be suggesting that children should be programmed from birth to believe whatever it's convenient for their parents to have them believe...

Re:The Real Issue (0)

a_nonamiss (743253) | more than 7 years ago | (#17713834)

Next you'll be suggesting that children should be programmed from birth to believe whatever it's convenient for their parents to have them believe...
Wait... you mean you can't do that? I suppose it's a little late to rethink this fatherhood thing...

Re:The Real Issue (1)

Atlantic Wall (847508) | more than 7 years ago | (#17714800)

"They should observe themselves making mistakes and learn that they can" Well if they can figure that out then who knows what they may learn and how the may feel/react to it.

Re:The Real Issue (1)

exp(pi*sqrt(163)) (613870) | more than 7 years ago | (#17717246)

then who knows what they may learn
If we already knew what an intelligent computer could learn, then there wouldn't be much point in making one, would there?

Knowledge (2, Interesting)

hypermanng (155858) | more than 7 years ago | (#17713952)

The only programming that leads to context-rich understanding that could be called "knowledge" in the human sense is self-programming. Like babies. We're all born with some basic software and a lot of hardware, but it's through interaction with our environment over time that we program ourselves. One might call it learning, but it's more fundamental than just accumulating facts: it's self-creation.

Dennett calls us self-created selves. Any AI more than superficially like a human would be the same.

Re:Knowledge (1)

sottitron (923868) | more than 7 years ago | (#17714074)

And programming us babies takes a long long time and lots of love and attention, I might add. I wonder if a computer will ever think back to a conversation it had years ago and think 'Man, was I ever a Windows Box back then...'

Insert heavy-handed comment here... (0, Offtopic)

Manchot (847225) | more than 7 years ago | (#17713958)

A computer should be programmed to know it can make mistakes.

How can we do that, when our own president doesn't even know when he's made one?

We could tell you... (0)

$RANDOMLUSER (804576) | more than 7 years ago | (#17713586)

...but then we'd have to kill you.

Paranoid (1)

TheWoozle (984500) | more than 7 years ago | (#17713604)

Except that HAL was paranoid. That the astronauts had a conversation they tried to hide from HAL was more than enough. The actual content of the conversation was immaterial.

Re:Paranoid (3, Insightful)

Gorm the DBA (581373) | more than 7 years ago | (#17713724)

Actually, it was over long before that. HAL was just following it's programming, perfectly.

HAL was programmed to eliminate any possible failure points in the mission that he could. Throughout the spaceflight, HAL observed that the humans on the mission were fallible (one of them made a suboptimal chess move, a handful of other mistakes were made). HAL had the ability to complete the mission on it's own. Therefore, HAL made the decision, in line with it's programming, to eliminate the human element.

It makes sense, really, when you think about it. And truly, if Dave had just gone along with it and died, HAL would have finished the job perfectly fine.

Re:Paranoid (2, Informative)

cnettel (836611) | more than 7 years ago | (#17713826)

It was some years ago, but I think the book also stresses the problem for HAL: the full mission was never revealed to the human crew, which meant that even thinking that was too good on their part, at the wrong point in time, would be considered a failure. HAL was programmed/ordered to obey the crew, but also to respect the mission objectives, and the contradiction only grew worse.

Re:Paranoid (2, Insightful)

tcc3 (958644) | more than 7 years ago | (#17718376)

Yeah, but that's if you go by the book.

The book and the movie are two different animals. The movie made no mention of the "Hal was following orders" subplot. Short of saying it outright, the movie makes it pretty clear that Hal screwed up and his pride demanded that he eliminate the witnesses. Which, if you ask me, makes a more interesting story.

After reading all the books, I came to the conclusion that Clarke would have been better served by sticking to the screenplay.

Re:Paranoid (1)

tcc3 (958644) | more than 7 years ago | (#17718396)

Hal "going crazy" at not being able to logically reconcile "Hal is perfect and cannot make mistakes" and "Hal made a mistake" is also an acceptable answer.

Re:Paranoid (1)

cayenne8 (626475) | more than 7 years ago | (#17714022)

I always wondered... was HAL being only one letter off from IBM just a coincidence, or was it written that way on purpose?

Re:Paranoid (1)

smbarbour (893880) | more than 7 years ago | (#17714308)

Even if it wasn't a coincidence... The movie used "rebranded" IBM equipment.

The same was said about Windows NT. WNT->VMS

Re:Paranoid (1)

sconeu (64226) | more than 7 years ago | (#17714336)

Clarke claims it's a coincidence. He mentions it in "Lost Worlds of 2001", and specifically has Chandra debunk it in "2010".

Re:Paranoid (1)

threechordme (1041318) | more than 7 years ago | (#17724004)

Also, in the other-language releases of the book it is not called HAL... as far as I remember.

Re:Paranoid (0)

Anonymous Coward | more than 7 years ago | (#17716242)

Actually, it was over long before that. HAL was just following it's programming, perfectly.

HAL was programmed to eliminate any possible failure points in the mission that he could. Throughout the spaceflight, HAL observed that the humans on the mission were fallible (one of them made a suboptimal chess move, a handful of other mistakes were made). HAL had the ability to complete the mission on it's own. Therefore, HAL made the decision, in line with it's programming, to eliminate the human element.
These can only be attributable to human error.

Re:Paranoid (0)

Anonymous Coward | more than 7 years ago | (#17721006)

"its", not "it's".

As I interpreted the scene... (2, Insightful)

Ungrounded Lightning (62228) | more than 7 years ago | (#17717492)

Except that HAL was paranoid. That the astronauts had a conversation they tried to hide from HAL was more than enough. The actual content of the conversation was immaterial.

As I interpreted the scene: Though the audio pickups were off, HAL had a clear view. So he zoomed in, panned back and forth between speakers, and got a clear shot of each face - lips, eyes, eyebrows, and other facial markings - as each spoke.

Which tells me he was lip-reading. (Also face-reading.) He knew every word they said and had the bulk of the visual side-channel emphasis as well.

If all he needed to know was that they WERE having a conversation, he could have gotten that from his view through the window, without the camera gyrations.

We, as the audience, got an alternation of the omniscient viewpoint - watching and hearing the conversation - with HAL's viewpoint - silence (except for camera pan and zoom actuators) and alternating closeups of the two talking heads. Thus we could both follow what was said and observe that HAL was doing so as well - but visually - and was putting a lot of processing power and camera actuator activity into subverting the humans' attempt to keep him from knowing what was said.

True or not (1)

phrostie (121428) | more than 7 years ago | (#17713606)

they should have saved this for April 1st.

the /.ers could have argued over this for hours.

REAL sneak peak (4, Informative)

Prysorra (1040518) | more than 7 years ago | (#17713672)

Here's a LOT of stuff to look through....don't tell anyone ;-)

Top Secret Stuff at DARPA [darpa.mil]

Not "Strong" AI (4, Interesting)

hypermanng (155858) | more than 7 years ago | (#17713674)

The DoD funds a huge percentage of AI research, but at the end of the day they're interested in things that can be easily weaponized or used for simple intelligence-sifting heuristics. The most fundamentally interesting research in AI is in the humanoid robotics projects such as those at the MIT shop, and it is from these more humanly-modeled projects that anything like HAL could ever issue. Search-digest heuristics like PAL aren't much like humans and will never lead to anything approaching a human's contextually rich understanding of the world at large, any more than really advanced racecar design will lead to interstellar craft.

The difference, as Searle would say, between Strong (humanlike) AI and Weak (software widget like) AI is a difference of type, not scale.

Re:Not "Strong" AI (1)

Ambitwistor (1041236) | more than 7 years ago | (#17714220)

The most fundamentally interesting research in AI is in the humanoid robotics projects such as those at the MIT shop, and it is from these more humanly-modeled projects that anything like HAL could ever issue. Search-digest heuristics like PAL aren't much like humans and will never lead to anything approching a human's contextually rich understanding of the world at large
It is far from clear whether "humanoid robotics" are either necessary or useful in producing AI with a "contextually rich understanding of the world at large".

Clarification (2, Interesting)

hypermanng (155858) | more than 7 years ago | (#17715138)

I don't mean to imply humanoid robotics qua robotics is necessary to AI development. Rather, only in a creature that acts as an agent inhabiting the world at large can one expect anything like human-level understanding thereof to develop. It's all very well to develop clever as-if software widgets to simulate understanding in carefully controlled circumstances, but they won't scale to true global context richness because 1) they interact with the world over narrow modalities and 2) they don't have the rich internal structure on which to predicate agents with deep and flexible competencies.

It's like we build ever more elaborate visual perception analogues, but they backend into databases that only ask for enumerations of objects discriminated. I don't care how competent the visual system is, it's never going to achieve sentience because it's not part of a whole agent that travels around (in some sense), processes the answers it's getting from the visual system in a multimodal way related to the agent's goals, edits those goals based on new information and so on. It can't just see, it has to look, and it can't just look because someone typed in a domain name, it has to look for a reason and the reason has to be a reason in the sense of being the result of a decision or discrimination, not just an action with a physical cause.

It would seem that the easiest way to allow for all that is to build something that really moves around in the real world. In short, building a robot with all the appropriate competencies might be really hard, but it's still the most tractable way to achieve Strong AI.

Re:Not "Strong" AI (1)

nigral (914777) | more than 7 years ago | (#17715100)

In an AI course, the teacher once told us that during the first weeks of the first Iraq war, the ability to solve complex logistics problems in hours instead of weeks was easily worth all the money invested in AI research since the '60s.

So when the military talks about AI, you don't need to think only of intelligent robot soldiers.

Agreed (2, Insightful)

hypermanng (155858) | more than 7 years ago | (#17715226)

I didn't mean to imply that only Strong AI is militarily useful. In fact, I would say that Strong AI is *not* useful, if one thinks about the ethics of forcing anything sentient to go to war in one's place.

Also, I have no trouble recognizing that cleverly-designed "Weak" AI is nonetheless quite strong enough in more conventional senses to be a monumental aid to human problem solving, in the same manner and to the same degree as an ICBM is a great aid to human offensive capabilities.

Re:Not "Strong" AI (1)

master_p (608214) | more than 7 years ago | (#17721638)

It is not possible for computers to reach human-like AI because computers' computational capacity is severely limited when compared to the human brain:

http://en.wikipedia.org/wiki/Brain [wikipedia.org]

The human brain can contain more than 100 billion neurons (that's 100,000,000,000), with each neuron connected to as many as 10,000 other neurons.

This huge capacity of the brain allows it to mirror external experiences (and some people suspect the mirror image is in 3D):

http://en.wikipedia.org/wiki/Mirror_cells [wikipedia.org]

So any attempt to create HAL or a similar structure is doomed to failure unless another approach to computing devices is taken.

Yes and no (1)

hypermanng (155858) | more than 7 years ago | (#17724860)

If we model each neuron as an object composed of state equations, quasi-spatial information, and in-out buffers, and synapse sets as registry entries that link a given out buffer to the in buffer of another neuron, then we can expect a total memory overhead of about 50KB per neuron (when accounting for the average number of connections per neuron), requiring about 50KFLOPS per neuron. For 2x10^10 neurons, that implies we can comfortably allow for human-scale intelligence using one PByte of memory and about one PFLOP. Of course, there are some subsystems in the brain we may not need to instantiate to such a high degree of verisimilitude and can replace with more traditionally coded software (I'm thinking here of the more hard-coded lizard-brain stuff).
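A quick sanity check of those figures in Python (every input is an assumption from the paragraph above, nothing measured):

# Back-of-the-envelope check of the estimate above; all numbers are the
# stated assumptions, not measurements.
neurons = 2e10            # roughly cortical-scale neuron count
bytes_per_neuron = 50e3   # state equations plus connection table
flops_per_neuron = 50e3

print(f"memory:  {neurons * bytes_per_neuron / 1e15:.1f} PB")      # ~1.0 PB
print(f"compute: {neurons * flops_per_neuron / 1e15:.1f} PFLOPS")  # ~1.0 PFLOPS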

All of this is out of reach today, but in 20 years, I doubt it will seem insurmountable in the slightest. The truly hard question is how to set up an initial arrangement of those neurons (with all the various kinds, initial connections and so on) that allows for a person to grow inside. You know, like any baby.

Re:Yes and no (1)

master_p (608214) | more than 7 years ago | (#17736352)

But those neurons work in parallel...each neuron is an information bus as well. Computer memory is not a bus, it is just storage, and all memory cells have to send their data through the central bus.

I don't see a problem (1)

hypermanng (155858) | more than 7 years ago | (#17828480)

While I admit that virtualizing what was physically instantiated connection information is costly in terms of brute storage as well as memory bandwidth, there aren't that many bits on the bus once you strip the addressing overhead.

That said, managing memory bus speed could turn out to be a considerable technical constraint: the whole virtualizer would need a fairly agile daemon that tracked evolving connection topologies and kept neurons resident with their neighbors to minimize inter-CPU-module bandwidth needs. This would be as transparent to the person those neurons instantiate as dendrite formation is to us.

DARPA Slogans (5, Funny)

Paulrothrock (685079) | more than 7 years ago | (#17713740)

DARPA: We don't make the things that kill people. We make the things that kill people better.
DARPA: We bring good things to life... that are then used to kill people.
DARPA: Who do you want to kill today?

DARPA Created The Internet (1)

wiredog (43288) | more than 7 years ago | (#17713884)

Well, it provided the funding, anyway.

Re:DARPA Created The Internet (1)

toddhisattva (127032) | more than 7 years ago | (#17724736)

If (D)ARPA created the Internet, then the Internet was designed to kill people!

Come on you Tin-foil Hat wearers... (2, Interesting)

silentounce (1004459) | more than 7 years ago | (#17713742)

"watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon."
Anyone care to guess what they plan to use that little gadget for?

Re:Come on you Tin-foil Hat wearers... (1)

Gorm the DBA (581373) | more than 7 years ago | (#17713784)

Making sure your waitress gets your order right, of course.

I mean really...Bush only invaded Iraq because that waitress brought him FRENCH dressing instead of RANCH like he asked...

Re:Come on you Tin-foil Hat wearers... (2, Interesting)

Perey (818567) | more than 7 years ago | (#17713866)

If that were true, the war would have been won when they renamed it Freedom dressing.

Re:Come on you Tin-foil Hat wearers... (0)

Anonymous Coward | more than 7 years ago | (#17713810)

Don't worry, tin-foil hatters: the PAL project is a big scam. It's way behind where it was supposed to be and is chock full of very incompetent faculty members. I know people who know.

Re:Come on you Tin-foil Hat wearers... (1)

Lord_Slepnir (585350) | more than 7 years ago | (#17714076)

Recording 3 hours of PHB meetings and figuring out that all they agreed upon was the time for the next meeting to set the itinerary for the real meeting.

Re:Come on you Tin-foil Hat wearers... (1)

metlin (258108) | more than 7 years ago | (#17714862)


"watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon."
Anyone care to guess what they plan to use that little gadget for?


Voyeur radio porn?

Use Plan? (1)

nurb432 (527695) | more than 7 years ago | (#17717272)

That's easy: they plan on turning it over to the DoD.

DARPA doesn't really do much with stuff like that; their job is to create it.

NSA makes more sense. (1)

Ungrounded Lightning (62228) | more than 7 years ago | (#17717664)

That's easy: they plan on turning it over to the DoD.

Giving it to the NSA makes more sense.

Imagine: Instead of tagging conversations for human review when a keyword is present, they could have the acres of supercomputers analyze them for agreement on action items.

Then the automated agents could maintain and analyze a database of who agreed to what, flagging a collection of conversations for human review only if/when it amounted to a conspiracy to prepare for, support, or execute a military, geopolitical, or criminal activity of interest to the government.

Automated "big-brother". Pervasive area and phone bugs are much more useful if they can be monitored 24/7 by uniformly diligent and objective (even if not maximally perceptive and efficient) AIs, rather than occasionally by high-priced, error-prone, boredom-prone, subjective, and easily-corruptible humans.

Re:NSA makes more sense. (1)

silentounce (1004459) | more than 7 years ago | (#17722608)

That's what I was thinking, but I wasn't sure I could put it clearly, so I opened it up to you guys. I think the most interesting thing about it is that it would even catch coded conversations. It would catch the agreements, and if the agreement, say purchasing cattle, took place in an area where that exchange wouldn't make sense, well, it could flag it for review as well. It essentially boils down a conversation to its most important part, and that makes for less data for other programs or people to sort through.

I worked will on a DARPA... (3, Funny)

WED Fan (911325) | more than 7 years ago | (#17713800)

DARPA has yet to acknowledge the project that I was working on 3 years from now in 2010. Last week, January 14, 2012 we will successfully tested the Time Redaction Project. So, I gave myself the plans tomorrow so that I will be submitting them a few years ago to get the grant money. DOD has used this to send a nuke to kill the dinosaurs. I hope it works.

Re:I worked will on a DARPA... (2, Funny)

Rob T Firefly (844560) | more than 7 years ago | (#17713948)

Not anymore, your mom was just assassinated before your birth by an android you failed to prevent the invention of.

Re:I worked will on a DARPA... (4, Funny)

Slightly Askew (638918) | more than 7 years ago | (#17713974)

So, I wiollen have given myself the plans tomorrow so that I wiollen be submitting them a few years ago to get the grant money.

There, fixed that for you.

Re:I worked will on a DARPA... (1)

silentounce (1004459) | more than 7 years ago | (#17714130)

You should be ashamed. It is [halfbakery.com] :
DARPA has yet to acknowledge the project that I worked will on 3 years from now in 2010. Last week, January 14, 2012 we tested will successfully the Time Redaction Project. So, I gave myself the plans tomorrow so that I submit will them a few years ago to get will the grant money. DOD has used this to send a nuke to kill the dinosaurs. I hope it works.

Re:I worked will on a DARPA... (1)

Slightly Askew (638918) | more than 7 years ago | (#17714230)

Sorry, but the shame, good sir, lies with you. Douglas Adams is the foremost authority on time travel verb usage, not some two-bit blogger.

From the link you provided

P.S. I am aware of Douglas Adams' ideas, which went something like 'I will wiollen the ball kikken', but my idea is to make something usable.

This is Slashdot. Attempting to improve upon DNA is grounds for drawing and quartering.

Re:I worked will on a DARPA... (1)

silentounce (1004459) | more than 7 years ago | (#17714656)

I love DNA as much as the next guy. But I doubt Adams would want his writing to be taken so seriously. You remind me of a friend of mine who still carries a towel with him everywhere. As for the grammar bit, Google thinks [google.com] otherwise.

Not what HAL stood for (2, Interesting)

Hyram Graff (962405) | more than 7 years ago | (#17713806)

[P]erhaps DARPA's PAL could be renamed HAL, for Hearing Assistant That Learns.

Perhaps, but that's not what the original HAL stood for. HAL was short for Hueristic ALgorithmal. Arthur C. Clarke had to put that into one of his books in the series (2010, IIRC) because lots of people thought he had derived it by doing a rot25 on IBM.

Re:Not what HAL stood for (1)

sconeu (64226) | more than 7 years ago | (#17714382)

No, but the Sirius Cybernetics Corporation is obviously involved, since a robot is "your plastic PAL who's fun to be with."

Heuristic.... (1)

Prysorra (1040518) | more than 7 years ago | (#17714894)

Hueristic? What? A new color scheme? No wonder HAL went crazy.....damn horrible interior design...

Re:Not what HAL stood for (1)

etwills (471396) | more than 7 years ago | (#17721950)

Elsewhere, from googling "one step ahead of IBM":
The author of 2001, Arthur C Clarke emphatically denies the legend in his book "Lost Worlds of 2001", claiming that "HAL" is an acronym for "Heuristically programmed algorithmic computer". Clarke even wrote to the computer magazine Byte to place his denial on record.
[http://tafkac.org/movies/HAL_wordplay_on_IBM.html , goes on to argue this is unconvincing ... given HAL has a different name in the working drafts]

ventriloquists have already cracked this? (1)

amigabill (146897) | more than 7 years ago | (#17713902)

So, to avoid being observed, we all need to learn how to speak without moving our lips?

Re:ventriloquists have already cracked this? (1)

coldsleep (1037374) | more than 7 years ago | (#17714806)

Alternatively, you could just cover your mouth and/or face.

Re:ventriloquists have already cracked this? (1)

Mortanius (225192) | more than 7 years ago | (#17715044)

That, or perhaps folks capable of acting out very bad kung fu movie dubs.

Suddenly, Michael Winslow [imdb.com] becomes in-demand again.

Re:ventriloquists have already cracked this? (1)

badboy_tw2002 (524611) | more than 7 years ago | (#17717858)

Politicians have had to do this for years to keep us from figuring out if they're lying or not.

Re:ventriloquists have already cracked this? (0)

Anonymous Coward | more than 7 years ago | (#17720638)

That's the basis of good Japanese pronunciation


Finally! (1)

itlurksbeneath (952654) | more than 7 years ago | (#17714058)

...is to improve application productivity by a factor of 10 through new programming languages and development tools.
Cool. The government is finally ditching COBOL and FORTRAN.

It Hadda Be Said... (0)

Anonymous Coward | more than 7 years ago | (#17714124)

Scientist: HAL, read my lips.


HAL: I'm sorry, Dave, I can't do that...

GITS 2 (1)

CaffeineAddict2001 (518485) | more than 7 years ago | (#17714164)

We weep for a bird's cry, but not for a fish's blood. Blessed are those with a voice. If the dolls also had voices, they would have screamed, "I didn't want to become human."

Re:GITS 2 (1)

C0y0t3 (807909) | more than 7 years ago | (#17714662)

How do I rate this +1 disturbing? Oh CRAP I replied... nevermind.

vaporware and PR (3, Interesting)

geekpuppySEA (724733) | more than 7 years ago | (#17714168)

IAA graduate student in computational linguistics.

Later in the program, Holland says, PAL will be able to "automatically watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon."

PAL's role here is not clear. The 'easier' task would be to monitor the body language of the two conversers and, by lining up a list of tasks with the observation of their head movements, correctly predict which points in the conversation were the ones where someone performed an "agreement" gesture.

The much, much more difficult task would be to actually read lips. There are only certain properties of phonemes you can deduce from how the lips and jaw move; many, many other features of speech are lost. Only when you supply the machine with a limited set of words in a limited topic domain do you get good performance; otherwise, you're grasping at straws. And then taking out most of the speech signal? Please.
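As a concrete (and grossly simplified) illustration of how much is lost: voicing and nasality are invisible to a camera, so several phonemes share one mouth shape and visually distinct words collapse together. The grouping in this Python toy is made up for illustration, not a real viseme inventory:

# Toy illustration: /p/, /b/ and /m/ all look like "lips closed", so "pat",
# "bat" and "mat" have identical visual signatures. Simplified on purpose.
VISEME = {"p": "lips-closed", "b": "lips-closed", "m": "lips-closed",
          "f": "lip-teeth", "v": "lip-teeth",
          "t": "tongue-ridge", "d": "tongue-ridge", "n": "tongue-ridge"}

def visual_signature(word):
    # Keep only what the camera can see; vowels pass through untouched here.
    return tuple(VISEME.get(ch, ch) for ch in word)

for w in ("pat", "bat", "mat"):
    print(w, visual_signature(w))  # all three signatures are identical

This is why restricting the vocabulary and topic domain is about the only way to get usable accuracy out of it.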

But no, DARPA is cool and will save all the translators in Iraq (by 2009, well before the war ends). PR and vaporware win the day!

Allow me to be skeptical (2, Interesting)

Anonymous Coward | more than 7 years ago | (#17714370)

> Later in the program, Holland says, PAL will be able to "automatically watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon."

      Sure. You will first have to solve the problem of understanding natural language in an open ended environment - something that computers are not very good at yet.

      Quite frankly, AI people have been promising this kind of stuff for some 40 years now, and they have so far been unable to deliver. When is PAL going to be able to do what Holland aspires to? Not any time soon - most likely not within the next 20 years.

        AI people, please stop announcing such pie in the sky projects.

Say What? (1)

vtcodger (957785) | more than 7 years ago | (#17714386)

***"watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon."***

I don't know about anyone else, but my experience has been that very few conversations actually result in mutual agreement upon a task. Most conversations are indeterminate, and most of the rest result in symmetrically paired misunderstandings about what has been agreed to.

Oh well, at least for once "they" aren't spending my money to kill/maim innocent bystanders.

the real research behind this (3, Informative)

kneecramps (1054578) | more than 7 years ago | (#17715116)

WRT "watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon":

Here's a link to the actual research that they are likely talking about:

http://godel.stanford.edu/~niekrasz/papers/PurverEhlenEtAl06_Shallow.pdf [stanford.edu]

As you might expect, the ComputerWorld article's summary of the technology is rather optimistic. Nonetheless, this stuff really does exist, and shows some promise in both military and general applications.

Re:the real research behind this (1)

egardner4 (652075) | more than 7 years ago | (#17718916)

Actually, here's a link to a page at the project's prime contractor that gives a little more context:

http://caloproject.sri.com/about/

This page is actually about 1 of the 2 subprojects that together make up the PAL project.

I suggest that many of the posters to other threads should follow the publications link and bother to read some of the 50-odd citations. Only then will you really be in a position to speculate on what is and isn't hype. I guess it's actually easier to read a summary (/.) of an article (ComputerWorld) containing a summary of a web page (DARPA) summarizing the project.

Same sex conversations only... (5, Funny)

haggie (957598) | more than 7 years ago | (#17715144)

using natural-language processing, figure out what are the tasks they agreed upon.

This would only work for conversations between people of the same sex. There has never been a conversation between a man and a woman in which both participants would agree on the tasks...

M: Want to continue this conversation at my place?
F: Take a leap!
Computer: Agreed to move conversation to male's residence by leaping.

F: When are you going to mow the lawn?
M: Yeah, I'll get right on that.
Computer: Male agreed to mow lawn at earliest opportunity

Re:Same sex conversations only... (1)

gmletzkojr (768460) | more than 7 years ago | (#17715624)

F: Does this outfit make me look fat?
M: Ummm, of course not honey - you look great.
Computer: Application of this sheet-like outfit makes fat women look thinner.

Hearing Aids (1)

syrrys (738867) | more than 7 years ago | (#17715942)

Could this work in hearing aids? I know someone who uses a very expensive hearing aid and still has trouble following conversations. Man, that would be awesome. The hearing aid could not only amplify but correct/clarify what the listener thinks they are hearing.

Look Closer; Go Cross-Eyed (1)

carpeweb (949895) | more than 7 years ago | (#17715986)

I figured I'd check out the source behind the source and visit the DARPA [darpa.mil] web site. I got curious and decided to check out their latest budget estimate [darpa.mil]. What a peach! The "overview" looks like mainframe printout, which of course spills onto a second page (where you'd find the total DARPA budget of $3.3 billion for FY07). The details that follow (interrupted by a couple of marketing pages on the theme "ExpectMore.gov") make it pretty difficult to connect the dots -- and of course these are just the unclassified dots. Starting from the top budget line on bureaucratically formatted stationery, the details break down the line items, but the detail lines themselves are all found on separate pages, with text explanation in between. I guess they could have made it harder to interpret, but it would take some creative thinking to do so.

HAL (1)

petrus4 (213815) | more than 7 years ago | (#17717028)

"Dave, put that Windows CD down. Dave... DAVE!"