
Student Maps Brain to Image Search

ScuttleMonkey posted more than 6 years ago | from the fastest-computer-i-own dept.

Graphics 72

StonyandCher writes to mention that a University of Ottawa grad student is creating a search engine for visual images that will be powered by a system mapped from the human brain. "Woodbeck said he has already created a prototype of the search engine based on his patent, which apes the way the brain processes visual information and tries to take advantage of currently-available graphics processing capabilities in PCs. 'The brain is very parallel. There's lots of things going on at once,' he said. 'Graphics processors are also very parallel, so it's a case of almost mapping the brain onto graphics processors, getting them to process visual information more effectively.'"


72 comments

brain based search? (1)

w.hamra1987 (1193987) | more than 6 years ago | (#21509809)

that is definitely interesting, but really, can such a thing work? i don't think so.

Re:brain based search? (4, Funny)

MarsDefenseMinister (738128) | more than 6 years ago | (#21509825)

Car keys are always in the last place you look, so we know that brain based searching is inefficient at best.

Re:brain based search? (0, Redundant)

achilles777033 (1090811) | more than 6 years ago | (#21509945)

They are always in the last place you bother to look. Once you've found them, you don't need to keep looking.
Are they in the last place you could possibly think to look?

I can't speak for everyone, but I have a feeling you find your keys (most of the time) LONG before you run out of places you can think to look.

Just ordering all those places that I might have stuck the stupid things into a coherent search routine seems pretty CPU intensive to me, let alone moving myself to all those places to actually look, but it still never takes more than a few minutes to find them. Doesn't sound all that inefficient to me.

(I realize it might have been a joke, but so many people get worked up about 'the last place I looked' that I find it alarming more than funny)

Re:brain based search? (-1, Redundant)

Anonymous Coward | more than 6 years ago | (#21510249)

Anything you find is in the last place you looked.
If you found it, why would you keep searching?

Re:brain based search? (0, Redundant)

gatzby3jr (809590) | more than 6 years ago | (#21510313)

Ah - that means our brains have a lower bound of n for searching for car keys.

Re:brain based search? (1)

HandsOnFire (1059486) | more than 6 years ago | (#21510655)

As funny as it is, the brain stops searching because it has found what it was looking for. How would a computer know that it's found a match to what it's searching for?

Re:brain based search? (1)

achilles777033 (1090811) | more than 6 years ago | (#21511121)

My guess would be: when the % certainty reaches a sufficient threshold.
The computer can't tell it's holding keys, but if they are metallic, on a ring, and jingle, after a while it can be relatively certain.
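The threshold idea can be sketched in a few lines: accumulate evidence from independent feature checks and declare a match once confidence passes a cutoff. The feature names, weights, and threshold below are all made up for illustration:

```python
# Toy "stop when certain enough" matcher: each observed feature adds
# weighted evidence; we declare a match once confidence passes a threshold.
FEATURE_WEIGHTS = {"metallic": 0.4, "on_a_ring": 0.3, "jingles": 0.25}

def match_confidence(observed_features):
    """Sum the weights of the features actually observed."""
    return sum(w for f, w in FEATURE_WEIGHTS.items() if f in observed_features)

def is_probably_keys(observed_features, threshold=0.7):
    return match_confidence(observed_features) >= threshold

print(is_probably_keys({"metallic"}))                          # False: not enough evidence
print(is_probably_keys({"metallic", "on_a_ring", "jingles"}))  # True: confident match
```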

if you follow certain philosophical theories then that's all the human mind does anyway, since we can't be certain that we are in fact holding keys, or that the keys exist at all. ;)
(not that I agree, but it is interesting to read how other people think, closer to a pragmatist myself)

Re:brain based search? (3, Insightful)

cluckshot (658931) | more than 6 years ago | (#21512585)

The answer to many questions like the one in the very good parent post is to understand several things about the brain that even a 100% map will not disclose. Please understand, mapping the brain will be of value, though far less value than anticipated. The reason is that a map says relatively little about brain function: we actually already know the processes and the number of steps involved. There are also several features of the circuitry that are not at all captured in our silicon models.

Here is an abbreviated attempt to point out the differences in brain circuitry and why a map will not be of much value. The first problem is that the brain deals with a data structure unfamiliar to our current digital systems. The data has no absolute value; all incoming data is relative in value to previous data. This produces a linear calculus where answers are arrived at in a single XOR subtraction step. The incoming data form will be of more value than any map, as all the computational steps past the data entry point are known, and have been known for a long time. The next problem is that the brain has "ghost circuits," analogous to the old "cross talk" effects in analog telephone circuits. Unlike silicon and other circuits, where great effort is made to separate data and isolate circuits, the brain operates because signals can and quite intentionally do affect adjacent computational results. This is a three-dimensional space effect. Another reason a map will not be of much value is that the circuitry sits in a chemical bath that is altered by reaction sums; the result is another computational step that draws relational conclusions.

If I haven't confused everyone by now, it isn't by lack of trying to be as simple as can be. The calculus is similar to slide rule operations. (Something forgotten today.) The other functions produce a structural ability for the circuitry to produce results even with highly error ridden data and with outright gaps in data. Data purity above about 17% is sufficient for nearly perfect operations. The unique feature of the data form is that it also allows data which is arrived at by completely type unrelated sensors to be applied to derive intelligent results. This means you can take the output of an ear and match it to a visual field and get useful data! The differential data structure also allows nearly infinite memory compression and use of broad band differential sums to control responses and a gross filter. You use this driving a car.

The basic problem we have in producing an analog to the brain in computers (of the artificial intelligence type) is that we attempt to do with absolute-value sensors what is being done with relative sensors. The result is that computations form a geometrically more difficult solution set where differential data would have produced a linear solution set. To be plain, this allows as little as 4 or 5 computational steps to arrive at a very intelligent solution where a googolplex of steps would be required using absolute-value processing, with a lesser result.

There is also a sensor reality that is missing. All natural sensors are motor-controlled to center-point damping, a time-delay cancellation that produces only differential data as output. None of our synthetic sensors performs this function, and that is why we do not get the results we want. It is really a pretty simple fact that if you want to reverse engineer something you should actually reverse engineer it. Mapping the brain will disclose logical circuits we already know exist, and the results of their calculations. Only the data form will tell what is going on.
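The "relative rather than absolute" data idea at least has a familiar engineering analogue in frame differencing, where each new frame is encoded only by its change from the previous one. A minimal sketch (pure Python, toy 1-D "frames"; no claim that this is what the brain actually does):

```python
# Minimal frame differencing: represent each new frame only by its
# change relative to the previous frame (a "relative value" encoding).
def frame_diff(prev, curr):
    return [c - p for p, c in zip(prev, curr)]

def reconstruct(prev, diff):
    # The original frame is recoverable from the previous frame plus the delta.
    return [p + d for p, d in zip(prev, diff)]

frame_a = [10, 10, 12, 200, 10]
frame_b = [10, 11, 12, 40, 10]   # one region changed sharply

delta = frame_diff(frame_a, frame_b)
print(delta)                      # [0, 1, 0, -160, 0] -- mostly zeros
assert reconstruct(frame_a, delta) == frame_b
```

The delta is mostly zeros: only the changed region carries information, which is the property differential encodings exploit.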

Find Edges (1)

tepples (727027) | more than 6 years ago | (#21513793)

There is also a sensor reality that is missing. All natural sensors are motor-controlled to center-point damping, a time-delay cancellation that produces only differential data as output. None of our synthetic sensors performs this function
...because that's the job of the high-pass filter behind the sensor.

Re:Find Edges (1)

cluckshot (658931) | more than 6 years ago | (#21573773)

If you look closely, the remark you made is not correct. The natural sensors are quite different from the high-pass filter situation you describe. The natural design has several effects, including allowing the devices to respond linearly over a complete log range above and below that of a sensor with a high-pass filter. It also produces data which is entirely different in value.

High-Pass filters produce absolute value data. Doing a simple parallel subtraction of a data field from a previous field using absolute value data produces no useful data. Doing exactly the same thing to the natural sensor output produces ready for use command data for servo control and the like. It is a calculus similar to that of a slide rule. Unless you understand the difference in the data you understand nothing. The data coming out of both sensors appears identical to the casual user. There is a completely different set of data represented by the values exiting the natural sensor.

The reason this is significant is that in a visual field using a series of 4 parallel operations you can reduce a visual field to useful data. Using a standard 10 mega-pixel camera you can process the logical operations in approximately 42 million clock cycles for the data to be ready using a standard single processor computer. This would allow a standard 3.2Ghz processor to process about 761 frames per second for intelligent data handling for robotic control. Since the processor only needs say 60 frames per second input rate, it has lots of time to do command and control operations. This is a robot with time on its hands. Doing the exact same operation using the usual sensor devices of today takes approximately 1 second to process one frame differential and reduce it to useful data for command and control. In order to process useful data at 4 frames per second you need a quad core machine dedicated to the job. This has to do with the logical control and program steps needed. This is why visual field driving systems are useless today. They simply need too much horsepower to run them and the job is growing geometrically with new requirements using absolute value data. Using the natural differential data, adding new demands is a trivial cost on the system.
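For what it's worth, redoing the throughput arithmetic from the figures quoted above (10 megapixels, 4 operations per pixel, a 3.2 GHz single core, assuming one operation per clock) gives roughly 80 frames per second, not 761; the quoted figure appears to be off by a factor of ten:

```python
pixels = 10_000_000          # 10-megapixel frame
ops_per_pixel = 4            # "4 parallel operations" per pixel
clock_hz = 3.2e9             # 3.2 GHz single core, assume 1 op/cycle

cycles_per_frame = pixels * ops_per_pixel   # 40 million cycles per frame
fps = clock_hz / cycles_per_frame
print(round(fps, 1))         # 80.0 fps -- nowhere near 761
```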

I have studied both the physical natural system and computer systems in depth. I wrote my response because I have noticed a completely mistaken notion that the sensor is not important. I am the son of one of the men who put men on the moon. Because of bandwidth and other issues, my father designed based upon these natural system constraints and was able to handle massive data streams in real time by following these rules. The computers of that time were trivial in ability compared with today's, yet they did well what might be impossible by current thinking. Please understand that I am trying to help advance the science.

If you are going to engineer something that already works, it is wise to reverse engineer it. To do that you have to stop trying to improve on the design and simply copy it; the design, as it turns out, is already optimal. In visual fields, for example: the optical response range of the eye is 16.5 f-stops from white to black, while the optical range of our best sensors with high-pass filters is 5.5 f-stops. Curiously, the photosensitive medium itself has a natural limit of 5.5 f-stops; the feedback control is what generates the wide response range. Here is an example of the impossible being done. Learn from it.

Re:brain based search? (4, Funny)

zippthorne (748122) | more than 6 years ago | (#21511521)

You're making quite an assumption, there. They might be in the last place you look, but I make sure to keep looking after I find 'em every so often, just to avoid that awful cliché.

The best part is, you can put "finding the keys" in any percentile you want, just by looking some more. Heck, you can really screw with the average by looking for 'em occasionally when you already know where they are.

why do I get the feeling that this is going to ... (3, Funny)

zappepcs (820751) | more than 6 years ago | (#21509823)

turn out like a bad nightmare after watching A Clockwork Orange?

What about if you were Alex (0, Troll)

Therapist of Slashdo (1195429) | more than 6 years ago | (#21509923)

You know, rather than just watching the movie. You could actually experience the Traumatic Images. [google.com] Now that would be really frightening, perhaps even drive you to jump from a high window. Now where's Julian, I need him to change the record, it's driving me nuts.

Re:What about if you were Alex (0)

Anonymous Coward | more than 6 years ago | (#21509979)

best troll ever

MOD PARENT UP (0)

Anonymous Coward | more than 6 years ago | (#21510157)

Now that IS funny.

Time to report another bug in firefox. (1)

oliverthered (187439) | more than 6 years ago | (#21516163)

The above "Traumatic Images" link looks like it searches Google for Clockwork Orange images when in fact it goes to goatse. If Firefox scrolled the URL you would be able to see all of it and know that it goes to goatse, but because it doesn't, you can't tell where it goes.

This looks like a good one for some kind of attack (possibly a goatse one).

Re:Time to report another bug in firefox. (1)

Nicolay77 (258497) | more than 6 years ago | (#21517037)

Opera shows the full URL in a tooltip, I just have to hover the mouse over the link.

Yes, another one for you Firefolks to copy ;)

Re:Time to report another bug in firefox. (1)

BattleApple (956701) | more than 6 years ago | (#21521079)

In Firefox, hold the shift key while hovering. It only shows the beginning and end of the url, but you can see the goatse in there.

Re:Time to report another bug in firefox. (1)

BattleApple (956701) | more than 6 years ago | (#21521197)

Actually, I just realized that functionality comes from the All-In-One Gestures add-on. Anyway, that's one way to see it - besides right-click/Properties.

Nice! (1)

mfh (56) | more than 6 years ago | (#21509827)

Now I'll find that sweet looking woman I saw at the bar back in 1993. I know she's out there somewhere... I just HAVE TO CONCENTR- oh no.

Beer goggles (1)

CarpetShark (865376) | more than 6 years ago | (#21515427)

Now I'll find that sweet looking woman I saw at the bar back in 1993... ...oh no.


And the beer goggles strike again!

Problem: we don't KNOW how the brain does it (5, Insightful)

algorithmagic (1194567) | more than 6 years ago | (#21509865)

I worked in visual brain research for years, and can vouch there are lots of skeletons in the closet, or elephants in the drawing room: there is no accepted model of the statistics of real images (corners, occlusion, shading), nor of the algorithms necessary to infer them from inputs, nor of the learning process to infer those algorithms. Yes the brain is parallel, and yes it involves robust, fuzzy processing and analog values, but we not only don't know how the brain does it, we don't even know what problem it's trying to solve. The good news is that if this student does indeed have a business model and a real-world problem people will pay to solve, then the ratchet of engineering evolution could give us some real traction into understanding and solving this mystery. Good luck!

Re:Problem: we don't KNOW how the brain does it (1)

morgan_greywolf (835522) | more than 6 years ago | (#21510253)

I seem to remember reading something about the visual heuristics the brain uses to identify faces, objects and so forth. There's some we do know, but I think you're right that there is a LOT about how the brain works in this area that we don't know. Of course, we've only had what knowledge we do have (or at least think we have) about the brain for a relatively short time. The study of the human brain is relatively young compared to the study of other parts of the body -- one of the main reasons being that we didn't have the necessary toolset to study these things until very recently (relatively speaking, of course).

Re:Problem: we don't KNOW how the brain does it (1)

lawpoop (604919) | more than 6 years ago | (#21512705)

I remember reading somewhere along the line about a 'geon' theory of vision. This theory posited that the mind had a 'virtual reality' of basic 3-D geometric shapes, which it used like Legos to build a model of what it saw. So you have cubes, cylinders, and spheres, which you use to model the things you see.

Then it occurred to me that very few things we see are geometric objects, or composed of geometric primitives. It's really only once you start living in cities and dealing with manufactured goods that you start to encounter geometric objects.

If you buy the theory that modern human beings evolved on the savannahs of Africa some 100,000 years ago, the only geometric objects they would have seen were the sun and the moon. Instead, imagine this scene:

A field of short grasses is split by a meandering stream. On the other bank, shrubs conceal the trails of various grazing animals. Beyond that the treeline begins, a wall of various deciduous species. Above that, various cloud formations obscure the blue sky.
Now, if you're a savvy hunter-gatherer, you need to read that landscape, and there aren't any geometric objects anywhere. You need to see animal tracks and trails in the grassland. See how deep and how fast the water is moving by looking at the surface. Find predators lurking in the shrubs. Identify various plant species for food, materials, and medicines. Read the clouds above to foretell the weather.

I don't know how you would accomplish any of those tasks using classical geometry or the Pythagorean theorem -- there aren't any triangles. Perhaps you'd use fluid dynamics modeling to analyze the stream and the clouds, and maybe fractal geometry to analyze the plants. I don't know, but I think we're a long way from the answer.

Re:Problem: we don't KNOW how the brain does it (1)

12357bd (686909) | more than 6 years ago | (#21515145)

Yes

The same goes for statistical classifiers. Human categories ('faces', 'cars', 'landscapes') are not mathematical objects (there's no mapping between concepts/cultural constructs and formulas/formal expressions). Any formal system trying to express a non-formal one is doomed to fail, except for the very few special cases where human categories map onto well-defined mathematical objects (e.g., ball -> 2-D/3-D circle, box -> 2-D/3-D rectangle).

Statistical systems try to create a map between basic data characteristics (lines/textures/disposition) and a category (e.g., 'face', 'car'), using data sets as 'examples' of those categories and trying to derive a relatively simple mathematical/geometric relation between the characteristics of the samples (e.g., clustering is all about projecting those samples onto an n-dimensional space, searching for a minimal distance between samples of a given category and a maximal one between samples of different categories).

The problem is that no matter how many characteristics you use, or how clever you make the relation/projection, the resulting data doesn't have to map to a geometric shape, so the success rate has more to do with the specialization of the categories/data sets than with a real category detection 'methodology'.
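The clustering description above -- project samples into a feature space, minimize within-category distance, maximize between-category distance -- is essentially nearest-centroid classification. A toy sketch, with entirely made-up 2-D feature vectors standing in for real image features:

```python
import math

# Toy nearest-centroid classifier: each category is summarized by the
# mean of its example feature vectors; a query is assigned to the
# category whose centroid is closest in the feature space.
examples = {
    "face": [(0.9, 0.1), (0.8, 0.2), (0.85, 0.15)],
    "car":  [(0.2, 0.9), (0.1, 0.8), (0.15, 0.95)],
}

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

centroids = {label: centroid(pts) for label, pts in examples.items()}

def classify(query):
    return min(centroids,
               key=lambda label: math.dist(query, centroids[label]))

print(classify((0.88, 0.12)))  # "face"
print(classify((0.1, 0.85)))   # "car"
```

The catch, as the comment notes, is that real categories need not form tidy clusters in any feature space you can compute, so success depends heavily on how specialized the data sets are.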

There's still hope in the neural simulation field (e.g., recurrent neural networks), but TFA seems not to be in this line of thought.

Careful whose brain you use... (3, Funny)

butterwise (862336) | more than 6 years ago | (#21509933)

You might end up with a search engine that just looks for pr0n.

Re:Careful whose brain you use... (0)

Anonymous Coward | more than 6 years ago | (#21510027)

...because that would be a real failure, since we all know there's no money in porn...

Image search? (1)

SnoopJeDi (859765) | more than 6 years ago | (#21509991)

Anybody else think image-in-results-out when they first read the summary? Actually, even TFA doesn't make this plainly obvious until you're decently through it.

Once I was past that, I thought it was pretty interesting. It could lead to more honest tagging of videos on YouTube, for example. No more keyword nonsense, just tags assigned by the engine.

Bad article (1)

El Pollo Loco (562236) | more than 6 years ago | (#21510019)

This is a pretty useless article. Doesn't really tell how he's planning on doing it. It's a patent pending method. All it basically says is "Hey, look what we might be able to do". Even the quote from the expert doesn't do anything to tell me this problem is on its way to being solved.

Re:Bad article (1)

Conspiracy_Of_Doves (236787) | more than 6 years ago | (#21510119)

Oh, I can tell you right now what he's planning to do with it. He's planning to sell it to Google.

Re:Bad article (1)

Conspiracy_Of_Doves (236787) | more than 6 years ago | (#21510187)

Does it ever happen to anyone else that, in the couple of seconds between clicking the submit button and the page refreshing, you see clearly what the person you are responding to actually wrote and realize you read it completely wrong?

Re:Bad article (1)

bar-agent (698856) | more than 6 years ago | (#21510641)

I'm having nightmare images of the next generation of Google image search.

It would be powered by dark room after dark room of people strapped into chairs, fed by IV, wearing helmets full of electrodes spearing into their brains. 24 hours a day, 7 days a week, these helmets send dozens of images into those poor, fried brains and see if any of those over-saturated neurons pick up on a match. Then Google posts the results.

Re:Bad article (1)

Metasquares (555685) | more than 6 years ago | (#21512541)

I doubt they're interested in this sort of thing right now. I submitted a paper on how to do multimedia similarity search to them when interviewing there, and was told that similarity-based image search isn't an area they're concerned with at the moment. Because it isn't, it's probably a better idea for him to go into it on his own (also, once they see it in action, they might want it).

Also, everyone in computer graphics has some sort of image similarity search method, and I don't see anything particularly novel about this one. Wake me up when he has some classification accuracies or something showing that his method works better than the others.

Re:Bad article (1, Insightful)

Anonymous Coward | more than 6 years ago | (#21510231)

It's called vaporware. The research might work out beautifully someday, but apparently hasn't yet. Writing it up as news is probably a little premature.

Re:Bad article (2, Informative)

caffeinemessiah (918089) | more than 6 years ago | (#21512401)

This is a pretty useless article. Doesn't really tell how he's planning on doing it.

I absolutely agree with you. Even the Computerworld (admittedly not the pinnacle of scientific reporting) article starts by saying "University of Ottawa student Kris Woodbeck is combining the neural processes we use to understand image data with the features of graphics processors." I don't even know where to begin with that statement. So he's come up with a model of neural image processing (a feat in itself)... and is mapping it to a GPU? This is like saying "we've figured out how to isolate stem cells from a source other than human embryos... and we used plastic petri dishes to do it!"

Second: 'The brain is very parallel. There's lots of things going on at once," he said. "Graphics processors are also very parallel.' OK, is this a science finding (i.e. a new image processing ALGORITHM based on the brain), or a systems paper (we came up with a parallel GPU version of an algorithm)? I really hope he was misquoted, because otherwise it sounds like vaporware or untested hypotheses.

And then: 'For images, it might be when you took it, with what camera, with what exposure, that's about it. Then you're stuck with a red barn in rolling hills and I might know it was taken in California, but no one else does. How do you surface that metadata so it becomes much more searchable?' OK, now where did this come from again? Neural processing? Parallel GPUs? And now inferring metadata?

Sorry, but getting a provisional patent is hardly a difficult thing (most universities in the US will file one for free if *YOU* think you might be on to something). Furthermore, all this would be more credible if they published some results and at least a brief description (which is allowable by patent law). Until I see some numbers, this is non-news.

Re:Bad article (0)

Anonymous Coward | more than 6 years ago | (#21514315)

Mod parent up.

Also, mod the Canadian government as "Stupid" or "Desperate".

Fatal error? (0)

Anonymous Coward | more than 6 years ago | (#21510245)

What happens when you ask it to search for goatse?

Great idea (0)

Anonymous Coward | more than 6 years ago | (#21510329)

Sadly, all that ever was found was porn.

Re:Great idea (0)

Anonymous Coward | more than 6 years ago | (#21510671)

You mean that wasn't the primary goal?

Actually.. (1)

mcscooter (1166081) | more than 6 years ago | (#21510357)

..processors do one thing at a time, so they would be linear, not parallel.

crap article (1)

blackcoot (124938) | more than 6 years ago | (#21510359)

it's impossible to tell if this guy is patenting the idea of doing vision on gpus (in which case there's prior art going back to before 2004 and probably even beyond that in the gpgpu community) or if he's talking about some tremendously clever collection of algorithms that happens to map well to gpu hardware. either way, i suspect that the poor guy is about to discover the hard way just how extraordinarily difficult this problem is.

Left blain (1)

davidsyes (765062) | more than 6 years ago | (#21510991)

light blain...

Yeh, there be parrallellism there....

(2 Ls up there, 2 Ls down here; 2R, 2L, 2L... get it?)

Parallel??? Not really (0)

superstick58 (809423) | more than 6 years ago | (#21511033)

'The brain is very parallel. There's lots of things going on at once,'

I have a bit of an issue with that statement. I guess in a way it is true that the brain does multiple things simultaneously, such as balancing the body and chewing gum ;), but any article [apa.org] on multitasking [cnn.com] will tend to point out that the brain isn't very good at processing higher functions simultaneously. I guess the main goal may not be to simultaneously process multiple images, but to quickly process a single image (which the brain is good at) based on the content rather than the metadata associated with it.

Re:Parallel??? Not really (1)

Hatta (162192) | more than 6 years ago | (#21511171)

The brain can only do one task at a time, but it does this task by breaking it down into smaller tasks which are done in parallel.

Re:Parallel??? Not really (0)

Anonymous Coward | more than 6 years ago | (#21511285)

The visual system is plenty parallel, insofar as the early visual system operates on very local sections of the image (to, say, extract edges). The local processing happens over the whole image at the same time (by processes spread over a large section of the brain). The dual-tasking bottleneck referred to in those multitasking articles does not apply to early vision.
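The "local and parallel" structure of early vision can be illustrated with a 1-D edge filter: each output value depends only on a pixel's immediate neighborhood, so every position could be computed independently and simultaneously. A toy sketch (the row of intensities is made up; a serial list comprehension stands in for the parallel map):

```python
# Early-vision-style edge extraction: each output depends only on a
# local neighborhood, so every position could be computed in parallel.
row = [10, 10, 10, 200, 200, 200, 10, 10]

def edge_at(i):
    # Simple difference-of-neighbors filter at position i.
    return abs(row[i + 1] - row[i - 1])

# This map has no cross-position dependencies at all -- exactly the
# structure that GPUs (and, per the comment, early vision) exploit.
edges = [edge_at(i) for i in range(1, len(row) - 1)]
print(edges)  # [0, 190, 190, 0, 190, 190] -- spikes at the boundaries
```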

Re:Parallel??? Not really (1)

tehdaemon (753808) | more than 6 years ago | (#21513449)

You are confusing 'brain' with 'mind'.

The mind has problems multitasking - not too different from a CPU. The brain does a lot of things in parallel. In fact, each neuron is independently doing its own thing, much like each transistor in a CPU...

T

Can you patent how the brain works? (2, Insightful)

Kazoo the Clown (644526) | more than 6 years ago | (#21511973)

If you can, the patent system is more than a little bit broken, though I guess we all know that by now. I would think that the existence of the brain would constitute prior art...

There's prior art to invalidate this! (1)

hadaso (798794) | more than 6 years ago | (#21515215)

There's prior art to mapping the brain onto electronic computing devices:

It was done in at least one episode of Star Trek.

And if future prior art published in the distant past is not suitable, then Wallace's cross human-rabbit brain mapping ("Wallace & Gromit - The Curse of the Were-Rabbit") might apply (a rabbit's brain IS a kind of electronic computing device, as is a human brain).

Both examples are both "prior" and "art"!

If not applicable, prepare to either pay a licensing fee or stop using your brain (if you haven't done so already). Perhaps legislators should prepare by creating a special tax and mandatory license. And the free culture community should devise alternatives that bypass the patent by accomplishing the same functionality without the use of a brain (plants provide plenty of prior art for brainless existence, as do most inanimate objects).

Anyway, if such patents are accepted and legislators do not prepare in advance it might be quite difficult to invalidate it because the judges and jury would depend on the patented technology to do their job...

Downside of Biologically Inspired Computing (1)

QuantumFTL (197300) | more than 6 years ago | (#21513419)

While I was an intern at the Jet Propulsion Laboratory, back when I was an undergraduate, I was very gung-ho about biologically inspired computing - I implemented an automatic flowchart positioning system using a genetic algorithm that would "evolve" a correct solution to the problem. While this certainly worked to some extent, the instability and sheer unpredictable nature of using such a stochastic algorithm made it impossible to use in a mission-critical setting. Many biologically inspired algorithms solve problems through methods that cannot be proven correct (unlike, say, the mathematics circuitry in a CPU), but merely empirically observed to "do a good job."
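A genetic algorithm of the sort described reduces to a small loop: score a population, keep the fittest, mutate copies, repeat. A minimal sketch on a toy objective (maximize the number of 1-bits; all parameters here are illustrative, not the JPL system):

```python
import random

random.seed(0)  # a GA is stochastic; seeding only makes this demo repeatable

def fitness(bits):
    return sum(bits)  # toy objective: maximize the number of 1-bits

def mutate(bits, rate=0.1):
    return [b ^ 1 if random.random() < rate else b for b in bits]

def evolve(pop_size=20, length=16, generations=40):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        pop = parents + [mutate(p) for p in parents]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # usually close to the maximum of 16
```

Note the point the comment makes: nothing here is provably correct; the result is only empirically observed to be good, and without the seed, two runs can differ.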

One of the main drawbacks of human engineering is the need for certainty, which often prohibits the use of many high-efficiency stochastic algorithms (especially for things like mesh communication) in conservative industries, like the US defense industry. This is also a significant problem in other areas, however, and many biologically inspired algorithms have properties that we cannot, so far, completely explain - they are treated like "black boxes" with many unknowns for engineering purposes.

I think that in certain circles, the tremendous success that is evolution on this planet has overshadowed its inherent weaknesses - that it is a greedy, local optimizer which cannot reach a large amount of the possible biological search space due to being stuck in local optima, and the added constraint that everything must be constructed out of self-replicating units (these two factors are why something useful, like, say, a Colt 45, will never emerge without the pre-existence of an intelligence). Biological examples are fascinating and often practical, but the biological approach is almost always "brute force" and/or "sub-optimal but still alive."

I think biologically-inspired algorithms will continue to gain prominence, but in my estimation, it is likely that there will be harsh limits imposed on how far guarantees of performance from empirical tests and symbolic analysis will actually hold.

(Blatantly pasted from my post a few years ago)

Re:Downside of Biologically Inspired Computing (1)

mcoletti (367) | more than 6 years ago | (#21534997)

the tremendous success that is evolution on this planet has overshadowed its inherent weaknesses - that it is a greedy, local optimizer which cannot reach a large amount of the possible biological search space due to being stuck in local optima
Untrue. There are EC mechanisms to deal with inferior local optima such as hypermutation, restarts, coevolution, island models, and dynamic population sizes, among others.
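Of the mechanisms listed, restarts are the simplest to demonstrate: run a greedy hill-climber several times from random starting points and keep the best result, which usually escapes a local optimum that a single run can get stuck on. The toy 1-D fitness landscape below (a short hill near x=2, a taller one near x=8) is made up for illustration:

```python
import random

random.seed(1)  # deterministic demo of a stochastic method

# Toy multimodal fitness: a weak local optimum near x=2 and the
# global optimum near x=8, separated by a valley at x=4.
def fitness(x):
    return max(0.0, 2 - abs(x - 2)) + max(0.0, 4 - abs(x - 8))

def hill_climb(x, step=0.1, iters=200):
    # Greedy climber: accept a random neighbor only if it is no worse.
    for _ in range(iters):
        candidate = x + random.choice([-step, step])
        if fitness(candidate) >= fitness(x):
            x = candidate
    return x

# Restarts: several independent climbs from random starts, keep the best.
results = [hill_climb(random.uniform(0, 10)) for _ in range(10)]
best = max(results, key=fitness)
print(round(best))  # near 8, the global optimum
```

A single climb started below x=4 ends stuck at the short hill; with ten restarts, at least one start lands in the global optimum's basin with near certainty.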

Everything must be constructed out of self-replicating units (these two factors are why something useful, like, say, a Colt 45, will never emerge without the pre-existence of an intelligence).
I hope you're not making some obtuse case for Intelligent Design. Anyway, the use of "self-replicating" is puzzling since I don't think most EC researchers think of populations as being self-replicating, at least not in the real world biological sense. EC != Biology.

Re:Downside of Biologically Inspired Computing (1)

QuantumFTL (197300) | more than 6 years ago | (#21536021)

the tremendous success that is evolution on this planet has overshadowed its inherent weaknesses - that it is a greedy, local optimizer which cannot reach a large amount of the possible biological search space due to being stuck in local optima
Untrue. There are EC mechanisms to deal with inferior local optima such as hypermutation, restarts, coevolution, island models, and dynamic population sizes, among others.

The purpose of my statement was not to suggest that EC is hopelessly flawed, but that biological evolution is - there's just too much of the state space it can never reach (without the use of something like a sentient species, which I'm not going to include in my discussion). That means that if we copy solutions found by nature (neural networks, etc.), we should be aware that there is likely a much better solution to the problem, one that biological evolution, due to its particular constraints, could not evolve.

Indeed I am very much aware that many EC algorithms mitigate this problem somewhat, mostly through the use of a central controlling entity (something that we presumably do not have in biological evolution). However, just about all EC optimizers have extreme difficulty with the (very real) case where the global optimum (and other, almost-as-good solutions) occupy very, very small regions of the state space, around which fitness drops to almost nothing - for instance, try constructing almost any piece of high technology, such as a rocket or a modern gun. Get one thing slightly off, and the entire device doesn't work! No amount of hypermutation will let you find these very sparsely distributed solutions in any reasonable amount of computer time - there's just not enough useful gradient information.
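To make the "no useful gradient" point concrete, here is a toy Python sketch (the function names and constants are made up): a landscape that is flat everywhere except a vanishingly narrow spike gives a stochastic searcher nothing to climb, so it degenerates into a blind random walk:

```python
import random

def needle_fitness(x, target=0.123456, eps=1e-7):
    """Flat landscape with one vanishingly narrow spike: no gradient at all."""
    return 1.0 if abs(x - target) < eps else 0.0

def stochastic_search(fitness, steps=10_000, step_size=0.01):
    x = random.random()
    best = fitness(x)
    for _ in range(steps):
        cand = x + random.uniform(-step_size, step_size)
        if fitness(cand) >= best:  # everywhere flat, so this is a blind random walk
            x, best = cand, fitness(cand)
    return x, best

# Count how many of 20 independent runs ever find the needle.
hits = sum(stochastic_search(needle_fitness)[1] for _ in range(20))
print(f"needles found in 20 runs: {int(hits)}")  # almost always 0
```

This is the "rocket or modern gun" situation in miniature: get one thing slightly off and fitness is zero, so nearby failures tell the optimizer nothing about where the working design lives.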

Everything must be constructed out of self-replicating units (these two factors are why something useful, like, say, a Colt 45, will never emerge without the pre-existence of an intelligence).
I hope you're not making some obtuse case for Intelligent Design. Anyway, the use of "self-replicating" is puzzling since I don't think most EC researchers think of populations as being self-replicating, at least not in the real world biological sense. EC != Biology.

I haven't been accused of being an ID proponent for some time (despite the theoretical possibility of aliens or whatever interfering with our evolution, there's no particular reason to believe they or any other "higher power" has). Perhaps my post did not make it clear: those statements about the limits of evolution "on this planet" were referring to biological evolution. Here's the part which was key to this article:

Biological examples are fascinating and often practical, but the biological approach is almost always "brute force" and/or "sub-optimal but still alive."
We're taking something from a known, working biological system. That's cool. It will probably work well, even. However, as optical illusions demonstrate, there are plenty of inputs that the human visual system can't handle properly. Indeed, this is the kind of thing I'd expect to see problems with if this system were put to use in practice, especially in a security setting. I hear all the time that "it's arrogant for us to think we can do something that nature could not in 4.5 billion years of evolution." That's bollocks; biological evolution is hopelessly flawed for solving some types of problems, and we can absolutely do better. The only question is whether or not we will.

Anyways, sorry for the confusion. Only some of my statements apply to EC, which, obviously, has fewer restrictions on it than biological evolution.

depends on whose brain they map. (1)

CFD339 (795926) | more than 6 years ago | (#21513785)


    I'd imagine that the number of "Probable Hits" will be heavily weighted toward pr0n sites if anyone from around here gets mapped.

BS (1)

Potatomasher (798018) | more than 6 years ago | (#21516759)

I call BS on the article as well...

As far as "generic object recognition" goes, we are VERY far from a Holy Grail. State-of-the-art algorithms so far have a 45-55% successful recognition rate when dealing with only 101 object categories (the Caltech 101 database). Basically, with only 101 objects to choose from, your "search engine" would get it wrong half the time. Not very useful, if you ask me. Let alone with the hundreds of thousands of categories he claims.

On top of that, the best and brightest are already working on this problem at MIT in Dr. Poggio's lab (Computational Intelligence); Poggio, along with Marr, helped found the field of computer vision. The problems encountered are still at the theory level, not the implementation level, so a GPU implementation shouldn't change much.

Disclaimer: I am also a graduate student working on this problem, and I happen to have graduated from the University of Ottawa.