BellKor Wins Netflix $1 Million By 20 Minutes

kdawson posted more than 4 years ago | from the seven-guys-and-a-million-bucks dept.

Math 104

eldavojohn writes "As we discussed at the time, there was a strange development at the end of Netflix's competition in which The Ensemble passed BellKor's Pragmatic Chaos by 0.01% a mere twenty minutes after BellKor had submitted results past the ten percent mark required to win the million dollars. Unfortunately for The Ensemble, BellKor was declared the victor this morning because of that twenty-minute margin. For those of you following the story, The New York Times reports on how teams merged to form Bellkor's Pragmatic Chaos and take the lead, which sparked an arms race of teams conjoining to merge their algorithms to produce better results. Now the Netflix Prize 2 competition has been announced." The Times blog quotes Greg McAlpin, a software consultant and a leader of the Ensemble: "Having these big collaborations may be great for innovation, but it's very, very difficult. Out of thousands, you have only two that succeeded. The big lesson for me was that most of those collaborations don't work."

Anonymous Coward (1, Insightful)

Anonymous Coward | more than 4 years ago | (#29500931)

the topic confuses me

Re:Anonymous Coward (3, Insightful)

ksatyr (1118789) | more than 4 years ago | (#29501101)

The whole thing confuses me. Why are these extremely intelligent people doing research work for Netflix that would otherwise have cost it many times the prize money if done in-house? Are there at least share options down the road? I hope the ultimate solution(s) end up in the public domain.

Re:Anonymous Coward (0)

Anonymous Coward | more than 4 years ago | (#29501323)

Why don't you do the research yourself? It's in the frakin' FAQ right on Netflix's competition page...

Re:Anonymous Coward (2, Interesting)

kelnos (564113) | more than 4 years ago | (#29501659)

It is? I only see a bit in a question about licensing (somewhat tangential) that suggests that Netflix hopes that participants will be able to build a business out of the algorithm they design, but that sounds pretty weak, and doesn't have all that much to do with what the participants got, aside from the prize money.

The contest has been going on for three and a half years, and the winning team of seven will be splitting a cool million, which gives each person just under $143k, minus taxes. Now, I don't know how much time these guys spent on it, but even if they only worked a year's worth of regular work hours over the 3.5 years, $143k per year each for seven developers is a pretty damn good bargain from Netflix's perspective for what they got (not just the new algorithm, but a lot of good PR and buzz).

I'm not saying the BellKor guys got the shaft; they were certainly compensated (not just monetarily; I'm sure their employability went up as well), and I'm sure a big part of their desire to compete was the challenge itself. But I'd bet that Netflix would've had to pay quite a bit more.

And it's not like the BellKor team did all the work; all the other teams did some of the same work independently. I imagine many (most?) of them didn't stand a chance, but let's throw out a conservative number and say the top 5% of teams managed to improve on Netflix's existing algorithm (even if not by 10%). It's conceivable that an in-house team of paid developers/researchers would follow an analogous iterative process, achieving smaller gains and eventually reaching the 10% goal. Depending on Netflix's hiring skills, it's possible they wouldn't reach a 10% improvement without many more man-years of work.

This contest was a very smart move on Netflix's part: their only real downside is that the (self-imposed) competition terms allow the contest participants to license their implementations to competing companies.

Re:Anonymous Coward (0)

Anonymous Coward | more than 4 years ago | (#29504073)

The parent's conclusion can be summarized as a success for the collaborations and for Netflix, and the post is still modded off-topic? It's true: Offtopic is the new Troll. The moderationistas can't be wrong about this.

Re:Anonymous Coward (0, Offtopic)

kelnos (564113) | more than 4 years ago | (#29507771)

Yeah, I don't get it either. More like Offtopic is the new "I disagree with you".

Re:Anonymous Coward (0)

Anonymous Coward | more than 4 years ago | (#29508825)

I just modded you off topic because I disagree that you think that offtopic is the new I disagree. So there!

Re:Anonymous Coward (1)

AvitarX (172628) | more than 4 years ago | (#29504409)

That's how bounties work.

It's for the publicity on all sides in the end (and the challenge to the competitors).

The winning team did all right, but the second-place team, nearly as good, got absolutely nothing for pretty much an equivalent result. They are the ones truly losing out, even if the prize had been $100 million.

Re:Anonymous Coward (1)

retchdog (1319261) | more than 4 years ago | (#29503421)

The intelligent people (well, at least the ones who also had enough time to come close to winning...) have good research jobs already.

I've also heard that, even before they won the prize, they were selling some of their tangential/spin-off ideas to Netflix... The prize seems to have been more of a trigger.

Bad Summary (5, Informative)

Anonymous Coward | more than 4 years ago | (#29500935)

The Ensemble beat BellKor by 0.01%... by their own reporting. According to Netflix, it was a tie. In the case of a tie, the earlier submission wins.

Re:Bad Summary (5, Informative)

tangent3 (449222) | more than 4 years ago | (#29501513)

The Ensemble beat BellKor by 0.01% on the quiz set. Basically, there are 2.8 million records in the qualifying set whose ratings the teams must predict. Half of the records (which half is known only to Netflix) form the quiz set; the other half form the test set. Teams could submit predictions at most once a day to get a result on the quiz set, but the final decision of who won is made on the result from the test set.

So even though Ensemble beat BellKor on the quiz set, the test set results came back dead even.
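
To make the mechanics concrete, here's a minimal sketch of that split in Python, using synthetic data. The record count comes from the comment above; the ratings and the noise level are invented purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the ~2.8M qualifying records' hidden true ratings.
    n = 2_800_000
    truth = rng.integers(1, 6, size=n).astype(float)

    # The judge secretly splits the qualifying set in half: quiz vs. test.
    is_quiz = rng.permutation(n) < n // 2

    def rmse(pred, actual):
        return np.sqrt(np.mean((pred - actual) ** 2))

    # A team's daily submission is scored on the quiz half only...
    submission = truth + rng.normal(0.0, 0.85, size=n)  # toy predictions
    print("leaderboard (quiz) RMSE:", rmse(submission[is_quiz], truth[is_quiz]))

    # ...but the winner is decided on the held-out test half.
    print("final (test) RMSE:", rmse(submission[~is_quiz], truth[~is_quiz]))

With a split this large the two halves almost always agree, which is why the real contest coming back "dead even" on the test set was such an unusual finish.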

It was a tie... (2, Interesting)

rm999 (775449) | more than 4 years ago | (#29500969)

It was a tie...

In football, I can see how a 20 second difference can make the difference in winning the Super Bowl. In a contest like this, which took thousands of man-hours from some brilliant people, calling The Ensemble "second place" due to a 20 second difference is just wrong. I don't know if there was a better solution, but something just seems wrong about it all.

Re:It was a tie... (3, Informative)

dingen (958134) | more than 4 years ago | (#29500983)

Although I think the actual 20 minute difference, instead of your imaginary 20 second difference, is a little less harsh, you're still right.

Re:It was a tie... (1)

rm999 (775449) | more than 4 years ago | (#29501053)

Hah whoops, I guess that weakens my football analogy, but I stick to my point.

And it's not really about the money - a million dollars is nothing when you split it between the companies sponsoring the teams, but the right to say you won the contest means a lot. The 20 minutes realistically had nothing to do with winning or losing.

Re:It was a tie... (1)

Sparr0 (451780) | more than 4 years ago | (#29501219)

Why would you think the sponsor gets the prize money?

Re:It was a tie... (1)

TheCycoONE (913189) | more than 4 years ago | (#29502261)

He's probably had too much experience with vulture capitalists.

Re:It was a tie... (3, Insightful)

aywwts4 (610966) | more than 4 years ago | (#29501223)

Most football games didn't start in 2006, so proportionally 20 seconds is far too long. You didn't exaggerate nearly enough; someone else can do the math, though. (I'm real sleepy, but the imaginary football game came down to roughly 45 milliseconds?)

I'm really surprised Netflix didn't offer 2 million dollars to the two winning teams, or at-least some sort of consolation prize, as it was effectively a tie in a culmination of years of work.

These people did so much work that even at a million dollars they likely earned below minimum wage. Netflix has come a long way since 2006, and this kind of research would have cost many millions; they really can't lose here, unless the contest took so long that the code isn't useful and they have already surpassed 10% in-house.

Re:It was a tie... (2, Insightful)

dingen (958134) | more than 4 years ago | (#29501311)

Most football games last for a few minutes more than the standard 90 minutes, depending on the number of incidents during the match. The game would never be terminated in the middle of an interesting action and no proper referee cares about a few seconds.

Re:It was a tie... (1)

vigmeister (1112659) | more than 4 years ago | (#29502107)

I have to agree (especially after this past weekend's Manchester derby), but I believe the OP is making a reference to American football where the time (of play, not of the game) is much more tightly controlled.

Which makes the analogy interesting as different sports have different concepts of deadlines and duration of play. As long as there was no violation of their stated rules, I do not see a problem as I think both teams deserved to win and there was only room for one on the podium. I am sure Ensemble will do just fine for themselves (financially and in terms of reputation) with the algorithm they've developed over the past 3.5 years to a point where the 1M may not be such a big deal down the road.

Congrats to both!

Re:It was a tie... (1)

dingen (958134) | more than 4 years ago | (#29502457)

Congrats to both!

Hey, thanks!

Re:It was a tie... (0)

Anonymous Coward | more than 4 years ago | (#29502439)

Football (American) doesn't end even when the time runs out. It ends when the time runs out AND the play is over. So if the ball is intercepted at the 1 yard line with no time left on the clock, you have until you go out of bounds or are tackled to score.

Re:It was a tie... (1)

aiht (1017790) | more than 4 years ago | (#29500997)

One thing that just seems wrong about your post is the fact that it was 20 minutes, not 20 seconds.
Not that it makes much difference compared to thousands of man hours, but y'know, try to get it right.

Re:It was a tie... (1)

wizardforce (1005805) | more than 4 years ago | (#29501041)

It wasn't 20 seconds, it was 20 minutes. Technically, BellKor won the prize by virtue of reaching the target first; their competitor did beat them by a hair's breadth, but not before the goal was reached, and reaching the goal first was apparently the point of the competition.

there's always a plus side (0)

Anonymous Coward | more than 4 years ago | (#29505899)

It's better for competition. If The Ensemble had won, it would be like taking the 2nd-, 3rd- and 4th-placed runners and declaring their combined effort worthy of a gold medal, condemning the actual winner to 2nd place and silver.

So when does this start affecting my Netflix recs? (0)

Anonymous Coward | more than 4 years ago | (#29501005)

Are they actually putting the results into use?

Re:So when does this start affecting my Netflix re (1)

Niznaika (913305) | more than 4 years ago | (#29501481)

More importantly, are the algorithms open and not encumbered by any patents? Will there be software, other than Netflix's, that will use them?

Re:So when does this start affecting my Netflix re (1)

E IS mC(Square) (721736) | more than 4 years ago | (#29502661)

From what I read, Netflix is implementing two or three 'parameters' or 'methods' (out of possibly thousands the teams may have used) for now. (Can't find the link atm)

nonsense (5, Insightful)

wizardforce (1005805) | more than 4 years ago | (#29501029)

The big lesson for me was that most of those collaborations don't work."

Setting an arbitrary goal that only 0.2% of competitors could meet does not mean that most collaborations don't work. If 90% of the teams had met the target, you probably wouldn't be so quick to claim that the vast majority of collaborations do work; you'd say the goal wasn't set high enough.

Re:nonsense (0)

Anonymous Coward | more than 4 years ago | (#29501527)

FTFA:

Over three years, thousands of teams from 186 countries made submissions. Yet only two could breach the 10-percent hurdle. "Having these big collaborations may be great for innovation, but it's very, very difficult," said Greg McAlpin, a software consultant and a leader of the Ensemble. "Out of thousands, you have only two that succeeded. The big lesson for me was that most of those collaborations don't work."

And the number of non-collaborations that worked?

I think it's a gloss on prizes as innovation-spurs (2, Interesting)

langelgjm (860756) | more than 4 years ago | (#29502175)

I think he's pointing to one of the inefficiencies of prize systems as a way to spur innovation. Thousands of people tried, spending tens or hundreds of thousands of work-hours and other resources, and only a fraction got "winning results" (yes, according to the arbitrary way that winning was defined). But the point is that the prize probably resulted in a very inefficient use of resources. We could hypothesize that the same result might have been achieved with only 25% of the resources spent on the prize - for example, by making the cost of entry non-zero, you could have eliminated teams with no chance of winning from participating.

Basically prize systems benefit from people's inability to accurately assess their real chances of winning - or put another way, prize systems free ride off of people's self-delusion.

Of course there are other factors to be considered, e.g., what would those wasted resources have gone to if they were not being used for the competition, perhaps there are incidental rewards to those resources having been used, perhaps people competed for reasons other than simply winning the prize, etc.

Re:I think it's a gloss on prizes as innovation-sp (3, Interesting)

martin-boundary (547041) | more than 4 years ago | (#29502415)

for example, by making the cost of entry non-zero, you could have eliminated teams with no chance of winning from participating.

This doesn't work. If you make the entry cost nonzero, you'll be much less efficient at doing *science*. Remember, the journey is much more important than the result. The benefits to society of disseminating knowledge of data mining technologies and good datasets largely dwarf the knowledge of the winning entry (think Metcalfe's law).

Re:I think it's a gloss on prizes as innovation-sp (2, Insightful)

langelgjm (860756) | more than 4 years ago | (#29503291)

The benefits to society of disseminating knowledge of data mining technologies and good datasets largely dwarf the knowledge of the winning entry (think Metcalfe's law).

You're only considering the benefits to society that result from this particular competition. The argument about prize systems being inefficient has to do with the fact that while they generate huge interest in a particular topic (and yes, generate more returns than simply the winning entry), they also result in an inefficient allocation of resources to that one particular topic.

I.e., some of the entrants would likely have benefited society more by flipping burgers or sweeping sidewalks than by wasting their time on the Netflix prize.

The problem is somewhat reduced if you have a large number of prizes on various topics, because then people can devote their time to areas where they have more of a chance of winning, or if you make the cost of entry non-zero (it can still be very low - anyone with any real interest and talent will not be turned off by a $1 or $5 registration fee, or by a simple test to assess their capabilities).

Re:I think it's a gloss on prizes as innovation-sp (1)

martin-boundary (547041) | more than 4 years ago | (#29503653)

You keep claiming inefficiency, yet you don't quote a relevant result. Remember, people who participate in an AI competition are self-selecting, i.e., they are following their preferences. If their preferences had instead tended towards burger-flipping, they wouldn't be spending the time on number crunching. So there simply is no shortage of burger-flippers in society as a direct result of the existence of the prize, only an increase in AI skills among a subpopulation.

In fact, comparing the two skillsets is fallacious, since they are not substitutable: An AI programmer can certainly flip burgers with minimal training, but the converse is not true.

Re:I think it's a gloss on prizes as innovation-sp (1)

langelgjm (860756) | more than 4 years ago | (#29503873)

The burger flipping example was facetious, of course. The point being that it doesn't matter if people are following their preferences - people do not automatically prefer to do that which they are most efficient at doing.

So, there simply is no shortage of burgerflippers in society as a direct result of the existence of the prize, only an increase in AI skills among a subpopulation.

Assume the entrants all had moderate computer programming skill. There was likely a lot of duplicative effort in the competition (this happens in other types of research as well). Overall benefits may have been greater if 50% of the entrants worked on 100 different open-source software projects (or 10 different prize projects) rather than everyone working on the NetFlix prize.

Re:I think it's a gloss on prizes as innovation-sp (1)

martin-boundary (547041) | more than 4 years ago | (#29504787)

people do not automatically prefer to do that which they are most efficient at doing.

That's a strange way of viewing efficiency. Who gets to decide what is worthwhile to do for others? You're implying here a framework of value judgements independent of individual preferences, whereas typical definitions of efficiency [wikipedia.org] only require individual preferences.

Now, from the second part of your comment, I have to infer that these value judgements have something to do with a certain dislike of duplication. That's fine as a personal preference, but does it make sense for assessing social value? On the contrary, I'd argue that duplication is important.

Take a school for example: every child is supposed to learn to read and write. That's an extreme form of duplication, which is inefficient according to the above value judgement. Thus it would be better for society if only one child or only a few learned to read and write, and all the others would simply go "flipping burgers" or do whatever their (lack) of skills made them efficient at: mining, or sewing clothes, etc.

But this is ludicrous, we know the benefits of educating children, even if it requires a lot of duplication. We also know the economic benefits of mass producing goods, which is another form of duplication which often results in lower prices. Why should research be different? It isn't: duplication is the common mechanism that ensures the verification of claims.

All of these examples apply to a competition like the Netflix prize: 1) there's the education dimension, people who compete learn new skills that will show up when the next wave of social websites gets produced, 2) there's the economic dimension, since a lot of participants will know how to do the same type of algorithms, thereby decreasing the cost for those who wish to hire specialists, and 3) there's the verification dimension, because duplicated algorithms are never 100% the same (different parameters, code paths etc), and comparing the dupes is a robust way of assessing the algorithm's capabilities.

Re:I think it's a gloss on prizes as innovation-sp (1)

langelgjm (860756) | more than 4 years ago | (#29505299)

That's a strange way of viewing efficiency. Who gets to decide what is worthwhile to do for others? You're implying here a framework of value judgements independent of individual preferences, whereas typical definitions of efficiency [wikipedia.org] only require individual preferences.

Your assumptions are 1) all entrants for the NetFlix prize prefer to spend their time on the NetFlix prize rather than something else (reasonable, and I agree to some extent); and 2) because these entrants prefer to spend their time doing this, it is efficient for them to do so (because you claim that efficiency is defined by preferences).

Your second assumption relies on a definition of efficiency as individual utility maximization, which in turn assumes that individual utility is defined by preferences. Those are valid definitions and assumptions, but they are somewhat arbitrary. Your own link points to several other definitions of efficiency that do not necessarily rely on individual utility maximization.

Now from the second part of your comment, I have to infer that these value judgements have somthing to do with a certain dislike of duplication. That's fine as a personal preference, but does it make sense for assessing social value? On the contrary, I'd argue that duplication is important.

Duplication in the sense of replication of scientific work, or in the sense of duplicative effort of a bunch of schoolchildren learning the same thing is of course necessary and desirable (to some extent). Both of those examples have limits - only a certain amount of replication in science is useful. You don't want all scientists spending all their time replicating the same experiment. Likewise, while we do want all children to learn to read and write, we don't want all children to learn how to act, how to repair cars, how to program computers, or how to fly planes. (Note that this is true regardless of preferences - even if all children preferred to learn how to repair cars, it would not necessarily be most efficient for them to do so). Time and resources are limited, so specialization is necessary.

Duplication is only useful to a certain extent. Many entrants likely would have a comparative advantage in working on some other project, or learning some other skill than working with recommendation algorithms. However, because there is no prize giving them an incentive to work on a project where they have a comparative advantage, they instead all flock to the NetFlix prize.

The whole criticism essentially comes down to the point that if you are going to use prizes widely as a means of encouraging innovation, it's better to have many prizes that will draw people to various areas of interest, rather than having one big prize, which will draw too much interest.

Re:I think it's a gloss on prizes as innovation-sp (1)

Moridin42 (219670) | more than 4 years ago | (#29505121)

I don't think you are talking about efficiency accurately. By your reasoning, every competitive activity, anywhere, should be done away with, since the participants could have been of more value to society doing any productive job. And yet, how much would you have to pay those people to do the janitorial work or burger flipping instead of whichever competitive activity they would choose to do voluntarily? That difference is a measure of the inefficiency in your argument.

Any time participants in any activity have real choices whether or not to participate, you cannot get inefficient outcomes. There are any number of activities where participants do not have real choices about whether or not to participate, and those activities tend not to have efficient outcomes. But people and teams who have been working for years, making submissions to Netflix, and getting visible feedback on their progress can certainly make efficient decisions. Those people know exactly how much effort they are putting into the submissions. They have a really good idea of how their algorithm is doing. They have at least a fair idea of the number of competitors.

In fact, if you raise a barrier to entry, you're more likely to get inefficient outcomes. Human beings are largely risk-averse in their decision making. In order for them to compete at all, they must feel that they are getting something out of the competition. Start charging to enter (even small amounts), and competitors who would do well won't enter. They'll go on to regular jobs where the pay isn't anywhere near a million dollars for success, but the risk of receiving nothing is pretty close to zero.

The whole point of contests such as these is to let various people who think they have a good idea test their ideas. If Netflix knew what was going to make for a good idea, they wouldn't have the contest. They'd put the good idea in code and let it play out.

Competitors value competition in and of itself, in a way that most employees don't value employment for itself. Employees value getting paid. Being employed is a means to that end.

Re:I think it's a gloss on prizes as innovation-sp (1)

radtea (464814) | more than 4 years ago | (#29502637)

Basically prize systems benefit from people's inability to accurately assess their real chances of winning - or put another way, prize systems free ride off of people's self-delusion.

Pretty much. I had a look at the data early on and verified that with a tiny bit of cleverness I could hit the existing performance mark with far less iron than I'm sure Netflix throws at the problem. I recognized that improvements beyond that were going to take huge efforts in time and computing resources, given the structure of the data. Then I looked at what the other teams were doing (running hundreds of different algorithms and merging the results), which validated my judgement of the difficulty of the problem, and decided its scope was far bigger than the resources I had available.

Anyone with a reasonable level of algorithmic experience on large numerical datasets would have made the same judgement, leaving only two kinds of people in the competition: the ones with huge corporate or university resources available to them, or the ones who had no real clue how hard the problem actually was. Sometimes the latter were able to collaborate with the former, which was probably useful. Every team needs its deluded optimists.
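
The "hundreds of different algorithms and merging the results" approach works for a simple statistical reason: averaging predictors whose errors are independent cancels noise. A toy Python sketch with made-up numbers (three models at roughly 0.95 RMSE each, errors fully independent):

    import numpy as np

    rng = np.random.default_rng(1)
    truth = rng.integers(1, 6, size=100_000).astype(float)

    # Three hypothetical models, each alone scoring about 0.95 RMSE.
    models = [truth + rng.normal(0.0, 0.95, size=truth.size) for _ in range(3)]

    def rmse(pred):
        return np.sqrt(np.mean((pred - truth) ** 2))

    for i, m in enumerate(models):
        print(f"model {i}: RMSE = {rmse(m):.3f}")

    # Averaging shrinks independent noise by roughly 1/sqrt(3).
    blend = np.mean(models, axis=0)
    print(f"blend:   RMSE = {rmse(blend):.3f}")  # ~0.55 with these toy numbers

In the real contest the models' errors were highly correlated, so blending hundreds of them bought fractions of a percent rather than the dramatic gain shown here.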

Re:I think it's a gloss on prizes as innovation-sp (2, Funny)

ostrich2 (128240) | more than 4 years ago | (#29503251)

Your experience was very different from mine.

I found an obvious solution and wrote it down in the margin of a book. I even discovered a proof of this, but the margin was too narrow to contain it.

Re:I think it's a gloss on prizes as innovation-sp (1)

daveime (1253762) | more than 4 years ago | (#29507313)

Go back to trolling sci.math, James :-(

Re:I think it's a gloss on prizes as innovation-sp (0)

Anonymous Coward | more than 4 years ago | (#29511237)

Learn how to use emoticons, Dave. Unless of course, you really were sad about telling him to go back to sci.math?

Re:I think it's a gloss on prizes as innovation-sp (1)

daveime (1253762) | more than 4 years ago | (#29515757)

Anyone who's been exposed to the ramblings of James Harris, even for a short time, will quickly discover a nasty taste in their mouths.

Hence the sad emoticon.

Re:I think it's a gloss on prizes as innovation-sp (1)

fulldecent (598482) | more than 4 years ago | (#29503059)

>> I think he's pointing to one of the inefficiencies of prize systems as a way to spur innovation. Thousands of people tried, spending tens or hundreds of thousands of work-hours and other resources, and only a fraction got "winning results" (yes, according to the arbitrary way that winning was defined). But the point is that the prize probably resulted in a very inefficient use of resources. We could hypothesize that the same result might have been achieved with only 25% of the resources spent on the prize - for example, by making the cost of entry non-zero, you could have eliminated teams with no chance of winning from participating.
>> Basically prize systems benefit from people's inability to accurately assess their real chances of winning - or put another way, prize systems free ride off of people's self-delusion.
>> Of course there are other factors to be considered, e.g., what would those wasted resources have gone to if they were not being used for the competition, perhaps there are incidental rewards to those resources having been used, perhaps people competed for reasons other than simply winning the prize, etc.

So what you're saying is that NetFlix recognized a large leverage on their prize dollars, and contestants received non-tangible rewards for their participation.

Or in other words, prize systems are an efficient way to spur innovation. See also [[X Prize]]

Re:I think it's a gloss on prizes as innovation-sp (1)

bill_mcgonigle (4333) | more than 4 years ago | (#29505571)

perhaps there are incidental rewards to those resources having been used

Right - everybody who seriously competed greatly enhanced their own personal knowledge of the field. I'd bet that most of that new working knowledge is not left to waste. There is a ripe market for prediction systems, and even the worst of the entrants can probably fulfil somebody's small need.

Funny, I learned a different lesson... (5, Insightful)

Squiggle (8721) | more than 4 years ago | (#29501033)

The big lesson for me was that big collaborations were the most successful.

In creating solutions for hard problems, most of everything fails and is horribly difficult. No big surprise there. Kinda odd that that was the quoted lesson...

Re:Funny, I learned a different lesson... (3, Interesting)

misnohmer (1636461) | more than 4 years ago | (#29501587)

I was just about to post the very same comment. By the contest rules, the contest ends once someone comes up with a winning solution. The fact that there were two solutions meeting the requirement so close together, both resulting from collaborations, would rather suggest the collaborations worked really well. The other collaborations simply stopped once there was a winner. Concluding from this that collaborations don't work would be like concluding that the training athletes go through before the Olympic Games doesn't work; after all, of all these entrants training hard, only one wins each event.

Re:Funny, I learned a different lesson... (0)

Anonymous Coward | more than 4 years ago | (#29501749)

The others are just lazy and need to work hard instead of living on government handouts.

Re:Funny, I learned a different lesson... (1)

radtea (464814) | more than 4 years ago | (#29502829)

The other collaborations simply stopped once there was a winner.

If you look at the leaderboard you'll see that the performance of teams drops off dramatically, so that by number 19 you're already down to 9% improvement. To use your Olympic analogy, it's like 20 people running the 100 m and all but two of them coming in over a second behind the leader. It's remarkably difficult to find full reports of sprint times (including the losers), but given that there's about a second between men's and women's times in the 100 m, it seems unlikely that the slowest men are routinely that slow.

The dispersion of results after so many years of hard work remains extremely high. Thus the observation that most collaborations are not effective.

Re:Funny, I learned a different lesson... (1)

misnohmer (1636461) | more than 4 years ago | (#29509057)

Would you agree that the results warrant a conclusion that collaborations are more effective than individual teams? If so, your overall conclusion shouldn't be about collaborations, but simply that people, no matter how many of them (collaborating or not), are not that effective.

Re:Funny, I learned a different lesson... (1)

YourExperiment (1081089) | more than 4 years ago | (#29503021)

Precisely. The Wired post [wired.com] about this hits the nail on the head:

Arguably, the Netflix Prize's most convincing lesson is that a disparity of approaches drawn from a diverse crowd is more effective than a smaller number of more powerful techniques.

If even Wired can pick up on this, it's kind of embarrassing that Slashdot decided to quote the one news source that got completely the wrong end of the stick.

Re:Funny, I learned a different lesson... (1)

grep_rocks (1182831) | more than 4 years ago | (#29503029)

This is a great way for a big corporation to get hundreds of researchers to work on a problem of economic importance to it and ONLY pay the researchers who had the best result; the rest get nothing. If Netflix had to actually pay for all the researchers who went down blind alleys, they would have spent millions more, or gasp... actually had to have their own R&D department. What a scam, and yet everyone celebrates them like it is some kind of game show... all I see are suckers....

Re:Funny, I learned a different lesson... (1)

mattack2 (1165421) | more than 4 years ago | (#29510903)

Why are they suckers? You could make the same argument that most everyone working on open source software is a sucker, because they could be being paid to program. (Even at some place like RedHat, be paid to work on open source software even.)

As others have said, apparently many people did this because they enjoy working on this type of a problem. The $1M prize is just that, a prize.

Do most people who enter the World Series of Poker (and that costs $10,000 to enter) actually think they're going to win? If they're delusional, yes. But mostly I would suspect no. Yet they still enter it, because (I presume) they enjoy poker, and winning is an *added* benefit, not the sole benefit. (Of course, the WSOP is somewhat unique, because it is near the high end of tournament entry fees, and is the highest winning payout.. and of course is the most famous tournament.)

Re:Funny, I learned a different lesson... (1)

grep_rocks (1182831) | more than 4 years ago | (#29514475)

They are suckers because Netflix makes a profit from the work of all the researchers who worked on the problem, yet only pays one group. The other groups tried different things that didn't pan out, but Netflix did not have to pay for that R&D effort. The other examples you cite, like open source, make utilities that everyone can benefit from; those open infrastructure projects are not exploitative in the same way, whereas Netflix owns the algorithm developed by the group. Someone who pays $10K to be in a poker tournament might be a sucker, especially if it is televised; in game shows you are doing something that has no economic value anyway, so at least you get paid to show up....

Well at least.... (2, Insightful)

russ1337 (938915) | more than 4 years ago | (#29501085)

it's still good for the CV.....

The Rules are the Rules... (5, Interesting)

Anonymous Coward | more than 4 years ago | (#29501087)

I agree that Ensemble "losing" because they posted 20 minutes later is a harsh result. However, those were the rules that Netflix set forth, and Ensemble, intentionally or not, was making a risky gamble by waiting until right before the deadline to submit their project. And perhaps the "tie goes to the earlier poster" rule makes some sense, because it encourages making your submission earlier than you would otherwise and not "sniping" unless you're absolutely sure your project is better than the rest. At least as far as I can understand, the rule set forth the proper tradeoff: Ensemble got to see the score to beat (BellKor's) before it posted; however, in exchange for that, its score needed to be better in order to win. Had Ensemble wanted the first-mover's advantage and the win in the event of a tie, it could have posted earlier than BellKor. The fact that BellKor posted only 20 minutes before the end of the competition suggests that Ensemble could easily have posted earlier without compromising its entry. That is, how much significant tinkering could possibly have been done in the last half hour of this multi-year competition?

Re:The Rules are the Rules... (2, Insightful)

martin-boundary (547041) | more than 4 years ago | (#29501343)

I think it would qualify as harsh if the runner-up had a simple algorithm, but in this case all the teams that reached the 10% threshold did so with complicated blends of many algorithms. There's really no way to identify whose work is more valuable and most deserved to win, from a scientific perspective.

Re:The Rules are the Rules... (1)

kelnos (564113) | more than 4 years ago | (#29501681)

True, but given a set of conditions including "value" or "complexity" of the submission, it'd be damn-near impossible to judge the final result. Your hypothetical example is easy: if the dirt-simple algo got a 10.2% improvement, whereas the algos with magnitudes greater complexity netted a 10.4% improvement, it'd be easy to say the simple algo is more "valuable." But what if the complexity differences are much smaller? How do you judge them when the improvement is also close? I think Netflix was smart by providing a numerical target and detailing specifically how the numerical results are calculated. While there's always room for people to argue a result, it's harder when the criteria are simple.

Re:The Rules are the Rules... (1)

E IS mC(Square) (721736) | more than 4 years ago | (#29502735)

It's funny and sad at the same time!

And the irony is that The Ensemble were able to calculate thousands of scenarios and permutations/combinations to break the 10% barrier (a significant achievement), but they failed to take into consideration the very basic scenario that their final submission might be tied or inferior. Yes, they tested it against the quiz set, but there was no guarantee they would have gotten the same result against the test set.

Re:The Rules are the Rules... (1)

Pollardito (781263) | more than 4 years ago | (#29506391)

I would have considered it harsh if BellKor had been in the lead for almost all of the multi-year contest and then suddenly lost in the final 20 minutes. But even if that had happened, I'm not sure I would be clamoring for them to give out two prizes. It'd be nice of them to do that, since they got a lot of value out of all the teams. But that was true regardless of the 20 minute margin, and everyone knew from day 1 that there was no second million dollar prize. On the bright side, I'm sure even the second place contestant can make some money licensing their algorithm to Blockbuster.

The Objective (1, Insightful)

maglor_83 (856254) | more than 4 years ago | (#29501109)

OK, so somebody won a prize, offered by NetFlix, to do... what exactly?

Re:The Objective (3, Informative)

nextekcarl (1402899) | more than 4 years ago | (#29501145)

IIRC, it was to improve the prediction algorithm for ratings. Basically, given that you rated these movies at these levels, Netflix tries to predict how many stars you will give other movies, or something to that effect. I've found the old method they used seems to generally work pretty well for me, though there are times I've been surprised. Though I'm not convinced my ratings are really all that accurate anyway. I'm pretty sure if I'm in a certain mood before I see some movies I'd rate them quite a bit differently than at other times, though without some way to wipe my memories of seeing it the first time, I'm not sure how I'd actually test that.

Re:The Objective (4, Informative)

martin-boundary (547041) | more than 4 years ago | (#29501451)

Though I'm not convinced my ratings are really all that accurate anyway. I'm pretty sure if I'm in a certain mood before I see some movies I'd rate them quite a bit differently than at other times, though without some way to wipe my memories of seeing it the first time, I'm not sure how I'd actually test that.

If you phrase it like that, you're somewhat missing the point. The target was to minimize an average prediction error over a large number of people, not the prediction error for a single person (eg you).

Here's an analogy which might help: Suppose you play the lottery and you try to predict 6 numbers exactly, then you'll have a vanishingly small chance of getting them right. But suppose you submit millions of sets of predictions, all different, then your chance is much larger of getting the actual 6 right.

Now the Netflix contest required predicting a few million ratings, and even if any one rating might be very far off the target, the task only required making sure that a large proportion of the predictions were pretty close to each of their targets and the remaining ones were not too far off.

The winners were able to make several million predictions such that, on average (in the RMSE sense used a lot in engineering), they were a distance of about 0.85 from the real rating.

That holds even if in some instances their predictions were off by 4 (i.e., predicting 1 when the answer is 5). For example, with 4 million predictions, if 1% of the predictions are off by 4, that's 40,000 instances of being off by 4, and this has to be compensated by a large share of predictions that are off by nearly 0 to get 0.85 on average.
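
To make that compensation concrete, here's a quick check of the arithmetic in Python. The 0.85 target and the 1%-off-by-4 scenario are the numbers from the comment; the rest follows from the definition of RMSE:

    import math

    # RMSE^2 (the mean squared error) is a weighted average of squared errors:
    #   target^2 = frac_bad * bad_err^2 + (1 - frac_bad) * rest_mse
    target_rmse = 0.85              # overall score to hit
    frac_bad, bad_err = 0.01, 4.0   # 1% of predictions off by 4 stars

    rest_mse = (target_rmse**2 - frac_bad * bad_err**2) / (1 - frac_bad)
    print(f"required RMSE on the other 99%: {math.sqrt(rest_mse):.3f}")  # ~0.754

So a 1% rate of worst-case misses forces the remaining predictions to be noticeably better than the overall target, exactly as described above.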

Re:The Objective (2, Interesting)

LordKronos (470910) | more than 4 years ago | (#29504135)

Well, it might not affect the average prediction as it relates to everybody else. However, from a user's perspective, the whole point of the system is to try and figure out my taste in movies based on how I rated those movies, match it up with other people's ratings, and try to predict what other movies I'd like. You can't statistically average out my ratings, as my ratings are the only significant factor on one side of the equation. There are no other users you can use to balance out what my tastes are. It has to go by my ratings, and if my ratings are anomalous, the results are going to suffer.

Your lottery analogy is pointless, because it demonstrates a different issue. There, the 6 actual numbers against which your submissions are rated are a factual matter. They aren't affected by your feelings and interpretations. They are going to be the same 6 numbers whether you just got a promotion at work or your spouse was just murdered. However, your rating of movie X would probably be different after the promotion than it would be after the murder of your spouse (we're assuming you actually liked your spouse and didn't hire a hitman).

Re:The Objective (0)

Anonymous Coward | more than 4 years ago | (#29505169)

Your analogy is also pointless, because if you are a difficult person to rate movies for, we can ignore you and still get our millions.

Re:The Objective (1)

LordKronos (470910) | more than 4 years ago | (#29514383)

1) Please pay attention to the conversation, because I didn't post an analogy at all. I merely posted a few extreme examples of things that would affect how you rate a movie.

2) Please pay attention to the conversation, because nobody ever said the system was useless. The person who started this tangent (nextekcarl) simply said (I'm paraphrasing) "the old system worked better for me, but I'm not sure how reliable my ratings are anyway, so it could just be that", which led to martin-boundary suggesting "it doesn't matter how reliable your ratings are... the system will compensate for your screw-ups", at which point I replied "not for nextekcarl, it won't".

There, are you caught up now?

Re:The Objective (1)

martin-boundary (547041) | more than 4 years ago | (#29511975)

It's really not as pointless as you claim, because you are not alone in the world. A company like netflix needs to serve all its customers, not just one, otherwise it wouldn't be profitable.

Suppose your personal ratings change with your mood, and everyone else's ratings change with their mood too. The overall set of ratings remains 1-5 as before, and provided people are reasonably independent in their mood swings, then statistically the effect will be neutral. For any one individual whose mood swings up, there will be someone whose mood swings down at the same time. When you view the two together, their swings cancel and the predictive accuracy stays roughly the same overall.

Re:The Objective (1)

LordKronos (470910) | more than 4 years ago | (#29514533)

Yeah, I know that. But as my post was saying, you can't compensate for the side of the equation which only has a sample size of 1 person.

Since I've already been accused by some AC of making an analogy, I might as well go and get one in (and I'll try to make it a bad one full of holes).

Let's say there is a system which looks at your tax return and suggests activities you might enjoy based on your level of income. That system works pretty well for most people. You are in a field that pays a $200k salary. Then you leave the company, and due to a non-compete clause in your contract, you make nothing for an entire year. Your salary is $0. So you file your tax return for $0 and the system suggests "you might enjoy looking for food in dumpsters". Well, wait... you aren't poor. You're just living off of your emergency fund until you can get into your next job.

During the 12 months you didn't work, you actually tinkered with some ideas and came up with a very unique invention. However, you don't have the means to manufacture and market it, so you decide to sell off the patent and get $10,000,000 for it. Now for year 2 your income is huge, and the system says "you might enjoy buying a yacht". Well, wait a minute. You aren't wealthy. Yeah, you've got a good chunk of change there, and if you spend it carefully you can have a really nice life. If you go buying a yacht, you are done. It's back to work for you.

So, you see how the system fails you as an individual. It doesn't matter how well the system compensates for everyone else... it fails you. And that was the point. Nextekcarl wasn't saying "the system doesn't work"; he simply said he's not convinced of how well it's working for him.

P.S. Please don't bother pointing out the holes in the analogy... I know it's like Swiss cheese. It wasn't my intention to create a rigorously thorough analogy.

ratings systems (1)

bersl2 (689221) | more than 4 years ago | (#29501591)

Ratings systems are inaccurate because people tend to cluster their ratings towards the extremes, for a number of reasons. (I would go into what I believe to be those reasons and the conditions under which they are triggered, but it's really late.)

My proposed solution is to require ratings to conform to some probability distributions and fit some criteria:
1. A user's votes should be approximately normal, with some degree of deviation permitted (one way to do this is sketched below).
2. [Approximately] 90% of everything is crap/crud (the quantized version of Sturgeon's Law) (for some definition of "crap/crud").
And a few more rules based on observations I have made but don't feel like listing (again, because it's late).
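
One illustrative reading of rule 1, as a Python sketch: force a user's ratings onto a roughly normal curve with a rank-based quantile transform. The target mean and spread here are arbitrary choices, not part of the proposal:

    import numpy as np
    from scipy.stats import norm, rankdata

    def normalize_user_ratings(ratings, out_mean=3.0, out_std=1.0):
        """Map a user's raw 1-5 ratings onto an approximately normal scale."""
        r = np.asarray(ratings, dtype=float)
        ranks = rankdata(r, method="average")  # average ranks handle heavy ties
        quantiles = ranks / (len(r) + 1)       # strictly inside (0, 1)
        return norm.ppf(quantiles, loc=out_mean, scale=out_std)

    # A user who rates almost everything 4 or 5 gets spread back out:
    print(np.round(normalize_user_ratings([5, 5, 4, 5, 4, 3, 5]), 2))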

Re:ratings systems (3, Insightful)

retchdog (1319261) | more than 4 years ago | (#29503693)

I'm sure that every schmuck with a Netflix account would be willing to adhere to your stupid rules, and saddened by your unwillingness to pontificate on how you'd change human behavior.

Seriously, this is what Netflix would be if it were invented by Stalin.

Re:ratings systems (1)

bersl2 (689221) | more than 4 years ago | (#29506753)

I'm being compared to Stalin. This is a first. How interesting.

My idea was to queue votes that had not yet been fit. If a user continued to have an excess of some certain rating level, the idea would be to suggest that the user manually normalize his votes and give him appropriate tools to do this (perhaps quasi-random suggestions for the casual user, an entire list for those who want it). People's minds change over time too, so this could encourage updates to old votes.

I realize that this sort of interface feels a lot like the older form of Slashdot meta-moderation, and I have heard it straight from CmdrTaco (as in, in person) that people never meta-moderated enough for it to work, so maybe this idea I propose needs radical changes. But I am tired of people using a rating as if it were a simple approval/disapproval; I even have noticed it in my own behavior, and I hate it. And it just gets worse if it's possible to compare ratings (i.e., top-n lists).

Re:ratings systems (3, Insightful)

Geoff-with-a-G (762688) | more than 4 years ago | (#29506181)

Your proposed solution would only make sense if people were forced to watch a completely random selection of movies. Once you factor in the fact that people are allowed to select which movies they want to watch, it makes sense that their ratings would cluster towards the high end of the spectrum. That is, in fact, the whole point of this ratings prediction system: to tell you, in advance, which movies you will like. If it worked perfectly, you'd never have to rate a movie below average, because you could avoid ever renting a movie which you wouldn't like.

Re:ratings systems (1)

deanoaz (843940) | more than 4 years ago | (#29506363)

Excellent point!

Re:The Objective (1)

Kashgarinn (1036758) | more than 4 years ago | (#29501901)

yeah... it's the same as saying "hey, you liked Star Wars episodes 4, 5 and 6, so you're going to loooooove episodes 1, 2, and the craptastic 3!"

- I guess it's a fine incentive for people who want a $1,000,000 to jump through their hoops, but did they actually help improve "things you might like"?

- I also think they're missing a vital statistic: things you hate, stuff you loathe. They could probably have improved the rating system 100% by adding that measurement into it.

Re:The Objective (2, Funny)

daybot (911557) | more than 4 years ago | (#29502283)

I'm in a certain mood before I see some movies I'd rate them quite a bit differently

Absolutely. Every single film I first saw on a plane ranks very low for me.

Re:The Objective (0)

Anonymous Coward | more than 4 years ago | (#29505349)

Correlation is not causation. Most movies shown on planes are cheap rentals at best, and therefore deserve your low rating. Had you seen them at home, you might have been more outraged that you spent $5 at Blockbuster to rent that crap in the first place.

Re:The Objective (0)

Anonymous Coward | more than 4 years ago | (#29509029)

I'm in a certain mood before I see some movies I'd rate them quite a bit differently

Absolutely. Every single film I first saw on a plane ranks very low for me.

I feel the same way about every film I've watched with a snake... I wonder what would happen if you combined that.

Re:The Objective (5, Informative)

crunchyeyeball (1308993) | more than 4 years ago | (#29501597)

Basically, you were asked to predict how a number of users would rate a number of movies, based on their previous ratings of other movies.

You were supplied with 100 million previous ratings (UserID, MovieID, Rating, DateOfRating), with the rating being a number between 1 and 5 (5 = best), and asked to make predictions for a separate ("hidden") set comprising roughly 10% of the original data. You could then post a set of predictions to their website, which would be automatically scored, and you'd receive an RMSE (Root Mean Squared Error) by email.

To avoid the possibility of tuning your predictions based on the RMSE, you could only post one submission per day, and the final competition-winning results were scored against a separate hidden set, independent of the daily scoring set.
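
For a sense of what a first attempt looked like, here's a toy Python sketch of the data layout just described, together with the simplest sane predictor: a per-movie mean with a global-mean fallback. The column names follow the post; the tiny table and the fallback rule are invented for illustration, and serious entries layered hundreds of models on top of baselines like this.

    import pandas as pd

    # Toy stand-in for the training data: (UserID, MovieID, Rating, DateOfRating).
    train = pd.DataFrame({
        "UserID":  [1, 1, 2, 2, 3, 3],
        "MovieID": [10, 20, 10, 30, 20, 30],
        "Rating":  [5, 3, 4, 2, 4, 1],
    })

    global_mean = train["Rating"].mean()
    movie_mean = train.groupby("MovieID")["Rating"].mean()

    def predict(movie_id):
        # A movie's mean rating; fall back to the global mean for unseen movies.
        return movie_mean.get(movie_id, global_mean)

    # Predictions for a few hidden (UserID, MovieID) pairs; a daily submission
    # of numbers like these came back as a single quiz-set RMSE.
    hidden = [(1, 30), (2, 20), (3, 99)]
    print([round(predict(m), 2) for _, m in hidden])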

It really was a fantastic competition, and anyone with a little coding knowledge (or SQL knowledge) could have a decent go at it. Personally, I scored an RMSE of 0.8969, or a 5.73% improvement over Netflix's benchmark Cinematch algorithm, having learnt a huge amount based on the published papers and forum postings of others in the contest, and my own incoherent theories.

In a way, everyone wins. Netflix gets a truly world-class prediction system based on the work of tens of thousands of researchers around the world hammering away for years at a time. Machine learning research moves a big step forward. BellKor et al get a big juicy cheque, and enthusiastic amateurs like myself get access to a huge amount of real-world research and data.

Re:The Objective (0)

Anonymous Coward | more than 4 years ago | (#29505715)

I believe it was "your mom".

Elisha Gray (1)

TFer_Atvar (857303) | more than 4 years ago | (#29501235)

Re:Elisha Gray (1)

CrashandDie (1114135) | more than 4 years ago | (#29501803)

Nice way to be off-topic.

The Gray-Bell controversy, in essence, is about Bell possibly stealing Gray's invention and method. However, the issue in TFA is about someone being denied a prize based on the fact they submitted 20 minutes *after* someone else, and were only marginally better.

The Gray-Bell discussion doesn't come close to this, because at the time, patent law stated that it's not the patent registration that gives someone the rights, but rather the time of invention and the ability to provide a working prototype.

Big deterrent for the future (1)

WeirdingWay (1555849) | more than 4 years ago | (#29502503)

For those who would do this without interest in money because they have such a passion for this sort of thing, this result won't faze them. But for others, the sheer mortality rate of these attempted collaborations, tied in with the company's apparent lack of interest in providing something noteworthy to the other team due to a minor technicality, is going to discourage people. Imagine how the losing team could turn on each other: "If only you didn't have to take that 25 minute crap we'd be cashing in!" "If only we had slept less!" etc.

Really, I hope the losing team ends up like American Idol finalists who don't win but still go on to get successful contracts, because they were still good despite a superficial margin that does nothing to differentiate the two teams in competitive skill and knowledge of the contest. The results were fair and legal, but this is a bittersweet victory with a bad forecast for future competitors if they have to worry about these stupid idiosyncrasies.

Hopefully Netflix, or other companies with an idea like this, will have the foresight to set a more realistic margin and split the winnings in the fashion that is certainly more reasonable and gives both teams the recognition they deserve. The perfect situation here would be both teams getting half the money, with the press release simply stating the times of submission, so those who MUST have bragging rights and hate the concept of a tie can bring them up in a casual setting.

Re:Big deterrent for the future (1)

WeirdingWay (1555849) | more than 4 years ago | (#29502515)

I guess I should expect that my message content will be filtered by someone's expectation of presentation as I didn't split the comment into paragraphs. My bad.

Re:Big deterrent for the future (1)

deanoaz (843940) | more than 4 years ago | (#29506379)

You are correct sir!

Most collaborations don't work? (1)

roystgnr (4015) | more than 4 years ago | (#29503417)

Out of thousands, you have only two that succeeded.

Yes, because the Netflix rules were set up in such a way as to encourage winners to submit their results as soon as possible upon success. They're not going to wait around to give anyone else the chance to reach the same goal first. You might as well say "Only two people crossed the tape during that photo finish! The other thousand runners are failures!"

The big lesson for me was that most of those collaborations don't work.

By this standard, zero non-collaborations worked.

When will they implement it? (1)

bigbigbison (104532) | more than 4 years ago | (#29503767)

I didn't see anything in the article about when Netflix may implement the new algorithms. I've rated a ton of stuff on Netflix and seem to have totally confused the current system, because I rarely get any recommendations, and when I do they are totally off. For example, I rated a Japanese horror film highly and Netflix then suggested three European romantic films (one comedy and two dramas).

The whole premise is wrong (1)

drsmack1 (698392) | more than 4 years ago | (#29504063)

Maybe I am alone here, but the only real trend in my movie likes is that I only watch GOOD movies. I have seen nothing in any of the articles on this that accounts for that. If I enjoyed 12 Monkeys, don't suggest Battlefield Earth to me just because they are both SF movies. To me, a better suggestion for a fan of 12 Monkeys would be Memento.
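For what it's worth, the standard first step in the Netflix Prize literature for exactly this "I only watch GOOD movies" effect is a baseline predictor: estimate a rating as the global mean plus a per-user bias plus a per-movie bias, and only then look for taste patterns. The sketch below is a toy illustration with made-up ratings, not BellKor's actual method:

<ecode>
# Toy sketch of a Netflix-Prize-style baseline predictor:
#   r_hat = mu + b_u + b_i
# mu is the global mean, b_u a user bias ("rates everything high"),
# b_i a movie bias ("is just a good/bad movie"). Made-up data.
from collections import defaultdict

ratings = {  # (user, movie) -> stars
    ("al", "12 Monkeys"): 5, ("al", "Memento"): 5,
    ("bo", "12 Monkeys"): 4, ("bo", "Battlefield Earth"): 1,
    ("cy", "Memento"): 4,    ("cy", "Battlefield Earth"): 1,
}

mu = sum(ratings.values()) / len(ratings)  # global mean rating

user_dev, user_n = defaultdict(float), defaultdict(int)
for (u, _), r in ratings.items():
    user_dev[u] += r - mu
    user_n[u] += 1

def user_bias(u):
    return user_dev[u] / user_n[u] if user_n[u] else 0.0

item_dev, item_n = defaultdict(float), defaultdict(int)
for (u, m), r in ratings.items():
    item_dev[m] += r - mu - user_bias(u)  # what the movie itself explains
    item_n[m] += 1

def item_bias(m):
    return item_dev[m] / item_n[m] if item_n[m] else 0.0

def predict(u, m):
    return mu + user_bias(u) + item_bias(m)

# "al" rates everything 5, yet Battlefield Earth's negative movie bias
# drags his estimate down to 3.5 -- genre never enters into it.
print(round(predict("al", "Battlefield Earth"), 2))
</ecode>

In other words, "is a good movie" is mostly soaked up by the movie-bias term before any genre or similarity matching happens.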

Re:The whole premise is wrong (1)

EboMike (236714) | more than 4 years ago | (#29509359)

Well, that's the whole point of this competition - find out what each user believes to be "GOOD", which is highly subjective. People who enjoy, say, Epic Movie and White Chicks, and who hate movies like Insomnia, are likely to dislike Memento as well.

There is no objective "good" or "bad" for a movie - you can average the ratings given by all users, but according to IMDb, Insomnia (US version) has 7.2, and Die Hard 4 has 7.6 - which one would you prefer? (To me, Insomnia is clearly the "better" movie, but my opinion is different from that of other people... which is why the idea is to find people who have the same taste as I do and see what they consider "good").
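That last idea is, in sketch form, user-based collaborative filtering: correlate users' ratings, then weight each neighbour's opinion by that correlation. The users and numbers below are invented for illustration; this is not Netflix's or BellKor's actual algorithm:

<ecode>
# Toy user-based collaborative filtering with Pearson correlation.
# All users and ratings are invented for illustration.
from math import sqrt

ratings = {
    "me":   {"Memento": 5, "Insomnia": 4, "Epic Movie": 1},
    "twin": {"Memento": 5, "Insomnia": 4, "Die Hard 4": 3},
    "anti": {"Memento": 1, "Insomnia": 2, "Epic Movie": 5,
             "White Chicks": 4},
}

def pearson(a, b):
    """Correlation over the movies both users rated:
    +1 = same taste, -1 = opposite taste, 0 = no usable overlap."""
    common = ratings[a].keys() & ratings[b].keys()
    if len(common) < 2:
        return 0.0
    ma = sum(ratings[a][m] for m in common) / len(common)
    mb = sum(ratings[b][m] for m in common) / len(common)
    dot = sum((ratings[a][m] - ma) * (ratings[b][m] - mb) for m in common)
    na = sqrt(sum((ratings[a][m] - ma) ** 2 for m in common))
    nb = sqrt(sum((ratings[b][m] - mb) ** 2 for m in common))
    return dot / (na * nb) if na and nb else 0.0

def predict(user, movie):
    """The user's average, nudged by how neighbours rated the movie
    relative to their own averages, weighted by taste correlation."""
    my_mean = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for other, rs in ratings.items():
        if other == user or movie not in rs:
            continue
        s = pearson(user, other)
        num += s * (rs[movie] - sum(rs.values()) / len(rs))
        den += abs(s)
    return my_mean + (num / den if den else 0.0)

# "anti" loved White Chicks, and anti's taste is the exact inverse of
# mine, so the estimate lands *below* my own average:
print(round(predict("me", "White Chicks"), 2))  # 2.33
</ecode>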

Try the Hutter Prize model (1)

Baldrson (78598) | more than 4 years ago | (#29504213)

The Hutter Prize's [hutter1.net] incremental prize awards for progress, themselves modeled on the M-Prize [mprize.org], are a superior way of awarding prize money. There is continual reward for teams that contribute substantially, and no one team takes everything based on a technicality.

Re:Try the Hutter Prize model (1)

daveime (1253762) | more than 4 years ago | (#29507497)

The Hutter Prize is nonsense ... there's been only one Russian competing in it since 2006, and he's won the princely sum of about 6,700 euros. A bit of a far cry from a million bucks.

Also, the prize structure is flawed in that it penalizes the very people who are doing the best work. The first person to achieve an improvement in data compression gets a chunk of the prize money. Then someone who comes in later and manages to compress it a bit further (arguably a harder task, since the easy gains have already been made) only gets a chunk of the *remaining* prize money.

And as it is patently obvious that the compression algorithms are NOT general-purpose, but specifically tuned/optimized to the data set in question (a 100MB chunk of wiki data), they are probably going to be useless for any other data set. At least the Netflix development was performed against an oracle, and will probably improve their systems, if not by a full 10%, then by somewhere pretty damn close.

In Soviet Russia, Alexander Ratushnyak compresses you!
(Sorry, just couldn't resist.)
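To put rough numbers on the payout structure being complained about: assume (as the prize page roughly describes it) that each new record earns a payout proportional to its relative improvement over the standing record, out of a fixed fund. The 50,000-euro fund matches the figure the prize advertised; the record sizes and improvement steps below are invented:

<ecode>
# Back-of-the-envelope payout arithmetic for an incremental prize.
# Assumption: each new record earns fund * (old - new) / old, i.e. a
# payout proportional to the relative improvement over the standing
# record. Record sizes and improvement steps are invented.
FUND = 50_000  # euros

def payout(old_size, new_size, fund=FUND):
    """Euros earned for shrinking the record from old_size to new_size."""
    return fund * (old_size - new_size) / old_size

record = 18_000_000  # bytes; a hypothetical standing record
total = 0.0
for relative_gain in (0.04, 0.03, 0.03):  # three successive records
    new_record = record * (1 - relative_gain)
    total += payout(record, new_record)
    record = new_record

# Three hard-won records add up to about 5,000 euros, which is the
# "princely sum" scale above -- versus Netflix's $1M in one shot.
print(round(total))  # 5000
</ecode>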

Re:Try the Hutter Prize model (1)

Baldrson (78598) | more than 4 years ago | (#29507625)

daveime writes: And as it is patently obvious that the compression algorithms are NOT general-purpose, but specifically tuned/optimized to the data set in question (a 100MB chunk of wiki data), they are probably going to be useless for any other data set.

What would you suggest is a good English language corpus as a test of your assertion?

Re:Try the Hutter Prize model (1)

daveime (1253762) | more than 4 years ago | (#29515729)

An English corpus is fine ... but the point is that the wiki data set *isn't* standard English text; it's a form of specialised Wikipedia markup language (think XML) with abbreviations, wiki code numbers and dates.

A lot of the so-called "optimizations" have been achieved by identifying the structure of that *specific* markup to make extra gains in compression.

It's like making a compressor that works specifically well on the Windows PE executable file structure, and expecting it to do the same thing on a JPEG or a plain English text file.

Notwithstanding that, I guess the thing that put me off was the whole "compression == AI" angle that Hutter tried to put on things, when the results have been patently non-AI-inspired ... unless that Russian guy is actually a robot?
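The kind of markup specialisation described above is easy to sketch: a reversible substitution of frequent wiki-markup tokens ahead of a general-purpose compressor. The token list and sample text below are invented, and real entries are vastly more elaborate:

<ecode>
# Toy illustration of tuning a compressor to wiki markup: swap frequent
# markup tokens for single reserved bytes, then hand the stream to a
# general-purpose compressor. Token list and sample text are invented.
import zlib

WIKI_TOKENS = [b"[[", b"]]", b"{{", b"}}", b"<ref>", b"</ref>", b"'''"]
# Reserve low control bytes assumed never to occur in the input.
TABLE = [(tok, bytes([1 + i])) for i, tok in enumerate(WIKI_TOKENS)]

def preprocess(text: bytes) -> bytes:
    """Reversible substitution: a shorter raw stream, same information."""
    for tok, code in TABLE:
        text = text.replace(tok, code)
    return text

sample = (b"'''Foo''' is a [[bar]] in the {{baz}} family."
          b"<ref>Made-up citation.</ref>\n") * 500

plain = zlib.compress(sample, 9)
tuned = zlib.compress(preprocess(sample), 9)
# The tuned stream is shorter before compression even starts; how much
# that buys after compression depends entirely on the data -- which is
# exactly the specialisation point being argued.
print(len(sample), len(plain), len(tuned))
</ecode>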

Re:Try the Hutter Prize model (1)

Baldrson (78598) | more than 4 years ago | (#29516157)

daveime writes: It's like making a compressor that works specifically well on the Windows PE executable file structure, and expecting it to do the same thing on a JPEG or a plain English text file.

Again: What English language corpus would you suggest to test your assertion that the Hutter Prize has, due to "specialization" to handle markup syntax, produced a compressor that does not outperform the others on English text?

Re:Try the Hutter Prize model (1)

Baldrson (78598) | more than 4 years ago | (#29516265)

daveime: Notwithstanding that, I guess the thing that put me off was the whole "compression == AI" angle that Hutter tried to put on things

Something you need to understand about the Hutter Prize is that it is not about writing a compressor -- it is about achieving the simplest representation of human knowledge. If you want to do it the way Doug Lenat has been trying with Cyc -- hiring a bunch of philosophy PhDs to manually construct an ontology that parsimoniously represents human knowledge -- then by all means, go for it. The amount of ontology required to describe markup syntax is vanishingly small compared to the rest of the human knowledge represented in the first 100MB of Wikipedia.

No need for a fancy algorithm (1)

bugs2squash (1132591) | more than 4 years ago | (#29504721)

My Netflix queue contains movies chosen by me, my wife, and my children, and sometimes ones chosen for a visiting friend. If they would only let me maintain separate queues, or tag the content by who chose it, predicting what each of us likes would be much easier. It's the same with iTunes; the "Genius" must think I'm schizophrenic.

Cat, tongue, and all that. (1)

Impy the Impiuos Imp (442658) | more than 4 years ago | (#29505483)

> Greg McAlpin, a software consultant and a leader of the Ensemble: "Having these
> big collaborations may be great for innovation, but it's very, very difficult. Out of
> thousands, you have only two that succeeded. The big lesson for me was that most of
> those collaborations don't work."

Tough luck on the loss. Oh, and you're an idiot.

Saying "only two" worked is like saying "only one person actually found the car keys and all the other guys looking are a big Fail".

See, they stop looking after it's found. There are two possibilities: either results start pouring in faster and faster, like the finish of a marathon, or nobody else ever gets there, because your two groups were the only two bright enough to begin with. In which case a mass attack is still useful, the same way gym class for every student is useful for finding fast runners for the NFL.
