
Comments


Elo Chess Rating System Topped By Proposed Replacements

databuff Re:Whole History Rating (102 comments)

According to the leaderboard, Glicko is being beaten by ~5 per cent. Coulom's system better be pretty good!

more than 4 years ago

Chess Ratings — Move Over Elo

databuff Re:Apples and oranges? (133 comments)

How do you test current relative rankings without using them to make predictions?

more than 4 years ago

Chess Ratings — Move Over Elo

databuff Re:Submission error (133 comments)

The Elo Benchmark was submitted a second time. I wrote to Sonas about this. Apparently the rating system has to be seeded. He tried a different approach to calculating the seed ratings, and it performed better, pushing him one place higher in the rankings.

more than 4 years ago

A Crowdsourcing Project To Make Predictions More Precise

databuff Re:Quality not quantity (69 comments)

Great comments, thanks! To address your most incisive points (as I see them): c) Competitions on Kaggle aren't polls. Competitions are framed in a way that requires serious data analysis. For example, the Eurovision Forecasting Comp requires contestants to forecast the voting matrix (who votes for whom) rather than simply who will win. b, d, e) Getting people to make lots of predictions should separate the talented from the lucky. Having forecasters predict in the same place over and over is a good way to build up a long enough history to discover the truly talented.

more than 4 years ago

A Crowdsourcing Project To Make Predictions More Precise

databuff Re:The DELPHI method, circa 1944 (69 comments)

Thanks for the post. I hadn't heard of the DELPHI method, so now I'm a little bit wiser. According to the Wikipedia article, the DELPHI method tries to get a panel of experts to agree on a single forecast. Kaggle (assuming the wisdom of crowds is the method of choice) cherishes diversity: it takes everybody's forecasts and combines them in the hope that individual forecast errors will cancel out.

more than 4 years ago
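To make the error-cancellation point concrete, here is a toy sketch (in Python, not Kaggle's actual combination method): each forecaster's estimate is the true value plus independent noise, and the simple average of many such forecasts lands much closer to the truth than a typical individual does.

# Toy illustration only: independent forecast errors largely cancel when averaged.
import random

random.seed(0)
truth = 42.0
n_forecasters = 200

# Each individual forecast is the truth plus independent, zero-mean noise.
forecasts = [truth + random.gauss(0, 5) for _ in range(n_forecasters)]

typical_individual_error = sum(abs(f - truth) for f in forecasts) / n_forecasters
crowd_forecast = sum(forecasts) / n_forecasters

print("typical individual error:", round(typical_individual_error, 2))
print("crowd (average) error:   ", round(abs(crowd_forecast - truth), 2))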

A Crowdsourcing Project To Make Predictions More Precise

databuff Re:This story is NOT News (69 comments)

Kaggle, unlike prediction markets, is designed to deal with complex tasks where data modeling is required. For example, a prediction market can be used to get the crowd's view on who will win the Eurovision Song Contest. But Kaggle is asking contestants to forecast the voting matrix.

more than 4 years ago

A Crowdsourcing Project To Make Predictions More Precise

databuff Re:Crowdsourcing predictions (69 comments)

Funny and insightful, nice comment! The post was terse, so I didn't explain that competitions on Kaggle aren't polls. Competitions are framed in a way that requires serious data analysis. For example, the Eurovision Forecasting Comp requires contestants to forecast the voting matrix (who votes for whom) rather than simply who will win.

more than 4 years ago

A Crowdsourcing Project To Make Predictions More Precise

databuff Re:did we forget something? (69 comments)

I totally agree: past performance does not guarantee future performance. However, the more forecasts you get statisticians to make, the less likely it is that their prediction history reflects chance rather than skill.

more than 4 years ago
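That point can be illustrated with a toy simulation (Python, illustrative only; it is not Kaggle's rating method): a forecaster who is right 60 per cent of the time and one who is merely guessing look similar over ten predictions, but are easy to tell apart over a thousand.

# Toy simulation: with longer track records, observed accuracy converges to true
# skill, so luck stops masquerading as talent.
import random

random.seed(1)

def observed_accuracy(true_skill, n_predictions):
    hits = sum(1 for _ in range(n_predictions) if random.random() < true_skill)
    return hits / n_predictions

for n in (10, 100, 1000):
    skilled = observed_accuracy(0.60, n)  # genuinely better than chance
    guesser = observed_accuracy(0.50, n)  # pure coin-flipping
    print(f"{n:5d} predictions: skilled={skilled:.2f}  guesser={guesser:.2f}")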

A Crowdsourcing Project To Make Predictions More Precise

databuff Re:Modern life? (69 comments)

I do believe we rely on predictions more today than at any time in history, because we can make them more reliably (we have so much historical data to base them on).

more than 4 years ago

A Crowdsourcing Project To Make Predictions More Precise

databuff Does anybody have prediction-competition ideas? (69 comments)

Thanks everyone for your comments! Sounds like many of you are skeptical that 'wisdom of crowds' can work in this setting. It'll be an interesting experiment, but I'm encouraged by the Netflix Prize case study. Out of interest, does anybody have any interesting ideas for prediction competitions? I'd love to hear from you either in the comments area or at statsbuff@gmail.com.

more than 4 years ago

Submissions


The world's smartest company?

databuff databuff writes  |  more than 3 years ago

databuff (1789500) writes "It might just be the smartest company in the world; responsible for solving some of the toughest problems ever posed — from accurately mapping Dark Matter in the Universe to how to avoid buying dodgy vehicles at a used car auction. Kaggle – a collection of more than 17,000 PhD-level brains who compete for prizes in solving incredibly complex questions – is using the power of the internet to accelerate problem solving on a massive scale. Essentially, it's crowd sourcing for geniuses."
Link to Original Source

Competition Shines Light on Dark Matter

databuff databuff writes  |  more than 3 years ago

databuff (1789500) writes "For a decade, the world's brightest physicists have been working on understanding and mapping dark matter. On May 23, a consortium including NASA, the European Space Agency and the Royal Astronomical Society, opened up the problem on Kaggle, a platform for machine learning competitions. To detect the presence of dark matter, the consortium asked entrants to build algorithms that detect a phenomenon called gravitational lensing, which causes distortions in the shape of a galaxy. In less than a week, Martin O'Leary, a PhD student in glaciology from Cambridge University made a breakthrough, outperforming the most commonly used algorithms in astronomy. O'Leary applied techniques common in glaciology, to detect the edges of glaciers from satellite images. As profound as the breakthrough is for cosmology, this competition is a prime example of how harnessing interdisciplinary approaches can help make significant scientific discoveries."
Link to Original Source
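For readers wondering what "detecting gravitational lensing" involves in practice, one textbook measure of the distortion is a galaxy's ellipticity, estimated from the second moments of its image. The Python sketch below illustrates that standard measure only; it is not O'Leary's method or the winning entry.

# Illustrative only: estimate (e1, e2) ellipticity from image second moments,
# the basic shape statistic used in weak-lensing analyses.
import numpy as np

def ellipticity(image):
    y, x = np.indices(image.shape)
    total = image.sum()
    xc, yc = (x * image).sum() / total, (y * image).sum() / total
    qxx = ((x - xc) ** 2 * image).sum() / total
    qyy = ((y - yc) ** 2 * image).sum() / total
    qxy = ((x - xc) * (y - yc) * image).sum() / total
    return (qxx - qyy) / (qxx + qyy), 2 * qxy / (qxx + qyy)

# A synthetic "galaxy": an elliptical Gaussian blob stretched along the x axis.
y, x = np.indices((64, 64))
blob = np.exp(-(((x - 32) / 10.0) ** 2 + ((y - 32) / 5.0) ** 2))
print(ellipticity(blob))  # e1 > 0 reflects the stretch along x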

From movie recommendations to life and death

databuff databuff writes  |  more than 3 years ago

databuff (1789500) writes "The April 4 launch of the $3 million Heritage Health Prize was just announced by the Heritage Provider Network, a network of doctors. The competition challenges data hackers to build algorithms that predict who will go to hospital in the next year, so that preventative action can be taken. An algorithm might find that somebody with diabetes, hypertension and high cholesterol is a 90 per cent risk for hospitalization. Knowing this, it might be cheaper for an HMO to enrol them in an exercise program now rather than pay the likely hospital bill. The competition takes the same approach as $1 million Netflix Prize, but solves a far more significant problem."
Link to Original Source
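As a rough illustration of the kind of model entrants might build, the Python sketch below computes a logistic-style risk score from a handful of binary risk factors. The features, weights and example patient are hypothetical and are not taken from the competition data.

# Hypothetical risk score: a logistic function of made-up coefficients for three
# binary risk factors. Illustrative only, not the competition's model or data.
import math

WEIGHTS = {"diabetes": 1.2, "hypertension": 0.8, "high_cholesterol": 0.6}
INTERCEPT = -2.0

def hospitalization_risk(patient):
    score = INTERCEPT + sum(WEIGHTS[factor] for factor, present in patient.items() if present)
    return 1.0 / (1.0 + math.exp(-score))  # logistic link turns the score into a probability

patient = {"diabetes": True, "hypertension": True, "high_cholesterol": True}
print(f"predicted hospitalization risk: {hospitalization_risk(patient):.0%}")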

Gov 2.0 competition to predict commute times

databuff databuff writes  |  about 4 years ago

databuff (1789500) writes "Last week, Sydney's Minister of Roads, David Borger, launched a $10,000 competition to develop an algorithm that predicts commute times on a major Sydney freeway. The winning algorithm will be used to power predictions on the Sydney live traffic website. The hope is that the predictions will help commuters make informed decisions about when to travel and on what routes, lowering the intensity of peak hour traffic. In its first week, the competition attracted entries from more than 50 teams and 19 countries."
Link to Original Source

Time to upgrade the Elo chess rating system

databuff databuff writes  |  more than 4 years ago

databuff (1789500) writes "About six weeks ago, Slashdot reported a competition to find a chess rating algorithm that performed better than the official Elo rating system. The competition has just reached the halfway mark and the best entries have outperformed Elo by over 8 per cent. The leader is a Portrugese physicist, followed by an Israeli mathematician and then a pair of American computer scientists. The fact that Elo has been so comprehensively beaten is a sure sign that half a century after it was developed, it's due for an upgrade."
Link to Original Source

Chess ratings - move over Elo

databuff databuff writes  |  more than 4 years ago

databuff (1789500) writes "Less than 24 hours ago, Jeff Sonas, the creator of the Chessmetrics rating system, launched a competition to find a chess rating algorithm that performs better than the official Elo rating system. The competition requires entrants to build their rating systems based on the results of more than 65,000 historical chess games. Entrants then test their algorithms by predicting the results of another 7,809 games. Already three teams have managed create systems that make more accurate predictions than the official Elo approach. It's not a surprise that Elo has been outdone — after all, the system was invented half a century ago before we could easily crunch large amounts of historical data. However, it is a big surprise that Elo has been bettered done so quickly!"
Link to Original Source

Crowdsourcing a new chess rating system

databuff databuff writes  |  more than 4 years ago

databuff (1789500) writes "The Elo rating system was invented half a century ago by Hungarian physicist and chess master Arpad Elo. It is used throughout the chess world and has been applied to other contests, ranging from World of Warcraft to soccer. However, Elo's formula was derived theoretically, before we could easily crunch large amounts of historical data — so it is likely that modern approaches could do much better. Jeff Sonas, the creator of the Chessmetrics system, has just launched a competition to find a superior chess rating system. Competitors build their rating systems based on the results of more than 65,000 historical chess games. They then test their algorithms by predicting the results of another 7,810 games. Entries to the competition are benchmarked against Elo as well as other well-known rating systems (such as Glicko and Chessmetrics)."
Link to Original Source
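For reference, the Elo benchmark that entrants are trying to beat is based on a published formula: the rating gap is converted into an expected score, and each game nudges both ratings toward the actual result. A minimal sketch in Python (the K-factor of 20 is just one common choice):

# Minimal Elo sketch: expected score from the rating gap, then a K-factor update.
def expected_score(rating_a, rating_b):
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating_a, rating_b, score_a, k=20):
    """Return new ratings after one game (score_a: 1 win, 0.5 draw, 0 loss)."""
    e_a = expected_score(rating_a, rating_b)
    return rating_a + k * (score_a - e_a), rating_b + k * ((1 - score_a) - (1 - e_a))

print(expected_score(2400, 2200))  # about 0.76: the higher-rated player is favoured
print(update(2400, 2200, 0.0))     # an upset loss pulls the ratings toward each other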

Predicting entrepreneurial success

databuff databuff writes  |  more than 4 years ago

databuff (1789500) writes "If you were trying to find successful entrepreneurs, what would you look for? An unusually high IQ? Wrong. Cooperation and team building over antagonism? Wrong again. The Founders Institute, a startup incubator, has developed a test that any population, any city, university, or country could use to determine who among them has the best shot at succeeding as an entrepreneur. The test favours founders with an interest in novelty, the ability to think on their feet and those who are the right age. CNN notes that the venture capital industry hasn't changed in 30 years, relying on networks, chance meetings and gut feelings. A quantitative test may prove to be a great way for venture firms to find opportunities that rival VCs have missed."
Link to Original Source

Memphis uses data to cut crime

databuff databuff writes  |  more than 4 years ago

databuff (1789500) writes "What started as an experimental predictive crime-prevention initiative in 2005 has proven to be a fabulously successful program. Each week, the Memphis Police Department (MPD) examines the last four weeks' worth of crime data and links it to spatial information. Incidents are then plotted on a digital map, which is monitored around the clock. The MPD's software pinpoints crime hot spots, with details down to the day of the week and times of the day that are most active. The program has been credited as a primary driver of a 31% reduction in serious crime in Memphis since 2006."
Link to Original Source
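The basic idea behind that kind of hot-spot mapping can be sketched in a few lines of Python (illustrative only, with made-up incident records; this is not the MPD's software): bucket incidents by a coarse location grid plus day of week and hour, then rank the busiest buckets.

# Illustrative hot-spot counting: coarse lat/lon cell + day of week + hour.
from collections import Counter
from datetime import datetime

# Hypothetical incident records: (latitude, longitude, timestamp).
incidents = [
    (35.149, -90.049, datetime(2010, 6, 4, 23, 15)),
    (35.150, -90.048, datetime(2010, 6, 11, 23, 40)),
    (35.118, -89.971, datetime(2010, 6, 7, 14, 5)),
]

def bucket(lat, lon, when, cell=0.01):
    return (round(lat / cell), round(lon / cell), when.strftime("%A"), when.hour)

hot_spots = Counter(bucket(lat, lon, when) for lat, lon, when in incidents)
for key, count in hot_spots.most_common():
    print(count, key)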

Competitions facilitate real-time science

databuff databuff writes  |  more than 4 years ago

databuff (1789500) writes "In the age of ubiquitious communications technologies, it's inexplicable that the scientific literature evolves much as it did one hundred years ago. But perhaps competitions offer an effective solution. The best entry to a (still running) bioinformatics contest, requiring participants to pick genetic markers that correlate with a change in the severity of the HIV infection, had outdone the best methods in the scientific literature within a week and a half. Whereas the scientific literature tends to evolve slowly (somebody writes a paper, somebody else tweaks that paper and so on), a competition inspires rapid innovation by introducing the problem to a wide audience. So when this week the headlines announced the discovery of genetic markers that correlate with extreme longevity — what they missed was that the work took 15 years from beginning to publication. Had the study been run as a competition, with the raw data available to all, the results would have been generated in real time. Insights would have been available much sooner and with more precision."
Link to Original Source

World Cup forecasting challenge

databuff databuff writes  |  more than 4 years ago

databuff (1789500) writes "As a break from projecting the strength of subprime mortgages, credit default swaps and other obscure financial instruments, quantitative analysts at Goldman Sachs, JP Morgan, UBS and Danske Bank have modeled the 2010 FIFA World Cup. Now Kaggle has set up a forecasting competition, allowing statisticians to go head-to-head with these corporate giants. The challenge is to predict how far each country will progress in the tournament. If the banks know as much about soccer as they do about subprime mortgages, the statisticians are in with a good chance."
Link to Original Source
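One common way to produce such forecasts, sketched here in Python with made-up team strengths (this is not any bank's model), is to simulate the bracket many times and count how often each team progresses or wins.

# Monte Carlo sketch of a knockout bracket with hypothetical strengths and pairings.
import random

random.seed(2010)
strength = {"Spain": 90, "Netherlands": 85, "Germany": 84, "Uruguay": 78}

def winner(a, b):
    # Win probability proportional to relative strength.
    return a if random.random() < strength[a] / (strength[a] + strength[b]) else b

titles = {team: 0 for team in strength}
n_sims = 100000
for _ in range(n_sims):
    finalist1 = winner("Spain", "Germany")        # hypothetical semi-final pairings
    finalist2 = winner("Netherlands", "Uruguay")
    titles[winner(finalist1, finalist2)] += 1

for team, wins in sorted(titles.items(), key=lambda kv: -kv[1]):
    print(f"{team:12s} {wins / n_sims:.1%}")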

Google launches a data prediction API

databuff databuff writes  |  more than 4 years ago

databuff (1789500) writes "Google has released a data prediction a prediction API. The service helps users leverage historical data to make predictions that can guide real-time decisions. According to Google, the API can be used for prediction tasks ranging from product recommendations to churn analysis (predicting which customers are likely to switch to another provider). The API involves three simple steps, upload the data, train the model and then generate predictions. The API is currently available on an invitation only basis."
Link to Original Source
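The API is invitation-only, so its actual calls aren't reproduced here; the Python sketch below shows the same three-step workflow (load data, train a model, generate predictions) using scikit-learn as a stand-in, with a made-up churn-style dataset.

# Not the Google Prediction API: the same load/train/predict workflow with scikit-learn.
from sklearn.linear_model import LogisticRegression

# Step 1: load the (hypothetical) data: [months_as_customer, support_tickets] -> churned?
X = [[24, 0], [3, 5], [36, 1], [2, 7], [18, 2], [1, 9]]
y = [0, 1, 0, 1, 0, 1]

# Step 2: train the model.
model = LogisticRegression().fit(X, y)

# Step 3: generate predictions for new customers.
print(model.predict([[30, 1], [2, 6]]))        # predicted churn labels
print(model.predict_proba([[30, 1], [2, 6]]))  # churn probabilities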

Data-driven companies

databuff databuff writes  |  more than 4 years ago

databuff (1789500) writes "Alan Caras starts a new series delving into the burgeoning world of companies driven by data. For the first in the series, Alan looks at music forecaster uPlaya. Fledgling bands upload their songs to uPlaya, which analyzes them against an ever evolving databank of past and present musical hits, to estimate a song's potential for commercial success. It's an interesting concept that raises questions. Is the appeal of a scale, melody or entire song a matter of subjective taste or science?"
Link to Original Source

Datasets and Data-driven Startups

databuff databuff writes  |  more than 4 years ago

databuff writes "Bradford Cross, a founder of Flightcaster, posts about the data-driven startup. Companies like Flightcaster and Weatherbill take publicly available data and deploy predictive analytics to create a business (Flightcaster predicts flight delays before any public announcements is made and Weatherbill insures against adverse weather events.) Included in the post are a list of other publicly available datasets that could be used to create other data-driven startups. It would be good to know if I'm going to get caught in a traffic jam on my way home..."
Link to Original Source

A Project to Make Predictions More Precise

databuff databuff writes  |  more than 4 years ago

databuff (1789500) writes "Predictions are critical to modern life. Police predict where and when crimes are most likely to take place, banks predict which loan applicants are most likely to default and hotels forecast seasonal demand to set room rates. A new project called Kaggle facilitates better predictions by providing a platform for forecasting competitions. The platform allows organizations to post their data and have it scrutinized by the world's best statisticians. It will offer a robust rating system, so it's easy to identify those with a proven track record. Organizations can choose either to follow the experts, or to follow the consensus of the crowd — which, according to New Yorker columnist James Surowiecki, is likely to be more accurate than the vast majority of individual predictions. The power of a pool of predictions was demonstrated by the Netflix Prize, a $1m data-prediction competition, which was won by a team of teams that combined 700 models. Kaggle's first competition is underway, and it is accessing the 'wisdom of crowds' to predict the winner of this May's Eurovision Song Contest."
Link to Original Source

Journals

databuff has no journal entries.
