
Could Open Source Lead to a Meritocratic Search Engine?

CmdrTaco posted more than 7 years ago | from the sure-if-you-donate-a-thousand-data-centers dept.

The Internet

Slashdot contributor Bennett Haselton writes "When Jimmy Wales recently announced the Search Wikia project, an attempt to build an open-source search engine around the user-driven model that gave birth to Wikipedia, he said his goal was to create "the search engine that changes everything", as he underscored in a February 5 talk at New York University. I think it could, although not for the same main reasons that Wales has put forth -- I think that for a search engine to be truly meritocratic would be more of a revolution than for a search engine to be open-source, although both would be large steps forward. Indeed, if a search engine could be built that really returned results in order of average desirability to users, and resisted efforts by companies to "game" the system (even if everyone knew precisely how the ranking algorithm worked), it's hard to overstate how much that would change things both for businesses and consumers. The key question is whether such an algorithm could be created that wouldn't be vulnerable to non-merit-based manipulation. Regardless of what algorithms may be currently under consideration by thinkers within the Wikia company, I want to argue logically for some necessary properties that such an algorithm should have in order to be effective. Because if their search engine becomes popular, they will face such huge efforts from companies trying to manipulate the search results, that it will make Wikipedia vandalism look like a cakewalk." The rest of his essay follows.

This will be a trip into theory-land, so it may be frustrating to users who dislike talk about "vaporware" and want to see how something works in practice. I understand where you're coming from, but I submit it's valuable to raise these questions early. This is in any case not intended to supplant discussion about how things are currently progressing.

First, though, consider the benefits that such a search engine could bring, both to content consumers and content providers, if it really did return results sorted according to average community preferences. Suppose you wanted to find out if you had a knack for publishing recipes online and getting some AdSense revenue on the side. You take a recipe that you know, like apple pie, and check out the current results for "apple pie". There are some pretty straightforward recipes online, but you believe you can create a more complete and user-friendly one. So you write up your own recipe, complete with photographs of the process showing how ingredients should be chopped and what the crust mixture should look like, so that the steps are easier to follow. (Don't you hate it when a recipe says "cut into cubes" and you want to throttle the author and shout, "HOW BIG??" It drove me crazy until I found CookingForEngineers.com.) Anyway, you submit your recipe to the search engine to be included in the results for "apple pie", and if the sorting process is truly meritocratic, your recipe page rises to the top. Until, that is, someone decides to surpass you, and publishes an even more user-friendly recipe, perhaps with a link to a YouTube video of them showing how to make the pie, which they shot with a tripod video camera and a clip-on mike in their well-lit kitchen. In a world of perfect competition, content providers would be constantly leapfrogging each other with better and better content within each category (even a highly specific one like apple pie recipes), until further efforts would no longer pay for themselves with increased traffic revenue. (The more popular search terms, of course, would bring greater rewards for those listed at the top, and would be able to pay for greater efforts to improve the content within that category.) But this constant leapfrogging of better and better content requires efficient and speedy sorting of search results in order to work. It doesn't work if the search results can be gamed by someone willing to spend effort and money (not worth it for the author of a single apple pie recipe, but worth it for a big money-making recipe site), and it doesn't work if it's impossible for new entrants to get hits when the established players already dominate search results.

Efficient competition benefits consumers even more for results that are sorted by price (assuming that among comparable goods and services, the community promotes the cheapest-selling ones to the top of the search results, as "most desirable"). If you were a company selling dedicated Web hosting, for example, you would submit your site to the engine to be included in results for "dedicated hosting". If you could demonstrate to the community that your prices and services were superior to your competitors', and if the ranking algorithm really did rank sites according to the preferences of the average user, your site could quickly rise to the top, and you'd make a bundle on new sales -- until, of course, someone else had the same idea and knocked you out of the top spot by lowering their prices or improving their services. The more efficient the marketplace, the faster prices fall and service levels rise, until prices just cover the cost of providing the service and compensating the business owner for their time. It would be a pure buyer's market.

It's important to precisely answer the question: Why would this system be better than a system like Google's search algorithm, which can be "gamed" by enterprising businesses and which doesn't always return first the results that the user would like most? You might be tempted to answer that in an inefficient marketplace created by an inefficient search result sorting algorithm, a user sometimes ends up paying $79/month for hosting, instead of the $29/month that they might pay if the marketplace were perfectly efficient. But this by itself is not necessarily wasteful. The extra $50 that the user pays is the user's loss, but it's also the hosting company's gain. If we consider costs and benefits across all parties, the two cancel out. The world as a whole is not poorer because someone overpaid for hosting.

The real losses caused by an inefficient search algorithm are the efforts spent by companies to game the search results (e.g. paying search engine optimization firms to try and get them to the top Google spot), and the reluctance of new players to enter that market if they don't have the resources to play those games. If two companies each spend $5,000 trying to knock each other off of the top spot for a search like "weddings", that's $5,000 worth of effort that gets burned up with no offsetting amount of goods and services added to the world. This is what economists call a deadweight loss, with no corresponding benefit to any party. The two wedding planners might as well have smashed their pastel cars into each other. Even if a single company spends the effort and money to move from position #50 to position #1, that gain to them is offset by the loss to the other 49 companies that each moved down by one position, so the net benefit across all parties is zero, and the effort that the company spent to raise their position would still be a deadweight loss.

On the other hand, if search engine results were sorted according to a true meritocracy, then companies that wanted to raise their rankings would have to spend effort improving their services instead. This is not a deadweight loss, since these efforts result in benefits or savings to the consumer.

I've been a member of several online entrepreneur communities, and I'd conservatively estimate that members spend less than 10% of the time talking about actually improving products and services, and more than 90% of the time talking about how to "game" the various systems that people use to find them, such as search engines and the media. I don't blame them, of course; they're just doing what's best for their company, in the inefficient marketplace that we live in. But I feel almost lethargic thinking of that 90% of effort that gets spent on activities that produce no new goods and services. What if the information marketplace really were efficient, and business owners spent nearly 100% of their efforts improving goods and services, so that every ounce of effort added new value to the world?

Think of how differently we'd approach the problem of creating a new Web site and driving traffic to it. A good programmer with a good idea could literally become an overnight success. If you had more modest goals, you could shoot a video of yourself preparing a recipe or teaching a magic trick, and just throw it out there and watch it bubble its way up the meritocracy to see if it was any good. You wouldn't have to spend any time networking or trying to rig the results; you'd just create good stuff and put it out there. No, despite whatever cheer-leading you may have heard, it doesn't quite work that way yet -- good online businessmen still talk about the importance of networking, advertising, and all the other components of gaming the system that don't relate to actually improving products and services. But there is no reason, in principle, why a perfectly meritocratic content-sorting engine couldn't be built. Would it revolutionize content on the Internet? And could Search Wikia be the project to do it, or play a part in it?

Whatever search engine the Wikia company produced, it would probably have such a large following among the built-in open-source and Wikipedia fan base that traffic wouldn't be a problem -- companies at the top of popular search results would definitely benefit. The question is whether the system can be designed so that it cannot be gamed. I agree with Jimmy Wales's stated intention to make the algorithm completely open, since this makes it easier for helpful third parties to find weaknesses and get them fixed, but of course it also makes it easier for attackers to find those weaknesses and exploit them. If you think Microsoft paying a blogger to edit Wikipedia is a problem, imagine what companies will do to try and manipulate the search results for a term like "mortgage". So what can be done?

The basic problem with any community that makes important decisions by "consensus" is that it can be manipulated by someone who creates multiple phantom accounts all under their control. If a decision is influenced by voting -- for example, the relative position of a given site in a list of search results -- then the attacker can have the phantom accounts all vote for one preferred site. You could look for large numbers of accounts created from the same IP address, but the attacker could use Tor and similar systems to appear to be coming from different IPs. You could attempt to verify the unique identity of each account holder, by phone for example, but this requires a lot of effort and would alienate privacy-conscious users. You could require a Turing test for each new account, but all this means is that an attacker couldn't use a script to create their 1,000 accounts -- an attacker could still create the accounts if they had enough time, or if they paid some kid in India to create the accounts. You could give users voting power in proportion to some kind of "karma" that they had built up over time by using the site, but this gives new users little influence and little incentive to participate; it also does nothing to stop influential users from "selling out" their votes (either because they became disillusioned, or because they signed up with that as their intent from the beginning!).

So, any algorithm designed to protect the integrity of the Search Wikia results would have to deal with this type of attack. In a recent article about Citizendium, a proposed Wikipedia alternative, I argued that you could deal with conventional wiki vandalism by having identity-verified experts sign off on the accuracy of an article at different stages. That's practical for a subject like biology, where you could have a group of experts whose collective knowledge covers the subject at the depth expected in an encyclopedia, but probably not for a topic like "dedicated hosting" where the task is to sift through tens of thousands of potential matches and find the best ones to list first. You need a new algorithm to harness the power of the community. I don't know how many possible solutions there are, but here is one way in which it could be done.

Suppose a user submits a requested change to the search results -- the addition of their new Site A, or the proposal that Site A should be ranked higher. This decision could be reviewed by a small subset of registered users, selected at random from the entire user population. If a majority of the users rate the new site highly enough as a relevant result for a particular term, then the site gets a high ranking. If not, then the site is given a low ranking, possibly with feedback being sent to the submitter as to why the site was not rated highly. The key is that the users who vote on the site have to be selected at random from among all users, instead of letting users self-select to vote on a particular decision.
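To make the selection step concrete, here is a minimal Python sketch of one way such a random review panel could work. All of the names and numbers (registered_users, PANEL_SIZE, the approval threshold) are illustrative assumptions, not anything drawn from an actual Search Wikia design:

```python
import random

PANEL_SIZE = 100          # assumed size of the random review panel
APPROVAL_THRESHOLD = 0.5  # assumed fraction of "relevant" votes needed to accept

def review_submission(registered_users, collect_vote):
    """Pick reviewers at random from the whole user base and tally their votes.

    registered_users: the full list of account IDs.
    collect_vote: a callback that asks one user to rate the submitted site and
    returns True if they judge it a relevant, high-quality result for the term.
    """
    panel = random.sample(registered_users, PANEL_SIZE)
    yes_votes = sum(1 for user in panel if collect_vote(user))
    return yes_votes / PANEL_SIZE > APPROVAL_THRESHOLD
```

The important property is that random.sample draws the panel from the entire account population, so a submitter cannot steer friends or sock-puppet accounts onto the panel the way they can on a site where anyone may vote on anything.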

The nice property of this system is that an attacker can't manipulate the voting simply by having a large number of accounts at their control -- they would have to control a significant proportion of accounts across the entire user population, in order to ensure that when the voters were selected randomly from the user population, the attacker controlled enough of those accounts to influence the outcome. (If an attacker ever really did spend the resources to reach that threshold point, and it became apparent that they were manipulating the votes, those votes could be challenged and overridden by a vote of users whose identities were known to the system. This would allow the verified-identity users to be used as an appeal of last resort to block abuse by a very dedicated adversary, while not requiring most users to verify their identity. This is basically what Jimmy Wales does when he steps in and arbitrates a Wikipedia dispute, acting as his own "user whose identity is known".)
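A back-of-the-envelope calculation shows why that threshold is so hard to reach. Treating each panel seat as an independent draw (a binomial approximation to sampling without replacement), the chance that an attacker controlling a fraction p of all accounts captures a majority of a randomly chosen panel falls off very quickly; the panel size of 100 is again just an assumed figure:

```python
from math import comb

def prob_attacker_majority(p, panel_size=100):
    """Chance that an attacker controlling fraction p of all accounts holds a
    strict majority of a randomly drawn review panel (binomial approximation)."""
    need = panel_size // 2 + 1
    return sum(comb(panel_size, k) * p**k * (1 - p)**(panel_size - k)
               for k in range(need, panel_size + 1))

print(prob_attacker_majority(0.10))  # vanishingly small
print(prob_attacker_majority(0.40))  # still only about 2%
```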

This algorithm for an "automated meritocracy" (automeritocracy? still not very catchy at 7 syllables) could be extended to other types of user-built content sites as well. Musicians could submit songs to a peer review site, and the songs would be pushed out to a random subset of users interested in that genre, who would then vote on the songs. (If most users were too apathetic to vote, the site could tabulate the number of people who heard the song and then proceeded to buy or download it, and count those as "votes" in favor.) If the votes for the song are high enough, it gets pushed out to all users interested in that genre; if not, then the song doesn't make it past the first stage. If there are 100,000 users subscribed to a particular genre, but it only takes ratings from 100 users to determine whether or not a song is worth pushing out to everybody, then "good" content is sent out to all 100,000 people while "bad" content only wastes the time of 100 users -- so the average user gets 1,000 pieces of "good" content for every 1 piece of "bad" content (assuming good and bad submissions arrive in roughly equal numbers). New musicians wouldn't have to spend any time networking, promoting, recruiting friends to vote for them -- all of which have nothing to do with making the music better, and which fall into the category of deadweight losses described above.
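The 1,000-to-1 figure falls straight out of the numbers in the paragraph above; a trivial sketch, with the equal-submission-rate assumption made explicit:

```python
SUBSCRIBERS = 100_000  # users subscribed to the genre
SCREENERS = 100        # random listeners needed to screen one new song

def listens(num_good, num_bad):
    """User-listens generated by accepted ("good") vs. rejected ("bad") songs.
    A good song reaches every subscriber; a bad one only costs the screeners' time."""
    return num_good * SUBSCRIBERS, num_bad * SCREENERS

good, bad = listens(num_good=50, num_bad=50)  # assume equal submission rates
print(good / bad)  # 1000.0 -- good listens per wasted listen for the average user
```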

An automeritocracy-like system could even be used as a spam filter for a large e-mail site. Suppose you want to send your newsletter to 100,000 Hotmail users (who really have signed up to receive it). Hotmail could allow your IP to send mail to 100,000 users the first time, and then if they receive too many spam complaints, block your future mailings as junk mail. But if that's their practice, there's nothing to stop you from moving to a new, unblocked IP and repeating the process from there. So instead, suppose that Hotmail stores your 100,000 received messages temporarily into users' "Junk Mail" folders, but selectively releases a randomly selected subset of 100 messages into users' inboxes. Suppose for arguments' sake that when a message is spam, 20% of users click the "This is spam" button, but if not, then only 1% of users click it. Out of the 100 users who see the message, if the number who click "This is spam" looks close to 1%, then since those 100 users were selected as a representative sample of the whole population, Hotmail concludes that the rest of the 100,000 messages are not spam, and moves them retroactively to users' inboxes. If the percentage of those 100 users who click "This is spam" is closer to 20%, then the rest of the 100,000 messages stay in Junk Mail. A spammer could only rig this system if they controlled a significant proportion of the 100,000 addresses on their list -- not impossible, but difficult, since you have to pass a Turing test to create each new Hotmail account.
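Here is a hedged sketch of the release decision. Hotmail's real pipeline is unknown; the 1% and 20% complaint rates come from the paragraph above, and the cutoff of 8 complaints is an assumption chosen to sit between the two rates:

```python
SAMPLE_SIZE = 100
HAM_RATE = 0.01    # assumed complaint rate for legitimate bulk mail
SPAM_RATE = 0.20   # assumed complaint rate for actual spam
CUTOFF = 8         # assumed decision threshold between the two rates

def release_remaining_messages(complaints_in_sample):
    """Decide the fate of the ~99,900 messages still held in Junk Mail.

    The first SAMPLE_SIZE copies went to a random sample of recipients'
    inboxes; complaints_in_sample is how many of them clicked "This is spam".
    Fewer complaints than CUTOFF looks like the ~1% background rate, so the
    rest are released; otherwise they stay junked.
    """
    return complaints_in_sample < CUTOFF
```

With those assumed rates, a legitimate mailing averages about 1 complaint in the sample and real spam about 20, so a cutoff anywhere between the two keeps both kinds of mistake (junking a real newsletter, releasing real spam) rare.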

The problem is, there's a huge difference between systems that implement this algorithm, and systems that implement something that looks superficially like this algorithm but actually isn't. Specifically, any site like HotOrNot, Digg, or Gather that lets users decide what to vote on is vulnerable to the attack of using friends or phantom users to vote yourself up (or to vote someone else down). In a recent thread on Gather about a new contest that relied on peer ratings, many users lamented the fact that it was essentially rigged in favor of people with lots of friends who could give them a high score (or that ratings could be offset unfairly in the other direction by "revenge raters" giving you a 1 as payback for some low rating you gave them). I assume that the reason such sites were designed that way is that it just seemed natural that if your site is driven by user ratings, and if people can see a specific piece of content by visiting a URL, they should have the option on that page to vote on that content. But this unfortunately makes the system vulnerable to the phantom-users attack.

(Spam filters on sites like Hotmail also probably have the same problem. We don't know for sure what happens when the user clicks "This is spam" on a piece of mail, but it's likely that if a high enough percentage of users click "This is spam" for mail coming from a particular IP address, then future mails from that IP are blocked as spam. This means you could get your arch-rival Joe's newsletter blacklisted, by creating multiple accounts, signing them up for Joe's newsletter, and clicking "This is spam" when his newsletters come in. This is an example of the same basic flaw -- letting users choose what they want to vote on.)

So if the Wikia search site uses something like this "automeritocracy" algorithm to guard the integrity of its results, it's imperative not to use an algorithm vulnerable to the hordes-of-phantom-users attack. Some variation of selecting random voters from a large population of users would be one way to handle that.

Finally, there is a reason why it's important to pay attention to getting the algorithm right, rather than hoping that the best algorithm will just naturally "emerge" from the "marketplace of ideas" that results from different wiki-driven search sites competing with each other. The problem is that competition between such sites is itself highly inefficient -- a given user may take a long time to discover which site provides better search results on average, and in any case, it may be that Wiki-Search Site "B" has a better design but Wiki-Search Site "A" had first-mover advantage and got a larger number of registered users. When I wrote earlier about why I thought the Citizendium model was better than Wikipedia, several users pointed out that it may be a moot point, for two main reasons. First, most users will not switch to a better alternative if it never occurs to them. Second, for sites that are powered by a user community, it's very hard for a new competitor to gain ground, even with a superior design, if the success of your community depends on lots of people starting to use it all at once. You could write a better eBay or a better Match.com, but who would use it? Your target market will go to the others because that's where everybody else is. Citizendium is, I think, a special case, since they can fork articles that started life on Wikipedia, so Wikipedia doesn't have as huge an advantage over them as it would if Citizendium had to start from scratch. But the general rule about imperfect competition still applies.

It's a chicken-and-egg problem: You can have Site A that works as a pure meritocracy, and Site B that works as an almost-meritocracy but can be gamed with some effort. But Site B may still win, because the larger environment in which they compete with each other is not itself a meritocracy. So we just have to cross our fingers and hope that Search Wikia gets it right, because if they don't, there's no guarantee that a better alternative will rise to take its place. But if they get it right, I can hardly wait to see what changes it would bring about.


148 comments


Hrmmm (-1, Troll)

Dramacrat (1052126) | more than 7 years ago | (#18013336)

First... post?
kekeke.

Re:Hrmmm (2, Insightful)

fyngyrz (762201) | more than 7 years ago | (#18013540)

Won't work. Here's why, in a nutshell: There are huge numbers of sites on the net. There are not huge numbers of sets of people who will be willing to compare sites for relative merit (and there probably aren't even large numbers of such sets who can do so, even if you paid them for the results, which would be a huge cost that would not pay for itself for most types of sites).

Sorry. Only computers can handle a task like this. It is automation or failure.

Re:Hrmmm (3)

eln (21727) | more than 7 years ago | (#18013700)

You're probably right. One question, though:

What the hell does that have to do with the post you replied to? Stop piggybacking on nonsensical early posts to pump up your karma.

MERITOCRATIC! (3, Funny)

Jeremiah Cornelius (137) | more than 7 years ago | (#18014060)

Seems more likely to lead to Mediocratic.

Re:Agreed, but for a different reason (1)

rowama (907743) | more than 7 years ago | (#18014738)

I also think it won't work, but for a different reason.

The solution offered in the essay can be gamed. Not only that, it would open a new industry of gaming facilitators, or brokers. Gaming the system would only require that a third party (i.e., the broker) provide incentive (e.g., money?) for the chosen raters to vote a certain way. Raters who want to make a few bucks would basically be selling their votes by having the highest paying broker tell them how to vote. As soon as the proportion of bought votes reaches some critical value, the entire system would begin to implode.

Re:Hrmmm (2, Funny)

Overly Critical Guy (663429) | more than 7 years ago | (#18014788)

Thank you for your interesting response to "first post." Apparently, first post won't work because there are a huge number of sites on the net. Only computers can handle a task like first post.

Re:Hrmmm (1)

nacturation (646836) | more than 7 years ago | (#18015394)

Won't work.
Actually, it did. The poster you replied to who wondered "First... post?" did, in fact, get first post.

Sorry. Only computers can handle a task like this. It is automation or failure.
That could very well be the explanation. Perhaps the first poster automated the process. It would certainly explain the outcome.
 

I don't think it will beat pigeon ranking... (1, Funny)

filesiteguy (695431) | more than 7 years ago | (#18013338)

Seriously: What could an OSS-based user-submitted search algorithm do that Pigeon Rank - http://www.google.com/technology/pigeonrank.html - couldn't? If a team of highly trained pigeons can build an empire like Google, then I seriously doubt that user-based indexing would work.

Am I wrong?

Re:I don't think it will beat pigeon ranking... (0)

Anonymous Coward | more than 7 years ago | (#18013444)

Well, if the users submitting to the OSS search algorithm were monkeys, I think they could take Google. Monkeys are definitely superior to pigeons. Let's not get started about bears.

Re:I don't think it will beat pigeon ranking... (1)

homey of my owney (975234) | more than 7 years ago | (#18014004)

I think so. The problem with Google for me is the crap results I get on my searches that wind up near the top of the results. This comes from the focus on ad revenue, which is never discussed. Certainly I am not alone when I find that I've wasted time visiting a page that has NOTHING to do with what I'm looking for... but it's got a lot of ads for it!

"Bennett Haselton" (-1, Flamebait)

Anonymous Coward | more than 7 years ago | (#18013392)

Is that just a pen name or pseudonym for Jon Katz? Because his articles are just as long winded and just as boring.

I'll take the Off-topic hit for this (4, Interesting)

UbuntuDupe (970646) | more than 7 years ago | (#18013422)

I like the essay except for this:

"The real losses caused by an inefficient search algorithm, are the efforts spent by companies to game the search results (e.g. paying search engine optimization firms to try and get them to the top Google spot), and the reluctance of new players to enter that market if they don't have the resources to play those games. If two companies each spend $5,000 trying to knock each other off of the top spot for a search like "weddings", that's $5,000 worth of effort that gets burned up with no offsetting amount of goods and services added to the world. This is what economists call a deadweight loss, with no corresponding benefit to any party."


This issue has long bugged me and it's hard to get answers about it. I don't understand how this is a deadweight loss (DWL) by his definition. Who got the $5000 worth of effort from each of them that they spent? That was the corresponding benefit to another party. How is this DWL different from the "non-DWL" example directly preceding, in which someone overpaid for hosting, but that was the hosting company's gain?

Does anyone have a rigorous DWL definition that can be backed up by a valid example?

Re:I'll take the Off-topic hit for this (1)

nine-times (778537) | more than 7 years ago | (#18013730)

Who got the $5000 worth of effort from each of them that they spent? That was the corresponding benefit to another party.

The SEO expert? I don't really know about deadweight loss, but it does seem that nothing was gained by the exercise that was described, except somebody got to leech money off of the companies paying for SEO.

Re:I'll take the Off-topic hit for this (2, Insightful)

maxume (22995) | more than 7 years ago | (#18013872)

The company that loses puts money into the advertising system; that money can very likely be re-purposed for other keywords or whatever. The time their employees spent gaming the system (to no benefit for the company) could have been spent on activities that were beneficial to the company. The employee doesn't care, but the employer would have been better off sending him for donuts or whatever.

The amounts seem unlikely (a month of employee time with no realized benefit? bah.), but the concept is sound.

Re:I'll take the Off-topic hit for this (3, Informative)

pkulak (815640) | more than 7 years ago | (#18013902)

Because the first example is equivalent to someone just handing the hosting company 50 bucks a month as a free gift. Money is exchanged, but nothing happens. In the second example, money is exchanged AND people work very hard for a long time to earn it and yet produce nothing. It would be like me paying you to dig a hole and then fill it in. The time you spend doing that is time you can't spend curing cancer.

Re:I'll take the Off-topic hit for this (2, Funny)

geoffspear (692508) | more than 7 years ago | (#18014052)

Anyone who's going to take your money to spend all day digging holes and filling them in is probably unlikely to come up with a cure for cancer regardless of how much you pay them to do research.

Re:I'll take the Off-topic hit for this (0)

Anonymous Coward | more than 7 years ago | (#18014064)

The time you spend doing that is time you can't spend curing cancer.
why are you posting on slashdot when you could be curing cancer?

Re:I'll take the Off-topic hit for this (4, Funny)

Elvis Parsley (939954) | more than 7 years ago | (#18015308)

'cause somebody's paying him $5000 to.

Re:I'll take the Off-topic hit for this (1)

localman (111171) | more than 7 years ago | (#18015342)

Thanks... I think that's the best answer anyone came up with.

Cheers.

Re:I'll take the Off-topic hit for this (1)

Pentagram (40862) | more than 7 years ago | (#18013926)

Who got the $5000 worth of effort from each of them that they spent? That was the corresponding benefit to another party.

Yes, but it's just a transfer of money from one party to another; it's a zero-sum game. No wealth has been produced in the sense of some useful work being done. With respect to the hosting company example, the hosting company received the market price for a useful service, a positive benefit to both parties. (As far as I can see, the company did not overpay for hosting in the example).

That seems to be the theory, anyway.

Re:I'll take the Off-topic hit for this (1)

UbuntuDupe (970646) | more than 7 years ago | (#18014038)

Well, no, that's not the theory, hence the problem. The definition of the DWL given in the essay (and in treatments of the topic) is a loss "with no corresponding benefit for another party". Whoever got the $5000 benefited; hence it cannot be a DWL. The loss of the search-engine-gamers was the gain of whoever they paid. It doesn't matter if wealth/useful-work has been produced or hasn't. Even in a zero-sum transfer, someone benefits. For it to be a true DWL (by the definition), it must be that no one benefits.

Re:I'll take the Off-topic hit for this (0)

Anonymous Coward | more than 7 years ago | (#18013994)

I've taken a few college econ classes, and my understanding is that in strict economics terms dead weight loss is the reduction in overall utility caused by any transaction that is not at the efficient price level or the efficient quantity level. Going by that, the example of overpaying for hosting will indeed cause a deadweight loss. Paying the $5000 (let's just say it is all a direct bribe to the search engine) may be the efficient price for that transaction since it is agreed upon by both parties, and the amount is probably driven pretty close to some market value since every big site is trying to bribe the search engine. However, it still creates an inefficient market because it constitutes what economists call 'rent-seeking behavior', which you can probably look up on wikipedia and get a clearer definition than I can give you, but it is basically spending money or effort to increase one's wealth without actually doing anything. The classic examples tend to be forms of government corruption, but I think certain subsets of advertising are also used as basic examples. I hope that helps a little.

Re:I'll take the Off-topic hit for this (1)

UbuntuDupe (970646) | more than 7 years ago | (#18014178)

To be honest, AC, it doesn't help.

in strict economics terms dead weight loss is the reduction in overall utility caused by any transaction that is not at the efficient price level or the efficient quantity level.

Okay, but what does that *mean*? The problem here is that the jargon is obscuring understanding of the concept of a DWL. What does it mean for one price or quantity level to be efficient? I think when you unravel the terms, you see it's basically circular. Try if you disagree.

Paying the $5000 (let's just say it is all a direct bribe to the search engine) may be the efficient price for that transaction since it is agreed upon by both parties, and the amount is probably driven pretty close to some market value since every big site is trying to bribe the search engine. However, it still creates an inefficient market because it constitutes what economists call 'rent-seeking behavior'

Okay, but now you're justifying its classification as a DWL on different grounds than the original author proposed.

which you can probably look up on wikipedia and get a clearer definition than I can give you,

No, I understand what rent-seeking is, and how it's wasteful. My point is just that it can't be attacked as being a DWL, because someone certainly does benefit -- the rent-seekers. Yes, there are other (very good) reasons that explain why it's bad, but when you start to appeal to the concept of a Paretian improvement/worsening, which is what the DWL does, you reach a contradiction and can't critique it on those grounds.

Re:I'll take the Off-topic hit for this (1, Funny)

Anonymous Coward | more than 7 years ago | (#18014858)

A dead weight loss is when idiots waste time arguing on the internet about badly-written, poorly thought-out essays that no one is going to read instead of engaging in more productive activities like downloading pornographic materials or playing World of Warcraft.

HTH

Re:I'll take the Off-topic hit for this (1)

jonbryce (703250) | more than 7 years ago | (#18014356)

The Search Engine Optimisation expert gets the money. One of the things he will likely do is make the site compliant with the W3C's accessibility guidelines, as this will likely improve search ranking. That does benefit society as a whole. But other techniques such as url cloaking and keyword stuffing do not benefit society as a whole, so having scarce resources devoted to these tasks is suboptimal as far as the economy is concerned.

Re:I'll take the Off-topic hit for this (1, Interesting)

Anonymous Coward | more than 7 years ago | (#18014880)

How about the constant accountant/tax battle? People who try and avoid paying taxes are pitted against Tax Offices around the world.

Re:I'll take the Off-topic hit for this (1)

dovf (811000) | more than 7 years ago | (#18014890)

I don't know anything about economics, but it seems to me that the real DWL is the fact that these companies have invested the $5000 in something which produced nothing, rather than having invested it in whatever it is they do, and thus actually producing something. So you have one situation in which $10000 were spent, and you have something to show for it, and another situation in which the same $10000 were spent, but there's nothing to show for it. So the real DWL is the "something vs. nothing", not the $10000, which are spent in either situation.

Re:I'll take the Off-topic hit for this (1)

UbuntuDupe (970646) | more than 7 years ago | (#18015006)

So the real DWL is the "something vs. nothing", not the $10000, which are spent in either situation.

I agree you can use a more rigorous conception of the DWL, but as with the other responders, that wasn't the definition the original author used. In that definition, what makes it a DWL is that there was a loss *not corresponding to any gain*. While the *net* gain (across all people) may be zero, or even negative, the people they paid to (futilely) improve the search engine ranking certainly did gain a benefit that corresponded to their loss. Hence, my confusion about the common use of the DWL term.

Re:I'll take the Off-topic hit for this (1)

inviolet (797804) | more than 7 years ago | (#18015146)

This issue has long bugged me and it's hard to get answers about it. I don't understand how this is a deadweight loss (DWL) by his definition. Who got the $5000 worth of effort from each of them that they spent? That was the corresponding benefit to another party. How is this DWL different from the "non-DWL" example directly preceding, in which someone overpaid for hosting, but that was the hosting company's gain?

The search-engine received all the benefits of the efforts, but those benefits cancelled each other out.

And even if they didn't cancel, the 'benefit' is not useful to the search-engine, because it only amounted to a minor change in search-results ranking. The market itself (i.e. the users who are searching) would benefit by that change in ranking... but only if the change caused an objectively better company to win a higher slot.

In the case of (your example) wedding consultants, there are probably no significant differences in product quality among the major players. Therefore, there is no benefit to the world if Wedding Consultants #228 takes a larger share of the market than Wedding Consultants #854. Thus we arrive at the general and disturbing conclusion that marketing efforts consume a lot of wealth but often (usually?) create no net wealth in return. A marketing effort usually just steers consumers in a slightly different but meaninglessly equivalent direction.

I'd hate myself if I worked in marketing. Of course it's a different story if the product you're pushing is one of the rare objectively superior products... but those don't seem to come up very often nowadays.

Re:I'll take the Off-topic hit for this (1)

UbuntuDupe (970646) | more than 7 years ago | (#18015226)

The search-engine received all the benefits of the efforts, but those benefits cancelled each other out.

No, the SE didn't benefit (or at least not primarily). Rather, the workers they paid to game it, benefited. Thus it can't be a DWL by the definition -- the loss of some corresponded to the gain of those workers. It doesn't matter that there was a net loss after summing over all agents; that's not what DWL refers to. Hence my confusion with the concept.

Re:I'll take the Off-topic hit for this (1)

inviolet (797804) | more than 7 years ago | (#18015408)

No, the SE didn't benefit (or at least not primarily). Rather, the workers they paid to game it, benefited. Thus it can't be a DWL by the definition -- the loss of some corresponded to the gain of those workers. It doesn't matter that there was a net loss after summing over all agents; that's not what DWL refers to. Hence my confusion with the concept.

Total social wealth is decreased when people employ others to perform useless tasks, such as battling over a search-engine slot. The SEO industry, like the realm of marketing in which it resides, is one millimeter above being a zero-sum game. And yet the people involved all consume a great deal of wealth in going through their contortions. That makes it a DWL... although that may or may not be how the term got used in the original post.

Now I'm confused too.

Re:I'll take the Off-topic hit for this (0)

Anonymous Coward | more than 7 years ago | (#18015550)

How about the ultimate in dead weight loss.

(1) Buy bullets
(2) Shoot them at Iraqis

You lose money on the bullets, and the Iraqis lose lives. The people you bought the bullets from could have been using that metal to build things that don't just get thrown away.

Every gun that is made, every warship launched, every rocket fired signifies in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed. This world in arms is not spending money alone. It is spending the sweat of its laborers, the genius of its scientists, the hopes of its children. [...] This is not a way of life at all in any true sense. Under the clouds of war, it is humanity hanging on a cross of iron.
-- Dwight Eisenhower, April 16, 1953

Google (1, Insightful)

Anonymous Coward | more than 7 years ago | (#18013430)

This is what Google already does - using linking as a proxy for the average desirability others have to see the content at the link's end. As with all systems, it can be gamed. But it sure does a good job of returning results. It is so good, in fact, that Google has not had to update its search syntax available to the general public in order to stay ahead of the competition. I wish Google would. Maybe someone else coming up with another way to have a meritocratic search engine will be the impetus for Google to improve this aspect. But do not pretend that Google does not already more or less do what is desired in providing results for a given search term. Google is meritocratic, and in probably the most neutral way possible, with its search results.

Re:Google (1)

GoCanes (953477) | more than 7 years ago | (#18014626)

Actually, Google is highly regressive in how it displays search results. Companies that can afford SEO tricks, renting links from other high PR sites, hiring staff to write useless content on blogs with links, etc will get the best results. The small company with a better mousetrap gets very little attention from Google. The chicken-and-egg problem will exist in the meritocracy too --- there's no way to rise unless people can find you.

Re:Google (0)

Anonymous Coward | more than 7 years ago | (#18015498)

Your argument seems to be that since others can game the search result system, the system itself is regressive. This does not really make sense unless you are suggesting a system can be developed that cannot be gamed.

A small company with a better product will get higher ranking from Google if other websites on the Internet have come to the same conclusion and linked to it. There is no better way to measure what people think is worth seeing on the internet than what those people have chosen to create links for on the internet, at least at this time. Come up with a better way and you too can be a billionaire.

Google is more than just software. (0)

Anonymous Coward | more than 7 years ago | (#18013436)

Even with an open source algorithm that was the 'best ever', the internet is big, and you're going to need huge amounts of computer power to compete with Google.

Re: Wiki vandalism (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#18013452)

I'd rather enjoy the small truth behind every vandalism than be blinded by bias in any other encyclopedia.

http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_is_failing [wikipedia.org] sounds like the whinge of a control freak who couldn't control Wikipedia (That's probably also the reason why that entry was marked as read-only ;-).

Get Back To Me On This One (1)

mpapet (761907) | more than 7 years ago | (#18013460)

Wikia search site uses something like this "automeritocracy" algorithm to guard the integrity of its results, it's imperative not to use an algorithm vulnerable to the hordes-of-phantom-users attack

That right there is a billion-dollar idea that I'm sure more than a small horde of devs are working on for themselves or for vulture capitalists.

Will Mr. Wales own the magic algorithm to use as he sees fit or what?

Gaming Google (1)

Bluesman (104513) | more than 7 years ago | (#18013500)

All you have to do to substantially reduce "gaming" the system is to not make it worthwhile.

Since you can pay Google to have your site link placed right at the top of the search results, for less than what you'd pay someone to game the system to reach a similar position, it wouldn't make sense for large companies to try to "game" Google at all.

If it weren't for the advertising, we'd probably see a lot more of this on Google.

Maybe this project could implement something similar.

Re:Gaming Google (0)

Anonymous Coward | more than 7 years ago | (#18013738)

Maybe this project could implement something similar.

Err, wouldn't that defeat the entire purpose of what he's proposing? The point is to get "better" search results. If you allow others to somehow "buy" their place on the list, then what have you accomplished vs. what is already there now?

Re:Gaming Google (1)

Bluesman (104513) | more than 7 years ago | (#18013792)

Google separates them by color, so it's easy to tell an ad vs. the real result.

This project could do something similar.

Re:Gaming Google (1)

Andrewkov (140579) | more than 7 years ago | (#18014254)

I don't think it's so simple. Google seems to be very commercialized. If you are trying to find reviews about a product, Google's top results always point at retailers selling the product. I find that other search engines will point you towards review sites and other third party info, rather than to marketing sites.

Re:Gaming Google (1)

skoaldipper (752281) | more than 7 years ago | (#18014888)

> Google seems to be very commercialized. [...] Google's top results always point at retailers selling the product.

I agree. I get about 50/50 commercial to actual content in my results these days. I long for the good 'ole days of non-commercial Google lore. As soon as I find a viable contender, I'm hopping off this Titanic at first sight of a lifeboat.

And from the lengthy but insightful article, he talks about cost savings to the consumer and not having to compete with industries spending 5 thousand on optimizations. However, he uses an apple pie recipe as an example showing how consumers can increase their page rank (among other factors) by linking to YouTube or putting more resources (in general) into their page. He thinks this levels the playing field for us consumers. However, companies will just funnel that 5 thousand from optimization engines into those phantom users he talks about instead. And those phantom users become that new optimization engine. How many of us have that type of disposable income to do the same, albeit in time or money?

No. I was quite fascinated and hopeful before I first visited that wikia link from an earlier /. article. But when I saw the google ads to the right, it was like actually looking at the expiration date on a rank package of meat when you get home. The promise of wikipedia (to me anyways) is the ad free user content. I sense no financial motivation behind that content; only personal, which can be easily checked by such a meritocracy in place now. Remove any financial tentacles from this grand search engine, then I'll jump to that lifeboat.

Merit is in the eye of the beholder (5, Insightful)

SirGarlon (845873) | more than 7 years ago | (#18013512)

I seriously doubt this will turn into anything useful because it relies on a collective definition of "merit." When you and I search for information on the same topic, your needs and my needs may be totally different (I may be looking for a little bit of general background and you may be looking to compare and contrast the opinions of two recognized experts in the field). Even if all the hurdles against manipulation can be overcome, I don't see how "merit" rankings will amount to anything more than a popularity contest.

A popularity contest would be great (1)

Per Abrahamsen (1397) | more than 7 years ago | (#18013878)

The main reason Google was so much better than AltaVista was that it sorted the results according to a "popularity contest" based on how many other pages referred to it. This was way more useful than sorting according to how often your search term occurs in it.

Don't dismiss popularity contests: the popular choice will, almost by definition, usually be the most interesting choice for most people. You may not feel you belong to "most people" (most people don't), but if you leave your feeling of elitism and/or alienation aside for a moment, chances are that the popular choice is often also the right one for you in most areas.

Of course, a general search engine that would return the most popular hits "by people with similar tastes as me" would be even better. Such personalized results are already provided by some book and movie recommendation systems.

This doesn't preclude the need for a good baseline though, something that would put roses higher than dog poo in a "things that smell great" list.

Re:A popularity contest would be great (1)

ivan256 (17499) | more than 7 years ago | (#18014936)

This doesn't preclude the need for a good baseline though, something that would put roses higher than dog poo in a "things that smell great" list.

That's exactly what this kind of system *doesn't* need. (Well, it needs it because if we don't use the same definition of "merit" for all users, or at least limit the number of definitions of "merit" that are available, this becomes a computationally infeasible project... But let's talk theoretically.)

This theoretical system should learn through user feedback exactly what that particular individual's idea of 'merit' is. Using a baseline may reduce the time it would take for the system to be useful to any given user, but it would insert bias into the system. Bias is what everybody wants removed from search results, because bias is exactly the thing that people exploit when they game the algorithm to put their page at the top of the results list. Any search engine that uses a static baseline definition of "merit" can be gamed by the operators of sites that want to move up the list.

I don't think it's possible for any engine, open source or not, to do significantly better than Google, since without the nearly infinite processing power required for individualized definitions of merit, all you will be doing is replacing one form of bias with another. Perhaps you could come up with bias that is more to the tastes of some subset of users, but empirically that's no better than what we have already.

Re:Merit is in the eye of the beholder (3, Insightful)

nine-times (778537) | more than 7 years ago | (#18013914)

In fairness, I don't think that "merit" is relative with respect to search-engine results. In a simplified example, if I search for "sony", I'm probably looking for one of three things:

  1. The Sony website
  2. A website that sells Sony products
  3. A website that gives reviews of Sony products

Therefore, the top results should reflect that. Most likely, I'm not looking for porn. I remember the days when search engines would return porn for any and all searches. The fact that Google was able to avoid this is part of what brought about its rise to power.

Of course, not every example is so simple, but clearly there are results that are or are not correct for a given search.

PSSST: Merit for sale!!! (1)

EmbeddedJanitor (597831) | more than 7 years ago | (#18014366)

With a few script changes, that whole spambot army out there could easily be rejigged as meritbots.

Companies could very easily request/encourage/force employees to do a merit update every morning.

Any system is open to abuse. At least the Google model is pretty easy to understand.

Re:Merit is in the eye of the beholder (2, Interesting)

timeOday (582209) | more than 7 years ago | (#18014842)

I seriously doubt this will turn into anything useful because it relies on a collective definition of "merit."


Good point. But furthermore, I can guarantee you this won't work, simply because web page rankings and spam filtering are essentially the same thing, and the spam issue has not been solved. That is, even when we don't have the problem of multiple conflicting opinions and all we're trying to do is model the preferences of a single recipient, we still can't do it!

Patents (1)

jimwelch (309748) | more than 7 years ago | (#18013522)

Let's see, when do Google's patents run out?

Re:Patents - Google has more than I thought! (1)

jimwelch (309748) | more than 7 years ago | (#18015284)

US Patents in general run for 20 years after the filing date.

"Ranking search results by reranking the results based on local inter-connectivity"
US Pat. 6526440 Filed Jan 30, 2001

"Methods and apparatus for using a modified index to provide search results in response to an ambiguous search query"
US Pat. 6865575 - Filed Jan 27, 2003

"Systems and methods for highlighting search results"
US Pat. 6839702 Filed Dec 13, 2000

Adaptive computation of ranking
US Pat. 7028029 - Filed Aug 23, 2004

"Graphical user interface for a display screen of a communications terminal"
US Pat. D529920 - Filed Sep 29, 2003
US Pat. D528553 Filed Sep 29, 2003

"System and method for encoding and decoding variable-length data"
US Pat. 7068192 - Filed Aug 13, 2004

"Cable management for rack mounted computing system"
US Pat. 6870095 - Filed Sep 29, 2003

"Voice interface for a search engine"
US Pat. 7027987 - Filed Feb 7, 2001

"Drive cooling baffle "
US Pat. 6906920 - Filed Sep 29, 2003

If they get this right... (1)

Panaqqa (927615) | more than 7 years ago | (#18013558)

...then I think the benefits could be tremendous, but whenever I hear the term "meritocracy" or its derivatives, I start to get skeptical and/or nervous. One person's eyesore of a website could be someone else's lovingly tended but badly coded page that is popular with all their friends. Also, by definition, those who are willing to spend time in a "modified wiki" project such as this will likely be more technically oriented and likely have a bias against poor design and/or poor coding. Bear in mind that of the first million or so webpages out there, by far the majority were put together by "power users" who taught themselves HTML - and coded without any form of compliance with any W3C standard as it existed at that time.

Economic inefficiencies (2, Informative)

atomic777 (860023) | more than 7 years ago | (#18013580)

I especially like his point on the economic inefficiencies that result from Google's vulnerability to results manipulation or 'tweaking'. In a certain unnamed, small internet company I worked for, fully 10% of our staff were SEM/SEO people, and a good chunk of our development time was spent on projects led by them trying to optimize our page rankings. I'm sure we're not the only ones.


If a theoretical "merit-based" search engine existed, those non-trivial resources would be spent building a better mousetrap, making our site faster, etc. I hope such an engine exists some day...

Re:Economic inefficiencies (1)

knewter (62953) | more than 7 years ago | (#18014976)

I think, realistically, if a merit-based search engine existed, then the same amount of resources would be spent trying and failing. Did you see marked improvement for the effort spent on SEO?

Meritocracy = aristocracy with genetics for wealth (0)

Anonymous Coward | more than 7 years ago | (#18013598)

Meritocracy is aristocracy with genetics substituted for wealth. In both cases, it's about who your daddy was - just a different part of him. Such concepts are valuable neither in building good societies nor creating effective search engines.

Re:Meritocracy = aristocracy with genetics for wea (0, Offtopic)

Dunbal (464142) | more than 7 years ago | (#18013712)

Such concepts are valuable neither in building good societies

Considering every human society has a "privileged" class - call it what you will, aristocracy or otherwise - I would think that it's the only way to HAVE a society.

StumbleUpon (3, Informative)

EricBoyd (532608) | more than 7 years ago | (#18013618)

It's not a "search engine" per-say but a lot of your talk of "automated meritocracy" sounds exactly like what StumbleUpon [stumbleupon.com] does in order to recommend content to users. People vote on a page, those votes are passed through an automated collaborative filtering system, and then the page is shown to more users who are predicted to like it, rinse lather and repeat. Good content rises to the top of the recommendation queue, so that new users (or people who just joined a category) are shown the things which the vast majority of people liked, in order to build up a rating history to personalize that person's recommendations.

Re:StumbleUpon (1)

truthsearch (249536) | more than 7 years ago | (#18014584)

With the data they have they can probably build a very personalized search engine. With everything at the top having very positive votes from many users I imagine it would be less susceptible to gaming.

Re:StumbleUpon (1)

MajinBlayze (942250) | more than 7 years ago | (#18014628)

You, and many others, are missing the point and failed even to RTFS (as long as it may be); it is worth reading if you believe that this is just an extension of Google's PageRank or of any other voting site out there. The primary concept is this: users cannot select what to vote for, preventing a subset of users from overwhelming the voting. Instead, random users are chosen (like meta-moderating on Slashdot) to vote yes or no. In order for SEO to be possible, the SEO company would have to control a majority of the entire user population. The largest flaws, however, lie in two problems: there would be a constant reshuffling of results based on new votes, and it requires users to care about the result of their voting.

Two approaches to the search problem (2, Interesting)

currivan (654314) | more than 7 years ago | (#18013632)

There are two main directions where search can improve. One is better understanding of natural language, to disambiguate query terms and provide results where the wording on pages is different from the wording of the query.

The other, which this approach can address, is to improve the term relevance scores and overall page quality metrics that mainstream search engines are based on. Google had its initial success because of two features of this type: one was PageRank, a measure of overall topic-independent site popularity, and the other was better use of anchor text, the words people write when linking to other pages.

In both cases, they mined the link structure of the web, which was essentially aggregate community generated information about site quality that wasn't being spammed at the time. As they succeeded, regular people put less effort into writing their own link text, and spammers took over.

The next source of this type of community generated content will probably be something incidental instead of deliberately created. If you build a central repository of reviews of web sites, you both make it easy for people to game your results, and you open yourself up to lawsuits from interested parties.

However, untapped information already exists on what people find useful on the web in the form of their browsing histories, a special case of this being their bookmarks. Someone who could aggregate this information on what millions of people ended up looking at after they ran a particular search query would be in an excellent position to improve the traditional search engine scoring algorithm beyond link data.

Re:Two approaches to the search problem (2, Interesting)

russellh (547685) | more than 7 years ago | (#18013876)

There are two main directions where search can improve. One is better understanding of natural language, to disambiguate query terms and provide results where the wording on pages is different from the wording of the query.
I'm highly skeptical about this path because NL works best in a specified (narrow) context. So if you can specify the context, then you must have already put web pages into context - driven by what? The semantic web? If you've done that, then NL is almost redundant. Like, maybe I want to search for "reaction" in the context of "chemistry" but not medicine or politics. The ability to say that this particular bit of information on the web is 30% political and 70% chemistry-related is the kind of thing you want to get to, and where NL and AI are ultimately useful for search: contextualizing information when it is gathered (or ideally, created), not during the search process.

The other, which this approach can address, is to improve the term relevance scores and overall page quality metrics that mainstream search engines are based on. Google had its initial success because of two features of this type: one was PageRank, a measure of overall topic-independent site popularity, and the other was better use of anchor text, the words people write when linking to other pages.
I hope that for search, "pages" go away and what we have is more structured information. I would say that there is a lot of work being done here, but it seems to be gigantic committee-style work... maybe an open source search engine can provide a straightforward way or standard for information providers to readily adopt that will really move it along, instead of simply trying to adapt to all the crud that is already out there. The engine would have a model of the information universe, like a weighted tag graph, which is editable in a wiki style, to which information authors could link in their pages, like HTML meta tags. That's a fairly simple and easy scheme to start with.

Re:Two approaches to the search problem (1)

SatanicPuppy (611928) | more than 7 years ago | (#18014568)

That will never work. Understanding natural language is hugely difficult for people, and mind-bogglingly difficult for computers. You have to account for the fact that meaning is contextual, meaning is not fixed, and that people make mistakes in their use of language.

There is a whole branch of philosophy dedicated to theory of language, and I'd recommend books, but they're by and large so hopelessly abstruse that it would be little more than intellectual hazing if you don't already have pretty solid knowledge of the subject.

Look at computer translation software...Even special purpose driven, it produces extremely clunky translations.

What we really need is not a search engine that can figure out what we want, but instead a search engine that returns extremely accurate results for what we tell it we want. That puts the linguistic burden on us, and we're much better equipped to handle it.

Re:Two approaches to the search problem (1)

MindStalker (22827) | more than 7 years ago | (#18014648)

You know, if what they are looking for is "what the average community prefers", why don't they just implement a decent search engine - even better if they can just use Google's - and then record clicks? If a result gets more clicks, move it farther toward the top. The only problem with this is that it puts up a barrier to new entrants. So maybe add some randomization to put non-favorites towards the top, but make sure the top 8 or so get on the front page no matter what.
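
A toy version of the click-plus-randomization idea above, assuming a base_results list already produced by an existing engine; the slot counts are arbitrary illustrative choices:

    import random
    from collections import Counter

    click_counts = Counter()  # url -> number of times users clicked it

    def record_click(url):
        click_counts[url] += 1

    def rerank(base_results, front_page_size=8, explore_slots=2):
        """Re-rank an existing engine's results by observed clicks, but hold
        back a couple of front-page slots for randomly chosen newcomers so
        new pages are not permanently locked out."""
        by_clicks = sorted(base_results, key=lambda url: click_counts[url],
                           reverse=True)
        front = by_clicks[:front_page_size - explore_slots]
        rest = [u for u in by_clicks if u not in front]
        newcomers = random.sample(rest, min(explore_slots, len(rest)))
        return front + newcomers + [u for u in rest if u not in newcomers]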

Re:Two approaches to the search problem (1)

currivan (654314) | more than 7 years ago | (#18015172)

We tried using clickthroughs, but the data is very noisy. Users often don't know if a page is useful until they go to it, and they often open many pages from the same list of results. The best application turned out to be "how often is this link the last one people click on", but that assumes they're using the back button rather than opening several links in tabs.

You also don't know if the user finds what they really want linked off of a result page, or if they give up. The skewing of clicks toward the top results is very strong, but it seems to vary by the type of query. To even get this data, you need redirect links, and the fact Google doesn't use them tells me they didn't find it useful either.

Re:Two approaches to the search problem (1)

MindStalker (22827) | more than 7 years ago | (#18015398)

Hmm, good point. If only there were a way to force a user to say "yeah, this is what I wanted"... but alas. If you implemented search as a sidebar or something more integrated into the browser, this would probably be easy.

Bootvis' Theorem (2)

Bootvis (913169) | more than 7 years ago | (#18013636)

Bootvis' Theorem states:
It is not possible to create an algorithm that takes as input any dataset and a search query and outputs the results 'best' matching the query.

I have a truly marvellous proof but ...

Efficient Labour markets (1)

Dorceon (928997) | more than 7 years ago | (#18013676)

I like how the post talks about making search an efficient market, but completely discounts another important market that is already a lot closer to efficient: labour. If you're good enough to write an ungameable search engine, you're going to have substantial job offers from at least Google, Yahoo, and Microsoft.

Re:Efficient Labour markets (1)

AmberBlackCat (829689) | more than 7 years ago | (#18014172)

I think any search algorithm is going to be as un-gameable as DRM is uncrackable. The goal would be to convince the big corporations that the algorithm is un-gameable long enough to collect the money.

I had this idea a while ago (1)

Paulrothrock (685079) | more than 7 years ago | (#18013678)

An open source search engine would be a good idea, except that the index would have to be hosted somewhere and indexed somehow.

I'd gladly donate some spare processor cycles, hard drive space, and bandwidth to an open source search engine like a BOINC project.

Re:I had this idea a while ago (1)

rrohbeck (944847) | more than 7 years ago | (#18014498)

>I'd gladly donate some spare processor cycles, hard drive space, and bandwidth

If it's along the lines of P2P apps, DHTs [wikipedia.org], etc., this could really work.
Kad [wikipedia.org] already does a pretty good job of searching. Use something like it to point to Internet content, and use swarming for downloads... There's a Firefox extension waiting to happen.

In open Source (0)

Anonymous Coward | more than 7 years ago | (#18013692)

In Open Source, Engine searches you!

Already-existing grassroots google (3, Informative)

Bananatree3 (872975) | more than 7 years ago | (#18013722)

There already exists a distributed, open source engine which has been around a while, called Majestic 12 [majestic12.co.uk] . It uses a client-based crawler, which crawls the web for hundreds of millions of URLs and then sends the data back to central servers. The servers then compile the data and use user-based searching algorithms to perform the search. While the algorithms are still very much in alpha, it is still a very noteworthy project. Also, its URL base is currently around 30-35 billion URLs.

Re:Already-existing grassroots google (1)

TodMinuit (1026042) | more than 7 years ago | (#18014590)

But Majestic 12 sucks. Mozdex [mozdex.com] , on the other hand, is built on open source technology [mozdex.com] . It's not the best search engine out there, but it's about a billion times better than Majestic 12.

Systems by their nature are always "gamed" (1, Insightful)

Anonymous Coward | more than 7 years ago | (#18013736)

In general I see the term "gamed" as subjective. When outcomes match an individual's expectations, they see the system as working; when they disagree with the outcome, they call it gaming.

As long as people are the engine behind this "pure meritocracy," the system will be gamed. I find the Google results to be good enough that I am not looking for an alternative. Google provides the basis for research. If you want the best deals, you still have to shop around and do the due diligence. If you want to do research, you still have to follow up on the citations yourself. Anyone who suspends their own reason to defer exclusively to the magic algorithm will get gamed.

Re:Systems by their nature are always "gamed" (4, Interesting)

Kelson (129150) | more than 7 years ago | (#18013864)

In general I see the term "gamed" as subjective. When outcomes match an individual's expectations, they see the system as working; when they disagree with the outcome, they call it gaming.

Very true. For an example, look no further than the subset of SEO that sees no difference between setting up hundreds of automatically-generated pages linking to a site for the sole purpose of increasing search rankings, and hundreds of individual people independently writing about (and linking to) a site. I've actually seen people in the linkfarm business claim that they're not doing anything different from bloggers.

This is basically equivalent to saying that there's no difference between one person writing 10 letters to a politician under assumed names, and 10 people writing their own letters.

fair and un-gamable rankings <> meritocracy (2, Insightful)

Anonymous Coward | more than 7 years ago | (#18013750)

The use of a ranking system (even a fair and un-gamable one) is biased against a true meritocracy. If I'm looking for apple pie recipes, I (and likely anyone else looking for apple pie recipes) will pluck one from the top-ranked choices.

This "top-10-cherry-picking" makes it highly unlikely that the possibly-superior newcomers will be seen. You have to be seen in order to be ranked up.

It's only through "outside" mention (blogs, word-of-mouth, etc.) that newcomers have much of a chance of being looked at.

Re:fair and un-gamable rankings <> meritocracy (1)

Tony Hoyle (11698) | more than 7 years ago | (#18015230)

That's why you have decay in rankings. If nobody keeps voting for a page, over time the number of votes attributed to it decays until it goes back to zero - this puts new pages on the same footing as older ones.
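
A minimal sketch of that decay, assuming exponential decay of vote weight with an arbitrary 30-day half-life (the schedule is an illustrative choice, not anything proposed in the article):

    import math
    import time

    HALF_LIFE = 30 * 24 * 3600  # votes lose half their weight every 30 days (arbitrary)

    def decayed_score(vote_timestamps, now=None):
        """Score a page from the timestamps of its up-votes, with exponential
        decay, so a page nobody votes for any more drifts back toward zero."""
        now = time.time() if now is None else now
        return sum(math.exp(-math.log(2) * (now - t) / HALF_LIFE)
                   for t in vote_timestamps)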

Which Community? (2, Insightful)

RAMMS+EIN (578166) | more than 7 years ago | (#18013852)

``First, though, consider the benefits that such a search engine could bring, both to content consumers and content providers, if it really did return results sorted according to average community preferences.''

It's also interesting to ask "which community?" There is a small number of categories of things that define some high percentage of the things I search for. I am pretty sure there is a very small intersection of those categories with the categories of things the world's population as a whole searches for. There are also differences based on location and language. In short, my preferences are almost certainly very different from the average of all searchers.

On the other hand, there are definitely groups of searchers whose preferences coincide with mine. For example, people who are involved in open source development, *nix users, computer scientists, environmentalists, English speakers, and people in the Netherlands probably have preferences that largely overlap with mine.

This suggests to me that some sort of machine learning might be used, where the system guesses your search preferences based on what links you have followed in the past, and what links other people have followed in the past. In other words, the system (implicitly) tries to determine which communities you are part of, and gives you results that are preferred by members of these communities.

Someone please read this (0)

Anonymous Coward | more than 7 years ago | (#18013856)

For three years now I have been waiting for someone to develop a new kind of search engine that I was sure was so obvious that it would happen any moment. I have made attempts to do it myself because the program is trivial to write, but I just don't have the time (I've got a company to run). So, in the hope that someone will read this and do what I propose - so they can change the world and get rich and I will finally get the information I need - here is the idea.

(1) Scrape bookmarking sites and store lists of bookmarks associated with a user identifier (del.icio.us, Digg, Reddit, etc.).

(2) Take your own bookmarks and find other users with similar bookmarks (the more bookmarks they have in common with you, the more similar they are to you).

(3) Print out the bookmarks of similar people that you don't already have.

(4) Profit!

This is so obviously more effective than a search engine. The information will begin to flow naturally to where it is needed spawning the next big information revolution. And information accelerates all other forms of technology.

The singularity is nigh!!!!!!!!!!
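
A minimal sketch of steps (2) and (3) above, using simple bookmark-overlap counting to find similar users; the scraping of step (1) and the data structures are assumed to exist already:

    def similar_users(my_bookmarks, others, top_n=10):
        """Rank other users by how many bookmarks they share with me.
        my_bookmarks is a set of URLs; others maps a user id to that
        user's set of bookmarked URLs (e.g. scraped from bookmarking sites)."""
        scored = sorted(others.items(),
                        key=lambda item: len(my_bookmarks & item[1]),
                        reverse=True)
        return [user for user, _ in scored[:top_n]]

    def recommendations(my_bookmarks, others):
        """Collect the bookmarks of similar users that I don't already have."""
        recs = set()
        for user in similar_users(my_bookmarks, others):
            recs |= others[user] - my_bookmarks
        return recs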

Sounds really like peer ranking .. (1)

roguegramma (982660) | more than 7 years ago | (#18013948)

Sounds like the algorithm he really wants to talk about is the one Highlander calls a "peer ranking system" on his page at Everything2.com: http://www.everything2.com/index.pl?node_id=1521712 [everything2.com]

I somehow believe that Google is quite aware of this algorithm and has already implemented it.

Reminds me of Indiana Jones... (1)

gd23ka (324741) | more than 7 years ago | (#18013976)

If you know which Indiana Jones movie this scene is from, tell me. I remember Jones facing off against some huge Samurai with swords in the middle of a marketplace. The Samurai twirls his swords and delivers one hell of an impressive martial arts show before challenging Jones to attack. Jones instead just shrugs, draws his Colt and shoots the Samurai point blank.

With this analogy in mind, it's easy for me to draw my Colt and shoot this long missive down with one single argument: a Wikipedia-like process for a search engine means administrators decide what is worthy of inclusion in the index and what is not. Administrators are voted in by peers, so in order to become one he or she must consistently demonstrate the ruling orthodox attitude. So in the end we would get a politically correct search engine much worse than Google. Connect that to the Mother Gaia complex deep in your ecosocialist asshole, Wales.

Check out "Administrators" like this ndividual here http://www.google.co.za/search?hl=en&q=SlimVirgin& btnG=Search&meta= [google.co.za]

Re:Reminds me of Indiana Jones... (1)

micromuncher (171881) | more than 7 years ago | (#18014948)

I think you're confused with Kill Bill. In Raiders of the Lost Ark, Indy's first encounter was with an Arab Egyptian and a scimitar. In Temple of Doom, it was a Sikh with a khunda.

Let a million algorithms bloom (2, Interesting)

DysenteryInTheRanks (902824) | more than 7 years ago | (#18013980)

He's thinking about this all wrong.

A true open source search engine would let anyone roll their own algorithm. Each algorithm would be a sort of "plug in."

The index would be the shared, open source part, collaboratively crawled (via PC software or browser plugin) by everyone who elects to participate.

Algorithms would either work on the index after the fact, or, if they need access to the indexing process itself, would be part of a series of plugins run on the full HTML of each page.

The index itself would have an open API, so people could build their own front end search websites.

Trying to design the right algorithm up front is a premature optimization. I have no interest in helping Jimmy Wales become the next Sergey Brin. But I *would* participate in something that gives _me_ a shot, however distant, at founding the next Google, minus the massive spider farm.

The Quantum Bookkeepers (2, Interesting)

Jimekai (938123) | more than 7 years ago | (#18014010)

Such an auto meritocracy could truly work if the self-pruning clustering algorithm created semantically-bound transactions in a feedback system that was designed at the outset to rival capitalism. I know that Google could be tweaked to do this, were it not for capitalist noses being unable to pick up on the scent of profit.

Jimmy knows it's gonna suck (1)

El Mariachi 94 (1064198) | more than 7 years ago | (#18014094)

During the talk, Jimmy acknowledged that the beta of the engine is gonna suck and the media is gonna shit all over it. When the beta is released, they're gonna type in bold letters "We know this sucks" to curb some of that negative karma from the press. At least he's realistic about the project. Check out the video of Jimmy's NYU talk here: http://video.google.com/videoplay?docid=-7416968092951113589 [google.com] or download the MP3 here: http://homepages.nyu.edu/~gd586/Jimmy%20Wales%20-%20NYU%20-%201-31-07.mp3 [nyu.edu]

User-based ranking is patented by IBM (3, Interesting)

Animats (122034) | more than 7 years ago | (#18014102)

Rating by asking random users has been tried. At IBM. See United States Patent 7,080,064, Sundaresan July 18, 2006, "System and method for integrating on-line user ratings of businesses with search engines". Sundaresan has several patents related to schemes for asking users for ratings and using that info to adjust search rankings.

The basic trouble with this approach is that, if you ask random users to rate random sites, they don't have enough time, energy, or effort to do a good job of it. If you ask self-selected users of the sites, the system can be gamed.

This sort of thing only works where the set of things to rate is small compared to the interested user population. So it's great for movies, marginal for restaurants, and poor for websites generally.

Google (1)

RAMMS+EIN (578166) | more than 7 years ago | (#18014106)

I sometimes think that we already know the way to do searching - and Google has a patent on it.

Couldn't be more wrong (3, Insightful)

Bluesman (104513) | more than 7 years ago | (#18014120)

>The extra $50 that the user pays is the user's loss, but it's also the hosting company's gain.
>If we consider costs and benefits across all parties, the two cancel out.
>The world as a whole is not poorer because someone overpaid for hosting.

And thus the broken window fallacy continues...

Wealth is created through increased efficiency. A decrease in efficiency is a decrease in wealth, regardless of who benefits.

By the "world is not poorer" logic, we might as well all ride horses, since we'd be paying oat producers and horseshoe manufacturers instead of the auto industry, so the world as a whole wouldn't be poorer.

Paying more for inefficient hosting takes money away from more efficient uses.

Open Source hasn't even led (0)

Anonymous Coward | more than 7 years ago | (#18014246)

to a meritocratic development model. The founding principle of Open Source is "If I make it, I make it my way, regardless of what mere users want." I'm not passing judgements on this principle, but look at the devs behind Gaim for instance -- they delete feature requests they don't like. They don't deny them. They *delete* them. Same with Mozilla devs. How is this going to change just because it's a website?

Meritocratic Search Doesn't Make Sense (2, Insightful)

logicnazi (169418) | more than 7 years ago | (#18014442)

The author of this piece talks about meritocratic search as if it were some real, fixed ordering of the search results that we just have to be smart enough to uncover. This is anything but the case. For instance, is the recipe for apple pie that makes better-tasting pie but is too complicated for the inexperienced chef better or worse than the one which is extremely easy to follow but isn't as good? When talking about pie this sort of issue might not be a big deal, but what happens when we start talking about things like climate science? Is the best result some sort of environmental activist's site, a mass media story, a global warming skeptic's site, or the actual scientific results that are too technical for most of the public to understand?

Sure, Wikipedia makes these compromises quite well, but the idea of content-neutral encyclopedia entries provides a well-defined goal. The second that we get to a search engine we can no longer cling to content neutrality, because we must choose how to rank the advocacy sites on both sides of the spectrum. Unlike Wikipedia, where one can neutrally remark that some people believe X and others Y, in a search engine the community has to decide whether "unwanted pregnancy" is going to take someone to the Planned Parenthood site, an abortion clinic, or an anti-abortion site.

In short there is no notion of the meritocratic search order, there are just tradeoffs between different sorts of searchers. Google is already navigating this maze of tradeoffs, including looking at what users like, so I fail to see the argument that a community search will obviously make better tradeoffs than Google.

In fact anyone who has spent much time on the Internet realizes that every community tends to develop its own prejudices and biases pushing away those who disagree and attracting those who agree. Slashdot attracts open source zealots and repels the technically inept. Whatever community develops this search engine will have its own biases which will discourage participation by those who don't agree. This is just human nature.

I might well enjoy the results returned by such a search, since I suspect the participants are likely to be technically sophisticated nerds and others who have views similar to mine. However, it seems doubtful that they will provide results that people very different from those who run the search engine will appreciate.

Besides, this whole project just smells hokey to me. It sounds like Wales is drunk on his success with wikipedia and advocating it as THE solution to any problem. Problems are pragmatic things and they shouldn't be solved by ideologies.

Broken window fallacy (1)

numberthre (1044498) | more than 7 years ago | (#18014454)

The world as a whole is not poorer because someone overpaid for hosting.

Erm, yes it is. That difference in price could have been used to produce value. If you believe that the world is in fact not poorer, then you believe that the point of an economy is just to shuffle money around.

See the broken window fallacy: http://en.wikipedia.org/wiki/Broken_window_fallacy [wikipedia.org]

Our answer for search - SiteTruth (3, Insightful)

Animats (122034) | more than 7 years ago | (#18014554)

We hadn't planned to announce this quite yet, but this is a good opportunity.

We have a new answer to search - SiteTruth. [sitetruth.com] It's working, but not yet open to the public.

Other search engines rate businesses based on some measure of popularity - incoming links or user ratings. SiteTruth rates businesses for legitimacy.

What determines legitimacy? The sources anti-fraud investigators tell you to check, but nobody ever does. Corporate registrations. Business licenses. Better Business Bureau reports. The contents of SSL certificates. Business addresses. Business credit ratings. Credit card processors. All that information is available. It's a data-mining problem, and we've solved it. The process is entirely automated.

Most of the phony web sites, doorway pages, and other junk on the web have no identifiable business behind them. Try to find out who really owns them, and you can't. When we can't, we downgrade their ranking. With SiteTruth, you can create all the phony web sites you want, but they'll be nowhere near the top of any search result.

Creating a phony company, or stealing the identity of another company, is possible, but it's difficult, expensive and involves committing felonies. Thus, SiteTruth cannot be "gamed" without committing a felony. This weeds out most of the phonies.

SiteTruth only rates "commercial" sites. If you're not selling anything or advertising anything, SiteTruth gives you a neutral or blank rating. If you're engaged in commerce, you can't be anonymous. In many jurisdictions, it's a criminal offense to run a business without disclosing who's behind it. That's the key to SiteTruth.
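
SiteTruth's actual scoring isn't public, but the general idea of combining independently verifiable business signals into one rating might look roughly like the sketch below; every signal name, weight, and threshold here is invented for illustration and is not SiteTruth's method:

    # Hypothetical legitimacy score from verifiable business signals.
    # Signal names and weights are invented for illustration only.
    SIGNAL_WEIGHTS = {
        "corporate_registration_current": 3,
        "business_license_found": 2,
        "ssl_cert_matches_business_name": 2,
        "physical_address_verified": 2,
        "bbb_record_clean": 1,
        "business_credit_rating_found": 1,
    }

    def legitimacy_score(signals):
        """signals maps signal name -> bool for one site; higher = more legitimate."""
        return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

    def should_downgrade(signals, is_commercial):
        """Commercial sites with no identifiable business behind them get
        downgraded; non-commercial sites are left with a neutral rating."""
        return is_commercial and legitimacy_score(signals) == 0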

Our tag line: "SiteTruth - Know who you're dealing with."

The site will open to the public in a few months. Meanwhile, we're starting outreach to the search engine optimization community to get them ready for SiteTruth. We want all legitimate sites to get the highest rating to which they're entitled. An expired corporate registration or seal of trust hurts your SiteTruth ranking, so we want to remind people to get their paperwork up to date.

The patent is pending.

Re:Our answer for search - SiteTruth (1)

micromuncher (171881) | more than 7 years ago | (#18014862)

The crappy front page makes it look like a scam.

Re:Our answer for search - SiteTruth (1)

Animats (122034) | more than 7 years ago | (#18014998)

The crappy front page makes it look like a scam.

To some extent, that page was made to discourage unwanted attention during the early phases. But it's all real.

State of trust networks (1)

numberthre (1044498) | more than 7 years ago | (#18014814)

What is the current state of trust networks? Many problems (spam, SEO, 'gaming') seem to hinge on the absence of a trust certificate for ratings. Further, the value of ratings is subjective. I shouldn't have to trust Wikia-selected users with their ratings. I myself should choose users whose ratings I value for a particular topic.

Is there some fundamental flaw in current trust network algorithms that prevent them from being implemented?

Scale Issue - This is unlikely to work... (1)

Phrogman (80473) | more than 7 years ago | (#18015018)

You have a major problem with the scale of providing search results. What you are proposing here is that individual users *rate* websites in some manner according to their merit. Leaving aside the fact that users are not inherently qualified to rate websites, the fact that a given website may have great merit for a given subject but not others, and the fact that people will actively find a way to "game" this system just as they have all the others, how big exactly is the internet? Let's assume there are 10 million useful websites on the web (anally extracting a number). How exactly do you expect the users of this new system to manually rate all of those websites? What is their individual reward for doing so? Even spending 5 seconds to select a number between 1 and 10 for 100 websites on a regular basis is going to occupy considerable time for no apparent reward beyond "helping the system". Most users are probably not that philanthropic; they will simply want to gain benefits from the effort of others. In other words, the system will depend on those individuals willing to participate and give up their free time. Now, granted that you get all this participation to occur, why would the results be any different from or better than those of Google or the other major search engines?

In the original thesis paper that gave rise to the Google algorithm, they essentially worked off a couple of simple concepts - that are no doubt very difficult to implement. They called the key factors "Hubs" and "Authorities". A Hub is a page that contains many hyper-links to other pages. An Authority is a page that is linked to from many other pages. Google's algorithm not only rates each page according to how hub-like or authority-like it is, but it also increases the ranking of the page according to the ranking score of the pages it links to, and more importantly the pages that link to it. In other words if lots of other pages think your page is important and link to it, you gain a higher ranking, if you link to lots of high ranking pages that too is weighed to some degree. As well the actual contents of your page are rated according to their relevance to the search query a user enters at google. Now there are of course thousands of other rules that have been added to avoid all the various tricks that arsehat website developers have attempted to use to fool the system and garner a higher ranking, but the end result is a pretty efficient and accurate search system - probably the best around given google's popularity. The key thing being that of course all this indexing can be done automatically.
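
Strictly speaking, the hubs-and-authorities idea described here is Kleinberg's HITS algorithm rather than the PageRank calculation in Brin and Page's paper, but the iterative spirit is similar. A bare-bones HITS sketch over a link graph, purely for illustration:

    def hits(links, iterations=50):
        """links maps each page to the set of pages it links to.
        Returns (hub, authority) score dicts after a fixed number of iterations."""
        pages = set(links) | {p for targets in links.values() for p in targets}
        hub = {p: 1.0 for p in pages}
        auth = {p: 1.0 for p in pages}
        for _ in range(iterations):
            # A good authority is linked to by good hubs...
            auth = {p: sum(hub[q] for q in pages if p in links.get(q, ()))
                    for p in pages}
            # ...and a good hub links to good authorities.
            hub = {p: sum(auth[q] for q in links.get(p, ())) for p in pages}
            # Normalise so the scores don't blow up.
            a_norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
            h_norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
            auth = {p: v / a_norm for p, v in auth.items()}
            hub = {p: v / h_norm for p, v in hub.items()}
        return hub, auth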

How exactly is any manual system going to improve on that? How can it hope to keep up with the billions of webpages out there that are constantly being updated and improved, deleted, moved, etc.? The manual system can only rate individual websites, not pages - that simply wouldn't be practical. In an ideal world where one website dealt exclusively with one subject and one subject only, this might work, but most websites have dozens if not hundreds of different informational elements that might be of interest to a user, and there is no manual way to determine that. In other words, any such meritocratic system is still going to have to fall back on indexing the individual pages using a spider, and the only real result of the meritocratic system is to determine the user's perspective on how valuable the information on that site is - and that's entirely subjective. Google is more objective by far, and any approach to a search engine pretty much has to be as objective as possible.

I don't honestly think any manually driven system has a hope of keeping up with the web. There is simply far too much data being generated, edited, deleted and moved daily for any such effort to offer any real economy of scale.

Google Toolbar? (1)

lanfor (644610) | more than 7 years ago | (#18015208)

Isn't Google doing something very similar with their Google Toolbar? When I search using Google, they record which results/websites I opened and they can improve ranking of those websites. Of course this is biased toward getting feedback for only the top 10 results. Nevertheless I don't see much difference in the proposed algorithms.

One could also ask if bots would be a problem for gaming the Google Toolbar system. If you can game a WikiSearch engine, then you should also be able to game the Google Toolbar, right? Unless those projects do some sort of identity verification, then they will be always exploitable by bots (even with the proposed, random algorithm: imagine WikiSearch with 1,000,000 bots and 1,000 active real users - randomization wouldn't help here, right?)

Lukasz
http://www.hikipedia.com/ [hikipedia.com] - a free database of hiking trails built by the hiking community

Open Directory (1)

bcrowell (177657) | more than 7 years ago | (#18015336)

Sounds like they're reinventing Open Directory [dmoz.org] , which has been doing just fine for many years. I believe Google actually uses Open Directory as one of its seeds for the PageRank algorithm. The Wikimedia foundation keeps on starting up projects, few of which ever become very successful. Wikibooks, for instance, has never achieved its original, grandiose goals, and it's been struggling for years now without making much headway. Its only big area of success was gaming guides (not the college textbooks it was originally supposed to create), but then they deleted all the gaming guides. I can count the high-quality, complete wikibooks on my thumbs. How about getting rid of some of the failed projects before proposing more?

Mathematically Impossible (1)

Skewray (896393) | more than 7 years ago | (#18015444)

Any system which depends on user input to rank pages is fundamentally a voting system. It has been proven (and discussed on Slashdot) that any voting system with more than two candidates can be gamed. The entire project sounds like a waste of time.

summary + while we're wishing, I'd like a pony (1)

sdedeo (683762) | more than 7 years ago | (#18015472)

I read this essay (long on words, short on content.) The summary:

"I have a new idea for a search engine. You should be allowed to suggest a modification to the search results. Your modification will be anonymously reviewed, Slashdot-moderation style, by a small, random subset of search engine users. It's nice to learn that the algorithm solves a problem that does not exist with contemporary link-network algorithms, but does with a hypothetical bad idea (the sockpuppetry issue.)"

Now can we talk about the idea? It's an interesting suggestion, actually, despite my snark. It does indeed bust out of the box a little, by leapfrogging google's focus on an arms race of increasingly arcane (one presumes) link-network analysis.

My feeling, though, is that it won't work, the issue being that it eliminates one serious advantage of the Google algorithm, which is that it takes work to "vote". Put aside the question of linkspam for a moment (I understand it's an issue, but if my search experience is representative, Google is winning that arms race). In order to affect search results, a good-faith user has to create links to the pages he considers useful for his readers. It's reasonable to expect that such a linker has knowledge greater than the average member of the community.

Indeed, the whole point is that the average member of the community is going to a search engine in order to find out something he doesn't know already. The advantage google has is that it has figured out reasonably good algorithms that take advantage of this "hidden knowledge." Websites gain credibility, and accrue links; in proportion to this credibility, their own opinions count more. It's important to note that google is not a "one man one vote" system -- it is indeed a meritocracy to begin with.
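
That "credibility in proportion to credibility" point is essentially the PageRank recursion. A bare-bones power-iteration sketch, for illustration only (real implementations handle dangling pages and much more):

    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links to. A page's score
        is split among the pages it links to, so a link from a high-scoring page
        counts for more than a link from an obscure one."""
        pages = set(links) | {p for targets in links.values() for p in targets}
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, targets in links.items():
                if not targets:
                    continue
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share
            rank = new_rank
        return rank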