
My Crowdsourced Follow-Up About Crowdsourcing

CmdrTaco posted more than 3 years ago | from the department-of-redundancy dept.


Slashdot regular contributor Bennett Haselton writes "In my last article, I proposed an algorithm that Facebook could use to handle abuse complaints, which would make it difficult for co-ordinated mobs to get unpopular content removed by filing complaints all at once. I offered a total of $100 for the best reader suggestions on how to improve the idea, or why they thought it wouldn't work. Read their suggestions and decide what value I got for my infotainment dollar."

In my last article, I proposed an algorithm that Facebook could use to handle abuse complaints, which would scale to a large number of users while also making it difficult for co-ordinated mobs to get unpopular content removed by filing complaints all at once. I offered a total of $100 to readers sending in the best suggestions for improvements, or alternative algorithms, or fatal flaws in the whole idea that would require starting from scratch. As the suggestions were coming in, Facebook obligingly kept the issue in the news by removing a photo of two men kissing from a user's profile, sending the user a form letter stating that they had violated Facebook's prohibition on "nudity, or any kind of graphic or sexually suggestive content". (It would be a cheap shot to say that a photo of a man and a woman kissing probably would not have been removed; in truth, probably just about anything will get removed from Facebook automatically if enough users file complaints against it, which is the problem for unpopular but legal content.)

How would these complaints have been handled under my proposed algorithm? The gist of my idea was that any users could sign up to be voluntary reviewers of "abuse complaints" filed against public content on Facebook. Once Facebook had built up a roster of tens of thousands of reviewers, new abuse complaints would be handled as follows. When a complaint (or some threshold of complaints) is filed against a piece of content, a random group of, say, 100 users could be selected from the entire population of eligible reviewers, and Facebook would send them a request to "vote" on whether that content violated the Terms of Service. If the number of "Yes" votes exceeded some threshold, the content would be removed (or at least, put in a high-priority queue for a Facebook employee to determine if the content really did warrant removal). The main benefit of this algorithm is that it would be much harder for co-ordinated mobs to "game the system", because in order to swing the vote, they would have to comprise a significant fraction of the 100 randomly selected reviewers, and to achieve that, the mob members would have to comprise a significant fraction of the entire reviewer population. This would be prohibitively difficult if hundreds of thousands of users signed up as content reviewers.
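
The mechanism above can be sketched in a few lines. Everything here is illustrative: the function names, the jury size of 100, and the 50% removal threshold are assumptions for the sketch, not anything Facebook has implemented.

```python
import random

def handle_complaint(content_id, reviewer_pool, get_vote,
                     jury_size=100, removal_threshold=0.5):
    """Select a random jury from the reviewer pool and tally votes.

    get_vote(reviewer, content_id) -> True if that reviewer judges the
    content to violate the Terms of Service. Returns True when the jury
    majority (per removal_threshold) says "remove" -- in practice this
    might only escalate the content to a human-review queue.
    """
    jury = random.sample(reviewer_pool, jury_size)
    yes_votes = sum(1 for r in jury if get_vote(r, content_id))
    return yes_votes / jury_size > removal_threshold
```

Because the jury is drawn uniformly at random from the whole pool, a mob can only swing the vote by being a large fraction of the pool itself, which is the point of the design.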

All of the emails I received -- not just "almost" all of them, but really all of them -- contained some insightful suggestions worth mentioning, although there was some duplication between the ideas. If you didn't see the last article, you might consider it worthwhile to stop reading before proceeding further, and mull over the description of the algorithm above to see how you would improve it. Then read the suggestions that came in to see how well your ideas matched up with the submissions I received.

The upshot is that nobody found what I believed to be completely fatal flaws, although one reader brought something to my attention that might cause trouble for the algorithm after a few more years. Beyond that, reader suggestions could be divided essentially into two categories. The first category of suggestions related to ensuring that the basic premise would actually work -- that the votes cast by a random sample would be representative of general user opinion, and could not be gamed by a coordinated mob or a very resourceful cabal. The second category of suggestions started by assuming that the voting system would work, and suggested other features that could be added to the algorithm -- or, in one case, an entire alternative algorithm to replace it.

To begin with the attacks and counter-attacks against the basic voting algorithm: Walter Freeman and Haydn Huntley independently suggested monitoring for users who vote in a small minority in a significant portion of vote-offs, and reducing their influence in future votes (by either not inviting them to vote on future juries, or sending them the future invites but then ignoring their votes anyway). The assumption is that if a user is frequently among the 10% who vote "Yes [this is abuse]" when the other 90% of respondents are voting "No [this is not abuse]", or vice versa, then that user is voting randomly, or their point of view is so skewed that their votes could safely be ignored even if they are sincere. I like the idea of eliminating deadweight voters, but this might also incentivize voters to vote the way they think the crowd would vote, instead of voting their true opinions -- for example, if they were called to vote on an anti-Obama page that showed Obama wearing a Hitler mustache. Some people's knee-jerk reaction would be to call the page "racist" or "hate speech" or "a threat of violence", even though comparing Obama to Hitler is not, strictly speaking, any of those things. If I were voting my honest opinion, I would count that page as "not abuse". But if I knew that I were voting along with dozens of other people, and my future voting rights might be revoked if I didn't vote with the majority, I might be tempted to vote "abuse".
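
One way the minority-vote monitoring could look, as a sketch. The cutoffs here (what counts as a "small minority", how often is too often, and the minimum vote count before judging anyone) are assumptions I've picked for illustration, not values proposed by the readers.

```python
from collections import defaultdict

MINORITY_CUTOFF = 0.10   # a "minority vote" sides with under 10% of the jury
FLAG_RATE = 0.5          # flag voters in the minority on over half their votes
MIN_VOTES = 20           # don't judge anyone on too small a sample

def flag_outlier_voters(vote_history):
    """vote_history: iterable of (voter_id, vote, yes_fraction) records,
    where vote is that voter's Yes/No and yes_fraction is the fraction of
    the whole jury that voted Yes on that case.

    Returns the set of voter ids who are so consistently in a tiny
    minority that their future votes might be down-weighted or ignored.
    """
    minority = defaultdict(int)
    total = defaultdict(int)
    for voter, vote, yes_frac in vote_history:
        total[voter] += 1
        if (vote and yes_frac < MINORITY_CUTOFF) or \
           (not vote and yes_frac > 1 - MINORITY_CUTOFF):
            minority[voter] += 1
    return {v for v in total
            if total[v] >= MIN_VOTES and minority[v] / total[v] > FLAG_RATE}
```

Note that this is exactly the mechanism that creates the conformity pressure described above: a voter who knows this filter exists has an incentive to vote with the expected majority rather than honestly.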

Similarly, Walter Freeman and reader "mjrosenbaum" both suggested setting deliberate traps for deadweight users, by creating artificial cases where the answer was pre-determined to be obviously yes or obviously no, calling for votes, and revoking privileges for users who gave the wrong answer. This would eliminate the problem of borderline cases like the one above, where smart users think, "I suspect the majority will give the wrong answer, so I'm just going to go with the crowd, to keep my voting rights." On the other hand, it's more labor for Facebook to create the cases, and any public content authored by them -- especially content that is deliberately crafted to be "questionable" -- would probably have to run a gauntlet of being reviewed by lawyers and PR mavens before being released. My suggestion would be to use these artificial scenarios periodically to make sure that the system is working (i.e. that juries are giving the right answers), but it would be too inefficient to use it to try and weed out problem voters.

In fact, these and several other suggestions fell into a category of ideas that could possibly improve the efficiency of the algorithm by reducing voter shenanigans (where "efficient" means that fewer users have to be invited to each vote-off in order to get statistically valid results), but might not be worth the effort. As long as most of the votes cast by users are sane and sincere, all you have to do is invite enough voters to a vote-off, and the majority will still get the correct answer most of the time, even if you have problem voters in the system. That's the simplest possible algorithm. The more complicated an algorithm you come up with, the more likely that Facebook (or any other site you recommended this to) would just throw up their hands and say, "Sounds too hard", and leave the idea dead in the water. That's why I want to keep the algorithm as lean and tight as possible.

So it's not quite like designing an algorithm for your own use, where you could feel free to introduce more complications as long as you're responsible for keeping track of them. In recommending an algorithm for widespread adoption, the basic form of the algorithm should be as simple as possible. In the case of the voting algorithm some interesting wrinkles may come up if you don't eliminate problem voters, but this is not fatal to the idea as long as it's still true that, given a large enough random sample of voters, the majority will tend to vote the correct answer.

For example, James Renken pointed out that as voters dropped out due to boredom, the remaining users casting votes would tend to be either (1) weirdos who just wanted to view questionable material; or (2) prudes bent on removing as much material from Facebook as possible. But that's OK, as long as those two groups vote sanely enough (or as long as there are enough sane users outside those two groups) that material which does violate the TOS tends to get more "Yes [this is abuse]" votes than material that doesn't. Then all you have to do is make the jury size large enough to make a statistically significant distinction between those two cases.
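
"Large enough" can be made precise with the exact binomial tail. The sketch below (my own illustration, not from the article's submissions) finds the smallest odd jury whose majority vote is wrong with probability at most 1%, assuming each juror independently votes "correctly" with some probability p.

```python
import math

def misclassification_prob(n, p):
    """Probability that a jury of n voters, each independently voting the
    correct way with probability p > 0.5, fails to produce a correct
    majority (exact binomial tail: at most n // 2 correct votes)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(0, n // 2 + 1))

def jury_size_for(p, max_error=0.01):
    """Smallest odd jury size whose majority errs with probability at
    most max_error. Odd sizes avoid ties."""
    n = 1
    while misclassification_prob(n, p) > max_error:
        n += 2
    return n
```

Even with jurors who are individually right only 70% of the time, a jury of a few dozen drives the error rate below 1%, which is why the simple majority vote tolerates a fair number of weirdos and prudes.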

Similarly, Joshua Megerman suggested surveying users for their religious, political, and other beliefs when they sign up as volunteer reviewers (they could of course decline the survey). This makes it possible, insofar as people answer truthfully, to make sure that a jury is composed of a group with diverse belief sets. (On the other hand, users could game the system by reporting beliefs that are the opposite of what they truly feel. For example, if you're a leftist, register as a right-winger. Then when an abuse case comes before you, if it's a piece of content more offensive to leftists, then the real leftists on the jury will tend to vote against it -- but as a registered right-winger, you'll be able to cast a vote against it as well, and you'll be displacing a real right-wing voter who probably wouldn't have voted that way, so your vote will be worth more!) Again, it's fine if Facebook wants to do this, but even without collecting this data and simply selecting jurors at random, it should still be true that genuinely abusive pages get more "Yes" votes in a jury vote, than non-abusive pages.
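
Megerman's survey idea amounts to stratified jury selection. A minimal sketch, with made-up group labels (the real groups, and how beliefs are bucketed, are open questions; and as noted above, it all depends on people self-reporting honestly):

```python
import random

def stratified_jury(reviewers_by_group, jury_size):
    """Draw a jury with roughly proportional representation from each
    self-reported belief group.

    reviewers_by_group: dict mapping a group label to a list of reviewer
    ids. Because each group's quota is rounded, the returned jury can be
    off from jury_size by a seat or two.
    """
    total = sum(len(members) for members in reviewers_by_group.values())
    jury = []
    for group, members in reviewers_by_group.items():
        quota = round(jury_size * len(members) / total)
        jury.extend(random.sample(members, min(quota, len(members))))
    return jury
```

As the article says, plain uniform sampling should already work; stratification only guards against an unluckily homogeneous jury, at the cost of a gameable self-report.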

Lastly in the "keep the jurors honest" category, Paul Ellsworth suggested allowing jurors to anonymously review each other -- when a given juror is chosen for the "hot seat" (perhaps randomly, perhaps as a result of a history of skewed voting), other jurors are randomly selected from the voting pool, to review that juror's voting record and decide whether that juror has been voting honestly and judiciously, or not. When I first read this idea, I instinctively thought that because a contaminated jury pool would be reviewing itself, it would not be able to reduce the percentage of problem voters, but a little more thought revealed that this isn't true. Suppose initially your jury pool consists of 80% "honest voters" and 20% "dishonest voters", that honest voters who review the voting record of another voter will always vote correctly whether that person is "honest" or "dishonest", and that dishonest voters will always vote incorrectly. It's still the case that when a voter's record is reviewed by a panel of, say, 20 other voters, virtually 100% of the time the majority will get the right answer. If you strip voting rights from a voter whenever a jury of other voters determines them to be a "dishonest voter", then over time, the percentage of honest voters in the system will creep from 80% to 100%. So again, this might work, and again, it might just be adding unnecessary complexity if the basic algorithm could work without it.

Note that none of these precautions would address the case of a "sleeper" voter -- a voter who joins the system with the sole intention of voting incorrectly on particular types of cases (perhaps planning on voting "yes" to shut down pages made by a particular organization, or pages advocating a particular view on a single issue), while still planning to vote correctly on everything else. By voting honestly in all other cases, they prevent themselves from being flagged by the system for casting too many minority votes, or from being blacklisted by other jurors for having a questionable overall voting record. The only real way I can see to address this problem is to hope that such users are outnumbered by the honest users in the system, and that juries are large enough that the chances of "rogue voters" gaining a majority on any one jury are nearly zero.

Which brings us to the one potentially fatal weakness in the system that I'm aware of: reader George Lawton referred me to a program run by the U.S. government to create armies of fake accounts to infiltrate social media, named, apparently without irony, Earnest Voice:

The project aims to enable military personnel to control multiple 'sock puppets' located at a range of geographically diverse IP addresses, with the aim of spreading pro-US propaganda.

An entity with the resources of the U.S. military could potentially create enough remote-controlled voters to overwhelm the system. I'm not sure there is any way to save a system in which the majority of voters are compromised, short of making all decisions appealable to a core group of trusted Facebook employees at the top (although this creates a bottleneck and limits scalability, especially if filing an appeal is free and everyone who loses an abuse case keeps appealing to the next level up).

Now. On to the second category of suggestions: Assuming the majority of voters are honest, what other features would be desirable to build into the system?

Walter Freeman, on the subject of filing appeals, suggested putting appealed pages in a special queue where they could be publicly viewed and users could comment on the ongoing appeals process, in addition to reading arguments posted by either side; this also blunts the censorship itself via the Streisand effect. I see the appeal, but it's not obvious that this is a desirable feature. It creates perverse incentives, since a user could get extra traffic for their content by creating a page that makes whatever argument they're trying to promote, spiking it with some TOS-violating content, waiting for the page to get shut down, appealing the decision, and enjoying all the extra Streisand attention it gets while on public display during the "appeal".

Meanwhile, James Renken pointed out that the system would work best for content that was originally public anyway, like a controversial Facebook page or event. If someone filed a complaint regarding a private message that they received, and they wanted a "jury vote" about whether the content of the message constituted abuse, then either the sender or the recipient would have to waive their right to privacy regarding the message before it could be shared with jurors. If the message really was abusive, then in some cases the recipient might waive their privacy rights -- reasoning that they didn't mind sharing the nasty message that someone sent them, in order to get the sender's account penalized. The problem arises if the message also contains sensitive personal facts about the recipient, which they wouldn't want to share with anonymous jurors. The system could allow them to black out any personal information before submitting the message for review, but that creates a recursive problem of abuse within the abuse system -- how do you know that someone didn't alter the content (and thus the offensiveness) of the message through their selective blacking-out? So it's not obvious whether this idea could be applied to non-public content at all.

Reader George Lawton suggested allowing content reviewers to vote on the funniest or weirdest content they had to review, to be posted in a public "Hall of Infamy". I love the thought of this, but I think Facebook's lawyers would be uncomfortable glamorizing anything questionable even if it were ultimately voted to be non-abusive (and certainly if it was voted to be abusive). Besides, this also has the perverse-incentives problem -- tie your message to something that you know will not only get an abuse complaint, but will hopefully end up in the Hall of Weird. (Even without the abuse jury system, there are already plenty of incentives for people to make a political point and hope that it will go viral.)

David Piepgrass suggested that new content reviewers should be allowed to specify certain types of content that they don't want to be asked to review -- nudity, graphic violence, etc. This sounds like a good idea. He adds that users probably shouldn't be able to opt-in only to review certain categories of content (or jurors might sign up only to review nudity, and then who would be left to review the death threats?).

Finally, in the other corner: Jerome Shaver suggested bypassing the jury voting system altogether and working on a heuristic algorithm to determine when abuse reports were being submitted by organized mobs of users, based on the patterns shown by mutual friendships between the users filing the abuse reports. The difficulties in designing such an algorithm are too complicated to summarize quickly, and could fill an entire separate article. (Convince yourself that it's not an easy problem to solve. You can't just ignore abuse complaints from clusters of users that have many mutual friendships, because genuinely tight-knit communities of users sometimes file perfectly legitimate complaints against a piece of content.) But again, there is the problem that if a proposed solution is too complicated or too nebulous, Facebook has the excuse that they are "weighing several options", that they're "already working on something similar internally", etc. The jury vote system has the advantage that it can be described in just a few sentences, and the general public always knows whether it has been implemented or not -- which means that as long as abuses of the complaint system continue, people can ask, "Why doesn't Facebook try this?"
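
To make the ambiguity concrete, here is the most naive version of Shaver's signal: the friendship density among the users who filed complaints against one piece of content. The representation (edges as `frozenset` pairs) is my own illustrative choice. A density near 1.0 is consistent with a coordinated mob -- but, as the parenthetical above notes, it is equally consistent with a genuine tight-knit community, which is exactly why this signal is weak on its own.

```python
def complaint_cluster_density(complainants, friendships):
    """Fraction of possible pairs of complainants who are friends.

    complainants: a set of user ids who reported the same content.
    friendships: a set of frozenset({a, b}) friendship edges.
    """
    users = list(complainants)
    n = len(users)
    if n < 2:
        return 0.0
    pairs = n * (n - 1) // 2
    linked = sum(1 for i in range(n) for j in range(i + 1, n)
                 if frozenset({users[i], users[j]}) in friendships)
    return linked / pairs
```

A real heuristic would need far more than this one number, which is part of why the approach is hard to pin down and easy for Facebook to wave away.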

You'll notice this is just a laundry list of the ideas I received, without any definitive conclusions about which ones are good or bad, but that's all I was going for. For the original algorithm, I could argue, with the force of mathematical proof, that under certain reasonable assumptions it would work. There's no such proof or disproof for any of the suggested modifications, so I don't feel as strongly about any of them. But at the top of the article I suggested that readers stop reading and see how many of these ideas they could come up with on their own. How did you do?

The final honor roll of readers who were each the first, or only, person to submit an original idea: Walter Freeman (bonus points for getting in several good ones), James Renken, Joshua Megerman, Paul Ellsworth, George Lawton, Jerome Shaver, and David Piepgrass. Most of them volunteered to donate their winnings to charity, and agreed to let me donate their share to Vittana, which arranges microloans to college students in developing countries. One preferred a charity of their choosing, and only one actually kept the money. To be clear, for future contests, it's awesome if you want to donate the money to charity, but it's not dickish to keep it. That was the original deal after all.

So, all very clever and interesting suggestions, some of which might inspire readers to keep coming up with their own further variations. I said which ideas I probably would have incorporated and which ones I wouldn't, and I'm sure many of you would tell me that I'm wrong on some of those points. Although from here on out you're doing it for free.


I see a problem already! (0)

Anonymous Coward | more than 3 years ago | (#35964744)

This comment was removed because we determined that it violates the Facebook Statement of Rights and Responsibilities ("Statement") to which you agreed when you registered to use the Facebook website. The Facebook Statement includes our content, intellectual property and privacy policies. Please reread our Statement and be certain that all of your remaining content on the Facebook website complies with these rules.

tl;dr (5, Funny)

Krneki (1192201) | more than 3 years ago | (#35964798)

Do I get 100$ if I read the whole article?

Re:tl;dr (2)

arisvega (1414195) | more than 3 years ago | (#35964962)

Do I get $50, if I read half of it?

Re:tl;dr (1)

Tatarize (682683) | more than 3 years ago | (#35968648)

Do I get $25 for parroting the joke?

Re:tl;dr (1)

Blakey Rat (99501) | more than 3 years ago | (#35964994)

I read the first couple paragraphs, and I've actually decided it's worth $100 for me to NOT read the article.

Does this guy even work for Facebook? He doesn't seem to be in any position to actually implement the idea... so what's the point? Brainwanking?

Re:tl;dr (1)

david duncan scott (206421) | more than 3 years ago | (#35965244)

The point? At least an intellectual exercise, and possibly a scheme that Facebook (who have stepped on their dicks any number of times over this very issue) might at least consider, along with anybody else who hosts user content and feels a need to monitor its appropriateness.

But hey, if it doesn't spin your prop, don't think about it.

Cognitive penis-waving comparison (1)

jonaskoelker (922170) | more than 3 years ago | (#35970568)

so what's the point? Brainwanking?

Encephallic show-off.

Re:tl;dr (1)

mwfischer (1919758) | more than 3 years ago | (#35970656)

Let's vote on it.

Enough of the bullshit! (1)

Anonymous Coward | more than 3 years ago | (#35964844)

Crowdsousing, the Cloud, Web 2.0, Cyberwhatever. Enough of the bullshit buzzwords!

Cyberwhatever (1)

Presto Vivace (882157) | more than 3 years ago | (#35965742)

is a great buzzword!

Low information voter? (0)

Anonymous Coward | more than 3 years ago | (#35964930)

So, I didn't see anything about the very real problem that most voters won't even know what the TOS really says. People will vote on whether they "feel" the TOS has been violated, regardless of what the terms of service actually say.

Re:Low information voter? (2)

mcmonkey (96054) | more than 3 years ago | (#35966038)

Mod parent up.

For the tl;dr crowd, to summarize:

It's basically the /. mod system, except mods only vote on posts/messages that are flagged by a user as violating the ToS and the choices are 'OK' or 'Not OK'.

As for who watches the watchers, it's basically the /. meta-mod system.

The "algorithm" is safety in numbers. You get a large enough pool of mods, and hopefully the "normal" people sufficiently outnumber the extreme folks who mod everything as 'Not OK' or who volunteer to mod just to get a peek at naughty content.

Re:Low information voter? (0)

Anonymous Coward | more than 3 years ago | (#35966796)

People make decisions of "right and wrong" based on perceived intent, not just the content and facts.

If you want illegal drugs, see Bob on the corner of 5th and Main street, he will sell you the good stuff.

A guy named Bob on the corner of 5th and Main street is selling illegal drugs, he is a bum, and increasing the crime in the area, he must be stopped before our children turn into drug addicts!

The factual content of both messages is the same: there is a guy named Bob, his exact location, and what he is selling. However, very few people would consider statement 1 to be acceptable. Almost everyone would consider statement 2 to be informative and support it because of the intent of the overall message. Again, same facts, different intent or conclusion, different opinions by a group of random people about which is right and which is wrong.

Going with the majority? (1)

neokushan (932374) | more than 3 years ago | (#35964974)

How is voting the way you feel the majority of people will vote a good thing? While I don't want to out and out say that Democracy doesn't work, there certainly are instances where the majority of people are wrong. The whole crowd-sourcing system doesn't really account for this. Still, it's probably still better than existing systems.

Democracy doesn't work (1)

sakdoctor (1087155) | more than 3 years ago | (#35965062)

I'll say it.

Democracy doesn't work. I want my information organised by smart algorithms (Google); not stupid people (facebook).

Re:Democracy doesn't work (0)

Anonymous Coward | more than 3 years ago | (#35965128)

Because plutocracies are so much better.

How about we just leave all the content online and let people pick and choose what they want.

That's the fucking joy of the internet

Re:Democracy doesn't work (1)

robot256 (1635039) | more than 3 years ago | (#35967964)

Because plutocracies are so much better.

How about we just leave all the content online and let people pick and choose what they want.

That's the [redacted by crowd-sourced censors] joy of the internet


Re:Democracy doesn't work (0)

Anonymous Coward | more than 3 years ago | (#35969690)

Because plutocracies are so much better.

How about we just leave all the content online and let people pick and choose what they want.

That's the fucking joy of the internet

because there are some things I'd prefer not to see. goatse comes to mind.

Re:Democracy doesn't work (0)

Anonymous Coward | more than 3 years ago | (#35965220)

Who decides who is smart? Whoever the well-connected and powerful decide. This means the people who keep the elite in power will always be labeled the smartest. Meanwhile truly smart people will be digging ditches and living in squalor.

Freedom > progress

Re:Democracy doesn't work (0)

Anonymous Coward | more than 3 years ago | (#35965516)

What about a smart algorithm, based on the collaboration of smart people? (netflix)

Re:Democracy doesn't work (0)

Anonymous Coward | more than 3 years ago | (#35965602)

Irony: Posting this on a curated technology blog. :/

Re:Going with the majority? (1)

mcmonkey (96054) | more than 3 years ago | (#35966592)

How is voting the way you feel the majority of people will vote a good thing?

He's not saying people should strive to vote with the majority. He's worried about folks voting with the expected majority to hide their bias.

Someone who decides to vote against everything is easy to find. Once you get a large enough set of 99-to-1 votes where the same person is the 1, you know you can ignore that person's vote. Whether it's someone who thinks everything is inappropriate, doesn't understand the ToS, or whatever, the author's feeling is that such a perennial minority view has no value.

But what if (and these are pretty big ifs) you have someone who is biased against people with red hair. They are going to vote that any picture with red hair is inappropriate. And what if that person knows there is a system to discount or disregard votes from a biased source. And what if that biased person shapes their voting pattern based on that knowledge.

That person could cast that biased vote against any picture with red hair, but then vote with the expected majority, not based on the voter's view of the material, in all other cases, to cultivate a normal appearance and therefore maximize the weight given to the biased votes.

Basically, 1) the author is way over thinking the issue. As others (and I) have posted, he's basically still talking about the /. mod/meta-mod system.

And 2) He's a sociopath and assuming there is a large population of people who think like he does.

Re:Going with the majority? (0)

Anonymous Coward | more than 3 years ago | (#35966848)

Another way of saying it is "tyranny of the majority." See

Re:Going with the majority? (1)

Archangel Michael (180766) | more than 3 years ago | (#35968672)

Tyranny of the majority. Welcome it. With it, you can vote yourself a raise (along with everyone else) until the system is bankrupt.

This is why the Constitution was LIMITING governance, with prescribed roles and functions and everything else was left to the states, or to the people. But we've long bypassed that whole perspective.

It was one of the reasons we had representative governance (republic) democratically elected. And even that has been perverted by the artificial limitation of 435 Representatives for 350 Million people. I no longer have any meaningful say in who represents me.

Add in the broken two party system (both sides are wrong) and you end up where we are today. Those that live on the dole of others are all voting in one block. We're screwed.

so if i understand it right... (0)

Anonymous Coward | more than 3 years ago | (#35964980)

and to be fair, i didnt read the entire article, in fact just the first few paragraphs. buyer beware.

But, the entire idea hinges around the notion that (a) you have a substantial user base sitting around willing to review pictures et cetera and that (b) theyre not the same trolls coordinating against content, or at least not in large percentages.

I'm not sure we can make a statement where (a) and (b) are not at odds with each other. I mean, this boils down to "oh yeah you have a coordinated crowd?! well I will get a bigger uncoordinated crowd!", to which the obvious response is "we are legion".

Tee El; Dee Arr. (0)

xMrFishx (1956084) | more than 3 years ago | (#35965002)

Yo Dog, I herd you like crowdsourcing your crowdsourcing. So we crowdsourced your crowdsourced crowdsourcing so you can crowdsource crowdsourced crowdsourcing while you crowdsource crowdsourcing.

Re:Tee El; Dee Arr. (0)

Anonymous Coward | more than 3 years ago | (#35967128)

No no no. Not here. Go back to reddit.

Another option (1)

NEDHead (1651195) | more than 3 years ago | (#35965042)

To address the selection problem for the pool of reviewers and avoid having the self selection result in a bias, when an entry needs judging put the question to a random group of users. Have it pop up when they log in and require a vote of 'OK' or 'Not OK' before they finish logging in to their account.

Nope (0)

Anonymous Coward | more than 3 years ago | (#35965310)

To address the selection problem for the pool of reviewers and avoid having the self selection result in a bias, when an entry needs judging put the question to a random group of users. Have it pop up when they log in and require a vote of 'OK' or 'Not OK' before they finish logging in to their account.

Right, because randomly showing offensive material to someone who doesn't want to see it (to say nothing of the legal repercussions of doing that to a minor) won't alienate your user base at all.

That's the rub, even with self-selection. You have to be able to verify whether someone is of age or not if their role exposes them to that type of material. There is no way to do that currently.

Re:Nope (0)

Anonymous Coward | more than 3 years ago | (#35967704)

Wait we are talking about facebook right?

I think there might be a reasonable solution there...

How about (1)

kelemvor4 (1980226) | more than 3 years ago | (#35965050)

How about facebook simply employ the appropriate number of individuals necessary to responsibly run their business? It may cost them 900M out of every billion to do it. Does it mean they shouldn't? No, it means they overestimated how lucrative running a social networking site is by cutting important corners. This sort of problem is what necessitates legislation and government oversight.

Re:How about (1)

smelch (1988698) | more than 3 years ago | (#35965164)

This sort of problem is what necessitates legislation and government oversight.

No it doesn't, this is bullshit. Here's a real wild idea: run your shit how you want to run it. Why the fuck would Facebook need government regulation to tell them how to handle removing stuff from their site? Once something gets popular, everybody should be able to use it and it must be about free speech? That's a flimsy argument and a stupid one too. Facebook runs the risk of removing more than they and their users would want, but that should not leave them open to any kind of lawsuit or regulation, any more than if I decide I don't want posts unrelated to Starcraft on my forums. Your whole attitude just makes me angry.

Re:How about (1)

nedlohs (1335013) | more than 3 years ago | (#35965202)

How about they do what they are already doing and keep more of the money for themselves? If people really won't put up with the errors and start leaving for greener pastures, then they can consider spending more money on it.

Really? (0)

Anonymous Coward | more than 3 years ago | (#35965054)

I thought the users on this site were supposed to be intelligent, not the generic trolls that plague the comments section on YouTube.

1 vote maximum (0)

Anonymous Coward | more than 3 years ago | (#35965112)

How about actual democracy, where users who haven't had a chance to vote before and are online get randomly selected to vote? With the number of users and the relatively few instances where a vote would be called, 1 vote each is enough. Then you could add a CAPTCHA to stop Earnest Voice.

If there were a voting class of users, those users would start abusing their power to vote against the user posting the content instead of the content they vote on, at least if they know who uploaded it.

I only have one thing to say... (0)

Anonymous Coward | more than 3 years ago | (#35965134)

OP is a faggot.

Formatting (0)

Anonymous Coward | more than 3 years ago | (#35965190)

This is useless without pictures. Or bullet points. I'll just get the audio book.

Hmm.. (3, Insightful)

cforciea (1926392) | more than 3 years ago | (#35965200)

I can't figure out why these articles don't even make a casual reference to the fact that they are being posted on Slashdot, where we have a system that already fulfills all of the requirements for this "project".

Re:Hmm.. (1)

Mr. Shiny And New (525071) | more than 3 years ago | (#35965830)

Not only fulfills the requirements, but almost in exactly the same way as proposed by the author and his crowd.

Make it TOS based (0)

Anonymous Coward | more than 3 years ago | (#35965230)

Instead of having 100 people decide whether or not the image is offensive, have a handful of randomly selected people (who didn't flag the image) cite the particular clause in the TOS that applies.

Then send those results to an employee to judge.

You can also alleviate the problem by allowing people to categorize things as being pornographic, hateful, etc. while flagging. People who wanted to view a bowdlerized Facebook could opt themselves and their spawn in, and leave the grown up Facebook for the rest of us.

I don't really see the need for this survey. (1)

spads (1095039) | more than 3 years ago | (#35965292)

Nor do I recall it. I think the idea is ready to be tested and might bestow benefits at numerous applicable sites.

I think it has more limitations with respect to lifting up comments of value (though it also appears beneficial there), such as for Slashdot comments, than with respect to stopping wrongful censure, where I really can't see a downside.

I don't see any practical way of gaming it. There are the threats of politically correct mindedness and herd-think, and perhaps some of the comments could be used for tuning it. Even without that, however, it would be a clear improvement over what we have now. Whoever they are hiring to review these things isn't up to the job; they are like ants being buried by grains of sand.

Metamoderation? (1)

sirdude (578412) | more than 3 years ago | (#35965294)

No, I didn't read the first article. Yes, I super-speed-skimmed through this one... Nevertheless, from what I understand, isn't what you are proposing currently being implemented on /. as metamoderation? Watching the watchers and so on? Furthermore, trying to find ways to categorise the "values" of reviewers etc. sounds similar to weighted voting, a system in use on sites like IMDb.

My personal opinion on the whole matter, though, is that all the sewage of the world should be redirected into the Facebook data-centre. But I'm guessing that that is neither here nor there :S

P.S. Has the submitter officially changed his name to "Slashdot regular contributor Bennett Haselton"? Or is "Bennett Haselton" /.'s Alan Smithee?

Re:Metamoderation? (1)

Bieeanda (961632) | more than 3 years ago | (#35965552)

He's definitely real, I remember reading his voluminous output on the topic of web filters during the Nineties. Time has apparently only made him more verbose.

Summary: he reinvented the jury system. (1)

Animats (122034) | more than 3 years ago | (#35965304)

OP just reinvented the jury system.

It's not a bad idea, provided that people are actually willing to work for you for free. Usually, they aren't. It's been tried before, for spam filtering, but the reviewers were overwhelmed. (You'd get random messages containing spam which you have to rate. Right.) This approach sometimes works when the number of items to rate is much smaller than the number of raters, and when the user has to read the thing they're rating anyway, as with Slashdot.

Also, the annoyance level of a message may depend on the recipient. "Hey, let's do lunch at Pizza Hut" can produce reactions ranging from "Great, see you there!" to "Who the hell is this?" to "Another Pizza Hut spam." Then there's the problem of the confidentiality of messages.

Probably not a winner here.

Re:Summary: he reinvented the jury system. (1)

Omnifarious (11933) | more than 3 years ago | (#35966894)

Actually, OKCupid does something like this, and it seems to work pretty well for them.

All "crowdsourced moderation" approaches Slashdot (1)

slimjim8094 (941042) | more than 3 years ago | (#35965400)

suggested allowing jurors to anonymously review each other -- when a given juror is chosen for the "hot seat" (perhaps randomly, perhaps as a result of a history of skewed voting), other jurors are randomly selected from the voting pool, to review that juror's voting record and decide whether that juror has been voting honestly and judiciously, or not.

Have you metamoderated recently? In fact, the whole thing seems like what Slashdot's been doing for years and years. Randomly select moderators based on a number of factors, including previous history and past performance, and so on.

Sophisticated Censoring. (1)

MarkvW (1037596) | more than 3 years ago | (#35965534)

A business will only host material that is profitable. If it is unprofitable to host the material, the business will not host it. Unless they are protected by a more powerful government, a business will always remove material when directed to do so by a government. Google and Facebook are in this to make money for their shareholders.

In the sixties, a black man couldn't kiss a white woman on TV because many affiliates wouldn't carry the show. You still don't see it very much on TV. The programming is racist because it's easier to make money with racist programming.

Any algorithm will only have an impact if it conveys reliable information to Facebook/Google about how the inclusion or exclusion of the material helps the company's bottom line. In other words, does it affect page views and advertising revenue? If the complainers don't matter to the bottom line, blow them off. If they cost money, censor the material. That's the only kind of algorithm that makes sense: study the complainers in depth, chart their influence over time, and base your censorship decisions upon the results.

I'm convinced that a free alternative to Facebook is essential. In a free environment, the user can rely on his or her own self-defined filters to determine what will or will not be received, not some profit-driven corporate censorship algorithm.

Start with better filtering of complaints (1)

Mike_K (138858) | more than 3 years ago | (#35965732)

While setting up juries, etc. may be useful, it would be quite easy to use a Google-ish algorithm to discount abusive mob complaints in the first place. If you find that certain people have a high rate of complaining, their complaints should probably carry a low weight. And you could use the age and activity level of the account to ignore some accounts (new accounts and rarely-used accounts do not represent Facebook users well). So if someone arranges a mob to get groups dealing with an issue removed, the mob will quickly become toothless because its members have complained too much too quickly; on the other hand, creating new accounts would just create another toothless mob.

Now your juries have a lot less to do, which deals with the problem of jury fatigue.
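The weighting scheme this comment describes could be sketched roughly as follows (all names and thresholds are illustrative assumptions, not anything Facebook actually uses):

```python
def complaint_weight(account_age_days, complaints_last_30_days, actions_last_30_days):
    """Hypothetical weight for a single complaint: discount new,
    dormant, and hyperactive-complainer accounts."""
    # Very new or nearly inactive accounts carry little weight,
    # which blunts sock-puppet mobs made of fresh accounts.
    if account_age_days < 30 or actions_last_30_days < 5:
        return 0.1
    # Frequent complainers are discounted in proportion to how much
    # of their recent activity consists of complaining.
    complaint_rate = complaints_last_30_days / max(actions_last_30_days, 1)
    return max(0.1, 1.0 - complaint_rate)
```

A complaint's effective contribution would then be this weight rather than a flat 1, so a mob of new or complaint-heavy accounts sums to far less than the same number of established users.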

It should be simpler (1)

Maury Markowitz (452832) | more than 3 years ago | (#35965920)

I still think there is a far simpler solution to this problem. Simply choose 1 in 10 people who _can_ vote. That is, 90% of Facebook members simply wouldn't be allowed to vote on any specific item; on average they would have to try 10 different ones (or more) before they were allowed to vote.

That would make organized vote mobs very difficult to arrange, which is what we're trying to accomplish. Yet this version would not require any special effort on the part of the users, or the system. No setup, either.
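One way to implement this 1-in-10 rule without any per-user setup would be to make eligibility a deterministic hash of the (user, item) pair; this is just a sketch of the idea above under assumed identifiers, not anything Facebook does:

```python
import hashlib

def can_vote(user_id, item_id, fraction=0.10):
    """Roughly `fraction` of users are eligible to vote on any given
    item.  Eligibility is fixed per (user, item) pair, so a mob can't
    retry, and can't predict which of its members will be allowed in."""
    digest = hashlib.sha256(f"{user_id}:{item_id}".encode()).hexdigest()
    # Map the hash to [0, 1) and admit roughly `fraction` of users.
    return int(digest, 16) / 16 ** len(digest) < fraction
```

Because the hash is deterministic, reloading the page never changes a user's eligibility for that item, yet eligibility across different items is effectively independent.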

Great idea; author is not a lawyer (1)

tulcod (1056476) | more than 3 years ago | (#35965950)

The author kindly forgot about the legal issues with this algorithm. You can't spread people's photos to your users just because you want their opinion. These photos contain private data and cannot simply be sent to random people on the internet, not even with the faces blurred out. Great idea; legally not possible until Google becomes the president of the USA.

Minors (0)

Anonymous Coward | more than 3 years ago | (#35966852)

Nor can certain types of photos be sent to minors.

Addendum (0)

Anonymous Coward | more than 3 years ago | (#35967838)

Let people who qualify (18+) opt in by selecting an option in their account settings: "I would like to review flagged content." Then, whenever there is content to be reviewed, an indicator appears on their homepage so that they can review images in a 'hot or not' fashion, but with multiple criteria such as 'Which aspect of the TOS does this image violate? [multiple choices]' and 'Do you find this content to be offensive, in poor taste, or neither?'. The phrasing of these questions helps to further define the voters' choices and can additionally be used to build a more detailed profile of each voter.

Since this is a survey rather than a jury position or a job, it will only appear randomly to users who have opted in, and will offer a predetermined number of images (say, 50) to rate. Among those images will be 'safe images', verified to comply with TOS and legal guidelines, used to statistically group voters. While some of these 'safe images' will be totally innocuous, others will contain content that may be considered culturally, morally, or personally objectionable. Occasionally, some 'safe images' will be duplicated within a set to further weed out random or abusive voting. The anonymity of all users associated with this process will be guaranteed.

As for protecting the owners' photos, issue the notice: 'This image has been flagged as a violation of the TOS. In order to contest this flag, you must submit to having this image reviewed, which may take up to a week. If you consent to this review, select "I agree"; otherwise this image will be removed.' Personally, I think that sounds nicer than 'You've forfeited all rights to every mundane facet of your life that you've forced onto my servers day in and day out over the past decade, so feel free to weep while thousands of strangers howl for assorted reasons at the picture of the nip-slip that occurred when your obese dachshund knocked you down the stairs while you were stooping over trying to pull off those popsicle sticks stuck to your nighty.' Which is true, if less friendly.
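The 'safe image' calibration this comment describes amounts to scoring each reviewer against planted controls; a minimal sketch, with a hypothetical data layout:

```python
def reviewer_reliability(votes, control_answers):
    """votes: {image_id: vote} cast by one reviewer.
    control_answers: {image_id: correct_vote} for planted 'safe images'.
    Returns the reviewer's agreement rate on controls, or None if the
    reviewer hasn't seen any controls yet."""
    controls_seen = [(img, v) for img, v in votes.items() if img in control_answers]
    if not controls_seen:
        return None
    agreements = sum(1 for img, v in controls_seen if v == control_answers[img])
    return agreements / len(controls_seen)
```

Reviewers who score poorly on controls (or who answer duplicated controls inconsistently) could then have their survey votes down-weighted or discarded.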

Remove robots from the mix (1)

aarenz (1009365) | more than 3 years ago | (#35968416)

Make sure that the vote options are displayed in a random order, using a graphic image to indicate which is yes and which is no, so that nobody can write a simple script that watches for the Facebook voting page and clicks yes or no by default. Sort of like signup pages that force the email address into two fields, with neither of them named 'email'.
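One way to realize this server-side (a hypothetical scheme, not Facebook's actual markup) is to map the two vote meanings to opaque per-ballot tokens, so a script has nothing stable to hardcode:

```python
import secrets

def make_ballot(item_id):
    """Generate a one-off ballot: each vote meaning hides behind a
    random token, and the buttons would be rendered as graphics in a
    random order.  The server keeps the mapping to decode the vote."""
    tokens = {secrets.token_hex(8): "ok", secrets.token_hex(8): "not_ok"}
    return {"item": item_id, "tokens": tokens}

def decode_vote(ballot, submitted_token):
    """Translate the token the client posted back into a vote,
    or None if the token doesn't belong to this ballot."""
    return ballot["tokens"].get(submitted_token)
```

Since the tokens change on every ballot, a bot that replays a captured "yes" token gets an invalid vote the next time around.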

Tighter and simpler algorithm (0)

Anonymous Coward | more than 3 years ago | (#35969642)

drop off the functionality

tl;dr (1)

mrman18766 (1231734) | more than 3 years ago | (#35970120)

see subject

Re:tl;dr (0)

Anonymous Coward | more than 3 years ago | (#35970210)

See earlier post

let's get started! (1)

larry bagina (561269) | more than 3 years ago | (#35970456)

I vote to remove Bennett Haselton.

Paid Moderators. (0)

Anonymous Coward | more than 3 years ago | (#35972342)

I used to work for a content moderation company, and there's essentially one problem with this: privacy.

In a huge number of cases, the content marked as 'objectionable' won't be publicly available. It'll be in pictures or posts visible only to a user's friends, or a subset of friends, or a private group. A successful meta-moderation system would mean that even if you lock your security settings down, all it takes is a few people marking something as 'inappropriate' before hundreds of random strangers get to look at what you were up to on the weekend, regardless of whether or not it violates the terms. Once you add minors to this (who may or may not have their age reported correctly), it gets very messy.

The easiest way to do that is their current way: they have offices full of paid, full-time moderators who are bound by contract to keep things private. They're all on locked-down computers that don't let you save files or take screenshots, and which let you 'escalate' tricky moderation situations. Despite the situation being almost ideal for working from home, it's completely banned: no terminal servers, no VPN, nothing. The only moderation happens within the office, and the only risk of data escaping is somebody taking photos of their screen (which gets you fired on the spot). There are meta-moderators who regularly check a sample of what each moderator is accepting or rejecting, and moderators are pulled up on mistakes.

The fact that we hear so little about how their moderation offices work is actually an encouraging indicator that their system works fairly well. In individual cases like this one, somebody just needs some retraining: they had a bad shift, or they were half asleep, or they just plain clicked the wrong button. The moderators are human, and it's tedious work.

Crowdsourcing the moderation is theoretically more accurate, but I feel more comfortable knowing it's being done by a bunch of bored guys in an office who'd rather be playing WoW than by people who voluntarily spend their free time going through other people's 'objectionable content'.
