
Crowdsourcing the Censors: A Contest

Soulskill posted more than 3 years ago | from the people-love-to-vote dept.


Frequent contributor Bennett Haselton is back with an article about how sites with huge amounts of user-generated content struggle to deal with abuse complaints, and could benefit from a crowd-sourced policing system similar to Slashdot's meta-moderation. He writes "In The Net Delusion, Evgeny Morozov cites examples of online mobs that filed phony abuse complaints in order to shut down pro-democracy Facebook groups and YouTube videos criticizing the Saudi royal family. I've got an idea for an algorithm that would help solve the problem, and I'm offering $100 (or a donation to a charity of your choice) for the best suggested improvement, or alternative, or criticism of the idea proposed in this article." Hit the link below to read the rest of his thoughts.

Before you get bored and click away: I'm proposing an algorithm for Facebook (and similar sites) to use to review "abuse reports" in a scalable and efficient manner, and I'm offering a total of $100 (or more) to the reader (or to some charity designated by them) who proposes the best improvement(s) or alternative(s) to the algorithm. We now proceed with your standard boilerplate introductory paragraph.

In his new book The Net Delusion: The Dark Side of Internet Freedom, Evgeny Morozov cites examples of Facebook users organizing campaigns to shut down particular groups or user accounts by filing phony complaints against them. One Hong Kong-based Facebook group with over 80,000 members, formed to oppose the pro-Beijing Democratic Alliance for the Betterment and Progress of Hong Kong, was shut down by opponents flagging the group as "abusive" on Facebook. In another incident, the Moroccan activist Kacem El Ghazzali found his Facebook group Youth for the Separation between Religion and Education deleted without explanation, and when he e-mailed Facebook to ask why, his personal Facebook profile got canned as well. Only after an international outcry did Facebook restore the group (though, oddly, not El Ghazzali's personal account), and even then they refused to explain the original removal; the most likely cause was a torrent of phony "complaints" from opponents. In both cases it seemed clear that the groups did not actually violate Facebook's Terms of Service, but the number of complaints presumably convinced either a software algorithm or an overworked human reviewer that something must have been inappropriate, and the forums were shut down. The Net Delusion also describes a group of conservative Saudi citizens calling themselves "Saudi Flagger" that coordinates filing complaints en masse against YouTube videos which criticize Islam or the Saudi royal family.

A large number of abuse reports against a single Facebook group or YouTube video probably has a good chance of triggering a takedown; with 2,000 employees managing 500 million users, Facebook surely doesn't have time to review every abuse report properly. About once a month I still get an email from Facebook with the subject "Facebook Warning" saying:

You have been sending harassing messages to other users. This is a violation of Facebook's Terms of Use. Among other things, messages that are hateful, threatening, or obscene are not allowed. Continued misuse of Facebook's features could result in your account being disabled.

I still have no idea what is triggering the "warnings"; the meanest thing I usually say on Facebook is to people who write to me asking for tech support (usually about the proxy sites they use to get on Facebook at school), when they say "It gives me an error," and I write back, "TELL ME THE ACTUAL ERROR MESSAGE THAT IT GIVES YOU!!" (Typical reply: "It gave me an error that it can't do it." If you work in tech support, I feel your pain.) I suspect the "abuse reports" are probably coming from parents who hack into their teenagers' accounts, see their teens corresponding with me about how to get on Facebook or YouTube at school, and decide to file an "abuse report" against my account just for the hell of it. If Facebook makes it that easy for a lone gunman to cause trouble with fake complaints, imagine how much trouble you can make with a well-coordinated mob.

But I think an algorithm could be implemented that would enable users to police for genuinely abusive content, without allowing hordes of vigilantes to get content removed that they simply don't like. Taking Facebook as an example, a simple change in the crowdsourcing algorithm could solve the whole problem: use the votes of users who are randomly selected by Facebook, rather than users who self-select by filing the abuse reports. This is similar to an algorithm I'd suggested for stopping vigilante campaigns from "burying" legitimate content on Digg (and indeed, stopping illegitimate self-promotion on Digg at the same time), and as a general algorithm for preventing good ideas from being lost in the glut of competing online content. But if phony "abuse reports" are also being used to squelch free speech in countries like China and Saudi Arabia, then the moral case for solving the problem is all the more compelling.

Here's how the algorithm would work: Facebook can ask some random fraction of their users, "Would you like to be a volunteer reviewer of abuse reports?" (Would you sign up? Come on. Wouldn't you be a little bit curious what sort of interesting stuff would be brought to your attention?) Wait until they've built up a roster of reviewers (say, 20,000). Then suppose Facebook receives an abuse report (or several abuse reports, whatever their threshold is) about a particular Facebook group. Facebook can then randomly select some subset of its volunteer reviewers, say, 100 of them. This is tiny as a proportion of the total number of reviewers (with a "jury" size of 100 and a "jury pool" of 20,000, a given reviewer has only a 1 in 200 chance of being called for "jury duty" for any particular complaint), but still large enough that the results are statistically significant. Tell them, "This is the content that users have been complaining about, and here is the reason that they say it violates our terms of service. Are these legitimate complaints, or not?" If the number of "Yes" votes exceeds some threshold, then the group gets shuttered.
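
To make the mechanics concrete, here is a minimal Python sketch of the selection-and-vote step just described. The pool size, jury size, and 50% threshold are only the illustrative numbers used above, and the function names are hypothetical:

    import random

    POOL_SIZE = 20_000   # volunteer reviewers who opted in (illustrative)
    JURY_SIZE = 100      # jurors drawn per complaint (illustrative)

    def convene_jury(reviewer_pool, jury_size=JURY_SIZE):
        """Draw a random jury from the volunteer pool; with 20,000 volunteers and
        a jury of 100, any one volunteer has a 1-in-200 chance of being called."""
        return random.sample(reviewer_pool, jury_size)

    def complaint_upheld(reviewer_pool, votes_yes_for, threshold=0.5):
        """votes_yes_for(juror) stands in for asking the juror whether the
        reported content really violates the Terms of Service."""
        jury = convene_jury(reviewer_pool)
        yes = sum(1 for juror in jury if votes_yes_for(juror))
        return yes / len(jury) >= threshold

The point is that the complainants never choose who votes; the site does, uniformly at random from the whole volunteer pool.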

It's much harder to cheat in this system than in an "abuse report" system in which users simply band together and file phony abuse reports against a group until it gets taken down. If the 200 members of "Saudi Flagger" signed up as volunteer reviewers, then they would comprise only 1% of a jury pool of 20,000 users, and on average would get only one vote on a jury of 100. You'd have to organize such a large mob that your numbers would comprise a significant portion of the 20,000 volunteer reviewers, so that you would have a significant voting bloc on a given jury. (And my guess is that Facebook would have a lot more than 20,000 curious volunteers signed up as reviewers.) On the other hand, if someone creates a group with actual hateful content or built around a campaign of illegal harassment, and the abuse reports start coming in until a jury vote is triggered, then a randomly selected jury of reviewers would probably cast enough "Yes" votes to validate the abuse reports.
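
The "one vote on a jury of 100" claim can be checked directly: the number of mob members who land on a randomly drawn jury follows a hypergeometric distribution. A small sketch, assuming SciPy is available:

    from scipy.stats import hypergeom

    POOL = 20_000   # volunteer reviewer pool (illustrative)
    JURY = 100      # jurors drawn per complaint (illustrative)

    def prob_mob_bloc(mob_size, seats_needed):
        """P(mob members occupy at least `seats_needed` of the JURY seats)."""
        rv = hypergeom(POOL, mob_size, JURY)   # population size, mob members, draws
        return rv.sf(seats_needed - 1)

    # A 200-member group expects 200 * 100 / 20000 = 1 seat per jury; its odds of
    # grabbing even 10 seats are negligible, while a mob that has infiltrated 30%
    # of the pool grabs 10 seats essentially every time.
    for mob in (200, 6_000):
        print(mob, prob_mob_bloc(mob, 10))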

Jurors could in fact be given three voting choices:

  • "This group really is abusive" (i.e. the abuse reports were legitimate), or;
  • "This group does not technically violate the Terms of Service, but the users who filed abuse reports were probably making an honest mistake" (perhaps a common choice for groups that support controversial causes, or that publish information about semi-private individuals); or
  • "This group does not violate the TOS, and the abuse reports were bogus to begin with" (i.e. almost no reasonable person could have believed that the group really did violate the TOS, and the abuse reports were probably part of an organized campaign to get the group removed).

This strongly discourages users from organizing mob efforts against legitimate groups; if most of the jury ends up voting for the third choice, "This is an obviously legitimate group and the complaints were just an organized vigilante campaign", then the users who filed the complaints could have their own accounts penalized.
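
A sketch of how the three-way vote could be tallied and fed back into penalties (the vote labels and thresholds here are invented for illustration; in practice Facebook would tune them):

    from collections import Counter

    ABUSIVE, HONEST_MISTAKE, BOGUS_REPORT = "abusive", "honest_mistake", "bogus_report"

    def adjudicate(votes, reporters, takedown_threshold=0.5, penalty_threshold=0.5):
        """votes: one of the three choices per responding juror.
        reporters: accounts that filed the original abuse reports.
        Returns (take_down, reporters_to_penalize)."""
        if not votes:
            return False, []
        tally = Counter(votes)
        total = len(votes)
        take_down = tally[ABUSIVE] / total >= takedown_threshold
        # If most jurors say the reports were bogus, the complainants themselves
        # get flagged for penalties.
        penalize = list(reporters) if tally[BOGUS_REPORT] / total >= penalty_threshold else []
        return take_down, penalize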

What I like about this algorithm is that the sizes and thresholds can be tweaked according to what you discover about the habits of the Facebook content reviewers. Suppose most volunteer reviewers turn out to be deadbeats who don't respond to "jury duty" when they're actually called upon to vote in an abuse report case. Fine — just increase the size of the jury until the average number of users in a randomly convened jury who do respond is large enough to be statistically significant. Or suppose it turns out that people who sign up to review flagged content are a more prudish bunch than average, and their votes tend to skew towards "delete it now!" in a way that is not representative of the general Facebook community. Fine — just raise the threshold for the percentage of "Yes" votes required to get content deleted. All that's required for the algorithm to work is that content which clearly does violate the Terms of Service gets more "Yes" votes on average than content that doesn't. Then make the jury size large enough that the voting results are statistically significant, so you can tell which side of the threshold you're on.
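
As a back-of-the-envelope way to pick "large enough": a normal-approximation sketch of the jury size needed to separate the average "Yes" rate on clearly-violating content from the rate on clean content. The two rates are whatever you measure empirically; the values below are placeholders:

    from math import ceil, sqrt

    def required_jury_size(p_violating, p_clean, z=1.96):
        """Smallest jury size for which the gap between the two expected "Yes"
        fractions exceeds the combined ~95% sampling error (normal approximation
        to the binomial); a conservative rule of thumb, not an exact power test."""
        gap = p_violating - p_clean
        sigma = sqrt(p_violating * (1 - p_violating)) + sqrt(p_clean * (1 - p_clean))
        return ceil((z * sigma / gap) ** 2)

    print(required_jury_size(0.7, 0.4))    # about 39 jurors for a 0.7 vs 0.4 split
    print(required_jury_size(0.55, 0.45))  # about 381 if the two rates are close

The closer the two empirical rates are, the bigger the jury has to be, which is exactly the tuning knob described above.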

Another beneficial feature of the algorithm is that it's scalable — there's no bottleneck of overworked reviewers at Facebook headquarters who have to review every decision. (They should probably review a random subset of the decisions to make sure the "juries" are getting what seems to be the right answer, but they don't have to check every one.) If Facebook doubles in size — and the amount of "abusive content" and the number of abuse reports doubles along with it — then as long as the pool of volunteer reviewers also doubles, each reviewer has no greater workload than before. But the workload of the abuse department at Facebook doesn't double.

Now, this algorithm ducks the question of how to handle "borderline" content. If a student creates a Facebook group called "MR. LANGAN IS A BUTT BRAIN," is that "harassment" or not? I would say no, but I'm not confident that a randomly selected pool of reviewers would agree. However, the point of this algorithm is to make sure that if content is posted on Facebook that almost nobody would reasonably agree is a violation of their Terms of Service, then a group of vigilantes can't get it removed by filing a torrent of abuse reports.

Also, this proposal can't do much about Facebook's Terms of Service being prudish to begin with. A Frenchman recently had his account suspended because he used a 19th-century oil painting of an artistic nude as his profile picture. Well, Facebook's TOS prohibits nudity -- not just sexual nudity, but all nudity, period. Even under my proposed algorithm, jurors would presumably have to be honest and vote that the painting did in fact violate Facebook's TOS, unless or until Facebook changes the rules. (For that matter, maybe this wasn't a case of prudishness anyway. I mean, we know it's "artistic" because it's more than 100 years old and it was painted in oils, right? Yeah, well check out the painting that the guy used as his profile picture. It presumably didn't help that the painting is so good that the Facebook censors probably thought it was a photograph.)

But notwithstanding these problems, this algorithm was the best trade-off I could come up with in terms of scalability and fairness. So here's the contest: Send me your best alternative, or best suggested improvement, or best fatal flaw in this proposal (even if you don't come up with something better, the discovery of a fatal flaw is still valuable) for a chance to win (a portion of) the $100 -- or, you can designate a charity to be the recipient of your winnings. Send your ideas to bennett at peacefire dot org and put "reporting" in the subject line. I reserve the right to split the prize between multiple winners, or to pay out more than the original $100 (or give winners the right to designate charitable donations totalling more than $100) if enough good points come in (or to pay out less than $100 if there's a real dearth of valid points, but there are enough brainiacs reading this that I think that's unlikely). In order for the contest not to detract from the discussion taking place in the comment threads, if more than one reader submits essentially the same idea, I'll give the credit to the first submitter -- so as you're sending me your idea, you can feel free to share it in the comment threads as well without worrying about someone re-submitting it and stealing a portion of your winnings. (If your submission is, "Bennett, your articles would be much shorter if you just state your conclusion, instead of also including a supporting argument and addressing possible objections", feel free to submit that just in the comment threads.)

In The Net Delusion, Morozov concludes his section on phony abuse reports by saying, "Good judgment, as it turns out, cannot be crowdsourced, if only because special interests always steer the process to suit their own objectives." I think he's right about the problems, but I disagree that they're unsolvable. I think my algorithm does in fact prevent "special interests" from "steering the process", but I'll pay to be convinced that I'm wrong. Today I'm just choosing the "winners" of the contest myself; maybe someday I'll crowdsource the decision by letting a randomly selected subset of users vote on the merits of each proposal... but I'm sure some of you are dying to tell me why that's a bad idea.


How about this... (4, Insightful)

geminidomino (614729) | more than 3 years ago | (#35829798)

Don't rely on the cooperation of self-serving and outwardly evil companies to send your message.

I'll take my prize in zorkmids, thanks.

Re:How about this... (0)

Anonymous Coward | more than 3 years ago | (#35830144)

Fuck you in the head? Don't mind if I do.

Re:How about this... (0)

Anonymous Coward | more than 3 years ago | (#35832170)

How about a useful reply?

THERE"S NO SUCH THING AS THE DARK SIDE !! (0)

Anonymous Coward | more than 3 years ago | (#35829896)

It's ALL dark !!

Can you name that tune ??

Meta (3, Funny)

PhattyMatty (916963) | more than 3 years ago | (#35829900)

So he's crowd-sourcing the crowd-sourcing solution. One more level and we'll make a black hole!

Re:Meta (0)

Anonymous Coward | more than 3 years ago | (#35830040)

But was he carrying his Bag of Holding when he jumped through the Portable Hole?

Re:Meta (1, Funny)

Anonymous Coward | more than 3 years ago | (#35830204)

Yo dawg, we heard you like crowdsourcing so we put a moderation system in your moderation system so we can all crowdsource while we crowdsource.

M O D E R A T I O N

Re:Meta (2)

conspirator57 (1123519) | more than 3 years ago | (#35830538)

And let me say this: Extremism in defense of crowdsourcing is no vice. And moderation in pursuit of moderation is no virtue.

Re:Meta (2)

clang_jangle (975789) | more than 3 years ago | (#35830298)

Meta-meta-minutiae? Just the thing for keeping trivial minds occupied, apparently.

Re:Meta (1, Informative)

raddan (519638) | more than 3 years ago | (#35830314)

People have been doing that on Mechanical Turk for a while now. It makes sense. If you think of a human as a very slow, error-prone CPU, the solution is obvious:
  1. Get more people
  2. Have them check each other's work

The second part relies on the independence of the people -- i.e., they are not colluding to distort your "computation". But crowdsourcing sites like MTurk and Slashdot effectively mitigate this by 1) having a large user base from which they 2) sample randomly. MTurk allows you to do crowdsourced checking of crowdsourced content by exposing the worker_id, so that you can exclude a worker who participated in one step from participating in another. Slashdot's moderation system doesn't let people post and moderate in the same discussion.
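
For instance, a minimal sketch of the "check each other's work" assignment step, with the original annotator excluded from checking their own item (the data structures here are invented for illustration, not MTurk's actual API):

    import random

    def assign_checkers(annotations, worker_pool, checkers_per_item=3):
        """annotations: dict mapping item_id -> worker_id who labeled it.
        Returns item_id -> list of checker worker_ids, excluding the original
        annotator so nobody verifies their own work."""
        assignments = {}
        for item_id, annotator in annotations.items():
            eligible = [w for w in worker_pool if w != annotator]
            assignments[item_id] = random.sample(eligible, checkers_per_item)
        return assignments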

This paper [acm.org] coins the term "inter-annotator agreement" for this idea for NLP-type tasks. There are other papers, too, but I have to get back to work.

Re:Meta (0)

Anonymous Coward | more than 3 years ago | (#35831520)

One more level and we'll make a black hole!

You mean goatse? I thought his solution was supposed to preclude that.

in soviet russia (1)

Anonymous Coward | more than 3 years ago | (#35829928)

They want you to snitch on others, and if you can't do that, then they make you snitch on yourself.

Asshole jurors (1)

Anonymous Coward | more than 3 years ago | (#35829946)

Your idea doesn't take into account the number of people who would sign up just so they could hit "Abusive" on everything. You need levels of meta-moderation for this to succeed.

Re:Asshole jurors (3, Insightful)

Garridan (597129) | more than 3 years ago | (#35830362)

Actually, you can use moderation (and a bit of graph theory) to do just that. At first, you shouldn't put stock in any user. But, when you have a large group who usually agree, and (here's the key point) usually agree with the professional moderators, you should trust that large group. This can easily be reduced to an eigenvalue problem, similar to PageRank.
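
As a toy version of that eigenvalue idea (plain power iteration over a pairwise-agreement matrix; the details are my own illustration, not a worked-out reputation system):

    import numpy as np

    def trust_scores(agreement, iters=50, damping=0.85):
        """agreement: (n x n) matrix where agreement[i, j] is the rate at which
        moderator i's votes matched moderator j's on items they both reviewed.
        Returns one trust score per moderator, PageRank-style. A real system
        would also anchor the scores to agreement with professional moderators."""
        n = agreement.shape[0]
        col_sums = agreement.sum(axis=0).astype(float)
        col_sums[col_sums == 0] = 1.0
        M = agreement / col_sums            # column-normalize the "endorsements"
        scores = np.full(n, 1.0 / n)
        for _ in range(iters):
            scores = (1 - damping) / n + damping * (M @ scores)
        return scores / scores.sum()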

The problem I see with this idea as a whole is teens posting naked pictures of themselves and others. Then, this moderation scheme turns into a portal for child porn. Whether or not you think teens should have the right to take and distribute naked pictures of themselves, setting up a website to intentionally distribute such material is illegal.

Re:Asshole jurors (0)

Anonymous Coward | more than 3 years ago | (#35830804)

Mod parent up

Re:Asshole jurors (0)

Anonymous Coward | more than 3 years ago | (#35831354)

I modded you down to prove your point, enjoy!

Seriously? (0)

Anonymous Coward | more than 3 years ago | (#35829970)

Crowdsourcing moderation using a random sampling of users? What a novel idea. I think even better would be to then allow people to review those moderations to make sure the moderators don't abuse their power. Call it metamoderation if you will. You know, slashdot should really implement something like this.

Also, this idea is fucking stupid for abuse reports, since there needs to be a clear definition of abuse in the ToS, and it would be impossible for Facebook to ensure their crowdsourced moderators are following that definition. So my better suggestion would be for Facebook to spend some of its $100 billion to hire more fucking moderators. Where do I get my $100?

Re:Seriously? (1)

Magic5Ball (188725) | more than 3 years ago | (#35839910)

Yes. The basic problem is that at present, mobs can expend X units of work to cause FB to expend Y > X work investigating the complaint, or a further Z > X work fighting it (with Y and Z no more than a few times, say 10, larger than X). The proposal has FB avoid spending that ~10X of work by shifting it onto community moderators, where each complaint that reaches a jury costs roughly 100 units of juror work. (The total work the jurors spend is the number of complaints multiplied by the jury size, if the juries are to be statistically significant.)

As a mob, that multiplier works in my favour. As an individual I can easily file 200 complaints a day without tool assists, each costing FB or its community 100X of juror work (20,000X in total) while costing me only 200X; in one day I've used every member of your jury pool once. If my mob of 100 each files 400 tool-assisted complaints a day, that's 100 * 400 * 100 = 4,000,000 units of work for FB or its community in exchange for 40,000 units of ours.

With just a little more effort, my mob could spam your new abuse-detection system until its effectiveness returns to previous levels, while causing the entire community to incur vastly greater costs for this new system.

Facebook to switch to SlashCode? (3, Informative)

yakatz (1176317) | more than 3 years ago | (#35829980)

This sounds a lot like the slashdot moderation scheme...

For those who did not know, you can get the source code behind slashdot here [slashcode.com]

Re:Facebook to switch to SlashCode? (1)

magarity (164372) | more than 3 years ago | (#35830388)

This sounds a lot like the slashdot moderation scheme...

Speaking of which, it used to throw up a 'please take your turn at metamoderation' link every once in a while. I haven't seen that for quite a while now. Did the new version leave it out?

Re:Facebook to switch to SlashCode? (1)

nwmann (946016) | more than 3 years ago | (#35830640)

I got negative karma somehow; that's why I thought it stopped giving it to me.

Re:Facebook to switch to SlashCode? (1)

dreampod (1093343) | more than 3 years ago | (#35832538)

I think it just reduced the frequency of the metamod reminder. I know I had been thinking the same thing lately and then yesterday got the reminder link. It was two days after I got regular mod points, though I don't know if that's relevant.

Re:Facebook to switch to SlashCode? (0)

Desler (1608317) | more than 3 years ago | (#35830474)

But don't dare look at it lest your head melt [gstatic.com] .

Re:Facebook to switch to SlashCode? (1)

Ginger Unicorn (952287) | more than 3 years ago | (#35835526)

that scared me

Deputize (4, Insightful)

DanTheStone (1212500) | more than 3 years ago | (#35830000)

I'd be more inclined to deputize people you find to be more reliable (basically, trusted moderators chosen from your randomly-selected pool after reviewing their decisions). Your system assumes that most people will be reasonable. I think that is an inherently flawed assumption, including for the very situations listed above. You can't trust that only a minority will think you should remove something that is against the mainstream view.

Re:Deputize (3, Insightful)

owlnation (858981) | more than 3 years ago | (#35830068)

"I'd be more likely to deputize to people who you find are more reliable (basically, trusted moderators chosen from your randomly-selected pool after reviewing their decisions). Your system assumes that most people will be reasonable. I think that is an inherently flawed assumption, including for the very situations listed above. You can't trust that only a minority will think you should remove something that is against the mainstream view."

In theory, that's definitely a better way. The problem is -- as Wikipedia proves conclusively -- if you do not choose those moderators wisely, or you are corrupt in your choice of moderators, you end up with a completely failed system very, very quickly.

Re:Deputize (0)

Anonymous Coward | more than 3 years ago | (#35832378)

I'm not saying Wikipedia is perfect, but holding it up as an example of abysmal quality control doesn't really make sense. For all its flaws, Wikipedia has generated an enormous wealth of reasonably reliable data, and continues to maintain this dataset in good form.

Again, there are many lessons to learn from Wikipedia, but it's hardly a good go-to example for how crowd-sourcing can fail.

Re:Deputize (1)

gknoy (899301) | more than 3 years ago | (#35830096)

Deputizing doesn't sound like it would work well versus a dedicated mob of abusive flaggers.

Re:Deputize (1)

Brucelet (1857158) | more than 3 years ago | (#35832294)

Why not? You wouldn't deputize the abusers.

Re:Deputize (1)

dreampod (1093343) | more than 3 years ago | (#35832646)

Wikipedia takes this particular approach. The problem encountered with it is that if moderators are honest in their moderation most of the time, it dramatically enhances their credibility when they abusively mod their particular special interest. Most flaggers trying to cover up criticism of the Saudi royal family are going to be perfectly content properly rating content for nudity, offensive language, etc. However, when their particular hobbyhorse shows up, they will falsely rate it as offensive.

It does raise the bar on the effort required to do so, which has value in itself. Ultimately, if your pool is large enough, you get the positive benefits from them while drowning out the negative, but it forces your pool to be much bigger.

Do not check out the painting at work (2)

Geeky (90998) | more than 3 years ago | (#35830010)

The painting mentioned as a profile picture is Courbet's Origin of the World.

Probably best not to check it out at work.

Although, of course, it is on the wall of a major gallery where anyone can see it.

Sheep of god (1)

Anonymous Coward | more than 3 years ago | (#35830034)

Suppose an atheist created a page about Islam in the Turkish language, and Islamist Turks found it abusive -- hey, because the person denies Allah -- and they mounted a mass campaign of abuse complaints.

Let's assume Turkey is 99% Muslim. (https://secure.wikimedia.org/wikipedia/en/wiki/Demographics_of_Turkey)

The randomly picked people would statistically reflect that group, and they'd also approve the ban.

So what was all the trouble for?

Good luck doing this... (0)

Anonymous Coward | more than 3 years ago | (#35830036)

Coming up with a good solution for handling abuse is going to take a lot more than $100.

If you let anyone with an account click on an abuse button, you will get a situation like the guy who did David's Farm on YouTube -- a bunch of people making shill accounts to slander the guy (of all the people alleging things, not one shred of proof or conviction ID was shown) and flag every video he made as abusive. It worked -- even though David Rock was one of the top-grossing YouTubers a year ago, he got tossed out on his ear because Google didn't really care to validate the complaints that the bots/shills made, even though all of them were bogus.

If you take a representative sample, same thing... the botters will create tons of accounts, and by random luck, one of the bots will be chosen. Then the same thing will happen -- people who dis some ruler get their accounts banned due to "abuse".

Slashdot's system works because Slashdot doesn't get mainstream attention. If it were a top-tier site, there would be botters with mod points automatically slamming anyone who speaks out on a topic, as well as people spamming their stuff on one end of the Firehose while their bots mod it up on the other end.

This is actually very hard -- all it takes is someone determined enough to make a system to bypass CAPTCHAs, and perhaps have a range of disparate IPs to proxy through and they pretty much win.

stop moderating. (0)

Anonymous Coward | more than 3 years ago | (#35830110)

Remove moderation altogether. Problem solved. Or is genuinely free speech something you're trying to avoid?

Re:stop moderating. (1)

Anonymous Coward | more than 3 years ago | (#35830296)

Or is genuinely free speech something you're trying to avoid?

Yes.

Because when speech is completely free, without any sort of consequences, people become cruel, abusive, small-minded, lose all sense of rationality, etc.

The GNAA posts that used to show up here are a perfect example. And then there's the spam from mostly scam sites.

No thank you. As much as I love free speech, there need to be some consequences on the web, because otherwise not-so-popular speech gets drowned out by the crap. Sure, it's not perfect -- case in point were the posts here that were critical of Apple before they became top dog. Or anything critical of F/OSS, which is usually asking for a negative moderation regardless of how well reasoned it is.

Re:stop moderating. (1)

Ruke (857276) | more than 3 years ago | (#35831052)

AC's right. The systems which are interested in such a moderation scheme (Facebook, etc.) are not terribly interested in championing free speech. They're looking to be a reasonably pleasant place to hold a conversation. And, as far as I'm concerned, that is a perfectly reasonable thing for them to do. If Facebook wants to lay down some TOS before I use their servers to communicate with my friends, that's their choice. If Facebook wants to permaban all users who post using the letter "K" on every third Friday, that's their choice. They're going to hemorrhage users, but it's totally within their rights to do that.

Re:stop moderating. (1)

Americano (920576) | more than 3 years ago | (#35830528)

If we remove moderation, can we remove anonymity too, and force people to post with their real names if they wish to participate?

Or is taking real-world responsibility for the actual content of your speech something you're trying to avoid?

Re:stop moderating. (1)

h4rr4r (612664) | more than 3 years ago | (#35830616)

You cannot have free speech without anonymous speech. Let's say I want to say something bad about the Saudi royal family or the Israeli government -- I think I've got both sides covered there -- do I really want their nutbag followers to be able to find me?
Since I would probably rather continue to breathe than make my point known, I will never speak out. That hurts free speech. If we did not have to worry about never being able to find a job, or even being killed, for our speech, that would not be an issue.

Re:stop moderating. (1)

mlts (1038732) | more than 3 years ago | (#35831066)

Here in the US it isn't so bad that people fear for their lives (yet), but SLAPP cases are on the rise. I wouldn't be surprised if companies start running bots that periodically check Google or crawl sites themselves and automatically file libel lawsuits against anyone who complains. This is cheap for an organization with a large legal arm, but defending against these suits would be cost-prohibitive for individuals.

So, in the US, having the ability to separate a userID from the real person in case of a mass civil action will be important.

Anonymity is one thing. However, it would be nice to have a userID that posts can be attributed to, but that is completely separate from the real person. This way, spammers end up ignored, while people who contribute content but don't feel like putting their real-life identity out there can still build a reputation.

What would be interesting is an authentication scheme based on PGP/gpg keys. You log onto a site, the site asks you to sign a chunk of text [1] with your key and paste it in, and you are now authenticated at that site. Want that identity never to be used again? Destroy the private key. Another advantage is that there are no passwords to worry about -- the website just stores a public key.

Using a PGP/gpg public key means that posts can also be signed for further security. A site can offer to show the full signed post, but normally just show the contents of the message to hide the relatively ugly PGP signature by default.

Downside: if forensics discover the private key on a machine, it will prove beyond a doubt that the person using that machine has access to that anonymous ID. However, that can be remedied with solutions often discussed here.

[1]: Random text from a cryptographically secure RNG plus a timestamp. The goal is to ensure replay attacks do not happen.
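
A minimal sketch of that challenge-response flow, using the python-gnupg wrapper around a local GnuPG install (the web-framework plumbing and key registration are omitted; this is an illustration, not a hardened login system):

    import secrets
    import time
    import gnupg   # pip install python-gnupg; wraps the local GnuPG binary

    gpg = gnupg.GPG()   # assumes the user's public key was imported at registration

    def make_challenge():
        # Random text from a CSPRNG plus a timestamp, per footnote [1], so a
        # captured signature can't be replayed later.
        return "login-challenge:%s:%d" % (secrets.token_hex(32), int(time.time()))

    def verify_login(signed_text, challenge, expected_fingerprint, max_age=300):
        """signed_text is the clearsigned challenge the user pastes back."""
        verified = gpg.verify(signed_text)
        if not verified.valid or verified.fingerprint != expected_fingerprint:
            return False
        if challenge not in signed_text:           # crude check that *this* challenge was signed
            return False
        issued = int(challenge.rsplit(":", 1)[1])  # reject stale challenges
        return time.time() - issued <= max_age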

Re:stop moderating. (1)

Americano (920576) | more than 3 years ago | (#35831154)

Indeed, anonymity is important, but when we're talking about private platforms operated by private, for-profit enterprise, we're not talking about "free speech." Nobody's obligated to give others a platform from which to speak.

Suggesting - as the AC above did - that you have a right to un-moderated, unrestricted speech on somebody else's dime is rather disingenuous.

Re:stop moderating. (1)

1u3hr (530656) | more than 3 years ago | (#35830756)

Or is taking real-world responsibility for the actual content of your speech something you're trying to avoid?

You bet it is, when that might mean being killed/fired/beaten up/expelled/etc.

Re:stop moderating. (0)

Americano (920576) | more than 3 years ago | (#35831172)

Maybe you should start your own site which will allow you to publish your speech in a truly anonymous fashion, then, instead of demanding other people provide one for you, or pretending to be outraged that somebody would set terms of use that govern what you say and how you say it on a site they have built & continue to operate.

Re:stop moderating. (1)

1u3hr (530656) | more than 3 years ago | (#35831942)

Maybe you should respond to the post I actually made instead of lying that I "demanded" anything or "pretended outrage".

You're either a moron who can't read, an asshole who doesn't care what post he attaches his rants to, or a troll who misrepresents people to wind them up.

Re:stop moderating. (1)

Americano (920576) | more than 3 years ago | (#35832506)

Apparently my initial point wasn't expressed clearly for you:

"Stop moderation" is simply not an option on a private platform. No private platform is obligated to provide - or interested in providing - for its members a platform which allows them to disseminate consequence-free speech. Do away with moderation? You'll see platforms require that you use your real name, or they'll simply disallow comments and posting altogether. Any expectation of "anonymous" speech on Facebook is ridiculous.

It does not matter to Facebook that you want to speak out in a manner that might get you expelled, beaten up, arrested, or killed. That is not their problem, nor is it their mission to provide you with a platform to do that, or protect you from the consequences if you do use their platform for it. In other words: If you want anonymous speech for any old thing you might want to say, don't expect to find that on Facebook, or any other private site which has terms of use.

Re:stop moderating. (1)

russotto (537200) | more than 3 years ago | (#35832782)

"Stop moderation" is simply not an option on a private platform. No private platform is obligated to provide - or interested in providing - for its members a platform which allows them to disseminate consequence-free speech.

Well, there's the one we don't talk about.

Re:stop moderating. (1)

mlts (1038732) | more than 3 years ago | (#35831328)

There is a difference between responsibility to not troll/spam/spew on a board, and being sued/arrested/tortured/killed/family tortured/family killed for a statement.

It would be nice if there were a system that dealt with trolls and spammers but didn't affect people who hold unpopular opinions, whether unpopular in their country or unpopular in general.

A system would also have to deal with grey areas: let's say there is a person who says that iOS and OS X are 100% secure. Would this person be trolling, in need of some education about what the term "100% secure" really means, or would the post be considered sardonic humor? One person with moderation ability might flag it as a troll, another might consider it humorous, still another might absolutely agree with it.

censorshipbook (1)

fred fleenblat (463628) | more than 3 years ago | (#35830192)

It is jarring for me to realize that pages are being taken down merely because they *offend* others. These aren't kiddie porn or drug-dealer pages; it's just people talking about stuff. They talk about their friends, their enemies, their schools, their governments. It's not all flowers and happiness. If they want real people on Facebook, they need to realize that some people are going to say unpleasant things.

Maybe have a counter at the top of the pages that says "this page has received N complaints" but leave the content there so all can judge for themselves.

random != win (1)

spottedkangaroo (451692) | more than 3 years ago | (#35830238)

There's actually been a lot of research on this topic over the last decade. No great solutions imo, but a lot of research. Most of it better than random = trustworthy. Here's the problem. Say you have N users and M of them are a fake mob. F = M/N is your ratio of fake users. Now select 1% of your users at random. F = (0.01*M)/(0.01*N) = M/N ... the ratio is the same, so the mob still works. Of course, I'm assuming you don't know which users are mob users. If you did, why would you bother with randomly selecting some? This isn't even a scholarly argument, it's just a mundanely obvious observation. Here's some actual research: http://scholar.google.com/scholar?q=reputation%20systems [google.com]

Re:random != win (1)

suutar (1860506) | more than 3 years ago | (#35833688)

True, the percentage won't change. But the original problem isn't a percentage problem; it's saying "we have X complaints" instead of saying "X out of Y people think this is a problem", because they don't know what Y is. They can't really assume that everyone on FaceBook has seen the material in question and only X think it's bad. The proposed solution is intended to turn the number of folks who have a problem with the content from an absolute figure to a measured percentage, which is (or at least appears to be) much more closely related to whether it really is a problem.

I'm hoping for Diaspora (1)

MarkvW (1037596) | more than 3 years ago | (#35830262)

I'm really hoping for a decentralized social networking system, where everything is not controlled by the Big Head.

Re:I'm hoping for Diaspora (1)

blue_goddess (1416183) | more than 3 years ago | (#35836792)

Diaspora does not solve the problem, because you either rely on outside hosting (in which case you're prone to excessive notice/takedown and screwed anyway) or serve it from your own machine, in which case you are even more vulnerable to an angry DDoS mob (see: Anonymous).

Reputation/Trust (0)

Anonymous Coward | more than 3 years ago | (#35830276)

We need some sort of model for reputation/trust, similar to a credit score. If there is no information about you, you get a low score because you are unknown. As you create a history of your actions, they should affect your score. If you flag a lot of content, and your flags aren't accurate, then your credit score should be lowered. That way, if you try to flag items in the future, it will carry less weight since you are an unreliable source.

Newgrounds had something like this.

I'm sure someone's already mentioned Slashdot... (3, Informative)

JMZero (449047) | more than 3 years ago | (#35830284)

..but I have to say it's ironic that you're posting about this algorithm on Slashdot, a site whose moderation system has incorporated the best of your ideas for years, and yet that doesn't seem to come up when you're asking for ideas.

I like the Slashdot system. Moderators are assigned points at times beyond their control, to prevent just the kind of abuses you mention. There's appropriate feedback control on how moderators behave. The job of moderating (and meta-moderating) is presented and appreciated in such a way that people actually do it. People are picked to do moderation in a reasonable way. The process is transparent, and the proof that it works is that the Slashdot comments you typically see are actually not horrible (usually) and sometimes are quite informative.

Re:I'm sure someone's already mentioned Slashdot.. (0)

Anonymous Coward | more than 3 years ago | (#35830554)

..but I have to say it's ironic that you're posting about this algorithm on Slashdot...

Given that even the summary says, "and could benefit from a crowd-sourced policing system similar to Slashdot's meta-moderation," how on earth do you think there's anyone involved that is not aware of /.'s moderation system?

The process is transparent, and the proof that it works is that the Slashdot comments you typically see are actually not horrible (usually) and sometimes are quite informative.

/. is not an awful system, not even bad, but there are some stupid design decisions. The comments you typically see were in the first few hundred, and most of them are karma whoring, that is, posts designed to get 3 or 4 people to mod them up in a kneejerk fashion.

What /. has going for it is that it churns through the news quickly and produces some reasonably good commentary, but it's by no means the standard to judge by.

The hivemind has moods (1)

Fractal Dice (696349) | more than 3 years ago | (#35830692)

When I look at slashdot and the way it gets moderated, I feel that either the culture of slashdot has changed a lot over the past decade or else I've changed a lot (it's sort of hard for me to tell objectively). I realize that communities and their biases are not a constant but there are a few topics where the slashdot moderation lately feels so alien to me that it has raised my internal astroturf alarm. Admittedly, I'm part of the problem for letting my mod points expire more often than I spend them, but I find when I force myself to spend them, I end up starting to moderate "+1 aligns with my biases" or "-1 I don't want to hear this point of view".

Re:The hivemind has moods (0)

Anonymous Coward | more than 3 years ago | (#35830792)

The average slashdotter is a lot stupider than the posters from 10 years ago.

I don't know why, but it's true - check out some old slashdot posts sometime and be reminded why this site was once great.

Re:The hivemind has moods (1)

Ltap (1572175) | more than 3 years ago | (#35832412)

I agree. If you find someone who has constructed a well-written argument that disagrees with you (some of the anti-privacy, anti-filesharing, and pro-censorship commenters come to mind), the best (or at least most constructive) approach is to respond to them, not to try to mod them into oblivion.

Trying for the $100 (3, Interesting)

xkr (786629) | more than 3 years ago | (#35830290)

I have two algorithms, and I suggest that they are more valuable if used together, and indeed, if all three including your algorithm are used together.

(1) Identify "clumps" of users by who their friends are and by their viewing habits. Facebook has an app that will create a "distance graph," using a published algorithm. It is established that groups of users tend to "clump" and the clumps can be identified algorithmically. For example, for a given user, are there more connections back to the clump than there are to outside the clump? Another way to determine such a clump is by counting the number of loops back to the user. (A friends B friends C friends A.) Traditional correlation can be used to match viewing habits. This is probably improved by including a time factor in the each correlation term. For example, if two users watch the same video within 24 hours of each other this correlation term has more weight than if they were watched a week apart.

Now that you have identified a clump -- which you do not make public -- determine what fraction of the abuse reports come from one or a small number of clumps. That is very suspicious. Also apply a "complaint" factor to the clump as a whole. Clumps with high complaint factors (that complain frequently) have their complaints de-weighted appropriately. Rather than "on-off" determinations (e.g. "banned"), use variable weightings.

In this way groups of like-minded users who try to push a specific agenda through abuse complaints would find their activities less and less effective. The more aggressive the clump, the less effective. And, the more the clump acts like a clump, the less effective.
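
A rough sketch of the de-weighting in (1), assuming the friend/viewing-graph clustering has already assigned each complainer to a clump (the weighting formula is made up purely for illustration):

    from collections import Counter

    def weighted_complaint_score(complaints, clump_of, clump_complaint_rate):
        """complaints: list of user ids who filed reports against one item.
        clump_of: user_id -> clump_id from the friend/viewing-graph clustering.
        clump_complaint_rate: clump_id -> how often that clump files complaints,
        relative to the site average (1.0 = average)."""
        by_clump = Counter(clump_of.get(u) for u in complaints)
        score = 0.0
        for clump, count in by_clump.items():
            concentration = count / len(complaints)   # share of reports from this clump
            chattiness = clump_complaint_rate.get(clump, 1.0)
            # Reports concentrated in a single, complaint-happy clump count for less.
            score += count / ((1.0 + concentration) * chattiness)
        return score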

(2) Use Wikipedia-style "locking." There is a sequence of locks, from mild to extreme. Mild locks require complaining users to be in good standing and to have been users for a while for the complaint to count. Medium locks require a more detailed look, say, by your set of random reviewers. Extreme locks mean that the item in question has been formally reviewed and the issue is closed. In addition, complaints filed against a locked ("valid") item hurt the credibility score of the complainer.

I hope this helps.

Re:Trying for the $100 (1)

dkleinsc (563838) | more than 3 years ago | (#35831304)

If I'm an organized group with an ax to grind, I can get around your idea 1 fairly easily - organize my group off $SOCIAL_NETWORK, and instruct my loyal group members to specifically avoid detection by not friending or viewing each other's stuff on $SOCIAL_NETWORK. Your distance graph no longer shows these folks as connected in any way, problem solved.

Offline astroturfers have been doing that kind of thing for years.

Re:Trying for the $100 (1)

xkr (786629) | more than 3 years ago | (#35832510)

True, but my guess is that (1) this is far too much trouble for them; and (2) they were a "clump" with common interests long before they decided to attack postings they don't like. You are suggesting that people leave their own church before promoting the church's views?

Re:Trying for the $100 (1)

dreampod (1093343) | more than 3 years ago | (#35832740)

At that point the complaining profiles are going to be quite distinctive due to their inactivity except when flagging something as abusive. They are still going to want to use their real profile most of the time, because using a fake profile prevents them from interacting with their friends and groups. This means that fairly simple behavioural analysis can help exclude these reports from the system or treat them as a distinct 'loners who like flagging content' clump.

The real difficulty would be determining how much weight to give a clump, because in legitimate cases it should obviously carry more weight than a single individual.

Re:Trying for the $100 (0)

Anonymous Coward | more than 3 years ago | (#35833184)

This is actually a really well-thought-out idea, but there is one small problem that I see with it. How would you justify locking a topic or group against complaints while continuing to allow it to grow and voice its opinion? You are censoring to avoid censoring.

The clumping idea should most definitely be used by Facebook if it isn't already, but then there is also a chance that a clump has a valid point when complaining. If I am on Facebook and I find a group that is trying to get the murder of puppies approved as part of the local kindergarten curriculum, then of course I am going to tell all the people I know in the area with kids at that kindergarten, so that the number of complaints gets high enough to raise a red flag with the powers that be. Sadly, this approach would be squashed by your clumping idea, and thus my daughter would have to slaughter puppies every day. Given how hard it is to get blood out of clothes, I just can't let that happen.

This would now be the perfect spot for me to suggest an alternative, but sadly I've got nothing. I am going to go on the assumption that the combination of your first idea with the idea from the article would be a better choice, but I still don't get a warm and fuzzy feeling thinking about it...

Slashdot moderation/meta-moderation??? (1)

davidwr (791652) | more than 3 years ago | (#35830326)

At its core, this sounds like a blend of Slashdot's moderation and meta-moderation processes.

This is much more difficult than it sounds (1)

querist (97166) | more than 3 years ago | (#35830394)

Hello everyone,

This is a much more difficult problem than it seems at first glance. Some other posters have already pointed out the problem of the "jury of your peers" concept with the example of the country Turkey. A similar problem arises if it is simply approached as "what is considered offensive in the host country" (in this case, the USA, since Facebook is based in the USA). Heck, there are pictures of my daughter in her soccer uniform that would be banned in Saudi Arabia because you can see her knees, never mind her ankles. Scandalous!

It is difficult to conform to all nations' "sensibilities" with regard to what is "inappropriate" without falling to the harshest restrictions, such as Sharia law or the Thai ban on any criticism of the Thai royal family.

Spotted Kangaroo (message 35830238 in this thread) has an interesting idea with using "trustworthy" members. I'm not sure how that "trustworthiness" would be calculated other than by using a metamoderation system similar to Slashdot's. By using supposedly trustworthy members, and then allowing the Facebook staff to "metamoderate," especially on appeals against complaints, I think it could work reasonably well. It would take a while and considerable effort for shill accounts to build up enough "trustworthiness" to have any impact, since the shill accounts would have to show activity and not just longevity.

I like the "jury" system, though. It's better than letting people comment only on topics about which they have strong feelings. Given the large number of churches that use Facebook as the electronic bulletin board for their youth groups, I could see a disproportionate number of people moderating pro GLBT groups and pages down because it offends their beliefs. We need a random selection mechanism that still works fairly, such as trusting people to list languages understood honestly. I'd be useless in moderating a page in Turkish, for example.

Just a few thoughts. I hope that if someone notices a flaw in my reasoning, they will post a polite explanation of the flaw and propose a better solution. I'm not interested in the $100, so I thought I'd just toss a few ideas out for folks to use.

Vigilantism (2)

Ruke (857276) | more than 3 years ago | (#35830514)

I think you'd be hard-pressed to find a group of people who would familiarize themselves with the Facebook TOS well enough to actually enforce it. I'm afraid that what you'd actually get is a group of people who vote Offensive/Inoffensive based on whether they agree with whatever controversial topic is at hand. This puts any minority group (LGBT, religious organizations, etc.), as well as any controversial groups (pro-Life/pro-Choice, political groups, etc.) at a much higher risk than they are now. You need some sort of incentive for people to vote according to the rules, rather than voting for what they think is right.

Re:Vigilantism (1)

querist (97166) | more than 3 years ago | (#35831014)

That is why the metamoderation is done by Facebook employees, who should be familiar with the TOS. It should work itself out eventually, with obvious abusers being given low reputations so that they are never asked to moderate again.

Re:Vigilantism (1)

Ruke (857276) | more than 3 years ago | (#35831332)

A system like this could work out, but by introducing Facebook employees, you make the system non-scalable. Instead of an employee reviewing 1000 abuse reports per day, he reviews 1000 moderations per day, and that's only meta-moderating 1 out of every <jury size> moderations.

I like the system I saw posted earlier, where instead of asking "Is there abuse?", you ask "Which kind of abuse is there?" and give them a list. People who choose incorrectly are given a lower confidence rating. However, this still doesn't account for hot-button issues - say I'm perfectly reasonable, except that I think that anyone who's had an abortion deserves death by firing squad, so I'm going to mark any Planned Parenthood groups as grossly offensive. I suppose that those are likely edge cases, and you let the large jury size dilute their biases...

Jury Qualification Improvement (2)

Umuri (897961) | more than 3 years ago | (#35830438)

I like the idea; however, your problem is that you will always come across trolls on the internet, or people who just like screwing up systems. I would say this percentage on Facebook is quite sizable, so I would propose these alterations (to be taken individually, all together, or mixed and matched):

Assign a trustability value to each juror, hidden from them and modified in one of two ways (or both):

Have a pool of pre-existing cases (I'm sure Facebook has tons of examples stored in their history banks).
In this situation Facebook knows what the outcome should be according to their standards.
Have any prospective juror review a mix of "real" cases and these pre-existing cases for a trial period; say the first 20 cases they review are an unknown mix. This way they can't tell which ones are the pre-judged cases.

Use their verdicts on these existing cases to assign the juror a "reliability" factor for their verdicts on the non-example cases in their batch.
That way jurors who don't quite get the rules, or who are causing problems, are easily weeded out, and their votes count for less in the total verdict weight on their real cases.

Alternatively:
Trustability starts at 50%, so new jurors get half votes.
Whenever a juror's vote is the polar opposite of the majority opinion, lower their trustability rating.
Likewise, when they are in the majority and it is not a middle-ground case, increase their trustability.

Both of these improvements will lower the odds of troll or mob mentality succeeding, even if they control a decent share of the juror pool, because their individual votes will be worth less, while being invisible enough to the end user that they won't be able to tell they aren't being effective.
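
A sketch of the first alteration: seed a new juror's queue with pre-judged "gold" cases and derive a reliability weight from how they handle them (the mixing and scoring details are invented for illustration):

    import random

    def build_trial_queue(real_cases, gold_cases, n_gold=5):
        """Mix pre-judged gold cases into a new juror's queue so they can't tell
        which of their verdicts are being graded."""
        queue = list(real_cases) + random.sample(gold_cases, n_gold)
        random.shuffle(queue)
        return queue

    def reliability(juror_votes, gold_answers):
        """Fraction of gold cases the juror judged the same way the site's own
        reviewers did; used to weight their votes on real cases."""
        graded = [(case, vote) for case, vote in juror_votes.items() if case in gold_answers]
        if not graded:
            return 0.5     # unknown juror: half-weight, per the alternative above
        agree = sum(1 for case, vote in graded if vote == gold_answers[case])
        return agree / len(graded)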

Re:Jury Qualification Improvement (1)

querist (97166) | more than 3 years ago | (#35831036)

This seems like a good idea, except that it could still allow a "mob rule" effect. I would contend that having Facebook employees metamoderate, at least in disputed cases, would be a more effective approach overall. Your approach, however, could easily seed the system.

Re:Jury Qualification Improvement (3, Interesting)

dkleinsc (563838) | more than 3 years ago | (#35831220)

Thing is, this problem isn't one of mere trolls. Trolls, spammers, and other forms of lesser life are relatively easy to recognize.

No, these are paid shills and organized groups with an agenda. And that's much much harder to stop, because they will have 'spies' trying to infiltrate and/or control your jury selection, 'lawyers' looking for loopholes in your system, and a semi-disciplined mob who will be happy to carry out their plans carefully.

An example of what they might do if they were trying to take over /. :
1. See if they could find and crack old accounts that haven't been used in a while, so they could have nice low UIDs. These are your 'pioneer' accounts. If you aren't willing or able to pull that off, make some new accounts, but expect the takeover to take longer.
2. Have the 'pioneers' post some smart and funny comments about stuff unrelated to your organization's angle to build karma and become moderators.
3. Have your larger Wave 2 come in, possibly with new accounts. Still be reasonably smart and funny on stuff unrelated to the organization's angle. Have your pioneers mod up the Wave 2 posts.
4. Repeat steps 2 and 3 until your group has a large enough pool of mods so that you can have at least 5 moderators ready whenever a story related to the organization's ideology comes up.
5. Now let your mob in. Have your moderators mod up the not-totally-stupid mob posts in support of your organization's ideological position, and possibly mod down as 'Overrated' (because that's not metamodded) anything that would serve to disprove it.
You now have the desired results: +5 Insightful on posts that agree with $POSITION, -1 Overrated on posts that disagree with it, and an ever-increasing pool of moderators who will behave as you want them to with regards to $POSITION.

I have no knowledge of whether anyone has carried out this plan already, but it wouldn't surprise me if they had. The system on /. is considerably more resilient than, say, the New York Times comment section or Youtube, but still hackable.

Re:Jury Qualification Improvement (0)

Anonymous Coward | more than 3 years ago | (#35839964)

Alternatively:
Trustability starts at 50%, so new jurors get half votes.
Whenever a juror's vote is the polar opposite of the majority opinion, lower their trustability rating.
Likewise, when they are in the majority and it is not a middle-ground case, increase their trustability.

AC because I was moderating... but that won't work.

Imagine that for any given thing on $social_network, the $RandMob gets to it first, with good numbers. Then, as people who aren't in the mob start voting, they get their voting power decreased because they don't agree with the mob. If the mob gets good inertia rolling, they usurp even 100% of the votes.

-AI
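The feedback loop the AC describes is easy to reproduce in a toy simulation; the population sizes and step sizes below are made up, but the outcome (the mob's weighted share only grows while honest jurors' weight decays) falls out of any similar numbers:

    def simulate(rounds=50, mob=60, honest=40, start=0.5):
        trust = {('mob', i): start for i in range(mob)}
        trust.update({('honest', i): start for i in range(honest)})
        for _ in range(rounds):
            votes = {'remove': 0.0, 'keep': 0.0}
            for (kind, _i), t in trust.items():
                votes['remove' if kind == 'mob' else 'keep'] += t
            majority = max(votes, key=votes.get)
            for key in trust:
                choice = 'remove' if key[0] == 'mob' else 'keep'
                delta = 0.02 if choice == majority else -0.05
                trust[key] = min(1.0, max(0.0, trust[key] + delta))
        return votes

    print(simulate())  # honest jurors' weight decays toward zero; the mob's grows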

Cultural sensitivities (2)

petes_PoV (912422) | more than 3 years ago | (#35830472)

The system as described does not appear to cater for situations where a post/article is grossly offensive to an identifiable group or minority, but is meaningless to the majority. So if something is flagged that honks off a lot of people in Uzbekistan (for example) or America (for another example), shouldn't the "judges" also come from that cultural group (the honkees?)? Without that filter, most people who knew nothing about the circumstances of the article would not be in a position to make a considered judgement - or they might even vote the complaint down for their own political reasons.

Although you can't expect people to identify themselves as being knowledgeable about every conflict, argument, religious view, political wrangle or moral panic, you could choose individuals from the same timezone and hemisphere the complaints originate from (and maybe only ban the offending piece in that geography - unless more complaints are received from outside).
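A minimal sketch of that selection rule, assuming the site stores a coarse region or timezone for each account (the field names here are hypothetical):

    import random

    def pick_regional_jurors(candidates, complaint_region, n=20):
        """candidates: list of dicts like {'id': 123, 'region': 'UZ', 'utc_offset': 5}."""
        local = [c for c in candidates if c.get('region') == complaint_region]
        pool = local if len(local) >= n else candidates  # fall back to the global pool
        return random.sample(pool, min(n, len(pool)))

    # A verdict to remove could then be applied only within complaint_region,
    # unless complaints also start arriving from outside it.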

Re:Cultural sensitivities (0)

Anonymous Coward | more than 3 years ago | (#35831110)

So if something is flagged that honks off a lot of people in Uzbekistan (for example) or America (for another example), shouldn't the "judges" also come from that cultural group (the honkees?)?

The problem with your adjustment is that it would only further promote mob mentality. Suppose the majority in a particular population thought "X" and the Facebook page promoted "Y" without violating Facebook's Terms of Service. Your adjustment would effectively make it easier to bury "Y", as it would only allow that population to consider the content.

Make sense? I hope so.

Re:Cultural sensitivities (1)

petes_PoV (912422) | more than 3 years ago | (#35832324)

Yes, I understand - however I'm proposing that "Y" would only be blocked in geographies where the "X" view predominated. All the "X"-ers could use Facebook "safe" in the knowledge that their values/feelings/religion/ethics/scruples/prejudices weren't being debased/criticised/titillated/subverted/mocked/parodied or abused while the rest of the world could carry on regardless, without those blocks in place.

The thing we must consider is that China currently has the single largest Internet user base on the planet, and India, with its 1Bn+ people, will soon have the second. Once these massive single states get to organise their people on the internet, it will be very easy for them to impose their attitudes on everybody - just by sheer force of numbers. They would have the largest number of volunteer censors - possibly state-sponsored - and everybody else would have to dance to the "dictatorship of the majority".

What I'm proposing is that FB and other global internet presences could segment contentious content so that the people who object to it - or at least the region they are in - feel like they have "won" by having the material banned, while everybody else who doesn't have strong (or opposing) views is not affected.

Re:Cultural sensitivities (2)

querist (97166) | more than 3 years ago | (#35832562)

Your proposal is interesting, but I can see some potential problems with it regarding the overall concept of free expression.
Let us consider a page on Facebook that is critical of Islam. Who would be considered appropriate to moderate that page? Most (if not all) Muslims would mark it inappropriate or offensive because it offends their beliefs, yet to Christians or others it may be considered informative and appropriate.
As a conservative Christian (I am not saying you are one), would you want your 13-year-old to have access to a page that actively promotes the homosexual lifestyle? I know many conservative Christians, given that I live in the "deep south", and I know they would find such a page offensive. Who is best to moderate those pages?
The idea of having the people who judge a page be those most likely to care is a good one, but you have to draw the line somewhere, or you will end up with too much censorship simply because people don't like their prophet being insulted or something like that.

A few ideas. (1)

Zandamesh (1689334) | more than 3 years ago | (#35830482)

I like your idea of crowd-sourcing, and I came up with a few ideas while reading yours:

Test the judgement of the moderators.

When mods are called upon to moderate something, make sure they have no way of knowing whether it's a real case or a test. That way you can, for example, ask a mod whether "MR. LANGAN IS A BUTT BRAIN," or any other post whose correct verdict is already known, should be modded or not. Since you know the right answer in advance, this lets you test the modding ability of the moderator.

Describe how they should judge something.

When reporting bad content you can usually choose what kind of bad content it is; for example, you could choose between:
Copyright infringement
Offensive
Advertising
(more things here)

It is important to distinguish between the kinds of bad content, because it allows you to judge both the users who report content and the mods who judge it. It could be that one user/mod can tell the difference between "offensive" and "non-offensive" content but not between "copyright infringing" and "non-copyright infringing" posts. By having this system in place, you could judge each user's/mod's credibility per category.

This system gives you an overview of how credible each user's complaints are, and by also keeping track of how each moderator mods things, you can select the right one for the job. This complements the first adjustment I mentioned, and vice versa.

Don't just randomly select moderators

I'm pretty sure Facebook and YouTube keep track of a whole lot of things about their users. When things get reported you could try to search for unbiased mods based on all the data they collect. For example, if one post gets reported 100 times from one particular geographical location, you could then look for mods from another location.

These are just the ideas off the top of my head; maybe they'll be useful.
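A rough sketch of the per-category credibility idea above; the category names, starting score, and step sizes are illustrative:

    from collections import defaultdict

    CATEGORIES = ('copyright', 'offensive', 'advertising')

    class Credibility:
        def __init__(self):
            self.score = defaultdict(lambda: 0.5)  # neutral starting credibility

        def record(self, category, agreed_with_final_verdict):
            delta = 0.02 if agreed_with_final_verdict else -0.05
            self.score[category] = min(1.0, max(0.0, self.score[category] + delta))

        def weight(self, category):
            return self.score[category]

    # When a 'copyright' report comes in, prefer jurors whose weight('copyright')
    # is high, even if their 'offensive' score is poor, and vice versa.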

Validation (0)

Anonymous Coward | more than 3 years ago | (#35830518)

You need to validate that the juror is a valid juror and not a random bot. For each juror, run them across a random subset of training cases (created by FB employees) inserted into their standard jury duty. If they vote the same way as the people who made the training cases, for a high enough percentage of cases (say 90%), then they are probably real people and you should count their votes when calculating whether to delete a group.
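A sketch of that validation step; the 20% mix-in rate is invented, and the 90% threshold is just the figure suggested above:

    import random

    def build_queue(real_cases, training_cases, training_fraction=0.2):
        """Mix known-answer training cases (written by staff) into a juror's queue."""
        n_train = max(1, int(len(real_cases) * training_fraction))
        queue = list(real_cases) + random.sample(training_cases, min(n_train, len(training_cases)))
        random.shuffle(queue)
        return queue

    def is_valid_juror(training_answers, threshold=0.9):
        """training_answers: list of (given_vote, expected_vote) for training cases only."""
        if not training_answers:
            return False
        hits = sum(1 for given, expected in training_answers if given == expected)
        # only jurors passing this check have their real votes counted
        return hits / len(training_answers) >= threshold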

Re:Validation (1)

Ruke (857276) | more than 3 years ago | (#35830922)

And hey, if the botters manage to build a bot which can execute good judgement in moderating other users, mission fucking accomplished [xkcd.com] .

Escalating Review Ratios (0)

Anonymous Coward | more than 3 years ago | (#35830536)

Scale it on the order of the ratio of hits to complaints: lots of views and lots of complaints mean more supervisory reviewers are needed to make the call.
Simple.
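One way to read that as code; the cutoffs below are arbitrary placeholders:

    def reviewers_needed(views, complaints):
        if complaints == 0:
            return 0
        if views > 100_000 and complaints > 100:
            return 9                      # heavily viewed and heavily flagged: big jury
        if complaints / max(views, 1) > 0.01:
            return 5                      # small audience but a high complaint rate
        return 3                          # routine case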

What do you expect? (1)

MikeRT (947531) | more than 3 years ago | (#35830542)

Most people are censorious by nature to one extent or another, and people tend to group together with like-minded individuals. Those two factors, plus a system that lets them trash those they dislike, are an inherent recipe for disaster.

The solution, if there is one, is a simple filter process:

1. One complaint per item.
2. Individual review of complaint.
3. If your complaint is blatantly unreasonable, you're banned.

Maybe if a member of "Saudi Flagger" got permabanned the first time they filed a false report, things would change a little.
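A sketch of that three-step filter; the in-memory sets and the 'blatantly_unreasonable' verdict string are stand-ins for whatever the real system would use:

    already_complained = set()   # (user_id, item_id) pairs
    banned_users = set()

    def file_complaint(user_id, item_id, reason):
        if user_id in banned_users:
            return 'ignored'
        if (user_id, item_id) in already_complained:
            return 'duplicate'                          # 1. one complaint per item
        already_complained.add((user_id, item_id))
        verdict = individual_review(item_id, reason)    # 2. individual review
        if verdict == 'blatantly_unreasonable':
            banned_users.add(user_id)                   # 3. unreasonable complaint: ban
        return verdict

    def individual_review(item_id, reason):
        return 'pending'   # placeholder for the human reviewer's call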

Give this guy $100! (1)

DrEasy (559739) | more than 3 years ago | (#35832732)

This reminds me of the video challenge system in pro tennis. If your challenge turns out to be bogus you lose one; otherwise you keep the right to challenge again. This makes you think twice before appealing, and will therefore reduce the review load.

Minor possible improvement: you get the right to enter a complaint only after a certain amount of time has passed since the creation of your account, or once you have reached a certain amount of activity; this will deter the creation of shill accounts, or at least increase the friction/cost (time, energy) of doing so.
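A sketch of the tennis-style budget plus the account-age gate; the three-challenge budget and the age/activity thresholds are invented for illustration:

    from datetime import datetime, timedelta

    class Complainer:
        def __init__(self, account_created, posts=0):
            self.account_created = account_created
            self.posts = posts
            self.challenges_left = 3          # like challenges in pro tennis

        def may_complain(self, min_age_days=30, min_posts=10):
            old_enough = datetime.utcnow() - self.account_created >= timedelta(days=min_age_days)
            return old_enough and self.posts >= min_posts and self.challenges_left > 0

        def record_outcome(self, complaint_upheld):
            if not complaint_upheld:
                self.challenges_left -= 1     # a bogus complaint costs a challenge
            # an upheld complaint leaves the budget intact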

Re:What do you expect? (1)

Sigma 7 (266129) | more than 3 years ago | (#35833736)

You missed a step:

4. If there's a large number of false complaints against a user (whether all at once or spread over time), it becomes harder to complain against that user. For example, you might need a minimum level of community participation (e.g., been around for at least 4 days and made 10 posts) or to have otherwise earned enough trust from the community.
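That extra step might look something like this; the 20-false-complaints trigger is invented, and the 4-day/10-post bar is just the example figure above:

    def may_file_against(target_false_complaints, complainer_age_days, complainer_posts):
        if target_false_complaints < 20:      # arbitrary trigger for "heavily false-flagged"
            return True                       # normal rules apply
        # the target has attracted many false complaints: require an established complainer
        return complainer_age_days >= 4 and complainer_posts >= 10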

| /dev/null (0)

Anonymous Coward | more than 3 years ago | (#35830568)

| /dev/null

You can send my $100 there too.

look at stack overflow model (1)

ftravers (937513) | more than 3 years ago | (#35830598)

Make people participate in your whole system, like Stack Overflow does. Gradually, as they build up reputation, give them more power. Then, if one of them does something odious, you can yank their privileges.

"Prior art" (1)

macraig (621737) | more than 3 years ago | (#35830604)

There is "prior art" for this idea, if you know where to look. OKCupid.com has has a crowd-sourced "flagmod" [okcupid.com] system for its Web site for years.

Don't judge the rules; judge the evidence (1)

ChrisGoodwin (24375) | more than 3 years ago | (#35830658)

Require that abuse reports include a freeform description of why the suspected content violates the rules.

Then, don't ask the jurors whether the content violates the rules. Instead, ask them: Is (the freeform description) a true statement about the suspected content?

In other words: someone reports content for violation of TOS. Reason: "This content contains nudity."

The juror then gets the moderation request and answers a single question: is "This content contains nudity" true of the suspected content?

You're not judging the rules; you're judging the evidence.
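A sketch of that evidence-first flow; the dictionary fields and the 'escalate_to_rules_check' outcome are hypothetical:

    def make_report(content_id, rule, claim):
        """claim: the reporter's freeform factual statement, e.g. 'This content contains nudity.'"""
        return {'content_id': content_id, 'rule': rule, 'claim': claim}

    def juror_question(report):
        # jurors never interpret the TOS; they only verify the factual claim
        return 'Is the following statement true about this content? "%s"' % report['claim']

    def verdict(report, juror_answers):
        """juror_answers: list of booleans (claim is true / not true)."""
        claim_is_true = sum(juror_answers) > len(juror_answers) / 2
        # only if the claim holds does anyone need to decide whether the rule applies
        return 'escalate_to_rules_check' if claim_is_true else 'dismiss'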

TL; DR (0)

myrmidon666 (1228658) | more than 3 years ago | (#35830928)

TL;DR

Use "Scores" (0)

Anonymous Coward | more than 3 years ago | (#35830974)

As I bet many meta-moderation algorithms already do, use floating-point scores to gauge each user's fitness either to report content or to moderate reports. People who report items that are usually taken down see their "credibility score" go up; conversely, if they report items that don't get taken down, their score goes down. The same goes for moderators: if a moderator's decisions usually agree with the final verdict, increase their credibility. Moderators with higher scores contribute more weight to the final decision, thereby rewarding the "elite" moderators and those who agree with them. The same applies to reporting content: reporters who report valid TOS violations gain credence as their decisions are upheld. With enough iterations, a solid base of moderation is achieved, and control of the "elite moderators" group (or of other users, for that matter) can be reduced to functions of their scores. (All of this while randomly offering users the chance to moderate for a specified (or random) amount of time, to keep special-interest groups from entrenching themselves among the "elite".) I would compare it to making the users their own neural network of moderation.
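A sketch of those floating-point scores for reporters and moderators; the multipliers and starting value are arbitrary:

    class Participant:
        def __init__(self):
            self.credibility = 1.0   # floating-point score, as suggested above

        def update(self, agreed_with_final_verdict):
            self.credibility *= 1.05 if agreed_with_final_verdict else 0.9

    def weighted_decision(moderator_votes):
        """moderator_votes: list of (Participant, bool take_down)."""
        pool = sum(m.credibility for m, _ in moderator_votes)
        for_takedown = sum(m.credibility for m, vote in moderator_votes if vote)
        decision = for_takedown > pool / 2
        for m, vote in moderator_votes:
            m.update(vote == decision)   # reward agreement with the final verdict
        return decision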

Aaron Barr's Sockpuppets (0)

Anonymous Coward | more than 3 years ago | (#35831008)

This will never work. All "they" need to do is set up a network of Facebook "personas" like Aaron Barr was proposing (and the US military was asking for, including the management software), and they can generate as many +1s or -1s as they want. For "Facebook" read also Amazon accounts, YouTube accounts and Slashdot accounts. And no, they would not all be coming from the same IP range or geographical area (as obtained through geolocation). I don't know if anybody has already coined the term "socknet", but that is exactly what it is.

User Accountability (1)

Pyus (2042218) | more than 3 years ago | (#35831070)

Let's say the offending material is extreme and obviously offensive. You want to send this image to 100 volunteers to be viewed and voted on? What if I have my pictorial guide to gutting and skinning a pig uploaded to my profile? 1,000 pictures... 100,000 volunteers just vomited a little in their throats. My account gets deleted, I create a new email address, and I'm back! :D

Exposing volunteers to REAL abusive images/videos/text is, IMHO, worse than trying to catch false positives. If you want to be a forum moderator, go for it, but being a content moderator for an international site with millions of users is a bit like opening your soul to all the evil in the world.

User Accountability - Facebook users' abuse reports should be confirmed on a random-sample basis, say every 10 abuse reports filed. If the sampled report is fake, the user should be dealt with on a step-based incident system: first incident, a warning; second and third incidents, a temporary suspension; fourth incident, full account suspension.
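A sketch of that sampling-plus-escalation scheme; the user dict and the report_is_genuine() placeholder are invented:

    PENALTIES = ['warning', 'temp_suspension', 'temp_suspension', 'full_suspension']

    def handle_report(user, report, audit_every=10):
        user['reports_filed'] = user.get('reports_filed', 0) + 1
        if user['reports_filed'] % audit_every != 0:
            return None                          # only every 10th report is audited
        if report_is_genuine(report):
            return 'ok'
        strikes = user['strikes'] = user.get('strikes', 0) + 1
        return PENALTIES[min(strikes - 1, len(PENALTIES) - 1)]

    def report_is_genuine(report):
        return True   # placeholder for a staff check of the sampled report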

Re:User Accountability (0)

Anonymous Coward | more than 3 years ago | (#35832664)

User Accountability - Facebook users' abuse reports should be confirmed on a random-sample basis, say every 10 abuse reports filed. If the sampled report is fake, the user should be dealt with on a step-based incident system: first incident, a warning; second and third incidents, a temporary suspension; fourth incident, full account suspension.

Then they'll just open a new account. Better to give them no feedback and just blackhole the abuse reports from violators. This accomplishes two things. First, it gives (real user) violators the emotional satisfaction of having bitched, while ensuring that they do no further harm. Second, it doesn't tip off the abusive sock-puppet violators as to which of their accounts have been burned.

(posted AC cuz I've moderated - number11)

FaceBook doesn't care (2)

curril (42335) | more than 3 years ago | (#35831234)

This is a well-studied "Who watches the watchers?" web-of-trust type of issue. While there is no perfect solution, there are a number of good approaches. This page on Advogato [advogato.org] describes a good trust metric for reducing the impact of spam and malicious attacks. It wouldn't be that big of a deal for FaceBook to incorporate some such system. However, it would require FaceBook to actually care about being fair to its users, which it doesn't. FaceBook exploits for financial gain the tribal desire of people to band together and be part of a group. So FaceBook really uses its abuse policy as a way to force people to follow the rules of the bigger and more aggressive tribes. Such battles actually help FaceBook to be successful, because they strengthen the tribal behaviors that benefit FaceBook's bottom line.

So all in all, no matter what brilliant, cost-effective, robust moderation/abuse system you design or crowd source, the very, very best that you can hope for is that somebody at FaceBook might pat you on the head and thank you for your efforts and say that they aren't interested in your contribution at this time.


I see... (1)

Charliemopps (1157495) | more than 3 years ago | (#35831692)

So you want free content provided by your customers... but that led to low-quality content. So you wanted free moderation provided by your customers... but that led to poor moderation. So you want free code and methodology to improve your moderation, provided by your customers... At some point the obvious has to just smack you upside the head.

My Suggestion... (2)

twistedsymphony (956982) | more than 3 years ago | (#35831770)

Why not "test" the jurors every so often to determine if they're really effective jurors?

It would work something like this: you would have a small group (employees of Facebook, or wherever) that takes select (actual) complaints and determines how their "ideal" juror would handle each one. Feed these at random to the jury pool, and if jurors aren't voting the way they should, reduce (or remove) their voting power in the decision-making process; alternatively, if they have a strong history of voting exactly the way they should, their votes would carry more weight in non-test cases.

I wouldn't necessarily "kick out" jurors, but their voting power could be diminished to nothing if they have a very poor track record... I also don't think that jurors should know that they're being tested, what their voting power is, or whether their vote carries more or less weight than anyone else's.
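A sketch of that weighting scheme; the weight bounds and step sizes are guesses, and jurors would never see any of these numbers:

    class TestedJuror:
        def __init__(self):
            self.weight = 1.0

        def grade_test_case(self, vote, ideal_vote):
            # called only for seeded test cases, which jurors can't distinguish from real ones
            if vote == ideal_vote:
                self.weight = min(2.0, self.weight + 0.1)
            else:
                self.weight = max(0.0, self.weight - 0.25)

    def weighted_tally(votes):
        """votes: list of (TestedJuror, choice) for a real (non-test) case."""
        totals = {}
        for juror, choice in votes:
            totals[choice] = totals.get(choice, 0.0) + juror.weight
        return max(totals, key=totals.get) if totals else None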

Excellent use of meta-moderation! (1)

jdp (95845) | more than 3 years ago | (#35832278)

Totally agree that people like Morozov write off crowdsourcing without understanding it. One of the things that's fascinating to me is that crowdsourcing systems in general haven't learned from Slashdot's success with meta-moderation. Evaluating abuse reports seems like a great application.

eh (1)

kelemvor4 (1980226) | more than 3 years ago | (#35832650)

He's helping kids circumvent security systems at their school to access banned sites and doesn't understand why he's getting complaints?

Here's your sign...

Improvements (1)

bioster (2042418) | more than 3 years ago | (#35833164)

Your basic approach does seem to be vulnerable to someone registering a large number of "sleeper" accounts that wait to be called in to be a juror about something they care about (perhaps an upcoming attack). To help counter this:
1. An account can't be selected as a juror unless it's been active for a minimum amount of time, with actual activity. (Say, a month.)
2. Jurors which consistently ignore their "duty" get dropped from the list.
You would also want to attempt to weed out vandals from your juror list:
3. Jurors which consistently vote counter to the majority get dropped from the list.
To handle borderline cases I would try:
4. For the crowd-sourcing system to function, a minimum 2/3rds majority is required.
5. If a 2/3rds majority isn't achieved, then a paid moderator looks at the complaint.
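Points 4 and 5 above as a small function; the vote labels are placeholders:

    def decide(votes, supermajority=2 / 3):
        """votes: list of 'remove' or 'keep' from eligible jurors."""
        if not votes:
            return 'escalate_to_paid_moderator'
        if votes.count('remove') / len(votes) >= supermajority:
            return 'remove'
        if votes.count('keep') / len(votes) >= supermajority:
            return 'keep'
        return 'escalate_to_paid_moderator'   # borderline case goes to a human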

Overestimating average Facebook user (1)

T-Bone1027 (1724172) | more than 3 years ago | (#35834052)

The biggest flaw in your algorithm is that you assume the average user actually knows the Facebook TOS. They don't. I'm not sure it would necessarily be a bad thing, but most "jurors" would certainly end up just making a judgment based on their own values.

Improvements (1)

Stashiv (2042490) | more than 3 years ago | (#35834372)

To further improve upon this, I have a few suggestions.
#1: Instead of the "jury" making the final decision, have the jury act as the initial buffer before the official complaint is registered for final review by Facebook themselves. This should reduce the amount of work Facebook has to do and so leave more time to properly investigate the claims and the corresponding group/user, etc. In this manner Facebook makes the official decision based on its ToS, instead of randomly selected people and their interpretation of FB's ToS.
#2: (This suggestion should probably be included with any option.) Further improve the algorithm to select candidates who appear to have no biased opinion on the matter at hand. E.g., a controversial group should not have a reviewer who is part of said group, or who has links to other groups either in favor of or against the particular group in question. We know FB mines enough data to deliver ads based on a user's preferences, hobbies, etc.; this could be worked into that.
#3: If option 1 were thrown to the wayside, have only "torn jury" decisions (those where the votes are too close in number) sent to FB for final review. Where a vote is torn there is a high risk that the individuals are either (a) interpreting the ToS incorrectly or (b) biased towards one vote or the other based on personal feelings, and escalation would allow FB (and various other sites, if this type of review process were used elsewhere) to make a final decision based on the actual ToS instead of an interpretation.
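Rough sketches of suggestions #2 and #3 above; the group-membership fields and the 10% "torn" margin are hypothetical:

    import random

    def unbiased_jurors(candidates, reported_group, related_groups, n=20):
        """Exclude candidates with an apparent stake in the reported group."""
        related = set(related_groups)
        def conflicted(user):
            memberships = set(user.get('groups', []))
            return reported_group in memberships or bool(memberships & related)
        pool = [u for u in candidates if not conflicted(u)]
        return random.sample(pool, min(n, len(pool)))

    def route(votes, margin=0.1):
        """A 'torn' jury (vote share close to 50/50) goes to Facebook staff."""
        if not votes:
            return 'staff_review'
        share = votes.count('remove') / len(votes)
        if abs(share - 0.5) < margin:
            return 'staff_review'
        return 'remove' if share > 0.5 else 'keep'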