Fake News or Filter Bubble?

This text is inspired by the call from the author of https://github.com/Administry/administry-of-truth
Wilm says: “recruiting pitch for #Anarchist #devs who want to help stop the spread of #fakenews on social media”

His stated goal is to create “a system that is capable of holding individuals, organizations, and media outlets accountable for their words and actions on the internet. A system that can be trusted because the code is open for all to view, and the methodologies employed rely solely on verifiable facts as the source of truth.”

There is virtually no policy part as of now (2018-03-14) — it is all about code. Here’s a brief exchange between me and the author on it:

  • Me: IMO you are missing the policy part. I.e. who and how is going to categorize channels as fake news. The tool itself, without policies rooted in ethics rooted in morality, is just a tool and can easily be used to censor the truth. If you believe in the collective wisdom of community, a communal reputation/moderation system should be sufficient.
  • Wilm: you’re absolutely right! We plan on using a combination of recognized fact checking sources (politifact, snopes) in the early stages (to be phased out) along with an Ai that will be trained by the community, and of course human intervention when the ai is not acting properly. A huge portion of this is going to be garnering trust from people, so I’m very interested is hearing your thoughts on the best way to do that if what I’ve said above isn’t satisfactory.

Source: https://social.coop/@wilm/99674606340771780

Wilm’s project is aimed (at least at the moment) at reddit discussions and countering “Russian bots”. In this text, I will try to take a more general approach: a preliminary model of a community-driven news moderation system/process.

So, there is a community that is built upon some principles. Those principles include some criteria, formal or not, distinguishing unwanted external input from wanted input. The reasons for that are irrelevant here, of course.
Now, first of all, there are questions we have to ask:

  1. What will be the level of agency of community members? Are they expected to participate in the process? How directly? Are they expected to interact with the system all the time, periodically or once-off at the start? Do we want to create a subset of the population to deal with moderation? On what basis?
  2. Do we want to build a hard filtering scheme, keeping unwanted input beyond the reach of community members, or do we want to let the content in, flagged as dubious? How do we want to process the information after it is moderated?
  3. How are we going to deal with changing needs and criteria? How are we going to maintain (i.e. review) the policies and filtering conditions? Do we want to create a filter bubble, or do we want the community “infosphere” to evolve somehow, by design?

Anarchist moderation — community in action

Wilm was calling for “anarchist devs”, and I subscribe to anarchist values as well. So, let’s try to answer these questions according to the anarchist way of thinking about community.
A community, in the anarchist mind, is above all a voluntary association of equal individuals. They are supposed to participate in all community processes directly, equally and continually. Except for technical functions, which are temporary (and possibly rotational), there are no exceptional positions. Also, as a way to prevent the creation of a technocratic caste, the most vital skills have to be learned by everyone, at least at a basic level (enough to call bullshit in case of need).

Starting from these assumptions, I see an appropriate system as several levels of flagging, with actual filtering occurring only at the individual level. With the exception of the blacklist, every input should be available to everyone on the same basis, marked with a string of flags from the various layers of the moderation system. The user (their personal filter, to be precise) would then decide what they want done with the content, based on the specific combination of flags.
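To make this concrete, here is a minimal sketch of such a “slip” as a data structure. All names and types are my own illustration, not part of Wilm’s project; the only point is that every moderation layer appends a flag, and filtering happens exclusively in the user’s own code:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Flag:
    """One opinion about a piece of content, emitted by a single moderation layer."""
    layer: str    # e.g. "blacklist", "whitelist", "external:snopes", "ai:memory", "user:<id>"
    verdict: str  # e.g. "pass", "warn", "block", "rated"
    score: float  # layer-specific value, here assumed to be in [0, 1], higher = more trusted

@dataclass
class Post:
    """A piece of incoming content together with its slip of flags."""
    content: str
    slip: List[Flag] = field(default_factory=list)

def personal_filter(post: Post, distrusted_layers: set) -> str:
    """Purely individual decision: nothing upstream hides content, only this does."""
    if any(f.layer in distrusted_layers and f.verdict == "block" for f in post.slip):
        return "hide"
    return "show"
```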

In his text, Wilm writes: “We plan on using a combination of recognized fact checking sources (politifact, snopes) in the early stages (to be phased out) along with an Ai that will be trained by the community, and of course human intervention when the ai is not acting properly.”

From my perspective, such a process could look as follows:

Community level

  1. An incoming piece of information (say, a post) is checked against the blacklist criteria; if it matches, the content is put into the blacklist bucket (and the on-call moderator is notified).
  2. Then the system checks the input against several static/external channels:
    1. Internal whitelist
    2. External verification services (Snopes, for example)
    3. Any other policy-driven criteria (keywords, number of links, etc.)
  3. Here comes the fun part. The system’s AI starts checking community memory:
    1. Previous occurrences of this very input in the system and any scoring it received.
    2. Occurrences of similar inputs in the system: checking originality and any scoring similar posts received.
  4. Then the input is released to the community feed (a rough code sketch of this pass follows the list).
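Reusing the `Post` and `Flag` types from the earlier sketch, the four steps above might be wired together roughly like this. The check functions are passed in as parameters because their actual implementations (blacklist matching, Snopes lookups, the AI memory search) are exactly the policy-dependent parts this text leaves open:

```python
from typing import Callable, Iterable, List

def community_level_pass(
    post: Post,
    blacklist_check: Callable[[Post], bool],
    static_checks: Iterable[Callable[[Post], Flag]],  # whitelist, external services, policy rules
    memory_checks: Iterable[Callable[[Post], Flag]],  # AI lookups: exact and similar occurrences
    blacklist_bucket: List[Post],
    community_feed: List[Post],
) -> None:
    # 1. Blacklist: matching content goes into the blacklist bucket and stops here
    #    (this is also where the on-call moderator would be notified).
    if blacklist_check(post):
        blacklist_bucket.append(post)
        return
    # 2. Static/external channels and 3. community memory: each check only appends
    #    a flag to the slip; nothing is filtered out at this level.
    for check in list(static_checks) + list(memory_checks):
        post.slip.append(check(post))
    # 4. Release to the community feed, flags attached.
    community_feed.append(post)
```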

User level

So, a user is presented with a piece of content and a “slip”: a string of flags attached to it. These flags represent opinions about the content, coming from various sources, internal or external, including the community’s collective memory, represented by historical overall scoring values.

The idea is that each slip is archived with the content piece and can be retrieved (memory fading curve permitting) in detail if needed. Also, the slip includes individual values: scoring coming from every user who cared to rate the content. This value should also always be editable by the user who entered it.
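A small sketch of how a user’s own rating could stay editable: it simply replaces any earlier flag that same user left on the slip (the `user:` layer naming is my own convention from the sketches above):

```python
def set_user_score(post: Post, user_id: str, score: float) -> None:
    """Add or overwrite this user's personal rating on the post's slip."""
    # Drop any previous rating by the same user, then append the new one,
    # so the value remains editable for as long as the slip is archived.
    post.slip = [f for f in post.slip if f.layer != f"user:{user_id}"]
    post.slip.append(Flag(layer=f"user:{user_id}", verdict="rated", score=score))
```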

The individual filter should be defined by the individual user and should allow grouping messages into at least three categories: white, gray and black. The process is similar to the community-level flagging. The user can define their own whitelist and blacklist, and their trusted sources. The important thing is that the user can also set a list of other community members as trusted and include their assessments as part of the filtering criteria. The user’s own past decisions regarding similar content will also be extrapolated and added as a scoring value.

The ultimate goal of this process is to put a message into one of three metaphorical boxes: white, gray or black. By default, white box messages would be listed in the user’s feed. Gray box content would be listed in a separate feed, to be manually assessed. The black box would only be listed as a count of messages and be available upon request. For inquisitive minds, it can be combined with a process to drop a small random sample from the black box into the main feed — as a harmless crack in the filter bubble. 🙂
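Here is a sketch of the box assignment, including the random “crack in the filter bubble”. The thresholds, the flat averaging of scores and the 2% leak probability are purely illustrative assumptions, not anything proposed above:

```python
import random

CRACK_PROBABILITY = 0.02  # assumed chance that a black-box item leaks into the main feed

def sort_into_box(post: Post, trusted_users: set) -> str:
    """Assign a post to the white, gray or black box for one user's feed."""
    # Community-level flags plus ratings left by members this user trusts.
    scores = [f.score for f in post.slip if not f.layer.startswith("user:")]
    scores += [f.score for f in post.slip
               if f.layer.startswith("user:") and f.layer.split(":", 1)[1] in trusted_users]
    avg = sum(scores) / len(scores) if scores else 0.5  # no information: treat as gray
    if avg >= 0.7:
        return "white"
    if avg <= 0.3:
        # The harmless crack in the filter bubble: occasionally surface a black-box item.
        return "white" if random.random() < CRACK_PROBABILITY else "black"
    return "gray"
```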

The final stage of the process is the user’s manual scoring. By default it would apply to gray box content, but the user can add their scoring (by reassigning a message to another box) in every case. A very important aspect is, of course, training the filter AI and sending the message back for policy-making purposes.
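Finally, a sketch of how that manual reassignment could double as the feedback channel, reusing `set_user_score` from above; how the logged examples are consumed (retraining the AI, reviewing policies) is deliberately left open:

```python
from typing import List, Tuple

# Illustrative mapping from a box to a numeric rating, matching the [0, 1] scores above.
BOX_SCORE = {"white": 1.0, "gray": 0.5, "black": 0.0}

def manual_reassign(post: Post, user_id: str, new_box: str,
                    training_log: List[Tuple[str, str]]) -> None:
    """The user overrides the automatic sorting; the override becomes a training signal."""
    set_user_score(post, user_id, BOX_SCORE[new_box])
    # Send the decision back: this is the data that trains the filter AI and
    # feeds the community's policy review.
    training_log.append((post.content, new_box))
```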

Endgame

Ideally, the system I am trying to describe would balance users’ agency, AI-implemented community policies and feedback from changing community preferences. Combined with practically-oriented policies and their maintenance, it can become a valuable tool for expressing community preferences and needs regarding information input.
