NIPS 2017: Saturday December 9th 09:00 – 16:30

Democracy of information

Social media and other online sources play a critical role in distributing news and informing public opinion. Initially it seemed that democratising the dissemination of information and news through online media might be wholly good, but during the last year we have witnessed other, perhaps less positive, effects.

The reality of echo chambers

The algorithms that prioritise content for users aim to provide information that each user will 'like', in order to retain their attention and interest. These algorithms are now well tuned and are indeed able to match content to different users' preferences. As a result, users increasingly see content that aligns with their world view, confirms their beliefs, and supports their opinions; in short, content that maintains their 'information bubble', creating so-called echo chambers. Views have consequently often become more polarised rather than less, with people expressing genuine disbelief that fellow citizens could possibly countenance alternative opinions, be they pro- or anti-Brexit, pro- or anti-Trump. Perhaps the most extreme example is fake news, in which stories are fabricated expressly to satisfy and reinforce particular beliefs.
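To make this feedback loop concrete, here is a minimal toy sketch in Python; the one-dimensional 'opinion axis', the drift rate, and the ranking rule are all invented for exposition, and no real platform works exactly this way:

    import random

    random.seed(0)
    opinion = 0.2                                         # user's stance on a -1..1 axis
    items = [random.uniform(-1, 1) for _ in range(500)]   # political 'slant' of each item

    for step in range(20):
        # Engagement-optimised ranking: serve the ten items closest to the
        # user's current stance, since those are the most likely to be 'liked'.
        feed = sorted(items, key=lambda s: abs(s - opinion))[:10]
        # Exposure nudges the stance towards the feed average, closing the loop.
        opinion += 0.2 * (sum(feed) / len(feed) - opinion)

    # The feed ends up spanning a tiny slice of the full [-1, 1] opinion axis.
    print(f"stance {opinion:+.2f}; feed spans [{min(feed):+.2f}, {max(feed):+.2f}]")

Even in this crude model, the ranker and the user's drifting stance lock each other into an ever narrower band of content, which is the bubble the paragraph above describes.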

Results of polarised opinions

This polarisation of views cannot be beneficial for society. Since the successes of computer science, and more specifically machine learning, have contributed to this undesirable situation, it is natural to ask how online content might be prioritised in such a way that users remain satisfied with an outlet but are not led towards more extreme and polarised opinions.

What is the effect of content prioritisation, and more generally of the affordances of the social network, on the nature of discussion and debate? Social networks could potentially enable society-wide debate and collective intelligence. On the other hand, they could also encourage communal reinforcement by enforcing conformity within friendship groups: it takes a daring person to post an opinion at odds with the majority of their friends. Each design of content prioritisation may nudge users towards particular styles of both content-viewing and content-posting and discussion. What is the nature of the interaction between content presentation and users' viewing and debate?

Transparency of content

At one extreme, content may be prioritised 'transparently', according to users' explicit choices of what they want to see, combined with transparent community voting and moderators whose decisions can be questioned (e.g. Reddit). At the other extreme, content may be prioritised by proprietary algorithms that model each user's preferences and then predict what they want to see. What is the range of possible designs, and what are their effects? Could one design intelligent power-tools for moderators?

The online portal Reddit is a rare exception to the general rule, in that it has proven a popular site despite employing a more nuanced algorithm for the prioritisation of content. The algorithm was, however, apparently designed to manage traffic flows rather than to create a better balance of opinions; even its effect on prioritisation appears to be only partially understood or intended.
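For concreteness, below is a sketch of the 'hot' ranking function as it appeared in Reddit's formerly open-source codebase; the constants follow that widely cited version and may since have changed. Note that recency enters linearly while net votes enter only logarithmically, which supports the reading that the design targets traffic flow rather than opinion balance:

    from datetime import datetime, timezone
    from math import log10

    REDDIT_EPOCH = 1134028003  # reference timestamp used by the original code

    def hot(ups: int, downs: int, posted: datetime) -> float:
        """Reddit-style 'hot' score: log-scaled net votes plus a linear recency bonus."""
        score = ups - downs
        order = log10(max(abs(score), 1))          # 10x the net votes adds only +1
        sign = 1 if score > 0 else -1 if score < 0 else 0
        seconds = posted.timestamp() - REDDIT_EPOCH
        return round(sign * order + seconds / 45000, 7)

    # Every 45000 s (~12.5 h) of recency is worth a tenfold vote margin, so a
    # day-old post needs roughly 100x the votes of a fresh one to keep rank.
    old = hot(1000, 0, datetime(2017, 12, 8, tzinfo=timezone.utc))
    new = hot(10, 0, datetime(2017, 12, 9, tzinfo=timezone.utc))
    print(old, new)   # nearly tied despite the 100-fold difference in votes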

Redesigning algorithms

If we view social networks as implementing a large-scale message-passing algorithm that attempts to perform inference about the state of the world and about possible interventions and/or improvements, the current prioritisation algorithms create many (typically short) cycles. It is well known that message-passing inference on a graph that contains cycles may fail to converge to the correct solution, because information circulating around a cycle is counted repeatedly and so becomes incorrectly weighted. Perhaps a similar situation is occurring with the use of social media? Is it possible to model this phenomenon as an approximate inference task?
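As a self-contained toy illustration of this double counting (the graph, potentials, and 'evidence' are invented for exposition), the sketch below runs sum-product message passing on a three-variable cycle and compares the resulting beliefs with exact marginals computed by enumeration; the loopy beliefs come out overconfident, the analogue of opinions hardened by recirculated information:

    import itertools
    import numpy as np

    # Binary model on a triangle: p(x) prop. to prod_i phi_i(x_i) * prod_edges psi(x_i, x_j).
    psi = np.array([[3.0, 1.0],
                    [1.0, 3.0]])              # pairwise potential favouring agreement
    phi = [np.array([0.9, 0.1]),              # node 0 carries biased 'evidence'
           np.ones(2), np.ones(2)]
    edges = [(0, 1), (1, 2), (2, 0)]          # the cycle
    nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

    def exact_marginals():
        """Ground-truth marginals by brute-force enumeration of all 2^3 states."""
        p = np.zeros((3, 2))
        for x in itertools.product([0, 1], repeat=3):
            w = np.prod([phi[i][x[i]] for i in range(3)])
            w *= np.prod([psi[x[i], x[j]] for i, j in edges])
            for i in range(3):
                p[i, x[i]] += w
        return p / p.sum(axis=1, keepdims=True)

    def loopy_bp(iters=100):
        """Sum-product message passing run directly on the cyclic graph."""
        m = {(i, j): np.ones(2) for i in range(3) for j in nbrs[i]}
        for _ in range(iters):
            new = {}
            for (i, j) in m:
                inc = phi[i] * np.prod([m[(k, i)] for k in nbrs[i] if k != j], axis=0)
                msg = psi.T @ inc             # marginalise out x_i
                new[(i, j)] = msg / msg.sum()
            m = new
        b = np.array([phi[i] * m[(nbrs[i][0], i)] * m[(nbrs[i][1], i)] for i in range(3)])
        return b / b.sum(axis=1, keepdims=True)

    print("exact:\n", exact_marginals())   # node 0 marginal is [0.90, 0.10]
    print("loopy:\n", loopy_bp())          # node 0 belief ~ [0.93, 0.07]: overconfident

On a tree the same scheme would compute the marginals exactly; it is the cycle that lets node 0's evidence return to it and be counted again.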

The workshop

The workshop will provide a forum for the presentation and discussion of analyses of online content prioritisation, with emphasis on the biases that such prioritisation introduces and reinforces. Of particular interest are presentations that consider alternative ways of prioritising content which can be argued to reduce the negative side-effects of current methods while maintaining user loyalty.

Invited Speakers:

The following speakers have confirmed their attendance at the workshop:

  • Aristides Gionis, Aalto University
  • Delip Rao, Joostware AI Research and Johns Hopkins University
  • Suresh Venkatasubramanian, University of Utah
  • Andreas Vlachos, University of Sheffield

Organising Committee:

  • Nicolò Cesa-Bianchi, University of Milan
  • Marko Grobelnik, Jozef Stefan Institute
  • Massimiliano Pontil, Istituto Italiano di Tecnologia and University College London
  • Sebastian Riedel, University College London
  • Davor Orlic, Knowledge 4 All Foundation
  • John Shawe-Taylor, University College London
  • Chris Watkins, Royal Holloway
  • Emine Yilmaz, University College London

Schedule

09:00–10:00 Invited talk: Automating textual claim verification
Andreas Vlachos, University of Sheffield
10:30–11:10 Presentations
11:30–11:50 Spotlights
11:55–12:10 Poster session
Nicolò Cesa-Bianchi, University of Milan
12:10–13:00 Lunch break
13:00–14:00 Debate: Philosophy and ethics of defining, identifying, and tackling fake news and inappropriate content
Moderator: Chris Watkins, Royal Holloway
14:00–15:00 Invited talk: Political echo chambers in social media
Aristides Gionis, Aalto University
15:00–15:30 Coffee break
15:30–16:30 Debate: Reality around fake news
Moderator: Chris Watkins, Royal Holloway

Abstracts due October 23rd; acceptances sent November 7th

Within the past few years, social media have become dominant aggregators and distributors of news. Much public discussion has moved online to social media such as Facebook, Twitter, Reddit, and comment boards. Traditional newspapers and news channels have lost influence to new online forums with weaker editorial controls.

Perhaps as a result, fake news and lies spread fast and widely. Online political discussion is polarised and tribal. Echo chambers reinforce one-sided views without presenting any balancing alternatives. False rumours persist even when disproved.

One original promise of the Internet was that it would empower better democratic discussion, with wider participation and universal access to true news and argument. If this has not happened, what technologies can we build to achieve it?

The Best Paper Award carries a prize of US$1,000.

We invite contributions on any of these or related topics:

Fake news and fact checking:

  • Tracing widely distributed content back to its origins
  • Enhancing content in real time with fact checking
  • Proactive identification of news trends
  • Monitoring, detection, and moderation of polarizing topics

Presenting and organising content:

  • Defining ‘fair discussion’ and algorithmic fairness
  • Voting and reputation systems for collective evaluation and prioritisation
  • Identifying and mitigating tribalism and echo chambers
  • Algorithmic fairness in news retrieval and presentation

Abuse, hate speech, and illegal content:

  • Identifying abuse and breaches of rules for civil discussion
  • Detecting and mitigating tribalism
  • New tools for moderators

Collective intelligence:

  • How can the quality of online discussion be evaluated?
  • Improved technologies for online discussion that enable better collective intelligence
  • Models of online discussion as large-scale message passing analogous to graphical models

Please send an abstract of up to two pages summarising your contribution; one additional page of cited references is allowed. You may also attach full papers of any format or length to supplement your submission, but we do not guarantee to take these into account in the reviewing process. You may submit work that has previously been published, but please give details of how and where it was published. Our review process will not be blind: please submit your contributions as PDFs containing the authors' names.

We intend to make accepted contributions available online, linked to the workshop page. There will be no formal published proceedings.

Important Information

  • Submission deadline: October 23rd
  • Submissions are closed
  • Acceptance decisions sent: November 7th
  • Date of Workshop: December 9th 2017

Any enquiries should be sent to info@k4all.org.