Organized by
Knowledge 4 All Foundation

Social media and other online media sources play a critical role in distributing news and informing public opinion. At first it seemed that democratising the dissemination of information and news through online media might be wholly good, but during the last year we have witnessed other, perhaps less positive, effects.

The algorithms that prioritise content for users aim to provide information that will be ‘liked’ by each user in order to retain their attention and interest. These algorithms are now well tuned and are indeed able to match content to different users’ preferences. This has meant that users increasingly see content that aligns with their world view, confirms their beliefs, and supports their opinions; in short, content that maintains their ‘information bubble’. As a result, views have often become more polarised rather than less, with people expressing genuine disbelief that fellow citizens could possibly countenance alternative opinions, be they pro- or anti-Brexit, pro- or anti-Trump.
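
To make this feedback loop concrete, here is a minimal simulation, an illustrative sketch rather than a description of any real platform, in which all parameters and the like-model are assumptions. A ranker that greedily shows the items with the highest estimated ‘like’ rate starts with a diverse feed but, as feedback accumulates, converges to showing only items close to the user’s own viewpoint:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_rounds, k = 200, 40, 10
item_view = rng.uniform(-1, 1, n_items)  # each item's position on an opinion axis
user_view = 0.4                          # the user's true position (assumed)

# Beta(1, 1) prior on each item's like-rate, updated from observed feedback.
alpha = np.ones(n_items)
beta = np.ones(n_items)

for t in range(n_rounds):
    p_hat = alpha / (alpha + beta)       # posterior mean like-rate per item
    shown = np.argsort(-p_hat)[:k]       # greedy: show the top-k predicted items
    for i in shown:
        liked = abs(item_view[i] - user_view) < 0.3  # user likes nearby views
        alpha[i] += liked
        beta[i] += not liked
    if t in (0, n_rounds - 1):
        print(f"round {t:2d}: spread of viewpoints shown = "
              f"{item_view[shown].std():.2f}")
# The printed spread shrinks: the feed narrows to the user's 'information bubble'.
```

Replacing the greedy `argsort` with, say, Thompson sampling from the Beta posteriors keeps some diversity in the feed, which hints at one family of mitigations the workshop invites.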

This polarisation of views cannot be beneficial for society. Since the success of Computer Science, and more specifically Machine Learning, has led to this undesirable situation, it is natural that we should now ask how online content might be prioritised in such a way that users remain satisfied with an outlet but are not led to more extreme and polarised opinions.

What is the effect of content prioritisation, and more generally of the affordances of the social network, on the nature of discussion and debate? Social networks could potentially enable society-wide debate and collective intelligence. On the other hand, they could also enforce conformity within friendship groups, in that it takes a daring person to post an opinion at odds with the majority of their friends. Each design of content prioritisation may nudge users towards particular styles of viewing content, posting content, and discussing it. What is the nature of the interaction between content presentation and users’ viewing and debate?

At one extreme, content may be prioritised ‘transparently’, according to users’ explicit choices of what they want to see, combined with transparent community voting and moderators whose decisions can be questioned (e.g. Reddit). At the other extreme, content may be prioritised by proprietary algorithms that model each user’s preferences and then predict what they want to see. What is the range of possible designs, and what are their effects? Could one design intelligent power-tools for moderators?

The online portal Reddit is a rare exception to the general rule, in that it has proven a popular site despite employing a more nuanced algorithm for the prioritisation of content. The approach was, however, apparently designed to manage traffic flow rather than to create a better balance of opinions. Even for this algorithm, then, the effect on prioritisation appears to be only partially understood or intended.
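
For concreteness, the ‘hot’ ranking from the open-sourced reddit codebase can be sketched as follows (the deployed site may well have changed since this code was published; this is the widely documented historical version):

```python
from datetime import datetime, timezone
from math import log10

# reddit's epoch: Unix time 1134028003, i.e. 2005-12-08 07:46:43 UTC.
EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot(ups: int, downs: int, posted: datetime) -> float:
    """The historical reddit 'hot' score for a submission."""
    score = ups - downs
    order = log10(max(abs(score), 1))            # vote magnitude, log-scaled
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = (posted - EPOCH).total_seconds()   # age relative to the epoch
    return round(sign * order + seconds / 45000, 7)
```

Votes enter only logarithmically while recency enters linearly: a tenfold difference in net score offsets exactly 45000 seconds, i.e. 12.5 hours, of age. This supports the reading that the design rotates fresh content through the front page, managing traffic flow rather than balancing opinions.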

If we view social networks as implementing a large-scale message-passing algorithm that attempts to perform inference about the state of the world and about possible interventions and/or improvements, then the current prioritisation algorithms create many (typically short) cycles. It is well known that inference based on message passing can fail to converge to an optimal solution when the underlying graph contains cycles, because information then becomes incorrectly weighted: a message a node sends out can return to it and be counted as independent evidence. Perhaps a similar situation is occurring with the use of social media?
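
To make the cycle problem concrete, the sketch below (our illustration, with made-up potentials) runs sum-product message passing on a three-variable cycle and compares the resulting belief with the exact marginal. The evidence attached to node 0 travels around the loop and returns to reinforce itself, so the converged belief is more extreme than the truth, much as an opinion can be amplified by echoing around a social network:

```python
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (2, 0)]        # a cycle of three binary variables
neighbours = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
psi = np.array([[2.0, 1.0],             # pairwise potential that
                [1.0, 2.0]])            # favours agreement
phi = {0: np.array([2.0, 1.0]),         # evidence attached to node 0 only
       1: np.ones(2), 2: np.ones(2)}

# Exact marginal of x0 by brute-force enumeration.
w = np.zeros(2)
for x in itertools.product([0, 1], repeat=3):
    weight = np.prod([phi[i][x[i]] for i in range(3)])
    weight *= np.prod([psi[x[i], x[j]] for i, j in edges])
    w[x[0]] += weight
exact = w / w.sum()

# Loopy sum-product: m[(i, j)] is the message from node i to node j.
m = {(i, j): np.ones(2) for i in range(3) for j in neighbours[i]}
for _ in range(100):
    new = {}
    for (i, j) in m:
        incoming = phi[i].copy()
        for k in neighbours[i]:
            if k != j:
                incoming *= m[(k, i)]
        msg = psi @ incoming            # psi is symmetric, so orientation is safe
        new[(i, j)] = msg / msg.sum()
    m = new

belief = phi[0].copy()
for k in neighbours[0]:
    belief *= m[(k, 0)]
belief /= belief.sum()

print("exact marginal P(x0):", exact)   # ~[0.667, 0.333]
print("loopy BP belief:     ", belief)  # ~[0.678, 0.322]: evidence double-counted
```

On this tiny example the distortion is mild (0.68 versus 0.67), but the miscounting grows with the density of short cycles, which is plausibly the regime social networks operate in.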

The workshop will provide a forum for the presentation and discussion of analyses of online prioritisation, with emphasis on the biases that such prioritisation introduces and reinforces. Particular interest will be placed on presentations of alternative ways of prioritising content that can be argued to reduce the negative side-effects of current methods while maintaining user loyalty.

Contributions

Theoretical or practical contributions are welcome in relevant areas, including but not limited to the following directions:

Analysis of media:

  • Detection and prediction of emerging trends;
  • Detection of tribalism among online personas;
  • Detection of trolling, abuse, and fake news;
  • Intelligent tools for discussion moderators

Enhancement of content:

  • Automatic fact-checking;
  • Annotation according to viewpoint;
  • Visualisation and navigation of large online discussions

Enhancement of discussion:

  • Adapting improved message-passing algorithms for social media;
  • Gamification architectures and their effects on discussion;
  • Mitigation of tribalism

Algorithmic fairness:

  • What does it mean for an algorithm to be ‘fair’ in content prioritisation?

Organisers
  • Nicolo Cesa-Bianchi, Università degli Studi di Milano
  • Massimiliano Pontil, Istituto Italiano di Tecnologia and University College, London
  • John Shawe-Taylor, University College, London
  • Chris Watkins, Royal Holloway, London
  • Emine Yilmaz, University College, London

Programme

10:00‑10:30 Welcome

John Shawe-Taylor, Director of K4A and Head of the Computer Science Department, University College London

10:30‑11:15 Auditing Search Engines for Differential Satisfaction Across Demographics + discussion
Rishabh Mehrotra, University College London
11:15‑11:45 Fake News Challenge + discussion
Sebastian Riedel, University College London
11:45‑12:30 Reddit and its medical uses + discussion
Chris Watkins, Royal Holloway, London
12:30 Lunch
13:00‑13:45 Machine Learning problems related to Global Media Monitoring
Marko Grobelnik, Jozef Stefan Institute
13:45‑14:00 Report on actions taken by companies in response to fake news issue
Davor Orlic, Knowledge 4 All
14:00‑15:00 How to improve comment boards with curated discussion
Chris Watkins, Royal Holloway, London
15:00‑16:30 Discussion of workshop proposal and next steps
  • Chris Watkins, Royal Holloway, London
  • Emine Yilmaz, University College, London
  • Sebastian Riedel, University College, London
  • Andreas Vlachos, University of Sheffield
  • Marko Grobelnik, Jozef Stefan Institute