Exploring multilayer networks in VR

Multilayer networks are an increasingly popular way to model complex relations between different types of entities, and they have been applied to a large number of real-world data sets. Their intrinsic complexity makes visualizing this type of network extremely challenging and still an open research area. To help the visual exploration of complex multilayer network structures, today we are releasing MNET-VR. MNET-VR is the output of a research project that I carried out with Leonard Maxim, based at the Networks, Data & Society research group of the IT University of Copenhagen and supported by the Digital Design Department. MNET-VR explores the potential of Virtual Reality for visualizing this type of network structure.

MNET-VR offers basic functions to visualize and filter multilayer network structures. At this stage, it does not offer the possibility to manipulate the network layout; a proper 3D layout of the network can be computed beforehand with the R package multinet. While its primary goal is the exploration of multilayer networks, MNET-VR can also be used to visualize single-layer networks using igraph and multinet. The files for visualization are exported through two simple R functions that we make available on the website. MNET-VR is designed for Oculus Rift/Oculus Quest with Link.

Here you can get a preview of how it works, even if the video format does not really give you the full experience:

MNET-VR Trailer from Leonard Maxim on Vimeo.

Reasonable & Wrong, technical solutions to social problems

Today Michele Coscia and I published a paper in the Journal of the Royal Society Interface, with the title “Distortions of Political Bias in Crowdsourced Misinformation Flagging” (originally the paper had a much better title, but reviewer 2 called it a “pointless tongue-twister” and you know how these things are…).
The paper analyses the assumptions behind online content moderation when it is based on users flagging the content they consider inappropriate. Using agent-based modelling, we show how those assumptions are wrong (and thus the strategy is doomed to fail) when users are expected to flag “fake news”.

The article is pretty nice: it has a lot of pictures and it’s open access, so you should check it out.

The article also comes with an interesting background story that is probably worth sharing as it contains a lesson we should always keep in mind: reasonable is not enough.

As part of the project “Patterns of Facebook Interactions around Insular and Cross-Partisan Media Sources in the Run-up to Italian Elections” (https://sites.google.com/uniurb.it/mine/home) I had access to the URL-shares dataset, which contains almost all the links that have been shared on Facebook. In addition to the links, the dataset also contains data about how each link performed on the platform, as well as whether the URLs had been submitted to third-party fact checkers. This is part of Facebook’s effort to fight fake news, and it basically works in the following way: when users read something on the platform that they think might be fake, they can report the content (they flag it). Facebook’s algorithms then try to remove what they believe to be random flags and, when a specific news item receives enough flags, it is submitted to external fact checkers who investigate the content.
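The pipeline just described can be sketched in a few lines of Python. Everything here – the flag threshold, the noise filter and its probability – is a hypothetical simplification of mine, not Facebook’s actual parameters or code:

```python
import random

# Toy sketch of the flagging pipeline: raw flags -> noise filter ->
# threshold check -> submission to external fact checkers.
FLAG_THRESHOLD = 10  # hypothetical number of surviving flags that triggers review

def filter_random_flags(flags, keep_prob=0.8):
    """Drop a fraction of flags, mimicking the platform's attempt to
    discard random or accidental flags (purely illustrative)."""
    return [f for f in flags if random.random() < keep_prob]

def should_fact_check(flags):
    """A news item is submitted to external fact checkers once the
    flags surviving the noise filter reach the threshold."""
    return len(filter_random_flags(flags)) >= FLAG_THRESHOLD

# Usage: one item with 15 raw flags; the decision is stochastic,
# since the noise filter drops each flag with some probability.
flags = [{"user": i} for i in range(15)]
decision = should_fact_check(flags)
```

The key point is that nothing in this pipeline models *who* flags and *why* – it implicitly treats every flag as an honest judgement of veracity.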
Digging into the list of what had been sent to external fact checking, we couldn’t help noticing that well-established mainstream newspapers largely dominated that list. And that is not the kind of source one would expect to find there. I discussed this with Michele and we came to the conclusion that this was, after all, an unavoidable consequence of the wrong assumptions Facebook made when it modelled users as “fake-news detectors”. Facebook’s approach, we thought, makes sense only if we ignore the social dynamics that we know exist and play a role in shaping online human behavior: more precisely, if we ignore users’ political polarization and homophily.

To test our intuition we built two agent-based models: one that works according to the ideal world that Facebook’s approach seems to assume (which we called monopolar) and one that accounts for political polarization and homophily in how the users are connected (which we called bipolar).
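To give a flavour of the difference between the two worlds, here is a deliberately crude Python sketch. The population size, the exposure rule approximating homophily, the tolerance band and all numeric values are illustrative assumptions of mine, not the paper’s actual models:

```python
import random

# Toy contrast between a "monopolar" world (neutral fake-news detectors)
# and a "bipolar" world (polarized agents embedded in a homophilic network).
random.seed(42)
N = 1000

def make_agents(bipolar):
    # Bipolar: each agent carries a political leaning in [-1, 1].
    # Monopolar: every agent is a neutral detector (leaning 0).
    if bipolar:
        return [random.choice([-1, 1]) * random.random() for _ in range(N)]
    return [0.0] * N

def flags_for_source(source_leaning, reach, agents, tolerance, bipolar):
    """Count the flags a (non-fake) source receives from the agents it reaches.

    reach: base fraction of agents exposed to the source. Homophily is
    approximated by scaling exposure down with ideological distance.
    """
    flags = 0
    for leaning in agents:
        distance = abs(leaning - source_leaning) / 2  # in [0, 1]
        exposure = reach * (1 - distance) if bipolar else reach
        if random.random() > exposure:
            continue  # never sees the source's content
        if bipolar and distance > tolerance:
            flags += 1  # flags content from outside its tolerance band
        # Monopolar agents flag only genuinely fake content (none here).
    return flags

agents = make_agents(bipolar=True)
# A large moderate outlet vs. a small extreme venue (illustrative numbers).
mainstream = flags_for_source(0.0, reach=0.8, agents=agents, tolerance=0.3, bipolar=True)
extreme = flags_for_source(0.9, reach=0.1, agents=agents, tolerance=0.3, bipolar=True)
```

Even this caricature reproduces the qualitative pattern: the moderate, high-reach outlet collects far more flags than the extreme venue, which mostly reaches agents who already agree with it.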

The results support our intuition, since the data produced by the bipolar model fits the real data on how news items are flagged on Facebook remarkably well: large, moderate mainstream news outlets are flagged much more than extreme venues (which are almost never flagged).

The explanation is pretty simple: in a polarized and homophilic social network, a user will be more tolerant of sources that are politically aligned and less tolerant of sources that belong to a different political sphere. At the same time, the circulation of information is highly affected by the homophily that defines the network structure, so users on one side of the political spectrum will rarely be exposed to content highly aligned with the opposite side. An unintended consequence is that moderate sources with large audiences will be among the few actually able to reach both ends of the network and, as soon as the level of tolerance for different opinions goes down, they will collect all the flags.

Obviously this work has all the limits of ABMs, and there are many additional dynamics we should incorporate (we are already working on a follow-up). Nevertheless, it works as an important reminder: we should never rush to implement a technical solution to a societal problem, ’cause in many cases reasonable and wrong can easily co-exist.

The Consequences of GDPR on Social Network Research

I’m very happy that our paper An Analysis of the Consequences of the General Data Protection Regulation on Social Network Research has now been accepted for publication in ACM Transactions on Social Computing [you can read a pre-print below or – soon – on arXiv]. I believe this can actually be a useful paper and not just a line to add to a CV (nothing wrong with that!).

The paper has a very hands-on approach, trying to address many of the actual concerns that people working with social network and social media data might have. Nevertheless, I think that treating the paper, as well as the GDPR, as just a list of dos and don’ts would mean missing an opportunity to think about the good that might come out of the GDPR. The GDPR offers computational researchers working with online data an opportunity to rethink our approach to data collection, data storage and data sharing, keeping in mind that behind those CSV files there are people – data subjects with rights – that we, lost as we are in our research, often fail to consider.
It is not (just) a matter of consent, privacy or API access. By forcing us to map the whole data flow and all the actors involved, the GDPR gives us the opportunity to pause and ask ourselves key questions – such as: How long will we store the data? What if we tried to obtain consent? etc. – even if we might not like the answers.

Talk @CPH Techfestival 2019

On Saturday, September 7th, I’m participating in Techfestival 2019, a 3-day festival in Copenhagen with 200+ events on humans and technology. There I’ll be coordinating a session on “The Future of Social Media Data Research: Let’s Do It Right When it Comes to Data, Privacy and Public-Interest”, where I’ll discuss how access to social media data for research has changed over the years, how various scandals have impacted it, and what I see as a general trend for the future.

Session presentation:
Social media has become a central data source for the study of individual as well as societal issues. The unprecedented scale and granularity of this type of data allow researchers to observe social dynamics as they unfold, and hold great potential. Nevertheless, recent scandals and data breaches, such as the Cambridge Analytica case, and the social media platforms’ reactions to them, have exposed the potential contradictions underlying this kind of research. Who holds the right to do public-interest research on this data? How can it be made freely available to researchers? Who should be held accountable?
This session will discuss these questions, which will play a central role in shaping the future of research in many fields.

Join me!