ICIC   25583
INSTITUTO DE CIENCIAS E INGENIERIA DE LA COMPUTACION
Executing Unit (Unidad Ejecutora - UE)
Articles
Title:
Detecting malicious behavior in social platforms via hybrid knowledge- and data-driven systems
Author(s):
MARTINEZ, MARIA VANINA; PAREDES, JOSÉ N.; FALAPPA, MARCELO A.; SIMARI, GERARDO
Journal:
FUTURE GENERATION COMPUTER SYSTEMS
Publisher:
ELSEVIER SCIENCE BV
Reference:
Place: Amsterdam; Year: 2021; vol. 125, pp. 232-246
ISSN:
0167-739X
Abstract:
Among the wide variety of malicious behavior commonly observed in modern social platforms, one of the most notorious is the diffusion of fake news, given its potential to influence the opinions of millions of people who can be voters, consumers, or simply citizens going about their daily lives. In this paper, we implement and carry out an empirical evaluation of a version of the recently proposed NetDER architecture for hybrid AI decision-support systems, which is capable of leveraging machine learning modules, logical reasoning about unknown objects, and forecasts based on diffusion processes. NetDER is a general architecture for reasoning about different kinds of malicious behavior, such as the dissemination of fake news, hate speech, and malware; the detection of botnet operations; and the prevention of cyberattacks, including those targeting software products or blockchain transactions. Here, we focus on the case of fake news dissemination on social platforms by three different kinds of users: non-malicious, malicious, and botnet members. In particular, we focus on three tasks: (i) determining who is responsible for posting a fake news article, (ii) detecting malicious users, and (iii) detecting which users belong to a botnet designed to disseminate fake news. Given the difficulty of obtaining adequate data with ground truth, we also develop a testbed that combines real-world fake news datasets with synthetically generated networks of users and fully detailed traces of their behavior throughout a series of time points. Our testbed is customizable for different problem sizes and settings, and its code is publicly available for use in similar evaluation efforts. Finally, we report on the results of a thorough experimental evaluation of three variants of our model and six environmental settings over the three tasks.
Our results clearly show the effects that the quality of knowledge engineering tasks, the quality of the underlying machine learning classifier used to detect fake news, and the specific environmental conditions have on smart policing efforts in social platforms.
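To make the testbed idea concrete, the following is a minimal, purely illustrative sketch of how a synthetic network of the three user types with time-stamped posting traces could be generated. All function names, parameters, and behavioral rates here are hypothetical assumptions for exposition; this is not the authors' publicly released testbed code.

```python
import random

# Illustrative assumption: three user types, as described in the abstract.
USER_TYPES = ("non_malicious", "malicious", "botnet")

def generate_testbed(n_users=100, n_time_points=10, botnet_fraction=0.1,
                     malicious_fraction=0.2, seed=0):
    """Generate a synthetic user population and a trace of fake-news posts
    over discrete time points. All rates are illustrative placeholders."""
    rng = random.Random(seed)

    # Assign each user one of the three types at random.
    users = []
    for uid in range(n_users):
        r = rng.random()
        if r < botnet_fraction:
            utype = "botnet"
        elif r < botnet_fraction + malicious_fraction:
            utype = "malicious"
        else:
            utype = "non_malicious"
        users.append({"id": uid, "type": utype})

    # Behavioral model (assumed): botnet members post fake articles in
    # coordinated bursts, malicious users post them independently at a high
    # rate, and non-malicious users do so only rarely (e.g., by mistake).
    trace = []
    for t in range(n_time_points):
        for u in users:
            if u["type"] == "botnet":
                posts_fake = (t % 3 == 0)        # coordinated burst every 3 steps
            elif u["type"] == "malicious":
                posts_fake = rng.random() < 0.4
            else:
                posts_fake = rng.random() < 0.05
            if posts_fake:
                trace.append({"time": t, "user": u["id"], "label": "fake"})
    return users, trace

users, trace = generate_testbed()
```

Because the generator is seeded, the same settings reproduce the same network and trace, which is what makes a testbed like this usable for controlled comparisons across model variants and environmental settings.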