RESEARCHERS
FALAPPA Marcelo Alejandro
articles
Title:
Detecting Malicious Behavior in Social Platforms via Hybrid Knowledge- and Data-driven Systems
Author(s):
JOSÉ N. PAREDES; GERARDO I. SIMARI; M. VANINA MARTÍNEZ; MARCELO A. FALAPPA
Journal:
FUTURE GENERATION COMPUTER SYSTEMS
Publisher:
ELSEVIER SCIENCE BV
References:
Place: Amsterdam; Year: 2021
ISSN:
0167-739X
Abstract:
Among the wide variety of malicious behavior commonly observed in modern social platforms, one of the most notorious is the diffusion of fake news, given its potential to influence the opinions of millions of people who can be voters, consumers, or simply citizens going about their daily lives. In this paper, we implement and carry out an empirical evaluation of a version of the recently-proposed NetDER architecture for hybrid AI decision-support systems with the capability of leveraging the availability of machine learning modules, logical reasoning about unknown objects, and forecasts based on diffusion processes. NetDER is a general architecture for reasoning about different kinds of malicious behavior such as dissemination of fake news, hate speech, and malware, detection of botnet operations, and prevention of cyber attacks including those targeting software products or blockchain transactions, among others. Here, we focus on the case of fake news dissemination on social platforms by three different kinds of users: non-malicious, malicious, and botnet members. In particular, we focus on three tasks: (i) determining who is responsible for posting a fake news article, (ii) detecting malicious users, and (iii) detecting which users belong to a botnet designed to disseminate fake news. Given the difficulty of obtaining adequate data with ground truth, we also develop a testbed that combines real-world fake news datasets with synthetically generated networks of users and fully-detailed traces of their behavior throughout a series of time points. We designed our testbed to be customizable for different problem sizes and settings, and make its code publicly available to be used in similar evaluation efforts. Finally, we report on the results of a thorough experimental evaluation of three variants of our model and six environmental settings over the three tasks.
Our results clearly show the effects that the quality of knowledge engineering tasks, the quality of the underlying machine learning classifier used to detect fake news, and the specific environmental conditions have on smart policing efforts in social platforms.