Modeling user personality traits from aesthetic preference on multiple images

Valese A.
2024

Abstract

In recent years, people have been spending more and more time on social media. Within the multimedia content hosted by these platforms, visual material is growing in importance. Interaction data reveals users' favourite images, and this information could be exploited to gain deeper insight into their psychological profile, since the literature on automatic personality recognition suggests that personality traits may correlate with aesthetic preferences. In this paper we explore the use of personal preferences over multiple images to predict users' personality traits. Unlike previous works, we propose a model that exploits ResNet50, a Convolutional Neural Network, to automatically extract features from the images in the PsychoFlickr dataset. We then fit five independent linear regressors on these features to predict personality. To determine whether using more than one image leads to better results, we train the model multiple times, using one to five images as input, and compare the performances. Our method appears to outperform related state-of-the-art works.
UMAP 2024 - Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization
32nd Conference on User Modeling, Adaptation and Personalization, UMAP 2024
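As an illustration of the pipeline described in the abstract, the following is a minimal sketch, not the authors' released code. It assumes ResNet50 is used as a frozen ImageNet-pretrained feature extractor, that features from a user's one to five favourite images are averaged into a single vector, and that the five regressors target the Big Five traits annotated in PsychoFlickr; all function names, paths, and labels are hypothetical.

```python
# Sketch of the described pipeline: ResNet50 features from a user's favourite
# images, averaged, then five independent linear regressors (one per trait).
# Dataset loading, image paths, and trait labels are assumed/hypothetical.
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.linear_model import LinearRegression

# ResNet50 backbone used as a frozen feature extractor (2048-d pooled features).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()  # drop the classification head
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def user_features(image_paths):
    """Average ResNet50 features over a user's k favourite images (k = 1..5)."""
    feats = []
    for path in image_paths:
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        feats.append(backbone(x).squeeze(0).numpy())
    return np.mean(feats, axis=0)

# Assumed trait ordering for the five regressors (Big Five).
TRAITS = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]

def fit_trait_regressors(X_train, y_train):
    """Fit one independent linear regressor per trait; y_train has shape (n_users, 5)."""
    return {t: LinearRegression().fit(X_train, y_train[:, i]) for i, t in enumerate(TRAITS)}
```

With this setup, the effect of the number of input images can be probed by re-computing `user_features` over one to five images per user and re-fitting the regressors for each setting, mirroring the comparison described in the abstract.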
Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3540790