Abstract

The 2023 research publication "Generative Agents: Interactive Simulacra of Human Behavior," by researchers at Stanford and Google, established that large language models (LLMs) such as GPT-4 can power interactive agents with credible, emergent, human-like behaviors. However, their application to simulating human responses in cybersecurity scenarios, particularly social engineering attacks, remains unexplored. To address that gap, this study explores the potential of LLMs, specifically OpenAI's GPT-4 model, to simulate a broad spectrum of human responses to social engineering attacks that exploit human social behaviors, framing our primary research question: How do simulated human targets, characterized by the Big Five personality traits, respond to social engineering attacks? This study aims to provide valuable insights for organizations and researchers seeking to systematically analyze human behavior and identify the human qualities, as defined by the Big Five personality traits, that are most susceptible to social engineering attacks, specifically phishing emails. It also offers recommendations for the cybersecurity industry and policymakers on mitigating these risks. The findings indicate that LLMs can produce realistic simulations of human responses to social engineering attacks and highlight certain personality traits as more susceptible than others.
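
To make the setup concrete, the sketch below shows one way such a simulation could be driven: prompting GPT-4 to role-play a persona defined by Big Five trait scores and recording its reaction to a phishing email. This is a minimal illustration, not the study's actual pipeline; the trait encoding, prompts, and email text are assumptions introduced here for clarity.

```python
# Hypothetical sketch: have GPT-4 role-play a persona defined by Big Five
# trait scores and respond to a phishing email. The persona encoding,
# prompt wording, and email are illustrative, not the paper's methodology.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative persona: Big Five scores on a 1-5 scale (assumed encoding).
persona = {
    "openness": 4, "conscientiousness": 2, "extraversion": 5,
    "agreeableness": 5, "neuroticism": 3,
}

phishing_email = (
    "Subject: Urgent: Verify your payroll details\n"
    "Your account will be suspended unless you confirm your credentials "
    "at http://payroll-verify.example.com within 24 hours."
)

system_prompt = (
    "You are role-playing an office employee with these Big Five "
    f"personality scores (1-5): {persona}. "
    "Reply to the incoming email exactly as this person would, then "
    "state on a new line whether you clicked the link (YES or NO)."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": phishing_email},
    ],
)
print(response.choices[0].message.content)
```

Varying the persona's trait scores across many such runs would let a researcher compare simulated click-through behavior by personality profile, which is the kind of analysis the study's research question implies.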
