Abstract
This study investigates the types of English metaphors that Alexa can accurately interpret and compares her metaphor comprehension processes to those of humans. The study involved eighteen English language students from the University of Jordan, all with an Academic IELTS band score of 6.5 or above. Data were collected through a metaphor interpretation test administered to both the human participants and the Alexa virtual assistant. The test consisted of fifteen metaphors divided into three categories: five orientational, five ontological, and five structural metaphors. The results revealed that Alexa surpassed the human participants in interpreting metaphors, achieving an accuracy rate of 93.3%, compared to 64.8% for the students. Despite her higher overall interpretation rate, Alexa exhibited difficulty with structural metaphors, which involve abstract concepts. Human participants likewise struggled with this category, achieving a success rate of 74%, the lowest among the three categories. These results indicate that structural metaphors are challenging for both AI and humans. The findings highlight the complexities of interpreting structural metaphors and the limitations shared by humans and AI. These insights contribute to the broader discourse on AI's ability to process figurative language and offer valuable implications for advancements in NLP and human-computer interaction.