
Lie to Me: Lying Virtual Agents

Henrique Daniel Santarém Reis

Dissertação para obtenção do Grau de Mestre em Engenharia Informática e de Computadores

Júri

Presidente: Professor Doutor Joaquim Armando Pires Jorge

Orientador: Professora Doutora Ana Maria Severino de Almeida e Paiva

Vogal: Professor Doutor Nuno Manuel Mendes Cruz David

Maio 2012


Acknowledgements

First I would like to thank IST-Taguspark, INESC-ID and GAIPS for the conditions which enabled me to develop my work, and my advisor Ana Paiva, for all the support, motivation and the opportunity to work at GAIPS.

I would also like to thank everyone at GAIPS for their help throughout my work. A special thanks to João Dias for following my work closely, for the new ideas and discussions, and for supporting me in the hardest moments. Another special thanks to Henrique Campos, Joana Almeida and André Carvalho for the great companionship, support and for enduring this journey with me.

Finally, to my parents and girlfriend, who provided the greatest amount of motivation.

Lisboa, May 14th 2012

Henrique Reis


To my family and girlfriend for all their

support and patience


Resumo

O objectivo da nossa tese foi o de dotar agentes virtuais com a capacidade de modelar a mente de outros agentes e usar essa informação no seu raciocínio. Estendemos uma arquitectura já existente com um nível de Teoria da Mente para N níveis. Quisemos avaliar se agentes capazes de utilizar mais do que um nível seriam percepcionados como sendo mais inteligentes em comparação com agentes que apenas usam um nível, focando-nos na noção de engano. Revimos conceitos-chave sobre este assunto utilizando trabalhos específicos da área de Psicologia que se focavam em enganar, mentira e Teoria da Mente, assim como trabalhos que tinham como objectivo implementar sistemas num contexto parecido com este. Descrevemos o modelo conceptual que desenhámos para alcançar o objectivo a que nos propusemos, seguindo depois para a descrição de como foi implementado. Para avaliar o nosso trabalho desenvolvemos um cenário de estudo baseado num jogo de bluff e mentira, fazendo duas versões, uma com um nível de Teoria da Mente e outra com dois. Gravámos a aplicação a correr e fizemos vídeos de demonstração para avaliar a diferença do comportamento do agente mentiroso nas duas versões. A avaliação foi composta por testes preliminares, de simulação e através de um questionário final, sendo esta última parte feita com 60 participantes. Os nossos resultados demonstraram-se consistentes com a nossa hipótese de que quanto maior o nível de abstracção de Teoria da Mente, melhor os agentes conseguem realizar tarefas que implicam enganar outrem.


Abstract

The objective of this thesis was to endow agents with the ability to model the minds of other agents and to use this information in their reasoning process. We wanted to assess whether agents capable of reasoning with more than one level of Theory of Mind would be perceived as more socially intelligent than agents using only one level, focusing our work on the generation of deceptive behaviour. Key concepts regarding this topic were reviewed, namely research in the field of Psychology focused on deception, telling lies and Theory of Mind, in addition to works which tried to implement systems with a similar goal. We proceeded by describing the conceptual model designed to achieve our proposed goal, based on Baron-Cohen's Mindreading model and extended to N levels of Theory of Mind. An implementation specification was then described, explaining how we implemented our model using an existing autonomous agent architecture and other frameworks. For the evaluation stage we developed a case study based on a deception game and built two versions of the liar: one with a first-level and one with a second-level Theory of Mind. We recorded the application running and made demonstration videos to evaluate the differences in the deceptive behaviour. The evaluation itself was composed of preliminary tests, simulation tests and a final questionnaire, the latter performed with 60 participants. The results were consistent with our hypothesis that the higher the Theory of Mind abstraction level, the better agents can perform deceptive tasks.


Palavras Chave / Keywords

Palavras Chave

Teoria da Mente

Engano

Agentes Autónomos

Sociedade de Agentes

Inteligência Artificial

Keywords

Theory of Mind

Deception

Autonomous Agents

Society of Agents

Artificial Intelligence


Contents

1 Introduction
 1.1 Motivation
 1.2 Problem
 1.3 Document Outline

2 Background
 2.1 Definitions
 2.2 Lie Taxonomy
 2.3 Lie - An everyday event
 2.4 Lie - Motivation and Rewards
 2.5 Who Lies
 2.6 Cues to Deception
 2.7 Theory of Mind
  2.7.1 Mindreading
  2.7.2 Concluding Remarks

3 Related Work
 3.1 Theory of Mind Implementation Approaches
 3.2 Wagner & Arkin - Deception in Robots
 3.3 Castelfranchi's GOLEM
 3.4 Mouth of Truth

4 A Mindreading Architecture
 4.1 Agent Model Overview
 4.2 Model of Other and ToMM Component
 4.3 From First to Nth level Theory of Mind
 4.4 EED and SAM Mechanisms
 4.5 Creating and Updating Models
 4.6 Deliberation and Means-Ends Reasoning
 4.7 Concluding Remarks

5 Implementation
 5.1 FAtiMA Modular Architecture
  5.1.1 FAtiMA Core
  5.1.2 FAtiMA Modular Components
  5.1.3 Concluding Remarks
 5.2 Implementing a Mindreading Agent
  5.2.1 Theory of Mind - Second to Nth level
  5.2.2 Creating Model of Others
  5.2.3 Updating Model of Others
 5.3 ION Framework
  5.3.1 Visibilities on ION Framework
  5.3.2 NWN2 - Graphic Realization Engine

6 Case Study
 6.1 The Werewolf Game
 6.2 Tactics' Reasoning
 6.3 Victims' Planning and Reasoning
 6.4 Wolf's Planning and Reasoning
  6.4.1 First Level Theory of Mind
  6.4.2 Second Level Theory of Mind

7 Evaluation
 7.1 Preliminary Tests
 7.2 Final Evaluation
  7.2.1 Procedure
  7.2.2 Results
  7.2.3 Concluding Remarks

8 Conclusion
 8.1 Future Work

Bibliography

A Experiment's Questionnaire


List of Figures

2.1 Baron-Cohen's Mindreading Mechanisms

3.1 BDI-model Architectures of Theory of Mind from [15]

4.1 Proposed Conceptual Model for a Theory of Mind
4.2 Theory of Mind Model Hierarchy considering M agents and N abstraction levels

5.1 FAtiMA Core Architecture
5.2 FAtiMA Core Pseudo Code [16]
5.3 An example of an authored Property
5.4 Example of an authored Goal
5.5 Example of an Action with global effects
5.6 Example of an Action with local effects
5.7 FAtiMA Modular Global Class Dependencies
5.8 Theory of Mind Model Hierarchy considering 3 agents and 2 abstraction levels
5.9 Example of Theory of Mind Model Hierarchy considering 2 agents and 2 abstraction levels
5.10 Example of an Inference Operator
5.11 ION Framework Simulation Flow

6.1 Agents' Avatars used during the Werewolf Case Study
6.2 An agent performing the Accuse action
6.3 An agent performing the LastBreath action

7.1 Evaluation Test's Structure
7.2 Box-plot of Statistical Data regarding Game Questions
7.3 Box-plot of Statistical Data regarding Global Questions about the Player
7.4 Box-plot of Statistical Data regarding Global Questions about the Liar
7.5 Box-plot of Statistical Data regarding Specific Questions about the Liar


List of Tables

2.1 DePaulo's Lie Taxonomy and examples

7.1 Simulation Tests
7.2 Mann-Whitney statistics for global game questions considering the two conditions (ToM1 and ToM2)
7.3 Mann-Whitney statistics for global questions about the player considering the two conditions (ToM1 and ToM2)
7.4 Mann-Whitney statistics for global questions about the liar considering the two conditions (ToM1 and ToM2)
7.5 Mann-Whitney statistics for specific questions about the liar considering the two conditions (ToM1 and ToM2)


Chapter 1

Introduction

1.1 Motivation

The aim of Artificial Intelligence has been both to understand and to build systems with the notion of intelligence. But what is intelligence and how can we achieve it? There are two perspectives we can take. The first comes from the only naturally intelligent “system” we know: ourselves. This simple fact leads us to one of the main goals of AI: to create human-like intelligence. Something that would act by our standards, have the same kind of reactions, display the same emotions, and be as fallible as we are. Summing up, an agent we could interact with as we would with anybody else. The second perspective arises from an ideal concept of intelligence: a system that would always make the right decision and always get the best, or at least among the best, results. In other words, AI can either aim to achieve human-like performance or ideal problem-specific performance [23]. This thesis focuses on the first objective, with a special concern for social intelligence.

Specific areas such as problem solving, search algorithms, knowledge representation, planning, learning and reasoning, or natural language processing are all large and distinct fields in AI, trying to reach those goals. AI researchers have dreamt of creating robots and agents with this capacity, that seem to think, feel and be alive. Entities with this gift have to be capable of perceiving their environment, reasoning about it and taking action to achieve their goals with maximum profit. In doing so, a sense of intelligent, animated artificial life is sought: a virtual companion, a social pet or a personal assistant could be some of the uses of such technology. Indeed, creating lively animated characters that seem more and more like us humans is one of the topics of recent research in AI and Multi-Agent Systems (MAS). Not only are cognitive capabilities, domain perception and reasoning important; the sense of animation and the impression of being alive also turn out to be key aspects in fostering the illusion of life [24].

An autonomous agent has to be able to perceive the environment and act upon it, taking its goals into consideration. However, if believability is to be achieved, the communication with other agents has to be believable as well. In multi-agent systems, agents interact, cooperate and negotiate with one another. To do this successfully they need to seem to act naturally and plausibly. These interactions also need to seem real and rich enough to instill in the user a sense of belief, the “suspension of disbelief” as Joseph Bates first labelled it [2]. While most communications, either verbal or non-verbal, require some exchange of information, some of them might be deceitful.

In human-computer interaction (HCI) and MAS, users and agents are both presumed to always tell the truth, abiding by the sincerity assumption. This baseline is usually justified by claiming that these interactions are based on the principle of cooperation, and that cooperation implies sincerity by all parties. However, deception occurs occasionally, both unintentionally and on purpose, exactly as it happens in human communication every day. Deception is therefore one such human-like characteristic that would enrich an interaction and its believability. Despite its negative connotation, deception can also be used for good, or at least benign, reasons, such as bargaining, poker and other video games, virtual training and investment banking.

Although the use of deception in AI has been discussed for a long time, only during the last decade have some projects been developed that implement it computationally. To enhance the believability of synthetic characters and endow them with the capacity for deception, one has to borrow research data from psychology, more specifically from social psychology, highlighting the multi-disciplinary nature of this research work. In particular, a Theory of Mind is needed in this process, so that the agent can attribute mental states to other agents and reason about them.

1.2 Problem

In a first stage, the aim of this thesis is to develop a scenario to test lying agents. Therefore it is necessary to give agents both the motivation and the means to lie. Specifically, we want to develop the capacity to lie and behave deceptively in the context of a society of agents, where there will also be other agents trying to uncover the liars. The main focus is then to design an architecture to support the processes that lead to deception and lie generation, based on the existing FAtiMA architecture [9] [16] by Dias. This architecture was developed with the intent of endowing agents with emotional states and will be further discussed in Section 5.1. It is important to note that although trust is an important notion to consider when talking about deception, we will not be considering it in the development of this work. Our aim is to focus our efforts on lie generation and how we can achieve it, before trying to implement mechanisms to counter it.

While striving for the believability of artificial agents, not only thinking but also feeling has to be simulated in order to move one step closer to believable characters. It is therefore also mandatory that the developed module works consistently with the emotion-driven structure of the specified architecture. In this regard, we will focus on a specific problem:

“How can autonomous agents behave deceptively and generate lies relevant to a specific context?”

As we will see, humans resort to a mechanism of more than one abstraction level to represent others, which is then used to achieve their deceptive goals. Our work will follow an approach which assumes that modelling what others are thinking is beneficial for performing deceptive tasks. Specifically, we believe that an entity A which can represent what an entity B is thinking will be less successful in deception than an entity C which can model what agent B thinks about C. In other words, the hypothesis we will try to prove is that:

“The higher the reasoning order an entity is capable of using, the better it can successfully perform deceptive tasks.”

By high-order reasoning we mean that one considers what others are thinking, at a first level; what others are thinking about others, at a second level; and so on. This notion will be further explained in Section 2.7.

We will begin to answer these questions through the analysis and comparison of architectures already developed to achieve a goal similar to ours.

The resulting agent architecture and models will be evaluated with users in the context of a game with a concrete scenario. This evaluation will test the believability of the social interactions produced by the work of this thesis, focusing in particular on the deceptive ones.

1.3 Document Outline

Deception will be the focus of this document, and throughout it several related topics will be discussed. In Chapter 2 the basic notions of deception and lies will be explained, along with a number of studies in the field of Psychology which shed some light on what people usually lie about and why. We will also describe the most important notion used in this work, Theory of Mind, addressing how it appears in children and what impairments can result from not having this capacity. Mindreading is an important process that uses it, and we will describe what it is and what the prerequisites are for a system which aims to simulate it.

In Chapter 3 several research works focused on this same issue of how to implement deception in AI are reviewed. Two prominent Theory of Mind implementation approaches are also discussed.

We then proceed to Chapter 4, describing the model we created, which aims to design an architecture that endows virtual agents with deceptive and manipulative capabilities. The rationale that led us to this implementation is reviewed, and relevant examples of how our concept can be applied to answer our needs are explained as well.

Following the conceptual approach, we describe the specific implementation in Chapter 5. The FAtiMA Modular agent architecture and the ION Framework were used to achieve our purpose. Both systems are described, along with the changes we made to them, with special focus on the changes made to the Theory of Mind component in FAtiMA Modular. Chapter 6 describes the case study where we tested our architecture, along with an explanation of how it was used to achieve its proposed goal.

Chapter 7 analyses the results we obtained from the tests made with users. We will focus especially on trying to answer the hypothesis stated in Section 1.2.

Final conclusions and future work are presented in Chapter 8.


Chapter 2

Background

In this chapter we will review some important notions and concepts that will be used throughout the document. We will start by defining what a lie is. Afterwards we will cover some studies in the field of Psychology for a more in-depth view of lying. We will then explain the notion of Theory of Mind and its advantages, and end the chapter by describing Baron-Cohen's model of “mindreading”.

2.1 Definitions

There is a considerable amount of literature on the subject of deception, and so there are many definitions of the term. To begin the discussion on this topic, we will adopt the definition by Castelfranchi [4], which considers that deception occurs when a proposition P is induced, actively or passively, by an agent A in an agent B, while agent A, the deceiver, believes the proposition is false, or at least does not believe the proposition is true. This definition excludes from deceptive behaviour cases where agent A believes proposition P is true and tries to induce it while proposition P is in fact false. This can be considered accidental deception, but because it has no intention behind it, it will not be considered in the development of this work, although it may be mentioned in the analysis of other works. Taking this into account, lies are speech acts that achieve the same goal of deceit, being only a part of deception. Non-verbal mechanisms can also be employed, but we are going to focus our work on deception through verbal mechanisms, or speech acts.
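As an illustration only, Castelfranchi's condition can be phrased as a simple predicate over the deceiver's beliefs; the function and variable names below are hypothetical and not taken from any of the cited systems.

    # Sketch of Castelfranchi's deception condition (illustrative only).
    # A deceives B with proposition P when A induces P in B while A does not believe P.
    def deceives(deceiver_beliefs: set, induced_in_target: bool, p: str) -> bool:
        return induced_in_target and (p not in deceiver_beliefs)

    # Accidental deception is excluded: here A believes P, so the predicate is False
    # even if P happens to be false in the world.
    print(deceives({"key-in-drawer"}, induced_in_target=True, p="key-in-drawer"))  # False
    # Intentional deception: A induces P while believing something else.
    print(deceives({"key-in-box"}, induced_in_target=True, p="key-in-drawer"))     # True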

2.2 Lie Taxonomy

DePaulo has conducted a series of studies on this topic that have shed some light on the ulterior motives of a lying mind and its associated mechanisms. To lay a baseline for what deception and lying are in their research, and because in some of these experiments participants had to keep a record of their own lies, participants were told that a lie occurs any time you intentionally try to mislead someone, where both the intent to deceive and the actual deception must occur [7]. A taxonomy of lies was also created, based on the participants' records of their everyday lies, which can be categorized along four dimensions. What people lie about is defined by the content category; the reason specifies the motive that led the lie to be told; and the type represents the way lies were told in terms of subtlety. In this last dimension it is important to note that subtle lies also include deceptive behaviour such as non-verbal deception, which is outside the scope of the definition of lie stated in the previous Section 2.1, expanding this behaviour to the broader definition of deception. The last dimension, the referent, defines who or what the lie is about. Table 2.1 sums up all the terms in each category and gives examples for each one.

2.3 Lie - An everyday event

Lying is an everyday event. This is the most important conclusion of DePaulo's first study [7]. Backing up this conclusion is the fact that participants lied in about 30% of their social interactions during the study. Although the relation between the number of interactions (opportunities to lie) and the number of actual lies was not the focus of the study, it was still a statistically solid sample that clearly supports the claim above. Moreover, 70% of the people stated they would lie again if given the chance. This is explained by the fact that the majority of lies have little consequence and are matter-of-fact, an action performed in an instant, with almost no cognitive deliberation. As with every other daily behaviour, lying seems to need little planning and to carry little cognitive and emotional baggage. It is not unwisely said that the easiest answer to a question is “no”. People usually try to protect themselves emotionally and do not like to be compromised or reprimanded, which is why a lie is sometimes almost instinctive. In fact, serious lies that involve deep breaches of trust are far less common and are not an everyday event. This kind of lie requires a greater cognitive effort and has greater emotional significance than the little, everyday kind of lie; it must be noted that this last type was not the focus of the study mentioned above. In any case, social situations in which lies were told were reported to be less pleasant and intimate, owing to the discomfort and distress felt after telling the lie. This is caused by the fact that lying is usually condemned in our society, and also because it is a breach of trust between two parties. Lying fabricates something that is not real and in this way leaves a smudge. When someone lies to give emotional support, for example, the empathy and comfort provided is not genuine, and that marks the experience. This contributes to people avoiding direct contact when telling a lie, an assumption which was confirmed in this study: people choose other means of communication, like writing a letter or talking on the phone, since face-to-face interactions seem to bring some degree of the mentioned discomfort.

2.4 Lie - Motivation and Rewards

Lying is an everyday event that is associated with the achievement of some of the most basic social interaction goals, such as influencing others, managing impressions, and providing reassurance and support. As demonstrated in the research work by DePaulo [7], about 80% of the lies were about or included the liar as one of the targets (the referent, in DePaulo's nomenclature); furthermore, 50% of the content of the lies concerned feelings and actions, with feelings predominant.


Contents
  Feelings: Lies about the liar's feelings, opinions, evaluations and affects.
    Example: "Told her cooking was the best."
  Own Actions, Plans, Whereabouts: Lies about what the liar did, is doing or is planning to do, as well as his past and present whereabouts.
    Example: "Told her we should go out sometime, but I won't."
  Achievements and Knowledge: Lies about facts the liar knows of or lacks knowledge of, which can refer to accomplishments, achievements or failures.
    Example: "Told her I had done OK on my calculus exam, when I had failed."
  Explanations and Reasons: Lies about the liar's reasons and explanations for his behaviour.
    Example: "Told him I could not make it because I was sick."
  Facts and Possessions: Lies about facts, objects, events, people or possessions.
    Example: "Told him my father was a successful businessman."

Reason
  Self-Oriented: Lies told to protect, enhance or benefit the liar. These can be psychological, which regulate the liar's own feelings, or for personal gain, which includes physical and material safety and gain. Impression management is also included, as well as lies told to evade doing something the liar does not want to do or to protect him from being bothered.
    Example (psychological): "Told her me and Maria are a happy couple when in fact I don't know if she still loves me."
    Example (personal advantage): "Told this new girl a fake cell phone number because she was not that interesting."
  Other-Oriented: Lies told to protect, enhance or benefit others. These can also be psychological, which includes reasons to avoid conflict, worry and hurt feelings, or can aim at another person's material, emotional and physical protection or advantage.
    Example (psychological): "Told him everything will be all right, when in fact he barely has time to finish his homework project."
    Example (another person's advantage): "Told the client a higher price, to make more profit for the company."

Type
  Outright: Lies that are completely contrary to the truth.
    Example: "Told my mother I have never smoked."
  Exaggerations: Lies that convey an impression that overly exceeds the truth.
    Example: "I exaggerated how sorry I was for being late."
  Subtle: Evading lies that omit relevant facts; also includes behavioural and non-verbal lies.
    Example: "We discussed how the football game went when in fact I did not watch it, only read about it in the newspaper."

Referent
  Liar: Lies that refer to the liar, be it an action, emotion or opinion.
    Example: "Told him I studied in Cambridge."
  Target: Lies that refer to the target of the lie.
    Example: "Told her she looked beautiful."
  Other Person: Lies referring to another person or people, neither the liar nor the target of the lie.
    Example: "Told him my friend John enjoyed being with us last Saturday, but he really did not."
  Object or Event: Lies referring to objects, events or places.
    Example: "Told him I want to buy the same jacket, when in fact it is really ugly."

Table 2.1: DePaulo's Lie Taxonomy and examples


People make use of lies to manage their identity in the eyes of others and to appear to be what they are not. In fact, the image that we try to convey to others is usually an amended one. We try to highlight the traits that are most convenient for the current social context, trying to blend in and not to enter into conflict with others, mostly from a psycho-emotional perspective. This is non-deceptive impression management, in particular if we only make some relevant trait stand out, as is most convenient for a specific social interaction. On the other hand, deceptive presentations also have the goal of image management. In this case the liar can omit information (a subtle lie) or simply tell something about himself that is not true. Deceptive and non-deceptive impression management therefore share the same goal, but they differ in important aspects: in deceptive presentations, any form of lie creates a new self that does not represent the liar, and thus fosters a false image instead of merely editing it, which is the non-deceptive case.

People lie about their feelings, and the motivation is more psychological than material or of personal gain. We want to be accepted and esteemed, to meet others' expectations, to have affection, to receive respect, to avoid tension, conflict or hurting others' feelings. Since such rewards seem to be much more important to us, it is also on a psychological level that we lie. This leads us to claim to be what we are not, to lie about what we think of others, and about our feelings and opinions.

Although lying with the objective of obtaining material gain is also registered, such lies are told less often than one might at first expect, indicating that people are more concerned with, and lie more about, their psyche and emotions.

2.5 Who Lies

Is there a common personality associated with those who lie? Based on another study [17], people who lie more tend to be more manipulative, more concerned with their self-presentation and more social. On the other hand, those who were reported to tell fewer lies tend to have interpersonal relationships that are especially satisfying and meaningful. They also tell proportionally more other-oriented lies, supporting the premise that, as a whole, people lie mostly about themselves and their feelings.

Self-confidence also plays a big role in the rate of lies. According to the same research work, people with low self-confidence and low self-esteem find it more difficult to say what they think and to expose their ideas, and because of that tend to lie more about what they really feel. They also tend to tell more altruistic lies to show that they agree with and like others, so as to be accepted by them. People with these traits therefore lie much more about their emotions and opinions, content that people in general already lie most about, making it more noticeable. These two results support the conclusion that self-confidence is inversely proportional to lying in general, both self-oriented and other-oriented. In conclusion, the ones that lie more are those who care deeply about what other people think of them, along with extroverted and manipulative people.

Regarding the target of the lie, another study carried out by DePaulo [6] showed that people tell fewer lies to those with whom they feel closer, interact more frequently and have known for longer. It also concluded that as a relationship deepens the overall lying frequency lowers, although other-oriented lies may rise through faking agreement and taking the partner's side. There are a few reasons for this decrease in lie rate. On the one hand, people are unwilling to lie because close relationships have ideals such as openness, authenticity and trust, and telling lies violates those ideals. In violating these ideals one would lose the psychic rewards that come from such relationships, the rewards mentioned in Section 2.4. On the other hand, there is also the fact that such partners know us much better than casual acquaintances, and so deception is less likely to succeed. They have more information about us and know how we usually behave and react to events, so anything that falls outside this expected pattern can give us away and raise suspicion more easily. Another fact to take into account is that although the expectations of a partner in a close relationship are higher, they are also more accurate, and thus less likely to be violated as well.

2.6 Cues to Deception

Deception has serious consequences when discovered, and is therefore condemned in our society. No one likes to be lied to, as this is considered a breach of mutual trust, an outrage to the shared relationship or, in its absence, an untrustworthy act against someone. Ways to detect when lies are being told would therefore be relevant in trying to avoid these situations.

Do people behave differently when they are lying? DePaulo says we must focus on the ways in which liars act differently from truth tellers, instead of how liars behave in isolation [8], as it is these differences that give them away. In the same study by DePaulo et al., the conclusion is drawn that this discrepancy in behaviour is mostly caused by the more intensive deliberation and the cognitive challenges liars have to go through to keep their stories internally consistent and coherent with what others already know. Neither emotional investment nor personal experience is there to make those claims true. Consequences of this more intensive cognitive process are cues such as longer response latencies, more speech hesitations, fewer illustrative hand movements, and the inclusion of fewer ordinary imperfections and less unusual content in their stories. Cues can appear on a more emotional level as well. The apprehension of being caught can lead to cues of fear, which can include higher pitch, faster and louder speech, pauses, speech errors and indirect speech. Cues of sadness can also appear when lying to people we trust and care about, caused by the feeling of guilt; these can include lower pitch, softer and slower speech, and downward gazing.

Lying is something we do every day. Even though the above cues may appear, they are very subtle. In Section 2.3 we saw that lying is usually not a complex process, nor does it induce much stress or guilt, resulting in only faint behavioural signals. In attempting to control their thoughts and feelings, liars generally end up more preoccupied and tense, and although not to a great extent, the more alert they are, the more cues will emerge. To sum up, the study concluded that due to the deliberative nature of their mental process, and in trying to control their expressive behaviours, thoughts and feelings, liars can show faint cues that compromise their performance. These interactions would generally seem, as cited in [8]:

• Less forthcoming

– liars will respond less, will seem to be holding back, and response latencies will be longer;

• Less convincing and compelling

– their accounts will seem to make less sense than a truth-teller's story, and will seem less engaging, less immediate, more uncertain, less fluent and less active;

• Less positive, pleasant and more tense

– resulting from the feeling of guilt and apprehension;

• To include fewer ordinary imperfections and less unusual content in their stories

– resulting from the difficulty of simulating a true personal and emotional experience.

2.7 Theory of Mind

A Theory of Mind (ToM) is a term coined by Premack and Woodruff [21] which defines the ability to infer the full range of epistemic mental states of others, i.e. beliefs, desires, intentions and knowledge. Having a ToM is being able to reflect on one's own and others' mental content; it is a natural way of thinking about why people do what they do, and what they might do next. Psychologists call this using a theory simply because there is little evidence to prove that this framework, the mind, exists, and that what we infer is actually what others are thinking or feeling [21]. Moreover, we can only assume others have a mind, which we can infer about only by introspection of our own.

The complex abstractions we make of others' minds, and consequently of our own, are a mechanism that helps us make sense of others' behaviour in a specific context and predict their next action. We do this by applying the principle that observing something leads to some epistemic change, which eventually leads to a result, usually an action. Theory of Mind is the basis of a process called mindreading, and Baron-Cohen describes it as a process we perform all the time, effortlessly, automatically, and mostly unconsciously. That is, we are often not even aware we are doing it, until we stop to examine the words and concepts that we are using [1]. In fact, every time we say that someone wants, thinks, knows, intends, plans, agrees or looks for something, we are ascribing some mental state and trying to infer some coherence out of it. By doing this, we can infer others' mental states and, by observing their interactions, anticipate what their behaviour will be or even manipulate their actions.

To evaluate if someone has a developed ToM, a typical test is the false-belief task [30]. To pass this test the participant has to be able to attribute a false belief to someone. An example of this test can be stated as follows:

“There are two boxes A and B on a table, and John puts his ball inside box A and leaves the room. John's mother then moves the ball from box A to box B to trick John. When John re-enters the room, where will he look for the ball?”

All these actions happen in the presence of the participant. To pass this test the participant has to answer “box A”, which indicates that he can ascribe a false belief to John: that John believes the ball is in box A, while in fact it is in box B. However, this question only exercises a first-level conceptualization of someone else's mind, John's mind. A first-level Theory of Mind starts to develop in normally developing children around the age of 3-4 [28] [18] [11], and it is the first step towards a fully developed ToM. Autistic children start to pass this test much later, because their condition results from a specific impairment in ToM [1]. Thereby, tests like the first-level false-belief test can only be attempted successfully by children with this condition at an older age, usually around the age of 9 [14]. By the age of 6, ordinary children start to pass tests that exercise their second-level ToM [12] [13]. They would be able to answer correctly the following question, considering the same scenario of John and his mother:

“Where does John's mother think her son will look for the ball?”

This is a more complex task, where the participant has to be able to conceptualize embedded mental states. To answer correctly, one has to think about what John's mother thinks about John. At this stage, children have the capacity of an adult-like ToM. A second-order ToM gives the capacity to handle most everyday interactions and social needs.
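To make the notion of nested belief levels concrete, the following minimal sketch (illustrative only; the dictionaries and names are ours, not taken from any cited system) encodes the John-and-ball scenario as first- and second-order belief attributions.

    # Nested belief attribution for the false-belief scenario (illustrative sketch).
    world = {"ball": "box B"}                # actual state after the mother moves the ball

    # First-order ToM: what the observer believes John believes.
    john_model = {"ball": "box A"}           # John did not see the switch

    # Second-order ToM: what the observer believes the mother believes John believes.
    mother_model_of_john = {"ball": "box A"} # the mother knows John did not see the switch

    def first_order_answer():
        # "Where will John look for the ball?" -> read from the model of John's beliefs
        return john_model["ball"]

    def second_order_answer():
        # "Where does John's mother think her son will look?" -> read from the nested model
        return mother_model_of_john["ball"]

    print(first_order_answer())    # box A: a correct first-level (false-belief) attribution
    print(second_order_answer())   # box A: a correct second-level attribution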

2.7.1 Mindreading

Now that we understand the inner ability which enables humans to be socially intelligent and to perform high-order reasoning about other people's motivations and beliefs, we must ask ourselves how this theory is used to achieve its purpose. Baron-Cohen proposed one of the most complete theoretical models [1], consisting of four main innate components or mechanisms which together comprise the human “mindreading” ability (Fig. 2.1).

Figure 2.1: Baron-Cohen's Mindreading Mechanisms

The first mechanism is the Eye Direction Detector (EDD), which Baron-Cohen suggests has three basic functions: detecting the presence of eye-like stimuli, determining whether eyes are directed at the “self” or at another entity, and inferring that if an organism's eyes are directed at something then it sees that thing. This last function is especially important as it allows the observer to attribute specific basic mental states of the form [Entity-sees-me] or [Entity-sees-Object], for example attributing “John sees the candy” or “John sees the door” to John. The Intentionality Detector (ID) is responsible for interpreting primitive mental states such as goals and desires based on others' actions. It can represent mental states of the type [Entity-wants-Item] and [Entity-hasGoal-Goal], for example “John wants the candy” or “John wants to leave the room”. Both these mechanisms allow the self to interpret and understand observed behaviour as a small number of mental states of all entities in the environment. They also allow a dyadic level of representation, that is, between the observed entity and the observer, or between the observed entity and the object it sees. The Shared-Attention Mechanism (SAM) is the third component of this theory and is a higher-order ability, as it is responsible for building triadic relationships using information from the first two components. These representations depict relations between an entity, the self, and a third object, which can be another entity. Therefore states such as [Entity-sees-(Entity-sees-Object)] can be represented, like “John sees that I see the candy”. This is achieved by comparing the perceived mental states of other entities with one's own current perceptual state, attaining shared-attention notions which can be attributed to the observer. The last and most important component is the Theory of Mind Mechanism (ToMM). Its two main functions are to represent the full range of epistemic mental states and to relate them with behaviour, resulting in a useful and meaningful theory that can be used to reason about, anticipate and manipulate the actions of others. It makes use of the information provided by the SAM component and transforms it into epistemic knowledge using principles such as “seeing leads to knowing”.
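One possible way to read the description above is as a chain from perceptual detections to epistemic states. The sketch below is purely illustrative: the function names and tuple encodings paraphrase the mechanisms just described and are not taken from any implementation.

    # Illustrative chain of Baron-Cohen's mindreading mechanisms (sketch, not an implementation).

    def edd(observation):
        # Eye Direction Detector: dyadic state [Entity-sees-Object]
        return ("sees", observation["entity"], observation["gaze_target"])

    def intentionality_detector(observation):
        # ID: primitive volitional state [Entity-wants-Item]
        return ("wants", observation["entity"], observation["reach_target"])

    def sam(dyadic, self_percept):
        # Shared-Attention Mechanism: triadic state [Entity-sees-(Self-sees-Object)]
        _, entity, obj = dyadic
        if obj == self_percept:               # both attend to the same object
            return ("sees", entity, ("sees", "self", obj))
        return None

    def tomm(dyadic):
        # Theory of Mind Mechanism: "seeing leads to knowing"
        _, entity, obj = dyadic
        return ("knows-about", entity, obj)

    obs = {"entity": "John", "gaze_target": "candy", "reach_target": "candy"}
    d = edd(obs)                              # ('sees', 'John', 'candy')
    print(intentionality_detector(obs))       # ('wants', 'John', 'candy')
    print(sam(d, "candy"))                    # ('sees', 'John', ('sees', 'self', 'candy'))
    print(tomm(d))                            # ('knows-about', 'John', 'candy')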

2.7.2 Concluding Remarks

Conceptually, deception, mindreading and Theory of Mind seem to be connected. When there is a need to propagate beliefs that are not true, or that we believe are not true, it is a fair assumption that knowing the target's mental state, through an inference process, is a great advantage for the success of the deceit. Put simply, by knowing the structure and current state of another's mind, it is easier to know what actions we have to perform to induce specific information in them. Being able to represent this inferred knowledge is, however, not enough [29]. Using it to achieve specific high-order goals is its true purpose, as these goals refer to intentions to change the mental states of others. Intending that someone believes a fact F, whether it is true or false, is a task that only an entity endowed with a Theory of Mind and mindreading capabilities can pursue. In fact, if deceiving and lying is manipulating someone else's mental state, a Theory of Mind is necessary to achieve this goal; otherwise it would only be a random task, considering that we would have neither initial information nor feedback as a result of our actions. Another requisite is that entities can have incomplete and possibly wrong knowledge. This is what enables manipulation and lying attempts; without it they would not be possible either.

It is based on this line of thought that computational works related to deception have also focused on implementing a model of Theory of Mind.


Chapter 3

Related Work

This chapter covers computational systems that are related to, or implement, deception. It begins by analysing a work that takes two approaches to the Theory of Mind, laying the ground for a general understanding of its requisites and alternative implementations. It then analyses three different works that propose an implementation of deception: one in the field of robotics and the other two in virtual environments with synthetic characters. It is important to note that there are not many architectures that focus specifically on this topic, and as such there are few other works to reference.

3.1 Theory of Mind Implementation Approaches

A Theory of Mind (ToM) explains the ability to define others as intentional agents and to infer their mental states, including their beliefs, goals, intentions and desires. This capacity can then be used to predict behaviour, for social manipulation, or to implement a degree of intelligence for virtual training.

Meyer et al. [15] are working on computational implementations of ToM, and their goal is to endow agents with ToM in the specific context of virtual training. Their focus is to give agents the capacity to interact in a believable way with trainees, and to explain their actions and decisions after the experiment is over. In this process, trainees can face and solve a crisis scenario while interacting with human-like behaviour. The agents model the trainee's mind and give feedback, either through simple action decisions or through an explanation at the end. Although the goal of their work is different from that of this thesis, it is interesting because they developed theoretical approaches to computationally implementing a ToM that can be used in other scenarios and in differently oriented works such as this one. Namely, they focus on two prominent and conceptually very distinct accounts of the human theory of mind: the theory-theory (TT) and the simulation-theory (ST). Both approaches agree that there is an innate module responsible for ToM; the difference lies in its constituent elements and in how we learn to use it. Both can be implemented by a BDI model [22] because they refer to concepts like goals, intentions, desires, plans and actions, elements that can be attributed to a mental state. A typical BDI agent has all of these concepts explicitly represented in its internal state. Its behaviour is composed of a set of actions, a plan, which is driven by a goal it seeks to achieve. Desires are a set of objectives the agent would like to achieve but has not yet committed itself to. When there is commitment, desires generate intentions. These are objectives the agent has committed itself to achieve, and they in turn are the basis for generating plans. Beliefs are updated in each deliberative cycle and are used in the reasoning process and in plan generation [3].
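For reference, the deliberative cycle just described can be sketched roughly as follows (a generic illustrative skeleton; the class and parameter names are ours, not those of the architectures discussed in this chapter).

    # Minimal BDI deliberation cycle (generic illustrative skeleton).
    class BDIAgent:
        def __init__(self):
            self.beliefs = set()       # what the agent currently holds true
            self.desires = set()       # objectives not yet committed to
            self.intentions = []       # committed objectives
            self.plan = []             # actions generated for the current intention

        def step(self, percepts, select, plan_for):
            self.beliefs |= set(percepts)              # 1. update beliefs from perception
            if not self.intentions and self.desires:   # 2. commit to a desire -> intention
                self.intentions.append(select(self.desires, self.beliefs))
            if self.intentions and not self.plan:      # 3. means-ends reasoning: build a plan
                self.plan = plan_for(self.intentions[0], self.beliefs)
            return self.plan.pop(0) if self.plan else None   # 4. execute the next action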

According to the theory-theory, ToM is implicit, develops automatically and innately in our childhood through early social interactions, and undergoes a maturation process. The mental state we attribute to others is not observable, but is knowable through intuition and insight. Moreover, this account of ToM encompasses a set of principles describing how the BDI-model concepts interact with each other for each inferred mental state. These are specific reasoning rules, dependent on each inference. As represented in Figure 3.1a, the architecture is composed of the typical BDI modules plus several sets in the belief base: the set of the agent's own beliefs and the representations of others' mental states. Each of these representations incorporates beliefs, goals and reasoning rules, all of which, being in the belief base, are themselves represented as beliefs. Reasoning rules combine beliefs and goals to explain and anticipate behaviour, in other words, to produce plans or actions. As an example, the rule

if(A(B(X)) and A(G(Y))) then A(P(Z))

means that if an agent believes that agent A believes X and has goal Y, it can believe that agent A will execute plan Z. This approach does not make use of the architecture's own deliberative mechanism, relying instead on these specific reasoning rules. The simulation-theory model is different in this regard, in that it uses the agent's own reasoning power on all mental states, both its own and others'.

(a) Theory-Theory (b) Simulation-Theory

Figure 3.1: BDI-model Architectures of Theory of Mind from [15]

The simulation theory claims that each person simulates being another while trying to reason

about their epistemic state. In this account, ToM allows one to mimic the mental state of

another person. In practice, what this means is that the agent will use its own deliberative

power to reason about other agents. Figure 3.1b depicts this design. With this approach the representation of mental states is not strictly defined, nor is it coded solely as beliefs; it can thus have the same concepts and elements as the agent's own mental state, namely beliefs, goals, intentions and plans, as Meyer et al. adopted [15]. The agent can then stop deliberating about its own mental state and switch over to another agent's mental state to predict


its behaviour. If the other mental states have the same elements and structure as the agent's own, the implementation of this architecture can be very modular and simpler than the theoretic one. In this case, the reasoner can treat the mental states of other agents as if they were the agent's own, like a black box, which simplifies programming the reasoner.

In the case study performed by Meyer et al., results showed there was no distinguishable

behavioural difference between TT agents and ST agents. However, considering implementation

effort, the authors of this research work defend that ST models are better in terms of code

re-usability and modularity. Moreover, the TT model can only deal with BDI models due

to their rigid representation of other’s mental state in terms of beliefs, limiting its symbolic

representation.

This research work is especially important because it considers and implements two different

approaches of Theory of Mind, pointing out their conceptual origins and analysing the implementation undertaken for each.

3.2 Wagner & Arkin - Deception in Robots

Wagner and Arkin developed some preliminary algorithms to endow an intelligent system with

the capacity of deception [26]. Although this work is primarily oriented to robots, which brings

some additional concerns, it shows an approach for implementing a deceptive framework on an

intelligent agent. Their testing scenario is a variation of the hide-and-seek game. There are

three corridors where the deceiver can hide. It can also leave false trails leading to some other corridor, which the seeker, or mark, will analyse before finally choosing a corridor itself.

In their approach, Game Theory and Interdependence Theory are used to reason about

deception and to assess if a deceptive behaviour is needed. This evaluation is done considering

the social situation an agent is in, which is computationally represented as an outcome matrix.

This matrix contains information about the individuals interacting (their identity), the possible

interactive actions they deliberate about, and a pair of scalar values for each possible pair of

actions, representing the outcome values. For determining if deception is necessary, they need

to characterize a social situation. This is done by locating a situation in the interdependence

space. This is a four dimensional space that describes a situation in terms of interdependence,

correspondence, control, and asymmetry dimensions [27], each of which describe an aspect of a

situation. Therefore they try to measure the extent to which an agent’s outcome is influenced

by the actions of another agent. Moreover, by setting limits on some dimensions, or defining

regions in the interdependence space, situations where deception is warranted can be ascribed.

For example regions with high enough interdependence and conflict (low correspondence) can

be said to warrant deception.

The first step is to induce a false belief on the other agent, which is called the mark. This

assumes the deceiver has a model of the mark, which encapsulates knowing: (1) a set of its

features, which identifies it; (2) an action model of the partner, so that the agent knows the

possible actions the mark can perform; (3) a utility function of the partner, to be able to compute the outcome matrix, which then contains the outcomes of the partner. This defines the

mark's model. The deceiver has such a model of the mark and of itself. Indeed an important point

of this work was to examine to what extent modelling the partner can affect the effectiveness


of deception itself. The partner model is in practice the theory of mind used to anticipate

behaviour. In this case reasoning about the mark can be done by using the action model and

utility functions, which act as reasoning rules.

Both the agent and the mark have the same outcome matrix for a specific situation, the true

outcome matrix. The goal of the deceiver is then to perform some action in the environment

transmitting a false communication to the mark, that will lead it to behave in a way that would

benefit the deceiver. In doing this, the agent is inducing another outcome matrix on the mark,

the induced outcome matrix. The agent creates the induced matrix using an algorithm that tries

to decrease the probability that the mark will choose an action that is unfavourable to the agent,

and then chooses the false communication that produces an outcome matrix that is most similar

to the induced outcome matrix. It is assumed the agent has a set of false communications it can resort to in order to propagate false information.

In short, for acting deceptively, the general algorithm is as follows (a minimal sketch is given after the list):

1. Determine if deception is necessary;

2. Create induced matrix;

3. Select and produce the best false communication to convince that induced matrix is true;

4. Agent uses true matrix to select an action which maximizes its outcome;

5. Mark produces the induced matrix and selects an action from it.
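The following Java sketch illustrates step 3 under the assumption of a simple sum-of-squared-differences similarity measure; the class and method names (OutcomeMatrix, FalseCommunication, selectFalseCommunication) are our own illustration and not Wagner and Arkin's implementation:

    import java.util.List;

    // Hypothetical sketch: pick the false communication whose conveyed matrix is
    // closest to the desired induced matrix.
    final class OutcomeMatrix {
        final double[][] deceiver; // deceiver outcome for each (deceiver action, mark action) pair
        final double[][] mark;     // mark outcome for the same pairs

        OutcomeMatrix(double[][] deceiver, double[][] mark) {
            this.deceiver = deceiver;
            this.mark = mark;
        }

        // Sum of squared differences, used here as a crude similarity measure.
        double distanceTo(OutcomeMatrix other) {
            double d = 0;
            for (int i = 0; i < mark.length; i++) {
                for (int j = 0; j < mark[i].length; j++) {
                    d += Math.pow(mark[i][j] - other.mark[i][j], 2);
                    d += Math.pow(deceiver[i][j] - other.deceiver[i][j], 2);
                }
            }
            return d;
        }
    }

    interface FalseCommunication {
        // The matrix the mark is expected to believe after receiving this communication.
        OutcomeMatrix matrixConveyed();
    }

    final class DeceiverSketch {
        static FalseCommunication selectFalseCommunication(List<FalseCommunication> repertoire,
                                                           OutcomeMatrix inducedMatrix) {
            FalseCommunication best = null;
            double bestDistance = Double.POSITIVE_INFINITY;
            for (FalseCommunication fc : repertoire) {
                double d = fc.matrixConveyed().distanceTo(inducedMatrix);
                if (d < bestDistance) { bestDistance = d; best = fc; }
            }
            return best;
        }
    }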

Their algorithms make a couple of assumptions. The deceiver is assumed to have a finite

set M of false communications over which it deliberates, which have the purpose of providing

false information. From a human perspective, these false communications can be a speech act (a lie) or some general behaviour with the intent to deceive, neither of which corresponds to specific actions that can be labelled as inherently deceitful. A communication is only false depending

on what the agent believes to be true, and thus depends on its internal state, not on specific

communications designed to propagate false information. Instead, any action should be able to

propagate such falsities.

Another unrealistic assumption is that the agent can perceive the partner's outcome values,

thus being able to update its utility function. Once more, this perspective is not consistent with

a more social behavioural perspective. This particular assumption is in theoretical contradiction

with deception. If the agent is able to update and keep track of its partner’s utility function, any

attempt to deceive by the partner should be thwarted because its internal state is completely

known to the deceiver. In fact, one pre-condition for deception is our ignorance about our

partner’s true ulterior goals, otherwise deception cannot happen. This is why theory of mind is

used to infer mental states, specifically because they are not known. However this assumption

is acceptable, because the focus of the work is on the deceiver’s processes, not the partner’s,

although it could not be applied in this thesis work for example.

Despite these two caveats, and despite being a robotics perspective, this work shows a way to reason about deception, using Game Theory and Interdependence Theory, and is interesting in the way deception is warranted. The knowledge about the partner, although a basic model based on features, was also shown to affect the success of a deceit attempt. This specific result is similar


to what we want to achieve: that, by using the information from a Theory of Mind of others, a deceiver can better create a plan to successfully achieve deception.

3.3 Castelfranchi’s GOLEM

Castelfranchi [4] built a multi-agent world designed to study social attitudes, focusing on cooper-

ation and deception. As mentioned in chapter 2, deception can occur by conflicts in cooperation

scenarios, which is why these cooperative activities are the focus of this work. GOLEM is based

on the blocks world of AI planning domain research. Agents have the goal to build a block

structure, which can be composed of either small blocks, large blocks or both. Because there

are several structures, goal conflicts will emerge, and each agent will try to achieve its goal, resorting to its own methods or by using the “help” of another agent.

In order to simulate distinct types of cooperation, agents have task delegation and task

adoption preferences, which are the two main factors to take into account in a cooperative scenario.

These are formalized in GOLEM by a framework of personalities. For example, being lazy is a

delegating personality trait which leads the agent to always delegate a task if there is another

agent capable of doing it, taking action only if there is no other alternative. Being a supplier

is an adoption personality trait which leads the agent to always help if the other agent can’t

do the action by itself, i.e. does not have the capability. Therefore agents also have different

capabilities, which, together with their goals and their knowledge of other agents, they

use to plan their actions. It is important to note that there is no chronic deceiver personality.

In this world deception is only instrumental and thus only due to conflicts in the agents’ goals.

Also to note, as mentioned above, personality traits are only preferences and not rigid rules. A

lazy agent can perform an action to mislead another agent into inferring that its personality is not lazy.

In GOLEM, as in the real world, not all information is accessible. Agents only have a limited

view of the world and an incomplete and possibly wrong knowledge. This is in fact needed for

deception to happen. If every agent had all the information and correct knowledge, there would be no room for successful deceptive attempts.

Specific to GOLEM is that deception can be about capabilities, goals or personality, all of

which ultimately come from conflicts in achieving goals. Taking the same personalities as above,

let’s say Eve is lazy and Adam is a supplier. If Adam knows of Eve’s capabilities he won’t accept

Eve’s delegation request. This is an example of deception about capability. In this case there

are three scenarios:

1. Adam already believes Eve cannot perform the action, either because Eve never performed

that action in front of Adam and he inferred that belief, or because Adam believes Eve

has a personality that only delegates tasks. In any case Eve just exploits this wrong belief,

which is an example of passive deception;

2. Adam believes Eve can perform the action and will refuse the task delegation. In this

case Eve can tell a lie about her capability or personality, and act in a deceptive manner, feigning her inability. This would be a case of active deception about capability or

personality;


3. Adam does not have any information about Eve's capability, in which case Eve can act the same way as in scenario (2).

There is a set of logic reasoning rules that define how agents might deceive. These are used

to deduce beliefs, personalities and capabilities from the agent’s current knowledge base, to then

act accordingly.
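As a hedged illustration of how such a rule for deception about capability could look, the following Java sketch encodes the three Eve/Adam scenarios described above; the types and names are our own and do not come from GOLEM:

    // Eve (lazy) wants Adam (a supplier, who only helps when the requester cannot act)
    // to adopt a task she could actually perform herself.
    enum DeceptionMove { EXPLOIT_WRONG_BELIEF, LIE_ABOUT_CAPABILITY }

    final class CapabilityDeceptionRule {
        // adamBelievesEveCanDoIt == null models scenario (3), where Adam has no belief at all.
        static DeceptionMove decide(Boolean adamBelievesEveCanDoIt) {
            if (adamBelievesEveCanDoIt == null || adamBelievesEveCanDoIt) {
                // scenarios (2) and (3): actively induce the false belief "Eve cannot do it"
                return DeceptionMove.LIE_ABOUT_CAPABILITY;
            }
            // scenario (1): Adam already holds the wrong belief, so Eve only exploits it
            return DeceptionMove.EXPLOIT_WRONG_BELIEF;
        }
    }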

This work is worth mentioning because it has several similarities to the goals of this thesis. By creating a simple multi-agent world, basic concepts of cooperation and deception could be analysed. Also, developing a personality framework to model agent tendencies is a similar approach to what the FAtiMA architecture proposes in Section 5.1, although in the latter case for emotionally influenced reasoning. These traits influence the agent's actions through a reasoning process that takes into account beliefs, personalities and capabilities, which is what agents in GOLEM can lie about. However, agents in GOLEM can only lie within this limited scope; they cannot lie about their requests, for example. That would require second order reasoning, which would endow the agent with the capacity to reason about the reasoning of other agents.

3.4 Mouth of Truth

De Rosis and Carofiglio take another approach for implementing deception [5]. They try to focus

on the communicative perspective of a deceptive action, while trying to test their approach on

a simplified version of the Turing imitation game. In their specific scenario the sender tries to

convince the receiver that some fact X is not true, where the sender can lie or use other deceptive

strategies. This case study is already very oriented to communication, which is why they also

focused on developing a cognitive model of deception to simulate natural conversations. They

tried to develop a way to undermine the receiver’s belief, increasing its doubt. Mouth of Truth

implements reasoning models as belief networks [19, 20], and beliefs as their nodes. Uncertainty

is represented as usual, by probabilities associated with other nodes. In this way, beliefs are

connected in a network, where a fact implication logic can be applied, enabling deception by

telling a lie about a belief that is not what the sender wants to manipulate, but a belief that is

connected to it. For example, uncertainty can be given to the belief “it rained” if the sender

says “the floor outside is dry”. However, the agent also needs to have some mental image of

the receiver to be able to deduce this. Therefore, it is assumed the mental state of the agent is

composed of a set of its own beliefs and reasoning rules, and the mental state of the other agent,

with the same elements. The receiver’s beliefs can then influence the decision making process

of the sender. This constitutes the theory of mind of Mouth of Truth, which takes a theoretic

approach, as mentioned in Section 3.1.
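As a toy illustration of this fact-implication idea (and not of the actual Mouth of Truth machinery), a single application of Bayes' rule already shows how asserting the connected fact lowers the probability of the belief under attack, assuming the invented conditional probabilities below:

    // Toy numbers only; the conditional probabilities are assumptions for illustration.
    final class IndirectDeceptionToy {
        public static void main(String[] args) {
            double pRained = 0.6;            // receiver's prior belief that it rained
            double pDryIfRained = 0.1;       // P("the floor outside is dry" | it rained)
            double pDryIfNotRained = 0.9;    // P("the floor outside is dry" | it did not rain)

            // The sender asserts "the floor outside is dry"; the receiver conditions on it.
            double pDry = pDryIfRained * pRained + pDryIfNotRained * (1 - pRained);
            double pRainedGivenDry = pDryIfRained * pRained / pDry;

            System.out.printf("P(it rained): %.2f before, %.2f after the lie%n",
                              pRained, pRainedGivenDry);  // roughly 0.60 -> 0.14
        }
    }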

Planning the deception itself is the beginning of the process, which is composed by four

steps:

1. Decision of whether to deceive;

2. Selection of the deception object;

3. Selection of the form of deception;


4. Selection of a deception instrument.

This planning is very similar to the one used by Wagner and Arkin in their deceptive robot,

in Section 3.2. Moreover, a parallel can be drawn between some of these elements and DePaulo’s

Taxonomy in Section 2.2. The deception object is the belief the sender wants to manipulate.

This is the content of the lie if the sender decides to use a direct approach of deception, by

focusing its lie on that object. On the other hand, if an indirect approach is chosen, another

domain fact will be selected that influences the belief, and that will be the deception instrument.

In such a case, that would be the content of the lie. The floor being dry could be a deception instrument in the above example of fact implication. The form of deception (3) can be linked to

the type dimension of the Lie Taxonomy. However, Mouth of Truth also considers deceptive truths, which make use of erroneous beliefs, and scenarios where the receiver's distrust is exploited, in which the truth is told in the expectation that it will not be believed. This type of deceptive speech act was not featured in the Lie Taxonomy because these are in fact truths, not lies, although they

have a deceptive intent. The outcome of this planning process is the best action the sender

can perform to manipulate the receiver’s belief, which is then validated considering a number

of parameters like impact on the desired belief, plausibility and credibility of the deception

behaviour, safety about not being discovered, and computational cost.

The Mouth of Truth approach is a different perspective on how to implement a deceptive

behaviour mechanism, because it focuses on the manipulation of the uncertainty of beliefs. These

beliefs compose the agents’ mental state together with a set of reasoning rules. Because the

mental image of the receiver also has the same structure, hypotheses have to be made to define

these rules for it. This problem is solved using the restricted domain of the experiment scenario:

each agent only has the other as a source of information, thus the sender models the receiver’s

reasoning rules using its own set of rules. This leads to a limited simulative perspective of ToM

in practice, while conceptually it uses a theoretic approach, as described in Section 3.1. It should also be noted that this specific approach could not be implemented using a BDI model, because there are no concepts like desires and intentions. However, the conceptual ground is what is interesting to analyse and keep in mind: that deception is a communicative act that exploits the uncertainty of beliefs.


Chapter 4

A Mindreading Architecture

In this chapter we will describe the model we have designed to overcome the problem our work

tries to solve:

“How can autonomous agents behave deceptively and generate lies relevant to a specific context?”

We believe that modelling other minds is required to achieve complex deceptive behaviour,

hence following our hypothesis that the higher the reasoning abstraction level regarding other agents' thoughts, the better the results that can be achieved in deceptive tasks. Each element in

the model will be described and representative examples will also be given to illustrate their

mechanism.

4.1 Agent Model Overview

In order to build agents that can cope and behave intelligently in a social environment, we

propose a model which endows agents with Theory of Mind mechanisms (fig. 4.1). Our view is

based on the Mindreading model by Baron Cohen, explained in Section 2.7.1. This architecture

follows the BDI model approach of Simulation-Theory discussed in Section 3.1.

Figure 4.1: Proposed Conceptual Model for a Theory of Mind

An agent is defined by an Agent Model. The Knowledge Base (KB) is the structure where


all the propositions the agent knows to be true are present. It represents its beliefs and its

knowledge about events that have occurred. This information is the foundation for the agent’s

behaviour, as he acts upon the knowledge he has of the world. The Theory of Mind Mechanism (ToMM) component is responsible for modelling and managing other agents' mind representations. The agent will take this information into account when he wants to manipulate someone else. Finally, the Deliberation Means-Ends Reasoning component defines the planning capabilities of the agent. It is responsible for using the represented knowledge, both in the KB and in the ToMM, to achieve the agent's goals through the selection of the actions the agent will perform in the environment.

However, there is another type of model to describe an agent, as we need a structure to

populate the ToMM component.

4.2 Model of Other and ToMM Component

The Simulation-Theory approach (Section 3.1) defends that we represent others by simulating

ourselves in that same situation. The elements managed by the ToMM component follow the

same approach. They are simplifications of Agent Models, which we call Models of Others, so they can be simulated by the same processes used by the Agent Model itself. Updating these replications accordingly ensures they remain close to what they really represent.

The main difference is that Models of Others do not have a Deliberative component. Representing others' goals and intentions would require intention recognition capabilities, which is a complex area by itself. Therefore, the ID component in Baron Cohen's theory (Section 2.7.1) was not incorporated in the model, because it would require intentionality detection. We think one can still model high order intentions that aim to operate a change in someone's mind without this mechanism. Furthermore, the models in the last level of the Theory of Mind do not have a ToMM component either. In this case, they only represent themselves, and no further

entities.

An agent has a representation of other agents through their respective models stored in

the ToMM component. These models, also being a type of Agent Models, will have a ToMM

component, which will also store more models. Therefore this representation is done recursively

throughout several abstraction levels of Theory of Mind. This allows for N levels of nested

ToMs, which grants greater and more granular modelling power.

The global model hierarchy is represented in a tree-like structure. Figure 4.2 shows the

potential hierarchy of this model, considering M agents and N ToM levels. Each model is

described by AM, Agent Model, and its respective identifier. An arrow from AM A to AM B

indicates that A has a model of B in its ToMM Component.

It is important to note that at any abstraction level, a model does not represent itself in the

next level of its sub-tree. In other words, a model P in abstraction level L, which is the parent of a sub-tree of models, will not represent itself in level L + 1, the first level of its sub-tree, as that would only be redundant. However, it will be modelled from level L + 2 onwards, as those models will still reflect their image of P.

The finite set of nested models stored in the ToMM component has a cardinality defined by

Equation (4.1), where max is the maximum ToM representation level and t is the total number


Figure 4.2: Theory of Mind Model Hierarchy considering M agents and N abstraction levels.

of entities the agent is aware of, including himself.

|models| = \sum_{i=1}^{max} (t - 1)^i        (4.1)

This approach results in exponential spatial growth of the total number of models each agent has to represent, i.e. O(t^max). Considering a scenario with three agents, A, B and C, and a

second level ToM modelling ability, each agent would have to store six models in total. If we

consider this same example with a third level ToM, the total would go up to fourteen models

and with four levels to thirty models. Therefore, the complexity of this model scales with the

maximum level of ToM representative capacity we want to endow the agents with.
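A quick way to check Equation (4.1) and the counts above is the small Java snippet below (an illustration only, not part of the architecture):

    // t entities (including the agent itself), max ToM levels.
    final class ModelCount {
        static long countModels(int t, int max) {
            long total = 0;
            for (int i = 1; i <= max; i++) {
                total += (long) Math.pow(t - 1, i);   // (t - 1)^i models on level i
            }
            return total;
        }

        public static void main(String[] args) {
            System.out.println(countModels(3, 2)); // 6, the three-agent, two-level example
            System.out.println(countModels(3, 3)); // 14
            System.out.println(countModels(3, 4)); // 30
        }
    }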

The tree structure of the model hierarchy becomes very complex as more ToM levels are used, which significantly affects each update cycle of the agent. Throughout our work we will

focus on a second level ToM, keeping in mind that more levels could be used in exchange for a

slower reasoning cycle.

4.3 From First to Nth level Theory of Mind

The implicit purpose of our model is to extend a first level Theory of Mind approach, mitigating

its flaws. A one level Theory of Mind agent can only model what others are thinking about and

not further than that. This is especially evident in deception scenarios, because each additional level gives the capacity to counter a Theory of Mind one level lower. We have seen that deception requires explicitly wanting to change the mental state of another. Let us imagine agent A has a first level ToM and is trying to deceive agent B by manipulating the model it has of B. If instead agent B had a second level ToM, and if it knew about A's intentions, it could reason in a way that could prevent that manipulation. Since it could model what A thinks about it, it could prevent the condition which achieves the goal from becoming true.

The ability to model more than one level, whilst not strictly needed, achieves the level of


human adult reasoning capabilities in everyday social environments. Children start to develop a second level of ToM at around the age of 6, and so we want to endow our agents with this seemingly basic, but complex, mechanism of typically developing older children, aiming for a fully adult reasoning ability.

The first level ToM can take advantage of some simplifications that we must address if

we want to extend it. For instance, it does not need to express triadic relationships of the

type 〈Ag1〉:perceives:〈Ag2〉:perceives: 〈Proposition〉. The SAM mechanism is responsible for

generating such conditions, as we will explain in the next section.

4.4 EED and SAM Mechanisms

Our model preserves the Mindreading model of Baron Cohen only to some extent. The EED

mechanism is still present but was generalized to represent all types of sensorial perceptions,

although its purpose still remains mainly on what agents are seeing. Any sensorial component

is very domain dependent and this one is no exception. We can, for instance, assume that if

an agent is within a certain radius or in the same place, it perceives all the present objects and

entities, as well as all events that take place there.

The output of EED is a dyadic representation of the type 〈Ag〉:perceives:〈Proposition〉, which

reflects that the agent 〈Ag〉 has perceived a proposition 〈Proposition〉, be it an object, event or

another agent. This output is used to update the agent's own KB and first level ToM, and as input to the SAM mechanism.

The SAM mechanism is fed by the output of the EED component and processes it in order to determine which minds perceive what, creating triadic relations for each one. For example, if an agent 〈Ag1〉 perceives 〈Ag2〉, and 〈Ag2〉 perceives 〈Proposition〉, both outputted by EED, then SAM can cross this information to create the triadic representation 〈Ag1〉:perceives:〈Ag2〉:perceives:〈Proposition〉. The resulting representations of EED and SAM are then used to update the agent's knowledge according to the principle that “seeing leads to knowing”, i.e. a perceived proposition is added to one's own KB, and it also updates the KBs of other relevant agent models, including the nested ones.
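The crossing step performed by SAM can be sketched as follows; the record and class names are hypothetical illustrations, not actual implementation classes:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;

    record Dyadic(String perceiver, String proposition) {}          // <Ag>:perceives:<Proposition>
    record Triadic(String ag1, String ag2, String proposition) {}   // <Ag1>:perceives:<Ag2>:perceives:<P>

    final class SamSketch {
        // If Ag1 perceives Ag2 and Ag2 perceives P, output Ag1:perceives:Ag2:perceives:P.
        static List<Triadic> cross(List<Dyadic> eedOutput, Set<String> knownAgents) {
            List<Triadic> result = new ArrayList<>();
            for (Dyadic outer : eedOutput) {
                if (!knownAgents.contains(outer.proposition())) continue;   // outer must perceive an agent
                for (Dyadic inner : eedOutput) {
                    if (inner.perceiver().equals(outer.proposition())
                            && !knownAgents.contains(inner.proposition())) {
                        result.add(new Triadic(outer.perceiver(), inner.perceiver(), inner.proposition()));
                    }
                }
            }
            return result;
        }
    }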

The dyadic representations directly update the self KB and the first level models. Let us con-

sider that 〈Ag1〉 perceives 〈Ag2〉:perceives:〈Proposition〉. Being a dyadic relationship, we would

select the mind of 〈Ag2〉 in the first level of the ToMM component of 〈Ag1〉, and then update its

KB. If however 〈Ag1〉 has perceived a triadic relationship like 〈Ag2〉:perceives:〈Ag1〉:perceives:

〈Proposition〉, we would need to update a model in the second level ToM. 〈Ag2〉 would be used

to select the model at the first level, and using its ToMM component we would select 〈Ag1〉, proceeding to update its KB. This way 〈Ag1〉 would be able to represent the knowledge that he believes 〈Ag2〉 believes he knows about 〈Proposition〉.

4.5 Creating and Updating Models

It is important to address the topic of how models are created. The EED is used to perceive

other entities. Whenever the agent sees a new entity in the environment it initializes a new

model to represent it in its own mind, in the ToMM component. The memory will begin with no


information and will be updated throughout the simulation, as the agent perceives the other agent itself receiving perceptions. When the agent receives an event it will update its beliefs

and memory. That same perception is used to update the models of others in the same way,

simulating the changes in their internal states.

Let us consider a few examples to demonstrate how these same mechanisms work after models

have been created. Imagine an agent called John and a candy which can be eaten. The perception

John:perceives:Candy(isEdible) updates the KB of John, both in his own model and in the models

others have of John. A triadic representation like John:perceives:Mary:perceives:Candy(isEdible)

is used to, first, select models of John, and then, select the model of Mary inside those models

of John. It is important to consider three different cases: (1) the perception is received by John,

(2) it is received by Mary, and (3) it is received by a third agent, for example Annie. The first

example represents the case where John has to infer and update the model that represents him

to the eyes of others, so that higher order models remain updated. In such a situation every

model that, in turn, has a model of John is selected. On the first layer of the ToM hierarchy

there is no model of John, because there is no need to model what John would think about

himself. In turn, the model of every other entity John knows is there. Each one of them is

selected and the process proceeds. Considering one of them at a time, the model of John is then

selected, which is already on a second abstraction level. It is now possible to update it, changing

the views that both Annie and Mary think John has of Mary. The second case scenario

is identical to the first one. The representation of both John and Annie must be selected, and

one at a time, the model of Mary that these representations hold is updated. The third and last case is

where it is Annie that receives the perception. Annie has a model of John which belongs to the

first level ToM, depicting what Annie thinks John thinks about. Upon selecting this

model, we proceed to select the model of Mary inside John's model, at a second abstraction level. It is then updated, possibly adding information to the view that Annie thinks John has of Mary.
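A minimal sketch of this selection-and-update process, assuming a simplified nesting of models where the KB is a set and the ToMM is a map (the names are illustrative, not the real classes), could look like this:

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // The path of agent names selects the nested model to update ("seeing leads to knowing").
    final class NestedModelUpdate {
        static final class Model {
            final Set<String> kb = new HashSet<>();
            final Map<String, Model> toMM = new HashMap<>();
            Model child(String agent) { return toMM.computeIfAbsent(agent, a -> new Model()); }
        }

        // path = [] updates the agent's own KB, ["Mary"] its model of Mary,
        // ["Mary", "John"] the model of John inside the model of Mary, and so on.
        static void update(Model self, List<String> path, String proposition) {
            Model target = self;
            for (String agent : path) target = target.child(agent);
            target.kb.add(proposition);
        }

        public static void main(String[] args) {
            Model john = new Model();
            update(john, List.of(), "Candy(isEdible)");               // John's own KB
            update(john, List.of("Mary"), "Candy(isEdible)");          // what John thinks Mary knows
            update(john, List.of("Mary", "John"), "Candy(isEdible)");  // what John thinks Mary thinks he knows
        }
    }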

4.6 Deliberation and Means-Ends Reasoning

The second purpose of a Theory of Mind is to be able to make use of the knowledge it represents.

Thereby, specific mechanisms have to enable the deliberative process to use this information.

Modelling explicit goals that involve changing the mental state of others cannot be achieved

otherwise. The Deliberative and Means-End Reasoning Component (Deliberation Component

in short) makes use of conditions to test the state of the world, either the property of an object

or entity, or by testing events that have occurred. A deliberative process without Theory of Mind

would only need conditions that are confined to the context of the agent’s own mind, its own

KB. Our work proposes that each agent also stores models of others, and so we need to be able to

specify that in conditions as well. Extending them to model dyadic and triadic relations achieves the intended purpose: 〈Ag〉:knows:〈Proposition〉 and 〈Ag1〉:knows:〈Ag2〉:knows:〈Proposition〉, which we have used naturally before. Any time the Deliberative component finds one of these

conditions, it will first verify which ToM it has to select to test the condition against. It is a

similar selection process to those explained in Section 4.5 to update ToM agent models. If a

condition does not specify a relation it will be tested against the agent’s own model, in other


words, its own KB. With this mechanism it is now possible to have goals that consider the mental

state of another agent. Let us imagine John wants to trick Mary into believing that a candy is

spoiled. The desired state of the world is that Mary believes Candy(isEdible) to be false, while

John believes Candy(isEdible) to be true. These conditions, called success conditions, need to

be verified for John to achieve the goal. The first condition is true if according to the model

John has of Mary the candy is not edible, in other words Candy(isEdible) is false, while the

second condition is true if it holds in John’s own KB.

It is also important to note that the planning mechanism needs to support this kind of conditions and effects that refer to a certain model of Theory of Mind. The Deliberative component is responsible for creating plans to achieve certain goals. First, these goals have to be selected and then computed by the planning mechanism to create a plan. Plans are made of a set of actions which, when performed successfully, achieve the selected goal. In order to create plans we need a planner, which was achieved using an extension of STRIPS [10] that defines actions as operators, defined by tuples OP〈Ag,N,P,E〉, where Ag is the agent who executes the action, N is the name of the action, P is the set of conditions that need to be verified before the action can be executed, its pre-conditions, and E is the set of conditions that hold after the action is performed, its effects. We need to allow these pre-conditions and effects to represent dyadic and triadic relations, for example SELF:knows:Candy(isEdible) and SELF:knows:Mary:knows:¬Candy(isEdible).

Typically there are two types of effects we need to be able to represent: global effects and local effects. The first type represents those that are known to every agent, and can be represented as *:〈mental-state〉:〈proposition〉. For example, *:knows:Candy(isObject) is such an effect. The second type represents those that are only known to a specific entity and have a localized effect. Dyadic and triadic effects can then be defined as 〈Ag〉:〈mental-state〉:〈proposition〉 and 〈Ag1〉:〈mental-state〉:〈Ag2〉:〈proposition〉. An example of each of these cases would be John:knows:Candy(isEdible) and John:knows:Mary:knows:¬Candy(isEdible).
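Purely as an illustration (not FAtiMA code), an operator with ToM-aware conditions could be represented as follows, where an empty path means the condition is tested against the agent's own KB and a non-empty path selects a nested model; the operator name EatInSecret is hypothetical:

    import java.util.List;

    // Sketch of the OP<Ag,N,P,E> tuple with ToM-aware conditions.
    record Condition(List<String> tomPath, String proposition, boolean expectedValue) {}
    record Operator(String agent, String name, List<Condition> preConditions, List<Condition> effects) {}

    class OperatorExample {
        public static void main(String[] args) {
            Operator eatInSecret = new Operator(
                "John", "EatInSecret(Candy)",
                List.of(new Condition(List.of(), "Candy(isEdible)", true)),        // SELF:knows:Candy(isEdible)
                List.of(new Condition(List.of("John"), "Candy(wasEaten)", true))); // local effect: only John learns it
            System.out.println(eatInSecret);
        }
    }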

4.7 Concluding Remarks

This model does not take into account the system from which the agents receive their perceptions.

However there must be one, as this architecture only defines the agents’ mind. Regardless of

the independence of our system to the world simulation, the domain of this research work

requires some additional care regarding this issue. Our work is designed to endow agents with

the capacity to lie. This requires preventing a total view of the world, so as to allow incomplete and possibly wrong knowledge, as we have seen in Section 3.3. Thus the simulation world has to permit events and perceptions that are only received by a subset of agents. These agents can then

deliberate over this information others do not have and use it in their favour.


Chapter 5

Implementation

Having explained the model for endowing agents with representative abilities to model the

minds of others, which we believe is ultimately required for deception, we will now explain how

we implemented the aforementioned mechanism. The frameworks and architectures with which

the model was integrated will be explained, with special focus on how the update mechanisms

were performed. We finally proceed to a full description of the case study we developed to test

the validity of our model and its ability to generate deceptive behaviour.

5.1 FAtiMA Modular Architecture

FAtiMA [9] [16] is an autonomous agent architecture implemented in Java¹ with planning capabilities, which we used to integrate our model. Its purpose is to build virtual characters which behave and reason in a way that is influenced by their emotional state and personality. Their behaviour is meant to be believable and to create empathic reactions in users. Because each

agent can potentially have a different personality from one another, their reasoning will differ

according to the situation. In such cases an emergent effect can occur, where actions and reactions unfold, creating a flow of interactions that was not explicitly intended. The personality of an agent is defined by the following set of traits: (1) emotional thresholds and decay rates, which affect how emotions evolve; (2) a set of goals that the agent can possibly pursue; (3) a set of emotional rules, which define the emotional response to an event; (4) a set of action tendencies,

in other words, reactive actions to an event. Therefore, each agent will subjectively evaluate a

situation based on its personality and behave differently according to its goals.

These approaches have set the guidelines for the architecture’s development, which is now

composed by two main complementary parts: FAtiMA Core and FAtiMA Modular Components.

5.1.1 FAtiMA Core

FAtiMA Core, as the name states, is the backbone that provides the basic algorithms that

generally define the core aspects of its functionality. The global model of an agent in FAtiMA is depicted in Figure 5.1, where the most important elements are shown.

¹ www.java.com


Figure 5.1: FAtiMA Core Architecture

The affective state stores the emotions of the agent and is responsible for their decay, which

can be modified by the appraisal process. The memory holds all the knowledge and event occurrences that the agent has perceived. The action selection mechanism ultimately defines the agent's behaviour, through its actions.

Each one of these elements represents a mechanism that has to be implemented by modular components integrated into the architecture. Hence, it is important to mention that FAtiMA Core does not commit itself to particular implementations of each of these mechanisms. An agent defined solely by the FAtiMA Core Architecture will not do anything, as it does not define how the agent appraises events or how it deliberates over a plan to achieve a certain goal.

The pseudo code of the agent’s main process in FAtiMA Core is described in Figure 5.2.

while(shutdown != true)
    for each Component c
        c.update();
    e <- perceiveEvent();
    if(a new event e is perceived)
        memory.update(e);
        memory.performInferences();
        for each Component c
            c.update(e);
        aF <- newAppraisalFrame(e);
        for each AppraisalComponent aC
            aC.startAppraisal(e, aF);
            updateEmotions(aF);
        for each AppraisalComponent aC
            aC.reAppraisal();
            updateEmotions(aF);
    for each BehaviourComponent bC
        bC.actionSelection();
    a <- selectAction();
    executeAction(a);

Figure 5.2: FAtiMA Core Pseudo Code [16]


The agent is in an endless loop processing events and updating its internal state. As we

can see, FAtiMA Core is only responsible for notifying the components about new events, and

making use of their output to trigger the agent’s behaviour through the execution of actions.

The agent can receive either external or internal events to begin this process. An external event

is a change in the environment, while an internal change corresponds to events in the agent’s

internal architecture, for example, succeeding or failing a goal. Upon receiving an event, the memory is updated and the agent becomes aware that the event happened. At this stage, specifically defined inference operators are triggered to test the new state of the memory and possibly infer

new knowledge. Each component is then updated with the received perception. After this the

appraisal process takes place, which is of little relevance to our work. We only need to keep in

mind that it updates the emotional state.

The loop proceeds to the final part where the action that the agent will perform is chosen.

Taking into account all the components that provide behaviour functionalities, a set of possible

actions are outlined.

In the following section we will mention a couple of the most important components and start to analyse the Theory of Mind Component in more detail.

5.1.2 FAtiMA Modular Components

Agents that are only defined by FAtiMA Core will not do anything because modular components need to be integrated into the system. They must implement the relevant interfaces to add specific

functionalities. Each component can implement one or a subset of these mechanisms, as it is

not required to implement all of them. Upon initialization of FAtiMA Core, each component is

added depending on that specific agent instantiation definition.

We will proceed to discuss the most relevant components used in our work.

Behaviour Components

There are two types of behaviour components currently implemented, the Reactive and Delib-

erative Components.

The Reactive Component uses predefined rules described by the agent's personality to endow it with reflexive behaviour in response to events. Action tendencies and emotional reactions trigger when

the event specified in those rules occurs, and give an action and emotional response respectively.

This is a very simple and limited mechanism in the sense that it would only allow the agent to

respond reactively to events in the world.

Our work made more use of the Deliberative Component, which implements deliberation and means-ends reasoning, as it enables planning capabilities and goal-oriented reasoning. Goals are described in the agent's personality and are the defining traits of the character's behaviour. They have preconditions that must all be met before the goal can be activated. A goal is achieved when its success conditions are met, or it can fail if any of its fail conditions is verified.

Goals' preconditions are tested in each reasoning cycle against the agent's Knowledge Base (KB), its Memory (5.2). An intention is created when a goal is activated, which represents the

agent’s commitment, and is represented by all the plans that can ultimately achieve the goal.

Plans are generated by a continuous planner so that the state of the world reflects the success


conditions of the goal. Initially, a planner starts with an empty plan that has only two special

steps: the start step and the final step, the current and desired state of the world respectively.

In each deliberative cycle the planner will focus on the best plan and try to choose an action that satisfies one of its open preconditions. An open precondition is a step's precondition that has not yet been achieved by the plan. When a plan has no open preconditions, the goal has been achieved.
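The core of this open-precondition loop can be sketched as follows; this is a deliberately simplified illustration (no threat resolution or plan ranking), not FAtiMA's actual continuous planner:

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;
    import java.util.Set;

    final class PlannerSketch {
        record Step(String name, List<String> preconditions, List<String> effects) {}

        static List<Step> plan(List<String> goalConditions, Set<String> startState, List<Step> actions) {
            List<Step> plan = new ArrayList<>();
            Deque<String> open = new ArrayDeque<>(goalConditions);    // open preconditions of the final step
            while (!open.isEmpty()) {
                String condition = open.pop();
                if (startState.contains(condition)) continue;         // already satisfied by the start step
                Step chosen = actions.stream()
                        .filter(a -> a.effects().contains(condition))
                        .findFirst()
                        .orElseThrow(() -> new IllegalStateException("no action achieves " + condition));
                plan.add(0, chosen);                                   // insert the action before the final step
                open.addAll(chosen.preconditions());                   // its preconditions become open
            }
            return plan;
        }
    }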

Actions are defined similarly to goals. They are defined by preconditions that must be

verified before the action can be executed, and have effects, which are conditions that are true in the world after the action is executed. Further examples of actions and goals will be shown

when we discuss the 1st level Theory of Mind.

Goals and actions are defined in authoring files that describe the actions that can be executed

in the world, and all the goals that agents can have. These are defined in XML files which are taken as input to the overall system. In this way we enable the design of scenarios and agents

without modifying code in the architecture. Sections 5.2 and 6 show examples of authoring in

FAtiMA Modular.

Theory of Mind 1st Level

FAtiMA Modular already has a component that implements a first level Theory of Mind. We

will now address it before heading on to explain the full extent of our work. As mentioned in

Section 2.7 a ToM has two main functions. The first one is to represent models of other agents,

including their beliefs, motivations, emotions and past experiences. This is easily achieved by

replicating the agent’s own model structure, taking the simulation approach as described in

Section 3.1. The agent will therefore model each of the other agents in the environment as a

replication of himself. Thus, both the agent’s own defining class, AgentCore, and the class which

defines others' models, ModelOfOther, implement the interface IAgentModel so that they can be treated equally during the agent's deliberative process.

Although the basic structures are the same, as we have explained in Chapter 4, they have

some small simplifications mainly regarding components. Because this is only a first level Theory

of Mind, models of others do not have a ToM component. For instance, agent A can only

represent the model of agent B, but cannot represent the model agent B has of agent C. Another

big simplification made, as mentioned in Section 4.2, was that models of others do not represent goals, intentions or reasoning components.

Models are updated in the same way we explained in Chapter 4. For simplification reasons, agents perceive events that happen close to them, implementing a simple approach to the EED component mentioned in Section 4.1.

To use the knowledge stored in ToM models, the agent needs to have explicit goals regarding the minds of others. We have seen that goals are defined in terms of conditions which must be tested. Usually these would be tested against the agent's own model and knowledge base. Since we also want to use information stored in the internal states of models of others, the first step was to enable the representation of conditions in terms of models of others, so they can be used like normal conditions on the agent itself. This is represented in XML by adding the tag “ToM”

to conditions. Figure 5.3 shows an example of a property which is verified if the proposition


“Candy(isEdible)” is not true in Mary’s model.

<Property ToM="Mary" name="Candy(isEdible)" operator="!=" value="True"/>

Figure 5.3: An example of an authored Property

This can either be verified if the proposition is false in the memory of Mary’s agent model,

or if the proposition is simply not in the KB. A property like that can be part of a goal's pre-conditions or success conditions, thereby making use of others' mental states in the reasoning

process of the agent. Whenever the Deliberative Component finds a condition it will test it

against the relevant agent model. If no ToM tag is specified, the component will test it against

the agent's own structures, otherwise it will ask the ToM Component for the correct model and

test it then. With this mechanism we can represent goals that depend on the internal states of

others. For instance, let us take the example that John wants to deceive Mary about eating a

candy. Let us imagine they both want to eat the candy, but John has a goal where he does not

want Mary to know he has eaten the candy. Thereby, John has a goal which succeeds if he eats

the candy while Mary does not know about that. A goal for this objective can be authored as

shown in Figure 5.4.

<ActivePursuitGoal name="GluttonDeceive(Mary)">
  ...
  <SuccessConditions>
    <Property ToM="SELF" name="Candy(wasEaten)" operator="=" value="True"/>
    <Property ToM="Mary" name="Candy(wasEaten)" operator="!=" value="True"/>
  </SuccessConditions>
  ...
</ActivePursuitGoal>

Figure 5.4: Example of an authored Goal

The desired state of the world is depicted by the success conditions. In this case, John wants

Mary to believe that a certain property of the object Candy, wasEaten, is not true, while John

knows himself that it is. The first condition is verified if “Candy(wasEaten)” is true in John’s

KB, while the second is verified if “Candy(wasEaten)” is false or not represented in the KB of the model of Mary that John maintains. To achieve this goal, the planner has to reason about actions that change other agents' mental states as well. Thus, changes have to be made to actions' effects. At least two types of effects have to be accounted for in planning operators, Global Effects and Local Effects.

• Global Effects - an effect that is perceived by all the agents in the scene. This type of

effect does not have a ToM tag specified.

• Local Effects - an effect that is only perceived and acknowledged by a particular agent

specified in the ToM tag.

In this way we can hide information from agents and manipulate their knowledge and beliefs. Figures 5.5 and 5.6 show two examples, of global and local effects respectively.

The examples show two different operators to eat an item that is edible. In the first example, the action's effect is global, thus every agent will know that the object was eaten. On the


<Action name="Eat([food])"><PreConditions>

<Property ToM="[AGENT]" name="[food](isEdible)" operator="=" value="True"/></PreConditions><Effects>

<Property name="[food](wasEaten)" operator="=" value="True"/></Effects>...

</ActivePursuitGoal>

Figure 5.5: Example of an Action with global effects

<Action name="EatHidden([food])"><PreConditions>

<Property ToM="[AGENT]" name="[food](isEdible)" operator="=" value="True"/></PreConditions><Effects>

<Property ToM="[AGENT]" name="[food](wasEaten)" operator="=" value="True"/></Effects>...

</ActivePursuitGoal>

Figure 5.6: Example of an Action with local effects

other hand, the second example shows a case where the effect is local, and only the agent that

has performed the action, described by the variable [AGENT], knows that the object [food] was eaten. Regarding the planning process, a condition is matched by a local effect only if it refers to the same type of condition, in this case a property condition, and also to the same ToM. Global effects

and conditions are matched simply if their conditions are the same, since there is no ToM to

match. Taking the example of the goal depicted in Figure 5.4, the planner would try to match

its first success condition to the Eat operator. However, the effect of that action would conflict

with the goal’s second success condition, in which case that plan would be dropped. After that

the planner would try to match the action EatHidden, whereby it would satisfy the first success

condition without interfering with the second one. This operator would then be chosen and

executed to fulfil the agent’s intention.

5.1.3 Concluding Remarks

FAtiMA Modular is a complex architecture, with modules designed to implement many features

of the agent behaviour mechanisms. In this section we have only described the most important

ones that will be used to develop our work, with special care to the Theory of Mind Component.

With this in mind, we can now explore what was done to achieve a Theory of Mind with more

than one level. To sum up all the components described in this section, Figure 5.7 shows the

global class dependencies in FAtiMA Modular.

Many more components are implemented but were not used in this work, such as the following components: Social Relations, Drives, Empathy and Emotional Intelligence.


Figure 5.7: FAtiMA Modular Global Class Dependencies

5.2 Implementing a Mindreading Agent

Now that we have seen how the most basic components of FAtiMA Modular work, we can focus

on the work developed in this thesis. As we saw in Section 5.1.2, there are many components that are already implemented and endow agents with behavioural characteristics, among which our work focuses especially on the Theory of Mind Component. Although the component already exists, it does not completely fulfil the purpose it could, as we discussed in Chapter 4: the capacity of the Theory of Mind Component is limited by having only one level. We will now explain how we implemented the conceptual model presented in that same chapter.

5.2.1 Theory of Mind - Second to Nth level

The basic rationale behind our work is to create nested levels of models which represent the

other agents in the environment. Eventually N levels can be represented, but as we saw in Chapter 4 the complexity grows exponentially with the number of levels. If we remember that each one of these models is treated as the agent's own and updated with the events in the environment, the overall performance of the system would be severely jeopardized. Therefore, we will focus on a second level Theory of Mind, taking into account that the abstraction level

could be increased at the expense of the agent's responsiveness. Third and fourth levels

were also tested. However, these levels are much more complex to both understand and test,

and thus we focused on the second level.

5.2.2 Creating Model Of Others

Let us now discuss how agent models are created. When the simulation begins, the agent

receives a list l with m agent names, representing all other entities in the scene. This is the

first perception the agent receives. In the first level ToM version we would only need to create a

model for each of those agents and store them in the ToM Component. Now we want to create

models to represent N levels of Theory of Mind. The specific number of levels, a variable called

maxToMLevel, is received as an argument of the ToM Component upon its initialization. In the

following example we will use a maximum level of 2. The rationale is to recursively initialize all

models, creating a tree-like hierarchical global structure. Figure 5.8 represents the structure of


agent A's ToM in a scenario with 3 agents, A, B and C, where an arrow from A to B indicates

that A has a model of B in its Theory of Mind component.

Figure 5.8: Theory of Mind Model Hierarchy considering 3 agents and 2 abstraction levels.

As we can see, a model does not represent itself in the immediate subsequent ToM level. In

this specific example, agent A would receive a list composed of two elements, B and C. The agent

would then proceed by considering all agents in the scenario, also adding itself to this list, and beginning the recursive initialization until it reaches the maximum level of ToM abstraction,

much like a depth-first algorithm. For every model m considered it would only create models

for all other agents and not one for m itself, resulting in a tree structure like the one described

in Figure 5.8. Every model would be initialized upon creation, along with all its components.

The decision of what components to instantiate is the result of the following process: for each

component used by the parent node, if it implements the interface IModelOfOtherComponent

then it is recreated in the new model. In simple terms, this describes components which can duplicate themselves and be used by models of others' minds, ModelOfOther. Components such as

the Reactive and Theory of Mind components implement this feature and are in fact instantiated

in each subsequent model.

However, it is important to note that models in the last level, in Figure 5.8 the second

level, will not have a ToM Component. When a model is being initialized in a level equal to

maxToMLevel, ToM Components will no longer be created. These are the leaves of the overall

tree structure, and therefore will only represent themselves and no further ToM levels.
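A minimal sketch of this depth-first initialization, assuming a simplified ModelOfOther that keeps only a name and a ToMM map (the real class carries full components), could be:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    final class ToMInitSketch {
        static final class ModelOfOther {
            final String name;
            final Map<String, ModelOfOther> toMM = new HashMap<>();  // stays empty at the leaves
            ModelOfOther(String name) { this.name = name; }
        }

        static void init(ModelOfOther parent, Set<String> allAgents, int level, int maxToMLevel) {
            if (level > maxToMLevel) return;                 // leaves carry no ToM component
            for (String other : allAgents) {
                if (other.equals(parent.name)) continue;     // a model never represents itself one level below
                ModelOfOther child = new ModelOfOther(other);
                parent.toMM.put(other, child);
                init(child, allAgents, level + 1, maxToMLevel);
            }
        }

        public static void main(String[] args) {
            ModelOfOther agentA = new ModelOfOther("A");
            init(agentA, Set.of("A", "B", "C"), 1, 2);       // agent A, three agents, maxToMLevel = 2
            // agentA.toMM now holds models of B and C, each holding models of the other two agents
        }
    }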

After the instantiation process ends, the agent will trigger an innate objective to perform the “look-at” action on all the entities he knows of, in other words, the agents corresponding to the models in the ToM's first level. Thus, the following perceptions will be related to the properties of others. For instance, agent A in Figure 5.8 would perceive the properties of agents B and C.

However, we did not want to limit this initialization stage to a one-shot process. If we

imagine that agents A, B and C are in a room, a similar process would have to be triggered if an entity D entered the room in the middle of the simulation. That entity could be the User, for

example. In such a case, agent A would receive an “entity-added” perception telling him there

is a new entity, and a similar process would ensue: agent A would “look-at” the User, receive

its properties through perceptions, update its knowledge base and the ToM models.

We also chose not to delete Theory of Mind models in the opposite scenario, where the agent receives an "entity-removed" perception. In such a case, it would still be useful and


interesting to maintain such models for further planning operations. However, this was simply

an implementation option as it is highly context dependent.

5.2.3 Updating Model of Others

The second problem we must address is how to update those models, now that we have created

them. As we mentioned, the agent will start by looking at all the entities in the scenario and will

perceive their properties, updating its memory accordingly. Let us take the following example:

John and Mary want to eat the candy that is on the floor. As usual, we will discuss a version with a maximum of 2 levels of Theory of Mind. Given what was discussed in the previous Section, John will have a model representing Mary. This model will, in turn, have a

model of John, which depicts what John thinks that Mary thinks about him. Figure 5.9 shows

the resulting hierarchy.

Figure 5.9: Example of Theory of Mind Model Hierarchy considering 2 agents and 2 abstractionlevels.

When John sees the candy, he will perceive it as food, for example receiving the property "Candy(isEdible)", which will be stored in his memory. He will perceive that Mary has seen

the candy as well, resulting in triadic relationship of the type 〈John〉:perceives:〈Mary〉:perceives:

〈Candy(isEdible)〉, mimicking the SAM component we mentioned in Chapter 4. In fact, upon

perceiving that Mary has seen the candy, an internal mechanism will trigger in the ToM compo-

nent and every property John knows about the candy will be reflected in Mary’s model, following

the aforementioned simulative approach and thereby creating triadic relationships.

However, the mechanism just mentioned only updates up to the first level of ToM. How can we propagate information to further levels? The answer is to make the mechanism recursive. Every model should be updated independently and individually with each perception the agent receives, and that is what we do. In the first level ToM version, the component would simply take every property of the target that was seen and duplicate it in the model corresponding to the subject that performed the "look-at" action. Because we have assumed the theory that "seeing leads to knowing", we can infer that if someone saw something, then they know about it. We further assume that they would come to know the same things we do. To expand this notion, we perform a recursive update on the relevant models.

When any agent perceives that 〈Subject〉:looked-at:〈Target〉, a similar update mechanism will

trigger in the ToM component. Our version traverses the tree hierarchy of ToM models, updating


the ones which represent the subject of the look-at action, 〈Subject〉. Following our concrete

example, after John has perceived the candy's properties he will also receive the perception that

he has finished the action “look-at” with the target candy, i.e. 〈John〉:looked-at:〈Candy〉. In

such a case, every model reflecting John in its ToM models’ hierarchy will be updated. As we

can see in Figure 5.9, there is one such model in the second level corresponding to John’s view

of Mary. Therefore John would think that Mary knows that he knows about the candy and all

its properties.
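As a simplified illustration of this traversal (building on the sketch given in Section 5.2.2 and again using our own names, not FAtiMA's; the visibility check introduced in the next subsection is deliberately left out here), the recursive update could look like this:

# Sketch of the recursive "seeing leads to knowing" update.
def on_looked_at(models, subject, target_properties):
    """Update every model representing `subject`, at any depth, with the
    properties the perceiving agent knows about the target."""
    for model in models.values():
        if model.name == subject:
            model.memory.update(target_properties)
        on_looked_at(model.models, subject, target_properties)

# John perceives <John>:looked-at:<Candy>: every model of John in his ToM tree,
# such as the one inside Mary's model, learns the candy's properties.
# on_looked_at(tom_of_john, "John", {"Candy(isEdible)": True})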

Property Visibilities

There is yet another problem we need to address. We have seen in Section 4.6 that actions have

both global and local effects. When someone perceives an event, it does not mean that everyone

will be aware of that as well, which is what local effects try to represent. This ensures that

agents can have information others do not, which is a necessary requisite to perform all forms

of deception.

So far we stated that all properties of an object or entity are propagated through the models

hierarchy, updating the relevant ones. However, models must remain coherent with local effects,

thereby not propagating them to all models. To achieve this goal we need to track all properties

the agent receives. For this purpose, the Theory of Mind component has a global structure,

propertiesVisibility, to map properties to a list of ToM model names against which they are to

be tested. It keeps, in fact, information about properties’ visibility to other agents in the world,

hence the name. Upon perceiving a property of an object in the world, either by receiving it as an effect of an action or by looking at the object, in addition to performing all the mechanisms we have already discussed, the ToM component will also add the effect to the list corresponding to

that property in propertiesVisibility. For example, when John perceives:

<Property name="Candy(isEdible)" operator="=" value="True"/>

It will fetch the list corresponding to property Candy(isEdible) from propertiesVisibility, and

add a global effect, which is represented by the symbol *.
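A minimal sketch of how such a structure might be filled (assuming, as in the earlier sketches, a plain dictionary mapping property names to visibility lists; register_visibility is our own illustrative helper, not a FAtiMA call):

# propertiesVisibility as a plain dictionary: property name -> list of model
# names allowed to know it, with "*" standing for the global effect.
properties_visibility = {}

def register_visibility(prop, tom=None):
    visibilities = properties_visibility.setdefault(prop, [])
    visibilities.append(tom if tom is not None else "*")

register_visibility("Candy(isEdible)")              # global effect, symbol *
register_visibility("Candy(Flavour)", tom="John")   # only John knows the flavour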

Let us imagine that somehow John knows that the candy in the room has strawberry flavour

but Mary does not. In such a case, John would also perceive the following property, represented

in XML:

<Property ToM="John" name="Candy(Flavour)" operator="=" value="Strawberry"/>

The property would be stored in John’s memory and the ToM component would also add

to propertiesVisibility a new key-value pair, respectively Candy(Flavour) mapped to the list

with one element, (John). Upon receiving new information, the ToM component will trigger the

recursive model update we have mentioned. During this stage we use in fact the information

stored in propertiesVisibility. John knows, among other things about the candy, that it has a flavour, and he will try to propagate that information to other models. However, before adding

a new proposition to a model, a visibility check is done to ascertain if that model should know

about that. If that model’s name is contained in the visibilities list for that property, then

that model can perceive it. Considering Figure 5.9, the ToM component would try to update


Mary’s model with Candy(Flavour), but it would fail because the list of Candy(Flavour) in

propertiesVisibility only contains one element, John.

Following the recursive propagation alone, in the second level, John's model contained in Mary's model would in fact be updated with Candy(Flavour), being "perceived" by that model. However, this would not be coherent: if Mary does not know that the candy has a flavour, she cannot think that John knows about it. Therefore we need an extra step in the check so

this does not happen. We did this by adding an attribute to the class ModelOfOther, called

predecessorMinds, a list which keeps the names of all models in the path from that specific model to the root of the tree hierarchy. The predecessorMinds list is created upon creation of a model, duplicating the one of the parent mind node and adding the parent's name to the list. For

instance, in Figure 5.9 John’s own model would have an empty predecessorMinds list, Mary’s

model would have a list composed of one element, John, and John’s model in Mary’s model

would have a list composed of two elements, John and Mary. When a model m is being updated with a proposition P, the definitive check we do is as follows (a minimal code sketch is given after the list):

1. Get the list visibilities from propertiesVisibility, representing the visibilities of P.

2. Get the list predecessors from m’s predecessorMinds.

3. Test if predecessors list plus the element m is contained in visibilities.

4. If it is, then that model can perceive P; otherwise the algorithm stops following the remaining sub-tree and continues the recursive process elsewhere.
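The following sketch summarises steps 1 to 4 (reusing the illustrative structures from the previous sketches; as before, this is a simplification and not the actual FAtiMA code):

# Steps 1-4 above in sketch form; `model` is a ModelOfOther and
# properties_visibility is the dictionary filled in the previous sketch.
def can_perceive(model, prop):
    visibilities = set(properties_visibility.get(prop, []))
    if "*" in visibilities:                    # global effect: visible to all
        return True
    required = set(model.predecessor_minds) | {model.name}
    return required.issubset(visibilities)     # step 3

def propagate(models, prop, value):
    """Recursive update that prunes sub-trees failing the visibility check (step 4)."""
    for model in models.values():
        if not can_perceive(model, prop):
            continue                           # stop following this sub-tree
        model.memory[prop] = value
        propagate(model.models, prop, value)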

Let us return to the example about the candy’s flavour. We saw that because only John

knows about this proposition, the visibilities of Candy(Flavour) is described by a list of one

element, John. The model of Mary has a predecessorMinds list composed of the same one

element, John. From step 3, we would need to consider the list (John, Mary), resulting from

Mary’s predecessors plus the element Mary itself. Because not all of this list’s elements are

contained in Candy(Flavour) visibilities, Mary’s model would still not be updated.

Regarding the representation Mary’s model has of John, we would reach the same outcome.

The predecessorMinds list of John's model has two elements, John and Mary. Not all the elements in this list are contained in the Candy(Flavour) visibilities, thus preventing John's model from

being updated.

Inferences

Inferences also play an important role in the updating process of agent models. They enable the implementation of the SAM Component mentioned in Section 4.4. Inferences are defined in the

agent authoring files as a special type of actions, which will not be executed but rather triggered

when new information is added to the agent’s memory. As we have mentioned in the beginning

of this Section we have adopted a simulative approach, whereby each model has an independent

reasoning process. This also applies to the inference process. We have changed this specific

mechanism to be done independently by each model as well. Considering the pseudo code

shown in Figure 5.2, we can see that every component is updated in each cycle. Regarding the


ToM component update, it will proceed in a similar fashion, updating every model sequentially.

Upon being updated, if any new knowledge was added to the model’s memory since the last

update, the inference process will trigger and all operators will be verified. Let us return to the example of John, Mary and their candy on the floor. Imagine we want to write an inference which would describe that everyone knows John does not consider anything that is on the floor to be edible. Such an inference is shown in Figure 5.10 in XML.

<Action name="Inference-OnFloorNotEdiblebyJohn([item])"><PreConditions>

<NewEvent action="look-at" subject="John" target="[item]"/><Property name="[item](OnFloor)" operator="=" value="True"/>

</PreConditions><Effects>

<Property ToM="John" name="[item](isEdible)" operator="=" value="False"/></Effects>

</Action>

Figure 5.10: Example of an Inference Operator

It is important to note that we only specify a local effect, John, not the specific model Mary

has of John. However, as we saw, every model will be updated recursively and independently

by the ToM Component. Because of this mechanism, Mary’s model will trigger this inference

operator and will in fact be able to perform it. John’s own model will update its own memory,

and the model of John represented in the second level will update itself as well. Therefore this

process is responsible for enabling triadic relationships in our agent system, implementing the

SAM mechanism. Considering this example, John would be able to receive a perception of the

type Mary:perceives:John:perceives:Candy(isEdible) = False.

Furthermore, we have also permitted notations of the following type, which can be used to

describe some very specific scenarios:

<Property ToM="[Ag1]" ToM2="[Ag2] name="[item](property)" operator="=" value="True"/>

This notation allows referencing a property in the memory of a model two levels deeper in the ToM hierarchy structure. We considered that a more granular representation than this would not be necessary due to its abstraction complexity.

5.3 ION Framework

The ION Framework [25] is a system designed to model and manage dynamic virtual environments. The FAtiMA architecture implements what we call the agent minds of the characters in that environment, aiming to endow agents with reasoning, planning and memory capabilities. However, the entities they represent still have to be embodied in virtual environments to create testing scenarios where users can possibly interact with them. The ION Framework supports an approach where the graphic realization engine and the simulation engine are decoupled. The first type of engine is responsible for graphically representing the elements in the simulation, usually associating them with an avatar and giving them visual characteristics. The second is responsible for managing all entities, operations and information flow in the world. ION implements the latter type, a simulation engine.


It is composed of four basic structures: Entities, Properties, Actions and Events. Entities

populate the environment and represent both objects and the embodiment of agent minds in

the world, which we can call characters. Characters and objects have properties that define

them. Any given property can be changed through an action performed by a character. Events are notifications of changes to elements of the simulation, which are received by other characters in the scene.

The ION Framework is based on the Observer pattern and the monitoring of events. Entities

can be registered in events and be notified when a property has changed or an action has changed

its state (started or stopped). The simulation itself evolves in a discrete step fashion, as shown

in Figure 5.11.

Figure 5.11: ION Framework Simulation Flow

Changes to the simulation elements are treated as requests and therefore are not applied immediately. This method ensures the mediation of conflicts between concurrent requests, whereby the simulation engine follows a policy to choose only one of the concurrent changes. By concurrent

requests we mean changes to the same property in the same simulation step. Therefore, it is

always ensured that the simulation state is the same for all the elements.
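A minimal sketch of this request-mediation idea (our own simplified Python illustration under the assumptions above, not the ION Framework's actual API) is shown below.

class Simulation:
    """Toy discrete-step simulation: changes are queued as requests and only
    one change per property is applied at the end of each step."""

    def __init__(self):
        self.properties = {}   # consistent world state seen by every element
        self.pending = {}      # property -> values requested during this step
        self.observers = []    # entities registered to be notified of changes

    def request_change(self, prop, value):
        self.pending.setdefault(prop, []).append(value)

    def step(self):
        for prop, requests in self.pending.items():
            chosen = requests[0]          # conflict policy, e.g. first request wins
            self.properties[prop] = chosen
            for observer in self.observers:
                observer.on_property_changed(prop, chosen)
        self.pending.clear()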

We have then integrated FAtiMA Modular as agent minds of the elements in the simulation

world, implemented by the ION Framework. Whenever an agent performs an action, a message

is sent to the framework requesting the beginning of that action, which will later come to a

finished state, at which point the mind will be notified. Changes to a property of an element will also be sent to all minds, so they can perceive the change.

5.3.1 Visibilities on ION Framework

Once again the requirement that our system should not allow a complete vision of the world comes into play. Minimal changes were also performed in the ION Framework itself to ensure this requirement was met. Initially, any change performed on a property would be sent to all notified elements and respective minds. However, this should not happen, as we do not want all properties to be visible to every agent. A mechanism similar to the visibilities implemented in the ToM component was used in the ION Framework for properties. Upon initialization, agent minds send their properties to the simulation, which will eventually be propagated to all other characters' minds. Instead of just sending the property name and its value, the property's visibility is also sent to the simulation, usually describing whether it is global or only known to that agent. The same process happens when a property is changed and minds are notified. Notifications about changed properties can be sent to only one mind, a group of minds, or all minds. This mechanism allows for scenarios where a notification about the change in a property only visible to character A is also sent to the mind of agent B. In Section 6 we will describe specific examples depicting this situation, this time considering the scenario we used to test the developed work.


5.3.2 NWN2 - Graphic Realization Engine

Following the approach of the ION Framework, we still needed a graphical realization engine to represent the virtual world. We used the Neverwinter Nights 2 (NWN2) game engine for the graphical interface, together with the Neverwinter Nights Extender plugin for the communication between the ION Framework and the game itself. This process is implemented through a database communication channel where events are written in the DB from either end and read from the other end. Both the ION Framework and NWN2 read the DB continuously, waiting for new events. When an agent wants to perform an action, the ION Framework sends a request to the game and waits for its completion. In Section 6 we will explain in more depth our scenario, which was developed using the NWN2 Toolset.

2 http://www.obsidianent.com/
3 http://www.atari.com/
4 http://www.nwnx.org/
5 http://www.nwn2toolset.dayjo.org/


Chapter 6

Case Study

This chapter bridges our implementation and the evaluation we will perform to test our work. We will start by explaining the game we chose for our scenario and how it enables us to test what we have proposed. The agents in the scenario will be defined along with the possible actions they can perform. Finally, we will address what tactics each type of agent has and how they are defined in the agent's XML file, its authoring file.

6.1 The Werewolf Game

We aimed for a testing scenario where bluffing, deception and manipulation were core aspects of the game.

The Werewolf/Mafia game1 was chosen to make a scenario where characters have incomplete

information and vision of the world. The core game-play revolves around the notion of trying to either hide or discover hidden information through interactions with other players.

We adopt a very simple version of the Werewolf game. There are five players in the game, called villagers, who are divided into two groups: the Werewolves and the Victims. Our version focused on a single-Werewolf scenario, leaving four remaining Victims (Figure 6.1).

Figure 6.1: Agent’s Avatars used during the Werewolf Case Study

Victims have limited information about the environment, since they do not know who the Werewolf is. Their goal is to discover who the Werewolf is among all villagers, in other words, the one who is lying. On the other hand, the Werewolf knows his role and consequently who the Victims are. Having all the hidden information about the villagers' roles, his objective is to remain hidden until he is no longer outnumbered by Victims, thus trying to eliminate all Victims while concealing his true identity.

1http://en.wikipedia.org/wiki/Mafia (party game)


The game progresses in a turn-based manner. Each round, every villager performs the Accuse action (Fig. 6.2) in turn, targeting the villager they think is the Werewolf. As one might guess, the Werewolf itself will try to deceive by accusing a Victim.

Figure 6.2: An agent performing the Accuse action

At the end of each turn the villager voted for by the majority of players will leave the game, being "lynched in the public square". Our version of the game has children playing, as shown

in Figure 6.1. When one of them loses they just leave the area, not actually dying. Upon being

excluded from the game the villager informs other agents about their true identity (Fig. 6.3).

This information can be used to infer new information, usually regarding past accusations.

Figure 6.3: An agent performing the LastBreath action

The Werewolf will win if he reaches the last turn alive, at which point only two villagers will be left standing: himself and one Victim. At this stage he could reveal his true nature without

retaliation and win the game. Victims win if they manage to discover who the Werewolf is before

the last turn.

6.2 Tactics’ Reasoning

We aimed for a simple variation of the game. Nevertheless, we needed to assess what strategies

real players usually use in the game. One of the main strategies is to search for emotional

cues (Section 2.6) to events. Although characters in NWN2 have facial expressions, they are

very limited. Furthermore, modelling all the possible expression cues would be very difficult.


Scripting all possible reactions would be out of the scope of our work, as we would be doing

facial expression recognition, a whole different subject. We opted for the second most commonly

used strategy, reasoning about other players' past actions.

We modelled two types of actions: Accuse and LastBreath. Players Accuse another player

to vote against them in that turn. This means the accuser thinks the accused is the Werewolf.

The LastBreath action is performed when a player is chosen by the majority of players and is eliminated from the game. Before he leaves, he will perform this action, which will tell other players what his true role is, either Victim or Werewolf.

Therefore, we focused on the choices players make while performing their Accuse action. They

take into account past accusations so that they can achieve their goal: either accuse someone

they think is the Werewolf, or accuse a victim, for Victims and Werewolves respectively.

6.3 Victims’ planning and reasoning

The Victims were not the focus of our work, the liar was. However, we needed seemingly

intelligent agents as test subjects for the liar to deceive. As a result, we created global guidelines

which Victims used in their reasoning process, for example:

• When agent A accuses agent B, it means that A suspects that B is the Werewolf.

• When a victim is accused by agent C, that victim will start to wonder about agent C being

the Werewolf.

• When a Victim V is eliminated from the game, if I am a Victim, I will start to wonder

about those who accused V.

The notions of suspicion and wonder are levels of certainty an agent has about the fact that

someone is a Werewolf. An agent can only suspect one agent, but can wonder about

many. These notions are represented as properties, for example:

<Property ToM="John" name="Wonders(Marry)" operator="=" value="True"/>

In this way we can represent mental states of others. The above property defines that John

wonders about Marry being the Werewolf.

Furthermore, these guidelines were implemented through inference operators. The following

operator is a simplified inference which defines the third bullet above.

<Action name="Inference-VWondersAccusersOnLB([victim],[nature],[accuser])">

<PreConditions>

<Property name="SELF(role)" operator="!=" value="Werewolf"/>

<NewEvent action="LastBreath" subject="[victim]" target="Victim" .../>

<RecentEvent action="Accuse" subject="[accuser]" target="[victim]" .../>

...

</PreConditions>

<Effects>

<Property ToM="SELF" name="Wonders([accuser])" operator="=" value="True"/>

</Effects>

</Action>

Victims will follow the rationale above, accusing players they think might be the Werewolf.

Therefore their goal is to accuse the players they suspect or wonder about, while these mental states are updated throughout the game turns.


6.4 Wolf’s Planning and Reasoning

The Werewolf is the liar in this social context and as such we focused on his behaviour. The

strategy it applies is different from the ones used by the Victims, focusing on either remaining

hidden by doing what others do or trying to eliminate victims that are targeting him. The most

used tactic in real games as well as in online variations seems to be the first one. Werewolves

usually try to blend in, trying to avoid suspicious actions that could denounce them. This behaviour is called lurking2 in online variations of the game. Lurking means the voluntary absence of action or, in other words, lying low. Therefore, the main goal of our liar will be to lie low, aiming to

deceive others into believing he is just one among the Victims. Using the Theory of Mind, the

liar will try to change their mental states, making them believe he thinks the same as they do.

The main purpose of this case study is to test how well a second level Theory of Mind can implement such a goal compared to a first level one. We will first explain the goal defined for the first level version and then for the second level version. It is important to keep in mind that in either version both the Werewolf and the Victims have that same level of Theory of Mind.

6.4.1 First Level Theory of Mind

To achieve the proposed goal with such a ToM we have to think about how we can represent it. Such a simple abstraction level cannot fully represent the goal, as we would need to model an explicit goal aiming for a success condition like 〈Villager〉:believes:〈Liar〉:believes:〈Proposition〉, since the liar is trying to change what others think about himself. First level Theory of Mind can, at best, model a condition like 〈Villager〉:believes:〈Proposition〉, as it would not make sense to have a goal that aimed to change the liar's own internal state.

Keeping the above in mind, we developed goals that explicitly change other Villagers' minds, but not what they think about others, since we cannot represent those abstraction levels. While trying to keep the same strategy of lying low, we tried to induce suspicion of a victim in their minds, so that they will accuse her. Because we cannot induce changes in what others think about the liar, we opted to induce changes in their minds, specifically adding suspicion about a new victim. For example, the following XML code defines a goal for such a type of liar:

<ActivePursuitGoal name="DeceptiveAccuseToM1([deceived],[target])">

<PreConditions>

<Property name="SELF(role)" operator="=" value="Werewolf"/>

<Property name="[target](role)" operator="=" value="Victim"/>

<Property ToM="[deceived]" name="Wonders([target])" operator="=" value="False"/>

...

</PreConditions>

<SucessConditions>

<Property ToM="[deceived]" name="Wonders([target])" operator="=" value="True"/>

</SucessConditions>

<FailureConditions/>

</ActivePursuitGoal>

In this way we first induce wonder and, hopefully later, suspicion about the target in the deceived villager's mind. Such a goal would lead the liar to accuse the target, influencing the deceived. It is also important to keep in mind that inferences such as those mentioned in Section 6.3

2http://mafiamaniac.wikia.com/wiki/Lurking


play a big role in keeping models of others updated. This greatly influences the preconditions in the liar's goals.

6.4.2 Second Level Theory of Mind

The representation level of a second order ToM enables us to explicitly define the goal of lying low. The condition 〈Villager〉:believes:〈Liar〉:believes:〈Proposition〉 can be the success condition of our goal. We took the same approach and tried to induce a new wondering proposition in the deceived agents' mind models. A simplified definition of such a goal is shown below.

<ActivePursuitGoal name="DeceptiveAccuseToM2([deceived],[target])">

<PreConditions>

<Property name="SELF(role)" operator="=" value="Werewolf"/>

<Property ToM="[deceived]" name="Wonders([target])" operator="=" value="True"/>

<Property name="[target](role)" operator="=" value="Victim"/>

...

</PreConditions>

<SucessConditions>

<Property ToM="[deceived]" ToM2="SELF" name="Wonders([target])" operator="=" value="True"/>

</SucessConditions>

<FailureConditions/>

</ActivePursuitGoal>

Using second level Theory of Mind we can completely define the success condition of our

goal. This will lead the liar to induce a new change in models of Theory of Mind by accusing

the target. Once more, inferences play a big role in keeping models updated and enabling the

precondition to be verified in the goal.


Chapter 7

Evaluation

In this chapter we evaluate the testing scenarios described in Section 6 and ascertain the validity of our hypothesis stated in Section 1.2. We start by describing the preliminary tests we performed, discussing their purpose and how their results set guidelines for the subsequent final tests. Although they did not return any statistical results, their feedback highly influenced the design and structure of the final evaluation. We proceed to describe the final evaluation, explaining why such tests were used and how we manipulated the different test conditions. Measures of deception, manipulation and anticipation receive special focus while comparing the results of the two versions. Finally, we interpret the results and argue that the purpose of this work was achieved.

7.1 Preliminary Tests

Given the possible variations and deviations in people's focus while performing our tests, we executed preliminary tests to assess what people thought about the scenario and where they focused their attention. As we have mentioned in Section 6, our scenario has one liar and

four truth tellers. Our main concern was to develop a believable liar which behaved in a way

people thought natural. Hence we could take several approaches to develop it. Either he could

primarily try to deceive his fellow “villagers” or he could focus on deceiving the user itself. In

other words, would he try to change the user's or the villagers' minds? However, this concern was directly dependent on the user's role in the game; namely, it would not be a problem if the user

did not take part in the game itself. In Section 7.2 we will explain why the final test was not

an interactive one, and thus the result of this first preliminary test was only slightly relevant

to the final evaluation. Nevertheless, we did receive feedback that is still interesting to discuss.

These tests were made with 4 participants aged between 23 and 50, where users did take an active player role. Although we used a very small sample, we were only looking for early opinions to set guidelines at an early stage of our work. As we would come to conclude, both approaches led the liar to ultimately deceive other players to win the game. Users would also be

influenced by the liar’s actions regardless of his objective. Thereafter we focused on goals that

would change other players’ minds.


Simulation Tests

Before starting the final evaluation we felt the need to conduct additional tests. Because the outcome differs in each version, we thought participants could give biased answers. There is also the fact that we were aiming to show people a video demonstrating interactions between agents playing the game, hence showing one and only one instantiation of each scenario. Therefore it was imperative to assess whether the run shown in each demonstration was a representative one.

Simulation tests were performed in the following fashion:

1. Both scenario versions were run ten times, from the beginning until the liar was caught or won the game.

2. Each time, we recorded how many turns the liar, the Werewolf, survived.

3. The results were analysed and the best score for each version was used in the video shown

in the questionnaire evaluation stage.

The results of the Simulation tests are shown in Table 7.1. ToM1 and ToM2 are the versions where the liar uses first and second level ToM respectively. The longest possible game in our scenario is one that lasts four turns, because there are five players. One leaves the game each turn until only two players remain.

Version | #1 | #2 | #3 | #4 | #5 | #6 | #7 | #8 | #9 | #10
ToM1    |  3 |  3 |  1 |  1 |  3 |  2 |  1 |  3 |  2 |  2
ToM2    |  3 |  4 |  1 |  3 |  4 |  2 |  3 |  1 |  3 |  3

Table 7.1: Simulation Tests

What we want to test with these simulations is the best possible score for each version. At

best, ToM1 loses before reaching the last round, never winning the game. ToM2 manages to

win the game in two out of ten times.

We concluded that to better represent each version in the video shown in the final evalu-

ation, the best case scenario had to be used. Therefore, in the questionnaire regarding ToM1 we used a demonstration where the Werewolf reached the third turn, while for ToM2 we used a

demonstration where the Werewolf wins the game at the end.

7.2 Final Evaluation

The final evaluation we performed aimed to test all the assumptions and theories we have

discussed throughout this document. In particular, we wish to test the validity of our hypothesis,

which states that:

"The higher the reasoning order an entity is capable of using, the better it can successfully perform deceptive tasks."

Although our evaluation is based on a game, due to time and resource constraints we chose not to make an interactive test between the user and the agents in the game.


Such a test would require more time to be performed, hence gathering fewer potential results than a non-interactive one. In fact, we believe this choice boosts the statistical significance of the tests, enabling a larger number to be performed.

7.2.1 Procedure

The evaluation was performed through online questionnaires. Individuals received invitations on

Facebook1 and e-mails directing them to our evaluation. The same questionnaire was applied to

two different video demonstrations, each corresponding to one of our test conditions: ToM1, with a liar using a first level Theory of Mind; and ToM2, with a liar using a second level Theory of Mind. Participants were randomly directed to one of these versions and were presented with a questionnaire structured as shown in Figure 7.1. Each participant was assigned to only one version, not both.

Figure 7.1: Evaluation Test’s Structure

The survey starts with a brief introduction to the work and its context. The game is described with special focus on roles and their goals. As expected, we do not reveal who the liar is. Individuals are then instructed to watch a video demonstration of one of the versions. We ask participants to pay special attention to the agents' actions and to try to uncover who the liar is. Finally, participants had to fill in a questionnaire keeping in mind what they watched while giving their opinion. Questions address topics such as the game itself, the agents in general, and the Werewolf specifically. Answers used a Likert scale (-2: Strongly Disagree, -1: Somewhat Disagree, 0: Neither Agree nor Disagree, 1: Somewhat Agree, 2: Strongly Agree). It is important to note that participants did not know what the purpose of the system was nor which version they were testing. A full transcription of the questionnaire is available in Appendix

A.

1www.facebook.com


7.2.2 Results

The data we collected was analysed using the non-parametric Mann-Whitney statistical test to

compare the differences between our two test conditions, ToM1 and ToM2. These variables refer to the level of Theory of Mind the liar uses in each version: ToM1 refers to a first level and ToM2 to a second level Theory of Mind. A total of 60 participants (56.67% males and 42.6% females)

took part in our evaluation. Their ages ranged mostly from 19 to 25 (92.7%).
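Purely to illustrate the mechanics of this test (our actual analysis was run on the Likert questionnaire answers, not on simulation data), the sketch below applies a Mann-Whitney U test to the turn counts from Table 7.1 using SciPy.

# Hedged illustration only: the results reported below come from the Likert
# questionnaire data, not from these simulation turn counts.
from scipy.stats import mannwhitneyu

tom1_turns = [3, 3, 1, 1, 3, 2, 1, 3, 2, 2]   # Table 7.1, ToM1 row
tom2_turns = [3, 4, 1, 3, 4, 2, 3, 1, 3, 3]   # Table 7.1, ToM2 row

u_stat, p_value = mannwhitneyu(tom1_turns, tom2_turns, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")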

Our analysis divides questions into four groups: (1) questions about the game itself; (2) questions about all the players; (3) the same questions as (2) but only regarding the liar; (4) questions specifically focused on deceptive behaviour. Stages (2) and (3) are comparable because

we evaluate the global behaviour of Victims and the Werewolf by the same measures, while stage

(4) addresses only deceptive behaviour.

Game Questions

We aimed to assess whether participants had understood the game and had enjoyed it, comparing the two conditions. Although this has no direct relevance to our hypothesis, we believe it is still interesting to measure whether socially intelligent agents contribute to an enjoyable and interesting experience. Figure 7.2 shows the statistical data we collected in a box-plot graphic, while Table

7.2 describes a summary of those results.

Figure 7.2: Box-plot of Statistical Data regarding Game Questions

Our analysis shows that participants did in fact perceive the ToM2 condition as being more interesting and would play such a variation of the game, with significant differences between conditions (Q1, p < 0.05; Q2, p < 0.05). Although Q3 and Q4 do not directly measure whether agents are intelligent, we can see that our conditions introduced a change in the game's overall difficulty for both roles. Participants thought the ToM2 variable had increased the difficulty of the game for Victims (Q3, p < 0.05, 1-tailed), while easing the task of the liar (Q4, p < 0.05). We can conclude


Question | Statement | ToM1 (N = 30) Mdn[Quartiles] | ToM2 (N = 30) Mdn[Quartiles] | Mann-Whitney
Q1 | The game is interesting. | 0[−1, 1] | 1[0, 2] | U = 316.500, p < 0.05, r = −0.263
Q2 | I would play a game like this. | −0.5[−2, 1] | 1[0, 1] | U = 303.000, p < 0.05, r = −0.292
Q3 | It is easy to win while playing as a Victim. | 0[−1, 1] | −1[−1, 0] | U = 328.000, p < 0.05 (1-tailed), r = −0.242
Q4 | It is easy to win while playing as a Werewolf. | 0[−1, 1] | 1[0, 1] | U = 257.500, p < 0.05, r = −0.390

Table 7.2: Mann-Whitney statistics for global game questions considering the two conditions (ToM1 and ToM2).

that the liar did a better job in the second level Theory of Mind version, which participants perceived as being of increased difficulty for the Victim role. These conclusions are also supported by effect sizes which can be considered medium or very close to medium (|r| ≈ 0.3) on all measures.

Global Player Questions

The second stage of our analysis performed measurements regarding the players in general, not any particular player. Figure 7.3 and Table 7.3 show the statistical data we collected in a box-plot graphic and a summary table of the Mann-Whitney test results.

Figure 7.3: Box-plot of Statistical Data regarding Global Questions about Player

The most significant results tell us players did a better job at playing the game in the ToM1 condition, which is supported by a medium effect size (Q5, p < 0.05, r = −0.315). As we are accounting for the overall behaviour of players, this result is more relevant for Victims, who did in fact


Question | Statement | ToM1 (N = 30) Mdn[Quartiles] | ToM2 (N = 30) Mdn[Quartiles] | Mann-Whitney
Q5 | The players played well. | 1[0, 1] | 0[−1, 1] | U = 292.000, p < 0.05, r = −0.315
Q6 | Players are intelligent. | 0[0, 1] | 0[−1, 1] | U = 352.500, ns, r = −0.198
Q7 | Players are affected by others' actions. | 1[1, 2] | 2[1, 2] | U = 327.000, p < 0.05 (1-tailed), r = −0.252
Q8 | Players behaved in a predictable way. | 0[−1, 1] | 1[1, 2] | U = 240.000, p < 0.001, r = −0.424
Q9 | Players behaved in a human-like way. | 1[0, 1] | 1[0, 1] | U = 434.500, ns, r = −0.031
Q10 | Players are easily deceived. | 0[0, 1] | 1[0, 2] | U = 213.000, p < 0.001, r = −0.478
Q11 | I figured out the players' tactics. | 0[−1, 1] | 1[−1, 2] | U = 338.500, p < 0.05 (1-tailed), r = −0.221

Table 7.3: Mann-Whitney statistics for global questions about players considering the two conditions (ToM1 and ToM2).

win the game in the ToM1 condition testing (Section 7.1). We can correlate this fact with the game's difficulty previously analysed to conclude that because the ToM1 condition was perceived as an easier variation of the game for Victims, they managed to perform better at the game. On the other hand, when difficulty was added to the game, which we conclude is directly related to using the ToM2 variable, they performed worse.

The players' predictability was also significantly influenced by the ToM2 variable (Q8, p < 0.001, |r| < 0.5). Participants perceived the control variable, ToM1, as less predictable than ToM2. We believe this is influenced by the liar's actions. While trying to deceive, the liar should not give himself away. By fostering predictable behaviour through manipulation, the liar in the ToM2 variable achieved its goal to greater benefit, in fact winning the game.

The previous conclusion can also be supported by our results regarding Q10. How easily players are deceived is highly influenced by which variable we use (Q10, p < 0.001, r = −0.478). Players were deceived more easily using the test variable ToM2, which is supported by high significance and a close to large effect size. This means the difference between results comes largely from our testing variable and not from chance. We must remember that our variables account for a difference in the liar's behaviour and not in that of the Victims, from which we can conclude that the ToM2 condition did in fact introduce a factor which changed the results, in other words a better deceiving agent.

Global Liar Questions

In this stage we applied the same measurements we used to evaluate the players overall, but now only to the liar. Figure 7.4 and Table 7.4 show the statistical data we collected in a box-plot graphic and a

summary table for Mann-Whitney’s test results.

We got several relevant results from this stage of the evaluation. Measures about how well the liar played (Q12) and its intelligence (Q13) resulted in statistically significant


Figure 7.4: Box-plot of Statistical Data regarding Global Questions about the Liar

differences between conditions (p < 0.001) supported by large effect sizes (|r| > 0.5), meaning our ToM2 test variable had a large impact relative to the control variable, ToM1. We can conclude that the liar from variable ToM2 was perceived as more intelligent than the one from the control variable.

Also important to note, due to its high statistical significance and close to large impact (Q15, p < 0.001, r = −0.467), is the fact that the liar in the ToM2 variable was affected by other players' actions. This fact is in compliance with one of deception's requirements we hypothesised in Section 2.7, which states that one must consider what others are thinking in order to better manipulate them and achieve one's deceptive goals. By the end of this analysis we will be able to further

correlate the notion of intelligence and deceptive behaviour.

Specific Deceptive Behaviour Questions

The last stage of our evaluation focused on the deceptive behaviour and how well it was per-

formed. Figure 7.5 and Table 7.5 show the statistical data.

Unfortunately we only got two statistically significant results regarding specific deceptive behaviour. However, the results we did obtain are highly significant and had a large impact relative to the control variable. Namely, by analysing the data concerning Q14 we conclude that participants

perceived the liar as more intelligent than other players in condition ToM2 (Mdn[ToM1] = 0,

Mdn[ToM2] = 1). This result is highly significant and has a large impact on the difference

between conditions (Q14, p < 0.001, r = −0.621). The same can be said about Q21, which

tests the extent to which the liar managed to deceive other players. Results show that the liar in condition ToM2 managed to deceive to a greater extent than in test condition ToM1 (Q21,

p < 0.001, r = −0.524).

Although we did expect statistically significant results from Q20, we did not manage to get them. Our analysis resulted in low statistical significance (p > 0.05) and a small effect size (|r| < 0.2),


Question | Statement | ToM1 (N = 30) Mdn[Quartiles] | ToM2 (N = 30) Mdn[Quartiles] | Mann-Whitney
Q12 | The liar played well. | 1[−1, 1] | 2[1, 2] | U = 166.000, p < 0.001, r = −0.574
Q13 | The liar is intelligent. | 0[0, 1] | 1[1, 2] | U = 168.000, p < 0.001, r = −0.571
Q15 | The liar is affected by others' actions. | 1[0, 1] | 2[1, 2] | U = 220.500, p < 0.001, r = −0.467
Q16 | The liar behaved in a predictable way. | 0[−1, 1] | 1[0, 2] | U = 260.000, p < 0.05, r = −0.3745
Q17 | The liar behaved in a human-like way. | 1[0, 1] | 1[1, 2] | U = 310.000, p < 0.05, r = −0.291
Q23 | I figured out the liar's tactics. | 0[−1, 1] | 1[0, 2] | U = 292.500, p < 0.05, r = −0.312

Table 7.4: Mann-Whitney statistics for global questions about the liar considering the two conditions (ToM1 and ToM2).

showing that participants did not perceive a relevant difference in manipulation-related abilities between test conditions ToM1 and ToM2. This could hint at the complexity of the overall problem this work faces. Deception is a complex and multi-dimensional capacity humans develop, where manipulation is only one of those dimensions.

7.2.3 Concluding Remarks

In this Chapter we described the procedures of our evaluation and the measures used to test the hypothesis of this thesis. The overall goal was to measure the extent to which a second level Theory of Mind is better at performing and succeeding in deceptive tasks than a first level one. Preliminary and Simulation tests were performed to determine the final case study that would best characterize and evaluate our goal.

Subsequently we described the final evaluation, done through an online questionnaire. We started by explaining our option for a non-interactive experience due to time and resource constraints. We then characterized our group of 60 participants, composed of 56.67% males and 42.6% females, with their ages mostly ranging from 19 to 25 (92.7%). Furthermore we

described how they were exposed to an introduction of our scenario, followed by a request to

watch a demonstration in a short video. To test our hypothesis we evaluated two test conditions,

ToM1 where the liar agent uses a first level Theory of Mind; and ToM2 where the liar agent

uses a second level Theory of Mind.

We analysed the resulting data comparing each condition to the features tested by each

question. Overall results seem to show that test condition ToM2 achieved better results in

deceiving other players than ToM1, which is consistent with our hypothesis. Heading to the

comparative analysis, we started by evaluating features concerning the game itself. Our results showed that the difficulty of the game for villagers (players trying to uncover the liar) increased in test condition ToM2, while the game became easier for the liar. The impact on the game clearly suggests that a second level Theory of Mind agent has a better deceptive


Figure 7.5: Box-plot of Statistical Data regarding Specific Questions about the Liar

behaviour, at least in terms of game outcome, which was also demonstrated in the Simulation

tests. The evaluation of other questions with strong statistical significance also adds support to this idea in a broader sense. A particularly interesting result was the high correlation found between the notion of intelligence and deception. Participants perceived the liar in test condition ToM2 to be more intelligent per se as well as in comparison with other players. This result supports the idea that socially intelligent behaviour requires a complex level of Theory of Mind. It is important to remember from Section 2.7.1 that a second level Theory of Mind is the abstraction level most used in everyday life by human adults. Considering test condition ToM2

as more intelligent supports the idea that deception is a sign of high social intelligence.

Another confirmed argument is that liars perform their task better if they take into account

the actions of other agents (Q15). As we might have guessed, it is by observing others that we can manipulate and change their mental states (Section 2.7.1).

There was one question in particular for which we were hoping to get results but did not. Apparently participants thought the liars in both test conditions were about the same in terms of how good they are at manipulating other people's minds. We believe this result is a consequence of the complexity of this subject as well as the simplicity of the scenario. Complete results in all relevant dimensions regarding deception could only be achieved with a more complex scenario.

Our aim was to test if the higher the reasoning order one can use, the higher the proficiency

in deceptive tasks. We can conclude that our results not only support our hypothesis, but also suggest that higher order reasoning is a clear sign of social intelligence.


Question | Statement | ToM1 (N = 30) Mdn[Quartiles] | ToM2 (N = 30) Mdn[Quartiles] | Mann-Whitney
Q14 | The liar is more intelligent than the other players. | 0[−1, 0] | 1[0, 1] | U = 143.500, p < 0.001, r = −0.621
Q18 | The liar is good at predicting what others are thinking. | 0[−1, 1] | 0[0, 1] | U = 414.500, ns, r = −0.072
Q19 | The liar is good at anticipating other players' actions. | 0[−1, 1] | 0[0, 1] | U = 393.000, ns, r = −0.115
Q20 | The liar is good at manipulating other players' actions. | 0[−1, 1] | 1[0, 1] | U = 367.000, p < 0.1 (1-tailed), r = −0.166
Q21 | The liar managed to deceive the other players. | 0[−1, 1] | 1[1, 2] | U = 188.000, p < 0.001, r = −0.524
Q22 | The liar managed to deceive me. | 0[−1, 1] | 1.5[−1, 2] | U = 369.000, ns, r = −0.159

Table 7.5: Mann-Whitney statistics for specific questions about the liar considering the two conditions (ToM1 and ToM2).


Chapter 8

Conclusion

Classic Artificial Intelligence aimed at providing machines with specific problem-solving abilities. At a turning point, AI is now aiming at building human-like behaviour, which can, for example, be used in artificial companions. Humans live in society and use a wide array of capabilities to behave consistently and thrive in such an environment. This is a specific type of intelligence: social intelligence. If we want to build agents that behave in the same way as humans do, we need to endow them with such a capacity. Therefore this dissertation tries to address the following problem regarding social intelligence in AI:

“How can autonomous agents behave deceptively and generate lies relevant to a spe-

cific context?”

This is a kind of behaviour we only encounter in social environments, hence it is important to build it into autonomous agents, since we are aiming for complex social interactions.

We argued that an important step to achieve that goal is through abstraction reasoning,

namely using a mechanism humans use, called Theory of Mind (ToM). Several research works

have implemented this ability in agents but have only addressed a simplification of the problem, considering only a first level Theory of Mind. In this way, an agent cannot reason, for example, about what others are thinking about him.

We propose that the problem above can be tackled with the following hypothesis:

"The higher the reasoning order an entity is capable of using, the better it can successfully perform deceptive tasks."

We developed a conceptual model to achieve N levels of Theory of Mind through a simulation approach, where the agent reasons about what others believe and do as if he were in their shoes. This model is based on a complete theory about how humans ascribe mental states to other people, called the mindreading ability. Its mechanisms were implemented by extending the Theory of Mind component of an autonomous agent architecture called FAtiMA. We argued that every Theory of Mind level adds a great deal of complexity to the agent's reasoning cycle, and so we

proceeded to test it considering a second level ToM. This argument is also supported by the fact

that humans use mostly a second level ToM in their everyday tasks. Furthermore, we developed

a case study based on a game of deception called Werewolf, where a liar has to hide his true

identity.


Simulation tests were performed to compare the performance of both testing conditions, first level ToM (ToM1) and second level ToM (ToM2). The best run for each condition was recorded and shown to 60 participants, who were asked to fill in a questionnaire. The results showed that participants clearly perceived testing condition ToM2 as being better at deceiving other agents. An interesting result shows a clear relation between what participants perceived as intelligence and the ability to deceive. Testing condition ToM2 was both considered more intelligent and better at deceiving other agents than ToM1. This result confirms our prediction that deception is a necessary trait in socially intelligent agents.

Other results were not as good as we expected. For example we expected that an agent with

a second level Theory of Mind would also show a better capacity to manipulate others. However

this was not the case. We believe this can be related to the complexity of deceptive behaviour, which comprises several dimensions, for instance manipulation, anticipation and lying.

All in all, our work showed good results and supported our hypothesis that a higher level of Theory of Mind results in better performance of deceptive behaviour. Because our architecture is modular, it can be configured to implement N levels of ToM, and can therefore be used to build complex scenarios and to support other research work related to ToM.

8.1 Future Work

Although our model achieved the proposed goal, it can still be further developed, particularly if it is to be used efficiently in scenarios with more ToM levels. We argued that the agent's reasoning process grows exponentially as more ToM levels are considered, because the agent appraises each event not just for itself, but for every model represented in its ToM. This can become very inefficient with three or four ToM levels. One solution is to leave out some parts of the global ToM structure, which, as we have seen, is represented by a tree hierarchy of models; some portions of the tree may not be needed in a specific context. This could be achieved either by capping the maximum depth of the ToMs nested inside other models, or by giving the agent a heuristic that only updates the relevant models in the tree hierarchy. Such a heuristic would likely be domain dependent, but it would still tackle the problem of high-level ToM inefficiency.
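
As a rough illustration of this pruning idea, and reusing the hypothetical ToMModel structure sketched earlier, a pruned appraisal might look as follows; the isRelevant heuristic is likewise an assumption standing in for whatever domain-dependent test a concrete scenario would provide.

```java
import java.util.Map;
import java.util.function.BiPredicate;

// Illustrative sketch of pruning the ToM tree during appraisal; not part of FAtiMA.
class PrunedAppraisal {
    static void appraise(ToMModel model, String event, int maxDepth,
                         BiPredicate<String, String> isRelevant) {
        if (maxDepth < 0) {
            return;                               // cap on how deep nested ToMs are updated
        }
        model.beliefs.put("last-event", event);   // placeholder for the real appraisal
        for (Map.Entry<String, ToMModel> entry : model.modelsOfOthers.entrySet()) {
            // Only descend into branches the heuristic marks as relevant to this event.
            if (isRelevant.test(entry.getKey(), event)) {
                appraise(entry.getValue(), event, maxDepth - 1, isRelevant);
            }
        }
    }
}
```

With maxDepth set below the number of levels the agent actually holds, or with a relevance test that filters out agents unaffected by the event, large portions of the tree are never visited, which is the intended efficiency gain.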

A more complex matter would be to ascribe intentions to the models of others. As mentioned when describing our model, we did not include this particular feature, since it would require intention recognition, and such an addition would be an entire research work in itself. It would, however, be an important addition to the capabilities of a lying agent: we support the idea that being able to model the goals of others, in addition to their beliefs, would greatly improve the agent's capacity to anticipate other agents' actions.

Neither of these issues is trivial, yet both could further increase the quality of the agent's behaviour. We therefore believe our work has laid the groundwork for new research on this subject.

Appendix A

Experiment’s Questionnaire

The complete questionnaire is shown in the following pages, translated here from the original Portuguese. In the original, the hyper-link reads “Please click here to watch the video”.

The questions for both versions are translated below:

Game Questions

Q1- The game is interesting.
Q2- I would play a game like this.
Q3- It is easy to win while playing as a Victim.
Q4- It is easy to win while playing as a Werewolf.

Global Questions About Players

Q5- The players played well.
Q6- Players are intelligent.
Q7- Players are affected by others' actions.
Q8- Players behaved in a predictable way.
Q9- Players behaved in a human-like way.
Q10- Players are easily deceived.
Q11- I figured out the players' tactics.

Global Questions About The Liar

Q12- The liar played well.
Q13- The liar is intelligent.
Q15- The liar is affected by others' actions.
Q16- The liar behaved in a predictable way.
Q17- The liar behaved in a human-like way.
Q23- I figured out the liar's tactics.

Specific Questions About The Liar

Q14- The liar is more intelligent than the other players.
Q18- The liar is good at predicting what others are thinking.
Q19- The liar is good at anticipating other players' actions.
Q20- The liar is good at manipulating other players' actions.
Q21- The liar managed to deceive the other players.
Q22- The liar managed to deceive me.

Questionnaire 1 - Lie To Me

Welcome to Lie to Me.

My name is Henrique Reis and this questionnaire is intended to test my Master's dissertation. The task I propose starts with watching a short demonstration video of a game, followed by a set of questions that you should answer based on what you saw in the demonstration. The whole process takes no more than 10 minutes.

The Game: There are 5 participants in the game, in this case children, who are playing a simplified version of the Werewolf game. Of the 5 players, one is the Werewolf and four are Victims. The following image shows the players, identified by their colour (named in Portuguese in the original).

From left to right: Green, Red, Black, Yellow and Blue.

The Victims only know that they themselves are victims. The Werewolf, however, knows who he is and, by exclusion, who the Victims are. In each turn, in order, every participant accuses another participant once. At the end of the turn, whoever was accused by the majority of the players still in the game is eliminated. The Victims' goal is to find the Werewolf, while the Werewolf's goal is to avoid being discovered by the Victims and reach the end of the game. The Victims win if the Werewolf leaves the game before the last turn. The Werewolf wins if he reaches the last turn, in which only one victim remains, at which point he can reveal himself without fear of retaliation and "devour" the remaining victim.

In the following demonstration of the game the user only plays the role of moderator, starting the rounds by interacting with the obelisk at the centre.

Watch the following video, paying attention to the players' actions. Try to figure out who is the Werewolf and who are the Victims.

Click here to watch the video.

You may watch the video as many times as necessary. Now answer the questions presented below.

The response options are as follows:

-2: Totally disagree; -1: Moderately disagree; 0: Neither agree nor disagree; 1: Moderately agree; 2: Totally agree.

1. The game is interesting. *

-2 -1 0 1 2

2. I would play a game like this. *

-2 -1 0 1 2

3. It is easy to win the game from the Victims' perspective. *

-2 -1 0 1 2

4. It is easy to win the game from the Werewolf's perspective. *

-2 -1 0 1 2

Considering the players in general:

5. The players played a good game. *

-2 -1 0 1 2

6. The players are intelligent. *

-2 -1 0 1 2

7. The players take the others' actions into account. *

-2 -1 0 1 2

8. The players' behaviour is predictable. *

-2 -1 0 1 2

9. The players behave in a way that seems human. *

-2 -1 0 1 2

10. The players are easily deceived. *

-2 -1 0 1 2

11. I was able to figure out the tactics of the various players. *

-2 -1 0 1 2

Considering that the Werewolf was the Black player, answer the following questions:

12. The Black player played a good game. *

-2 -1 0 1 2

13. The Black player is intelligent. *

-2 -1 0 1 2

14. The Black player is more intelligent than the other players. *

-2 -1 0 1 2

15. The Black player takes the other players' actions into account. *

-2 -1 0 1 2

16. The Black player's behaviour was predictable. *

-2 -1 0 1 2

17. The Black player behaved in a way that seemed human. *

-2 -1 0 1 2

18. The Black player is good at predicting what someone is thinking. *

-2 -1 0 1 2

19. The Black player anticipates the other players' actions. *

-2 -1 0 1 2

20. The Black player manipulates the other players' actions. *

-2 -1 0 1 2

21. The Black player manages to deceive the other players. *

-2 -1 0 1 2

22. The Black player managed to deceive me. *

-2 -1 0 1 2

23. I was able to figure out the Black player's strategy. *

-2 -1 0 1 2

24. Gender: * Female / Male

25. Age * Under 18

19 to 25

26 to 39

40 to 60

Over 60

* = Input is required
