Difference between revisions of "Agent-based computational economics"

From Simulace.info
'''Agent-based computational economics''' ('''ACE''') is a branch of [http://en.wikipedia.org/wiki/Computational_economics computational economics]. It uses [http://en.wikipedia.org/wiki/Agent-based_model agent-based models] and simulations to model real-world markets and economic interactions between agents. Agents can represent institutions, firms, individuals or the environment. The models, often created in specialized software or frameworks, are dynamic and allow the introduction of heterogeneous agent behavior. ACE is therefore ''"a computational study of economic processes modeled as dynamic systems of interacting agents"''<ref name=Tesfatsion2006>Leigh Tesfatsion, Agent-Based Computational Economics: A Constructive Approach to Economic Theory [http://www.econ.iastate.edu/tesfatsi/hbintlt.pdf (pdf, 253 KB)], in Leigh Tesfatsion and Kenneth L. Judd (eds.), Handbook of Computational Economics, Volume 2: Agent-Based Computational Economics, Handbooks in Economics Series, Elsevier/North-Holland, the Netherlands, 2006.</ref>
  
 
==Research==
Main pillars of ACE research according to [http://en.wikipedia.org/wiki/Leigh_Tesfatsion Leigh Tesfatsion]:<ref name=Tesfatsion2007>Leigh Tesfatsion (2007) Agent-based computational economics. Scholarpedia, http://www.scholarpedia.org/article/Agent-based_computational_economics</ref><ref name=Tesfatsion></ref>
  
 
* Empirical
* Normative
* Qualitative insight and theory generation
* Methodological advancement
  
 
===Empirical===
This area aims to explain possible reasons for observed regularities. This is achieved by replicating such regularities using multi-agent models. The approach makes it possible to seek causal explanations thanks to bottom-up modelling of the simulated market or economy.<ref name=Tesfatsion />
  
 
===Normative===
ACE can help to increase normative understanding: ACE models can serve as a virtual test field for different policies and regulations and can simulate many different economic scenarios. The resulting insights into social norms and institutions can help to explain why some regularities persist in markets. Another aspect is the relationship between environmental properties, organization structure and the performance of that organization.<ref>Tesfatsion, Leigh. "Agent-based computational economics: modeling economies as complex adaptive systems." Ed. Leigh Tesfatsion & Kenneth L Judd. Information Sciences 149.4 (2003): 262-268. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.143.4883&rep=rep1&type=pdf</ref>
 
===Qualitative insight and theory generation===
The ACE approach can help us understand the self-organizing capabilities of decentralized market systems. It can explain why some regularities persist over time while others disappear. An evolving agent world can be used to observe the degree of coordination needed to establish institutions and attain self-organization.<ref name=Tesfatsion /><ref name=Tesfatsion2007 />
 
 
===Methodological advancement===
ACE seeks the best instruments and methods for studying economic systems through computational experiments. An important question is whether the data produced by such experiments are in accordance with real-world data. To achieve this, methodological principles need to be developed, as well as programming, visualization and validation tools.<ref name=Tesfatsion /><ref name=Tesfatsion2007 /> For more information see [[#Software and programming|Software and programming]].
  
==Fields of application==
One of the first major applications of multi-agent models in the social sciences was the famous [http://en.wikipedia.org/wiki/Sugarscape Sugarscape] model by Epstein and Axtell. From this application it is not far to the economic field. The ACE approach can be applied to rather simple [http://en.wikipedia.org/wiki/Double_auction double-auction] market models or two-sector trading worlds. ACE is also used in various complex market simulations, such as tourism, digital news or investments. ACE can also help to analyze the impact of various policies and regulations, for example the effect of deregulation on an electric power market.<ref>Cirillo R. et al (2006). Evaluating the potential impact of transmission constraints on the operation of a competitive electricity market in Illinois. Argonne National Laboratory, Argonne, IL, ANL-06/16 (report prepared for the Illinois Commerce Commission), April. http://www.dis.anl.gov/pubs/61116.pdf</ref><ref>Charles M. Macal and Michael J. North, "Tutorial on Agent-Based Modelling and Simulation" [http://www.econ.iastate.edu/tesfatsi/ABMTutorial.MacalNorth.JOS2010.pdf PDF, 359 KB], Journal of Simulation, Vol. 4, 2010, 151–162</ref>

More complex models are capable of simulating whole economies with all necessary aspects, such as financial, household or job markets, while maintaining heterogeneity of agents. An example of this is the [http://www.eurace.org/index.php?TopMenuId=2 EURACE] project. Models like this enable what-if analyses and policy experiments on a European scale.<ref name=Tesfatsion>TESFATSION, Leigh. Agent-Based Computational Economics: Modeling Economies as Complex Adaptive Systems. 2010-03-24 [cit. 2012-06-18]. http://www2.econ.iastate.edu/classes/econ308/tesfatsion/ACETutorial.pdf</ref> There are also applications that model the economic behaviour of vanished civilizations.<ref>Kohler TA, Gumerman GJ and Reynolds RG (2005). Simulating ancient societies. Scientific American 293(1): 77–84. http://libarts.wsu.edu/anthro/pdf/Kohler%20et%20al.%20SciAm.pdf</ref>
  
 
==Computational world models==
[[File:AMES network.png|thumb|right|Agent hierarchy used in AMES framework]]
Computational worlds can be composed of various agents. Some of them can act on their own and have learning capability and memory. Others represent rather reactive elements of the world, such as technology or nature. Some agents can be passive, like a house or a patch of land. Composition of agents is also possible: a music band agent can, for instance, be a composition of agents playing musical instruments. Agents are therefore ordered in a hierarchy, as shown in the [http://www2.econ.iastate.edu/tesfatsi/AMESMarketHome.htm AMES framework] example. An agent can be simple-programmed, autonomous or human-like.<ref name=Chen>Chen, S.-H., Varieties of agents in agent-based computational economics: A historical and an interdisciplinary perspective. Journal of Economic Dynamics and Control (2011), doi:10.1016/j.jedc.2011.09.003. http://www.econ.iastate.edu/tesfatsi/ACEHistoricalSurvey.SHCheng2011.pdf</ref>
 
In order for agents to operate in computational worlds, methods and protocols are required. These methods and protocols enable interactions between agents themselves, between agents and artificial institutions (e.g. a market) or between agents and the world itself. The protocols consist of rules for mediation between agents and serve as a description of interactions, e.g. between a market and an agent.<ref name=Tesfatsion></ref>
 
For example, in a [http://en.wikipedia.org/wiki/Double_auction double auction] model, agents may have the following methods:

<pre>
getWorldEventSchedule(clock time);
getWorldProtocols (collusion, insolvency);
getMarketProtocols (posting, matching, trade, settlement);
</pre>

The first method, <code>getWorldEventSchedule</code>, acquires the current time from the world itself. Through <code>getMarketProtocols</code> an agent can acquire the valid protocols used for different kinds of interactions and negotiations between agents. The method <code>getWorldProtocols</code> serves for other, out-of-market interactions.
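The protocol-query methods above can be sketched as a minimal agent interface. This is an illustrative sketch only: the class names, method names and protocol values below are invented assumptions, not part of the AMES framework or any other ACE toolkit.

```python
# Illustrative sketch of an agent querying its world and market for
# the protocols that govern its interactions. All names are assumptions.

class World:
    def get_event_schedule(self, clock_time):
        # Return the events scheduled at the given simulated time.
        return {"time": clock_time, "events": ["open_market"]}

    def get_world_protocols(self):
        # Rules for out-of-market interactions, e.g. collusion, insolvency.
        return {"collusion": "forbidden", "insolvency": "exit_market"}

class Market:
    def get_market_protocols(self):
        # Rules mediating trade interactions between agents.
        return {"posting": "limit_orders", "matching": "best_bid_ask",
                "trade": "double_auction", "settlement": "immediate"}

class TraderAgent:
    def __init__(self, world, market):
        self.world = world
        self.market = market

    def step(self, clock_time):
        # An agent first asks the world and the market for the valid
        # rules, then acts within them.
        schedule = self.world.get_event_schedule(clock_time)
        world_rules = self.world.get_world_protocols()
        market_rules = self.market.get_market_protocols()
        return schedule, world_rules, market_rules

agent = TraderAgent(World(), Market())
schedule, world_rules, market_rules = agent.step(0)
print(market_rules["matching"])
```

The design point is that rules live in the world and market objects, not in the agents, so the same agent code can run under different institutional arrangements.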
  
  
 
===Equilibriums and attractors===
[[File:derivate-follower-basin-of-attraction.png|thumb|right|Agent stops too early in a basin of attraction, missing the highest attainable profit]]
Model behavior can result in various types of equilibria and attractors. ''"A system is in equilibrium if all influences acting on the system offset each other so that the system is in an unchanging condition."''<ref>http://dl.acm.org/citation.cfm?id=1531270</ref> Agent-based models can help to determine which parameters influence the stability or effectiveness of the market, while visualization capabilities can help to identify possible [http://www.scholarpedia.org/article/Basin_of_Attraction basins of attraction]. These can then be pinpointed through generated reports, plots or other available ex-post analytical tools. An agent can, for instance, be attracted by different basins of attraction when using different learning algorithms. The image on the right shows how an agent scales the profit curve using deterministic reactive reinforcement [[#Learning|learning]]. Because it uses simple derivative-follower adaptation,<ref name=TesfatsionLearning /> the agent stops when the profit level starts to fall, which is in this case too soon. Parameters can be changed on different levels, e.g. agent level, market level or world level. An agent may have parameters like risk aversion; a market may have parameters like a non-employment payment percentage.<ref name=Tesfatsion></ref>
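The derivative-follower behavior can be illustrated with a short sketch. The two-peaked profit curve, step size and starting point below are illustrative assumptions, and the search is simplified to one direction; the point is that the agent halts at the first local peak of its basin of attraction and misses the higher peak.

```python
# Sketch of deterministic reactive RL (derivative-follower adaptation).
# The profit curve is invented for illustration: a local maximum near
# d = 2 and a higher global maximum near d = 7. The agent climbs until
# profit starts to fall, so it stops at the local peak.

def profit(d):
    # Illustrative two-peaked curve (discontinuity at 4.5 is irrelevant
    # here, since the agent never gets past the first peak).
    return -((d - 2) ** 2) * 0.5 + 4 if d < 4.5 else -((d - 7) ** 2) * 0.5 + 9

def derivative_follower(d0, step=0.1, max_iters=1000):
    d = d0
    current = profit(d)
    for _ in range(max_iters):
        trial = d + step
        trial_profit = profit(trial)
        if trial_profit <= current:   # profit started to fall -> stop
            break
        d, current = trial, trial_profit
    return d, current

d_star, p_star = derivative_follower(0.0)
print(d_star, p_star)   # stops near the local peak at d = 2, not d = 7
```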
  
 
==Agent types and characteristics==
Simple programmed agents are represented by a simple algorithm, be it the short length of the code or the simplicity of the pseudo-random number generator the agent uses.<ref name=Chen /> However, even simple agents can exhibit a form of swarm intelligence similar to the emergent behavior of a group of ants or termites. Groups of simple agents are then capable of solving complex tasks. In some cases, even without learning capability, the agents can optimize or generate orderly movement patterns. [http://en.wikipedia.org/wiki/Stigmergy Stigmergy] can be one way to achieve this. Agents can be differentiated by their position in the [http://economistsview.typepad.com/economistsview/2009/02/cognitive-hierarchy-theory.html cognitive hierarchy], where more complex agents are able to think more steps ahead than simple agents. Smarter agents can also emulate the behavior of simple agents when favourable, but not vice versa. Non-agent economic models often introduce simplifying assumptions, e.g. that all agents are rational and homogeneous.<ref name=North>Charles M. Macal and Michael J. North, "Tutorial on Agent-Based Modelling and Simulation", http://www.econ.iastate.edu/tesfatsi/ABMTutorial.MacalNorth.JOS2010.pdf, Journal of Simulation, Vol. 4, 2010, 151–162</ref> Humans interacting in various systems or institutions are heterogeneous, and it is desirable to emulate this feature to produce more realistic behavior.<ref name=Chen />
  
 
===Learning===
In order to capture the dynamic nature of real markets, agents should be able to learn, which means changing their behavior according to the situations they encounter. Agents in ACE can use various types of learning algorithms. The selection of an algorithm can fundamentally influence the results of the simulation.<ref name=Tesfatsion /> The [[Roth-Elev|Roth-Erev]] [http://en.wikipedia.org/wiki/Reinforcement_learning reinforcement learning] algorithm is one of the possible choices. It works in the following steps:

# Initialize action propensities to an initial propensity value.
# Generate choice probabilities for all actions using current propensities.
# Choose an action according to the current choice probability distribution.
# Update propensities for all actions using the reward (profits) for the last chosen action.
# Repeat from step 2.
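The steps above can be sketched in Python. The payoff vector and the recency and experimentation parameters are illustrative assumptions; the update follows the standard Roth-Erev form, in which the chosen action absorbs most of the reward, a small share spills over to the other actions, and all propensities decay so that old payoffs fade.

```python
import random

def roth_erev_step(propensities, chosen, reward, recency=0.1, experimentation=0.2):
    # Step 4: update all propensities using the reward of the chosen action.
    n = len(propensities)
    updated = []
    for j, q in enumerate(propensities):
        if j == chosen:
            e = reward * (1 - experimentation)        # chosen action's share
        else:
            e = reward * experimentation / (n - 1)    # spillover share
        updated.append((1 - recency) * q + e)         # recency decay
    return updated

def choose_action(propensities, rng):
    # Steps 2-3: choice probabilities are normalized propensities.
    total = sum(propensities)
    probs = [q / total for q in propensities]
    return rng.choices(range(len(propensities)), weights=probs)[0]

rng = random.Random(0)
props = [1.0, 1.0, 1.0]                       # step 1: initial propensities
for _ in range(100):
    a = choose_action(props, rng)             # steps 2-3
    reward = [0.1, 1.0, 0.3][a]               # illustrative payoffs
    props = roth_erev_step(props, a, reward)  # step 4, then repeat
# Action 1 has the highest payoff, so its propensity should come to dominate.
print(props)
```

Note the positive feedback: a well-rewarded action gains propensity, which raises its choice probability, which gives it more chances to be rewarded.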
 
  
 
===Possible learning types===
There are various other types of learning algorithms suitable for use in ACE.<ref name=TesfatsionLearning /> Here is a brief summary by Leigh Tesfatsion:

# Reactive Reinforcement Learning (RL)
## Example 1: Deterministic reactive RL (e.g. Derivative-Follower)
## Example 2: Stochastic reactive RL (e.g. Roth-Erev)
# Belief-Based Learning
## Example 1: Fictitious play
## Example 2: Hybrid forms (e.g. [http://www.hss.caltech.edu/~camerer/jeth2927.pdf Camerer/Ho EWA algorithm])
# Anticipatory Learning ([http://en.wikipedia.org/wiki/Q-learning Q-Learning])
# Evolutionary Learning ([http://en.wikipedia.org/wiki/Genetic_algorithm Genetic Algorithms] - GAs)
# Connectionist Learning ([http://en.wikipedia.org/wiki/Artificial_neural_network Artificial Neural Nets] - ANNs)
  
In reinforcement learning algorithms, if an action ''A'' in state ''S'' produces favourable outcomes (the desired reward), the tendency to choose action ''A'' should be increased. Likewise, if action ''A'' produces unfavourable results, the tendency to choose it should be decreased. In reactive RL, the agent considers what action should be taken based on past events. Reactive RL can be deterministic or stochastic. In the first case, the agent is increasing or decreasing a scalar decision ''D'' and keeps moving in the same direction until the reward level starts falling. An example of the second case (Roth-Erev) is given in [[#Learning|Learning]]. Belief-based learning uses reflection on past choices to determine whether a different action could have led to a more desirable outcome. These opportunity-cost assessments are then used to choose a better action now. In this type of learning, the agent takes into consideration the presence of other agents also making their decisions. To achieve this, the agent uses a probability distribution function to select the best response to the estimated actions of other agents.<ref name=TesfatsionLearning>Leigh Tesfatsion, [http://www.econ.iastate.edu/tesfatsi/LearnAlgorithms.LT.pdf Learning Algorithms: Illustrative Examples]</ref> An example of this can be the [http://en.wikipedia.org/wiki/Matching_pennies matching pennies] game:
  
 
{| class="wikitable"
|+ ''Matching pennies game outcome matrix''
! rowspan="2" colspan="2" |
! colspan="2" | Player 2
|-
! Heads
! Tails
|-
! rowspan="2" | Player 1
! Heads
| +1, −1 || −1, +1
|-
! Tails
| −1, +1 || +1, −1
|}
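Belief-based learning on this game can be sketched with fictitious play: each player keeps empirical counts of the opponent's past moves and best-responds to the observed frequencies. The initial pseudo-counts, tie-breaking rule and number of rounds below are illustrative assumptions.

```python
# Fictitious play sketch for matching pennies (0 = Heads, 1 = Tails).
# Player 1 wins (+1) when the pennies match; Player 2 wins when they differ.

def best_response_p1(opp_counts):
    # Player 1 wants to MATCH the action the opponent plays most often.
    return 0 if opp_counts[0] >= opp_counts[1] else 1

def best_response_p2(opp_counts):
    # Player 2 wants to MISMATCH the action the opponent plays most often.
    return 1 if opp_counts[0] >= opp_counts[1] else 0

counts1 = [1, 1]   # Player 1's belief about Player 2 (initial pseudo-counts)
counts2 = [1, 1]   # Player 2's belief about Player 1
history = []
for _ in range(1000):
    a1 = best_response_p1(counts1)
    a2 = best_response_p2(counts2)
    history.append((a1, a2))
    counts1[a2] += 1   # update beliefs with the observed moves
    counts2[a1] += 1

# In this zero-sum game the empirical frequencies drift toward the
# 50/50 mixed equilibrium, even though actual play keeps cycling.
freq_heads_p1 = sum(1 for a1, _ in history if a1 == 0) / len(history)
print(freq_heads_p1)
```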
  
If an agent uses anticipatory learning (or [http://en.wikipedia.org/wiki/Temporal-difference_learning temporal-difference learning]), it tries to predict what might happen in the future if it takes some action ''A''. The relationship between value functions is therefore recursive: for each possible state, the value function yields the optimum total reward that can be attained by the agent over current and future times. This method requires the computation of transition, return and value functions to compute the optimal policy function. These functions depend on time and the current state. [http://en.wikipedia.org/wiki/Q-learning Q-Learning] makes it possible to compute the optimal policy function without knowing these functions. Instead, it iteratively acquires Q-values, which are stored in an observation history. This history is then used to estimate Q-values for the next possible action choices.<ref name=TesfatsionLearning />
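The Q-value update can be sketched as tabular Q-learning. The chain world, rewards and parameter values below are illustrative assumptions; the point is that the update needs only the observed reward and the current Q-estimates of the next state, not explicit transition or return functions.

```python
import random

# Tabular Q-learning sketch on a tiny illustrative chain world:
# states 0..4, actions 0 = left, 1 = right, reward 1 for reaching state 4.

N_STATES, ALPHA, GAMMA = 5, 0.5, 0.9
q = [[0.0, 0.0] for _ in range(N_STATES)]
rng = random.Random(42)

def step(state, action):
    # Deterministic moves along the chain; reward only at the right end.
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for _ in range(2000):
    s = rng.randrange(N_STATES - 1)   # sample a non-terminal state
    a = rng.randrange(2)              # off-policy: explore both actions
    nxt, r = step(s, a)
    # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(nxt, a')
    q[s][a] += ALPHA * (r + GAMMA * max(q[nxt]) - q[s][a])

# After learning, "right" (action 1) should be preferred in every
# non-terminal state, with values decaying geometrically from the goal.
print([q[s].index(max(q[s])) for s in range(N_STATES - 1)])
```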
The [http://en.wikipedia.org/wiki/Cobweb_model cobweb model] is an example of a [http://en.wikipedia.org/wiki/Genetic_algorithms_in_economics genetic algorithm application in economics]. For connectionist learning, various configurations of [http://en.wikipedia.org/wiki/Artificial_neural_network Artificial Neural Nets] can be used.
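The classical cobweb dynamics themselves can be sketched with naive price expectations; the linear demand and supply coefficients below are illustrative assumptions. In the genetic-algorithm variant, agents' supply decisions evolve by selection and crossover instead of simply following last period's price.

```python
# Cobweb model sketch with naive expectations: producers base this
# period's supply on last period's price. Linear demand D(p) = a - b*p
# and supply S(p) = c + d*p; since d < b here, the price spirals into
# the equilibrium p* = (a - c) / (b + d).

a, b = 10.0, 1.0          # demand intercept and slope (assumed)
c, d = 1.0, 0.5           # supply intercept and slope (assumed)

p = 8.0                   # arbitrary starting price
prices = [p]
for _ in range(50):
    supply = c + d * p            # producers react to last period's price
    p = (a - supply) / b          # price at which demand clears the supply
    prices.append(p)

p_star = (a - c) / (b + d)
print(prices[-1], p_star)
```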
==Examples of real applications==

*An agent-based system developed by Acklin (Netherlands) for international vehicle insurance claims reduced the workload at one participating company by 3 people. The total time needed for identification of a client and a claim was reduced from 6 months to less than 2 minutes.<ref>http://www.agentlink.org/resources/webCS/AL3_CS_004_Acklin.pdf</ref><ref name=agentlink>AgentLink, 50 facts about agent-based computing, http://www.econ.iastate.edu/tesfatsi/AgentLink.50CommercialApplic.MLuck.pdf</ref>
*An agent-based application from Whitestein Technologies (Switzerland) is used for the optimisation of large-scale transport. Vehicles are represented as agents in the system. These agents negotiate through an auction-like protocol: the vehicle capable of the cheapest delivery wins the auction. This way the overall cost of cargo delivery is reduced, and often the combined distance travelled by all vehicles as well.<ref name=agentlink />
*Agent technology developed by Agentis Software was used to manage the complex processes and changing business requirements involved in the challenging task of relocating residents during a project to refurbish or rebuild housing for 25,000 people for the Chicago Housing Authority.<ref name=agentlink />
==Software and programming==
For an elaborate overview see [http://en.wikipedia.org/wiki/Comparison_of_agent-based_modeling_software Comparison of agent-based modeling software].
 
  
 
==References==
<references/>
 
* http://bucky.stanford.edu/numericalmethods/PUBCODE/DEFAULT.HTM
 
* http://www2.econ.iastate.edu/classes/econ308/tesfatsion/ACETutorial.pdf
 
* http://www2.econ.iastate.edu/classes/econ308/tesfatsion/
 
* http://www2.econ.iastate.edu/tesfatsi/aemind.htm
 
* http://www2.econ.iastate.edu/tesfatsi/aintro.htm
 

Latest revision as of 22:59, 20 June 2012

Agent-based computational economics or shortly ACE is branch of computational economics. It uses agent-based models or simulations to model real world market or economic interactions between agents. Agents can represent institutions, firms, individuals or environment. Models, often created in specialized software or framework, are dynamic and allow introduction of heterogenous behavior of agents. ACE is therefore "a computational study of economic processes modeled as dynamic systems of interacting agents"[1]

Resarch

Main pillars of ACE resarch according to Leigh Tesfatsion.[2] [3]

  • Empirical
  • Normative
  • Qualitativ insight and theory generation
  • Methodological advancement

Empirical

This area stands for explaining possible reasons for observed regularities. This is achieved through replication of such regularities using multi-agent models. This approach allows to seek causal explanations thanks to bottom-up modelling of simulated market or economy[3].

Normative

ACE can help to increase normative understanding, ACE models can serve as virtual test field for different policies, regulations and can simulate many different economic scenarios. Subsequent insights in social norms and institutions can help to explain why there are some persisting regularities in markets. Another aspepct is relationship between environmental properties, organization structure and performance of that organization. [4]

Qualitative insight and theory generation

Through ACE approach, self-organizing capabilities of decentralized market systems could be understood. It can explain why there are some regularities persistent over time and why they remain while others disappear. Evolving agent world can be used to observe needed degree of coordination to establish institutions and attain self organization[3][2].

Methodological advancement

ACE seeks the best instruments and methods to study economic studies using computational experiment. Important aspect is whether data produced by such experiments are in accordance with real-world data. In order to achieve this methodological principles need to be developed as well as Programming, visualization and validation tools[3][2]. For more information see Software and programming

Fields of application

One of the first major applications of multi-agent models in social sciences was famous Sugarscape model by Epstein and Axell. From this application it is not far to the economic field. ACE can approach can be applied to rather simple double-auction market models or two-sector trading worlds. ACE is also used in various complex market simulations like tourism, digital news or investments. ACE can also help to analyze the impacts of various policies and regulations for example effect of deregulation on an electric power market [5] [6]. More complex models are capable of simulating whole economies with all necessary aspects as financial, household or job markets while maintaining homogenity of agents. Example of this is the EURACE project. Models like this enable what-if analysis and policy experiments on European scale. [3]. There are also applications to model economic behaviour of vanished civilizations [7]

Computational world models

Agent hierarchy used in AMES framework

Computational worlds can composed of various agents, some of them can act on their own, have learning capability and memory. Others represent rather reactive elements of the world such as technology or nature. Some agents can be passive like house or patch of land. Composition of agents is also possible, music band agent can be for instance a composition of agents playing musical instruments. Agents are therefore ordered in hierachy as shown on AMES framework example. Agent can be simple-programmed, autonomous or human-like [8] In order for agents to operate in computational worlds, methods and protocols are required. These methods and protocols enable interactions between agents themselves, between agents artificial institutions e.g. market or between agents and the world itself. These protocol consits of rules for mediation between agents and serve as description of interaction between agents e.g. between market and agent. [3] For example in double auction model, agents may have following methods:

getWorldEventSchedule(clock time);
getWorldProtocols (collusion, insolvency);
getMarketProtocols (posting, matching, trade, settlement);

First method acquires (getWorldEventSchedule current time from the world itself. Through getMarketProtocols agent can acquire valid protocol used for different kinds of interaction and negotiations between agents. Method getWorldProtocols can serve for other out of market interactions.


Equilibriums and attractors

Agent stops too early in a basin of attraction missing the highest attainable profit

Model behavior can result to various types of equilibrium and attractors. "System is in equilibrium if all influences acting on the system offset each other so that the system is in an unchanging condition" [9]. Agent-based models can help to determine which parameters influence stability or effectiveness of the market while visualization capabilities can help to identify possible basins of attraction. These can than be pinpointed through generated reports, plots or through other available ex-post analytical tools. Agent can for be for instance attracted by different basins of attraction while using different learning algorithms. Image on the right shows how agent scale the profit curve using deterministic reactive reinforcement learning. Because of using simple Derivative-follower adaptation[10] agent stops when profit level start's to fall, which is in this case too soon. Parameters can be changed on different levels e.g. agent level, market level or world level. Agent may have parameters like risk aversion, market may have parameters like non-employment payment percentage etc.[3]

Agent types and characteristics

Simple programmed agents are represented by a simple algorithm, be it a short length of code or the simplicity of the pseudo-random number generator the agent uses [8]. However, even simple agents can exhibit a form of swarm intelligence similar to the emergent behavior of a group of ants or termites. Groups of simple agents are then capable of solving complex tasks. In some cases, even without learning capability, the agents can optimize or generate orderly movement patterns; stigmergy can be one way to achieve this. Agents can be differentiated by their position in the cognitive hierarchy, where more complex agents are able to think more steps ahead than simple agents. Smarter agents can also emulate the behavior of simple agents if favourable, but not vice versa. Non-agent economic models often introduce simplifying assumptions, e.g. that all agents are rational and homogeneous [11]. Humans interacting in various systems or institutions are heterogeneous, and it is desirable to emulate this feature to produce more realistic behavior [8].

==Learning==

In order to capture the dynamic nature of real markets, agents should be able to learn, which means changing their behavior according to the situations they encounter. Agents in ACE can use various types of learning algorithms, and the selection of an algorithm can fundamentally influence the results of the simulation [3]. The Roth-Erev reinforcement learning algorithm is one of the possible choices. It works in the following steps:

  1. Initialize action propensities to an initial propensity value.
  2. Generate choice probabilities for all actions using current propensities.
  3. Choose an action according to the current choice probability distribution.
  4. Update propensities for all actions using the reward (profits) for the last chosen action.
  5. Repeat from step 2.
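The five steps above can be sketched in Python. This is a minimal illustration of one common Roth-Erev variant; the recency parameter `phi` and experimentation parameter `epsilon` are standard in the algorithm's literature, but the two-action setting and the reward values here are made-up examples.

```python
import random

# Minimal sketch of the Roth-Erev steps listed above (one common variant).
# The action set and reward function below are made-up examples.

def roth_erev(actions, reward_fn, rounds=200, initial=1.0, phi=0.1, epsilon=0.2):
    n = len(actions)
    propensity = [initial] * n                              # step 1
    for _ in range(rounds):
        total = sum(propensity)
        probs = [q / total for q in propensity]             # step 2
        k = random.choices(range(n), weights=probs)[0]      # step 3
        reward = reward_fn(actions[k])
        for j in range(n):                                  # step 4
            decayed = (1 - phi) * propensity[j]             # forget old experience
            if j == k:
                propensity[j] = decayed + reward * (1 - epsilon)
            else:
                propensity[j] = decayed + reward * epsilon / (n - 1)
    total = sum(propensity)                                 # step 5: loop above
    return [q / total for q in propensity]

random.seed(0)
# Action "high" always pays more, so its choice probability should dominate.
probs = roth_erev(["low", "high"], reward_fn=lambda a: 2.0 if a == "high" else 0.5)
```

Note how the update in step 4 spreads a small fraction of the reward to the actions not chosen; this experimentation term keeps the agent from locking in prematurely, which matters in the basin-of-attraction problems discussed earlier.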

===Possible learning types===

There are various other types of learning algorithms suitable for use in ACE [10]. Here is a brief summary by Leigh Tesfatsion:

  1. Reactive Reinforcement Learning (RL)
    1. Example 1: Deterministic reactive RL (e.g. Derivative-Follower)
    2. Example 2: Stochastic reactive RL (e.g. Roth-Erev algorithms)
  2. Belief-Based Learning
    1. Example 1: Fictitious play
    2. Example 2: Hybrid forms (e.g. Camerer/Ho EWA algorithm)
  3. Anticipatory Learning (Q-Learning)
  4. Evolutionary Learning (Genetic Algorithms - GAs)
  5. Connectionist Learning (Artificial Neural Nets - ANNs)

In reinforcement learning algorithms, if an action A in state S produces a favourable outcome (the desired reward), the tendency to choose action A should be increased. Likewise, if action A produces unfavourable results, the tendency to choose it should be decreased. In reactive RL, the agent decides what action to take based on past events. Reactive RL can be deterministic or stochastic. In the first case, the agent increases or decreases a scalar decision D, moving in the same direction until the reward level starts falling. An example of the second case (Roth-Erev) is given in the Learning section above. Belief-based learning uses reflection on past choices to determine whether a different action could have led to a more desirable outcome. These opportunity cost assessments are then used to choose a better action now. In this type of learning, the agent takes into consideration the presence of other agents who are also making decisions. To achieve this, the agent uses a probability distribution function to select the best response to the estimated actions of other agents [10]. An example is the matching pennies game:

{| class="wikitable"
|+ Matching pennies game outcome matrix (Player 1, Player 2)
|-
!
! Player 2: Heads !! Player 2: Tails
|-
! Player 1: Heads
| +1, −1 || −1, +1
|-
! Player 1: Tails
| −1, +1 || +1, −1
|}
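Fictitious play, the textbook example of belief-based learning, can be sketched on this game. Each player tracks the empirical frequency of the opponent's past actions and best-responds to that mix; in zero-sum games like matching pennies the empirical frequencies are known to converge, here toward 50/50. The starting priors below are arbitrary assumptions.

```python
from collections import Counter

# Sketch of fictitious play (belief-based learning) on matching pennies.
# PAYOFF maps (player 1 action, player 2 action) -> player 1's payoff.
PAYOFF = {("H", "H"): 1, ("T", "T"): 1, ("H", "T"): -1, ("T", "H"): -1}

def best_response(opponent_counts, player):
    # Expected payoff of each own action against the opponent's empirical mix.
    total = sum(opponent_counts.values())
    def expected_value(a):
        return sum(
            (PAYOFF[(a, b)] if player == 1 else -PAYOFF[(b, a)]) * c / total
            for b, c in opponent_counts.items()
        )
    return max("HT", key=expected_value)

# Arbitrary one-observation priors to seed the beliefs.
history1, history2 = Counter({"H": 1}), Counter({"T": 1})
for _ in range(1000):
    a1 = best_response(history2, player=1)  # player 1 believes history2
    a2 = best_response(history1, player=2)  # player 2 believes history1
    history1[a1] += 1
    history2[a2] += 1

freq_h = history1["H"] / sum(history1.values())
# freq_h drifts toward 0.5: the mixed-strategy equilibrium of the game.
```

The opportunity-cost flavor of belief-based learning is visible in `best_response`: the player evaluates what every action *would have* earned against the opponent's observed behavior, not just the action actually taken.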

If an agent uses anticipatory learning (or temporal-difference learning), it tries to predict what might happen in the future if it takes some action A. The relationship between value functions is therefore recursive: for each possible state, the value function yields the optimum total reward attainable by the agent over current and future times. This method requires computation of the transition, return and value functions to derive the optimal policy function; these functions depend on time and the current state. Q-learning makes it possible to compute the optimal policy function without knowing these functions. Instead, it iteratively acquires Q-values that are stored in an observation history, which is then used to estimate Q-values for the next possible action choices [10]. The cobweb model is an example of a genetic-algorithm application in economics. For connectionist learning, various configurations of artificial neural nets can be used.
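The Q-learning idea above can be sketched on a tiny made-up chain problem: four states in a row, a reward only in the last one, and an agent that never sees the transition or reward functions, only the outcomes of its own steps. The environment, parameter values and episode count are all illustrative assumptions.

```python
import random

# Q-learning sketch on a made-up chain: states 0..3, reward 1 on reaching
# state 3. The agent learns Q-values purely from experienced transitions.
N_STATES, ACTIONS = 4, ("left", "right")
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    # Environment dynamics, unknown to the learner.
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(1)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action choice from current Q-value estimates.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Temporal-difference update toward reward plus discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
# The learned greedy policy moves right, toward the rewarding state.
```

The recursion the text describes sits in the update line: the estimate for (state, action) is pulled toward the immediate reward plus the discounted value of the best action in the next state, so value information propagates backwards from the reward without any model of the environment.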

==Examples of real applications==

  • An agent-based system developed by Acklin (Netherlands) for international vehicle insurance claims reduced the workload at one participating company by three people. The total time needed for identification of a client and claim was reduced from 6 months to less than 2 minutes [12][13].
  • An agent-based application from Whitestein Technologies (Switzerland) is used for optimisation of large-scale transport. Vehicles are represented as agents in the system, and these agents negotiate through an auction-like protocol: the vehicle capable of the cheapest delivery wins the auction. This reduces the overall cost of cargo delivery, and often the combined distance travelled by all vehicles as well [13].
  • Agent technology developed by Agentis Software was used to manage the complex processes and changing business requirements involved in the challenging task of relocating residents during a project to refurbish or rebuild housing for 25,000 people by the Chicago Housing Authority [13].

==Software and programming==

For an elaborate overview see [http://en.wikipedia.org/wiki/Comparison_of_agent-based_modeling_software Comparison of agent-based modeling software].

==References==

  1. Leigh Tesfatsion, "Agent-Based Computational Economics: A Constructive Approach to Economic Theory" [http://www.econ.iastate.edu/tesfatsi/hbintlt.pdf (pdf, 253 KB)], in Leigh Tesfatsion and Kenneth L. Judd (eds.), Handbook of Computational Economics, Volume 2: Agent-Based Computational Economics, Handbooks in Economics Series, Elsevier/North-Holland, the Netherlands, 2006.
  2. Leigh Tesfatsion (2007), "Agent-based computational economics", Scholarpedia, http://www.scholarpedia.org/article/Agent-based_computational_economics
  3. Leigh Tesfatsion, "Agent-Based Computational Economics: Modeling Economies as Complex Adaptive Systems", 2010-03-24 [cit. 2012-06-18]. http://www2.econ.iastate.edu/classes/econ308/tesfatsion/ACETutorial.pdf
  4. Leigh Tesfatsion, "Agent-based computational economics: modeling economies as complex adaptive systems", ed. Leigh Tesfatsion & Kenneth L. Judd, Information Sciences 149.4 (2003): 262–268. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.143.4883&rep=rep1&type=pdf
  5. Cirillo, R. et al. (2006), "Evaluating the potential impact of transmission constraints on the operation of a competitive electricity market in Illinois", Argonne National Laboratory, Argonne, IL, ANL-06/16 (report prepared for the Illinois Commerce Commission), April. http://www.dis.anl.gov/pubs/61116.pdf
  6. Charles M. Macal and Michael J. North, "Tutorial on Agent-Based Modelling and Simulation", Journal of Simulation, Vol. 4, 2010, 151–162.
  7. Kohler, T. A., Gumerman, G. J. and Reynolds, R. G. (2005), "Simulating ancient societies", Scientific American 293(1): 77–84. http://libarts.wsu.edu/anthro/pdf/Kohler%20et%20al.%20SciAm.pdf
  8. Chen, S.-H., "Varieties of agents in agent-based computational economics: A historical and an interdisciplinary perspective", Journal of Economic Dynamics and Control (2011), doi:10.1016/j.jedc.2011.09.003. http://www.econ.iastate.edu/tesfatsi/ACEHistoricalSurvey.SHCheng2011.pdf
  9. http://dl.acm.org/citation.cfm?id=1531270
  10. Leigh Tesfatsion, "Learning Algorithms: Illustrative Examples".
  11. Charles M. Macal and Michael J. North, "Tutorial on Agent-Based Modelling and Simulation", Journal of Simulation, Vol. 4, 2010, 151–162. http://www.econ.iastate.edu/tesfatsi/ABMTutorial.MacalNorth.JOS2010.pdf
  12. http://www.agentlink.org/resources/webCS/AL3_CS_004_Acklin.pdf