Agent-based computational economics


Research

Main pillars of ACE research[1]:

  • Empirical
  • Normative
  • Qualitative insight and theory generation
  • Methodological advancement

Empirical

This area is concerned with explaining possible reasons for observed empirical regularities.

Normative

Qualitative insight and theory generation

Methodological advancement

Fields of application

  • Double auction simulation
  • Financial markets
  • Labour markets
  • Economic zones model

Computational world models

Agent hierarchy used in the AMES framework

The computational world is composed of many agents. Some of them can act on their own and have learning capability and memory, others represent rather reactive elements of the world such as technology or nature, and some agents can be passive, like a house or a patch of land. Composition of agents is also possible: a music band agent can, for instance, be a composition of agents playing musical instruments. Agents are therefore ordered in a hierarchy, as shown in the AMES framework example. An agent can be simple-programmed, autonomous or human-like.[2]

In order for agents to operate in computational worlds, methods and protocols are required. These methods and protocols enable interactions between the agents themselves, and between agents and the world or artificial institutions, e.g. a market. The protocols consist of rules for mediation between agents and serve as a description of the interaction between agents, e.g. between a market and an agent.[3][4] For example, in a double auction model, agents may have the following methods (a minimal sketch of such an interface follows the listing):

getWorldEventSchedule(clock time);
getWorldProtocols (collusion, insolvency);
getMarketProtocols (posting, matching, trade, settlement);
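
As a hedged illustration, the Python sketch below wraps these methods in a trader agent class. The World and Market objects, their attribute names and the dictionary return values are assumptions made for illustration, not the interface of any particular framework.

import random

# Minimal sketch of a trader agent interface in a double auction world.
# The world/market objects and their attributes are illustrative assumptions.
class TraderAgent:
    def __init__(self, world, market):
        self.world = world
        self.market = market

    def get_world_event_schedule(self, clock_time):
        # Ask the world which events (e.g. market openings) occur at clock_time.
        return self.world.event_schedule(clock_time)

    def get_world_protocols(self):
        # World-level rules, e.g. how collusion and insolvency are handled.
        return {"collusion": self.world.collusion_rule,
                "insolvency": self.world.insolvency_rule}

    def get_market_protocols(self):
        # Market-level rules governing posting, matching, trade and settlement.
        return {"posting": self.market.posting_rule,
                "matching": self.market.matching_rule,
                "trade": self.market.trade_rule,
                "settlement": self.market.settlement_rule}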


Equilibria and attractors

Model behavior can converge to various types of equilibria and attractors. A system is in equilibrium if all influences acting on it offset each other, so that the system is in an unchanging condition.[5] Agent-based models can help determine which parameters influence the stability or effectiveness of the market. Parameters can be changed on different levels, e.g. the agent level, market level or world level: an agent may have parameters like risk aversion, while a market may have parameters like the non-employment payment percentage.[6]
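
As a hedged illustration of parameters living on different levels, the sketch below groups them into agent-, market- and world-level containers. All names and default values are assumptions made for illustration only.

from dataclasses import dataclass

# Illustrative grouping of simulation parameters by level; names and values
# are assumptions, not taken from any particular ACE model.
@dataclass
class AgentParams:
    risk_aversion: float = 0.5                # 0 = risk neutral, higher = more risk averse
    initial_wealth: float = 100.0

@dataclass
class MarketParams:
    non_employment_payment_pct: float = 0.3   # share of the wage paid when unemployed

@dataclass
class WorldParams:
    n_agents: int = 100
    n_periods: int = 1000

# An experiment varies one parameter while holding the others fixed.
baseline = (AgentParams(), MarketParams(), WorldParams())
high_risk_aversion = (AgentParams(risk_aversion=0.9), MarketParams(), WorldParams())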


Agent types and characteristics

Simple programmed agents are represented by a simple algorithm, be it a short piece of code or a simple pseudo-random number generator the agent uses (Chen). However, even simple agents can exhibit a form of swarm intelligence similar to the emergent behavior of a group of ants or termites. Groups of simple agents are then capable of solving complex tasks. Even without learning capability, the agents can optimize or generate orderly movement patterns; stigmergy is one way to achieve this. Agents can be differentiated by their position in the cognitive hierarchy, where more complex agents are able to think more steps ahead than simple agents. Smarter agents can also emulate the behavior of simple agents if favourable, but not vice versa. Non-agent economic models often introduce simplifying assumptions, e.g. that all agents are rational and homogeneous (Macal and North). Humans interacting in various systems or institutions are heterogeneous, and it is desirable to emulate this feature to produce more realistic behavior (Chen).
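
As a hedged illustration of such a simple programmed agent, the sketch below follows the spirit of a zero-intelligence trader that posts random bids constrained only by its private valuation; the class and method names are assumptions made for illustration.

import random

# Minimal sketch of a simple programmed agent: a zero-intelligence-style buyer
# with no learning and no memory, posting a random bid below its valuation.
class ZeroIntelligenceBuyer:
    def __init__(self, valuation, rng=None):
        self.valuation = valuation          # maximum price the buyer is willing to pay
        self.rng = rng or random.Random()

    def post_bid(self):
        # A random bid in [0, valuation]; no strategy, no reaction to history.
        return self.rng.uniform(0, self.valuation)

buyer = ZeroIntelligenceBuyer(valuation=10.0)
print(buyer.post_bid())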

Learning

In order to capture the dynamic nature of real markets, agents must be able to learn, i.e. to change their behavior according to the situations they encounter. Agents in ACE can use various types of learning algorithms, and the selection of an algorithm can fundamentally influence the results of the simulation[7]. The Roth-Erev reinforcement learning algorithm is one of the possible choices (a minimal sketch follows the steps below):

  1. Initialize action propensities to an initial propensity value.
  2. Generate choice probabilities for all actions using current propensities.
  3. Choose an action according to the current choice probability distribution.
  4. Update propensities for all actions using the reward (profits) for the last chosen action.
  5. Repeat from step 2.
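
As a hedged illustration, the Python sketch below implements these five steps for a discrete set of actions. The class name, parameter names and default values are assumptions, and the propensity update follows the commonly used modified Roth-Erev formulation with recency and experimentation parameters.

import random

# Minimal sketch of Roth-Erev reinforcement learning over discrete actions.
class RothErevLearner:
    def __init__(self, n_actions, initial_propensity=1.0,
                 recency=0.1, experimentation=0.2):
        # Step 1: initialize all action propensities to the same value.
        self.q = [initial_propensity] * n_actions
        self.recency = recency
        self.experimentation = experimentation

    def choice_probabilities(self):
        # Step 2: turn current propensities into choice probabilities.
        total = sum(self.q)
        return [qj / total for qj in self.q]

    def choose_action(self):
        # Step 3: sample an action from the current probability distribution.
        return random.choices(range(len(self.q)),
                              weights=self.choice_probabilities())[0]

    def update(self, chosen, reward):
        # Step 4: update all propensities using the reward (profit) of the chosen action.
        n, eps = len(self.q), self.experimentation
        for j in range(n):
            gain = reward * (1 - eps) if j == chosen else self.q[j] * eps / (n - 1)
            self.q[j] = (1 - self.recency) * self.q[j] + gain

# Step 5: repeat the choose / observe reward / update loop each trading period.
learner = RothErevLearner(n_actions=3)
for _ in range(100):
    action = learner.choose_action()
    profit = random.random()        # placeholder for the profit reported by the market
    learner.update(action, profit)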

List of learning types being used

There are various types of learning algorithms (Tesfatsion, LearnAlgorithms.LT.pdf). Here is a brief summary:

  1. Reactive Reinforcement Learning (RL)
    1. Example 1: Deterministic reactive RL (e.g. Derivative-Follower)
    2. Example 2: Stochastic reactive RL (e.g. Roth-Erev algorithms)
  2. Belief-Based Learning
    1. Example 1: Fictitious play
    2. Example 2: Hybrid forms (e.g. Camerer/Ho EWA algorithm)
  3. Anticipatory Learning (Q-Learning)
    1. Evolutionary Learning (Genetic Algorithms - GAs)
  4. Connectionist Learning (Artificial Neural Nets - ANNs)
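
As a hedged illustration of item 3, the sketch below shows the core of tabular Q-learning. The environment interface (reset()/step(action) returning the next state, reward and a done flag), the learning rate, discount factor and exploration rate are assumptions made for illustration.

import random
from collections import defaultdict

# Minimal sketch of tabular Q-learning (anticipatory learning).
def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    q = defaultdict(float)                  # q[(state, action)] -> value estimate
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy choice: explore occasionally, otherwise exploit.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            # One-step lookahead: reward plus discounted best future value.
            best_next = max(q[(next_state, a)] for a in actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q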

Other computing methods

  • Linear Equations and Iterative Methods (Currently empty)
  • Optimization
  • Nonlinear Equations
  • Approximation
  • Numerical Integration and Differentiation
  • Monte Carlo and Simulation Methods (Currently empty)
  • Quasi-Monte Carlo Methods (Currently empty)
  • Finite Difference Methods (Currently empty)
  • Projection Methods for Functional Equations (Currently empty)
  • Numerical Dynamic Programming (Currently empty)
  • Regular Perturbations of Simple Systems (Currently empty)
  • Regular Perturbations in Multidimensional Systems (Currently empty)
  • Advanced Asymptotic Methods (Currently empty)
  • Solution Methods for Perfect Foresight Models (Currently empty)
  • Solving Rational Expectations Models

References

  1. TESFATSION, Leigh. Agent-Based Computational Economics: Growing Economies from the Bottom Up. Iowa State University. Agent-Based Computational Economics [online]. 2012-05-02 [cited 2012-06-18]. Available at: http://www2.econ.iastate.edu/tesfatsi/ace.htm
  2. CHEN, S.-H. Varieties of agents in agent-based computational economics: A historical and an interdisciplinary perspective. Journal of Economic Dynamics and Control (2011). doi:10.1016/j.jedc.2011.09.003. Available at: http://www.econ.iastate.edu/tesfatsi/ACEHistoricalSurvey.SHCheng2011.pdf
  3. TESFATSION, Leigh. Agent-Based Computational Economics: Modeling Economies as Complex Adaptive Systems [online]. 2010-03-24 [cited 2012-06-18]. Available at: http://www2.econ.iastate.edu/classes/econ308/tesfatsion/ACETutorial.pdf
  4. Template:Cite web
  5. http://dl.acm.org/citation.cfm?id=1531270
  6. TESFATSION, Leigh. Modeling Economies as Complex Adaptive Systems [online]. 2010-03-24 [cited 2012-06-18]. Available at: http://www2.econ.iastate.edu/classes/econ308/tesfatsion/ACETutorial.pdf
  7. TESFATSION, Leigh. Modeling Economies as Complex Adaptive Systems [online]. 2010-03-24 [cited 2012-06-18]. Available at: http://www2.econ.iastate.edu/classes/econ308/tesfatsion/ACETutorial.pdf