A multi-agent simulation leveraging large language models to create dynamic, flexible environments for studying agentic interactions. Designed for researchers, developers, and hobbyists seeking to explore emergent behaviors and complex systems.
Token World is a dynamic, multi-agent simulation framework powered by large language models. Designed with flexibility in mind, the project enables the creation of diverse, open-ended environments where agents interact, evolve, and discover, mimicking complex real-world systems.
The simulation is built for developers, researchers, and enthusiasts exploring the intersections of artificial intelligence, social dynamics, and agent-based modeling. Whether you're simulating a bustling small-town community or creating intricate role-playing game scenarios, Token World offers the tools and grounding mechanisms to make interactions logical and meaningful.
- Flexibility: Adapt the framework to simulate various environments and scenarios with minimal customization effort.
- Grounding: Ensure agent actions remain logical and rooted in the environment to reduce hallucinations.
- Partial Observability: Introduce realistic discovery and exploration dynamics, making simulations more engaging and lifelike.
- Ease of Use: Lower the barrier to experimenting with large language models in agentic workflows.
By combining cutting-edge AI capabilities with a robust simulation environment, this project inspires developers to push boundaries and explore emergent behaviors in multi-agent systems. It’s not just a framework; it’s a learning experience designed to empower creativity and innovation in AI-driven simulations.
Token World includes a range of technical features that make it a powerful framework for multi-agent simulations (note: most or all of these features have not been implemented yet and are included here for documentation purposes):
- **Entity-Based Environment**
  - Every object in the simulation is represented as an entity with unique properties stored in an SQLite database.
  - Supports dynamic state updates and seamless entity interactions, validated through precondition checks.
- **Agentic Interactions**
  - Agents generate natural language actions that are translated into executable operations.
  - A modular action pipeline checks each action's feasibility before execution, using grounding mechanisms to validate preconditions and apply effects (a minimal illustrative sketch follows this feature list).
- **Delegated Action Handling**
  - Specialized sub-agents handle specific types of interactions (e.g., spatial navigation, item manipulation), ensuring efficiency and extensibility in task execution.
- **Partial Observability Framework**
  - Agents operate with limited knowledge of their environment.
  - Information discovery is modeled through interactions, enabling realistic exploration and goal-driven behaviors.
- **Dynamic Mechanics Generation**
  - The simulation evolves dynamically, with new mechanics generated on demand based on the agents' actions, allowing the environment to grow and adapt in response to their behavior and making each run unique.
  - This feature parallels the concept of the "observer effect" in physics, where the state of a system materializes upon interaction, akin to the idea that "reality doesn't exist until observed."
- **Web-Based Debugging Tool**
  - Provides an interactive interface to inspect agent interactions and entity states in real time.
  - Facilitates debugging and transparency for developers working on complex simulations.
- **Extensible Design**
  - Modular architecture allows for the addition of custom agents, actions, and environments.
  - Easily integrates with different LLM backends, including lightweight local models.
- **Scalable Simulations**
  - Optimized to run small to medium-scale simulations on personal laptops, with support for larger simulations on more powerful systems.
- **Interpretable Logging**
  - All interactions and state changes are logged, enabling detailed analysis of agent behaviors and environment dynamics.
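To make the entity storage and grounding pipeline described above concrete, here is a minimal illustrative sketch (referenced from the Agentic Interactions item). Because these features are largely unimplemented, nothing here is the project's actual API: the `entities` table layout, the `save_entity`/`load_entity` helpers, and the `PickUpAction` class are hypothetical names chosen only to show how SQLite-backed entities could be combined with precondition checks and effects.

```python
import json
import sqlite3
from dataclasses import dataclass

# Hypothetical entity store: every object is a row whose properties live in a JSON blob.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entities (id TEXT PRIMARY KEY, properties TEXT)")


def save_entity(entity_id: str, properties: dict) -> None:
    conn.execute("INSERT OR REPLACE INTO entities VALUES (?, ?)",
                 (entity_id, json.dumps(properties)))


def load_entity(entity_id: str) -> dict:
    row = conn.execute("SELECT properties FROM entities WHERE id = ?",
                       (entity_id,)).fetchone()
    return json.loads(row[0])


@dataclass
class PickUpAction:
    """Hypothetical grounded action: an agent picks up an item."""
    agent_id: str
    item_id: str

    def precondition(self) -> bool:
        # Grounding check: the item must share a location with the agent
        # and must not already be held by anyone.
        agent, item = load_entity(self.agent_id), load_entity(self.item_id)
        return item["location"] == agent["location"] and item["held_by"] is None

    def apply(self) -> None:
        # Effect: only applied once the precondition has been validated.
        item = load_entity(self.item_id)
        item["held_by"], item["location"] = self.agent_id, None
        save_entity(self.item_id, item)


save_entity("alice", {"location": "kitchen"})
save_entity("mug", {"location": "kitchen", "held_by": None})

action = PickUpAction(agent_id="alice", item_id="mug")
if action.precondition():  # validate before execution
    action.apply()         # then apply the effect to the entity store
```

In the pipeline described above, an agent's natural-language action ("Alice picks up the mug") would first be translated into a structured operation of this kind before the precondition check runs and the effect is written back to the entity store.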
Token World is designed to support a variety of scenarios, for example:
- Scenario 1: Small-town simulation.
- Scenario 2: Role-playing game setup.
- Scenario 3: Custom experimental environment for testing agentic workflows.
For instructions on setting up the project for development and contributions, see CONTRIBUTING.md.
To set up the environment variables, create a `.env` file in the root directory of your project (see `.env.example`) and add the following lines:
OPENAI_BASE_URL=https://your_openai_endpoint
OPENAI_API_KEY=your_openai_api_key_here
Note that the OpenAI endpoint can be any endpoint that implements the OpenAI API (e.g., Ollama).
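As a quick, optional check that the endpoint and key are being picked up, you can query the configured endpoint directly from Python. This snippet is an illustration only; it assumes the `python-dotenv` and `openai` (v1.x) packages are installed, neither of which is stated as a requirement here.

```python
from dotenv import load_dotenv  # assumption: python-dotenv is installed
from openai import OpenAI       # assumption: the openai v1.x client is installed

load_dotenv()  # reads OPENAI_BASE_URL and OPENAI_API_KEY from the .env file

# The official client picks both variables up from the environment automatically.
client = OpenAI()
print([model.id for model in client.models.list()])
```

Against an Ollama endpoint, the printed list should contain your locally pulled models.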
Alternatively, you can provide the `--openai_base_url` and `--openai_api_key` arguments when running the CLI:
$ python -m token_world --openai_base_url http://192.168.1.199:11434/v1 --openai_api_key your_openai_api_key_here
# or
$ token_world --openai_base_url http://192.168.1.199:11434/v1 --openai_api_key your_openai_api_key_here
To use Token World as a library:
from token_world import BaseClass
from token_world import base_function

BaseClass().base_method()
base_function()
To run Token World from the command line:
$ python -m token_world
# or
$ token_world
Read the CONTRIBUTING.md file.
Very loosely inspired by:
Park, J. S., O'Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative agents: Interactive simulacra of human behavior. Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 2:1–2:22. https://doi.org/10.1145/3586183.3606763