diff --git a/README.md b/README.md index 75b9375..b4d1dd0 100644 --- a/README.md +++ b/README.md @@ -44,5 +44,8 @@ pip install opencv-python ## Documentation -Is not yet written :(. But has planned :) Before it has happend, there are random pieces of lib discussions which could be useful: +* [Ptan introduction](docs/intro.ipynb) + +Random pieces of information + * `ExperienceSource` vs `ExperienceSourceFirstLast`: https://github.com/Shmuma/ptan/issues/17#issuecomment-489584115 diff --git a/docs/.ipynb_checkpoints/intro-checkpoint.ipynb b/docs/.ipynb_checkpoints/intro-checkpoint.ipynb new file mode 100644 index 0000000..7069392 --- /dev/null +++ b/docs/.ipynb_checkpoints/intro-checkpoint.ipynb @@ -0,0 +1,1798 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Ptan intro\n", + "\n", + "[PTAN](https://github.com/Shmuma/ptan) (abbreviation of `PyTorch AgentNet`) is a small library I wrote to simplify RL experiments. It tries to keep the balance between two extremes:\n", + "\n", + "1. import the lib, then write one line to train a DQN (a vivid example is the [OpenAI baselines project](https://github.com/openai/baselines/))\n", + "2. implement everything from scratch\n", + "\n", + "The first approach is very inflexible. It works well when you're using the library the way it is supposed to be used. But if you want to do something fancy, you quickly find yourself hacking the lib and fighting with constraints imposed by the author rather than solving the problem you want to solve.\n", + "\n", + "The second extreme gives you *too much freedom* and requires implementing replay buffers and trajectory handling over and over again, which is error-prone, boring and inefficient.\n", + "\n", + "Several years ago I was tired of writing replay buffers and decided to implement something in between: not \"the universal RL lib\", but a set of classes to avoid writing boilerplate code.\n", + "\n", + "I used ptan to implement all the [examples for the \"Deep Reinforcement Learning Hands-On\" book](https://github.com/PacktPublishing/Deep-Reinforcement-Learning-Hands-On/), which cover all the major DRL methods: DQN, A3C, all the tricks from the Rainbow paper, DDPG, D4PG, PPO, TRPO, ACKTR and AlphaGo Zero.\n", + "\n", + "## High-level overview\n", + "\n", + "At a high level, ptan provides the following entities:\n", + "\n", + "* `Agent`: a class which knows how to convert a batch of observations into a batch of actions to be executed. It can contain optional state, in case you need to track some info between consecutive actions in one episode (for example, noise params for Ornstein–Uhlenbeck exploration). Normally, you can use [already existing Agent instances](https://github.com/Shmuma/ptan/blob/master/ptan/agent.py) or write your own subclass of `BaseAgent`.\n", + "* `ActionSelector`: a small piece of logic which knows how to choose the action from some output of the network. Works in tandem with `Agent`: https://github.com/Shmuma/ptan/blob/master/ptan/actions.py\n", + "* `ExperienceSource` and variations: using the `Agent` instance and a gym environment object, provides information about the trajectories of episodes. In the simplest form it is one single $(a, r, s')$ transition at a time, but the functionality goes beyond this. Source file is https://github.com/Shmuma/ptan/blob/master/ptan/experience.py\n", + "* `ExperienceSourceBuffer` and friends: replay buffers with various characteristics. 
Includes simple replay buffer and two versions of prioritized replay buffers\n", + "* various utility classes, like `TargetNet` (both discrete and continuous), wrappers for time-series preprocessing (used for tracking training progress in TensorBoard)\n", + "* includes wrappers for Gym environments, for example, wrappers for Atari games (copy-pasted from OpenAI baselines with some tweaks): https://github.com/Shmuma/ptan/blob/master/ptan/common/wrappers.py\n", + "\n", + "And that's basically it. Total amount of sourse is just ~1500 lines of Python, which makes it possible to master in couple of hours.\n", + "\n", + "Below I'm going to demonstrate how ptan could be used to simplify RL methods implementation." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Installation\n", + "\n", + "We'll need gym, opencv python bindings. And pytorch, of course" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Collecting package metadata: done\n", + "Solving environment: done\n", + "\n", + "\n", + "==> WARNING: A newer version of conda exists. <==\n", + " current version: 4.6.2\n", + " latest version: 4.7.10\n", + "\n", + "Please update conda by running\n", + "\n", + " $ conda update -n base -c defaults conda\n", + "\n", + "\n", + "\n", + "# All requested packages already installed.\n", + "\n", + "Requirement already satisfied: ptan in /Users/shmuma/work/ptan (0.5)\n", + "Requirement already satisfied: torch==1.1.0 in /anaconda3/envs/ptan/lib/python3.7/site-packages (from ptan) (1.1.0)\n", + "Requirement already satisfied: gym in /anaconda3/envs/ptan/lib/python3.7/site-packages (from ptan) (0.12.1)\n", + "Requirement already satisfied: atari-py in /anaconda3/envs/ptan/lib/python3.7/site-packages (from ptan) (0.2.6)\n", + "Requirement already satisfied: numpy in /anaconda3/envs/ptan/lib/python3.7/site-packages (from ptan) (1.16.2)\n", + "Requirement already satisfied: opencv-python in /anaconda3/envs/ptan/lib/python3.7/site-packages (from ptan) (4.1.0.25)\n", + "Requirement already satisfied: pyglet>=1.2.0 in /anaconda3/envs/ptan/lib/python3.7/site-packages (from gym->ptan) (1.3.2)\n", + "Requirement already satisfied: six in /anaconda3/envs/ptan/lib/python3.7/site-packages (from gym->ptan) (1.12.0)\n", + "Requirement already satisfied: scipy in /anaconda3/envs/ptan/lib/python3.7/site-packages (from gym->ptan) (1.2.1)\n", + "Requirement already satisfied: requests>=2.0 in /anaconda3/envs/ptan/lib/python3.7/site-packages (from gym->ptan) (2.21.0)\n", + "Requirement already satisfied: future in /anaconda3/envs/ptan/lib/python3.7/site-packages (from pyglet>=1.2.0->gym->ptan) (0.17.1)\n", + "Requirement already satisfied: urllib3<1.25,>=1.21.1 in /anaconda3/envs/ptan/lib/python3.7/site-packages (from requests>=2.0->gym->ptan) (1.24.1)\n", + "Requirement already satisfied: idna<2.9,>=2.5 in /anaconda3/envs/ptan/lib/python3.7/site-packages (from requests>=2.0->gym->ptan) (2.8)\n", + "Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /anaconda3/envs/ptan/lib/python3.7/site-packages (from requests>=2.0->gym->ptan) (3.0.4)\n", + "Requirement already satisfied: certifi>=2017.4.17 in /anaconda3/envs/ptan/lib/python3.7/site-packages (from requests>=2.0->gym->ptan) (2019.6.16)\n" + ] + } + ], + "source": [ + "!conda install pytorch torchvision -c pytorch\n", + "!pip install ptan" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Imports" + ] + }, + { + 
"cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "import ptan\n", + "import gym\n", + "import numpy as np\n", + "from typing import List, Any, Optional, Tuple\n", + "\n", + "import torch\n", + "import torch.nn as nn\n", + "import torch.nn.functional as F\n", + "import torch.optim as optim\n", + "import matplotlib.pylab as plt\n", + "\n", + "%matplotlib inline" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Action selector\n", + "\n", + "https://github.com/Shmuma/ptan/blob/master/ptan/actions.py\n", + "\n", + "Helps to go from network output to concrete action values. Most common cases:\n", + "* Argmax: commonly used by Q-value methods, when the network predicts Q-values for set of actions and the desired action is the action with the largest Q\n", + "* Policy-based: network outputs the probability distribution (in form of logits or normalized distribution) and action need to be sampled from this distribution. Used commonly by PG-methods\n", + "\n", + "Action selector is used by the `Agent`, and rarely need to be customized (but you have this option). Concrete classes which could be used:\n", + "* [`ArgmaxActionSelector`](https://github.com/Shmuma/ptan/blob/master/ptan/actions.py#L12): applies `argmax` on the second axis of passed tensor (matrix is assumed)\n", + "* [`ProbabilityActionSelector`](https://github.com/Shmuma/ptan/blob/master/ptan/actions.py#L36): samples from probability distribution of discrete set of actions\n", + "* [`EpsilonGreedyActionSelector`](https://github.com/Shmuma/ptan/blob/master/ptan/actions.py#L21): has parameter $\\epsilon$ which specifies the probability of random action to be taken. \n", + "\n", + "All the classes assume numpy arrays to be passed to them\n" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([[ 1, 2, 3],\n", + " [ 1, -1, 0]])" + ] + }, + "execution_count": 3, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "q_vals = np.array([[1, 2, 3], [1, -1, 0]])\n", + "q_vals" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([2, 0])" + ] + }, + "execution_count": 4, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "selector = ptan.actions.ArgmaxActionSelector()\n", + "selector(q_vals)" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([2, 0])" + ] + }, + "execution_count": 5, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "selector = ptan.actions.EpsilonGreedyActionSelector(epsilon=0.0)\n", + "selector(q_vals)\n", + "# have to be the same result, as episilon is 0 (no random actions)" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([1, 1])" + ] + }, + "execution_count": 6, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "selector = ptan.actions.EpsilonGreedyActionSelector(epsilon=1.0)\n", + "selector(q_vals)\n", + "# will be random" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[2 2 1]\n", + "[1 2 1]\n", + "[1 2 0]\n", + "[1 2 1]\n", + "[1 2 1]\n", + "[1 2 1]\n", + "[0 2 0]\n", + "[1 2 0]\n", + "[1 2 1]\n", + "[1 2 
0]\n" + ] + } + ], + "source": [ + "# here we sample from probability distribution (have to be normalized)\n", + "selector = ptan.actions.ProbabilityActionSelector()\n", + "for _ in range(10):\n", + " acts = selector(np.array([\n", + " [0.1, 0.8, 0.1],\n", + " [0.0, 0.0, 1.0],\n", + " [0.5, 0.5, 0.0]\n", + " ]))\n", + " print(acts)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Agent class\n", + "\n", + "`Agent` is class which knows how to convert observations into actions. There are three most common approaches:\n", + "* **Q-function**: NN predicts Q-values for actions, the $argmax Q(s)$ is the action\n", + "* **Policy-based**: NN predicts probability distribution over actions $\\pi(s)$, you sample from this distribution and get the action to do\n", + "* **Continuous control**: NN predits the $\\mu(s)$ of continuous control parameters and the output is your actions to execute.\n", + "\n", + "Third case is trivial, two first approached is implemented in `ptan` to be reused without any coding: [`DQNAgent`](https://github.com/Shmuma/ptan/blob/master/ptan/agent.py#L55) and [`PolicyAgent`](https://github.com/Shmuma/ptan/blob/master/ptan/agent.py#L104).\n", + "\n", + "But in reality, it is often needed to implement your own agent, some of the reasons:\n", + "* You have fancy architecture of the net -- mixture of continuous and discrete action space, have multi-modal observations (text and pixels, for example)\n", + "* You want to use non-standard exploration strategies, for example Ornstein–Uhlenbeck process (very popular exploration strategy in continuous control domain)\n", + "* You have PoMDP environment and you decision are not fully defined by observations, but by some internal agent state (which is also the case for Ornstein–Uhlenbeck)\n", + "\n", + "All those cases are easily supported by subclassing the `BaseAgent` class, in TextWorld's tutorial we'll do exactly this.\n", + "\n", + "Below is the example how provided `DQNAgent` and `PolicyAgent` could be used." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## DQNAgent\n", + "\n", + "Suppose we have NN which produces Q-values from observations. `DQNAgent` takes batch of observations on input (as numpy array), apply the network on them to get Q-values, then uses provided `ActionSelector` to convert Q-values to indices of actions.\n", + "\n", + "Below is the small example. For simplicity, our network always produces the same output for the input batch" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [], + "source": [ + "class Net(nn.Module):\n", + " def __init__(self, actions: int):\n", + " super(Net, self).__init__()\n", + " self.actions = actions\n", + " \n", + " def forward(self, x):\n", + " # we always produce diagonal tensor of shape (batch_size, actions)\n", + " return torch.eye(x.size()[0], self.actions)" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [], + "source": [ + "net = Net(actions=3)" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[1., 0., 0.],\n", + " [0., 1., 0.]])" + ] + }, + "execution_count": 10, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "net(torch.zeros(2, 10))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "So, let's use simple $argmax$ policy for the beginning. 
Agent will return actions corresponding to 1s in the net output." + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [], + "source": [ + "selector = ptan.actions.ArgmaxActionSelector()\n", + "agent = ptan.agent.DQNAgent(dqn_model=net, action_selector=selector, device=\"cpu\")\n", + "# note that you need to tell agent are you using GPU or not by passing device, by default it equals to \"cpu\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we can pass the agent some observations (which will be ignored as our example is trivial), the output will be the actions according to NN output." + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(array([0, 1]), [None, None])" + ] + }, + "execution_count": 12, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "agent(torch.zeros(2, 5))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The output from the agent is a tuple with two components:\n", + "1. numpy array with actions to be executed -- in our case of discrete actions, they are indices\n", + "2. list with agent's internal state. This is used for stateful agents, and is a list of None in our case. As our agent is stateless, you can ignore it" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now let's try to make the agent with epsilon-greedy exploration strategy. For this, we need just pass a different action selector and that's done." + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": {}, + "outputs": [], + "source": [ + "selector = ptan.actions.EpsilonGreedyActionSelector(epsilon=1.0)\n", + "agent = ptan.agent.DQNAgent(dqn_model=net, action_selector=selector)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As epsilon is 1, all the actions will be random, regardless of network's output" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([1, 0, 1, 1, 0, 2, 2, 1, 0, 0])" + ] + }, + "execution_count": 14, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "agent(torch.zeros(10, 5))[0]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "But we can change the epsilon value on the fly, which is very handy during the training, when we supposed to anneal epsilon over time." + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([2, 1, 2, 2, 0, 1, 2, 0, 0, 0])" + ] + }, + "execution_count": 15, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "selector.epsilon = 0.5\n", + "agent(torch.zeros(10, 5))[0]" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([0, 1, 2, 0, 0, 0, 0, 0, 0, 0])" + ] + }, + "execution_count": 16, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "selector.epsilon = 0.1\n", + "agent(torch.zeros(10, 5))[0]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## PolicyAgent\n", + "\n", + "`PolicyAgent` expects the network to produce policy distribution over discrete set of actions. Policy distribution could be either logits (unnormalized) or normalized distribution. 
In practice you should always use logits to improve stability.\n", + "\n", + "Let's reimplement the above sample, but now the network will produce the policy as logits." + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "metadata": {}, + "outputs": [], + "source": [ + "class Net(nn.Module):\n", + " def __init__(self, actions: int):\n", + " super(Net, self).__init__()\n", + " self.actions = actions\n", + " \n", + " def forward(self, x):\n", + " # now we produce a tensor with the first two actions having the same logit scores\n", + " res = torch.zeros((x.size()[0], self.actions), dtype=torch.float32)\n", + " res[:, 0] = 1\n", + " res[:, 1] = 1\n", + " return res" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[1., 1., 0., 0., 0.],\n", + " [1., 1., 0., 0., 0.],\n", + " [1., 1., 0., 0., 0.],\n", + " [1., 1., 0., 0., 0.],\n", + " [1., 1., 0., 0., 0.],\n", + " [1., 1., 0., 0., 0.]])" + ] + }, + "execution_count": 18, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "net = Net(actions=5)\n", + "net(torch.zeros(6, 10))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we need to use `ProbabilityActionSelector`. Also note the argument `apply_softmax=True`, which tells the agent that the output is not normalized." + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": {}, + "outputs": [], + "source": [ + "selector = ptan.actions.ProbabilityActionSelector()\n", + "agent = ptan.agent.PolicyAgent(model=net, action_selector=selector, apply_softmax=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we can pass the agent observations (fake, as before) and get some actions. The agent, as before, returns a tuple with actions and internal state, which we ignore." + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([4, 0, 4, 0, 0, 0])" + ] + }, + "execution_count": 20, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "agent(torch.zeros(6, 5))[0]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Please note that softmax assigns non-zero probabilities to zero logits, so actions 2-4 could still be sampled (just less likely than actions 0 and 1)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Experience source\n", + "\n", + "The `Agent` abstraction described above allows us to implement environment communication in a generic way. This communication happens in the form of trajectories, produced by applying the agent's actions to a gym environment.\n", + "\n", + "At a high level, experience source classes take the agent instance and environment and provide step-by-step data from the trajectories. The functionality of those classes includes:\n", + "1. support of multiple environments stepped at the same time. This allows efficient GPU utilization, as a batch of observations is processed by the agent at once.\n", + "2. trajectories can be preprocessed and presented in a convenient form for further training. For example, there is an implementation of sub-trajectory rollouts, which is convenient for DQN and n-step DQN, when we're not interested in intermediate steps of n-step subtrajectories, only in the first and last observations plus the total reward for the subtrajectory.\n", + "3. 
support of vectorized environments from OpenAI Universe\n", + "\n", + "So, the experience source classes act as a \"magic black box\" hiding the environment interaction and trajectory handling complexities from the library user. But the overall ptan philosophy is to be flexible and extensible, so, if you want, you can subclass one of the existing classes or implement your own version if necessary. \n", + "\n", + "The following classes are provided:\n", + "* [`ExperienceSource`](https://github.com/Shmuma/ptan/blob/master/ptan/experience.py#L18): using the agent and a set of environments, produces n-step subtrajectories with all intermediate steps.\n", + "* [`ExperienceSourceFirstLast`](https://github.com/Shmuma/ptan/blob/master/ptan/experience.py#L161): the same as `ExperienceSource`, but instead of the full subtrajectory (with all steps) it keeps only the first and last steps, with proper reward accumulation in between. This can save lots of memory in case of N-step DQN or A2C rollouts.\n", + "* [`ExperienceSourceRollouts`](https://github.com/Shmuma/ptan/blob/master/ptan/experience.py#L200): follows the A3C rollout scheme described in Mnih's paper about Atari games.\n", + "\n", + "All the classes are written to be efficient both in terms of CPU and memory, which is not very important for toy problems, but might become an issue when you want to solve Atari games, keeping 10M samples in a replay buffer on commodity hardware.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Toy gym environment\n", + "\n", + "For demonstration purposes, we'll implement a very simple gym environment with a small predictable observation space to show how the experience source classes work." + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "metadata": {}, + "outputs": [], + "source": [ + "class ToyEnv(gym.Env):\n", + " \"\"\"\n", + " Environment with observations 0..4 and actions 0..2\n", + " Observations are rotated sequentially mod 5, reward is equal to the given action.\n", + " Episodes have a fixed length of 10\n", + " \"\"\"\n", + " def __init__(self):\n", + " super(ToyEnv, self).__init__()\n", + " self.observation_space = gym.spaces.Discrete(n=5)\n", + " self.action_space = gym.spaces.Discrete(n=3)\n", + " self.step_index = 0\n", + " \n", + " def reset(self):\n", + " self.step_index = 0\n", + " return self.step_index\n", + " \n", + " def step(self, action):\n", + " is_done = self.step_index == 10\n", + " if is_done:\n", + " return self.step_index % self.observation_space.n, 0.0, is_done, {}\n", + " self.step_index += 1\n", + " return self.step_index % self.observation_space.n, float(action), self.step_index == 10, {}" + ] + }, + { + "cell_type": "code", + "execution_count": 22, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "0" + ] + }, + "execution_count": 22, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "env = ToyEnv()\n", + "env.reset()" + ] + }, + { + "cell_type": "code", + "execution_count": 23, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(1, 1.0, False, {})" + ] + }, + "execution_count": 23, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "env.step(1)" + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(2, 2.0, False, {})" + ] + }, + "execution_count": 24, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "env.step(2)" + ] + }, + { + 
"cell_type": "code", + "execution_count": 25, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(3, 0.0, False, {})\n", + "(4, 0.0, False, {})\n", + "(0, 0.0, False, {})\n", + "(1, 0.0, False, {})\n", + "(2, 0.0, False, {})\n", + "(3, 0.0, False, {})\n", + "(4, 0.0, False, {})\n", + "(0, 0.0, True, {})\n", + "(0, 0.0, True, {})\n", + "(0, 0.0, True, {})\n" + ] + } + ], + "source": [ + "for _ in range(10):\n", + " r = env.step(0)\n", + " print(r)" + ] + }, + { + "cell_type": "code", + "execution_count": 26, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "0" + ] + }, + "execution_count": 26, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "env.reset()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We'll also need the agent which always generates fixed action" + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "metadata": {}, + "outputs": [], + "source": [ + "class DullAgent(ptan.agent.BaseAgent):\n", + " def __init__(self, action: int):\n", + " self.action = action\n", + " \n", + " def __call__(self, observations: List[Any], state: Optional[List] = None) -> Tuple[List[int], Optional[List]]:\n", + " return [self.action for _ in observations], state" + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "([1, 1], None)" + ] + }, + "execution_count": 28, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "agent = DullAgent(action=1)\n", + "agent([1, 2])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## ExperienceSource class\n", + "\n", + "Generates chunks of trajectories of the given length.\n", + "\n", + "Constructor arguments:\n", + "* gym environment to be use (could be the list of environments or one single environment)\n", + "* the agent\n", + "* `steps_count=2`: the length of sub-trajectories to be generated\n", + "* `steps_delta=1`: step in subtrajectories\n", + "* `vectorized=False`: if true, environment is OpenAI Universe vectorized environment (more about them in MiniWoB tutorial)" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "metadata": {}, + "outputs": [], + "source": [ + "env = ToyEnv()\n", + "agent = DullAgent(action=1)\n", + "exp_source = ptan.experience.ExperienceSource(env=env, agent=agent, steps_count=2)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "All experience source classes are providing standard python's iterator interface, so, you can just iterate over them to get sub-trajectories." + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n" + ] + } + ], + "source": [ + "for exp in exp_source:\n", + " print(exp)\n", + " break" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The result is a tuple of length `steps_count` (in our case we requested sub-trajectories of length 2). 
Every entry is a namedtuple object with the following fields:\n", + "* state: state we observed before taking the action\n", + "* action: action we've done\n", + "* reward: immediate reward we've got from env\n", + "* done: was the episode done or not" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=False))\n", + "(Experience(state=4, action=1, reward=1.0, done=False), Experience(state=0, action=1, reward=1.0, done=False))\n", + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=True))\n", + "(Experience(state=4, action=1, reward=1.0, done=True),)\n" + ] + } + ], + "source": [ + "for exp in exp_source:\n", + " print(exp)\n", + " if exp[0].done:\n", + " break" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Please note, that partial trajectories are alse returned, so, we can handle end of episodes properly.\n", + "\n", + "At the end of episode, environment is being reset automatically, so, we don't need to bother about them, just keep iterating:" + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=False))\n", + "(Experience(state=4, action=1, reward=1.0, done=False), Experience(state=0, action=1, reward=1.0, done=False))\n", + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=True))\n", + "(Experience(state=4, action=1, reward=1.0, done=True),)\n", + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), 
Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=False))\n", + "(Experience(state=4, action=1, reward=1.0, done=False), Experience(state=0, action=1, reward=1.0, done=False))\n", + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n" + ] + } + ], + "source": [ + "for idx, exp in enumerate(exp_source):\n", + " print(exp)\n", + " if idx > 15:\n", + " break" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "That's very convenient, especially in cases when we have several environments running in parallel (several instances of the same Atari game, for example).\n", + "\n", + "Let's increase length of our experience chunks." + ] + }, + { + "cell_type": "code", + "execution_count": 33, + "metadata": {}, + "outputs": [], + "source": [ + "exp_source = ptan.experience.ExperienceSource(env=env, agent=agent, steps_count=4)" + ] + }, + { + "cell_type": "code", + "execution_count": 34, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=False), Experience(state=0, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=False), Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=4, action=1, reward=1.0, done=False), Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=True))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=True))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=True))\n", + "(Experience(state=4, action=1, reward=1.0, done=True),)\n" + ] + } + ], + "source": [ + "for exp in exp_source:\n", + " print(exp)\n", + " if exp[0].done:\n", + " break" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now, we're getting subtrajectories of length 4, including the final pieces of trajectory.\n", + "\n", + "Let's give 
several environments to the experience source." + ] + }, + { + "cell_type": "code", + "execution_count": 35, + "metadata": {}, + "outputs": [], + "source": [ + "exp_source = ptan.experience.ExperienceSource(env=[ToyEnv(), ToyEnv()], agent=agent, steps_count=2)" + ] + }, + { + "cell_type": "code", + "execution_count": 36, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=False))\n", + "(Experience(state=4, action=1, reward=1.0, done=False), Experience(state=0, action=1, reward=1.0, done=False))\n", + "(Experience(state=4, action=1, reward=1.0, done=False), Experience(state=0, action=1, reward=1.0, done=False))\n", + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=True))\n" + ] + } + ], + "source": [ + "for idx, exp in enumerate(exp_source):\n", + " print(exp)\n", + " if idx > 15:\n", + " break" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now, our environments are being iterated on a round-robin fashion, giving us access to trajectories from both environment step-by-step. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## ExperienceSourceFirstLast\n", + "\n", + "Class `ExperienceSource` provides us full subtrajectories of given length as the list of $(s, a, r)$ objects. The next state $s'$ is returned in the next tuple, which is not always convenient. \n", + "\n", + "For example, in DQN training, we want to have tuples $(s, a, r, s')$ at once to do 1-step Bellman approximation during the training. In addition, some extension of DQN, like n-step DQN might want to collapse longer sequences of observations into (first-state, action, total-reward-for-n-steps, state-after-step-n).\n", + "\n", + "To support this in a generic way, simple subclass of `ExperienceSource` is implemented: `ExperienceSourceFirstLast`. 
It accepts almost the same arguments in constructor, but returns different data." + ] + }, + { + "cell_type": "code", + "execution_count": 37, + "metadata": {}, + "outputs": [], + "source": [ + "exp_source = ptan.experience.ExperienceSourceFirstLast(env, agent, gamma=1.0, steps_count=1)" + ] + }, + { + "cell_type": "code", + "execution_count": 38, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "ExperienceFirstLast(state=3, action=1, reward=1.0, last_state=4)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=0)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "ExperienceFirstLast(state=3, action=1, reward=1.0, last_state=4)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=None)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n" + ] + } + ], + "source": [ + "for idx, exp in enumerate(exp_source):\n", + " print(exp)\n", + " if idx > 10:\n", + " break" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now it returns single object on every iteration, which is again a namedtuple with the following fields:\n", + "* `state`: state which we used to decide on action to make\n", + "* `action`: action we've done at this step\n", + "* `reward`: partial accumulated reward for `steps_count` (in our case, `steps_count=1`, so it is equal to immediate reward)\n", + "* `last_state`: the state we've got after executing the action. If our episode ends, we have None here\n", + "\n", + "This data is much more convenient for DQN training, as we can apply Bellman approximation directly on this data.\n", + "\n", + "Let's check the result with larger amount of steps." 
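, + "\n", + "With `steps_count=2` and discount `gamma`, the `reward` field of every sample should now hold the two-step return $r_t + \\gamma r_{t+1}$ (here `gamma=1.0`, so most samples below carry $1.0 + 1.0 = 2.0$), while `last_state` is the observation after the second step."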
+ ] + }, + { + "cell_type": "code", + "execution_count": 39, + "metadata": {}, + "outputs": [], + "source": [ + "exp_source = ptan.experience.ExperienceSourceFirstLast(env, agent, gamma=1.0, steps_count=2)" + ] + }, + { + "cell_type": "code", + "execution_count": 40, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "ExperienceFirstLast(state=0, action=1, reward=2.0, last_state=2)\n", + "ExperienceFirstLast(state=1, action=1, reward=2.0, last_state=3)\n", + "ExperienceFirstLast(state=2, action=1, reward=2.0, last_state=4)\n", + "ExperienceFirstLast(state=3, action=1, reward=2.0, last_state=0)\n", + "ExperienceFirstLast(state=4, action=1, reward=2.0, last_state=1)\n", + "ExperienceFirstLast(state=0, action=1, reward=2.0, last_state=2)\n", + "ExperienceFirstLast(state=1, action=1, reward=2.0, last_state=3)\n", + "ExperienceFirstLast(state=2, action=1, reward=2.0, last_state=4)\n", + "ExperienceFirstLast(state=3, action=1, reward=2.0, last_state=None)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=None)\n", + "ExperienceFirstLast(state=0, action=1, reward=2.0, last_state=2)\n", + "ExperienceFirstLast(state=1, action=1, reward=2.0, last_state=3)\n" + ] + } + ], + "source": [ + "for idx, exp in enumerate(exp_source):\n", + " print(exp)\n", + " if idx > 10:\n", + " break" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "So, now we're collapsing two steps on every iteration, calculating immediate reward (that's why our reward=2.0 for most of the samples).\n", + "\n", + "More interesting samples are at the end of the episode:\n", + "```\n", + "ExperienceFirstLast(state=3, action=1, reward=2.0, last_state=None)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=None)\n", + "```\n", + "\n", + "As episode ends, we have `last_state=None` in those samples, but additionally, we calculating the tail of the episode. Those tiny details are very easy to implement wrong, if you're doing all the trajectory handling yourself." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Experience source buffers\n", + "\n", + "In DQN we rarely dealing with immediate experience samples, as they are heavily correlated, which lead to instability in training. \n", + "\n", + "Normally, we have large replay buffers, which are being populated with experience pieces. Then the buffer is being sampled (randomly or with priority weights) to get the training batch. Replay buffer normally has the maximum capacity, so old samples are being pushed out when replay buffer reaches the limit.\n", + "\n", + "There are several implementation tricks here, which becomes extremely important when you need to deal with large problems:\n", + "* how to efficiently sample from large buffer\n", + "* how to push old samples from the buffer\n", + "* in case of prioritized buffer, how priorities need to be maintained and handled in the most efficient way.\n", + "\n", + "All this becomes quite non-trivial task, if you want to solve atari, keeping 10-100M samples where every sample is an image from the game. Small mistake can lead to 10-100x memory increase and major slowdowns of the training process.\n", + "\n", + "Ptan provides several variants of replay buffers, which provide simple integration with `ExperienceSource` and `Agent` machinery. 
Normally, what you need to do is to ask buffer to pull new sample from the source and sample the training batch.\n", + "\n", + "Provided classes:\n", + "* [`ExperienceReplayBuffer`](https://github.com/Shmuma/ptan/blob/master/ptan/experience.py#L327): simple replay buffer of predefined size with uniform sampling\n", + "* [`PrioReplayBufferNaive`](https://github.com/Shmuma/ptan/blob/master/ptan/experience.py#L371): simple, but not very efficient prioritized replay buffer implementation. Complexity of sampling is O(n), which might become an issue with large buffers\n", + "* [`PrioritizedReplayBuffer`](https://github.com/Shmuma/ptan/blob/master/ptan/experience.py#L414): uses segment trees for sampling, which makes code cryptic, but with O(log(n)) sampling complexity.\n", + "\n", + "Below is the example of simple relay buffer, if you want, you can find examples of `PrioritizedReplayBuffer` usage in examples for chapter 7 of my book: https://github.com/PacktPublishing/Deep-Reinforcement-Learning-Hands-On/blob/master/Chapter07/05_dqn_prio_replay.py" + ] + }, + { + "cell_type": "code", + "execution_count": 41, + "metadata": {}, + "outputs": [], + "source": [ + "env = ToyEnv()\n", + "agent = DullAgent(action=1)\n", + "exp_source = ptan.experience.ExperienceSourceFirstLast(env, agent, gamma=1.0, steps_count=1)" + ] + }, + { + "cell_type": "code", + "execution_count": 42, + "metadata": {}, + "outputs": [], + "source": [ + "buffer = ptan.experience.ExperienceReplayBuffer(exp_source, buffer_size=100)" + ] + }, + { + "cell_type": "code", + "execution_count": 43, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "0" + ] + }, + "execution_count": 43, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "len(buffer)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "All replay buffers provides the following interface:\n", + "* python iterator interface to walk over all the samples in the buffer\n", + "* method `populate(N)`, to get N samples from the experience source and put into the buffer\n", + "* method `sample(N)`, to get the batch of N experience objects\n", + "\n", + "So, the normal training loop for DQN looks like infinite repetition of the following steps:\n", + "1. call `buffer.populate(1)` to get fresh sample from the environment\n", + "2. `batch = buffer.sample(BATCH_SIZE)` to get the batch from buffer\n", + "3. calculate the loss on the sampled batch\n", + "4. backpropagate\n", + "5. repeat until convergence (hopefully)\n", + "\n", + "All the rest is happening automatically -- reset of the environment, sub-trajectories handling, buffer size maintenance, etc." 
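, + "\n", + "As a rough sketch of those steps (assuming you already have some network `net` to train; `calc_loss` here is a hypothetical placeholder for your method-specific loss, it is not part of ptan):\n", + "\n", + "```python\n", + "optimizer = optim.Adam(net.parameters(), lr=1e-3)\n", + "\n", + "while True:\n", + "    buffer.populate(1)              # 1. pull one fresh sample from the environment\n", + "    if len(buffer) < 1000:          # warm-up: wait until the buffer has enough samples\n", + "        continue\n", + "    batch = buffer.sample(32)       # 2. sample a training batch\n", + "    optimizer.zero_grad()\n", + "    loss_v = calc_loss(batch, net)  # 3. hypothetical method-specific loss on the batch\n", + "    loss_v.backward()               # 4. backpropagate\n", + "    optimizer.step()                # 5. repeat until convergence\n", + "```"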
+ ] + }, + { + "cell_type": "code", + "execution_count": 47, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=None)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=0)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=0)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=3, action=1, reward=1.0, last_state=4)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=0)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=None)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=0)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=None)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n" + ] + } + ], + "source": [ + "for step in range(10):\n", + " buffer.populate(1)\n", + " # if buffer is small enough, do nothing\n", + " if 
len(buffer) < 5:\n", + " continue\n", + " batch = buffer.sample(4)\n", + " print(\"Train time, %d batch samples:\" % len(batch))\n", + " for s in batch:\n", + " print(s)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Monitoring the training\n", + "\n", + "Normally, if we're running the training process, we want to keep an eye on several metrics to check how good our method is doing. Minimal set of things to watch includes:\n", + "* training loss (several loss components in case of A2C, for example)\n", + "* values predicted by the network (in case of DQN)\n", + "* statistics about episode rewards (to check that our agent improves over time)\n", + "* statistics about the length of the episode, as this is normally a proxy for reward\n", + "\n", + "First two items are being calculated in the training loop, but the rest two values are not that easy to get. If we're implementing everything from scratch, we need to track the current episode and when it ends, track the total reward and length.\n", + "\n", + "Ptan simplifies this by providing the method in experience source, which returns this information in one call. Method `pop_rewards_steps()` returns the list, where each entry is the information about the episode which since the lass call to the method. If no episodes have completed between the calls, empty list is returned. \n", + "\n", + "Every item is a tuple with (total_reword, total_steps). \n", + "\n", + "So, the only thing you need to do to monitor the training progress, is to periodically call method `pop_rewards_steps()` in the training loop and handle the results (printing on console or sending to TensorBoard, or whatever)." + ] + }, + { + "cell_type": "code", + "execution_count": 48, + "metadata": {}, + "outputs": [], + "source": [ + "r = exp_source.pop_rewards_steps()" + ] + }, + { + "cell_type": "code", + "execution_count": 49, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[(10.0, 10)]" + ] + }, + "execution_count": 49, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "r" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We've one one episode completed so far, it got reward 10.0 and total amount of steps was 10" + ] + }, + { + "cell_type": "code", + "execution_count": 50, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[]" + ] + }, + "execution_count": 50, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "exp_source.pop_rewards_steps()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Other tools\n", + "\n", + "There are several smaller things, which could be used, like [`TargetNet`](https://github.com/Shmuma/ptan/blob/master/ptan/agent.py#L79), which allows you to keep a copy of model weights and syncronize them from time to time (which is essential for DQN to converge), or a [set of utils](https://github.com/Shmuma/ptan/blob/master/ptan/common/utils.py) to smooth time series for better training progress visualisation.\n", + "\n", + "There is [PyTorch Ignite bindings](https://github.com/Shmuma/ptan/blob/master/ptan/ignite.py) which implement integration of ptan with ignite framework:\n", + "\n", + "* install end of episode hooks `EpisodeEvents.EPISODE_COMPLETED`\n", + "* handle situations when reward reaches boundary `EpisodeEvents.BOUND_REWARD_REACHED`\n", + "* measure performance of the training process: `EpisodeFPSHandler`\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": 
[ + "# Simple CartPole solver\n", + "\n", + "Below is very simple DQN version which solves CartPole, just to demonstrate how all things fits together in real life." + ] + }, + { + "cell_type": "code", + "execution_count": 51, + "metadata": {}, + "outputs": [], + "source": [ + "class Net(nn.Module):\n", + " def __init__(self, obs_size, hidden_size, n_actions):\n", + " super(Net, self).__init__()\n", + " self.net = nn.Sequential(\n", + " nn.Linear(obs_size, hidden_size),\n", + " nn.ReLU(),\n", + " nn.Linear(hidden_size, n_actions)\n", + " )\n", + "\n", + " def forward(self, x):\n", + " # CartPole is stupid -- they return double observations, rather than standard floats, so, the cast here\n", + " return self.net(x.float())" + ] + }, + { + "cell_type": "code", + "execution_count": 52, + "metadata": {}, + "outputs": [], + "source": [ + "BATCH_SIZE = 64\n", + "REPLAY_SIZE = 1000\n", + "LR = 1e-3\n", + "GAMMA=0.9\n", + "EPS_DECAY=0.995" + ] + }, + { + "cell_type": "code", + "execution_count": 53, + "metadata": {}, + "outputs": [], + "source": [ + "env = gym.make(\"CartPole-v0\")" + ] + }, + { + "cell_type": "code", + "execution_count": 54, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "Net(\n", + " (net): Sequential(\n", + " (0): Linear(in_features=4, out_features=64, bias=True)\n", + " (1): ReLU()\n", + " (2): Linear(in_features=64, out_features=2, bias=True)\n", + " )\n", + ")" + ] + }, + "execution_count": 54, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "net = Net(obs_size=env.observation_space.shape[0], hidden_size=64, n_actions=env.action_space.n)\n", + "optimizer = optim.Adam(net.parameters(), LR)\n", + "net" + ] + }, + { + "cell_type": "code", + "execution_count": 55, + "metadata": {}, + "outputs": [], + "source": [ + "action_selector = ptan.actions.EpsilonGreedyActionSelector(epsilon=1.0)\n", + "agent = ptan.agent.DQNAgent(net, action_selector)\n", + "exp_source = ptan.experience.ExperienceSourceFirstLast(env, agent, gamma=GAMMA)\n", + "buffer = ptan.experience.ExperienceReplayBuffer(exp_source, buffer_size=REPLAY_SIZE)" + ] + }, + { + "cell_type": "code", + "execution_count": 56, + "metadata": {}, + "outputs": [], + "source": [ + "@torch.no_grad()\n", + "def unpack_batch(batch: List[ptan.experience.ExperienceFirstLast], net: nn.Module, gamma: float):\n", + " states = []\n", + " actions = []\n", + " rewards = []\n", + " done_masks = []\n", + " last_states = []\n", + " for exp in batch:\n", + " states.append(exp.state)\n", + " actions.append(exp.action)\n", + " rewards.append(exp.reward)\n", + " done_masks.append(exp.last_state is None)\n", + " if exp.last_state is None:\n", + " last_states.append(exp.state)\n", + " else:\n", + " last_states.append(exp.last_state)\n", + "\n", + " states_v = torch.tensor(states)\n", + " actions_v = torch.tensor(actions)\n", + " rewards_v = torch.tensor(rewards)\n", + " last_states_v = torch.tensor(last_states)\n", + " last_state_q_v = net(last_states_v)\n", + " best_last_q_v = torch.max(last_state_q_v, dim=1)[0]\n", + " best_last_q_v[done_masks] = 0.0\n", + " return states_v, actions_v, best_last_q_v + rewards_v" + ] + }, + { + "cell_type": "code", + "execution_count": 57, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "28: episode done, reward=27.000, steps=27, epsilon=1.00\n", + "43: episode done, reward=15.000, steps=15, epsilon=1.00\n", + "66: episode done, reward=23.000, steps=23, epsilon=1.00\n", + "85: episode done, reward=19.000, steps=19, 
epsilon=1.00\n", + "98: episode done, reward=13.000, steps=13, epsilon=1.00\n", + "111: episode done, reward=13.000, steps=13, epsilon=1.00\n", + "166: episode done, reward=55.000, steps=55, epsilon=1.00\n", + "191: episode done, reward=25.000, steps=25, epsilon=1.00\n", + "212: episode done, reward=21.000, steps=21, epsilon=0.94\n", + "231: episode done, reward=19.000, steps=19, epsilon=0.86\n", + "248: episode done, reward=17.000, steps=17, epsilon=0.79\n", + "266: episode done, reward=18.000, steps=18, epsilon=0.72\n", + "282: episode done, reward=16.000, steps=16, epsilon=0.66\n", + "298: episode done, reward=16.000, steps=16, epsilon=0.61\n", + "325: episode done, reward=27.000, steps=27, epsilon=0.53\n", + "337: episode done, reward=12.000, steps=12, epsilon=0.50\n", + "349: episode done, reward=12.000, steps=12, epsilon=0.47\n", + "362: episode done, reward=13.000, steps=13, epsilon=0.44\n", + "371: episode done, reward=9.000, steps=9, epsilon=0.42\n", + "387: episode done, reward=16.000, steps=16, epsilon=0.39\n", + "397: episode done, reward=10.000, steps=10, epsilon=0.37\n", + "408: episode done, reward=11.000, steps=11, epsilon=0.35\n", + "416: episode done, reward=8.000, steps=8, epsilon=0.34\n", + "436: episode done, reward=20.000, steps=20, epsilon=0.31\n", + "445: episode done, reward=9.000, steps=9, epsilon=0.29\n", + "458: episode done, reward=13.000, steps=13, epsilon=0.27\n", + "467: episode done, reward=9.000, steps=9, epsilon=0.26\n", + "476: episode done, reward=9.000, steps=9, epsilon=0.25\n", + "489: episode done, reward=13.000, steps=13, epsilon=0.23\n", + "501: episode done, reward=12.000, steps=12, epsilon=0.22\n", + "519: episode done, reward=18.000, steps=18, epsilon=0.20\n", + "533: episode done, reward=14.000, steps=14, epsilon=0.19\n", + "543: episode done, reward=10.000, steps=10, epsilon=0.18\n", + "553: episode done, reward=10.000, steps=10, epsilon=0.17\n", + "562: episode done, reward=9.000, steps=9, epsilon=0.16\n", + "571: episode done, reward=9.000, steps=9, epsilon=0.16\n", + "581: episode done, reward=10.000, steps=10, epsilon=0.15\n", + "593: episode done, reward=12.000, steps=12, epsilon=0.14\n", + "602: episode done, reward=9.000, steps=9, epsilon=0.13\n", + "613: episode done, reward=11.000, steps=11, epsilon=0.13\n", + "627: episode done, reward=14.000, steps=14, epsilon=0.12\n", + "640: episode done, reward=13.000, steps=13, epsilon=0.11\n", + "653: episode done, reward=13.000, steps=13, epsilon=0.10\n", + "664: episode done, reward=11.000, steps=11, epsilon=0.10\n", + "679: episode done, reward=15.000, steps=15, epsilon=0.09\n", + "700: episode done, reward=21.000, steps=21, epsilon=0.08\n", + "725: episode done, reward=25.000, steps=25, epsilon=0.07\n", + "736: episode done, reward=11.000, steps=11, epsilon=0.07\n", + "746: episode done, reward=10.000, steps=10, epsilon=0.06\n", + "756: episode done, reward=10.000, steps=10, epsilon=0.06\n", + "766: episode done, reward=10.000, steps=10, epsilon=0.06\n", + "779: episode done, reward=13.000, steps=13, epsilon=0.05\n", + "794: episode done, reward=15.000, steps=15, epsilon=0.05\n", + "807: episode done, reward=13.000, steps=13, epsilon=0.05\n", + "818: episode done, reward=11.000, steps=11, epsilon=0.05\n", + "828: episode done, reward=10.000, steps=10, epsilon=0.04\n", + "837: episode done, reward=9.000, steps=9, epsilon=0.04\n", + "846: episode done, reward=9.000, steps=9, epsilon=0.04\n", + "856: episode done, reward=10.000, steps=10, epsilon=0.04\n", + "865: episode done, reward=9.000, 
steps=9, epsilon=0.04\n", + "875: episode done, reward=10.000, steps=10, epsilon=0.03\n", + "886: episode done, reward=11.000, steps=11, epsilon=0.03\n", + "897: episode done, reward=11.000, steps=11, epsilon=0.03\n", + "909: episode done, reward=12.000, steps=12, epsilon=0.03\n", + "934: episode done, reward=25.000, steps=25, epsilon=0.03\n", + "947: episode done, reward=13.000, steps=13, epsilon=0.02\n", + "961: episode done, reward=14.000, steps=14, epsilon=0.02\n", + "974: episode done, reward=13.000, steps=13, epsilon=0.02\n", + "986: episode done, reward=12.000, steps=12, epsilon=0.02\n", + "1018: episode done, reward=32.000, steps=32, epsilon=0.02\n", + "1060: episode done, reward=42.000, steps=42, epsilon=0.01\n", + "1079: episode done, reward=19.000, steps=19, epsilon=0.01\n", + "1131: episode done, reward=52.000, steps=52, epsilon=0.01\n", + "1175: episode done, reward=44.000, steps=44, epsilon=0.01\n", + "1275: episode done, reward=100.000, steps=100, epsilon=0.00\n", + "1309: episode done, reward=34.000, steps=34, epsilon=0.00\n", + "1339: episode done, reward=30.000, steps=30, epsilon=0.00\n", + "1405: episode done, reward=66.000, steps=66, epsilon=0.00\n", + "1500: episode done, reward=95.000, steps=95, epsilon=0.00\n", + "1535: episode done, reward=35.000, steps=35, epsilon=0.00\n", + "1556: episode done, reward=21.000, steps=21, epsilon=0.00\n", + "1580: episode done, reward=24.000, steps=24, epsilon=0.00\n", + "1625: episode done, reward=45.000, steps=45, epsilon=0.00\n", + "1654: episode done, reward=29.000, steps=29, epsilon=0.00\n", + "1684: episode done, reward=30.000, steps=30, epsilon=0.00\n", + "1714: episode done, reward=30.000, steps=30, epsilon=0.00\n", + "1748: episode done, reward=34.000, steps=34, epsilon=0.00\n", + "1774: episode done, reward=26.000, steps=26, epsilon=0.00\n", + "1804: episode done, reward=30.000, steps=30, epsilon=0.00\n", + "1831: episode done, reward=27.000, steps=27, epsilon=0.00\n", + "1848: episode done, reward=17.000, steps=17, epsilon=0.00\n", + "1864: episode done, reward=16.000, steps=16, epsilon=0.00\n", + "1894: episode done, reward=30.000, steps=30, epsilon=0.00\n", + "1922: episode done, reward=28.000, steps=28, epsilon=0.00\n", + "1954: episode done, reward=32.000, steps=32, epsilon=0.00\n", + "1982: episode done, reward=28.000, steps=28, epsilon=0.00\n", + "2038: episode done, reward=56.000, steps=56, epsilon=0.00\n", + "2072: episode done, reward=34.000, steps=34, epsilon=0.00\n", + "2172: episode done, reward=100.000, steps=100, epsilon=0.00\n", + "2264: episode done, reward=92.000, steps=92, epsilon=0.00\n", + "2294: episode done, reward=30.000, steps=30, epsilon=0.00\n", + "2328: episode done, reward=34.000, steps=34, epsilon=0.00\n", + "2382: episode done, reward=54.000, steps=54, epsilon=0.00\n", + "2420: episode done, reward=38.000, steps=38, epsilon=0.00\n", + "2469: episode done, reward=49.000, steps=49, epsilon=0.00\n", + "2523: episode done, reward=54.000, steps=54, epsilon=0.00\n", + "2547: episode done, reward=24.000, steps=24, epsilon=0.00\n", + "2573: episode done, reward=26.000, steps=26, epsilon=0.00\n", + "2606: episode done, reward=33.000, steps=33, epsilon=0.00\n", + "2620: episode done, reward=14.000, steps=14, epsilon=0.00\n", + "2646: episode done, reward=26.000, steps=26, epsilon=0.00\n", + "2666: episode done, reward=20.000, steps=20, epsilon=0.00\n", + "2698: episode done, reward=32.000, steps=32, epsilon=0.00\n", + "2738: episode done, reward=40.000, steps=40, epsilon=0.00\n", + "2779: episode done, 
reward=41.000, steps=41, epsilon=0.00\n", + "2822: episode done, reward=43.000, steps=43, epsilon=0.00\n", + "2880: episode done, reward=58.000, steps=58, epsilon=0.00\n", + "2936: episode done, reward=56.000, steps=56, epsilon=0.00\n" + ] + } + ], + "source": [ + "step = 0\n", + "losses = []\n", + "rewards = []\n", + "\n", + "while True:\n", + " step += 1\n", + " buffer.populate(1)\n", + " solved = False\n", + " for reward, steps in exp_source.pop_rewards_steps():\n", + " print(\"%d: episode done, reward=%.3f, steps=%d, epsilon=%.2f\" % (\n", + " step, reward, steps, action_selector.epsilon))\n", + " rewards.append(reward)\n", + " solved = reward > 150\n", + " if solved:\n", + " print(\"Congrats!\")\n", + " break\n", + " if len(buffer) < 200:\n", + " continue\n", + " batch = buffer.sample(BATCH_SIZE)\n", + " states_v, actions_v, tgt_q_v = unpack_batch(batch, net, GAMMA)\n", + " optimizer.zero_grad()\n", + " q_v = net(states_v)\n", + " q_v = q_v.gather(1, actions_v.unsqueeze(-1)).squeeze(-1)\n", + " loss_v = F.mse_loss(q_v, tgt_q_v)\n", + " loss_v.backward()\n", + " optimizer.step() \n", + " losses.append(loss_v.item())\n", + " action_selector.epsilon *= EPS_DECAY\n", + " if step > 3000:\n", + " break" + ] + }, + { + "cell_type": "code", + "execution_count": 58, + "metadata": {}, + "outputs": [ + { + "data": { + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXcAAAD4CAYAAAAXUaZHAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nO3deZgU1bk/8O87MyzKoiCLbDogKKBEAwSNElRQQfTnksRETaLmxuu9UaP+bq4JJi7cRK8YRRHXuGvcjSgYFNlkX4edAYYZYBhmYHZmZ5buPvePrp7ppaq7q7truqvm+3kenqmurq4+Rc+8dfqtU+8RpRSIiMhZ0pLdACIiSjwGdyIiB2JwJyJyIAZ3IiIHYnAnInKgjGQ3AAD69OmjMjMzk90MIiJb2bJlS7lSqq/ecykR3DMzM5GVlZXsZhAR2YqIHDZ6jmkZIiIHYnAnInIgBnciIgdicCciciAGdyIiB2JwJyJyIAZ3IiIHYnAnIkqQ/PJ6rMktT3YzAKTITUxERE5w2TMrAAD5s65JbkPAnjsRkSMxuBMRORCDOxGRAzG4ExE5EIM7EZEDMbgTETkQgzsRkQMxuBMRRWFrwXGU1jYmuxlRY3AnIorCj19eh+nPr0l2M6LG4E5EFKXyuqZkNyFqDO5ERCa43B6sy0uN+jHhMLgTEZnw/LJc3PrGRmw8WJHspoTF4E5EZMKBsjoAQHldc5JbEl7E4C4iQ0TkOxHZIyLZInK/tr63iCwRkVztZy9tvYjIXBHJE5GdIjLW6oMgIgp2otmNucty0eL2JLspSRFNz90F4PdKqdEALgJwj4iMBjADwDKl1AgAy7THAHA1gBHav7sAvJLwVhMRRfDC8lw8u2Q/Ps06EvM+Pt5UgA0pnn4xEjG4K6WOKaW2asu1APYCGATgegDvapu9C+AGbfl6AO8prw0AThWRAQlvORFRGA3NbgBAU0vsPfcZ83bh5tc26D6noGLeb3swlXMXkUwA3wewEUB/pdQx7aliAP215UEA/E+Vhdq64H3dJSJZIpJVVlZmstlERBRO1MFdRLoD+BzAA0qpGv/nlFIKMHcaU0q9ppQar5Qa37dvXzMvJSJKOoEkuwlhRRXcRaQTvIH9A6XUPG11iS/dov0s1dYXARji9/LB2joioqRSSiGvtDbZzWgX0YyWEQBvAtirlHrW76kFAG7Xlm8HMN9v/W3aqJmLAFT7pW+IiJLmsy2FuOLZVVidG38q2Ak590sA/ArAZBHZrv2bDmAWgCtFJBfAFdpjAPgawEEAeQBeB3B34ptNRGRedlE1AOBAaV3M+0j1dIxPRqQNlFJrAMOjmaKzvQJwT5ztIiJKSaneY/fhHapE5GgNza6QdYkIz6neg2dwJyJHe2bxfuzW0jHeS4iJkeo9eAZ3InK87KPVCdtXqvfYfRjciYgciMGdiBxPWZBBKahsgMeTuqkZBnciohj8bVEOXl6Rl+xmGGJwJyLHC+5f/89Xe9DY4o57vxsPVca9D6swuBNRhzR/u7OrojC4E5Hj2WN8S2IxuBOR4+ld9gw3pLGxxW18sdQmZ4qI5QeIiDqShmYXRj/6LX572Vn6G6TuAJkA7LkTUcdk0AOva/SWK/jnlsJ2bEziMbgTkePFMs7dirHx7YnBnagD+M9/bMGMz3cmuxn2ECmnbpOcO4M7UQewKLsYH28+EnlDh4qtXpi9u+4M7kTkeHopFqN4H7EwmE1iPoM7EXVIRuV/U72Ub7QY3InI8XwBOyHl3JlzJyKyH7vUa4+EwZ2ISAeHQhIR2ZDhBVVndNwZ3InIeXJLavHOuvy49mHzjjuDOxE5z2MLsiNu45QeuhEGdyJyPF/+3P9iKYM7EZHNxHMx9KrnViWuIUnE4E5E5KeyvhkAoAzOEGU1Te3ZnJgxuBNRh9bY4kazyxP19gWVDVFtV9/kwoOf7UB1Q0usTYsLJ+sgog7Jl38f+cgiDOvTDfdNGYGM9LZEfLyjZd7fcBifbSlEr26d8afpo+Lcm3kM7kTkeHqBes7S/XBrU+kdLK/HA59st+a9k3Q3FNMyRNRh+I+Qya9owO8/22G8bYLeK1l3ujK4ExHpMIrJ0VaNTHaNGgZ3IqIIfvPO5tZlu9ScYXAnopSw91gNfvnGRjS2uGPex9q8clz7wmq4PNGPfjHiH8SX7SttXTZ781OyzgU
M7kSUEh7+cjfW5JVjd1F1zPv44+c7sbuoBkerGhPYstgw505EFAelFN5dl4/axhbjXrVdcikJxKGQRJRSzIbhdQcq8NiCbOworGq9iGnl8EO7nCci9txF5C0RKRWR3X7rZopIkYhs1/5N93vuIRHJE5EcEZlqVcOJiADgRLM3R1/d0NZzD46/vsdm0uVmThC5JbWoa3Lp7ydJWfdoeu7vAHgRwHtB659TSj3jv0JERgO4GcC5AAYCWCoiZyulYr9CQkQUJV/wtrJ3rZf6ufK5VRh7xqlB2/m+RVjXlnAi9tyVUqsAVEa5v+sBfKyUalJKHQKQB2BCHO0jog4i1lHherGzuCb+C6qG49yDnvhkcwEAYGtBVcD6ZFcUjueC6r0islNL2/TS1g0CcMRvm0JtXQgRuUtEskQkq6ysLI5mEJGTxNrTFWnrLbenP36+K+zz/9p5DB5P+3ffYw3urwA4C8AFAI4BmG12B0qp15RS45VS4/v27RtjM4iIvJQK31veXVQNVzsG2X3FNQCA8romfLipoN3e1yem0TJKqRLfsoi8DuBf2sMiAEP8Nh2srSMisp5BdN9fUotH50eeei+RPs0qbF0ur2v/GvAx9dxFZIDfwxsB+EbSLABws4h0EZGhAEYA2BRfE4moI4g3oxLu9aWxTLBh0Mm3yUjIyD13EfkIwGUA+ohIIYDHAFwmIhfAe5z5AP4DAJRS2SLyKYA9AFwA7uFIGSJqL+2RcU/2hdJoRQzuSqlbdFa/GWb7JwA8EU+jiIii5RuP3uTy4EBZvf427dmgFMHyA0SUVBV1TaamuTOyOrc8Aa1xDpYfIKKkGvf4Ulwzpu0yntnSAe09/NG/dSUJGE9vFfbciSjpFu46FvNrraojE81e95fUWfLeicDgTkSOl8j4b5cLqgzuRJQSkj0tXTCjbwR2uTjL4E5EKSV5SRZnYXAnoqSxsu66VezSZAZ3IkopZpMzNom17Y7BnYiSRq8X7Ft1/8fb8OnmI6EbJOh9Ir7GYH0sIy+TcT2BwZ2IDCmlsGDHUbjbo5piUPybv/0o/vD5Tuvf16RYThRf7TyKn7yyLvGNCYPBnYgMfbGtCPd9tA1vrjloyf6jiZNFVScwe3FOXPn5WF6ZyNx6Xmkdthw+nrgdRoHBnYgMVdQ1A4ixqmKC3P3BVrywPA/7imuT1gY7YvkBIorIqqSMXm88eFVTi1t3vcejsLOoOqr3SWzG2x6XcNlzJyJDvouHVg3/a3a3FQwzG4BfWXkAN7y0FpsPRZ7i2R7hOLEY3InItMYWN2Z9sw+NLfrTNeQU1yInijTKf/xjS9TvqYJCtC9Nc6w6cvGuxI6nT607aY0wuBORaW+tPYRXVx7AG6v1L7ROnbMKU+esirifVC7TG3wysRsGdyIyzVd/PRF12KMVPFbc9yiaIJzYMG2PoM/gTkQRBQdQX6BtzzDX3j1po0wOyw8QUVK53B40uayZwrg958eINBlHewfbeI49c8bCxDUkAgZ3Ioe69Y2NOOfhRZa+hxWBNbiHbnQx1MxIHrv0thOJwZ3IoTZFMUQwEqNes5l8d/TvlbBdJV0qVLtkcCeiiJIZq9p7jlSfeA75zTWHjPfbTv+ZDO5EZMgpnelEhtNoYnNWvnEdmfY6UTK4E5Fplt65GuU+rUgNRaOivjniNouyiw2fcyuFoqoTuPaF1Sivs65mD4M7EZnmS5Uk9r5Pp3xPCM/tUXh7zSHsLqrBF1uLLHsfBnciMtSe6e5Ye+CWpTksrKdTqfX+rfzWweBORBEZXQRszwutD34WOHGHmW8PqTB6xeehebswb5t1PXYfBneiFLazsApltcmrpW6kNeeeyKGQBmkZ39o9x2rgcnvwgyeW4qsdR0Oet4uFO4+1y/swuBOlsOteXItpURTgam9W5MfXH6wAENoT939cfaIFZbVNeGxBtu7zicTCYURkqWhGZ1glUSHc7VEojqI0byR64TaFMi4phcGdqIOpbmgxXc3RMH5GGVhnL87BRU8uiynA651gxGA9tWFwJ+pgzv/LYvz2/egnydDTlnOPznc5ZQBg6bhuCsTgTtQBLdtXGnGbqoZmzPxqj+5zsfaa4x1aWdWgl6KKop57DKkbu6d7GNyJSFfh8ROty8a1zaOLgIkainjFs96Ly2IyL7MmLzVnfLLyBMLgTkSmxVp+wIpRNlYFSJt33Bncicg7Xd6J5ugn9rCyVIDd0yGpgsGdiHD9S2sx6lFrJ/ZILGeMlbHyPBYxuIvIWyJSKiK7/db1FpElIpKr/eylrRcRmSsieSKyU0TGWth2og7D7VF4b32+ZdPm7T1WE/b5uiaXbs/ebHBKZK2aZMzjaifR9NzfATAtaN0MAMuUUiMALNMeA8DVAEZo/+4C8EpimknUsc3bWohH52fj5e8OBKzPyq+0LOD7+2JbES55annr41SaNWl5FCN/YpFK9WhiETG4K6VWAQier+t6AO9qy+8CuMFv/XvKawOAU0VkQKIaS9RR1Ta6AASO+jhQVoefvroeMxfoD1dMtEqdO2WjjX9m4mTwbf96J5JUOrmkqlhz7v2VUr7qN8UA+mvLgwAc8duuUFsXQkTuEpEsEckqKyuLsRlEHYMvmG05fBw1jS0AgKoG789IKRUjuSW1eObbnLh6qNHWX/FtF0tQ1mteWW0TSmvjL2eQbHOW7rds33FfUFXe3wzTvx1KqdeUUuOVUuP79u0bbzOIOozGFm8axuxdosFueX0DXvwuD8e1k4QZreV2kzgUcnVu4Nj1rp0SOz7E0w5ZmcYWc2UgzIj1f6PEl27RfvqSXkUAhvhtN1hbR0SJogWdeMOkK0L0Che4433vJpcbY/+6JKpto+3td++SEUeLnCfW4L4AwO3a8u0A5vutv00bNXMRgGq/9A0Rxcg/voXE3BjTKu2Ztg5uYlltk24OnxIn4qlORD4CcBmAPiJSCOAxALMAfCoivwFwGMDPtM2/BjAdQB6ABgC/tqDNRARr5jG1mq8XLiaS79Gfu3iV1V/E4K6UusXgqSk62yoA98TbKCIK5B8MVVBaJlLw++GTy1qXffl6f7FcUI13tIoVYdiuFSeVUqZOdtHiHapENmN25MkxvxrqIx9puws1EQEl6sJhfssNzS48sXBvmH0GPnb6sEerhtMzuBPZgH+ACw4GyZgOrvVbQ5Tb+04CAuD1VYewcBcvxflY9ekxuBNZ4MaX1+KfWwotfY/W2+/jjA6xvDzWoZAA4PJYN/zPjqy6E5Zjh4gssK2gCtsKqvDTcYMTsj+90TLJzHtH+96ltY3o16OrqX1/uLHAspICqYg9dyIHyi2pReaMhdhwsCLq1wT39KyrZx7FDEc627S423rmE55Ypm3nFc1JYVF2Md5Zlx9NEx2BOXciB1p3wBvUv9Zy0EopPPnNXhRUNBi+pqQm8Lb7ZAyFDBej//qvcLVuHH51NAZWXTNhcCdKouAZjfJK6/D3lQdx1z+yDF/zk1fWB7y2SWd4oxmJ7jluOXw8ofvjaJnYMOdOlERto068f+G+igDu4NIAYSLcwfL6gMd3vL
0Jp57UKfJ7JyBo6gWm4P0WVzfiYFl96IZxvAdFxuBOlExxjDoxKsK1IqcdqqyGuTs2uF3+PXmn98JjwZ47kQNFO15cLyYmKlDGkvPVe+tD5fV4YuFeNLsChzqmBW1sttlOPyFYlXNncCdKoracu/cP3Mzdp/EGvUT0GI9Utl34fXT+7pAyvIDzg3O8OFqGyIGivRFJdzaiOEeeVESoyhi25K/21qtzy3G4oh5NLjcOlNYZbR2wxBR6IKv+P9hzJ0qi4NEysbw2bnHm+0tqmvDckv04Wq0/MxJ77uG1uDxAl8Tvlz13oiQKHi3Ttl7CPgbaxsbHKxE9x/VhbsJibA9v9pIcS/bL4E6UYGZqhYTruUfaz5yluWaaZUp9kwt3vL0pZH1OcW3Iukg98zS/DawobWt3zLkT2UTwH+vhinp4DKa0C55wI9ljuqsbWnDOw9/gf77K1p1bdeqcVQDMpVr8t2Vobz8M7kQWyimuxaVPr8ArKw/oPm804UZOSS3eXpvftl07RcXthVVocnnwaZZxRUulVEiQjubia6x2F9XEt4MUx8JhRDbh/8daeNw7VNDolvy2nnvon/hfwtZoSZ4/f7kbEebWDhB8vcBMkTSKHYM7UQJ5PAq/+2hr6+PgKfGC+dbP21qEuiaXYQ/Yyo67/3tWnwhNxQT7cGMBDleYKCfgn5YRYHN+YmvP2B1z7kQ2UFzTiK93Fbc+jlTq1n/92rzQG4Da230fbYtqO/8Lo5FOPP7Pz16833yjHI9VIYlSXnAQ97R2y/RDYLT5aDsPMvEfLbNgx9EktqRjYXAnslBrWsao5x4U9LcdiS1lcbCsLqqUip6YasuI/nKkbSkU0zJEtuT9y91zVH/Eh3/gq6hrxp+/2B3Tu0yevRI3vrQ2ptfGwky8jrdMAsWGwZ2oHRRVnQit0R7kRJyTbgTXdbdSuN54cBXIxxbEdsLqKNhzJ7Ih/z9cvREm0d6xGdz7XX8gucMJA9sjAYmd4GM6EMdEHR0Bp9kjsiH/P9vJs1fiWPWJgOdjTVjc8vqGmNsULNx8rUb843dNY2CuP7jnTsnB4E6UQMFfsYMnrqioCyyzG/Wt+RYGzJ+/Zv5E8eX2otblh4OuEzDHbg7TMkQpqtnlQV6pt6CWJ+gv9YFPtod9bWBRrcS3zSpHKtu+gRRVnUClX214Ox2HkzG4E8XpkS9344pnV6GstiliLyw4+EcbB1M9XvpfLE5jdDeFk3UQpaiNh7wXN+uaXEiPENh8MbCxxY39JbX47QdtpQqcEhIZ283hBNlEKcr/bzO4Zx7M9/zEp75DeV2T4X6C2akOun1a6mxMyxAliCBycPc9HRzYAYSttLh0T0kcLWtfTMuYw6GQRBY6WnUCLrcn8oZ+Co83hOTZmyPsI9zsSuGeW5RdbPhctPtoL4ztqYHBnTq8yvpmXDxrOR5fuDfsdnuO1qBRu4u0tLYRE5/6Dj94Ymnr8yLAtDmrw+7D6tC7q6ja4neILI0D3c3hUEiyyuGK+pDx2B2Jr+DWdzmlIc81trhx06vrsDavHNPnrsbvP9sBAHh/Q0HIttF0mo2m2wMip3SiYWYSDaswtJvDmZgoZtUNLa09zmA1jS249OkVeGjernZulbVONEdfpyU4GB0oq8PdH2xBs8uDnYXV2Jx/HH/4504AwJY4J5oI94f8v1/vi2vfQGoEVubcU0NcwV1E8kVkl4hsF5EsbV1vEVkiIrnaz16JaSrF6vy/LMY1c/XTBQ1N3iC4Jq+sPZtkqQ0HKzDq0UWmJ7/wdZxnfL4TX+8qxraC4yH5Y9/Fr9066Y9UiGlsg/1YdZ0kET33y5VSFyilxmuPZwBYppQaAWCZ9piSzKh4k1VX6pNp06FKAMDzS3Mx6W/fmerFA223zwcWw/L+VApwuT1Yvi80hRMNq693pkKv2U7DNp3MirTM9QDe1ZbfBXCDBe9BZMgXQDflV6KgsgF5pXVRva41JvkFcp/C42232wfntQsqvYW3WtyRI7cTT6bBGNrNSdWcuwKwWES2iMhd2rr+Sqlj2nIxgP56LxSRu0QkS0SyysqckxKwGzsXeWpscRtOguGvvtmFzBkLce+HW8Nu5wvmvsEeeoE43B/iHW9vitgWq2N7KnSaU+Hbg52kauGwiUqpsQCuBnCPiEzyf1J5k0m6TVdKvaaUGq+UGt+3b984m0GpKPtoNTJnLMS2gsTPdr+7qBq3vr4B0+euDihaBYQG5Zu1qof/2untc2wrOI45S9smag6ORa1pGRXaCw33h+jfu0+W2kZXspvAkr8pIq7grpQq0n6WAvgCwAQAJSIyAAC0n7ElJ8kyR6tOoKjKG4isTBOsyPF+I/s2O7q7Kw+YmAf02hfWYGtBFQCgodkb0FbuL8OeozURe0I3vrwOc5bmAvCOFvKlWQoqG1Dd0BJhflAV1/+Z1UmZSN9O2gNz7uakXFpGRLqJSA/fMoCrAOwGsADA7dpmtwOYH28jKTYHy/RzzRfPWo5LZi0PWOfrrd757mb8URv2Fy8Jk97w2XO0Bqv2e08CU2avxA0xzAPqC+a3v7UJ0+euDvvH8uLy3NblJpcb35u5GDMXZLeu+8WbbbXN9U4SHhXf1+g31xzCot3HIm8Yo/KgevHUccXTc+8PYI2I7ACwCcBCpdQiALMAXCkiuQCu0B5TO1uw4ygmz16J+X6TKkRj6d5SfJJ1JCFtaM3nhwmG0+euxm1vteWqD8U4D2i0swk9s7gtHdOk3bi1cn/bNZ/dRTWtN3TpnZRcbg9yS6K7QKtn+b5S/Of7ye9dW+lodfLTU3Zi1VDImKtCKqUOAjhfZ30FgCnxNIril33UOw77/o/DTxZh1ocbC3BG75MxcUSfiNtK5NieMPd9vK11uaohut6rUfIg67D3GoHe3Z4KwOwlOSZb17GkQHkbW0m5tAzZz9TnVsW9jz99sQu/fHNjVNu2Z+bV5Wkrn/De+sNRvSZSfl8pha92BKVQVGrc4k8UCYO7Q+mNmsgpqY16zHcitUelwt1FkYdEBpv41HcRt3lnXX7A49omV+s1AqKE4GQdFE5tYwu6dc5Ak8uD615cg1yDIH7Fsyt11xfXNCJzxsKQ9evyyrGjsBoFlQ34y/XnmmqTLy1jtqfr8aiUqCx4x9ubk90EopgxuCdAaW0j+vXomrT3r29yYczMxbhz4lBMO+90w8AerMXtiZgfvfWNthTM1HN170czFOsNUsFNcrk9KKo6gTNP66a/PdMkZGOcrCNFbT9ShQlPLMPnWwqT1gZfCuaNNYeizjcDwE2vro/5PVfolMcN5uu555bWYfm+8GPdi6sbW5eVUth7rKY1nfP4wr249OkVmLkgG/uKQ9MvqVDDnChWqXqHaoc3b6s3qPsmSU6G33/WNiJmwY6jUb9u+5EqHItx2JqZlMWq/WX4t3eywm4z6W9t+e8VOWW4+vnVeHaJd9iir7rjO+vycf2LoePgfZ8BEbVhcI9BTWML1h3wBhxfTzlZIyjW5pVjbV7sJxaj3vvzS3N115thdKdii9uDd9YeCpjWzn96Ol9a6YXle
SGvberAk4qQM7HnnkLu+WArbn19I4771TT555bCgFTFpkOVYWfdSZRfvBHdsEQjRk18zq/2SqyCQ3vmjIX4bl8p3lmbj5lf7TFMIZmdkYgpd6JQDO4xyCmuBRA6GbKvlsqq/WX42d/X4/XVB9u9bcnwxuqDyJyxELWNgePG9TruC3cdQ12T9xrBcYObjY5WBaaKgvez5XBgIbJETE9HlCy8oJqCgmNKujZ8zxecnvxmX0y1UlKFL+ftY/Qr+I8N3h54hVbXZFehtxpkcJAGvMMcM7T/J6N5Wz/YGDg/6f6g2/0XBJVU8J1UieyIaZkUYlQQyxfc/Xua249UQSkFtw1va5y7LLa8+4ebvMF5pc7NPh7VNoY91vy5b3IMIjLG4B4D3/jtHz4ZWFnRaJKCO97ejLP+9HVC26CUMj19nNX+tdM7UuejTQWG27gVWnvuLe7Iwd1XI8dfnkG1SyI7Ym0ZG0g3+N/U68HGw+NReHnFAYx6dFFC9xtJbkltwOPso9VobHG3fivxr7gIhKZTAG/PPd1EcP/TvF0h61JhQgqiVMc7VBMoXdpm8Ink+aW5eG7pfhx6crqpyQ0OV9Tj0qdXxNjC+Pzv1/sCHl8zd43pfZTVNuHxhXsBAJ9vjVyOeEdhaM+9qiG6CT2IOjIGdxO2H6mC2+MxnKcyLU1w/8fbsHyf8d2by/eVYHCvkzFnmbeX2+JW6JwhmLe1EBcNOw0DTz0pbBv2HjNfIKs9Ha8PX25306HK1mU7XocgSjSrLqgyuEepxe1pHfky8BT9OjLpIpi/PfwdosF3aja7PXB7FP7r0x3IPO1krHjw8rCvT/V4+P2/Lkl2E4gIDO5Ru9Tv9vijfnVQ/MVSybDF5QEyvMn60tqmsNs+NG8nPtqUmFmSiChVpNhMTB2NUUD3ZzRaxmedViPFX7Pb03pSiHQzDgM7EUWLo2WiEG1u+KlF+8I+f6tOqYBml6d16GBjC+umEKWqMYNOsWS/vIkpiV5YHn8RLSNNLg/KI6RjAKCxJbXGtBN1NBcN623JfjnOPYmCa5kkUrPLePSNv+yjqT1KhijYGb1PTnYTEiqK2zJSCoN7kuWW1uL11YcibmdiKDxRuxnWV392LAC4+7Kz2rEl1rOqwJdVcwwzuOs40ezGs4tzWgtbRbpQGo///Xqv7vpPNhcgc8ZCw+JaRKlgeN/uhs+xQ5JcHC2j4+UVeXhheR7mLs/DB3deaOkvaUNTYC79Z39fj8LKBhyr8Y7OqW1swWndu1h6giGiyOxWWZrBXYf/xctfvLERI0/vYdl71TYF1knxv4MT8F5sKa9rsnXpYOqYYp0gvaPhBdV28MDH23DlsytDesn7imsNXmG9F5fnYU1u6Ph4Sm0v/2JsspvQLmzWmY2LVblxq7Dn7udLrXTA5FH9ktySNu+sy8ewPsYXrSg1TR6ZOr9DSeOwjrtVoZ3j3NvR3mPJ66nrOVhen+wmkEm8ROI8Nuu4M7jrWZXg+uvkTBOGWnNTi52EC3hOO79ZNhTSkr0yuLd6+MvQSSE6itN76le5dJL0GIq6RTK8X3cM7xc6FHDauacn5GLiq78cF/c+KHHYc7ep9zcYTw1n1n2ThydsX4lw3fkDwz6/+L8m4fXbxkfcjxUBsr346gN9b3Di6oM8eu1o3fVTRvVjWgYwNQmNHViXc+dNTJaJZro3M/7rqnMSuj8rXHp239blnl07YYBfjfq7LzsLWx+5Evv+Og29u3VuXX9yp3RL2vIfk4a1Lv/4+4MseY+HrxkFAPjpuMFRbb/wvokRt+naKV23f7eNbdIAAAz6SURBVJ6RnphBgA6LjbZnt9EyDO4A/vuzHTG9boDOpB3du9hjAJJ/cAcCv3KOGXQKenfrjK6d0vHN/T/Cg1O9J6tY6tVH45rvDWh7EPQWfbp3RiS/i/BN6d7Lh+POHw1DzuPTcOuEM6Jq04BTTsKvL8nEh3deGHY7vQCcnpbmuF6rMeOA57T/AY/Nbhbv8MH9P/+xJeLsSUbWPzQF7/8m8I9/45+mGG7/+A3nxfQ+ZkWTeggXewb3aiv41L9nV4w9oxcA47RMz67xndCGakM9T+6cHpKr/kFm5IuW5w7sqbv+joszAaD120eXjPSoU0sC4LH/dy4uHt4nwnah+8tI4+07ThRpvoVU06GDe05xLRZlFxs+7/8V/o6LMzFlZD9c+70B+PuvxiF/1jUAgIkj+gQE9G5az/3Ley4J2d/NPxiCdTMmB6y7ZcIQ3HP5WdjwkPFJwax7Lo+c8w8XfMYEnRx8X0eH9emGCUN745YJQ1qfWztjMrY/elXY91ry/ye1/n/pSfObWNx30rn38uG4YlQ/XDGqf8j2XTvp/9peNTpwW1+7/U9k0faoI232xI3nGW6XqFIRdj9BOO3Li9tmwd0eOQQLZB+txjVz1xg+37NrBp656Xxcd/5ArD1QjoeuHmW4bf+eXfHuv03Amty2IZQXDDm1dfmDOy/E0r0lyEhPC8hhr/7D5RhiQVnUqeeeji0PX4EbXl6LI5UndLcZFqbgUzDfXCWdM9Lw4b9fhK0Fx1tnhRoUYUJvAOjVTT+18tYd43FSp4yAIODrWA/pfRL+e+o5cLk9OFheh5e+O9C6zaL7J+GyZ1YAAGb9eAyMwqBvJEsspWe7Rri+8IsLzzR8Lj1NHBfYCPBYNIExJ8hOkPomF8597NuI2903ZQQAYNLZfTEpKD+t59Kz+4bksX0uGd4Hl2hf77t2Ssfnv70Yfbp3Ngzsj1w7Gn/9156AdRcO7Y2NQXVngk0c3gfjzvSmUE7r3gU9u3YCEBjc3/71D3B6z64YNSAwlREuGPm+jvp6pL5f8rFntJ3A8mddg8wZC0Ne+/A1o9Cne5fW5ccX7m197eSR3p52Q3NbfR1fQsP3d5SRnoYHp47EP7cUoqTGO6mJf2rFv93BfyO/vOhMjB7YE+POjH48+lM/GYMpo/pHDO5t72+QljER3W8aNxhP33R+yP+fHfL2Yce5p0DzZ990PjYdqsQnWfFPUWnZaBm7zaEqItMAPA8gHcAbSqlZVr2XkbomF/aX1GJxdgleXXkg7LY9umbg1V+OQ7PLg0uG90HnjPgzVh/eeSEG6PRsfQHYyB0XZ6JzuuCR+dnIPO1k5Fc04Ikbx6BzehpEgOteXIOZ152L+z/eHvC6124bh5M7t32kP8jsjeyjNeh1cqeA7YIDO+DNW99z+Vm4VadH6glKb/To6t3fOQYF1Uae3gP7imvx7M/Ox4/HtqW27vzRMNw0bghOCWpPlwxvIP3jtHNwdv8e+CTrSMgNQst/f1nrSTk4oPjaFbpeDAP7xOF9sEZnTtu+Pbq0noyCndW3Gw6UBd4t/PPxgzHzqz14647x2HusFk9/mxOS1oqkk9/vmoj3OGw86jTp5vz8AmQdrsTjN4wBAFw95nTd4P77K8/G7CX7o9rnhUN748Gp55i+PnfRsN7YcDB8p2xC5mmm9hktS4K7iKQDeAnAlQAKAWwWkQVKqT3hX2nOkj0leGjeTqSJIE0ELo8HSgEuj0L1iZaIrz+j98l4
/bbxEAFG9Oue8J5SpItxwR64YgTmLM1Fmnh7ndPHDMBpOoFmm5bjXp1bjjQBPs0qxPUXDMRJQb3NP18zCr+48Az07tYZ764/DAA40+DbgojgwakjdZ/zBXHfNYhzTu+B939zIcZn6p+kFj0wyfAYgwM74O2J++fk9fLz3bpk4KVbx+KkzmkYeGpXDO/XHXmldZh67uno2ikd14wZgBlXj8QtE4ZgX3Etrhwdmqv32fTnKejZtRMOltVj77EarD1Qjk2HKvGHaSNx+TmhNWH+MO0cnDfwFJw7sCfGPb40oEro7Rdn4vaLMyEimDyyf8D1jrUzJqOstgmPzt+NmhMt+OLuS7C/pBaFx0/ggjNOxeBeJ+Fvi3Jw3+QRAcf9/obDuHBob3RKD+1gzPn5Bfh8ayFWp0gxuZEDemDZvtKQ9T8a0QcndYotvKSJ99tu8DH+fPyQ1iCd/T9T8cHGw9hXXIt5W4sCtrvh+4Nwg9+Q2pM7Z+BvP/ke/vD5TvTp3hnXnT8IV4zqh4uH98HvpozA0j0l6NujC/Ir6kM6TOcN6okv7r6k9bN45NrRuHBobxRUNqCkphGDe52Mf38vC3dOHIo7fzQM3bqk4731h/H0tzk45aRO+PiuH+L5pbn4ZHMBpo8ZgGF9u+PHYwfhUHk9VueWoUtGOn51kXGKLx5ixdhNEfkhgJlKqana44cAQCn1pN7248ePV1lZWabfZ1dhNT7aXAClFNwehfS0NKSnAS0uhbQ0QW1jC7p3ycC5g07BwFO6onuXDIzo3yMg791RKKVQWd8ccLLIL69HQ7Mbow1Gm5h1pLIBhysaMHGEuZMaGTvR7MZJnfVTRG6PQl5pHYb26YaqE804Xt+CNAG6d81AfnkD3t94GD8bPwSr95fhvEGnYHxmL2wtqEK/Hl3wu4+24Zz+PXDFqH6YdHZfrMgpQ35FPQ6U1WFtXgW6d8lAXZML908ZgeLqRpzVrxv69eiKbl0ycMGQU9GjawYy0gSNLg/W5JZjeD/viKfGFg9OtLhx3sBTIALM+mYfRg/siamjT0dFfRPeXpuPkzqno6KuGaW1jfjZ+CGorG/G9wafgha39+/4vEE90aNrJzS2uLEmtxzjM3vhcEUDzve7jmUVt8fbBjPf3GsaW7QUaPsTkS1KKd07EK0K7j8FME0pdaf2+FcALlRK3eu3zV0A7gKAM844Y9zhw4cT3g4iIicLF9yTNhRSKfWaUmq8Ump8376RL1gSEVH0rAruRQCG+D0erK0jIqJ2YFVw3wxghIgMFZHOAG4GsMCi9yIioiCWjJZRSrlE5F4A38I7FPItpVS2Fe9FREShLBvnrpT6GsDXVu2fiIiMdejaMkRETsXgTkTkQAzuREQOZMlNTKYbIVIGINa7mPoASI17sRPPqcfG47Ifpx6b3Y/rTKWU7o1CKRHc4yEiWUZ3aNmdU4+Nx2U/Tj02px4XwLQMEZEjMbgTETmQE4L7a8lugIWcemw8Lvtx6rE59bjsn3MnIqJQTui5ExFREAZ3IiIHsnVwF5FpIpIjInkiMiPZ7TFLRPJFZJeIbBeRLG1dbxFZIiK52s9e2noRkbnase4UkbHJbX0gEXlLREpFZLffOtPHIiK3a9vnisjtyTgWfwbHNVNEirTPbbuITPd77iHtuHJEZKrf+pT6XRWRISLynYjsEZFsEblfW2/rzyzMcdn+MzNNKWXLf/BWmzwAYBiAzgB2ABid7HaZPIZ8AH2C1v0NwAxteQaAp7Tl6QC+ASAALgKwMdntD2r3JABjAeyO9VgA9AZwUPvZS1vulYLHNRPAf+tsO1r7PewCYKj2+5meir+rAAYAGKst9wCwX2u/rT+zMMdl+8/M7D8799wnAMhTSh1USjUD+BjA9UluUyJcD+BdbfldADf4rX9PeW0AcKqIDEhGA/UopVYBCJ7m3eyxTAWwRClVqZQ6DmAJgGnWt96YwXEZuR7Ax0qpJqXUIQB58P6eptzvqlLqmFJqq7ZcC2AvgEGw+WcW5riM2OYzM8vOwX0QgCN+jwsR/kNMRQrAYhHZos0pCwD9lVLHtOViAP21ZTser9ljsdMx3qulJ97ypS5g0+MSkUwA3wewEQ76zIKOC3DQZxYNOwd3J5iolBoL4GoA94jIJP8nlfd7oyPGqjrpWAC8AuAsABcAOAZgdnKbEzsR6Q7gcwAPKKVq/J+z82emc1yO+cyiZefgbvt5WpVSRdrPUgBfwPtVsMSXbtF+lmqb2/F4zR6LLY5RKVWilHIrpTwAXof3cwNsdlwi0gneAPiBUmqettr2n5necTnlMzPDzsHd1vO0ikg3EenhWwZwFYDd8B6Db8TB7QDma8sLANymjVq4CEC139fnVGX2WL4FcJWI9NK+Nl+lrUspQdc6boT3cwO8x3WziHQRkaEARgDYhBT8XRURAfAmgL1KqWf9nrL1Z2Z0XE74zExL9hXdeP7BewV/P7xXtf+c7PaYbPsweK/A7wCQ7Ws/gNMALAOQC2ApgN7aegHwknasuwCMT/YxBB3PR/B+3W2BNz/5m1iOBcC/wXtRKw/Ar1P0uP6htXsnvH/wA/y2/7N2XDkArk7V31UAE+FNuewEsF37N93un1mY47L9Z2b2H8sPEBE5kJ3TMkREZIDBnYjIgRjciYgciMGdiMiBGNyJiByIwZ2IyIEY3ImIHOj/AMzVTW+Y1darAAAAAElFTkSuQmCC\n", + "text/plain": [ + "
" + ] + }, + "metadata": { + "needs_background": "light" + }, + "output_type": "display_data" + } + ], + "source": [ + "plt.plot(losses);" + ] + }, + { + "cell_type": "code", + "execution_count": 59, + "metadata": {}, + "outputs": [ + { + "data": { + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXkAAAD4CAYAAAAJmJb0AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nO29eZhcZZn3/31O7dXV+5aks+8ra4AAIkiCoiJhHBwRGNHhvRhfYVzG3zvq6Kio1zgu4zKvovKKiAvIiDpBFJCEBAUlIYGQfe0mne6k9626az3nPL8/znlOnVpOVXVXV1ed6vtzXbnSXV3Lc7qqv+c+3+deGOccBEEQRGUilXoBBEEQRPEgkScIgqhgSOQJgiAqGBJ5giCICoZEniAIooJxlnoBANDU1MQXL15c6mUQBEHYin379g1wzpuz3acsRH7x4sXYu3dvqZdBEARhKxhjZ3Ldh+wagiCICoZEniAIooIhkScIgqhgSOQJgiAqGBJ5giCICianyDPGfswY62OMHTLd1sAYe44xdlL/v16/nTHG/osxdooxdoAxdkkxF08QBEFkJ59I/icAbky57VMAdnDOVwDYoX8PAG8HsEL/dw+A70/PMgmCIIipkFPkOed/AjCUcvNWAI/oXz8C4BbT7T/lGi8DqGOMzZ2uxRIEkczOY33oHgmXehkzzrmRMLYf6S31MmzBVD35Vs75ef3rHgCt+tdtAM6a7tel35YGY+wexthextje/v7+KS6DIGY39z36Kn76lzdKvYwZ57E9nbjnZ3sRiSulXkrZU/DGK9emjkx68gjn/EHO+UbO+cbm5qxVuQRBWBCRVURltdTLmHEicQUqBzoGJkq9lLJnqiLfK2wY/f8+/fZuAAtM95uv30YQxDSjqhyKyiGrs0/k44oWV7b3k8jnYqoi/ySAu/Sv7wKwzXT7+/Usm00ARk22DkEQ00hcF3dZmX0jPBVVO+aOgfESr6T8ydmgjDH2GIDrADQxxroAfB7AfwD4b8bY3QDOAPg7/e5/APAOAKcAhAB8sAhrJggCCXGPz0KRF1cvFMnnJqfIc87fZ/GjzRnuywHcW+iiCILITVzRI/lZbNecJk8+J1TxShA2RQidrM6+SF7YNe3949BiS8IKEnmCsClGJK/MxkheO+ZgRMbgRKzEqylvSOQJwqYkRH72RbLmYyZfPjsk8gRhU2azXSOrHNVebUuxvZ8ybLJBIk8QNmUyG68vtw/i688eK/aSZgxZVbGo0Q+3U0I7bb5mhUSeIGzKZFIonz54Hj/6c0exlzRjyAqH2yFhcaOf7JockMgThE2J6ZG8koddE44rFWXryKoKp0PC0qYA2qkgKisk8gRhU+RJZNeEYgoUlVdMuqGscLgcDEuaq9A5GDKsKyIdEnmCsCnxSdg14ZiS933tQFzlcEgSljZVQVY5zg6FSr2ksoVEniBsitG7Jo+N17DekrdSqmNlRYVLYljaHABA3SizQSJPEDYlLguRzx2dhyoskldUDqeDYVlzFQDKlc8GiTxB2BQh7vkUQyXsmsqI5OOKCqckoc7vRrXXia5hsmusIJEnCJsymbYGhl1TIZG8rEfyAOB1OYxMIyIdEnmCsCmTqXgNVVgkLyscTkmTL5fEKsaGKgYk8gRhUxIVr/nYNXLe97UDsqrCpUfyToc0K5u05QuJPEHYFCHyuaJzzjlChl1TGWIoKxwOSYg8Q7xCTl7FgESeIGyKsChyVbxGZRWiBqpSbA1Z5XA5hF1DkXw2SOQJwqbk22pYZNaYH2N3ZEWF0xTJV8qGcjEgkScImyKi13iOAidh1QCVUwwVVzkcJk+e7BprSOQJwqbE9OiVc0DNInLJkXxliKFW8ZrIriG7xhoSeYKwKWZhyxbNm0W+EmwNVeVQOYw8ebJrskMiTxA2xeyvZxO5kJ4+CeS2duyASAMVnrzLIVXEcRULEnmCsClm6yWbyIfjlRXJi30Fp55d45Qoks8GiTxB2JSkSD5Pu6YSsmtSI3mnQ6qI4yoWJPIEYVPM0Wu2StZQpYm8kmrXsIqp5C0GJPIEYVPMgp1NvCvOrlFS7RoqhsoGiTxB2BRzbni2qtek7JoK2KAUUbvLlF1TKamhxYBEniBsihgaAmTPfw9VWJ58wq4xtTWogJNXsSCRJwibYha2rBuvSXaN/cUwbmTXUJ58PpDIE4RNieWbQhmTjU3KSojkFTUlkqfsmqyQyBOETTHbNbmya6q9Tu0xFWBrxJWUSF6i7JpskMgThE1JsmuyRLKhuIIan0u/n/3FMDWFUhsaYv/jKhYk8gRhU2IKh1tPI8wWyUZiCgIeLZKvBE8+teLV5WAVcYVSLEjkCcKmyIoKr0sX+RzZNX63QxdD+0e84lhdIpKXJHCee3jKbKUgkWeMfZwxdpgxdogx9hhjzMsYW8IY280YO8UYe5wx5p6uxRIEkSCuqPC7c3vtobgCn9tZMUVD4qrFPP4PqIxq3mIwZZFnjLUB+AiAjZzz9QAcAG4D8FUA3+KcLwcwDODu6VgoQRDJyAqHz+0wvrYiElPgc0laJF8B3nVcSbdrgMoZUj7dFGrXOAH4GGNOAH4A5wFcD+AJ/eePALilwNcgCCIDMUWF16WJvJI1kpfhdzsrJtVQSa14lYRlZf9jKwZTFnnOeTeAbwDohCbuowD2ARjhnIsG1l0A2jI9njF2D2NsL2Nsb39//1SXQRCzFlnh8OmefLYIPRxT4HM7KqZoSByrw9SgzHw7kUwhdk09gK0AlgCYB6AKwI35Pp5z/iDnfCPnfGNzc/NUl0EQs5a4oibsmhythn0uB5xSZQzXEMfqEg3KjAwj+x9bMSjErtkCoINz3s85jwP4DYCrAdTp9g0AzAfQXeAaCYLIQFxR4XNl9+Q55wjFE9k1lRDJK6n95PX/K+HYikEhIt8JYBNjzM8YYwA2AzgCYCeAW/X73AVgW2FLJAgiE3GFw6dn11htOkZlFZxDt2sqo5GXsGVcDinp/0rYbygGhXjyu6FtsL4K4KD+XA8C+CSAf2aMnQLQCOChaVgnQRApyKpqePJWm46izbDP5YDLISEm2z/aFceamkJJ2TWZcea+izWc888D+HzKze0ALi/keQmCyA7nXIvkhV1jIXAhvQOlYddUQiQv7JqU7BqK5DNDFa8EYUOEZWHYNRZ+dDgmG/erlIHXii7mLiklT74Cjq0YkMgThA0REbmI5K2yZsKxxP0qZeC1UfHqSDQo0263/7EVAxJ5grAhcVlE8tqfsGIRxYb0SD5h19
g/2jU2Xo3JUJQnnw0SeYKwISJy9xqRfHZP3ud2VEzvGiVtMlTuJm2zGRJ5grAhwnZxOSTda88zu6YChDCe1k9ej+TJrskIiTxB2BDZlCvuzGLDCJFPFEPZXwhlVYVDYtDKcxK2DUXymSGRJwgbEjMieabbMHnYNQ6pIjx5WeVGFA+Y8uQr4ARWDEjkCcKGpEfyVnaNnkLpcsAlscrIrlGSRd5oUFYBJ7BiQCJPEDbE6KkuaZG8VWaJSKH0u50V04VSVlRjsxWgVsO5IJEnCBtibLw6tY1Xq37yobgMt1OCQ2KV07tG5Ub0DpjtGvufwIoBiTxB2BBzrni2CF20GQYAt0NCTLa/yCsKN6J3wNSgrAJOYMWARJ4gbIhs2nh1OSRLPzqsD/EGNGunEjZe43p2jYBaDWeHRJ4gbEjMNOc0u12jGINFnA7rLBw7ISupdg01KMsGiTxB2BBh17gdmt9uvfGaiORdDlYRloai8qSNVxrknR0SeYKwIYZd49TsmmwVr8KTd0oSOE9MVrIrcUVNzpOn7JqskMgThA0x7Bope8WrZtdo7YiN8n+bi6GscuNYABrknQsSeYKwIbLJrsnWJz4ck+F3JewaoEJE3pRdwxiDQ6qMgSjFgESeIGyIUQwl2hpYVbyaNl5dFdKtUU6xawBUzECUYkAiTxA2RKRMirYG2TZezdk12mPtHfHKSrJdA2i/B7JrMkMiTxA2JC6bG5Qxy83UUExJ2DUVkk8uq6pxVSLI1r9ntkMiTxA2RAiaFslnHuvHOU+yaypluEZqF0oAWfv3zHZI5AnChhiDMxzMcqxfVFbBOUyevCaMMZtvvMYVDoeULF2V0iu/GJDIE4QNMRqUSRKckpTRrgmJgSGmPHnA/gOvZUVNqngFkDWNdLZDIk8QNiSuaP1bJEnz5DPZNWKId2okb3e7JrXiFdBOdnZPDS0WJPIEYUPM/VusulBG9KlQXldyCqXdxTCuZkihrJBe+cWARJ4gbEhMUY3ZplZj/SJxTcyFyDsrpMdL6mQoAFlrBWY7JPIEYUPiigqXUxd5i2rPqJwcyQtP3u6RvJzJrslSKzDbIZEnCBtijmatBnkbkbx+MqiUHi8ZK14rZOpVMSCRJwgbElMSBUEui0IgEcl7XKl58vYWw0wVr84s7ZZnOyTyBGFDzBuvDou+LQlPvsIieZWnVbxma7c82yGRJwgbEjdF8mLjlfNk8TYieWdKgzKb2xpyyvg/gPLks0EiTxA2JK4kolmjJ02KyKVG8pUwC5Vzrh07tTXIGxJ5grAhcVPVp/DaU6tejTx5Z+XkyYtjzJRdQ3ZNZkjkCcKGmDsxigg9VbyjeqdKj4jkK8CTl9VEzx4zVrUCRIEizxirY4w9wRg7xhg7yhi7kjHWwBh7jjF2Uv+/froWSxCERlxOZJg4LdoViEheePKV0LvGEPkUu8Zl0dqBKDyS/w6AZzjnqwFcCOAogE8B2ME5XwFgh/49QRDTSFxN3ngFMnvyLgczNindhl1j34hXNs22NUNtDayZssgzxmoBvBnAQwDAOY9xzkcAbAXwiH63RwDcUugiCYJIJim7xth4TbVrFMOPB8wRv30jXtmYiEXFUPlSSCS/BEA/gIcZY68xxn7EGKsC0Mo5P6/fpwdAa6YHM8buYYztZYzt7e/vL2AZBDH7SGpQZpE1E4mrRiEUUBm9a8QxpvWTp2IoSwoReSeASwB8n3N+MYAJpFgzXEvczfib55w/yDnfyDnf2NzcXMAyCGL2EVNUw6ZxWdg1UVmBx5n4E3dVQO8a8wBzM04qhrKkEJHvAtDFOd+tf/8ENNHvZYzNBQD9/77ClkgQRCqywg2P3SFltmGicdXIkQcASWKQmL1FXrG0a5gx3JxIZsoizznvAXCWMbZKv2kzgCMAngRwl37bXQC2FbRCgiDSiJuadFm1K4jEFSOzRqBFvPYVQ+G7p9s19onkR0Ix9AejM/Z6zgIf/08AfsEYcwNoB/BBaCeO/2aM3Q3gDIC/K/A1CIJIIbnVcOZiqKicHMkDWoaNnb1rsfa0ilcHg8oBVeWQUn5Wbnz2fw5h35lhPPfP1yLgKVSCc1PQK3DO9wPYmOFHmwt5XoIgsmMu7TeKnFKySyJxxeglL3BadKy0C+IqJL3iVd9vUFV4JEfa48qJruEwzo9G8F87TuJf37Gm6K9HFa8EYUOSUyhFC+H0SN688Srua+dIXpyg0idD2acvz9BEDADw0IsdON4TLPrrkcgThA3ReqontytI9aQzRfJ27/GSra0BYA+RH56I4ZaL5qHa68S/bTuU1j10uiGRJwibwTlHTFHhdiRvvKZVvMqZ7Rq7Zdd849njuP93hwGYUihTN14tLKtyIyorCEZlLGsO4JM3rsaejiH89rXuor5m8V1/giCmlUQaoUihzNyTJhpPt2tckmS7VMM9HUMYi8QBZEmhtLCsyo3hCe04GgJuvHfjAhzvCWJ9W21RX3NWi/x3tp/E4iY/tl7UVuqlEETexFM2HxNdKNNTKNPtGvukGgpCcRnjURmAueI1PbsGKP8aAOHHN/jdkCSGL9y8ruivOavtmidePYunD/aUehkEMSmEJeEy7BrrFMq0jVcbNvIKxRRD5IWIp4//s0fLBkPkq9wz9pqzWuSjcRVhvR0rQdiFuJwsdI4M/eQ555rIp3ny9rNrwjEF4xEZnHPT0BAruyY5kldVjq89cww9o5GMzz00EcOXnzqCmDwzVwCDE1oRVGOARH5GiCkqwjESecJeyCmevCtDP3kxMCS1GMol2S+7ZiIqQ1Y5InHVOEGl9ZO3qPo9MxTCA7tO449HMl+x//a1bvzoxQ4cPT9WhJWnIyL5ej+J/IwQkymSJ+yHiDoTQ0PS7ZqoPt81va2B/bJrxN9oMBq37idvsfkc1DdsxYZnKns6BpNeo9gMT8TAGFBHIj8zREnkCRsiInl3yiBvc/pgRNbnu6ZG8jZraxBXVGO94xE5S5585kh+LKx5+cOhWNpzc87xyhvDAGZO5AcnYqj3u9M2jovJrBV5RdX8PbJrCLuwu30QsqKmtdt1ZKj2TB39J3DZbLhGyPT3OR6VE20N0vLkM3vyRiSfQeRP948b9klkhnRgaCKGer9rRl5LMGtFXlzyUiRP2IHOwRDe++DL+P3B88ZnN9v4PytP3inZK7vGHIRpkbxFP3kpc3aNyK8XYm5md8eQ8XVoBkW+scozI68lmLUiH9UvZymSJ+zASFgTqeM9wbQReK4MbQ1EJO/NEMnbyZMPxWTj62BUNnWhTE0NzTwQJRjRHj8SSvfk93QMGSfBmQr2hiZiM5o+CcxikTdH8sXuHUEQhRLRN1I7BibScsUdGaJYEcl7UiN5Byv7XHIzoZRIXrGI5DNlGAHAWDizXcM5x+72IWxa2gggcVIsNkMTMdSTyM8MUdkc9dgnsiFmJ0KE2vsn0vq3ZBrrZ0TyqXnykmTk2dsBc4Q9bork0ypeLbJrxvRIfjjFrukaDqNnLIJrV2qjR2fCrlFVjuFQDI0k8jODWeTJlyfKHfEZ7RicMD67bqcmdGKsn
zmFMmKkUKZXhtqpGMpq49Wq4jUtu0b35CdiSlLBk/Djr1rWBLdDmhENGA3HofKZrXYFZrHIx0jkCRshIvOYrKJzMAQgOcMktU98VM4cydutd03Y7Mnrdg1jmXrXWOXJJx4/YrJs9nQMos7vwoqWALwuaUb25gZL0NIAmMUiL/4IgOQPEkGUI2bP+HivNmjCHM06U/rEi0g+dePVbr1rkiP5OOIqT6t2BaybtAlPHgCGTCL/yhvD2LioAZLE4HM7ZkTkxb4AifwMkRTJx+wT2RCzE/O+kZgmZG6365RYysarniefqRjKhnnyPpdDS6FU1LQcecCcJ58s8sGIjCq3dqITVa+KytE5FMLqOdUAAL/bOSNX84PjJPIzCnnyhJ0QkbzbKeFET3okn1rkZBnJ2zRPvqXGo3nyKk/LrAFM07HSNl7jWNhYBSARSQ+MR6GoHHNqvQA0S2smNKAUHSiBWSzy5kg+RHYNUeYIEVrVWo2g3nbXZdpUdaSIt1HxmpZCKUFWuW3ShkUk31LtwVhE23jNZNckMozSI/lFDX4ACZE/r3eknKuLvG+GPHmya2aYWIZ0M4IoVyJxFW6HhOUtAeM2l0nsXLp4C4w8+bTJUJm963IlFJfhdkqo9bmMilenI122Ms255ZwjGIljYaMm8qIg6vxIGACMSH4m7ZoqtyNtM7zYzFqRT9p4JZEnypxIXIHHJWFpU5VxW7aN12hcgccpgbGUoiFn5iyUciUUVeB3OxDwOI08eVemjdcMQ0MmYgpUDjQF3PC7HYZdkojkfQB0u2YGIvmhieiMF0IBs1jkk+0aEnmivInEFfhcDixtTkTyZm/aISXnv2eaCgVYZ6GUK6GYAr/LgYBXE3lF5XBk8OQzFYSJzJoarwv1frdhl/SMReBxSkajMJ97Zjz5wYmZL4QCSOQBUP8aovwR81qXWETyLkmCkuLJZ7IFrLo1livhuAyf24GAR7Nr4oqa1rcGSBSEmfclRI58tdeF+ipXwq4ZjWBurde4yvHPUCQ/HJr5vjXALBb55LYGJPJEeROOK/C6JEuR13rSJH+mM4l8JlsjlYNdo2nzYktFKKbA73ai2utETFERiikZs2sAMdrQFMnr1a41Pifq/W7DrukZDRt+PKBF8jORfDE0PvN9awASeQBk1xDlTySuwudywOd2oK3OByml6tMpsZSK18x2jYiCrWaado+E8a7vvohnD5fHgPtQTNEjeScArWo1U548IEYbmiN5TeSrvS7U+d1Gxeu5kYjhxwOaJ1/s/lWcc7JrZhoh8gHPzOysE0QhaBuvWmS+tLkqLcPEmZYnP7VIfiCoDZo+p2eglJpwLLHxCmgZMtkieTnJk9ei8xqvEw1+F4YmYlBVjt6xSFIk73c7EFPUolpY4biCqKyiYYZ7yQOzWORjspaS5nc7yK4hyh6x8QoAq+dUo9aXPF0otcjJMpLP4cln679eCkIxWRN5rybyw6FYxjx5IL35WmokPxaR0RuMQFa5kSMPwPi9FjPYS1S7zuxUKABwzvgrlgkxWYXbKel+HIk8Ud5E4qox4OIjm1fg9isWJf3cldJJMRLXvOxUrLo1CoQwDmUYl1cKwsKT1yP50XA8Y548oDVpS4rkjY1Xp7HheUyvFp5TkxB5rzsh8tXe4ohwotp15iP5WSvyUVnLI/bN0M46QRRC2GS/VHtdaWKkVbwmtzVoqMqUQpk9Tz4RyZeHyIfiml0jjlflsIzkU5uvjUXicDsleF0O1OnpkkfOjQEA5tUlPHm//nuNFLGHVfvAOACgMUCe/IxhjuTJkyfKHbNdkwmXI71BWeoQbyDhyVtF8iIjRTTzKjXGxqs3EY9aRfJa8zWTyIdl1Ognh3q/Jq5Hz2sin5pdA2jVtcUgElfwzedOYHlLABvaaovyGtmYtSIvPEuK5Ily43hPEO/94V+T0vqsNlIFmlWRPDQktW8NkPDkrea8jut9cVLH5ZUCReWIySr8Lqex8QogY8UrIPYlEscVjMRRo58chF1z5PwY3A4JDf5ERG148kXSgQd2nsLZoTC+tHV92rCTmWDWirwRyc9QBzqCyJc/n+zH7o4hdA6FjNs0T95a5B0OlpQjbhnJi3mwlp58+Yi8OMlpdk1C5FMHhgicjuTBKWMRGdX6BrWwazoGJtBa64Fkeg6fu3gbrx0DE/jBC+3YetE8XLmscdqfPx8KFnnGmIMx9hpj7Cn9+yWMsd2MsVOMsccZYzNvQuVBTCG7hihPuoa19MUJPapWVI6Ykth4zYRLYkkFTNF45vuL3jVWPeXFxutwKF7yTpUisva5HfA4JeMEZRUNu1IKwsyRvLBrOAfm1viSHlesSD4Uk/Hp3xyA2ynhM+9YM63PPRmmI5L/KICjpu+/CuBbnPPlAIYB3D0NrzHtiEiH7Bqi3OjWc9RFVG01ys+MliNusmtki7YGYuM1RyQfk9WSZ52J1/e7HWCMGb68ZZ58ShrpWDhuePJ+twNu/QRn9uOB4kTy3SNh3Pr9v2JPxxC+cPM6tNR4cz+oSBQk8oyx+QDeCeBH+vcMwPUAntDv8giAWwp5jWIh8uQpkifKjW4jktc+l2HTdCQrtIpXLYpVVI64wjM3KMvQkteM8OSB0ls2ZpEHYPjy2e0acyQvGzYPY8xoSDa3LkXkpzmSP9EbxNbvvoizQyE89IHLcOul86fleadKoZH8twH8CwDxm20EMMI5F5+ULgBtmR7IGLuHMbaXMba3v7+/wGVMnqTsGorkiTJCRPLjUc06iejV2dnsGqcjYddki/yNPHmLitcx0+DrUmfYCE/ep+f7C5HP1KAMSM8wCkZk1JiKxoRlM7emuJH8Y3s6MR6V8dt7r8JbVrVMy3MWwpRFnjF2E4A+zvm+qTyec/4g53wj53xjc3PzVJcxZczZNVFZLZuGTMTsZjwqY1RvkTueEsnnyq4RUWxi9J91nnzcondNMBJHa41WsFNukXx1TrsmUQwVV1StuMmUlSNEfk5tcT359v4JLGsOYHlL9bQ8X6EUEslfDeBmxtgbAH4Jzab5DoA6xpj4zc4H0F3QCouEObsGoE6URHkgrBoAGNejavHZzCby5ig2McQ7W+8a62KoRQ3JM1FLRSjFpjIi+SwbryK7RuwtJEXyekuBuame/DS3NWgfGE/qFlpqpizynPNPc87nc84XA7gNwPOc8zsA7ARwq363uwBsK3iVRUCL5B1GlEC+PFEOdI8k0iYnYvlvvDpMefJGJJ/B3nEbefKZr1zHIzLmN2iR7vBEaUU+HE+kUAJAQN9EtfTkpUSTNjEwxJx6adg1KSIvSQwe5/TMeY3EFXQNh5OGu5SaYuTJfxLAPzPGTkHz6B8qwmsUTFSP5L1FLoQgiMkgInmHxIxoNKyX2+eueNXuZ0TyGSterRuUCYtjfr0fjGlplKUkYdcke/LWXSgT2TVGJG9q/7C4sQoNVW40BtL7x0xXAkbnUAicA8uayyeSn5beNZzzXQB26V+3A7h8Op63mMRE7xqK5IkyomskDLdDwpxar5Enn7Brsmy8ShJUDqgqzxrJZ2s1LOyh
Op8LNV5XyfvXmPPkARg579Ybr4mhIWOR9Ej+A1cvxt9eOj/jlcB0TYdq79d61CxtKp9IfhY3KNM2Xg27hiJ5ogzoHg5jbp0X1fpMUyARgGRNoTSyZtTESSFDJJ+YhZou8kFT18Z6vwtDZRPJ55lCKZkjeTEVKhHJuxyS5fg9r9uB0DQEeqf7JwAAi5v8BT/XdDErRZ5zblS8Crum1IUfBAFo6ZNtdT4oKjdEPp+NV1ENqqjcGIiTqXdNokFZul0TjCb6r9dXuUseyYdiCtwOydhoFcVQrmzj/3SRFwNDzJF8NnwuByLToAEdAxNoqfYUrWXxVJiVvWviCgfn2iYUZdcQ5UT3sCby1V5nIrsmi2gLnKYNVfFZzt67JoPIJ0XyiZmopSIckw2rBjB78rnbGoxliOSz4Z8mT769fxxLy8iPB2apyMeUxB+N2NQhT54oNVFZQV8wirZ6H6o8TiO7JpJnxSugiXc0S/EUYyxtgpIgVeRLPR0qpI/+Exh58tmya0QkH5HBGBDIMDglE17X9AwPah+YwJIy8uOB2Sry+h+BOZInu4YoNedHIgCAtjofAh7npPLkhQ2jqNkjeUAvnMpQDGUel1fvd5U+Tz6upETyWlSedfyfHsAFI3EEPM6kbpPZ8LkKHwM6PBHDSCheVpk1wCwVeXOxiNet/QookidKjWhn0Favi7zw5GUFToll7UVubKiqHNEcJx7uoG4AACAASURBVIVqr9OI2s2I16v2OlFf5UYoppTUxgynRPKJBmUW4/9MBWHmgSH5MB12jZj+RHZNGWCO5IVdMx2bLgRRCCJHfn6dH1UeJ6KyquWux7L3kgcSGSdmu8bKw28MeDA4EU27PdWuAbIP9N62vxu/fa0rx1FNnVBMht+VsFsMTz6LXaOoHJxzBCPxvDddAUzLrGeRWVNO6ZPAbBd5p2T09yC7hig1XSNhMKa1whWCNhGV9bbB2f9U2+q1KtWj54NZUygBoCngxsB4uhUzFonD7ZDgcTqMjo1Wlo2qcnz590fx3edP5XdwUyAcS7ZrFjb48a4L5+HyJQ0Z7y+ybmKKikPdo1jUmH8ao8/lLDjQa++fgMvBML/el/vOM8isFHkj0nFKcDokuFMm3RNEKegeDqO12gu3UzJEfjwqIxLLPvoPAC5dVI8arxM7jvYiElfBmHWqYWOV2zKSF9FvnR7JW7U2ONA9iv5gFJ1DIcu2xYWSuvHqdkr4v++72LJlgLBxDnaN4txoBJvXtOb9Wj53Zg04OxTCtv35td/qGBjHwga/pZ1UKsprNTNE1BTJA9qlGqVQEqWmeyRkROTCfx43IvnsIu9ySLhuVQueP9aHUEyB16kN2shEY8CDwQyR/LhJ5EXRkFVrgx1HewFoKZtdpqZq00koJZLPhbBxnj7UA8aA61fn3+bX53JA1mfKmnnoxQ587PH9eZ3I2vsnyqpnjWCWinxy9oHP5UgamkwQpaB7JGxc6leZ7Zq4mjV9UrBlbSsGJ2LY3TGY1d5pDGibqqmfec3H1mwaYdcMWdg1zx3pNdoMiA3H6WD/2RH0BbUso1BMTorkcyE2pp851IOLF9ShKUOPGit8FqnUx3rGwHlyn/1MKCrHmcFQ2W26ArNU5GMZIvlwvDiXnASRD5xz9I5GjdF0wq4JRmSEY7k9eQC4dmUznBLD4XNjlumTANBUpYlfajSfya4ZyWDXdA2HcKwniDs3LQKgRbDTQSgm47YH/4ovPXVU/14xEiPyQaSRdo+EJ2XVAMhYFMk5x/GeIIBEV0srzo+GEVNULG4kkS8LYiZPHgDNeSVKTlRWEVNU1OoVmomNVyUvuwYAan0uY1My20mhqVoT8IHxZF8+GJGN1xX7Apnsmh1H+wAAt146H/V+F9oHpkfk/3xyAJG4il3H+xCVFUTl/K5gBObGZTesnaTIu9MTMAbGY8bxiwpaK3rHtN9l6vzYcmBWinw0VeTdDqN3NUGUgjFTIRJg9uTjeiSfn9iJCDZbJN9oEcmPR+Wknit1FgVR24/2YmlzFZY2B7CkqcrovFgowucPRmS8cFwbCToZu0ZE8gsafFjRMjlv3KenapqDvRO9QePr0RyRfN+YZjG1VpPIlwVpdg1F8kSJEQ21hM8tyvHHo1pEm6/Ib1mjbTbm8uQBpGXYjKXkltf73YbIR+IKesciODM4gZfbB3GDfjJZ2hxIsms4T9+8tMJ8P1XleP5YH7asaYHbKeF3B84DmKzIa8e8ZU2r5aazFYmW44lg71hPQuTF+2NFX1D7XbbU5L8PMFPMSpHPlF1DnjxRSozWuHokXeXRRGc8IiMSV+DLw5MHgEWNVVjVWp21MZeI5M258qre9bLGLPJVbgyH4njp1AA2fWUHrvj3Hbj267sQVzi2rBUiX4W+YNRY/wO7TuPNX9uZM1ttaCKGC+5/Fo+/0gkA2N81goHxGN514TxcvawR249oUb1vEp58QP+d3TBJPx4wz3lN6MCJnqBRZJbbronAKTE0+DO3Mi4ls7LVcCxDdk2YsmuIEjJmzCRNlO57XRImYjLC8fztGgD4/p2XZI1kfW4HqtyOJLtmIiaD84RNBGgZNrvbB/H+H+/BsuYq/J+3rQIDQ63PhY2L6gEkqjvfGAhhw/xabNvfjZ6xCH5/4Dz+9tL5lms4OxRCJK7i3/9wDFvWtGL7kV44JIbrVrZgPCpjp27XVE0ikr9mRTN++g+X48pljXk/RpBpDOjx3iDWz6vB612jOe2a3rEoWqo9effKmUlmp8grGewaypMnSkgwxZMHtIZcQT2Sn4zI55OrndraING3JvH6DVVuRGUVW9a04FvvvShjj3SRMtg+MI5anwsnejV//tE9nVlFXrz2aDiO/3j6GA50jeKyxfWo9buweXUrPoNDADCpPHmXQ8KbVzbnfX8zibkS2u9BVTlO9gbxno0LcOjcWM7smr5gBC015efHA7NU5KPxDBuv5MkTJSThyZtF3qEVQ8Xz9+TzpTHgTorkzX1rBHduWoTlLQHcdtlCy2lMixr9kJjWt0U83/uvXISf/vUMjvWMYfWcmoyPE1bRTRfMxa/2af1vPvvONQC0DJUNbbU42D06qRTKQhAnE2EzdY+EMRFTsGpONWp9rrzsmiVN5Zc+CcxSTz6maGXfokJuuob4EsRUCWaYSVrlcWJIj3jzyZOfDI1VnqQUykxXEsuaA7jjikWWAg9oluf8ej/a+8ex41gvVrQE8PEtK+F2Snh0d6fl48QJ4f6b12Genna4xeSlb9Y3kCez8VoIfsOT13RA5MevbK1GjdeJ0Rwbr71jUbSWaSQ/O0VeVuF2SIZv6XM5EFd4xpFoBDETjEXicEgsubWux4mBoCaGVs3GpkpzdXKTsrEMkXy+LG2uwoGuUexuH8KWta2or3LjnRvm4revdltWkg+MR+F3O9AY8ODbt12Mf7p+ORabIuHbL1+Iu65chJWt1ZNez1QQkbyY83q8V4h8ADU+V1a7JhJXMBqOk8gXg+eO9OIff7YXaoYpN9kQQ7wFfnd6tdtU4JzjeztPTVveMDF7ENWm5g3TgMd
pRNuT8abzobHKg6GJqPG3IwaUVHumIPJNAa1RmcqNFM7br1iIYFTGU6+fz/iYwfGokcp5+ZIGfOKtq5J+3lLjxf1b1xv7ZsXG45TAWKLl+IneoD6G0ZXTrunX0yebq8svfRKwucifGwnj2cO9lv01rIjKKtymyMibcqk2VQYnYvj6s8exbf+5gp6HmH2MheNpQy4CXqfx2Z52uybghsqBET1CTXjykx9AvUTffG2scuOiBVrWzcZF9Wiu9mDPG0MZHzM4ETNSOcsBxlhSAsbxniBWzdGuImq8rqzZNb2iEIoi+emnVS88EL/kfInKSlIkb+TIFhjJi/FtpR6bRtgPc98YQZXHCa5fpE6mvD8fGgOi6jWqv376nkC+LNNtlresbjH8e8YYFjb4jUEoqQyMx9AUKK+ccp8+5zWuqGjvnzCsohqfM6kYKhJX8PgrncZVkGhp0FqGhVCAzUVepCz1jaX3xs5GLMWu8WXIkZ0K50e1D/RgiafcE/ZjLJIeyZutE880i3xTlehfo31WgxE5bU8gX9bNq8XCBj9uTUmZbKvzGSMNUxkcj5ZVJA9oV/ThuIIn9nUhpqi4VK8FqEmxa54/1odP/vog/to+CMAUyZdhSwPA5iIvLo8mG8nHZDXJ6zM2XQq0a3r0dQxl6NVNTJ43BiZwzkIkyp0j58ZyFtCYsYrkBdO98WpE8hOJSD7gcU66HQAA1Ppd+NO/vAWbliYXIbXV+3B+NJy2Z6aqHEMTMcOTLxf8bgfOj0Tw1WeO4fIlDcb+Qo3XhZisGnt250e1v/MDXaMAgN5gBG6HhDr/5K2umcDWIt8cEHbN5CL51I3XOr0EXNgtU0W8+WTXFA7nHB94eA8+t+1wqZcyaVSV4z0/+Ase/NPpvB8zFo6ntSIImER+2jdeRf8aEclH5aTXmw7a6nyIK9zo6yIYDcchq9w40ZQLPrcDf20fRDAi40tb1xsnPPG+iAwb0YzsYPeI/n0ULTWeKZ0gZwJbi7zbKaGhym0MGciX1Eh+fVstan0u7DzeV9B6enSRJ7umcE72jeONwZBhgdmJkXAcEzFlUhOTMkXyZtGd7o3Xer8bEjN78umvXyhiylX3SCjpdnH1UG6evEjA+IerFxubrgCM9s/CshHOgYjk+4IRtJRpZg1gc5EHgJZqzxQieSVJ5LXRac14/lgflEmmY5oR1sLwRAycT/15CK2dLZDeDtcOiJS6fG1EReUIRuWM2TWC6d54dUgMDVVu9Ou/367hMOqnubnW/Dqf8dxmxD7AZCY3zQRNATfm1Hjx0S0rk24XTduE/SauTLqGwxieiJV1IRRQASLfWuNNi+RzCWxMUdP6bW9e04qhiRj2nx2e8lqEJy+rPOe4MCI7ogvh4ETUdidMkdueb0JAom9MFk9+mkUe0HLlB8ejONQ9iqPnx/DWdZPv3piNRCSfLPLixF1unvyXtq7HtvuuTrOtEnaN9j71jkWMGbgHu0fROxYhkS8mrTWepIjpZG8Q6z7/LI6cG7N8jKh4NSNGpz13ZGqWDecc50cjRkHEEFk2U2ZgPIrXzo6gocqNuGK/E+ZkI3nh9Wbz5Kd74xXQ+9dMxPCL3Z3wuiS8+2LrhmJTwe92ot7vSkujFHZNuWXXNAY8GcU61a7pG4viOr0R2p6OIQQjcln2kRdUgMh70R+MGjbLvjPDCMUU/O6AdUFSNMWTBxKj08R0mskyHIojJqtYO1dryDQ0MTkLiUjw/LE+cA68++I2AAnf2C4IkZ+IKUaUng1RiFSTzZN3T/+famPAg7NDITy5vxs3XTAPtUXIDmmrT0+jHBiPgbHEsPByR9hoo+E4JqIyglEZy1sDWNzoN2zFck2fBCpA5FuqPVB5Ijro0OdNZhPr1Dx5wZY1rTjZN44zg5OfWSn8+HXzhMjnnz5HJLPjaC/m1HiNtrF228juN52U+vKI5sdSBoYIxOAQxpB25TkdNFa50ReMYiKm4PYrFk778wN6rnxqJD8eRb3fbUxyKndEj/+xcNzw41urvdgwv86YHkV2TRFJLYg6rY8iO9E7js7BUMbHpGbXCEQXvO1HJ2/ZiMyadfNqAVAkP1UicQV/OjGAzWtajI05u0byQH7pvVYtBao92vc+l6Mo6Xkiu2XN3BpcvKBu2p8fANrq/OgeCSftqwyMR9FYVV5+fDY8Tge8LgljETkxy7XGiw1tiTbK5VrtChQg8oyxBYyxnYyxI4yxw4yxj+q3NzDGnmOMndT/r5++5aaTWhDVPjBuWCbbLaJ5LU8+3eNc2OjHytYAnj3UM+l1nB8TIq+9tt2iz3Lh5fZBhOMKtqxpRVO1JgT9NsuwGRiPGtZLPum9CU8+deNV+4wWY9MVSBRE3X7FwqLleLfV+xCKKRgJJa5sB8djZZdZk4sarwujoTh6g4kWBhvaEifGlgq1a2QAn+CcrwWwCcC9jLG1AD4FYAfnfAWAHfr3RSPRvyaKuKKiczCEa1c1Y0VLwFLkrSJ5AHjPpQuw540hPH9sct58z2gYTolhQYMfXpeEYRL5KbH3jWE4JIZNSxuNeZl2jOTFFV0+m6+ZerkDiRGA3iJ1YrxmRRPeu3GBsfdRDNrq0jNsBsuw2jUXorWBiORbarxYp0fyHqeUdoIuJ6b86eGcn+ecv6p/HQRwFEAbgK0AHtHv9giAWwpdZDaaAh4wpv0xdQ2HIascS5uqsGVtK/Z0DKWVlqsqR0yxFvm7rlqM5S0BfP7Jw5NqPXx+VEujckhMS00rgsi/1jmMbfu7p/15y4kD3aNY0RKAz+2A0yGh3u+yXa58fzCKxU1V8Lsdedk12Xq5BzxOeIs0OGN+vR9fvfWCpFTN6X8NkSufsE4HxqO2i+RFu+HesYgm6l4narwuLG2qQmuNt2yrXYFp8uQZY4sBXAxgN4BWzrloIt0DIGPyLWPsHsbYXsbY3v7+/im/tsshobHKg75gxOjjvrQ5gC1rWiCrHC+cSH5uMd8108YroFXRfnHrOpwdCuOBXfmXpZ8fiWCOPuGmvspVlBTK//fndnzhSfuV+ecL5xyHukdxwfxa47bUWaTlTlxRMRSKoaVaS8fLN5L3uRxwZdiIDHicRUmfnCnaUgqiorKCYES2lScPaJlPY2EZfcFokqi/97IFeNeFc0u8uuwULPKMsQCAXwP4GOc8KTmda7stGStZOOcPcs43cs43NjdPbfiuoKXag76xKNr1TdelTVW4aEE9GqrcaVk2uUQeAK5a1oSbL5yHH7xwGm8M5Jdp0zOWEPmGKk9R7JresSiGQ/GCh5uUK90jYQxNxLBhfsLrbKxKnmBU7gxNxMC5NkCipdqT1rclE2Nh2fJyv8rjnPa+NTNJnd8Fv9th2DUi+Cm3vjW5qPFpPeW1wqfE2v/x2mX4P29bXcKV5aYgkWeMuaAJ/C8457/Rb+5ljM3Vfz4XQGENYfKgtcaD3mAE7QPjqPe7UF/lhkNiuHZlM146NZi0s586xNuKz7xzDdwOCZ9/8nDOikutECpszKpsrHIXxa4RUaHI5Kk0Duq9QDa0JSL5poDHVp68eUpQS403rx
TKYDRuOazjgvl1xma+HWGMJaVRinGGdvPkaw1PPmpk9NmFQrJrGICHABzlnH/T9KMnAdylf30XgG1TX15+aJfFWiS/tDlg3H7RgjoMjEeTfFERyecaK9Za48XHtqzACyf68ezhRLZNx8BEmuiMhuOIxFXMqdUuTev97mm3azhPdPM7X6DIR2UFh8+NTseyppWD3aNwSgyrTc2hRFWmXRA58s3VHrTqfZVyBQljYTmtEErwlXdvwBe3rp/2dc4k5oKogTJtTpaLGq8257VnLFLWhU+ZKCSSvxrA3wO4njG2X//3DgD/AeAGxthJAFv074tKS40XA+NRnOobx1LTMOD1ekR4oGvEuC2qWx35zI78wFWLsXpONb74uyMIxWQ8/kon3vqtF3DHj3ZDNg39Pqe3KJ4rIvmAG6GYMq22ymhYq6gFgJ6xwjozPra7Ezf93xenVPRVTA52j2LVnOqklMGmgAcjobhthqwbkbxeIh+OKwjmqHoNRqwj+UrAPDzE6FtTZi0NclHjc0Ll2syJcm5hkIlCsmte5JwzzvkFnPOL9H9/4JwPcs43c85XcM63cM4zD3mcRlprPOBcS80S8yYBYO3cGjgkhoPdiag14cnn9jmdDglfumU9zo1G8O4H/oJP/vogljRV4VhPED/96xnjfkJ0E568FqVMZzRvvhopNJJ/tXMEnGuD0MsFzjkOdCVvugKJy3q79AISIt8U8BhikMuyGStCm99yoq3eh5GQ1hJAXAU3lXFr3kzUmvoKlXPhUyZsX/EKJBciLG1K2DU+twMrWgLJIq9Hw/mWiV+2uAG3Xjofx3qC+ODVi/GHj1yDa1Y04ZvPnTD+eIXoikhetGydXpFPCEWhnvwh/fdhVUdQCrqGwxgNx42rL4GI+AZs4sv3B6Oo1jdLE4V62dcejKQPDKkkljRqgdcDu06hPxiFxymhymabyeaWE7PJrikbzGfWZaZIHtA28Q52jRq+aFTOz5M38+Vb1mPbvVfj8+9aB6dDwhe3rkdMVvHl3x9FTFbRPRyGQ2LGyaYY0acQ+YDHWVAkPxaJo31gAtVeJ155YxijofLosSMGMFzQllxe35Qywajc6R+PGp1IxSCJXFWvY+HKjuS3rG3Fey6dj+/tPI2fvXxGr20p37zyTJhPwrNm47WcEBGTxLTWBGYumF+LwYkYzunCKCL5XNk1ZrwuBy409fZY0lSFf7x2KZ58/RxWfvZpPLDrNFqrPcak+snYNTFZxfXf2IXfvW7dNRNIDCpY31ZT0LQkEcXf/aYlUFSOXSeKnvyUFwe7R+F2SFg5J5B0e+os0nJnIBg1rIiWPCL5SFxBTFHTmpNVEi6HhK/degE++841iCuq7awawN52TUWED41V2iiz+fX+NK9d5Fwf7BpFW50vYdcUWCp+3/XL0VrjNSpqLzTldhvl+HmI/On+cbQPTOD5Y31414XzLO/XNxZBrc+FJU1VBXnpQuTv3LQIP3/5DLYf7cPWi4pX1p4vB7tHsHpuddr7lzqLtNzpH49izRwt5THgcSLgcWYtiEp0oKyIP0VLGGP4X9csxYUL6iDZLIoHEnaNz+WY9lm4xcZeq7XA6ZDQFPBgaYpVAwCr51TDKTEc7B7BjevnICprGS/5bLxmw+N04M5NizL+rNbngkNieRVEnejVWpWaM4Ay0TsWRUu1B3NqfBgYjyEqK1M6hgP6ya4p4MH1q1vw9KEexBU1Y7XlTKGqHAe7RnFThpNctccJt0OadEFU13AIn/z1AXzqxjXYkLKZW0z6g1G8eUUi0mup8WSdEGX0kq9gT97MZYsbSr2EKSGK1VrLeGC3FRVh1wBa8dK9b1medrvX5cDK1mrD852KJz9ZJIlpPVfyEPnjej/q9oGJrAMmeoNabxyxuZvvaLlUDpraBmxe04pgRMYrHUVPgMrK/q4RjEVkXLEkXQAYY2gKuCddEPWFJ4/gpVOD+OSvDySluxaTSFwr2W822RGt1dlbG4gOlJXsyVcCIsXVbn48UEEiv/WiNsso4YL5tTjUrW2+RqfgyU8FrSAqtzCJSJ5z4HC3dYGSVmnnMdI0p7L5OhqK48xgyIhsr1nRBLdTwnNFyrI51TeOT/36QM5soO1HeuGQGK5b2ZLx540Bz6Sya3Yc7cX2o724ZkUTjpwfw89fPpP7QdOAOUde0KJXY8uKim9vP4Hv7TwF1TQsPjEVanZE8nbFITFUe5xlPRzEiooR+Wysb6vFcCiOruHwlDZep0JDlRvDeUyHOtYTxJVLGwEgKdXTjKpy9OmR/Lw6IfKT33w9dC65bYDf7cTVyxqx/WhvUYZlf3v7CfzylbO4+bsvYv9Zaztq+9FeXL64wXL83GSqXiNxBV/43WEsbwngobsuwzUrmvCffzyRV1/3QjFXuwpENfbdj+zFt7efxNefPY57frbPuGobs2gzTJQf//CmJfibi633zcqVWSHywp64/3eH8eR+LYulmHYNIIQpOfocGI/ihy+cRjim7QuMR2V0DYdx9fJGzKv1GpZSKsOhGOIKR2u1x2idIKLjntEIHnqxw1KkD3WP4r9fOQtV5cbzm3vDbFnbirNDYZzsGy/sgFMYGI/i2cM9eOvaVridEv7uh3/Fp39zAJ/bdghfe+YYRkKaaHcOhnCidxyb12SO4gEtVz7fjdcHdp3G2aEwvrR1PdxOCfffvA5RWcVX/nBsWo4rGwPBdJFvqfYgJqt46dQA/v1vNuD+m9dh5/E+/O0Df8HZoRDGwsKTJ7um3Pn4DStx/eqMTXXLmlnxyVo9pwbr5tVg35lhAJrIFXuHPLV/zeFzo7jnp/vQPRJGtdeF269YiJO6VbOytRobdEspE8ZcyRovAh4nqk258g/+qR0/fqkDly6qx0UZRrh98akj2NMxhF0n+hCOKVjY4EedP9E3ZPPqVnwGh7D9aC9WtlanPX6q/GpvF+IKx7/cuAr1fjc+8avX8Yw+cWtU7wHyzb+7yCjIumGt9R9PU8CNgXGtB0y2Ta9QTMbDL3XgHRvm4Mpl2tXR0uYA7nnzUnx35ym897IF2KRfNRUDEcmbe6VftawJF8yvxb++Y43x2suaA/jwL/bh5u++iCuWaLdRJE8Ui1kh8m6nhN9/5JoZfc3GKjdGwnH8+MUOTERlPLDrNGp9LrRUe7DjaC9uv2Khsem6ak41NrTV4tnDvRiLxNP82V5jGo0mHnNqvTg/qs3N3KFPsNpxtDdN5IcnYth3Zhjr22rw9KEecA68c0Ny7+s5tV5saKvF9iO9+PB16RvX2ZAVFc8e7sUNerQuUFWOx/Z04vIlDVjeop04fvLBy42ff+2ZY3hg12ncdtlC7DjWi+UtASxqTM+MMn6XATeisoqJmJL15PzU6+cRjMj4wFVLkm6/9y3L8dvXuvG5bYfw+49cU7RMIrEZbu6wuHZeDZ68701J93vTiiZsu+9NuPuRV/DM4R5IDLarACXsw6ywa0rByjnV4FyLpP/zuRNYM7caT953Nd6xYS5ePDWAcEzB8d4gfC4HFtT7jXz+TNG8EA9RUTun1oue0QhO9Y3jzGAID
ollzJ3fdaIPisrx5Vs24Md3XYaGKjfesjrdFtm8pgWvnR2ZdOuAn/zlDdz76Kt4dHfyxuZLpwfQORTCHVcszPi4+65fjrY6H/71twexu33IGKBuhWhtkCvD5hd7OrGiJYDLFiePFfa5HfjCzetwonccD7/UkeuwpsSu43348UsdWN4SyOsksqSpCv9z79W4fnULljUHbJeWR9iHWRHJl4KbLpiHN69sNjIpan0uMMZww9pW/OQvb+DFUwM40RvEytYAJIkZPvnBrlFctawJPaMR+FwO1PpdaZH8vFofjvcEsf2oVq1615WL8eOXOtA1HML8+kTF7/ajfWiu9uCCtlpIEsO+z27JuNYta1rx7e0nsfNYH96zcUFex9czGsG3njsBAHh0TyfuumqxIVS/eLkTDVVu3Lh+TsbH+t1O/NtNa/Ghn+/TX9/ajwcSzawGxmOWEf+h7lG8fnYEn3/X2oyCecPaVmxe3YJvbz+Jd104D3P1vY2pcn40bFyJHeoexTefO4FVc2rw/95/ad7PUeN14ccfuGzGUjyJ2QlF8kWkxutCnd+NOr/bEJ7LFjeg2uPE9iO9ON4zbvjgDVVutNX5cLB7FE8dOIfrvrET9z76KgAtR77e7zKKn+bUetE/HsUzh85jfVsN7tykRczPH0u0KIjJKl443o/Nq1sg6e0WGGMZBXDdvBrMrfVOqmHZl39/BHGV48PXLcOJ3nHs1fc7jvcE8dzRXrzn0vlZi7Xetk4T3dYaDy5eWG95PwBYoM8J/c6Ok0Y2SiqP7umExynh3RfPt3yeL9y8DirnuPNHu9GR58SvTOw42osbvvknfODhV/CBh1/BN/54Am9dOwdPfOjKpJNsvjhLWIhGVD706Zph3E4J165qxh8OncfAeBSrTAMyLphfi+eO9OK+R1+Dx+nAi6cG0N4/jt6xaFJ+7txaLzgHXu8axebVrVjaHMDSlHYHezqGMB6VsTmHFQJo4r95TQv+hLMVcwAACSNJREFUfFJ7vc7BECayFGa9dGoATx04jw9ftwz3Xb8c1R4nHt3dCc45/m3bIVR7nfjHa5flfM3v3XEJnvqna4yeP1YsbQ7gK+/egL+cGsAt33sJ+8+OoHMwZPw71RfEtte68a4L51mmYQLAggY/Hvng5RiaiOGW772EF08OWN5XVTlCseTfAeccP3zhNP7XT/dicZMfv7xnE37z4avw+4+8CQ/ccUlRB2ITxFShT2UJuGFtK546oM06N4v8xQvr8PShHrzn0vn42A0rce3XduKxPZ3oC0aT0vJEQZR4LkBLhXz4pQ5jAMX2o73wOCW8aXlTnmuag5+/3Inr//MFANpszgfuuARXLUt+/IGuEXz88f1Y2ODHh65dBq/LgVsubsPje89iQ1st9nQM4d//ZoPRpC0bXpcjaUBINt53+UIsaarC//75PtzyvZcy3ud2iz0AM1csbcST+qbnXQ/vweduWov3X7ko6QqnLxjBh362Dyf7xvHd2y/BtSubEYkr+NffHMRvXuvGOy+Yi2/ceqGtZ68SswcS+RJw3coWOCQGReVYZUpbvOuqxbh4YT02LqoHYwxvXdeKX+3rglNiWLEq4VsLP3lOjdeY/7l5dQse/FM7/nxyAG9fPwfbj/biTcub8haiN69owg/uvBQTURkq5/jhn9rx/of24P6t63DHFVqPnt+9fg7/369eR1PAgwfff6kh0LdfsRA/e/kMvvjUEVy4oA63XZafrz9ZNi1txNMffTP+cnoAqWUBDQE3Lslh+wgWNPjx6/99FT7++H58/snDON4bxP03r4PLIeFQ9yju+eleDIVimFfnwwcf3oNPvHUVth/txWudI/jEDStx3/XLaaOUsA0k8iWg1u/CZYvrcbwnmBShe5yOpNYMd1yxCH84qOWWm9ubztWrXq9f02KIzaWL6lHnd+HeR1+FxLQTyGRSIhljSRulb1s/Bx997DV85reH8Llth8EAyCrHZYvr8f07L03KBV8ztwaXLKzDa2dH8OWt6409gGIwp9aLd19i7bvnS7XXhR/+/UZ844/H8f1dp/HLPZ2QGIOscsyr9eKJD12FJU1V+Njj+/H1Z4/D53LgB3deghvXz8395ARRRrBilLNPlo0bN/K9e/eWehkzyrGeMfSORXHtymbL+6gqx/X/uQtvDIbwxa3r8P4rFxs/27a/G5uWNiZ59TuP92HvG1qzMa/TgbuvWQK/e+rncUXPdxctFBqqPPj7TYsyVgsf7wmiY2DCMqOmnNlxtBevdmobxx6nA7ddvsBIV1VVjif2deGCBbVYrbcQJohygTG2j3O+Met9SOTLmx++cBpfefoYRZEEQaSRj8iTXVPmvO+KhegLRnFVnhuoBEEQZkjky5warwv/dtPaUi+DIAibQnnyBEEQFQyJPEEQRAVDIk8QBFHBkMgTBEFUMCTyBEEQFQyJPEEQRAVDIk8QBFHBkMgTBEFUMGXR1oAx1g/gTM47ZqYJgHVjcPtBx1Pe0PGUN7PteBZxzq0bYKFMRL4QGGN7c/VusBN0POUNHU95Q8eTDtk1BEEQFQyJPEEQRAVTCSL/YKkXMM3Q8ZQ3dDzlDR1PCrb35AmCIAhrKiGSJwiCICwgkScIgqhgbC3yjLEbGWPHGWOnGGOfKvV6JgtjbAFjbCdj7Ahj7DBj7KP67Q2MsecYYyf1/+tLvdbJwBhzMMZeY4w9pX+/hDG2W3+fHmeMuUu9xnxhjNUxxp5gjB1jjB1ljF1p5/eHMfZx/bN2iDH2GGPMa6f3hzH2Y8ZYH2PskOm2jO8H0/gv/bgOMMYuKd3KM2NxPF/XP28HGGO/ZYzVmX72af14jjPG3pbPa9hW5BljDgDfA/B2AGsBvI8xZrcRSjKAT3DO1wLYBOBe/Rg+BWAH53wFgB3693biowCOmr7/KoBvcc6XAxgGcHdJVjU1vgPgGc75agAXQjsuW74/jLE2AB8BsJFzvh6AA8BtsNf78xMAN6bcZvV+vB3ACv3fPQC+P0NrnAw/QfrxPAdgPef8AgAnAHwaAHRtuA3AOv0xD+g6mBXbijyAywGc4py3c85jAH4JYGuJ1zQpOOfnOeev6l8HoQlIG7TjeES/2yMAbinNCicPY2w+gHcC+JH+PQNwPYAn9LvY5ngYY7UA3gzgIQDgnMc45yOw8fsDbeSnjzHmBOAHcB42en84538CMJRys9X7sRXAT7nGywDqGGNzZ2al+ZHpeDjnf+Scy/q3LwOYr3+9FcAvOedRznkHgFPQdDArdhb5NgBnTd936bfZEsbYYgAXA9gNoJVzfl7/UQ+A1hItayp8G8C/AFD17xsBjJg+tHZ6n5YA6AfwsG4//YgxVgWbvj+c824A3wDQCU3cRwHsg33fH4HV+1EJGvEPAJ7Wv57S8dhZ5CsGxlgAwK8BfIxzPmb+GddyXG2R58oYuwlAH+d8X6nXMk04AVwC4Puc84sBTCDFmrHZ+1MPLRpcAmAegCqkWwW2xk7vRy4YY5+BZun+opDnsbPIdwNYYPp+vn6brWCMuaAJ/C8457/Rb+4Vl5X6/32lWt8k
uRrAzYyxN6DZZ9dD87TrdHsAsNf71AWgi3O+W//+CWiib9f3ZwuADs55P+c8DuA30N4zu74/Aqv3w7YawRj7AICbANzBE8VMUzoeO4v8KwBW6JkBbmgbEk+WeE2TQverHwJwlHP+TdOPngRwl/71XQC2zfTapgLn/NOc8/mc88XQ3o/nOed3ANgJ4Fb9bnY6nh4AZxljq/SbNgM4Apu+P9Bsmk2MMb/+2RPHY8v3x4TV+/EkgPfrWTabAIyabJ2yhTF2IzTL82bOecj0oycB3MYY8zDGlkDbUN6T8wk557b9B+Ad0HafTwP4TKnXM4X1vwnapeUBAPv1f++A5mPvAHASwHYADaVe6xSO7ToAT+lfL9U/jKcA/AqAp9Trm8RxXARgr/4e/Q+Aeju/PwDuB3AMwCEAPwPgsdP7A+AxaPsJcWhXWndbvR8AGLQMvNMADkLLKir5MeRxPKegee9CE35guv9n9OM5DuDt+bwGtTUgCIKoYOxs1xAEQRA5IJEnCIKoYEjkCYIgKhgSeYIgiAqGRJ4gCKKCIZEnCIKoYEjkCYIgKpj/H5CT39nSB6BlAAAAAElFTkSuQmCC\n", + "text/plain": [ + "
" + ] + }, + "metadata": { + "needs_background": "light" + }, + "output_type": "display_data" + } + ], + "source": [ + "plt.plot(rewards);" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Of course, hyperparams should be tuned, target network will improve stability, but you've got the idea :)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.3" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/docs/intro.ipynb b/docs/intro.ipynb new file mode 100644 index 0000000..7069392 --- /dev/null +++ b/docs/intro.ipynb @@ -0,0 +1,1798 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Ptan intro\n", + "\n", + "[PTAN](https://github.com/Shmuma/ptan) (abbrevation of `PyTorch AgentNet`) is a small library I wrote to simplify RL experiments. It tries to keep the balance between two extremes:\n", + "\n", + "1. import lib, then write one line to train the DQN (very vivid example is [OpenAI baselines project](https://github.com/openai/baselines/))\n", + "2. implement everything from scratch\n", + "\n", + "First approach is very inflexible. It works good when you're using the library the way it supposed to be used. But if you want to do something fancy, you quickly find yourself hacking the lib and fighting with constraints imposed by the author rather than solving the problem you want to solve.\n", + "\n", + "Second extreme gives you *too much freedom* and requires implementing replay buffers and trajectory handling over and over again, which is error-prone, boring and inefficient.\n", + "\n", + "Several years ago I was tired of writing replay buffers and decided to implement something in between: not \"the universal RL lib\", but a set of classes to avoid writing boilerplate code.\n", + "\n", + "I used ptan to implement all the [examples for the \"Deep Reinforcement Learning Hands-On\" book](https://github.com/PacktPublishing/Deep-Reinforcement-Learning-Hands-On/), which include all the major DRL methods, which includes DQN, A3C, all tricks in Rainbow paper, DDPG, D4PG, PPO, TRPO, Acktr and AlphaGo Zero.\n", + "\n", + "## High-level overview\n", + "\n", + "From the high level, ptan provides you the following entities:\n", + "\n", + "* `Agent`: class which knows how to convert batch of observations to batch of actions to be executed. It can contain optional state, in case you need to track some info between consequent actions in one episode (for example, noise params for Ornstein–Uhlenbeck exploration). Normally, you can use [already existing Agent instances](https://github.com/Shmuma/ptan/blob/master/ptan/agent.py) or write your own subclass of `BaseAgent`.\n", + "* `ActionSelector`: small piece of logic which knows how to choose the action from some output of the network. Works in tandem with `Agent` https://github.com/Shmuma/ptan/blob/master/ptan/actions.py\n", + "* `ExperienceSource` and variations: by using the `Agent` instance and gym environment object can provide information about the trajectory from episodes. 
In the simplest form it is one single $(a, r, s')$ transition at a time, but functionality goes beyond this. Source file is https://github.com/Shmuma/ptan/blob/master/ptan/experience.py\n", + "* `ExperienceSourceBuffer` and friends: replay buffers with various characteristics. Includes simple replay buffer and two versions of prioritized replay buffers\n", + "* various utility classes, like `TargetNet` (both discrete and continuous), wrappers for time-series preprocessing (used for tracking training progress in TensorBoard)\n", + "* includes wrappers for Gym environments, for example, wrappers for Atari games (copy-pasted from OpenAI baselines with some tweaks): https://github.com/Shmuma/ptan/blob/master/ptan/common/wrappers.py\n", + "\n", + "And that's basically it. Total amount of sourse is just ~1500 lines of Python, which makes it possible to master in couple of hours.\n", + "\n", + "Below I'm going to demonstrate how ptan could be used to simplify RL methods implementation." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Installation\n", + "\n", + "We'll need gym, opencv python bindings. And pytorch, of course" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Collecting package metadata: done\n", + "Solving environment: done\n", + "\n", + "\n", + "==> WARNING: A newer version of conda exists. <==\n", + " current version: 4.6.2\n", + " latest version: 4.7.10\n", + "\n", + "Please update conda by running\n", + "\n", + " $ conda update -n base -c defaults conda\n", + "\n", + "\n", + "\n", + "# All requested packages already installed.\n", + "\n", + "Requirement already satisfied: ptan in /Users/shmuma/work/ptan (0.5)\n", + "Requirement already satisfied: torch==1.1.0 in /anaconda3/envs/ptan/lib/python3.7/site-packages (from ptan) (1.1.0)\n", + "Requirement already satisfied: gym in /anaconda3/envs/ptan/lib/python3.7/site-packages (from ptan) (0.12.1)\n", + "Requirement already satisfied: atari-py in /anaconda3/envs/ptan/lib/python3.7/site-packages (from ptan) (0.2.6)\n", + "Requirement already satisfied: numpy in /anaconda3/envs/ptan/lib/python3.7/site-packages (from ptan) (1.16.2)\n", + "Requirement already satisfied: opencv-python in /anaconda3/envs/ptan/lib/python3.7/site-packages (from ptan) (4.1.0.25)\n", + "Requirement already satisfied: pyglet>=1.2.0 in /anaconda3/envs/ptan/lib/python3.7/site-packages (from gym->ptan) (1.3.2)\n", + "Requirement already satisfied: six in /anaconda3/envs/ptan/lib/python3.7/site-packages (from gym->ptan) (1.12.0)\n", + "Requirement already satisfied: scipy in /anaconda3/envs/ptan/lib/python3.7/site-packages (from gym->ptan) (1.2.1)\n", + "Requirement already satisfied: requests>=2.0 in /anaconda3/envs/ptan/lib/python3.7/site-packages (from gym->ptan) (2.21.0)\n", + "Requirement already satisfied: future in /anaconda3/envs/ptan/lib/python3.7/site-packages (from pyglet>=1.2.0->gym->ptan) (0.17.1)\n", + "Requirement already satisfied: urllib3<1.25,>=1.21.1 in /anaconda3/envs/ptan/lib/python3.7/site-packages (from requests>=2.0->gym->ptan) (1.24.1)\n", + "Requirement already satisfied: idna<2.9,>=2.5 in /anaconda3/envs/ptan/lib/python3.7/site-packages (from requests>=2.0->gym->ptan) (2.8)\n", + "Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /anaconda3/envs/ptan/lib/python3.7/site-packages (from requests>=2.0->gym->ptan) (3.0.4)\n", + "Requirement already satisfied: certifi>=2017.4.17 in 
/anaconda3/envs/ptan/lib/python3.7/site-packages (from requests>=2.0->gym->ptan) (2019.6.16)\n" + ] + } + ], + "source": [ + "!conda install pytorch torchvision -c pytorch\n", + "!pip install ptan" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Imports" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "import ptan\n", + "import gym\n", + "import numpy as np\n", + "from typing import List, Any, Optional, Tuple\n", + "\n", + "import torch\n", + "import torch.nn as nn\n", + "import torch.nn.functional as F\n", + "import torch.optim as optim\n", + "import matplotlib.pylab as plt\n", + "\n", + "%matplotlib inline" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Action selector\n", + "\n", + "https://github.com/Shmuma/ptan/blob/master/ptan/actions.py\n", + "\n", + "Helps to go from network output to concrete action values. Most common cases:\n", + "* Argmax: commonly used by Q-value methods, when the network predicts Q-values for set of actions and the desired action is the action with the largest Q\n", + "* Policy-based: network outputs the probability distribution (in form of logits or normalized distribution) and action need to be sampled from this distribution. Used commonly by PG-methods\n", + "\n", + "Action selector is used by the `Agent`, and rarely need to be customized (but you have this option). Concrete classes which could be used:\n", + "* [`ArgmaxActionSelector`](https://github.com/Shmuma/ptan/blob/master/ptan/actions.py#L12): applies `argmax` on the second axis of passed tensor (matrix is assumed)\n", + "* [`ProbabilityActionSelector`](https://github.com/Shmuma/ptan/blob/master/ptan/actions.py#L36): samples from probability distribution of discrete set of actions\n", + "* [`EpsilonGreedyActionSelector`](https://github.com/Shmuma/ptan/blob/master/ptan/actions.py#L21): has parameter $\\epsilon$ which specifies the probability of random action to be taken. 
\n", + "\n", + "All the classes assume numpy arrays to be passed to them\n" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([[ 1, 2, 3],\n", + " [ 1, -1, 0]])" + ] + }, + "execution_count": 3, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "q_vals = np.array([[1, 2, 3], [1, -1, 0]])\n", + "q_vals" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([2, 0])" + ] + }, + "execution_count": 4, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "selector = ptan.actions.ArgmaxActionSelector()\n", + "selector(q_vals)" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([2, 0])" + ] + }, + "execution_count": 5, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "selector = ptan.actions.EpsilonGreedyActionSelector(epsilon=0.0)\n", + "selector(q_vals)\n", + "# have to be the same result, as episilon is 0 (no random actions)" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([1, 1])" + ] + }, + "execution_count": 6, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "selector = ptan.actions.EpsilonGreedyActionSelector(epsilon=1.0)\n", + "selector(q_vals)\n", + "# will be random" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[2 2 1]\n", + "[1 2 1]\n", + "[1 2 0]\n", + "[1 2 1]\n", + "[1 2 1]\n", + "[1 2 1]\n", + "[0 2 0]\n", + "[1 2 0]\n", + "[1 2 1]\n", + "[1 2 0]\n" + ] + } + ], + "source": [ + "# here we sample from probability distribution (have to be normalized)\n", + "selector = ptan.actions.ProbabilityActionSelector()\n", + "for _ in range(10):\n", + " acts = selector(np.array([\n", + " [0.1, 0.8, 0.1],\n", + " [0.0, 0.0, 1.0],\n", + " [0.5, 0.5, 0.0]\n", + " ]))\n", + " print(acts)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Agent class\n", + "\n", + "`Agent` is class which knows how to convert observations into actions. 
There are three most common approaches:\n", + "* **Q-function**: NN predicts Q-values for actions, and the action is $argmax Q(s)$\n", + "* **Policy-based**: NN predicts a probability distribution over actions $\pi(s)$, you sample from this distribution and get the action to execute\n", + "* **Continuous control**: NN predicts $\mu(s)$, the continuous control parameters, which are directly the actions to execute.\n", + "\n", + "The third case is trivial; the first two approaches are implemented in `ptan` and can be reused without any coding: [`DQNAgent`](https://github.com/Shmuma/ptan/blob/master/ptan/agent.py#L55) and [`PolicyAgent`](https://github.com/Shmuma/ptan/blob/master/ptan/agent.py#L104).\n", + "\n", + "But in reality, you often need to implement your own agent. Some of the reasons:\n", + "* You have a fancy network architecture -- a mixture of continuous and discrete action spaces, or multi-modal observations (text and pixels, for example)\n", + "* You want to use non-standard exploration strategies, for example the Ornstein–Uhlenbeck process (a very popular exploration strategy in the continuous control domain)\n", + "* You have a POMDP environment and your decisions are defined not only by observations, but also by some internal agent state (which is also the case for Ornstein–Uhlenbeck exploration)\n", + "\n", + "All those cases are easily supported by subclassing the `BaseAgent` class; in the TextWorld tutorial we'll do exactly this, and a minimal sketch of such a subclass is shown a bit further below.\n", + "\n", + "Below is an example of how the provided `DQNAgent` and `PolicyAgent` could be used." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## DQNAgent\n", + "\n", + "Suppose we have a NN which produces Q-values from observations. `DQNAgent` takes a batch of observations on input (as a numpy array), applies the network to them to get Q-values, then uses the provided `ActionSelector` to convert the Q-values to indices of actions.\n", + "\n", + "Below is a small example. For simplicity, our network always produces the same output for any input batch" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [], + "source": [ + "class Net(nn.Module):\n", + " def __init__(self, actions: int):\n", + " super(Net, self).__init__()\n", + " self.actions = actions\n", + " \n", + " def forward(self, x):\n", + " # we always produce a diagonal tensor of shape (batch_size, actions)\n", + " return torch.eye(x.size()[0], self.actions)" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [], + "source": [ + "net = Net(actions=3)" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[1., 0., 0.],\n", + " [0., 1., 0.]])" + ] + }, + "execution_count": 10, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "net(torch.zeros(2, 10))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "So, let's start with a simple $argmax$ policy. The agent will return actions corresponding to the 1s in the net output."
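, + "\n", + "Before the `DQNAgent` example, here is the promised minimal sketch of a custom `BaseAgent` subclass: a hypothetical stateful agent which keeps a per-environment noise scale in its state and decays it on every call. The class and argument names are made up for illustration and are not part of ptan; the only real requirement is the `__call__(observations, state)` interface returning `(actions, new_state)`:\n", + "\n", + "```python\n", + "class NoisyArgmaxAgent(ptan.agent.BaseAgent):\n", + "    def __init__(self, model, noise_decay=0.99):\n", + "        self.model = model\n", + "        self.noise_decay = noise_decay\n", + "\n", + "    def __call__(self, observations, state=None):\n", + "        if state is None:\n", + "            state = [1.0] * len(observations)  # initial noise scale per environment\n", + "        q = self.model(torch.tensor(observations)).data.numpy()\n", + "        noise = np.random.normal(size=q.shape) * np.array(state)[:, None]\n", + "        actions = np.argmax(q + noise, axis=1)\n", + "        new_state = [s * self.noise_decay for s in state]  # decay the noise over time\n", + "        return actions, new_state\n", + "```"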
+ ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [], + "source": [ + "selector = ptan.actions.ArgmaxActionSelector()\n", + "agent = ptan.agent.DQNAgent(dqn_model=net, action_selector=selector, device=\"cpu\")\n", + "# note that you need to tell agent are you using GPU or not by passing device, by default it equals to \"cpu\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we can pass the agent some observations (which will be ignored as our example is trivial), the output will be the actions according to NN output." + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(array([0, 1]), [None, None])" + ] + }, + "execution_count": 12, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "agent(torch.zeros(2, 5))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The output from the agent is a tuple with two components:\n", + "1. numpy array with actions to be executed -- in our case of discrete actions, they are indices\n", + "2. list with agent's internal state. This is used for stateful agents, and is a list of None in our case. As our agent is stateless, you can ignore it" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now let's try to make the agent with epsilon-greedy exploration strategy. For this, we need just pass a different action selector and that's done." + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": {}, + "outputs": [], + "source": [ + "selector = ptan.actions.EpsilonGreedyActionSelector(epsilon=1.0)\n", + "agent = ptan.agent.DQNAgent(dqn_model=net, action_selector=selector)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As epsilon is 1, all the actions will be random, regardless of network's output" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([1, 0, 1, 1, 0, 2, 2, 1, 0, 0])" + ] + }, + "execution_count": 14, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "agent(torch.zeros(10, 5))[0]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "But we can change the epsilon value on the fly, which is very handy during the training, when we supposed to anneal epsilon over time." + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([2, 1, 2, 2, 0, 1, 2, 0, 0, 0])" + ] + }, + "execution_count": 15, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "selector.epsilon = 0.5\n", + "agent(torch.zeros(10, 5))[0]" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([0, 1, 2, 0, 0, 0, 0, 0, 0, 0])" + ] + }, + "execution_count": 16, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "selector.epsilon = 0.1\n", + "agent(torch.zeros(10, 5))[0]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## PolicyAgent\n", + "\n", + "`PolicyAgent` expects the network to produce policy distribution over discrete set of actions. Policy distribution could be either logits (unnormalized) or normalized distribution. 
In practice you should always use logits to improve numerical stability.\n", + "\n", + "Let's reimplement the sample above, but now the network will produce a policy distribution" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "metadata": {}, + "outputs": [], + "source": [ + "class Net(nn.Module):\n", + " def __init__(self, actions: int):\n", + " super(Net, self).__init__()\n", + " self.actions = actions\n", + " \n", + " def forward(self, x):\n", + " # Now we produce a tensor with the first two actions having the same logit scores\n", + " res = torch.zeros((x.size()[0], self.actions), dtype=torch.float32)\n", + " res[:, 0] = 1\n", + " res[:, 1] = 1\n", + " return res" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "tensor([[1., 1., 0., 0., 0.],\n", + " [1., 1., 0., 0., 0.],\n", + " [1., 1., 0., 0., 0.],\n", + " [1., 1., 0., 0., 0.],\n", + " [1., 1., 0., 0., 0.],\n", + " [1., 1., 0., 0., 0.]])" + ] + }, + "execution_count": 18, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "net = Net(actions=5)\n", + "net(torch.zeros(6, 10))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we need to use `ProbabilityActionSelector`. Also note the argument `apply_softmax=True`, which tells the agent that the output is not normalized." + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": {}, + "outputs": [], + "source": [ + "selector = ptan.actions.ProbabilityActionSelector()\n", + "agent = ptan.agent.PolicyAgent(model=net, action_selector=selector, apply_softmax=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we can pass the agent observations (fake, as before) and get some actions. As before, the agent returns a tuple with actions and internal state, which we will ignore" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([4, 0, 4, 0, 0, 0])" + ] + }, + "execution_count": 20, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "agent(torch.zeros(6, 5))[0]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Please note that softmax assigns non-zero probabilities to zero logits, so actions 2-4 could still be sampled (just less likely than 0 and 1)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Experience source\n", + "\n", + "The `Agent` abstraction described above allows us to implement environment communication in a generic way. This communication happens in the form of trajectories, produced by applying the agent's actions to a gym environment.\n", + "\n", + "At a high level, experience source classes take the agent instance and the environment, and provide you step-by-step data from the trajectories. The functionality of those classes includes:\n", + "1. support for multiple environments being stepped at the same time. This allows efficient GPU utilization, as a batch of observations is processed by the agent at once.\n", + "2. trajectories can be preprocessed and presented in a convenient form for further training. For example, there is an implementation of sub-trajectory rollouts, which is convenient for DQN and n-step DQN, when we're not interested in the intermediate steps of n-step subtrajectories, only in the first and last observations plus the total reward for the subtrajectory.\n", + "3. 
support for vectorized environments from OpenAI Universe\n", + "\n", + "So, the experience source classes act as a \"magic black box\", hiding the environment interaction and trajectory handling complexities from the library user. But the overall ptan philosophy is to be flexible and extensible, so, if you want, you can subclass one of the existing classes or implement your own version if necessary. \n", + "\n", + "The following classes are provided:\n", + "* [`ExperienceSource`](https://github.com/Shmuma/ptan/blob/master/ptan/experience.py#L18): using the agent and a set of environments, produces n-step subtrajectories with all intermediate steps.\n", + "* [`ExperienceSourceFirstLast`](https://github.com/Shmuma/ptan/blob/master/ptan/experience.py#L161): the same as `ExperienceSource`, but instead of the full subtrajectory (with all steps) it keeps only the first and last steps, with proper reward accumulation in between. This can save lots of memory in case of N-step DQN or A2C rollouts.\n", + "* [`ExperienceSourceRollouts`](https://github.com/Shmuma/ptan/blob/master/ptan/experience.py#L200): follows the A3C rollouts scheme described in Mnih's paper about Atari games.\n", + "\n", + "All the classes are written to be efficient both in terms of CPU and memory, which is not very important for toy problems, but might become an issue when you want to solve Atari games, keeping 10M samples in a replay buffer on commodity hardware.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Toy gym environment\n", + "\n", + "For demonstration purposes, we'll implement a very simple gym environment with a small, predictable observation space to show how the experience source classes work" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "metadata": {}, + "outputs": [], + "source": [ + "class ToyEnv(gym.Env):\n", + " \"\"\"\n", + " Environment with observations 0..4 and actions 0..2\n", + " Observations are rotated sequentially mod 5, reward is equal to the given action.\n", + " Episodes have a fixed length of 10\n", + " \"\"\"\n", + " def __init__(self):\n", + " super(ToyEnv, self).__init__()\n", + " self.observation_space = gym.spaces.Discrete(n=5)\n", + " self.action_space = gym.spaces.Discrete(n=3)\n", + " self.step_index = 0\n", + " \n", + " def reset(self):\n", + " self.step_index = 0\n", + " return self.step_index\n", + " \n", + " def step(self, action):\n", + " is_done = self.step_index == 10\n", + " if is_done:\n", + " return self.step_index % self.observation_space.n, 0.0, is_done, {}\n", + " self.step_index += 1\n", + " return self.step_index % self.observation_space.n, float(action), self.step_index == 10, {}" + ] + }, + { + "cell_type": "code", + "execution_count": 22, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "0" + ] + }, + "execution_count": 22, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "env = ToyEnv()\n", + "env.reset()" + ] + }, + { + "cell_type": "code", + "execution_count": 23, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(1, 1.0, False, {})" + ] + }, + "execution_count": 23, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "env.step(1)" + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(2, 2.0, False, {})" + ] + }, + "execution_count": 24, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "env.step(2)" + ] + }, + {
"cell_type": "code", + "execution_count": 25, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(3, 0.0, False, {})\n", + "(4, 0.0, False, {})\n", + "(0, 0.0, False, {})\n", + "(1, 0.0, False, {})\n", + "(2, 0.0, False, {})\n", + "(3, 0.0, False, {})\n", + "(4, 0.0, False, {})\n", + "(0, 0.0, True, {})\n", + "(0, 0.0, True, {})\n", + "(0, 0.0, True, {})\n" + ] + } + ], + "source": [ + "for _ in range(10):\n", + " r = env.step(0)\n", + " print(r)" + ] + }, + { + "cell_type": "code", + "execution_count": 26, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "0" + ] + }, + "execution_count": 26, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "env.reset()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We'll also need the agent which always generates fixed action" + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "metadata": {}, + "outputs": [], + "source": [ + "class DullAgent(ptan.agent.BaseAgent):\n", + " def __init__(self, action: int):\n", + " self.action = action\n", + " \n", + " def __call__(self, observations: List[Any], state: Optional[List] = None) -> Tuple[List[int], Optional[List]]:\n", + " return [self.action for _ in observations], state" + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "([1, 1], None)" + ] + }, + "execution_count": 28, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "agent = DullAgent(action=1)\n", + "agent([1, 2])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## ExperienceSource class\n", + "\n", + "Generates chunks of trajectories of the given length.\n", + "\n", + "Constructor arguments:\n", + "* gym environment to be use (could be the list of environments or one single environment)\n", + "* the agent\n", + "* `steps_count=2`: the length of sub-trajectories to be generated\n", + "* `steps_delta=1`: step in subtrajectories\n", + "* `vectorized=False`: if true, environment is OpenAI Universe vectorized environment (more about them in MiniWoB tutorial)" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "metadata": {}, + "outputs": [], + "source": [ + "env = ToyEnv()\n", + "agent = DullAgent(action=1)\n", + "exp_source = ptan.experience.ExperienceSource(env=env, agent=agent, steps_count=2)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "All experience source classes are providing standard python's iterator interface, so, you can just iterate over them to get sub-trajectories." + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n" + ] + } + ], + "source": [ + "for exp in exp_source:\n", + " print(exp)\n", + " break" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The result is a tuple of length `steps_count` (in our case we requested sub-trajectories of length 2). 
Every entry is a namedtuple object with the following fields:\n", + "* state: state we observed before taking the action\n", + "* action: action we've done\n", + "* reward: immediate reward we've got from env\n", + "* done: was the episode done or not" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=False))\n", + "(Experience(state=4, action=1, reward=1.0, done=False), Experience(state=0, action=1, reward=1.0, done=False))\n", + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=True))\n", + "(Experience(state=4, action=1, reward=1.0, done=True),)\n" + ] + } + ], + "source": [ + "for exp in exp_source:\n", + " print(exp)\n", + " if exp[0].done:\n", + " break" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Please note, that partial trajectories are alse returned, so, we can handle end of episodes properly.\n", + "\n", + "At the end of episode, environment is being reset automatically, so, we don't need to bother about them, just keep iterating:" + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=False))\n", + "(Experience(state=4, action=1, reward=1.0, done=False), Experience(state=0, action=1, reward=1.0, done=False))\n", + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=True))\n", + "(Experience(state=4, action=1, reward=1.0, done=True),)\n", + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), 
Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=False))\n", + "(Experience(state=4, action=1, reward=1.0, done=False), Experience(state=0, action=1, reward=1.0, done=False))\n", + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n" + ] + } + ], + "source": [ + "for idx, exp in enumerate(exp_source):\n", + " print(exp)\n", + " if idx > 15:\n", + " break" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "That's very convenient, especially in cases when we have several environments running in parallel (several instances of the same Atari game, for example).\n", + "\n", + "Let's increase length of our experience chunks." + ] + }, + { + "cell_type": "code", + "execution_count": 33, + "metadata": {}, + "outputs": [], + "source": [ + "exp_source = ptan.experience.ExperienceSource(env=env, agent=agent, steps_count=4)" + ] + }, + { + "cell_type": "code", + "execution_count": 34, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=False), Experience(state=0, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=False), Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=4, action=1, reward=1.0, done=False), Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=True))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=True))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=True))\n", + "(Experience(state=4, action=1, reward=1.0, done=True),)\n" + ] + } + ], + "source": [ + "for exp in exp_source:\n", + " print(exp)\n", + " if exp[0].done:\n", + " break" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now, we're getting subtrajectories of length 4, including the final pieces of trajectory.\n", + "\n", + "Let's give 
several environments to the experience source." + ] + }, + { + "cell_type": "code", + "execution_count": 35, + "metadata": {}, + "outputs": [], + "source": [ + "exp_source = ptan.experience.ExperienceSource(env=[ToyEnv(), ToyEnv()], agent=agent, steps_count=2)" + ] + }, + { + "cell_type": "code", + "execution_count": 36, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=False))\n", + "(Experience(state=4, action=1, reward=1.0, done=False), Experience(state=0, action=1, reward=1.0, done=False))\n", + "(Experience(state=4, action=1, reward=1.0, done=False), Experience(state=0, action=1, reward=1.0, done=False))\n", + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=0, action=1, reward=1.0, done=False), Experience(state=1, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=1, action=1, reward=1.0, done=False), Experience(state=2, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=2, action=1, reward=1.0, done=False), Experience(state=3, action=1, reward=1.0, done=False))\n", + "(Experience(state=3, action=1, reward=1.0, done=False), Experience(state=4, action=1, reward=1.0, done=True))\n" + ] + } + ], + "source": [ + "for idx, exp in enumerate(exp_source):\n", + " print(exp)\n", + " if idx > 15:\n", + " break" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now, our environments are being iterated on a round-robin fashion, giving us access to trajectories from both environment step-by-step. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## ExperienceSourceFirstLast\n", + "\n", + "Class `ExperienceSource` provides us full subtrajectories of given length as the list of $(s, a, r)$ objects. The next state $s'$ is returned in the next tuple, which is not always convenient. \n", + "\n", + "For example, in DQN training, we want to have tuples $(s, a, r, s')$ at once to do 1-step Bellman approximation during the training. In addition, some extension of DQN, like n-step DQN might want to collapse longer sequences of observations into (first-state, action, total-reward-for-n-steps, state-after-step-n).\n", + "\n", + "To support this in a generic way, simple subclass of `ExperienceSource` is implemented: `ExperienceSourceFirstLast`. 
It accepts almost the same constructor arguments, but returns different data." + ] + }, + { + "cell_type": "code", + "execution_count": 37, + "metadata": {}, + "outputs": [], + "source": [ + "exp_source = ptan.experience.ExperienceSourceFirstLast(env, agent, gamma=1.0, steps_count=1)" + ] + }, + { + "cell_type": "code", + "execution_count": 38, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "ExperienceFirstLast(state=3, action=1, reward=1.0, last_state=4)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=0)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "ExperienceFirstLast(state=3, action=1, reward=1.0, last_state=4)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=None)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n" + ] + } + ], + "source": [ + "for idx, exp in enumerate(exp_source):\n", + " print(exp)\n", + " if idx > 10:\n", + " break" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now it returns a single object on every iteration, which is again a namedtuple with the following fields:\n", + "* `state`: the state which we used to decide on the action to take\n", + "* `action`: the action we've taken at this step\n", + "* `reward`: accumulated reward over `steps_count` steps (in our case, `steps_count=1`, so it is equal to the immediate reward)\n", + "* `last_state`: the state we got after executing the action. If the episode ends, we have `None` here\n", + "\n", + "This data is much more convenient for DQN training, as we can apply the Bellman approximation directly to it (a short sketch of this is shown below).\n", + "\n", + "Let's check the result with a larger number of steps."
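, + "\n", + "To make 'applying the Bellman approximation directly' concrete, here is a tiny sketch of how a 1-step target could be computed from a single `ExperienceFirstLast` sample. This is illustration only, not ptan code and not the exact training code used later; `q_net` and `gamma` are placeholders:\n", + "\n", + "```python\n", + "def bellman_target(exp, q_net, gamma=0.9):\n", + "    # if the episode ended at this sample, there is no bootstrap term\n", + "    if exp.last_state is None:\n", + "        return exp.reward\n", + "    last_q = q_net(torch.tensor([exp.last_state]).float())\n", + "    return exp.reward + gamma * last_q.max().item()\n", + "```"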
+ ] + }, + { + "cell_type": "code", + "execution_count": 39, + "metadata": {}, + "outputs": [], + "source": [ + "exp_source = ptan.experience.ExperienceSourceFirstLast(env, agent, gamma=1.0, steps_count=2)" + ] + }, + { + "cell_type": "code", + "execution_count": 40, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "ExperienceFirstLast(state=0, action=1, reward=2.0, last_state=2)\n", + "ExperienceFirstLast(state=1, action=1, reward=2.0, last_state=3)\n", + "ExperienceFirstLast(state=2, action=1, reward=2.0, last_state=4)\n", + "ExperienceFirstLast(state=3, action=1, reward=2.0, last_state=0)\n", + "ExperienceFirstLast(state=4, action=1, reward=2.0, last_state=1)\n", + "ExperienceFirstLast(state=0, action=1, reward=2.0, last_state=2)\n", + "ExperienceFirstLast(state=1, action=1, reward=2.0, last_state=3)\n", + "ExperienceFirstLast(state=2, action=1, reward=2.0, last_state=4)\n", + "ExperienceFirstLast(state=3, action=1, reward=2.0, last_state=None)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=None)\n", + "ExperienceFirstLast(state=0, action=1, reward=2.0, last_state=2)\n", + "ExperienceFirstLast(state=1, action=1, reward=2.0, last_state=3)\n" + ] + } + ], + "source": [ + "for idx, exp in enumerate(exp_source):\n", + " print(exp)\n", + " if idx > 10:\n", + " break" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "So, now we're collapsing two steps on every iteration, calculating immediate reward (that's why our reward=2.0 for most of the samples).\n", + "\n", + "More interesting samples are at the end of the episode:\n", + "```\n", + "ExperienceFirstLast(state=3, action=1, reward=2.0, last_state=None)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=None)\n", + "```\n", + "\n", + "As episode ends, we have `last_state=None` in those samples, but additionally, we calculating the tail of the episode. Those tiny details are very easy to implement wrong, if you're doing all the trajectory handling yourself." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Experience source buffers\n", + "\n", + "In DQN we rarely dealing with immediate experience samples, as they are heavily correlated, which lead to instability in training. \n", + "\n", + "Normally, we have large replay buffers, which are being populated with experience pieces. Then the buffer is being sampled (randomly or with priority weights) to get the training batch. Replay buffer normally has the maximum capacity, so old samples are being pushed out when replay buffer reaches the limit.\n", + "\n", + "There are several implementation tricks here, which becomes extremely important when you need to deal with large problems:\n", + "* how to efficiently sample from large buffer\n", + "* how to push old samples from the buffer\n", + "* in case of prioritized buffer, how priorities need to be maintained and handled in the most efficient way.\n", + "\n", + "All this becomes quite non-trivial task, if you want to solve atari, keeping 10-100M samples where every sample is an image from the game. Small mistake can lead to 10-100x memory increase and major slowdowns of the training process.\n", + "\n", + "Ptan provides several variants of replay buffers, which provide simple integration with `ExperienceSource` and `Agent` machinery. 
Normally, all you need to do is ask the buffer to pull a new sample from the source and then sample a training batch.\n", + "\n", + "Provided classes:\n", + "* [`ExperienceReplayBuffer`](https://github.com/Shmuma/ptan/blob/master/ptan/experience.py#L327): a simple replay buffer of predefined size with uniform sampling\n", + "* [`PrioReplayBufferNaive`](https://github.com/Shmuma/ptan/blob/master/ptan/experience.py#L371): a simple, but not very efficient prioritized replay buffer implementation. The complexity of sampling is O(n), which might become an issue with large buffers\n", + "* [`PrioritizedReplayBuffer`](https://github.com/Shmuma/ptan/blob/master/ptan/experience.py#L414): uses segment trees for sampling, which makes the code cryptic, but gives O(log(n)) sampling complexity.\n", + "\n", + "Below is an example of the simple replay buffer; if you want, you can find examples of `PrioritizedReplayBuffer` usage in the examples for chapter 7 of my book: https://github.com/PacktPublishing/Deep-Reinforcement-Learning-Hands-On/blob/master/Chapter07/05_dqn_prio_replay.py" + ] + }, + { + "cell_type": "code", + "execution_count": 41, + "metadata": {}, + "outputs": [], + "source": [ + "env = ToyEnv()\n", + "agent = DullAgent(action=1)\n", + "exp_source = ptan.experience.ExperienceSourceFirstLast(env, agent, gamma=1.0, steps_count=1)" + ] + }, + { + "cell_type": "code", + "execution_count": 42, + "metadata": {}, + "outputs": [], + "source": [ + "buffer = ptan.experience.ExperienceReplayBuffer(exp_source, buffer_size=100)" + ] + }, + { + "cell_type": "code", + "execution_count": 43, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "0" + ] + }, + "execution_count": 43, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "len(buffer)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "All replay buffers provide the following interface:\n", + "* the Python iterator interface, to walk over all the samples in the buffer\n", + "* method `populate(N)`, to get N samples from the experience source and put them into the buffer\n", + "* method `sample(N)`, to get a batch of N experience objects\n", + "\n", + "So, the normal training loop for DQN looks like an infinite repetition of the following steps:\n", + "1. call `buffer.populate(1)` to get a fresh sample from the environment\n", + "2. `batch = buffer.sample(BATCH_SIZE)` to get a batch from the buffer\n", + "3. calculate the loss on the sampled batch\n", + "4. backpropagate\n", + "5. repeat until convergence (hopefully)\n", + "\n", + "All the rest happens automatically -- resetting the environment, sub-trajectory handling, buffer size maintenance, etc."
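, + "\n", + "As a rough skeleton (the loss calculation is just a placeholder here -- the CartPole example at the end of this notebook shows a complete, working version), the loop could look like this:\n", + "\n", + "```python\n", + "while True:\n", + "    buffer.populate(1)                 # 1. get a fresh sample from the environment\n", + "    if len(buffer) < 2 * BATCH_SIZE:   # wait until the buffer has enough data\n", + "        continue\n", + "    batch = buffer.sample(BATCH_SIZE)  # 2. sample a training batch\n", + "    optimizer.zero_grad()\n", + "    loss_v = calc_loss(batch, net)     # 3. compute the loss (calc_loss is hypothetical)\n", + "    loss_v.backward()                  # 4. backpropagate\n", + "    optimizer.step()\n", + "```"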
+ ] + }, + { + "cell_type": "code", + "execution_count": 47, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=None)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=0)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=0)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=3, action=1, reward=1.0, last_state=4)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=0)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=None)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=0)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=4, action=1, reward=1.0, last_state=None)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=2, action=1, reward=1.0, last_state=3)\n", + "Train time, 4 batch samples:\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=0, action=1, reward=1.0, last_state=1)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n", + "ExperienceFirstLast(state=1, action=1, reward=1.0, last_state=2)\n" + ] + } + ], + "source": [ + "for step in range(10):\n", + " buffer.populate(1)\n", + " # if buffer is small enough, do nothing\n", + " if 
len(buffer) < 5:\n", + " continue\n", + " batch = buffer.sample(4)\n", + " print(\"Train time, %d batch samples:\" % len(batch))\n", + " for s in batch:\n", + " print(s)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Monitoring the training\n", + "\n", + "Normally, if we're running the training process, we want to keep an eye on several metrics to check how good our method is doing. Minimal set of things to watch includes:\n", + "* training loss (several loss components in case of A2C, for example)\n", + "* values predicted by the network (in case of DQN)\n", + "* statistics about episode rewards (to check that our agent improves over time)\n", + "* statistics about the length of the episode, as this is normally a proxy for reward\n", + "\n", + "First two items are being calculated in the training loop, but the rest two values are not that easy to get. If we're implementing everything from scratch, we need to track the current episode and when it ends, track the total reward and length.\n", + "\n", + "Ptan simplifies this by providing the method in experience source, which returns this information in one call. Method `pop_rewards_steps()` returns the list, where each entry is the information about the episode which since the lass call to the method. If no episodes have completed between the calls, empty list is returned. \n", + "\n", + "Every item is a tuple with (total_reword, total_steps). \n", + "\n", + "So, the only thing you need to do to monitor the training progress, is to periodically call method `pop_rewards_steps()` in the training loop and handle the results (printing on console or sending to TensorBoard, or whatever)." + ] + }, + { + "cell_type": "code", + "execution_count": 48, + "metadata": {}, + "outputs": [], + "source": [ + "r = exp_source.pop_rewards_steps()" + ] + }, + { + "cell_type": "code", + "execution_count": 49, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[(10.0, 10)]" + ] + }, + "execution_count": 49, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "r" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We've one one episode completed so far, it got reward 10.0 and total amount of steps was 10" + ] + }, + { + "cell_type": "code", + "execution_count": 50, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[]" + ] + }, + "execution_count": 50, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "exp_source.pop_rewards_steps()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Other tools\n", + "\n", + "There are several smaller things, which could be used, like [`TargetNet`](https://github.com/Shmuma/ptan/blob/master/ptan/agent.py#L79), which allows you to keep a copy of model weights and syncronize them from time to time (which is essential for DQN to converge), or a [set of utils](https://github.com/Shmuma/ptan/blob/master/ptan/common/utils.py) to smooth time series for better training progress visualisation.\n", + "\n", + "There is [PyTorch Ignite bindings](https://github.com/Shmuma/ptan/blob/master/ptan/ignite.py) which implement integration of ptan with ignite framework:\n", + "\n", + "* install end of episode hooks `EpisodeEvents.EPISODE_COMPLETED`\n", + "* handle situations when reward reaches boundary `EpisodeEvents.BOUND_REWARD_REACHED`\n", + "* measure performance of the training process: `EpisodeFPSHandler`\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": 
[ + "# Simple CartPole solver\n", + "\n", + "Below is very simple DQN version which solves CartPole, just to demonstrate how all things fits together in real life." + ] + }, + { + "cell_type": "code", + "execution_count": 51, + "metadata": {}, + "outputs": [], + "source": [ + "class Net(nn.Module):\n", + " def __init__(self, obs_size, hidden_size, n_actions):\n", + " super(Net, self).__init__()\n", + " self.net = nn.Sequential(\n", + " nn.Linear(obs_size, hidden_size),\n", + " nn.ReLU(),\n", + " nn.Linear(hidden_size, n_actions)\n", + " )\n", + "\n", + " def forward(self, x):\n", + " # CartPole is stupid -- they return double observations, rather than standard floats, so, the cast here\n", + " return self.net(x.float())" + ] + }, + { + "cell_type": "code", + "execution_count": 52, + "metadata": {}, + "outputs": [], + "source": [ + "BATCH_SIZE = 64\n", + "REPLAY_SIZE = 1000\n", + "LR = 1e-3\n", + "GAMMA=0.9\n", + "EPS_DECAY=0.995" + ] + }, + { + "cell_type": "code", + "execution_count": 53, + "metadata": {}, + "outputs": [], + "source": [ + "env = gym.make(\"CartPole-v0\")" + ] + }, + { + "cell_type": "code", + "execution_count": 54, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "Net(\n", + " (net): Sequential(\n", + " (0): Linear(in_features=4, out_features=64, bias=True)\n", + " (1): ReLU()\n", + " (2): Linear(in_features=64, out_features=2, bias=True)\n", + " )\n", + ")" + ] + }, + "execution_count": 54, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "net = Net(obs_size=env.observation_space.shape[0], hidden_size=64, n_actions=env.action_space.n)\n", + "optimizer = optim.Adam(net.parameters(), LR)\n", + "net" + ] + }, + { + "cell_type": "code", + "execution_count": 55, + "metadata": {}, + "outputs": [], + "source": [ + "action_selector = ptan.actions.EpsilonGreedyActionSelector(epsilon=1.0)\n", + "agent = ptan.agent.DQNAgent(net, action_selector)\n", + "exp_source = ptan.experience.ExperienceSourceFirstLast(env, agent, gamma=GAMMA)\n", + "buffer = ptan.experience.ExperienceReplayBuffer(exp_source, buffer_size=REPLAY_SIZE)" + ] + }, + { + "cell_type": "code", + "execution_count": 56, + "metadata": {}, + "outputs": [], + "source": [ + "@torch.no_grad()\n", + "def unpack_batch(batch: List[ptan.experience.ExperienceFirstLast], net: nn.Module, gamma: float):\n", + " states = []\n", + " actions = []\n", + " rewards = []\n", + " done_masks = []\n", + " last_states = []\n", + " for exp in batch:\n", + " states.append(exp.state)\n", + " actions.append(exp.action)\n", + " rewards.append(exp.reward)\n", + " done_masks.append(exp.last_state is None)\n", + " if exp.last_state is None:\n", + " last_states.append(exp.state)\n", + " else:\n", + " last_states.append(exp.last_state)\n", + "\n", + " states_v = torch.tensor(states)\n", + " actions_v = torch.tensor(actions)\n", + " rewards_v = torch.tensor(rewards)\n", + " last_states_v = torch.tensor(last_states)\n", + " last_state_q_v = net(last_states_v)\n", + " best_last_q_v = torch.max(last_state_q_v, dim=1)[0]\n", + " best_last_q_v[done_masks] = 0.0\n", + " return states_v, actions_v, best_last_q_v + rewards_v" + ] + }, + { + "cell_type": "code", + "execution_count": 57, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "28: episode done, reward=27.000, steps=27, epsilon=1.00\n", + "43: episode done, reward=15.000, steps=15, epsilon=1.00\n", + "66: episode done, reward=23.000, steps=23, epsilon=1.00\n", + "85: episode done, reward=19.000, steps=19, 
epsilon=1.00\n", + "98: episode done, reward=13.000, steps=13, epsilon=1.00\n", + "111: episode done, reward=13.000, steps=13, epsilon=1.00\n", + "166: episode done, reward=55.000, steps=55, epsilon=1.00\n", + "191: episode done, reward=25.000, steps=25, epsilon=1.00\n", + "212: episode done, reward=21.000, steps=21, epsilon=0.94\n", + "231: episode done, reward=19.000, steps=19, epsilon=0.86\n", + "248: episode done, reward=17.000, steps=17, epsilon=0.79\n", + "266: episode done, reward=18.000, steps=18, epsilon=0.72\n", + "282: episode done, reward=16.000, steps=16, epsilon=0.66\n", + "298: episode done, reward=16.000, steps=16, epsilon=0.61\n", + "325: episode done, reward=27.000, steps=27, epsilon=0.53\n", + "337: episode done, reward=12.000, steps=12, epsilon=0.50\n", + "349: episode done, reward=12.000, steps=12, epsilon=0.47\n", + "362: episode done, reward=13.000, steps=13, epsilon=0.44\n", + "371: episode done, reward=9.000, steps=9, epsilon=0.42\n", + "387: episode done, reward=16.000, steps=16, epsilon=0.39\n", + "397: episode done, reward=10.000, steps=10, epsilon=0.37\n", + "408: episode done, reward=11.000, steps=11, epsilon=0.35\n", + "416: episode done, reward=8.000, steps=8, epsilon=0.34\n", + "436: episode done, reward=20.000, steps=20, epsilon=0.31\n", + "445: episode done, reward=9.000, steps=9, epsilon=0.29\n", + "458: episode done, reward=13.000, steps=13, epsilon=0.27\n", + "467: episode done, reward=9.000, steps=9, epsilon=0.26\n", + "476: episode done, reward=9.000, steps=9, epsilon=0.25\n", + "489: episode done, reward=13.000, steps=13, epsilon=0.23\n", + "501: episode done, reward=12.000, steps=12, epsilon=0.22\n", + "519: episode done, reward=18.000, steps=18, epsilon=0.20\n", + "533: episode done, reward=14.000, steps=14, epsilon=0.19\n", + "543: episode done, reward=10.000, steps=10, epsilon=0.18\n", + "553: episode done, reward=10.000, steps=10, epsilon=0.17\n", + "562: episode done, reward=9.000, steps=9, epsilon=0.16\n", + "571: episode done, reward=9.000, steps=9, epsilon=0.16\n", + "581: episode done, reward=10.000, steps=10, epsilon=0.15\n", + "593: episode done, reward=12.000, steps=12, epsilon=0.14\n", + "602: episode done, reward=9.000, steps=9, epsilon=0.13\n", + "613: episode done, reward=11.000, steps=11, epsilon=0.13\n", + "627: episode done, reward=14.000, steps=14, epsilon=0.12\n", + "640: episode done, reward=13.000, steps=13, epsilon=0.11\n", + "653: episode done, reward=13.000, steps=13, epsilon=0.10\n", + "664: episode done, reward=11.000, steps=11, epsilon=0.10\n", + "679: episode done, reward=15.000, steps=15, epsilon=0.09\n", + "700: episode done, reward=21.000, steps=21, epsilon=0.08\n", + "725: episode done, reward=25.000, steps=25, epsilon=0.07\n", + "736: episode done, reward=11.000, steps=11, epsilon=0.07\n", + "746: episode done, reward=10.000, steps=10, epsilon=0.06\n", + "756: episode done, reward=10.000, steps=10, epsilon=0.06\n", + "766: episode done, reward=10.000, steps=10, epsilon=0.06\n", + "779: episode done, reward=13.000, steps=13, epsilon=0.05\n", + "794: episode done, reward=15.000, steps=15, epsilon=0.05\n", + "807: episode done, reward=13.000, steps=13, epsilon=0.05\n", + "818: episode done, reward=11.000, steps=11, epsilon=0.05\n", + "828: episode done, reward=10.000, steps=10, epsilon=0.04\n", + "837: episode done, reward=9.000, steps=9, epsilon=0.04\n", + "846: episode done, reward=9.000, steps=9, epsilon=0.04\n", + "856: episode done, reward=10.000, steps=10, epsilon=0.04\n", + "865: episode done, reward=9.000, 
steps=9, epsilon=0.04\n", + "875: episode done, reward=10.000, steps=10, epsilon=0.03\n", + "886: episode done, reward=11.000, steps=11, epsilon=0.03\n", + "897: episode done, reward=11.000, steps=11, epsilon=0.03\n", + "909: episode done, reward=12.000, steps=12, epsilon=0.03\n", + "934: episode done, reward=25.000, steps=25, epsilon=0.03\n", + "947: episode done, reward=13.000, steps=13, epsilon=0.02\n", + "961: episode done, reward=14.000, steps=14, epsilon=0.02\n", + "974: episode done, reward=13.000, steps=13, epsilon=0.02\n", + "986: episode done, reward=12.000, steps=12, epsilon=0.02\n", + "1018: episode done, reward=32.000, steps=32, epsilon=0.02\n", + "1060: episode done, reward=42.000, steps=42, epsilon=0.01\n", + "1079: episode done, reward=19.000, steps=19, epsilon=0.01\n", + "1131: episode done, reward=52.000, steps=52, epsilon=0.01\n", + "1175: episode done, reward=44.000, steps=44, epsilon=0.01\n", + "1275: episode done, reward=100.000, steps=100, epsilon=0.00\n", + "1309: episode done, reward=34.000, steps=34, epsilon=0.00\n", + "1339: episode done, reward=30.000, steps=30, epsilon=0.00\n", + "1405: episode done, reward=66.000, steps=66, epsilon=0.00\n", + "1500: episode done, reward=95.000, steps=95, epsilon=0.00\n", + "1535: episode done, reward=35.000, steps=35, epsilon=0.00\n", + "1556: episode done, reward=21.000, steps=21, epsilon=0.00\n", + "1580: episode done, reward=24.000, steps=24, epsilon=0.00\n", + "1625: episode done, reward=45.000, steps=45, epsilon=0.00\n", + "1654: episode done, reward=29.000, steps=29, epsilon=0.00\n", + "1684: episode done, reward=30.000, steps=30, epsilon=0.00\n", + "1714: episode done, reward=30.000, steps=30, epsilon=0.00\n", + "1748: episode done, reward=34.000, steps=34, epsilon=0.00\n", + "1774: episode done, reward=26.000, steps=26, epsilon=0.00\n", + "1804: episode done, reward=30.000, steps=30, epsilon=0.00\n", + "1831: episode done, reward=27.000, steps=27, epsilon=0.00\n", + "1848: episode done, reward=17.000, steps=17, epsilon=0.00\n", + "1864: episode done, reward=16.000, steps=16, epsilon=0.00\n", + "1894: episode done, reward=30.000, steps=30, epsilon=0.00\n", + "1922: episode done, reward=28.000, steps=28, epsilon=0.00\n", + "1954: episode done, reward=32.000, steps=32, epsilon=0.00\n", + "1982: episode done, reward=28.000, steps=28, epsilon=0.00\n", + "2038: episode done, reward=56.000, steps=56, epsilon=0.00\n", + "2072: episode done, reward=34.000, steps=34, epsilon=0.00\n", + "2172: episode done, reward=100.000, steps=100, epsilon=0.00\n", + "2264: episode done, reward=92.000, steps=92, epsilon=0.00\n", + "2294: episode done, reward=30.000, steps=30, epsilon=0.00\n", + "2328: episode done, reward=34.000, steps=34, epsilon=0.00\n", + "2382: episode done, reward=54.000, steps=54, epsilon=0.00\n", + "2420: episode done, reward=38.000, steps=38, epsilon=0.00\n", + "2469: episode done, reward=49.000, steps=49, epsilon=0.00\n", + "2523: episode done, reward=54.000, steps=54, epsilon=0.00\n", + "2547: episode done, reward=24.000, steps=24, epsilon=0.00\n", + "2573: episode done, reward=26.000, steps=26, epsilon=0.00\n", + "2606: episode done, reward=33.000, steps=33, epsilon=0.00\n", + "2620: episode done, reward=14.000, steps=14, epsilon=0.00\n", + "2646: episode done, reward=26.000, steps=26, epsilon=0.00\n", + "2666: episode done, reward=20.000, steps=20, epsilon=0.00\n", + "2698: episode done, reward=32.000, steps=32, epsilon=0.00\n", + "2738: episode done, reward=40.000, steps=40, epsilon=0.00\n", + "2779: episode done, 
reward=41.000, steps=41, epsilon=0.00\n", + "2822: episode done, reward=43.000, steps=43, epsilon=0.00\n", + "2880: episode done, reward=58.000, steps=58, epsilon=0.00\n", + "2936: episode done, reward=56.000, steps=56, epsilon=0.00\n" + ] + } + ], + "source": [ + "step = 0\n", + "losses = []\n", + "rewards = []\n", + "\n", + "while True:\n", + " step += 1\n", + " buffer.populate(1)\n", + " solved = False\n", + " for reward, steps in exp_source.pop_rewards_steps():\n", + " print(\"%d: episode done, reward=%.3f, steps=%d, epsilon=%.2f\" % (\n", + " step, reward, steps, action_selector.epsilon))\n", + " rewards.append(reward)\n", + " solved = reward > 150\n", + " if solved:\n", + " print(\"Congrats!\")\n", + " break\n", + " if len(buffer) < 200:\n", + " continue\n", + " batch = buffer.sample(BATCH_SIZE)\n", + " states_v, actions_v, tgt_q_v = unpack_batch(batch, net, GAMMA)\n", + " optimizer.zero_grad()\n", + " q_v = net(states_v)\n", + " q_v = q_v.gather(1, actions_v.unsqueeze(-1)).squeeze(-1)\n", + " loss_v = F.mse_loss(q_v, tgt_q_v)\n", + " loss_v.backward()\n", + " optimizer.step() \n", + " losses.append(loss_v.item())\n", + " action_selector.epsilon *= EPS_DECAY\n", + " if step > 3000:\n", + " break" + ] + }, + { + "cell_type": "code", + "execution_count": 58, + "metadata": {}, + "outputs": [ + { + "data": { + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXcAAAD4CAYAAAAXUaZHAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nO3deZgU1bk/8O87MyzKoiCLbDogKKBEAwSNElRQQfTnksRETaLmxuu9UaP+bq4JJi7cRK8YRRHXuGvcjSgYFNlkX4edAYYZYBhmYHZmZ5buPvePrp7ppaq7q7truqvm+3kenqmurq4+Rc+8dfqtU+8RpRSIiMhZ0pLdACIiSjwGdyIiB2JwJyJyIAZ3IiIHYnAnInKgjGQ3AAD69OmjMjMzk90MIiJb2bJlS7lSqq/ecykR3DMzM5GVlZXsZhAR2YqIHDZ6jmkZIiIHYnAnInIgBnciIgdicCciciAGdyIiB2JwJyJyIAZ3IiIHYnAnIkqQ/PJ6rMktT3YzAKTITUxERE5w2TMrAAD5s65JbkPAnjsRkSMxuBMRORCDOxGRAzG4ExE5EIM7EZEDMbgTETkQgzsRkQMxuBMRRWFrwXGU1jYmuxlRY3AnIorCj19eh+nPr0l2M6LG4E5EFKXyuqZkNyFqDO5ERCa43B6sy0uN+jHhMLgTEZnw/LJc3PrGRmw8WJHspoTF4E5EZMKBsjoAQHldc5JbEl7E4C4iQ0TkOxHZIyLZInK/tr63iCwRkVztZy9tvYjIXBHJE5GdIjLW6oMgIgp2otmNucty0eL2JLspSRFNz90F4PdKqdEALgJwj4iMBjADwDKl1AgAy7THAHA1gBHav7sAvJLwVhMRRfDC8lw8u2Q/Ps06EvM+Pt5UgA0pnn4xEjG4K6WOKaW2asu1APYCGATgegDvapu9C+AGbfl6AO8prw0AThWRAQlvORFRGA3NbgBAU0vsPfcZ83bh5tc26D6noGLeb3swlXMXkUwA3wewEUB/pdQx7aliAP215UEA/E+Vhdq64H3dJSJZIpJVVlZmstlERBRO1MFdRLoD+BzAA0qpGv/nlFIKMHcaU0q9ppQar5Qa37dvXzMvJSJKOoEkuwlhRRXcRaQTvIH9A6XUPG11iS/dov0s1dYXARji9/LB2joioqRSSiGvtDbZzWgX0YyWEQBvAtirlHrW76kFAG7Xlm8HMN9v/W3aqJmLAFT7pW+IiJLmsy2FuOLZVVidG38q2Ak590sA/ArAZBHZrv2bDmAWgCtFJBfAFdpjAPgawEEAeQBeB3B34ptNRGRedlE1AOBAaV3M+0j1dIxPRqQNlFJrAMOjmaKzvQJwT5ztIiJKSaneY/fhHapE5GgNza6QdYkIz6neg2dwJyJHe2bxfuzW0jHeS4iJkeo9eAZ3InK87KPVCdtXqvfYfRjciYgciMGdiBxPWZBBKahsgMeTuqkZBnciohj8bVEOXl6Rl+xmGGJwJyLHC+5f/89Xe9DY4o57vxsPVca9D6swuBNRhzR/u7OrojC4E5Hj2WN8S2IxuBOR4+ld9gw3pLGxxW18sdQmZ4qI5QeIiDqShmYXRj/6LX572Vn6G6TuAJkA7LkTUcdk0AOva/SWK/jnlsJ2bEziMbgTkePFMs7dirHx7YnBnagD+M9/bMGMz3cmuxn2ECmnbpOcO4M7UQewKLsYH28+EnlDh4qtXpi9u+4M7kTkeHopFqN4H7EwmE1iPoM7EXVIRuV/U72Ub7QY3InI8XwBOyHl3JlzJyKyH7vUa4+EwZ2ISAeHQhIR2ZDhBVVndNwZ3InIeXJLavHOuvy49mHzjjuDOxE5z2MLsiNu45QeuhEGdyJyPF/+3P9iKYM7EZHNxHMx9KrnViWuIUnE4E5E5KeyvhkAoAzOEGU1Te3ZnJgxuBNRh9bY4kazyxP19gWVDVFtV9/kwoOf7UB1Q0usTYsLJ+sgog7Jl38f+cgiDOvTDfdNGYGM9LZEfLyjZd7fcBifbSlEr26d8afpo+Lcm3kM7kTkeHqBes7S/XBrU+kdLK/HA59st+a9k3Q3FNMyRNRh+I+Qya9owO8/22G8bYLeK1l3ujK4ExHpMIrJ0VaNTHaNGgZ3IqIIfvPO5tZlu9ScYXAnopSw91gNfvnGRjS2uGPex9q8clz7wmq4PNGPfjHiH8SX7SttXTZ781OyzgU
[... remaining base64-encoded PNG data of the rendered loss plot omitted ...]\n",
+      "text/plain": [
+       "
" + ] + }, + "metadata": { + "needs_background": "light" + }, + "output_type": "display_data" + } + ], + "source": [ + "plt.plot(losses);" + ] + }, + { + "cell_type": "code", + "execution_count": 59, + "metadata": {}, + "outputs": [ + { + "data": { + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXkAAAD4CAYAAAAJmJb0AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nO29eZhcZZn3/31O7dXV+5aks+8ra4AAIkiCoiJhHBwRGNHhvRhfYVzG3zvq6Kio1zgu4zKvovKKiAvIiDpBFJCEBAUlIYGQfe0mne6k9626az3nPL8/znlOnVpOVXVXV1ed6vtzXbnSXV3Lc7qqv+c+3+deGOccBEEQRGUilXoBBEEQRPEgkScIgqhgSOQJgiAqGBJ5giCICoZEniAIooJxlnoBANDU1MQXL15c6mUQBEHYin379g1wzpuz3acsRH7x4sXYu3dvqZdBEARhKxhjZ3Ldh+wagiCICoZEniAIooIhkScIgqhgSOQJgiAqGBJ5giCICianyDPGfswY62OMHTLd1sAYe44xdlL/v16/nTHG/osxdooxdoAxdkkxF08QBEFkJ59I/icAbky57VMAdnDOVwDYoX8PAG8HsEL/dw+A70/PMgmCIIipkFPkOed/AjCUcvNWAI/oXz8C4BbT7T/lGi8DqGOMzZ2uxRIEkczOY33oHgmXehkzzrmRMLYf6S31MmzBVD35Vs75ef3rHgCt+tdtAM6a7tel35YGY+wexthextje/v7+KS6DIGY39z36Kn76lzdKvYwZ57E9nbjnZ3sRiSulXkrZU/DGK9emjkx68gjn/EHO+UbO+cbm5qxVuQRBWBCRVURltdTLmHEicQUqBzoGJkq9lLJnqiLfK2wY/f8+/fZuAAtM95uv30YQxDSjqhyKyiGrs0/k44oWV7b3k8jnYqoi/ySAu/Sv7wKwzXT7+/Usm00ARk22DkEQ00hcF3dZmX0jPBVVO+aOgfESr6T8ydmgjDH2GIDrADQxxroAfB7AfwD4b8bY3QDOAPg7/e5/APAOAKcAhAB8sAhrJggCCXGPz0KRF1cvFMnnJqfIc87fZ/GjzRnuywHcW+iiCILITVzRI/lZbNecJk8+J1TxShA2RQidrM6+SF7YNe3949BiS8IKEnmCsClGJK/MxkheO+ZgRMbgRKzEqylvSOQJwqYkRH72RbLmYyZfPjsk8gRhU2azXSOrHNVebUuxvZ8ybLJBIk8QNmUyG68vtw/i688eK/aSZgxZVbGo0Q+3U0I7bb5mhUSeIGzKZFIonz54Hj/6c0exlzRjyAqH2yFhcaOf7JockMgThE2J6ZG8koddE44rFWXryKoKp0PC0qYA2qkgKisk8gRhU+RJZNeEYgoUlVdMuqGscLgcDEuaq9A5GDKsKyIdEnmCsCnxSdg14ZiS933tQFzlcEgSljZVQVY5zg6FSr2ksoVEniBsitG7Jo+N17DekrdSqmNlRYVLYljaHABA3SizQSJPEDYlLguRzx2dhyoskldUDqeDYVlzFQDKlc8GiTxB2BQh7vkUQyXsmsqI5OOKCqckoc7vRrXXia5hsmusIJEnCJsymbYGhl1TIZG8rEfyAOB1OYxMIyIdEnmCsCmTqXgNVVgkLyscTkmTL5fEKsaGKgYk8gRhUxIVr/nYNXLe97UDsqrCpUfyToc0K5u05QuJPEHYFCHyuaJzzjlChl1TGWIoKxwOSYg8Q7xCTl7FgESeIGyKsChyVbxGZRWiBqpSbA1Z5XA5hF1DkXw2SOQJwqbk22pYZNaYH2N3ZEWF0xTJV8qGcjEgkScImyKi13iOAidh1QCVUwwVVzkcJk+e7BprSOQJwqbE9OiVc0DNInLJkXxliKFW8ZrIriG7xhoSeYKwKWZhyxbNm0W+EmwNVeVQOYw8ebJrskMiTxA2xeyvZxO5kJ4+CeS2duyASAMVnrzLIVXEcRULEnmCsClm6yWbyIfjlRXJi30Fp55d45Qoks8GiTxB2JSkSD5Pu6YSsmtSI3mnQ6qI4yoWJPIEYVPM0Wu2StZQpYm8kmrXsIqp5C0GJPIEYVPMgp1NvCvOrlFS7RoqhsoGiTxB2BRzbni2qtek7JoK2KAUUbvLlF1TKamhxYBEniBsihgaAmTPfw9VWJ58wq4xtTWogJNXsSCRJwibYha2rBuvSXaN/cUwbmTXUJ58PpDIE4RNieWbQhmTjU3KSojkFTUlkqfsmqyQyBOETTHbNbmya6q9Tu0xFWBrxJWUSF6i7JpskMgThE1JsmuyRLKhuIIan0u/n/3FMDWFUhsaYv/jKhYk8gRhU2IKh1tPI8wWyUZiCgIeLZKvBE8+teLV5WAVcYVSLEjkCcKmyIoKr0sX+RzZNX63QxdD+0e84lhdIpKXJHCee3jKbKUgkWeMfZwxdpgxdogx9hhjzMsYW8IY280YO8UYe5wx5p6uxRIEkSCuqPC7c3vtobgCn9tZMUVD4qrFPP4PqIxq3mIwZZFnjLUB+AiAjZzz9QAcAG4D8FUA3+KcLwcwDODu6VgoQRDJyAqHz+0wvrYiElPgc0laJF8B3nVcSbdrgMoZUj7dFGrXOAH4GGNOAH4A5wFcD+AJ/eePALilwNcgCCIDMUWF16WJvJI1kpfhdzsrJtVQSa14lYRlZf9jKwZTFnnOeTeAbwDohCbuowD2ARjhnIsG1l0A2jI9njF2D2NsL2Nsb39//1SXQRCzFlnh8OmefLYIPRxT4HM7KqZoSByrw9SgzHw7kUwhdk09gK0AlgCYB6AKwI35Pp5z/iDnfCPnfGNzc/NUl0EQs5a4oibsmhythn0uB5xSZQzXEMfqEg3KjAwj+x9bMSjErtkCoINz3s85jwP4DYCrAdTp9g0AzAfQXeAaCYLIQFxR4XNl9+Q55wjFE9k1lRDJK6n95PX/K+HYikEhIt8JYBNjzM8YYwA2AzgCYCeAW/X73AVgW2FLJAgiE3GFw6dn11htOkZlFZxDt2sqo5GXsGVcDinp/0rYbygGhXjyu6FtsL4K4KD+XA8C+CSAf2aMnQLQCOChaVgnQRApyKpqePJWm46izbDP5YDLISEm2z/aFceamkJJ2TWZcea+izWc888D+HzKze0ALi/keQmCyA7nXIvkhV1jIXAhvQOlYddUQiQv7JqU7BqK5DNDFa8EYUOEZWHYNRZ+dDgmG/erlIHXii7mLiklT74Cjq0YkMgThA0REbmI5K2yZsKxxP0qZeC1UfHqSDQo0263/7EVAxJ5grAhcVlE8tqfsGIRxYb0SD5h19
g/2jU2Xo3JUJQnnw0SeYKwISJy9xqRfHZP3ud2VEzvGiVtMlTuJm2zGRJ5grAhwnZxOSTda88zu6YChDCe1k9ej+TJrskIiTxB2BDZlCvuzGLDCJFPFEPZXwhlVYVDYtDKcxK2DUXymSGRJwgbEjMieabbMHnYNQ6pIjx5WeVGFA+Y8uQr4ARWDEjkCcKGpEfyVnaNnkLpcsAlscrIrlGSRd5oUFYBJ7BiQCJPEDbE6KkuaZG8VWaJSKH0u50V04VSVlRjsxWgVsO5IJEnCBtibLw6tY1Xq37yobgMt1OCQ2KV07tG5Ub0DpjtGvufwIoBiTxB2BBzrni2CF20GQYAt0NCTLa/yCsKN6J3wNSgrAJOYMWARJ4gbIhs2nh1OSRLPzqsD/EGNGunEjZe43p2jYBaDWeHRJ4gbEjMNOc0u12jGINFnA7rLBw7ISupdg01KMsGiTxB2BBh17gdmt9uvfGaiORdDlYRloai8qSNVxrknR0SeYKwIYZd49TsmmwVr8KTd0oSOE9MVrIrcUVNzpOn7JqskMgThA0x7Bope8WrZtdo7YiN8n+bi6GscuNYABrknQsSeYKwIbLJrsnWJz4ck+F3JewaoEJE3pRdwxiDQ6qMgSjFgESeIGyIUQwl2hpYVbyaNl5dFdKtUU6xawBUzECUYkAiTxA2RKRMirYG2TZezdk12mPtHfHKSrJdA2i/B7JrMkMiTxA2JC6bG5Qxy83UUExJ2DUVkk8uq6pxVSLI1r9ntkMiTxA2RAiaFslnHuvHOU+yaypluEZqF0oAWfv3zHZI5AnChhiDMxzMcqxfVFbBOUyevCaMMZtvvMYVDoeULF2V0iu/GJDIE4QNMRqUSRKckpTRrgmJgSGmPHnA/gOvZUVNqngFkDWNdLZDIk8QNiSuaP1bJEnz5DPZNWKId2okb3e7JrXiFdBOdnZPDS0WJPIEYUPM/VusulBG9KlQXldyCqXdxTCuZkihrJBe+cWARJ4gbEhMUY3ZplZj/SJxTcyFyDsrpMdL6mQoAFlrBWY7JPIEYUPiigqXUxd5i2rPqJwcyQtP3u6RvJzJrslSKzDbIZEnCBtijmatBnkbkbx+MqiUHi8ZK14rZOpVMSCRJwgbElMSBUEui0IgEcl7XKl58vYWw0wVr84s7ZZnOyTyBGFDzBuvDou+LQlPvsIieZWnVbxma7c82yGRJwgbEjdF8mLjlfNk8TYieWdKgzKb2xpyyvg/gPLks0EiTxA2JK4kolmjJ02KyKVG8pUwC5Vzrh07tTXIGxJ5grAhcVPVp/DaU6tejTx5Z+XkyYtjzJRdQ3ZNZkjkCcKGmDsxigg9VbyjeqdKj4jkK8CTl9VEzx4zVrUCRIEizxirY4w9wRg7xhg7yhi7kjHWwBh7jjF2Uv+/froWSxCERlxOZJg4LdoViEheePKV0LvGEPkUu8Zl0dqBKDyS/w6AZzjnqwFcCOAogE8B2ME5XwFgh/49QRDTSFxN3ngFMnvyLgczNindhl1j34hXNs22NUNtDayZssgzxmoBvBnAQwDAOY9xzkcAbAXwiH63RwDcUugiCYJIJim7xth4TbVrFMOPB8wRv30jXtmYiEXFUPlSSCS/BEA/gIcZY68xxn7EGKsC0Mo5P6/fpwdAa6YHM8buYYztZYzt7e/vL2AZBDH7SGpQZpE1E4mrRiEUUBm9a8QxpvWTp2IoSwoReSeASwB8n3N+MYAJpFgzXEvczfib55w/yDnfyDnf2NzcXMAyCGL2EVNUw6ZxWdg1UVmBx5n4E3dVQO8a8wBzM04qhrKkEJHvAtDFOd+tf/8ENNHvZYzNBQD9/77ClkgQRCqywg2P3SFltmGicdXIkQcASWKQmL1FXrG0a5gx3JxIZsoizznvAXCWMbZKv2kzgCMAngRwl37bXQC2FbRCgiDSiJuadFm1K4jEFSOzRqBFvPYVQ+G7p9s19onkR0Ix9AejM/Z6zgIf/08AfsEYcwNoB/BBaCeO/2aM3Q3gDIC/K/A1CIJIIbnVcOZiqKicHMkDWoaNnb1rsfa0ilcHg8oBVeWQUn5Wbnz2fw5h35lhPPfP1yLgKVSCc1PQK3DO9wPYmOFHmwt5XoIgsmMu7TeKnFKySyJxxeglL3BadKy0C+IqJL3iVd9vUFV4JEfa48qJruEwzo9G8F87TuJf37Gm6K9HFa8EYUOSUyhFC+H0SN688Srua+dIXpyg0idD2acvz9BEDADw0IsdON4TLPrrkcgThA3ReqontytI9aQzRfJ27/GSra0BYA+RH56I4ZaL5qHa68S/bTuU1j10uiGRJwibwTlHTFHhdiRvvKZVvMqZ7Rq7Zdd849njuP93hwGYUihTN14tLKtyIyorCEZlLGsO4JM3rsaejiH89rXuor5m8V1/giCmlUQaoUihzNyTJhpPt2tckmS7VMM9HUMYi8QBZEmhtLCsyo3hCe04GgJuvHfjAhzvCWJ9W21RX3NWi/x3tp/E4iY/tl7UVuqlEETexFM2HxNdKNNTKNPtGvukGgpCcRnjURmAueI1PbsGKP8aAOHHN/jdkCSGL9y8ruivOavtmidePYunD/aUehkEMSmEJeEy7BrrFMq0jVcbNvIKxRRD5IWIp4//s0fLBkPkq9wz9pqzWuSjcRVhvR0rQdiFuJwsdI4M/eQ555rIp3ny9rNrwjEF4xEZnHPT0BAruyY5kldVjq89cww9o5GMzz00EcOXnzqCmDwzVwCDE1oRVGOARH5GiCkqwjESecJeyCmevCtDP3kxMCS1GMol2S+7ZiIqQ1Y5InHVOEGl9ZO3qPo9MxTCA7tO449HMl+x//a1bvzoxQ4cPT9WhJWnIyL5ej+J/IwQkymSJ+yHiDoTQ0PS7ZqoPt81va2B/bJrxN9oMBq37idvsfkc1DdsxYZnKns6BpNeo9gMT8TAGFBHIj8zREnkCRsiInl3yiBvc/pgRNbnu6ZG8jZraxBXVGO94xE5S5585kh+LKx5+cOhWNpzc87xyhvDAGZO5AcnYqj3u9M2jovJrBV5RdX8PbJrCLuwu30QsqKmtdt1ZKj2TB39J3DZbLhGyPT3OR6VE20N0vLkM3vyRiSfQeRP948b9klkhnRgaCKGer9rRl5LMGtFXlzyUiRP2IHOwRDe++DL+P3B88ZnN9v4PytP3inZK7vGHIRpkbxFP3kpc3aNyK8XYm5md8eQ8XVoBkW+scozI68lmLUiH9UvZymSJ+zASFgTqeM9wbQReK4MbQ1EJO/NEMnbyZMPxWTj62BUNnWhTE0NzTwQJRjRHj8SSvfk93QMGSfBmQr2hiZiM5o+CcxikTdH8sXuHUEQhRLRN1I7BibScsUdGaJYEcl7UiN5Byv7XHIzoZRIXrGI5DNlGAHAWDizXcM5x+72IWxa2gggcVIsNkMTMdSTyM8MUdkc9dgnsiFmJ0KE2vsn0vq3ZBrrZ0TyqXnykmTk2dsBc4Q9bork0ypeLbJrxvRIfjjFrukaDqNnLIJrV2qjR2fCrlFVjuFQDI0k8jODWeTJlyfKHfEZ7RicMD67bqcmdGKsn
zmFMmKkUKZXhtqpGMpq49Wq4jUtu0b35CdiSlLBk/Djr1rWBLdDmhENGA3HofKZrXYFZrHIx0jkCRshIvOYrKJzMAQgOcMktU98VM4cydutd03Y7Mnrdg1jmXrXWOXJJx4/YrJs9nQMos7vwoqWALwuaUb25gZL0NIAmMUiL/4IgOQPEkGUI2bP+HivNmjCHM06U/rEi0g+dePVbr1rkiP5OOIqT6t2BaybtAlPHgCGTCL/yhvD2LioAZLE4HM7ZkTkxb4AifwMkRTJx+wT2RCzE/O+kZgmZG6365RYysarniefqRjKhnnyPpdDS6FU1LQcecCcJ58s8sGIjCq3dqITVa+KytE5FMLqOdUAAL/bOSNX84PjJPIzCnnyhJ0QkbzbKeFET3okn1rkZBnJ2zRPvqXGo3nyKk/LrAFM07HSNl7jWNhYBSARSQ+MR6GoHHNqvQA0S2smNKAUHSiBWSzy5kg+RHYNUeYIEVrVWo2g3nbXZdpUdaSIt1HxmpZCKUFWuW3ShkUk31LtwVhE23jNZNckMozSI/lFDX4ACZE/r3eknKuLvG+GPHmya2aYWIZ0M4IoVyJxFW6HhOUtAeM2l0nsXLp4C4w8+bTJUJm963IlFJfhdkqo9bmMilenI122Ms255ZwjGIljYaMm8qIg6vxIGACMSH4m7ZoqtyNtM7zYzFqRT9p4JZEnypxIXIHHJWFpU5VxW7aN12hcgccpgbGUoiFn5iyUciUUVeB3OxDwOI08eVemjdcMQ0MmYgpUDjQF3PC7HYZdkojkfQB0u2YGIvmhieiMF0IBs1jkk+0aEnmivInEFfhcDixtTkTyZm/aISXnv2eaCgVYZ6GUK6GYAr/LgYBXE3lF5XBk8OQzFYSJzJoarwv1frdhl/SMReBxSkajMJ97Zjz5wYmZL4QCSOQBUP8aovwR81qXWETyLkmCkuLJZ7IFrLo1livhuAyf24GAR7Nr4oqa1rcGSBSEmfclRI58tdeF+ipXwq4ZjWBurde4yvHPUCQ/HJr5vjXALBb55LYGJPJEeROOK/C6JEuR13rSJH+mM4l8JlsjlYNdo2nzYktFKKbA73ai2utETFERiikZs2sAMdrQFMnr1a41Pifq/W7DrukZDRt+PKBF8jORfDE0PvN9awASeQBk1xDlTySuwudywOd2oK3OByml6tMpsZSK18x2jYiCrWaado+E8a7vvohnD5fHgPtQTNEjeScArWo1U548IEYbmiN5TeSrvS7U+d1Gxeu5kYjhxwOaJ1/s/lWcc7JrZhoh8gHPzOysE0QhaBuvWmS+tLkqLcPEmZYnP7VIfiCoDZo+p2eglJpwLLHxCmgZMtkieTnJk9ei8xqvEw1+F4YmYlBVjt6xSFIk73c7EFPUolpY4biCqKyiYYZ7yQOzWORjspaS5nc7yK4hyh6x8QoAq+dUo9aXPF0otcjJMpLP4cln679eCkIxWRN5rybyw6FYxjx5IL35WmokPxaR0RuMQFa5kSMPwPi9FjPYS1S7zuxUKABwzvgrlgkxWYXbKel+HIk8Ud5E4qox4OIjm1fg9isWJf3cldJJMRLXvOxUrLo1CoQwDmUYl1cKwsKT1yP50XA8Y548oDVpS4rkjY1Xp7HheUyvFp5TkxB5rzsh8tXe4ohwotp15iP5WSvyUVnLI/bN0M46QRRC2GS/VHtdaWKkVbwmtzVoqMqUQpk9Tz4RyZeHyIfiml0jjlflsIzkU5uvjUXicDsleF0O1OnpkkfOjQEA5tUlPHm//nuNFLGHVfvAOACgMUCe/IxhjuTJkyfKHbNdkwmXI71BWeoQbyDhyVtF8iIjRTTzKjXGxqs3EY9aRfJa8zWTyIdl1Ognh3q/Jq5Hz2sin5pdA2jVtcUgElfwzedOYHlLABvaaovyGtmYtSIvPEuK5Ily43hPEO/94V+T0vqsNlIFmlWRPDQktW8NkPDkrea8jut9cVLH5ZUCReWIySr8Lqex8QogY8UrIPYlEscVjMRRo58chF1z5PwY3A4JDf5ERG148kXSgQd2nsLZoTC+tHV92rCTmWDWirwRyc9QBzqCyJc/n+zH7o4hdA6FjNs0T95a5B0OlpQjbhnJi3mwlp58+Yi8OMlpdk1C5FMHhgicjuTBKWMRGdX6BrWwazoGJtBa64Fkeg6fu3gbrx0DE/jBC+3YetE8XLmscdqfPx8KFnnGmIMx9hpj7Cn9+yWMsd2MsVOMsccZYzNvQuVBTCG7hihPuoa19MUJPapWVI6Ykth4zYRLYkkFTNF45vuL3jVWPeXFxutwKF7yTpUisva5HfA4JeMEZRUNu1IKwsyRvLBrOAfm1viSHlesSD4Uk/Hp3xyA2ynhM+9YM63PPRmmI5L/KICjpu+/CuBbnPPlAIYB3D0NrzHtiEiH7Bqi3OjWc9RFVG01ys+MliNusmtki7YGYuM1RyQfk9WSZ52J1/e7HWCMGb68ZZ58ShrpWDhuePJ+twNu/QRn9uOB4kTy3SNh3Pr9v2JPxxC+cPM6tNR4cz+oSBQk8oyx+QDeCeBH+vcMwPUAntDv8giAWwp5jWIh8uQpkifKjW4jktc+l2HTdCQrtIpXLYpVVI64wjM3KMvQkteM8OSB0ls2ZpEHYPjy2e0acyQvGzYPY8xoSDa3LkXkpzmSP9EbxNbvvoizQyE89IHLcOul86fleadKoZH8twH8CwDxm20EMMI5F5+ULgBtmR7IGLuHMbaXMba3v7+/wGVMnqTsGorkiTJCRPLjUc06iejV2dnsGqcjYddki/yNPHmLitcx0+DrUmfYCE/ep+f7C5HP1KAMSM8wCkZk1JiKxoRlM7emuJH8Y3s6MR6V8dt7r8JbVrVMy3MWwpRFnjF2E4A+zvm+qTyec/4g53wj53xjc3PzVJcxZczZNVFZLZuGTMTsZjwqY1RvkTueEsnnyq4RUWxi9J91nnzcondNMBJHa41WsFNukXx1TrsmUQwVV1StuMmUlSNEfk5tcT359v4JLGsOYHlL9bQ8X6EUEslfDeBmxtgbAH4Jzab5DoA6xpj4zc4H0F3QCouEObsGoE6URHkgrBoAGNejavHZzCby5ig2McQ7W+8a62KoRQ3JM1FLRSjFpjIi+SwbryK7RuwtJEXyekuBuame/DS3NWgfGE/qFlpqpizynPNPc87nc84XA7gNwPOc8zsA7ARwq363uwBsK3iVRUCL5B1GlEC+PFEOdI8k0iYnYvlvvDpMefJGJJ/B3nEbefKZr1zHIzLmN2iR7vBEaUU+HE+kUAJAQN9EtfTkpUSTNjEwxJx6adg1KSIvSQwe5/TMeY3EFXQNh5OGu5SaYuTJfxLAPzPGTkHz6B8qwmsUTFSP5L1FLoQgiMkgInmHxIxoNKyX2+eueNXuZ0TyGSterRuUCYtjfr0fjGlplKUkYdcke/LWXSgT2TVGJG9q/7C4sQoNVW40BtL7x0xXAkbnUAicA8uayyeSn5beNZzzXQB26V+3A7h8Op63mMRE7xqK5IkyomskDLdDwpxar5Enn7Brsmy8ShJUDqgqzxrJZ2s1LOyh
Op8LNV5XyfvXmPPkARg579Ybr4mhIWOR9Ej+A1cvxt9eOj/jlcB0TYdq79d61CxtKp9IfhY3KNM2Xg27hiJ5ogzoHg5jbp0X1fpMUyARgGRNoTSyZtTESSFDJJ+YhZou8kFT18Z6vwtDZRPJ55lCKZkjeTEVKhHJuxyS5fg9r9uB0DQEeqf7JwAAi5v8BT/XdDErRZ5zblS8Crum1IUfBAFo6ZNtdT4oKjdEPp+NV1ENqqjcGIiTqXdNokFZul0TjCb6r9dXuUseyYdiCtwOydhoFcVQrmzj/3SRFwNDzJF8NnwuByLToAEdAxNoqfYUrWXxVJiVvWviCgfn2iYUZdcQ5UT3sCby1V5nIrsmi2gLnKYNVfFZzt67JoPIJ0XyiZmopSIckw2rBjB78rnbGoxliOSz4Z8mT769fxxLy8iPB2apyMeUxB+N2NQhT54oNVFZQV8wirZ6H6o8TiO7JpJnxSugiXc0S/EUYyxtgpIgVeRLPR0qpI/+Exh58tmya0QkH5HBGBDIMDglE17X9AwPah+YwJIy8uOB2Sry+h+BOZInu4YoNedHIgCAtjofAh7npPLkhQ2jqNkjeUAvnMpQDGUel1fvd5U+Tz6upETyWlSedfyfHsAFI3EEPM6kbpPZ8LkKHwM6PBHDSCheVpk1wCwVeXOxiNet/QookidKjWhn0Favi7zw5GUFToll7UVubKiqHNEcJx7uoG4AACAASURBVIVqr9OI2s2I16v2OlFf5UYoppTUxgynRPKJBmUW4/9MBWHmgSH5MB12jZj+RHZNGWCO5IVdMx2bLgRRCCJHfn6dH1UeJ6KyquWux7L3kgcSGSdmu8bKw28MeDA4EU27PdWuAbIP9N62vxu/fa0rx1FNnVBMht+VsFsMTz6LXaOoHJxzBCPxvDddAUzLrGeRWVNO6ZPAbBd5p2T09yC7hig1XSNhMKa1whWCNhGV9bbB2f9U2+q1KtWj54NZUygBoCngxsB4uhUzFonD7ZDgcTqMjo1Wlo2qcnz590fx3edP5XdwUyAcS7ZrFjb48a4L5+HyJQ0Z7y+ybmKKikPdo1jUmH8ao8/lLDjQa++fgMvBML/el/vOM8isFHkj0nFKcDokuFMm3RNEKegeDqO12gu3UzJEfjwqIxLLPvoPAC5dVI8arxM7jvYiElfBmHWqYWOV2zKSF9FvnR7JW7U2ONA9iv5gFJ1DIcu2xYWSuvHqdkr4v++72LJlgLBxDnaN4txoBJvXtOb9Wj53Zg04OxTCtv35td/qGBjHwga/pZ1UKsprNTNE1BTJA9qlGqVQEqWmeyRkROTCfx43IvnsIu9ySLhuVQueP9aHUEyB16kN2shEY8CDwQyR/LhJ5EXRkFVrgx1HewFoKZtdpqZq00koJZLPhbBxnj7UA8aA61fn3+bX53JA1mfKmnnoxQ587PH9eZ3I2vsnyqpnjWCWinxy9oHP5UgamkwQpaB7JGxc6leZ7Zq4mjV9UrBlbSsGJ2LY3TGY1d5pDGibqqmfec3H1mwaYdcMWdg1zx3pNdoMiA3H6WD/2RH0BbUso1BMTorkcyE2pp851IOLF9ShKUOPGit8FqnUx3rGwHlyn/1MKCrHmcFQ2W26ArNU5GMZIvlwvDiXnASRD5xz9I5GjdF0wq4JRmSEY7k9eQC4dmUznBLD4XNjlumTANBUpYlfajSfya4ZyWDXdA2HcKwniDs3LQKgRbDTQSgm47YH/4ovPXVU/14xEiPyQaSRdo+EJ2XVAMhYFMk5x/GeIIBEV0srzo+GEVNULG4kkS8LYiZPHgDNeSVKTlRWEVNU1OoVmomNVyUvuwYAan0uY1My20mhqVoT8IHxZF8+GJGN1xX7Apnsmh1H+wAAt146H/V+F9oHpkfk/3xyAJG4il3H+xCVFUTl/K5gBObGZTesnaTIu9MTMAbGY8bxiwpaK3rHtN9l6vzYcmBWinw0VeTdDqN3NUGUgjFTIRJg9uTjeiSfn9iJCDZbJN9oEcmPR+Wknit1FgVR24/2YmlzFZY2B7CkqcrovFgowucPRmS8cFwbCToZu0ZE8gsafFjRMjlv3KenapqDvRO9QePr0RyRfN+YZjG1VpPIlwVpdg1F8kSJEQ21hM8tyvHHo1pEm6/Ib1mjbTbm8uQBpGXYjKXkltf73YbIR+IKesciODM4gZfbB3GDfjJZ2hxIsms4T9+8tMJ8P1XleP5YH7asaYHbKeF3B84DmKzIa8e8ZU2r5aazFYmW44lg71hPQuTF+2NFX1D7XbbU5L8PMFPMSpHPlF1DnjxRSozWuHokXeXRRGc8IiMSV+DLw5MHgEWNVVjVWp21MZeI5M258qre9bLGLPJVbgyH4njp1AA2fWUHrvj3Hbj267sQVzi2rBUiX4W+YNRY/wO7TuPNX9uZM1ttaCKGC+5/Fo+/0gkA2N81goHxGN514TxcvawR249oUb1vEp58QP+d3TBJPx4wz3lN6MCJnqBRZJbbronAKTE0+DO3Mi4ls7LVcCxDdk2YsmuIEjJmzCRNlO57XRImYjLC8fztGgD4/p2XZI1kfW4HqtyOJLtmIiaD84RNBGgZNrvbB/H+H+/BsuYq/J+3rQIDQ63PhY2L6gEkqjvfGAhhw/xabNvfjZ6xCH5/4Dz+9tL5lms4OxRCJK7i3/9wDFvWtGL7kV44JIbrVrZgPCpjp27XVE0ikr9mRTN++g+X48pljXk/RpBpDOjx3iDWz6vB612jOe2a3rEoWqo9effKmUlmp8grGewaypMnSkgwxZMHtIZcQT2Sn4zI55OrndraING3JvH6DVVuRGUVW9a04FvvvShjj3SRMtg+MI5anwsnejV//tE9nVlFXrz2aDiO/3j6GA50jeKyxfWo9buweXUrPoNDADCpPHmXQ8KbVzbnfX8zibkS2u9BVTlO9gbxno0LcOjcWM7smr5gBC015efHA7NU5KPxDBuv5MkTJSThyZtF3qEVQ8Xz9+TzpTHgTorkzX1rBHduWoTlLQHcdtlCy2lMixr9kJjWt0U83/uvXISf/vUMjvWMYfWcmoyPE1bRTRfMxa/2af1vPvvONQC0DJUNbbU42D06qRTKQhAnE2EzdY+EMRFTsGpONWp9rrzsmiVN5Zc+CcxSTz6maGXfokJuuob4EsRUCWaYSVrlcWJIj3jzyZOfDI1VnqQUykxXEsuaA7jjikWWAg9oluf8ej/a+8ex41gvVrQE8PEtK+F2Snh0d6fl48QJ4f6b12Genna4xeSlb9Y3kCez8VoIfsOT13RA5MevbK1GjdeJ0Rwbr71jUbSWaSQ/O0VeVuF2SIZv6XM5EFd4xpFoBDETjEXicEgsubWux4mBoCaGVs3GpkpzdXKTsrEMkXy+LG2uwoGuUexuH8KWta2or3LjnRvm4revdltWkg+MR+F3O9AY8ODbt12Mf7p+ORabIuHbL1+Iu65chJWt1ZNez1QQkbyY83q8V4h8ADU+V1a7JhJXMBqOk8gXg+eO9OIff7YXaoYpN9kQQ7wFfnd6tdtU4JzjeztPTVveMDF7ENWm5g3TgMd
pRNuT8abzobHKg6GJqPG3IwaUVHumIPJNAa1RmcqNFM7br1iIYFTGU6+fz/iYwfGokcp5+ZIGfOKtq5J+3lLjxf1b1xv7ZsXG45TAWKLl+IneoD6G0ZXTrunX0yebq8svfRKwucifGwnj2cO9lv01rIjKKtymyMibcqk2VQYnYvj6s8exbf+5gp6HmH2MheNpQy4CXqfx2Z52uybghsqBET1CTXjykx9AvUTffG2scuOiBVrWzcZF9Wiu9mDPG0MZHzM4ETNSOcsBxlhSAsbxniBWzdGuImq8rqzZNb2iEIoi+emnVS88EL/kfInKSlIkb+TIFhjJi/FtpR6bRtgPc98YQZXHCa5fpE6mvD8fGgOi6jWqv376nkC+LNNtlresbjH8e8YYFjb4jUEoqQyMx9AUKK+ccp8+5zWuqGjvnzCsohqfM6kYKhJX8PgrncZVkGhp0FqGhVCAzUVepCz1jaX3xs5GLMWu8WXIkZ0K50e1D/RgiafcE/ZjLJIeyZutE880i3xTlehfo31WgxE5bU8gX9bNq8XCBj9uTUmZbKvzGSMNUxkcj5ZVJA9oV/ThuIIn9nUhpqi4VK8FqEmxa54/1odP/vog/to+CMAUyZdhSwPA5iIvLo8mG8nHZDXJ6zM2XQq0a3r0dQxl6NVNTJ43BiZwzkIkyp0j58ZyFtCYsYrkBdO98WpE8hOJSD7gcU66HQAA1Ppd+NO/vAWbliYXIbXV+3B+NJy2Z6aqHEMTMcOTLxf8bgfOj0Tw1WeO4fIlDcb+Qo3XhZisGnt250e1v/MDXaMAgN5gBG6HhDr/5K2umcDWIt8cEHbN5CL51I3XOr0EXNgtU0W8+WTXFA7nHB94eA8+t+1wqZcyaVSV4z0/+Ase/NPpvB8zFo6ntSIImER+2jdeRf8aEclH5aTXmw7a6nyIK9zo6yIYDcchq9w40ZQLPrcDf20fRDAi40tb1xsnPPG+iAwb0YzsYPeI/n0ULTWeKZ0gZwJbi7zbKaGhym0MGciX1Eh+fVstan0u7DzeV9B6enSRJ7umcE72jeONwZBhgdmJkXAcEzFlUhOTMkXyZtGd7o3Xer8bEjN78umvXyhiylX3SCjpdnH1UG6evEjA+IerFxubrgCM9s/CshHOgYjk+4IRtJRpZg1gc5EHgJZqzxQieSVJ5LXRac14/lgflEmmY5oR1sLwRAycT/15CK2dLZDeDtcOiJS6fG1EReUIRuWM2TWC6d54dUgMDVVu9Ou/367hMOqnubnW/Dqf8dxmxD7AZCY3zQRNATfm1Hjx0S0rk24XTduE/SauTLqGwxieiJV1IRRQASLfWuNNi+RzCWxMUdP6bW9e04qhiRj2nx2e8lqEJy+rPOe4MCI7ogvh4ETUdidMkdueb0JAom9MFk9+mkUe0HLlB8ejONQ9iqPnx/DWdZPv3piNRCSfLPLixF1unvyXtq7HtvuuTrOtEnaN9j71jkWMGbgHu0fROxYhkS8mrTWepIjpZG8Q6z7/LI6cG7N8jKh4NSNGpz13ZGqWDecc50cjRkHEEFk2U2ZgPIrXzo6gocqNuGK/E+ZkI3nh9Wbz5Kd74xXQ+9dMxPCL3Z3wuiS8+2LrhmJTwe92ot7vSkujFHZNuWXXNAY8GcU61a7pG4viOr0R2p6OIQQjcln2kRdUgMh70R+MGjbLvjPDCMUU/O6AdUFSNMWTBxKj08R0mskyHIojJqtYO1dryDQ0MTkLiUjw/LE+cA68++I2AAnf2C4IkZ+IKUaUng1RiFSTzZN3T/+famPAg7NDITy5vxs3XTAPtUXIDmmrT0+jHBiPgbHEsPByR9hoo+E4JqIyglEZy1sDWNzoN2zFck2fBCpA5FuqPVB5Ijro0OdNZhPr1Dx5wZY1rTjZN44zg5OfWSn8+HXzhMjnnz5HJLPjaC/m1HiNtrF228juN52U+vKI5sdSBoYIxOAQxpB25TkdNFa50ReMYiKm4PYrFk778wN6rnxqJD8eRb3fbUxyKndEj/+xcNzw41urvdgwv86YHkV2TRFJLYg6rY8iO9E7js7BUMbHpGbXCEQXvO1HJ2/ZiMyadfNqAVAkP1UicQV/OjGAzWtajI05u0byQH7pvVYtBao92vc+l6Mo6Xkiu2XN3BpcvKBu2p8fANrq/OgeCSftqwyMR9FYVV5+fDY8Tge8LgljETkxy7XGiw1tiTbK5VrtChQg8oyxBYyxnYyxI4yxw4yxj+q3NzDGnmOMndT/r5++5aaTWhDVPjBuWCbbLaJ5LU8+3eNc2OjHytYAnj3UM+l1nB8TIq+9tt2iz3Lh5fZBhOMKtqxpRVO1JgT9NsuwGRiPGtZLPum9CU8+deNV+4wWY9MVSBRE3X7FwqLleLfV+xCKKRgJJa5sB8djZZdZk4sarwujoTh6g4kWBhvaEifGlgq1a2QAn+CcrwWwCcC9jLG1AD4FYAfnfAWAHfr3RSPRvyaKuKKiczCEa1c1Y0VLwFLkrSJ5AHjPpQuw540hPH9sct58z2gYTolhQYMfXpeEYRL5KbH3jWE4JIZNSxuNeZl2jOTFFV0+m6+ZerkDiRGA3iJ1YrxmRRPeu3GBsfdRDNrq0jNsBsuw2jUXorWBiORbarxYp0fyHqeUdoIuJ6b86eGcn+ecv6p/HQRwFEAbgK0AHtHv9giAWwpdZDaaAh4wpv0xdQ2HIascS5uqsGVtK/Z0DKWVlqsqR0yxFvm7rlqM5S0BfP7Jw5NqPXx+VEujckhMS00rgsi/1jmMbfu7p/15y4kD3aNY0RKAz+2A0yGh3u+yXa58fzCKxU1V8Lsdedk12Xq5BzxOeIs0OGN+vR9fvfWCpFTN6X8NkSufsE4HxqO2i+RFu+HesYgm6l4narwuLG2qQmuNt2yrXYFp8uQZY4sBXAxgN4BWzrloIt0DIGPyLWPsHsbYXsbY3v7+/im/tsshobHKg75gxOjjvrQ5gC1rWiCrHC+cSH5uMd8108YroFXRfnHrOpwdCuOBXfmXpZ8fiWCOPuGmvspVlBTK//fndnzhSfuV+ecL5xyHukdxwfxa47bUWaTlTlxRMRSKoaVaS8fLN5L3uRxwZdiIDHicRUmfnCnaUgqiorKCYES2lScPaJlPY2EZfcFokqi/97IFeNeFc0u8uuwULPKMsQCAXwP4GOc8KTmda7stGStZOOcPcs43cs43NjdPbfiuoKXag76xKNr1TdelTVW4aEE9GqrcaVk2uUQeAK5a1oSbL5yHH7xwGm8M5Jdp0zOWEPmGKk9R7JresSiGQ/GCh5uUK90jYQxNxLBhfsLrbKxKnmBU7gxNxMC5NkCipdqT1rclE2Nh2fJyv8rjnPa+NTNJnd8Fv9th2DUi+Cm3vjW5qPFpPeW1wqfE2v/x2mX4P29bXcKV5aYgkWeMuaAJ/C8457/Rb+5ljM3Vfz4XQGENYfKgtcaD3mAE7QPjqPe7UF/lhkNiuHZlM146NZi0s586xNuKz7xzDdwOCZ9/8nDOikutECpszKpsrHIXxa4RUaHI5Kk0Duq9QDa0JSL5poDHVp68eUpQS403rx
TKYDRuOazjgvl1xma+HWGMJaVRinGGdvPkaw1PPmpk9NmFQrJrGICHABzlnH/T9KMnAdylf30XgG1TX15+aJfFWiS/tDlg3H7RgjoMjEeTfFERyecaK9Za48XHtqzACyf68ezhRLZNx8BEmuiMhuOIxFXMqdUuTev97mm3azhPdPM7X6DIR2UFh8+NTseyppWD3aNwSgyrTc2hRFWmXRA58s3VHrTqfZVyBQljYTmtEErwlXdvwBe3rp/2dc4k5oKogTJtTpaLGq8257VnLFLWhU+ZKCSSvxrA3wO4njG2X//3DgD/AeAGxthJAFv074tKS40XA+NRnOobx1LTMOD1ekR4oGvEuC2qWx35zI78wFWLsXpONb74uyMIxWQ8/kon3vqtF3DHj3ZDNg39Pqe3KJ4rIvmAG6GYMq22ymhYq6gFgJ6xwjozPra7Ezf93xenVPRVTA52j2LVnOqklMGmgAcjobhthqwbkbxeIh+OKwjmqHoNRqwj+UrAPDzE6FtTZi0NclHjc0Ll2syJcm5hkIlCsmte5JwzzvkFnPOL9H9/4JwPcs43c85XcM63cM4zD3mcRlprPOBcS80S8yYBYO3cGjgkhoPdiag14cnn9jmdDglfumU9zo1G8O4H/oJP/vogljRV4VhPED/96xnjfkJ0E568FqVMZzRvvhopNJJ/tXMEnGuD0MsFzjkOdCVvugKJy3q79AISIt8U8BhikMuyGStCm99yoq3eh5GQ1hJAXAU3lXFr3kzUmvoKlXPhUyZsX/EKJBciLG1K2DU+twMrWgLJIq9Hw/mWiV+2uAG3Xjofx3qC+ODVi/GHj1yDa1Y04ZvPnTD+eIXoikhetGydXpFPCEWhnvwh/fdhVUdQCrqGwxgNx42rL4GI+AZs4sv3B6Oo1jdLE4V62dcejKQPDKkkljRqgdcDu06hPxiFxymhymabyeaWE7PJrikbzGfWZaZIHtA28Q52jRq+aFTOz5M38+Vb1mPbvVfj8+9aB6dDwhe3rkdMVvHl3x9FTFbRPRyGQ2LGyaYY0acQ+YDHWVAkPxaJo31gAtVeJ155YxijofLosSMGMFzQllxe35Qywajc6R+PGp1IxSCJXFWvY+HKjuS3rG3Fey6dj+/tPI2fvXxGr20p37zyTJhPwrNm47WcEBGTxLTWBGYumF+LwYkYzunCKCL5XNk1ZrwuBy409fZY0lSFf7x2KZ58/RxWfvZpPLDrNFqrPcak+snYNTFZxfXf2IXfvW7dNRNIDCpY31ZT0LQkEcXf/aYlUFSOXSeKnvyUFwe7R+F2SFg5J5B0e+os0nJnIBg1rIiWPCL5SFxBTFHTmpNVEi6HhK/degE++841iCuq7awawN52TUWED41V2iiz+fX+NK9d5Fwf7BpFW50vYdcUWCp+3/XL0VrjNSpqLzTldhvl+HmI/On+cbQPTOD5Y31414XzLO/XNxZBrc+FJU1VBXnpQuTv3LQIP3/5DLYf7cPWi4pX1p4vB7tHsHpuddr7lzqLtNzpH49izRwt5THgcSLgcWYtiEp0oKyIP0VLGGP4X9csxYUL6iDZLIoHEnaNz+WY9lm4xcZeq7XA6ZDQFPBgaYpVAwCr51TDKTEc7B7BjevnICprGS/5bLxmw+N04M5NizL+rNbngkNieRVEnejVWpWaM4Ay0TsWRUu1B3NqfBgYjyEqK1M6hgP6ya4p4MH1q1vw9KEexBU1Y7XlTKGqHAe7RnFThpNctccJt0OadEFU13AIn/z1AXzqxjXYkLKZW0z6g1G8eUUi0mup8WSdEGX0kq9gT97MZYsbSr2EKSGK1VrLeGC3FRVh1wBa8dK9b1medrvX5cDK1mrD852KJz9ZJIlpPVfyEPnjej/q9oGJrAMmeoNabxyxuZvvaLlUDpraBmxe04pgRMYrHUVPgMrK/q4RjEVkXLEkXQAYY2gKuCddEPWFJ4/gpVOD+OSvDySluxaTSFwr2W822RGt1dlbG4gOlJXsyVcCIsXVbn48UEEiv/WiNsso4YL5tTjUrW2+RqfgyU8FrSAqtzCJSJ5z4HC3dYGSVmnnMdI0p7L5OhqK48xgyIhsr1nRBLdTwnNFyrI51TeOT/36QM5soO1HeuGQGK5b2ZLx540Bz6Sya3Yc7cX2o724ZkUTjpwfw89fPpP7QdOAOUde0KJXY8uKim9vP4Hv7TwF1TQsPjEVanZE8nbFITFUe5xlPRzEiooR+Wysb6vFcCiOruHwlDZep0JDlRvDeUyHOtYTxJVLGwEgKdXTjKpy9OmR/Lw6IfKT33w9dC65bYDf7cTVyxqx/WhvUYZlf3v7CfzylbO4+bsvYv9Zaztq+9FeXL64wXL83GSqXiNxBV/43WEsbwngobsuwzUrmvCffzyRV1/3QjFXuwpENfbdj+zFt7efxNefPY57frbPuGobs2gzTJQf//CmJfibi633zcqVWSHywp64/3eH8eR+LYulmHYNIIQpOfocGI/ihy+cRjim7QuMR2V0DYdx9fJGzKv1GpZSKsOhGOIKR2u1x2idIKLjntEIHnqxw1KkD3WP4r9fOQtV5cbzm3vDbFnbirNDYZzsGy/sgFMYGI/i2cM9eOvaVridEv7uh3/Fp39zAJ/bdghfe+YYRkKaaHcOhnCidxyb12SO4gEtVz7fjdcHdp3G2aEwvrR1PdxOCfffvA5RWcVX/nBsWo4rGwPBdJFvqfYgJqt46dQA/v1vNuD+m9dh5/E+/O0Df8HZoRDGwsKTJ7um3Pn4DStx/eqMTXXLmlnxyVo9pwbr5tVg35lhAJrIFXuHPLV/zeFzo7jnp/vQPRJGtdeF269YiJO6VbOytRobdEspE8ZcyRovAh4nqk258g/+qR0/fqkDly6qx0UZRrh98akj2NMxhF0n+hCOKVjY4EedP9E3ZPPqVnwGh7D9aC9WtlanPX6q/GpvF+IKx7/cuAr1fjc+8avX8Yw+cWtU7wHyzb+7yCjIumGt9R9PU8CNgXGtB0y2Ta9QTMbDL3XgHRvm4Mpl2tXR0uYA7nnzUnx35ym897IF2KRfNRUDEcmbe6VftawJF8yvxb++Y43x2suaA/jwL/bh5u++iCuWaLdRJE8Ui1kh8m6nhN9/5JoZfc3GKjdGwnH8+MUOTERlPLDrNGp9LrRUe7DjaC9uv2Khsem6ak41NrTV4tnDvRiLxNP82V5jGo0mHnNqvTg/qs3N3KFPsNpxtDdN5IcnYth3Zhjr22rw9KEecA68c0Ny7+s5tV5saKvF9iO9+PB16RvX2ZAVFc8e7sUNerQuUFWOx/Z04vIlDVjeop04fvLBy42ff+2ZY3hg12ncdtlC7DjWi+UtASxqTM+MMn6XATeisoqJmJL15PzU6+cRjMj4wFVLkm6/9y3L8dvXuvG5bYfw+49cU7RMIrEZbu6wuHZeDZ68701J93vTiiZsu+9NuPuRV/DM4R5IDLarACXsw6ywa0rByjnV4FyLpP/zuRNYM7caT953Nd6xYS5ePDWAcEzB8d4gfC4HFtT7jXz+TNG8EA9RUTun1oue0QhO9Y3jzGAID
ollzJ3fdaIPisrx5Vs24Md3XYaGKjfesjrdFtm8pgWvnR2ZdOuAn/zlDdz76Kt4dHfyxuZLpwfQORTCHVcszPi4+65fjrY6H/71twexu33IGKBuhWhtkCvD5hd7OrGiJYDLFiePFfa5HfjCzetwonccD7/UkeuwpsSu43348UsdWN4SyOsksqSpCv9z79W4fnULljUHbJeWR9iHWRHJl4KbLpiHN69sNjIpan0uMMZww9pW/OQvb+DFUwM40RvEytYAJIkZPvnBrlFctawJPaMR+FwO1PpdaZH8vFofjvcEsf2oVq1615WL8eOXOtA1HML8+kTF7/ajfWiu9uCCtlpIEsO+z27JuNYta1rx7e0nsfNYH96zcUFex9czGsG3njsBAHh0TyfuumqxIVS/eLkTDVVu3Lh+TsbH+t1O/NtNa/Ghn+/TX9/ajwcSzawGxmOWEf+h7lG8fnYEn3/X2oyCecPaVmxe3YJvbz+Jd104D3P1vY2pcn40bFyJHeoexTefO4FVc2rw/95/ad7PUeN14ccfuGzGUjyJ2QlF8kWkxutCnd+NOr/bEJ7LFjeg2uPE9iO9ON4zbvjgDVVutNX5cLB7FE8dOIfrvrET9z76KgAtR77e7zKKn+bUetE/HsUzh85jfVsN7tykRczPH0u0KIjJKl443o/Nq1sg6e0WGGMZBXDdvBrMrfVOqmHZl39/BHGV48PXLcOJ3nHs1fc7jvcE8dzRXrzn0vlZi7Xetk4T3dYaDy5eWG95PwBYoM8J/c6Ok0Y2SiqP7umExynh3RfPt3yeL9y8DirnuPNHu9GR58SvTOw42osbvvknfODhV/CBh1/BN/54Am9dOwdPfOjKpJNsvjhLWIhGVD706Zph3E4J165qxh8OncfAeBSrTAMyLphfi+eO9OK+R1+Dx+nAi6cG0N4/jt6xaFJ+7txaLzgHXu8axebVrVjaHMDSlHYHezqGMB6VsTmHFQJo4r95TQv+hLMVcwAACSNJREFUfFJ7vc7BECayFGa9dGoATx04jw9ftwz3Xb8c1R4nHt3dCc45/m3bIVR7nfjHa5flfM3v3XEJnvqna4yeP1YsbQ7gK+/egL+cGsAt33sJ+8+OoHMwZPw71RfEtte68a4L51mmYQLAggY/Hvng5RiaiOGW772EF08OWN5XVTlCseTfAeccP3zhNP7XT/dicZMfv7xnE37z4avw+4+8CQ/ccUlRB2ITxFShT2UJuGFtK546oM06N4v8xQvr8PShHrzn0vn42A0rce3XduKxPZ3oC0aT0vJEQZR4LkBLhXz4pQ5jAMX2o73wOCW8aXlTnmuag5+/3Inr//MFANpszgfuuARXLUt+/IGuEXz88f1Y2ODHh65dBq/LgVsubsPje89iQ1st9nQM4d//ZoPRpC0bXpcjaUBINt53+UIsaarC//75PtzyvZcy3ud2iz0AM1csbcST+qbnXQ/vweduWov3X7ko6QqnLxjBh362Dyf7xvHd2y/BtSubEYkr+NffHMRvXuvGOy+Yi2/ceqGtZ68SswcS+RJw3coWOCQGReVYZUpbvOuqxbh4YT02LqoHYwxvXdeKX+3rglNiWLEq4VsLP3lOjdeY/7l5dQse/FM7/nxyAG9fPwfbj/biTcub8haiN69owg/uvBQTURkq5/jhn9rx/of24P6t63DHFVqPnt+9fg7/369eR1PAgwfff6kh0LdfsRA/e/kMvvjUEVy4oA63XZafrz9ZNi1txNMffTP+cnoAqWUBDQE3Lslh+wgWNPjx6/99FT7++H58/snDON4bxP03r4PLIeFQ9yju+eleDIVimFfnwwcf3oNPvHUVth/txWudI/jEDStx3/XLaaOUsA0k8iWg1u/CZYvrcbwnmBShe5yOpNYMd1yxCH84qOWWm9ubztWrXq9f02KIzaWL6lHnd+HeR1+FxLQTyGRSIhljSRulb1s/Bx997DV85reH8Llth8EAyCrHZYvr8f07L03KBV8ztwaXLKzDa2dH8OWt6409gGIwp9aLd19i7bvnS7XXhR/+/UZ844/H8f1dp/HLPZ2QGIOscsyr9eKJD12FJU1V+Njj+/H1Z4/D53LgB3deghvXz8395ARRRrBilLNPlo0bN/K9e/eWehkzyrGeMfSORXHtymbL+6gqx/X/uQtvDIbwxa3r8P4rFxs/27a/G5uWNiZ59TuP92HvG1qzMa/TgbuvWQK/e+rncUXPdxctFBqqPPj7TYsyVgsf7wmiY2DCMqOmnNlxtBevdmobxx6nA7ddvsBIV1VVjif2deGCBbVYrbcQJohygTG2j3O+Met9SOTLmx++cBpfefoYRZEEQaSRj8iTXVPmvO+KhegLRnFVnhuoBEEQZkjky5warwv/dtPaUi+DIAibQnnyBEEQFQyJPEEQRAVDIk8QBFHBkMgTBEFUMCTyBEEQFQyJPEEQRAVDIk8QBFHBkMgTBEFUMGXR1oAx1g/gTM47ZqYJgHVjcPtBx1Pe0PGUN7PteBZxzq0bYKFMRL4QGGN7c/VusBN0POUNHU95Q8eTDtk1BEEQFQyJPEEQRAVTCSL/YKkXMM3Q8ZQ3dDzlDR1PCrb35AmCIAhrKiGSJwiCICwgkScIgqhgbC3yjLEbGWPHGWOnGGOfKvV6JgtjbAFjbCdj7Ahj7DBj7KP67Q2MsecYYyf1/+tLvdbJwBhzMMZeY4w9pX+/hDG2W3+fHmeMuUu9xnxhjNUxxp5gjB1jjB1ljF1p5/eHMfZx/bN2iDH2GGPMa6f3hzH2Y8ZYH2PskOm2jO8H0/gv/bgOMMYuKd3KM2NxPF/XP28HGGO/ZYzVmX72af14jjPG3pbPa9hW5BljDgDfA/B2AGsBvI8xZrcRSjKAT3DO1wLYBOBe/Rg+BWAH53wFgB3693biowCOmr7/KoBvcc6XAxgGcHdJVjU1vgPgGc75agAXQjsuW74/jLE2AB8BsJFzvh6AA8BtsNf78xMAN6bcZvV+vB3ACv3fPQC+P0NrnAw/QfrxPAdgPef8AgAnAHwaAHRtuA3AOv0xD+g6mBXbijyAywGc4py3c85jAH4JYGuJ1zQpOOfnOeev6l8HoQlIG7TjeES/2yMAbinNCicPY2w+gHcC+JH+PQNwPYAn9LvY5ngYY7UA3gzgIQDgnMc45yOw8fsDbeSnjzHmBOAHcB42en84538CMJRys9X7sRXAT7nGywDqGGNzZ2al+ZHpeDjnf+Scy/q3LwOYr3+9FcAvOedRznkHgFPQdDArdhb5NgBnTd936bfZEsbYYgAXA9gNoJVzfl7/UQ+A1hItayp8G8C/AFD17xsBjJg+tHZ6n5YA6AfwsG4//YgxVgWbvj+c824A3wDQCU3cRwHsg33fH4HV+1EJGvEPAJ7Wv57S8dhZ5CsGxlgAwK8BfIxzPmb+GddyXG2R58oYuwlAH+d8X6nXMk04AVwC4Puc84sBTCDFmrHZ+1MPLRpcAmAegCqkWwW2xk7vRy4YY5+BZun+opDnsbPIdwNYYPp+vn6brWCMuaAJ/C8457/Rb+4Vl5X6/32lWt8k
uRrAzYyxN6DZZ9dD87TrdHsAsNf71AWgi3O+W//+CWiib9f3ZwuADs55P+c8DuA30N4zu74/Aqv3w7YawRj7AICbANzBE8VMUzoeO4v8KwBW6JkBbmgbEk+WeE2TQverHwJwlHP+TdOPngRwl/71XQC2zfTapgLn/NOc8/mc88XQ3o/nOed3ANgJ4Fb9bnY6nh4AZxljq/SbNgM4Apu+P9Bsmk2MMb/+2RPHY8v3x4TV+/EkgPfrWTabAIyabJ2yhTF2IzTL82bOecj0oycB3MYY8zDGlkDbUN6T8wk557b9B+Ad0HafTwP4TKnXM4X1vwnapeUBAPv1f++A5mPvAHASwHYADaVe6xSO7ToAT+lfL9U/jKcA/AqAp9Trm8RxXARgr/4e/Q+Aeju/PwDuB3AMwCEAPwPgsdP7A+AxaPsJcWhXWndbvR8AGLQMvNMADkLLKir5MeRxPKegee9CE35guv9n9OM5DuDt+bwGtTUgCIKoYOxs1xAEQRA5IJEnCIKoYEjkCYIgKhgSeYIgiAqGRJ4gCKKCIZEnCIKoYEjkCYIgKpj/H5CT39nSB6BlAAAAAElFTkSuQmCC\n", + "text/plain": [ + "
" + ] + }, + "metadata": { + "needs_background": "light" + }, + "output_type": "display_data" + } + ], + "source": [ + "plt.plot(rewards);" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Of course, hyperparams should be tuned, target network will improve stability, but you've got the idea :)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.3" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +}