diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 00000000..b5a6bdaa
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,2 @@
+tutorials/* linguist-vendored
+tests/qa_runbook.ipynb linguist-vendored
diff --git a/.gitignore b/.gitignore
index bac1c6d2..bbe9a163 100644
--- a/.gitignore
+++ b/.gitignore
@@ -7,6 +7,9 @@ playground.ipynb
*/*/.cache_dir
results*/
logs
+_hidden_local_dev*
+tmp_dir_new*/
+tmp/
notebooks/figures/
*.tsv
*.csv
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 62cee4a7..a45cf025 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -25,6 +25,11 @@ When adding new methods or APIs, unit tests are now enforced. To run existing te
cd pyvene
python -m unittest discover -p '*TestCase.py'
```
+For a specific test case, you can run
+```bash
+cd pyvene
+python -m unittest tests.integration_tests.ComplexInterventionWithGPT2TestCase
+```
When checking in new code, please also consider adding new tests in the same PR. Please include test results in the PR to make sure all the existing test cases are passing. Please see the `qa_runbook.ipynb` notebook for conventions on how to add test cases. The code coverage for this repository is currently `low`, and we are adding more automated tests.
#### Format
diff --git a/README.md b/README.md
index d3533aee..aae8232a 100644
--- a/README.md
+++ b/README.md
@@ -11,10 +11,10 @@
*This is a beta release (public testing).*
-# **Use _Activation Intervention_ to Interpret _Causal Mechanism_ of Model**
-**pyvene** supports customizable interventions on different neural architectures (e.g., RNN or Transformers). It supports complex intervention schemas (e.g., parallel or serialized interventions) and a wide range of intervention modes (e.g., static or trained interventions) at scale to gain interpretability insights.
+# A Library for _Understanding_ and _Improving_ PyTorch Models via Interventions
+Interventions on model-internal states are fundamental operations in many areas of AI, including model editing, steering, robustness, and interpretability. To facilitate such research, we introduce **pyvene**, an open-source Python library that supports customizable interventions on a range of different PyTorch modules. **pyvene** supports complex intervention schemes with an intuitive configuration format, and its interventions can be static or include trainable parameters.
-**Getting Started:** [](https://colab.research.google.com/github/stanfordnlp/pyvene/blob/main/tutorials/basic_tutorials/Basic_Intervention.ipynb) [**_pyvene_ 101**]
+**Getting Started:** [**Main _pyvene_ 101**](https://colab.research.google.com/github/stanfordnlp/pyvene/blob/main/pyvene/pyvene_101.ipynb)
## Installation
```bash
@@ -24,55 +24,45 @@ pip install pyvene
## _Wrap_ , _Intervene_ and _Share_
You can intervene with supported models as,
```python
-import pyvene
-from pyvene import IntervenableRepresentationConfig, IntervenableConfig, IntervenableModel
-from pyvene import VanillaIntervention
+import torch
+import pyvene as pv
-# provided wrapper for huggingface gpt2 model
-_, tokenizer, gpt2 = pyvene.create_gpt2()
+_, tokenizer, gpt2 = pv.create_gpt2()
-# turn gpt2 into intervenable_gpt2
-intervenable_gpt2 = IntervenableModel(
- intervenable_config = IntervenableConfig(
- intervenable_representations=[
- IntervenableRepresentationConfig(
- 0, # intervening layer 0
- "mlp_output", # intervening mlp output
- ),
- ],
- intervenable_interventions_type=VanillaIntervention
- ),
- model = gpt2
-)
+# wrap gpt2 with a static intervention that zeroes out the mlp output of layer 0
+pv_gpt2 = pv.IntervenableModel({
+ "layer": 0,
+ "component": "mlp_output",
+ "source_representation": torch.zeros(
+ gpt2.config.n_embd)
+}, model=gpt2)
-# intervene base with sources on the 4th token.
-original_outputs, intervened_outputs = intervenable_gpt2(
- tokenizer("The capital of Spain is", return_tensors="pt"),
- [tokenizer("The capital of Italy is", return_tensors="pt")],
- {"sources->base": 4}
+# run the intervention on the base input at token position 3 (the 4th token)
+orig_outputs, intervened_outputs = pv_gpt2(
+ base = tokenizer(
+ "The capital of Spain is",
+ return_tensors="pt"
+ ),
+ unit_locations={"base": 3}
)
-original_outputs.last_hidden_state - intervened_outputs.last_hidden_state
+print(intervened_outputs.last_hidden_state - orig_outputs.last_hidden_state)
```
which returns,
-
```
tensor([[[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
- [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
- [ 0.0008, -0.0078, -0.0066, ..., 0.0007, -0.0018, 0.0060]]])
+ [ 0.0483, -0.1212, -0.2816, ..., 0.1958, 0.0830, 0.0784],
+ [ 0.0519, 0.2547, -0.1631, ..., 0.0050, -0.0453, -0.1624]]])
```
-showing that we have causal effects only on the last token as expected. You can share your interventions through Huggingface with others with a single call,
+showing that zeroing out the mlp output at position 3 changes the representation of that token and the tokens after it, while earlier positions are untouched. You can share your interventions with others through Huggingface with a single call,
```python
-intervenable_gpt2.save(
+pv_gpt2.save(
save_directory="./your_gpt2_mounting_point/",
save_to_hf_hub=True,
- hf_repo_name="your_gpt2_mounting_point",
+ hf_repo_name="your_gpt2_mounting_point"
)
```
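+
+Anyone can then mount the shared intervention back onto their own copy of the model. A minimal sketch (the repo name mirrors the save call above; on the Hub it may need your username prefix):
+```python
+pv_gpt2 = pv.IntervenableModel.load(
+    "your_gpt2_mounting_point",  # or a local save directory
+    model=gpt2,
+    from_huggingface_hub=True,
+)
+```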
-We see interventions are knobs that can mount on models. And people can share their knobs with others to share knowledge about how to steer models. You can try this at [](https://colab.research.google.com/github/stanfordnlp/pyvene/blob/main/tutorials/basic_tutorials/Load_Save_and_Share_Interventions.ipynb) [**Intervention Sharing**]
-You can also use the `intervenable_gpt2` just like a regular torch model component inside another model, or another pipeline as,
+You can also use `pv_gpt2` just like a regular torch model component inside another model or pipeline:
```py
import torch
import torch.nn as nn
@@ -81,7 +71,7 @@ from typing import List, Optional, Tuple, Union, Dict
class ModelWithIntervenables(nn.Module):
def __init__(self):
super(ModelWithIntervenables, self).__init__()
- self.intervenable_gpt2 = intervenable_gpt2
+ self.pv_gpt2 = pv_gpt2
self.relu = nn.ReLU()
self.fc = nn.Linear(768, 1)
# Your other downstream components go here
@@ -94,18 +84,55 @@ class ModelWithIntervenables(nn.Module):
activations_sources: Optional[Dict] = None,
subspaces: Optional[List] = None,
):
- _, counterfactual_x = self.intervenable_gpt2(
+ _, counterfactual_x = self.pv_gpt2(
base,
sources,
unit_locations,
activations_sources,
subspaces
)
- counterfactual_x = counterfactual_x.last_hidden_state
-
- counterfactual_x = self.relu(counterfactual_x)
- counterfactual_x = self.fc(counterfactual_x)
- return counterfactual_x
+ return self.fc(self.relu(counterfactual_x.last_hidden_state))
+```
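+
+A minimal usage sketch, assuming the `tokenizer` and `pv_gpt2` from the quickstart above and that `forward` takes `base` and `unit_locations` keywords as in the snippet:
+```py
+model = ModelWithIntervenables()
+base = tokenizer("The capital of Spain is", return_tensors="pt")
+# one score per token position from the linear head
+scores = model(base, unit_locations={"base": 3})
+```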
+
+## Complex _Intervention Schema_ as an _Object_
+One key abstraction that **pyvene** provides is the encapsulation of the intervention schema. While the abstraction keeps the user interface simple, **pyvene** can still support relatively complex intervention schemas. The following helper function generates the schema configuration for *path patching* on individual attention heads at the output of the OV circuit (i.e., analyzing the causal effect of each individual component):
+```py
+import pyvene as pv
+
+def path_patching_config(
+    layer, last_layer,
+    component="head_attention_value_output", unit="h.pos",
+):
+    intervening_component = [
+        {"layer": layer, "component": component, "unit": unit, "group_key": 0}]
+    restoring_components = []
+    if not component.startswith("mlp_"):
+        restoring_components += [
+            {"layer": layer, "component": "mlp_output", "group_key": 1}]
+    for i in range(layer+1, last_layer):
+        restoring_components += [
+            {"layer": i, "component": "attention_output", "group_key": 1},
+            {"layer": i, "component": "mlp_output", "group_key": 1}
+        ]
+    intervenable_config = pv.IntervenableConfig(intervening_component + restoring_components)
+    return intervenable_config
+```
+You can then wrap the config generated by this function around a model. Once you have run your intervention, you can share your path patching with others:
+```py
+_, tokenizer, gpt2 = pv.create_gpt2()
+
+pv_gpt2 = pv.IntervenableModel(
+ path_patching_config(4, gpt2.config.n_layer),
+ model=gpt2
+)
+# saving the path
+pv_gpt2.save(
+ save_directory="./your_gpt2_path/"
+)
+# loading the path
+pv_gpt2 = pv.IntervenableModel.load(
+    "./your_gpt2_path/",
+ model=gpt2)
```
@@ -113,14 +140,34 @@ class ModelWithIntervenables(nn.Module):
| **Level** | **Tutorial** | **Run in Colab** | **Description** |
| --- | ------------- | ------------- | ------------- |
-| Beginner | [**Getting Started**](tutorials/basic_tutorials/Basic_Intervention.ipynb) | [](https://colab.research.google.com/github/stanfordnlp/pyvene/blob/main/tutorials/basic_tutorials/Basic_Intervention.ipynb) | Introduces basic static intervention on factual recall examples |
-| Beginner | [**Intervened Model Generation**](tutorials/advanced_tutorials/Intervened_Model_Generation.ipynb) | [](https://colab.research.google.com/github/stanfordnlp/pyvene/blob/main/tutorials/advanced_tutorials/Intervened_Model_Generation.ipynb) | Shows how to intervene a model during generation |
-| Intermediate | [**Intervene Your Local Models**](tutorials/basic_tutorials/Add_New_Model_Type.ipynb) | [](https://colab.research.google.com/github/stanfordnlp/pyvene/blob/main/tutorials/basic_tutorials/Add_New_Model_Type.ipynb) | Illustrates how to run this library with your own models |
+| Beginner | [**pyvene 101**](pyvene_101.ipynb) | [](https://colab.research.google.com/github/stanfordnlp/pyvene/blob/main/pyvene/pyvene_101.ipynb) | Introduces the basics of pyvene |
| Intermediate | [**ROME Causal Tracing**](tutorials/advanced_tutorials/Causal_Tracing.ipynb) | [](https://colab.research.google.com/github/stanfordnlp/pyvene/blob/main/tutorials/advanced_tutorials/Causal_Tracing.ipynb) | Reproduce ROME's Results on Factual Associations with GPT2-XL |
| Intermediate | [**Intervention v.s. Probing**](tutorials/advanced_tutorials/Probing_Gender.ipynb) | [](https://colab.research.google.com/github/stanfordnlp/pyvene/blob/main/tutorials/advanced_tutorials/Probing_Gender.ipynb) | Illustrates how to run trainable interventions and probing with pythia-6.9B |
| Advanced | [**Trainable Interventions for Causal Abstraction**](tutorials/advanced_tutorials/DAS_Main_Introduction.ipynb) | [](https://colab.research.google.com/github/stanfordnlp/pyvene/blob/main/tutorials/advanced_tutorials/DAS_Main_Introduction.ipynb) | Illustrates how to train an intervention to discover causal mechanisms of a neural model |
-## Causal Abstraction: From Interventions to Gain Interpretability Insights
+## Contributing to This Library
+Please see [our guidelines](CONTRIBUTING.md) about how to contribute to this repository.
+
+*Pull requests, bug reports, and all other forms of contribution are welcomed and highly encouraged!* :octocat:
+
+### Other Ways of Installation
+
+**Method 2: Install from the Repo**
+```bash
+pip install git+https://github.com/stanfordnlp/pyvene.git
+```
+
+**Method 3: Clone and Import**
+```bash
+git clone https://github.com/stanfordnlp/pyvene.git
+```
+and, from a parallel folder, import it into your project as,
+```python
+from pyvene import pyvene
+_, tokenizer, gpt2 = pyvene.create_gpt2()
+```
+
+## A Little Guide to Causal Abstraction: From Interventions to Interpretability Insights
Basic interventions are fun, but we cannot make any causal claims systematically. To gain actual interpretability insights, we want to measure the counterfactual behaviors of a model in a data-driven fashion. In other words, if the model responds systematically to your interventions, then you can start to associate certain regions in the network with a high-level concept. This process is also called alignment search over model internals.
### Understanding Causal Mechanisms with Static Interventions
@@ -178,28 +225,6 @@ intervenable.train(
```
where you need to pass in a trainable dataset and your customized loss and metrics functions. The trainable interventions can later be saved to your disk. You can also evaluate your interventions against customized objectives with `intervenable.evaluate()`.
-## Contributing to This Library
-Please see [our guidelines](CONTRIBUTING.md) about how to contribute to this repository.
-
-*Pull requests, bug reports, and all other forms of contribution are welcomed and highly encouraged!* :octocat:
-
-### Other Ways of Installation
-
-**Method 2: Install from the Repo**
-```bash
-pip install git+https://github.com/stanfordnlp/pyvene.git
-```
-
-**Method 3: Clone and Import**
-```bash
-git clone https://github.com/stanfordnlp/pyvene.git
-```
-and in parallel folder, import to your project as,
-```python
-from pyvene import pyvene
-_, tokenizer, gpt2 = pyvene.create_gpt2()
-```
-
## Related Works in Discovering Causal Mechanism of LLMs
If you would like to read more works on this area, here is a list of papers that try to align or discover the causal mechanisms of LLMs.
- [Causal Abstractions of Neural Networks](https://arxiv.org/abs/2106.02997): This paper introduces interchange intervention (a.k.a. activation patching or causal scrubbing). It tries to align a causal model with the model's representations.
diff --git a/pyvene/__init__.py b/pyvene/__init__.py
index 280702ae..059253d5 100644
--- a/pyvene/__init__.py
+++ b/pyvene/__init__.py
@@ -2,7 +2,7 @@
from .data_generators.causal_model import CausalModel
from .models.intervenable_base import IntervenableModel
from .models.configuration_intervenable_model import IntervenableConfig
-from .models.configuration_intervenable_model import IntervenableRepresentationConfig
+from .models.configuration_intervenable_model import RepresentationConfig
# Interventions
@@ -24,6 +24,8 @@
from .models.interventions import ZeroIntervention
from .models.interventions import LocalistRepresentationIntervention
from .models.interventions import DistributedRepresentationIntervention
+from .models.interventions import SourcelessIntervention
+from .models.interventions import NoiseIntervention
# Utils
diff --git a/pyvene/models/basic_utils.py b/pyvene/models/basic_utils.py
index b5236b69..cdffd45a 100644
--- a/pyvene/models/basic_utils.py
+++ b/pyvene/models/basic_utils.py
@@ -130,6 +130,7 @@ def get_list_depth(lst):
return 1 + max((get_list_depth(item) for item in lst), default=0)
return 0
+
def get_batch_size(model_input):
"""
Get batch size based on the input
@@ -141,3 +142,28 @@ def get_batch_size(model_input):
batch_size = v.shape[0]
break
return batch_size
+
+
+def GET_LOC(
+ LOC,
+ unit="h.pos",
+ batch_size=1,
+):
+ """
+    Convert a simple location into the nested unit-location format.
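+
+    A sketch of the expected output, e.g. for head 3 at position 2 with
+    batch_size=1:
+        GET_LOC((3, 2))  # -> [[[[3]], [[2]]]]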
+ """
+ if unit == "h.pos":
+ return [
+ [
+ [
+ [LOC[0]]
+ ] * batch_size,
+ [
+ [LOC[1]]
+ ] * batch_size
+ ]
+ ]
+ else:
+ raise NotImplementedError(
+ f"{unit} is not supported."
+ )
\ No newline at end of file
diff --git a/pyvene/models/configuration_intervenable_model.py b/pyvene/models/configuration_intervenable_model.py
index 2b334d9b..8432498e 100644
--- a/pyvene/models/configuration_intervenable_model.py
+++ b/pyvene/models/configuration_intervenable_model.py
@@ -8,15 +8,15 @@
from .interventions import VanillaIntervention
-IntervenableRepresentationConfig = namedtuple(
- "IntervenableRepresentationConfig",
- "intervenable_layer intervenable_representation_type "
- "intervenable_unit max_number_of_units "
- "intervenable_low_rank_dimension "
- "subspace_partition group_key intervention_link_key intervenable_moe "
+RepresentationConfig = namedtuple(
+ "RepresentationConfig",
+ "layer component unit "
+ "max_number_of_units "
+ "low_rank_dimension intervention_type "
+ "subspace_partition group_key intervention_link_key moe_key "
"source_representation hidden_source_representation",
defaults=(
- 0, "block_output", "pos", 1,
+ 0, "block_output", "pos", 1, None,
None, None, None, None, None, None, None),
)
@@ -24,49 +24,102 @@
class IntervenableConfig(PretrainedConfig):
def __init__(
self,
- intervenable_representations=[IntervenableRepresentationConfig()],
- intervenable_interventions_type=VanillaIntervention,
+ representations=[RepresentationConfig()],
+ intervention_types=VanillaIntervention,
mode="parallel",
- intervenable_interventions=[None],
+ interventions=[None],
sorted_keys=None,
+ model_type=None, # deprecating
+ # hidden fields for backlog
intervention_dimensions=None,
- intervenable_model_type=None,
**kwargs,
):
- if isinstance(intervenable_representations, list):
- self.intervenable_representations = intervenable_representations
- else:
- self.intervenable_representations = [intervenable_representations]
- self.intervenable_interventions_type = intervenable_interventions_type
+ if not isinstance(representations, list):
+ representations = [representations]
+
+ casted_representations = []
+ for reprs in representations:
+ if isinstance(reprs, RepresentationConfig):
+ casted_representations += [reprs]
+ elif isinstance(reprs, list):
+ casted_representations += [
+ RepresentationConfig(*reprs)]
+ elif isinstance(reprs, dict):
+ casted_representations += [
+ RepresentationConfig(**reprs)]
+ else:
+ raise ValueError(
+ f"{reprs} format in our representation list is not supported.")
+ self.representations = casted_representations
+ self.intervention_types = intervention_types
+ # the type inside reprs can overwrite
+ overwrite = False
+ overwrite_intervention_types = []
+ for reprs in self.representations:
+
+ if overwrite:
+ if reprs.intervention_type is None:
+ raise ValueError(
+                    "intervention_type, if used, should be specified for all representations")
+ if reprs.intervention_type is not None:
+ overwrite = True
+ overwrite_intervention_types += [reprs.intervention_type]
+ if None in overwrite_intervention_types:
+ raise ValueError(
+                "intervention_type, if used, should be specified for all representations")
+ if overwrite:
+ self.intervention_types = overwrite_intervention_types
+
self.mode = mode
- self.intervenable_interventions = intervenable_interventions
+ self.interventions = interventions
self.sorted_keys = sorted_keys
self.intervention_dimensions = intervention_dimensions
- self.intervenable_model_type = intervenable_model_type
+ self.model_type = model_type
super().__init__(**kwargs)
+
+ def add_intervention(self, representations):
+ if not isinstance(representations, list):
+ representations = [representations]
+ for reprs in representations:
+ if isinstance(reprs, RepresentationConfig):
+ self.representations += [reprs]
+ elif isinstance(reprs, list):
+ self.representations += [
+ RepresentationConfig(*reprs)]
+ elif isinstance(reprs, dict):
+ self.representations += [
+ RepresentationConfig(**reprs)]
+ else:
+ raise ValueError(
+ f"{reprs} format in our representation list is not supported.")
+ if self.representations[-1].intervention_type is None:
+ raise ValueError(
+ "intervention_type should be provided.")
+ self.intervention_types += [self.representations[-1].intervention_type]
+
def __repr__(self):
- intervenable_representations = []
- for reprs in self.intervenable_representations:
+ representations = []
+ for reprs in self.representations:
if isinstance(reprs, list):
- reprs = IntervenableRepresentationConfig(*reprs)
+ reprs = RepresentationConfig(*reprs)
new_d = {}
for k, v in reprs._asdict().items():
if type(v) not in {str, int, list, tuple, dict} and v is not None and v != [None]:
new_d[k] = "PLACEHOLDER"
else:
new_d[k] = v
- intervenable_representations += [new_d]
+ representations += [new_d]
_repr = {
- "intervenable_model_type": str(self.intervenable_model_type),
- "intervenable_representations": tuple(intervenable_representations),
- "intervenable_interventions_type": str(
- self.intervenable_interventions_type
+ "model_type": str(self.model_type),
+ "representations": tuple(representations),
+ "intervention_types": str(
+ self.intervention_types
),
"mode": self.mode,
- "intervenable_interventions": [
- str(intervenable_intervention)
- for intervenable_intervention in self.intervenable_interventions
+ "interventions": [
+ str(intervention)
+ for intervention in self.interventions
],
"sorted_keys": tuple(self.sorted_keys) if self.sorted_keys is not None else str(self.sorted_keys),
"intervention_dimensions": str(self.intervention_dimensions),
diff --git a/pyvene/models/intervenable_base.py b/pyvene/models/intervenable_base.py
index 7c1fa16e..cefb0862 100644
--- a/pyvene/models/intervenable_base.py
+++ b/pyvene/models/intervenable_base.py
@@ -1,4 +1,4 @@
-import json, logging
+import json, logging, torch
import numpy as np
from collections import OrderedDict
from typing import List, Optional, Tuple, Union, Dict
@@ -10,7 +10,7 @@
from .constants import CONST_QKV_INDICES
from .configuration_intervenable_model import (
IntervenableConfig,
- IntervenableRepresentationConfig,
+ RepresentationConfig,
)
from .interventions import (
TrainableIntervention,
@@ -29,13 +29,18 @@ class IntervenableModel(nn.Module):
Generic intervenable model. Alignments are specified in the config.
"""
- def __init__(self, intervenable_config, model, **kwargs):
+ def __init__(self, config, model, **kwargs):
super().__init__()
- self.intervenable_config = intervenable_config
- self.mode = intervenable_config.mode
- intervention_type = intervenable_config.intervenable_interventions_type
+ if isinstance(config, dict) or isinstance(config, list):
+ config = IntervenableConfig(
+ representations = config
+ )
+ self.config = config
+
+ self.mode = config.mode
+ intervention_type = config.intervention_types
self.is_model_stateless = is_stateless(model)
- self.intervenable_config.intervenable_model_type = type(model) # backfill
+ self.config.model_type = type(model) # backfill
self.use_fast = kwargs["use_fast"] if "use_fast" in kwargs else False
self.model_has_grad = False
if self.use_fast:
@@ -48,7 +53,7 @@ def __init__(self, intervenable_config, model, **kwargs):
# each representation can get a different intervention type
if type(intervention_type) == list:
assert len(intervention_type) == len(
- intervenable_config.intervenable_representations
+ config.representations
)
###
@@ -62,7 +67,7 @@ def __init__(self, intervenable_config, model, **kwargs):
# To support a new model type, you need to provide a
# mapping between supported abstract type and module name.
###
- self.intervenable_representations = {}
+ self.representations = {}
self.interventions = {}
self._key_collision_counter = {}
self.return_collect_activations = False
@@ -86,22 +91,22 @@ def __init__(self, intervenable_config, model, **kwargs):
_any_group_key = False
_original_key_order = []
for i, representation in enumerate(
- intervenable_config.intervenable_representations
+ config.representations
):
_key = self._get_representation_key(representation)
- if representation.intervenable_unit not in CONST_VALID_INTERVENABLE_UNIT:
+ if representation.unit not in CONST_VALID_INTERVENABLE_UNIT:
raise ValueError(
- f"{representation.intervenable_unit} is not supported as intervenable unit. Valid options: ",
+ f"{representation.unit} is not supported as intervenable unit. Valid options: ",
f"{CONST_VALID_INTERVENABLE_UNIT}",
)
if (
- intervenable_config.intervenable_interventions is not None
- and intervenable_config.intervenable_interventions[0] is not None
+ config.interventions is not None
+ and config.interventions[0] is not None
):
# we leave this option open but not sure if it is a desired one
- intervention = intervenable_config.intervenable_interventions[i]
+ intervention = config.interventions[i]
else:
intervention_function = (
intervention_type
@@ -111,7 +116,7 @@ def __init__(self, intervenable_config, model, **kwargs):
other_medata = representation._asdict()
other_medata["use_fast"] = self.use_fast
intervention = intervention_function(
- get_intervenable_dimension(
+ embed_dim=get_dimension(
get_internal_model_type(model), model.config, representation
), **other_medata
)
@@ -135,11 +140,11 @@ def __init__(self, intervenable_config, model, **kwargs):
):
self.return_collect_activations = True
- intervenable_module_hook = get_intervenable_module_hook(
+ module_hook = get_module_hook(
model, representation
)
- self.intervenable_representations[_key] = representation
- self.interventions[_key] = (intervention, intervenable_module_hook)
+ self.representations[_key] = representation
+ self.interventions[_key] = (intervention, module_hook)
self._key_getter_call_counter[
_key
] = 0 # we memo how many the hook is called,
@@ -150,28 +155,28 @@ def __init__(self, intervenable_config, model, **kwargs):
_original_key_order += [_key]
if representation.group_key is not None:
_any_group_key = True
- if self.intervenable_config.sorted_keys is not None:
+ if self.config.sorted_keys is not None:
logging.warn(
"The key is provided in the config. "
"Assuming this is loaded from a pretrained module."
)
if (
- self.intervenable_config.sorted_keys is not None
+ self.config.sorted_keys is not None
or "intervenables_sort_fn" not in kwargs
):
- self.sorted_intervenable_keys = _original_key_order
+ self.sorted_keys = _original_key_order
else:
# the key order is independent of group, it is used to read out intervention locations.
- self.sorted_intervenable_keys = kwargs["intervenables_sort_fn"](
- model, self.intervenable_representations
+ self.sorted_keys = kwargs["intervenables_sort_fn"](
+ model, self.representations
)
# check it follows topological order
if not check_sorted_intervenables_by_topological_order(
- model, self.intervenable_representations, self.sorted_intervenable_keys
+ model, self.representations, self.sorted_keys
):
raise ValueError(
- "The intervenable_representations in your config must follow the "
+ "The representations in your config must follow the "
"topological order of model components. E.g., layer 2 intervention "
"cannot appear before layer 1 in transformers."
)
@@ -185,8 +190,8 @@ def __init__(self, intervenable_config, model, **kwargs):
# In case they are grouped, we would expect the execution order is given
# by the source inputs.
_validate_group_keys = []
- for _key in self.sorted_intervenable_keys:
- representation = self.intervenable_representations[_key]
+ for _key in self.sorted_keys:
+ representation = self.representations[_key]
assert representation.group_key is not None
if representation.group_key in self._intervention_group:
self._intervention_group[representation.group_key].append(_key)
@@ -205,7 +210,7 @@ def __init__(self, intervenable_config, model, **kwargs):
else:
# assign each key to an unique group based on topological order
_group_key_inc = 0
- for _key in self.sorted_intervenable_keys:
+ for _key in self.sorted_keys:
self._intervention_group[_group_key_inc] = [_key]
_group_key_inc += 1
# sort group key with ascending order
@@ -234,8 +239,8 @@ def __str__(self):
"""
attr_dict = {
"model_type": self.model_type,
- "intervenable_interventions_type": self.intervenable_interventions_type,
- "alignabls": self.sorted_intervenable_keys,
+ "intervention_types": self.intervention_types,
+ "alignabls": self.sorted_keys,
"mode": self.mode,
}
return json.dumps(attr_dict, indent=4)
@@ -244,9 +249,9 @@ def _get_representation_key(self, representation):
"""
Provide unique key for each intervention
"""
- l = representation.intervenable_layer
- r = representation.intervenable_representation_type
- u = representation.intervenable_unit
+ l = representation.layer
+ r = representation.component
+ u = representation.unit
n = representation.max_number_of_units
key_proposal = f"layer.{l}.repr.{r}.unit.{u}.nunit.{n}"
if key_proposal not in self._key_collision_counter:
@@ -397,17 +402,17 @@ def save(
create_directory(save_directory)
- saving_config = copy.deepcopy(self.intervenable_config)
- saving_config.sorted_keys = self.sorted_intervenable_keys
- saving_config.intervenable_model_type = str(
- saving_config.intervenable_model_type
+ saving_config = copy.deepcopy(self.config)
+ saving_config.sorted_keys = self.sorted_keys
+ saving_config.model_type = str(
+ saving_config.model_type
)
- saving_config.intervenable_interventions_type = []
+ saving_config.intervention_types = []
saving_config.intervention_dimensions = []
# handle constant source reprs if passed in.
- serialized_intervenable_representations = []
- for reprs in saving_config.intervenable_representations:
+ serialized_representations = []
+ for reprs in saving_config.representations:
serialized_reprs = {}
for k, v in reprs._asdict().items():
if k == "hidden_source_representation":
@@ -417,17 +422,19 @@ def save(
if v is not None:
serialized_reprs["hidden_source_representation"] = True
serialized_reprs[k] = None
+ elif k == "intervention_type":
+ serialized_reprs[k] = None
else:
serialized_reprs[k] = v
- serialized_intervenable_representations += [
- IntervenableRepresentationConfig(**serialized_reprs)
+ serialized_representations += [
+ RepresentationConfig(**serialized_reprs)
]
- saving_config.intervenable_representations = \
- serialized_intervenable_representations
-
+ saving_config.representations = \
+ serialized_representations
+
for k, v in self.interventions.items():
intervention = v[0]
- saving_config.intervenable_interventions_type += [str(type(intervention))]
+ saving_config.intervention_types += [str(type(intervention))]
binary_filename = f"intkey_{k}.bin"
# save intervention binary file
if isinstance(intervention, TrainableIntervention) or \
@@ -452,7 +459,8 @@ def save(
repo_id=hf_repo_name,
repo_type="model",
)
- saving_config.intervention_dimensions += [intervention.interchange_dim]
+ saving_config.intervention_dimensions += [intervention.interchange_dim.tolist()]
+
# save metadata config
saving_config.save_pretrained(save_directory)
if save_to_hf_hub:
@@ -493,28 +501,21 @@ def load(load_directory, model, local_directory=None, from_huggingface_hub=False
# load config
saving_config = IntervenableConfig.from_pretrained(load_directory)
- saving_config.intervenable_model_type = get_type_from_string(
- saving_config.intervenable_model_type
- )
- if not isinstance(model, saving_config.intervenable_model_type):
- raise ValueError(
- f"model type {str(type(model))} is not "
- f"matching with {str(saving_config.intervenable_model_type)}"
- )
- casted_intervenable_interventions_type = []
- for type_str in saving_config.intervenable_interventions_type:
- casted_intervenable_interventions_type += [get_type_from_string(type_str)]
- saving_config.intervenable_interventions_type = (
- casted_intervenable_interventions_type
+ casted_intervention_types = []
+
+ for type_str in saving_config.intervention_types:
+ casted_intervention_types += [get_type_from_string(type_str)]
+ saving_config.intervention_types = (
+ casted_intervention_types
)
- casted_intervenable_representations = []
+ casted_representations = []
for (
- intervenable_representation_opts
- ) in saving_config.intervenable_representations:
- casted_intervenable_representations += [
- IntervenableRepresentationConfig(*intervenable_representation_opts)
+ representation_opts
+ ) in saving_config.representations:
+ casted_representations += [
+ RepresentationConfig(*representation_opts)
]
- saving_config.intervenable_representations = casted_intervenable_representations
+ saving_config.representations = casted_representations
intervenable = IntervenableModel(saving_config, model)
# load binary files
@@ -522,7 +523,8 @@ def load(load_directory, model, local_directory=None, from_huggingface_hub=False
intervention = v[0]
binary_filename = f"intkey_{k}.bin"
if isinstance(intervention, TrainableIntervention) or \
- intervention.is_source_constant:
+ (intervention.is_source_constant and \
+ not isinstance(intervention, SourcelessIntervention)):
if not os.path.exists(load_directory) or from_huggingface_hub:
hf_hub_download(
repo_id=load_directory,
@@ -530,26 +532,26 @@ def load(load_directory, model, local_directory=None, from_huggingface_hub=False
cache_dir=local_directory,
)
logging.warn(f"Loading trainable intervention from {binary_filename}.")
- saved_state_dict = torch.load(os.path.join(load_directory, binary_filename))
- if intervention.is_source_constant:
+ if intervention.is_source_constant and not isinstance(intervention, ZeroIntervention):
+ saved_state_dict = torch.load(os.path.join(load_directory, binary_filename))
intervention.register_buffer(
'source_representation', saved_state_dict['source_representation']
)
- intervention.load_state_dict(saved_state_dict)
- intervention.interchange_dim = saving_config.intervention_dimensions[i]
+ intervention.load_state_dict(saved_state_dict)
+ intervention.set_interchange_dim(saving_config.intervention_dimensions[i])
return intervenable
def _gather_intervention_output(
- self, output, intervenable_representations_key, unit_locations
+ self, output, representations_key, unit_locations
) -> torch.Tensor:
"""
Gather intervening activations from the output based on indices
"""
if (
- intervenable_representations_key in self._intervention_reverse_link
- and self._intervention_reverse_link[intervenable_representations_key]
+ representations_key in self._intervention_reverse_link
+ and self._intervention_reverse_link[representations_key]
in self.hot_activations
):
# hot gather
@@ -559,7 +561,7 @@ def _gather_intervention_output(
# enable the following line when an error is hit
# torch.autograd.set_detect_anomaly(True)
selected_output = self.hot_activations[
- self._intervention_reverse_link[intervenable_representations_key]
+ self._intervention_reverse_link[representations_key]
]
else:
# cold gather
@@ -569,15 +571,15 @@ def _gather_intervention_output(
original_output = output[0]
# gather subcomponent
original_output = self._output_to_subcomponent(
- original_output, intervenable_representations_key
+ original_output, representations_key
)
# gather based on intervention locations
selected_output = gather_neurons(
original_output,
- self.intervenable_representations[
- intervenable_representations_key
- ].intervenable_unit,
+ self.representations[
+ representations_key
+ ].unit,
unit_locations,
)
@@ -586,7 +588,7 @@ def _gather_intervention_output(
def _output_to_subcomponent(
self,
output,
- intervenable_representations_key,
+ representations_key,
) -> List[torch.Tensor]:
"""
Helps to get subcomponent of inputs/outputs of a hook
@@ -596,9 +598,9 @@ def _output_to_subcomponent(
"""
return output_to_subcomponent(
output,
- self.intervenable_representations[
- intervenable_representations_key
- ].intervenable_representation_type,
+ self.representations[
+ representations_key
+ ].component,
self.model_type,
self.model_config,
)
@@ -607,7 +609,7 @@ def _scatter_intervention_output(
self,
output,
intervened_representation,
- intervenable_representations_key,
+ representations_key,
unit_locations,
) -> torch.Tensor:
"""
@@ -619,19 +621,19 @@ def _scatter_intervention_output(
else:
original_output = output
- intervenable_representation_type = self.intervenable_representations[
- intervenable_representations_key
- ].intervenable_representation_type
- intervenable_unit = self.intervenable_representations[
- intervenable_representations_key
- ].intervenable_unit
+ component = self.representations[
+ representations_key
+ ].component
+ unit = self.representations[
+ representations_key
+ ].unit
# scatter in-place
_ = scatter_neurons(
original_output,
intervened_representation,
- intervenable_representation_type,
- intervenable_unit,
+ component,
+ unit,
unit_locations,
self.model_type,
self.model_config,
@@ -642,15 +644,15 @@ def _scatter_intervention_output(
def _intervention_getter(
self,
- intervenable_keys,
+ keys,
unit_locations,
) -> HandlerList:
"""
Create a list of getter handlers that will fetch activations
"""
handlers = []
- for key_i, key in enumerate(intervenable_keys):
- intervention, intervenable_module_hook = self.interventions[key]
+ for key_i, key in enumerate(keys):
+ intervention, module_hook = self.interventions[key]
def hook_callback(model, args, kwargs, output=None):
if self._is_generation:
@@ -701,7 +703,7 @@ def hook_callback(model, args, kwargs, output=None):
# set version for stateful models
self._intervention_state[key].inc_getter_version()
- handlers.append(intervenable_module_hook(hook_callback, with_kwargs=True))
+ handlers.append(module_hook(hook_callback, with_kwargs=True))
return HandlerList(handlers)
@@ -779,7 +781,7 @@ def _reconcile_stateful_cached_activations(
def _intervention_setter(
self,
- intervenable_keys,
+ keys,
unit_locations_base,
subspaces,
) -> HandlerList:
@@ -789,8 +791,8 @@ def _intervention_setter(
self._tidy_stateful_activations()
handlers = []
- for key_i, key in enumerate(intervenable_keys):
- intervention, intervenable_module_hook = self.interventions[key]
+ for key_i, key in enumerate(keys):
+ intervention, module_hook = self.interventions[key]
self._batched_setter_activation_select[key] = [
0 for _ in range(len(unit_locations_base[0]))
] # batch_size
@@ -881,7 +883,7 @@ def hook_callback(model, args, kwargs, output=None):
self._intervention_state[key].inc_setter_version()
- handlers.append(intervenable_module_hook(hook_callback, with_kwargs=True))
+ handlers.append(module_hook(hook_callback, with_kwargs=True))
return HandlerList(handlers)
@@ -976,16 +978,16 @@ def _wait_for_forward_with_parallel_intervention(
# at each aligning representations
if activations_sources is None:
assert len(sources) == len(self._intervention_group)
- for group_id, intervenable_keys in self._intervention_group.items():
+ for group_id, keys in self._intervention_group.items():
if sources[group_id] is None:
continue # smart jump for advance usage only
group_get_handlers = HandlerList([])
- for intervenable_key in intervenable_keys:
+ for key in keys:
get_handlers = self._intervention_getter(
- [intervenable_key],
+ [key],
[
unit_locations_sources[
- self.sorted_intervenable_keys.index(intervenable_key)
+ self.sorted_keys.index(key)
]
],
)
@@ -995,27 +997,27 @@ def _wait_for_forward_with_parallel_intervention(
else:
# simply patch in the ones passed in
self.activations = activations_sources
- for _, passed_in_intervenable_key in enumerate(self.activations):
- assert passed_in_intervenable_key in self.sorted_intervenable_keys
+ for _, passed_in_key in enumerate(self.activations):
+ assert passed_in_key in self.sorted_keys
# in parallel mode, we swap cached activations all into
# base at once
- for group_id, intervenable_keys in self._intervention_group.items():
- for intervenable_key in intervenable_keys:
+ for group_id, keys in self._intervention_group.items():
+ for key in keys:
# skip in case smart jump
- if intervenable_key in self.activations or \
- self.interventions[intervenable_key][0].is_source_constant:
+ if key in self.activations or \
+ self.interventions[key][0].is_source_constant:
set_handlers = self._intervention_setter(
- [intervenable_key],
+ [key],
[
unit_locations_base[
- self.sorted_intervenable_keys.index(intervenable_key)
+ self.sorted_keys.index(key)
]
],
# assume same group targeting the same subspace
[
subspaces[
- self.sorted_intervenable_keys.index(intervenable_key)
+ self.sorted_keys.index(key)
]
]
if subspaces is not None
@@ -1033,32 +1035,32 @@ def _wait_for_forward_with_serial_intervention(
subspaces: Optional[List] = None,
):
all_set_handlers = HandlerList([])
- for group_id, intervenable_keys in self._intervention_group.items():
+ for group_id, keys in self._intervention_group.items():
if sources[group_id] is None:
continue # smart jump for advance usage only
- for intervenable_key_id, intervenable_key in enumerate(intervenable_keys):
+ for key_id, key in enumerate(keys):
if group_id != len(self._intervention_group) - 1:
unit_locations_key = f"source_{group_id}->source_{group_id+1}"
else:
unit_locations_key = f"source_{group_id}->base"
unit_locations_source = unit_locations[unit_locations_key][0][
- intervenable_key_id
+ key_id
]
if unit_locations_source is None:
continue # smart jump for advance usage only
unit_locations_base = unit_locations[unit_locations_key][1][
- intervenable_key_id
+ key_id
]
if activations_sources is None:
# get activation from source_i
get_handlers = self._intervention_getter(
- [intervenable_key],
+ [key],
[unit_locations_source],
)
else:
- self.activations[intervenable_key] = activations_sources[
- intervenable_key
+ self.activations[key] = activations_sources[
+ key
]
# call once per group. each intervention is by its own group by default
if activations_sources is None:
@@ -1070,18 +1072,18 @@ def _wait_for_forward_with_serial_intervention(
all_set_handlers.remove()
all_set_handlers = HandlerList([])
- for intervenable_key in intervenable_keys:
+ for key in keys:
# skip in case smart jump
- if intervenable_key in self.activations or \
- self.interventions[intervenable_key][0].is_source_constant:
+ if key in self.activations or \
+ self.interventions[key][0].is_source_constant:
# set with intervened activation to source_i+1
set_handlers = self._intervention_setter(
- [intervenable_key],
+ [key],
[unit_locations_base],
# assume the order
[
subspaces[
- self.sorted_intervenable_keys.index(intervenable_key)
+ self.sorted_keys.index(key)
]
]
if subspaces is not None
@@ -1097,47 +1099,125 @@ def _broadcast_unit_locations(
intervention_group_size,
unit_locations
):
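+        # Shorthand forms accepted per key: a single int, a pair of ints
+        # (source position, base position), or a pair with None on one side;
+        # each is broadcast over the batch and intervention groups. Anything
+        # else is assumed to be an already-nested location list.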
- _unit_locations = {}
- for k, v in unit_locations.items():
- # special broadcast for base-only interventions
- is_base_only = False
- if k == "base":
- is_base_only = True
- k = "sources->base"
- if isinstance(v, int):
- if is_base_only:
- _unit_locations[k] = (None, [[[v]]*batch_size]*intervention_group_size)
+ if self.mode == "parallel":
+ _unit_locations = {}
+ for k, v in unit_locations.items():
+ # special broadcast for base-only interventions
+ is_base_only = False
+ if k == "base":
+ is_base_only = True
+ k = "sources->base"
+ if isinstance(v, int):
+ if is_base_only:
+ _unit_locations[k] = (None, [[[v]]*batch_size]*intervention_group_size)
+ else:
+ _unit_locations[k] = (
+ [[[v]]*batch_size]*intervention_group_size,
+ [[[v]]*batch_size]*intervention_group_size
+ )
+ self.use_fast = True
+ elif len(v) == 2 and isinstance(v[0], int) and isinstance(v[1], int):
+ _unit_locations[k] = (
+ [[[v[0]]]*batch_size]*intervention_group_size,
+ [[[v[1]]]*batch_size]*intervention_group_size
+ )
+ self.use_fast = True
+ elif len(v) == 2 and v[0] == None and isinstance(v[1], int):
+ _unit_locations[k] = (None, [[[v[1]]]*batch_size]*intervention_group_size)
+ self.use_fast = True
+ elif len(v) == 2 and isinstance(v[0], int) and v[1] == None:
+ _unit_locations[k] = ([[[v[0]]]*batch_size]*intervention_group_size, None)
+ self.use_fast = True
else:
+ if is_base_only:
+ _unit_locations[k] = (None, v)
+ else:
+ _unit_locations[k] = v
+ elif self.mode == "serial":
+ _unit_locations = {}
+ for k, v in unit_locations.items():
+ if isinstance(v, int):
_unit_locations[k] = (
[[[v]]*batch_size]*intervention_group_size,
[[[v]]*batch_size]*intervention_group_size
)
- self.use_fast = True
- elif len(v) == 2 and isinstance(v[0], int) and isinstance(v[1], int):
- _unit_locations[k] = (
- [[[v[0]]]*batch_size]*intervention_group_size,
- [[[v[1]]]*batch_size]*intervention_group_size
- )
- self.use_fast = True
- elif len(v) == 2 and v[0] == None and isinstance(v[1], int):
- _unit_locations[k] = (None, [[[v[1]]]*batch_size]*intervention_group_size)
- self.use_fast = True
- elif len(v) == 2 and isinstance(v[0], int) and v[1] == None:
- _unit_locations[k] = ([[[v[0]]]*batch_size]*intervention_group_size, None)
- self.use_fast = True
- else:
- if is_base_only:
- _unit_locations[k] = (None, v)
+ self.use_fast = True
+ elif len(v) == 2 and isinstance(v[0], int) and isinstance(v[1], int):
+ _unit_locations[k] = (
+ [[[v[0]]]*batch_size]*intervention_group_size,
+ [[[v[1]]]*batch_size]*intervention_group_size
+ )
+ self.use_fast = True
+ elif len(v) == 2 and v[0] == None and isinstance(v[1], int):
+ _unit_locations[k] = (None, [[[v[1]]]*batch_size]*intervention_group_size)
+ self.use_fast = True
+ elif len(v) == 2 and isinstance(v[0], int) and v[1] == None:
+ _unit_locations[k] = ([[[v[0]]]*batch_size]*intervention_group_size, None)
+ self.use_fast = True
else:
_unit_locations[k] = v
+ else:
+ raise ValueError(f"The mode {self.mode} is not supported.")
return _unit_locations
+ def _broadcast_source_representations(
+ self,
+ source_representations
+ ):
+ """Broadcast simple inputs to a dict"""
+ _source_representations = {}
+ if isinstance(source_representations, dict) or source_representations is None:
+            # pass through without broadcasting (advanced usage)
+ _source_representations = source_representations
+ elif isinstance(source_representations, list):
+ for i, key in enumerate(self.sorted_keys):
+ _source_representations[key] = source_representations[i]
+ elif isinstance(source_representations, torch.Tensor):
+ for key in self.sorted_keys:
+ _source_representations[key] = source_representations
+ else:
+ raise ValueError(
+                "Accepted types for source_representations are Dict, List, and torch.Tensor."
+ )
+ return _source_representations
+
+ def _broadcast_sources(
+ self,
+ sources
+ ):
+ """Broadcast simple inputs to a dict"""
+ _sources = sources
+ if len(sources) == 1 and len(self._intervention_group) > 1:
+            # pad with copies so there is one source per intervention group
+            for _ in range(len(self._intervention_group) - 1):
+ _sources += [sources[0]]
+ else:
+ _sources = sources
+ return _sources
+
+ def _broadcast_subspaces(
+ self,
+ batch_size,
+ intervention_group_size,
+ subspaces
+ ):
+ """Broadcast simple subspaces input"""
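+        # e.g., subspaces=0 with batch_size=2 and group size 1 -> [[[0], [0]]]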
+ _subspaces = subspaces
+ if isinstance(subspaces, int):
+ _subspaces = [[[subspaces]]*batch_size]*intervention_group_size
+
+ elif isinstance(subspaces, list) and isinstance(subspaces[0], int):
+ _subspaces = [[subspaces]*batch_size]*intervention_group_size
+ else:
+            # TODO: add more broadcast magic for subspaces here.
+ pass
+ return _subspaces
+
def forward(
self,
base,
sources: Optional[List] = None,
unit_locations: Optional[Dict] = None,
- activations_sources: Optional[Dict] = None,
+ source_representations: Optional[Dict] = None,
subspaces: Optional[List] = None,
):
"""
@@ -1205,6 +1285,11 @@ def forward(
Since we now support group-based intervention, the number of sources
should be equal to the total number of groups.
"""
+ # TODO: forgive me now, i will change this later.
+ activations_sources = source_representations
+ if sources is not None and not isinstance(sources, list):
+ sources = [sources]
+
self._cleanup_states()
# if no source inputs, we are calling a simple forward
@@ -1212,11 +1297,15 @@ def forward(
and unit_locations is None:
return self.model(**base), None
+ # broadcast
unit_locations = self._broadcast_unit_locations(
get_batch_size(base), len(self._intervention_group), unit_locations)
-
sources = [None]*len(self._intervention_group) if sources is None else sources
-
+ sources = self._broadcast_sources(sources)
+ activations_sources = self._broadcast_source_representations(activations_sources)
+ subspaces = self._broadcast_subspaces(
+ get_batch_size(base), len(self._intervention_group), subspaces)
+
self._input_validation(
base,
sources,
@@ -1258,7 +1347,7 @@ def forward(
collected_activations = []
if self.return_collect_activations:
- for key in self.sorted_intervenable_keys:
+ for key in self.sorted_keys:
if isinstance(
self.interventions[key][0],
CollectIntervention
@@ -1284,7 +1373,7 @@ def generate(
base,
sources: Optional[List] = None,
unit_locations: Optional[Dict] = None,
- activations_sources: Optional[Dict] = None,
+ source_representations: Optional[Dict] = None,
intervene_on_prompt: bool = False,
subspaces: Optional[List] = None,
**kwargs,
@@ -1314,6 +1403,11 @@ def generate(
counterfactual_outputs: the intervened output of the
base input.
"""
+ # TODO: forgive me now, i will change this later.
+ activations_sources = source_representations
+ if sources is not None and not isinstance(sources, list):
+ sources = [sources]
+
self._cleanup_states()
self._intervene_on_prompt = intervene_on_prompt
@@ -1323,10 +1417,14 @@ def generate(
# that means, we intervene on every generated tokens!
unit_locations = {"base": 0}
+ # broadcast
unit_locations = self._broadcast_unit_locations(
get_batch_size(base), len(self._intervention_group), unit_locations)
-
sources = [None]*len(self._intervention_group) if sources is None else sources
+ sources = self._broadcast_sources(sources)
+ activations_sources = self._broadcast_source_representations(activations_sources)
+ subspaces = self._broadcast_subspaces(
+ get_batch_size(base), len(self._intervention_group), subspaces)
self._input_validation(
base,
@@ -1369,7 +1467,7 @@ def generate(
collected_activations = []
if self.return_collect_activations:
- for key in self.sorted_intervenable_keys:
+ for key in self.sorted_keys:
if isinstance(
self.interventions[key][0],
CollectIntervention
@@ -1452,17 +1550,17 @@ def _batch_process_unit_location(self, inputs):
_curr_source_ind = 0
_parallel_aggr_left = []
_parallel_aggr_right = []
- for _, rep in self.intervenable_representations.items():
+ for _, rep in self.representations.items():
_curr_source_ind_inc = _curr_source_ind + 1
_prefix = f"source_{_curr_source_ind}->base"
_prefix_left = f"{_prefix}.0"
_prefix_right = f"{_prefix}.1"
_sub_loc_aggr_left = [] # 3d
_sub_loc_aggr_right = [] # 3d
- for sub_loc in rep.intervenable_unit.split("."):
+ for sub_loc in rep.unit.split("."):
_sub_loc_aggr_left += [inputs[f"{_prefix_left}.{sub_loc}"]]
_sub_loc_aggr_right += [inputs[f"{_prefix_right}.{sub_loc}"]]
- if len(rep.intervenable_unit.split(".")) == 1:
+ if len(rep.unit.split(".")) == 1:
_sub_loc_aggr_left = _sub_loc_aggr_left[0]
_sub_loc_aggr_right = _sub_loc_aggr_right[0]
_parallel_aggr_left += [_sub_loc_aggr_left] # 3D or 4D
@@ -1477,21 +1575,21 @@ def _batch_process_unit_location(self, inputs):
else:
# source into another source and finally to the base engaging different locations
_curr_source_ind = 0
- for _, rep in self.intervenable_representations.items():
+ for _, rep in self.representations.items():
_curr_source_ind_inc = _curr_source_ind + 1
_prefix = (
f"source_{_curr_source_ind}->base"
- if _curr_source_ind + 1 == len(self.intervenable_representations)
+ if _curr_source_ind + 1 == len(self.representations)
else f"source_{_curr_source_ind}->source{_curr_source_ind_inc}"
)
_prefix_left = f"{_prefix}.0"
_prefix_right = f"{_prefix}.1"
_sub_loc_aggr_left = [] # 3d
_sub_loc_aggr_right = [] # 3d
- for sub_loc in rep.intervenable_unit.split("."):
+ for sub_loc in rep.unit.split("."):
_sub_loc_aggr_left += [inputs[f"{_prefix_left}.{sub_loc}"]]
_sub_loc_aggr_right += [inputs[f"{_prefix_right}.{sub_loc}"]]
- if len(rep.intervenable_unit.split(".")) == 1:
+ if len(rep.unit.split(".")) == 1:
_sub_loc_aggr_left = _sub_loc_aggr_left[0]
_sub_loc_aggr_right = _sub_loc_aggr_right[0]
_curr_source_ind += 1
diff --git a/pyvene/models/intervention_utils.py b/pyvene/models/intervention_utils.py
index a2b81007..bfc4a105 100644
--- a/pyvene/models/intervention_utils.py
+++ b/pyvene/models/intervention_utils.py
@@ -37,7 +37,7 @@ def __repr__(self):
def __str__(self):
return json.dumps(self.state_dict, indent=4)
-def broadcast_tensor(x, target_shape):
+def broadcast_tensor_v1(x, target_shape):
# Ensure the last dimension of target_shape matches x's size
if target_shape[-1] != x.shape[-1]:
raise ValueError("The last dimension of target_shape must match the size of x")
@@ -50,6 +50,19 @@ def broadcast_tensor(x, target_shape):
broadcasted_x = x_reshaped.expand(*target_shape)
return broadcasted_x
+def broadcast_tensor_v2(x, target_shape):
+ # Ensure that target_shape has at least one dimension
+ if len(target_shape) < 1:
+ raise ValueError("Target shape must have at least one dimension")
+
+ # Extract the first n-1 dimensions from the target shape
+ target_dims_except_last = target_shape[:-1]
+
+ # Broadcast the input tensor x to match the target_dims_except_last and keep its last dimension
+ broadcasted_x = x.expand(*target_dims_except_last, x.shape[-1])
+
+ return broadcasted_x
+
def _do_intervention_by_swap(
base,
source,
@@ -66,7 +79,7 @@ def _do_intervention_by_swap(
# auto broadcast
if base.shape != source.shape:
try:
- source = broadcast_tensor(source, base.shape)
+ source = broadcast_tensor_v1(source, base.shape)
except:
raise ValueError(
f"source with shape {source.shape} cannot be broadcasted "
@@ -110,17 +123,20 @@ def _do_intervention_by_swap(
collect_base = []
for example_i in range(len(subspaces)):
# render subspace as column indices
- sel_subspace_indices = []
- for subspace in subspaces[example_i]:
- sel_subspace_indices.extend(
- [
- i
- for i in range(
- subspace_partition[subspace][0],
- subspace_partition[subspace][1],
- )
- ]
- )
+ if subspace_partition is None:
+ sel_subspace_indices = subspaces[example_i]
+ else:
+ sel_subspace_indices = []
+ for subspace in subspaces[example_i]:
+ sel_subspace_indices.extend(
+ [
+ i
+ for i in range(
+ subspace_partition[subspace][0],
+ subspace_partition[subspace][1],
+ )
+ ]
+ )
if mode == "interchange":
base[example_i, ..., sel_subspace_indices] = source[
example_i, ..., sel_subspace_indices
diff --git a/pyvene/models/interventions.py b/pyvene/models/interventions.py
index 41dda11a..3c903402 100644
--- a/pyvene/models/interventions.py
+++ b/pyvene/models/interventions.py
@@ -1,4 +1,5 @@
import torch
+import numpy as np
from abc import ABC, abstractmethod
from .layers import RotateLayer, LowRankRotateLayer, SubspaceLowRankRotateLayer
@@ -29,8 +30,11 @@ def __init__(self, **kwargs):
self.source_representation = None
def set_interchange_dim(self, interchange_dim):
- self.interchange_dim = interchange_dim
-
+ if isinstance(interchange_dim, int):
+ self.interchange_dim = torch.tensor(interchange_dim)
+ else:
+ self.interchange_dim = interchange_dim
+
@abstractmethod
def forward(self, base, source, subspaces=None):
pass
@@ -74,6 +78,15 @@ class ConstantSourceIntervention(Intervention):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.is_source_constant = True
+
+
+class SourcelessIntervention(Intervention):
+
+ """No source."""
+
+ def __init__(self, **kwargs):
+ super().__init__(**kwargs)
+ self.is_source_constant = True
class BasisAgnosticIntervention(Intervention):
@@ -100,9 +113,10 @@ class ZeroIntervention(ConstantSourceIntervention, LocalistRepresentationInterve
def __init__(self, embed_dim, **kwargs):
super().__init__(**kwargs)
- self.embed_dim = embed_dim
- self.interchange_dim = embed_dim
-
+ # TODO: put them into a parent class
+ self.register_buffer('embed_dim', torch.tensor(embed_dim))
+ self.register_buffer('interchange_dim', torch.tensor(embed_dim))
+
def forward(self, base, source=None, subspaces=None):
return _do_intervention_by_swap(
base,
@@ -124,9 +138,10 @@ class CollectIntervention(ConstantSourceIntervention):
def __init__(self, embed_dim, **kwargs):
super().__init__(**kwargs)
- self.embed_dim = embed_dim
- self.interchange_dim = embed_dim
-
+ # TODO: put them into a parent class
+ self.register_buffer('embed_dim', torch.tensor(embed_dim))
+ self.register_buffer('interchange_dim', torch.tensor(embed_dim))
+
def forward(self, base, source=None, subspaces=None):
return _do_intervention_by_swap(
base,
@@ -147,8 +162,9 @@ class SkipIntervention(BasisAgnosticIntervention, LocalistRepresentationInterven
"""Skip the current intervening layer's computation in the hook function."""
def __init__(self, embed_dim, **kwargs):
- super().__init__(**kwargs)
- self.interchange_dim = embed_dim # assuming full subspace
+        super().__init__(**kwargs)
+        # TODO: put them into a parent class
+        self.register_buffer('embed_dim', torch.tensor(embed_dim))
+        self.register_buffer('interchange_dim', torch.tensor(embed_dim))
def forward(self, base, source, subspaces=None):
# source here is the base example input to the hook
@@ -172,8 +188,9 @@ class VanillaIntervention(Intervention, LocalistRepresentationIntervention):
def __init__(self, embed_dim, **kwargs):
super().__init__(**kwargs)
- self.embed_dim = embed_dim
- self.interchange_dim = embed_dim # assuming full subspace
+ # TODO: put them into a parent class
+ self.register_buffer('embed_dim', torch.tensor(embed_dim))
+ self.register_buffer('interchange_dim', torch.tensor(embed_dim))
def forward(self, base, source, subspaces=None):
return _do_intervention_by_swap(
@@ -196,8 +213,9 @@ class AdditionIntervention(BasisAgnosticIntervention, LocalistRepresentationInte
def __init__(self, embed_dim, **kwargs):
super().__init__(**kwargs)
- self.embed_dim = embed_dim
- self.interchange_dim = embed_dim # assuming full subspace
+ # TODO: put them into a parent class
+ self.register_buffer('embed_dim', torch.tensor(embed_dim))
+ self.register_buffer('interchange_dim', torch.tensor(embed_dim))
def forward(self, base, source, subspaces=None):
return _do_intervention_by_swap(
@@ -220,12 +238,9 @@ class SubtractionIntervention(BasisAgnosticIntervention, LocalistRepresentationI
def __init__(self, embed_dim, **kwargs):
super().__init__(**kwargs)
- self.embed_dim = embed_dim
- self.interchange_dim = embed_dim # assuming full subspace
- self.is_repr_distributed = False
-
- def set_interchange_dim(self, interchange_dim):
- self.interchange_dim = interchange_dim
+ # TODO: put them into a parent class
+ self.register_buffer('embed_dim', torch.tensor(embed_dim))
+ self.register_buffer('interchange_dim', torch.tensor(embed_dim))
def forward(self, base, source, subspaces=None):
@@ -251,8 +266,9 @@ def __init__(self, embed_dim, **kwargs):
super().__init__(**kwargs)
rotate_layer = RotateLayer(embed_dim)
self.rotate_layer = torch.nn.utils.parametrizations.orthogonal(rotate_layer)
- self.embed_dim = embed_dim
- self.interchange_dim = embed_dim # assuming full subspace
+ # TODO: put them into a parent class
+ self.register_buffer('embed_dim', torch.tensor(embed_dim))
+ self.register_buffer('interchange_dim', torch.tensor(embed_dim))
def forward(self, base, source, subspaces=None):
rotated_base = self.rotate_layer(base)
@@ -281,8 +297,9 @@ class BoundlessRotatedSpaceIntervention(TrainableIntervention, DistributedRepres
def __init__(self, embed_dim, **kwargs):
super().__init__(**kwargs)
- self.embed_dim = embed_dim
- self.interchange_dim = embed_dim # assuming full subspace
+ # TODO: put them into a parent class
+ self.register_buffer('embed_dim', torch.tensor(embed_dim))
+ self.register_buffer('interchange_dim', torch.tensor(embed_dim))
rotate_layer = RotateLayer(embed_dim)
self.rotate_layer = torch.nn.utils.parametrizations.orthogonal(rotate_layer)
self.intervention_boundaries = torch.nn.Parameter(
@@ -302,10 +319,6 @@ def get_temperature(self):
def set_temperature(self, temp: torch.Tensor):
self.temperature.data = temp
- def set_interchange_dim(self, interchange_dim):
- """interchange dim is learned and can not be set"""
- assert False
-
def set_intervention_boundaries(self, intervention_boundaries):
self.intervention_boundaries = torch.nn.Parameter(
torch.tensor([intervention_boundaries]), requires_grad=True
@@ -352,7 +365,9 @@ def __init__(self, embed_dim, **kwargs):
torch.tensor([100] * embed_dim), requires_grad=True
)
self.temperature = torch.nn.Parameter(torch.tensor(50.0))
- self.embed_dim = embed_dim
+ # TODO: put them into a parent class
+ self.register_buffer('embed_dim', torch.tensor(embed_dim))
+ self.register_buffer('interchange_dim', torch.tensor(embed_dim))
def get_boundary_parameters(self):
return self.intervention_boundaries
@@ -363,10 +378,6 @@ def get_temperature(self):
def set_temperature(self, temp: torch.Tensor):
self.temperature.data = temp
- def set_interchange_dim(self, interchange_dim):
- """interchange dim is learned and can not be set"""
- assert False
-
def forward(self, base, source, subspaces=None):
batch_size = base.shape[0]
rotated_base = self.rotate_layer(base)
@@ -396,10 +407,11 @@ class LowRankRotatedSpaceIntervention(TrainableIntervention, DistributedRepresen
def __init__(self, embed_dim, **kwargs):
super().__init__(**kwargs)
- rotate_layer = LowRankRotateLayer(embed_dim, kwargs["intervenable_low_rank_dimension"])
+ rotate_layer = LowRankRotateLayer(embed_dim, kwargs["low_rank_dimension"])
self.rotate_layer = torch.nn.utils.parametrizations.orthogonal(rotate_layer)
- self.embed_dim = embed_dim
- self.interchange_dim = None
+ # TODO: put them into a parent class
+ self.register_buffer('embed_dim', torch.tensor(embed_dim))
+ self.register_buffer('interchange_dim', torch.tensor(embed_dim))
def forward(self, base, source, subspaces=None):
rotated_base = self.rotate_layer(base)
@@ -484,13 +496,11 @@ def __init__(self, embed_dim, **kwargs):
self.pca_std = torch.nn.Parameter(
torch.tensor(pca_std, dtype=torch.float32), requires_grad=False
)
- self.interchange_dim = 10 # default to be 10.
- self.embed_dim = embed_dim
+ # TODO: put them into a parent class
+ self.register_buffer('embed_dim', torch.tensor(embed_dim))
+ self.register_buffer('interchange_dim', torch.tensor(kwargs["low_rank_dimension"]))
self.trainble = False
- def set_interchange_dim(self, interchange_dim):
- self.interchange_dim = interchange_dim
-
def forward(self, base, source, subspaces=None):
base_norm = (base - self.pca_mean) / self.pca_std
source_norm = (source - self.pca_mean) / self.pca_std
@@ -513,3 +523,24 @@ def forward(self, base, source, subspaces=None):
def __str__(self):
return f"PCARotatedSpaceIntervention(embed_dim={self.embed_dim})"
+
+class NoiseIntervention(ConstantSourceIntervention, LocalistRepresentationIntervention):
+ """Noise intervention"""
+
+ def __init__(self, embed_dim, **kwargs):
+        super().__init__(**kwargs)
+        self.embed_dim = embed_dim
+        self.interchange_dim = embed_dim
+ rs = np.random.RandomState(1)
+ prng = lambda *shape: rs.randn(*shape)
+ noise_level = kwargs["noise_leve"] \
+ if "noise_leve" in kwargs else 0.13462981581687927
+ self.register_buffer('noise', torch.from_numpy(
+ prng(1, 4, embed_dim)))
+ self.register_buffer('noise_level', torch.tensor(noise_level))
+
+ def forward(self, base, source=None, subspaces=None):
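+        # add the fixed, seeded noise (scaled by noise_level) to base in place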
+ base[..., : self.interchange_dim] += self.noise * self.noise_level
+ return base
+
+ def __str__(self):
+ return f"NoiseIntervention(embed_dim={self.embed_dim})"
\ No newline at end of file
diff --git a/pyvene/models/modeling_utils.py b/pyvene/models/modeling_utils.py
index 63c8a3bb..ef94e1ab 100644
--- a/pyvene/models/modeling_utils.py
+++ b/pyvene/models/modeling_utils.py
@@ -91,11 +91,11 @@ def getattr_for_torch_module(model, parameter_name):
return current_module
-def get_intervenable_dimension(model_type, model_config, representation) -> int:
+def get_dimension(model_type, model_config, representation) -> int:
"""Based on the representation, get the aligning dimension size"""
dimension_proposals = type_to_dimension_mapping[model_type][
- representation.intervenable_representation_type
+ representation.component
]
for proposal in dimension_proposals:
if "*" in proposal:
@@ -143,20 +143,20 @@ def get_representation_dimension_by_type(
assert False
-def get_intervenable_module_hook(model, representation) -> nn.Module:
+def get_module_hook(model, representation) -> nn.Module:
"""Render the intervening module with a hook"""
type_info = type_to_module_mapping[get_internal_model_type(model)][
- representation.intervenable_representation_type
+ representation.component
]
parameter_name = type_info[0]
hook_type = type_info[1]
- if "%s" in parameter_name and representation.intervenable_moe is None:
+ if "%s" in parameter_name and representation.moe_key is None:
# we assume it is for the layer.
- parameter_name = parameter_name % (representation.intervenable_layer)
+ parameter_name = parameter_name % (representation.layer)
else:
parameter_name = parameter_name % (
- int(representation.intervenable_layer),
- int(representation.intervenable_moe)
+ int(representation.layer),
+ int(representation.moe_key)
)
module = getattr_for_torch_module(model, parameter_name)
module_hook = getattr(module, hook_type)
@@ -165,7 +165,7 @@ def get_intervenable_module_hook(model, representation) -> nn.Module:
def check_sorted_intervenables_by_topological_order(
- model, intervenable_representations, sorted_intervenable_keys
+ model, representations, sorted_keys
):
"""Sort the intervention with topology in transformer arch"""
if is_transformer(model):
@@ -176,7 +176,7 @@ def check_sorted_intervenables_by_topological_order(
TOPOLOGICAL_ORDER = CONST_GRU_TOPOLOGICAL_ORDER
scores = {}
- for k, _ in intervenable_representations.items():
+ for k, _ in representations.items():
l = 100*(int(k.split(".")[1]) + 1)
r = 10*TOPOLOGICAL_ORDER.index(k.split(".")[3])
# incoming order in case they are ordered
@@ -184,7 +184,7 @@ def check_sorted_intervenables_by_topological_order(
scores[k] = l + r + o
sorted_keys_by_topological_order = sorted(scores.keys(), key=lambda x: scores[x])
- return sorted_intervenable_keys == sorted_keys_by_topological_order
+ return sorted_keys == sorted_keys_by_topological_order
class HandlerList:
@@ -215,14 +215,14 @@ def bsd_to_b_sd(tensor):
return tensor.reshape(b, s * d)
-def b_sd_to_bsd(tensor, d):
+def b_sd_to_bsd(tensor, s):
"""
Convert a tensor of shape (b, s*d) back to (b, s, d).
"""
if tensor is None:
return tensor
b, sd = tensor.shape
- s = sd // d
+ d = sd // s
return tensor.reshape(b, s, d)
@@ -236,21 +236,23 @@ def bhsd_to_bs_hd(tensor):
return tensor.permute(0, 2, 1, 3).reshape(b, s, h * d)
-def bs_hd_to_bhsd(tensor, d):
+def bs_hd_to_bhsd(tensor, h):
"""
Convert a tensor of shape (b, s, h*d) back to (b, h, s, d).
"""
if tensor is None:
return tensor
b, s, hd = tensor.shape
- h = hd // d
+
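+    # recover the per-head dimension d from the flattened h*d size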
+ d = hd // h
+
return tensor.reshape(b, s, h, d).permute(0, 2, 1, 3)
-def gather_neurons(tensor_input, intervenable_unit, unit_locations_as_list):
+def gather_neurons(tensor_input, unit, unit_locations_as_list):
"""Gather intervening neurons"""
- if "." in intervenable_unit:
+ if "." in unit:
unit_locations = (
torch.tensor(unit_locations_as_list[0], device=tensor_input.device),
torch.tensor(unit_locations_as_list[1], device=tensor_input.device),
@@ -260,7 +262,7 @@ def gather_neurons(tensor_input, intervenable_unit, unit_locations_as_list):
unit_locations_as_list, device=tensor_input.device
)
- if intervenable_unit in {"pos", "h"}:
+ if unit in {"pos", "h"}:
tensor_output = torch.gather(
tensor_input,
1,
@@ -270,7 +272,7 @@ def gather_neurons(tensor_input, intervenable_unit, unit_locations_as_list):
)
return tensor_output
- elif intervenable_unit in {"h.pos"}:
+ elif unit in {"h.pos"}:
# we assume unit_locations is a tuple
head_unit_locations = unit_locations[0]
pos_unit_locations = unit_locations[1]
@@ -282,7 +284,7 @@ def gather_neurons(tensor_input, intervenable_unit, unit_locations_as_list):
*head_unit_locations.shape, *(1,) * (len(tensor_input.shape) - 2)
).expand(-1, -1, *tensor_input.shape[2:]),
) # b, h, s, d
- d = head_tensor_output.shape[-1]
+ d = head_tensor_output.shape[1]
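+        # note: d here holds the number of gathered heads (h), matching the
+        # updated bs_hd_to_bhsd(tensor, h) signature used below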
pos_tensor_input = bhsd_to_bs_hd(head_tensor_output)
pos_tensor_output = torch.gather(
pos_tensor_input,
@@ -294,11 +296,11 @@ def gather_neurons(tensor_input, intervenable_unit, unit_locations_as_list):
tensor_output = bs_hd_to_bhsd(pos_tensor_output, d)
return tensor_output # b, num_unit (h), num_unit (pos), d
- elif intervenable_unit in {"t"}:
+ elif unit in {"t"}:
# for stateful models, intervention location is guarded outside gather
return tensor_input
- elif intervenable_unit in {"dim", "pos.dim", "h.dim", "h.pos.dim"}:
- assert False, f"Not Implemented Gathering with Unit = {intervenable_unit}"
+ elif unit in {"dim", "pos.dim", "h.dim", "h.pos.dim"}:
+ assert False, f"Not Implemented Gathering with Unit = {unit}"
def split_heads(tensor, num_heads, attn_head_size):
@@ -311,11 +313,11 @@ def split_heads(tensor, num_heads, attn_head_size):
def output_to_subcomponent(
- output, intervenable_representation_type, model_type, model_config
+ output, representation_type, model_type, model_config
):
if (
- "head" in intervenable_representation_type
- or intervenable_representation_type
+ "head" in representation_type
+ or representation_type
in {"query_output", "key_output", "value_output"}
):
n_embd = get_representation_dimension_by_type(
@@ -333,7 +335,7 @@ def output_to_subcomponent(
hf_models.gpt2.modeling_gpt2.GPT2Model,
hf_models.gpt2.modeling_gpt2.GPT2LMHeadModel,
}:
- if intervenable_representation_type in {
+ if representation_type in {
"query_output",
"key_output",
"value_output",
@@ -342,7 +344,7 @@ def output_to_subcomponent(
"head_value_output",
}:
qkv = output.split(n_embd, dim=2)
- if intervenable_representation_type in {
+ if representation_type in {
"head_query_output",
"head_key_output",
"head_value_output",
@@ -352,13 +354,13 @@ def output_to_subcomponent(
split_heads(qkv[1], num_heads, attn_head_size),
split_heads(qkv[2], num_heads, attn_head_size),
) # each with (batch, head, seq_length, head_features)
- return qkv[CONST_QKV_INDICES[intervenable_representation_type]]
- elif intervenable_representation_type in {"head_attention_value_output"}:
+ return qkv[CONST_QKV_INDICES[representation_type]]
+ elif representation_type in {"head_attention_value_output"}:
return split_heads(output, num_heads, attn_head_size)
else:
return output
elif model_type in {GRUModel, GRULMHeadModel, GRUForClassification}:
- if intervenable_representation_type in {
+ if representation_type in {
"reset_x2h_output",
"new_x2h_output",
"reset_h2h_output",
@@ -369,15 +371,15 @@ def output_to_subcomponent(
n_embd = get_representation_dimension_by_type(
model_type, model_config, "cell_output"
)
- start_index = CONST_RUN_INDICES[intervenable_representation_type] * n_embd
+ start_index = CONST_RUN_INDICES[representation_type] * n_embd
end_index = (
- CONST_RUN_INDICES[intervenable_representation_type] + 1
+ CONST_RUN_INDICES[representation_type] + 1
) * n_embd
return output[..., start_index:end_index]
else:
return output
else:
- if intervenable_representation_type in {
+ if representation_type in {
"head_query_output",
"head_key_output",
"head_value_output",
@@ -391,14 +393,14 @@ def output_to_subcomponent(
def scatter_neurons(
tensor_input,
replacing_tensor_input,
- intervenable_representation_type,
- intervenable_unit,
+ representation_type,
+ unit,
unit_locations_as_list,
model_type,
model_config,
use_fast,
):
- if "." in intervenable_unit:
+ if "." in unit:
# extra dimension for multi-level intervention
unit_locations = (
torch.tensor(unit_locations_as_list[0], device=tensor_input.device),
@@ -410,8 +412,8 @@ def scatter_neurons(
)
if (
- "head" in intervenable_representation_type
- or intervenable_representation_type
+ "head" in representation_type
+ or representation_type
in {"query_output", "key_output", "value_output"}
):
n_embd = get_representation_dimension_by_type(
@@ -430,18 +432,18 @@ def scatter_neurons(
hf_models.gpt2.modeling_gpt2.GPT2LMHeadModel,
}:
if (
- "query" in intervenable_representation_type
- or "key" in intervenable_representation_type
- or "value" in intervenable_representation_type
- ) and "attention" not in intervenable_representation_type:
- start_index = CONST_QKV_INDICES[intervenable_representation_type] * n_embd
+ "query" in representation_type
+ or "key" in representation_type
+ or "value" in representation_type
+ ) and "attention" not in representation_type:
+ start_index = CONST_QKV_INDICES[representation_type] * n_embd
end_index = (
- CONST_QKV_INDICES[intervenable_representation_type] + 1
+ CONST_QKV_INDICES[representation_type] + 1
) * n_embd
else:
start_index, end_index = None, None
elif model_type in {GRUModel, GRULMHeadModel, GRUForClassification}:
- if intervenable_representation_type in {
+ if representation_type in {
"reset_x2h_output",
"new_x2h_output",
"reset_h2h_output",
@@ -452,27 +454,27 @@ def scatter_neurons(
n_embd = get_representation_dimension_by_type(
model_type, model_config, "cell_output"
)
- start_index = CONST_RUN_INDICES[intervenable_representation_type] * n_embd
+ start_index = CONST_RUN_INDICES[representation_type] * n_embd
end_index = (
- CONST_RUN_INDICES[intervenable_representation_type] + 1
+ CONST_RUN_INDICES[representation_type] + 1
) * n_embd
else:
start_index, end_index = None, None
else:
start_index, end_index = None, None
- if intervenable_unit == "t":
+ if unit == "t":
# time series models, e.g., gru
for batch_i, _ in enumerate(unit_locations):
tensor_input[batch_i, start_index:end_index] = replacing_tensor_input[
batch_i
]
else:
- if "head" in intervenable_representation_type:
+ if "head" in representation_type:
start_index = 0 if start_index is None else start_index
end_index = 0 if end_index is None else end_index
# head-based scattering
- if intervenable_unit in {"h.pos"}:
+ if unit in {"h.pos"}:
# we assume unit_locations is a tuple
for head_batch_i, head_locations in enumerate(unit_locations[0]):
for head_loc_i, head_loc in enumerate(head_locations):
@@ -515,16 +517,17 @@ def do_intervention(
):
"""Do the actual intervention"""
- d = base_representation.shape[-1]
-
+ num_unit = base_representation.shape[1]
+
# flatten
original_base_shape = base_representation.shape
if len(original_base_shape) == 2 or \
- isinstance(intervention, LocalistRepresentationIntervention):
+ (isinstance(intervention, LocalistRepresentationIntervention)):
# no pos dimension, e.g., gru
base_representation_f = base_representation
source_representation_f = source_representation
elif len(original_base_shape) == 3:
+
# b, num_unit (pos), d -> b, num_unit*d
base_representation_f = bsd_to_b_sd(base_representation)
source_representation_f = bsd_to_b_sd(source_representation)
@@ -534,20 +537,22 @@ def do_intervention(
source_representation_f = bhsd_to_bs_hd(source_representation)
else:
assert False # what's going on?
-
+
intervened_representation = intervention(
base_representation_f, source_representation_f, subspaces
)
-
+
+ post_d = intervened_representation.shape[-1]
+
# unflatten
if len(original_base_shape) == 2 or \
isinstance(intervention, LocalistRepresentationIntervention):
# no pos dimension, e.g., gru
pass
elif len(original_base_shape) == 3:
- intervened_representation = b_sd_to_bsd(intervened_representation, d)
+ intervened_representation = b_sd_to_bsd(intervened_representation, num_unit)
elif len(original_base_shape) == 4:
- intervened_representation = bs_hd_to_bhsd(intervened_representation, d)
+ intervened_representation = bs_hd_to_bhsd(intervened_representation, num_unit)
else:
assert False # what's going on?
@@ -555,7 +560,7 @@ def do_intervention(
def simple_output_to_subcomponent(
- output, intervenable_representation_type, model_config
+ output, representation_type, model_config
):
"""This is an oversimplied version for demo"""
return output
@@ -564,8 +569,8 @@ def simple_output_to_subcomponent(
def simple_scatter_intervention_output(
original_output,
intervened_representation,
- intervenable_representation_type,
- intervenable_unit,
+ representation_type,
+ unit,
unit_locations,
model_config,
):
diff --git a/pyvene_101.ipynb b/pyvene_101.ipynb
new file mode 100644
index 00000000..c1babeda
--- /dev/null
+++ b/pyvene_101.ipynb
@@ -0,0 +1,1656 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "ba6c7e19",
+ "metadata": {},
+ "source": [
+ "# Introduction to pyvene\n",
+ "This tutorial shows simple runnable code snippets of how to do different kinds of interventions on neural networks with pyvene."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9d6994fa",
+ "metadata": {},
+ "source": [
+ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/stanfordnlp/pyvene/blob/main/pyvene/pyvene_101.ipynb)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "d123a2ba",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "__author__ = \"Zhengxuan Wu\"\n",
+ "__version__ = \"01/20/2024\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "26298448-91eb-4cad-85bf-ec5fef436e1d",
+ "metadata": {},
+ "source": [
+ " # Table of Contents \n",
+ "1. [Set-up](#Set-up) \n",
+ "1. [pyvene 101](#pyvene-101) \n",
+ " 1. [Zero-out Intervention](#Simple-zero-out-intervention) \n",
+ " 1. [Zero-out & Subspaces](#Zero-out-&-the-notion-of-subspaces)\n",
+ " 1. [Interchange Intervention](#Interchange-Interventions)\n",
+ " 1. [Intervention Config](#Intervention-Configuration)\n",
+ " 1. [Addition Intervention](#Addition-Intervention)\n",
+ " 1. [Trainable Intervention](#Trainable-Intervention)\n",
+ " 1. [Activation Collection](#Activation-Collection-with-Intervention)\n",
+ " 1. [Activation Collection with Other Intervention](#Activation-Collection-at-Downstream-of-a-Intervened-Model)\n",
+ " 1. [Intervene Single Neuron](#Intervene-on-a-Single-Neuron)\n",
+ " 1. [Add New Intervention Type](#Add-New-Intervention-Type)\n",
+ " 1. [Intervene on Recurrent NNs](#Recurrent-NNs-(Intervene-a-Specific-Timestep))\n",
+ " 1. [Intervene across Times with RNNs](#Recurrent-NNs-(Intervene-cross-Time))\n",
+ " 1. [Intervene on LM Generation](#LM-Generation)\n",
+ " 1. [Saving and Loading](#Saving-and-Loading)\n",
+ " 1. [Multi-Source Intervention (Parallel)](#Multi-Source-Interchange-Intervention-(Parallel-Mode))\n",
+ " 1. [Multi-Source Intervention (Serial)](#Multi-Source-Interchange-Intervention-(Serial-Mode))\n",
+ " 1. [Multi-Source Intervention with Subspaces (Parallel)](#Multi-Source-Interchange-Intervention-with-Subspaces-(Parallel-Mode))\n",
+ " 1. [Multi-Source Intervention with Subspaces (Serial)](#Multi-Source-Interchange-Intervention-with-Subspaces-(Serial-Mode))\n",
+ " 1. [Interchange Intervention Training](#Interchange-Intervention-Training-(IIT))\n",
+ "1. [pyvene 102](#pyvene-102)\n",
+ " 1. [Intervention Grouping](#Grouping)\n",
+ " 1. [Intervention Skipping](#Intervention-Skipping-in-Runtime)\n",
+ " 1. [Subspace Partition](#Subspace-Partition)\n",
+ " 1. [Intervention Linking](#Intervention-Linking)\n",
+ " 1. [Add New Model Type](#Add-New-Model-Type)\n",
+ " 1. [Path Patching](#Composing-Complex-Intervention-Schema:-Path-Patching)\n",
+ " 1. [Causal Tracing](#Composing-Complex-Intervention-Schema:-Causal-Tracing-in-15-lines)\n",
+ "1. [The End](#The-End)\n",
+ " "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0706e21b",
+ "metadata": {},
+ "source": [
+ "## Set-up"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e08304ea",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "try:\n",
+ " # This library is our indicator that the required installs\n",
+ " # need to be done.\n",
+ " import pyvene\n",
+ "\n",
+ "except ModuleNotFoundError:\n",
+ " !pip install git+https://github.com/frankaging/pyvene.git"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0ede4f94",
+ "metadata": {},
+ "source": [
+ "## pyvene 101\n",
+ "Before we get started, here are a couple of core notations that are used in this library:\n",
+ "- **Base** example: this is the example we are intervening on, or, we are intervening on the computation graph of the model running the **Base** example.\n",
+ "- **Source** example or representations: this is the source of our intervention. We use **Source** to intervene on **Base**.\n",
+ "- **component**: this is the `nn.module` we are intervening in a pytorch-based NN.\n",
+ "- **unit**: this is the axis of our intervention. If we say our **unit** is `pos` (`position`), then you are intervening on each token position.\n",
+ "- **unit_locations**: this list gives you the percisely location of your intervention. It is the locations of the unit of analysis you are specifying. For instance, if your `unit` is `pos`, and your `unit_location` is 3, then it means you are intervening on the third token.\n",
+ "\n",
+ "### Simple zero-out intervention"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "a82664f9",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "# define the component to zero-out\n",
+ "pv_gpt2 = pv.IntervenableModel({\n",
+ " \"layer\": 0, \"component\": \"mlp_output\",\n",
+ " \"source_representation\": torch.zeros(gpt2.config.n_embd)\n",
+ "}, model=gpt2)\n",
+ "# run the intervened forward pass\n",
+ "intervened_outputs = pv_gpt2(\n",
+ " base = tokenizer(\"The capital of Spain is\", return_tensors=\"pt\"), \n",
+ " # we define the intervening token dynamically\n",
+ " unit_locations={\"base\": 3}\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "39071858",
+ "metadata": {},
+ "source": [
+ "### Zero-out & the notion of subspaces\n",
+ "The notion of subspace means the actual dimensions you are intervening. If we have a representation in a size of 512, the first 128 activation values are its subspace activations."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "b7896c3b",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n",
+ "Directory './tmp/' already exists.\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "# built-in helper to get a HuggingFace model\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "# create with dict-based config\n",
+ "pv_config = pv.IntervenableConfig({\n",
+ " \"layer\": 0, \"component\": \"mlp_output\"})\n",
+ "#initialize model\n",
+ "pv_gpt2 = pv.IntervenableModel(pv_config, model=gpt2)\n",
+ "# run an intervened forward pass\n",
+ "intervened_outputs = pv_gpt2(\n",
+ " # the intervening base input\n",
+ " base=tokenizer(\"The capital of Spain is\", return_tensors=\"pt\"), \n",
+ " # the location to intervene at (3rd token)\n",
+ " unit_locations={\"base\": 3},\n",
+ " # the individual dimensions targetted\n",
+ " subspaces=[10,11,12],\n",
+ " source_representations=torch.zeros(gpt2.config.n_embd)\n",
+ ")\n",
+ "# sharing\n",
+ "pv_gpt2.save(\"./tmp/\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1410904d",
+ "metadata": {},
+ "source": [
+ "### Interchange Interventions\n",
+ "Instead of a static vector, we can intervene the model with activations sampled from a different forward run. We call this interchange intervention, where intervention happens between two examples and we are interchanging activations between them."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "9691c7d8",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "# built-in helper to get a HuggingFace model\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "# create with dict-based config\n",
+ "pv_config = pv.IntervenableConfig({\n",
+ " \"layer\": 0,\n",
+ " \"component\": \"mlp_output\"},\n",
+ " intervention_types=pv.VanillaIntervention\n",
+ ")\n",
+ "#initialize model\n",
+ "pv_gpt2 = pv.IntervenableModel(\n",
+ " pv_config, model=gpt2)\n",
+ "# run an interchange intervention \n",
+ "intervened_outputs = pv_gpt2(\n",
+ " # the base input\n",
+ " base=tokenizer(\n",
+ " \"The capital of Spain is\", \n",
+ " return_tensors = \"pt\"), \n",
+ " # the source input\n",
+ " sources=tokenizer(\n",
+ " \"The capital of Italy is\", \n",
+ " return_tensors = \"pt\"), \n",
+ " # the location to intervene at (3rd token)\n",
+ " unit_locations={\"sources->base\": 3},\n",
+ " # the individual dimensions targeted\n",
+ " subspaces=[10,11,12]\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c890fda4",
+ "metadata": {},
+ "source": [
+ "### Intervention Configuration\n",
+ "You can also initialize the config without the lazy dictionary passing by enabling more options, e.g., the mode of these interventions are executed."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "4faa3e41",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n",
+ "IntervenableConfig\n",
+ "{\n",
+ " \"model_type\": \"None\",\n",
+ " \"representations\": [\n",
+ " {\n",
+ " \"layer\": 0,\n",
+ " \"component\": \"mlp_output\",\n",
+ " \"unit\": \"pos\",\n",
+ " \"max_number_of_units\": 1,\n",
+ " \"low_rank_dimension\": null,\n",
+ " \"intervention_type\": null,\n",
+ " \"subspace_partition\": null,\n",
+ " \"group_key\": null,\n",
+ " \"intervention_link_key\": null,\n",
+ " \"moe_key\": null,\n",
+ " \"source_representation\": \"PLACEHOLDER\",\n",
+ " \"hidden_source_representation\": null\n",
+ " },\n",
+ " {\n",
+ " \"layer\": 1,\n",
+ " \"component\": \"mlp_output\",\n",
+ " \"unit\": \"pos\",\n",
+ " \"max_number_of_units\": 1,\n",
+ " \"low_rank_dimension\": null,\n",
+ " \"intervention_type\": null,\n",
+ " \"subspace_partition\": null,\n",
+ " \"group_key\": null,\n",
+ " \"intervention_link_key\": null,\n",
+ " \"moe_key\": null,\n",
+ " \"source_representation\": \"PLACEHOLDER\",\n",
+ " \"hidden_source_representation\": null\n",
+ " },\n",
+ " {\n",
+ " \"layer\": 2,\n",
+ " \"component\": \"mlp_output\",\n",
+ " \"unit\": \"pos\",\n",
+ " \"max_number_of_units\": 1,\n",
+ " \"low_rank_dimension\": null,\n",
+ " \"intervention_type\": null,\n",
+ " \"subspace_partition\": null,\n",
+ " \"group_key\": null,\n",
+ " \"intervention_link_key\": null,\n",
+ " \"moe_key\": null,\n",
+ " \"source_representation\": \"PLACEHOLDER\",\n",
+ " \"hidden_source_representation\": null\n",
+ " },\n",
+ " {\n",
+ " \"layer\": 3,\n",
+ " \"component\": \"mlp_output\",\n",
+ " \"unit\": \"pos\",\n",
+ " \"max_number_of_units\": 1,\n",
+ " \"low_rank_dimension\": null,\n",
+ " \"intervention_type\": null,\n",
+ " \"subspace_partition\": null,\n",
+ " \"group_key\": null,\n",
+ " \"intervention_link_key\": null,\n",
+ " \"moe_key\": null,\n",
+ " \"source_representation\": \"PLACEHOLDER\",\n",
+ " \"hidden_source_representation\": null\n",
+ " }\n",
+ " ],\n",
+ " \"intervention_types\": \"\",\n",
+ " \"mode\": \"parallel\",\n",
+ " \"interventions\": [\n",
+ " \"None\"\n",
+ " ],\n",
+ " \"sorted_keys\": \"None\",\n",
+ " \"intervention_dimensions\": \"None\"\n",
+ "}\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "# standalone configuration object\n",
+ "config = pv.IntervenableConfig([\n",
+ " {\n",
+ " \"layer\": _,\n",
+ " \"component\": \"mlp_output\",\n",
+ " \"source_representation\": torch.zeros(\n",
+ " gpt2.config.n_embd)\n",
+ " } for _ in range(4)],\n",
+ " mode=\"parallel\"\n",
+ ")\n",
+ "# this object is serializable\n",
+ "print(config)\n",
+ "pv_gpt2 = pv.IntervenableModel(config, model=gpt2)\n",
+ "\n",
+ "intervened_outputs = pv_gpt2(\n",
+ " base = tokenizer(\"The capital of Spain is\", return_tensors=\"pt\"), \n",
+ " unit_locations={\"base\": 3}\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9c5b2270",
+ "metadata": {},
+ "source": [
+ "### Addition Intervention\n",
+ "Activation swap is one kind of interventions we can perform. Here is another simple one: `pv.AdditionIntervention`, which adds the sampled representation into the **Base** run."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "a40f5989",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "config = pv.IntervenableConfig({\n",
+ " \"layer\": 0,\n",
+ " \"component\": \"mlp_input\"},\n",
+ " pv.AdditionIntervention\n",
+ ")\n",
+ "\n",
+ "pv_gpt2 = pv.IntervenableModel(config, model=gpt2)\n",
+ "\n",
+ "intervened_outputs = pv_gpt2(\n",
+ " base = tokenizer(\n",
+ " \"The Space Needle is in downtown\", \n",
+ " return_tensors=\"pt\"\n",
+ " ), \n",
+ " unit_locations={\"base\": [[[0, 1, 2, 3]]]},\n",
+ " source_representations = torch.rand(gpt2.config.n_embd)\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "099ddf77",
+ "metadata": {},
+ "source": [
+ "### Trainable Intervention\n",
+ "Interventions can contain trainable parameters, and hook-up with the model to receive gradients end-to-end. They are often useful in searching for an particular interpretation of the representation.\n",
+ "\n",
+ "The following example does a single step gradient calculation to push the model to generate `Rome` after the intervention. If we can train such intervention at scale with low loss, it means you have a causal grab onto your model. In terms of interpretability, that means, somehow you find a representation (not the original one since its trained) that maps onto the `capital` output."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 26,
+ "id": "7f058ecd",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "das_config = pv.IntervenableConfig({\n",
+ " \"layer\": 8,\n",
+ " \"component\": \"block_output\",\n",
+ " \"low_rank_dimension\": 1},\n",
+ " # this is a trainable low-rank rotation\n",
+ " pv.LowRankRotatedSpaceIntervention\n",
+ ")\n",
+ "\n",
+ "das_gpt2 = pv.IntervenableModel(das_config, model=gpt2)\n",
+ "\n",
+ "last_hidden_state = das_gpt2(\n",
+ " base = tokenizer(\n",
+ " \"The capital of Spain is\", \n",
+ " return_tensors=\"pt\"\n",
+ " ), \n",
+ " sources = tokenizer(\n",
+ " \"The capital of Italy is\", \n",
+ " return_tensors=\"pt\"\n",
+ " ), \n",
+ " unit_locations={\"sources->base\": 3}\n",
+ ")[-1].last_hidden_state[:,-1]\n",
+ "\n",
+ "# golden counterfacutual label as Rome\n",
+ "label = tokenizer.encode(\n",
+ " \" Rome\", return_tensors=\"pt\")\n",
+ "logits = torch.matmul(\n",
+ " last_hidden_state, gpt2.wte.weight.t())\n",
+ "\n",
+ "m = torch.nn.CrossEntropyLoss()\n",
+ "loss = m(logits, label.view(-1))\n",
+ "loss.backward()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a8fd2b8e",
+ "metadata": {},
+ "source": [
+ "### Activation Collection with Intervention\n",
+ "You can also collect activations with our provided `pv.CollectIntervention` intervention. More importantly, this can be used interchangably with other interventions. You can collect something from an intervened model."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "6e6bd585",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "config = pv.IntervenableConfig({\n",
+ " \"layer\": 10,\n",
+ " \"component\": \"block_output\",\n",
+ " \"intervention_type\": pv.CollectIntervention}\n",
+ ")\n",
+ "\n",
+ "pv_gpt2 = pv.IntervenableModel(\n",
+ " config, model=gpt2)\n",
+ "\n",
+ "collected_activations = pv_gpt2(\n",
+ " base = tokenizer(\n",
+ " \"The capital of Spain is\", \n",
+ " return_tensors=\"pt\"\n",
+ " ), unit_locations={\"sources->base\": 3}\n",
+ ")[0][-1]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f7b0d0c6",
+ "metadata": {},
+ "source": [
+ "### Activation Collection at Downstream of a Intervened Model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "adcfcb05",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "config = pv.IntervenableConfig({\n",
+ " \"layer\": 8,\n",
+ " \"component\": \"block_output\",\n",
+ " \"intervention_type\": pv.VanillaIntervention}\n",
+ ")\n",
+ "\n",
+ "config.add_intervention({\n",
+ " \"layer\": 10,\n",
+ " \"component\": \"block_output\",\n",
+ " \"intervention_type\": pv.CollectIntervention})\n",
+ "\n",
+ "pv_gpt2 = pv.IntervenableModel(\n",
+ " config, model=gpt2)\n",
+ "\n",
+ "collected_activations = pv_gpt2(\n",
+ " base = tokenizer(\n",
+ " \"The capital of Spain is\", \n",
+ " return_tensors=\"pt\"\n",
+ " ), \n",
+ " sources = [tokenizer(\n",
+ " \"The capital of Italy is\", \n",
+ " return_tensors=\"pt\"\n",
+ " ), None], unit_locations={\"sources->base\": 3}\n",
+ ")[0][-1]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a9e6e4d9",
+ "metadata": {},
+ "source": [
+ "### Intervene on a Single Neuron\n",
+ "We want to provide a good user interface so that interventions can be done easily by people with less pytorch or programming experience. Meanwhile, we also want to be flexible and provide the depth of control required for highly specific tasks. Here is an example where we intervene on a specific neuron at a specific head of a layer in a model."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "d25b6401",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "config = pv.IntervenableConfig({\n",
+ " \"layer\": 8,\n",
+ " \"component\": \"head_attention_value_output\",\n",
+ " \"unit\": \"h.pos\",\n",
+ " \"intervention_type\": pv.CollectIntervention}\n",
+ ")\n",
+ "\n",
+ "pv_gpt2 = pv.IntervenableModel(\n",
+ " config, model=gpt2)\n",
+ "\n",
+ "collected_activations = pv_gpt2(\n",
+ " base = tokenizer(\n",
+ " \"The capital of Spain is\", \n",
+ " return_tensors=\"pt\"\n",
+ " ), \n",
+ " unit_locations={\n",
+ " # GET_LOC is a helper.\n",
+ " # (3,3) means head 3 position 3\n",
+ " \"base\": pv.GET_LOC((3,3))\n",
+ " },\n",
+ " # the notion of subspace is used to target neuron 0.\n",
+ " subspaces=[0]\n",
+ ")[0][-1]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5692bc15",
+ "metadata": {},
+ "source": [
+ "### Add New Intervention Type"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "1597221a",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "class MultiplierIntervention(\n",
+ " pv.ConstantSourceIntervention):\n",
+ " def __init__(self, **kwargs):\n",
+ " super().__init__()\n",
+ " def forward(\n",
+ " self, base, source=None, subspaces=None):\n",
+ " return base * 99.0\n",
+ "# run with new intervention type\n",
+ "pv_gpt2 = pv.IntervenableModel({\n",
+ " \"intervention_type\": MultiplierIntervention}, \n",
+ " model=gpt2)\n",
+ "intervened_outputs = pv_gpt2(\n",
+ " base = tokenizer(\"The capital of Spain is\", \n",
+ " return_tensors=\"pt\"), \n",
+ " unit_locations={\"base\": 3})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "079050f6",
+ "metadata": {},
+ "source": [
+ "### Recurrent NNs (Intervene a Specific Timestep)\n",
+ "Existing intervention libraries focus on Transformer models. They often lack of supports for GRUs, LSTMs or any state-space model. The fundemental problem is in the hook mechanism provided by PyTorch. Hook is attached to a module before runtime. Models like GRUs will lead to undesired callback from the hook as there is no notion of state or time of the hook. \n",
+ "\n",
+ "We make our hook stateful, so you can intervene on recurrent NNs like GRUs. This notion of time will become useful when intervening on Transformers yet want to unroll the causal effect during generation as well."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "id": "7a53347a",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, _, gru = pv.create_gru_classifier(\n",
+ " pv.GRUConfig(h_dim=32))\n",
+ "\n",
+ "pv_gru = pv.IntervenableModel({\n",
+ " \"component\": \"cell_output\",\n",
+ " \"unit\": \"t\", \n",
+ " \"intervention_type\": pv.ZeroIntervention},\n",
+ " model=gru)\n",
+ "\n",
+ "rand_t = torch.rand(1,10, gru.config.h_dim)\n",
+ "\n",
+ "intervened_outputs = pv_gru(\n",
+ " base = {\"inputs_embeds\": rand_t}, \n",
+ " unit_locations={\"base\": 3})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "031dd5de",
+ "metadata": {},
+ "source": [
+ "### Recurrent NNs (Intervene cross Time)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 92,
+ "id": "b48166c0",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "# built-in helper to get a GRU\n",
+ "_, _, gru = pv.create_gru_classifier(\n",
+ " pv.GRUConfig(h_dim=32))\n",
+ "# wrap it with config\n",
+ "pv_gru = pv.IntervenableModel({\n",
+ " \"component\": \"cell_output\",\n",
+ " # intervening on time\n",
+ " \"unit\": \"t\", \n",
+ " \"intervention_type\": pv.ZeroIntervention},\n",
+ " model=gru)\n",
+ "# run an intervened forward pass\n",
+ "rand_b = torch.rand(1,10, gru.config.h_dim)\n",
+ "rand_s = torch.rand(1,10, gru.config.h_dim)\n",
+ "intervened_outputs = pv_gru(\n",
+ " base = {\"inputs_embeds\": rand_b}, \n",
+ " sources = [{\"inputs_embeds\": rand_s}], \n",
+ " # intervening time step\n",
+ " unit_locations={\"sources->base\": (6, 3)})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "121366c1",
+ "metadata": {},
+ "source": [
+ "### LMs Generation\n",
+ "You can also intervene the generation call of LMs. Here is a simple example where we try to add a vector into the MLP output when the model decodes."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "f718e2d6",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\n",
+ "Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\n",
+ "Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Once upon a time there was a little girl named Lucy. She was three years old and loved to explore. One day, Lucy was walking in the park when she saw a big, red balloon. She was so excited and wanted to play with it.\n",
+ "\n",
+ "But then, a big, mean man came and said, \"That balloon is mine! You can't have it!\" Lucy was very sad and started to cry.\n",
+ "\n",
+ "The man said, \"I'm sorry, but I need the balloon for my work. You can have it if you want.\"\n",
+ "\n",
+ "Lucy was so happy and said, \"Yes please!\" She took the balloon and ran away.\n",
+ "\n",
+ "But then, the man said, \"Wait! I have an idea. Let's make a deal. If you can guess what I'm going to give you, then you can have the balloon.\"\n",
+ "\n",
+ "Lucy thought for a moment and then said, \"I guess I'll have to get the balloon.\"\n",
+ "\n",
+ "The man smiled and said, \"That's a good guess! Here you go.\"\n",
+ "\n",
+ "Lucy was so happy and thanked the man. She hugged the balloon and ran off to show her mom.\n",
+ "\n",
+ "The end.\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "# built-in helper to get tinystore\n",
+ "_, tokenizer, tinystory = pv.create_gpt_neo()\n",
+ "emb_happy = tinystory.transformer.wte(\n",
+ " torch.tensor(14628)) \n",
+ "\n",
+ "pv_tinystory = pv.IntervenableModel([{\n",
+ " \"layer\": l,\n",
+ " \"component\": \"mlp_output\",\n",
+ " \"intervention_type\": pv.AdditionIntervention\n",
+ " } for l in range(tinystory.config.num_layers)],\n",
+ " model=tinystory\n",
+ ")\n",
+ "# prompt and generate\n",
+ "prompt = tokenizer(\n",
+ " \"Once upon a time there was\", return_tensors=\"pt\")\n",
+ "_, intervened_story = pv_tinystory.generate(\n",
+ " tokenizer(\"Once upon a time there was\", return_tensors=\"pt\"),\n",
+ " source_representations=emb_happy*0.3, max_length=256\n",
+ ")\n",
+ "\n",
+ "print(tokenizer.decode(\n",
+ " intervened_story[0], \n",
+ " skip_special_tokens=True\n",
+ "))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "cb539f4b",
+ "metadata": {},
+ "source": [
+ "### Saving and Loading\n",
+ "This is one of the benefits of program abstraction. We abstract out the intervention and its schema, so we have a user friendly interface. Furthermore, it allows us to have a serializable configuration file that tells everything about your configuration.\n",
+ "\n",
+ "You can then save, share and load interventions easily. Note that you still need your access to the data, if you need to sample **Source** representations from other examples. But we think this is doable via a separate HuggingFace datasets upload. In the future, there could be an option of coupling this configuration with a specific remote dataset as well."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "272f3773",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n",
+ "Directory './tmp/' already exists.\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "# run with new intervention type\n",
+ "pv_gpt2 = pv.IntervenableModel({\n",
+ " \"intervention_type\": pv.ZeroIntervention}, \n",
+ " model=gpt2)\n",
+ "\n",
+ "pv_gpt2.save(\"./tmp/\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "50b894b4",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "WARNING:root:The key is provided in the config. Assuming this is loaded from a pretrained module.\n",
+ "WARNING:root:Loading trainable intervention from intkey_layer.0.repr.block_output.unit.pos.nunit.1#0.bin.\n"
+ ]
+ }
+ ],
+ "source": [
+ "pv_gpt2 = pv.IntervenableModel.load(\n",
+ " \"./tmp/\",\n",
+ " model=gpt2)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b2d07ca8",
+ "metadata": {},
+ "source": [
+ "### Multi-Source Interchange Intervention (Parallel Mode)\n",
+ "\n",
+ "What is multi-source? In the examples above, interventions are at most across two examples. We support interventions across many examples. You can sample representations from two inputs, and plut them into a single **Base**."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 61,
+ "id": "847410a8",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n",
+ "_the 0.07233363389968872\n",
+ "_a 0.05731499195098877\n",
+ "_not 0.04443885385990143\n",
+ "_Italian 0.033642884343862534\n",
+ "_often 0.024385808035731316\n",
+ "_called 0.022171705961227417\n",
+ "_known 0.017808808013796806\n",
+ "_that 0.016059240326285362\n",
+ "_\" 0.012973357923328876\n",
+ "_an 0.012878881767392159\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "parallel_config = pv.IntervenableConfig([\n",
+ " {\"layer\": 3, \"component\": \"block_output\"},\n",
+ " {\"layer\": 3, \"component\": \"block_output\"}],\n",
+ " # intervene on base at the same time\n",
+ " mode=\"parallel\")\n",
+ "parallel_gpt2 = pv.IntervenableModel(\n",
+ " parallel_config, model=gpt2)\n",
+ "base = tokenizer(\n",
+ " \"The capital of Spain is\", \n",
+ " return_tensors=\"pt\")\n",
+ "sources = [\n",
+ " tokenizer(\"The language of Spain is\", \n",
+ " return_tensors=\"pt\"),\n",
+ " tokenizer(\"The capital of Italy is\", \n",
+ " return_tensors=\"pt\")]\n",
+ "intervened_outputs = parallel_gpt2(\n",
+ " base, sources,\n",
+ " {\"sources->base\": (\n",
+ " # each list has a dimensionality of\n",
+ " # [num_intervention, batch, num_unit]\n",
+ " [[[1]],[[3]]], [[[1]],[[3]]])}\n",
+ ")\n",
+ "\n",
+ "distrib = pv.embed_to_distrib(\n",
+ " gpt2, intervened_outputs[1].last_hidden_state, logits=False)\n",
+ "pv.top_vals(tokenizer, distrib[0][-1], n=10)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2f93402c",
+ "metadata": {},
+ "source": [
+ "### Multi-Source Interchange Intervention (Serial Mode)\n",
+ "\n",
+ "Or you can do them sequentially, where you intervene among your **Source** examples, and get some intermediate states before merging the activations into the **Base** run."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 91,
+ "id": "5e5752dc",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "_the 0.06737838685512543\n",
+ "_a 0.059834375977516174\n",
+ "_not 0.04629501700401306\n",
+ "_Italian 0.03623826056718826\n",
+ "_often 0.021700192242860794\n",
+ "_called 0.01840786263346672\n",
+ "_that 0.0157712884247303\n",
+ "_known 0.014391838572919369\n",
+ "_an 0.013535155914723873\n",
+ "_very 0.013022392988204956\n"
+ ]
+ }
+ ],
+ "source": [
+ "config = pv.IntervenableConfig([\n",
+ " {\"layer\": 3, \"component\": \"block_output\"},\n",
+ " {\"layer\": 10, \"component\": \"block_output\"}],\n",
+ " # intervene on base one after another\n",
+ " mode=\"serial\")\n",
+ "pv_gpt2 = pv.IntervenableModel(\n",
+ " config, model=gpt2)\n",
+ "base = tokenizer(\n",
+ " \"The capital of Spain is\", \n",
+ " return_tensors=\"pt\")\n",
+ "sources = [\n",
+ " tokenizer(\"The language of Spain is\", \n",
+ " return_tensors=\"pt\"),\n",
+ " tokenizer(\"The capital of Italy is\", \n",
+ " return_tensors=\"pt\")]\n",
+ "\n",
+ "intervened_outputs = pv_gpt2(\n",
+ " base, sources,\n",
+ " # intervene in serial at two positions\n",
+ " {\"source_0->source_1\": 1, \n",
+ " \"source_1->base\" : 4})\n",
+ "\n",
+ "distrib = pv.embed_to_distrib(\n",
+ " gpt2, intervened_outputs[1].last_hidden_state, logits=False)\n",
+ "pv.top_vals(tokenizer, distrib[0][-1], n=10)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "28621880",
+ "metadata": {},
+ "source": [
+ "### Multi-Source Interchange Intervention with Subspaces (Parallel Mode)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "773aba2e",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "config = pv.IntervenableConfig([\n",
+ " {\"layer\": 0, \"component\": \"block_output\",\n",
+ " \"subspace_partition\": \n",
+ " [[0, 128], [128, 256]]}]*2,\n",
+ " intervention_types=pv.VanillaIntervention,\n",
+ " # act in parallel\n",
+ " mode=\"parallel\"\n",
+ ")\n",
+ "pv_gpt2 = pv.IntervenableModel(config, model=gpt2)\n",
+ "\n",
+ "base = tokenizer(\"The capital of Spain is\", return_tensors=\"pt\")\n",
+ "sources = [tokenizer(\"The capital of Italy is\", return_tensors=\"pt\"),\n",
+ " tokenizer(\"The capital of China is\", return_tensors=\"pt\")]\n",
+ "\n",
+ "intervened_outputs = pv_gpt2(\n",
+ " base, sources,\n",
+ " # on same position\n",
+ " {\"sources->base\": 4},\n",
+ " # on different subspaces\n",
+ " subspaces=[[[0]], [[1]]],\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7223603f",
+ "metadata": {},
+ "source": [
+ "### Multi-Source Interchange Intervention with Subspaces (Serial Mode)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "305e0607",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "config = pv.IntervenableConfig([\n",
+ " {\"layer\": 0, \"component\": \"block_output\",\n",
+ " \"subspace_partition\": [[0, 128], [128, 256]]},\n",
+ " {\"layer\": 2, \"component\": \"block_output\",\n",
+ " \"subspace_partition\": [[0, 128], [128, 256]]}],\n",
+ " intervention_types=pv.VanillaIntervention,\n",
+ " # act in parallel\n",
+ " mode=\"serial\"\n",
+ ")\n",
+ "pv_gpt2 = pv.IntervenableModel(config, model=gpt2)\n",
+ "\n",
+ "base = tokenizer(\"The capital of Spain is\", return_tensors=\"pt\")\n",
+ "sources = [tokenizer(\"The capital of Italy is\", return_tensors=\"pt\"),\n",
+ " tokenizer(\"The capital of China is\", return_tensors=\"pt\")]\n",
+ "\n",
+ "intervened_outputs = pv_gpt2(\n",
+ " base, sources,\n",
+ " # serialized intervention\n",
+ " # order is based on sources list\n",
+ " {\"source_0->source_1\": 3, \"source_1->base\": 4},\n",
+ " # on different subspaces\n",
+ " subspaces=[[[0]], [[1]]],\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4b5fcb37",
+ "metadata": {},
+ "source": [
+ "### Interchange Intervention Training (IIT)\n",
+ "Interchange intervention training (IIT) is a technique of inducing causal structures into neural models. This library naturally supports this. By training IIT, you can simply turn the gradient on for the wrapping model. In this way, your model can be trained with your interventional signals."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "id": "8c7dde89",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "pv_gpt2 = pv.IntervenableModel({\n",
+ " \"layer\": 8}, \n",
+ " model=gpt2\n",
+ ")\n",
+ "\n",
+ "pv_gpt2.enable_model_gradients()\n",
+ "# run counterfactual forward as usual"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b8c7ccad",
+ "metadata": {},
+ "source": [
+ "## pyvene 102\n",
+ "Now, you are pretty familiar with pyvene basic APIs. There are more to come. We support all sorts of weird interventions, and we encapsulate them as objects so that, even they are super weird (e.g., nested, multiple locations, different types), you can share them easily with others. BTW, if the intervention is trainable, the artifacts will be saved and shared as well.\n",
+ "\n",
+ "With that, here are a couple of additional APIs.\n",
+ "\n",
+ "### Grouping\n",
+ "\n",
+ "You can group interventions together so that they always receive the same input when you want to use them to get activations at different places. Here is an example, where you are taking in the same **Source** example, you fetch activations twice: once in position 3 and layer 0, once in position 4 and layer 2. You don't have to pass in another dummy **Source**."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "84afd62c",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "config = pv.IntervenableConfig([\n",
+ " {\"layer\": 0, \"component\": \"block_output\", \"group_key\": 0},\n",
+ " {\"layer\": 2, \"component\": \"block_output\", \"group_key\": 0}],\n",
+ " intervention_types=pv.VanillaIntervention,\n",
+ ")\n",
+ "\n",
+ "pv_gpt2 = pv.IntervenableModel(config, model=gpt2)\n",
+ "\n",
+ "base = tokenizer(\"The capital of Spain is\", return_tensors=\"pt\")\n",
+ "sources = [tokenizer(\"The capital of Italy is\", return_tensors=\"pt\")]\n",
+ "intervened_outputs = pv_gpt2(\n",
+ " base, sources, \n",
+ " {\"sources->base\": ([\n",
+ " [[3]], [[4]] # these two are for two interventions\n",
+ " ], [ # source position 3 into base position 4\n",
+ " [[3]], [[4]] \n",
+ " ])}\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "34aeb892",
+ "metadata": {},
+ "source": [
+ "### Intervention Skipping in Runtime\n",
+ "You may configure a lot of interventions, but during training, not every example will have to use all of them. So, you can skip interventions for different examples differently."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "61cd8fc9",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n",
+ "True True\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "config = pv.IntervenableConfig([\n",
+ " # these are equivalent interventions\n",
+ " # we create them on purpose\n",
+ " {\"layer\": 0, \"component\": \"block_output\"},\n",
+ " {\"layer\": 0, \"component\": \"block_output\"},\n",
+ " {\"layer\": 0, \"component\": \"block_output\"}],\n",
+ " intervention_types=pv.VanillaIntervention,\n",
+ ")\n",
+ "pv_gpt2 = pv.IntervenableModel(config, model=gpt2)\n",
+ "\n",
+ "base = tokenizer(\"The capital of Spain is\", return_tensors=\"pt\")\n",
+ "source = tokenizer(\"The capital of Italy is\", return_tensors=\"pt\")\n",
+ "# skipping 1, 2 and 3\n",
+ "_, pv_out1 = pv_gpt2(base, [None, None, source],\n",
+ " {\"sources->base\": ([None, None, [[4]]], [None, None, [[4]]])})\n",
+ "_, pv_out2 = pv_gpt2(base, [None, source, None],\n",
+ " {\"sources->base\": ([None, [[4]], None], [None, [[4]], None])})\n",
+ "_, pv_out3 = pv_gpt2(base, [source, None, None],\n",
+ " {\"sources->base\": ([[[4]], None, None], [[[4]], None, None])})\n",
+ "# should have the same results\n",
+ "print(\n",
+ " torch.equal(pv_out1.last_hidden_state, pv_out2.last_hidden_state),\n",
+ " torch.equal(pv_out2.last_hidden_state, pv_out3.last_hidden_state)\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d9df6acd",
+ "metadata": {},
+ "source": [
+ "### Subspace Partition\n",
+ "You can partition your subspace before hand. If you don't, the library assumes you each neuron is in its own subspace. In this example, you partition your subspace into two continous chunk, `[0, 128), [128,256)`, which means all the neurons from index 0 upto 127 are along to partition 1. During runtime, you can intervene on all the neurons in the same parition together."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "3a66bbeb",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "config = pv.IntervenableConfig([\n",
+    "    # a single intervention whose representation is split\n",
+    "    # into two subspaces\n",
+ " {\"layer\": 0, \"component\": \"block_output\",\n",
+ " # subspaces can be partitioned into continuous chunks\n",
+ " # [i, j] are the boundary indices\n",
+ " \"subspace_partition\": [[0, 128], [128, 256]]}],\n",
+ " intervention_types=pv.VanillaIntervention,\n",
+ ")\n",
+ "pv_gpt2 = pv.IntervenableModel(config, model=gpt2)\n",
+ "\n",
+ "base = tokenizer(\"The capital of Spain is\", return_tensors=\"pt\")\n",
+ "source = tokenizer(\"The capital of Italy is\", return_tensors=\"pt\")\n",
+ "\n",
+    "# intervene on the second partition only\n",
+ "intervened_outputs = pv_gpt2(\n",
+ " base, [source],\n",
+ " {\"sources->base\": 4},\n",
+    "    # intervene only on dimensions from 128 to 256\n",
+ " subspaces=1,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0fdde257",
+ "metadata": {},
+ "source": [
+ "### Intervention Linking\n",
+    "Interventions can be linked to share weights and subspaces. Here is an example of how to link interventions together. If the interventions are trainable, their weights are tied as well.\n",
+    "\n",
+    "Why is this useful? Sometimes you may want to intervene on different subspaces of the same representation differently. Say you have a representation of size 512 and you hypothesize that the first half represents A and the second half represents B; you can use subspace interventions to test this. With trainable interventions, you can also optimize interventions on the same representation but in different subspaces."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "eec19da9",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n",
+ "True\n",
+ "True\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "config = pv.IntervenableConfig([\n",
+ " # they are linked to manipulate the same representation\n",
+ " # but in different subspaces\n",
+ " {\"layer\": 0, \"component\": \"block_output\", \n",
+ " \"subspace_partition\": [[0, 128], [128, 256]], \"intervention_link_key\": 0},\n",
+ " {\"layer\": 0, \"component\": \"block_output\",\n",
+ " \"subspace_partition\": [[0, 128], [128, 256]], \"intervention_link_key\": 0}],\n",
+ " intervention_types=pv.VanillaIntervention,\n",
+ ")\n",
+ "pv_gpt2 = pv.IntervenableModel(config, model=gpt2)\n",
+ "\n",
+ "base = tokenizer(\"The capital of Spain is\", return_tensors=\"pt\")\n",
+ "source = tokenizer(\"The capital of Italy is\", return_tensors=\"pt\")\n",
+ "\n",
+ "# using intervention skipping for subspace\n",
+ "_, pv_out1 = pv_gpt2(\n",
+ " base, [None, source],\n",
+ " # 4 means token position 4\n",
+ " {\"sources->base\": ([None, [[4]]], [None, [[4]]])},\n",
+ " # 1 means the second partition in the config\n",
+ " subspaces=[None, [[1]]],\n",
+ ")\n",
+ "_, pv_out2 = pv_gpt2(\n",
+ " base,\n",
+ " [source, None],\n",
+ " {\"sources->base\": ([[[4]], None], [[[4]], None])},\n",
+ " subspaces=[[[1]], None],\n",
+ ")\n",
+ "print(torch.equal(pv_out1.last_hidden_state, pv_out2.last_hidden_state))\n",
+ "\n",
+    "# subspaces take a list of indices, and they can be in any order\n",
+ "_, pv_out3 = pv_gpt2(\n",
+ " base,\n",
+ " [source, source],\n",
+ " {\"sources->base\": ([[[4]], [[4]]], [[[4]], [[4]]])},\n",
+ " subspaces=[[[0]], [[1]]],\n",
+ ")\n",
+ "_, pv_out4 = pv_gpt2(\n",
+ " base,\n",
+ " [source, source],\n",
+ " {\"sources->base\": ([[[4]], [[4]]], [[[4]], [[4]]])},\n",
+ " subspaces=[[[1]], [[0]]],\n",
+ ")\n",
+ "print(torch.equal(pv_out3.last_hidden_state, pv_out4.last_hidden_state))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ef5b7a3e",
+ "metadata": {},
+ "source": [
+ "### Add New Model Type"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "acce6e8f",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "import pyvene as pv\n",
+ "\n",
+ "# get a flan-t5 from HuggingFace\n",
+ "from transformers import T5ForConditionalGeneration, T5Tokenizer, T5Config\n",
+ "config = T5Config.from_pretrained(\"google/flan-t5-small\")\n",
+ "tokenizer = T5Tokenizer.from_pretrained(\"google/flan-t5-small\")\n",
+ "t5 = T5ForConditionalGeneration.from_pretrained(\n",
+ " \"google/flan-t5-small\", config=config\n",
+ ")\n",
+ "\n",
+ "# config the intervention mapping with pv global vars\n",
+    "\"\"\"Only define a couple of components here for simplicity\"\"\"\n",
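+    "# the %s placeholder in each module path is filled in with the layer index at runtime\n",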
+ "pv.type_to_module_mapping[type(t5)] = {\n",
+ " \"mlp_output\": (\"encoder.block[%s].layer[1]\", \n",
+ " pv.models.constants.CONST_OUTPUT_HOOK),\n",
+ " \"attention_input\": (\"encoder.block[%s].layer[0]\", \n",
+ " pv.models.constants.CONST_OUTPUT_HOOK),\n",
+ "}\n",
+ "pv.type_to_dimension_mapping[type(t5)] = {\n",
+ " \"mlp_output\": (\"d_model\",),\n",
+ " \"attention_input\": (\"d_model\",),\n",
+ " \"block_output\": (\"d_model\",),\n",
+ " \"head_attention_value_output\": (\"d_model/num_heads\",),\n",
+ "}\n",
+ "\n",
+    "# wrap the t5 model\n",
+ "pv_t5 = pv.IntervenableModel({\n",
+ " \"layer\": 0,\n",
+ " \"component\": \"mlp_output\",\n",
+ " \"source_representation\": torch.zeros(\n",
+ " t5.config.d_model)\n",
+ "}, model=t5)\n",
+ "\n",
+ "# then intervene!\n",
+ "base = tokenizer(\"The capital of Spain is\", \n",
+ " return_tensors=\"pt\")\n",
+ "decoder_input_ids = tokenizer(\n",
+ " \"\", return_tensors=\"pt\").input_ids\n",
+ "base[\"decoder_input_ids\"] = decoder_input_ids\n",
+ "intervened_outputs = pv_t5(\n",
+ " base, \n",
+ " unit_locations={\"base\": 3}\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ba158a92",
+ "metadata": {},
+ "source": [
+ "### Composing Complex Intervention Schema: Path Patching"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "e51cadfe",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n",
+ "Directory './tmp/' already exists.\n"
+ ]
+ }
+ ],
+ "source": [
+ "import pyvene as pv\n",
+ "\n",
+ "def path_patching_config(\n",
+ " layer, last_layer, \n",
+ " component=\"head_attention_value_output\", unit=\"h.pos\"\n",
+ "):\n",
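+    "    # group 0: the component patched in from the source run\n",
+    "    # group 1: later components whose activations are restored, blocking indirect paths\n",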
+ " intervening_component = [\n",
+ " {\"layer\": layer, \"component\": component, \"unit\": unit, \"group_key\": 0}]\n",
+ " restoring_components = []\n",
+ " if not component.startswith(\"mlp_\"):\n",
+ " restoring_components += [\n",
+ " {\"layer\": layer, \"component\": \"mlp_output\", \"group_key\": 1}]\n",
+ " for i in range(layer+1, last_layer):\n",
+ " restoring_components += [\n",
+ " {\"layer\": i, \"component\": \"attention_output\", \"group_key\": 1},\n",
+ " {\"layer\": i, \"component\": \"mlp_output\", \"group_key\": 1}\n",
+ " ]\n",
+ " intervenable_config = pv.IntervenableConfig(\n",
+ " intervening_component + restoring_components)\n",
+ " return intervenable_config\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "pv_gpt2 = pv.IntervenableModel(\n",
+ " path_patching_config(4, gpt2.config.n_layer), \n",
+ " model=gpt2\n",
+ ")\n",
+ "\n",
+ "pv_gpt2.save(\n",
+ " save_directory=\"./tmp/\"\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "9074f716",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "WARNING:root:The key is provided in the config. Assuming this is loaded from a pretrained module.\n"
+ ]
+ }
+ ],
+ "source": [
+ "pv_gpt2 = pv.IntervenableModel.load(\n",
+ " \"./tmp/\",\n",
+ " model=gpt2)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d546e858",
+ "metadata": {},
+ "source": [
+ "### Composing Complex Intervention Schema: Causal Tracing in 15 lines"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "c0b6a70f",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loaded model\n"
+ ]
+ }
+ ],
+ "source": [
+ "import pyvene as pv\n",
+ "\n",
+ "def causal_tracing_config(\n",
+ " l, c=\"mlp_activation\", w=10, tl=48):\n",
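+    "    # l: center layer, c: component to restore, w: window size, tl: total number of layers\n",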
+ " s = max(0, l - w // 2)\n",
+ " e = min(tl, l - (-w // 2))\n",
+ " config = pv.IntervenableConfig(\n",
+ " [{\"component\": \"block_input\"}] + \n",
+ " [{\"layer\": l, \"component\": c} \n",
+ " for l in range(s, e)],\n",
+ " [pv.NoiseIntervention] +\n",
+ " [pv.VanillaIntervention]*(e-s))\n",
+ " return config\n",
+ "\n",
+ "_, tokenizer, gpt2 = pv.create_gpt2()\n",
+ "\n",
+ "pv_gpt2 = pv.IntervenableModel(\n",
+ " causal_tracing_config(4), \n",
+ " model=gpt2\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bc6eb49d",
+ "metadata": {},
+ "source": [
+ "### The End\n",
+ "Now you are graduating from pyvene 101! Feel free to take a look at our tutorials for more challenging interventions."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.12"
+ },
+ "toc-autonumbering": true,
+ "toc-showcode": false,
+ "toc-showmarkdowntxt": false,
+ "toc-showtags": true
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/tests/integration_tests/ComplexInterventionWithGPT2TestCase.py b/tests/integration_tests/ComplexInterventionWithGPT2TestCase.py
index 63989f5e..93718a3d 100644
--- a/tests/integration_tests/ComplexInterventionWithGPT2TestCase.py
+++ b/tests/integration_tests/ComplexInterventionWithGPT2TestCase.py
@@ -49,16 +49,16 @@ def test_clean_run_positive(self):
Positive test case to check whether vanilla forward pass work
with our object.
"""
- intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
0, "block_output", "pos", 1, subspace_partition=[[0, 6], [6, 24]]
),
],
- intervenable_interventions_type=VanillaIntervention,
+ intervention_types=VanillaIntervention,
)
- intervenable = IntervenableModel(intervenable_config, self.gpt2)
+ intervenable = IntervenableModel(config, self.gpt2)
intervenable.set_device(self.device)
base = {"input_ids": torch.randint(0, 10, (10, 5)).to(self.device)}
golden_out = self.gpt2(**base).logits
@@ -74,22 +74,22 @@ def _test_subspace_partition_in_forward(self, intervention_type):
Provide subpace intervention indices in the forward only.
"""
batch_size = 10
- with_partition_intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ with_partition_config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
0,
"block_output",
"pos",
1,
- intervenable_low_rank_dimension=24,
+ low_rank_dimension=24,
subspace_partition=[[0, 6], [6, 24]],
),
],
- intervenable_interventions_type=intervention_type,
+ intervention_types=intervention_type,
)
intervenable = IntervenableModel(
- with_partition_intervenable_config, self.gpt2, use_fast=False
+ with_partition_config, self.gpt2, use_fast=False
)
intervenable.set_device(self.device)
base = {"input_ids": torch.randint(0, 10, (batch_size, 5)).to(self.device)}
@@ -101,30 +101,30 @@ def _test_subspace_partition_in_forward(self, intervention_type):
subspaces=[[[0]] * batch_size],
)
- without_partition_intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
- 0, "block_output", "pos", 1, intervenable_low_rank_dimension=24
+ without_partition_config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
+ 0, "block_output", "pos", 1, low_rank_dimension=24
),
],
- intervenable_interventions_type=intervention_type,
+ intervention_types=intervention_type,
)
- intervenable_fast = IntervenableModel(
- without_partition_intervenable_config, self.gpt2, use_fast=True
+ fast = IntervenableModel(
+ without_partition_config, self.gpt2, use_fast=True
)
- intervenable_fast.set_device(self.device)
+ fast.set_device(self.device)
if intervention_type in {
RotatedSpaceIntervention,
LowRankRotatedSpaceIntervention,
}:
- list(intervenable_fast.interventions.values())[0][
+ list(fast.interventions.values())[0][
0
].rotate_layer.weight = list(intervenable.interventions.values())[0][
0
].rotate_layer.weight
- _, without_partition_our_output = intervenable_fast(
+ _, without_partition_our_output = fast(
base,
[source],
{"sources->base": ([[[0]] * batch_size], [[[0]] * batch_size])},
diff --git a/tests/integration_tests/IntervenableBasicTestCase.py b/tests/integration_tests/IntervenableBasicTestCase.py
new file mode 100644
index 00000000..0b71e7d0
--- /dev/null
+++ b/tests/integration_tests/IntervenableBasicTestCase.py
@@ -0,0 +1,611 @@
+import unittest
+from ..utils import *
+
+import torch
+import pyvene as pv
+
+class IntervenableBasicTestCase(unittest.TestCase):
+ """These are API level positive cases."""
+ @classmethod
+ def setUpClass(self):
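+        # a unique scratch dir per run, used as the model cache and save location; removed in tearDownClass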
+ _uuid = str(uuid.uuid4())[:6]
+ self._test_dir = os.path.join(f"./test_output_dir_prefix-{_uuid}")
+
+ def test_lazy_demo(self):
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ pv_gpt2 = pv.IntervenableModel({
+ "layer": 0,
+ "component": "mlp_output",
+ "source_representation": torch.zeros(
+ gpt2.config.n_embd)
+ }, model=gpt2)
+
+ intervened_outputs = pv_gpt2(
+ base = tokenizer(
+ "The capital of Spain is",
+ return_tensors="pt"
+ ),
+ unit_locations={"base": 3}
+ )
+
+ def test_less_lazy_demo(self):
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ config = pv.IntervenableConfig([
+ {
+ "layer": _,
+ "component": "mlp_output",
+ "source_representation": torch.zeros(
+ gpt2.config.n_embd)
+ } for _ in range(4)],
+ mode="parallel"
+ )
+ print(config)
+ pv_gpt2 = pv.IntervenableModel(config, model=gpt2)
+
+ intervened_outputs = pv_gpt2(
+ base = tokenizer(
+ "The capital of Spain is",
+ return_tensors="pt"
+ ),
+ unit_locations={"base": 3}
+ )
+
+ def test_source_reprs_pass_in_unit_loc_broadcast_demo(self):
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ pv_gpt2 = pv.IntervenableModel({
+ "layer": 0,
+ "component": "mlp_output",
+ }, model=gpt2)
+
+ intervened_outputs = pv_gpt2(
+ base = tokenizer(
+ "The capital of Spain is",
+ return_tensors="pt"
+ ),
+ source_representations = torch.zeros(gpt2.config.n_embd),
+ unit_locations={"base": 3}
+ )
+
+ def test_input_corrupt_multi_token(self):
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ config = pv.IntervenableConfig({
+ "layer": 0,
+ "component": "mlp_input"},
+ pv.AdditionIntervention
+ )
+
+ pv_gpt2 = pv.IntervenableModel(config, model=gpt2)
+
+ intervened_outputs = pv_gpt2(
+ base = tokenizer(
+ "The Space Needle is in downtown",
+ return_tensors="pt"
+ ),
+ unit_locations={"base": [[[0, 1, 2, 3]]]},
+ source_representations = torch.rand(gpt2.config.n_embd)
+ )
+
+ def test_trainable_backward(self):
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ config = pv.IntervenableConfig({
+ "layer": 8,
+ "component": "block_output",
+ "low_rank_dimension": 1},
+ pv.LowRankRotatedSpaceIntervention
+ )
+
+ pv_gpt2 = pv.IntervenableModel(
+ config, model=gpt2)
+
+ last_hidden_state = pv_gpt2(
+ base = tokenizer(
+ "The capital of Spain is",
+ return_tensors="pt"
+ ),
+ sources = tokenizer(
+ "The capital of Italy is",
+ return_tensors="pt"
+ ),
+ unit_locations={"sources->base": 3}
+ )[-1].last_hidden_state
+
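+        # a dummy scalar loss; backward populates gradients for the trainable intervention parameters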
+ loss = last_hidden_state.sum()
+ loss.backward()
+
+ def test_reprs_collection(self):
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ config = pv.IntervenableConfig({
+ "layer": 10,
+ "component": "block_output",
+ "intervention_type": pv.CollectIntervention}
+ )
+
+ pv_gpt2 = pv.IntervenableModel(
+ config, model=gpt2)
+
+ collected_activations = pv_gpt2(
+ base = tokenizer(
+ "The capital of Spain is",
+ return_tensors="pt"
+ ),
+ unit_locations={"sources->base": 3}
+ )[0][-1]
+
+ def test_reprs_collection_after_intervention(self):
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ config = pv.IntervenableConfig({
+ "layer": 8,
+ "component": "block_output",
+ "intervention_type": pv.VanillaIntervention}
+ )
+
+ config.add_intervention({
+ "layer": 10,
+ "component": "block_output",
+ "intervention_type": pv.CollectIntervention})
+
+ pv_gpt2 = pv.IntervenableModel(
+ config, model=gpt2)
+
+ collected_activations = pv_gpt2(
+ base = tokenizer(
+ "The capital of Spain is",
+ return_tensors="pt"
+ ),
+ sources = [tokenizer(
+ "The capital of Italy is",
+ return_tensors="pt"
+ ), None],
+ unit_locations={"sources->base": 3}
+ )[0][-1]
+
+ def test_reprs_collection_on_one_neuron(self):
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ config = pv.IntervenableConfig({
+ "layer": 8,
+ "component": "head_attention_value_output",
+ "unit": "h.pos",
+ "intervention_type": pv.CollectIntervention}
+ )
+
+ pv_gpt2 = pv.IntervenableModel(
+ config, model=gpt2)
+
+ collected_activations = pv_gpt2(
+ base = tokenizer(
+ "The capital of Spain is",
+ return_tensors="pt"
+ ),
+ unit_locations={
+ "base": pv.GET_LOC((3,3))
+ },
+ subspaces=[0]
+ )[0][-1]
+
+ def test_new_intervention_type(self):
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ class MultiplierIntervention(
+ pv.ConstantSourceIntervention):
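+            # a minimal custom intervention: it ignores the source and scales the base by a constant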
+ def __init__(self, embed_dim, **kwargs):
+ super().__init__()
+ def forward(
+ self, base, source=None, subspaces=None):
+ return base * 99.0
+ # run with new intervention type
+ pv_gpt2 = pv.IntervenableModel({
+ "intervention_type": MultiplierIntervention},
+ model=gpt2)
+ intervened_outputs = pv_gpt2(
+ base = tokenizer("The capital of Spain is",
+ return_tensors="pt"),
+ unit_locations={"base": 3})
+
+ def test_recurrent_nn(self):
+
+ _, _, gru = pv.create_gru_classifier(
+ pv.GRUConfig(h_dim=32))
+
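+        # unit "t" indexes time steps of the recurrent cell output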
+ pv_gru = pv.IntervenableModel({
+ "component": "cell_output",
+ "unit": "t",
+ "intervention_type": pv.ZeroIntervention},
+ model=gru)
+
+ rand_t = torch.rand(1,10, gru.config.h_dim)
+
+ intervened_outputs = pv_gru(
+ base = {"inputs_embeds": rand_t},
+ unit_locations={"base": 3})
+
+ def test_lm_generation(self):
+
+        # built-in helper to get a tinystory model
+ _, tokenizer, tinystory = pv.create_gpt_neo()
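+        # the embedding of token id 14628 (assumed to decode to " happy"), scaled down before being added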
+ emb_happy = tinystory.transformer.wte(
+ torch.tensor(14628)) * 0.3
+
+ pv_tinystory = pv.IntervenableModel([{
+ "layer": _,
+ "component": "mlp_output",
+ "intervention_type": pv.AdditionIntervention
+ } for _ in range(
+ tinystory.config.num_layers)],
+ model=tinystory)
+
+ prompt = tokenizer(
+ "Once upon a time there was",
+ return_tensors="pt")
+ _, intervened_story = pv_tinystory.generate(
+ prompt,
+ source_representations=emb_happy,
+ max_length=32
+ )
+ print(tokenizer.decode(
+ intervened_story[0],
+ skip_special_tokens=True
+ ))
+
+ def test_save_and_load(self):
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ # run with new intervention type
+ pv_gpt2 = pv.IntervenableModel({
+ "intervention_type": pv.ZeroIntervention},
+ model=gpt2)
+
+ pv_gpt2.save(self._test_dir)
+
+ pv_gpt2_load = pv.IntervenableModel.load(
+ self._test_dir,
+ model=gpt2)
+
+ def test_intervention_grouping(self):
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ config = pv.IntervenableConfig([
+ {"layer": 0, "component": "block_output", "group_key": 0},
+ {"layer": 2, "component": "block_output", "group_key": 0}],
+ intervention_types=pv.VanillaIntervention,
+ )
+
+ pv_gpt2 = pv.IntervenableModel(config, model=gpt2)
+
+ base = tokenizer("The capital of Spain is", return_tensors="pt")
+ sources = [tokenizer("The capital of Italy is", return_tensors="pt")]
+ intervened_outputs = pv_gpt2(
+ base, sources,
+ {"sources->base": ([
+                [[3]], [[4]]  # source positions, one per intervention
+            ], [  # base positions, one per intervention
+ [[3]], [[4]]
+ ])}
+ )
+
+ def test_intervention_skipping(self):
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ config = pv.IntervenableConfig([
+ # these are equivalent interventions
+ # we create them on purpose
+ {"layer": 0, "component": "block_output"},
+ {"layer": 0, "component": "block_output"},
+ {"layer": 0, "component": "block_output"}],
+ intervention_types=pv.VanillaIntervention,
+ )
+ pv_gpt2 = pv.IntervenableModel(config, model=gpt2)
+
+ base = tokenizer("The capital of Spain is", return_tensors="pt")
+ source = tokenizer("The capital of Italy is", return_tensors="pt")
+        # each call uses a different one of the three interventions and skips the rest
+ _, pv_out1 = pv_gpt2(base, [None, None, source],
+ {"sources->base": ([None, None, [[4]]], [None, None, [[4]]])})
+ _, pv_out2 = pv_gpt2(base, [None, source, None],
+ {"sources->base": ([None, [[4]], None], [None, [[4]], None])})
+ _, pv_out3 = pv_gpt2(base, [source, None, None],
+ {"sources->base": ([[[4]], None, None], [[[4]], None, None])})
+ # should have the same results
+ self.assertTrue(torch.equal(pv_out1.last_hidden_state, pv_out2.last_hidden_state))
+ self.assertTrue(torch.equal(pv_out2.last_hidden_state, pv_out3.last_hidden_state))
+
+ def test_subspace_intervention(self):
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ config = pv.IntervenableConfig([
+            # a single intervention whose representation is split
+            # into two subspaces
+ {"layer": 0, "component": "block_output",
+ # subspaces can be partitioned into continuous chunks
+ # [i, j] are the boundary indices
+ "subspace_partition": [[0, 128], [128, 256]]}],
+ intervention_types=pv.VanillaIntervention,
+ )
+ pv_gpt2 = pv.IntervenableModel(config, model=gpt2)
+
+ base = tokenizer("The capital of Spain is", return_tensors="pt")
+ source = tokenizer("The capital of Italy is", return_tensors="pt")
+
+        # intervene on the second partition only
+ intervened_outputs = pv_gpt2(
+ base, [source],
+ {"sources->base": 4},
+            # intervene only on dimensions from 128 to 256
+ subspaces=1,
+ )
+
+ def test_linked_intervention_and_weights_sharing(self):
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ config = pv.IntervenableConfig([
+ # they are linked to manipulate the same representation
+ # but in different subspaces
+ {"layer": 0, "component": "block_output",
+ "subspace_partition": [[0, 128], [128, 256]], "intervention_link_key": 0},
+ {"layer": 0, "component": "block_output",
+ "subspace_partition": [[0, 128], [128, 256]], "intervention_link_key": 0}],
+ intervention_types=pv.VanillaIntervention,
+ )
+ pv_gpt2 = pv.IntervenableModel(config, model=gpt2)
+
+ base = tokenizer("The capital of Spain is", return_tensors="pt")
+ source = tokenizer("The capital of Italy is", return_tensors="pt")
+
+ # using intervention skipping for subspace
+ _, pv_out1 = pv_gpt2(
+ base, [None, source],
+ # 4 means token position 4
+ {"sources->base": ([None, [[4]]], [None, [[4]]])},
+ # 1 means the second partition in the config
+ subspaces=[None, [[1]]],
+ )
+ _, pv_out2 = pv_gpt2(
+ base,
+ [source, None],
+ {"sources->base": ([[[4]], None], [[[4]], None])},
+ subspaces=[[[1]], None],
+ )
+ self.assertTrue(torch.equal(pv_out1.last_hidden_state, pv_out2.last_hidden_state))
+
+        # subspaces take a list of indices, and they can be in any order
+ _, pv_out3 = pv_gpt2(
+ base,
+ [source, source],
+ {"sources->base": ([[[4]], [[4]]], [[[4]], [[4]]])},
+ subspaces=[[[0]], [[1]]],
+ )
+ _, pv_out4 = pv_gpt2(
+ base,
+ [source, source],
+ {"sources->base": ([[[4]], [[4]]], [[[4]], [[4]]])},
+ subspaces=[[[1]], [[0]]],
+ )
+ self.assertTrue(torch.equal(pv_out3.last_hidden_state, pv_out4.last_hidden_state))
+
+ def test_new_model_type(self):
+ try:
+ import sentencepiece
+        except ImportError:
+ print("sentencepiece is not installed. skipping")
+ return
+ # get a flan-t5 from HuggingFace
+ from transformers import T5ForConditionalGeneration, T5Tokenizer, T5Config
+ config = T5Config.from_pretrained("google/flan-t5-small")
+ tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
+ t5 = T5ForConditionalGeneration.from_pretrained(
+ "google/flan-t5-small", config=config, cache_dir=self._test_dir
+ )
+
+ # config the intervention mapping with pv global vars
+        """Only define a couple of components here for simplicity"""
+ pv.type_to_module_mapping[type(t5)] = {
+ "mlp_output": ("encoder.block[%s].layer[1]",
+ pv.models.constants.CONST_OUTPUT_HOOK),
+ "attention_input": ("encoder.block[%s].layer[0]",
+ pv.models.constants.CONST_OUTPUT_HOOK),
+ }
+ pv.type_to_dimension_mapping[type(t5)] = {
+ "mlp_output": ("d_model",),
+ "attention_input": ("d_model",),
+ "block_output": ("d_model",),
+ "head_attention_value_output": ("d_model/num_heads",),
+ }
+
+        # wrap the t5 model
+ pv_t5 = pv.IntervenableModel({
+ "layer": 0,
+ "component": "mlp_output",
+ "source_representation": torch.zeros(
+ t5.config.d_model)
+ }, model=t5)
+
+ # then intervene!
+ base = tokenizer("The capital of Spain is",
+ return_tensors="pt")
+ decoder_input_ids = tokenizer(
+ "", return_tensors="pt").input_ids
+ base["decoder_input_ids"] = decoder_input_ids
+ intervened_outputs = pv_t5(
+ base,
+ unit_locations={"base": 3}
+ )
+
+ def test_path_patching(self):
+
+ def path_patching_config(
+ layer, last_layer,
+ component="head_attention_value_output", unit="h.pos"
+ ):
+ intervening_component = [
+ {"layer": layer, "component": component, "unit": unit, "group_key": 0}]
+ restoring_components = []
+ if not component.startswith("mlp_"):
+ restoring_components += [
+ {"layer": layer, "component": "mlp_output", "group_key": 1}]
+ for i in range(layer+1, last_layer):
+ restoring_components += [
+ {"layer": i, "component": "attention_output", "group_key": 1},
+ {"layer": i, "component": "mlp_output", "group_key": 1}
+ ]
+ intervenable_config = pv.IntervenableConfig(
+ intervening_component + restoring_components)
+ return intervenable_config
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ pv_gpt2 = pv.IntervenableModel(
+ path_patching_config(4, gpt2.config.n_layer),
+ model=gpt2
+ )
+
+ pv_gpt2.save(
+ save_directory="./tmp/"
+ )
+
+ pv_gpt2 = pv.IntervenableModel.load(
+ "./tmp/",
+ model=gpt2)
+
+ def test_multisource_parallel(self):
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ config = pv.IntervenableConfig([
+ {"layer": 0, "component": "mlp_output"},
+ {"layer": 2, "component": "mlp_output"}],
+ mode="parallel"
+ )
+ pv_gpt2 = pv.IntervenableModel(config, model=gpt2)
+
+ base = tokenizer("The capital of Spain is", return_tensors="pt")
+ sources = [tokenizer("The capital of Italy is", return_tensors="pt"),
+ tokenizer("The capital of China is", return_tensors="pt")]
+
+ intervened_outputs = pv_gpt2(
+ base, sources,
+ # on same position
+ {"sources->base": 4},
+ )
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ config = pv.IntervenableConfig([
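+            # the same block-output config is repeated via "*2", one intervention per source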
+ {"layer": 0, "component": "block_output",
+ "subspace_partition":
+ [[0, 128], [128, 256]]}]*2,
+ intervention_types=pv.VanillaIntervention,
+ # act in parallel
+ mode="parallel"
+ )
+ pv_gpt2 = pv.IntervenableModel(config, model=gpt2)
+
+ base = tokenizer("The capital of Spain is", return_tensors="pt")
+ sources = [tokenizer("The capital of Italy is", return_tensors="pt"),
+ tokenizer("The capital of China is", return_tensors="pt")]
+
+ intervened_outputs = pv_gpt2(
+ base, sources,
+ # on same position
+ {"sources->base": 4},
+ # on different subspaces
+ subspaces=[[[0]], [[1]]],
+ )
+
+ def test_multisource_serial(self):
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ config = pv.IntervenableConfig([
+ {"layer": 0, "component": "mlp_output"},
+ {"layer": 2, "component": "mlp_output"}],
+ mode="serial"
+ )
+ pv_gpt2 = pv.IntervenableModel(config, model=gpt2)
+
+ base = tokenizer("The capital of Spain is", return_tensors="pt")
+ sources = [tokenizer("The capital of Italy is", return_tensors="pt"),
+ tokenizer("The capital of China is", return_tensors="pt")]
+
+ intervened_outputs = pv_gpt2(
+ base, sources,
+ # serialized intervention
+ # order is based on sources list
+ {"source_0->source_1": 3, "source_1->base": 4},
+ )
+
+ _, tokenizer, gpt2 = pv.create_gpt2(cache_dir=self._test_dir)
+
+ config = pv.IntervenableConfig([
+ {"layer": 0, "component": "block_output",
+ "subspace_partition": [[0, 128], [128, 256]]},
+ {"layer": 2, "component": "block_output",
+ "subspace_partition": [[0, 128], [128, 256]]}],
+ intervention_types=pv.VanillaIntervention,
+            # act in sequence
+ mode="serial"
+ )
+ pv_gpt2 = pv.IntervenableModel(config, model=gpt2)
+
+ base = tokenizer("The capital of Spain is", return_tensors="pt")
+ sources = [tokenizer("The capital of Italy is", return_tensors="pt"),
+ tokenizer("The capital of China is", return_tensors="pt")]
+
+ intervened_outputs = pv_gpt2(
+ base, sources,
+ # serialized intervention
+ # order is based on sources list
+ {"source_0->source_1": 3, "source_1->base": 4},
+ # on different subspaces
+ subspaces=[[[0]], [[1]]],
+ )
+
+ @classmethod
+ def tearDownClass(self):
+ print(f"Removing testing dir {self._test_dir}")
+ if os.path.exists(self._test_dir) and os.path.isdir(self._test_dir):
+ shutil.rmtree(self._test_dir)
\ No newline at end of file
diff --git a/tests/integration_tests/InterventionWithGPT2TestCase.py b/tests/integration_tests/InterventionWithGPT2TestCase.py
index 98536565..7591a26e 100644
--- a/tests/integration_tests/InterventionWithGPT2TestCase.py
+++ b/tests/integration_tests/InterventionWithGPT2TestCase.py
@@ -20,17 +20,17 @@ def setUpClass(self):
vocab_size=10,
)
)
- self.vanilla_block_output_intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ self.vanilla_block_output_config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
0,
"block_output",
"pos",
1,
),
],
- intervenable_interventions_type=VanillaIntervention,
+ intervention_types=VanillaIntervention,
)
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
self.gpt2 = self.gpt2.to(self.device)
@@ -62,7 +62,7 @@ def test_clean_run_positive(self):
with our object.
"""
intervenable = IntervenableModel(
- self.vanilla_block_output_intervenable_config, self.gpt2
+ self.vanilla_block_output_config, self.gpt2
)
intervenable.set_device(self.device)
base = {"input_ids": torch.randint(0, 10, (10, 5)).to(self.device)}
@@ -74,24 +74,24 @@ def test_clean_run_positive(self):
torch.allclose(GPT2_RUN(self.gpt2, base["input_ids"], {}, {}), golden_out)
)
- def test_invalid_intervenable_unit_negative(self):
+ def test_invalid_unit_negative(self):
"""
Invalid intervenable unit.
"""
- intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
0,
"block_output",
"pos.h",
1,
),
],
- intervenable_interventions_type=VanillaIntervention,
+ intervention_types=VanillaIntervention,
)
try:
- intervenable = IntervenableModel(intervenable_config, self.gpt2)
+ intervenable = IntervenableModel(config, self.gpt2)
except ValueError:
pass
else:
@@ -118,20 +118,20 @@ def _test_with_position_intervention(
"input_ids": torch.randint(0, 10, (b_s, max_position + 2)).to(self.device)
}
- intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
intervention_layer,
intervention_stream,
"pos",
len(positions),
)
],
- intervenable_interventions_type=intervention_type,
+ intervention_types=intervention_type,
)
intervenable = IntervenableModel(
- intervenable_config, self.gpt2, use_fast=use_fast
+ config, self.gpt2, use_fast=use_fast
)
intervention = list(intervenable.interventions.values())[0][0]
@@ -241,19 +241,19 @@ def _test_with_head_position_intervention(
"input_ids": torch.randint(0, 10, (b_s, max_position + 2)).to(self.device)
}
- intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
intervention_layer,
intervention_stream,
"h.pos",
len(positions),
)
],
- intervenable_interventions_type=intervention_type,
+ intervention_types=intervention_type,
)
- intervenable = IntervenableModel(intervenable_config, self.gpt2)
+ intervenable = IntervenableModel(config, self.gpt2)
intervention = list(intervenable.interventions.values())[0][0]
base_activations = {}
@@ -392,10 +392,10 @@ def _test_with_position_intervention_constant_source(
"input_ids": torch.randint(0, 10, (b_s, max_position + 1)).to(self.device)
}
- intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
intervention_layer,
intervention_stream,
"pos",
@@ -406,10 +406,10 @@ def _test_with_position_intervention_constant_source(
torch.rand(self.config.n_embd*4).to(self.gpt2.device)
)
],
- intervenable_interventions_type=intervention_type,
+ intervention_types=intervention_type,
)
intervenable = IntervenableModel(
- intervenable_config, self.gpt2, use_fast=use_fast
+ config, self.gpt2, use_fast=use_fast
)
intervention = list(intervenable.interventions.values())[0][0]
@@ -571,10 +571,10 @@ def _test_with_long_sequence_position_intervention_constant_source_positive(
}
intervention_layer = random.randint(0, 2)
- intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
intervention_layer,
intervention_stream,
"pos",
@@ -585,10 +585,10 @@ def _test_with_long_sequence_position_intervention_constant_source_positive(
torch.rand(self.config.n_embd*4).to(self.gpt2.device)
)
],
- intervenable_interventions_type=intervention_type,
+ intervention_types=intervention_type,
)
intervenable = IntervenableModel(
- intervenable_config, self.gpt2, use_fast=True
+ config, self.gpt2, use_fast=True
)
intervention = list(intervenable.interventions.values())[0][0]
@@ -641,7 +641,7 @@ def suite():
suite.addTest(InterventionWithGPT2TestCase("test_clean_run_positive"))
suite.addTest(
InterventionWithGPT2TestCase(
- "test_invalid_intervenable_unit_negative"
+ "test_invalid_unit_negative"
)
)
suite.addTest(
diff --git a/tests/integration_tests/InterventionWithMLPTestCase.py b/tests/integration_tests/InterventionWithMLPTestCase.py
index e6198ea2..db8515f0 100644
--- a/tests/integration_tests/InterventionWithMLPTestCase.py
+++ b/tests/integration_tests/InterventionWithMLPTestCase.py
@@ -13,10 +13,10 @@ def setUpClass(self):
)
)
- self.test_subspace_intervention_link_intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.mlp),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ self.test_subspace_intervention_link_config = IntervenableConfig(
+ model_type=type(self.mlp),
+ representations=[
+ RepresentationConfig(
0,
"mlp_activation",
"pos", # mlp layer creates a single token reprs
@@ -27,7 +27,7 @@ def setUpClass(self):
], # partition into two sets of subspaces
intervention_link_key=0, # linked ones target the same subspace
),
- IntervenableRepresentationConfig(
+ RepresentationConfig(
0,
"mlp_activation",
"pos", # mlp layer creates a single token reprs
@@ -39,14 +39,14 @@ def setUpClass(self):
intervention_link_key=0, # linked ones target the same subspace
),
],
- intervenable_interventions_type=VanillaIntervention,
+ intervention_types=VanillaIntervention,
)
- self.test_subspace_no_intervention_link_intervenable_config = (
+ self.test_subspace_no_intervention_link_config = (
IntervenableConfig(
- intervenable_model_type=type(self.mlp),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ model_type=type(self.mlp),
+ representations=[
+ RepresentationConfig(
0,
"mlp_activation",
"pos", # mlp layer creates a single token reprs
@@ -56,7 +56,7 @@ def setUpClass(self):
[1, 3],
], # partition into two sets of subspaces
),
- IntervenableRepresentationConfig(
+ RepresentationConfig(
0,
"mlp_activation",
"pos", # mlp layer creates a single token reprs
@@ -67,38 +67,38 @@ def setUpClass(self):
], # partition into two sets of subspaces
),
],
- intervenable_interventions_type=VanillaIntervention,
+ intervention_types=VanillaIntervention,
)
)
- self.test_subspace_no_intervention_link_trainable_intervenable_config = (
+ self.test_subspace_no_intervention_link_trainable_config = (
IntervenableConfig(
- intervenable_model_type=type(self.mlp),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ model_type=type(self.mlp),
+ representations=[
+ RepresentationConfig(
0,
"mlp_activation",
"pos", # mlp layer creates a single token reprs
1,
- intervenable_low_rank_dimension=2,
+ low_rank_dimension=2,
subspace_partition=[
[0, 1],
[1, 2],
], # partition into two sets of subspaces
),
- IntervenableRepresentationConfig(
+ RepresentationConfig(
0,
"mlp_activation",
"pos", # mlp layer creates a single token reprs
1,
- intervenable_low_rank_dimension=2,
+ low_rank_dimension=2,
subspace_partition=[
[0, 1],
[1, 2],
], # partition into two sets of subspaces
),
],
- intervenable_interventions_type=LowRankRotatedSpaceIntervention,
+ intervention_types=LowRankRotatedSpaceIntervention,
)
)
@@ -108,7 +108,7 @@ def test_clean_run_positive(self):
with our object.
"""
intervenable = IntervenableModel(
- self.test_subspace_intervention_link_intervenable_config, self.mlp
+ self.test_subspace_intervention_link_config, self.mlp
)
base = {"inputs_embeds": torch.rand(10, 1, 3)}
self.assertTrue(
@@ -120,7 +120,7 @@ def test_with_subspace_positive(self):
Positive test case to intervene only a set of subspace.
"""
intervenable = IntervenableModel(
- self.test_subspace_intervention_link_intervenable_config, self.mlp
+ self.test_subspace_intervention_link_config, self.mlp
)
# golden label
b_s = 10
@@ -148,7 +148,7 @@ def test_with_subspace_negative(self):
Negative test case to check input length.
"""
intervenable = IntervenableModel(
- self.test_subspace_intervention_link_intervenable_config, self.mlp
+ self.test_subspace_intervention_link_config, self.mlp
)
# golden label
b_s = 10
@@ -173,7 +173,7 @@ def test_intervention_link_positive(self):
Positive test case to intervene linked subspace.
"""
intervenable = IntervenableModel(
- self.test_subspace_intervention_link_intervenable_config, self.mlp
+ self.test_subspace_intervention_link_config, self.mlp
)
# golden label
b_s = 10
@@ -219,7 +219,7 @@ def test_no_intervention_link_positive(self):
Positive test case to intervene not linked subspace (overwrite).
"""
intervenable = IntervenableModel(
- self.test_subspace_no_intervention_link_intervenable_config, self.mlp
+ self.test_subspace_no_intervention_link_config, self.mlp
)
# golden label
b_s = 10
@@ -266,7 +266,7 @@ def test_no_intervention_link_negative(self):
Negative test case to intervene not linked subspace with trainable interventions.
"""
intervenable = IntervenableModel(
- self.test_subspace_no_intervention_link_trainable_intervenable_config,
+ self.test_subspace_no_intervention_link_trainable_config,
self.mlp,
)
# golden label
diff --git a/tests/unit_tests/IntervenableConfigUnitTestCase.py b/tests/unit_tests/IntervenableConfigUnitTestCase.py
index 3e9fc411..9f3ded9d 100644
--- a/tests/unit_tests/IntervenableConfigUnitTestCase.py
+++ b/tests/unit_tests/IntervenableConfigUnitTestCase.py
@@ -22,40 +22,40 @@ def setUpClass(self):
)
def test_initialization_positive(self):
- intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
0,
"block_output",
"pos",
1,
),
],
- intervenable_interventions_type=VanillaIntervention,
+ intervention_types=VanillaIntervention,
)
- assert intervenable_config.intervenable_model_type == type(self.gpt2)
- assert len(intervenable_config.intervenable_representations) == 1
+ assert config.model_type == type(self.gpt2)
+ assert len(config.representations) == 1
assert (
- intervenable_config.intervenable_interventions_type == VanillaIntervention
+ config.intervention_types == VanillaIntervention
)
assert (
- intervenable_config.intervenable_representations[0].intervenable_layer == 0
+ config.representations[0].layer == 0
)
assert (
- intervenable_config.intervenable_representations[
+ config.representations[
0
- ].intervenable_representation_type
+ ].component
== "block_output"
)
assert (
- intervenable_config.intervenable_representations[0].intervenable_unit
+ config.representations[0].unit
== "pos"
)
assert (
- intervenable_config.intervenable_representations[0].max_number_of_units == 1
+ config.representations[0].max_number_of_units == 1
)
diff --git a/tests/unit_tests/IntervenableUnitTestCase.py b/tests/unit_tests/IntervenableUnitTestCase.py
index b2e996a8..0b5da071 100644
--- a/tests/unit_tests/IntervenableUnitTestCase.py
+++ b/tests/unit_tests/IntervenableUnitTestCase.py
@@ -24,26 +24,26 @@ def setUpClass(self):
self.test_output_dir_pool = []
def test_initialization_positive(self):
- intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
0,
"block_output",
"pos",
1,
),
- IntervenableRepresentationConfig(
+ RepresentationConfig(
1,
"block_output",
"pos",
1,
),
],
- intervenable_interventions_type=VanillaIntervention,
+ intervention_types=VanillaIntervention,
)
- intervenable = IntervenableModel(intervenable_config, self.gpt2)
+ intervenable = IntervenableModel(config, self.gpt2)
assert intervenable.mode == "parallel"
self.assertTrue(intervenable.is_model_stateless)
@@ -66,26 +66,26 @@ def test_initialization_positive(self):
assert len(intervenable._batched_setter_activation_select) == 0
def test_initialization_invalid_order_negative(self):
- intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
1,
"block_output",
"pos",
1,
),
- IntervenableRepresentationConfig(
+ RepresentationConfig(
0,
"block_output",
"pos",
1,
),
],
- intervenable_interventions_type=VanillaIntervention,
+ intervention_types=VanillaIntervention,
)
try:
- intervenable = IntervenableModel(intervenable_config, self.gpt2)
+ intervenable = IntervenableModel(config, self.gpt2)
except ValueError:
pass
else:
@@ -93,26 +93,26 @@ def test_initialization_invalid_order_negative(self):
"ValueError for invalid intervention " "order is not thrown"
)
- intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
0,
"block_output",
"pos",
1,
),
- IntervenableRepresentationConfig(
+ RepresentationConfig(
0,
"mlp_output",
"pos",
1,
),
],
- intervenable_interventions_type=VanillaIntervention,
+ intervention_types=VanillaIntervention,
)
try:
- intervenable = IntervenableModel(intervenable_config, self.gpt2)
+ intervenable = IntervenableModel(config, self.gpt2)
except ValueError:
pass
else:
@@ -121,26 +121,26 @@ def test_initialization_invalid_order_negative(self):
)
def test_initialization_invalid_repr_name_negative(self):
- intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
1,
"block_output",
"pos",
1,
),
- IntervenableRepresentationConfig(
+ RepresentationConfig(
0,
"strange_stream_me",
"pos",
1,
),
],
- intervenable_interventions_type=VanillaIntervention,
+ intervention_types=VanillaIntervention,
)
try:
- intervenable = IntervenableModel(intervenable_config, self.gpt2)
+ intervenable = IntervenableModel(config, self.gpt2)
except KeyError:
pass
else:
@@ -149,26 +149,26 @@ def test_initialization_invalid_repr_name_negative(self):
)
def test_local_non_trainable_save_positive(self):
- intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
0,
"block_output",
"pos",
1,
),
- IntervenableRepresentationConfig(
+ RepresentationConfig(
1,
"block_output",
"pos",
1,
),
],
- intervenable_interventions_type=VanillaIntervention,
+ intervention_types=VanillaIntervention,
)
- intervenable = IntervenableModel(intervenable_config, self.gpt2)
+ intervenable = IntervenableModel(config, self.gpt2)
_uuid = str(uuid.uuid4())[:6]
_test_dir = os.path.join(f"./test_output_dir_prefix-{_uuid}")
self.test_output_dir_pool += [_test_dir]
@@ -184,26 +184,26 @@ def test_local_non_trainable_save_positive(self):
)
def test_local_trainable_save_positive(self):
- intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
+ config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
0,
"block_output",
"pos",
1,
),
- IntervenableRepresentationConfig(
+ RepresentationConfig(
1,
"block_output",
"pos",
1,
),
],
- intervenable_interventions_type=RotatedSpaceIntervention,
+ intervention_types=RotatedSpaceIntervention,
)
- intervenable = IntervenableModel(intervenable_config, self.gpt2)
+ intervenable = IntervenableModel(config, self.gpt2)
_uuid = str(uuid.uuid4())[:6]
_test_dir = os.path.join(f"./test_output_dir_prefix-{_uuid}")
self.test_output_dir_pool += [_test_dir]
@@ -221,35 +221,35 @@ def test_local_trainable_save_positive(self):
"there should binary file for each of them."
)
- def _test_local_trainable_load_positive(self, intervenable_interventions_type):
+ def _test_local_trainable_load_positive(self, intervention_types):
b_s = 10
- intervenable_config = IntervenableConfig(
- intervenable_model_type=type(self.gpt2),
- intervenable_representations=[
- IntervenableRepresentationConfig(
- 0, "block_output", "pos", 1, intervenable_low_rank_dimension=4
+ config = IntervenableConfig(
+ model_type=type(self.gpt2),
+ representations=[
+ RepresentationConfig(
+ 0, "block_output", "pos", 1, low_rank_dimension=4
),
- IntervenableRepresentationConfig(
- 1, "block_output", "pos", 1, intervenable_low_rank_dimension=4
+ RepresentationConfig(
+ 1, "block_output", "pos", 1, low_rank_dimension=4
),
],
- intervenable_interventions_type=intervenable_interventions_type,
+ intervention_types=intervention_types,
)
- intervenable = IntervenableModel(intervenable_config, self.gpt2)
+ intervenable = IntervenableModel(config, self.gpt2)
_uuid = str(uuid.uuid4())[:6]
_test_dir = os.path.join(f"./test_output_dir_prefix-{_uuid}")
self.test_output_dir_pool += [_test_dir]
intervenable.save(save_directory=_test_dir, save_to_hf_hub=False)
- intervenable_loaded = IntervenableModel.load(
+ loaded = IntervenableModel.load(
load_directory=_test_dir,
model=self.gpt2,
)
- assert intervenable != intervenable_loaded
+ assert intervenable != loaded
base = {"input_ids": torch.randint(0, 10, (b_s, 10))}
source = {"input_ids": torch.randint(0, 10, (b_s, 10))}
@@ -258,7 +258,7 @@ def _test_local_trainable_load_positive(self, intervenable_interventions_type):
base, [source, source], {"sources->base": ([[[3]], [[4]]], [[[3]], [[4]]])}
)
- _, counterfactual_outputs_loaded = intervenable_loaded(
+ _, counterfactual_outputs_loaded = loaded(
base, [source, source], {"sources->base": ([[[3]], [[4]]], [[[3]], [[4]]])}
)
diff --git a/tests/utils.py b/tests/utils.py
index e6d82841..d005a01d 100644
--- a/tests/utils.py
+++ b/tests/utils.py
@@ -33,7 +33,7 @@ def is_package_installed(package_name):
from pyvene.models.basic_utils import embed_to_distrib, top_vals, format_token
from pyvene.models.configuration_intervenable_model import (
- IntervenableRepresentationConfig,
+ RepresentationConfig,
IntervenableConfig,
)
from pyvene.models.intervenable_base import IntervenableModel
diff --git a/tutorials/advanced_tutorials/Boundless_DAS.ipynb b/tutorials/advanced_tutorials/Boundless_DAS.ipynb
index ee089e25..281677c6 100644
--- a/tutorials/advanced_tutorials/Boundless_DAS.ipynb
+++ b/tutorials/advanced_tutorials/Boundless_DAS.ipynb
@@ -63,7 +63,7 @@
},
{
"cell_type": "code",
- "execution_count": 2,
+ "execution_count": 3,
"id": "9a39c2b3",
"metadata": {},
"outputs": [],
@@ -84,7 +84,7 @@
"from pyvene import (\n",
" IntervenableModel,\n",
" BoundlessRotatedSpaceIntervention,\n",
- " IntervenableRepresentationConfig,\n",
+ " RepresentationConfig,\n",
" IntervenableConfig,\n",
")\n",
"from pyvene import create_llama\n",
@@ -391,25 +391,23 @@
"outputs": [],
"source": [
"def simple_boundless_das_position_config(model_type, intervention_type, layer):\n",
- " intervenable_config = IntervenableConfig(\n",
- " intervenable_model_type=model_type,\n",
- " intervenable_representations=[\n",
- " IntervenableRepresentationConfig(\n",
- " layer, # layer\n",
+ " config = IntervenableConfig(\n",
+ " model_type=model_type,\n",
+ " representations=[\n",
+ " RepresentationConfig(\n",
+ " layer, # layer\n",
" intervention_type, # intervention type\n",
- " \"pos\", # intervention unit\n",
- " 1, # max number of unit\n",
" ),\n",
" ],\n",
- " intervenable_interventions_type=BoundlessRotatedSpaceIntervention,\n",
+ " intervention_types=BoundlessRotatedSpaceIntervention,\n",
" )\n",
- " return intervenable_config\n",
+ " return config\n",
"\n",
"\n",
- "intervenable_config = simple_boundless_das_position_config(\n",
+ "config = simple_boundless_das_position_config(\n",
" type(llama), \"block_output\", 15\n",
")\n",
- "intervenable = IntervenableModel(intervenable_config, llama)\n",
+ "intervenable = IntervenableModel(config, llama)\n",
"intervenable.set_device(\"cuda\")\n",
"intervenable.disable_model_gradients()"
]
@@ -521,7 +519,7 @@
" _, counterfactual_outputs = intervenable(\n",
" {\"input_ids\": inputs[\"input_ids\"]},\n",
" [{\"input_ids\": inputs[\"source_input_ids\"]}],\n",
- " {\"sources->base\": ([[[80]] * b_s], [[[80]] * b_s])}, # swap 80th token\n",
+ " {\"sources->base\": 80}, # swap 80th token\n",
" )\n",
" eval_metrics = compute_metrics(\n",
" [counterfactual_outputs.logits], [inputs[\"labels\"]]\n",
@@ -586,7 +584,7 @@
" _, counterfactual_outputs = intervenable(\n",
" {\"input_ids\": inputs[\"input_ids\"]},\n",
" [{\"input_ids\": inputs[\"source_input_ids\"]}],\n",
- " {\"sources->base\": ([[[80]] * b_s], [[[80]] * b_s])}, # swap 80th token\n",
+ " {\"sources->base\": 80}, # swap 80th token\n",
" )\n",
" eval_labels += [inputs[\"labels\"]]\n",
" eval_preds += [counterfactual_outputs.logits]\n",
diff --git a/tutorials/advanced_tutorials/Causal_Tracing.ipynb b/tutorials/advanced_tutorials/Causal_Tracing.ipynb
index 58db2357..37951e6e 100644
--- a/tutorials/advanced_tutorials/Causal_Tracing.ipynb
+++ b/tutorials/advanced_tutorials/Causal_Tracing.ipynb
@@ -50,7 +50,7 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -61,7 +61,7 @@
"from pyvene import (\n",
" IntervenableModel,\n",
" VanillaIntervention, Intervention,\n",
- " IntervenableRepresentationConfig,\n",
+ " RepresentationConfig,\n",
" IntervenableConfig,\n",
" ConstantSourceIntervention,\n",
" LocalistRepresentationIntervention\n",
@@ -109,7 +109,7 @@
},
{
"cell_type": "code",
- "execution_count": 38,
+ "execution_count": 3,
"metadata": {},
"outputs": [
{
@@ -133,10 +133,7 @@
],
"source": [
"device = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\n",
- "config, tokenizer, gpt = create_gpt2(\n",
- " name=\"gpt2-xl\",\n",
- " cache_dir=\"../../../.huggingface_cache/\", # change to your local dir\n",
- ")\n",
+ "config, tokenizer, gpt = create_gpt2(name=\"gpt2-xl\")\n",
"gpt.to(device)\n",
"\n",
"base = \"The Space Needle is in downtown\"\n",
@@ -164,7 +161,7 @@
},
{
"cell_type": "code",
- "execution_count": 28,
+ "execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
@@ -187,17 +184,17 @@
"\n",
"\n",
"def corrupted_config(model_type):\n",
- " intervenable_config = IntervenableConfig(\n",
- " intervenable_model_type=model_type,\n",
- " intervenable_representations=[\n",
- " IntervenableRepresentationConfig(\n",
+ " config = IntervenableConfig(\n",
+ " model_type=model_type,\n",
+ " representations=[\n",
+ " RepresentationConfig(\n",
" 0, # layer\n",
" \"block_input\", # intervention type\n",
" ),\n",
" ],\n",
- " intervenable_interventions_type=NoiseIntervention,\n",
+ " intervention_types=NoiseIntervention,\n",
" )\n",
- " return intervenable_config"
+ " return config"
]
},
{
@@ -209,7 +206,7 @@
},
{
"cell_type": "code",
- "execution_count": 30,
+ "execution_count": 5,
"metadata": {},
"outputs": [
{
@@ -231,8 +228,8 @@
],
"source": [
"base = tokenizer(\"The Space Needle is in downtown\", return_tensors=\"pt\").to(device)\n",
- "intervenable_config = corrupted_config(type(gpt))\n",
- "intervenable = IntervenableModel(intervenable_config, gpt)\n",
+ "config = corrupted_config(type(gpt))\n",
+ "intervenable = IntervenableModel(config, gpt)\n",
"_, counterfactual_outputs = intervenable(\n",
" base, unit_locations={\"base\": ([[[0, 1, 2, 3]]])}\n",
")\n",
@@ -263,21 +260,21 @@
" layer, stream=\"mlp_activation\", window=10, num_layers=48):\n",
" start = max(0, layer - window // 2)\n",
" end = min(num_layers, layer - (-window // 2))\n",
- " intervenable_config = IntervenableConfig(\n",
- " intervenable_representations=[\n",
- " IntervenableRepresentationConfig(\n",
+ " config = IntervenableConfig(\n",
+ " representations=[\n",
+ " RepresentationConfig(\n",
" 0, # layer\n",
" \"block_input\", # intervention type\n",
" ),\n",
" ] + [\n",
- " IntervenableRepresentationConfig(\n",
+ " RepresentationConfig(\n",
" i, # layer\n",
" stream, # intervention type\n",
" ) for i in range(start, end)],\n",
- " intervenable_interventions_type=\\\n",
+ " intervention_types=\\\n",
" [NoiseIntervention]+[VanillaIntervention]*(end-start),\n",
" )\n",
- " return intervenable_config"
+ " return config"
]
},
{
@@ -316,12 +313,12 @@
" data = []\n",
" for layer_i in tqdm(range(gpt.config.n_layer)):\n",
" for pos_i in range(7):\n",
- " intervenable_config = restore_corrupted_with_interval_config(\n",
+ " config = restore_corrupted_with_interval_config(\n",
" layer_i, stream, \n",
" window=1 if stream == \"block_output\" else 10\n",
" )\n",
- " n_restores = len(intervenable_config.intervenable_representations) - 1\n",
- " intervenable = IntervenableModel(intervenable_config, gpt)\n",
+ " n_restores = len(config.representations) - 1\n",
+ " intervenable = IntervenableModel(config, gpt)\n",
" _, counterfactual_outputs = intervenable(\n",
" base,\n",
" [None] + [base]*n_restores,\n",
diff --git a/tutorials/advanced_tutorials/DAS_Main_Introduction.ipynb b/tutorials/advanced_tutorials/DAS_Main_Introduction.ipynb
index 596bd35a..3bc39013 100644
--- a/tutorials/advanced_tutorials/DAS_Main_Introduction.ipynb
+++ b/tutorials/advanced_tutorials/DAS_Main_Introduction.ipynb
@@ -69,20 +69,9 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": 2,
"metadata": {},
- "outputs": [
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "c:\\Users\\attic\\anaconda3\\envs\\bigkid\\lib\\site-packages\\pandas\\core\\computation\\expressions.py:21: UserWarning: Pandas requires version '2.8.0' or newer of 'numexpr' (version '2.7.3' currently installed).\n",
- " from pandas.core.computation.check import NUMEXPR_INSTALLED\n",
- "c:\\Users\\attic\\anaconda3\\envs\\bigkid\\lib\\site-packages\\pandas\\core\\arrays\\masked.py:62: UserWarning: Pandas requires version '1.3.4' or newer of 'bottleneck' (version '1.3.2' currently installed).\n",
- " from pandas.core import (\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"import torch\n",
"from torch.utils.data import DataLoader\n",
@@ -104,23 +93,23 @@
" VanillaIntervention,\n",
" RotatedSpaceIntervention,\n",
" LowRankRotatedSpaceIntervention,\n",
- " IntervenableRepresentationConfig,\n",
+ " RepresentationConfig,\n",
" IntervenableConfig,\n",
")"
]
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
- ""
+ ""
]
},
- "execution_count": 4,
+ "execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -161,7 +150,7 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
@@ -229,12 +218,12 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
- "image/png": "iVBORw0KGgoAAAANSUhEUgAAAjwAAAIuCAYAAAC7EdIKAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjguMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8g+/7EAAAACXBIWXMAAAsTAAALEwEAmpwYAABM8UlEQVR4nO3de5zM9eLH8fesXXZXSHIpClEpl3Ti5BRRHQrVSZ0kKZQSdpe1t/luuvy67Hdv1t5QFIrulFMdTjp0JN2skkoq3RRlJde1y16+vz9W5X6d3c/Md17Px8PjnGZmZ1/nmHzfvjM743EcRwAAAG4WYjoAAACgqjF4AACA6zF4AACA6zF4AACA6zF4AACA6zF4AACA64Ue7spTTz3VadGiRTWlAAAAHL/ly5f/6jhOw4Ndd9jB06JFCxUUFFRNFQAAgA95PJ4fDnUdT2kBAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXCzUdACA4FBYVasaKGVq5YaW2lmxVvfB66tC4g4Z2HKqGtRuazgPgch7HcQ55ZadOnZyCgoJqzAHgNsvWLZP9jq35a+ZLkkrKSv64LiI0Qo4c9W7dW1ZXS52bdjaVCcAFPB7PcsdxOh3sOp7SAlBlJhdMVo+nemju6rkqKSvZZ+xIUnFZsUrKSjR39Vz1eKqHJhdMNtIJwP14SgtAlZhcMFnxC+K1s3TnEW/ryNHO0p2KXxAvSRrRaURV5wEIMgweAD63bN2yg4+djyW9J+k3SbUknSfpSkkRlVf/Pno6n95ZnU4/6FlpADguPKUFwOfsd2wVlxbve+G7kv4rqackS9IwSVskzZRU9ufNikuLZS+xqycUQNBg8ADwqcKiQs1fM1+O9vqBiBJJb0nqLelsSTUk1Zd0kypHz8o/b+rI0bw187SxaGO1NQNwPwYPAJ+asWLGgRf+qMqzOOftd3ktVQ6gb/e92CPPwe8HAI4TgweAT63csPKAn8bSTkmRqjyzs7+T9ly/l+KyYn1a+GnVBAIISgweAD61tWTrgRdGqnLUlB/kC3bsuX4/m0s2+zYMQFBj8ADwqXrh9Q688AxV/kzoF/tdvkvS15JaHvgl9cPr+7wNQPBi8ADwqQ6NOyg8NHzfC8MldZc0X5UDp1zSZkkvSaor6YJ9bx4RGqH2jdpXfSyAoMHgAeBTQzoOOfgVXVX5njsLJNmSnpBUT9JgHfCOYI6cQ98PABwH3ngQgE81qt1IvVv31tzVc/f90XRJ+sueX4fhkUd9WvfhA0UB+BRneAD4nNXVUkRYxHF9bURYhKxulo+LAAQ7Bg8An+vctLMye2UqMuwgP351GJFhkcrslcnHSgDwOZ7SAlAlfv8A0PgF8SouLT7w6a29eORRRFiEMntl8sGhAKoEZ3gAVAnHcbTlzS269KtL1a9NP4WHhisidN+nuSJCIxQeGq5LT7lU5757rm5pfYuhWgBuxxkeAD63du1a3XbbbXr77bfVtm1bfTbjM20s2qgZK2bo08JPtblks+qH11f7Ru01pOMQLXljiW78940688wzNX36dN14442m/ycAcBkGDwCfmjRpkhITE1VcXPlp6R07dpQkNazdUAmXJhz0a5o3b67w8HBt375dt99+u5544gnNmjVLDRo0qK5sAC7HU1oAfMZxHE2ZMkXl5eWqqKiQJJ1++ulH/LpTTjlFNWpUftDWrl279NZbb2n16tVV2goguDB4APiMx+PR8uXL1aVLF4WFhalGjRpq1KjREb/ulFNOUUlJiWrVqqXQ0FCtWLFCl156aTUUAwgWDB4APrVq1Sp9/vnn+uyzz3TPPfcc1XCpW7eubrnlFr355psaNGiQHn/88WooBRBMPI5z6B8V7dSpk1NQUFCNOQACmeM4uvzyy9W/f3+NHDnyuO6jsLBQbdu21f/+9z+1bdvWx4UA3Mzj8Sx3HOegb+TFGR4APvPCCy9o69atGj58+HHfR6NGjXT//fcrOjpah/sLGQAcCwYPAJ/YsWOHEhISlJeX98cLkI/XiBEj9Ouvv2r27Nk+qgMQ7Bg8AHzi0UcfVY8ePdS1a9cTvq/Q0FDl5eUpLi5ORUVFPqgDEOwYPABO2FdffaWpU6cqPT3dZ/fZvXt3de3aVbZt++w+AQQvBg+AE+I4jkaPHi2v16vTTjvNp/edkZGhyZMna82aNT69XwDBh8ED4IS89tpr+v777xUTE+Pz+27atKkSExM1ZswYn983gODC4AFw3EpKShQbG6vc3FzVrFmzSr7HmDFj9PXXX+v111+vkvsHEBwYPACOW0ZGhjp27KiePXtW2feoVauWcnJyNGbMGJWUlFTZ9wHgbgweAMflhx9+UHZ2trKysqr8e1199dVq165dtXwvAO7E4AFwXOLi4jR69Gg1b968Wr5fVlaWxo8fr7Vr11bL9wPgLgweAMfsv//9rz766CMlJCRU2/c866yzFBUVpfj4+Gr7ngDcg8ED4Jjs3r1b0dHRys7OVkRERLV+76SkJH344YdatGhRtX5fAIGPwQPgmOTl5alFixa69tprq/17R0ZGKisrS9HR0SotLa327w8gcDF4ABy1n3/+WbZtKycnRx6Px0hDv3791LRpU02cONHI9wcQmBg8AI5aUlKShg0bpnPOOcdYg8fjUW5urh555BH98ssvxjoABBYGD4CjsnTpUi1atEjjxo0znaI2bdpo6NCh8nq9plMABAgGD4AjKi8vV1RUlDIzM3XSSSeZzpEk3XfffXrzzTf13nvvmU4BEAAYPACOaMqUKapXr55uvvlm0yl/qFu3rtLS0hQVFaXy8nLTOQD8HIMHwGH9+uuveuCBB5Sbm2vshcqHcuuttyoyMlJPPvmk6RQAfo7BA+Cwxo0bpwEDBqhDhw6mUw7g8XiUl5en++67T7/99pvpHAB+jMED4JCWL1+uuXPn6qGHHjKdckgdO3bUP//5T913332mUwD4MQYPgIOqqKhQdHS0UlJSdPLJJ5vOOayHH35Ys2fP1scff2w6BYCfYvAAOKiZM2eqvLxcQ4YMMZ1yRKeccooefvhhRUdHy3Ec0zkA/BCDB8ABtm7dKsuylJ+fr5CQwPhj4s4771RJSYmeeeYZ0ykA/FBg/EkGoFr93//9n/r27avOnTubTjlqNWrUUH5+vpKSkrRt2zbTOQD8DIMHwD4+//xzzZo1SykpKaZTjlmXLl3Uq1cvPfzww6ZTAPgZBg+APziOo5iYGN1///1q2LCh6ZzjkpqaqhkzZuiLL74wnQLAjzB4APxh9uzZ2rhxo+655x7TKce
tcePGuvfeexUTE8MLmAH8gcEDQJJUVFSkuLg45efnKzQ01HTOCRk1apR+/vlnvfLKK6ZTAPgJBg8ASZJt2+rWrZsuu+wy0yknLCwsTHl5eRo7dqx27txpOgeAH2DwANCaNWv02GOPKT093XSKz1x++eW6+OKLlZaWZjoFgB9g8ABQbGysEhMT1bRpU9MpPpWZman8/Hx9++23plMAGMbgAYLc66+/rq+++kpjxowxneJzZ5xxhuLi4hQbG2s6BYBhDB4giJWUlGjMmDHKzc1VzZo1TedUibi4OK1atUrz5883nQLAIAYPEMSysrLUvn17XXXVVaZTqkytWrWUk5Oj0aNHa9euXaZzABjC4AGC1I8//qisrCxlZWWZTqlyffr00bnnnqvs7GzTKQAMYfAAQSo+Pl5RUVFq2bKl6ZRqkZ2drYyMDK1bt850CgADGDxAEFq0aJE+/PBDJSUlmU6pNq1atdKIESOUkJBgOgWAAQweIMiUlpYqOjpaWVlZioiIMJ1TrSzL0tKlS7V48WLTKQCqGYMHCDITJ05Us2bNdP3115tOqXaRkZEaP368oqOjVVZWZjoHQDVi8ABB5JdfftGjjz6qnJwceTwe0zlG3HjjjWrYsKEmT55sOgVANWLwAEHE6/Vq6NChatOmjekUYzwej/Ly8vTQQw+psLDQdA6AasLgAYLEe++9p//+97+67777TKcYd/755+v2229XcnKy6RQA1YTBAwSB8vJyRUVFKS0tTXXq1DGd4xceeOABzZs3Tx9++KHpFADVgMEDBIEnn3xSkZGRGjhwoOkUv1G3bl2lpqYqKipKFRUVpnMAVDEGD+BymzZt0n333af8/PygfaHyoQwaNEihoaGaPn266RQAVYzBA7jcfffdp5tuukkXXHCB6RS/ExISovz8fN17773avHmz6RwAVYjBA7jYxx9/rJdfflkPPfSQ6RS/9Ze//EX9+vXTAw88YDoFQBVi8AAu5TiOoqOj9fDDD+uUU04xnePXHnnkET3//PNauXKl6RQAVYTBA7jUM888o127dumOO+4wneL3GjRooIceekhRUVFyHMd0DoAqwOABXGjbtm1KSkpSfn6+atSoYTonINx1113asWOHnn/+edMpAKoAgwdwoYceekhXXXWVLr74YtMpAaNGjRrKy8tTQkKCduzYYToHgI8xeACX+eKLL/TUU0/Jtm3TKQHn0ksv1ZVXXqlHHnnEdAoAH2PwAC7iOI5iYmI0btw4NW7c2HROQEpLS9MTTzyhL7/80nQKAB9i8AAu8sorr+iXX37RqFGjTKcErCZNmig5OVmjR4/mBcyAizB4AJfYuXOnxo4dq7y8PIWGhprOCWjR0dFau3atXn31VdMpAHyEwQO4RGpqqrp06aIePXqYTgl4YWFhysvLU2xsrIqLi03nAPABBg/gAt9++60mTZqkzMxM0ymuceWVV+qiiy5SRkaG6RQAPsDgAVwgNjZWcXFxatasmekUVxk/frxycnL0/fffm04BcIIYPECAmz9/vlatWqWxY8eaTnGdM888848xCSCwMXiAALZr1y6NHj1aOTk5qlWrlukcV4qPj9eKFSu0YMEC0ykATgCDBwhgEyZMUJs2bdSnTx/TKa4VHh6u7OxsxcTEaPfu3aZzABwnBg8QoH766SdlZmZqwoQJplNc75prrlGrVq2Um5trOgXAcWLwAAEqISFBI0aMUKtWrUynuJ7H41F2drZSU1O1fv160zkAjgODBwhAixcv1rvvvivLskynBI2zzz5bd999t5KSkkynADgODB4gwJSVlSkqKkpZWVmKjIw0nRNU7r33Xi1evFhLliwxnQLgGDF4gAAzadIkNW7cWDfccIPplKBTu3ZtZWRkKDo6WuXl5aZzABwDBg8QQAoLC/Xwww8rNzdXHo/HdE5Q6t+/v+rXr6/HH3/cdAqAY8DgAQKIZVm6/fbbdf7555tOCVoej0d5eXl68MEH9euvv5rOAXCUGDxAgPjwww81f/58PfDAA6ZTgl67du00cOBA3XvvvaZTABwlBg8QACoqKjRq1CilpaWpbt26pnMg6cEHH9Srr76qgoIC0ykAjgKDBwgA06ZNU82aNTVo0CDTKdjj5JNPVkpKiqKiolRRUWE6B8ARMHgAP7d582aNGzdO+fn5vFDZzwwePFiS9PTTTxsuAXAkDB7Az91///3q16+fLrzwQtMp2E9ISIjy8/NlWZa2bNliOgfAYTB4AD+2cuVKvfjii3rkkUdMp+AQOnXqpGuvvVYPPvig6RQAh8HgAfyU4ziKiorSQw89pAYNGpjOwWGkpKTo2Wef1WeffWY6BcAhMHgAP/Xcc8+pqKhIw4YNM52CIzj11FP1wAMPKDo6Wo7jmM4BcBAMHsAPbd++XYmJicrLy1ONGjVM5+AoDB8+XJs3b9ZLL71kOgXAQTB4AD/0yCOP6Morr9Qll1xiOgVHKTQ0VHl5eYqLi9OOHTtM5wDYD4MH8DNffvmlpk2bprS0NNMpOEbdunVT9+7dlZKSYjoFwH4YPIAfcRxHMTExSk5OVpMmTUzn4Dikp6drypQp+vrrr02nANgLgwfwI//617/0008/KSoqynQKjtPpp5+upKQkjRkzxnQKgL0weAA/UVxcrNjYWOXm5iosLMx0Dk7A6NGj9c033+j11183nQJgDwYP4CfS09PVqVMnXXnllaZTcIJq1qyp3NxcjR49WiUlJaZzAIjBA/iF77//Xnl5eRo/frzpFPhIr169dMEFFygzM9N0CgAxeAC/MHbsWMXGxurMM880nQIfysrKUnZ2ttauXWs6BQh6DB7AsAULFmjlypWKi4sznQIfa9GihaKjo/m9BfwAgwcwaPfu3YqJiVF2drbCw8NN56AKJCYmqqCgQAsXLjSdAgQ1Bg9gUE5Ojlq1aqVrrrnGdAqqSEREhCZMmKDo6GiVlpaazgGCFoMHMGT9+vVKS0tTdna26RRUsX/84x8688wzlZeXZzoFCFoMHsCQxMREDR8+XGeffbbpFFQxj8ejnJwc2batX375xXQOEJQYPIABS5Ys0dtvv63k5GTTKagm5557ru644w4lJSWZTgGCEoMHqGZlZWWKiopSZmamateubToH1WjcuHFauHCh3n33XdMpQNBh8ADV7PHHH1eDBg100003mU5BNatTp47S09MVFRWl8vJy0zlAUGHwANVo48aN+r//+z/l5ubK4/GYzoEBt9xyi0466SRNnTrVdAoQVBg8QDW69957deutt6pdu3amU2CIx+NRfn6+HnjgAW3atMl0DhA0GDxANSkoKNBrr72mBx980HQKDOvQoYP69++vcePGmU4BggaDB6gGFRUVioqKkm3bqlevnukc+IGHHnpIr7zyij766CPTKUBQYPAA1eCpp56Sx+PR7bffbjoFfqJ+/fp65JFHFBUVpYqKCtM5gOsxeIAqtmXLFiUnJysvL08hIfwrhz/dcccdKisr06xZs0ynAK7Hn75AFXvwwQd13XXXqVOnTqZT4GdCQkKUn58vr9errVu3ms4BXI3BA1Shzz
77TM8++6weffRR0ynwU3/961/Vu3dvPfTQQ6ZTAFdj8ABVxHEcRUdH68EHH9Spp55qOgd+zLZtPf3001q1apXpFMC1GDxAFXnxxRe1efNmDR8+3HQK/FyjRo103333KSYmRo7jmM4BXInBA1SBHTt2KD4+Xvn5+apRo4bpHASAkSNHasOGDZozZ47pFMCVGDxAFUhJSVGPHj3UtWtX0ykIEKGhocrPz1dcXJyKiopM5wCuw+ABfOzrr7/WlClTlJ6ebjoFAaZ79+665JJLlJqaajoFcB0GD+BDjuNo9OjR8nq9Ou2000znIABlZGRo8uTJ+uabb0ynAK7C4AF86PXXX9e3336rmJgY0ykIUM2aNVN8fLxiY2NNpwCuwuABfKSkpERjxoxRbm6uatasaToHASw2NlarV6/Wv//9b9MpgGsweAAfyczMVMeOHdWrVy/TKQhwtWrVUm5ursaMGaNdu3aZzgFcgcED+MAPP/ygCRMmaPz48aZT4BJXX321zj//fGVlZZlOAVyBwQP4QHx8vEaPHq0WLVqYToGLTJgwQZmZmfrxxx9NpwABj8EDnKCFCxdq+fLlSkhIMJ0ClznrrLM0atQoxcfHm04BAh6DBzgBpaWlio6O1oQJExQREWE6By7k9Xr1wQcf6K233jKdAgQ0Bg9wAvLy8tS8eXNdd911plPgUpGRkcrKylJ0dLRKS0tN5wABi8EDHKeff/5ZKSkpysnJkcfjMZ0DF+vXr59OO+00TZo0yXQKELAYPMBx8nq9GjZsmM455xzTKXA5j8ej3NxcPfLII9qwYYPpHCAgMXiA4/Duu+9q4cKFGjdunOkUBInzzjtPgwcPltfrNZ0CBCQGD3CMysvLFRUVpYyMDJ100kmmcxBE7r//fi1YsEDvv/++6RQg4DB4gGM0depU1alTRwMGDDCdgiBTt25dpaWlKSoqSuXl5aZzgIDC4AGOwaZNm3T//fcrLy+PFyrDiFtvvVXh4eGaNm2a6RQgoDB4gGMwbtw4DRgwQB06dDCdgiDl8XiUn5+vcePG6bfffjOdAwQMBg9wlD766CO98sor+r//+z/TKQhyHTt21I033qj77rvPdAoQMBg8wFGoqKhQVFSUHn30UdWvX990DqBHHnlEs2fP1ooVK0ynAAGBwQMchVmzZqmsrExDhw41nQJIkk455RQ9/PDDioqKkuM4pnMAvxdqOgDwF4VFhZqxYoZWbliprSVbVS+8njo07qAbz7pRXq9Xc+fOVUgIf0eA/7jzzjv1+OOP65lnntGgQYMO+Rge2nGoGtZuaDoXMMpzuL8ZdOrUySkoKKjGHKD6LVu3TPY7tuavmS9JKikr+eO6iNAIlZaVqunOpnop5iV1btrZVCZwUO+9955uib9Ff4n5yyEfw44c9W7dW1ZXi8cwXM3j8Sx3HKfTwa7jr6sIapMLJqvHUz00d/VclZSV7HOgkKTismKVqUxrI9eqx1M9NLlgspFO4FBWhK1QYZ/Cwz6GS8pKNHf1XB7DCGo8pYWgNblgsuIXxGtn6c4j3taRo52lOxW/IF6SNKLTiKrOA47o98dwcVnxEW/LYxjBjsGDoGLbtt5++2099MRDf46dXEmnSBq01w1zJfWQNE/SLZKaV16889edGtltpCJeitCQa4ZUazuwt9439NabP7yp8uv2esfl7yW9IKlEB/7pXi6prrRzTOXo6Xx6Z3U6/aBn/gFX4iktBJXLLrtM7777rlLeTlFxabG0XZUHgp8lVey50XZJv0lqIenvkl6VVLrnutckdZReK3qtWruB/YX0CVH5l+XSN3suKFXlY7WXpAck3bvXr2hJEZK6V960uLRY9hK7upMBoxg8CCqdO3dWaWmp5i2ZJ0eO9IOklpJOlfTLnhv9IKm+pLqSLpJUR9JiSSskbZJ0hTRvzTxtLNpY7f2AVPkThYs2LJL6qHKE71blY/QUSRfud+NySS9JOufP6xw5PIYRdBg8CCo1a9bUaW1OU8X3e07n/CDpzD2/ftCfl+15CkseSddJWibpP5KulVRT8sijGStmVF84sJc/HnttJZ0mabak5ap8fO7vTVWe/emz78U8hhFsGDwIOrXPrq2y78oq/2GtKsfN3oNnrSqfzvpdPVWe5amlP4ZQcVmxPi38tDpygQOs3LDyz5/G6ivpO1U+XVVvvxuuUuWZyf6Swva9iscwgg2DB0GnduvalaNmp6QiSQ0knSHpxz2XFerPMzyS9I4qX/9QW9K7f168uWRz9QQD+9lasvXPfzhJUqSk/d9X8FdJ/5J0vSqf6joIHsMIJgweBJ3m7ZpX/hTLR6o8syNJ4ao8i/PRnv/8/eOyClU5cq7b82uJKl/HI6l+OJ+pBTPqhe9/Kmc/uyW9KKmTpDaHvhmPYQQTBg+Czl/O/Is8TT3Se/pz8GjPf39Pf57dqVDlT71cqsq/PTeRdLGk16TwGuFq36h9NVYDf+rQuIPCQ8MPfYPXVXlW8spD3yQiNILHMIIKgwdBZ0jHIQppGVL5dNb+g6dIfw6eD1T5Ys9L97pNd0k7pLKCMg3pOKQ6coEDHPaxt0XSSkk/SbIlPbrfrz0cOTyGEVR440EEnUa1G+m6kddp7hVzK380/Xft9vz63d/2/NpbqOSJ8ui6NtfxYYwwplHtRurdurfmrt7zGI7d68qTJT14+K/3yKM+rfvwGEZQ4QwPgpLV1VJEWMRxfW1EWISsbpaPi4Bjw2MYODYMHgSlzk07K7NXpiLDIo/p6yLDIpXZK5O35IdxPIaBY8NTWghav394YvyCeBWXFu/79NZ+PPIoIixCmb0y+dBF+A0ew8DR4wwPglr3yO66fvP16temn8JDwxURuu9TBBGhEQoPDVe/Nv20eMhiDhTwOyM6jdDiIYsP+xgO84Tp/JDzeQwjqHGGB0Fr0aJF6t27t0JDQ1VUVKSNRRs17eNpyn8pXx3+2kENIhuofaP2GtJxCC/uhF/rdHonzbl5jjYWbdSMFTM0681ZqnVyLbVp3kbtG7XXVy99pSdyntCT65/UBbkXKCws7Mh3CrgMZ3gQdBzH0aOPPqprrrlGu3fvVkRE5d+IG9ZuqDN/PFM/5f6kG0pu0NP9nlbCpQmMHQSMhrUbanj74Vr16CptzNn4x2O49WmtJUnTpk3T3/72N23YsMFwKVD9GDwIOvfff7/uv/9+FRcX73N5eXm5kpKSJEnjxo1TWVmZiTzghGRnZ8txHK1fv16LFy+WJJWWlkqSdu/erY8//lgXXHABj28EHQYPgs6wYcN08803S5JCQ0P/OBi8+OKL2ry58rOFtm/frpkzZxprBI7Htm3blJGRofLycu3evVsJCQmSpF27dkmSQkJCdPrpp2vKlCkKDeUVDQguDB4EnebNm6tnz57q2rWrhg8frvPPP1+SdN999/1xYNi9e7fuv/9+k5nAMXv88cdVUlKiGjVqKCwsTMuWLdP777+v008/XRdffLGSk5PVrFkzXXvttaZTgWrHxEfQKS8vV1pamiZNmqQrrrjij8szMjJUWFioESNGKDs7Ww0b8todBJbev
Xurbt26euqpp9S6dWtdeumlat26tbp06aIRI0aovLxcL774ot5++211797ddC5QrTyOc+j3bejUqZNTUFBQjTlA1Zs9e7YyMjL0/vvvy+PxHHB9aGioSkpKOOWPgHXHHXeoa9euuuOOOw647sknn9SLL76oN954w0AZULU8Hs9yx3EO+q6aPKWFoOI4jmzblmVZBx07gNvddtttWrVqlZYvX246BahWDB4ElTfffFMlJSW67rrrTKcARtSsWVNxcXFKTU01nQJUKwYPgopt2/J6vQoJ4aGP4HXXXXdp8eLF+vLLL02nANWGP/URNN577z199913GjBggOkUwKjatWsrKipKaWlpplOAasOrMhE0bNtWYmIib6sPSIqKitLZZ5+tH3/8UWeccYbpHKDKcYYHQeHTTz/VsmXLNHToUNMpgF845ZRTdMcdd2j8+PGmU4BqweBBUEhNTdXo0aP/+NwsAFJsbKyefvppbdy40XQKUOUYPHC9b7/9Vm+88YZGjBhhOgXwK6effrpuuukm5ebmmk4BqhyDB66XkZGh4cOHq169eqZTAL+TmJioxx57TNu2bTOdAlQpBg9c7eeff9YLL7yg0aNHm04B/FKrVq3Us2dPPf7446ZTgCrF4IGrTZgwQbfeeqsaNWpkOgXwW16vVxMmTFBJSYnpFKDKMHjgWps3b9aTTz6p+Ph40ymAX+vQoYMuuugizZgxw3QKUGUYPHCtiRMn6tprr1Xz5s1NpwB+z7Ispaenq6yszHQKUCUYPHCloqIi5ebmKikpyXQKEBAuueQSnXHGGXrhhRdMpwBVgsEDV3riiSfUrVs3nXfeeaZTgIBhWZZSU1NVUVFhOgXwOQYPXGf37t0aP368LMsynQIElKuuukphYWH697//bToF8DkGD1znmWee0bnnnqtOnTqZTgECisfjkWVZSklJkeM4pnMAn2LwwFXKy8uVmprK2R3gON1www367bfftHjxYtMpgE8xeOAqr7zyiurXr6/LL7/cdAoQkGrUqKHExETZtm06BfApBg9cw3Ec2bYty7Lk8XhM5wAB67bbbtOqVau0fPly0ymAzzB44BoLFixQSUmJrr32WtMpQECrWbOm4uLiOMsDV2HwwDV+P7sTEsLDGjhRd911l95++22tXr3adArgExwZ4ArvvfeefvjhBw0YMMB0CuAKtWvXVnR0tNLT002nAD4RajoA8AXbtpWQkKDQUB7SgK9ERUWpdevWWrt2rc4880zTOcAJ4QwPAt6nn36qZcuWaejQoaZTAFepX7++7rjjDo0fP950CnDCGDwIeKmpqRozZowiIiJMpwCuExsbq5kzZ2rjxo2mU4ATwuBBQPv222/1xhtvaMSIEaZTAFc6/fTT1b9/f+Xm5ppOAU4IgwcBLSMjQ8OHD1fdunVNpwCulZCQoMcee0zbtm0znQIcNwYPAtbPP/+sF154QaNHjzadArhaq1at1LNnTz322GOmU4DjxuBBwJowYYIGDRqkRo0amU4BXM/r9So7O1slJSWmU4DjwuBBQNq8ebOefPJJxcfHm04BgkKHDh100UUXacaMGaZTgOPC4EFAys/P17XXXst7gwDVyLIspaenq6yszHQKcMwYPAg4RUVFys/PV1JSkukUIKhccsklOuOMM/TCCy+YTgGOGYMHAeeJJ55Q165ddd5555lOAYJOcnKyUlNTVVFRYToFOCYMHgSU3bt3KzMzU5ZlmU4BglKvXr0UFhamf//736ZTgGPC4EFAmTVrls477zx16tTJdAoQlDwejyzLUkpKihzHMZ0DHDUGDwJGeXm50tLSOLsDGHbDDTfot99+0+LFi02nAEeNwYOA8corr6h+/frq0aOH6RQgqNWoUUNJSUmybdt0CnDUGDwICI7jKCUlRZZlyePxmM4Bgt6gQYO0atUqLV++3HQKcFQYPAgICxYs0O7du3XttdeaTgEgqWbNmoqLi+MsDwIGgwcBwbZteb1ehYTwkAX8xV133aUlS5Zo9erVplOAI+LoAb/37rvv6ocfftCAAQNMpwDYS+3atRUVFaX09HTTKcARhZoOAI7Etm0lJiYqNJSHK+BvoqKi1Lp1a61du5aPeoFf4wwP/Nqnn36qgoICDR061HQKgIOoX7++7rzzTo0fP950CnBYDB74tdTUVI0ZM0bh4eGmUwAcQmxsrGbOnKmNGzeaTgEOicEDv/XNN9/ojTfe0IgRI0ynADiM0047Tf3791dOTo7pFOCQGDzwWxkZGbrnnntUt25d0ykAjiAhIUGPPfaYtm3bZjoFOCgGD/zSzz//rBdffFGjR482nQLgKLRq1UpXXXWVHnvsMdMpwEExeOCXJkyYoEGDBqlhw4amUwAcJa/Xq+zsbJWUlJhOAQ7A4IHf2bx5s5544gnFx8ebTgFwDNq3b6+LLrpI06dPN50CHIDBA7+Tn5+vf/zjH7ynBxCALMtSRkaGysrKTKcA+2DwwK8UFRUpLy9PSUlJplMAHIdLLrlEZ555pl544QXTKcA+GDzwK1OnTlW3bt3Upk0b0ykAjpNlWbJtWxUVFaZTgD8weOA3du/erfHjx8uyLNMpAE5Ar169VLNmTb3++uumU4A/MHjgN2bNmqXzzjtPnTp1Mp0C4AR4PB4lJyfLtm05jmM6B5DE4IGfKC8vV1paGmd3AJfo16+ffvvtNy1evNh0CiCJwQM/8fLLL+uUU05Rjx49TKcA8IEaNWooKSlJKSkpplMASQwe+AHHcWTbtizLksfjMZ0DwEcGDRqkL774QsuXLzedAjB4YN6CBQu0e/duXXPNNaZTAPhQzZo1FR8fL9u2TacADB6Yl5KSIq/Xq5AQHo6A2wwbNkxLlizR6tWrTacgyHGEgVHvvvuu1q5dqwEDBphOAVAFateuraioKKWlpZlOQZALNR2A4GbbthITExUaykMRcKuoqCi1bt1aa9eu5SNjYAxneGDMypUrVVBQoKFDh5pOAVCF6tevrzvvvFPjx483nYIgxuCBMampqYqNjVV4eLjpFABVLDY2VjNnztTGjRtNpyBIMXhgxDfffKMFCxbonnvuMZ0CoBqcdtpp6t+/v3JyckynIEgxeGBERkaG7rnnHtWtW9d0CoBqkpiYqMcee0zbtm0znYIgxOBBtVu/fr1efPFFjR492nQKgGp01lln6aqrrtLkyZNNpyAIMXhQ7SZMmKDbbrtNDRs2NJ0CoJp5vV5lZ2eruLjYdAqCDIMH1Wrz5s2aNm2a4uLiTKcAMKB9+/bq3LmzZsyYYToFQYbBg2qVn5+v6667jvfiAIKYZVlKT09XWVmZ6RQEEQYPqk1RUZHy8vKUlJRkOgWAQX/729/UvHlzPf/886ZTEEQYPKg2U6dO1WWXXaY2bdqYTgFgmGVZSk1NVUVFhekUBAkGD6rF7t27NX78eFmWZToFgB/o1auXatWqpddff910CoIEgwfVYubMmTr//PN10UUXmU4B4Ac8Ho8sy1JKSoocxzGdgyDA4EGVKy8vV1paGmd3AOyjX79+2rx5s/73v/+ZTkEQYPCgyr388stq
0KCBunfvbjoFgB+pUaOGvF6vbNs2nYIgwOBBlXIcR7Zty7IseTwe0zkA/Mytt96q1atXq6CgwHQKXI7Bgyr1xhtvqLS0VNdcc43pFAB+qGbNmoqLi+MsD6ocgwdVyrZteb1ehYTwUANwcMOGDdM777yj1atXm06Bi3EUQpVZunSpfvzxR918882mUwD4sdq1ays6OlppaWmmU+BioaYD4F62bSsxMVGhoTzMABzeqFGj1Lp1a61du5aPnkGV4AwPqsTKlSv10UcfaciQIaZTAASA+vXr684771RmZqbpFLgUgwdVIjU1VWPGjFF4eLjpFAABIjY2VrNmzdLGjRtNp8CFGDzwuW+++UYLFizQPffcYzoFQAA57bTTdPPNNysnJ8d0ClyIwQOfS09P14gRI1S3bl3TKQACTEJCgh577DFt27bNdApchsEDn1q/fr1eeuklxcTEmE4BEIDOOussXXXVVZo8ebLpFLgMgwc+NWHCBN12221q2LCh6RQAAcrr9So7O1vFxcWmU+AiDB74zG+//aYnn3xS8fHxplMABLD27durc+fOmj59uukUuAiDBz6Tn5+v66+/XmeccYbpFAABzrIsZWRkqKyszHQKXILBA58oKipSfn6+kpKSTKcAcIG//e1vat68uZ5//nnTKXAJBg98YurUqbrssst07rnnmk4B4BLJyclKTU1VRUWF6RS4AIMHJ2zXrl3KzMyUZVmmUwC4SM+ePVWrVi299tprplPgAgwenLBZs2apbdu2uuiii0ynAHARj8cjy7Jk27YcxzGdgwDH4MEJKS8vV1paGmd3AFSJfv36acuWLfrf//5nOgUBjsGDEzJnzhydeuqp6t69u+kUAC5Uo0YNJSUlKSUlxXQKAhyDB8fNcRzZti3LsuTxeEznAHCpW2+9VV9++aUKCgpMpyCAMXhw3N544w2VlZWpb9++plMAuFjNmjUVFxcn27ZNpyCAMXhw3GzbltfrVUgIDyMAVWvYsGF655139MUXX5hOQYDiSIXjsnTpUv3444+6+eabTacACAK1a9dWdHS00tLSTKcgQIWaDkBgsm1biYmJCg3lIQSgeowaNUqtW7fW2rVrdeaZZ5rOQYDhDA+O2SeffKKPPvpIQ4YMMZ0CIIjUr19fw4YNU2ZmpukUBCAGD45ZamqqYmNjFR4ebjoFQJCJjY3VrFmzVFhYaDoFAYbBg2OyZs0avfnmm7rnnntMpwAIQk2aNNHNN9+snJwc0ykIMAweHJOMjAyNGDFCderUMZ0CIEglJCTo8ccf17Zt20ynIIAweHDU1q9fr5deekkxMTGmUwAEsbPOOktXX321Jk+ebDoFAYTBg6OWlZWl22+/XQ0bNjSdAiDIeb1eZWdnq7i42HQKAgSDB0flt99+07Rp0xQXF2c6BQDUrl07de7cWdOnTzedggDB4MFRyc/P1/XXX68zzjjDdAoASJKSk5OVkZGh0tJS0ykIAAweHNGOHTuUn5+vpKQk0ykA8IcuXbqoRYsWev75502nIAAweHBEU6dOVffu3XXuueeaTgGAfViWpdTUVFVUVJhOgZ9j8OCwdu3apfHjx8uyLNMpAHCAnj17Kjw8XK+99prpFPg5Bg8Oa9asWWrbtq3+8pe/mE4BgAN4PB4lJycrJSVFjuOYzoEfY/DgkMrLy5WWlqbk5GTTKQBwSP369dPWrVv11ltvmU6BH2Pw4JDmzJmjU089VZdddpnpFAA4pJCQECUlJcm2bdMp8GMMHhyU4ziybVuWZcnj8ZjOAYDDuvXWW/Xll19q2bJlplPgpxg8OKj//Oc/Ki8vV9++fU2nAMAR1axZU/Hx8ZzlwSExeHBQtm3L6/UqJISHCIDAMGzYMC1dulRffPGF6RT4IY5mOMDSpUu1bt069e/f33QKABy1yMhIRUdHKy0tzXQK/FCo6QD4H9u2lZCQoNBQHh4AAsuoUaPUunVr/fDDD2revLnpHPgRzvBgH5988ok++ugjDRkyxHQKAByz+vXra9iwYcrMzDSdAj/D4ME+UlNTFRsbq/DwcNMpAHBcYmNj9cwzz6iwsNB0CvwIgwd/WLNmjd58803dc889plMA4Lg1adJEN998s3JyckynwI8wePCH9PR0jRw5UnXq1DGdAgAnJCEhQY8//ri2bt1qOgV+gsEDSdL69es1e/ZsxcTEmE4BgBN21lln6eqrr9bkyZNNp8BPMHggScrKytLtt9+uU0891XQKAPiE1+tVTk6OiouLTafADzB4oN9++03Tp09XXFyc6RQA8Jl27drpr3/9q6ZPn246BX6AwQPl5eXp+uuv1xlnnGE6BQB8yrIspaenq7S01HQKDGPwBLkdO3Zo4sSJSkxMNJ0CAD7XpUsXtWzZUs8//7zpFBjG4AlyU6dOVffu3XXuueeaTgGAKmFZllJTU1VRUWE6BQYxeILYrl27NH78eFmWZToFAKpMz549FRERoVdffdV0Cgxi8ASxmTNnql27dvrLX/5iOgUAqozH45FlWbJtW47jmM6BIQyeIFVeXq709HTO7gAICv369dPWrVv11ltvmU6BIQyeIDVnzhw1bNhQl112mekUAKhyISEh8nq9sm3bdAoMYfAEIcdxZNu2LMuSx+MxnQMA1WLgwIH68ssv9eGHH5pOgQEMniD0n//8R+Xl5erbt6/pFACoNjVr1lR8fLxSU1NNp8AABk8Qsm1bXq+XszsAgs6wYcO0dOlSffHFF6ZTUM0YPEHmnXfe0bp169S/f3/TKQBQ7SIjIxUTE6O0tDTTKahmoaYDUL1s21ZiYqJCQ/mtBxCcRo0apVatWumHH35Q8+bNTeegmnCGJ4h88skn+vjjjzV48GDTKQBgzMknn6xhw4YpMzPTdAqqEYMniKSmpio2Nlbh4eGmUwDAqNjYWD3zzDMqLCw0nYJqwuAJEmvWrNF///tf3XPPPaZTAMC4Jk2aaMCAAcrOzjadgmrC4AkS6enpGjFihOrUqWM6BQD8QkJCgqZMmaKtW7eaTkE1YPAEgfXr12v27NmKiYkxnQIAfqNly5a6+uqrNXnyZNMpqAYMniCQlZWlwYMH69RTTzWdAgB+xev1Kjs7W8XFxaZTUMUYPC63adMmTZs2TXFxcaZTAMDvtGvXThdffLGmTZtmOgVVjMHjcvn5+erXr5+aNWtmOgUA/JJlWcrIyFBpaanpFFQhBo+L7dixQ/n5+UpMTDSdAgB+q0uXLmrZsqWef/550ymoQgweF5syZYouv/xynXvuuaZTAMCvJScny7ZtVVRUmE5BFWHwuNSuXbuUlZUly7JMpwCA3/v73/+uyMhIvfrqq6ZTUEUYPC41c+ZMtWvXThdeeKHpFADwex6PR5ZlybZtOY5jOgdVgMHjQuXl5UpLS1NycrLpFAAIGP369dO2bdu0aNEi0ymoAgweF5o9e7YaNWqkbt26mU4BgIAREhKipKQk2bZtOgVVgMHjMo7jyLZtWZYlj8djOgcAAsrAgQP11VdfadmyZaZT4GMMHpf5z3/+o4qKCvXt29d0CgAEnJo1ayo+Pp6zPC7E4HG
ZlJQUzu4AwAkYNmyYli5dqlWrVplOgQ8xeFzknXfe0fr163XTTTeZTgGAgBUZGamYmBilpaWZToEPhZoOgO/Ytq3ExESFhvLbCgAnYtSoUWrVqpW+//57tWjRwnQOfIAzPC6xYsUKrVixQoMHDzadAgAB7+STT9Zdd92lzMxM0ynwEQaPS6Smpio2Nlbh4eGmUwDAFcaMGaNnn31WGzZsMJ0CH2DwuMCaNWu0cOFCDR8+3HQKALhGkyZNNGDAAOXk5JhOgQ8weFwgPT1dI0aMUJ06dUynAICrJCQk6PHHH9fWrVtNp+AEMXgC3Lp16zR79mzFxMSYTgEA12nZsqX69OmjSZMmmU7BCWLwBLisrCwNHjxYp556qukUAHAlr9ernJwcFRcXm07BCWDwBLBNmzZp+vTpiouLM50CAK7Vtm1bXXzxxZo2bZrpFJwABk8Ay8/PV79+/dSsWTPTKQDgapZlKSMjQ6WlpaZTcJwYPAFqx44dys/PV1JSkukUAHC9Ll266KyzztJzzz1nOgXHicEToKZMmaLLL79c55xzjukUAAgKlmUpNTVVFRUVplNwHBg8AWjXrl3KysqSZVmmUwAgaPz9739XZGSkXn31VdMpOA4MngA0c+ZMtW/fXhdeeKHpFAAIGh6PR8nJyUpJSZHjOKZzcIwYPAGmvLxcaWlpnN0BAAOuv/56bd++XYsWLTKdgmPE4Akws2fPVqNGjdStWzfTKQAQdEJCQpSUlCTbtk2n4BgxeAKI4ziybVuWZcnj8ZjOAYCgNHDgQH311Vf68MMPTafgGDB4Asj8+fPlOI769u1rOgUAglbNmjWVkJDAWZ4Aw+AJILZty+v1cnYHAAy788479e6772rVqlWmU3CUGDwB4p133tH69et10003mU4BgKAXGRmpmJgYpaWlmU7BUQo1HYCjY9u2kpKSFBrKbxkA+INRo0apVatW+v7779WiRQvTOTgCzvAEgBUrVmjFihUaPHiw6RQAwB4nn3yy7rrrLmVmZppOwVFg8ASA1NRUxcbGqlatWqZTAAB7GTNmjJ599llt2LDBdAqOgMHj577++mstXLhQw4cPN50CANhPkyZNNGDAAGVnZ5tOwREwePxcenq6Ro4cqTp16phOAQAcREJCgqZMmaKtW7eaTsFhMHj82Lp16zRnzhzFxMSYTgEAHELLli3Vp08fTZo0yXQKDoPB48eysrI0ePBgNWjQwHQKAOAwvF6vcnJytHPnTtMpOAQGj5/atGmTpk+frri4ONMpAIAjaNu2rbp06aJp06aZTsEhMHj8VF5enm644QY1a9bMdAoA4ChYlqXMzEyVlpaaTsFBMHj80I4dOzRx4kQlJiaaTgEAHKWLL75YZ511lp577jnTKTgIBo8fmjJlii6//HKdc845plMAAMfAsiylpqaqoqLCdAr2w+DxM7t27VJWVpYsyzKdAgA4Rn//+99Vu3Zt/etf/zKdgv0wePzM008/rfbt2+vCCy80nQIAOEYej0eWZcm2bTmOYzoHe2Hw+JHy8nKlp6dzdgcAAtj111+v7du3a9GiRaZTsBcGjx+ZPXu2GjdurG7duplOAQAcp5CQEHm9XqWkpJhOwV4YPH7CcRzZti3LsuTxeEznAABOwMCBA7VmzRp9+OGHplOwB4PHT8yfP1+O46hPnz6mUwAAJygsLEzx8fGybdt0CvZg8PgJ27bl9Xo5uwMALnHnnXfq3Xff1eeff246BWLw+IUlS5bo559/1k033WQ6BQDgI5GRkRo9erTS0tJMp0BSqOkAVJ7dSUxMVGgovx0A4CYjR45Uq1at9P3336tFixamc4IaZ3gMW7FihT755BMNHjzYdAoAwMdOPvlk3XXXXcrMzDSdEvQYPIalpqYqNjZWtWrVMp0CAKgCY8aM0TPPPKMNGzaYTglqDB6Dvv76ay1cuFDDhw83nQIAqCJNmjTRwIEDlZ2dbTolqDF4DEpPT9fIkSNVp04d0ykAgCqUkJCgKVOmaMuWLaZTghaDx5B169Zpzpw5iomJMZ0CAKhiLVq0UJ8+fTRp0iTTKUGLwWNIVlaWhgwZogYNGphOAQBUA6/Xq9zcXO3cudN0SlBi8BiwadMmTZ8+XWPHjjWdAgCoJm3btlWXLl00bdo00ylBicFjQF5enm644QY1a9bMdAoAoBpZlqWMjAyVlpaaTgk6DJ5qtn37dk2cOFGJiYmmUwAA1eziiy9Wq1at9Oyzz5pOCToMnmo2ZcoUXXHFFTrnnHNMpwAADEhOTlZaWpoqKipMpwQVBk812rVrl7KysuT1ek2nAAAMufLKK1W7dm3961//Mp0SVBg81ejpp59Whw4ddOGFF5pOAQAY4vF4ZFmWUlJS5DiO6ZygweCpJmVlZUpLS1NycrLpFACAYddff7127NihhQsXmk4JGgyeajJ79mw1adJE3bp1M50CADAsJCREXq9Xtm2bTgkaDJ5q4DiOUlNTZVmW6RQAgJ8YOHCg1qxZow8++MB0SlBg8FSD+fPny3Ec9enTx3QKAMBPhIWFKT4+nrM81YTBUw1SUlJkWZY8Ho/pFACAH7nzzjv1/vvv6/PPPzed4noMniq2ZMkS/fLLL/rnP/9pOgUA4GciIyMVExOjtLQ00ymuF2o6wO1s21ZiYqJCQ/m/GgBwoJEjR6pVq1b67rvv1LJlS9M5rsUZniq0YsUKffLJJxo8eLDpFACAnzr55JN19913KzMz03SKqzF4qpBt2xo7dqxq1aplOgUA4MfGjBmj5557Ths2bDCd4loMniry9ddfa9GiRbr77rtNpwAA/Fzjxo11yy23KDs723SKazF4qkh6erpGjhypOnXqmE4BAASAhIQETZkyRVu2bDGd4koMniqwbt06zZkzRzExMaZTAAABokWLFurbt68mTZpkOsWVGDxVYPz48RoyZIgaNGhgOgUAEECSkpKUm5urnTt3mk5xHQaPj23atEkzZszQ2LFjTacAAAJM27Zt1aVLFz355JOmU1yHweNjeXl5uuGGG9SsWTPTKQCAAGRZljIzM1VaWmo6xVUYPD60fft2TZw4UUlJSaZTAAAB6uKLL1br1q317LPPmk5xFQaPD02ZMkVXXHGFzj77bNMpAIAAZlmWUlNTVVFRYTrFNRg8PrJr1y5lZWXJ6/WaTgEABLgrr7xSJ510kubOnWs6xTUYPD7y9NNP64ILLtCFF15oOgUAEOA8Ho+Sk5Nl27YcxzGd4woMHh8oKytTWlqaLMsynQIAcIl//OMfKioq0sKFC02nuAKDxwdmz56tJk2aqFu3bqZTAAAuERISoqSkJKWkpJhOcQUGzwlyHEe2bXN2BwDgcwMHDtQ333yjDz74wHRKwGPwnKB58+ZJkvr06WO4BADgNmFhYUpISJBt26ZTAh6D5wT9fnbH4/GYTgEAuNAdd9yh999/X5999pnplIDG4DkBS5Ys0S+//KJ//vOfplMAAC4VGRmpmJgYpaWlmU4JaKGmAwKZbdtKSkpSaCj/NwIAqs7IkSPVqlUrfffdd2rZsqXpnIDEGZ7j9PHHH+uTTz
7R7bffbjoFAOByJ598su6++25lZmaaTglYDJ7jlJqaqrFjx6pWrVqmUwAAQWDMmDF69tln9csvv5hOCUgMnuPw1VdfadGiRbr77rtNpwAAgkTjxo01cOBAZWdnm04JSAye45Cenq5Ro0apTp06plMAAEEkISFBU6dO1ZYtW0ynBBwGzzH66aef9PLLLys6Otp0CgAgyLRo0UJ9+/bVxIkTTacEHAbPMcrKytKQIUPUoEED0ykAgCCUlJSk3Nxc7dy503RKQGHwHINNmzZpxowZiouLM50CAAhSbdu21SWXXKInn3zSdEpAYfAcg9zcXN14441q2rSp6RQAQBCzLEuZmZnavXu36ZSAweA5Stu3b9ekSZOUmJhoOgUAEOT++te/qnXr1nr22WdNpwQMBs9RmjJliq644gqdffbZplMAAJBlWUpLS1NFRYXplIDA4DkKu3btUlZWlizLMp0CAIAk6corr1SdOnU0d+5c0ykBgcFzFJ566ildcMEF6tixo+kUAAAkSR6PR5ZlKSUlRY7jmM7xewyeIygrK1N6ejpndwD4vcKiQqUvTde6i9fpGecZDXp5kNKXpmtj0UbTaagi//jHP7Rz507997//NZ3i9/iY7yOYPXu2mjRpom7duplOAYCDWrZumex3bM1fM1+SVFJWUnnFT9LLX7ysB/73gHq37i2rq6XOTTsbLIWvhYSEKCkpSbZtq2fPnqZz/BpneA7DcRzZtq3k5GTTKQBwUJMLJqvHUz00d/VclZSV/Dl29iguK1ZJWYnmrp6rHk/10OSCyUY6UXUGDhyob775Ru+//77pFL/G4DmMefPmyePxqHfv3qZTAOAAkwsmK35BvHaW7pQzx5Hm7neD7yWlSdouOXK0s3Sn4hfEM3pcJiwsTAkJCbJt23SKX2PwHIZt2/J6vfJ4PKZTAGAfy9Yt+2PsSJJ6S/pa0jd7blAq6VVJvSTt9TnHv4+egvUF1ZmLKnbHHXfogw8+0GeffWY6xW8xeA5hyZIl2rBhg2666SbTKQBwAPsdW8WlxX9eECmpj6TXJO2WtFjSKZIuPPBri0uLZS/hbICbREZGavTo0UpLSzOd4rcYPIeQkpKixMRE1ahRw3QKAOyjsKhQ89fMl6P9fhS5raTTJM2WtFzStQf/ekeO5q2Zx09vuczIkSM1b948fffdd6ZT/BKD5yA+/vhjrVy5UrfffrvpFAA4wIwVMw59ZV9J30nqLqneoW/mkefw94OAU69ePd19993KyMgwneKXGDwHkZqaqrFjx6pWrVqmUwDgACs3rDzgp7H+cJIqn95qePj7KC4r1qeFn/o6DYaNGTNGzz33nH755RfTKX6HwbOfr776SosWLdLw4cNNpwDAQW0t2eqT+9lcstkn9wP/0bhxY916663Kzs42neJ3GDz7SU9P16hRo3TSSSeZTgGAg6oXfpjnqo5B/fD6Prkf+Jf4+HhNnTpVW7ZsMZ3iVxg8e/npp5/08ssvKzo62nQKABxSh8YdFB4afkL3EREaofaN2vuoCP6kRYsW6tu3ryZOnGg6xa8wePaSlZWloUOHqkGDBqZTAOCQhnQccvgbxEpqdfibOHKOfD8IWF6vV7m5udq5c6fpFL/B4Nnj119/1YwZMzR27FjTKQBwWI1qN1Lv1r3l0fG9KapHHvVp3UcNax/hlc0IWOeff74uueQSPfHEE6ZT/AaDZ4+8vDzdeOONatq0qekUADgiq6uliLCI4/raiLAIWd0sHxfB31iWpczMTO3evdt0il9g8Ejavn27Jk2apMTERNMpMKiwqFDpS9N1aeal6vdiPw16eZDSl6bz5mzwS52bdlZmr0xFhkUe09dFhkUqs1emOp3eqYrK4C/++te/6uyzz9azzz5rOsUveBzHOeSVnTp1cgoK3P95K5mZmSooKNDzzz9vOgUGLFu3TPY7tuavmS9J+7y/SURohBw56t26t6yuljo37WwqEzio3z9AtLi0+MB3Xt6LRx5FhEUos1emRnQaUY2FMGnhwoUaNWqUPv/886D45ACPx7PccZyDrvmgP8Oza9cuTZgwQV6v13QKDJhcMFk9nuqhuavnqqSs5IA3cysuK1ZJWYnmrp6rHk/14FOm4XdGdBqhxUMWq1+bfgoPDVdE6L5Pc0WERig8NFz92vTT4iGLGTtB5oorrlDdunU1d+5c0ynGhZoOMO2pp57SBRdcoI4dO5pOQTX7/W/GO0t3SrskTZJ0paQOe26wS9JESVdJTlvnj0+ZlsRBA36l0+mdNOfmOdpYtFEzVszQp4WfanPJZtUPr6/2jdprSMchvEA5SHk8HlmWpUcffVQ33HCDPJ7je6G7GwT1U1plZWVq06aNZsyYoa5du5rOQTVatm6ZejzVo3Ls/G6NpJcljZJUW9LrknZIGrDv10aGRWrxkMW8BgJAQKioqFC7du2Uk5Ojnj17ms6pUjyldQgvvfSSTjvtNMZOELLfsVVcWrzvha0lnS1pvio/fPFzVX4Q436KS4tlL7GrvBEAfCEkJERer1cpKSmmU4wK2sHjOI5SU1NlWfxoZrApLCrU/DXzD/4Cz6slfS/pRUm9JNU58CaOHM1bM4+f3gIQMG655RZ99913ev/9902nGBO0g2fevHnyeDzq3bu36RRUsxkrZhz6yghVfsp0qaTzDn0zjzyHvx8A8CNhYWGKj4+XbQfv2emgHDyO4yglJUWWZQX1C7iC1coNKw/4aaw/fCJpi6SzJL156PsoLivWp4Wf+j4OAKrInXfeqQ8++ECfffaZ6RQjgnLwLFmyRIWFhfrnP/9pOgUGbC3ZevArdkh6Q9J1kq5V5Wt4fjj0/Wwu2ezzNgCoKhERERo9erRSU1NNpxgRlIPHtm0lJiYGxZsw4UD1wusd/Ip5ktpIaqnK1+70lPSqpLKD37x+eP2qyAOAKjNy5EjNnz9f3333nemUahd0g+fjjz/WypUrdfvtt5tOgSEdGndQeGj4vhd+IWmtKkfO7y5S5fBZfOB9RIRGqH2j9lXWCABVoV69err77ruVkZFhOqXaBd378PTv319dunThU9GDWGFRoZpnNz/063iOQnhouNaOWcubuQEIOBs2bFCbNm30xRdfqEmTJqZzfIr34dnjq6++0ltvvaW7777bdAoMalS7kXq37i2Pju8F6x551Kd1H8YOgIDUuHFj3XrrrZowYYLplGoVVIMnPT1do0aN0kknnWQ6BYZZXS1FhEUc+YYHEREWIasb798EIHDFx8friSee0JYtW0ynVBtXfpZWYVGhZqyYoZUbVmpryVbVC6+ns+uerTn/maM1n6wxnQc/0LlpZ2X2yvzzs7SOUmRYpDJ7ZfKxEgACWosWLXTNNdfoqaee0i3DbjngmNmhcQcN7TjUVWeyXfUanmXrlsl+x9b8NfMlaZ/XaESERqiiokJ9zukjq6ulzk07m8qEH/n9A0SLS4sP/s7Le3jkUURYhDJ7ZfLBoQBc4a0v31L6++n630//k3TgMdORo96tewfUMfNwr+FxzeDhwIXjVbC+QPYSW/PWzJNHHhWX/fkZW7//S
9+ndR9Z3SzO7ABwBbceMw83eFzxlNbkgskafe9olX5bKg3a64pcSadon8ucXEc7L9+peMVLUkD8BqJqdTq9k+bcPEcbizZqxooZ+rTwU20u2az64fXVvlF7Dek4xFWndQEEt9/Hzs7SndJKSa8d5EalknpITg9HO0t3Kn5B4B8zA37wLFu3TPEL4lXarLTy/VIqVPlS7O2SyiX9vN9lv0lqrj9+Azuf3pm/tUOS1LB2QyVcmmA6AwCqzO/HzD9eu9hhz6+9LZe0SJXvRbaHG46ZAf9TWvY7topLi6XTVTlwftlzxQ+qfMfcU/e7rL6kupX/WFxaLHtJ8H6QGgAguPxxzDyUnyX9R9I/VfnGq3sJ9GNmQA+ewqJCzV8zv/L5x1BJzfTnZx/9IOnMPb/2vqz5n1/vyNG8NfO0sWhj9UUDAGDAPsfMgymW9KKk7qo8YbCfQD9mBvTgmbFixr4XNNef42btnn8+c7/LWuz7JR55DrwfAABc5rDHOkfSK5IaSbr00DcL5GNmQA+elRtW7vvxAM1VOWp2SiqS1EDSGZJ+3HNZofY5wyNJxWXF+rTw02rpBQDAlAOOmXt7R9JGSddLh3sT+kA+Zgb04NlasnXfC86QVCLpI1We2ZGkcFU+D/nRnv88yAdcby7ZXHWRAAD4gQOOmb/7TtISSf0lHcUb0AfqMTOgB0+98Hr7XhCmyhcvv6c/B4/2/Pf3dMDZnd/VDz/ICgIAwEUOOGZKlT+9PFvS1ZJOO7r7CdRjZkAPng6NOyg8NHzfC1uo8ums/QdPkQ46eCJCI9S+UfuqSgQAwC8c9Ji5XJXHx/mSHt3v10HenyeQj5kB/U7LhUWFap7d/NDPSR6F8NBwrR2zljeWAwC4WjAcMw/3TssBfYanUe1G6t26tzyHe4XVYXjkUZ/Wffz2Nw4AAF8J9mNmQA8eSbK6WooIO4pXWR1ERFiErG6Wj4sAAPBPwXzMDPjB07lpZ2X2ylRkWOQxfV1kWKQye2UG7FtkAwBwrIL5mBnwn6Ul/flhZm785FcAAHwpWI+ZAX+G53cjOo3Q4iGL1a9NP4WHhisidN9TdhGhEQoPDVe/Nv20eMjigP+NAwDgeAXjMTOgf0rrUDYWbdSMFTP0aeGn2lyyWfXD66t9o/Ya0nFIwL7YCgCAquCmY+bhfkrLlYMHAAAEH9f+WDoAAMDRYPAAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADX8ziOc+grPZ6Nkn6ovhwAAIDj1txxnIYHu+KwgwcAAMANeEoLAAC4HoMHAAC4HoMHAAC4HoMHAAC4HoMHAAC43v8Dvi/RKKEVPu0AAAAASUVORK5CYII=",
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAjwAAAIuCAYAAAC7EdIKAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/OQEPoAAAACXBIWXMAAAsTAAALEwEAmpwYAABM8UlEQVR4nO3de5zM9eLH8fesXXZXSHIpClEpl3Ti5BRRHQrVSZ0kKZQSdpe1t/luuvy67Hdv1t5QFIrulFMdTjp0JN2skkoq3RRlJde1y16+vz9W5X6d3c/Md17Px8PjnGZmZ1/nmHzfvjM743EcRwAAAG4WYjoAAACgqjF4AACA6zF4AACA6zF4AACA6zF4AACA6zF4AACA64Ue7spTTz3VadGiRTWlAAAAHL/ly5f/6jhOw4Ndd9jB06JFCxUUFFRNFQAAgA95PJ4fDnUdT2kBAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXCzUdACA4FBYVasaKGVq5YaW2lmxVvfB66tC4g4Z2HKqGtRuazgPgch7HcQ55ZadOnZyCgoJqzAHgNsvWLZP9jq35a+ZLkkrKSv64LiI0Qo4c9W7dW1ZXS52bdjaVCcAFPB7PcsdxOh3sOp7SAlBlJhdMVo+nemju6rkqKSvZZ+xIUnFZsUrKSjR39Vz1eKqHJhdMNtIJwP14SgtAlZhcMFnxC+K1s3TnEW/ryNHO0p2KXxAvSRrRaURV5wEIMgweAD63bN2yg4+djyW9J+k3SbUknSfpSkkRlVf/Pno6n95ZnU4/6FlpADguPKUFwOfsd2wVlxbve+G7kv4rqackS9IwSVskzZRU9ufNikuLZS+xqycUQNBg8ADwqcKiQs1fM1+O9vqBiBJJb0nqLelsSTUk1Zd0kypHz8o/b+rI0bw187SxaGO1NQNwPwYPAJ+asWLGgRf+qMqzOOftd3ktVQ6gb/e92CPPwe8HAI4TgweAT63csPKAn8bSTkmRqjyzs7+T9ly/l+KyYn1a+GnVBAIISgweAD61tWTrgRdGqnLUlB/kC3bsuX4/m0s2+zYMQFBj8ADwqXrh9Q688AxV/kzoF/tdvkvS15JaHvgl9cPr+7wNQPBi8ADwqQ6NOyg8NHzfC8MldZc0X5UDp1zSZkkvSaor6YJ9bx4RGqH2jdpXfSyAoMHgAeBTQzoOOfgVXVX5njsLJNmSnpBUT9JgHfCOYI6cQ98PABwH3ngQgE81qt1IvVv31tzVc/f90XRJ+sueX4fhkUd9WvfhA0UB+BRneAD4nNXVUkRYxHF9bURYhKxulo+LAAQ7Bg8An+vctLMye2UqMuwgP351GJFhkcrslcnHSgDwOZ7SAlAlfv8A0PgF8SouLT7w6a29eORRRFiEMntl8sGhAKoEZ3gAVAnHcbTlzS269KtL1a9NP4WHhisidN+nuSJCIxQeGq5LT7lU5757rm5pfYuhWgBuxxkeAD63du1a3XbbbXr77bfVtm1bfTbjM20s2qgZK2bo08JPtblks+qH11f7Ru01pOMQLXljiW78940688wzNX36dN14442m/ycAcBkGDwCfmjRpkhITE1VcXPlp6R07dpQkNazdUAmXJhz0a5o3b67w8HBt375dt99+u5544gnNmjVLDRo0qK5sAC7HU1oAfMZxHE2ZMkXl5eWqqKiQJJ1++ulH/LpTTjlFNWpUftDWrl279NZbb2n16tVV2goguDB4APiMx+PR8uXL1aVLF4WFhalGjRpq1KjREb/ulFNOUUlJiWrVqqXQ0FCtWLFCl156aTUUAwgWDB4APrVq1Sp9/vnn+uyzz3TPPfcc1XCpW7eubrnlFr355psaNGiQHn/88WooBRBMPI5z6B8V7dSpk1NQUFCNOQACmeM4uvzyy9W/f3+NHDnyuO6jsLBQbdu21f/+9z+1bdvWx4UA3Mzj8Sx3HOegb+TFGR4APvPCCy9o69atGj58+HHfR6NGjXT//fcrOjpah/sLGQAcCwYPAJ/YsWOHEhISlJeX98cLkI/XiBEj9Ouvv2r27Nk+qgMQ7Bg8AHzi0UcfVY8ePdS1a9cTvq/Q0FDl5eUpLi5ORUVFPqgDEOwYPABO2FdffaWpU6cqPT3dZ/fZvXt3de3aVbZt++w+AQQvBg+AE+I4jkaPHi2v16vTTjvNp/edkZGhyZMna82aNT69XwDBh8ED4IS89tpr+v777xUTE+Pz+27atKkSExM1ZswYn983gODC4AFw3EpKShQbG6vc3FzVrFmzSr7HmDFj9PXXX+v111+vkvsHEBwYPACOW0ZGhjp27KiePXtW2feoVauWcnJyNGbMGJWUlFTZ9wHgbgweAMflhx9+UHZ2trKysqr8e1199dVq165dtXwvAO7E4AFwXOLi4jR69Gg1b968Wr5fVlaWxo8fr7Vr11bL9wPgLgweAMfsv//9rz766CMlJCRU2/c866yzFBUVpfj4+Gr7ngDcg8ED4Jjs3r1b0dHRys7OVkRERLV+76SkJH344YdatGhRtX5fAIGPwQPgmOTl5alFixa69tprq/17R0ZGKisrS9HR0SotLa327w8gcDF4ABy1n3/+WbZtKycnRx6Px0hDv3791LRpU02cONHI9wcQmBg8AI5aUlKShg0bpnPOOcdYg8fjUW5urh555BH98ssvxjoABBYGD4CjsnTpUi1atEjjxo0znaI2bdpo6NCh8nq9plMABAgGD4AjKi8vV1RUlDIzM3XSSSeZzpEk3XfffXrzzTf13nvvmU4BEAAYPACOaMqUKapXr55uvvlm0yl/qFu3rtLS0hQVFaXy8nLTOQD8HIMHwGH9+uuveuCBB5Sbm2vshcqHcuuttyoyMlJPPvmk6RQAfo7BA+Cwxo0bpwEDBqhDhw6mUw7g8XiUl5en++67T7/99pvpHAB+jMED4JCWL1+uuXPn6qGHHjKdckgdO3bUP//5T913332mUwD4MQYPgIOqqKhQdHS0UlJSdPLJJ5vOOayHH35Ys2fP1scff2w6BYCfYvAAOKiZM2eqvLxcQ4YMMZ1yRKeccooefvhhRUdHy3Ec0zkA/BCDB8ABtm7dKsuylJ+fr5CQwPhj4s4771RJSYmeeeYZ0ykA/FBg/EkGoFr93//9n/r27avOnTubTjlqNWrUUH5+vpKSkrRt2zbTOQD8DIMHwD4+//xzzZo1SykpKaZTjlmXLl3Uq1cvPfzww6ZTAPgZBg+APziOo5iYGN1///1q2LCh6ZzjkpqaqhkzZuiLL74wnQLAjzB4APxh9uzZ2rhxo+655x7TKce
tcePGuvfeexUTE8MLmAH8gcEDQJJUVFSkuLg45efnKzQ01HTOCRk1apR+/vlnvfLKK6ZTAPgJBg8ASZJt2+rWrZsuu+wy0yknLCwsTHl5eRo7dqx27txpOgeAH2DwANCaNWv02GOPKT093XSKz1x++eW6+OKLlZaWZjoFgB9g8ABQbGysEhMT1bRpU9MpPpWZman8/Hx9++23plMAGMbgAYLc66+/rq+++kpjxowxneJzZ5xxhuLi4hQbG2s6BYBhDB4giJWUlGjMmDHKzc1VzZo1TedUibi4OK1atUrz5883nQLAIAYPEMSysrLUvn17XXXVVaZTqkytWrWUk5Oj0aNHa9euXaZzABjC4AGC1I8//qisrCxlZWWZTqlyffr00bnnnqvs7GzTKQAMYfAAQSo+Pl5RUVFq2bKl6ZRqkZ2drYyMDK1bt850CgADGDxAEFq0aJE+/PBDJSUlmU6pNq1atdKIESOUkJBgOgWAAQweIMiUlpYqOjpaWVlZioiIMJ1TrSzL0tKlS7V48WLTKQCqGYMHCDITJ05Us2bNdP3115tOqXaRkZEaP368oqOjVVZWZjoHQDVi8ABB5JdfftGjjz6qnJwceTwe0zlG3HjjjWrYsKEmT55sOgVANWLwAEHE6/Vq6NChatOmjekUYzwej/Ly8vTQQw+psLDQdA6AasLgAYLEe++9p//+97+67777TKcYd/755+v2229XcnKy6RQA1YTBAwSB8vJyRUVFKS0tTXXq1DGd4xceeOABzZs3Tx9++KHpFADVgMEDBIEnn3xSkZGRGjhwoOkUv1G3bl2lpqYqKipKFRUVpnMAVDEGD+BymzZt0n333af8/PygfaHyoQwaNEihoaGaPn266RQAVYzBA7jcfffdp5tuukkXXHCB6RS/ExISovz8fN17773avHmz6RwAVYjBA7jYxx9/rJdfflkPPfSQ6RS/9Ze//EX9+vXTAw88YDoFQBVi8AAu5TiOoqOj9fDDD+uUU04xnePXHnnkET3//PNauXKl6RQAVYTBA7jUM888o127dumOO+4wneL3GjRooIceekhRUVFyHMd0DoAqwOABXGjbtm1KSkpSfn6+atSoYTonINx1113asWOHnn/+edMpAKoAgwdwoYceekhXXXWVLr74YtMpAaNGjRrKy8tTQkKCduzYYToHgI8xeACX+eKLL/TUU0/Jtm3TKQHn0ksv1ZVXXqlHHnnEdAoAH2PwAC7iOI5iYmI0btw4NW7c2HROQEpLS9MTTzyhL7/80nQKAB9i8AAu8sorr+iXX37RqFGjTKcErCZNmig5OVmjR4/mBcyAizB4AJfYuXOnxo4dq7y8PIWGhprOCWjR0dFau3atXn31VdMpAHyEwQO4RGpqqrp06aIePXqYTgl4YWFhysvLU2xsrIqLi03nAPABBg/gAt9++60mTZqkzMxM0ymuceWVV+qiiy5SRkaG6RQAPsDgAVwgNjZWcXFxatasmekUVxk/frxycnL0/fffm04BcIIYPECAmz9/vlatWqWxY8eaTnGdM888848xCSCwMXiAALZr1y6NHj1aOTk5qlWrlukcV4qPj9eKFSu0YMEC0ykATgCDBwhgEyZMUJs2bdSnTx/TKa4VHh6u7OxsxcTEaPfu3aZzABwnBg8QoH766SdlZmZqwoQJplNc75prrlGrVq2Um5trOgXAcWLwAAEqISFBI0aMUKtWrUynuJ7H41F2drZSU1O1fv160zkAjgODBwhAixcv1rvvvivLskynBI2zzz5bd999t5KSkkynADgODB4gwJSVlSkqKkpZWVmKjIw0nRNU7r33Xi1evFhLliwxnQLgGDF4gAAzadIkNW7cWDfccIPplKBTu3ZtZWRkKDo6WuXl5aZzABwDBg8QQAoLC/Xwww8rNzdXHo/HdE5Q6t+/v+rXr6/HH3/cdAqAY8DgAQKIZVm6/fbbdf7555tOCVoej0d5eXl68MEH9euvv5rOAXCUGDxAgPjwww81f/58PfDAA6ZTgl67du00cOBA3XvvvaZTABwlBg8QACoqKjRq1CilpaWpbt26pnMg6cEHH9Srr76qgoIC0ykAjgKDBwgA06ZNU82aNTVo0CDTKdjj5JNPVkpKiqKiolRRUWE6B8ARMHgAP7d582aNGzdO+fn5vFDZzwwePFiS9PTTTxsuAXAkDB7Az91///3q16+fLrzwQtMp2E9ISIjy8/NlWZa2bNliOgfAYTB4AD+2cuVKvfjii3rkkUdMp+AQOnXqpGuvvVYPPvig6RQAh8HgAfyU4ziKiorSQw89pAYNGpjOwWGkpKTo2Wef1WeffWY6BcAhMHgAP/Xcc8+pqKhIw4YNM52CIzj11FP1wAMPKDo6Wo7jmM4BcBAMHsAPbd++XYmJicrLy1ONGjVM5+AoDB8+XJs3b9ZLL71kOgXAQTB4AD/0yCOP6Morr9Qll1xiOgVHKTQ0VHl5eYqLi9OOHTtM5wDYD4MH8DNffvmlpk2bprS0NNMpOEbdunVT9+7dlZKSYjoFwH4YPIAfcRxHMTExSk5OVpMmTUzn4Dikp6drypQp+vrrr02nANgLgwfwI//617/0008/KSoqynQKjtPpp5+upKQkjRkzxnQKgL0weAA/UVxcrNjYWOXm5iosLMx0Dk7A6NGj9c033+j11183nQJgDwYP4CfS09PVqVMnXXnllaZTcIJq1qyp3NxcjR49WiUlJaZzAIjBA/iF77//Xnl5eRo/frzpFPhIr169dMEFFygzM9N0CgAxeAC/MHbsWMXGxurMM880nQIfysrKUnZ2ttauXWs6BQh6DB7AsAULFmjlypWKi4sznQIfa9GihaKjo/m9BfwAgwcwaPfu3YqJiVF2drbCw8NN56AKJCYmqqCgQAsXLjSdAgQ1Bg9gUE5Ojlq1aqVrrrnGdAqqSEREhCZMmKDo6GiVlpaazgGCFoMHMGT9+vVKS0tTdna26RRUsX/84x8688wzlZeXZzoFCFoMHsCQxMREDR8+XGeffbbpFFQxj8ejnJwc2batX375xXQOEJQYPIABS5Ys0dtvv63k5GTTKagm5557ru644w4lJSWZTgGCEoMHqGZlZWWKiopSZmamateubToH1WjcuHFauHCh3n33XdMpQNBh8ADV7PHHH1eDBg100003mU5BNatTp47S09MVFRWl8vJy0zlAUGHwANVo48aN+r//+z/l5ubK4/GYzoEBt9xyi0466SRNnTrVdAoQVBg8QDW69957deutt6pdu3amU2CIx+NRfn6+HnjgAW3atMl0DhA0GDxANSkoKNBrr72mBx980HQKDOvQoYP69++vcePGmU4BggaDB6gGFRUVioqKkm3bqlevnukc+IGHHnpIr7zyij766CPTKUBQYPAA1eCpp56Sx+PR7bffbjoFfqJ+/fp65JFHFBUVpYqKCtM5gOsxeIAqtmXLFiUnJysvL08hIfwrhz/dcccdKisr06xZs0ynAK7Hn75AFXvwwQd13XXXqVOnTqZT4GdCQkKUn58vr9errVu3ms4BXI3BA1Shzz
77TM8++6weffRR0ynwU3/961/Vu3dvPfTQQ6ZTAFdj8ABVxHEcRUdH68EHH9Spp55qOgd+zLZtPf3001q1apXpFMC1GDxAFXnxxRe1efNmDR8+3HQK/FyjRo103333KSYmRo7jmM4BXInBA1SBHTt2KD4+Xvn5+apRo4bpHASAkSNHasOGDZozZ47pFMCVGDxAFUhJSVGPHj3UtWtX0ykIEKGhocrPz1dcXJyKiopM5wCuw+ABfOzrr7/WlClTlJ6ebjoFAaZ79+665JJLlJqaajoFcB0GD+BDjuNo9OjR8nq9Ou2000znIABlZGRo8uTJ+uabb0ynAK7C4AF86PXXX9e3336rmJgY0ykIUM2aNVN8fLxiY2NNpwCuwuABfKSkpERjxoxRbm6uatasaToHASw2NlarV6/Wv//9b9MpgGsweAAfyczMVMeOHdWrVy/TKQhwtWrVUm5ursaMGaNdu3aZzgFcgcED+MAPP/ygCRMmaPz48aZT4BJXX321zj//fGVlZZlOAVyBwQP4QHx8vEaPHq0WLVqYToGLTJgwQZmZmfrxxx9NpwABj8EDnKCFCxdq+fLlSkhIMJ0ClznrrLM0atQoxcfHm04BAh6DBzgBpaWlio6O1oQJExQREWE6By7k9Xr1wQcf6K233jKdAgQ0Bg9wAvLy8tS8eXNdd911plPgUpGRkcrKylJ0dLRKS0tN5wABi8EDHKeff/5ZKSkpysnJkcfjMZ0DF+vXr59OO+00TZo0yXQKELAYPMBx8nq9GjZsmM455xzTKXA5j8ej3NxcPfLII9qwYYPpHCAgMXiA4/Duu+9q4cKFGjdunOkUBInzzjtPgwcPltfrNZ0CBCQGD3CMysvLFRUVpYyMDJ100kmmcxBE7r//fi1YsEDvv/++6RQg4DB4gGM0depU1alTRwMGDDCdgiBTt25dpaWlKSoqSuXl5aZzgIDC4AGOwaZNm3T//fcrLy+PFyrDiFtvvVXh4eGaNm2a6RQgoDB4gGMwbtw4DRgwQB06dDCdgiDl8XiUn5+vcePG6bfffjOdAwQMBg9wlD766CO98sor+r//+z/TKQhyHTt21I033qj77rvPdAoQMBg8wFGoqKhQVFSUHn30UdWvX990DqBHHnlEs2fP1ooVK0ynAAGBwQMchVmzZqmsrExDhw41nQJIkk455RQ9/PDDioqKkuM4pnMAvxdqOgDwF4VFhZqxYoZWbliprSVbVS+8njo07qAbz7pRXq9Xc+fOVUgIf0eA/7jzzjv1+OOP65lnntGgQYMO+Rge2nGoGtZuaDoXMMpzuL8ZdOrUySkoKKjGHKD6LVu3TPY7tuavmS9JKikr+eO6iNAIlZaVqunOpnop5iV1btrZVCZwUO+9955uib9Ff4n5yyEfw44c9W7dW1ZXi8cwXM3j8Sx3HKfTwa7jr6sIapMLJqvHUz00d/VclZSV7HOgkKTismKVqUxrI9eqx1M9NLlgspFO4FBWhK1QYZ/Cwz6GS8pKNHf1XB7DCGo8pYWgNblgsuIXxGtn6c4j3taRo52lOxW/IF6SNKLTiKrOA47o98dwcVnxEW/LYxjBjsGDoGLbtt5++2099MRDf46dXEmnSBq01w1zJfWQNE/SLZKaV16889edGtltpCJeitCQa4ZUazuwt9439NabP7yp8uv2esfl7yW9IKlEB/7pXi6prrRzTOXo6Xx6Z3U6/aBn/gFX4iktBJXLLrtM7777rlLeTlFxabG0XZUHgp8lVey50XZJv0lqIenvkl6VVLrnutckdZReK3qtWruB/YX0CVH5l+XSN3suKFXlY7WXpAck3bvXr2hJEZK6V960uLRY9hK7upMBoxg8CCqdO3dWaWmp5i2ZJ0eO9IOklpJOlfTLnhv9IKm+pLqSLpJUR9JiSSskbZJ0hTRvzTxtLNpY7f2AVPkThYs2LJL6qHKE71blY/QUSRfud+NySS9JOufP6xw5PIYRdBg8CCo1a9bUaW1OU8X3e07n/CDpzD2/ftCfl+15CkseSddJWibpP5KulVRT8sijGStmVF84sJc/HnttJZ0mabak5ap8fO7vTVWe/emz78U8hhFsGDwIOrXPrq2y78oq/2GtKsfN3oNnrSqfzvpdPVWe5amlP4ZQcVmxPi38tDpygQOs3LDyz5/G6ivpO1U+XVVvvxuuUuWZyf6Swva9iscwgg2DB0GnduvalaNmp6QiSQ0knSHpxz2XFerPMzyS9I4qX/9QW9K7f168uWRz9QQD+9lasvXPfzhJUqSk/d9X8FdJ/5J0vSqf6joIHsMIJgweBJ3m7ZpX/hTLR6o8syNJ4ao8i/PRnv/8/eOyClU5cq7b82uJKl/HI6l+OJ+pBTPqhe9/Kmc/uyW9KKmTpDaHvhmPYQQTBg+Czl/O/Is8TT3Se/pz8GjPf39Pf57dqVDlT71cqsq/PTeRdLGk16TwGuFq36h9NVYDf+rQuIPCQ8MPfYPXVXlW8spD3yQiNILHMIIKgwdBZ0jHIQppGVL5dNb+g6dIfw6eD1T5Ys9L97pNd0k7pLKCMg3pOKQ6coEDHPaxt0XSSkk/SbIlPbrfrz0cOTyGEVR440EEnUa1G+m6kddp7hVzK380/Xft9vz63d/2/NpbqOSJ8ui6NtfxYYwwplHtRurdurfmrt7zGI7d68qTJT14+K/3yKM+rfvwGEZQ4QwPgpLV1VJEWMRxfW1EWISsbpaPi4Bjw2MYODYMHgSlzk07K7NXpiLDIo/p6yLDIpXZK5O35IdxPIaBY8NTWghav394YvyCeBWXFu/79NZ+PPIoIixCmb0y+dBF+A0ew8DR4wwPglr3yO66fvP16temn8JDwxURuu9TBBGhEQoPDVe/Nv20eMhiDhTwOyM6jdDiIYsP+xgO84Tp/JDzeQwjqHGGB0Fr0aJF6t27t0JDQ1VUVKSNRRs17eNpyn8pXx3+2kENIhuofaP2GtJxCC/uhF/rdHonzbl5jjYWbdSMFTM0681ZqnVyLbVp3kbtG7XXVy99pSdyntCT65/UBbkXKCws7Mh3CrgMZ3gQdBzH0aOPPqprrrlGu3fvVkRE5d+IG9ZuqDN/PFM/5f6kG0pu0NP9nlbCpQmMHQSMhrUbanj74Vr16CptzNn4x2O49WmtJUnTpk3T3/72N23YsMFwKVD9GDwIOvfff7/uv/9+FRcX73N5eXm5kpKSJEnjxo1TWVmZiTzghGRnZ8txHK1fv16LFy+WJJWWlkqSdu/erY8//lgXXHABj28EHQYPgs6wYcN08803S5JCQ0P/OBi8+OKL2ry58rOFtm/frpkzZxprBI7Htm3blJGRofLycu3evVsJCQmSpF27dkmSQkJCdPrpp2vKlCkKDeUVDQguDB4EnebNm6tnz57q2rWrhg8frvPPP1+SdN999/1xYNi9e7fuv/9+k5nAMXv88cdVUlKiGjVqKCwsTMuWLdP777+v008/XRdffLGSk5PVrFkzXXvttaZTgWrHxEfQKS8vV1pamiZNmqQrrrjij8szMjJUWFioESNGKDs7Ww0b8todBJbev
Xurbt26euqpp9S6dWtdeumlat26tbp06aIRI0aovLxcL774ot5++211797ddC5QrTyOc+j3bejUqZNTUFBQjTlA1Zs9e7YyMjL0/vvvy+PxHHB9aGioSkpKOOWPgHXHHXeoa9euuuOOOw647sknn9SLL76oN954w0AZULU8Hs9yx3EO+q6aPKWFoOI4jmzblmVZBx07gNvddtttWrVqlZYvX246BahWDB4ElTfffFMlJSW67rrrTKcARtSsWVNxcXFKTU01nQJUKwYPgopt2/J6vQoJ4aGP4HXXXXdp8eLF+vLLL02nANWGP/URNN577z199913GjBggOkUwKjatWsrKipKaWlpplOAasOrMhE0bNtWYmIib6sPSIqKitLZZ5+tH3/8UWeccYbpHKDKcYYHQeHTTz/VsmXLNHToUNMpgF845ZRTdMcdd2j8+PGmU4BqweBBUEhNTdXo0aP/+NwsAFJsbKyefvppbdy40XQKUOUYPHC9b7/9Vm+88YZGjBhhOgXwK6effrpuuukm5ebmmk4BqhyDB66XkZGh4cOHq169eqZTAL+TmJioxx57TNu2bTOdAlQpBg9c7eeff9YLL7yg0aNHm04B/FKrVq3Us2dPPf7446ZTgCrF4IGrTZgwQbfeeqsaNWpkOgXwW16vVxMmTFBJSYnpFKDKMHjgWps3b9aTTz6p+Ph40ymAX+vQoYMuuugizZgxw3QKUGUYPHCtiRMn6tprr1Xz5s1NpwB+z7Ispaenq6yszHQKUCUYPHCloqIi5ebmKikpyXQKEBAuueQSnXHGGXrhhRdMpwBVgsEDV3riiSfUrVs3nXfeeaZTgIBhWZZSU1NVUVFhOgXwOQYPXGf37t0aP368LMsynQIElKuuukphYWH697//bToF8DkGD1znmWee0bnnnqtOnTqZTgECisfjkWVZSklJkeM4pnMAn2LwwFXKy8uVmprK2R3gON1www367bfftHjxYtMpgE8xeOAqr7zyiurXr6/LL7/cdAoQkGrUqKHExETZtm06BfApBg9cw3Ec2bYty7Lk8XhM5wAB67bbbtOqVau0fPly0ymAzzB44BoLFixQSUmJrr32WtMpQECrWbOm4uLiOMsDV2HwwDV+P7sTEsLDGjhRd911l95++22tXr3adArgExwZ4ArvvfeefvjhBw0YMMB0CuAKtWvXVnR0tNLT002nAD4RajoA8AXbtpWQkKDQUB7SgK9ERUWpdevWWrt2rc4880zTOcAJ4QwPAt6nn36qZcuWaejQoaZTAFepX7++7rjjDo0fP950CnDCGDwIeKmpqRozZowiIiJMpwCuExsbq5kzZ2rjxo2mU4ATwuBBQPv222/1xhtvaMSIEaZTAFc6/fTT1b9/f+Xm5ppOAU4IgwcBLSMjQ8OHD1fdunVNpwCulZCQoMcee0zbtm0znQIcNwYPAtbPP/+sF154QaNHjzadArhaq1at1LNnTz322GOmU4DjxuBBwJowYYIGDRqkRo0amU4BXM/r9So7O1slJSWmU4DjwuBBQNq8ebOefPJJxcfHm04BgkKHDh100UUXacaMGaZTgOPC4EFAys/P17XXXst7gwDVyLIspaenq6yszHQKcMwYPAg4RUVFys/PV1JSkukUIKhccsklOuOMM/TCCy+YTgGOGYMHAeeJJ55Q165ddd5555lOAYJOcnKyUlNTVVFRYToFOCYMHgSU3bt3KzMzU5ZlmU4BglKvXr0UFhamf//736ZTgGPC4EFAmTVrls477zx16tTJdAoQlDwejyzLUkpKihzHMZ0DHDUGDwJGeXm50tLSOLsDGHbDDTfot99+0+LFi02nAEeNwYOA8corr6h+/frq0aOH6RQgqNWoUUNJSUmybdt0CnDUGDwICI7jKCUlRZZlyePxmM4Bgt6gQYO0atUqLV++3HQKcFQYPAgICxYs0O7du3XttdeaTgEgqWbNmoqLi+MsDwIGgwcBwbZteb1ehYTwkAX8xV133aUlS5Zo9erVplOAI+LoAb/37rvv6ocfftCAAQNMpwDYS+3atRUVFaX09HTTKcARhZoOAI7Etm0lJiYqNJSHK+BvoqKi1Lp1a61du5aPeoFf4wwP/Nqnn36qgoICDR061HQKgIOoX7++7rzzTo0fP950CnBYDB74tdTUVI0ZM0bh4eGmUwAcQmxsrGbOnKmNGzeaTgEOicEDv/XNN9/ojTfe0IgRI0ynADiM0047Tf3791dOTo7pFOCQGDzwWxkZGbrnnntUt25d0ykAjiAhIUGPPfaYtm3bZjoFOCgGD/zSzz//rBdffFGjR482nQLgKLRq1UpXXXWVHnvsMdMpwEExeOCXJkyYoEGDBqlhw4amUwAcJa/Xq+zsbJWUlJhOAQ7A4IHf2bx5s5544gnFx8ebTgFwDNq3b6+LLrpI06dPN50CHIDBA7+Tn5+vf/zjH7ynBxCALMtSRkaGysrKTKcA+2DwwK8UFRUpLy9PSUlJplMAHIdLLrlEZ555pl544QXTKcA+GDzwK1OnTlW3bt3Upk0b0ykAjpNlWbJtWxUVFaZTgD8weOA3du/erfHjx8uyLNMpAE5Ar169VLNmTb3++uumU4A/MHjgN2bNmqXzzjtPnTp1Mp0C4AR4PB4lJyfLtm05jmM6B5DE4IGfKC8vV1paGmd3AJfo16+ffvvtNy1evNh0CiCJwQM/8fLLL+uUU05Rjx49TKcA8IEaNWooKSlJKSkpplMASQwe+AHHcWTbtizLksfjMZ0DwEcGDRqkL774QsuXLzedAjB4YN6CBQu0e/duXXPNNaZTAPhQzZo1FR8fL9u2TacADB6Yl5KSIq/Xq5AQHo6A2wwbNkxLlizR6tWrTacgyHGEgVHvvvuu1q5dqwEDBphOAVAFateuraioKKWlpZlOQZALNR2A4GbbthITExUaykMRcKuoqCi1bt1aa9eu5SNjYAxneGDMypUrVVBQoKFDh5pOAVCF6tevrzvvvFPjx483nYIgxuCBMampqYqNjVV4eLjpFABVLDY2VjNnztTGjRtNpyBIMXhgxDfffKMFCxbonnvuMZ0CoBqcdtpp6t+/v3JyckynIEgxeGBERkaG7rnnHtWtW9d0CoBqkpiYqMcee0zbtm0znYIgxOBBtVu/fr1efPFFjR492nQKgGp01lln6aqrrtLkyZNNpyAIMXhQ7SZMmKDbbrtNDRs2NJ0CoJp5vV5lZ2eruLjYdAqCDIMH1Wrz5s2aNm2a4uLiTKcAMKB9+/bq3LmzZsyYYToFQYbBg2qVn5+v6667jvfiAIKYZVlKT09XWVmZ6RQEEQYPqk1RUZHy8vKUlJRkOgWAQX/729/UvHlzPf/886ZTEEQYPKg2U6dO1WWXXaY2bdqYTgFgmGVZSk1NVUVFhekUBAkGD6rF7t27NX78eFmWZToFgB/o1auXatWqpddff910CoIEgwfVYubMmTr//PN10UUXmU4B4Ac8Ho8sy1JKSoocxzGdgyDA4EGVKy8vV1paGmd3AOyjX79+2rx5s/73v/+ZTkEQYPCgyr388stq
0KCBunfvbjoFgB+pUaOGvF6vbNs2nYIgwOBBlXIcR7Zty7IseTwe0zkA/Mytt96q1atXq6CgwHQKXI7Bgyr1xhtvqLS0VNdcc43pFAB+qGbNmoqLi+MsD6ocgwdVyrZteb1ehYTwUANwcMOGDdM777yj1atXm06Bi3EUQpVZunSpfvzxR918882mUwD4sdq1ays6OlppaWmmU+BioaYD4F62bSsxMVGhoTzMABzeqFGj1Lp1a61du5aPnkGV4AwPqsTKlSv10UcfaciQIaZTAASA+vXr684771RmZqbpFLgUgwdVIjU1VWPGjFF4eLjpFAABIjY2VrNmzdLGjRtNp8CFGDzwuW+++UYLFizQPffcYzoFQAA57bTTdPPNNysnJ8d0ClyIwQOfS09P14gRI1S3bl3TKQACTEJCgh577DFt27bNdApchsEDn1q/fr1eeuklxcTEmE4BEIDOOussXXXVVZo8ebLpFLgMgwc+NWHCBN12221q2LCh6RQAAcrr9So7O1vFxcWmU+AiDB74zG+//aYnn3xS8fHxplMABLD27durc+fOmj59uukUuAiDBz6Tn5+v66+/XmeccYbpFAABzrIsZWRkqKyszHQKXILBA58oKipSfn6+kpKSTKcAcIG//e1vat68uZ5//nnTKXAJBg98YurUqbrssst07rnnmk4B4BLJyclKTU1VRUWF6RS4AIMHJ2zXrl3KzMyUZVmmUwC4SM+ePVWrVi299tprplPgAgwenLBZs2apbdu2uuiii0ynAHARj8cjy7Jk27YcxzGdgwDH4MEJKS8vV1paGmd3AFSJfv36acuWLfrf//5nOgUBjsGDEzJnzhydeuqp6t69u+kUAC5Uo0YNJSUlKSUlxXQKAhyDB8fNcRzZti3LsuTxeEznAHCpW2+9VV9++aUKCgpMpyCAMXhw3N544w2VlZWpb9++plMAuFjNmjUVFxcn27ZNpyCAMXhw3GzbltfrVUgIDyMAVWvYsGF655139MUXX5hOQYDiSIXjsnTpUv3444+6+eabTacACAK1a9dWdHS00tLSTKcgQIWaDkBgsm1biYmJCg3lIQSgeowaNUqtW7fW2rVrdeaZZ5rOQYDhDA+O2SeffKKPPvpIQ4YMMZ0CIIjUr19fw4YNU2ZmpukUBCAGD45ZamqqYmNjFR4ebjoFQJCJjY3VrFmzVFhYaDoFAYbBg2OyZs0avfnmm7rnnntMpwAIQk2aNNHNN9+snJwc0ykIMAweHJOMjAyNGDFCderUMZ0CIEglJCTo8ccf17Zt20ynIIAweHDU1q9fr5deekkxMTGmUwAEsbPOOktXX321Jk+ebDoFAYTBg6OWlZWl22+/XQ0bNjSdAiDIeb1eZWdnq7i42HQKAgSDB0flt99+07Rp0xQXF2c6BQDUrl07de7cWdOnTzedggDB4MFRyc/P1/XXX68zzjjDdAoASJKSk5OVkZGh0tJS0ykIAAweHNGOHTuUn5+vpKQk0ykA8IcuXbqoRYsWev75502nIAAweHBEU6dOVffu3XXuueeaTgGAfViWpdTUVFVUVJhOgZ9j8OCwdu3apfHjx8uyLNMpAHCAnj17Kjw8XK+99prpFPg5Bg8Oa9asWWrbtq3+8pe/mE4BgAN4PB4lJycrJSVFjuOYzoEfY/DgkMrLy5WWlqbk5GTTKQBwSP369dPWrVv11ltvmU6BH2Pw4JDmzJmjU089VZdddpnpFAA4pJCQECUlJcm2bdMp8GMMHhyU4ziybVuWZcnj8ZjOAYDDuvXWW/Xll19q2bJlplPgpxg8OKj//Oc/Ki8vV9++fU2nAMAR1axZU/Hx8ZzlwSExeHBQtm3L6/UqJISHCIDAMGzYMC1dulRffPGF6RT4IY5mOMDSpUu1bt069e/f33QKABy1yMhIRUdHKy0tzXQK/FCo6QD4H9u2lZCQoNBQHh4AAsuoUaPUunVr/fDDD2revLnpHPgRzvBgH5988ok++ugjDRkyxHQKAByz+vXra9iwYcrMzDSdAj/D4ME+UlNTFRsbq/DwcNMpAHBcYmNj9cwzz6iwsNB0CvwIgwd/WLNmjd58803dc889plMA4Lg1adJEN998s3JyckynwI8wePCH9PR0jRw5UnXq1DGdAgAnJCEhQY8//ri2bt1qOgV+gsEDSdL69es1e/ZsxcTEmE4BgBN21lln6eqrr9bkyZNNp8BPMHggScrKytLtt9+uU0891XQKAPiE1+tVTk6OiouLTafADzB4oN9++03Tp09XXFyc6RQA8Jl27drpr3/9q6ZPn246BX6AwQPl5eXp+uuv1xlnnGE6BQB8yrIspaenq7S01HQKDGPwBLkdO3Zo4sSJSkxMNJ0CAD7XpUsXtWzZUs8//7zpFBjG4AlyU6dOVffu3XXuueeaTgGAKmFZllJTU1VRUWE6BQYxeILYrl27NH78eFmWZToFAKpMz549FRERoVdffdV0Cgxi8ASxmTNnql27dvrLX/5iOgUAqozH45FlWbJtW47jmM6BIQyeIFVeXq709HTO7gAICv369dPWrVv11ltvmU6BIQyeIDVnzhw1bNhQl112mekUAKhyISEh8nq9sm3bdAoMYfAEIcdxZNu2LMuSx+MxnQMA1WLgwIH68ssv9eGHH5pOgQEMniD0n//8R+Xl5erbt6/pFACoNjVr1lR8fLxSU1NNp8AABk8Qsm1bXq+XszsAgs6wYcO0dOlSffHFF6ZTUM0YPEHmnXfe0bp169S/f3/TKQBQ7SIjIxUTE6O0tDTTKahmoaYDUL1s21ZiYqJCQ/mtBxCcRo0apVatWumHH35Q8+bNTeegmnCGJ4h88skn+vjjjzV48GDTKQBgzMknn6xhw4YpMzPTdAqqEYMniKSmpio2Nlbh4eGmUwDAqNjYWD3zzDMqLCw0nYJqwuAJEmvWrNF///tf3XPPPaZTAMC4Jk2aaMCAAcrOzjadgmrC4AkS6enpGjFihOrUqWM6BQD8QkJCgqZMmaKtW7eaTkE1YPAEgfXr12v27NmKiYkxnQIAfqNly5a6+uqrNXnyZNMpqAYMniCQlZWlwYMH69RTTzWdAgB+xev1Kjs7W8XFxaZTUMUYPC63adMmTZs2TXFxcaZTAMDvtGvXThdffLGmTZtmOgVVjMHjcvn5+erXr5+aNWtmOgUA/JJlWcrIyFBpaanpFFQhBo+L7dixQ/n5+UpMTDSdAgB+q0uXLmrZsqWef/550ymoQgweF5syZYouv/xynXvuuaZTAMCvJScny7ZtVVRUmE5BFWHwuNSuXbuUlZUly7JMpwCA3/v73/+uyMhIvfrqq6ZTUEUYPC41c+ZMtWvXThdeeKHpFADwex6PR5ZlybZtOY5jOgdVgMHjQuXl5UpLS1NycrLpFAAIGP369dO2bdu0aNEi0ymoAgweF5o9e7YaNWqkbt26mU4BgIAREhKipKQk2bZtOgVVgMHjMo7jyLZtWZYlj8djOgcAAsrAgQP11VdfadmyZaZT4GMMHpf5z3/+o4qKCvXt29d0CgAEnJo1ayo+Pp6zPC7E4HG
ZlJQUzu4AwAkYNmyYli5dqlWrVplOgQ8xeFzknXfe0fr163XTTTeZTgGAgBUZGamYmBilpaWZToEPhZoOgO/Ytq3ExESFhvLbCgAnYtSoUWrVqpW+//57tWjRwnQOfIAzPC6xYsUKrVixQoMHDzadAgAB7+STT9Zdd92lzMxM0ynwEQaPS6Smpio2Nlbh4eGmUwDAFcaMGaNnn31WGzZsMJ0CH2DwuMCaNWu0cOFCDR8+3HQKALhGkyZNNGDAAOXk5JhOgQ8weFwgPT1dI0aMUJ06dUynAICrJCQk6PHHH9fWrVtNp+AEMXgC3Lp16zR79mzFxMSYTgEA12nZsqX69OmjSZMmmU7BCWLwBLisrCwNHjxYp556qukUAHAlr9ernJwcFRcXm07BCWDwBLBNmzZp+vTpiouLM50CAK7Vtm1bXXzxxZo2bZrpFJwABk8Ay8/PV79+/dSsWTPTKQDgapZlKSMjQ6WlpaZTcJwYPAFqx44dys/PV1JSkukUAHC9Ll266KyzztJzzz1nOgXHicEToKZMmaLLL79c55xzjukUAAgKlmUpNTVVFRUVplNwHBg8AWjXrl3KysqSZVmmUwAgaPz9739XZGSkXn31VdMpOA4MngA0c+ZMtW/fXhdeeKHpFAAIGh6PR8nJyUpJSZHjOKZzcIwYPAGmvLxcaWlpnN0BAAOuv/56bd++XYsWLTKdgmPE4Akws2fPVqNGjdStWzfTKQAQdEJCQpSUlCTbtk2n4BgxeAKI4ziybVuWZcnj8ZjOAYCgNHDgQH311Vf68MMPTafgGDB4Asj8+fPlOI769u1rOgUAglbNmjWVkJDAWZ4Aw+AJILZty+v1cnYHAAy788479e6772rVqlWmU3CUGDwB4p133tH69et10003mU4BgKAXGRmpmJgYpaWlmU7BUQo1HYCjY9u2kpKSFBrKbxkA+INRo0apVatW+v7779WiRQvTOTgCzvAEgBUrVmjFihUaPHiw6RQAwB4nn3yy7rrrLmVmZppOwVFg8ASA1NRUxcbGqlatWqZTAAB7GTNmjJ599llt2LDBdAqOgMHj577++mstXLhQw4cPN50CANhPkyZNNGDAAGVnZ5tOwREwePxcenq6Ro4cqTp16phOAQAcREJCgqZMmaKtW7eaTsFhMHj82Lp16zRnzhzFxMSYTgEAHELLli3Vp08fTZo0yXQKDoPB48eysrI0ePBgNWjQwHQKAOAwvF6vcnJytHPnTtMpOAQGj5/atGmTpk+frri4ONMpAIAjaNu2rbp06aJp06aZTsEhMHj8VF5enm644QY1a9bMdAoA4ChYlqXMzEyVlpaaTsFBMHj80I4dOzRx4kQlJiaaTgEAHKWLL75YZ511lp577jnTKTgIBo8fmjJlii6//HKdc845plMAAMfAsiylpqaqoqLCdAr2w+DxM7t27VJWVpYsyzKdAgA4Rn//+99Vu3Zt/etf/zKdgv0wePzM008/rfbt2+vCCy80nQIAOEYej0eWZcm2bTmOYzoHe2Hw+JHy8nKlp6dzdgcAAtj111+v7du3a9GiRaZTsBcGjx+ZPXu2GjdurG7duplOAQAcp5CQEHm9XqWkpJhOwV4YPH7CcRzZti3LsuTxeEznAABOwMCBA7VmzRp9+OGHplOwB4PHT8yfP1+O46hPnz6mUwAAJygsLEzx8fGybdt0CvZg8PgJ27bl9Xo5uwMALnHnnXfq3Xff1eeff246BWLw+IUlS5bo559/1k033WQ6BQDgI5GRkRo9erTS0tJMp0BSqOkAVJ7dSUxMVGgovx0A4CYjR45Uq1at9P3336tFixamc4IaZ3gMW7FihT755BMNHjzYdAoAwMdOPvlk3XXXXcrMzDSdEvQYPIalpqYqNjZWtWrVMp0CAKgCY8aM0TPPPKMNGzaYTglqDB6Dvv76ay1cuFDDhw83nQIAqCJNmjTRwIEDlZ2dbTolqDF4DEpPT9fIkSNVp04d0ykAgCqUkJCgKVOmaMuWLaZTghaDx5B169Zpzpw5iomJMZ0CAKhiLVq0UJ8+fTRp0iTTKUGLwWNIVlaWhgwZogYNGphOAQBUA6/Xq9zcXO3cudN0SlBi8BiwadMmTZ8+XWPHjjWdAgCoJm3btlWXLl00bdo00ylBicFjQF5enm644QY1a9bMdAoAoBpZlqWMjAyVlpaaTgk6DJ5qtn37dk2cOFGJiYmmUwAA1eziiy9Wq1at9Oyzz5pOCToMnmo2ZcoUXXHFFTrnnHNMpwAADEhOTlZaWpoqKipMpwQVBk812rVrl7KysuT1ek2nAAAMufLKK1W7dm3961//Mp0SVBg81ejpp59Whw4ddOGFF5pOAQAY4vF4ZFmWUlJS5DiO6ZygweCpJmVlZUpLS1NycrLpFACAYddff7127NihhQsXmk4JGgyeajJ79mw1adJE3bp1M50CADAsJCREXq9Xtm2bTgkaDJ5q4DiOUlNTZVmW6RQAgJ8YOHCg1qxZow8++MB0SlBg8FSD+fPny3Ec9enTx3QKAMBPhIWFKT4+nrM81YTBUw1SUlJkWZY8Ho/pFACAH7nzzjv1/vvv6/PPPzed4noMniq2ZMkS/fLLL/rnP/9pOgUA4GciIyMVExOjtLQ00ymuF2o6wO1s21ZiYqJCQ/m/GgBwoJEjR6pVq1b67rvv1LJlS9M5rsUZniq0YsUKffLJJxo8eLDpFACAnzr55JN19913KzMz03SKqzF4qpBt2xo7dqxq1aplOgUA4MfGjBmj5557Ths2bDCd4loMniry9ddfa9GiRbr77rtNpwAA/Fzjxo11yy23KDs723SKazF4qkh6erpGjhypOnXqmE4BAASAhIQETZkyRVu2bDGd4koMniqwbt06zZkzRzExMaZTAAABokWLFurbt68mTZpkOsWVGDxVYPz48RoyZIgaNGhgOgUAEECSkpKUm5urnTt3mk5xHQaPj23atEkzZszQ2LFjTacAAAJM27Zt1aVLFz355JOmU1yHweNjeXl5uuGGG9SsWTPTKQCAAGRZljIzM1VaWmo6xVUYPD60fft2TZw4UUlJSaZTAAAB6uKLL1br1q317LPPmk5xFQaPD02ZMkVXXHGFzj77bNMpAIAAZlmWUlNTVVFRYTrFNRg8PrJr1y5lZWXJ6/WaTgEABLgrr7xSJ510kubOnWs6xTUYPD7y9NNP64ILLtCFF15oOgUAEOA8Ho+Sk5Nl27YcxzGd4woMHh8oKytTWlqaLMsynQIAcIl//OMfKioq0sKFC02nuAKDxwdmz56tJk2aqFu3bqZTAAAuERISoqSkJKWkpJhOcQUGzwlyHEe2bXN2BwDgcwMHDtQ333yjDz74wHRKwGPwnKB58+ZJkvr06WO4BADgNmFhYUpISJBt26ZTAh6D5wT9fnbH4/GYTgEAuNAdd9yh999/X5999pnplIDG4DkBS5Ys0S+//KJ//vOfplMAAC4VGRmpmJgYpaWlmU4JaKGmAwKZbdtKSkpSaCj/NwIAqs7IkSPVqlUrfffdd2rZsqXpnIDEGZ7j9PHHH+uTTz
7R7bffbjoFAOByJ598su6++25lZmaaTglYDJ7jlJqaqrFjx6pWrVqmUwAAQWDMmDF69tln9csvv5hOCUgMnuPw1VdfadGiRbr77rtNpwAAgkTjxo01cOBAZWdnm04JSAye45Cenq5Ro0apTp06plMAAEEkISFBU6dO1ZYtW0ynBBwGzzH66aef9PLLLys6Otp0CgAgyLRo0UJ9+/bVxIkTTacEHAbPMcrKytKQIUPUoEED0ykAgCCUlJSk3Nxc7dy503RKQGHwHINNmzZpxowZiouLM50CAAhSbdu21SWXXKInn3zSdEpAYfAcg9zcXN14441q2rSp6RQAQBCzLEuZmZnavXu36ZSAweA5Stu3b9ekSZOUmJhoOgUAEOT++te/qnXr1nr22WdNpwQMBs9RmjJliq644gqdffbZplMAAJBlWUpLS1NFRYXplIDA4DkKu3btUlZWlizLMp0CAIAk6corr1SdOnU0d+5c0ykBgcFzFJ566ildcMEF6tixo+kUAAAkSR6PR5ZlKSUlRY7jmM7xewyeIygrK1N6ejpndwD4vcKiQqUvTde6i9fpGecZDXp5kNKXpmtj0UbTaagi//jHP7Rz507997//NZ3i9/iY7yOYPXu2mjRpom7duplOAYCDWrZumex3bM1fM1+SVFJWUnnFT9LLX7ysB/73gHq37i2rq6XOTTsbLIWvhYSEKCkpSbZtq2fPnqZz/BpneA7DcRzZtq3k5GTTKQBwUJMLJqvHUz00d/VclZSV/Dl29iguK1ZJWYnmrp6rHk/10OSCyUY6UXUGDhyob775Ru+//77pFL/G4DmMefPmyePxqHfv3qZTAOAAkwsmK35BvHaW7pQzx5Hm7neD7yWlSdouOXK0s3Sn4hfEM3pcJiwsTAkJCbJt23SKX2PwHIZt2/J6vfJ4PKZTAGAfy9Yt+2PsSJJ6S/pa0jd7blAq6VVJvSTt9TnHv4+egvUF1ZmLKnbHHXfogw8+0GeffWY6xW8xeA5hyZIl2rBhg2666SbTKQBwAPsdW8WlxX9eECmpj6TXJO2WtFjSKZIuPPBri0uLZS/hbICbREZGavTo0UpLSzOd4rcYPIeQkpKixMRE1ahRw3QKAOyjsKhQ89fMl6P9fhS5raTTJM2WtFzStQf/ekeO5q2Zx09vuczIkSM1b948fffdd6ZT/BKD5yA+/vhjrVy5UrfffrvpFAA4wIwVMw59ZV9J30nqLqneoW/mkefw94OAU69ePd19993KyMgwneKXGDwHkZqaqrFjx6pWrVqmUwDgACs3rDzgp7H+cJIqn95qePj7KC4r1qeFn/o6DYaNGTNGzz33nH755RfTKX6HwbOfr776SosWLdLw4cNNpwDAQW0t2eqT+9lcstkn9wP/0bhxY916663Kzs42neJ3GDz7SU9P16hRo3TSSSeZTgGAg6oXfpjnqo5B/fD6Prkf+Jf4+HhNnTpVW7ZsMZ3iVxg8e/npp5/08ssvKzo62nQKABxSh8YdFB4afkL3EREaofaN2vuoCP6kRYsW6tu3ryZOnGg6xa8wePaSlZWloUOHqkGDBqZTAOCQhnQccvgbxEpqdfibOHKOfD8IWF6vV7m5udq5c6fpFL/B4Nnj119/1YwZMzR27FjTKQBwWI1qN1Lv1r3l0fG9KapHHvVp3UcNax/hlc0IWOeff74uueQSPfHEE6ZT/AaDZ4+8vDzdeOONatq0qekUADgiq6uliLCI4/raiLAIWd0sHxfB31iWpczMTO3evdt0il9g8Ejavn27Jk2apMTERNMpMKiwqFDpS9N1aeal6vdiPw16eZDSl6bz5mzwS52bdlZmr0xFhkUe09dFhkUqs1emOp3eqYrK4C/++te/6uyzz9azzz5rOsUveBzHOeSVnTp1cgoK3P95K5mZmSooKNDzzz9vOgUGLFu3TPY7tuavmS9J+7y/SURohBw56t26t6yuljo37WwqEzio3z9AtLi0+MB3Xt6LRx5FhEUos1emRnQaUY2FMGnhwoUaNWqUPv/886D45ACPx7PccZyDrvmgP8Oza9cuTZgwQV6v13QKDJhcMFk9nuqhuavnqqSs5IA3cysuK1ZJWYnmrp6rHk/14FOm4XdGdBqhxUMWq1+bfgoPDVdE6L5Pc0WERig8NFz92vTT4iGLGTtB5oorrlDdunU1d+5c0ynGhZoOMO2pp57SBRdcoI4dO5pOQTX7/W/GO0t3SrskTZJ0paQOe26wS9JESVdJTlvnj0+ZlsRBA36l0+mdNOfmOdpYtFEzVszQp4WfanPJZtUPr6/2jdprSMchvEA5SHk8HlmWpUcffVQ33HCDPJ7je6G7GwT1U1plZWVq06aNZsyYoa5du5rOQTVatm6ZejzVo3Ls/G6NpJcljZJUW9LrknZIGrDv10aGRWrxkMW8BgJAQKioqFC7du2Uk5Ojnj17ms6pUjyldQgvvfSSTjvtNMZOELLfsVVcWrzvha0lnS1pvio/fPFzVX4Q436KS4tlL7GrvBEAfCEkJERer1cpKSmmU4wK2sHjOI5SU1NlWfxoZrApLCrU/DXzD/4Cz6slfS/pRUm9JNU58CaOHM1bM4+f3gIQMG655RZ99913ev/9902nGBO0g2fevHnyeDzq3bu36RRUsxkrZhz6yghVfsp0qaTzDn0zjzyHvx8A8CNhYWGKj4+XbQfv2emgHDyO4yglJUWWZQX1C7iC1coNKw/4aaw/fCJpi6SzJL156PsoLivWp4Wf+j4OAKrInXfeqQ8++ECfffaZ6RQjgnLwLFmyRIWFhfrnP/9pOgUGbC3ZevArdkh6Q9J1kq5V5Wt4fjj0/Wwu2ezzNgCoKhERERo9erRSU1NNpxgRlIPHtm0lJiYGxZsw4UD1wusd/Ip5ktpIaqnK1+70lPSqpLKD37x+eP2qyAOAKjNy5EjNnz9f3333nemUahd0g+fjjz/WypUrdfvtt5tOgSEdGndQeGj4vhd+IWmtKkfO7y5S5fBZfOB9RIRGqH2j9lXWCABVoV69err77ruVkZFhOqXaBd378PTv319dunThU9GDWGFRoZpnNz/063iOQnhouNaOWcubuQEIOBs2bFCbNm30xRdfqEmTJqZzfIr34dnjq6++0ltvvaW7777bdAoMalS7kXq37i2Pju8F6x551Kd1H8YOgIDUuHFj3XrrrZowYYLplGoVVIMnPT1do0aN0kknnWQ6BYZZXS1FhEUc+YYHEREWIasb798EIHDFx8friSee0JYtW0ynVBtXfpZWYVGhZqyYoZUbVmpryVbVC6+ns+uerTn/maM1n6wxnQc/0LlpZ2X2yvzzs7SOUmRYpDJ7ZfKxEgACWosWLXTNNdfoqaee0i3DbjngmNmhcQcN7TjUVWeyXfUanmXrlsl+x9b8NfMlaZ/XaESERqiiokJ9zukjq6ulzk07m8qEH/n9A0SLS4sP/s7Le3jkUURYhDJ7ZfLBoQBc4a0v31L6++n630//k3TgMdORo96tewfUMfNwr+FxzeDhwIXjVbC+QPYSW/PWzJNHHhWX/fkZW7//S
9+ndR9Z3SzO7ABwBbceMw83eFzxlNbkgskafe9olX5bKg3a64pcSadon8ucXEc7L9+peMVLUkD8BqJqdTq9k+bcPEcbizZqxooZ+rTwU20u2az64fXVvlF7Dek4xFWndQEEt9/Hzs7SndJKSa8d5EalknpITg9HO0t3Kn5B4B8zA37wLFu3TPEL4lXarLTy/VIqVPlS7O2SyiX9vN9lv0lqrj9+Azuf3pm/tUOS1LB2QyVcmmA6AwCqzO/HzD9eu9hhz6+9LZe0SJXvRbaHG46ZAf9TWvY7topLi6XTVTlwftlzxQ+qfMfcU/e7rL6kupX/WFxaLHtJ8H6QGgAguPxxzDyUnyX9R9I/VfnGq3sJ9GNmQA+ewqJCzV8zv/L5x1BJzfTnZx/9IOnMPb/2vqz5n1/vyNG8NfO0sWhj9UUDAGDAPsfMgymW9KKk7qo8YbCfQD9mBvTgmbFixr4XNNef42btnn8+c7/LWuz7JR55DrwfAABc5rDHOkfSK5IaSbr00DcL5GNmQA+elRtW7vvxAM1VOWp2SiqS1EDSGZJ+3HNZofY5wyNJxWXF+rTw02rpBQDAlAOOmXt7R9JGSddLh3sT+kA+Zgb04NlasnXfC86QVCLpI1We2ZGkcFU+D/nRnv88yAdcby7ZXHWRAAD4gQOOmb/7TtISSf0lHcUb0AfqMTOgB0+98Hr7XhCmyhcvv6c/B4/2/Pf3dMDZnd/VDz/ICgIAwEUOOGZKlT+9PFvS1ZJOO7r7CdRjZkAPng6NOyg8NHzfC1uo8ums/QdPkQ46eCJCI9S+UfuqSgQAwC8c9Ji5XJXHx/mSHt3v10HenyeQj5kB/U7LhUWFap7d/NDPSR6F8NBwrR2zljeWAwC4WjAcMw/3TssBfYanUe1G6t26tzyHe4XVYXjkUZ/Wffz2Nw4AAF8J9mNmQA8eSbK6WooIO4pXWR1ERFiErG6Wj4sAAPBPwXzMDPjB07lpZ2X2ylRkWOQxfV1kWKQye2UG7FtkAwBwrIL5mBnwn6Ul/flhZm785FcAAHwpWI+ZAX+G53cjOo3Q4iGL1a9NP4WHhisidN9TdhGhEQoPDVe/Nv20eMjigP+NAwDgeAXjMTOgf0rrUDYWbdSMFTP0aeGn2lyyWfXD66t9o/Ya0nFIwL7YCgCAquCmY+bhfkrLlYMHAAAEH9f+WDoAAMDRYPAAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADXY/AAAADX8ziOc+grPZ6Nkn6ovhwAAIDj1txxnIYHu+KwgwcAAMANeEoLAAC4HoMHAAC4HoMHAAC4HoMHAAC4HoMHAAC43v8Dvi/RKKEVPu0AAAAASUVORK5CYII=\n",
"text/plain": [
"