Knowledge injection methods aim to enhance the sustainability of machine learning, with potential benefits in terms of accuracy, data requirements, and interpretability.

Most existing works on knowledge injection techniques showcase standard evaluation metrics that aim to quantify these potential benefits. However, accurately quantifying sustainability, accuracy improvements, dataset needs, and interpretability in a consistent manner remains an open challenge.

## <b> Standard assessment: </b>

Given:

- an injection procedure $\mathcal{I}$,
- some symbolic knowledge $K$,
- a sub-symbolic predictor $N$

We define the “educated” predictor as:

$$\hat{N} = \mathcal{I}(K, N)$$

and the standard assessment of this predictor is given by:

$$\epsilon = \pi(\hat{N}) - \pi(N)$$

where $\pi$ is a performance score of choice, e.g., accuracy or MSE.
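
As a concrete illustration, here is a minimal sketch of this assessment in plain Python. The helper names (`standard_assessment`, `accuracy_on`) are illustrative and not part of this repository; predictors are assumed to be plain callables mapping an input to a prediction.

```python
from typing import Callable, TypeVar

Predictor = TypeVar("Predictor")


def standard_assessment(
    pi: Callable[[Predictor], float],  # performance score of choice
    educated: Predictor,               # N_hat = I(K, N)
    uneducated: Predictor,             # N
) -> float:
    """Return epsilon = pi(N_hat) - pi(N)."""
    return pi(educated) - pi(uneducated)


def accuracy_on(test_set):
    """Build an accuracy-based score pi over a test set of (input, label) pairs."""
    def pi(predictor) -> float:
        hits = sum(1 for x, y in test_set if predictor(x) == y)
        return hits / len(test_set)
    return pi
```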

## Efficiency metrics:

- **Memory footprint**, i.e., the size of the predictor under examination;

<u><i>Idea:</i></u> measure the total number of operations (FLOPs or MACs) required by the model.

#### Formulation:

$$\mu_{\Psi, K, N}(\mathcal{I}) = \Psi(N) - \Psi(\hat{N})$$

where $\hat{N} = \mathcal{I}(K, N)$ represents the educated predictor attained by injecting $K$ into $N$, and $\Psi$ is a memory footprint metric of choice (FLOPs or MACs).
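
A minimal sketch of this metric, assuming the footprint function $\Psi$ is supplied by the caller (for instance, a FLOPs/MACs counter from a profiling library of choice). The function name below is illustrative, not part of this repository.

```python
from typing import Callable, TypeVar

Predictor = TypeVar("Predictor")


def memory_footprint_gain(
    psi: Callable[[Predictor], float],  # Psi: footprint metric of choice, e.g. a FLOPs/MACs counter
    uneducated: Predictor,              # N
    educated: Predictor,                # N_hat = I(K, N)
) -> float:
    """Return mu = Psi(N) - Psi(N_hat); positive values mean the educated predictor is lighter."""
    return psi(uneducated) - psi(educated)
```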

### <b>🗂️ Data Efficiency </b>
<u><i>Intuition:</i></u> several concepts are injected → some portions of training data are not required.

<u><i>Idea:</i></u> reduce the size of the training set $D$ used by the educated predictor, letting the injected knowledge $K$ compensate for the missing data.

#### Formulation:

$$\Delta_\pi(e, N, D, T) = \frac{e}{\pi(N, T)} \sum_{d \in D} \beta(d)$$

where $e$ is the number of training epochs, $d$ is a single training sample, $\beta(d)$ is the number of bytes required for its in-memory representation, and $\pi$ is a performance score of choice.

The *data-efficiency gain* is equal to:

$$\delta_{e, K, N, D, D', T}(\mathcal{I}) = \Delta_\pi(e, N, D, T) - \Delta_\pi(e, \hat{N}, D', T)$$

where $D'$ is the (possibly smaller) training set used to train the educated predictor $\hat{N} = \mathcal{I}(K, N)$.
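
A possible implementation sketch, where `bytes_of` plays the role of $\beta$ (defaulting to `sys.getsizeof`, which is only a rough in-memory size estimate) and `pi` is the user-supplied performance score; all names are illustrative.

```python
import sys
from typing import Callable, Iterable


def data_efficiency(epochs, predictor, training_set: Iterable, test_set, pi: Callable,
                    bytes_of: Callable = sys.getsizeof) -> float:
    """Delta_pi(e, N, D, T) = e / pi(N, T) * sum of beta(d) over d in D.

    `bytes_of` stands in for beta; sys.getsizeof is only a rough default estimate.
    """
    return epochs / pi(predictor, test_set) * sum(bytes_of(d) for d in training_set)


def data_efficiency_gain(epochs, uneducated, full_set, educated, reduced_set, test_set, pi) -> float:
    """delta = Delta_pi(e, N, D, T) - Delta_pi(e, N_hat, D', T)."""
    return (data_efficiency(epochs, uneducated, full_set, test_set, pi)
            - data_efficiency(epochs, educated, reduced_set, test_set, pi))
```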

### <b>⏳ Latency </b>
<u><i>Intuition:</i></u> knowledge injection removes unnecessary computations required to draw a prediction.

<u><i>Idea:</i></u> measure the average time required to draw a single prediction over a dataset $T$.

#### Formulation:

$$\Lambda(N, T) = \frac{1}{\vert T \vert} \sum_{t \in T} \Theta(N, t)$$

where $\Theta(N, t)$ represents the time required to draw a prediction from $N$ on the input $t$.

The *latency improvement* is equal to:

$$\lambda_{K, N, T}(\mathcal{I}) = \Lambda(N, T) - \Lambda(\hat{N}, T)$$

where $\hat{N} = \mathcal{I}(K, N)$ represents the educated predictor attained by injecting $K$ into $N$.
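
A simple wall-clock sketch of these two quantities; predictors are assumed to be callables mapping a single input to a prediction, and the helper names are illustrative.

```python
import time
from statistics import mean
from typing import Callable, Iterable


def latency(predict: Callable, test_set: Iterable) -> float:
    """Lambda(N, T): average wall-clock time (seconds) needed to draw one prediction over T."""
    def theta(t) -> float:  # Theta(N, t): time taken by a single prediction
        start = time.perf_counter()
        predict(t)
        return time.perf_counter() - start
    return mean(theta(t) for t in test_set)


def latency_improvement(predict_uneducated: Callable, predict_educated: Callable, test_set) -> float:
    """lambda = Lambda(N, T) - Lambda(N_hat, T); positive means the educated predictor is faster."""
    samples = list(test_set)  # materialise once so both predictors see the same samples
    return latency(predict_uneducated, samples) - latency(predict_educated, samples)
```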

### <b>⚡ Energy Consumption </b>
<u><i>Intuition:</i></u> knowledge injection reduces learning complexity → fewer computations are required to train and run the model.

<u><i>Idea:</i></u> (i) measure the average energy consumption (mW) of a single training update over a training dataset $T$; (ii) measure the average energy consumption (mW) of a single forward pass over a test dataset $T$ composed of several samples.

#### Formulation:
- Energy consumed by $N$ on a per-inference basis:
$$\upsilon^\mathsf{i}_{\upsilon}(N, T) = \frac{1}{\vert T \vert} \sum_{t \in T} \upsilon(N, t)$$

where $\upsilon(N, t)$ measures the energy consumption of a single forward pass of $N$ on a single sample $t$.

- Energy consumed by $N$ during training:

$$\upsilon^\mathsf{t}_{\upsilon, \gamma}(e, N, T) = \frac{\gamma(e, N, T)}{e \cdot \vert T \vert} - \upsilon^\mathsf{i}_{\upsilon}(N, T)$$

where $\gamma(e, N, T)$ measures the overall energy consumed by the training phase as a whole.

- The *energy consumption improvement* is equal to:

$$\varepsilon^\mathsf{i}_{\upsilon, K, N, T}(\mathcal{I}) = \upsilon^\mathsf{i}_{\upsilon}(N, T) - \upsilon^\mathsf{i}_{\upsilon}(\mathcal{I}(K, N), T)$$

$$\varepsilon^\mathsf{t}_{\upsilon, \gamma, e, K, N, T}(\mathcal{I}) = \upsilon^\mathsf{t}_{\upsilon, \gamma}(e, N, T) - \upsilon^\mathsf{t}_{\upsilon, \gamma}(e, \mathcal{I}(K, N), T)$$
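
The sketch below mirrors these formulas, assuming the caller provides an `energy_of(predictor, sample)` probe for $\upsilon(N, t)$ (e.g. backed by hardware energy counters or an external power meter) and the total training energy $\gamma(e, N, T)$; all names are illustrative, not part of this repository.

```python
from typing import Callable, Sequence


def inference_energy(energy_of: Callable, predictor, test_set: Sequence) -> float:
    """upsilon^i(N, T): average energy of a single forward pass over T."""
    return sum(energy_of(predictor, t) for t in test_set) / len(test_set)


def training_energy(total_energy: float, epochs: int, set_size: int, per_inference: float) -> float:
    """upsilon^t(e, N, T) = gamma(e, N, T) / (e * |T|) - upsilon^i(N, T)."""
    return total_energy / (epochs * set_size) - per_inference


def inference_energy_improvement(energy_of: Callable, uneducated, educated, test_set: Sequence) -> float:
    """epsilon^i = upsilon^i(N, T) - upsilon^i(N_hat, T)."""
    return (inference_energy(energy_of, uneducated, test_set)
            - inference_energy(energy_of, educated, test_set))
```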

## Example
The following example shows how to use QoS metrics.
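
A minimal illustration, assuming the helper functions sketched in the sections above; the two predictors are trivial stand-ins, not models produced by an actual injection procedure.

```python
# Dummy stand-ins for N and N_hat = I(K, N); real predictors would come from an actual
# training/injection pipeline.
uneducated = lambda x: 0            # constant classifier
educated = lambda x: int(x > 0.5)   # a (pretend) knowledge-aware classifier

test_set = [(0.1, 0), (0.9, 1), (0.4, 0), (0.7, 1)]  # (input, label) pairs

# Standard assessment (epsilon) with accuracy as pi.
pi = accuracy_on(test_set)
print("epsilon:", standard_assessment(pi, educated, uneducated))

# Latency improvement (lambda) over the raw inputs.
samples = [x for x, _ in test_set]
print("lambda:", latency_improvement(uneducated, educated, samples))
```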
