
Commit

deploy: e9b3bdd
jeipollack committed Nov 6, 2023
1 parent 3d92e10 commit 64f38a7
Showing 5 changed files with 9 additions and 9 deletions.
Binary file modified .doctrees/configuration.doctree
Binary file not shown.
Binary file modified .doctrees/environment.pickle
Binary file not shown.
8 changes: 4 additions & 4 deletions _sources/configuration.md.txt
Original file line number Diff line number Diff line change
@@ -144,7 +144,7 @@ metrics:
The metrics key `model_save_path` enables a choice of running the metrics evaluation for a fully trained PSF model or the weights of a given checkpoint cycle.
The parameter `saved_training_cycle` specifies the cycle at which to run metrics evaluation.
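A hedged sketch of these two keys (the values shown are illustrative assumptions, not taken from this page):

```
model_save_path: psf_model    # illustrative: evaluate the fully trained model rather than a checkpoint
saved_training_cycle: 2       # illustrative: the training cycle whose saved model/weights are evaluated
```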

- As stated in the previous section, the `metrics` evaluation pipeline can be executed subsequently after the completion of the `training` routine to evaluate the trained PSF model. It can also be launched independently to compute the metrics of a previously trained model. This is done by setting the value of the parameter `trained_model_path` to the absolute path of the parent directory containing the output files of the model. This is the directory with the naming convention: `wf-outputs-timestamp` (see this {ref}`example of the run output directory<wf-outputs>`). The user must then provide as an entry for the key: `trained_model_config` the subdirectory path to the training configuration file, ex: `config/train_config.yaml`. Below we show an example of this for the case where a user wants to run metrics evaluation of a pretrained full PSF model saved in the directory `wf-outputs-202310161536`.
+ As stated in the previous section, the `metrics` evaluation pipeline can be executed subsequently after the completion of the `training` routine to evaluate the trained PSF model. It can also be launched independently to compute the metrics of a previously trained model. This is done by setting the value of the parameter `trained_model_path` to the absolute path of the parent directory containing the output files of the model. This is the directory with the naming convention: `wf-outputs-timestamp` (see this {ref}`example of the run output directory<wf-outputs>`). The user must then provide as an entry for the key: `trained_model_config` the subdirectory path to the training configuration file, e.g. `config/train_config.yaml`. Below we show an example of this for the case where a user wants to run metrics evaluation of a pretrained full PSF model saved in the directory `wf-outputs-202310161536`.

```
WaveDiff Pre-trained Model
@@ -204,11 +204,11 @@ The WaveDiff `metrics` pipeline is programmed to automatically evaluate the Poly
| Optical Path Differences Reconstruction (OPD) | `opd` | Optional | Optional |
| Weak Lensing Shape Metrics (super-res only) | `shape_sr` | Default | Optional |

- The option to generate plots of the metric evaluation results is provided by setting the value of the parameter `plotting_config` to the name of the [plotting configuration](plotting_config) file, ex: `plotting_config.yaml`. This will trigger WaveDiff's plotting pipeline to produce plots after completion of the metrics evaluation pipeline. If the field is left empty, no plots are generated.
+ The option to generate plots of the metric evaluation results is provided by setting the value of the parameter `plotting_config` to the name of the [plotting configuration](plotting_config) file, e.g. `plotting_config.yaml`. This will trigger WaveDiff's plotting pipeline to produce plots after completion of the metrics evaluation pipeline. If the field is left empty, no plots are generated.
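A minimal sketch of this setting (placement under the top-level `metrics` key is assumed here):

```
metrics:
  plotting_config: plotting_config.yaml   # leave empty to skip plot generation
```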

To compute the errors of the trained PSF model, the `metrics` package can retrieve a ground truth data set if one exists in the dataset files listed in the [data_configuration](data_config) file. If it does, WaveDiff can generate a `ground truth model` at runtime using the parameters in the metrics configuration file associated with the key: `ground_truth_model`. The parameter settings for the ground truth model are similar to those contained in the [training configuration](training_config) file. The choice of model, indicated by the key `model_name`, is currently limited to the polychromatic PSF model, referenced by the short name `poly`.
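A hedged sketch of the corresponding entry (the `model_params` nesting shown here is an assumption, mirroring the training configuration):

```
ground_truth_model:
  model_params:
    model_name: poly   # currently the only supported choice
```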

- The `metrics` package is run using [TensorFlow](https://www.tensorflow.org) to reconstruct the PSF model and to evaluate the various metrics. The `metrics_hparams` key contains a couple of usual machine learning hyperparameters such as the `batch_size` as well as additional parameters like `output_dim`, which sets the dimension of the output pixel postage stamp, etc.
+ The `metrics` package is run using [TensorFlow](https://www.tensorflow.org) to reconstruct the PSF model and to evaluate the various metrics. The `metrics_hparams` key contains some standard machine learning hyperparameters such as the `batch_size` as well as additional parameters like `output_dim`, which sets the dimension of the output pixel postage stamp, etc.
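An illustrative sketch of these hyperparameters (the values are assumptions, not defaults from this page):

```
metrics_hparams:
  batch_size: 16    # evaluation batch size
  output_dim: 32    # side length in pixels of the output postage stamp
```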

(plotting_config)=
## Plot Configuration
@@ -219,7 +219,7 @@ An example of the contents of the `plotting_config.yaml` file is shown below.

```
plotting_params:
- # Specify path to parent folder containing wf-psf metrics outputs for all runs, ex: $WORK/wf-outputs/
+ # Specify path to parent folder containing wf-psf metrics outputs for all runs, e.g. $WORK/wf-outputs/
metrics_output_path: <PATH>
# List all of the parent output directories (i.e. wf-outputs-xxxxxxxxxxx) that contain metrics results to be included in the plot
metrics_dir:
8 changes: 4 additions & 4 deletions configuration.html
@@ -259,7 +259,7 @@ <h1>Configuration<a class="headerlink" href="#configuration" title="Link to this
</div>
<p>The metrics key <code class="docutils literal notranslate"><span class="pre">model_save_path</span></code> enables a choice of running the metrics evaluation for a fully trained PSF model or the weights of a given checkpoint cycle.
The parameter <code class="docutils literal notranslate"><span class="pre">saved_training_cycle</span></code> specifies the cycle at which to run metrics evaluation.</p>
- <p>As stated in the previous section, the <code class="docutils literal notranslate"><span class="pre">metrics</span></code> evaluation pipeline can be executed subsequently after the completion of the <code class="docutils literal notranslate"><span class="pre">training</span></code> routine to evaluate the trained PSF model. It can also be launched independently to compute the metrics of a previously trained model. This is done by setting the value of the parameter <code class="docutils literal notranslate"><span class="pre">trained_model_path</span></code> to the absolute path of the parent directory containing the output files of the model. This is the directory with the naming convention: <code class="docutils literal notranslate"><span class="pre">wf-outputs-timestamp</span></code> (see this <a class="reference internal" href="basic_execution.html#wf-outputs"><span class="std std-ref">example of the run output directory</span></a>). The user must then provide as an entry for the key: <code class="docutils literal notranslate"><span class="pre">trained_model_config</span></code> the subdirectory path to the training configuration file, ex: <code class="docutils literal notranslate"><span class="pre">config/train_config.yaml</span></code>. Below we show an example of this for the case where a user wants to run metrics evaluation of a pretrained full PSF model saved in the directory <code class="docutils literal notranslate"><span class="pre">wf-outputs-202310161536</span></code>.</p>
+ <p>As stated in the previous section, the <code class="docutils literal notranslate"><span class="pre">metrics</span></code> evaluation pipeline can be executed subsequently after the completion of the <code class="docutils literal notranslate"><span class="pre">training</span></code> routine to evaluate the trained PSF model. It can also be launched independently to compute the metrics of a previously trained model. This is done by setting the value of the parameter <code class="docutils literal notranslate"><span class="pre">trained_model_path</span></code> to the absolute path of the parent directory containing the output files of the model. This is the directory with the naming convention: <code class="docutils literal notranslate"><span class="pre">wf-outputs-timestamp</span></code> (see this <a class="reference internal" href="basic_execution.html#wf-outputs"><span class="std std-ref">example of the run output directory</span></a>). The user must then provide as an entry for the key: <code class="docutils literal notranslate"><span class="pre">trained_model_config</span></code> the subdirectory path to the training configuration file, e.g. <code class="docutils literal notranslate"><span class="pre">config/train_config.yaml</span></code>. Below we show an example of this for the case where a user wants to run metrics evaluation of a pretrained full PSF model saved in the directory <code class="docutils literal notranslate"><span class="pre">wf-outputs-202310161536</span></code>.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>WaveDiff Pre-trained Model
--------------------------

@@ -338,16 +338,16 @@ <h1>Configuration<a class="headerlink" href="#configuration" title="Link to this
</tr>
</tbody>
</table>
- <p>The option to generate plots of the metric evaluation results is provided by setting the value of the parameter <code class="docutils literal notranslate"><span class="pre">plotting_config</span></code> to the name of the <a class="reference internal" href="#plotting-config"><span class="std std-ref">plotting configuration</span></a> file, ex: <code class="docutils literal notranslate"><span class="pre">plotting_config.yaml</span></code>. This will trigger WaveDiff’s plotting pipeline to produce plots after completion of the metrics evaluation pipeline. If the field is left empty, no plots are generated.</p>
+ <p>The option to generate plots of the metric evaluation results is provided by setting the value of the parameter <code class="docutils literal notranslate"><span class="pre">plotting_config</span></code> to the name of the <a class="reference internal" href="#plotting-config"><span class="std std-ref">plotting configuration</span></a> file, e.g. <code class="docutils literal notranslate"><span class="pre">plotting_config.yaml</span></code>. This will trigger WaveDiff’s plotting pipeline to produce plots after completion of the metrics evaluation pipeline. If the field is left empty, no plots are generated.</p>
<p>To compute the errors of the trained PSF model, the <code class="docutils literal notranslate"><span class="pre">metrics</span></code> package can retrieve a ground truth data set if one exists in the dataset files listed in the <a class="reference internal" href="#data-config"><span class="std std-ref">data_configuration</span></a> file. If it does, WaveDiff can generate a <code class="docutils literal notranslate"><span class="pre">ground</span> <span class="pre">truth</span> <span class="pre">model</span></code> at runtime using the parameters in the metrics configuration file associated with the key: <code class="docutils literal notranslate"><span class="pre">ground_truth_model</span></code>. The parameter settings for the ground truth model are similar to those contained in the <a class="reference internal" href="#training-config"><span class="std std-ref">training configuration</span></a> file. The choice of model, indicated by the key <code class="docutils literal notranslate"><span class="pre">model_name</span></code>, is currently limited to the polychromatic PSF model, referenced by the short name <code class="docutils literal notranslate"><span class="pre">poly</span></code>.</p>
- <p>The <code class="docutils literal notranslate"><span class="pre">metrics</span></code> package is run using <a class="reference external" href="https://www.tensorflow.org">TensorFlow</a> to reconstruct the PSF model and to evaluate the various metrics. The <code class="docutils literal notranslate"><span class="pre">metrics_hparams</span></code> key contains a couple of usual machine learning hyperparameters such as the <code class="docutils literal notranslate"><span class="pre">batch_size</span></code> as well as additional parameters like <code class="docutils literal notranslate"><span class="pre">output_dim</span></code>, which sets the dimension of the output pixel postage stamp, etc.</p>
+ <p>The <code class="docutils literal notranslate"><span class="pre">metrics</span></code> package is run using <a class="reference external" href="https://www.tensorflow.org">TensorFlow</a> to reconstruct the PSF model and to evaluate the various metrics. The <code class="docutils literal notranslate"><span class="pre">metrics_hparams</span></code> key contains some standard machine learning hyperparameters such as the <code class="docutils literal notranslate"><span class="pre">batch_size</span></code> as well as additional parameters like <code class="docutils literal notranslate"><span class="pre">output_dim</span></code>, which sets the dimension of the output pixel postage stamp, etc.</p>
</section>
<section id="plot-configuration">
<span id="plotting-config"></span><h2>Plot Configuration<a class="headerlink" href="#plot-configuration" title="Link to this heading"></a></h2>
<p>The <a class="reference external" href="https://github.com/CosmoStat/wf-psf/blob/dummy_main/config/plotting_config.yaml">plotting_config.yaml</a> file stores the configuration parameters for the WaveDiff pipeline to generate plots for the metrics listed in the <a class="reference internal" href="#metrics-settings"><span class="std std-ref">metrics settings table</span></a> for each data set.</p>
<p>An example of the contents of the <code class="docutils literal notranslate"><span class="pre">plotting_config.yaml</span></code> file is shown below.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">plotting_params</span><span class="p">:</span>
- <span class="c1"># Specify path to parent folder containing wf-psf metrics outputs for all runs, ex: $WORK/wf-outputs/</span>
+ <span class="c1"># Specify path to parent folder containing wf-psf metrics outputs for all runs, e.g. $WORK/wf-outputs/</span>
<span class="n">metrics_output_path</span><span class="p">:</span> <span class="o">&lt;</span><span class="n">PATH</span><span class="o">&gt;</span>
<span class="c1"># List all of the parent output directories (i.e. wf-outputs-xxxxxxxxxxx) that contain metrics results to be included in the plot </span>
<span class="n">metrics_dir</span><span class="p">:</span>
2 changes: 1 addition & 1 deletion searchindex.js

Large diffs are not rendered by default.
