Optimization of the nuclear fuel cycle aids in developing
a fuel cycle around one or more specific objectives,
such as minimizing the amount of a particular
resource required to support the fuel cycle.
Optimization has previously been applied to the nuclear fuel
cycle \cite{passerini_sensitivity_2012,andrianov_optimization_2019}
and to other nuclear engineering applications
\cite{chee_fluoride-salt-cooled_2022}.
In this chapter, we demonstrate a methodology for identifying
optimized fuel cycle transitions from \glspl{LWR} to select
advanced reactors, and we provide observations and
commentary about the methodology.
\section{Methodology}
We developed three different optimization problems to apply to
Scenario 7 (no growth, once-through transition to the \gls{MMR}, Xe-100,
and VOYGR): minimizing the \gls{SWU} capacity to
produce \gls{HALEU}, minimizing the mass of \gls{UNF} for disposal,
and minimizing both the \gls{HALEU} \gls{SWU} and the \gls{UNF}
mass. Both the \gls{HALEU} \gls{SWU} and the \gls{UNF} mass are important
parameters for decision makers. Minimizing the \gls{HALEU} \gls{SWU}
reduces the new enrichment infrastructure needed to support the
fuel cycle. Because the US does not have an operating geologic
repository for commercial \gls{UNF}, minimizing the \gls{UNF} mass
reduces the amount of commercial \gls{UNF} kept at reactor
sites.

To optimize these transitions, we consider six
variables: the percent of \glspl{LWR} operating for 80 years (the \gls{LWR}
lifetime), the build share
of Xe-100s, the build share of \glspl{MMR}, the build share of VOYGRs,
the discharge burnup of the Xe-100, and the discharge burnup of the
\gls{MMR}. The sensitivity analysis in Chapter \ref{ch:sa} considered
all of these variables and the transition start time. That analysis
showed that the transition start time has
very little effect on the \gls{HALEU} \gls{SWU} and the \gls{UNF} mass,
and that delaying the transition
can lead to unfulfilled energy demand. Therefore, the transition start
time is not considered in the optimization of each scenario.

For this work, the percent of \glspl{LWR} operating for 80 years
was constrained to 0--50\% of the current \gls{LWR} fleet.
The build share of each advanced reactor was allowed to range between
0\% and 100\%, but the three build shares had to sum to 100\%. The discharge
burnups were restricted to the values considered in the \gls{OAT}
sensitivity analysis (Section \ref{sec:ot_oat}), which are based
on different cycle lengths or
the number of passes pebbles make through the core.

We coupled \Cyclus \cite{huff_fundamental_2016} with Dakota
\cite{adams_dakota_2021} to perform this optimization, similar to
the coupling used for the sensitivity analysis (Chapter \ref{ch:sa}).
For the
single-objective problems, we used the ``soga'' method in
Dakota, a single-objective genetic algorithm. For the
multi-objective problem, we used the ``moga'' method in
Dakota, a multi-objective genetic algorithm. The Dakota
User's Manual \cite{adams_dakota_2021} provides guidance
on selecting the most appropriate optimization method available
in Dakota. According to this guidance, these two methods are the most
appropriate to use because the problem has bound constraints (variables
must be within a given range) and linear constraints (variables must
sum to a value). Additionally, genetic algorithms
have previously been used to optimize fuel cycle transitions
\cite{passerini_systematic_2014}, which provides confidence that
this optimization method is appropriate for this application.
More specifically, the ``moga'' method
in Dakota was found to always lead to a correct and accurate solution set
when used with five different benchmark problems \cite{chiandussi_comparison_2012}.
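
As a rough illustration of this coupling, Dakota calls a user-supplied
analysis driver once per sample, passing a parameters file and expecting
a results file in return. The sketch below outlines such a driver in
Python; the variable names, template file, and \gls{SWU} post-processing
are illustrative placeholders rather than the exact driver used in this
work.
\begin{verbatim}
#!/usr/bin/env python
"""Hypothetical Dakota analysis driver: one Cyclus run per sample.

Dakota invokes this script as:  driver.py <params_file> <results_file>
"""
import subprocess
import sys

VARIABLES = ["lwr_lifetime_pct", "xe100_share", "mmr_share",
             "voygr_share", "xe100_burnup", "mmr_burnup"]

def read_params(path):
    """Pull the design variables out of Dakota's parameters file."""
    params = {}
    with open(path) as f:
        for line in f:
            tokens = line.split()
            if len(tokens) == 2 and tokens[1] in VARIABLES:
                params[tokens[1]] = float(tokens[0])
    return params

def run_cyclus(params):
    """Fill a Cyclus input template with the sampled values and run it."""
    with open("transition_template.xml") as f:
        inp = f.read()
    for name, value in params.items():
        inp = inp.replace("{" + name + "}", str(int(value)))
    with open("transition.xml", "w") as f:
        f.write(inp)
    subprocess.run(["cyclus", "-o", "cyclus.sqlite", "transition.xml"],
                   check=True)

def haleu_swu(db_path):
    """Placeholder objective: total SWU used to produce HALEU.

    In practice this would query the Cyclus output database; a constant
    stands in here to keep the sketch short."""
    return 0.0

if __name__ == "__main__":
    params_file, results_file = sys.argv[1], sys.argv[2]
    params = read_params(params_file)
    run_cyclus(params)
    with open(results_file, "w") as f:
        f.write("%e haleu_swu\n" % haleu_swu("cyclus.sqlite"))
\end{verbatim}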

Genetic algorithms are based on the survival-of-the-fittest principle in
evolutionary biology \cite{adams_dakota_2021}. The algorithm randomly
selects an initial population of model parameters, with each set of model
parameters forming a ``genetic string'' that is analogous to the DNA string
that is unique to each member of a population \cite{adams_dakota_2021}.
Each member of a population is evaluated for its performance on a given
problem, and the best performing sets of parameters are passed to the next
generation, emulating the breeding of a population. In addition to
passing parameter sets along to the next generation, crossovers and
mutations occur. Crossovers exchange parameter values
between two members of the same population, similar to the combination of
genes from parents to a child \cite{kramer_genetic_2017}. In
a crossover, the strings of parameters are split at an arbitrary point and
the pieces are swapped to form two different offspring.
Mutations replace a parameter
value of a population member with a random value within the
parameter space. After mutation, the new population is evaluated for
its performance on the defined problem. In the single-objective algorithm,
each population is evaluated for its performance with respect to the one
objective. In the multi-objective algorithm, the various objectives
are combined into a single objective function (such as an L-2 norm).
This process of
crossover, mutation, and evaluation is repeated for each population
until a convergence criterion is met. Convergence criteria can be based on
the convergence of a solution, the number of evaluations, or both.
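
A minimal sketch of one generation of this process for a real-valued,
single-objective genetic algorithm is shown below. It is a generic
illustration of selection, single-point crossover, and uniform
replacement mutation, not the specific implementation used by Dakota.
\begin{verbatim}
import random

def next_generation(population, fitness, bounds,
                    crossover_rate=0.5, mutation_rate=0.1):
    """One generation of a simple genetic algorithm (illustrative only).

    population : list of parameter lists ("genetic strings")
    fitness    : function mapping a parameter list to a value to minimize
    bounds     : list of (low, high) tuples, one per parameter
    """
    # Selection: keep the better-performing half of the population.
    ranked = sorted(population, key=fitness)
    parents = ranked[:max(2, len(population) // 2)]

    children = []
    while len(children) < len(population):
        a, b = random.sample(parents, 2)
        # Crossover: split the two parameter strings at an arbitrary
        # point and swap the tails to form an offspring.
        if random.random() < crossover_rate:
            point = random.randrange(1, len(a))
            child = a[:point] + b[point:]
        else:
            child = list(a)
        # Mutation: replace a parameter with a random value in its range.
        for i, (low, high) in enumerate(bounds):
            if random.random() < mutation_rate:
                child[i] = random.uniform(low, high)
        children.append(child)
    return children
\end{verbatim}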

Genetic algorithms have a variety of hyperparameters (parameters that
control the exploration of the genetic algorithm), such as the
mutation rate and crossover rate, that can affect
the results of the algorithm and the speed at which a solution is found.
Specifically, the crossover and mutation rates of a genetic algorithm
control how similar each population is to the one before it, and how much
new space is considered. If the mutation rate is too low, then the
genetic algorithm can reach a local minimum instead of a global
minimum. If the mutation rate is too high, then the algorithm behaves more
like a random search and is not able to fully use the information from the
previous generation to find a minimum. If the crossover rate is too low,
each generation is too similar to the one before it and the algorithm
will converge without adequately exploring the parameter space. If the
crossover rate is too high, then the population may not have enough
diversity, which can also lead to convergence without adequately exploring the
parameter space.

Therefore, before the genetic algorithms in Dakota can be used to
optimize the fuel cycle transition, we must tune multiple hyperparameters
to determine the best combination for this work and for the exact algorithm
used by Dakota. We evaluate hyperparameters based on the optimized solution
identified with a given set of hyperparameters; the more optimized the
solution, the better the hyperparameters.
\subsection{Single-objective hyperparameter tuning}
We tuned the hyperparameters by performing a grid search across
the possible values for some parameters, a method recommended
by Deb \cite{deb_multi-objective_2001}, and a random
search over the other parameters. Chee demonstrated this methodology for
reactor design optimization \cite{chee_fluoride-salt-cooled_2022}.
We did not consider all hyperparameters or all possible values of the
hyperparameters. We downselected from
the possible hyperparameters and values defined in the Dakota Reference
Manual \cite{adams_dakota_2021} based on the hyperparameters used in
previous genetic algorithms for nuclear energy applications
\cite{passerini_systematic_2014,chee_fluoride-salt-cooled_2022},
personal intuition, and limitations on the time available to perform
the tuning. We performed the hyperparameter tuning on the single-objective
problem to minimize the \gls{SWU} capacity required to produce
\gls{HALEU}, then used the selected hyperparameters for both
single-objective problems.

The
first grid search performed 40 total iterations of different hyperparameter
values across a coarse grid. We restricted the total number of samples to
500 for each tuning case, which restricts the population size and the
number of generations, as the product of these two must equal the
total number of samples. Table \ref{tab:soga_tuning} describes
the hyperparameters considered in the tuning and the range of possible
values considered; a minimal sketch of this mixed grid and random
sampling approach follows the table. The mutation type takes discrete values, so
we applied a grid search to this hyperparameter and considered
two different mutation types. We also performed a grid
search on the population size, because of the evaluation limit,
considering the values defined in Table \ref{tab:soga_tuning}.
We used random sampling for the constraint penalty, crossover
rate, and mutation rate, using the ranges defined in Table
\ref{tab:soga_tuning} for each hyperparameter. Hyperparameters
not varied in the tuning and
held constant for all runs include the convergence type (kept at
``best fitness tracker''), the fitness type (kept at ``merit function''),
and the random seed (kept at the same value for all runs). We did
not vary the convergence type or fitness type because of computational
resource limitations.
\begin{table}[h!]
\centering
\caption{Hyperparameters and values considered in tuning. If a range
of values is provided, then a random value in
that range was selected for the hyperparameter.}
\label{tab:soga_tuning}
\begin{tabular}{c c c}
\hline
Hyperparameter & Coarse Search & Fine Search \\
\hline
Experiments [\#] & 1-40 & 41-68 \\
Population size [\#] & 5, 10, 25, 50, 100 & 50, 100\\
Constraint penalty & 0.5 $\leq$ x $\leq$ 2 & 0.5 $\leq$ x $\leq$ 2\\
Crossover rate & 0.1 $\leq$ x $\leq$ 0.9 & 0.1 $\leq$ x $\leq$ 0.9\\
Mutation type & replace uniform, offset normal & replace uniform, offset normal\\
Mutation rate & 0.01 $\leq$ x $\leq$ 0.2 & 0.01 $\leq$ x $\leq$ 0.19\\
\hline
\end{tabular}
\end{table}
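
The sketch below illustrates how such a mixed grid and random search over
the coarse-search ranges in Table \ref{tab:soga_tuning} could be organized;
the function that builds and launches a Dakota run is a placeholder, and
the number of random draws per grid point is chosen only so that the
total matches the 40 coarse tuning cases.
\begin{verbatim}
import itertools
import random

# Grid-searched hyperparameters (discrete choices).
POPULATION_SIZES = [5, 10, 25, 50, 100]
MUTATION_TYPES = ["replace_uniform", "offset_normal"]
DRAWS_PER_GRID_POINT = 4   # 5 x 2 x 4 = 40 coarse tuning cases

def run_tuning_case(hyperparameters):
    """Placeholder: write a Dakota input with these hyperparameters,
    run the optimization (max 500 evaluations), and return the best
    HALEU SWU found. A constant stands in for the real run here."""
    return 0.0

cases = []
for pop, mut_type in itertools.product(POPULATION_SIZES, MUTATION_TYPES):
    for _ in range(DRAWS_PER_GRID_POINT):
        # Randomly sampled hyperparameters (continuous ranges).
        hyperparameters = {
            "population_size": pop,
            "mutation_type": mut_type,
            "constraint_penalty": random.uniform(0.5, 2.0),
            "crossover_rate": random.uniform(0.1, 0.9),
            "mutation_rate": random.uniform(0.01, 0.2),
        }
        cases.append((run_tuning_case(hyperparameters), hyperparameters))

# Rank the tuning cases by the objective the tuned algorithm achieved.
cases.sort(key=lambda case: case[0])
best_cases = cases[:5]
\end{verbatim}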
We limited all six of the input parameters considered (the build shares,
discharge burnups, and the \gls{LWR} lifetimes) to integer
values by using the ``discrete design set'' variable definition for the
Xe-100 and \gls{MMR} burnups and the ``discrete design range'' variable
definition for the other four parameters. These variable definitions
prevent rounding errors when passing parameters between Dakota
and \Cyclus.

Figure \ref{fig:soga_coarse_tuning} shows the results of the coarse tuning
for the single-objective problem, comparing each model parameter (y-axes)
selected when using each hyperparameter value (x-axes) and the resulting
\gls{HALEU} \gls{SWU} when using that combination (the color of the data
points; darker purple is better). From these results, we observe that a
larger population size results in a lower required \gls{HALEU} \gls{SWU} capacity
and best meets the build share constraint for this work.
These two observations strongly point to the use of a larger population size.
No conclusions can be drawn from the other hyperparameters
about the range of values that should be used.
Therefore, we performed more tuning cases (the ``Fine Search'' in Table
\ref{tab:soga_tuning}) to provide more insight into how the other hyperparameters
affect the results with the larger population sizes. In the fine search, we
further limited the population size values based on the results of the
coarse tuning, but mostly left the other hyperparameter ranges the
same.
\begin{figure}[h!]
\includegraphics[scale=0.4]{soga_coarse_tuning_all_parameters.pdf}
\caption{Results of coarse tuning for the single-objective
optimization. The purple line in the bottom row (total build
share at each hyperparameter) identifies a 100\% total build share.}
\label{fig:soga_coarse_tuning}
\end{figure}
The results from the fine tuning, shown in Figure \ref{fig:soga_fine_tuning},
are not completely aligned with the results from the coarse tuning.
The coarse tuning showed that a population size of 100 consistently performed
better than the other population sizes, but the fine tuning showed that a
population size of 50 performed better. Both sets of tuning runs suggested
a population size to use, albeit different ones,
but did not show any strong correlation between
the other hyperparameters and the \gls{HALEU} \gls{SWU}. Therefore,
we compared
the hyperparameter and model parameter values in the five best-performing
tuning cases, based on the \gls{HALEU} \gls{SWU} required, shown in
Table \ref{tab:soga_tuning_results}.
\begin{figure}[h!]
\includegraphics[scale=0.4]{soga_fine_tuning_all_parameters.pdf}
\caption{Results of fine tuning for the single-objective
optimization.}
\label{fig:soga_fine_tuning}
\end{figure}
Four of the five best-performing tuning cases are able to find the
theoretical minimum for this problem, 0 kg-SWU. However, none of the
cases exactly matches the build share target of 100\%. Case 38 is the only
one that at least meets the target, exceeding it with a total
build share of 103\%, but it is also the only tuning case that does
not reach the theoretical minimum.
\begin{table}[h!]
\centering
\caption{Hyperparameter and model parameter values from the five best
performing tuning cases for the single-objective problems.}
\label{tab:soga_tuning_results}
\begin{tabular}{c c c c c c}
\hline
& \multicolumn{5}{c}{Tuning case number} \\
\textbf{Hyperparameter} & 28 & 38 & 43 & 45 & 59 \\
\hline
Mutation type & offset normal & offset normal & replace uniform &
replace uniform & offset normal\\
Population size & 25 & 100 & 50 & 50 & 50 \\
Mutation rate & 0.113 & 0.059 & 0.182 & 0.116 & 0.035\\
Crossover rate & 0.5 & 0.664 & 0.333 & 0.584 & 0.603\\
Constraint penalty & 0.648 & 1.261 & 1.072 & 1.109 & 1.031\\
\hline
\textbf{Model Parameter} \\
\gls{HALEU} \gls{SWU} (kg-SWU)& 0 & 4.37 $\times 10^7$ & 0 & 0 & 0\\
VOYGR build share (\%) & 75 & 100 & 91 & 99 & 94\\
\gls{MMR} build share (\%)& 0 & 0 & 0 & 0 & 0\\
Xe-100 build share (\%) & 0 & 3 & 0 & 0 & 0\\
LWR Lifetime (\%)& 35 & 48 & 42 & 49 & 15\\
\gls{MMR} burnup (MWd/kgU) & 86 & 74 & 74 & 41 & 90\\
Xe-100 burnup (MWd/kgU) & 56 & 185 & 112 & 168 & 151\\
\hline
\end{tabular}
\end{table}
\subsubsection{Hyperparameter selection}
Based on the information in Table \ref{tab:soga_tuning_results}, we
selected hyperparameters to use for all single-objective problems in this
work. We selected the hyperparameters from tuning case 45, given in
Table \ref{tab:soga_parameters}, because this tuning case reached the
theoretical minimum for the problem and was the closest to meeting
the total build share constraint.
\begin{table}[h!]
\centering
\caption{Hyperparameters selected for single-objective optimization.}
\label{tab:soga_parameters}
\begin{tabular}{c c}
\hline
Parameter & Value \\
\hline
Population & 50 \\
Constraint penalty & 1.109\\
Crossover rate & 0.584\\
Mutation type & replace uniform\\
Mutation rate & 0.116\\
\hline
\end{tabular}
\end{table}
The tuning cases each ran a maximum of 500 evaluations to limit the
amount of time required for each run. However, we ran each
single-objective problem for a maximum of 1000 evaluations. By
increasing the maximum number of evaluations, we allow the
algorithm to explore the input space and
find the minimum objective
function for the problem while also meeting the total build share
constraint
(assuming the convergence criterion is not met first).
\subsection{Multi-objective hyperparameter tuning}
Hyperparameter tuning for a multi-objective problem is more complicated
than tuning for a single-objective problem. In multi-objective
problems, there is no single best solution but rather many solutions
that form a Pareto front \cite{adams_dakota_2021}. The points on a
Pareto front are equally optimized, and indicate that further improvement
of one objective
results in a worsening of another objective. Therefore, to tune the
hyperparameters of a multi-objective problem, one tunes based on the
hypervolume of the Pareto front \cite{deb_multi-objective_2001}, because
a larger hypervolume indicates a better exploration of the problem space.
The
hypervolume of a Pareto front is the area (or volume, in higher dimensions)
enclosed between the Pareto front and a reference point. The reference point
must be selected such that all values on the Pareto front are contained in
the hypervolume.
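
For the two-objective (minimization) problem considered here, the
hypervolume reduces to an area that can be computed directly from the
front and the reference point, as in the short sketch below (the example
front values are illustrative; the reference point matches the one used
later in this chapter).
\begin{verbatim}
def hypervolume_2d(front, reference):
    """Area dominated by a two-objective minimization Pareto front,
    bounded by a reference point that is worse than every front point
    in both objectives."""
    ref_x, ref_y = reference
    # Sort by the first objective; on a non-dominated front the second
    # objective then decreases monotonically.
    points = sorted(front)
    area = 0.0
    for i, (x, y) in enumerate(points):
        next_x = points[i + 1][0] if i + 1 < len(points) else ref_x
        area += (next_x - x) * (ref_y - y)
    return area

# Illustrative (not actual) front points: (HALEU SWU, UNF mass) pairs.
front = [(0.0, 1.8e8), (2.5e9, 9.0e7), (5.0e9, 2.0e7)]
print(hypervolume_2d(front, reference=(6e9, 2e8)))
\end{verbatim}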

We tuned the hyperparameters based on minimizing the \gls{SWU} capacity
to produce \gls{HALEU} and minimizing the mass of \gls{UNF} in Scenario 7.
The multi-objective genetic algorithm in Dakota outputs the Pareto front
for a problem, which we then use to calculate a hypervolume with a
reference point. We selected the reference point after plotting the
Pareto fronts and choosing a point that bounds all of the points on
the fronts. We applied the same variable definitions
from the single-objective hyperparameter
tuning: bounds, linear constraints, and integer values for
all variables. We observed that when using a linear equality constraint for
the different advanced reactor build shares,
Dakota found only a single solution, as opposed to a Pareto front, because
not all of the samples adhere to the linear constraint. By replacing the
linear equality constraint with a linear
inequality constraint (i.e., defining that the total advanced reactor build share is
at least 100\% instead of equal to 100\%), Dakota found multiple
solutions to form a Pareto front.
Therefore, we used this linear inequality constraint for the tuning cases and
the final problem run to ensure that a Pareto front can be found. Accordingly,
in addition to considering the hypervolumes resulting from each hyperparameter
value, we also consider how close the total build share is to 100\%
to minimize oversupply of power.

We downselected the hyperparameters and hyperparameter values used in this
tuning because of computational resource limitations. We tuned the
population size, the mutation rate, and the crossover rate, using
specific values instead of a random
search of values.
Table \ref{tab:moga_tuning} defines the different values considered
and the default value of each hyperparameter in Dakota.
We tuned these hyperparameters because of their observed effect
in the single-objective optimization hyperparameter tuning
(population size) or intuition (mutation rate and crossover rate).
We selected these values
based on the hyperparameters used for the single-objective
problem, the default value in Dakota, or intuition. If the value of a
hyperparameter is not specified, we used the default value in Dakota
(e.g., when using a population size of 25, the mutation rate
is 0.08 and the crossover rate is 0.8).
We ran each set of hyperparameters 20 times and compared the statistics
of the hypervolumes of
the resulting Pareto fronts. From these data, we selected the
hyperparameters to use for multi-objective optimization in this work.
\begin{table}[h!]
\centering
\caption{Values of each hyperparameter considered in the
multi-objective hyperparameter tuning.}
\label{tab:moga_tuning}
\begin{tabular}{c c c}
\hline
Hyperparameter & Values & Default value\\
\hline
Population size & 25, 50, 100 & 50\\
Mutation rate & 0.08, 0.1, 0.116, 0.15 & 0.08 \\
Crossover rate & 0.3, 0.584, 0.8 & 0.8 \\
\hline
Initialization type & - & unique random \\
Mutation type & - & replace uniform \\
Crossover type & - & shuffle random \\
Fitness type & - & domination count \\
\hline
\end{tabular}
\end{table}

Figure \ref{fig:moga_crossover} shows box and whisker plots of
the hypervolumes
resulting from each value of the crossover rate. All three of the
crossover rates result in similar medians and similar statistics overall.
A crossover rate of
0.3 results in the largest median, mean, and maximum of the
hypervolumes. A crossover rate of 0.584 results in the smallest minimum
hypervolume. A crossover rate of 0.8 results in the smallest median
and mean of the hypervolumes.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.75]{moga_crossover.pdf}
\caption{Statistics of the hypervolumes resulting from each
crossover rate value considered in the MOGA Tuning.}
\label{fig:moga_crossover}
\end{figure}
Figure \ref{fig:moga_population} shows box and whisker plots of
the hypervolumes
resulting from different population sizes. The statistics vary more
across population sizes than across
crossover rates. This observation suggests that the population size has
a greater impact on the hypervolume than the crossover rate. A population
size of 50 results in the largest median and first quartile, as well
as the smallest minimum. A population size of 25 results in the largest mean
and maximum hypervolume.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.75]{moga_population.pdf}
\caption{Statistics of the hypervolumes resulting from
each population size considered in the MOGA Tuning.}
\label{fig:moga_population}
\end{figure}
Finally, Figure \ref{fig:moga_mutation} shows box and whisker plots of
the hypervolumes
resulting from different values of the mutation rate.
A mutation rate of 0.08 resulted in the largest mean and median of the
values considered, closely followed by a mutation rate of 0.1, but it
also resulted in the largest range of values.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.75]{moga_mutation.pdf}
\caption{Statistics of the hypervolumes resulting from
each mutation rate considered in the MOGA Tuning.}
\label{fig:moga_mutation}
\end{figure}
\subsubsection{Hyperparameter selection}
For the multi-objective problems, we selected the hyperparameter
values defined in Table \ref{tab:moga_parameters}. We selected
a crossover rate of 0.3 because this value resulted in the
most favorable statistics in the tuning process: the largest
median, mean, and maximum of the resulting hypervolumes. We selected
a population size of 25 because this population size resulted in
the largest mean and maximum hypervolumes in the tuning cases.
Additionally, a population size of 25 results in the highest
upper whisker (the upper quartile plus 1.5 times the interquartile
range), which suggests that when this
hyperparameter value is used again the results will continue to be
favorable. Finally, although a mutation rate of 0.08 resulted in a larger
mean and median
than a rate of 0.1, it also resulted in a smaller minimum hypervolume and
more values that are lower than those from a rate of 0.1. Therefore, we
selected a mutation rate of 0.1 because its hypervolumes were more
consistent than those from a rate of 0.08.
\begin{table}[h!]
\centering
\caption{Hyperparameter values selected for multi-objective
optimization problems based on tuning.}
\label{tab:moga_parameters}
\begin{tabular}{c c}
\hline
Hyperparameter & Value \\
\hline
Crossover rate & 0.3\\
Population size & 25\\
Mutation rate & 0.1\\
\hline
\end{tabular}
\end{table}
We applied each of the hyperparameters in Table \ref{tab:moga_parameters}
to the multi-objective problem in this work. We also increased the
maximum number of evaluations to 1500 to allow for more exploration
of the problem space.
\section{Single-objective optimization}
This section presents the results of the two single-objective
problems: minimizing the \gls{SWU} capacity required to produce
\gls{HALEU} and minimizing the \gls{UNF} mass. We used the hyperparameters
defined in Table \ref{tab:soga_parameters} and a maximum
of 1000 evaluations for both problems.
\subsection{Minimize HALEU SWU}
By applying the hyperparameters to an optimization problem to minimize the
\gls{HALEU} \gls{SWU} capacity needed by this transition scenario, Dakota
found a minimum with the values defined in Table \ref{tab:soga_ot_haleu}.
The \gls{SWU} capacity required to produce \gls{HALEU} for these input
parameters is 4.812$\times 10^7$ kg-SWU.
\begin{table}[h!]
\centering
\caption{Values resulting in a minimum \gls{HALEU} \gls{SWU} capacity for
a once-through transition scenario.}
\label{tab:soga_ot_haleu}
\begin{tabular}{c c}
\hline
Variable & Value \\
\hline
LWR Lifetime & 36\%\\
Xe-100 build share & 0\%\\
MMR build share & 2\%\\
VOYGR build share & 100\%\\
Xe-100 burnup & 151 MWd/kgU\\
MMR burnup & 90 MWd/kgU\\
\hline
HALEU SWU & 4.812 $\times 10^7$ kg-SWU\\
\hline
\end{tabular}
\end{table}
The \gls{HALEU} \gls{SWU} capacity output from this optimization is not the
theoretical minimum for this problem, and is larger than any of the values
presented in Table \ref{tab:soga_tuning_results}. This solution results in
a larger required \gls{HALEU} \gls{SWU} capacity because the parameters selected
by Dakota result in a build share greater than 100\%, which artificially
inflates the material resources needed for the transition. Additionally,
Dakota selected a non-zero build share of \glspl{MMR}, which, as the transition
analysis (Chapter \ref{ch:once_through_results}) and sensitivity analysis
(Chapter \ref{ch:sa}) show, requires more \gls{SWU} capacity than
the Xe-100 and VOYGR.

The inability of the ``soga'' algorithm to properly meet the build share
constraint in the tuning cases and in this problem identifies a significant
disadvantage of using this method to optimize the transition. According to the
Dakota documentation, the ``soga'' method provides a feasible solution, but
may violate the constraint as it searches the input space \cite{noauthor_dakota_2021}.
In practice, we observed that the algorithm freely explores the parameter space
without regard to the constraint, and the final solution provided is the
iteration that best meets the constraint and minimizes the objective function.
The Dakota developers suggest varying the ``constraint penalty'' hyperparameter
to force the algorithm to adhere more strictly to the constraint
\cite{noauthor_optimization_2023}.
However, Figures \ref{fig:soga_coarse_tuning} and \ref{fig:soga_fine_tuning}
do not show any strong correlation between the constraint penalty and how
well the constraint is met (shown by the purple line in the bottom row of
plots) in this problem. This problem ended at 1000 iterations (the maximum
allowed) and
provided results that met the build share constraint to a similar
extent as many of the tuning cases, which only ran a maximum of 500 iterations.
Therefore, it is not clear whether allowing more iterations would yield results that
better meet the constraint. However, if Dakota were allowed to perform
as many evaluations as needed to meet the convergence criteria, then
it is possible that the constraint would be met. The Dakota manual defines
some optimization
methods, such as derivative-free pattern searches, that adhere more strictly
to linear constraints as they search
\cite{noauthor_dakota_2021}.
Therefore, a future exploration path would include the use of these other
methods to optimize fuel cycle transitions.

The other model input parameters identified by the optimization
of the transition follow the trends identified by the
sensitivity analysis for minimizing the \gls{HALEU} \gls{SWU}.
The percent of \glspl{LWR} receiving license extensions
is above 25\%, the \gls{MMR} burnup is at the maximum value considered,
and the Xe-100 burnup is near the maximum value considered. There is
room to further reduce the \gls{HALEU} \gls{SWU} by increasing
the percent of \glspl{LWR} receiving license extensions. One way to
achieve this further improvement is to increase the maximum
number of function evaluations. For this specific
problem, these three input parameters are not as impactful on the results as the
build share of each reactor type. Therefore, it is reasonable that the
results provided by Dakota do not place these variables at the very
edge of the range of values considered when the algorithm does not
converge.

Because the results provided by this algorithm in Dakota do not strictly
meet the linear constraint and do not reach the theoretical minimum for
this problem, the results should not be accepted at face value.
Rather, the results should be taken as guidance on how to implement an
optimized solution of this problem
with respect to this objective function: maximizing the VOYGR build share in
this transition will minimize the \gls{SWU} capacity required to produce
\gls{HALEU}.
\subsection{Minimize waste mass}
By applying the hyperparameters to an optimization problem to minimize the
mass of \gls{UNF} disposed of in this transition scenario, Dakota
found a minimum with the model input parameters defined in Table
\ref{tab:soga_ot_waste}. The minimum waste mass determined through the
optimization is 1,736 MT. This \gls{UNF} mass is smaller than the minimum
values presented in the once-through transition
analysis (Table \ref{tab:nogrowth_waste}) and the \gls{OAT} analysis
(Table \ref{tab:oat_values}), suggesting that the algorithm finds
a well-optimized solution.
\begin{table}[h!]
\centering
\caption{Values resulting in a minimum waste mass disposed of for
a once-through transition scenario.}
\label{tab:soga_ot_waste}
\begin{tabular}{c c}
\hline
Variable & Value \\
\hline
LWR Lifetime & 50\%\\
Xe-100 build share & 100\%\\
MMR build share & 7\%\\
VOYGR build share & 11\%\\
Xe-100 burnup & 185 MWd/kgU\\
MMR burnup & 90 MWd/kgU\\
\hline
UNF mass & 1,736 MT \\
\hline
\end{tabular}
\end{table}

From Table \ref{tab:soga_ot_waste}, it is clear that a total build share of
more than 100\%
is defined, which artificially increases the \gls{UNF} mass generated from
advanced reactors in this fuel cycle. This result is consistent with the results of
the other single-objective problem for this fuel cycle, but suggests
that the fuel cycle can be further optimized for the \gls{UNF} mass.
These results suggest that to minimize the \gls{UNF} mass, the
\gls{LWR} lifetimes, Xe-100 burnup, and Xe-100 build share should
be maximized. If \glspl{MMR} are deployed, then maximizing the \gls{MMR}
burnup contributes to minimizing the \gls{UNF} mass.
This guidance is consistent with the sensitivity analysis
for minimizing this metric. The results for the
\gls{LWR} lifetime, \gls{MMR} burnup, and Xe-100 burnup demonstrate that this
algorithm can perform well with respect to variables that do not have a
linear constraint applied.

This optimization problem converged in 590 iterations, and the best objective
function value was found at iteration 437. Therefore, the hyperparameters resulted
in faster convergence for this problem than for the previous one, despite the
hyperparameters being tuned on the previous problem. The convergence of
the algorithm for this problem contributes to the identification
of the maximum \gls{LWR} lifetime and Xe-100 burnup values
in the solution.
\section{Multi-objective optimization}
The multi-objective problem in this work minimizes the \gls{SWU} to produce
\gls{HALEU} and minimizes the \gls{UNF} mass. In solving
this problem, Dakota identified a Pareto front
of 217 points, shown in Figure \ref{fig:once_through_pareto}. The points
on this figure identify different values of each objective that are equally
optimized; in order to improve one objective, the other objective becomes
worse. There is a linear trade-off between the objectives along this
Pareto front, which is consistent with the \gls{LWR} lifetime and
each advanced reactor build share having a linear relationship with
each of the metrics in the \gls{OAT} analysis (Section \ref{sec:ot_oat}).
The hypervolume of this Pareto front is 1.061$\times 10^{18}$ using a
reference point of (6$\times 10^9$, 2$\times 10^8$). This hypervolume is
greater than any of the median or mean values in the hyperparameter tuning,
using the same reference point. This result suggests that these hyperparameters
performed well in this problem.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.9]{once_through_pareto.pdf}
\caption{Pareto front resulting from the multi-objective optimization
of the once-through transition.}
\label{fig:once_through_pareto}
\end{figure}

The points on this Pareto front have some similar model parameter values.
Every point has a 0\% build share of \glspl{MMR} and an Xe-100 discharge
burnup of 185 MWd/kgU. Additionally, the percent of \glspl{LWR} operating
for 80 years ranges between 47\% and 50\%. Each of these results is
consistent with the results of the sensitivity analysis and the
single-objective optimization for each problem. The \gls{MMR} discharge
burnup varies
across the range of possible values in this work because the discharge
burnup becomes irrelevant if none of these reactors are deployed in the
transition. These results indicate that maximizing the \gls{LWR}
lifetimes, maximizing the Xe-100 burnup, and minimizing the
\gls{MMR} build share all minimize the \gls{HALEU} \gls{SWU}
capacity and the \gls{UNF} mass, independent of the weighting
of these two objectives.

The Xe-100 and VOYGR build shares vary across the Pareto front. The Xe-100
build share varies between 11\% and 100\%, and the VOYGR build share varies
between 10\% and 99\%. The total build share varies between 100\% and 191\% in these
results, with three of the results having a total build share of 100\%.
These two input parameters vary the most across the points on
the Pareto front because they have the greatest effect on the results. The effect of
these parameters explains why the Pareto front is linear: each of these
build shares has a linear relationship with the objectives (see Section
\ref{sec:ot_oat}), so their combined effect remains linear.
The exact balance between the Xe-100 and VOYGR build shares that optimizes
these metrics will depend on the relative priority of
each objective.

Examining the Xe-100, VOYGR, and total build shares also identifies
weaknesses in using this tool and methodology. Neither of the two reactor
build shares reaches 0\%, which suggests that those areas of the parameter
space were not fully explored. Additionally, 73 of the 217 points (34\%)
on the Pareto front have a total build share above 150\%. Intuition
suggests that both objectives would be minimized with a lower total
build share. Therefore, the total build shares further suggest that the
genetic algorithm, with the selected hyperparameters, does not fully
explore the parameter space.

However, there is degeneracy in the build shares of the different reactors.
Because reactors must be built in whole numbers, multiple values of
a build share can result in the same number of reactors of a given type
being built, so multiple build share percentages can minimize the
objectives. Therefore, some of the variation in the build share of each
advanced reactor is a result of the degeneracy of the build
shares. The degeneracy of the build shares is why setting a strict
linear constraint of the build shares summing to 100\% results in
so few points on the Pareto front. However, the results of this work
suggest that a finite upper limit should be applied to the linear
constraint.
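
As an illustration with hypothetical numbers, suppose that seven advanced
reactors must be deployed in a given year to meet demand and that the
number of reactors of one design built is the build share applied to that
demand, rounded up to a whole reactor. Build shares of 29\% and 42\% then
both yield $\lceil 0.29 \times 7 \rceil = \lceil 0.42 \times 7 \rceil = 3$
reactors, so the two build shares produce identical deployments and,
therefore, identical objective values.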