
Commit

deploy: 8cca5de
cromanpa94 committed Apr 17, 2024
1 parent f7325e0 commit d182f4a
Showing 1 changed file with 1 addition and 1 deletion.
publications/index.html (1 addition, 1 deletion)
@@ -412,7 +412,7 @@ <h6 class="cs-title">Journals</h6>
<span class="cs-publication-author"><span class="cs-highlight-author">Danielle Van Boxel</span></span>
<span class="cs-publication-year">2024</span>
<span class="cs-desc">
<div class="read-more-text"><p>\ We apply Bayesian Additive Regression Tree (BART) principles to training an ensemble of small neural networks for regression tasks. Using Markov Chain Monte Carlo, we sample from the posterior distribution of neural networks that have a single hidden layer. To create an ensemble of these, we apply Gibbs sampling to update each network against the residual target value (i.e. subtracting the effect of the other networks). We demonstrate the effectiveness of this technique on several benchmark regression problems, comparing it to equivalent shallow neural networks, BART, and ordinary least squares. Our Bayesian Additive Regression Networks (BARN) provide more consistent and often more accurate results. On test data benchmarks, BARN averaged between 5 to 20 percent lower root mean square error. This error performance does come at the cost, however, of greater computation time. BARN sometimes takes on the order of a minute where competing methods take a second or less. But, BARN without cross-validated hyperparameter tuning takes about the same amount of computation time as tuned other methods. Yet BARN is still typically more accurate.</p>
<div class="read-more-text"><p>We apply Bayesian Additive Regression Tree (BART) principles to training an ensemble of small neural networks for regression tasks. Using Markov Chain Monte Carlo, we sample from the posterior distribution of neural networks that have a single hidden layer. To create an ensemble of these, we apply Gibbs sampling to update each network against the residual target value (i.e. subtracting the effect of the other networks). We demonstrate the effectiveness of this technique on several benchmark regression problems, comparing it to equivalent shallow neural networks, BART, and ordinary least squares. Our Bayesian Additive Regression Networks (BARN) provide more consistent and often more accurate results. On test data benchmarks, BARN averaged between 5 to 20 percent lower root mean square error. This error performance does come at the cost, however, of greater computation time. BARN sometimes takes on the order of a minute where competing methods take a second or less. But, BARN without cross-validated hyperparameter tuning takes about the same amount of computation time as tuned other methods. Yet BARN is still typically more accurate.</p>
</div>
<span class="read-more-btn">...read more</span>
</span>
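The changed paragraph is the abstract of a BARN (Bayesian Additive Regression Networks) publication entry; the commit simply drops a stray "\ " that had leaked into the rendered text. For readers curious about the residual-update scheme the abstract describes, below is a minimal, hypothetical Python sketch. It stands in scikit-learn's MLPRegressor for the "small single-hidden-layer network" and plain refitting for the paper's MCMC posterior sampling, so it illustrates only the Gibbs-style backfitting against residual targets, not BARN itself; the function and parameter names are invented for illustration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def backfit_ensemble(X, y, n_nets=10, n_sweeps=20, hidden_units=4, seed=0):
        """Gibbs-style backfitting sketch: each small network is refit
        against the residual left by the other networks. This replaces
        BARN's MCMC sampling with ordinary gradient-based refitting."""
        rng = np.random.default_rng(seed)
        nets = [MLPRegressor(hidden_layer_sizes=(hidden_units,), max_iter=500,
                             random_state=int(rng.integers(2**31 - 1)))
                for _ in range(n_nets)]
        contrib = np.zeros((n_nets, len(y)))  # each net's current predictions
        for _ in range(n_sweeps):
            for j, net in enumerate(nets):
                # Residual target: "subtracting the effect of the other
                # networks," per the abstract.
                residual = y - (contrib.sum(axis=0) - contrib[j])
                net.fit(X, residual)
                contrib[j] = net.predict(X)
        return nets

    def ensemble_predict(nets, X):
        # The ensemble prediction is the sum of the member networks' outputs.
        return sum(net.predict(X) for net in nets)

With NumPy arrays X and y in hand, usage would look like nets = backfit_ensemble(X, y) followed by y_hat = ensemble_predict(nets, X).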
