Home
Surrogate models work as an iterative process distributed in time and should provide a balance between computation cost and accuracy.

In parameter-tuning approaches, a single criterion may not be sufficient to correctly characterize the behaviour of the configuration space under consideration, and multiple criteria have to be considered.

How do you tune parameters for time-series analysis? With a mix of online and offline (simulator) experiments.

We introduce approaches that can significantly influence the effort spent on multi-objective solutions by grouping work stages and making them reusable and scalable.

Can the same single-objective models be applied equally well to various types of problems in multi- and single-objective optimization?
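A minimal sketch of such a surrogate-assisted loop, assuming scikit-learn is available; `expensive_objective` is a hypothetical stand-in for a costly experiment or simulation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical stand-in for a costly experiment or simulation.
def expensive_objective(x):
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5, 1))           # initial design points
y = expensive_objective(X).ravel()

candidates = np.linspace(-3, 3, 200).reshape(-1, 1)
for _ in range(10):                           # iterative refinement over time
    surrogate = GaussianProcessRegressor(kernel=RBF()).fit(X, y)
    mean = surrogate.predict(candidates)
    x_next = candidates[np.argmin(mean)]      # cheap surrogate picks the next point
    X = np.vstack([X, [x_next]])
    y = np.append(y, expensive_objective(x_next))  # one expensive evaluation per step

print("best x:", X[np.argmin(y)], "best y:", y.min())
```

This greedy version only exploits the surrogate mean; Bayesian optimization would add an acquisition function that trades exploitation against exploration.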
The sklearn.pipeline module implements utilities to build a composite estimator as a chain of transforms and estimators.

- Pipeline with PolynomialFeatures + Ridge: performs non-linear regression with a linear model, using a pipeline to add non-linear features (see the sketch after this list).
- PolynomialFeatures: generates a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree.
- Ridge(): linear least squares with l2 regularization. Regularized least squares (RLS) is used for two main reasons: first, when the number of variables in the linear system exceeds the number of observations; second, when the number of variables does not exceed the number of observations but the learned model generalizes poorly and regularization improves it.
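A minimal sketch of that pipeline, following the scikit-learn API; the toy cubic data is only for illustration:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge

# Non-linear target, fit with a linear model on polynomial features.
X = np.linspace(-2, 2, 50).reshape(-1, 1)
y = X.ravel() ** 3 - X.ravel() + np.random.default_rng(0).normal(0, 0.1, 50)

model = Pipeline([
    ("poly", PolynomialFeatures(degree=3)),  # expand x -> [1, x, x^2, x^3]
    ("ridge", Ridge(alpha=1.0)),             # linear least squares with l2 penalty
])
model.fit(X, y)
print(model.predict([[1.5]]))
```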
Candidate surrogate models (sketched below):

- SVM with a Gaussian RBF (Radial Basis Function) kernel
- Polynomial regression
- Regression forests
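A sketch of these candidates in scikit-learn terms; the polynomial variant reuses the pipeline idea from above:

```python
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge

surrogates = {
    "svm_rbf": SVR(kernel="rbf"),             # SVM with Gaussian RBF kernel
    "polynomial": make_pipeline(PolynomialFeatures(degree=3), Ridge()),
    "regression_forest": RandomForestRegressor(n_estimators=100),
}
# Each exposes fit(X, y) / predict(X), so they are interchangeable
# as surrogate models in the loop sketched earlier.
```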
A Gaussian process is a stochastic process (a collection of random variables indexed by time or space) such that every finite collection of those random variables has a multivariate normal distribution, i.e. every finite linear combination of them is normally distributed. The distribution of a Gaussian process is the joint distribution of all those (infinitely many) random variables, and as such it is a distribution over functions with a continuous domain, e.g. time or space.
The Gaussian process uses lazy learning and a measure of the similarity between points (the kernel function) to predict the value for an unseen point from training data.
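A minimal scikit-learn sketch: the RBF kernel below is the similarity measure, and the model returns both a prediction and an uncertainty for an unseen point:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
y_train = np.sin(X_train).ravel()

# The kernel encodes similarity between points: nearby inputs get similar predictions.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X_train, y_train)

mean, std = gp.predict([[1.5]], return_std=True)  # posterior mean and uncertainty
print(mean, std)
```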
- AutoKeras
- TPOT
- mlrMBO: Bayesian Optimization and Model-Based Optimization
- Optimising hyper-parameters efficiently with Scikit-Optimize (a usage sketch follows this list)
- Benchmarks and Interfaces for Black Box Optimization Software | Code
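As one example from this list, a minimal Scikit-Optimize run, assuming the scikit-optimize package is installed; the objective and bounds are toy placeholders:

```python
from skopt import gp_minimize

# Toy objective; in practice this would be an expensive training/evaluation run.
def objective(params):
    x, y = params
    return (x - 1) ** 2 + (y + 2) ** 2

res = gp_minimize(
    objective,
    dimensions=[(-5.0, 5.0), (-5.0, 5.0)],  # search-space bounds
    n_calls=30,                              # budget of expensive evaluations
    random_state=0,
)
print(res.x, res.fun)  # best parameters and best objective value
```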
No Free Lunch Theorems for Optimization: the "No Free Lunch" (NFL) theorems demonstrate that if an algorithm performs well on a certain class of problems, it necessarily pays for that with degraded performance on the set of all remaining problems. The name emphasizes the parallel with similar results in supervised learning: you have to try multiple types of components to find the best one for your data. A number of NFL theorems were derived that demonstrate the danger of comparing algorithms by their performance on a small sample of problems.
In the black-box optimization competition, a combination of genetic and local search works best on the single-objective track (Results), while on the expensive single-objective track a Gaussian-process-based method works best (Results).
ax.dev: Adaptive Experimentation Platform
"To the optimist, the glass is half full. To the pessimist, the glass is half empty. To the engineer, the glass is twice as big as it needs to be." – Unknown
"In protocol design, perfection has been reached not when there is nothing left to add, but when there is nothing left to take away." – Unknown
" Clearly, a lot of people have personally encountered the large gap between “here is how a convolutional layer works” and “our convnet achieves state of the art results”." – Andrej Karpathy
"never use a plugin you would not be able to write yourself" – Jamis Buck, Ruby on Rails book
hashtags: Model-based Search, Search-Based Software, Bayesian optimization, Bayesian inference, Multi-objective, Many-objective, Evolution approaches, Parameters-tuning