Okay, that is interesting. You could realize that by creating a restricted area in the search space: return np.nan from the objective function for those cases. Gradient-Free-Optimizers handles np.nan and np.inf just fine.
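A minimal sketch of the idea, using a hand-rolled random search rather than the library's own API (the objective, constraint, and search loop here are all illustrative assumptions): points in the restricted area return np.nan, and the optimizer simply discards them.

```python
import numpy as np

# Hypothetical objective with a restricted (infeasible) area:
# any point with x + y < 1 is off-limits and returns np.nan.
def objective(x, y):
    if x + y < 1:
        return np.nan                      # mark the restricted area
    return -(x - 2)**2 - (y - 2)**2        # maximize; optimum at (2, 2)

# Hand-rolled random search that skips nan scores, mimicking
# how an nan-tolerant optimizer can simply ignore those samples.
rng = np.random.default_rng(0)
best_score, best_point = -np.inf, None
for _ in range(2000):
    x, y = rng.uniform(-5, 5, size=2)
    score = objective(x, y)
    if np.isnan(score):                    # infeasible sample: discard and move on
        continue
    if score > best_score:
        best_score, best_point = score, (x, y)
```

The accepted best point always satisfies the constraint, since infeasible samples never enter the comparison.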
This is, at a high level, how one of our Google-internal black-box optimizers behaves. The term used is "infeasible region", but it's the same idea as using a nan.
I would caution against always using nan to mean infeasible. Instead, users should catch experiments outside the feasibility region and return a special infeasible value. This increases visibility into the optimizer's behavior, because it leaves nan free to flag values inside the constraint region that are still problematic (due to bugs, numerical instability, etc.).
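A sketch of that separation, with an assumed sentinel value and toy objective: known constraint violations get an explicit INFEASIBLE score, while nan is left to surface genuine numerical problems inside the feasible region.

```python
import numpy as np

# Explicit sentinel for constraint violations (assuming maximization,
# so "very negative" means "never pick this").
INFEASIBLE = -1e12

def objective(x, y):
    # Known constraint: caught explicitly and reported as INFEASIBLE.
    if x + y < 1:
        return INFEASIBLE
    # Inside the feasible region. If the computation still produces nan
    # here (bug, numerical instability), the nan propagates out and stays
    # distinguishable from "merely infeasible".
    return float(np.log(x * y) - (x - 2)**2)

ok = objective(2.0, 2.0)     # feasible, finite score
bad = objective(0.0, 0.0)    # violates x + y >= 1 -> INFEASIBLE
weird = objective(3.0, -1.5) # feasible, but log of a negative -> nan
```

Monitoring the two signals separately tells you whether the optimizer is wandering outside the feasible region or hitting numerical trouble inside it.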
If the volume of the feasible space is small, the optimizer may have considerable trouble finding the feasible region(s) in the domain. The simple solution is to use a penalty function instead, i.e. add a term like M*max(0, 1-(a+b)) to the objective, with sufficiently large M. If the optimum is at the boundary of the feasible region, you might still get a solution that is slightly infeasible, which can be worked around with more special-case logic...
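The penalty term above can be sketched like this (the base objective and the value of M are illustrative assumptions; minimization is assumed so the penalty is added):

```python
# Penalty weight; "sufficiently large" relative to the objective's scale.
M = 1e3

def penalized_objective(a, b):
    # Example base objective: minimize (a - 2)**2 + (b - 2)**2.
    f = (a - 2)**2 + (b - 2)**2
    # Constraint a + b >= 1 expressed as the penalty term from the comment:
    # M * max(0, 1 - (a + b)) is zero when feasible and grows linearly
    # with the violation, steering the optimizer back toward feasibility.
    penalty = M * max(0.0, 1.0 - (a + b))
    return f + penalty

inside = penalized_objective(2.0, 2.0)   # feasible: penalty vanishes
outside = penalized_objective(0.0, 0.0)  # infeasible: penalty dominates
```

Unlike the nan approach, every point still returns a finite score, so the optimizer gets a gradient-like signal pointing back into the feasible region even when it starts far outside it.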
Is there a subset of these constrained optimisation techniques that works when the constraint bounds are unknown before evaluation time? This applies to certain physical engineering problems where, due to the complexity of the environmental system you're modelling, you don't necessarily know which combinations of input parameters lead to a non-viable solution. That is, a given solution might be numerically valid, but the objective function could determine it to be non-viable because it fails an engineering constraint (e.g. it vibrates too fast).
This would be like the barrier function [1] for gradient-based methods, which produces some challenges for those techniques. Do you know how this affects gradient-free techniques?
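A small sketch of what a log barrier looks like when the constraint value is only observable at evaluation time (the simulated vibration model, the limit, and the barrier strength MU are all assumptions for illustration): the barrier term blows up as evaluations approach the limit, and is undefined outside it, which is exactly the difficulty mentioned.

```python
import numpy as np

MU = 0.1         # barrier strength (assumed; typically annealed toward 0)
LIMIT = 100.0    # engineering limit, e.g. maximum allowable vibration frequency

def simulate_vibration(x):
    # Stand-in for an expensive physical model: we only learn the
    # vibration frequency by actually running the evaluation.
    return 50.0 + 10.0 * x**2

def objective_with_barrier(x):
    f = (x - 1.0)**2              # the quantity we actually want to minimize
    g = simulate_vibration(x)     # constraint value, unknown before evaluation
    slack = LIMIT - g
    if slack <= 0:
        # The log barrier is undefined outside the feasible set; a
        # gradient-free optimizer can treat inf as simply "very bad".
        return np.inf
    return f - MU * np.log(slack) # grows without bound as g approaches LIMIT
```

For a gradient-free method the undefined-outside problem is less severe than for gradient-based ones: the optimizer never needs derivatives of the barrier, so returning inf (or a sentinel) for out-of-bounds evaluations is enough, at the cost of losing any directional signal out there.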
ialyos|5 years ago
unnah|5 years ago
opium_tea|5 years ago
craftinator|5 years ago
abhgh|5 years ago
[1] https://en.m.wikipedia.org/wiki/Barrier_function