
Hello,

I would like to know how the initial variations of the variables are defined during optimization.

It seems that the value used corresponds to a fraction of the variable's current value, so when a variable starts from 0 the change is very small and barely modifies the merit function, making the optimization very slow, especially with a large number of variables.

Would it be possible to add an option letting us define this starting variation ourselves for each variable?

Best regards

Scaling the deltas is one of the things the code does when running the optimizer, and it's a complex set of actions that can't easily be summarized (for example, it's different for thicknesses than for radii). But the optimizer should handle this for you just fine, especially when using the default merit functions for image quality, as they are robust over a huge range of starting points. What is the specific difficulty you're having that prompts this question?
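To make the idea concrete, here is a minimal Python sketch of per-variable-type step scaling for finite differences. The branches and constants are assumptions invented for this illustration, not OpticStudio's actual internals (which are not documented):

```python
# Illustrative only: a toy per-variable-type step rule for finite
# differences. The rules and constants below are made up for this sketch.

def fd_delta(value, var_type):
    """Choose a finite-difference step for one variable."""
    if var_type == "thickness":
        return 1e-3                          # small absolute step in lens units
    if var_type == "radius":
        return 1e-4 * max(abs(value), 1.0)   # relative step with a unit floor
    return 1e-4 * abs(value)                 # purely relative: collapses at 0

for value, var_type in [(100.0, "radius"), (5.0, "thickness"), (0.0, "conic")]:
    print(var_type, value, "->", fd_delta(value, var_type))
```

The last case is the one under discussion: a purely relative rule gives a zero step for a variable that starts at zero.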

 

Mark

Hi,

The problem arises when creating custom merit functions on parameters other than radius of curvature or thickness, or for non-sequential designs. In many cases the first deltas are too small and the software struggles to converge in a reasonable time. It would be useful to be able to define these initial deltas ourselves.

Marc

 


Hey Marc, what makes you believe that’s the problem? The actual deltas aren’t revealed. I suspect the problem (slow convergence) may lie in the merit function construction. The default MFs are designed to be stable over almost any starting values (especially RMS Spot) and provide smooth change vectors from completely defocussed to diffraction limited. 

I think it would be helpful to post a specific example of where you’re having problems. 


The problem is when you don't use the default MF. When I build my own MF, using the POPD operand or more complex functions for example, or with a non-sequential system, if the initial variable is zero I can see after a given time that the variables have changed by only a very small proportion (10⁻⁶ to 10⁻⁴) when the final value should be a few units.
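Here is a minimal sketch of the failure mode Marc describes, assuming a purely relative finite-difference step: at x = 0 the step collapses to zero and the optimizer sees a flat merit function, while an absolute floor on the step restores a usable derivative estimate. The merit function and constants are stand-ins:

```python
# Sketch only: why a purely relative step stalls a variable starting at 0,
# and the usual fix (an absolute floor on the step). Assumed numbers.

def merit(x):
    # Stand-in merit function; the optimum sits at x = 2.0, "a few units" away.
    return (x - 2.0) ** 2

def derivative(x, rel=1e-4, floor=0.0):
    delta = max(rel * abs(x), floor)
    if delta == 0.0:
        return 0.0  # zero step -> no measurable change in the merit function
    return (merit(x + delta) - merit(x)) / delta

print(derivative(0.0))               # 0.0: the optimizer sees a flat function
print(derivative(0.0, floor=1e-3))   # ~ -4.0: a usable derivative estimate
```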

I ran into the same problem recently in a tolerancing analysis, with a script in which I had to specify a fixed number of optimization cycles to avoid huge calculation times, only to find afterwards that the system was not fully optimized. If I modify the script to leave the number of cycles on automatic, the tolerancing time increases a lot. To work around the problem I had to add a perturbation of the variable concerned in the script with the PERTURB function, which partially solves the problem, but it's a stopgap.
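Marc's PERTURB workaround amounts to nudging zero-valued variables off zero before optimizing, so that relative deltas become non-zero. A generic sketch of that idea (the names and kick size here are assumptions, not the actual tolerancing-script syntax):

```python
# Sketch of the workaround in generic terms: kick any variable that sits
# exactly at zero before running the optimizer. EPS is an assumed value.

EPS = 1e-3  # assumed kick size, in the variable's own units

def pre_perturb(variables, eps=EPS):
    """Return a copy with zero-valued entries nudged off zero."""
    return [v if v != 0.0 else eps for v in variables]

print(pre_perturb([0.0, 5.0, 0.0, -2.5]))  # -> [0.001, 5.0, 0.001, -2.5]
```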

Marc


Hmm... you might be right. What does the merit function look like when you plot it versus a variable on the Universal Plot?

Also, have you tried Orthogonal Descent? It uses much wider stepping.
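For reference, the wider stepping can be pictured with a toy coordinate search: probe each variable with a wide absolute step and shrink the step only when no direction improves. This is not OpticStudio's actual Orthogonal Descent algorithm (which orthogonalizes the search directions), just an illustration of why absolute steps do not stall on zero-valued variables:

```python
# Toy coordinate search with wide absolute probing steps. Illustrative
# only; constants and the stand-in merit function are assumptions.

def merit(x):
    # Stand-in merit function with its minimum away from the origin.
    return (x[0] - 2.0) ** 2 + (x[1] + 1.5) ** 2

def coordinate_search(x, step=1.0, shrink=0.5, tol=1e-6):
    x, f = list(x), merit(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                ft = merit(trial)
                if ft < f:
                    x, f, improved = trial, ft, True
        if not improved:
            step *= shrink  # narrow the probe only when wide steps fail
    return x, f

print(coordinate_search([0.0, 0.0]))  # converges to ~[2.0, -1.5]
```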

