Solved

Does Zemax do optimization in order based on operand weight or contribution in the merit function?

  • 16 August 2022


Hi Zemax Community,

 

I am working on a design and setting up operands to optimize the optical system.

Part of my operands are set up as shown below.

For example, I set the weight of the most important operands (in the red box) to 15, and the others to 1 (in the green box).

 

My questions are:

  1. When Zemax does the optimization, does it selectively focus on optimizing the heavier-weight operands first, and then the lower-weight ones?
  2. Or is it not the weight but the contribution that is considered, so that Zemax would focus on the higher-contribution operands first?

 

 


Best answer by Ray 16 August 2022, 11:30


5 replies


Or shall I do the optimization in stages manually?

First, only the most important operands,

then add the less important ones to the merit function.

 


Hi Yongtao,

Zemax does not work on the operands but on the parameters. All operands are treated as a whole (the merit function value), but the parameters may be treated either as a whole or sequentially, depending on the algorithm. This is not specific to Zemax; it's how most (all?) optimization algorithms work.

When using damped least squares, Zemax first computes how each operand changes with respect to each parameter (the Jacobian, used to approximate the Hessian), and once that is computed, it changes all the parameters at the same time in order to reduce the merit function.
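To illustrate (just a toy numpy sketch of the idea, not Zemax's actual code): every operand contributes one weighted residual to a single vector, and one damped least-squares step moves all the parameters at once.

```python
import numpy as np

def dls_step(residuals, jacobian, weights, x, damping=1e-3):
    """One damped least-squares step on a weighted merit function.

    residuals(x) -> array of (operand value - target) for every operand
    jacobian(x)  -> matrix of d(residual_i)/d(parameter_j)
    weights      -> one weight per operand; a heavier operand simply
                    counts more in the single merit value, it is not
                    optimized "before" the others.
    """
    r = residuals(x)                      # all operands evaluated together
    J = jacobian(x)
    W = np.diag(weights)
    # Damped normal equations: (J^T W J + mu*I) dx = -J^T W r
    A = J.T @ W @ J + damping * np.eye(len(x))
    dx = np.linalg.solve(A, -J.T @ W @ r)
    return x + dx                         # every parameter moves in the same step
```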

When using orthogonal descent, Zemax improves each parameter in sequence (I guess, in the order they are found in the editor) by trying to improve the merit function directly. That’s why, in the dialog window, you see a “cycle” number in the form of x.y where x is the iteration and y the parameter number.

But all operands are considered the same at every iteration or cycle.
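For contrast, here is a rough coordinate-descent-style sketch of the orthogonal descent idea (only illustrative; the real algorithm also orthogonalizes the variables): parameters are visited one at a time, but the quantity being improved is always the full weighted merit function, so no operand is singled out.

```python
import numpy as np

def merit(residuals, weights, x):
    """One scalar merit value: sum of w_i * r_i^2 over ALL operands."""
    r = residuals(x)
    return float(np.sum(weights * r**2))

def one_cycle(residuals, weights, x, step=1e-2):
    """One optimization 'cycle': adjust each parameter in sequence.

    This mirrors the x.y counter in the optimization dialog:
    x = cycle number, y = index of the parameter currently being varied.
    """
    x = np.array(x, dtype=float)
    for j in range(len(x)):                       # parameters in editor order
        current = merit(residuals, weights, x)
        for delta in (+step, -step):
            trial = x.copy()
            trial[j] += delta
            if merit(residuals, weights, trial) < current:
                x = trial                         # keep the improvement
                break
    return x
```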

 


Hi Ray,

 Thanks for your explanation!

I thought Zemax first picked the heavier-weight or higher-contribution operands to optimize the parameters. Maybe that would be a new optimization algorithm? :-)

Thanks again.

 

Yongtao


Hi Yongtao,

This sounds a little similar to switching between several merit functions, although that would mean 1 function per operand and having to select them based on potential contribution. This could be done by using a macro (or the API), either by loading or rewriting the MF at every step, or by changing the weights (probably easier). A more common way is to cycle between multiple merit functions.
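A rough Python sketch of the weight-switching idea (the two helper functions below are hypothetical stand-ins for whatever your macro or the API exposes for setting a row's weight and launching a local optimization; they are not real Zemax calls):

```python
# Hypothetical helpers: replace them with the real macro keywords or
# API calls available in your Zemax version.
def set_operand_weight(row, weight):
    """Stand-in: write 'weight' into one merit-function row."""
    print(f"set weight of MF row {row} to {weight}")

def run_local_optimization(cycles):
    """Stand-in: run the local optimizer and return the merit value."""
    print(f"optimize for {cycles} cycles")
    return 0.0

critical_rows = [3, 4, 5]          # example rows of the must-meet operands
emphasis, normal = 15.0, 1.0

for stage in range(10):
    # Stage A: emphasize the critical operands and optimize.
    for row in critical_rows:
        set_operand_weight(row, emphasis)
    mf_heavy = run_local_optimization(cycles=5)

    # Stage B: restore balanced weights so the rest of the system
    # is not sacrificed, then optimize again.
    for row in critical_rows:
        set_operand_weight(row, normal)
    mf_balanced = run_local_optimization(cycles=5)
```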

Glatzel's adaptive method from the 1960s could be seen as similar too.

Not all operands are equal: some must be strictly enforced (physical limitations, such as positive thicknesses), others are more flexible (e.g. spot size, wavefront), so they may have to be addressed differently (e.g., with Lagrange multipliers, if they actually worked).

The topic of optimization is far from frozen (Zemax has introduced several algorithms to test in recent versions), and I think your description fits into the wide field of multi-objective optimization.
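To make the "not all operands are equal" point concrete, here is a small generic scipy sketch (nothing Zemax-specific; the toy functions are invented for illustration) where the physical requirement is expressed as a hard constraint while the image-quality terms stay in the weighted objective:

```python
import numpy as np
from scipy.optimize import minimize

# Toy design with 3 parameters: two soft image-quality goals and one
# hard physical requirement (a thickness that must stay >= 0.1).
def spot(x):       return (x[0] - 1.0) ** 2
def wavefront(x):  return (x[1] + 0.5) ** 2
def thickness(x):  return x[2]

weights = {"spot": 15.0, "wavefront": 1.0}

def merit(x):
    # Soft goals: weighted sum, just like weighted MF operands.
    return weights["spot"] * spot(x) + weights["wavefront"] * wavefront(x)

result = minimize(
    merit,
    x0=np.zeros(3),
    # Hard requirement handled as a constraint (via multipliers inside
    # SLSQP), not as just another heavily weighted term.
    constraints=[{"type": "ineq", "fun": lambda x: thickness(x) - 0.1}],
    method="SLSQP",
)
print(result.x, result.fun)
```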


Hi Ray,

Thank you!

Actually, I am facing the problem that the total merit function value is not bad, but the operand that is most important in my design just cannot meet my requirements.

"Not all operands are equal", that's the point. Thanks for your advice; I will try the following:

  1. Divide the operands into several merit functions according to their weight, and switch between them depending on the contribution
  2. Rewrite/reload the merit function at every step
  3. Change the weights

Option 1 may be a little challenging for me, and option 2 is, I think, the basic method I shall try.

About option 3: I have actually already put a greater weight on the important operands in my merit function, but it does not work so well. I am also considering a negative weight (Lagrange multipliers). Anyway, I will try these and hope a good result comes out.

I am also really interested in the Glatzel adaptive method and will take a look.

 

Thank you!

Have a good day

 

Yongtao

 
