Question

Optimizing Computational Efficiency in the Design of Continuous Zoom, Multi-Wavelength Optical Systems


Hello,

When designing a continuous-zoom, multi-wavelength system, I run into a significant computational-power problem. To work around it, I set the semi-diameters to automatic solves rather than maximum solves (optimizing with maximum solves increases the optimization time by a factor of 2-3). Then comes the tricky part: I no longer know each surface's maximum diameter. In some cases, such as at the wider zoom angles, the lens diameters shrink while the elements move into closer positions, so converting those lenses to their maximum values produces positions that are unrealistic for the mechanical structure.

To overcome this hurdle, I consistently rely on DMLT or ETGT operands for systems with 15-20 configurations. Interestingly, when I optimize the same system in other software, it automatically converts every diameter to its maximum without any penalty in computational power.
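For readers unfamiliar with those operands, here is a rough sketch in plain Python (hypothetical diameters and limit; not Zemax syntax) of the kind of constraint a DMLT-style operand expresses: penalize any zoom configuration whose lens diameter exceeds a mechanical limit, rather than forcing every configuration to the global maximum.

```python
# Conceptual sketch (plain Python, hypothetical numbers -- not Zemax syntax) of
# what a DMLT-style ("diameter less than") operand expresses: a penalty on any
# zoom configuration whose diameter exceeds a mechanical limit.
def dmlt_penalty(diameters_by_config, limit):
    """Sum of squared violations of 'diameter less than limit' per configuration."""
    return sum(max(0.0, d - limit) ** 2 for d in diameters_by_config)

# One surface's diameters across five hypothetical zoom positions, 21 mm bore:
diams = [18.0, 19.5, 20.2, 21.8, 22.4]
print(round(dmlt_penalty(diams, 21.0), 2))  # → 2.6  (0.8^2 + 1.4^2)
```

An ETGT ("edge thickness greater than") operand plays the complementary role, penalizing edge thicknesses that fall below a minimum.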

Do you have a proper way to solve this problem?

 




Using maximum semi-diameter solves is the correct approach, but it’s not clear to me where your factor of 2 to 3 speed reduction is coming from. There is a calculation time cost of course but it should not be of this magnitude, and really shouldn’t be noticeable by the user.

I’d suggest it’s either a bug or a setup error. Can’t say without seeing the file and fortunately that’s not my job anymore :-) Zemax support should be able to confirm one way or the other with a sample file.

  • Mark


Thanks for your reply, Mark. This behavior has been present since Zemax 13; unfortunately, I couldn't convince the support team that it might be a bug, so I decided to look for a method on the forum. I did convince support to create a test file, and they found that even the simplest system showed approximately a 1.5x slowdown. However, I couldn't share a more complex system with them, they didn't try one themselves, and the case was left unresolved :)


And when you say a speed reduction, what is it in? Ray-tracing? Optimization? System Setup? 

Also, how are you defining the system aperture, and field of view? Does your file need ray-aiming?


Also, what is your field definition? If your fields are defined in Image Space, OpticStudio has to run an optimization to convert from something like Real Image Height to (object-space) Angle.

A few of the biggest speed hitters in OpticStudio are:

  • Field definition (make sure to always use object space)
  • Ray Aiming (if you have a front-aperture design, don’t use Ray Aiming...if you have a buried Stop, use Paraxial Ray Aiming unless you truly need “Real” Ray Aiming, such as for high-NA microscopes)
  • Non-parameterized surfaces (avoid optimizing with surfaces such as Grid Sag)
  • Aspheres “blowing up” just outside the Clear Semi-Diameter (use as few coefficients as possible when designing with aspheres. Always start with your p^4 term as a variable, optimize, check your Merit Function, and add the p^6 term if needed...repeat as necessary)
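The last bullet can be made concrete with a small sketch (hypothetical coefficients, mm units) of the even-asphere sag, z(r) = c·r²/(1 + √(1 − (1+k)·c²·r²)) + a4·r⁴ + a6·r⁶: the polynomial departure grows as r⁴ and r⁶, so it blows up just outside the Clear Semi-Diameter the optimizer actually controls.

```python
import math

# Even-asphere sag: base conic plus r^4 and r^6 polynomial terms
# (hypothetical coefficients; mm units).
def even_asphere_sag(r, c, k, a4, a6):
    base = c * r**2 / (1.0 + math.sqrt(1.0 - (1.0 + k) * c**2 * r**2))
    return base + a4 * r**4 + a6 * r**6

def departure(r, a4, a6):
    # Polynomial departure from the base conic -- the part that "blows up".
    return a4 * r**4 + a6 * r**6

c, k, a4, a6 = 1.0 / 50.0, 0.0, 1e-6, 1e-9  # R = 50 mm sphere + small asphere terms
csd = 10.0  # Clear Semi-Diameter the optimizer controls

# Only 20% outside the CSD the departure has already more than doubled
# (the r^4 term scales by 1.2**4 ≈ 2.07, the r^6 term by 1.2**6 ≈ 2.99):
print(departure(1.2 * csd, a4, a6) / departure(csd, a4, a6))  # ≈ 2.16
```

This is why each extra coefficient should earn its place in the Merit Function before being freed as a variable.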

OK, I’ve done some tests and I do think there’s something strange going on here that a developer should look at.

The attached file shows one of the zoom lenses from the samples folder, with 20 configurations. With automatic semi-diameter solves, I got:

Putting Maximum solves on all surfaces except the stop gave me

You can see the ray-tracing speed is unaffected (running the Performance Test multiple times for each mode gives random fluctuations about the same mean), and a quick test using the FFT MTF showed the calculations running at the same speeds. However, notice that the system updates/second figure is much lower in the second case (with Maximum solves used).

@Berke, is this the slowdown you are experiencing? Nothing in ray-tracing speed, but slower system updates?

I re-did the experiment with EPD as the system aperture (it was image space f/# previously) and got the same result: ray tracing speed unaffected but system updates lower by about the same amount. The file is attached (setup with image space f# as the aperture definition).

@yuan.chen (Zemax), I find the reduction in system updates/second concerning. In order to use a Maximum solve, OpticStudio must first calculate all the semi-diameters and then set them all to the maximum value on the list. Without Maximum solves, OS still needs to calculate all the individual semi-diameters; it just skips the step where it sets them all to the maximum value. While that last step will take some time, its effect on the System Updates/Sec seems highly disproportionate. I think a percent or two of reduction would be very generous...I just don’t see what’s taking up two orders of magnitude.
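To illustrate that reasoning with a toy sketch (plain Python, made-up numbers; not how OpticStudio is implemented): the per-surface, per-configuration semi-diameters have to be computed either way, and the Maximum solve only adds a single max() pass over the configurations.

```python
# Toy model of the two solve modes. sd_table[cfg][surf] holds the clear
# semi-diameter each configuration needs (the expensive ray-trace result,
# assumed already computed in both modes).
def automatic_semi_diameters(sd_table):
    # Automatic solves: each configuration keeps its own traced value.
    return [row[:] for row in sd_table]

def maximum_semi_diameters(sd_table):
    # Maximum solves: one extra O(n_cfg * n_surf) pass to find the maxima,
    # which every configuration then shares.
    n_surf = len(sd_table[0])
    max_sd = [max(row[s] for row in sd_table) for s in range(n_surf)]
    return [max_sd[:] for _ in sd_table]

# Hypothetical 3-configuration, 4-surface example:
table = [[5.0, 8.0, 6.0, 4.0],
         [6.5, 7.0, 6.2, 3.5],
         [6.0, 7.5, 5.8, 4.2]]
print(maximum_semi_diameters(table)[0])  # → [6.5, 8.0, 6.2, 4.2]
```

The extra pass is linear in surfaces × configurations, which is why a drop of this size in system updates is hard to explain.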

Irrespective of whatever Berke’s problem ultimately resolves to be, this slowdown is unexpected and I think a developer should track down what is consuming all this resource.

Hope that helps,

  • Mark


Hi again Mark,

Thanks for your support. Yes, this is the problem. The system setup is slow, and once computation starts the system cannot update quickly either (optimization becomes slower), as shown in the pictures. In the end, when I optimize my system, I have to wait much longer. As you said, this is unrelated to the system definition (EPD, FOV, etc.).

By the way, you are the only one who understood the problem and could create a file :)

 


By the way, you don't need to apply Maximum solves to all the diameters. If you maximize even one diameter, you will see the same slowdown.


Hi @Berke, thank you for sharing this with us and @Mark.Nicholson for bringing this to our attention. 

Sorry about the speed slowdown issue in multi-configuration systems. I have discussed this with our dev team, and they asked me to submit a bug report so they can investigate further. It may take a while to locate the root cause. To simplify the optimization in the meantime, are there certain configurations that give the largest element sizes? (I am sorry that this is the only workaround I can think of.)

By the way, may I know your ZOS version?

Hi @yuan.chen, finally it is reported as a bug. I had already opened a ticket for this issue (66673), but the response was just rude and vague; that is why I chose to follow up on this issue on the forum. I am using 2024 R1.02, but this behavior has been present in Zemax for more than 10 years.



Hi @Berke, many thanks for providing your ZOS version, and sorry for keeping you waiting for so long. I have found the ticket and added the bug report number to it. We will reach out once we have some progress. I apologize that the response felt vague, and thank you so much for offering us another opportunity to make this up to you.

The test that @Mark.Nicholson performed was the key to drawing our developers’ attention. Many thanks, Mark!

By the way, I would like to mention that the performance difference is not the same across machines: I got exactly the same system-update rates when I tried to reproduce the result, and my checker could reproduce this as well. That might be why the support engineer could not understand your situation.
