Abnormally long load/optimization times when using TOLR



As of this week I've started running into abnormally long load times when updating a merit function editor that has the TOLR operand in it. I am aware that TOLR is very computationally intensive, but I've been using TOLR for tolerancing in my projects and hadn't run into any issues before now. Updating the merit function with TOLR, which normally took my laptop approximately 10 seconds, can now take upwards of several minutes. This has consequently led to obscenely long optimization times whenever TOLR is used, much longer than before. Has anyone else run into a similar issue and can offer some suggestions?


More information: the problem persists throughout all of Zemax, regardless of the file or the system, including Zemax's sample files (e.g. the Cooke triplet). I've updated OpticStudio from 20.1 to 20.2 in the hope that the new version might fix this, but it did not. I've tried calling technical support, but since I am using a student license, they could not help me much.


I am using a Dell Inspiron, i7-8550U @ 1.80 GHz, 8 GB RAM.




Hi Jake,

I'd suggest checking the tolerance settings. Run the tolerancer manually, and load the TOLRxxx.TOP file that holds the settings for the analysis. How long does that take?


- Mark 


Hi Mark,

I think I must have overwritten that .TOP file, as the values are no longer Zemax's defaults. Could you possibly list the setup for TOLRxxx.TOP? The tolerancing setup I'm using is:

Mode: Sensitivity
Criterion: Diff. MTF Avg
Sampling: 4
Comp: Paraxial Focus
Field: Y-Symmetric
Monte Carlo: 600 runs


With a total of 74 tolerance operands and a fairly complex aspheric imaging lens, the tolerance analysis took about 30-40 minutes, which is not out of the ordinary. However, optimization with TOLR takes much longer than it should (after 1 hour, the local optimizer was still on Cycle 1, whereas previously multiple cycles would complete within that time).


Hi Jake,


Set up the Tolerancer as you want it to be, and then press the Save button on the Settings dialog. Save the settings to a file called TOLR***.TOP, where *** is an integer between 001 and 999. Then, when you call TOLR in the merit function, you refer to that *** integer to tell OpticStudio which settings to use.
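If it helps, the naming rule can be sketched in a few lines of Python (purely illustrative; the zero-padded index between 001 and 999 is the only thing it encodes):

```python
def tolr_settings_filename(n: int) -> str:
    # TOLR settings files are named TOLRxxx.TOP, where xxx is a
    # zero-padded integer between 001 and 999
    if not 1 <= n <= 999:
        raise ValueError("TOLR settings index must be between 1 and 999")
    return f"TOLR{n:03d}.TOP"

print(tolr_settings_filename(7))    # TOLR007.TOP
print(tolr_settings_filename(123))  # TOLR123.TOP
```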


There's a good Knowledge Base article here: https://my.zemax.com/en-US/Knowledge-Base/kb-article/?ka=KA-01370 :-)


Much as I love TOLR, I think the new HYLD approach beats it in terms of coming up with designs that are insensitive to tolerances.


- Mark


Hello Mark,


I've recreated the Tolerancer settings file and saved it as you suggested, but the merit function still takes minutes simply to update.

Regarding HYLD, I am familiar with it and have used it in various cases, but for this particular system I was unsure it would be beneficial. I am designing a cell-phone camera lens with very tight tolerances and plastic aspheres, where minimal track length is highly desirable. Ken Moore (who created HYLD) stated in his 2019 paper that looser tolerances increase the HYLD advantage, and that in the limit of very tight tolerances the HYLD design will not be superior. Also, in my experience, HYLD pushes the system toward the maximum allowable track length (in order to decrease incident ray angles between surfaces).


Hi Jake,


Can you share your file as a ZAR? Then we can see what's going on.


Everything pushes the design to its maximum length. That's why we usually just set a maximum length, rather than optimize to obtain some other length where performance drops on either side. 


Hi Mark,

Thank you for your replies and input! I've attached the .ZAR file. The primary concern is the TOLR operand; I may eventually resort to uninstalling and reinstalling Zemax to see if that fixes the unusually long computation time.


Hi Jake,


OK, I have a couple of thoughts on your (very complicated!) system.


1. You are using Paraxial f/# as the aperture definition, which is fine for initial setup. But you can't use it for tolerancing. Imagine some tolerance that changes the f/#: OpticStudio will compensate for the tolerance by changing the ray it traces to ensure the f/# condition is met. That's not what you want. Switch to Float-by-Stop so the aperture is defined by the diameter of the stop surface for tolerancing. That's the physically realistic case. Note the Design Lockdown tool does this for you, and should always be used as you move from optimization to tolerancing. Of course, you want to use tolerancing in the optimization, so you're not ready for the design lockdown tool yet. But you should still set the aperture by the system aperture size directly, and use an operand to control the f/# when you want to tolerance.


2. Your tolerancing criterion is diffraction MTF (which you said in an earlier post, and I failed to see). That is simply going to be a big calculation, especially as it is set to Sampling = 4, i.e. a 256x256 grid, and if you have n variables you will do it n+1 times per optimization cycle. Instead, for tolerancing purposes, use RMS Wavefront. You'll get best MTF near where the RMS wavefront optimizes anyway, but it will be much, much faster. I made that change (and the change to the aperture) in the file you sent me, and it is attached.


3. In your even asphere surfaces, you are using R, k, and the r^2 term of the Even Asphere. (R, k) and the r^2 term are degenerate, so you should only use one: either (R, k) or the r^2 term. I suggest you go back to an earlier version of the file, set all the r^2 terms to zero and do not make them variables. Having both makes the optimization needlessly complex.
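To see the degeneracy numerically, here's a small Python sketch (the parameter values are made up for illustration): near the axis the even-asphere sag expands to roughly (c/2 + a1)·r², so two parameter sets with the same combined quadratic coefficient are nearly indistinguishable:

```python
import numpy as np

def even_asphere_sag(r, c, k, a1):
    # Even asphere sag: conic base term plus the explicit r^2 polynomial term.
    # Near the axis this expands to (c/2 + a1)*r^2 + higher-order terms,
    # so the curvature c (= 1/R) and a1 are degenerate to leading order.
    return c * r**2 / (1 + np.sqrt(1 - (1 + k) * c**2 * r**2)) + a1 * r**2

r = 1e-3  # small radial height (paraxial regime)
# Two (c, a1) pairs with the same combined coefficient c/2 + a1 = 0.01:
s1 = even_asphere_sag(r, c=0.02, k=0.0, a1=0.0)
s2 = even_asphere_sag(r, c=0.01, k=0.0, a1=0.005)
print(s1, s2)  # the two sags agree to roughly ten significant figures
```

The optimizer sees two knobs that do (almost) the same thing, which is why leaving both variable makes the problem ill-conditioned.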


4. You're using TOLR data item 0, which is the expected change. It's possible to find a design with poor performance but low tolerance sensitivity, and that's not the solution you want. I suggest you use data item 2, which is the nominal plus the expected change. This is the number you want to be as small as possible.


5. Last, and you probably know this, be very careful using paraxial quantities when you have such wildly aspheric surfaces. Paraxial quantities are computed using R, k and r^2 only. The features in Analysis...Applications...PAL/Freeform are very useful in cases where paraxial quantities can't be relied upon. Always use calculations based on real ray tracing when modeling wildly aspheric/freeform surfaces, as the surface vertex power has little to do with the power over the aperture of the surface.



Lastly, if I may sound like an old geezer: back when I were a lad, optimizing a zoom lens took days or weeks, so don't be too harsh on the time TOLR takes. If you have n variables, it is doing n+1 sensitivity analyses per optimization cycle. It's a very, very big calculation.
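To put a rough number on that, here's a back-of-the-envelope sketch (the 20-variable count is a guess; substitute your own):

```python
# Rough cost model: each TOLR evaluation reruns the tolerance analysis, and
# gradient-based optimization needs n_variables + 1 TOLR evaluations per
# cycle (one nominal, plus one per-variable derivative). Each tolerance
# analysis in turn evaluates the criterion once per tolerance operand.
def tolr_criterion_evals_per_cycle(n_variables, n_tolerance_operands):
    return (n_variables + 1) * n_tolerance_operands

# Hypothetical 20 optimization variables with Jake's 74 tolerance operands:
print(tolr_criterion_evals_per_cycle(20, 74))  # 1554 criterion evaluations per cycle
```

With a diffraction MTF criterion on a 256x256 grid, each of those evaluations is itself expensive, which is why the RMS Wavefront criterion helps so much.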


 


Hope that helps,


- Mark


Hi Mark,

Thanks for the in-depth analysis of my system; your input is much appreciated! Some of the points you've made I am familiar with, but the rest is very helpful for my learning, and I'll make the necessary changes.


I suppose the main point I wanted to make is that the time to update the merit function with TOLR suddenly increased tenfold without any obvious reason. Before Monday, it would take roughly 5-10 seconds to refresh the merit function with TOLR, but now it takes an average of 3 minutes (same file, same tolerance criterion!).

This only happens when Diff. MTF Avg is selected as the tolerance criterion; if RMS Wavefront is the criterion, refreshing the merit function takes less than a second. Strangely enough, when I switch the criterion from Diff. MTF Avg to RMS Wavefront and try refreshing the merit function editor again, I run into the abnormally long load time again unless I close and restart OpticStudio. Someone suggested the possibility of corrupted .TOP files or internal processes in OpticStudio that are impeding the processing time, but I am not sure. I am aware of the long computation times when using Diff. MTF Avg, as previous optimizations of the same system with TOLR took approximately 3-5 hours on average.

In any case, I will continue troubleshooting and maybe do a fresh install of OpticStudio on my laptop. Thank you very much for your time and effort in addressing this problem!


Hi again,


A follow-up question to one of the points you've made: is it advised to match the optimization criterion to the tolerance criterion? For example, if I set the tolerance criterion to RMS Wavefront, would optimizing for wavefront produce a better imaging system than optimizing for contrast?


-Jake


Hey Jake, it's common to do so but not required. It depends on what you want. I usually use contrast or wavefront for optimization and tolerancing, but switch to something like diffraction MTF or Strehl for output of a 'final number'. Tolerancing scripts are great for this. You can tolerance on wavefront, then load an MTF merit function to report that data. 


Remember that, in general, all diffraction-based criteria optimize as the RMS wavefront goes to zero.
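That relationship can be made concrete with the Maréchal approximation, which ties the Strehl ratio directly to the RMS wavefront error (a quick sketch; the λ/14 input is the textbook diffraction-limited threshold):

```python
import math

def strehl_marechal(rms_wavefront_waves):
    # Marechal approximation: Strehl ratio ~ exp(-(2*pi*W_rms)^2),
    # with W_rms the RMS wavefront error in waves
    return math.exp(-(2 * math.pi * rms_wavefront_waves) ** 2)

# The classic lambda/14 criterion gives a Strehl ratio of about 0.8,
# the usual "diffraction limited" threshold
print(round(strehl_marechal(1 / 14), 3))
```

So driving the RMS wavefront down during optimization also drives up Strehl, MTF, and the other diffraction-based metrics.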


- Mark


Hi Mark,


Thank you for your suggestions and insight, I will see what I can do!


-Jake


Hi Mark,

I was able to open the merit function editor in Jake's file, but I failed to open it in your file. Did I do something wrong?


 
