How to make better use of the CPU when using ZPLM for tolerance analysis

  • 27 December 2021
  • 1 reply

Question 1: I use ZPLM plus a macro to calculate edge spread size, and a tolerance script to get the tolerance result for edge spread size variation. But the CPU is only at about 10% utilization, so the computation takes a very long time, even when I select the maximum number of cores in the tolerance analysis. Is there a setting I am missing that would let the macro make better use of the CPU?

Question 2: When I open the Zemax file, it has become very slow, and I have to wait a long time before it is fully open. Why does this happen? It used to open quickly, and I am using the same Zemax file on the same computer.


Best answer by Alissa Wilczynski 6 January 2022, 01:08




Hi Lei,

It sounds like, in both cases, your macro takes a long time to execute. While any built-in analysis is multi-threaded, a macro can only be executed on a single thread, because OpticStudio doesn’t know what the macro is going to change about the system. Therefore, it can’t assume that running 4 (or 8 or 16, etc.) copies of that macro on the same system won’t result in disaster. I can’t think of a way to speed up your tolerance analysis, but you can prevent your macro from re-calculating when you open your file. That will let your file open faster, but the next time you update the window that calls your macro, it will pause to calculate again.
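As a rough illustration of why a single-threaded macro caps your CPU usage (a generic Python sketch, not OpticStudio code): on a machine with N logical cores, one fully busy thread can use at most about 100/N percent of the total CPU, so the ~10% utilization you observed is exactly what you'd expect from one thread on a machine with around 10 logical cores.

```python
import os

def max_single_thread_utilization(logical_cores: int) -> float:
    """Upper bound on total CPU % that one fully busy thread can reach."""
    return 100.0 / logical_cores

# On a hypothetical 10-logical-core machine, a single-threaded
# macro tops out at about 10% total CPU:
print(max_single_thread_utilization(10))  # 10.0

# For the machine running this script (os.cpu_count() reports logical cores):
cores = os.cpu_count() or 1
print(f"One thread can use at most ~{max_single_thread_utilization(cores):.1f}% of this CPU")
```

This is why selecting more cores in the tolerance analysis settings doesn't help here: the extra cores sit idle while the one thread executing the macro does all the work.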

To prevent your system from automatically re-calculating when you open a file, you can go to Setup > Project Preferences and adjust some/all of the following:

  • Editors > Update Merit Function on Load (disable)
  • Project Preferences > General > Auto Apply (disable; this likely isn’t the cause of your current slowness, but I want to share it for other readers whose slowness has other causes)