Fastest Method for Implementing a User Defined Operand

  • 6 September 2019
  • 4 replies
  • 102 views

I'm looking for the community's input/experience on the fastest method for implementing a user defined operand. Are there any appreciable differences between using a compiled DDE or ZOSAPI operand vs the macro language? I know this will be dependent on the specific calculation involved, but I'm more curious about any fundamental differences between the methods in terms of the speed of passing data back and forth between ZOS and the operand computation.



Thanks for any responses.

4 replies

ZPL Macro will be the fastest to implement for sure. Pick up the supplied sample, modify it as required, and BLAM! it's ready to go.
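For reference, a ZPLM operand is just a macro saved as ZPLxx.ZPL in your Macros folder, where the xx matches the Mac# column of the ZPLM row in the Merit Function Editor. A minimal sketch (the calculation here is made up purely for illustration; check the ZPL help for the exact keyword syntax):

```
! ZPL01.ZPL -- sketch of a user-defined ZPLM operand (illustrative only)
! Read the Hx, Hy, Px, Py cells of the ZPLM row that called this macro:
hx = PVHX()
hy = PVHY()
px = PVPX()
py = PVPY()
! Hypothetical calculation: radial ray landing position on the image surface
RAYTRACE hx, hy, px, py
n = NSUR()
r = SQRT(RAYX(n)*RAYX(n) + RAYY(n)*RAYY(n))
! Hand the value back; Data = 0 in the ZPLM row reads OPTRETURN slot 0
OPTRETURN 0, r
```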



A ZOS-API operand will require an external compiler or interpreter, like Visual Studio, but it's still pretty easy assuming you already know C#, Python, MATLAB, etc., as the coding language is not supplied with OpticStudio. This kind of extension is a separate program that connects to OpticStudio.



Don't use DDE for a new project. ZOS-API replaces DDE, which we now support only for backwards compatibility.



The speed differences really come down to the calculation you need to perform. ZPL is quite fast and has low overhead, as it's built into OpticStudio. ZOS-API gives you the speed of a fully compiled application, with a little more overhead because you have to set up the interprocess communication. However, ZPL does not have a library of FFT, statistical, or similar functions for you to call, so if a calculation isn't already supported by a ZPL keyword, you'll have to roll your own.



I would suggest starting with a ZPLM operand. 90% of the work is getting the program flow (your algorithm) correct, and the macro will likely be 'fast enough'. However, if you are writing this operand for other people to use, or if you need more speed, then convert it to ZOS-API using C# (or anything else, but I use C# and find it fairly easy to write simple things, while it gives the more talented/knowledgeable the ability to produce truly well-optimized code).



In general, people are much more forgiving of their own code's foibles, so if you are going to be the only user, a ZPLM macro will be fine unless you're doing something with a heavy computational load. If you're expecting other people to use it, use ZOS-API: it's faster, and you also get access to whatever the compiler offers in error handling, user feedback, etc.



HTH,



- Mark


I have done some measurements of execution times for ZPLM (ZPL macro) and UDOC (compiled C++ program) user-defined merit function operands. For a ZPL macro, the overhead time on my computer was 12 ms + 0.3 ms per surface; the overhead grows with the number of surfaces in the OpticStudio file and can get significantly longer than 12 ms if you have hundreds of surfaces in your design. The UDOC had an overhead of 250 ms + 0.15 ms per surface. So my conclusion was that for moderately complex calculations, the ZPL macro is faster. You would have to be tracing thousands of rays, or doing other similarly intense calculations, for the increased speed of a compiled program to have a chance of making up for the increased overhead.
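Plugging Paul's measured numbers into a simple linear model shows where the two approaches cross over on overhead alone (the coefficients are his measurements above; the break-even point is just arithmetic on them, and it ignores the cost of the calculation itself):

```python
# Paul's measured per-call overheads (ms): fixed cost + per-surface cost.
ZPL_FIXED, ZPL_PER_SURF = 12.0, 0.3       # ZPLM macro
UDOC_FIXED, UDOC_PER_SURF = 250.0, 0.15   # compiled UDOC

def overhead_ms(fixed, per_surf, n_surfaces):
    """Linear model of one operand evaluation's overhead."""
    return fixed + per_surf * n_surfaces

# Break-even surface count: 12 + 0.3*n = 250 + 0.15*n  ->  n = 238 / 0.15
break_even = (UDOC_FIXED - ZPL_FIXED) / (ZPL_PER_SURF - UDOC_PER_SURF)
print(round(break_even))  # -> 1587
```

So on overhead alone, the macro stays ahead until the design has well over a thousand surfaces; the compiled operand only pays off once the calculation itself dominates, which matches Paul's conclusion.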



Also, to avoid the overhead as much as possible, combine as many calculations as possible into one macro call.  For relatively simple calculations, the execution time is dominated by the overhead.
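In ZPL terms, that means doing all the expensive work in one macro and returning each quantity through a different OPTRETURN slot; separate ZPLM rows with the same Mac# but different Data values then read the individual results (sketch only; rms, peak, and mean here are placeholder variables, and see the ZPLM documentation for when results are cached between rows):

```
! ...after computing everything once in this macro...
OPTRETURN 0, rms    ! read by the ZPLM row with Data = 0
OPTRETURN 1, peak   ! read by the ZPLM row with Data = 1
OPTRETURN 2, mean   ! read by the ZPLM row with Data = 2
```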

Paul,



Thanks so much for your reply. This was exactly the information I was looking for!



Best,



Ramzi

Hey Paul, that's good example data, thank you.



I do think that sometimes ZPL gets a bum rap because it's so easy to use. Despite being interpreted and not compiled, it has low calling overhead and is often perfectly 'fast enough'. You do need to be doing something pretty intensive to really need a compiled alternative.



All the other advice Paul gives is spot-on too. Pay the overhead once: get your macro/extension/API to do everything you need in a single call, and send all the data back together. Don't write separate code for each piece of data, as you'll pay the overhead every time.
