
Hello,

When using objects as sources, it would be very useful to be able to normalize the power to the surface area of the object. A good example of where this is useful is a blackbody source, where the emitted power per unit surface area is well known.
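To illustrate the kind of normalization I mean, here is a minimal sketch (the Stefan-Boltzmann relation is standard; the surface area argument is exactly the number I can't currently get at from OpticStudio):

```python
# Sketch: total power to assign to a blackbody source object,
# given its temperature and (if it were available) its surface area.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)

def blackbody_power(temperature_k, surface_area_m2, emissivity=1.0):
    """Total radiated power = emissivity * sigma * T^4 * A."""
    return emissivity * SIGMA * temperature_k**4 * surface_area_m2

# Example: a 1 cm^2 surface at 1000 K radiates roughly 5.67 W
print(blackbody_power(1000.0, 1e-4))
```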

Internally Zemax must be calculating the surface area when doing the raytrace in order to distribute the rays evenly.

Is it possible to have access to this information?
Has someone already created a macro to do this for generic objects?

thanks,
Ross

 

It appears that volume (and mass), and presumably surface area as well, are *not* explicitly available to users for non-sequential objects in OpticStudio; see: NSC object volume. I'm trying to think of a clever way to find the surface area without exporting the object, but nothing is coming to mind at the moment...


Hi Ross,

This is a bit of a workaround, but you can get the total surface area of a volume object by using it as a detector first. The text detector viewer lists the flux and irradiance for each of the object's sub-pixels, and since irradiance is flux per unit area, the area of each pixel triangle is simply its flux divided by its irradiance; summing over all pixels gives the total surface area. In this simple example I used a rectangular volume with a surface area of 6 mm² and placed a small rectangular volume source inside the object.
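If it helps, here is a minimal Python sketch of that summation (assuming you have already pulled the per-pixel flux and irradiance values out of the text detector viewer; the parsing itself will depend on your OpticStudio version's text layout):

```python
# Sketch: per-pixel area = flux / irradiance, summed over all pixels
# that actually received rays.

def total_surface_area(flux, irradiance):
    """flux in watts, irradiance in W/mm^2, one entry per tessellation pixel."""
    area = 0.0
    for f, e in zip(flux, irradiance):
        if e > 0.0:           # skip pixels with no incident rays
            area += f / e     # pixel triangle area in mm^2
    return area

# Example: two triangles per face of a 1 mm cube, evenly illuminated
flux = [1.0 / 12] * 12            # total source power of 1 W
irradiance = [1.0 / 6] * 12       # W/mm^2 on each 0.5 mm^2 triangle
print(total_surface_area(flux, irradiance))   # 6.0 mm^2
```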

 

Hope this helps out.

Best regards,

Sven

 


Clever solution!  With more complicated objects that have curved surfaces, tessellation generally leads to a very large number of small triangles (pixels), so it would be important to make sure every pixel receives at least one ray. That said, if only some of the smallest triangles go unilluminated, the resulting error might not be too large.  Also, more complicated objects may require multiple sources for complete illumination; nevertheless, it's definitely an interesting approach.
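As a rough way of quantifying that worry, one could count the un-illuminated pixels and bound the missing area by the average hit-pixel area (a hypothetical sketch reusing the flux/irradiance lists from the workaround above; the bound is crude, and pessimistic if the un-hit triangles really are among the smallest):

```python
# Sketch: estimate how much area could be hidden in pixels with no rays.

def coverage_check(flux, irradiance):
    """Return (summed area of hit pixels, crude bound on missed area)."""
    areas = [f / e for f, e in zip(flux, irradiance) if e > 0.0]
    n_missed = len(flux) - len(areas)
    # Assume each un-hit triangle is no bigger than the average hit one.
    mean_area = sum(areas) / len(areas) if areas else 0.0
    return sum(areas), n_missed * mean_area

hit_area, missed_bound = coverage_check(
    flux=[0.25, 0.25, 0.25, 0.0],
    irradiance=[1.0, 1.0, 1.0, 0.0],
)
print(hit_area, missed_bound)   # 0.75, 0.25
```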


Also, if the object is more complex and has internal structures, it might become difficult to distinguish pixels on the outside from those facing the inside. It might still be possible to place absorbing objects between the internal and external features and use outside sources to “filter out” the uninteresting internal pixels, but then again it might be easier to just export the object and get the information from CAD software.

 

