I was watching a webinar from Ansys and I was surprised to see that SPEOS was suggested for stray light.
Why not use non-sequential ray tracing in OpticStudio for stray light analysis?
Hey Brian,
I think the two main reasons Ansys suggests Speos is a) money and b) user interface.
Speos is an additional license, so of course Ansys will try to push users toward buying a second license (one for OpticStudio, one for Speos).
The second, and more useful, reason is that Speos has a “component-based design” interface, very similar to other major CAD platforms. Although the editor in Non-Sequential Mode is intuitive for optical designers coming from Sequential Mode, many new users and non-optical engineers find the NSC editor confusing.
Speos was originally designed as “human vision” simulation software for the automotive industry, so it mainly dealt with HUD projections, headlight design, and interior illumination. However, since acquiring both Speos and Zemax, Ansys has shifted the marketing of Speos from “human vision” alone to also include stray light.
If you’re familiar with Non-Sequential mode (and especially if you have experience with the ZPL/ZOS-API), you can do 100% of your stray light analysis in OpticStudio Non-Sequential and you don’t need to buy Speos.
My perspective is that if you are doing instrument design and need quantitative analysis you’d use OpticStudio’s NSC mode.
If you want visual appearance and are doing human factors engineering, Speos is for you.
Both are very valid, and I like OS and SPEOS being sold side by side, although I don’t think many people will use both.
As a one-man-band, I’m always trying to get more out of my investment in software. My clients won’t pay me to purchase SPEOS.
Few of my clients care about stray light. Those that do are often frustrated that I don’t have the BRDF data to perform stray light analysis. I came close to building a DIY BRDF measurement tool. It is pretty simple to throw together, but time (life) is short.
While I enjoy NSC mode for many design problems, I have not been able to use it for Tissue Optics. Hitting that limitation makes me wonder what other limits NSC tracing has, where it is best used, and when it is not appropriate.
The most interesting application I’ve been working on recently with NSC is self-emission for infrared (thermal) imaging. There I’m able to use it effectively as long as I exercise caution!
I’m sure they would pay if it delivered something they care about. I don’t think the Zemax-NSC market ‘cares’ about the human-factors market, and vice versa.
What’s the problem with tissue optics? Running out of segments? I don’t know if Speos offers anything in that regard. But I totally agree that in stray light analysis, OpticStudio is more capable than users’ ability to supply input data. Nobody has made a commercial success out of BRDF measurements for software simulations. Even back in the days of Radiant Zemax (remember that?), few people cared enough to pay the hourly rate for a measurement, let alone amortize the capital cost.
A client of mine did make money from both BRDF and BSDF measurements; they offer both services and instruments. I’m not interested in the business; I’m more interested in the capability, and in open-sourcing the design of the hardware and software to achieve the purpose: getting the data.
The problem with Tissue Optics is the number of intersections/segments. There is no guidance for setting these parameters to converge on a solution that matches the scientific literature on the subject. Furthermore, I maxed out both limits and still was not able to trace 3 mm of whole blood.
Hey Brian,
I remember someone doing some super clever tricks combining NSC with Matlab to do polarization sensitive Monte Carlo Mie scattering simulations in tissue:
Polarization-sensitive scattering in OpticStudio – Knowledgebase (zemax.com)
Holistic Monte-Carlo optical modelling of biological imaging | Scientific Reports (nature.com)
Although other non-sequential ray tracing software might have a higher intersection/segment threshold (Zemax has these hard-coded from 10+ years ago when computers weren’t as powerful), I don’t know of any non-sequential stray light software that has a robust BRDF/BSDF library for all Tissue Optics.
Assuming you’ve maxed out the Intersections/Ray to 4000, the Segments/Ray to 2000000, and adjusted the Minimum Ray Intensities, you might be able to create an iterative workflow to get around the intersection/segment limitation.
It is super kludgy, but you could:
This is fairly simple if you have just forward scattering and then a single layer of back scattering, but if your scattering events can go into 4π steradians, then this becomes difficult:
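To make the slicing idea concrete, here is a rough standalone sketch (plain Python, not OpticStudio or the ZOS-API, and all optical coefficients are assumed values, not real blood data) of tracing a thick turbid slab one sub-slab at a time, where only each ray’s exit state is handed to the next slice:

```python
import math
import random

def trace_slice(photons, mu_s, mu_a, dz, g=0.9):
    """Trace photons through one sub-slab of thickness dz (mm).

    Each photon is (z, cos_theta, weight). Photons that cross the far
    boundary are returned as inputs for the next slice; backscattered
    and extinguished photons are simply dropped here (a real model
    must bank that weight instead of discarding it).
    """
    mu_t = mu_s + mu_a
    survivors = []
    for z, ct, w in photons:
        z_local = 0.0
        while True:
            step = -math.log(random.random()) / mu_t   # sampled free path
            z_local += step * ct
            if z_local >= dz:          # crossed far boundary: save state
                survivors.append((z + dz, ct, w))
                break
            if z_local < 0:            # backscattered out of the slice
                break
            w *= mu_s / mu_t           # deposit absorbed weight
            if w < 1e-4:               # terminate near-dead photons
                break
            # sample a fresh Henyey-Greenstein cos(theta); a 1-D
            # simplification -- real codes rotate relative to the
            # current direction
            r = random.random()
            ct = (1 + g * g - ((1 - g * g) / (1 - g + 2 * g * r)) ** 2) / (2 * g)
    return survivors

# trace 3 mm of a turbid medium 0.1 mm at a time
photons = [(0.0, 1.0, 1.0) for _ in range(500)]
for _ in range(30):
    photons = trace_slice(photons, mu_s=60.0, mu_a=0.2, dz=0.1)
```

Because each call only needs the entry states, the full segment history never has to exist at once; the backscattered weight that this toy version throws away is exactly where the real bookkeeping effort would go.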
Interesting approach. Bear in mind that there is forward and backwards scattering. In an iterative approach, one would need to handle the flux of rays in/out of any given sub-volume.
I’ve run into Tissue Optics problems several times in my career. Typically it is a whole blood assay. It is frustrating that with the books on my shelf, all the C/C++ code for Monte Carlo ray tracing for Tissue Optics, and OpticStudio NSC that there are no canonical test cases.
In other words, there should be a reference case for whole blood, milk, … where one can see the analytical solution and the numerical solution and benchmark a model to ensure that it is computationally sound. I have yet to find such a reference.
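As a toy illustration of what such a benchmark looks like, here is a minimal check (plain Python; the attenuation coefficient and thickness are made-up values, not a blood or milk reference) comparing a Monte Carlo estimate of ballistic transmission against the Beer–Lambert analytic answer:

```python
import math
import random

def ballistic_fraction(mu_t, d, n=200_000):
    """Monte Carlo estimate of the unscattered (ballistic) transmission
    through a slab: a photon counts if its first sampled interaction
    length exceeds the slab thickness d."""
    hits = sum(1 for _ in range(n)
               if -math.log(random.random()) / mu_t > d)
    return hits / n

mu_t, d = 1.5, 1.0                  # assumed values: 1.5/mm, 1 mm slab
mc = ballistic_fraction(mu_t, d)
exact = math.exp(-mu_t * d)         # Beer-Lambert analytic answer
```

A real reference case would extend this to diffuse transmittance and reflectance, where adding–doubling or published Monte Carlo results provide the trusted numbers, but the pattern is the same: known analytic (or community-accepted) value on one side, the ray trace on the other.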
Nobody has “paid me enough” to create such a reference case. It takes a fair amount of work to do so. As a result, I haven’t had the incentive to play with parameters in OpticStudio long enough to determine if there is a combination of parameters that would enable such a ray trace.
Have you seen a canonical case?
OpticStudio doesn’t actually record the flux; it simply calculates the flux from the incoming and outgoing intensities, so the SDF file would be able to handle this. The biggest bookkeeping issue with backward scattering is that this could act like an etalon, so you don’t know until you trace and scatter the ray how many “slices” you’d need to propagate a single ray (since the bulk scattering intensity should be very low, the number of slices will most likely be low before the intensity drops below a given threshold).
Unfortunately, I haven’t seen a canonical, holistic, or “gold standard” case for any scattering model beyond the very basic (i.e., non-DLL based) scattering. As you mentioned, the biggest issues with something like this are a) matching the DLL model with real-world measurements and b) making a case generic enough to be useful to more than just the original authors.
I’m sure a canonical model exists within large companies, but most likely they consider it a trade secret or proprietary information and I haven’t seen anything in the public or commercial domain.
Here’s where I think the problem lies with doing tissue sampling.
When you trace a ray in Non-Sequential Mode, it can produce any number of child rays and have any number of ray-object interactions. Because we want to store data about the history of the ray, we have to store a fixed amount of data for each segment of every ray. This works fine for a wide variety of problems.
The issue arises when you are looking at scattering in a medium with a very small mean free path and scattering into all angles. The amount of memory needed to write out the history of the ray becomes too much, even for a well-spec’ed machine.
One solution is to super-compute it and just throw money at ever more memory, distributed computing, and the like. But equally, we could ‘black-box’ the ray trace in some region, so there is yet another set of ‘entrance’ and ‘exit’ planes defined. Rays entering the region would interact normally but would not store their data on a segment-by-segment basis. Intensity would be cumulatively reduced, wavelengths shifted, and so on, with no record of the history.
In principle this would be as accurate as what we do now, except that the history is lost and all we have is the cumulative state of the ray after millions of interactions. It might take some time to trace, and there would be no concept of ‘time to completion’, as the ray would just interact until its energy dropped below some threshold.
The benefit would be that the ray trace would need much less memory, as you’d only store the ray’s current state. I guess you could increment a counter that tracks how many interactions occurred, but you would lose the ability to look at, for example, interaction 38,374 and see exactly what its position and angle were before it went on to hit interaction 38,375.
It would be very hard to debug of course, but it could make scattering in highly turbid media feasible. You just wouldn’t be able to carry the history of the ray around.
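A minimal sketch of that black-box idea (plain Python, with assumed isotropic scattering and made-up coefficients, not a Zemax feature): the ray is reduced to its current position, weight, and an interaction counter, so memory stays constant no matter how many scatter events occur:

```python
import math
import random

def blackbox_walk(mu_t=100.0, albedo=0.99, cutoff=1e-6):
    """Propagate one ray through a turbid 'black-box' region keeping
    only its *current* state (position, weight) plus an interaction
    counter -- O(1) memory regardless of the number of scatter events."""
    x = y = z = 0.0
    w, n = 1.0, 0
    while w > cutoff:                              # trace until energy dies
        step = -math.log(random.random()) / mu_t   # sampled free path
        ct = 2.0 * random.random() - 1.0           # isotropic scatter
        phi = 2.0 * math.pi * random.random()
        st = math.sqrt(1.0 - ct * ct)
        x += step * st * math.cos(phi)
        y += step * st * math.sin(phi)
        z += step * ct
        w *= albedo                                # cumulative absorption
        n += 1                                     # history reduced to a count
    return (x, y, z), w, n
```

With these numbers each ray undergoes over a thousand interactions, yet only three floats, a weight, and a counter ever exist, which is exactly the trade described above: you get the cumulative exit state, but interaction 38,374 is gone forever.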
Mark,
That sounds like a great feature for Zemax to implement. Reducing the RAM requirements of difficult non-sequential simulations by getting rid of data I typically don’t use after setup would be very helpful. I would add that it would be useful to save some of the ray segments as test rays for visualization and troubleshooting: say, for every 100 rays you launch, you see all the segments of one of them.
Andy
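That 1-in-100 sampling idea can be sketched in a few lines (plain Python with made-up coefficients, not actual OpticStudio behavior): only every hundredth photon carries a segment list, while the rest keep just their final state:

```python
import math
import random

def trace_with_sampled_history(n_photons=1000, keep_every=100, mu_t=50.0):
    """Keep the full segment history for only 1 in `keep_every` photons
    (a visualization/troubleshooting sample); all other photons retain
    just their final state."""
    finals, sampled_paths = [], []
    for i in range(n_photons):
        z, w = 0.0, 1.0
        path = [] if i % keep_every == 0 else None   # sample this photon?
        while w > 1e-3:
            z += -math.log(random.random()) / mu_t   # forward step
            w *= 0.9                                 # per-event loss
            if path is not None:
                path.append((z, w))                  # segment record
        finals.append((z, w))
        if path is not None:
            sampled_paths.append(path)
    return finals, sampled_paths
```

The memory cost scales with the sample fraction rather than the total segment count, which is what would make the black-box trace debuggable without giving back its main benefit.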
There is Monte Carlo ray tracing code that already performs these volumetric scattering ray traces without using OpticStudio. The code has been around for literally decades.
So, it is clearly possible to compute the solution with “older” computers.
The real questions are:
What is the code you’re referring to, Brian?
The code I’m referring to consists of the code bases written by the leaders in the field of Tissue Optics. Experts I know of in this field include (forgive me if an expert is not referenced here):
There are scores of papers and books on the topic of Tissue Optics.
See:
https://pypi.org/project/pytissueoptics/