@Michael.Young
I’ve never run into this issue. You’re probably already aware of such solutions, but in general I like to use JSON files to store metadata; you could also look into YAML and TOML.
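For instance, a minimal Python sketch of what I mean (the file names and fields are just placeholder examples, nothing specific to your setup):

import json
from datetime import datetime, timezone

# Hypothetical metadata record for one analysis run; the fields are just examples
metadata = {
    "lens_file": "my_lens.zmx",
    "analysis": "some analysis",
    "result_file": "analysis_output.txt",
    "run_time": datetime.now(timezone.utc).isoformat(),
}

# Write the metadata next to the analysis output so the two stay together
with open("analysis_output.json", "w") as f:
    json.dump(metadata, f, indent=2)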
Take care,
David
@David.Nguyen
Thank you for the reply. I admit to not being the most proficient programmer and having only a cursory familiarity with JSON and somewhat more with YAML.
Would you mind briefly outlining what your workflow would look like for the following situation?
You have a base Zemax file and an analysis script that runs in ZPL. It saves a text file of the analysis. The analysis needs to be run for multiple files, and each analysis is saved with a separate file name distinguished by a suffix that increments (00, 01, 02, etc.).
Would you have a subroutine or child macro that would record the file name and save metadata in a YAML file in the same directory? That seems very reasonable. I frequently rerun this analysis, and I worry a little bit about the YAML file accumulating incorrect information. How do you maintain “good hygiene” with the database? What do you use to query the database? Do you simply open the file to read it?
Thanks again. This is very helpful.
@Michael.Young
I’m not claiming to be the most proficient programmer either, but here’s what I have to say about your workflow.
Originally, I assumed you needed a human-readable file format, which is why I suggested those formats above. However, if the analysis results are rich and better understood with graphs, quite often I would save those in binary files. I work quite a bit with Python, and I like the simplicity of Pickle for that.
https://www.geeksforgeeks.org/understanding-python-pickling-example/
I was told that Pickle isn’t good for large amounts of data, but it hasn’t been an issue for me so far, and my analysis results remain small. Then, I would have a sort of Python dashboard to load the binary analysis results and display them how I like.
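As a rough sketch (the file name and the contents of the results dictionary are made up for illustration), saving and reloading a result with Pickle looks something like this:

import pickle

# Hypothetical analysis result; in practice this could hold arrays, settings, etc.
results = {"field_size": 1.2, "image_size": 2.3, "values": [0.1, 0.5, 0.1]}

# Save the result as a binary file
with open("analysis_results.pkl", "wb") as f:
    pickle.dump(results, f)

# Later, the dashboard script loads it back for plotting
with open("analysis_results.pkl", "rb") as f:
    loaded = pickle.load(f)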
Regarding naming: I like using a timestamp as the suffix, but an increment is good as well, I guess. Then, instead of your file being called:
geometricimageanalysis_myfile_fieldsize-1.2_imagesize-2.3_rays-1000_etc.txt
I would give the text file a short name that relates to the experiment, such as analysis_of_lens_x_001.txt. And then, within the text file, you’d have a header with the analysis settings:
[settings]
analysis = "Geometric Image Analysis"
lens_file = "my_file.zmx"
field_size = 1.2
image_size = 2.3
rays_x_1000 = 1000
start_time = 2024-02-07T15:03:45.2211466Z
my_results = ...
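To make the “query” part concrete, here is a minimal Python sketch for reading such a header back. The file name, and the assumption that the raw results simply follow the settings lines, are mine; if you keep the header strictly valid TOML, you could parse it with Python’s built-in tomllib instead.

# Minimal header reader; collects "key = value" lines from the [settings] section
settings = {}
with open("analysis_of_lens_x_001.txt") as f:
    for line in f:
        line = line.strip()
        if line == "[settings]" or not line:
            continue
        if "=" not in line:
            break  # assume the raw results start here
        key, value = line.split("=", 1)
        settings[key.strip()] = value.strip().strip('"')

print(settings["analysis"], settings["field_size"])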
As I mentioned, if the analysis is complex, I would “query” it with Python. And in that sense, it might be more straightforward to work with the ZOS-API from the beginning. If you really need to maintain a high level of “hygiene”, I’d suggest saving the lens file as a ZAR after running the analysis and keeping this archive alongside the metadata, and perhaps adding an identifier to the ZAR file name that will also be present in the text file, so you always know which analysis and which file go together.
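Purely as a hypothetical illustration of that last point, the identifier could be generated once per run and reused in both file names (it could just as well go inside the header); the naming pattern here is only an example:

import uuid
from datetime import datetime, timezone

# One identifier per run, shared by the archive and the results file
run_id = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ") + "-" + uuid.uuid4().hex[:6]

zar_name = f"analysis_of_lens_x_{run_id}.zar"  # archive saved after the analysis
txt_name = f"analysis_of_lens_x_{run_id}.txt"  # analysis output with the settings header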
It’s probably far from perfect, but I hope you can take something away from this.
Take care,
David
@David.Nguyen,
Thank you for your feedback. That is very interesting.
Sincerely,
Michael