Synthetic Images to .cub

Dear community
I am working on quantifying how different inputs to the ASP tools influence the model of Vesta (Dawn), and to that end I would like to use a “ground truth” model. I have gotten images of a synthetic Vesta, with associated kernels, from JPL. Does anyone here have experience using ISIS with synthetic images? Is it possible to import .DAT files and eventually get them into .cub format? I have managed to combine a label from a PDS .IMG with the binary data of the .DAT and generate a .cub, but it of course uses the information from the original image. I guess I have to point to my own kernels and use an arbitrary time stamp. Has this, or something similar, been done before to your knowledge? Thank you in advance.

When you say that you’re trying to quantify how different inputs to ASP influence the model of Vesta, do you mean that you’re trying to make DEMs in ASP and you would like to compare them with an independent reference DEM (“ground truth”) of Vesta?

Yes, I am trying to make DEMs in ASP (and I have made some already using PDS data from Dawn), and so far I have just been comparing them to each other. But of course none of the results are “correct”, so I am just comparing two models without any way of quantifying anything. So I want to make DEMs of the synthetic Vesta (I have 500+ images) with the same settings and compare the different synthetic DEMs to each other. I do not want to compare Dawn data with the synthetic data - just synthetic with synthetic. I hope this clarifies things?

I’m not sure I understand what you mean by “synthetic data.” Do you mean hillshades of DEMs?

You could compare your ASP results to this global DEM of Vesta by Preusker et al.: https://astrogeology.usgs.gov/search/map/Vesta/Dawn/DLR/HAMO/Vesta_Dawn_HAMO_DTM_DLR_Global_48ppd
That model was derived via stereo photogrammetry, but not using ASP, so any DEMs you generate in ASP will be independent in that sense.

I have a ‘fake’ 3D model of Vesta that is just an ellipsoid with craters scattered on it according to some crater frequency (it was created before Dawn flew). Then a non-existent camera ‘took images’ of the model via ray tracing - i.e. they were generated synthetically - and now I want to use these images to create a DEM with ASP and compare it with an area of the fake 3D model to see how well the stereo worked :slight_smile:

Thanks for the suggestion on comparing with the DLR model :slight_smile: That is in principle my end goal. I want to quantify why, how and when the ASP stereo differs from the established models.

This is going to be pretty hard to do unless you have the image labels set up for a specific camera model. ISIS has an ideal camera model that you could potentially use, but it’s not really publicly documented and is mostly used for noproj output.

Is the synthetic data fake Dawn FC imagery and labels?

Okay, I think I understand now. If you’re working purely with synthetic data, then you may not actually need to create fake SPICE kernels and process data in ISIS.

If you have these synthetic images, then presumably there was some sort of camera model used to create them. If you know the intrinsic properties of the synthetic camera, you can generate a pinhole model for each of your synthetic images using ASP’s cam_gen tool. This would allow you to pass the synthetic images + pinhole models directly into ASP’s stereo tools as you would any other image data.
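To make that concrete, a hypothetical cam_gen invocation might look like the sketch below. All file names and numeric values are made up for illustration - the focal length, optical center, and pixel pitch must come from whatever camera model generated your synthetic images, and the corner lon/lat values from your kernels. Check `cam_gen --help` in your ASP install for the authoritative option list.

```shell
# Sketch only: build a pinhole (.tsai) camera model for one synthetic image.
# The intrinsics below are placeholders, NOT Dawn FC values.
cam_gen synthetic_0001.tif                      \
  --focal-length 150.0                          \
  --optical-center 512 512                      \
  --pixel-pitch 0.014                           \
  --lon-lat-values "30 10 31 10 31 11 30 11"    \
  --reference-dem vesta_reference_dem.tif       \
  -o synthetic_0001.tsai
```

The resulting image/.tsai pairs can then be fed straight to `parallel_stereo` without going through ISIS at all.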

No, the synthetic data is in .DAT format and it does not have any labels :frowning: I do have information about the position etc. from NAIF kernels that were created for the data.

Ok, thank you, that might be useful! I will look into that tool!

I’m guessing that the .DAT extension of your synthetic data simply indicates they’re flat binary images. In order to convert that into something ASP knows how to read, I’d recommend using GDAL and the gdal_translate utility. You could convert the files into cubes with gdal_translate, but if you’re not using an ISIS-based camera model, it might be easier in the long run to simply convert the images to GeoTIFFs instead.
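One way to do that without guessing at command-line raw-format flags is to wrap each .DAT in a small VRT file describing its layout, then convert. The example below is a sketch with assumed values - a 1024x1024, little-endian, 16-bit unsigned image - so the size, data type, byte order, and file names all need to be adjusted to match your actual files:

```shell
# Describe the flat binary layout in a GDAL VRT (VRTRawRasterBand),
# then convert to GeoTIFF. PixelOffset is bytes per pixel (2 for UInt16)
# and LineOffset is bytes per row (2 * 1024 here).
cat > synthetic.vrt <<'EOF'
<VRTDataset rasterXSize="1024" rasterYSize="1024">
  <VRTRasterBand dataType="UInt16" band="1" subClass="VRTRawRasterBand">
    <SourceFilename relativeToVRT="1">synthetic.DAT</SourceFilename>
    <ImageOffset>0</ImageOffset>
    <PixelOffset>2</PixelOffset>
    <LineOffset>2048</LineOffset>
    <ByteOrder>LSB</ByteOrder>
  </VRTRasterBand>
</VRTDataset>
EOF
gdal_translate synthetic.vrt synthetic.tif
```

If your .DAT files have a fixed-size header before the pixel data, set ImageOffset to that header size in bytes.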

Hi again. Still trying to do this, haha. A related question: I have managed to make a .cub file that runs through dawnfc2isis and spiceinit, and I can view it in stereo_gui (ASP). But something is wrong with how the information is read (surprise), and if I run caminfo on the .cub I can see that I have NULL values in the LowerLeftLongitude, LowerLeftLatitude, LowerRightLongitude, LowerRightLatitude, UpperRightLongitude and UpperRightLatitude keywords. My question is: which SPICE kernels are involved here? How does it get these keywords?

P.S. My task is to quantify/qualify how different input conditions influence a stereo DEM made with ASP and ISIS. So I would like to do what they have done, but in a case where I have the ‘truth’ - that is the motivation for trying to import it into ASP :slight_smile:

The Kernels group in the cube label lists all of the SPICE kernels that spiceinit used. You can just look at the top of the file with something like less, or you can use catlab to print out just the label.
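For example, something like this (the cube name is hypothetical, and catlab requires an initialized ISIS environment):

```shell
# Dump the cube label to a PVL file, then look at the Kernels group
# that spiceinit wrote into it.
catlab from=my_synthetic.cub to=my_synthetic_label.pvl
grep -A 15 "Group = Kernels" my_synthetic_label.pvl
```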

Ah yes, my question was not clear. I know the list of kernels involved, and I actually set them myself by overriding the default kernel selection when running spiceinit. What I mean is: which kernel is responsible for populating the LowerLeftLongitude, LowerLeftLatitude, LowerRightLongitude, LowerRightLatitude, UpperRightLongitude and UpperRightLatitude keywords?

I have found out that the addendum.ti file influences this a lot, but I cannot seem to find much information about the addendum file online. As I have synthetic images I cannot use the one made for Dawn. Do you know 1. whether it is the addendum file that is the culprit for the NULL values in the lon/lat keywords, and 2. how to make an addendum (or omit it completely) so that it is correct?

best,
Christina

It is all of the kernels. The lat/lon extents are computed from the camera model, which uses all of the kernels. There are four main sets of data the camera model takes from the kernels:

  1. The interior orientation of the camera: focal plane alignment, focal length, distortion, etc. (instrument kernel, IK, plus the ISIS addendum, IAK)
  2. The position of the camera when the image was exposed (SPK)
  3. The pointing of the camera when the image was exposed (CK)
  4. The orientation of the target body when the image was exposed (PCK)
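To see how this plays out for your cube, you can re-run caminfo and inspect what it computes (file names hypothetical; requires an ISIS environment):

```shell
# Write caminfo output to a PVL file and check the corner keywords.
# NULL corner values usually mean the camera ray for that corner pixel
# did not intersect the target body, which points at bad position,
# pointing, or interior orientation data rather than at one single kernel.
caminfo from=my_synthetic.cub to=my_synthetic_caminfo.pvl
grep -iE "(Lower|Upper)(Left|Right)(Longitude|Latitude)" my_synthetic_caminfo.pvl
```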