Hi,
I want to use drone imagery I have taken in the field at sites analogous to planetary surfaces such as Mars and the Moon as the input to a machine-learning Convolutional Neural Network (along with labels of specific features, in my case the outlines of boulders). I want the inputs to be grayscale (1-band) images because I would like the algorithm to be able to make predictions on 1-band imagery, as most spacecraft missions do not return multi-band products.
In cases where I have more than one band available, as with my drone images (RGBA), I want to convert them to a one-band image to mimic the high-resolution one-band images returned by instruments such as the LRO NAC, the Framing Camera on the Dawn spacecraft, or the HiRISE one-band product.
When converting a color image to grayscale, most image-processing Python libraries use all three RGB bands in the conversion (see the Image module in the Pillow (PIL Fork) 9.4.0 documentation). Still, I was wondering if it would make more sense to keep only one of the bands (e.g., the red band) as the grayscale image, to produce a more “real” planetary grayscale image?
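To make the two options concrete, here is a minimal sketch with Pillow (the file names are just placeholders): the first option uses Pillow's built-in weighted conversion over the three color bands, the second simply keeps the red channel as the grayscale image.

```python
from PIL import Image

# Hypothetical RGBA drone orthoimage
rgba = Image.open("drone_rgba.tif")

# Option 1: weighted conversion using all three color bands.
# Pillow's "L" mode applies the ITU-R 601-2 luma transform:
#   L = R * 299/1000 + G * 587/1000 + B * 114/1000
gray_weighted = rgba.convert("RGB").convert("L")

# Option 2: keep a single band (here the red channel) as the grayscale image.
gray_red = rgba.getchannel("R")

gray_weighted.save("gray_weighted.tif")
gray_red.save("gray_red.tif")
```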
I am not sure whether my question makes sense or matters that much, but I thought, why not ask?
Thanks for your help,
Best regards,
Nils