Illustrative image of an Earth observation satellite from the Sentinel-2 mission
At this very moment, thousands of artificial satellites are orbiting Earth, performing tasks such as facilitating communication, providing positioning signals (such as GPS) and taking images of our planet. These satellite images are a powerful source of information for monitoring the Earth and an input to many of our applications at Cervest. Within a single image we can identify different landscapes and objects, and by comparing images over time we can track trends and changes. Scanning the Earth with satellites to obtain such information is commonly referred to as remote sensing. While very powerful, incorporating remote sensing analysis properly is not easy. In this series of blog posts we will demonstrate some of the possibilities and limitations of satellite imagery, but first, in this introduction, we will explain some of the basics of remote sensing.
What is an Earth observation satellite?
At Cervest we make use of two types of Earth observation satellites: optical imagery satellites and synthetic aperture radar (SAR) satellites. Optical imaging sensors capture light (electromagnetic radiation) from the Sun that is reflected from the surface of the Earth. Sunlight travels as waves and is composed of different wavelengths in the visible, ultraviolet and infrared regions. Waves arriving at the interface between materials can be absorbed, transmitted or reflected. As light waves travel from the Sun to the Earth, they can be reflected off different surfaces, including clouds, particles in the atmosphere and the ground. The satellites record the intensity of the reflected light, which can be separated into bands representing the different wavelengths. We are ultimately interested in the true intensities from the ground; reflections off other surfaces, such as clouds, distort the signal we capture from the ground and are referred to as noise.
While optical satellites capture the reflected light waves emitted by the Sun, SAR satellites emit their own waves and capture them again after they are reflected off the Earth's surface. SAR satellites emit waves in the microwave range of the spectrum, which falls outside the visible spectrum used by optical imaging satellites. As such, no ‘coloured’ images can be constructed. However, by comparing the intensity and orientation of the returning waves with the initially emitted waves, researchers can infer the structures and densities of objects.
In short, the two satellite types measure different things and each has its advantages. For example, optical imagery can capture colour and is often easier to interpret, while SAR is not affected by cloud cover because its waves penetrate clouds. Nevertheless, both are used to address similar tasks in many cases, such as forest management and urban development.
What are some important satellite characteristics?
Critical for monitoring the Earth is the time it takes a satellite to image the entire Earth and return to its starting position, also known as the revisit period or temporal resolution. A longer period (i.e. a lower temporal resolution) means a longer wait before another image of the same location can be collected. The height of the satellite’s orbit and the angle over which the satellite’s sensor captures light are important determinants of this period.
Besides the number of observations, another important consideration is their spatial precision. The area on the ground covered by one pixel is called the spatial resolution. A high spatial resolution image packs many small pixels into the same area, providing more detail, as illustrated in the images below. In general, there is a trade-off: an increase in spatial resolution tends to come with a decrease in temporal resolution.
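To get a feel for what spatial resolution means in practice, here is a minimal sketch of how pixel counts grow as pixels shrink. The scene size and resolutions below are illustrative assumptions, not the specifications of any particular satellite:

```python
# Illustrative only: how many square pixels of a given ground size
# are needed to cover the same area at different spatial resolutions.
def pixels_per_area(area_km2: float, resolution_m: float) -> int:
    """Number of square pixels of side `resolution_m` metres covering `area_km2`."""
    pixels_per_km2 = (1000 / resolution_m) ** 2  # pixels along 1 km, squared
    return int(area_km2 * pixels_per_km2)

# A 100 km^2 scene at 10 m resolution vs 250 m resolution:
print(pixels_per_area(100, 10))   # 1,000,000 pixels
print(pixels_per_area(100, 250))  # 1,600 pixels
```

The same scene holds over 600 times more pixels at 10 m than at 250 m, which is exactly why higher spatial resolution translates into more detail and into much larger data volumes.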
In the last few years the number of both private and public satellite projects has exploded, boosting the accessibility of remote sensing data. Besides temporal and spatial resolution, there are many more considerations, such as how long a satellite mission has been running. Depending on the task, some considerations matter more than others.
What are some satellite image challenges?
Working with satellite images is also constrained by how the data can actually be handled by your systems. First, relatively high resolution images over a reasonable time span and coverage can easily add up to petabytes of data. Second, raw images do not necessarily represent the ground truth accurately; they need to be processed before they resemble it well. We explained how clouds and the atmosphere can interfere with optical images, and SAR images come with their own inherent noise. Throughout this blog post series we will discuss ways to reduce these sources of noise.
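The petabyte claim is easy to sanity-check with a back-of-envelope calculation. Every number below (land area, resolution, band count, bytes per pixel, revisits per year) is an illustrative assumption rather than the specification of any real mission:

```python
# Back-of-envelope estimate of raw image volume per year.
# All parameters are illustrative assumptions, not real satellite specs.
def archive_size_tb(area_km2: float, resolution_m: float, bands: int,
                    bytes_per_px: int, revisits_per_year: int) -> float:
    """Raw terabytes per year for repeatedly imaging `area_km2`."""
    pixels = area_km2 * (1000 / resolution_m) ** 2
    return pixels * bands * bytes_per_px * revisits_per_year / 1e12

# Earth's land surface (~149 million km^2), 10 m pixels, 13 bands,
# 2 bytes per pixel, imaged 70 times a year:
print(round(archive_size_tb(149e6, 10, 13, 2, 70)))  # 2712 TB per year
```

Even under these rough assumptions, a single high-resolution optical mission produces thousands of terabytes of raw data per year, so a multi-year, multi-mission archive quickly reaches the petabyte scale.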
What is next?
In this series we will be looking at products from three public satellite programmes used at Cervest, which together cover a range of spatial and temporal resolutions as well as a mixture of SAR and optical imagery. Sentinel-2 is a programme of optical imagery satellites with relatively high spatial resolution. The first blog post will focus on the general processing steps for such optical satellites, using the normalized difference vegetation index (NDVI) as a comparison parameter. NDVI is a simple transformation of two optical bands (near-infrared and red). The second article will compare the Sentinel-2 time series with that of a lower-resolution optical sensor called MODIS. The third article will look into Sentinel-1, a programme of SAR satellites, and the fourth will compare the Sentinel-1 signal with that of Sentinel-2. The final article will look into the limitations of NDVI and some of the advantages of using other indices, or simply the raw bands, for analysis.
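As a small preview of the NDVI computation the series relies on, here is a minimal sketch in Python using NumPy. NDVI is defined as (NIR − Red) / (NIR + Red); the reflectance values below are made up purely for illustration:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red)

# Made-up surface reflectances for three pixels:
red = np.array([0.05, 0.10, 0.30])  # healthy vegetation absorbs most red light...
nir = np.array([0.50, 0.40, 0.32])  # ...and strongly reflects near-infrared
print(ndvi(nir, red))  # values near 1 suggest dense vegetation, near 0 bare ground
```

Because NDVI is a ratio of band differences to band sums, it is less sensitive to overall illumination changes than the raw band intensities, which is one reason it is such a common comparison parameter.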
About us: We are Owen, Ramani and Maxim, part of the Cervest Science team, a group of statisticians, machine learning and natural scientists, and software engineers. We aim to solve urgent problems within Earth Science through the application of machine learning tools and techniques. If you’re interested in what we do, then please head over to our careers page!