LEAF AREA INDEX ESTIMATION IN VINEYARDS FROM UAV HYPERSPECTRAL DATA, 2D IMAGE MOSAICS AND 3D CANOPY SURFACE MODELS

The indirect estimation of leaf area index (LAI) at large spatial scales is crucial for several environmental and agricultural applications. To this end, in this paper, we compare and evaluate LAI estimation in vineyards from different UAV imaging datasets. In particular, canopy levels were estimated from (i) hyperspectral data, (ii) 2D RGB orthophotomosaics and (iii) 3D crop surface models. The computed canopy levels were used to establish relationships with the measured LAI (ground truth) from several vines in Nemea, Greece. The overall evaluation indicated that the estimated canopy levels were correlated (r > 73%) with the in-situ, ground truth LAI measurements. As expected, the lowest correlations were derived from the greenness levels calculated from the 2D RGB orthomosaics. The highest correlation rates were established with the hyperspectral canopy greenness and the 3D canopy surface models. For the latter, the accurate detection of canopy, soil and other materials in between the vine rows is required. All approaches tend to overestimate LAI in cases of sparse, weak, unhealthy plants and canopy.


INTRODUCTION
Biomass and leaf area index (LAI) are important variables in many ecological, environmental and agricultural applications. Accurate estimation of biomass is required for carbon stock accounting and monitoring, while LAI, which is defined as one half of the total leaf area per unit ground surface area, controls many biological and physical processes in the water, nutrient and carbon cycles. These key crop parameters are frequently used to assess crop health status, nutrient supply and the effects of agricultural management practices [Zarco-Tejada et al., 2013], [Duan et al., 2014].
In particular, for precision agriculture applications, LAI is associated with agronomic, biological, environmental and physiological processes related to growth analysis, photosynthesis, transpiration, interception of radiation and energy balance [Thenkabail et al., 2000], [Haboudane et al., 2004], [Liu et al., 2012], [Kandylakis et al., 2013], [Atzberger et al., 2015]. It is also one of the most relevant indices applied in experimentation, even for crop yield prediction and water balance modelling in the soil-water-atmosphere system [Verger et al., 2014].
Direct methods for LAI estimation are the most precise and are therefore often used as calibration tools for indirect measurement techniques. However, they have the disadvantage of being extremely time-consuming, which makes large-scale implementation only marginally feasible. Precision problems may in this case result from the definition of LAI, the scaling-up method, or from error accumulation due to frequently repeated measurements. Indirect optical observations of LAI can be well correlated with vegetation indices like NDVI for single plant species grown under uniform conditions. However, for mixed, dense and multilayered canopies, these indices have non-linear relationships and can only be employed as proxies for crop-dependent vegetation parameters such as fractional vegetation cover, LAI, albedo and emissivity.
Recent advances in remote sensing and photogrammetry have combined 3D measurements with rich spectral information, yielding unprecedented capabilities for observing crops, biodiversity and ecosystem functioning. Manned and unmanned aerial systems are continually gaining research and development effort, as well as market share, for several geospatial applications due to their decreasing cost and increasing reliability. In particular, for precision agriculture applications, many studies go beyond estimating a standard NDVI map and aim to build consistent, calibrated models and validate them against accurate estimates of crop LAI. The estimation of canopy volume through the calculation of 3D models and other metrics of vertical structure has already been employed in several studies for estimating aboveground biomass and carbon density, biomass change and LAI [Dandois and Ellis, 2013], [Bendig et al., 2015]. While conventional airborne LIDAR acquisitions have become less expensive over time, they remain very costly for researchers and other end-users, especially if required at high spatial resolution over a few small areas or at high temporal frequencies [Dandois and Ellis, 2013].
In this paper, the estimation of crop leaf area index is performed based on three different imaging datasets acquired from an unmanned aerial vehicle (UAV). Hyperspectral data, 2D RGB image mosaics and 3D crop surface models have been used to establish relationships with the ground-measured LAI from several vine crops in Nemea, Greece. The overall evaluation indicated that the joint use of the hyperspectral data and the crop surface model resulted in the highest correlation with the ground truth LAI measurements.

MATERIALS AND METHOD
Nemea Study Area: Our experiments were performed in the study area of Nemea, which is located in the north-east of the Peloponnese, where the Agiorgitiko vine variety dominates red winemaking. In particular, Nemea-Agiorgitiko is the grape variety allowed to use the Nemea Appellation (PDO Nemea). During this study we focused on vineyards near the semi-mountainous village of Asprokambos, at an altitude of about 700 m above sea level. Aerial and concurrent field campaigns were conducted with a low-cost standard RGB camera, a push-broom hyperspectral sensor and a portable spectroradiometer.
Aerial campaign: An aerial campaign with an unmanned aerial vehicle (Figure 1) was conducted on the 3rd of August 2014 at the Nemea study area. A multicopter (OnyxStar BAT-F8, Altigator, Belgium) with electronic controllers and navigation systems (BL-Ctrl V2.0, Navi-Ctrl v2.0, Mikrokopter, Germany) was employed, equipped with: (i) a push-broom hyperspectral VNIR imaging sensor (Micro-Hyperspec A-Series 380nm-1000nm, Headwall Photonics, USA) and (ii) a low-cost standard RGB camera (i.e., GoPro Hero3). The sensors were mounted and stabilized through a camera gimbal (AV200, PhotoHigher, New Zealand). The hyperspectral sensor was connected through a frame grabber to a custom-made lightweight mini-ITX computer with low power consumption (Figure 1). A GoPro HERO3+ Black Edition was also concurrently onboard the UAV, delivering video and images at certain time intervals.
Field campaign: Along with the aerial campaign, an intensive field campaign was conducted in order to collect reference/ground truth data, including the precise location and variety of each parcel, vineyard or vine row [Karakizi et al., 2015]. Existing maps with geographic information and varietal plantation were verified or updated during the field surveys. In-situ reflectance measurements were performed using a GER 1500 (Spectra Vista Corporation, US) portable spectroradiometer, which provides spectra with 512 spectral bands distributed in the spectral region from 350nm to 1050nm with 3.2nm FWHM. The position of each measurement was recorded using a portable GPS. Moreover, at certain locations with vigorous and non-vigorous plants, LAI was assessed directly by a non-destructive, precise counting of all leaves per vine. In particular, after collecting the aerial and ground reflectance data, the mean leaf area was estimated along with the number of leaves per sampling location.
Automatic aerial image orientation: An indispensable step for the generation of both the 2D RGB orthomosaic and the 3D canopy model is the estimation of image orientations and camera calibration. This is performed via an automatic image-based framework.
In a first step, all GoPro views are corrected for the severe radial distortion effects caused by the fish-eye lens. Following a hierarchical image orientation process (structure-from-motion, SfM), all available images are relatively oriented and optimally calibrated. This procedure incorporates 2D feature extraction and matching among images, outlier detection for the elimination of false point correspondences, orientation initialization through closed-form algorithms and a final self-calibrating bundle adjustment solution.
It should be noted that all these steps are applied at successive image scales in order to handle effectively the large number of high-resolution images.
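The fish-eye correction step mentioned above can be illustrated with a minimal two-parameter polynomial radial distortion model inverted by fixed-point iteration. This is only a sketch of the general idea; the actual fish-eye model and calibration coefficients used for the GoPro imagery are not specified here, so all parameters below are illustrative assumptions.

```python
import numpy as np

def radial_distort(pts, k1, k2, centre):
    """Apply a simple polynomial radial model r_d = r (1 + k1 r^2 + k2 r^4)
    to normalised image points (N, 2). Illustrative two-parameter model."""
    d = pts - centre
    r2 = (d ** 2).sum(axis=1, keepdims=True)
    return centre + d * (1 + k1 * r2 + k2 * r2 ** 2)

def radial_undistort(pts, k1, k2, centre, iters=20):
    """Invert the model by fixed-point iteration: start from the distorted
    offset and repeatedly divide out the current distortion factor."""
    d = pts - centre
    u = d.copy()
    for _ in range(iters):
        r2 = (u ** 2).sum(axis=1, keepdims=True)
        u = d / (1 + k1 * r2 + k2 * r2 ** 2)
    return centre + u
```

For moderate distortion the iteration is a contraction and converges to machine precision within a few steps; real fish-eye lenses typically need a dedicated (e.g. equidistant) projection model rather than this polynomial approximation.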
3D canopy model: Once all aerial images are oriented, a dense point cloud is generated by employing dense stereo and multi-image matching algorithms. The 3D point cloud is then converted to a 3D model (3D mesh) through 3D triangulation and finally to a DSM by keeping the highest elevation for every planimetric ground position. An appropriate texture is also computed for each 3D triangle via a multi-view algorithm, using a weighted blending scheme. The resulting DSM from the collected aerial RGB images is shown in Figure 3a. In order to estimate the volume of the canopy more precisely, the soil between the vine rows was detected. Having detected both the canopy and the soil in 2D, the DTM was estimated based on a morphological reconstruction approach. The 3D model of the detected canopy was then calculated by projecting the canopy height estimated from the DSM onto the DTM (Figure 3b).
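The DSM step described above (keeping the highest elevation per planimetric grid cell) can be sketched as follows. The grid cell size and the flat DTM in the test are illustrative assumptions; in practice the DTM comes from the morphological reconstruction of the detected soil.

```python
import numpy as np

def points_to_dsm(points, cell=0.05):
    """Rasterise a 3D point cloud (N, 3) into a DSM by keeping the
    highest elevation per planimetric cell; `cell` is an illustrative
    ground sampling distance in the units of the coordinates."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    ij = np.floor((xy - origin) / cell).astype(int)
    nrows, ncols = ij.max(axis=0) + 1
    dsm = np.full((nrows, ncols), np.nan)
    for (i, j), z in zip(ij, points[:, 2]):
        # Keep the maximum z seen in each cell (NaN = no data).
        if np.isnan(dsm[i, j]) or z > dsm[i, j]:
            dsm[i, j] = z
    return dsm

def canopy_height(dsm, dtm):
    """Canopy height model: DSM minus DTM, clipped at zero so bare
    soil does not contribute negative volume."""
    return np.clip(dsm - dtm, 0.0, None)
```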
2D RGB imaging mosaics: Combining the oriented image set with the reconstructed DSM of the vineyard, a 2D orthomosaic is produced by a multi-image algorithm based on automatic visibility checking and texture blending that can compensate for different orientations, scale and resolution of the images involved.
The resulting orthomosaic from the collected aerial RGB images is shown in Figure 2a.
2D hyperspectral canopy greenness: Due to the movement and vibrations of the UAV platform, the raw hyperspectral data acquired from the push-broom sensor were highly distorted. In order to perform a rough geometric correction, every single scanline was aligned with the preceding one via a 1D transformation that minimized their intensity differences. In particular, each scanline is shifted (upwards and downwards) relative to the preceding one over a range of different displacements, by one pixel step at a time (e.g. from -20 to 20 pixels). At each discrete displacement, a cost is computed as the sum of the absolute intensity differences between the current scanline and the previous one. Following a winner-takes-all (WTA) scheme, the displacement with the minimum cost is chosen and applied to the selected scanline. The same procedure is repeated for every consecutive scanline and a final, roughly undistorted 2D hyperspectral mosaic is generated. All the above computations for estimating the required displacements are performed on a (narrow) colour composite which resembles a standard RGB image, and the final estimated shifts are then applied to the entire hypercube.
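The steps above can be sketched in a few lines, assuming the data are available as a 2D array of scanlines (e.g. one band of the narrow colour composite); the ±20 pixel search range mirrors the example given in the text.

```python
import numpy as np

def align_scanlines(lines, max_shift=20):
    """Align each push-broom scanline to the previous one by a 1D
    integer shift chosen with a winner-takes-all (WTA) scheme.

    lines: 2D array (n_scanlines, n_pixels). Returns the aligned
    mosaic and the per-line shifts, which would then be applied to
    the full hypercube."""
    aligned = [lines[0].astype(float)]
    shifts = [0]
    for i in range(1, len(lines)):
        prev, cur = aligned[-1], lines[i].astype(float)
        best_shift, best_cost = 0, np.inf
        # Try every integer displacement in [-max_shift, max_shift].
        for s in range(-max_shift, max_shift + 1):
            shifted = np.roll(cur, s)
            # Cost: mean absolute intensity difference over the region
            # unaffected by the roll wrap-around.
            lo, hi = max(s, 0), len(cur) + min(s, 0)
            cost = np.abs(shifted[lo:hi] - prev[lo:hi]).sum() / (hi - lo)
            if cost < best_cost:
                best_cost, best_shift = cost, s
        aligned.append(np.roll(cur, best_shift))
        shifts.append(best_shift)
    return np.array(aligned), shifts
```

Aligning each line against the already-aligned predecessor (rather than the raw one) keeps the shifts from drifting apart along the strip.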
Moreover, at the locations of the in-situ reflectance measurements, the relationship with the aerial hyperspectral data was estimated. The high correlation rate (r² > 94%) indicated the consistency of the acquired dataset [Karakizi et al., 2015]. A narrow-band NDVI was calculated from the hyperspectral data and, through a further classification, the different canopy greenness levels were estimated; these can be associated with the vegetative canopy vigour, biomass, leaf chlorophyll content, canopy cover and structure. The resulting canopy greenness map is shown in Figure 4.
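The narrow NDVI computation can be sketched as below. The red/NIR centre wavelengths and the greenness thresholds are illustrative assumptions, not the values used in the study, where the levels came from a further classification step.

```python
import numpy as np

def narrow_ndvi(cube, wavelengths, red_nm=670.0, nir_nm=800.0):
    """Narrow-band NDVI from a hyperspectral cube (rows, cols, bands),
    using the bands closest to the given (illustrative) red and NIR
    centre wavelengths."""
    red = cube[:, :, int(np.argmin(np.abs(wavelengths - red_nm)))].astype(float)
    nir = cube[:, :, int(np.argmin(np.abs(wavelengths - nir_nm)))].astype(float)
    return (nir - red) / (nir + red + 1e-9)

def greenness_levels(ndvi, thresholds=(0.2, 0.4, 0.6)):
    """Discretise NDVI into canopy greenness levels (0 = non-vegetated
    up to len(thresholds) = most vigorous); the thresholds here are
    hypothetical stand-ins for the classification actually applied."""
    return np.digitize(ndvi, thresholds)
```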

EXPERIMENTAL RESULTS AND EVALUATION
Aerial and in-situ data were collected during the veraison period at the Nemea study area. Aerial data were collected from a multicopter carrying a low-cost standard RGB camera (i.e., GoPro Hero3), a push-broom hyperspectral sensor, a lightweight single-board computer and a frame grabber (Figure 1). The goal was to benchmark the estimation of LAI from the acquired hyperspectral data, the RGB orthomosaic and the 3D canopy model against the in-situ LAI measurements.
The resulting orthomosaic from the collected aerial RGB images and the calculated GRVI index on the detected canopy are shown in Figure 2. For the quantitative evaluation, the relations between the estimated canopy levels from the 2D RGB mosaic and (a) the 3D canopy model and (b) the hyperspectral map were computed first (Figure 5). The correlation between the calculated GRVI (from the 2D RGB orthomosaic) and the canopy greenness from the hyperspectral data was relatively high, above 84%, while that between the calculated GRVI and the 3D canopy model was lower, at approximately 79%. The highest relations (r² > 90%) were established between the estimates from the hyperspectral data and the 3D canopy model.
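The GRVI used on the RGB orthomosaic is the standard (G − R)/(G + R) ratio; a minimal sketch follows, with the zero canopy-mask threshold being an illustrative choice rather than the one used in the study.

```python
import numpy as np

def grvi(rgb):
    """Green-Red Vegetation Index, GRVI = (G - R) / (G + R),
    computed per pixel on an (rows, cols, 3) RGB orthomosaic."""
    r = rgb[:, :, 0].astype(float)
    g = rgb[:, :, 1].astype(float)
    return (g - r) / (g + r + 1e-9)

def canopy_mask(rgb, threshold=0.0):
    """Green vegetation typically yields GRVI > 0 while bare soil
    yields GRVI < 0; the zero threshold is an illustrative choice."""
    return grvi(rgb) > threshold
```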
Regarding the relations against the ground truth (direct, in-situ LAI measurements), the experimental results followed a similar pattern (Figure 6). The LAI estimates from the hyperspectral data and the 3D canopy model resulted in higher correlation rates (r² > 80%), while those from the 2D RGB orthomosaic were relatively lower (r² < 73%).
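The reported r² scores correspond to the squared Pearson correlation between each canopy estimate and the ground truth LAI, which can be computed directly:

```python
import numpy as np

def r_squared(estimate, ground_truth):
    """Squared Pearson correlation (the coefficient of determination
    of a linear fit) between a canopy estimate and measured LAI."""
    r = np.corrcoef(estimate, ground_truth)[0, 1]
    return r ** 2
```

For a perfectly linear relation the score is 1 regardless of slope sign, which is why it measures the strength, not the direction, of the relation.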
The aforementioned results indicate that LAI was estimated more accurately from the hyperspectral data and the 3D canopy model. It should be noted that for the hyperspectral data just a standard narrow NDVI was employed, while more sophisticated indices might have correlated better in terms of chlorophyll concentration, etc. Both datasets seem to fail more in cases with lower LAI values, i.e. over sparse, weak, unhealthy plants and canopy.

CONCLUSION AND FUTURE PERSPECTIVES
In this paper, LAI estimation from three different UAV-based imaging sources was validated against direct, in-situ LAI measurements. In particular, canopy levels were estimated from (i) hyperspectral data, (ii) 2D RGB orthomosaics and (iii) 3D crop surface models. The computed canopy levels were used to establish relationships with the measured LAI (ground truth) from several vines in Nemea, Greece. The overall evaluation indicated that the estimated canopy levels were correlated (r² > 73%) with the ground truth. Moreover, as expected, the lowest correlations against the ground truth data were derived from the greenness levels calculated from the 2D RGB orthomosaics, while the highest correlation rates were established for the hyperspectral and the 3D canopy levels. The experimental results and the evaluation indicated that the leaf area index in vineyards can be approximated from both hyperspectral sensors and 3D canopy models. For the latter, the accurate detection of canopy, soil and other materials in between the vine rows is required. Further validation in several vineyards, vine varieties and other crop types is required in order to conclude on the optimal, efficient and cost-effective manner for LAI estimation from UAVs.

Figure 1: The UAV system that was employed during the aerial campaigns, with a low-cost standard RGB camera (i.e., GoPro Hero3) and the push-broom hyperspectral sensor.
Figure 2: The Green-Red Vegetation Index (GRVI) calculated on the detected canopy from the aerial RGB orthomosaic.
(a) The resulting DSM from the aerial dataset with texture from the RGB orthomosaic. (b) The 3D model of the canopy after the estimation of the soil in-between the vine rows.

Figure 3: The estimated DSM and 3D model of the canopy derived from the aerial imagery and the low-cost standard RGB lightweight camera.

Figure 4: The estimated canopy greenness map based on the calculation of a narrow NDVI from the UAV hyperspectral data.

Figure 5: The relations between the estimated canopy levels from the 2D RGB mosaic and (a) the 3D canopy model, (b) the hyperspectral map.

Figure 6: The relation between the calculated LAI (ground truth, GT) and the estimated canopy (a) from the 2D GRVI map, (b) from the 3D model and (c) from the hyperspectral map.