EO Open Science 2017

25-28 September 2017 | ESA-ESRIN

Official hashtags: #EO #OpenScience

Agenda

Day 1 - 25/09/2017

ESA Conference Intro

09:00 - 09:30

  • 09:00 - ESA Intro

    ESA Conference Intro


Artificial Intelligence

Chairs: Lanari, Riccardo (CNR-IREA), Campbell, Gordon (ESA)

09:30 - 11:00

  • 09:30 - A Cloud Computing solution for national scale mapping of surface deformations through massive Sentinel 1 radar data processing
    Lanari, Riccardo (1); Bonanno, Manuela (2); Buonanno, Sabatino (1); Casu, Francesco (1); de Luca, Claudio (1); Fusco, Adele (1); Manunta, Michele (1); Manzo, Mariarosaria (1); Pepe, Antonio (1); Zinno, Ivana (1) - 1: CNR-IREA, Italy; 2: IMAA-CNR, Italy

    Nowadays the Remote Sensing scenario is characterized by a huge availability of SAR data offering the possibility to map Earth surface deformation at very large scale. In particular, since April 2014 we have been collecting big data archives acquired by the new Sentinel-1A sensor, which was paired with its twin sensor Sentinel-1B in April 2016, showing enhanced features in terms of revisit frequency and spatial coverage. The challenge, therefore, is to maximize the exploitation of such data, and, in this direction, the use of distributed computing infrastructures and, in particular, of Cloud Computing platforms can play a crucial role. In this work we present a Cloud Computing solution for the advanced interferometric processing chain, based on the P-SBAS approach [1], [2], aimed at processing S1 Interferometric Wide Swath (IWS) data for the generation of deformation time series in an efficient, automatic and systematic way. This DInSAR chain ingests Sentinel-1 SLC images and carries out several processing steps, such as SAR image coregistration, interferogram generation and interferometric phase unwrapping, in order to finally compute deformation time series and mean deformation velocity maps. Different parallel strategies have been designed ad hoc for each processing step of the P-SBAS S1 chain, encompassing both multi-core and multi-node programming techniques, in order to maximize the computational efficiency achieved within a Cloud environment and cut down the processing times. The presented P-SBAS S1 processing chain has been implemented on the Amazon Web Services public cloud platform, and a thorough analysis of the attained parallel performance has been carried out in order to identify and overcome the major bottlenecks to scalability. Impressive results relevant to the national-scale DInSAR analyses performed over Italy, involving the processing of more than 1500 S1 IWS images, will be presented, together with details about processing times and costs. Such outcomes confirm the big advantage of exploiting Cloud Computing platforms in the context of massive SAR data processing, because of the large collection of computational and storage resources that they offer, which allows performing DInSAR analyses at an unprecedentedly large scale. The presented Cloud Computing P-SBAS processing chain can be a precious tool within the EO Open Science scenario, allowing us to fully exploit the huge S1 data stream, also in the perspective of developing operational services, available to the EO scientific community, for hazard monitoring and risk prevention and mitigation. [1] F. Casu et al., "SBAS-DInSAR Parallel Processing for Deformation Time-Series Computation," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 8, pp. 3285-3296, 2014. [2] I. Zinno et al., "A Cloud Computing Solution for the Efficient Implementation of the P-SBAS DInSAR Approach," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, pp. 802-817, 2017.
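
    For illustration only (the P-SBAS implementation itself is not shown in the abstract): a minimal Python sketch of the coarse-grained parallelism described above, distributing interferometric pairs over worker processes. All names are hypothetical placeholders, not the authors' code.

```python
# Coarse-grained parallelism over interferometric pairs, in the spirit of the
# multi-core strategies described above. `form_interferogram` is a placeholder
# for the real SAR processing (coregistration, cross-multiplication, ...).
from itertools import combinations
from multiprocessing import Pool

def form_interferogram(pair):
    """Placeholder: process one master/slave pair into an interferogram."""
    master, slave = pair
    return f"ifg_{master}_{slave}"

if __name__ == "__main__":
    scenes = [f"S1_{d}" for d in ("20170101", "20170113", "20170125", "20170206")]
    # A small-baseline (SBAS) selection would filter this list of pairs.
    pairs = list(combinations(scenes, 2))
    with Pool(processes=4) as pool:          # one worker per core (or cloud node)
        interferograms = pool.map(form_interferogram, pairs)
    print(interferograms)
```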


  • 10:00 - Deep Learning Feedback On Some Sentinel 2 Cases
    De Vieilleville, Francois; Bosch, Sebastien; Ristorcelli, Thomas - MAGELLIUM, France

    The arrival of Sentinel-2 data provides a great opportunity for researchers and scientists to test their algorithms at large scale with many variations, at least in both time and location. Many of these methods originate from the computer vision and pattern recognition communities and address detection, classification and recognition problems. The available resolutions (10-metre GSD for the panchromatic image) preclude the search for and monitoring of small objects such as cars, for example. However, many challenges remain for agriculture- and urban-related problems. In this study we have focused on cloud detection; although it is partially addressed by the ground segment for the Level-1C product, we found it still to be an important matter of interest for strengthening product quality, leading to more robust higher-level object extraction. In this regard we have also focused on another recognition problem, the recognition and extraction of building footprints. This paper thus sums up our experience with both of these problems using a few convolutional neural networks from the literature. Many companies have open-sourced their frameworks, making very powerful tools available at little cost for the design, training and application of such networks, both locally and in the cloud. From our experience, TensorFlow with Keras is a good way to prototype networks. Nonetheless, we emphasize that a huge amount of work is required to build proper data sets. As far as we are concerned, this task remains one of the most challenging phases of the whole process, since high ground-truth quality is often required to reach high-quality detection rates or high-quality segmentation masks. In this regard, we offer some comments with respect to the available open ground-truth data and elaborate on our results, while broadening the discussion to the benefits of active learning and unsupervised learning.
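
    Since the abstract recommends TensorFlow with Keras for prototyping, here is a minimal, hedged sketch of a patch-based cloud classifier in that stack; the patch size, layer widths and random stand-in data are illustrative assumptions, not the authors' setup.

```python
# A toy patch-based cloud/no-cloud classifier prototyped in TensorFlow/Keras.
# Patch size, layer widths and the random stand-in data are illustrative.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(32, 32, 13)),            # 32x32 patches, 13 Sentinel-2 bands
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),      # P(cloud) for the patch
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-ins; as the abstract stresses, building a proper labelled
# data set is the genuinely hard part.
x = np.random.rand(64, 32, 32, 13).astype("float32")
y = np.random.randint(0, 2, size=(64, 1))
model.fit(x, y, epochs=1, batch_size=16)
```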


  • 10:15 - Deep learning for crop mapping based on Sentinel missions
    Lavreniuk, Mykola; Kussul, Nataliia; Shelestov, Andrii - Space Research Institute NASU-SSAU, Ukraine

    In recent years, satellite data with high spatial and temporal resolution have become available under free and open licenses from the Sentinel missions Sentinel-1A/B and Sentinel-2A/B. The large volumes of these data make it possible to produce classification maps at global, national and regional scale in operational procedures and to update them every two weeks. At the same time, for crop identification at global or even national scale, efficient algorithms for data storage and processing have to be utilized. We propose a four-level deep learning architecture for crop mapping based on multi-temporal imagery from different satellites [1, 2]. The levels are pre-processing, supervised classification, post-processing and geospatial analysis. An ensemble of convolutional neural networks is used for time series classification [3]. Efficient data access algorithms have also been implemented to avoid the step of merging all images acquired during the vegetation period into a single image cube. This methodology allows us to update large-scale crop maps every two weeks using newly acquired images, and to evaluate the increase in crop mapping accuracy as new data are acquired. Keywords: agriculture, image processing and data fusion, open data, Sentinel-1, Sentinel-2. 1. A. Shelestov, M. Lavreniuk, N. Kussul, A. Novikov, and S. Skakun, "Exploring Google Earth Engine Platform for Big Data Processing: Classification of Multi-Temporal Satellite Imagery for Crop Mapping," Front. Earth Sci., vol. 5, no. 17, pp. 1-10, 2017. doi: 10.3389/feart.2017.00017. 2. S. Skakun, N. Kussul, A. Y. Shelestov, M. Lavreniuk, and O. Kussul, "Efficiency Assessment of Multitemporal C-Band Radarsat-2 Intensity and Landsat-8 Surface Reflectance Satellite Imagery for Crop Classification in Ukraine," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, no. 8, pp. 3712-3719, 2016. 3. N. Kussul, M. Lavreniuk, S. Skakun, and A. Shelestov, "Deep Learning Classification of Land Cover and Crop Types Using Remote Sensing Data," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 5, pp. 778-782, 2017.
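
    As a hedged illustration of CNN-based time series classification (one member of such an ensemble), a minimal Keras sketch over per-pixel multi-temporal feature stacks; all shapes and the random data are assumptions, not the authors' architecture.

```python
# One member of a CNN ensemble for per-pixel time series classification.
# Shapes (dates, features, crop classes) and the data are illustrative.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

T, F, N_CLASSES = 20, 4, 8                      # 20 dates, 4 features, 8 crop types
model = keras.Sequential([
    keras.Input(shape=(T, F)),
    layers.Conv1D(32, 3, activation="relu", padding="same"),   # temporal convolution
    layers.Conv1D(32, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.random.rand(128, T, F).astype("float32")  # stand-in multi-temporal stacks
y = np.random.randint(0, N_CLASSES, size=(128,))
model.fit(x, y, epochs=1)
# An ensemble, as in [3], would train several such networks and average outputs.
```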


  • 10:30 - Estimation of the near-surface air temperature during day and night time from MODIS products in Berlin, Germany
    Marzban, Forough (1); Sodoudi, Sahar (1); Preusker, René (2); Marzban, Pouria (3) - 1: Institut für Meteorologie, Freie Universität Berlin; 2: Institut für Weltraumwissenschaften, Freie Universität Berlin; 3: Department of Computer Engineering, Islamic Azad University, Sari Branch, Sari, Iran

    Air temperature (Tair or T2m) is an important climatological variable for forest and biosphere processes and for climate change research. Due to the low density and uneven distribution of weather stations, traditional ground-based observations cannot accurately capture the spatial distribution of Tair. In this study, Tair in Berlin was estimated during day and night time over six land cover/land use (LC/LU) types from satellite remote sensing data over a large domain and a relatively long temporal period (7 years). Aqua and Terra MODIS (Moderate Resolution Imaging Spectroradiometer) data and meteorological data for the period from 2007 to 2013 were collected to estimate Tair. Twelve environmental variables (land surface temperature (LST), normalized difference vegetation index (NDVI), Julian day, latitude, longitude, Emis31, Emis32, altitude, albedo, wind speed, wind direction and air pressure) were selected as predictors. Moreover, LST from MODIS Terra and Aqua was compared with daytime and nighttime air temperature (Tday, Tnight), respectively, and the spatial variability of the LST-Tair relationship was examined by applying a varying window size on the MODIS LST grid. Analysis of the relationship between observed Tair and spatially averaged remotely sensed LST indicated that 3 × 3 and 1 × 1 pixels were the optimal window sizes for the statistical model estimating Tair from MODIS data during day and night time, respectively. Three supervised learning methods (Adaptive Neuro-Fuzzy Inference System (ANFIS), Artificial Neural Network (ANN) and Support Vector Regression (SVR)) were used to estimate Tair during day and night time, and their performance was validated by cross-validation for each LC/LU. Moreover, the tuning of the hyperparameters of models such as SVR and ANN was investigated. For tuning the hyperparameters of SVR, Simulated Annealing (SA) was applied (SA-SVR model), and a multiple-layer feed-forward (MLF) neural network with three layers and different numbers of nodes in the hidden layer was used with Levenberg-Marquardt back-propagation (LM-BP) in order to achieve higher accuracy in the estimation of Tair. Results indicated that the ANN model achieved better accuracy (RMSE = 2.16°C, MAE = 1.69°C, R2 = 0.95) than the SA-SVR model (RMSE = 2.50°C, MAE = 1.92°C, R2 = 0.91) and the ANFIS model (RMSE = 2.88°C, MAE = 2.20°C, R2 = 0.89) over the six LC/LU types during day and night time. The Q-Q diagrams of SA-SVR, ANFIS and ANN show that all three models slightly tend to under- and overestimate the extreme high and low temperatures for all LC/LU classes during day and night time. The weak performance at extreme high and low temperatures is a consequence of the small number of data points at these temperatures. These satisfactory results indicate that this approach is suitable for estimating air temperature, and that the spatial window size is an important factor that should be considered in the estimation of air temperature.
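
    A minimal sketch of two ingredients named above: spatial window averaging of the LST grid, and a small ANN regressor validated by cross-validation. The data are random stand-ins, and scikit-learn's MLP does not offer Levenberg-Marquardt training, so this only mirrors the workflow under stated assumptions.

```python
# Window-averaging of an LST grid plus a small ANN regressor with
# cross-validation. Data are random stand-ins; scikit-learn's MLP does not
# provide Levenberg-Marquardt, so this only illustrates the overall workflow.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

lst = np.random.rand(100, 100) * 30           # stand-in MODIS LST grid (deg C)
lst_3x3 = uniform_filter(lst, size=3)         # 3x3 spatial averaging (daytime optimum)

n = 500
rows = np.random.randint(1, 99, n)
cols = np.random.randint(1, 99, n)
X = np.column_stack([lst_3x3[rows, cols],
                     np.random.rand(n),       # stand-in NDVI
                     np.random.rand(n)])      # stand-in albedo
t_air = 0.8 * X[:, 0] + np.random.randn(n)    # synthetic target Tair

ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
print(cross_val_score(ann, X, t_air, cv=5, scoring="neg_root_mean_squared_error"))
```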


  • 10:45 - UrbanAI - Complex data made easy
    Moreno, Laura - Starlab, Spain

    UrbanAI, the next Starlab spin-off, has as its value proposition easing access to complex data by leveraging all the value of satellite, IoT-network and crowdsourced data to create a disruptive data exploitation capability based on Machine Learning processing. Nowadays, in the big data era and with massive IoT data collection, new datasets become available in cities every day, but their exploitation has not yet reached its full potential. With the full deployment of the Sentinels, and with new players such as Planet, similar things are happening in the satellite ecosystem. Data access, mining, combination and exploitation are becoming the main bottleneck. UrbanAI provides information and, most importantly, enables data mining, empowering users with capabilities they did not have before, since data of different natures and from different sources used to be scattered apart. UrbanAI will thus enable data mining, ease access to previously unreachable data and empower the user with machine learning tools. The purpose of the first UrbanAI application is to help cities maximise and demonstrate the environmental and socio-economic benefits of having a healthy and resilient urban forest. UrbanAI is currently being accelerated by ESADE (the Barcelona business school) and also by ESA through an Open Innovation project: Street Health. The solution is being validated in the market through ongoing pilots in several cities, including Montreal, Singapore, Paris, Barcelona and Milton Keynes, across application lines ranging from urban forest to urban sprawl, fire monitoring and leak detection.


Data Analytics

Chairs: Datcu, Mihai (DLR/German Aerospace Center), Lopes, Cristiano (ESA-ESRIN)

11:30 - 13:15

  • 11:30 - Detecting Clouds In A Cloud Environment – Finding A Fast And Accurate Cloud Detection To Run On Sentinel Hub Cloud
    Aleksandrov, Matej; Batic, Matej; Kadunc, Miha; Kolaric, Primoz; Milcinski, Grega; Mocnik, Rok; Repse, Marko; Sovdat, Blaz; Vrecko, Anja - Sinergise, Slovenia

    The Sentinel-2 satellites, with their global coverage and short revisit time, acquire a dataset ideal for the observation of land surface changes. However, automatic or semi-automatic solutions are very susceptible to atmospheric fluctuations and become erratic due to a large amount of "false positives" - algorithms marking cloudy data as changes. Even though Sentinel-2 data have been available for almost two years, there is still no single-scene (non-multi-temporal) cloud detection mechanism with high accuracy suitable for on-the-fly processing. Trying to find a globally sound solution, we have tested several existing approaches, starting with the default cloud masks that are part of L1C products, the Sen2Cor classification, the widely used Fmask, and other single-scene and multi-temporal algorithms. In order to obtain our own algorithm, fast enough to be used for on-the-fly cloud detection, we have employed a machine learning process based on a classical Bayesian probability estimation approach. The learning process used the cloud masks of each of the cloud detection algorithms as a separate training dataset to obtain a set of parameters for our algorithm. For each trained set of parameters, clouds detected with our algorithm were validated against manually classified data, originating from a repository of manually labeled Sentinel-2A spectra (Hollstein et al.) and our own in-house cloud database. Finally, we have integrated the resulting procedures within our Sentinel Hub services to be able to identify and visualize cloudy areas on-the-fly. The benefits of having an accurate and compute-optimized cloud detection are enormous - starting with filtering scenes based on localized cloud coverage, creating dynamic cloudless mosaics, obtaining good change detection results, etc. We will present our findings on the cloud detection algorithms, compare results from different training datasets and show the classifications on Sentinel Hub.
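
    A minimal sketch of per-pixel Bayesian cloud classification, using Gaussian naive Bayes as a stand-in for the (unspecified) estimator and an existing detector's mask as training labels; all data and thresholds are illustrative assumptions.

```python
# Per-pixel Bayesian cloud classification with Gaussian naive Bayes, trained
# on the mask of an existing detector (e.g. an Fmask-style output). Spectra,
# labels and the 0.5 threshold are stand-ins.
import numpy as np
from sklearn.naive_bayes import GaussianNB

n_pixels, n_bands = 10000, 13
reflectance = np.random.rand(n_pixels, n_bands)     # stand-in Sentinel-2 spectra
labels = (reflectance[:, 1] > 0.5).astype(int)      # stand-in training cloud mask

clf = GaussianNB().fit(reflectance, labels)
p_cloud = clf.predict_proba(reflectance)[:, 1]      # per-pixel cloud probability
cloud_mask = p_cloud > 0.5
print(f"cloudy fraction: {cloud_mask.mean():.2f}")
```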


  • 12:00 - New Paradigms Offering New Earth Observation Opportunities
    Datcu, Mihai; Schwarz, Gottfried - German Aerospace Center (DLR), Germany

    Today, we are faced with a number of demanding scientific and technical challenges that - if resolved successfully - will lead to advanced and highly demanded applications of Earth observation data. These challenges arise in many fields of remote sensing and data interpretation and have to be tackled by advanced methods and paradigms. Prominent examples range from new satellite sensor concepts and computational sensing, via efficient distributed data processing on ground, up to automated data interpretation and knowledge extraction for individual end users. We will give a survey of innovative sensor concepts, new approaches for combining remote sensing data with geomatics, future prospects for cloud-based data handling and services, and data interpretation by (deep) learning and interactive visualization. These approaches will be complemented by assessments of typical new user-oriented applications that are expected to result from each new paradigm. In particular, we will address the prospects of computational imaging, the advantages offered by exploiting compressed data or compact descriptors based on selected metadata, the potential of extracting high-level semantic information by combining sensor data with existing knowledge contained in publicly accessible databases, and data trend investigations based on cloud-computing concepts together with advanced algorithms for the analysis of image time series. Finally, these approaches will be compared with totally new ideas resulting from the introduction and application of quantum computing. In order to assess the feasibility and the application potential of each proposed new approach, we will provide typical application scenarios based on the parameters of the current series of European Sentinel satellites and its stakeholder community. This includes not only the description of algorithms but also the design of user interfaces and analysis tools, such as the provision of standardized data analysis platforms. The result is a global scenario of future activities for the remote sensing community at large.


  • 12:15 - Open Source Software For Change Monitoring Using Satellite Image Time Series: Overview, Challenges And Solutions
    Verbesselt, Jan (1); Pebesma, Edzer (2); Hamunyela, Eliakim (1); Reiche, Johannes (1); DeVries, Ben (3); Dutrieux, Loic (4); Tsendbazar, Nandin-Erdene (1); Herold, Martin (1) - 1: Wageningen University, The Netherlands; 2: Muenster University, Germany; 3: University of Maryland, US; 4: Conabio, Mexico

    With the advent of the Sentinel-1 and 2 satellites, together with the Landsat constellation, dense satellite image time series with high spatial resolution (up to 10 m) are now available. Methods for analysing full and dense time series, which were previously only applicable to medium and coarse spatial resolution data, are becoming applicable to satellite image time series that provide high spatial detail. This offers a great opportunity to explore the full potential of time series analysis for ecosystem monitoring. However, this opportunity comes with challenges and requires new methods that can efficiently handle dense satellite image time series from multiple satellite sensors (e.g. Landsat, Sentinel-1 and 2), and that are able to perform temporal analysis while accounting for spatial context. This would enable the monitoring of land surface dynamics, disturbances, and extremes at unprecedented detail. We present an overview of open-source software, with emphasis on R packages, that has been developed for satellite-based change monitoring (e.g. deforestation and regrowth monitoring), inter-sensor calibration and fusion, and land cover monitoring. Based on this overview, we highlight current challenges and needs in the context of big Earth Observation data analysis with dense, high spatial resolution satellite image time series available globally. With support from the R Consortium, a new R package called stars will try to address a number of these challenges. Potential solutions enabling more transparent, open and reproducible Earth observation research are formulated.


  • 12:30 - High Resolution Urban Air Quality Maps Combining Satellite Measurements and Low-cost Sensors
    Mijling, Bas - KNMI, The Netherlands

    In many cities the population is exposed to elevated levels of air pollution. Often the local air quality is not well known, due to the sparseness of the official monitoring network or unrealistic assumptions made in urban air quality models (such as vehicle emission factors). However, new sources of alternative air quality data are rapidly becoming available: from space, by satellite instruments like TROPOMI (providing air quality information at a 7 x 3.5 km2 resolution), and on the ground, by new sensor technologies allowing low-cost in-situ measurements. Numerous research groups, companies, and citizens are already experimenting with these low-cost sensors. The objective of the RETINA project at KNMI is to develop operational services able to produce maps of urban air pollution at high spatio-temporal resolution. This is not straightforward, due to the localized nature of pollutants such as nitrogen dioxide (NO2). With a new data assimilation approach we combine the heterogeneous measurements with atmospheric dispersion models, making optimal use of all available information. In a preliminary study, we assessed the in-field performance of low-cost NO2 sensors in a citizen science campaign in Amsterdam. We found that the current generation of sensors can provide useful data (with an error around 7 μg/m3), but only after extensive calibration efforts. The second step was to develop a prototype system for the city of Eindhoven, where an alternative (mid-cost) air quality network has been operational since 2015. We implemented an atmospheric dispersion model, driven by emission proxies from open data sources (e.g. road network, traffic intensity, population density). With regression techniques we find the emission factors giving the best overall model performance, while we use Kalman filtering to adjust the model results locally. The techniques developed in the RETINA project are sufficiently versatile to be applied to other pollutants (such as particulate matter) and to data from other sensor networks in other cities. The RETINA system will first be used to better understand the satellite observations of air pollution within an urban area. At a later stage, the satellite observations will be used as an additional data stream for assimilation in the system, and thus provide better air quality information for cities that lack a representative official monitoring network.
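
    A minimal sketch of the local adjustment step as a scalar Kalman update, combining one model estimate with one sensor reading; the numbers are illustrative, except the ~7 μg/m3 sensor error quoted above.

```python
# A scalar Kalman update: one model NO2 estimate adjusted by one sensor
# reading. All numbers are illustrative except the ~7 ug/m3 sensor error.
model_no2 = 24.0        # dispersion-model estimate at a sensor site (ug/m3)
var_model = 36.0        # assumed model error variance
obs_no2 = 31.0          # low-cost sensor reading (ug/m3)
var_obs = 49.0          # (7 ug/m3)^2 sensor error, as found in Amsterdam

gain = var_model / (var_model + var_obs)             # Kalman gain
analysis = model_no2 + gain * (obs_no2 - model_no2)  # locally adjusted estimate
var_analysis = (1.0 - gain) * var_model              # reduced uncertainty
print(f"analysis = {analysis:.1f} ug/m3, variance = {var_analysis:.1f}")
```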


  • 12:45 - Challenges and opportunities in developing analytics of open EO and geospatial data for urban thematic applications
    Tapete, Deodato - Italian Space Agency, Italy

    To capture the complexity of urban environments and the dynamism of the processes transforming them (e.g. urban sprawl, rural-to-urban transformation, land-use change), data scientists rely on the accessibility of open geospatial data. Ideally, these should be reliable, of suitable spatial resolution, up-to-date and interoperable. Local authority data repositories and institutional geoportals are increasingly becoming the preferred free and open data-sharing resources. Depending on the type of urban application, the data mining exercise can also include unstructured data collated from the web and social media. However, these data frequently require manipulation and transformation to be ready for use. In this arena, free-access EO data from satellite missions (e.g. Sentinel-1A/B and 2A/B) and EO-derived mapping products (e.g. the Copernicus Land Monitoring Service) are key resources for change detection and time series analysis in urban applications, of such renowned value that the challenge for scientists and practitioners is how to generate metrics and indicators that address user-oriented (or better, user-defined) questions and/or real-world issues. In light of the recognised need for up-to-date and relevant examples of EO data use in R&D activities (Byfield et al., n.d.) and the challenging topics identified by the EO Open Science Community (Snik, 2015), this paper aims to share research experiences of urban remote sensing applications based on analytics integrating both free and non-free EO and geospatial data. In particular, the paper showcases data analytics approaches developed for: (1) geospatial analysis of shallow geohazards in dense urban environments, and (2) generation of mapping products showing constraints and opportunities for sustainable use of land. Drawing from case studies in Italy, the value and limitations of open data from Copernicus services (e.g. Urban Atlas), web data mining and city data portals are discussed with regard to post-processing chains of ground deformation estimates retrieved from multi-interferogram processing of Synthetic Aperture Radar (InSAR) Big Data. Open data are thereby exploited as informative layers to reduce the redundancy of hundreds of thousands of InSAR observations, understand their cause-effect relationships and narrow down to the most relevant areas of concern on which the potential stakeholder should focus. The potential of data analytics combining high resolution geological data, Landsat and Sentinel time series, and free land cover/land use data is instead explored with regard to the provision of land indicators for Sustainable Development Goal 11, 'Make cities and human settlements inclusive, safe, resilient and sustainable'. More specifically, the integration with proprietary geological data is used to test the transformability of EO and EO-derived open data into a mapping tool that can inform decisions on the strategic planning of green/blue infrastructure in cities. References: Byfield, Val, Kapur, Ravi, Del Frate, Fabio, Mathieu, Pierre-Philippe, Higgins, Mark (n.d.) EO Open Science 2.0 - Training a new generation of data scientists. White paper, http://esaconferencebureau.com/docs/default-source/15c12_library/training-a-new-generation-of-data-scientists.pdf?sfvrsn=0 ; Snik, Frans (2015) Summary of the "jam session" during the EOscience2.0 workshop in Frascati 12-14 Oct 2015. http://esaconferencebureau.com/docs/default-source/15c12_library/summary-of-the-jam-session-.pdf?sfvrsn=0


Data Analytics

Chairs: Datcu, Mihai (DLR/German Aerospace Center), Lopes, Cristiano (ESA-ESRIN)

14:15 - 15:45

  • 14:15 - Using land use time series to monitor species extinction risk under the Red List framework
    Santini, Luca (1); Rondinini, Carlo (2); Benítez López, Ana (1); Butchart, Stuart (3); Hilbers, Jelle (1); Schipper, Aafke (1,4); Čengić, Mirza (1); Huijbregts, Mark (1,4) - 1: Department of Environmental Science, Institute for Water and Wetland Research, Radboud University, PO Box 9010, 6500 GL Nijmegen, The Netherlands.; 2: Global Mammal Assessment Program, Department of Biology and Biotechnologies, Sapienza Università di Roma, Rome, Italy.; 3: BirdLife International, Wellbrook Court, Girton Road, Cambridge CB3 0NA, UK.; 4: PBL Netherlands Environmental Assessment Agency, PO Box 30314, 2500 GH, The Hague.

    The Red List of Threatened Species is a framework used to assess the conservation status of species. International groups of species experts regularly assess species under quantitative criteria based on species distribution and population abundance data. This information, however, is sparse and uncertain for most species, and insufficient for many. Here we propose a complementary tool to assess species conservation status under the same framework, using land use change time series coupled with habitat suitability models and statistical predictions of population abundance. We demonstrate the applicability of our method on mammals globally (~5,000 species). Our predictions are fairly consistent with, but more optimistic than, the Red List assessments. However, we predict around 400 species to be more threatened than assessed under the Red List, and around 40% of data-deficient species (species for which data are insufficient for a Red List assessment) to be at risk of extinction. Our method relies on several optimistic assumptions about species distribution and abundance, so species that are predicted to be more at risk than under the Red List should be re-assessed in light of the new information available. Our method can become a complementary supporting tool for the Red List specialist groups in assessing species conservation status globally. Furthermore, it can set the basis for an early warning system based on automatically processed satellite images with periodic updates.


  • 14:30 - Earth Observation + Copernicus Programme + Big Data + Artificial Intelligence = Business Intelligence 2.0 - It Starts Now
    Moldestad, Dag Anders (1); Dehls, John (2); Marinkovic, Petar (3); Larsen, Yngvar (4) - 1: Norwegian Space Centre, Norway; 2: Geological Survey of Norway, Norway; 3: PPO.labs, The Netherlands; 4: Norut, Norway

    Business intelligence (BI) aims to support decisions, not only in the business area stricto sensu, but also in the domains of environment, risk management, energy, transportation, health, science, and more. It provides a transverse vision of an organization's data and allows quick and simple access to strategic information. Thanks to the advent of cloud computing, BI is now casually used in many organizations. However, despite ongoing trends, its use has been mostly confined to internal organizational data. This is mainly due to the lack of easy access to external information sources, and to a focus on improving only local operational efficiency. It is foreseen that the Copernicus programme, with its portfolio of services and its paradigm shift in EO data applications, will provide a critical information layer and a cornerstone for a radical reinvention of BI. The programme is the 3rd largest data provider in the world. It ensures free and open access to a wealth of operational and globally acquired data, bringing astonishing exploitation opportunities if the related challenges are addressed in parallel. The Copernicus programme is thus a critical enabler for the development of EO applications for BI, and vice versa. Integration of a multitude of external and internal data sources is acknowledged as the best way to provide the most complete view for decision making. Yet, tackling data heterogeneity has always been an issue. With big Copernicus EO data coming into play, the benefits of processing external data look even better, but the issues are also more complex. Data volumes challenge even warehouses that were tailored for (what was previously considered) large amounts of data. The velocity with which data are collected challenges the very idea of materializing historicized data. Variety and veracity issues remain, but to a much greater extent. Moreover, extracting intelligible information from big Copernicus EO data (data value) requires novel methods. Finally, developments and new technologies such as cloud computing, data mining, and deep learning also question classical BI. This contribution will gather and discuss top-level issues related to the integration of EO information layers into BI, and will outline technological issues as well as Copernicus EO data analytics applications. Many of the discussions will be in the context of InSAR.no -- a national InSAR-based deformation mapping service for Norway, built on top of Sentinel-1 data. Building such a service involves large amounts of input data and significant computational resources. However, the biggest challenge will be extracting significant patterns of deformation from billions of dynamically updated 4D deformation data records and integrating them into BI workflows.


  • 14:45 - A new method for unsupervised flood mapping using Sentinel-1 images
    Amitrano, Donato; Di Martino, Gerardo; Iodice, Antonio; Riccio, Daniele; Ruello, Giuseppe - University of Napoli Federico II, Italy

    Effective response to natural disasters requires systems able to provide decision makers and first responders with a map of the affected area in a short time. Floods are among the most serious natural hazards in the world, causing significant damage to people, infrastructure and economies. In these scenarios, rapid estimation of inundated areas is crucial to effectively organize response operations. The use of synthetic aperture radar (SAR) data adds crucial value to rapid flood mapping due to its all-weather and all-time imaging characteristics. We propose a new unsupervised framework for rapid flood mapping based on the exploitation of Sentinel-1 ground range detected (GRD) products. GRD products are pre-processed images made available by ESA on the Sentinels Data Hub, on which the basic TOPSAR processing has already been applied. Therefore, they are ready to feed information extraction processing chains. In particular, two processing levels providing maps with increasing resolution and computational burden are discussed. The first (RFP-L1) is based on the analysis of a single GRD product and exploits classic Haralick textural features. The output is a flood map with 30-meter spatial resolution, obtainable within a few minutes of processing time. The second processing level (RFP-L2) is based on change detection, exploiting the comparison of the responses of two GRD products imaging the same area. The output is a flood map at the full resolution of the input GRD products (10 meters). In this case, the processing time varies depending on the selected despeckling, which is the most computationally demanding step of the designed chain. The proposed methodology has been tested on five cases taken from the list of activations of the Copernicus Emergency Management Service (EMS). In particular, we considered the following test sites: Parachique (Peru, EMS activation code EMSR199), Selby (UK, EMSR150), Ballinasloe (Ireland, EMSR149), Poplar Bluff (USA, EMSR176), and Jemalong (Australia, EMSR184). The results obtained with RFP-L1 are as follows. Overall accuracy: Parachique - 99.8%, Ballinasloe - 98.0%, Selby - 92.1%, Poplar Bluff - 96.2%, Jemalong - 91.2%; on average, 95.4%. Ground truth data were provided by ESA through the Copernicus EMS. False alarms: Parachique - 8.58%, Ballinasloe - 4.69%, Selby - 1.00%, Poplar Bluff - 7.33%, Jemalong - 13.3%; on average, 6.98%. The results obtained with RFP-L2 are as follows. Overall accuracy: Parachique - 97.3%, Ballinasloe - 98.6%, Selby - 91.8%, Poplar Bluff - 84.1%, Jemalong - 93.6%; on average, 92.9%. False alarms: Parachique - 12.5%, Ballinasloe - 6.60%, Selby - 2.40%, Poplar Bluff - 11.1%, Jemalong - 12.3%; on average, 8.88%. The proposed processing chains were also compared with several methods from the literature, such as SVM, neural networks, maximum likelihood, thresholding, and k-means. The obtained results showed that RFP-L1 and RFP-L2 outperform all of them.
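
    A minimal sketch of texture-based water delineation: one Haralick feature (GLCM contrast) computed per window, then thresholded; the window size, grey levels and percentile threshold are illustrative assumptions, not the RFP-L1 settings.

```python
# One Haralick texture (GLCM contrast) per window, then a simple threshold:
# smooth, low-texture areas are water candidates. Window size, grey levels
# and the percentile threshold are illustrative, not the RFP-L1 settings.
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # 'greycomatrix' in older scikit-image
from skimage.util import view_as_windows

img = (np.random.rand(64, 64) * 63).astype(np.uint8)    # stand-in GRD amplitude, 64 grey levels

def glcm_contrast(window):
    glcm = graycomatrix(window, distances=[1], angles=[0],
                        levels=64, symmetric=True, normed=True)
    return graycoprops(glcm, "contrast")[0, 0]

win = 9
patches = view_as_windows(img, (win, win), step=win)    # non-overlapping windows
contrast = np.array([[glcm_contrast(p) for p in row] for row in patches])

water = contrast < np.percentile(contrast, 30)          # low texture -> water candidate
print(water.shape, water.mean())
```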


  • 15:00 - Free Global DSM Assessment Exploiting the Innovative Google Earth Engine Platform
    Nascetti, Andrea; Di Rita, Martina; Ravanelli, Roberta; Amicuzi, Maria Teresa; Esposito, Sara; Crespi, Mattia - University of Rome La Sapienza, Italy

    Google Earth Engine (GEE) is a high-performance cloud-computing platform recently released by Google "for petabyte-scale scientific analysis and visualization of geospatial datasets" (Google Earth Engine Team, 2015). GEE can be used to run geospatial analysis on a dedicated HPC (High Performance Computing) infrastructure, and it enables researchers to access geospatial information and satellite imagery for global and large-scale remote sensing applications. The leading idea of this work is to evaluate the precision and accuracy of three freely available global Digital Surface Models (DSMs) - ASTER GDEM, SRTM and the new ALOS global DSM - over large areas, by leveraging the capabilities of the GEE platform. Following previous studies (Jacobsen, 2013), routines to evaluate standard statistical parameters representing DSM precision and accuracy (i.e. mean, median, standard deviation, NMAD, LE95) were implemented inside the GEE Code Editor. Moreover, the routines were used to characterize the accuracy of the input DSM within different slope classes. In particular, in this work the geometric accuracy of these three global DSMs has been evaluated over the territories of four American states (Colorado, Michigan, Nevada, Utah) and one Italian region (Trentino Alto-Adige, Northern Italy), exploiting the potential of this platform. These are large areas characterized by different terrain morphology, land cover and slopes. The assessment has been performed using two different reference DSMs: the USGS National Elevation Dataset (NED) and a LiDAR acquisition. The DSM accuracy has been evaluated through the computation of standard statistical parameters, both at global scale (considering the whole state/region) and as a function of the terrain morphology, using several slope classes. These preliminary results highlight the potential of GEE for performing DSM assessment at global scale.
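
    A minimal sketch of the named accuracy statistics computed from height differences between an evaluated DSM and a reference; plain NumPy is used here instead of the GEE Code Editor, and the random differences are stand-ins.

```python
# The accuracy statistics named above (mean, median, std, NMAD, LE95),
# computed from DSM-minus-reference height differences (random stand-ins here).
import numpy as np

def dsm_stats(dh):
    dh = np.asarray(dh, dtype=float)
    med = np.median(dh)
    nmad = 1.4826 * np.median(np.abs(dh - med))   # robust std-dev estimate
    le95 = np.percentile(np.abs(dh), 95)          # 95% linear error
    return {"mean": dh.mean(), "median": med, "std": dh.std(),
            "nmad": nmad, "le95": le95}

dh = np.random.normal(0.5, 3.0, 10000)            # stand-in height differences (m)
print(dsm_stats(dh))
# Per-slope statistics follow by masking dh with slope-class bins before the call.
```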


  • 15:15 - Complex Processing of Radar and Optical Imagery from Sentinel Satellites
    Mozgovoy, Dmitry Konstantynovich (1,2); Hnatushenko, Volodymyr Volodymyrovich (1,2); Vasyliev, Volodymyr Volodymyrovich (1,2) - 1: EOS Data Analytics, Ukraine; 2: Oles Gonchar Dnipropetrovsk National University, Ukraine

    After the commissioning of the second satellite, Sentinel-2B, in 2017, the Copernicus system became the most informative and accessible source of free remote sensing data. This can be judged from the large number of web services that provide a wide range of information products, mainly based on optical imagery from the Sentinel-2A and Sentinel-2B satellites. It can be explained by a successful combination of the high quality of the images themselves (rather high spatial and radiometric resolution, different spectral bands) and a high survey frequency (5 days with two satellites). However, in the optical bands the actual revisit frequency (taking clouds into account), even for small areas, is much lower (sometimes several times lower). Use of data from Landsat-7, Landsat-8 and Terra (ASTER) as additional free sources is not always possible, due to the poorer characteristics of these images and, again, because of clouds. In this situation it is possible to partially improve the frequency of obtaining information on the desired territory by means of radar imaging, one of the main advantages of which is independence from clouds. Obviously, it is impossible to obtain a full-fledged RGB image in natural colors or to calculate spectral indices (NDVI, etc.) from radar data. However, preliminary research confirmed the possibility of using radar images from the Sentinel-1A and Sentinel-1B satellites as an alternative source for areas covered by dense clouds. The list of applied problems where complex processing of radar and optical imagery has shown high efficiency is quite large: agriculture (monitoring of field cultivation, dynamics of crop growth, harvesting); monitoring of emergency situations (floods, forest fires, landslides, ice monitoring, movement of large icebergs); ecological monitoring (deforestation, control of mining operations, estimation of oil pollution of the seas and oceans); and navigation (control of traffic and parking of tankers, container ships and other large vessels). Combining optical and radar remote sensing data, as specified in our work, yields a good number of indices that are very useful for analyzing the vegetation state.


  • 15:30 - Wetlands Monitoring - Lake Titicaca South-East Basin
    Flueraru, Cristian; Serban, Ionut; Constantin, Sorin; Serban, Florin; Budileanu, Marius; Copacenaru, Olimpia - Terrasigna, Romania

    The paper addresses the dynamics of some very sensitive environments: the high-altitude wetlands of the Altiplano, Bolivia. These wetlands are under a great deal of stress caused by climate change and intense urbanisation. The scope of the current research is to observe and quantify the behaviour of the wetlands at three moments in time (2003, 2009, 2016), together with the dynamics of other environmental factors and indicators. The main source of information is a wide variety of Earth Observation (EO) data with spatial and temporal resolutions chosen in accordance with the scale of the environmental variables: wetlands extent - Sentinel-2, SPOT-5, Landsat; snow cover extent - MOD10A2, MYD10A2; lakes extent - Landsat; mean air temperature - MOD11A2, MYD11A2, meteorological data; vegetation indices - MOD13Q1; evapotranspiration - MOD16A2; precipitation - TRMM 3B43, meteorological data. The analysis of the data follows the geographical limits of the hydrological basins and provides a unique insight into the subtle changes currently occurring in the area. It reveals an abrupt shrinkage of the wetlands, accompanied by a decrease in precipitation and in snow cover extent. The methodology of the project focuses on sustainability and creates the premises for future updates using the Copernicus data stream (Sentinel-2, Sentinel-3). The project is financed by the European Space Agency (ESA) under contract no. 4000115222/15/I-NB.


Round Table Artificial Intelligence and Data Analytics

16:30 - 16:50

  • 16:30 - Round Table

    Artificial Intelligence/Data Analytics


Lightning Talks

16:50 - 18:00

  • 16:50 - Complex SAR Data Processing For Evaluation Of Environmental Changes
    Frasheri, Neki; Beqiraj, Gudar; Bushati, Salvatore - Academy of Sciences of Albania, Albania

    Past experiments with Sentinel-1 interferograms over the PreAdriatic Depression area in Albania gave some intriguing results, characterized by a lack of fringes in the southern part and a dominance of fringes in the northern part, even in image pairs only 12 days apart. Such results indicated a strong impact of land cover on the interferograms, making it difficult to evaluate possible subsidence on the beaches of the Adriatic Sea. In this paper we present results from Sentinel-1 interferograms based on different polarisations, and a comparison with land cover features obtained from a combination of coherence, intensity averages and intensity differences. The results show a strong impact of spatial and temporal variations in land cover, as well as new information on subsidence areas. Processing of the Sentinel-1 images was done with ESA's SNAP software, and post-processing of the resulting images with GIMP, an approach applicable in a citizen science framework as well.


  • 16:53 - Trend analysis of vegetation index in different land use land cover (Case study: Iran).
    Fakharizadehshirazi, Elham (1,2); Sabziparvar, Ali Akbar (2); Sodoudi, Sahar (1); Fallah Hasanabadi, Bijan (1) - 1: Freie Universität Berlin, Germany; 2: Bu-Ali Sina University, Iran

    Vegetation is an essential element of the land surface system that links to climate change. Both environmental and anthropogenic factors affect vegetation dynamics. There are several remote sensing indices for characterizing vegetation; the Normalized Difference Vegetation Index (NDVI) is a common and widely used one. NDVI is highly sensitive to ecosystem conditions and can therefore be used to detect changes in vegetation activity. The visible and near-infrared bands of satellite multispectral sensors have been used to monitor the greenness of vegetation. In this research, we present trend analyses of 31 years (1982-2012) of a remote sensing vegetation index, based on the long-term NDVI time series of the Global Inventory Modelling and Mapping Studies (GIMMS) group, derived from NOAA AVHRR imagery with 0.08-degree spatial resolution over Iran (40 to 65 East, 25 to 45 North). Iran has a dry climate characterized by long, hot and dry summers. NDVI has a strong seasonal cycle, and in this research we used only the mean growing-season NDVI. We depict greening (NDVI increase) and browning (NDVI decrease) regions. NDVI, an indicator of vegetation growth and coverage, describes the characteristics of land use and land cover. We have also investigated NDVI changes under different land use/land cover classes to find out the relationship between land use/land cover and the NDVI trend. The existence of positive autocorrelation in a time series increases the probability of detecting trends when actually none exist, and vice versa. In this study, the effect of autocorrelation on the variance of the Mann-Kendall trend test statistic is considered; therefore we have used the modified Mann-Kendall method for the trend analyses. According to the results, almost all regions have a negative NDVI trend, and in some cases it is consistent with land use/land cover.
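
    A minimal sketch of the (unmodified) Mann-Kendall trend test on a synthetic NDVI series; the modified test used above additionally corrects the variance of S for serial autocorrelation, which this sketch omits.

```python
# The (unmodified) Mann-Kendall trend test on a synthetic NDVI series. The
# modified test used in the abstract additionally corrects Var(S) for serial
# autocorrelation, which this sketch omits.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0        # variance of S (no ties)
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - norm.cdf(abs(z)))              # two-sided p-value
    return s, z, p

ndvi = 0.4 - 0.002 * np.arange(31) + np.random.normal(0, 0.02, 31)  # 31 "years"
print(mann_kendall(ndvi))   # negative S and z indicate a browning trend
```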


  • 16:56 - Crustal structure beneath Mt Cameroon region derived from gravity data
    Ngatchou Heutchi, Evariste - University of Yaounde 1, Cameroon

    In the present study, gravity information is used to improve the understanding of the crustal structure and its relationship to the regional tectonic environment beneath the large volcanic system of Mt Cameroon. The multi-scale wavelet analysis method is applied to separate the gravity fields, and the logarithmic power spectrum method is used to estimate the depths of the gravity field sources. The results show that the crustal structure beneath the Mt Cameroon area is very complicated, with the crustal density exhibiting lateral inhomogeneity. The lateral discontinuities of the density structure cause undulations of the gravity anomaly field, whose complexity can be an indicator of past crustal instability. The Buea-Tiko region appears to be the most tectonically active zone in the Mt Cameroon area. The upper and middle crusts consist of many small-scale faults, uplifts and depressions. In the lower crust, these small-scale tectonic units gradually disappear and are replaced by large-scale units. The gravity anomalies in the upper and middle crusts are correlated with geological and topographic features on the surface. Compared with the crust, the structure of the uppermost mantle is relatively simple. Earthquakes occur predominantly in the upper and middle crusts; their epicenters are confined to transitional regions between high and low gravity anomalies. The earthquake occurrence, as well as the complicated gravity behavior, may be related to the upwelling of high-density magmatic materials and asthenospheric heat flow beneath Mt Cameroon. The overall results, in good agreement with previous findings, demonstrate the performance of the wavelet-based filter for multi-resolution analysis and for the study of structures using gravity data.


  • 16:59 - Deep Learning Techniques For The Information Extraction From Large Earth Observation Data
    Licciardi, Giorgio (1,2); Dalla Mura, Mauro (2); Chanussot, Jocelyn (2,3) - 1: Research Consortium Hypatia, Italy; 2: Gipsa Lab Grenoble, France; 3: University of Iceland, Iceland

    The creation of valuable content from the large and growing volume of EO-derived data is a challenge for research organizations, governments and companies. Several approaches have been proposed in the literature for information extraction from EO data, among which Deep Learning (DL) algorithms have recently become a hotspot in the machine-learning area and have been introduced into the geoscience and remote sensing (RS) community for RS big data analysis. The term Deep Learning denotes a set of machine learning systems (usually neural networks) with multiple layers. Deep learning involves a class of models that try to hierarchically learn deep features of input data with very deep neural networks, typically deeper than three layers. The network is first initialized layer-wise via unsupervised training and then tuned in a supervised manner. In this scheme, high-level features can be learned from low-level ones, and the proper features can ultimately be formulated for pattern classification. The use of more than three layers thus permits the extraction of more abstract, invariant features of the data, and has been shown to yield promising performance in many fields of remote sensing, including classification and regression tasks. By analyzing the practical demands of Earth Observation applications, in this paper we show how deep learning approaches can be used at any point of remote sensing analysis: from pre-processing to the recent challenging tasks of high-level semantic feature extraction and RS scene understanding. Different examples showing the use of DL techniques applied to different stages of EO data processing will be presented. A first example shows an application to the spectral compression and noise suppression of hyperspectral images [1][2]. Then, an example of how DL can extract relevant information from data acquired by different sensors will be presented [3][4]. Finally, a large time series dataset of meteorological images is processed with DL in order to extract relevant features [5]. REFERENCES [1] G. Licciardi, J. Chanussot, A. Piscini, "Spectral compression of hyperspectral images by means of nonlinear principal component analysis decorrelation", ICIP 2014, 27-30 October 2014, Paris, France. [2] G. Licciardi, J. Chanussot, "Nonlinear PCA for visible and thermal hyperspectral image quality enhancement", IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 6, pp. 1228-1231, 2015. [3] G. Licciardi, M. M. Khan, J. Chanussot, A. Montanvert, L. Condat, C. Jutten, "Fusion of hyperspectral and panchromatic images using multiresolution analysis and nonlinear PCA band reduction", EURASIP Journal on Advances in Signal Processing, 2012, 2012:207. [4] G. Licciardi, R. G. Avezzano, F. Del Frate, G. Schiavon, J. Chanussot, "A Novel Approach to Polarimetric SAR Data Processing Based on Nonlinear PCA", Pattern Recognition, vol. 47, no. 5, pp. 1953-1967, 2014. [5] G. A. Licciardi, R. Dambreville, J. Chanussot, S. Dubost, "Spatio-temporal pattern recognition and nonlinear PCA for global horizontal irradiance forecasting", IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 2, pp. 284-288, 2014.
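
    Since several of the cited works use nonlinear PCA, here is a minimal sketch of NLPCA as a bottleneck autoencoder for spectral compression; the band count, layer sizes and random spectra are illustrative assumptions, not the cited implementations.

```python
# Nonlinear PCA as a bottleneck autoencoder for spectral compression, in the
# spirit of [1][2]. Band count, layer sizes and the random spectra are
# illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_bands, n_components = 224, 10                # e.g. hyperspectral bands -> 10 features
inp = keras.Input(shape=(n_bands,))
h = layers.Dense(64, activation="tanh")(inp)
code = layers.Dense(n_components, activation="tanh")(h)     # compressed spectrum
h = layers.Dense(64, activation="tanh")(code)
out = layers.Dense(n_bands, activation="linear")(h)         # reconstruction

autoencoder = keras.Model(inp, out)
encoder = keras.Model(inp, code)
autoencoder.compile(optimizer="adam", loss="mse")

spectra = np.random.rand(256, n_bands).astype("float32")    # stand-in pixel spectra
autoencoder.fit(spectra, spectra, epochs=1, batch_size=32)
compressed = encoder.predict(spectra)                       # 224 -> 10 values per pixel
```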


  • 17:02 - Contribution of the New satellites (Sentinel-1, Sentinel-2 and SPOT-6) to the Coastal Vegetation Monitoring in the Pays de Brest (France).
    Talab Ou Ali, Halima; Niculescu, Simona - LETG UBO, France

    The significant economic, societal and environmental changes that have occurred worldwide and at different regional levels account for the strong interest that scientists are currently showing in issues related to coastal land cover changes. Intense urban extension, the development of related infrastructure and town planning, deep transformations of agricultural practices, and the increasingly intensive farming and exploitation of natural resources have all led to considerable changes in coastal ecosystems. Our paper aims mainly at finding a methodological solution applicable to the processing of several heterogeneous coastal ecosystem parameters, which would allow their description in their full complexity. This complexity is the result of both local variations in ecological conditions and different anthropogenic factors having direct and indirect influences on the plant community dynamics of these ecosystems. Although understanding the dynamics that govern the changes occurring in these "patchwork" areas is still a difficult task (natural evolution of plant communities, human activities, ...), we begin with an interferometric analysis of the main types of vegetation, computing the coherence of a multi-temporal Sentinel-1 radar image series in SLC format (C band, VV and VH polarization) between 2015 and 2016. We then proceed to calculating the radar backscatter coefficient from Sentinel-1 images in GRD format. The approach adopted relies on an analysis of the relations between the state of different vegetation ecosystems and the radar response, establishing a link between the coherence and backscatter-coefficient responses and vegetation parameters (phenology, structure, ...), along with soil humidity due to precipitation. Assuming that the backscatter coefficient may be considered indicative of the degree of vegetation development and of its various phenological stages, we were able to determine the different temporal patterns of the various classes of coastal vegetation. In addition to the spectral and spatial dimensions, the time component is an invaluable source of information for plant resource monitoring and management and for following land cover dynamics. Our study of radar image series collected especially during the growth stage enables us to improve the recognition of the main types of vegetation by relying on the temporal dynamics of the various vegetation classes. The temporal analyses have shown that no single date allows a satisfactory characterization of all vegetation classes. The temporal dimension, represented by the seasonal dynamics, is thus a vital component of any thorough inventory and analysis of coastal vegetation ecosystems. A combination of coherence and intensity images complements the Sentinel-1 radar image processing, since the two together may be employed for land cover classification. Second, our findings concern combinations of Sentinel-1 data with different optical satellite sensors (Sentinel-2 and SPOT-6) to improve the accuracy of recognition and mapping of the main classes of vegetation in the Pays de Brest, down to the level of plant formations. For this first stage, the findings show average accuracy levels for SPOT-6 and Sentinel-2 image classification. Furthermore, the combination of the three types of data ensures excellent multi-sensor classification accuracy (higher than 92%).


  • 17:05 - Evaluation of multi-source Earth Observation data exploitation for monitoring intensive crop farming.
    Arcorace, Mauro; Delgado Blasco, Jose Manuel; Cuccu, Roberto; Sabatino, Giovanni; Rivolta, Giancarlo - Progressive Systems Srl, Parco Scientifico di Tor Vergata, 00133, Rome, Italy

    The exploitation of satellite Earth Observation (EO) data already plays a key role in the agricultural sector, where remote sensing contributions are traditionally strong. EO data can in fact be used to monitor fields during the cropping season, to estimate crop production loss due to drought or excessive rainfall, to estimate soil humidity, to identify changes in vegetation type and to monitor deforestation. In this domain, we are exploring new capabilities of multi-source EO data exploitation for precision farming applications. The main goal of this applied research is to build a dedicated and customizable portfolio of services that can be easily adopted by private or public sector stakeholders to support agricultural needs at local scale. Having a versatile EO-based agricultural monitoring service in place would in fact be a key instrument to provide valuable and reliable information on crop conditions, promote the use of EO data within the industry, address crop field monitoring needs over extensive rural areas and guide local stakeholders in identifying yield and production losses. A preliminary study over selected sites in Italy has been carried out to demonstrate the feasibility of prototype tools for agricultural monitoring. In particular, in order to evaluate the applicability of remotely sensed indicators for reducing the cost of agricultural activities, multiple investigations have been performed over different types of crop field. Thanks to the Google Earth Engine cloud-based platform, information derived from Earth Observations, such as multi-sensor (e.g. Sentinel-2, Landsat-8, Proba-V) low and medium resolution time series analysis, has been used to monitor crop growth dynamics and to better understand environmental behaviours meaningful for agriculture. First results have shown that by means of these satellite-based measurements it is possible to identify a correlation with the crop season of different cultivated fields and to analyse other relevant features. For example, thanks to an integrated analysis of vegetation and water indices, it is possible to assess crop field status and to estimate the harvesting period and the annual irrigation schedule. Furthermore, multi-temporal analysis of remotely sensed surface temperature, retrieved from the Landsat-8 Thermal Infrared Sensor (TIRS), is also useful to identify seasonal temperature fluctuations across the years. Besides creating value for farmers, this type of analysis can be used for other purposes as well, e.g. for measuring economic loss in case of disaster events. Precision farming is expected to be widely adopted by the agricultural community in the near future, as the continuously growing remote sensing data sources (satellite, UAV) and ground observations will enable the development of new ad-hoc services. This work is intended to pave the way towards future implementations of support services for the agricultural industry and farmers, providing a wide range of satellite-derived indicators for precision farming applications, such as crop disease identification, pollution monitoring, soil mapping and yield prediction. The results derived from this study are expected to provide supportive evidence of the potential value of the envisioned support services for local-scale applications.
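
    A minimal sketch of a per-field NDVI time series with the Earth Engine Python API (an authenticated account is required); the field geometry, dates and cloud filter are illustrative assumptions.

```python
# A per-field NDVI time series with the Earth Engine Python API (requires an
# authenticated account). Field geometry, dates and the cloud filter are
# illustrative assumptions.
import ee

ee.Initialize()
field = ee.Geometry.Rectangle([12.30, 41.80, 12.32, 41.82])   # hypothetical field

def with_mean_ndvi(img):
    ndvi = img.normalizedDifference(["B8", "B4"])             # NIR vs red
    mean = ndvi.reduceRegion(ee.Reducer.mean(), field, 10).get("nd")
    return img.set({"mean_ndvi": mean,
                    "date": img.date().format("YYYY-MM-dd")})

s2 = (ee.ImageCollection("COPERNICUS/S2")
      .filterBounds(field)
      .filterDate("2017-03-01", "2017-09-30")
      .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20)))

series = s2.map(with_mean_ndvi)
dates = series.aggregate_array("date").getInfo()
values = series.aggregate_array("mean_ndvi").getInfo()
print(list(zip(dates, values)))
```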


  • 17:08 - Lena Delta Water Bodies Mapping And Level Heights Determination Based On Remote Sensing Data
    Volynetc, Aleksandr - St. Petersburg University, Russian Federation

    Low-lying, permafrost-dominated Arctic river deltas are particularly sensitive to climate variability. This sensitivity is dramatically expressed in landscape changes caused by permafrost degradation (thermokarst and thermoabrasion processes) and in changing river-ocean interactions due to increasing river run-off. As the thawing of permafrost may lead to the release of great amounts of greenhouse gases and to an acceleration of climate change, close monitoring of the dynamics of permafrost-affected regions using remotely sensed data becomes a prominent instrument for the analysis of climate variability. One of the main expressions of permafrost degradation is thermokarst, manifested in the formation, growth, shrinking and vanishing of thermokarst lakes. Changes in the quantity, size and level of water objects can thus be considered important indicators of the water balance and of frozen-ground mutability in the corresponding areas. The region of interest of this study is the Lena Delta, situated in the continuous permafrost zone; it is the largest delta in the Arctic region, with an area exceeding 32 000 km². It includes about 60 000 lakes, most of them impacted by thermokarst, and numerous branches of the Lena River. A map of water bodies in the Lena Delta was created on the basis of high-resolution (5 m) multi-year RapidEye satellite imagery. The first step of the mapping was the preprocessing of the satellite images, which included atmospheric correction, orthorectification and projection into the geodetic coordinate system UTM 52N (the zone of the eastern part of the Lena Delta) using software provided by PCI Geomatics. In the next step, the near-infrared channels of the corresponding images were superimposed on each other, applying a condition based on suitable ground reflectance values, and a binary raster image of water bodies was obtained. The resulting raster was filtered using different methods to remove noise and then converted into vector form. A vector map of lakes and channels in the Lena Delta was thus obtained, containing about 34 000 objects. This map was overlaid with the footprints of the laser altimeter on board ICESat, which provides an unprecedented set of global elevation measurements of the Earth, and the water level heights of the corresponding water bodies were estimated. The result is a sufficiently detailed map of Lena Delta water objects, with elevations of several lakes and inclinations of the main river channels.
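    [Editor's note] The core raster step (NIR threshold, noise filtering, vectorization) can be sketched as follows. A minimal sketch under stated assumptions: 'rapideye_nir.tif' is a hypothetical single-band NIR reflectance GeoTIFF, and the 0.05 threshold is illustrative, not the study's value.

    ```python
    # Water absorbs strongly in the NIR, so low NIR reflectance -> water.
    import numpy as np
    import rasterio
    from rasterio.features import shapes
    from scipy.ndimage import median_filter

    with rasterio.open('rapideye_nir.tif') as src:
        nir = src.read(1).astype('float32')
        transform = src.transform

    # Binary water mask from an (illustrative) reflectance threshold.
    water = (nir < 0.05).astype('uint8')

    # Remove salt-and-pepper noise before vectorization.
    water = median_filter(water, size=5)

    # Convert the binary raster to vector polygons (GeoJSON-like dicts).
    polygons = [geom for geom, value in shapes(water, transform=transform)
                if value == 1]
    print(f'{len(polygons)} water polygons extracted')
    ```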


  • 17:11 - Datacube Analytics for the Digital Earth: Challenges and Opportunities
    Mantovani, Simone (1); Natali, Stefano (1); Barboni, Damiano (1); Steer, Adam (2); Evans, Ben (2); Hogan, Patrick (3); Baumann, Peter (4) - 1: MEEO, Italy; 2: National Computational Infrastructure - The Australian National University, Australia; 3: NASA Ames Research Center, USA; 4: JACOBS University, Bremen, Germany

    Recently, the term datacube has been receiving increasing attention, as it has the potential to greatly simplify “Big Earth Data” services for users by providing massive spatio-temporal data in an analysis-ready way. The Datacube Manifesto [1] provides a concise, crisp definition of datacubes, based on the consolidated experience of the project partners in datacube modeling (query languages, architectures, standards development, …) and in the operation of Petascale datacube services at data centers worldwide. A number of datacube-aware platforms and services have emerged that enable a new collaborative approach to analysing the vast quantities of satellite imagery and other Earth Observations, making it quicker and easier to explore a time series of image data stored in global or regional datacubes. In this context, the European Space Agency and European Commission H2020-funded projects ([2], [3]) bring together multiple organisations in Europe, Australia and the United States to allow federated data holdings to be analysed using web-based access to petabytes of multidimensional geospatial datasets. The aim is to ensure that these large spatial data sources can be accessed through the OGC Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS) standards. Unlike the Web Map Service (WMS), which returns spatial data as an image or ‘static map’, WC(P)S returns data in its raw form, with its original semantics, enabling further data processing or the building of web applications, while at the same time minimising the data volume transferred. WCS provides access to the full range of geospatial data served from a web server and allows only a subset of the data to be requested. A WCS supports slice and trim operations, where either the data dimensionality (slice) or the data extent (trim) is reduced. WCPS is an extension of the WCS 2.0 core specification that allows the user to craft queries to be run on the data using a text-based query language, similar to SQL. This allows the user not only to limit the data transfer to the area of interest, but also to perform web-based, on-demand data processing. In this study, we provide an overview of the existing datacubes (EarthServer-2 datacubes, the Sentinel Datacube, the European and Australian Landsat Datacubes, …), how the regional datacube structures differ, how interoperability is enabled through standards, and finally how the datacubes can be visualized on a virtual globe (ESA-NASA WebWorldWind) based on a WC(P)S query via any standard internet browser. The current study is co-financed by the European Space Agency under the MaaS project (ESRIN Contract No. 4000114186/15/I-LG) and the European Union’s Horizon 2020 research and innovation programme under the EarthServer-2 project (Grant Agreement No. 654367). [1] The Datacube Manifesto (http://earthserver.eu/sites/default/files/upload_by_users/The-Datacube-Manifesto.pdf) [2] MEA as a Service (http://eodatacube.eu) [3] EarthServer-2 (http://www.earthserver.eu)
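    [Editor's note] To make the WCPS idea concrete, the sketch below submits a trim-and-aggregate query over HTTP. Assumptions are flagged inline: the endpoint URL and the coverage name 'AvgLandTemp' are placeholders, and the key-value binding follows the OGC WCS Processing extension as implemented by rasdaman-based EarthServer services.

    ```python
    # A WCPS query: trim a datacube in space and time, then average the subset.
    import requests

    endpoint = 'https://example.org/rasdaman/ows'   # placeholder endpoint

    query = '''
    for $c in (AvgLandTemp)
    return avg($c[Lat(40:45), Long(10:15), ansi("2015-01":"2015-12")])
    '''

    response = requests.get(endpoint, params={
        'service': 'WCS',
        'version': '2.0.1',
        'request': 'ProcessCoverages',
        'query': query,
    })
    print(response.text)  # a single scalar: the spatio-temporal mean
    ```

    Only the scalar result crosses the network, which is exactly the data-volume argument the abstract makes for WC(P)S over WMS.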


  • 17:14 - EarthStartsBeating: an innovative Earth Observation divulgation project
    Palumbo, Giovanna; Tarchini, Salvatore; Cerreti, Fiammetta; Cadau, Enrico Giuseppe; Iannone, Rosario Quirino; Marino, Fernando - Serco S.p.A., Italy

    This abstract proposes and describes a project aimed at the popularization of scientific topics, mainly written for a non-expert audience. The outreach activities are performed through articles published on a website named EarthStartsBeating (https://earthstartsbeating.com/), launched in early 2015. In line with the Copernicus free and open data access idea, EarthStartsBeating is intended to present and promote the multiple exploitation areas of Copernicus Sentinels Earth Observation (EO) data, reaching the far-end users, i.e. the European citizens, and drawing their attention to a broader usage of Earth Observation data. The articles therefore cover various scientific disciplines and simple curiosities about the Earth's natural and human processes through the weekly publication of eye-catching elaborations of Sentinel products. The publication of EO images fosters the understanding of subjects such as geological and atmospheric phenomena, including the anthropic changes caused by human activities. The main goal of the contributors is thus to work on communication and storytelling, approaching the information with a direct and simple style without neglecting the main scientific principles of Remote Sensing. The EarthStartsBeating website is structured in four thematic sections: • The 'Sentinels' section, which shows EO images of natural and anthropic phenomena, following the most significant events occurring throughout the Earth. The relevant material is classified by three tags corresponding to the names of the Sentinel data exploited for the elaborations (i.e. Sentinel-1, Sentinel-2, and Sentinel-3). In this section, events like hurricanes, floods, forest fires and volcanic eruptions are treated. • The 'Exploration' section is meant to re-experience the paths of historical explorers through the eyes of the Sentinels. The first story tells about some stages of James Cook's first voyage, which led to the exploration and discovery of a great part of the South Pacific Ocean and Australia. • The 'Expert User' section is a more technical area where simple, basic scripts written in Java/Python and technical articles about the elaboration of EO images are published. • The 'Interesting facts' section collects analyses done with data coming mainly from ESA non-Sentinel missions like Proba-V. We present some examples of the main themes, demonstrating that the website represents an attractive tool not only for gathering information but also for the development of a forum in which to discuss the most relevant aspects linked to our Planet.


  • 17:17 - EO with Sentinel-2A: a school-work pathway experience
    Amici, Stefania (1); Stelitano, Dario (1); D'addezio, Giuliana (1); Vento, Paola (2); Acocella, Francesca (2); Giorgetti, Giorgio (2); Rocchi, Giorgia (2); Sbrenni, Eleonora (2); Serenellini, Luca (2) - 1: Istituto Nazionale di Geofisica e Vulcanologia; 2: Liceo Scientifico Stanislao Cannizzaro

    The school-work pathway experience known as alternanza scuola-lavoro (ASL) is an innovative learning-teaching module in Italian secondary schools. As detailed in the "Good School" law (La Buona Scuola, 107/2015), it is aligned with the principle of the open school. The idea is to anticipate students' contact with the working world, providing them with new skills and directions for future career choices within a sort of internship project with a private or public body. In this context, the Istituto Nazionale di Geofisica e Vulcanologia (INGV) offers a list of projects within geophysical applications. For the first time, researchers from INGV wanted to challenge secondary school students on Remote Sensing to test their response and potential receptivity. The project, titled "Earth Observation with satellite: the case of burn areas", was selected by five students, co-authoring here, from Liceo Scientifico Cannizzaro in Rome. The school-work pathway experience was held at INGV in Rome for a total of 40 hours (contact and non-contact), during which the students were guided on how to: • identify a problem • create initial research questions • establish basic theoretical knowledge of satellite remote sensing • identify the algorithms suitable for solving the problem • select the satellite data (Sentinel-2A) and the processing tool (SNAP) • analyze and interpret data, producing a scientific report. As test cases, two poorly characterized wildfires that occurred in Sardinia, Italy, in the areas of Aidomaggiore (on 02 July 2016) and Scano di Montiferro (on 24 August 2016), were selected. Images acquired by Sentinel-2A over the area were selected and used to produce NDVI and NBR maps at 20 m spatial resolution. The results were summarized by the students in a scientific report. Students, researchers and the teacher alike considered the experience very positive, and we have perhaps initiated the idea that the role of "Junior Remote Sensing specialist" can be extended to secondary school students.
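    [Editor's note] The burn-mapping index the students computed can be sketched in a few lines. Assumptions: 'b8a.tif' and 'b12.tif' stand for hypothetical 20 m Sentinel-2A reflectance GeoTIFFs exported from SNAP, and the B8A/B12 pairing is the common Sentinel-2 choice for NBR, not necessarily the students' exact band selection.

    ```python
    # Normalized Burn Ratio: burned areas show low NIR and high SWIR reflectance.
    import numpy as np
    import rasterio

    def read_band(path):
        with rasterio.open(path) as src:
            return src.read(1).astype('float32')

    nir = read_band('b8a.tif')    # narrow NIR, 20 m
    swir = read_band('b12.tif')   # SWIR, 20 m

    # Small epsilon avoids division by zero over no-data pixels.
    nbr = (nir - swir) / (nir + swir + 1e-6)

    # On pre- and post-fire dates, dNBR = nbr_pre - nbr_post highlights
    # burn severity; high dNBR marks severely burned pixels.
    print('NBR range:', np.nanmin(nbr), np.nanmax(nbr))
    ```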


  • 17:20 - High School Students and Sentinels’ Data: Our Experiment
    Liberti, Gian Luigi (1); Ciotoli, Elisa (2); Bilhac, Marius (2); Liberti, Juliette (3); Capannolo, Edoardo (2); Ciobano, Rafaela (2); Picone, Massimiliano (2) - 1: ISAC-CNR, Via Fosso del Cavaliere 100, I-00133, Rome, Italy; 2: Liceo Scientifico Statale "Louis Pasteur" V. Giuseppe Barellai, 130 - 00135 Rome, Italy; 3: Liceo Scientifico Statale "Isacco Newton" V.le Manzoni 47, 00185 Roma

    Within the frame of the recent introduction (Alternanza Scuola Lavoro, www.istruzione.it/alternanza/index.shtml) into the Italian public school system of compulsory stages in private and public enterprises, the possibility that high school students could contribute actively to a scientific project was explored. About 30 high school students from the 3rd and 4th year (roughly 16-18 years old) from two Scientific High Schools in Rome, “Louis Pasteur” (www.liceopasteur.it) and “Isacco Newton” (www.liceonewtonroma.gov.it), were involved in different phases of a study performed at the Institute for Atmospheric Sciences and Climate of the Italian National Research Council (ISAC-CNR, www.isac.cnr.it). The study was done in response to the ITT "Sea-ice cloud screening for Copernicus Sentinel-3 Sea and Land Surface Temperature Radiometer" issued by EUMETSAT. The project required the production of Probability Density Functions (PDFs) for a three-class (Clear Sky / Cloud / Sea Ice) Bayesian classification of Sentinel-3 SLSTR data over polar oceans. Students were involved in two main tasks: first, they were asked to organize a set of relevant reviewed literature, about 140 items among journal articles, Technical Reports and Algorithm Theoretical Basis Documents (ATBD), and to summarize the main information used for cloud and sea-ice detection in previous studies. Secondly, their contribution consisted in selecting and documenting study cases, used as a reference dataset for validation and tuning of the PDFs in different phases of their development. Job organization: different approaches to the job organization were tested, including both a tutor-student arrangement and a peer-to-peer one. The students had the possibility to work and develop their abilities both at the workplace and remotely (at their school or at home). Even when they worked far from the tutor, there was an intense exchange of information. The selection of study cases was based on the use of these tools: SNAP (step.esa.int), a cloud area to share results and data, and Microsoft Office. This required the students to become familiar with them. As far as SNAP is concerned, they learnt how to manage it mostly by taking advantage of the online tutorials. The Earth Observation (EO) data used in this project were Sentinel-3(A) SLSTR L1 data (scihub.copernicus.eu) and daily 1 km sea-ice concentration charts (SEAICE_ARC_SEAICE_L4_NRT_OBSERVATIONS_011_002), based mainly on observations from SAR, including the instrument on board Sentinel-1. Furthermore, some examples of the obtained results will be shown and discussed. Some difficulties emerged while the project was carried out. They were due not to the tools or the EO data but mostly to the novelty of the situation the students had to face. Their scholastic background revealed itself to be less important than a constructive attitude aimed at an efficient way to manage time and resources. In particular, the inability to adequately report the work done and to respect deadlines appeared as common weaknesses. The benefits gained by both the students and the project will be discussed.
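    [Editor's note] The decision rule behind a PDF-based classifier of this kind can be made concrete with a toy example. The Gaussian PDFs, priors and the single brightness-temperature feature below are invented stand-ins for the empirically derived SLSTR PDFs; only the three-class Bayesian step itself is faithful.

    ```python
    # Three-class Bayesian decision rule: argmax over prior * likelihood.
    import numpy as np
    from scipy.stats import norm

    classes = ['clear sky', 'cloud', 'sea ice']
    priors = np.array([0.4, 0.4, 0.2])           # assumed prior probabilities

    # Hypothetical class-conditional PDFs of an 11 um brightness temperature (K).
    pdfs = [norm(272.0, 3.0),    # clear-sky ocean
            norm(255.0, 8.0),    # cloud tops (colder, more variable)
            norm(265.0, 4.0)]    # sea ice

    def classify(bt):
        """Return the class maximizing the posterior P(class | bt)."""
        likelihoods = np.array([p.pdf(bt) for p in pdfs])
        posterior = priors * likelihoods
        posterior /= posterior.sum()             # normalize for readability
        return classes[int(np.argmax(posterior))], posterior

    label, post = classify(268.0)
    print(label, post.round(3))
    ```

    The students' reference study cases play the role of ground truth for checking and tuning such posteriors.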


  • 17:23 - Communication Duration With Low Earth Orbiting Satellite
    Gupta, Lalit - Poornima Institute of Engineering and Technology Jaipur, India

    Communication via satellite begins when the satellite reaches its desired orbital position. The satellite's coverage area on the Earth depends on the orbital parameters. Ground stations can communicate with LEO (Low Earth Orbiting) satellites only when the satellite is in their visibility region. The duration of visibility, and hence the communication duration, varies for each satellite pass over the ground station. For low-cost LEO satellite ground stations in urban environments, ensuring communication down to the horizon is a major challenge: communication at low elevation angles can be blocked by natural barriers or degraded by man-made noise. This paper discusses the variations of the communication duration between a ground station and LEO satellites, and investigates whether it is useful to support low-elevation passes. Data recorded at the Vienna satellite ground station within the Canadian space observation project MOST (Microvariability and Oscillations of STars) are applied.
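    [Editor's note] The dependence of pass duration on the minimum usable elevation angle follows from simple spherical geometry. A minimal sketch, assuming a circular orbit, a spherical non-rotating Earth and an overhead pass; the 820 km altitude is only indicative of a MOST-like orbit.

    ```python
    # Maximum pass duration over a ground station for a circular LEO orbit.
    import math

    MU = 398600.4418          # Earth's gravitational parameter, km^3/s^2
    RE = 6371.0               # mean Earth radius, km

    def max_pass_duration(altitude_km, min_elevation_deg):
        a = RE + altitude_km
        eps = math.radians(min_elevation_deg)
        # Earth central angle of the visibility circle for elevation >= eps.
        lam = math.acos(RE * math.cos(eps) / a) - eps
        period = 2.0 * math.pi * math.sqrt(a**3 / MU)   # orbital period, s
        return period * lam / math.pi                    # time inside the circle

    for elev in (0, 5, 10):
        t = max_pass_duration(820.0, elev)
        print(f'min elevation {elev:2d} deg -> max pass {t / 60:.1f} min')
    ```

    Under these assumptions an 820 km orbit gives on the order of 15 minutes at zero elevation, shrinking quickly as the elevation mask rises, which is why the low-elevation minutes the paper examines matter so much for total contact time.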


  • 17:26 - Earth Observation Projects for Maximum Education
    Fortunato, Ronald (1); Hogan, Patrick (2) - 1: Trillium Learning, USA; 2: NASA, United States of America

    The World Bridge program integrates Earth observation technology to advance educational research as applied in the classroom. These projects dynamically design and implement real-time, real-world project-based learning into the local curriculum, and account for the classroom content needed to address academic standards. Three projects will be presented, all involving Earth observing web apps built entirely by students. One project involves the United Nations World Heritage sites, which can be instantly zoomed to and seen in detail from a satellite view of the Earth. Another is a virtual-globe web app for managing the water, sewer, power and road systems of the city of Kodiak, Alaska. A third is a sophisticated monitoring system measuring the Earth's magnetic field at 50 Hz, with nano-Gauss sensitivity on the x, y and z axes. The magnetic field has been shown to become anomalous in the few days preceding an earthquake, measurable within a few hundred kilometers of the seismic hypocenter. This EO system was designed, built and installed by students, and delivers data live via a virtual globe representing the exact location of the data. All of these Earth observing education projects are part of the World Bridge program and are entirely based on principles of open source and open data.


  • 17:29 - 3D Cave Mapping Applied to the "CAVES" and "PANGAEA" ESA Programs
    Santagata, Tommaso (1); Sauro, Francesco (1,2); De Waele, Jo (1,2); Bessone, Loredana (3) - 1: La Venta Esplorazioni Geografiche, Italy; 2: Department of Biological, Geological and Environmental Sciences, University of Bologna; 3: Directorate of Human Space Flight and Operations, European Space Agency, Linder Höhe, 51147 Köln, Germany

    The PANGAEA (Planetary Analogue Geological and Astrobiological Exercise for Astronauts) and CAVES (Cooperative Adventure for Valuing and Exercising human behaviour and performance Skills) ESA training courses are designed to prepare European astronauts to become effective partners of planetary scientists and engineers in designing the next exploration missions, and to give them a solid knowledge of the geology of the solar system by studying several caves, especially lava tubes, through geological field training courses and tests of new technologies. Recent years have seen remarkable developments in 3D mapping methods for cave surveys, such as cameras and software for digital photogrammetry, and instruments such as laser scanners and mobile mapping tools. During the 2016 CAVES training course in Sardinia (Italy), photogrammetry was widely used in order to give astronauts a basic knowledge of this 3D mapping technique and of a tool that can be used for documenting the surfaces of other planets during field geology activities. Photogrammetry allows metric data to be acquired through the acquisition and analysis of pairs of frames obtained with standard digital cameras. In 2017, laser scanning and UAV (Unmanned Aerial Vehicle) photogrammetry will be used during the PANGAEA course to test new instruments for obtaining 3D maps of the Corona lava tube on the Canary Island of Lanzarote (Spain), with the goal of producing virtual models that can be used to test rovers and to plan future space analogue missions. In both cases, the 3D models produced with photogrammetric and laser scanning technologies were subsequently analysed to obtain information about the size, volume, shapes and morphologies of the detected surfaces. The aim of this work is to describe the methods and technologies used during these tests and the results obtained.


  • 17:32 - A set of Software Tools supporting EO Satellites for Orbit and Instrument Swath Coverage
    Pinol Sole, Montserrat; Zundo, Michele - ESA/ESTEC, The Netherlands

    This paper presents the software applications for satellite orbit and instrument swath visualization distributed by the ESA-ESTEC EOP System Support Division to users within the ESA Earth Observation Earth Explorer and Copernicus satellite community. The ESOV NG [REF 1] and EOMER [REF 2] software applications can be used to perform mission analysis activities related to instrument swath coverage over regions of interest and ground station contact. These tools can be, and have been, used in preparatory feasibility studies (e.g. to analyse coverage and revisit time), to support downlink and ground station visibility analysis, and to support Calibration and Validation activities, e.g. planning on-ground campaigns during satellite commissioning or scheduling ground transponders. The Earth observation Swath and Orbit Visualization tool (ESOV NG) is a 2D orbit and swath visualization application, delivered with a predefined set of missions (Aeolus, Biomass, CryoSat, EarthCARE, MetOp-SG, Sentinel-1, -2, -3, -5P, -6, SMOS, Swarm), although it is possible to configure user-defined satellites. The tool is multi-platform, available for Mac OS X, Linux and Windows. EOMER (Earth Observation Mission Evaluation and Representation) is a Windows application for multi-satellite and swath visualization in 2D/3D, tailored to ESA Earth Observation missions (currently supporting Aeolus, Biomass, MetOp-SG, Sentinel-1, -2, -3, -5P). Both the ESOV NG and EOMER applications allow the user to visualize satellite orbit ground-tracks and instrument swaths, and to calculate ground station visibility passes, times of overpass of a given ground point, and times when an instrument swath overlaps a given region of interest. Regarding this last point, a dedicated command-line program (ZoneOverPass [REF 3]) is available to obtain overpass tables of a given satellite ground-track or instrument swath over an area of interest. Finding observation opportunities over a given area may be useful to search for relevant time-tagged products or to plan future on-ground campaigns. Further exploiting these features, the InstrCollocation tool [REF 3] provides a mechanism to identify collocation opportunities between two different instruments. It would benefit users involved in instrument calibration activities or interested in combining data from different types of products acquired over the same geographical area within a given period of time between observations. The output information produced by the ZoneOverPass and InstrCollocation tools is provided in both tabular and graphical formats. The coherence and accuracy of the orbital and geometrical calculations within the ESOV NG application and the ZoneOverPass and InstrCollocation tools is ensured by the use of the embedded Earth Observation CFI Software libraries (EOCFI SW). The libraries are used to obtain the orbit ground-track, the instrument swath, and time passes over a selected area of interest or ground station. The EOMER application instead makes use of the SatX and GanttX components developed by Taitus to support, respectively, the orbital calculations and their timeline visualisation. The use of common interfaces (orbit files, swath files, the SCF segment export format from ESOV NG, KML for Google Earth) is a key point in facilitating the sharing of input data and the comparison of output results across the various software applications.
REFERENCES [REF 1] ESOV website: http://eop-cfi.esa.int/index.php/applications/esov [REF 2] EOMER website: http://eop-cfi.esa.int/index.php/applications/eomer [REF 3] ZONE_OVERPASS / INSTRUMENT_COLLOCATION website: http://eop-cfi.esa.int/index.php/applications/tools
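    [Editor's note] The visibility-pass computation these tools perform can be illustrated generically. A minimal sketch using the open-source skyfield library as a stand-in for the EOCFI SW embedded in the ESA tools; the Celestrak TLE file, the satellite name key and the approximate ESRIN coordinates are assumptions.

    ```python
    # Rise / culminate / set events of Sentinel-2A over a ground station.
    from skyfield.api import load, wgs84

    ts = load.timescale()
    satellites = load.tle_file('https://celestrak.org/NORAD/elements/resource.txt')
    sat = {s.name: s for s in satellites}['SENTINEL-2A']

    station = wgs84.latlon(41.82, 12.67)   # ESRIN, Frascati (approx.)
    t0 = ts.utc(2017, 9, 25)
    t1 = ts.utc(2017, 9, 26)

    # Events above a 10 degree elevation mask: 0 = rise, 1 = culminate, 2 = set.
    times, events = sat.find_events(station, t0, t1, altitude_degrees=10.0)
    names = ('rise', 'culminate', 'set')
    for t, e in zip(times, events):
        print(t.utc_strftime('%Y-%m-%d %H:%M:%S'), names[e])
    ```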


  • 17:35 - Virtual Exploitation Environment Demonstration for Atmospheric Missions
    Natali, Stefano (1); Mantovani, Simone (1); Santillan, Daniel (2); Triebnig, Gerhard (2); Hirtl, Marcus (3); Lopes, Cristiano (4) - 1: SISTEMA gmbH, Austria; 2: EOX IT Services GmbH, Austria; 3: Zentralanstalt für Meteorologie und Geodynamik, Austria; 4: ESA ESRIN, Italy

    The scientific and industrial communities are being confronted with a strong increase in Earth Observation (EO) satellite missions and related data. This is particularly the case for the atmospheric sciences communities, with the upcoming Copernicus Sentinel-5 Precursor, Sentinel-4, -5 and -3 missions, and ESA’s Earth Explorer scientific satellites ADM-Aeolus and EarthCARE. The challenge is not only to manage the large volume of data generated by each mission / sensor, but also to allow users to analyze the data streams in near-real-time and for long-term monitoring tasks. Creating synergies among the different datasets will be key to exploiting the full potential of the available information. As a preparation activity supporting scientific data exploitation for Earth Explorer and Sentinel atmospheric missions, ESA funded the “Technology and Atmospheric Mission Platform” (TAMP) project [1] [2], with the twofold aim of demonstrating (1) that multiple data sources (satellite-based data, numerical model data and ground measurements) can be exploited simultaneously by users (mainly scientists), and (2) that a fully virtualized environment (Virtual Research Environment, VRE), which avoids downloading all data locally by retrieving only the processing results, is the optimal solution. With the “Virtual Exploitation Environment Demonstration for Atmospheric Missions” (VEEDAM) project, the VRE concept is further extended: a Jupyter notebook interface has been deployed alongside the data, providing users with all the data access and processing tools already available within the TAMP platform as libraries for further exploitation; the user can thus exploit the capabilities of the platform, including the freedom to write and run their own code directly on the VRE; moreover, the user has the possibility to interact with other existing VREs, using external data and processing resources. Finally, the interactive 3D visualization capabilities of TAMP have been further evolved, providing geographic (latitude, longitude, height) slicing capabilities. This interactive poster presents the VEEDAM capabilities, allowing the attendees of EO Science 2.0 to directly experience both the data visualization and exploitation potential of TAMP. [1] TAMP platform landing page http://vtpip.zamg.ac.at/ [2] TAMP introductory video https://www.youtube.com/watch?v=xWiy8h1oXQY


  • 17:38 - Grand Ethiopian Renaissance Dam break scenario and expected effects over the archaeological sites
    Elshobaki, Mohamed (1); Elfadaly, Abdelaziz (2) - 1: Università degli Studi dell'Aquila-Italy; 2: Università degli Studi della Basilicata-Italy

    The Nile River flows for 6,700 kilometres through ten countries in north-eastern Africa and is classified as one of the longest rivers in the world. Two main tributaries supply the discharge of the Nile: the White Nile and the Blue Nile. The Blue Nile currently attracts particular attention due to the construction of the Grand Ethiopian Renaissance Dam Project (GERDP). There is a large debate in the downstream countries, Egypt in particular, about the consequences of such a project, and the GERDP raises serious concern in case of failure. We therefore provide a full numerical simulation based on the shallow water mathematical model, solved with TELEMAC-2D, which provides the necessary water flow information in case of a dam break. Satellite data, including Sentinel-1 imagery and a DEM, are used to supply the initial and boundary conditions of the simulations. The overall aim is to use the extracted simulation data to assess the potential impact of a GERDP failure on the archaeological areas along the river banks. Detecting the expected environmental risks should supply decision-makers with understandable information to protect cultural heritage sites. Tsunami waves are expected.
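    [Editor's note] For context, the depth-averaged shallow water equations that TELEMAC-2D solves can be stated in their textbook conservative form, with bed slope and bottom friction as source terms. This is a generic statement, not the full TELEMAC-2D formulation, which adds further terms (turbulent diffusion, Coriolis, wind stress).

    ```latex
    % h: water depth, (u, v): depth-averaged velocity, g: gravity,
    % z_b: bed elevation, tau_b: bed shear stress, rho: water density.
    \begin{align}
    &\frac{\partial h}{\partial t}
     + \frac{\partial (hu)}{\partial x}
     + \frac{\partial (hv)}{\partial y} = 0,\\
    &\frac{\partial (hu)}{\partial t}
     + \frac{\partial}{\partial x}\left(hu^{2} + \tfrac{1}{2}gh^{2}\right)
     + \frac{\partial (huv)}{\partial y}
     = -gh\,\frac{\partial z_b}{\partial x} - \frac{\tau_{b,x}}{\rho},\\
    &\frac{\partial (hv)}{\partial t}
     + \frac{\partial (huv)}{\partial x}
     + \frac{\partial}{\partial y}\left(hv^{2} + \tfrac{1}{2}gh^{2}\right)
     = -gh\,\frac{\partial z_b}{\partial y} - \frac{\tau_{b,y}}{\rho}.
    \end{align}
    ```

    The DEM supplies the bed elevation z_b, and the Sentinel-1-derived water extents constrain the initial and boundary conditions mentioned in the abstract.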


  • 17:41 - Crowdsourcing EO datasets to improve cloud detection algorithms and land cover change
    Aleksandrov, Matej (1); Batič, Matej (1); Milčinski, Grega (1); See, Linda (2); Perger, Christoph (2); Moorthy, Inian (2); Fritz, Steffen (2) - 1: Sinergise, Slovenia; 2: International Institute for Applied System Analysis (IIASA)

    Involving citizens in science has been gaining considerable traction of late. Following positive examples (e.g. Geo-Wiki, FotoQuest Austria), a number of projects are exploring options to engage the public in contributing to scientific research, often by asking participants to collect data or validate results. The International Institute for Applied Systems Analysis (IIASA), with extensive experience in crowdsourcing and gamification, has joined Sinergise, Copernicus Masters 2016 winners, to engage the public in an initiative involving ESA’s Sentinel-2 satellite imagery. Sentinel-2 imagery offers a high revisit frequency and sufficient resolution for land change detection applications. Unfortunately, simple (but fast) algorithms often fail due to many false positives: changes in clouds are perceived as land changes. The ability to discriminate cloudy pixels is thus crucial for any automatic or semi-automatic solution that detects land change. A plethora of algorithms to distinguish clouds in Sentinel-2 data is available. However, there is a need for better data on where and when clouds occur to help improve these algorithms. To overcome this current gap in the data, we are engaging the public in this task. Using a number of tools developed at IIASA, and the Sentinel Hub services, which provide fast access to the entire global archive of Sentinel-2 data, the aim is to obtain a large data resource of curated cloud classifications. The resulting dataset will be published as open data and made available through the Geopedia platform. The gamified process will start by asking users whether there are clouds in a small image (e.g. 8x8 pixels at the highest Sentinel-2 resolution of 10 m/px), which will provide a screening process to pinpoint cloudy areas, employing the Picture Pile crowdsourcing game from IIASA. The next step will involve a more detailed workflow: users will get a slightly larger image (e.g. 64x64 pixels) and will be asked to delineate different types of clouds: opaque clouds (nothing is seen through the clouds), thick clouds (where the surface is still discernible through the clouds), and thin clouds (where the surface is unequivocally covered by a cloud); the rest of the image will be implicitly cloud-free. The resulting data will be made available through the Geopedia portal, both for exploring and for downloading. This paper will demonstrate this process and show some results from a crowdsourcing campaign. The approach will also allow us to collect other datasets in a rapid and efficient manner. For example, using a slightly modified configuration, a similar workflow could be used to obtain a manually curated land cover classification dataset, which could be used as training data for machine learning algorithms.
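    [Editor's note] The screening step implies cutting scenes into small chips. A minimal numpy sketch of that tiling, under stated assumptions: 'b02.tif' is a hypothetical 10 m Sentinel-2 band GeoTIFF, and a real deployment would also keep per-tile georeferencing.

    ```python
    # Cut one band into 8x8-pixel chips, as in the abstract's screening example.
    import numpy as np
    import rasterio

    TILE = 8

    with rasterio.open('b02.tif') as src:
        band = src.read(1)

    rows, cols = band.shape
    rows -= rows % TILE                      # crop to a multiple of the tile size
    cols -= cols % TILE
    tiles = (band[:rows, :cols]
             .reshape(rows // TILE, TILE, cols // TILE, TILE)
             .swapaxes(1, 2)
             .reshape(-1, TILE, TILE))       # one 8x8 chip per row

    print(tiles.shape)                       # (n_tiles, 8, 8)
    ```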


Exh/Demo/Exhibitions - TEP Demo

18:00 - 19:00

  • 18:00 - Exhibitions - TEP Demos 1

    Exhibitions - TEP Demos 1


Day 2 - 26/09/2017

Lightning Talks

16:35 - 17:35

  • 16:35 - Moscow Surface Drain Net As Open Data And Open Shared Resource
    Karfidova, Ekaterina A.; Batrak, Gleb I. - Institute of Environmental Geoscience (IEG RAS), Russian Federation

    The city of Moscow stands on the Moskva River and is located on seven hills. There are more than 140 rivers and 400 ponds within the city territory. The river network has been strongly transformed: some rivers were buried, others were canalized. The calculation of the drain network is therefore very important for the city. Drain-net modeling uses well-known hydrological simulation methods based on a digital elevation model (DEM). The Moscow drainage network was built on the basis of radar data (SRTM version 4.1, http://srtm.csi.cgiar.org/SELECTION/inputCoord.asp) at the Institute of Environmental Geoscience RAS (doi: 10.5176/2251-3353_GEOS16.35). In the GIS project (ArcGIS/ArcView software) the following were calculated: the drainage network (flow direction, flow length, flow accumulation zones), closed local lowlands, watershed boundaries, the depth of river and gully erosion, directed flows with the stream order of inflows, and the topographic wetness index. These data are necessary both for the population in emergency situations during extremely high precipitation and for different urban specialists: for the design of storm sewage and the maintenance of storm drains, for hydrogeological models, to assess natural risk and the transmission of pollutants in the surface runoff network, and to preserve the natural landscape. The project “Calculated Map Of Drain Net For Moscow Territory” / Team “Tarakanovka” was presented at the II All-Russian Open Data Contest (http://www.opendatacontest.ru/projects/karta_skhema_raschetnoy_seti_poverkhnostnykh_stokov_na_territorii_moskvy_/). At present there are two main directions of open data development in Moscow. The first is the Open Data Portal of the Government of Moscow (https://data.mos.ru/), where more than 300 thematic datasets (education, sports, health, …) have been published; data are presented in tabular and cartographic form and in machine-readable formats for developers, using the API of the Open Data Portal. The target audience is residents and guests of Moscow; scientific data are not presented. The second is the portal of the united geoinformation space in the Moscow integrated automated information system for urban development (http://egip.mka.mos.ru/egip/). Access to the information is separated into a simple common part (for inhabitants) and a special part for the professional community (executive authorities, developers, designers: http://mka.mos.ru/specialists/isogd/). Users of the common part can view raster images; specialists can download vector data. Unfortunately, scientific organizations are not included in this professional community. In the interests of the development of society, it is advisable to change the principles of open data development to support scientific initiatives and to include scientific data in the composition of urban information resources. The Moscow surface drain net as an open shared resource could then be of great benefit in the development of the city.
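    [Editor's note] The flow-direction step underlying such DEM-based drain-net calculations can be illustrated with the classic D8 method. This is a generic, unoptimized sketch (the 'dem.npy' input and the ESRI power-of-two direction codes are assumptions), not the ArcGIS implementation the authors used; production code would vectorize this or use libraries such as richdem or pysheds.

    ```python
    # D8 flow direction: each cell drains to the steepest-descent neighbour.
    import numpy as np

    dem = np.load('dem.npy').astype('float32')
    rows, cols = dem.shape
    flowdir = np.zeros((rows, cols), dtype='uint8')

    # 8 neighbours: (drow, dcol, ESRI code, distance in cell units).
    NEIGHBOURS = [(-1, 0, 64, 1.0), (-1, 1, 128, 2**0.5), (0, 1, 1, 1.0),
                  (1, 1, 2, 2**0.5), (1, 0, 4, 1.0), (1, -1, 8, 2**0.5),
                  (0, -1, 16, 1.0), (-1, -1, 32, 2**0.5)]

    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            best_code, best_slope = 0, 0.0
            for dr, dc, code, dist in NEIGHBOURS:
                slope = (dem[r, c] - dem[r + dr, c + dc]) / dist
                if slope > best_slope:
                    best_code, best_slope = code, slope
            flowdir[r, c] = best_code     # 0 marks pits / flat cells
    ```

    Flow length, accumulation zones and watershed boundaries then follow by walking or aggregating along these directions.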


  • 16:38 - Leveraging the value of Crowdsourcing to Understand Local Environmental Changes
    Mazumdar, Suvodeep (1); Horrocks, Matthew (1); Ireson, Neil (1); Drapkin, Julia Kumari (2); Leininger, Daniel (2); Wagner, Lindsey (2); Scazzosi, Jacopo (2) - 1: University of Sheffield, United Kingdom; 2: iSeeChange

    The massive impact of global warming and climate change is a growing concern and a topic of much research today. A major aspect of this is understanding when observable changes occur, to inform how our climate is changing. While sensors and weather stations can provide quantifiable data to measure various aspects of our environment, such as temperature, humidity and rainfall, some changes are subtle, qualitative and highly localised. Citizens residing in their neighbourhoods often notice these changes, based on their memories of past conditions. For example, ‘the number of gypsy moths has been increasing lately’ or ‘this year lizards are out much earlier’ may be of interest to climate researchers but could be highly localised observations. Such observations often rely on the experience of individuals and communities and are hence rather difficult to quantify as measurable sensor readings. However, if such information were available and analysed at a much larger scale, it could potentially be a highly valuable resource for climate researchers. To this end, iSeeChange is a community crowdsourced climate and weather platform that enables users to document such environmental changes, share observations and discuss the impacts over time via a mobile and desktop application. The app combines citizen science, public media and satellite and sensor monitoring to observe and collect data on how weather and climate are changing daily life. In this talk, we will discuss our analysis of historical observations made by users of the application. In particular, it is important to understand what kind of information communities provide via such mechanisms and how we can leverage the value of such observations. We will discuss how external datasets and semantic web technologies provide means to automatically aggregate and analyse large volumes of such data. We will finally share future plans for analysis and provide insight into how such user observations can be analysed and processed, eventually providing means for identifying climate change events.


  • 16:41 - Data Enrichment of Sentinel-2 and Landsat-8 Surface-reflectance Measurements for Agriculture Oriented Services
    Brkljač, Branko (1,2); Lugonja, Predrag (1,3); Minić, Vladan (1,3); Brdar, Sanja (1,3); Crnojević, Vladimir (1,3) - 1: University of Novi Sad, Serbia; 2: Faculty of Technical Sciences; 3: BioSense Institute

    Since the first attempts to utilize the useful information contained in surface-reflectance measurements of plants, almost fifty years ago, beginning with the introduction of the “simple ratio” index in 1969 and, subsequently, in 1973 the famous normalized difference vegetation index, there has been a need to derive quantities that ease the interpretation of the original measurements and improve their usefulness. The domain knowledge accumulated over this long period has produced a vast diversity of surface-reflectance-derived broadband vegetation and spectral indices, specially designed to fulfil user needs in characterizing plant health and growth conditions. With the advancement of satellite imaging technology and related data policies in recent years, these previously introduced quantities (in the form of spectral indices) have gained value as efficient tools for the simple and effective characterization of complex biophysical processes at large scales and with increasing spatial resolution. Although these indices were initially designed to be computationally simple, due to technology constraints, they have proved successful in numerous applications. This area of research is still very active, aimed at improving their robustness to environmental factors like soil variability and its properties. Currently available computational power enables the design of large data cubes as aggregating structures that can incorporate an abundance of previously designed and finely tuned spectral indices, which opens new possibilities for their application. Feature extraction workflows in the domain of land cover and land use classification are one application area that can benefit from the enrichment of the original measurements through the computation of known spectral indices available in the literature. Thus, a significant amount of currently available domain knowledge can be incorporated directly into the feature engineering process. A large number of different spectral indices also contributes to the overall capabilities of more traditional applications, like the visualization of agriculture-related processes and easier photo-interpretation through e.g. web-based services. In this context, we made a comprehensive overview and implemented a processing workflow with more than 40 of the most significant broadband vegetation and spectral indices, equipped with Matlab and Python programming interfaces. In this way, the original data cubes provided by the Sentinel-2 and Landsat-8 multispectral instruments are further enriched, offering enhanced discriminability in tasks such as classification and change detection, as well as improved visual interpretation, demonstrated through an interactive web service.
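    [Editor's note] The enrichment idea can be sketched with the two indices the abstract dates. A minimal sketch, assuming Sentinel-2-style reflectance bands already loaded as numpy arrays; only two of the 40+ implemented indices are shown, and the band keys 'B04'/'B08' are illustrative.

    ```python
    # Stack derived spectral indices on top of the original bands.
    import numpy as np

    def simple_ratio(nir, red):
        """Simple Ratio (Jordan, 1969): SR = NIR / red."""
        return nir / (red + 1e-6)

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index (Rouse et al., 1973)."""
        return (nir - red) / (nir + red + 1e-6)

    def enrich(bands):
        """bands: dict of 2-D arrays keyed by band name, e.g. 'B04', 'B08'."""
        layers = [bands['B04'], bands['B08'],
                  simple_ratio(bands['B08'], bands['B04']),
                  ndvi(bands['B08'], bands['B04'])]
        return np.stack(layers, axis=0)    # (n_layers, rows, cols)
    ```

    Appending such layers to the data cube is what hands the accumulated index literature directly to downstream classifiers as ready-made features.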


  • 16:44 - Sentinel hub integration into GET SDI PORTAL
    Symeonidis, Panagiotis; Mavrellis, Gabriel; Vakkas, Theodoros - Geospatial Enabling Technologies, Greece

    Sentinel Hub is an innovative platform, developed by Sinergise, that provides access to near-real-time and historical Earth Observation data (Sentinel-2, Sentinel-3, Landsat, MERIS) using standard OGC web services (WMS, WCS). GET SDI PORTAL is an open-source geospatial web visualization platform developed by GET. The software is ISO/OGC standards compliant, modular and extensible, addressing the limited availability of ready-to-use open-source geoportal software. It utilizes several open-source geospatial projects and tools, like GDAL, OpenLayers and GeoServer, in order to provide a feature-rich web interface for spatial data visualization and analysis. GET SDI PORTAL can connect to different data sources using the OGC standards for data access, like WMS, WFS, WCS, CSW, KML, GeoJSON etc. By integrating the Sentinel Hub services into GET SDI PORTAL, users can easily visualize and work with Earth Observation data. They can select the sensor and the visualization type among several options, from RGB band combinations (natural color; color infrared – vegetation; false color – urban, agriculture, geology) to indices like NDVI, LAI, SWIR, NDWI, the Moisture Index and many more. The application automatically provides a mosaic created from several different satellite images, according to the user's specifications (date, cloud coverage). Custom tools have been developed to enhance the user experience, among them image enhancement (atmospheric correction, application of different gain and gamma values to the images), date selection and time-lapse animation, and image comparison of different EO products using a swipe tool (for the same location and acquisition date, or for the same location but different acquisition dates). These tools can provide valuable insights (like the identification of burned areas, droughts or flooded areas) without any technical or remote sensing skills. In addition, the built-in functionalities of GET SDI PORTAL, like metadata management, the ability to overlay additional raster or vector layers, query and filter capabilities on attribute data, spatial analysis tools (using WPS services and tools) and data export functions, result in a complete platform for Earth Observation data visualization and analysis.
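    [Editor's note] Because the integration is pure OGC, any WMS client can reproduce the basic request. A minimal sketch using owslib; the instance URL is a placeholder for a real Sentinel Hub configuration, and the layer name 'NDVI' must exist in that configuration.

    ```python
    # Fetch one NDVI rendering over an area of interest via standard WMS.
    from owslib.wms import WebMapService

    wms = WebMapService(
        'https://services.sentinel-hub.com/ogc/wms/<INSTANCE_ID>',  # placeholder
        version='1.1.1')

    img = wms.getmap(layers=['NDVI'],
                     srs='EPSG:4326',
                     bbox=(23.5, 37.8, 23.9, 38.1),   # Athens area, lon/lat
                     size=(512, 512),
                     format='image/png',
                     time='2017-07-01')

    with open('ndvi.png', 'wb') as f:
        f.write(img.read())
    ```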


  • 16:47 - How to visualise the stability of civil infrastructures through an advanced use of the DInSar technology: the I.MODI project
    Marsella, Maria Antonietta; Scifoni, Silvia; D'Aranno, Peppe Junior Valentino; Giangiacomo, Valeria - Survey Lab s.r.l., Italy

    I.MODI is an added-value service, created via a European project funded through an H2020 initiative, that integrates EO technologies, aerial and ground-based data, and ICT to create visualized data easily accessible to all kinds of users, including non-EO professionals. I.MODI uses DInSAR data to examine structural stability, assessing the level of damage suffered and evaluating its future evolution. Monitoring structural stability in urban areas and large infrastructure networks is emerging as one of the dominant socio-economic issues for the safety of the population. The problem is accentuated by the age of the constructions, exposed to increasing risks due to material deterioration and loss of loading capacity. This becomes a civil protection issue when the structures are threatened by the evolution of natural and man-made ground deformation processes. In the latter case, the monitoring system is strictly devoted to safeguarding the population and has a primary role in setting up mitigation and prevention actions, as well as in the implementation of an alert system. To date, the evaluation of risks associated with the deformation of a structure has used ground-based methods, able to measure displacements at the surface or in boreholes, and direct analyses such as in-situ inspections and investigations. These methods, although accurate at a local scale, require placing devices on the structures (a destructive method), which is expensive and not always feasible due to accessibility and logistic constraints. In addition, due to the extension, capillarity and frequency required for the monitoring of large urban areas, critical infrastructures (plants) and networks extended at a national scale (road, railway, airport), an approach based only on in-situ measurements would require huge resources, not available today. To guarantee systematic and comprehensive control of structural stability over large areas, satellite remote sensing can be effectively adopted. Among the different methods based on passive and active satellite sensors, Differential Interferometry SAR (DInSAR) technology, the technology chosen for I.MODI, today represents an adequate alternative in terms of providing data that, for precision, reliability and cost sustainability, can be fully assimilated within a monitoring approach based on in-situ data. A web-based customized version of I.MODI is now in its final development stage, with the aim of completely integrating the EO data within the standard procedures based on in-situ technologies (GNSS and ground surveying). EO and non-EO inputs will be linked and managed adopting open standards for data documentation and using ICT technology to furnish an added-value service to final users (companies, professional operators as well as private citizens). Moreover, the service foresees customized applications for different market segments and monitoring procedures.


  • 16:50 - Saturnalia, or Sentinel-based Vine Monitoring: from the ESA App Camp to a Real-world Application
    Dell'Acqua, Fabio (1,2); De Vecchi, Daniele (1,2); Galeazzo, Daniel Aurelio (1) - 1: Ticinum Aerospace s.r.l. - Pavia, Italy; 2: University of Pavia, Italy

    Each year, the European Space Agency organises the ESA App Camp [1], where selected young innovators gather at ESRIN for a full week of immersion in coding algorithms for mobile devices that turn Earth Observation (EO) data into valuable information. At the 2016 edition, a team of four young scientists from across Europe put together the winning idea, named Saturnalia [2]. Saturnalia was conceived as a system using open Sentinel EO data and in-situ data to monitor the growing conditions of vines in typical production areas of fine wine. The latter is increasingly seen as a valuable asset, as well as a comparatively safe and steadily well-performing investment. The International Organisation of Vine and Wine (OIV) estimated a total market value of 28.3 B€ in 2015 [3]; international wine trade is also expanding, with currently around 43% of all wine consumed in a different country from the one where it was produced [4]. In this context, advance knowledge of wine quality and quotations is a key success factor for wine traders. Saturnalia is meant to collect all relevant space-based (Sentinel) and ground-based (in-situ sensor) data to estimate the quality of wine from growing conditions, in advance of bottling. After the success at the App Camp, part of the winning team continued developing the application. As the result of successive, intensive development efforts, Saturnalia is now a pre-operational system ingesting open Sentinel data over early experiment areas and combining it with ground-based information to test and prove the hypothesised correlations with assessed wine quality. In the full paper, we will provide more details about the system and its development, and we will discuss it as a concrete example of how the open data policy of Copernicus is translating into more EO-based services becoming financially viable and new business being effectively triggered by the flood of open, space-based remotely sensed data. References [1] The European Space Agency “App Camp” contest. Online at http://www.app-camp.eu/ [2] The Frascati App Camp Hall of Fame. Online at http://www.app-camp.eu/hall-of-fame-frascati/ [3] International Organisation of Vine and Wine 2016 report. Online at: http://www.oiv.int/public/medias/4587/oiv-noteconjmars2016-en.pdf [4] Forbes. “The Global Wine Business in 2015”. Online at: http://www.forbes.com/sites/karlsson/2016/04/21/the-global-wine-business-in-2015-stable/#795e331b23f9


  • 16:53 - CoastalCast: Copernicus Masters makes Earth Observation serve coastal economy
    De Vecchi, Daniele (1,2); Dell'Acqua, Fabio (1,2) - 1: Ticinum Aerospace s.r.l. - Pavia, Italy; 2: University of Pavia, Italy

    The European Space Agency (ESA) and Anwendungszentrum GmbH Oberpfaffenhofen (AZO), supported by several global partners, launched the Copernicus Masters (CM) initiative [1] in 2011 to foster user uptake of Copernicus services [2]. Copernicus Masters is an international competition awarding prizes to innovative solutions for business and society based on Earth observation data; as such, it often becomes the moving force promoting cutting-edge solutions in the field. Each year, besides the different prize categories, CM offers in-kind support to the most valuable competition entries through consulting, active tutoring and dedicated webinars. With this paper, we present an application idea which we submitted to the Catapult Challenge (CC), organised by the UK Space Agency within CM. The idea, named CoastCast, was selected for tutoring and has reached a good stage of development. Although sophisticated financial tools have been developed, it remains a complex task to reliably estimate the risk and revenues connected to an investment in the real estate domain [3], especially in coastal areas. The value of buildings and land parcels in coveted coastal areas is generally high and may vary suddenly due to external factors such as a change in environmental conditions or regulations, not to mention natural disasters. Much of the risk threatening the investment can be estimated and forecast through suitable risk models [4], and companies exist whose business consists of developing risk models and applying them to concrete situations. Such models, however, typically need a large pool of input data in order to provide reliable outputs, i.e. figures that can be confidently used to assess whether a real estate investment will lead to a reasonable financial return in the intended period. The service we are developing aims at providing a significant set of risk proxies derived from EO and in-situ data (including from “citizen sensors”) that may be used at different levels: • At the most quantitative level, to feed an otherwise possibly data-starved risk model for coastal area investment; • At a qualitative level, to help single investors understand trends in the area under consideration in order to make better informed decisions on whether and how much to invest in development, renovation, etc.; • In the long run, to help refine risk models for the insurance industry operating on coastal area real estate stocks. We do not intend to build a new financial risk model, but rather to provide EO-based inputs to risk modellers, such as environmental and land cover trends. The step forward with respect to existing business consists of fusing EO data with citizen-sensor crowdsourced data, two sources that are rarely used together in risk mapping. This has become more feasible than ever, thanks to the flood of free, open data and information ensured by the ESA-European Union (EU) Copernicus initiative and its Sentinel satellites. The full paper will report preliminary, interesting findings, and will present the current service structure and its future development plans. References [1] The ESA-AZO “Copernicus Masters” initiative. Online at http://www.copernicus-masters.com/ [2] The EU Copernicus initiative. Online at http://copernicus.eu/ [3] Various authors, “Views from the Observatory: Real Estate Portfolio Risk Management and Monitoring”. Morgan Stanley Real Estate Investing, July 2015.
Online at: https://www.morganstanley.com/assets/pdfs/articles/ViewfromObservatory.pdf [4] Devin Bunten, Matthew E. Kahn, Optimal real estate capital durability and localized climate change disaster risk, Journal of Housing Economics, Volume 36, June 2017, Pages 1-7, ISSN 1051-1377, https://doi.org/10.1016/j.jhe.2017.01.004


  • 16:56 - Assessment of Copernicus Global Land Products from global networks of field observatories - Concept and demonstration
    Sicard, Pierre (1); Lopez, Ernesto (2); Fell, Frank (3); Ghent, Darren (4); Dash, Jadu (5); Muller, Jan-Peter (6); Lerebourg, Christophe (1) - 1: ACRI-ST, France; 2: University of Valencia, Spain; 3: Informus Gmbh, Berlin Schöneberg, Germany; 4: University of Leicester, United Kingdom; 5: University of Southampton, United Kingdom; 6: University College London, United Kingdom

    The Copernicus programme, e.g. through the Copernicus Global Land Service, provides a wide range of products to fulfil monitoring needs for the evaluation and planning of environmental and climate policies, and hence to support a sustainable management of natural resources. In the context of global climate change and the design and implementation of adjustment/resilience policies, there is a pressing need not only i. for environmental monitoring, e.g. through a range of Earth Observation (EO) land “products”, but also ii. for a precise assessment of the uncertainties of the aforesaid information that feeds environmental decision-making, and iii. for a proper handling of the thresholds that help translate “environment tolerance limits” to match detected EO changes through ecosystem modelling. Traditionally, the validation of satellite-derived products has taken the form of intensive field campaigns to assess product performance. This approach is marred by logistical challenges and cost/benefit issues, which is why it is complemented by permanently instrumented sites that can provide near-continuous observations at a high temporal resolution (e.g. RadCalNet). By quantifying their uncertainties, the performance of the satellite-derived products can be better understood, facilitating their appropriate use through a “fitness for use” assessment. Unfortunately, most ground-level monitoring sites, which are part of wider observation networks (e.g. FLUXNET, NEON, IMAGINES), mainly monitor the state of the atmosphere and the radiation exchange at the surface, which differ from the products derived from EO data. The Joint Research Centre has commissioned a Copernicus Global Land products validation service, based on match-ups between ground-based observations and EO-derived information, e.g. from SPOT-VGT, Sentinel-2 or Sentinel-3. It requires: 1. Collecting existing multi-year time series of ground-based measurements at stations integrated in worldwide networks. Data from these networks are well suited to the bottom-up approach and relevant to the validation of the consistency of vegetation parameters (e.g. leaf area index, fraction of absorbed photosynthetically active radiation). 2. Upgrading existing stations with new instrumentation and building up new monitoring sites. 3. Distributing reference measurements and land products to users through an easy-access gate, i.e. a web-based platform. The newly upgraded stations will cover the complexity and heterogeneity of the environment, covering major biogeographical and rare biomes (e.g. ice/polar, desert) and different ecosystem types and land cover classes (e.g. closed shrublands, woody savannas and savannas), or completing the in-situ instrument set of reference Land Cal/Val sites (e.g. Valencia Anchor, Harvard Forest). A test of the procedure for assessing the consistency of land-cover products with field measurements delivered by worldwide networks will be presented. The focus will be on i. upscaling procedures, from in-situ data to land product match-ups, and ii. continuous calibration (spectral, radiometric) and adjustment (geometric, radiometric) of processors. This work is made possible by the financial support of the JRC (contract n° 932059) in the framework of the project GBOV “Ground-Based Observations for Validation of Copernicus Global Land Products”.
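    [Editor's note] The elementary operation of such a bottom-up validation is the match-up: extracting the product pixel that overlaps a station. A minimal sketch, assuming a hypothetical 'lai_product.tif' raster in a geographic CRS and approximate station coordinates; real match-up protocols add spatial upscaling windows and temporal compositing.

    ```python
    # Sample an EO product at ground-station locations for match-up analysis.
    import rasterio

    stations = {
        'Valencia Anchor': (-1.29, 39.57),   # (lon, lat), approximate
        'Harvard Forest': (-72.17, 42.54),
    }

    with rasterio.open('lai_product.tif') as src:
        coords = list(stations.values())
        for (name, _), value in zip(stations.items(), src.sample(coords)):
            print(f'{name}: product value {value[0]}')
    ```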


  • 16:59 - RUS - Bridging the Gap between Users and Observations
    Palazzo, Francesco (1); Remondiere, Sylvie (1); Šmejkalová, Tereza (1); Scarda, Barbara (1); Bonneval, Beatrice (1); Gilles, Chloe (1); Guzzonato, Eric (2); Mora, Brice (2); Rabaute, Thierry (2); Quang, Carine (2); Dorgan, Sebastien (2); Devignot, Olivier (2); Jonquieres Creach, Katie (2); Jeansou, Eric (3); Soleilhavoup, Isabelle (3); Fabry, Pierre (4) - 1: Serco, Italy; 2: CS, France; 3: Noveltis, France; 4: Along-Track, France

    In 2014 Sentinel-1, the first of a new fleet of satellites developed by ESA for the European Commission, began systematic acquisition of Earth Observation (SAR) data over the globe. It was soon followed by the Sentinel-2 optical mission and the Sentinel-3 land and ocean monitoring mission. Acquisitions will continue for the next decades, with follow-on missions of existing satellites and new satellites with different observation capabilities. Technological and knowledge issues partially prevent user uptake of such a large volume of data. We present a service aiming to overcome these issues: the Research and User Support for Sentinel core products (RUS) service provides a free and open scalable platform in a powerful computing environment, hosting a suite of open-source toolboxes pre-installed on virtual machines, to handle and process data derived from the Copernicus Sentinel satellite constellation.


  • 17:02 - A Cloud Platform For Geoanalytics From Massive Satellite Data Processing
    Drimaco, Daniela; Abbattista, Cristoforo; Zotti, Massimo - Planetek Italia s.r.l., Italy

    The cloud-based platform developed by Planetek Italia, called Rheticus® after Nicolaus Copernicus's only pupil, provides application services based on open data, such as satellite images and geospatial, environmental and socio-cultural data available online. The main services already available on the platform are based on Sentinel-1, Sentinel-2 and Sentinel-3 satellite data. Thanks to these data, Rheticus® is capable of delivering continuous monitoring services for Earth-surface transformation phenomena such as urban evolution, landslides, fires, or the quality of marine waters. Planetek Italia is continuously working on the creation of new monitoring services through collaborations with academic and research centres. New applications may benefit from multi-source and multi-sensor analysis, as well as from merging data from heterogeneous platforms. At the same time, the new EO data exploitation scenarios require massive data-mining processing infrastructures in order to cope with the increasing data availability. Whether it is monitoring land or infrastructure, mapping fire perimeters or assessing the quality of coastal marine waters, Rheticus® works as a big hub that processes the data automatically to deliver geoinformation services ready to use in users' final applications. Automatic data analysis makes it possible to create geoanalytics and dynamic indicators and to provide actionable knowledge to decision makers. This way, engineering and utility companies and public and private organisations can easily integrate free and open geospatial information into their business processes, without having to worry about technical data analysis or needing the skills to process the data.


  • 17:05 - WASDI Platform
    Campanella, Paolo (1); Versace, Cosimo (2); Boni, Giorgio (3) - 1: FadeOut Software, Italy; 2: Acrotec, Italy; 3: CIMA Foundation, Italy

    The WASDI project is aimed at developing new software tools that will be available for the Italian National Collaborative Ground Segment. WASDI allows researchers to carry out the main operations involved in searching satellite data, in particular Sentinel data, displaying them online, running algorithms, and displaying and evaluating the results. WASDI also allows these activities to be shared among different users. The results of the calculations will then be available for download, allowing further local processing, or can be published directly through WxS standards. The WASDI project sits within a complex environment of existing tools: SNAP, DhUS, GPOD, CloudToolbox and others, each fulfilling a specific service and developed and supported over the years by ESA and ASI. The creation of the Italian Collaborative Ground Segment centred at Matera is an ideal opportunity to perform an up-to-date analysis of existing instruments. WASDI is composed of four main modules: • Catalogue: a single, unified catalogue for searching satellite images across all existing catalogues; the WASDI catalogue will act as a gateway based at least on the OpenSearch standard and will allow several existing data sources to be queried from a single access point through a unique interface and API (see the sketch below); • Visualisation: users will immediately see the retrieved data in a dedicated web-based workspace. It will be possible to navigate the loaded data with a typical web GIS and to integrate them with third-party non-EO sources that can be useful for various users; • Processing: through WASDI it will be possible to process EO data directly on the server, taking advantage of the Coll-IT facilities. The execution of these processing algorithms can benefit from the potential of the existing grid and cloud infrastructures. Hence, access, processing times and bandwidth for data transfer will be optimised; • Profiling: the User Management System will be based on open standards in use today at ESA and ASI. The user management subsystem will be responsible for managing the credentials and rights of users, integrating with the Single Sign-On system currently in use in the various integrated systems of Coll-IT.
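    As an illustration of the catalogue gateway, the sketch below issues a query in the spirit of the OpenSearch standard with its Geo and Time extensions; the endpoint URL and exact parameter names are assumptions for illustration, not the actual WASDI API.

        import requests

        # Hypothetical gateway endpoint; the real WASDI URL is not given here.
        ENDPOINT = "https://example.org/wasdi/search"

        params = {
            "searchTerms": "Sentinel-1 GRD",    # free-text query (OpenSearch core)
            "box": "9.0,45.0,10.0,46.0",        # west,south,east,north (Geo extension)
            "start": "2017-06-01T00:00:00Z",    # acquisition window (Time extension)
            "end": "2017-06-30T23:59:59Z",
            "count": 50,                        # page size
        }
        response = requests.get(ENDPOINT, params=params)
        response.raise_for_status()
        print(response.text)  # Atom or JSON feed of matching products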


  • 17:08 - Open Data Distribution Using A Virtual Hub
    Baranowski, Marek - Institute of Geodesy and Cartography, Poland

    A Virtual Hub is a useful mechanism for sharing open geospatial and Earth Observation data resources that use diverse spatial data models and encodings and are distributed in diverse ways. Such scattered data require a solution of this kind to make them available to interested communities of stakeholders through easy and harmonised means. The technology behind Virtual Hubs implements a brokering architecture for collaboration in a virtual environment. A Virtual Hub (VH) is a single point of access to the above-mentioned data and information, instead of each data source being approached individually. Datasets discovered and accessed via a VH are normalised and transformed to answer the user's needs. Open data have become very important components of the contemporary information society. They are provided by public authorities, international organisations, citizen networks, non-profit organisations and private-sector entities. Users of geospatial open data usually identify the available resources by searching on the Internet and locating the URLs of particular access points. Accessing and further processing the resources requires good technological knowledge and skills to achieve interoperability of heterogeneous data; it is mostly a time-consuming and tedious process. Thanks to the Virtual Hub platform, users can work with all linked resources in one system environment. One of the projects implementing the VH concept, ENERGIC OD (European NEtwork for Redistributing Geospatial Information to user Communities – Open Data), co-financed by the European Union, is approaching its final stage. A group of partners from France, Germany, Italy, Poland and Spain has prepared a network of Virtual Hubs solving the problem of dispersed, heterogeneous spatial data. Six Virtual Hubs have been developed (one in each partner country and an additional one in Berlin), comprising links to a variety of sources, such as INSPIRE-compliant services, Copernicus services and data in each country, and services and data provided by international organisations and citizen initiatives. The Virtual Hubs' functionality consists of a number of interoperability solutions that make the available services and data more harmonised and user-friendly. They can be used by small and medium enterprises, public institutions, universities and all interested citizens. On top of the six Virtual Hubs, ten applications have been developed. Their role is to serve as powerful examples of the use of Virtual Hubs and their resources. They are especially useful for future software developers, who can additionally utilise the APIs and JavaScript libraries prepared for them by the project partners. The applications can also deliver information built on the available services and geospatial data to end users. A number of challenges arose while developing the Virtual Hub solutions and tailoring the content of their services and data resources. One of them is the scope of applicability of the collected linked data and the level of their harmonisation. Fortunately, the functionality of the developed Virtual Hubs provides a mechanism for appending further services and data without limit and at low operational cost.


  • 17:11 - Heterogeneous data analysis in the Urban context.
    Balhar, Jakub - Gisat s.r.o., Czech Republic

    The family of TEP projects aims at bringing big data from Earth Observation satellites to diverse users from multiple areas. An important part of these projects is the visualisation of the processed data and, at least for the TEP Urban, its on-the-fly combination with other data sources such as population data or geotagged tweets. The TEP Urban project integrates the processing platform with the visualisation and analysis toolbox formerly known as PUMA (Platform for Urban Mapping and Analysis). The open-source nature of the toolbox allows other organisations to deploy the same capabilities for the integration and visualisation of other data sources. The needs of the different types of users of such a platform vary greatly, which leads to the need to distinguish between policy makers, who usually are not specialists in the GIS area, standard GIS users from municipalities, and expert users covering the topic at universities. What all these groups have in common is that they need to understand the data in relevant contexts. The datasets relevant to understanding the urban dimension of EO data come from various sources, and it is important to allow users to bring their own data to the platform. One of the key advantages of this tool is the simplicity of integrating new datasets and building statistics on them against an area of interest. The key insights are usually achieved by visualising the data in combination. The platform facilitates such usage by allowing the user to visualise the data as layers on a globe with real-world elevation, combined with statistics displayed in multiple types of charts. In order to understand which urban areas fall into which categories, it is possible to create maps coloured according to those categories.


  • 17:14 - ESA Research and Service Support helping researchers and companies develop EO-based services
    Delgado Blasco, Jose Manuel (1,2); Sabatino, Giovanni (1,2); Cuccu, Roberto (1,2); Arcorace, Mauro (1,2); Rivolta, Giancarlo (1,2) - 1: Progressive Systems Srl, Parco Scientifico di Tor Vergata, 00133 Rome, Italy; 2: ESA Research & Service Support, via Galileo Galilei snc, 00044 Frascati, Italy

    The ESA Research and Service Support (RSS) service has multi-year experience in supporting new generations of Earth Observation (EO) and data scientists with the provision of open tools and software, a virtual research environment and dedicated training. This is in line with the mission of the service, which is to enhance EO data exploitation through an operational pilot of the paradigm "bring users to data". This approach dramatically lowers the barrier to carrying out research activities and developing algorithms and downstream services, by opening up this possibility to new users and reducing the effort and resources needed by those already involved in EO. This objective is achieved by: i) offering tools and services granting fast and easy access to EO data and scalable processing resources for EO data exploitation; ii) providing expert advice to non-EO companies (including start-ups) interested in developing EO-based services, thus fostering competitiveness and knowledge of EO exploitation; iii) supporting researchers in developing new algorithms and processing; and iv) promoting EO data exploitation via university seminars and lectures. The RSS service offer is composed of several elements supporting different phases of the research process flow. The processing environments offered are: (i) customised cloud toolboxes where scientists, developers and others can fine-tune their algorithms on selected datasets, and (ii) a scalable processing environment where fine-tuned algorithms can be integrated and made available as EO applications for on-demand and/or massive processing. Results visualisation tools are made available as well. As far as the algorithm development process is concerned, including the fine-tuning phase, the RSS CloudToolbox is the basic tool offered by RSS to EO researchers. It is a customised virtual machine with pre-installed software powerful enough to speed up the development phase. Once the algorithm is deemed stable, it can be integrated into the RSS processing environment, thus bringing it close to the data, whether the scientist plans to run it on massive datasets (big data processing) or to make it available to the scientific community as a web application. In this RSS environment, high-performance computing resources using grid and cloud technologies provide the necessary flexibility to allow quick access to data and fast production of processing results. The RSS grid/cloud infrastructure counts over 90 processing nodes with a total of 2.3 TB of RAM and 490 CPUs. This represents the RSS base capacity, which is on average sufficient to satisfy users' processing requirements. When processing requests exceed the RSS base capacity, it is possible to scale up the resources by seamlessly federating additional cloud clusters. In the big data era, which started with the launch of Sentinel-1A in April 2014, data volume and processing requirements are becoming more and more challenging. Hence, the EO scientific community accessing and using RSS resources experiences greater benefits, in terms of time and cost savings, for all the activities related to the EO research process, including algorithm development, data access, processing and results analysis.


Exhibitions - TEP Demo

17:45 - 18:45

  • 17:45 - Exhibitions - TEP Demos 2

    Exhibitions - TEP Demos 2


Citizen Science

Chairs: Brovelli, Maria Antonia (Politecnico di Milano), Mathieu, Pierre Philippe (ESA-ESRIN)

08:30 - 10:30

  • 08:30 - A new method and web tool for Map Assessment and Warping (MAW) and its application to OSM data.
    Brovelli, Maria Antonia; Prestifilippo, Gabriele; Zamboni, Giorgio - Politecnico di Milano, Italy

    OpenStreetMap (OSM) is currently the world's largest crowdsourced geographic database, to which millions of volunteers contribute. By now, it has nearly 4 billion nodes, 400 million ways and almost 5 million spatial and logical relationships among the elements. This database is a precious source of information, and the data it contains can be reused by everyone in accordance with its open license, the Open Data Commons Open Database License (ODbL). Anyone is free to copy, distribute and adapt the data, provided that attribution is given to OSM and its contributors and that any alteration of the data is made available under the same license. The accuracy of OSM data is often called into question. This work tries to partly address that concern by providing a method for measuring the metric accuracy of a target layer, for instance the layer of OSM building footprints, by comparison with a second reference layer, for instance an authoritative one representing the ground truth. The method is therefore more general, allowing the evaluation of metric differences between any two layers, but it can be applied to OSM data in order to check their quality. It is complemented by a web mapping tool where users can upload their target and reference layers and get back all the relevant statistics. The procedure also provides the ability to warp the target layer onto the authoritative one with a least-squares approach based on an affine or a multi-resolution spline transformation (see the sketch below). The transformation parameters can then be applied to all layers (roads, hydrography, etc.) spatially homogeneous with the target one (i.e. the building footprints layer), making them spatially homogeneous with the reference. Besides the procedure and the web application, statistics for some large Italian and foreign cities will be shown (Milan, Paris, Berlin, San Francisco, Boston), comparing OSM data with official data accurate at the urban scale (from 1:1000 to 1:5000).
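    The least-squares affine warping can be summarised as follows: given n matched points between the target and reference layers, the six affine parameters minimising the squared residuals are obtained from two linear systems. A minimal numpy sketch (illustrative; the authors' tool also supports multi-resolution spline transformations, not shown here):

        import numpy as np

        def fit_affine(target_pts, ref_pts):
            """Least-squares estimate of the 2x3 affine matrix mapping
            target_pts onto ref_pts; both are (n, 2) coordinate arrays."""
            x, y = target_pts[:, 0], target_pts[:, 1]
            A = np.column_stack([x, y, np.ones_like(x)])
            # u = a*x + b*y + c  and  v = d*x + e*y + f
            (a, b, c), *_ = np.linalg.lstsq(A, ref_pts[:, 0], rcond=None)
            (d, e, f), *_ = np.linalg.lstsq(A, ref_pts[:, 1], rcond=None)
            return np.array([[a, b, c], [d, e, f]])

        def apply_affine(M, pts):
            """Apply a 2x3 affine matrix to (n, 2) points."""
            return pts @ M[:, :2].T + M[:, 2]

    The same estimated matrix can then be applied to any other layer of the target dataset, which is what makes the warped layers spatially homogeneous with the reference.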


  • 09:00 - Citizen Scientists Measuring Australia’s Water Quality
    Anstee, Janet; Ford, Phillip; Malthus, Tim - CSIRO, Australia

    Water colour is a very informative indicator of the ecological state of marine and fresh waters. Until recently, however, colour information has only been measurable with a suite of unwieldy scientific instruments. With the ubiquity of smartphones, several phone applications have been developed which offer the exciting prospect that water colour measurements and associated metadata can now be collected over wide spatial and temporal scales by citizen scientists. A new project starting in July 2017 aims to harness the enthusiasm of Australian citizen scientists interested in the water quality of their local regions. These individuals will generate a large pool of valid data to calibrate satellite information and provide a synoptic overview of river, lake and coastal water quality for natural resource management in Australia. The usefulness of the data for the calibration of satellite remote sensing data, and its applicability in water quality management, for example as an early warning system for harmful algal blooms, will be assessed. Each participant will gain a quantitative understanding of how local water bodies change seasonally and in response to short-duration events such as floods and cyclones. As Australia has vast underpopulated regions, a targeted group of citizen scientists will include indigenous rangers working in isolated regions of tropical Australia where scientific investigations are limited or non-existent. Once organisations and communities are engaged, CSIRO will provide training in the mobile phone application, the deployment of a Secchi disk, the use of a basic chemical water quality test kit, and data transfer. Data will be uploaded by the participants and results will be returned to the users through a website and regular newsletters. This presentation will outline the early stages of implementation of the project, the benefit to citizen scientists in enhanced knowledge of water quality issues and environmental awareness, and the benefits provided in the calibration and validation of satellite data.


  • 09:15 - Earth Science in Real Time
    Douglas, Elizabeth; Lukaszczyk, Agnieszka; Mascaro, Joseph - Planet Labs Germany GmbH, Germany

    The value of science and data has, as of late, been a hot-button topic. Demonstrators are marching and headlines are being written about future government investment in data collection in support of science. And even while some governments are pulling back, we are in the midst of a global sensing revolution. Innovations in satellite and rocket manufacturing have ushered in a step change: 2017 was the first year in which more than 100 Earth observation satellites were launched. Today, most locations on the land surface have their picture taken multiple times per day. Ecological, geological and human-caused changes on the Earth's surface are visible like never before: the demographics of remote tropical trees, flowering, dropping their leaves, and flushing; alpine lakes, dammed by glacial ice, draining and re-filling; sand encircling Indian Ocean islands, ebbing with seasonal currents. And yet, each of these dynamic, diverse Earth systems is under intense and increasing threat from climate change, human expansion and loss of biological diversity. If it is coupled with accessibility for scientific use, this global sensing revolution gives the scientific community an unprecedented opportunity to conduct Earth science in real time. So far, the results are encouraging, with more and more commercial data becoming accessible to scientists and researchers. But this new vantage comes with challenges: multiple sensors and platforms, operated by scores of agencies, mean analyses must be increasingly interoperable to keep pace. The study of change on our planet must be rigorously tuned to technological changes in the very sensors that allow us to track that change. How we meet the challenge of synthesizing and drawing insights from these evolving data streams, as a global community, will determine our planet's fate.


  • 09:30 - Including participative observation into environmental sciences: the Simplex system
    Mangin, Antoine (1); Rueda, Sebastien (1); Serra, Romain (1); Vincent, Chloé (1); Hembise, Odile (1); Baudin, Thomas (2); Riddell, Mike (3) - 1: ACRI, France; 2: Synext, France; 3: Université Internationale de la Mer - Cagnes/Mer, France

    In 2010, a participatory web-based system (meduse.acri.fr) was launched to report on the presence/absence of jellyfish on the French Riviera. All citizens, locals and tourists alike, were invited to report on jellyfish. The result is twofold: i) a near-real-time risk map, widely open to the public, to help manage recreational bathing, and ii) a means of obtaining statistical data, useful for scientific studies of the correlation between jellyfish presence and specific environmental conditions (observable in situ and from space); scientific papers have been published making use of these "crowd" data and have thereby demonstrated the quality of this approach for scientific purposes. This precursor system has been fully adopted by a very large public and can reasonably be considered a resounding success (around 30,000 observations reported, which makes this dataset the largest in Europe regarding jellyfish, more than 1.3 million connections, 7,500 registered observers…). The reporting/mapping system is still very much alive and is regularly promoted by public media (mainly for public health purposes). Based on this (unexpectedly large) success, and on lessons learnt regarding public acceptance of this type of monitoring, we have developed a generic system, Simplex(TM), based on the contribution of the public, or semi-public groups, for scientific applications. The development of this system is supported by the French Ministry of Environment, and it enables any type of environmental monitoring in complement to other types of observation (EO, modelling, in situ…). Simplex(TM) consists of three complementary branches: i) a generic and simple smartphone application for an observing cohort; ii) a web GIS allowing management of this cohort by a coordinator and cross-analysis with external inputs (environmental and socio-economic); and iii) NRT direct access to Copernicus data (EO and models) at the time/location of the observer and for the coordinator. Simplex(TM) is presently in beta version and under test in four applications with various stakeholders (sea mammal monitoring, macro-waste at sea, …). Results and possible benefits for EO validation will be presented at the EO Open Science workshop.


  • 09:45 - The E2mC Project: An Innovative Approach to Combine Social Media and Crowdsourcing for Rapid Mapping
    Corsi, Marco (1); Grandoni, Domenico (1); De Vendictis, Laura (1); Francalanci, Chiara (2); Pernici, Barbara (2); Scalia, Gabriele (2); Fernandez-Marquez, Jose Luis (3); Mondardini, Rosy (3) - 1: e-GEOS, Italy; 2: Politecnico di Milano, Italy; 3: University of Geneve, Switzerland

    The goal of the E2mC project is to demonstrate the feasibility and usefulness of integrating social media analysis and crowdsourced information within both the Rapid Mapping and Early Warning Components of the Copernicus Emergency Management Service (EMS). Since 2012, the Copernicus EMS has provided timely information for emergency response in relation to different types of disasters, as well as prevention, preparedness, response and recovery activities. It operates through four modules (Rapid Mapping, Risk & Recovery, the European Forest Fire Information System (EFFIS) and the European Flood Awareness System (EFAS)) that constitute two main Service Components: Mapping and Early Warning. Through these two components, Copernicus EMS supports crisis managers, Civil Protection authorities and humanitarian aid actors dealing with natural disasters, man-made emergency situations and humanitarian crises during the different phases of crisis management. The experience gained in recent years of operational service has highlighted the need to enhance the performance of both components in terms of timeliness, quality and production capacity. Regarding the Mapping Component, satellite images are the key source for map production. One of the main user requirements of the EMS is to receive first crisis information within 24 hours of the disaster, while today it is not unusual to experience delays of up to 72 hours, mainly related to the availability of the first usable post-event satellite image and due to various causes, including satellite tasking and orbital constraints, bad weather conditions and late activation. Moreover, crisis maps based purely on satellite information have known quality limitations due to the physical constraints of satellite acquisitions (e.g. resolution or near-nadir view) that negatively affect accuracy. Social media have proven particularly valuable for collecting information in emergency situations, such as earthquakes (in particular very destructive ones such as Haiti and Nepal) and large floods and hurricanes (such as Hurricane Matthew and the floods in the Philippines), in the different phases of such events. Preliminary analyses and surveys conducted in the requirements-analysis phase of the project have highlighted that the operators involved in crisis map production during an event (Rapid Mapping service) use social media information gathered manually, mainly through Internet search and download. A fundamental innovation of E2mC is to provide a technology platform that supports the automated retrieval and analysis of social media information, combined with crowdsourcing, with the general goal of improving the quality and dependability of the information provided to professional users within the Copernicus network. This paper discusses the E2mC approach to the integration of social media and crowdsourcing within the Copernicus Rapid Mapping and Early Warning Service Components, and the functional architecture of the platform that will be developed by the E2mC team to support this integration. Preliminary results are discussed, providing insights into the type, quality, amount and timeliness of the information that can be found on social media during an emergency and into the tasks that are the best candidates for crowdsourcing.


  • 10:00 - Developing Methods To Use Citizen Observations As A Part Of A Long-Term Efficient Monitoring System
    Pyhälahti, Timo; Lindholm, Matti; Korhonen, Sami; Bruun, Eeva; Heinilä, Kirsikka; Alasalmi, Hanna; Junttila, Sofia; Lehto, Samuli; Keto, Vesa - Finnish Environment Institute SYKE, Finland

    Data sources for a sustainable, long-term environmental monitoring system intended for official or operational use should be reliable, consistent and unbiased, both in the spatial distribution of observations and in the distribution of observed values. Citizen observations (CO) struggle to meet all of these requirements. However, there is potential both for cost-effective data gathering and for citizen involvement, with cooperation and information dissemination benefits beyond the acquisition of environmental data. In fact, at the current state of technology, the problem is not the acquisition of georeferenced data from mobile communication devices; rather, it is how to motivate observers to commit to providing information among other possible competing activities. With the outlined process of introducing participatory citizen observations, the motivation of observers and the expectations of data-gathering organisations can meet. By committing to extended Open311 information exchange practices for the questionnaires that formalise the provided CO data, similar questions can be provided across different user platforms and interfaces, with support for multiple languages (a sketch of such a request is given below). Practices for ensuring service discovery, data availability and quality are outlined for a multi-organisation framework. Water-quality-related CO measurements with the Secchi3000 technology are reviewed, with their potential for synergy with satellite Earth Observation (EO).
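    As an illustration of the Open311-based exchange, the sketch below posts a citizen observation following the GeoReport v2 pattern of the Open311 specification; the endpoint, API key, service code and attribute names are placeholders chosen for illustration, not values defined by SYKE.

        import requests

        # Placeholder endpoint and credentials; real values depend on the deployment.
        ENDPOINT = "https://example.org/open311/v2/requests.json"

        payload = {
            "api_key": "YOUR_API_KEY",
            "service_code": "water_observation",   # hypothetical service code
            "lat": 60.1699,
            "long": 24.9384,
            "description": "Secchi depth reading from a citizen observer",
            # Questionnaire answers carried as Open311 extended attributes:
            "attribute[secchi_depth_m]": "1.8",
            "attribute[water_colour]": "greenish",
        }
        response = requests.post(ENDPOINT, data=payload)
        response.raise_for_status()
        print(response.json())  # returns a service_request_id on success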


Citizen Science

Chairs: Brovelli, Maria Antonia (Politecnico di Milano), Mathieu, Pierre Philippe (ESA-ESRIN)

11:15 - 13:00

  • 11:15 - The Cli-Mate? app.
    Soukup, Petr; Moullard, Olivier - University of Westminster, United Kingdom

    This presentation will document the design and development of a practical solution to climate change: the Cli-Mate? app. Cli-Mate? is a social network service for people interested in climate change. The application allows users to record and share environmental data using their iPhones, acting as a bridge between scientists and citizens concerned about climate change. Data collected through the Cli-Mate? app could form a much-needed local record of climate change worldwide, which could help scientists validate observations made by satellites and make predictions about climate change.


  • 11:30 - FLOWERED-GeoDBapp: An Application Based On Crowd-Generated Data Using Sentinel-2 Imagery
    Deflorio, Anna Maria (2); La Mantia, Claudio (2); Melis, Maria Teresa (1); Dessì, Francesco (1); Loddo, Paolo (2); Da Pelo, Stefania (1); Ghiglieri, Giorgio (1); Tesfaw Hailu, Binyam (3); Kalegele, Khamisi (4); Mwasi, Benjamin (5) - 1: TeleGIS Laboratory, Dept. of Chemical and Geological Sciences, University of Cagliari, via Trentino 51, 09127 Cagliari, Italy; 2: Planetek Italia SRL, via Massaua, 12 70132 Bari, Italy; 3: School of Earth Sciences, University of Addis Ababa, College of Natural Sciences, P.O. Box 1176 Addis Ababa, Ethiopia; 4: Nelson Mandela African Istitution of Science and Technology, P.O. Box 447, Tangeru Campus, Arusha, Tanzania; 5: School of Environmental Studies, University of Eldoret P.O Box 1125 – 30100, Eldoret, Kenya

    This study is part of the EU H2020 research project FLOWERED (de-FLuoridation technologies for imprOving quality of WatEr and agRo-animal products along the East African Rift Valley in the context of aDaptation to climate change). The FLOWERED project aims to develop technologies and methodologies at cross-boundary catchment scales to manage the risks associated with high-fluoride water supply in Africa, focusing on three representative test areas along the African Rift Valley (in Ethiopia, Kenya and Tanzania) characterised by high fluoride contents in waters and soils, water scarcity, overexploitation of groundwater, and high vulnerability to risks arising from climate change, such as drought and desertification. It also empowers local communities to take responsibility for the integrated sustainability of natural resources, addressing growing national and international environmental priorities, enhancing transboundary cooperation and promoting local ownership based on a scientific and technological approach. Within the FLOWERED project, the transition from land cover to land use and water use maps is supported by the development of a mobile application (FLOWERED-GeoDBapp). It is dedicated to the collection of local geo-information on land use, water uses, irrigation systems, household features, use of drinking water and other information needed for specific knowledge of the water supply, involving local communities through a participatory approach. The system is structured to be populated through crowd-generated data collected by local communities (students and people involved mainly through NGOs). The FLOWERED-GeoDBapp is proposed as an innovative tool for water management and agriculture institutions at regional and local level.


  • 11:45 - Exploring the potential use of radar and optical data for Natural Flood Management
    Caccia, Michele (1); De Avila Siqueira, Andreia (1); Valcarce-Diñeiro, Rubén (2) - 1: CGI, Italy; 2: CGI, UK

    According to SEPA (2015), Natural Flood Management (NFM) involves techniques that aim to work with natural hydrological and morphological processes, features and characteristics to manage the sources and pathways of flood waters. These techniques include the restoration, enhancement and alteration of natural features and characteristics, but exclude traditional flood defence engineering that works against or disrupts these natural processes. Surface runoff is one key component affecting floods in a catchment area. Changes in the environment caused by human activities, changes to rainfall patterns caused by climate change, and overall changes in the land surface can greatly contribute to increased runoff. Traditional runoff models use a number of different variables, e.g. soil type, ground cover type and rainfall, to estimate the runoff for a given area. Earth observation (EO) data have been widely used to monitor environmental variables including surface runoff (Gajbhiye, 2015). EO data applications are well known to be a very reliable and cost-effective alternative to conventional methods of monitoring and modelling environmental change. The present study aims to demonstrate the effective use of Sentinel-1 and Sentinel-2 data, in addition to other datasets (DEM and a VHR image), to map and monitor areas where a runoff hazard is present. A Vulnerability Index is proposed based on value-added information derived from EO data. The proof of concept was carried out at Hendred Farm (West Thames catchment area, United Kingdom) during the 2016 growing season (March to September). The methodology involved generating two land cover thematic layers using a combination of co-polarised and cross-polarised images, extracting the interferometric coherence and intensity from Sentinel-1 Single Look Complex product pairs. Sentinel-2 images from two distinct dates during the growing season (6 June and 15 August) were used to generate vegetation indices (NDVI, NDWI, LAI, FAPAR and FCOVER; see the sketch below). A slope thematic layer was created using a DEM (SRTM, 1-arcsecond resolution). Furthermore, a spatial framework was proposed and a tillage distribution and direction thematic layer was created using a very-high-resolution image (WV3, 4 May 2016). Classification maps were produced by integrating the different thematic layers, presenting the runoff hazard spatial distribution as high, medium or low. Even though the study area was limited, the proposed methodology demonstrates that runoff hazard areas can be identified and monitored using radar and optical images over time. The real value of being able to assess and monitor runoff hazard lies in large catchment areas, where traditional methods of monitoring are not cost-effective. Thus, the next step of this study is to extend the methodology described here to provide a high-level assessment of wider areas prior to taking more detailed NFM actions.
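    The vegetation indices mentioned are standard band combinations; for Sentinel-2, NDVI uses B08 (NIR) and B04 (red), and the McFeeters NDWI uses B03 (green) and B08. A minimal sketch, assuming the bands have already been resampled to a common grid and loaded as reflectance arrays:

        import numpy as np

        def ndvi(nir, red):
            """Normalised Difference Vegetation Index: (NIR - red) / (NIR + red)."""
            return (nir - red) / np.clip(nir + red, 1e-6, None)

        def ndwi(green, nir):
            """McFeeters NDWI: (green - NIR) / (green + NIR)."""
            return (green - nir) / np.clip(green + nir, 1e-6, None)

        # e.g. with Sentinel-2 arrays b03, b04, b08 (band loading not shown):
        # vi = ndvi(b08, b04); wi = ndwi(b03, b08)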


  • 12:00 - Citizen Science and Crowdsourcing to Support Mobility Decision-Making in Large Metropolitan Areas: A Big Data approach
    Mazumdar, Suvodeep; Lanfranchi, Vitaveska; Wang, Zhibao; Simanjuntak, Humasak; Bai, Lu; Ireson, Neil; Ciravegna, Fabio - University of Sheffield, United Kingdom

    By the year 2050, the global population living in metropolitan areas is estimated to grow significantly and is expected to place enormous strain on infrastructure and resources. Metropolitan areas are typically very large areas with dense city centres and large expanding suburbs. With limited resources and rising population and pollution concerns, such areas need to grow in an efficient, sustainable and resilient way. A significant concern is how transport and mobility in such areas can scale up with such increasing demands. To this end, the SETA project is currently developing Big Data solutions to create a technology and methodology that will address these challenges and change the way mobility is organised, monitored and planned in such areas. The project collects massive volumes of mobility data via crowdsourcing, physical sensors, video cameras and environmental sensors. The data are linked, fused and exploited to model mobility with a precision, granularity and dynamicity that is impossible with today's technologies. Essential to the success of the project is the key role played by citizens and communities. The project exploits different crowdsourcing mechanisms, both participatory and opportunistic, to provide massive volumes of Volunteered Geographic Information (VGI). In this talk, we will discuss how hundreds of thousands of users throughout three use-case cities in Europe (Birmingham, UK; Santander, Spain; and Turin, Italy) contribute by opportunistically providing VGI on motorised and non-motorised mobility via mobile applications. While crowdsourced motorised mobility data inform various aspects of traffic and infrastructure management, such as road occupancy, vehicle speed and pollution estimation, crowdsourced non-motorised mobility provides significant insights into land use and function, as well as into how citizens and communities organise their daily commutes, activities and lifestyle. At the core of the project is a citizen observatories approach, where citizens can actively provide real-time information about traffic and mobility conditions and issues. This talk will present how such massive volumes of data are collected, processed and analysed to provide exploratory analysis for decision support. We will share our initial results and insights following the first phase of evaluations in the three cities and discuss how we plan the next phase of development in the project.


  • 12:15 - Monitoring the flowering of Invasive Alien Plants with Sentinel 2A/B and Citizen Science data
    César de Sá, Nuno (1,2); Sillero, Neftalí (3); Gil, Artur (4); Marchante, Elizabete (1); Marchante, Hélia (1,2) - 1: Centre for Functional Ecology, University of Coimbra, Portugal; 2: Coimbra College of Agriculture, Polytechnic Institute of Coimbra; 3: Research Center for Geo-space Science, University of Porto; 4: Centre for Ecology, Evolution and Environmental Changes, University of Azores

    Biological invasions by Invasive Alien Species (IAS) are not only one of the greatest threats to biodiversity and ecosystem functioning worldwide but also a potential health hazard for humans, with extensive economic costs that the EU estimates at up to €12 billion per year. Reducing the costs and increasing the efficiency of IAS monitoring is vital for a sustainable strategy to counter biological invasions. Free and open access to Earth Observation data, coupled with citizen science and data-gathering platforms, can provide a unique opportunity to curb these costs. Furthermore, such programmes help increase citizen awareness of and participation in IAS monitoring, which is crucial for prevention and early detection. The web platform invasoras.pt has been a success in increasing citizen awareness and participation in this topic, specifically for invasive alien plants (IAP). Besides being part of a wider countrywide set of activities that encourage active engagement by citizens, it also allows the collection and storage of Volunteered Geographic Information (VGI) on invasive plants. These data are validated by experts and made fully available for public use. Coupling this continuous and growing collection of data with freely available Earth Observation products is a great opportunity to increase the ability to monitor IAP at very low operational cost. The availability of free Earth Observation products for continuous monitoring, and their consistent improvement in recent years, is a unique opportunity to improve and widen the scope of IAP monitoring. As a case study we have chosen four of the worst IAP in Portugal: Acacia longifolia, Acacia melanoxylon, Acacia dealbata and Acacia saligna. All of these are characterised by very strong flowering events that occur during the Portuguese winter. The sheer intensity of their flowering makes them visible in the field and, hypothetically, also to Earth Observation satellites. Furthermore, a biocontrol agent has recently been introduced in the country which is expected to seriously disrupt the flower production of A. longifolia, so establishing whether flowering is detectable by satellite Earth Observation matters for future monitoring of the biocontrol's expansion. Previous Earth Observation missions such as Landsat already offered opportunities for monitoring IAP, but they were hindered by lower spatial resolution and revisit time, especially since cloud cover is very prevalent during winter. The Sentinel-2A and 2B satellites are therefore a unique opportunity for detecting these IAP, because they offer both increased spatial resolution and a shorter revisit time. In this research we showcase the possibility of using VGI to validate the detection of these four IAP across seasons using the Sentinel-2A and 2B satellites. To do this, we pre-selected areas of known homogeneous, monospecific stands of these species and evaluated their detectability in the various seasons, validated against the VGI. It is an example of how free-to-use EO data and simple citizen data-gathering platforms can easily contribute to improving the monitoring of IAS at very large scales.


  • 12:30 - Picture Pile: A citizen science tool for rapid post-disaster damage assessment using satellite imagery
    Danylo, Olha (1); Sturn, Tobias (1); Giovando, Cristiano (2); Moorthy, Inian (1); Fritz, Steffen (1); See, Linda (1); Kapur, Ravi (3); Girardot, Blake (2); Ajmar, Andrea (4); Giulio Tonolo, Fabio (4); Reinicke, Tobias (3); Mathieu, Pierre Philippe (5); Fraisl, Dilek (1); Duerauer, Martina (1) - 1: International Institute for Applied Systems Analysis (IIASA), Laxenburg, Austria; 2: Humanitarian OpenStreetMap Team; 3: Imperative Space; 4: Information Technology for Humanitarian Assistance Cooperation and Action (ITHACA), Turin Italy; 5: European Space Agency (ESA), ESRIN, Frascati, Italy

    Citizen science, crowdsourcing and volunteered geographic information have become important components of participatory scientific research. Within the domain of post-disaster damage assessment, crowdsourcing can provide data to assist humanitarian organizations in their relief and recovery efforts. Several citizen-science-powered tools already exist for data collection, including those that support visual image interpretation and online interactive mapping. One example of such a tool is Picture Pile, a cross-platform application designed as a generic and flexible way to ingest satellite imagery for rapid classification. As part of ESA's Crowd4Sat initiative led by Imperative Space, this study demonstrates how satellite imagery coupled with a crowdsourcing-based application (Picture Pile) can support humanitarian efforts. We demonstrate how satellite image interpretation tasks within Picture Pile can be crowdsourced in a demonstration case that uses imagery before and after Hurricane Matthew, which affected large regions of Haiti in September 2016. Volunteers were asked a simple yes/no question about what they could see with regard to building damage from the hurricane in images before and after the event. Since the task is simple and clearly formulated, a rapid review of large areas was completed with the help of 135 volunteers, who completed 120,000+ tasks within a few days of the start of the campaign. We present the latest results and discuss the lessons learned from the campaign, which ran in May 2017. We found that most volunteers showed a high agreement rate with our experts, supporting the validity of such a crowd-driven approach for rapid post-disaster damage assessments (a sketch of how such redundant answers can be aggregated is given below). The proposed approach brings satellite imagery to volunteers and can be used to detect various features in different disaster events, such as damaged buildings, flooded areas, areas burned by wildfires, damaged roads and storm damage, among others. These features are easily recognizable from very-high-resolution satellite imagery, even to the less trained eyes of new participants. Beyond collecting data, this approach also helps to increase citizen awareness of natural disasters and provides everyone with a unique opportunity to contribute directly to relief efforts. Picture Pile is intended to supplement existing approaches for post-disaster damage assessment and can be used by different networks of volunteers (e.g. the Humanitarian OpenStreetMap Team) to assess damage rapidly and to create up-to-date maps for timely response to disaster events.
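    Redundant yes/no classifications from many volunteers are typically reduced to per-tile decisions by majority vote with a minimum-agreement threshold; the sketch below illustrates this generic aggregation pattern and is not the actual Picture Pile implementation.

        from collections import defaultdict

        def aggregate_votes(votes, min_votes=3, min_agreement=0.7):
            """votes: iterable of (tile_id, answer) pairs, answer True/False.
            Returns {tile_id: True/False}, or None where agreement is too low."""
            tally = defaultdict(lambda: [0, 0])   # tile_id -> [yes, no]
            for tile_id, answer in votes:
                tally[tile_id][0 if answer else 1] += 1
            decisions = {}
            for tile_id, (yes, no) in tally.items():
                total = yes + no
                share = max(yes, no) / total
                if total < min_votes or share < min_agreement:
                    decisions[tile_id] = None     # needs more votes or expert review
                else:
                    decisions[tile_id] = yes > no
            return decisions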


  • 12:45 - SnapPlanet - Tell Your Stories From A New Perspective
    Gasperi, Jérôme (1); Manaud, Nicolas (2) - 1: SnapPlanet, France; 2: Space Frog Design, France

    SnapPlanet is on a mission to democratise access to and use of Earth Observation data in an easy, affordable and enjoyable way. At its core, SnapPlanet provides a photography-centric mobile application to create beautiful images of the Earth from space within seconds, thanks to the freely available ten-metre Sentinel-2 imagery. These data ensure a free high-resolution snapping service for all users. The "snaps" can be annotated, commented on and shared, making SnapPlanet a great tool for raising awareness of our living planet. By combining the ever-increasing availability of Earth observation images with the advantages of a social network open to everyone, SnapPlanet wants to drive new uses of EO data that empower citizens, journalists and scientists to tell stories that can have a positive impact on society and the environment on a global scale.


Atmospheric Correction

Chairs: Mangin, Antoine (ACRI), Arino, Olivier (ESA- ESRIN)

14:00 - 15:15

  • 14:00 - SEOM project for an operational atmospheric correction for Sentinel-2 above coastal and inland waters
    Mangin, Antoine (1); Serra, R. (1); Vincent, C. (1); Loisel, H. (2); Jamet, C. (2); Ngoc, Dinh (2,3); Fell, F. (4); Lafrance, B. (5); Aznay, O. (5) - 1: ACRI, France; 2: LOG (France); 3: VAST (Vietnam); 4: Informus (Germany); 5: C-S (France)

    In 2016, a scientific project supported by ESA was launched to study atmospheric correction for Sentinel-2 over coastal and inland waters. If successful, the overall idea is to make this algorithm available through a public tool (e.g. SNAP) to the coastal community, which is showing an increasing interest in Sentinel-2, and later on to allow systematic processing of L2A products dedicated to the coastal zone in complement to land products. A consortium has been selected comprising ACRI, LOG, Informus and C-S. Each of the partners brings particular expertise to a module of the overall processing chain: gaseous absorption correction, aerosol scattering, cloud/cloud shadow/land/water classification, detection of topographic effects and, more specific to the water part and its interface with land, sun glint detection and correction, white caps, and adjacency effects. After the scientific evaluation of candidate modules, first validation and a first round of implementation, the project is now entering its validation phase. The rationale for the algorithm selections, together with the validation plan and its objectives, will be presented.


  • 14:20 - MeetC2: multi-scale atmospheric corrections for Sentinel-2
    Saulquin, Bertrand; Martin-Lauzer, Francois-Regis - ACRI-ST, France

    From Top Of Atmosphere (TOA) observations in coastal areas, unmixing the high-spatial-frequency water signal from the low-spatial-frequency atmospheric signal is challenging, as multiple solutions for the vector {ρaer(λ), ρw(λ), Ttot(λ)} are possible for a single set of TOA(λ) observations. Aerosol spatial scales range from one to tens of kilometres, while in coastal areas water-signal spatial scales range from tens to hundreds of metres. These differences are particularly worth exploiting to unmix the water and atmospheric signals in coastal areas, where the "black pixel" hypothesis (null water contribution) is not valid and where the spectral shapes of the water and aerosol signals in the infrared may be very similar, leading to inversion errors. The inversion scheme of existing processors, such as the OLCI Level-2 (OL2) processor, only partially addresses the reliability of the estimates. For example, aerosol optical thicknesses and types are estimated regardless of the fact that they will later result, further down the processor, in negative water reflectance estimates: the inversion has converged towards a local minimum. The final consequence is an over-flagging of the products in coastal areas for users, and an under-optimised Level 2 for the space agencies. To limit the influence of this ill-conditioned system, we use the Maximum A Posteriori (MAP) criterion to add a priori information on the variables to be estimated to the minimisation cost function (see the generic formulation below). MeetC2 is a Level-2 prototype processor for the Sentinel-2 satellites based on both a spatial-downscaling analysis and a probabilistic inversion scheme. The TOA signal is estimated using LUTs (interpolated on the fly) generated with the radiative transfer codes OSOAA and SMART-G. The spatial-downscaling approach relies on estimating the parameters over an ensemble of pixels, decreasing at each scale from 6 km to 600 m resolution. The spatial continuity of the atmospheric components between scales is modelled using four directional gradients on the inverted atmospheric variables. The high-resolution water signal is finally obtained by subtracting the low-resolution atmospheric signal from the high-resolution S2 observations. The validation results include 64 matchups at 6 AERONET-OC sites and MOBY. We also characterise the added value of the spatial-regularisation terms in the inversion. The V1 of the MeetC2 processor, written in Python, is parallelised and is able to process one S2 image at 60 m resolution in 2-3 minutes (or at 10 m resolution in 15 minutes using 16 CPUs). Sub-region extraction is also possible to reduce the computation time. [Figure 1: Functional scheme of the MeetC2 algorithm, based on i) the Maximum A Posteriori (MAP) criterion for convergence, as far as possible, towards non-negative and realistic ρw, and ii) spatial gradient constraints on the atmospheric parameter estimates. Sentinel-2 image over the Lagoon of Venice, 2016-08-27, AAOT AERONET-OC site.]
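    For reference, the MAP criterion in its generic form augments the data-fit term of the cost function with a prior term on the state vector. The exact MeetC2 cost function is not reproduced here, but a standard formulation reads

        J(x) = (y - F(x))^T C_y^{-1} (y - F(x)) + (x - x_a)^T C_a^{-1} (x - x_a)

    where y is the TOA(λ) observation vector, F the radiative-transfer forward operator (here tabulated in the LUTs), x the atmospheric and water state {ρaer, ρw, Ttot}, x_a its a priori value, and C_y and C_a the observation and prior covariance matrices; the directional gradients described above act as additional spatial penalty terms on x across scales.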


  • 14:40 - iCOR in coastal, transitional and inland waters: application to Sentinel-2
    De Keukelaere, Liesbeth (1); Sterckx, Sindy (1); Adriaensen, Stefan (1); Knaeps, Els (1); Reusen, Ils (1); Giardino, C (2); Bresciani, M. (2); Hunter, P. (3); Van der Zande, D. (4); Vaiciute, D. (5) - 1: Vlaams Instituut voor Technologisch Onderzoek (VITO), Belgium; 2: Consiglio Nazionale delle Ricerche – Instituto per il rilevamento elettromagnetico dell’ambiente (CNR-IREA), Italy; 3: University of Stirling, UK; 4: Royal Belgian Institute of Natural Sciences (RBINS), Belgium; 5: Klaipeda University, Lithuania

    The iCOR image correction, previously known as OPERA (Sterckx et al., 2015), is a scene- and sensor-generic atmospheric correction algorithm that can process images containing both land and water pixels (coastal, transitional or inland waters). iCOR has the advantage that elevation, air-water interfaces and adjacency effects are considered, and no assumptions are made about the turbidity of the water. Through the use of a single atmospheric correction implementation, discontinuities in the reflectance between land and the highly dynamic water areas are reduced. iCOR contains an image-based aerosol optical thickness retrieval module, based on the method developed by Guanter et al. (2007), and performs a simplified adjacency correction over land using fixed ranges, and the SIMilarity Environment Correction (SIMEC) over water (Sterckx et al., 2014). This study illustrates the performance of iCOR for Sentinel-2 over coastal, transitional and inland waters by comparing the image-retrieved water-leaving reflectances with in-situ optical data collected during field campaigns in European lakes and with AERONET-OC measurements. Specific attention is given to the impact of applying an adjacency correction for inland waters. Keywords: iCOR; OPERA; adjacency effects; SIMEC; inland, coastal and transitional waters; AERONET-OC; in-situ optical data; water-leaving reflectance. References: Sterckx, S., Knaeps, E., Adriaensen, S., Reusen, I., De Keukelaere, L., Hunter, P., 2015. OPERA: an atmospheric correction for land and water. Proc. Sentinel Science Workshop, 3–6. Sterckx, S., Knaeps, E., Kratzer, S., Ruddick, K., 2014. SIMilarity Environment Correction (SIMEC) applied to MERIS data over inland and coastal waters. Remote Sens. Environ. doi:10.1016/j.rse.2014.06.017. Guanter, L., Del Carmen González-Sanpedro, M., Moreno, J., 2007. A method for the atmospheric correction of ENVISAT/MERIS data over land targets. Int. J. Remote Sens. 28, 709–728. doi:10.1080/01431160600815525.


  • 15:00 - ACOLITE processing of Sentinel-2 data: Evaluation and applications in coastal and inland waters
    Vanhellemont, Q.; Ruddick, K. - Natural Science (Belgium)

    With the advent of freely available data from the Landsat 8 (2013-...), Sentinel-2A (2015-...) and Sentinel-2B (2017-...) missions, the uptake of high-resolution water remote sensing has drastically increased in the coastal and inland water (hereafter "aquatic") community. No standard atmospheric correction and processor for aquatic applications was provided for any of these missions, so several algorithms were developed by different teams. In this presentation the ACOLITE atmospheric correction is discussed. ACOLITE is freely available and can process data from a number of sensors, including Landsat 5/7/8 and Sentinel-2. ACOLITE performs a two-step atmospheric correction: first, the Rayleigh reflectance is computed based on the sun-sensor geometry; second, the aerosol reflectance over water is estimated using the SWIR bands and extrapolated to the visible and near-infrared bands in order to derive the water-leaving reflectance (see the sketch below). The performance of ACOLITE is evaluated using autonomous AERONET-OC measurements in Belgian coastal waters. Same-day imagery from L8 and S2 is compared in terms of water-leaving reflectance for the common band set. The impact of noise from the SWIR bands used in this approach is discussed. Processing challenges for inland waters are also highlighted, with proposed updates to the atmospheric correction algorithm. Novel aquatic applications are presented, such as the monitoring of local sediment transport phenomena (turbid wakes) and events (dredging) using L8 and S2, and the detection of intense algal blooms with S2/MSI.
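    The two-step scheme can be sketched compactly: subtract the geometry-dependent Rayleigh reflectance, estimate the aerosol spectral slope (epsilon) from the two SWIR bands where the water is assumed black, and extrapolate the aerosol reflectance to the shorter wavelengths. A simplified illustration, assuming per-band reflectances and Rayleigh values are already available (e.g. from a look-up table) and using a power-law extrapolation; this is not the exact ACOLITE code, which also handles transmittances and flagging:

        import numpy as np

        def swir_aerosol_correction(rho_toa, rho_ray, wl, i1, i2):
            """rho_toa, rho_ray: per-band reflectance arrays; wl: band centre
            wavelengths (nm); i1, i2: indices of the two SWIR bands (i1 < i2)."""
            rho_rc = rho_toa - rho_ray                 # Rayleigh-corrected reflectance
            # Over water the SWIR water-leaving signal is assumed negligible,
            # so rho_rc in the SWIR is attributed entirely to aerosols.
            eps = rho_rc[i1] / rho_rc[i2]              # aerosol spectral slope
            c = np.log(eps) / np.log(wl[i2] / wl[i1])
            rho_aer = rho_rc[i2] * (wl[i2] / wl) ** c  # extrapolate to all bands
            return rho_rc - rho_aer                    # water-leaving reflectance estimate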


  • 15:20 - Sentinel-2/MSI ocean colour with the Polymer algorithm
    Steinmetz, François; Ramon, Didier - HYGEOS

    The Polymer algorithm is a robust and generic atmospheric correction method, designed to recover the ocean colour in the presence of sun glint. It has been applied to multiple sensors and is used within the Ocean Colour Climate Change Initiative (OC-CCI) project. Initially developed for the open ocean, it has been extended to process case 2 waters, with the inherent advantage of not requiring negligible water reflectance in the near-infrared bands. Sun glint is a critical aspect of Sentinel-2 MSI water observations: since both S2A and S2B revisit under the same viewing conditions, many areas are continuously affected by sun glint throughout a season, leading to long periods without coverage if the glint-affected observations are not recovered. The results of Sentinel-2 MSI processing with Polymer will be presented and evaluated.


TEP - Exploitation Platforms

Chairs: Marin, Alessandro (Solenix.ch), Amler, Esther (ESA ESRIN)

08:30 - 10:30

  • 08:30 - A New Platform for Global Urban Monitoring and Analysis - The Urban TEP
    Esch, Thomas (1); Asamer, Hubert (1); Balhar, Jakub (2); Boettcher, Martin (3); Boissier, Enguerran (4); Hirner, Andreas (1); Mathot, Emmanuel (4); Marconcini, Mattia (1); Metz, Annekatrin (1); Permana, Hans (3); Soukop, Tomas (2); Uereyen, Soner (1); Svaton, Vaclav (5); Zeidler, Julian (1) - 1: German Aerospace Center (DLR), Germany; 2: GISAT, Czech Republic; 3: Brockmann Consult, Germany; 4: Terradue, Italy; 5: IT4Innovations - Technical University of Ostrava, Czech Republic

    The upcoming suite of Sentinel satellites, in combination with their free and open data policy, will open new perspectives for establishing spatially and temporally detailed monitoring of the Earth's surface. However, the capability to effectively and efficiently access, process, analyse and distribute the mass data streams from the Sentinels, and the high-level information products derived from them, poses a key challenge. This is also true with respect to the need to flexibly adapt the processing and analysis procedures to new or changing user requirements and technical developments. Hence, the implementation of operational, modular and highly automated processing chains, embedded in powerful hardware and software environments and linked with effective distribution functionalities, is of central importance. This contribution introduces the TEP Urban platform, which aims to use modern information technology functionalities and services to bridge the gap between the technology-driven EO sector and the information needs of environmental science, planning and policy. Key components of the system are an open, web-based portal connected to distributed high-performance computing infrastructures, providing key functionalities for i) high-performance data access and processing, ii) modular and generic state-of-the-art pre-processing, analysis and visualisation, iii) customised development and sharing of algorithms, products and services, and iv) networking and communication. These services and functionalities are intended to enable any interested user to easily exploit and generate thematic information on the status and development of the built environment based on EO data and technologies, and thereby to initiate a step change in the use of EO data through an open and participatory platform. So far more than 240 institutions from 41 countries have requested U-TEP data and system access.


  • 09:00 - Scalable EO Value Added Services (EO VAS) For Land Monitoring
    Kolitzus, David (1); Milcinski, Grega (2); Riffler, Michael (1) - 1: GeoVille, Austria; 2: Sinergise, Slovenia

    Nowadays most EO services are still based on downloading source data to a local environment before performing basic preprocessing (e.g. compositing, mosaicking, atmospheric correction) and starting the specific analyses relevant for the service. Structured availability of data within a cloud environment makes it possible to automate tasks and run them faster. However, this workflow is still either too slow, or the required computing power/bandwidth too high, to be scalable to regional or even global scales. GeoVille and Sinergise have joined forces to solve this problem by connecting a standard scripting environment with Sentinel Hub's web services, enabling innovative EO value added services (EO-VAS). EO-VAS will create a scalable on-demand service which generates and delivers EO products to users worldwide in a cost-effective manner. To demonstrate the approach, we have designed an initial service allowing for multi-temporal land classification in an extremely efficient manner. It takes spectral indices, such as the maximum NDVI pixel value, from a multi-temporal image stack with multiple observations and creates clusters based on the spatio-temporal variability of the index (see the sketch below). This allows investigation of seasonal vegetation variations and discrimination of various land cover types, such as cropland (high variation of vegetation throughout the year), bare soil (no variation, low vegetation index value) and grassland/forest (low variation, high vegetation index value). More detailed and accurate classifications are possible by introducing additional indices, derived metrics and region-specific thresholds. The Sentinel Hub web service is used to create the spectral indices for any chosen area and season. This comes as an input to a clustering procedure. As there is practically no need to store the data of intermediate steps, the service can be provided in real time. When a request arrives, it can be served globally without large IT resources, limited only by "regional EO know-how" with regard to different spectral index patterns in various regions. A side effect of this simplicity is flexibility - it is possible to change the "rule engine" and parameters and see the result in a matter of seconds. This opens the possibility to develop and evolve services in a very fast and efficient manner. The main technological novelty lies in a cost-effective approach which uses resources only when somebody asks for results, thus making the services more affordable. The functional novelty is expressed through its connecting character - connecting end-user communities with developers, who can build their services on top of the initial tool-set and commercialize them through the platform without the need for large investments in processing and storage infrastructure or data processing developments.
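
    A minimal sketch of the kind of spatio-temporal index clustering described above, assuming a multi-temporal NDVI stack is already available as a numpy array; the input file, band statistics and cluster count are illustrative assumptions, not the authors' service code:

```python
# Cluster pixels by the seasonal statistics of a spectral index, mirroring
# the idea in the abstract: peak greenness plus variability through the year.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical pre-computed NDVI stack with shape (time, height, width).
ndvi_stack = np.load("ndvi_stack.npy")
t, h, w = ndvi_stack.shape

max_ndvi = np.nanmax(ndvi_stack, axis=0)   # vegetation peak per pixel
std_ndvi = np.nanstd(ndvi_stack, axis=0)   # seasonal variability per pixel

features = np.nan_to_num(np.column_stack([max_ndvi.ravel(), std_ndvi.ravel()]))
labels = KMeans(n_clusters=5, random_state=0).fit_predict(features)

# High max / high std suggests cropland; low std with a low index suggests
# bare soil; low std with a high index suggests grassland or forest.
cluster_map = labels.reshape(h, w)
```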


  • 09:15 - Playing with ESA Hydrology Exploitation Platform
    Martínez, Bernat; Romero, Laia - isardSAT, Spain

    The ESA Hydrology Exploitation Platform (HEP) is a project aimed at facilitating the discovery, access, processing, visualisation and sharing of water information derived from Earth Observation data, such as flood mapping, water level, water quality and the assimilation of these data into hydrological models. The ESA HEP team will show the existing features and capabilities of the HEP through an on-line demonstration. ESA HEP services can be accessed through https://hydrology-tep.eo.esa.int/#!thematic.


  • 09:30 - Forestry TEP changes the paradigm in value adding of remotely sensed data
    Häme, Tuomas (1); Tergujeff, Renne (1); Rauste, Yrjö (1); Farquhar, Clive (2); van Zetten, Peter (2); Bennett, Victoria (3); deGroof, Arnaud (4); Hämäläinen, Jarno (5); van Bemmelen, Joost (6); Seifert, Frank Martin (6) - 1: VTT Technical Research Centre of Finland Ltd. (FI); 2: CGI (UK); 3: Science and Technology Facilities Council STFC (UK); 4: Spacebel (B); 5: Arbonaut Oy Ltd. (FI); 6: European Space Agency, ESRIN

    The Forestry Thematic Exploitation Platform (Forestry TEP) has recently entered the pre-operational phase. This means that the principal functionalities are in place and a higher number of users can start using it. At the time of writing of this abstract, the database of interested organizations has more than fifty members from academia, research organizations, the private sector, public administration, and NGOs. The user pool is growing as marketing of the platform intensifies. User interest in Forestry TEP has exceeded all expectations. The enthusiasm clearly shows that such a one-stop shop for Earth Observation services in forestry has a strong demand in the user community. The users see the benefits of working on the cloud instead of having to download and install data and software on their own computing facilities, which may be incapable of processing the vast volumes of Copernicus data. Presently six thematic processing services are available on the Forestry TEP - for computing vegetation indices and for mapping of land and forest cover, biomass and change. Through a voting system available on the Forestry TEP web site, a user can influence the prioritization of future applications. In addition, a user can utilize all the features of the Sentinel Application Platform SNAP, the Monteverdi/Orfeo toolbox, and the open source Geographic Information System QGIS. The user can search the data on the Forestry TEP platform, remotely access any of the above-mentioned software tools, process data using these tools, save the result to the platform, and continue processing there or download the result to their own computer. Relevant images from the search can be saved to a Data Basket, where they remain available for future processing in another session. The user can upload their own reference data to train the land cover or biomass models, or generate the data interactively with QGIS on the platform, for instance. Additionally, via a web-based developer interface, it is possible for service providers to adapt their existing applications, or develop completely new ones, to run on the platform. Applications and result products can be shared with designated users or openly with everybody. In the operational phase, it will also be possible to sell products and offer application software as a service on the platform. Already now, data privacy and security functionality is in place. The platform presently offers access to Sentinel-1, Sentinel-2, and Landsat-8 data. More data types, including commercial data, will be introduced according to user interest. Users can search over a geographic region not only for the available satellite images but also for existing products that have been computed and shared by other users. The Forestry TEP is being developed by VTT Technical Research Centre of Finland as the coordinator, application and user specialist, CGI IT UK as the system developer and integrator, Science and Technology Facilities Council (STFC UK) as the principal data access and infrastructure provider, and Spacebel (BE) and Arbonaut (FI) as application and service experts. In addition to using the Climate and Environmental Monitoring from Space (CEMS) facility and cooperative ground segment of STFC, Sentinel data can experimentally be downloaded from the Earth Observation Innovative Platform Testbed Poland (IPT) facility.
For the operational phase expected to start in 2018, the Copernicus Data and Information Access Services (DIAS) are anticipated to offer a potential infrastructure solution for the Forestry TEP. https://forestry-tep.eo.esa.int/


  • 09:45 - The Polar thematic Exploitation Platform
    Fleming, Andrew (1); Puestow, Thomas (2) - 1: BAS, United Kingdom; 2: C-Core Canada

    The volume and variety of Earth Observation data available for the polar regions is growing rapidly, providing the opportunity for ever more complex investigations and exploitation in support of polar communities. The European Space Agency (ESA) and other satellite operators have been at the forefront of collecting, analysing, processing and disseminating new data and information from EO satellites. With this increase in volume of data and range of uses come new challenges in fully utilising and exploiting this capacity. ESA has established a series of Thematic Exploitation Platforms (TEPs) to provide the collaborative environment that delivers the resources and capabilities required for users’ exploitation work. The TEP projects address present challenges and opportunities in scientific data exploitation and operational applications by collocating data, processing capabilities and ICT infrastructure, providing a complete cloud-based work environment for users. The Polar TEP addresses the requirements and challenges of the diverse polar user community. This presentation will describe the Polar TEP concept and the range of potential uses it will support. We will present details of the working environment where users can bring their algorithms, applications and development activities directly to the data. It will also cover the rich set of polar-themed EO and complementary datasets, relevant toolboxes and processing capabilities, plus functionality to allow deployment of user-defined workflows and processing environments. The Polar TEP is currently in a pre-operations phase and will increasingly be used by real users. We will report on progress made in providing platform access to early users. We will also outline the ongoing pilot project which will demonstrate the potential of Polar TEP to investigate current and future iceberg risk in Baffin Bay. The pilot project will integrate a diverse set of data, processors and models to allow users to investigate linkages between iceberg trajectories, changes in ice sheet velocity, glacier calving rates and ocean circulation. The integration of these components and toolsets will allow Polar TEP users to easily investigate questions about changing iceberg populations in support of regional climate change studies, evaluate risk assessments and inform infrastructure and ship routing decisions.


  • 10:00 - Supporting Sustainable Food Production from Space
    Volden, Espen (1); Romeo, Antonio (2); Mougnaud, Philippe (1); Amler, Esther (1); Migdall, Silke (3); Muerth, Markus (3); Bach, Heike (3); De Avila Siqueira, Andreia (4); Colapicchioni, Andrea (4); Goor, Erwin (5); Gilliams, Sven (5); Van Roey, Tom (5); Dean, Andy (6); Suwala, Jason (6) - 1: ESA-ESRIN, Italy; 2: Rhea Group; 3: Vista-GEO, Germany; 4: CGI; 5: VITO, Belgium; 6: Hatfield, Canada

    In line with the paradigm shift in Earth Observation of “bringing the users to the data”, ESA provides collaborative, virtual work environments giving access to EO data and tools, processors, and ICT resources through coherent interfaces. These interfaces are categorized thematically, tailored to the related user communities and named Thematic Exploitation Platforms (TEPs). The Food Security Thematic Exploitation Platform (FS-TEP) is the youngest of the seven TEPs and is developed in an agile mode in close coordination with its users. It will provide a “one-stop platform” for the extraction of information from EO data for services in the food security sector, mainly in Europe and Africa, allowing both access to EO data and processing of these data sets. It will thereby foster smart, data-intensive agricultural and aquacultural applications in the scientific, private and public domains. The FS-TEP builds on a large and heterogeneous user community, spanning from application developers in agriculture and aquaculture to small-scale farmers and agricultural industry, from public science to the finance and insurance sectors, and from local and national administration to international agencies. To meet the requirements of these groups, the FS-TEP will provide different frontend interfaces. Service pilots will demonstrate the platform’s ability to support agriculture and aquaculture with tailored EO-based information services. The project team developing the FS-TEP and implementing pilot services over a 30-month period (started in April 2017) is led by Vista GmbH, Germany, supported by CGI Italy, VITO, Belgium, and Hatfield Consultants, Canada. It is funded by ESA under contract number 4000120074/17/I-EF.


  • 10:15 - A Single Step from Software Prototype to Massive Processing
    Gilles, N. (1); Clerc, Sebastien (1); Aspetsberger, M. (1); Craciunescu, V. (1); Ceriola, G. (1); Campbell, Gordon (2); Leone, Rosemarie (2) - 1: ACRI-ST, France; 2: ESA, Esrin

    Today, massive processing resources are often reserved to a very limited number of users, mostly because of the human investment needed to start using the resources. Users will generally be required to:
    · Specify their hardware requirements and reserve the resources;
    · Learn the APIs needed to access the system;
    · Modify, package or sometimes rewrite their prototype code in order to make it compatible with the system.
    This not only raises the entry bar to start using the system, but it also tends to lock the software forever to a specific platform. In the frame of the Coastal Thematic Exploitation Platform (CTEP), we have developed a radically different approach based on the idea that one should be able to integrate a software package on the platform with a few clicks. Since the only constraint on the software is that all output files be placed in a specific folder, the exact same code can be run on a stand-alone Linux machine and on the CTEP cluster (see the sketch below). This makes integration much simpler and facilitates maintenance of the software. Our solution relies on a number of specific software tools:
    · An original Python-based implementation of a WPS server (WISPY). WISPY manages a dynamic database of processing services which can be created, modified or deleted on the fly.
    · A processor integration mechanism and user interface. The interface allows users to define the WPS service (name, abstract, input parameters) and upload the software packaged as an archive file. The CTEP automatically creates a container with the software and registers it in the WPS database. The interface also allows the user to share the processing service with other users.
    · Built-in pre- and post-processing steps to retrieve input files from the data archive, store processing results in the user data storage area, and register georeferenced images as layers for visualization. These automatic steps avoid the need to learn specific APIs.
    In addition, effort has been made to facilitate development:
    · The CTEP provides interactive remote desktops based on noVNC for development and prototyping;
    · A large range of pre-installed Python libraries for Python developers;
    · Systematic recording of execution logs on the cluster for debugging;
    · On-line access to SNAP and QGIS for in-depth analysis of Sentinel or geospatial data.
    We will describe the technical solution and report on first experience with the system in integrating processing services for coastal environment monitoring. As expected, simple software can typically be integrated on the platform in one hour (including debugging time), with little if any learning curve required.
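
    A minimal sketch of what the stated integration contract implies for a processor, assuming only that results must land in one designated output folder; the environment variable name, paths and placeholder processing step are illustrative assumptions, not the CTEP's actual conventions:

```python
# The same script runs unchanged on a laptop or on a cluster because its
# only obligation to the platform is: write all outputs into OUTPUT_DIR.
import os
import shutil
import sys

OUTPUT_DIR = os.environ.get("OUTPUT_DIR", "./output")  # assumed convention

def process(input_file: str) -> str:
    """Stand-in for the actual scientific processing step."""
    result = input_file + ".processed"
    shutil.copy(input_file, result)  # placeholder "processing"
    return result

if __name__ == "__main__":
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    for path in sys.argv[1:]:
        produced = process(path)
        # The single platform requirement: deliver results into OUTPUT_DIR.
        shutil.move(produced, os.path.join(OUTPUT_DIR, os.path.basename(produced)))
```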


TEP - Exploitation Platforms

Chairs: Charalampopoulou, Vasiliki (Betty) (GEOSYSTEMS HELLAS S.A.), Volden, Espen (ESA-ESRIN)

11:15 - 13:15

  • 11:15 - The SAFE Scientific Exploitation Platform
    Santoro, Francesca (1); Carbone, Marianna (1); Amoruso, Leonardo (1); Abbattista, Cristoforo (1); De Santis, Angelo (2) - 1: Planetek Italia s.r.l., Italy; 2: INGV – Istituto Nazionale di Geofisica e Vulcanologia

    Exploitation Platforms (EPs) have become the preferred way to distribute EO data and derived information since the concept was introduced in 2013. They have been realised in several variants (e.g. Thematic or Mission EPs) according to the intended user community. A new one, specifically designed to satisfy the needs of the scientific community, is described in this paper, demonstrating how well the EP concept fits and the new possibilities it opens up. Alongside the already established advantages, this EP assembles a set of thematic data from different domains, provides easy centralized access to them, and offers algorithms and processing capabilities in order to test, tune and assess scientific outcomes. The study case is the SAFE (SwArm For Earthquake study) project, funded by the European Space Agency (ESA) in the framework of "STSE Swarm+Innovation" and coordinated by the Istituto Nazionale di Geofisica e Vulcanologia (INGV), which deals with the integrated analysis of several physical parameters whose abnormal variations have been found to be possibly associated with impending earthquakes (EQs). In the frame of the project, Planetek has developed the so-called SAFE Exploitation Platform with the aim of sharing its outcomes and results, and of demonstrating the implemented techniques and scientific algorithms. The main purpose of the SAFE project is the investigation of the phase preceding large earthquakes, with the aim of identifying any electromagnetic signal possibly related to the forthcoming seismic events. In particular, the project is intended to assess the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC), using space-borne data from ESA's Swarm constellation satellites together with ground geophysical data from different sources (e.g. in-situ instruments, seismic catalogues and other archives). The platform is specially designed to integrate the Swarm satellite datasets with these ground-based geophysical data; it allows for accessing and visualizing data, performing online user-adjustable analyses and disseminating the project's results to the scientific community. At a functional level, the SAFE Exploitation Platform provides means to properly configure and perform the collection and automatic update of data from external catalogue sources, to make them available to the user for browsing and visualization of their geographical information on the world map, and to perform a set of customized analyses, which can be configured according to the scientific findings and can then contribute to their assessment at a global scale. The SAFE Platform is able to integrate both engineered software (in C/C++) and prototypal implementations (in Matlab® and Python). Among the advantages we can mention: direct (and simplified) web access to data from different sources, to global results and to the algorithm configuration and execution environment; and a unique reference deployment, so that the latest version of the scientific algorithms is always shared, allowing fast and flexible evolution while maintaining a prototypal implementation approach. Finally, it also includes integrated dissemination instruments, such as a Web Portal intended to share the project's results with the international scientific community, collect feedback on algorithms and results, and highlight all related events and initiatives.
This paper presents the scientific Exploitation Platform approach, its architecture and functionalities, with a special focus on the SAFE case study, in which its effectiveness has been demonstrated with scientists and users involved in geophysical studies, for multiple analyses in the geomagnetic, ionospheric and seismic data domains and for the contribution to the investigation of the EQ preparation process.


  • 11:30 - The EO4Atlantic Pathfinder Regional Exploitation Platform
    McGlynn, Sinead (1); Juracic, Ana (1); O'Callaghan, Derek (1); Hanlon, Lorraine (1); McBreen, Sheila (1); Campbell, Gordon (2) - 1: Parameter Space, Ireland; 2: ESA/ESRIN

    A prototype platform for Earth Observation data in the Atlantic region (EO4Atlantic) is being developed in Ireland, funded by the European Space Agency. The project is led by Parameter Space Ltd., an Irish SME with expertise in platform development for a number of scientific missions, in collaboration with SMEs including Treemetrics, Techworks Marine and iGeotec. Trial users are located in relevant institutions and organisations along the Atlantic coastline with an interest in satellite data. The EO4Atlantic platform will allow users to analyse large data sets in a high-performance environment, without the need to download large data volumes to their computers. Users can make use of existing available software (e.g. open source tools currently used for analysis of satellite or sensor data) or upload their own tools to the platform and run their analysis close to the data. The objective of the EO4Atlantic pathfinder platform is to support the development and delivery of EO-based information services, based on high-volume EO data access and processing, with a focus on expert and non-expert users of EO data from the European Atlantic region. The platform is designed to demonstrate how best to use and integrate existing capabilities and infrastructure in the Atlantic region, and how a full regional exploitation platform could operate in this area with potential new capabilities and infrastructure to be implemented in the medium to long term. Customised toolkits are made available through the platform for testing. These include open source toolboxes for analysis of satellite data and workflows allowing the chaining together of different tools to create multiple processing steps. End-to-end services also provide products such as sea surface wind speed evaluations for renewable energy exploitation, cloud-free pixel databases for high-cloud areas, forestry monitoring and management tools, flood risk assessment in coastal regions, and ecological and physical observations of inland and coastal waters. These services are designed to address the common issues and goals of the regional and European initiatives specific to the Atlantic area.


  • 11:45 - Satellite Agricultural Monitoring With Cloud Platforms
    Shelestov, Andrii (1); Kolotii, Andrii (1); Vasiliev, Vladimir (2); Lavreniuk, Mykola (1); Yailymov, Bohdan (1) - 1: Space Research Institute NAS Ukraine and SSA Ukraine; 2: EOS Data Analytics

    With the launch of the Sentinel-1 and Sentinel-2 missions, a new era of open-data geospatial science in the agricultural domain has started. High spatial (10 m) and temporal (6-day revisit) resolution leads to volumes of satellite data that cannot be handled within a local computational infrastructure, due to lengthy downloads and difficulties in data storage and processing. On the other hand, this new quality of satellite imagery is very effective for crop type mapping, area estimation, etc. [1-3]. These issues can be solved with modern cloud platforms for dealing with large amounts of geospatial data. Among them, Google Earth Engine (GEE) and Amazon Elastic Compute Cloud (Amazon EC2) with Amazon Simple Storage Service (Amazon S3) are good examples of different ways of processing geospatial data in the cloud. GEE provides high-performance computation free of charge, with some usage limitations (in test mode only). With GEE, a geospatial analyst can easily and intensively use a wide range of ready-made methods for tasks that occur in daily routines (see the sketch below). On the other hand, users are restricted to a data catalog of already preprocessed satellite data; in the case of SAR data this causes some difficulties for further exploitation. With Amazon EC2 and S3, users can download and preprocess data in a time-efficient way using scalable compute, networking and storage infrastructure. Amazon already hosts Sentinel-2 data for ESA (in an official bucket) and Sentinel-1 data for the Alaska Satellite Facility. Overall, Amazon Web Services and GEE demonstrate two different ways of organizing cloud-based geospatial data processing: more flexible but demanding higher competence from end users, or more restricted but easier for daily use. References 1. Kussul, N., Lavreniuk, M., Skakun, S., & Shelestov, A. (2017). Deep Learning Classification of Land Cover and Crop Types Using Remote Sensing Data. IEEE Geoscience and Remote Sensing Letters, 14(5), 778-782. 2. Shelestov, A., Lavreniuk, M., Kussul, N., Novikov, A., & Skakun, S. (2017). Exploring Google Earth Engine Platform for Big Data Processing: Classification of Multi-Temporal Satellite Imagery for Crop Mapping. Frontiers in Earth Science, 5, 17. 3. Kussul, N. N., Lavreniuk, N. S., Shelestov, A. Y., Yailymov, B. Y., & Butko, I. N. (2016). Land Cover Changes Analysis Based on Deep Machine Learning Technique. Journal of Automation and Information Sciences, 48(5).
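
    A hedged sketch of the kind of GEE Python workflow alluded to above: a seasonal maximum-NDVI composite from Sentinel-2, a typical input feature for crop mapping. The region and date range are placeholders, not the authors' study configuration:

```python
# Maximum-NDVI composite over an area of interest with the GEE Python API.
import ee

ee.Initialize()

# Hypothetical area of interest (lon/lat rectangle over Ukraine).
aoi = ee.Geometry.Rectangle([30.0, 49.0, 31.0, 50.0])

def add_ndvi(image):
    # NDVI from Sentinel-2 NIR (B8) and red (B4) bands.
    return image.addBands(image.normalizedDifference(['B8', 'B4']).rename('NDVI'))

s2 = (ee.ImageCollection('COPERNICUS/S2')
        .filterBounds(aoi)
        .filterDate('2017-04-01', '2017-10-01')
        .map(add_ndvi))

# Seasonal maximum-NDVI composite, computed server-side in the cloud.
max_ndvi = s2.select('NDVI').max().clip(aoi)
print(max_ndvi.getInfo())
```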


  • 12:00 - UrtheCast Data and Information Platform: How To Democratise Access To Earth Observation
    Ramos Perez, Jose Julio - Deimos Imaging SLU, an Urthecast company, Spain

    UrtheCast's vision of democratising access to Earth Observation takes concrete shape with our data distribution and exploitation ecosystem, the UrthePlatform. The platform brings together:
    - Data sources - an extensive and rich offer of data sources (including our current operational satellites Deimos-1 and Deimos-2, public satellite data sets like Landsat and the Sentinels, our partner satellites of the Pangeo Alliance, and the future UrtheDaily and OptiSAR constellations);
    - Geo-analytics - an ever-growing set of EO services and applications specifically designed for quick combination, analysis and extraction of information (including bring-your-own-algorithm capabilities);
    - Visualisation - a complete portfolio of visually captivating presentation options (including graphs, 2D and 3D maps of both raster and vector layers);
    - GIS - close integration with unmatched Geographical Information Systems (GIS);
    - Marketplace - the possibility to monetise developed services and products;
    - Cloud - secure operation on the world's biggest and most advanced cloud infrastructure;
    - Community - a very large community of remote sensing experts, engineers and data scientists developing algorithms, and final users consuming Earth Observation imagery and analytics;
    - Vertical integration - seamless compatibility with technologies used by companies in much larger industries (b2b and b2c models) and sectors like forestry, agriculture, infrastructure, urban planning, defence and intelligence.
    The UrthePlatform provides unhindered and near-universal access to EO imagery and data at an affordable price point, in formats and on platforms that do not require expertise, within an ecosystem that attracts third-party investment and innovation and significantly broadens the utility of the data for organisations and individuals. This combination of data wealth, engineering effectiveness, expansion possibilities and a massive community makes the UrthePlatform a perfect place for both EO and data scientists to deploy their work and make it truly useful for their institutions and society. This presentation will introduce the UrthePlatform and discuss how its users can benefit from it.


  • 12:15 - Rationales For An Open Cloud Transition: The Case Of Bringing The EGI Federated Cloud As A Commodity For The Geohazards Scientific Community
    Rossi, Cesare; Caumont, Hervè; Pacini, Fabrizio - Terradue Srl, Italy

    Earth observations from satellites produce vast amounts of data. In particular, the new Copernicus Sentinel missions are playing an increasingly important role as a reliable, high-quality and free open data source for scientific, public sector and commercial activities. The latest developments in Information and Communication Technology (ICT) facilitate the handling of such large volumes of data, and European initiatives (e.g. EOSC, DIAS) are flourishing to deliver on it. In this context, Terradue is advancing an approach that resolutely promotes an Open Cloud model of operations. With solutions to transfer EO processing algorithms to Cloud infrastructures, Terradue Cloud Platform is optimising the connectivity of data centres with integrated discovery and processing methods. This is for example the case with the Geohazards Exploitation Platform initiative, an R&D activity funded by ESA. Implementing a Hybrid Cloud model, and using Cloud APIs based on international standards, the Platform fulfils its growing user needs by leveraging the capabilities of several Public Cloud providers. Operated according to an “Open Cloud” strategy, it involves partnerships complying with a set of best practices and guidelines:
    • Open APIs. Embrace Cloud bursting APIs that can easily be plugged into the Platform's codebase, so as to expand the Platform offering with providers that bring complementary strategic advantages for different user communities.
    • Developer community. Support and nurture Cloud communities that collaborate on evolving open source technologies, including at the level of the Platform engineering team when it comes to delivering modular extensions.
    • Self-service provisioning and management of resources. The Platform's end users are able to self-provision their required ICT resources and to work autonomously.
    • Users' right to move data as needed. By supporting distributed instances of its EO data management layer, the Platform delivers the required level of data locality to ensure high-performance processing at optimized cost, and guarantees that value-added chains can be built on top of intermediate results.
    • Federated Cloud operations. The Platform's collaborative environment and business processes support users in seamlessly deploying apps and data from a shared marketplace and across multiple cloud environments.
    As a recent case, thanks to the integration within the Platform of the Open Cloud Computing Interface (OCCI), and the close partnership between EGI and Terradue (a Service Level Agreement signed in September 2016 allows Terradue to access a set of EGI federated Cloud providers in Europe), our provisioning of ICT resources supports ever more demanding exploitation scenarios (see the sketch below). At this stage, EGI compute and storage resources from GOEGRID-GWGD (Germany) are used to support the ESA SNAP Sentinel-1 COherence and INtensity (COIN) service, an on-demand processing model accessed by GEP users. Also, EGI compute and storage resources from ReCaS Bari (Italy) and BELNET-BEGRID (Belgium) are used to support the global-scale systematic production of the InSAR Browse services integrated on GEP by DLR. As Platform services, they automatically produce interferograms from Copernicus Sentinel-1 acquisitions over a subset of the global strain rate model, where volcanic eruptions and earthquakes are most likely to impact society.
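
    For illustration only, a hedged sketch of the kind of standards-based request OCCI enables: an OCCI 1.1 text-rendered call creating a compute resource on a federated provider. The endpoint URL and attribute values are assumptions, and real EGI Federated Cloud access additionally requires X.509/VOMS authentication, which is omitted here:

```python
# Create a compute resource via OCCI's HTTP text rendering (OCCI 1.1).
import requests

endpoint = "https://occi.example-provider.eu:8787/compute/"  # hypothetical
headers = {
    "Content-Type": "text/occi",
    # The OCCI "kind" identifying the resource type to instantiate.
    "Category": ('compute; scheme="http://schemas.ogf.org/occi/'
                 'infrastructure#"; class="kind"'),
    # Desired attributes of the new compute resource.
    "X-OCCI-Attribute": "occi.compute.cores=4, occi.compute.memory=8.0",
}

response = requests.post(endpoint, headers=headers)
# On success the provider returns the URL of the created resource.
print(response.status_code, response.headers.get("Location"))
```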


  • 12:30 - SPOT World Heritage: preserve and promote new enhanced SPOT 1-to-5 products
    Nosavan, Julien; Henry, Patrice; Hosford, Steven - CNES, France

    SPOT 1-to-5 satellites have collected more than 15 million images all over the world during 30 years, from 1986 to 2015, which represents a unique historical dataset. Spot World Heritage (SWH) is the CNES initiative to preserve and promote this SPOT archive by providing new enhanced products on an open web platform. A first step began in 2015 with the start of the repatriation of the remaining SPOT data hosted in receiving stations spread across the world into the CNES central archive system. Meanwhile, some preliminary SWH processing chains have been developed, with the production of more than 100,000 orthorectified SPOT products provided by CNES through the French land products data centre THEIA. From mid-2017, the SWH initiative will move into another phase with the development of operational SWH processing chains in line with Sentinel-2 “standards” to allow deeper time series analysis. The first one is based on the current SPOT operational processing chain and will provide SWH-L1A products, which are the first exploitable images with radiometric corrections. The following ones will provide SWH-L1B and SWH-L1C products, geometrically compatible with ESA Sentinel-2 products, SWH-L1C being the orthorectified product. SWH processing will take place on the CNES High Performance Computing Centre to take advantage of the proximity of the SPOT archive. Dedicated means are being put in place to process the whole SPOT archive using 24-core processors and optimized solutions for file sharing (GPFS), deployment (Docker) and cataloguing (Elastic Stack). Finally, all the generated SPOT products will be accessible free of charge to registered users via the web. The first SWH products are expected to be distributed in 2018, while the whole archive is expected to be processed within two years, by 2020, depending on the timing of SPOT data retrieval from the reception stations.


  • 12:45 - The Digital Transformation of Earth Observation Market - challenges and opportunities in moving from the “pipeline” to the “platform” business model
    Manieri, Andrea (1); Spito, Nicolò (1,2) - 1: Engineering Ingegneria Informatica SpA, Italy; 2: Politecnico di Torino

    Recently, great attention has been given to the exploitation of Earth Observation data as a means of industrial innovation and a source of potential societal benefits. The EU is at the forefront of Earth Observation technology: ESA launched the Sentinel constellations, a set of redundant satellites that offer the high availability and resiliency required by industries to run businesses. Several approaches have been tried, with mixed success in terms of self-sustainability, ease of use and scalability: from Thematic Exploitation Platforms and business incubators to a marketplace for EO services and data. All these approaches seem to enable a pipeline business model, i.e. one where business is based on the acquisition of resources (a product and/or service) that are pushed to the consumer through the value chain in a unidirectional way. However, the Digital Transformation is radically changing the market landscape: ubiquitous connectivity, handheld technology and user interactions are enabling elements of the platform business model, as successfully exemplified in various markets by AirBnB, Uber, Google, etc. The platform model, instead, focuses on the creation of value through establishing intelligent networking among users: where pipelines create value “on top” of managed resources, platforms (which usually do not even own such resources) create value by linking producers and consumers of resources. Platforms, as analysed in depth by S.P. Choudary (Choudary et al. 2015), act as content aggregators that can simultaneously satisfy different types of interest. Platforms also exploit the phenomenon of network externalities, i.e. a service increases in value as the number of interacting individuals increases. Externalities can be of two types: same-side (e.g. as in telecommunication networks) or cross-side. The platform model exploits cross-side network externalities, which are linked to the diffusion of the product not among members on the same side of the market, but on another network (or side). For example, the Amazon marketplace bridges producers and consumers, while Uber helps drivers and riders to meet and AirBnB links hosts and tourists. These features also allow the business model to scale faster, thanks to the nature of the user, who can be both resource provider and consumer (a prosumer): this creates new opportunities, but also new challenges to face. The breakthrough work of Choudary on platform analysis models highlights the distinctive features of this new approach as well as the best practices that facilitate its understanding and implementation. This presentation aims to open a debate among stakeholders and to illustrate initial hypotheses about how to implement the platform model in the EO sector for the benefit of all actors involved.


  • 13:00 - The Earth Observation Broker Platform for the Energy Sector
    Partington, Kim Charles (1); Lefort, Thomas (1); Debart, Carles (2); Reeve, Chris (3); Cetinic, Frano (4); Hartmann, Knut (5); Gasperi, Jerome (6) - 1: Geocento Limited; 2: Kongsberg Satellite Services; 3: Satellite Applications Catapult; 4: GlobeSAR; 5: Eomap Gmbh; 6: Jeobrowser

    The EO Broker platform is designed to encourage the uptake of Earth Observation by the energy sector by supporting users through the stages of pre-procurement, including product discovery, feasibility assessment and supplier interaction. The open source platform has been developed with support from ESA's GSTP programme and as such includes some innovative brokering capabilities for products and services, while at the same time aiming to support relatively easy uptake by both users and suppliers. The design has been influenced strongly by user cases derived from the energy sector, including ESA's "EO4OG" projects, but has also benefitted from guidance by a project Steering Committee consisting of oil and gas industry professionals. The platform is at a mature stage of development, but is not yet operational. Its capabilities will be demonstrated in the presentation, along with an update on planned exploitation beyond the development phase of the project; interested users and suppliers are invited to get in touch with the team.


Data Cube

Chairs: Baumann, Peter (Jacobs University | rasdaman GmbH), Desnos, Yves-Louis (ESA-ESRIN)

14:15 - 15:45

  • 14:15 - The Datacube Manifesto
    Baumann, Peter (1); Merticariu, Vlad (1); Misev, Dimitar (1); Hogan, Patrick (2) - 1: Jacobs University, Germany; 2: NASA, USA

    Recently, the term datacube has been receiving increasing attention, as it has the potential of greatly simplifying "Big Earth Data" services for users by providing massive spatio-temporal data in an analysis-ready way. However, there is considerable confusion about the data and service model of such datacubes. With this Manifesto we provide a concise, vendor-neutral, standards-aware definition of datacubes. The six simple rules are based on our two decades of experience in datacube modeling, query languages, architectures, distributed processing, standards development, and active deployment of Petascale datacube services at some of the largest data centers worldwide. In particular, the intercontinental EarthServer initiative is demonstrating large-scale use of datacubes, with Petabyte portals at the time of the conference and experimental data center federation between Europe and Australia. Further, the authors are editors and main writers of geo datacube standards in OGC and ISO, as well as of the generic datacube extension to ISO SQL. We exemplify the feasibility and advantages of datacube services based on large-scale running services and discuss the seamless connection of servers and visual clients through open datacube standards. We hope that by providing this crisp guidance we can clarify the discussion and support the assessment of technologies and services in this exciting new technology field.


  • 14:45 - Copernicus Climate Data at your fingertips - ECMWF’s data archive as a data cube
    Wagemann, Julia (1); Kakaletris, George (2); Apostolopoulos, Konstantinos (2); Kouvarakis, Manos (2); Merticariu, Vlad (3); Siemen, Stephan (1); Baumann, Peter (3) - 1: European Centre for Medium-Range Weather Forecasts (ECMWF), United Kingdom; 2: Communication and Information Technologies (CITE) S.A., Greece; 3: Jacobs University Bremen, Germany

    The Meteorological Archival and Retrieval System (MARS) is the world's largest archive of meteorological and climate data. It currently holds more than 250 petabytes of operational and research data, as well as data from three Copernicus services: the Atmosphere Monitoring Service (CAMS), the Climate Change Service (C3S) and the Emergency Management Service (CEMS). ECMWF's current data retrieval system is an efficient download service for global fields and geographical subsets thereof. The system offers data in either GRIB or NetCDF format. The growing volume of the data, however, makes it challenging for users to fully exploit long time series of climate reanalysis data (more than 35 years), as generally more data than actually required has to be downloaded and then extensively processed on local machines. As part of the EarthServer-2 project, ECMWF's MARS archive has been connected to an OGC-based standardized web service layer, in order to offer on-demand access to and server-based processing of ECMWF data for a non-meteorological audience. This connection transforms ECMWF's data archive into a flexible data cube that allows for the efficient retrieval and processing of geographical subsets and individual point data at the same time. Downloading ECMWF's ERA-Interim data is no longer required, and data access can be integrated directly into custom processing workflows. The approach combines the efficient retrieval of data from MARS via a MARS request with intelligent data cube processing in rasdaman, an efficient array database technology. FeMME, a metadata management engine, is responsible for connecting the two processes and sending the data back to the user. The pilot allows users to request data via an OGC Web Coverage Processing Service (WCPS) request (see the sketch below). The application then translates the WCPS request into an appropriate MARS request to retrieve data from the archive. The returned data is registered on the fly with rasdaman and, after being processed as requested, returned to the user in a standardized way and in a chosen format encoding (e.g. NetCDF, CSV or JSON). Based on a practical use case from the climate sciences, the presentation will showcase how open data from the Copernicus Climate Change Service in ECMWF's MARS archive can be retrieved and processed directly with the help of a WC(P)S request. A specific focus will be set on the benefits data providers and data users gain from offering and accessing large volumes of Earth Science data as a data cube.
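
    A hedged illustration of the WCPS access pattern described above: a query that averages a spatio-temporal subset of a reanalysis coverage server-side, so only the result travels to the user. The endpoint URL, coverage name and axis labels are assumptions for illustration, not the actual service configuration:

```python
# Server-side averaging of a data cube subset via an OGC WCPS request.
import requests

endpoint = "https://earthserver.example.int/rasdaman/ows"  # hypothetical URL

# WCPS: trim the coverage along Lat/Long/time, then reduce to one number.
wcps_query = """
for c in (temp2m_era_interim)
return avg(c[Lat(48:54), Long(2:8), ansi("2010-01-01":"2010-12-31")])
"""

response = requests.get(endpoint, params={
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": wcps_query,
})
print(response.text)  # the scalar regional/annual mean, computed at the server
```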


  • 15:00 - The Earth System Data Cube Service
    Brandt, Gunnar (1); Mahecha, Miguel (2); Fomferra, Norman (1); Permana, Hans (1); Gans, Fabian (2) - 1: Brockmann Consult GmbH, Geesthacht, Germany; 2: Max Planck Institute for Biogeochemistry, Jena, Germany

    The ability to measure and quantify our environment, including ourselves, has increased significantly in recent years. Big data is now affecting almost every aspect of life and there are no signs that this trend is going to cease any time soon. For the natural environment, Earth Observation has been driving the data revolution by developing and putting into operation many new sensors with high spatial resolution and observation frequency, generating high-quality products on a global scale to which everyone has free access. With this wealth of data at hand, the chances to describe and understand the complex dynamics of our planet, and particularly the role of humans in affecting the Earth System, are unprecedented. The bottleneck for many applications is, however, no longer the lack of suitable data, but the means to turn the huge amounts of available data from different sources into valuable information. Accessing several product archives, handling big data volumes, providing adequate computing and network resources, and working with different product formats and data models are challenging even for experienced users. Furthermore, coping with these issues consumes time and resources that are then no longer available for the actual task of data exploitation and research. The Earth System Data Cube (ESDC) service aims at reducing such overhead for the joint exploration of relevant parameters of the coupled biosphere-atmosphere system. Currently, more than 30 parameters, most of them from Earth Observation and numerical models, are processed to share a common spatio-temporal grid and a common data model, which greatly facilitates multivariate analysis (see the sketch below). The service is provided through a virtual research environment based on JupyterHub technology, which offers access to the ESDC, cloud processing, and APIs for Python and Julia. There is a growing number of tailored open source software tools for typical operations that make working with the ESDC very efficient and enable collaboration between users. Moreover, an advanced visualisation application provides a graphical user interface for intuitive and interactive exploration of the ESDC. We present here different use cases that are currently implemented with users of the service. The examples, which comprise the calculation of marine primary productivity, biogeochemical model optimization, the development of a regional Ecological Observation System, and the combination of social data on human disasters with extreme events in the ESDC, underline the diversity of potential applications of the ESDC service. As the service matures on its way to operational status, we welcome new users to explore the potential of the ESDC for their own applications and to further advance the capabilities of the service with us.
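
    A minimal sketch of why a common grid and data model simplify multivariate analysis, using generic xarray rather than the ESDC's own API; the file name and variable names are hypothetical placeholders, not the actual ESDC catalogue:

```python
# With all variables on one spatio-temporal grid, cross-variable analysis
# reduces to simple aligned array operations.
import xarray as xr

cube = xr.open_dataset("esdc_subset.nc")  # assumed local extract of the cube

gpp = cube["gross_primary_productivity"]  # hypothetical variable names
t2m = cube["air_temperature_2m"]

# Pearson correlation over time at each grid cell, written out explicitly.
gpp_a = gpp - gpp.mean("time")
t2m_a = t2m - t2m.mean("time")
corr = (gpp_a * t2m_a).mean("time") / (gpp.std("time") * t2m.std("time"))
print(corr)
```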


  • 15:15 - Application Of The Data Cube Concept For Multi‐Temporal Satellite Imagery – A Complete Open Science Workflow For Data Science In EO
    Kristen, Harald (1); Jacob, Alexander (1); Vicente, Fernando (2); Marin, Carlo (1); Monsorno, Roberto (1); Notarnicola, Claudia (1) - 1: Eurac Research, Italy; 2: DARES Technology, Spain

    ESA's Sentinel satellites provide satellite imagery at unprecedented temporal and spatial resolution, shared as open data with everyone. ESA foresees that its Earth Observation (EO) archive will grow from 23 PB in 2018 to more than 51 PB in 2022, with the Copernicus Sentinel missions being the major driver. To fully exploit this large amount of data, the EO user community, both scientific and commercial, has to change the way it uses remote sensing imagery. It will no longer be feasible to store and process EO data on single workstations, especially when it comes to time series analysis at larger scales. This paper shows a novel approach to big data science for EO which is able to deal with the specific challenges of long time series processing using only Free and Open Source Software (FOSS). In detail, we address the problem of land cover and vegetation mapping using time series of Sentinel-1 data, by exploiting the temporal evolution of the interferometric coherence. The analysis will be conducted for the study area of South Tyrol in Italy as part of the recently started ESA SEOM SInCohMap project ( http://www.sincohmap.org ). All the exploited data is hosted and processed directly on the high-performance cloud infrastructure of the Eurac Research Sentinel Alpine Observatory ( http://sao.eurac.edu ). In detail, the pre-processed data is organized in data cubes, implemented in the raster array database rasdaman. This guarantees fast and standardized access to the data via the OGC Web Coverage Service (WCS), following the principle “you only get what you need”. Furthermore, a good portion of the classic processing tasks, such as subsetting, reprojection, resampling and time series aggregation, are done directly in the data cubes via the OGC WCPS (Web Coverage Processing Service) protocol, according to the second principle, “bring the processing to the data”. Because of its simplicity, its open source policy and its rising popularity in the EO community, Python is used as the programming language for the implementation of the land cover and vegetation mapping. In detail, to download and process data via WCPS, we developed a Python module that sends WCPS queries in the right format to the rasdaman server and transforms the response into data structures that can be used directly for further analysis in Python. For the land cover and vegetation classification, two algorithms well described in this field, Random Forests and Support Vector Machines, are applied (a sketch of this step follows below). All the Python scripts run on a Jupyter server, which again runs in the SAO cloud. Git is used as the version control system to document and publish the created code for potential users under the MIT license. This fosters collaboration, as other researchers can clearly see, comment on and help develop the code. With the proposed approach, EO users can process and analyze large amounts of data in a cloud infrastructure, accessing services and data remotely using a high-level programming language. In this way, EO users can focus on algorithm development instead of dealing with data preparation.
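
    A hedged sketch of the classification step named above: a Random Forest applied to per-pixel temporal profiles of Sentinel-1 coherence. The input arrays are assumed to have been extracted beforehand (e.g. via the WCPS module described in the abstract); this is an illustrative outline, not the SInCohMap implementation:

```python
# Random Forest land-cover classification on coherence time series features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical pre-extracted inputs:
# coherence_features: (n_pixels, n_dates) temporal coherence profiles
# labels:             (n_pixels,) land-cover classes from reference data
coherence_features = np.load("coherence_features.npy")
labels = np.load("labels.npy")

X_train, X_test, y_train, y_test = train_test_split(
    coherence_features, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("overall accuracy:", clf.score(X_test, y_test))
```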


  • 15:30 - Evaluation of Array Database to Manage and Query Massive Sensor Data
    Joshi, Jigeeshu; Pebesma, Edzer; Appel, Marius - Institut für Geoinformatik (ifgi), Heisenbergstrasse 2, 48149 Münster, Germany

    Many environmental monitoring observations are recorded at a fixed number of stations with constant temporal frequency. Arrays are a natural choice to represent these, with space and time as the two dimensions. Current implementations of Sensor Observation Services (SOS) from the Open Geospatial Consortium (OGC), however, typically use a normalized relational database as the data backend. If the data size grows to billions of records, querying and updating the database becomes relatively slow, with indexes taking up a lot of resources. This study evaluates the use of SciDB, an array database, as a backend for massive time series data. In a multi-dimensional array database system like SciDB, dimensions are used as the index. Indexes are implied and not stored, which saves substantial resources. Moreover, in the array data model, data that are close together are placed close together on disk. This is advantageous for sensor observation data with spatial and temporal patterns, as proximity in storage of correlated data improves efficiency in query processing (see the sketch below). In a use case, SciDB features like parallelization, array dimension manipulation, chunked storage and compression are demonstrated using an air quality dataset provided by the European Environment Agency. It is essentially a time series of air quality observations recorded by all member states at selected stations in Europe. The study compares SciDB as an array database to PostgreSQL (a popular choice in the geospatial community) as a relational database. A common platform was set up to test the performance and compare the approaches of the two systems. Results show that SciDB has a significant advantage for queries like sequential scans, group-by aggregation, joins, and computation of quantiles on the dataset used. An index on a PostgreSQL table reduces the response time for filter and slicing queries, but the index size grows substantially as data size increases. On the other hand, SciDB's compression technique greatly reduces disk storage space. The results of these tests and the discussion presented are particularly useful for Earth Observation studies that require multi-dimensional data management.
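
    An illustration of why the array model fits station/time observations, using an in-memory xarray array rather than SciDB itself (which is queried in its own AFL language): the dimensions act as the index, so the benchmark-style operations mentioned in the study become direct lookups. The data here is synthetic:

```python
# Station x time array: slicing, group-by aggregation and quantiles are
# direct dimension operations, with no separate index to build or store.
import numpy as np
import pandas as pd
import xarray as xr

obs = xr.DataArray(
    np.random.rand(100, 8760),                       # synthetic hourly values
    dims=("station", "time"),
    coords={"station": np.arange(100),
            "time": pd.date_range("2016-01-01", periods=8760, freq="h")},
    name="pm10",
)

sub = obs.sel(station=42, time=slice("2016-03-01", "2016-03-31"))  # slicing
monthly = obs.groupby("time.month").mean()            # group-by aggregation
q95 = obs.quantile(0.95, dim="time")                  # per-station quantile
print(sub.shape, monthly.shape, q95.shape)
```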


Round Table TEP-DataCube-Citizen Science

16:15 - 16:35

  • 16:15 - Round Table

    Exploitation Platforms/Citizen Science


Day 3 - 27/09/2017

Open Data and Tools

Chairs: Vollrath, Andreas (UN-FAO), Fitrzyk, Magdalena (RSAC)

08:30 - 10:30

  • 08:30 - Community development of scientific applications using the SNAP Toolbox
    Brockmann, Carsten (1); Fomferra, N. (1); Veci, L. (2); Ducoin, N. (3); Gascon, Ferran (4); Engdahl, Marcus (4); Regner, Peter (4); Ganz, H. (5); Totrupp, C. (6) - 1: Brockmann Consult, Germany; 2: Array System, Canada; 3: C-S System, France; 4: ESA ESRIN, Italy; 5: Odermatt & Brockmann GmbH, Switzerland; 6: DHI, Denmark

    In 2014 ESA released the first version of the open source Sentinel Application Platform, SNAP, with three toolboxes providing instrument-specific support for Sentinel-1, -2 and -3. Today, SNAP 5 is the current version. SNAP has evolved significantly, e.g. through the support of uncertainties or the multi-resolution data model, and the Sentinel toolboxes have been amended with various new supporting tools. New instrument toolboxes have been added for Radarsat, SMOS and Proba-V. SNAP has more than 10,000 downloads. SNAP is a platform that offers users various ways to use it, to adapt it to individual needs and to extend it. Most ESA Sentinel data users are familiar with SNAP Desktop, the interactive GUI application. A steadily growing user community is exploring and enjoying the possibilities offered by:
    · the Graph Processing Framework, which allows users to connect different processing steps ("processing chains"), accessible via the graphical graph builder and the batch processor;
    · the Python and Java APIs for writing one's own scripts/programmes to implement one's own algorithmic ideas (a small example is sketched below);
    · the SNAP forum, to share experiences, get support for difficult questions, and get in touch with the SNAP developers.
    In this presentation we will showcase three examples of public contributions to SNAP that highlight the above elements of a modern community platform for developing scientific applications in an open science environment:
    · OLCI calibration modification (vicarious calibration) using graphs;
    · MusenAlp - lake surface temperature retrieval implemented in Python;
    · GlobWetlands Africa Toolbox - QGIS using SNAP.
    Key to all user interaction is the SNAP forum, which has several thousands of threads. The issue tracker, on the contrary, is less popular, although it is a much better means of directly influencing the further development of the software. We will present both community tools, experiences, and pros and cons. SNAP and its precursors, the ENVISAT BEAM and NEST toolboxes, form a microcosm in which to study the performance of community tools in the very specific domain of Earth Observation science and applications. In our conclusion we will elaborate on the lessons learned from more than 10 years of open source EO toolbox development, and hope to stimulate discussions on evolution and further improvements.
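
    A small, hedged example of the Python API mentioned above, using the 'snappy' interface distributed with SNAP; the input path and subset region are placeholders, and operator and parameter names follow SNAP's Graph Processing Framework conventions:

```python
# Read a product and run a GPF operator from Python via snappy.
from snappy import ProductIO, GPF, HashMap

# Placeholder path to a Sentinel-2 L1C product.
product = ProductIO.readProduct("S2A_MSIL1C_example.SAFE/MTD_MSIL1C.xml")

# Run the generic 'Subset' operator; region is "x,y,width,height" in pixels.
params = HashMap()
params.put("region", "0,0,1000,1000")
subset = GPF.createProduct("Subset", params, product)

# Write the result in SNAP's native BEAM-DIMAP format.
ProductIO.writeProduct(subset, "subset_output", "BEAM-DIMAP")
```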


  • 09:00 - ESA Atmospheric Toolbox
    Niemeijer, Sander - S&T, The Netherlands

    The ESA Atmospheric Toolbox (BEAT) is one of the ESA Earth Observation end-user toolboxes. Since its inception in 2002 it has brought data reading, analysis and visualization support for products from a wide range of atmospheric missions such as ENVISAT, ERS, Metop, Aura, ACE, Odin, GOSAT, NPP-Suomi, Atmospheric CII, and Sentinel-5P. The toolbox consists of three main components: CODA, HARP, and VISAN. CODA is the core data reading interface, allowing direct product access for a wide range of product formats using a single interface. HARP is the data harmonization and inter-comparison interface, which brings all data into a harmonized data format and then provides operations such as filtering, collocation, unit conversion, quantity conversion (e.g. volume mixing ratio to number density), vertical integration/smoothing, and binning/regridding (see the sketch below). VISAN is a Python-based basic analysis and visualization environment with advanced interactive plotting for 2D and geographical data. All components of the toolbox are freely available, cross-platform, and fully open source.
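
    A hedged sketch of the HARP Python interface described above: importing a product while applying a harmonisation operation during ingestion. The file names are placeholders, and the operations string follows HARP's documented filter syntax:

```python
# Import a product into HARP's harmonized data model, filtering on the fly.
import harp

# Placeholder input file; the operations string drops all samples south of
# 30 degrees latitude while the data is being read.
product = harp.import_product("example_atmospheric_product.nc",
                              operations="latitude > 30")

# Write the harmonized, filtered result back out as NetCDF.
harp.export_product(product, "filtered.nc")
```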


  • 09:15 - Exploitation of EO data using ESA’s SNAP software platform
    Engdahl, Marcus (1); Fitrzyk, Magdalena (2) - 1: ESA-ESRIN, Italy; 2: RSAC c/o ESA-ESRIN

    SNAP is a software platform and a set of open source software toolboxes developed by ESA for the research and exploitation of data from the Sentinel missions, as well as other third-party high- and medium-resolution optical and SAR missions. The software consists of the SNAP (SeNtinel Applications Platform) software platform, to which EO toolboxes can be added as modules. Currently the major toolboxes running on SNAP are the Sentinel-1/2/3 toolboxes. In addition, the user can install other toolboxes like the PROBA-V Toolbox, the SMOS-Box and the Radarsat-2 Polarimetric Toolkit. Each toolbox contains a collection of specific processing tools, data product readers and writers that are added to the “generic” raster data analysis and visualisation tools offered by the SNAP platform. The functionalities of the Sentinel-1 toolbox include basic and advanced SAR image post-processing tools, as well as interferometric and polarimetric functionality. The toolbox is fully compatible with the TOPSAR mode of Sentinel-1 data, for which complete TOPSAR interferometric processing chains are possible. Further interferometric functionality is provided via bridges to third-party tools like SNAPHU (for phase unwrapping) and StaMPS (for Persistent Scatterer Interferometry, PSI). The toolbox also provides tools for the exploitation of data, including speckle filters, polarimetric decompositions, and classifiers. The software supports most other civilian spaceborne SAR missions, including RADARSAT-2, TerraSAR-X/TanDEM-X, ALOS-1&2, COSMO-SkyMed, Kompsat-5, ERS-1&2 and ENVISAT. The Sentinel-2 toolbox is a multi-mission toolbox, already providing support for Sentinel-2, RapidEye, Deimos, and SPOT 1 to SPOT 5 datasets. In terms of processing algorithms, the software includes tools specific to the Sentinel-2 mission, such as the atmospheric correction module Sen2Cor, a multitemporal synthesis processor (L3), a biophysical products processor (L2B) and a deforestation detector, along with a set of more generic tools for high-resolution optical data. The Sentinel-3 Toolbox consists of a rich set of visualisation, analysis and processing tools for the exploitation of OLCI and SLSTR data from the Sentinel-3 mission. It also supports the ESA missions Envisat (MERIS & AATSR), ERS (ATSR) and SMOS, as well as third-party data from MODIS (Aqua and Terra), Landsat (TM), ALOS (AVNIR & PRISM) and others. The SNAP platform can be used interactively via the Graphical User Interface (GUI), or via the command line, enabling high-throughput processing on computer clusters or clouds. SNAP has been extremely well received by the EO user communities, with over 30,000 active users as of June 2017.


  • 09:30 - Batch Processing of Sentinel-1 IW GRD time series with QGIS / OTB software
    Lardeux, Cédric (1); Frison, Pierre-Louis (2) - 1: ONF International, France; 2: UPEM / IGN, France

    We present a customized version of QGIS (Quantum GIS) integrating the OTB (Orfeo ToolBox) software, which allows the processing of Sentinel-1 IW time-series acquisitions in GRD product format. The processing is dedicated to non-specialists in radar who are concerned with vegetation monitoring, such as land cover estimation or the detection of deforestation and forest degradation areas.
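
    As an indication of how OTB applications can also be scripted for batch use outside the QGIS GUI, the sketch below invokes one of them through OTB's Python bindings; the file names are placeholders and the choice of application is ours, for illustration only.

    ```python
    # Minimal sketch of batch-driving an Orfeo ToolBox application from
    # Python, as the QGIS integration does under the hood. File names are
    # placeholders; 'Smoothing' with anisotropic diffusion is one way to
    # reduce speckle in a calibrated Sentinel-1 GRD image.
    import otbApplication as otb

    app = otb.Registry.CreateApplication("Smoothing")
    app.SetParameterString("in", "S1_IW_GRD_calibrated.tif")
    app.SetParameterString("out", "S1_IW_GRD_smoothed.tif")
    app.SetParameterString("type", "anidif")
    app.ExecuteAndWriteOutput()
    ```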


  • 09:45 - InSAR for everyone: a researcher’s perspective on the use of automated grid processing of open satellite SAR data with P-SBAS in ESA’s G-POD
    Cigna, Francesca - Italian Space Agency (ASI), Italy

    Hardware and software requirements for advanced Interferometric Synthetic Aperture Radar (InSAR) processing of long stacks of satellite SAR imagery to generate land deformation time series are notably increasing in the current era of ‘big SAR data’ (e.g. Cigna 2015). The volume and length of interferometric SAR data stacks are growing exponentially (e.g. every 6 days a new Sentinel-1 IW scene is acquired for all land areas of Europe, along each pass, ascending and descending), and with them InSAR processing workloads and demands. A vast component of SAR data handling, initial manipulation and specialised InSAR processing can be delegated to remote systems and virtual environments. For instance, ESA’s Grid-Processing On Demand (G-POD; http://gpod.eo.esa.int/) platform for EO applications offers an environment where SAR data can be processed using high-performance and sizeable computing resources. In this paper, a number of InSAR processing trials that were carried out using G-POD and its Land Information Service ‘InSAR SBAS’ will be presented. The latter is based on the automated Parallel Small BAseline Subset (P-SBAS) processing chain developed at CNR-IREA (Casu et al. 2014), and allows both the generation of interferograms and multi-temporal analysis with extraction of land deformation time series for coherent targets. Hundreds of SAR scenes at medium resolution acquired by ESA’s ERS-1/2 and ENVISAT missions and made freely available through ESA’s Virtual Archive 4 (http://eo-virtual-archive4.esa.int/) were used to run the trials. Processing with P-SBAS was conducted through the user-friendly G-POD web portal, which allowed selection of input data, setting of processing parameters, thresholds and options, and effective remote monitoring of the full processing chain, as well as downloading of the generated results for subsequent visualisation in Google Earth and uploading into GIS platforms for interpretation and analysis. Case studies that will be presented include major cities of Europe such as Rome, Naples and London, as well as sites in Mexico and the Middle East, where natural hazards (e.g. land subsidence, volcanic activity) combine with anthropogenic factors (e.g. groundwater abstraction). An analysis of processing times to derive time series for each case study (less than half a day), together with the precision (up to mm) and geological validation of the retrieved results, proves the value of open platforms and tools such as P-SBAS in G-POD to handle demanding InSAR workloads and help the InSAR community (even non-experts) freely generate InSAR products from open SAR data. - Casu F. et al. 2014. SBAS DInSAR parallel processing for deformation time series computation. IEEE JSTARS 7(8), 3285-3296, doi: 10.1109/JSTARS.2014.2322671 - Cigna F. 2015. Getting ready for the generation of a nationwide ground motion product for Great Britain using SAR data stacks: feasibility, data volumes and perspectives. Proc. IGARSS 2015, 1464-1467, doi: 10.1109/IGARSS.2015.7326055


  • 10:00 - Open SAR Toolkit - the simple way of processing SAR data for land applications
    Vollrath, Andreas; Lindquist, Erik; Wiell, Daniel - UN-FAO, Italy

    Compared to its optical counterpart, the community of Synthetic Aperture Radar (SAR) data users for land applications is still small. One major reason originates from the differences in the acquisition principle and the underlying physics of the imaging process. For non-experts, this results in difficulties in applying the correct processing steps as well as in interpreting the non-intuitive backscatter image composites. On the other hand, the free and open access to Sentinel-1 data has widened the community of interested users and paves the way for the integration of SAR data into operational monitoring systems. In order to ease the use of, and access to, analysis-ready SAR data for wide-area land applications, the Food and Agriculture Organization of the United Nations is developing the Open SAR Toolkit (OST) under the SEPAL project. OST includes fully automated pre-processing routines that are mainly built on top of the Sentinel Application Platform (SNAP) and other freely available open-source software such as GDAL, Orfeo ToolBox and Python. The simple and intuitive GUI is based on the R Shiny package and is accessed via a web browser. This also allows OST to be employed on cloud platforms, as in the case of SEPAL (see abstract of Lindquist et al.). For the moment, the supported data sets are the ALOS Kyoto & Carbon mosaics and Sentinel-1 products. The former is freely available for non-commercial use, and OST eases the access and preparation of the data tiles provided by JAXA. For Sentinel-1, data inventory and download routines, as well as a GRD-to-RTC processor, allow for the rapid generation of radiometrically terrain corrected (RTC) imagery that is ready for subsequent analysis tasks such as land cover classification. More advanced and processing-intensive data products, such as time-series and timescan imagery, can easily be produced as well, in a fully automatic manner. Ultimately, mosaicking generates seamless wide-area data sets. Alongside the processing routines, accompanying demos and capacity-building material provide the user with a gentle entry into the world of radar remote sensing for land applications and refer to a wealth of relevant literature for a more profound study of the subject. The presentation includes nationwide, wall-to-wall Sentinel-1 timescan and time-series mosaics that have been combined with optical and ALOS K&C data for biomass and land cover mapping.


  • 10:15 - EnMAP-Box 3.0 – a free and open source Python plug-in for QGIS
    van der Linden, Sebastian; Jakimow, Benjamin; Rabe, Andreas; Hostert, Patrick - Humboldt-Universität zu Berlin, Germany

    The EnMAP-Box is designed to process imaging spectroscopy data and is particularly developed to handle data from the upcoming EnMAP (Environmental Mapping and Analysis Program) sensor. It serves as a platform for sharing and distributing algorithms and methods among scientists and potential end-users. Starting with version 3.0, the EnMAP-Box is designed as a free and open source (FOSS) Python plug-in for the geographic information system QGIS, which is also FOSS. The two main goals are to provide (i) state-of-the-art applications for the processing of high-dimensional spectral and temporal remote sensing data and (ii) a graphical user interface (GUI) that enhances the GIS-oriented visualization capabilities in QGIS with applications for the visualization and exploration of multi-band remote sensing data. Therefore, the algorithms provided in the EnMAP-Box will be of high value for many other, especially multi- and hyperspectral, EO missions. The EnMAP-Box plug-in bridges and efficiently combines the advantages of QGIS (e.g. for visualisation and vector data processing), packages like GDAL (for data IO or working with virtual raster files) and abundant Python libraries (e.g. scikit-learn for EO data classification and pyqtgraph for fast and interactive chart drawing). The plug-in consists of (i) a graphical user interface for hyperspectral data visualisation and, e.g., spectral library management, (ii) a set of algorithms, and (iii) a high-level abstraction application programming interface (EnMAP API). The EnMAP-Box can be started from QGIS or stand-alone, and is planned to be registered in the QGIS plug-in repository. The algorithms are integrated in the QGIS processing framework, so they may be used in the QGIS graphical model builder and chained with algorithms provided by other plug-ins. The EnMAP-Box API supports concise domain-specific workflows with high-level data types and operators, which allows the integration of a multitude of powerful algorithms with only a few lines of code. In our presentation we will illustrate the concept of the EnMAP-Box and how it efficiently integrates various FOSS components. Based on this we will address the value and possibilities of this concept for integration, e.g. with the Sentinel-2 Toolbox.
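
    The GDAL-plus-scikit-learn pattern that the EnMAP-Box builds on can be summarised in a few lines; the sketch below is a generic illustration with assumed file names and training data, not the EnMAP API itself.

    ```python
    # Generic sketch of the pattern the EnMAP-Box builds on: read a raster
    # cube with GDAL, classify each pixel's spectrum with scikit-learn.
    # File names and the training arrays are hypothetical.
    import numpy as np
    from osgeo import gdal
    from sklearn.ensemble import RandomForestClassifier

    ds = gdal.Open("enmap_subset.tif")
    cube = ds.ReadAsArray()                   # shape: (bands, rows, cols)
    X = cube.reshape(cube.shape[0], -1).T     # pixels as samples, bands as features

    # Hypothetical training spectra and labels from a reference collection.
    X_train, y_train = np.load("train_X.npy"), np.load("train_y.npy")
    clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

    labels = clf.predict(X).reshape(cube.shape[1], cube.shape[2])
    ```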


Open Data and Tools Continuation

Chairs: Brockmann, Carsten (Brockmann Consult), Engdahl, Marcus (ESA-ESRIN)

11:15 - 13:15

  • 11:15 - dtwSat: An R Package for Land Cover Classification Using Satellite Image Time Series
    Maus, Victor - International Institute for Applied Systems Analysis, Austria

    Open access to satellite data has boosted the development of new approaches to quantify and understand Earth's changes. The large spatiotemporal availability of satellite imagery, for example, has improved our capability to map and monitor land use and land cover changes over vast areas. Given the open availability of large image data sets, the Earth Observation community would benefit greatly from methods that are openly available, reproducible and comparable. This paper presents the R package dtwSat, which provides an implementation of the Time-Weighted Dynamic Time Warping (TWDTW) method for land cover mapping using sequences of multi-band satellite images. Methods based on Dynamic Time Warping (DTW) are suitable for handling irregularly sampled and out-of-phase time series, which is frequently the case for those from remote sensing. The TWDTW algorithm has achieved significant results using MODIS, Landsat, and Sentinel-2 time series to classify natural vegetation and crop types in different regions. Using existing R packages as building blocks, dtwSat supports the full cycle of land cover classification using satellite time series, ranging from selecting temporal patterns to visualizing and assessing the results. To handle the satellite images, dtwSat uses well-known data structures from the R package raster, which offers the option to work with large raster data sets stored on disk instead of loading them into memory (RAM) at once. The current version of the dtwSat package provides pixel-based time series classification, i.e., each time series is processed independently, and therefore the code is easily parallelizable. dtwSat is open source and distributed under a GNU General Public License GPL (≥ 2). A binary version is available from the Comprehensive R Archive Network (https://cran.r-project.org/web/packages/dtwSat) and the development version from GitHub (https://github.com/vwmaus/dtwSat). Future versions of the package envisage new features to reduce border effects, increase spatial homogeneity (i.e., reduce the so-called 'salt-and-pepper' effect) and improve the temporal consistency of land cover transitions. dtwSat makes it straightforward to apply and compare the TWDTW approach with other methods, contributing to the rapid advancement of automated and semi-automated methods for analyzing satellite time series.
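
    dtwSat itself is an R package, but the core idea of TWDTW is language-agnostic; the sketch below illustrates a time-weighted accumulated DTW cost in Python, using a logistic time weight of the form described by the package authors (parameter values are illustrative, and full-sequence alignment is assumed for simplicity).

    ```python
    # Illustrative sketch of a time-weighted DTW cost: the classic DTW
    # recursion plus a logistic penalty on the time difference between
    # the aligned observations. Parameters alpha/beta are illustrative.
    import numpy as np

    def twdtw_cost(pattern, series, pat_doy, ser_doy, alpha=0.1, beta=100):
        """Accumulated cost of aligning a temporal pattern to a time series."""
        n, m = len(pattern), len(series)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                elapsed = abs(pat_doy[i - 1] - ser_doy[j - 1])   # days apart
                weight = 1.0 / (1.0 + np.exp(-alpha * (elapsed - beta)))
                cost = abs(pattern[i - 1] - series[j - 1]) + weight
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]
    ```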


  • 11:30 - Open Data, Open Protocols and Formats, Open-Source Software: WebWorldWind
    Voumard, Yann (1); Sacramento, Paulo (1); Collins, Paul David (2); Hogan, Patrick (2) - 1: Solenix Deutschland GmbH, Germany; 2: National Aeronautics and Space Administration (NASA), USA

    WebWorldWind is an open-source software library for building 3D virtual globes on the web. The joint ESA-NASA development team working on the project strives to make open data easily accessible and manageable in a 3D environment. To achieve this goal, the library relies heavily on highly recognised open protocols, such as those standardised and promoted by the Open Geospatial Consortium (WMS/WMTS, WFS, WCS, OpenSearch for Earth Observation, etc.), and formats (Shapefiles, JPEG, PNG, GeoTIFF, GeoJSON, KML, Collada, etc.) for bringing data to the globe and interpreting them. However, implementing the right protocols and formats is only the first step towards truly supporting a wider community in the development of applications using open data. Application developers need access to the data in a meaningful and customisable way. These principles are at the core of the architecture of WebWorldWind, both via its public interfaces and its extensibility. The open-source nature of the project is, in itself, a contribution to an application development landscape based on openness. In fact, interested developers can get invaluable insight into the workings of the framework simply by examining its code in detail. This is useful not only to understand the concepts and how best to leverage the framework, but also to contribute to its evolution and improvement over time. This presentation will first review some of the architectural concepts of WebWorldWind in support of open data and protocols. The current state of the art in terms of WebWorldWind features will then be presented. Concrete applications developed by the European community will exemplify the applicability of the developed concepts. Finally, a roadmap of improvements and new functionality to be implemented will be shown as a means to gather feedback from the community.


  • 11:45 - Crowd-driven Tools for the Calibration and Validation of Earth Observation Products
    Moorthy, Inian (1); See, Linda (1); Fritz, Steffen (1); McCallum, Ian (1); Perger, Christoph (1); Duerauer, Martina (1); Dresel, Christopher (1); Sturn, Tobias (1); Karner, Mathias (1); Schepaschenko, Dmitry (1); Lesiv, Myroslava (1); Danylo, Olha (1); Laso Bayas, Juan Carlos (1); Salk, Carl (1); Maus, Victor (1); Fraisl, Dilek (1); Domian, Dahlia (1); Mathieu, Pierre Philippe (2) - 1: International Institute for Applied Systems Analysis, Laxenburg, Austria; 2: European Space Agency, ESRIN, Frascati, Italy

    In recent years there has been a rapid diffusion of open-access Earth Observation (EO) data available at global scales to help scientists address planetary challenges including climate change, food security and disaster management. For example, since 2016 the European Space Agency (ESA), via its Sentinel-2 satellites, has been providing frequent (5-day repeat cycle) and fine-grained (10-meter resolution) optical imagery for open and public use. As such, the EO community is faced with the need to design methods for transforming this abundance of EO data into well-validated environmental monitoring products. To help facilitate the training and validation of these products (e.g. land cover, land use), several crowd-driven tools have been developed that engage stakeholders (within and outside the scientific community) in various tasks, including satellite image interpretation and online interactive mapping. This paper will highlight the new results and potential of a series of such tools developed at the International Institute for Applied Systems Analysis (IIASA), namely the Geo-Wiki engagement platform, the LACO-Wiki validation tool, and Picture Pile, a mobile application for rapid image assessment and change detection. Through various thematic data collection campaigns, these tools have helped to collect citizen-observed information to improve global maps of cropland and agricultural field size, to validate various land cover products and to create damage assessment maps after natural disasters. Furthermore, Picture Pile is designed as a generic and flexible tool that is customizable to many different domains and research avenues that require interpreted satellite images as a data resource. Such tools, in combination with the recent emergence of Citizen Observatories (e.g. LandSense, GROW, GroundTruth 2.0 and SCENT, funded by Horizon 2020), present clear opportunities to integrate citizen-driven observations with established authoritative data sources to further extend GEOSS and Copernicus capacities, and to support comprehensive environmental monitoring systems. In addition, these applications have considerable potential to lower the costs of in-situ data collection and of current calibration/validation approaches within the processing chain of environmental monitoring activities, both within and beyond Europe.


  • 12:00 - rasdaman: the Spatio-Temporal Datacube Reference Implementation
    Baumann, Peter (1,2); Misev, Dimitar (1,2); Merticariu, Vlad (1,2) - 1: Jacobs University, Germany; 2: rasdaman GmbH, Germany

    Datacubes are increasingly accepted as a promising paradigm that simplifies user access and allows server architectures to scale better. By aggregating all scenes pertaining to a particular satellite instrument over space and time, a single multi-dimensional object is created which can be sliced and diced, aggregated and combined through open standards interfaces. One particularly promising technology for implementing datacubes is Array Databases, a field pioneered by the open-source rasdaman ("raster data manager") array engine. Based on a formal data model, the rasdaman query language enables declarative queries (which typically are generated through visual clients, transparently to the user). This language forms the blueprint for the OGC WCPS geo datacube query language standard as well as the ISO SQL MDA ("Multi-Dimensional Arrays") candidate standard. Today, rasdaman is the official reference implementation for both OGC and INSPIRE for the WCS/WCPS datacube standards. Operational services exist on dozens of terabytes of Earth and planetary remote sensing data. In Fall 2017, a rasdaman instance will go on board the ESA OPS-SAT satellite, effectively transforming it into an on-board, queryable datacube processor. We present the open-source rasdaman datacube tool, its concepts and architecture, as well as sample services using it, including newly formed SMEs grounding their service model on the use of rasdaman.
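
    As an illustration of what such a declarative datacube query looks like, the sketch below posts a WCPS expression to a rasdaman OWS endpoint over HTTP; the server URL, coverage name and axis labels are assumptions made for the example, not an actual deployed service.

    ```python
    # Hypothetical sketch: a WCPS query posted to a rasdaman OWS endpoint.
    # Endpoint URL, coverage name and axis names are illustrative.
    import requests

    query = (
        'for $c in (S2_NDVI_CUBE) '
        'return encode('
        '  avg($c[Lat(41.80:41.95), Long(12.40:12.60), '
        '         ansi("2017-06-01":"2017-06-30")]), "json")'
    )
    resp = requests.post(
        "https://datacube.example.org/rasdaman/ows",
        data={"service": "WCS", "version": "2.0.1",
              "request": "ProcessCoverages", "query": query},
    )
    print(resp.text)  # spatio-temporal average of the requested cube slice
    ```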


  • 12:15 - Geo-Scientific Platform as a Service - Tools and Solutions for Efficient Access to and Analysis of Oceanographic Data
    Hansen, Morten Wergeland; Korosov, Anton; Vines, Aleksander - Nansen Environmental and Remote Sensing Center, Norway

    Existing international and Norwegian infrastructure projects, e.g., ESA GlobCurrent, NRC NorDataNet, NRC NM C and NRC NORMAP, provide open data access through the OPeNDAP protocol following the CF (Climate and Forecast) metadata conventions, designed to promote the processing and sharing of files created with the NetCDF application programming interface (API). This approach is now also being implemented in the Norwegian Sentinel Data Hub (satellittdata.no) to provide satellite EO data to the user community. Simultaneously with providing simplified and unified data access, these projects also seek to establish and use common standards for use and discovery metadata, allowing the development of standardized tools for data search and (subset) web streaming to perform actual scientific analysis. A combination of software tools and actual data access, which we call the Geo-Scientific Platform as a Service (Geo-SPaaS), takes advantage of these opportunities to harmonize and streamline the search, retrieval and analysis of integrated satellite and auxiliary observations of the oceans in a seamless system. The core part of the Geo-SPaaS is a metadata catalog storing granular metadata that describes the structure, location and content of available satellite, model, and in situ datasets. Data analysis tools include software for visualization, interactive in-depth analysis, and server-based processing chains. The Geo-SPaaS components are integrated in virtual machines (e.g., VirtualBox, VMware), whose provisioning and deployment are automated using existing state-of-the-art open-source tools (e.g., Vagrant, Ansible, Git, Python). The open-source code for the scientific tools and virtual machine configurations is available on GitHub (https://github.com/nansencenter/), and is coupled to an online continuous integration system (Travis CI). The Geo-SPaaS enables researchers to develop and test scientific algorithms more quickly; these can then be operationalized on server systems (Geo-SPaaS nodes).
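
    The OPeNDAP/CF approach means a subset can be streamed without downloading whole files; the sketch below shows this pattern with the netCDF4 library (the endpoint URL, variable name and index ranges are illustrative).

    ```python
    # Minimal sketch of subset streaming from an OPeNDAP endpoint with the
    # netCDF4 library; only the requested slice travels over the wire.
    # URL, variable name and index ranges are illustrative.
    from netCDF4 import Dataset

    ds = Dataset("https://opendap.example.no/thredds/dodsC/sst_analysis.nc")
    sst = ds.variables["analysed_sst"][0, 100:200, 100:200]  # lazy subset
    print(sst.mean())
    ```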


  • 12:30 - Foreshore Assessment with Space Technology (FAST): Services To Support Nature-Based Solutions For Coastal Flood And Erosion Risk Reduction
    Morris, Edward Peter (1); Peralta, Gloria (1); Dijkstra, Jasper (2); Evans, Ben (3); Oteman, Bas (4); van der Meulen, Myra (2); Scrieciu, Albert (5); Hendriksen, Gerrit (2); Gomez-Enri, Jesus (1); Benevante, Javier (1); Smith, Geoff (6); Bouma, Tjeerd (4); Stanica, Adrian (5); Möller, Iris (3); van der Wal, Daphne (4); van Wesenbeeck, Bregje (2); de Vries, Mindert (2) - 1: Universidad de Cádiz, Spain; 2: Deltares, Netherlands; 3: University of Cambridge, UK; 4: Royal Netherlands Institute for Sea Research (NIOZ), Netherlands; 5: National Institute for Research and Development of Marine Geology and Geoecology (GeoEcoMar), Romania; 6: Spectro-natura, UK

    The FAST project (EU FP7-SPACE 607131, www.fast-space-project.eu) has developed the MI-SAFE package, which provides Earth Observation (EO) and Open Source (OS) modeling services to support Nature-Based Solutions (NBS) for coastal flood and erosion risk reduction. Developed by an international consortium of experts and relying on the European Copernicus Programme, MI-SAFE aims to deliver smart, data-driven services that will facilitate the widespread use of ecosystem-engineering concepts in coastal protection, improving cost-efficiency and the well-being of coastal communities. The unique features of MI-SAFE include: a combination of global coastal coverage and high-resolution local analysis that can provide flood-hazard-related parameters in data-sparse regions; a transparent and verifiable scientific basis implemented using Open Source (OS) tools supported by the OpenEarth software community; automated coupling of EO, sea-level, wave and vegetation modeling; delivery of data products as Open Geospatial Consortium (OGC) data streams; and a versatile, modular structure with the capacity to include advanced functionality driven by specific user requirements. The package is made up of three key components: (1) the MI-SAFE viewer (http://fast.openearth.eu/), a user-friendly online viewer that provides easy access to data products and services; (2) OGC data streams, a web catalog service that provides Open Access (OA) geospatial data layers; and (3) OS modeling, using a specifically calibrated version of XBeach that includes the effects of coastal vegetation on wave attenuation. Services are provided at three different levels, which represent differences in spatial scale, quality of input data and predictions, and complexity of the modeling approach: Educational, an overview of potential wave attenuation by vegetation at the global scale; Expert, high-resolution data with a specifically calibrated model at case-study sites; and Advanced, on-demand, tailor-made solutions potentially linked to other domains (such as inundation and damage modeling) to be used in rapid assessment and design settings. Educational and Expert services are OA, whereas Advanced services (including consultation, training, development of new functionalities, and in-situ calibration/validation) are offered by the consortium on request, with the idea that these will offset the package's running costs, leading to self-sufficiency. Here we present an overview of the MI-SAFE package, demonstrate some of its key functionalities, discuss scientific and technical aspects of sustainably meeting end-user requirements, and invite feedback from the audience about the future development of the package.
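
    Because MI-SAFE products are delivered as standard OGC streams, they can be consumed from any OGC-aware client; the sketch below fetches a map image with OWSLib, where the endpoint path, layer name and bounding box are our assumptions for illustration, not the project's published identifiers.

    ```python
    # Hypothetical sketch: requesting an MI-SAFE data layer via OGC WMS
    # using OWSLib. Endpoint path, layer name and bbox are assumptions.
    from owslib.wms import WebMapService

    wms = WebMapService("http://fast.openearth.eu/geoserver/wms", version="1.1.1")
    img = wms.getmap(layers=["fast:wave_attenuation"],
                     srs="EPSG:4326", bbox=(-5.0, 48.0, 10.0, 54.0),
                     size=(600, 400), format="image/png")
    open("wave_attenuation.png", "wb").write(img.read())
    ```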


  • 12:45 - SIMOcean: Maritime Open Data and Services Platform for Portuguese Institutions
    Almeida, Nuno Miguel (1); Grosso, Nuno (1); Deus, Ricardo (2); Oliveira, Paulo (2); Almeida, Sara (3); Alves, Margarida (3) - 1: DEIMOS, Portugal; 2: IPMA, Portugal; 3: Instituto Hidrográfico, Portugal

    An integrated management of the Portuguese marine environment is crucial to monitor a wide range of interdependent domains. A system like this assimilates data and information from different thematic areas, ranging from ocean and atmosphere state variables to higher-level datasets describing human activities and related environmental, social and economic impacts. These datasets are collected by a large number of public and private institutions with very diverse purposes, leading to dataset duplication, the absence of common data and metadata standards across organizations, and the propagation of closed information systems with different implementation solutions. This lack of coordination and visibility hinders marine management, monitoring and surveillance capabilities, not only by making it more difficult to access, or even be aware of, the existence of certain datasets, but also by limiting the ability to create added-value products or services through the integration of datasets from different sources. The adoption of an Open Data philosophy will bring significant benefits by reducing the cost of information exchange and data integration, promoting the extensive use of these data. SIMOcean (System for Integrated Monitoring of the Ocean), co-funded by the EEA Grants Programme, is part of the initiative of the Portuguese Government to develop a set of coordinated systems providing access to national marine data. These systems aim to improve Portuguese marine management capabilities, aggregating different data, including specific human activities datasets (vessel traffic, fishing records) and environment variables. Those datasets, currently scattered among different departments of the Portuguese Meteorological (IPMA) and the Navy's Hydrographic (IH) Institutes, are brought together in the SIMOcean system, which exploits them through the following flagship services: 1) Characterisation of Fishing Areas; 2) Wave Alerts for Sea Ports; and 3) Diagnostic of Meteo-Oceanographic Fields. The specifications of these services were driven by end users such as Civil Protection Authorities, Port Authorities and Fishing Associations, where these new products are expected to create a significant positive impact on their operations. SIMOcean is based on open-source, interoperable GIS solutions, compliant with OGC standards and the INSPIRE directive. The catalogue solution (based on CKAN) uses an INSPIRE-compliant metadata profile for the marine environment developed by the SNIMAR project, the guidelines provided by directive 2013/37/EU and the Goldbook provided by the European Data Portal. The system is deployed on a scalable infrastructure, providing authorised entities a single access point for data cataloguing, visualisation, processing and the deployment of value-added services. The challenges faced during the different phases of the project will be presented from the perspectives of data providers, system integrators and end users, showing what needed to be put in place to deliver an attractive product for the community.
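
    Since the catalogue is CKAN-based, its datasets can be discovered programmatically through CKAN's standard action API; in the sketch below the base URL is a placeholder, not the system's published address.

    ```python
    # Hypothetical sketch of dataset discovery against a CKAN catalogue
    # such as SIMOcean's, via CKAN's standard action API. Base URL assumed.
    import requests

    resp = requests.get(
        "https://catalogue.simocean.example/api/3/action/package_search",
        params={"q": "wave", "rows": 5},
    )
    for dataset in resp.json()["result"]["results"]:
        print(dataset["title"])
    ```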


  • 13:00 - Elastic Search to Geo Search: Delivering web-based search tools for large volume, heterogeneous airborne, in-situ and satellite-based observations
    Stephens, Ag (1); Smith, Richard (1); Garland, Wendy (1); Parton, Graham (1); Kershaw, Philip (2) - 1: NCAS / Centre for Environmental Data Analysis, RAL Space, STFC Rutherford Appleton Laboratory; 2: NCEO / Centre for Environmental Data Analysis, RAL Space, STFC Rutherford Appleton Laboratory

    We describe the development of web-based, graphical, map-based tools for interactive search and discovery of large-volume, heterogeneous airborne, in-situ and satellite-based observations, along with the underpinning technologies to index and catalogue the associated metadata. This work is in the context of a multi-petabyte archive of environmental science datasets managed by STFC's Centre for Environmental Data Analysis (CEDA). With data holdings of over 250 million files, it is essential that CEDA provides an effective set of tools to facilitate data discovery and access. Requirements for a web-based map interface originated with the EUropean Facility for Airborne Research (EUFAR2) project and the UK Facility for Airborne Atmospheric Measurements (FAAM), and the need to expose the associated airborne in-situ and remote sensing research flight data more readily to the user community. The resulting EUFAR Flight Finder (EFF) indexes temporal, spatial, file-system and miscellaneous properties in a catalogue implemented using ElasticSearch. This is challenging given the variety of data types and formats, each requiring its own specific adaptor to parse metadata content and incorporate it into the ElasticSearch index. The EFF front-end is a lightweight web app in which Google Maps provides a map interface and a thin JavaScript layer manages the interactions between the spatio-temporal search and ElasticSearch's RESTful API. Building on the success of the EFF, CEDA has recently developed its Satellite Data Finder (SDF) tool. Whilst the EFF holds tens of thousands of flight records, the SDF needed to scale to millions of satellite scenes. Starting with the Sentinel missions, the fundamentals of the EFF were extended to increase the capability of the tool and, in particular, to revise the interface to effectively manage and present the large volume of data resulting from search queries. Additional filters were added to allow searching by satellite, and usability testing was used to inform and improve the design, with incremental improvements delivering a more intuitive interface. Scaling to these data volumes has also presented challenges on the back-end: ElasticSearch was scaled up to an 8-node virtual cluster (8 x 16 GB RAM) to allow both test and production indexes to be accommodated, with 100 GB set aside for the storage of metadata indexes. In addition, indexing of product metadata has taken advantage of CEDA's JASMIN (jasmin.ac.uk) batch compute cluster, Lotus, to efficiently scan large volumes of products in parallel. The finished version of the SDF provides a simple web tool that allows users to quickly narrow down their search to an area and period of interest and access the matching data. In the wider context, many environmental datasets, such as those of the ESA Sentinel missions, are publicly available and have the potential to be exploited across the academic, public and commercial sectors. By providing interactive web-search tools, CEDA is increasing the usage and value of its data holdings. Looking forward, the challenge is now to integrate the rest of the CEDA archive under these interfaces.
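
    A spatio-temporal search against such an index boils down to a bool filter combining a time range with a geo-shape intersection; the sketch below uses the Elasticsearch Python client, with index and field names that are our assumptions rather than CEDA's actual schema.

    ```python
    # Illustrative spatio-temporal query against an ElasticSearch catalogue
    # of satellite scenes. Index name and field names are assumptions.
    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://localhost:9200"])
    query = {
        "query": {"bool": {"filter": [
            {"range": {"temporal.start_time":
                       {"gte": "2017-06-01", "lte": "2017-06-30"}}},
            {"geo_shape": {"spatial.geometry": {
                "shape": {"type": "envelope",
                          "coordinates": [[-1.0, 52.0], [1.0, 50.0]]},
                "relation": "intersects"}}},
        ]}}
    }
    hits = es.search(index="sentinel-scenes", body=query, size=10)
    print(hits["hits"]["total"])
    ```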


Open Data and Tools Continuation

Chairs: Lardeux, Cedric (ONF International), Ramoino, Fabrizio (SERCO c/o ESA-ESRIN)

14:15 - 16:15

  • 14:15 - A Tool to Explore Spectral, Spatial and Temporal Features of Smallholder Crops
    de By, Rolf A.; Zurita-Milla, Raul; Pasha Zadeh, Parya; Calisto, Luis - ITC, University of Twente, The Netherlands

    The increasing availability of very high spatial resolution satellite images has opened the door to systematic studies of smallholder systems from space. In this work, we present a database of crop characteristics plus a web-based open data exploration tool produced in the context of the STARS project (www.stars-project.org). STARS aims to address the information scarcity around the heterogeneous smallholder farming systems that are common in Africa and Asia. For this, we conducted a number of studies at sites in West and East Africa as well as South Asia, which brought together fieldwork- and image-derived characteristics of farm fields in a central database known as the Crop Spectrotemporal Signature Library (CSSL). The CSSL does not hold image data but statistical characterizations derived from analyzing both multispectral and panchromatic images through a fully automated workflow. This workflow allowed us to calculate and extract around one hundred field-specific characteristics. These include spectral characteristics (including common vegetation indices), their in-field variability, and a number of GLCM-based textural characteristics (with different lags and different angles). We continue to enrich that list with other image-based methods, and we apply, among others, machine-learning techniques to analyze these. At the same time, STARS project teams conducted fieldwork that provided in situ agronomic measurements, helping to characterize crop development and field management. The field and image data were collected semi-synchronously throughout the crop season at regular two-week intervals. Our hypothesis is that a collection of this nature can support studies in crop identification, farm field delineation, farm practice detection and other crop-related phenomena in smallholder contexts. In addition to the CSSL, we present an online tool that allows the exploration of spectral, spatial and temporal characteristics and their relationships as detected in classification studies. This comes with an invitation to the wider scientific community to use our collection of field- and image-derived characteristics. Our data exploration tool accommodates the comparison of time series, for instance between different vegetation indices and textural or in situ measurements. Both the data and the tool will become free and open products so that other scientists can use them in their smallholder farming projects. We believe that the CSSL and the accompanying web tool can contribute to defining agricultural baselines, and we hope that we can continue to enrich the CSSL database with more crops, more years, and a wider geographic coverage.
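
    For readers unfamiliar with GLCM texture features, the sketch below shows the kind of per-field computation involved, using scikit-image; the input patch is synthetic, and in older scikit-image releases the functions are spelled greycomatrix/greycoprops.

    ```python
    # Illustrative GLCM texture extraction with scikit-image, matching the
    # lag/angle parametrisation mentioned above. The input patch is synthetic.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    patch = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in field clip
    glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast")        # one value per (lag, angle)
    homogeneity = graycoprops(glcm, "homogeneity")
    ```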


  • 14:30 - Woody Biomass Assessment using High Resolution Data for Food Security
    Yasmin, Naila; D' Annunzio, Remi; Jonckheere, Inge - FAO of the United Nations, Italy

    About 2.8 billion people in developing countries depend on biomass fuels (e.g. fuel wood, charcoal and animal dung) to meet their energy needs for cooking food (IEA, 2010). Fuel wood plays a vital role in ensuring the food security of millions of people, and its consumption must be better understood in order to address resource shortages and forest decline (MacDonald, Adamowicz and Luckert, 2001). However, the supply and use of fuel wood are often embedded in complex systems that include external factors of a non-forestry nature, which influence the capacity to provide forestry-based solutions (FAO, 1983). Natural resources including fuel wood must therefore be carefully managed and monitored to meet current demands and ensure sustainability (Warner, 2000). Tree and shrub cover are often the primary sources of deadwood used as fuel wood; however, fuel wood can also be acquired through pruning. In the case of dense forest with high canopy cover, coarser-resolution data such as Landsat satellite images may provide reliable estimates, while in areas of open woodland, high-resolution images are the most reliable option. Detailed study of the degradation of tree and shrub cover and of area change requires a minimum spatial resolution of 1.5 m (FAO, 2016). The current study focuses on woody biomass assessment around two communities, Auno and Dusuman, in Borno state, Nigeria, where refugees are hosted by local communities and the primary source of fuel wood is the woody biomass in the surroundings. The landscape includes the Sambisa forest, where sparse vegetation is the norm. The forest consists of a mixture of open woodland and sections of very dense vegetation of short trees, about two meters high, and thorny bushes. Multiple factors hinder detailed field inventory measurements in the area, the most important being security issues. In this scenario, the use of high-resolution remote sensing images along with field data can provide detailed analysis of changes in resources and can lead to the development of plans for the sustainable use of existing resources to guarantee food security. References: FAO. 1983. Wood fuel surveys: Forestry for local community development programme. Rome (available at www.fao.org/docrep/Q1085e/Q1085e00.htm). FAO. 2016. Assessing wood fuel supply and demand in displacement settings: A technical handbook. Rome (available at http://www.fao.org/3/a-i5762e.pdf). IEA. 2010. World Energy Outlook. Paris, International Energy Agency (available at www.worldenergyoutlook.org/media/weo2010.pdf). MacDonald, D., Adamowicz, W. & Luckert, M. 2001. Fuelwood collection in North-Eastern Zimbabwe: valuation and caloric expenditures. J Forest Econ, 7: 29-52 (available at www.cabdirect.org/abstracts/20013019924.html;jsessionid=C10ADF3785ECEFAFC4AF8CAEF619E5D2?freeview=true). Warner, K. 2000. Forestry and sustainable livelihoods. Unasylva, 51: 3-12 (available at www.fao.org/docrep/x7273e/x7273e02.htm#P0_0).


  • 14:45 - Large scale exploitation of Copernicus Sentinel Data on Earth Engine
    Aparício, Sara - ESA-ESRIN, Italy

    Copernicus, the most ambitious Earth observation programme to date, can now be exploited on Google Earth Engine, an open cloud computing platform which is boosting (even more) the usage of Sentinel data. This makes for a very interesting tool for tackling global and regional issues with short and intuitive scripts, since all Sentinel-2 data and Sentinel-1 GRD scenes are among a growing set of free datasets on Earth Engine.
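
    A taste of this brevity, using the Earth Engine Python API, is sketched below; the collection ID is the public Sentinel-2 catalog identifier, while the point, dates and cloud threshold are arbitrary examples.

    ```python
    # Minimal sketch with the Earth Engine Python API: count cloud-screened
    # Sentinel-2 scenes over a point of interest. Point, dates and cloud
    # threshold are arbitrary examples.
    import ee

    ee.Initialize()
    s2 = (ee.ImageCollection("COPERNICUS/S2")
            .filterDate("2017-06-01", "2017-09-01")
            .filterBounds(ee.Geometry.Point(12.67, 41.81))
            .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20)))
    print("Scenes found:", s2.size().getInfo())
    ```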


  • 15:00 - Monitoring Urban Heat Island through Google Earth Engine resources: potentialities and difficulties in the case of Phoenix
    Ravanelli, Roberta (1); Nascetti, Andrea (1); Cirigliano, Valeria (1,2); Monti, Paolo (2) - 1: Geodesy and Geomatics Division - DICEA - University of Rome La Sapienza, Rome, Italy; 2: DICEA - University of Rome La Sapienza, Rome, Italy

    The aim of this work is to leverage the global-scale analysis capabilities of Google Earth Engine (GEE) [1] to study the temporal variations of the Urban Heat Island (UHI) effect at large scale. GEE is the computing platform recently released by Google “for petabyte-scale scientific analysis and visualization of geospatial datasets”. Using a dedicated HPC (High Performance Computing) infrastructure, it enables researchers to easily and quickly access more than thirty years of free and public data archives, including historical imagery and scientific datasets, for global and large-scale remote sensing applications. In this way, many of the limitations related to data downloading, storage and processing that usually occur when such a large amount of information (Big Data) is analyzed are effortlessly overcome. Specifically, the work focused on the Phoenix Metropolitan Area (PMA) (Arizona, USA) which, from 1983 to 2010, underwent a significant expansion, changing from a mostly agricultural region to a metropolis predominantly characterized by residential suburbs [2]. An UHI is an urban or metropolitan area significantly warmer than its surrounding rural region (the temperature difference is usually larger at night than during the day, and is most evident when winds are weak), and it is widely acknowledged that this phenomenon is due to the alterations of land use and land cover caused by human activities. In particular, the Climate Engine application [3], powered by GEE, was used to compute the annual mean of the Land Surface Temperature (LST) from Landsat Top of Atmosphere Reflectance data for every year in the temporal range between 1992 and 2011, on a Region Of Interest (ROI) corresponding to the PMA. The USGS National Land Cover Database (NLCD) was retrieved directly from GEE for the same temporal range and ROI. In a first stage, a pixel-wise analysis was performed through dedicated scripts developed in Python; for every pixel of the ROI, the parameters of a simple linear model describing the LST trend as a function of time were robustly estimated. Overall, a positive trend in LST was retrieved, but with rates that vary with location. Therefore, a spatial analysis was developed to cluster the pixels with similar rates, in order to highlight areas with homogeneous behavior and to investigate their relationship with the most significant urban expansion areas. The obtained results make it possible to predict future trends, thus giving valuable indications for the urban planning of the city. [1] Google Earth Engine Team. Google Earth Engine: A Planetary-scale Geospatial Analysis Platform, (2015), https://earthengine.google.com. [2] Lee, T-W., J. Y. Lee, and Zhi-Hua Wang. "Scaling of the urban heat island intensity using time-dependent energy balance." Urban Climate, Volume 2 (2012), 16-24. [3] Huntington, J. L., Hegewisch, K. C., Daudert, B., Morton, C. G., Abatzoglou, J. T., McEvoy, D. J., and Erickson, T. "Climate Engine: Cloud Computing and Visualization of Climate and Remote Sensing Data for Advanced Natural Resource Monitoring and Process Understanding." Bulletin of the American Meteorological Society, (2017).
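
    The pixel-wise linear model can be condensed into a single least-squares fit over the unrolled image stack; the sketch below mirrors this idea with NumPy on a synthetic LST stack covering the study's temporal range (array contents are random stand-ins, not the study data).

    ```python
    # Illustrative per-pixel linear trend fit over an annual LST stack.
    # The stack here is synthetic; shapes mirror a (years, rows, cols) cube.
    import numpy as np

    years = np.arange(1992, 2012)                # temporal range of the study
    lst = np.random.rand(years.size, 100, 100)   # stand-in for annual mean LST

    A = np.vstack([years, np.ones_like(years)]).T        # design matrix [t, 1]
    coeffs, *_ = np.linalg.lstsq(A, lst.reshape(years.size, -1), rcond=None)
    rates = coeffs[0].reshape(100, 100)          # per-pixel warming rate
    ```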


  • 15:15 - CitySmart: Open Framework for Urban Infrastructure Management
    Hogan, Patrick (1); Del Castillo, Miguel (1); Melick, Brandt (2) - 1: NASA, United States of America; 2: City of Springfield, Oregon, USA

    CitySmart is a suite of open source tools to benefit city operations, such as the management of urban infrastructure (utilities, traffic, services, etc.), for more efficient operations and with ultimate consideration for increasing sustainability and the quality of urban life. The framework is architected to continually add and advance functionalities serving urban management. Almost every city needs the same data management tools as every other city. If cities were able to share their solutions with each other, this would multiply their investment by the number of cities participating, massively increasing Earth’s collective productivity for planet livability. The CitySmart application provides the basis for that enterprise.


  • 15:30 - K2space: Providing New Market Opportunities To Added Value Companies In The New Space Economy Era
    Licciardi, Giorgio; Scatena, Lorenzo; Lucibello, Flavio - Research Consortium Hypatia, Italy

    In the era of open data policies, a vast amount of data is provided freely and openly to its users. In this framework the European Union started the Copernicus Programme, aiming at the development of European information services based on satellite Earth Observation and in situ (non-space) data that are freely and openly accessible. These data are intended to help service providers, public authorities, international organisations and SMEs to improve the quality of life for the citizens of Europe. However, several issues still need to be addressed. On the one hand, there are several entities that could take great advantage of the use of open EO data in their specific fields of interest but do not have the right instruments, experience or knowledge to extract relevant information from EO data. On the other hand, the valuable content created from the large and growing volume of EO-derived data by research organizations, governments and companies does not always find market exploitation. Moreover, cooperation between industry and research centers is not always easy, leading to a sort of short-circuit in the full development of the space economy. In order to bring order to such a disordered market, we introduce K2SPACE, with the intent to define protocols and standards that facilitate connection, coordination and collaboration between entities. K2SPACE is a platform, defined as a business model that allows multiple sides to interact by providing an infrastructure to connect them. In particular, we subdivided the interacting entities into peer producers (space SMEs, research institutions and universities) and peer consumers (non-space SMEs). The aim of the K2SPACE platform is to revolutionize the space economy in Europe. Operating as a hub, K2SPACE “organizes” the interactions, skills and resources outside traditional organizational boundaries and shapes the markets. This is made possible by providing reduced barriers to entry, a shared storefront and an overall enabling set of services to all sides of a market. This approach will give advantages to both peer producers and consumers. In particular, through the use of K2SPACE, peer producers will gain recognition for the know-how they have developed, will have access to external funding and consequently will increase revenues, improve company visibility and be open to new market opportunities. Similarly, the peer consumers that interact with other entities via the K2SPACE platform will have access to high-level technologies allowing them to improve the quality of the services they offer, resulting in reduced expenses and increased revenues.


  • 15:45 - The FABSPACE 2.0 Project For Geodata-Driven Innovation
    Del Frate, Fabio (1); Carbone, Francesco (1); Mothe, Josiane (2); Baker, Aurélie (3); Paraskevas, Iosif S. (4); Soundris, Dimitrios (4); Fumel, Aurélie (5); Barbier, Christian (5); Islam, Md Bayzidul (6); Becker, Matthias (6); Olszewski, Robert (7); Bialczak, Anna (8) - 1: University of "Tor Vergata", Italy; 2: University of Toulouse, France; 3: Aerospace Valley, France; 4: Corallia, Greece; 5: University of Liege, Belgium; 6: Technical University of Darmstadt, Germany; 7: Warsaw University of Technology, Poland; 8: Opegieka, Poland

    Now that the Galileo and Copernicus satellite programmes are entering their operational phase, innovation possibilities in the field of satellite-data-driven applications are getting wider. Thanks to these two massive investments in technology, European and worldwide companies are starting to benefit from increasing, regular and cheaper (not to say free of charge) data flows, which could lead to the development of new and innovative applications and services in an incredibly vast range of markets, including non-space markets [1]. The exploitation of satellite data, as well as open data (from public authorities in particular), has the potential to generate many innovative solutions. In this context the FabSpace 2.0 project aims at putting universities on the front line for the take-off of Earth Observation based applications in Europe and worldwide. This is pursued by hosting and animating open places dedicated to space and geodata-driven innovation, where young developers from civil society, experienced developers from industry or academic and research institutes, public administrations and civil organizations can meet, work together and co-create new tools and business models. They can create an ecosystem fitting (and developed according to) the particularities of geodata-driven innovation, in particular for the emergence of space data downstream services. In this innovative environment, innovation is driven by the needs of users through the involvement of civil society in the innovation process and in the definition of new challenges. Moreover, the actors making the innovation will be ordinary citizens (students and researchers in particular), who will thus be at the same time developers and end-users of the applications they develop. That is why the FabSpace 2.0 project is expected to improve the capacity of universities to generate more innovations and positive socio-economic impacts. All partner universities are centers of excellence in research in the field of geomatics and space-based information. They not only offer highly qualified human capital likely to generate innovation, but also provide open access to data generated within previous research works. Thus the FabSpace 2.0 project can be a particularly relevant opportunity for research teams to take a step forward towards Science 2.0. [1] Harris H. and I. Baumann, “Open data policies and satellite Earth observation,” Space Policy, Vol. 32, May 2015, pp. 44-53


  • 16:00 - GeoAvalanche: geoprocessing of Earth Observation and crowdsourced data for snow avalanche information
    Bartoli, Francesco - GEOBEYOND SRL, Italy

    GeoAvalanche is an ecosystem of integrated applications for avalanche risk management which fosters the use of geographical information, Earth Observation and crowdsourced data. It is a unique system capable of providing near-real-time information on avalanche risk with the highest level of accuracy and precision. Its algorithms can consume elevation models, crowdsourced data and Earth Observation data about snowpack trends (Snow Cover, Snow Water Equivalent, Snow Surface Temperature) at a resolution of 30 m. Public authorities and Avalanche Warning Services can build spatial data infrastructures and geoportals for sharing snow avalanche information and maps of situational awareness, like the public demo (http://geoavalanche.org). At the core there is GeoAvalanche server, a custom GeoServer (https://github.com/geoavalanche/geoavalanche-server) which has been extended with several geoprocessing operations published through a set of standard OGC WPS web services, and with the app-schema extension for sharing avalanche data in compliance with the GML profile of CAAML. All GeoAvalanche WPS processes (Slope, Aspect, Curvature, SnowPack, Crowd) have been designed atomically to achieve specific functionalities and can be orchestrated within a workflow to perform more complex processes, like the Avalanche Terrain Exposure or the Avalanche Risk Index of each requested feature collection. GeoAvalanche server can also be used as the geospatial back-end of GeoNode for building snow avalanche geoportals, through the development of custom GeoNode projects which exploit those WPS processes in avalanche risk insight tools like the web mapping client (http://geoavalanche.org/mygeoss/public/) awarded in the MyGEOSS competition. All the software is released under the GeoAvalanche organization (https://github.com/geoavalanche) on GitHub with an open source license.
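
    Because the processes are exposed as standard OGC WPS services, they can be discovered and described from any WPS client; the sketch below uses OWSLib, with an assumed endpoint path and the 'Slope' process named above (its fully qualified identifier may differ in the deployed service).

    ```python
    # Hypothetical sketch: discovering GeoAvalanche WPS processes with OWSLib.
    # The endpoint path is an assumption; 'Slope' is one of the processes
    # named in the abstract, though its qualified identifier may differ.
    from owslib.wps import WebProcessingService

    wps = WebProcessingService("http://geoavalanche.org/geoserver/wps")
    for process in wps.processes:
        print(process.identifier, "-", process.title)

    slope = wps.describeprocess("geoavalanche:Slope")
    print([inp.identifier for inp in slope.dataInputs])
    ```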


Round Table Open Data and Tools

16:45 - 17:05

  • 16:45 - Round Table



Lightning Talks

17:00 - 18:15

  • 17:00 - EO-based Mapping of Environmental Health Risks: the H2020 EOXPOSURE Project
    Dell'Acqua, Fabio (1); De Vecchi, Daniele (1); Frery Orgambide, Alejandro C. (2); Gamba, Paolo (1); Lage, André (2); Marinoni, Andrea (1); Plaza, Antonio (3); Plaza, Javier (3); Scavuzzo, Marcelo (4,5); Shimoni, Michal (6); Lanfri, Mario Alberto (5) - 1: University of Pavia, Pavia, Italy; 2: Universidade Federal de Alagoas, Maceió, Brazil; 3: Universidad de Extremadura, Cáceres, Spain; 4: Universidad Nacional de Córdoba, Córdoba, Argentina; 5: Comisión Nacional de Actividades Espaciales, Córdoba, Argentina; 6: Ecole Royale Militaire - Koninklijke Militaire School, Brussels, Belgium

    The EU H2020 EOxposure project [1] started in March 2017. Its goal is to build tools to quantify the exposure of population and economic assets to multiple risks, using novel information layers from current and future Earth Observation (EO) missions, including open data from Copernicus [2], as well as the growing sensor web on the ground. The project exploits the novel concept of the human exposome [3], i.e. the set of exposures to which an individual is subjected throughout his/her existence. It includes the entire history of interactions with the environment, including air and water quality, food and exercise, as well as living habits and diseases that may spread. The cutting-edge fusion of this concept with EO and sensor data aims at measuring human exposure to threats that are external to each individual, and at quantifying the interactions between human beings and the environment. By building open geospatial information tools upon data coming from multiple sources, at different spatial and temporal scales, the EOxposure project aims at providing free public information (“open”) services, enabling citizens to understand the threats to which they are exposed, and decision makers to take more informed and effective actions against them. Specifically, EOxposure will focus on threats connected to housing conditions, disease spread, and security and health issues in urban and peri-urban areas, where population is concentrated. The new tools will build upon the consortium’s expertise in nutrition- and vector-borne disease models, urban heat monitoring and material characterisation, satellite data processing, and geospatial data fusion, creating interdisciplinary working groups dedicated to the above-mentioned applications. To do so, EOxposure enlists institutions from Europe and South America, merging expertise on exposure to risk in both developed and developing countries. The full paper will report more details on the project content and projected goals, and will present its future development plans. References [1] Tools for Mapping Human Exposure to Risky Environmental conditions by means of Ground and Earth Observation Data (EOXPOSURE). An EU H2020 project. Web site at http://www.h2020-eoxposure.eu/ [2] The EU Copernicus initiative. Online at http://copernicus.eu/ [3] C.P. Wild, “The exposome, from concept to utility,” Int. Journal of Epidemiology, vol. 41, pp. 24-32, 2012.


  • 17:03 - WebAssembly, How This New Web Standard For In Browser Processing Can Help EO Data Valorization
    Decoster, Nicolas (1); Nosavan, Julien (2) - 1: Magellium, France; 2: CNES, France

    JavaScript, as a dynamic language, can be slow. Some processing tasks are simply too heavy to be run in the browser. And even when the processing time would be acceptable in JavaScript, most algorithms have no JavaScript implementation, so being able to use them in the browser means rewriting the code. For these reasons, until now this kind of processing has had to be executed server-side, which can have an impact on the user experience. That is, until WebAssembly. WebAssembly (or wasm) is a new Web standard that is usable now and is supported by all major browser vendors. Wasm is a low-level binary format that is not meant to be written by hand but is a compilation target. One can see it as a kind of bytecode or assembly language. It lives alongside JavaScript and complements it in terms of processing power (wasm can achieve near-native performance). Moreover, wasm as a compilation target allows us to use existing processing code written in other languages (mainly C/C++ for now, with more to come) in the browser. So WebAssembly is a new technology that opens new doors for Web architectures. There are lots of scenarios where it can be used. One has a limited processing server, but one’s users are happy to host some of the processing themselves? One needs to do some complex real-time processing for interactive data visualization? Some users do not want to upload their confidential data to a processing server? One needs a bit more power for a mobile version of a Web site or a Web app? One has an existing image processing algorithm, but it is written in C, and wants to use it client-side? Etc. WebAssembly might help in all these cases. And of course, Earth Observation and its data, with a great variety of usages and kinds of processing, can greatly benefit from WebAssembly. In this talk, we will show what WebAssembly is, how it integrates into a Web architecture, how one can compile existing C/C++ code to WebAssembly for use in the browser, and how to use WebAssembly for EO data valorization, using an illustrative proof of concept that integrates raw data management (i.e. data not served by a classic image server, for example), its visualization and some existing or new processing, all in the browser.


  • 17:06 - Standardized Access and Processing of Earth Observation Data within a Regional Data Middleware
    Eberle, Jonas; Truckenbrodt, John; Schmullius, Christiane - Friedrich Schiller University Jena, Germany

    Increasing archives of global satellite data present a new challenge: handling multi-source satellite data in a user-friendly way. Any user is confronted with different data formats and data access services. In addition, the handling of time-series data is complex, as automated processing and execution of data processing steps are needed to supply the user with the desired product for a specific area of interest. In order to simplify access to the data archives of various satellite missions and to facilitate subsequent processing, a regional data and processing middleware has been developed. The aim of this system is to provide standardized, web-based interfaces to multi-source time-series data for individual regions on Earth. For further use and analysis, uniform data formats and data access services are provided. Interfaces to the data archives of the MODIS sensor (NASA) as well as the Landsat (USGS) and Sentinel (ESA) satellites have been integrated into the middleware. The OGC services from Sentinel Hub are also currently being tested for simpler data access. Various scientific algorithms, such as the calculation of trends and breakpoints in time-series data, can be carried out on the preprocessed data. Jupyter Notebooks are linked to the data, and further processing can be conducted directly on the server using Python and the statistical language R. Standardized web services, as specified by OGC, are provided for all tools of the middleware. Currently, the use of cloud services is being researched to bring algorithms to the data.
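
    As an impression of how a standardized processing request to such a middleware might look, the sketch below executes a hypothetical time-series WPS process with OWSLib; the endpoint, process identifier and input names are all assumptions, not the middleware's published interface.

    ```python
    # Hypothetical sketch: triggering a time-series analysis through an OGC
    # WPS interface like the one the middleware exposes. Endpoint, process
    # identifier and input names are assumed for illustration.
    from owslib.wps import WebProcessingService, monitorExecution

    wps = WebProcessingService("https://middleware.example.org/wps")
    execution = wps.execute(
        "timeseries:breakpoints",
        inputs=[("aoi", "POINT(11.58 50.93)"), ("sensor", "MODIS")],
    )
    monitorExecution(execution)  # poll until the asynchronous job completes
    ```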


  • 17:09 - The DASE "Standardized Open Dataset" by the IEEE GRSS: an open test bench for classification and target detection algorithms
    Dell'Acqua, Fabio (1,2); Iannelli, Gianni Cristian (1,2); Kerekes, John (3); Moser, Gabriele (4); Pierce, Leland (5); Goldoni, Emanuele (6) - 1: Ticinum Aerospace s.r.l. - Pavia, Italy; 2: University of Pavia, Italy; 3: Rochester Institute of Technology, Rochester NY, USA; 4: University of Genoa, Genoa, Italy; 5: University of Michigan, Ann Arbor MI, USA; 6: IT consultant, Mantova, Italy

    Scientific advances in classification and target detection on Earth Observation (EO) data are difficult to assess quantitatively against each other in the absence of a common dataset on which their results can be evaluated. In order to ensure homogeneity in the performance assessment of information extraction algorithms proposed in the literature, standardized remotely sensed datasets are particularly useful and welcome. As a contribution towards implementing fair comparison, the IEEE [1] Geoscience and Remote Sensing Society (GRSS) [2], and especially its Image Analysis and Data Fusion Technical Committee (IADF), has been organizing the Data Fusion Contest (DFC) [3] for some years now. With every new edition of the DFC, one more specific open dataset is made available to the scientific community at large; contestant scientists and researchers can download it and use it to test their freshly developed algorithms. A consistent test dataset for all participating groups makes it possible to assess results consistently and to rank them legitimately. After the contest deadline, the user who scored the highest is proclaimed the winner. The "technical backing" of this effort is the so-called Data and Algorithm Standard Evaluation (DASE) website [4]. DASE distributes to registered users a limited set of possible "standard" open datasets, together with ground truth information, and automatically assesses the processing results submitted by the users. In this paper, we report on the birth of this initiative and present some recently introduced features. References [1] The Institute of Electrical and Electronics Engineers (IEEE) official website. Online at http://www.ieee.org/ [2] The IEEE Geoscience and Remote Sensing Society official website. Online at https://www.grss-ieee.org/ [3] D. Tuia, G. Moser, B. Le Saux, B. Bechtel and L. See, "2017 IEEE GRSS Data Fusion Contest: Open Data for Global Multimodal Land Use Classification [Technical Committees]," in IEEE Geoscience and Remote Sensing Magazine, vol. 5, no. 1, pp. 70-73, March 2017. doi: 10.1109/MGRS.2016.2645380 [4] IEEE GRSS Data and Algorithm Standard Evaluation (DASE) website. Online at http://dase.ticinumaerospace.com/


  • 17:12 - Automatic Processing of Sentinel data for Forestry Management in Guatemala
    Marti, Paula; Brillatz, Chloe; Petit, David; Costantini, Fabiano - Deimos Space UK, United Kingdom

    The forest in Guatemala covered a total of 3,711,366 hectares in 2012, 34% of the country's area. Illegal exploitation of the forest environment is a real concern to the Guatemalan government, which has made efforts to tackle the problem by embracing digital technologies, improving its processes and pooling information between all stakeholder agencies. The FMAP (Forestry Management and Protection) project is part of the International Partnership Programme, and its aim is to support the Guatemalan agencies by providing remote sensing data and derived information. The FMAP project is led by Astrosat, and the project partners include Telespazio Vega, EO Inc and Deimos Space UK. The Guatemalan partner institutions for FMAP are the National Forestry Institute (INAB), the National Council of Protected Areas (CONAP) and the Ministry of Agriculture (MAGA), amongst others. INAB and CONAP currently run a set of programmes to promote the recovery, restoration, management and production of the Guatemalan forests. These programmes offer monetary incentives to people to reforest or exploit the forest, provided it is done following a management plan that has been submitted and approved. The incentive programme has been very successful, resulting in more than 30,000 areas being registered in the system. These areas are spread all over the country, and it is a challenge for CONAP and INAB to send field workers to check that they meet all the requirements of the incentive being claimed. The work presented here shows how the KORE platform developed by Deimos automatically and continuously processes Sentinel satellite imagery and delivers it to the end users, together with derived products such as vegetation indices, chlorophyll content, forest cover and changes in coverage for the areas they monitor. These products are used to highlight changes and to trigger warnings to INAB and CONAP when the conditions of the incentive are not met in an area or when the area is not used as reported. In addition, the KORE platform is used to deliver data for monitoring illegal logging, which usually happens in remote and hard-to-access areas.
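
    The abstract does not specify how KORE computes its vegetation indices; as a generic illustration of the kind of derived product mentioned, here is a minimal NDVI computation on Sentinel-2 red and near-infrared bands with rasterio and numpy (the file names are hypothetical, and a GDAL build with JPEG2000 support is assumed):

        # Generic NDVI from Sentinel-2 L2A bands (red = B04, NIR = B08).
        import numpy as np
        import rasterio

        with rasterio.open('T15PXT_20170801T162039_B04_10m.jp2') as red_src, \
             rasterio.open('T15PXT_20170801T162039_B08_10m.jp2') as nir_src:
            red = red_src.read(1).astype('float32')
            nir = nir_src.read(1).astype('float32')

        # Guard against division by zero on nodata pixels.
        ndvi = np.where(nir + red > 0, (nir - red) / (nir + red), np.nan)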


  • 17:15 - A Collection Of Data Analytics For Earth Observation Time Series Analysis
    Filipponi, Federico; Valentini, Emiliana; Taramelli, Andrea - ISPRA, Italy

    Extended time series of Earth Observation products increasingly provide consistent information to support downstream services in a wide variety of application domains and disciplines. The establishment of the Copernicus European Programme, served by specifically designed Sentinel satellites, has activated the generation of large harmonized spatio-temporal datasets, freely available to users under six thematic services, for the analysis of spatial features and temporal patterns as well as the monitoring of changes and anomalies. In recent decades, climate modelling has employed many mathematical and statistical methodologies to extract concise information from large spatio-temporal datasets. More recently, the analysis of extended Earth Observation time series has taken advantage of the data analytics developed for climate modelling to analyse spatial features and temporal patterns. Although there is a pressing need for data analytics to extract information from large EO product datasets, few open tools collect the available techniques for handling raster time series. In addition, many techniques for spatio-temporal analysis cannot deal with incomplete time series and require appropriate gap-filling methodologies to interpolate raster time series. We present the newly developed 'rtsa' (Raster Time Series Analysis) package for the R programming language, providing a collection of analytics to perform spatio-temporal analysis of raster time series. The package 'rtsa' acts as a front-end to functions already available in various R packages, specifically adapted to handle geographic datasets provided as raster time series. The functions within the package allow the direct input of raster time series to extract concise and comprehensive information, taking advantage of techniques such as Empirical Orthogonal Functions, Empirical Orthogonal Teleconnections, Self-Organizing Maps, Seasonal-Trend Decomposition using Loess, Breaks For Additive Season and Trend, and X-13-ARIMA seasonal adjustment. Since some techniques for spatio-temporal analysis cannot deal with incomplete raster time series, a selection of gap-filling methods, such as DINEOF, spatio-temporal gapfill, and linear and spline interpolation, is incorporated in the package in order to interpolate missing values when required by the user or the technique. Memory usage is optimized by the adoption of raster masks, and parallel processing on multiple CPUs is supported for some of the analysis techniques. The 'rtsa' package is available to R users as free and open-source software.
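
    rtsa itself is an R package; as a language-neutral illustration of one technique it wraps, here is a Seasonal-Trend decomposition using Loess (STL) applied to a single pixel's series in Python with statsmodels (synthetic data standing in for one pixel of a raster time series; this is an analog, not the rtsa code):

        # STL decomposition of one pixel's (synthetic) 16-day NDVI-like series.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.seasonal import STL

        t = np.arange(230)                       # 10 years at 23 obs/year
        series = pd.Series(0.4 + 0.2 * np.sin(2 * np.pi * t / 23)
                           + 0.0003 * t          # weak greening trend
                           + 0.02 * np.random.randn(t.size))

        result = STL(series, period=23).fit()
        trend, seasonal, residual = result.trend, result.seasonal, result.resid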


  • 17:18 - Application of Time-continuous Approximation of Vegetation Activity Indicators Obtained from Multispectral Satellite Imagery to Monitoring of Lower Volga Arid Wetlands
    Kozlov, Alexander (1); Kozlova, Maria (2); Gorelits, Olga (2); Zemlianov, Igor (2) - 1: Lomonosov Moscow State University, Russian Federation; 2: N.N. Zubov State Oceanographic Institute, Russian Federation

    The key features of vegetation activity in arid wetlands are its high temporal dynamics, considerable spatial heterogeneity and extreme sensitivity to environmental changes. For these reasons, applying conventional remote sensing analysis techniques to the ecological monitoring of arid wetlands is challenging and can even produce misleading results. The lower Volga arid wetlands play a very important role in the local economy, being almost the only source of fresh water, and the only region of naturally moistened soil usable for agriculture and livestock, amid the surrounding desert and dry steppe. Vegetation activity has proved to be the primary indicator of their state that can be observed directly using satellite multispectral imagery. Because of the large area of the region under study (over 9,000 square kilometres), field observations cannot cover its whole territory continuously, and thus cannot guarantee unbiased and representative results. The only source of information for a balanced and broad view is remotely sensed data combined with meteorological, hydrological, geobotanical and other in-situ data. In this study we focus on indicators derived from time series of the Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), a quantity recognized as one of the Essential Climate Variables (ECVs). It is produced for multiple satellite sensors and therefore aspires to be universal for past and future global studies. Due to the above-mentioned features of arid wetlands, FAPAR time series obtained from satellite imagery of a particular territory need special processing to produce more informative results. We report a technique for constructing a time-continuous approximation of FAPAR time series, along with several quantitative indicators of vegetation activity and its seasonal and annual dynamics. These indicators appear to reflect intrinsic properties of the various plant communities of the arid wetlands in the lower Volga region which cannot be derived from FAPAR time series directly. Our approach has shown great potential for use in the analysis of ecosystem state. Examples of its application and comparison to field data include plant cover classification, comprehensive ecosystem state estimation and quantitative mapping of key vegetation activity parameters for monitoring. Since its first publication, the methodology has undergone continuous development and validation, and new results on its application will be presented.
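
    The authors' exact approximation scheme is not detailed in the abstract; a common baseline for turning a gappy FAPAR series into a time-continuous curve is interpolation of missing values followed by a smoothing filter, sketched below with numpy and scipy (synthetic data, illustrative parameters):

        # Baseline time-continuous approximation of a gappy FAPAR series:
        # linear gap-filling followed by Savitzky-Golay smoothing.
        import numpy as np
        from scipy.signal import savgol_filter

        day = np.arange(0, 360, 10)                        # observation days
        fapar = 0.3 + 0.25 * np.sin(2 * np.pi * (day - 120) / 365)
        fapar[[4, 5, 12]] = np.nan                         # simulated cloud gaps

        valid = ~np.isnan(fapar)
        filled = np.interp(day, day[valid], fapar[valid])  # fill the gaps
        smooth = savgol_filter(filled, window_length=7, polyorder=3)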


  • 17:21 - Sea ice monitoring system based on satellite data to support safe navigation in Arctic Seas
    Korosov, Anton; Babiker, Mohamed; Park, Jeong-Won; W. Hansen, Morten - Nansen Environmental and Remote Sensing Center, Norway

    A sea ice monitoring system has been developed to support safe operations and navigation in Arctic seas. The system builds on the Geo-Scientific Platform as a Service (GeoSPaaS) developed at NERSC for the aggregation of satellite, modelling and in situ data, and exploits Synthetic Aperture Radar (SAR) data from satellites as its major component. Sentinel-1 data are delivered every day in near real time for the monitoring of sea ice and other environmental parameters. The system is based on algorithms for (1) sea ice classification into different ice types and open water; (2) ice drift at sufficient resolution to map mesoscale ice motion and deformation fields; and (3) iceberg detection by combined use of SAR and high-resolution optical images. Furthermore, the system integrates SAR data with AIS (Automatic Identification System) data from vessels operating in sea ice areas. With AIS positions combined with SAR images, it will be possible for ship captains to find sailing routes through open leads or thin ice and to avoid areas with ice ridges and other difficult ice conditions. Key user groups for the system include shipping companies, oil and gas companies, operational met-ocean services, coastal and ship traffic authorities, risk management, and environmental organizations working in the Arctic. The system is developed under the SONARC project, supported by the Research Council of Norway (project number 243608).


  • 17:24 - Scalable Extraction of Small Woody Features (SWF) at the Pan-European Scale using Open Source Solutions
    Faucqueur, Loïc (1); Merciol, François (2); Damodaran, Bharatt Bhashan (2); Rémy, Pierre-Yves (1); Desclée, Baudouin (1); Dazin, Fabrice (1); Lefèvre, Sébastien (2); Sannier, Christophe (1) - 1: Systèmes d'Information à Référence Spatiale (SIRS), France; 2: IRISA (UBS Site) Campus de Tohannic BP 573 56 017 Vannes Cédex, France

    The mapping of Small Woody Features (SWF) is to be included in a new high resolution layer covering the whole of Europe, from Iceland to Turkey, as part of the Copernicus pan-European component of the land monitoring service. Small Woody Features represent some of the most stable vegetated linear and small landscape features, providing numerous ecological and socio-cultural functions related to (i) soil and water conservation, (ii) climate protection and adaptation, (iii) biological diversity and (iv) cultural identity. By definition, SWF are small in width and spatial extent, so Very High Spatial Resolution (VHSR) image data, such as the VHR_Image_2015 dataset available from the European Space Agency (ESA) Copernicus Space Component Data Access (CSCDA), is required to detect their presence. Traditionally, the mapping of linear features is done using Computer Aided Photo-Interpretation (CAPI) techniques, but existing automated approaches are not suitable for SWF, mainly because of the very small size of these features. Furthermore, mapping SWF over such a large area (almost 6 million km²) requires a completely new automated approach to feature extraction, capable of dealing with (i) the large amount of data (greater than 100 TB), (ii) the large number of individual image scenes (close to 25,000) and (iii) the diversity of European landscapes, and (iv) of processing these data in a timely manner whilst ensuring a satisfactory degree of precision. A new scalable solution relying on open source components was developed to fulfil these requirements. In a first step, feature extraction is achieved through a novel, efficient implementation of differential attribute profiles (DAP), which are among the state-of-the-art image descriptors in remote sensing. In a second step, SWF are extracted in a semi-supervised context using an open source implementation of the popular Random Forests classifier, as sketched below. Besides demonstrating the strength of open source software in helping the large-scale production of land cover maps, we also introduce several new developments related to the DAP features. These features are extracted straightforwardly from a prior tree-based, multiscale representation of the image, and gather both spectral and spatial information in an efficient manner. Our proposal leads to a significant reduction of both computation time and memory footprint with respect to available implementations, making it possible to use such features at very large scale under strong operational constraints (e.g. processing a gigapixel image in a few minutes). The new algorithm is being implemented as part of the SWF HR Layer cloud-based production chain and is currently being integrated as an open source component of the Orfeo ToolBox (OTB) software suite.
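
    The sketch below illustrates only the classification stage with scikit-learn's Random Forests; the feature vectors are random placeholders standing in for the DAP descriptors, whose efficient computation is the paper's own contribution:

        # Random Forests on per-pixel descriptor vectors (placeholders for DAP).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X_train = rng.random((5000, 40))      # 5000 labelled pixels, 40 attributes
        y_train = rng.integers(0, 2, 5000)    # 1 = small woody feature, 0 = other

        clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
        clf.fit(X_train, y_train)

        X_scene = rng.random((1_000_000, 40)) # descriptors for one image tile
        swf_mask = clf.predict(X_scene)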


  • 17:27 - Tool For SARAL/AltiKa Waveform Clustering
    Dutta, Suvajit (1,3); Ghosh, Surajit (1,2); Thakur, Praveen Kumar (1); Bhattacharyya, Soumya (2) - 1: Indian Institute of Remote Sensing, ISRO, India; 2: National Institute of Technology Durgapur, India; 3: Vellore Institute of Technology, India

    This article describes a tool for classifying SARAL/AltiKa waveforms, written in the Python scripting language. Radar altimetry systems (like SARAL/AltiKa) determine the distance from the satellite to a target surface by measuring the satellite-to-surface round-trip time of a radar pulse; an altimeter waveform represents the energy reflected by the Earth's surface to the satellite antenna as a function of time. The tool helps to cluster altimetry waveform data into the desired groups. For the clustering, we use the evolutionary minimize indexing function (EMIF) with a k-means clustering mechanism (Ghosh et al., 2016). The tool is divided into two parts: one performs the data indexing and classification, the other provides the interface for easy interaction with the user. The interface is built with Tkinter, Python's de facto standard GUI package, while the calculations use the numpy-MKL package; NumPy is the fundamental package for scientific computing with Python. In the tool, the user selects the folder containing the waveforms and then a destination folder in which to store the waveforms after the EMIF algorithm has been applied. The idea is a simple interface that takes the altimetry waveform data from a folder as input and provides a single value (via the EMIF algorithm) for each waveform, which is then used for clustering. The tool is simple, easy to use, and has a very small disk footprint. Ghosh, S., Thakur, P., Dutta, S., Sharma, R., Nandy, S., Garg, V., Aggarwal, S., Bhattacharyya, S., 2016. A new method for SARAL/AltiKa Waveform Classification: contextual analysis over the Maithon reservoir, Jharkhand, India. In SPIE Asia-Pacific Remote Sensing. International Society for Optics and Photonics. 98780G-98780G. doi:10.1117/12.2223777.
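
    The EMIF index itself is the authors' algorithm (Ghosh et al., 2016) and is not reproduced here; the sketch below shows the surrounding pattern, reducing each waveform to one scalar feature and clustering with scikit-learn's k-means (the feature used is a simple placeholder, not EMIF):

        # Cluster waveforms via a single scalar index per waveform.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        waveforms = rng.random((500, 128))       # 500 waveforms, 128 range gates

        # Placeholder scalar index (the real tool computes EMIF here).
        index = waveforms.argmax(axis=1).astype('float64').reshape(-1, 1)

        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(index)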


  • 17:30 - Fusing Machine Learning and Earth Observation for Sustainable Development in Dar es Salaam
    Iliffe, Mark (1); Anderson, Edward (2) - 1: University of Nottingham, United Kingdom; 2: World Bank, Tanzania

    One of the fastest growing cities in Africa, Dar es Salaam adds more than 35,000 people to its population each month. With over 70% of the city's residents living in unplanned settlements, there is a substantial risk to critical infrastructure, public health, clean water provision and social stability. The Floor Space Index (FSI) is one tool that can assist governments in urban planning and development. The ratio of a building's total floor area to the size of the piece of land upon which it is built, FSI offers critical insights into urban density, zoning, service provision, taxation, and indicators of change over time. In Dar es Salaam, the World Bank is leveraging the revolution in high-cadence geospatial imagery to develop new analytical capacity for frequent and consistent FSI mapping across urban areas, creating a practical tool to assist government agencies in urban growth and development decisions. Starting in early 2017, we have used a range of Earth Observation streams, from commercial imagery products from Planet to freely available open data from Sentinel, to understand how FSI mapping can be achieved and then updated in a timely fashion. The approach uses novel machine learning techniques, such as Convolutional Neural Networks, to automate FSI extraction, alongside a manual survey of FSI that has verified and validated the results. As such, the project derives benefits both from the datasets it produces and from the process of developing new algorithmic and machine learning techniques. This paper presents the outputs and lessons learned from the project, discusses the impact of this work, and draws conclusions on the relative merits of the spatial datasets involved for engaging stakeholders in understanding these complex outputs.


  • 17:33 - SEPAL - System for Earth Observation Data Access, Processing and Analysis for Land Monitoring
    Lindquist, Erik; D'Annunzio, Remi; Finegold, Yelena; Fontanarossa, Roberto; Fox, Julian; Ortmann, Antonia; Togna, Cosimo; Vollrath, Andreas; Wiell, Daniel - UN-FAO, Italy

    The Forestry Department of the Food and Agriculture Organization of the UN introduces the System for Earth Observation Data Access, Processing and Analysis for Land Monitoring, or SEPAL. SEPAL is a web-based platform for fast access and processing of different remotely sensed data sources, designed to assist national forest monitoring and reporting for REDD+. It addresses the challenges countries face when developing such monitoring systems, owing to difficulties in accessing and processing remotely sensed data. Users are able to process large amounts of data quickly without requiring high network bandwidth or investment in high-performance computing infrastructure. It further features an intuitive graphical user interface that enables non-experts and novices in remote sensing to exploit earth observation data. As a result, SEPAL will help countries establish and maintain a Satellite Land Monitoring System (SLMS) capable of producing the information required to make consequential decisions about forest and land management. SEPAL includes different components addressing a wide range of aspects of forest and land monitoring. Automated pre-processing routines for freely available, high resolution optical (e.g. Landsat, Sentinel-2) and Synthetic Aperture Radar (SAR) data (e.g. Sentinel-1, ALOS K&C mosaics) allow the rapid creation of nationwide, analysis-ready datasets. These can be complemented by sample-based training and validation data generated via the Collect Earth Online module, using very high resolution imagery from various data providers. Modules for image segmentation, change detection, time-series analysis, data fusion and classification enable the user to derive relevant, high-quality value-added products. From the derived maps, area estimates of land use and land cover (change) are calculated using a dedicated module that provides the user with the relevant statistics for the subsequent reporting process. The presentation will give an overview of the functionality of the platform, including relevant examples of different use cases.


  • 17:36 - ESA BRAT (Broadview Radar Altimetry Toolbox) and GUT (GOCE User Toolbox) toolboxes
    Ambrózio, Américo (1); Restano, Marco (2); Benveniste, Jérôme (3) - 1: DEIMOS/ESRIN, Italy; 2: SERCO/ESRIN, Italy; 3: ESA-ESRIN

    The scope of this work is to showcase the ESA BRAT (Broadview Radar Altimetry Toolbox) and GUT (GOCE User Toolbox) toolboxes. BRAT is a collection of tools designed to facilitate the processing of radar altimetry data from previous and current altimetry missions, including the upcoming Sentinel-3A L1 and L2 products. A tutorial with plenty of use cases is included. BRAT's latest release (4.1.0) came out in April 2017. Based on community feedback, the frontend has been further improved and simplified, whereas the capability to use BRAT in conjunction with MATLAB/IDL or C/C++/Python/Fortran, allowing users to obtain the desired data while bypassing the data-formatting hassle, remains unchanged. Several kinds of computations combining data fields can be done within BRAT and saved for future use, either with embedded formulas, including those from oceanographic altimetry, or with ad-hoc Python modules created by users to meet their needs. BRAT can also be used to visualise data quickly, or to translate data into other formats, e.g. from NetCDF to raster images. GUT is a compilation of tools for the use and analysis of GOCE gravity field models. It facilitates using, viewing and post-processing GOCE L2 data and allows gravity field data, in conjunction and consistently with any other auxiliary data set, to be pre-processed by beginners in gravity field processing, for oceanographic, hydrologic and solid earth applications at both regional and global scales. Hence, GUT facilitates the extensive use of data acquired during the GRACE and GOCE missions. In the current 3.1 version, GUT has been outfitted with a graphical user interface allowing users to visually program data processing workflows. Further enhancements facilitating the use of gradients, anisotropic diffusive filtering, and the computation of Bouguer and isostatic gravity anomalies have been introduced. Packaged with GUT is also GUT's VCM (Variance-Covariance Matrix) tool for analysing GOCE's variance-covariance matrices. Both toolboxes are still actively supported and developed by their respective consortia, to fulfil the scientific community's needs and increase their didactic potential. The BRAT and GUT toolboxes can be freely downloaded, along with ancillary material, at https://earth.esa.int/brat and https://earth.esa.int/gut.
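
    BRAT's outputs are standard CF-convention NetCDF files, so they can be picked up directly from Python; a minimal sketch with the netCDF4 package is shown below (the file name and the variable names are hypothetical, depending on the fields exported from BRAT):

        # Read a BRAT-exported CF NetCDF product from Python.
        from netCDF4 import Dataset

        with Dataset('brat_export.nc') as nc:      # hypothetical file name
            print(nc.variables.keys())             # inspect the exported fields
            lat = nc.variables['latitude'][:]      # variable names depend on
            lon = nc.variables['longitude'][:]     # the fields chosen in BRAT
            ssha = nc.variables['ssha'][:]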


  • 17:39 - Automated Processing Chains via Python and the ESA SNAP Toolbox for Earth Observation: Two Applications for Sentinel-2 and Sentinel-3 Data Products
    Vecoli, Antonio - Tech For Space, Italy

    SNAP is the ESA application platform that supports the scientific exploitation of the Sentinel Earth Observation missions by providing a common architecture for the toolboxes of Sentinel-1, Sentinel-2 and Sentinel-3. The SNAP developers provide a dedicated Python module called snappy that gives access to the SNAP API, developed in Java, directly from Python. This approach allows the definition of automated Python processing chains that improve the exploitation of Sentinel data products through the interaction between SNAP and the Python scientific computing packages. Two examples of these SNAP-Python applications are introduced in this digital poster, focusing on Sentinel-2 and Sentinel-3 data products. A Sentinel-2 MSI product is used in a Python-SNAP processing chain including atmospheric correction (the Sen2Cor processor) and two simple band-ratio algorithms that provide computationally fast and easy-to-use parameters; these indices have been found to correlate well with water quality parameters for lake waters (CHL, CDOM, DOC). A Sentinel-3 OLCI data product is processed with an automated Python chain that computes the TOA reflectance of each band from the TOA spectral radiances provided by OLCI; the true colour image of the scene is then obtained by applying a contrast enhancement technique available in the Python scientific packages. Automated Python processing chains provide important advantages for the general community of SNAP users and for the development of the ESA Earth Observation toolboxes themselves. First of all, the noticeable flexibility of the Python programming language can be used to work directly on Sentinel data products, avoiding interaction with the desktop version of the SNAP toolbox. Most importantly, Python processing chains can be used to implement new scientific algorithms or plugins, called SNAP operators, in the original SNAP toolbox. Each user can thus also be an independent developer, providing new enhancements and extensions to the official ESA SNAP toolbox in an open source fashion.
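
    To give a flavour of such a chain, the sketch below uses snappy's BandMaths operator to compute a band ratio on a Sentinel-2 product; the input path and the specific ratio expression are illustrative placeholders, not the poster's exact algorithm:

        # Band ratio on a Sentinel-2 product via the snappy BandMaths operator.
        from snappy import ProductIO, GPF, jpy

        HashMap = jpy.get_type('java.util.HashMap')
        BandDescriptor = jpy.get_type(
            'org.esa.snap.core.gpf.common.BandMathsOp$BandDescriptor')

        product = ProductIO.readProduct('S2A_MSIL2A_example.SAFE')  # placeholder

        ratio = BandDescriptor()
        ratio.name = 'ratio'
        ratio.type = 'float32'
        ratio.expression = 'B3 / B4'    # illustrative green/red band ratio

        targetBands = jpy.array(
            'org.esa.snap.core.gpf.common.BandMathsOp$BandDescriptor', 1)
        targetBands[0] = ratio

        parameters = HashMap()
        parameters.put('targetBands', targetBands)

        result = GPF.createProduct('BandMaths', parameters, product)
        ProductIO.writeProduct(result, 'band_ratio', 'GeoTIFF')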


  • 17:42 - SAR Altimetry Processing On Demand Service for CryoSat-2 and Sentinel-3 at ESA G-POD
    Benveniste, Jérôme (1); Dinardo, Salvatore (2); Sabatino, Giovanni (3); Restano, Marco (4); Ambrózio, Américo (5) - 1: ESA-ESRIN, Italy; 2: He Space/EUMETSAT; 3: Progressive Systems/ESRIN; 4: SERCO/ESRIN; 5: DEIMOS/ESRIN

    The scope of this presentation is to feature the ESA-ESRIN G-POD SARvatore service for the exploitation of CryoSat-2 data, designed and developed by the Altimetry Team at ESA-ESRIN EOP-SER (Earth Observation – Exploitation, Research and Development). The G-POD service coined SARvatore (SAR Versatile Altimetric Toolkit for Ocean Research & Exploitation) is a web platform that allows any scientist to process CryoSat-2 SAR/SARin data on-line, on demand and with user-selectable configuration, from L1a (FBR) data products up to SAR/SARin Level-2 geophysical data products. The processor takes advantage of the G-POD (Grid Processing On Demand) distributed computing platform to deliver custom-processed data products in a timely manner and to interface with the ESA-ESRIN FBR data archive (155,000 SAR passes and 41,000 SARin passes - 118 TB of CryoSat data storage). The output data products are generated in standard NetCDF format (using the CF convention) and are therefore compatible with the Multi-Mission Radar Altimetry Toolbox (BRAT) and other NetCDF tools. Using the G-POD graphical interface, it is straightforward to select a geographical area of interest within the time frame covered by the CryoSat-2 SAR/SARin FBR data products in the service catalogue. The processor prototype is versatile, allowing users to customize and adapt the processing to their specific requirements by setting a list of configurable options (which can be augmented upon user request). After task submission, users can follow the status of the processing in real time. From the web interface, users can choose to generate experimental SAR data products such as stack data and Range Integrated Power (RIP) waveforms. The processing service, initially developed to support awarded development contracts by allowing deliverables to be compared against ESA's own products, is now made available to the worldwide SAR altimetry community for research and development experiments, for on-site demonstrations in training courses and workshops, for cross-comparison with third party products (e.g. CLS/CNES CPP or ESA SAR COP data products), and for the preparation of the exploitation of the Sentinel-3 Surface Topography Mission, by producing data and graphics for publications, etc. Initially, the processing was designed and optimized uniquely for open ocean studies, based on the SAMOSA model developed for the Sentinel-3 Ground Segment using CryoSat data. However, since June 2015 a new retracker (SAMOSA+) has been offered within the service, dedicated to the coastal zone, inland waters and sea ice/ice sheets. In view of Sentinel-3 data exploitation, a new flavour of the service has been initiated, exclusively dedicated to the processing of Sentinel-3 mission data products; its scope is to maximize the exploitation of the upcoming Sentinel-3 Surface Topography Mission data over all surfaces. Moreover, since June 2016, the high resolution EIGEN6C4 geoid based on GOCE DIR5 data is available in the output products. The service is open and free of charge (supported by the SEOM Programme Element) for worldwide scientific applications and is available at https://gpod.eo.esa.int/services/CRYOSAT_SAR/ More info can be read at: http://wiki.services.eoportal.org/tiki-index.php?page=GPOD+CryoSat-2+SARvatore+Software+Prototype+User+Manual


  • 17:45 - Copernicus Big Data Exploitation For A Global Glaciers Monitoring Service
    Nascetti, Andrea; Di Tullio, Marco; Emanuelli, Nico; Nocchi, Flavia; Camplani, Andrea; Crespi, Mattia - Geodesy and Geomatics Division - DICEA - University of Rome La Sapienza, Rome, Italy

    Glaciers are a natural global resource and one of the principal climate change indicators at global and local scale, being influenced by changes in temperature and snow precipitation. Among the parameters used for glacier monitoring, glacier surface velocity is an important element, since it influences the events connected to glacier changes (mass balance, hydro balance, glacier stability, landscape erosion). Surface glacier velocity can be measured using both in-situ surveys and remote sensing techniques. Although in-situ surveys are accurate and have the advantage of allowing ice-flow monitoring at high temporal resolution, it is difficult to cover wide and inaccessible areas. Satellite imagery, on the other hand, enables the continuous monitoring of wide areas and provides information independent of logistic constraints. SAR data have several advantages over optical data: SAR imagery is characterized by very precise satellite orbits that provide high resolution and accurate mapping capabilities; moreover, a SAR sensor has the remarkable advantage of collecting images in any illumination and weather conditions. Thanks to the Copernicus programme, Sentinel-1 imagery is available under a free access policy with very short revisit times (down to 6 days with the launch of the Sentinel-1B satellite) and high amplitude resolution (up to 5 m), supplying huge amounts of data for spatial and temporal studies. It is therefore necessary to change the processing approach from the standard procedure of 'bring data to users' to the opposite 'bring users to data', moving towards the Big Data paradigm for the analysis of satellite and geospatial data as well. Indeed, users can directly upload algorithms to the dedicated infrastructure, removing the time required for data transfer and allowing the development of innovative applications. The leading idea of this work is to continuously retrieve glacier surface velocity using Sentinel-1 SAR amplitude data, exploiting the potential of the Google Earth Engine (GEE). GEE has recently been released by Google as 'a platform for petabyte-scale scientific analysis and visualization of geospatial datasets'. It is a computing platform which can be used to run geospatial analysis on a dedicated High Performance Computing infrastructure, enabling researchers to access geospatial information and satellite imagery for global and large scale remote sensing applications. The archive includes more than thirty years of historical imagery and scientific datasets, updated and expanded daily; overall, GEE contains over two petabytes of geospatial data instantly available for analysis. The SAR offset-tracking algorithm developed at the Geodesy and Geomatics Division of the University of Rome "La Sapienza" has been integrated into a cloud-based platform that automatically processes large stacks of Sentinel-1 data to retrieve glacier surface velocity field time series. Several results for relevant glaciers (e.g. Baltoro (Karakoram), San Rafael and San Quintin (Chile), Aletsch (Switzerland)), also validated against already available and renowned software (e.g. ESA SNAP, CIAS), highlight the potential of Big Data analysis to automatically monitor glacier surface velocity fields at global scale, exploiting the synergy between GEE and Sentinel-1 imagery to implement a global monitoring service.
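
    The offset-tracking algorithm itself is the authors' own code uploaded to the platform; the sketch below shows only the data-selection side in the Earth Engine Python API, with an illustrative area and period (assumes an authenticated Earth Engine account):

        # Select Sentinel-1 amplitude imagery over a glacier in Earth Engine.
        import ee

        ee.Initialize()                          # assumes prior ee.Authenticate()

        aoi = ee.Geometry.Point(76.4, 35.75)     # approximate Baltoro location
        s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
              .filterBounds(aoi)
              .filterDate('2016-01-01', '2016-12-31')
              .filter(ee.Filter.eq('instrumentMode', 'IW'))
              .select('VV'))

        print(s1.size().getInfo(), 'scenes available for offset tracking')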


  • 17:48 - A way to monitor and share thunderstorms from satellite data using the StormTrek app
    de Rosa, Michele (1); Picchiani, Matteo (1); Del Frate, Fabio (2); Sist, Massimiliano (2) - 1: GEO-K srl, Italy; 2: Tor Vergata University, Italy

    The number of extreme weather events has been increasing in recent years as a result of ongoing climate change. Novel applications of weather satellite data may provide invaluable support to risk mitigation and damage reduction when extreme events occur. StormTrek is a mobile app, available on the Android platform, which can detect, track and predict the behaviour of convective cells up to 30 minutes ahead. It is based on the output of the StormTrack algorithm, which analyses Meteosat Second Generation images in near real time every 15 minutes, or every 5 minutes over Europe with the rapid update service. The thunderstorms are displayed on a geolocated map together with other information such as cloud top temperature, convective cell severity, cloud direction, cloud area and very short term forecasts (nowcasting). These data are integrated, over different layers, with other Meteosat RGB products provided by the EUMETSAT Web Map Service, to give the final user a complete characterization of the weather situation. The user can in turn send feedback about the nowcasting accuracy through the app's interface. This methodology has been prototyped as a further validation step for the StormTrack algorithm, adopting a citizen science approach: the user is not only encouraged to use the application but also invited to contribute to its development. At present, the app covers Europe, Africa, South America, the Middle East and India, and it is actively used, mainly by users from South Africa, India and Eastern Europe. The increasing number of users from different countries has pushed the development of a new multilingual interface for the app. Furthermore, in South Africa a virtual community was created to support the project.


  • 17:51 - FOSS4G DATE: an open resource for DSM generation from optical and SAR satellite imagery
    Di Rita, Martina; Nascetti, Andrea; Crespi, Mattia - La Sapienza, Italy

    One of the most important applications of remote sensing is the generation of Digital Surface Models (DSMs), which are highly relevant in many engineering, environmental, Earth science, safety and security applications. The fully automatic generation of DSMs is still an open research issue. To this aim, the software DATE was developed to process both optical and SAR satellite imagery. DATE is a FOSS4G conceived as an OSSIM (Open Source Software Image Map) plug-in, whose development started in summer 2014 in the framework of the Google Summer of Code. The implemented tool is based on a hybrid procedure in which photogrammetric and computer vision algorithms are mixed in order to take the best of both. For the dense matching, for example, Semi-Global Matching as implemented in the OpenCV library is exploited, as sketched below. Special attention was paid to finding a solution for epipolar resampling, which is not straightforward for optical and SAR satellite imagery due to their multiple projection centres; an original and computationally efficient approach was defined and implemented, introducing the GrEI (Ground quasi-Epipolar Imagery). In this work, the results obtained with DATE on several optical and SAR datasets are presented and assessed. As regards optical imagery, the ISPRS Working Group 4 of Commission I on "Geometric and Radiometric Modelling of Optical Spaceborne Sensors" provides a benchmark with several stereo datasets from spaceborne stereo sensors; WorldView-1 and Cartosat-1 datasets were used. The accuracy in terms of NMAD ranges from 1 to 3 m for WorldView-1, and from 4 to 6 m for Cartosat-1. The results show a generally better 3D reconstruction for WorldView-1 DSMs than for Cartosat-1, and a different completeness level for the three analysed tiles, which are characterized by different slopes and land cover. As far as SAR imagery is concerned, a dataset composed of two SAR stacks (one ascending and one descending) of three TerraSAR-X images each has been analysed. The accuracy evaluation was carried out by comparing the extracted DSMs to a more accurate reference DSM obtained with LiDAR technology. Results from the ascending and descending stacks have been evaluated: markedly better results, with an RMSE even below 6 m and almost no bias, are achieved with the descending stack, while the ascending stack shows an RMSE of about 7.5 m and a high bias, not really compliant with the absolute geolocation accuracy of the zero-Doppler SAR model. Exploiting all the available DSMs in a merged DSM, however, a global accuracy of around 6 m is achieved, with much higher completeness. These results are compliant with the accuracy required in several applications, such as canopy volume and density estimation, urban volume computation, and emergency mapping. Furthermore, since some active and passive high resolution data are freely available (e.g. from the Sentinel constellation), massive processing of such data could contribute to the generation of a high resolution open source DSM, while the satellites' revisit times also allow multitemporal analyses able to detect ground changes.
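
    As an illustration of the dense matching step, the sketch below runs OpenCV's semi-global matcher on an epipolar-resampled image pair (the file names are hypothetical; DATE's own pipeline adds the GrEI resampling and the photogrammetric processing around this call):

        # Dense matching of an epipolar-resampled pair with OpenCV's
        # semi-global (block) matching, the core of DATE's matching step.
        import cv2

        left = cv2.imread('epipolar_left.tif', cv2.IMREAD_GRAYSCALE)
        right = cv2.imread('epipolar_right.tif', cv2.IMREAD_GRAYSCALE)

        block = 5
        sgbm = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=128,          # must be a multiple of 16
            blockSize=block,
            P1=8 * block * block,        # smoothness penalties
            P2=32 * block * block,
        )
        # compute() returns fixed-point disparities scaled by 16.
        disparity = sgbm.compute(left, right).astype('float32') / 16.0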


  • 17:54 - Fostering Open Source Software Resources for Geospatial Data Exploitation in Europe
    Pradhan, Chetan (1); Ilie, Codrina Maria (2); Tabasco, Antonio (3) - 1: EARSC, Belgium; 2: Terrasigna, Romania; 3: GMV, Spain

    Open Source Software is no longer a novelty in the realm of geospatial data exploitation. The days when these solutions were regarded with a sceptical and hesitant eye have faded away. Today, new business models have evolved in the private sector, developing new links with open source, whilst in the R&D and academic environment the requirement to build products and services on open source has become the status quo. There are many stable Open Source Solutions in the Geospatial Information domain, used all over the world, varying from desktop to server side, from libraries to web tools. However, the open source environment is of great complexity, and this presents a barrier to much more widespread adoption and to the building of communities around different solutions. In this context, EARSC has started an initiative to understand the Open Source Solutions available for the geospatial environment, to map out the different governance and community engagement models in existence, and to explore how to promote and sustain further developments on all facets, from technical to legislative. One particular challenge is that much of what is published as open source is developed in R&D projects, most commonly in response to an explicit request by the funding entity. Such solutions fail to develop a community that would drive the project beyond its initial resources, and that is the pivotal point that eventually determines whether an open source solution will be successful or not. The EARSC initiative aims to explore whether it is possible to create an environment that sustains such solutions: an environment that would not allow the resources invested in solutions to be lost once they no longer have the support that drove their initial development. In this presentation, EARSC will describe the activities of its Open Source Initiative working group, which has been analysing the rationale for a more coordinated method of managing the open source tools and components being developed for EO data exploitation, and defining the associated governance needed to foster a vibrant and active community of contributors and to ensure the long-term sustainability of the initiative, with benefit to the whole community, including industry, institutions and research.


  • 17:57 - CATE – Access to and Analysis of Climate data by the ESA CCI Toolbox
    Fomferra, N. (1); Zühlke, M. (1); Gailis, J. (2); Bernat, C. (3); Hollmann, R. (4); Corlyon, A. (3); Pechorro, E. (5) - 1: Brockmann Consult, Germany; 2: S&T, Norway; 3: Telespazio Vega, UK; 4: Deutscher Wetterdienst, Germany; 5: ESA-ECSAT, UK

    The ESA Climate Change Initiative has the objective of realising the full potential of the long-term global EO archives that ESA, together with its member states, has established over the last 30 years, as a significant and timely contribution to the ECV databases required by the UNFCCC. As part of this programme, ESA is making available to its climate data users a dedicated toolbox supporting the analysis of data across the various ECVs. Currently, the ESA ECVs comprise 13 climate variables (aerosols, cloud, fire, greenhouse gases, glaciers, ice sheets, land cover, ocean colour, ozone, sea ice, sea level, sea surface temperature, soil moisture). This toolbox is called CATE (Climate Analysis Tooling Environment). The main objective of CATE is to equip climate users with the means to operate on CCI ECV data, overcoming three cardinal challenges: 1. limited means to ingest ECV data spanning different ECV types into a common data model; 2. limited means to apply algorithms homogeneously across data associated with different ECV types; 3. limited means to conveniently analyse and visualise data drawn from 1 and 2. The CCI toolbox CATE provides:
    - easy access to all ESA CCI data available from the CCI Open Data Portal;
    - easy access to any climate data available through OPeNDAP;
    - easy access to local data in netCDF format;
    - cloud deployability through a backend-frontend architectural design;
    - a programming interface in scientific Python;
    - 3 user interfaces: API (programming in scientific Python), CLI (scripting) and GUI (app);
    - a Python backend and a desktop app based on Web technologies as frontend (Electron, React/Redux, TypeScript, Cesium globe, OpenLayers);
    - easy integration of new operations in both CLI and GUI by adding annotated Python functions;
    - integrated workflow management and a workspace-based GUI;
    - time series analysis.
    The CCI Toolbox targets a broad user community. Users of CCI data span a variety of communities, from scientific users of ECV data to high-level policy drivers and decision makers; moreover, the user community includes education and the knowledgeable public. Consultation with all users of CCI data is important to establish the detailed requirements of the toolbox and to review its different development stages. In this presentation we will highlight the user requirements and the corresponding main concepts of CATE, give a demonstration of the software and its usage from CATE Desktop and via the API, and stress the specifics of global climate time series data. We will encourage the users at the workshop to test CATE, to reflect critically on our software solution, and thus to foster interaction between users and developers.
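
    Beyond naming OPeNDAP and netCDF, the abstract does not spell out the access layer; as a generic sketch of what "easy access through OPeNDAP" looks like in scientific Python, here is xarray opening a remote dataset (the URL and variable name are placeholders, not actual CCI Open Data Portal endpoints):

        # Open a remote climate dataset over OPeNDAP and extract a time series.
        import xarray as xr

        # Placeholder OPeNDAP URL; substitute a real CCI endpoint.
        url = 'https://example.org/thredds/dodsC/cci/sst_monthly.nc'
        ds = xr.open_dataset(url)

        # Time series of one variable at the grid point nearest to ESRIN.
        ts = ds['analysed_sst'].sel(lat=41.8, lon=12.7, method='nearest')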


  • 18:00 - Serving earth observation data with GeoServer: addressing real
    Giannecchini, Simone; Aime, Andrea - GeoSolutions, Italy

    The presentation will cover GeoSolutions' experience in setting up GeoServer-based production systems providing access to earth observation products, with indications of technical challenges, solutions and deployment suggestions. It will cover topics such as setting up a single unified mosaic from all the available data sources; tailoring access to it for different users; determining the most appropriate stacking order; dealing with multiresolution data, different coordinate systems, multiband data and SAR integration; searching for the most appropriate products using a mix of WFS, CSW and so on; serving imagery with high performance WMS and WMTS; and performing small and large data extractions with WCS and WPS, closing with deployment examples and suggestions. The presentation will also cover the latest developments, such as the OpenSearch for EO extension, which allows EO collections and products to be exposed via OpenSearch, the enhancements to ImageMosaic for improved management of EO time series, and the improved REST interface for administering EO collections and products. Challenges in preprocessing EO data such as Sentinel and Landsat will also be introduced.
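
    As a client-side sketch of the serving patterns discussed, the snippet below requests one time slice of an EO mosaic from a WMS endpoint with OWSLib (the server URL and layer name are hypothetical):

        # Request a time slice of an EO mosaic from a GeoServer WMS via OWSLib.
        from owslib.wms import WebMapService

        wms = WebMapService('https://example.org/geoserver/ows', version='1.3.0')
        png = wms.getmap(layers=['eo:sentinel2_mosaic'],   # hypothetical layer
                         srs='EPSG:4326',
                         bbox=(11.0, 41.0, 13.0, 43.0),
                         size=(800, 800),
                         format='image/png',
                         time='2017-06-01')
        with open('mosaic.png', 'wb') as f:
            f.write(png.read())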


  • 18:03 - Exploring New Capabilities for Global Ocean Colour Innovative Applications
    Dura, Isaac - HeraSpace, Spain

    More than 10 years ago, ESA kicked off the Technology Transfer Programme Office (TTPO), whose mission is to inspire and facilitate the use of space technology, systems and know-how for non-space applications. ESA has been very active in promoting this programme in the entrepreneurial scene: for example, on 26 November 2016 ESA sponsored the Junction hackathon in Helsinki, with more than 1400 competitors, and offered one of the two main prizes to the best idea for the Arctic, won by the application HeraSpace (see the ESA link to the Junction event below). HeraSpace proposes an app for optimising ocean fishing standards and best practices: by combining Copernicus satellite data with actual fishing data, fishing routes and catch selection could eventually be drastically improved. Particularly interesting are the features aimed at supporting the sustainable exploitation of Arctic resources. HeraSpace is built by an international and highly experienced team, which is adding high doses of innovation by putting together a technology stack composed of real-time data from the Copernicus Sentinel-3 marine mission, distributed by EUMETSAT, and a blockchain that guarantees the integrity of the data. The HeraSpace algorithm uses data from the Sentinel-3 satellite, launched by ESA in 2016, and in particular from OLCI, in order to analyse the presence of chlorophyll. The data retrieved from Sentinel-3 is of very high quality and delivered in near real time, covering variables such as temperature, salinity, water depth, pollution and O2 levels. These data are crossed with data from an expert knowledge database (seafood domain), the preferences of the user's seafood company and, of course, the different legal regulations of each region. From the dynamic, real-time intersection of all the mentioned inputs, HeraSpace builds the best possible route, in order to increase the company's revenue, avoid administrative fines and guarantee the sustainability of the raw material (seafood). The space tech stack is served by the blockchain, making sure that the optimal routes cannot be copied by pirates, and assuring the administration that the company complies with the current legal regulations, thanks to the blockchain connection HeraSpace builds between the administration and the seafood enterprises. The toolbox for the Sentinel-3 optical mission, supporting OLCI and SLSTR, is of critical importance for HeraSpace; like the parallel developments for Sentinel-1 and -2, the Sentinel-3 Toolbox is based on an evolution of the BEAM development platform, a common platform called SNAP (SentiNel Application Platform). For data access, HeraSpace uses the Copernicus Online Data Access (CODA) web service with its graphical user interface and the Open Data Protocol (OData), through two dedicated Application Program Interfaces (APIs) for accessing the EO data stored in the downloaded files; the OData protocol accepts REST web services. OData service root URI for the CODA web service: https://coda.eumetsat.int/odata/v1. CODA web service resource paths: /Products and /Collections. Query options accepted by the CODA web service:
    $format: specifies the HTTP response format of the record, e.g. XML or JSON;
    $filter: specifies an expression or function that must evaluate to true for a record to be returned in the collection;
    $orderby: determines what values are used to order a collection of records;
    $select: specifies a subset of properties to return;
    $skip: sets the number of records to skip before retrieving records in a collection;
    $top: determines the maximum number of records to return.
    The default response format is Atom [RFC4287], an XML-based document format that describes collections of related information known as "feeds". The products resulting from a query can be filtered by ingestionDate, evictionDate and UUID (Unique User Identifier). They are served to the user as downloads whose MD5 checksums can be verified using the CODA checksum function, in order to confirm the integrity of the data with respect to the original. Once HeraSpace is deployed on the ESA cloud servers (Red Hat Enterprise Linux), it will help the seafood industry to measure catches, boosting sustainability and maintaining a healthy ocean ecosystem. Email: isaacdura@heraspace.com. Address: OsterBrooksWeg 14 b, Schenefeld, Hamburg, Germany. EUMETSAT EUM/OPS-SEN3/MAN/16/880763 v2 Draft, 24 January 2017. http://www.esa.int/spaceinimages/Images/2016/11/Junction_ESA_Arctic_Special_Prize_winner_HeraSpace
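
    As a sketch of what such an OData query looks like from Python, the snippet below lists recent Sentinel-3 products from the CODA service root given above (the filter expression and credentials are illustrative placeholders; consult the cited EUMETSAT manual for the exact query dialect):

        # Query the CODA OData service for recent Sentinel-3 OLCI products.
        import requests

        base = 'https://coda.eumetsat.int/odata/v1'
        params = {
            '$format': 'json',
            '$filter': "substringof('OL_2_WFR', Name)",   # illustrative filter
            '$orderby': 'IngestionDate desc',
            '$top': '5',
        }
        # CODA requires (free) registration; credentials are placeholders here.
        resp = requests.get(base + '/Products', params=params,
                            auth=('username', 'password'))
        resp.raise_for_status()
        # Response layout assumed to follow the OData v2 JSON convention.
        for entry in resp.json()['d']['results']:
            print(entry['Name'])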


Exhibitions - TEP Demo - Social Event

18:15 - 19:00

  • 18:15 - Exhibitions - TEP Demos 3

    Exhibitions - TEP Demos 3


Day 4 - 28/09/2017

Education and Communication

Chairs: Hogan, Patrick (NASA), Sarti, Francesco (ESA-ESRIN), Stewart, Chris (RSAC), Eberle, Jonas (Friedrich Schiller University Jena)

08:30 - 10:30

  • 08:30 - NASA Europa Challenge Winners 2017
    Hogan, Patrick (1); Prestifilippo, Gabriele (2) - 1: NASA, United States of America; 2: Politecnico di Milano at Como

    The fifth year of the NASA World Wind Europa Challenge will culminate in Helsinki, Finland, during the last week of August. The theme this year, as last year, is Solutions For Sustainable Cities. The Europa Challenge has always had Europe's INSPIRE Directive to guide project development. This year INSPIRE continues to guide us and, more specifically, we are looking for urban management solutions. Almost every city needs the same data management tools as every other city. How can we help cities work together to be more sustainable, more livable and more resilient? The top six teams are gathering in Helsinki to share their work and help improve each other's applications. We will present the winning results of these top six teams from this global challenge.


  • 09:00 - Online education for Earth observation
    Eckardt, Robert (1); Eberle, Jonas (1); Urbazaev, Mikhail (1); Pathe, Carsten (1,2); Schmullius, Christiane (1) - 1: Friedrich-Schiller University Jena, Germany; 2: Earth Observation Services Jena

    The project SAR-EDU is a joint education initiative of the Friedrich Schiller University Jena (FSU), the German Aerospace Center (DLR) and numerous partners among radar-related scientific institutions. In a previous project phase, two main cornerstones for education in the field of applied radar remote sensing were established. Since 2013, the FSU has hosted a yearly summer school on applied radar remote sensing, and in late 2014 DLR and FSU published the SAR-EDU learning portal (https://saredu.dlr.de). This is the foundation of a new generation of the web portal, due to be launched in mid-2017, that provides access to a vast range of teaching material, online courses and a community of learners. This platform will host the first Massive Open Online Course (MOOC) on radar remote sensing in autumn 2017. Furthermore, the platform is designed to host courses and resources from several international organisations working with EO data and tools, thereby providing a central point of entry for EO education in the future. This presentation will give insights into the features of the new web portal, the contents of the Radar MOOC, and some general topics in e-learning.


  • 09:15 - Training on Remote Sensing for Ecosystem Modelling (TRuStEE)
    Panigada, Cinzia (1); Migliavacca, Mirco (2); Mahecha, Miguel (2); Reichstein, Markus (2); Anderson, Karen (3); Martín, M.Pilar (7); Uwe, Rascher (8); Van der Tol, Christiaan (4); Vescovo, Loris (5); Delalieux, Stephanie (6); Reusen, Ils (6); Ginelle, Damiano (5); van der Wal, Tamme (9); Rossini, Micol (1) - 1: University of Milano Bicocca, Italy; 2: Max Planck Institute for Biogeochemistry Department Biogeochemical Integration, Germany; 3: University of Exeter, United Kingdom; 4: University of Twente, Netherlands; 5: Fondazione Edmund Mach, Italy; 6: Vlaamse Instelling Voor Technologisch Onderzoek (VITO), Belgium; 7: Spanish National Research Council (CSIC), Spain; 8: Forschungszentrum Julich Gmbh, Germany; 9: Aerovision BV, Netherlands

    In this contribution the Training on Remote Sensing for Ecosystem modelling (TRuStEE) network, funded by the Horizon 2020 Marie Curie ITN 2016 call, is presented. TRuStEE aims to prepare the next generation of scientists to understand and deal with the increasing pressure of environmental change on ecosystem functioning and land-atmosphere interactions. Specifically, TRuStEE will train a new generation of scientists with complementary and interdisciplinary skills in ecosystem modelling, plant physiology, remote sensing technologies and big data analysis, addressing the following specific objectives: 1) to identify essential biodiversity variables (EBVs) and their link with plant traits (PTs) and ecosystem functional properties (EFPs) inferable from remote sensing, 2) to investigate a completely new avenue for assessing vegetation photosynthetic efficiency from remote sensing measurements of canopy fluorescence, 3) to assimilate diverse remote sensing data streams with varying spatial and temporal resolution into dynamic ecosystem models and 4) to exploit new satellite missions (e.g. ESA-FLEX, ESA-Sentinels, NASA-GEDI) and Earth Observation products for the upscaling of PTs, EBVs and EFPs. Understanding and predicting ecosystem functions remains a major challenge in evaluating ecosystem services and biophysical controls on biosphere-atmosphere interactions, as current dynamic vegetation models are still not capable of capturing the spatial and temporal variability of ecosystem processes. Remote sensing (RS) data, at a range of scales from proximal observations to global-extent sampling, can detect essential changes in plant traits, biodiversity and ecosystem functioning, providing a method for scaling up. However, there are still methodological and technical constraints that hamper a systematic incorporation of RS into ecosystem models, including scalability and multi-source data integration issues. These are the main topics developed in the individual PhD projects of the twelve early stage researchers (ESRs) involved in the TRuStEE training network. The students will attend a three-year training programme and will benefit greatly from the network of internationally recognized scientists and private companies with relevant expertise in these topics, through training courses, summer schools and secondments.


  • 09:30 - Sentinels’ Eyes Enhance STEM Education – Digital and Interactive Applications of Remote Sensing in School Lessons
    Rienow, Andreas; Ortwein, Annette; Lindner, Claudia; Schultz, Johannes; Jürgens, Carsten - Ruhr-University Bochum, Germany

    "Man must rise above the Earth – to the top of the atmosphere and beyond – for only thus will he fully understand the world in which he lives". Following the quote of the Greek philosopher Socrates, the bird’s eye perspective of satellites enables humankind to explore the spatial patterns on our Earth, detached from the limited scope of the human eye. High-technology sensors amplify the scope of perception to the global and the invisible. The presentation shows how Copernicus data is used to introduce young people to the benefits of remote sensing. In the light of the Copernicus services it will be explained how Sentinel-based teaching units can be developed in order to communicate the knowledge about and the handling of natural and man-made phenomena in times of global change. Built on the basis of intermediality, interactivity, and interdisciplinarity, children are introduced to the world of data behind fancy-colored satellite images. The activities focus not only on the curriculum of applied subjects like Geography and Biology, but also on Physics, Mathematics, and Computer Science. Exemplarily, different digital learning units will be demonstrated: (1) “Oases – Explored from Near and Far” introduces basic models of different types of oases to pupils. The riverine oasis is the central example of the unit. In an interactive process, the pupils can deduce a thematic map from a Sentinel-2 image. In (2) “Brown Coal – Land Use Change through Surface Mining”, the pupils compare and evaluate the development of different areas affected by surface mining with the help of a combined Landsat and Sentinel-2 time series. In so doing, the pupils see the relevance of brown coal surface mining for the German energy supply and are able to assess the effect of recultivation. Last but not least, (3) “Summer in the City” deals with thermal infrared remote sensing and Sentinel-3 images taken from Berlin. Finally it will be shown, how augmented reality and the development of miniature massive open online courses are going to play an important role in encouraging young students to engage in earth observation and gain media literacy.


  • 09:45 - Networking of Research and Outreach Organizations for the Study and Promotion of Environmental Issues: A Report from the BuioMetria Partecipativa Summer Campaign in Tuscany on Night Sky Quality Characterization
    Giacomelli, Andrea (1); Massetti, Luciano (2); Maggi, Elena (3) - 1: pibinko.org, Italy; 2: Institute of Biometeorology, Italian National Research Council, Italy; 3: University of Pisa, Department of Biology

    The presentation will report on the activities conducted during the summer by a team composed of staff from a university department, a national research institute and an outreach NGO, collecting measurements of night sky brightness and other information on artificial lighting in order to characterize light pollution on portions of the Tuscan coast, in Central Italy. These activities combine measurements collected by the principal scientists, citizen-science observations led by students, and outreach events targeting a broad audience. The campaign aggregates the efforts of three actors: the BuioMetria Partecipativa project, which started collecting light pollution data on a national scale in 2008 with an environmental engineering and free/open source GIS core team; the Institute of Biometeorology of the National Research Council, with ongoing studies on light and urban vegetation and a consolidated track record in environmental education and citizen science; and the Department of Biology of the University of Pisa, which started experiments to assess the impact of light pollution in coastal environments in 2015. While the core of the activities concerns in-situ data, the campaign will also account for remote sensing data, such as VIIRS imagery, thus considering heterogeneous data sources. The collaboration of an interdisciplinary team in the study of artificial lighting issues is not common in Italy, and undertaking the campaign in Tuscany has the added value of operating in one of the territories where it is possible to observe both sites with extremely high lighting levels and areas with extremely low light pollution, especially in the southern part of the region. Given the intertwining of monitoring and communication actions in the project, it is also expected that this effort will contribute to the promotion of good-quality night skies as an important asset for sustainability. Information on the teams' previous activities, as well as documentation of the progress of the summer campaign, may be accessed via http://www.buiometriapartecipativa.org (all content is available in English and Italian). The attachment is a copy of a poster presented at the fourth international conference on artificial light at night, held in Cluj-Napoca, Romania, in September 2016, which provides a visual overview of our activities related to the proposed topic up to that date.


  • 10:00 - Communication of Science: Insights from an Empirical Survey in India
    Scaria, Arul George (1,2); Ray, Shreyashi (1) - 1: Centre for Innovation, Intellectual Property and Competition; 2: National Law University, Delhi

    Science is going through a major crisis, which includes numerous issues in its communication. One of these issues is access to scientific outputs. Dissemination of scientific outputs, including those from publicly funded research, is often restricted to individuals and institutions in certain socio-economic positions. Journal subscription and data access fees are exorbitant and hence create access barriers. Transparency is another issue. Communication that lacks transparency, notably with respect to the methodology adopted, data relied upon, negative results obtained, and source of funding for the research, can result in misleading information. Moreover, the style of communication in most cases is such that, unless one is familiar with the relevant field, understanding and applying the knowledge generated can be extremely difficult. These problems have a significant impact on the accessibility and reproducibility of science, and hinder collaboration and social engagement. In a developing, multilingual country like India, with inequitable access to education and means of communication, and scarce research resources, this crisis takes on new dimensions. It therefore needs to be addressed in accordance with the systemic deficiencies specific to the Indian scientific community and the social needs peculiar to the country. It is in this context that we conducted a survey among researchers in five different disciplines (Economics, Law, Mechanical Engineering, Medicine and Physics) with the objective of understanding their practices in conducting and communicating research, and the policies governing them. One aim of the survey was to understand the individual and systemic factors that affect scientific communication, in order to recommend suitable changes to existing incentive structures. The survey was part of a larger project on ‘Open Science’, which seeks to facilitate a sustainable open science movement in India by identifying optimal legal and policy interventions. In our presentation, we highlight the major insights from the survey. In particular, we focus on: a) factors relevant to researchers when deciding on modes of publication and data sharing; b) transparency in scientific communication, i.e., whether researchers share details of the research methodology adopted, research tools used, limitations of their research (including errors in research/data), negative results, and sources of funding; c) policies adopted by institutions and funding agencies with regard to sharing practices, and the relevant compliance mechanisms; and d) researchers' satisfaction with the existing rules and practices relating to scientific communication in their institutes, and attempts made to change them. The data from our study point to exorbitant article processing charges for open access publishing, an excessive focus on journal impact factors, a lack of clear sharing policies with robust compliance mechanisms, and a general lack of motivation towards making scientific communication accessible and understandable as significant problems in the Indian scientific research community. We hope that our presentation will provide the Indian perspective on the multidimensional crisis in scientific communication. The feedback received on our presentation and the discussions at the conference will be extremely helpful for our project and for refining recommendations to facilitate a sustainable open science movement in India.


Visualisation and Virtual labs

Chairs: Collard, Fabrice (OCEANDATALAB), Natali, Stefano (SISTEMA GmbH)

11:00 - 13:15

  • 11:00 - Easy-Install WMS Server
    Hogan, Patrick; Collins, Paul David; Schubert, Bruce; Glueckert, Zach; Del Castillo, Miguel - NASA, United States of America

    The NASA World Wind Server Kit (WWSK) is an open source Java project that assembles GeoServer for easy distribution and implementation. WWSK uses Apache Maven as its build system: to build GeoServer and its dependencies, run Maven from the root of the WWSK repository, e.g. $ mvn clean install. Several deployment options are supported: running standalone, deploying the worldwind-geoserver.war file (found in the worldwind-geoserver/target folder) to your preferred servlet container (e.g. Apache Tomcat), running from NetBeans or from Jetty via Maven, or installing a binary distribution; support is provided for each of these options. WWSK adds support for reading and writing OGC GeoPackages in GeoServer, exposing a GeoPackage (tiles) raster data source. WWSK also integrates GeoWebCache (GWC), enabled by default, so the tile caching options available on layers apply. We will describe and demonstrate installation of the World Wind Server Kit, along with how to incorporate the included base set of imagery and terrain data to establish a virtual globe.
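
    For orientation, the following is a minimal sketch (in Python, using the OWSLib client) of how a freshly installed WWSK/GeoServer instance could be checked over WMS; the endpoint URL and layer name are illustrative assumptions, not part of the WWSK documentation.

        # Minimal sketch: query a local WWSK/GeoServer WMS endpoint.
        # The URL and layer name below are assumptions for illustration.
        from owslib.wms import WebMapService

        wms = WebMapService("http://localhost:8080/geoserver/ows", version="1.1.1")
        print(wms.identification.title)        # server title from GetCapabilities
        for name in list(wms.contents)[:5]:    # first few advertised layers
            print(name)

        # Fetch one rendered image from a hypothetical layer to confirm
        # that the server renders maps.
        img = wms.getmap(layers=["workspace:layer"], srs="EPSG:4326",
                         bbox=(-180, -90, 180, 90), size=(512, 256),
                         format="image/png")
        with open("preview.png", "wb") as f:
            f.write(img.read())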


  • 11:30 - NASA Web World Wind taking part in GSoC
    Kilsedar, Candan Eylül (1); Battaglia, Simone (2); Prestifilippo, Gabriele (1); Balhar, Jakub (3); Hogan, Patrick (4); Brovelli, Maria Antonia (1) - 1: Politecnico di Milano, Italy; 2: University of Bologna, Italy; 3: Gisat, Czech Republic; 4: NASA Ames Research Center, CA USA

    The Open Source Geospatial Foundation (OSGeo) has participated in the Google Summer of Code (GSoC) initiative since the summer of 2007. GSoC is a program that sponsors the development of open source projects by involving university students and letting them work side by side with the developers and managers of these projects. In the last two years, NASA Web World Wind projects have participated in this venture. NASA Web World Wind is a 3D virtual globe, Application Programming Interface (API)-centric Software Development Kit (SDK). In the summer of 2017, two projects were selected that provide additional functionality to NASA Web World Wind: the 3D OpenStreetMap (OSM) Plugin and the Marker Clustering Plugin. The goal of the 3D OSM Plugin is to make it easy to display 3D OSM data in NASA Web World Wind. Its first feature will be displaying buildings, with the ability to extend this feature further. First, OSM data is fetched in real time, based on a bounding box or a URL for OSM data. The plugin then offers a function to extrude the polygons in the fetched data with an arbitrary height value. The performance of 3D rendering will be optimized by using triangle meshes instead of polygons. Additionally, the project can incorporate actual building heights from Digital Surface Model (DSM) data and apply them to the extrusion. The 3D OSM Plugin will also improve performance via various caching techniques and tiling schemes; if tiling is implemented, a new plugin could be created to tile any GeoJSON data as well. The Marker Clustering Plugin improves the visualization of placemarks (markers) on a map, making it possible to render a large number of markers at run time without requiring high GPU performance. This is achieved by a clustering algorithm running in the background that creates clusters on the fly for the placemarks to be rendered (a minimal sketch of the idea follows below). Moreover, the plugin supports a high degree of marker customization via an easy-to-use interface.
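
    The plugins themselves are JavaScript; the following Python sketch is a language-neutral illustration of the grid-clustering idea behind the Marker Clustering Plugin. The cell size and the input format are assumptions for illustration, not the plugin's actual interface.

        # Grid clustering sketch: group (lat, lon) markers into cells and
        # render one symbol per cell instead of thousands of markers.
        from collections import defaultdict
        import random

        def cluster_markers(markers, cell_deg=1.0):
            """Return one representative cluster per grid cell, with the
            member count, for a list of (lat, lon) points."""
            cells = defaultdict(list)
            for lat, lon in markers:
                key = (int(lat // cell_deg), int(lon // cell_deg))
                cells[key].append((lat, lon))
            clusters = []
            for members in cells.values():
                clat = sum(m[0] for m in members) / len(members)
                clon = sum(m[1] for m in members) / len(members)
                clusters.append({"lat": clat, "lon": clon, "count": len(members)})
            return clusters

        # Example: 10,000 markers collapse to a handful of cluster symbols.
        pts = [(random.uniform(44, 46), random.uniform(8, 10)) for _ in range(10000)]
        print(len(cluster_markers(pts, cell_deg=0.5)))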


  • 11:45 - Space Big Image Tool
    Iacobellis, Michele; Agrimano, Luigi; Amoruso, Leonardo - Planetek Italia s.r.l., Italy

    SpaceBIT is a platform whose main purpose is to help scientists and algorithm designers change their way of thinking from "sequential" to "massively parallel". This is accomplished through a processing and visualization pipeline conceived from scratch and entirely devoted to the exploitation of GPUs and next-generation multi-core CPUs. Scientists and designers can prototype and experiment with their ideas by writing programs suited to running on GPUs, which are injected into the processing pipeline in real time. The infrastructure takes care of managing the input and output of very large SAR and multi-spectral/hyper-spectral images, collecting results just in time and displaying them on a visualization medium, with a focus on next-generation devices for virtual and augmented reality. With SpaceBIT, the user can interact with images of tens of gigabytes in standard data formats such as HDF5, HDF-EOS, TIFF, JPEG and FITS; a generic sketch of this tiled, just-in-time access pattern follows below.
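
    SpaceBIT's own API is not described in this abstract; as a generic, hedged illustration of just-in-time access to images far larger than memory, the Python/h5py sketch below streams an HDF5 raster tile by tile. The dataset name, tile size and the per-tile statistic are assumptions.

        # Tiled processing sketch: apply a per-tile operation without ever
        # loading the full image into memory.
        import h5py
        import numpy as np

        def process_tiles(path, dataset="image", tile=1024):
            """Yield (row, col, mean) for each tile of a 2D HDF5 raster."""
            with h5py.File(path, "r") as f:
                data = f[dataset]                # lazy handle, nothing read yet
                rows, cols = data.shape
                for r in range(0, rows, tile):
                    for c in range(0, cols, tile):
                        block = data[r:r + tile, c:c + tile]  # one tile in memory
                        yield r, c, float(np.mean(block))     # placeholder kernel

        # Example usage (assuming a file scene.h5 with a dataset "image"):
        # for r, c, m in process_tiles("scene.h5"):
        #     print(r, c, m)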


  • 12:00 - SAMI: High Resolution 3D Visualisation of Earth Observation Satellite Missions
    Pinol Sole, Montserrat; Zundo, Michele - ESA/ESTEC, The Netherlands

    This paper presents a software application for the visualization of high-resolution 3D satellite mission scenarios, distributed by the ESA-ESTEC EOP System Support Division to users within the ESA Earth Explorer and Copernicus satellite community. The SAMI (SAtellite MIssion Editor & Player) application plays high-resolution 3D and 2D animations of ESA Earth Observation satellites. SAMI displays the satellite orbit ground-tracks and the footprints of the on-board instruments, flags when the satellite enters the visibility area of the ground stations, and applies user-selected global Earth map images as a layer texture. Realistic Sun illumination allows observing the shadows cast by the various elements of the satellite model. It is also possible to trigger the deployment sequence of solar arrays and antennas and to schedule thruster firing events. The time window in the application can be configured as simulated time (in the past or in the future) or real time. In addition, an endless-loop simulation mode is available to replay a given sequence. With the editing capabilities of SAMI, the user can drive the various camera views (camera attached to the Earth or to the satellite) and enable or disable objects in the scene, generating standalone animations for kiosk-type applications and exporting them to HD video or a series of snapshots. The missions currently supported are Sentinel-1A/1B, Sentinel-2A/2B, Sentinel-3A/3B, Sentinel-5P, Swarm, CryoSat, SMOS and Aeolus. The capability to seamlessly display several satellites simultaneously is one of SAMI's strongest features. The coherence and accuracy of the orbital and geometrical calculations within SAMI are ensured by the use of the embedded Earth Observation CFI Software libraries (EOCFI SW), which are used to obtain the satellite position, orbit ground-track, attitude and swath footprint (an idealised ground-track sketch follows below). Typical use cases for this application are the playback of a scenario in time to observe a particular satellite geometry, and the export of screenshots or video to be used as media content or shared with stakeholders to illustrate mission concepts. The application runs on desktop platforms (Mac OS X, Windows) and mobile platforms (iOS-based, e.g. iPad). References: [1] SAMI website: http://eop-cfi.esa.int/index.php/applications/sami
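
    The EOCFI libraries perform the precise orbit and geometry computations inside SAMI; as an illustrative stand-in only, the pure-Python sketch below computes an idealised subsatellite ground track for a circular orbit, including Earth rotation. The inclination, period and start longitude are example values, not mission data.

        # Idealised ground-track sketch for a circular orbit.
        import math

        def ground_track(incl_deg=98.6, period_s=6059.0, lon0_deg=0.0,
                         step_s=60.0, duration_s=6000.0):
            """Yield (lat, lon) in degrees of the subsatellite point,
            accounting for Earth's sidereal rotation."""
            incl = math.radians(incl_deg)
            w_orbit = 2.0 * math.pi / period_s       # orbital angular rate [rad/s]
            w_earth = 2.0 * math.pi / 86164.1        # Earth sidereal rate [rad/s]
            t = 0.0
            while t <= duration_s:
                u = w_orbit * t                      # argument of latitude
                lat = math.degrees(math.asin(math.sin(incl) * math.sin(u)))
                lon_inertial = math.degrees(math.atan2(math.cos(incl) * math.sin(u),
                                                       math.cos(u)))
                lon = lon_inertial + lon0_deg - math.degrees(w_earth * t)
                yield lat, (lon + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
                t += step_s

        # Example: print a coarse track for roughly one orbit.
        for lat, lon in ground_track():
            print(f"{lat:7.2f} {lon:8.2f}")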


  • 12:15 - Ocean multisensor synergy from Sentinel 1-2-3
    Collard, Fabrice (1); Gaultier, Lucile (1); Herlédan, Sylvain (1); El Khoury Hanna, Ziad (1); Le Seach, Guillaume (1); Guitton, Gilles (1); Konik, Marta (2); Korosov, Anton (3) - 1: OCEANDATALAB, France; 2: IOPAN, Poland; 3: NERSC, Norway

    The fully operational ocean-observing Sentinel-1-2-3 constellation provides a wide range of views of the ocean surface, from the coast to the open ocean, at various scales and from physical to biological processes. Jointly exploring this huge heterogeneous dataset in a simple, fast and convenient way is now possible using the Ocean Virtual Laboratory portal, online or in its standalone version. Today, these tools are widely used by the scientific community to better understand and monitor oceanic processes. A collection of use cases will be demonstrated to illustrate the functionalities of these tools: collocating Sentinel-1 and Sentinel-3 data makes it possible to detect oceanic fronts and eddies, highlighting strong and energetic ocean currents; using Sentinel-1, -2 and -3 jointly helps to detect oil spills as well as their displacement; and analyzing overlapping Sentinel-1, -2 and -3 acquisitions helps to assess ocean wave parameters and their intensification by surface currents or bottom topography (a minimal collocation sketch follows below). An interactive demo of the tools will also be available at the OceanDataLab booth. The online tool is available at http://ovl.oceandatalab.com
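
    As an illustration of the collocation step mentioned above, the following Python sketch pairs Sentinel-1 and Sentinel-3 acquisitions that cover the same (here, tile-coded) area within a short time window; the granule records and the one-hour window are assumptions, not OVL internals.

        # Collocation sketch: match granules by shared area and time proximity.
        from datetime import datetime, timedelta

        def collocate(s1_granules, s3_granules, max_dt=timedelta(hours=1)):
            """Return (s1_id, s3_id) pairs whose footprints overlap (reduced
            here to a shared tile id) and whose sensing times differ by
            less than max_dt."""
            pairs = []
            for s1 in s1_granules:
                for s3 in s3_granules:
                    if (s1["tile"] == s3["tile"]
                            and abs(s1["time"] - s3["time"]) < max_dt):
                        pairs.append((s1["id"], s3["id"]))
            return pairs

        # Example with two synthetic granule records:
        s1 = [{"id": "S1A_example", "tile": "31TCJ", "time": datetime(2017, 6, 1, 5, 40)}]
        s3 = [{"id": "S3A_example", "tile": "31TCJ", "time": datetime(2017, 6, 1, 6, 10)}]
        print(collocate(s1, s3))   # -> [('S1A_example', 'S3A_example')]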


  • 12:30 - The Coastal Waters Research Synergy Framework: Unlocking Our Potential for Coastal Innovation Growth
    Terra-Homem, Miguel (1); Grosso, Nuno (1); Catarino, Nuno (1); Scarrot, Rory (2); Politi, Eirini (2); Cronin, Abigail (2) - 1: Deimos Engenharia, Portugal; 2: University College Cork, Ireland

    Until recently, scientists had to deal with the daunting task of mining large datasets for suitable data, often downloading EO information from various different sources. In addition, as the datasets increased in volume, processing became slower and more demanding of computing facilities. The Coastal Waters Research Synergy Framework (Co-ReSyF) project aims to tackle these issues by developing a platform that combines data access, processing, visualisation and output in one place. The platform is based on cloud computing to maximise processing efficiency and task orchestration, and is intended to support researchers monitoring economic and social coastal activities (e.g. fisheries, harbour operations, ship traffic monitoring, oil spill detection) in a changing world. Co-ReSyF is a 3-year project (2016-2018) funded within the European Union's Horizon 2020 research and innovation programme under grant agreement No 687289. The project supports research applications using Earth Observation (EO) data for coastal water research. Co-ReSyF will create a cloud platform that simplifies the integration of EO data into multi-disciplinary research activities. This platform aims to be user-friendly and accessible to inexperienced scientists as well as EO and coastal experts. We will reach a wide community of coastal and oceanic researchers, who are offered the opportunity to experience, test and guide the development of the platform whilst using it as a tool for their own research. The platform will include a set of five core Research Applications developed under the project, as well as a set of tools that researchers can use to build their own applications in a user-friendly manner. Additionally, other tools or applications can be added by the research community for sharing with other researchers who may find them useful. The core applications to be developed during the project lifetime are: • Bathymetry determination from SAR images • Determination of bathymetry, benthic classification and water quality from optical sensors • Vessel and oil spill detection • Time-series processing for hyper-temporal optical data analysis • Ocean coastal altimetry. Additionally, a group of eight Master/PhD students has been selected to attend a Summer School where they will learn how to use the platform and will also contribute their own tools and/or applications to be incorporated into the platform.


  • 12:45 - VRE for Aquaculture: how EO data can help the Blue Growth
    Longépé, Nicolas (1); Goacolou, Manuel (1); Vadaine, Rodolphe (1); Blondel, Emmanuel (2); Pagano, Pasquale (3); Ellenbroek, Anton (2); Lebras, Jean-Yves (1) - 1: CLS, France; 2: FAO, Italy; 3: CNR, Italy

    The EU-funded BlueBRIDGE project delivers Virtual Research Environments (VREs) in various domains (e.g. fisheries, biology, economics, statistics, environment, mathematics, social sciences, natural sciences, computer science) that support knowledge generation, from data collection and aggregation to the production of indicators and indices or other information products such as fact sheets, reports and data repositories. In the context of the Blue Growth strategy, the EU has identified the need for services that collect and combine Earth Observation (EO) data with aquaculture data. In this context, fundamental services are needed to monitor the spatial distribution of human activities, including aquaculture and fishing, allowing for performance analysis based on environmental and socio-economic indicators. This presentation will highlight VREs that support a computing-intensive, ontology-driven feature analysis of SAR and multispectral optical imagery (using Sentinel-1 and -2 data, and Very High Resolution optical imagery), where the results are displayed on maps for human reviewers (a simplified water-masking sketch follows below). A first VRE specializes in recognizing aquaculture activity in Greece, while the second identifies coastal aquaculture ponds in Indonesia.
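
    The project's ontology-driven feature analysis is not detailed in this abstract; as a simplified stand-in for the water/pond-masking step, the Python sketch below computes a Normalized Difference Water Index (NDWI) from Sentinel-2 green and NIR bands. The band arrays and the 0.2 threshold are assumptions for illustration.

        # NDWI sketch: water pixels score high in (green - NIR) / (green + NIR).
        import numpy as np

        def ndwi_mask(green, nir, threshold=0.2):
            """Return a boolean water/pond candidate mask from green and
            NIR reflectance arrays of the same shape."""
            green = green.astype("float32")
            nir = nir.astype("float32")
            ndwi = (green - nir) / np.maximum(green + nir, 1e-6)  # avoid /0
            return ndwi > threshold

        # Example with synthetic reflectances: water has high green, low NIR.
        g = np.array([[0.10, 0.30], [0.09, 0.28]])
        n = np.array([[0.30, 0.05], [0.28, 0.04]])
        print(ndwi_mask(g, n))   # water detected in the right-hand column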


  • 13:00 - The GEO ECOPOTENTIAL Virtual Laboratory: a virtual research environment for ecosystem open science
    Nativi, Stefano (1); Mazzetti, Paolo (1); Santoro, Mattia (1); Manakos, Ioannis (2); Kordelas, Georgios (2); Lucas, Richard (3) - 1: CNR-IIA, Italy; 2: CERTH-ITI, Greece; 3: UNSW, Australia

    The ECOPOTENTIAL project, funded under the Horizon 2020 Research and Innovation programme, aims at building a unified framework for ecosystem studies and the management of protected areas. To achieve this objective, open and interoperable access to data and knowledge is assured by the GEO Ecosystem Virtual Laboratory Platform (ECOPOTENTIAL VLab). The concept of the ECOPOTENTIAL VLab stems from the need to move from open data to open science as a new vision of participatory scientific research. It therefore aims not only at data sharing but, more generally, at supporting the ecosystem community of practice in research activities for informed decision-making in ecosystem management. The ECOPOTENTIAL VLab provides multiple entry points to access information at different semantic levels depending on the user's specific interest, ranging over ecosystems, protected areas, storylines (e.g. user scenarios for protected-area study and management), workflows (e.g. business processes necessary for storylines), algorithms (e.g. models and procedures necessary for implementing workflows) and data. All the information artifacts have an open representation and are linked together according to a general ECOPOTENTIAL ontology, allowing users to navigate among the different concepts. In particular, users have access to in-situ data from selected campaigns and from European and global observation networks and programmes such as LTER DEIMS, OBIS, GBIF and LIFEWATCH. They also have access to raw and pre-processed remote sensing data, including Sentinel-1/2 and Landsat. Users can furthermore access workflows represented as BPMN diagrams and, through the ECOPOTENTIAL VLab, run them on selected input data to generate essential variables, indicators and indices. Users can share algorithms as code through GitHub, or as processing services such as OGC Web Processing Services (WPS), and integrate them into new workflows (a minimal WPS invocation sketch follows below). The architecture of the GEO Ecosystem Virtual Laboratory is based on a set of principles currently shared in the scientific research communities, with particular reference to the GEO initiative, including the GEOSS Data Sharing Principles, the GEOSS Data Management Principles and the GEOSS Architecture Principles. Moreover, since ECOPOTENTIAL participates in the Horizon 2020 pilot action on open access to research data, the activities of the ECOPOTENTIAL Consortium on the ECOPOTENTIAL Data Management Plan are a fundamental input for the architecture of the VLab. The design of the ECOPOTENTIAL Virtual Laboratory builds on past experience in constructing systems of systems through a brokering approach. The mature data brokering approach will be complemented with innovative semantic technologies, including concept-based queries and annotations, and with support for the discovery and invocation of workflows implementing storylines on multiple protected areas, contributing to enabling the open science vision in ecosystem science.
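
    As an illustration of the WPS integration mentioned above, the following Python sketch uses OWSLib's WPS client to describe and execute a processing service; the endpoint URL, process identifier and input name are hypothetical placeholders, not actual VLab services.

        # WPS invocation sketch using OWSLib (endpoint and process are
        # hypothetical placeholders for illustration).
        from owslib.wps import WebProcessingService, monitorExecution

        wps = WebProcessingService("http://example.org/wps")   # hypothetical endpoint
        print(wps.identification.title)                        # from GetCapabilities

        # Inspect a hypothetical indicator-computation process.
        proc = wps.describeprocess("compute_indicator")
        for inp in proc.dataInputs:
            print(inp.identifier, inp.dataType)

        # Execute it with a single literal input and poll until it completes.
        execution = wps.execute("compute_indicator",
                                inputs=[("protected_area", "gran_paradiso")])
        monitorExecution(execution)
        if execution.isSucceded():
            print(execution.processOutputs[0].data)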


Summaries and Roadmap

14:15 - 15:55

  • 14:15 - Summaries and Roadmap

    Conclusions