The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLIII-B4-2020
https://doi.org/10.5194/isprs-archives-XLIII-B4-2020-463-2020
25 Aug 2020

RAPID VISUAL PRESENTATION TO SUPPORT GEOSPATIAL BIG DATA PROCESSING

E. Levin, W. Roland, R. Habibi, Z. An, and R. Shults

Keywords: Visual Analytics, EEG, Event Related Potentials

Abstract. Given the limited number of human GIS/image analysts in any organization, efficient use of their time and of organizational resources is important, especially in Big Data scenarios where organizations may be overwhelmed with vast amounts of geospatial data. This manuscript describes experimental research outlining the concept of Human-Computer Symbiosis, in which computers perform tasks such as classification on a large image dataset and, in sequence, humans use Brain-Computer Interfaces (BCIs) to classify those images that machine learning had difficulty with. The BCI analysis exploits the brain’s ability to better answer questions such as: “Is the object in this image the object being sought?” To determine the feasibility of such a system, a supervised multi-layer convolutional neural network (CNN) was trained to distinguish ‘ship’ from ‘no ship’ in satellite imagery. A prediction layer was then added to the trained model to output the probability that a given image belongs to each of the two classes. If the probabilities fell within one standard deviation of the mean of a Gaussian distribution centered at 0.5, the images were stored in a separate dataset for Rapid Serial Visual Presentation (RSVP), implemented with PsychoPy, to a human analyst wearing a low-cost EMOTIV “Insight” EEG BCI headset. During the RSVP phase, hundreds of images per minute can be presented sequentially. At such a pace, human analysts are not capable of making conscious decisions about what is in each image; however, the subliminal “aha moment” can still be detected by the headset. These moments are revealed through the extraction of Event Related Potentials (ERPs), specifically the P300 ERP.
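The routing step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the probability array and the standard deviation `sigma` are hypothetical inputs, standing in for the CNN prediction layer's output and the fitted Gaussian's spread.

```python
import numpy as np

def split_by_confidence(probs, sigma=0.1):
    """Route predicted 'ship' probabilities into three sets.

    Images whose probability lies within one standard deviation (sigma,
    an assumed value here) of 0.5 are queued for RSVP review by a human
    analyst; the rest are classified automatically by the CNN.
    Returns index arrays: (auto_ship, auto_no_ship, rsvp_queue).
    """
    probs = np.asarray(probs, dtype=float)
    uncertain = np.abs(probs - 0.5) <= sigma
    auto_ship = (~uncertain) & (probs > 0.5)
    auto_no_ship = (~uncertain) & (probs <= 0.5)
    return (np.where(auto_ship)[0],
            np.where(auto_no_ship)[0],
            np.where(uncertain)[0])
```

Only the middle band of the probability distribution reaches the analyst, so human time is spent exclusively on the cases the machine found ambiguous.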
If a P300 ERP is generated upon detection of a ship, the relevant image is moved to its designated dataset; otherwise, if the image classification is still unclear, the image is set aside for another RSVP iteration in which the time afforded to the analyst for observing each image is increased. If classification remains uncertain after a reasonable number of RSVP iterations, the images in question are located within the grid matrix of their larger image scene. The images adjacent to those of interest on the grid are then added to the presentation to give the analyst more contextual information via an expanded field of view. If classification is still uncertain, one final expansion of the field of view is performed. Lastly, if the classification of an image remains indeterminable, the image is stored in an archive dataset.
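The iterative escalation above can be sketched as a simple triage loop. Here `detect_p300` is a hypothetical stand-in for the EEG pipeline: it returns `True` if a P300 ERP was detected during presentation, `False` if one was clearly absent, and `None` if the response was ambiguous. The iteration count and presentation durations are illustrative assumptions, not values from the paper, and the field-of-view expansion steps are collapsed into the retry loop for brevity.

```python
def rsvp_triage(image, detect_p300, max_iterations=3, base_duration=0.1):
    """Iterative RSVP triage of one uncertain image.

    Each pass doubles the per-image presentation time (an assumed
    policy), mirroring the paper's strategy of giving the analyst
    longer observation windows on later iterations. Images that
    remain indeterminable are archived.
    """
    duration = base_duration
    for _ in range(max_iterations):
        result = detect_p300(image, duration)
        if result is True:
            return "ship"       # P300 observed: move to 'ship' dataset
        if result is False:
            return "no_ship"    # clear absence: move to 'no ship' dataset
        duration *= 2           # ambiguous: retry with more viewing time
    return "archive"            # still indeterminable after all passes
```

In the full workflow, later iterations would also swap in the adjacent grid tiles of the larger scene to widen the analyst's field of view before falling back to the archive.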