COMPARISON OF TRAINING STRATEGIES FOR CONVNETS ON MULTIPLE SIMILAR DATASETS FOR FACADE SEGMENTATION
Keywords: Convolutional Network, Facade Segmentation, Fine-Tuning, Multi-Task Learning
Abstract. In this paper, we analyze different training strategies and the accompanying architectures for Convolutional Networks (ConvNets) when multiple similar datasets are available, using the semantic segmentation of rectified facade images as an example. In addition to direct training on the target dataset, we analyze multi-task learning and fine-tuning. When a ConvNet is trained with multi-task learning, multiple objectives are optimized in parallel; fine-tuning optimizes these objectives sequentially. For both strategies, the tasks share a common part of the ConvNet, whose depth we vary. We present results for all strategies, compare them with respect to the overall pixel-wise accuracy, and show that for the special case of facade segmentation there are no significant differences between using multiple datasets or only the target dataset, nor between the different training strategies.
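To make the two strategies concrete, the following is a minimal sketch, not the authors' implementation, of a shared convolutional trunk with task-specific segmentation heads. It assumes PyTorch; all names (SharedTrunk, SegmentationHead, multi_task_step, fine_tune_step) and hyperparameters are hypothetical illustrations of the idea of sharing a common part of the network whose depth can be varied.

```python
# Illustrative sketch only (assumed PyTorch); names and settings are hypothetical.
import torch
import torch.nn as nn

class SharedTrunk(nn.Module):
    """Convolutional layers shared across tasks; `depth` controls how many
    conv blocks are shared (the quantity varied between experiments)."""
    def __init__(self, depth=3, width=32):
        super().__init__()
        layers, in_ch = [], 3
        for _ in range(depth):
            layers += [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True)]
            in_ch = width
        self.features = nn.Sequential(*layers)

    def forward(self, x):
        return self.features(x)

class SegmentationHead(nn.Module):
    """Task-specific 1x1 classifier producing per-pixel class scores."""
    def __init__(self, width=32, num_classes=9):
        super().__init__()
        self.classifier = nn.Conv2d(width, num_classes, 1)

    def forward(self, feats):
        return self.classifier(feats)

trunk = SharedTrunk(depth=3)
head_a = SegmentationHead(num_classes=9)    # head for dataset A
head_b = SegmentationHead(num_classes=12)   # head for dataset B
criterion = nn.CrossEntropyLoss()

def multi_task_step(batch_a, batch_b, optimizer):
    """Multi-task learning: both objectives are optimized in parallel, so
    gradients from both datasets update the shared trunk in the same step."""
    (xa, ya), (xb, yb) = batch_a, batch_b
    loss = criterion(head_a(trunk(xa)), ya) + criterion(head_b(trunk(xb)), yb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def fine_tune_step(batch, head, optimizer):
    """Fine-tuning: the objectives are optimized sequentially -- first train
    the trunk with one head, then reuse the trunk while training the other."""
    x, y = batch
    loss = criterion(head(trunk(x)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the multi-task case one optimizer would hold the parameters of the trunk and both heads, whereas for fine-tuning the second stage would typically optimize only the target head (and optionally the trunk) after the first stage has converged; direct training corresponds to using a single head and a single dataset throughout.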