Most current state-of-the-art methods for semantic segmentation of remote sensing imagery require large amounts of labeled data, which are scarce. Due to the distribution shift inherent in remote sensing imagery, reusing pre-trained models on new areas of interest rarely yields satisfactory results. In this paper, we approach this problem from an adversarial learning perspective toward unsupervised domain adaptation. The core idea is to combine fully convolutional networks with adversarial networks for semantic segmentation, assuming that the scene structures and objects of interest are similar in the two sets of images. Models are trained on a source dataset where ground truth is available and iteratively adapted to a new target dataset via an adversarial loss on unlabeled samples. We validate the framework on two large-scale real-world tasks: 1) cross-city road extraction and 2) cross-country building extraction. The preliminary results demonstrate the usefulness of adversarial learning for the indirect reuse of pre-trained models. Experimental validation suggests significant benefits over models without adaptation.
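The adaptation scheme described above can be sketched in miniature. The following is a hypothetical illustration, not the paper's implementation: a linear encoder stands in for the fully convolutional segmentation network, a logistic-regression domain discriminator stands in for the adversarial network, and synthetic shifted Gaussians stand in for the source and target imagery. The discriminator is trained to tell source features from target features, while the shared encoder is updated on unlabeled target samples to fool it, shrinking the domain gap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the distribution shift between two remote sensing
# datasets: target features are source-like features plus a constant shift.
Xs = rng.normal(0.0, 1.0, size=(200, 4))   # "source" samples (labeled domain)
Xt = rng.normal(2.0, 1.0, size=(200, 4))   # "target" samples (unlabeled domain)

W = np.eye(4)                  # shared linear encoder z = x @ W (stands in for the FCN)
v, b = np.zeros(4), 0.0        # domain discriminator: D(z) = sigmoid(z @ v + b)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Domain gap before adaptation: distance between mean source/target features.
gap_before = np.linalg.norm((Xs @ W).mean(0) - (Xt @ W).mean(0))

lr_d, lr_g = 0.1, 0.05
for _ in range(300):
    Zs, Zt = Xs @ W, Xt @ W
    # Discriminator step: binary cross-entropy with source = 1, target = 0.
    ps, pt = sigmoid(Zs @ v + b), sigmoid(Zt @ v + b)
    n = len(Xs) + len(Xt)
    v -= lr_d * (Zs.T @ (ps - 1.0) + Zt.T @ pt) / n
    b -= lr_d * ((ps - 1.0).sum() + pt.sum()) / n
    # Adversarial encoder step on unlabeled target samples: update W so the
    # discriminator outputs "source" on them (non-saturating GAN loss).
    pt = sigmoid((Xt @ W) @ v + b)
    grad_W = Xt.T @ ((pt - 1.0)[:, None] * v[None, :]) / len(Xt)
    W -= lr_g * grad_W

gap_after = np.linalg.norm((Xs @ W).mean(0) - (Xt @ W).mean(0))
```

Note the shared encoder: because the same `W` maps both domains, the adversarial pressure pulls target features toward the source feature distribution, which is exactly what allows a segmenter trained with source labels to transfer.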