Achievement: Members and students of the Computational Urban Sciences group demonstrated a method for generating scenarios of urban neighborhood growth based on the existing physical structures and building placement within neighborhoods. The method combined two generative adversarial network (GAN) approaches with previous work that used convolutional neural networks to classify neighborhoods as residential, commercial, or mixed. A GAN pairs two neural networks, a generator and a discriminator, that use computer vision to analyze and generate images. The generator tries to produce images that match the real images, while the discriminator must decide whether each image it sees is real or fake. The model runs for thousands of iterations, each time evaluating the error between the real and the generated images. This error is fed back through both networks so that the generator steadily improves at producing realistic images.
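To make the adversarial loop concrete, the sketch below shows one possible training step for an image-to-image setup in PyTorch. The placeholder networks, synthetic tensors, and hyperparameters are illustrative assumptions only and do not reflect the models or data used in the study.

    # Minimal GAN training loop (illustrative sketch, not the study's models).
    import torch
    import torch.nn as nn

    # Placeholder networks; the actual work used the U-Net generator and
    # PatchGAN discriminator described under Research Details.
    generator = nn.Conv2d(3, 1, kernel_size=3, padding=1)       # land cover -> height raster
    discriminator = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # raster -> real/fake logits
    adv_loss = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    # Synthetic stand-ins for a paired (land cover, neighborhood morphology) batch.
    land_cover = torch.randn(4, 3, 64, 64)
    real_morphology = torch.randn(4, 1, 64, 64)

    for step in range(1000):
        # Discriminator step: learn to label real rasters 1 and generated rasters 0.
        fake_morphology = generator(land_cover)
        d_real = discriminator(real_morphology)
        d_fake = discriminator(fake_morphology.detach())
        loss_d = (adv_loss(d_real, torch.ones_like(d_real))
                  + adv_loss(d_fake, torch.zeros_like(d_fake)))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: the discriminator's error signal is fed back so the
        # generator improves at producing rasters that pass as real.
        g_fake = discriminator(fake_morphology)
        loss_g = adv_loss(g_fake, torch.ones_like(g_fake))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

In practice the placeholder convolutions would be replaced by the full generator and discriminator architectures, and the random tensors by the land cover and building-height rasters described below.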
Awarded Best Paper at ARIC '22, the 5th ACM SIGSPATIAL International Workshop on Advances in Resilient and Intelligent Cities, Seattle, WA, November 2022.
Significance and Impact: This capability for urban development scenario generation enables analysis of the effect of new built infrastructure on future meteorology and climate at the neighborhood scale. Analysis at this scale is at the forefront of climate research.
Research Details
- An image-to-image GAN was used to produce an ensemble of height-encoded, raster-type neighborhood morphologies at 30-m resolution based on geographically co-located images of land cover (USGS).
- A U-Net generator allows low-level features (in this case, roads and waterways) to flow through the generator via skip connections from layer 𝑖 to layer 𝑛−𝑖, where 𝑛 is the total number of layers. The encoder and decoder of the generator are built from standardized blocks of convolution, batch normalization, dropout, and activation layers (a minimal generator sketch follows this list).
- The discriminator is a PatchGAN classifier that determines whether individual sections (patches) of a generated image are real or fake, rather than judging only the image as a whole. The discriminator is run convolutionally across the image, and its per-patch responses are combined into the final decision output (an illustrative discriminator sketch also follows this list).
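The following is a minimal sketch of a U-Net-style generator with one skip connection from layer 𝑖 to layer 𝑛−𝑖. The depth, channel counts, and layer choices are assumptions for illustration, not the configuration reported in the paper.

    # Illustrative U-Net-style generator (depth and channel counts are assumptions).
    import torch
    import torch.nn as nn

    def down_block(c_in, c_out):
        # Standardized encoder block: convolution, batch norm, activation.
        return nn.Sequential(nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
                             nn.BatchNorm2d(c_out),
                             nn.LeakyReLU(0.2))

    def up_block(c_in, c_out):
        # Standardized decoder block: transposed convolution, batch norm, dropout, activation.
        return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),
                             nn.BatchNorm2d(c_out),
                             nn.Dropout(0.5),
                             nn.ReLU())

    class TinyUNetGenerator(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc1 = down_block(3, 64)     # layer i: captures low-level features
            self.enc2 = down_block(64, 128)
            self.dec1 = up_block(128, 64)
            # The skip connection concatenates enc1's features, so input channels double.
            self.dec2 = nn.ConvTranspose2d(64 + 64, 1, 4, stride=2, padding=1)

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(e1)
            d1 = self.dec1(e2)
            # Skip connection from layer i to layer n - i lets low-level features
            # such as roads and waterways pass directly from encoder to decoder.
            return self.dec2(torch.cat([d1, e1], dim=1))

    height_raster = TinyUNetGenerator()(torch.randn(1, 3, 64, 64))  # -> (1, 1, 64, 64)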
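Similarly, a PatchGAN-style discriminator can be sketched as a small stack of convolutions whose output is a grid of per-patch logits rather than a single scalar. The layer sizes and the conditional land-cover input shown here are assumptions for illustration.

    # Illustrative PatchGAN-style discriminator (layer sizes are assumptions).
    import torch
    import torch.nn as nn

    patch_discriminator = nn.Sequential(
        # Conditional input: land cover (3 ch) concatenated with a morphology raster (1 ch).
        nn.Conv2d(3 + 1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
        nn.Conv2d(128, 1, 4, stride=1, padding=1),   # one logit per image patch, not per image
    )

    land_cover = torch.randn(1, 3, 64, 64)
    morphology = torch.randn(1, 1, 64, 64)
    patch_logits = patch_discriminator(torch.cat([land_cover, morphology], dim=1))
    print(patch_logits.shape)  # torch.Size([1, 1, 15, 15]): a grid of real/fake decisions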
Facility: Work on this project used the CUSG computing cluster, Sunsphere.
Sponsor/Funding: DOE BER
PI and affiliation: Melissa Allen-Dumas, ORNL
Team: Abigail Wheelis, Centre College, KY; Levi Sweet-Breu, ORNL; Joshua Anantharaj, UTK; Kuldeep Kurte, ORNL
Citation and DOI: Allen-Dumas, M.R., Wheelis, A.R., Sweet-Breu, L.T., Anantharaj, J. and Kurte, K. (2022). “Generative Adversarial Networks for the Prediction of Future Urban Morphology.” In: ARIC '22: Proceedings of the 5th ACM SIGSPATIAL International Workshop on Advances in Resilient and Intelligent Cities, 1–6. https://doi.org/10.1145/3557916.3567819
Summary: As city planners design and adapt cities for future resilience and intelligence, they must consider the interactions among neighborhood morphological development, population change, and the resulting built infrastructure's impact on the natural environment. A deep understanding of these interactions requires explicit representation of future neighborhoods in models of future cities. Generative Adversarial Networks (GANs) have been shown to produce spatially accurate urban forms at scales ranging from entire cities down to neighborhoods and single buildings. We demonstrated a GAN method for generating an ensemble of possible new neighborhoods given land use characteristics and a designated neighborhood type for specific locations within the Los Angeles, California area.