DC Field                             | Value | Language
dc.contributor.author                | Arzoumanidis, Lukas | -
dc.contributor.author                | Hecht, Jonathan | -
dc.contributor.author                | Dehbi, Youness | -
dc.date.accessioned                  | 2024-07-24T13:16:19Z | -
dc.date.available                    | 2024-07-24T13:16:19Z | -
dc.date.issued                       | 2024-06-27 | -
dc.identifier.citation               | ISPRS TC IV (WG IV/9) 19th 3D GeoInfo Conference 2024 | en_US
dc.identifier.uri                    | https://repos.hcu-hamburg.de/handle/hcu/1036 | -
dc.description.abstract              | Figure-ground maps play a key role in many disciplines where urban planning or analysis is involved. In this context, the automatic generation of such maps with respect to certain requirements and constraints is an important task. This paper presents a first step towards a deep automatic generation of figure-ground maps in which the built density of the generated scenes is controlled and taken into account. This is performed by building upon a Geographic Data Translation model, which has been applied to generate less widely available geospatial features, e.g. building footprints, from more widely available geospatial data, e.g. street network data, using conditional Generative Adversarial Networks. A novel processing approach is introduced to incorporate the population density and the built density accordingly. Furthermore, the impact of both the level of detail of the street network, i.e. its sparsity or density, and the spatial resolution of the training data on the generated figure-ground maps has been investigated. The generated maps and the qualitative results reveal a clear impact of these parameters on the layout of built and unbuilt areas. Our approach paves the way for extending existing districts with figure-ground maps of future neighbourhoods, taking into account factors such as density and further parameters, which will be the subject of future work. | en
dc.language.iso                      | en | en_US
dc.publisher                         | Copernicus | en_US
dc.subject                           | Generative Adversarial Networks | en
dc.subject                           | Geographical Data Translation | en
dc.subject                           | Figure-ground Maps | en
dc.subject                           | Urban Morphology | en
dc.subject                           | Built Density | en
dc.subject                           | Volunteered Geographic Information | en
dc.subject.ddc                       | 710: Landscape design, spatial planning | en_US
dc.title                             | Towards a Deep Automatic Generation of Figure-ground Maps | en
dc.type                              | inBook | en_US
dc.relation.conference               | 19th 3D GeoInfo Conference 2024, 1-3 July 2024, Vigo, Spain | en_US
dc.type.dini                         | bookPart | -
dc.type.driver                       | bookPart | -
dc.rights.cc                         | https://creativecommons.org/licenses/by/4.0/ | en_US
dc.type.casrai                       | Book Chapter | -
dcterms.DCMIType                     | Text | -
tuhh.identifier.urn                  | urn:nbn:de:gbv:1373-repos-13279 | -
tuhh.oai.show                        | true | en_US
tuhh.publisher.doi                   | 10.5194/isprs-annals-X-4-W5-2024-33-2024 | -
tuhh.publication.institute           | Computational Methods | en_US
tuhh.type.opus                       | InBook (chapter / part of a monograph) | -
tuhh.container.startpage             | 33 | en_US
tuhh.container.endpage               | 39 | en_US
tuhh.relation.ispartofseriesnumber   | X-4/W5-2024 | en_US
tuhh.relation.ispartofseries         | ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences | en_US
tuhh.type.rdm                        | false | -
openaire.rights                      | info:eu-repo/semantics/openAccess | en_US
item.seriesref                       | ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences;X-4/W5-2024 | -
item.openairecristype                | http://purl.org/coar/resource_type/c_3248 | -
item.fulltext                        | With Fulltext | -
item.languageiso639-1                | en | -
item.cerifentitytype                 | Publications | -
item.grantfulltext                   | open | -
item.tuhhseriesid                    | ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences | -
item.creatorGND                      | Arzoumanidis, Lukas | -
item.creatorGND                      | Hecht, Jonathan | -
item.creatorGND                      | Dehbi, Youness | -
item.creatorOrcid                    | Arzoumanidis, Lukas | -
item.creatorOrcid                    | Hecht, Jonathan | -
item.creatorOrcid                    | Dehbi, Youness | -
item.openairetype                    | inBook | -
crisitem.author.dept                 | Computational Methods | -
crisitem.author.dept                 | Computational Methods | -
crisitem.author.orcid                | 0000-0001-6668-1695 | -
crisitem.author.orcid                | 0000-0003-0133-4099 | -
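
The abstract above describes conditional-GAN-based geographic data translation: rasterised street networks, together with density information, are translated into figure-ground maps. What follows is a minimal, hypothetical PyTorch sketch of that kind of conditional image-to-image translation, loosely following the pix2pix recipe; the two-channel input (street raster plus a density channel), the network sizes and the loss weighting are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch: conditional GAN translating a street-network raster
# plus a density channel into a figure-ground map. All architecture and
# loss choices are pix2pix-style assumptions, not the paper's code.
import torch
import torch.nn as nn

def block(c_in, c_out, down=True):
    # One encoder stage (strided conv) or decoder stage (transposed conv).
    Conv = nn.Conv2d if down else nn.ConvTranspose2d
    return nn.Sequential(
        Conv(c_in, c_out, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.2) if down else nn.ReLU(),
    )

class Generator(nn.Module):
    # 2-channel condition (streets + density) -> 1-channel map in [0, 1].
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(block(2, 64), block(64, 128), block(128, 256))
        self.dec = nn.Sequential(block(256, 128, down=False),
                                 block(128, 64, down=False),
                                 nn.ConvTranspose2d(64, 1, 4, 2, 1),
                                 nn.Sigmoid())
    def forward(self, cond):
        return self.dec(self.enc(cond))

class PatchDiscriminator(nn.Module):
    # PatchGAN: classifies (condition, map) pairs patch by patch.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(block(3, 64), block(64, 128),
                                 nn.Conv2d(128, 1, kernel_size=4, stride=1, padding=1))
    def forward(self, cond, img):
        return self.net(torch.cat([cond, img], dim=1))

G, D = Generator(), PatchDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(cond, real):
    # cond: (B, 2, H, W) street + density rasters; real: (B, 1, H, W) map.
    fake = G(cond)
    # Discriminator update: real pairs -> 1, generated pairs -> 0.
    d_real, d_fake = D(cond, real), D(cond, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator update: fool D and stay close to ground truth (L1 term).
    d_fake = D(cond, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, real)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Smoke test on random 64x64 tiles.
cond = torch.rand(2, 2, 64, 64)
real = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(train_step(cond, real))

In a setup of this kind, the L1 term keeps the generated layout close to the ground-truth map while the patch-wise discriminator judges the local realism of built and unbuilt patterns; stacking a density raster into the condition is one plausible way a constraint such as built density could be fed to the generator, as the abstract suggests.
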
Appears in Collection: Publikationen (mit Volltext)
Files in This Item:
File                                 | Description | Size | Format
isprs-annals-X-4-W5-2024-33-2024.pdf |             | 5.46 MB | Adobe PDF
Page view(s): 2,630 (checked on Sep 1, 2024)
Download(s): 9 (checked on Sep 1, 2024)

This item is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License.