Property valuation exemplifies the broader challenge of deriving actionable insights from complex, location-based information. Geographic factors, ranging from neighborhood characteristics and infrastructure connectivity to environmental conditions, interact with temporal trends and diverse data streams, making traditional analyses labor-intensive and sometimes opaque. Manual appraisals and retrospective sales comparisons often suffer from subjectivity and limited scope, underscoring the need for more systematic, data-driven approaches. Large Language Models (LLMs) hold promise for integrating textual, visual, and numerical inputs, yet they typically lack robust geospatial reasoning and struggle to fuse multi-modal sources such as satellite imagery, map features, and socioeconomic indicators. This research project seeks to bridge these gaps by developing frameworks that enable LLMs to understand and leverage spatial relationships, incorporate heterogeneous datasets, and generate interpretable assessments across a range of location-dependent domains. By advancing methods for geospatial reasoning, fine-tuning multi-modal AI architectures, and enhancing model explainability, the project aims to democratize access to high-quality, transparent insights, supporting informed decision-making for stakeholders in urban planning, environmental monitoring, logistics, and beyond.
Code
bof/baf/4y/2025/01/051
Duration
01 January 2025 → 31 December 2026
Funding
Regional and community funding: Special Research Fund
Promotor
Research disciplines
- Natural sciences
  - Statistical data science
- Social sciences
  - Economic geography
- Engineering and technology
  - Modelling and simulation
  - Numerical computation
Keywords
geospatial modeling
deep learning
large language models
neural networks