Towards Comparative Physical Interpretation of Spatial Variability Aware Neural Networks: A Summary of Results [preprint]

Preprint date

October 29, 2021

Authors

Jayant Gupta (Ph.D. student), Carl Molnar (M.S. student), Gaoxiang Luo (undergraduate research assistant), Joe Knight, Shashi Shekhar (professor)

Abstract

Given Spatial Variability Aware Neural Networks (SVANNs), the goal is to investigate mathematical (or computational) models for comparative physical interpretation towards their transparency (e.g., simulatability, decomposability, and algorithmic transparency). This problem is important due to use cases such as reusability, debugging, and explainability to a jury in a court of law. Challenges include a large number of model parameters, vacuous bounds on the generalization performance of neural networks, risk of overfitting, and sensitivity to noise, all of which detract from the ability to interpret the models. Related work on model-specific or model-agnostic post-hoc interpretation is limited by a lack of consideration of physical constraints (e.g., mass balance) and properties (e.g., the second law of geography). This work investigates physical interpretation of SVANNs using novel comparative approaches based on geographically heterogeneous features. The proposed feature-based physical interpretation approach is evaluated using a case study on wetland mapping. The proposed physical interpretation improves the transparency of SVANN models, and the analytical results highlight the trade-off between model transparency and model performance (e.g., F1-score). We also describe an interpretation based on geographically heterogeneous processes modeled as partial differential equations (PDEs).

Link to full paper

Towards Comparative Physical Interpretation of Spatial Variability Aware Neural Networks: A Summary of Results

Keywords

spatial-temporal systems, neural networks, machine learning
