🐐 Geometry Aware Operator Transformer as an Efficient and Accurate Neural Surrogate for PDEs on Arbitrary Domains

Shizheng Wen, Arsh Kumbhat, Levi Lingsch, Sepehr Mousavi, Yizhou Zhao, Praveen Chandrashekar, Siddhartha Mishra

1Seminar for Applied Mathematics, ETH Zurich, Switzerland
2ETH AI Center, Zurich, Switzerland
3Department of Mechanical and Process Engineering, ETH Zurich, Switzerland
4School of Computer Science, CMU, USA
5Centre for Applicable Mathematics, TIFR, India
[Figure: GAOT model performance radar chart]

Abstract

Learning solution operators of PDEs on arbitrary domains accurately and efficiently is a challenging task of vital importance to engineering and industrial simulation. Although many operator learning algorithms exist to approximate such PDEs, we find that accurate models are not necessarily computationally efficient, and vice versa. We address this issue by proposing a geometry-aware operator transformer (GAOT) for learning PDEs on arbitrary domains. GAOT combines novel multiscale attentional graph neural operator encoders and decoders with geometry embeddings and (vision) transformer processors to accurately map information about the domain and the inputs into a robust approximation of the PDE solution. Multiple innovations in the implementation of GAOT also ensure computational efficiency and scalability. We demonstrate significant gains in both accuracy and efficiency over several baselines on a large number of learning tasks from a diverse set of PDEs, including state-of-the-art performance on a large-scale three-dimensional industrial CFD dataset.
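To make the encode-process-decode design described above concrete, here is a minimal PyTorch-style sketch. It is an illustration under simplifying assumptions, not the released implementation: the multiscale attentional graph neural operator encoder/decoder and the geometry embedding are collapsed into plain pointwise MLPs, and all names and dimensions (GAOTSketch, hidden, etc.) are hypothetical.

import torch
import torch.nn as nn

class GAOTSketch(nn.Module):
    # Simplified encode-process-decode pipeline (hypothetical stand-in for GAOT).
    def __init__(self, in_dim=3, hidden=128, heads=4, layers=4):
        super().__init__()
        # Encoder: stands in for the multiscale attentional graph neural
        # operator encoder; here just a pointwise MLP that lifts the inputs
        # (coordinates + input fields) to latent features.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.GELU(), nn.Linear(hidden, hidden)
        )
        # Processor: a standard transformer acting on the latent tokens.
        block = nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
        self.processor = nn.TransformerEncoder(block, layers)
        # Decoder: stands in for the graph neural operator decoder; maps
        # latent features back to a per-point solution value.
        self.decoder = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, n_points, in_dim)
        z = self.encoder(x)
        z = self.processor(z)
        return self.decoder(z)

model = GAOTSketch()
x = torch.randn(2, 256, 3)   # 2 samples, 256 points, e.g. (x, y, input field)
print(model(x).shape)        # torch.Size([2, 256, 1])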

[Figure: GAOT model architecture]

Efficiency

How a model's training throughput scales with increasing input size and model size is crucial for evaluating whether it can process large-scale datasets (input scalability) and whether it can serve as the backbone of foundation models (model scalability), which require large model sizes. To evaluate the scalability of different models, we plot how training throughput changes with input size and with model size for GAOT and three representative baselines: RIGNO-18 (graph-based), GINO (FNO-based), and Transolver (transformer-based). These experiments are conducted on a single NVIDIA RTX 4090 GPU in float32 precision.
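As a rough illustration of how such a scaling study can be set up, the sketch below measures training throughput (samples per second) for a given model and batch. The helper name and iteration counts are our own choices for illustration, not taken from the paper.

import time
import torch

def training_throughput(model, x, y, n_iters=50, warmup=10):
    # Measures training samples/sec for one (model, input size) configuration.
    opt = torch.optim.Adam(model.parameters())
    loss_fn = torch.nn.MSELoss()
    start = None
    for i in range(warmup + n_iters):
        if i == warmup:
            # Exclude warm-up iterations (CUDA kernel setup, allocator churn).
            if torch.cuda.is_available():
                torch.cuda.synchronize()
            start = time.perf_counter()
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return n_iters * x.shape[0] / (time.perf_counter() - start)

Sweeping x over increasing grid resolutions probes input scalability; sweeping the model's width or depth probes model scalability.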

[Figure: grid resolution vs. training throughput]

[Figure: model size vs. training throughput]

Accuracy

We extensively test GAOT on 24 challenging benchmarks covering both time-independent and time-dependent PDEs of various types, on discretizations ranging from regular grids to random point clouds to highly unstructured adapted meshes, and compare it against 13 widely used baselines.
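For reference, a common accuracy metric in this literature is the relative L1 test error; a minimal sketch is below. We have not verified that this matches the paper's exact aggregation (e.g. mean vs. median over test samples), so treat it as illustrative.

import torch

def relative_l1_error(pred, target, eps=1e-8):
    # Per-sample relative L1 error, averaged over the batch.
    # pred, target: (batch, ...) tensors of equal shape.
    num = (pred - target).abs().flatten(start_dim=1).sum(dim=1)
    den = target.abs().flatten(start_dim=1).sum(dim=1) + eps
    return (num / den).mean()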

Visualization

[Gallery: visualizations of different test samples]

BibTeX

@article{wen2025gaot,
  title         = {Geometry Aware Operator Transformer as an Efficient and Accurate Neural Surrogate for PDEs on Arbitrary Domains},
  author        = {Wen, Shizheng and Kumbhat, Arsh and Lingsch, Levi and Mousavi, Sepehr and Zhao, Yizhou and Chandrashekar, Praveen and Mishra, Siddhartha},
  year          = {2025},
  eprint        = {2505.18781},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG}
}