Learning the solution operators of PDEs on arbitrary domains, both accurately and efficiently, is of vital importance to engineering and industrial simulations. Although many operator learning algorithms exist to approximate such PDEs, we find that accurate models are not necessarily computationally efficient, and vice versa. We address this issue by proposing a geometry-aware operator transformer (GAOT) for learning PDEs on arbitrary domains. GAOT combines novel multiscale attentional graph neural operator encoders and decoders with geometry embeddings and (vision) transformer processors to accurately map information about the domain and the inputs into a robust approximation of the PDE solution. Multiple implementation innovations also ensure GAOT's computational efficiency and scalability. We demonstrate significant gains in both accuracy and efficiency over several baselines on a large number of learning tasks from a diverse set of PDEs, including state-of-the-art performance on a large-scale three-dimensional industrial CFD dataset.
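The architecture above follows an encode-process-decode pattern. The sketch below is purely illustrative: every module is a simple stand-in (plain linear layers and a vanilla transformer), whereas the actual model uses multiscale attentional graph neural operator encoders/decoders with geometry embeddings and a (vision) transformer processor over a latent token grid.

```python
import torch
import torch.nn as nn

class EncodeProcessDecode(nn.Module):
    """Minimal sketch of the encode-process-decode pattern GAOT follows.
    All modules here are illustrative stand-ins, not the paper's layers."""

    def __init__(self, in_dim=3, latent_dim=64, out_dim=1):
        super().__init__()
        self.encoder = nn.Linear(in_dim, latent_dim)   # stand-in for the GNO encoder
        layer = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=4, batch_first=True)
        self.processor = nn.TransformerEncoder(layer, num_layers=2)
        self.decoder = nn.Linear(latent_dim, out_dim)  # stand-in for the GNO decoder

    def forward(self, x):
        # x: (batch, n_points, in_dim) -- point coordinates plus input fields
        h = self.encoder(x)        # lift each point into the latent space
        h = self.processor(h)      # global mixing via self-attention
        return self.decoder(h)     # project back to the output fields
```

The key design choice this illustrates is the separation of concerns: graph-based encoders/decoders handle arbitrary point sets, while the transformer processor operates on a fixed latent representation.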
How a model's training throughput scales with input size and model size is crucial for evaluating whether it can process large-scale datasets (input scalability) and whether it can serve as the backbone of a foundation model (model scalability), which requires large model sizes. To evaluate scalability, we plot training throughput as a function of input size and model size for GAOT and three selected baselines: RIGNO-18 (graph-based), GINO (FNO-based), and Transolver (transformer-based). All experiments are run on a single NVIDIA RTX 4090 GPU in float32 precision.
*(Figures: grid resolution vs. throughput; model size vs. throughput.)*
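Training throughput here means training samples processed per second, measured over full optimization steps (forward, backward, and optimizer update). A generic harness for this kind of measurement might look as follows; this is a sketch, not the benchmarking code used for the plots:

```python
import time
import torch

def training_throughput(model, batch, n_iters=20, warmup=5):
    """Measure training throughput (samples/second) for one model/batch
    configuration. Warmup iterations are excluded from the timing, and
    CUDA is synchronized so asynchronous kernels are fully counted."""
    opt = torch.optim.Adam(model.parameters())
    t0 = None
    for i in range(warmup + n_iters):
        if i == warmup:
            if torch.cuda.is_available():
                torch.cuda.synchronize()
            t0 = time.perf_counter()
        opt.zero_grad()
        out = model(batch)
        loss = out.pow(2).mean()   # dummy loss; a real run uses the PDE loss
        loss.backward()
        opt.step()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return n_iters * batch.shape[0] / (time.perf_counter() - t0)
```

Sweeping `batch` resolution (input scalability) or the model width/depth (model scalability) with this harness reproduces the kind of curves shown in the figures.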
We extensively test GAOT on 24 challenging benchmarks, covering both time-independent and time-dependent PDEs of various types on grids ranging from regular grids to random point clouds to highly unstructured adapted meshes, and compare it with 13 widely used baselines.
**Median relative L¹ error [%], time-independent PDEs:**

| Dataset | GAOT | RIGNO-18 | Transolver | GNOT | UPT | GINO |
|---|---|---|---|---|---|---|
| Poisson-C-Sines | 3.10 | 6.83 | 77.3 | 100 | 100 | 20.0 |
| Poisson-Gauss | 0.83 | 2.26 | 2.02 | 88.9 | 48.4 | 7.57 |
| Elasticity | 1.34 | 4.31 | 4.92 | 10.4 | 12.6 | 4.38 |
| NACA0012 | 6.81 | 5.30 | 8.69 | 6.89 | 16.1 | 9.01 |
| NACA2412 | 6.66 | 6.72 | 8.51 | 8.82 | 17.9 | 9.39 |
| RAE2822 | 6.61 | 5.06 | 4.82 | 7.15 | 16.1 | 8.61 |
| Bluff-Body | 2.25 | 5.76 | 1.78 | 44.2 | 5.81 | 3.49 |

**Median relative L¹ error [%], time-dependent PDEs:**

| Dataset | GAOT | RIGNO-18 | RIGNO-12 | GeoFNO | FNO DSE | GINO |
|---|---|---|---|---|---|---|
| NS-Gauss | 2.91 | 2.29 | 3.80 | 41.1 | 38.4 | 13.1 |
| NS-PwC | 1.50 | 1.58 | 2.03 | 26.0 | 56.7 | 5.85 |
| NS-SL | 1.21 | 1.28 | 1.91 | 13.7 | 22.6 | 4.48 |
| NS-SVS | 0.46 | 0.56 | 0.73 | 9.75 | 26.0 | 1.19 |
| CE-Gauss | 6.40 | 6.90 | 7.44 | 42.1 | 30.8 | 25.1 |
| CE-RP | 5.97 | 3.98 | 4.92 | 18.4 | 27.7 | 12.3 |
| Wave-Layer | 5.78 | 6.77 | 9.01 | 11.1 | 28.3 | 19.2 |
| Wave-C-Sines | 4.65 | 5.35 | 6.25 | 13.1 | 5.52 | 5.82 |
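The metric in these tables is the median, over test samples, of each sample's relative L¹ error. Under our reading of the metric (the exact evaluation code may differ), it can be computed as:

```python
import numpy as np

def median_rel_l1_error(pred, ref):
    """Median relative L1 error in percent over a set of test samples.
    pred, ref: arrays of shape (n_samples, n_points); per sample,
    err = ||pred - ref||_1 / ||ref||_1, then the median is taken."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    errs = np.abs(pred - ref).sum(axis=1) / np.abs(ref).sum(axis=1)
    return 100.0 * np.median(errs)
```

The median (rather than the mean) makes the reported number robust to a few outlier test samples.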
**Median relative L¹ error [%], structured grids:**

| Dataset | GAOT | RIGNO-18 | RIGNO-12 | CNO | scOT | FNO |
|---|---|---|---|---|---|---|
| NS-Gauss | 2.29 | 2.74 | 3.78 | 10.9 | 2.92 | 14.41 |
| NS-PwC | 1.23 | 1.12 | 1.82 | 5.03 | 7.12 | 12.55 |
| NS-SL | 0.98 | 1.13 | 1.82 | 2.12 | 2.49 | 2.08 |
| NS-SVS | 0.46 | 0.56 | 0.75 | 0.70 | 1.01 | 7.52 |
| CE-Gauss | 5.28 | 5.47 | 7.56 | 22.0 | 9.44 | 28.69 |
| CE-RP | 4.98 | 3.49 | 4.43 | 18.4 | 9.74 | 38.48 |
| Wave-Layer | 5.40 | 6.75 | 8.97 | 8.28 | 13.44 | 28.13 |
| Model | Pressure MSE | Pressure Mean AE | Wall Shear Stress MSE | Wall Shear Stress Mean AE |
|---|---|---|---|---|
| GAOT | 4.9409 | 1.1017 | 8.7370 | 1.5674 |
| FIGConvNet | 4.9900 | 1.2200 | 9.8600 | 2.2200 |
| TripNet | 5.1400 | 1.2500 | 9.5200 | 2.1500 |
| RegDGCNN | 8.2900 | 1.6100 | 13.8200 | 3.6400 |
| GAOT (NeurField) | 12.0786 | 1.7826 | 22.9160 | 2.5099 |
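The two metrics per quantity are the mean squared error and the mean absolute error over surface points; any dataset-specific scaling behind the reported numbers is not reproduced in this sketch:

```python
import numpy as np

def mse_and_mean_ae(pred, ref):
    """Mean squared error and mean absolute error of a predicted surface
    field against the reference, averaged over all evaluation points."""
    d = np.asarray(pred, float) - np.asarray(ref, float)
    return float((d ** 2).mean()), float(np.abs(d).mean())
```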
```bibtex
@article{wen2025gaot,
  title         = {Geometry Aware Operator Transformer as an Efficient and Accurate Neural Surrogate for PDEs on Arbitrary Domains},
  author        = {Wen, Shizheng and Kumbhat, Arsh and Lingsch, Levi and Mousavi, Sepehr and Zhao, Yizhou and Chandrashekar, Praveen and Mishra, Siddhartha},
  year          = {2025},
  eprint        = {2505.18781},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG}
}
```