
Universal Physics Transformers

Universal Physics Transformers (UPTs) present a unified deep learning paradigm for scaling neural operators across diverse spatio-temporal problems. By compressing heterogeneous simulation data into a fixed-size latent space and propagating dynamics using transformer-based approximators, UPTs efficiently handle both grid-based and particle-based simulations.

This framework supports arbitrary space–time queries and scales robustly across various simulation modalities.
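The paper factors this pipeline into three stages: an encoder that compresses the input into the latent space, an approximator that advances the latent state, and a decoder that evaluates the result at query points. The minimal PyTorch sketch below shows how those stages compose; the module choices, names, and sizes are illustrative stand-ins, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class UPTPipeline(nn.Module):
    """Encode -> propagate -> decode, with a fixed-size latent in the middle.
    Illustrative sketch only, not the reference implementation."""

    def __init__(self, in_dim=4, coord_dim=3, latent_tokens=32, dim=128, out_dim=1):
        super().__init__()
        self.encode = nn.Linear(in_dim, dim)                      # per-point embedding
        self.latents = nn.Parameter(torch.randn(latent_tokens, dim) * 0.02)
        self.pool = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.approximator = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, 4, batch_first=True), num_layers=2
        )
        self.query_embed = nn.Linear(coord_dim, dim)
        self.decode = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.head = nn.Linear(dim, out_dim)

    def forward(self, points, queries):
        # points:  (B, N, in_dim)    grid cells or particles, N may vary
        # queries: (B, Q, coord_dim) arbitrary space-time locations, e.g. (x, y, t)
        tokens = self.encode(points)
        q0 = self.latents.unsqueeze(0).expand(points.size(0), -1, -1)
        z, _ = self.pool(q0, tokens, tokens)      # compress to fixed-size latent
        z = self.approximator(z)                  # advance dynamics in latent space
        q = self.query_embed(queries)
        out, _ = self.decode(q, z, z)             # evaluate at the query points
        return self.head(out)                     # (B, Q, out_dim)

model = UPTPipeline()
u = model(torch.randn(2, 1024, 4), torch.rand(2, 50, 3))
print(u.shape)  # torch.Size([2, 50, 1])
```

The key property is that everything between encoding and decoding operates on a fixed number of latent tokens, regardless of how many points the simulation contains.
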
Unified Neural Operator Architecture
UPTs eliminate the dependency on grid- or particle-based latent structures, unifying diverse simulation methods. This results in a flexible framework adaptable to a wide range of spatio-temporal problems.
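Concretely, this means any simulation state, whether a regular grid, a mesh, or a particle cloud, can be presented to the model as an unordered set of position-feature tokens. A small sketch of that idea (to_tokens is a hypothetical helper, not from the released code):

```python
import torch

def to_tokens(positions, features):
    """Hypothetical helper: any simulation state becomes a set of
    (position, feature) tokens, with no grid or particle structure retained."""
    return torch.cat([positions, features], dim=-1)   # (N, pos_dim + feat_dim)

# Eulerian input: a 32x32 velocity grid, flattened to 1024 point tokens.
ys, xs = torch.meshgrid(torch.linspace(0, 1, 32), torch.linspace(0, 1, 32), indexing="ij")
grid_tokens = to_tokens(torch.stack([xs, ys], -1).reshape(-1, 2), torch.randn(1024, 2))

# Lagrangian input: 500 particles, same token format.
particle_tokens = to_tokens(torch.rand(500, 2), torch.randn(500, 2))

print(grid_tokens.shape, particle_tokens.shape)   # (1024, 4) and (500, 4)
# Both go through the same encoder; nothing downstream needs to know
# whether the points came from a grid, a mesh, or a particle set.
```
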
Efficient High-Dimensional Data Compression
The framework compresses high-dimensional simulation data into a fixed-size latent space. This efficient compression dramatically reduces computational overhead while preserving critical physical information.
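A common way to realize this kind of fixed-size compression is perceiver-style cross-attention, where a fixed set of learned latent tokens queries the variable-length input. The sketch below assumes that mechanism, with illustrative sizes:

```python
import torch
import torch.nn as nn

class LatentCompressor(nn.Module):
    """Pool a variable number of input tokens into a fixed-size latent
    via cross-attention onto learned latent tokens (illustrative sizes)."""

    def __init__(self, token_dim=64, latent_tokens=32, latent_dim=128, heads=4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(latent_tokens, latent_dim) * 0.02)
        self.proj = nn.Linear(token_dim, latent_dim)
        self.attn = nn.MultiheadAttention(latent_dim, heads, batch_first=True)

    def forward(self, tokens):                    # tokens: (B, N, token_dim), any N
        kv = self.proj(tokens)
        q = self.latents.unsqueeze(0).expand(tokens.size(0), -1, -1)
        z, _ = self.attn(q, kv, kv)               # (B, latent_tokens, latent_dim)
        return z

comp = LatentCompressor()
print(comp(torch.randn(2, 1024, 64)).shape)   # torch.Size([2, 32, 128])
print(comp(torch.randn(2, 500, 64)).shape)    # same latent size for fewer inputs
```

However many input points arrive, the latent always has the same shape, which is what keeps the downstream compute budget constant.
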
Accelerated Latent Space Propagation
UPTs rapidly propagate dynamics within the compressed latent space using transformer-based approximators. This approach accelerates simulation time without compromising the accuracy of the modeled physics.
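Because the latent has a fixed size, the cost of a rollout step depends only on the latent dimensions, not on how many cells or particles the original simulation had. A minimal rollout sketch, where the transformer step is an illustrative stand-in for the paper's approximator:

```python
import torch
import torch.nn as nn

latent_dim, heads = 128, 4
# One latent time step (illustrative, not the paper's exact block).
step = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=latent_dim, nhead=heads, batch_first=True),
    num_layers=2,
)

def rollout(z0, n_steps):
    """Advance the latent state n_steps forward, never touching the raw points."""
    states = [z0]
    for _ in range(n_steps):
        states.append(step(states[-1]))     # cost per step: O(latent_tokens^2), not O(N)
    return torch.stack(states, dim=1)       # (B, n_steps + 1, latent_tokens, latent_dim)

z0 = torch.randn(2, 32, latent_dim)         # compressed latent from the encoder
trajectory = rollout(z0, n_steps=10)
print(trajectory.shape)                     # torch.Size([2, 11, 32, 128])
```
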
Flexible Space-Time Querying
The model allows for querying the latent representation at any point in space and time. This flexibility enhances its utility across varied simulation scenarios, making it highly adaptable to different research needs.
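In practice this is typically done by embedding the query coordinates and letting them cross-attend into the latent. The sketch below uses simple Fourier features for the coordinates, which is our assumption rather than a detail taken from the paper:

```python
import torch
import torch.nn as nn

class QueryDecoder(nn.Module):
    """Evaluate the latent field at arbitrary (x, y, t) points (illustrative)."""

    def __init__(self, coord_dim=3, latent_dim=128, out_dim=1, n_freq=8, heads=4):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freq) * torch.pi)
        self.embed = nn.Linear(coord_dim * 2 * n_freq, latent_dim)
        self.attn = nn.MultiheadAttention(latent_dim, heads, batch_first=True)
        self.head = nn.Linear(latent_dim, out_dim)

    def forward(self, queries, z):                # queries: (B, Q, coord_dim), z: (B, M, latent_dim)
        ang = queries.unsqueeze(-1) * self.freqs  # (B, Q, coord_dim, n_freq)
        feats = torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(-2)
        q = self.embed(feats)
        out, _ = self.attn(q, z, z)               # each query attends to the whole latent
        return self.head(out)                     # (B, Q, out_dim)

dec = QueryDecoder()
z = torch.randn(2, 32, 128)
pts = torch.rand(2, 200, 3)                       # any spatial location, any time
print(dec(pts, z).shape)                          # torch.Size([2, 200, 1])
```

Because the decoder is just a function of coordinates and the latent, nothing ties the output to the input discretization: the field can be sampled on a finer grid, at off-grid points, or at intermediate times.
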
Scalable Across Eulerian and Lagrangian Systems
Universal Physics Transformers are designed to scale seamlessly across both Eulerian and Lagrangian simulation paradigms. Their robust performance across diverse simulation types positions them as a foundational tool for advanced physics modeling.

Paper

The paper is available on arXiv.

Authors

Benedikt Alkin
Andreas Fürst
Simon Schmid
Lukas Gruber
Markus Holzleitner
Johannes Brandstetter