Understanding Eulerian and Lagrangian Fluid Simulations

In fluid simulation, there are two dominant ways to represent a fluid: Eulerian and Lagrangian. So what are they? Which one should we use for a particular project? And how can we implement them?

First, let's discuss the Navier-Stokes equations, which describe the motion of Newtonian fluids. The momentum equation for a Newtonian fluid is as follows:

\begin{equation}\rho\frac{\partial\overrightarrow{\mathbf{u}}}{\partial t}+\rho(\overrightarrow{\mathbf{u}}\cdot\nabla)\overrightarrow{\mathbf{u}}=\rho\frac{D\overrightarrow{\mathbf{u}}}{Dt}=-\nabla p+\mu\nabla^2\overrightarrow{\mathbf{u}}+\Sigma\overrightarrow{\mathbf{F}}\end{equation}

The equation above gives us a fluid's motion at a particular point, and a fluid simulation works by calculating the velocity at many such points and interpolating between them. (For this model, we'll ignore the viscosity term and treat the fluid we're simulating as inviscid, like a superfluid, just to make things slightly more manageable.) But at which points should we solve Navier-Stokes?
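Whichever points we end up choosing, the velocities only exist at discrete samples, so the simulation has to interpolate between them. As a purely illustrative sketch (the 2D NumPy grid, its spacing, and the bilinear scheme below are my assumptions, not details from the post), sampling a stored velocity field at an arbitrary position might look like this:

```python
import numpy as np

# Hypothetical 2D velocity field: u[j, i] is the (ux, uy) vector
# stored at the fixed sample point (i * dx, j * dx).
dx = 0.1
u = np.zeros((64, 64, 2))

def sample_velocity(u, pos, dx):
    """Bilinearly interpolate the stored velocities at an arbitrary position."""
    x, y = pos[0] / dx, pos[1] / dx
    i, j = int(np.floor(x)), int(np.floor(y))
    # Clamp so the 2x2 neighbourhood used below stays inside the grid.
    i = min(max(i, 0), u.shape[1] - 2)
    j = min(max(j, 0), u.shape[0] - 2)
    s, t = x - i, y - j
    return ((1 - s) * (1 - t) * u[j, i]
            + s * (1 - t) * u[j, i + 1]
            + (1 - s) * t * u[j + 1, i]
            + s * t * u[j + 1, i + 1])

print(sample_velocity(u, (1.23, 4.56), dx))  # velocity between sample points
```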

One approach is to track each particle in the fluid and solve for its motion wherever it currently is. This is the Lagrangian approach: we follow the position, at time t, of the particle that started at a given initial point, which is mathematically expressed as

\begin{equation}\mathbf{X}(\mathbf{x}_0, t)\end{equation}
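To make the idea concrete, here is a minimal Lagrangian sketch in Python: each particle stores the position it has been carried to and its own velocity, and both are stepped forward in time. The acceleration function below is a hypothetical stand-in for the pressure and body-force terms, not something derived in this post.

```python
import numpy as np

class Particle:
    """One fluid particle tracked from its initial position x0 (Lagrangian view)."""
    def __init__(self, x0):
        self.x = np.array(x0, dtype=float)   # current position X(x0, t)
        self.v = np.zeros_like(self.x)       # current velocity

def acceleration(particle, particles):
    """Hypothetical stand-in for the pressure and body-force terms; gravity only here."""
    return np.array([0.0, -9.81])

def step(particles, dt):
    for p in particles:
        p.v += acceleration(p, particles) * dt   # update velocity from forces
        p.x += p.v * dt                          # move the particle with its own velocity

# Usage: seed particles on a small grid and advance the simulation.
particles = [Particle((i * 0.1, j * 0.1)) for i in range(10) for j in range(10)]
for _ in range(100):
    step(particles, dt=0.01)
```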

Simulating every particle seems like it would be very accurate, and it is; however, it's also very computationally expensive. The Eulerian approach instead tracks the fluid globally, fixing the reference points in space (typically on a grid) and solving Navier-Stokes at those fixed points as the fluid flows past them. This can be much faster computationally, but it's also less accurate. Mathematically, this approach is expressed as

\begin{equation}\mathbf{u}(\mathbf{x}, t)\end{equation}
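A minimal Eulerian sketch, by contrast, never moves its sample points: velocities live on a fixed grid, and each step updates the values stored there. One common way to handle the advection term on such a grid is semi-Lagrangian backtracing, sketched below; the grid layout, time step, and boundary handling are assumptions of mine, not details from the post.

```python
import numpy as np

def advect(u, dt, dx):
    """Semi-Lagrangian advection of a velocity field stored at fixed grid points.

    u has shape (ny, nx, 2): u[j, i] is the velocity at grid point (i*dx, j*dx).
    Each grid point traces backwards along the flow and samples the old field there.
    """
    ny, nx, _ = u.shape
    j, i = np.meshgrid(np.arange(ny), np.arange(nx), indexing='ij')
    # Backtrace: where did the fluid now at (i, j) come from one step ago?
    x = i - dt * u[..., 0] / dx
    y = j - dt * u[..., 1] / dx
    # Clamp to the grid and bilinearly interpolate the old velocities.
    x = np.clip(x, 0, nx - 1.001)
    y = np.clip(y, 0, ny - 1.001)
    i0, j0 = x.astype(int), y.astype(int)
    s, t = x - i0, y - j0
    return ((1 - s)[..., None] * (1 - t)[..., None] * u[j0, i0]
            + s[..., None] * (1 - t)[..., None] * u[j0, i0 + 1]
            + (1 - s)[..., None] * t[..., None] * u[j0 + 1, i0]
            + s[..., None] * t[..., None] * u[j0 + 1, i0 + 1])
```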

Both the Eulerian and Lagrangian approaches are functions of position and time (the Lagrangian of a particle's initial position, the Eulerian of a fixed point in space), and they can be related with the equation

\begin{equation}\mathbf{u}(\mathbf{X}(\mathbf{x}_0, t), t)=\frac{\partial\mathbf{X}}{\partial t}(\mathbf{x}_0, t)\end{equation}
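In code, this relation is exactly what lets us recover a Lagrangian trajectory from an Eulerian field: integrate the field's velocity at the particle's current position. The sketch below uses a made-up analytic field (a rigid rotation) and simple forward-Euler integration, both of which are my own choices for illustration.

```python
import numpy as np

def u(x, t):
    """Hypothetical Eulerian velocity field u(x, t): rigid rotation about the origin."""
    return np.array([-x[1], x[0]])

def trajectory(x0, t_end, dt):
    """Integrate dX/dt = u(X(x0, t), t) to recover the Lagrangian path X(x0, t)."""
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    t = 0.0
    while t < t_end:
        x += u(x, t) * dt    # the particle's velocity is the field sampled at its position
        t += dt
        path.append(x.copy())
    return np.array(path)

# A particle started at (1, 0) traces out (approximately) a circle in this field.
print(trajectory((1.0, 0.0), t_end=2 * np.pi, dt=0.01)[-1])
```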
