Alexis de la Fournière
Research & Innovation Hardware Engineer
Engineer at Qarnot, working on thermal integration of new hardware and high performance clusters

Introduction to reduced order models

February 23, 2022 - Numerical simulation

Introduction

Reduced order models (ROMs) are computationally light models derived from more complex physical models. The goal of the reduction is to decrease the time needed to simulate them. This is typically done by solving a given problem multiple times for different parameter values and then deducing a simpler model from those solutions.

ROMs may have different usages:

They can be useful if real-time computation is eventually needed. For example, when building a digital twin, one needs to simulate the behaviour of the device in real time. This cannot be done with standard simulation techniques, so building a ROM is relevant in this case.

When building a very large, system level simulation, it can be useful to simulate each piece and build ROMs to then assemble them together. This will dramatically lower the simulation time and memory usage of the entire system’s model.

Optimizing the design of a part typically requires performing a lot of simulations to fine-tune all the parameters. In that case, it can be relevant to use the results of the first simulations to accelerate the following ones.

As you can already imagine, since this technique involves performing a number of training simulations, distributed computing on the cloud is very relevant in that case.

In this article, we will see how the main reduction technique works. Then we will show a concrete application of this technique and an implementation based on Python and the libraries pyMOR and FeniCS, with calculations done on Qarnot’s platform.

Mathematical background

In this section, we will present the main reduction technique. This will require explaining how physical equations are typically solved through numerical techniques. Don’t worry if you are not familiar with the math; we will not dive into details here, but only present the ideas used.

Partial differential equations


First, let’s define the kind of problem we are trying to solve. Physical phenomena such as the spread of heat through a metallic part, the flow of air around a plane or the propagation of electromagnetic waves are all mathematically described by equations. These state the relation between the variations in time and space of physical quantities. For example, the heat equation tells us how temperature changes in time according to its spatial distribution at a given instant. In three dimensions, it can be written:

ρcp ∂T/∂t = λΔT + p

where T is the temperature, ρ the density, cp the heat capacity, λ the thermal conductivity and p the heat produced per unit volume.

Discretization

There are two things to keep in mind about these equations:

They are equations on functions. The unknown is a function that associates a value to every point in a domain and every time in a range. For example, if we are looking at the spread of heat, we are looking for a temperature function T(t, x, y, z). These problems thus involve an infinite quantity of data. In contrast, we are not looking at models that only describe macroscopic quantities like the speed of the whole plane. This infinite nature is what makes the equation difficult.

These equations link the variation of temperature in time with its variation in space. That is what makes them partial differential equations (PDEs). To understand why these links between partial derivatives appear, let’s take the example of temperature. The variation of temperature in time (i.e. its time derivative, ∂T/∂t) depends on the flow of heat. Yet, heat flows from high temperature points to low temperature points. Therefore, the expression of the heat flow involves the variation of temperature in space (∂T/∂x, ∂T/∂y, ∂T/∂z). The fact that the equation links different partial derivatives together makes it all the more difficult.

It is generally impossible to find an exact mathematical solution of such a partial differential equation. In fact, some common PDEs are so hard that mathematicians struggle to prove anything about them. This is the case of the Navier-Stokes equations (describing the flow of a fluid), for which it has not yet been shown that a smooth solution always exists (a one-million-dollar prize is actually attached to this problem).

Therefore, engineers and scientists look for approximate solutions through numerical means. In these problems, some parameters may come into play and change. They can be, for example, physical properties of a material, which change if we decide to replace one material with another or if we want to compare several options. Parameters can also be the ambient temperature, the plane’s speed, etc. For these reasons, we may have the same geometry and the same model, but we need to compute the solution again every time we want to study what happens for a new set of parameters.

To solve equations numerically, the first step is always to derive a finite model from the infinite equation. Indeed, as we have seen, the models we are trying to solve contain an infinite number of unknowns, since we must find the values at each point and time. This cannot be done by a computer, which only has finite memory and computational power. Therefore, we need to make the problem finite. Several techniques can be used to perform this discretization process, but they often share the same two steps: meshing and equation discretization. Both steps are explained below.

Meshing

The idea of this step is that, since we cannot calculate the values at every point and time, we will only compute them at some points and time values. Then, when we need to know the value at a given point, we infer it from the neighboring points where the values have been calculated, for example through linear interpolation (i.e. taking a weighted mean of the neighboring points).

A basic example is as follows. Consider a thin heated metal bar of length 10. The part has only one dimension, and we can divide its length into a number of segments (let’s say 5). If we know the temperature at each of the segment endpoints, we can build a piecewise linear curve that is close to the real solution.
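As a small illustration (in Python, with made-up nodal temperatures, not values from the article), evaluating the approximate solution between two mesh nodes is just a linear interpolation:

import numpy as np

# Hypothetical temperatures computed at the 6 nodes of the bar of length 10
nodes = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
T_nodes = np.array([20.0, 35.0, 50.0, 42.0, 30.0, 25.0])  # made-up values

# The temperature anywhere on the bar is inferred by linear interpolation
# between the two neighboring nodes (a weighted mean of their values)
T_at_3 = np.interp(3.0, nodes, T_nodes)
print(T_at_3)  # 42.5, halfway between T(2) = 35 and T(4) = 50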


The central idea of this process is that the solutions we are looking for are now fully described by a finite amount of information. Here, it is the values at 0, 2, 4, 6, 8 and 10. Thus, we can say that the space we are searching for a solution in now has a finite dimension (6 here), or that the solution has a finite number of degrees of freedom (DOFs).

In higher dimensions, discretizing space often involves building a mesh, which is a partition of the object into small triangles or tetrahedra. On each of these primitives, the final function will be linear. Here is the mesh of a 3D part.

Meshing of a profiled bar

On the other hand, time is very easily discretized by cutting the total time window into small time steps.

To understand model order reduction techniques, it is important to note that saying the function is an interpolation of values at some points is equivalent to saying it is a linear combination of simple functions. Going back to our example, the total solution is the sum of six simple functions, one per node.


Each of these basic functions is a scaled version of a more “primitive” function whose value is 1 at one point and 0 at every other point.


In mathematical language, these 6 basic functions are called a basis of our solution space. This means that we are looking for solutions that can be expressed as linear combinations of these functions (i.e. a sum of these functions, each multiplied by a scaling factor).
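The following sketch (reusing the made-up nodal values from the snippet above) checks this equivalence numerically: the interpolated temperature is exactly the linear combination of the six “hat” functions scaled by the nodal values.

import numpy as np

nodes = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
T_nodes = np.array([20.0, 35.0, 50.0, 42.0, 30.0, 25.0])  # same made-up values

def hat(i, x):
    # "Primitive" basis function: 1 at node i, 0 at every other node, linear in between
    values = np.zeros(len(nodes))
    values[i] = 1.0
    return np.interp(x, nodes, values)

x = np.linspace(0, 10, 201)
# The linear combination of the 6 basis functions, scaled by the nodal values...
T_basis = sum(T_nodes[i] * hat(i, x) for i in range(len(nodes)))
# ...coincides with the piecewise linear interpolation of the nodal values
assert np.allclose(T_basis, np.interp(x, nodes, T_nodes))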

Equation discretization

Now that our problem has been restricted to a limited number of degrees of freedom, we must adapt the equation to that kind of solution. A principle in mathematics states that for a system to have a unique solution, there must be as many equations as degrees of freedom.

There are different techniques to do so. The simplest one is to rewrite the equation at each point, approximating the space derivative by (T(x+Δx) - T(x))/Δx, i.e. by the difference with neighboring points. This is known as the finite difference method, and though it is the simplest one, it is only rarely used. The most widely used are the finite element method (FEM) and the finite volume method (mainly for fluid dynamics).

Below is a quick explanation of the finite element method. If you are not interested in the details, or you already know about it, you may jump to the next section that explains the core idea of model order reduction.

To understand the idea behind the finite element method, we can go back to our heat equation for a metal bar and the more natural finite difference technique. In this technique, we approximate the derivative at one point by the difference with the neighboring points: ∂T/∂x ≈ (T(x+Δx) - T(x))/Δx, where Δx is the distance to the next point. The problem is that, as we can see, we divide the difference by Δx to get a slope. If one wants a fine mesh for a precise calculation, then Δx will be very small. Because we divide by that very small number, ∂T/∂x can become very large and very sensitive to numerical errors. In the heat equation, it is even worse because we consider the second derivative, which involves dividing by Δx².

Therefore, the finite difference method is quite unstable. Another of its disadvantages is that there are few theoretical results about it. For example, we know little about the error made when using this technique.

To tackle the instability issue of finite differences, the idea of the finite element method is to use the inverse operation of differentiation: integration. Since integration acts like a mean, it seems logical that it makes for more stable models. However, integrating over the whole domain causes a loss of information. Indeed, two functions having the same mean does not make them equal. To solve this and preserve the richness of the original equation, we multiply both sides by a function, called the test function (written v in most cases). It can be seen as a windowing function that only preserves a part of the unknown function. This time, if two functions have the same mean over every window, then we can expect them to be equal. In fact, under the right hypotheses, we have an equivalence between the two formulations. For example, considering the heat equation in steady state, the two following formulations are equivalent (where V is the appropriate space of test functions):

−λΔT = p   ⟺   for all v in V, ∫ λ∇T·∇v dx = ∫ p v dx

The equation on the right-hand side is called the weak form and is the one used in the finite element method. After modifying the expression, the key idea of FEM is not only to search for a solution in a space Vh of finite dimension n, but also to test it with functions v in that same space. Since Vh is of finite dimension and the equation is linear in v, verifying the equation for all v in Vh is equivalent to verifying it for every element of its basis (v1, …, vn). That naturally leads to a set of n equations. By considering the n coefficients λ1, …, λn such that T = Σ λivi, we obtain a system of n equations with n unknowns. This system can be solved systematically.
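To make this concrete, here is a minimal sketch of the kind of system FEM produces, for an illustrative 1D steady-state heat equation with hat functions on a uniform mesh (a toy example, not the article’s 3D problem):

import numpy as np

# -lam * T'' = p on [0, 10] with T = 0 at both ends, n interior nodes
lam, p, length, n = 1.0, 1.0, 10.0, 50
h = length / (n + 1)

# Testing the weak form against each hat function gives a tridiagonal n x n system
A = (lam / h) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))  # stiffness matrix
b = p * h * np.ones(n)                                              # load vector
T = np.linalg.solve(A, b)   # nodal values of the approximate solution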

In conclusion, the finite element method consists in rewriting the equation in another form, the weak form, which involves a test function. The discretization is done by using test functions that have the same degrees of freedom as the solution space. Because we have used a weak formulation, we have many more mathematical results and a much better understanding of this technique. For example, Céa’s lemma gives an upper bound on the error made with this discretization: the distance between the real solution and the computed one is at most a constant times the distance between the real solution and its best approximation in the solution space.

Model order reduction

The core idea of model order reduction is to look for a solution in a specific basis that contains only a few elements but that we know will suffice to describe the solutions. The type of basis exemplified in the meshing section is very relevant when we don’t know what the solution will look like. Indeed, it offers good flexibility and we are sure that we will be able to describe any function on the domain well. Yet it is also quite naive:

First, we can expect the number of degrees of freedom really needed to describe the solutions to be somewhat comparable to the number of parameters. A mesh can contain millions of points, while we often consider only a handful of parameters.

Second, the elements of that basis are simple and convenient for calculation, but not physical at all. Indeed, in these elements, the value at a point is totally uncorrelated with the neighboring points, while in reality there is a strong link between values at nearby points. The same thing happens with digital images. The most basic way to describe one is to give the color of each pixel. It is a convenient and simple description, but it does not exploit the fact that the color of one pixel is typically close to the ones around it. From that observation, people had the idea of expressing an image in another basis (using a Fourier-type decomposition) to compress its size. This is the core of image compression and of file formats like JPEG.

Thus, the idea is to simulate some solutions using a full-order model and then to deduce from them a smaller basis that is able to express solutions close to the full order ones. Then, we only need to derive equations in this new solution space. Since this space has a limited number of degrees of freedom, the derived model will only consist of that same low number of equations. Therefore, it will be almost immediate to solve. Moreover, since the solution space was built to be able to describe solutions that are close to the real one, we know that the solution of our reduced order model will be close to the full order one.
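Schematically, for a linear stationary full-order system A T = b with A of size n × n and a reduction basis stored as the columns of a matrix V (n × r with r much smaller than n), the reduced model is obtained and solved by a Galerkin projection. A minimal numpy sketch (illustrative names, not pyMOR code):

import numpy as np

def reduce_and_solve(A, b, V):
    # Project the full-order system onto the reduced basis V and solve it
    A_r = V.T @ A @ V                    # small r x r reduced operator
    b_r = V.T @ b
    coeffs = np.linalg.solve(A_r, b_r)   # almost immediate to solve
    return V @ coeffs                    # reduced solution expressed back in the full space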

There is not much more to know about the basics of model order reduction. We can now explore some ways to build the reduction basis. The most basic way is simply to run some simulations and use the solutions directly as the basis (after orthonormalizing them). However, it is possible to do better. Below is a quick description of some algorithms that can be used.

Proper Orthogonal Decomposition

This technique is close to PCA (principal component analysis), which is a very common technique in data science. The idea is to extract a basis of lower dimension that describes a given set of vectors as well as possible. So, when using POD, the pipeline is first to perform a number of simulations (let’s say 100) and then to extract a smaller basis (say 20) that describes the results obtained as well as possible.
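A minimal sketch of POD based on the singular value decomposition (here with respect to the plain Euclidean inner product; libraries such as pyMOR also handle other inner products):

import numpy as np

def pod_basis(snapshots, r):
    # snapshots: matrix with one full-order solution per column.
    # Returns an orthonormal basis of size r that best describes them
    # in the least-squares sense.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]   # the r leading left singular vectors

# e.g. basis = pod_basis(S, 20) where S gathers 100 training solutions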

Strong greedy algorithm

In this algorithm, the idea is to build the basis step by step. Starting from a set of results, we choose one that becomes the first vector of the reduction basis. The reduced order model related to this basis is created, and one can compute the solutions for all the other values from the set of parameters. Then, the solution with the biggest difference between full order and reduced order is added to the basis. In doing so, we lower the error for the solution that was the least well described. This operation is repeated over and over until we reach the desired number of vectors, as sketched below.
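A schematic Python sketch of that loop; for simplicity, the reduced-order solution is replaced here by the orthogonal projection of each snapshot onto the current basis, which plays the same role in the selection criterion:

import numpy as np

def strong_greedy(snapshots, n_basis):
    # snapshots: list of full-order solutions (1D numpy arrays)
    basis = []
    for _ in range(n_basis):
        # Error of each snapshot w.r.t. its best approximation in the current basis
        errors = []
        for u in snapshots:
            proj = sum(np.dot(u, v) * v for v in basis) if basis else np.zeros_like(u)
            errors.append(np.linalg.norm(u - proj))
        worst = snapshots[int(np.argmax(errors))]   # least well described solution
        # Orthonormalize it against the current basis and add it
        v = worst - (sum(np.dot(worst, w) * w for w in basis) if basis else np.zeros_like(worst))
        basis.append(v / np.linalg.norm(v))
    return basis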

Weak greedy algorithm

The two aforementioned algorithms require computing the solutions for every parameter of a training set. The weak greedy algorithm follows the approach of the strong greedy algorithm, but we do not need to compute all the exact solutions, because the error is not computed exactly but only estimated from the ROM.

To learn more about these algorithms, I recommend reading this tutorial, which goes deeper into how they work and their advantages and drawbacks.

Implementation

In this section, we introduce a concrete application of this technique, in which we efficiently build a reduced model for a thermal simulation that could later be used for design optimization. For example, thanks to this model, we will be able to very quickly evaluate the effect of switching from one material to another.

An electronic use case

The problem we are going to study is the cooling of an electronic die. The domain is composed of a heating die encased in a casing that touches a heat sink, separated from it only by a thin layer of thermal paste. We will not simulate the air flow, but only consider Newton’s law of cooling. It states that the heat flow between a solid’s surface and a fluid through a small surface element dS is dq = -h(T-Text)dS. Here, Text is the air temperature.

The geometry is the following:

In purple is the heat sink. It would typically be made of metal, either aluminum or copper.

The yellow part is the die, which emits heat with a constant power density.

The die is encased in the green part, which is the packaging. It can be made of plastic.

Between the heat sink and the casing is a very thin blue layer of thermal paste.

The whole assembly would normally sit on top of a PCB (printed circuit board), but I decided not to model it for the sake of simplicity.


The parameters considered are:

  • Thermal conductivity and volumetric thermal capacity (ρc) for each of the 4 materials.
  • The die power density Φ.
  • The film coefficient h that appears in Newton’s law.
  • The model is solved in time for a duration of 10 minutes, which is enough to reach steady state.

Tools used

The full pipeline carried out in this implementation uses 3 tools:

FeniCS is an open-source solver of PDEs that uses the finite element method. Its Python interface makes it very convenient to use together with other tools and scripting. It is only a solver, which means that the user needs to specify the equations to solve themselves. This makes it a little less accessible than physical modelers. However, the mathematical expressions of simple models are easy to find on the internet or in the FeniCS manual.

pyMOR is a python library for building reduced order models. It uses abstract interfaces, so it can be interfaced with an external solver like FeniCS. In fact, some bindings are provided to integrate FeniCS in pyMOR. It can perform all the algorithms mentioned above and many others.

Qarnot is an economical and green cloud computing platform. We will use it to perform all the initial simulations in parallel to save a lot of time. Thanks to it, we will be able to build our reduced model tens of times quicker.

Writing the problem in FeniCS

The first step in building the reduced order model is to build the high-dimension model to have sample solutions. To do so, we use FeniCS. All the code related to the model is written in the file model.py.

The first step in making the high-order model is making the mesh. In our case, we simply load it from a file. The mesh was made using the CAD modeler FreeCAD together with its integrated FEM module, which calls a dedicated mesher. In the following code, we start by importing dolfin, which is the Python interface to FeniCS. After defining some geometric constants, we define a function to load the mesh.

import dolfin as df
from pymor.models.basic import InstationaryModel

'''These are the geometric constants that were used for the mesh creation.
They are needed for the subdomain definitions.'''
die_l = 20/1000
die_h = 3/1000
die_w = die_l
casing_l = 30/1000
casing_h = 5/1000
casing_w = casing_l
paste_h = 0.2/1000
block_l = 40/1000
block_h = 5/1000
block_w = 40/1000

domain_dict = {'block':1,
               'paste':2,
               'casing':3,
               'die':4}


def load_mesh(filename):
    return df.Mesh(filename)

Once the mesh is loaded, we need to define the subdomains for the different materials and boundary conditions. This is done in make_measures(mesh), which returns two objects called measures that contain information about the subdomains.

def make_measures(mesh):
    '''Defines subdomains for the mesh and returns associated measures'''
    tol = 1e-7  # Constant for numerical precision

    boundary_air = df.CompiledSubDomain('on_boundary && x[2] > 0 - tol',
                                tol=tol)

    sub_die = df.CompiledSubDomain('x[0] <= xb + tol && x[0] >= -xb - tol '
                                   '&& x[1] <= yb + tol && x[1] >= -yb - tol '
                                   '&& x[2] <= z1 + tol && x[2] >= z0 - tol',
                                   tol=tol, xb = die_l/2, yb = die_w/2,
                                   z1 = -paste_h-casing_h+die_h, z0 = -paste_h-casing_h)

    sub_casing = df.CompiledSubDomain('x[0] <= xb + tol && x[0] >= -xb - tol '
                                      '&& x[1] <= yb + tol && x[1] >= -yb - tol '
                                      '&& x[2] <= z1 + tol && x[2] >= z0 - tol',
                                      tol=tol, xb = casing_l/2, yb = casing_w/2,
                                      z1 = -paste_h, z0 = -paste_h-casing_h)

    sub_paste = df.CompiledSubDomain('x[0] <= xb + tol && x[0] >= -xb - tol '
                                     '&& x[1] <= yb + tol && x[1] >= -yb - tol '
                                     '&& x[2] <= z1 + tol && x[2] >= z0 - tol',
                                     tol=tol, xb = casing_l/2, yb = casing_w/2,
                                     z1 = 0, z0 = -paste_h)

    markers = df.MeshFunction('size_t', mesh, 3)
    markers.set_all(domain_dict['block'])
    sub_casing.mark(markers, domain_dict['casing'])
    sub_die.mark(markers, domain_dict['die'])
    sub_paste.mark(markers, domain_dict['paste'])

    boundary_markers = df.MeshFunction('size_t', mesh, 2)
    boundary_markers.set_all(0)
    boundary_air.mark(boundary_markers, 1)

    dx = df.Measure('dx', domain=mesh, subdomain_data=markers)
    ds = df.Measure('ds', domain=mesh, subdomain_data=boundary_markers)
    return dx, ds

Then, we need to write the heat equation in FeniCS. We will first establish its strong form and then its weak form. The latter is needed for FeniCS.

The heat equation comes from the combination of two equations. The first one states that the increase in temperature is proportional to the heat received, which is the sum of the heat produced at the point (written p here) and the difference between the incoming and outgoing heat flows (which can be written -∇·j, where j is the heat flow):

ρcp ∂T/∂t = p − ∇·j

The second equation is Fourier’s law, which says that the heat flow is proportional to the gradient of temperature (which can be written ∇T):

j = −λ∇T

Combining these two equations, we obtain the heat equation (ΔT is the Laplace operator applied to T which is a shorthand for ∇·(∇T)):

ρcp ∂T/∂t = λΔT + p

The boundary conditions are the following, where Tair is the air temperature. Γair designates the boundary of the domain where the heat sink is in contact with air. The expression naturally comes from Newton’s law of cooling.

−λ ∇T·n = h(T − Tair) on Γair   (n being the outward normal to the surface)

Since the equation only involves the air temperature through the difference T − Tair, we can shift temperatures and set Tair = 0 (T then represents the temperature rise above ambient). We also consider the initial condition T = Tair = 0. Finally, p is constant, equal to Φ inside the die and 0 elsewhere.

To obtain the weak form, we multiply both sides by the test function and integrate:

∫ ρcp ∂T/∂t v dx = ∫ λΔT v dx + ∫ p v dx

Then, we use Green’s identity:

∫ λΔT v dx = −∫ λ∇T·∇v dx + ∫ λ (∇T·n) v ds

Using the expression of the boundary conditions and the simplification Tair = 0, we obtain the final form:

∫ ρcp ∂T/∂t v dx + ∫ λ∇T·∇v dx + ∫Γair h T v ds = ∫ p v dx

The first term contains the inertia of the system and leads to the mass matrix. The second represents the diffusion phenomenon and the third the boundary condition with air. The fourth term is the source term. Here, cp, ρ and λ only depend on the material, so they are constant over each subdomain. This means that we can write each integral as a sum of integrals over the subdomains. For example, on the mass side we can write:

∫ ρcp ∂T/∂t v dx = Σi ρi cp,i ∫Ωi ∂T/∂t v dx   (Ωi being the subdomain of material i)

In the function make_pymor_bindings, we define each integral part over each subdomain and assemble their matrices with the following code:

def make_pymor_bindings(V, dx, ds, solver_options):
    import pymor.basic as pmb
    from pymor.bindings.fenics import FenicsVectorSpace, FenicsMatrixOperator
    from pymor.algorithms.timestepping import ImplicitEulerTimeStepper

    space = FenicsVectorSpace(V)

    # Operators matrix
    u = df.TrialFunction(V)
    v = df.TestFunction(V)
    mass_mat = [df.assemble(df.inner(u, v)*dx(i)) for i in domain_dict.values()]
    diff_mat = [df.assemble(df.inner(df.grad(u), df.grad(v))*dx(i)) for i in domain_dict.values()]
    robin_left_mat = df.assemble(u*v*ds(1))
    source_mat = df.assemble(v*dx(domain_dict['die']))
    
    # Function continues here with pymor bindings (see below)

Notice the use of dx(i) to denote an integration over the subdomain i. Now, the total mass matrix would be something like: mass_mat_tot = sum([c_p[i]*rho[i]*mass_mat[i] for i in domain_dict.values()]). Then, we could build the total diffusion matrix and solve the total system. However, we are not going to do this directly and rather use pyMOR functionalities to handle the parametric part.

That’s why we leave FeniCS at this point (actually, we also create some matrices to calculate error norms, which will be useful later). The only other places where FeniCS is used are for reading and writing purposes.

Binding FeniCS with pyMOR

Now that we have made the base operators, we switch to the higher-level pyMOR interface. pyMOR works through interactions with abstract objects called Operators. These objects encapsulate the matrices and vectors used by the solvers.

Therefore, the first thing is to create a pyMOR operator object for each FeniCS operator we have made before (pmb is the pymor module):

# Operators
mass_op = [FenicsMatrixOperator(M, V, V, solver_options=solver_options) for M in mass_mat]
diff_op = [FenicsMatrixOperator(M, V, V, solver_options=solver_options) for M in diff_mat]
robin_left_op = FenicsMatrixOperator(robin_left_mat, V, V, solver_options=solver_options)
source_op = pmb.VectorOperator(space.make_array([source_mat]))

Now, we create two lists of parameter functionals, project_lamb and project_capa, whose components select the different λi and cp,iρi.

project_lamb = [pmb.ProjectionParameterFunctional(f'lamb_{dom}', 1, 0) for dom in domain_dict.keys()]
project_capa = [pmb.ProjectionParameterFunctional(f'capa_{dom}', 1, 0) for dom in domain_dict.keys()]

Then, we can create the final operators: mass, op (diffusion + air boundary) and rhs (source):

mass = pmb.LincombOperator(mass_op, project_capa)

op = pmb.LincombOperator([*diff_op,
                          robin_left_op],
                          [*project_lamb,
                          pmb.ExpressionParameterFunctional('h[0]', {'h':1})])

rhs = pmb.LincombOperator([source_op],
                          [pmb.ExpressionParameterFunctional('phi[0]', {'phi':1})])

Here we introduced two new parameters on the pyMOR side, h and phi, both of dimension 1. The very useful thing about this construction is that, since we only introduced the parameters on the pyMOR side and wrote the operators as linear combinations of non-parametric operators with coefficients that only depend on the parameters, we are in a case of parameter separation. Thanks to that, pyMOR will be able to perform an online/offline decomposition. This means that when we project the model onto our reduced basis, we can project all the non-parametric parts once and for all, so assembling the reduced system for a new parameter value will be very quick.
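To illustrate what this buys us, here is a schematic numpy sketch (not pyMOR code) of the offline/online split: the non-parametric operators are projected once offline, and assembling the reduced operator for a new parameter value is then just a small weighted sum.

import numpy as np

def project_offline(operators, V):
    # Done once, independently of the parameter values
    return [V.T @ A @ V for A in operators]

def assemble_online(reduced_operators, coefficients):
    # coefficients are the parameter-dependent scalars (lambda_i, rho_i*cp_i, h, ...)
    return sum(c * A_r for c, A_r in zip(coefficients, reduced_operators))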

Let’s continue on with our modeling. The next step is to create our model:

end_time = 3*60
dt = 1
fom = InstationaryParametricMassModel(
                        T = end_time, 
                        initial_data=space.make_array([df.project(df.Constant(0), V).vector()]),
                        operator=op,
                        rhs=rhs,
                        mass=mass,
                        time_stepper=ImplicitEulerTimeStepper(end_time//dt),
                        num_values= None,
                         products={'l2': FenicsMatrixOperator(l2_mat, V, V),
                                   'l2_0': FenicsMatrixOperator(l2_0_mat, V, V),
                                   'h1': FenicsMatrixOperator(h1_mat, V, V),
                                   'h1_0_semi': FenicsMatrixOperator(h1_0_mat, V, V)},
                        visualizer = None
                        )
return fom

This step basically consists in passing all our previously built operators to a Model object from pyMOR. Here, we could not use pyMOR’s standard InstationaryModel because it does not support a parametric mass matrix. Since we needed a parametric mass to keep the parameter separation, we just wrote a child class that computes the mass matrix before calling the standard InstationaryModel method. The time_stepper parameter is an object that performs all the time discretization work for us. Here we used an implicit Euler scheme. Now that we have our model, we just need to define a parameter value mu and call fom.solve(mu) to get the solution for these parameters.
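For reference, a single implicit Euler step for the semi-discrete system M dT/dt + A T = b amounts to solving one linear system per time step. A sketch with dense numpy matrices for illustration (the time stepper does the equivalent with the operators defined above):

import numpy as np

def implicit_euler_step(M, A, b, T_old, dt):
    # (M + dt*A) T_new = M T_old + dt*b
    return np.linalg.solve(M + dt * A, M @ T_old + dt * b)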

We finally define make_fom that builds the whole model and make_param_space that specifies bounds for each parameter value and returns a ParameterSpace object.

def make_fom(filename, option={'solver': 'cg', 'preconditioner': 'ilu'}):
    '''Returns the full order model specified from the mesh filename and solver options'''
    mesh = load_mesh(filename)
    dx, ds = make_measures(mesh)
    V = make_space(mesh)
    return make_pymor_bindings(V, dx, ds, option)


def make_param_space():
    '''Define ranges for parameter values and return the associated parameter space
        Return:
            pymor.ParameterSpace: the parameter space'''
    import pymor.basic as pmb
    params = pmb.Parameters({
                    'lamb_die': 1,
                    'lamb_block': 1,
                    'lamb_paste': 1,
                    'lamb_casing': 1,
                    'capa_die': 1,
                    'capa_block': 1,
                    'capa_paste': 1,
                    'capa_casing': 1,
                    'h': 1,
                    'phi': 1})

    capa_range = (10**5, 10**7)
    param_range = {'lamb_die': (5, 100),
                   'lamb_block': (100, 500),
                   'lamb_paste' : (0.5, 10),
                   'lamb_casing': (10, 200),
                   'capa_die': capa_range,
                   'capa_block': capa_range,
                   'capa_paste': capa_range,
                   'capa_casing': capa_range,
                   'h' : (5, 100),
                   'phi': (2 / die_h / die_l / die_l, 30 / die_h/ die_l / die_l)}
    return pmb.ParameterSpace(params, param_range)

If we were to perform the model reduction locally, it would be as simple as this:

from model import make_fom, make_param_space
import pymor.basic as pmb

fom = make_fom('mesh.xml')
param_space = make_param_space()

param_set = param_space.sample_randomly(120)
T = fom.solution_space.empty()
for mu in param_set:
    T.append(fom.solve(mu))

reduction_basis, singular_values = pmb.pod(T, product=fom.h1_0_semi_product, modes=25)
reductor = pmb.InstationaryRBReductor(fom,
                                      RB=reduction_basis,
                                      product=fom.h1_0_semi_product)
rom = reductor.reduce()

If we break down this code, it consists of the following steps:

Some imports

Creating our model and the parameter space

Sampling 120 parameter values from the parameter space and solving the model for each of them. We append the solutions for every time step to a VectorArray T, which is basically a list of vectors.

Computing a reduction basis through the POD algorithm. We decided to keep 25 vectors for our reduction basis.

Creating a pyMOR Reductor object that is used to reduce the model and return it as rom. Now, if we call rom.solve, the result is given in a few milliseconds, while fom.solve takes several minutes on a laptop.

However, doing this on a personal computer would take hours, even for this quite simple model. That’s why we are going to use Qarnot’s cloud computing platform to perform all the solution computations in parallel.

Parallelizing with Qarnot

The idea of using Qarnot is to start a handful of instances on the cloud that will divide all the different parameter values among themselves. They will compute all the solutions in parallel, thus dividing the time needed by the number of instances. Then another instance will gather the results to build the reduced order model. After that, it will be possible to save the ROM and use it in other tasks to test it and validate its effectiveness.
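The exact splitting logic lives in main.py (not shown here); the idea can be sketched as follows, using the INSTANCE_ID environment variable that Qarnot provides to each instance. This is an illustrative sketch, not the actual script:

import os
from model import make_fom, make_param_space

instance_id = int(os.environ.get('INSTANCE_ID', 0))
n_instances = 30

fom = make_fom('mesh.xml')
# All instances must agree on the same parameter list (e.g. by fixing the random
# seed or reading the parameters from a shared file)
param_set = make_param_space().sample_randomly(120)
my_params = param_set[instance_id::n_instances]   # this instance's share of the work

solutions = [fom.solve(mu) for mu in my_params]
# The solutions would then be written to train/u{INSTANCE_ID}.h5 for the rom-build task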

Before digging into the full task architecture, let’s see what is needed to send a computation task on Qarnot’s platform. In general, three things are needed:

A computation environment. For the computation to run on the node, we need pyMOR and FeniCS to be installed on it. This environment is created through a Docker container. Qarnot has created a specific container for that purpose, with pyMOR installed on top of the official FeniCS container. In the code, the container is specified through the constants DOCKER_REPO and DOCKER_TAG.

Resource files. These are the files that will be available on the server. For example, to build the model, we’ll need the mesh and the python file model.py. On Qarnot, data management is done through S3 buckets. It is possible to load a file from the disk into a bucket or to download the content of a bucket to the disk. A bucket can also be specified to receive the results from a task.

A command that will be run on the node. It can be specified under the constant DOCKER_CMD. In most cases, it will be to run a python script sent with the resource files. All the scripts used are in the input directory.

As an example, here is a snippet of the code run.py to create the task to solve the model for 120 parameter values on 30 nodes (this is the train task that we will see afterwards):

import qarnot

conn = qarnot.Connection('qarnot.conf')
# [...]
DOCKER_REPO = 'qarnotlab/pymor_fenics'
DOCKER_TAG = '2020.2.0_2019.1.0'

TRAIN_PARAM_NB = 120  # Number of parameter values for training
TRAIN_INST = 30  # Number of instances that will compute in parallel
# [...]
input_bucket = conn.create_bucket('input')
input_bucket.sync_directory('input')  # adding the content of directory ./input into the resource bucket
fom_res_bucket = conn.create_bucket('fom-results')
# [...]
train_task = conn.create_task('train', 'docker-batch', TRAIN_INST, job=job)
train_task.constants['DOCKER_REPO'] = DOCKER_REPO
train_task.constants['DOCKER_TAG'] = DOCKER_TAG
# h5repack compresses the solution files to quicken data exchanges
train_task.constants['DOCKER_CMD'] = (f"'python3 main.py -n {TRAIN_PARAM_NB} -o train/u && "
                                      "h5repack -f GZIP=1 train/u${INSTANCE_ID}.h5 train/u${INSTANCE_ID}_c.h5'")
train_task.resources.append(input_bucket)
train_task.results = fom_res_bucket
train_task.results_whitelist = r'_c.h5'  # we only want to withdraw compressed files
# [...]
train_task.submit()

The full pipeline to build and test the reduced order model contains 6 tasks.


The workflow is the following:

  1. The train task computes, on a number of instances, the full order solutions for a random set of parameters
  2. rom-build then builds the reduction basis and reduces the model with the POD algorithm, thanks to the previously computed solutions. The basis and the model are then saved to the rom bucket
  3. write-param then writes a set of parameters to a file. These are the parameters used to validate the ROM's performance.
  4. rom-val and fom-val then read this file and solve the model (with the reduced and the full-order model respectively) for these parameters. The output solutions are then stored in two buckets.
  5. rom-compare finally compares the solutions and reports the error made by the ROM with respect to the full-order model.

In addition to the buckets mentioned above, all the tasks also receive the input bucket, which contains all the Python scripts and data needed to build the model.

The whole code to create these tasks and coordinate them is in run.py. It is pretty straightforward. The dependencies between tasks are handled by Qarnot by creating a job and creating all the tasks as part of that job.

You can download all the code here. To run it, simply unzip the file, move to the folder containing run.py, copy your Qarnot configuration file into the same folder and run python3 run.py. At any point, you can monitor your tasks' progress in Tasq. You will also find help in the readme.

Results

In this test case, I used 120 solutions to build the reduction basis. From each of these solutions, 14 timestamps are kept to form the list of snapshots. These timestamps are chosen to put emphasis on early times, since the transient phase shows more changes than the steady part. 50 vectors are finally kept to form the reduction basis.

To validate our model, we solve the model for 50 additional parameter values and compare the results between the full order model and the reduced order model. The worst relative error is reported in the output of the task rom-compare.


A scatter plot of the relative error for each simulation at each time step is available in the compare bucket.


Our ROM is therefore very precise. We can also look at the evolution of the error with respect to the number of vectors we incorporate in the reduction basis.


The error decreases exponentially with the basis size, which is consistent with mathematical results.

The other important thing to measure is the improvement in time efficiency. Here we compare the execution times needed for the various steps, as reported in the output of the run.py script.


We can see that the reduced order model is about a thousand times quicker, which demonstrates its efficiency and its suitability for real-time simulation.

On top of that, the training represented about 6 hours of computation, but thanks to our parallelization work it only took 15 minutes of wall-clock time. The total time for the whole process is around 30 minutes.

That’s it for this post! If you want to go further, I encourage you to read the excellent pyMOR tutorials.
