Inconsistent Results with Different Methods

I’m facing an issue in a C++ project where two different methods for computing the same values are giving inconsistent results, despite both using the same input data from the same array.

Method 1 (Direct Calculation): This method calculates values on-the-fly for each element, using data directly from the array each time it is called.

Method 2 (Precomputed Values): This method computes all values once at the beginning and stores them in a separate array, which is then used for further calculations.

Obtaining the density for the first method:
double Node::Density()
{
	double density = 0;
	for (int dir = 0; dir < _nLatNodes; dir++)
	{
		density += this->m_distributions[dir];	
	}
	return density;
}

Obtaining the velocity for the first method:
std::vector<double> Node::Velocity(std::shared_ptr<VelocitySet> velSet)
{
	std::vector<double> velocity;
	velocity.reserve(3);
	velocity.push_back(0);
	velocity.push_back(0);
	velocity.push_back(0);

	for (int dir = 0; dir < _nLatNodes; dir++)
	{
		for (int dim = 0; dim < 3; dim++)
		{
			velocity[dim] += velSet->GetDirection(dir)[dim] * this->m_distributions[dir];
		}
	}

	for (int dim = 0; dim < 3; dim++)
	{
		velocity[dim] /= Density();
	}

	return velocity;
}

Computing the density and velocity for the second method (density_ and Velocity_ are class member variables):
void Node::Compute_macroscopic(std::shared_ptr<VelocitySet> velSet) {

	density_ = 0;
	Velocity_.assign(3, 0.0); // assign() zeroes all three components; resize(3, 0.0) leaves stale values if this is called more than once
	for (int dir = 0; dir < _nLatNodes; dir++) {

		density_ += this->m_distributions[dir];
		Velocity_[0] += velSet->GetDirection(dir)[0] * this->m_distributions[dir];
		Velocity_[1] += velSet->GetDirection(dir)[1] * this->m_distributions[dir];
		Velocity_[2] += velSet->GetDirection(dir)[2] * this->m_distributions[dir];
	}
	Velocity_[0] /= density_;
	Velocity_[1] /= density_;
	Velocity_[2] /= density_;
}

Computing equilibrium values on-the-fly (first method):
double Node::Equilibrium(std::shared_ptr<VelocitySet> velSet, int dir)
{
	// Compute the velocity once rather than calling Velocity() six times per invocation.
	const std::vector<double> u = Velocity(velSet);
	double du = u[0] * velSet->GetDirection(dir)[0] + u[1] * velSet->GetDirection(dir)[1] + u[2] * velSet->GetDirection(dir)[2];
	double u_sqr = u[0] * u[0] + u[1] * u[1] + u[2] * u[2];

	return velSet->GetWeight(dir) * Density() * (1 + 3 * du + 9.0 / 2.0 * du * du - 3.0 / 2.0 * u_sqr);
}

The second way of obtaining equilibrium values:
void Node::F_Eq(std::shared_ptr<VelocitySet> velSet)
{
	double du = 0.0;
	double u_sqr = 0.0;
	m_eqdistributions.resize(_nLatNodes);
	Compute_macroscopic(velSet);
	for (int dir = 0; dir < _nLatNodes; dir++) {
		du = Velocity_[0] * velSet->GetDirection(dir)[0] +
			 Velocity_[1] * velSet->GetDirection(dir)[1] +
			 Velocity_[2] * velSet->GetDirection(dir)[2];
		u_sqr = Velocity_[0] * Velocity_[0] +
			    Velocity_[1] * Velocity_[1] +
			    Velocity_[2] * Velocity_[2];

		m_eqdistributions[dir] = velSet->GetWeight(dir) *
			density_ * (1 + 3 * du + 9.0 / 2.0 * du * du - 3.0 / 2.0 * u_sqr);
	}
}

Now, in the Simulation class, I obtain the equilibrium values in these two different ways and I'm getting different answers.

void Simulation::Collision()
{

    for (int f = 0; f < SubDomain_.Involved_lat_.size(); f++) {
        if (SubDomain_.Involved_lat_[f]->NodeType_ != NodeType::Halo) {

            int i = SubDomain_.Involved_lat_[f]->x_position;
            int j = SubDomain_.Involved_lat_[f]->y_position;
            int k = SubDomain_.Involved_lat_[f]->z_position;

            SubDomain_.Involved_lat_[f]->F_Eq(SubDomain_.m_velSet);

            for (int dir = 0; dir < _nLatNodes; dir++) {

                //SubDomain_.m_lattice[i][j][k]->m_newDistributions[dir] =
                //    SubDomain_.m_lattice[i][j][k]->m_distributions[dir] -
                //    (SubDomain_.m_lattice[i][j][k]->m_distributions[dir] -
                //        SubDomain_.m_lattice[i][j][k]->Equilibrium(SubDomain_.m_velSet,dir)) * (m_dt / m_relaxation);

                SubDomain_.m_lattice[i][j][k]->m_newDistributions[dir] =
                    SubDomain_.m_lattice[i][j][k]->m_distributions[dir] -
                    (SubDomain_.m_lattice[i][j][k]->m_distributions[dir] -
                        SubDomain_.m_lattice[i][j][k]->m_eqdistributions[dir]) * (m_dt / m_relaxation);
            }
        }
    }
}

Any help would be appreciated.
Well, right off, it's one of two things: either one or both methods have a bug, or you have floating-point problems.

If the values are 'correct' up to some number of significant digits and diverge way out in the far decimal places, it's floating-point aggravation. That can even roll up, for calculations with a great many steps, until the error is so large the answers become totally wrong.

If the answers are totally wrong, it's probably a bug. If the answers are right up to some number of digits and you need more precision, the first thing you need is the right answer... maybe from a high-end math package like Maple or MATLAB. If you lack that, you may have to craft inputs that give known outputs in a smaller, simpler problem, and see what your code does with them.

In either case, the fastest way to find it is to stop and print intermediate values at each step of the calculation to see where one of the two processes gets a different result. Then figure out what is behind that.
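As a standalone illustration (not code from this project), even reordering a mathematically identical sum can change the result in the last bits:

```cpp
#include <cassert>
#include <cmath>

// Two mathematically identical sums of the same three doubles,
// evaluated in different orders. With IEEE 754 doubles the results
// differ by roughly one unit in the last place.
double sum_left()  { return (0.1 + 0.2) + 0.3; }
double sum_right() { return 0.1 + (0.2 + 0.3); }
```

Over millions of lattice updates, differences of this size can compound into visible divergence between two otherwise-equivalent computations.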
@jonnin Thanks for your explanation. Shouldn't both methods give the same results even if they are wrong? I mean, in either case I should get the same wrong answer. By the way, I am getting an almost correct answer with method 1.
Use a debugger to step through the code and find where in that expression the computation starts to diverge.

Is there any way you can share enough code to compile and run?
No, you may or may not get the exact same answer. It's pretty common, even with the same approach but a different order of operations (sometimes caused by the compiler/optimizations and not easy to see), to have minor variations out in the extreme digits. This isn't important until you have nearly the right answer from both pieces of code; then you can deal with minor floating-point problems if necessary.

Sounds like you have a bug hunt from here.
@mbozzi
Thank you for your answer. Unfortunately, this is a very big code base which I have been writing for almost two years. The code simulates blood flow containing cancer cells. In the lattice Boltzmann method, each grid point holds distribution functions representing fluid properties like density and velocity. For our case, we're using the D3Q27 and D3Q39 models, meaning each lattice point holds a vector of 27 or 39 distribution functions. During collision, these functions are updated based on collision rules, yielding new values. Then, during streaming, the updated values propagate to 26 or 38 neighboring lattice points, depending on the model. So the process involves updating the distribution functions at each grid point, colliding them to get new values, and then streaming these values to adjacent points. The code has worked fine for many cases, and it also works well for a few iterations in this case, but after some point the issue arises, which is why pinpointing the bug is a little tricky. Any suggestion on how to narrow down the problem is appreciated.
@jonnin Thank you for the explanation. So, are you implying that a bug is the likely cause of the disparity in results between the two methods?
Hi,

Some suggestions:

What warning levels do you have when compiling? And which compiler are you using? Try to turn on as many warnings as is feasible, as any reported warnings are quite likely to lead to run time problems. Really, one is not done until all warnings have been fixed.
The loop condition f < SubDomain_.Involved_lat_.size() in Simulation::Collision() should have attracted a warning because of mixed signed types: the size() function returns std::size_t. Make f of type std::size_t to make this go away.

I would reiterate the idea of using a debugger, or printing out values to see where things go wrong

Some helper functions might make the code cleaner, potentially avoiding confusion. In void Node::F_Eq(std::shared_ptr<VelocitySet> velSet), and in the other functions too, the following look like dot products to me:

du = Velocity_[0] * velSet->GetDirection(dir)[0] +
     Velocity_[1] * velSet->GetDirection(dir)[1] +
     Velocity_[2] * velSet->GetDirection(dir)[2];
u_sqr = Velocity_[0] * Velocity_[0] +
        Velocity_[1] * Velocity_[1] +
        Velocity_[2] * Velocity_[2];


So you could make them functions. C++ already has std::inner_product to do this.
https://en.cppreference.com/w/cpp/algorithm/inner_product
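For example (a sketch using plain std::vector arguments rather than the project's actual types), the two accumulations could become:

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Dot product via the standard library; replaces the hand-unrolled sums.
double dot(const std::vector<double>& a, const std::vector<double>& b)
{
    // 0.0 (not 0) keeps the accumulation in double.
    return std::inner_product(a.begin(), a.end(), b.begin(), 0.0);
}

// du and u_sqr from F_Eq, expressed with the helper.
double compute_du(const std::vector<double>& velocity,
                  const std::vector<double>& direction)
{
    return dot(velocity, direction);
}

double compute_u_sqr(const std::vector<double>& velocity)
{
    return dot(velocity, velocity);
}
```

Note the double literal 0.0 as the initial value: with a plain 0 the accumulation would be done in int, a classic std::inner_product pitfall.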

Also consider using the range based for loop when you want to do something to all the elements of a container, it's less error prone.

// MyContainer is any STL container, e.g. std::vector.
// Use const auto& if you are not changing the elements.
for (auto& Item : MyContainer) {
    // do something with Item
}


Range based for can also have an initialiser, and structured bindings.
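A quick sketch (generic example code, not from the project) of structured bindings in a range-based for:

```cpp
#include <cassert>
#include <map>
#include <string>

// Sum the values of a map, using structured bindings (C++17) to
// unpack each key/value pair in the range-based for loop.
int sum_values(const std::map<std::string, int>& m)
{
    int total = 0;
    for (const auto& [key, value] : m) {
        (void)key;      // key unused in this example
        total += value;
    }
    return total;
}
```

Since C++20 the loop header can also carry an init-statement, e.g. for (int count = 0; const auto& [key, value] : m).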

When there are complex expressions consider inventing some new variables to make a simpler expression at the end that is easier to read. In void Simulation::Collision():

SubDomain_.m_lattice[i][j][k]->m_newDistributions[dir] =
    SubDomain_.m_lattice[i][j][k]->m_distributions[dir] -
    (SubDomain_.m_lattice[i][j][k]->m_distributions[dir] -
        SubDomain_.m_lattice[i][j][k]->m_eqdistributions[dir]) * (m_dt / m_relaxation);

One could create some new variables, or do a similar thing with using statements.
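For example (a sketch with a simplified stand-in Node struct, since the real class isn't shown here), a reference and a named relaxation factor bring the update close to the textbook BGK form f_new = f - (f - f_eq) * dt/tau:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified stand-in for the project's Node class.
struct Node {
    std::vector<double> m_distributions;
    std::vector<double> m_eqdistributions;
    std::vector<double> m_newDistributions;
};

// BGK collision for one node: f_new = f - (f - f_eq) * dt/tau.
void collide(Node& node, double dt, double relaxation)
{
    const double omega = dt / relaxation;   // named once, used per direction
    node.m_newDistributions.resize(node.m_distributions.size());
    for (std::size_t dir = 0; dir < node.m_distributions.size(); ++dir) {
        const double f   = node.m_distributions[dir];
        const double feq = node.m_eqdistributions[dir];
        node.m_newDistributions[dir] = f - (f - feq) * omega;
    }
}
```

In the real Collision() one could do the same with auto& node = *SubDomain_.m_lattice[i][j][k]; so each term is mentioned once.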

I hope these suggestions help to make your coding easier :+)
Last edited on
@TheIdeasMan Thanks for the tips; they're really useful. I mainly use Visual Studio on Windows. I often run into warnings about conversions from size_t to int (mostly in the MPI part of the code), and there are also some warnings related to the VTK library. In my code, I've got a header called Utility where I've stashed away all the helper functions, including the dot product. For simplicity's sake in this post, I skipped using the extra functions. You're spot on about creating new variables to simplify expressions, but I held off initially to avoid using extra memory. However, I know I can free up memory later on, so your suggestion totally makes sense. I applied your suggestions and the same issue is still there.
@CplusC

Ok, cool :+)

So the motivation for my last post was to make style suggestions that make the code simpler and easier to read, which makes it less error prone in terms of typos and copy/paste errors, and probably easier to debug as well. I guess the hope was to make errors more obvious; alas, it looks like more debugging is required.

Can you make the smallest lattice that has known results, then test your code against that? If it works, make the lattice larger until it starts to fail. Consider writing testing code that prints intermediate results, in conjunction with using the debugger. There are ways to break when a loop reaches a certain point and examine the variables then; these are called conditional breakpoints.

Have you got situations where doubles are intentionally converted to float at any point? If so, look very carefully at what happens there. If it is supposed to be double throughout, consider using braces when initialising; the compiler will complain at compile time if there is a loss of precision. The same goes for function arguments:

double d = 0.1;
float a {d};                  // compiler complains: narrowing double -> float
float b {SomeFunctionCall()}; // compiler complains if the return type is double

void FunctionWithFltArgs(float c) { /* ... */ }

FunctionWithFltArgs({d});     // again the compiler complains, because of the braces around the argument


Also try to make constructors explicit

Consider using the attribute [[nodiscard]] everywhere a function returns a value. The compiler will complain if a return value is unused. This is a very useful thing, use it everywhere.

[[nodiscard]]
double Node::Equilibrium(std::shared_ptr<VelocitySet> velSet, int dir)
{ /* ... */ }


Edit: Silly me, it's part of the expression above.
Just on that point: in Simulation::Collision(), in the commented-out code, the return value of the call to Equilibrium is not used?

Where is the data coming from? Is it hard coded at compile time, or is it coming from a file or some other source at runtime? The former can make debugging easier, because the compiler can reason about more things; there are more opportunities to catch problems at compile time. So this is a good reason to have a minimal amount of hard-coded data for testing, as mentioned earlier. If the data is available only at runtime, then one has to do more defensive coding, at least: validate raw data; check invariants*; check denominators for divide by zero or close to zero, etc.

* Invariants are things that must remain true throughout the lifetime of the class object.
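As a small illustration of the denominator check (a hypothetical helper, not from the project; the tolerance value is an assumption chosen for illustration):

```cpp
#include <cassert>
#include <cmath>
#include <stdexcept>

// Divide, refusing denominators at or near zero rather than silently
// producing inf/nan, e.g. when dividing momentum by density.
double safe_divide(double numerator, double denominator,
                   double tolerance = 1e-12)
{
    if (std::fabs(denominator) < tolerance) {
        throw std::runtime_error("denominator too close to zero");
    }
    return numerator / denominator;
}
```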

Another style point: avoid using i and j, or anything else that looks similar like m and n, because that can be another source of problems that are very hard to see or detect. Consider changing [i][j][k] to [x][y][z] if that's what they are. I am not sure how much embuggerance (read: nuisance value) that will be for your project size :+)

Also, are you using some sort of version control like git? It makes management of the code base much easier, for example one can roll back to last time the code compiled cleanly (or some other point where one is happy with the code) if one has some sort of naming system for the commits to reflect that. VS has a plugin for git, it's easy to use.

Anyway, Good Luck !!
Thank you, @TheIdeasMan, for your explanation.
To answer all your questions, I thought it would be better to make a reproducible example. I've managed to develop two versions of the same code for fluid-flow simulation inside a cavity. One version is implemented in a simple C style (Test2: https://github.com/Resacd/Test2.git), while the other follows object-oriented programming principles (Test1: https://github.com/Resacd/Test1.git).

Despite their apparent similarity, these two implementations yield different results. My concern primarily lies in the repeated access and modification of class variables, which may lead to numerical instability or other unforeseen issues.

You can download the projects, build them in Visual Studio, and run the simulations for closer inspection. Any insights or suggestions you may have regarding the discrepancy between the two implementations would be greatly appreciated. Looking forward to hearing from you.
Hi,

After a quick perusal, I saw a bunch of integer divisions in void ZouHe::ApplyBC() which would result in some of the distributions being unchanged. I am not sure where this function is being called from.

I am not sure if the following is a radical idea: make constexpr variables for the fractions, and use them in the expressions.

For example.

constexpr double TwoThirds = 2.0 / 3.0;

It could save a tiny amount of time, the program doesn't have to recompute that value every time it loops.

I will see how I go creating a project out of this. It will be Visual Studio Code on Linux with g++. Will have to install vtk package.
@TheIdeasMan, thank you for your attention. In the Domain class, I've defined m_lattice as a shared_ptr because I intended it to accommodate two different types of objects: Node and ZouHe. ZouHe inherits from Node. When the object type is ZouHe, the ApplyBC() function will be overridden by the method from ZouHe. This allows the velocity and density on those specific lattice nodes to be set differently, as needed. While it may seem simpler to create a function in Simulation or Node to handle this, there are other reasons for this approach.
You mentioned that in the ZouHe class there are divisions by an integer, but the code isn't currently using those if statements; it goes directly to the else branch. By the way, you are absolutely right about that, and constexpr double TwoThirds = 2.0 / 3.0; was a good point too. If you want the code to treat those left and right corner cases as ZouHe and use those if statements, please modify the Domain::CreateNodes function as follows:

Change:

if (j == m_domainSize[1] - 1 && i > 0 && i < m_domainSize[0] - 1) {
    m_lattice[i][j] = std::make_shared<ZouHe>(i, j, m_velSet, m_uLid);
}


To

if (j == m_domainSize[1] - 1 && i >= 0 && i <= m_domainSize[0] - 1) {
    m_lattice[i][j] = std::make_shared<ZouHe>(i, j, m_velSet, m_uLid);
}
@TheIdeasMan I just wanted to let you know that I found the bug in the code, in case you're working on it.
Thank you.
Ok, groovey :+)

Care to share what it was?
@TheIdeasMan,
Sorry for the late reply.
In the collision function, when obtaining du and u_sqr for the equilibrium distribution functions, I was using the velocity of the inlet instead of each node's own velocity. This caused the post-collision distribution functions to remain close to the equilibrium distribution functions.
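To illustrate the equilibrium the collision should relax toward, here is a minimal D2Q9 sketch (not the project's D3Q27/D3Q39 code): each node's equilibrium must be built from that node's own density and velocity, and a handy sanity check is that summing the second-order equilibrium over all directions recovers the density exactly.

```cpp
#include <cassert>
#include <cmath>

// D2Q9 lattice: weights and discrete velocity directions.
constexpr int Q = 9;
constexpr double w[Q] = {4.0 / 9.0,
                         1.0 / 9.0, 1.0 / 9.0, 1.0 / 9.0, 1.0 / 9.0,
                         1.0 / 36.0, 1.0 / 36.0, 1.0 / 36.0, 1.0 / 36.0};
constexpr int cx[Q] = {0, 1, 0, -1, 0, 1, -1, -1, 1};
constexpr int cy[Q] = {0, 0, 1, 0, -1, 1, 1, -1, -1};

// Second-order BGK equilibrium for one direction, built from the
// node's own macroscopic density and velocity.
double feq(int dir, double rho, double ux, double uy)
{
    const double du    = cx[dir] * ux + cy[dir] * uy;
    const double u_sqr = ux * ux + uy * uy;
    return w[dir] * rho * (1.0 + 3.0 * du + 4.5 * du * du - 1.5 * u_sqr);
}

// Zeroth moment of the equilibrium; by construction this equals rho.
double feq_density(double rho, double ux, double uy)
{
    double sum = 0.0;
    for (int dir = 0; dir < Q; ++dir) sum += feq(dir, rho, ux, uy);
    return sum;
}
```

A check like this on each node after F_Eq would have flagged the bug early: with the inlet velocity substituted for the local one, the moments of m_eqdistributions no longer match density_ and Velocity_.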