Refactoring Matrix.cpp/.h

dpoen
Posts: 49
Joined: Wed Feb 02, 2022 10:26 pm

Re: Refactoring Matrix.cpp/.h

Post by dpoen »

The downside is that this always requires a tmp. matrix object which affects performance.
Ok, I'm coming back on this point: a friend taught me that if we use += (resp. -=) to build + (resp. -), we don't need an additional temporary object. It would also comply with the standard way of doing it in C++.
Would that be a good solution for you?

Note that adding an additional level of function calls would not degrade performance, as all of it is inlined.

BTW: I've been doing some testing, and having to build a temporary object when using "*" to build "*=" doesn't result in any measurable difference. (Granted, I didn't spend time setting up a first-class test and timing setup, but still, the actual computation should take far longer than any temporary object creation.)

BTW2: Inlining causes a slight increase in performance, but as far as my testing goes, it's around a 1% increase. I don't know how much it increases the overall software size, but I would be interested in experimenting with that. Is there any rationale for inlining all of it?
wmayer
Founder
Posts: 20243
Joined: Thu Feb 19, 2009 10:32 am
Contact:


Post by wmayer »

Ok, I'm coming back on this point: a friend taught me that if we use += (resp. -=) to build + (resp. -), we don't need an additional temporary object. It would also comply with the standard way of doing it in C++.
Would that be a good solution for you?
You mean to make the + operator based on the += operator? So the code would be like:

Code: Select all

inline Matrix4D Matrix4D::operator+(const Matrix4D& rclMtrx) const
{
    Matrix4D clMat(*this);
    return clMat += rclMtrx;
}
This should be fine.
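For readers skimming the thread, the same pattern on a minimal self-contained toy class (Mat2 and its members are illustrative stand-ins, not FreeCAD code):

```cpp
#include <cassert>

// Toy 2x2 matrix, only to illustrate building operator+ on top of operator+=.
struct Mat2 {
    double m[4] = {0.0, 0.0, 0.0, 0.0};

    // The compound assignment does the real work, in place.
    Mat2& operator+=(const Mat2& rhs) {
        for (int i = 0; i < 4; ++i)
            m[i] += rhs.m[i];
        return *this;
    }

    // operator+ copies *this once and reuses operator+=; there is no
    // temporary beyond the copy that a value-returning + needs anyway.
    Mat2 operator+(const Mat2& rhs) const {
        Mat2 tmp(*this);
        tmp += rhs;
        return tmp;
    }
};
```

This is the canonical C++ arrangement: the mutating form is the primitive, and the value-returning form is a thin wrapper around it.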
Note that adding an additional level of function calls would not degrade performance, as all of it is inlined.
I tested your original suggestion where you wanted to make += based on the + operator. I did two tests:

Code: Select all

    auto test = []() {
        Matrix4D mat, mat2;
        mat += mat2;
    };

    for (int i=0; i<100000000; i++) {
        test();
    }
This took 10 sec in debug mode with no optimization, while

Code: Select all

    auto test = []() {
        Matrix4D mat, mat2;

        Matrix4D mat3;
        mat3 = mat + mat2;
        mat = mat3;
    };

    for (int i=0; i<100000000; i++) {
        test();
    }
took 23 sec.
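A rough sketch of a harness along these lines (the Mat4 toy class and run_benchmark are illustrative, not the code wmayer actually ran; absolute numbers depend on machine and build flags, and an optimizing build may elide the loops entirely):

```cpp
#include <chrono>
#include <utility>

// Toy 4x4 matrix, only to mimic the shape of the test above; FreeCAD's
// Matrix4D is of course different.
struct Mat4 {
    double m[16] = {};
    Mat4& operator+=(const Mat4& r) {
        for (int i = 0; i < 16; ++i)
            m[i] += r.m[i];
        return *this;
    }
    Mat4 operator+(const Mat4& r) const {
        Mat4 t(*this);
        t += r;
        return t;
    }
};

// Times n iterations of "a += b" and of "a = a + b" and returns the two
// durations in microseconds: {in_place, with_temporary}.
std::pair<long long, long long> run_benchmark(int n)
{
    using clock = std::chrono::steady_clock;
    Mat4 a, b;

    auto t0 = clock::now();
    for (int i = 0; i < n; ++i)
        a += b;                 // in-place variant
    auto t1 = clock::now();
    for (int i = 0; i < n; ++i)
        a = a + b;              // variant going through a temporary
    auto t2 = clock::now();

    auto us = [](auto d) {
        return static_cast<long long>(
            std::chrono::duration_cast<std::chrono::microseconds>(d).count());
    };
    return {us(t1 - t0), us(t2 - t1)};
}
```

The roughly 2x ratio reported above is what one would expect in an unoptimized build, where every temporary is materialized.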

Post by dpoen »

wmayer wrote: You mean to make the + operator based on the += operator? So the code would be like:
Yes, that's exactly it! :)

Ok, for the record: I was surprised by your result... so I double-checked!
And yes, you are right; with slightly different code I got the same ratio (about twice as fast).
However, when applying this logic to *=, it behaves differently, because the actual calculation of the matrix product takes far longer than any call overhead; hence we can't measure any significant performance cost. :)

I have also checked that using += to build + doesn't cost any performance, and it's ok! So I will implement it that way.

Thanks for your challenging answer, that was very interesting! :)
jnxd
Posts: 951
Joined: Mon Mar 30, 2015 2:30 pm
Contact:


Post by jnxd »

dpoen wrote: Fri Feb 04, 2022 12:09 pm 12) int hasScale(double tol=0.0) const; defines a default value for tol, but in that case, given that we do

Code: Select all

   if (tol == 0.0)
        tol = 1e-9;
at the beginning of the function, wouldn't it be better to simply use 1e-9 as the default value in the prototype?
Just making an educated guess here.

I'm not sure what the significance of 1e-9 is, but usually these tolerances are defined with the floating-point precision in mind. This can change if we start using different precisions (say, double to single). So if, hypothetically, we were compiling FC for a computer using single precision and a computer using double precision (pretty sure we only do the latter), these numbers should be different. Thus an API default tolerance of 0.0 (effectively telling the function to use its own judgment) makes sense, but the actual default should then be set based on our precision.

That said, there's nothing stopping us from using a lower precision in this function.
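One way to express that idea in code is to derive the tolerance from the epsilon of the floating-point type in use rather than hard-coding 1e-9. The helper name and the factor parameter below are illustrative, not FreeCAD API:

```cpp
#include <limits>

// Tolerance derived from the machine epsilon of the floating-point type
// actually used, instead of a hard-coded literal such as 1e-9. The factor
// is an application-chosen safety margin.
template <typename T>
T defaultTolerance(T factor = T(1))
{
    return factor * std::numeric_limits<T>::epsilon();
}
```

With double the base value is about 2.22e-16, with float about 1.19e-7, so the same code adapts if the precision ever changes.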
My latest (or last) project: B-spline Construction Project.

Post by dpoen »

jnxd wrote: Thu Jun 23, 2022 9:30 am Just making an educated guess here.

I'm not sure what the significance of 1e-9 is, but usually these tolerances are defined with the floating-point precision in mind. This can change if we start using different precisions (say, double to single). So if, hypothetically, we were compiling FC for a computer using single precision and a computer using double precision (pretty sure we only do the latter), these numbers should be different. Thus an API default tolerance of 0.0 (effectively telling the function to use its own judgment) makes sense, but the actual default should then be set based on our precision.

That said, there's nothing stopping us from using a lower precision in this function.
Yes, and as openBrain said earlier, "IMO, the best would be to use the standard Precision::Confusion", but it has been argued that this would introduce OCC dependencies and such...
I don't know; I have no clear answer on what to do or how to fix it...

Post by wmayer »

The Precision class only consists of a couple of static methods and we can easily copy & paste the relevant parts into FreeCADBase. I don't remember in which thread we have discussed it.

However, Precision::Confusion() would be the wrong tolerance since it returns 1e-7, which is higher by a factor of 100. A better way might be to use epsilon.

Post by dpoen »

TBF, I've never formed a clear opinion on the best way to go, so I will go with whatever your decision is. :) DBL_EPSILON could be alright, as all the computations are done with doubles.

Independently of the choice of value, could we agree that it should belong in the header as the default value (instead of the 0.0, which currently gets replaced by 1e-9)?
That would allow us to remove two lines of code, would still let the caller pass 0.0 if they really want to, and would make the default value more apparent to the user, who wouldn't have to dig into the .cpp file to find it but could just read the header.

Post by wmayer »

Independently of the choice of value, could we agree that it should belong in the header as the default value (instead of the 0.0, which currently gets replaced by 1e-9)?
I don't agree, because then Matrix.h requires a further header and slows down the build process. The better way is to implement an overloaded version of hasScale(): a version without arguments and a version taking the double. The version without arguments would then be:

Code: Select all

ScaleType Matrix4D::hasScale() const
{
    return hasScale(std::numeric_limits<double>::epsilon());
}
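To illustrate why the overload keeps the header light, here is a self-contained toy (Toy and its hasScale are stand-ins; FreeCAD's real Matrix4D differs): the body of the no-argument overload would live in the .cpp, so <limits> never has to be included from the header, unlike a default argument written in the prototype.

```cpp
#include <cmath>
#include <limits>

// Stand-in for Matrix4D: a single uniform scale factor.
struct Toy {
    double scale = 1.0;

    // Overload taking an explicit tolerance; only this declaration and the
    // no-argument one would appear in the header.
    bool hasScale(double tol) const {
        return std::fabs(scale - 1.0) > tol;
    }

    // No-argument overload; in real code its body would sit in the .cpp,
    // so the header does not need <limits>.
    bool hasScale() const {
        return hasScale(std::numeric_limits<double>::epsilon());
    }
};
```

Callers still get both spellings, hasScale() and hasScale(tol), just as with a default argument, but without the extra include in every translation unit that uses the header.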

Post by jnxd »

dpoen wrote: Thu Jun 23, 2022 10:59 am Yes, and as openBrain said earlier,...
Sorry, this is a dense thread, so I'm missing a whole lot of the information discussed here.
wmayer wrote: Thu Jun 23, 2022 1:14 pm However, Precision::Confusion() would be the wrong tolerance since it returns 1e-7, which is higher by a factor of 100. A better way might be to use epsilon.
Agreed that Precision::Confusion() is probably too high. However, machine precision may be too low. Roundoff errors grow quite fast in matrix operations, and using too tight a tolerance can cause instability. That said, I still don't know what the actual value should be, but it should be (some constant)*epsilon.

In one case that comes to mind, for line-triangle intersection the tolerance was 1e-10, while double-precision floats have an epsilon on the order of 1e-16.
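The growth of roundoff can be seen even without matrices. The snippet below (an illustrative toy, not FreeCAD code) accumulates 0.1 repeatedly and measures the drift from the exact answer; the error is already on the order of machine epsilon after ten additions and grows with the chain length:

```cpp
#include <cmath>

// Sum 0.1 n times and return |sum - n/10|, the accumulated roundoff.
// 0.1 is not exactly representable in binary, so every addition can add
// a small error; long chains of matrix products behave the same way,
// which is why a bare machine-epsilon tolerance can be too tight.
double drift(int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += 0.1;
    return std::fabs(sum - n / 10.0);
}
```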
wmayer wrote: Thu Jun 23, 2022 2:56 pm I don't agree, because then Matrix.h requires a further header and slows down the build process. The better way is to implement an overloaded version of hasScale(): a version without arguments and a version taking the double. The version without arguments would then be:

Code: Select all

ScaleType Matrix4D::hasScale() const
{
    return hasScale(std::numeric_limits<double>::epsilon());
}
This is an interesting concept. Thanks.

Post by wmayer »

jnxd wrote: Thu Jun 23, 2022 5:49 pm This is an interesting concept. Thanks.
Here are some thoughts about default arguments: https://quuxplusone.github.io/blog/2020 ... the-devil/