This forum has been archived. All content is frozen. Please use KDE Discuss instead.

Heap corruption of Pardiso with Eigen

yaniell
Registered Member
Dear all,

I am stuck on a heap corruption problem when I call PardisoLU in Eigen. Strangely, my code works well for a test problem (small number of unknowns), but for a real, large problem it stops at

"error = internal::pardiso_run_selector<Index>::run(m_pt, 1, 1, m_type, 12, m_size,
m_matrix.valuePtr(), m_matrix.outerIndexPtr(), m_matrix.innerIndexPtr(),
m_perm.data(), 0, m_iparm.data(), m_msglvl, NULL, NULL);"

in PardisoSupport.h with the following info:

"Critical error detected c0000374
Windows has triggered a breakpoint in FullWaveA.exe.

This may be due to a corruption of the heap, which indicates a bug in FullWaveA.exe or any of the DLLs it has loaded.

This may also be due to the user pressing F12 while FullWaveA.exe has focus."

My Eigen version is 3.2.8, the compiler is Intel 11.0 (64-bit), and the MKL libraries I link are "mkl_intel_lp64.lib mkl_core.lib mkl_sequential.lib". Does anyone know what is happening?

Thanks in advance,
Yaniel
ggael
Moderator
It could be that it ran out of memory. Look at the PARDISO doc to see if it supports an out-of-core mode, and if so, enable it this way:

PardisoLU<...> solver;
solver.pardisoParameterArray()[key] = value;

with appropriate key and value as defined in PARDISO's doc.
yaniell
Registered Member
Many thanks, ggael. The default setting of the in-core / out-of-core mode in PardisoSupport.h is

"m_iparm[59] = 1; // Automatic switch between In-Core and Out-of-Core modes".

It doesn't help even if I set m_iparm[59] = 2, which makes the out-of-core mode fully used. I hope the bug is somewhere else and that I can find it by gradually increasing the dimension of the matrix system.

The corruption also happens for small problems. I checked the case of a small banded matrix, and PardisoLU doesn't work.

Code: Select all
#include <iostream>
#include <vector>
#include <complex>
#include <Eigen/Sparse>
#include <Eigen/PardisoSupport>
using namespace std;
using namespace Eigen;

typedef complex<double> dCmplx;
// Typedefs assumed from usage (the original post elided them with "..."):
typedef SparseMatrix<dCmplx> zSpMatx;
typedef Triplet<dCmplx>      zTrip;
typedef VectorXcd            zVect;

int main()
{   
   int dim = 3;
   zSpMatx A(dim, dim);   
   vector<zTrip> Aa;
   zVect b(dim), x(dim);

   for(int ii = 0; ii < dim - 1; ii++)
   {
      b[ii] = dCmplx(ii + 1, 0.0);
      Aa.push_back(zTrip(ii, ii, dCmplx(ii + 1, 0.0)));
      Aa.push_back(zTrip(ii, ii + 1, dCmplx(ii + 2, 0.0)));
   }
   b[dim - 1] = dCmplx(dim, 0.0);
   Aa.push_back(zTrip(dim - 1, dim - 1, dCmplx(dim, 0.0)));
   A.setFromTriplets(Aa.begin(), Aa.end());
   A.makeCompressed();

   cout << "Solve the system with Pardiso" << endl;
   PardisoLU<zSpMatx> solver2;
   solver2.compute(A);
   cout << solver2.info() << endl;
   x = solver2.solve(b);
   cout << solver2.info() << endl;
   cout << "x: " << endl << x << endl;

   return 0;
}


ggael wrote:It could be that it ran out of memory. Look at the PARDISO doc to see if it supports an out-of-core mode, and if so, enable it this way:

PardisoLU<...> solver;
solver.pardisoParameterArray()[key] = value;

with appropriate key and value as defined in PARDISO's doc.
yaniell
Registered Member
It seems that the bug is in this call in PardisoSupport.h. No matter how simple the matrix system is (except for a diagonal matrix), the following call

"error = internal::pardiso_run_selector<Index>::run(m_pt, 1, 1, m_type, 12, m_size,
m_matrix.valuePtr(), m_matrix.outerIndexPtr(), m_matrix.innerIndexPtr(),
m_perm.data(), 0, m_iparm.data(), m_msglvl, NULL, NULL);"

always leads to the heap corruption. After I comment out this call (and also "manageErrorCode(error);"), the code works and returns the correct solution.
ggael
Moderator
Alright, I can reproduce segfaults too on OSX, but those are random. Switching to in-core (iparm[59] = 0) fixes the issue here.

I don't know what's going on.
yaniell
Registered Member
Thanks, ggael. I have stand-alone code which calls MKL's PARDISO directly, and it works well. So I think something in the interface between Eigen and MKL may have a problem.

ggael wrote:Alright, I can reproduce segfaults too on OSX, but those are random. Switching to in-core (iparm[59] = 0) fixes the issue here.

I don't know what's going on.
yaniell
Registered Member
I checked the PardisoSupport module carefully. It seems that the matrix formats of Eigen and MKL don't match. In compressed sparse row format, the row and column indices start from 0 in Eigen, whereas they start from 1 in MKL. Is there a quick remedy for this problem?

yaniell wrote:Thanks, ggael. I have stand-alone code which calls MKL's PARDISO directly, and it works well. So I think something in the interface between Eigen and MKL may have a problem.

ggael wrote:Alright, I can reproduce segfaults too on OSX, but those are random. Switching to in-core (iparm[59] = 0) fixes the issue here.

I don't know what's going on.
ggael
Moderator
nope, see:

m_iparm[34] = 1; // <=> 0-based, C indexing

otherwise our unit tests would completely fail.

In the devel branch and the 3.2 branch, I've defaulted to in-core-only computation.
yaniell
Registered Member
Thanks, but I am confused. The PARDISO user guide claims that "the row and column numbers start from 1." Since matrices in Eigen start from 0, how can I get PARDISO to work with Eigen? m_iparm[34] = 1 is already the default setting in my version. Does this mean the bug hasn't been found yet?

ggael wrote:nope, see:

m_iparm[34] = 1; // <=> 0-based, C indexing

otherwise our unit tests would completely fail.

In the devel branch and 3.2 branch, I've defaulted to in-core only computation.
yaniell
Registered Member
Problem solved. It was my computer's fault; the code now runs successfully on another machine. Thank you all.

