Discussion:
Sparse Matrix Inversion using PETSc
Dr. Timothy Stitt
2007-08-15 11:49:10 UTC
Hi all,

I am currently investigating the best way to perform the inversion of a
large sparse matrix and came upon the idea of using PETSc as a framework
for testing various strategies from direct to iterative methods on my
sample matrices. In this setup for an NxN sparse matrix A I would have N
rhs's representing the Identity matrix and then solve for X. I wanted to
experiment with both parallel and serial strategies ranging from LU
Decomposition using SuperLU, MUMPS etc. to iterative methods using GMRES
etc. Am I right in thinking that all this can be done in PETSc by
setting up a core framework and then varying the solver methods etc?

I have looked over the sample KSP Solver codes although they only seem
to suggest single vectors for x and b. Can this be changed to accept
multiple vectors? Can anyone suggest a sample code that maybe
demonstrates the sort of thing I want to achieve...if it is in fact
possible.

Thanks in advance for any assistance given,

Regards,

Tim.
Aron Ahmadia
2007-08-15 13:21:28 UTC
Dear Tim,

It is possible to carry out the explicit inversion of a sparse matrix
using the PETSc framework with the methodology you outlined below. I
would encourage you to consider Cholesky/LU factorizations of the
matrix, which occasionally result in sparser factors, and hence faster
triangular solves, than an explicit inverse-matrix-vector multiply would.

As for the correct way to do this, for multiple right-hand sides and
reasonably sized matrices I would start with the fastest methods, namely
a direct method such as LU. I'm unaware of any functionality in PETSc
for handling multiple right-hand sides, but PETSc will keep the
factorization from a previous direct solve, so A\b2 will be much
faster than A\b1. I think the best bet is a naive for loop over each
of the right-hand-side vectors, assembling the inverse piece by piece.
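As a rough sketch (untested; this uses the current KSP interface, and
names like rhs[] and x[] are just placeholders for vectors you have
already created), the loop could look like:

  KSP      ksp;
  PC       pc;
  PetscInt i;
  KSPCreate(PETSC_COMM_WORLD,&ksp);
  KSPSetOperators(ksp,A,A);      /* newer API; older releases take an extra flag */
  KSPSetType(ksp,KSPPREONLY);    /* no Krylov iterations, just apply the preconditioner */
  KSPGetPC(ksp,&pc);
  PCSetType(pc,PCLU);            /* direct LU; factored on the first solve, reused afterwards */
  KSPSetFromOptions(ksp);        /* lets you switch to GMRES etc. at runtime */
  for (i=0; i<N; i++) {
    KSPSolve(ksp,rhs[i],x[i]);   /* the factorization is reused for every right-hand side */
  }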

The PETSc developers may have some more thoughts on this.

Good luck,
~Aron
Hong Zhang
2007-08-15 14:54:48 UTC
Tim,

As suggested by Aron, you should do the following:

1. MatLUFactorSymbolic(A,...,&Fact);
   MatLUFactorNumeric(A,&Fact);

2. for (i=0; i<N; i++) {
     MatSolve(Fact,rhs_vecs[i],sol_vecs[i]);
   }
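Spelled out a little more, a sketch of this using the newer
MatGetFactor/MatFactorInfo interface (so the exact calls may differ
from the ones above; rhs_vecs and sol_vecs are assumed to be arrays of
Vec created elsewhere) might read:

  Mat           F;
  IS            rowperm, colperm;
  MatFactorInfo info;
  PetscInt      i;

  MatFactorInfoInitialize(&info);
  MatGetOrdering(A,MATORDERINGND,&rowperm,&colperm);   /* fill-reducing ordering */
  MatGetFactor(A,MATSOLVERPETSC,MAT_FACTOR_LU,&F);
  MatLUFactorSymbolic(F,A,rowperm,colperm,&info);
  MatLUFactorNumeric(F,A,&info);
  for (i=0; i<N; i++) {
    MatSolve(F,rhs_vecs[i],sol_vecs[i]);   /* one pair of triangular solves per column */
  }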

For Cholesky factorization, where the factored matrix Fact is stored
in PETSc's SBAIJ format, we support MatSolves(). Thus you can call
MatSolves(Fact,rhs_vecs,sol_vecs);
where rhs_vecs and sol_vecs are multivectors.
See
http://www-unix.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatSolves.html

A PETSc multivector (Vecs) is a collection of vectors whose data is
stored in one contiguous block of memory. It is a temporary construct
for handling multiple right-hand-side solves. We would like to add
support for other types of multivectors eventually.

See ~petsc/src/mat/examples/tests/ex76.c; other examples are
available under ~petsc/src/mat/examples/tests/.
Note: PETSc itself only supports sequential Cholesky/LU.
For parallel LU, you must use superlu_dist or mumps.
Simply run the same PETSc code with the runtime option
'-mat_type superlu_dist' or '-mat_type aijmumps'.
I would recommend starting from a PETSc example.

Hong
Barry Smith
2007-08-15 16:50:06 UTC
Tim,

How large are your matrices?

Barry
Dr. Timothy Stitt
2007-08-15 18:27:09 UTC
Firstly, many thanks to everyone who has replied with information. It
has been very useful indeed. Much appreciated.

Barry, in this case the sparse matrices would be of order ~5000x5000.
They could grow in size, but these are the sample matrices I am working
with right now. We would love a scalable approach so that we can deal with
more interesting problems and hence larger sparse matrices. Hope that helps.

Many thanks again.

Tim.
Barry Smith
2007-08-15 21:01:12 UTC
Tim,

A dense matrix with 100,000 rows and columns requires 80 gigabytes to store
the result. 1,000,000 rows and columns requires 8,000 gigabytes.

With this range of sizes I wouldn't even consider iterative solvers and
would only use direct solvers. I would divide MPI_COMM_WORLD into N
subcommunicators of size n each and have each subcommunicator work on a
collection of columns of the inverse. For matrix sizes of 5,000 to say 20,000?
I'd make n be 1 and just use PETSc's native LU solver and use MatMatSolve()
and not use KSP at all. For larger matrices you may be able to use an n of 2
to possibly as large as 8?
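The subcommunicator split itself is plain MPI; a rough sketch (the
number of groups, here ncomms, is just a placeholder) would be:

  MPI_Comm    subcomm;
  PetscMPIInt rank, size;
  PetscInt    ncomms = 4;   /* number of subcommunicators (the "N" above) */

  MPI_Comm_rank(MPI_COMM_WORLD,&rank);
  MPI_Comm_size(MPI_COMM_WORLD,&size);
  /* color = which group this rank joins; each group gets size/ncomms ranks */
  MPI_Comm_split(MPI_COMM_WORLD,rank/(size/ncomms),rank,&subcomm);
  /* each group then loads A on subcomm and solves for its own block of columns */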

Now my numerical analysis training :-) requires me to state the following.
It is completely insane to compute the EXPLICIT inverse of large sparse matrices,
since their inverses are dense. Please tell me what the inverses are used for and perhaps
we can come up with an approach that does not require computing them.

Barry
Dr. Timothy Stitt
2007-08-15 21:23:50 UTC
Barry,

The group I am working with are calculating what they call retarded
Green's Functions of the form:

G_r=(E-H)^(-1)

where (E-H) is a matrix. They say there is apparently no way to avoid
this calculation.

Tim.
Barry Smith
2007-08-15 21:29:33 UTC
Tim,

Do you know what the G_r is then used for?

Thanks

Barry
Dr. Timothy Stitt
2007-08-16 12:29:38 UTC
Barry,

If it helps, I was speaking to some of the project members and they
mention that they actually only need the first M columns of the inverse.
The equations can also be rewritten so that they require only the last M
columns, or the first/last M rows, instead. I believe the retarded Green's
function enters an integral. For the integral in the real plane (which is
repeated for thousands of energy points and hence requires thousands of
inversions) they need only the first few columns, as described above.

I would be grateful if you could suggest a possible approach based on
this extra information. I much appreciate your comments already.

Thanks,

Tim.
--
Dr. Timothy Stitt <timothy_dot_stitt_at_ichec.ie>
HPC Application Consultant - ICHEC (www.ichec.ie)

Dublin Institute for Advanced Studies
5 Merrion Square - Dublin 2 - Ireland

+353-1-6621333 (tel) / +353-1-6621477 (fax) / +353-874195427 (mobile)
Barry Smith
2007-08-16 14:46:28 UTC
Tim,

Ok, this is a bit more reasonable :-)

The procedure I discussed previously is the same. Just
use for the right-hand side a dense matrix with
M columns whose entries are the first M columns of the identity,
then solve with it.
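In code that could look something like the following sketch (untested;
it assumes A has already been factored into F as in Hong's message,
with n the matrix dimension and M the number of columns needed, both
placeholder names):

  Mat      B, X;           /* dense n-by-M right-hand sides and solutions */
  PetscInt j, Istart, Iend;

  MatCreateDense(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,n,M,NULL,&B);
  MatGetOwnershipRange(B,&Istart,&Iend);
  for (j=Istart; j<Iend && j<M; j++) {
    MatSetValue(B,j,j,1.0,INSERT_VALUES);   /* first M columns of the identity */
  }
  MatAssemblyBegin(B,MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(B,MAT_FINAL_ASSEMBLY);
  MatDuplicate(B,MAT_DO_NOT_COPY_VALUES,&X);
  MatMatSolve(F,B,X);       /* X now holds the first M columns of the inverse */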

Barry
Dr. Timothy Stitt
2007-08-16 15:55:10 UTC
Perfect...thanks Barry.