Discussion:
[petsc-users] How to use multigrid?
w_ang_temp
2012-10-28 13:08:57 UTC
Permalink
Hello,
I want to use multigrid as a preconditioner. The manual gives only a brief introduction to it.
So are there some typical examples or details about multigrid? Is it used just like other preconditioners
such as jacobi or sor, which can simply be selected through the command line options?
Thanks.
Jim
Jed Brown
2012-10-28 13:17:00 UTC
Permalink
Algebraic multigrid can be used directly: -pc_type gamg
-pc_gamg_agg_nsmooths 1. Geometric multigrid either requires that you use the PCMG
interface to set the interpolation (and provide a coarse operator for
non-Galerkin) or that you use a DM that provides coarsening capability.
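
For reference, a minimal sketch of making the same selection from code rather
than from the options database; this assumes you already have a KSP named ksp
and a PC variable pc, and it is only one way to do it:

      call KSPGetPC(ksp,pc,ierr)
      call PCSetType(pc,PCGAMG,ierr)        ! same as -pc_type gamg
      call KSPSetFromOptions(ksp,ierr)      ! still picks up -pc_gamg_* options

The command line options remain the simplest route.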

What kind of problem are you solving?
w_ang_temp
2012-10-28 13:38:40 UTC
Permalink
Hello, Jed
Thanks for your timely reply. I deal with the soil-water coupled problem in geotechnical engineering,
whose stiffness matrix is ill-conditioned. I have done some work on it, mainly looking for effective
solvers and preconditioners. I used command line options like this:
mpiexec -n 4 ./ex4f -ksp_type cgs -pc_type sor -ksp_rtol 1.0e-15 -ksp_converged_reason
So I also want to use multigrid through a simple command like that. There is only a brief introduction
to multigrid in the manual. Multigrid is complex and not an easy thing for me, so I just need to know how
to use it simply in PETSc to solve the Ax=b system.
Thanks.
Jim
Matthew Knepley
2012-10-28 13:46:23 UTC
Permalink
If SOR works as a preconditioner, then definitely use AMG as Jed suggested.
It is almost certain to work.

Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
w_ang_temp
2012-10-28 13:48:44 UTC
Permalink
Thanks. I will have a try.

Jim
Jed Brown
2012-10-28 14:25:31 UTC
Permalink
Try the simple option I just sent.
w_ang_temp
2012-10-29 12:49:36 UTC
Permalink
Hello, Jed
I use the command:
mpiexec -n 4 ./ex4f -ksp_type cgs -pc_type gamg -pc_gamg_agg_nsmooths 1 -ksp_rtol 1.0e-15 -ksp_converged_reason
The error is as follows:
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Unknown type. Check for miss-spelling or missing external package needed for type
seehttp://www.mcs.anl.gov/petsc/petsc-as/documentation/installation.html#external!
[0]PETSC ERROR: Unable to find requested PC type gamg!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 7, Thu Mar 15 09:30:51 CDT 2012
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: ./ex4f on a arch-linu named ubuntu by geo Mon Oct 29 05:40:13 2012
[0]PETSC ERROR: Libraries linked from /home/geo/soft/petsc/petsc-3.2-p7/arch-linux2-c-opt/lib
[0]PETSC ERROR: Configure run at Mon Jul 2 20:33:17 2012
[0]PETSC ERROR: Configure options --with-mpi-dir=/home/geo/soft/mpich2 --download-f-blas-lapack=1 --with-x=1 --with-debugging=0 --download-parmetis --download-mumps --download-scalapack --download-blacs
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: PCSetType() line 67 in src/ksp/pc/interface/pcset.c
[0]PETSC ERROR: PCSetFromOptions() line 184 in src/ksp/pc/interface/pcset.c
[0]PETSC ERROR: KSPSetFromOptions() line 286 in src/ksp/ksp/interface/itcl.c
When I use 'mpiexec -n 4 ./ex4f -ksp_type cgs -pc_type sor -ksp_rtol 1.0e-15 -ksp_converged_reason', it is ok. So
what is the possible reason?
Thanks.
Jim
Alexander Grayver
2012-10-29 13:02:24 UTC
Permalink
On 29.10.2012 13:49, w_ang_temp wrote:
> [0]PETSC ERROR: Unable to find requested PC type gamg!
> [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 7, Thu Mar 15 09:30:51 CDT 2012

You should use petsc-3.3-p3.

--
Regards,
Alexander
Mark F. Adams
2012-10-29 13:57:49 UTC
Permalink
You need an updated PETSc. (I thought 3.2 had an early version of gamg … but you need 3.3 or dev)

Mark
w_ang_temp
2012-10-29 14:03:29 UTC
Permalink
Thanks. I will have a try.

Jim
w_ang_temp
2012-11-01 13:47:25 UTC
Permalink
Hello,
I have just switched to the latest version, 'petsc-3.3-p4.tar.gz'. The codes that are fine under version 'petsc-3.2-p7.tar.gz'
are not fine now. The error information is as follows. Can I still use the original codes without any
modification and also use multigrid with -pc_type gamg -pc_gamg_agg_nsmooths 1?
Thanks.
Jim
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Object is in wrong state!
[0]PETSC ERROR: Must call MatXXXSetPreallocation() or MatSetUp() on argument 1 "mat" before MatGetOwnershipRange()!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 4, Fri Oct 26 10:46:51 CDT 2012
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: ./ex4f on a arch-linu named ubuntu by ubu Thu Nov 1 06:10:23 2012
[0]PETSC ERROR: Libraries linked from /home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib
[0]PETSC ERROR: Configure run at Thu Nov 1 05:54:48 2012
[0]PETSC ERROR: Configure options --with-mpi-dir=/home/ubu/soft/mpich2/ --download-f-blas-lapack =1
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: MatGetOwnershipRange() line 5992 in src/mat/interface/matrix.c
[1]PETSC ERROR: [2]PETSC ERROR: --------------------- Error Message ------------------------------------
[2]PETSC ERROR: Object is in wrong state!
[2]PETSC ERROR: Must call MatXXXSetPreallocation() or MatSetUp() on argument 1 "mat" before MatGetOwnershipRange()!
[2]PETSC ERROR: ------------------------------------------------------------------------
[2]PETSC ERROR: Petsc Release Version 3.3.0, Patch 4, Fri Oct 26 10:46:51 CDT 2012
[2]PETSC ERROR: See docs/changes/index.html for recent updates.
[2]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[2]PETSC ERROR: See docs/index.html for manual pages.
[2]PETSC ERROR: ------------------------------------------------------------------------
[2]PETSC ERROR: ./ex4f on a arch-linu named ubuntu by ubu Thu Nov 1 06:10:23 2012
[2]PETSC ERROR: Libraries linked from /home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib
[2]PETSC ERROR: Configure run at Thu Nov 1 05:54:48 2012
[2]PETSC ERROR: Configure options --with-mpi-dir=/home/ubu/soft/mpich2/ --download-f-blas-lapack =1
[2]PETSC ERROR: ------------------------------------------------------------------------
[2]PETSC ERROR: MatGetOwnershipRange() line 5992 in src/mat/interface/matrix.c
[3]PETSC ERROR: --------------------- Error Message ------------------------------------
[3]PETSC ERROR: Object is in wrong state!
[3]PETSC ERROR: Must call MatXXXSetPreallocation() or MatSetUp() on argument 1 "mat" before MatGetOwnershipRange()!
[3]PETSC ERROR: ------------------------------------------------------------------------
[3]PETSC ERROR: Petsc Release Version 3.3.0, Patch 4, Fri Oct 26 10:46:51 CDT 2012
[3]PETSC ERROR: See docs/changes/index.html for recent updates.
[3]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[3]PETSC ERROR: See docs/index.html for manual pages.
[3]PETSC ERROR: ------------------------------------------------------------------------
[3]PETSC ERROR: ./ex4f on a arch-linu named ubuntu by ubu Thu Nov 1 06:10:23 2012
[3]PETSC ERROR: Libraries linked from /home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib
[3]PETSC ERROR: Configure run at Thu Nov 1 05:54:48 2012
[3]PETSC ERROR: Configure options --with-mpi-dir=/home/ubu/soft/mpich2/ --download-f-blas-lapack =1
[3]PETSC ERROR: ------------------------------------------------------------------------
[3]PETSC ERROR: MatGetOwnershipRange() line 5992 in src/mat/interface/matrix.c
--------------------- Error Message ------------------------------------
[1]PETSC ERROR: Object is in wrong state!
[1]PETSC ERROR: Must call MatXXXSetPreallocation() or MatSetUp() on argument 1 "mat" before MatGetOwnershipRange()!
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: Petsc Release Version 3.3.0, Patch 4, Fri Oct 26 10:46:51 CDT 2012
[1]PETSC ERROR: See docs/changes/index.html for recent updates.
[1]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[1]PETSC ERROR: See docs/index.html for manual pages.
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: ./ex4f on a arch-linu named ubuntu by ubu Thu Nov 1 06:10:23 2012
[1]PETSC ERROR: Libraries linked from /home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib
[1]PETSC ERROR: Configure run at Thu Nov 1 05:54:48 2012
[1]PETSC ERROR: Configure options --with-mpi-dir=/home/ubu/soft/mpich2/ --download-f-blas-lapack =1
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: MatGetOwnershipRange() line 5992 in src/mat/interface/matrix.c
[1]PETSC ERROR: [2]PETSC ERROR: [3]PETSC ERROR: --------------------- Error Message ------------------------------------
[3]PETSC ERROR: Object is in wrong state!
[3]PETSC ERROR: Matrix is missing diagonal entry 0!
[3]PETSC ERROR: ------------------------------------------------------------------------
[3]PETSC ERROR: Petsc Release Version 3.3.0, Patch 4, Fri Oct 26 10:46:51 CDT 2012
[3]PETSC ERROR: See docs/changes/index.html for recent updates.
[3]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[3]PETSC ERROR: See docs/index.html for manual pages.
[3]PETSC ERROR: ------------------------------------------------------------------------
[3]PETSC ERROR: ./ex4f on a arch-linu named ubuntu by ubu Thu Nov 1 06:10:23 2012
[3]PETSC ERROR: Libraries linked from /home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib
[3]PETSC ERROR: Configure run at Thu Nov 1 05:54:48 2012
[3]PETSC ERROR: Configure options --with-mpi-dir=/home/ubu/soft/mpich2/ --download-f-blas-lapack =1
[3]PETSC ERROR: ------------------------------------------------------------------------
[3]PETSC ERROR: MatILUFactorSymbolic_SeqAIJ_ilu0() line 1641 in src/mat/impls/aij/seq/aijfact.c
[3]PETSC ERROR: MatILUFactorSymbolic_SeqAIJ() line 1745 in src/mat/impls/aij/seq/aijfact.c
[3]PETSC ERROR: MatILUFactorSymbolic() line 6130 in src/mat/interface/matrix.c
[3]PETSC ERROR: PCSetUp_ILU() line 216 in src/ksp/pc/impls/factor/ilu/ilu.c
[3]PETSC ERROR: PCSetUp() line 832 in src/ksp/pc/interface/precon.c
[3]PETSC ERROR: KSPSetUp() line 278 in src/ksp/ksp/interface/itfunc.c
[3]PETSC ERROR: PCSetUpOnBlocks_BJacobi_Singleblock() line 715 in src/ksp/pc/impls/bjacobi/bjacobi.c
[3]PETSC ERROR: PCSetUpOnBlocks() line 865 in src/ksp/pc/interface/precon.c
[3]PETSC ERROR: KSPSetUpOnBlocks() line 154 in src/ksp/ksp/interface/itfunc.c
[3]PETSC ERROR: KSPSolve() line 403 in src/ksp/ksp/interface/itfunc.c
Fatal error in MPI_Send: Invalid count, error stack:
MPI_Send(173): MPI_Send(buf=0x100, count=-199040697, MPI_REAL, dest=0, tag=3, MPI_COMM_WORLD) failed
MPI_Send(97).: Negative count, value is -199040697
Matthew Knepley
2012-11-01 13:49:35 UTC
Permalink
On Thu, Nov 1, 2012 at 9:47 AM, w_ang_temp <***@163.com> wrote:

> [0]PETSC ERROR: Must call MatXXXSetPreallocation() or MatSetUp() on
> argument 1 "mat" before MatGetOwnershipRange()!

This says what is wrong. Now you must either preallocate your matrix or
turn off this error using MatSetOption().
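
As a minimal sketch (assuming the matrix is the A from your Fortran code, and
with d_nz/o_nz as placeholders for your own per-row estimates), either of the
following before MatGetOwnershipRange() should satisfy the check:

      call MatMPIAIJSetPreallocation(A,d_nz,PETSC_NULL_INTEGER,
     &    o_nz,PETSC_NULL_INTEGER,ierr)

or, if you do not want to preallocate at that point,

      call MatSetUp(A,ierr)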

Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
w_ang_temp
2012-11-01 13:57:51 UTC
Permalink
Hello, Matthew
Do you mean that the two versions differ on this point? If I use the new version, do I have to
make some modifications to my codes?
Thanks.
Jim
Jed Brown
2012-11-01 14:00:28 UTC
Permalink
Yes, it's faster to understand this error message than to have
"mysteriously slow performance".

* Preallocation routines now automatically set
MAT_NEW_NONZERO_ALLOCATION_ERR; if you intentionally preallocate less than
necessary then use
MatSetOption(mat,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE) to disable the
error generation.
http://www.mcs.anl.gov/petsc/documentation/changes/33.html
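
In Fortran the corresponding call would look roughly like this (a sketch,
assuming your matrix is called A as in your code):

      call MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE,ierr)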

w_ang_temp
2012-11-03 15:27:10 UTC
Permalink
Hello,
I have tried AMG, but there are some problems. I use the command:
mpiexec -n 4 ./ex4f -ksp_type gmres -pc_type gamg -pc_gamg_agg_nsmooths 1 -ksp_gmres_restart 170 -ksp_rtol 1.0e-15 -ksp_converged_reason
The matrix has a size of 30000. However, compared with -pc_type asm,
AMG needs more time: asm needs 4.9 s, AMG needs 13.7 s. I did several tests
and reached the same conclusion. When it starts, the screen shows the information
[0]PCSetData_AGG bs=1 MM=7601, whose meaning I do not know. Are there some
parameters that affect the performance of AMG?
Besides, I want to confirm a concept. In my view, AMG can itself be a solver,
like gmres, or it can be used as a preconditioner, like jacobi, in combination
with another solver. Is that right? If so, how do I use AMG as a solver?
My codes are attached.
Thanks.
Jim

-----------codes------------------------------------------
call MatCreate(PETSC_COMM_WORLD,A,ierr)
call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,m,n,ierr)
call MatSetType(A, MATMPIAIJ,ierr)
call MatSetFromOptions(A,ierr)
!preallocation
!find the maximum number of nonzeros over all rows
maxnonzero=0
do 19,II=1,m
!number of nonzeros in this row
maxnonzeroII=NROWIN(II+1)-NROWIN(II)
if (maxnonzeroII>maxnonzero) then
maxnonzero=maxnonzeroII
endif
19 continue
call MatMPIAIJSetPreallocation(A,maxnonzero,PETSC_NULL_INTEGER,
& maxnonzero,PETSC_NULL_INTEGER,ierr)
call MatGetOwnershipRange(A,Istart,Iend,ierr)
!set values per row
do 10,II=Istart+1,Iend
!number of nonzeros in this row
rowNum=NROWIN(II+1)-NROWIN(II)

allocate(nColPerRow(rowNum))
allocate(valuePerRow(rowNum))

kValStart=NROWIN(II)+1-1
kValEnd=NROWIN(II)+rowNum-1

!column index
nColPerRow=NNZIJ(kValStart:kValEnd)-1
valuePerRow=VALUE(kValStart:kValEnd)
nRow=II-1
call MatSetValues(A,ione,nRow,rowNum,nColPerRow,valuePerRow,
& INSERT_VALUES,ierr)
deallocate(nColPerRow)
deallocate(valuePerRow)
10 continue
call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr)
call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)
call VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,m,b,ierr)
call VecSetFromOptions(b,ierr)
call VecDuplicate(b,u,ierr)
call VecDuplicate(b,x,ierr)
call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)
call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,ierr)
call PetscOptionsHasName(PETSC_NULL_CHARACTER,'-my_ksp_monitor', &
& flg,ierr)
if (flg) then
call KSPMonitorSet(ksp,MyKSPMonitor,PETSC_NULL_OBJECT, &
& PETSC_NULL_FUNCTION,ierr)
endif
call KSPSetFromOptions(ksp,ierr)
call PetscOptionsHasName(PETSC_NULL_CHARACTER, &
& '-my_ksp_convergence',flg,ierr)
if (flg) then
call KSPSetConvergenceTest(ksp,MyKSPConverged, &
& PETSC_NULL_OBJECT,PETSC_NULL_FUNCTION,ierr)
endif
!Assign values to 'b'
bTemp=Iend-Istart
ioneb=1

do 12,II=Istart,Iend-1
voneb=F(II+1)
call VecSetValues(b,ioneb,II,voneb,INSERT_VALUES,ierr)
12 continue
call VecAssemblyBegin(b,ierr)
call VecAssemblyEnd(b,ierr)
call KSPSolve(ksp,b,x,ierr)
------codes-------------------------------------------------
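
A note on the preallocation above: passing the single maximum count maxnonzero for
every row (and for both the diagonal and off-diagonal parts) over-allocates memory.
A minimal sketch of per-row preallocation, assuming Istart and Iend already hold the
local row range (e.g. from MatGetOwnershipRange after MatSetUp) and using a
hypothetical allocatable PetscInt array d_nnz, could look like:

      !count the nonzeros of each locally owned row
      allocate(d_nnz(Iend-Istart))
      do 20,II=Istart+1,Iend
      d_nnz(II-Istart)=NROWIN(II+1)-NROWIN(II)
   20 continue
      !the scalar count is ignored when a per-row array is given;
      !the same array is reused for the off-diagonal block as an upper bound
      call MatMPIAIJSetPreallocation(A,0,d_nnz,0,d_nnz,ierr)
      deallocate(d_nnz)

On the solver-versus-preconditioner question above: one common way to run multigrid
as the solver itself (a sketch, not the only option) is to wrap it in a Richardson
iteration, e.g.

mpiexec -n 4 ./ex4f -ksp_type richardson -pc_type gamg -pc_gamg_agg_nsmooths 1 -ksp_rtol 1.0e-15 -ksp_converged_reason

which applies the multigrid cycle as a stationary iteration instead of as a
preconditioner for GMRES.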










>At 2012-11-01 22:00:28,"Jed Brown" <***@mcs.anl.gov> wrote:

>Yes, it's faster to understand this error message than to have "mysteriously slow performance".


>* Preallocation routines now automatically set MAT_NEW_NONZERO_ALLOCATION_ERR, if you intentionally preallocate less than necessary then >use MatSetOption(mat,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE) to disable the error generation.

>http://www.mcs.anl.gov/petsc/documentation/changes/33.html


>On Thu, Nov 1, 2012 at 8:57 AM, w_ang_temp <***@163.com> wrote:

>Do you mean that the two versions have a different in this point? If I use the new version, I have to
>make some modifications on my codes?
Jed Brown
2012-11-03 15:31:52 UTC
Permalink
1. *Always* send -log_summary when asking about performance.
2. AMG setup costs more, but the solve should be faster, especially for large
problems.
3. 30k degrees of freedom is not large.


On Sat, Nov 3, 2012 at 10:27 AM, w_ang_temp <***@163.com> wrote:

> Hello,
> I have tried AMG, but there are some problems. I use the command:
> mpiexec -n 4 ./ex4f -ksp_type gmres -pc_type gamg
> -pc_gamg_agg_nsmooths 1 -ksp_gmres_restart 170 -ksp_rtol 1.0e-15
> -ksp_converged_reason.
> The matrix has a size of 30000. However, compared with -pc_type asm,
> the amg need more time:asm needs 4.9s, amg needs 13.7s. I did several tests
> and got the same conclusion. When it begins, the screen shows the
> information:
> [0]PCSetData_AGG bs=1 MM=7601. I do not know the meaning. And if there is
> some
> parameters that affect the performance of AMG?
> Besides, I want to confirm a conception. In my view, AMG itself can be
> a solver
> like gmres. It can also be used as a preconditioner like jacobi and is
> used by combining
> with other solver. Is it right? If it is right, how use AMG solver?
> My codes are attached.
> Thanks.
> Jim
>
> -----------codes------------------------------------------
> call MatCreate(PETSC_COMM_WORLD,A,ierr)
> call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,m,n,ierr)
> call MatSetType(A, MATMPIAIJ,ierr)
> call MatSetFromOptions(A,ierr)
> !premalloc
> !find the max non-zero numbers of all rows
> maxnonzero=0
> do 19,II=1,m
> !no-zero numbers of this row
> maxnonzeroII=NROWIN(II+1)-NROWIN(II)
> if (maxnonzeroII>maxnonzero) then
> maxnonzero=maxnonzeroII
> endif
> 19 continue
>          call
> MatMPIAIJSetPreallocation(A,maxnonzero,PETSC_NULL_INTEGER,
> & maxnonzero,PETSC_NULL_INTEGER,ierr)
> call MatGetOwnershipRange(A,Istart,Iend,ierr)
> !set values per row
> do 10,II=Istart+1,Iend
> !no-zero numbers of this row
> rowNum=NROWIN(II+1)-NROWIN(II)
>
> allocate(nColPerRow(rowNum))
> allocate(valuePerRow(rowNum))
>
> kValStart=NROWIN(II)+1-1
> kValEnd=NROWIN(II)+rowNum-1
>
> !column index
> nColPerRow=NNZIJ(kValStart:kValEnd)-1
> valuePerRow=VALUE(kValStart:kValEnd)
>          nRow=II-1
> call MatSetValues(A,ione,nRow,rowNum,nColPerRow,valuePerRow,
> & INSERT_VALUES,ierr)
> deallocate(nColPerRow)
> deallocate(valuePerRow)
> 10 continue
> call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr)
> call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)
> call VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,m,b,ierr)
> call VecSetFromOptions(b,ierr)
> call VecDuplicate(b,u,ierr)
> call VecDuplicate(b,x,ierr)
> call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)
> call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,ierr)
> call PetscOptionsHasName(PETSC_NULL_CHARACTER,'-my_ksp_monitor', &
> & flg,ierr)
> if (flg) then
> call KSPMonitorSet(ksp,MyKSPMonitor,PETSC_NULL_OBJECT, &
> & PETSC_NULL_FUNCTION,ierr)
> endif
> call KSPSetFromOptions(ksp,ierr)
> call PetscOptionsHasName(PETSC_NULL_CHARACTER, &
> & '-my_ksp_convergence',flg,ierr)
> if (flg) then
> call KSPSetConvergenceTest(ksp,MyKSPConverged, &
> & PETSC_NULL_OBJECT,PETSC_NULL_FUNCTION,ierr)
> endif
> !Assing values to 'b'
> bTemp=Iend-Istart
> ioneb=1
>
> do 12,II=Istart,Iend-1
> voneb=F(II+1)
> call VecSetValues(b,ioneb,II,voneb,INSERT_VALUES,ierr)
> 12 continue
> call VecAssemblyBegin(b,ierr)
> call VecAssemblyEnd(b,ierr)
> call KSPSolve(ksp,b,x,ierr)
> ------codes-------------------------------------------------
>
>
>
>
>
>
>
>
>
> >At 2012-11-01 22:00:28,"Jed Brown" <***@mcs.anl.gov> wrote:
>
> >Yes, it's faster to understand this error message than to have
> "mysteriously slow performance".
>
> >** Preallocation routines now automatically set
> MAT_NEW_NONZERO_ALLOCATION_ERR, if you intentionally preallocate less than
> necessary then *>*use
> MatSetOption(mat,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE) to disable the
> error generation.
> *
> >http://www.mcs.anl.gov/petsc/documentation/changes/33.html
>
> >On Thu, Nov 1, 2012 at 8:57 AM, w_ang_temp <***@163.com> wrote:
>
>> >Do you mean that the two versions have a different in this point? If I
>> use the new version, I have to
>> >make some modifications on my codes?
>>
>
>
>
>
w_ang_temp
2012-11-03 15:52:22 UTC
Permalink
Is there something that needs attention when setting up PETSc? The -log_summary
option seems to have no effect on my system.



>At 2012-11-03 23:31:52,"Jed Brown" <***@mcs.anl.gov> wrote:
>1. Always send -log_summary when asking about performance.
>2. AMG setup costs more, the solve should be faster, especially for large problems.
>3. 30k degrees of freedom is not large.



>>On Sat, Nov 3, 2012 at 10:27 AM, w_ang_temp <***@163.com> wrote:

>>Hello,
>> I have tried AMG, but there are some problems. I use the command:
>> mpiexec -n 4 ./ex4f -ksp_type gmres -pc_type gamg -pc_gamg_agg_nsmooths 1 -ksp_gmres_restart 170 -ksp_rtol 1.0e-15 ->>ksp_converged_reason.
>> The matrix has a size of 30000. However, compared with -pc_type asm,
>>the amg need more time:asm needs 4.9s, amg needs 13.7s. I did several tests
>>and got the same conclusion. When it begins, the screen shows the information:
>>[0]PCSetData_AGG bs=1 MM=7601. I do not know the meaning. And if there is some
>>parameters that affect the performance of AMG?
>> Besides, I want to confirm a conception. In my view, AMG itself can be a solver
>>like gmres. It can also be used as a preconditioner like jacobi and is used by combining
>>with other solver. Is it right? If it is right, how use AMG solver?
>> My codes are attached.
>> Thanks.
>> Jim



>At 2012-11-01 22:00:28,"Jed Brown" <***@mcs.anl.gov> wrote:

>Yes, it's faster to understand this error message than to have "mysteriously slow performance".


>* Preallocation routines now automatically set MAT_NEW_NONZERO_ALLOCATION_ERR, if you intentionally preallocate less than necessary then >use MatSetOption(mat,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE) to disable the error generation.

>http://www.mcs.anl.gov/petsc/documentation/changes/33.html


>On Thu, Nov 1, 2012 at 8:57 AM, w_ang_temp <***@163.com> wrote:

>Do you mean that the two versions have a different in this point? If I use the new version, I have to
>make some modifications on my codes?
Jed Brown
2012-11-03 15:53:42 UTC
Permalink
Just pass it as a command line option. It gives profiling output in
PetscFinalize().
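
For example, reusing the command from the earlier message and simply appending the
option (a sketch):

mpiexec -n 4 ./ex4f -ksp_type gmres -pc_type gamg -pc_gamg_agg_nsmooths 1 -ksp_gmres_restart 170 -ksp_rtol 1.0e-15 -ksp_converged_reason -log_summary

The profiling table is then printed when the program reaches PetscFinalize().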


On Sat, Nov 3, 2012 at 10:52 AM, w_ang_temp <***@163.com> wrote:

> Is there something that need attention when setting up PETSc? The
> -log_summary
> is no use in my system.
>
>
> >At 2012-11-03 23:31:52,"Jed Brown" <***@mcs.anl.gov> wrote:
>
> >1. *Always* send -log_summary when asking about performance.
> >2. AMG setup costs more, the solve should be faster, especially for large
> problems.
> >3. 30k degrees of freedom is not large.
>
>
> >>On Sat, Nov 3, 2012 at 10:27 AM, w_ang_temp <***@163.com> wrote:
>
>> >>Hello,
>> >> I have tried AMG, but there are some problems. I use the command:
>> >> mpiexec -n 4 ./ex4f -ksp_type gmres -pc_type gamg
>> -pc_gamg_agg_nsmooths 1 -ksp_gmres_restart 170 -ksp_rtol 1.0e-15
>> ->>ksp_converged_reason.
>> >> The matrix has a size of 30000. However, compared with -pc_type asm,
>> >>the amg need more time:asm needs 4.9s, amg needs 13.7s. I did several
>> tests
>> >>and got the same conclusion. When it begins, the screen shows the
>> information:
>> >>[0]PCSetData_AGG bs=1 MM=7601. I do not know the meaning. And if there
>> is some
>> >>parameters that affect the performance of AMG?
>> >> Besides, I want to confirm a conception. In my view, AMG itself can
>> be a solver
>> >>like gmres. It can also be used as a preconditioner like jacobi and is
>> used by combining
>> >>with other solver. Is it right? If it is right, how use AMG solver?
>> >> My codes are attached.
>> >> Thanks.
>> >> Jim
>>
>>
>>
>> >At 2012-11-01 22:00:28,"Jed Brown" <***@mcs.anl.gov> wrote:
>>
>> >Yes, it's faster to understand this error message than to have
>> "mysteriously slow performance".
>>
>> >** Preallocation routines now automatically set
>> MAT_NEW_NONZERO_ALLOCATION_ERR, if you intentionally preallocate less than
>> necessary then *>*use
>> MatSetOption(mat,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE) to disable the
>> error generation.
>> *
>> >http://www.mcs.anl.gov/petsc/documentation/changes/33.html
>>
>> >On Thu, Nov 1, 2012 at 8:57 AM, w_ang_temp <***@163.com> wrote:
>>
>>> >Do you mean that the two versions have a different in this point? If I
>>> use the new version, I have to
>>> >make some modifications on my codes?
>>>
>>
>>
>>
>>
>
>
>
w_ang_temp
2012-11-03 17:05:28 UTC
Permalink
(1) using AMG
[0]PCSetData_AGG bs=1 MM=7601
Linear solve converged due to CONVERGED_RTOL iterations 445
Norm of error 0.2591E+04 iterations 445
0.000000000000000E+000 0.000000000000000E+000 0.000000000000000E+000
-2.105776715959587E-017 0.000000000000000E+000 0.000000000000000E+000
26.4211453778391 -3.262172452839194E-017 -2.114490133288630E-017
************************************************************************************************************************
*** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document ***
************************************************************************************************************************
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
./ex4f on a arch-linux2-c-debug named ubuntu with 4 processors, by ubu Sat Nov 3 09:29:28 2012
Using Petsc Release Version 3.3.0, Patch 4, Fri Oct 26 10:46:51 CDT 2012
Max Max/Min Avg Total
Time (sec): 3.198e+02 1.00002 3.198e+02
Objects: 4.480e+02 1.00000 4.480e+02
Flops: 2.296e+09 1.08346 2.172e+09 8.689e+09
Flops/sec: 7.181e+06 1.08344 6.792e+06 2.717e+07
Memory: 2.374e+07 1.04179 9.297e+07
MPI Messages: 6.843e+03 1.87582 5.472e+03 2.189e+04
MPI Message Lengths: 2.660e+07 2.08884 3.446e+03 7.542e+07
MPI Reductions: 6.002e+04 1.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 3.1981e+02 100.0% 8.6886e+09 100.0% 2.189e+04 100.0% 3.446e+03 100.0% 6.001e+04 100.0%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flops: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %f - percent flops in this phase
%M - percent messages in this phase %L - percent message lengths in this phase
%R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------

##########################################################
# #
# WARNING!!! #
# #
# This code was compiled with a debugging option, #
# To get timing results run ./configure #
# using --with-debugging=no, the performance will #
# be generally two or three times faster. #
# #
##########################################################

Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 6291 1.0 2.1431e+01 4.2 1.01e+09 1.2 1.9e+04 3.4e+03 0.0e+00 4 42 86 85 0 4 42 86 85 0 170
MatMultAdd 896 1.0 4.8204e-01 1.1 2.79e+07 1.2 1.3e+03 3.4e+03 0.0e+00 0 1 6 6 0 0 1 6 6 0 208
MatMultTranspose 896 1.0 2.2052e+00 1.3 2.79e+07 1.2 1.3e+03 3.4e+03 1.8e+03 1 1 6 6 3 1 1 6 6 3 45
MatSolve 896 0.0 1.4953e-02 0.0 2.44e+06 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 163
MatLUFactorSym 1 1.0 1.7595e-04 4.7 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatLUFactorNum 1 1.0 1.3995e-0423.5 1.85e+04 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 132
MatConvert 2 1.0 3.1026e-02 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01 0 0 0 0 0 0 0 0 0 0 0
MatScale 6 1.0 8.9679e-03 5.7 3.98e+05 1.2 6.0e+00 3.4e+03 4.0e+00 0 0 0 0 0 0 0 0 0 0 158
MatAssemblyBegin 37 1.0 2.1544e-01 1.7 0.00e+00 0.0 5.4e+01 6.4e+03 4.2e+01 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 37 1.0 2.5336e-01 1.4 0.00e+00 0.0 9.0e+01 6.8e+02 3.1e+02 0 0 0 0 1 0 0 0 0 1 0
MatGetRow 26874 1.2 9.8243e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 1 0.0 2.5988e-05 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 0.0 1.5616e-04 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e-01 0 0 0 0 0 0 0 0 0 0 0
MatCoarsen 2 1.0 2.9671e-02 1.0 0.00e+00 0.0 2.4e+01 7.0e+03 3.8e+01 0 0 0 0 0 0 0 0 0 0 0
MatAXPY 2 1.0 6.1393e-04 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatTranspose 2 1.0 1.8161e-01 1.1 0.00e+00 0.0 3.0e+01 7.1e+03 9.4e+01 0 0 0 0 0 0 0 0 0 0 0
MatMatMult 2 1.0 1.1968e-01 1.0 3.31e+05 1.2 3.6e+01 1.7e+03 1.1e+02 0 0 0 0 0 0 0 0 0 0 10
MatMatMultSym 2 1.0 9.8982e-02 1.0 0.00e+00 0.0 3.0e+01 1.4e+03 9.6e+01 0 0 0 0 0 0 0 0 0 0 0
MatMatMultNum 2 1.0 2.1248e-02 1.1 3.31e+05 1.2 6.0e+00 3.4e+03 1.2e+01 0 0 0 0 0 0 0 0 0 0 56
MatPtAP 2 1.0 1.7070e-01 1.1 2.36e+06 1.2 5.4e+01 3.3e+03 1.1e+02 0 0 0 0 0 0 0 0 0 0 50
MatPtAPSymbolic 2 1.0 1.3786e-01 1.1 0.00e+00 0.0 4.8e+01 3.1e+03 1.0e+02 0 0 0 0 0 0 0 0 0 0 0
MatPtAPNumeric 2 1.0 4.7638e-02 2.2 2.36e+06 1.2 6.0e+00 4.8e+03 1.2e+01 0 0 0 0 0 0 0 0 0 0 180
MatTrnMatMult 2 1.0 7.9914e-01 1.0 1.14e+07 1.3 3.6e+01 2.1e+04 1.2e+02 0 0 0 1 0 0 0 0 1 0 48
MatGetLocalMat 10 1.0 6.7852e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 2.4e+01 0 0 0 0 0 0 0 0 0 0 0
MatGetBrAoCol 6 1.0 3.6962e-02 2.4 0.00e+00 0.0 4.2e+01 4.5e+03 1.6e+01 0 0 0 0 0 0 0 0 0 0 0
MatGetSymTrans 4 1.0 4.4394e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 913 1.0 2.8472e+00 3.0 5.28e+08 1.0 0.0e+00 0.0e+00 9.1e+02 1 24 0 0 2 1 24 0 0 2 741
VecNorm 1367 1.0 2.7202e+00 1.6 7.12e+06 1.0 0.0e+00 0.0e+00 1.4e+03 1 0 0 0 2 1 0 0 0 2 10
VecScale 4950 1.0 6.0693e-02 1.2 1.96e+07 1.1 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 1169
VecCopy 1349 1.0 7.8685e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 4972 1.0 1.2852e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 7624 1.0 1.3610e-01 1.1 6.44e+07 1.2 0.0e+00 0.0e+00 0.0e+00 0 3 0 0 0 0 3 0 0 0 1676
VecAYPX 7168 1.0 1.5877e-01 1.2 4.01e+07 1.2 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 896
VecMAXPY 1366 1.0 6.3739e-01 1.3 5.35e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 25 0 0 0 0 25 0 0 0 3353
VecAssemblyBegin 21 1.0 4.7891e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+01 0 0 0 0 0 0 0 0 0 0 0
VecAssemblyEnd 21 1.0 4.5776e-05 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecPointwiseMult 5398 1.0 1.2278e-01 1.3 2.42e+07 1.2 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 698
VecScatterBegin 8107 1.0 3.9436e-02 1.3 0.00e+00 0.0 2.2e+04 3.4e+03 0.0e+00 0 0 99 98 0 0 0 99 98 0 0
VecScatterEnd 8107 1.0 1.5414e+01344.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0
VecSetRandom 2 1.0 1.1868e-03 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 1366 1.0 2.7266e+00 1.6 1.07e+07 1.0 0.0e+00 0.0e+00 1.4e+03 1 0 0 0 2 1 0 0 0 2 15
KSPGMRESOrthog 913 1.0 8.5743e+00 1.1 1.06e+09 1.0 0.0e+00 0.0e+00 3.6e+04 3 49 0 0 60 3 49 0 0 60 492
KSPSetUp 7 1.0 2.4805e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 1 1.0 5.7656e+01 1.0 2.30e+09 1.1 2.2e+04 3.4e+03 6.0e+04 18100100100100 18100100100100 151
PCSetUp 2 1.0 2.1524e+00 1.0 2.03e+07 1.3 3.8e+02 6.0e+03 1.1e+03 1 1 2 3 2 1 1 2 3 2 33
PCSetUpOnBlocks 448 1.0 1.5607e-03 1.8 1.85e+04 0.0 0.0e+00 0.0e+00 8.0e+00 0 0 0 0 0 0 0 0 0 0 12
PCApply 448 1.0 4.4331e+01 1.1 1.08e+09 1.2 1.9e+04 3.4e+03 2.3e+04 14 44 86 85 38 14 44 86 85 38 87
PCGAMGgraph_AGG 2 1.0 6.1282e-01 1.0 3.36e+05 1.2 6.6e+01 5.7e+03 1.9e+02 0 0 0 0 0 0 0 0 0 0 2
PCGAMGcoarse_AGG 2 1.0 8.8854e-01 1.0 1.14e+07 1.3 9.0e+01 1.2e+04 2.1e+02 0 0 0 1 0 0 0 0 1 0 44
PCGAMGProl_AGG 2 1.0 6.3711e-02 1.1 0.00e+00 0.0 4.2e+01 2.3e+03 1.0e+02 0 0 0 0 0 0 0 0 0 0 0
PCGAMGPOpt_AGG 2 1.0 3.4247e-01 1.0 6.22e+06 1.2 9.6e+01 2.8e+03 3.3e+02 0 0 0 0 1 0 0 0 0 1 65
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 68 68 27143956 0
Matrix Coarsen 2 2 704 0
Vector 296 296 13385864 0
Vector Scatter 18 18 11304 0
Index Set 47 47 34816 0
Krylov Solver 7 7 554688 0
Preconditioner 7 7 3896 0
PetscRandom 2 2 704 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.3113e-06
Average time for MPI_Barrier(): 9.62257e-05
Average time for zero size MPI_Send(): 0.00019449
#PETSc Option Table entries:
-ksp_converged_reason
-ksp_gmres_restart 170
-ksp_rtol 1.0e-15
-ksp_type gmres
-log_summary
-pc_gamg_agg_nsmooths 1
-pc_type gamg
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 4 sizeof(void*) 4 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Thu Nov 1 05:54:48 2012
Configure options: --with-mpi-dir=/home/ubu/soft/mpich2/ --download-f-blas-lapack =1
-----------------------------------------
Libraries compiled on Thu Nov 1 05:54:48 2012 on ubuntu
Machine characteristics: Linux-2.6.32-38-generic-i686-with-Ubuntu-10.04-lucid
Using PETSc directory: /home/ubu/soft/petsc/petsc-3.3-p4
Using PETSc arch: arch-linux2-c-debug
-----------------------------------------
Using C compiler: /home/ubu/soft/mpich2/bin/mpicc -wd1572 -g ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /home/ubu/soft/mpich2/bin/mpif90 -g ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/include -I/home/ubu/soft/petsc/petsc-3.3-p4/include -I/home/ubu/soft/petsc/petsc-3.3-p4/include -I/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/include -I/home/ubu/soft/mpich2/include
-----------------------------------------
Using C linker: /home/ubu/soft/mpich2/bin/mpicc
Using Fortran linker: /home/ubu/soft/mpich2/bin/mpif90
Using libraries: -Wl,-rpath,/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib -L/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib -lpetsc -lpthread -Wl,-rpath,/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib -L/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib -lflapack -lfblas -L/home/ubu/soft/mpich2/lib -L/opt/intel/composer_xe_2011_sp1.10.319/compiler/lib/ia32 -L/opt/intel/composer_xe_2011_sp1.10.319/ipp/lib/ia32 -L/opt/intel/composer_xe_2011_sp1.10.319/mkl/lib/ia32 -L/opt/intel/composer_xe_2011_sp1.10.319/tbb/lib/ia32/cc4.1.0_libc2.4_kernel2.6.16.21 -L/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32 -L/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32 -L/usr/lib/gcc/i486-linux-gnu/4.4.3 -L/usr/lib/i486-linux-gnu -lmpichf90 -lifport -lifcore -lm -lm -ldl -lmpich -lopa -lmpl -lrt -lpthread -limf -lsvml -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lirc_s -ldl
-----------------------------------------
time 14.4289010000000
time 14.4289020000000
time 14.4449030000000
time 14.4809050000000



(2) using asm
Linear solve converged due to CONVERGED_RTOL iterations 483
Norm of error 0.2591E+04 iterations 483
0.000000000000000E+000 0.000000000000000E+000 0.000000000000000E+000
4.866092420969481E-018 0.000000000000000E+000 0.000000000000000E+000
26.4211453778395 -4.861214483821431E-017 5.379151535696287E-018
************************************************************************************************************************
*** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document ***
************************************************************************************************************************
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
./ex4f on a arch-linux2-c-debug named ubuntu with 4 processors, by ubu Sat Nov 3 10:00:43 2012
Using Petsc Release Version 3.3.0, Patch 4, Fri Oct 26 10:46:51 CDT 2012
Max Max/Min Avg Total
Time (sec): 2.952e+02 1.00006 2.952e+02
Objects: 2.040e+02 1.00000 2.040e+02
Flops: 1.502e+09 1.00731 1.496e+09 5.983e+09
Flops/sec: 5.088e+06 1.00734 5.067e+06 2.027e+07
Memory: 2.036e+07 1.01697 8.073e+07
MPI Messages: 1.960e+03 2.00000 1.470e+03 5.880e+03
MPI Message Lengths: 7.738e+06 3.12820 3.474e+03 2.042e+07
MPI Reductions: 4.236e+04 1.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 2.9517e+02 100.0% 5.9826e+09 100.0% 5.880e+03 100.0% 3.474e+03 100.0% 4.236e+04 100.0%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flops: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %f - percent flops in this phase
%M - percent messages in this phase %L - percent message lengths in this phase
%R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------

##########################################################
# #
# WARNING!!! #
# #
# This code was compiled with a debugging option, #
# To get timing results run ./configure #
# using --with-debugging=no, the performance will #
# be generally two or three times faster. #
# #
##########################################################

Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
MatMult 485 1.0 3.1170e+00 5.9 1.36e+08 1.0 2.9e+03 3.4e+03 0.0e+00 1 9 49 48 0 1 9 49 48 0 173
MatSolve 486 1.0 6.8313e-01 1.3 1.49e+08 1.1 0.0e+00 0.0e+00 0.0e+00 0 10 0 0 0 0 10 0 0 0 842
MatLUFactorNum 1 1.0 6.0117e-02 1.2 1.54e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 99
MatILUFactorSym 1 1.0 1.5973e-01 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 2 1.0 6.0572e-02 9.5 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 2 1.0 1.7764e-02 1.6 0.00e+00 0.0 1.2e+01 8.5e+02 1.9e+01 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 1 1.0 3.8147e-06 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetSubMatrice 1 1.0 1.9829e-01 2.1 0.00e+00 0.0 3.0e+01 2.1e+04 1.0e+01 0 0 1 3 0 0 0 1 3 0 0
MatGetOrdering 1 1.0 1.2739e-02 5.8 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatIncreaseOvrlp 1 1.0 1.8877e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecMDot 483 1.0 4.5957e+00 2.5 5.98e+08 1.0 0.0e+00 0.0e+00 4.8e+02 1 40 0 0 1 1 40 0 0 1 521
VecNorm 487 1.0 2.0843e+00 1.2 7.40e+06 1.0 0.0e+00 0.0e+00 4.9e+02 1 0 0 0 1 1 0 0 0 1 14
VecScale 486 1.0 1.2140e-02 1.1 3.69e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1217
VecCopy 3 1.0 4.2915e-05 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 979 1.0 8.1432e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 6 1.0 2.3413e-04 1.3 9.12e+04 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1558
VecMAXPY 486 1.0 7.0027e-01 1.2 6.06e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 3460
VecAssemblyBegin 1 1.0 5.7101e-03 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAssemblyEnd 1 1.0 3.0994e-06 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecScatterBegin 1457 1.0 9.1357e-02 2.6 0.00e+00 0.0 5.8e+03 3.4e+03 0.0e+00 0 0 99 97 0 0 0 99 97 0 0
VecScatterEnd 1457 1.0 2.5327e+00323.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 486 1.0 2.0858e+00 1.2 1.11e+07 1.0 0.0e+00 0.0e+00 4.9e+02 1 1 0 0 1 1 1 0 0 1 21
KSPGMRESOrthog 483 1.0 1.1152e+01 1.2 1.20e+09 1.0 0.0e+00 0.0e+00 4.0e+04 3 80 0 0 94 3 80 0 0 94 429
KSPSetUp 2 1.0 1.1989e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 1 1.0 2.0990e+01 1.0 1.50e+09 1.0 5.9e+03 3.5e+03 4.2e+04 7100100100100 7100100100100 285
PCSetUp 2 1.0 3.5928e-01 1.0 1.54e+06 1.1 4.2e+01 1.5e+04 3.5e+01 0 0 1 3 0 0 0 1 3 0 17
PCSetUpOnBlocks 1 1.0 2.2166e-01 1.7 1.54e+06 1.1 0.0e+00 0.0e+00 6.0e+00 0 0 0 0 0 0 0 0 0 0 27
PCApply 486 1.0 5.9831e+00 2.8 1.49e+08 1.1 2.9e+03 3.4e+03 1.5e+03 1 10 50 48 3 1 10 50 48 3 96
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 5 5 8051336 0
Vector 182 182 11109304 0
Vector Scatter 2 2 1256 0
Index Set 10 10 121444 0
Krylov Solver 2 2 476844 0
Preconditioner 2 2 1088 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 9.05991e-07
Average time for MPI_Barrier(): 0.000297785
Average time for zero size MPI_Send(): 0.000174284
#PETSc Option Table entries:
-ksp_converged_reason
-ksp_gmres_restart 170
-ksp_rtol 1.0e-15
-ksp_type gmres
-log_summary
-pc_type asm
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 4 sizeof(void*) 4 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Thu Nov 1 05:54:48 2012
Configure options: --with-mpi-dir=/home/ubu/soft/mpich2/ --download-f-blas-lapack =1
-----------------------------------------
Libraries compiled on Thu Nov 1 05:54:48 2012 on ubuntu
Machine characteristics: Linux-2.6.32-38-generic-i686-with-Ubuntu-10.04-lucid
Using PETSc directory: /home/ubu/soft/petsc/petsc-3.3-p4
Using PETSc arch: arch-linux2-c-debug
-----------------------------------------
Using C compiler: /home/ubu/soft/mpich2/bin/mpicc -wd1572 -g ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /home/ubu/soft/mpich2/bin/mpif90 -g ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/include -I/home/ubu/soft/petsc/petsc-3.3-p4/include -I/home/ubu/soft/petsc/petsc-3.3-p4/include -I/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/include -I/home/ubu/soft/mpich2/include
-----------------------------------------
Using C linker: /home/ubu/soft/mpich2/bin/mpicc
Using Fortran linker: /home/ubu/soft/mpich2/bin/mpif90
Using libraries: -Wl,-rpath,/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib -L/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib -lpetsc -lpthread -Wl,-rpath,/home/u
bu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib -L/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib -lflapack -lfblas -L/home/ubu/soft/mpich2/lib -L/opt/intel/composer_xe_2011_sp1.10.319/compiler/lib/ia32 -L/opt/intel/composer_xe_2011_sp1.10.319/ipp/lib/ia32 -L/opt/intel/composer_xe_2011_sp1.10.319/mkl/lib/ia32 -L/opt/intel/composer_xe_2011_sp1.10.319/tbb/lib/ia32/cc4.1.0_libc2.4_kernel2.6.16.21 -L/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32 -L/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32 -L/usr/lib/gcc/i486-linux-gnu/4.4.3 -L/usr/lib/i486-linux-gnu -lmpichf90 -lifport -lifcore -lm -lm -ldl -lmpich -lopa -lmpl -lrt -lpthread -limf -lsvml -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lirc_s -ldl
-----------------------------------------
time 5.72435800000000
time 5.40433800000000
time 5.39233700000000
time 5.51634499999999






>At 2012-11-03 23:53:42,"Jed Brown" <***@mcs.anl.gov> wrote:
>Just pass it as a command line option. It gives profiling output in PetscFinalize().



>On Sat, Nov 3, 2012 at 10:52 AM, w_ang_temp <***@163.com> wrote:

>Is there something that need attention when setting up PETSc? The -log_summary
>is no use in my system.



>At 2012-11-03 23:31:52,"Jed Brown" <***@mcs.anl.gov> wrote:

>1. Always send -log_summary when asking about performance.
>2. AMG setup costs more, the solve should be faster, especially for large problems.
>3. 30k degrees of freedom is not large.



>>On Sat, Nov 3, 2012 at 10:27 AM, w_ang_temp <***@163.com> wrote:

>>Hello,
>> I have tried AMG, but there are some problems. I use the command:
>> mpiexec -n 4 ./ex4f -ksp_type gmres -pc_type gamg -pc_gamg_agg_nsmooths 1 -ksp_gmres_restart 170 -ksp_rtol 1.0e-15 ->>ksp_converged_reason.
>> The matrix has a size of 30000. However, compared with -pc_type asm,
>>the amg need more time:asm needs 4.9s, amg needs 13.7s. I did several tests
>>and got the same conclusion. When it begins, the screen shows the information:
>>[0]PCSetData_AGG bs=1 MM=7601. I do not know the meaning. And if there is some
>>parameters that affect the performance of AMG?
>> Besides, I want to confirm a conception. In my view, AMG itself can be a solver
>>like gmres. It can also be used as a preconditioner like jacobi and is used by combining
>>with other solver. Is it right? If it is right, how use AMG solver?
>> My codes are attached.
>> Thanks.
>> Jim



>At 2012-11-01 22:00:28,"Jed Brown" <***@mcs.anl.gov> wrote:

>Yes, it's faster to understand this error message than to have "mysteriously slow performance".


>* Preallocation routines now automatically set MAT_NEW_NONZERO_ALLOCATION_ERR, if you intentionally preallocate less than necessary then >use MatSetOption(mat,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE) to disable the error generation.

>http://www.mcs.anl.gov/petsc/documentation/changes/33.html


>On Thu, Nov 1, 2012 at 8:57 AM, w_ang_temp <***@163.com> wrote:

>Do you mean that the two versions have a different in this point? If I use the new version, I have to
>make some modifications on my codes?
Jed Brown
2012-11-03 17:08:26 UTC
Permalink
1. What kind of equation are you solving? AMG is not working well if it
takes that many iterations.

2.

 ##########################################################
 # #
 # WARNING!!! #
 # #
 # This code was compiled with a debugging option, #
 # To get timing results run ./configure #
 # using --with-debugging=no, the performance will #
 # be generally two or three times faster. #
 # #
 ##########################################################
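
For reference, a rebuild without debugging (a sketch based on the configure options
shown in the -log_summary output above) would be

./configure --with-mpi-dir=/home/ubu/soft/mpich2/ --download-f-blas-lapack=1 --with-debugging=no

followed by rebuilding PETSc and relinking the application, after which the same
AMG/ASM comparison should be repeated.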


w_ang_temp
2012-11-03 17:17:47 UTC
Permalink
>At 2012-11-04 01:08:26,"Jed Brown" <***@mcs.anl.gov> wrote:


>1. What kind of equation are you solving? AMG is not working well if it takes that many iterations.


I am dealing with typical soil-water coupled geotechnical problems. The system is a typical finite element equation. The matrix is 30000x30000 and ill-conditioned.




2.



##########################################################
# #
# WARNING!!! #
# #
# This code was compiled with a debugging option, #
# To get timing results run ./configure #
# using --with-debugging=no, the performance will #
# be generally two or three times faster. #
# #
##########################################################
It is indeed a debugging version, and I used the same build for the same problem, once with asm as the preconditioner and once with amg. The time with amg is about 3 times that with asm. I do not know the reason, and I also do not know the
meaning of '[0]PCSetData_AGG bs=1 MM=7601'.
Matthew Knepley
2012-11-03 17:21:59 UTC
Permalink
On Sat, Nov 3, 2012 at 1:17 PM, w_ang_temp <***@163.com> wrote:

> >At 2012-11-04 01:08:26,"Jed Brown" <***@mcs.anl.gov> wrote:
>
> >1. What kind of equation are you solving? AMG is not working well if it
> takes that many iterations.
>
> I just deal with the typical soil-water coupled geotechnical problems. It
> is a typical finite element equation. The matrix is 30000X30000 and
> ill-conditioned.
>
>
We are now at the root of your problem. Solvers do not work on
discretizations, they work on equations. No
solver is designed for "finite elements", and there is no typical finite
element problem.

Multigrid works best on elliptic equations with smooth coefficients.
Without that, you have to do special things.

I can tell from the above discussion that you have not spent a lot of time
researching successful preconditioning
strategies for your problem in the literature. This is always the first
step to building a high performance solver.

Thanks,

Matt


> 2.
>
> * ##########################################################*
> * # #*
> * # WARNING!!! #*
> * # #*
> * # This code was compiled with a debugging option, #*
> * # To get timing results run ./configure #*
> * # using --with-debugging=no, the performance will #*
> * # be generally two or three times faster. #*
> * # #*
> * ##########################################################*
> It is true a debugging version. And I used the same version dealing with
> the same problem, one preconditioner is asm
> and the other is amg. The time with amg is about 3 times as with asm. I do
> not know the reason. And I also do not know the
> meaning of '[0]PCSetData_AGG bs=1 MM=7601'.
>
>
>
>


--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
w_ang_temp
2012-11-03 17:38:33 UTC
Permalink
Hello, Matthew

I just mean that the problem I am solving is a finite element problem, and its linear system comes from genuinely elliptic equations.
I heard that AMG is an efficient solver, so I just want to try it and see whether it is efficient for this problem.
By the way, I want to confirm a concept. In my view, AMG can itself be a solver like gmres, and it can also be used as a preconditioner
like jacobi, combined with another solver. Is that right? If so, how do I use AMG as a solver?

Thanks.

Jim




>On 2012-11-04 01:21:59, "Matthew Knepley" <***@gmail.com> wrote:
><On Sat, Nov 3, 2012 at 1:17 PM, w_ang_temp <***@163.com> wrote:

>At 2012-11-04 01:08:26,"Jed Brown" <***@mcs.anl.gov> wrote:


>1. What kind of equation are you solving? AMG is not working well if it takes that many iterations.


>I just deal with the typical soil-water coupled geotechnical problems. It is a typical finite element equation. The matrix is 30000X30000 and ill-conditioned.



>We are now at the root of your problem. Solvers do not work on discretizations, they work on equations. No
>solver is designed for "finite elements", and there is no typical finite element problem.


>Multigrid works best on elliptic equations with smooth coefficients. Without that, you have to do special things.


>I can tell from the above discussion that you have not spent a lot of time researching successful preconditioning
>strategies for your problem in the literature. This is always the first step to building a high performance solver.


> Thanks,


> Matt


>2.



##########################################################
# #
# WARNING!!! #
# #
# This code was compiled with a debugging option, #
# To get timing results run ./configure #
# using --with-debugging=no, the performance will #
# be generally two or three times faster. #
# #
##########################################################
>It is true a debugging version. And I used the same version dealing with the same problem, one preconditioner is asm
>and the other is amg. The time with amg is about 3 times as with asm. I do not know the reason. And I also do not know the
>meaning of '[0]PCSetData_AGG bs=1 MM=7601'.










--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
Matthew Knepley
2012-11-03 17:52:13 UTC
Permalink
On Sat, Nov 3, 2012 at 1:38 PM, w_ang_temp <***@163.com> wrote:

> Hello, Matthew
>
> I just mean that the problem that I am resolving is a finite
> element problem. The linear system of it is true elliptic equations.
> I heared that AMG was an efficient solver, so I just want to have a try
> about AMG and find that if it is efficient.
>

And I meant it when I said you MUST look it up. Next time you ask us what
AMG can do, please include a reference to a paper in which this problem is
attacked with AMG, and we can help.



> By the way, I want to confirm a conception. In my view, AMG
> itself can be a solver like gmres. It can also be used as a preconditioner
> like jacobi and is used by combining with other solver. Is it right? If it
> is right, how use AMG solver?
>

This is true of almost all KSP and PC objects. These are all just
approximate solvers.
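
To make that concrete with the executable from your earlier mails (just a sketch, with the petsc-3.3 option names): running the AMG cycle under Richardson iteration uses it as a stand-alone solver, while wrapping it in a Krylov method uses it as a preconditioner.

  mpiexec -n 4 ./ex4f -ksp_type richardson -pc_type gamg -pc_gamg_agg_nsmooths 1 -ksp_converged_reason
  mpiexec -n 4 ./ex4f -ksp_type gmres -pc_type gamg -pc_gamg_agg_nsmooths 1 -ksp_converged_reason

The second form is almost always more robust in practice.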

Matt


>
> Thanks.
>
>
> Jim
>
>
>On 2012-11-04 01:21:59, "Matthew Knepley" <***@gmail.com> wrote:
>
> ><On Sat, Nov 3, 2012 at 1:17 PM, w_ang_temp <***@163.com> wrote:
>
>> >At 2012-11-04 01:08:26,"Jed Brown" <***@mcs.anl.gov> wrote:
>>
>> >1. What kind of equation are you solving? AMG is not working well if it
>> takes that many iterations.
>>
>> >I just deal with the typical soil-water coupled geotechnical problems.
>> It is a typical finite element equation. The matrix is 30000X30000 and
>> ill-conditioned.
>>
>>
> >We are now at the root of your problem. Solvers do not work on
> discretizations, they work on equations. No
> >solver is designed for "finite elements", and there is no typical finite
> element problem.
>
> >Multigrid works best on elliptic equations with smooth coefficients.
> Without that, you have to do special things.
>
> >I can tell from the above discussion that you have not spent a lot of
> time researching successful preconditioning
> >strategies for your problem in the literature. This is always the first
> step to building a high performance solver.
>
> > Thanks,
>
> > Matt
>
>
>> >2.
>>
>> * ##########################################################*
>> * # #*
>> * # WARNING!!! #*
>> * # #*
>> * # This code was compiled with a debugging option, #*
>> * # To get timing results run ./configure #*
>> * # using --with-debugging=no, the performance will #*
>> * # be generally two or three times faster. #*
>> * # #*
>> * ##########################################################*
>> >It is true a debugging version. And I used the same version dealing with
>> the same problem, one preconditioner is asm
>> >and the other is amg. The time with amg is about 3 times as with asm. I
>> do not know the reason. And I also do not know the
>> >meaning of '[0]PCSetData_AGG bs=1 MM=7601'.
>>
>>
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
>
>
>


--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
Mark F. Adams
2012-11-04 13:52:23 UTC
Permalink
Just to add to Jed and Matt's comments:

1) What are your equations? (I really don't care what physics you are modeling, it's the equations that we see.) Is it a scalar div(alpha(x) grad) u?

2) Try using hyper (-pc_type hyper -pc_hypre_type boomeramg). Look at "PCApply 448", this should be like 10-20. Configure your systems with --download-hyper=1 if you do not have it.

3) Why do you say it is ill-conditioned? Stretched grids? Large jumps in material coefficients? What makes you think it is ill-conditioned?

On Nov 3, 2012, at 1:52 PM, Matthew Knepley <***@gmail.com> wrote:

> On Sat, Nov 3, 2012 at 1:38 PM, w_ang_temp <***@163.com> wrote:
> Hello, Matthew
>
> I just mean that the problem that I am resolving is a finite element problem. The linear system of it is true elliptic equations.
> I heared that AMG was an efficient solver, so I just want to have a try about AMG and find that if it is efficient.
>
> And I meant it when I said, you MUST look it up .Next time you ask us what AMG can do, please include
> a reference for a paper in which they are attacking this problem with it and we can help.
>
>
> By the way, I want to confirm a conception. In my view, AMG itself can be a solver like gmres. It can also be used as a preconditioner
> like jacobi and is used by combining with other solver. Is it right? If it is right, how use AMG solver?
>
> This is true of almost all KSP and PC objects. These are all jsut approximate solvers.
>
> Matt
>
>
> Thanks.
>
> Jim
>
>
> >圚 2012-11-04 01:21:59"Matthew Knepley" <***@gmail.com> 写道
> ><On Sat, Nov 3, 2012 at 1:17 PM, w_ang_temp <***@163.com> wrote:
> >At 2012-11-04 01:08:26,"Jed Brown" <***@mcs.anl.gov> wrote:
> >1. What kind of equation are you solving? AMG is not working well if it takes that many iterations.
>
> >I just deal with the typical soil-water coupled geotechnical problems. It is a typical finite element equation. The matrix is 30000X30000 and ill-conditioned.
>
>
> >We are now at the root of your problem. Solvers do not work on discretizations, they work on equations. No
> >solver is designed for "finite elements", and there is no typical finite element problem.
>
> >Multigrid works best on elliptic equations with smooth coefficients. Without that, you have to do special things.
>
> >I can tell from the above discussion that you have not spent a lot of time researching successful preconditioning
> >strategies for your problem in the literature. This is always the first step to building a high performance solver.
>
> > Thanks,
>
> > Matt
>
> >2.
>
>
> ##########################################################
> # #
> # WARNING!!! #
> # #
> # This code was compiled with a debugging option, #
> # To get timing results run ./configure #
> # using --with-debugging=no, the performance will #
> # be generally two or three times faster. #
> # #
> ##########################################################
> >It is true a debugging version. And I used the same version dealing with the same problem, one preconditioner is asm
> >and the other is amg. The time with amg is about 3 times as with asm. I do not know the reason. And I also do not know the
> >meaning of '[0]PCSetData_AGG bs=1 MM=7601'.
>
>
>
>
>
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
>
>
>
>
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
Jed Brown
2012-11-04 14:24:13 UTC
Permalink
On Sun, Nov 4, 2012 at 7:52 AM, Mark F. Adams <***@columbia.edu>wrote:

> 2) Try using hyper (-pc_type hyper -pc_hypre_type boomeramg). Look at "PCApply
> 448", this should be like 10-20. Configure your systems with
> --download-hyper=1 if you do not have it.
>

Jim, you can use -pc_type hypre (note the spelling "hypre", not "hyper").
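
For example, keeping the rest of the options from your earlier runs, the whole command would look something like this (a sketch; it assumes PETSc was configured with --download-hypre=1):

  mpiexec -n 4 ./ex4f -ksp_type gmres -pc_type hypre -pc_hypre_type boomeramg -ksp_rtol 1.0e-15 -ksp_converged_reason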


>
> 3) Why do you say it is ill conditioned? Stretched grids? large jumps in
> material coefficients? Why do you think it is illoconditioned?
>

This question is important.

PETSc devs, we may need to write an FAQ about how saying that your problem
is "ill-conditioned" is useless, and explaining the sort of context that is
necessary to say something useful.
w_ang_temp
2012-11-05 01:52:26 UTC
Permalink
>At 2012-11-04 21:52:23,"Mark F. Adams" <***@columbia.edu> wrote:
>Just to add to Jed and Matt's comments:


>1) What are your equations (I really don't care what physics you are modeling, its the equations that we see). Is is a scalar div (alpha(x) grad ) u?

The linear system Ax=b results from the finite element discretization. It is an algebraic system derived from an elliptic equation, in my opinion.


>2) Try using hyper (-pc_type hyper -pc_hypre_type boomeramg). Look at "PCApply 448", this should be like 10-20. Configure your systems with -->download-hyper=1 if you do not have it.


>3) Why do you say it is ill conditioned? Stretched grids? large jumps in material coefficients? Why do you think it is illoconditioned?

Practical geotechnical problems often involve highly varied material zones, for example in soil-structure interaction or soil-water
coupled problems ([1] gives a summary; see also [2-4]).
References:
[1] Krishna Bahadur Chaudhary. Preconditioners for soil-structure interaction problems with significant material stiffness contrast. Dissertation.
[2] S. H. Chan. A modified Jacobi preconditioner for solving ill-conditioned Biot's consolidation equations using symmetric quasi-minimal residual method. International Journal for Numerical and Analytical Methods in Geomechanics.
[3] Kok-Kwang Phoon. Iterative solution of large-scale consolidation and constraint finite element equations for 3D problems. International e-Conference on Modern Trends in Foundation Engineering.
[4] Massimiliano Ferronato et al. Ill-conditioning of finite element poroelasticity equations. International Journal of Solids and Structures.

Thanks.

Jim
Jed Brown
2012-11-06 03:44:07 UTC
Permalink
On Sun, Nov 4, 2012 at 7:52 PM, w_ang_temp <***@163.com> wrote:

>
>
> >At 2012-11-04 21:52:23,"Mark F. Adams" <***@columbia.edu> wrote:
>
> >Just to add to Jed and Matt's comments:
>
> >1) What are your equations (I really don't care what physics you are
> modeling, its the equations that we see). Is is a scalar div (alpha(x)
> grad ) u?
>
> The linear system Ax=b is resulted from the finite element discretization.
> It is a algebraic equation, based from a elliptic equation, my opinion.
>
>
If you use the formulation in the first cited reference, then you have a
saddle point problem. This is extremely important and explains why naive
application of algebraic multigrid did not work. AMG can handle the
variable coefficients (and "ill-conditioning"), but it can't
"automatically" handle the saddle. You can use PCFIELDSPLIT for these
problems, but it's not a black box and it requires that you _understand_
what you are trying to do.

You need to choose a respectable reference on preconditioners for saddle
point problems. Best would be a review paper written since 2000 and with
more than 100 citations. (There are a handful, almost any will be fine.)
Read it first, then read a couple papers specific to poroelasticity and
frame those methods in the context from the saddle point paper. Also read
section 4.5 of the PETSc user's manual on PCFIELDSPLIT.

Yes, this takes time, but if you want to solve these problems robustly, you
have to understand enough about the methods to troubleshoot.
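
To give a flavor of what that looks like, here is a sketch of the kind of options involved (how the displacement/pressure split is defined, e.g. with PCFieldSplitSetIS() or -pc_fieldsplit_detect_saddle_point, depends on how your unknowns are ordered, and the split prefixes below assume the default numeric names):

  mpiexec -n 4 ./ex4f -ksp_type fgmres \
    -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_detect_saddle_point \
    -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type gamg \
    -fieldsplit_1_ksp_type gmres -fieldsplit_1_pc_type none \
    -ksp_converged_reason

None of these choices are automatic; which Schur complement approximation and which inner solvers make sense is exactly what the reading above is for.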

>2) Try using hyper (-pc_type hyper -pc_hypre_type boomeramg). Look at "PCApply
> 448", this should be like 10-20. Configure your systems with
> -->download-hyper=1 if you do not have it.
>
> >3) Why do you say it is ill conditioned? Stretched grids? large jumps in
> material coefficients? Why do you think it is illoconditioned?
>
> Pragmatic geotechnical problems often involve materials with highly
> varied material zones, such as in soil-structure interaction problems or
> soil-water
> coupled problems.([1]summary;[2-4])
> Reference:
> [1]Krishna Bahadur Chaudhary. Preconditioners for soil-structure
> interaction problems with significant material stiffness contrast[D].
> [2]S.H.Chan. A modified Jacobi preconditioner for solving ill-conditioned
> Biot's consolidation equations using symmetric quasi-minimal residual
> method.International journal for numerical and analytical methods in
> geomechanics[J].
> [3]Kok-Kwang Phoon. Iterative solution of large-scale consolidation and
> constraint finite element equations for 3D problems.International
> e-Conference on
> Modern Trends in Foundation Engineering[J].
> [4]Massimiliano, etc. Ill-conditioning of finite element poroelasticity
> equations. International Journal of Solids and Structures[J].
>
> Thanks.
>
>
> Jim
>
>
>
>
>
>
TAY wee-beng
2012-10-30 11:41:33 UTC
Permalink
On 28/10/2012 2:17 PM, Jed Brown wrote:
>
> Algebraic multigrid can be used directly, -pc_type gamg
> -pc_gamg_agg_nsmooths 1. Geometric either required that you use the
> PCMG interface to set interpolation (and provide a coarse operator for
> non-Galerkin) or use a DM that provides coarsening capability.
>
> What kind of problem are you solving?
>

Hi,

May I know if there is an example which explains using the geometric MG,
either using PCMG or DM?


Yours sincerely,

TAY wee-beng


> On Oct 28, 2012 6:09 AM, "w_ang_temp" <***@163.com
> <mailto:***@163.com>> wrote:
>
> Hello,
> I want to use the multigrid as a preconditioner. The
> introduction about it in the manual is little.
> So are there some typical examples or details about multigrid? Is
> it used just like other preconditioners
> like jacobi, sor, which can be simply used in the cammand line
> options?
> Thanks.
> Jim
>
>
Matthew Knepley
2012-10-30 12:44:49 UTC
Permalink
On Tue, Oct 30, 2012 at 7:41 AM, TAY wee-beng <***@gmail.com> wrote:

> On 28/10/2012 2:17 PM, Jed Brown wrote:
>
> Algebraic multigrid can be used directly, -pc_type gamg
> -pc_gamg_agg_nsmooths 1. Geometric either required that you use the PCMG
> interface to set interpolation (and provide a coarse operator for
> non-Galerkin) or use a DM that provides coarsening capability.
>
> What kind of problem are you solving?
>
>
> Hi,
>
> May I know if there is an example which explains using the geometric MG,
> either using PCMG or DM?
>

SNES ex5 and ex19 both use MG.

Matt


> Yours sincerely,
>
> TAY wee-beng
>
>
> On Oct 28, 2012 6:09 AM, "w_ang_temp" <***@163.com> wrote:
>
>> Hello,
>> I want to use the multigrid as a preconditioner. The introduction
>> about it in the manual is little.
>> So are there some typical examples or details about multigrid? Is it used
>> just like other preconditioners
>> like jacobi, sor, which can be simply used in the cammand line options?
>> Thanks.
>> Jim
>>
>>
>>
>


--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
TAY wee-beng
2012-10-30 22:34:28 UTC
Permalink
On 30/10/2012 1:44 PM, Matthew Knepley wrote:
> On Tue, Oct 30, 2012 at 7:41 AM, TAY wee-beng <***@gmail.com
> <mailto:***@gmail.com>> wrote:
>
> On 28/10/2012 2:17 PM, Jed Brown wrote:
>>
>> Algebraic multigrid can be used directly, -pc_type gamg
>> -pc_gamg_agg_nsmooths 1. Geometric either required that you use
>> the PCMG interface to set interpolation (and provide a coarse
>> operator for non-Galerkin) or use a DM that provides coarsening
>> capability.
>>
>> What kind of problem are you solving?
>>
>
> Hi,
>
> May I know if there is an example which explains using the
> geometric MG, either using PCMG or DM?
>
>
> SNES ex5 and ex19 both use MG.
>
> Matt
Are there examples which use KSP and geometric MG? I want to solve a
Poisson eqn which has been discretized to give a system of linear eqns.

I also found this thread :

http://lists.mcs.anl.gov/pipermail/petsc-users/2012-August/015073.html

Is it using geometric MG to solve a sys of linear eqns?
>
> Yours sincerely,
>
> TAY wee-beng
>
>
>> On Oct 28, 2012 6:09 AM, "w_ang_temp" <***@163.com
>> <mailto:***@163.com>> wrote:
>>
>> Hello,
>> I want to use the multigrid as a preconditioner. The
>> introduction about it in the manual is little.
>> So are there some typical examples or details about
>> multigrid? Is it used just like other preconditioners
>> like jacobi, sor, which can be simply used in the cammand
>> line options?
>> Thanks.
>> Jim
>>
>>
>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener
Matthew Knepley
2012-10-30 22:49:33 UTC
Permalink
On Tue, Oct 30, 2012 at 6:34 PM, TAY wee-beng <***@gmail.com> wrote:

> On 30/10/2012 1:44 PM, Matthew Knepley wrote:
>
> On Tue, Oct 30, 2012 at 7:41 AM, TAY wee-beng <***@gmail.com> wrote:
>
>> On 28/10/2012 2:17 PM, Jed Brown wrote:
>>
>> Algebraic multigrid can be used directly, -pc_type gamg
>> -pc_gamg_agg_nsmooths 1. Geometric either required that you use the PCMG
>> interface to set interpolation (and provide a coarse operator for
>> non-Galerkin) or use a DM that provides coarsening capability.
>>
>> What kind of problem are you solving?
>>
>>
>> Hi,
>>
>> May I know if there is an example which explains using the geometric MG,
>> either using PCMG or DM?
>>
>
> SNES ex5 and ex19 both use MG.
>
> Matt
>
> Are there examples which uses ksp and geometric MG? I want to solve a
> Poisson eqn which has been discretized to give a sys of linear eqns.
>

This is geometric MG. Did you run any of the tutorial runs? For example,
src/snes/examples/tutorials/makefile has

runex5:
-@${MPIEXEC} -n 1 ./ex5 -pc_type mg -ksp_monitor_short -snes_view
-pc_mg_levels 3 -pc_mg_galerkin -da_grid_x 17 -da_grid_y 17
-mg_levels_ksp_monitor_short -mg_levels_ksp_norm_type unpreconditioned
-snes_monitor_short -mg_levels_ksp_chebyshev_estimate_eigenvalues 0.5,1.1
-mg_levels_pc_type sor -pc_mg_type full > ex5_1.tmp 2>&1; \
if (${DIFF} output/ex5_1.out ex5_1.tmp) then true; \
else echo ${PWD} "\nPossible problem with with ex5, diffs above
\n========================================="; fi; \
${RM} -f ex5_1.tmp

and many more.
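
For a pure KSP solve of a Poisson equation, the DMDA-based KSP tutorials are probably closer to what you want; a sketch of such a run (ex45 here refers to the 3D Laplacian tutorial in src/ksp/ksp/examples/tutorials; check that directory and adjust the grid/refinement options so the DMDA can actually be coarsened):

  mpiexec -n 4 ./ex45 -da_refine 2 -pc_type mg -pc_mg_levels 3 -pc_mg_galerkin -ksp_monitor_short -ksp_converged_reason

Because the DMDA provides the grid hierarchy, -pc_type mg needs no extra interpolation code from you.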

Matt


> I also found this thread :
>
> http://lists.mcs.anl.gov/pipermail/petsc-users/2012-August/015073.html
>
> Is it using geometric MG to solve a sys of linear eqns?
>
>
>
>> Yours sincerely,
>>
>> TAY wee-beng
>>
>>
>> On Oct 28, 2012 6:09 AM, "w_ang_temp" <***@163.com> wrote:
>>
>>> Hello,
>>> I want to use the multigrid as a preconditioner. The introduction
>>> about it in the manual is little.
>>> So are there some typical examples or details about multigrid? Is it
>>> used just like other preconditioners
>>> like jacobi, sor, which can be simply used in the cammand line options?
>>> Thanks.
>>>
>>> Jim
>>>
>>>
>>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
>
>


--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
w_ang_temp
2012-11-09 15:05:01 UTC
Permalink
Hello, Jed

Sorry again for the interruption.

As you said, if I want to use Algebraic multigrid, I can just use '-pc_type gamg
-pc_gamg_agg_nsmooths 1' in the command line. I think there are many other -pc_gamg_XXX
parameters. How can I find all the -pc_gamg_XXX?

Besides, I want to know more about AMG. Are there any details from PETSc, like a manual,
documentation, FAQ, etc.?
Also, from the above post, I want to know whether AMG cannot be used in 3.2 via command
line options, because then I would have to use 3.3 for AMG, and I just do not want to change
from my version 3.2 now.

My major is Civil Engineering and I am not good at math and programming, so please
forgive my weakness.
Thanks.
Jim








>At 2012-10-28 21:17:00,"Jed Brown" <***@mcs.anl.gov> wrote:


>Algebraic multigrid can be used directly, -pc_type gamg -pc_gamg_agg_nsmooths 1. Geometric either required that you use the PCMG interface to set >interpolation (and provide a coarse operator for non-Galerkin) or use a DM that provides coarsening capability.

>What kind of problem are you solving?

>>On Oct 28, 2012 6:09 AM, "w_ang_temp" <***@163.com> wrote:

>>Hello,
>> I want to use the multigrid as a preconditioner. The introduction about it in the manual is little.
>>So are there some typical examples or details about multigrid? Is it used just like other preconditioners
>>like jacobi, sor, which can be simply used in the cammand line options?
>> Thanks.
> > Jim
Mark F. Adams
2012-11-09 15:15:34 UTC
Permalink
On Nov 9, 2012, at 10:05 AM, w_ang_temp <***@163.com> wrote:

> Hello,Jed
>
> Sorry again for the interruption.
>
> As you said, if I want to use Algebraic multigrid, I can just use '-pc_type gamg
> -pc_gamg_agg_nsmooths 1' in the command line. I think there are many other -pc_gamg_XXX
> parameters. How can I find all the -pc_gamg_XXX?

-help
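
For example, with the executable from your earlier mails (a sketch):

  ./ex4f -pc_type gamg -pc_gamg_agg_nsmooths 1 -help | grep gamg

Note that -help only prints the options of objects that are actually configured, so you need to select the gamg PC type on the same command line.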

>
> Besides, I want to know more about AMG. Are there any details from PETSc, like manual,
> documentaion, FAQ, etc?

I would recommend the book "Multigrid" by Trottenberg et al., and the classic "A Multigrid Tutorial" is also good to look at.

> Also, from the above post, I want to know that if AMG can not be used in 3.2 by command
> line option, because now I have to use 3.3 for using AMG. I just donot want to change my
> version 3.2 now.

You can use 3.2 if you like (free country) but this code was developed a lot between 3.2 and 3.3; I forget what was in 3.2 exactly. If you have a straightforward problem and 3.2 works, then it is probably OK for your current purposes.

>
> I am major in Civil Engineering and not good at math and programming. So please
> forgive my weakness.
> Thanks.
> Jim
>
>
>
>
>
>
> >At 2012-10-28 21:17:00,"Jed Brown" <***@mcs.anl.gov> wrote:
> >Algebraic multigrid can be used directly, -pc_type gamg -pc_gamg_agg_nsmooths 1. Geometric either required that you use the PCMG interface to set >interpolation (and provide a coarse operator for non-Galerkin) or use a DM that provides coarsening capability.
>
> >What kind of problem are you solving?
>
> >>On Oct 28, 2012 6:09 AM, "w_ang_temp" <***@163.com> wrote:
> >>Hello,
> >> I want to use the multigrid as a preconditioner. The introduction about it in the manual is little.
> >>So are there some typical examples or details about multigrid? Is it used just like other preconditioners
> >>like jacobi, sor, which can be simply used in the cammand line options?
> >> Thanks.
> > > Jim
>
>
>
>
w_ang_temp
2012-11-09 15:28:02 UTC
Permalink
By the way, as a graduate student, I find it difficult to write a paper just by using PETSc
to deal with a large problem, because it seems there is no innovative idea in simply using
the available tools. It is also not easy to get something new based on the src of PETSc.
Any suggestions?

Thanks.
Jim
Matthew Knepley
2012-11-09 15:38:00 UTC
Permalink
On Fri, Nov 9, 2012 at 10:28 AM, w_ang_temp <***@163.com> wrote:
> By the way, as a graduate student, I find that it is difficult to write a
> paper just using PETSc
> to deal with a large problem. Because it seems that there is no an
> innovative idea by just using
> the available things. It is also not easy to get something new based on the
> src of PETSc.

You are supposed to be writing papers about your results.

Matt

> Any suggestions?
> Thanks.
> Jim
>
>



--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
Jed Brown
2012-11-09 15:44:45 UTC
Permalink
On Fri, Nov 9, 2012 at 9:28 AM, w_ang_temp <***@163.com> wrote:

> By the way, as a graduate student, I find that it is difficult to write a
> paper just using PETSc
> to deal with a large problem. Because it seems that there is no an
> innovative idea by just using
> the available things.
>

Reimplementing an existing method is not new either. Now there are many
papers that reimplement existing methods under a new name, without citing
its existing use, and usually with less rigorous analysis than earlier
work. This is caused either by stubborn ignorance or intentional deception.
It's not science and only survives due to negligent reviewers.

Creating genuinely new methods that solve meaningful problems and work
better than existing methods is Hard. To start with, it's important to
understand the capabilities and limitations of existing methods. Unless
your intended research area is pretty focused, expect to read at least 1000
papers and write dozens of experimental codes. You might be able to rely on
your advisor to accelerate this process, but advisors are not always right.


> It is also not easy to get something new based on the src of PETSc.
>

If you work in an application area, focus on the modeling and method
components specific to your problem. Build the special ingredients using
PETSc and let the library do the rest of the work. Better understanding of
methods will help you do this faster and in more powerful ways.
w_ang_temp
2012-11-09 16:14:22 UTC
Permalink
Sincere thanks for your advice. Thanks, Matthew, too. I will try my best.

Jim





>On 2012-11-09 23:44:45, "Jed Brown" <***@mcs.anl.gov> wrote:

>>On Fri, Nov 9, 2012 at 9:28 AM, w_ang_temp <***@163.com> wrote:

>>By the way, as a graduate student, I find that it is difficult to write a paper just using PETSc
>>to deal with a large problem. Because it seems that there is no an innovative idea by just using
>>the available things.


>Reimplementing an existing method is not new either. Now there are many papers that reimplement existing methods under a new name, without citing its >existing use, and usually with less rigorous analysis than earlier work. This is caused either by stubborn ignorance or intentional deception. It's not science >and only survives due to negligent reviewers.


>Creating genuinely new methods that solve meaningful problems and work better than existing methods is Hard. To start with, it's important to understand >the capabilities and limitations of existing methods. Unless your intended research area is pretty focused, expect to read at least 1000 papers and write >dozens of experimental codes. You might be able to rely on your advisor to accelerate this process, but advisors are not always right.

>>It is also not easy to get something new based on the src of PETSc.


>If you work in an application area, focus on the modeling and method components specific to your problem. Build the special ingredients using PETSc and >let the library do the rest of the work. Better understanding of methods will help you do this faster and in more powerful ways.
Barry Smith
2012-11-09 22:34:44 UTC
Permalink
I remember a conversation in 1994 I had with Lennart Johnsson who led the development of the math libraries for the Thinking Machines (which were way ahead of their time). He recounted being told by various academic types that they would not be purchasing Thinking Machines but instead IBM Sps and Intel paragons because those other machines had no math libraries thus there was plenty of "research" the academic types could do on them that would be redundant (and hence unpublishable) on the Thinking Machines.

So what you do depends on your research plans.

1) If you are in a scientific area (materials, engineering, whatever) then you should use PETSc to do NEW simulations that other people have not done yet (and cannot do), with new or more detailed models etc. For example, if everyone in your community drops some "stuff" from their model because it is "too hard" to include, you can include it in the model and then demonstrate that it is important. Don't just run the standard models people have been running for decades; do models that other people can only dream about.

2) If you are in mathematics/CS numerical analysis then you need to do innovative things that combine several capabilities in new ways.

Barry



On Nov 9, 2012, at 9:28 AM, w_ang_temp <***@163.com> wrote:

> By the way, as a graduate student, I find that it is difficult to write a paper just using PETSc
> to deal with a large problem. Because it seems that there is no an innovative idea by just using
> the available things. It is also not easy to get something new based on the src of PETSc.
> Any suggestions?
> Thanks.
> Jim
>
>
Mark F. Adams
2012-11-09 23:59:50 UTC
Permalink
I would just add a (3), or extend (2), by saying that PETSc provides an ideal language in which to develop new algorithms, using just PETSc's parallel arithmetic and tools, assuming your algorithm is not so different that it cannot even use PETSc's tools.

On Nov 9, 2012, at 5:34 PM, Barry Smith <***@mcs.anl.gov> wrote:

>
> I remember a conversation in 1994 I had with Lennart Johnsson who led the development of the math libraries for the Thinking Machines (which were way ahead of their time). He recounted being told by various academic types that they would not be purchasing Thinking Machines but instead IBM Sps and Intel paragons because those other machines had no math libraries thus there was plenty of "research" the academic types could do on them that would be redundant (and hence unpublishable) on the Thinking Machines.
>
> So what you do depends on your research plans.
>
> 1) If you are in a scientific area (materials, engineering, …. whatever) then you should use PETSc to do NEW simulations that other people have not done yet (and cannot do) with new or more detailed models etc. For example, if everyone in your community drops some "stuff" from their model because they "are too hard" to include you can include them in the model and then demonstrate they are important. Don't just run the standard models people have been running for decades do models that other people can only dream about.
>
> 2) If you are in mathematics/CS numerical analysis then you need to do innovative things that combine several capabilities in new ways.
>
> Barry
>
>
>
> On Nov 9, 2012, at 9:28 AM, w_ang_temp <***@163.com> wrote:
>
>> By the way, as a graduate student, I find that it is difficult to write a paper just using PETSc
>> to deal with a large problem. Because it seems that there is no an innovative idea by just using
>> the available things. It is also not easy to get something new based on the src of PETSc.
>> Any suggestions?
>> Thanks.
>> Jim
>>
>>
>
>
A.L. Siahaan
2012-11-09 16:37:43 UTC
Permalink
Is PCGAMG a combination of AMG and geometric multigrid ? Do we have to
choose either pure AMG (when using -pc_type gamg -pc_gamg_agg_nsmooths 1)
or pure geometric MG (type="geo") ?
Mark F. Adams
2012-11-09 17:09:58 UTC
Permalink
On Nov 9, 2012, at 11:37 AM, "A.L. Siahaan" <***@cam.ac.uk> wrote:

> Is PCGAMG a combination of AMG and geometric multigrid ? Do we have to choose either pure AMG (when using -pc_type gamg -pc_gamg_agg_nsmooths 1) or pure geometric MG (type="geo") ?

Think of GAMG as an AMG solver. "geo" is an unstructured GMG method that is there as a reference implementation, to stress the framework a bit, but it is not for real use.

> From my thread search in this mailing list about using PCGAMG, it is mostly advised to use pure AMG (-pc_type gamg -pc_gamg_agg_nsmooths 1).

Yes

>
> Of all algebraic multigrid implemented or interfaced in PETSc, i.e. BoomerAMG (PCHypre), ML(PCML), and PCGAMG("agg"), what is the order of the cost (memory) to set up the preconditioner (setting nullspace etc) ?

Not a simple question. ML and GAMG use very similar algorithms and ML is implemented in a more "native" way than hypre (GAMG is completely native). GAMG/ML are probably better for elasticity, hypre is great for 2D low order discretizations of Laplacian, and it is a very hardened implementation. hypre is not native so there are some things that don't work as well.
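
Since all three are selected purely at runtime, the cheapest way to answer the cost question for your own problem is to run it three ways with -log_summary and compare the PCSetUp time and the memory summary (a sketch; it assumes PETSc was configured with --download-ml=1 and --download-hypre=1):

  -pc_type gamg -pc_gamg_agg_nsmooths 1
  -pc_type ml
  -pc_type hypre -pc_hypre_type boomeramg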

>
> Regards,
> Antony
>
>
>
> On Nov 9 2012, Mark F. Adams wrote:
>
>>
>> On Nov 9, 2012, at 10:05 AM, w_ang_temp <***@163.com> wrote:
>>
>>> Hello,Jed
>>> Sorry again for the interruption.
>>> As you said, if I want to use Algebraic multigrid, I can just use '-pc_type gamg -pc_gamg_agg_nsmooths 1' in the command line. I think there are many other -pc_gamg_XXX parameters. How can I find all the -pc_gamg_XXX?
>>
>> -help
>>
>>> Besides, I want to know more about AMG. Are there any details from PETSc, like manual, documentaion, FAQ, etc?
>>
>> I would recommend the book "Multigrid" by Trottenberg, et al, and the classic "Multigrid Tutorial" is good to look at also.
>>
>>> Also, from the above post, I want to know that if AMG can not be used in 3.2 by command line option, because now I have to use 3.3 for using AMG. I just donot want to change my version 3.2 now.
>>
>> You can use 3.2 if you like (free country) but this code was developed a lot between 3.2 and 3.3. I forget what was in 3.2 exactly. If you have a straight forward problem and 3.2 works then its is probably OK for your current purposes.
>>
>>> I am major in Civil Engineering and not good at math and programming. So please forgive my weakness.
>>> Thanks.
>>> Jim > At 2012-10-28 21:17:00,"Jed Brown" <***@mcs.anl.gov> wrote: > Algebraic multigrid can be used directly, -pc_type gamg > -pc_gamg_agg_nsmooths 1. Geometric either required that you use the > PCMG interface to set >interpolation (and provide a coarse operator > for non-Galerkin) or use a DM that provides coarsening capability.
>>> >What kind of problem are you solving?
>>> >>On Oct 28, 2012 6:09 AM, "w_ang_temp" <***@163.com> wrote:
>>> >>Hello,
>>> >> I want to use the multigrid as a preconditioner. The introduction >> about it in the manual is little.
>>> >> So are there some typical examples or details about multigrid? Is it >> used just like other preconditioners like jacobi, sor, which can be >> simply used in the cammand line options?
>>> >> Thanks.
>>> > > > > Jim
>>
>>
>
A.L. Siahaan
2012-11-09 17:26:11 UTC
Permalink
Thanks a lot, Mark! What does 'native' mean here? That it is PETSc's
built-in and not an interface?

Antony

On Nov 9 2012, Mark F. Adams wrote:

>> Of all algebraic multigrid implemented or interfaced in PETSc, i.e.
>> BoomerAMG (PCHypre), ML(PCML), and PCGAMG("agg"), what is the order of
>> the cost (memory) to set up the preconditioner (setting nullspace etc) ?
>
> Not a simple question. ML and GAMG use very similar algorithms and ML is
> implemented in a more "native" way than hypre (GAMG is completely
> native). GAMG/ML are probably better for elasticity, hypre is great for
> 2D low order discretizations of Laplacian, and it is a very hardened
> implementation. hypre is not native so there are some things that don't
> work as well.
>
>>
>> Regards,
>> Antony
>>
Mark F. Adams
2012-11-09 17:50:35 UTC
Permalink
yes
On Nov 9, 2012, at 12:26 PM, "A.L. Siahaan" <***@cam.ac.uk> wrote:

> Thank's a lot, Mark ! What does 'native' mean here ? That it is PETSc's built-in and not interface ?
>
> Antony
>
> On Nov 9 2012, Mark F. Adams wrote:
>
>>> Of all algebraic multigrid implemented or interfaced in PETSc, i.e. BoomerAMG (PCHypre), ML(PCML), and PCGAMG("agg"), what is the order of the cost (memory) to set up the preconditioner (setting nullspace etc) ?
>>
>> Not a simple question. ML and GAMG use very similar algorithms and ML is implemented in a more "native" way than hypre (GAMG is completely native). GAMG/ML are probably better for elasticity, hypre is great for 2D low order discretizations of Laplacian, and it is a very hardened implementation. hypre is not native so there are some things that don't work as well.
>>
>>> Regards,
>>> Antony
>
>
A.L. Siahaan
2012-11-09 18:13:11 UTC
Permalink
When you mentioned that ML is implemented in a more "native" way than
Hypre, did you mean that PCML is not exactly a PETSc interface to the
Trilinos ML?

On Nov 9 2012, Mark F. Adams wrote:

>yes
>On Nov 9, 2012, at 12:26 PM, "A.L. Siahaan" <***@cam.ac.uk> wrote:
>
>> Thank's a lot, Mark ! What does 'native' mean here ? That it is PETSc's
>> built-in and not interface ?
>>
>> Antony
>>
>> On Nov 9 2012, Mark F. Adams wrote:
>>
>>>> Of all algebraic multigrid implemented or interfaced in PETSc, i.e.
>>>> BoomerAMG (PCHypre), ML(PCML), and PCGAMG("agg"), what is the order of
>>>> the cost (memory) to set up the preconditioner (setting nullspace etc)
>>>> ?
>>>
>>> Not a simple question. ML and GAMG use very similar algorithms and ML
>>> is implemented in a more "native" way than hypre (GAMG is completely
>>> native). GAMG/ML are probably better for elasticity, hypre is great for
>>> 2D low order discretizations of Laplacian, and it is a very hardened
>>> implementation. hypre is not native so there are some things that don't
>>> work as well.
>>>
>>>> Regards,
>>>> Antony
>>
>>
>
>
Jed Brown
2012-11-09 18:23:31 UTC
Permalink
On Fri, Nov 9, 2012 at 12:13 PM, A.L. Siahaan <***@cam.ac.uk> wrote:

> When you mentioned that ML is implemented in a more "native" way than
> Hypre, did you mean that PCML is not exactly a PETSc's interface to the
> Trilinos ML ?


ML exposes the multigrid hierarchy, allowing great smoother and coarse grid
flexibility, monitoring, and even composition with other systems. In
contrast, BoomerAMG is a "black box" so only those methods they
specifically support are available.
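
For example, because PCML (like PCGAMG) builds on the PCMG infrastructure, the per-level smoothers and the coarse solve can be changed from the options database, e.g. (a sketch):

  -pc_type ml -mg_levels_ksp_type chebyshev -mg_levels_pc_type sor -mg_coarse_pc_type lu -ksp_view

whereas with -pc_type hypre -pc_hypre_type boomeramg you are limited to the -pc_hypre_boomeramg_* options that hypre itself exposes.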
Thomas Witkowski
2012-11-09 17:37:40 UTC
Permalink
While we're at it, can some of you tell me a bit about the parallel
scaling of algebraic multigrid methods? In most of my codes I use them
to precondition some simple blocks, e.g. a Laplace matrix. I'm pretty sure
that the parallel scaling of my solver is limited mostly by the scaling of the
AMG method that is used. What are the expectations when going to 10^3
or 10^4 cores?

Thomas

Am 09.11.2012 18:09, schrieb Mark F. Adams:
> On Nov 9, 2012, at 11:37 AM, "A.L. Siahaan" <***@cam.ac.uk> wrote:
>
>> Is PCGAMG a combination of AMG and geometric multigrid ? Do we have to choose either pure AMG (when using -pc_type gamg -pc_gamg_agg_nsmooths 1) or pure geometric MG (type="geo") ?
> Thing of GAMG as an AMG solver. "geo" is an unstructured GMG method that is there as a reference implementation, to stress the framework a bit, but is not for real use.
>
>>
Jed Brown
2012-11-09 17:40:23 UTC
Permalink
Don't speculate, send -log_summary and information about the problem and
machine. Some setup operations degrade, but solves should scale pretty well.

On Fri, Nov 9, 2012 at 11:37 AM, Thomas Witkowski <
***@tu-dresden.de> wrote:

> While we're at it, can some of you tell me some world about parallel
> scaling of algebraic multigrid methods? In most of my codes I use them to
> precondition some simple blocks, e.g. Laplace matrix. I'm pretty sure that
> parallel scaling of my solver is limited mostly by scaling of the AMG
> method which is used. What are the expectations, when going to 10^3 or 10^4
> cores?
Thomas Witkowski
2012-11-09 17:43:55 UTC
Permalink
:) It was just a question of whether it makes sense to run my solver at
this number of cores, or whether AMG by itself will not scale here
anymore. As my computational time is limited, I have to think
carefully before using 10^4 cores.

Thomas


Am 09.11.2012 18:40, schrieb Jed Brown:
> Don't speculate, send -log_summary and information about the problem
> and machine. Some setup operations degrade, but solves should scale
> pretty well.
>
> On Fri, Nov 9, 2012 at 11:37 AM, Thomas Witkowski
> <***@tu-dresden.de
> <mailto:***@tu-dresden.de>> wrote:
>
> While we're at it, can some of you tell me some world about
> parallel scaling of algebraic multigrid methods? In most of my
> codes I use them to precondition some simple blocks, e.g. Laplace
> matrix. I'm pretty sure that parallel scaling of my solver is
> limited mostly by scaling of the AMG method which is used. What
> are the expectations, when going to 10^3 or 10^4 cores?
>
>
Jed Brown
2012-11-09 17:49:40 UTC
Permalink
On Fri, Nov 9, 2012 at 11:43 AM, Thomas Witkowski <
***@tu-dresden.de> wrote:

> :) It was just a question whether is makes sense to run my solver in this
> range of number of cores, or whether AMG by itself will not scale here
> anymore. As my computational time is limited, I have to think carefully
> when using 10^4 cores.


Yes, but do a shorter run before a long production run if you need an
accurate estimate of how long it will take and how much memory it will
need. Depending on the architecture, you can extrapolate, but be sure to
control for subdomain sizes.
Mark F. Adams
2012-11-09 18:40:45 UTC
Permalink
If your machine can "scale" a dot product then it can scale AMG. That said, implementing AMG to scale well takes work. GAMG should be able to go to 10^4 processes. We are working on some scaling problems in the setup code and I'm not too sure what and where GAMG will break as you scale up. I'd be curious to know. hypre should scale pretty well.


On Nov 9, 2012, at 12:43 PM, Thomas Witkowski <***@tu-dresden.de> wrote:

> :) It was just a question whether is makes sense to run my solver in this range of number of cores, or whether AMG by itself will not scale here anymore. As my computational time is limited, I have to think carefully when using 10^4 cores.
>
> Thomas
>
>
> Am 09.11.2012 18:40, schrieb Jed Brown:
>> Don't speculate, send -log_summary and information about the problem and machine. Some setup operations degrade, but solves should scale pretty well.
>>
>> On Fri, Nov 9, 2012 at 11:37 AM, Thomas Witkowski <***@tu-dresden.de> wrote:
>> While we're at it, can some of you tell me some world about parallel scaling of algebraic multigrid methods? In most of my codes I use them to precondition some simple blocks, e.g. Laplace matrix. I'm pretty sure that parallel scaling of my solver is limited mostly by scaling of the AMG method which is used. What are the expectations, when going to 10^3 or 10^4 cores?
>>
>