Discussion:
[petsc-users] GAMG and zero pivots follow up
David Knezevic
2015-11-11 02:39:44 UTC
I'm looking into using GAMG, so I wanted to start with a simple 3D
elasticity problem. When I first tried this, I got the following "zero
pivot" error:

-----------------------------------------------------------------------

[0]PETSC ERROR: Zero pivot in LU factorization:
http://www.mcs.anl.gov/petsc/documentation/faq.html#zeropivot
[0]PETSC ERROR: Zero pivot, row 3
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.6.1, Jul, 22, 2015
[0]PETSC ERROR: /home/dknez/akselos-dev/scrbe/build/bin/fe_solver-opt_real
on a arch-linux2-c-opt named david-Lenovo by dknez Tue Nov 10 21:26:39 2015
[0]PETSC ERROR: Configure options --with-shared-libraries=1
--with-debugging=0 --download-suitesparse --download-parmetis
--download-blacs
--with-blas-lapack-dir=/opt/intel/system_studio_2015.2.050/mkl
--CXXFLAGS=-Wl,--no-as-needed --download-scalapack --download-mumps
--download-metis --download-superlu_dist
--prefix=/home/dknez/software/libmesh_install/opt_real/petsc
--download-hypre --download-ml
[0]PETSC ERROR: #1 PetscKernel_A_gets_inverse_A_5() line 48 in
/home/dknez/software/petsc-3.6.1/src/mat/impls/baij/seq/dgefa5.c
[0]PETSC ERROR: #2 MatSOR_SeqAIJ_Inode() line 2808 in
/home/dknez/software/petsc-3.6.1/src/mat/impls/aij/seq/inode.c
[0]PETSC ERROR: #3 MatSOR() line 3697 in
/home/dknez/software/petsc-3.6.1/src/mat/interface/matrix.c
[0]PETSC ERROR: #4 PCApply_SOR() line 37 in
/home/dknez/software/petsc-3.6.1/src/ksp/pc/impls/sor/sor.c
[0]PETSC ERROR: #5 PCApply() line 482 in
/home/dknez/software/petsc-3.6.1/src/ksp/pc/interface/precon.c
[0]PETSC ERROR: #6 KSP_PCApply() line 242 in
/home/dknez/software/petsc-3.6.1/include/petsc/private/kspimpl.h
[0]PETSC ERROR: #7 KSPInitialResidual() line 63 in
/home/dknez/software/petsc-3.6.1/src/ksp/ksp/interface/itres.c
[0]PETSC ERROR: #8 KSPSolve_GMRES() line 235 in
/home/dknez/software/petsc-3.6.1/src/ksp/ksp/impls/gmres/gmres.c
[0]PETSC ERROR: #9 KSPSolve() line 604 in
/home/dknez/software/petsc-3.6.1/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: #10 KSPSolve_Chebyshev() line 381 in
/home/dknez/software/petsc-3.6.1/src/ksp/ksp/impls/cheby/cheby.c
[0]PETSC ERROR: #11 KSPSolve() line 604 in
/home/dknez/software/petsc-3.6.1/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: #12 PCMGMCycle_Private() line 19 in
/home/dknez/software/petsc-3.6.1/src/ksp/pc/impls/mg/mg.c
[0]PETSC ERROR: #13 PCMGMCycle_Private() line 48 in
/home/dknez/software/petsc-3.6.1/src/ksp/pc/impls/mg/mg.c
[0]PETSC ERROR: #14 PCApply_MG() line 338 in
/home/dknez/software/petsc-3.6.1/src/ksp/pc/impls/mg/mg.c
[0]PETSC ERROR: #15 PCApply() line 482 in
/home/dknez/software/petsc-3.6.1/src/ksp/pc/interface/precon.c
[0]PETSC ERROR: #16 KSP_PCApply() line 242 in
/home/dknez/software/petsc-3.6.1/include/petsc/private/kspimpl.h
[0]PETSC ERROR: #17 KSPSolve_CG() line 139 in
/home/dknez/software/petsc-3.6.1/src/ksp/ksp/impls/cg/cg.c
[0]PETSC ERROR: #18 KSPSolve() line 604 in
/home/dknez/software/petsc-3.6.1/src/ksp/ksp/interface/itfunc.c

-----------------------------------------------------------------------

I saw that there was a thread about this in September (subject: "gamg and
zero pivots"), and that the fix is to use "-mg_levels_pc_type jacobi." When
I do that, the solve succeeds (I pasted the -ksp_view at the end of this
email).

So I have two questions about this:

1. Is it surprising that I hit this issue for a 3D elasticity problem? Note
that matrix assembly was done in libMesh; I can look into the structure of
the assembled matrix more carefully if needed. Also, note that I can solve
this problem with direct solvers just fine.

2. Is there a way to set "-mg_levels_pc_type jacobi" programmatically,
rather than via the command line?
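(For reference, one way this can be done in code is sketched below; the helper
name is made up, and the two-argument PetscOptionsSetValue() signature shown is
the PETSc 3.6 one. Putting the option into the options database before
KSPSetFromOptions() is the least intrusive route; reaching into the PCMG levels
directly is also possible after setup.)

#include <petscksp.h>

/* Sketch only: two ways to get the effect of -mg_levels_pc_type jacobi in code.
   The helper name and the assumption that ksp already has PCGAMG attached are
   illustrative, not from the original thread. */
PetscErrorCode UseJacobiSmoothers(KSP ksp)
{
  PetscErrorCode ierr;
  PC             pc;
  PetscInt       nlevels, l;

  /* Option A: push the flag into the options database before KSPSetFromOptions()
     runs, so it behaves exactly like the command-line option. */
  ierr = PetscOptionsSetValue("-mg_levels_pc_type", "jacobi");CHKERRQ(ierr);

  /* Option B: after setup, walk the multigrid levels and set each smoother's PC. */
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = KSPSetUp(ksp);CHKERRQ(ierr);              /* levels exist only after setup */
  ierr = PCMGGetLevels(pc, &nlevels);CHKERRQ(ierr);
  for (l = 1; l < nlevels; l++) {                  /* level 0 is the coarse solve */
    KSP smoother;
    PC  spc;
    ierr = PCMGGetSmoother(pc, l, &smoother);CHKERRQ(ierr);
    ierr = KSPGetPC(smoother, &spc);CHKERRQ(ierr);
    ierr = PCSetType(spc, PCJACOBI);CHKERRQ(ierr);
  }
  return 0;
}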

Thanks,
David

-----------------------------------------------------------------------

ksp_view output:


KSP Object: 1 MPI processes
type: cg
maximum iterations=5000
tolerances: relative=1e-12, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
type: gamg
MG: type is MULTIPLICATIVE, levels=5 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
GAMG specific options
Threshold for dropping small values from graph 0
AGG specific options
Symmetric graph false
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_) 1 MPI processes
type: bjacobi
block Jacobi: number of blocks = 1
Local solve is same for all blocks, in the following KSP and PC objects:
KSP Object: (mg_coarse_sub_) 1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_sub_) 1 MPI processes
type: lu
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
matrix ordering: nd
factor fill ratio given 5, needed 1
Factored matrix follows:
Mat Object: 1 MPI processes
type: seqaij
rows=30, cols=30, bs=6
package used to perform factorization: petsc
total: nonzeros=540, allocated nonzeros=540
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 9 nodes, limit used is 5
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=30, cols=30, bs=6
total: nonzeros=540, allocated nonzeros=540
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 9 nodes, limit used is 5
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=30, cols=30, bs=6
total: nonzeros=540, allocated nonzeros=540
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 9 nodes, limit used is 5
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_levels_1_) 1 MPI processes
type: chebyshev
Chebyshev: eigenvalue estimates: min = 0.335276, max = 3.68804
Chebyshev: eigenvalues estimated using gmres with translations [0 0.1; 0 1.1]
KSP Object: (mg_levels_1_esteig_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_1_) 1 MPI processes
type: jacobi
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=72, cols=72, bs=6
total: nonzeros=1728, allocated nonzeros=1728
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 23 nodes, limit used is 5
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 2 -------------------------------
KSP Object: (mg_levels_2_) 1 MPI processes
type: chebyshev
Chebyshev: eigenvalue estimates: min = 0.260121, max = 2.86133
Chebyshev: eigenvalues estimated using gmres with translations [0 0.1; 0 1.1]
KSP Object: (mg_levels_2_esteig_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_2_) 1 MPI processes
type: jacobi
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=174, cols=174, bs=6
total: nonzeros=5796, allocated nonzeros=5796
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 57 nodes, limit used is 5
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 3 -------------------------------
KSP Object: (mg_levels_3_) 1 MPI processes
type: chebyshev
Chebyshev: eigenvalue estimates: min = 0.267401, max = 2.94141
Chebyshev: eigenvalues estimated using gmres with translations [0 0.1; 0 1.1]
KSP Object: (mg_levels_3_esteig_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_3_) 1 MPI processes
type: jacobi
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=828, cols=828, bs=6
total: nonzeros=44496, allocated nonzeros=44496
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 276 nodes, limit used is 5
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 4 -------------------------------
KSP Object: (mg_levels_4_) 1 MPI processes
type: chebyshev
Chebyshev: eigenvalue estimates: min = 0.224361, max = 2.46797
Chebyshev: eigenvalues estimated using gmres with translations [0 0.1; 0 1.1]
KSP Object: (mg_levels_4_esteig_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_4_) 1 MPI processes
type: jacobi
linear system matrix = precond matrix:
Mat Object: () 1 MPI processes
type: seqaij
rows=2676, cols=2676, bs=3
total: nonzeros=94014, allocated nonzeros=94014
total number of mallocs used during MatSetValues calls =0
has attached near null space
using I-node routines: found 892 nodes, limit used is 5
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: () 1 MPI processes
type: seqaij
rows=2676, cols=2676, bs=3
total: nonzeros=94014, allocated nonzeros=94014
total number of mallocs used during MatSetValues calls =0
has attached near null space
using I-node routines: found 892 nodes, limit used is 5
Matthew Knepley
2015-11-11 03:00:17 UTC
Post by David Knezevic
I saw that there was a thread about this in September (subject: "gamg and
zero pivots"), and that the fix is to use "-mg_levels_pc_type jacobi."
When I do that, the solve succeeds (I pasted the -ksp_view at the end of
this email).
1. Is it surprising that I hit this issue for a 3D elasticity problem?
Note that matrix assembly was done in libMesh, I can look into the
structure of the assembled matrix more carefully, if needed. Also, note
that I can solve this problem with direct solvers just fine.
Yes, this seems like a bug, but it could be some strange BC thing I do not
understand.

Naively, the elastic element matrix has a nonzero diagonal. I see that you
are doing LU of size 5. That seems strange for 3D elasticity. Am I missing
something? I would expect block size 3.
Post by David Knezevic
2. Is there a way to set "-mg_levels_pc_type jacobi" programmatically,
rather than via the command line?
I would really discourage you from doing this. It makes your code fragile
and inflexible.
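(The flexible pattern here is to keep solver choices out of the source entirely
and let KSPSetFromOptions() pick them up from the options database at run time;
a minimal sketch, with illustrative names:)

#include <petscksp.h>

/* Sketch: set only the problem-specific data in code and leave solver choices
   to the options database, so they can be changed at run time without
   recompiling. A, b, and x are illustrative names. */
PetscErrorCode SolveWithRuntimeOptions(Mat A, Vec b, Vec x)
{
  PetscErrorCode ierr;
  KSP            ksp;

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  /* Everything else (-ksp_type cg -pc_type gamg -mg_levels_pc_type jacobi ...)
     comes from the command line or an options file. */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  return 0;
}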

Thanks,

Matt
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
David Knezevic
2015-11-11 03:21:54 UTC
Post by David Knezevic
I saw that there was a thread about this in September (subject: "gamg and
zero pivots"), and that the fix is to use "-mg_levels_pc_type jacobi."
When I do that, the solve succeeds (I pasted the -ksp_view at the end of
this email).
1. Is it surprising that I hit this issue for a 3D elasticity problem?
Note that matrix assembly was done in libMesh, I can look into the
structure of the assembled matrix more carefully, if needed. Also, note
that I can solve this problem with direct solvers just fine.
Yes, this seems like a bug, but it could be some strange BC thing I do not
understand.
OK, I can look into the matrix in more detail. I agree that it should have
a non-zero diagonal, so I'll have a look at what's happening with that.
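(A quick way to check is to pull out the diagonal and scan it for zeros; a
minimal sketch against the PETSc C API, where the function name and the 1e-14
threshold are illustrative:)

#include <petscmat.h>

/* Sketch: report any zero (or tiny) entries on the diagonal of A.
   Assumes A is the assembled square matrix; names are illustrative. */
PetscErrorCode CheckDiagonal(Mat A)
{
  PetscErrorCode    ierr;
  Vec               diag;
  const PetscScalar *d;
  PetscInt          i, n;

  ierr = MatCreateVecs(A, &diag, NULL);CHKERRQ(ierr);
  ierr = MatGetDiagonal(A, diag);CHKERRQ(ierr);
  ierr = VecGetLocalSize(diag, &n);CHKERRQ(ierr);
  ierr = VecGetArrayRead(diag, &d);CHKERRQ(ierr);
  for (i = 0; i < n; i++) {
    if (PetscAbsScalar(d[i]) < 1e-14) {
      ierr = PetscPrintf(PETSC_COMM_SELF, "zero-ish diagonal in local row %D: %g\n",
                         i, (double)PetscAbsScalar(d[i]));CHKERRQ(ierr);
    }
  }
  ierr = VecRestoreArrayRead(diag, &d);CHKERRQ(ierr);
  ierr = VecDestroy(&diag);CHKERRQ(ierr);
  return 0;
}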
Naively, the elastic element matrix has a nonzero diagonal. I see that you
are doing LU of size 5. That seems strange for 3D elasticity. Am I missing
something? I would expect block size 3.
I'm not sure what is causing the LU of size 5. Is there a setting to
control that?

Regarding the block size: I set the vector and matrix block size to 3
via VecSetBlockSize and MatSetBlockSize. I also
used MatNullSpaceCreateRigidBody on a vector with block size of 3, and set
the matrix's near nullspace using that.
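(For completeness, that setup looks roughly like the sketch below; the function
and variable names are illustrative, and the block sizes would normally be set
before preallocation and assembly:)

#include <petscmat.h>

/* Sketch of the setup described above: block size 3 plus a rigid-body near
   null space for GAMG. A is the stiffness matrix and coords is a Vec of
   nodal coordinates (interlaced x,y,z); both names are illustrative. */
PetscErrorCode AttachElasticityNearNullSpace(Mat A, Vec coords)
{
  PetscErrorCode ierr;
  MatNullSpace   nearnull;

  ierr = MatSetBlockSize(A, 3);CHKERRQ(ierr);
  ierr = VecSetBlockSize(coords, 3);CHKERRQ(ierr);

  /* Builds the six rigid-body modes (3 translations + 3 rotations in 3D)
     from the coordinate vector. */
  ierr = MatNullSpaceCreateRigidBody(coords, &nearnull);CHKERRQ(ierr);
  ierr = MatSetNearNullSpace(A, nearnull);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&nearnull);CHKERRQ(ierr);
  return 0;
}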
Post by David Knezevic
2. Is there a way to set "-mg_levels_pc_type jacobi" programmatically,
rather than via the command line?
I would really discourage you from doing this. It makes your code fragile
and inflexible.
OK. The reason I asked is that in this case I have to write a bunch of code
to set block sizes and the near nullspace, so I figured it'd be good to
also set the corresponding solver options required to make this work...
anyway, if I can fix the zero diagonal issue, I guess this will be moot.

David
Matthew Knepley
2015-11-11 03:24:31 UTC
Post by David Knezevic
I'm not sure what is causing the LU of size 5. Is there a setting to
control that?
Regarding the block size: I set the vector and matrix block size to 3
via VecSetBlockSize and MatSetBlockSize. I also
used MatNullSpaceCreateRigidBody on a vector with block size of 3, and set
the matrix's near nullspace using that.
Can you run this same example with -mat_no_inode? I think it may be a
strange blocking that is causing this.

Thanks,

Matt
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
David Knezevic
2015-11-11 03:28:10 UTC
Can you run this same example with -mat_no_inode? I think it may be a
strange blocking that is causing this.
That works. The -ksp_view output is below.

Thanks,
David



KSP Object: 1 MPI processes
type: cg
maximum iterations=5000
tolerances: relative=1e-12, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
type: gamg
MG: type is MULTIPLICATIVE, levels=5 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
GAMG specific options
Threshold for dropping small values from graph 0
AGG specific options
Symmetric graph false
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_) 1 MPI processes
type: bjacobi
block Jacobi: number of blocks = 1
Local solve is same for all blocks, in the following KSP and PC
objects:
KSP Object: (mg_coarse_sub_) 1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_sub_) 1 MPI processes
type: lu
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
matrix ordering: nd
factor fill ratio given 5, needed 1
Factored matrix follows:
Mat Object: 1 MPI processes
type: seqaij
rows=30, cols=30, bs=6
package used to perform factorization: petsc
total: nonzeros=540, allocated nonzeros=540
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=30, cols=30, bs=6
total: nonzeros=540, allocated nonzeros=540
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=30, cols=30, bs=6
total: nonzeros=540, allocated nonzeros=540
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_levels_1_) 1 MPI processes
type: chebyshev
Chebyshev: eigenvalue estimates: min = -1.79769e+307, max = -inf
Chebyshev: eigenvalues estimated using gmres with translations [0
0.1; 0 1.1]
KSP Object: (mg_levels_1_esteig_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_1_) 1 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1,
omega = 1
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=72, cols=72, bs=6
total: nonzeros=1728, allocated nonzeros=1728
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 2 -------------------------------
KSP Object: (mg_levels_2_) 1 MPI processes
type: chebyshev
Chebyshev: eigenvalue estimates: min = -1.79769e+307, max = -inf
Chebyshev: eigenvalues estimated using gmres with translations [0
0.1; 0 1.1]
KSP Object: (mg_levels_2_esteig_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_2_) 1 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1,
omega = 1
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=174, cols=174, bs=6
total: nonzeros=5796, allocated nonzeros=5796
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 3 -------------------------------
KSP Object: (mg_levels_3_) 1 MPI processes
type: chebyshev
Chebyshev: eigenvalue estimates: min = -1.79769e+307, max = -inf
Chebyshev: eigenvalues estimated using gmres with translations [0
0.1; 0 1.1]
KSP Object: (mg_levels_3_esteig_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_3_) 1 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1,
omega = 1
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=828, cols=828, bs=6
total: nonzeros=44496, allocated nonzeros=44496
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 4 -------------------------------
KSP Object: (mg_levels_4_) 1 MPI processes
type: chebyshev
Chebyshev: eigenvalue estimates: min = 0.0998367, max = 1.0982
Chebyshev: eigenvalues estimated using gmres with translations [0
0.1; 0 1.1]
KSP Object: (mg_levels_4_esteig_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_4_) 1 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1,
omega = 1
linear system matrix = precond matrix:
Mat Object: () 1 MPI processes
type: seqaij
rows=2676, cols=2676, bs=3
total: nonzeros=94014, allocated nonzeros=94014
total number of mallocs used during MatSetValues calls =0
has attached near null space
not using I-node routines
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Mat Object: () 1 MPI processes
type: seqaij
rows=2676, cols=2676, bs=3
total: nonzeros=94014, allocated nonzeros=94014
total number of mallocs used during MatSetValues calls =0
has attached near null space
not using I-node routines
