Discussion:
[petsc-users] summary of the bandwidth received with different number of MPI processes
TAY wee-beng
2015-10-31 16:41:19 UTC
Hi,

It's mentioned that for a batch system, I have to:

1. cd src/benchmarks/streams
2. make MPIVersion
3. submit MPIVersion to the batch system a number of times with 1, 2, 3,
etc. MPI processes, collecting all of the output from the runs into the
single file scaling.log.
4. copy scaling.log into the src/benchmarks/streams directory
5. ./process.py createfile ; process.py

So for step 3, how do I collect all of the output from the runs into the
single file scaling.log?

Should scaling.log look like this?

Number of MPI processes 3 Processor names n12-06 n12-06 n12-06
Triad: 27031.0419 Rate (MB/s)
Number of MPI processes 6 Processor names n12-06 n12-06 n12-06 n12-06
n12-06 n12-06
Triad: 53517.8980 Rate (MB/s)

...
--
Thank you.

Yours sincerely,

TAY wee-beng
Barry Smith
2015-10-31 17:17:37 UTC
Yes, just put the output from running with 1, 2, etc. processes, in order, into the file.
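Concatenating the per-run output files in process-count order can be sketched as follows. This is only a sketch: the `run_*.out` file names are an assumption, so adjust the pattern to whatever your batch script actually produces.

```python
# Sketch: merge per-run STREAMS outputs into a single scaling.log,
# ordered by MPI process count. The "run_3.out"-style file names are
# an assumption; adapt the pattern to your batch script's output.
import glob
import re

def merge_logs(pattern="run_*.out", dest="scaling.log"):
    files = glob.glob(pattern)
    # Sort numerically by the process count embedded in each name,
    # so scaling.log lists the runs in increasing process order.
    files.sort(key=lambda f: int(re.search(r"(\d+)", f).group(1)))
    with open(dest, "w") as log:
        for fname in files:
            with open(fname) as f:
                log.write(f.read())
    return files
```

A plain `cat run_1.out run_2.out ... > scaling.log` in the right order does the same job; the numeric sort just avoids the lexicographic trap where `run_12.out` would sort before `run_3.out`.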
TAY wee-beng
2015-11-01 04:26:45 UTC
Post by Barry Smith
Yes, just put the output from running with 1, 2, etc. processes, in order, into the file.
Hi,

I just did that, but I got some errors.

The scaling.log file is:

Number of MPI processes 3 Processor names n12-06 n12-06 n12-06
Triad: 27031.0419 Rate (MB/s)
Number of MPI processes 6 Processor names n12-06 n12-06 n12-06 n12-06
n12-06 n12-06
Triad: 53517.8980 Rate (MB/s)
Number of MPI processes 12 Processor names n12-06 n12-06 n12-06 n12-06
n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06
Triad: 53162.5346 Rate (MB/s)
Number of MPI processes 24 Processor names n12-06 n12-06 n12-06 n12-06
n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06
n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06
Triad: 101455.6581 Rate (MB/s)
Number of MPI processes 48 Processor names n12-07 n12-07 n12-07 n12-07
n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07
n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07
n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07
n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07
n12-07 n12-07 n12-07 n12-07
Triad: 115575.8960 Rate (MB/s)
Number of MPI processes 96 Processor names n12-06 n12-06 n12-06 n12-06
n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06
n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06
n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06
n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06
n12-06 n12-06 n12-06 n12-06 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07
n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07
n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07
n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07
n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07
n12-07 n12-07
Triad: 223742.1796 Rate (MB/s)
Number of MPI processes 192 Processor names n12-06 n12-06 n12-06 n12-06
n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06
n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06
n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06
n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06 n12-06
n12-06 n12-06 n12-06 n12-06 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07
n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07
n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07
n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07
n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07 n12-07
n12-07 n12-07 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09
n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09
n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09
n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09
n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09 n12-09
n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10
n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10
n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10
n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10
n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10 n12-10
Triad: 436940.9859 Rate (MB/s)

When I tried to run "./process.py createfile ; process.py", I got

np speedup
Traceback (most recent call last):
  File "./process.py", line 110, in <module>
    process(len(sys.argv)-1)
  File "./process.py", line 34, in process
    speedups[sizes] = triads[sizes]/triads[1]
KeyError: 1
Traceback (most recent call last):
  File "./process.py", line 110, in <module>
    process(len(sys.argv)-1)
  File "./process.py", line 34, in process
    speedups[sizes] = triads[sizes]/triads[1]
KeyError: 1

How can I solve it? Thanks.
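The traceback itself points at the cause: line 34 of process.py divides every Triad rate by triads[1], the rate of the 1-process run, and the scaling.log above starts at 3 processes, so key 1 never exists. A minimal sketch reproducing the failing logic (the dict name mirrors the traceback; the values are taken from the log above):

```python
# Minimal reproduction of the KeyError: process.py computes speedups
# relative to the 1-process baseline, triads[1]. This scaling.log
# starts at 3 processes, so key 1 is missing from the dict.
triads = {3: 27031.0419, 6: 53517.8980}  # np -> Triad rate (MB/s)

speedups = {}
try:
    for sizes in triads:
        speedups[sizes] = triads[sizes] / triads[1]  # raises KeyError: 1
except KeyError as missing:
    print("KeyError:", missing)  # prints: KeyError: 1
```

Including a run with a single MPI process in scaling.log would give process.py its baseline; failing that, the raw rates can be plotted directly.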
Barry Smith
2015-11-01 16:11:47 UTC
Just plot the bandwidth yourself using gnuplot or MATLAB or something.

Also, you might benefit from using process binding: http://www.mcs.anl.gov/petsc/documentation/faq.html#computers
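A sketch of extracting (processes, Triad rate) pairs from scaling.log for plotting; it assumes only the two-line record format shown earlier in the thread:

```python
# Sketch: pull (np, Triad rate) pairs out of scaling.log so the
# bandwidth can be plotted with gnuplot, matplotlib, etc.
# Assumes the two-line record format shown in the thread.
import re

def parse_scaling(text):
    nps = [int(m) for m in re.findall(r"Number of MPI processes (\d+)", text)]
    rates = [float(m) for m in re.findall(r"Triad:\s+([\d.]+)\s+Rate", text)]
    return list(zip(nps, rates))

sample = (
    "Number of MPI processes 3 Processor names n12-06 n12-06 n12-06\n"
    "Triad: 27031.0419 Rate (MB/s)\n"
    "Number of MPI processes 6 Processor names n12-06 n12-06 n12-06 "
    "n12-06 n12-06 n12-06\n"
    "Triad: 53517.8980 Rate (MB/s)\n"
)
print(parse_scaling(sample))  # [(3, 27031.0419), (6, 53517.898)]
```

Writing the pairs out as two columns gives a file that gnuplot can draw with `plot "data" with linespoints`.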