
D. Running NWChem

The command required to invoke NWChem is machine dependent, whereas most of the NWChem input is machine independent.

D.1 Sequential execution

To run NWChem sequentially on nearly all UNIX-based platforms, simply use the command nwchem and provide the name of the input file as an argument (see section 2.1 for more information). This assumes that nwchem is either in your path or aliased to point to the appropriate executable.
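For example, in a csh-style shell the alias might be set as follows (the executable path shown here is purely illustrative):

  alias nwchem /msrc/apps/bin/nwchem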

Output is to standard output, standard error and Fortran unit 6 (usually the same as standard output). Files are created by default in the current directory, though this may be overridden in the input (section 5.2).

Generally, one will run a job with the following command:

nwchem input.nw >& input.out &
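Note that the >& redirection above is csh/tcsh syntax; under a Bourne-style shell (sh, bash) the equivalent command would be:

  nwchem input.nw > input.out 2>&1 &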


D.2 Parallel execution on UNIX-based parallel machines including workstation clusters using TCGMSG

These platforms require the use of the TCGMSG parallel command and thus also require the definition of a process-group (or procgroup) file. The process-group file describes how many processes to start, what program to run, which machines to use, which directories to work in, and under which userid to run the processes. By convention the process-group file has a .p suffix.

The process-group file is read to end-of-file. The character # (hash or pound sign) is used to indicate a comment which continues to the next new-line character. Each line describes a cluster of processes and consists of the following whitespace separated fields:

  userid hostname nslave executable workdir

For example, if your file "nwchem.p" contained the following

  d3g681 pc 4 /msrc/apps/bin/nwchem /scr22/rjh

then 4 processes running NWChem would be started on the machine pc, running as user d3g681 in the directory "/scr22/rjh". To actually run this, simply type:
  parallel nwchem big_molecule.nw

N.B.: The first process specified (process zero) is the only process that opens and reads the input file and the database file.

Thus, if your file systems are physically distributed (e.g., most workstation clusters) you must ensure that process zero can correctly resolve the paths for the input and database files.

N.B. In releases of NWChem prior to 3.3 additional processes had to be created on workstation clusters to support remote access to shared memory. This is no longer the case. The TCGMSG process group file now just needs to refer to processes running NWChem.
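For a workstation cluster, the procgroup file simply contains one line per machine. A hypothetical two-machine example (the userid, hostnames, and paths are illustrative) might look like:

  # two workstations, two NWChem processes started on each
  d3g681 host1 2 /msrc/apps/bin/nwchem /scr22/rjh
  d3g681 host2 2 /msrc/apps/bin/nwchem /scr22/rjh

Process zero will be the first process started on host1, so the input and database files must be resolvable by path from that machine.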

D.3 Parallel execution on UNIX-based parallel machines including workstation clusters using MPI

To run with MPI, the parallel command should not be used. NWChem is instead started directly with the MPI job launcher appropriate to your installation, as sketched below.
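For example, with a typical mpirun-based installation (the launcher name, process count, and paths depend on your MPI implementation and are shown here only as a sketch):

  mpirun -np 4 $NWCHEM_TOP/bin/$NWCHEM_TARGET/nwchem input.nw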

D.4 Parallel execution on MPPs

Each of these machines requires a different command to gain exclusive access to its computational resources.

D.5 IBM SP

If using POE (IBM's Parallel Operating Environment) interactively, simply create the list of nodes to use in the file "host.list" in the current directory and invoke NWChem with

  nwchem <input_file> -procs <n>
where n is the number of processes to use. Process 0 will run on the first node in "host.list" and must have access to the input and other necessary files. Very significant performance gains may be had by setting POE environment variables (such as those shown in the batch example below) before running NWChem, or by setting them with POE command-line options. Additional settings apply if the IBM is running PSSP version 3.1 or later.
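A hypothetical "host.list" for four nodes (the node names are illustrative) simply contains one hostname per line:

  node01
  node02
  node03
  node04

The environment variables mentioned above could, for instance, be set interactively in a csh-style shell before invoking POE (the values here follow the batch example further below):

  setenv MP_PULSE 0
  setenv MP_SINGLE_THREAD yes
  setenv MP_WAIT_MODE yield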

For batch execution, we recommend use of the llnw command, which is installed in /usr/local/bin on the EMSL/PNNL IBM SP. If you are not running on that system, the llnw script may be found in the NWChem distribution directory contrib/loadleveler. Interactive help may be obtained with the command llnw -help. Otherwise, the very simplest job to run NWChem in batch using LoadLeveler is something like this:

#!/bin/csh -x
# @ job_type         =    parallel
# @ class            =    small
# @ network.lapi     = css0,not_shared,US
# @ input            =    /dev/null
# @ output           =    <OUTPUT_FILE_NAME>
# @ error            =    <ERROUT_FILE_NAME>
# @ environment      =    COPY_ALL; MP_PULSE=0; MP_SINGLE_THREAD=yes; MP_WAIT_MODE=yield; restart=no
# @ min_processors   =    7
# @ max_processors   =    7
# @ cpu_limit        =    1:00:00
# @ wall_clock_limit =    1:00:00
# @ queue
#

cd /scratch

nwchem <INPUT_FILE_NAME>

Replace <OUTPUT_FILE_NAME>, <ERROUT_FILE_NAME>, and <INPUT_FILE_NAME> with the full paths of the appropriate files. Also, if you are using an SP with more than one processor per node, you will need to substitute

# @ network.lapi     = css0,shared,US
# @ node             = NNODE
# @ tasks_per_node   = NTASK
for the lines
# @ network.lapi     = css0,not_shared,US
# @ min_processors   =    7
# @ max_processors   =    7
where NNODE is the number of physical nodes to be used and NTASK is the number of tasks per node.
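For example, to use 4 nodes with 2 tasks on each (the counts here are purely illustrative), the replacement lines would read

# @ network.lapi     = css0,shared,US
# @ node             = 4
# @ tasks_per_node   = 2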

These files and the NWChem executable must be in a file system accessible to all processes. Put the above into a file (e.g., "test.job") and submit it with the command

  llsubmit test.job
This will run a 7-processor, 1-hour job in the queue small. It should be apparent how to change these values.

Note that on many IBM SPs, including that at EMSL, the local scratch disks are wiped clean at the beginning of each job and therefore persistent files should be stored elsewhere. PIOFS is recommended for files larger than a few MB.
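The directories used for persistent and scratch files can be set from the input (see section 5.2); a hypothetical input fragment, with paths that are purely illustrative, might be:

  scratch_dir   /scratch
  permanent_dir /piofs/d3g681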

D.6 Cray T3E

On the Cray T3E, NWChem is invoked with the mpprun command:

  mpprun -n <npes> $NWCHEM_TOP/bin/$NWCHEM_TARGET/nwchem <input_file>

where npes is the number of processors and input_file is the name of your input file.

D.7 Linux

If running in parallel across multiple machines, you should consider applying this patch to your kernel to boost the performance of TCP/IP.

D.8 Alpha systems with Quadrics switch

On Alpha systems with a Quadrics switch, NWChem is invoked with the prun command:

  prun -n <npes> $NWCHEM_TOP/bin/$NWCHEM_TARGET/nwchem <input_file>

where npes is the number of processors and input_file is the name of your input file.

D.9 Windows 98 and NT

Under Windows 98 and NT, NWChem is invoked with

   $NWCHEM_TOP/bin/win32/nw32 <input_file>

where input_file is the name of your input file. If you use WMPI, you must have a file named nw32.pg in the $NWCHEM_TOP/bin/win32 directory; the file must contain only the following single line:

   local 0

D.10 Tested Platforms and O/S versions

