Frequently Asked Questions

Installation

I got the tarball. What do I do next?

Let's assume you downloaded sundials-x.y.z.tar.gz. First, you'll have to uncompress and unpack it:

% tar -xzf sundials-x.y.z.tar.gz

This will create a directory sundials-x.y.z, which we will call srcdir. (Note that if you only downloaded an individual solver tarball, call it solver-x.y.z.tar.gz, uncompressing and unpacking it will result in the directory solver-x.y.z.)

Next, you need to configure and build the SUNDIALS libraries. You can get some information on how to do this here, but more details are available in the user guides (e.g. sundials/cvode/doc/cv_guide.pdf).

If your system does not support configure scripts, see How do I install SUNDIALS without using the configure script?

Otherwise, the first thing to decide now is where you'd like to install the SUNDIALS libraries and header files. We will call this directory instdir. You specify it through the --prefix option to configure (if --prefix is not specified, instdir defaults to /usr/local).

WARNING: Under no circumstances should you use the same instdir as the source directory srcdir, as this would result in errors when installing the exported header files.

Next, from within srcdir, you configure, build, and install SUNDIALS:

% cd srcdir
% ./configure --prefix=instdir <configure-options>
% make
% make install

If everything went fine, the SUNDIALS libraries were installed in instdir/lib, the exported header files were installed in various subdirectories of instdir/include, and the sundials-config executable script was installed in instdir/bin. If compilation of the SUNDIALS examples was enabled through the --enable-examples option to configure, then these were installed in various subdirectories of exinstdir, where exinstdir was specified through the --with-examples-instdir option (default instdir/examples). Similarly, if compilation of the Matlab toolbox sundialsTB was enabled at configuration time (--enable-sundialsTB), then it was installed in stbinstdir/sundialsTB, where stbinstdir was specified with --with-sundialsTB-instdir (default $MATLAB/toolbox).

What configure flags do I really need?

That depends a lot on how your system is configured. If all compilers, libraries, and system header files are in the usual places, configuring SUNDIALS can be as simple as:

% ./configure
% make
% make install

In certain circumstances, more sophisticated configure lines may be needed, especially when building SUNDIALS with MPI support and/or the Fortran-C interfaces. For example, to build SUNDIALS using gcc as the serial C compiler, g77 as the serial Fortran compiler, using the MPI compiler scripts mpicc and mpif77 from a particular MPICH distribution, and using the additional compiler flag -g3, you would use:

% ./configure CC=gcc F77=g77 --with-cflags=-g3 --with-fflags=-g3 \
      --with-mpicc=/usr/apps/mpich/1.2.4/bin/mpicc \
      --with-mpif77=/usr/apps/mpich/1.2.4/bin/mpif77

Can I use a C++ compiler to build SUNDIALS?

Yes. However, if building SUNDIALS using a C++ compiler and with parallel support, you must take care not to include MPI C++ bindings. This can be done by specifying an implementation-dependent MPI flag using --with-mpi-flags:

  • for MPICH, use --with-mpi-flags=-DMPICH_SKIP_MPICXX
  • for LAM MPI, use --with-mpi-flags=-DLAM_BUILDING

On the other hand, in order to resolve all necessary symbols while linking the parallel Fortran libraries, you must either use the MPI wrapper built on the C++ compiler (i.e. specify --with-mpif77=mpiCC) or else add the C++ standard library (i.e. specify --with-libs=-lstdc++).

Note that, if you want to use an MPI compiler script, you must specify, through --with-mpicc, the wrapper built on the C++ compiler (mpiCC or mpic++, depending on your MPI distribution). If a C++ compiler is specified but --with-mpicc is not, the SUNDIALS configure script will automatically select the correct MPI script.

How do I install SUNDIALS without using the configure script?

Complete details on building SUNDIALS without using the default build system are provided in the "Building SUNDIALS without the configure script" section of the installation chapter in any of the SUNDIALS user guides. This type of installation requires manually creating the sundials_config.h header file.

How do I install SUNDIALS under Windows?

One way of obtaining Windows libraries for the SUNDIALS solvers is to use Cygwin (http://www.cygwin.com), in which case the installation procedure is the same as for any other *NIX system.
An alternative is to use CMake (see installation-cmake).

How can I generate shared libraries under cygwin?

In order to create shared libraries (DLLs) under cygwin, you must regenerate the configure script using cygwin's autotools and use the -no-undefined option to libtool.

  • Remove, if they exist, the aclocal.m4 file and autom4te.cache directory:
% rm -rf aclocal.m4 autom4te.cache
  • Regenerate the configure script:
% aclocal
% autoconf
% autoheader
  • Configure SUNDIALS, enabling creation of shared libraries and using the required flag:
% ./configure --enable-shared --with-ldflags=-no-undefined

Should I build 32-bit or 64-bit SUNDIALS libraries?

Typically, the "bitness" of a CPU refers to the size of the linear address space a process can address (e.g., an IA-32 processor can address up to 4GB). Consequently, if your program does not require more than 4GB of memory, then there is probably no benefit to building a 64-bit library.

People are also often confused by the nomenclature, but the "bitness" of a CPU has nothing to do with the accuracy of floating-point computations or the size of floating-point numeric data types. Regardless of the "bitness" of the processor, real numbers are stored in the IEEE Standard 754 floating-point format. Therefore, building a 64-bit library will not increase the accuracy of your numerical results.

Everything installed fine! How do I link the SUNDIALS libraries to my own application?

The easiest way to determine the preprocessor and linker flags required to link the SUNDIALS libraries to
a user application is to invoke, from within the Makefile, the executable script sundials-config
available in instdir/bin. Its usage is as follows:

Usage: sundials-config -m cvode|cvodes|ida|kinsol -t s|p -l c|f [-s libs|cppflags -hv]
Requires: standard GNU commands
Options:
    -m cvode|cvodes|ida|kinsol   SUNDIALS module
    -t s|p                       use serial or parallel vectors
    -l c|f                       use C or Fortran
    -s libs|cppflags             show linking flags or C preprocessor flags.
                                 (show both if option not given.)
    -h                           usage and options (this help)
    -v                           view this script
Notes:
    '-l f' is not valid for '-m cvodes'
    '-s cppflags' returns an empty string for '-l f'

With this, a sample Makefile to compile a CVODE application written in C and using the serial NVECTOR module could look as follows:

SHELL = /bin/sh

CC       = gcc
CFLAGS   = -g -O
CPPFLAGS =
LDFLAGS  =

SUN_DISTRO = /home/radu/CODES/sundials-2.3.0-mpich1_gcc

MY_APPS = app1 app2

all:
        @sun_cppflags=`eval "${SUN_DISTRO}/bin/sundials-config -m cvode -t s -l c -s cppflags"`; \
         sun_ldflags=`eval "${SUN_DISTRO}/bin/sundials-config -m cvode -t s -l c -s libs"`;      \
         for i in ${MY_APPS} ; do                                                                \
           echo "--- Making $${i} ---" ;                                                         \
           eval "make SUN_CPPFLAGS='$${sun_cppflags}' SUN_LDFLAGS='$${sun_ldflags}' $${i}";      \
         done

app1: app1.c
        ${CC} ${CPPFLAGS} ${SUN_CPPFLAGS} ${CFLAGS} -c app1.c
        ${CC} -o app1 app1.o ${CFLAGS} ${LDFLAGS} ${SUN_LDFLAGS}
app2: app2.c
        ${CC} ${CPPFLAGS} ${SUN_CPPFLAGS} ${CFLAGS} -DMY_PREPROC_DIR -c app2.c
        ${CC} -o app2 app2.o ${CFLAGS} ${LDFLAGS} ${SUN_LDFLAGS}

clean:
        rm -f *.o
        rm -f ${MY_APPS}

CVODE / CVODES

How do I choose tolerances?

For many users, the appropriate choices for tolerance values are a concern. The following pieces of advice are relevant.

  • The scalar relative tolerance reltol should be set to control relative errors. So reltol = 1.0E-4 means that errors are controlled to 0.01%. We do not recommend using reltol larger than 1.0E-3. On the other hand, reltol should not be so small that it is comparable to the unit roundoff of the machine arithmetic (generally around 1.0E-15).
  • The absolute tolerances abstol (whether scalar or vector) need to be set to control absolute errors when any components of the solution vector y may be so small that pure relative error control is meaningless. For example, if y[i] starts at some nonzero value, but in time decays to zero, then pure relative error control on y[i] makes no sense (and is overly costly) after y[i] is below some noise level. Then abstol (if scalar) or abstol[i] (if a vector) needs to be set to that noise level. If the different components have different noise levels, then abstol should be a vector. See the example cvdenx in the CVODE package, and the discussion of it in the CVODE Examples document. In that problem, the three components vary between 0 and 1, and have different noise levels; hence the abstol vector. It is impossible to give any general advice on abstol values, because the appropriate noise levels are completely problem-dependent. The user or modeler hopefully has some idea as to what those noise levels are.
  • Finally, it is important to pick all the tolerance values conservatively, because they control the error committed on each individual time step. The final (global) errors are some sort of accumulation of those per-step errors. A good rule of thumb is to reduce the tolerances by a factor of .01 from the actual desired limits on errors. So if you want .01% accuracy (globally), a good choice is reltol = 1.0E-6. But in any case, it is a good idea to do a few experiments with the tolerances to see how the computed solution values vary as tolerances are reduced.
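To see how reltol and abstol act together, recall that solvers in this family combine them into a per-component error weight (the weighted-RMS norm of the estimated local error is compared to 1). A minimal sketch in C, assuming the standard weight formula ewt_i = 1/(reltol*|y_i| + abstol_i) described in the user guides:

```c
#include <assert.h>
#include <math.h>

/* Error weight for one solution component: large y[i] is controlled
 * relatively (via reltol), while y[i] near zero is controlled
 * absolutely (via abstol), which is why abstol should match the
 * component's noise level. */
double error_weight(double y, double reltol, double abstol)
{
    return 1.0 / (reltol * fabs(y) + abstol);
}
```

For a component at its noise level, the weight is dominated by abstol; errors below that level are then treated as acceptable.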

How do I choose what linear solver to use for the stiff case?

  • If the problem size is fairly small (say N < 100), then using the dense solver is probably best; it is the simplest to use, and reasonably inexpensive for small N. For larger N, it is important to take advantage of sparsity (zero/nonzero) structure within the problem.
  • If there is local (nearest-neighbor) coupling, or if the coupling is local after a suitable reordering of y, then use the banded linear solver. Local coupling means that the i-th component of the RHS or residual function depends only on components y_j for which |i-j| is small relative to N. (Note that the dense and band solvers are only applicable for the serial version of the solver.)
  • For even larger problems, consider one of the Krylov iterative methods. These are hardest to use, because for best results they usually require preconditioning. However they offer the best opportunity to exploit the sparsity structure in the problem. The preconditioner is a matrix which, at least crudely, approximates the actual matrix in the linear system to be solved, and is typically built from an approximation of the relevant Jacobian matrix. Typically, that approximation uses only part of the true Jacobian, but as a result is much less expensive to solve. If the Jacobian can be approximated by a matrix that is banded (serial case) or block-diagonal with banded blocks (parallel case), SUNDIALS includes preconditioner modules for such cases. In each of the user guides, the section 'Linear solver specification functions' and the section on preconditioner modules contain more detailed comments on preconditioning.
  • On the construction of preconditioners for problems arising from the spatial discretization of time-dependent partial differential equation systems, there is considerable discussion in the paper P. N. Brown and A. C. Hindmarsh, "Reduced Storage Matrix Methods in Stiff ODE Systems," J. Appl. Math. & Comp., 31 (1989), pp.40-91.

How do I handle a data-defined function within the RHS function?

Often the RHS or residual function depends on some function A(t) that is data-defined, i.e. defined only at a discrete set of times t. The solver must be able to obtain values of the user-supplied functions at arbitrary times t in the integration interval. So the user must fit the data with a reasonably smooth function A(t) that is defined continuously for all relevant t, and incorporate an evaluation of that fit function in the user function involved. This may be as simple as a piecewise linear fit, but a smoother fit (e.g. spline) would make the integration more efficient. If there is noise in the data, the fit should be a least-squares fit instead of a straight interpolation. The same advice applies if the user function has a data-defined function A(y) that involves one or more components of the dependent variable vector y. Of course, if more than one component is involved, the fit is more complicated.
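A minimal sketch in C of such a fit, here piecewise linear over hypothetical tabulated data (a spline fit would be smoother and make the integration cheaper):

```c
#include <assert.h>

/* Piecewise-linear evaluation of a data-defined function A(t), given
 * tabulated samples (tdata[i], adata[i]) with tdata strictly increasing.
 * Outside the table, the nearest data value is extended so A(t) is
 * defined for all t the integrator may request. */
double A_of_t(double t, const double *tdata, const double *adata, int n)
{
    int i;
    if (t <= tdata[0])   return adata[0];
    if (t >= tdata[n-1]) return adata[n-1];
    for (i = 0; i < n - 1; i++) {
        if (t <= tdata[i+1]) {
            double s = (t - tdata[i]) / (tdata[i+1] - tdata[i]);
            return adata[i] + s * (adata[i+1] - adata[i]);
        }
    }
    return adata[n-1];
}
```

The RHS function would simply call A_of_t(t, ...) wherever the data-defined quantity appears.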

How do I control unphysical negative values?

In many applications, some components in the true solution are always positive or non-negative, though at times very small. In the numerical solution, however, small negative (hence unphysical) values can then occur. In most cases, these values are harmless, and simply need to be controlled, not eliminated. The following pieces of advice are relevant.

  • The way to control the size of unwanted negative computed values is with tighter absolute tolerances. Again this requires some knowledge of the noise level of these components, which may or may not be different for different components. Some experimentation may be needed.
  • If output plots or tables are being generated, and it is important to avoid having negative numbers appear there (for the sake of avoiding a long explanation of them, if nothing else), then eliminate them, but only in the context of the output medium. Then the internal values carried by the solver are unaffected. Remember that a small negative value in y returned by CVODE, with magnitude comparable to abstol or less, is equivalent to zero as far as the computation is concerned.
  • The user's right-hand side routine f should never change a negative value in the solution vector y to a non-negative value, as a "solution" to this problem. This can cause instability. If the f routine cannot tolerate a zero or negative value (e.g. because there is a square root or log of it), then the offending value should be changed to zero or a tiny positive number in a temporary variable (not in the input y vector) for the purposes of computing f(t,y).

How do I treat discontinuities in the RHS function?

If the jumps at the discontinuities are relatively small, simply keep them in the RHS function, and let the integrator respond to them (possibly taking smaller steps through each point of discontinuity). If the jumps are large, it is more efficient to stop at the point of discontinuity and restart the integrator with a readjusted ODE model. To stop when the location of the discontinuity is known, simply make that location a value of tout. To stop when the location of the discontinuity is determined by the solution, use the rootfinding feature. In either case, it is critical that the RHS function not incorporate the discontinuity, but rather have a smooth extension over the discontinuity, so that the step across it (and subsequent rootfinding, if used) can be done efficiently. Then use a switch within the RHS function that can be flipped between the stopping of the integration and the restart, so that the restarted problem uses the new values (which have jumped).
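The switch idea can be sketched in plain C (the flag name and forcing values are made up for illustration): each integration phase sees only one smooth branch of the model, and the flag is flipped between the stop and the restart:

```c
#include <assert.h>

/* Hypothetical RHS with a large jump in a forcing term at a known or
 * root-detected time. The branch is selected by a flag rather than by
 * testing t inside the RHS, so each phase of the integration evaluates
 * a smooth extension of its own model across the discontinuity. */
static int after_jump = 0;   /* flip to 1 when restarting past the jump */

double rhs(double t, double y)
{
    double forcing = after_jump ? 5.0 : 1.0;  /* made-up values */
    (void)t;
    return -y + forcing;
}
```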

When is it advantageous to supply my own EwtFn function?

The main situation where this is a good idea is where the problem needs something "in between" the cases covered by CV_SS and CV_SV. Namely, suppose there are a few groups of variables (relative to the total number of variables) such that all the variables in each group require the same value of abstol, but these values are very different from one group to another. Then a user EwtFn function can keep an array of those values and construct the ewt vector without any additional storage. Also, in rare cases, one may want to use this option to apply different values of reltol to different variables (or groups of variables).
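A sketch of such a group-wise weight function, written here on plain C arrays rather than the actual N_Vector-based EwtFn signature (the array names are made up):

```c
#include <assert.h>
#include <math.h>

/* Group-wise error weights: group[i] maps variable i to its group, and
 * abstol_group[] holds one abstol per group, so no length-N abstol
 * array needs to be stored. Each weight follows the standard formula
 * ewt[i] = 1/(reltol*|y[i]| + abstol). */
void my_ewt(const double *y, double *ewt, int n,
            double reltol, const int *group, const double *abstol_group)
{
    int i;
    for (i = 0; i < n; i++)
        ewt[i] = 1.0 / (reltol * fabs(y[i]) + abstol_group[group[i]]);
}
```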

How do I switch on/off forward sensitivity computations in CVODES?

If you want to turn on and off forward sensitivity calculations during several successive integrations (such as if you were using CVODES within a dynamically-constrained optimization loop, when sometimes you want to only integrate the states and sometimes you also need sensitivities computed), it is most efficient to use CVodeSensToggleOff.

Sensitivity calculations are enabled by the functions CVodeSensMalloc and CVodeSensReInit, and are disabled by CVodeSensToggleOff or CVodeSensFree (after calling the latter, they can be re-enabled only by calling CVodeSensMalloc).

What is the role of plist in CVODES?

The argument plist to CVodeSetSensParams is used to specify the problem parameters with respect to which solution sensitivities are to be computed.

plist is used only if the sensitivity right-hand sides are evaluated using the internal difference-quotient approximation function. In that case, plist should be declared as an array of Ns integers and should contain the indices in the array of problem parameters p with respect to which sensitivities are desired. For example, if you want to compute sensitivities with respect to the first and third parameters in the p array, p[0] and p[2], you need to set

plist[0] = 0
plist[1] = 2

If plist is not provided, CVODES will compute sensitivities with respect to the first Ns parameters in the array p (i.e. it will use plist[i]=i, i=0,1,…,Ns-1).

If the user provides a function to evaluate the right-hand sides of the sensitivity equations, or if the default values are desired, a NULL pointer can be passed to CVodeSetSensParams.

What is the role of pbar in CVODES?

The argument pbar to CVodeSetSensParams is used to specify scaling factors for the problem parameters.

pbar is used only if

  • the internal difference-quotient functions are used for the evaluation of the sensitivity right-hand sides, in which case pbar is used in computing an appropriate perturbation for the finite-difference approximation; or
  • the tolerances for the sensitivity variables are estimated automatically by CVODES from those specified for the state variables.

If provided, pbar should be declared as an array of Ns realtypes and should contain non-zero scaling factors for the Ns parameters with respect to which sensitivities are to be computed. For non-zero problem parameters, a good choice is

pbar[i] = p[plist[i]]

If pbar is not provided, CVODES will use pbar[i]=1.0, i=0,1,…,Ns-1.

If the user provides a function to evaluate the right-hand sides of the sensitivity equations and also specifies tolerances for the sensitivity variables (through CVodeSetSensTolerances), or if the default values are desired, a NULL pointer can be passed to CVodeSetSensParams.
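Putting the plist and pbar conventions together, a hypothetical setup (the parameter values are made up) might look like:

```c
#include <assert.h>

/* Sensitivities with respect to p[0] and p[2] out of three problem
 * parameters: Ns = 2, plist holds the chosen indices, and the
 * parameters themselves are used as scaling factors, which is a good
 * choice when they are non-zero. */
double p[] = {2.0, 7.0, 0.5};   /* problem parameters (hypothetical) */
int    plist[] = {0, 2};        /* indices of the Ns = 2 chosen parameters */
double pbar[2];

void set_scalings(void)
{
    int i;
    for (i = 0; i < 2; i++)
        pbar[i] = p[plist[i]];
}
```

These arrays would then be passed to CVodeSetSensParams.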

What is pure quadrature integration?

Suppose your ODE is y' = F(t,y), that you integrate it from 0 to T, and that you are also interested in computing an integral of the form

G = int_0^T g(t,y(t)) dt

for some function g. The most efficient way of computing G is by appending one additional differential equation to your ODE system:

z' = g(t,y)

with initial condition z(0)=0, in which case

G = z(T)

This additional equation is "a pure quadrature equation" and its main characteristic is that the new differential variable z does not appear in the right hand side of the extended ODE system. If CVODES is notified of such "pure quadrature equations", it can take advantage of this property and do less work than if it didn't know about them (these variables need not be considered in the nonlinear system solution).

The main reason for the special treatment of "pure quadrature equations" in CVODES is that such integrals (very often a large number of them) need to be computed for adjoint sensitivity.
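The mechanics can be sketched in self-contained C: note that z never feeds back into either right-hand side, which is exactly the property CVODES exploits. A real code would use the CVODES quadrature feature rather than this crude fixed-step Euler loop, which serves only to make the construction concrete:

```c
#include <assert.h>
#include <math.h>

/* Appending the pure quadrature equation z' = g(t,y) and reading off
 * G = z(T). Illustrated with y' = -y, y(0) = 1, and g(t,y) = y^2, so
 * the exact value is G = (1 - exp(-2T))/2. */
double quadrature_demo(double T, int nsteps)
{
    double h = T / nsteps;
    double y = 1.0, z = 0.0;      /* y(0) = 1, z(0) = 0 */
    int k;
    for (k = 0; k < nsteps; k++) {
        double dy = -y;           /* y' = F(t,y) */
        double dz = y * y;        /* z' = g(t,y): z never feeds back */
        y += h * dy;
        z += h * dz;
    }
    return z;                     /* G = z(T) */
}
```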

IDA

How do I choose tolerances in IDA?

See How do I choose tolerances? in the CVODE/CVODES section.

How do I choose what linear solver to use?

See How do I choose what linear solver to use for the stiff case? in the CVODE/CVODES section.

How do I handle a data-defined function within the residual function?

See How do I handle a data-defined function within the RHS function? in the CVODE/CVODES section.

How do I control unphysical negative values in IDA?

See How do I control unphysical negative values? in the CVODE/CVODES section.

In addition, IDA provides the option of enforcing positivity or non-negativity on components. But these constraint options should only be exercised if the use of absolute tolerances to control the computed values is unsuccessful, because they involve some extra overhead cost.

How do I treat discontinuities in the residual function in IDA?

See How do I treat discontinuities in the RHS function? in the CVODE/CVODES section.

When is it advantageous to supply my own EwtFn function?

See When is it advantageous to supply my own EwtFn function? in the CVODE/CVODES section.

KINSOL

How do I reinitialize KINSOL within a C/C++ program?

Although KINSOL does not provide a reinitialization function, it is possible to reinitialize the solver (meaning reuse a KINSOL object), but only if the problem size remains unchanged. To reinitialize KINSOL, begin by making any necessary changes to the problem definition by calling the appropriate KINSet* functions (e.g., KINSetSysFunc). Next, if you would like to use a different linear solver, call the appropriate function (e.g., KINDense) followed by any calls to the corresponding KIN*Set* functions. Then you can call the KINSol function to solve the updated nonlinear algebraic system.

Why is the system function being evaluated at points that violate the constraints?

If you have not supplied a function to compute either J(u) (of type KINDenseJacFn or KINBandJacFn) or J(u)v (of type KINSpilsJacTimesVecFn), then the internal function may be the culprit. The default function used to compute a difference quotient approximation to the Jacobian (direct methods) or Jacobian matrix-vector product (Krylov methods) evaluates the user-supplied system function at a slightly perturbed point, but does not check if that point violates the constraints.
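A self-contained C illustration of the failure mode, with a made-up perturbation sizing: for a variable constrained to be positive but sitting near zero, the perturbed evaluation point falls below zero:

```c
#include <assert.h>
#include <math.h>

/* Why a difference-quotient Jacobian can violate constraints: the
 * perturbed point is formed without checking them. The sigma formula
 * below is a hypothetical sizing (relative part plus a floor), not
 * KINSOL's actual one; the effect is the same. */
double perturbed_point(double u)
{
    double sigma = sqrt(1e-16) * fabs(u) + 1e-10;
    return u - sigma;   /* a backward difference evaluates the system here */
}
```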

Miscellaneous

How do I determine which version of SUNDIALS I have?

If you still have access to the distribution files, then the SUNDIALS release number is indicated in the header of sundials/README and the corresponding solver versions can be determined by reading the appropriate row of the "Release History" table. Otherwise, you will need to find the value associated with the PACKAGE_VERSION or SUNDIALS_PACKAGE_VERSION variable defined in the sundials_config.h header file (located under either <install_dir>/include or <install_dir>/sundials/include), and then read the appropriate row of the following table:

SUNDIALS   CVODE   CVODES   IDA     KINSOL
1.0        2.0     1.0      2.0     2.0
2.0        2.2.0   2.1.0    2.2.0   2.2.0
2.0.1      2.2.1   2.1.1    2.2.1   2.2.1
2.0.2      2.2.2   2.1.2    2.2.2   2.2.2
2.1.0      2.3.0   2.2.0    2.3.0   2.3.0
2.1.1      2.3.0   2.3.0    2.3.0   2.3.0
2.2.0      2.4.0   2.4.0    2.4.0   2.4.0
2.3.0      2.5.0   2.5.0    2.5.0   2.5.0

How do I apply a patch under UNIX or Linux?

To apply patch foobar.patch and rebuild SUNDIALS complete the following steps:

% cp foobar.patch <sundials_source_dir>
% cd <sundials_source_dir>
% patch -p1 < foobar.patch
% make
% make install

Unless otherwise stated, the content of this page is licensed under the Creative Commons Attribution-ShareAlike 2.5 License.