When developing a CVODE-based code, you will spend most of your time implementing and debugging the f function. When stepping through the code in a debugger, you will notice that the time point t usually moves forward. Typically the solver accelerates, and the time points grow rapidly.
When using CVODE with the Newton method, things can sometimes look different. Occasionally you will notice that the time point stays the same for several calls, or even goes back in time (a small logging sketch below makes this visible outside the debugger).
In order to understand this, you need to know that the f function is called
1. during the normal iteration of the problem, and
2. during the composition of the Jacobian matrix (needed for solving the equation system).
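To make these call patterns visible without single-stepping in the debugger, one simple option is to print the time argument from inside the right-hand-side function itself. A minimal sketch in C, assuming a recent SUNDIALS/CVODE release (the floating-point type is `realtype` or `sunrealtype` depending on the version, and the actual right-hand side here is only a placeholder):

```c
#include <stdio.h>
#include <cvode/cvode.h>            /* CVODE types and constants           */
#include <nvector/nvector_serial.h> /* serial N_Vector, N_VGetArrayPointer */

/* Right-hand side f(t, y). Printing t on every call makes the repeated
   and (occasionally) backwards-moving time points visible. */
static int f(realtype t, N_Vector y, N_Vector ydot, void *user_data)
{
  realtype *yd    = N_VGetArrayPointer(y);
  realtype *ydotd = N_VGetArrayPointer(ydot);

  printf("f called at t = %.15g\n", (double)t);

  ydotd[0] = -yd[0]; /* placeholder problem: y' = -y */

  return 0; /* 0 = success, >0 = recoverable error, <0 = unrecoverable */
}
```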
In the high-level description of the algorithm, point 2 actually consists of three subtasks:
- compose the Jacobian matrix
- create an LU factorization of the matrix
- solve the equation system with a back-solving algorithm
The first two of these tasks (and in particular the second task, for large systems with big bandwidths) are the performance bottlenecks of the overall code. Therefore CVODE implements an algorithm that decides when the Jacobian matrix is to be updated. In a well-running code, this happens on average every 10th step or so (this number depends strongly on your problem, of course).
Whenever the Jacobian matrix needs to be generated, you have two choices: a) provide an analytical Jacobian matrix, or b) let CVODE create the Jacobian matrix with difference-quotient approximations. For most users, option b) is the only suitable choice.
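For completeness, option a) means registering a user-supplied Jacobian callback. A hedged sketch for a dense Jacobian, using the callback signature of recent SUNDIALS versions (the single matrix entry is just a placeholder for your actual $\partial f / \partial y$):

```c
#include <cvode/cvode.h>               /* CVodeSetJacFn, CVLsJacFn type */
#include <sunmatrix/sunmatrix_dense.h> /* SM_ELEMENT_D accessor         */

/* User-supplied Jacobian J = df/dy. CVODE calls this only when it has
   decided that the Jacobian needs to be refreshed. */
static int Jac(realtype t, N_Vector y, N_Vector fy, SUNMatrix J,
               void *user_data, N_Vector tmp1, N_Vector tmp2, N_Vector tmp3)
{
  SM_ELEMENT_D(J, 0, 0) = -1.0; /* placeholder for the real entries */
  return 0;
}
```

The callback is attached with `CVodeSetJacFn(cvode_mem, Jac)` after a matrix-based linear solver has been created and attached; without it, CVODE falls back to its internal difference-quotient approximation described next.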
When composing the Jacobian matrix using difference quotients, the f function needs to be evaluated several times for slightly perturbed solution variables. And, since the Jacobian matrix is generated for a specific time point, these calls to f are all done with the same time point as argument. When monitoring the solution variables you will see small perturbations that 'move' through your $y$-vector. As soon as the Jacobian matrix has been generated, the solver can solve the equation system and finish task 2 of the high-level algorithm.
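Expressed as a formula, the $j$-th column of the difference-quotient Jacobian is built roughly as

$$ J_{\cdot,j} \approx \frac{f(t_n,\, y + \sigma_j e_j) - f(t_n,\, y)}{\sigma_j}, $$

where $e_j$ is the $j$-th unit vector and $\sigma_j$ a small increment (CVODE's actual increment selection is more elaborate). The important point is that the time argument $t_n$ is the same in every one of these calls.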
This is one reason why the f function gets called several times with the same time point. To find out if a call to f originates from the difference-quotient algorithm, check the call stack in your debugger.
Another reason for repeated time points is subsequent iteration calls. Usually it takes 2 or 3 function evaluations until an acceptable solution for the corrector step is found. During these iterations the time point also remains the same. However, compared to the Jacobian generation, the solution in the $y$-vector will now typically change more significantly.
Now we have identified two reasons why f can get called with the same time point several times. But why does it sometimes go back in time? Consider again the high-level description of the algorithm and look at item 2.3.2:
"If a recoverable failure occurred in the NLS, reduce the step size and reattempt the step (goto 1)"
So if the last successful step was done at $t_{n-1}$ and we had tried a time step of $h_n$, the last calls to f would have been at time point $t_{n} = t_{n-1} + h_n$. If we now reduce $h_n$ and try again, the new time point $t_{n,\text{new}}$ will of course lie before the first attempt. Inside our f function, we notice this failure and new attempt as a time point going backwards (sometimes several times, if the time step has to be reduced further).
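If you want your f function to notice such a rejected and retried step programmatically, a common trick is to remember the last time point seen by f, e.g. in the user data you attach via `CVodeSetUserData`. A minimal sketch, where the struct name and field are purely illustrative (initialize `t_last` to the integration start time before calling `CVode`):

```c
#include <stdio.h>
#include <cvode/cvode.h>
#include <nvector/nvector_serial.h>

typedef struct {
  realtype t_last; /* last time point seen by f (illustrative field) */
} UserData;

static int f(realtype t, N_Vector y, N_Vector ydot, void *user_data)
{
  UserData *data = (UserData *)user_data;

  /* a smaller t than last time indicates a reduced and retried step */
  if (t < data->t_last)
    printf("time went backwards: %.15g -> %.15g (step size was reduced)\n",
           (double)data->t_last, (double)t);
  data->t_last = t;

  N_VGetArrayPointer(ydot)[0] = -N_VGetArrayPointer(y)[0]; /* placeholder */

  return 0;
}
```

The pointer to the UserData instance is passed to CVODE once via `CVodeSetUserData(cvode_mem, &data)`; CVODE then hands it back on every call to f.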