
What is Delta Time (DT)?

August 3rd, 2010

After reading Karim Chichakly’s recent post on Integration Methods and DT, I was reminded that delta time (DT) has always been a tricky modeling concept for me to grasp.   Beginning modelers don’t usually need to think about changing DT since STELLA and iThink set it to a useful default value of 0.25.   But once you progress with your modeling skills, you might consider the advantages and risks of playing with DT.

The DT setting is found in the Run Specs menu.

By definition, system dynamics models run over time, and DT controls how frequently calculations are performed within each unit of time.  Think of it this way: if your model were a movie, then DT would be the time interval between still frames in the strip of movie film.  For a simulation over a period of 12 hours, a DT of 1/4 (0.25) would give you a single frame every 15 minutes.  Lowering the DT to 1/60 would give a frame every minute.  The smaller the DT, the higher the calculation frequency (1/DT).

Beware of the Extremes

A common tendency for modelers is to set the calculation frequency too high.  Without thinking too hard about it, it is easy to assume that more data implies a higher quality model – just as more frames in movie film make for smoother motion.  If your model calculates more data for every time unit, its behavior will begin to resemble the behavior of a smoothly continuous system.  But a higher frequency of calculations can greatly slow down your model’s run performance, and more data does not directly translate into a better simulation.

Beware of Discrete Event Models

Another situation where DT can often lead to unexpected behavior is with models that depend on discrete events.   My eyes were opened to this when I attended one of isee’s workshops taught by Corey Peck and Steve Peterson of Lexidyne LLC.

One of the workshop exercises involved a simple model where the DT is set to the default 0.25, the inflow is set to a constant 10, and the outflow is set to flush out the stock’s contents as soon as it reaches 50.   This is how the model’s structure and equations looked:

Discrete Model

Stock = 0

inflow = 10

outflow = IF Stock >= 50 THEN 50 ELSE 0

I would have expected the value of the stock to plunge to zero after it reached or exceeded 50, but this graph shows the resulting odd saw-tooth pattern.

Sawtooth Model Behavior

The model ends up behaving like a skipping scratched record, in a perpetual state of never progressing far enough to reach the goal of zero.  (Click here to download the model.)

What is happening in the model?  In the first DT after the stock’s value reaches exactly 50, the outflow sets itself to 50 in order to remove the contents from the stock.  So far so good, but now the DT gotcha occurs.  Since the outflow works over time, its value is always expressed per unit of time.  To get the quantity of material that actually flowed, you must multiply the outflow value (or rate) by how long the material was flowing.  When DT is set to 0.25, the material flows for 0.25 time units each DT.  Hence, the quantity of material removed from the stock is 50*0.25 = 12.50.

Suddenly we are in a situation where only 12.50 has been removed from the stock but the stock’s value is now less than 50.  Since the stock is no longer greater than or equal to 50, the outflow sets itself back to 0 and never actually flushes out the full contents of the stock. 
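This behavior is easy to reproduce outside of STELLA.  Here is a minimal Python sketch of the same model, using the same Euler update STELLA applies each DT (the variable names are my own, not from the workshop model):

```python
# Re-creation of the discrete workshop model with an Euler loop.
DT = 0.25
stock = 0.0
inflow = 10.0
history = []

for _ in range(int(12 / DT)):                 # run for 12 time units
    outflow = 50.0 if stock >= 50.0 else 0.0  # IF Stock >= 50 THEN 50 ELSE 0
    stock += (inflow - outflow) * DT          # flows are rates: multiply by DT
    history.append(stock)

# The stock saw-tooths between 40 and 50: each "flush" removes only
# 50*0.25 = 12.5, so the stock never gets anywhere near zero.
```

Plotting `history` gives exactly the saw-tooth pattern shown in the graph above.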

So what do we do?  One solution to this problem would be to use the PULSE built-in to remove the full value from the stock.   Here’s what the equation for the outflow would look like:

outflow = IF Stock >= 50 THEN PULSE(Stock) ELSE 0

(Note: This option will only work using Euler’s integration method.)
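To see why the PULSE fix works, note that PULSE(Stock) produces an outflow rate of Stock/DT for a single DT, so the quantity removed is (Stock/DT)*DT = Stock.  A rough Python equivalent of the fixed model:

```python
# Same model as before, but the outflow now mimics PULSE(Stock):
# a rate of stock/DT for one DT removes the entire stock contents.
DT = 0.25
stock = 0.0
inflow = 10.0
history = []

for _ in range(int(12 / DT)):
    outflow = stock / DT if stock >= 50.0 else 0.0  # PULSE(Stock) equivalent
    stock += (inflow - outflow) * DT
    history.append(stock)

# Each flush now empties the stock, leaving only the 2.5 units that
# flowed in during that same DT.
```

The stock does not land exactly on zero because the inflow is still active during the flush DT, but the full 50 units are removed each cycle.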

Further Reading

STELLA and iThink have great help documentation on DT.  The general introduction provides a good explanation of how DT works. The more advanced DT Situations Requiring Special Care section focuses more on artifactual delays and the discrete model issues mentioned in this post.  Delta time and resulting model behaviors are reminders that system dynamics models run over time, but they achieve this by applying numerous discrete calculations in order to simulate the smooth behavior of actual systems.

Categories: Modeling Tips

Integration Methods and DT

July 14th, 2010

The simulation engine underlying STELLA® and iThink® uses numerical integration.  Numerical integration differs from the integration you may have learned in Calculus in that it uses algorithms that approximate the solution to the integration.  The two approximations currently available are known as Euler’s method and the Runge-Kutta method.  All algorithms require a finite value for DT, the integration step-size, rather than the infinitesimally small value used in Calculus.  On the surface, it may seem that the smaller DT is, the more accurate the results, but this turns out not to be true.

Compound Interest:  Euler’s Method over Runge-Kutta

To introduce Euler’s method, let’s take a look at the simple problem of compound interest.  If we have $1,000 that we invest at 10% (or 0.1) compounded annually, we can calculate the interest after N years by adding in the interest each year and recalculating:

1st year:  interest = $1000 × 0.1 = $100; Balance = 1000 + 100 = $1100
2nd year: interest = $1100 × 0.1 = $110; Balance = 1100 + 110 = $1210
3rd year:  interest = $1210 × 0.1 = $121; Balance = 1210 + 121 = $1331

And so on up to year N.  We have just seen the essence of how Euler’s method works.  It calculates the new change in the stock for this DT (in this case, interest) and then adds that to the previous value of the stock (Balance) to get the new value of the stock.  In this example, DT = 1 year.

By noticing we always add the existing balance in, we can instead just multiply the previous year’s balance by 1 + rate = 1 + 0.1 = 1.1:

1st year:  Balance = $1000 × 1.1 = $1100
2nd year: Balance = $1100 × 1.1 = $1210
3rd year:  Balance = $1210 × 1.1 = $1331

And so on up to year N. We can further generalize by noticing we are multiplying by 1.1 N times and thus arrive at the compound interest formula:

Balance = Initial_Balance*(1 + rate)^N

Checking this, we find our Balance at the end of year 3 is 1000*1.1^3 = $1331.  In the general case of the formula, rate is the fractional interest rate per compounding period and N is the number of compounding periods (an integer).  In our example, the compounding period is one year, so rate is the annual fractional interest rate and N is the number of years.  However, if interest is compounded quarterly (four times a year), the interest rate has to be adjusted to a per quarter rate by dividing by 4 (so rate = 0.1/4 = 0.025) and N must be expressed as the number of quarters (N = number of years*4 = 3*4 = 12 for the end of year 3).  We can use this formula in our model to test the accuracy of Euler’s method.  Note that for quarterly compounding, we would set DT = 1/4 = 0.25 years.
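The formula and the quarterly adjustment described above can be checked in a few lines (a Python sketch; the variable names are mine):

```python
# Compound interest formula: Balance = Initial_Balance*(1 + rate)^N,
# where rate is per compounding period and N counts periods.
initial = 1000.0
rate = 0.1    # annual fractional interest rate
years = 3

annual = initial * (1 + rate) ** years              # N = 3 annual periods
quarterly = initial * (1 + rate / 4) ** (years * 4) # rate/4 per quarter, N = 12

# annual is 1331.00; quarterly compounding yields slightly more,
# since interest starts earning interest sooner.
```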

To explore the differences between Euler’s and Runge-Kutta, the following structure will be used for all of the examples in this post.  This structure models the compound interest problem outlined above.


The equations change for each example and can be seen in the individual model files (accessed by clicking here).  For this example, the actual value is calculated using the compound interest formula, Initial_Balance*(1 + rate)^TIME.  The approximated value is calculated by integrating rate*Approx_Balance (into Approx_Balance).

In addition to the actual and approximate values, three errors are also calculated across the model run:  the maximum absolute error, the maximum relative error, and the root-mean-squared error (RMSE).  The absolute error is:

absolute error = ABS(actual − approximate)
The relative error is:

relative error = ABS(actual − approximate)/ABS(actual)

and is usually expressed as a percentage.  The RMSE is found by averaging the values of the absolute error squared, and then taking the square root of that average.
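The three error measures can be sketched in Python against the structure described above: the approximate value integrates rate*Approx_Balance with Euler’s method, while the actual value uses Initial_Balance*(1 + rate)^TIME.  (This is my own reconstruction, not the model file itself.)

```python
import math

DT = 0.25
rate = 0.1
balance = 1000.0           # Approx_Balance, integrated by Euler's method
abs_errors = []
rel_errors = []

t = 0.0
while t < 3.0 - 1e-9:      # run for 3 years
    balance += rate * balance * DT            # Euler: integrate rate*Approx_Balance
    t += DT
    actual = 1000.0 * (1 + rate) ** t         # compound interest formula
    abs_errors.append(abs(actual - balance))
    rel_errors.append(abs(actual - balance) / abs(actual))

max_abs = max(abs_errors)
max_rel = max(rel_errors)  # usually reported as a percentage
rmse = math.sqrt(sum(e * e for e in abs_errors) / len(abs_errors))
```

Note that with DT = 0.25 the Euler run compounds 2.5% every quarter, overshooting the annually-compounded actual value; this is one way a smaller DT can make the results less accurate, not more.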

Read more…