Posts Tagged ‘Version 9.1.2’

Using PEST to Calibrate Models

January 14th, 2011 21 comments

There are times when it is helpful to calibrate, or fit, your model to historical data. This capability is not built into the iThink/STELLA program, but it is possible to interface with external programs to accomplish this task. One generally available program for calibrating models is PEST, which is freely available online. In this blog post, I will demonstrate how to calibrate a simple STELLA model using PEST on Windows. Note that this method relies on the Windows command line interface added in version 9.1.2 and will not work on the Macintosh. The export to comma-separated value (CSV) file feature, also added in version 9.1.2, is used as well.

The model and all files associated with its calibration are available by clicking here.

The Model

The model being used is the simple SIR model first presented in my blog post Limits to Growth. The model is shown again below. There are two parameters: infection rate and recovery rate. Technically, the initial value for the Susceptible stock is also a parameter. However, since this is a conserved system, we can make an excellent guess as to its value and do not need to calibrate it.


The Data Set

We will calibrate this model to two data sets. The first is the number of weekly deaths caused by the Hong Kong flu in New York City over the winter of 1968-1969 (below).


The second is the number of weekly deaths per thousand people in the UK due to the Spanish flu (H1N1) in the winter of 1918-1919 (shown later).

In both cases, I am using the number of deaths as a proxy for the number of people infected, which we do not know. This is reasonable because the number of deaths is directly proportional to the number of infected individuals. If we knew the constant of proportionality, we could multiply the deaths by this constant to get the number of people infected.
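Before diving into PEST itself, it may help to see the calibration idea in miniature. The Python sketch below does, crudely, what a calibrator does: it adjusts the two parameters until the simulated infection curve best matches the data in a least-squares sense. Everything here (the Euler-integrated SIR equations, the parameter names, the synthetic "observed" data, and the grid search standing in for PEST's optimizer) is illustrative, not the posted model or PEST itself.

```python
# A toy stand-in for what a calibrator does: search parameter space for
# the (infection rate, recovery rate) pair that minimizes the squared
# error between the simulated curve and the data.

def simulate_sir(infect_rate, recover_rate, s0=1000.0, i0=1.0,
                 dt=0.25, weeks=13):
    """Euler-integrate a simple SIR model; return weekly infected counts."""
    s, i = s0, i0
    weekly = []
    for _ in range(weeks):
        for _ in range(int(1 / dt)):
            infections = infect_rate * s * i / (s0 + i0)
            recoveries = recover_rate * i
            s -= infections * dt
            i += (infections - recoveries) * dt
        weekly.append(i)
    return weekly

def sse(params, data):
    """Sum of squared errors between simulation and data."""
    return sum((m - d) ** 2 for m, d in zip(simulate_sir(*params), data))

# Synthetic "observed" data generated from known parameters, so we can
# verify that the search recovers them.
data = simulate_sir(1.5, 0.6)

# A coarse grid search stands in for PEST's gradient-based optimizer.
best = min(((ir / 10, rr / 10) for ir in range(5, 30)
            for rr in range(1, 15)), key=lambda p: sse(p, data))
print(best)  # → (1.5, 0.6)
```

PEST does the same job far more efficiently with a gradient-based search, and it interfaces with the model through files rather than function calls, but the objective it minimizes is the same sum of squared errors.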

Read more…

Converting a Sector-based Model to Modules

March 17th, 2010 5 comments

I generally do not use modules when building very small models (only a couple of stocks and flows). As the model grows, this often leads me to use sectors because they are so convenient.  By the time I have three sectors, though, it starts to become clear that I should have used modules, and I need to convert my sector-based model into a module-based one.  I also have a number of older sector-based models that are crying out to be module-based.

Converting from sectors to modules is not very difficult:

  1. Make sure there are no connections or flows between sectors.  Replace any of these with ghosts in the target sector.
  2. In a new model, create one module for every sector.
  3. Copy and paste the structure from each sector into its corresponding module.
  4. Connect the modules:  At this point, the model structure has been rearranged into modules, but none of the modules are connected.  The ghosts that were in the sectors became real entities when they were pasted into the modules.  Go back to identify all of these connections and reconnect them in the module-based model.

Stepping Through a Sample Model

Let’s walk through an example.  A small sector-based model is shown below (and is available by clicking here).


This model violates what I would call good sector etiquette:  there are connectors that run between the sectors.  This is often useful in a small model such as this because it makes the feedback loops visible.  However, in a larger model, this can lead to problems such as crossed connections and difficulty in maintaining the model because sectors cannot be easily moved.

Read more…

Running Mean and Standard Deviation

October 22nd, 2009 6 comments

This is an update to a post published on August 31, 2009.  The attached model was updated to handle negative means, and an alternate method was added at the end.

I am frequently asked which built-in function gives either the running mean or running standard deviation of a model variable.  Unfortunately, there is no such built-in at this time (no, that is not what MEAN() does).

Luckily, however, we can replicate the behavior we desire from built-in functions by creating a reusable module.  I can create a module that calculates a running average and a running standard deviation from any model variable.

When building a reusable module component, it is important to carefully define what the input to the module will be (i.e., what are the parameters to the built-in function) and what the output of the module will be (i.e., what is the result or return value of the built-in function).  In this particular case, the input will be the variable whose running average or running standard deviation we wish to find.  There are two outputs:  the running average and the running standard deviation.  Note we do not have to use both outputs all the time.

Thus, our new module can be used as shown below:


Note the name of the module was chosen to give a meaningful context to the running mean and standard deviation variables, which have fixed names defined inside the reusable module.  As in this example, it is always a good idea to give the module outputs general names that make sense when qualified by a context (the module name).

The reusable module itself was built and tested in iThink, and can also be used in STELLA.  The input parameter was given an equation to allow the model to be completely tested and debugged before being reused.  The model appears below and can be downloaded by clicking here.


Note the input to the module is named value.  After importing the module, this will need to be assigned to the variable in question, Cash in the above example.  This can be done from outside the module by right-clicking on Cash and choosing “Module->Assign to”, or right-clicking on value and choosing “Module->Assign Input to”.  The outputs can be assigned in a similar way, or the Ghost tool can be used.

This method, while relatively easy to understand, does not accurately compute the standard deviation when the mean of the running sum of squares is close in magnitude to the square of the running mean.  An alternate method that does not suffer this problem was developed by Welford in 1962 and is implemented in the model that can be downloaded by clicking here.
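For readers who want to see the difference outside the modeling environment, here is a Python sketch of both approaches. The Welford update is the standard 1962 formulation; the data set is contrived (a large constant offset) precisely to trigger the cancellation problem described above.

```python
import math

def naive_std(values):
    """Textbook shortcut: sqrt(mean of squares minus square of mean).
    It breaks down when the two terms are nearly equal in magnitude."""
    n = len(values)
    mean = sum(values) / n
    mean_sq = sum(v * v for v in values) / n
    return math.sqrt(max(mean_sq - mean * mean, 0.0))

def welford(values):
    """Welford's one-pass algorithm: a numerically stable running
    mean and (population) standard deviation."""
    mean = 0.0
    m2 = 0.0
    for n, v in enumerate(values, start=1):
        delta = v - mean
        mean += delta / n
        m2 += delta * (v - mean)
    return mean, math.sqrt(m2 / len(values))

# A large offset makes the mean of the squares and the squared mean
# almost identical: exactly the troublesome case described above.
data = [1e8 + x for x in (4.0, 7.0, 13.0, 16.0)]
mean, std = welford(data)
print(mean, std)        # → 100000010.0 4.743416490252569
print(naive_std(data))  # noticeably off, due to cancellation
```

Drop the 1e8 offset and both functions agree to machine precision; with it, only Welford's update returns the correct answer.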

Finally, I am including a simple reusable module that finds the maximum value of a model variable across the entire run of a simulation.  It can be downloaded by clicking here.  It uses a stock to hold the maximum value seen so far, and takes advantage of the fact that uniflows cannot be negative.  It is used the same way as the running mean and standard deviation module, but only has one output called maximum.
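The same stock-and-uniflow trick can be sketched in a few lines of Python (the function name and list-based interface are mine, not the module's):

```python
def running_max(values):
    """Mimic the reusable module: a 'stock' holds the maximum seen so
    far, and its inflow max(value - stock, 0) is a uniflow, so the
    stock can only ever rise."""
    stock = values[0]
    history = [stock]
    for v in values[1:]:
        stock += max(v - stock, 0)   # uniflows cannot be negative
        history.append(stock)
    return history

print(running_max([3, 1, 4, 1, 5, 9, 2, 6]))  # → [3, 3, 4, 4, 5, 9, 9, 9]
```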

For more information about modules, consult the iThink and STELLA help files.  These on-line resources are also available:

Using Modules Webinar

Module FAQs

Spatial Modeling with isee Spatial Map

April 15th, 2009 11 comments

Editor’s Note: This is part 3 of a 3-part series on spatial modeling in iThink and STELLA. Part 1 is available here. Part 2 is available here.

Last time, we explored a two dimensional diffusion problem by looking at a metal plate with constant heat applied to the center. The model is available here: 2d-diffusion. The results, using isee Spatial Map, of the start (left) and end (right) of a six-minute simulation are shown below.


I am frequently asked how to set up Spatial Map. isee Spatial Map is a simple program that can be used to display any dataset as a two dimensional grid with specific colors assigned to data ranges. Since it is stand-alone, iThink and STELLA communicate with it through the Export Data functionality. If you wish to plot simulation results in Spatial Map, you must first set up a persistent link to a CSV file. This persistent link is always going to be from a table that contains just one element of the array you wish to view in Spatial Map.

In this example, a table named “Temp Export Table” was created to export the temperature data. The first element, temperature[1, 1], was placed in the table. There is a subtlety here that cannot be overlooked. I wish to plot the values of the stock T as it varies over time. Yet I export a different variable named “temperature”. Why is this?

This is necessary because, with the export settings that are compatible with Spatial Map, stocks only export their initial values, no matter how far the simulation has progressed. If we export T, we will only ever see the initial conditions in Spatial Map. Thus, when displaying a stock in Spatial Map (and we almost always do display stocks), it is necessary to create a converter that is set identically equal to the stock. The converter exports its current values, and since it is equal to the stock, the stock's current values are exported. The converter used for this purpose in this sample model is named "temperature".

Next it is necessary to set up the persistent link. Choose Export Data… from the Edit menu. The Export Type should already be set to Persistent and Dynamic. Under Export Data Source, select "Export variables in table" and choose the table with the array element in it from the pop-up menu. In this case, that table is called "Temp Export Table". Also select "One set of values" under Interval. This forces the data to be exported in the format required by Spatial Map. These settings are shown below.


To finish setting up the export, choose the CSV file to export to and press OK. For this model, the file is named "2D Diffusion.csv". Note that all of this has already been set up in the attached sample, so you will not need to set it up again. You can examine the settings, though, by choosing Manage Persistent Links in the Edit menu and then pressing the Edit link at the end of the "Temp Export Table" line in the Export block.

The value of “temperature” will now be exported once at the start of each run and once at the end. If you wish to see the simulation unfold in Spatial Map, it will be necessary to set a Pause interval, as dynamic links are also exported every time the simulation pauses. Under Runs Specs… in the Run menu, you can see that I have set the Pause Interval to 20. This forces the Spatial Map to update every 20 seconds during the simulation run. This also forces the user to keep pressing Run to advance the simulation.

Read more…

Spatial Modeling in Two Dimensions

April 7th, 2009 10 comments

Editor’s Note:  This is part 2 of a 3-part series on spatial modeling in iThink and STELLA.  Part 1 is available here.  Part 3 is available here.

Last time, we explored spatial modeling using the one-dimensional diffusion problem as an example.  Many spatial applications, however, require two dimensional formulations.  As an extension, we will now explore the two-dimensional diffusion problem.  Instead of a one-meter metal bar with constant heat applied at its ends, the two-dimensional diffusion problem looks at the response of a one-meter by one-meter metal plate with constant heat applied to its center.  We then watch the heat diffuse across the plate.

At first blush, one might think the two-dimensional case is much more difficult than the one-dimensional case.  In particular, if a grid is superimposed over the plate, each finite element on the plate has eight neighbors, as shown below.  It is tempting, therefore, to consider radiating heat in each of these eight directions.


However, without looking at the two-dimensional diffusion equations, if we consider just the physical layout of this system, the four corners of the finite element only touch the four corner neighbors (1, 3, 5, and 7) at a single point.  In contrast, the four sides of the finite element are shared with each of its four immediate neighbors (2, 4, 6, and 8).  This suggests that heat only radiates to (and from) these four neighbors, not all eight.  In fact, if we examine the two-dimensional diffusion equation, we find that there are only component contributions in the x- and y-directions.  There are no contributions on the diagonals (which would appear in the equation as ∂²u/∂x∂y and ∂²u/∂y∂x terms).

Intuitively, then, we have a finite element that is very similar to the one-dimensional case.  We only need to add corresponding flows in the y-direction.  This leads to the following model with the individual finite elements arrayed.


The array T is now two-dimensional, in x and in y.  In addition, dx can differ from dy, so the diffusion constant C must be broken down into its constituent parts Cx = k/dx² and Cy = k/dy².  This leads to the following set of equations for the radiant flows through the plate:

in left = Cx*T[X – 1, Y]                               in top = Cy*T[X, Y – 1]
out left = Cx*T[X, Y]                                  out top = Cy*T[X, Y]
out right = Cx*T[X, Y]                                 out bottom = Cy*T[X, Y]
in right = Cx*T[X + 1, Y]                              in bottom = Cy*T[X, Y + 1]

X and Y are dimension names for the elements in the x- and y-directions, respectively.
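As a sanity check on the formulation, the four-neighbor flows above can be written as a plain Python update. The grid size, constants, and boundary handling below are illustrative, not taken from the posted model.

```python
def diffuse_step(T, cx, cy, dt):
    """One Euler step of two-dimensional diffusion using only the four
    edge neighbors, mirroring the in/out flow equations above."""
    nx, ny = len(T), len(T[0])
    new = [row[:] for row in T]
    for x in range(nx):
        for y in range(ny):
            net = 0.0
            if x > 0:
                net += cx * (T[x-1][y] - T[x][y])    # in left - out left
            if x < nx - 1:
                net += cx * (T[x+1][y] - T[x][y])    # in right - out right
            if y > 0:
                net += cy * (T[x][y-1] - T[x][y])    # in top - out top
            if y < ny - 1:
                net += cy * (T[x][y+1] - T[x][y])    # in bottom - out bottom
            new[x][y] = T[x][y] + net * dt
    return new

# A 5x5 plate with constant heat applied to its center cell.
T = [[0.0] * 5 for _ in range(5)]
T[2][2] = 100.0
for _ in range(10):
    T = diffuse_step(T, cx=0.1, cy=0.1, dt=1.0)
    T[2][2] = 100.0   # the center is held at the applied temperature
print(T[2][1])  # edge neighbors of the center warm up symmetrically
```

Because heat flows only through the four shared edges, the pattern that emerges is the same diamond-shaped spread you will see in the spatial maps below.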

Using isee Spatial Map, it is possible to view the results of this diffusion across two dimensions.  Spatial Map displays an array as a one-dimensional or two-dimensional grid (depending on the array).  Each cell in the grid is filled with a color corresponding to the value in the corresponding cell of the array.  Below are two spatial maps.  The one on the left shows the initial conditions of the metal plate.  Note that heat only appears in the center of the plate, where it is being externally applied.  The map on the right shows the distribution of heat across the plate at the end of a six-minute simulation.


The model is available here:  2d-diffusion.  It is already configured to use isee Spatial Map.  In the final installment of this 3-part series, I will describe how to set up isee Spatial Map.

Spatial Modeling in iThink and STELLA

March 31st, 2009 No comments

STELLA and iThink provide capabilities to model spatial problems.  Version 9.1.2 expands these capabilities to allow more intuitive equation formulations.

Spatial modeling is concerned with modeling behavior over space as well as time.  Common applications include modeling flows in lakes and streams or modeling development in an urban landscape.  To demonstrate the new capabilities in version 9.1.2, the one-dimensional diffusion problem will be modeled in STELLA.

Consider the diffusion of heat through a one-meter metal bar when a constant temperature is applied to both ends.  This problem is typically introduced as the following partial differential equation for temperature function u(x, t) with the given boundary conditions:

∂u/∂t – k∂²u/∂x² = 0
u(x, 0) = 0, 0 < x < 1
u(0, t) = u0, t ≥ 0
u(1, t) = u0, t ≥ 0

Here, k is the diffusion coefficient (based on thermal conductivity, density, and heat capacity) and u0 is the constant temperature applied to both ends of the bar.  The finite difference solution to this problem (with the above initial conditions) is:

u(x, t + dt) = u(x, t) + ∂u/∂t·dt
∂u/∂t = k(u(x + dx, t) – 2u(x, t) + u(x – dx, t))/dx²
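The finite-difference update can be run directly in Python as a quick check. The node count, diffusion coefficient, and step sizes here are illustrative, chosen so the explicit scheme stays stable (k·dt/dx² well below 0.5):

```python
def step(u, k, dx, dt, u0):
    """One explicit finite-difference step of the 1D heat equation.
    The end nodes are pinned at the applied temperature u0."""
    new = u[:]
    for x in range(1, len(u) - 1):
        dudt = k * (u[x+1] - 2*u[x] + u[x-1]) / dx**2
        new[x] = u[x] + dudt * dt
    new[0] = new[-1] = u0   # boundary conditions u(0, t) = u(1, t) = u0
    return new

# A one-meter bar split into 11 nodes, initially cold, ends held at 100.
n = 11
u = [0.0] * n
u[0] = u[-1] = 100.0
dx = 1.0 / (n - 1)
for _ in range(200):
    u = step(u, k=0.001, dx=dx, dt=1.0, u0=100.0)
print(u[5])  # the midpoint warms as heat diffuses in from both ends
```

The temperature profile stays symmetric about the middle of the bar and every node creeps toward the applied temperature, which is exactly the behavior the STELLA model reproduces with stocks and flows.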

It is also possible to derive a closed-form solution.  Unless you are an astrophysicist, this solution probably does little to help you understand what is happening.  It is far more intuitive to develop a physical model of the underlying mechanics than to wrestle with the mathematics.

Read more…

A Minor Release with Major Features

March 26th, 2009 No comments
Karim and Rob listening to customers at User Conference Lab

Karim & Rob listening to customers at User Conference Lab

We’ve been working hard these past few months preparing for the latest release of STELLA and iThink and are happy to finally share it with everyone.  Version 9.1.2 is now available!

As an update to STELLA and iThink, you could easily be fooled into thinking that it is made up of just a few minor bug fixes.  To the contrary, this release is full of new features and enhancements that make it much more exciting than its version number suggests.

After receiving wonderful feedback at our User Conference last October, our development team was motivated to incorporate as many suggestions as they could into this particular update. Better yet, version 9.1.2 is a FREE update for anyone who has purchased a software license or an Annual Support Contract within the last 12 months.

A series of in-depth posts about specific features in 9.1.2 already exist and we’ll be adding more in response to your comments in the coming weeks. In addition to providing new functionality, we worked with beta testers on eliminating performance bottlenecks especially with larger models spanning multiple modules.

Version 9.1.2 shows impressive performance. This is a significant step in productivity! — Albert Mauritius, Beta Tester

Check out the complete list of what’s new in version 9.1.2 on our web site or download your copy now.

Running Models from the Command Line

March 20th, 2009 4 comments

Version 9.1.2 introduces the oft-requested ability to run models from the command line outside of iThink or STELLA (Windows only).  This feature opens up a number of possibilities for users who need to automate their modeling tasks.  For example, you could use the command line to automatically run a model multiple times, start a model from a script, or create a shortcut that opens and runs a model when you double-click on the icon.

I have set up a simple example that illustrates how you could run an iThink model multiple times to skirt the 32,767 time step limit that advanced users sometimes run up against.  The model itself does nothing extraordinary; it simply increments the value of a Stock by one at each time step. What makes it worth noticing is that it runs 32,000 iterations three times to mimic a 96,000-step run.

>> Download the Sample Files

This example will open and run the model file named multiple_run.itm, import data at the beginning of each run, and export data at the end of each run.  You can double-click on the “Start batch runs” shortcut to kick things off, or you can use the batch file that is provided. Note that you may have to edit these start files to make them work on your computer.

Command Line Syntax

"c:\program files\isee systems\iThink 9.1.2\iThink.exe" -rn 3 multiple_run.itm

Shortcut Properties

The command line above was entered into the “Target” field of the “Start batch runs” Properties dialog.  Note the “Start in” field is intentionally left blank so that the shortcut will run the model from the current directory.  If you move the shortcut file to a different directory,  you’ll need to enter that directory into this field.

The identical command line syntax is used in the supplied batch file named “go.bat” and can be edited using Notepad.

Sample Model and Spreadsheet

The sample model uses a table to report the value of the Stock at the end of each run so that it can be exported to the “multiple_run_data.xls” Excel file. In Excel, I linked the exported value of the Stock to an “Import” worksheet.   This way, one run hands off the final data to start the subsequent run like runners in a relay race passing a baton.  Note the initial Stock value will need to be reset in Excel before starting a new batch of runs.

Running the sample command line puts iThink into a macro mode.  It opens just as if you had double-clicked its icon and manually started the runs yourself.   Sit back, watch the model open, and let the model run three times on its own.  Try to leave the process alone while it executes; I found that if the runs were interrupted, the Excel file could sometimes lose its formatting.
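If you prefer to drive the runs from a script rather than a shortcut or batch file, the same command line can be assembled programmatically. This Python sketch only uses the path and flags shown in this post; adjust the path to match your installation, and note that actually launching it requires Windows with iThink installed.

```python
import subprocess  # only needed if you actually launch the runs

# Path and flags as shown in this post; adjust for your installation.
ITHINK = r"c:\program files\isee systems\iThink 9.1.2\iThink.exe"

def build_command(model, runs, keep_open=False):
    """Assemble the iThink command line described above."""
    cmd = [ITHINK, "-rn", str(runs)]
    if keep_open:
        cmd.append("-nq")   # keep the model open after the runs finish
    cmd.append(model)
    return cmd

cmd = build_command("multiple_run.itm", runs=3)
print(" ".join(cmd))
# On Windows, with iThink installed, this would start the batch:
# subprocess.run(cmd, check=True)
```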

For an experiment you could add “-nq” to the command so that the model stays open after running.  After adding the new parameter your command would look like this:

"c:\program files\isee systems\iThink 9.1.2\iThink.exe" -rn 3 -nq multiple_run.itm

There are many more command options available.  View the full list here and experiment with other parameters.


Another advantage to running from the command line is that you can start and run a model from inside Excel. Not only can you run the model this way, but you can take advantage of all the parameters.

Create a shortcut as described in the post and save it. In Excel, pick a cell and select Insert hyperlink. Browse to and select the shortcut and then click OK. It’s that easy!!