
## Working with Array Equations in Version 10

STELLA/iThink version 10 introduces several new array features, including simplified and more powerful Apply-To-All equations that are designed to reduce the need to specify equations for every individual element.

Dimension names are optional

When an equation is written using other array names, the dimension names are not normally needed.  For example, given arrays A, B, and C, each with the dimensions Dim1 and Dim2, A can be set to the sum of B and C with this equation:

B + C

Dimension names are still needed when the dimensions do not match.  For example, to also add in the first 2-dimensional slice of the 3-dimensional array D[Dim1, Dim2, Dim3], the equation becomes:

B + C + D[Dim1, Dim2, 1]
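The same bookkeeping can be sketched outside STELLA; here is a rough NumPy analogue (array names and values are illustrative, and NumPy indices are 0-based where STELLA's are 1-based):

```python
import numpy as np

# B and C share the dimensions (Dim1, Dim2); D adds a third dimension.
B = np.arange(6).reshape(2, 3)
C = np.ones((2, 3))
D = np.arange(24).reshape(2, 3, 4)

# A = B + C: elementwise, no explicit indices needed.
A = B + C

# A = B + C + D[Dim1, Dim2, 1]: the third index is fixed at 1
# (STELLA's index 1 corresponds to NumPy's 0).
A2 = B + C + D[:, :, 0]
```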

The wildcard * is optional

When an array builtin is used, the * is normally not needed.  For example, to find the sum of the elements of a 2-dimensional array A[Dim1, Dim2], use this equation:

SUM(A)

If, however, the sum of only the first column of A is desired, the * is still needed:

SUM(A[*, 1])
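For comparison, a rough NumPy sketch of the two sums (values illustrative; NumPy is 0-based):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

total = A.sum()            # SUM(A): sum over every element
first_col = A[:, 0].sum()  # SUM(A[*, 1]): first column only
```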

Simplified array builtins

There are five array builtins:  SIZE, SUM, MEAN, STDDEV, and RANK.  In addition, the MIN and MAX functions have been extended to take either one or two array arguments.  All but RANK can also be applied to queues and conveyors.

SUM, MEAN, and STDDEV all work in a similar way (see examples of SUM above).

Using the MAX function, it is possible to find the maximum value in array A,

MAX(A)

the maximum value in array A, or zero if everything is negative,

MAX(A, 0)

or the maximum across two arrays A and B,

MAX(A, B)

MIN works the same way, but finds the minimum.
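A rough Python sketch of the three MAX forms (values illustrative):

```python
import numpy as np

A = np.array([-3.0, -1.0, -2.0])
B = np.array([-2.0, -5.0, 4.0])

max_a = A.max()                 # MAX(A): largest element of A
max_a_or_0 = max(A.max(), 0.0)  # MAX(A, 0): zero if everything is negative
max_ab = max(A.max(), B.max())  # MAX(A, B): maximum across both arrays
```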

The SIZE function requires an array parameter, but within an array, the special name SELF can be used to refer to the array whose equation is being set.  In addition, wildcards can be used to determine the size of any array slice.  In the equation for array A[Dim1, Dim2],

SIZE(SELF)

gives the total number of elements in array A while

SIZE(SELF[*, 1])

gives the size of the first dimension of A, i.e., the number of elements – or rows – in the first column.  Likewise,

SIZE(SELF[1, *])

gives the size of the second dimension of A, i.e., the number of elements – or columns – in the first row.

Since RANK returns the index of the element with the given rank, it can also be used to find the index of the minimum element (using rank 1) or the maximum element (using rank SIZE(array)).  Given array A[Dim1, Dim2], the index of the minimum element in the first row can be found with the equation:

RANK(A[1, *], 1)

However, to find the minimum element in the entire array, use:

RANK(A, 1)

This returns a single index that can be mapped to an array element using the special parentheses subscripting:

A(RANK(A, 1))

will be the value of the minimum element in A, i.e., the same value as MIN(A).  However, if array B has the same dimensions as A (i.e., for this example, B[Dim1, Dim2]), the value of the element in B that corresponds to the minimum element in A is found with:

B(RANK(A, 1))
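In NumPy terms, RANK(A, 1) behaves like a flat argmin, and the parentheses subscripting like indexing with that flat position; a rough sketch (values illustrative):

```python
import numpy as np

A = np.array([[5.0, 2.0],
              [7.0, 1.0]])
B = np.array([[10.0, 20.0],
              [30.0, 40.0]])

idx = A.argmin()         # RANK(A, 1): single index of the minimum element
min_val = A.flat[idx]    # A(RANK(A, 1)): same value as MIN(A)
b_val = B.flat[idx]      # B(RANK(A, 1)): corresponding element of B
row_min = A[0].argmin()  # RANK(A[1, *], 1): minimum within the first row
```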

Accessing elements of queues and conveyors

Use an array subscript to access an element of a queue or conveyor.  The indices start on the outflow side (at 1) and increase toward the inflow side (up to SIZE(queue) or SIZE(conveyor)).  This allows the entire contents of a queue or conveyor to be assigned to an array, enabling additional calculations such as a weighted average.  Given a conveyor named Lag, a new array weighted_by_time[Slat] can be created with the equation:

(Slat*DT)*Lag[Slat]

Note the subscript is required for the conveyor.  Otherwise, the total value of the conveyor will be used.  Note also that the size of the dimension Slat must be at least large enough to hold all of the conveyor elements (the remaining elements in weighted_by_time will be set to zero).  The value of Slat*DT is the amount of time remaining before the material in that slat exits the conveyor.

A converter, average_latency, which is the average time remaining for the contents to exit (a weighted mean), can now be defined with the equation:

SUM(weighted_by_time)/Lag
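As a plain-Python sketch of the same weighted mean (the slat values and DT are illustrative; slat i, counted 1-based from the outflow, exits after i*DT more time units):

```python
DT = 0.25
lag_slats = [4.0, 8.0, 6.0, 2.0]  # illustrative conveyor contents
lag_total = sum(lag_slats)        # the conveyor's total value

# weighted_by_time[Slat] = (Slat*DT) * Lag[Slat]
weighted_by_time = [(i + 1) * DT * v for i, v in enumerate(lag_slats)]

# average_latency = SUM(weighted_by_time) / Lag
average_latency = sum(weighted_by_time) / lag_total
```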

Transposition

It is sometimes helpful to transpose an array.  To facilitate this, the ' (apostrophe) operator was added.  Given arrays A[Dim1, Dim2, Dim3] and B[Dim3, Dim2, Dim1], the array A can be set equal to B transposed with the following equation:

B '

Note that a space is required between the array name and the apostrophe.  This is equivalent to the following equation that uses dimension names:

B[Dim3, Dim2, Dim1]

This is especially helpful for square matrices or other arrays that use the same dimension name many times.  Given arrays C[Dim, Dim, Dim] and D[Dim, Dim, Dim], the array C can be set equal to D transposed with the following equation, which reverses all the dimensions:

D '

This is equivalent to the following equation that uses the new positional dimension names:

D[@3, @2, @1]

Within a subscript, the @ operator can be followed by an integer that represents the dimension position in the array whose equation is being set.  In the example above, @3 represents the third dimension name of C.  This is particularly useful if straight transposition is not needed and all the dimension names are the same.  For example,

D[@2, @1, @3]

flips the first two dimensions of D (when assigning to C) while leaving the third alone.
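In NumPy, the apostrophe operator corresponds to reversing the axes, and the positional form to an explicit axis permutation; a rough sketch (values illustrative):

```python
import numpy as np

D = np.arange(8).reshape(2, 2, 2)

# D ' : full transpose, reversing every dimension (D[@3, @2, @1])
D_rev = D.T

# D[@2, @1, @3]: swap the first two dimensions, leave the third alone
D_swap = np.transpose(D, (1, 0, 2))
```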

Subscript expressions

Subscripts can contain any valid expression.  Given an array A and a variable x, an element at a variable index that is one more than twice x can be accessed with:

A[2*x + 1]

Element labels can also appear within these expressions.

In Apply-To-All arrays, dimension names can be used.  The following equation sets the values in array A[Dim1] to the even-indexed elements of array B[Dim1], filling the second half of A with zeroes:

B[2*Dim1]

Dimension names can also be used outside subscripts.  The following equation slides the elements of B up one position in A, placing 10 in the first element of A (without the IF, the first element would contain 0).

IF Dim1 = 1 THEN 10 ELSE B[Dim1 - 1]

Even if Dim1 is labeled, it must be compared to the numeric index 1 in the IF expression because element labels can only be used within a subscript.  Note that numeric indices are always valid for any array dimension, even if it is labeled.
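The shift-with-boundary pattern can be sketched in plain Python, with a 1-based loop index standing in for Dim1 (values illustrative):

```python
B = [5.0, 6.0, 7.0, 8.0]

# IF Dim1 = 1 THEN 10 ELSE B[Dim1 - 1], with i playing the role of Dim1
A = [10.0 if i == 1 else B[(i - 1) - 1]  # extra -1 for 0-based Python lists
     for i in range(1, len(B) + 1)]
# A holds B slid up one position, with 10 in the first element
```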

Array Ranges

A range of an array can be specified using the range operator : (colon), which takes a lower bound on the left and an upper bound on the right (e.g., 1:10 means “from 1 to 10”).  Just as wildcards allow control over which dimensions to include, ranges control which range of elements to include in each dimension.  For example, the following equation sums the top-left 3×4 rectangle of array A[Dim1, Dim2]:

SUM(A[1:3, 1:4])
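A rough NumPy analogue of the range sum (STELLA's 1-based inclusive ranges become 0-based half-open Python slices):

```python
import numpy as np

A = np.arange(1, 26).reshape(5, 5)  # illustrative 5x5 array, values 1..25

# SUM(A[1:3, 1:4]): rows 1-3 and columns 1-4, i.e. the top-left 3x4 block
top_left_sum = A[0:3, 0:4].sum()
```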

We hope you find these new array capabilities useful in your modeling work and welcome any comments and suggestions.

Categories: Modeling Tips

## What is the difference between STELLA and iThink?

The question we get asked most frequently by just about anyone who wants to know more about our modeling software is “What is the difference between STELLA and iThink?”  From a functional perspective, there are no differences between the STELLA and iThink software — they are two different brands of the same product.

The STELLA brand is targeted toward individuals in educational and research settings.  Supporting materials such as An Introduction to Systems Thinking with STELLA and sample models cover the natural and social sciences.

iThink, on the other hand, is targeted toward an audience of users in business settings.  An Introduction to Systems Thinking with iThink is written with the business user in mind and model examples apply the software to areas such as operations research, resource planning, and financial analysis.

Aside from the different program icons and other graphic design elements that go along with branding, there are just a few minor differences in the default settings for STELLA and iThink.  These differences are intended to pre-configure the software for the model author.  They do not limit you in any way from configuring the default setup to match your own individual preferences.

Below is a list of all the differences between the default settings for STELLA and iThink.

Opening Models

When opening a model with STELLA on Windows, by default, the software looks for files with a .STM extension.  Similarly, iThink looks for files with an .ITM extension.  If you want to open an iThink model using STELLA or vice-versa, you need to change the file type in the Open File dialog as shown below.

On Macs, the open dialog will show both iThink and STELLA models as valid files to open.

If you open a model with a file type associated with a different product than the one you are using, you’ll get a message similar to the one below warning you that the model will be opened as “Untitled”.  Simply click OK to continue.

Saving Models

When saving a model in STELLA, by default, the software saves the model with a .STM file extension.  Similarly, iThink saves models with an .ITM extension.  If you’re using STELLA and want to save your model as an iThink file or vice-versa, use the Save As… menu option and select the appropriate type as shown below.

STELLA on Windows save dialog

STELLA on Mac save dialog

Run Specs

Since iThink is targeted toward business users who tend to measure performance monthly, the default Unit of time for iThink is set to Months.  It’s also easier to think about simulations starting in month 1 (rather than month zero), so we set the default simulation length in iThink to run from 1 to 13.  STELLA, on the other hand, reports the Unit of time as “Time” and, by default, runs simulations from 0 to 12.

Run Spec Default Settings Comparison

Table Reporting

In a business context, financial results are generally reported at the end of a time period and the values are summed over the report interval.  For example, in a report showing 2010 revenues we would assume the values reflect total revenues at the end of the year.  In line with this assumption, the default Table settings in iThink include reporting Ending balances, Summed flow values, and a report interval of one time step.

In a research setting, scientists tend to prefer reporting precise values at a particular time.   For this reason, the default Table settings in STELLA are configured to report Beginning balances, Instantaneous flow values, and a report interval of Every DT.

Table Default Settings Comparison

STELLA or iThink

When choosing between STELLA or iThink, try to think about the kinds of models you intend to build and the problems you are looking to solve.  If your objective is to drive business improvement, chances are iThink will be a better fit.  If your purpose is to understand the dynamics of a natural environment or social system, STELLA will likely be your brand of choice.  Whatever you decide, both products will provide you with the exact same functionality and can easily be configured to suit your own preferences.

Categories: STELLA & iThink

## Using PEST to Calibrate Models

There are times when it is helpful to calibrate, or fit, your model to historical data. This capability is not built into the iThink/STELLA program, but it is possible to interface to external programs to accomplish this task. One generally available program to calibrate models is PEST, available freely from www.pesthomepage.org. In this blog post, I will demonstrate how to calibrate a simple STELLA model using PEST on Windows. Note that this method relies on the Windows command line interface added in version 9.1.2 and will not work on the Macintosh. The export to comma-separated value (CSV) file feature, added in version 9.1.2, is also used.

The model and all files associated with its calibration are available by clicking here.

The Model

The model being used is the simple SIR model first presented in my blog post Limits to Growth. The model is shown again below. There are two parameters: infection rate and recovery rate. Technically, the initial value for the Susceptible stock is also a parameter. However, since this is a conserved system, we can make an excellent guess as to its value and do not need to calibrate it.
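To make the structure concrete, here is a minimal Euler-method sketch of an SIR model with those two parameters; the parameter values, market size, time horizon, and DT below are illustrative stand-ins, not the calibrated values from the post:

```python
DT = 0.125
infection_rate = 0.00002   # illustrative; this is what PEST would calibrate
recovery_rate = 0.25       # illustrative; this is what PEST would calibrate

S, I, R = 50000.0, 1.0, 0.0    # Susceptible, Infected, Recovered
for _ in range(int(20 / DT)):  # 20 weeks
    infections = infection_rate * S * I
    recoveries = recovery_rate * I
    S -= infections * DT
    I += (infections - recoveries) * DT
    R += recoveries * DT
```

Since this is a conserved system, the three stocks always sum to the initial population, which is why the initial value of Susceptible can be guessed rather than calibrated.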

The Data Set

We will calibrate this model to two data sets. The first is the number of weekly deaths caused by the Hong Kong flu in New York City over the winter of 1968-1969 (below).

The second is the number of weekly deaths per thousand people in the UK due to the Spanish flu (H1N1) in the winter of 1918-1919 (shown later).

In both cases, I am using the number of deaths as a proxy for the number of people infected, which we do not know. This is reasonable because the number of deaths is directly proportional to the number of infected individuals. If we knew the constant of proportionality, we could multiply the deaths by this constant to get the number of people infected.

Categories: Modeling Tips

## What are “Mental Models”? Part 2

Editor’s note:  This post is part two of a two part series on mental models.  You can read the first post by clicking here.

In part one of this series I stated “A mental model is a model that is constructed and simulated within a conscious mind.”  A key part of this definition is that mental models are not static; they can be played forward or backward in your mind like a video player playing a movie.  But even better than a video player, a mental model can be simulated to various outcomes, many times over, by changing the assumptions.

Mental Simulation

Remember the example from part one of the child reaching for the hot stove?  One possible outcome we can simulate is that the child does not get burned.  We can simulate this outcome by altering our assumptions. We could include a parent in the room who rescues the child in the nick of time.  Or, we could simulate the child slipping just before reaching the stovetop because the hardwood floor appears slippery.  This kind of mental simulation allows us to evaluate what may happen, given different conditions, and inform our decision making.  We don’t have to make any decisions while looking at the picture, but imagine what actions you might take if the scene above was actually unfolding in front of you.

It seems effortless to mentally simulate these types of mental models.  Most of the time we are not even aware that we are doing it.  But other times, it becomes very obvious that our brain is working rather hard.  For example, looking at the chess board below, can you determine if the configuration is a checkmate?

It is indeed.  But I’ll bet it took noticeably more effort for you to mentally simulate the chess game than it did with the child near the stove scenarios.  Think about the mental effort that the players make trying to simulate the positions on the board just a few moves ahead in the game.

The paper “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information” by G.A. Miller (1956) established that people can generally hold seven objects (numbers, letters, words, etc.) simultaneously within their working memory.  Think of “working memory” as you would think of memory in a computer.  It’s like the amount of RAM we have available to perform computations within our mind.  And it’s not very much.  This means if people want to do any really complex information processing, they’ll need some help.  Over the last 50 years or so, the help has come from computers.  (In fact, IBM designed a computer specifically for playing chess, dubbed ‘Deep Blue’).

Digital computers have catapulted humankind’s ability to design, test, and build new technology to unbelievable levels in a relatively short period of time.  Space exploration, global telecommunication, and modern health care technology would not have been possible without the aid of computers.  We are able to perform the computation required to simulate complex systems using a computer instead of our minds.  Running simulations with a computer is faster and more reliable.

What makes a model useful?

Models that we can simulate using computers come in many forms.  For example, a model could be a financial model in a spreadsheet, an engineering design rendered with a CAD program, or a population dynamics model created with STELLA.  But what makes any of these models useful?  Is it the model’s results?  Its predictions?  I think the ability to explain the results is what makes a model truly useful.

Models are tools that can contribute to our understanding and decision making processes.  To make decisions, a person needs to have some understanding of the system the model represents.  A business finance model, for example, can be a useful tool if you understand how the business works.

Consider a model that does not provide any explanatory content, only results.  This type of model is often referred to as a black box.  It gives you all the answers, but you have no idea how it works.  People rarely trust these types of models and they are often not very useful for generating understanding.

The most useful models are structured so that the model itself will provide an explanatory framework that enables someone to ask useful questions of it.  Those questions may be answered by experimenting with the model (simulating) which, in turn, can help deepen a person’s understanding of the system.

This is an important feedback loop in a person’s learning process.  This feedback loop can be accelerated if the model provides explanations and can be simulated with a computer.

Categories: Systems Thinking

## What is Delta Time (DT)?

After reading Karim Chichakly’s recent post on Integration Methods and DT, I was reminded that delta time (DT) has always been a tricky modeling concept for me to grasp.   Beginning modelers don’t usually need to think about changing DT since STELLA and iThink set it to a useful default value of 0.25.   But once you progress with your modeling skills, you might consider the advantages and risks of playing with DT.

The DT setting is found in the Run Specs menu.

By definition, system dynamics models run over time, and DT controls how frequently calculations are applied each unit of time.  Think of it this way: if your model were a movie, then DT would indicate the time interval between still frames in the strip of movie film.  For a simulation over a period of 12 hours, a DT of 1/4 (0.25) would give you a single frame every 15 minutes.  Lowering the DT to 1/60 would give a frame every minute.  The smaller the DT is, the higher the calculation frequency (1/DT).
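The frame analogy can be sketched in a few lines of Python (the inflow value is illustrative):

```python
def run_stock(dt, hours=12.0, inflow=10.0):
    """Euler integration: the stock is recalculated once per DT."""
    stock = 0.0
    steps = int(round(hours / dt))  # calculation frequency is 1/DT
    for _ in range(steps):
        stock += inflow * dt
    return stock, steps

coarse = run_stock(0.25)  # 48 frames, one every 15 minutes
fine = run_stock(1 / 60)  # 720 frames, one every minute
```

For a simple constant inflow both DTs reach the same final stock; the smaller DT only adds intermediate frames (and computation time).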

Beware of the Extremes

A common tendency for modelers is to set the calculation frequency too high.  Without really thinking too hard about it, more data seems to imply a higher quality model – just like more frames in movie film make for smoother motion.  If your model calculates more data for every time unit, its behavior will begin to resemble the behavior of a smoothly continuous system.  But a higher frequency of calculations can greatly slow down your model’s run performance and more data does not directly translate to a better simulation.

Beware of Discrete Event Models

Another situation where DT can often lead to unexpected behavior is with models that depend on discrete events.   My eyes were opened to this when I attended one of isee’s workshops taught by Corey Peck and Steve Peterson of Lexidyne LLC.

One of the workshop exercises involved a simple model where the DT is set to the default 0.25, the inflow is set to a constant 10, and the outflow is set to flush out the stock’s contents as soon as it reaches 50.   This is how the model’s structure and equations looked:

Stock = 0

inflow = 10

outflow = IF Stock >= 50 THEN 50 ELSE 0

I would have expected the value of the stock to plunge to zero after it reached or exceeded 50, but this graph shows the resulting odd saw-tooth pattern.

The model ends up behaving like a skipping scratched record, in a perpetual state of never progressing far enough to reach the goal of zero.  (Click here to download the model.)

What is happening in the model?  In the first DT after the stock’s value reaches exactly 50, the outflow sets itself to 50 in order to remove the contents from the stock.  So far so good, but now the DT gotcha begins to occur.  Since the outflow works over time, its value is always per time.  To get the quantity of material that actually flowed, you must multiply the outflow value (or rate) by how long the material was flowing.  When DT is set to 0.25, the material flows for 0.25 time units each DT.  Hence, the quantity of material removed from the stock is 50*0.25 = 12.50.

Suddenly we are in a situation where only 12.50 has been removed from the stock but the stock’s value is now less than 50.  Since the stock is no longer greater than or equal to 50, the outflow sets itself back to 0 and never actually flushes out the full contents of the stock.

So what do we do?  One solution to this problem would be to use the PULSE built-in to remove the full value from the stock.   Here’s what the equation for the outflow would look like:

outflow = IF Stock >= 50 THEN PULSE(Stock) ELSE 0

(Note: This option will only work using Euler’s integration method.)
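Both behaviors can be reproduced in a short Euler sketch, a plain-Python rendering of the equations above, with PULSE approximated as flowing Stock/DT for one DT:

```python
def simulate(dt=0.25, steps=40, use_pulse=False):
    """Euler sketch of the stock/outflow example; returns the trajectory."""
    stock, history = 0.0, []
    for _ in range(steps):
        inflow = 10.0
        if stock >= 50.0:
            # PULSE(Stock) drains the whole stock in one DT by flowing
            # Stock/DT; the plain version only flows 50 per unit time.
            outflow = stock / dt if use_pulse else 50.0
        else:
            outflow = 0.0
        stock += (inflow - outflow) * dt
        history.append(stock)
    return history

saw = simulate()                  # saw-tooth between 40 and 50
fixed = simulate(use_pulse=True)  # drops to inflow*DT after each flush
```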

STELLA and iThink have great help documentation on DT.  The general introduction provides a good explanation of how DT works. The more advanced DT Situations Requiring Special Care section focuses more on artifactual delays and the discrete model issues mentioned in this post.  Delta time and resulting model behaviors are reminders that system dynamics models run over time, but they achieve this by applying numerous discrete calculations in order to simulate the smooth behavior of actual systems.

Categories: Modeling Tips

## Converting a Sector-based Model to Modules

I generally do not use modules to build very small models (only a couple of stocks and flows), which may lead me to use sectors as the model grows because they are very convenient.  By the time I have three sectors, though, it starts to become clear that I should have used modules.  I then need to convert my sector-based model into a module-based model.  Historically, I also have a number of sector-based models that are crying out to be module-based.

Converting from sectors to modules is not very difficult:

1. Make sure there are no connections or flows between sectors.  Replace any of these with ghosts in the target sector.
2. In a new model, create one module for every sector.
3. Copy and paste the structure from each sector into its corresponding module.
4. Connect the modules:  At this point, the model structure has been rearranged into modules, but none of the modules are connected.  The ghosts that were in the sectors became real entities when they were pasted into the modules.  Go back to identify all of these connections and reconnect them in the module-based model.

Stepping Through a Sample Model

Let’s walk through an example.  A small sector-based model is shown below (and is available by clicking here).

This model violates what I would call good sector etiquette:  there are connectors that run between the sectors.  This is often useful in a small model such as this because it makes the feedback loops visible.  However, in a larger model, this can lead to problems such as crossed connections and difficulty in maintaining the model because sectors cannot be easily moved.

Categories: Modeling Tips

## Modeling Bass Diffusion with Rivalry

This is the last of a three-part series on the Limits to Growth Archetype.  The first part can be accessed here and the second part here.

Last time, we explored the effects of Type 1 rivalry (rivalry between different companies in a developing market) on the Bass diffusion model by replicating the model structure.  This part will generalize this structure and add Type 2 rivalry (customers switching between brands).

Bass Diffusion with Type 1 Rivalry

To model the general case of an emerging market with multiple competitors, we can return to the original single company case and use arrays to add additional companies.  In this case, everything except Potential Customers needs to be arrayed, as shown below (and available by clicking here).

For this example, three companies will be competing for the pool of Potential Customers.  Each array has one dimension, named Company, and that dimension has three elements, named A, B, and C, one for each company.  Although each parameter (wom multiplier, fraction gained per \$K, and marketing spend in \$K) can be separately specified for each company, all three companies initially use the same values.  All three companies, however, do not enter the market at the same time.  Company A enters the market at the start of the simulation, company B enters six months later, and company C enters six months after that.

Recall that the marketing spend is the trigger for a company to start gaining customers.  Thus, the staggered market entrance can be modeled with the following equation for marketing spend in \$K:

STEP(10, STARTTIME + (ARRAYIDX() - 1)*6)

The STEP function is used to start the marketing spend for each company at the desired time.  The ARRAYIDX function returns the integer index of the array element, so it will be 1 for company A, 2 for company B, and 3 for company C.  Thus, the offsets from the start of the simulation for the launch of each company’s marketing campaign are 0, 6, and 12, respectively.
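The same staggered-entry logic can be sketched as a plain Python function (company indices 1-3, time in months):

```python
def marketing_spend(company_index, t, start_time=0.0):
    """STEP(10, STARTTIME + (ARRAYIDX() - 1)*6) for a 1-based index."""
    onset = start_time + (company_index - 1) * 6
    return 10.0 if t >= onset else 0.0

# Company A (index 1) spends from month 0, B from month 6, C from month 12.
```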

This leads to the following behavior:

Note that under these circumstances, the first company to enter the market retains a leadership position.  However, companies B and C could anticipate this and market more strongly.  What if company B spent 50% more and company C spent 100% more than company A on marketing that is similarly effective?  This could be modeled by once again changing the equation for marketing spend in \$K, this time to:

STEP(10 + (ARRAYIDX() - 1)*5, STARTTIME + (ARRAYIDX() - 1)*6)

Categories: STELLA & iThink

## Developing a Market Using the Bass Diffusion Model

This is part two of a three-part series on Limits to Growth.  Part one can be accessed here and part three can be accessed here.

In part one of this series, I explained the Limits to Growth archetype and gave examples in epidemiology and ecology. This part introduces the Bass diffusion model, an effective way to implement the capture of customers in a developing market. This is also used to implement what Kim Warren calls Type 1 rivalry in his book Strategy Management Dynamics, that is, rivalry between multiple companies in an emerging market.

The Bass Diffusion Model

The Bass diffusion model is very similar to the SIR model shown in part one. Since we do not usually track customers who have “recovered” from using our product, the model only has two stocks, corresponding loosely to the Susceptible and Infected stocks. New customers are acquired through contact with existing customers, just as an infection spreads, but in this context this is called word of mouth (wom). This is, however, not sufficient to spread the news of a good product, so the Bass diffusion model also includes a constant rate of customer acquisition through advertising. This is shown below (and can be downloaded by clicking here).

The feedback loops B1 and R are the same as the balancing and reinforcing loops between Susceptible and Infected in the SIR model. Instead of an infection rate, there is a wom multiplier which is the product of the Bass diffusion model’s contact rate and the adoption rate. If you are examining policies related to these variables, it would be important to separate them out in the model.

The additional feedback loop, B2, starts the ball rolling and helps a steady stream of customers come in the door. If you examine the SIR model closely, you will see that the initial value of Infected is one. If no one is infected, the disease cannot spread. Likewise, if no one is a customer, there is no one to tell others how great the product is so they want to become customers also. By advertising, awareness of the product is created in the market and some people will become customers without having encountered other customers who are happy with the product.

The behavior of this model is shown below. Note it is not different in character from the SIR model or the simple population model.
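A minimal Euler sketch of the two-stock Bass structure described above (the wom multiplier, advertising fraction, market size, and horizon are illustrative values, not the ones used in the post):

```python
DT = 0.25
wom_multiplier = 0.0001  # contact rate x adoption fraction (illustrative)
adv_fraction = 0.01      # fraction of potentials converted by advertising

potential, customers = 10000.0, 0.0
for _ in range(int(60 / DT)):                          # 60 months
    from_wom = wom_multiplier * customers * potential  # loops B1 and R
    from_adv = adv_fraction * potential                # loop B2 seeds growth
    gained = from_wom + from_adv
    potential -= gained * DT
    customers += gained * DT
```

Even with no initial customers, the advertising term starts the ball rolling, and word of mouth then produces the same S-shaped saturation as the SIR model.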

Categories: STELLA & iThink