What is the difference between STELLA and iThink?

March 9th, 2011

The question we get asked most frequently by just about anyone who wants to know more about our modeling software is “What is the difference between STELLA and iThink?”  From a functional perspective, there are no differences between the STELLA and iThink software — they are two different brands of the same product.

The STELLA brand is targeted toward individuals in educational and research settings.  Supporting materials such as An Introduction to Systems Thinking with STELLA and sample models cover the natural and social sciences.

iThink, on the other hand, is targeted toward an audience of users in business settings.  An Introduction to Systems Thinking with iThink is written with the business user in mind and model examples apply the software to areas such as operations research, resource planning, and financial analysis.

Aside from the different program icons and other graphic design elements that go along with branding, there are just a few minor differences in the default settings for STELLA and iThink.  These differences are intended to pre-configure the software for the model author.  They do not limit you in any way from configuring the default setup to match your own individual preferences.

Below is a list of all the differences between the default settings for STELLA and iThink.

Opening Models

When opening a model with STELLA on Windows, by default, the software looks for files with a .STM extension.  Similarly, iThink looks for files with an .ITM extension.  If you want to open an iThink model using STELLA or vice-versa, you need to change the file type in the Open File dialog as shown below.

STELLA file open dialog

On Macs, the open dialog will show both iThink and STELLA models as valid files to open.

If you open a model whose file type is associated with the other product (not the one you are using), you’ll get a message similar to the one below warning you that the model will be opened as “Untitled”.  Simply click OK to continue.

STELLA file conversion dialog

Saving Models

When saving a model in STELLA, by default, the software saves the model with a .STM file extension.  Similarly, iThink saves models with an .ITM extension.  If you’re using STELLA and want to save your model as an iThink file or vice-versa, use the Save As… menu option and select the appropriate type as shown below.

STELLA save as dialog

STELLA on Windows save dialog

 

STELLA on Mac save dialog

Run Specs

Since iThink is targeted toward business users who tend to measure performance monthly, the default Unit of time for iThink is set to Months.  It’s also easier to think about simulations starting in month 1 (rather than month 0), so we set the default simulation range in iThink to run from 1 to 13.  STELLA, on the other hand, reports the Unit of time as “Time” and, by default, runs simulations from 0 to 12.

Run Spec comparison

Run Spec Default Settings Comparison

Table Reporting

In a business context, financial results are generally reported at the end of a time period and the values are summed over the report interval.  For example, in a report showing 2010 revenues we would assume the values reflect total revenues at the end of the year.  In line with this assumption, the default Table settings in iThink include reporting Ending balances, Summed flow values, and a report interval of one time step.

In a research setting, scientists tend to prefer reporting precise values at a particular time.   For this reason, the default Table settings in STELLA are configured to report Beginning balances, Instantaneous flow values, and a report interval of Every DT.

table default settings comparison

Table Default Settings Comparison
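To make the difference concrete, here is a small numerical sketch (my own illustration in Python, not something from the software) of the two reporting styles for a stock fed by a rising flow.  The DT of 0.25 and the one-month report interval are assumptions chosen just for this example.

dt = 0.25
stock = 0.0
time = 0.0
for month in range(1, 4):
    beginning_balance = stock                   # STELLA-style: value entering the interval
    summed_flow = 0.0
    while time < month:
        flow = 10.0 + 2.0 * time                # flow rate that rises over time
        summed_flow += flow * dt                # iThink-style: total volume over the interval
        stock += flow * dt
        time += dt
    ending_balance = stock                      # iThink-style: value leaving the interval
    instantaneous_flow = 10.0 + 2.0 * time      # STELLA-style: rate at the report time
    print(month, beginning_balance, ending_balance, instantaneous_flow, summed_flow)
# Month 1, for example, reports beginning 0.0 vs. ending 10.75, and instantaneous 12.0 vs. summed 10.75.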

STELLA or iThink

When choosing between STELLA and iThink, think about the kinds of models you intend to build and the problems you are looking to solve.  If your objective is to drive business improvement, chances are iThink will be a better fit.  If your purpose is to understand the dynamics of a natural environment or social system, STELLA will likely be your brand of choice.  Whatever you decide, both products provide exactly the same functionality and can easily be configured to suit your own preferences.

Using PEST to Calibrate Models

January 14th, 2011

There are times when it is helpful to calibrate, or fit, your model to historical data. This capability is not built into the iThink/STELLA program, but it is possible to interface with external programs to accomplish this task. One widely used program for calibrating models is PEST, freely available from www.pesthomepage.org. In this blog post, I will demonstrate how to calibrate a simple STELLA model using PEST on Windows. Note that this method relies on the Windows command line interface added in version 9.1.2 and will not work on the Macintosh. It also uses the export to comma-separated value (CSV) file feature, which was likewise added in version 9.1.2.

The model and all files associated with its calibration are available by clicking here.

The Model

The model being used is the simple SIR model first presented in my blog post Limits to Growth. The model is shown again below. There are two parameters: infection rate and recovery rate. Technically, the initial value for the Susceptible stock is also a parameter. However, since this is a conserved system, we can make an excellent guess as to its value and do not need to calibrate it.

SIR model diagram
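If you would like to tinker with the structure outside of STELLA, below is a minimal sketch of a standard SIR formulation with these two parameters, using simple Euler integration in Python.  The variable names, initial values, and equation forms are my own assumptions for illustration; they are not the exact equations of the STELLA model.

def simulate_sir(infection_rate, recovery_rate,
                 susceptible0=1_000_000, infected0=10, dt=0.25, weeks=13):
    # Standard SIR structure integrated with Euler's method (illustrative values).
    s, i, r = float(susceptible0), float(infected0), 0.0
    n = s + i + r                                 # conserved total population
    trajectory = []                               # (week, infected) pairs
    for step in range(int(weeks / dt)):
        infections = infection_rate * s * i / n   # people per week
        recoveries = recovery_rate * i            # people per week
        s -= infections * dt
        i += (infections - recoveries) * dt
        r += recoveries * dt
        trajectory.append(((step + 1) * dt, i))
    return trajectory
# Example: simulate_sir(infection_rate=1.5, recovery_rate=0.5) produces an
# epidemic curve that can be compared against the death data discussed below.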

The Data Set

We will calibrate this model to two data sets. The first is the number of weekly deaths caused by the Hong Kong flu in New York City over the winter of 1968-1969 (below).

Weekly deaths from the Hong Kong flu in New York City, winter 1968-1969

The second is the number of weekly deaths per thousand people in the UK due to the Spanish flu (H1N1) in the winter of 1918-1919 (shown later).

In both cases, I am using the number of deaths as a proxy for the number of people infected, which we do not know. This is reasonable because the number of deaths is directly proportional to the number of infected individuals. If we knew the constant of proportionality, we could multiply the deaths by this constant to get the number of people infected.
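As a rough illustration of the fitting step (not PEST’s actual mechanics, which work through its own template and instruction files), the snippet below scales the death data by an assumed constant of proportionality and measures the squared error between the data and the infected curve from the sketch above.  PEST, in effect, searches for the parameter values that minimize this kind of objective.  The death counts and the constant here are placeholders, not the real data set.

weekly_deaths = [10, 25, 60, 120, 180, 160, 110, 70, 40, 25, 15, 10, 8]  # placeholder values
def sum_squared_error(infection_rate, recovery_rate, deaths_per_infected=0.002):
    # Scale deaths up by an assumed proportionality constant, then compare
    # against the infected trajectory from the simulate_sir sketch above.
    observed_infected = [d / deaths_per_infected for d in weekly_deaths]
    modeled = simulate_sir(infection_rate, recovery_rate)
    weekly_modeled = [i for (t, i) in modeled if abs(t - round(t)) < 1e-9]
    return sum((m - o) ** 2 for m, o in zip(weekly_modeled, observed_infected))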

Read more…

Shifting the Burden

December 22nd, 2010

The Shifting the Burden Systems Archetype shows how attacking symptoms, rather than identifying and fixing fundamental problems, can lead to a further dependence on symptomatic solutions.  This Systems Archetype was formally identified in Appendix 2 of The Fifth Discipline by Peter Senge (1990).  The Causal Loop Diagram (CLD) is shown below.

Shifting the Burden Causal Loop Diagram

When a problem symptom appears, two options present themselves:  1) apply a short-term fix to the symptom, or 2) identify and apply a longer-term fix to the fundamental issue.  The second option is less attractive because it involves a greater time delay and probably additional cost before the problem symptom is relieved.  However, because a short-term fix relieves the problem symptoms sooner, applying it reduces the desire to identify and apply a more permanent fix.  Often the short-term fix also induces a secondary unintended side-effect that further undermines any efforts to apply a long-term fix.  Note that the short-term fix only relieves the symptoms; it does not fix the problem.  Thus, the symptoms will eventually re-appear and have to be addressed again.

Classic examples of shifting the burden include:

  • Making up lost time for homework by not sleeping (and then controlling lack of sleep with stimulants)
  • Borrowing money to cover uncontrolled spending
  • Feeling better through the use of drugs (dependency is the unintended side-effect)
  • Taking pain relievers to address chronic pain rather than visiting your doctor to try to address the underlying problem
  • Improving current sales by focusing on selling more product to existing customers rather than expanding the customer base
  • Improving current sales by cannibalizing future sales through deep discounts
  • Firefighting to solve business problems, e.g., slapping a low-quality – and untested – fix onto a product and shipping it out the door to placate a customer
  • Repeatedly fixing new problems yourself rather than properly training your staff to fix the problems – this is a special form known as “shifting the burden to the intervener” where you are the intervener who is inadvertently eroding the capabilities and confidence of your staff (the unintended side-effect)
  • Outsourcing core business competencies rather than building internal capacity (also shifting the burden to the intervener, in this case, to the outsource provider)
  • Implementing government programs that increase the recipient’s dependency on the government, e.g., welfare programs that do not attempt to simultaneously address unemployment or low wages (also shifting the burden to the intervener, in this case, to the government)

Read more…

The Politics of Economic Recovery

December 3rd, 2010

Editor’s Note: This is a guest post from isee’s training and consulting partner, Corey Peck of Lexidyne LLC.

The mid-term elections are now a month behind us and the political airwaves are still abuzz with commentary about the results.  Exit polls showed that unemployment was at the top of most voters’ list of issues, and that concerns about the federal government’s financial condition (record deficits and debt levels) were a hot topic as well.  Voters appeared to be asking “How can the federal government spend so much money and have so little positive impact on the nation’s economy?” 

The responses by politicians to such an important question are all over the map.  Democrats are claiming that economic conditions would have been much worse if not for massive federal bailouts and stimulus spending.  Republicans are touting the situation as a death knell for the Obama platform in an effort to position themselves for 2012.  And the Tea Party movement has emerged to push for a roll-back of what they see as an intrusive and ineffective “Big Government”.

But this political posturing reminds me that one of the true strengths of Systems Thinking is that it forces people to think very clearly and very operationally about the structure/behavior link embedded in such cases.  A little over a year ago, we sat down with Dr. Mark Paich, who used some very simple stock/flow language and some well-established principles of macroeconomics to lay out some relevant dynamics about the economic crisis and its aftermath:

  • Why the collapse of the housing market made consumers re-evaluate their net asset position and hence start saving more of their incomes to pay off high-interest credit card debt.
  • How such actions on the part of consumers, in aggregate, kicked off a vicious cycle of decreased spending and contracting national output.
  • Why government stimulus spending could close some, but not all, of the gap left by suddenly thrifty consumers, and why the recovery was likely to be a long, slow one.

We certainly don’t know how the future will play out, but the data suggest that consumers are indeed cutting back on spending and paying off debt.  (The Bureau of Economic Analysis has terrific historical data on household balance sheets and income.)  The unemployment numbers remain stubbornly high (around 9.5%), and although the recession is technically over, few economists are predicting rapid post-crisis economic expansion.

For a bit of clarity amidst all the rhetoric, you may want to check out Mark’s video offering.  His model and associated explanation do not provide a “magic bullet” of a solution, but they do provide some substance (and perhaps insight) to this vexing situation.  Now if only the politicians could follow suit!

To read a previous blog post about Modeling the Economic Crisis or view a 5-minute video trailer, click here.

What are “Mental Models”? Part 2

November 3rd, 2010

Editor’s note:  This post is part two of a two part series on mental models.  You can read the first post by clicking here.

In part one of this series I stated “A mental model is a model that is constructed and simulated within a conscious mind.”  A key part of this definition is that mental models are not static; they can be played forward or backward in your mind like a video player playing a movie.  But even better than a video player, a mental model can be simulated to various outcomes, many times over, by changing the assumptions.

Mental Simulation

Child reaching toward hot stove

Remember the example from part one of the child reaching for the hot stove?  One possible outcome we can simulate is that the child does not get burned.  We can simulate this outcome by altering our assumptions.  We could include a parent in the room who rescues the child in the nick of time.  Or, we could simulate the child slipping just before reaching the stovetop because the hardwood floor appears slippery.  This kind of mental simulation allows us to evaluate what may happen, given different conditions, and inform our decision making.  We don’t have to make any decisions while looking at the picture, but imagine what actions you might take if the scene above were actually unfolding in front of you.

It seems effortless to mentally simulate these types of mental models.  Most of the time we are not even aware that we are doing it.  But other times, it becomes very obvious that our brain is working rather hard.  For example, looking at the chess board below, can you determine if the configuration is a checkmate?

Chess board

It is indeed.  But I’ll bet it took noticeably more effort for you to mentally simulate the chess game than it did with the child near the stove scenarios.  Think about the mental effort that the players make trying to simulate the positions on the board just a few moves ahead in the game.

The paper “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information” by G.A. Miller (1956) established that people can generally hold seven objects (numbers, letters, words, etc.) simultaneously within their working memory.  Think of “working memory” as you would think of memory in a computer.  It’s like the amount of RAM we have available to perform computations within our mind.  And it’s not very much.  This means if people want to do any really complex information processing, they’ll need some help.  Over the last 50 years or so, the help has come from computers.  (In fact, IBM designed a computer specifically for playing chess, dubbed ‘Deep Blue’).

Digital computers have catapulted humankind’s ability to design, test, and build new technology to unbelievable levels in a relatively short period of time.  Space exploration, global telecommunication, and modern health care technology would not have been possible without the aid of computers.  We are able to perform the computation required to simulate complex systems using a computer instead of our minds.  Running simulations with a computer is faster and more reliable.

What makes a model useful?

Models that we can simulate using computers come in many forms.  For example, a model could be a financial model in a spreadsheet, an engineering design rendered with a CAD program, or a population dynamics model created with STELLA.  But what makes any of these models useful?  Is it the model’s results?  Its predictions?  I think the ability to explain the results is what makes a model truly useful.

Models are tools that can contribute to our understanding and decision making processes.  To make decisions, a person needs to have some understanding of the system the model represents.  A business finance model, for example, can be a useful tool if you understand how the business works.

Consider a model that does not provide any explanatory content, only results.  This type of model is often referred to as a black box.  It gives you all the answers, but you have no idea how it works.  People rarely trust these types of models and they are often not very useful for generating understanding.

Mental model of the learning process

The most useful models are structured so that the model itself will provide an explanatory framework that enables someone to ask useful questions of it.  Those questions may be answered by experimenting with the model (simulating) which, in turn, can help deepen a person’s understanding of the system.

This is an important feedback loop in a person’s learning process.  This feedback loop can be accelerated if the model provides explanations and can be simulated with a computer.

Transforming your mental models into visual models that are easier to understand and experiment with will deepen your understanding and help you communicate your models more effectively.

Read more…

2010 isee User Conference “Making Connections”

October 21st, 2010

Connecting at the welcome reception

Earlier this month, we had the pleasure of hosting the 2010 isee User Conference in Providence, Rhode Island. During this amazing gathering of isee customers, partners, friends, and iThink/STELLA enthusiasts, we learned about the important work that is being done applying Systems Thinking to solve real-world problems, shared ideas, and made connections with one another.

For two and a half days, I saw participants immersed in keynote presentations, breakout sessions, and hands-on workshops.  As important, however, were the less structured round table discussions, poster presentations, and social activities.  Everywhere I looked, folks were engaged in conversation and connecting with one another.  Even better, I knew that many of those connections would continue to be fostered and developed well after the conference was over.

Presentations and models

Conference presentations and models are now available on the isee systems web site.  Please download and share these materials with your colleagues and friends. You’ll get a glimpse into the wide range of fields where Systems Thinking is being used to better understand the interconnections of dynamic systems, including business, healthcare, education, energy, and the environment.  You can also download and listen to audio recordings of the keynote presentations.

A Conference Highlight

Steve Peterson describes the modeling process

One of the highlights of the conference for me was listening to the story that Steve Peterson and Paul Bothwell told about using Systems Thinking and dynamic modeling to help communities in high-violence Boston neighborhoods.

In their work with the Youth Violence Systems Project, there were two objectives:

  • Improve understanding of community-based violence in Boston
  • Help communities strategize and achieve sustained reductions in violence

What made this project different from other attempts to research and solve the youth violence problem in Boston was that it engaged “the community” in the development of the model.  From the get-go, they included youth in the modeling process.  Gang members, in particular, turned out to be an important missing link to understanding violence and the dynamic system behavior.  Both Steve and Paul described some of the harsher realities of working with young people whose family members and friends were victims of violence.  The modeling process actually helped community members to articulate the “slippery slope dynamics” that move youth through the different stocks to gang involvement.  If you have a chance, I highly recommend listening to the audio recording!

Staying Connected

Participants engaged in round table discussions

Having time to interact with other participants was an important part of the conference experience.  It was wonderful to see the excitement and energy that is created when Systems Thinkers have an opportunity to connect with one another.  The cross-fertilization that occurs so naturally between field experts, modelers, and educators was inspiring.

Please stay connected and let us know if there are other ways in which we can foster our growing community of STELLA and iThink modelers!

System Dynamics Conference in Seoul

August 10th, 2010

isee systems is proud to have sponsored the 28th International System Dynamics Conference held in Seoul, Korea last month.  We especially enjoyed supporting the conference again this year through the Barry Richmond Scholarship Award.  The scholarship was established in 2007 to honor and continue the legacy of our company founder, Barry Richmond.  Barry was devoted to helping others become better “Systems Citizens”.  It was his mission to make Systems Thinking and System Dynamics accessible to people of all ages and in all fields.

Presenting the scholarship in Seoul was isee’s longtime consulting and training partner, Mark Heffernan.  Mark had this story to tell about Barry:

“I first met Barry 20 years ago, when I had to trudge through the snow to get to his small wooden office.  I was building a discrete event model using STELLA and I wanted him to make some changes to the software so I didn’t have these “egg timer” structures everywhere.  Barry was horrified with what I had done with his software and said words to the effect that it’s not meant for that; it was created to spread the gospel of System Dynamics.  Despite the fact that I was a civil engineer, he encouraged me to take a look at SD.  Such was his passion and conviction that 20 years later I’m still attending this conference.”

Tony Phuah accepts Scholarship Award from Mark Heffernan

Through most of his career Barry saw education as the key to spreading Systems Thinking.  As a teacher and a mentor, he dedicated much of his time to developing tools and methodologies for learning.  It is fitting therefore that this year’s award was presented to Tony Phuah, a Master’s student in System Dynamics at the University of Bergen.

Tony’s work includes an experimental study that explores the question: How can we improve people’s understanding of basic stock and flow behavior?  His experiment uses two different methods for teaching stock and flow behavior — the standard method (using graphical integration) and a method he calls “running total”.  Tony presented his paper at a parallel session during the conference and it can be downloaded by clicking here.  Although the results of his study favor traditional methods for teaching stock and flow behavior, we all should be encouraged by the work being done to try to improve Systems Thinking education and communication.  In Tony’s own words:

“Speeding up ‘Systems Thinkers beget more Systems Thinkers’ growth will make us one step closer to Barry Richmond’s vision of a systems citizen world.”

Congratulations Tony and thank you Mark for helping us to celebrate Barry’s passion!

Applications for the 2011 Barry Richmond Scholarship Award will be available on the isee systems and System Dynamics Society web sites.  Check those sites for more information.

What is Delta Time (DT)?

August 3rd, 2010

After reading Karim Chichakly’s recent post on Integration Methods and DT, I was reminded that delta time (DT) has always been a tricky modeling concept for me to grasp.   Beginning modelers don’t usually need to think about changing DT since STELLA and iThink set it to a useful default value of 0.25.   But once you progress with your modeling skills, you might consider the advantages and risks of playing with DT.

The DT setting is found in the Run Specs menu.

By definition, system dynamics models run over time, and DT controls how frequently calculations are applied within each unit of time.  Think of it this way: if your model were a movie, DT would be the time interval between still frames in the strip of movie film.  For a simulation over a period of 12 hours, a DT of 1/4 (0.25) would give you a single frame every 15 minutes.  Lowering the DT to 1/60 would give a frame every minute.  The smaller the DT, the higher the calculation frequency (1/DT).

Beware of the Extremes

A common tendency for modelers is to set the calculation frequency too high.  Without thinking too hard about it, it’s easy to assume that more data implies a higher-quality model – just as more frames in movie film make for smoother motion.  If your model calculates more data for every time unit, its behavior will begin to resemble the behavior of a smoothly continuous system.  But a higher frequency of calculations can greatly slow down your model’s run performance, and more data does not directly translate to a better simulation.

Beware of Discrete Event Models

Another situation where DT can often lead to unexpected behavior is with models that depend on discrete events.   My eyes were opened to this when I attended one of isee’s workshops taught by Corey Peck and Steve Peterson of Lexidyne LLC.

One of the workshop exercises involved a simple model where the DT is set to the default 0.25, the inflow is set to a constant 10, and the outflow is set to flush out the stock’s contents as soon as it reaches 50.   This is how the model’s structure and equations looked:

Discrete Model

Stock = 0

inflow = 10

outflow = IF Stock >= 50 THEN 50 ELSE 0

I would have expected the value of the stock to plunge to zero after it reached or exceeded 50, but this graph shows the resulting odd saw-tooth pattern.

Sawtooth Model Behavior

The model ends up behaving like a skipping scratched record, in a perpetual state of never progressing far enough to reach the goal of zero.  (Click here to download the model.)

What is happening in the model?  In the first DT after the stock’s value reaches exactly 50, the outflow sets itself to 50 in order to remove the contents from the stock.  So far so good, but now the DT gotcha begins to occur.  Since the outflow works over time, its value is always per time.  To get the quantity of material that actually flowed, you must multiply the outflow value (or rate) by how long the material was flowing.  When DT is set to 0.25, the material flows for 0.25 time units each DT.  Hence, the quantity of material removed from the stock is 50*0.25 = 12.50.

Suddenly we are in a situation where only 12.50 has been removed from the stock but the stock’s value is now less than 50.  Since the stock is no longer greater than or equal to 50, the outflow sets itself back to 0 and never actually flushes out the full contents of the stock. 
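If you would like to see this arithmetic play out outside of STELLA, here is a small Python sketch that mirrors the equations above with a plain Euler loop and reproduces the saw-tooth.  It illustrates the mechanism; it is not the STELLA engine itself.

dt = 0.25
stock = 0.0
for _ in range(80):                            # 20 time units at 4 steps per unit
    inflow = 10.0                              # units per time
    outflow = 50.0 if stock >= 50 else 0.0     # units per time
    stock += (inflow - outflow) * dt           # only outflow*dt = 12.5 ever leaves
# stock climbs to 50, drops back to 40, climbs again - it never gets close to 0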

So what do we do?  One solution to this problem would be to use the PULSE built-in to remove the full value from the stock.   Here’s what the equation for the outflow would look like:

outflow = IF Stock >= 50 THEN PULSE(Stock) ELSE 0

(Note: This option will only work using Euler’s integration method.)
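In the sketch above, the equivalent fix is to turn the stock’s full contents into a rate for that single DT (Stock divided by DT), which is effectively what PULSE(Stock) does under Euler’s method.  Again, this mirrors the idea rather than reproducing the built-in exactly.

dt = 0.25
stock = 0.0
for _ in range(80):
    inflow = 10.0
    outflow = (stock / dt) if stock >= 50 else 0.0   # rate that drains the stock in one DT
    stock += (inflow - outflow) * dt                 # drops to 2.5 (just that DT's inflow), not 40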

Further Reading

STELLA and iThink have great help documentation on DT.  The general introduction provides a good explanation of how DT works. The more advanced DT Situations Requiring Special Care section focuses more on artifactual delays and the discrete model issues mentioned in this post.  Delta time and resulting model behaviors are reminders that system dynamics models run over time, but they achieve this by applying numerous discrete calculations in order to simulate the smooth behavior of actual systems.

Categories: Modeling Tips