Author Archive

Drifting Goals

March 9th, 2016 No comments

The Drifting Goals Archetype applies to situations where short-term solutions lead to the deterioration of long-term goals.  Also known as Eroding Goals, this is a special case of Shifting the Burden.  This Systems Archetype was formally identified in Appendix 2 of The Fifth Discipline by Peter Senge (1990).  The Causal Loop Diagram (CLD) is shown below.

[Figure: Drifting Goals causal loop diagram]

When a gap exists between the current state of the system and our goal (or desired state), we take action proportional to the gap to move the system state toward our goal.  There is always a delay between the action we take and the effect on the system.  Simultaneously, pressure is exerted to instead adjust the goal to close the gap.  Adjusting the goal leads to a situation where the goal floats independently of any standard.  It often leads to goals being reduced, or eroded.
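To make the loop concrete, below is a minimal difference-equation sketch in Python.  The parameter values and the goal-adjustment rule are illustrative assumptions, not part of Senge's diagram; the point is only to show the system state rising while the goal erodes.

# Minimal sketch of the Drifting Goals loops (all parameters assumed for illustration).
state, goal = 50.0, 100.0
action_fraction = 0.10    # fraction of the gap closed by corrective action each step (assumed)
erosion_fraction = 0.05   # fraction of the gap "closed" by lowering the goal each step (assumed)

for step in range(1, 31):
    gap = goal - state
    state += action_fraction * gap    # corrective action moves the system toward the goal
    goal -= erosion_fraction * gap    # pressure on the goal erodes it toward the system state
    if step % 10 == 0:
        print(f"step {step:2d}: state = {state:5.1f}, goal = {goal:5.1f}")

Run long enough, the state and the goal meet somewhere below the original goal of 100.  That settling point below the original standard is the erosion the archetype describes.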

Classic examples of drifting goals include:

  • Reducing quality targets to improve measured quality performance (relative to goal) or to improve delivery schedule
  • Reducing quality of ingredients or parts below company standards to improve profits
  • Increasing time to deliver to match existing capacity and save on overtime
  • Reducing a new product’s feature set to meet deadlines; this works the other way also, i.e., extending the deadline to include all of the features
  • Reducing pollution targets when reduction implementation costs are too high
  • Increasing budget deficit limits rather than decreasing spending (or increasing taxes)
  • Adapting to unacceptable social circumstances rather than leaving that environment
  • Reducing entrance requirements because not enough applicants meet them
  • Reducing level of patient care below recommended minimum due to understaffing
  • Reducing margin to spur sales and meet revenue targets
  • Lowering your own expectations in life, leading to lower personal success

Note that in many of these cases, there are competing goals and one is held more sacred than another.  Drifting Goals is an insidious process that lowers your standards to the level of the current state of the system.  Stay aware not only of how the state of the system adjusts to your goal, but also of how your goal varies over time.  Changing a goal should be a conscious decision that does not undermine other objectives.

Read more…

Generating Random Numbers from Custom Probability Distributions

May 29th, 2014 No comments

STELLA® and iThink® provide many useful probability distribution functions (listed here).  However, sometimes you need to draw random numbers from a different probability distribution, perhaps one you have developed yourself.  In these cases, it is possible to invert the cumulative probability distribution and use a uniformly distributed random number between zero and one (using the RANDOM built-in) to draw a number from the intended distribution.  With a lot of math, this can be done analytically (briefly described here).  With no math at all, it can be closely approximated using the graphical function.
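As a small illustration of the analytic route, the exponential distribution has the CDF F(x) = 1 - exp(-lam*x), which inverts to x = -ln(1 - u)/lam.  The Python sketch below (with an assumed rate of 0.5) shows the inverse-transform idea; inside STELLA or iThink, the uniform draw would come from the RANDOM built-in rather than random.random().

import math
import random

lam = 0.5   # rate parameter, assumed for illustration

def draw_exponential(lam):
    # Invert the CDF 1 - exp(-lam*x):  x = -ln(1 - u)/lam for uniform u in [0, 1)
    u = random.random()
    return -math.log(1.0 - u) / lam

samples = [draw_exponential(lam) for _ in range(100_000)]
print(sum(samples) / len(samples))   # should be close to the true mean, 1/lam = 2.0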

Find the Cumulative Distribution Function

Every probability distribution has a probability density function (PDF) that relates each value to its likelihood of occurring.  The most famous continuous PDF is the bell curve for the normal distribution:

[Figure: the bell curve, the PDF of the normal distribution]

From the PDF, we can see that the probability density at 100 is just under 0.09, while the density at 88 or 112 is close to zero.  Note that applying the techniques described in this article to a continuous probability distribution will only approximate that distribution.  The accuracy of the approximation will be determined by the number of data points included in the graphical function.

For discrete probability functions, the PDF resembles a histogram:

[Figure: a discrete PDF drawn as a histogram]

From this PDF, we can see that the probability of randomly drawing 1 is 0.4, while the probability of drawing 3 is 0.15.  As discrete probability distributions can be represented exactly within graphical functions, the remainder of this article will focus on them.
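The graphical-function approach boils down to a table lookup on the cumulative distribution.  A minimal Python sketch of the same lookup is below.  Only the probabilities 0.4 (for 1) and 0.15 (for 3) come from the histogram above; the probabilities for 2 and 4 are assumed so that everything sums to one.

import random

# Discrete PDF as value -> probability (the values for 2 and 4 are assumed, see above).
pdf = {1: 0.40, 2: 0.25, 3: 0.15, 4: 0.20}

# Build the cumulative distribution as (value, cumulative probability) pairs.
cdf = []
running = 0.0
for value, p in sorted(pdf.items()):
    running += p
    cdf.append((value, running))

def draw(cdf):
    # Invert the CDF: return the first value whose cumulative probability covers u.
    u = random.random()
    for value, cum_p in cdf:
        if u < cum_p:
            return value
    return cdf[-1][0]

counts = {v: 0 for v in pdf}
for _ in range(100_000):
    counts[draw(cdf)] += 1
print({v: counts[v] / 100_000 for v in sorted(counts)})   # close to the PDF above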

Read more…

Generating Custom Reports Using XMILE

September 4th, 2013 No comments

XMILE is an open standard for describing system dynamics models in XML.  Version 10 of iThink and STELLA output their models in the XMILE format.  One of the advantages of XML is that it is a text-based format that can be easily queried and manipulated.  This post will show you how to use XMLStarlet, a free command-line XML management tool available for Windows, Macintosh, and Linux, to easily extract information from an XMILE model.  It will also demonstrate how to modify the XSLT style sheet generated by XMLStarlet to create custom HTML reports.

Our goal is to create a report that lists the stocks, flows, and converters in the susceptible-infected-recovered (SIR) model of infection shown below (available by clicking here).  Each model variable will be listed with its equation, and the list will be sorted by name.

[Figure: the SIR model diagram]

XMLStarlet uses the select command (sel) for making queries to an XML file and formatting the results.  We will use all of the following select command options:

-t (template): define a set of rules (below) to be applied to the XML file
-m “XPath query” (match): find and select a set of nodes in the XML file
-s <options> “XPath expression” (sort): sort selected nodes by XPath expression
-v “XPath expression” (value): output value of XPath expression
-o “text” (output): output the quoted text
-n (newline): start a new line in the output

Reporting Stock Names

Let’s start by outputting the names of the stocks in the model.  In a XMILE file, stocks are identified by the <stock> tag, which is nested inside the <xmile> and <model> tags:

<xmile …>
   <model>
      <stock name="Infected">
         <eqn>1</eqn>
      </stock>
   </model>
</xmile>

There is one <stock> tag for every stock in the model and each stock has, at a minimum, both a name (in the “name” attribute) and an initialization equation (in the <eqn> tag).  To get the names of all stocks in the model, we can build a template using these XMLStarlet command options:

sel -t -m "_:xmile/_:model/_:stock" -v "@name" -n

The "sel" chooses the select command and the -t begins the template (the set of rules used to extract and format information from the XML file).  The -n at the end puts each stock name on its own line.

The -m option defines the XML path to any stock from the root.  In this case, the -m option is selecting all the XML nodes named stock (i.e., <stock> tags) that are under any <model> tags in the <xmile> tag.  From the XMILE file, one might expect the XML path to be "xmile/model/stock", but the tags in the XMILE file are in the XMILE namespace, and XPath, which is being used for this query, requires namespaces to be explicitly specified.  Luckily, XMLStarlet, starting in version 1.5.0, allows us to use "_" for the name of the namespace used by the XML file, in this case the XMILE namespace.  Thus, every XMILE name in a query must be preceded by "_:".

Finally, the -v option allows us to output the name of each node selected with -m (stocks, in this case).  The "@" tells XPath that "name" is an attribute, not a tag, i.e., it is of the form name="…" rather than <name>…</name>.

To build a full command, we need to add the path to XMLStarlet to the beginning and the name of the XML file being queried to the end:

XMLStarlet_path/xml <options above> SIR.stmx

The entire command without the path to XMLStarlet is:

xml sel -t -m "_:xmile/_:model/_:stock" -v "@name" -n SIR.stmx

This command produces the following output:

Infected
Susceptible
Recovered
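If you prefer a script to the command line, the same query can be written with Python's standard-library ElementTree module.  This is only a sketch that assumes the file layout shown above (stocks directly under model, under xmile); it reads the namespace URI off the root tag rather than hard-coding it, since the draft and final XMILE namespaces differ.

import xml.etree.ElementTree as ET

tree = ET.parse("SIR.stmx")
root = tree.getroot()   # the <xmile> element

# The tags are namespaced, so extract the namespace URI from the root tag.
ns_uri = root.tag[1:].split("}")[0] if root.tag.startswith("{") else ""
ns = {"x": ns_uri}

# Equivalent of -m "_:xmile/_:model/_:stock" -v "@name"
for stock in root.findall("x:model/x:stock", ns):
    print(stock.get("name"))

This prints the same three stock names as the XMLStarlet command above.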

Read more…

XMILE – An open standard for system dynamics models

July 19th, 2013 No comments

In June, isee systems and IBM sponsored a new technical committee in OASIS, a large standards organization. This committee is developing a new system dynamics modeling standard called XMILE. This blog post will answer some important questions about XMILE.

1. What is XMILE?

XMILE is an open XML protocol for the sharing, interoperability, and reuse of system dynamics models and simulations.

2. What’s the difference between XMILE and SMILE?

XMILE is the XML representation of a system dynamics model. SMILE is the underlying system dynamics language that is represented in XML using XMILE. In this way, it is very similar to the DYNAMO language originally used to create system dynamics models. SMILE could eventually be encoded using something other than XML.

3. How does XMILE benefit iThink and STELLA users?

There are several immediate benefits to iThink and STELLA users:

  • XML files can be reformatted and styled with XSLT files. There are programs available that generate reports directly from XML files.
  • Model files can be examined and edited in a text editor, facilitating searches and simple replaces.
  • Because XMILE is a text file format, proper versioning of model files, showing meaningful differences between revisions, can be done with version control software such as SVN and Git.
  • Because XMILE is textual, platform-neutral, and descriptive, rather than a binary representation of the implementation, it is more resilient to possible file corruption.
  • As the standard becomes more widely adopted, additional benefits will include a broader market for models and the ability to share models with colleagues working in different modeling software packages.

4. How will the adoption of the XMILE standard benefit the field of system dynamics?

The benefits of this standard are:

  • System dynamics models can be re-used to show how different policies produce different outcomes in complex environments.
  • Models can be stored in cloud-based libraries, shared within and between organizations, and used to communicate different outcomes with common vocabulary.
  • Model components can be re-used and plugged into other simulations.
  • It will allow the creation of online repositories of models covering many common business decisions.
  • It will increase acceptance and use of system dynamics as a discipline.
  • It will help independent software vendors (ISVs) build new tools that help businesses develop and understand models and simulations.
  • It will enable vendors to develop standards-based applications for new markets such as mobile and social media.

5. What is the connection to Big Data?

XMILE opens up system dynamics models to a broader audience and to new uses, including embedding models within larger processes. System dynamics models provide a new way to analyze Big Data, especially when live data streams are pulled into a running model to determine, in real time, how our decisions affect future outcomes, and hopefully to avoid unintended consequences of our actions. Note, however, that the presence or addition of Big Data does not automatically lead to large, complicated models. You do not have to create giant models just because you have a lot of data. We're aggregating the data and looking at it in a more homogeneous way, so the models can stay relatively understandable.

6. Can I adapt existing iThink and STELLA models to XMILE?

All of the isee systems products (version 10 and later) already use the XMILE standard in its draft form. As the standard evolves, isee systems products will be updated to meet the changing standard, and your models will be translated forward so they remain XMILE-compatible.

7. Do you plan to extend XMILE to include discrete event or agent-based simulations?

XMILE focuses on the language of classic system dynamics, rooted in DYNAMO. While we anticipate that the language will expand to include both discrete event simulation and agent-based modeling, version one of the XMILE specification is restricted to system dynamics modeling.

8. Could you show an example of how XMILE is used in a model?

XMILE is used to describe the model and is the format used for saving it. A model snippet is shown below with the XMILE that completely describes both its simulation and its drawing properties (in the display tag).

[Figure: a model snippet and the XMILE text that describes it]

9. A big part of system dynamics is graphical. Will XMILE include this part of models?

Yes, all graphical information is stored within the display tag, as shown in the earlier example.

10. Why would you want to store visual layout in XMILE? Why not separate structure from layout?

The structure is actually separate from the layout in the XML file. All visual information is embedded within display tags and can be ignored. XMILE defines three separate levels of compliance, with the lowest level being simulation information only (i.e., structure). A model does not need to include display information and any application is free to ignore it.

11. Will XMILE include data from running the model?

XMILE only represents the model structure, so no data is included.

12. Where can I get more information?

The OASIS technical committee for XMILE maintains a public record at https://www.oasis-open.org/committees/xmile/. This page is regularly updated with new information.

The draft standard can be found in these two documents:

http://www.iseesystems.com/community/support/SMILEv4.pdf
http://www.iseesystems.com/community/support/XMILEv4.pdf

In addition, isee systems maintains a web page, http://www.iseesystems.com/community/support/XMILE.aspx, that will be updated periodically with new information about XMILE.


Working with Array Equations in Version 10

December 17th, 2012 3 comments

STELLA/iThink version 10 introduces several new array features, including simplified and more powerful Apply-To-All equations that are designed to reduce the need to specify equations for every individual element.

Dimension names are optional

When an equation is written using other array names, the dimension names are not normally needed.  For example, given arrays A, B, and C, each with the dimensions Dim1 and Dim2, A can be set to the sum of B and C with this equation:

B + C

Dimension names are still needed when the dimensions do not match.  For example, to also add in the first 2-dimensional slice of the 3-dimensional array D[Dim1, Dim2, Dim3], the equation becomes:

B + C + D[Dim1, Dim2, 1]

The wildcard * is optional

When an array builtin is used, the * is normally not needed.  For example, finding the sum of the elements of a 2-dimensional array A[Dim1, Dim2] requires only this equation:

SUM(A)

If, however, the sum of only the first column of A is desired, the * is still needed:

SUM(A[*, 1])
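For readers who know array programming outside of STELLA, the NumPy analogy below may help.  It is only an analogy (NumPy is zero-based and uses : where STELLA uses *), not how the software evaluates these equations, and the array shapes are assumed for illustration.

import numpy as np

# Stand-ins for B[Dim1, Dim2], C[Dim1, Dim2], and D[Dim1, Dim2, Dim3]
B = np.arange(6.0).reshape(2, 3)
C = np.ones((2, 3))
D = np.arange(12.0).reshape(2, 3, 2)

A = B + C                      # STELLA: B + C
A2 = B + C + D[:, :, 0]        # STELLA: B + C + D[Dim1, Dim2, 1]

total = A.sum()                # STELLA: SUM(A)
first_column = A[:, 0].sum()   # STELLA: SUM(A[*, 1])
print(total, first_column)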

Simplified array builtins

There are five array builtins:  SIZE, SUM, MEAN, STDDEV, and RANK.  In addition, the MIN and MAX functions have been extended to take either one or two array arguments.  All but RANK can also be applied to queues and conveyors.

SUM, MEAN, and STDDEV all work in a similar way (see examples of SUM above).

Using the MAX function, it is possible to find the maximum value in array A,

MAX(A)

the maximum value in array A, or zero if everything is negative,

MAX(A, 0)

or the maximum across two arrays A and B,

MAX(A, B)

MIN works the same way, but finds the minimum.

The SIZE function requires an array parameter, but within an arrayed variable's equation, the special name SELF can be used to refer to the array whose equation is being set.  In addition, wildcards can be used to determine the size of any array slice.  In the equation for array A[Dim1, Dim2],

SIZE(SELF)

gives the total number of elements in array A while

SIZE(SELF[*, 1])

gives the size of the first dimension of A, i.e., the number of elements – or rows – in the first column.  Likewise,

SIZE(SELF[1, *])

gives the size of the second dimension of A, i.e., the number of elements – or columns – in the first row.
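Continuing the NumPy analogy from above (again an analogy, not the STELLA implementation), the builtins described in this section correspond roughly to:

import numpy as np

A = np.arange(6.0).reshape(2, 3)   # stand-in for A[Dim1, Dim2]
B = np.full((2, 3), 2.5)

print(A.max())                 # MAX(A): largest element of A
print(max(A.max(), 0.0))       # MAX(A, 0): largest element of A, or 0 if all are negative
print(max(A.max(), B.max()))   # MAX(A, B): largest element across A and B
print(A.size)                  # SIZE(SELF): total number of elements
print(A[:, 0].size)            # SIZE(SELF[*, 1]): number of rows (size of Dim1)
print(A[0, :].size)            # SIZE(SELF[1, *]): number of columns (size of Dim2)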

Read more…

Using PEST to Calibrate Models

January 14th, 2011 21 comments

There are times when it is helpful to calibrate, or fit, your model to historical data. This capability is not built into the iThink/STELLA program, but it is possible to interface to external programs to accomplish this task. One generally available program to calibrate models is PEST, available freely from www.pesthomepage.org. In this blog post, I will demonstrate how to calibrate a simple STELLA model using PEST on Windows. Note that this method relies on the Windows command line interface added in version 9.1.2 and will not work on the Macintosh. The export to comma-separated value (CSV) file feature, added in version 9.1.2, is also used.

The model and all files associated with its calibration are available by clicking here.

The Model

The model being used is the simple SIR model first presented in my blog post Limits to Growth. The model is shown again below. There are two parameters: infection rate and recovery rate. Technically, the initial value for the Susceptible stock is also a parameter. However, since this is a conserved system, we can make an excellent guess as to its value and do not need to calibrate it.

[Figure: the SIR model diagram]

The Data Set

We will calibrate this model to two data sets. The first is the number of weekly deaths caused by the Hong Kong flu in New York City over the winter of 1968-1969 (below).

[Figure: weekly deaths from the Hong Kong flu in New York City, winter of 1968-1969]

The second is the number of weekly deaths per thousand people in the UK due to the Spanish flu (H1N1) in the winter of 1918-1919 (shown later).

In both cases, I am using the number of deaths as a proxy for the number of people infected, which we do not know. This is reasonable because the number of deaths is directly proportional to the number of infected individuals. If we knew the constant of proportionality, we could use it to convert deaths into the number of people infected.
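PEST drives the model from the outside, but the idea behind calibration, adjusting the parameters until the mismatch between model output and data is as small as possible, can be sketched directly in Python. The snippet below is not the PEST workflow described in this post; it fits a hand-rolled, Euler-stepped SIR model to made-up weekly death counts with scipy.optimize, purely to show what the objective looks like. The population size, starting values, and data are all assumptions.

import numpy as np
from scipy.optimize import minimize

# Made-up weekly death counts standing in for the real data set.
observed_deaths = np.array([1, 3, 8, 20, 45, 80, 110, 105, 75, 45, 25, 12, 6], dtype=float)
n_weeks = len(observed_deaths) - 1

def simulate(infection_rate, recovery_rate, n_weeks, population=7_900_000.0, dt=0.125):
    # Euler-integrate a simple SIR model; return the Infected stock at each week.
    susceptible, infected = population - 10.0, 10.0    # Recovered follows by conservation
    weekly_infected = [infected]
    for _ in range(n_weeks):
        for _ in range(int(round(1.0 / dt))):
            infections = infection_rate * susceptible * infected / population
            recoveries = recovery_rate * infected
            susceptible -= infections * dt
            infected += (infections - recoveries) * dt
        weekly_infected.append(infected)
    return np.array(weekly_infected)

def sum_squared_error(params):
    infection_rate, recovery_rate, deaths_per_infected = params
    predicted = deaths_per_infected * simulate(infection_rate, recovery_rate, n_weeks)
    return float(np.sum((predicted - observed_deaths) ** 2))

# Nelder-Mead only needs objective values, much as PEST only needs model runs.
best = minimize(sum_squared_error, x0=[1.5, 0.5, 0.001], method="Nelder-Mead")
print(best.x)   # fitted infection rate, recovery rate, and deaths-per-infected constant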

Read more…

Shifting the Burden

December 22nd, 2010 3 comments

The Shifting the Burden Systems Archetype shows how attacking symptoms, rather than identifying and fixing fundamental problems, can lead to a further dependence on symptomatic solutions.  This Systems Archetype was formally identified in Appendix 2 of The Fifth Discipline by Peter Senge (1990).  The Causal Loop Diagram (CLD) is shown below.

[Figure: Shifting the Burden causal loop diagram]

When a problem symptom appears, two options present themselves:  1) apply a short-term fix to the symptom, or 2) identify and apply a longer-term fix to the fundamental issue.  The second option is less attractive because it involves a greater time delay, and probably additional cost, before the problem symptom is relieved.  However, because the short-term fix relieves the problem symptom sooner, applying it reduces the desire to identify and apply a more permanent fix.  Often the short-term fix also induces a secondary, unintended side-effect that further undermines any effort to apply a long-term fix.  Note that the short-term fix only relieves the symptoms; it does not fix the underlying problem.  Thus, the symptoms will eventually reappear and have to be addressed again.
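A minimal Python sketch of these loops is below.  The equations and parameters are illustrative assumptions, not Senge's diagram; they are only meant to show the symptom being held down by the quick fix while the capacity for the fundamental fix slowly erodes.

# Illustrative sketch of Shifting the Burden (all parameters assumed).
symptom = 100.0
capability = 1.0            # capacity to apply the fundamental fix
quick_fix_fraction = 0.4    # share of the symptom relieved each step by the quick fix
side_effect = 0.002         # erosion of capability per unit of quick relief (assumed)

for step in range(1, 21):
    quick_relief = quick_fix_fraction * symptom
    fundamental_relief = 0.1 * capability * symptom
    symptom += 10.0 - quick_relief - fundamental_relief   # the problem keeps generating symptoms
    capability = max(capability - side_effect * quick_relief, 0.0)
    if step % 5 == 0:
        print(f"step {step:2d}: symptom = {symptom:5.1f}, capability = {capability:4.2f}")

As capability erodes, a larger share of the relief has to come from the quick fix, which is the growing dependence the archetype warns about.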

Classic examples of shifting the burden include:

  • Making up lost time for homework by not sleeping (and then controlling lack of sleep with stimulants)
  • Borrowing money to cover uncontrolled spending
  • Feeling better through the use of drugs (dependency is the unintended side-effect)
  • Taking pain relievers to address chronic pain rather than visiting your doctor to try to address the underlying problem
  • Improving current sales by focusing on selling more product to existing customers rather than expanding the customer base
  • Improving current sales by cannibalizing future sales through deep discounts
  • Firefighting to solve business problems, e.g., slapping a low-quality – and untested – fix onto a product and shipping it out the door to placate a customer
  • Repeatedly fixing new problems yourself rather than properly training your staff to fix the problems – this is a special form known as “shifting the burden to the intervener” where you are the intervener who is inadvertently eroding the capabilities and confidence of your staff (the unintended side-effect)
  • Outsourcing core business competencies rather than building internal capacity (also shifting the burden to the intervener, in this case, to the outsource provider)
  • Implementing government programs that increase the recipient’s dependency on the government, e.g., welfare programs that do not attempt to simultaneously address unemployment or low wages (also shifting the burden to the intervener, in this case, to the government)

Read more…

Integration Methods and DT

July 14th, 2010 10 comments

The simulation engine underlying STELLA® and iThink® uses numerical integration.  Numerical integration differs from the integration you may have learned in calculus in that it uses algorithms that approximate the integral.  The two approximations currently available are known as Euler’s method and the Runge-Kutta method.  All algorithms require a finite value for DT, the integration step size, rather than the infinitesimally small value used in calculus.  On the surface, it may seem that the smaller DT is, the more accurate the results, but this turns out not to be true.

Compound Interest:  Euler’s Method over Runge-Kutta

To introduce Euler’s method, let’s take a look at the simple problem of compound interest.  If we have $1000 that we invest at 10% (or 0.1) compounded annually, we can calculate the balance after N years by adding in the interest each year and recalculating:

1st year:  interest = $1000 × 0.1 = $100; Balance = 1000 + 100 = $1100
2nd year: interest = $1100 × 0.1 = $110; Balance = 1100 + 110 = $1210
3rd year:  interest = $1210 × 0.1 = $121; Balance = 1210 + 121 = $1331

And so on up to year N.  We have just seen the essence of how Euler’s method works.  It calculates the change in the stock over this DT (in this case, interest) and then adds that change to the previous value of the stock (Balance) to get the new value of the stock.  In this example, DT = 1 year.

By noticing we always add the existing balance in, we can instead just multiply the previous year’s balance by 1 + rate = 1 + 0.1 = 1.1:

1st year:  Balance = $1000 × 1.1 = $1100
2nd year: Balance = $1100 × 1.1 = $1210
3rd year:  Balance = $1210 × 1.1 = $1331

And so on up to year N. We can further generalize by noticing we are multiplying by 1.1 N times and thus arrive at the compound interest formula:

Balance = Initial_Balance*(1 + rate)^N

Checking this, we find our Balance at the end of year 3 is 1000*1.1^3 = $1331.  In the general case of the formula, rate is the fractional interest rate per compounding period and N is the number of compounding periods (an integer).  In our example, the compounding period is one year, so rate is the annual fractional interest rate and N is the number of years.  However, if interest is compounded quarterly (four times a year), the interest rate has to be adjusted to a per quarter rate by dividing by 4 (so rate = 0.1/4 = 0.025) and N must be expressed as the number of quarters (N = number of years*4 = 3*4 = 12 for the end of year 3).  We can use this formula in our model to test the accuracy of Euler’s method.  Note that for quarterly compounding, we would set DT = 1/4 = 0.25 years.
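A few lines of Python make the correspondence concrete.  This is a sketch of Euler’s method as described above, not the STELLA engine itself:

# Euler's method for the compound-interest model versus the closed-form formula.
initial_balance = 1000.0
rate = 0.1    # 10% per year
years = 3

def euler_balance(dt):
    # Integrate the inflow rate * Balance with Euler's method and step size dt (in years).
    balance = initial_balance
    for _ in range(int(round(years / dt))):
        balance += rate * balance * dt
    return balance

print(euler_balance(dt=1.0))                             # about 1331.00: annual compounding
print(initial_balance * (1 + rate) ** years)             # about 1331.00: formula with N = 3 years
print(euler_balance(dt=0.25))                            # about 1344.89: quarterly compounding
print(initial_balance * (1 + rate / 4) ** (4 * years))   # about 1344.89: formula with N = 12 quarters

With DT = 1, Euler’s method reproduces annual compounding exactly, and with DT = 0.25 it reproduces quarterly compounding, which is the correspondence used throughout this example.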

To explore the differences between Euler’s and Runge-Kutta, the following structure will be used for all of the examples in this post.  This structure models the compound interest problem outlined above.

[Figure: model structure used for the compound interest examples]

The equations change for each example and can be seen in the individual model files (accessed by clicking here).  For this example, the actual value is calculated using the compound interest formula, Initial_Balance*(1 + rate)^TIME.  The approximated value is calculated by integrating rate*Approx_Balance (into Approx_Balance).

In addition to the actual and approximate values, three errors are also calculated across the model run:  the maximum absolute error, the maximum relative error, and the root-mean-squared error (RMSE).  The absolute error is:

ABS(Actual_Balance - Approx_Balance)

The relative error is:

absolute_error/ABS(Actual_Balance)

and is usually expressed as a percentage.  The RMSE is found by averaging the values of the absolute error squared, and then taking the square root of that average.
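For reference, the three error measures can be computed from a pair of series as follows.  This is a generic sketch, not the equations in the model files:

import numpy as np

def error_summary(actual, approx):
    # Maximum absolute error, maximum relative error, and RMSE of two series.
    actual = np.asarray(actual, dtype=float)
    approx = np.asarray(approx, dtype=float)
    abs_error = np.abs(actual - approx)
    rel_error = abs_error / np.abs(actual)    # usually reported as a percentage
    rmse = np.sqrt(np.mean(abs_error ** 2))
    return abs_error.max(), rel_error.max(), rmse

print(error_summary([1100.0, 1210.0, 1331.0], [1100.0, 1210.5, 1330.0]))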

Read more…