
Problem Of The Month

May 2001—Process Reliability Plots With Flat Line Slopes (Small b’s)
(Small b’s are bad news because they are associated with large production losses!)

 

This Problem Of The Month Is A Primer On Process Reliability

Click here to download a copy of this Problem Of The Month as a PDF file.

Reliability experts are pragmatists. They decide which type of probability paper to use based on whether the data falls onto a straight line. 

 

Most production data fits a straight line on Weibull probability paper.  The same production data makes a very poor straight line fit for other types of probability paper.  Therefore, pragmatically, production data is considered to fit a Weibull distribution.  The Weibull probability plots of production data tell you about the “vital signs” for a production process in a single glance.  The Weibull plots are an important tool for mergers and acquisitions teams to determine the “state” of the new production process and how it compares to similar processes within the company doing the acquisition.

 

The Weibull production plots tell you the reliability of the production process by looking at the big picture from a high altitude.  From the overview you can see patterns in output, which foretell of difficulties.  Based on the patterns, you can assess the need for root cause analysis or fine tuning using six-sigma techniques as the task is broken into bite-size and understandable segments based on the production output evidence. 

  • The first cusp on the trend line, as you move down and to the left on the probability curve, tells about process reliability—this is the point where things go sour quickly.
    The reliability point is seldom known or identified for corrective action by the production or management team.  The problems are often driven by cause and effect events.  The reliability point is quantified from the Weibull plot and read directly from the Y-axis.
  • Gaps between the nameplate line and the demonstrated production line tell about the efficiency and utilization losses.
    These problems are usually associated with management—which has tacitly accepted these losses without changing the paradigm to make these conditions unacceptable.  The gaps between the nameplate line and the demonstrated production line define what we have and the potential for what we could have!  Often these problems are driven by multiple small problems, and this zone is usually the territory for six-sigma experts.  The efficiency and utilization losses are quantified from the Weibull plot.
  • Gaps between the demonstrated production line and the actual data points below the reliability cusps tell about the reliability losses.
    These problems are usually associated with things you can put your finger on as the reason for losses.  These problems are often the territory for reliability engineering projects.  Reliability losses are quantified from the Weibull plot.

 

Thus on one sheet of paper you can get an assessment of the health of your production process and quantify your losses into categories to build a top-level Pareto distribution for corrective action.  The Weibull process reliability technique helps define a strategic course for working toward significant improvements to reduce losses and correct problems.  Hyperlinks to other process reliability articles are listed at the bottom of this page. 

 

Weibull process reliability plots of actual production data provide high altitude vital signs about the health of the production process.

  • The first vital sign is the Weibull beta slope of the demonstrated production line. 
  • The demonstrated production line Weibull slope should be steep (small variations in production output). 
  • The demonstrated production line should be without cusps (cusps represent failures of the process and add variability to the output in the undesirable direction). 
  • The demonstrated production line Weibull slope beta should be very near the nameplate line slope beta so that efficiency and utilization losses are small.
  • A second vital sign is where the demonstrated production line crosses the 36.8% Reliability or 63.2% CDF line.  This defines a point estimate for demonstrated process output and nameplate output.
  • A third vital sign is the gap between the demonstrated line and the nameplate line to quantify efficiency/utilization losses.
  • A fourth vital sign is the cusp on the demonstrated production line, which identifies process reliability.
  • A fifth vital sign is the gap between the demonstrated line and the actual data points below the process reliability to quantify reliability losses.
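The second vital sign above rests on a property of the Weibull CDF: at x = eta the CDF equals 1 − e⁻¹ ≈ 63.2% regardless of beta, which is why the crossing of the 63.2% CDF (36.8% reliability) line reads the characteristic output directly off the plot.  A minimal sketch (the eta value of 700 units/day is a hypothetical number, not from the article):

```python
import math

def weibull_cdf(x, eta, beta):
    """Cumulative fraction of days with output <= x."""
    return 1.0 - math.exp(-(x / eta) ** beta)

# At x = eta the CDF is 1 - 1/e regardless of beta, so the
# 63.2% CDF (36.8% reliability) crossing locates eta on the plot.
for beta in (5, 100):
    print(round(weibull_cdf(700.0, 700.0, beta), 3))  # -> 0.632 for both
```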

 

Vital signs obtained for humans, by health professionals, tell about human health and require the use of some medical judgment.  For example, just because you have a fever detected by a thermometer in your mouth does not mean the problem is occurring inside your mouth!  You must search for the root of the problem and correct it. 

 

A lower altitude search for Weibull process reliability problems must be conducted using asset utilization reports, and other exception reports, to identify the root(s) of the deficiencies.  Where possible, the root cause issue should be converted to cause and effect relationships. 

 

Corrective action to eliminate production losses identified by process reliability plots first requires a corrective action strategy.  The strategy then flows to the tactics to be used for solving problems, and a high-level Pareto distribution of losses is important for guiding the strategic decisions.  

 

Too often manufacturing organizations think their collections of problem solving tactics produce a strategy!  In fact, the strategy must always drive the tactics.

 

Two ways of looking at the daily production data—which is right?

  • Traditional views of daily production output are viewed in a time sequence.  It is difficult to see the signal among all the noise of variation in production output.
  • Non-traditional Weibull plots of production data disconnect the effects of time.  The data is viewed in rank order to observe patterns not so easily seen in time-based plots.

Each method is right for its particular purpose.  Both viewing techniques are correct.  Using the traditional method (time sequence) results in variability confusing the signal.  The Weibull plot shows patterns of performance not seen in traditional plots because the data is viewed in rank order.

 

The Weibull data problem: Weibull probability plots take scalar production data arranged in rank order for the X-axis of the probability plot.  A Y-axis plotting position is found from a statistical tool called Benard’s median rank position.  

 

Benard’s median rank concept is explained in The New Weibull Handbook, 4th edition, by Dr. Robert B. Abernethy.  Each ranked scalar X-value from the process has an equivalent Y-position on the probability scale from Benard’s method, so the information can be plotted on special Weibull probability paper.  The Y-axis of the Weibull probability plot is the log of another log, so you can expect it will have unusual divisions.  The X-axis of the Weibull probability plot is a logarithm scale. 
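Benard’s approximation places the i-th of n rank-ordered values at a median rank of roughly (i − 0.3)/(n + 0.4).  A minimal sketch of the plotting positions (the production figures are hypothetical):

```python
def benard_median_ranks(n):
    """Benard's approximation to the median rank (Y-axis plotting
    position) for each of n rank-ordered observations."""
    return [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]

# Hypothetical daily production values sorted into rank order:
output = sorted([712.0, 655.0, 698.0, 703.0, 640.0])
for x, f in zip(output, benard_median_ranks(len(output))):
    print(f"output {x:6.1f}  ->  CDF plotting position {f:.3f}")
```

Each (output, plotting position) pair becomes one point on the Weibull probability paper.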

 

Review the figures below.  Look for patterns to appear in the Weibull plots.  The shapes/patterns will give clues for what deficiencies to search for to correct the losses.  As losses are reduced, the scatter in the data is reduced and the process output becomes more predictable.  The concepts behind Weibull process reliability plots are in concert with six-sigma projects. 

 

The Weibull process reliability issues are more in tune with management concepts and less in tune with the mathematics because the Weibull distributions are non-symmetrical—refer to items I-V at the ASQ website for body of knowledge issues concerning:

        I.     Enterprise-wide deployment

      II.     Business Process Management

    III.     Project Management

    IV.     Six Sigma Improvement Methodology and Tools—Define

      V.     Six Sigma Improvement Methodology and Tools—Measure

The other items in the body of knowledge which bear some relationship to the Weibull process reliability method are:

    VI.     Six Sigma Improvement Methodology and Tools—Analyze

  VII.     Six Sigma Improvement Methodology and Tools—Improve

VIII.     Six Sigma Improvement Methodology and Tools—Control

    IX.     Lean Enterprise

      X.     Design for Six Sigma

 

Figure 1 shows a steep line on the Weibull probability plot.  The line slope has a beta = 100 (world class) and eta = 700 for the nameplate line.  The nameplate line represents the potential output from the system—see label A and notice the small range in output variation. 

 

The Weibull probability plot in Figure 1 also shows a flat line slope with beta = 5 (representing very poor performance!) and eta = 494 which represents an undesirable wide range in output—see label B.  Flat line slopes (small b values) of production data on Weibull process reliability plots are bad news!  Line A is the nameplate line (the potential).  Line B is the demonstrated production line (what you have accepted as the real world). 

 

Table 1 shows the efficiency and utilization losses.  The losses are the gap between the nameplate line and the demonstrated line for various values of beta.  Notice the flat beta values have significant losses. 

 

As a rule of thumb, as you move from a beta of 5 to a beta of 10, the losses are (roughly) cut in half.  The incentive is great for pushing the demonstrated betas to 25 or larger! You reap greater rewards of reducing losses by correcting processes with flat betas (say 5) than you do in improving steep betas (say 60).

 

Table 1

Efficiency & Utilization Losses For Various Demonstrated Beta Values When Nameplate Line Is Beta = 100, Eta = 1000

Beta-->    3        5        7.5     10      15      20      25      30      40      50     60     75     100
Losses-->  182,832  126,387  89,615  68,483  45,219  32,705  24,894  19,555  12,709  8,550  5,729  2,879  0

 

The probability density function (PDF) plots in Figure 1 show the flat Weibull slope produces a flat PDF (the PDF is the shape of the curve you would get if you made a tally sheet of output versus occurrence).  The steep Weibull slope, by contrast, results in a spike-shaped PDF with high altitude and a small span of output, which makes it very desirable and very predictable. 
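A quick numerical check of why the steep slope gives a spike: the Weibull coefficient of variation (scatter relative to the mean) depends only on beta, and it collapses as beta grows.  A minimal sketch:

```python
import math

def weibull_cv(beta):
    """Coefficient of variation (standard deviation / mean) of a
    Weibull distribution; it depends only on beta, not on eta."""
    g1 = math.gamma(1.0 + 1.0 / beta)
    g2 = math.gamma(1.0 + 2.0 / beta)
    return math.sqrt(g2 / (g1 * g1) - 1.0)

print(f"beta =   5: CV = {weibull_cv(5):.3f} (wide, flat PDF)")
print(f"beta = 100: CV = {weibull_cv(100):.3f} (narrow spike)")
```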

 

Bricks and mortar of equipment and facilities define the maximum X-axis location of both curves (the Weibull and the PDF).  By the way, the PDF curve’s altitude is a relative probability, and the area under the PDF curve is, by definition, unity.

 

WinSMITH Weibull made the Weibull curves in Figure 1.  The PDF curves were made by WinSMITH Visual software.  Prices for the software and handbook are available on the Internet.

 

Notice the shapes of the PDF curves in Figure 1:

1.     The low beta value is almost a symmetrical, sort of bell shaped, curve with wide scatter in output.

2.     The steep beta value is clearly a tailed curve to the left with almost no tail to the right (this says you can easily get smaller output but you have almost no chance of getting greater output from the process).

3.     Notice in both the Weibull plot and the PDF plot the curves cross in the upper regions at the 365th day of production based on ranked output.

The steep beta curve is desirable for production output.  The flat slope beta curve is undesirable—it may be what you get, but it’s not what you want for a first quartile producer.

 


The undesirable flat lines shown in Figure 1 for the Weibull plot are caused.  The shapes of the curves don’t just happen. 

 

What are the reasons for flat lines, which cause the wide scatter in output shown on the PDF plot with long tails each side of the central tendency?  What causes changes between steep nameplate lines (which you want) and shallow demonstrated production lines (which cause large amounts of unpredictability in production output and are thus undesirable)?   These are very important questions.

 

Monte Carlo simulations in Excel™ are the tools for modeling production output events causing these conditions.  The reasons for gaps between the two lines in Figure 1 can be illustrated with the simple Monte Carlo model in Excel™. 

 

The gaps between the nameplate line and the demonstrated line in Figure 1 are caused by small random events.  The random events are detractors—just one hour of loss per day, in 10 to 20 increments, can take the world-class beta = 100 to an embarrassingly low value of beta (don’t kid yourself that you can make up the losses—it never happens!).  The detractors are a take-away from efficiency (an input/output factor) and utilization (a measure of time wasted). 
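This is not the author’s Excel model, but a minimal Monte Carlo sketch of the same idea: sample a world-class nameplate process, subtract a handful of small random daily detractors, and watch the fitted beta collapse.  The 700 units/day eta, the ten-detractor rule, and the one-hour cap per detractor are all assumptions for illustration:

```python
import math
import random

random.seed(2001)

def fit_beta(data):
    """Least-squares estimate of the Weibull slope: regress
    ln(-ln(1 - F)) on ln(x), with F from Benard's median ranks."""
    xs = sorted(data)
    n = len(xs)
    pts = [(math.log(x), math.log(-math.log(1 - (i - 0.3) / (n + 0.4))))
           for i, x in enumerate(xs, 1)]
    mx = sum(px for px, _ in pts) / n
    my = sum(py for _, py in pts) / n
    num = sum((px - mx) * (py - my) for px, py in pts)
    den = sum((px - mx) ** 2 for px, _ in pts)
    return num / den

# Hypothetical world-class nameplate process: eta = 700 units/day,
# beta = 100, one year of daily output.
nameplate = [random.weibullvariate(700.0, 100.0) for _ in range(365)]

# Hypothetical detractor rule: ten small daily interruptions, each
# wasting up to one hour of a 24-hour operating day.
def with_detractors(output):
    lost_hours = sum(random.uniform(0.0, 1.0) for _ in range(10))
    return output * (24.0 - lost_hours) / 24.0

demonstrated = [with_detractors(d) for d in nameplate]
print(f"nameplate beta    ~ {fit_beta(nameplate):.0f}")
print(f"demonstrated beta ~ {fit_beta(demonstrated):.0f}")
```

The demonstrated beta lands far below the nameplate beta even though no single detractor is large—which is the article’s point.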

 

None of the problems in the gap between nameplate and demonstrated lines, on their own, seem large but the effects are deleterious to the performance of the plant.  If you insist on connecting the Figure 1 gap to a single thing (rather than two things such as efficiency and utilization) it will usually be an efficiency problem [which is often driven by a utilization issue].

 

Efficiency and utilization problems were the subject of much study by Frederick Taylor, the father of scientific management, in the early part of the 20th century.  Taylor was active in work measurement, which is the skill set for industrial engineering using time studies and work measurement, aimed at improving productivity.  Today the same scientific measurement effort has been enlarged to include ergonomics, or human factors engineering, which deals with reduction of stress and strain in the workplace. 

 

A long article on Frederick Taylor concerning this subject is available on the Internet, which boils down to:

·       Development by management (not the workers) of the science of doing the tasks, with rules intended for perfection and standardization of implements and working conditions to reduce variability in output [today in more enlightened environments, the work team will set the rules—the Japanese 5S technique also requires rules to be set and followed].

·       Careful selection and training of people to adopt best practices [another 5S requirement for the pillars of the visual workplace—go to Amazon.com and search for Hiroyuki Hirano’s book].

·       Bringing workers and the science of completing tasks together for effectively completing tasks without losses.

·       Equal division of work and responsibility between workers and management to effectively complete tasks as a team-based effort.

The bottom line is more output, fewer losses, and greater predictability (sounds like today’s 6-sigma programs, doesn’t it!). 

 

The reason this effort sounds like 6-sigma tasks is the need for reducing built-in variability in output from the process, which often has become acceptable to the management team but in reality must be eliminated by reducing common cause variability.  Common cause variability is tough to identify and remove.  Thus elimination of common cause requires 6-sigma concentration, whereas output reductions from significant special cause events are obvious and can be identified by members of the organization with lesser skills.  [5S programs help eliminate waste and allow people to work smarter, not harder.]

 

Figure 2 shows another view of the same process with a reliability problem, which is identified by the cusp on the demonstrated production line.


The gap between the continuous demonstrated production line and the data trend line to the left of the cusp (and left of the demonstrated production line) represents reliability losses.  The cusp defines reliability of the process, and as with all reliability problems a failure must be defined—in this case, the demonstrated production line fails to continue on its path of common cause variability, and this represents the failure. 

 

Monte Carlo simulation of Figure 2 shows these losses occur from specific cause/effect events.  Roughly 1 day out of every 5 has an observable cause/effect event destroying productive output, which results in roughly an 80% process reliability.
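A back-of-the-envelope check of the 80% figure, treating the 1-day-in-5 cutback rule as given (the simulation length is an assumption): the process reliability read at the cusp is simply the fraction of days untouched by a cutback event.

```python
import random

random.seed(7)

# Rule from the text: each day has a 20% chance of a cause/effect
# cutback event; the remaining days run at normal output.
days = 3650
cut_days = sum(1 for _ in range(days) if random.random() < 0.2)
reliability = 1.0 - cut_days / days
print(f"process reliability ~ {reliability:.0%}")
```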

 

Consider the case in Figure 3 for only deterioration in efficiency, which covers input/output-type problems such as scrap or conversion ratios.  The random numbers decide how much loss occurs between the established limits.  Notice the impact on beta summarized in the plot as it descends from beta = 100 for efficiency = 100% to beta = ~31 for efficiency = 90%.  Notice the concave downward data trends as efficiency problems reach their limits.

 

The case for only deteriorations in utilization, which are time wasters, will look the same as Figure 3 as the random numbers decide how much loss occurs between the established limits for utilization losses.  Notice the impact on beta summarized as substantial declines from the world-class output beta = 100.

 

Likewise, the case for only shifting locations in output set points, which increases variability and decreases beta, will also look like Figure 3 as the random numbers decide how much loss occurs between the established limits.  Notice the impact on beta summarized as declines in line slopes from the world-class trend line.

 

How do you separate the causes from each other?  The concave downward plots give you clues for where to look.  It takes first hand observations to see the reasons for the problems.  A good method to highlight the problems is to establish control charts for efficiency, utilization, and set points for the process—if you lack control charts, you are a prime candidate for inferior beta slopes with so many problems you may not observe the downward concave plots.


Formerly, the concave curves shown in Figures 3 and 4 have been difficult to explain without a Monte Carlo model to find the patterns in the data.

 

When the three impacts occur simultaneously, the results are shown in Figure 4.  The values for efficiency, utilization, and shifting set points in Figure 4 are not smaller than 95%—again, note the concave downward appearance to the curves from these small abuses and also note the big scatter in output data signified by the beta slope.  In short, little abuses in the production system can destroy predictability and incur many losses.  For world-class production systems, you must do many things correctly and simultaneously to avoid losses.

 

Consider the case in Figure 5 for only deterioration from small cutbacks, which are cause and effect outages.  The Monte Carlo simulation allows random numbers to select which day will have deterioration.  A different set of random numbers decides how much loss occurs between the established limits.  Notice the impact on beta when 1 day out of every 5 (20% of the time) takes a cutback, and the cutbacks can be up to 90%, with both the number of days and the amount of cutback decided randomly.  No other losses are included for efficiency, utilization, or shifting aim point.  Note that beta has a moderate deterioration.


Figure 6 puts some of the small efficiency problems together with cutbacks for combined abuses, driven by the aggregate of random events noted above.  Notice the significant deterioration of beta from a host of small “insults” to the system.  Individually, none of the efficiency problems alone is major, but in combination they have enormous impact on output and thus profitability.  Please note that a random efficiency loss between 95% and 100% will average to 97.5% efficiency—so the things that destroy productivity are often small things that need to be identified and corrected.  You cannot rely on the old excuse “We’ll make it up tomorrow” because tomorrow never comes—what’s lost is lost and will never be recovered.  You should think like the stockholder who demands his money from the process now and every day!


Figure 6 is a typical pattern from small efficiency losses and small cutback losses.  The approximate frequency of the cutback occurrences can be identified at the cusp on the demonstrated production line.

 

Monte Carlo simulation helps explain the patterns for losses observed in process reliability plots from many small things.  Unless you can demonstrate the effects in a spreadsheet model, many people will not believe your observations and conclusions.  Most people want to find a single large “thing” to fix to heal the entire process.  Sorry to say, but most “medicine” for eliminating process losses comes from 100 small things that make the difference between a first quartile (the best) performer and a fourth quartile (the worst) performer. 

 

Table 2 shows a comparison chart of demonstrated betas seen in many different industries.

 

Table 2

Typical Beta Values Observed In Various Industries

Control-->   Poor Control  Fair Control  Tighter Control  Excellent Control  World Class Control  Seldom Achieved
Beta-->      5             10            25               50                 100                  200
Quartile Performance-->    Fourth        Third            Second             First

 

Expect most people to deny they can improve over betas of 5 to 10, with excuses that higher betas must be the result of smoke and mirrors.  The issue for making improvements in output to make the process predictable is all about money! 

 

The process must be designed for predictability and operated to its capability.  Flat beta slopes (undesirable) on Weibull plots are always caused.   The cause for poor performance must be eliminated to reduce losses and improve profitability. 

 

If you understand the rules for the problems then you can fix them even though the process is performing with some randomness explained by the Monte Carlo simulation.

 

The small events, and the rules for the small events, in the Monte Carlo simulation of production processes seem to fit the concepts of complexity analysis described in hyperlinks listed at the bottom of this page. 

 

Complexity analysis is the fine line between simplistic, deterministic systems and chaos.  Complexity analysis models start with a top down set of a few simple rules. 

Complexity analysis describes the rules non-linearly, and the complexity analysis system is driven by randomness.  The flight of “boids” described below in complexity analysis is driven by randomness, non-linear math (the Weibull distribution is highly non-linear), and a few simple rules, just as has occurred above in the figures showing results of process simulations.  Boids and process reliability modeling tools are sisters under the skin, which tells you about the science involved in the process reliability analysis!

 

Consider the complexity analysis computer simulation called BOIDS (that’s “birds” spoken with a Brooklyn accent).  Hyperlinks to the boids are shown below.  The boids start flying in random directions from a grid on the computer screen.  The boids have three rules:

1)     Boids must be separated by a minimum distance so they don’t fly into each other,

2)     Boids try to match their velocities with other local boids, and

3)     Boids fly to their perceived center of gravity so no boid stays perpetually on the perimeter in harm’s way. 

Notice that no rule exists for boids to flock—yet they do flock! 

 

When the boids fly in the hyperlinks below, they start in random directions but quickly begin to take on the sense of a flock of boids (remember, no rule says they’ll flock).  When a fourth rule is added to give “big” boids a field of vision, you see the boids take on V-shaped flocks as occurs with big birds. 

 

The bottom line for the boids is this:
            Starting with a few simple rules, non-linear math, and randomness in the system you get order out of chaos! 

The same type of results is achieved with the Monte Carlo simulation for process reliability models.

Former students of the Process Reliability training class can obtain a copy of the Monte Carlo process reliability simulation by sending email to Paul Barringer by clicking here.  The current Monte Carlo model allows for efficiency losses, utilization losses, and special causes.  Other Monte Carlo models, which will allow losses of portions of the production train, are under construction.

Why is Monte Carlo modeling such a big deal?  You can take patterns for your process and “twist the knobs” of the Monte Carlo model to get it to produce similar output patterns using randomness and a few simple rules. 

 

Monte Carlo modeling adds credibility to the claims of causes for efficiency/utilization losses and reliability losses—and it can all be done on your computer. 

 

The Monte Carlo model lets you determine the “simple rules” at work in your process that produce loss patterns.  You’ve got to find the problems and eliminate them to be more productive—and many of the problems are the results of many small and simple issues.

Hyperlinks to other articles on process reliability:

·       Production Output/Problems

·       Six Sigma

·       Coefficient of Variation

·       Production Reliability Example With Nameplate Ratings

·       Key Performance Indicators From Weibull Production Plots

·       Production Nameplate Rating

·       Process Reliability Plots With Flat Line Slopes

·       Process Reliability Line Segments

·       Automating Monthly Weibull Production Plots From Excel Spreadsheets

·       Papers On Process Reliability As PDF Files For No-charge Downloads
- New Reliability Tool for the Millennium: Weibull Analysis of Production Data
- Process Reliability and Six-Sigma
- Process Reliability Concepts

Hyperlinks to other articles on complexity analysis so you can see the connections between process reliability Monte Carlo models and the BOIDS:

·       Craig Reynolds Originator Of Boids as artificial life with background

·       Gary Flakes’ V-shaped flocks with Java Applets (this applet is no longer available; from the pull-down menu choose BOIDS)

·       Conrad Parker’s Boids fly all over the webpage!

·       More Conrad Parker Java Boids on one page

·       Complexity: The Emerging Science At The Edge Of Order and Chaos by M. Mitchell Waldrop

Comments:

Refer to the caveats on the Problem Of The Month Page about the limitations of the following solution.  Maybe you have a better idea on how to solve the problem.  Maybe you find where I've screwed up the solution, and you can point out my errors as you check my calculations.  E-mail your comments, criticism, and corrections to Paul Barringer by clicking here.  Return to top of page.

Technical tools are only interesting toys for engineers until results are converted into a business solution involving money and time.  Complete your analysis with a bottom line that converts results into dollars and time so you have answers that will interest your management team!

PDF Copies:

Click here to download a copy of this Problem Of The Month as a PDF file.
Last revised 7/23/2013
© Barringer & Associates, Inc. 2001

Return to Barringer & Associates, Inc. homepage