
Conference 7.286::digital

Title:The Digital way of working
Moderator:QUARK::LIONEL
Created:Fri Feb 14 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:5321
Total number of notes:139771

1273.0. "Six Sigma" by AOXOA::STANLEY (Like I told ya, what I said...) Wed Nov 14 1990 18:23

I work for the Systems Engineering Characterization Group (SECG) under
UNIX-based Software and Systems (USS), and our group was required to take the
"Deliver at Six Sigma" course.  I could only find brief mentions of Six Sigma
in this file, so I am starting this new note.

It has become apparent to me after taking both the "Waterfall" course and
"Deliver..." that the Six Sigma program can only really work if it is
instituted company wide.  It appears to be a good idea to improve quality in
DEC, although difficult to implement in groups outside of manufacturing.  Why
are some groups required to embrace the Six Sigma method while others know very
little about it?

		Dave
T.R  Title  User  Personal Name  Date  Lines
1273.1. by REGENT::POWERS Thu Nov 15 1990 13:06 (15 lines)
> Why
> are some groups required to embrace the Six Sigma method while others know very
> little about it?

Enthusiasm for Six Sigma "trickles down" from interested vice presidents.
Grant Saviers reportedly had good luck with it when he ran disks,
and he has directed his orgs to adopt it.
Other VPs have yet to be persuaded.

And yes, it will work better when everybody believes, but "everybody"
has to be all of DEC and all our suppliers too.  Staged adoption
is a fact of life, but one that will still deliver benefits even
before "everybody" does it.

- tom
1273.2. "SIX SIGMA NOTEFILE LOCATION" by JOSHER::CLARK Thu Nov 15 1990 13:51 (5 lines)
    
    FYI - There is a Six Sigma Notesfile.  Use SSVAX::SIX_SIGMA to
    add it to your notebook.
    
    Dianne
1273.3. "Press KP7" by BIGJOE::DMCLURE (Digital charity worker) Thu Nov 15 1990 17:04 (7 lines)
re: .2,

	Or (via the magic of the VAXnotes SET NOTE/CONF= command), you
    could also simply press keypad #7 and have it automagically added
    to your notebook!  ;^)

				   -davo
1273.4. by JARETH::EDP (Always mount a scratch monkey.) Fri Nov 16 1990 18:54 (8 lines)
    One thing I'd like to know about Six Sigma is whether measuring bugs
    per line of code is any good.  Suppose there are 46 bugs in one million
    instructions.  If a computer executes one billion instructions per
    second, that is 46,000 bugs per CPU-second, or almost 4 billion bugs
    per CPU-day.  Is that supposed to be good?
    
    
    				-- edp
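For anyone checking the figures above, the arithmetic works out as follows (a quick sketch using only the numbers quoted in the note; the "almost 4 billion" total lands at one CPU-day):

```python
# The figures quoted in .4: 46 defects per million instructions,
# executed on a machine doing one billion instructions per second.
defects_per_million_instructions = 46
instructions_per_second = 1_000_000_000

# 46e-6 defects/instruction * 1e9 instructions/second = 46,000 per second
defects_per_cpu_second = (defects_per_million_instructions
                          * instructions_per_second) // 1_000_000

defects_per_cpu_day = defects_per_cpu_second * 86_400  # seconds per day

print(defects_per_cpu_second)  # 46000
print(defects_per_cpu_day)     # 3974400000 -- "almost 4 billion" per CPU-day
```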
1273.5. "what a BUG?" by SMAUG::ABBASI Fri Nov 16 1990 19:45 (10 lines)
    ref .-1
    Actually, we also need to define what a BUG is.
    If we judge the correctness of a program by whether it satisfies the
    specifications, then we could say the program is "correct".
    How do you know that a program meets the specs?  What criteria are
    used, etc. -- these are the important issues.
    There are also many ways to define when a program is "correct".
    I don't know much about Six Sigma, or whether it talks about this.
    /naser
    
1273.6. "what a FEATURE?" by RIPPLE::PETTIGREW_MI Fri Nov 16 1990 21:34 (11 lines)
    ref .4 and .5
    Lines of code is a very precise measurement that is completely irrelevant.
    
    Customers judge software quality by the consistency and repeatability
    of its behavior.  If the "feature" does not work the same way all
    the time, then it's a BUG.  If the feature does not work the way
    the customer expects it to work, then it may also be a BUG.
    
    A quality index for software would be the ratio of BUGS to FEATURES.

    Now..what is a "feature"?
1273.7. "Software development viewpoint" by ESCROW::KILGORE ($ EXIT 98378) Fri Nov 16 1990 22:55 (5 lines)
    
    If the behavior specifically contradicts the specification, it's a bug.
    
    Anything else is a feature. :-)
    
1273.8. "classes of features" by SMAUG::ABBASI Sat Nov 17 1990 01:57 (6 lines)
    there are two types of features: planned features are what you hoped
    for when you wrote your program, and unplanned features, ones that are
    side effects of your planned features.
     
    /naser
    
1273.9. "Slippery classes of bugs" by RIPPLE::PETTIGREW_MI Sun Nov 18 1990 05:28 (21 lines)
    Re: .7
    
    You obviously have been exposed to considerably more lenient customers
    than I have.
    
    Yes, if a feature does not work as described in a specification or
    a document, it's a bug.
    
    It's also a "bug" if the customer doesn't understand how the feature
    is supposed to work - even if it does work as described.
    
    And it may be a "bug" if the feature isn't what the customer wanted,
    or interferes with some other feature the customer does want.

    And it is definitely a bug if a feature does not always behave the
    same way.

    In my experience, this appears to be the way customers judge software
    quality.  It is a slippery thing to measure precisely to 6 decimal
    places.  Any ideas?
    
1273.10. "Right now, it is "to be or not to be" for us." by BEAGLE::WLODEK (Network pathologist.) Sun Nov 18 1990 05:43 (19 lines)
    Bugs/line is not complete nonsense; I hope we can agree that a
    program with 0 bugs per line is bug-free?  So there is *some*
    correlation to software quality, which gives you a metric and a
    quality goal.  One can set up many classes of bugs/features etc., but I
    don't think it will bring us closer to quality software.

    The approach has been tested and gives good results; there was an
    interesting article about Japanese software quality in a note
    somewhere here.

    		wlodek 

    [ don't implement this measure, I love s**y software, it makes me a
    hero; unlike anybody else's, my job seems rather stable right now ]




1273.11. "Accuracy! Precision! Irrelevance!" by RIPPLE::PETTIGREW_MI Sun Nov 18 1990 06:12 (17 lines)
    re: .10
    
    Quality in software (as in anything else) is determined by the
    customers.  Customers do not care how many lines of code are in
    the software text.  You might just as well measure the number of
    occurrences of the letter "E".
    
    Quality is consistent, predictable, invariant behaviour that matches
    expectations.

    "Lines of code" is a very rough measure of effort.  Effort does
    not necessarily mean quality.  Quality does not necessarily mean
    effort!

    I, too, love to be the hero of the day for stabilizing some ridiculous
    piece of software.  There is no shortage of such work.  Metrics?
    The easy, precise ones usually are the wrong ones.
1273.12. by BOLT::MINOW (Cheap, fast, good; choose two) Mon Nov 19 1990 18:53 (40 lines)
From the "Waterfall course" notes:

	The goal is the same as in all work: produce defect-free products
	with the aim of achieving less than 3.4 ppm opportunities or
	parts defective.  The goal is not measurement itself; the goal
	is defect reduction.

"Defects per 1000 lines of code" is explicitly called an "initial, minimal"
metric.  The waterfall course also distinguishes between defects
(mistakes found within a development phase and corrected before phase
exit) and "errors" (mistakes that are passed on to the next phase).
The goal is to isolate errors early in development (when requirements
are defined) rather than later (when coding begins).

	-- Select metrics based on fundamental goals.  Work from
	   Goals to Questions to Metrics.
	-- Keep the data collection process simple.
	-- Analyze carefully, look for mitigating factors.
	-- Feedback is crucial to success.

	* Many in the software world are not satisfied with "lines of code"
	  as a unit for measurement.

	-- It's worth noting, however, that data indicate the simple
	   metric Defects per 1000 lines of Code captures Quality
	   level just about as well as much more complex metrics.
	   (The Motorola software community spent 2 years reviewing
	   approaches and developing these initial metrics.)

	* You could use similar ... or develop your own.

	* The important point is to define a unit, adopt metrics, and
	  institute measurement and reviews that will help us reach our
	  goal of TDU (bug) reduction.

Again, the emphasis is on early detection, through more effort during, and
better management of specification and formal review, rather than on
increased efforts during coding/test/post-release phases.

Martin.
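As background on the 3.4 ppm figure quoted from the course notes: it is the one-sided tail of a normal distribution beyond 4.5 standard deviations, i.e. the nominal six sigma minus the conventional 1.5-sigma allowance for long-term process drift. A quick sketch of the calculation, using only the Python standard library (this derivation is standard Six Sigma background, not from the Waterfall course notes themselves):

```python
from statistics import NormalDist

# Six Sigma's 3.4 defects per million opportunities is the one-sided
# normal tail beyond 4.5 sigma: 6 sigma minus the conventional
# 1.5-sigma allowance for long-term process drift.
shift = 1.5
sigma_level = 6.0

tail_probability = 1 - NormalDist().cdf(sigma_level - shift)
defects_ppm = tail_probability * 1_000_000

print(round(defects_ppm, 1))  # 3.4
```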
1273.13. "Six Sigma was brought up in note 1225" by RANGER::PRAETORIUS (going through life depth first) Mon Nov 19 1990 19:56 (0 lines)
1273.14. by RICKS::SHERMAN (ECADSR::SHERMAN 225-5487, 223-3326) Tue Nov 20 1990 13:06 (11 lines)
    This bugs/line measure is a tough way to measure.  Recently, I hacked
    some example code that, at one time, was "bug free".  But with new
    releases of other tools, portions of the code became obsolete.  For
    example, structures had changed and old logicals were changed.  What's
    interesting here is that this is a classic case of software breaking
    because of changes made outside of the tool.  In other words, bugs tend
    to grow inside software when it sits on the shelf.  So, even though
    you don't change a line of code, the bug count can go up and is a
    moving target.
    
    Steve
1273.15. "Measuring quality is a slippery process" by FDCV08::CONLEY (Chuck Conley, DTN 223-9636) Tue Nov 20 1990 15:30 (21 lines)
    Seems to me that people have been trying to measure software quality
    for aeons, without much success.  Is a 5,000 line program with five
    "defects" better than a 2,500 line program with five defects, but where
    both programs have the same functionality?  Is a bug caused by faulty
    program design and architecture rated the same as a bug caused by a
    misspelled instruction?

    This is a little like measuring programmer productivity by counting the
    number of lines of code written per day.

    I agree with most of the goals outlined by Martin.  It makes a lot of 
    sense to try to build quality early in the development cycle.   The 
    problem is that in spite of all our technical achievements, we still
    don't have very good metrics for measuring software quality.  Ultimately,
    the final metric of quality is the marketplace.  If a product sells,
    if people continue to buy a product and continue to *use* that product,
    assuming the users have other alternatives, we would have to say that
    the product has achieved a certain level of quality.

    fwiw,
    Chuck
1273.16. "not necessarily applicable" by WAGON::ALLEN Wed Nov 21 1990 14:04 (25 lines)
	The significance of Six Sigma and similar quality programs has
	more to do with the customers' perceptions than metrics we can
	beat each other up with.

	Even when six-sigma manufacturing is consistently achieved, it
	is possible to be the consumer stuck with a lemon.  Then what?

	If we implement a six-sigma scheme of any sort for the sake of
	more "we're DEC and you're not" we shoot ourselves in the foot
	doing it.  And regardless of the actual defect rate of product
	we ship, customer satisfaction is the only thing that counts.

	As for software ... permit me a heresy, please.   Programs are
	not manufactured.  They're a realization of an idea.  No more,
	no less.  Six-sigma ideas are a contradiction, I believe.

	No software bug is acceptable.  Period.

	The corollary:  don't program what you don't understand enough
	to completely debug.   Alternatively:  if you can't agree on a
	specification that can be completely debugged, bail out before
	the development begins, because you'll get nailed for somebody
	else's lack of attention to detail if you don't.

1273.17. "rocket science" by BEAGLE::WLODEK (Network pathologist.) Wed Nov 21 1990 14:45 (21 lines)
    This is how I think the measure should be used.

    I write a program (let's say a user-application type, not a device driver
    or any kernel stuff) of 10,000 lines in total.  After field test and some
    customer exposure, 10 bugs are found.  All of this assumes that the
    program does (except for these 10 bugs) work according to the spec.
    This gives a bug per 1000 lines.  Now I have a personal quality goal:
    improve this measurement.  That is all.  It serves to focus attention and
    forces some analysis of the sources of the errors.  I doubt it happens
    often that a code maintainer in Engineering tries to find the author of
    the code to discuss why a certain bug happened.


    If after some time I consistently score 1 bug per 10,000 lines,
    then one can say that the code quality improved.  How much in real
    terms, etc. -- no way to say.  But still, it is better quality code.

    		
    					wlodek
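The bookkeeping described above is simple enough to write down (an illustrative sketch; the function name is invented, not from any Six Sigma material):

```python
# Defect density as used in .17 -- defects per thousand lines of code.
# The function name is illustrative, not from any Six Sigma course.
def defects_per_kloc(defects, lines_of_code):
    return defects / (lines_of_code / 1000)

# 10 bugs found in a 10,000-line program: 1 defect per KLOC ...
print(defects_per_kloc(10, 10_000))   # 1.0
# ... and the improved personal goal of 1 bug per 10,000 lines:
print(defects_per_kloc(1, 10_000))    # 0.1
```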

1273.18. "Product stovepipes" by COUNT0::WELSH (dbdx sccs) Thu Nov 22 1990 10:09 (72 lines)
	re .14:

>>>	What's
>>>    interesting here is that this is a classic case of software breaking
>>>    because of changes made outside of the tool.  In other words, bugs tend
>>>    to grow inside software when it sits on the shelf.  So, even though
>>>    you don't change a line of code, the bug count can go up and is a
>>>    moving target.

	This is an interesting observation, and one that we haven't taken
	sufficiently into account. I'd like to make a couple of remarks
	that follow on from it.

	(1) One of the apparent benefits of the object oriented approach (OO)
	    is that individual pieces of the software are relatively immune
	    to being broken from outside. In technical terms, this is
	    because each separate object contains both code and data
	    ("behaviour" and "state"). Instead of functions or procedures
	    being called from outside, a "message" is sent, and the
	    object responds by activating one or more "methods". Suppose,
	    for example, that an object creates and maintains a list of
	    equipment. Now, one day, an extra column is added to that
	    list. In a traditional piece of software, all code using
	    these lists has to change in order to reflect the new
	    size of the data structures. In the OO world, however,
	    this is not necessary, as all the procedures needed to
	    operate on the list are contained within the same object, and
	    are changed in step with the data structures. So, you don't
	    say "Gimme a list and I'll do what I want with it". You say
	    "Gimme the third item down", and the addressed object does
	    the work itself.

	(2) Unfortunately, at Digital we have a very narrow, parochial
	    focus on the "product" as the unit of organization. How
	    many times I have lamented this short-sighted situation!
	    Like when I noticed that CMS doesn't really work with DFS
	    across time zones, or when Rdb launched a new release which
	    impacted CDD/Plus users, or... almost every product has done
	    things like that. My favourite "product stovepipe" story
	    is about the VAXELN-based product that submitted a release
	    to SQM together with the required set of regression tests.
	    A while later, the development manager went round to SQM.
	    "The tests ran fine", they told him. "You can ship". The
	    DM looked at the printout, and noticed that they had checked
	    the compilation, link, and build phases, but hadn't actually
	    downloaded and run the resulting system. "Oh, we don't do that,"
	    came the reply. "We only check out VMS layered products, so
	    we only do the VMS bit". I'm glad to say this was years ago,
	    and that particular gap was closed soon after.

	    However, it does seem to me that Steve's complaint does reflect
	    the "product stovepipe" mentality to a certain extent. If you
	    regression test a lump of software, and you find that external
	    changes are breaking it, then a logical inference is that your
	    regression tests aren't addressing the whole system, but only
	    part of it.

	    Now, I can see that there might be practical difficulties in
	    writing and running regression tests for all the layered
	    products running on VMS. (So please don't reply with amusing
	    numbers related to the number of seconds in the life of the
	    universe!) But maybe we could be a bit more realistic. Here
	    are some suggested regression test scenarios:

	    1. Rdb/VMS with CDD/Plus, DECdesign, DECplan and RALLY.

	    2. All the major VAX languages with all the VAXset products
	       and the debugger.

	    3. ALL-IN-1 using all of its constituent parts.

	/Tom
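The object-oriented point in (1) above can be sketched in a few lines (an illustrative example; the class, methods, and equipment names are all invented):

```python
# Sketch of the encapsulation argument in (1): callers ask the object
# for what they need ("give me the third item") instead of reaching
# into its data layout, so adding a column does not break them.
class EquipmentList:
    def __init__(self):
        # internal layout: one row per item; columns may grow over time
        self._rows = []

    def add(self, name, location, owner=None):  # 'owner' is the new column
        self._rows.append({"name": name, "location": location, "owner": owner})

    def item(self, n):
        """Return the name of the n-th item (1-based), hiding the row layout."""
        return self._rows[n - 1]["name"]

inventory = EquipmentList()
inventory.add("VT220", "lab 3")
inventory.add("MicroVAX II", "machine room", owner="SECG")
inventory.add("LN03", "lab 3")
print(inventory.item(3))  # LN03 -- callers never see the extra column
```

Because only `item` knows the row layout, adding the `owner` column changed nothing for the callers; in the "traditional" version, every caller that indexed into the list would have needed editing.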
1273.19. "Defect..." by AOXOA::STANLEY (In another time's forgotten space...) Wed Nov 28 1990 13:02 (15 lines)
From the Deliver at Six Sigma course:

  "A defect is anything that results in customer dissatisfaction."

It's rather difficult to quantify this in most areas.  I don't think too much
time should be spent trying to figure out a number.  More time should be spent on
determining who your customers are (those your deliverables go to) and what
their needs are.  The summary in Deliver at Six Sigma left us with:

  "In all Six Sigma work, keep your customer requirements your main focus."

I agree that if you are pleasing your customer, then you are doing quality
work.  It's just hard to quantify.

		Dave
1273.20. "Which Customer?" by TRCC2::BOWERS (Dave Bowers @WHO) Mon Dec 03 1990 19:36 (33 lines)
  "A defect is anything that results in customer dissatisfaction."


A noble sentiment, but it may not always be possible to satisfy all customers.
Specifically, current customers (those who have bought and are now using our
products) and prospective customers (those who have yet to benefit from our
wondrous stuff) may have conflicting "needs".

As an EIS consultant, I hear a lot of current customers complaining about the
rapid pace of change in our products, both hardware and software.  On the 
hardware side, the onslaught of new processors makes them feel vulnerable to 
management questions about "obsolete" systems and poorly-timed purchases.  From
a software perspective, production systems benefit more from stability than from
new features.  MIS staffs also tend to end up at least 1 major release behind
in terms of training and expertise.

Prospective customers, on the other hand, are generally looking for cutting-edge
technology, and will quickly lose interest if we don't have the newest, hottest
hardware and the most advanced software.    

Unfortunately, prospective customers also like references, so if you
piss off the installed base by pushing the technology at the rate the prospects
demand, the installed base gives poor references!

(I don't even want to think about how quickly prospective customers 
become current customers! ;^)

I'm not sure how we win without greatly increasing support costs (like ongoing
bug fixes to "stabilized" older versions, while continuing development of new
features in a "current" release).  I am sure, however, that we'll continue
to get the wrong answers until we understand the dichotomy.

-dave
1273.21. "Precisely!" by RTL::HOBDAY (Distribution & Concurrency: Hand in Hand) Tue Dec 04 1990 12:55 (9 lines)
    Re .-1 (Balancing needs of current vs. prospective customers):
    
    Excellent insight.  A number of SW engineering groups are struggling
    with how best to deal with this dilemma.  We must find a way to meet
    the stability needs of the current installed base while trying to
    build products that will compete with other vendors' state-of-the-art
    products.
    
    -Ken
1273.22. "Can't get no..." by WORDY::JONG (Steve Jong/T and N Writing Services) Thu Dec 06 1990 19:17 (11 lines)
    Another example, I am told, is the Bookreader program.  When they see
    it, customers invariably ask for character-cell terminal support.  In
    fact, they demand it.  OK, that's a requirement.  But when asked which
    other features they would give up in order to get CCT support added,
    the same customers list features they would rather have first.  So
    when customers are asked one way, one set of priorities emerges; when
    asked in another way, a different set of priorities emerges.
    What does "customer satisfaction" mean in this case?
    
    I prefer the "conformance to requirements" school of quality, because
    generating the requirements is someone else's job 8^)
1273.23. "Is "customer satisfaction" merely a means to an end?" by BIGJOE::DMCLURE (DEC is a notesfile) Thu Dec 06 1990 20:26 (29 lines)
	I have to wonder whether, in our Six Sigma rush to satisfy the
    customer, we aren't overlooking the people who really decide
    upon the success or failure of this company.  I'm talking about
    Digital stockholders: you know, the people Ken Olsen reports to?

	It is assumed that satisfying the customer is the ultimate goal,
    but isn't customer satisfaction merely a means to an end (the end being
    stockholder satisfaction)?  It is also assumed that by satisfying the
    customer we will also automatically satisfy the stockholders, but is
    this always necessarily the case?

	After all, many times the customer is also the competitor, so does
    it make sense to always bend over backwards for someone whose actual
    goal might be to deliberately try and confuse us?  Maybe I'm
    just being paranoid here, but wouldn't the ulterior motives of our
    stockholders be slightly less suspect than those of our customers?
    If nothing else, at least DEC stockholders might have a better 
    understanding of the "big picture" and how DEC should be 
    positioning itself for the long-term.

	In addition to achieving "customer satisfaction", shouldn't "Six
    Sigma" also include the achievement of "stockholder satisfaction"?  After
    all, what good is customer satisfaction if you disappoint DEC stockholders
    (not to mention *potential* DEC stockholders) in the process?  Also,
    shouldn't at least some of the energy currently spent drumming up
    "business" be directed towards drumming-up investor interest as well?
    Isn't the DEC stockholder where the buck really stops?

				    -davo
1273.24. by RICKS::SHERMAN (ECADSR::SHERMAN 225-5487, 223-3326) Fri Dec 07 1990 00:10 (6 lines)
    My understanding is that all of the folks we do business with (buyers,
    sellers, engineers, managers, stockholders) are customers.  That's one
    of the pluses/minuses about Six Sigma.  You have to please everybody
    all of the time.
    
    Steve 
1273.25. "Six Sigma Customer Satisfaction" by MEMIT::HAMER (Horresco referens) Fri Dec 07 1990 12:47 (43 lines)
    I don't think all of the entries in this discussion accurately reflect
    the Six Sigma concept of customer satisfaction and what we need to do
    to achieve it. This conclusion is based on my experience helping groups
    trying to implement Six Sigma (including my own) and on teaching the
    Design and Manufacture at Six Sigma course to a variety of individuals and
    organizations over the past year.
    
    There is no question that chasing after customers, asking them what
    they want, and then willy-nilly trying to implement those "wants" will
    lead to messy products full of half-implemented features-- like
    multi-symptom cold medications that don't really relieve any symptom.
    To boot, those products will be a long time coming because we
    frequently will end up believing that customer requirements are a
    matter of progressive revelation. 
    
    The Six Sigma approach to customer satisfaction does not set customers
    up as some infallible god to dictate directly to suppliers acting as
    menials to fulfill their every whim. 
    
    The main change that Six Sigma forces, in my opinion, is a fundamental
    alteration in the traditional antagonistic relationship between
    suppliers, producers, and customers. Instead of a basic loggerhead
    approach, those three groups have to recognize that the success of any
    depends on the success of all. Unless my suppliers share my commitment
    to customer satisfaction and quality and know what my business is, I
    have no hope of delivering the same to my customers regardless of how
    hard I try.
    
    Once such a change in thinking occurs (and I don't believe it is a
    trivial matter, or one of which we understand all the implications), the
    question of customer satisfaction changes from one of chasing after
    and/or guessing to one of long-term collaboration and cooperation. I
    know what my customer requires not by him or her telling me (or worse,
    by deciding I know best what they need), but by working as a partner
    with that customer so I understand their needs and so they understand
    my capabilities and together we can reach an agreement on what the
    requirements are to which I can provide a satisfactory solution.
    
    What I've described is no easier a path to customer satisfaction, but I
    believe it can be followed and that Six Sigma customer satisfaction is
    an achievable goal.
    
    John H.