
Conference 7.286::digital

Title:The Digital way of working
Moderator:QUARK::LIONEL
Created:Fri Feb 14 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:5321
Total number of notes:139771

257.0. "DEC customer surveys and the DEC employee" by ARNOLD::ROTH (Throw the switch and ponder result.) Mon Jan 26 1987 19:23

    This seems as good a place as any for this note. I want to relate
    what happened to me this morning and get a discussion going about
    this....
    

    ME:= 	Lee Roth, System Manager
    MGR:=	A F/S Unit Manager in the local branch
    
    
    ME: "You wanted to see me?"
    
    MGR: "Yes. I want to know what I need to do to get all "10's" on
         the F/S customer survey." [I as a system manager will be
    	 receiving two or three customer surveys to be filled out and
         returned to DEC F/S corporate. This is the same survey that
         external customers get. A "10" is perfect goodness, a "0" is
         the worst. There are about 10 questions rating equipment quality,
         technical competence of the engineers, spare parts, response
         time, etc.]
    
    ME: "Not this crap again!" [Last year I dinged F/S for some weak areas
        got some real bad vibes from some of my F/S "friends" saying
        that I gave them a bum deal- i.e. they missed out on a free
        gift or trip. I'm ex F/S myself, so I guess they feel I should
        'play the game'.]
    
    MGR: "Well, its like this. My boss told all of us UM's that if we
        didn't average 8.5 or higher on our survey returns then next
        year someone else would be doing my job."
    
    ME: "That sucks! Where did that come from?"
    
    MGR: "From their boss, I think."
    
    [rest of the long discussion not printed here]

    
    Well, as far as I am concerned this is pure blackmail. Nobody should
    be threatened with this sort of thing, either directly or veiled.
    The UM has no control over what the customer writes down on the
    survey. A customer could have perfect service for 51 weeks and have
    a fiasco the week before they receive the survey and really put down
    some low numbers.
        
    The person that told me this is an old friend and I am sure that
    they feel powerless to do anything about it. And the sad part is
    that they are one of the most dedicated, hard working, underpaid
    F/S UMs that I know of!
    
    There must be some people really desperate to get good survey numbers;
    it's far worse than it was a few years ago when I was in F/S.

    Whatever happened to providing excellent service to achieve excellent
    survey results?
        
    Here are my questions:
    
    1) What good is a customer survey if the customers are going to be
       begged and pleaded with to make the numbers artificially high?

    2) What should I do, or not do, to bring some change to a system
       that has these sorts of pressures brought to bear on employees?
    

    3) Does software services have the same kind of pressure to make
       the numbers artificially high?

    
    Sadly,
    
    Lee
257.1. "Excellence versus numbers" by ATLAST::VICKERS (On the Cote D'Griffin, again) Mon Jan 26 1987 21:54
    It is indeed sad.  Unfortunately, the entire services arm of the
    company seems to be totally enthralled with numbers.  The "Think
    Customer" program of a few years ago seemed to actually increase
    this infatuation with numbers.  They came up with even more programs
    to quantify customers instead of visiting customers.
    
    It sounds like the manager you were dealing with was TRYING to do
    the right thing compared to many.  Many are improving the numbers
    in ways that are immoral if not illegal.
    
    My answers to your questions:
    
    1) The customer surveys appear to be almost without any real meaning
       but no one in middle or upper management knows or seems to care.
    
    2) Encourage management to actually talk with customers.  Not just
       the big important ones or the squeaky wheels.  Encourage them to
       read "In Search of Excellence" and, the PG13 version, "A Passion
       for Excellence".  Both treat customers in the RIGHT light which
       is NOT through silly surveys.  Customers are people, not numbers.
    
    3) SWS (or SWR) is just as bad and maybe even worse.
    
    Sad but a believer that right will out,
    
    Don
257.2. "Measurements over True Value?" by TMCUK2::WARING (UK Distribution SWAS) Tue Jan 27 1987 07:25
    I think that customer surveys, in their current form, are a farce.
    
    What tends to happen is that everyone becomes motivated by the numbers,
    and not the true objectives. So, people in the positions closest
    to the customers do whatever they can to get the number, whether
    legitimately (better service) or otherwise: (very quietly) ensuring
    that key dissatisfied customers are excluded from the list, or are
    educated that the mark for 'ok' is 8.5.
    
    How do you measure customer satisfaction? In the long term, it's
    shown by business levels. In the shorter term, it's not so easy.
    I would have thought that the optimum measurement is to ask how
    we compare against last year, worse, same, better, and then to ask
    for feedback as to how we might improve our service.
    
    If the corporation wants numbers, that's what they'll get irrespective
    of their true value. If we want to be best, we need to do a little
    bit more.
    							- Ian W.
257.3. by ECC::JAERVINEN (impersonal name) Tue Jan 27 1987 10:53
    I have been involved in customer surveys both as a 'customer'
    (as a system manager for the FS survey) and on the other side
    of the fence (as a SW specialist taking care of real customers).
    
    The situation in Europe is even more difficult I guess; I agree
    that the numbers are pretty useless anyway, even within a country;
    the perception of how good, say, an "8" is will vary even more from
    country to country (and yet they compare ratings between countries).
    
    Also, there's one thing that I find outright immoral (I don't know
    if this is the case in the US): the customers can return the forms
    anonymously if they so wish; there is a cryptic code number on each
    form and the cover letter assures the customer that it is used for
    statistical purposes only, and Digital has no way of associating
    which form was returned by which customer.
    
    This is plain lying; the code numbers are registered and associated
    with the customer; SW UMs get extra reminders to look after customers
    who gave especially bad reviews (no matter whether they returned
    the form anonymously or not).
    
257.4. "They're a crock" by GOBLIN::MCVAY (Pete McVay, VRO (Telecomm)) Tue Jan 27 1987 10:59
    I was in on the early design of the Customer Survey forms for
    Educational Services courses.  It quickly became apparent that the
    designers (the statisticians and programmers) were at direct odds with
    what management wanted.

    Management wanted a single number that could rate courses and
    instructors: for example, if you were an instructor or manager and your
    Customer Surveys averaged 80% favorable for the quarter, you got a good
    review. Senior management could also use the results in public
    relations and advertising.
    
    Fine--except statistics don't work that way.  If the power failed
    in the lab, students were ticked and downgraded the course.  Also,
    instructors quickly learned that they had to tell customers what
    to mark on the surveys, for their own protection.  And very good
    instructors got marked down by students for passing out too much
    information.  There were also hundreds of other small items, each
    of which could drive the surveys up or down, almost at random.
    
    Despite years of complaints and some VERY dubious statistical analysis,
    the forms have stayed with Ed Services, because management likes
    the single number.  They have become almost as ingrained (and about
    as fuzzy and gray in application) as the SATs and GREs.  I don't
    like them.  I'd like to see the whole system scrapped.
257.5. "They're *more* than a crock..." by TIXEL::ARNOLD (Stop Continental Drift!) Tue Jan 27 1987 11:08
    Before I escaped from the field, we were always notified (formal
    mail, unit meetings, etc) about a month or two in advance of when
    the surveys were going to be sent out.  The message was to "especially
    be on your best personal/technical behavior", since when a customer
    is filling one of these things out, what sticks in his mind the
    clearest is the most recent events.  We were encouraged to "coach"
    the customers in how to fill out the forms, how they related to
    the longevity of the UM, and even how a raft of undesirable surveys
    could drive up Digital's consulting (SWS) prices.  Based on this
    "coaching", what customer would "dare" put down anything less than
    a 9 or 10?
    
    Jon
257.6. "An art and a science" by MASTER::EPETERSON Tue Jan 27 1987 13:10
    The reason that the survey is conducted is to accumulate some
    statistics about the acceptability of performance.  This brings
    to mind what my college statistics instructor would tell us -
    
    STATISTICS IS THE SCIENCE OF LYING WITH NUMBERS!
    
257.7. by ECC::JAERVINEN (impersonal name) Wed Jan 28 1987 11:32
    I only believe in statistics I have forged myself...
    
257.9. "Comments on the survey" by SDSVAX::SWEENEY (Pat Sweeney) Wed Jan 28 1987 21:46
    Candidly, I'm on the side of the "cheaters".
    
    "Cheating" on this survey seems to be disclosing to customers the true
    purpose of the survey: how it will be used for ranking customer services
    personnel within Digital and how it is directly linked to compensation.
    
    When I get such a survey from a large organization that is asking
    questions such as "Are you satisfied with the attitude
    /responsiveness... of your sales rep", I think I'm evaluating the
    organization, not the person.  If I had a problem with the sales
    rep I'd take it to the sales manager and not stab him or her in
    the back.
    
    The farce is that the questions don't act as a diagnostic tool
    to point out potential problems; only face-to-face contact with
    customers can do that.
    
    The survey form can't change.  Its proponents want year-to-year
    comparisons so that stupid questions are propagated from year to
    year.
    
    It violates just about every rule of taking a survey:
    
    (1) There's a "survey season".  So those who are going to be evaluated
    create their bias in synch with the survey.
    
    (2) The 10-point scale is ridiculous.  What is the difference between
    7 and 8?
    
    (3) Customers are coached as follows (this is official, by the way)
    "service that meets your needs rates a 10".
    
    (4) Customers are coached as follows (unofficially) "my raise
    depends on a good survey score."  Ultimately, this leads to the
    customers filling out the survey in the presence of the Digital
    employee in the expectation of some future concession.
    
    (5) The questions are not randomized.  They always appear in the
    same order; therefore, a very high or low score on one response
    biases the others (the halo effect).
    
    (6) The questions are asked in the positive sense, therefore running
    your pencil through all 5's, 8's, or 10's is possible if one is
    bored with the survey.
    
    (7) The questions concern ORGANIZATIONAL quality of service, not
    the individual delivering the service.  Yet the survey is used to
    compensate individuals.  Has the survey ever pinpointed a problem?

    (8) POLICY problems are translated into low scores.  Individuals
    can't do anything about policy.
    
    (9) WRONG SERVICE OPTIONS are translated into low scores: i.e. the
    customer went for minimum service but really needed a comprehensive
    service package, and this dissatisfaction is reflected in the score
    of those who delivered the service.
    
    (10) WRONG ORGANIZATIONAL ASSIGNMENT is translated into low scores:
    Manufacturing problems reflect poorly on Field Service.  SDC problems
    reflect poorly on Software Services.
    
    (11) Some customers deliberately "game" the survey: they think an
    extra-low score will get them more attention as a "problem" customer,
    or that an extra-high score will get them gratitude. 
         
    (12) Scores are so close that year-to-year changes in the scoring
    are attributed EVEN BY TOP MANAGEMENT to measurement error.
    
    (13) (Counter-intuitive) Districts which have the least growth (i.e.
    the most "acclimated" installed customer base) will have the highest
    scores.
    
    (14) (Counter-intuitive) Districts which have the most add-on business
    will have the highest scores.
    
    (15) (Intuitive) Districts which have the least number of customers
    (at least relative to large dollar volume) will have the highest
    scores.
    
    The score inflation will take its toll: with so many scores bunched
    together, those who are out-ranked by 0.005 on a 10-point scale
    will be up in arms.
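
    Point (12) and the score inflation problem are easy to demonstrate
    with a simulation.  Below is a minimal sketch (Python, with entirely
    made-up numbers; nothing here comes from actual survey data) that
    gives ten districts the same underlying "true" satisfaction and adds
    a little customer response noise.  The rankings reshuffle from year
    to year even though nothing real changed:

        import random

        random.seed(1)

        def survey_average(true_score, n_customers=50, noise=0.5):
            """Average of n noisy customer responses, clamped to the 0-10 scale."""
            responses = [min(10.0, max(0.0, random.gauss(true_score, noise)))
                         for _ in range(n_customers)]
            return sum(responses) / len(responses)

        # Ten hypothetical districts with identical underlying quality (9.0).
        districts = ["District %d" % i for i in range(1, 11)]

        for year in ("FY87", "FY88"):
            scores = {d: survey_average(9.0) for d in districts}
            top3 = sorted(scores, key=scores.get, reverse=True)[:3]
            print(year, ["%s: %.3f" % (d, scores[d]) for d in top3])

    The "top" districts trade places on differences of a few hundredths
    of a point: pure measurement noise deciding the ranking.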
    
    Having said all that, I don't think anyone should take it on themselves
    to "expose" the customer survey farce by bringing it to the attention
    of Digital Review or Digital News...
257.10. "Did others have these individual surveys too?" by ATRISK::JEFF (Everbody knows this is nowhere... -ny) Thu Jan 29 1987 15:08
    
    In addition to the yearly surveys, before I escaped from SWS to
    Engineering, we had an individual survey sent out to customers
    after EACH and EVERY piece of consulting (PSS) business delivered.
    
    So I would go out and deliver two days or two weeks of consulting
    and the customer would get a survey to rank my personal performance
    from good ole 1 to 10 on around 10-12 questions.  These surveys
    were directly related to compensation and figured heavily into
    your performance appraisal rating(s) because every UM was obsessed
    with customer satisfaction.  
    
    So not only did you have to "prostitute" yourself to get and deliver
    the consulting business, but you had to make damn sure that the
    particular customer in charge liked you as well as your work.
    These surveys were typically based on personality impressions, since
    the guy (or gal) in charge (who signed your time sheets) was often not
    the one most familiar with your technical contribution.  But this
    was the person who filled out the survey, so you had to
    make sure to suck up to him or her and let them know "unofficially"
    what these surveys meant to your future.
    
    These surveys too, were (and I imagine still are) a total farce.
    
    jeff tancill 
    
257.11. "love dem surveys" by TIXEL::ARNOLD (Stop Continental Drift!) Thu Jan 29 1987 16:46
    I'm pretty sure these types of surveys (re .10) are done at the
    district level on the whim of the local DM.  We also had those before
    I escaped from SWS; ours included such incredibly important questions
    as:
    
    * Did s/he dress well?
    * Is s/he a likeable person?
    * Does s/he keep their desk/work area neat & clean?
    
    And other such pertinent questions, followed near the end by:
    
    * And BTW, how is her/his technical performance?
    
    Jon
257.12Problem with surveys, OR the managers?BCSE::DMCOBURNFri Jan 30 1987 16:1120
    This topic and its replies have really irked me.  I can't believe
    that this type of behaviour is allowed around here.  And I haven't
    seen ONE manager who uses these things speak up.
    
    I used to be a DEC customer, and my cohort and I used to sit and
    fill those ridiculous things out, laughing our heads off, sometimes
    with our F/S rep right there with us.  Once, we decided to play
    a joke on him.  We filled out the idiot thing with all 1s, 2s, and
    3s.  We kept it until we could 'present' it to the rep.  He almost
    passed away on us, right there!!
    
    It seems to me that the REAL problem is the managers who use these
    abominations cloaked as surveys for purposes they weren't intended
    for to begin with.  Do they really believe that these surveys can
    accurately reflect a field person's performance, let alone the quality
    of the product/policies of the organization?
    
    If they do, then I think they are quite naive.  If they don't, then
    I would say their performance is quite questionable.
    
257.13. "Problem with the VP's" by ODIXIE::JENNINGS (Dave Jennings) Fri Jan 30 1987 21:05
    re .12:
    
    You better believe that the local managers take these things seriously.
    This year, the number 1 item on a district manager's goal sheet
    is the customer satisfaction number (besides the other #1 items
    like revenue and margin).  Woe betide the manager whose customer
    satisfaction numbers actually _decrease_ this year.
    
    But the problem isn't at the local level; the seeming importance
    of these survey numbers comes down from the VP and Senior VP level.
    Kinda scary.
257.14. "You remember, the BIG picture." by TPVAX3::DODIER (Have a good whatever........) Mon Feb 02 1987 15:10
    	It sounds as if someone higher up knows this problem exists
    and is looking at doing something about it. There was a position
    open in MK0 (req #G038159) which was looking for a Project Manager
    to look into the customer satisfaction issues. It was suggested
    that this person have a Masters in Business; a good
    understanding of both F.S. and manufacturing was required.
    	In regards to customer surveys, I agree that they
    are not utilized properly in all cases. In the branch I used to
    work in, however, a bad survey meant a site visit from the UM,
    and sometimes the BM, was in order. It was also stressed in my old
    branch that the surveys were to judge the level of service on a
    branch-by-branch level more than a tech-by-tech level. I for the
    most part averaged about a 9.0 on my sites, so I can't really say I
    felt any repercussions from bad surveys. If the majority of someone's
    surveys are good, then one or two bad ones can be looked at a number
    of different ways. You can use the good ones to your advantage just
    as easily as someone could use the bad ones against you. For example,
    if you have 20 sites, and 19 of your surveys give you between an
    8 and 10 for technical competence, and one gives you a 3, what SHOULD
    this indicate to a person of average or better intelligence?
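
    To put rough numbers on that example, here is a minimal sketch
    (Python, with hypothetical scores): one "3" among nineteen scores
    of 8 to 10 moves the average by only about a third of a point, and
    the median not at all.

        from statistics import mean, median

        # Hypothetical tech with 20 sites: 19 surveys between 8 and 10,
        # plus one disgruntled 3.
        scores = [8, 9, 10, 9, 8, 10, 9, 9, 8, 10,
                  9, 8, 10, 9, 9, 8, 10, 9, 8, 3]

        print("mean           = %.2f" % mean(scores))       # 8.65
        print("median         = %.1f" % median(scores))     # 9.0
        print("mean without 3 = %.2f" % mean(scores[:-1]))  # 8.95

    One bad site out of twenty costs about 0.3 on the average; an
    evaluation that reads that as a competence problem is reading noise.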
    
    RAYJ_who_thinks_some_may_be_missing_the_big_picture
    
257.15. "Perception is *too* often Reality" by CURIE::MASSEY Mon Feb 02 1987 16:13
    re: .14 "RAYJ_who_thinks_some_may_be_missing_the_big_picture"
    
    	    On the other hand, what if the 19 of 20 do $5M total business
    	    with Digital, and the 1 with the "3" does $50M.  Sheer numbers
    	    don't always represent the _big_picture_.  If this one has
	    enough clout/influence, their attitudes may be far more
    	    critical.  Digital can't afford to have ANY dissatisfied
    	    customers.
257.16. "Is the customer ALWAYS right?" by TPLVAX::DODIER (Have a good whatever........) Thu Feb 05 1987 12:06
    re:15
    	Understandably, any customer giving a 3 on a survey indicates
    a problem area that needs to be dealt with. This is regardless of
    whether the customer has a PC under a basic contract or multiple
    VAX's under a 24x7 DECservice contract.
    	The big picture I was talking about is in regards to surveys.
    Some surveys typically ask the same questions worded slightly differently
    to see if the person filling them out is consistent. DEC surveys
    may not do this, but they are sent to multiple customers asking the
    same questions. My point is that if a tech has multiple sites with
    similar hardware, the problem in my previous example should not
    necessarily indicate that the tech is technically incompetent. If
    the customer just plain don't like the looks of you, you should
    not receive a poor rating on your performance appraisal because
    of it.
    	To give a true-life example of this, I received a new site at
    a hospital. The head of the computer center thought I looked like
    a druggy. He called up my UM and said, "What are you doing sending
    people like that to a place like this? Remember, this is a hospital
    and we have drugs here." Mind you, this was a large site with all
    24x7 DECservice equipment. My UM's reply was, "You should consider
    yourself lucky, because he is one of the top 3 techs in my group.
    Give him some time and I'm sure you'll agree." Surveys came out,
    and my first one at this site was terrible although all my other
    sites were 8's, 9's, and 10's. Eventually I was able to turn the
    customer around to the point where they would log a call and
    specifically request to see me and only me. 
    	I hope this clears up the point I was trying to make. I was
    fortunate to have a good boss who stuck up for his employees if
    he felt they were right. Some others out there sound like they're
    not as lucky.
    
    RAYJ
257.17. "The Customer is Always Right" by SAUTER::SAUTER (John Sauter) Thu Feb 05 1987 18:51
    re: .16--``If the customer just plain don't like the looks of you, you
    should not receive a poor rating on your performance appraisal because
    of it.''
    
    I claim you should.  The customer should take into account everything
    he feels, and record it in the survey.  If he feels bad about you,
    *for any reason*, you should get a low mark.
    
    However, I also claim that the support you received from your
    management is the kind of support that everyone deserves, under
    your circumstances.  The fact that you were able to turn the customer's
    perceptions around means that your manager was right.  The survey
    tells the manager this--I assume that you got higher marks on later
    surveys.  Management needs this kind of feedback to tell them how
    *you* are doing in your relationship with the account.
    
    If there were an account with people that you could not get along
    with, in spite of your technical proficiency, then the manager should
    pull you out of that account and assign somebody else.  Again, he
    needs the survey results to make this judgement.
        John Sauter 
257.18. "Let's see the other side of that coin." by TPLVAX::DODIER (Have a good whatever........) Fri Feb 06 1987 10:42
    re:17
    	I think we are talking apples and oranges here. If a customer
    does not like the looks of you, it should reflect on the survey.
    
    	My point is that this should definitely not reflect on your
    performance appraisal if it is not justified.
    
    	If your boss disagrees with this, an employee is put into the position
    of prostituting themselves to the customer. I believe the customer
    should get the service they pay so dearly for. If possible they
    should get better than that, but not at the cost of the employee.
    
    	If, for instance, a customer threatened a tech with poor survey
    marks unless they did a sexual favor, should it be done because
    the customer is always right?
                              
    RAYJ
257.19. "F.S. rep is the key" by TPLVAX::DODIER (Have a good whatever........) Fri Feb 06 1987 11:55
    re:0
    
    	Lee, 
    	     I think you have the right attitude and ideas. If you think
    you should be getting better service than you have been, then you
    should mark the survey the way you see fit. The survey, as you know,
    gives you the tool to get the kind of excellent service you deserve.
    If you don't use all the tools available to you, you cannot do the
    job at hand as efficiently as possible.
    	One thing that I think you could do is talk to your F.S. rep.
    This is the person providing your service and a good avenue of
    communication should exist between you and him/her. S/he should
    know exactly what you expect of them and good communication between
    you can only help both of you to reach common goals. This is the
    key to resolving your questions and any problems you may have with
    your service. Most people want to succeed and do well at their job.
    Your F.S. rep is probably no exception to this. 
    	The F.S. rep is in a position to directly create a desirable
    effect. In my eyes it should work like this. The F.S. rep, having
    established good communication with the customer, provides the high
    level of service expected. In turn, the customer is happy with the
    service they get. This reflects in the surveys, and in turn the UM is
    happy with the F.S. rep. The F.S. rep gets a good raise because of
    this, and everyone is happy.
    	Another good avenue of communication that must exist is between
    the F.S. rep and the UM. The UM should ensure that the F.S. rep
    has the proper training and skills needed to do the job. Without
    the proper tools, the odds for success are not in anyone's favor.
    	When a survey comes out, there should be no surprises. The F.S.
    rep should know what kind of service they've been providing in the
    customer's eyes, if the above avenue of communication exists. 
    
    RAYJ_who's_been_there_before
257.20. "other side looks the same..." by SAUTER::SAUTER (John Sauter) Fri Feb 06 1987 12:00
    re: .18
    
    I think we are in violent agreement.  ``If a customer does not like
    the looks of you, it should reflect on the survey.''  I agree.
    
    ``...should ... not reflect on your performance appraisal...''
    I agree here, too.  If your boss disagrees with this, you can get
    into the kinds of dilemmas described in .18.  My recommended solution
    is to get a new (better) boss.
    
    ``If ... a customer threatened a tech with poor survey marks unless
    they did a sexual favor, it should be done because the customer
    is always right?''  No.  If I were the tech in such a case (unlikely,
    nobody would want sexual favors from me) I would accept the poor
    survey marks, and make sure my boss understood why I was getting
    them.  If the customer persists, the boss should take me off the
    account and assign someone who will satisfy the customer, because
    the customer is always right.
        John Sauter
257.21. by ARNOLD::ROTH (Throw the switch and ponder result.) Fri Feb 06 1987 15:30
    Re: .19 communicate with acct rep
    
    Yes, I do now, and therein lies the rub: I know all.
    
    -The account rep expects me to fill it out with all "10's" so that
     they will surely get their bonus (paid weekend, gift certificate, whatever
     it is this time).
    
    -The manager expects me to fill it out with all "10's" so that
     they can keep their job* (my original reason for this note).

     * Implied or real threat? Who knows, see root note.

    
    When I got performance reviews in F/S I was told I would almost
    have to walk on water to get a "1" (highest rating) as an F/S engineer.
    (Scale goes from 1 to 5; 3=Average.) So an absolute top rating would
    be considered rare, right?
    
    I'm quite happy with my service this year but I can't say that it
    was a "10" (I'm using the mindset used when getting performance
    reviews.)
    
    I guess I'll put down what I feel is right (7's, 8's, 9's) and catch
    hell for not filling out all 10's.

    Back to the issue of job threats: Is it ethical to demand someone's
    job if the numbers aren't good enough? What if there is a backlash
    by customers not wanting to fill out artificially high numbers anymore?
    Customers have circled "2" on the response time query because that's
    how many hours it usually took the rep to get there. Will the manager's
    boss get the axe if the numbers are bad? Fooey!
    
    Blackmail sucks, but what can I/you/anyone do?
    
    Lee
257.22. "i saw it too" by CLUSTA::NEEDLEMAN Mon Feb 09 1987 20:50
    
    I KNOW Ed Services uses their Student Opinion Form (SOF) as a type of
    blackmail. From the top down, the numbers were the measure of a class's
    value. Nowhere was knowledge transference a criterion. As an instructor
    I was told my goal was to raise SOF numbers.
    
    Numbers make people look good when rolled up the chain. If perception
    is reality, then this is quality. 
    
    an ex-ed services instructor,
    
    Barry
    
257.23. "If HE fails, why should I suffer?" by SEDSWS::KORMAN (TGIF) Tue Feb 10 1987 14:40
How do you all feel about this situation:

In the UK this year, the SWAS survey is being sent to all customers who received
project/consultancy services (PL72) in FY86. The customer is being asked to mark
the survey on the project/consultancy work that was done.

Now, here's the rub;

99% of PL72 is delivered by the regional Application Centres, but the survey
mark is assigned to the Pre-Sales SWAS Specialist who looks after the account
for the sales office.

In other words, if an Apps Centre specialist has a problem, the SWAS Pre-Sales
guy carries the can on the survey score, and probably loses out on his next
review. Is this fair? Can anything be done about it?

257.24. by COVERT::COVERT (John R. Covert) Tue Feb 10 1987 16:18
>In other words, if an Apps Centre specialist has a problem, the SWAS Pre-Sales
>guy carries the can on the survey score, and probably loses out on his next
>review. Is this fair? Can anything be done about it?

Seems fair -- the SWAS Pre-Sales guy and the salesman are in the job of
"customer satisfaction."  If the Apps Centre specialist isn't doing the
job right, the local office (Sales and Software) ought to know about it
and be resolving the issue before the customer fills out the form.

/john
257.25. by SDSVAX::SWEENEY (Pat Sweeney) Tue Feb 10 1987 21:54
    The complaint isn't that the survey is or isn't fair, or whether
    or not the local office is responsible.
    
    The complaint is that an "individual contributor" is being measured on
    an activity over which he or she has no direct control.  Perhaps the
    individual contributor in this case may not even have been informed of
    the problem with which this customer is taking issue in the survey. 
    
    One of my twenty complaints on the survey is that it takes attitudes
    regarding the OVERALL CORPORATE PERFORMANCE OF DEC and assigns that
    "score" to an individual for the purpose of awarding raises.  It's
    a system that invites abuse.
257.26. "More inconsistency!" by ODIXIE::COLE (Jackson T. Cole) Wed Feb 11 1987 16:07
	The business of dinging IC's on survey results is news to me, and I
agree with Pat that it isn't fair.  If SWS does it, it isn't in the Southern
Area!

	In US Country, the lowest the surveys go, I think, is to the District,
and the top 10 districts get "treats" of some sort.  I also believe attendance
allocations to Excellence Awards will gate on a District's CSS.
257.27. "Quality is job #n" by DENTON::AMARTIN (Alan H. Martin) Wed Feb 11 1987 19:14
Re .24:

>Seems fair -- the SWAS Pre-Sales guy and the salesman are in the job of
>"customer satisfaction."

I thought we *all* had that job.
				/AHM
257.28. "USFS HQ HAS BEEN LISTENING!!!!" by USMRW4::GMCDOUGALL Tue Feb 17 1987 11:16
    The following NOTES REPLY is co-authored by.................
    
    Gary McDougall                        Hank Millette
    Program Manager                       Services Sales Support Mgr.
    USFS Customer Satisfaction Survey     USFS HQ
    
    Reporting to Don Zereski..............Vice President, USFS
                                        

    __________________________________________________________________________
    
    YES, IT'S TRUE!!!  "KEY" USFS HQ Mgt., in Westboro, Mass., have
    been closely monitoring communications associated with the "CUSTOMER
    SURVEY NOTES FILE" since the original NOTE appeared on January 26,
    1987.                            
    
    Mgt., at HQ, TRULY CARES to listen and appreciates everyone's candor
    and boldness in stepping forward to have your voice heard.  In fact,
    the information shared thus far couldn't have come at a better time,
    as we are currently proposing a new Customer Survey strategy for
    the FY88 timeframe and beyond.  Therefore, your feedback is valuable
    in this regard.
    
    However, to assure appropriate Mgt. focus on this topic, we
    respectfully ask that you route your future communications, on ALL
    Customer Survey issues away from the NOTES FILE, to the Mgt. source
    who can potentially incorporate YOUR THOUGHTS into future planning....
    The USFS Customer Satisfaction Survey Program Manager - Gary McDougall
    
    Acceptable feedback criteria are as follows............
    
        1.  ALL Communications MUST BE PROFESSIONAL in nature.
        
        2.  Each issue MUST BE CLEARLY STATED/DEFINED.
    
        3.  RECOMMENDATIONS for change MUST ACCOMPANY EACH ISSUE.
    
        4.  ALL COMMUNICATIONS SHOULD BE IN WRITING, using Inter-office
            memo or Electronic Mail, and sent to.........   
    
                       Gary McDougall
    
                           DECmail:  Gary McDougall @YWO
                           VAXmail:  USMRW4::GMCDOUGALL
                            LOC/MS:  YWO/B3
                               DTN:  292-2290
    
    THANK YOU, IN ADVANCE, FOR YOUR ATTENTION AND COOPERATION IN THIS
    MATTER.
    
    Sincerely,
    
         Gary McDougall        Hank Millette         
    
    
    
    
257.29. by COVERT::COVERT (John R. Covert) Tue Feb 17 1987 13:15
>    However, to assure appropriate Mgt. focus on this topic, we
>    respectfully ask that you route your future communications, on ALL
>    Customer Survey issues away from the NOTES FILE, to the Mgt. source
>    who can potentially incorporate YOUR THOUGHTS into future planning....
>    The USFS Customer Satisfaction Survey Program Manager - Gary McDougall

As one of the moderators of this file, I'm glad you're monitoring this
conference and that you've given a contact people can send mail to.

However, the suggestion that future communications be routed "AWAY" from
this conference is inappropriate.  A more appropriate suggestion would be
to ask that anyone writing in this conference send a copy directly to you
if they want to be sure you read it.

/john
257.30. "Don't stop, but help out too" by CRFS80::RILEY (Bob Riley @DDO Chicago Central Area) Wed Feb 18 1987 14:23
    
    I had a nice reply ready for .28, but when I logged in to enter it,
    John Covert said it all in .29!
    
    I totally agree with our moderator.
    
    "jackin' the house", Bob
    Who is intimately involved with the survey, and has been for many
    years, and who also reports to Don Zereski (well, there may be 
    three other managers in between...)   :-}}
    
257.31. "Customer Satisfaction Surveys" by RIPPLE::KOTTERRI (Rich Kotter) Fri Jun 10 1988 23:55
    Following is a note I posted in MARKETING. Somebody pointed out
    this discussion to me, so I post it here as well.
    
    
              <<< ASIMOV::DUA2:[NOTES$LIBRARY]MARKETING.NOTE;1 >>>
                   -< Marketing - Digital Internal Use Only >-
================================================================================
Note 444.0                Customer Satisfaction Surveys                7 replies
RIPPLE::KOTTERRI "Rich Kotter"                       51 lines  10-JUN-1988 12:21
--------------------------------------------------------------------------------

    I wish to raise a concern about Customer Satisfaction Surveys.
    
    Every year, Sales, Field Service, and Software Services each sends a
    separate survey to their list of customers. Various people in these
    Digital organizations, especially the managers, are measured on the
    results of these surveys, and they are *highly* motivated to get good
    ratings from customers. 
    
    These Digital people are aware of who will be surveyed, when the survey
    will be conducted, who responded, and the nature of their response. In
    some cases, they can specify who should or should not be surveyed.
    Digital people are encouraged to "manage" the survey process, and to
    "ask" the customers to give them a high rating. They are instructed to
    "set the expectation" of the customer that a good rating is a 9 or 10
    (out of 10), and that it is considered a failing mark to be rated 7 or
    below. In some cases they are encouraged to "push" for the desired
    results.
    
    If a rating of 7 or below is received from a customer, the people
    responsible for handling that customer must "manage" the negative
    response by meeting with the customer and devising a plan to overcome
    the cause of the negative response.

    While I applaud the motivation of customer satisfaction surveys, and
    think they *can* be a useful tool, I consider the way that they are
    conducted to be almost, if not outright, unethical. It's like putting the
    fox in charge of the chickens. I know people who have objected to
    this process and have been ignored and/or chastised.
    
    If we are serious about customer satisfaction, then I think these
    changes should be made:
    
    1- No more than *one* survey from Digital should be received by
    a customer per year. This idea of multiple surveys annoys my customers.
    
    2- The surveys should be conducted by an independent organization,
    and not by the organization(s) being measured.
    
    3- The surveys should be random, and at an unpredictable timing,
    to avoid "prepping" the customer for the survey.
    
    4- Customers should not be instructed on what a "good" or "bad"
    rating is. Do we take them for idiots?
    
    5- Follow up on negative responses should only be done if the customer
    has requested it, in response to a question to that effect on the
    survey.

    These are my thoughts. Anyone care to comment?
    
    Rich
257.32. "I know I want to get Hawaiian beach sand on my behind!" by NCCODE::PEREZ (The project penguin is dead!) Sat Jun 11 1988 05:34
Customer satisfaction surveys don't measure customer satisfaction,
they measure our ability to manage customer satisfaction surveys!

Locally, this is known as the Dialing For Tens (DFT) campaign!  After
all, we want to garner as many Excellence Awards as possible, right?

D
257.33. "*everyone* in the pool" by BOSTON::SOHN (Where does Notes go on my CLAR?) Tue Jun 14 1988 01:40
re: .31

	We *don't* have a choice as to which customers to survey (within
	limits) - there are score adjustments based on percentage of 
	customers surveyed. Obviously, if you have one bad account of
	15, yeah, you can skip it - but you're not doing too badly, anyway...

	We just went over the '88 survey. We did OK. My peeve is the scale -
	the instructions make it clear that the scale is really a 7 to 10
	scale. We have customers who purposely used the full 10-point scale, anyway...
	good for them.

	Perhaps our district is different, 'coz we have a small potential
	customer base, and must guard long-term relationships. 'Course,
	it could be us trying to protect our top 10 rating...

eric
257.34. "A customer perspective" by EUCLID::WARFIELD (Gone Golfing) Tue Jun 14 1988 17:25
When I was external to Digital, we found out the hard way that the Customer 
Satisfaction Survey was an exercise in fantasy.  We had not been properly
"coached" on how to fill out the survey.  During the previous year we had
several significant hardware problems that took an extended period for
Field Service to resolve due to logistical problems acquiring the required 
spare parts.  (We were located in the Greater Boston Area, so if we had
problems, can you imagine what it would be like in a "remote" area?)  On the
survey we marked those questions about availability of parts, etc. very poorly,
but rated the Field Service Rep highly based upon his ability to fix the
problem when we received the parts.

Well, after that experience we learned how to fill out the survey: decide
what ratio of 9's and 10's you want to give, and fill in the boxes without
looking at the questions.  Now the only customer satisfaction information
I believe about Digital is that which comes from independent 3rd party 
surveys.  

In spite of this, when I left I was determined to come to Digital because it is 
a good company.  However, I feel that until we are willing to realize that the 
Customer Survey Emperor has no clothes, we will never become as great as we 
could be.

Larry Warfield
257.35. by GIDDAY::SADLER (I'd rather be skiing....) Wed Jun 15 1988 04:23
    I'm currently an FSE with 5 1/2 years with DEC and definitely am
    of the opinion that the customer surveys as they stand at the moment
    are little more than a farce. Several months before the survey time
    we engineers are told to go out and start coaching customers on
    how to fill in the survey forms. Letters are sent from the branch manager
    to customers, and we've even sent out a "mini-survey" to customers.
    The results from the mini-survey are used to target those customers
    that look like they may give bad responses during the real survey.
    These customers are phoned up and asked "why?" or even visited by
    a manager. What a joke. To make these surveys work as I imagine
    they are intended, several things should happen.
    	- send the surveys out to random customers at random times
    (talked about already, but not happening)
    	- make allowances when customer survey results are used to judge
    engineers' performance for salary reviews etc.
    	- stop sending branch managers who do well to places like
    Hawaii. This may stop the artificial results, as managers no longer
    have an incentive to "fix" survey results, and it no longer comes
    down to inter-branch competition.
    
    
    .jim.
257.36. "Moved by moderator - ACT" by 2CVG::THOMPSON (Let's move Engineering to Florida) Wed Jun 15 1988 13:30
================================================================================
Note XXX.0                     To be or not to be                     No replies
HANDY::COHEN "Bowling for Towels"                    17 lines  15-JUN-1988 08:18
--------------------------------------------------------------------------------

As a former Field (now Manufacturing) employee, I hope we all don't lose sight of
the fact that, though the FS Survey may have some PR value, the REAL value is to
push us to provide excellent service.  Re 257.35: if a mini-survey highlights
problems and we correct them, isn't that a win for the customer?  

The drive to "make the #'s" is in lots of ways disfunctional and puts the
emphasis in the wrong place, but from the customer's perspective, it ensures
contact with customers who might not get that contact and provides a way for
people to raise issues. 

I don't believe any customer can be COACHED to say they're getting good support
if they feel their support stinks.  It just gives the customer another weapon
("either fix this or it will show up in the survey") and in the best case can
serve to improve communication, even if the results may not be what they
appear to be. 

Mark
257.37. "It could be better, but we DO need it..." by CHOVAX::YOUNG (A Savage Horde is Out for Blood ...) Fri Jun 24 1988 06:18
              <<< ASIMOV::DUA2:[NOTES$LIBRARY]MARKETING.NOTE;1 >>>
                   -< Marketing - Digital Internal Use Only >-
================================================================================
Note 444.11               Customer Satisfaction Surveys                 11 of 19
CHOVAX::YOUNG "Dumb, Expensive, Dumb ... (Pick Two)" 75 lines  12-JUN-1988 11:52
                  -< It could be better, but we DO need it. >-
--------------------------------------------------------------------------------

    There certainly are big problems with the current Customer Satisfaction
    Surveys.  But it is also certainly true that we desperately NEED
    them.
    
    Despite all of their current faults, I honestly think that they
    do much more good than harm.  However I also think that they could
    be made a LOT better.
    
    HARMS:
    	1)  THREATS  --  The most serious charge I have heard leveled
    	against them is that some customers are threatened/intimidated
    	into giving the ratings that the rated group wants.  In fact,
    	though, I doubt that this is very common, for several reasons.
    	Most customers respond to threats/intimidations from "vendors"
    	exactly the way that you or I would: they raise hell.  Furthermore,
    	in all the time that I have been in the field, and before that
    	when I was a customer of Digital's, I have NEVER heard of a
    	single instance of this within my locale.  So while it may 
    	happen, it just cannot be very common.

    	2)  COACHING  --  Well this happens all the time, and I have
    	seen it all over.  The reason that it happens is that the surveys
    	give no conceptual framework for what the "Numbers" are supposed
    	to mean.  For instance is a "5 out of 10" supposed to mean
    	"Average", "About the usual I have seen", or "I am satisfied
    	about 5 tenths of the time"?  Or maybe its supposed to be like
    	grades in school:  10=A+, 9=A, 8=B, 7=C, 6=D, 5-or-less=F ???
    	Frankly, I go with percentages myself.  When the subject comes
    	up, I tell customers "Just tell us what percent satisfied you
    	are.  100%?  Then put down a 10.  Only half satisfied?  Then
    	put down a 5."  If the damn surveys said this on them, then
    	there would not be any latitude for telling the customer what
    	a "bad" score and a "good" score were.

    	3)  CONFUSION & IRRITATION  --  This happens too, and obviously
    	the biggest problem with increasing the frequency of the surveys
    	as many have suggested (and I also support), is that it will
    	make this problem worse.  The real reason that this happens,
    	though, is that there are WAY too many questions on the forms.
    	Some are confusingly similar, there is little explanation, and
    	they combine things that should not be combined (like Telephone
    	Support and the local office).  There is no way that a customer
    	can answer these simply.  They have to schedule a block of their
    	time to sit down, read over each question and then carefully
    	think about the answer.  As for myself, I do not have time
    	for such surveys that are sent to me and I usually throw them
    	out.  What the surveys SHOULD be is a simple postcard that is
    	mailed to the customer in an envelope.  The customer opens it,
    	fills out the 5 or fewer simple questions, drops it in the
    	mail, and then goes on to his next letter.
    
    Here for instance is my idea of what a good Survey Card (for SWS)
    would look like:

    	    DIGITAL Software Services Customer Satisfaction Survey
    
    1.  Approximately what percent satisfied would you say you are
    with the Software Services you receive from DIGITAL?
    
 0%-[] 10%-[] 20%-[] 30%-[] 40%-[] 50%-[] 60%-[] 70%-[] 80%-[] 90%-[] 100%-[]

    2.  What is the worst thing about DIGITAL's Software Services?
    ______________________________________________________________
    ______________________________________________________________
    
    3.  What is the best thing about DIGITAL's Software Services?
    ______________________________________________________________
    ______________________________________________________________

    4.  Should we: 
    	  A-[]	Keep your response anonymous.
    	  B-[]	Have someone from the local office follow up on your concerns.
    	  C-[]	Have someone other than the local office follow up on your 
    		concerns.
    	  D-[]  (none of the above).
257.38. by CHOVAX::YOUNG (A Savage Horde is Out for Blood ...) Fri Jun 24 1988 06:21
              <<< ASIMOV::DUA2:[NOTES$LIBRARY]MARKETING.NOTE;1 >>>
                   -< Marketing - Digital Internal Use Only >-
================================================================================
Note 444.12               Customer Satisfaction Surveys                 12 of 19
CHOVAX::YOUNG "Dumb, Expensive, Dumb ... (Pick Two)" 48 lines  12-JUN-1988 12:19
--------------------------------------------------------------------------------

    Re .0: (and others...)
    
>    Digital people are encouraged to "manage" the survey process, and to
>    "ask" the customers to give them a high rating. 
    
    This I think is the root of the whole problem.  People trying to
    manage the process of measurement, rather than trying to manage
    what is being measured.  The whole point of the surveys is SUPPOSED
    to be (A) to focus our (the fields) attention on customer satisfaction.
    It is NOT supposed to be about (B) manipulating the survey results
    any way we can get away with.  Any plan to improve the surveys should
    try to improve (A) while eliminating (B).
    
    Some of your suggestions:
    
>    1- No more than *one* survey from Digital should be received by
>    a customer per year. This idea of multiple surveys annoys my customers.

    I dealt with this a little in -.1.  I think that it is the surveys
    themselves, and the way that they are put together, that is annoying.
    So if one is annoying, multiple ones are multiple-annoying.  A better
    survey is the real answer here.

    2-  The idea of an independent org. running the surveys:
    
    Unless it is DIGITAL, most of my customers would not answer them.
    This is because they consider their business practices a *SECRET*.
    They will NEVER answer business questions over the phone.  I guess
    an independent could run it, as long as it said DIGITAL(trademark)
    on it and the return address was also Digital.
    
    Also, understand that part of the reason that we have the local
    office involved currently is that Digital has next to no idea, at
    the corporate level, who is and is not a customer.
    
    (Most of the other ideas in .0 I agree with)
    
    --  The idea of decoupling the survey ratings from local performance
    metrics:
    
    This is throwing the baby out with the bath water.  This would
    eliminate the problems (B above) but also eliminate the benefits
    (A above).  If managers were no longer rated on Customer Satisfaction,
    then we would be back to the old (bad) situation where dollars were
    the only things that managers were rated on, and dollars were the
    only things that anyone seemed to care about.

    --  Barry
257.39. "Project Bar None Idea Suggestion" by DIXIE1::CARNELL (DTN 351-2901 David Carnell @ATO) Fri Jun 24 1988 12:03
    < Note 458.0 by ODIXIE::CARNELL "DTN 351-2901 David Carnell @ATO" >
                     -< Project Bar None Idea Suggestion >-

    I placed this topic in the MARKETING conference and am placing a
    copy of it here, believing it to be pertinent.
    
    PROJECT BAR NONE IDEA SUGGESTION:
    
    I came up with an idea that I thought could help make an impact
    on building customers, revenues and profits.  I wrote it out and
    sent it to a variety of management levels but received no response.
    
    I still believe it is valid and could have a significant impact.
    
    I will create my memo here, looking for feedback on its validity,
    and perhaps if it is valid, maybe the idea will find a home since
    its creation does not fall under my current job responsibility.
    
    MARKETING IDEA SUGGESTION "Project Bar None" by David Carnell.
    
    ASSUMPTION:
    
    Digital could significantly increase product and services sales
    revenues, and customer loyalty and retention, if Digital could
    dramatically increase the "meaningfulness" of all the various Digital
    customer satisfaction surveys.
    
    CONCEPT:
    
    Using our latest bar-code information computer technology, customize
    each and every individual Digital Customer Satisfaction Survey to
    match the perceived and wanted value satisfactions of "each" individual
    customer contact to whom we are sending the various annual Digital
    customer satisfaction surveys.  Make EACH question on EACH survey
    both quantitative and qualitative, so that each question reflects,
    in each individual customer's own words, what he feels is most
    important "particularly" to him in ensuring his customer satisfaction
    and retention.  Since it uses bar-code technology, we might call the
    project "Bar None."
    
    This concept could be especially important in increasing the impact
    of our total Digital efforts to get, keep and grow DNA program targeted
    accounts.
    
    BENEFITS:
    
    1.  The "quantitative" score for each question on each questionnaire
    sheet would measure precisely in the customer's own words and thinking
    what was important to him in his terms of measuring his customer
    satisfaction, which if met at a high perceived number value, would
    lead him both to higher levels of loyalty and to higher levels of
    perceived value in doing business with Digital, with the increased
    perception that only Digital can best satisfy his wanted value
    satisfactions.
    
    2.  The total score for each questionnaire sheet for each survey
    type would more accurately measure total satisfaction, by each
    individual, site and overall customer account, since the questions
    and value satisfactions are each defined and determined by each
    individual customer contact, unique to "his" questionnaire.  Scores
    would be more insightful on where Digital stands with each particular
    individual, each particular site, and each particular account, since
    all questions are individually "unique" to each individual, site
    and account.
    
    3.  The "qualitative" feedback, obtained from each question, would
    serve as a means of idea and strategy generation, both for a given
    customer contact as well as for the site and overall account, on
    what Digital actions and attributes to reinforce, and on what Digital
    actions and attributes to consider changing.
    
    4.  Overall account loyalty and retention would be increased by
    Digital more thoroughly knowing what our customer account contacts
    are "thinking" in relation to what's important to them "in particular"
    because "each" survey questionnaire is custom-tailored to what each
    contact within the account defines as being most important to our
    satisfying his desired value satisfactions.
    
    HOW "PROJECT BAR NONE" WOULD BASICALLY WORK:
    
    The customer contacts in the current databases used for the various
    surveys, along with all new customer contacts, would be asked to
    define for each type of survey (SALES, SOFTWARE, FIELD SERVICE,
    ETC.) 15 factors or so of desired, wanted value satisfactions that
    are most important to them "individually" in Digital ensuring their
    optimum customer satisfaction.  Each customer account contact would
    be asked to phrase these factors, which he determines, into the
    questions that he wants to be asked in each respective
    Digital survey that is going to be sent to him.  We use HIS words,
    keeping every question unique.
    
    These factors/questions by individual, unique to each individual
    customer account contact, would be added to the database.  For each
    respective survey, each customer contact, his organization, his
    site location, and for each individual question, there would be
    assigned a "unique bar-code number."
    
    When appropriate for sending to the customer contact, each respective
    survey questionnaire for each specific customer contact would be
    printed out using our latest publishing/laser printing technology.
    The printed questionnaire for each contact would include the UNIQUE
    BAR-CODE assigned for each item as indicated above.
    
    Beside each question would be the traditional quantitative number
    rating circles.
    
    Below "each and every" question would be four lines, with the heading,
    FURTHER COMMENTS/LIKES/DISLIKES/SUGGESTIONS:
    
    The customer account contact receives each of our several customer
    satisfaction survey questionnaires, each having the unique questions,
    which each customer contact has determined specifically for himself
    as being most important to HIM.  He completes the surveys and returns
    them to Digital.
    
    Using optical readers/laser guns/light pens, the "quantitative"
    number scores are recorded, along with the unique bar code
    identifying the customer and the question.
    
    This quantitative score information goes into the database, where
    it is correlated and analyzed in many different ways, so that the
    score has optimum meaning for each specific contact, and therefore
    ultimately by site, account, industry, Digital Unit, District,
    Area, Geography, and Corporate.
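    A matching sketch of the roll-up step, again with invented names
    and data (assume triples of contact bar code, question bar code,
    and score arrive from the optical reader):

        # average scanned scores at any level of the hierarchy
        from collections import defaultdict
        from statistics import mean

        def roll_up(db, scans, level):
            # level is a grouping key such as "site" or "account"
            buckets = defaultdict(list)
            for contact_id, _question_id, score in scans:
                buckets[db[contact_id][level]].append(score)
            return {k: mean(v) for k, v in buckets.items()}

        db = {"CON-1": {"site": "Nashua",    "account": "Acme Corp"},
              "CON-2": {"site": "Merrimack", "account": "Acme Corp"}}
        scans = [("CON-1", "QST-1", 9),
                 ("CON-1", "QST-2", 7),
                 ("CON-2", "QST-3", 8)]
        print(roll_up(db, scans, "site"))     # per-site averages
        print(roll_up(db, scans, "account"))  # one figure per account

    The same grouping key could be industry, Unit, District, Area, or
    Geography once those fields are carried in the database.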
    
    Because the scoring numbers would no longer be derived from
    Digital's standard questions, common to all customer contacts for
    each type of survey, but from tens of thousands of different
    questions, each defined by and "unique to" a specific individual
    customer contact, Digital would obtain a new measurement of
    optimum meaningfulness.
    
    After the above optical input has taken place, each survey sheet
    might then be sent to a "marketing skunkworks department."  There
    the "qualitative" input written beneath each unique question
    (which the contact himself earlier determined as most important to
    him in particular) would be correlated and analyzed.  From that
    "customer intelligence," recommended actions could be derived with
    a high probability of being accurate; once implemented, they would
    positively affect all Digital strategies for growing products and
    services, markets, customers, revenues, margins and profits.
    
    PHASES:
    
    The changeover should be relatively nominal in cost.  The impact
    on the traditional continuous measurement, however, would be
    significant: measurement would no longer be continuous on the same
    limited number of standard questions year after year.  It would be
    dynamic, changing from year to year as customer contacts are added
    and deleted, and as the tens of thousands of factors, "unique to
    each customer contact," change with them.
    
    The traditional accumulated score by Digital Group, Unit,
    District, Area, Geography, etc., would continue to be generated,
    but the content of the measurement would shift from "standard"
    question focused to "uniquely individual customer contact focused"
    and "uniquely individual account focused," since the scores would
    now reflect what each contact, site, and account defines as
    important.  From a marketing perspective, this increases both the
    meaningfulness of the measurement and the impact of the proactive
    actions Digital derives from the custom-tailored quantitative
    scores and the qualitative customer intelligence being gathered.

    


257.40HOCUS::KOZAKIEWICZShoes for industryFri Jun 24 1988 13:3630
    My own opinion is that the surveys are flawed primarily because
    of the way in which they are conducted.
    
    1.  Any organization serious about accurately measuring customer
        satisfaction using sampling techniques and statistics should
        *not* be running the process themselves.  If we want to be
        taken seriously, we should be hiring an independent firm to
        conduct the surveys.
    
    2.  On a scale of 1 to 10, 5 is the median, not 8.  It is insulting
        to our customers' intelligence to suggest appropriate scores.
        If we truly believe that only scores over 8 are "good", why
        the hell do we need all that resolution to measure the "bad"?
    
    3.  Most important, if "managing the process" can shift survey scores
        .5 to .75 points (we have been told that it does, and things
        like the distribution of excellence awards are dependent upon
        differentials of this magnitude), then the usefulness of the
        satisfaction metric as a gauge of true customer satisfaction
        is significantly diminished.  To the point: it means that score
        differences of .75 points are meaningless, yet we reward ourselves
        for changes of this magnitude.  (A small simulation below makes
        this concrete.)
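    
    Something like this, in Python (the 7.5 satisfaction level, the
    2.0 spread, the 50 returns per district, and the .75 shift are all
    assumed numbers, not official figures):

        # two districts with IDENTICAL true satisfaction; one of
        # them "manages the process"
        import random
        random.seed(1)

        def district_score(true_sat, n_returns, coaching=0.0):
            # mean of n survey returns, clipped to the 0..10 scale
            scores = [min(10.0, max(0.0,
                          random.gauss(true_sat + coaching, 2.0)))
                      for _ in range(n_returns)]
            return sum(scores) / n_returns

        honest  = district_score(7.5, 50)
        coached = district_score(7.5, 50, coaching=0.75)
        print(round(honest, 2), round(coached, 2))
        # the coached district usually "wins," with nothing at all
        # different about the service the customers received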
    
    Regarding the previous reply, I don't think that a technical solution
    is appropriate.  We need to turn the survey over to an independent
    agent and STOP MESSING WITH THE PROCESS.  Only then can we say that
    we really care about what our customers think.
    
    /Al
    
257.41CHOVAX::YOUNGA Savage Horde is Out for Blood ... Fri Jun 24 1988 19:3232
    Re .40:
    
>    1.  Any organization serious about accurately measuring customer
>        satisfaction using sampling techniques and statistics should
>        *not* be running the process themselves.  If we want to be

    This is *NOT* the point of the Surveys.  This is not a "Product
    Satisfaction" survey whose point is to be used as an advertising
    vehicle, like "95% of Chevy NOVA users said they would recommend
    it to a friend...".  Nor is it meant to be a market-research
    effort so that the VPs can annually review our ratings and decide
    whether we need to "focus on the customer this year."  Nor is the
    point "to be taken seriously."  The point is to BE serious about it.

    The POINT of the surveys is to focus the field's attention on customer
    satisfaction.  I have already mentioned the problems of having an
    independent perform the surveys (.37, .38).  What is wrong with the
    surveys is not that they are conducted by the field, but that it
    is so easy (and even ENCOURAGED) to mis-conduct them.

    2.  SCALE:  The problem is NOT whatever the scale is, or whatever
    the median is.  The problem is that, whatever it should be, we have
    NO OFFICIAL designation of what it is supposed to mean.  We do not
    tell customers ON the survey what it is supposed to mean.  So they
    either 1) decide for themselves what it means.  Since everyone has
    a different idea, you get scores all over the board for the same
    real level of satisfaction.  That means HUGE statistical error.
    (A quick sketch below shows how much spread this adds.)  Or
    2) the Digital folks being measured TELL them what it ought to mean.
    Not surprisingly, they do much better on the survey results.
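
    The sketch (Python; every number in it is an invented assumption):

        # every respondent is EQUALLY satisfied, but with no official
        # meaning for the scale each applies a personal anchor
        import random
        from statistics import stdev
        random.seed(1)

        TRUE_SAT = 7.0

        # undefined scale: private anchors scatter the scores
        loose = [TRUE_SAT + random.choice([-3, -1, 0, 1, 2])
                 for _ in range(200)]
        # a scale defined ON the survey: small residual disagreement
        tight = [TRUE_SAT + random.gauss(0, 0.5) for _ in range(200)]

        print(round(stdev(loose), 2), round(stdev(tight), 2))
        # same real satisfaction, several times the spread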


    --  Barry
257.42HOCUS::KOZAKIEWICZShoes for industryFri Jun 24 1988 20:3326
    < Note 257.41 by CHOVAX::YOUNG "A Savage Horde is Out for Blood ... " >

>    This is *NOT* the point of the Surveys. ...  The point is to 
>    BE serious about it.
    
    I know quite well what the point of the surveys is, thank you.
    How can we BE serious about satisfaction when the metric does
    as much to measure how well the process is managed as gauge 
    satisfaction?  

>    What is wrong with the
>    surveys is not that they are conducted by the field, but that it
>    is so easy (and even ENCOURAGED) to mis-conduct them.
    
    Come now.  If it is so easy to mis-conduct the surveys (I agree),
    the only real fix is to remove the responsibility for conducting
    them from the Field.  
    
    As for the rest of your reply (scale, etc.), that was my whole point.
    What the range of absolute numbers are is unimportant.  What is
    important is that they be determined by a disinterested third-party
    with methods that are rigidly consistent from area to area and year
    to year.
    
    /Al
    
257.43Immovable objectMERIDN::BAYYou lead people, you manage thingsFri Jun 24 1988 22:3137
    re .41
    
    >The point is to BE serious about it.

    And I think the point in .42 is that putting a note in the Notesfile,
    or sending out a VAXmail to @EVERYONE, or amending the corporate
    policies or making an amendment to the constitution will not magically
    make people take things seriously.
    
    The key is to make the measurement external and objective.  As
    long as it is easier to fudge the measurement than it is to go out
    and REALLY do what must be done to satisfy customers, the measurements
    will get fudged.
    
    I don't think the fudgers would do this if we were talking money instead
    of customer sat points, but it isn't unheard of.  How much less
    significant are CS points than real dollars?
    
    You are absolutely right that no one takes them seriously.  But
    if it becomes impossible to fudge the results, then it will have
    to be taken seriously.  Which is easier?  Turning the management
    over to a professional group that does such things for a living
    and has it down to a science, or changing all field personnel's
    attitudes?
    
    As for my other pet peeve:  If DEC wants customer satisfaction to be
    taken seriously, then hit 'em (what, THEM?  Hit US!) where it hurts.
    Don't use CS to determine how many excellence award slots are open in a
    district (which in my opinion don't have a global appeal anyway) - use
    it for budgeting salary increase dollars. Observe how quickly things
    turn around after the initial panic subsides. 
    
    Imagine getting paid based on the satisfaction the customer has
    with the work you do for him/her.  Sounds remarkably like real life.
    
    Jim
    
257.44HOCUS::KOZAKIEWICZShoes for industrySat Jun 25 1988 15:136
    re: .43
    
    Amen, brother!
    
    /Al
    
257.45about having independents run it...CHOVAX::YOUNGA Savage Horde is Out for Blood ... Sun Jun 26 1988 04:5269
    Re .40,.42,.43,etc.:
    
    First of all I happen to think that the vast majority of field people
    DO take the CS surveys seriously.  The problem is not that; it's
    that it has become acceptable to try to manipulate the results.
    In fact it has become clear that if you do not try to manipulate
    them, you probably are not going to end up going to Hawaii or wherever.
    
    It is also true that managers in the field have CS as one of their
    primary metrics for their performance reviews, so in a way it DOES
    affect their salary.  For that reason, I think that we already have
    the right amount of emphasis on the CS surveys.    

    What we lack is sufficient explanation of what they are supposed
    to be and what our role in them is supposed to be.
    
    Let me directly address the idea of an independent survey taker.
    First of all what do you think that a "Professional" company will
    be able to do that we do not already do?  It has been my experience
    that:
    	A)  A customer will NOT even respond to a questionnaire from
    	an 'independent' source because they regard their business
    	practices as SECRET.  The only way that they would reply to
    	these is if they hear from someone they personally KNOW from
    	Digital that this is in fact being done for Digital.  Note 
    	that this has 2 results:
    		1)  Many will not bother.  They will throw it away.
    		This means less accuracy, and more statistical error.
    		2)  Those who do bother will be talking to us...
	B)  Even if they do not ask us directly, we (those who manipulate
    	that is) would still be asking them.  After all, what would
    	stop them?  We (they) work with the customer 40 hours a week.
    	As for myself, I usually find out about the surveys from my
    	customers BEFORE I find out from Digital.
    
    In short, the problem (coaching) would still be possible; it
    would still benefit those who do it, so they would STILL do it.

    The real opportunities for change are customer understanding and 
    our own motivations.  What we NEED is a survey that does not confuse
    the customer into coming to us to fill it out for him.  What we NEED
    is a survey that goes out so often that it is more efficient for
    management to actually work on customer satisfaction than it is
    to try and control the survey's interpretation.  And finally, what
    we NEED is sufficiently decisive leadership from our upper management
    so that our middle and lower management will have no doubt about
    what Digital thinks "The Right Thing" is in this situation.

    Having independents run the survey will not solve our problems for
    us.

Re .43:
    
>    to be taken seriously.  Which is easier?  Turning the management
>    over to a professional group that does such things for a living
>    and has it down to a science, or changing all field personnel's
>    attitudes?

    Oh, you are right, the first IS easier.  I just do not think that
    we should be trying to take the easy way out here.  We got into
    this situation because too many of us are already taking the easy
    way out of a problem.  That is, there are too many of us managing the
    CS surveys and not enough of us actually managing CS.  I also believe
    that it is INFINITELY better to change all field personnel's attitudes.
    That is, after all, what the CS surveys were intended to accomplish
    in the first place.

    
    --  Barry
257.46We're not so far apart...CHOVAX::YOUNGShoes for the Dead!Sun Jun 26 1988 05:064
    I just wanted to add, that I really think that we are not too far
    apart in our opinions.  I mean we seem to agree on MOST of the issues.
    
    --  Barry (Who listens to Firesign Theatre too)
257.47Digital gets a black eye every year over this ...AUSTIN::UNLANDSic Biscuitus DisintegratumMon Jun 27 1988 18:4344
    re:  .45 and the outside organization

>    First of all I happen to think that the vast majority of field people
>    DO take the CS surveys seriously.  The problem is not that; it's
>    that it has become acceptable to try to manipulate the results.

    In my experience (in the field) the ONLY time the CS surveys are
    taken seriously is when it comes time to "manage the process" which
    is a euphemistic way of saying "tell the customer what to write".
    
>    First of all what do you think that a "Professional" company will
>    be able to do that we do not already do?  It has been my experience
>    that:
>    	A)  A customer will NOT even respond to a questionnaire from
>    	an 'independent' source because they regard their business
>    	practices as SECRET.  The only way that they would reply to
>    	these is if they hear from someone they personally KNOW from
>    	Digital that this is in fact being done for Digital.  Note 

    I really don't understand why people would not respond to a survey
    conducted by a reputable outside agency, such as any one of the
    Big Eight accounting firms like Coopers or Arthur Andersen.  These
    are the people who audit just about every Fortune 500 company in
    the country, and are well known (as well as bound by law) to respect
    the confidentiality of the companies involved.
    
    Maybe it's the negative connotations of the word "survey".  A "survey"
    implies that the results are for marketing purposes, not for direct
    application to a customer/vendor relationship.  Perhaps if we called
    the process a "Customer Satisfaction Audit", the customer would
    understand the intentions of the process a bit more.
    
    re .46 and the "threat" tactic
    
    I've been in situations where it was implied to the customer that,
    if our group did not fare well on the CS survey, then their favorite
    software rep would be assigned to another account, that seed equipment
    might become scarce, etc., etc.  The gist of the message was that
    the customer would not suffer directly, but that it would become more
    difficult for us to support them in the manner to which they had
    become accustomed.  The old "what goes around comes around" story.
    
    Geoff
257.48ATLANT::SCHMIDTTue Jun 28 1988 17:0839
257.49GIDDAY::SADLERI'd rather be skiing....Wed Jun 29 1988 21:199
    re -1
    
    Recap is still used, but as far as I know it is measured against
    the total number of hours worked on a system, whether or not the
    hours are logged against one call or multiple calls to that
    system.
    
             
    .jim.
257.50VIDEO::LEICHTERJJerry LeichterSun Jul 03 1988 17:0320
Since I'm listed as the owner of a MicroVAX I use at Yale, I get to see this
from the other side.

The first time I got one of these surveys, I read it, laughed a bit, and threw
it out.  I thought answering would be kind of a conflict of interest.  Later,
my FS rep, who I know and respect, asked me if I had sent it in.  He made it
clear that it was important to him.  (He made no direct attempt to influence
what I might say, but of course I could read between the lines.)

So, since then I fill the thing out.  If you look at the questions, you can
see that some really relate to the grunts in the field, and some to higher
levels of management.  My policy has been simple:  Since the "grunts" I've
dealt with all have tried to do their best in an often screwed up system, I
give them high ratings.  Since the system is the responsibility of the higher
levels, I successively rate lower and lower as questions seem to apply to
higher levels of management.

How long do you think the surveys would continue in the present form if
a lot of them were answered that way?  :-)
							-- Jerry
257.51Doesn't always work that wayGIDDAY::SADLERI'd rather be skiing....Tue Jul 05 1988 01:1922
    re .-1
    
    A lot of the surveys we get back give the "grunts" a good score
    ("for service above and beyond the call of duty" etc.) and slam
    logistics and management.  So the customer is trying to tell DEC that
    the account reps are great, they respond quickly, they fix faults
    quickly, the PMs are great, the hardware is the bee's knees, but
    you never have spares and the managers suck.  The managers turn around
    and say this may be true, but the customer shouldn't perceive it
    this way; therefore you aren't looking after the customer correctly.
    Basically, altho' the customer is trying to get at DEC's red tape
    and some mundane managers, it's the account rep that can get the
    short end of the stick.  He/she is supposed to manage the
    customer in such a way that the customer never knows we don't have
    spares and so on.  This is very hard to do at the best of times.
    This is regarding the field circus survey; I don't have any experience
    with the others.  The surveys are a good concept, but perhaps it
    is time for a change.  Sending out one survey a year that covers
    all DEC functions, and having it done by an independent surveyor,
    would be one solution.
    
    .jim.
257.52Ed. Services QA SystemBMT::COMAROWResource wait stateTue Jul 05 1988 10:4911
    I think Ed. Services' QA system is absurd.  If the students don't
    like the temperature in the room, the course materials, the bathrooms,
    the lack of phones, etc., even though they might love my teaching,
    it counts against me.
    
    I dropped only -1- class this year: the room was unbearably hot, in the
    upper 90s.  I'm told a QA is a QA, and this will hurt me to the tune of
    1000s of dollars.  Clearly, I am being punished for management's
    mistakes. 
    
    Ever wonder why there is such a high turnover in Ed. Services?
257.53survey says:CSCMA::CHISHOLMabsolutely.Mon May 01 1989 18:1931
    The CSC's have surveys too.  Every time we close a customer's problem
    call (arrive at a solution, tell the customer, document it) we do
    it with the knowledge that this customer may be contacted within
    the next few hours and surveyed to see how we did.
    
    Some of them are so pleased that they send mail. :^)
    
    You, your team, your manager and the Support Center as a whole are
    rated by 'the numbers'.  
    
    strengths:
    
    survey is immediate/timely, so impressions are fresh
    sample size is fairly large, and 'random'
                                             
    weaknesses:
    
    the customer is still sometimes *very* annoyed that his system was
    down for 2 hours in the middle of the morning, and he's giving you
    a 5.0 just to get even with the [insert name of failing hardware
    or software module here].
         
    the scores are inversely proportional to difficulty/complexity of
    the problem        
    
    the scores are inversely proportional to how busy we are (and how
    long the call stays in the queue)
                           
    some people just give everyone 7's all the time
    
    /jeff
257.54customer praise letters are importantSAUTER::SAUTERJohn SauterSat May 06 1989 21:575
    re: .53---I recently sat in on a group meeting in CXO, in which it was
    mentioned that one member of the group had received a letter of praise
    from a customer.  In addition, I have seen such letters posted on a
    bulletin board.  I get the impression that these letters are treasured.
        John Sauter
257.55Customer ATABOYS are indeed precious !!CSC32::M_JILSONDoor handle to door handleMon May 08 1989 14:490
257.56...and those of us in the field *THRIVE* on them!GUIDUK::BURKEBreaking through my walls!Thu May 11 1989 22:471
    
257.57They're great!!COOKIE::WILCOXDatabase Systems/WestWed May 17 1989 21:037
Boy are those customer letters precious!  I believe I have the
distinction of receiving the most of any of the VIA support
team at the Colorado Springs Customer Support Center.  (no,
I didn't pay for them and yes, I'm tootin' my horn).  I'm
no longer working there and I miss 'em :-(.

Liz
257.58CSC32::M_VALENZASome more pie, Admiral?Wed May 17 1989 22:284
    Well, Liz, if you really miss those letters that badly, you can always
    come back to work with us on the VIA team. :-)
    
    -- Mike
257.59Not for another yearCOOKIE::WILCOXDatabase Systems/WestThu May 18 1989 22:093
Aw, Mike, I'd have to work for free for the next three months, right?

Liz :-).
257.60don't knock it....SAUTER::SAUTERJohn SauterWed May 31 1989 14:183
    Working in the VIA group in Colorado Springs can be a rewarding
    experience, even if it's "for free".
        John Sauter
257.613 years laterCALL::SWEENEYPatrick Sweeney in New YorkFri May 11 1990 14:1334
    Judging from discussion elsewhere in the conference, some "new
    thinking" is going on regarding the customer satisfaction survey.
    I think that the survey can be useful.
    
    Through the magic of VAX Notes I'm able to review suggestions made
    3 years ago:
    
    BIG SUGGESTIONS 
    
    Either make the survey "customized" to precisely the set of services
    the customer has purchased in the preceding 12 months.  Why ask
    all customers about services only 1 in 100 purchase?
    
    (OR) make the survey "blind".  That is, have a third party run the
    survey and have the customer evaluate ALL their computer vendors.
    
    No aspect of the survey is tied to the compensation of any employee,
    with one exception, the CEO.
     
    SMALL POINTS
    
    (1) Change from a 10-point scale to a 5-point scale.  No one conducts
    surveys with a 10-point scale.
    
    (2) Randomize the questions; don't make all positive answers be "5" or
    "10".  (A short sketch of reverse-keyed scoring follows this list.)
    
    (3) Make sure the survey reflects satisfaction with Digital AT ALL
    LEVELS, not merely the individual contributor in contact with the
    company
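    
    On point (2), a minimal sketch of reverse-keyed scoring (Python;
    the question wording and the keying are invented for illustration):

        # mix in reverse-keyed questions so a respondent who circles
        # the same number everywhere no longer produces a uniformly
        # "good" sheet
        import random
        random.seed(1)

        SCALE = 5
        questions = [("Response time meets my needs",           False),
                     ("Spare parts are frequently unavailable", True)]
        random.shuffle(questions)   # vary the order from form to form

        def normalize(answer, reverse_keyed):
            # map a raw circle (1..SCALE) so higher always means better
            return SCALE + 1 - answer if reverse_keyed else answer

        # a straight-liner circling 5 everywhere now scores a 5 and a 1
        print([(q, normalize(5, rev)) for q, rev in questions])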
    
    In any case, if anyone knows of serious attempts to improve the survey,
    please tell me who to contact.
                                          
257.62DID YOU WANT FRONT WHEEL DRIVE?JUPITR::BUSWELLWe're all temporaryFri May 11 1990 16:3012
    Side point.
    When the apple growers of New England couldn't find enough
    cheap help to pick their crop, they started marketing "pick
    your own": apples get picked and sold at a higher profit (win, win).
    Now is that what the customer wanted?  Or is that what the
    farmer wanted?
    We (DEC) should try to create markets that play into our hands.
    Don't waste too much time asking the customer what he/she wants;
    tell them.  Tell them long enough and then they want it.
    
    
    buz
257.63LESLIE::LESLIEAndy Leslie, CS Systems EngineeringMon May 14 1990 12:064
    There's a word I know for companies that tell the customer what they
    want and don't listen when the customer says "no I want this instead":
    
                                 "bankrupt"