
Conference npss::gigaswitch

Title:GIGAswitch
Notice:GIGAswitch/FDDI Jan 97 BL3.1 914.0, documentation 412.1
Moderator:NPSS::MDLYONS
Created:Wed Jul 29 1992
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:995
Total number of notes:4519

980.0. "SNMP poll failures on V3.1" by COMICS::BUTT (Give me the facts real straight.) Wed May 07 1997 10:37

    Are there any known issues with V3.1 that would cause intermittent
    SNMP response failures?
    
    A customer is reporting that an NMS is polling his 4 G/S/FDDIs and
    occasionally it fails to get a response. When this happens, PINGs
    reply OK.
    
    I will get a trace off the source ring but wanted to know if this has
    been reported by anyone else. 
    
    Thanks.
980.1. by NPSS::MDLYONS (Michael D. Lyons DTN 226-6943) Wed May 07 1997 13:21 - 8 lines
        No, although an IPMT has been entered regarding slow response time.
    
        As indicated in the bug fix list in 914.4, there is one problem
    which might get reported as a SNMP hang, but it is not intermittent;
    it is actually ARPs to the GIGAswitch/FDDI system not getting
    replies.
    
    MDL
980.2. by NPSS::MDLYONS (Michael D. Lyons DTN 226-6943) Wed May 07 1997 13:23 - 8 lines
        Usually intermittent SNMP failures indicate network losses (other
    than the GIGAswitch/FDDI system).  If you're doing a trace, make
    absolutely certain that you are getting the trace immediately before
    the GIGAswitch/FDDI system port, and immediately after the
    GIGAswitch/FDDI system output port (which are not necessarily the
    same).
    
    MDL
980.3. "SNMP traffic?" by OPCO::NET_JPM Thu May 08 1997 04:58 - 13 lines
    I have a GIGAswitch that is running 3.0, 30 ports in use, with
    HP OpenView collecting input/output packets for every port every 1 min,
    that gets "holes" in its data collection. I was going to wait until
    after it was upgraded to 3.12 to ask the same question as the base
    note.
    This GIGAswitch does fail some pings when the data collection is running but
    does not appear to drop user traffic. Are SNMP queries hard on the SCP?
    Is the switch likely to drop packets if someone hits it hard with SNMP
    queries?
    There are other GIGAswitches in the network that do not have the same
    problem but have a lot less traffic.
    
John Murphy
980.4. "No problems in BRS Clusters." by CHEFS::PADDICK (Michael Paddick - BRS Bristol, UK) Thu May 08 1997 10:40 - 17 lines
    Richard,
    
    In the BRS (Split site) FDDI Clusters I've worked with using
    Gigaswitches, they all have status polling every 1:00 min, 
    on lots of entities, and as far as I know, there's no
    problem at all.
    
    This includes networks with up to 4 Gigaswitches.
    
    Our GigaSwitch at Bristol is polled 24 hours a day at 1 minute
    intervals, and there doesn't appear to be any problem with it.
    
    Everything's at version 3.1, except clock module at 3.0.
    
    Meic.
    
    
980.5. by NPSS::MDLYONS (Michael D. Lyons DTN 226-6943) Thu May 08 1997 13:36 - 53 lines
        As with everything, at some point there is a limit to what can be
    handled.  If you have a single NMS doing the polling, I think it's
    unlikely that you will see the GIGAswitch/FDDI system dropping SNMP
    requests since they are limited to one at a time by the
    request/response format.  ...that is - unless the SCP is busy doing
    other things like flooding, et cetera.
    
        In a network, SNMP frames are data frames just like any other frame
    and are subject to the same dangers of frame loss due to congestion. 
    SNMP does not get guaranteed delivery.  If a frame is lost in transit,
    it is lost.
    
        The SCP does not handle forwarding of frames except for multicast
    and flooded frames.
    
        If you look at the MIB II counters, you'll find some counters which
    may be helpful in identifying what is happening:

                             snmpInPkts = 5707
                            snmpOutPkts = 5700
                      snmpInBadVersions = 0
                snmpInBadCommunityNames = 0
                 snmpInBadCommunityUses = 6
                     snmpInASNParseErrs = 0
                          snmpInTooBigs = 0
                      snmpInNoSuchNames = 0
                        snmpInBadValues = 0
                        snmpInReadOnlys = 0
                          snmpInGenErrs = 0
                     snmpInTotalReqVars = 15142
                     snmpInTotalSetVars = 0
                      snmpInGetRequests = 32
                         snmpInGetNexts = 5632
                      snmpInSetRequests = 0
                     snmpInGetResponses = 0
                            snmpInTraps = 0
                         snmpOutTooBigs = 0
                     snmpOutNoSuchNames = 29
                       snmpOutBadValues = 0
                         snmpOutGenErrs = 6
                     snmpOutGetRequests = 0
                        snmpOutGetNexts = 0
                     snmpOutSetRequests = 0
                    snmpOutGetResponses = 5699
                           snmpOutTraps = 0
                    CounterCreationTime =  2-MAY-1997 14:30:41.32
                  snmpEnableAuthenTraps = enabled
    
    ...as well as all the standard counters for dropped frames, et cetera. 
    Correlate increases in these counters with the times you're missing
    data.
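
    A quick way to do that correlation is to snapshot the counter display
    periodically and diff consecutive snapshots.  A hypothetical Python
    sketch (not a GIGAswitch tool; the parsing assumes the 'name = value'
    layout shown above):

import re

def parse_counters(text):
    # pull 'name = number' lines out of the counter display shown above
    counters = {}
    for line in text.splitlines():
        m = re.match(r"\s*(\w+)\s*=\s*(\d+)\s*$", line)
        if m:
            counters[m.group(1)] = int(m.group(2))
    return counters

def increases(before, after):
    # counters that went up between two snapshots of the display
    a, b = parse_counters(before), parse_counters(after)
    return {n: b[n] - a[n] for n in b if n in a and b[n] > a[n]}

# hypothetical snapshots taken one poll cycle apart:
snap1 = "snmpInPkts = 5707\nsnmpOutPkts = 5700\nsnmpInGetNexts = 5632\n"
snap2 = "snmpInPkts = 5821\nsnmpOutPkts = 5801\nsnmpInGetNexts = 5700\n"
print(increases(snap1, snap2))
# -> {'snmpInPkts': 114, 'snmpOutPkts': 101, 'snmpInGetNexts': 68}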
    
    MDL
980.6. by NPSS::MDLYONS (Michael D. Lyons DTN 226-6943) Thu May 08 1997 16:58 - 10 lines
        ....also worth mentioning is that since SNMP is a single threaded
    process on the GIGAswitch/FDDI system, if it's busy handling another
    SNMP request which takes a long time, it will not respond until it's
    done.  It would not be unusual for an NMS to time out before the
    request actually gets handled.
    
        An example of something which takes a long time is a SNMP SET like
    changing the default filter matrix - this takes many seconds.
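
        The arithmetic behind that is simple.  A rough Python sketch with
    assumed numbers (1.5 seconds per request is only an illustrative slow
    operation; 3.0 seconds is the NMS timeout figure that comes up later
    in this topic):

# With a single threaded agent, the k-th queued request is not answered
# until the k-1 requests ahead of it are finished, so a modest per-request
# time can exceed the NMS timeout.  Both constants are assumptions.
T_PROC = 1.5     # assumed seconds to handle one slow request
TIMEOUT = 3.0    # assumed NMS timeout interval

for k in range(1, 6):
    answered_at = k * T_PROC   # requests are handled strictly one at a time
    verdict = "NMS times out" if answered_at > TIMEOUT else "ok"
    print(f"request {k}: answered at {answered_at:.1f}s ({verdict})")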
    
    MDL
980.7. "More than one NMS" by OPCO::NET_JPM Thu May 08 1997 21:59 - 16 lines
    The key here may be that the GIGAswitch SNMP is a single threaded
    process and there is more than one NMS. There is an NMS for faults (i.e.
    ports up/down), and one for stats, plus backup NMSs that are not meant
    to poll, but I will check. I will also check to see if the stats
    collection is single threaded or not on HPOV.
    Part of the problem is that there is a migration from DECmcc, SunNet
    Manager and POLYCENTER NetView to HP OpenView, so there are a lot of
    NMSs to check here.
    Of the customer's five GIGAswitches there is only one that is doing this,
    but the others have nowhere near as many used ports, so the stats
    collection may finish quicker.
    
    
    Thanks,
      John Murphy
      OMS
980.8. "NMS times out slow replies." by COMICS::BUTT (Give me the facts real straight.) Thu May 22 1997 07:45 - 10 lines
    The traces show that sometimes the G/S takes over 5 seconds to respond
    to SNMP GETs. The NMS is polling for interface stats. It uses
    multi-object GETs and polls multiple interfaces every poll cycle. When
    the reply times are extended, the NMS times out.
    
    The poll cycle time is large, but the GETs are all sent together. Are
    GETs that need line card counters particularly slow? Any comment on
    the performance? Is 5-10 seconds typical for this type of operation?
    
    R.
980.9. by NPSS::MDLYONS (Michael D. Lyons DTN 226-6943) Thu May 22 1997 13:27 - 16 lines
        What is the time between responses from the GIGAswitch/FDDI system? 
    (I.E. *not* the time from when you send a request to when you receive a
    response, but instead the time between GIGAswitch/FDDI responses..) 
    Remember - SNMP on the GIGAswitch/FDDI system is single threaded - if
    you send it fifty requests, it will respond to them one at a time, and
    the last one will be delayed by the handling of the earlier requests.
    
       5-10 seconds is certainly not typical for a response to a single
    request.  I have never seen such a delay, and expect that you are
    queueing up requests, or the SCP is doing something else.
    
        ...but it is true that Interface MIB counters reside on the
    linecards and require extra overhead between the SCP and linecards to
    acquire that information.
    
    MDL
980.10. "Odd SNMP response performance" by COMICS::BUTT (Give me the facts real straight.) Fri May 23 1997 15:24 - 23 lines
    Re .9
    
    I tried to test this with a datascope sending a captured SNMP GET.
    
    I found that if the GETs were too close together the G/S did not
    respond at all to the 2nd and subsequent GETs. The minimum delay needed
    was about 400 ms. This is odd, as a trace of a normal SNMP (HUBwatch)
    operation shows the G/S handling GETs much more quickly.
    
    I then tried changing the ID numbers but this made no difference.
    
    If the scope sends otherwise identical SNMP GETs (with the same or
    different request IDs) to the G/S, it seems not to respond to all of
    them unless they are separated in time.
    
    The customer's traces show a similar problem. The NMS sends 4 GET reqs
    all together. Each has 13 objects. One example asks for
    1.3.6.1.2.1.2.2.1.n.14.
    The first GET gets a response 1.5 seconds later and the 3 others
    time out (3-second timeout).
    
    Rgds R.
     
980.11. by NPSS::MDLYONS (Michael D. Lyons DTN 226-6943) Fri May 23 1997 18:12 - 39 lines
        Most likely, as was requested several times, the kludge to ignore
    duplicate frames within a certain time period was extended to SNMP. 
    This is to avoid the duplication inherent with multiply connected
    rings.
    
        Can you identify any reason why someone would send duplicate
    requests back to back in any normal usage?
    
        ..and please don't forget to get the answer to the original
    question, which is to identify the time between responses (to a *real*
    NMS stream, or at least one which looks vaguely real).  
    
        Pounding the GIGAswitch/FDDI system with back-to-back SNMP requests
    will most certainly result in getting some dropped - but it is a
    completely artificial environment.  There is no particular intent to
    optimize the GIGAswitch/FDDI system for SNMP responses - it is
    optimized for unicast frame switching.  SNMP is necessary for
    management, and if something is causing the GIGAswitch/FDDI system to
    be unmanageable, then it will be addressed.
    
        We have received *one* report apart from yours that SNMP is unable
    to keep up with a NMS (Cabletron Spectrum with a fully loaded
    GIGAswitch/FDDI platform).  It is under investigation, but it appears
to be caused by the way their application is designed (they create
    multiple processes which all send multiple GETs without waiting for
    responses) - i.e. even though the GIGAswitch/FDDI system is answering
    in a reasonable timeframe, since the GIGAswitch/FDDI system SNMP agent
    is single threaded, there is only one process to receive n processes'
    worth of requests.  After a few seconds, these multiple processes
    retransmit the same requests assuming they have been lost, causing an
    even larger backlog of requests - repeat ad infinitum.
    
        The question (which is the same as in your case) is whether the
    GIGAswitch/FDDI system slows down the response rate, or is just being
    sent requests faster than it can handle.  Some GETs take longer - those
    which require information from the linecard.  But they don't take
    forever.
    
    MDL
980.12. "Trace info to follow." by COMICS::BUTT (Give me the facts real straight.) Tue May 27 1997 09:08 - 42 lines
    Re .11
    
    >Most likely, as was requested several times, the kludge to ignore
    >duplicate frames within a certain time period was extended to SNMP.
    
    I tried using packets with different request IDs; the G/S still ignored
    the 2nd and subsequent GET requests if they were sent before the 1st response.
    
    
    >Can you identify any reason why someone would send duplicate
    >requests back to back in any normal usage?
    
    No, I was trying to use this as a test to measure the response time.
    
    >.and please don't forget to get the original answer, which is to
    >identify the time between responses (to a *real* NMS stream, or at
    >least one which looks vaguely real).
    
    I have a real trace, it shows what I tried to describe in .10.
    
    "   The customers traces show a similar problem. The NMS sends 4 GET reqs
        all together. Each has 13 objects. One example asks for
    	  1.3.6.1.2.1.2.2.1.n.14
        
    The first GET gets a response 1.5 seconds later and the 3 others
    timeout (3 3 seconds). "
    
    The customer's traces and my testing are consistent, but tracing a
    normal SNMP GET sequence from HUBwatch goes much faster than this. The
    difference could be that HUBwatch sends one request at a time and waits
    for the response before sending the next, while my testing and the
    customer's NMS use multiple outstanding requests.
    
    The real trace is large but it can be easily searched by
    request/response ID. I will send you a copy.
    
    Thanks R.
    
    
    
    
980.13. "How about this scenario?" by NPSS::MDLYONS (Michael D. Lyons DTN 226-6943) Tue May 27 1997 20:12 - 38 lines
    ....well, you create SNMP PDUs which have thirteen GETs in them.  The
GIGAswitch/FDDI system SNMP responder is single threaded.  Your SNMP manager
has a three-second timeout on the requests it sends.  ...assume
it takes 1.5 seconds for the GIGAswitch/FDDI system to process one of those
PDUs:
    
    Sequence of events
    
    Elapsed time   Event    (What is happening)
    0.0            Send frame 1 (13 objects requiring linecard response)
                   Send frame 2 ""
                   Send frame 3 ""
                   Send frame 4 ""
    0.1-1.6        GIGAswitch/FDDI system processing PDU (frame 1)
    1.6            GIGAswitch/FDDI system sends response to frame 1
    1.7-3.2        GIGAswitch/FDDI system processing PDU (frame 2)
    3.1            SNMP manager times out frames 2,3,4
    3.2            GIGAswitch/FDDI system sends response to frame 2
    3.2            SNMP manager retransmits frames 2,3,4 (now called 5,6,7)
    3.3-4.8        GIGAswitch/FDDI system processing PDU (frame 3)
    4.8            GIGAswitch/FDDI system sends response to frame 3
    4.9-6.4        GIGAswitch/FDDI system processing PDU (frame 4)
    6.3            SNMP manager times out frames 5,6,7
    6.4            SNMP manager retransmits 5,6,7 (now called 8,9,10)
    6.4            GIGAswitch/FDDI system sends response to frame 4
    6.5-8.0        GIGAswitch/FDDI system processing PDU (frame 5)
    8.0            GIGAswitch/FDDI system sends response to frame 5
    8.1-9.6        GIGAswitch/FDDI system processing PDU (frame 6)
    9.5            SNMP manager times out 8,9,10
    9.6            GIGAswitch/FDDI system sends response to frame 6
    9.7-11.2       GIGAswitch/FDDI system processing PDU (frame 7)
    11.2           GIGAswitch/FDDI system sends response to frame 7
    
    ...I think you get the idea - it can't work.  You are doing retransmissions
before the GIGAswitch/FDDI system has finished handling your original requests. 
I haven't looked at your trace yet, but I don't expect any surprises...
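
    The retransmission feedback loop is easy to reproduce numerically.  A
small Python sketch of the same idea (an illustration only - it uses the
assumed 1.5 second processing time and 3 second timeout, and does not model
retransmission of the frame currently being processed):

# Single threaded agent taking PROC seconds per PDU; the manager sends
# BURST requests at once and resends any still-unanswered request after
# TIMEOUT seconds, exactly like the table above.
PROC, TIMEOUT, BURST = 1.5, 3.0, 4

queue = [(0.0, n) for n in range(1, BURST + 1)]   # (send_time, frame_id)
next_id, clock = BURST + 1, 0.0

while queue and clock < 60.0:                     # safety cap on sim time
    send_time, frame = queue.pop(0)
    start = max(clock, send_time)
    done = start + PROC
    print(f"{start:4.1f}-{done:4.1f}  agent processing frame {frame}")
    # any queued request whose timeout expires while the agent is busy
    # is resent under a new frame id
    for s, f in [q for q in queue if q[0] + TIMEOUT <= done]:
        queue.remove((s, f))
        queue.append((s + TIMEOUT, next_id))
        print(f"{s + TIMEOUT:9.1f}  manager times out frame {f}, "
              f"resends as frame {next_id}")
        next_id += 1
    print(f"{done:9.1f}  agent sends response to frame {frame}")
    clock = done

    With a burst of four such PDUs the agent ends up answering six frames
for four questions; larger bursts or slower PDUs make the backlog grow much
faster.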
    
    MDL
980.14. by NPSS::MDLYONS (Michael D. Lyons DTN 226-6943) Tue May 27 1997 20:59 - 4 lines
    P.S. In case you're wondering, from the perspective of the
    GIGAswitch/FDDI system design, 1.5 seconds to respond to a SNMP PDU
    containing 13 GETs which require linecard polling is quite reasonable.
    I would have expected worse times, myself.
980.15. "Could be just taking a long time" by COMICS::BUTT (Give me the facts real straight.) Wed May 28 1997 12:06 - 15 lines
    Re .13
    
    The scenario looks OK; this could be what's happening. I would like
    to be able to explain to the customer why 1.5s is not considered a long
    time. It was as much a surprise to them as to me.
    
    I'm still left with my testing, which sends 3 GET reqs with different
    IDs back to back and counts the number of responses. Consistently I send
    3 and get 1 response. When the time between GETs is large enough to allow
    a response BEFORE the next GET, it works as expected. I'll recheck this
    to make sure I don't have a scope programming error.
    
    Thanks R.
    
    
980.16. by NPSS::MDLYONS (Michael D. Lyons DTN 226-6943) Wed May 28 1997 13:44 - 40 lines
        You are sending 13 SNMP GET requests, each of which requires
    separate actions from the SNMP process on the GIGAswitch/FDDI system. 
    
        In the sample you gave, the OID was from one of the interface
    tables in MIB II.  Interface counters are kept on the line cards.  When
    you do a GET request on an interface counter, the SCP has to send ICCs
    (intercard command packets) to the linecard to obtain this information.
    If a copy were stored on the SCP, it would be out of date.  Caching
    this information would yield invalid results.
    
        SNMP is not a high priority.  Forwarding frames is a high priority.
    
        The process of exchanging ICCs with the linecard is quite expensive
    in time.  In your example, where you had 13 GETs in a single SNMP PDU,
    which took 1.5 seconds, this means it took about 0.1 second per ICC
    exchange (rough approximation) - this is quite good.  It would not be
    unexpected for an ICC to take 0.5 seconds.
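
        As back-of-envelope arithmetic (a Python sketch using only the
    figures in this reply; the helper function is made up for illustration):

# Estimated processing time for one PDU whose objects all live on
# linecards, assuming one ICC exchange per object (figures from above).
def pdu_time(n_objects, icc_seconds=0.1):
    return n_objects * icc_seconds

print(f"{pdu_time(13):.1f} s")       # 1.3 s - close to the observed 1.5 s
print(f"{pdu_time(13, 0.5):.1f} s")  # 6.5 s - if each ICC takes 0.5 s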
    
        Your problem is that you are piggybacking lots of SNMP requests in
    a single PDU and seem to expect it to take the same amount of time.  It
    doesn't work that way.
    
        Going back to your trace - are you quite positive that you are
    waiting and capturing all the responses, and not cutting them off? 
    I will examine your trace today when I get a few minutes.
    
        You still haven't answered the question about the separation in
    time between the GIGAswitch/FDDI system responses.  ***IGNORE THE TIME THE
    REQUESTS WERE SENT***.  They don't much matter if you are stacking up
    requests.  What is important is to determine if the GIGAswitch/FDDI
    system is consistently sending responses on a regular basis, which is
    what I expect you are seeing.  The problem is most likely that you are 
    sending requests faster than the GIGAswitch/FDDI system is capable of
    answering.
    
        This is hardly a surprise - I'll repeat myself - the
    GIGAswitch/FDDI system is designed to forward frames quickly, not to
    answer as many SNMP requests as possible.
    
    MDL
980.17. by NPSS::MDLYONS (Michael D. Lyons DTN 226-6943) Wed May 28 1997 20:59 - 74 lines
Richard,

        From your trace - you sent out the following SNMP GETs:

   Frame: 60366      Time: May 19@19:07:14.9844978  Length: 108  GET request
   Frame: 60383      Time: May 19@19:07:15.1459357  Length: 289  GET request
   Frame: 60384      Time: May 19@19:07:15.1533301  Length: 289  GET request
   Frame: 60385      Time: May 19@19:07:15.1570308  Length: 289  GET request
   Frame: 60386      Time: May 19@19:07:15.1605157  Length: 289  GET request
   Frame: 60434      Time: May 19@19:07:15.8563303  Length: 95  GET request
   Frame: 60578      Time: May 19@19:07:16.6516760  Length: 94  GET request
   Frame: 60691      Time: May 19@19:07:17.9210365  Length: 94  GET request
   Frame: 60727      Time: May 19@19:07:18.1617624  Length: 289  GET request
   Frame: 60728      Time: May 19@19:07:18.1633124  Length: 289  GET request
   Frame: 60730      Time: May 19@19:07:18.1650077  Length: 289  GET request
   Frame: 60938      Time: May 19@19:07:19.6610133  Length: 94  GET request
   Frame: 60997      Time: May 19@19:07:20.2529824  Length: 95  GET request
   Frame: 61064      Time: May 19@19:07:21.1718987  Length: 289  GET request
   Frame: 61066      Time: May 19@19:07:21.1735261  Length: 289  GET request

    ..and got the following responses:

   Frame: 60367      Time: May 19@19:07:14.9860001  Length: 145 GET response
   Frame: 60576      Time: May 19@19:07:16.6151022  Length: 328 GET response
   Frame: 60681      Time: May 19@19:07:17.7570057  Length: 328 GET response
   Frame: 60831      Time: May 19@19:07:18.9252834  Length: 329 GET response
   Frame: 60970      Time: May 19@19:07:19.8964039  Length: 329 GET response

    ..combined and massaged:

Frame: 60366  Time: May 19@19:07:14.9844978  GET request 20435
Frame: 60367  Time: May 19@19:07:14.9860001  GET response     20435
Frame: 60383  Time: May 19@19:07:15.1459357  GET request 20437
Frame: 60384  Time: May 19@19:07:15.1533301  GET request 20442
Frame: 60385  Time: May 19@19:07:15.1570308  GET request 20443
Frame: 60386  Time: May 19@19:07:15.1605157  GET request 20444
Frame: 60434  Time: May 19@19:07:15.8563303  GET request 43822
Frame: 60576  Time: May 19@19:07:16.6151022  GET response     20437
Frame: 60578  Time: May 19@19:07:16.6516760  GET request 20678
Frame: 60681  Time: May 19@19:07:17.7570057  GET response     20437
Frame: 60691  Time: May 19@19:07:17.9210365  GET request 20028
Frame: 60727  Time: May 19@19:07:18.1617624  GET request 20442
Frame: 60728  Time: May 19@19:07:18.1633124  GET request 20443
Frame: 60730  Time: May 19@19:07:18.1650077  GET request 20444
Frame: 60831  Time: May 19@19:07:18.9252834  GET response     20442
Frame: 60938  Time: May 19@19:07:19.6610133  GET request 20678
Frame: 60970  Time: May 19@19:07:19.8964039  GET response     20442
Frame: 60997  Time: May 19@19:07:20.2529824  GET request 44140
Frame: 61064  Time: May 19@19:07:21.1718987  GET request 20443
Frame: 61066  Time: May 19@19:07:21.1735261  GET request 20444

    ...and the last frame in your trace had a time of 19:07:21.2143821

    ...the incremental response times were:

Frame 60367-60576    1.6291021 seconds
Frame 60576-60681    1.1419035 seconds
Frame 60681-60831    1.1682777 seconds
Frame 60831-60970    0.9711205 seconds

        ...and I expect it continued after you stopped the trace.

    My analysis is that for some reason the GIGAswitch/FDDI system appears to
have processed two copies of some of your SNMP GET requests, or duplicated the
requests internally.  It was responding to the requests in order, and
probably responded to the others after the trace ended.
    
    If you want to pursue this, I suggest you confirm that there is only a
single connection to the GIGAswitch/FDDI system.  Also, this time, continue
the trace for several minutes past when you stop sending SNMP frames.  It
would be easier if you filtered the frames on the analyzer to eliminate the
non-SNMP frames.
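
    For reference, the incremental times above are just differences between
consecutive response timestamps.  A small Python sketch that computes them
(hypothetical - the regular expression assumes the trace layout shown above):

import re

# matches lines like:
#   Frame: 60367      Time: May 19@19:07:14.9860001  ...  GET response
RESPONSE = re.compile(r"Frame:\s*(\d+).*@(\d+):(\d+):([\d.]+).*GET response")

def response_gaps(trace_text):
    stamps = []
    for line in trace_text.splitlines():
        m = RESPONSE.search(line)
        if m:
            frame = int(m.group(1))
            secs = (int(m.group(2)) * 3600 + int(m.group(3)) * 60
                    + float(m.group(4)))
            stamps.append((frame, secs))
    for (f1, t1), (f2, t2) in zip(stamps, stamps[1:]):
        print(f"Frame {f1}-{f2}  {t2 - t1:.7f} seconds")

trace = """\
Frame: 60367      Time: May 19@19:07:14.9860001  Length: 145 GET response
Frame: 60576      Time: May 19@19:07:16.6151022  Length: 328 GET response
Frame: 60681      Time: May 19@19:07:17.7570057  Length: 328 GET response
Frame: 60831      Time: May 19@19:07:18.9252834  Length: 329 GET response
Frame: 60970      Time: May 19@19:07:19.8964039  Length: 329 GET response
"""
response_gaps(trace)    # prints the four incremental times listed above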

MDL
980.18. "Thanks, more later" by COMICS::BUTT (Give me the facts real straight.) Thu May 29 1997 13:21 - 8 lines
    Michael
    
    Thanks for the work you have done on this.
    
    I hope to post the traces from my local testing here next week.
    
    R.
    
980.19. "It does reply, if you wait." by COMICS::BUTT (Give me the facts real straight.) Mon Jun 02 1997 13:52 - 16 lines
    Re .18
    
    I have re-written the test program and agree that the G/S will
    eventually respond to different SNMP GETs even if they are all sent back
    to back. Sorry for the inaccurate info posted previously. As was, I
    think, previously stated, the G/S will not respond to SNMP GET messages
    it considers to be duplicates; I think that my problem was incorrect
    editing of the message sequence numbers on the datascope.
    
    The base note problem was that the NMS 3 sec timeout is too short for
    the type of operation being performed.
    
    Many thanks R.
    
    
    
980.20. by NPSS::MDLYONS (Michael D. Lyons DTN 226-6943) Mon Jun 02 1997 20:19 - 4 lines
        ...incidentally, the kludge I discussed earlier regarding duplicate
    frames has not at this point been extended to SNMP frames.
    
    MDL
980.21. "Back to back duplicate SNMP gets" by COMICS::BUTT (Give me the facts real straight.) Thu Jun 05 1997 07:44 - 110 lines
    Re .20
    
    Test 1 with SNMP GETs sent back to back. 16.200.0.28 is the G/S;
    16.182.96.55 is the datascope sending the messages.

When the GETs are sent together I only get 1 response. (SCP and FGL V3.10)


Frame ID     Arrival Time       Size  Source Node  Dest Node    Status
-------------------------------------------------------------------------------
    1 Jun 03 12:18:35.250       0198 16.182.96.55  16.200.0.28   SNMP   
    2 Jun 03 12:18:35.250       0198 16.182.96.55  16.200.0.28   SNMP   
    3 Jun 03 12:18:35.250       0198 16.182.96.55  16.200.0.28   SNMP   
    4 Jun 03 12:18:35.250       0212 16.200.0.28   16.182.96.55  SNMP   
                                                                          
------------------------------------------------------------------------------
 
Test 2 with frames sent 1 sec apart. Same packet, 3 responses.

 Frame ID     Arrival Time       Size  Source Node  Dest Node    Status
-------------------------------------------------------------------------------
    1 Jun 03 12:37:31.960        0198  16.182.96.55 16.200.0.28   SNMP   
    2 Jun 03 12:37:31.970        0212  16.200.0.28  16.182.96.55  SNMP   
    3 Jun 03 12:37:32.970        0198  16.182.96.55 16.200.0.28   SNMP   
    4 Jun 03 12:37:32.970        0212  16.200.0.28  16.182.96.55  SNMP   
    5 Jun 03 12:37:33.970        0198  16.182.96.55 16.200.0.28   SNMP   
    6 Jun 03 12:37:33.970        0212  16.200.0.28  16.182.96.55  SNMP   
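
A rough Python equivalent of the datascope test (hypothetical: it sends a
hand-built SNMPv1 GET for sysDescr.0 with community "public" rather than the
captured GETNEXT shown below, and the 5 second receive timeout is arbitrary):

import socket, time

AGENT = ("16.200.0.28", 161)   # the G/S address from the traces above

# Hand-built SNMPv1 GetRequest for sysDescr.0 - a simpler stand-in for
# the captured GETNEXT PDU in the frame dump below.
PAYLOAD = bytes.fromhex(
    "302602010004067075626c6963"     # SEQUENCE; version 0; community "public"
    "a019020101020100020100"         # GetRequest; request-id 1; no error
    "300e300c06082b06010201010100"   # varbind: OID 1.3.6.1.2.1.1.1.0
    "0500"                           # value NULL
)

def send_burst(count=3, gap=0.0):
    """Send 'count' identical GETs, 'gap' seconds apart; count the replies."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(5.0)             # stop listening 5 s after the last reply
    for _ in range(count):
        sock.sendto(PAYLOAD, AGENT)
        if gap:
            time.sleep(gap)
    replies = 0
    try:
        while True:
            sock.recvfrom(2048)
            replies += 1
    except socket.timeout:
        pass
    return replies

print("back to back:", send_burst(gap=0.0))   # observed above: 1 reply
print("1 sec apart: ", send_burst(gap=1.0))   # observed above: 3 replies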
 


Full details of frame sent. 
 
Frame 1 Size   198 Absolute Time Jun 03 12:18:35.250 ASCII MODE
-------------------------------------------------------------------------------
----- Level # 1 is ETHERNET Offset: 0 Size: 14
 
 Dest Address   : DEC     -a5-c2-0c                08 00 2b a5 c2 0c  ..+...
 Source Address : DEC     -a0-ae-88                08 00 2b a0 ae 88  ..+...
 Type           : DoD IP                           08 00              ..
----- Level # 2 is DoD IP Offset: 14 Size: 20
 
 Version        : 4                                4                  .
 Header Length  : 20                               5                  .
 Type Of Service: 0                                00                 .
                : 000..... Routine
                : ...0.... Normal Delay
                : ....0... Normal Throughput
                : .....0.. Normal Reliability
 Total Length   : 180                              00 b4              ..
 Identification : 39817                            9b 89              ..
 Flags          : 0
                : 0.. Reserved
                : .0. May Fragment
                : ..0 Last Fragment
 Fragment Offset: 0                                00 00              ..
 Time To Live   : 30                               1e                 .
 Protocol       : 17 (DODUDP)                      11                 .
 Header Checksum: 32479                            7e df              ~.
 Source Address : [16.182.96.55]                   10 b6 60 37        ..`7
 Dest Address   : [16.200.0.28]                    10 c8 00 1c        ....
----- Level # 3 is DoD UDP Offset: 34 Size: 8
 
 Source Port    : 2129                             08 51              .Q
 Dest Port      : 161 (SNMP)                       00 a1              ..
 Length         : 160                              00 a0              ..
 Checksum       : 55332                            d8 24              .$
----- Level # 4 is SNMP Offset: 42 Size: 152
 
 PDU Length     : 149                              30 81 95           0..
 Version        : 0                                02 01 00           ...
 Community      : 08002bxxxxxx                     04 0c 30 38 30 30  ..0800
 		:                                  32 62 61 35 63 32  2bxxxx
                :                                  30 30              xx
 PDU Type       : GETNEXT Request                  a1 81 81           ...
 Request Id     : -1282348033                      02 04 cc 6f 14 01  ...o..
 Error Status   : 0 (noError)                      02 01 00           ...
 Error Index    : 0                                02 01 00           ...
 VarBind        :                                  30 73              0s
                :                                  30 15              0.
 Object Name    : {1.3.6.1.4.1.36.2.15.3.3.3.1.4.  06 11 2b 06 01 04  ..+...
                : 3.1.1.2}                         01 24 02 0f 03 03  .$....
                :                                  03 01 04 03 01 01  ......
                :                                  02                 .
 Object Value   : NULL                             05 00              ..
                :                                  30 15              0.
 Object Name    : {1.3.6.1.4.1.36.2.15.3.3.3.1.4.  06 11 2b 06 01 04  ..+...
                : 3.1.2.2}                         01 24 02 0f 03 03  .$....
                :                                  03 01 04 03 01 02  ......
                :                                  02                 .
 Object Value   : NULL                             05 00              ..
                :                                  30 15              0.
 Object Name    : {1.3.6.1.4.1.36.2.15.3.3.3.1.4.  06 11 2b 06 01 04  ..+...
                : 3.1.3.2}                         01 24 02 0f 03 03  .$....
                :                                  03 01 04 03 01 03  ......
                :                                  02                 .
 Object Value   : NULL                             05 00              ..
                :                                  30 15              0.
 Object Name    : {1.3.6.1.4.1.36.2.15.3.3.3.1.4.  06 11 2b 06 01 04  ..+...
                : 3.1.4.2}                         01 24 02 0f 03 03  .$....
                :                                  03 01 04 03 01 04  ......
                :                                  02                 .
 Object Value   : NULL                             05 00              ..
                :                                  30 15              0.
 Object Name    : {1.3.6.1.4.1.36.2.15.3.3.3.1.4.  06 11 2b 06 01 04  ..+...
                : 3.1.5.2}                         01 24 02 0f 03 03  .$....
                :                                  03 01 04 03 01 05  ......
                :                                  02                 .
 Object Value   : NULL                             05 00              ..