
Conference irocz::terminal_servers

Title:Terminal Servers
Notice:See Note 2 for Directory of important notes. Please use keywords.
Moderator:LAVC::CAHILLON
Created:Tue May 14 1991
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:3547
Total number of notes:12300

3431.0. "DECSERVER 900 BufferSize and SLIP" by ANNECY::CHALUMEAU () Tue Feb 04 1997 09:14

    
    Is there any parameter or way to increase the buffer size of ports
    on a DECserver 900?
    
    A customer is using SLIP at 38,400 bd to manage DEChubs. He sees a
    high rate of lost packets from his host system on Ethernet input.
    Is there any way to reduce these errors with some setting in the
    DECserver?
    
    Jean-Claude
    
    
    
T.R  Title  User  Personal Name  Date  Lines
3431.1. by IROCZ::D_NELSON (Dave Nelson LKG1-3/A11 226-5358) Tue Feb 04 1997 12:50 (13 lines)
RE: .0

>    A customer is using SLIP at 38,400 bd to manage DEChubs. He sees a
>    high rate of lost packets from his host system on Ethernet input.
>    Is there any way to reduce these errors with some setting in the
>    DECserver?
 
Exactly which counter are you looking at?

Regards,

Dave

3431.2. "DECServer 900 BufferSize and SLIP" by ANNECY::CHALUMEAU () Tue Feb 11 1997 06:08 (18 lines)
    Follow-up to note 3431:

    A customer is using SLIP at 38,400 bd on a DECserver 900 to manage DEChubs
    with HUBwatch. He sees a high rate of lost packets from his host system.
 
    The error count is observed with

        show port xx slip counters

    in the "Send Packets Lost" field.
  
    Is there any parameter to increase the buffer size of ports in the
    server, or another way to reduce these errors with some setting in
    the DECserver 900?


    Jean-Claude
    
    
    
3431.3. "more on the customer context" by ANNECY::CHALUMEAU () Wed Feb 12 1997 13:58 (19 lines)
        In addition to the previous note:
        They have a huge network used for air traffic control. They use
        in-band and out-of-band management to monitor their network and
        modules. They have no problem with in-band monitoring. They only
        have trouble with out-of-band management of some modules (I guess
        DEChub 900 chassis and a few modules), using SLIP on DECserver 900
        ports.
        The problem is observed with the SHOW PORT command and also with
        the monitor command on the server. With monitoring they can see
        the "Send Packets Lost" counter reporting errors when the "Send
        Packet Queued" count increases. This is why they think these
        errors might disappear with a larger buffer in the DECserver.
        They have their own application that constantly monitors their
        network with SNMP requests. These errors seem to be a real problem
        for the people in charge of network administration, and they would
        like to know if we have a workaround or if it is a limitation of
        the product.

3431.4. "Where's the backpressure (packet flow control)?" by IROCZ::D_NELSON (Dave Nelson LKG1-3/A11 226-5358) Wed Feb 12 1997 20:45 (24 lines)
Hmmm.... Not sure we can help.  The counters you describe indicate that the
SLIP interface has too many packets queued to it.  The UART isn't emptying
the queue fast enough.  For TCP applications, this would get throttled by 
the TCP window size.  For UDP applications like SNMP, there's not much we 
can do.  I don't think that ICMP "source quench" packets would come into 
play here.  But that might be something to look into.

First, the number of buffers that each SLIP interface can have in its output 
queue is hard coded.  Secondly, if the packet arrival rate (over a faster 
interface like Ethernet) consistently exceeds the packet transmit rate (over
a slow 38,400 bd SLIP interface), then no amount of buffering will do.  More 
buffers would just add a small increment in the time it takes to overflow 
the number of buffers allowed.
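To put rough numbers on that point, here is a back-of-the-envelope sketch
(illustrative C, not DECserver code; the 150-byte request size comes from
earlier in this topic, while the arrival rate and queue depths are assumptions
picked just to show the shape of the problem):

    /* Time until a finite output queue overflows when packets arrive
     * faster than a 38,400 bd SLIP line can drain them.  Illustrative
     * only; the arrival rate and queue depths are assumptions.        */
    #include <stdio.h>

    int main(void)
    {
        const double line_baud   = 38400.0;
        const double bytes_per_s = line_baud / 10.0;     /* 8N1: 10 bits/byte    */
        const double pkt_bytes   = 150.0;                /* assumed request size */
        const double drain_pps   = bytes_per_s / pkt_bytes;  /* ~25.6 pkt/s      */
        const double arrival_pps = 50.0;                 /* assumed burst rate   */

        for (int qdepth = 48; qdepth <= 192; qdepth *= 2) {
            double secs = qdepth / (arrival_pps - drain_pps);
            printf("a %3d-packet queue overflows after ~%.1f s of sustained load\n",
                   qdepth, secs);
        }
        return 0;
    }

Doubling the queue only doubles the time until the first drop; it does not
prevent the drop.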

Applications that run over UDP expect to have to do timeouts and retransmits
in the face of packet loss.  With finite buffering and no "backpressure"
method, this seems like a problem they'll have to live with.

Does anyone else have an idea?  I may be missing something obvious...

Regards,

Dave

3431.5. "live with it or change their application" by ANNECY::CHALUMEAU () Thu Feb 13 1997 11:55 (7 lines)
    Dave, you confirmed what I thought about the problem; the solution
    would have to be in their application.
    
    
    Thanks for your help
    Regards,
    jean-claude
3431.6. "any way to modify the transmit queue buffer size or length for async line?" by TOOSRV::SELLES () Tue Feb 18 1997 07:49 (20 lines)
I work with the same customer as Jean-Claude.

They don't want to change their application right now, and they complain
that it should be possible to change the transmit buffer queue size or
length for the async line, so that it can buffer as many as 32 UDP
requests of 150 bytes each with no request loss.


They describe the application as follows: they poll a variable through the
OBM port of the DEChub 900, and if this variable's value changes they issue
a burst of 32 requests, also through the OBM port (so through the async port
of the DECserver 900), and then wait for 32 replies.
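For rough sizing of that burst (illustrative arithmetic only; it assumes an
8N1 async line, i.e. 10 bits per byte, and ignores SLIP framing overhead):

    /* Rough sizing of the 32-request burst described above.
     * Illustrative arithmetic only; assumes 8N1 async framing and
     * ignores SLIP escaping overhead.                               */
    #include <stdio.h>

    int main(void)
    {
        const int    burst_pkts  = 32;              /* requests per burst */
        const int    pkt_bytes   = 150;             /* bytes per request  */
        const double bytes_per_s = 38400.0 / 10.0;  /* ~3,840 bytes/s     */

        int    burst_bytes = burst_pkts * pkt_bytes;      /* 4,800 bytes  */
        double drain_secs  = burst_bytes / bytes_per_s;   /* ~1.25 s      */

        printf("a %d-byte burst takes ~%.2f s to drain at 38,400 bd, so\n",
               burst_bytes, drain_secs);
        printf("nearly all %d packets must sit in the transmit queue, since\n"
               "the burst arrives from Ethernet in a few milliseconds.\n",
               burst_pkts);
        return 0;
    }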


Is there any way to modify this queue length or size?
If it is hard coded, could it be modified for the next DNAS version (2.1)?

thanks for your replies 
regards PJ 
3431.7. by IROCZ::D_NELSON (Dave Nelson LKG1-3/A11 226-5358) Tue Feb 18 1997 13:51 (37 lines)
RE: .6

> They don't want to change their application right now

Customers never want to change their applications, do they?  That costs
money, and far better that they can get their vendors to spend the money
to solve the customer's problem.  Sigh.  We, too, have limited resources...
 
> and they complain that it should be possible to change the transmit buffer
> queue size or length for the async line, so that it can buffer as many as
> 32 UDP requests of 150 bytes each with no request loss

Anything is possible, with time and money.

I will at least look at what we have now.  Keep in mind that statically
allocated buffers are allocated equally for all ports.  On a DECserver 900TM
that's 32 times the buffers for one port.  32 x 32 x 150 = 153,600 bytes.
All I can say at this point is that we'll look at the problem.

> Is there any way to modify this queue length or size?

Sure.  We could change the software to allocate more memory to each port.
That may be an option now that DNAS requires at least 4 MB of RAM.  It 
wasn't such a good idea when we were supporting DNAS in 2 MB of RAM.
 
> If it is hard coded, could it be modified for the next DNAS version (2.1)?

It is hard coded, but V2.1 is already shipping (in limited release).  It will 
be included with the DECserver 900MC modules.  The next general release of 
DNAS is V2.2, and we're fast approaching functionality freeze (prior to 
system test).  It is not clear whether this enhancement can be included in 
V2.2.  That depends on a number of factors.

Regards,

Dave

3431.8. "what are the sizes right now?" by TOOSRV::SELLES () Wed Feb 19 1997 13:53 (17 lines)
Thanks, Dave, for your reply.

Now the customer is pressing us (as usual) and is asking for as much as
20 Kbytes per port (covering a burst of as many as 120 requests).

He is also asking what buffering values the product uses right now
(for the transmit and receive queues on an async port).

Could they be patched in V2.1?

Is it possible to include this request in V2.2, and what is the
expected timeframe for that version?

thanks again for your quick answers

regards PJ
3431.9. "Decserver number of buffers and sizes for SLIP" by TOOSRV::SELLES () Wed Apr 09 1997 14:30 (9 lines)
Hello,

As a follow-up to note 3431: in order to tune their application, the
customer needs to know the number of input/output buffers for SLIP
connections, and the buffer sizes, for the DECserver 900TM and the
DECserver 90M with DNAS V2.0.

thanks for your replies 

3431.10. "Well, here's what I've discovered..." by IROCZ::D_NELSON (Dave Nelson LKG1-3/A11 226-5358) Fri Apr 18 1997 20:56 (55 lines)
RE: .9

> As a follow-up to note 3431: in order to tune their application, the
> customer needs to know the number of input/output buffers for SLIP
> connections, and the buffer sizes, for the DECserver 900TM and the
> DECserver 90M with DNAS V2.0.

Sorry to take so long on this.  We are very busy, and I had to do some
research to answer this question.  Thus it was put off until "later".

The DNAS code uses mbuffers, since it is based on BSD Unix V4.3.  We
divide mbuffers into "pools".  A pool has the following properties:
a minimum number of buffers (that belong to that pool and thus are
guaranteed to be available for allocation), a maximum number of buffers
(that may be allocated from the pool) and a "parent" pool (from which all
buffers above the minimum but below the maximum may be allocated).

Up until DNAS V1.5 mbuffers were 128 bytes in size and could contain 110
bytes of user data.  For DNAS V2.0 and higher mbuffers were increased to
512 bytes in size and can contain 494 bytes of user data.  However, at the
same time the number of mbuffers in the system was decreased, so that the
total memory allocated to mbuffers did not quadruple.

Each SLIP port has a pool for transmit.  When datagrams are forwarded from the
Ethernet interface to a SLIP interface, the mbuffers in the IP receive pool
for the Ethernet interface are "transferred" to the SLIP interface's
transmit pool.  (No data is copied; this is bookkeeping only.)  If the
quota for the SLIP transmit pool is exceeded, the packets are dropped.

In DNAS V2.0 the minimum SLIP transmit pool size (for each port) is zero,
the maximum size is 48 and the parent pool is the general pool.  This means
that up to 48 mbuffers can be allocated for each SLIP port from the 
general mbuffer pool (assuming they are available).  This provides for up
to 23,712 (23K) bytes of data, including all headers.  In DNAS V1.5 this
would have been 5,280 (5K) bytes of data.  Note that 48 mbuffers are not
guaranteed.  Allocation depends on the state of the general pool.  For a
single SLIP port, on an otherwise idle DECserver, this shouldn't be a
problem.
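As a toy model of those pool rules (an illustrative sketch in C, not the
DNAS source; the general-pool size used below is an assumption):

    /* A pool is guaranteed its minimum, may borrow from its parent up to
     * its maximum, and allocation fails (the packet is dropped) beyond
     * that.  Illustrative sketch only, not the DNAS implementation.     */
    #include <stdio.h>

    struct pool {
        int  min;            /* buffers guaranteed to this pool             */
        int  max;            /* most buffers this pool may ever hold        */
        int  held;           /* buffers currently allocated to this pool    */
        int *parent_free;    /* free count of the parent (general) pool     */
    };

    /* Take one buffer for pool 'p'; returns 1 on success, 0 on drop. */
    static int pool_alloc(struct pool *p)
    {
        if (p->held >= p->max)
            return 0;                          /* per-port quota exceeded   */
        if (p->held >= p->min) {               /* above the guaranteed part */
            if (p->parent_free == NULL || *p->parent_free <= 0)
                return 0;                      /* general pool exhausted    */
            (*p->parent_free)--;
        }
        p->held++;
        return 1;
    }

    int main(void)
    {
        int general_free = 200;                   /* assumed, for illustration */
        struct pool slip_tx = { 0, 48, 0, &general_free };  /* V2.0 SLIP values */

        int queued = 0, dropped = 0;
        for (int pkt = 0; pkt < 60; pkt++)        /* offer 60 packets at once   */
            pool_alloc(&slip_tx) ? queued++ : dropped++;

        printf("%d queued, %d dropped (max %d mbufs per SLIP port)\n",
               queued, dropped, slip_tx.max);
        return 0;
    }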

We did discover that the guaranteed socket pools, created whenever a socket
is opened, were not appropriately "adjusted" to account for the larger mbuf
size, but smaller total number of mbufs in DNAS V2.0.  It is possible that 
this bug is causing your SLIP port to not be able to allocate the full 48 
mbuf maximum before the general pool is depleted.  This would only be the 
case in extreme conditions, when many DECserver sessions (hence sockets) 
were open concurrently.  But it's a possibility.  If this is your case, there
is a "patch" image available.

We have not made any adjustments to the mbuffer pooling for SLIP in DNAS V2.2.
We have no current plans to make the pool sizes user manageable.

Regards,

Dave

3431.11. "Oh, yes... One more important point." by IROCZ::D_NELSON (Dave Nelson LKG1-3/A11 226-5358) Fri Apr 18 1997 21:04 (20 lines)
RE: .8

> Now the customer is pressing us (as usual) and is asking for as much as
> 20 Kbytes per port (covering a burst of as many as 120 requests).

In interpreting the data I gave in .10, be aware that the 23K (max) of
buffering assumes that UDP packets fill mbufs with no wasted space.
Recall that mbufs come in quanta of 512 bytes (with 18 bytes of overhead).
If your 120 requests are in 120 UDP packets, then you'll still fail.
The DNAS code never puts more than one UDP packet in one mbuf (or mbuf
chain).  Mbufs are chained together for packets larger than 494 bytes.

If you ever have more than 48 individual UDP packets (each less than
494 bytes) on the SLIP interface queue at one time, then packets will be
dropped, even though the total data is less than 23K.
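To make that packet-count limit concrete (illustrative arithmetic, not DNAS
code; the 120 x 150-byte burst is the one asked about in .8):

    /* Any packet of 494 bytes or less occupies one whole mbuf, so the
     * 48-mbuf SLIP transmit maximum limits packet count, not total bytes.
     * Illustrative only.                                                  */
    #include <stdio.h>

    #define MBUF_DATA    494   /* user data per 512-byte mbuf (DNAS V2.0+) */
    #define SLIP_TX_MAX   48   /* max mbufs in a SLIP transmit pool (V2.0) */

    static int mbufs_for_packet(int bytes)
    {
        /* one mbuf per packet, chained only above 494 bytes of data */
        return (bytes + MBUF_DATA - 1) / MBUF_DATA;
    }

    int main(void)
    {
        const int pkts = 120, pkt_bytes = 150;     /* burst asked about in .8 */
        int mbufs = pkts * mbufs_for_packet(pkt_bytes);   /* 120 mbufs        */
        int bytes = pkts * pkt_bytes;                     /* 18,000 bytes     */

        if (mbufs <= SLIP_TX_MAX)
            printf("%d packets (%d bytes) fit in the queue (%d mbufs)\n",
                   pkts, bytes, mbufs);
        else
            printf("%d packets (%d bytes) need %d mbufs; packets beyond the\n"
                   "first %d are dropped even though %d bytes is under 23K\n",
                   pkts, bytes, mbufs, SLIP_TX_MAX, bytes);
        return 0;
    }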

Regards,

Dave

3431.12. "thanks for the detailed explanations" by TOOSRV::SELLES () Wed Apr 30 1997 17:35 (6 lines)
I will check with the customer whether they can tune their application
so they don't saturate the buffer pools, and how many ports are used
for the separate DEChub 900s.
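If it helps that discussion, here is a minimal sketch of the kind of pacing
the earlier replies point toward (illustrative C, not the customer's
application; send_one_request() is a placeholder for whatever their SNMP
code actually does):

    /* Space the requests so they never queue up on the DECserver faster
     * than the 38,400 bd SLIP line can drain them.  Illustrative only.  */
    #include <stdio.h>
    #include <unistd.h>    /* usleep() */

    #define PKT_BYTES   150
    #define LINE_BAUD   38400
    #define BYTES_PER_S (LINE_BAUD / 10)    /* 8N1 async: 10 bits per byte */

    static void send_one_request(int index)
    {
        printf("request %d sent\n", index);   /* placeholder for the real SNMP GET */
    }

    int main(void)
    {
        const unsigned int per_pkt_us =
            1000000u * PKT_BYTES / BYTES_PER_S;   /* ~39 ms per request */

        for (int i = 0; i < 32; i++) {            /* the 32-request burst from .6 */
            send_one_request(i);
            usleep(per_pkt_us);                   /* never outrun the SLIP drain rate */
        }
        return 0;
    }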

thanks again for the details 
PJ