
Conference 7.286::fddi

Title:FDDI - The Next Generation
Moderator:NETCAD::STEFANI
Created:Thu Apr 27 1989
Last Modified:Thu Jun 05 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2259
Total number of notes:8590

522.0. "Using DECbridge 6nn/FDDI to 'sub-segment' a VAXcluster segment" by ARGYLE::LEMONS (And we thank you for your support.) Mon Mar 30 1992 17:43

[This note is posted in the MOUSE::FDDI and CSSE32::CLUSTER conferences]

Please opine on this idea, born to solve a problem on several of our LAN
 segments (see below for diagram).  

In HLO, each of our VAXclusters (boot nodes and satellites) is connected
to its own private segment.  These segments are separated from the backbone
by a LANbridge that (1) filters SCA (VAXcluster) and MOP traffic, and (2)
forwards only those packets that are destined for the 'other side'.
This isolates the backbone from much traffic that would otherwise have
saturated it years ago.

Several of these segments routinely see 85% utilization spikes and 60%+
steady utilization during the day.  We're planning to add more (and more
powerful) systems, including Alpha, and feel we're close to driving the
segment over the cliff.

The idea is to take the VAXcluster boot nodes and satellites (which today
are connected to the same segment), and create 'sub-segments', placing ~15-20
satellites on each sub-segment and creating one sub-segment for each
boot/server node.

The segment (which will become a 100 Mb/s FDDI ring) will easily handle the
spikes that today drive our 10 Mb/s Ethernet to 85% utilization (of which 66%
is SCA traffic and 22% is DECnet, much of it DECwindows-related).  And the
utilization of each sub-segment will be much less than today, as the bridges
will only pass out packets that are bound off the sub-segment, and allow in
packets that are destined for nodes on the sub-segment.
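
Here's a rough back-of-the-envelope sketch of what I expect (purely
illustrative Python; the 85%/66% figures are from above, but the even
three-way split and the "everything crosses a bridge" worst case are my
assumptions, not measurements):

    # Back-of-the-envelope load projection (illustrative assumptions only).
    ethernet_mbps = 10.0
    fddi_mbps     = 100.0
    peak_util     = 0.85                       # observed spike on today's segment
    peak_mbps     = peak_util * ethernet_mbps  # ~8.5 Mb/s total offered load
    sca_mbps      = 0.66 * peak_mbps           # SCA portion of the peak (from above)

    # Worst case: the entire peak ends up crossing the FDDI ring.
    print("FDDI ring at peak: %4.1f%% utilized" % (100 * peak_mbps / fddi_mbps))
    print("  of which SCA:    %4.1f%%"          % (100 * sca_mbps  / fddi_mbps))

    # Assume the peak splits evenly across the three satellite sub-segments.
    per_subseg_mbps = peak_mbps / 3
    print("Each sub-segment at peak: %.1f%% of a 10 Mb/s Ethernet"
          % (100 * per_subseg_mbps / ethernet_mbps))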

Great theory, yes?  But do you see any holes in this?  For instance, we have
a lingering concern about the SCA traffic moving through two bridges, and the
latency that this might cause.  The last thing we need is to construct a
topology that INCREASES latency!

You might be thinking, "But why don't you just connect everything directly
to the FDDI ring?"  Simple:  this is not an option, as (1) Digital-made FDDI
devices do not exist for most of the systems in use in our businesses today
and (2) even if they did, the cost to upgrade/outplace the existing controllers
from Ethernet to FDDI would be prohibitively high (based on a quick look at
what these devices cost externally).  Most of the workstations are VAXstation
3100s; some of the boot nodes are VAX 8800/8700s.

A late-breaking thought I've had is to place one boot node on each of the
satellite sub-segments, and 'force' those satellites to be booted and
disk-served by that boot node.  I don't know how to make this happen, however.

Please share your ideas and concerns with me, before we order the stuff
(hopefully, within the next week).

Thanks!
tl


                               ---
                              /   \   backbone FDDI ring
                              \   /
                               ---
                                |
                          -----------------
                          | DECbridge 6nn |
                          |_______________|
                                |
                          -----------------
                          | DECbridge 6nn |  bridge filters SCA, MOP traffic
                          |_______________|
                                |
                               ---
                              /   \    VAXcluster segment FDDI ring
                              \   /
                             / --- \
           -----------------/       \----------------
           | DECbridge 6nn |        | DECbridge 6nn | bridges forward SCA,
           |_______________|        |_______________|   MOP traffic
              |     |     |           |     |     |
              |     |     |           |     |     |
       three segments with          three segments with
       ~15-20 satellites            one boot node
       per segment                  per segment
T.R    Title    User (Personal Name)    Date    Lines
522.1  KONING::KONING "Paul Koning, NI1D"  Mon Mar 30 1992 19:25  (7 lines)
Sounds reasonable.  Watch out for the bandwidth of the 6xx bridges; it isn't
quite the full 30 Mb/s you might hope for.

Latency shouldn't be an issue; another ms or two isn't going to break the 
cluster.

	paul
522.2  "DECNIS for back-to-back DECbridge"  BONNET::LISSDANIELS  Mon Mar 30 1992 19:54  (8 lines)
Shortly you should be looking at the DECNIS 600 with FDDI interfaces
to replace the back-to-back DECbridges for FDDI-FDDI connectivity.

It will route TCP/IP, DECnet and OSI at 50,000 packets/sec and bridge
all other traffic.  FCS is Sept/Oct, but test units might be available
as early as July...

Torbjorn
522.3  "There is no denying that Latency affects cluster performance..."  STAR::SALKEWICZ "It missed... therefore, I am"  Tue Mar 31 1992 15:35  (29 lines)
    Latency should not be treated so lightly.  Latency has become one
    of the two criteria specified in the cluster SPD (throughput being
    the other) for determining supportable LAVC (LAN) cluster
    configurations.  Statements like "another ms or two of latency
    will not break the cluster" are probably true; however, the SPD
    is also worded to guarantee certain performance levels.  The two
    (latency and performance) are directly related, so if you vary
    one, you vary the other.  If you set up configurations with high
    latency, perhaps the cluster will not "break", but it is very easy
    to get into a situation where the performance does not live up
    to the level expected or specified in the SPD, which to a paying
    customer is just as good as broken.  And one unhappy customer is
    just as bad for DEC as another.
    
    	There is also the distinct possibility that another ms or two
    of latency will indeed break the cluster.  This would have to be
    a configuration that is really pushing all the other limits, but
    it is possible.
    
    It seems in general, Paul, that the cluster protocol does not adhere
    to your understanding of how protocols should be implemented.  I do
    not claim to be an expert on the LAVc protocol, but your replies
    based on a "general understanding of the way things ought to be" are,
    more often than not, wrong.  Admittedly, this is because of a poorly
    designed protocol, but the level of misinformation is potentially
    dangerously high, and I ask that you avoid extrapolating in this area.
    
    							/Bill
    
522.4  "The road to minimal network latencies"  ORACLE::WATERS "I need an egg-laying woolmilkpig."  Tue Mar 31 1992 16:20  (23 lines)
    VAXcluster performance isn't just a question of protocol design.
    If Paul were to say "an extra 10 ms of average latency doesn't make
    much difference", we'd club him regardless of whether the comment was
    applied to VAXclusters.  Even for file-level servers, an extra 10 ms
    of latency will greatly impact the speed of, say, a string search
    through thousands of files.

    There are but two speed constants in a system that reasonably serve
    to hide other delays: disk seek time, and network packet duration.
    For Ethernet, a useful ~1000-byte message lasts about 1 ms on the wire.
    So even 1 ms of latency can slow down applications significantly, if
    the CPUs are fast enough to exchange Ethernet packets at line rate.
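
    (The ~1 ms figure is just the frame's time on the wire; a quick check,
    for illustration only:)

        # Time on the wire for a ~1000-byte frame on 10 Mb/s Ethernet
        frame_bits   = 1000 * 8
        ethernet_bps = 10e6
        print("%.2f ms" % (frame_bits / ethernet_bps * 1e3))  # ~0.8 ms, call it 1 ms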

    All of the newly purchased computers in HLO can (and should) be connected
    directly to FDDI segments: VAXstation 4000/60, VAX 6000-500, VAX 6000-600,
    Laser, DECstation 5000/xx, etc.  All of the equipment that operates at
    the same line speed (100 Mb/s for FDDI) enjoys an extra latency benefit:
    no store-and-forward delay in the communications equipment.  When
    talking from one FDDI host to another, the delay barely increases as
    you go through extra multi-ported FDDI bridges (if all FDDI rings in
    the path are very lightly loaded).  Of course, you can't benefit from
    this principle until multi-ported FDDI bridges with "cut-through" hit
    the market.  It's not a "Digital has it now" story, but you can plan to
    do it some day.
522.5  KONING::KONING "Paul Koning, NI1D"  Tue Mar 31 1992 19:30  (7 lines)
It's certainly true that a ms of latency has an impact on your
performance.  How much?  Don't know.  There are plenty of other
significant delays in the system, from protocol processing overhead to
disk access times.  Also keep in mind that the 1 ms for a bridge is under
pretty high load; it will often do the job significantly faster.

	paul
522.6  "Please clarify the delay of bridges"  ORACLE::WATERS "I need an egg-laying woolmilkpig."  Tue Mar 31 1992 20:27  (12 lines)
>disk access times.  Also keep in mind that the 1 ms for a bridge is under
>pretty high load; it will often do the job significantly faster.

    Is that true going from FDDI to Ethernet through a 10-100 bridge?
    (It certainly can't be true going from Ethernet to FDDI.)
    I assumed that the DECbridge 6xx never forwards a packet before fully
    receiving it.  Receiving ~1000 bytes over Ethernet takes ~1 ms.
    The application in .0 is concerned about the latency going from
    Ethernet, to FDDI, and back to another Ethernet.  No matter how fast
    the bridge is, this will take at least 1 ms longer than going straight
    from an Ethernet host to an Ethernet workstation.  That's not a big
    deal, but 10 ms would be a disaster.
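
    (For illustration, here is the minimum added delay from the two
    store-and-forward hops, assuming ~1000-byte frames and zero bridge
    processing time; these are my round numbers, not measurements:)

        # Ethernet -> bridge -> FDDI -> bridge -> Ethernet, store-and-forward,
        # ignoring bridge processing time entirely (illustrative only).
        frame_bits = 1000 * 8
        eth_bps, fddi_bps = 10e6, 100e6

        direct = frame_bits / eth_bps      # one Ethernet transmission
        via    = frame_bits / eth_bps + frame_bits / fddi_bps + frame_bits / eth_bps
        print("extra one-way delay: %.2f ms" % ((via - direct) * 1e3))  # ~0.9 ms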
522.7  "what latency means to me"  QUIVER::HARVELL  Wed Apr 01 1992 13:17  (19 lines)
    re .6
    
    Because the latency of bridges and routers is usually measured from the
    last bit in to the first bit out of a packet, the reception of the frame
    is usually not included in the latency values.  So you are correct in
    saying that there is an ~1 ms cost (for an ~1000-byte packet) just from
    going through any store-and-forward device.  Bridge latency is on top
    of that delay.
    
    If you take two stations that are on the same LAN and then separate
    them with a bridge, a router, or just more cable (due to the increased
    propagation delay of the cable; you have to go to real extremes to
    measure this one), you will see a drop in performance in any
    request/response protocol between those two stations.  What an FDDI
    backbone buys you is not increased performance between any two stations
    but the ability to have many more pairs of stations communicating at
    the same time.
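
    (To put a number on that drop, a purely illustrative sketch; the
    round-trip figures are assumptions, not measurements:)

        # A strictly serial request/response protocol completes 1/RTT exchanges
        # per second, so added per-hop delay shows up directly as lost rate.
        base_rtt_ms = 2.0    # assumed: request + response on the same LAN
        extra_ms    = 1.7    # assumed: added store-and-forward delay, each way

        for added in (0.0, extra_ms):
            rtt = base_rtt_ms + 2 * added
            print("RTT %.1f ms -> %.0f exchanges/sec" % (rtt, 1000.0 / rtt))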
    
    If you really need high performance between two stations they should be
    located on the same LAN.
522.8  "Yes and no and maybe!"  STAR::SALKEWICZ "It missed... therefore, I am"  Wed Apr 01 1992 19:05  (17 lines)
    
    Well, having direct FDDI connections between systems *will certainly*
    improve performance of request/response protocols when compared to
    the performance over Ethernet.  This assumes equivalent levels
    (kbytes/sec, not 'utilization') of background traffic in both
    cases.  It also assumes that a reasonable number of large datagrams
    are being exchanged.  (Direct Ethernet latency is potentially much
    smaller than FDDI under load, and if the packets are small, latency
    is a heavier factor than throughput or bandwidth.)
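
    (A purely illustrative way to see the small-vs-large trade-off; the
    media-access waits below are made-up round numbers for a loaded LAN,
    not measurements, and per-frame overhead is ignored:)

        # One-way delay ~= media-access wait + time on the wire.
        def one_way_ms(nbytes, bps, access_wait_ms):
            return access_wait_ms + nbytes * 8 / bps * 1e3

        eth_bps, fddi_bps   = 10e6, 100e6
        eth_wait, fddi_wait = 0.2, 2.0   # assumed waits (ms) under load

        for nbytes in (100, 10000):      # one small packet vs. a large transfer
            print("%5d bytes: Ethernet %.1f ms, FDDI %.1f ms"
                  % (nbytes,
                     one_way_ms(nbytes, eth_bps, eth_wait),
                     one_way_ms(nbytes, fddi_bps, fddi_wait)))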
    
    I agree that this is not the typical motivation for migrating to FDDI.
    The ability to allow more nodes to use the LAN at once is more
    important to most customers than getting any two machines to really
    scream.
    
    								/Bill
    
522.9  "more questions"  CADSYS::LEMONS "And we thank you for your support."  Wed Oct 28 1992 14:59  (18 lines)
I'm bbbbbbbaaaaaaaccccccccckkkkkk, with a few more questions on this topic:

1) How can I tell what % of the SCA traffic contains 'data' as opposed to
'overhead'?

The scheme described in .0 is predicated on the assumption that most of the SCA
traffic is carrying data, and that a small percentage of SCA traffic is the
overhead of VAXcluster communication.  Does that sound like a valid assumption?
How can I measure this, short of breaking apart the packets?

2) In a VAX 6000 system running VMS V5.5+, can both an Ethernet and FDDI
interface device be active at the same time?  If so, what are the restrictions
(if any)?

Thanks!
tl

[cross-posted in the CLUSTER and FDDI conferences]