
Conference mvblab::sable

Title:SABLE SYSTEM PUBLIC DISCUSSION
Moderator:COSMIC::PETERSON
Created:Mon Jan 11 1993
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2614
Total number of notes:10244

2151.0. "how many de500's in 2100A?" by ICELAN::JOHN () Wed May 15 1996 21:23

2151.1. "I've got the same issue." by NQOS01::nqsrv216.nqo.dec.com::Hobson (Rich Hobson 847-475-8960) Fri May 17 1996 16:16 (12 lines)
2151.2. by MSBCS::NEE () Thu May 23 1996 14:24 (1 line)
2151.3. "OK, so if it's supported..." by NQOS01::nqsrv229.nqo.dec.com::Hobson (Rich Hobson 847-475-8960) Thu May 30 1996 03:06 (12 lines)
2151.4. "What is the answer?" by NQOS01::16.29.16.107::Pellerin () Thu Feb 20 1997 21:54 (14 lines)
I have a customer who wants to use 4 DE500s in each 2100A in a 2-node 
TruCluster ASE for an NFS service cluster.

.0 and its replies did not yield an answer to these questions:

1. Can I use more than 2 (in my case 4) DE500s?

2. If not, why not?  What is the reason for the limitation?

My O/S will be Digital UNIX 4.0b.

Thanks,

 -BAP
2151.5. "Only 3 PCI slots are available!" by FOUNDR::FOX (David B. Fox -- DTN 285-2091) Fri Feb 21 1997 15:19 (6 lines)
    Sables only have 3 PCI slots.  So unless there is an operating system
    restriction, AND you don't have any other PCI options, I'd say you
    could have 3.  There are lots of ISA or EISA slots available but I
    don't think we have a FAST Ethernet solution for those busses.
    
    	David
2151.6. by AFW3::RENDA (Mike Renda dtn: 223-4663) Fri Feb 21 1997 15:50 (11 lines)
Although you have eight PCI slots in an AlphaServer 2100A, the following
recommendation applies with regard to the DE500-AA, based on test results:

UNIX and Windows NT test results recommend a maximum of 2 DE500-AAs for
performance and a maximum of 4 for connectivity on the 2100A system.

Support for 3 or 4 DE500-AAs on the AS2100A is only for customers who are
looking for additional connectivity or failover and are willing to accept
significantly reduced throughput rates in exchange, or for customers who do
not typically run heavy network loads.
2151.7. "OK but confused..." by NQOS01::16.29.16.109::Pellerin () Sun Feb 23 1997 12:56 (27 lines)
re .-1

Thanks for the reply.  

If I understand your note correctly, Digital internal tests show that beyond 2 
DE500s, the performance on a 2100A suffers.  Am I interpreting this 
correctly?  

I am a bit confused as to why, since the I/O bandwidth is rated at 667 MB/s 
(megabytes per second) and a single DE500 only imposes a maximum of 100 Mb/s 
(megabits per second), or about 12 MB/s (megabytes per second).  I don't 
understand how 2 DE500s (a total of around 24 megabytes per second) can 
overpower the 2100A backplane and I/O capability.

There is obviously something I am overlooking or do not understand about the 
way a network interface card affects a system as opposed to, say, a 20 MB/s 
KZPSA.  There seems to be no "guideline" for KZPSAs (except for the fact that 
they have to go in slots 4-8, I believe).  So, 4 KZPSAs will impose a 
theoretical maximum of 80 MB/s, right?  What is different?
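
For what it's worth, here is the back-of-the-envelope arithmetic as a small 
Python sketch (theoretical peak figures only, taken from the numbers above -- 
not measured throughput):

    IO_BANDWIDTH_MB_S = 667          # quoted 2100A I/O bandwidth

    def nic_load_mb_s(count, megabits_per_sec):
        # Convert aggregate NIC load from Mb/s to MB/s (8 bits per byte).
        return count * megabits_per_sec / 8.0

    de500_load = nic_load_mb_s(4, 100)   # 4 x 100 Mb/s = 50.0 MB/s
    kzpsa_load = 4 * 20.0                # 4 x 20 MB/s  = 80.0 MB/s

    for name, load in (("4 DE500s", de500_load), ("4 KZPSAs", kzpsa_load)):
        pct = 100.0 * load / IO_BANDWIDTH_MB_S
        print("%s: %.1f MB/s (%.1f%% of quoted I/O bandwidth)" % (name, load, pct))

Neither configuration comes anywhere near 667 MB/s on paper, which is exactly 
why the 2-DE500 guideline puzzles me.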

Can someone please explain why tests yield that more than 2 DE500s inhibit 
performance?  Is this "guideline" true for the 4000/4100 or 8xxx servers? 


Thanks,

 -BAP
2151.8. by TARKIN::LIN (Bill Lin) Sun Feb 23 1997 14:29 (27 lines)
    re: .7 by NQOS01::16.29.16.109::Pellerin
    
    Hi BAP,
    
    I'm not all that surprised by Mike's assertion in .6, and I can come up
    with some thoughts as to why it might be so.
    
    If I may speculate...
    
    One part of the I/O throughput equation that I think you may be
    forgetting is latency, together with the limits on how much
    prefetching and posted-writing one can do.  Packet sizes also come
    into play.  Apparently with only two DE500s, the system hardly ever
    runs out of data to send or write buffers to put data in.  Beyond that
    number, again apparently, one starts to run out of one of these
    critical resources for continuous data flow.
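
    To put a toy model behind that speculation (illustrative numbers
    only -- I have no real buffer counts for the 2100A bridge; POOL and
    PER_CARD_NEED below are pure assumptions):

        # Cards share a fixed pool of bridge prefetch/posted-write
        # buffers; a card streams at full rate only while it can keep
        # PER_CARD_NEED transfers in flight.
        POOL = 4            # bridge buffers shared by all cards (assumed)
        PER_CARD_NEED = 2   # buffers one DE500 needs for line rate (assumed)
        LINE_RATE = 12.5    # MB/s per DE500 at 100 Mb/s

        for cards in (1, 2, 3, 4):
            share = min(PER_CARD_NEED, float(POOL) / cards)
            per_card = LINE_RATE * share / PER_CARD_NEED
            print("%d card(s): %5.2f MB/s each, %5.2f MB/s total"
                  % (cards, per_card, per_card * cards))

    With those made-up constants the aggregate stops growing at two
    cards, which at least matches the shape of the recommendation in .6.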
    
    For efficient single-stream data flow, packet sizes need to be big,
    i.e. with low command overhead relative to data content.  However, this
    works against you when you need low latency to other devices.  It's a
    balancing act.
    
    Hope this helps and I hope I'm not too far off the mark.  ;-)
    
    Cheers,
    
    /Bill
2151.9. "ok - but what's different?" by NQOS01::nqsrv202.nqo.dec.com::Pellerin () Mon Feb 24 1997 15:51 (24 lines)
re: .-1

Thanks for the analysis.  I guess I'm after the reason why a 2100A won't 
operate as efficiently with more than 2 DE500s vs. a 4000- or 8000-class 
machine.  Since it's not backplane-speed related, why is a 4100 (in 
documents that I have seen, I think) allowed to have up to 4 DE500s while a 
2100A is recommended to have only 2? 

Bottom line - what is the big difference in the hardware or firmware (since 
Digital UNIX is Digital UNIX) that makes the 2100A less able to handle more 
than 2 DE500s?

Have we (Digital) actually tested more than 2 DE500s in a 4000-class server? 
(I know that's a question for the rawhide notesfile...)  If so, and if the 
4000-class servers can function OK with 4 DE500s, what is the significant 
difference between the 2100A and the 4000-class?  (I know the backplane speed 
is different, but I believe we have sufficient backplane speed in the 2100A, 
so...)

The discussion rambles on...

Regards,

 -BAP 
2151.10. "Bridges, etc." by WONDER::WILLARD () Wed Feb 26 1997 12:39 (37 lines)
	The AS4xxx has a rather low-latency path between the PCI(s)
	and host memory, and 64-bit (264 MMB/S) PCIs.  The AS2xxx has
	a relatively high-latency 32-bit (132 MMB/S) PCI, and also
	has a very limited number of buffers (for DMA pre-fetch).
	The AS8xxx has very long read latency, but to compensate for
	that, it has up to 12 32-bit PCIs and up to 36 PCI segments,
	allowing for lots of concurrency.
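
	(For reference, those quoted peaks fall straight out of
	clock x width -- a burst-peak sketch only, which says nothing
	about sustained rates:

	    clock_mhz = 33                                    # standard PCI clock
	    print("32-bit PCI peak:", clock_mhz * 4, "MB/s")  # 33 x 4 bytes = 132
	    print("64-bit PCI peak:", clock_mhz * 8, "MB/s")  # 33 x 8 bytes = 264

	and that burst peak is all they are.)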

	Like all vendors, we quote PCI performance in MMB/S, which means
	Marketing MegaBytes/Second.  If you think the MMB/S spec lets
	you predict performance, you have allowed yourself to be conned
	by our HypeWare.  As the multi-DE500 case shows, there is far
	more to PCI performance than MMB/S.

	The short answer to your question about the reason for the
	PCI performance difference between platforms is:  bridge design.
	The PCI-host bridges are totally different for these three
	platforms, in ways that strongly affect DMA (and other)
	performance.  And, to complicate matters, some PCI widgets are
	far more sensitive to these differences than others in several
	ways.  As it happens, the KZPSA is pretty good (insensitive),
	due to its FIFO+buffers design; the DE500 is not good, due to
	being optimized for cheap little PCs; the KZPAA is absolutely
	terrible.
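
	To put toy numbers behind the sensitivity claim (made-up FIFO
	depths, burst sizes, and latencies -- not real device specs):

	    def rate_mb_s(outstanding, burst_bytes, latency_us):
	        # Sustained DMA ~ in-flight bytes / latency, capped at the
	        # 132 MB/s 32-bit burst peak (roughly Little's law).
	        return min(132.0, outstanding * burst_bytes / latency_us)

	    for latency in (0.5, 1.0, 2.0, 4.0):     # bridge read latency, us
	        deep = rate_mb_s(16, 64, latency)     # "KZPSA-like": deep FIFO
	        shallow = rate_mb_s(2, 64, latency)   # "DE500-like": few buffers
	        print("%.1f us: deep %5.1f MB/s, shallow %5.1f MB/s"
	              % (latency, deep, shallow))

	The deep-FIFO widget stays pegged at the bus limit as latency
	grows, while the shallow one degrades almost linearly -- that is
	the shape of the difference between these widgets.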

	Now, I'm sure that one of you noters will react to the above with a
	demand that we explain how our bridges and widgets work and
	interact; we've heard this all before.  We do not have a process
	in place to capture enough detail about the workings of bridges
	and widgets and drivers (D'U, O'V, WNT) to be able to predict 
	performance of multi-widget configurations, and IMHO we will never
	be able to justify the expenditures necessary to get one.  So,
	don't demand what you won't fund.

Sorry for sounding harsh and pessimistic, but this is deja vu all over again.

Cheers, Bob
2151.11. "Thanks" by NQOS01::nqsrv422.nqo.dec.com::Pellerin () Thu Feb 27 1997 11:46 (10 lines)
Thanks.  That was the info I was looking for.  I'll encourage my customer to 
try 2 DE500s, and if need be, experiment with adding DE435s (or run additional 
DE500s in 10 Mb/s mode) to gain incremental performance - and watch closely.  
I'll post any revelations.
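
As a sanity check on what that buys (theoretical line rates only, and 
assuming the 10 Mb/s cards add little pressure on the bridge -- both 
assumptions, not test data):

    fast = 2 * 100   # two DE500s at 100 Mb/s
    slow = 2 * 10    # two more cards (DE435 or slowed DE500) at 10 Mb/s
    total_mbit = fast + slow
    print("%d Mb/s aggregate = %.1f MB/s" % (total_mbit, total_mbit / 8.0))

i.e. about 27.5 MB/s theoretical aggregate.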

Thanks to all for responding.

Regards,

 -BAP