Conference mvblab::sable

Title:SABLE SYSTEM PUBLIC DISCUSSION
Moderator:COSMIC::PETERSON
Created:Mon Jan 11 1993
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2614
Total number of notes:10244

2546.0. "2100 / NT Memory Bottleneck Threshold Number ?" by MSE1::mse_chenis.mse.tay.dec.com::chenis (Kenneth Chenis) Mon Mar 03 1997 21:44

Short question - Should I be concerned that an average
of 12 pages/second on an Alpha 2100 indicates a memory
bottleneck?


Long Version -

Does anyone know if there are updated performance threshold
numbers for the 2100 server - that is, updated from what
Microsoft publishes in their Optimizing Windows NT guide?

For instance, they state that more than 5 memory pages/second
typically indicates a memory bottleneck.

Seeing as this is a per-second counter measurement, does it
make sense that the faster the processor, I/O, and memory
subsystem, the higher the sustainable count?  In other
words, would thresholds like this be higher for Alpha
processors and/or "larger" Digital systems (2100, 4100, etc.)?

If so, do we have characterization reports that indicate what
the real bottleneck thresholds are, or do we have to roll
our own based on specific hardware configurations?

Any help greatly appreciated !

Thanks,

Ken Chenis
T.R  Title  User  Personal Name  Date  Lines
2546.1  TARKIN::LIN  Bill Lin  Mon Mar 03 1997 22:16  10
    re: MSE1::mse_chenis.mse.tay.dec.com::chenis
    
    Ken,
    
    I'm speaking entirely out of turn here, but my first impression is that
    the "memory bottleneck" about which Microsoft writes is the inadequacy
    of the AMOUNT of physical memory and not the speed of
    processors/memory.
    
    /Bill
2546.2  Depends where they're resolved  PERFOM::HENNING  Tue Mar 04 1997 14:58  35
    It depends on how those pages are being resolved:
    
    * 12 pages/second resolved in main memory would not hurt 
    
    * 12 pages/second resolved from disk with 1 io/sec wouldn't hurt much
    	- see perfmon/Add/Memory/Pages Input/sec and Pages Output/sec
    
    * 12 pages/second resolved by doing 12 disk io/sec may start being
      fairly noticeable!
    	- see perfmon/Add/Memory/Page Reads/sec and Page Writes/sec
    	  (a rough way to tell these cases apart is sketched just below)
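
    Purely as an illustration (mine, not anything official), here is a
    little Python sketch of how one might tell those cases apart from
    sampled perfmon values.  The counter names are the real Memory-object
    counters; the sample numbers and the 5-I/O cutoff are made up.

        # Hypothetical averages of perfmon "Memory" object counters over
        # some interval.  The values are invented for illustration only.
        sample = {
            "Pages/sec":       12.0,  # total pages resolved per second
            "Page Reads/sec":   2.0,  # disk reads issued to fault pages in
            "Page Writes/sec":  1.0,  # disk writes issued to flush pages out
        }

        def paging_disk_ops(counters):
            # Pages satisfied from memory (soft faults) cost no disk I/O;
            # only Page Reads/sec + Page Writes/sec actually hit a spindle.
            return counters["Page Reads/sec"] + counters["Page Writes/sec"]

        disk_ops = paging_disk_ops(sample)
        print("pages/sec resolved:       %.1f" % sample["Pages/sec"])
        print("paging disk I/Os per sec: %.1f" % disk_ops)
        if disk_ops < 5:                      # arbitrary illustrative cutoff
            print("mostly resolved in memory - probably harmless")
        else:
            print("real paging I/O - keep an eye on the paging disk(s)")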
    
    A (simple) view of this is that disks commonly spin 60 times/sec,
    although more modern ones do 90/sec.  But suppose you have 60.  If your
    workload to the disk is doing ANY seeking, you'll be lucky to sustain
    30 i/o sec to a single drive.  So if you have one drive that is being
    used for paging (i.e. you haven't spread it over multiple spindles) and
    if you're doing 12 i/o sec, that is a substantial fraction of its
    capacity.  You might start feeling it.
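
    To put rough numbers on that back-of-the-envelope argument (same
    figures as above, wrapped in a few lines of Python; the names and the
    0.5 seek penalty are just my shorthand):

        ROTATIONS_PER_SEC = 60    # common spindle speed; newer drives do 90
        SEEK_PENALTY      = 0.5   # with any seeking, figure about half the rotational rate
        PAGING_IOS        = 12    # the 12 paging I/Os per second under discussion
        SPINDLES          = 1     # all paging on a single drive

        sustainable_ios = ROTATIONS_PER_SEC * SEEK_PENALTY  # ~30 random I/Os/sec per drive
        load_per_drive  = float(PAGING_IOS) / SPINDLES      # paging I/Os landing on each spindle
        fraction_busy   = load_per_drive / sustainable_ios  # ~0.4, i.e. 40% of one drive

        print("one drive sustains roughly %d random I/Os/sec" % sustainable_ios)
        print("paging alone uses about %.0f%% of that drive" % (fraction_busy * 100))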
    
    Different question: does your system have other useful work to do
    whilst page faults are resolved?  Or are you running just the single
    application?  If it's just the one application, the paging waits will
    be more noticeable.
    
    On the other hand, suppose you're running a system with lots of
    interesting work going on, and one process can pick up and run whilst
    another one waits for paging IO.  Suppose you've also spread the
    pagefiles out over several spindles.  In that case, 12 io/sec might not
    be noticeable at all.
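
    (In the little sketch above, that's just raising SPINDLES: spread the
    same 12 paging I/Os/sec over, say, three drives and each spindle sees
    only about 13% of its capacity.)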
    
    As .1 points out, this sort of memory bottleneck gets relieved when
    you buy more memory.  That's easier than buying a whole new 4100 with
    *faster* memory.
    
    /john henning
     csd performance group