
Conference eps::oracle

Title:Oracle
Notice:For product status see topics: UNIX 1008, OpenVMS 1009, NT 1010
Moderator:EPS::VANDENHEUVEL
Created:Fri Aug 10 1990
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:1574
Total number of notes:4428

1518.0. "Problem with multiblock reads" by BRADAN::ELECTRON (I would rather be fishing.) Thu Feb 20 1997 06:24

    Hi All,
    
    I am working on a system where I am using Oracle 7.3.2.1 and 7.3.2.2. I
    have DB_BLOCK_SIZE set to 16K and DB_FILE_MULTIBLOCK_READ_COUNT set
    to 32. Oracle starts and gives no errors, but when I look in the
    v$parameter table DB_FILE_MULTIBLOCK_READ_COUNT is set to 8; it ignores
    what I put in the init.ora file. It seems 7.3 will not read more than
    128K in a single multiblock read. Has anyone else seen this?
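    
    A minimal sketch of what I mean (standard parameter names; the SELECT
    is just one way to check, "show parameter" reports the same value):
    
        # init.ora
        db_block_size = 16384
        db_file_multiblock_read_count = 32
    
        -- after restarting the instance:
        SELECT name, value
          FROM v$parameter
         WHERE name = 'db_file_multiblock_read_count';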
    
    Regards
    Electron 
1518.1. "same thing here" by ALFAM7::GOSEJACOB Thu Feb 20 1997 08:46
    re .0
    Hmmm, I just tried DB_BLOCK_SIZE=8k, DB_FILE_MULTIBLOCK_READ_COUNT=32 and
    restarted the database (7.3.2.2). Similar result here: when I do a show
    parameter db_file_multiblock_read_count I get 16 (16 * 8k == 128k).
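    
    Both data points fit an apparent 128k ceiling. As a rough sketch (just
    an inference from .0 and the test above, I haven't seen it documented
    anywhere):
    
        -- apparent value = least(requested count, 131072 / db_block_size)
        SELECT LEAST(32, 131072/16384) AS count_at_16k_blocks,  -- gives 8,  as in .0
               LEAST(32, 131072/8192)  AS count_at_8k_blocks    -- gives 16, as above
          FROM dual;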
    
    On the other hand: don't worry too much about this limit. The Unix
    kernel itself will not issue I/Os larger than 64k anyway.
    
    	Martin
1518.2. "What if ?" by BRADAN::ELECTRON Thu Feb 20 1997 11:53
    Hi Martin,
    
    What if I am using raw devices and Oracle is reading directly?
    
    Regards
    Electron
1518.3. "nope, doesn't make a difference" by ALFAM7::GOSEJACOB Thu Feb 20 1997 12:11
    re .2
    There are various layers in the kernel (LSM, for example; I don't have
    a clue which others) that limit I/Os to 64k. Which means that in this
    respect it doesn't really make any difference whether you put your
    Oracle data files on raw devices or on a file system.
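    
    To put a number on the argument (a sketch, not a measurement):
    
        -- a single 128k Oracle multiblock read issued through a 64k
        -- kernel/LSM path ends up as multiple transfers either way
        SELECT CEIL(131072/65536) AS transfers_per_128k_read  -- 2
          FROM dual;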
    
    	Martin 
1518.4. "higher limit would help" by WONDER::REILLY (Sean Reilly, Alpha Servers, DTN 223-4375) Fri Feb 21 1997 10:21
    Raw device reads can go over 64k, so if you are not using LSM, etc.,
    there can be benefits.

    In fact, if my information is correct, the TPC-D audit used 128K
    multiblock reads only because that was an Oracle limit.  They would
    have gone higher if it had been possible.

    On most I/O subsystems I'll agree that bandwidth is approaching
    saturation at around 64K transfer sizes, but there are still gains
    from going higher.  This may not be important for a typical customer
    application, but it helps things like benchmarks.

    - Sean
1518.5. "We also used LSM" by BRADAN::ELECTRON Tue Feb 25 1997 07:50
    Hi,
    
    We are using raw devices and LSM. We patched VOLINFO.MAX_IO to get LSM
    to write greater than 64k.
    
    Thanks
1518.6. by EPS::VANDENHEUVEL (Hein) Tue Mar 25 1997 13:42
    
    [catching up with note file. sorry for replying so late]
    
 >   We are using raw devices and LSM. We patched VOLINFO.MAX_IO to get LSM
 >   to write greater than 64k.
    
    Did you get a chance to quantify the results from this?
    
    I always thought it was sort of nice to have LSM break up a single
    large IO from the application (Oracle) into multiple concurrently
    executing parallel IOs. This would only be useful, of course, where
    the hardware is set up to allow for parallel activity as well.
    Specifically, one would want the stripe set members to be spread
    over multiple IO adapters (e.g. KZPSA), and for really high
    throughputs over multiple buses (PCI).
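    
    As a back-of-the-envelope sketch (the stripe unit sizes here are only
    assumptions for illustration):
    
        -- member I/Os a single 128k multiblock read fans out to
        SELECT CEIL(131072/32768) AS members_at_32k_stripe,  -- 4
               CEIL(131072/16384) AS members_at_16k_stripe   -- 8
          FROM dual;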
    
    (I know of a SORT benchmark reading at 200Mb/sec using what admittedly
    seems a somewhat excessive IO subsystem: 360 disks on 168 SCSI buses
    behind 56 HSZs connected to 28 KZPSAs in 8 PCI buses. I can't find my
    notes just now to verify whether they used 32Kb or 16Kb stripe sizes,
    but it was certainly less than or equal to 32Kb.)
    
    thanks,
    		Hein.