| Hi Thilo,
> Now there's one thing: this file is AI-journalled, so if I'm
> just increasing the data bucket size, my AI journal file will increase
> proportionally in size, since it's written in units of data buckets (right?).
> So, increasing the data bucket size, e.g. from 2 to 8, I have to face an
> increase of the journal file by a factor of 4, which might get me into trouble.
First point: RMS does journal buckets to the AIJ, but it writes only
the used portion of the bucket (it does not write beyond the freespace
pointer). So doubling the bucket size may or may not double the AIJ
real estate required, depending on how full your buckets are. Second
point: your AIJ _will_ still grow more rapidly with the larger bucket
size, so every precaution should be taken to allocate sufficient space
for the journal.
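A back-of-the-envelope sketch of the first point (the bucket sizes 2 and 8
come from the question above; the fill fractions are assumed for
illustration, not RMS internals): since only the used portion of a bucket is
journalled, the AIJ growth factor depends on bucket fill, not raw bucket size.

```python
# Hypothetical sketch: approximate AIJ bytes per journalled bucket.
# Assumption: RMS writes only up to the freespace pointer, so the
# journalled size is roughly (bucket size) x (fill fraction).

BLOCK_BYTES = 512  # one OpenVMS disk block

def journalled_bytes(bucket_blocks, fill_fraction):
    """Approximate bytes written to the AIJ for one bucket update."""
    return bucket_blocks * BLOCK_BYTES * fill_fraction

# Going from 2-block buckets (nearly full) to 8-block buckets that are
# only half full after the change:
before = journalled_bytes(2, 1.0)
after = journalled_bytes(8, 0.5)
print(after / before)  # growth factor 2.0, not the feared 4x
```

With completely full 8-block buckets the factor would indeed be 4; the point
is that the fill level, which you can estimate from your own data, sets the
real growth.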
I'll let Elinor respond to your global buffer question.
Tom
|
|
<<The example procedure by Elinor (accessible via STARS) just uses
<<f$file(..."bks"), leading to the assumption that the size of a global buffer
<<is also based on the largest bucket size within a given file (right?). Now,
<<what happens when using a data bks of 2, index bks of 12 and using global
<<buffers?
When using any RMS intermediate buffer -- local or global -- all the buffers
(always allocated on a per file basis) will be a fixed size, and the fixed
size has to be large enough to handle the maximum-sized bucket in the file.
RMS only stores one bucket per buffer. Thus, any local or global buffer is
sized to hold the largest bucket size for that given indexed file.
<<Are my global buffers each 12 blocks in size? Do I waste 10 blocks within
<<each global buffer when holding a data bucket?
Yes -- each 12 blocks in size (whether global or local buffers). This is
why sometimes choosing the perfect bucket size for different keys or levels
(data vs. index) of an indexed file involves COMPROMISE. You generally do
not want the sizes to cover too wide a range.
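The arithmetic behind that answer can be sketched directly (the bucket sizes
2 and 12 are the ones from the question above):

```python
# Sketch of per-buffer waste: every RMS buffer for a file -- local or
# global -- is sized to the largest bucket size in that file.

data_bks = 2     # data bucket size in blocks (from the question)
index_bks = 12   # index bucket size in blocks (from the question)

buffer_blocks = max(data_bks, index_bks)  # each buffer is 12 blocks
waste_blocks = buffer_blocks - data_bks   # blocks idle when the buffer
                                          # holds a data bucket
print(buffer_blocks, waste_blocks)        # 12 10
```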
Currently, global buffer performance doesn't scale well with an increasing
number of buffers (performance usually peaks at around 300 buffers or
fewer), so the unused memory is probably not a big factor today. However,
we are planning to implement some enhancements to global buffers in the
release after OpenVMS V7.1 that will make them scale extremely well -- even
with 32,767 (assuming that the file is large enough to benefit from so
many). This, of course, will make the unused memory a greater issue, so
compromises -- making a bucket you would like to be large a little smaller,
and a bucket you would like to be small a little larger -- will be more in
order.
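To see why the waste matters at that scale, here is a rough sketch using the
figures from this thread (32,767 buffers, 12-block buffers, 2-block data
buckets); the assumption that most buffers end up holding data buckets is
the worst case, for illustration only:

```python
# Hypothetical worst-case sketch: memory tied up by a very large global
# buffer cache when buffers are sized for 12-block index buckets but
# mostly hold 2-block data buckets.

BLOCK_BYTES = 512
buffers = 32_767      # post-V7.1 upper bound mentioned above
buffer_blocks = 12    # sized to the largest bucket in the file
data_blocks = 2       # size of the bucket actually cached

total_mb = buffers * buffer_blocks * BLOCK_BYTES / 2**20
wasted_mb = buffers * (buffer_blocks - data_blocks) * BLOCK_BYTES / 2**20
print(round(total_mb, 1), round(wasted_mb, 1))  # 192.0 160.0
```

Roughly 160 MB of a 192 MB cache idle in that worst case, which is why the
bucket-size compromise becomes more important as caches grow.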
<<Or should I trust the re-use
<<algorithms within RMS that, after a while, only index buckets will be held
<<in the global buffer cache?
Yes, that is what RMS works to achieve (provided there are enough buffers to
hold all the index buckets read in), but I can't give you a 100% guarantee.
Most likely a large percentage of a global buffer cache will hold index
buckets, but there are some pathological cases where the mix of operations
over all users of a particular indexed file works against this -- for
example, an application in which some users do random retrievals (using
index buckets) while others sequentially walk through large portions of the
file for reports (using only data buckets). Even in the
pathological cases, RMS will continue to preferentially treat index buckets
in its re-use algorithms, but depending on how active the random accessors
are, the latter eventually could end up flooding the cache with data buckets.
-- Elinor
|