
Conference humane::scheduler

Title:SCHEDULER
Notice:Welcome to the Scheduler Conference on node HUMANE
Moderator:RUMOR::FALEK
Created:Sat Mar 20 1993
Last Modified:Tue Jun 03 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:1240
Total number of notes:5017

1158.0. "Maximum size of Polycentre Scheduler 2.1b-7 database" by COMICS::JOLLEYD () Fri Sep 13 1996 13:53

Dear All,

	I have a customer who has raised the following questions.

He currently runs Polycentre Scheduler 2.1b-7 on 2 VAXclusters running VMS
version 6.1. In each scheduler database there are approx 600 scheduler jobs. He
is currently in the process of planning the implementation of Digital's
Business Recovery System (BRS) and would like the following points answered -

1. Is there a maximum number of jobs that can be held in the scheduler
database? Because under BRS the 2 clusters would become 1, there would be
approx 1200 jobs in the database.

2. Do any of Digital's other customers run the scheduler with such a large
database?

3. Following Digital's advice from a previously logged call, we had to increase
the following scheduler startup parameters for our current setup (i.e. 2
databases, each containing 600 jobs), as it failed to start with the supplied
defaults:

NSCHED_AST_LIMIT increased to 500 from supplied default 200
NSCHED_ENQUEUE_LIMIT increased to 2000 from supplied default 1000
NSCHED_QUEUE_LIMIT increased to 1000 from supplied default 2000
NSCHED_PAGE_FILE increased to 20000 from supplied default 10000
                                                                     
Would it therefore just be a case of increasing these quotas further to ensure
the scheduler runs after the BRS implementation (i.e. 1 database containing
1200 jobs)?
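
For reference, we could capture the current settings before the merge for
comparison. A minimal sketch, assuming the parameters are visible as system
logical names (the actual mechanism may differ - check the scheduler startup
procedure):

$ ! Capture the current NSCHED_* startup parameters for comparison.
$ ! Assumes they are defined as system logical names; adjust if your
$ ! startup procedure sets them some other way.
$ SHOW LOGICAL NSCHED_*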

Any help would be appreciated.

Regards

Darren.
1158.1. "RMS file with 1200 records - ? no problem !" by RUMOR::FALEK (ex-TU58 King) Fri Sep 13 1996 18:40

    The Scheduler database files are RMS indexed files, and 1200 (or even
    12000) records is trivial. However, if you create and delete lots of jobs,
    the database index structure gets "hole-y", and for good performance you
    need to "compress" the database every so often. The dependency file
    (dependency.dat) was left out of the compression utility, so if you
    create and delete lots of dependencies all the time, you need to
    compress it manually using RMS utilities, or delete it (only if there
    are no wide-area network dependencies!) and let the scheduler re-create it.
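
    If it does have to be compressed by hand, a minimal sketch using the
    standard RMS utilities (the file name and location are illustrative,
    and the scheduler should be shut down first so the file isn't open):

    $ ! Analyze the file, build an optimized FDL, then reload the records
    $ ! with CONVERT to rebuild the index structure and reclaim the "holes".
    $ ANALYZE/RMS_FILE/FDL DEPENDENCY.DAT       ! writes DEPENDENCY.FDL
    $ EDIT/FDL/ANALYSIS=DEPENDENCY.FDL/NOINTERACTIVE DEPENDENCY.FDL
    $ CONVERT/FDL=DEPENDENCY.FDL DEPENDENCY.DAT DEPENDENCY.DAT
    $ PURGE DEPENDENCY.DAT                      ! remove the old version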
    
    If the job database is fairly stable (i.e. you create 1200 jobs and run
    a fairly stable set) then database maintenance doesn't have to be done
    very often.
    
    As for the maximum number of jobs that can run on a certain size system
    - it depends on how often they run and how many dependencies they have.
    Normal production-type data processing of a subset of a database
    containing 1200 jobs on an appropriately sized machine certainly seems
    reasonable. Just don't do stupid things like creating multiple repeating
    jobs that schedule themselves every few seconds, as that has a lot of
    overhead.
    
    Quotas like ASTLM, ENQLM, and BYTLM for the NSCHED process don't cost
    any system resources if they are not being used, so you might as well
    make them big. ASTLM and ENQLM affect how many jobs can run
    simultaneously. BYTLM is used for mailbox messages, so if lots of jobs
    complete simultaneously and the quota is small, that's bad.
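
    For example, a sketch using the startup parameters named in .0 (the
    values are only illustrations, roughly doubled for the merged 1200-job
    database, and whether they are set as logicals or symbols depends on
    your startup procedure):

    $ ! Illustrative values only - scale to your own job count.
    $ ! Assumes the parameters are system logical names read at startup;
    $ ! check the scheduler startup procedure for the real mechanism.
    $ DEFINE/SYSTEM/EXECUTIVE_MODE NSCHED_AST_LIMIT     1000
    $ DEFINE/SYSTEM/EXECUTIVE_MODE NSCHED_ENQUEUE_LIMIT 4000
    $ DEFINE/SYSTEM/EXECUTIVE_MODE NSCHED_QUEUE_LIMIT   2000
    $ DEFINE/SYSTEM/EXECUTIVE_MODE NSCHED_PAGE_FILE     40000
    $ ! After restarting the scheduler, check what the process actually
    $ ! got. "NSCHED" as the process name is an assumption.
    $ SHOW PROCESS NSCHED /QUOTAS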