
Conference clt::cma

Title:DECthreads Conference
Moderator:PTHRED::MARYSTEON
Created:Mon May 14 1990
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:1553
Total number of notes:9541

1483.0. "max # threads / process, etc ??" by SNOC02::63620::GRAHAM (Russell GRAHAM) Tue Feb 11 1997 21:50

I have a customer who is porting a client-server application from
Alpha WIN NT 3.51 (server) to Alpha OpenVMS V7.1 and UCX V4.1.

They have some questions about the capability of OVMS 7.1 in this area, namely:

- what is the maximum number of threads per process?
  (the sysgen parameter MULTITHREAD indicates 16, but this is rather small.
  The customer would like several hundred threads per process. )

- what is the maximum stack space each thread can have?

- are there any documents that compare thread implementations on WIN-NT and
  OVMS?

- do any "blocking" calls for a thread on OVMS, block other threads also?

Thanks,
Russell GRAHAM
NSIS, Sydney
@SNO

T.R / Title / User / Personal Name / Date / Lines
1483.1 / (no title) / DCEIDL::BUTENHOF / Dave Butenhof, DECthreads / Wed Feb 12 1997 09:12 / 41
> They have some questions about the capability of OVMS 7.1 in this area,
> namely:
>
> - what is the maximum number of threads per process?
>   (the sysgen parameter MULTITHREAD indicates 16, but this is rather small.
>   The customer would like several hundred threads per process. )

The MULTITHREAD parameter controls the maximum number of KERNEL threads per
process, and has nothing to do with how many USER threads you can create
using pthread_create(). Like Digital UNIX 4.0, Solaris 2.5, and IRIX 6.3,
OpenVMS Alpha 7.0 (and later) supports 2-level scheduling, where the user
mode thread library (DECthreads, PTHREAD$RTL.EXE) and the kernel share
scheduling responsibilities. This provides superior performance in most
cases, as well as substantially better scaling characteristics, than thread
implementations like Windows NT and AIX 4.2, where each "user thread" is
really just a kernel thread.

You can create as many threads as your virtual address space and swapping
requirements allow. There is no fixed limit, and in practice you can create
quite a lot.
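
If the customer wants a concrete number for their own configuration, a small
probe program along these lines will find it empirically. This is a generic
pthreads sketch, not anything DECthreads-specific; the 64 KB stack size is an
arbitrary assumption, and the count you get depends entirely on quotas, stack
size, and whatever else the process has mapped.

/* Hypothetical probe, not from the note above: create threads until
 * pthread_create() fails, then report how many succeeded.  The 64 KB
 * stack size is an arbitrary assumption; the result depends entirely
 * on quotas, stack size, and whatever else the process has mapped. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t park_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  park_cond = PTHREAD_COND_INITIALIZER;

/* Each thread parks forever so that its stack stays allocated. */
static void *parked(void *arg)
{
    pthread_mutex_lock(&park_lock);
    for (;;)
        pthread_cond_wait(&park_cond, &park_lock);
    return arg;    /* not reached */
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t      t;
    unsigned long  count = 0;
    int            status;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 64 * 1024);    /* assumed stack size */

    while ((status = pthread_create(&t, &attr, parked, NULL)) == 0)
        count++;

    printf("created %lu threads before pthread_create failed (%s)\n",
           count, strerror(status));
    return 0;
}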

> - what is the maximum stack space each thread can have?

Again, there's no fixed limit. To do anything "interesting" in a thread, it
usually needs at least a few pages of stack -- that clearly places some upper
limit in practice because virtual address space is limited. If you want
larger stacks, you can't create as many threads as you could with smaller
stacks.

> - are there any documents that compare thread implementations on WIN-NT and
>   OVMS?

Not really.

> - do any "blocking" calls for a thread on OVMS, block other threads also?

No. The OpenVMS kernel "blocks" only the calling kernel thread. The 2-level
scheduling model additionally allows DECthreads to remove the blocking user
thread from that kernel thread and schedule another in its place to continue
doing useful work while the blocking operation proceeds independently in the
kernel.
1483.2 / Blocking behavior / WTFN::SCALES / Despair is appropriate and inevitable. / Wed Feb 12 1997 16:32 / 31
.0> - what is the maximum stack space each thread can have?

When you create a thread, you can specify the size of its stack.  The only
limit on the stack size is the amount of contiguous virtual memory available
to the process at the time the thread creation is requested.
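
For illustration, here is a minimal sketch of requesting a stack size through
a thread attributes object at creation time; the 1 MB figure and the 256 KB
local buffer are arbitrary example values, not recommendations.

/* Sketch only: request a specific stack size for a new thread.
 * The 1 MB figure is just an example value; choose something large
 * enough for the thread's deepest call chain and local data. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static void *worker(void *arg)
{
    char scratch[256 * 1024];        /* large automatic (stack) buffer */
    scratch[0] = '\0';
    printf("worker running with a %lu-byte local buffer\n",
           (unsigned long)sizeof scratch);
    return arg;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t      t;
    int            status;

    pthread_attr_init(&attr);
    status = pthread_attr_setstacksize(&attr, 1024 * 1024);
    if (status != 0)
        fprintf(stderr, "setstacksize: %s\n", strerror(status));

    status = pthread_create(&t, &attr, worker, NULL);
    if (status != 0) {
        fprintf(stderr, "pthread_create: %s\n", strerror(status));
        return 1;
    }
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}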


.0> - do any "blocking" calls for a thread on OVMS, block other threads also?

The answer depends on whether you have the two-level scheduling (aka
"upcalls") enabled.  With two-level scheduling enabled, the answer is no:
when a user thread blocks in a system service call it affects no other user
thread, and the kernel thread immediately begins running another user thread. 
However, with two-level scheduling disabled (which is the default), when a
user thread blocks in a system service call the whole process (all threads)
is blocked for a brief period of time, until the end of the calling thread's
execution quantum.  At the point when the blocking thread would have given up
the processor had it been compute-bound, another user thread is scheduled. 
If the blocking thread is still blocked at the point when it is next
scheduled to run, the process will block again for the duration of the
thread's execution quantum.  (This is because without two-level scheduling
there is no way that DECthreads can distinguish a running thread from one
which is blocked in a system call, and so DECthreads is forced to treat them
both as though they were running.)

Whether two-level scheduling is enabled is determined by a linker qualifier.
If you do not specify /THREAD, then the process is subject to the intermittent
blocking behavior.
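
One way to observe the difference described here is a small two-thread
program: one thread blocks while the other keeps printing. Build it with and
without two-level scheduling enabled and compare how steadily the second
thread makes progress. This is only a sketch; sleep() stands in for whatever
blocking operation the real application performs, and some C RTL routines may
already be handled specially by the thread library.

/* Sketch: one thread blocks while another keeps "ticking".  Run it
 * with and without two-level scheduling enabled and compare how
 * steadily the ticker makes progress while the other thread is
 * blocked.  sleep() is only a stand-in for a blocking operation. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void *blocker(void *arg)
{
    sleep(10);                        /* the "blocking" operation */
    return arg;
}

static void *ticker(void *arg)
{
    int i;
    volatile long j;

    for (i = 0; i < 100; i++) {
        printf("tick %3d at %ld\n", i, (long)time(NULL));
        for (j = 0; j < 1000000; j++)    /* burn a little CPU */
            ;
    }
    return arg;
}

int main(void)
{
    pthread_t b, t;

    pthread_create(&b, NULL, blocker, NULL);
    pthread_create(&t, NULL, ticker, NULL);
    pthread_join(t, NULL);
    pthread_join(b, NULL);
    return 0;
}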


				Webb
1483.3 / what has been achieved ? / SNOC02::63620::GRAHAM / Russell GRAHAM / Thu Feb 13 1997 02:18 / 13
Thanks for the very helpful information.

Does anyone know what number of threads has actually been used under OVMS
V7.1?

I could write a program to test the upper limit, but I was hoping someone
might have seen or heard what some other large application has achieved in
terms of the number of threads in use.

I'd like to refine what "quite a lot" means a bit further.

Thanks,
Russell
1483.4 / (no title) / DCETHD::BUTENHOF / Dave Butenhof, DECthreads / Thu Feb 13 1997 09:08 / 8
The problem is that "quite a lot" doesn't have any possible absolute number,
except in each SPECIFIC case. How big is your program? What's your virtual
address quota? How big are the threads' stacks? How much OTHER dynamic data
do you allocate? All of those factors affect how many threads you can create,
and if you want to publish a number, be sure to include all of the other
numbers that affect it -- or it'll be useless to anyone else.
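
As a purely illustrative back-of-the-envelope calculation (every figure below
is an assumption, not a measurement, and it counts only thread stacks,
ignoring program code, heap, and the other dynamic data mentioned above):

/* Illustrative arithmetic only -- every figure is an assumption.
 * Substitute the real VA quota, stack size, and guard size for the
 * application in question; this counts stacks alone. */
#include <stdio.h>

int main(void)
{
    unsigned long va_for_stacks = 512UL * 1024 * 1024;  /* assumed VA available for stacks */
    unsigned long stack_size    = 64UL * 1024;          /* assumed per-thread stack */
    unsigned long guard_size    = 8UL * 1024;           /* assumed guard region per stack */

    printf("rough upper bound on threads (stacks alone): %lu\n",
           va_for_stacks / (stack_size + guard_size));
    return 0;
}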

	/dave
1483.5 / You can get hundreds, but you don't want them all runnable at once! / WTFN::SCALES / Despair is appropriate and inevitable. / Thu Feb 13 1997 13:55 / 22
You can reassure your customer that it's reasonable to expect to be able to
create several hundred threads.

But, as Dave says, depending on what else the program does, the customer may
need to up the process's quotas and reconfigure the system parameters
appropriately.

There is one other thing that you need to make the customer understand.  If
the application has LOTS of runnable threads (i.e., threads which are not
waiting for something and which want access to the processor), then the
application is likely to run sluggishly, because each individual thread is
going to have to wait in line behind all of the others before it can run.
Also, the application will incur an increased amount of overhead, because it
will be doing _so_many_ context switches between all of those threads.

So, it may well be that the customer wants to reconsider their design if it
really calls for large numbers of active threads.  (It's OK to create LARGE
numbers of threads, but if you have many more RUNNABLE threads than you have
processors, for any length of time, then you are going to lose.)
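
One portable way to keep the number of runnable threads bounded, no matter
how many threads exist, is to gate the compute-bound section behind a counter
protected by a mutex and condition variable. The sketch below is generic
pthreads, not taken from any DECthreads documentation; the limit of 4 and the
100 worker threads are arbitrary assumptions.

/* Sketch: only MAX_RUNNING threads execute the compute-bound section
 * at a time; the rest wait on the condition variable.  The limit of 4
 * (roughly "number of processors") and the 100 workers are arbitrary
 * assumptions. */
#include <pthread.h>

#define MAX_RUNNING 4
#define NUM_WORKERS 100

static pthread_mutex_t gate_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  gate_cond = PTHREAD_COND_INITIALIZER;
static int             running   = 0;

static void gate_enter(void)
{
    pthread_mutex_lock(&gate_lock);
    while (running >= MAX_RUNNING)
        pthread_cond_wait(&gate_cond, &gate_lock);
    running++;
    pthread_mutex_unlock(&gate_lock);
}

static void gate_leave(void)
{
    pthread_mutex_lock(&gate_lock);
    running--;
    pthread_cond_signal(&gate_cond);
    pthread_mutex_unlock(&gate_lock);
}

static void *worker(void *arg)
{
    gate_enter();
    /* ... compute-bound work goes here ... */
    gate_leave();
    return arg;
}

int main(void)
{
    pthread_t t[NUM_WORKERS];
    int       i;

    for (i = 0; i < NUM_WORKERS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (i = 0; i < NUM_WORKERS; i++)
        pthread_join(t[i], NULL);
    return 0;
}

A work queue serviced by a small fixed pool of threads achieves much the same
effect and avoids creating the extra threads in the first place.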


				Webb