T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
691.1 | Similar problems & solutions exist | MODEL::YARBROUGH | | Thu Apr 09 1987 20:59 | 9 |
| > The question I have then, is what is the best method for communicating
> between these processes in the LAVC? I have had suggestions of DECnet
> task-to-task communications and mailboxes. Is there anything faster,
> or better, or easier? Pointers to references would be greatly
> appreciated, as would any suggestions.
Contact Paul TLE::Winalski. He has been doing this sort of thing for
Mandelbrot sets, using every uVAX II he can get his hands on, for some
time. You two can compete to see who gets the most CPUs active at once!
|
691.2 | | TLE::BRETT | | Fri Apr 10 1987 00:39 | 4 |
| If speed is important, forget all about this layered s/w and do
your own Ethernet protocol!
/Bevin
|
691.3 | some observations and narrowing down | MUNICH::CLINCH | Life begins at... (muffled thump) | Tue Apr 21 1987 09:17 | 95 |
| Assuming your link is on a CI and/or Ethernet basis, then each
node is required to service asynchronous data events.
So whether or not you use the task-to-task network object,
it seems sensible to devise a "super-protocol" and event-driven
design modules that can be "tested" by structured
walk-through.
Most likely you will categorise your passed data into various
types and design a service routine to process each type.
There will also be logistical data types (such as for example a
handshake to establish a link between an existing process and
a newly created one) that follow from the synchronization and
process control features of your basic design.
Each node will have a "watcher" in design terms, whether or not
it is a conventional "mailbox" that is being watched. This watcher
will wake up whenever data is received and pass it to a local
watcher for a particular local process which will itself invoke
a service routine depending on the data category of the packet received.
Thus every packet in the super-protocol from a local process to
the local sender will contain a logical ID, i.e. an ID that depends
on the function of the target process, while the local sender will
already have handshaked with the target machine and will have at
hand a look-up table of which node hosts the process with which
logical id. The syntax for the logical id will depend on your
design itself: it could be as simple as one of the three numbers 0, 1, 2
in a simple binary tree of generated parallel processes or, more likely,
encoded mnemonics based on relative functionality.
So irrespective of whether you bypass DECnet or not, each node
will host a set-up similar to this:- (Read from bottom upwards)
     to node-id   (other-logid)-(datatype)-(rest of packet)
          :
          |
     LOCAL_SENDER --- [look-up table for logids versus node-ids]
          ^
          |
     output functionality   (otherlogid)-(data-type)
          ^
          | invokes
          |
     particular data-dependent service routine   (logid)-(datatype)
          ^
          | accordingly invokes
          |
     local watcher   (determines logical id of recipient and data type)
          ^
          | invokes
          |
     [data coming in from another node]
Of course the very first "data service routine" will be triggered
by a synchronous process after any initialization of disk data
has taken place. I am also assuming that the sharing of disk data
among processors, with or without the use of global sections, is taken care of.
Each data-dependent service routine may perform its particular
part of solving the problem before sending a completion
message somewhere; or, as an example of the other kinds,
it may simply pass details of a new process for the local sender
to put in its look-up table. Note however that the specific
problem-solving functionality performed by a given data service
routine begins with its input and ends with its output, so the
invocation of one routine by another is local and AST-driven.
Thus the entry points for your problem-solving functionality will
depend on the data design.
This is all irrespective of
any disk based data, which is not included in the diagram
and would be accessed/created by the data-dependent [event-driven]
service routines.
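The watcher/dispatch structure described above can be sketched in modern terms. The Python below is purely illustrative: the names (`handlers`, `logid_to_node`, the packet layout, the service routines) are all invented for the sketch and are not part of any DECnet or VMS interface.

```python
# Sketch of the "super-protocol" dispatch design: incoming packets
# carry (logid, datatype, payload); the local watcher routes each
# packet to the data-dependent service routine registered for that
# (logid, datatype) pair, and the local sender maps logical ids to
# node ids via a look-up table.

# look-up table: which node hosts the process with which logical id
logid_to_node = {"ROOT": "NODE_A", "CHILD1": "NODE_B"}

# one service routine per data category (all names invented)
def handle_work(payload):
    # problem-solving functionality: begins with input, ends with output
    return ("RESULT", payload * 2)

def handle_handshake(payload):
    # logistical data type: register a newly created process
    logid, node = payload
    logid_to_node[logid] = node
    return ("ACK", logid)

handlers = {
    ("ROOT", "WORK"): handle_work,
    ("ROOT", "HANDSHAKE"): handle_handshake,
}

def local_watcher(packet):
    """Wake on received data; pick the service routine by category."""
    logid, datatype, payload = packet
    return handlers[(logid, datatype)](payload)

def local_sender(target_logid, datatype, payload):
    """Prepend the node id from the look-up table; here we just
    return the wire-format tuple instead of transmitting it."""
    return (logid_to_node[target_logid], target_logid, datatype, payload)
```

Note that a handshake packet both answers the sender and updates the table, so later sends can find the newly created process.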
When you have got this far you are ready
to choose how data gets from a sender on one node to a watcher
on another and can solve this problem independently.
The above method would work under any network protocol for the
intermachine stage.
All that remains is to optimize the machine to machine transportation.
The fastest way without going to extremes is simply
the task-to-task DECnet method as explained in the networking
manual. If you want to beat this and write your own
"transportation protocol" as well, you will have to rewrite
NETACP to perform all your specific functionality and
cut out the generic stuff you don't need. I am not sure
whether you are prepared to go that far, because producing
a NETACP with only local-area functionality
won't be much less work than rewriting DECnet.
Simon.
|
691.4 | try a multiprocessor | AKQJ10::YARBROUGH | Why is computing so labor intensive? | Thu Apr 23 1987 13:53 | 4 |
691.5 | EPIC sounds like a good fit | SEMI::LEVITIN | Sam Levitin | Fri Apr 24 1987 13:12 | 31 |
| I have attempted by phone and MAIL to reach the author of .0.
Since this topic may be of general interest, I have summarized
my response to him here.
In 1985-1986 (before LAVc's) I did some research on
multiprocessing circuit extraction. The task control,
interprocessor communication, and error handling aspects of the
application were performed by the work of a colleague at the
time, called EPIC. EPIC used DECNET, and could work either
within a VAXcluster or among non-clustered VAXes. I believe it
should work within a LAVc, but don't look to me for support.
If you are interested in finding out more about EPIC, send mail
to Ed (CADSYS::) Prentice.
Interested parties can obtain copies of MIT/LCS/TR-363,
"Exploiting Parallelism in VLSI CAD", by Joshua D. Marantz,
and of MIT/LCS/TR-378, "MACE: A Multiprocessing Approach to
Circuit Extraction", by Samuel M. Levitin, by contacting the
Laboratory for Computer Science at MIT at (617) 253-5894, or by
writing to
Publications Coordinator
Lab for Computer Science Library
545 Technology Square
Cambridge, MA 02139
Additionally, I have extra copies of my work, but not of
Marantz's.
Sam Levitin SEMI::LEVITIN DTN 225-4135 HLO2-1/G11
|
691.6 | We have nothing to hide. | PPABXB::SYSTEM | | Fri Apr 24 1987 15:33 | 28 |
| There is no need to hide reply .4. Andromeda lives, and
it is no secret as far as the Andromeda devo group is concerned.
(This note is being sent from PPABXB -- a 32-processor Andromeda,
and currently Digital's largest multiprocessor -- with the consent
and encouragement of management here.)
Andromeda is a research project in Midrange Systems. It will not
be a product, and is to be used for research into application
techniques and multiprocessor hardware design. In the deep and
dark past, it was known as PPA.
An Andromeda system is expandable to 64 MicroVAX processors (in
increments of four) and 256 MBytes of main memory. I/O (including
DECNET and LAT support) is via CI. The system has provision for
non-invasive performance monitoring of 3/4 of the processors.
Currently, the Andromeda group has three prototype systems. One
system is dedicated to software development and runs VMS 4.4 with
a special "layered" multiprocessor scheduler and support package.
The other two systems are used for module debug.
You will be seeing more of Andromeda's results in the future (we
hope). For more information, send mail to OMEGA::SHAKSHOBER.
Andromeda Lives.
|
691.7 | The author has returned . . . | DACT6::CANNON | Tim Cannon - Washington DC, ACT | Sat Apr 25 1987 19:57 | 16 |
| I appreciate the interest in .0 -- and to those of you who have
tried to contact me, I apologize for not being around. After being
on the road for several weeks, my wife gave birth to a bouncing
baby girl. But now I'm back.
There appears to be a good deal of interest in this topic so I will
post progress reports here. In the meantime, I can be reached at:
Tim Cannon
Washington, D.C. Application Center for Technology
MEL-2/17
8400 Corporate Drive, Suite 212
Landover, MD 20785
DTN: 425-3386
(301)731-3386
|
691.8 | What algorithm are we talking about? | AKQJ10::YARBROUGH | I prefer Pi | Thu Jan 26 1989 19:52 | 8 |
| If it's really the Traveling Salesman problem you are after, rather than
just any old problem that is nicely done on multiple processors, then
Metropolis' Simulated Annealing algorithm may be promising. It is given in
"Numerical Recipes", along with code etc. I just recently got it up on my
home PDT-11 (!) and it works fine. It might be interesting to see how THAT
algorithm behaves in a multiprocessor environment.
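For the curious, the Metropolis acceptance rule at the heart of simulated annealing is only a few lines. The Python sketch below (a generic illustration, not the Numerical Recipes code) anneals a small TSP tour by reversing random segments; all parameter values are arbitrary choices for the sketch:

```python
import math
import random

def tour_length(cities, tour):
    """Total length of the closed tour through the given city order."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal_tsp(cities, t0=10.0, cooling=0.995, steps=20000, seed=1):
    rng = random.Random(seed)
    tour = list(range(len(cities)))
    best = tour_length(cities, tour)
    t = t0
    for _ in range(steps):
        # candidate move: reverse a random segment of the tour
        i, j = sorted(rng.sample(range(len(tour)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        delta = tour_length(cities, cand) - tour_length(cities, tour)
        # Metropolis rule: always accept improvements; accept uphill
        # moves with probability exp(-delta / t) to escape local optima
        if delta < 0 or rng.random() < math.exp(-delta / t):
            tour = cand
        best = min(best, tour_length(cities, tour))
        t *= cooling  # cool the "temperature"
    return tour, best
```

The occasional acceptance of worse tours at high temperature is what lets the algorithm climb out of a local optimum, which is the whole point of the method.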
Lynn Yarbrough
|
691.9 | Large 0-1 linear programming | DACT6::CANNON | Tim Cannon - Washington DC, ACT | Fri Jan 27 1989 10:31 | 17 |
| Actually, I'm interested in solving VERY large 0-1 linear programming
problems to optimality. "Real good," "close," and "suboptimal" are not
good enough. The TSP is usually the easiest problem to use to
impart the concept of 0-1 programming. Other
applications are in the areas of capital budgeting (is the project
funded or not?), airline crew scheduling (is the crew on the flight or
not?), and distributed database segregation (is the partition on
machine 12 or not?).
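A toy illustration of the 0-1 idea, using capital budgeting with invented numbers: each project is either funded (1) or not (0), and we want the best total return within the budget. The sketch below uses brute force for clarity; problems of the size discussed here need branch-and-bound or similar exact methods.

```python
from itertools import product

def best_funding(costs, returns, budget):
    """Exhaustive 0-1 search: try every funded/not-funded vector
    and keep the feasible one with the highest total return."""
    best_val, best_x = 0, tuple(0 for _ in costs)
    for x in product((0, 1), repeat=len(costs)):
        cost = sum(c * xi for c, xi in zip(costs, x))
        val = sum(r * xi for r, xi in zip(returns, x))
        if cost <= budget and val > best_val:
            best_val, best_x = val, x
    return best_val, best_x
```

With costs [4, 3, 5], returns [7, 5, 8], and a budget of 8, the optimum funds the second and third projects for a return of 13; a greedy heuristic that grabs the first affordable project would do worse, which is the "suboptimal is not good enough" point in a nutshell.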
Simulated annealing is an interesting approach to obtaining good
solutions to these problems. I'll look at the algorithm that you
mention, but I thought that the approach did not solve to optimality.
Do you know of any large problems floating around that others have
not solved to optimality that I can try?
Tim
|
691.10 | the excellent is the enemy of the good | HERON::BUCHANAN | Andrew @vbo/dtn8285805/ARES,HERON | Fri Jan 27 1989 11:09 | 28 |
| > Actually, I'm interested in solving VERY large 0-1 linear programming
> problems to optimality. "Real good," "close," and "suboptimal" are not
> good enough.
> Other
> applications are in the areas of capital budgeting (is the project
> funded or not?), airline crew scheduling (is the crew on the flight or
> not?), and distributed database segregation (is the partition on
> machine 12 or not?).
Surely these are *not* applications where "solving to optimality"
is ever relevant? More important are issues such as:
(0) modelling the real world to a sufficient degree of accuracy
(1) inability to define a single objective function
(2) modifying an existing solution as the world changes
I'm not denying the interest of the "solving to optimality"
subproblem per se. Simulated annealing doesn't give you any handle on that.
It's just a way of moving away from a local optimum, to perhaps a catchment
area for a better one.
It could well be that, by harnessing the spare cycles of some LAVCs,
you could deal with some pretty big cpu-intensive problems.
Good luck.
|
691.11 | Optimality is critical! | DACT6::CANNON | Tim Cannon - Mobil Corp. Acct. Team | Fri Jan 27 1989 12:04 | 22 |
| re: .10
Quite the contrary! Solving to optimality is becoming critical
in a lot of areas. Take capital budgeting for example. We are
working with a telecommunications company whose capital budget can
run in the hundreds of millions to a billion dollars. Heuristic
approaches and other approximations (if they are ***REAL*** good)
can approach 95% of optimality. The remaining 5%, for even a
$100 million budget, represents $5 million. That's worth pursuing.
The same case can be made for airline scheduling where every dollar
saved is critical in that competitive environment.
You make a good point about an LAVC. That is exactly what I did
for my dissertation--use an LAVC as a parallel processor. And we
got excellent results.
Anybody got any problems to solve?
Thanks,
Tim
|
691.12 | Job-Shop? | LARVAE::TURNER | Mark Turner * DTN 849-3429 * SPYDER::TURNER | Sun Jan 29 1989 15:30 | 21 |
| How about the job-shop scheduling problem? A good reference is:
Sequencing and Scheduling: An Introduction to the
Mathematics of the Job-Shop
by S. French
Ellis Horwood, 1982
(in N. America, distributed by Halstead Press (part of Wiley))
ISBN 0-85312-364-0 or 0-470-27229-5
which covers many approaches (LP, branch-and-bound, etc.) and gives good
references to each.
Making the rounds of a few DEC manufacturing plants will almost certainly
flush out some real examples of these scheduling problems.
Regards,
Mark
|