
Conference rusure::math

Title: Mathematics at DEC
Moderator: RUSURE::EDP
Created: Mon Feb 03 1986
Last Modified: Fri Jun 06 1997
Last Successful Update: Fri Jun 06 1997
Number of topics: 2083
Total number of notes: 14613

691.0. "Traveling Salesman Problem" by DACT6::CANNON (Tim Cannon - Washington DC, ACT) Thu Apr 09 1987 16:55

    
    
    
    I am working on a Digital-supported research and development effort
    aimed at large-scale combinatorial optimization.  Specifically, we
    (two universities and I) are studying the use of MicroVAXes in an
    LAVC to solve large Traveling Salesman Problems (TSP) to optimality.
    Because the solution is modular and CPU-intensive, and relatively
    little data is passed between processes, we believe that an LAVC
    approach will achieve faster solutions than currently realized on
    mainframes.

    Our problem (and the reason for posting this note) is in trying to use
    the LAVC as a parallel processor.  The TSP solution (the problem is
    described below) will involve simultaneous processing of synchronous
    processes to arrive at a starting point: the 'best' solution from each
    of the processes.  From there, we plan to distribute portions of the
    problem to other members of the LAVC.

    The question I have, then, is: what is the best method for communicating
    between these processes in the LAVC?  I have had suggestions of DECnet
    task-to-task communications and mailboxes.  Is there anything faster,
    or better, or easier?  Pointers to references would be greatly
    appreciated, as would any suggestions.

    Tim Cannon Washington DC 
    Application Center for Technology 
    DTN 425-3386 


    TSP discussion follows: 


    The importance of TSP comes not only from the wealth of applications
    which fall in this class, but from the fact that it is typical of other
    problems of combinatorial optimization.  The objective is to minimize
    total distance, so the problem is one of optimization.  But the method
    of differential calculus and the setting of derivatives to zero cannot
    be immediately employed because the problem is one of combinatorics.
    The choice of solutions is not over a continuum, but over the set of
    all tours. 
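
    For a concrete (toy) illustration of that combinatorial explosion, a
    brute-force search over every tour might look like the sketch below;
    the four-city distance matrix is made up, and the (n-1)!/2 growth of
    the number of distinct tours is exactly what rules this approach out
    for large instances.

# Toy illustration only: exhaustively search every tour of a small
# symmetric TSP instance.  An n-city problem has (n-1)!/2 distinct
# tours, so this is hopeless beyond a dozen or so cities.
from itertools import permutations

def tour_length(tour, dist):
    """Length of the closed tour, returning to the starting city."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def brute_force_tsp(dist):
    """dist is an n x n matrix of inter-city distances."""
    n = len(dist)
    best = None
    for perm in permutations(range(1, n)):       # fix city 0 as the start
        tour = (0,) + perm
        length = tour_length(tour, dist)
        if best is None or length < best[0]:
            best = (length, tour)
    return best

# Example with four cities and made-up distances.
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(brute_force_tsp(d))                        # -> (18, (0, 1, 3, 2))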

    The interest in TSP has continued to grow since its introduction in
    the early 1930s, for several reasons.  First, it is extremely easy to
    state (move an object from a location to n-1 other locations at a
    minimum cost), takes no mathematical background to understand, and
    takes no great talent to find good solutions to large problems.
    Second, the TSP
    has resisted all efforts to find a 'good' optimization algorithm or
    even an approximation algorithm that is guaranteed to be effective.
    Third, it is the most prominent of the unsolved combinatorial
    optimization problems. 

    There are also practical reasons for the importance of TSP (and of
    Digital's interest in it).  Many significant real-world problems can be
    formulated as instances of TSP, for example: 

    Computer Wiring. 

    A system consists of a number of modules and several pins are located
    on each module.  The physical position of each module has been
    determined. A given subset of pins has to be interconnected by wires.
    In view of possible future changes or corrections and of the small size
    of a pin, at most two wires are to be attached to a pin.  In order to
    avoid signal cross-talk and to improve ease and neatness of wiring, the
    total length should be minimized. 

    Job Sequencing. 

    Consider the problem of sequencing n jobs on a single machine.  The
    jobs can be done in any order and the objective is to complete all of
    them in the shortest possible time.  Assume that the machine must be in
    a certain state (e.g. rotation, temperature, pressure, cutting
    thickness) in order to do each job, and that there is a known time
    required to prepare the machine for each subsequent job.  The solution of
    this problem is elegantly accomplished through TSP. 
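
    As a sketch of that reduction (with invented setup times, not taken
    from any real job shop): treat each job as a 'city' and the machine
    set-up time between two jobs as the 'distance' between them, so a
    minimum-cost ordering of the jobs is a minimum-length path through
    the cities.

# Hypothetical job-sequencing example.  setup[i][j] is the (possibly
# asymmetric) time needed to reconfigure the machine after job i so it
# can run job j.  Minimizing total setup time over all job orders is an
# asymmetric TSP (here an open path); a brute-force search suffices for
# this toy size.
from itertools import permutations

setup = [[0, 5, 3, 8],      # made-up setup times for four jobs
         [4, 0, 2, 6],
         [3, 7, 0, 5],
         [6, 2, 4, 0]]

def sequence_cost(order):
    """Total setup time for running the jobs in the given order."""
    return sum(setup[order[i]][order[i + 1]] for i in range(len(order) - 1))

best = min(permutations(range(len(setup))), key=sequence_cost)
print(best, sequence_cost(best))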

    Our goal is then to publish the theoretical method of solution along
    with its performance on an LAVC compared with mainframe performance.
    
691.1. "Similar problems & solutions exist" by MODEL::YARBROUGH Thu Apr 09 1987 20:59
>    The question I have then, is what is the best method for communicating
>    between these processes in the LAVC?  I have had suggestions of DECnet
>    task-to-task communications and mailboxes.  Is there anything faster,
>    or better, or easier?  Pointers to references would be greatly
>    appreciated, as would an suggestions. 

Contact Paul TLE::Winalski. He has been doing this sort of thing for 
Mandelbrot sets, using every uVAXII he can get his hands on, for some 
time. You two can compete to see who gets the most CPUs active at once!
691.2. by TLE::BRETT Fri Apr 10 1987 00:39
    If speed is important, forget all about this layered s/w and do
    your own ethernet protocol!
    
    /Bevin
691.3. "some observations and narrowing down" by MUNICH::CLINCH (Life begins at... (muffled thump)) Tue Apr 21 1987 09:17
    Assuming your link runs over CI and/or Ethernet, each node is
    required to service asynchronous data events.
    So whether or not you use the task-to-task network object,
    it seems sensible to devise a "super-protocol" and event-driven
    design modules that can be "tested" by structured
    walk-through.
    
    Most likely you will categorise your passed data into various
    types and design a service routine to process each type.
    There will also be logistical data types (such as for example a
    handshake to establish a link between an existing process and
    a newly created one) that follow from the synchronization and
    process control features of your basic design.
    
    Each node will have a "watcher" in design terms,  whether or not
    it is a conventional "mailbox" that is being watched.  This watcher
    will wake up whenever data is received and pass it to a local
    watcher for a particular local process which will itself invoke
    a service routine depending on the data category of the packet received.

    Thus all packets in the super-protocol from a local process to the
    local sender will contain a logical ID, i.e. one logically dependent
    on the function of the target process, while the local sender will
    already have handshaked with the target machine and will have at
    hand a look-up table of which node hosts the process with which
    logical id.  The syntax for the logical id will depend on your
    design itself and could be as simple as one of three numbers 0,1,2
    in a simple binary tree of generated parallel processes or more likely
    encoded mnemonics based on relative functionality.

    So irrespective of whether you bypass DECnet or not,  each node
    will host a set-up similar to this:- (Read from bottom upwards)

        to node-id: (other-logid)-(datatype)-(rest of packet)
                ^
                |
        LOCAL_SENDER -- [look-up table for logids versus node-ids]
                ^
                |
        output functionality (other-logid)-(data-type)
                ^
                | invokes
                |
        particular data-dependent service routine (logid)-(datatype)
                ^
                | accordingly invokes
                |
        local watcher (determines logical id of recipient and data type)
                ^
                | invokes
                |
        [data coming in from another node]

    Of course the very first "data service routine" will be triggered
    by a synchronous process after any initialization of disk data
    has taken place.  I am also assuming that the processor sharing of
    disk data with or without the use of global sections is taken care of.

    Each data-dependent service routine may perform its particular
    functionality in solving the problem before sending a completion
    message somewhere; or, as an example of the other kinds, it may
    simply pass details of a new process for the local sender to put
    in its look-up table.  Note however that the specific
    problem-solving functionality performed by a given data service
    routine begins with its input and ends with its output.  So the
    invocation of one routine by another is local and AST-driven.
    Thus the entry points for your problem-solving functionality will
    depend on the data design.

    This is all irrespective of any disk-based data, which is not
    included in the diagram and would be accessed/created by the
    data-dependent [event-driven] service routines.

    When you have got this far you are ready
    to choose how data gets from a sender on one node to a watcher
    on another and can solve this problem independently.
    The above method would work under any network protocol for the
    intermachine stage.  
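
    As an illustration only, a minimal sketch of that dispatch layer
    might look like the following (written here in Python as present-day
    pseudocode; the data types, handler names and node ids are invented,
    and nothing in it is specific to DECnet or VMS):

# Sketch of the "super-protocol" dispatch described above.  A packet
# carries (logical_id, data_type, payload); the node's local watcher
# hands it to the service routine for that data type, and the local
# sender maps logical ids to node ids through a look-up table before
# handing packets to whatever transport is chosen.

# Look-up table maintained by the local sender: logical id -> node id.
logid_to_node = {"ROOT": "NODE_A", "SUBTREE_1": "NODE_B"}

def handle_new_process(logical_id, payload):
    """Logistical data type: register a newly created remote process."""
    logid_to_node[logical_id] = payload["node"]

def handle_result(logical_id, payload):
    """Problem data type: record a partial result (placeholder only)."""
    print("result from", logical_id, ":", payload)

# One service routine per category of passed data.
service_routines = {
    "NEW_PROCESS": handle_new_process,
    "RESULT": handle_result,
}

def local_watcher(packet):
    """Wakes up when data arrives and invokes the matching routine."""
    logical_id, data_type, payload = packet
    service_routines[data_type](logical_id, payload)

def local_sender(logical_id, data_type, payload, transport):
    """Resolve the logical id to a node id and hand the packet to any
    transport (DECnet task-to-task, mailboxes, or a custom protocol)."""
    transport(logid_to_node[logical_id], (logical_id, data_type, payload))

# Example: deliver a packet locally as if it had just arrived.
local_watcher(("SUBTREE_1", "RESULT", {"best_tour_length": 1234}))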

    All that remains is to optimize the machine to machine transportation.
    The fastest way without going to extremes is simply
    the task-to-task DECnet method as explained in the networking
    manual.  If you want to beat this and write your own
    "transportation protocol" as well,  you will have to rewrite
    the NETACP to perform all your specific functionality and
    cut out the generic stuff you don't need.  I am not sure
    whether you are prepared to go that far, because producing a
    NETACP with only local-area functionality won't be much less
    work than rewriting DECnet.

    Simon.
691.4. "try a multiprocessor" by AKQJ10::YARBROUGH (Why is computing so labor intensive?) Thu Apr 23 1987 13:53
691.5. "EPIC sounds like a good fit" by SEMI::LEVITIN (Sam Levitin) Fri Apr 24 1987 13:12
I have attempted by phone and MAIL to reach the author of .0.
Since this topic may be of general interest, I have summarized
my response to him here.

In 1985-1986 (before LAVc's) I did some research on
multiprocessing circuit extraction.  The task control,
interprocessor communication, and error handling aspects of the
application were handled by EPIC, a package written by a
colleague at the time.  EPIC used DECnet, and could work either
within a VAXcluster or among non-clustered VAXes.  I believe it
should work within a LAVc, but don't look to me for support.
If you are interested in finding out more about EPIC, send mail
to Ed (CADSYS::) Prentice.


Interested parties can obtain copies of MIT/LCS/TR-363,
"Exploiting Parallelism in VLSI CAD", by Joshua D.  Marantz,
and of MIT/LCS/TR-378, "MACE:  A Multiprocessing Approach to
Circuit Extraction", by Samuel M.  Levitin, by contacting the
Laboratory for Computer Science at MIT at (617) 253-5894, or by
writing to

Publications Coordinator
Lab for Computer Science Library
545 Technology Square
Cambridge, MA  02139

Additionally, I have extra copies of my work, but not of
Marantz's.

Sam Levitin	SEMI::LEVITIN	DTN 225-4135	HLO2-1/G11
691.6. "We have nothing to hide." by PPABXB::SYSTEM Fri Apr 24 1987 15:33
    There is no need to hide reply .4.  Andromeda lives, and
    it is no secret as far as the Andromeda devo group is concerned.
    (This note is being sent from PPABXB -- a 32-processor Andromeda,
    and currently Digital's largest multiprocessor -- with the consent
    and encouragement of management here.)
    
    Andromeda is a research project in Midrange Systems.  It will not
    be a product, and is to be used for research into application
    techniques and multiprocessor hardware design.  In the deep and
    dark past, it was known as PPA.  
    
    An Andromeda system is expandable to 64 MicroVAX processors (in
    increments of four) and 256 MBytes of main memory.  I/O (including
    DECNET and LAT support) is via CI.  The system has provision for
    non-invasive performance monitoring of 3/4 of the processors.    
    
    Currently, the Andromeda group has three prototype systems.  One
    system is dedicated to software development and runs VMS 4.4 with
    a special "layered" multiprocessor scheduler and support package.
    The other two systems are used for module debug.  
    
    You will be seeing more of Andromeda's results in the future (we
    hope).  For more information, send mail to OMEGA::SHAKSHOBER.
    
    						Andromeda Lives.
    
    
    
691.7. "The author has returned . . ." by DACT6::CANNON (Tim Cannon - Washington DC, ACT) Sat Apr 25 1987 19:57
    I appreciate the interest in .0 -- and to those of you who have
    tried to contact me, I apologize for not being around.  After I had
    been on the road for several weeks, my wife gave birth to a bouncing
    baby girl.  But now I'm back.
    
    There appears to be a good deal of interest in this topic so I will
    post progress reports here.  In the meantime, I can be reached at:
    
    	Tim Cannon
    	Washington, D.C. Application Center for Technology
    	MEL-2/17
    	8400 Corporate Drive, Suite 212
    	Landover, MD  20785
    
    	DTN: 425-3386
    	(301)731-3386
691.8. "What algorithm are we talking about?" by AKQJ10::YARBROUGH (I prefer Pi) Thu Jan 26 1989 19:52
If it's really the Traveling Salesman problem you are after, rather than 
just any old problem that is nicely done on multiple processors, then 
Metropolis' Simulated Annealing algorithm may be promising. It is given in 
"Numerical Recipes", along with code etc. I just recently got it up on my 
home PDT-11 (!) and it works fine. It might be interesting to see how THAT 
algorithm behaves in a multiprocessor environment.
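
A bare-bones sketch of the annealing idea applied to a tour (random
segment reversals accepted with the Metropolis criterion) is below for
anyone who wants to experiment; it is a toy illustration in Python, not
the Numerical Recipes routine:

# Toy simulated-annealing sketch for a symmetric TSP.  Propose a random
# segment reversal of the current tour and accept it with the Metropolis
# criterion exp(-delta/T), while slowly lowering the temperature T.
import math
import random

def tour_length(tour, dist):
    """Length of the closed tour, returning to the starting city."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def anneal(dist, steps=20000, t_start=10.0, t_end=0.01):
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)
    length = tour_length(tour, dist)
    for step in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        i, j = sorted(random.sample(range(n), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        delta = tour_length(candidate, dist) - length
        if delta < 0 or random.random() < math.exp(-delta / t):
            tour, length = candidate, length + delta
    return length, tour

# Usage: best_length, best_tour = anneal(dist), where dist is an n x n
# matrix of inter-city distances.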

Lynn Yarbrough 
691.9. "Large 0-1 linear programming" by DACT6::CANNON (Tim Cannon - Washington DC, ACT) Fri Jan 27 1989 10:31
    Actually, I'm interested in solving VERY large 0-1 linear programming
    problems to optimality.  "Real good," "close," and "suboptimal" are not
    good enough.  The TSP is usually the most easily understood problem
    to use when explaining the concept of 0-1 programming.  Other
    applications are in the areas of capital budgeting (is the project
    funded or not?), airline crew scheduling (is the crew on the flight or
    not?), and distributed database segregation (is the partition on
    machine 12 or not?).
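
    For readers who have not met 0-1 programming before, the classical
    0-1 formulation of the TSP (one of several; the subtour constraints
    are what make it hard) can be written as below, where x_ij = 1 means
    the tour travels directly from city i to city j and c_ij is the
    corresponding distance:

\[
\begin{aligned}
\min\quad & \sum_{i \ne j} c_{ij} x_{ij} \\
\text{s.t.}\quad & \sum_{j \ne i} x_{ij} = 1 \quad \forall i
    \qquad\text{(leave each city exactly once)} \\
& \sum_{i \ne j} x_{ij} = 1 \quad \forall j
    \qquad\text{(enter each city exactly once)} \\
& \sum_{i \in S,\ j \notin S} x_{ij} \ge 1 \quad
    \forall\ \emptyset \ne S \subsetneq \{1,\dots,n\}
    \qquad\text{(no subtours)} \\
& x_{ij} \in \{0,1\}.
\end{aligned}
\]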
             
    Simulated annealing is an interesting approach to obtaining good
    solutions to these problems.  I'll look at the algorithm that you
    mention, but I thought that the approach did not solve to optimality.
    
    Do you know of any large problems floating around that others have
    not solved to optimality that I can try?
    
    Tim
691.10. "the excellent is the enemy of the good" by HERON::BUCHANAN (Andrew @vbo/dtn8285805/ARES,HERON) Fri Jan 27 1989 11:09
>    Actually, I'm interested in solving VERY large 0-1 linear programming
>    problems to optimality.  "Real good," "close," and "suboptimal" are not
>    good enough.  

>	Other
>    applications are in the areas of capital budgeting (is the project
>    funded or not?), airline crew scheduling (is the crew on the flight or
>    not?), and distributed database segregation (is the partition on
>    machine 12 or not?). 

	Surely these are *not* applications where "solving to optimality"
is ever relevant?   More important are issues such as:

	(0) modelling the real world to a sufficient degree of accuracy

	(1) inability to define a single objective function

	(2) modifying an existing solution as the world changes

	I'm not denying the interest of the "solving to optimality"
subproblem per se.   Simulated annealing doesn't give you any handle on that.
It's just a way of moving away from a local optimum, to perhaps a catchment
area for a better one.

	It could well be that, by harnessing the spare cycles of some LAVCs,
you could deal with some pretty big CPU-intensive problems.

Good luck.
691.11. "Optimality is critical!" by DACT6::CANNON (Tim Cannon - Mobil Corp. Acct. Team) Fri Jan 27 1989 12:04
    re: .10
    
    Quite the contrary!  Solving to optimality is becoming critical
    in a lot of areas.  Take capital budgeting for example.  We are
    working with a telecommunications company whose capital budget can
    run in the hundreds of millions to a billion dollars.  Heuristic
    approaches and other approximations (if they are ***REAL*** good)
    can approach 95% of optimality.  That additional 5%, even for a
    $100 million budget, represents $5 million.  That's worth pursuing.
    The same case can be made for airline scheduling, where every dollar
    saved is critical in that competitive environment.
    
    You make a good point about an LAVC.  That is exactly what I did
    for my dissertation--use an LAVC as a parallel processor.  And we
    got excellent results.
    
    
    Anybody got any problems to solve?
    
    Thanks,
    
    Tim
691.12. "Job-Shop?" by LARVAE::TURNER (Mark Turner * DTN 849-3429 * SPYDER::TURNER) Sun Jan 29 1989 15:30
How about the job-shop scheduling problem?  A good reference is:

	Sequencing and Scheduling: An Introduction to the 
	   Mathematics of the Job-Shop
	by S. French
	Ellis Horwood, 1982 
	   (in N. America, distributed by Halstead Press (part of Wiley))
	ISBN 0-85312-364-0 or 0-470-27229-5

which covers many approaches (LP, branch-and-bound, etc.) and gives good
references to each.

Making the rounds of a few DEC manufacturing plants will almost certainly
flush out some real examples of these scheduling problems.


						Regards,


						Mark