
Conference rusure::math

Title:Mathematics at DEC
Moderator:RUSURE::EDP
Created:Mon Feb 03 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2083
Total number of notes:14613

579.0. "Acyclic Graphs & Matrix Multiplication" by CHOVAX::YOUNG (I think we're all bozos on this BUS.) Mon Sep 15 1986 22:11

    Two practical problems:
    
    
    	1)	I need to detect whether or not a directed graph, received
    		as input, is acyclic.  What is the most efficient
    		method to determine this?  Alternatively, I need to
    		find out what the cycle is.
    
    	2)      I believe that the preceding requires the use of matrix
    		multiplication.  Either way, what is the most efficient
    		algorithm to multiply 2 NxN arrays?
    
    -- Barry
    
    ps.  I did look but could not find any other notes dealing with
    these.
    
579.1. "Efficient matrix multiplies" by SMURF::DIKE Mon Sep 15 1986 22:38
    An efficient method of multiplying two NxN matrices together is
    as follows:
    
    X = A B  Y = E F
        C D      G H, where A through H are matrices, each N/2 x N/2.
    
    Then, X*Y = A*E + B*G  A*F + B*H
                C*E + D*G  C*F + D*H, where the 8 submatrix
    multiplications are, of course, accomplished by recursing.  When
    you get down to sufficiently small matrices, say 1x1 or 2x2, you
    just multiply them out normally.  I believe, but forgot how to
    demonstrate, that this method has order of growth O(2**sqrt(8)).
    I also forgot whose name is attached to this.
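    The recursive block scheme just described can be sketched in
    Python (an illustrative rendering, not code from the note; the
    function name and the power-of-two size assumption are mine):

```python
# A sketch of the block-recursive multiply described above: split
# each matrix into four N/2 x N/2 blocks, do 8 recursive block
# multiplies, and fall back to the ordinary triple loop for small
# sizes.  Assumes N is a power of 2.
def block_multiply(X, Y):
    n = len(X)
    if n <= 2:  # base case: multiply normally
        return [[sum(X[i][k] * Y[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
    h = n // 2
    # carve out the four blocks of each operand
    A = [row[:h] for row in X[:h]]; B = [row[h:] for row in X[:h]]
    C = [row[:h] for row in X[h:]]; D = [row[h:] for row in X[h:]]
    E = [row[:h] for row in Y[:h]]; F = [row[h:] for row in Y[:h]]
    G = [row[:h] for row in Y[h:]]; H = [row[h:] for row in Y[h:]]

    def add(P, Q):
        return [[p + q for p, q in zip(pr, qr)] for pr, qr in zip(P, Q)]

    # the eight recursive submatrix products
    TL = add(block_multiply(A, E), block_multiply(B, G))
    TR = add(block_multiply(A, F), block_multiply(B, H))
    BL = add(block_multiply(C, E), block_multiply(D, G))
    BR = add(block_multiply(C, F), block_multiply(D, H))
    # reassemble the quadrants into the full product
    return ([tl + tr for tl, tr in zip(TL, TR)] +
            [bl + br for bl, br in zip(BL, BR)])
```

    (With 8 recursive multiplies this does no better than the usual
    method asymptotically; the interest is in schemes that need
    fewer than 8, as below.)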
    
    There is also another method along the same lines, which depends
    on the fact that its author found a clever way of doing a 2x2 matrix
    multiply with 7 multiplies instead of 8.  I don't remember it, and,
    since my notes from school are in boxes in Cambridge, I can't readily
    look it up.
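    The 7-multiply 2x2 scheme alluded to here is, in its standard
    textbook form, the following (my reconstruction, not from the
    note; the entries may themselves be submatrices, which is what
    makes the scheme recursive):

```python
# The standard 7-product identities for multiplying
# [[a, b], [c, d]] by [[e, f], [g, h]] -- reconstructed from the
# usual textbook presentation, not copied from the note.
def multiply_2x2_with_7(a, b, c, d, e, f, g, h):
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return ((p5 + p4 - p2 + p6, p1 + p2),            # top row
            (p3 + p4,           p1 + p5 - p3 - p7))  # bottom row
```

    Seven multiplies at the cost of extra additions; applied
    recursively, fewer than 8 subproblems per halving is what
    drops the exponent below 3.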
    				Jeff
    
    
579.2. by CHOVAX::YOUNG (I think we're all bozos on this BUS.) Tue Sep 16 1986 00:08
    I have both of those in my copy of Knuth. They are:
    
    		O(2^(3N)) = O(8^N)	and	O(7^N)
    	
    	respectively.  Neither of these is significantly better than
    the brute force method, and Knuth and others seemed to be
    concentrating on minimizing multiplications at the expense of additions.
    This is a non-issue to me; I need the minimum total number of
    calculations.  Knuth had said that the whole matter was in a state
    of considerable change at the time of his writing (1981 in the second
    edition), and I had hoped for a more efficient algorithm.  Thanks
    though.
    
    --  Barry
    
    Ps.  The second algorithm is credited to S. Winograd.
579.3. "Setting the record straight." by CHOVAX::YOUNG (I think we're all bozos on this BUS.) Tue Sep 16 1986 00:14
    Whoo boy, did I get those wrong.
    
    Strassen got credit for the second algorithm, but Winograd gets
    credit for many other related ones.
    
    The correct orders are:
    
    		O(N^3)		and	O(N^lg(7)) = O(N^2.8...)
    
    Still, not much of an improvement over brute force O(N^3).
    
579.4. by SQM::HALLYB (Free the quarks!) Tue Sep 16 1986 02:35
    Isn't Wrathall's algorithm the right way to do this?  Damn, I wish
    I could remember, but it was very fast.  Might be in a book on
    parsing algorithms.
    
      John
579.5. by CLT::GILBERT (eager like a child) Tue Sep 16 1986 12:46
    Look under "Topological sorting" in the index of Vol 1 of Knuth's
    "The Art of Computer Programming".  The algorithm described there
    is quite fast.
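    A topological sort doubles as a cycle test: if the sort cannot
    consume every vertex, the leftovers lie on or behind a cycle.
    A rough Python sketch in the spirit of that algorithm (my
    illustration, not Knuth's code; the dict-of-successor-lists
    representation is assumed):

```python
from collections import deque

# Cycle detection via topological sorting: repeatedly emit a vertex
# with no remaining incoming edges.  If every vertex gets emitted,
# the graph is acyclic.  graph maps each vertex to its successors.
def is_acyclic(graph):
    indegree = {v: 0 for v in graph}
    for v in graph:
        for u in graph[v]:
            indegree[u] += 1
    # start from the vertices with no incoming edges
    ready = deque(v for v, d in indegree.items() if d == 0)
    emitted = 0
    while ready:
        v = ready.popleft()
        emitted += 1
        for u in graph[v]:
            indegree[u] -= 1
            if indegree[u] == 0:
                ready.append(u)
    # any vertex never emitted sits on, or downstream of, a cycle
    return emitted == len(graph)
```

    This runs in time linear in the number of vertices plus edges.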
579.6. "They don't get much faster" by SMURF::DIKE Tue Sep 16 1986 14:04
    I believe that you are not going to get much faster (in order of
    growth) with an understandable algorithm.  The current record on the
    exponent is about 2.49...  Of course, if you're going to be dealing
    with a definite range of sizes for your matrices, then you should
    forget about order of growth, implement a bunch of algorithms, and
    see which works best for your matrices.
    				Jeff
    
579.7. "Exponential time?" by TAV02::NITSAN (Nitsan Duvdevani, Digital Israel) Tue Sep 16 1986 14:05
-- Isn't the problem of deciding if a graph contains a cycle [of length k]
   an NP-complete problem?
579.8. "Is This Correct?" by BEING::POSTPISCHIL (Always mount a scratch monkey.) Tue Sep 16 1986 14:41
    Make a matrix with one bit for each node in the graph.  Call this
    matrix "found".  Make another matrix called "check".  Initialize
    both matrices to all bits off.
    
    Next:  Select any node whose bit in found is off (if there is none,
    stop).  Call this node "n".  Set check[n] on and found[n] on.
    
    Loop:  Select any node whose bit in check is on (if there is no such
    node, go to Next).  Call this node "n".  Set check[n] off.  Let i be,
    in turn, each node adjacent to n.  If found[i] is on, the graph
    contains a cycle.  Otherwise, set found[i] on.  Set check[i] on.  Erase
    the arc from n to i.  Go to Loop. 
    
    If you stop without finding a cycle, there are no cycles.  If the graph
    is disconnected, the number of times you pass through Next is the
    number of components (or whatever they are called) the graph has.  If
    found is expanded from bits to node-names and set to null instead of
    off and n instead of on (for whatever value n has at the time you are
    instructed to set a bit in found on), you will have a record of the
    cycle when you find it.  The time for this algorithm is a linear
    function of the number of nodes and the number of arcs if the time
    to read and change indexed bits is constant.  If the time is not
    constant, the time should be no worse than a quadratic function.
    
    
    				-- edp
                       
579.9. "Matrix Multiply References" by GALLO::EKLUND (Dave Eklund) Tue Sep 16 1986 18:02
    	Strassen's algorithm for matrix multiplication is/was the best
    mathematically in that it uses the fewest TOTAL (not just multiply)
    operations for large matrices (the cutover I believe is about 100*100
    size).  Up to that point it still uses the fewest multiplies at
    a cost of some additions.  However, I seem to recall that it is
    not as stable numerically as the "normal" method.
    
    	For really large matrices the problem becomes quite different
    from what one might think.  The crucial problem is memory references!
    One MUST minimize these to be successful.  Therefore one of the
    best methods involves calculating partial products by walking
    down BOTH columns of the two matrices (which is just as strange
    as it sounds).
    
    	Two good references are:
    
    	Implementing Linear Algebra Algorithms for Dense Matrices on
    a Vector Pipeline Machine on pages 91-112 of the SIAM Review,
    Volume 26, No. 1, January 1984
    
    	Squeezing the Most out of an Algorithm in CRAY FORTRAN on
    pages 219-230 of the ACM Transactions on Mathematical Software
    Vol. 10, No. 3, September 1984.
    
    	Both reference vector machines.  In practice the same
    considerations apply to scalar machines, and are critically
    dependent upon the cache size.
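    	To illustrate the memory-reference point (my sketch, not code
    from the cited references), a blocked multiply touches the operands
    in small tiles so that the active working set stays in cache:

```python
# An illustrative cache-blocked matrix multiply.  Working on
# BS x BS tiles keeps each operand's active portion small, which
# cuts memory traffic -- the point made above about minimizing
# memory references.  The arithmetic is the ordinary O(N^3) method,
# only reordered.
def blocked_matmul(A, B, BS=2):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i0 in range(0, n, BS):
        for k0 in range(0, n, BS):
            for j0 in range(0, n, BS):
                # multiply one tile of A by one tile of B,
                # accumulating into the corresponding tile of C
                for i in range(i0, min(i0 + BS, n)):
                    for k in range(k0, min(k0 + BS, n)):
                        aik = A[i][k]
                        for j in range(j0, min(j0 + BS, n)):
                            C[i][j] += aik * B[k][j]
    return C
```

    The tile size BS would in practice be tuned to the machine's
    cache, per the considerations above.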
    
579.10. "A simple linear algorithm" by TLE::FAIMAN (Neil Faiman) Tue Sep 16 1986 21:02
    If you choose to represent your graph as an "adjacency matrix"
    M, where M[i,j] = 1 if there is an edge (i,j), = 0 otherwise,
    then you can't do any better than O(|V|^2) (proportional to the
    square of the number of vertices).
    
    However, if you have a list of all the vertices and, for each
    vertex, a list of all the successor vertices (or equivalently,
    a list of all the outgoing edges), then there's an easy
    O(|V|+|E|) (linear in the numbers of vertices and edges)
    algorithm.  While |E| = O(|V|^2) in the worst case, it tends
    to be linear in |V| for graphs of practical interest.
    
    The algorithm that follows is a drastic simplification of Tarjan's
    algorithm for strongly connected components in a directed graph.
    If you actually want to know what vertices are in the cycle,
    you will probably need to use the full algorithm--a good
    presentation is in Aho, Hopcroft, and Ullman, _The_Design_and_
    _Analysis_of_Computer_Algorithms_.
    
    ===============================================================
    
    Initially, let each vertex be marked "unvisited" and "unstacked".
    
    For each vertex v, do:
    	If v is "unvisited", then SEARCH(v)
    
    Where routine SEARCH(v) ==
    	Mark v as "visited"
    	Mark v as "stacked"
    	For each successor u of v, do:
    	    If u is "stacked", then there is a cycle in the graph;
    	    Otherwise, if u is "unvisited", then SEARCH(u)
    	Mark v as "unstacked"
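
    The pseudocode above translates almost directly into Python (an
    illustrative rendering; the dict-of-successor-lists representation
    is assumed):

```python
# A direct rendering of the SEARCH pseudocode above.  "stacked"
# vertices are those on the current DFS path; reaching one again
# means a back edge, i.e. a cycle.
def has_cycle(graph):
    visited = set()
    stacked = set()

    def search(v):
        visited.add(v)
        stacked.add(v)
        for u in graph[v]:
            if u in stacked:
                return True          # back edge: cycle found
            if u not in visited and search(u):
                return True
        stacked.discard(v)           # v is now "unstacked"
        return False

    return any(search(v) for v in graph if v not in visited)
```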
579.11. "Thanks." by CHOVAX::YOUNG (I think we're all bozos on this BUS.) Thu Sep 18 1986 23:39
    
    Thank you all very much for your time and thoughts.  They have been
    VERY helpful.
    
    -- Barry
    
579.12. "Interesting." by TAV02::NITSAN (Nitsan Duvdevani, Digital Israel) Sun Sep 21 1986 05:47
Re .8

The algorithm looks correct, so the problem of deciding if a graph contains
a cycle looks polynomial.  But the problem of deciding if a graph contains a
cycle OF LENGTH k is NP-complete, because the special case of k = no. of nodes
is the problem of deciding if the graph contains a Hamiltonian cycle, which is
known to be NP-complete.

Nitsan.