| >(1) (Straightforward) For M cards, what are the terminal loops?
If M is a triangular number, n(n+1)/2, then the configuration
n n-1 ... 2 1 loops to itself. Otherwise, I think the only loops involve a
reduction in the number of piles at some point, where the configuration
looks like ... 1 1, which must be preceded by ... 2 2 1, preceded by
... 3 3 2 1, etc. Any configuration with two entries of the same value will
reduce to one of these cases; however, there are configurations that
reduce in # piles but are not part of the eventual loop: 9 stacks of 1's,
for example, would reduce to 1 stack of 9, which starts the example in
(.-1).
>(2) (Beats me) For M cards, explore different starting positions. What is
>A(M), the maximum number of actions before a terminal loop is entered?
For large M, this is some obscure function of P(M), the number of
unrestricted partitions of M, and is certainly upper-bounded by P(M). Last
time I looked, there was no known formula to compute P(M), so this and the
following question look intractable, unless there is some more direct
formulation that is computable.
>(3) (Simple case of (2), but it still beats me) Conjecture:
>A(n(n+1)/2) = n(n-1).
This looks like an upper bound, offhand. (Ain't intuition wonderful?)
|
| This note shows how I'd solve the first problem, and gives my results
so far for the third problem. The trickiest problem (#2) I have not gone
near yet, but I believe that the technique shown below would carry over.
Apologies for any lack of clarity.
Let h_1 >= .. >= h_n be the sizes of the n piles of cards. So
SUM(i=1 .. n)h_i = M. Visualize the cards arranged on an x-y grid as
follows:
For i = 1 .. n, put the h_i cards of the ith pile into the ith
column of the grid, so that points (i, 1) ..(i, h_i) are each occupied
by one card. In this grid representation, we do not permit more than
one card to occupy a single point.
In terms of this picture, we can view the transformation of the
cards' positions as the following two-stage process.
(a) Move each card. Shift the card at (i, j+1) (where j >= 1) to (i+1,
j). Shift the card at (i, 1) to (1, i). Thus under (a) each card moves
to the next position "south-east" in a diagonal orbit, returning to the
most "north-westerly" point of that orbit when it reaches the x-axis.
(b) Reorder the columns. The left-most column may no longer be the
tallest, so let's put it in its proper place. For each row j, if (1,
j) is empty, and (imax, j) is full (where imax is the largest i for
which (i, j) is full), then move the card from (imax, j) to (1, j). (b)
will demonstrate non-null behaviour exactly when n < h_1 - 1.
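(In pile terms, one complete (a)+(b) step is just the usual "take one
card from each pile and stack them into a new pile" move, with the
result re-sorted. A minimal Python sketch, if you want to play along:)

    def step(piles):
        """One transformation: every pile shrinks by one, a new pile of size
        len(piles) appears, and (as in (b)) the piles are re-sorted."""
        new = [h - 1 for h in piles if h > 1] + [len(piles)]
        return tuple(sorted(new, reverse=True))

    print(step((1,) * 10))      # -> (10,)     ten piles of 1 become one pile of 10
    print(step((10,)))          # -> (9, 1)
    print(step((4, 3, 2, 1)))   # -> (4, 3, 2, 1)   the triangular staircase loops to itself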
Let's clarify with an example: M = 10, and the cards are labeled to
show where each is moving. Begin with 10 piles of 1 card:
123456789A
goes to...
A
9
8
7
6
5
4
3
2
1
goes to...
A
9
8
7
6
5
4
3
12
(Did you notice the way (b) operated there?)
goes to...
A
9
8
7
6
5
34
12
goes to...
A
9
8
7
6
25
134
goes to...
A
9
8
47
36
125
goes to...
A
9
58
247
136
goes to...
A
69
358
1247
Y = SUM(over all cards)(i+j) is clearly unchanged by (a), but Y is
strictly decreased by (b). There is no mechanism to increase Y. So
if a grid has n < h_1 - 1, (b) acts and Y drops; since Y can never be
regained, that grid cannot recur, i.e. it is not in a terminal loop.
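Here is a quick Python check of that Y argument along the M = 10
example above; Y is computed column by column as
SUM(i*h_i + h_i(h_i+1)/2), it never increases, and it drops on exactly
those steps where (b) fires (n < h_1 - 1):

    def step(piles):
        new = [h - 1 for h in piles if h > 1] + [len(piles)]
        return tuple(sorted(new, reverse=True))

    def Y(piles):
        # SUM over all cards of (i + j), taken column by column
        return sum(i * h + h * (h + 1) // 2 for i, h in enumerate(piles, start=1))

    p = (1,) * 10                    # the 10 piles of 1 card from the example
    for _ in range(15):
        q = step(p)
        assert Y(q) <= Y(p)                           # no mechanism to increase Y
        assert (Y(q) < Y(p)) == (len(p) < p[0] - 1)   # drops exactly when (b) fires
        p = q
    print(p)                         # -> (4, 3, 2, 1), the triangular fixed point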
Let's focus on the diagonals (i.e. the diagonal orbits) of the
grid, d_1, d_2, .. , where the length of d_i is i. Say a diagonal is
occupied if *all* entries are occupied. Call a diagonal empty if *all*
entries are empty. Otherwise say it's mixed. Because of the way we
have the columns in order of decreasing height, if d_i is occupied,
then all shorter diagonals must be occupied. If a d_i is empty, all
longer diagonals are empty. So if d_i and d_j are mixed diagonals,
and i < k < j, then d_k is mixed.
Suppose that there are two mixed diagonals. Then by the above,
there are two adjacent mixed diagonals, d_i and d_i+1. These have
orbital periods i and i+1. It is clear that as the transformation is
repeatedly applied to the grid, after at most i(i+1) applications,
a "hole" in d_i will appear at (1, i) whilst (2, i) on d_i+1 will be
occupied, triggering non-null behaviour at (b). Therefore the
original grid was not in a terminal loop.
If there are no mixed diagonals, or just one mixed diagonal, then
there is no way that any card can be moved to a shorter diagonal, since
all shorter diagonals are full. Thus the grid is in a terminal loop.
No mixed diagonals implies M is a triangular number, and all the
grids will converge to the single fixed point. For M non-triangular,
the terminal loop membership is exactly those grids with one mixed
diagonal, which orbits round as the transformation is applied. In
most cases, there will be more than one terminal loop; e.g. for M = 8,
there are two loops, exemplified by:
*     &   *
**        *
**        ***
***       ***
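For anyone who wants to check all of this by machine, here is a rough
Python enumeration (the helpers are mine): run every partition of M to
its cycle, count the distinct cycles, and confirm that being in a
terminal loop coincides with having at most one mixed diagonal.

    def step(piles):
        new = [h - 1 for h in piles if h > 1] + [len(piles)]
        return tuple(sorted(new, reverse=True))

    def partitions(m, largest=None):
        """All partitions of m as non-increasing tuples."""
        if m == 0:
            yield ()
            return
        largest = m if largest is None else largest
        for first in range(min(m, largest), 0, -1):
            for rest in partitions(m - first, first):
                yield (first,) + rest

    def cycle_of(piles):
        """Follow the trajectory until it repeats; return the cycle reached."""
        seen, seq = {}, []
        while piles not in seen:
            seen[piles] = len(seq)
            seq.append(piles)
            piles = step(piles)
        return frozenset(seq[seen[piles]:])

    def mixed_diagonals(piles, m):
        """Count diagonals d_k that are neither fully occupied nor empty."""
        count = 0
        for k in range(1, m + 1):
            occ = sum(1 for i in range(1, k + 1)
                      if i <= len(piles) and k + 1 - i <= piles[i - 1])
            if 0 < occ < k:
                count += 1
        return count

    for m in (6, 8, 10):
        loops = set()
        for p in partitions(m):
            cyc = cycle_of(p)
            loops.add(cyc)
            assert (p in cyc) == (mixed_diagonals(p, m) <= 1)
        print(m, len(loops))   # expect 1 loop for triangular 6 and 10, and 2 for 8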
----------------------------------------------------------------------
For M = n(n+1)/2, what is A(M), the maximum number of
transformations required for a grid to enter a terminal loop? The
unique terminal configuration we'll call T_n.
Firstly, I can show that there is a grid which requires n(n-1)
transformations before Nirvana is achieved. Consider the set, G, of
those grids with n-1 full diagonals, then the nth diagonal mixed, with
exactly one "hole" and the n+1th diagonal mixed, with exactly one
occupied point. There are n choices for the hole location. On
choosing that, 2 of the n+1 locations for the singleton point are
disallowed (the singleton can sit neither in the hole's own column nor
in the column just to its right, else the columns are no longer a legal
decreasing grid), so |G| = n.(n+1-2) = n(n-1). Transformation rotates these
two mixed orbits at different rates, until (b) can apply. By the
reversibility of these rotations, no grid can be the image under
transformation of *two* or more grids. Therefore we can arrange these
n(n-1) grids into a sequence g_1, .. g_n(n-1) such that for i= 1..
n(n-1)-1, g_i transforms to g_i+1, and g_n(n-1) transforms to T_n.
Call this sequence the Path to Nirvana. One element of G requires
n(n-1) steps.
It is easy to see that this is the grid consisting of the piles:
(n-1) & { (n-1) (n-2) .. 2 1} & 1, i.e. n-1, n-1, n-2, .., 2, 1, 1,
because that is a member of G, and it has no ancestor under the
transformation (any ancestor would have at least n piles, so the pile it
creates would have size at least n, yet the largest pile here is n-1).
There is only one such element, the one at the start
of the Path to Nirvana. Call it S_n.
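Both the counting and the claim about S_n are easy to check by machine
for small n. In the Python sketch below (my own parametrisation), a
member of G is built from the hole column a on d_n and the singleton
column b on d_n+1, with b barred from the hole's column and the one just
to its right; the assertions check |G| = n(n-1), that every member steps
to another member or to T_n, and that S_n takes exactly n(n-1) steps.

    def step(piles):
        new = [h - 1 for h in piles if h > 1] + [len(piles)]
        return tuple(sorted(new, reverse=True))

    def members_of_G(n):
        grids = set()
        for a in range(1, n + 1):               # column of the hole on d_n
            for b in range(1, n + 2):           # column of the singleton on d_(n+1)
                if b in (a, a + 1):             # the two disallowed placements
                    continue
                cols = {i: n + 1 - i for i in range(1, n + 1)}   # full staircase
                cols[a] -= 1                                     # knock out the hole
                cols[b] = cols.get(b, 0) + 1                     # add the singleton
                grids.add(tuple(sorted((h for h in cols.values() if h > 0),
                                       reverse=True)))
        return grids

    def steps_to_fixed_point(piles):
        t = 0
        while piles != step(piles):
            piles, t = step(piles), t + 1
        return t

    for n in range(2, 8):
        T_n = tuple(range(n, 0, -1))
        G = members_of_G(n)
        S_n = tuple(sorted([n - 1] + list(range(n - 1, 0, -1)) + [1], reverse=True))
        assert len(G) == n * (n - 1)
        assert all(step(g) in G or step(g) == T_n for g in G)
        assert S_n in G and steps_to_fixed_point(S_n) == n * (n - 1)

Combining this with the partitions generator given earlier would also
let one brute-force the conjectured A(n(n+1)/2) = n(n-1) for small n.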
This shows that A(n(n+1)/2) >= n(n-1). What of an upper bound?
The technique will be to examine each element, g_i, of G. What other
paths of transformations derive g_i? Can we show the maximum length
of such a path is =< the length of the segment of the Path to Nirvana
from S_n to g_i? If we can, then since every element achieves Nirvana
only through g_n(n-1), we're home.
For a long time I was stalled on this problem, as I tried to use
the grid representation. I think that what is necessary is a new
viewpoint: the *gap* representation.
The following general gap theory works for any M, triangular or
not.
Suppose that h_1 >= .. >= h_n are a collection of piles, C. If
there exist h_i, h_i+1 such that h_i - h_i+1 >= p, then say that C has
a p-gap. Simple enough. Oh, and let's add that if h_n >= p, C has a
p-gap.
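(For concreteness, here is that definition in Python, with the piles in
decreasing order and the extra h_n convention included:)

    def has_p_gap(piles, p):
        """piles are non-increasing; a p-gap is a drop of at least p between
        neighbouring piles, or a final pile of size at least p."""
        return piles[-1] >= p or any(a - b >= p for a, b in zip(piles, piles[1:]))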
Lemma: If C has a 3-gap between h_i and h_i+1 (or h_n >= 3)
and n - h_i+1 >= 2,
then the maximum length of a path of ancestors leading to C is i.
Proof: Let A(C) be an immediate ancestor of C, i.e. A(C)
transforms to C. To get from C to an ancestor, we need to pick
a pile h_j and "explode" it, distributing one element to each of the
other piles, and creating singleton piles with any left over. We can
pick any pile h_j for which n - h_j =< 1.
Let's speak of the piles h_1 .. h_i as the left hand side (lhs),
and the piles h_i+1 .. h_n as the rhs. By the starting premises, we
cannot explode any pile in the rhs (each rhs pile is at most h_i+1, so
n minus it is at least 2). So we must have picked a pile in the lhs
and exploded it. What can we say about A(C), whose parameters we'll
mark with asterisks to distinguish them?
A(C) will have a 3-gap between h*_i-1 and h*_i. For:
h*_i = h_i+1 + 1
h*_i-1 >= h_i + 1 (equality unless it was pile h_i itself exploded)
So h*_i-1 - h*_i >= h_i - h_i+1 >= 3
And n* - h*_i >= 2. For:
n* >= h_i (n* is the size of the exploded pile, which lies in the lhs)
So n* - h*_i >= h_i - h_i+1 - 1 >= 3 - 1 >= 2.
So A(C) satisfies similar criteria to C. However the lhs of A(C)
is one smaller! We can only chase the ancestry i steps from C,
therefore, before we have no lhs left. End of proof.
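This explode rule, and the lemma's bound, are easy to poke at
numerically. A small Python sketch (the test configuration (6, 6, 2, 1)
is my own example, chosen so the 3-gap sits at i = 2 with n - h_3 = 2):

    def ancestors(piles):
        """All immediate ancestors of a configuration, by the explode rule above."""
        n = len(piles)
        result = set()
        for j, hj in enumerate(piles):
            if n - hj <= 1:                      # only such a pile can be exploded
                rest = [h + 1 for k, h in enumerate(piles) if k != j]
                rest += [1] * (hj - (n - 1))     # leftover cards become singletons
                result.add(tuple(sorted(rest, reverse=True)))
        return result

    def longest_ancestor_path(piles, cap=30):
        """Length of the longest chain of ancestors above piles (capped)."""
        if cap == 0:
            return 0
        return max((1 + longest_ancestor_path(a, cap - 1) for a in ancestors(piles)),
                   default=0)

    # (6, 6, 2, 1) has a 3-gap between h_2 and h_3, so the lemma predicts
    # an ancestor path of length at most i = 2; indeed:
    print(longest_ancestor_path((6, 6, 2, 1)))   # -> 2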
Incidentally, suppose that C has a 3-gap as before, but that
n - h_i+1 =< 1, i.e. we have the option to use a rhs element instead
of a lhs one. If we choose to use a lhs element regardless, then the
earlier lemma applies for A(C) and we get the familiar limitation on
path length of i.
If we *do* use a rhs element, then in A(C), the gap size is
certainly not reduced, and the size of each element in the lhs is
increased by 1. Either n* >= h*_i+1 + 2 (in which case we're in the
earlier lemma's ballpark), or n* =< h*_i+1 + 1, when we're in the same
position as the paragraph before. How long can we keep using rhs
elements? One upper bound is M/i - h_i, since each step pushes at
least i elements into the lhs. Adding the result of the last theorem,
one gets M/i - h_i + i (or M - h_1 + 1 if i=1). h_i is clearly > i,
so the whole bound is below M/i. I hope this is sharp enough.
All the entries on the Path to Nirvana have either a 3-gap or a
2-gap. The 3-gap ones are g_2 to g_n. So "all we need to do" is to
deal with the 2-gap cases.
-------------------------------flakiness boundary----------------------
I have deleted here a bunch of stuff which was false/foolish/
flimsy/admixture of all three. Can anyone generate a 2-gap theorem along
the lines of the 3-gap theorem? The technique does seem promising, and
although we've only dealt with 1/n of the Path to Nirvana, it was the most
*difficult* part.
Good luck,
Andrew
|