
Conference rusure::math

Title:Mathematics at DEC
Moderator:RUSURE::EDP
Created:Mon Feb 03 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2083
Total number of notes:14613

1401.0. "What's algebra?" by MOVIES::HANCOCK (Peter Hancock) Fri Mar 22 1991 09:04

The following are some rather naive and puzzled thoughts about what
algebra is, prompted by coming across the idea of "algebraic
specification", in software engineering. I wonder if anyone could
recommend a paper or a bit of a book which explains just what is
"algebraic" about algebra, (let alone software engineering)?

  I have always thought that the nutshell definition of "algebra"
was "the study of equational theories", presented by free variable
(quantifier-free) laws with premises and conclusions all equations.
But I've obviously never thought what it was saying, because although
this captures groups, lattices, etc, there is something funny about
e.g. fields, where the inverse of the multiplication is not totally
defined. (Or for that matter, categories, in which composition is
not totally defined.)

  About fields, you might say that there's nothing new there, because
the laws for inv (the multiplicative inverse) have the form
  x /= 0  => x . inv(x) = 1
which is just a boolean combination of equations.
But still, if we allow only => in framing laws, which is quite natural
if you are dealing with inferential laws, then the law for inv is not
of this form: it is
  (x = 0 => ABSURDITY) => x . inv(x) = 1
Which is:
a. Odd, because as a law it has a law as its hypothesis (like say, the rule
   of => introduction), quite unlike distributivity (which has no hypotheses),
   left cancellation (which has a premise that is not hypothetical), etc.
b. A cheat, because it has 2 statement forms: equations, and ABSURDITY.

  In case you think I am confusing the implication connective => with the
inferential turnstile |-, think again. I wrote the above laws with =>
as the inference sign, because I thought it would look odd if I wrote
  (x = 0 |- ABSURDITY) |- x . inv(x) = 1
But that's what I meant. The idea of nesting |- "to the left" is precisely
what is needed to explain rules such as the natural deduction =>-intro rule;
and it's not just me who thinks so. There is a real sense to "equational law"
which distinguishes arbitrary boolean combinations of equations from
true equational laws. [It would also allow a limited form of "quantification":
a law like forall-introduction binds free variables in its hypothesis, and so
one could consider laws such as
  (x = x . y , x = x . z |-[x] x' . x = x) |- some equation in x' y and z.
But maybe that's not an algebraic law.]

  Rather than introduce the full panoply of logical connectives into
what is allowed in an algebraic law, so as to be able to say that the
laws of a field are algebraic, one could take a smaller step, and
allow inequations also in an algebraic law. For example, the law:
  0 /= s(x)
would then be algebraic. If an algebraic structure has distinguished
elements that have to be distinct, this too can be expressed algebraically.
If this extended notion, allowing inequations, is a more worthwhile
notion of algebraic law, then presumably one could say that an algebra
is given by some operators, and some laws whose conclusions are equations
and inequations, [and whose hypotheses are hereditarily of that kind?], which
is consistent in the sense that things are never both equal and different.
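
  To make the "conditional equation" reading concrete, here is a small
Haskell sketch (purely illustrative; the names z5, mul5, inv5 and
inverseLaw are all invented here), which checks by brute force, over the
field Z_5, the law with an inequational premise  x /= 0  =>  x . inv(x) = 1:

      -- Brute-force check over Z_5 (illustrative sketch only).
      z5 :: [Int]
      z5 = [0 .. 4]

      mul5 :: Int -> Int -> Int
      mul5 x y = (x * y) `mod` 5

      -- inv5 is found by search; its value at 0 is arbitrary.
      inv5 :: Int -> Int
      inv5 x = head ([ y | y <- z5, mul5 x y == 1 ] ++ [0])

      -- The law with a premise: for every x, if x /= 0 then x * inv(x) = 1.
      inverseLaw :: Bool
      inverseLaw = and [ mul5 x (inv5 x) == 1 | x <- z5, x /= 0 ]

  The premise shows up as nothing more than a filter on the cases checked,
which is one way of seeing why a law of this shape is weaker than a bare
equation.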

  As for things one is taught in "algebra", like categories, where
the non-totality of the operators isn't just a minor exception, but
the whole interest of the thing (in some sense!), don't they completely
destroy any idea that one is dealing, primarily, with equations?
The definition I was taught (well.. recall) of a category was that it had
a set (small, large, ..) of objects, and for each ordered pair of objects
A,B a set of morphisms hom(A,B), and for each A an identity 1[A],
and for  each  triple A,B,C, a composition ;[A,B,C], satisfying the laws:
1. f : hom(B,C) |-  1[A] ;[A,B,C] f = f  = f ;[A,B,C] 1[A]
2. f : hom(A,B), g : hom(B,C), h : hom(C,D)
   |- (f ;[A,B,C] g) ;[A,C,D] h = f ;[A,B,D] (g ;[B,C,D] h)
3. f : hom(A,B), g : hom(B,C) |- (f ;[A,B,C] g) : hom(A,C)
4. 1[A] : hom(A,A)
But the formulation of these laws at best involves a 3-place relation:
f : hom(A,B). So is algebra the study of systems presented by
a few constants, including constants for predicates and relations besides
identity, and some laws? Why not throw in all the logical constants as well?
What is peculiarly *algebraic* about algebra?
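
  [One way to see the typing question in programming terms: push the
3-place relation f : hom(A,B) into a type index, so that ill-typed
composites cannot even be written down, and the laws that remain really
are bare equations. A minimal Haskell sketch, entirely illustrative (the
class name Cat, the operator >>>, and the Fun instance are invented here):

      -- Hom-sets become a type constructor indexed by domain and codomain.
      class Cat hom where
        unit  :: hom a a                          -- 1[A]
        (>>>) :: hom a b -> hom b c -> hom a c    -- ;[A,B,C], diagrammatic order

      -- Laws, stated but not machine-checked:
      --   unit >>> f      = f
      --   f >>> unit      = f
      --   (f >>> g) >>> h = f >>> (g >>> h)

      -- Example instance: ordinary functions.
      newtype Fun a b = Fun (a -> b)

      instance Cat Fun where
        unit            = Fun id
        Fun f >>> Fun g = Fun (g . f)

On this reading the "algebraic" content is the three equations, and the
typing relation is pushed into the signature rather than into the laws.]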
1401.1. "very briefly" by HERON::BUCHANAN (Holdfast is the only dog, my duck.) Fri Mar 22 1991 14:41
	I'm not a formalist, to put it mildly, and there is much that I fail
	to understand in your note.   However, I have two thoughts...


	(1) different people can mean different things by the same word.
	There are at least 3 different meanings for the word algebra.

	(2) Can't you extend inverse to a whole field by defining inv(0) = 0,
	with the axiom:
			x.inv(x).x = x
	I think this is consistent, and it avoids a divide-by-zero check.   Yes?


	This would avoid having an inference or an implication in testing for
	the applicability of your law, so substitutional tools could use it
	happily, although it wouldn't be so useful for more general use.  

regards,
Andrew.
1401.2. by HPSTEK::XIA (In my beginning is my end.) Fri Mar 22 1991 15:37
    re .1,
    
    > (2)...
    
    Andrew, you can't do that because a*inv(0) = a*0 = 0 ==>
    a*inv(0)*0 = 0*0 ==> a = 0.  That means the field contains exactly one 
    element 0.
    This trivializes the field of field theory.
    
    Eugene
1401.3. "Sorry for obscurity: the gist" by MOVIES::HANCOCK (Peter Hancock) Fri Mar 22 1991 19:58
re .1: (andrew)

  Thanks for your reply: I'm sorry the note was obscure; but it's
  difficult to be clear when you're puzzled about something.

  No way am I a formalist either: what gave you that impression?

  Unlike you, I can only think of 2 definitions of "algebra":
  one: a linear space with a vector product, i.e. a particular
       kind of algebraic structure that for some reason has come
       to bear the same name as the whole subject.
  two: the study of the consequences of particular systems of
       equational laws (and the construction of models of these
       theories).
  (I'm leaving out things that verge on pure waffle, like "the study
  of abstract structure", and things of that ilk.)

  I suppose the gist of what I was trying to say was that if you take
  "equational laws" seriously, then the definition appears to fray away at the
  edges, as regards some paradigm algebraic structures, like fields, or
  categories.

  I have seen some definitions that begin: "Def: a many-sorted algebra ...",
  but none that ever gave enough examples after to convince me that the
  things algebraists in fact study actually fall under the definition.

  Can anyone recommend a good book? What's an initial algebra? What's a
  co-algebra? What's a final co-algebra? What's universal algebra?

  Quite possibly "algebra" is a term without an exact definition,
  referring to a family of methods with a core of some kind, but shading
  away at the edges into "analysis" on one side, and logic or model theory
  on the other. But before taking the easy way out, I thought I'd ask
  people's opinions.

peter
1401.4. "Algebra" by SMAUG::ABBASI Sat Mar 23 1991 02:25
    I can't define Algebra, but I know it when I see it.
1401.5. by NOTION::DERAMO (Dan D'Eramo) Sun Mar 24 1991 04:22
        re .2,
        
        No, because he said x * inv(x) * x = 0, not
        y * inv(x) * x = y, which is what you need
        to show all y = 0.
        
        Dan
1401.6. by HPSTEK::XIA (In my beginning is my end.) Sun Mar 24 1991 17:35
    re .5
    
    No, he said x*inv(x)*x=x.  Besides, what does inv(x)*x equal anyway?
    
    Eugene
1401.7. by GUESS::DERAMO (Dan D'Eramo) Sun Mar 24 1991 21:58
        Oops, yes, x * inv(x) * x = x; my point was that that
        does not contradict your example
        
        	a * inv(0) * 0 = a * 0 = 0
        
        because in that case "a" is 0 also.
        
        Dan
1401.8. by HPSTEK::XIA (In my beginning is my end.) Sun Mar 24 1991 22:47
    re .7,
    
    inv(0) * 0 had better be 1; otherwise, it ain't an extension of the
    definition of the inverse.  That was what I was trying to show in the
    other note.  Come on Dan, you know it won't work. :-)
    
    Eugene
1401.9. "random musings" by HERON::BUCHANAN (Holdfast is the only dog, my duck.) Mon Mar 25 1991 13:49
	Let me catch the red herring I dropped here in .1, concerning the
question of an "alternative" axiomatization of fields.   Note 1239 (which is
quite interesting, and was originally entered by Dan) says:

>    Algebraists like,
>when they can, to define algebraic structures by equational identities,
>as they do it for groups, rings, etc., where etc  does not include
>fields.  Indeed, the cartesian product of equationally defined structures
>(they're called universal algebras) is again a structure of the same type
>(with operations defined coordinate-wise). But the cartesian product
>of two fields is a ring which has zero-divisors.

     Now, I had forgotten this, and was trying to toy around with the
definition of the multiplicative inverse to fix it.   My suggested fix fails,
of course, but not for the reason that Eugene argued.

     I don't think I made myself quite clear (it was a hurried note).   I
replaced the axiom:

	for all x /= 0 there exists inv(x) such that x.inv(x) = 1
by
	for all x, there exists inv(x) such that x.inv(x).x = x

	I am at no stage claiming that inv(0)*0 = 1, Eugene, but I do claim 
that 0*inv(0)*0 = 0, which is much more reasonable.   Note that for all x /= 0,
my axiom is consistent with the original.

	However, from my axiom, I don't think it's possible to deduce the
original axiom.   For instance, in the ring Z_6, my axiom works fine, with
inv(x) = x.   And Z_6 is famously not a field.
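
	A quick brute-force check of that claim, in throwaway Haskell (the
names z6, mul6 and weakInverseLaw are mine, nothing authoritative):

	    -- inv(x) = x satisfies  x.inv(x).x = x  for every x in Z_6.
	    z6 :: [Int]
	    z6 = [0 .. 5]

	    mul6 :: Int -> Int -> Int
	    mul6 x y = (x * y) `mod` 6

	    weakInverseLaw :: Bool
	    weakInverseLaw = and [ mul6 (mul6 x x) x == x | x <- z6 ]

	weakInverseLaw evaluates to True, and of course Z_6 is not a field,
which is the counterexample to deducing the original axiom.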

>The definition I was taught (well.. recall) of a category was that it had
>a set (small, large, ..) of objects, and for each ordered pair of objects
>A,B a set of morphisms hom(A,B), and for each A an identity 1[A],
>and for  each  triple A,B,C, a composition ;[A,B,C], satisfying the laws:
>1. f : hom(B,C) |-  1[A] ;[A,B,C] f = f  = f ;[A,B,C] 1[A]
>2. f : hom(A,B), g : hom(B,C), h : hom(C,D)
>   |- (f ;[A,B,C] g) ;[A,C,D] h = f ;[A,B,D] (g ;[B,C,D] h)
>3. f : hom(A,B), g : hom(B,C) |- (f ;[A,B,C] g) : hom(A,C)
>4. 1[A] : hom(A,A)

	Nits:
	(1) The objects and morphisms form "collections" not "sets".
	(2) Your law 1. should read (I think)
1. f : hom(A,B) |-  1[A] ;[A,A,B] f = f  = f ;[A,B,B] 1[B]
	(3) To summarize the definition:
	We have blobs linked by arrows, and the collection of arrows is
closed under an associative composition on a chain of arrows.   Each blob
has a distinguished identity arrow, from the blob to itself.
	(4) I don't yet see why categories can't be captured within a
universal algebra, although I have no problem believing u.a.s aren't powerful
enough.   I know that it's possible to define categories
entirely in terms of morphisms, and the nodes are *identified* with the
identity morphisms (neat, I thought).   Is this perhaps a way round the
"type-checking" problems?

>  Can any one recommend a good book? 
>  What's an initial algebra? 
>  What's a co-algebra? 
>  What's a final co-algebra? 
>  What's a universal algebra?

	All good questions, and I have only [half] answered the last.

>  Quite possibly "algebra" is a term without an exact definition,
>  referring to a family of methods with a core of some kind, but shading
>  away at the edges into "analysis" on one side, and logic or model theory
>  on the other. But before taking the easy way out, I thought I'd ask
>  peoples opinions.

	I suspect that the more precise definitions came later, in the same
way that in Digitalese the word "geography" has recently acquired a firm 
definition, currently unknown by the rest of puzzled civilization.

	This is consistent also with the trend to greater and greater
formalization that has taken place within maths pretty much monotonically
up until the Hilbert programme.   For instance, Descartes' development
of algebraic geometry by inventing the Cartesian plane etc. was able to
squeeze a lot of the hand-waving out of geometry, and to lay the groundwork
for n-dimensional geometry, which was impractical with previous techniques.

	Of course these days geometry is much more fashionable than it used
to be, which is probably something to do with the arrival of computers.

Regards,
Andrew.
1401.10. "More randomness" by MOVIES::HANCOCK (Peter Hancock) Fri Apr 05 1991 14:15
re: .-1

  Thanks for the ref to 1239: it was interesting.
  I'm afraid I don't know what's meant by an "equationally defined"
  structure (I have had a hunt around some books but found nothing).
  In fact, that's more or less what started me off.
    Guess: an *equationally defined structure* is given by a set of
    operation symbols with given arities (0, 1, ..), and an equivalence
    relation between expressions constructed from them, which  satisfies
    the substitution law:
	a1 = b1   ..   ak = bk
	---------------------- where  f has arity k, a1 etc are expressions
	f a1 .. ak = f b1 .. bk
    (Maybe the equivalence relation needn't be defined between *all* expressions,
    if there are sort or type constraints of some kind other than arity.)
    The *type* of the structure is given by the operation symbols alone.
    (The least equivalence relation consistent with the substitution law
    presumably forms the *free* structure of that type.)
  Is it something like that?
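
  For what it's worth, here is how I'd render that guess as a throwaway
  Haskell check (the names Term and respectsSubstitution are my own):
  terms built from operation symbols, and a test that a candidate
  equivalence obeys the substitution rule on a finite sample of terms.

      data Term = Op String [Term] deriving (Eq, Show)

      -- Substitution (congruence) rule, checked on a finite sample:
      --   a1 = b1, .., ak = bk   entails   f a1 .. ak = f b1 .. bk
      respectsSubstitution :: (Term -> Term -> Bool) -> [Term] -> Bool
      respectsSubstitution eq sample =
        and [ eq (Op f as) (Op g bs)
            | Op f as <- sample
            , Op g bs <- sample
            , f == g
            , length as == length bs
            , and (zipWith eq as bs) ]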

  Interesting about Z_6.

  Nits 1: well, I thought that in order to talk about categories of sets,
	  and the like, you had to postulate the existence of a series of
	  "universes" closed under sufficiently many operations (swirls hands),
	  each contained by the next. The universes are: the ("large") set
	  of ("small") sets, the ("hyper-large"/"huge") set of large and
	  small sets, and so on. A large category could have a large object
	  set and large hom-sets, and so on. I didn't know it was possible
	  to just say "collections": I thought this business of universes
	  was necessary to avoid set-theory type paradoxes.
	2: True
	3: I'd remembered the "everything is a morphism" approach.
	   But then, composition isn't totally defined. If you ignore
	   the constraint for composition to be properly defined, the
	   axioms for a category are just those of a monoid, I think.
	   So I'm not sure in what sense "category" could be equationally
	   defined.

  I suppose I'll just have to find something in a bookshop to tell me
  what the horrible words mean.

  My wild-goose chase in this area was prompted by reading about so-called
  algebraic specification methods in software engineering (there is
  Digital's own "Larch", by Jim Horning and others), where you more or
  less write down a set of equations to specify a state-space and its
  operations (but there's much more to it than that). I suppose I wanted
  some perspective on it, from a mathematical point of view. [I find it
  much easier to think of describing a state-space and its operations by
  explicit models.]

  What you say about formalisation is interesting.
  I think the rise of metamathematics, as a bizarre offshoot of mathematics
  (the study of mathematically defined formal systems) has given rise to
  terrible confusion about the meaning of mathematical statements. But
  I've far overspent my pomposity  budget already..

Regards





1401.11. "Algebra is ..." by NYTP03::TJIONAS (George, NY TP Resource Center) Mon Nov 18 1991 20:56
    General definition
    ------------------
    
    Algebra is the following set:
    
    A = { E, O, R }
    
    where,
    
    E is a set of elements
    O is a set of operators you apply on the elements of E
    R is a set of rules (axioms) for using the operators
    
    You are already familiar with some algebras. Here are some examples
    of algebras:
   
    1. Arithmetic is the algebra { N, { +, -, /, * }, { Peano axioms } }
       where N is the set of natural numbers, +-/* the arithmetic operators
    
    2. Euclidean Geometry is the algebra { { Geometric elements }, {
                             Geometric operators }, { Euclidean axioms } }
	where
    	    (I think) Geometric elements are the points, lines, surfaces,
            space
    
    	    Geometric operators are line intersection, line segmentation,
            etc.
    
    3. Real Calculus is the algebra { R, { +-/* }, { axioms of real
                                     calculus} } 
    
    4. Complex calculus: similar to real calculus, over C the set of complex
       numbers instead of R the set of real numbers.
    
    5. Natural Language is an algebra { { letters in an alphabet and special
       symbols }, { operators, e.g. joining letters }, { syntax and grammatical
       rules } } 
    
    6. COBOL, Fortran, C, BASIC are algebras too
    
    7. Analytic geometry, projective geometry, Lobachevsky geometry,
       topology, functional analysis, linear algebra, Hilbert algebra,
       Banach algebra, the Greek language, the English language, etc. are
       all algebras.
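    
    (Purely as an illustration of the { E, O, R } reading above, in Haskell,
    with names invented here: a finite carrier of elements, some binary
    operators, and one rule checked by brute force.)
    
        -- E and O packaged together; a rule from R is checked separately below.
        data Algebra a = Algebra
          { elems :: [a]                      -- E: the elements
          , ops   :: [(String, a -> a -> a)]  -- O: binary operators only
          }
    
        -- Example: arithmetic mod 2.
        z2 :: Algebra Int
        z2 = Algebra [0, 1]
                     [ ("+", \x y -> (x + y) `mod` 2)
                     , ("*", \x y -> (x * y) `mod` 2) ]
    
        -- One rule from R, checked over all of E: "+" is associative.
        plusAssociative :: Bool
        plusAssociative =
          case lookup "+" (ops z2) of
            Nothing  -> False
            Just add -> and [ add (add x y) z == add x (add y z)
                            | x <- elems z2, y <- elems z2, z <- elems z2 ]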
    
    George
1401.12. "looks good, but is it too general?" by STAR::ABBASI Mon Nov 18 1991 22:47
    nice definition.
    
    can one then use this to say almost anything is an algebra?
    
    example: life is algebra, elements are the living things, operators
    are chemical and physiological interactions between them (reproduction,
    etc..), axioms are the rules of nature.
    
1401.13. "Aphorisms about mathematics" by CORREO::BELDIN_R (Pull us together, not apart) Tue Nov 19 1991 10:51
re .12               -< looks good, but is it too general? >-

A mathematician is one who knows less and less about more and more.

Mathematics is the study in which we don't know what we're talking about nor
whether what we say is true or not.

There is an inverse relation between Mathematical Generality and Deductive
Power.
   
Now _that_ is meta-mathematics, complete with self-reference.

Dick
1401.14. "Rigorous definitions is the key to algebra" by NYTP03::TJIONAS (George, NY TP Resource Center) Wed Nov 20 1991 14:11
    .12
    
    That's not too general a definition for mathematicians. The key is
    to define rigorously the three parts of it, the set of elements,
    the operators and your axioms. Then you can build your own algebra.
    All the fields in mathematics are just that.
    What the great mathematicians did is just that. They defined their
    own axioms and built on those.
    
    George
    
1401.15. "are these enough for categories?" by MOVIES::HANCOCK (Peter Hancock) Sun Jan 24 1993 19:44
Are the following axioms enough to define `category'?
(Just wondering how equational you could get them.)

The constants and operators are:

    M   - set of morphisms.
    dom - dom a is the identity morphism on the domain of morphism a
    cod - cod a is the identity morphism on the codomain of morphism a.
    ..;.. - Composition operator. Semicolon is an infix operator.
            In `a;b', `a' is applied before `b', so the argument
            positions are the reverse of those of the usual little-circle
            notation.

The axioms are (for a, b, c elements of M):

    closure :
        dom b, cod a are elements of M
        cod a = dom b => a ; b is an element of M

    identity :
        dom (cod a) = cod a
        cod (dom a) = dom a
        dom a ; a = a
        a ; cod a = a

    associativity of `;' :
        cod a = dom b /\ cod b = dom c => a ; (b ; c) = (a ; b) ; c

( /\ means `and', and => means `only if'.)

Some example definitions ...

    compat a b : cod a = dom b

    surjection m :
      for all f,g elements of M :
         compat m f /\ compat m g /\ m ; f = m ; g => f = g

    injection m :
      for all f,g elements of M :
         compat f m /\ compat g m /\ f ; m = g ; m => f = g

    ...
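
As a postscript, here is a throwaway Haskell rendering of the axioms
above, checked by brute force on a tiny example; the representation of
morphisms and all the names are invented here, not part of the axioms.

    -- Two objects 0 and 1, their identities, and one arrow F : 0 -> 1.
    -- Objects are represented by their identity morphisms, as above.
    data Mor = Id0 | Id1 | F deriving (Eq, Show, Enum, Bounded)

    domM, codM :: Mor -> Mor
    domM F = Id0
    domM m = m               -- dom of an identity is itself
    codM F = Id1
    codM m = m

    -- Partial composition, defined only when cod a = dom b.
    comp :: Mor -> Mor -> Maybe Mor
    comp a b
      | codM a /= domM b = Nothing
      | a == domM a      = Just b    -- identity on the left
      | b == codM b      = Just a    -- identity on the right
      | otherwise        = Nothing   -- no other composites exist in this example

    allM :: [Mor]
    allM = [minBound .. maxBound]

    identityLaws, assocLaw :: Bool
    identityLaws =
      and [ comp (domM a) a == Just a && comp a (codM a) == Just a
          | a <- allM ]
    assocLaw =
      and [ (comp a b >>= \ab -> comp ab c) == (comp b c >>= \bc -> comp a bc)
          | a <- allM, b <- allM, c <- allM
          , codM a == domM b, codM b == domM c ]

Both identityLaws and assocLaw come out True, but the Maybe in the type of
comp is exactly the partiality the note is asking about.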