
Talk:Affine space/Archive 1


Definition

I liked the definition -- a lot more intuitive than other more formal ones.

Thank you. I put it there. It's not my original work; it's "folk mathematics", but I like to think I can claim some credit for realizing its pedagogical superiority and putting it here. Michael Hardy 23:58, 27 November 2004 (UTC)

"Affine module"?

Is there an affine equivalent of a module (mathematics), i.e. a module that has forgotten its origin? —Ashley Y 09:48, 20 July 2005 (UTC)

Transitive vector space action

That's not enough. The real line acts transitively on a circle. Can we have a better definition back? Charles Matthews 22:09, 12 October 2005 (UTC)

Hmm, that definition has been here since the inception. Would it suffice to define it as a faithful transitive vector space action? That would rule out the circle, at any rate. -lethe talk 23:54, 31 December 2005 (UTC)
Yes, it has to be a principal homogeneous space. Actually that's the real definition ... Charles Matthews 08:53, 1 January 2006 (UTC)

Axiomatic Definition of Affine Spaces

Interestingly, there is a simple intrinsic characterization of Affine Spaces over fields other than the trivial field {0,1} and the field {0,1,2}. The case of Affine Spaces over {0,1} is separately characterized as an abelian group with idempotent elements. To arrive at an axiomatization for these, one needs an analogous concept of "Affine Group", which is discussed below. The case for {0,1,2} also requires special handling.

The operation defined by [A,r,B] = (1-r)A + rB may be completely characterized by the following properties

  • [A,0,B] = A
  • [A,1,B] = B
  • [A,rt(1-t),[B,s,C]] = [[A,rt(1-s),B],t,[A,rs(1-t),C]].

One recovers the vector space operations by designating a point, O, as the zero vector and then defining

  • rA = [O,r,A]
  • A+B = [[O,1/(1-t),A],t,[O,1/t,B]]

The latter operation requires that t be an element of the field other than 0 or 1.
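
For concreteness, here is a minimal Python sketch of the recovery of the vector space operations described above (an illustration only; the plane over the reals, the origin O = (0, 0), the choice t = 0.5 and the sample points are all made up for the example):

  def bracket(A, r, B):
      # the barycentric operation [A, r, B] = (1 - r)A + rB, applied coordinatewise
      return tuple((1 - r) * x + r * y for x, y in zip(A, B))

  O = (0.0, 0.0)   # the point designated as the zero vector
  t = 0.5          # any field element other than 0 or 1 will do

  def smul(r, A):
      # rA = [O, r, A]
      return bracket(O, r, A)

  def add(A, B):
      # A + B = [[O, 1/(1-t), A], t, [O, 1/t, B]]
      return bracket(bracket(O, 1 / (1 - t), A), t, bracket(O, 1 / t, B))

  A, B = (1.0, 2.0), (3.0, -1.0)
  print(smul(3, A))   # (3.0, 6.0), i.e. 3A
  print(add(A, B))    # (4.0, 1.0), i.e. A + B, independent of the particular choice of t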

This was initially developed in 1996 in

The characterization result and proof will both be added to this article. — Preceding unsigned comment added by 129.89.32.119 (talk) 19:00, 17 May 2006 (UTC)

Affine space before vector space

Since an affine space is like a vector space which has forgotten its origin, one should regard affine spaces as a more basic mathematical object than vector spaces. Hence, I would like a precise definition of an affine space without any reference to a vector space. Afterwards one can define a vector space as an affine space together with a choice of origin.

--Toreak 18:37, 20 October 2005 (UTC)

You need the vector space for the displacements, but an affine space is more general than a vector space because it need not be a vector space itself.--Patrick 21:42, 20 October 2005 (UTC)
I think there's something to be said for the approach Toreak proposes, but in this kind of forum we have to live with the fact that everyone's already familiar with the concept of vector space. At least that's my gut reaction; I'll say more later.... Michael Hardy 21:44, 20 October 2005 (UTC)
I propose to define affine spaces as sets where you can evaluate affine combinations. See this note about affine spaces.
--Toreak 15:40, 21 October 2005 (UTC)
If you do this, please try to be sure to put a more-or-less intuitive explanation of affine combinations before the technicalities. Michael Hardy 23:11, 21 October 2005 (UTC)
If it is an alternative approach and the definition in the current version of the article is common, the alternative can be added but should not replace the existing one.--Patrick 09:03, 22 October 2005 (UTC)
The problem may be resolved, except for affine spaces over the field {0,1,2}. See the discussion below on the axiomatic definition of affine spaces.--Mark, 17 May 2006 — Preceding unsigned comment added by 129.89.32.144 (talk) 22:11, 17 May 2006 (UTC)
The case for the field {0,1,2} requires special treatment, when adopting the algebraic approach described below, as does the case of affine spaces over {0,1}.
Another route that may be later discussed here is to note that an affine space may also be thought of as a projective geometry in which a specific "subspace at infinity" has been designated. The advantage of this second approach is that (1) projective geometry only requires 3-5 axioms, depending on whether you want to include Desargues' Theorem and the Theorem of Pappus, and (2) the underlying field is internally constructible and does not need to be assumed at the outset. In dimensions more than 2, Desargues' is provable, and Pappus' Theorem is only required to show that products in the underlying field commute.
The most likely route to this approach is to split the "point" concept in Projective Geometry to "point" and "direction"; and split the "line" concept to "line" and "horizon". Then each of the axioms needs to be divided into the various special cases (some becoming redundant or reducible to less generalized forms). For instance, the axiom that asserts that every line has 3 points becomes 2 axioms: (a) every line has 2 points and a direction; (b) every horizon has 3 directions on it. The axiom that asserts (c) two points uniquely determine a line now ramifies into special cases, (d) given a point and a direction, there is a unique line containing the point lying in the given direction, and (e) given two directions, there is a unique horizon which these directions lie on.
This may be brought into more conventional terms by replacing the "direction" concept by that of the "parallelism" equivalence relation; and replacing the "horizon" concept by "plane". The only drawback to this general approach is that Projective Geometry normally assumes its spaces are of 2 or more dimensions, so you lose the 1-dimensional Affine Spaces with this approach (as well as the 0-dimensional space).
All you're really doing is writing down postulates for Descriptive Geometry and then exploiting its capability of internally defining a field to construct an affine space structure on top of it. -- Mark, 19 May 2006 — Preceding unsigned comment added by 129.89.32.144 (talk) 22:49, 19 May 2006 (UTC)

Generalization to "Affine" Groups

Just as an affine space may be thought of as a vector space in which the zero vector is no longer distinguished, one has an analogous concept for groups, where the identity element is no longer distinguished. Unfortunately, the name "affine group" is not possible, since that term is already being used for something else; hence the quotes.

An "affine" group is defined by a ternary operation a/b.c, subject to the axioms

  • a/a.b = b
  • a/b.b = a
  • a/b.(c/d.e) = (a/b.c)/d.e

It is abelian if, in addition,

  • a/b.c = c/b.a.

For abelian "affine" groups, the first two axioms are equivalent, but the third remains independent as can readily be seen by a suitable counter-example for a 2-element "affine" group.

In a group, the ternary operation corresponds to a/b.c = ab^{-1}c. Here, the group operations are recovered by first selecting an element E to call the identity and then defining ab = a/E.b and a^{-1} = E/a.E. This defines the group "localized at E".

The group 'associated' with an "affine" group may be recovered by the equivalence relation (a,b/c.d) ~ (c/b.a,d). The corresponding equivalence class [(a,b)] defines an operation a\b which thus satisfies the identity a\(b/c.d) = (c/b.a)\d. One may then define the product by (a\b)(c\d) = a\(b/c.d) and prove that a\a is the identity and the inverse of a\b is b\a. The resulting group is isomorphic to the group localized at any element E.

An affine space over the field {0,1} is an "affine" group in which a/b.a = b. — Preceding unsigned comment added by 129.89.32.119 (talk) 19:00, 17 May 2006 (UTC)
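
To make the ternary operation concrete, here is a small Python sketch (an illustration only, using the additive group of integers, where a/b.c = a - b + c, and an arbitrarily chosen element E = 7 as the identity):

  def tern(a, b, c):
      # the ternary operation a/b.c; for integers under addition this is a - b + c
      return a - b + c

  samples = range(-3, 4)
  # spot-check the three axioms on a small range of values
  assert all(tern(a, a, b) == b for a in samples for b in samples)            # a/a.b = b
  assert all(tern(a, b, b) == a for a in samples for b in samples)            # a/b.b = a
  assert all(tern(a, b, tern(c, d, e)) == tern(tern(a, b, c), d, e)
             for a in samples for b in samples for c in samples
             for d in samples for e in samples)                               # a/b.(c/d.e) = (a/b.c)/d.e

  E = 7                                 # any element may be chosen as the identity

  def mul(a, b):
      return tern(a, E, b)              # ab = a/E.b

  def inv(a):
      return tern(E, a, E)              # a^{-1} = E/a.E

  print(mul(3, 5))        # 1, the "product" in the group localized at E = 7
  print(mul(3, inv(3)))   # 7, i.e. the chosen identity E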

These sorts of algebraic generalizations of affine spaces have a distinguished history. See
Baer, R., Zur Einführung des Scharbegriffs, J. Reine Angew. Math. 160 (1929), 199-207
Certaine, J., The ternary operation (abc)=ab-1c of a group, Bull. Amer. Math. Soc. 49 (1943), 869-877. Michael Kinyon 18:13, 5 September 2006 (UTC)

Affine n-space

I was redirected here from the algebraic variety article when I wondered what affine n-space was. Either the link from the aforementioned article was to the wrong article (what would the correct link target be?) or perhaps there should be a sentence or two about that notion? 128.135.100.161 16:09, 9 December 2006 (UTC)

Intrinsic Definition section

I am bothered that the section entitled Intrinsic definition of affine spaces seems to be original research based on some (obviously unrefereed) posting to the sci.math.research newsgroup. However, I have no objection to a section giving an intrinsic definition or description of an affine space, and in fact, there is a published source upon which one could draw, namely

Bertram, Wolfgang, From linear algebra via affine algebra to projective algebra, Linear Algebra Appl. 378 (2004), 109-134.

A preprint version under a different title is also available at the Jordan Theory Preprint Archive. The axioms in Bertram's paper seem more intuitive (to me, at least) than the ones in this article. However, I hesitate to be bold, because the section was added last May without anyone objecting, so maybe my concerns are misplaced. Any thoughts on this? Michael Kinyon 23:41, 3 September 2006 (UTC)

I've tagged that section for cleanup because of the notation. For example, we see this:
[A,m,B,s,C = [C,1-ms,B,(1-m)/(1-ms),A if ms is not 1.
But we should see something more like this:
[A, m, B, s, C] = [C, 1 − ms, B, (1 − m)/(1 − ms), A] if ms is not 1.
Among the differences: proper spacing, italicization of variables (but not digits and not brackets, etc.), proper minus signs instead of stubby little hyphens, right brackets to match left brackets. This sort of stuff runs through the whole section. Michael Hardy 19:14, 18 December 2006 (UTC)
I've put most of the eqns inside <math> tags, which fixes some formatting problems. Eqn 3' is suspect, but I don't know enough about the topic to know what it should be. The whole section could do with a lot of cleanup, so tags still remain. --Salix alba (talk) 15:15, 24 January 2007 (UTC)

Theta:SxS to V such that (a,b) goes to a-b

In the current article (Jan 7, 2006) the formal definition says S is a set and Theta sends (a,b) in SxS to a-b in V. Then S is meant to be a subset of V? It doesn't say so, and I'm confused. Please explain, I want to learn this! -Richard Peterson

S is not a subset of V: S contains points, V contains vectors, these are not considered the same.--Patrick 11:55, 8 January 2006 (UTC)
 For a precise definition of affine space, the set S should be described.-Richard Peterson

RP has a good point. The minus sign is usually shorthand for a function X x X -> X. In an affine space, it's used in a very nonstandard way, as it takes points from one space (the affine space) to a different space (the vector space). Saying that Theta(a,b) = a-b is simply a tautology; "a-b" has absolutely no meaning other than being equal to Theta(a,b). Presenting subtraction as if it is somehow an intrinsic property of the affine space separate from Theta is deceptive.--69.107.121.247 01:55, 2 October 2006 (UTC)

I added the cleanup tag. The "faithfully homogeneous space" gobbledygook stuff is a whitewash of the fact that it's not understood: learned authority in place of reason. The "Theta:SxS to..." business doesn't make sense. This has been unsatisfactory for more than a year. If the writers do know how to fix it, fine. Otherwise let's put on an experts-needed tag. Rich 18:45, 16 April 2007 (UTC)
Yes well placed. You might find this pdf slideshow of help. It explains things very well. --Salix alba (talk) 22:49, 16 April 2007 (UTC)

Confusing section 'Informal descriptions'

Hi - I found it very hard to understand the section 'Informal descriptions'. I think it confuses points and vectors, even though the distinction between the two is one of the important aspects of affine spaces. In fact I had to go elsewhere to study 'affine spaces' before I was able to go back and understand that section - and even then I think it only barely passes as meaningful.

Anyway, for my own record I wrote up an informal description, which you are free to use as you like:

An affine space distinguishes itself from a vector space by not requiring a notion of absolute position of points (and thus no notion of an origin). Instead one can speak of the relative position of one point to another. The difference between two points is a vector, and such a vector is (in general) something different from a point. In fact, it is an element of some vector space associated with the affine space. A point and any vector may be added to produce a new point. Addition of two points, however, is generally not defined on an affine space.

Any vector space is an affine space with itself as the associated vector space, but not all affine spaces are vector spaces. One example is the real numbers, R, shifted by some unknown but fixed value k; call this space K. Points are then elements of the form (x+k) where x is a real number. Since (a+k) + (b+k) = a+b+2k cannot be expressed as (x+k) without knowing the value of k, addition is not defined and thus K is not a vector space (unless one is willing to consider more exotic definitions of addition). It is, however, an affine space with R as the associated vector space, since (a+k) - (b+k) = a-b is a real number and (a+k) + b = ((a+b)+k) is of the form (x+k), and therefore a point in K.

Even though one cannot add points in an affine space, one may choose some point O as a pro forma origin and then define addition of points Pi as O + SUM(Pi-O), but of course the result will depend on the choice of O. On the other hand, if one instead considers linear combinations as in O + SUM(ci(Pi-O)) then the result will be independent of O if and only if SUM(ci) = 1. In general any linear combination where the sum of the coefficients is 1 is called an affine combination.

Please excuse my ASCII math ;-) Kristian 00:13, 19 July 2007 (UTC)
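
The shifted-reals example above can be played with directly. Below is a minimal Python sketch (not part of Kristian's text; the class name Point and the sample values are invented for the illustration) in which a point of K is stored only through its known part x, so point - point yields a plain real number, point + vector yields a point, and point + point is rejected:

  class Point:
      """An element of K, i.e. a quantity of the form x + k for an unknown fixed offset k."""
      def __init__(self, x):
          self.x = x                       # only the part without k is known

      def __sub__(self, other):
          if isinstance(other, Point):
              return self.x - other.x      # (a + k) - (b + k) = a - b, a vector (plain real)
          return Point(self.x - other)     # point minus vector is again a point

      def __add__(self, v):
          if isinstance(v, Point):
              raise TypeError("point + point is undefined: (a+k)+(b+k) = a+b+2k is not of the form x+k")
          return Point(self.x + v)         # point plus vector: (a + k) + b = ((a + b) + k)

      def __repr__(self):
          return f"({self.x} + k)"

  P, Q = Point(2.0), Point(5.0)
  print(Q - P)      # 3.0, a vector
  print(P + 3.0)    # (5.0 + k), a point
  # P + Q           # would raise TypeError: addition of points is not defined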

Another characterization (?)

Does anyone else think this section should be deleted? The "axioms" describe real affine 3-space, so not only is the construction strange (and assumes we know what point, line and plane are--might risk circular logic) but it is very weak, and probably belongs more to a full course on classical geometry. MotherFunctor 07:51, 29 April 2007 (UTC)

That was my immediate reaction when I saw it, and in fact I came to this page meaning to make the same proposal, so was happy to find yours. The reference [1] it cites is to p.192 of Coxeter, which is the second page of Chapter 13, "Affine Geometry". There is no mention of these axioms or of David Kay that I could find anywhere in the book, so the reference is bogus. My guess is that it should cite David Kay's 1969 text "College Geometry". More important than the incorrect citation, however, is that this is an embarrassingly badly organized account of synthetic affine geometry as the flip side of analytic affine geometry. The entire article should divide at the root into analytic (Cartesian) and synthetic approaches to affine geometry. Since the dominant mode of affine geometry today is analytic (in part because it handles d dimensions as gracefully as two and three), the synthetic portion need not be as comprehensive as the analytic. However the division should be made up front, and the synthetic approach be given its due as appropriate for an encyclopedia article. Even if 3D affine geometry is sufficient for the synthetic approach, it should consist of more than a mere listing of axioms of questionable interpretation and completeness. For example do Kay's axioms imply that any three out of four noncoplanar points are noncollinear or is the reader supposed to take that for granted, etc. etc.? --Vaughan Pratt 21:24, 11 September 2007 (UTC)

"An affine space is a vector space that's forgotten its origin"

I think this is wrong, or at least confusing. Vector spaces are the spaces in which vectors live; in a pure vector space, there is no definition of the concept of a point, let alone an origin. Even the wikipedia article on vector spaces has no mention of points or origins. Just because the most common vector spaces we use are affine spaces in an affine frame doesn't mean that a vector space itself has the concept of a frame. 128.16.9.29 09:26, 10 October 2007 (UTC)

Except that when you call a set with a structure on it a "space", you often call its members points, and people speak of "a point in a vector space", meaning of course a vector. The zero element of a vector space is often called the origin. Michael Hardy 22:43, 10 October 2007 (UTC)

"The smith and jones story in the informal section is just wrong"

The a and b described in the story are not vectors in the first place: position 'vectors' are not vectors at all; they don't transform in the right way. —Preceding unsigned comment added by 128.230.72.213 (talk) 22:10, 12 April 2009 (UTC)

In effect they become vectors if you choose one point to be the origin. The point is that although they are not vectors, since there's no privileged point serving in the role of an origin, nonetheless not all of the structure of the vector space is lost: you can still find combinations of points in which the sum of the coefficients is 1. Michael Hardy (talk) 00:22, 13 April 2009 (UTC)

Better variable names

Instead of the map written as (a,b) → a + b, wouldn't it be much clearer to use (v,a) → v + a?

ThinkerFeeler (talk) 06:39, 9 June 2009 (UTC)

p-affine space?

The article states:

If the sum of the coefficients in a linear combination is 1, then Smith and Jones will agree on the answer!

or more formally:

An affine combination of vectors x1, ..., xn is a linear combination a1x1 + ... + anxn in which the sum of the coefficients is 1, thus: a1 + ... + an = 1.

My question is this: is there a generic name for a combination where

a1^p + ... + an^p = 1,

and would such a thing be called an "affine p-space", or what? Or is it the case that only the p=1 space has properties which disappear for p not one? (It should be clear that the case p=2 shows up in a zillion different areas of math; I haven't yet seen it be given a distinct name of its own.) linas 15:59, 5 November 2005 (UTC)
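
A quick numerical experiment (my own sketch, with made-up sample points; it does not answer the naming question) shows why p = 1 is the special case: a combination taken relative to an origin is independent of that origin exactly when the coefficients sum to 1, while coefficients whose squares sum to 1 generally give an origin-dependent result:

  import numpy as np

  def combine(points, coeffs, origin):
      # evaluate origin + sum_i c_i (P_i - origin)
      origin = np.asarray(origin, dtype=float)
      return origin + sum(c * (np.asarray(p, dtype=float) - origin)
                          for c, p in zip(coeffs, points))

  pts = [(1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]

  affine = (0.2, 0.3, 0.5)                 # coefficients summing to 1
  print(combine(pts, affine, (0, 0)))      # [1.7 1.1] ...
  print(combine(pts, affine, (10, -7)))    # [1.7 1.1], the same for any origin

  p2 = (0.6, 0.8, 0.0)                     # squares sum to 1, but the plain sum is 1.4
  print(combine(pts, p2, (0, 0)))          # [0.6 1.6]
  print(combine(pts, p2, (10, -7)))        # [-3.4  4.4], so the answer depends on the origin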

Put the norm in and you have a totally different ball game. Charles Matthews 18:22, 5 November 2005 (UTC)
You're applying a norm to a scalar? Where did you get this "p-affine space" idea? 190.224.94.57 (talk) 00:37, 11 June 2009 (UTC)

Informal tone

I don't think the tone is up to scratch. I understand this is mainly the informal section, but generally that means not rigorous rather than non-encyclopedic. I don't understand the subject well enough to do too much, however. Wolfmankurd (talk) 11:34, 9 July 2010 (UTC)

Number Line Example

In the examples section, I don't see how children doing math on a number line relates to affine spaces. If someone explains it to me I'll update the article to use that example but be more clear. Nick Garvey (talk) 15:32, 22 February 2013 (UTC)

Inaccurate and confusing

This article needs a complete rewrite. A vector space in mathematics does not have an origin, so that characterisation of an affine space is just gibberish. It appears that the authors have confused vectors with displacement vectors, or vector space with a vector field (a vector valued function over R^n). Please get someone who knows what he is talking about to sort it out. RQG (talk) 06:28, 6 July 2014 (UTC)

A vector space in mathematics does have an origin! Indeed, it is in particular a group (additive), and as such it has a neutral element; this is the origin. In order to confuse "vectors with displacement vectors" you first have to confuse a vector space with an affine space. Look closely at the definitions. In math, formally, only definitions define, not intuition, nor applications. Boris Tsirelson (talk) 08:10, 6 July 2014 (UTC)

You are confusing the identity under addition with the origin. Those are different things. Perhaps you are confusing vectors with position vectors. All sorts of things can be vectors in mathematics, e.g., functions of real or complex variables, states in quantum mechanics, but an origin specifically refers to coordinates in space. You surely can't be saying that in affine space there is no identity under addition. That would be even worse!!!RQG (talk) 12:34, 7 July 2014 (UTC)

Surely I am saying that in affine space there is no identity under addition. Indeed, there is no addition for points of an affine space. In contrast, there is addition for points of a vector space. I do not know what is worse for you, but it would be better if you pay more attention to the definitions, and leave your idea of "vectors or position vectors", since this idea does not apply directly to the matter. Boris Tsirelson (talk) 12:41, 7 July 2014 (UTC)
Or, alternatively, tell us what you mean by "vectors and position vectors", and I'll tell you how it relates to vector spaces and affine spaces. Boris Tsirelson (talk) 12:53, 7 July 2014 (UTC)

By a vector I mean a member of a vector space, for which the definition is as given in wikipedia. No such thing as an origin, and no such thing as a space from which the origin can be forgotten. But you are saying we must forget the identity, which basically means we are left with nothing meaningful at all; we certainly don't get A from V by doing that. By an affine space, you mean both the set A and the vector space V. But neither the set A nor the origin in A has anything to do with the definition of a vector space in the first place. So you pay attention to the definitions. Conflating A and V is of no help whatsoever, but that is what this article does throughout. (A position vector is simply a displacement vector from the origin, but again this refers to a Euclidean space and is not the meaning of vector space). RQG (talk) 13:35, 7 July 2014 (UTC)

"Conflating A and V is of no help whatsoever, but that is what this article does throughout" — why not? Each element of A leads naturally to such conflating: I really do not know why this is "of no help whatsoever" to you. Of course, no such conflating is canonical; it depends on the choice of a.
If you do not like to call the neutral element of V the origin, well, do not; but many people like to call it so. Tastes differ. Anyway, you cannot add two elements of A, and no point of A differs in its properties from any other. By the way, many people like to treat words "vector" and "point" as synonyms when dealing with a vector space. But a point of an affine space definitely should not be called a vector.
"A position vector is simply a displacement vector from the origin, but again this refers to a Euclidean space" — I do not know, do you mean here a Euclidean vector space or a Euclidean affine space. The former has the origin, the latter has not. Boris Tsirelson (talk) 18:31, 7 July 2014 (UTC)
Hmmm, after looking at "Euclidean vector#Overview" I see that this is a problem of different terminology in math and physics. "When it becomes necessary to distinguish these special vectors from vectors as defined in pure mathematics..." We in math hardly use such terms as "position vector" and "displacement vector". If you mean the article could be closer to the physical terminology, try to do so. Boris Tsirelson (talk) 19:08, 7 July 2014 (UTC)
And by the way, it is written in the lead to "Euclidean vector": "...vectors as elements of a vector space. General vectors in this sense are fixed-size, ordered collections of items as in the case of Euclidean vectors, but the individual items may not be real numbers". Surely not! General vectors in this sense are not at all collections of items. Terminological mismatch. Boris Tsirelson (talk) 19:14, 7 July 2014 (UTC)

It is not a matter of taste. It is a matter of using mathematical definitions. Otherwise it is just crank theory. But affine space is not crank theory, it is a genuine mathematical space, subject to mathematical definitions according to which points are not vectors and the identity element is not the origin. This article does not describe it. We in mathematics do not abuse language like that. And yes, a vector space can be defined over any field, not just the reals. A "collection of items" may be a weak way of saying ordered n-tuple, or of describing a function, but it is not actually wrong. This is not simply a terminological mismatch, it is that the article uses incorrect mathematical terminology so that it is not possible to see what it is saying.RQG (talk) 21:02, 7 July 2014 (UTC)

"affine space ... is a genuine mathematical space, subject to mathematical definitions according to which points are not vectors" — True.
"and the identity element is not the origin" — False.
How could you define "the identity element" of an affine space?? Just an example: an affine subspace of a vector space in general does not contain the origin (or call it the identity element if you like) of the vector space; where do you see "the identity element" of this affine space?
"A "collection of items" may be a weak way of saying ordered n-tuple" — No, you do not get my point! Where did you see "ordered n-tuple" or anything like that in the definition of a vector space? To this end you need first to choose a basis in the given vector space, and then introduce coordinates. But this is an additional construction, beyond the definition, and breaking the symmetry (under the full linear group). Boris Tsirelson (talk) 06:07, 8 July 2014 (UTC)

You need to make up your mind. Earlier you said "a point of an affine space definitely should not be called a vector" and "you cannot add two elements of A", just as the article said "you cannot add points", but you can add the identity element to any vector precisely because the identity is a vector, and you cannot remove it from a vector space because it would no longer be a vector space. These are defining properties which you cannot ignore, just as you cannot ignore a mathematical definition of affine space and use instead some vague and inaccurate notion based on not understanding the mathematical definition of vector space. That is what the informal description was doing. The existence of a basis for a vector space is just about the first theorem you study, and requires no extra structure whatsoever. Every vector space has a basis, so that every vector has a coordinate representation in terms of a basis. I think, however, it is time to stop using wikipedia as a discussion forum, or for elementary lessons in vector spaces. RQG (talk) 08:11, 8 July 2014 (UTC)

Right; it appears we fail to understand each other. A pity. Happy editing. Bye. Boris Tsirelson (talk) 08:49, 8 July 2014 (UTC)

It all seems like a fairly straightforward textbook description of an affine space. Although, if you come from another discipline with its own accent on mathematical ideas it might look different. If discussion about rewording went any further, it'd be nice to proceed one point at a time. Rschwieb (talk) 17:01, 8 July 2014 (UTC)

Removing the story

I vote to put back the informal description which MarSch removed. Oleg Alexandrov 19:11, 7 Apr 2005 (UTC)

Wikipedia has a reputation for articles only mathematicians can understand. And removing this kind of story, and also striving for more general approaches, makes an article maybe better for mathematicians (of which I am not sure) and, I think, worse for everybody else. That the story is duplicated is no big deal; it has a useful purpose in both articles. Oleg Alexandrov 19:17, 7 Apr 2005 (UTC)

I agree that the story is better as an informal description than anything else I've seen in this article. And it is misleading to say that physics sees physical space as an affine space, because the affine structure fails to indicate, for example, which lines are at right angles to each other. Michael Hardy 21:13, 7 Apr 2005 (UTC) ... and I would add that John Baez, the mathematical physicist, loved the story. Michael Hardy 21:15, 7 Apr 2005 (UTC)

I have been thinking about physical space and you have a point: it is an affine space with a metric and conformal structure (angles). But it is still also an affine space, and the only motivation thus far that this article has ever seen. Thus I think we should put this back in. MarSch 13:11, 12 Apr 2005 (UTC)

I wish the story would concentrate on subtraction of points to give vectors, as per the definition. Now it focuses on adding points in an affine way, which I have put into formal form in the second example. I find this a bit confusing. Further, I disagree with the use of a formula for one side of the story and not the other, making it more difficult to compare. Mentioning John Baez serves no purpose whatsoever, and saying that the proof is a routine exercise is also garbage: cancelling the origin yields the result. Okay, so Smith and Jones know how to "add" two points. Does that have any meaning? Well yes, it is the halfway point. And what is that? It is the point p such that a-p=p-b. Or (a-b)/2=a-p. And why did you put _every_ tex formula on a separate line? That is really ugly and difficult to read. -MarSch 14:29, 8 Apr 2005 (UTC)

I just noticed this claim that if you add two points, you get the halfway point. That is wrong. Which point you get depends on which point is chosen as the origin or zero point. Michael Hardy (talk) 23:25, 16 July 2014 (UTC)

Tex formulas have size and font that are different from the regular text. I think that putting them on the same line is ugly.--69.107.121.247 01:48, 2 October 2006 (UTC)

"Forgotten which point is the origin": gibberish or functor?

"An affine space is what is left of a vector space after you've forgotten which point is the origin" — this phrase, recently deleted by User:RQG as "just gibberish" is in fact about the forgetful functor from the category of vector spaces to the category of affine spaces.

"Clearly every vector space has an underlying affine space (and every linear map is affine linear), giving a forgetful functor U:Vect→Aff." Quoted from: nLab:affine space. Boris Tsirelson (talk) 10:28, 8 July 2014 (UTC)

While I know colloquial descriptions like this are sometimes not totally accurate, this one is very useful. I think we should keep the phrase. Rschwieb (talk) 16:56, 8 July 2014 (UTC)
The textbook Abstract Algebra with Applications: Volume 1: Vector Spaces and Groups says "Intuitively, an affine space is a vector space without the origin specified." I think there is enough evidence of this point of view in reliable sources that it is due weight to include it in the article. We use the same sort of intuitive characterization for a heap. --Mark viking (talk) 18:18, 8 July 2014 (UTC)
Wow, I did not hear about heaps. Nice to know. Thanks. Boris Tsirelson (talk) 20:17, 8 July 2014 (UTC)

Anyone who does not understand that an affine space is what is left of a vector space after you've forgotten which point is the origin just doesn't understand what an affine space is. If you take an affine space and pick any point at all within it to serve as the origin, then you have a vector space. Michael Hardy (talk) 04:49, 9 July 2014 (UTC)

A wikipedia article should explain, not confuse. The number of complaints on this page shows that this article does confuse, and needs substantial change. In fact it is the other way around. Anyone who thinks that this description makes sense does not understand the mathematical definition of a vector space. Let us do what you say. Take an affine space (A,V). Choose a point a as origin. Then you have a particular representation of A, together with a vector space V. You do not have V=(A,V), which is what you are saying, and which is patent nonsense. To get the vector space from an affine space you have to forget A, not choose an origin. The origin is a point in A. It is not in V. If you start with only a vector space V, you have no origin to forget. Sure you can add structure, and construct an affine space from V, but that is not what has been said. If the intuitive definition said you get an affine space by adding translations to a vector space, that might be ok. But it doesn't. It asks you to forget something which has already been forgotten in the definition of a vector space. RQG (talk) 23:19, 14 July 2014 (UTC)
Yes, "Take an affine space (A,V). Choose a point a as origin." And now, introduce the addition into A as follows: x+y is, by definition, the affine combination x+y-a. (It is well-defined, being an affine combination, that is, sum of coefficients equals 1.) Do not pretend you always think too much formally. We are not computers.Boris Tsirelson (talk) 07:20, 15 July 2014 (UTC)
To be more formal (if still needed): x+y-a may be defined as x+(y-a), where y-a is the element of V that maps a to y, and x+(y-a) means: y-a acts on x (giving another element of A). Boris Tsirelson (talk) 07:51, 15 July 2014 (UTC)
Adding more confusion does nothing to clarify; confusing points in A with vectors in V, as the article does throughout, only clarifies that it needs to be rewritten. Even if you do add this addition, it is not a vector addition, so once again you are just adding more confusion. Actually, in mathematics I do think formally, since otherwise it is not really mathematics at all. For example, if you study either of the formal definitions of affine space, they make complete sense and show that the informal description is plain wrong. RQG (talk) 14:58, 15 July 2014 (UTC)
"Even if you do add this addition, it is not a vector addition" — ?? It is a vector addition, in the sense that it does turn A into a vector space. (Is there any other reasonable meaning?) Boris Tsirelson (talk) 18:45, 15 July 2014 (UTC)
So what you are saying is you want to add more structure, making things more complicated, when we already have the subtraction map to get V from A, which already does everything we need to do, and at the same time you want to support the original claim, in which none of this extra structure was defined? The point is that an intuitive description ought to explain the idea to someone not familiar with it, not just make sense to someone who already understands it and who knows how to fill in the bits not given in the intuitive description.RQG (talk) 20:07, 15 July 2014 (UTC)
btw the reason A is not yet a vector space is that you have not defined multiplication by a scalar. Please don't bother. Even if you can show that there is no conflict between the subtraction map x - y and the convention x - y = x + (-1)y it is not interesting. The point was you cannot just say it is a vector space without first defining these things, and they are not already defined. The fact that you had to start defining things proved my point. In any case the requirement is to have a wikipedia article which is not filled with inaccuracy and confusion. We do not have that currently, and we do not need an even more complicated discussion on the talk page. RQG (talk) 21:11, 15 July 2014 (UTC)
@RQG: Once you've seen how addition gets defined after you've chosen one point of the affine space to serve as the origin, it's pretty clear how multiplication by a scalar should be defined. Michael Hardy (talk) 00:41, 17 July 2014 (UTC)
You ask me not to bother; OK, (you do not bother :-) ,) I do not. Boris Tsirelson (talk) 18:23, 16 July 2014 (UTC)

RQG says a vector space has no origin to forget. If one prefers to call it the zero vector rather than the origin, must that prevent one from understanding what is written by someone who calls it the origin? Certainly it's an essential part of the structure of a vector space. Michael Hardy (talk) 23:09, 16 July 2014 (UTC)

Even though OpenGL actually runs on projective 3-space and makes this unnecessary, people will give real 3D coordinates for models, and in trying to implement a rotation about a point learn the habit of translating the model by the negative of its center, performing the rotation, then translating the rotated model by the original center. That is to say that treating an affine space as a vector space with a movable origin is quite natural for anyone supplied only a vector space but presented a geometric task to perform in that space. The operation taking a scalar m and a vector s in the vector space with origin a to a + m(s - a) is a suitable scalar multiplication, rediscovered by anyone who learns that trick of performing affine maps by conjugating a linear map by a translation. In thinking that is a conflict, you are supposing the change of origin must be a homomorphism of vector spaces, which you are confused in thinking. Instead we have the new addition x + y - a, with identity a. This yields s + (a + (-1)(s - a)) - a = a, verifying that scalar multiplication by -1 will send an element to its inverse under the new vector space's addition. Yes, it's more complicated than the synthetic or homogeneous space pictures, but it's commonplace. Further, you say that the problem is that none of what we're describing here in support of the statement is in the article, but that's not a reason to remove the statement, that's a reason to expand on it. References: Definition of affine combination relative to an origin vector in a computer graphics textbook.[1] "Wherever an expression like [...] appears, what we really mean is [...], where [...] is the origin of the affine frame."[2] LokiClock (talk) 21:00, 16 July 2014 (UTC)
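
A minimal numpy sketch of the translate-rotate-translate habit described above (an illustration only; the 2D setting, the center, and the quarter-turn angle are made-up values): it conjugates the linear rotation by a translation.

  import numpy as np

  def rotate_about(point, center, theta):
      # shift the center to the origin, apply the linear rotation, shift back
      R = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
      return center + R @ (point - center)

  center = np.array([2.0, 1.0])
  p = np.array([3.0, 1.0])
  print(rotate_about(p, center, np.pi / 2))   # approximately [2. 2.]: p swung a quarter turn about the center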

The location of the gap in RQG's understanding?

Quote from "RQG":

"Take an affine space (A,V). Choose a point a as origin. Then you have a particular representation of A, together with a vector space V. You do not have V=(A,V), which is what you are saying, and which is patent nonsense. To get the vector space from an affine space you have to forget A, not choose an origin. The origin is a point in A. It is not in V."

End of quote from RQG

This says "Take an affine space (A,V)". I take that to mean A is a set and V is a vector space that acts transitively on that set in a way that satisfies certain desiderata. RQG seems to say that if you then delete A from this structure, you're left with V, so that an affine space is something more than a vector space: If you start with an affine space and discard part of the structure, you're left with a vector space. That is consistent with at least this much of the way I originally learned it: An affine space has an underlying set A and some vector space that acts on A in a certain way. But this notion that an affine space is (A,V) where A and V satisfy certain conditions and are related in certain ways is only one way of encoding the concept of affine space. There are others. One of those others goes like this:

  • A vector space involves an underlying set V whose members are called vectors, and a field F whose members are called scalars, and an operation of linear combination by which one takes scalars s1,...,sn and vectors v1,...,vn and gets a vector s1v1 + ... + snvn, and this operation of linear combination satisfies certain algebraic laws.
  • An affine space involves an underlying set A whose members we will call "points", and a field F whose members are called scalars, and an operation of affine combination by which one takes scalars s1,...,sn satisfying s1 + ... + sn = 1, and points p1,...,pn, and gets a point s1p1 + ... + snpn, and this operation of affine combination satisfies certain algebraic laws.

This is demonstrably equivalent to the "(A,V)" characterization of affine spaces. I leave the proof of equivalence to RQG as an exercise. And any undergraduate reading this may also find it useful to go through this exercise. By this second characterization of the concept of affine space, a vector space is an affine space with this bit of additional structure: One chooses some point which we will call 0 to serve as the origin or zero or whatever you want to call it, and then one can define a linear combination s1p1 + ... + snpn in which s1 + ... + sn need not add up to 1 by saying that it is

s1p1 + ... + snpn + (1 − s1 − ... − sn)0.

Viewed in that way, a vector space is an affine space with some additional structure. And this way of viewing it is demonstrably equivalent to the "(A,V)" point of view. Michael Hardy (talk) 22:58, 16 July 2014 (UTC)
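
A small Python sketch of the second characterization (an illustration only; the plane, the chosen zero point and the sample coefficients are arbitrary): once a point is chosen to be called 0, an arbitrary linear combination is evaluated by padding the coefficients so that they sum to 1.

  import numpy as np

  def affine_combo(coeffs, points):
      # only legitimate when the coefficients sum to 1
      assert abs(sum(coeffs) - 1) < 1e-12
      return sum(c * np.asarray(p, dtype=float) for c, p in zip(coeffs, points))

  def linear_combo(coeffs, points, zero):
      # s1 p1 + ... + sn pn, defined as the affine combination
      # s1 p1 + ... + sn pn + (1 - s1 - ... - sn) 0
      pad = 1 - sum(coeffs)
      return affine_combo(list(coeffs) + [pad], list(points) + [zero])

  zero = (5.0, 5.0)                            # an arbitrary point chosen as the origin
  p, q = (6.0, 5.0), (5.0, 7.0)
  print(linear_combo((2, 3), (p, q), zero))    # [ 7. 11.], i.e. zero + 2(p - zero) + 3(q - zero)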

Forgetting structure

"An affine space is a vector space that has forgotten its origin" is the quickest way to get the idea across. It's not precise, but any introduction to affine spaces must say something like this, admit it's a bit vague, then explain the idea more precisely.

You can define an affine space as a pair (V,A), and then the forgetful functor from affine spaces to vector spaces sends (V,A) to V. But you can also define an affine space in various equivalent ways that don't invoke the concept of vector space. Michael Hardy gave one, and here are more:

  • Affine space: definitions, nLab.

All these different approaches give equivalent categories. In every approach there is a forgetful functor F from vector spaces to affine spaces. In any approach, given two vector spaces V and W, an affine map f from F(V) to F(W) is F of some linear map if and only if f maps the origin of V to the origin of W.

This is the precise sense in which the affine space F(V) has "forgotten the origin" of V: the affine maps no longer need to preserve the origin, and if they do they come from linear maps.

The term forgetful is not usually given a precise definition. However, the functor F forgets structure in a precise sense: it's not full. For a full functor, every morphism in the target is the image of some morphism in the source. The functor from abelian groups to groups is full, so it doesn't forget structure; the functor from groups to sets is not full, so it does forget structure.

John Baez (talk) 01:09, 17 July 2014 (UTC)
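
A one-line illustration of that failure of fullness (my own sketch over the real line): the translation below respects affine combinations, so it is an affine map between the underlying affine spaces, but it moves the origin and hence is not the image of any linear map.

  def f(x):
      return x + 1.0                # a translation of the real line

  # f commutes with combinations whose coefficients sum to 1 ...
  print(f(0.3 * 2.0 + 0.7 * 10.0))            # 8.6 (up to rounding)
  print(0.3 * f(2.0) + 0.7 * f(10.0))         # 8.6 (up to rounding), the same point
  # ... but it is not linear, since it does not send 0 to 0
  print(f(0.0))                               # 1.0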

Thank you John. I hope this will clarify that the forgetful functor forgets A, not V, and that the description as previously given was not made adequately precise, and clarifies that an affine space has more structure than a vector space.RQG (talk) 05:20, 17 July 2014 (UTC)
No, you have it backwards. The functor from vector spaces to affine spaces is the forgetful functor. Affine spaces have less structure (and so more morphisms) than vector spaces. Sławomir Biały (talk) 15:59, 17 July 2014 (UTC)
Don't just take my word for it. Read what John Baez said again, and most especially think about the actual mathematical definition. RQG (talk) 17:39, 17 July 2014 (UTC)
it's you that needs to read it again. The functor from affine spaces to vector spaces is full, not forgetful. The functor from vector spaces to affine spaces is not full, hence forgetful. Sławomir Biały (talk) 18:09, 17 July 2014 (UTC)
John said "You can define an affine space as a pair (V,A), and then the forgetful functor from affine spaces to vector spaces sends (V,A) to V." which is also what I say. Clearly (V,A) has more structure than just V, because we have translations in addition to the transformations on V, and the ability to perform transformations on V centred on any point.RQG (talk) 19:45, 17 July 2014 (UTC)
Didn't you note the "BUT..." that John said immediately after? Others did. Boris Tsirelson (talk) 19:55, 17 July 2014 (UTC)
The but was not important, because both the definitions which are given in the article do in fact use vector space, and make clear that (V,A) has more structure than just V RQG (talk) 21:30, 17 July 2014 (UTC)
@RQG: A point you're still missing is that there is more than one actual mathematical definition. Different characterizations of a concept can be equivalent to each other, and which one is taken to be "the" definition may be a matter of convenience varying with the context. As I pointed out above, one of the characterizations (as "(A,V)") superficially makes it appear that an affine space has more structure than a vector space (since the vector space is only "V" rather than "(A,V)"). But I explained why that is a superficial appearance, and that particular way of defining the concept of affine space is not the only way and is not sacred. Michael Hardy (talk) 23:25, 17 July 2014 (UTC)
Different definitions give identical structure (isomorphism), otherwise they would not be definitions of the same object. The problem with the "characterization" as given is that it describes a different object, unless it is immediately made precise as John says it must be. RQG (talk) 06:46, 18 July 2014 (UTC)
No, clearly the pair (V,V) has more structure than the pair (V,A), namely a choice of base point in A. Sławomir Biały (talk) 21:56, 17 July 2014 (UTC)
The extra structure that a vector space has is a choice of origin. That is, an affine isomorphism of the pair (V,A) consisting of a vector space V and a torsor over it, to the pair (V,V) consisting of a vector space acting on itself. As for when Baez says the functor from affine spaces to vector spaces is "forgetful", you need to read the rest of his comment: the functor is not forgetful in the sense that he then goes on to describe. Sławomir Biały (talk) 22:18, 17 July 2014 (UTC)
we are not dealing with the pair (V,V) except in the specific instance of a vector space treated as an affine space over itself. Vector space means V, not (V,V).RQG (talk) 06:40, 18 July 2014 (UTC)
The (V,A) notation, that you seem to think of as sacred, is actually a shorthand for a map that satisfies certain properties. So, yes, whenever we are dealing with a vector space, we are automatically dealing with such a "pair". Sławomir Biały (talk) 15:17, 18 July 2014 (UTC)

@RQG: You are confirming my surmise as to the location of the gap in your understanding. You seem to say:

"(A,V) has more structure than just V."

It's easy to see how it would appear that way if you don't look at the full context. What is mistaken about that, I explained above. I think you should read that. Michael Hardy (talk) 23:34, 17 July 2014 (UTC)

I have read it and answered it. John says the same thing as I do, that V forgets structure from (A,V). One should not conflate points in A with vectors in V (this is in general a bijection, not an isomorphism), or the origin in A with the zero vector in V. Some people may get away with it some of the time if they are not working at an advanced level of abstraction, but it is not the mathematical structure or the mathematical definition and the article should be mathematically correct. Origins are not mentioned in treatments of vector space. The origin is in A, not in V. The reason for the central position in mathematics of vector space and linear algebra in general is that the abstract structure has applications where an origin is not an appropriate concept. Simply choosing the origin in A is not enough to create a vector space, because the vector space operations also have to be defined. It may be easy to define them, but that is not the point. They are not already defined. In any case there is no need to do this; we already have a vector space V and it just adds confusion to define another one.RQG (talk) 06:23, 18 July 2014 (UTC)
It is entirely possible to define an affine space without relation to a vector space: here is one way. Let A be a set with a triple product [a,b,c] satisfying [a,b,c] = [c,b,a]; [[a,b,c],d,e] = [a,[b,c,d],e] = [a,b,[c,d,e]] and [a,a,c] = [c,a,a] = c. The motivation: think of [a,b,c] as the fourth corner of the parallelogram with a,b,c as vertices. We additionally suppose that there is an action by a field K compatible with [,,]. Then A is an affine space and if we choose any element O and define a+c = [a,O,c], then (A,O,+) is a vector space. Conversely, if (V,0,+) is a vector space then defining [a,b,c] = a-b+c turns (V,[,,]) into an affine space, in which there is no distinguished element. Of course I don't suggest that the article be written like this, but it suffices to show that it could be. Deltahedron (talk) 06:29, 18 July 2014 (UTC)
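
For what it is worth, the parallelogram operation is easy to check in coordinates. A small Python sketch (an illustration only, over R^2 with made-up sample points and a made-up choice of O):

  import numpy as np

  def tri(a, b, c):
      # the triple product [a, b, c] = a - b + c (fourth corner of the parallelogram on a, b, c)
      return np.asarray(a, dtype=float) - np.asarray(b, dtype=float) + np.asarray(c, dtype=float)

  a, b, c, d, e = map(np.array, ([1., 0.], [2., 3.], [0., 5.], [4., 4.], [-1., 2.]))
  assert np.allclose(tri(a, b, c), tri(c, b, a))                        # [a,b,c] = [c,b,a]
  assert np.allclose(tri(tri(a, b, c), d, e), tri(a, tri(b, c, d), e))  # [[a,b,c],d,e] = [a,[b,c,d],e]
  assert np.allclose(tri(a, tri(b, c, d), e), tri(a, b, tri(c, d, e)))  # ... = [a,b,[c,d,e]]
  assert np.allclose(tri(a, a, c), c) and np.allclose(tri(c, a, a), c)  # [a,a,c] = [c,a,a] = c

  O = np.array([10., -3.])              # any element can serve as the zero
  add = lambda x, y: tri(x, O, y)       # a + c := [a, O, c]
  print(add(a, O), add(O, a))           # both print [1. 0.], i.e. a: O is the identity for this addition
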
Yes, but of course you still end up with a vector space, (A,O,+), and in contrast to what has been said here A != (A,O,+) RQG (talk) 06:46, 18 July 2014 (UTC)

RQG, it's clear from your responses here that you do not understand John's post that began this thread. He says that there are different mathematical definitions, all of which lead to equivalent categories. Whether some functor is forgetful does not depend on how the category is encoded. The functor from affine spaces to vector spaces is not forgetful. It is what is known as a full functor. A full functor is a functor that is surjective on Hom sets. The functor from vector spaces to affine spaces is not full: there are more affine morphisms between two vector spaces than there are linear morphisms. So, based on John's definitions, this functor is forgetful.

As for the issue of having a vector space appear as part of the definition, this is also true of a manifold, for instance. A manifold is a Hausdorff space that is locally diffeomorphic to some finite dimensional vector space over the reals. In particular, every finite dimensional real vector space is a manifold but not every manifold is a vector space. So a manifold is something more general than a vector space. The functor that assigns to a vector space its underlying differential structure is forgetful. So even though, apparently, a vector space appears in the usual definition of a manifold, a manifold is not a specialization of the vector space concept. It has less structure: the underlying manifold of a real vector space does not carry a canonical linear structure. Just like with affine spaces, though, it is even possible to associate a vector space to a manifold: the tangent space. Now, according to you, no doubt a manifold is a specialization of the vector space concept. However, this is an idiosyncratic view that does not accord with the standard meanings of the terms "specialization" and "generalization" in mathematics. Most mathematicians would agree that the notion of a manifold generalizes the notion of a real vector space, and has less structure rather than more. Obviously you are entitled to your own opinions, but you have no right to attempt to force this marginal opinion on anyone else (e.g., by dismissively asserting that they "aren't real mathematicians" whilst trotting out your own rather middling credentials), and especially no right to promote your own marginal views in an encyclopedia, no matter how right you are convinced they are. Wikipedia must adhere to a neutral point of view, which means that we reflect the views of a subject in proportion to their weight in reliable sources. Your personal objection to the "forgets the origin" mathematical metaphor is, as far as I can tell, your own view and is not sourced to any reliable authority. In contrast, there have been a number of references unearthed in this discussion that do use the metaphor without issue. Sławomir Biały (talk) 17:57, 18 July 2014 (UTC)

Points and vectors

Regarding this edit, I agree that the section is somewhat confusing, but I think the distinction between points and vectors should be mentioned. It's very common in applied settings (e.g., computer graphics) to call the points of an affine space "points" and the displacements "vectors". See, for example, "Practical linear algebra" by Farin and Hansford. Sławomir Biały 18:21, 7 November 2015 (UTC)

I agree that the distinction between points and vectors is not clear enough. Also the fact that, if A and B are points, \overrightarrow{AB} is a common notation for B - A. These are not the only weaknesses of the article. For example the notion of affine coordinates, although fundamental, is not clearly defined.
By the way, I am preparing a section "Affine algebraic geometry", with content corresponding to what should be in Affine space (algebraic geometry). When done, I intend to redirect the latter article to this section. I hope that you agree with this project, which seems to correspond to a consensus in previous discussions. D.Lazard (talk) 10:52, 8 November 2015 (UTC)

Affine homomorphism versus affine transformation

Aren't these the same thing? Should content from this article be moved to affine transformation, and summarized here in summary style? Sławomir Biały 12:21, 22 November 2015 (UTC)

Contrary to what is written in the lead of Affine transformation, affine transformations are automorphisms (see, for example, the lead of Geometric transformation), while an "affine homomorphism", aka "affine map", may not be injective and may have a target different from its source. I believe that the author(s) of Affine transformation have inserted the general definition of affine maps because it was not available elsewhere in WP.
IMO, the part of the content of Affine transformation that refers to general affine maps must be summarized and merged here. When this is done, I suggest also merging Affine transformation and Affine space.
The spirit of my work on this article is to provide here all the basic definitions that are needed to work with affine spaces, and to leave more advanced results and considerations to more specialized articles. This will allow many articles about classical geometry to be clarified. I have already done this in the lead of Projection (mathematics), by adding a link to here. D.Lazard (talk) 14:24, 22 November 2015 (UTC)
Ok. I think it is worth pointing out that there is some scope for confusion here, as a "linear transformation" is (presumably) not necessarily an automorphism, and affine transformations are often presented as generalizations of linear transformations. So we have the strange situation that not every linear transformation is an affine transformation. I assume that is intended, but it should be emphasized. Sławomir Biały 15:38, 22 November 2015 (UTC)
As far as I know, "transformation" means an invertible function from a set (with additional structure) to itself. It is clearly the case in geometry, as stated in geometric transformation. In some WP articles (such as linear map, to which Linear transformation redirects), "transformation" is presented as a synonym of "mapping", without sourcing this equivalence. In linear map, "transformation" is linked to Transformation (function), where it is asserted that a transformation is a map from a set to itself. Thus, at least, the first sentence of linear map is self-contradictory about "transformation". In Transformation (function), transformations are not supposed to be bijective, and this definition is supported by 3 references, all from semigroup theory. This can be explained by the fact that a large part of semigroup theory consists of generalizing group theory as much as possible, for example by generalizing "group of transformations" into "semigroup of transformations". This convinces me that I am right to consider "transformation" as an older word for "automorphism", with the advantage of being more aesthetic when it is possible to use it. In particular, I consider that a "linear transformation" is an automorphism of a vector space. Nevertheless I am ready to change my opinion, if you provide me with convincing sources. D.Lazard (talk) 17:53, 22 November 2015 (UTC)
Linear transformations are seldom assumed to be invertible. The term "transformation" is used synonymously with "map". See, for instance, Dunford and Schwartz, Linear operators I.11; Halmos "Finite dimensional vector spaces", chapter II; van der Waerden, "Modern algebra", Volume I, Section 4.5. So I think this is a legitimate source of potential confusion. At least one high quality source uses "affine transformation" to refer to what you would call an affine endomorphism (not automorphism): Bourbaki, "Espaces vectoriels topologiques", and I think this usage is pretty common in the convex analysis literature (cf. Rockafellar, "Convex analysis"). There is a way to avoid confusion altogether, and just say "automorphism" rather than "transformation". The article already talks about affine homomorphisms, when probably the more usual term is "affine linear map" (cf. Bourbaki, for instance). Sławomir Biały 18:37, 22 November 2015 (UTC)
Yes, regretfully or not, there coexist two different traditions of use the word "transformation"; probably we are forced to inform the reader about this fact. Boris Tsirelson (talk) 18:50, 22 November 2015 (UTC)
One more example: "measure-preserving transformation" in ergodic theory; see for instance Stein and Shakarchi "Real analysis" (Princeton 2005) Sect. 6.5 (p. 292); when invertible, it is called there a "measure-preserving isomorphism". Boris Tsirelson (talk) 18:59, 22 November 2015 (UTC)
Another example: Stroock "A concise introduction to the theory of integration" (Birkhäuser 1994) Sect. 2.2 (p. 32): "For any non-singular linear transformation..." Boris Tsirelson (talk) 19:14, 22 November 2015 (UTC)
This gave me the idea to search for "nonsingular affine transformation". We get Igor R. Shafarevich and Alexey Remizov "Linear Algebra and Geometry", which does not assume that affine transformations are invertible, and also the source and target spaces can be different. Sławomir Biały 19:21, 22 November 2015 (UTC)
Now "EDM2" (Encyclopedic dictionary of mathematics, second edition, MIT Press 1993), vol. 2, p. 1420, Item C "Mappings" of article 381 "Sets": "If there exists a rule which assigns to each element of a set A an element of a set B, this rule is said to define a mapping (or simply map), function, or transformation from A into B. The term transformation is sometimes restricted to the case where A=B. Usually, letters f, g, φ, ψ stand for mappings." (And so on... not said that "transformation" is "sometimes restricted to bijection".) Boris Tsirelson (talk) 20:11, 22 November 2015 (UTC)

This page only makes sense to mathematicians

This page only makes sense to mathematicians. —Preceding unsigned comment added by 72.66.226.35 (talk) 06:05, 18 March 2011 (UTC)

You cannot explain an affine space simply, just like you cannot explain quantum spin simply. It's just a fact of the universe that some things cannot be dumbed down. It's like explaining color to someone blind from birth. —Preceding unsigned comment added by 67.11.202.195 (talk) 05:31, 26 March 2011 (UTC)
I don't think that can be true, otherwise none of us would know what an affine space is. The concept is somewhat abstract, but it is not impossible to explain it simply - it is all a matter of using the right examples. Gandalf61 (talk) 09:59, 26 March 2011 (UTC)
It's not abstract at all. Take a look at Weyl’s marvellous book “Space, Time, Matter” which makes affine geometry very understandable in terms of addition of successive localized vectors (just what we did at school in fact!). The article quotes Weyl but makes it all very abstract and incomprehensible. Learn from the masters! JFB80 (talk) 15:40, 26 May 2013 (UTC)
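For readers who do not have Weyl's book at hand, the "addition of successive localized vectors" amounts to the relation (a summary of the standard presentation, not a quotation from Weyl):

    \overrightarrow{AB} + \overrightarrow{BC} = \overrightarrow{AC} \qquad \text{for all points } A, B, C,

together with the requirement that for every point A and every vector v there is exactly one point B with \overrightarrow{AB} = v. These are essentially Weyl's axioms, and they describe the same structure as the more abstract definition discussed in the article.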
It is not the pupil that is lacking, it's the professor. There is no knowledge that cannot be described in simplified terms (or at least no knowledge that humans can comprehend). Everyone, scientist and layman alike, uses some kind of simplification to describe complex subjects. Choice of grammar, syntax, notation, example, etc. is the foundation of education. This is what sets apart great professors from everyone else. Most educated people can't teach; at least not to the degree of the select few masters. This is why the Feynman lectures are so popular: because he took some of the most difficult subjects in physics and taught them without unnecessary jargon and big words. Jargon especially sets apart groups of people, sometimes intentionally, which is childish. It's like when a 3rd or 4th year student in a STEM field tries to blow people's minds with the most complex phrase they can utter on a topic. This is an improper use of terminology, which is there to simplify topics. Just remember that the 7±2 theory of working memory applies to all humans, including every genius to ever live.
On the same subject, oversimplification can be a big problem as well, especially when it involves unnecessary omission. Probably the best example of this is neglecting wind resistance in physics courses. The only time this should be neglected is during the introduction of the equations of motion at the beginning of a high school or college introductory physics course. Air resistance plays a part in nearly all situations that would warrant the use of math rather than intuition, and is essential to a full grasp of such a fundamental part of so many subjects based on the physical world. It would be like teaching an English speaker Arabic or Mandarin while avoiding any discussion of inflection.
Another common example is the coefficient of friction, which is designed as a tool for estimation, not rigorous calculation. This comes into play with collision forensics to estimate speed. Because it has been simplified for so many people, it is legal for police to use it in situations where it shouldn't be used. The equations they use are undergrad-level physics equations that are generally only accurate to ±15-20 mph for most speeds between 35 and 90 mph, don't work at all for speeds out of that range (error > 50%), and only work if the coefficient of friction is accurate to within 0.01 (which is impossible for a police officer to ascertain at the scene of an accident, as it depends on many important variables that they cannot know, like tire temperature, suspension characteristics, weight distribution, tread flexing, braking pressure, etc.). — Preceding unsigned comment added by Zephalis (talkcontribs) 02:17, 28 March 2016 (UTC)

Getting carried away with symbolic logic

If ever there was a page that needed 'or in words' after its symbolic logic, this is it. What's that square-ended right-hand arrow? I believe that Wikipedia is intended to provide readable explanations, given a fair background. I have a fair background, but the assumption of the writers is that the notation of set theory, as compared to that of logic, as compared to this relatively esoteric area, is well established and understood by everyone. It isn't. Centroyd

This is common notation for a function (mathematics), although that article currently does not explain it, because it is undergoing a re-write :-( Very briefly, f : X → Y, x ↦ y means that the function f maps the space X to the space Y, carrying the individual element x to the element y. In old-fashioned terms, it means that f(x) = y, although the old-fashioned notation doesn't allow X and Y to be described easily (which is why it's not much used). linas 06:13, 31 December 2005 (UTC)
To this day, this is still a problem in this article. The fact that function (mathematics) had just started including it in early 2006 is a testament to its lack of commonality in mathematics. Even today's function article did not reference where it came from until I added it after looking through its history and running across its link to lambda calculus, which itself had not started gaining use until the 1960s.
"...the old-fashioned notation doesn't allow X and Y to be described easily. (which is why its not much used)." --linas
It's used in nearly every Wiki article concerning math, nearly every mathematics paper ever published, nearly every basic textbook on mathematics, programming, etc. Most of the time it is wholly unnecessary to show the domain and co-domain of a function (which is generally understood from context, intended audience, etc.) because it's generally R. It's even simpler to write the article without having to write out all of the math markup that keeps a lot of would-be editors away in the first place.
So essentially, we are using newer notation to add complexity to a topic that is in and of itself a generalization of a simpler topic. This line of thought makes as much sense as using Einstein notation in every article with a summation because it's been used twice as long by a much larger subset of people (which includes engineers and physicists instead of just pure, cutting-edge mathematicians).
The article itself is very long and still doesn't get the point across even to me, an electrical engineer with a minor in mathematics and a focus on electromagnetic wave theory, which is far more than the fair background of the average reader that this article is supposed to be geared towards. Even Wolfram explains [affine space] better in 8 lines of text. — Zephalis (talk) 01:27, 28 March 2016 (UTC)
The MathWorld article does not explain anything. It only provides a definition, which is not really shorter than the one given here (Affine space § Definition). Nevertheless, I have added to the article a link for the arrow notation of a function. Here this notation is difficult to avoid, as the traditional (prefix) notation f(x) would require giving a name to this bivariate function. This would be confusing, as the function is already named "+", and the notation "+"(a, v) would need some explanation. D.Lazard (talk) 11:44, 28 March 2016 (UTC)
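For concreteness, the notation in question, applied to the addition map of an affine space with point set A and associated vector space V, reads (my rendering of the notation, not a quotation from the article):

    {+}\colon A \times V \to A, \qquad (a, v) \mapsto a + v.

The plain arrow (\to) names the domain and codomain of the map, while the barred arrow (\mapsto, the "square-ended right-hand arrow" asked about above) shows its effect on an individual argument: the pair (a, v) is sent to the point a + v.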

The controversy

Here I try to formulate the essence of the controversy.

According to RQG:
(A,V) is THE definition of affine space; all its implications are true statements about affine spaces; others are not. This is formally simple, and does not confuse readers.

According to other participants of this dispute:
Like every important mathematical notion, affine space has several equivalent definitions. Every true statement about affine spaces follows from each definition. Statements sensitive to the choice of a definition (among equivalent definitions) are properties of a specific approach to affine spaces (rather than genuine properties of affine spaces). This is formally less simple, but we should do our best in explaining this important point to the reader rather than hide the truth for simplicity.

An example. According to one of several equivalent definitions of natural numbers (the von Neumann construction), each natural number is the set of all smaller natural numbers; in particular, 2 = {0, 1}.

An implication: 1 ∈ 2. This is not really a true statement about natural numbers. Rather, 1 < 2 is.

In my opinion, the approach of RQG could be preferable for non-human readers, such as proof assistants, and maybe to some human readers inclined to heavily use their logic, not intuition. But most human readers use their intuition no less than logic. And we should not discourage them. After all, this is why proof assistants are hopelessly weaker than humans: they have no intuition.

Boris Tsirelson (talk) 08:04, 18 July 2014 (UTC)
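To make the disputed point concrete, here is a sketch of two of the standard, provably equivalent definitions alluded to above (standard formulations; the wording is mine).

Definition 1 (group action): an affine space is a triple (A, V, +), where V is a vector space and

    {+}\colon A \times V \to A

is a free and transitive action of the additive group of V on the set A.

Definition 2 (point subtraction, essentially Weyl's axioms): an affine space is a pair (A, V) together with a map

    {-}\colon A \times A \to V, \qquad (b, a) \mapsto b - a,

such that (c - b) + (b - a) = c - a for all points a, b, c, and, for each point a, the map b \mapsto b - a is a bijection from A onto V.

Each structure determines the other (define a + v as the unique point b with b - a = v, and conversely), so every statement provable from one definition is provable from the other; only the bookkeeping differs.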

Mathematics is, of its nature, about using logic. An article on a mathematical structure is not a place for a discussion of philosophy of mathematics. There is no reason that an informal description cannot be given, but it should still use mathematical words correctly, and it should still be a description of the same mathematical structure. Intuition does not mean doing things wrong, which describes the current state of the article. RQG (talk) 08:47, 18 July 2014 (UTC)

In fact, RQG is quite consistent in rejecting the very idea of an equivalent definition. Writing such things as "A != (A,O,+)", RQG insists on literal identity of the structure. It means that for RQG two definitions are equivalent only when they lead not only to the same mathematical notion, but also to the same encoding of this notion, that is, its specific embedding into set theory. Needless to say, this is a very original viewpoint, very far from mainstream mathematics.

"I have a first in mathematics from Cambridge University, and a Ph.D. in mathematics from University of London", as RQG wrote at 17:39, 8 July 2014 on Wikipedia talk:WikiProject Mathematics. I am puzzled. The maturity level we observe could be typical enough for an undergraduate, but London Ph.D?.. Boris Tsirelson (talk) 09:19, 18 July 2014 (UTC)

No, really, RQG is not so consistent. He wrote "Every vector space has a basis, so that every vector has a coordinate representation in terms of a basis." replying to my "Where did you see "ordered n-tuple" or anything like that in the definition of a vector space? To this end you need first to choose a basis in the given vector space, and then introduce coordinates. But this is an additional construction, beyond the definition" (this talk page, at 08:11, 8 July 2014). Thus, RQG may introduce additional constructions when needed, but does not allow us to do so. I fail to understand this position. Boris Tsirelson (talk) 09:26, 18 July 2014 (UTC)

I have not rejected an equivalent definition. I have rejected an inequivalent definition. Moreover, everything necessary to produce a basis and coordinates is already given in the axioms of a vector space, without adding extra structure. If you claim that A, without a defined operation of addition, is a vector space, as has been done, then it makes nonsense of mathematics. RQG (talk) 10:16, 18 July 2014 (UTC)
I'm puzzled. Who claimed that "A, without a defined operation of addition is a vector space, as has been done", and where? Deltahedron (talk) 17:35, 18 July 2014 (UTC)

Just the opposite: there is no canonical way to introduce a basis in a vector space, but the addition operation is introduced canonically.
Note this: definitions well-known to be equivalent (according to a lot of reliable sources) are inequivalent for RQG. (QED!)
This way, again and again, RQG tries to impose on us an incompetent, idiosyncratic manner of thought, ignoring quite a few experts. What shall we do, at last? Boris Tsirelson (talk) 11:02, 18 July 2014 (UTC)
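A small illustration of "canonical" versus "non-canonical" for readers following this exchange (my example): in the vector space \mathbb{R}^2, both

    \{(1, 0),\ (0, 1)\} \qquad \text{and} \qquad \{(1, 1),\ (1, -1)\}

are bases, and nothing in the vector-space axioms singles out either one, so producing coordinates requires choosing a basis, which is extra data. By contrast, the sum u + v of two given vectors is determined by the axioms themselves, with no choice involved. This is the sense in which addition is canonical and a basis is not.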

A basis is just a minimal set of elements which span the vector space. This is a most trivial property. RQG (talk) 12:34, 18 July 2014 (UTC)
" I think, however, it is time to stop using wikipedia as a discussion forum, or for elementary lessons in vector space." (your words, on this talk page, above). Boris Tsirelson (talk) 13:04, 18 July 2014 (UTC)
But wait; maybe you are a talented teenager? If so, then I am sorry for my tone, and wish you every success. Just do not say you are a PhD. And for now, edit within your competence. If you do not know what "canonically" means, you should not decide which structure is richer, etc. Boris Tsirelson (talk) 13:26, 18 July 2014 (UTC)

Conclusion. After all, I think that most of RQG's contributions should be (and/or are) rejected as OR (Original Research)! Yes, this is an unusual form of OR. RQG treats mathematics in a way quite different from the usual way (but seems to believe that this is the usual way). This unusual approach to math is OR. Yes, it is not explicitly stated in RQG's contributions, but it underlies these contributions; in the usual approach they are inappropriate, and often false. Boris Tsirelson (talk) 17:05, 18 July 2014 (UTC)

This is in no way WP:OR, and even if it were, your use of it here as an "unusual form [of WP:OR]" would then be WP:OR, negating, by your own notion, your own statement's existence on the grounds of WP:OR. Circular reasoning aside, your ad hominem attacks on RQG are uncalled for on WP, as they are unproductive and, more often than not, a sign of a lost argument. The fact that you opened up so many sections on the talk page, with different nuances in each, during the same argument is an intellectual straw man; making a veritable trifecta of logical fallacies in an argument concerning logic itself. -- Zephalis (talk) 03:45, 28 March 2016 (UTC)
"...[ROQ] insists on literal identity of the structure. It means that for RQG two definitions are equivalent only when they lead not only to the same mathematical notion, but also to the same encoding of this notion". This is, by definition, the point of mathematics; to eliminate ambiguity. You can't have two different definitions that cannot be convolved into the same notation. If you could, mathematical proofs would not be rigorous. Think of the formal definitions like a proof and the informal definitions like a proof using a Venn diagram. One is mathematically valid, the other is intuitive and easier to understand. Both have their place. -- Zephalis (talk) 03:45, 28 March 2016 (UTC)
As for the article itself, it does need a major rewrite, as it relies too heavily on prior knowledge. You yourself stated that you hadn't heard of heaps. This is evidence of the issue. Of all subjects, one cannot expect everyone to have a significant knowledge base in mathematics, as it is one of the oldest and most rigorous fields and has the largest knowledge base of any subject. This is why it's such a difficult topic to write about, even for other mathematicians. If a top-of-the-field knot theorist were writing a paper to be read by a statistician or game theorist, they wouldn't use terminology from knot theory without a full explanation of the underlying maths. -- Zephalis (talk) 03:45, 28 March 2016 (UTC)
It's like having someone well versed in fluid mechanics reading a topic on electromagnetics; the math is nearly identical, and there is no lack of intelligence, but the background between the subjects is quite different. Even in that case, all of the formulas will still convolve to "the same encoding of this notion" because they are both intrinsically linked in both mathematics and nature. In fact, to explore fully the reasons that the formulas are so similar in this particular example, it would require a large amount of research into the history of science and engineering itself, a seemingly unrelated subject. -- Zephalis (talk) 03:45, 28 March 2016 (UTC)
About "the same encoding of this notion" please see Equivalent definitions of mathematical structures. Boris Tsirelson (talk) 05:12, 28 March 2016 (UTC)
RQG did not think the article should attempt to summarize affine spaces in an intuitive way, despite being well-sourced. That seems like the opposite of what you are suggesting, so perhaps you have not correctly interpreted the discussions here. To reply to the substance of what you've said, many properties in modern mathematics are characterized in terms of their extensive (or "categorical") properties rather than their intensive (or "structural") properties. It's naive to suggest that mathematics needs a "literal identity" of structure in order to have precise definitions of things. Indeed, in many applications, it is not even meaningful to speak of "literal identity" of structure. An example is a limit in a category, where one has a limiting object that is unique up to isomorphism. Another example is the existence of non-measurable functions: one can prove that many such things exist, without having a construction of any single such function. (This is one reason that the "structural" properties of such things are so very strange.) Sławomir Biały 11:44, 28 March 2016 (UTC)
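A standard illustration of the point about limits (my example, not from the comment above): in the category of sets, a product of X and Y is any set P with projections p\colon P \to X and q\colon P \to Y satisfying the usual universal property. The set of ordered pairs (x, y) qualifies, and so does the set of reversed pairs (y, x); any two such products are related by a unique isomorphism compatible with the projections. The definition pins the object down only up to isomorphism, not up to literal identity of its elements, yet it is perfectly precise.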