Discussion:
Tensor question
Viktor T. Toth
2017-07-04 20:54:00 UTC
Dear Daniel,

Sorry, didn't mean to be cryptic; I thought my answer would lead you to the right solution. Allow me to elaborate on three major points.


First, the question of position vectors.

Let's go through this methodically, starting with an arbitrary coordinate system x^i. A position vector R is represented by x^i, which is really shorthand for

R = x^i e_i,

where the e_i are the covariant basis vectors.

Your point is that \partial R/\partial x^i = e_i, so differentiating R with respect to any of the coordinates should give the corresponding basis vector. This is, of course, trivially true:

\partial_i R = \partial (x^j e_j) / \partial x^i = e_i.

So what would this position vector be in polar cylindrical coordinates? Why, it is R = x^i of course, that is to say,

R : [rho, theta, z].

Indeed you can see (even without Maxima) that, say, diff(R, rho) = [1, 0, 0], which is just as it should be: this is e_rho.

But this is not what you have in your script. Instead, your R holds the components of the position vector in _Cartesian_ coordinates:

R : [rho*cos(theta), rho*sin(theta), z].

Why would the Cartesian components of R have any special behavior under covariant differentiation in polar cylindrical coordinates? Of course they wouldn't.
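For readers without Maxima at hand, this distinction is easy to check numerically. The following Python sketch (my own illustration, not part of the original script) differentiates the Cartesian component list of R by central differences and recovers the rows of the Jacobian, i.e., the Cartesian components of e_rho, e_theta and e_z; differentiating [rho, theta, z] itself would trivially give the identity matrix:

```python
import math

def X(q):
    """Cartesian components of the position vector at q = (rho, theta, z)."""
    rho, theta, z = q
    return [rho*math.cos(theta), rho*math.sin(theta), z]

def jac(f, q, h=1e-6):
    """Rows are df/dq[i], computed by central differences."""
    rows = []
    for i in range(len(q)):
        qp, qm = list(q), list(q)
        qp[i] += h; qm[i] -= h
        rows.append([(a - b)/(2*h) for a, b in zip(f(qp), f(qm))])
    return rows

q = [2.0, 0.7, 0.0]
J = jac(X, q)
# J[0] is e_rho   = [cos(theta), sin(theta), 0] in Cartesian components,
# J[1] is e_theta = [-rho*sin(theta), rho*cos(theta), 0], and J[2] is e_z = [0, 0, 1]
print(abs(J[0][0] - math.cos(0.7)) < 1e-6)  # True
```

Arranged as a matrix, J is exactly the transpose(jacobian(R, ct_coords)) that comes up later in the thread.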


Second, the question of covariant derivatives.

You assert that the covariant derivative of the basis vectors should be zero. But is this really true?

We have three basis vectors: e_rho = [1, 0, 0], e_theta = [0, 1, 0] and e_z = [0, 0, 1]. Let us calculate their covariant derivatives by hand.

The covariant derivative of a covariant vector V_j is, of course,

D_i V_j = \partial_i V_j - Gamma_{ij}^k V_k.

The only nonzero Christoffel symbols in polar cylindrical coordinates are

Gamma_{12}^2 = Gamma_{21}^2 = 1 / rho,
Gamma_{22}^1 = -rho.
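These two values can be cross-checked against the cylindrical metric g = diag(1, rho^2, 1) via Gamma^k_{ij} = (1/2) g^{kl} (\partial_i g_{lj} + \partial_j g_{li} - \partial_l g_{ij}). Here is a small Python sketch of that formula (my illustration, not from the thread; central differences, with the inverse metric taken elementwise since the metric is diagonal):

```python
def metric(x):
    """Covariant metric of polar cylindrical coordinates (rho, theta, z)."""
    rho, theta, z = x
    return [[1.0, 0.0, 0.0], [0.0, rho*rho, 0.0], [0.0, 0.0, 1.0]]

def christoffel(g, x, h=1e-6):
    """Gamma^k_{ij} = 1/2 g^{kl} (d_i g_lj + d_j g_li - d_l g_ij); diagonal g assumed."""
    n = len(x)
    def dg(i):
        xp, xm = list(x), list(x)
        xp[i] += h; xm[i] -= h
        gp, gm = g(xp), g(xm)
        return [[(gp[a][b] - gm[a][b])/(2*h) for b in range(n)] for a in range(n)]
    d = [dg(i) for i in range(n)]          # d[i][a][b] = d g_ab / d x^i
    ginv = [1.0/g(x)[k][k] for k in range(n)]
    return [[[0.5*ginv[k]*(d[i][k][j] + d[j][k][i] - d[k][i][j])
              for j in range(n)] for i in range(n)] for k in range(n)]

x = [2.0, 0.7, 0.0]                         # a sample point with rho = 2
G = christoffel(metric, x)                  # G[k][i][j] = Gamma^k_{ij}
print(abs(G[1][0][1] - 1/x[0]) < 1e-6)      # True: Gamma^2_12 = 1/rho
print(abs(G[0][1][1] + x[0]) < 1e-6)        # True: Gamma^1_22 = -rho
```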

Consequently, for e_rho, we have

(D_1 e_rho)_1 = \partial_rho 1 - \Gamma_{11}^1 1 = 0.
(D_1 e_rho)_2 = -Gamma_{12}^1 1 = 0.
(D_1 e_rho)_3 = -Gamma_{13}^1 1 = 0.
(D_2 e_rho)_1 = \partial_theta 1 - \Gamma_{21}^1 1 = 0.
(D_2 e_rho)_2 = \partial_theta 0 - \Gamma_{22}^1 1 = rho.

Whoops. That term is not zero, falsifying your assertion, so I don't even need to continue. (The rest of the terms are zero, by the way.)
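The hand calculation is easy to mechanize. A minimal Python sketch (mine, not part of the thread's Maxima session), with the two nonzero Christoffel symbols hard-coded as listed above; the partial-derivative term drops out because the components [1, 0, 0] are constants:

```python
def cov_deriv_const(V, rho):
    """(D_i V)_j = d_i V_j - Gamma^k_{ij} V_k for constant covariant components V."""
    G = [[[0.0]*3 for _ in range(3)] for _ in range(3)]   # G[k][i][j] = Gamma^k_{ij}
    G[1][0][1] = G[1][1][0] = 1.0/rho                     # Gamma^2_12 = Gamma^2_21
    G[0][1][1] = -rho                                     # Gamma^1_22
    return [[-sum(G[k][i][j]*V[k] for k in range(3))      # d_i V_j = 0 here
             for j in range(3)] for i in range(3)]

D = cov_deriv_const([1.0, 0.0, 0.0], rho=2.0)   # e_rho at rho = 2
print(D[1][1])   # (D_2 e_rho)_2 = -Gamma^1_22 * 1 = rho = 2.0; every other entry is 0
```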

The reason, of course, is that this is precisely what the covariant derivative measures: how the basis vectors _change_ as we transport them along the manifold. So contrary to your expectation, the covariant derivative of the basis vectors should not be zero unless all Christoffel symbols vanish (e.g., in Cartesian coordinates in flat space).


Third, the covariant derivative of Cartesian basis vectors.

The preceding discussion leads me to my final point: the covariant derivatives of a set of basis vectors that does NOT change under parallel transport, such as the Cartesian basis vectors, should be zero! But for this, we need to do the exact opposite of what you have been doing. Instead of taking R to be a position vector in cylindrical coordinates and then expressing its components in Cartesian coordinates, we need to take the Cartesian basis vectors and express their components using the polar cylindrical coordinate system.

So given Cartesian basis vectors I, J and K, their components in polar cylindrical coordinates would be:

I = [cos(theta), -rho*sin(theta), 0]
J = [sin(theta), rho*cos(theta), 0]
K = [0, 0, 1].

Unsurprisingly, if arranged in a matrix, this would indeed be the transpose of the matrix that you obtain from the vectors that you believed to be basis vectors.

When we take the covariant derivatives of these three vectors, that is to say, the polar cylindrical covariant derivatives of the Cartesian basis vectors expressed in polar cylindrical coordinates, we indeed get zeros everywhere.
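This claim, too, can be spot-checked numerically. Illustrative Python (my own, not from the thread): take the cylindrical components of I listed above, use finite differences for the partial derivatives, and the covariant derivative comes out zero to numerical precision:

```python
import math

def I_comp(x):
    """Covariant cylindrical components of the Cartesian basis vector I (x-hat)."""
    rho, theta, z = x
    return [math.cos(theta), -rho*math.sin(theta), 0.0]

def cov_deriv(V, x, h=1e-6):
    """(D_i V)_j = d_i V_j - Gamma^k_{ij} V_k in polar cylindrical coordinates."""
    rho = x[0]
    G = [[[0.0]*3 for _ in range(3)] for _ in range(3)]   # G[k][i][j] = Gamma^k_{ij}
    G[1][0][1] = G[1][1][0] = 1.0/rho
    G[0][1][1] = -rho
    def dV(i, j):                                         # central difference d_i V_j
        xp, xm = list(x), list(x)
        xp[i] += h; xm[i] -= h
        return (V(xp)[j] - V(xm)[j])/(2*h)
    V0 = V(x)
    return [[dV(i, j) - sum(G[k][i][j]*V0[k] for k in range(3))
             for j in range(3)] for i in range(3)]

D = cov_deriv(I_comp, [2.0, 0.7, 0.0])
print(max(abs(D[i][j]) for i in range(3) for j in range(3)) < 1e-6)   # True
```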


By the way, polar cylindrical coordinates are a ctensor/ct_coordsys builtin, so there is no need to construct them from scratch. (The builtin uses r instead of rho as the name of the radial coordinate.)

load(itensor)$
load(ctensor)$
ct_coordsys(polarcylindrical,all)$
ldisplay(R:[r*cos(theta),r*sin(theta),z])$
ishow(Eq:Q([j,i],[])=subst([%1=m],rename(covdiff(Z([j],[]),i))))$
Eq:ic_convert(Eq);
C:apply(matrix,makelist(diff(R,ct_coords[i]),i,dim));
trigsimp(C.transpose(C).ug);
I:transpose(C)[1];
J:transpose(C)[2];
K:transpose(C)[3];
Z:I$ Q:zeromatrix(dim,dim)$ ev(Eq)$ Q;
Z:J$ Q:zeromatrix(dim,dim)$ ev(Eq)$ Q;
Z:K$ Q:zeromatrix(dim,dim)$ ev(Eq)$ Q;


Viktor
-----Original Message-----
Sent: Monday, July 3, 2017 7:15 AM
Subject: Re: [Maxima-discuss] Tensor question
Hi Viktor,
Thank you for your response, but I have a hard time understanding your answer.
Given the Position Vector R, I find the covariant basis vectors as follows: diff(R,ct_coords[1]), diff(R,ct_coords[2]) and diff(R,ct_coords[3]).
On the other hand, you find the covariant basis vectors as follows: makelist(diff(R[1],ct_coords[i]),i,dim), makelist(diff(R[2],ct_coords[i]),i,dim) and makelist(diff(R[3],ct_coords[i]),i,dim).
One seems to be the transpose of the other (if one puts all three vectors in a matrix).
I still claim that my way is the correct one, as it appears in Differential Geometry books, hence my problem with the Covariant Derivative still remains.
Thanks,
Daniel
Daniel,
Please forgive me; I had not noticed your question earlier.
Would the following answer your question?
load(itensor)$
load(ctensor)$
ct_coords:[rho,theta,z]$
dim:length(ct_coords)$
ldisplay(R:[rho*cos(theta),rho*sin(theta),z])$
ct_coordsys(append(R,[ct_coords]),all)$
ishow(Eq:Q([j,i],[])=subst([%1=m],rename(covdiff(Z([j],[]),i))))$
Eq:ic_convert(Eq);
ldisplay(Z:makelist(diff(R[1],ct_coords[i]),i,3))$
Q:zeromatrix(dim,dim)$
ev(Eq)$
Q;
ldisplay(Z:makelist(diff(R[2],ct_coords[i]),i,3))$
Q:zeromatrix(dim,dim)$
ev(Eq)$
Q;
ldisplay(Z:makelist(diff(R[3],ct_coords[i]),i,3))$
Q:zeromatrix(dim,dim)$
ev(Eq)$
Q;
Viktor
-----Original Message-----
Sent: Sunday, June 25, 2017 2:45 AM
Subject: Re: [Maxima-discuss] Tensor question
Hi All,
Could somebody help here?
Thanks,
Daniel
Hi All,
Should the covariant derivative of a (covariant or contravariant) basis vector be zero?
load(itensor)$
load(ctensor)$
ct_coords:[rho,theta,z]$
dim:length(ct_coords)$
ldisplay(R:[rho*cos(theta),rho*sin(theta),z])$
ct_coordsys(append(R,[ct_coords]),all)$
ishow(Eq:Q([j,i],[])=subst([%1=m],rename(covdiff(Z([j],[]),i))))$
Eq:ic_convert(Eq);
ldisplay(Z:diff(R,ct_coords[1]))$
Q:zeromatrix(dim,dim)$
ev(Eq)$
ldisplay(Q:factor(trigsimp(Q)))$
I think that this code calculates the covariant derivative of one of the covariant basis vectors in cylindrical coordinates, but I don't get zero.
Please help.
Daniel Volinski.
------------------------------------------------------------------------------
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
_______________________________________________
Maxima-discuss mailing list
https://lists.sourceforge.net/lists/listinfo/maxima-discuss
Viktor T. Toth
2017-07-12 22:54:57 UTC
Dear Daniel,

I admit that I managed to confuse myself a little, too, as I was attempting to formulate a correct answer, but I think I unconfused myself now.

Your point is that the covariant derivative of basis vectors must be zero.

Let me respond by pointing out the subtle distinction between a vector, such as a basis vector (a geometric object that has a length and a direction) vs. its representation in the form of contravariant or covariant components.

Contravariant components go hand-in-hand with covariant basis vectors and vice versa. So for instance, the abstract idea of a vector R may be represented as

R = r^1 e_1 + r^2 e_2 + r^3 e_3,

or alternatively, as

R = r_1 e^1 + r_2 e^2 + r_3 e^3.

Both represent the same abstract quantity.

We know that the basis vectors are such that e_i . e^j = \delta_i^j, where \delta is the Kronecker delta.

So then, the product R . e^1 = r^1 e_1 . e^1 + r^2 e_2 . e^1 + r^3 e_3 . e^1 = r^1,

and similarly,

R . e_1 = r_1 e^1 . e_1 + r_2 e^2 . e_1 + r_3 e^3 . e_1 = r_1.

The same goes for R . e^k and R . e_k.

Which is to say that the components of a vector in a basis can be calculated as the inner product of the vector and the corresponding basis vectors:

r_k = R . e_k,
r^k = R . e^k.

Which immediately gives the contravariant components of the basis vector e_1 (NB: the symbol e_1 is a vector, not a component):

e_1 . e^1 = 1,
e_1 . e^2 = 0,
e_1 . e^3 = 0.

So if e_i . e^j = \delta_i^j, what about e^i . e^j? That would be g^{ij}, of course. And as we know, the covariant derivative of the metric *is* zero:

D_k g^{ij} = 0,

or

D_k (e^i . e^j) = 0.

Or, using the product rule for derivatives,

(D_k e^i)e^j + e^i(D_k e^j) = 0.

Let me quickly multiply this by e_i:

(D_k e^i)e^j e_i + e^i(D_k e^j) e_i = 0.
D_k e^j + D_k e^j = 0.
D_k e^j = 0.

BUT... e^j is an abstract vector. It is not a triplet of contravariant or covariant components. So let me ask... what would be the components representing e^j?

We already know that

e_i = g_{ij} e^j.

So the covariant components of e_i are, in fact, the corresponding components of the covariant metric tensor itself.

So let's do the covariant derivative:

D_k e_i = D_k (g_{ij} e^j) = (D_k g_{ij}) e^j + g_{ij}(D_k e^j) = 0 + 0 = 0.

Note that if the covariant derivative of the metric didn't vanish (nonmetricity), either D_k e^i or D_k e_i would be nonzero.

In other words, the statement that the covariant derivative of the basis vectors vanishes is equivalent to the statement that the metric's covariant derivatives vanish.
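This equivalence rests on metric compatibility. An illustrative Python fragment (my own, not from the thread) verifying D_k g_ij = \partial_k g_ij - Gamma^l_{ki} g_lj - Gamma^l_{kj} g_il = 0 for the cylindrical metric, with the partial derivative taken by central differences:

```python
def g(x):
    """Covariant cylindrical metric at (rho, theta, z)."""
    rho, theta, z = x
    return [[1.0, 0.0, 0.0], [0.0, rho*rho, 0.0], [0.0, 0.0, 1.0]]

def metric_cov_deriv(x, h=1e-6):
    """D_k g_ij = d_k g_ij - Gamma^l_{ki} g_lj - Gamma^l_{kj} g_il."""
    rho = x[0]
    G = [[[0.0]*3 for _ in range(3)] for _ in range(3)]   # G[l][k][i] = Gamma^l_{ki}
    G[1][0][1] = G[1][1][0] = 1.0/rho
    G[0][1][1] = -rho
    g0 = g(x)
    def dg(k, i, j):                                      # central difference d_k g_ij
        xp, xm = list(x), list(x)
        xp[k] += h; xm[k] -= h
        return (g(xp)[i][j] - g(xm)[i][j])/(2*h)
    return [[[dg(k, i, j)
              - sum(G[l][k][i]*g0[l][j] for l in range(3))
              - sum(G[l][k][j]*g0[i][l] for l in range(3))
              for j in range(3)] for i in range(3)] for k in range(3)]

D = metric_cov_deriv([2.0, 0.7, 0.0])
print(max(abs(D[k][i][j]) for k in range(3) for i in range(3) for j in range(3)) < 1e-7)  # True
```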

So what about the components of the basis vectors? Well, they are trivial, aren't they? For instance, we just confirmed that e_1 = 1 e_1 + 0 e_2 + 0 e_3. So the contravariant components of e_1 are [1, 0, 0]. Similarly, the covariant components of e^1 are [1, 0, 0].

Now clearly, in most coordinate systems, a vector with covariant (or contravariant) components [1, 0, 0] will not have a vanishing covariant derivative. In contrast, its coordinate derivative is going to vanish, since the vector components are constants.

I hope this helps clarify things rather than making things more confusing.


Viktor

PS: Yes, the Jacobian.
-----Original Message-----
Sent: Friday, July 7, 2017 7:30 AM
Subject: Re: [Maxima-discuss] Tensor question
Hi Viktor,
Thank you for your thorough response, it took me several days to "digest" it.
Regarding your second point, the question of Covariant Derivatives. If I understand correctly, the Christoffel symbols of the second kind are defined by
\partial_i e_j = \Gamma_{ij}^k e_k
i.e., given the basis e_i, you differentiate each basis vector with respect to each coordinate and decompose the result in that same basis. The coefficients of that decomposition are the Christoffel symbols; hence the Christoffel symbols depend on the basis you choose.
Now if the Covariant Derivative of a Covariant vector is
D_i V_j = \partial_i V_j - \Gamma_{ij}^k V_k
then
D_i e_j = \partial_i e_j - \Gamma_{ij}^k e_k = \Gamma_{ij}^k e_k - \Gamma_{ij}^k e_k = 0,
i.e., by the definition of the Christoffel symbols this must be zero.
If the Covariant Derivative of a Covariant Basis Vector is not zero, that means you are using Christoffel symbols that do not correspond to that basis.
Regarding the Maxima code you sent, you calculate C as
C:apply(matrix,makelist(diff(R,ct_coords[i]),i,dim));
Isn't it simply the Jacobian? Or rather, the transpose of the Jacobian?
C:transpose(jacobian(R,ct_coords));
Thank you,
Daniel
José A. Vallejo Rodríguez
2017-07-13 05:48:33 UTC
In any n-dimensional Riemannian manifold (M,g), given a point p in M it is always possible to find a local coordinate system (a chart) (U,f), with associated coordinate functions x1,...,xn (called geodesic normal coordinates), such that the Christoffel symbols of the Levi-Civita connection satisfy \Gamma^i_{jk}(p) = 0 (i.e., they vanish at p). This implies that the covariant derivative of the basis vectors \partial_i is zero at p. However, it is neither true that this happens for an arbitrary coordinate system (hence for an arbitrary set of basis vectors), nor that the covariant derivative of the basis vectors is zero at points other than p.
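A concrete instance of this (my own illustration, not from the message above): the round-sphere metric in stereographic coordinates, g_ij = 4 delta_ij / (1 + x^2 + y^2)^2, is in normal form at the point of tangency, so its Christoffel symbols vanish at the origin but not at nearby points. A Python sketch using central differences, with the inverse metric taken elementwise since the metric is diagonal:

```python
def g(x):
    """Unit 2-sphere metric in stereographic coordinates (x, y); conformal, diagonal."""
    f = 4.0/(1.0 + x[0]**2 + x[1]**2)**2
    return [[f, 0.0], [0.0, f]]

def christoffel(g, x, h=1e-6):
    """Gamma^k_{ij} from the metric by central differences; diagonal metric assumed."""
    n = len(x)
    def dg(i):
        xp, xm = list(x), list(x)
        xp[i] += h; xm[i] -= h
        gp, gm = g(xp), g(xm)
        return [[(gp[a][b] - gm[a][b])/(2*h) for b in range(n)] for a in range(n)]
    d = [dg(i) for i in range(n)]           # d[i][a][b] = d g_ab / d x^i
    ginv = [1.0/g(x)[k][k] for k in range(n)]
    return [[[0.5*ginv[k]*(d[i][k][j] + d[j][k][i] - d[k][i][j])
              for j in range(n)] for i in range(n)] for k in range(n)]

at_p = christoffel(g, [0.0, 0.0])   # the point of tangency p
away = christoffel(g, [0.5, 0.3])   # a nearby point
print(max(abs(v) for K in at_p for row in K for v in row) < 1e-6)   # True: vanish at p
print(max(abs(v) for K in away for row in K for v in row) > 0.1)    # True: not elsewhere
```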

_____________________________________
José Antonio Vallejo
Faculty of Sciences
State University of San Luis Potosi (Mexico)
http://galia.fc.uaslp.mx/~jvallejo
_____________________________________