Discussion: matrix square
Evan Cooch
2012-12-28 15:30:31 UTC
Permalink
> "? matrix" is your friend. It's not clear exactly what you did, but the
> documentation there says that A^2 squares the elements of a matrix, but
> A^^2 is the matrix product of A and A, which is probably what you
> wanted.
>
> Ray

> Thank you. A^^2 is indeed what I wanted.
>
>


That approach works fine, so long as you are interested in
integer powers of a matrix (which seems to be the case here). The
approach I posted earlier is more general. Suppose, for example, you
want the square root of A: A^^(0.5) doesn't evaluate.
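To make the ^ vs. ^^ distinction concrete, a small sketch (the 2x2 matrix is my own example, not from the original question):

```maxima
a: matrix([1,2],[3,4])$
a^2;      /* elementwise squaring: matrix([1,4],[9,16]) */
a^^2;     /* matrix product a . a: matrix([7,10],[15,22]) */
a^^(0.5); /* stays unevaluated: ^^ handles only integer powers */
```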
Stavros Macrakis
2012-12-28 15:50:15 UTC
Permalink
The ^^ operator doesn't do anything with fractional powers. One reason for
that is that they are not unique. For example, there are infinitely many
solutions <http://en.wikipedia.org/wiki/Square_root_of_a_matrix> to
a^^2=ident(2); one infinite family is matrix([1,0],[%r1,-1]) for arbitrary
%r1. I don't believe there is any standard definition of a "principal
value" of the matrix square root in general.
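A quick check of that family in Maxima (a sketch; %r1 is left symbolic):

```maxima
m: matrix([1,0],[%r1,-1])$
m^^2;  /* => matrix([1,0],[0,1]), the identity, for arbitrary %r1 */
```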

-s


Evan Cooch
2012-12-28 16:34:51 UTC
Permalink
There are conditions under which some matrices have unique square
roots, while others do not.

For example, a : matrix([0.8,0.5],[0.2,0.5]);

Unique square-root:
matrix([0.87077787570164,0.3230553107459],[0.12922212727859,0.67694468180352])

But consider the matrix matrix([33,24],[48,57]). This has *two* square
roots: matrix([1,4],[8,5]) and matrix([5,2],[4,7]).


But, your point concerning no standard definition 'in general' seems
to be correct. There has been a fair bit of work on the problem - some
of which I can actually follow! ;-)


Daniel Rupistraliz Avez
2012-12-29 11:07:32 UTC
Permalink
In the case that all the eigenvalues are different, a simple solution for the square root is

squareRoot(a):=block([l1,l2,d,c],
  [l1,l2]:eigenvectors(a),
  d:apply(diag_matrix,l1[1]),
  c:transpose(apply(matrix,map(first,l2))),
  c . sqrt(d) . invert(c));
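Assuming the definition above, a usage sketch on the stochastic matrix from earlier in the thread (it presumes distinct eigenvalues, as the function requires):

```maxima
a: matrix([0.8,0.5],[0.2,0.5])$
s: squareRoot(a);
/* expect roughly matrix([0.8708,0.3231],[0.1292,0.6769]) */
s . s - a;  /* should be numerically the zero matrix */
```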



Aleksas Domarkas
2012-12-29 15:00:49 UTC
Permalink
see: http://www.math.utexas.edu/pipermail/maxima/2012/031320.html
Square roots of matrix

Example1. Find square roots of matrix([33,24],[48,57])
(%i1) X:matrix([a,b],[c,d])$
(%i2) A:matrix([33,24],[48,57])$
(%i3) X.X-A;
(%o3) matrix([b*c+a^2-33,b*d+a*b-24],[c*d+a*c-48,d^2+b*c-57])
(%i4) append(%[1],%[2]);
(%o4) [b*c+a^2-33,b*d+a*b-24,c*d+a*c-48,d^2+b*c-57]
(%i5) solve(%);
(%o5)
[[d=-5,c=-8,b=-4,a=-1],[d=7,c=4,b=2,a=5],[d=5,c=8,b=4,a=1],[d=-7,c=-4,b=-2,a=-5]]
(%i6) makelist(ev(X,%[k]),k,1,4);
(%o6)
[matrix([-1,-4],[-8,-5]),matrix([5,2],[4,7]),matrix([1,4],[8,5]),matrix([-5,-2],[-4,-7])]
Test:
(%i7) makelist(%[k]^^2-A,k,1,4);
(%o7)
[matrix([0,0],[0,0]),matrix([0,0],[0,0]),matrix([0,0],[0,0]),matrix([0,0],[0,0])]

Example2. Find square roots of matrix([0.8,0.5],[0.2,0.5]);
This problem also has four solutions.
(%i8) X:matrix([a,b],[c,d])$
(%i9) A:matrix([0.8,0.5],[0.2,0.5]);
(%o9) matrix([0.8,0.5],[0.2,0.5])
(%i10) ratprint:false$ algebraic:true$
(%i12) load(diag)$
(%i13) mat_function(sqrt,A);
(%o13)
matrix([(2*sqrt(3))/(7*sqrt(10))+5/7,5/7-(5*sqrt(3))/(7*sqrt(10))],[2/7-(2*sqrt(3))/(7*sqrt(10)),(5*sqrt(3))/(7*sqrt(10))+2/7])
(%i14) sr1:expand(rootscontract(ratsimp(%))); float(%);
(%o14)
matrix([sqrt(30)/35+5/7,5/7-sqrt(30)/14],[2/7-sqrt(30)/35,sqrt(30)/14+2/7])
(%o15)
matrix([0.8707778735729,0.32305531606774],[0.1292221264271,0.67694468393226])
Another solution is:
(%i16)
sr2:matrix([5/7-sqrt(30)/35,sqrt(30)/14+5/7],[sqrt(30)/35+2/7,2/7-sqrt(30)/14]);
float(%);
(%o16)
matrix([5/7-sqrt(30)/35,sqrt(30)/14+5/7],[sqrt(30)/35+2/7,2/7-sqrt(30)/14])
(%o17)
matrix([0.55779355499852,1.10551611250369],[0.44220644500148,-0.10551611250369])
(%i18) sr3:-sr1$
(%i19) sr4:-sr2$
Test:
(%i20) [sr1^^2-A,sr2^^2-A,sr3^^2-A,sr4^^2-A]$
(%i21) ratsimp(%);
(%o21)
[matrix([0,0],[0,0]),matrix([0,0],[0,0]),matrix([0,0],[0,0]),matrix([0,0],[0,0])]

best

Aleksas Domarkas
Stavros Macrakis
2012-12-29 15:47:10 UTC
Permalink
Yes, this often works. Sometimes, however, Maxima is unable to solve the
system, e.g. for X^^2-ident(2): "algsys: tried and failed to reduce system
to a polynomial in one variable; give up." It is possible to solve this
using Maxima step-by-step, though.
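One hedged sketch of that step-by-step route (the case split on a+d is the point where the general solver gives up):

```maxima
eqs: flatten(args(matrix([a,b],[c,d])^^2 - ident(2)));
/* the four equations: b*c+a^2-1, b*(a+d), c*(a+d), b*c+d^2-1 */
/* Case a+d # 0: forces b = 0 and c = 0, hence a = ±1 and d = ±1 */
solve(ev(eqs, b=0, c=0), [a,d]);
/* Case a = -d: the off-diagonal equations vanish, leaving b*c = 1-d^2 */
solve(ev(eqs, a=-d), [b]);
```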

-s

Barton Willis
2012-12-29 16:04:59 UTC
Permalink
> Maxima is unable to solve the system, e.g. for X^^2-ident(2):

Ugh--this is pretty terrible, but I think it's correct.

(%i25) algsys(flatten(args(matrix([a,b],[c,d])^^2 - ident(2))),[a,b,c]);
(%o25) [[a=-d,b=-(d^2-1)/%r2,c=%r2]]

--Barton
Stavros Macrakis
2012-12-29 17:56:12 UTC
Permalink
That misses solutions, e.g. matrix([1,0],[0,1]), matrix([1,%r],[0,-1]), and
maybe others as well.

René Grognard
2012-12-29 22:44:47 UTC
Permalink
The matrix solutions of X^^2 = Identity(n), where n is the dimension of the square matrix, lie in the linear span of an orthonormal basis of n matrices representing a Clifford algebra. Let that basis be e[1],...,e[n]; then the Clifford anti-commutation rule is:
(1/2)*(e[i] . e[j] + e[j] . e[i]) = delta(i,j),
with delta(i,j) the Kronecker symbol: delta(i,j) = 1 if i=j, else 0.
With X:sum(x[i]*e[i],i,1,n), X^^2 => sum(x[i]^2,i,1,n), so if that sum is 1, X is a unit Clifford "vector": X^^2 = 1, a generic point on the unit hypersphere. Clifford algebra generalises the theory of complex variables to any dimension.
For n=2 one gets a matrix representation of the "hyperbolic" quaternions.
Try the following:
e[0]:matrix([1,0],[0,1]);
e[1]:matrix([0,1],[1,0]);
e[2]:matrix([0,%i],[-%i,0]);
e[3]:matrix([1,0],[0,-1]);
for i:1 thru 3 do disp(e[i]^^2);
> each item will be the identity matrix
for i:1 thru 3 do for j:i+1 thru 3 do
disp((1/2)*(e[i] . e[j]+e[j] . e[i]));
> each item will be the null matrix
e[1] . e[2];
> the answer is: -i e[3]
e[2] . e[3];
> - i e[1]
e[3] . e[1];
> -i e[2]
X:sum(x[i]*e[i],i,1,3);
expand(X^^2);
> the answer is: (x[1]^2 + x[2]^2 + x[3]^2) e[0]
With:
I:%i*e[1];
J:%i*e[2];
K:%i*e[3];
one retrieves Hamilton's rules for his beloved quaternions:
[I^^2, J^^2, K^^2];
> each item is - e[0]; thus I^^2 = J^^2 = K^^2 = - 1
I . J . K;
> - e[0]; thus I.J.K = -1.
With n=4 the set of e[i] is the set of the Dirac matrices.
Regards, Rene' ***@hotmail.com
René Grognard
2012-12-30 04:38:42 UTC
Permalink
Sorry, I should have added that the use of Clifford algebra to find the most general solution of the matrix equation
X^^2 = ident(n)
works only for even dimensions. I also used "n" in a confusing way.
Another nice example is n=4 (the Dirac matrices).
Try the following Maxima code.
/* We need the tensor product of matrices */
tp(A,B):= block([n,n1,n2,m,m1,m2,p1,p2,C],
  n:matrix_size(A), n1:first(n), n2:second(n),
  m:matrix_size(B), m1:first(m), m2:second(m),
  p1:n1*m1, p2:n2*m2,
  C:zeromatrix(p1,p2),
  for i1:1 thru n1 do
    for i2:1 thru n2 do
      for j1:1 thru m1 do
        for j2:1 thru m2 do
          C[(i1-1)*m1+j1,(i2-1)*m2+j2]:A[i1,i2]*B[j1,j2],
  return(C))$
/* Again we start with the hyperbolic quaternions (Pauli matrices)*/
e[0]:matrix([1,0],[0,1]);
e[1]:matrix([0,1],[1,0]);
e[2]:matrix([0,%i],[-%i,0]);
e[3]:matrix([1,0],[0,-1]);
/* Quaternion units */
I:%i*e[1]; J:%i*e[2]; K:%i*e[3];
/* Now we form the tensor products of the elements of the basis to get a Lorentz invariant basis for 4 dimensional space-time */
E[0]:tp(e[3],e[0]);
E[1]:tp(e[1],I);
E[2]:tp(e[1],J);
E[3]:tp(e[1],K);
/* It forms a vector basis for a Clifford algebra */
for i:0 thru 3 do disp(E[i]^^2);
for i:0 thru 2 do for j:i+1 thru 3 do disp((1/2)*(E[i].E[j]+E[j].E[i]));
/* Now construct a 4-vector X with components: */
x[0]:cosh(w);
x[1]:cos(u)*sin(v)*sinh(w);
x[2]:sin(u)*sin(v)*sinh(w);
x[3]:cos(v)*sinh(w);
/* in that Clifford basis: */
X:sum(x[i]*E[i],i,0,3);
trigsimp(expand(X^^2));
> answer: X^^2 = ident(4). QED
The solution has three free parameters (u,v,w) and the set fits the hyperboloid
x[0]^2 - x[1]^2 - x[2]^2 - x[3]^2 = 1.
Its equation is Lorentz invariant (pseudo-hypersphere).
One can extend the method to any even dimension by the tensor product of matrices, an operation which should be added to Maxima.
Regards,
Rene' ***@hotmail.com
Barton Willis
2012-12-30 12:03:59 UTC
Permalink
> /* We need the tensor product of matrices */

Thanks for your contributions; see also

(%i1) ?? kronecker_product;
-- Function: kronecker_product (<A>, <B>)
Return the Kronecker product of the matrices <A> and <B>.
(%o1) true
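For example, a small sketch (the matrices here are my own illustration):

```maxima
load(linearalgebra)$  /* the package providing kronecker_product, if not autoloaded */
a: matrix([1,2],[3,4])$
b: matrix([0,1],[1,0])$
kronecker_product(a,b);
/* => matrix([0,1,0,2],[1,0,2,0],[0,3,0,4],[3,0,4,0]) */
```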

--Barton
Jaime Villate
2013-01-02 11:56:39 UTC
Permalink
On 12/30/2012 12:03 PM, Barton Willis wrote:
> ? kronecker_product;
Great. This is exactly what I need in another problem I'm working on :)
Last week I wrote some code to calculate the "outer product" of
matrices; I didn't know it was also called Kronecker product and was
already implemented in Maxima.
If some developer is eager to start 2013 by improving the manual, here
is an idea: improve kronecker_product's documentation (linear algebra
package) by adding it to the category Matrices, adding a simple example,
and saying that it is also known as the outer product.
Regards,
Jaime
Henry Baker
2013-01-02 13:18:46 UTC
Permalink
You may be aware that Common Lisp already provides a number of "map/reduce" operations which didn't make it into Maxima because they were added to Common Lisp after Maxima was implemented.

While Maxima does a pretty good job of mapping, I don't believe it does such a good job on "reduce" operations.

http://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node143.html

Common Lisp got the ideas for some of its mapping & reducing operations from Iverson's APL language.

http://en.wikipedia.org/wiki/APL_%28programming_language%29

APL has the following extremely useful operations:

1. (iota n) => sequence from 0..(n-1), i.e.,

(iota 5) => [0,1,2,3,4]

2. "generalized inner product". Given 2 binary operations "product" and "sum", and 2 vectors u,v of equal length compute

sum(i=0..(length-1),product(u[i],v[i])). This generalized inner product is extended to matrices in the usual way, such that
innerproduct("+","*",A,B) is the usual _matrix product_ of the matrices A,B.

Implementing inner product in its full generality requires some way to determine the identity element for a given operation. Thus, "+"[]=0,
"*"[]=1, "min"[]=+infinity, "max"[]=-infinity, etc.

3. "outer product". Basically, the tensor product.

4. Binary select. (Implemented in hardware on CDC & Cray computers). Given a bit-vector b and a generic vector v of the same length,

select(b,v) is the vector whose length is the "number of 1 bits in b" which contains all of the elements of v which have a corresponding
bit in b equal to 1.

5. "Scan" functions. Basically, these provide for _running sums_, etc. Given a binary operation "op" and a vector v,

scan(+,[a,b,c,d,e]) = [0,a,a+b,a+b+c,a+b+c+d,a+b+c+d+e]

I believe that APL may have done it backwards:

rscan(+,[a,b,c,d,e]) = [a+b+c+d+e,b+c+d+e,c+d+e,d+e,e,0]

6. "reverse". reverses a vector.

7. "restructure". This takes a simple 1-dimensional vector and "restructures" the elements into a multidimensional vector:

restructure(2,3,[0,1,2,3,4,5]) = matrix([0,1,2],[3,4,5])

8. multidimensional "transpose". Exchanges 2 indices in an n-dimensional array.
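For comparison, a few of these can be sketched in Maxima itself (the function names here are my own, not built-ins; the scan shown is the inclusive variant):

```maxima
iota(n) := makelist(i, i, 0, n-1)$
iota(5);  /* [0,1,2,3,4] */

/* generalized inner product: reduce "sum" over an elementwise "product" */
gip(s, p, u, v) := lreduce(s, map(p, u, v))$
gip("+", "*", [1,2,3], [4,5,6]);  /* 32, the ordinary dot product */

/* left-to-right running "scan" */
scanl(f, v) := block([acc: first(v)],
  cons(acc, makelist(acc: apply(f, [acc, x]), x, rest(v))))$
scanl("+", [1,2,3,4]);  /* [1,3,6,10] */
```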

Bill Wood
2013-01-02 16:25:07 UTC
Permalink

It might be good to note that the outer/Kronecker/tensor product is also
known as the "direct" product. "Elementary Matrix Theory" by H.Eves
mentions that the Kronecker product is also known as the direct product
and the tensor product. The names Kronecker and tensor seem to be a
little more common in matrix application contexts: "Topics in Matrix
Analysis" by R.Horn and C.Johnson mentions both names, "Matrices:
Methods and Applications" by S.Barnett uses Kronecker product and
"Mathematics of Multidimensional Fourier Transform Algorithms" by
R.Tolimieri,M.An and C.Lu uses tensor product.

--
Bill Wood
Jaime Villate
2013-01-02 12:09:01 UTC
Permalink
On 12/30/2012 04:38 AM, René Grognard wrote:
> /* We need the tensor product of matrices */
> /* Tensor Product of Matrices */
>
> tp(A,B):=
> block([n,n1,n2,m,m1,m2,p1,p2,C],
> n:matrix_size(A), n1:first(n), n2:second(n),
> m:matrix_size(B), m1:first(m), m2:second(m),
> p1:n1*m1, p2:n2*m2,
> C:zeromatrix(p1,p2),
> for i1:1 thru n1 do
> for i2:1 thru n2 do
> for j1:1 thru m1 do
> for j2:1 thru m2 do
> C[(i1-1)*m1+j1,(i2-1)*m2+j2]:A[i1,i2]*B[j1,j2],
> return(C))$
>
Hi,
if you'd like to see another way to get the same result in Maxima, here
it is:

tp(m1,m2):= block ([rows:[],row:[]],
local(c),
for i thru length (m1) do
for j thru length (m2) do
(row: [],
for c in first (row (m1,i)) do
row: append (row, c*first (row (m2,j))),
rows: endcons (row, rows)),
apply (matrix, rows))$

But, as Barton Willis has said, that product is already implemented in
Maxima as kronecker_product(A,B).

You might also be interested in this:

tp ([matrices]) := lreduce (kronecker_product, matrices)$

which allows you to make the tensor product of any number of matrices.
tp(A)
tp(A,B)
tp(A,B,C)
...

Best regards,
Jaime