"Verde-Star","Luis","verde@xanum.uam.mx","\section{Your title here} By {\sl names of all authors here}. \noindent Insert your abstract here By {\sl Luis Verde-Star}.\cr \cr \begin{abstract}\cr We study a family of groups of infinite matrices that includes groups of generalized \cr Pascal and Stirling matrices. To each matrix $[a_{n,k}]$ we associate a\cr generating-function $G(u,z)=\sum \sum a_{n,k} u^n z^k$ considered as formal Laurent \cr series in $z$ with coefficients that are formal Laurent series in $u$.\cr Matrix multiplication corresponds to certain convolution product of the generating-functions.\cr We show that certain families of generating-functions, related to structured matrices, are groups under convolution. In such cases, the computation of the convolution \cr is quite simple. For example, it may be substitution of a series into another one. \cr \cr We obtain one-parameter groups of generalized Stirling matrices from some of the \cr binomial formulas obtained in [L.Verde-Star and H. M. Srivastava, Some binomial formulas of the generalized Appell form, J. Math. Anal. Appl. 274 (2002) 755--771].\cr We extend results from [L. Verde-Star, Groups of generalized Pascal matrices, Linear Algebra Appl. 382 (2004) 179--194].\cr","matrix exponential, companion matrix, polynomial","15A24","65A30",10:17:57","Mon Nov 05 2007","148.206.47.32" "Arroyo","María José","mja@xanum.uam.mx","\section{Eigenvalues} By {\sl Mar\'ia Jos\'e Arroyo}. \noindent In this talk ...","eigenvalues","15A18","",16:44:05","Mon Nov 05 2007","148.206.47.50" "Shahryari","Mohammad","mshahryari@tabrizu.ac.ir","\section{$\Z_2$-graded symmetry classes of tensors} By {\sl M. Shahryari}. \noindent In this paper, we define a natural $\Z_2$-gradation on the symmetry class of tensors $V_{\chi}(G)$. We give the dimensions of {\em even} and {\em odd} parts of this gradation. Also we prove that the even part ( the odd part) of this gradation is zero, if and only if the whole symmetry class is zero.","Gradation, Symmetry classes of tensors, Characters of finite groups","15A69","20C15",06:32:14","Sun Dec 16 2007","82.205.162.118" "Al Zhour","Dr. Zeyad","math_upm@yahoo.com","\section{Matrix Results on Weighted Drazin Inverse and Some Applications} By {Zeyad Al Zhour and Adem Kilicman}. \noindent In this paper, we present two general representations of the weighted Drazin inverse A_{d,W} of an arbitrary rectangular matrix A¡ôM_{m,n} related to Moore-Penrose Inverse (MPI) and Kronecker product of matrices. These generalizations extend earlier results on the Drazin inverse A_{d}, group inverse A_{g} and usual inverse A⁻©ö. Furthermore, some necessary and sufficient conditions for Drazin and weighted Drazin inverses are given for the reverse order law (AB)_{d}=B_{d}A_{d} and (AB)_{d,Z}=B_{d,R}A_{d,W} to hold. Finally, we present the solution of the restricted singular matrix equations using our new approaches.","Kronecker Product, Weighted Drazin Inverses, General algebraic structures, Index. Nilpotent matrix.","15A69","15A09",12:51:50","Tue Dec 25 2007","217.144.8.99" "Bardsley","John","bardsleyj@mso.umt.edu","\section{STOPPING RULES FOR A NONNEGATIVELY CONSTRAINED ITERATIVE METHOD FOR ILL-POSED POISSON IMAGING PROBLEMS} By {\sl Johnathan M. Bardsley}. \noindent Image data is often collected by a charge coupled device (CCD) camera. CCD camera noise is known to be well-modeled by a Poisson distribution. If this is taken into account, the negative-log of the Poisson likelihood is the resulting data-fidelity function. 
We derive, via a Taylor series argument, a weighted least squares approximation of the negative-log of the Poisson likelihood function. The image deblurring algorithm of interest is then applied to the problem of minimizing this weighted least squares function subject to a nonnegativity constraint. Our objective in this paper is the development of stopping rules for this algorithm. We present three stopping rules and then test them on data generated using two different true images and an accurate CCD camera noise model. The results indicate that each of the three stopping rules is effective.","iterative methods, image reconstruction, regularization, statistical methods","65F20","65F30","12:14:18","Thu Jan 03 2008","150.131.67.34" "Rakotondrajao","Fanja","frakoton@univ-antananarivo.mg","\section{Euler's difference table and maximum permanents of $(0,1)$-matrices} By {\sl Fanja Rakotondrajao}. \noindent First, we enumerate the injections from $[m]$ to $[n]$ without $k$-fixed-points, that is, injections $f$ with no $i$ such that $f(i) = i+k$. We then deduce the exact value of the maximum permanent over $m \times n$ $(0,1)$-matrices having exactly $m-k$ zero entries, for any nonnegative integers $0\leq k \leq m \leq n$. Unexpectedly, these values are related to the numbers $d^k_n$ of $k$-fixed-points-permutations over $[n]$. The numbers $d^k_n$ form the derivate of Euler's difference table.","$k$-fixed-points, $(0,1)$-matrices, permanent, injections, $k$-fixed-points-permutations","05A19","05B20","01:10:34","Wed Jan 16 2008","196.192.40.118" "Rump","Siegfried M.","rump@tu-harburg.de","\section{The ratio between the Toeplitz and the unstructured condition number} By {\sl S.M. Rump and H. Sekigawa}. \noindent Recently we showed that the ratio between the normwise Toeplitz structured condition number of a linear system and the general unstructured condition number has a finite lower bound. However, the bound was not explicit, and nothing was known about its quality. In joint work with H. Sekigawa we give an explicit lower bound depending only on the dimension, and we show that this bound is almost sharp. The solution of both problems is based on the minimization of the smallest singular value of a class of Toeplitz matrices and its nice connection to a lower bound on the coefficients of the product of two polynomials.","structured condition number, Toeplitz matrix, Mahler measure, polynomial norms","15A12","26D05","The talk will be given by the first author." "Estatico","Claudio","estatico@unica.it","\section{Block splitting least square regularization for structured matrices arising in nonlinear microwave imaging} By {\sl Claudio Estatico}. \noindent Nonlinear inverse problems arising in many real applications generally lead to very large and structured matrices, which require careful analysis in order to reduce the numerical complexity, both in time and space. Since these problems are ill-posed, any solving strategy based on linearization involves some least squares regularization. \noindent In this talk a microwave imaging problem is introduced: the dielectric properties of an object under test (i.e., the output image to restore) are retrieved by means of its scattered microwave electromagnetic field (i.e., the input data). From a theoretical point of view, the mathematical model is a nonlinear integral equation with a structured shift-variant integral kernel.
From a numerical point of view, linearization and discretization give rise to an ill-conditioned block arrow matrix with structured blocks, which is iteratively solved by a three-level regularizing Inexact-Newton scheme as follows: $(i)$ the first (outer) level of iterations is related to a least squares Gauss-Newton linearization; $(ii)$ the second level of iterations is related to a block splitting iterative scheme; $(iii)$ the third and nested inner level of iterations is related to a regularizing iterative method for any system block arising from any level $(ii)$ iteration. After that, post-processing techniques based on linear super-resolution improve the quality of the results; some numerical results are given and compared.\\ \noindent This is a joint work with Professor J. Nagy of Emory University, Atlanta, and Professors F. Di Benedetto, M. Pastorino, A. Randazzo and G. Bozza of the University of Genova, Italy.\\ \vskip 0.5cm {\bf \Large{Bibliography}}\\ \noindent C. Estatico, G. Bozza, A. Massa, M. Pastorino, A. Randazzo,\\ ``A two steps inexact-Newton method for electromagnetic imaging of dielectric structures from real data'', {\it Inverse Problems}, {\bf 21}, pp. S81--S94, 2005.\\ \noindent C. Estatico, G. Bozza, M. Pastorino, A. Randazzo,\\ ``An Inexact-Newton method for microwave reconstruction of strong scatterers'', {\it IEEE Antennas and Wireless Propagation Letters}, {\bf 5}, pp. 61--64, 2006.\\ \noindent F. Di Benedetto, C. Estatico, J. Nagy,\\ ``Numerical linear algebra for nonlinear microwave imaging'', {\it in preparation}.","regularization, nonlinear inverse problems, inexact Newton methods","65F22","65R32","This is a talk for the Mini-symposium ""MS3 Implementation and Application issues in regularizing least squares and total least squares"""
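The three-level regularizing Inexact-Newton scheme described in the Estatico abstract above combines an outer Gauss-Newton linearization with regularized inner solves. As a minimal illustration of that outer/inner structure only, here is a hypothetical Python sketch on a toy two-parameter problem; the forward map, parameter values and stopping rule are all our assumptions, not the authors' block-splitting implementation.

```python
import numpy as np

def forward(x):
    # Toy nonlinear forward map F: R^2 -> R^3 (assumed purely for illustration).
    return np.array([x[0] ** 2 + x[1], x[0] * x[1], x[1] ** 2 - x[0]])

def jacobian(x):
    # Jacobian of the toy forward map.
    return np.array([[2 * x[0], 1.0],
                     [x[1], x[0]],
                     [-1.0, 2 * x[1]]])

def regularizing_gauss_newton(y, x0, outer_iters=20, alpha=1e-3, tol=1e-10):
    """Outer level: Gauss-Newton linearization F(x) + J s ~ y.
    Inner level: a Tikhonov-regularized least squares solve for the step s,
    standing in for the regularized inner iterations of the abstract."""
    x = x0.astype(float).copy()
    for _ in range(outer_iters):
        r = y - forward(x)                     # current data residual
        J = jacobian(x)
        # Regularized normal equations: (J^T J + alpha I) s = J^T r.
        s = np.linalg.solve(J.T @ J + alpha * np.eye(x.size), J.T @ r)
        x += s
        if np.linalg.norm(s) < tol:            # simple stopping rule
            break
    return x

x_true = np.array([1.5, -0.5])
y = forward(x_true) + 1e-4 * np.random.default_rng(0).standard_normal(3)
print(regularizing_gauss_newton(y, x0=np.array([1.0, 1.0])))
```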
"Maroulas","John","maroulas@math.ntua.gr","\section{Dilation of numerical ranges of normal matrices} By {\sl Maria Adam and John Maroulas}.
\noindent Let $\,A\,$ be an $\,n \times n\,$ normal matrix whose numerical range $\,NR[A]\,$ is a $\,k$-polygon. If for a unit vector $\,v \in \mathbb{C}^{n}\,$ the point $\,v^{*}Av\,$ is an interior point of $\,NR[A],\,$ and $\,P\,$ is an $\,n \times (k-1)\,$ matrix such that $\,P^{*}P=I_{k-1}\,$ and $\,v \bot \mathrm{Im}\,P,\,$ then $\,NR[A]\,$ is circumscribed to $\,NR[C],\,$ where $\,C=P^{*}AP.\,$ In this paper, we investigate the converse direction, showing how to obtain $\,NR[A]\,$ from a $\,(k-1)$-polygon such that the boundary of $\,NR[C]\,$ shares the same tangential points with the sides of both polygons.","compression;eigenvalue;numerical range","15A60","15A18","" "Damm","Tobias","damm@mathematik.uni-kl.de","\section{Algebraic Gramians and Model Reduction for Different System Classes} By {\sl Tobias Damm}. \noindent Model order reduction by balanced truncation is one of the best-known methods for linear systems. It is motivated by the use of energy functionals, preserves stability and provides strict bounds for the approximation error. The computational bottleneck of this method lies in the solution of a pair of dual Lyapunov equations to obtain the controllability and the observability Gramian, but nowadays there are efficient methods which work for large-scale systems as well. These advantages motivate the attempt to apply balanced truncation also to other classes of systems. For example, there is an immediate way to generalize the idea to stochastic linear systems, where one has to consider generalized versions of Lyapunov equations. Similarly, one can define energy functionals and Gramians for nonlinear systems and try to use them for order reduction. In general, however, these Gramians are very complicated and practically not available. As an approximation, one may use algebraic Gramians, which again are solutions of certain generalized Lyapunov equations and which give bounds for the energy functionals. This approach has been taken e.g.~for bilinear systems of the form \begin{eqnarray*} \dot x&=&Ax+\sum_{j=1}^k N_jxu_j+Bu\;,\\ y&=& Cx\;, \end{eqnarray*} which arise e.g.~from the discretization of diffusion equations with Robin-type boundary control. In the talk we review these generalizations for different classes of systems and discuss computational aspects.","model order reduction, Lyapunov equation, bilinear systems, stochastic systems","93","93B40",""
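For the linear case discussed in the Damm abstract above, the Gramians come from a pair of dual Lyapunov equations, and balanced truncation keeps the states associated with the largest Hankel singular values. The following Python sketch is a minimal square-root balanced truncation on random stable data; all matrices and sizes are our assumptions for illustration, not material from the talk.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# Toy stable LTI system x' = Ax + Bu, y = Cx (data assumed for illustration).
rng = np.random.default_rng(1)
n, m, p, r = 20, 2, 2, 6
A = rng.standard_normal((n, n))
A = A - (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)  # shift to make A stable
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

# Dual Lyapunov equations for the Gramians mentioned in the abstract:
#   A P + P A^T + B B^T = 0   (controllability)
#   A^T Q + Q A + C^T C = 0   (observability)
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root balanced truncation: keep the r largest Hankel singular values.
Lp = cholesky(P, lower=True)
Lq = cholesky(Q, lower=True)
U, s, Vt = svd(Lq.T @ Lp)               # s holds the Hankel singular values
T = Lp @ Vt[:r].T / np.sqrt(s[:r])      # right projection
W = Lq @ U[:, :r] / np.sqrt(s[:r])      # left projection, W.T @ T = I_r
Ar, Br, Cr = W.T @ A @ T, W.T @ B, C @ T
print("Hankel singular values kept:", s[:r])
```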
"Rakotondrajao","Fanja","frakoton@univ-antananarivo.mg","\section{EULER'S DIFFERENCE TABLE AND MAXIMUM PERMANENTS OF $(0,1)$-MATRICES} By {\sl Fanja Rakotondrajao}. \noindent \textsc{Abstract. } In this paper we give three different families of objects which are in combinatorial bijection and whose cardinalities are given by Euler's difference table and its derivate. \section{Introduction} We give different objects which are combinatorially equivalent and which are enumerated by the numbers $e^{k}_{n}$ and their derivates $d^{k}_{n}$. Euler introduced the former numbers, which are also called the \textit{difference factorial numbers}. Euler's difference table was studied in \cite{clarke}, \cite{dumont}, \cite{rak1} and \cite{rak}, and its first few values are given in the following table. \[ \begin{tabular} {||r|rcccccc||}\hline \multicolumn{8}{||c||} {$e^{k}_{n}$}\\\hline &$k=0$&1&2&3&4&5&\\ \hline $n=0$&0!&&&&&&\\ 1&0&1!&&&&&\\ 2&1&1&2!&&&&\\ 3&2&3&4&3!&&&\\ 4&9&11&14&18&4!&&\\ 5&44&53&64&78&96&5!&\\ \hline \end{tabular} \] The coefficients $e^{k}_{n}$ of this table are defined by $$e^{n}_{n}=n! \mbox{ and } e^{k-1}_{n}=e^{k}_{n}-e^{k-1}_{n-1}.$$ The first values of the numbers $d^{k}_{n}=\dfrac{e^{k}_{n}}{k!}$, which we call the {\it derivate of Euler's difference table} (see \cite{rak1}, \cite{rak}), are given in the following table. \[ \begin{tabular} {||r|rcccccc||}\hline \multicolumn{8}{||c||} {$d^{k}_{n}$}\\\hline &$k=0$&1&2&3&4&5&\\ \hline $n=0$&1&&&&&&\\ 1&0&1&&&&&\\ 2&1&1&1&&&&\\ 3&2&3&2&1&&&\\ 4&9&11&7&3&1&&\\ 5&44&53&32&13&4&1&\\ \hline \end{tabular} \] Recall that the numbers $d^{k}_{n}$ satisfy the following recurrence relations (see \cite{rak1}, \cite{rak}) $$ \begin{cases} d^{k}_{k}=1,\\ d^{k}_{n}=(n-1)d^{k}_{n-1}+(n-k-1)d^{k}_{n-2} \mbox{ for } n > k\geq 0,\\ kd^{k}_{n}=d^{k-1}_{n-1}+d^{k-1}_{n} \mbox{ for } 1\leq k \leq n,\\ nd^{k}_{n-1}=d^{k}_{n}+d^{k-1}_{n-2} \mbox{ for } 0\leq k\leq n-1.
\end{cases} $$ and their exact values are given respectively by (see \cite{rak1}) $$e^{k}_{n}=\sum^{n-k}_{i=0}(-1)^i \dbinom{n-k}{i} (n-i)!$$ $$d^{k}_{n}=\sum^{n-k}_{i=0}(-1)^{i} \dbinom{n-k}{i}\dfrac{(n-i)!}{k!}.$$ We can find the first six columns of the array $d^{k}_{n}$ (i.e., $d^{k}_{n}$ with $k=0,1,\ldots,5$) in the Online Encyclopedia of Integer Sequences \newline \centerline{(OEIS, http://www.research.att.com/$\sim$njas/sequences/)} as sequences $A000166$, $A000255$, $A000153$, $A000261$, $A001909$ and $A001910$ respectively, and the first seven diagonals (i.e., $d^{n}_{n+k}$ with $k=0,1,\ldots,6$) as sequences $A000012$, $A000027$, $A002061$, $A094792$, $A094793$, $A094794$ and $A094795$ respectively. The diagonals are interpreted as the maximum values of the permanent (\cite{bru}, \cite{minc}) among all $(0,1)$-matrices (see \cite{song}) of dimension $(n-k) \times n$ with exactly $n-k$ zero entries for $k=1,2,\ldots$, and the columns as the numbers of injections from $[n-k]$ to $[n]$ with no fixed points. The author (\cite{rak1}, \cite{rak}) introduced the $k$-fixed-points-permutations, that is, permutations whose fixed points belong to $[k]$ and whose every cycle has at most one point in common with $[k]$. On the other hand, $(0,1)$-matrices and their permanents play an important part in many fields of discrete mathematics, namely in graph theory, coding theory, combinatorics and linear algebra. In this paper we will show that these three different objects are in combinatorial bijection and will give a general result on the maximum permanent of $(0,1)$-matrices. We will denote by $[n]$ the set $\{1,\ldots,n\}$ and by $D^{k}_{n}$ the set of $k$-fixed-points-permutations. We say that an element $x \in X$ is a fixed point of the map $f$ from the set $X$ to the set $Y$ if $f(x)=x$, and that an element $x$ is a $k$-succession if $f(x)=x+k$. We say that the map $f$ is injective (an injection) if $f(x_1)=f(x_2)$ implies $x_1=x_2$. We will denote by $Im(f)$ the image of the map $f$ and by $W^{k}_{n}$ the set of injections from $[n-k]$ to $[n]$ without fixed points. We will write $f=f(1)f(2)\ldots f(n-k).$ \section{Injections from $[n-k]$ to $[n]$ without fixed points} \begin{theorem} The number $d^{k}_{n}$ enumerates the injections from $[n-k]$ to $[n]$ without fixed points. \end{theorem} \begin{proof} For an integer $0\leq i \leq n-k,$ the number of injections from $[i]$ to $[n]$ is equal to $\dfrac{n!}{(n-i)!}$. The number of injections from $[n-k]$ to $[n]$ having $i$ given fixed points is $\dfrac{(n-i)!}{k!}$, and the number of ways of selecting $i$ elements from $n-k$ elements is $\dbinom{n-k}{i}$. By the inclusion-exclusion principle \cite{rior}, the number of injections from $[n-k]$ to $[n]$ without fixed points is $$\sum^{n-k}_{i=0}(-1)^{i} \dbinom{n-k}{i}\dfrac{(n-i)!}{k!},$$ which is the formula for the numbers $d^{k}_{n}$. \end{proof} \section{Bijection between $D^{k}_{n}$ and $W^{k}_{n}$} Let $k$ and $n$ be two integers such that $0\leq k\leq n$. Let us consider the map $\phi$ from $D^{k}_{n}$ to $W^{k}_{n}$ which associates to a permutation $\sigma$ the map $f$ defined by $$f(i)=n+1-\sigma(n+1-i) \mbox{ for } i\in [n-k].$$ \begin{theorem} The map $\phi$ is a bijection between $D^{k}_{n}$ and $W^{k}_{n}$. \end{theorem} \begin{proof} Notice that if $k=0$, then the sets $D^{k}_{n}$ and $W^{k}_{n}$ are the same: both equal the set of permutations of $[n]$ without fixed points. Assume $k\geq 1$. Let $\sigma$ be a $k$-fixed-points-permutation. For $1\leq i \leq k$ we have $\sigma(i)=i$ or $\sigma(i)>k$, and for $k+1\leq i\leq n$ we have $\sigma(i)\neq i$.
First we prove that the map $\phi$ is well defined, that is, that the map $f=\phi(\sigma)$ is an injection from $[n-k]$ to $[n]$ without fixed points. If we had $f(i)=i$, that is, $n+1-\sigma(n+1-i)=i$, then we would have $\sigma(n+1-i)=n+1-i$, which is impossible since $i \in [n-k]$ and the fixed points of the permutation $\sigma$ lie in the subset $[k]$. By the construction of the map $\phi$, for a given $k$-fixed-points-permutation over $[n]$ the map $f=\phi(\sigma)$ is unique, and if $\sigma_1 \neq \sigma_2$, then $\phi(\sigma_1)\neq \phi(\sigma_2)$. The inverse of the map $\phi$ associates to a given injection $f$ of the set $W^{k}_{n}$ the $k$-fixed-points-permutation $\sigma$ defined by $$\sigma(n+1-i)=n+1-f(i)\mbox{ for } i\in [n-k].$$ \end{proof} \begin{corollary} For all integers $i\in [k]$ and for all $f=\phi(\sigma)$, we have $$\sigma(i)=i \Leftrightarrow n+1-i \notin Im(f).$$ \end{corollary} \begin{proof} For any integer $i\in [k]$, we have $n-k+1 \leq n+1-i \leq n$ and $\sigma(i)=i \Leftrightarrow f(n+1-i)=n+1-i.$ \end{proof} Let us illustrate our map $\phi$ by an example. \newline Let $k=3$, $n=12$ and $\sigma=(1\ 7\ 4)(2)(3\ 8\ 12)(6\ 9)(5\ 10\ 11).$ We have \begin{itemize} \item[] $f(1)=13-\sigma(12)=10$ \item[] $f(2)=13-\sigma(11)=8$ \item[] $f(3)=13-\sigma(10)=2$ \item[] $f(4)=13-\sigma(9)=7$ \item[] $f(5)=13-\sigma(8)=1$ \item[] $f(6)=13-\sigma(7)=9$ \item[] $f(7)=13-\sigma(6)= 4$ \item[] $f(8)=13-\sigma(5)= 3$ \item[] $f(9)=13-\sigma(4)= 12,$ \end{itemize} that is, we get $f=\phi(\sigma)= 10\ 8\ 2\ 7\ 1\ 9\ 4\ 3\ 12.$ And for its inverse, we have \begin{itemize} \item[] $\sigma(12)=13-f(1)=3$ \item[] $\sigma(11)=13-f(2)=5$ \item[] $\sigma(10)=13-f(3)=11$ \item[] $\sigma(9)=13-f(4)=6$ \item[] $\sigma(8)=13-f(5)=12$ \item[] $\sigma(7)=13-f(6)=4$ \item[] $\sigma(6)=13-f(7)= 9$ \item[] $\sigma(5)=13-f(8)= 10$ \item[] $\sigma(4)=13-f(9)= 1,$ \end{itemize} that is, $\sigma=(8\ 12\ 3)(11\ 5\ 10)(9\ 6)(7\ 4\ 1)(2).$ \section{Permutations without $k$-successions} We say that an integer $i$ is a $k$-succession of the permutation $\sigma$ if $\sigma(i)=i+k$ (see \cite{rak}). \begin{theorem} \cite{rak} The number $e^{k}_{n}$ enumerates the permutations over $[n]$ without $k$-successions. \end{theorem} \begin{proof} Notice that if an integer $p$ is a $k$-succession of the permutation $\sigma$, then $p \in [n-k]$. The number of permutations of $[n]$ having $i$ given $k$-successions is equal to $(n-i)!$, and the number of ways of selecting $i$ elements from $n-k$ elements is $\dbinom{n-k}{i}$. By the inclusion-exclusion principle \cite{rior}, the number of permutations over $[n]$ without $k$-successions is $$\sum^{n-k}_{i=0}(-1)^{i} \dbinom{n-k}{i}(n-i)!=e^{k}_{n}.$$ \end{proof} \section{Injections without $k$-successions} \begin{theorem} For all integers $0\leq k \leq m\leq n$, the number $d(m,n,k)$ of injections from $[m]$ to $[n]$ without $(n-m+k)$-successions is equal to $$\sum^{m-k}_{i=0}(-1)^{i}\dbinom{m-k}{i}\dfrac{(n-i)!}{(n-m)!}.$$ \end{theorem} \begin{proof} Notice that if an integer $p$ is an $(n-m+k)$-succession of a map $f$ from $[m]$ to $[n]$, then $p \in [m-k]$. The number of injections from $[m]$ to $[n]$ having $i$ given $(n-m+k)$-successions is equal to $\dfrac{(n-i)!}{(n-m)!}$, and the number of ways of selecting $i$ elements from $m-k$ elements is ${{m-k}\choose i}$. By the inclusion-exclusion principle \cite{rior}, we get the required result.
\end{proof} \begin{corollary} For all nonnegative integers $r$ and $0\leq k \leq n$, the number $d^{(r)}_{n,k}=d(n,n+r,k)$ of injections from $[n]$ to $[n+r]$ without $(r+k)$-successions is equal to $$\sum^{n-k}_{i=0}(-1)^{i}\dbinom{n-k}{i}\dfrac{(n+r-i)!}{r!}.$$ \end{corollary} Let us give the first values of the numbers $d^{(r)}_{n,k}$ for a few values of $r$. \[ \begin{tabular} {||r|rcccccc||}\hline \multicolumn{8}{||c||} {$d^{(0)}_{n,k}$}\\\hline &$k=0$&1&2&3&4&5&6\\ \hline $n=0$&0!&&&&&&\\ 1&0&1!&&&&&\\ 2&1&1&2!&&&&\\ 3&2&3&4&3!&&&\\ 4&9&11&14&18&4!&&\\ 5&44&53&64&78&96&5!&\\ 6&265&309&362&426&504&600&6!\\\hline \end{tabular} \] \[ \begin{tabular} {||r|rccccc||}\hline \multicolumn{7}{||c||} {$d^{(1)}_{n,k}$}\\\hline &$k=0$&1&2&3&4&5\\ \hline $n=0$&1&&&&&\\ 1&1&2!&&&&\\ 2&3&4&3!&&&\\ 3&11&14&18&4!&&\\ 4&53&64&78&96&5!&\\ 5&309&362&426&504&600&6!\\\hline \end{tabular} \hfill \qquad \begin{tabular} {||r|rcccc||}\hline \multicolumn{6}{||c||} {$d^{(2)}_{n,k}$}\\\hline &$k=0$&1&2&3&4\\ \hline $n=0$&1&&&&\\ 1&2&3&&&\\ 2&7&9&12&&\\ 3&32&39&48&60&\\ 4&181&213&252&300&360\\\hline \end{tabular} \] \[ \begin{tabular} {||r|rccc||}\hline \multicolumn{5}{||c||} {$d^{(3)}_{n,k}$}\\\hline &$k=0$&1&2&3\\ \hline $n=0$&1&&&\\ 1&3&4&&\\ 2&13&16&20&\\ 3&71&84&100&120\\\hline \end{tabular} \] Unexpectedly, we obtain the following theorem. \begin{theorem}\label{main} For all nonnegative integers $r$ and $0\leq k \leq n$, we have $$d^{(r)}_{n,k}= \dfrac{(k+r)!}{r!}d^{k+r}_{n+r}.$$ \end{theorem} \begin{proof} Let us denote by $\mathbb{I}^{r}_{k+r}$ the set of all injections from the set $[k]$ to $[k+r]$ and by $\mathbb{S}(n,r,k)$ the set of all injections from $[n]$ to $[n+r]$ without $(r+k)$-successions. We will construct a bijection between $\mathbb{S}(n,r,k)$ and $W^{r+k}_{n+r}\times \mathbb{I}^{r}_{k+r}.$ To a given injection $f \in \mathbb{S}(n,r,k)$ we associate the pair $(g,\gamma) \in W^{r+k}_{n+r}\times \mathbb{I}^{r}_{k+r}$ defined by $$ g(i)= \begin{cases} f(i)+n-k \pmod{n+r} & \mbox{if } f(i)\neq r+k,\\ n+r & \mbox{if } f(i)=r+k, \end{cases} \qquad \mbox{for } i\in [n-k], $$ and from $f(n-k+1)\cdots f(n)$ we standardise to get $\gamma(1)\cdots\gamma(k)$. More formally, we take the order-preserving bijection $\iota:[n+r]\setminus f([n-k]) \to [k+r]$ and define $\gamma(i)=\iota \circ f(n-k+i)$ for all $i \in [k]$. Notice that the injection $g$ has no fixed points: if an integer $i \in [n-k]$ were a fixed point for $g$, that is, $g(i)=i$, then we would have $f(i)+n-k \equiv i \pmod{n+r}$, that is, $f(i)=i+r+k$, and the integer $i$ would be an $(r+k)$-succession for the injection $f$. Notice also that the inverse map $(g,\gamma) \mapsto f$ is defined by \[f(i)= \begin{cases} g(i)+k+r \pmod{n+r} & \text{if } g(i)\neq n+r,\\ r+k & \text{if } g(i)=n+r, \end{cases} \qquad \text{for all }i\in[n-k], \] and $$f(n-k+i)=\iota^{-1}\circ \gamma(i) \text{ for all } i\in[k].$$ \end{proof} \section{Maximum permanents of $(0,1)$-matrices} \begin{definition} Let $A=(a_{ij})$ be an $m \times n$ matrix with $m\leq n$. The \textit{permanent} of $A$, written $Per\ A$, is defined by $$Per\ A=\sum_{f}a_{1f(1)}a_{2f(2)}\cdots a_{mf(m)},$$ where the summation extends over all injections from $[m]$ to $[n]$. If $m>n$, we define $Per\ A=Per\ A^{T}.$ Let $A$ and $B$ be $m\times n$ matrices.
We say that $B$ is combinatorially equivalent to $A$ if there exist two permutation matrices $P$ and $Q$ of orders $m$ and $n$ respectively such that $B=PAQ$. \end{definition} Let $k$ be an integer with $0\leq k\leq n$. We will denote by $\mathbb{U}(m, n, k)$ the set of all $m \times n\ (0,1)$-matrices with exactly $k$ zero entries. We first give some basic properties of the permanent function. \begin{remark} By convention, for all integers $n\geq 0$ and all matrices $A\in \mathbb{U}(0, n,0)$, we set $Per\ A=1.$ \end{remark} \begin{theorem} \cite{minc} \begin{enumerate} \item For any $m\times n$ matrix $A$, $Per\ A= Per\ A^{T}.$ \item If $A$ and $B$ are $m\times n$ combinatorially equivalent matrices, then $Per\ A=Per\ B.$ \end{enumerate} \end{theorem} In \cite{bru}, Brualdi et al. determined the maximum permanents for $n$-square $(0,1)$-matrices with a fixed number of zero entries. In \cite{song}, Song et al. determined the extremes of permanents over $\mathbb{U}(m,n,k)$. \begin{theorem} \cite{song} For $2\leq k\leq m$, the maximum permanent over $\mathbb{U}(m,n,k)$ is $$\sum^{m}_{i=0}(-1)^{i}{k\choose i}{{n-i}\choose{m-i}}(m-i)!.$$ This value is attained by the matrices that are combinatorially equivalent to the matrix $$A_{max}=\left[ \begin{array}{c|c} 1_{k\times k}-I_{k} & 1_{k\times (n-k)}\\ \hline \multicolumn{2}{c}{1_{(m-k)\times n}} \end{array} \right], $$ where $1_{s\times t}$ is the $s\times t\ (0,1)$-matrix with all entries equal to $1$ and $I_{k}$ is the $k$-square identity matrix. \end{theorem} \begin{theorem} For all integers $0\leq k\leq n$, the maximum permanent over $\mathbb{U}(n-k,n,n-k)$ is equal to $d^{k}_{n}$, and it is attained by the matrices each of whose rows contains exactly one zero and each of whose columns contains at most one zero. \end{theorem} \begin{proof} Let $A$ be an $(n-k)\times n\ (0,1)$-matrix in $\mathbb{U}(n-k,n,n-k)$ each of whose rows contains one zero and each of whose columns contains at most one zero. This matrix is combinatorially equivalent to $$ M=(m_{ij})= \left[ \begin{array}{c|c} 1_{(n-k)\times (n-k)}-I_{n-k}&{1}_{(n-k)\times k} \end{array} \right] =\left[ \begin{array}{ccc|c} 0&&&\\ &\ddots&&\\ &&0& \end{array} \right], $$ where all the entries in blank positions are $1$'s. By the definition of the permanent, $\displaystyle{Per\ M=\sum_{f}m_{1f(1)}m_{2f(2)}\cdots m_{n-k\ f(n-k)}}$, where the summation extends over all injections from $[n-k]$ to $[n]$. In the expansion of $Per\ M$, determining the terms which do not contain zeros is equivalent to determining the number of injections from $[n-k]$ to $[n]$ without fixed points. And this gives the required result. \end{proof} \begin{theorem} For all integers $0\leq k\leq n$, the maximum permanent over $\mathbb{U}(n,n,n-k)$ is equal to $e^{k}_{n}$, and it is attained by the matrices each of whose rows and columns contains at most one zero. \end{theorem} \begin{proof} Let $A$ be an $n$-square $(0,1)$-matrix in $\mathbb{U}\left(n,n,n-k\right)$ each of whose rows and columns contains at most one zero. This matrix is combinatorially equivalent to $M =\left( m_{ij} \right)$ with $$m_{ij}= \begin{cases} 0 & \mbox{if } j=i+k,\ 1\leq i \leq n-k,\\ 1 & \mbox{otherwise.} \end{cases} $$ In the expansion of $Per\ M$, determining the terms which do not contain zeros is equivalent to determining the number of permutations over $[n]$ without $k$-successions. And this gives the required result. \end{proof} \begin{theorem} For all integers $0\leq k \leq m\leq n$, the maximum permanent over $\mathbb{U}(m,n,m-k)$ enumerates the number of injections from $[m]$ to $[n]$ without $(n-m+k)$-successions.
\end{theorem} \begin{proof} The matrices of the set $\mathbb{U}(m,n,m-k)$ whose permanent is maximal are combinatorially equivalent to the matrix $$A=(a_{ij})=\left[ \begin{array}{c|c} {1}_{(m-k)\times (n-m+k)} & 1_{(m-k)\times (m-k)}-I_{m-k}\\ \hline \multicolumn{2}{c}{{1}_{k\times n}} \end{array} \right].$$ In the expansion of $Per\ A$, determining the terms which do not contain zeros is equivalent to determining the number of injections from $[m]$ to $[n]$ without $(n-m+k)$-successions. And this gives the required result. \end{proof} \begin{corollary} For all integers $0\leq k \leq m\leq n$, the maximum permanent over $\mathbb{U}(m,n,m-k)$ is equal to $$\dfrac{(n-m+k)!}{(n-m)!}d^{n-m+k}_{n}.$$ \end{corollary} \begin{proof} Using Theorem \ref{main}, we obtain the required result. \end{proof} \begin{corollary} For all integers $0\leq k \leq m\leq n$, we have $$\sum^{m-k}_{i=0}(-1)^{i}{{m-k}\choose i}{{n-i}\choose{m-i}}(m-i)!=\dfrac{(n-m+k)!}{(n-m)!}d^{n-m+k}_{n}.$$ \end{corollary} \section{Acknowledgements} The author is very grateful to a referee of the paper \cite{rak} for pointing out the two other combinatorial interpretations of the numbers $d^{k}_{n}$ and for suggesting to find bijective proofs. \begin{thebibliography}{99} \bibitem{bru} R. A. Brualdi, J. L. Goldwasser, T. S. Michael, Maximum permanents of matrices of zeros and ones, {\it J. Combin. Theory Ser. A} {\bf 47} (1988) 207--245. \bibitem{clarke} R. J. Clarke, G. N. Han, J. Zeng, A combinatorial interpretation of the Seidel generation of $q$-derangement numbers, {\it Annals of Combinatorics} {\bf 1} (1997) 313--327. \bibitem{dumont} D. Dumont, A. Randrianarivony, D\'erangements et nombres de Genocchi, {\it Discrete Math.} {\bf 132} (1997) 37--49. \bibitem{minc} H. Minc, Permanents, in: {\it Encyclopedia Math. Appl.}, vol. {\bf 6}, Addison-Wesley, Reading (1978). \bibitem{rak1} F. Rakotondrajao, $k$-fixed-points-permutations, {\it Pure Math. Appl.} {\bf 16} (2006) xx--xx. \bibitem{rak} F. Rakotondrajao, On Euler's difference table, in: {\it Proc. Formal Power Series \& Algebraic Combinatorics (FPSAC) 07}, Tianjin, China (2007). \bibitem{rior} J. Riordan, \textit{An Introduction to Combinatorial Analysis}, John Wiley \& Sons, New York (1958). \bibitem{song} S. Z. Song, S. G. Hwang, S. H. Rim, G. S. Cheon, Extremes of permanents of $(0,1)$-matrices, {\it Linear Algebra and its Applications} {\bf 373} (2003) 197--210. \end{thebibliography}","$k$-fixed-points-permutations, $k$-succession, $(0,1)$-matrices, permanent, injections, inclusion-exclusion principle","05A19","05B20","The author was supported by the `Soutien aux Activit\'es de Recherche Informatique et Math\'ematiques en Afrique' (SARIMA) project and by LIAFA during her stay at the University of Paris 7, France as invited `Ma\^itre de conf\'erences'."
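The tables and closed formulas in the Rakotondrajao submission above are easy to check numerically. The short Python sketch below (our illustration, not part of the submission) builds Euler's difference table from the recurrence $e^{n}_{n}=n!$, $e^{k-1}_{n}=e^{k}_{n}-e^{k-1}_{n-1}$, forms the derivate $d^{k}_{n}=e^{k}_{n}/k!$, and verifies the inclusion-exclusion formulas:

```python
from math import comb, factorial

N = 8
# Euler's difference table: e[n][n] = n!, and e[n][k-1] = e[n][k] - e[n-1][k-1].
e = [[0] * (N + 1) for _ in range(N + 1)]
for n in range(N + 1):
    e[n][n] = factorial(n)
    for k in range(n, 0, -1):
        e[n][k - 1] = e[n][k] - e[n - 1][k - 1]

# Derivate of the table: d[n][k] = e[n][k] / k! (always an integer).
d = [[e[n][k] // factorial(k) for k in range(n + 1)] for n in range(N + 1)]

# Verify the closed inclusion-exclusion formulas quoted in the paper.
for n in range(N + 1):
    for k in range(n + 1):
        assert e[n][k] == sum((-1) ** i * comb(n - k, i) * factorial(n - i)
                              for i in range(n - k + 1))
        assert d[n][k] * factorial(k) == e[n][k]

print([d[5][k] for k in range(6)])  # row n=5 of the d-table: [44, 53, 32, 13, 4, 1]
```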
} In this paper we will give three different objects which are combinatorially bijective and whose values are given by Euler's difference table and its derivate. \section{Introduction} We will give different objects which are combinatorially equivalent and which are enumerated by the numbers $e^{k}_{n}$ and their derivate $d^{k}_{n}$. Euler introduced the first numbers which are also called the \textit{difference factorial numbers}. Euler's difference table was studied in \cite{clarke}, \cite{dumont}, \cite{rak1} and \cite{rak} and some few first values are given in the following table. \[ \begin{tabular} {||r|rcccccc||}\hline \multicolumn{8}{||c||} {$e^{k}_{n}$}\\\hline &$k=0$&1&2&3&4&5&\\ \hline $n=0$&0!&&&&&&\\ 1&0&1!&&&&&\\ 2&1&1&2!&&&&\\ 3&2&3&4&3!&&&\\ 4&9&11&14&18&4!&&\\ 5&44&53&64&78&96&5!&\\ \hline \end{tabular} \] The coefficients $e^{k}_{n}$ of this table are defined by $$e^{n}_{n}=n! \mbox{ and } e^{k-1}_{n}=e^{k}_{n}-e^{k-1}_{n-1}.$$ The first values of the numbers $d^{k}_{n}=\dfrac{e^{k}_{n}}{k!}$ which we call the {\it derivate of Euler's difference table} (see \cite{rak1}, \cite{rak}) are given in the following table . \[ \begin{tabular} {||r|rcccccc||}\hline \multicolumn{8}{||c||} {$d^{k}_{n}$}\\\hline &$k=0$&1&2&3&4&5&\\ \hline $n=0$&1&&&&&&\\ 1&0&1&&&&&\\ 2&1&1&1&&&&\\ 3&2&3&2&1&&&\\ 4&9&11&7&3&1&&\\ 5&44&53&32&13&4&1&\\ \hline \end{tabular} \] Recall that the numbers $d^{k}_{n}$ satisfy the different following recursive relations (see \cite{rak1}, \cite{rak}) $$ \begin{cases} d^{k}_{k}=1,\\ d^{k}_{n}=(n-1)d^{k}_{n-1}+(n-k-1)d^{k}_{n-2} \mbox{ for } n > k\geq 0,\\ kd^{k}_{n}=d^{k-1}_{n-1}+d^{k-1}_{n} \mbox{ for } 1\leq k \leq n,\\ nd^{k}_{n-1}=d^{k}_{n}+d^{k-1}_{n-2} \mbox{ for } 0\leq k\leq n-1. \end{cases} $$ and their exact values are defined respectively by (see \cite{rak1}) $$e^{k}_{n}=\sum^{n-k}_{i=0}(-1)^i \dbinom{n-k}{i} (n-i)!$$ $$d^{k}_{n}=\sum^{n-k}_{i=0}(-1)^{i} \dbinom{n-k}{i}\dfrac{(n-i)!}{k!}.$$ We can find the first six columns of the array $d^{k}_{n}$ (i.e., $d^{k}_{n}$ with $k=0,1,\ldots,5$) in the Online Encyclopedia of Integer Sequences \newline \centerline{(OEIS, http://www.research.att.com/$\sim$njas/sequences/)} as sequences $A000166$, $A000153$, $A00261$, $A001909$ and $A001910$ respectively, and the first seven diagonals (i.e., $d^{n}_{n+k}$ with $k=0,1,\ldots,6$) as sequences $A000012$, $A000027$, $A002061$, $A094792$, $A094793$, $A094794$ and $A094795$ respectively. The diagonals are interepreted as the maximum values of permanent (\cite{bru}, \cite{minc}) among all $0-1$ matrices (see \cite{song}) of dimension $(n-k) \times n$ with exactly $n-k$ zero entries for $k=1,2,\ldots$ and the columns as the number of injections from $[n-k]$ to $[n]$ with no fixed points. The author (\cite{rak1}, \cite{rak}) introduced the $k$-fixed-points-permutations, that is, permutations whose fixed points belong to $[k]$ and whose every cycle has at most one point in common with $[k]$. In the other hand, $(0,1)$-matrices and their permanent play important part in many fields of discrete mathematics namely in graph theory, coding theory, combinatorics and linear algebra. In this paper we will show that these different three objects are combinatorially bijective and will give a general result on the maximum permanent of $(0,1)$-matrices. We will denote by $[n]$ the set $\{1,\ldots,n\}$ and by $D^{k}_{n}$ the set of $k$-fixed-points-permutations. 
We say that an element $x \in X$ is a fixed point of the map $f$ from the set $X$ to the set $Y$ if $f(x)=x$ and an element $x$ is a $k$-succession if $f(x)=x+k$. We say that the map $f$ is injective (an injection) if $f(x_1)=f(x_2)$ then $x_1=x_2$. We will denote by $Im(f)$ the set of the image of the map $f$ and by $W^{k}_{n}$ the set of injections from $[n-k]$ to $[n]$ without fixed points. We will write $f=f(1)f(2)\ldots f(n-k).$ \section{Injections from $[n-k]$ to $[n]$ without fixed points} \begin{theorem} The number $d^{k}_{n}$ enumerates the number of injections from $[n-k]$ to $[n]$ without fixed points. \end{theorem} \begin{proof} For an integer $0\leq i \leq n-k,$ the number of injections from $[i]$ to $[n]$ is equal to $\dfrac{n!}{(n-i)!}$. The number of injections from $[n-k]$ to $[n]$ having $i$ fixed points is $\dfrac{(n-i)!}{k!}$, and the number of selecting $i$ elements from $n-k$ elements is $\dbinom{n-k}{i}$. By the inclusion-exclusion principle \cite{rior}, we get the number of injections from $[n-k]$ to $[n]$ without fixed points which is $$\sum^{n-k}_{i=0}(-1)^{i} \dbinom{n-k}{i}\dfrac{(n-i)!}{k!},$$ which is the formula of the numbers $d^{k}_{n}$. \end{proof} \section{Bijection between $D^{k}_{n}$ and $W^{k}_{n}$} Let $k$ and $n$ be two integers such that $0\leq k\leq n$. Let us consider the map $\phi$ from $D^{k}_{n}$ to $W^{k}_{n}$ which associates to a permutation $\sigma$ a map $f$ defined by $$f(i)=n+1-\sigma(n+1-i) \mbox{ for } i\in [n-k].$$ \begin{proof} Notice that if the integer $k=0$, then the sets $D^{k}_{n}$ and $W^{k}_{n}$ are the same: they are all the set of permutations without fixed points over $[n]$. Assume $k\geq 1$. Let $\sigma$ be a $k$-fixed-points-permutation. For $1\leq i \leq k$ we have $\sigma(i)=i$ or $\sigma(i)>k$ and for $k+1\leq i\leq n$ we have $\sigma(i)\neq i$. First we prove that the map $\phi$ is well defined, that is, we prove that the map $f=\phi(\sigma)$ is an injection from $[n-k]$ to $[n]$. If we had $f(i)=i$, that is, $n+1-\sigma(n+1-i)=i$, then we should have $\sigma(n+1-i)=n+1-i$ (impossible since $i \in [n-k]$ and the fixed points of the permutation $\sigma$ are in the subsetb $[k]$). By the construction of the map $\phi$, for a given $k$-fixed-points-permutation over $[n]$, the map $f=\phi(\sigma)$ is unique and if $\sigma_1 \neq \sigma_2$, then $\phi(\sigma_1)\neq \phi(\sigma_2)$. The inverse of the map $\phi$ associates to a given injection $f$ of the set $W^{k}_{n}$ the $k$-fixed-points-permutation $\sigma$ defined by $$\sigma(n+1-i)=n+1-f(i)\mbox{ for } i\in [n-k].$$ \end{proof} \begin{corollary} For all integers $i\in [k]$ and for all $f=\phi(\sigma)$, we have $$\sigma(i)=i \Leftrightarrow n+1-i \notin Im(f).$$ \end{corollary} \begin{proof} For any integer $i\in [k]$, we have $n-k+1 \leq n+1-i \leq n$ and $\sigma(i)=i \Leftrightarrow f(n+1-i)=n+1-i.$ \end{proof} Let us illustrate our map $\phi$ by an example. 
\newline Let $k=3$ and $\sigma=(1\ 7\ 4)(2)(3\ 8\ 12)(6\ 9)(5\ 10\ 11).$ We have \begin{itemize} \item[] $f(1)=13-\sigma(12)=10$ \item[] $f(2)=13-\sigma(11)=8$ \item[] $f(3)=13-\sigma(10)=2$ \item[] $f(4)=13-\sigma(9)=7$ \item[] $f(5)=13-\sigma(8)=1$ \item[] $f(6)=13-\sigma(7)=9$ \item[] $f(7)=13-\sigma(6)= 4$ \item[] $f(8)=13-\sigma(5)= 3$ \item[] $f(9)=13-\sigma(4)= 12,$ \end{itemize} that is, we get $f=\phi(\sigma)= 10\ 8\ 2\ 7\ 1\ 9\ 4\ 3\ 12.$ And for its inverse, we have \begin{itemize} \item[] $\sigma(12)=13-f(1)=3$ \item[] $\sigma(11)=13-f(2)=5$ \item[] $\sigma(10)=13-f(3)=11$ \item[] $\sigma(9)=13-f(4)=6$ \item[] $\sigma(8)=13-f(5)=12$ \item[] $\sigma(7)=13-f(6)=4$ \item[] $\sigma(6)=13-f(7)= 9$ \item[] $\sigma(5)=13-f(8)= 10$ \item[] $\sigma(4)=13-f(9)= 1,$ \end{itemize} that is, $\sigma=(8\ 12\ 3)(11\ 5\ 10)(9\ 6)(7\ 4\ 1)(2).$ \section{Permutations without $k$-successions} We say that an integer $i$ is a $k$-succession of the permutation $\sigma$ if $\sigma(i)=i+k$ (see \cite{rak}). \begin{theorem} \cite{rak} The number $e^{k}_{n}$ enumerates the permutations over $[n]$ without $k$-successions. \end{theorem} \begin{proof} Notice that if an integer $p$ is a $k$-succession of the permutation $\sigma$, then $p \in [n-k]$. The number of injections from $[n]$ to $[n]$ having $i$ numbers of $k$-successions is equal to $(n-i)!$, and the number of selecting $i$ elements from $n-k$ elements is $\dbinom{n-k}{i}$. By the inclusion-exclusion principle \cite{rior}, we get the number of permutations without fixed points over $[n]$ which is $$\sum^{n-k}_{i=0}(-1)^{i} \dbinom{n-k}{i}(n-i)!=e^{k}_{n}.$$ \end{proof} \section{Injections without $k$-successions} \begin{theorem} For all integers $0\leq k \leq m\leq n$, the number $d(m,n,k)$ of injections from $[m]$ to $[n]$ without $(n-m+k)$-successions is equal to $$\sum^{m-k}_{i=0}(-1)^{i}\dbinom{m-k}{i}\dfrac{(n-i)!}{(n-m)!}.$$ \end{theorem} \begin{proof} Notice that if an integer $p$ is a $(n-m+k)$-succession of a map $f$ from $[m]$ to $[n]$, then $p \in [m-k]$. The number of injections from $[m]$ to $[n]$ having $i$ numbers of $(n-m+k)$-successions is equal to $\dfrac{(n-i)!}{(n-m)!}$ and the number of selecting $i$ elements from $m-k$ elements is ${{m-k}\choose i}$. By the inclusion-exclusion principle \cite{rior}, we get the required result. %number of injections from $[m]$ to $[n]$ without $(n-m+k)$-successions which is % $$\sum^{m-k}_{i=0}(-1)^{i} \dbinom{m-k}{i}\dfrac{(n-i)!}{(n-m)!}.$$ \end{proof} \begin{corollary} For all nonnegative integers $r$ and $0\leq k \leq n$, the number $d^{(r)}_{n,k}=d(n,n+r,k)$ of injections from $[n]$ to $[n+r]$ without $(r+k)$-successions is equal to $$\sum^{n-k}_{i=0}(-1)^{i}\dbinom{n-k}{i}\dfrac{(n+r-i)!}{r!}.$$ \end{corollary} Let us give some first values of the numbers $d^{(r)}_{n,k}$ for few given integers $r$. 
\[ \begin{tabular} {||r|rcccccc||}\hline \multicolumn{8}{||c||} {$d^{(0)}_{n,k}$}\\\hline &$k=0$&1&2&3&4&5&6\\ \hline $n=0$&0!&&&&&&\\ 1&0&1!&&&&&\\ 2&1&1&2!&&&&\\ 3&2&3&4&3!&&&\\ 4&9&11&14&18&4!&&\\ 5&44&53&64&78&96&5!&\\ 6&265&309&362&426&504&600&6!\\\hline \end{tabular} \] \[ \begin{tabular} {||r|rccccc||}\hline \multicolumn{7}{||c||} {$d^{(1)}_{n,k}$}\\\hline &$k=0$&1&2&3&4&5\\ \hline $n=0$&1&&&&&\\ 1&1&2!&&&&\\ 2&3&4&3!&&&\\ 3&11&14&18&4!&&\\ 4&53&64&78&96&5!&\\ 5&309&362&426&504&600&6!\\\hline \end{tabular} \hfill \qquad \begin{tabular} {||r|rcccc||}\hline \multicolumn{6}{||c||} {$d^{(2)}_{n,k}$}\\\hline &$k=0$&1&2&3&4\\ \hline n=0&1&&&&\\ 1&2&3&&&\\ 2&7&9&12&&\\ 3&32&39&48&60&\\ 4&181&213&252&300&360\\\hline \end{tabular} \] \[ \begin{tabular} {||r|rccc||}\hline \multicolumn{5}{||c||} {$d^{(3)}_{n,k}$}\\\hline &$k=0$&1&2&3\\ \hline n=0&1&&&\\ 1&3&4&&\\ 2&13&16&20&\\ 3&71&84&100&120\\\hline \end{tabular} \] Unexpectedly, we obtain the following theorem. \begin{theorem}\label{main} For all nonnegative integers $r$ and $0\leq k \leq n$, we have $$d^{(r)}_{n,k}= \dfrac{(k+r)!}{r!}d^{k+r}_{n+r}.$$ \end{theorem} \begin{proof} Let us denote by $\mathbb{I}^{r}_{k+r}$ the set of all injections from the set $[k]$ to $[k+r]$, by $\mathbb{S}(n,r,k)$ the set of all injections from $[n]$ to $[n+r]$ without $(r+k)$-successions. We will construct a bijection between $\mathbb{S}(n,r,k)$ and $W^{r+k}_{n+r}\times \mathbb{I}^{r}_{k+r}.$ For a given injection $f \in \mathbb{S}(n,r,k)$, we associate the pair $(g,\gamma) \in W^{r+k}_{n+r}\times \mathbb{I}^{r}_{k+r}$ defined by $$ g(i)= \begin{cases} f(i)+n-k\mbox{ mod }n+r \\ n+r \mbox{ if } f(i)=r+k \end{cases} \mbox{ for } i\in [n-k]. $$ and from $f(n-k+1)\cdots f(n)$ we standardise to get $\gamma(1)\cdots\gamma(k)$. More formally, let us take the order preserving bijection $\iota:[n+r]\setminus f([n-k]) \to [k+r]$ and we define $\gamma(i)=\iota \circ f(n-k+i)$ for all $i \in [k]$. Notice that the injection $g$ has no fixed points: if an integer $i \in [n-k]$ were a fixed point for $g$, that is, $g(i)=i$, then we would have $f(i)+n-k [\mbox{ mod }(n+r)]=i$, that is, $f(i)=i+r+k$ and the integer $i$ would be a $(r+k)$-succession for the injection $f$. Notice also that the inverse map $(g,\gamma) \mapsto f$ is defined by \[f(i)= \begin{cases} g(i)+k+r\mbox{ mod }n+r \\ r+k \text{ if } g(i)=n+r \end{cases} \text{ for all }i\in[n-k], \] and $$f(n-k+i)=\iota^{-1}\circ \gamma(i) \text{ for all } i\in[k].$$ \end{proof} \section{Maximum permanents of $(0,1)$-matrices} \begin{definition} Let $A=(a_{ij})$ be an $m \times n$ matrix with $m\leq n$. The \textit{permanent} of $A$, written $Per\ A$, is defined by $$Per\ A=\sum_{f}a_{1f(1)}a_{2f(2)}\cdots a_{mf(m)},$$ where the summation extends over all injections from $[m]$ to $[n]$. If $m>n$, we define $Per\ A=Per\ A^{T}.$ Let $A$ and $B$ be $m\times n$ matrices. We say that $B$ is combinatorially equivalent to $A$ if there exist two permutation matrices $P$ and $Q$ of orders $m$ and $n$ respectively such that $B=PAQ$. \end{definition} Let $k$ be an integer with $0\leq k\leq n$. We will denote by $\mathbb{U}(m, n, k)$ the set of all $m \times n\ (0,1)$-matrices with exactly $k$ zero entries. We give first some basic properties of the permanent function. 
\begin{remark} For convention, assume that for all integers $0\leq n$ and for all matrices $A\in \mathbb{U}(0, n,0)$, we have $Per\ A=1.$ \end{remark} \begin{theorem} \cite{minc} \begin{enumerate} \item For any $m\times n$ matrix $A$, $Per\ A= Per\ A^{T}.$ \item If $A$ and $B$ are $m\times n$ combinatorially equivalent matrices, then $Per\ A=Per\ B.$ \end{enumerate} \end{theorem} In \cite{bru}, Brualdi et al. determined the maximum permanents for $n$-square $(0,1)$-matrices with a fixed number of zero entries. In \cite{song}, Song et al. determined the extremes of permanents over $\mathbb{U}(m,n,k)$. \begin{theorem} \cite{song} For $2\leq k\leq m$, the maximum permanent over $\mathbb{U}(m,n,k)$ is $$\sum^{m}_{i=0}(-1)^{i}{k\choose i}{{n-i}\choose{m-i}}(m-i)!.$$ This value is attained by the matrices that are combinatorially equivalent to the matrix $$A_{max}=\left[ \begin{array}[pos]{cc} 1_{k\times k}-I_{k}\ |&{1}_{k\times n-k}\\\hline {1}_{m-k\times n}& \end{array} \right] $$ where $1_{s\times t}$ is the $s\times t\ (0,1)$-matrix with all entries equal to $1$ and $I_{k}$ is the $k$-square identity matrix. \end{theorem} \begin{theorem} For all integers $0\leq k\leq n$, the maximum permanent over $\mathbb{U}(n-k,n,n-k)$ is equal to $d^{k}_{n}$ and it is attained by the matrices whose each line contains exactly one zero and whose each column contains at most one zero. \end{theorem} \begin{proof} Let $A$ be a $n-k\times n \ (0,1)$-matrix in $\mathbb{U}(n-k,n,n-k)$ whose each line contains one zero and whose each column contains at most one zero. This matrix is combinatorially equivalent to $$ M=(m_{ij})= \left[ \begin{array}[pos]{c|c} 1_{n-k\times n-k}-I_{n-k}&{1}_{n-k\times k} \end{array} \right] =\left[ \begin{array}[pos]{rcl|c} 0&&&\\ &\ddots&&\\ &&0& \end{array} \right], $$ where all the entries in blank positions are $1$'s. By definition of permanent, $\displaystyle{Per\ M=\sum_{f}m_{1f(1)}m_{2f(2)}\cdots m_{n-k\ f(n-k)}}$ where the summation extends over all injections from $[n-k]$ to $[n]$. In the expansion of $Per\ M$, to determine the terms which do not contain zeros is equivalent to determine the number of injections from $[n-k]$ to $[n]$ without fixed points. And this gives the required result. \end{proof} \begin{theorem} For all integers $0\leq k\leq n$, the maximum permanent over $\mathbb{U}(n,n,n-k)$ is equal to $e^{k}_{n}$ and it is attained by the matrices whose each line and each column contains at most one zero. \end{theorem} \begin{proof} Let $A$ be a $n$-square $(0,1)$-matrix in $\mathbb{U}\left(n,n,n-k\right)$ whose each line and each column contains at most one zero. This matrix is combinatorially equivalent to $M =\left( m_{ij} \right)$ such that $$m_{ij}=\left{ \begin{cases} 0 \mbox{ if } j=i+k, 1\leq i \leq n-k\\ 1 \mbox{ else.} \end{cases} \right. $$ In the expansion of $Per\ M$, to determine the terms which do not contain zeros is equivalent to determine the number of permutations over $[n]$ without $k$-successions. And this gives the required result. \end{proof} \begin{theorem} For all integers $0\leq k \leq m\leq n$, the maximum permanent over $\mathbb{U}(m,n,m-k)$ enumerates the number of injections from $[m]$ to $[n]$ without $(n-m+k)$-successions. 
\begin{theorem} For all integers $0\leq k \leq m\leq n$, the maximum permanent over $\mathbb{U}(m,n,m-k)$ is equal to the number of injections from $[m]$ to $[n]$ without $(n-m+k)$-successions. \end{theorem} \begin{proof} The matrices of the set $\mathbb{U}(m,n,m-k)$ whose permanent is maximal are combinatorially equivalent to the matrix $$A=(a_{ij})=\left[ \begin{array}{c|c} 1_{(m-k)\times (n-m+k)} & 1_{(m-k)\times (m-k)}-I_{m-k}\\\hline \multicolumn{2}{c}{1_{k\times n}} \end{array} \right].$$ In the expansion of $Per\ A$, determining the terms which contain no zero factor amounts to counting the injections from $[m]$ to $[n]$ without $(n-m+k)$-successions, and this gives the required result. \end{proof} \begin{corollary} For all integers $0\leq k \leq m\leq n$, the maximum permanent over $\mathbb{U}(m,n,m-k)$ is equal to $$\dfrac{(n-m+k)!}{(n-m)!}d^{n-m+k}_{n}.$$ \end{corollary} \begin{proof} Using Theorem \ref{main}, we obtain the required result. \end{proof} \begin{corollary} For all integers $0\leq k \leq m\leq n$, we have $$\sum^{m-k}_{i=0}(-1)^{i}{{m-k}\choose i}{{n-i}\choose{m-i}}(m-i)!=\dfrac{(n-m+k)!}{(n-m)!}d^{n-m+k}_{n}.$$ \end{corollary} \section{Acknowledgements} The author is very grateful to a referee of the paper \cite{rak} for pointing out the two other combinatorial interpretations of the numbers $d^{k}_{n}$ and for suggesting to find bijective proofs. \begin{thebibliography}{99} \bibitem{bru} R. A. Brualdi, J. L. Goldwasser, T. S. Michael, Maximum permanents of matrices of zeros and ones, {\it J. Combin. Theory Ser. A} {\bf 47} (1988) 207--245. \bibitem{clarke} R. J. Clarke, G. N. Han, J. Zeng, A combinatorial interpretation of the Seidel generation of $q$-derangement numbers, {\it Annals of Combinatorics} {\bf 1} (1997) 313--327. \bibitem{dumont} D. Dumont, A. Randrianarivony, D\'erangements et nombres de Genocchi, {\it Discrete Math.} {\bf 132} (1997) 37--49. \bibitem{minc} H. Minc, Permanents, {\it Encyclopedia Math. Appl.}, vol. {\bf 6}, Addison-Wesley, Reading (1978). \bibitem{rak1} F. Rakotondrajao, $k$-fixed-points-permutations, {\it Pure Math. Appl.} {\bf 16} (2006) xx--xx. \bibitem{rak} F. Rakotondrajao, On Euler's difference table, in: {\it Proc. Formal Power Series \& Algebraic Combinatorics (FPSAC) 07}, Tianjin, China (2007). \bibitem{rior} J. Riordan, {\it An Introduction to Combinatorial Analysis}, John Wiley \& Sons, New York (1958). \bibitem{song} S. Z. Song, S. G. Hwang, S. H. Rim, G. S. Cheon, Extremes of permanents of $(0,1)$-matrices, {\it Linear Algebra Appl.} {\bf 373} (2003) 197--210. \end{thebibliography}","$k$-fixed-points-permutations, $k$-succession, $(0,1)$-matrices, permanent, injections, inclusion-exclusion principle","05A19","05B20","The author was supported by the `Soutien aux Activit\'es de Recherche Informatique et Math\'ematiques en Afrique' (SARIMA) project and by LIAFA during her stay at the University of Paris 7, France as invited `Ma\^itre de conf\'erences'. 
"Van Dooren","Paul","paul.vandooren@uclouvain.be","\section{H2 approximation of linear dynamical systems} By {\sl P. Van Dooren, K. Gallivan and P.A. Absil}. \noindent We consider the problem of approximating an $m\times p$ rational transfer function $H(s)$ of high degree by another $m\times p$ rational transfer function $\hat{H}(s)$ of much smaller degree. 
We derive the gradients of the $\mathcal{H}_2$-norm of the approximation error and show how this can be solved via tangential interpolation. We then extend these results to the discrete-time case, for both time-invariant and time-varying systems.","Tangential interpolation, H2 approximation, model reduction","15","65"," "Sivakumar","Koratti Chengalrayan","kcskumar@iitm.ac.in","\section{Least Elements of Polyhedral Sets and Nonnegative Generalized Inverses} By {\sl Debashisha Mishra and Sivakumar K.C.}. \noindent A classical result due to Cottle and Veinott gives a characterization of the existence of the least element of a specific polyhedral set defined by a matrix, in terms of nonnegativity of a left-inverse of the matrix. In this talk we present extensions of this result to semi-infinite matrices and characterize nonnegativity of certain classes of generalized inverses.","Least elements, polyhedral sets, nonnegative generalized inverse.","15A09","90C05"," "Wu","Pei Yuan","pywu@math.nctu.edu.tw","\section{Numerical ranges of nilpotent operators} By {\sl Hwa-Long Gau and Pei Yuan Wu}. \noindent For any operator $A$ on a Hilbert space, let $w(A)$ and $w_{0}(A)$ denote its numerical radius and the distance from the origin to the boundary of its numerical range, respectively. We prove that if $A$ is nilpotent with nilpotency $n$, then $w(A)\leq (n-1)\,w_{0}(A)$. When $A$ attains its numerical radius, we also determine a necessary and sufficient condition for the equality to hold.","Numerical range, numerical radius, nilpotent operator.","47A12","15A60"," "Glebsky","Lev","glebsky@cactus.iico.uaslp.mx","\section{On low rank perturbations of matrices} By {\sl Lev Glebsky and Luis Manuel Rivera}. \noindent The talk is devoted to different aspects of the question: ``What can be done with a matrix by a low rank perturbation?'' It is proved that one can change a geometrically simple spectrum drastically by a rank 1 perturbation, but the situation is quite different if one restricts oneself to normal matrices. The Jordan normal form of a perturbed matrix is also discussed. It is proved that, with respect to the distance $d(A,B)=\frac{\mathrm{rank}(A-B)}{n}$ (here $n$ is the size of the matrices), all almost unitary operators are near unitary.","low rank, matrices","15A03","15A18"," "Armandnejad","Ali","armandnejad@yahoo.com","\section{Right gw-majorization on $\mathbf{M}_{n,m}$} By {\sl A. Armandnejad}. \noindent Let $\mathbf{M}_{n,m}$ be the set of all $n\times m$ matrices with entries in $\mathbb{F}$, where $\mathbb{F}$ is the field of real or complex numbers. An $n\times n$ matrix $R$ is said to be g-row stochastic if $Re=e$, where $e=(1,\ldots,1)^{t}\in \mathbb{F}^{n}$. We introduce right gw-majorization on $\mathbf{M}_{n,m}$: an $n\times m$ matrix $A$ is said to be right gw-majorized by an $n\times m$ matrix $B$, denoted by $B\succ_{rwg}A$, if there exists a g-row stochastic matrix $R$ such that $A=BR$. In this paper we study some properties of right gw-majorization and finally we characterize all linear operators that strongly preserve right gw-majorization.","Linear preserver, strong linear preserver, g-row stochastic matrices, right gw-majorization","15A03","15A04"," "Cravo","Glória","gcravo@uma.pt","\section{Controllability of Matrices with Prescribed Blocks} By {\sl Gl\'{o}ria Cravo}. 
\noindent Let $F$ be a field and let $n,p_{1},\ldots,p_{k}$ be positive integers such that $n=p_{1}+\cdots+p_{k}.$ Let \[ (C_{1},C_{2})=\left( \left[ \begin{array}[c]{ccc} C_{1,1} & \cdots & C_{1,k-1}\\ \vdots & & \vdots\\ C_{k-1,1} & \cdots & C_{k-1,k-1} \end{array} \right] ,\left[ \begin{array}[c]{c} C_{1,k}\\ \vdots\\ C_{k-1,k} \end{array} \right] \right) \] where the blocks $C_{i,j}$ are of type $p_{i}\times p_{j}$, $i\in\{1,\ldots,k-1\}$, $j\in\{1,\ldots,k\}.$ We study the possibility of $(C_{1},C_{2})$ being completely controllable, when some of its blocks are fixed and the others vary. Our main results analyse the following cases: (i) all the blocks $C_{i,j}$ are of the same size; (ii) the blocks $C_{i,j}$ are not necessarily of the same size and $k=3.$ We also describe the possible characteristic polynomials of a matrix of the form \[ C=\left[ \begin{array}[c]{ccc} C_{1,1} & \cdots & C_{1,k}\\ \vdots & & \vdots\\ C_{k,1} & \cdots & C_{k,k} \end{array} \right] \] when some of its blocks are prescribed and the others are free.","Controllability, Characteristic Polynomials, Matrix Completion Problems","93B05","15A18"," 
L}% ERER, }Resultants of matrix polynomials. Bull. Amer. Math. Soc\textit{. }\ \textbf{82} {\small \ }(1976) 565-567. \bibitem{kms} {\small {\large A. K}LEIN, {\large G. M}\textsc{\'{E}}LARD, {\large P. S}PREIJ,} On the Resultant Property of the Fisher Information Matrix \ of a Vector ARMA process, Linear Algebra Appl. 403 (2005) 291-313. \end{thebibliography} \end{document}","Multiple resultant matrix, Matrix Polynomial, Tensor Sylvester matrix, Fisher information matrix, VARMAX process","15A23","15A57"," "Klein","Andre","A.A.B.Klein@uva.nl","\begin{center} {\large \textbf{Tensor Sylvester matrices and information matrices of multiple stationary processes}} by \textit{Andr\'{e} Klein}, Department of Quantitaive Economics, University of Amsterdam \\[0pt] Roetersstraat 11, 1018 WB Amsterdam, The Netherlands \\[0pt] \end{center} \textbf{Abstract} Consider the matrix polynomials $A(z)$ and $B(z)$ given by $\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ A(z)=\dsum\limits_{j=0}^{p}A_{j}z^{j}$ and$\ B(z)=\dsum\limits_{j=0}^{q}B_{j}z^{j}$, where $A_{0}\equiv B_{0}\equiv I_{n}$.\newline Gohberg and Lerer [1] study the resultant property of the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)\triangleq \mathcal{S}(-B\otimes I_{n},I_{n}\otimes A)$ or $\mathcal{S}^{\otimes }(-B,A)=\left( \begin{array}{ccccccc} \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} \\ I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p}% \end{array}% \right) $. In [1] it is proved that the matrix polynomials $A(z)$ and $B(z)$ have at least one common eigenvalue if and only if det$\mathcal{S}^{\otimes }(-B,A)=0 $ or when the matrix $\mathcal{S}^{\otimes }(-B,A)$ is singular$.$ In other words, the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)$ becomes singular if and only if the scalar polynomials det $A(z)=0$ and det $B(z)=0$ have at least one common root. Consequently, it is a multiple resultant. In [2], this property is extended to the Fisher information matrix of a stationary vector autoregressive and moving average process, VARMA process. The purpose of this talk consists of displaying a representation of the Fisher information matrix of a stationary VARMAX process in terms of tensor Sylvester matrices, the X stands for exogenous or control variable. The VARMAX process is of common use in stochastic systems and control. \begin{thebibliography}{9} \bibitem{gohblerer} {\small {\large I.} \ {\large G}OHBERG, {\large L. L}% ERER, }Resultants of matrix polynomials. Bull. Amer. Math. Soc\textit{. }\ \textbf{82} {\small \ }(1976) 565-567. \bibitem{kms} {\small {\large A. K}LEIN, {\large G. M}\textsc{\'{E}}LARD, {\large P. 
S}PREIJ,} On the Resultant Property of the Fisher Information Matrix \ of a Vector ARMA process, Linear Algebra Appl. 403 (2005) 291-313. \end{thebibliography}","Multiple resultant matrix, Matrix Polynomial, Tensor Sylvester matrix, Fisher information matrix, VARMAX process","15A23","15A57"," "Klein","Andre","A.A.B.Klein@uva.nl","\begin{center} {\large \textbf{Tensor Sylvester matrices and information matrices of multiple stationary processes}} by \textit{Andr\'{e} Klein}, Department of Quantitaive Economics, University of Amsterdam \\[0pt] Roetersstraat 11, 1018 WB Amsterdam, The Netherlands \\[0pt] \end{center} \textbf{Abstract} Consider the matrix polynomials $A(z)$ and $B(z)$ given by $\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ A(z)=\dsum\limits_{j=0}^{p}A_{j}z^{j}$ and$\ B(z)=\dsum\limits_{j=0}^{q}B_{j}z^{j}$, where $A_{0}\equiv B_{0}\equiv I_{n}$.\newline Gohberg and Lerer [1] study the resultant property of the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)\triangleq \mathcal{S}(-B\otimes I_{n},I_{n}\otimes A)$ or $\mathcal{S}^{\otimes }(-B,A)=\left( \begin{array}{ccccccc} \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} \\ I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p}% \end{array}% \right) $. In [1] it is proved that the matrix polynomials $A(z)$ and $B(z)$ have at least one common eigenvalue if and only if det$\mathcal{S}^{\otimes }(-B,A)=0 $ or when the matrix $\mathcal{S}^{\otimes }(-B,A)$ is singular$.$ In other words, the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)$ becomes singular if and only if the scalar polynomials det $A(z)=0$ and det $B(z)=0$ have at least one common root. Consequently, it is a multiple resultant. In [2], this property is extended to the Fisher information matrix of a stationary vector autoregressive and moving average process, VARMA process. The purpose of this talk consists of displaying a representation of the Fisher information matrix of a stationary VARMAX process in terms of tensor Sylvester matrices, the X stands for exogenous or control variable. The VARMAX process is of common use in stochastic systems and control. \begin{thebibliography}{9} \bibitem{gohblerer} {\small {\large I.} \ {\large G}OHBERG, {\large L. L}% ERER, }Resultants of matrix polynomials. Bull. Amer. Math. Soc\textit{. }\ \textbf{82} {\small \ }(1976) 565-567. \bibitem{kms} {\small {\large A. K}LEIN, {\large G. M}\textsc{\'{E}}LARD, {\large P. S}PREIJ,} On the Resultant Property of the Fisher Information Matrix \ of a Vector ARMA process, Linear Algebra Appl. 403 (2005) 291-313. 
\end{thebibliography}","Multiple resultant matrix, Matrix Polynomial, Tensor Sylvester matrix, Fisher information matrix, VARMAX process","15A23","15A57"," "Klein","Andre","A.A.B.Klein@uva.nl","Tensor Sylvester matrices and information matrices of multiple stationary processes by Andr\'{e} Klein, Department of Quantitaive Economics, University of Amsterdam, Roetersstraat 11, 1018 WB Amsterdam, The Netherlands Consider the matrix polynomials $A(z)$ and $B(z)$ given by $\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ A(z)=\dsum\limits_{j=0}^{p}A_{j}z^{j}$ and$\ B(z)=\dsum\limits_{j=0}^{q}B_{j}z^{j}$, where $A_{0}\equiv B_{0}\equiv I_{n}$.\newline Gohberg and Lerer [1] study the resultant property of the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)\triangleq \mathcal{S}(-B\otimes I_{n},I_{n}\otimes A)$ or $\mathcal{S}^{\otimes }(-B,A)=\left( \begin{array}{ccccccc} \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} \\ I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p}% \end{array}% \right) $. In [1] it is proved that the matrix polynomials $A(z)$ and $B(z)$ have at least one common eigenvalue if and only if det$\mathcal{S}^{\otimes }(-B,A)=0 $ or when the matrix $\mathcal{S}^{\otimes }(-B,A)$ is singular$.$ In other words, the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)$ becomes singular if and only if the scalar polynomials det $A(z)=0$ and det $B(z)=0$ have at least one common root. Consequently, it is a multiple resultant. In [2], this property is extended to the Fisher information matrix of a stationary vector autoregressive and moving average process, VARMA process. The purpose of this talk consists of displaying a representation of the Fisher information matrix of a stationary VARMAX process in terms of tensor Sylvester matrices, the X stands for exogenous or control variable. The VARMAX process is of common use in stochastic systems and control. \bibitem{gohblerer} {\small {\large I.} \ {\large G}OHBERG, {\large L. L}% ERER, }Resultants of matrix polynomials. Bull. Amer. Math. Soc\textit{. }\ \textbf{82} {\small \ }(1976) 565-567. \bibitem{kms} {\small {\large A. K}LEIN, {\large G. M}\textsc{\'{E}}LARD, {\large P. S}PREIJ,} On the Resultant Property of the Fisher Information Matrix \ of a Vector ARMA process, Linear Algebra Appl. 
403 (2005) 291-313.","Multiple resultant matrix, Matrix Polynomial, Tensor Sylvester matrix, Fisher information matrix, VARMAX process","15A23","15A57"," "Klein","Andre","A.A.B.Klein@uva.nl","Tensor Sylvester matrices and information matrices of multiple stationary processes by Andr\'{e} Klein, Department of Quantitaive Economics, University of Amsterdam, Roetersstraat 11, 1018 WB Amsterdam, The Netherlands Consider the matrix polynomials $A(z)$ and $B(z)$ given by $\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ A(z)=\dsum\limits_{j=0}^{p}A_{j}z^{j}$ and$\ B(z)=\dsum\limits_{j=0}^{q}B_{j}z^{j}$, where $A_{0}\equiv B_{0}\equiv I_{n}$.\newline Gohberg and Lerer [1] study the resultant property of the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)\triangleq \mathcal{S}(-B\otimes I_{n},I_{n}\otimes A)$ or $\mathcal{S}^{\otimes }(-B,A)=\left( \begin{array}{ccccccc} \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} \\ I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p}% \end{array}% \right) $. In [1] it is proved that the matrix polynomials $A(z)$ and $B(z)$ have at least one common eigenvalue if and only if det$\mathcal{S}^{\otimes }(-B,A)=0 $ or when the matrix $\mathcal{S}^{\otimes }(-B,A)$ is singular$.$ In other words, the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)$ becomes singular if and only if the scalar polynomials det $A(z)=0$ and det $B(z)=0$ have at least one common root. Consequently, it is a multiple resultant. In [2], this property is extended to the Fisher information matrix of a stationary vector autoregressive and moving average process, VARMA process. The purpose of this talk consists of displaying a representation of the Fisher information matrix of a stationary VARMAX process in terms of tensor Sylvester matrices, the X stands for exogenous or control variable. The VARMAX process is of common use in stochastic systems and control. \bibitem{gohblerer} {\small {\large I.} \ {\large G}OHBERG, {\large L. L}% ERER, }Resultants of matrix polynomials. Bull. Amer. Math. Soc\textit{. }\ \textbf{82} {\small \ }(1976) 565-567. \bibitem{kms} {\small {\large A. K}LEIN, {\large G. M}\textsc{\'{E}}LARD, {\large P. S}PREIJ,} On the Resultant Property of the Fisher Information Matrix \ of a Vector ARMA process, Linear Algebra Appl. 
403 (2005) 291-313.","Multiple resultant matrix, Matrix Polynomial, Tensor Sylvester matrix, Fisher information matrix, VARMAX process","15A23","15A57"," "Klein","Andre","A.A.B.Klein@uva.nl","Tensor Sylvester matrices and information matrices of multiple stationary processes by Andr\'{e} Klein, Department of Quantitaive Economics, University of Amsterdam, Roetersstraat 11, 1018 WB Amsterdam, The Netherlands Consider the matrix polynomials $A(z)$ and $B(z)$ given by $\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ A(z)=\dsum\limits_{j=0}^{p}A_{j}z^{j}$ and$\ B(z)=\dsum\limits_{j=0}^{q}B_{j}z^{j}$, where $A_{0}\equiv B_{0}\equiv I_{n}$.\newline Gohberg and Lerer [1] study the resultant property of the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)\triangleq \mathcal{S}(-B\otimes I_{n},I_{n}\otimes A)$ or $\mathcal{S}^{\otimes }(-B,A)=\left( \begin{array}{ccccccc} \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} \\ I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p}% \end{array}% \right) $. In [1] it is proved that the matrix polynomials $A(z)$ and $B(z)$ have at least one common eigenvalue if and only if det$\mathcal{S}^{\otimes }(-B,A)=0 $ or when the matrix $\mathcal{S}^{\otimes }(-B,A)$ is singular$.$ In other words, the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)$ becomes singular if and only if the scalar polynomials det $A(z)=0$ and det $B(z)=0$ have at least one common root. Consequently, it is a multiple resultant. In [2], this property is extended to the Fisher information matrix of a stationary vector autoregressive and moving average process, VARMA process. The purpose of this talk consists of displaying a representation of the Fisher information matrix of a stationary VARMAX process in terms of tensor Sylvester matrices, the X stands for exogenous or control variable. The VARMAX process is of common use in stochastic systems and control. \bibitem{gohblerer} {\small {\large I.} \ {\large G}OHBERG, {\large L. L}% ERER, }Resultants of matrix polynomials. Bull. Amer. Math. Soc\textit{. }\ \textbf{82} {\small \ }(1976) 565-567. \bibitem{kms} {\small {\large A. K}LEIN, {\large G. M}\textsc{\'{E}}LARD, {\large P. S}PREIJ,} On the Resultant Property of the Fisher Information Matrix \ of a Vector ARMA process, Linear Algebra Appl. 
403 (2005) 291-313.","Multiple resultant matrix, Matrix Polynomial, Tensor Sylvester matrix, Fisher information matrix, VARMAX process","15A23","15A57"," "Klein","Andre","A.A.B.Klein@uva.nl","Tensor Sylvester matrices and information matrices of multiple stationary processes by Andr\'{e} Klein, Department of Quantitaive Economics, University of Amsterdam, Roetersstraat 11, 1018 WB Amsterdam, The Netherlands Consider the matrix polynomials $A(z)$ and $B(z)$ given by $\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ A(z)=\dsum\limits_{j=0}^{p}A_{j}z^{j}$ and$\ B(z)=\dsum\limits_{j=0}^{q}B_{j}z^{j}$, where $A_{0}\equiv B_{0}\equiv I_{n}$.\newline Gohberg and Lerer [1] study the resultant property of the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)\triangleq \mathcal{S}(-B\otimes I_{n},I_{n}\otimes A)$ or $\mathcal{S}^{\otimes }(-B,A)=\left( \begin{array}{ccccccc} \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} \\ I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p}% \end{array}% \right) $. In [1] it is proved that the matrix polynomials $A(z)$ and $B(z)$ have at least one common eigenvalue if and only if det$\mathcal{S}^{\otimes }(-B,A)=0 $ or when the matrix $\mathcal{S}^{\otimes }(-B,A)$ is singular$.$ In other words, the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)$ becomes singular if and only if the scalar polynomials det $A(z)=0$ and det $B(z)=0$ have at least one common root. Consequently, it is a multiple resultant. In [2], this property is extended to the Fisher information matrix of a stationary vector autoregressive and moving average process, VARMA process. The purpose of this talk consists of displaying a representation of the Fisher information matrix of a stationary VARMAX process in terms of tensor Sylvester matrices, the X stands for exogenous or control variable. The VARMAX process is of common use in stochastic systems and control. \bibitem{gohblerer} {\small {\large I.} \ {\large G}OHBERG, {\large L. L}% ERER, }Resultants of matrix polynomials. Bull. Amer. Math. Soc\textit{. }\ \textbf{82} {\small \ }(1976) 565-567. \bibitem{kms} {\small {\large A. K}LEIN, {\large G. M}\textsc{\'{E}}LARD, {\large P. S}PREIJ,} On the Resultant Property of the Fisher Information Matrix \ of a Vector ARMA process, Linear Algebra Appl. 403 (2005) 291-313.","Multiple resultant matrix, Matrix Polynomial, Tensor Sylvester matrix, Fisher information matrix, VARMAX process","15A23","15A57"," "Klein","Andre","A.A.B.Klein@uva.nl","\section{Tensor Sylvester matrices and information matrices of multiple stationary processes} By {\sl Andr\'{e} Klein, Department of Quantitaive Economics, University of Amsterdam, Roetersstraat 11, 1018 WB Amsterdam, The Netherlands}. 
"M. Dopico","Froilan","dopico@math.uc3m.es","\section{Implicit Jacobi algorithms for the symmetric eigenproblem} By {\sl Froilan M. Dopico}. \noindent The Jacobi algorithm for computing the eigenvalues and eigenvectors of a symmetric matrix is one of the earliest methods in numerical analysis, dating back to 1846. It was the standard procedure for solving dense symmetric eigenvalue problems before the QR algorithm was developed.
The Jacobi method is much slower than QR or than any other algorithm based on a preliminary reduction to tridiagonal form and, as a consequence, it is not used in practice. However, in the last twenty years the Jacobi algorithm has received considerable attention because it can compute the eigenvalues and eigenvectors of many types of structured matrices with much more accuracy than other algorithms. The essential idea is first to compute an accurate factorization of the matrix $A$, and then to apply the Jacobi algorithm implicitly on the factors. The theoretical property that supports this approach is that a factorization $A= X D X^T$, where $X$ is well conditioned and $D$ is diagonal and nonsingular, determines the eigenvalues and eigenvectors of $A$ very accurately, i.e., small componentwise perturbations of $D$ and small normwise perturbations of $X$ produce small relative variations in the eigenvalues of $A$, and small variations in the eigenvectors with respect to the eigenvalue relative gap. The purpose of this talk is to present a unified overview of implicit Jacobi algorithms, of the classes of symmetric matrices for which they work, and of the perturbation results that are needed to prove the accuracy of the computed eigenvalues and eigenvectors, and, finally, to present very recent developments in this area that include a new, simple, and satisfactory algorithm for symmetric indefinite matrices.","eigenvalues, eigenvectors, high relative accuracy, Jacobi algorithm","65F15","15A23","I am one of the Plenary speakers and this is the abstract of my Plenary talk
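For reference, a minimal dense cyclic Jacobi iteration (my own NumPy sketch of the classical method the talk departs from; the implicit variants instead apply the rotations to the factors of an accurate decomposition $A=XDX^T$, which is not done here):
\begin{verbatim}
import numpy as np

def jacobi_eigen(A, tol=1e-12, max_sweeps=30):
    """Classical cyclic two-sided Jacobi for a real symmetric matrix."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_sweeps):
        if np.linalg.norm(A - np.diag(np.diag(A))) <= tol * np.linalg.norm(A):
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if A[p, q] == 0.0:
                    continue
                # Rotation angle chosen to annihilate A[p, q].
                theta = (A[q, q] - A[p, p]) / (2.0 * A[p, q])
                t = 1.0 if theta == 0 else \
                    np.sign(theta) / (abs(theta) + np.hypot(theta, 1.0))
                c = 1.0 / np.hypot(t, 1.0)
                s = t * c
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J
                V = V @ J
    return np.diag(A), V

A = np.array([[4.0, 1.0, 0.5], [1.0, 3.0, 0.2], [0.5, 0.2, 1.0]])
w, V = jacobi_eigen(A)
print(np.sort(w), np.sort(np.linalg.eigvalsh(A)))   # agree to ~1e-15
\end{verbatim}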
"Mena","Hermann","hmena@math.epn.edu.ec","\section{Exponential Integrators for Solving Large-Scale Differential Riccati Equations} By {\sl Peter Benner and Hermann Mena}. \noindent The differential Riccati equation (DRE) arises in several applications, especially in control theory. Optimization problems constrained by partial differential equations (PDEs) often lead to formulations as abstract Cauchy problems. Imposing a quadratic cost functional, the resulting optimal control problem is solved by a feedback control in which the feedback operator is given in terms of an operator-valued DRE. Hence, in order to apply such a feedback control strategy to PDE control, we need to solve the large-scale DREs resulting from a spatial semi-discretization. There is a variety of methods to solve DREs. One common approach is based on a linearization that transforms the DRE into a linear Hamiltonian system of first-order matrix differential equations. The analytic solution of this system is given in terms of the exponential of a $2n\times 2n$ Hamiltonian matrix. In this talk, we investigate the use of symplectic Krylov subspace methods to approximate the action of this operator and thereby solve the DRE. Numerical examples illustrating the performance of the method will be shown.","differential Riccati equation, symplectic Krylov subspace methods, Hamiltonian systems, linear-quadratic regulator, optimal control","93A15","65L99","
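The linearization is easy to state concretely. A dense, small-scale sketch (mine, assuming NumPy/SciPy; the coefficients are toy data, and the talk's actual contribution, approximating the action of the exponential by symplectic Krylov methods, is not implemented here):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def dre_solve(A, S, Q, P0, t):
    """Solve P' = A^T P + P A - P S P + Q, P(0) = P0, via the Hamiltonian
    linearization: propagate [V; W] = expm(t*H) [I; P0], then P(t) = W V^{-1}."""
    n = A.shape[0]
    H = np.block([[-A, S], [Q, A.T]])              # 2n x 2n Hamiltonian matrix
    VW = expm(t * H) @ np.vstack([np.eye(n), P0])
    V, W = VW[:n], VW[n:]
    return np.linalg.solve(V.T, W.T).T

rng = np.random.default_rng(3)
n = 3
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1)); S = B @ B.T
C = rng.standard_normal((1, n)); Q = C.T @ C
P0 = np.eye(n)
h = 1e-5
P_h = dre_solve(A, S, Q, P0, h)
P_euler = P0 + h * (A.T @ P0 + P0 @ A - P0 @ S @ P0 + Q)
print(np.abs(P_h - P_euler).max())   # O(h^2): one-step answers agree
\end{verbatim}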
"foroutannia","Davoud","d_foroutan@math.com","\section{Bounds for matrices on weighted sequence spaces} By {\sl D. Foroutannia}. \noindent Let $w=(w_n)$ be a decreasing non-negative sequence and let $F=(F_n)$ be a partition of the positive integers such that each $F_n$ is a finite interval and $\max F_n<\min F_{n+1}$ for all $n$. The block weighted sequence space $l_p(w,F)$ is the space of all real sequences $x=(x_n)$ with $$\|x\|_{p,w,F}=\left(\sum_{n=1}^{\infty}w_n\Big|\sum_{i\in F_n}x_i\Big|^p\right)^{1/p}<\infty.$$ In this paper, we consider inequalities of the form $\|Ax\|_{p,w,F}\le L\|Bx\|_{q,v,F}$, where $A$ and $B$ are matrix operators, $x$ is a decreasing non-negative sequence, $w$ and $v$ are weights, and $F$ is a block partition. This study extends some earlier results obtained on the sequence spaces $l_{p}(v)$ by J. Pecaric, I. Peric and R. Roki in [3].","Inequality; Lower bound; Upper bound; Block weighted sequence spaces; Copson matrix","",""," "Seddighin","Morteza","mseddigh@indiana.edu","\section{Matrix Optimization in Statistics} By {\sl Morteza Seddighin}. \noindent Statisticians have been dealing with matrix optimization problems that are similar to matrix antieigenvalue problems. These problems occur in areas such as statistical efficiency and canonical correlations. Statisticians have generally taken a variational approach to treat these matrix optimization problems. However, we will use the techniques we have developed for the computation of antieigenvalues to provide simpler solutions. Additionally, these techniques have enabled us to generalize some of the matrix optimization problems in statistics from positive matrices to normal accretive matrices and operators. One of the techniques we use is the Two Nonzero Component Lemma, which was first proved by the author. Another technique is converting the antieigenvalue problem to a convex programming problem. In the latter method the problem is reduced to finding the minimum of a convex function on the numerical range of an operator (which is a convex set).","Matrix Optimization, Antieigenvalue","15","47","I have written the abstract using Scientific Workplace. If there is any problem please let me know to provide a pdf file.
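For a positive definite matrix the quantity being optimized can be probed directly. A hypothetical numerical check (mine, NumPy/SciPy; not code from the author): minimize the cosine $\langle Ax,x\rangle/(\|Ax\|\,\|x\|)$ over nonzero $x$ and compare with Gustafson's closed form for the first antieigenvalue of a symmetric positive definite matrix, $2\sqrt{\lambda_{\min}\lambda_{\max}}/(\lambda_{\min}+\lambda_{\max})$.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)      # symmetric positive definite test matrix

def cos_angle(x):
    Ax = A @ x
    return (x @ Ax) / (np.linalg.norm(Ax) * np.linalg.norm(x))

best = min(minimize(cos_angle, rng.standard_normal(5)).fun for _ in range(20))
lmin, lmax = np.linalg.eigvalsh(A)[[0, -1]]
print(best, 2*np.sqrt(lmin*lmax)/(lmin + lmax))   # the two values agree
\end{verbatim}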
"Carriegos","Miguel","miguel.carriegos@unileon.es","\section{Reachability of regular switched linear systems} By {\sl Miguel V. Carriegos}. \noindent Switched linear systems belong to a special class of hybrid control systems which comprises a collection of subsystems described by linear dynamics (differential/difference equations) together with a switching rule that specifies the switching between the subsystems. Such systems can be used to describe a wide range of physical and engineering problems in practice. On the other hand, switched linear systems have been attracting much attention in recent years because the arising problems are not only academically challenging but also of practical importance. In this talk we consider \emph{regular switched sequential linear systems}; that is, sequential switched linear systems $$\Gamma:\underline{x}(t+1)=A_{\sigma(t)}\underline{x}(t)+B_{\sigma(t)}\underline{u}(t)$$ where the switching signals $\sigma(0)\sigma(1)\sigma(2)\cdots \in \Sigma^{\ast}$ belong to a regular language $L_{\Gamma}\subseteq\Sigma^{\ast}$ of admissible sequences of commands of the system $\Gamma$. This is actually equivalent to saying that the switching signals are governed by a finite automaton. We study the notion of reachability in terms of the families of matrices $A_{\sigma(-)}$ and $B_{\sigma(-)}$ by using linear algebra techniques.","hybrid system; local automaton; controllability","93B25","68A25","
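When every switching sequence is admissible ($L_{\Gamma}=\Sigma^{\ast}$), reachability reduces to a subspace iteration: the reachable subspace is the smallest subspace containing every $\mathrm{Im}\,B_{\sigma}$ and invariant under every $A_{\sigma}$. A small sketch of that unconstrained case (mine, NumPy; the finite-automaton constraint, the actual subject of the talk, is deliberately omitted):
\begin{verbatim}
import numpy as np

def reachable_subspace(As, Bs, tol=1e-10):
    """Orthonormal basis of the smallest subspace containing all Im(B_s)
    and invariant under all A_s (switching language = Sigma^*)."""
    V = np.hstack(Bs)
    rank = 0
    while True:
        U, s, _ = np.linalg.svd(np.hstack([V] + [A @ V for A in As]),
                                full_matrices=False)
        r = int((s > tol * s[0]).sum()) if s.size and s[0] > 0 else 0
        if r == rank:
            return U[:, :r]
        rank, V = r, U[:, :r]

A1 = np.array([[0., 1.], [0., 0.]]); B1 = np.array([[0.], [1.]])
A2 = np.eye(2);                      B2 = np.zeros((2, 1))
print(reachable_subspace([A1, A2], [B1, B2]).shape[1])  # 2: fully reachable
\end{verbatim}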
"Plestenjak","Bor","bor.plestenjak@fmf.uni-lj.si","\section{Numerical methods for two-parameter eigenvalue problems} By {\sl Bor Plestenjak}. \noindent We consider the \emph{two-parameter eigenvalue problem} \cite{Atkinson} \begin{eqnarray} A_1x_1&=&\lambda B_1x_1+\mu C_1x_1,\nonumber\\[-2ex] \label{problem} \\[-2ex] A_2x_2&=&\lambda B_2x_2+\mu C_2x_2,\nonumber \end{eqnarray} where $A_i,B_i$, and $C_i$ are given $n_i\times n_i$ matrices over ${\mathbb C}$, $\lambda,\mu\in{\mathbb C}$, and $x_i\in {\mathbb C}^{n_i}$ for $i=1,2$. A pair $(\lambda,\mu)$ is an \emph{eigenvalue} if it satisfies (\ref{problem}) for nonzero vectors $x_1,x_2$. The tensor product $x_1\otimes x_2$ is then the corresponding \emph{eigenvector}. On the tensor product space $S:= {\mathbb C}^{n_1}\otimes {\mathbb C}^{n_2}$ of dimension $N:=n_1n_2$ we can define the \emph{operator determinants} \begin{eqnarray*} \Delta_0&=&B_1\otimes C_2-C_1\otimes B_2,\cr \Delta_1&=&A_1\otimes C_2-C_1\otimes A_2,\cr \Delta_2&=&B_1\otimes A_2-A_1\otimes B_2. \end{eqnarray*} The two-parameter problem $(\ref{problem})$ is \emph{nonsingular} if its operator determinant $\Delta_0$ is invertible. In this case $\Delta_0^{-1}\Delta_1$ and $\Delta_0^{-1}\Delta_2$ commute and problem (\ref{problem}) is equivalent to the associated problem \begin{eqnarray} \Delta_1 z&=&\lambda \Delta_0 z,\nonumber\\[-2ex] \label{drugi}\\[-2ex] \Delta_2 z&=&\mu \Delta_0 z\nonumber \end{eqnarray} for decomposable tensors $z\in S$, $z=x_1\otimes x_2$. Some numerical methods and a basic theory of two-parameter eigenvalue problems will be presented. A possible approach is to solve the associated couple of generalized eigenproblems (\ref{drugi}), but this is only feasible for problems of low dimension because the size of the matrices of (\ref{drugi}) is $N\times N$. For larger problems, if we are interested in a part of the eigenvalues close to a given target, the Jacobi--Davidson method \cite{HP,HP2,HP3} gives very good results. Several applications lead to singular two-parameter eigenvalue problems, where $\Delta_0$ is singular. Two such examples are model updating \cite{Cottin} and the quadratic two-parameter eigenvalue problem \begin{eqnarray} (S_{00}+\lambda S_{10} +\mu S_{01} + \lambda^2 S_{20} +\lambda \mu S_{11} + \mu^2 S_{02})x&=&0\nonumber\\[-1.7ex] \label{qepproblem} \\[-1.7ex] (T_{00}+\lambda T_{10} +\mu T_{01} + \lambda^2 T_{20} +\lambda \mu T_{11} + \mu^2 T_{02})y&=&0.\nonumber \end{eqnarray} We can linearize (\ref{qepproblem}) as a singular two-parameter eigenvalue problem; a possible linearization is $$\left(\left[\matrix{S_{00} & S_{10} &S_{01} \cr 0 & -I & 0 \cr 0 & 0 & -I}\right] +\lambda \left[\matrix{0 & S_{20} &{1\over 2}S_{11} \cr I & 0& 0\cr 0& 0& 0}\right] +\mu \left[\matrix{ 0&{1\over 2}S_{11} &S_{02} \cr 0&0&0\cr I&0&0}\right]\right)\widetilde x=0 $$ $$\left(\left[\matrix{T_{00} & T_{10} &T_{01} \cr 0& -I & 0\cr 0& 0& -I}\right] +\lambda \left[\matrix{ 0& T_{20} &{1\over 2}T_{11} \cr I &0&0\cr 0&0&0}\right] +\mu \left[\matrix{ 0&{1\over 2}T_{11} &T_{02} \cr 0&0&0\cr I&0&0}\right]\right)\widetilde y=0, $$ where $\widetilde x=\left[\matrix{x \cr \lambda x \cr \mu x}\right]$ and $\widetilde y=\left[\matrix{y \cr \lambda y \cr \mu y}\right]$. Some theoretical results and numerical methods for singular two-parameter eigenvalue problems will be presented. \begin{thebibliography}{99} \bibitem{Atkinson} {\sc F.~V.~Atkinson}, {\sl Multiparameter eigenvalue problems}, Academic Press, New York, 1972. \bibitem{Cottin} {\sc N.~Cottin}, {\sl Dynamic model updating --- a multiparameter eigenvalue problem}, Mech. Syst. Signal Pr., 15~(2001), pp.~649--665. \bibitem{HP} {\sc M.~E.~Hochstenbach and B.~Plestenjak}, {\sl A Jacobi--Davidson type method for a right definite two-parameter eigenvalue problem}, SIAM J. Matrix Anal. Appl., 24~(2002), pp.~392--410. \bibitem{HP2} {\sc M.~E. Hochstenbach, T.~Ko{\v{s}}ir, and B.~Plestenjak}, {\sl A {J}acobi--{D}avidson type method for the nonsingular two-parameter eigenvalue problem}, SIAM J. Matrix Anal. Appl., 26 (2005), pp.~477--497. \bibitem{HP3} {\sc M.~E.~Hochstenbach and B.~Plestenjak}, {\sl Harmonic Rayleigh--Ritz extraction for the multiparameter eigenvalue problem}, to appear in ETNA. \end{thebibliography}","two-parameter eigenvalue problem, Jacobi-Davidson method, model updating","65F15","15A18","
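A toy computation with the associated problem (\ref{drugi}) (my own script, NumPy/SciPy, random dense data): build $\Delta_0,\Delta_1,\Delta_2$ via Kronecker products, extract the pairs $(\lambda,\mu)$ from the commuting pencils, and verify that each pair makes the first equation of (\ref{problem}) singular.
\begin{verbatim}
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
n1, n2 = 3, 4
A1, B1, C1 = (rng.standard_normal((n1, n1)) for _ in range(3))
A2, B2, C2 = (rng.standard_normal((n2, n2)) for _ in range(3))

D0 = np.kron(B1, C2) - np.kron(C1, B2)
D1 = np.kron(A1, C2) - np.kron(C1, A2)
D2 = np.kron(B1, A2) - np.kron(A1, B2)

lam, Z = eig(D1, D0)                       # Delta_1 z = lambda Delta_0 z
resid = []
for k in range(lam.size):
    z = Z[:, k]
    # Since D0^{-1} D1 and D0^{-1} D2 commute, z is (generically) also an
    # eigenvector of D0^{-1} D2; recover mu by a Rayleigh quotient.
    mu = (z.conj() @ np.linalg.solve(D0, D2 @ z)) / (z.conj() @ z)
    resid.append(np.linalg.svd(A1 - lam[k]*B1 - mu*C1, compute_uv=False)[-1])
print(max(resid))   # ~0: A1 - lambda*B1 - mu*C1 is singular for each pair
\end{verbatim}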
"Furuichi","Shigeru","jaic957@yahoo.co.jp","\section{On trace inequalities for products of matrices} By {\sl Shigeru Furuichi}. \noindent Skew information is expressed by the trace of products of matrices and of powers of matrices. In this talk, we study some matrix trace inequalities for products of matrices and powers of matrices.","trace inequality, arithmetic mean, geometric mean and nonnegative matrix","47A63","94A17"," "DJORDJEVIC","SLAVISA","slavdj@fcfm.buap.mx","\section{Manifold of proper elements} By {\sl S.V. Djordjevic and S. Sánchez Perales}. \noindent Let $X$ be a Banach space and let ${\mathcal B}(X)$ denote the space of all bounded linear transformations on $X$. With $$Eig(X)=\{ (\lambda ,L,A)\in \mathbf C\times P_1(X)\times {\mathcal B}(X): A(L)\subset L \mbox{ and } A_{|L}=\lambda I\} $$ we denote the {\it manifold of proper elements of} $X$, and let $(\lambda_0, L_0,A_0)\in Eig (X)$ be a fixed but arbitrary element. In the first part of this note we give necessary and sufficient conditions for $(\lambda ,L,A)\in Eig (X)$ in terms of the system of equations determined by $(\lambda_0, L_0,A_0)\in Eig (X)$. In the second part we apply this result to describe the relation between the multiplicity of the eigenvalue $\lambda_0$ of the operator $A_0$ and the spectrum of the operator $\widehat{A_0}$ from the quotient $X/L_0$ to itself defined by $\widehat{A_0}(x+L_0)=A_0(x)+L_0$.","Eigenvalues, Eigenvectors, Multiplicity","15A18","47A10","MS2 Eigenproblems: Theory and computation "Neumann","Michael","neumann@math.uconn.edu","\section{On Optimal Condition Numbers For Markov Chains} By {\sl Michael Neumann and Nung--Sing Sze}. \noindent Let $T=(t_{i,j})$ and $\tilde{T}=T-E$ be arbitrary nonnegative, irreducible, stochastic matrices corresponding to two ergodic Markov chains on $n$ states, with stationary distributions $\pi$ and $\tilde{\pi}$, respectively. A function $\kappa(\cdot)$ is called a {\it condition number for Markov chains} with respect to the $(\alpha,\beta)$--norm pair if $\|\pi-\tilde{\pi}\|_\alpha \leq \kappa(T)\|E\|_\beta$.\\ Various condition numbers, particularly with respect to the $(1,\infty)$ and $(\infty,\infty)$--norm pairs, have been suggested in the literature by several authors. They were ranked according to their size by Cho and Meyer in a paper from 2001. In this paper we first of all show that what we call the generalized ergodicity coefficient $\tau_p(A^{\#})=\sup_{y^te=0} \frac{\|y^tA^{\#}\|_p}{\|y\|_1}$, where $e$ is the $n$--vector of all $1$'s and $A^{\#}$ is the group inverse of $A=I-T$, is the smallest of the condition numbers of Markov chains with respect to the $(p,\infty)$--norm pair. We use this result to identify the smallest condition number of Markov chains among the $(\infty,\infty)$ and $(1,\infty)$--norm pairs. These are, respectively, $\kappa_3$ and $\kappa_6$ in the Cho--Meyer list of $8$ condition numbers.\\ Kirkland has studied $\kappa_3(T)$. He has shown that $\kappa_3(T)\geq\frac{n-1}{2n}$ and he has characterized the properties of transition matrices for which equality holds. We prove again the inequality $2\kappa_3(T)\leq \kappa_6(T)$, which appears in the Cho--Meyer paper, and we characterize the transition matrices $T$ for which $\kappa_6(T)=\frac{n-1}{n}$. There is only one such matrix: $T=(J_n-I)/(n-1)$, where $J_n$ is the $n\times n$ matrix of all $1$'s. This result demands the development of the cyclic structure of a doubly stochastic matrix with a zero diagonal.\\ Research supported by NSA Grant No. 06G--232","Markov chains, stationary distribution, stochastic matrix, group inverses, sensitivity analysis, perturbation theory, condition numbers.","15A51","65F35","This talk is for the Nonnegative and Eventually Nonnegative Matrix Mini-symposium.
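A small numerical probe of the objects involved (my own toy script, NumPy; it is not one of the $\kappa$'s of the talk): compute $\pi$, form the group inverse by Meyer's formula $A^{\#}=(I-T+e\pi^{t})^{-1}-e\pi^{t}$, and sample perturbation ratios $\|\pi-\tilde{\pi}\|_\infty/\|E\|_\infty$, which any valid condition number $\kappa(T)$ must dominate.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 5
T = rng.random((n, n)); T /= T.sum(axis=1, keepdims=True)   # row-stochastic

def stationary(T):
    w, V = np.linalg.eig(T.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1))])
    return v / v.sum()

pi = stationary(T)
Epi = np.outer(np.ones(n), pi)                   # e pi^t
A_sharp = np.linalg.inv(np.eye(n) - T + Epi) - Epi

ratios = []
for _ in range(200):
    E = 1e-6 * rng.standard_normal((n, n))
    E -= E.mean(axis=1, keepdims=True)           # keep T - E row-stochastic
    ratios.append(np.abs(stationary(T - E) - pi).max()
                  / np.abs(E).sum(axis=1).max())
print(max(ratios))   # an empirical lower bound on every valid kappa(T)
\end{verbatim}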
"Singer","Ivan","ivan.singer@imar.ro","\section MS7 {Your title here} Max-min convexity \noindent The max-min semifield is the set $\overline{R}=R\cup \{-\infty ,+\infty \}$ endowed with the operations $\oplus =\max ,\otimes =\min $. We study the semimodule $\overline{R}^{n}=\overline{R}\times ...\times \overline{R}$ ($n$ times), with the operations $\oplus $ and $\otimes $ defined componentwise. A subset $G$ of $\overline{R}^{n}$ (respectively, a function $f:\overline{R}% ^{n}\rightarrow \overline{R}$) is said to be max-min convex if the relations $x,y\in G$ (respectively, $x,y\in \overline{R}^{n}$) and $\alpha ,\beta \in \overline{R}$, $\alpha \oplus \beta =+\infty $, where $+\infty $ is the neutral element for $\otimes =\min $, imply $(\alpha \otimes x)\oplus (\beta \otimes y)\in G$ (respectively, $f((\alpha \otimes x)\oplus (\beta \otimes y))\leq (\alpha \otimes f(x))\oplus (\beta \otimes f(y)$). We give some new results on max-min convexity of sets and functions in $% \overline{R}^{n}$ (e.g. on segments, semispaces, separation, multi-order convexity, ...) that correspond to some results for max-plus convexity, replacing $\otimes =+$ of the max-plus case by the semi-group operation $% \otimes =\min $ of the max-min case. References K. Zimmermann, Convexity in semimodules. Ekonom.-Mat. Obzor 17 (1981), 199-213. V. Nitica and I. Singer, Contributions to max-min convex geometry. I: Segments. Lin. Alg. Appl. 428 (2008), 1439-1459. II: Semispaces and convex sets. Ibidem 2085-2115.","Max-min convex set; Max-min convex function","08A72","52A01"," "Singer","Ivan","ivan.singer@imar.ro","\section MS7 {Your title here} Max-min convexity \noindent The max-min semifield is the set $\overline{R}=R\cup \{-\infty ,+\infty \}$ endowed with the operations $\oplus =\max ,\otimes =\min $. We study the semimodule $\overline{R}^{n}=\overline{R}\times ...\times \overline{R}$ ($n$ times), with the operations $\oplus $ and $\otimes $ defined componentwise. A subset $G$ of $\overline{R}^{n}$ (respectively, a function $f:\overline{R}% ^{n}\rightarrow \overline{R}$) is said to be max-min convex if the relations $x,y\in G$ (respectively, $x,y\in \overline{R}^{n}$) and $\alpha ,\beta \in \overline{R}$, $\alpha \oplus \beta =+\infty $, where $+\infty $ is the neutral element for $\otimes =\min $, imply $(\alpha \otimes x)\oplus (\beta \otimes y)\in G$ (respectively, $f((\alpha \otimes x)\oplus (\beta \otimes y))\leq (\alpha \otimes f(x))\oplus (\beta \otimes f(y)$). We give some new results on max-min convexity of sets and functions in $% \overline{R}^{n}$ (e.g. on segments, semispaces, separation, multi-order convexity, ...) that correspond to some results for max-plus convexity, replacing $\otimes =+$ of the max-plus case by the semi-group operation $% \otimes =\min $ of the max-min case. References K. Zimmermann, Convexity in semimodules. Ekonom.-Mat. Obzor 17 (1981), 199-213. V. Nitica and I. Singer, Contributions to max-min convex geometry. I: Segments. Lin. Alg. Appl. 428 (2008), 1439-1459. II: Semispaces and convex sets. Ibidem 2085-2115.","Max-min convex set; Max-min convex function","08A72","52A01"," "Singer","Ivan","ivan.singer@imar.ro","\section MS7 {Your title here} Max-min convexity \noindent The max-min semifield is the set $\overline{R}=R\cup \{-\infty ,+\infty \}$ endowed with the operations $\oplus =\max ,\otimes =\min $. We study the semimodule $\overline{R}^{n}=\overline{R}\times ...\times \overline{R}$ ($n$ times), with the operations $\oplus $ and $\otimes $ defined componentwise. 
A subset $G$ of $\overline{R}^{n}$ (respectively, a function $f:\overline{R}% ^{n}\rightarrow \overline{R}$) is said to be max-min convex if the relations $x,y\in G$ (respectively, $x,y\in \overline{R}^{n}$) and $\alpha ,\beta \in \overline{R}$, $\alpha \oplus \beta =+\infty $, where $+\infty $ is the neutral element for $\otimes =\min $, imply $(\alpha \otimes x)\oplus (\beta \otimes y)\in G$ (respectively, $f((\alpha \otimes x)\oplus (\beta \otimes y))\leq (\alpha \otimes f(x))\oplus (\beta \otimes f(y)$). We give some new results on max-min convexity of sets and functions in $% \overline{R}^{n}$ (e.g. on segments, semispaces, separation, multi-order convexity, ...) that correspond to some results for max-plus convexity, replacing $\otimes =+$ of the max-plus case by the semi-group operation $% \otimes =\min $ of the max-min case. References K. Zimmermann, Convexity in semimodules. Ekonom.-Mat. Obzor 17 (1981), 199-213. V. Nitica and I. Singer, Contributions to max-min convex geometry. I: Segments. Lin. Alg. Appl. 428 (2008), 1439-1459. II: Semispaces and convex sets. Ibidem 2085-2115.","Max-min convex set; Max-min convex function","08A72","52A01"," "Singer","Ivan","ivan.singer@imar.ro","\section{Your title here} MS7: Max-min convexity By {\sl names of all authors here} Ivan Singer \noindent Insert your abstract here The max-min semifield is the set $\overline{R}=R\cup \{-\infty ,+\infty \}$ endowed with the operations $\oplus =\max ,\otimes =\min $. We study the semimodule $\overline{R}^{n}=\overline{R}\times ...\times \overline{R}$ ($n$ times), with the operations $\oplus $ and $\otimes $ defined componentwise. A subset $G$ of $\overline{R}^{n}$ (respectively, a function $f:\overline{R}% ^{n}\rightarrow \overline{R}$) is said to be max-min convex if the relations $x,y\in G$ (respectively, $x,y\in \overline{R}^{n}$) and $\alpha ,\beta \in \overline{R}$, $\alpha \oplus \beta =+\infty $, where $+\infty $ is the neutral element for $\otimes =\min $, imply $(\alpha \otimes x)\oplus (\beta \otimes y)\in G$ (respectively, $f((\alpha \otimes x)\oplus (\beta \otimes y))\leq (\alpha \otimes f(x))\oplus (\beta \otimes f(y)$). We give some results on max-min convexity of sets and functions in $% \overline{R}^{n}$ (e.g. on segments, semispaces, separation, multi-order convexity, ...) that correspond to some results for max-plus convexity, replacing $\otimes =+$ of the max-plus case by the semi-group operation $% \otimes =\min $ of the max-min case. References K. Zimmermann, Convexity in semimodules. Ekonom.-Mat. Obzor 17 (1981), 199-213. V. Nitica and I. Singer, Contributions to max-min convex geometry. I: Segments. Lin. Alg. Appl. 428 (2008), 1439-1459. II: Semispaces and convex sets. Ibidem 2085-2115.","Max-min convex set; Max-min convex function","08A72","52A01"," "Singer","Ivan","ivan.singer@imar.ro","\section{Your title here} MS7: Max-min convexity By {\sl names of all authors here} Ivan Singer \noindent Insert your abstract here The max-min semifield is the set $\overline{R}=R\cup \{-\infty ,+\infty \}$ endowed with the operations $\oplus =\max ,\otimes =\min $. We study the semimodule $\overline{R}^{n}=\overline{R}\times ...\times \overline{R}$ ($n$ times), with the operations $\oplus $ and $\otimes $ defined componentwise. 
A subset $G$ of $\overline{R}^{n}$ (respectively, a function $f:\overline{R}% ^{n}\rightarrow \overline{R}$) is said to be max-min convex if the relations $x,y\in G$ (respectively, $x,y\in \overline{R}^{n}$) and $\alpha ,\beta \in \overline{R}$, $\alpha \oplus \beta =+\infty $, where $+\infty $ is the neutral element for $\otimes =\min $, imply $(\alpha \otimes x)\oplus (\beta \otimes y)\in G$ (respectively, $f((\alpha \otimes x)\oplus (\beta \otimes y))\leq (\alpha \otimes f(x))\oplus (\beta \otimes f(y)$). We give some results on max-min convexity of sets and functions in $% \overline{R}^{n}$ (e.g. on segments, semispaces, separation, multi-order convexity, ...) that correspond to some results for max-plus convexity, replacing $\otimes =+$ of the max-plus case by the semi-group operation $% \otimes =\min $ of the max-min case. References K. Zimmermann, Convexity in semimodules. Ekonom.-Mat. Obzor 17 (1981), 199-213. V. Nitica and I. Singer, Contributions to max-min convex geometry. I: Segments. Lin. Alg. Appl. 428 (2008), 1439-1459. II: Semispaces and convex sets. Ibidem 2085-2115.","Max-min convex set; Max-min convex function","08A72","52A01"," "Mart{\'\i}nez","Jos\'e-Javier","jjavier.martinez@uah.es","\section{Polynomial regression in the Bernstein basis} By {\sl Ana Marco, Jos\'e-Javier Mart{\'\i}nez}. \noindent The problem of polynomial regression in which the usual monomial basis is replaced by the Bernstein basis is considered. The coefficient matrix $A$ of the overdetermined system to be solved in the least-squares sense is then a rectangular Bernstein-Vandermonde matrix. In order to use the method based on the QR decomposition which was developed in the celebrated paper [1], the first stage will consist of computing the bidiagonal decomposition of the coefficient matrix $A$ by means of an extension to the rectangular case of the algorithm presented in [3]. Starting from that bidiagonal decomposition, an algorithm for obtaining the QR decomposition of $A$ due to Koev [2] is then applied. Finally, a triangular system is solved by using the bidiagonal decomposition of the $R$-factor of $A$. Some numerical experiments showing the behaviour of our approach are included. \bigskip [1] G. Golub: Numerical methods for solving linear least squares problems. Numerische Mathematik 7, 206-216 (1965). \medskip [2] P. Koev: Accurate computations with totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 29(3), 731-751 (2007). \medskip A. Marco, J.-J. Mart{\'\i}nez: A fast and accurate algorithm for solving Bernstein-Vandermonde linear systems. Linear Algebra Appl. 422, 616-628 (2007)","Least squares; Bernstein basis; Bidiagonal decomposition","65F05","65F20"," "Mart{\'\i}nez","Jos\'e-Javier","jjavier.martinez@uah.es","\section{Polynomial regression in the Bernstein basis} By {\sl Ana Marco, Jos\'e-Javier Mart{\'\i}nez}. \noindent The problem of polynomial regression in which the usual monomial basis is replaced by the Bernstein basis is considered. The coefficient matrix $A$ of the overdetermined system to be solved in the least-squares sense is then a rectangular Bernstein-Vandermonde matrix. In order to use the method based on the QR decomposition which was developed in the celebrated paper [1], the first stage will consist of computing the bidiagonal decomposition of the coefficient matrix $A$ by means of an extension to the rectangular case of the algorithm presented in [3]. 
Starting from that bidiagonal decomposition, an algorithm for obtaining the QR decomposition of $A$ due to Koev [2] is then applied. Finally, a triangular system is solved by using the bidiagonal decomposition of the $R$-factor of $A$. Some numerical experiments showing the behaviour of our approach are included. \bigskip [1] G. Golub: Numerical methods for solving linear least squares problems. Numerische Mathematik 7, 206-216 (1965). \medskip [2] P. Koev: Accurate computations with totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 29(3), 731-751 (2007). \medskip [3] A. Marco, J.-J. Mart{\'\i}nez: A fast and accurate algorithm for solving Bernstein-Vandermonde linear systems. Linear Algebra Appl. 422, 616-628 (2007)","Least squares; Bernstein basis; Bidiagonal decomposition","65F05","65F20"," 
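For orientation, a plain floating-point version of this least-squares procedure can be sketched in a few lines of NumPy (a hedged illustration only: it uses an ordinary QR factorization, not the bidiagonal-decomposition-based algorithms of [2] and [3] that achieve high relative accuracy; the nodes, degree and data below are made up):

\begin{verbatim}
# Polynomial regression in the Bernstein basis via a standard QR solve.
import numpy as np
from math import comb

def bernstein_vandermonde(nodes, n):
    """m x (n+1) matrix with entries B_k^n(x_i) = C(n,k) x^k (1-x)^(n-k)."""
    return np.array([[comb(n, k) * xi**k * (1 - xi)**(n - k)
                      for k in range(n + 1)] for xi in nodes])

# Hypothetical data: 12 nodes in (0,1), fit a degree-5 polynomial.
rng = np.random.default_rng(0)
x = np.linspace(0.05, 0.95, 12)
f = np.cos(2 * x) + 0.01 * rng.standard_normal(12)

A = bernstein_vandermonde(x, 5)     # rectangular Bernstein-Vandermonde matrix
Q, R = np.linalg.qr(A)              # thin QR of the coefficient matrix
c = np.linalg.solve(R, Q.T @ f)     # triangular solve for the coefficients
print("residual norm:", np.linalg.norm(A @ c - f))
\end{verbatim}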
"Klein","Andre","A.A.B.Klein@uva.nl","\section{Tensor Sylvester matrices and information matrices of multiple stationary processes} By {\sl Andr\'{e} Klein}. 
\noindent Consider the matrix polynomials $A(z)$ and $B(z)$ given by \[ A(z)=\sum\limits_{j=0}^{p}A_{j}z^{j} \quad \text{and} \quad B(z)=\sum\limits_{j=0}^{q}B_{j}z^{j}, \] where $A_{0}\equiv B_{0}\equiv I_{n}$.\newline Gohberg and Lerer [1] study the resultant property of the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)\triangleq \mathcal{S}(-B\otimes I_{n},I_{n}\otimes A)$, that is, $\mathcal{S}^{\otimes }(-B,A)=\left( \begin{array}{ccccccc} \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} \\ I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p} \end{array} \right) $. In [1] it is proved that the matrix polynomials $A(z)$ and $B(z)$ have at least one common eigenvalue if and only if $\det \mathcal{S}^{\otimes }(-B,A)=0$, that is, if and only if the matrix $\mathcal{S}^{\otimes }(-B,A)$ is singular. In other words, the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)$ becomes singular if and only if the scalar polynomials $\det A(z)$ and $\det B(z)$ have at least one common root. Consequently, it is a multiple resultant. In [2], this property is extended to the Fisher information matrix of a stationary vector autoregressive and moving average (VARMA) process. The purpose of this talk is to display a representation of the Fisher information matrix of a stationary VARMAX process in terms of tensor Sylvester matrices, where the X stands for an exogenous or control variable. VARMAX processes are in common use in stochastic systems and control.","Tensor Sylvester matrix, Fisher information matrix","15A23","15A69"," 
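The block structure of $\mathcal{S}^{\otimes }(-B,A)$ is straightforward to assemble with Kronecker products; the following NumPy sketch (sizes and coefficients are made-up test data, not from [1] or [2]) also checks the resultant property in the scalar case $n=1$, where $A(z)$ and $B(z)$ are chosen to share the root $z=1$:

\begin{verbatim}
# Assembling the tensor Sylvester matrix S^{(x)}(-B, A).
import numpy as np

def tensor_sylvester(Acoef, Bcoef, n):
    """Acoef = [A_0,...,A_p], Bcoef = [B_0,...,B_q], with A_0 = B_0 = I_n."""
    p, q = len(Acoef) - 1, len(Bcoef) - 1
    N = n * n                                # each block is n^2 x n^2
    S = np.zeros(((p + q) * N, (p + q) * N))
    I = np.eye(n)
    for i in range(p):                       # p block rows of (-B_j) (x) I_n
        for j, Bj in enumerate(Bcoef):
            S[i*N:(i+1)*N, (i+j)*N:(i+j+1)*N] = np.kron(-Bj, I)
    for i in range(q):                       # q block rows of I_n (x) A_j
        r = (p + i) * N
        for j, Aj in enumerate(Acoef):
            S[r:r+N, (i+j)*N:(i+j+1)*N] = np.kron(I, Aj)
    return S

# Scalar sanity check (n = 1): A(z) = (1-z)(1-z/2), B(z) = (1-z)(1-z/3)
# share the root z = 1, so the tensor Sylvester matrix must be singular.
A = [np.array([[1.0]]), np.array([[-1.5]]), np.array([[0.5]])]
B = [np.array([[1.0]]), np.array([[-4.0/3]]), np.array([[1.0/3]])]
print(np.linalg.det(tensor_sylvester(A, B, 1)))   # ~0
\end{verbatim}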
"Uhlig","Frank","uhligfd@auburn.edu","\section{Convex and Non-convex Optimization Problems for the Field of Values of a Matrix} By {\sl Frank Uhlig, Department of Mathematics and Statistics, Auburn University, Auburn, AL 36849--5310, USA; uhligfd@auburn.edu}. \noindent We introduce and study numerical algorithms that compute the minimal and maximal distances between $0 \in \CC$ and points in the field of values $F(A) = \{ x^*Ax \mid x \in \CC^n \ , \ \|x\|_2 = 1\} \subset \CC$ for a complex matrix $A_{n,n}$. Finding the minimal distance from $0 \in \CC$ to $F(A)$ is a convex optimization problem if $0 \notin F(A)$, and thus it has a unique solution, called the Crawford number, whose magnitude carries information on the stability margin of the associated system. If $0 \in F(A)$, this is a non-convex optimization problem, and consequently there can be multiple solutions, or local minima that are not global. Non-convexity also holds for the maximal distance problem between points in $F(A)$ and $0 \in \CC$. This maximal distance is commonly called the numerical radius $numrad(A)$, for which the inequality $\rho(A) \leq numrad(A) \leq \|A\|$ is well established. \\ Both cases can be solved efficiently numerically by using ideas from geometric computing, eigenanalyses of linear combinations of the Hermitian and skew-Hermitian parts of $A$, and the rotation method introduced by C. R. 
Johnson in the 1970s to compute the boundary of the field of values.","field of values, quadratic form, Crawford number, numerical radius, geometric computing, eigenvalue, convexity, convex optimization, non-convex optimization, efficiency","65F30","15A60, 1"," 
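A basic grid version of the rotation idea can be sketched as follows (an illustrative NumPy fragment, not the talk's algorithms: it samples boundary points of $F(A)$ from top eigenvectors of the Hermitian parts of $e^{\imath\theta}A$ and reads off rough estimates of the numerical radius and the Crawford number; the matrix is random test data):

\begin{verbatim}
# Boundary of the field of values via Hermitian parts of exp(i*theta)*A.
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

thetas = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
boundary, lam_min = [], []
for t in thetas:
    M = np.exp(1j * t) * A
    H = (M + M.conj().T) / 2              # Hermitian part of the rotated matrix
    w, V = np.linalg.eigh(H)              # ascending eigenvalues
    v = V[:, -1]                          # eigenvector of the largest eigenvalue
    boundary.append(np.conj(v) @ A @ v)   # a boundary point of F(A)
    lam_min.append(w[0])

print("numerical radius ~", max(np.abs(boundary)))
# max over theta of the smallest eigenvalue; positive iff 0 lies outside F(A)
print("Crawford number ~", max(max(lam_min), 0.0))
\end{verbatim}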
"Gassó","Maria T.","mgasso@mat.upv.es","\section{The class of inverse-positive matrices with checkerboard pattern} By {\sl Manuel F. Abad, Maria T. Gass\'o and Juan R. Torregrosa}. \noindent In economics as well as in other sciences, the inverse-positivity of real square matrices has been an important topic. A nonsingular real matrix $A$ is said to be inverse-positive if all the elements of its inverse are nonnegative. 
An inverse-positive matrix which is also a $Z$-matrix is a nonsingular $M$-matrix, so the class of inverse-positive matrices contains the nonsingular $M$-matrices, which have been widely studied and whose applications, for example, in iterative methods, dynamic systems, economics, mathematical programming, etc., are well known. Of course, not every inverse-positive matrix is an $M$-matrix. For instance, \[ A=\left( \begin{array} {rr} -1 & 2 \\ 3 & -1 \end{array} \right) \] is an inverse-positive matrix that is not an $M$-matrix. The concept of inverse-positivity is preserved by multiplication, left or right positive diagonal multiplication, positive diagonal similarity and permutation similarity. The problem of characterizing inverse-positive matrices has been extensively dealt with in the literature (see for instance \cite{BP}). The interest of this problem arises from the fact that a linear mapping $F(x)=Ax$ from ${R}^{n}$ into itself is inverse isotone if and only if $A$ is inverse-positive. In particular, this allows us to ensure the existence of a positive solution of the linear system $Ax=b$ for any $b \in R^{n}_{+}$. In this paper we present several matrices that very often occur in relation to systems of linear or nonlinear equations in a wide variety of areas, including finite difference methods for boundary value problems for partial differential equations, the Leontief model of circulating capital without joint production, and Markov processes in probability and statistics. For example, matrices that for size $5 \times 5$ have the form \[ A=\left( \begin{array} {rrrrr} 1 & -a & 1 & -a & 1 \\ 1 & 1 & -a & 1 & -a \\ -a & 1 & 1 & -a & 1 \\ 1 & -a & 1 & 1 & -a \\ -a & 1 & -a & 1 & 1 \end{array} \right), \] where $a$ is a real parameter with an economic interpretation. Are these matrices inverse-positive? We study this question and we analyze when the concept of inverse-positivity is preserved by the Hadamard product $A\circ A^{-1}$. In this work we present some conditions that yield new characterizations of inverse-positive matrices. Johnson \cite{J1} studied the possible sign patterns of a matrix which are compatible with inverse-positivity. Following his results, we analyze the inverse-positive concept for a particular type of pattern: the checkerboard pattern. An $n \times n$ real matrix $A=(a_{i,j})$ is said to have a checkerboard pattern if sign$(a_{i,j})=(-1)^{i+j}$, $i,j=1,2,\ldots,n$. We study in this paper the inverse-positivity of bidiagonal, tridiagonal and lower (upper) triangular matrices with checkerboard pattern, and we obtain characterizations of inverse-positivity for each class of matrices. Several authors have investigated the Hadamard product of matrices. Johnson \cite{J2} showed that if the sign pattern is properly adjusted the Hadamard product of $M$-matrices is again an $M$-matrix, and that for any pair $M$, $N$ of $M$-matrices the Hadamard product $M\circ N^{-1}$ is again an $M$-matrix. This result does not hold in general for inverse-positive matrices. We analyze when the Hadamard product $M \circ N^{-1}$, for $M$, $N$ checkerboard pattern inverse-positive matrices, is an inverse-positive matrix. \begin{references}{99} \bibitem{BP} A. Berman, R.J. Plemmons, {\em Nonnegative Matrices in the Mathematical Sciences}, SIAM, 1994. \bibitem{J2} C.R. Johnson, {\em A Hadamard Product Involving $M$-matrices}, Linear Algebra and its Applications, 4 (1977) 261-264. \bibitem{J1} C.R. 
Johnson, {\em Sign patterns of inverse nonnegative matrices}, Linear Algebra and its Applications, 55 (1983) 69-80. \end{references}","inverse-positive matrix, sign pattern, Hadamard product.","15A09","15A48"," 
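The definitions above are easy to probe numerically; the sketch below (NumPy, illustration only, with made-up tolerances and parameter values) checks inverse-positivity for the $2\times 2$ example, for a Hadamard product $A\circ A^{-1}$, and for the $5\times 5$ parametric matrix of the abstract:

\begin{verbatim}
# Numerical inverse-positivity checks (not a characterization).
import numpy as np

def is_inverse_positive(A, tol=1e-12):
    try:
        Ainv = np.linalg.inv(A)
    except np.linalg.LinAlgError:
        return False
    return bool((Ainv >= -tol).all())

A = np.array([[-1.0,  2.0],
              [ 3.0, -1.0]])
print(is_inverse_positive(A))        # True: inverse-positive, not an M-matrix
print(np.linalg.inv(A))              # all entries nonnegative
print(is_inverse_positive(A * np.linalg.inv(A)))   # Hadamard product A o A^{-1}

def A5(a):
    """The 5 x 5 parametric matrix from the abstract."""
    return np.array([[ 1, -a,  1, -a,  1],
                     [ 1,  1, -a,  1, -a],
                     [-a,  1,  1, -a,  1],
                     [ 1, -a,  1,  1, -a],
                     [-a,  1, -a,  1,  1]], dtype=float)

for a in [0.2, 0.5, 0.9]:            # sample parameter values
    print(a, is_inverse_positive(A5(a)))
\end{verbatim}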
"Boettcher","Albrecht","aboettch@mathematik.tu-chemnitz.de","\section{Toeplitz matrices with Fisher-Hartwig symbols} By {\sl Albrecht B\""ottcher}. \noindent Asymptotic properties of large Toeplitz matrices are best understood if the matrix is constituted by the Fourier coefficients of a smooth function without zeros on the unit circle and with winding number zero. If at least one of these conditions on the generating function is violated, one speaks of Toeplitz matrices with Fisher-Hartwig symbols. \smallskip The talk is intended as an introduction to the realm of Toeplitz matrices with Fisher-Hartwig symbols for a broad audience. We show that several highly interesting and therefore very popular Toeplitz matrices are just matrices with a Fisher-Hartwig symbol and that many questions on general Toeplitz matrices, for example, the asymptotics of the extremal eigenvalues, are nothing but specific problems for matrices with Fisher-Hartwig symbols. We discuss both classical and recent results concerning the asymptotic behavior of determinants, condition numbers, eigenvalues, and eigenvectors as the matrix dimension goes to infinity.","Toeplitz matrix, Fisher-Hartwig, spectral theory, determinant","47B35","15A18","This is a plenary lecture. "Sergeev","Sergey","sergiej@gmail.com","\section{On Kleene stars and intersection of finitely generated semimodules} By {\sl Sergey Sergeev}. \noindent It is known that Kleene stars are fundamental objects in max-algebra and in other algebraic structures with idempotent addition. They play an important role in solving classical problems in the spectral theory, and also in other respects. On the other hand, the approach of tropical convexity puts forward the tropical cellular decomposition, meaning that any tropical polytope (i.e., finitely generated semimodule) can be cut into a finite number of convex pieces and subsequently treated as a cellular complex. We show that any convex piece of this complex is the max-algebraic column span of a uniquely defined Kleene star. We provide some evidence that the tropical cellular decomposition can be used as a purely max-algebraic tool, with the main focus on the problem of finding a point in the intersection of several finitely generated semimodules.","max-algebra, Kleene star, semimodule, decomposition","52A30","15A39"," "Butkovic","Peter","p.butkovic@bham.ac.uk","\section{On the permuted max-algebraic eigenvector problem} By {\sl Peter Butkovic}. \noindent Let $a\oplus b=\max (a,b)$, $a\otimes b=a+b$ for $a,b\in \overline{\mathbb{R}}:=\mathbb{R}\cup \{-\infty \}$ and extend these operations to matrices and vectors as in conventional linear algebra. The following \textit{max-algebraic eigenvector problem} has been intensively studied in the past: Given $A\in \overline{\mathbb{R}}^{n\times n},$ find all $x\in \overline{\mathbb{R}}^{n},x\neq (-\infty ,...,-\infty )^{T}$ (\textit{eigenvectors}) such that $A\otimes x=\lambda \otimes x$ for some $\lambda \in \overline{\mathbb{R}}.$ In our talk we deal with the \textit{permuted eigenvector problem}: Given $A\in \overline{\mathbb{R}}^{n\times n}$ and $x\in \overline{\mathbb{R}}^{n},$ is it possible to permute the components of $x$ so that the arising vector $x^{\prime }$ is a (max-algebraic) eigenvector of $A$? This problem can be proved to be $NP$-complete using a polynomial transformation from BANDWIDTH. As a by-product, the following \textit{permuted max-linear system problem} can also be shown $NP$-complete: Given $A\in \overline{\mathbb{R}}^{m\times n}$ and $b\in \overline{\mathbb{R}}^{m},$ is it possible to permute the components of $b$ so that for the arising vector $b^{\prime }$ the system $A\otimes x=b^{\prime }$ has a solution? Both problems can be solved in polynomial time when $n$ does not exceed $3$.","Eigenvector; Permutation; NP-complete","15A18","68Q25"," 
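For a fixed $x$ the underlying feasibility check is simple; only the quantification over permutations makes the problem hard. The following NumPy sketch (illustrative data; the brute-force loop over permutations is exponential and only sensible for tiny $n$, consistent with the NP-completeness result above) tests whether a vector, or some permutation of it, is a max-algebraic eigenvector:

\begin{verbatim}
# Max-plus (max-algebraic) eigenvector check: is A (x) x = lambda (x) x?
import itertools
import numpy as np

def maxplus_matvec(A, x):
    return np.max(A + x[None, :], axis=1)   # (A (x) x)_i = max_j (a_ij + x_j)

def is_eigenvector(A, x):
    y = maxplus_matvec(A, x)
    lam = y - x                              # must be one constant lambda
    ok = np.all(np.isfinite(lam)) and np.allclose(lam, lam[0])
    return ok, lam[0]

A = np.array([[0.0, 3.0],
              [1.0, 2.0]])
x = np.array([1.0, 0.0])
print(is_eigenvector(A, x))

# Brute force over all permutations of the components of x (tiny n only):
for p in itertools.permutations(range(len(x))):
    ok, lam = is_eigenvector(A, x[list(p)])
    if ok:
        print("permutation", p, "gives an eigenvector with lambda =", lam)
\end{verbatim}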
"Klasa-Bompoint","Jacqueline","jklasa@dawsoncollege.qc.ca","\section{Few pedagogical scenarios in linear algebra with Cabri and Maple} By {\sl Jacqueline Klasa-Bompoint, Collège Dawson, Montréal, Canada; jklasa@dawsoncollege.qc.ca}. \noindent With the appearance of very rapidly improving technologies, since the 90’s we have faced many reform movements placing much more importance on the visualization of mathematical concepts, together with more socialization (collaborative learning). Just to name a few reform groups in the USA: the Harvard Group for Calculus and for Linear Algebra; ATLAST, organized by S. Leon after the ILAS symposium of 1992; and the LACSG, started by D. Lay in 1990 and then continued with D. Carlson (1993) and many others. However, some researchers like J.P. Dorier & A. Sierpinska were not optimistic and declared “It is commonly claimed in the discussions about the teaching and learning of linear algebra that linear algebra courses are badly designed and badly taught and that no matter how it is taught, linear algebra remains a cognitively and conceptually difficult subject”. On the other hand, M. Artigue strongly advocates the use of CAS's, but with a constant awareness that the mathematics learned in such a software environment is changing. How do we really teach linear algebra now? See the standard Anton textbook and then the much praised book “Linear Algebra and its Applications” written in 1994 by D. Lay. How hard is it really now to teach and to learn this topic? We shall repeat, like J. Hillel, A. Sierpinska & T. Dreyfus, that the teaching of linear algebra offers to students many cognitive problems related to three intertwined thinking modes: geometric, computational (with matrices) and algebraic (symbolic). We could follow the APOS theory of E. Dubinsky and see that it will be necessary for the teacher to proceed to a genetic decomposition of every mathematical concept of linear algebra before being able to conceive a pedagogical scenario that will have to bring students from the ""action"" to the more elaborated state of ""process"" and then, with luck, make them reach the most abstract levels of ""objects"" and even higher structured ""schemes"". While devising classes and computer labs for my students in linear algebra, I was inspired by the good ideas presented by the mentioned authors and many others, such as G. Bagni, J.L. Dorier and Fischbein, D. Gentner, G. Harel, J. Hillel, and J.G. Molina Zavaleta. I am a mathematician who teaches in a CEGEP, which is a special college of the province of Québec in Canada. Pedagogical scenarios based on Cabri and Maple will be presented in this study for a few stumbling blocks in the learning of linear algebra: linear transformations, eigenvectors and eigenvalues, quadratic forms and conics with changes of bases, and finally singular values. When immersed in this software environment, I restrict all the demonstrations to $R^2$ and $R^3$. Can visualization and manipulation improve and facilitate the learning of linear algebra? As I am biased, of course I will say yes; really, we would need a thorough evaluation and analysis of this teaching procedure to be able to give answers. As Ed. Dubinsky would say, “This situation provides us with the opportunity to build a synthesis between the abstract and concrete…The interplay between concrete phenomena and abstract thinking.” I will also add that students working in teams around computers (or even graphing calculators), only coached by the teacher at times, become experts in the discipline they experiment with. About the roles of the CAS Maple and the geometrical software, we will agree with the Cabrilog slogan “Cabri makes tough maths concepts easier to learn thanks to its kinaesthetic learning approach!” while Maple acts like a good big brother, doing all the boring calculations for the students and also producing instructive animations, unfortunately mostly programmed by the teacher.","Scenarios, software Cabri Maple","97","97C80","Also 97U70 "Weaver","James","jweaver@uwf.edu","\section{Nonsingularity of divisor tournaments} By {\sl Rohan Hemasinha (Dept. of Math/Stat, Univ. of West Florida, Pensacola, FL 32514, USA; rhemasin@uwf.edu), Jeffrey L. Stuart (Dept. of Mathematics, Pacific Lutheran Univ., Tacoma, WA 98447, USA; jeffrey.stuart@plu.edu) and James R. Weaver (speaker; Dept. of Math/Stat, Univ. of West Florida, Pensacola, FL 32514, USA; jweaver@uwf.edu)}. \noindent Matrix theoretic properties and examples of divisor tournaments are discussed. Emphasis is placed on results and conjectures about the nonsingularity of the adjacency matrix of a divisor tournament. For an integer $n>2$, the divisor tournament $D(T_{n})$ (a directed graph on the vertices $2,3,\ldots,n$) is defined by: $i$ is adjacent to $j$ if $i$ divides $j$, otherwise $j$ is adjacent to $i$, for $2\leq i<j\leq n$. The adjacency matrix $T_{n}$ of the directed graph $D(T_{n})$ with vertex set $\{2,3,\ldots,n\}$ is the $(n-1)\times(n-1)$ matrix $[t_{ij}]$ defined by $t_{ij}=1$ and $t_{ji}=0$ if $i\mid j$, and $t_{ij}=0$ and $t_{ji}=1$ if $i\nmid j$, for $2\leq i<j\leq n$. We consider linear time-invariant systems \[ \Sigma:\quad \left\{ \begin{array}{rclll} \dot{x}(t) &=& Ax(t) + Bu(t), & \quad t > 0, & \quad x(0)=x^0, \\ y(t) &=& Cx(t) + Du(t), & \quad t \geq 0, & \end{array} \right. \] with $A\in \mathbf{R}^{n\times n}$, $B\in \mathbf{R}^{n\times m}$, and $C\in\mathbf{R}^{p\times n}$ arising, e.g., from the discretization and linearization of parabolic PDEs. 
We will assume that the system $\Sigma$ is large-scale with $n \gg m,\, p$ and that the system is unstable, satisfying \[ \Lambda(A)\cap \mathbf{C}^+ \ne \emptyset,\quad \Lambda(A)\cap \jmath\mathbf{R}=\emptyset. \] We further allow the system matrix $A$ to be dense, provided that a {\em data-sparse} representation exists. To reduce the dimension of the system $\Sigma$, we apply an approach based on the controllability and observability Gramians of $\Sigma$. The numerical solution for these Gramians is obtained by solving two algebraic Bernoulli and two Lyapunov equations. As standard methods for the solution of matrix equations are of limited use for large-scale systems, we investigate approaches based on the {\em matrix sign function} method. To make this iterative method applicable in the large-scale setting, we incorporate structural information from the underlying PDE model into the approach. By using data-sparse matrix approximations, hierarchical matrix formats, and the corresponding formatted arithmetic, we obtain an efficient solver having linear-polylogarithmic complexity. Once the Gramians are computed, a reduced-order system can be obtained by applying the usual {\em balanced truncation method}.","model reduction, unstable LTI systems, hierarchical matrices","93B40","65F10","talk is part of ""MS5, Linear Algebra in Model Reduction"" 
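One building block of this approach, the sign-function solution of a stable Lyapunov equation, can be sketched densely as follows (a NumPy illustration with made-up data; the talk's point is precisely to replace such dense arithmetic by formatted hierarchical-matrix arithmetic, and the unstable part, handled via algebraic Bernoulli equations, is not shown):

\begin{verbatim}
# Roberts' sign-function method for A^T X + X A + Q = 0 with A stable:
# sign([[A^T, Q], [0, -A]]) = [[-I, 2X], [0, I]].
import numpy as np

def sign_newton(Z, maxit=100, tol=1e-12):
    """Matrix sign via the scaled Newton iteration Z <- (cZ + (cZ)^{-1})/2."""
    for _ in range(maxit):
        c = abs(np.linalg.det(Z)) ** (-1.0 / Z.shape[0])  # determinantal scaling
        Znew = 0.5 * (c * Z + np.linalg.inv(c * Z))
        if np.linalg.norm(Znew - Z, 1) <= tol * np.linalg.norm(Z, 1):
            return Znew
        Z = Znew
    return Z

def lyap_sign(A, Q):
    n = A.shape[0]
    H = np.block([[A.T, Q], [np.zeros((n, n)), -A]])
    S = sign_newton(H)
    return 0.5 * S[:n, n:]                 # read off X from the (1,2) block

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)) - 3 * np.eye(n)   # shifted: spectrum in C^-
Q = np.eye(n)
X = lyap_sign(A, Q)
print(np.linalg.norm(A.T @ X + X @ A + Q))        # ~0
\end{verbatim}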
"Feng","Lihong","lihong.feng@mathematik.tu-chemnitz.de","\section{Model Order Reduction of Systems with Coupled Parameters\thanks{This research is supported by the Alexander von Humboldt-Foundation and by the research network \emph{SyreNe --- System Reduction for Nanoscale IC Design} within the program \textsl{Mathematics for Innovations in Industry and Services} (Mathematik f\""ur Innovationen in Industrie und Dienstleistungen) funded by the German Federal Ministry of Education and Science (BMBF).}} By {\sl Peter Benner\footnotemark[2] \and Lihong Feng\thanks{Mathematics in Industry and Technology, Faculty of Mathematics, Chemnitz University of Technology, D-09107 Chemnitz, Germany; \texttt{benner@mathematik.tu-chemnitz.de, lihong.feng@mathematik.tu-chemnitz.de}}~\thanks{Corresponding author.}}. \noindent We consider model order reduction of parametric systems with parameters which are nonlinear functions of the frequency parameter $s$. Such systems result from, for example, the discretization of electromagnetic systems with surface losses \cite{WittigSW06}. Since the parameters are functions of the frequency $s$, they are highly coupled with each other. We treat them as individual parameters when we implement model order reduction. By analyzing existing methods of computing the projection matrix for model order reduction, we show the applicability of each method and propose an optimized method for the parametric system considered in this paper. The transfer function of the parametric systems considered here takes the form \begin{equation} \label{trans1} H(s)=sB^\mathrm{T}(s^2I_n-1/\sqrt{s} D+ A)^{-1}B, \end{equation} where $A,D$ and $B$ are $n\times n$ and $n\times m$ matrices, respectively, and $I_n$ is the identity of suitable size. To apply parametric model order reduction to (\ref{trans1}), we first expand $H(s)$ into a power series. Using a series expansion about an expansion point $s_0$, and defining $\sigma_1:=\frac{1}{s^2\sqrt{s}}-\frac{1}{s_0^2\sqrt{s_0}}$, $\sigma_2:=\frac{1}{s^2}-\frac{1}{s_0^2}$, we may use the three different methods below to compute a projection matrix $V$ and get the reduced-order transfer function \[ \hat{H}(s) =s\hat{B}^\mathrm{T}(s^2 I_r -1/\sqrt{s} \hat{D}+ \hat{A})^{-1}\hat{B}, \] where $\hat{A}=V^T A V$, $\hat{B}=V^T B$, etc., and $V$ is an $n\times r$ projection matrix with $V^T V= I_r$. To simplify notation, in the following we use $G:=I-\frac{1}{s_0^2\sqrt{s_0}}D+\frac{1}{s_0^2}A$, $B_M:=G^{-1}B$, $M_1:=G^{-1}D$, and $M_2:=-G^{-1}A$. \subsubsection*{Directly computing $V$} A simple and direct way for obtaining $V$ is to compute the coefficient matrices in the series expansion \begin{equation} \label{trans5} \begin{array}{rcl} H(s)&=&\frac{1}{s}B^\mathrm{T}[B_M+(M_1B_M\sigma_1 +M_2B_M\sigma_2 )+(M_1^2B_M\sigma_1^2 \\ && + (M_1M_2+M_2M_1)B_M\sigma_1\sigma_2 +M_2^2B_M\sigma_2^2)+(M_1^3B_M\sigma_1^3+\ldots)+\ldots], \end{array} \end{equation} by direct matrix multiplication and to orthogonalize these coefficients to get the matrix $V$ \cite{Daniel04}. After the coefficients $B_M$, $M_1B_M, M_2B_M$, $M_1^2B_M$, $(M_1M_2+M_2M_1)B_M$, $M_2^2B_M$, $M_1^3B_M$, $\ldots$ are computed, the projection matrix $V$ can be obtained by \begin{equation} \label{directV} \textrm{range}\{V\}=\textrm{orthogonalize}\{B_M, M_1B_M, M_2B_M, M_1^2B_M, (M_1M_2+M_2M_1)B_M, M_2^2B_M, M_1^3B_M, \ldots \} \end{equation} Unfortunately, the coefficients quickly become linearly dependent due to numerical instability. In the end, the matrix $V$ is often so inaccurate that it does not possess the expected theoretical properties. \subsubsection*{Recursively computing $V$} The series expansion (\ref{trans5}) can also be written in the following form: \begin{equation} \label{trans6} H(s)=\frac{1}{s}[B_M+(\sigma_1 M_1+\sigma_2 M_2)B_M+\ldots+(\sigma_1 M_1+\sigma_2 M_2)^iB_M+\ldots] \end{equation} Using (\ref{trans6}), we define \begin{equation} \label{recR} \begin{array}{rcl} R_0&=&B_M,\\ R_1&=&[M_1, M_2]R_0,\\ \vdots\\ R_j&=&[M_1,M_2]R_{j-1},\\ \vdots. \end{array} \end{equation} We see that $R_0, R_1, \ldots, R_j, \ldots$ include all the coefficient matrices in the series expansion (\ref{trans6}). Therefore, we can use $R_0, R_1, \ldots, R_j, \ldots$ to generate the projection matrix $V$: \begin{equation} \label{recursiveV} \textrm{range}\{V\}=\textrm{colspan}\{R_0, R_1,\ldots, R_m\}. \end{equation} Here, $V$ can be computed employing the recursive relations between the $R_j, \ j=0,1,\ldots, m$, combined with the modified Gram-Schmidt process \cite{FengBICIAM07}. \subsubsection*{Improved algorithm for recursively computing $V$} Note that the coefficients $M_1M_2B_M$ and $M_2M_1B_M$ are two individual terms in (\ref{recR}), which are computed and orthogonalized sequentially within the modified Gram-Schmidt process. Observing that they are actually both coefficients of $\sigma_1\sigma_2$, they can be combined into one term during the computation, as in (\ref{directV}). Based on this, we develop an algorithm which can compute $V$ in (\ref{directV}) by a modified Gram-Schmidt process. With this algorithm, the matrix $V$ is computed in a numerically stable way, which guarantees the accuracy of the reduced-order model. Furthermore, the size of the reduced-order model is smaller than that of the reduced-order model derived by (\ref{recursiveV}). Therefore, this improved algorithm is optimal for the parametric system considered in this paper. 
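The recursion (\ref{recR}) combined with on-the-fly modified Gram-Schmidt orthogonalization can be sketched as follows (a small dense NumPy illustration with made-up matrices; it generates the moment blocks $[M_1R_{j-1},\,M_2R_{j-1}]$ and keeps only numerically new directions, without the coefficient merging of the improved algorithm):

\begin{verbatim}
# Recursive generation of the projection matrix V as in (recursiveV).
import numpy as np

def mgs_append(V, W, tol=1e-10):
    """Orthogonalize the columns of W against V (modified Gram-Schmidt),
    appending only directions that are numerically new."""
    cols = [] if V is None else [V[:, k] for k in range(V.shape[1])]
    for j in range(W.shape[1]):
        w = W[:, j].copy()
        for v in cols:
            w -= (v @ w) * v
        nrm = np.linalg.norm(w)
        if nrm > tol:
            cols.append(w / nrm)
    return np.column_stack(cols)

rng = np.random.default_rng(3)
n, m, levels = 40, 2, 3
M1 = rng.standard_normal((n, n)) / n      # stand-ins for G^{-1}D and -G^{-1}A
M2 = rng.standard_normal((n, n)) / n
B_M = rng.standard_normal((n, m))         # stand-in for G^{-1}B

V, R = None, B_M
for _ in range(levels + 1):
    V = mgs_append(V, R)
    R = np.hstack([M1 @ R, M2 @ R])       # next level of moment blocks

print(V.shape, np.linalg.norm(V.T @ V - np.eye(V.shape[1])))
\end{verbatim}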
\begin{thebibliography}{1} \bibitem{WittigSW06} T. Wittig, R. Schuhmann, and T. Weiland. \newblock Model order reduction for large systems in computational electromagnetics. \newblock {\em Linear Algebra and its Applications}, 415(2-3):499-530, 2006. \bibitem{Daniel04} L.~Daniel, O.C. Siong, L.S. Chay, K.H. Lee, and J.~White. \newblock A multiparameter moment-matching model-reduction approach for generating geometrically parameterized interconnect performance models. \newblock {\em IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst.}, 22 (5):678--693, 2004. \bibitem{FengBICIAM07} L. Feng and P. Benner. \newblock A Robust Algorithm for Parametric Model Order Reduction. \newblock {\em Proc. Appl. Math. Mech.}, 7, 2008 (to appear). \end{thebibliography}","Model order reduction, parametric system, coupled parameters","65P","94C","the talk is part of ""MS5, Linear Algebra in Model Reduction"" "Fasbender","Heike","h.fassbender@tu-bs.de","\section{On the numerical solution of large-scale sparse discrete-time Riccati equations} By {\sl Heike Fa\ss bender and Peter Benner}. \noindent Inspired by a large-scale sparse discrete-time Riccati equation which arises in a spectral factorization problem, the efficient numerical solution of such Riccati equations is studied in this work. Spectral factorization is a crucial step in the solution of linear quadratic estimation and control problems. A variety of methods has been developed over the years for the computation of canonical spectral factors for processes with rational spectral densities, see, e.g., the survey \cite{SayK01}. One approach involves the spectral factorization via a discrete-time Riccati equation. Whenever possible, we consider the generalized discrete-time algebraic Riccati equation \begin{eqnarray} 0 ~=~ \mathcal{R}(X) &=& C^TQC + A^T X A - E^T X E \label{dare} \\ &&\;\; - (A^T XB + C^T S) (R + B^TXB)^{-1} (B^T XA + S^T C), \nonumber\end{eqnarray} where $A, E \in \mathbb{R}^{n \times n}, B \in \mathbb{R}^{n \times m}, C \in \mathbb{R}^{p \times n}, Q \in \mathbb{R}^{p \times p}, R \in \mathbb{R}^{m \times m},$ and $S \in \mathbb{R}^{p \times m}.$ Furthermore, $Q$ and $R$ are assumed to be symmetric, and $A$ and $E$ are large and sparse. For the particular application above, we have \[ A = \left[ \begin{array}{cccc} 0 & 1 & \\ & \ddots & \ddots \\ &&0 & 1\\ &&& 0\end{array}\right]. \] The function $\mathcal{R}(X)$ is a rational matrix function, and $\mathcal{R}(X) = 0$ defines a system of nonlinear equations. Newton's method for the numerical solution of DAREs can be formulated as follows:\\ \phantom{BBBB} {\bf for} {$k = 0,\,1,\,2,\,\ldots$}\\ \phantom{BBBBB} 1. $K_k \gets K(X_k) = (R + B^T X_k B)^{-1} (B^T X_k A + S^T C)$.\\ \phantom{BBBBB} 2. $A_k \gets A - B K_k$.\\ \phantom{BBBBB} 3. $\mathcal{R}_k \gets \mathcal{R}(X_k)$.\\ \phantom{BBBBB} 4. Solve for $N_k$ in the Stein equation \begin{equation}\label{stein} A_k^T N_k A_k - E^T N_k E = -\mathcal{R}_k. \end{equation} \phantom{BBBBB} 5. $X_{k+1} \gets X_k + N_k.$\\ \phantom{BBBB}{\bf end for} The computational cost for this algorithm mainly depends upon the cost for the numerical solution of the Stein equation (\ref{stein}). This can be done using the Bartels--Stewart algorithm \cite{BarS72} or an extension to the case $E \not= I$ \cite{GarLAM92,GarWLAM92,Pen97}. The Bartels--Stewart algorithm is the standard direct method for the solution of Stein equations of small to moderate size. 
This method requires the computation of a Schur decomposition, and thus is not appropriate for large-scale problems. The cost for the solution of the Stein equation is $\approx 73n^3$ flops. Iterative schemes have been developed, including the Smith method \cite{Smi68}, the sign-function method \cite{Rob80}, and the alternating direction implicit (ADI) iteration method \cite{Wac88}. Unfortunately, all of these methods compute the solution in dense form and hence require ${\cal O}(n^2)$ storage. In case the solution of the Stein equation has low numerical rank (i.e., the eigenvalues decay rapidly), one can take advantage of this low-rank structure to obtain approximate solutions in low-rank factored form. If the effective rank is $r \ll n$, then the storage is reduced from ${\cal O}(n^2)$ to ${\cal O}(nr)$. This approach will be discussed here in detail. \begin{thebibliography}{10} \bibitem{BarS72} {\sc R.H. Bartels and G.W. Stewart}, {\em Solution of the matrix equation ${AX}+{XB}={C}$: {A}lgorithm 432}, Comm. ACM, 15 (1972), pp.~820--826. \bibitem{GarLAM92} {\sc J.D. Gardiner, A.J. Laub, J.J. Amato, and C.B. Moler}, {\em Solution of the {S}ylvester matrix equation ${AXB}+{CXD}={E}$}, {ACM} Trans. Math. Software, 18 (1992), pp.~223--231. \bibitem{GarWLAM92} {\sc J.D. Gardiner, M.R. Wette, A.J. Laub, J.J. Amato, and C.B. Moler}, {\em Algorithm 705: A {F}ortran-77 software package for solving the {S}ylvester matrix equation ${AXB^T}+{CXD^T}={E}$}, {ACM} Trans. Math. Software, 18 (1992), pp.~232--238. \bibitem{Pen97} {\sc T.~Penzl}, {\em Numerical solution of generalized {L}yapunov equations}, Adv. Comp. Math., 8 (1997), pp.~33--48. \bibitem{Rob80} {\sc J.D. Roberts}, {\em Linear model reduction and solution of the algebraic {R}iccati equation by use of the sign function}, Internat. J. Control, 32 (1980), pp.~677--687. \newblock (Reprint of Technical Report No. TR-13, CUED/B-Control, Cambridge University, Engineering Department, 1971). \bibitem{SayK01} {\sc A.H. Sayed and T.~Kailath}, {\em A survey of spectral factorization methods}, Num. Lin. Alg. Appl., 8 (2001), pp.~467--496. \bibitem{Smi68} {\sc R.A. Smith}, {\em Matrix equation {$XA + BX = C$}}, {SIAM} J. Appl. Math., 16 (1968), pp.~198--201. \bibitem{Wac88} {\sc E.L. Wachspress}, {\em Iterative solution of the {L}yapunov matrix equation}, Appl. Math. Letters, 107 (1988), pp.~87--90. \end{thebibliography}","discrete-time algebraic Riccati equation, Stein equation, large, sparse, Newton method","15A24","","Minisymposium ""MATRIX FUNCTIONS AND MATRIX EQUATIONS"", 
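A small dense version of the Newton iteration above, for the case $E=I$ so that the Stein equation (\ref{stein}) becomes a standard discrete Lyapunov equation, can be sketched with SciPy's dense solvers (data are made up, and the low-rank factored solution of step 4, which is the point of the talk, is not shown):

\begin{verbatim}
# Newton's method for the DARE (E = I), each step solved as a Stein equation.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

rng = np.random.default_rng(4)
n, m, p = 6, 2, 6
A = rng.standard_normal((n, n)) / (2 * np.sqrt(n))   # made-up stable test data
B = rng.standard_normal((n, m))
C = np.eye(p); Q = np.eye(p); R = np.eye(m); S = np.zeros((p, m))

def residual(X):
    K = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A + S.T @ C)
    Rk = C.T @ Q @ C + A.T @ X @ A - X - (A.T @ X @ B + C.T @ S) @ K
    return Rk, K

X = np.zeros((n, n))
for k in range(20):
    Rk, Kk = residual(X)
    if np.linalg.norm(Rk) < 1e-12:
        break
    Ak = A - B @ Kk
    # Stein equation A_k^T N A_k - N = -R_k, via SciPy's discrete Lyapunov solver
    N = solve_discrete_lyapunov(Ak.T, Rk)
    X = X + N

Xref = solve_discrete_are(A, B, C.T @ Q @ C, R)      # cross-check (here S = 0)
print(np.linalg.norm(X - Xref))
\end{verbatim}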
Among these other classes of matrices we consider sign regular matrices (which generalize totally nonnegative matrices), some classes of P-matrices (matrices whose principal minors are positive), including M-matrices, and conditionally positive definite (and conditionally negative definite) matrices.","Total Positivity; Nonnegative matrices; P-matrices; M-matrices; Stability; Factorizations","15A48","65F05","It is the LAMA Conference "Castro-González","Nieves","nieves@fi.upm.es","\section{Representations for the generalized Drazin inverse of additive perturbations} By {\sl N. Castro-Gonz\'{a}lez and M.F. Mart\'{i}nez-Serrano\\ Facultad de Inform\'{a}tica, Universidad Polit\'{e}cnica de Madrid, Spain}. \noindent Let ${\cal B}$ be a unital complex Banach algebra. An element $a\in {\cal B}$ is said to have a {\it generalized Drazin inverse} if there exists $x\in {\cal B}$ such that \[ xa=ax, \quad x=ax^2, \quad a-a^2x \text{ is quasinilpotent}.\] In this case, the generalized Drazin inverse of $a$ is unique and is denoted by $a^D$. If in the previous definition $a-a^2x$ is in fact nilpotent then $a^D$ is the conventional {\it Drazin inverse} of $a$. It is well known that if $a$ and $b$ have generalized Drazin inverses and $ab=ba=0$, then $(a + b)^D=a^D + b^D$. This result was generalized in [Djordjevi\'{c} and Wei, Additive result for the generalized Drazin inverse, J. Austral. Math. Soc. 73 (2002) 115-125] under the one-sided condition $ab=0$. Recently, in [Castro and Koliha, New additive results for the $g$-Drazin inverse, Proc. Roy. Soc. Edinburgh Sect. A 134 (2005) 657-666], [Cvetkovi\'{c}-Ili\'{c} {\it et al.}, Additive results for the generalized Drazin inverse in a Banach algebra, Linear Algebra Appl. 418 (2006) 53-61], weaker conditions were given under which $(a+b)^D$ could be explicitly expressed in terms of $a$, $a^D$, $b$, and $b^D$.\par In this paper we study the generalized Drazin inverse of the sum $a+b$, where the perturbation $b$ is a quasinilpotent element, and we obtain a representation for $(a+b)^D$ under new conditions which relax the condition $ab=0$. Our approach is based on a representation for the resolvent of a $2\times 2$ matrix with entries in a Banach algebra, which we provide, and the Laurent expansion of the resolvent in terms of the generalized Drazin inverse. Our results can be applied to obtain different representations of the generalized Drazin inverse of block matrices $M=\begin{pmatrix} A & C \\ B & D\end{pmatrix}$, under certain conditions, in terms of the individual blocks. In particular, we can write $M$ as the sum of a block triangular matrix and a nilpotent matrix and apply the additive perturbation result given to obtain a representation for $M^D$. This extends the result of Meyer and Rose for the Drazin inverse of a block triangular matrix. Finally, we present a numerical example for the Drazin inverse of $2\times 2$ block matrices over the complex numbers.\newline This research is partly supported by Project MTM2007-67232, ``Ministerio de Educaci\'{o}n y Ciencia"" of Spain.","Generalized Drazin inverse, Banach algebras, additive perturbation, block matrices","15A09","46H30"," "Dodig","Marija","dodig@cii.fc.ul.pt","\section{Singular systems, state feedback problem} By {\sl Marija Dodig}. \noindent In this talk, the strict equivalence invariants by state feedback for singular systems are studied.
As the main result we give the necessary and sufficient conditions under which there exists a state feedback such that the resulting system has prescribed pole structure as well as row and column minimal indices. This result presents a generalization of previous results on state feedback action on singular systems.","Matrix pencils, singular systems, state feedback, pole placement, Kronecker invariants, completion","15A21","15A22"," "Semrl","Peter","peter.semrl@fmf.uni-lj.si","\section{Locally linearly dependent operators} By {\sl Peter \v Semrl}. \noindent Let $U$ and $V$ be vector spaces. Linear operators $T_1 , \ldots , T_n : U \to V$ are locally linearly dependent if for every $u\in U$ the vectors $T_1 u , \ldots , T_n u$ are linearly dependent. Some recent results on such operators will be presented.","locally linearly dependent operators, spaces of operators","15A03","15A04"," "Benner","Peter","benner@mathematik.tu-chemnitz.de","\section{Balancing-Related Model Reduction for Large-Scale Unstable Systems} By {\sl Peter Benner}. \noindent Model reduction is an increasingly important tool in analysis and simulation of dynamical systems, control design, circuit simulation, structural dynamics, CFD, etc. In the past decades many approaches have been developed for reducing the order of a given model. Here, we will focus on balancing-related model reduction techniques that have been developed since the early 1980s in control theory. The most widely used technique of balanced truncation (BT) \cite{Moo81} applies to stable systems only. But there exist several related techniques that can be applied to unstable systems as well. We are interested in techniques that can be extended to large-scale systems with sparse system matrices which arise, e.g., in the context of control problems for instationary partial differential equations (PDEs). Semi-discretization of such problems leads to linear, time-invariant (LTI) systems of the form \begin{equation}\label{lti} \begin{array}{rcl} \dot{x}(t) &=& Ax(t) + Bu(t), \\ y(t) &=& Cx(t) + Du(t), \end{array} \end{equation} where $A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times m}$, $C\in\mathbb{R}^{p\times n}$, $D\in\mathbb{R}^{p\times m}$, and $x^0\in\mathbb{R}^n$. Here, $n$ is the order of the system and $x(t)\in\mathbb{R}^n$, $y(t)\in\mathbb{R}^p$, $u(t)\in\mathbb{R}^m$ are the state, output and input of the system, respectively. We assume $A$ to be large and sparse and $n\gg m,p$. Applying the Laplace transform to (\ref{lti}) (assuming $x(0)=0$), we obtain \[ Y(s) = (C(s I - A)^{-1}B+D) U(s) =: G(s) U(s), \] where $s$ is the Laplace variable, $Y,U$ are the Laplace transforms of $y,u$, and $G$ is called the {\em transfer function matrix (TFM)} of (\ref{lti}). The TFM describes the input-output mapping of the system. The model reduction problem consists of finding a reduced-order LTI system, \begin{equation}\label{rom} \begin{array}{rcl} \dot{\hat{x}}(t) &=& \hat{A} \hat{x}(t) + \hat{B} u(t), \\ \hat{y}(t) &=& \hat{C} \hat{x}(t) + \hat{D} u(t), \end{array} \end{equation} of order $r$, $r \ll n$, with the same number of inputs $m$, the same number of outputs $p$, and associated TFM $\hat{G}(s) = \hat{C} (s I - \hat{A} )^{-1}\hat{B} +\hat{D}$, so that for the same input function $u\in L_2(0,\infty;\mathbb{R}^m)$, we have $y(t)\approx \hat{y}(t)$, which can be achieved if $G\approx \hat{G}$ in an appropriate measure. If all eigenvalues of $A$ are contained in the left half complex plane, i.e., (\ref{lti}) is stable, BT is a viable model reduction technique.
It is based on balancing the controllability and observability Gramians $W_c$, $W_o$ of the system~(\ref{lti}) given as the solutions of the Lyapunov equations \begin{equation}\label{WcWo} A W_c + W_c A^T + B B^T = 0, \qquad A^T W_o + W_o A + C^T C = 0. \end{equation} Based on $W_c,W_o$ or Cholesky factors thereof, matrices $V,W\in\mathbb{R}^{n\times r}$ can be computed so that with \[ \hat{A} := W^T A V, \quad \hat{B} := W^T B, \quad \hat{C} := C V, \quad \hat{D} = D, \] the reduced-order TFM satisfies \begin{equation}\label{bound} \sigma_{r+1}\leq \Vert G - \hat{G}\Vert_{\infty} \leq 2 \sum_{k=r+1}^n \sigma_k, \end{equation} where $\sigma_1\geq \ldots \geq \sigma_n\geq 0$ are the Hankel singular values of the system, given as the square roots of the eigenvalues of $W_cW_o$. The key computational step in BT is the solution of the Lyapunov equations (\ref{WcWo}). In recent years, a lot of effort has been devoted to the solution of these Lyapunov equations in the large and sparse case considered here. Nowadays, BT can be applied to systems of order up to $n=10^6$, see, e.g., \cite{BenMS05,LiW02}. Less attention has been paid so far to unstable systems, i.e., systems where $A$ may have eigenvalues with nonnegative real part. Such systems arise, e.g., from semi-discretizing parabolic PDEs with unstable reactive terms. We will review methods related to BT that can be applied in this situation and discuss how these methods can also be implemented in order to become applicable to large-scale problems. The basic idea of these methods is to replace the Gramians $W_c$ and $W_o$ from (\ref{WcWo}) by other positive semidefinite matrices that are associated with (\ref{lti}) and to employ the algorithmic advances for BT also in the resulting model reduction algorithms. \begin{thebibliography}{10} \bibitem{BenMS05} P.~Benner, V.~Mehrmann, and D.~Sorensen, editors. {\em Dimension Reduction of Large-Scale Systems}, volume~45 of {\em Lecture Notes in Computational Science and Engineering}. Springer-Verlag, Berlin/Heidelberg, Germany, 2005. \bibitem{LiW02} J.-R. Li and J.~White. Low rank solution of {L}yapunov equations. {\em {SIAM} J. Matrix Anal. Appl.}, 24(1):260--280, 2002. \bibitem{Moo81} B.~C. Moore. Principal component analysis in linear systems: Controllability, observability, and model reduction. {\em {IEEE} Trans. Automat. Control}, AC-26:17--32, 1981. \end{thebibliography}","model reduction, balanced truncation, Lyapunov equations, Riccati equations","93B11","65F30"," "Cortes","Vanesa","vcortes@unizar.es","\section{Some properties of the class of sign regular matrices and its subclasses} By {\sl V. Cort\'es and J.M. Pe{\~n}a}. \noindent An $m\times n$ matrix is called {\it sign regular} with signature $\varepsilon $ if, for each $k\le \min \{m,n\}$, all its $k\times k$ minors have the same sign or are zero. The common sign may differ for different $k$: the corresponding sequence of signs provides the signature of the sign regular matrix. These matrices play an important role in many fields, such as Statistics, Approximation Theory or Computer Aided Geometric Design. In fact, nonsingular sign regular matrices are characterized as variation-diminishing linear maps: the maximum number of sign changes in the consecutive components of the image of a nonzero vector is bounded above by the minimum number of sign changes in the consecutive components of the vector.
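As a quick numerical check of the variation-diminishing property just stated (an editorial illustration, not part of the abstract), one can count sign changes before and after multiplying by a totally positive matrix, here a Vandermonde matrix with increasing positive nodes:
\begin{verbatim}
# Variation diminution: for a totally positive A, the number of sign
# changes in A @ x never exceeds the number of sign changes in x.
import numpy as np

def sign_changes(v, tol=1e-12):
    s = np.sign(v[np.abs(v) > tol])     # ignore (numerically) zero entries
    return int(np.sum(s[1:] != s[:-1]))

A = np.array([[1.0, 1.0, 1.0],          # Vandermonde at nodes 1 < 2 < 3,
              [1.0, 2.0, 4.0],          # a classical totally positive matrix
              [1.0, 3.0, 9.0]])

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(3)
    assert sign_changes(A @ x) <= sign_changes(x)
\end{verbatim}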
We study several properties of these matrices, focusing our analysis on some subclasses of sign regular matrices with certain particular signatures.","Sign regular matrices; Test; Zero pattern; Inverses","15A48","15A15"," "Damm","Tobias","damm@mathematik.uni-kl.de","\section{Algebraic Gramians and Model Reduction for Different System Classes} By {\sl Tobias Damm}. \noindent Model order reduction by balanced truncation is one of the best-known methods for linear systems. It is motivated by the use of energy functionals, preserves stability and provides strict bounds for the approximation error. The computational bottleneck of this method lies in the solution of a pair of dual Lyapunov equations to obtain the controllability and the observability Gramian, but nowadays there are efficient methods which work for large-scale systems as well. These advantages motivate the attempt to apply balanced truncation also to other classes of systems. For example, there is an immediate way to generalize the idea to stochastic linear systems, where one has to consider generalized versions of Lyapunov equations. Similarly, one can define energy functionals and Gramians for nonlinear systems and try to use them for order reduction. In general, however, these Gramians are very complicated and practically not available. As an approximation, one may use algebraic Gramians, which again are solutions of certain generalized Lyapunov equations and which give bounds for the energy functionals. This approach has been taken e.g.~for bilinear systems of the form \begin{eqnarray*} \dot x&=&Ax+\sum_{j=1}^k N_jxu_j+Bu\;,\\ y&=& Cx\;, \end{eqnarray*} which arise e.g.~from the discretization of diffusion equations with boundary control. In the talk we review these generalizations for different classes of systems and discuss computational aspects.","algebraic Gramians, energy functionals, model reduction, bilinear systems, stochastic systems","93A15","65F30","MS5, Linear Algebra in Model Reduction. "van den Driessche","Pauline","pvdd@math.uvic.ca","\section{Bounds for the Perron root using max eigenvalues} By {\sl Ludwig Elsner and P. van den Driessche}. \noindent Using the techniques of max algebra, a new proof of Al'pin's lower and upper bounds for the Perron root of a nonnegative matrix is given. The bounds depend on the row sums of the matrix and its directed graph.
If the matrix has zero diagonal entries, then these bounds may improve the classical row sum bounds. This is illustrated by a generalized tournament matrix.","Max eigenvalue, Nonnegative matrix, Perron root","15A18","15A42"," "Li","Chi-Kwong","ckli@math.wm.edu","\section{Eigenvalues of the sum of matrices \\ from unitary similarity orbits} By {\sl Chi-Kwong Li, Yiu-Tung Poon and Nung-Sing Sze}. \noindent Let $A$ and $B$ be $n\times n$ complex matrices. Characterization is given for the set ${\cal E}(A,B)$ of eigenvalues of matrices of the form $U^*AU+V^*BV$ for some unitary matrices $U$ and $V$. Consequences of the results are discussed and computer algorithms and programs are designed to generate the set ${\cal E}(A,B)$. The results refine those of Wielandt on normal matrices. Extensions of the results to the sum of matrices from three or more unitary similarity orbits are also considered.","Eigenvalues, sum of matrices","15A18","","This is a talk for the mini-symposium: Eigenproblems: Theory and Computation "Hogben","Leslie","lhogben@iastate.edu","\section{Minimum Rank Problems: Recent Developments} By {\sl Leslie Hogben}. \noindent This talk will survey recent developments in the problem of determining the minimum rank of families of matrices described by a graph, digraph or pattern.","minimum rank, symmetric minimum rank, asymmetric minimum rank, ditree, directed tree, inverse eigenvalue problem","05C50","15A03","This abstract is for my invited plenary lecture "Huylebrouck","Dirk","Huylebrouck@gmail.com","\section{Applications of generalized inverses in art} By {\sl D. Huylebrouck}. \noindent The “Moore-Penrose inverse” of a matrix $A$ corresponds to the (unique) matrix solution $X$ of the system $AXA=A$, $XAX=X$, $(AX)^*=AX$, $(XA)^*=XA$. S. L. Campbell and C. D. Meyer Jr. wrote a now classical book, “Generalized Inverses of Linear Transformations” (Pitman Publishing Limited, London, 1979), in which they gave an excellent account of the MP-inverse and other generalized inverses as well. They gave many interesting examples, ranging from Gauss’ historical prediction for finding Ceres to modern electrical engineering problems. The present paper provides new applications related to art studies: a first one about mathematical colour theory, and a second about curve fitting in architectural drawings or paintings. Firstly, in colour theory, a frequent problem is finding the combination of colours approximating a desired colour as closely as possible using a given set of colours. Plaid fabrics are made by a limited number of threads and when a desired tone cannot be formed by a combination, a least squares approach may be mandatory. Some colour theory specialists suggested that “sensations”, such as the observation of colour, should involve logarithmic functions, but using Campbell and Meyer’s general set-up, this does not give rise to additional difficulties. Of course, the practical use of this theory should still show the benefit of the proposed mathematical tool, but even as it stands it already provides a colourful mathematical diversion. In addition, colour theory as taught today in many art schools and as used in numerous printing or computer problems is certainly in need of a more rigorous mathematical approach. Thus, this example of an application of the theory of generalized inverses in art may be welcomed. Secondly, we turn to the formerly very popular activity in architectural circles of drawing all kinds of geometric figures on images of artworks and buildings.
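The first, colour-mixing application described above reduces to an ordinary least-squares problem solvable with the Moore-Penrose inverse; the following minimal sketch is an editorial illustration with entirely made-up paint data, not the paper's computation:
\begin{verbatim}
# Least-squares colour mixing via the Moore-Penrose inverse.
# The paint "spectra" below are invented for illustration only.
import numpy as np

P = np.array([[0.9, 0.1, 0.2],   # red components of three available paints
              [0.1, 0.8, 0.3],   # green components
              [0.0, 0.1, 0.9]])  # blue components
t = np.array([0.5, 0.4, 0.3])    # desired colour

w = np.linalg.pinv(P) @ t        # least-squares mixing weights
print(w, P @ w)                  # weights and the colour they realize
\end{verbatim}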
Until some 20 years ago, triangles, rectangles, pentagons or circles sufficed, but later more general mathematical figures were used as well, especially since fractals became trendy. Recognizing well-known curves and polygons was seen as a part of the “interpretation” of an architectural edifice or painting. Eventually, certain proportions in the geometric figures were emphasized, among which the golden section surely was the most (in)famous. Diehards continue this tradition, though curve drawing has lost some credit in recent times, in particular due to some exaggerated golden section interpretations. Today, many journals tend to reject “geometric readings in architecture”, and the reasons to do so are many. For instance, an architect may have had the intention of constructing a certain curve, but for structural, technical or whatever practical reason, the final realization may not confirm that intention. Or else, a certain proportion may have been used in an artwork, consciously or not, but when such a “hidden” proportion is discovered afterward, even the author of the artwork may disagree on having used it. Consequently, statements about the presence of a certain proportion or about the good fit of a curve in art often are subjective matters, and thus unacceptable for scientific journals. However, a similarity between these geometric studies in architecture and the history of (celestial) mechanics, as explained in “Generalized Inverses of Linear Transformations”, suggests that the so-called “least squares method”, developed in that field, could be applied to examples in art as well. Just as astronomy struggled for centuries to get rid of its astrological past, an objective approach for the described art studies would be most welcome. Of course, it can be objected that the mathematical method is overkill with respect to the intended straightforward artistic applications, but nowadays software considerably reduces the computational aspects. The method turns out to be useful indeed: for instance, while a catenary approximates architect Gaudi’s Palau Güell better than a parabola, the least squares method shows a catenary or a parabola can be used for the shape of Gaudi’s Collegio Teresiano with a comparable error. These results were confirmed by Prof. A. Monreal, a Gaudi specialist from the architect’s hometown, Barcelona. Another amusing example is the profile of a nuclear power plant, which is described in many schoolbooks as an example of a hyperbola, but an ellipse fits even better. Engineers confirmed the hyperbolic shape is modified at the top to reduce wind resistance. Finally, it is shown how proportions in the Mona Lisa can be studied using generalized inverses, but it remains unclear whether this application will make the present paper as widely read as Dan Brown’s “da Vinci Code”.","Generalised inverses, art, colour theory, curve fitting.","15A","15.15","The paper is a contribution for the ""Linear Algebra in Education"" section. "Tanguay","Denis","tanguay.denis@uqam.ca","\section{A fundamental paradox in learning algebra} By {\sl Denis Tanguay \& Claudia Corriveau}. \noindent The generalizing, formalizing and unifying nature of some of the concepts of Linear Algebra leads to a high level of abstraction, which in turn constitutes a source of difficulties for students.
When asked to deal with new expressions, new symbolism and rules of calculation, students face what researchers in mathematics education — such as Dorier, Rogalski, Sierpinska or Harel — have identified as ‘the obstacle of formalism’. Teachers bring in new mathematical objects, sometimes in a non-explicit way, by using at once the symbols referring to these objects or to the related relations, without explaining or justifying the meaning or the relevance of their choices regarding this new symbolism. Calculations and manipulations with these new objects build up to new algebras (vector or matrix algebras) more complex than basic (high school) algebra, but nevertheless syntactically modelled on it. The gap thus caused reveals itself when students bring out inconsistent or meaningless writings: “The obstacle of formalism manifests itself in students who operate at the level of the form of expressions without seeing these expressions as referring to something other than themselves. One of the symptoms is the confusion between categories of mathematical objects; for example, sets are treated as elements of sets, transformations as vectors, relations as equations, vectors as numbers, and so on” (Sierpinska et al., 1999, p. 12). For too many students attending their first course in Linear Algebra, the latter is nothing but a catalogue of very abstract notions, for which they have almost no understanding, being overwhelmed by a flood of new words, new symbols, new definitions and new theorems (Dorier, 1997). Our talk will be based on a study conducted within the context of a master's degree in mathematics education (maîtrise en didactique des mathématiques, Université du Québec à Montréal; cf. Corriveau & Tanguay, 2007). Through this study, we tried to have a better understanding of transitional difficulties, due to the abrupt increase in what is expected from students with respect to formalism and proof, when going from Secondary schools to ‘Cegeps’ (equivalent in Québec of ‘upper secondary’ or ‘high-school’, 17-19 years of age). The Linear Algebra courses having been identified as those in which such transitional problems are the most acute, we first selected, among all problems submitted in a given L. A. course — the teacher of which was ready to participate in the study — those involving a proof or a reasoning at least partly deductive. Through the systematic analysis of these problems, we evaluated and compared their level of difficulty, as well as students' preparation for coping with such difficulties, from an ‘introduction-to-formalism’ perspective. The framework used to analyse the problems stemmed from a remodelling of Robert's framework (1998). The remodelling was a consequence of having compared/confronted an a priori analysis of three problems (using Robert's framework) with the analysis of their erroneous solutions as they appeared in twelve students' homework copies. Among the conclusions brought up by the study, we shall be interested in the following ones: (1) Mathematical formalism allows a ‘compression’ of the mathematical discourse, simplification and systematization of the syntax, by which one operates on this discourse with better efficiency. But this improvement in efficiency is achieved to the detriment of meaning. As in Bloch et al.
(2007), the study confirms that “...formal written discourse does not carry per se the meaning of either the laws that it states or the objects that it sets forth.” For many students, symbolic manipulations are difficult in Linear Algebra because meaning has been lost somewhere. By trying to have a better understanding of the underlying obstacle, we came to identify what we call ‘the fundamental paradox in learning [a new] algebra’, some elements of which will be discussed further in the talk. (2) The analysis of students' written productions brings us to observe that in the process of proving, difficulties caused by the introduction of new objects and new rules of calculation on the one hand, and difficulties related to controlling the deductive reasoning and its logical structure on the other, are reinforcing one another. (3) A better understanding of students' errors, by an error-analysis such as the one done in the study, allows a better evaluation of the difficulty level of what is asked of students, and thus a better understanding of the problems linked to academic transitions (from lower-secondary to upper-secondary to university) in mathematics. Such analyses could give Linear Algebra teachers better tools for estimating the difficulties in the tasks they submit to their students, as well as for understanding the underlying cognitive gaps and ruptures. It would be advisable that teachers be introduced to such error-analysis work in the setting of their pre-service or in-service instruction. Bloch, I., Kientega, G. & Tanguay, D. (2007). Synthèse du Thème 6 : Transition secondaire / post-secondaire et enseignement des mathématiques dans le postsecondaire. To appear in Actes du Colloque EMF 2006. Université de Sherbrooke. Corriveau, C. & Tanguay, D. (2007). Formalisme accru du secondaire au collégial : les cours d'Algèbre linéaire comme indicateurs. To appear in Bulletin AMQ, Vol. XLVII, n°4. Dorier, J.-L., Harel, G., Hillel, J., Rogalski, M., Robinet, J., Robert, A. & Sierpinska, A. (1997). L’enseignement de l’algèbre linéaire en question. J.-L. Dorier, ed. La Pensée Sauvage. Grenoble, France. Harel, G. (1990). Using Geometric Models and Vector Arithmetic to Teach High-School Students Basic Notions in Linear Algebra. International Journal of Mathematical Education in Science and Technology, Vol. 21, n°3, pp. 387-392. Harel, G. (1989). Learning and Teaching Linear Algebra: Difficulties and an Alternative Approach to Visualizing Concepts and Processes. Focus on Learning Problems in Mathematics, Vol. 11, n°2, pp. 139-148. Robert, A. (1998). Outils d’analyse des contenus mathématiques à enseigner au lycée et à l’université. Recherches en didactique des mathématiques, vol. 18, n°2, pp. 139-190. Rogalski, M. (1990). Pourquoi un tel échec de l'enseignement de l'algèbre linéaire ? In Enseigner autrement les mathématiques en DEUG Première Année, Commission inter-IREM université (ed.), pp. 279-291. IREM de Lyon. Sierpinska, A., Dreyfus, T. & Hillel, J. (1999). Evaluation of a Teaching Design in Linear Algebra: the Case of Linear Transformations. Recherches en didactiques des mathématiques, Vol. 19, n°1, pp. 7-40.","Linear Algebra Formalism apprenticeship Proof apprenticeship Error Analysis","97","15","It exceeds 5000 characters but it is because we added the Bibliography "Mathewkutty","Habel","habelmath@habelmath.com","NUMBER THEORY. Polyhedrons are geometrical shapes enclosed by polygons.
Numbers on them can be represented by the Habel Math formula $A_{k,n} = 2\{k(n-1)^2 + 1\}$. The Habel Math sum is $H_{k,m} = \frac{m}{3}\{k(m-1)(2m-1) + 6\}$, that is, $$2 + 2(k+1) + 2(4k+1) + 2(9k+1) + 2(16k+1) + \cdots + 2\{(m-1)^2 k + 1\} = H_{k,m},$$ Habel Math's formula for the sum of the first $m$ terms of all polyhedral numbers. Remember $k = 1$ for the tetrahedron and $k = 29$ for the soccer ball, because the soccer-ball numbers are $A_{29,n} = 2\{29(n-1)^2 + 1\}$: they are $2, 60, 234, 524, \ldots$ So $H_{29,m} = \frac{m}{3}\{29(m-1)(2m-1) + 6\}$; when $m = 4$ it should be $2 + 60 + 234 + 524 = 820$. By Prof. Habel Mathewkutty, M.Sc. (Math/Agra), Ph.D., speaker at the SIAM conference NW08 in Rome, 21--24 July 2008; former researcher of the Indian Institutes of Technology and instructor in the Houston Community College System.","Polyhedrons, Habel Math","11","74","Thanks! "Kaibah","Hussein","hu_mic99@yahoo.com","\section{Asymptotic Behavior of Solutions of Stochastic Equations and Applications in Statistical Parameter Estimation} By {\sl Hussein Salem Kaibah}. \noindent In different models that appear in numerical mathematics, stochastic optimization problems and statistical parameter estimation we come to the necessity to study the behavior of solutions of stochastic equations. Let us consider the following example. Example: suppose that we would like to find a solution of a deterministic equation where is some continuous function, and is some bounded region. But according to the real scheme of calculations we measure the function with random errors in the form: where are jointly independent families of random functions (fields) such that . In this case it is reasonable to approximate the function by the averaging. Therefore a natural question appears: in what sense and under which condition a solution of a stochastic equation approximates a solution of the first equation as .","Stochastic Equations","",""," "Hanaish","Ibrahim","henaish@yahoo.com","\section{Shrinkage Estimators for Estimating the Multivariate Normal Mean Vector under Degrees of Distrust} By {\sl Ibrahim Hanaish and Abdunnabi M. Ali Elbouzedi}. \noindent The estimation of the mean vector of a multivariate normal population with special covariance matrix is considered when uncertain non-sample prior information is available.
In this paper, four possible estimators are considered, namely, the usual maximum likelihood estimator (UE), the restricted estimator (RE), the preliminary test estimator (PTE) and the shrinkage estimator (SE) in a more general setting. The performances of the estimators are compared based on the criteria of unbiasedness and the risk function with respect to a specific quadratic loss function in order to search for the best estimator. Both analytical and graphical methods are explored. It is shown that neither PTE nor SE dominates the other, though they fare well compared to UE and RE.","Preliminary test estimator, Stein-rule estimator, multivariate normal,","","","forgotten title in last email "Cox","Steven","cox@rice.edu","\section{Eigen-reduction of Large Scale Neuronal Networks} By {\sl Tony Kellems, Derrick Roos, Nan Xiao and Steve Cox}. \noindent The modest pyramidal neuron has over 100 branches with tens of synapses per branch. Partitioning each branch into 3 compartments, with each compartment carrying say 3 membrane currents, yields at least 20 variables per branch and so, in total, a nonlinear dynamical system of roughly 2000 equations. We linearize this system to $x'=Ax+Bu$, $y=Cx$, where $B$ permits synaptic input into each compartment and $C$ observes only the soma potential. We reduce this system by retaining the dominant singular directions of the associated controllability and observability Gramians. We evaluate the error in soma potential between the full and reduced models for a number of true morphologies over a broad (in space and time) class of synaptic input patterns, and find that reduced systems of dimension less than 10 accurately reflect the full quasi-active dynamics. These savings will permit, for the first time, the simulation of large networks of biophysically accurate cells over realistic time spans.","model reduction, synaptic integration","34C20","92C20"," "Zimmermann","Karel","Karel.Zimmermann@mff.cuni.cz","\section{Solving two-sided (max,plus)-linear equation systems} By {\sl Karel Zimmermann}. \noindent Systems of equations of the following form will be considered: \begin{equation}\label{e1} a_i(x) = b_i(x), \quad i \in I, \end{equation} where $I = \{1,\ldots, m\}, ~ J = \{1, \ldots, n\}$, $$a_i(x) = \max_{j \in J}(a_{ij} + x_j), ~ b_i(x) = \max_{j \in J}(b_{ij} + x_j)~~ \forall i \in I$$ and $a_{ij},~b_{ij}$ are given real numbers. \newline The aim of the contribution is to propose a polynomial method for solving system (\ref{e1}). Let $M$ be the set of all solutions of (\ref{e1}), let $M(\overline{x})$ denote the set of solutions of system (\ref{e1}) satisfying the additional constraint $x \leq \overline{x}$, where $ \overline{x}$ is a given fixed element of $R^n$. The proposed method either finds the maximum element of the set $M(\overline{x})$ (i.e. element $ \hat{x} \in M(\overline{x}) $, for which $x \in M(\overline{x})$ implies $ x~ \leq \hat{x}$), or finds out that $M(\overline{x}) = \emptyset $. The results are based on the following properties of system (\ref{e1}) (to simplify the notations we will assume in the sequel w.l.o.g. that $a_i(\overline{x}) \geq b_i(\overline{x})~~ \forall~~ i \in I$ and $\overline{x} \not \in M(\overline{x})$): \newline \newline (i) $M(\overline{x}) ~=~ \emptyset~ \Rightarrow M ~ = ~ \emptyset$. \newline (ii) Let $K_i = \{ k \in J~;~ a_{ik}\leq b_{ik}\}~ \forall i \in I$. If for some $i_0 \in I$ the set $K_{i_0} = \emptyset$, then $M(\overline{x})= \emptyset$.
\newline (iii) Let $\beta_i(\overline{x}) = \max_{k \in K_i}(b_{ik} + \overline{x}_k)$, $L_i(\overline{x}) = \{ j \in J~;~ a_{ij} + \overline{x}_j ~>~ \beta_i(\overline{x})\}$, $~ \forall~ i \in I$. If $\bigcup_{i \in I}L_i(\overline{x}) = J$, then $M(\overline{x})= \emptyset$. \newline (iv) Let $V_j(\overline{x}) = \{ i \in I ; j \in L_i(\overline{x}) \}$, let $ \overline{x}_j^{(1)} = \min_{i \in V_j(\overline{x})}(\beta_i(\overline{x})- a_{ij})$ for all $j \in J$ for which $V_j(\overline{x}) \neq \emptyset$ and $ \overline{x}_j^{(1)} = \overline{x}_j $ otherwise. Let $\beta_i(\overline{x}^{(1)})~<~ \beta_i(\overline{x})$ for all $i \in I$. Then for at least one $i \in I$ the value $\beta_i(\overline{x}^{(1)})$ is equal to at least one of the threshold values $b_{ij} + \overline{x}_j ~< ~\beta_i(\overline{x})$. \newline \newline The method successively determines the variables that have to be decreased if equality in (\ref{e1}) is to be reached. If all variables have to be set in movement, no solution of (\ref{e1}) exists. If the set of unchanged variables is nonempty, the maximum element of $M(\overline{x})$ is obtained. Using these properties a polynomial behavior of the proposed method can be proved (in case of rational or integer inputs). Possibilities of further generalizations and usage in optimization with constraints (\ref{e1}), as well as applications to synchronization problems, will be briefly discussed.","max algebra, (max,plus)-linear systems of equations, operations research.","65H10","15A78","for Max-algebra, MS7 "Wojciechowski","Piotr","piotrw@utep.edu","\section{Orderings of matrix algebras and their applications} By {\sl Piotr Wojciechowski}. \noindent The full matrix algebra $M_n({\bf F})$ over a totally-ordered subfield ${\bf F}$ of the reals becomes a {\it partially ordered algebra} by a partial order relation $\leq$ on the set $M_n({\bf F})$, if for any $A, B, C \in M_n({\bf F})$ from $A\leq B$ it follows that: \begin{itemize} \item[(1)] $A+C\leq B+C$ \item[(2)] if $C\geq 0$ then $AC\leq BC$ and $CA \leq CB$ \item[(3)] if ${\bf F} \ni \alpha \geq 0$ then $\alpha A\leq \alpha B$. \end{itemize} Our interest is when the order $\leq$ is a lattice or at least is directed. Then we have a {\it lattice-ordered algebra of matrices} or a {\it directly-ordered algebra of matrices}. Those concepts originate in 1956 with Birkhoff and Pierce \cite{BP}. The first example of a lattice-ordered algebra of matrices is, of course, the one with the {\it usual} entry-wise ordering. In this ordering the identity matrix $I$ is positive. In 1966 E. Weinberg proved in \cite{We} that the positivity of $I$ forces a lattice-ordering to be (isomorphic to) the usual one in $M_2({\bf F})$ and conjectured the same for all $n\geq 2$. The conjecture was positively solved in 2002 by J. Ma and P. Wojciechowski in \cite{MW}. The proof involved a {\it cone-theoretic} approach, by first establishing the existence of a $P$-invariant cone $O$ in ${\bf F}^n$, i.e. satisfying the condition that for every matrix $M\in P$, $M(O)\subseteq O$, where $P$ is the {\it positive cone} of the ordering $\leq$ ($P=\{A\in M_n({\bf F}): A\geq 0\}$). With the help of the compactness of the unit sphere in ${\bf R}^n$ and Zorn's Lemma, we obtained all the desired properties of the cone $O$ that led us to the conclusion of the conjecture.\\ The first part of the talk will briefly outline the method.\\[.2in] The above considerations allowed us to comprehensively describe all lattice orders of $M_n({\bf F})$ (J. Ma and P.
Wojciechowski \cite{MW2}): the algebra $M_n({\bf F})$ is lattice-ordered (within an isomorphism) if and only if $$A \geq 0 \Leftrightarrow A=\sum_{i,j=1}^n \alpha_{ij}E_{ij}H^T \quad \mbox{with} \quad \alpha_{ij}\geq 0, \quad i,j =1,\ldots, n,$$ for some given $H$ nonsingular with nonnegative entries and $E_{ij}$ having 1 in the $ij$ entry and zeros elsewhere.\\[.5in] As a first application, we will describe all {\it multiplicative bases} in the matrix algebra $M_n({\bf F})$ and provide their enumeration for small $n$ (C. De La Mora and P. Wojciechowski 2006 \cite{DMW}). In a finite-dimensional algebra over a field \textbf{F}, a basis $\mathfrak{B}$ is called {\em a multiplicative basis} provided that $\mathfrak{B} \cup \{0\}$ forms a semigroup. Although these bases (endowed with some additional algebraic properties) have been studied in representation theory, they lacked a comprehensive classification for matrix algebras. The first example of a multiplicative basis of $M_n({\bf F})$ should of course be $\{E_{ij}, i,j=1,\ldots,n\}$. Every lattice order on $M_n({\bf F})$ corresponds to a nonsingular $n \times n$ matrix $H$ with nonnegative entries. It turns out that if the entries are either 0 or 1, the basic matrices resulting in the definition of the lattice order, i.e., the matrices $E_{ij}H^T$, form a multiplicative basis, and conversely, every multiplicative basis corresponds to a nonsingular zero-one matrix. After identification of the isomorphic semigroups and also identification of the matrices that have just permuted rows and columns, the above correspondence is one-to-one. The number of zero-one nonsingular matrices, although lacking a formula so far, is known for a few small $n$ values. This, together with the conjugacy class method from group theory, allowed us to calculate the number of nonequivalent multiplicative bases up to dimension 5: 1, 2, 8, 61, 1153.\\[.5in] Another application concerns certain directed partial orders of matrices that appear naturally in linear algebra and its applications. It is related to the research on matrices preserving cones, established in the seventies, among others by R. Loewy and H. Schneider in \cite{LS}. Besides the lattice orders (corresponding to the simplicial cones), the best studied ones are the orders whose positive cones are the sets $\Pi (O)$, of all matrices preserving a regular (or full) cone $O$ in an $n$-dimensional Euclidean space. It can be shown that $O$ is essentially the only $\Pi(O)$-invariant cone (P. Wojciechowski \cite{W}). Consequently, we obtain a characterization of all maximal directed partial orders on the $n \times n$ matrix algebra: a directed order is maximal if and only if its positive cone $P$ satisfies $P=\Pi(O)$ for some regular cone $O$. The method used in the proof involves a concept of {\it simplicial separation}, allowing a regular cone to be separated from an outside point by means of a simplicial cone.\\[.5in] Some open questions related to the discussed topics will be raised during the talk. \bibliographystyle{amsplain} \begin{thebibliography}{7} \bibitem{BP} G. Birkhoff and R.S. Pierce, {\em Lattice-ordered rings}, An. Acad. Brasil. Ci. 28 (1956), 41--69. \bibitem{DMW} C. de La Mora and P. Wojciechowski, {\em Multiplicative bases in matrix algebras}, Linear Algebra and its Applications 419 (2006) 287--298. \bibitem{LS} R. Loewy and H. Schneider, {\em Positive Operators on the $n$-dimensional Ice-Cream Cone}, J. Math. Anal. Appl. 49 (1975) \bibitem{MW} J. Ma and P.
Wojciechowski, {\em A proof of Weinberg's conjecture on lattice-ordered matrix algebras}, Proc. Amer. Math. Soc., 130 (2002), no. 10, 2845--2851. \bibitem{MW2} J. Ma and P. Wojciechowski, {\em Lattice orders on matrix algebras}, Algebra Univers. 47 (2002), 435--441. \bibitem{We} E. C. Weinberg, {\em On the scarcity of lattice-ordered matrix rings}, Pacific J. Math. 19 (1966), 561--571. \bibitem{W} P. Wojciechowski, {\em Directed maximal partial orders of matrices}, Linear Algebra and its Applications 375 (2003) 45--49. \end{thebibliography}","Matrix algebra, order, cone, multiplicative basis","15A48","06F25"," "Nagy","James","nagy@mathcs.emory.edu","\section{Kronecker Products in Imaging Sciences} By {\sl James G. Nagy}. \noindent Linear algebra and matrix analysis are very important in the imaging sciences. This should not be surprising since digital images are typically represented as arrays of pixel values; that is, as matrices. Due to advances in technology, the development of new imaging devices, and the desire to obtain images with ever higher resolution, linear algebra research in image processing is very active. In this talk we describe how Kronecker and Hadamard products arise naturally in many imaging applications, and how their properties can be exploited when computing solutions of very difficult linear algebra problems.","Kronecker product, Hadamard product, image processing","15","65"," "Strong","David","David.Strong@pepperdine.edu","\section{A Java applet and introductory tutorial for the Jacobi, Gauss-Seidel and SOR Methods} By {\sl David Strong}. \noindent I will discuss a Java applet, tutorial and exercises that are designed to allow both students and instructors to experiment with and visualize the Jacobi, Gauss-Seidel and SOR Methods in solving systems of linear equations. The applet is for working with $2 \times 2$ systems. The tutorial includes an analysis (using eigenvalues and spectral radius) of these methods. The exercises are designed to be done using the applet in order to more easily investigate ideas and issues that are often not dealt with when these methods are first introduced, but that are fundamental to numerical analysis and linear algebra, such as eigenvalues/vectors and convergence rates.","Jacobi, Gauss-Seidel, SOR, numerical linear algebra, iterative methods, applet","97","65"," "Rust","Bert","bert.rust@nist.gov","\section{A Truncated Singular Component Method for Ill-Posed Problems} By {\sl Bert Rust and Dianne O'Leary}. \noindent The truncated singular value decomposition (TSVD) method for solving ill-posed problems regularizes the solution by neglecting contributions in the directions defined by singular vectors corresponding to small singular values. In this work we propose an alternate method, neglecting contributions in directions where the measurement value is below the noise level. We call this the truncated singular component method (TSCM). We present results of this method on test problems, comparing it with the TSVD method and with Tikhonov regularization.","ill-posed problems, regularization, singular value decomposition","65","F22"," "Costa","Liliana","lilianacosta@ua.pt","\section{Acyclic Birkhoff Polytope} By {\sl Liliana Costa, C.M. da Fonseca and Enide Andrade Martins}. \noindent A real square matrix with nonnegative entries and all row and column sums equal to one is said to be doubly stochastic.
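For concreteness, a small illustrative example (an editorial addition, not the abstract's): the matrix \[ \begin{pmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 0 & 1/2 \\ 0 & 1/2 & 1/2 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} + \frac{1}{2}\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \] is doubly stochastic (and, incidentally, tridiagonal), exhibited as a convex combination of two permutation matrices in the sense of Birkhoff's theorem recalled below.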
This denomination is associated with probability distributions, and the diversity of branches of mathematics in which doubly stochastic matrices arise (geometry, combinatorics, optimization theory, graph theory and statistics) is amazing. Doubly stochastic matrices have been studied quite extensively, especially in their relation with the van der Waerden conjecture for the permanent. In 1946, Birkhoff published a remarkable result asserting that a matrix in the polytope of $n\times n$ nonnegative doubly stochastic matrices, $\Omega _{n}$, is a vertex if and only if it is a permutation matrix. In fact, $\Omega _{n}$ is the convex hull of all permutation matrices of order $n$. The \emph{Birkhoff polytope} $\Omega _{n}$ is also known as the \emph{transportation polytope} or the \emph{doubly stochastic matrices polytope}. Recently Dahl discussed the subclass of $\Omega _{n}$ consisting of the tridiagonal doubly stochastic matrices and the corresponding subpolytope \[ \Omega _{n}^{t}=\{A\in \Omega _{n}:A\mbox{ is tridiagonal}\}, \] the so-called \textit{tridiagonal Birkhoff polytope}, and studied the facial structure of $\Omega _{n}^{t}.$ In this talk we present an interpretation of vertices and edges of the acyclic Birkhoff polytope, $\mathfrak{T}_{n}=\Omega _{n}(T)$, where $T$ is a given tree, in terms of graph theory.","Doubly stochastic matrix; Birkhoff polytope; Number of vertices; Tree","05A15","15A51"," "Martins","Enide","enide@ua.pt","\section{On the spectra of some graphs like weighted rooted trees} By {\sl Ros\'{a}rio, Helena Gomes and Enide Andrade Martins}. \noindent Let $G$ be a weighted rooted graph of $k$ levels such that, for $j\in\{2,\dots ,k\}$ \begin{enumerate} \item each vertex at level $j$ is adjacent to one vertex at level $j-1$ and all edges joining a vertex at level $j$ with a vertex at level $j-1$ have the same weight, where the weight is a positive real number. \item if two vertices at level $j$ are adjacent then they are adjacent to the same vertex at level $j-1$ and all edges joining two vertices at level $j $ have the same weight. \item two vertices at level $j$ have the same degree. \item there is not a vertex at level $j$ adjacent to two other vertices at the same level. \end{enumerate} In this talk we give a complete characterization of the eigenvalues of the Laplacian matrix of $G$ (an analogous characterization can be done for the adjacency matrix of $G$). By application of these results, we derive an upper bound on the largest eigenvalue of a graph defined by a weighted tree and a weighted triangle attached, by one of its vertices, to a pendant vertex of the tree.","Graph; Laplacian matrix; Adjacency matrix; Eigenvalues","05C50",""," "Boimond","Jean-Louis","Jean-Louis.Boimond@univ-angers.fr","\section{On Steady State Controller in Min-Plus Algebra} By {\sl J.-L. Boimond, S. Lahaye}. \noindent Synchronization phenomena occurring in systems where dynamic behavior is represented by a flow of fluid are well modeled by continuous $(min, +)$-linear systems. A feedback controller design method is proposed for such systems in order that the system output asymptotically behaves like a polynomial input. Such a controller objective is well-known in conventional linear systems theory. Indeed, the steady-state accuracy of conventional linear systems is classified according to their final responses to polynomial inputs such as steps, ramps, and parabolas.
The ability of the system to asymptotically track polynomial inputs is given by the highest degree, $k$, of the polynomial for which the error between system output and reference input is finite but nonzero. We call the system {\it type k} to identify this polynomial degree. For example, a {\it type} $1$ system has finite, nonzero error to a first-degree polynomial input (ramp).\\ An analogous definition of system {\it type} $k$ is given for continuous $(min, +)$-linear systems and leads to simple conditions as in conventional system theory. In addition to the conditions that the resulting controller must satisfy, we look for the {\it greatest} controller to satisfy the {\it just in time} criterion. For a manufacturing system, such an objective allows the releasing of raw parts at the latest dates such that the customer demand is satisfied.","Continuous timed event graph, min-plus algebra, steady state controller, system type","93","06","contribution for the mini-symposium MS7 Max algebra (H. Schneider, P. Butkovic) "Fošner","Ajda","ajda.fosner@uni-mb.si","\section{Commutativity preserving maps on real matrices} By {\sl Ajda Fo\v sner}. \noindent Let $M_n({\mathbb R})$ be the algebra of all $n\times n$ real matrices. A map $\phi : M_n({\mathbb R}) \to M_n({\mathbb R})$ preserves commutativity if $\phi (A) \phi (B) = \phi (B) \phi (A)$ whenever $AB = BA$, $A,B \in M_n({\mathbb R})$. If $\phi$ is bijective and both $\phi$ and $\phi^{-1}$ preserve commutativity, then we say that $\phi$ preserves commutativity in both directions. We will talk about non-linear maps on $M_n({\mathbb R})$ that preserve commutativity in both directions or in one direction only.","commutativity preserving map, real Jordan canonical form","15A27","15A21"," "Shader","Bryan","bshader@uwyo.edu","\section{Average minimum rank of a graph} By {\sl Francesco Barioli, Shaun Fallat, Tracy Hall, Daniel Hershkowitz, Leslie Hogben, Ryan Martin, Bryan Shader, Hein van der Holst}. \noindent We establish asymptotic upper and lower bounds on the average minimum rank of a graph using probabilistic, linear algebraic and graph theoretic techniques.","Minimum rank, zero pattern, graph","05C50","","This is part of the minisymposium on Minimum ranks "maracci","mirko","mirko.maracci@gmail.com","\section{Basic notions of Vector Space Theory: students' models and conceptions} By {\sl Mirko Maracci}. \noindent Carlson (1993) uses the image of the fog rolling in to describe the confusion and disorientation which his students experience when getting to the basic notions of Vector Space Theory (VST). There is truly a widespread sense of the inadequacy of the teaching of Linear Algebra. On account of that common perception and of the importance of Linear Algebra as a prerequisite for a number of disciplines (math, science, engineering,...), in the last twenty years several studies were carried out on Linear Algebra education. Those studies brought undeniable progress in understanding students' difficulties in Linear Algebra.
As Dorier and Sierpinska effectively synthesized in their literature survey (2001), three different kinds of sources of students' difficulties in Linear Algebra especially emerge from the studies on that topic: \begin{enumerate} \item the fact that Linear Algebra teaching is characterized by an axiomatic approach, which is perceived by students as superfluous and meaningless; \item the fact that Linear Algebra is characterized by the cohabitation of different languages, systems of representations, modes of description; \item the fact that coping with those features requires the development of {\it theoretical thinking} and {\it cognitive flexibility}. \end{enumerate} Recently more studies were carried out, which in our opinion still fit well Dorier and Sierpinska's synthesis. \\ In this talk I will focus on some aspects of students' difficulties in vector space theory (VST), drawn from my doctoral research project. That project was meant to investigate graduate and undergraduate students' errors and difficulties in VST. Through that work I intended to contribute to the Linear Algebra education research field, focusing on cognitive difficulties related to specific VST notions rather than to general features of Linear Algebra: a seemingly less explored path.\\ The study involved 15 (graduate or undergraduate) students in mathematics, presented with two or three different VST problems to be solved in individual sessions. The methodology adopted was that of the clinical interview (Ginsburg, 1981). The study highlighted a number of students' difficulties related to the notions of linear combination, linear dependence/independence, dimension and spanning set. The difficulties, errors and impasses that emerged were analysed through the lenses of different theoretical frameworks: the theory of tacit intuitive models (Fischbein, 1987), Sfard's process-object duality theory (Sfard, 1991) and the ckc model (Balacheff, 1995). The different analyses led to the formulation of hypotheses which account for a variety of students' difficulties. Though not antithetical to each other, those analyses are diversified and put into evidence different aspects from different perspectives. In this talk I briefly present the results of those analyses and a first tentative integrating analysis, combining different hints and perspectives provided by the frameworks mentioned above. More specifically, that attempt led to the formulation of the hypothesis that many difficulties experienced by students are consistent with the possible activation of an intuitive model of “construction” related to basic notions of VST. In the talk we will better specify that hypothesis, showing how it could contribute to better organize and explain students' documented difficulties. \section*{References} \begin{description} \item[{\sc Balacheff N., 1995;}] Conception, connaissance et concept, Grenier D. (ed.) {\it Didactique et technologies cognitives en math\'ematiques, s\'eminaires 1994-1995}, pp.~219-244, Grenoble: Universit\'e Joseph Fourier. \item[{\sc Carlson D., 1993;}] Teaching linear algebra: must the fog always roll in?, {\it College Mathematics Journal}, vol.~24, n.~1; pp.~29-40. \item[{\sc Dorier J.-L., Sierpinska A., 2001;}] Research into the teaching and learning of linear algebra, Holton D. (ed.) {\it The Teaching and Learning in Mathematics at University Level- An ICMI Study}, Kluwer Acad. Publ., The Netherlands, pp. 255-273. \item[{\sc Fischbein E., 1987;}] {\it Intuition in science and mathematics}, D.Reidel Publishing Company, Dordrecht, Holland.
\item[{\sc Ginsburg H., 1981;}] The Clinical Interview in Psychological Research on Mathematical Thinking: Aims, Rationales, Techniques. {\it For the Learning of Mathematics}, v.~1,~3, pp.~4-11. \item[{\sc Sfard A., 1991;}] On the dual nature of mathematical conceptions: reflections on processes and objects as different sides of the same coin, {\it Educational Studies in Mathematics}, v.~22, pp.~1-36. \end{description}","intuitive models, process-object duality, Linear Algebra education","97c30",""," "Malik","Saroj","saroj.malik@gmail.com","\section{A new class of g-inverses and order relations on index 1 matrices} By {\sl Saroj Malik}. \noindent In this paper we introduce two new classes of g-inverses of a matrix $A$ of index 1 over an arbitrary field. We obtain some properties of these generalized inverses and identify the class of all commuting g-inverses as one of these new classes of g-inverses. The problem of the one-sided sharp order has also been studied, and these new g-inverses have been found very useful in characterizing it. We also give conditions under which the one-sided sharp order becomes the full sharp order. Finally we study the sharp order for partitioned matrices.","g-inverse, index 1 matrices, Good approximate solution, excellent approximate solution, Group inverse, one-sided sharp order","15","15A57; 1","This Abstract is a PDF version of the tex file. I'm separately sending both files to Prof Verde "Prokip","Volodymyr","vprokip@mail.ru","\section{On the problem of diagonalizability of matrices over a principal ideal domain} By {\sl Volodymyr Prokip}. \noindent Let $R$ be a principal ideal domain with unit element $e\not=0$, and let $U(R)$ be the set of divisors of the unit element $e$. Further, let $R_n$ be the ring of $(n\times n)$-matrices over $R$, $I_k$ the identity $k\times k$ matrix and $O$ the zero $n\times n$ matrix. In this report we present conditions for diagonalizability of a matrix $A \in R_n$, i.e. conditions under which there exists a matrix $T \in GL(n,R)$ such that $TAT^{-1}$ is a diagonal matrix. {\bf Theorem.} Let $A\in R_n$ and $$\det (Ix-A)=(x-\alpha_1)^{k_1}(x-\alpha_2)^{k_2} \cdots (x-\alpha_r)^{k_r} , $$ where $ \alpha_i \in R $, and $ \alpha_i - \alpha_j \in U(R)$ for all $i\not= j$. If $m(x)=(x-\alpha_1)(x-\alpha_2) \cdots (x-\alpha_r)$ is the minimal polynomial of the matrix $A$, i.e. $m(A)=O$, then for the matrix $A$ there exists a matrix $ T \in GL(n,R)$ such that $$ TAT^{-1}={\rm diag} \left( {\alpha}_1I_{k_1}, {\alpha}_2I_{k_2}, \ldots , {\alpha}_rI_{k_r} \right) . $$","matrix, principal ideal domain, diagonalization","15A04","15A21"," "Noutsos","Dimitrios","dnoutsos@uoi.gr","\section{Reachability cone of eventually exponentially nonnegative matrices} By {\sl Dimitrios Noutsos and Michael Tsatsomeros}. \noindent We examine the relation between eventual exponential nonnegativity of a matrix $A$ ($e^{tA}\geq 0$ for all sufficiently large $t\geq 0$) and eventual nonnegativity of $I+hA, ~ h\geq 0$ ($(I+hA)^k\geq 0$ for all sufficiently large $k\geq 0$). As a consequence, we are able to characterize initial points $x_0\in \mathbb{R}^n$ such that $e^{tA}x_0$ becomes and remains nonnegative as exactly those points for which the discrete trajectories $x^{(k)} = (I+hA)^kx_0$ become and remain nonnegative. This extends work on the reachability cone of exponentially nonnegative matrices by Neumann, Stern and Tsatsomeros [1]. \bigskip [1] M. Neumann, R.J. Stern, and M. Tsatsomeros.
The reachability cones of essentially nonnegative matrices. {\em Linear and Multilinear Algebra}, 28:213--224, 1991.","Eventually nonnegative matrix; eventually exponentially nonnegative matrix; point of nonnegative potential; reachability cone","15A48","65F10","Consider my talk for the mini-symposium ""MS8 Nonnegative and eventually nonnegative matrices"", organized by Judi McDonald "Moro","Julio","jmoro@math.uc3m.es","\section{Structured H\""older condition numbers for eigenvalues under fully nongeneric perturbations} By {Mar\'{\i}a J.\ Pel\'aez and Julio Moro}. \noindent Let $\lambda$ be an eigenvalue of a matrix or operator $A$. The condition number $\kappa(A,\lambda)$ measures the sensitivity of $\lambda$ with respect to arbitrary perturbations in $A$. If $A$ belongs to some relevant class, say ${\mathbb S}$, of structured operators, one can define the {\em structured} condition number $\kappa(A,\lambda;\mathbb{S})$, which measures the sensitivity of $\lambda$ to perturbations {\em within} the set ${\mathbb S}$. Whenever the structured condition number is much smaller than the unstructured one, the possibility opens for a structure-preserving spectral algorithm to be more accurate than a conventional one. \medskip For multiple, possibly defective, eigenvalues the condition number is usually defined as a pair of nonnegative numbers, with the first component reflecting the worst-case asymptotic order which is to be expected from the perturbations in the eigenvalue. In this talk we address the case when this asymptotic order differs for structured and for unstructured perturbations: if we denote $\kappa(A,\lambda)=(n,\alpha)$ and $\kappa(A,\lambda;\mathbb{S}) = (n_{{\mathbb S}},\alpha_{{\mathbb S}})$, we consider the case when $n\not= n_{{\mathbb S}}$, i.e., when structured perturbations induce a {\em qualitatively} different perturbation behavior than unstructured ones. If this happens, we say that the class ${\mathbb S}$ of perturbations is {\em fully nongeneric} for $\lambda$. \medskip On one hand, full nongenericity is characterized in terms of the eigenvector matrices corresponding to $\lambda$, and it is shown that, for linear structures, this is related to the so-called skew-structure associated with ${\mathbb S}$. On the other hand, we make use of Newton polygon techniques to obtain explicit formulas for structured condition numbers in the fully nongeneric case: both the asymptotic order and the largest possible leading coefficient are identified in the asymptotic expansion of perturbed eigenvalues for fully nongeneric perturbations.","eigenvalue problem, condition number, perturbation theory","65F15","15A18","this talk will be part of the minisymposium on ""Eigenproblems: theory and computation"" "Sendov","Hristo","hssendov@stats.uwo.ca","\section{Spectral Manifolds} By {\sl A. Daniilidis, J. Malick, A. Lewis, H.S. Sendov}. \noindent It is well known that the set of all $n \times n$ symmetric matrices of rank $k$ is a smooth manifold. This set can be described as the set of those symmetric matrices whose ordered vector of eigenvalues has exactly $n-k$ zeros. The set of all vectors in $\R^n$ with exactly $n-k$ zero entries is itself an analytic manifold. In this work, we characterize the manifolds $M$ in $\R^n$ with the property that the set of all $n \times n$ symmetric matrices whose ordered vector of eigenvalues belongs to $M$ is a manifold. In particular, we show that if $M$ is a $C^2$, $C^{\infty}$, or $C^{\omega}$ manifold then so is the corresponding matrix set.
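(For orientation only, a standard dimension count that is not part of the results announced here: the vectors in $\R^n$ with exactly $n-k$ zero entries form a manifold $M$ of dimension $k$, while the set of $n \times n$ symmetric matrices of rank $k$ is a manifold of dimension $$nk-\frac{k(k-1)}{2},$$ which illustrates how the dimension of the matrix set grows relative to that of $M$.)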
We give a formula for the dimension of the matrix manifold in terms of the dimension of $M$.","eigenvalue, manifold, symmetric matrix","15A18",""," "Vander Meulen","Kevin","kvanderm@cs.redeemer.ca","\section{Sparse Inertially Arbitrary Sign Patterns} By {\sl L. Vanderspek, M. Cavers, K.N. Vander Meulen}. \noindent The inertia of a real matrix $A$ is an ordered triple $i(A)=(n_1,n_2,n_3)$ where $n_1$ is the number of eigenvalues of $A$ with positive real part, $n_2$ is the number of eigenvalues of $A$ with negative real part, and $n_3$ is the number of eigenvalues of $A$ with zero real part. A sign pattern is a matrix whose entries are in $\{ +,-,0\}$. An order $n$ sign pattern $S$ is inertially arbitrary if for every ordered triple $(n_1,n_2,n_3)$ with $n_1+n_2+n_3=n$ there is a real matrix $A$ such that $A$ has sign pattern $S$ and $i(A)=(n_1,n_2,n_3)$. We describe some techniques for determining whether a pattern is inertially arbitrary. We present some irreducible inertially arbitrary patterns of order $n$ with fewer than $2n$ nonzero entries.","sign pattern, inertia, nilpotent","15A18","05C50","MS1 Combinatorial Matrix Theory "Frank","Martin","frank@mathematik.uni-kl.de","\section{An iterative method for transport equations in radiotherapy} By {\sl Bruno Dubroca \and Martin Frank}. \noindent Treatment with high energy ionizing radiation is one of the main methods of modern cancer therapy in clinical use. During the last decades two main approaches to dose calculation were used, Monte Carlo simulations and pencil-beam models. A third way to dose calculation has not attracted much attention in the medical physics community. This approach is based on deterministic transport equations of radiative transfer. In this work, we study a full discretization of the transport equation which yields a large linear system of equations. The computational challenge is that scattering is strongly forward-peaked, which means that traditional solution methods like source iteration fail in this case. Therefore we propose a new method, which combines an incomplete factorization of the scattering matrix and several iterative steps to obtain a fast and accurate solution. Numerical examples are given.","Iterative methods for linear systems; transport equations; radiotherapy","65F10","82C70"," "Fernandes","Rosário","mrff@fct.unl.pt","\section{Rank partitions and covering numbers under small perturbations of an element} By {Ros\'ario Fernandes}. \noindent Let $(v_1,\ldots ,v_m)$ be a family of vectors of $C^n$ (where $C$ is the field of complex numbers). Let $k$ be a positive integer. A subfamily $(v_{i_1},\ldots ,v_{i_j})$ of $(v_1,\ldots ,v_m)$ is $k$-independent if it is the union of $k$ subfamilies each of which is linearly independent. The $k$-dimension of $(v_1,\ldots ,v_m)$ (denoted by $d_k(v_1,\ldots ,v_m)$) is the maximum cardinality of the $k$-independent subfamilies of $(v_1,\ldots ,v_m)$. It was proved in ``On the $\mu$-colorings of a matroid'' (J.A. Dias da Silva, Lin. Multil. Algebra 27 (1990), 25-32) that $$(d_1(v_1,\ldots ,v_m),\ d_2(v_1,\ldots ,v_m)-d_1(v_1,\ldots ,v_m),\ \ldots ,\ d_m(v_1,\ldots ,v_m)-d_{m-1}(v_1,\ldots ,v_m))$$ is a partition of the number of the nonzero vectors in the family $(v_1,\ldots ,v_m)$. This partition is called the rank partition. Let $v_i\in (v_1,\ldots ,v_m)$ be a nonzero vector. The smallest integer $s$ such that $d_s(v_1,\ldots ,v_m)>d_s(v_1,\ldots ,v_{i-1},v_{i+1},\ldots ,v_m)$ is called the covering number of $v_i$ in $(v_1,\ldots ,v_m)$.
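(A minimal worked example, added for concreteness and not taken from the talk: for the family $(e_1,e_1,e_2)$ in $C^2$ one has $d_1=2$ and $d_2=d_3=3$, so the rank partition is $$(d_1,\ d_2-d_1,\ d_3-d_2)=(2,1,0),$$ a partition of $3$, the number of nonzero vectors; the covering number of $e_2$ is $1$, since $d_1(e_1,e_1,e_2)=2>1=d_1(e_1,e_1)$.)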
In this talk we describe how the rank partition and the covering number can change under arbitrarily small perturbations of a fixed element.","matroid; rank partition; covering number; small perturbations","05B35","15A03"," "Perdigão","Cecília","mcds@fct.unl.pt","\section{On the equivalence class graph} By {\sl Cec\'\i lia Perdig\~ao and Ros\'ario Fernandes}. \noindent For a given simple, connected and undirected graph $G=(V(G),E(G))$ we define an equivalence relation $R$ on $V(G)$ such that $$\forall_{x,y \in V(G)}\ \ \ xRy\Leftrightarrow N(x)=N(y),$$ where, for all $x$ in $V(G)$, $N(x)$ is the set of all neighbors of $x$. The equivalence class graph of $G$, or $R$-graph of $G$, is the graph ${\cal G}= (V({\cal G}), E({\cal G}))$ where $V({\cal G})=\{X_1, \dots,X_p\}$ is the set of equivalence classes of $R$ in $V(G)$ and $\{X_i,X_j\} \in E(\cal G)$ if, and only if, there exist $x \in X_i$ and $y \in X_j$ such that $\{x,y\}$ is an edge in $G$. In previous work we computed the minimum rank of $G$ using the $R$-graph of $G$. Although in various cases this computation was simplified, there exist graphs whose $R$-graph is equal to the graph itself, and for these we do not obtain any simplification from this construction. Our aim is to study the properties of the equivalence class graph and, more particularly, to characterize the simple, connected and undirected graphs which are equal to their equivalence class graph.","Graphs; Matrices; Minimum rank","05C50","05C69"," "da Cruz","Henrique F.","hcruz@mat.ubi.pt","\section{On the matrices that preserve the value of the immanant of the upper triangular matrices} By {\sl Ros\'ario Fernandes and Henrique F. da Cruz}. \noindent Let $\chi$ be an irreducible character of the symmetric group of degree $n$, let $M_n(F)$ be the linear space of $n$-square matrices with elements in $F$, let $T^U_n(F)$ be the subset of $M_n(F)$ of the upper triangular matrices and let $d_\chi$ be the immanant associated with $\chi$. We denote by ${\cal T}(S_n,\chi)$ the set of all $A \in M_n(F)$ such that $$d_\chi(AX)=d_\chi(X),$$ \noindent for all $X \in T^U_n(F)$. In [1] it was proved that if $\chi$ is self-associated or $\chi=1$, the principal character, then $${\cal T}(S_n,\chi)=\bigcup_{\sigma\in S_n, \chi (\sigma)\neq 0} \{P(\sigma) R: \,\, R \in T^{U}_n (F),\,\,\det(R)=\frac{\chi (id)}{\chi (\sigma)}\}.$$ If $\chi$ is not self-associated the problem remains unsolved. In this talk we present a complete description of ${\cal T}(S_n,\chi)$ with $\chi=(n-1,1)$ or $\chi=(n-2,2)$. \vspace{0.5cm} {\bf References} [1] {\sc R. Fernandes}, Matrices that preserve the value of the generalized matrix function of the upper triangular matrices, {\it Linear Algebra Appl.} {\bf 401} (2005), 47-65.","Matrix preservers; immanants; triangular matrix","15A15",""," "PALMA","ALEJANDRO","palma@venus.ifuap.buap.mx","\section{SOLUTION OF THE LINEAR TIME-DEPENDENT POTENTIAL BY USING A SOLVABLE LIE ALGEBRA$^*$} \symbolfootnote[1]{Work supported by CONACYT under Project C01-47090} By {\sl A. Palma$^1$\symbolfootnote[2]{On sabbatical leave from Instituto de F\'{i}sica (BUAP)}, M. Villa$^1$, and L. Sandoval$^2$}.\\ \vspace{4 mm} \noindent $^1${Departamento de Qu\'{i}mica, Universidad Aut\'onoma Metropolitana de Iztapalapa, M\'exico, D.F. 09340.}\\ $^2${Facultad de Ciencias de la Computaci\'on, Benem\'erita Universidad Aut\'onoma de Puebla, Puebla, Pue. 72570.}\\
\noindent The solution of the Schr\""odinger equation for the linear time-dependent potential has recently been the subject matter of several publications. We show in this work that this is one of the few systems which lead to a solvable Lie algebra. In fact, we consider a more general potential of which the linear time-dependent potential is only a particular case. We find the solution by using the well-known theorem of Wei and Norman.","LIE ALGEBRA, SCHROEDINGER EQUATION","81","34"," "Kirkland","Steve","kirkland@math.uregina.ca","\section{Constructing Laplacian Integral Split Graphs} By {\sl N. Abreu, M. de Freitas, R. Del Vecchio and S. Kirkland}. \noindent Given a graph $G$, its {\it{Laplacian matrix}}, $L$, is defined as $L=D-A$, where $A$ is the $(0,1)$ adjacency matrix for $G$, and $D$ is the diagonal matrix of vertex degrees. A graph is {\it{Laplacian integral}} if the spectrum of its Laplacian matrix consists entirely of integers. A {\it{split graph}} is one whose vertex set can be partitioned as $A \cup B$, where $A$ induces a clique and $B$ induces an independent set of vertices. Merris has posed the problem of identifying and/or constructing Laplacian integral split graphs. Using balanced incomplete block designs, Diophantine equations, and Kronecker products, we describe a technique for constructing infinite families of Laplacian integral split graphs, thus partially addressing the problem posed by Merris.","Laplacian matrix, split graph, block design","05C50","","This is an invited mini-symposium talk for MS1 - Combinatorial Matrix Theory "Grudsky","Sergey","grudsky@math.cinvestav.mx","\section{Uniform boundedness of Toeplitz matrices with variable coefficients} By {\sl S. M. Grudsky (CINVESTAV, Mexico-City)}. \noindent Uniform boundedness of sequences of variable-coefficient Toeplitz matrices is a surprisingly delicate problem. We show that if the generating function of the sequence belongs to a smoothness scale of H\""older type and if $\alpha$ is the smoothness parameter, then the sequence may be unbounded for $\alpha<1/2$, while it is always bounded for $\alpha>1/2$. Note: to Special session - Structured matrices (V. Olshevsky)","Variable-coefficient Toeplitz matrices, uniform boundedness, H\""older spaces","47B35","15A60"," "bourgeois","gerald","bourgeois_gerald@yahoo.fr","\section{About the logarithm function over the matrices} By {\sl Gerald Bourgeois}. \noindent We prove the following results: let $x,y$ be $(n,n)$ complex matrices such that $x,y,xy$ have no eigenvalue in $]-\infty,0]$ and $\log(xy)=\log(x)+\log(y)$. If $n=2$, or if $n\geq3$ and $x,y$ are simultaneously triangularizable, then $x,y$ commute. In both cases we reduce the problem to a result in complex analysis.\\ \section{Introduction} $\mathbb{Z}^{*}$ refers to the non-zero integers.\\ Let $u$ be a complex number. Then $Re(u),Im(u)$ refer to the real and imaginary parts of $u$; if $u\notin]-\infty,0]$ then $arg(u)\in]-\pi,\pi[$ refers to its principal argument. \subsection{Basic facts about the logarithm.} Let $x$ be a complex $(n,n)$ matrix which has no eigenvalue in $]-\infty,0]$.
Then $\log(x)$, the principal logarithm of $x$, is the $(n,n)$ matrix $a$ such that:\\ $e^a=x$ and the eigenvalues of $a$ lie in the strip $\{z\in\mathbb{C}: Im(z)\in]-\pi,\pi[\}$.\\ $\log(x)$ always exists and is unique; moreover $\log(x)$ may be written as a polynomial in $x$.\\ \indent Now we consider two matrices $x,y$ which have no eigenvalue in $]-\infty,0]$:\\ $\bullet$ If $x,y$ commute then $x,y$ are simultaneously triangularizable and we may associate pairwise their eigenvalues $(\lambda_j),(\mu_j)$; if moreover $\forall{j},|arg(\lambda_j)+arg(\mu_j)|<\pi$, then $\log(xy)=\log(x)+\log(y)$.\\ $\bullet$ Conversely, if $xy$ has no eigenvalue in $]-\infty,0]$ and $\log(xy)=\log(x)+\log(y)$, do $x,y$ commute? We will prove that it is true for $n=2$ (Theorem 1) or, for all $n$, if $x,y$ are simultaneously triangularizable (Theorem 2). But if $n>2$, then we do not know the answer in the general case. \\ \section{Dimension 2} \subsection{Principle of the proof.} \noindent The proof is based on the next two propositions. The first one is a corollary of a result of Morinaga and Nono; the second is a technical result using complex analysis.\\ \textbf{Proposition 1.} Let $\mathcal{U}=\{u\in\mathbb{C}^{*}:e^{u}=1+u\}$.\\ Let $a,b$ be two $(2,2)$ complex matrices such that $e^{a+b}=e^{a}e^{b}$ and $ab\not=ba$; let $spectrum(a)=\{\lambda_1,\lambda_2\},spectrum(b)=\{\mu_1,\mu_2\}$.\\ \indent Then one of the three following items is fulfilled:\\ (1) $\lambda_1-\lambda_2\in{2i\pi\mathbb{Z}^{*}}$ and $\mu_1-\mu_2\in{2i\pi\mathbb{Z}^{*}}$.\\ (2) One of the following complex numbers $\pm(\lambda_1-\lambda_2)$, $\pm(\mu_1-\mu_2)$ is in $\mathcal{U}$.\\ (3) $a$ and $b$ are simultaneously similar to $\begin{pmatrix}\lambda&0\\0&\lambda+u\end{pmatrix}$ and $\begin{pmatrix}\mu+v&1\\0&\mu\end{pmatrix}$ with $\lambda,\mu\in\mathbb{C}$, $u,v\in\mathbb{C}^{*},u\not=v$ and $\dfrac{e^{u}-1}{u}=\dfrac{e^{v}-1}{v}\not=0$.\\ \textbf{Proposition 2.} Let $u,v$ be two distinct, nonzero complex numbers such that $\dfrac{e^{u}-1}{u}=\dfrac{e^{v}-1}{v}\not=0$, $|Im(u)|<2\pi,|Im(v)|<2\pi$.\\ Then necessarily $|Im(u)-Im(v)|\geq{2\pi}$.\\ \subsection{Theorem 1.} Let $x,y$ be two $(2,2)$ complex matrices such that $x,y,xy$ have no eigenvalue in $]-\infty,0]$ and $\log(xy)=\log(x)+\log(y)$. Then $x,y$ commute.\\ \section{Dimension $n$} $I$ refers to the identity matrix of dimension $n-1$. Let $\phi$ be the holomorphic function $\phi:z\rightarrow\dfrac{e^z-1}{z}$, $\phi(0)=1$.\\ We will use the following to prove our second main result. \subsection{Proposition 3.} Let $a=\begin{pmatrix}a_0&u\\0&\alpha\end{pmatrix},b=\begin{pmatrix}b_0&v\\0&\beta\end{pmatrix}$ be two complex $(n,n)$ matrices where $\alpha,\beta$ are complex numbers and $a_0,b_0$ are $(n-1,n-1)$ complex matrices which commute; let $spectrum(a_0-\alpha{I})=(\alpha_i)_{i\leq{n-1}},spectrum(b_0-\beta{I})=(\beta_i)_{i\leq{n-1}}$. If $e^{a+b}=e^a{e^b}$ and $ab\not=ba$ then one of the following items must be satisfied:\\ (4) $\exists{i}:\beta_i\not=0$ and $\phi(\alpha_i+\beta_i)=\phi(\alpha_i)$.\\ (5) $\exists{i}:\alpha_i\not=0,\beta_i=0$ and $\phi(-\alpha_i)=1$.\\ \subsection{Theorem 2.} Let $x,y$ be $(n,n)$ complex matrices such that $x,y,xy$ have no eigenvalue in $]-\infty,0]$ and $\log(xy)=\log(x)+\log(y)$.
If moreover $x,y$ are simultaneously triangularizable then $xy=yx$.\\ \section{Conclusion} When $n=2$, we know how to characterize the complex $(n,n)$ matrices $a,b$ such that $ab\not={ba}$ and $e^{a+b}=e^a{e^b}$; this allowed us to reduce our problem to a result of complex analysis. Unfortunately, if $n\geq{3}$, the classification of such matrices is unknown. For this reason we cannot prove, in this last case, the hoped-for result without supplementary assumptions.\\","Matricial Logarithm, Complex Analysis","39B42",""," "Gemignani","Luca","gemignan@dm.unipi.it","\section{Eigenvalue Problems for Rank-structured Matrices} By {Luca Gemignani}. \noindent A recent significant breakthrough in the field of numerical linear algebra is the design of fast and numerically stable eigenvalue algorithms for certain classes of rank-structured matrices, including, for instance, diagonal plus low-rank and companion matrices. Our developments in numerical methods for solving these large structured eigenvalue problems are reviewed and state-of-the-art algorithms for both direct and inverse problems are discussed. As well as important conceptual and theoretical aspects, emphasis is also placed on more practical computational issues and applications in matrix and polynomial computations.","eigenvalue computation, rank structures, polynomial computation, complexity","65F",""," "Goldberger","Assaf","assafg@post.tau.ac.il","\section{An upper bound on the characteristic polynomial of a nonnegative matrix leading to the proof of the Boyle--Handelman conjecture} By {\sl Assaf Goldberger and Michael Neumann}. \noindent We prove a conjecture of Boyle and Handelman, saying that if $A\in \R^{n,n}$ is a nonnegative matrix of rank $r$ and spectral radius $1$, and if $\chi_A(x)$ is its characteristic polynomial, then $\chi_A(x)\le x^n-x^{n-r}$ for all $x\ge 1$. Our proof is based on the Newton identities.","Nonnegative Matrices, Newton Identities, Characteristic polynomial","15A48","15A18"," "Plavka","J\'an","Jan.Plavka@tuke.sk","\section{On the robustness of matrices in max-min algebra} By {\sl J\'an Plavka}. \noindent Let $(B,\leq)$ be a nonempty, bounded, linearly ordered set and $a\oplus b=\max(a,b),\ a\otimes b=\min(a,b)$ for $a,b\in B.$ A vector $x$ is said to be an eigenvector of a square matrix $A$ if $A\otimes x=\lambda\otimes x$. A given matrix $A$ is called robust (strongly robust) if for every $x$ the vector $A^k\otimes x$ is an eigenvector (the greatest eigenvector) of $A$ for some natural number $k$. We present a characterization of robust and strongly robust matrices. As a consequence, an efficient algorithm for checking these properties is introduced.\\ \begin{thebibliography}{99} \bibitem{b} P. Butkovi\v c and R. A. Cuninghame-Green, On matrix powers in max-algebra, Lin. Algebra and its Appl. 421 (2007) 370--381. \bibitem{c1} K. Cechl\'arov\'a, Eigenvectors in bottleneck algebra, Lin. Algebra Appl. 175 (1992), 63--73. \bibitem{p} J. Plavka, On the robustness of matrices in max-min algebra (submitted to LAA). \end{thebibliography}","robustness, eigenvector","15A06","15A33"," "Russo","Maria Rosaria","mrrusso@math.unipd.it","\section{On some general determinantal identities of Sylvester type} By {\sl Michela Redivo-Zaglia, Maria Rosaria Russo}. \noindent Sylvester's determinantal identity is a well-known identity in matrix analysis which expresses a determinant composed of bordering determinants in terms of the original one.
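(For the reader's orientation, the simplest two-bordering instance of the classical identity is the Desnanot--Jacobi identity, recalled here and not part of the new results: writing $A^{i}_{j}$ for the matrix $A$ with row $i$ and column $j$ deleted, $$\det A \,\det A^{1,n}_{1,n}=\det A^{1}_{1}\,\det A^{n}_{n}-\det A^{1}_{n}\,\det A^{n}_{1}.$$)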
It has been extensively studied, both in the algebraic and in the combinatorial context, and is frequently used in contexts such as approximation, linear programming and extrapolation algorithms. Several authors have studied this classical Sylvester identity in depth, and some of them have obtained significant results such as generalized formulas. In this talk we present a new generalization of Sylvester's determinantal identity, which expresses the determinant of a matrix in relation to the determinants of the bordered matrices obtained by adding more than one row and one column to the original matrix.","Sylvester's identity, determinants, sequence transformation, extrapolation algorithms.","65F40","65B05"," "Marovt","Janko","janko.marovt@uni-mb.si","\section{Homomorphisms of matrix semigroups over division rings from dimension two to three} By {\sl Gregor Dolinar, Janko Marovt}. \noindent Let $\mathbb{D}$ be an arbitrary division ring and $M_{n}(\mathbb{D})$ the multiplicative semigroup of all $n\times n$ matrices over $\mathbb{D}$. We will describe the general form of non-degenerate homomorphisms from $M_{2}(\mathbb{D})$ to $M_{3}(\mathbb{D})$.","Multiplicative map, matrices over division rings, Dieudonné's determinant","15A30","20M20"," "Schaeffer","Elisa","elisa.schaeffer@gmail.com","\section{Locally computable approximations of absorption times for graph clustering} By {\sl Pekka Orponen, Elisa Schaeffer, and Vanesa Avalos}. \noindent Graph clustering aims to partition a given graph into groups of tightly interrelated vertices. In {\em local} clustering, the aim is to identify the group to which a given seed vertex belongs. We study the problem of local clustering based on the mathematics of {\em random walks} in graphs. In this work, we first algebraically express the {\em absorption times} of a random walk to the seed vertex in terms of the {\em spectrum} of a matrix representation of the graph's adjacency relation. We argue and experimentally demonstrate that a single eigenvector often suffices to obtain a good approximation of the absorption times from all other vertices to the seed. We then use a locally computable gradient-descent method to approximate this eigenvector based on its formulation in terms of an optimization problem for the Rayleigh quotient. In order to carry out the local clustering, we interpret the components of the resulting approximation vector as vertex similarities and compute the cluster of the seed vertex as a standard two-classification task on the components of the vector. At no phase of the proposed method for local clustering is it necessary to resort to global information about the graph. This method ties together the well-established field of spectral clustering and the absorption times of a random walk, hence permitting extensions to clustering directed graphs in terms of local approximations to absorption times, whereas much of the matrix algebra used in spectral clustering of undirected graphs is not directly applicable to the asymmetric matrices that arise from directed graphs.","random walk, Fiedler vector, absorption time, graph clustering","05C50","94C15"," "Dolinar","Gregor","gregor.dolinar@fe.uni-lj.si","\section{General preservers of quasi-commutativity} By {\sl Gregor Dolinar, Bojan Kuzma}. \noindent Let $M_n$ be the algebra of all $n \times n$ matrices over the complex field $\mathbb{C}$. We say that $A, B \in M_n$ quasi-commute if there exists a nonzero $\xi \in \mathbb{C}$ such that $AB = \xi BA$.
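(A standard small example, included only for concreteness: the matrices $$A=\begin{pmatrix}1&0\\0&-1\end{pmatrix},\qquad B=\begin{pmatrix}0&1\\1&0\end{pmatrix}$$ satisfy $AB=-BA$, so $A$ and $B$ quasi-commute with $\xi=-1$ although they do not commute.)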
In the paper we classify bijective, not necessarily linear, maps $\Phi \colon M_n \to M_n$ which preserve quasi-commutativity in both directions.","General preservers, Matrix algebra, Quasi-Commutativity","15A04","15A27","I will depart on Friday, therefore I would like to have my talk before Friday. "Meini","Beatrice","meini@dm.unipi.it","\section{From algebraic Riccati equations to unilateral quadratic matrix equations: old and new algorithms} By {\sl Dario Bini, Beatrice Meini, Federico Poloni}. \noindent The problem of reducing an algebraic Riccati equation $XCX-AX-XD+B=0$ to a unilateral quadratic matrix equation (UQME) of the kind $PX^2+QX+R=0$ is analyzed. New reductions are introduced which enable one to prove some theoretical and computational properties. In particular we show that the structure-preserving doubling algorithm of B.D.O.~Anderson [Internat.~J.~Control, 1978] is in fact the cyclic reduction algorithm of Hockney [J.~Assoc.~Comput.~Mach., 1965] and Buzbee, Golub, Nielson [SIAM J.~Numer.~Anal., 1970], applied to a suitable UQME\@. A new algorithm obtained by complementing our reductions with the shrink-and-shift technique of Ramaswami is presented. Finally, faster algorithms which require some non-singularity conditions are designed. The non-singularity restriction is relaxed by introducing a suitable similarity transformation of the Hamiltonian.","Algebraic Riccati Equation, Quadratic Matrix Equation, Cyclic Reduction","15A24","65F30"," "Bini","Dario","bini@dm.unipi.it","\section{Fast solution of a certain Riccati Equation through Cauchy-like matrices} By {\sl Dario Bini, Beatrice Meini, Federico Poloni}. \noindent We consider a special instance of the algebraic Riccati equation $XCX-XE-AX+B=0$ encountered in transport theory, where the $n\times n$ matrix coefficients $A,B,C,E$ are rank-structured matrices. We present some quadratically convergent iterations for solving this matrix equation based on Newton's method, Cyclic Reduction and the Structure-preserving Doubling Algorithm. It is shown that the intermediate matrices generated by these iterations are Cauchy-like with respect to a suitable singular operator and their displacement structure is explicitly determined. Using the GKO algorithm enables us to perform each iteration step in $O(n^2)$ arithmetic operations. In critical cases where convergence turns linear, we present an adaptation of the shift technique which allows one to get rid of the singularity. Numerical experiments and comparisons which confirm the effectiveness of the new approach are reported.","Algebraic Riccati Equation, Cauchy Matrix, Newton Iteration, Cyclic Reduction","15A24","65F30"," "Sivic","Klemen","klemen.sivic@fmf.uni-lj.si","\section{On varieties of commuting triples} By {\sl Klemen \v Sivic}. \noindent The set $C(3,n)$ of all triples of commuting $n\times n$ matrices over an algebraically closed field $F$ is a variety in $F^{3n^2}$ defined by $3n^2$ equations, which are the relations of commutativity. The problem, first proposed by Gerstenhaber, asks for which natural numbers $n$ this variety is irreducible. This is equivalent to the problem whether $C(3,n)$ equals the Zariski closure of the subset of all triples of generic matrices (i.e. matrices having $n$ distinct eigenvalues). The answer is known to be positive for $n\le 7$ and negative for $n\ge 30$.
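(Spelled out, for the reader's convenience, $$C(3,n)=\{(X,Y,Z)\in F^{3n^2} : XY=YX,\ XZ=ZX,\ YZ=ZY\},$$ each of the three commutativity relations contributing $n^2$ scalar equations.)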
Using simultaneous commutative perturbations of pairs of matrices in the centralizer of the third matrix we prove that $C(3,8)$ is also irreducible.","irreducible variety of triples of commuting matrices, simultaneous approximation by generic matrices","15A27","15A30"," "Lancaster","Peter","lancaste@ucalgary.ca","\section{Linearization of Matrix Polynomials} By {Peter Lancaster}. \noindent A precise form will be given to the notion of linearization of matrix polynomials, with special reference to the notion of an eigenvalue at infinity. This will be illustrated with linearizations of matrix polynomials represented in various polynomial bases, orthogonal and otherwise. This is a report on collaborative work with A. Amiraslani (University of Calgary) and R. W. Corless (University of Western Ontario).","Matrix Polynomial. Linearization.","15A22","65H17"," "Barria","Jose","jbarria@math.scu.edu","\section{The strong closure of the similarity orbit for a class of pairs of finite rank operators} By {\sl Jos\'{e} Barr\'{\i}a}. \noindent For operators $A$ and $B$ on a Hilbert space ${\mathcal H}$ the similarity orbit $S(A, B)$ is the set of all pairs $(W^{-1}AW, W^{-1}BW)$, where $W$ is an invertible operator on ${\mathcal H}$. We describe the closure of $S(A, B)$ in the strong operator topology, for finite rank operators $A$ and $B$ whose ranges have intersection equal to the subspace $\{0\}$.","Similarity orbit; Strong operator topology","47A58","15A60"," "Šmigoc","Helena","Helena.Smigoc@ucd.ie","\section{An example of constructing a nonnegative matrix with given spectrum} By {\sl Thomas J. Laffey, Helena \v Smigoc}. \noindent We say that a list of $n$ complex numbers $\sigma$ is the nonzero spectrum of a nonnegative matrix if there exists a nonnegative integer $N$ such that $\sigma$ together with $N$ zeros added to it is the spectrum of some $(n+N)\times (n+N)$ nonnegative matrix. Boyle and Handelman characterized all lists of $n$ complex numbers that can be the nonzero spectrum of a nonnegative matrix. In this talk we will present a constructive proof that $\tau(t)=(3+t,3-t,-2,-2,-2)$ is the nonzero spectrum of some nonnegative matrix for every $t >0.$ We will give a bound on the number of zeros that need to be added to $\tau(t)$ to achieve a nonnegative realization. We will discuss how the method presented could be applied to more general situations.","Nonnegative Inverse Eigenvalue Problem, Nonzero Spectrum, Spectral Gap","15A48","15A18"," "Stosic","Marko","mstosic@isr.ist.utl.pt","\section{On the Generalized Procrustes Problem} By {\sl Marko Sto\v si\'c and Jo\~ao Xavier}. \noindent In this talk we present a new approach to the generalized Procrustes problem: for given real matrices $A\in {\mathbb R}^{n \times 3}$ and $B\in {\mathbb R}^{n \times 2}$, find the Stiefel matrix $Q\in {\mathbb R}^{3 \times 2}$ (i.e. such that $Q^T Q=I_2$) that minimizes the Frobenius norm of $B-AQ$. We rewrite this problem as a more general quadratic programming problem, and give a fast algorithm for its (partial) solution. The solution is based on the computation of convex hulls of various sets of matrices.","matrices, Procrustes problem, convex hull, optimization","15A21","15A57"," "Dogan-Dunlap","Hamide","hdogan@utep.edu","\section{Thinking Modes Revealed in Students' Responses to an Assignment on Linear Independence} By {\sl Hamide Dogan-Dunlap}. \noindent The main goal of our work was to document differences in the types of modes students use after being exposed to two different interventions.
Both interventions used computer-based activities providing numerical (first intervention) and geometrical (second intervention) representations. Only the modes displayed in student responses to an assignment that was given during the second intervention are reported here. This assignment consisted of seven questions on linear independence. The aspects of forty-five matrix algebra students' thinking modes are documented in light of Sierpinska's framework on thinking modes (2000)*. Our qualitative analysis implemented a constant comparison method, an inductive approach to classifying responses through emerging themes. Our analysis revealed that, in concrete (traditional) questions that do not require generalization/abstraction, students' responses included various geometrical aspects of vectors and planes in $R^3$, such as the following: ``vectors coming out of a plane,'' ``vectors that lie on the same plane,'' and ``the magnitudes of the vectors are the same/different.'' Even though students used graphical modes in their responses to the concrete questions, when answering more abstract questions requiring conjecture and generalization, many of these students' responses fell back on the algebraic and arithmetic modes. Some, for instance, stated mainly the formal definition of linear independence without showing any work/computation to justify their answers for these questions. We should also note that, despite this fact, the second most common mode used in the abstract questions was the geometrical one. We furthermore observed that a notable number of students made arguments using multiple modes: numerical, algebraic and geometrical. One may infer from this that, at this point, students may begin reasoning in multiple modes. We believe that this is a desired behavior toward forming a rich conceptual understanding of linear independence. * Sierpinska, A. 2000. ``On some aspects of students' thinking in linear algebra,'' The Teaching of Linear Algebra in Question, The Netherlands 2000, pp. 209--246.","Linear Algebra Education, Modes of Thinking","97","15","This is for an invited talk for the mini-symposia on Linear Algebra Education "Narayan","Sivaram","sivaram.narayan@cmich.edu","\section{Linearly Independent Vertices and Minimum Semidefinite Rank} By {\sl Sivaram K. Narayan} (Department of Mathematics, Central Michigan University, Mount Pleasant, MI 48859, USA). \noindent A {\it vector representation} of a graph is an assignment of a vector in $\mathbb{C}^n$ to each vertex so that nonadjacent vertices are represented by orthogonal vectors and vertices adjacent by a single edge are represented by nonorthogonal vectors. The least $n$ for which a vector representation can be found is the {\it minimum semidefinite rank} of a graph. We study the minimum semidefinite rank of a graph using vector representations. For example, rotation of vector representations by a unitary matrix allows us to find the minimum semidefinite rank of the join of two graphs and certain bipartite graphs.
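(A minimal illustration, added here and not taken from the talk: for the path $P_3$ with vertices $1$--$2$--$3$, the assignment $$v_1=(1,0),\qquad v_2=(1,1),\qquad v_3=(0,1)$$ is a vector representation in $\mathbb{C}^2$, since $v_1\perp v_3$ while $v_1,v_2$ and $v_2,v_3$ are nonorthogonal; no representation in $\mathbb{C}^1$ exists, because $v_1$ and $v_3$ would have to be orthogonal and nonzero, so the minimum semidefinite rank of $P_3$ is $2$.)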
We present a sufficient condition under which the vectors corresponding to a set of vertices of a graph must be linearly independent in any vector representation of that graph, and conjecture that the resulting graph invariant is equal to the minimum semidefinite rank.","Minimum Semi-definite Rank, Join of Graphs, Linearly Independent Vertices","15A18","05C50"," "Im","Bokhee","bim@chonnam.ac.kr","\section{Representations of trilinear products in comtrans algebras} By {\sl Bokhee Im (Chonnam National University), Jonathan D. H. Smith (Iowa State University)}. \noindent Unlike the set of all Lie algebras, the set of all comtrans algebras on a given module has a linear structure. Let $E$ be a finite-dimensional vector space over a field $k$. We want to determine which trilinear products $xyz$ on $E$ may be represented as linear combinations of the commutator and translator of a comtrans algebra on $E$ in the manner of the following so-called bogus product: $$xyz=\frac16[x,y,z]+\frac16[y,z,x]+\frac16[z,x,y]+\frac13\langle x,y,z\rangle-\frac13\langle z,x,y\rangle.$$ If the underlying field is not of characteristic $3$, then we show that the necessary and sufficient condition for such a representation is $$ xxy+xyx+yxx=0\, , $$ a condition described as \emph{strong alternativity}. Indeed, if the underlying field is also not of characteristic $2$, then each strongly alternative trilinear product is represented as the bogus product of a comtrans algebra. An appropriate representation for the case of characteristic $2$ will also be given.","trilinear product, cubic form, comtrans algebra","15A78","17D99"," "Guo","Chun-Hua","chguo@math.uregina.ca","\section{On Newton's method and Halley's method for $p$th roots of matrices} By {\sl Chun-Hua Guo}. \noindent If $A$ is any matrix with no eigenvalues on the closed negative real axis, the principal $p$th root of $A$, $A^{1/p}$ ($p\ge 2$ an integer), can be computed by Newton's method or Halley's method (with $X_0=I$) after proper preprocessing if necessary. The matrix $A$ may also be allowed to have semisimple zero eigenvalues. We show that Newton's method converges to $A^{1/p}$ if all eigenvalues of $A$ are in $\{z: |z-1|\le 1\}$ and all zero eigenvalues of $A$ (if any) are semisimple. Suppose that all eigenvalues of $A$ are in $\{z: |z-1|< 1\}$ and write $A=I-B$ (so $\rho(B)<1$). Let $(I-B)^{1/p}=\sum_{i=0}^{\infty}c_iB^i$ be the binomial expansion. Then the sequence $X_k$ generated by Newton's method or by Halley's method has the Taylor expansion $X_k=\sum_{i=0}^{\infty}c_{k,i}B^i$. For Newton's method we show that $c_{k,i}=c_i$ for $i=0, 1, \ldots, 2^k-1$, and for Halley's method we show that $c_{k,i}=c_i$ for $i=0, 1, \ldots, 3^k-1$.","matrix $p$th root, Newton's method, Halley's method","65F30","15A24","talk for MS6 "Andjelic","Milica","milica@matf.bg.ac.yu","\section{An upper bound for the largest eigenvalue of the signless Laplacian} By {\sl Milica Andjelic, Slobodan Simic}. \noindent We prove several conjectures which were generated using the computer program AutoGraphiX (AGX). A new bound on the largest eigenvalue of the signless Laplacian is given. Moreover, the study of this bound together with some other known bounds yields many examples where the new one gives more precise approximations.","graph theory, graph spectra, line graph, signless Laplacian, nested split graph, largest eigenvalue","05C50",""," "Fonseca","Carlos","cmf@mat.uc.pt","\section{An inequality for the multiplicity of an eigenvalue} By {\sl C. M. da Fonseca}.
\noindent Let $A(G)$ be a Hermitian matrix whose graph $G$ is given. From the interlacing theorem, it is known that $m_{A(G\backslash i)}(\theta)\geq m_{A(G)}(\theta)-1$, where $m_{A(G)}(\theta)$ is the multiplicity of the eigenvalue $\theta$ of $A(G)$. Motivated by the Christoffel--Darboux identity, in this talk we provide a similar inequality when a particular path of $G$ is deleted.","multiplicity, eigenvalues, graph, tree, Hermitian matrix","15A18","15A57"," "Teixeira Matos","Isabel","imatos@deetc.isel.ipl.pt","\section{A Completion Problem over the Field of Real Numbers} By {\sl Isabel Teixeira Matos}. \noindent Let $F$ be a field. In 1975 G.\ N.\ de Oliveira proposed the following completion problems: describe the possible characteristic polynomials of $$\left[\begin{array}{cc} A_{1,1}&A_{1,2}\\A_{2,1}& A_{2,2}\end{array}\right],$$ where $A_{1,1}$ and $A_{2,2}$ are square submatrices, when some of the blocks $A_{i,j}$ are fixed and the others vary. Several of these problems remain unsolved. We give the solution, over the field of real numbers, of Oliveira's problem where the blocks $A_{1,2},A_{2,1}$ are fixed and the others vary.","Completion problems, eigenvalues, characteristic polynomials","15","18"," "Hnetynkova","Iveta","hnetynkova@math.asu.edu","\section{On solvability of total least squares problems} By {\sl Iveta Hn\v{e}tynkov\'{a}}. \begin{center} Based on joint work with Z. Strako\v{s} and M. Ple\v{s}inger, \\ Institute of Computer Science, Academy of Sciences, Czech Republic. \end{center} \medskip Let $A$ be a real $m$ by $n$ matrix, and $b$ a real $m$-vector. Consider estimating $x$ from an orthogonally invariant linear approximation problem \begin{equation} \label{eq} Ax \approx b, \end{equation} where the data $b, \,A$ contain redundant and/or irrelevant information. In {\em total least squares} (TLS) this problem is solved by constructing a minimal correction to the vector $b$ and the matrix $A$ such that the corrected system is compatible. Contrary to the standard least squares approximation problem, a solution of a TLS problem does not always exist. In addition, the data $b, \, A$ can suffer from multiplicities, and in this case a TLS solution may not be unique. The classical analysis of TLS problems is based on the so-called Golub--Van Loan condition \,$\sigma_{min}(A) \, > \, \sigma_{min}([b,\,A])$\,, see \cite{Golub, Huffel}. This condition is, however, intricate in that it is only sufficient, but not necessary, for the existence of a TLS solution. A new contribution to the theory and computation of linear approximation problems was published in a sequence of papers \cite{PS02a, PS02b, PS06}, see also \cite{HS07}. There it is proved that the partial upper bidiagonalization \cite{GK} of the extended matrix $[b,A]$ determines a core approximation problem \,$A_{11} x_1 \,\approx \,b_1$\,, with the necessary and sufficient information for solving the original problem given by $b_1$ and $A_{11}$. The transformed data $b_1$ and $A_{11}$ can be computed either directly, using Householder orthogonal transformations, or iteratively, using the Golub-Kahan bidiagonalization. It is shown how the core problem can be used in a simple and efficient way for solving the total least squares formulation of the original approximation problem.
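(For orientation, the TLS correction described above can be written, in a standard formulation recalled here for the reader's convenience, as $$\min_{E,\,r}\ \left\| [\,r,\ E\,] \right\|_F \quad\mbox{subject to}\quad (A+E)\,x=b+r.$$)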
\medskip In this contribution we discuss the necessary and sufficient condition for the existence of a TLS solution based on the core reduction, and mention work on extensions of the results to linear approximation problems with multiple right-hand sides \cite{HP}. \begin{thebibliography}{10} \bibitem{GK} G. H. Golub, W. Kahan, Calculating the singular values and pseudo-inverse of a matrix, {\em SIAM J. Numer. Anal.} Ser. B 2, pp. 205--224, 1965. \bibitem{Golub} G. H. Golub, C. F. Van Loan, An analysis of the total least squares problem, {\em SIAM J. Numer. Anal.} 17, pp. 883--893, 1980. \bibitem{HS07} I. Hn\v{e}tynkov\'{a}, Z. Strako\v{s}, Lanczos tridiagonalization and core problems, {\em Lin. Alg. Appl.} 421, pp. 243--251, 2007. \bibitem{Huffel} S. Van Huffel, J. Vandewalle, The total least squares problem: computational aspects and analysis, {\em SIAM}, Philadelphia, 1991. \bibitem{PS02a} C. C. Paige, Z. Strako\v{s}, Scaled total least squares fundamentals, {\em Numer. Math.} 91, pp. 117--146, 2002. \bibitem{PS02b} C. C. Paige, Z. Strako\v{s}, Unifying least squares, total least squares and data least squares, in ``Total Least Squares and Errors-in-Variables Modeling'', S.~van Huffel and P.~Lemmerling, editors, Kluwer Academic Publishers, Dordrecht, pp. 25--34, 2002. \bibitem{PS06} C. Paige, Z. Strako\v{s}, Core problems in linear algebraic systems, {\em SIAM J. Matrix Anal. Appl.} 27, pp. 861--875, 2006. \bibitem{HP} I. Hn\v{e}tynkov\'{a}, M. Ple\v{s}inger, D. M. Sima, Z. Strako\v{s}, S. Van Huffel, The total least squares problem and reduction of data in $AX \approx B$, {\em in preparation}. \end{thebibliography}","linear approximation problem, total least squares, core problem, Golub-Kahan bidiagonalization","15A06","65F10"," "Schaffrin","Burkhard","aschaffrin@earthlink.net","\section{TOTAL LEAST-SQUARES REGULARIZATION OF TYKHONOV TYPE AND AN ANCIENT RACETRACK IN CORINTH} By {\sl Burkhard Schaffrin and Kyle Snow}. \noindent In this contribution a variation of Golub/Hansen/O'Leary's Total Least-Squares (TLS) regularization technique is introduced, based on the Hybrid APproximation Solution (HAPS) within an Errors-in-Variables (EIV) model. After developing the (nonlinear) estimator through a traditional Lagrange approach, the new method is applied to a problem from archeology. There, both the radius and the center of a circle have to be found, of which only a small part of the arc has been surveyed in situ, thereby giving rise to an ill-conditioned set of equations. According to the archeologists involved, this circular arc served as the starting line of a racetrack in the ancient Greek stadium of Corinth, ca.\ 500 BC. The present study compares previous estimates of the circle parameters with the newly developed ""Regularized TLS Solution of Tykhonov type"".","Total Least-Squares, Tykhonov regularization, regularized TLS estimation, circular arc fit","65F22","65.20"," "Eubanks","Sherod","eubanks@math.wsu.edu","\section{Generalized Soules Matrices} By {\sl Sherod Eubanks}. \noindent I will discuss a generalization of Soules matrices and its application to the nonnegative inverse eigenvalue problem, eventually nonnegative matrices, and exponentially nonnegative matrices.","eventually nonnegative matrices, exponentially nonnegative matrices, inverse eigenvalue problem, Soules matrix","15A57","15A29"," "Mitchell","Lon","lmitchell2@vcu.edu","\section{Orthogonal Removal of Vertices and Minimum Semidefinite Rank} By {\sl Lon Mitchell and Sivaram Narayan}.
\noindent A vector representation of a graph is an assignment of a vector in $\mathbb{C}^n$ to each vertex so that nonadjacent vertices are represented by orthogonal vectors and vertices adjacent by a single edge are represented by nonorthogonal vectors. The least $n$ for which a vector representation can be found is the minimum semidefinite rank (msr) of a graph. While the msr of an induced subgraph provides a lower bound for the msr of a graph, a minimal vector representation of a graph need not include a minimal vector representation of a particular subgraph. Orthogonally removing a vertex represented by a vector $\vec{v}$, by orthogonally projecting each vector of a vector representation onto the orthogonal complement of the span of $\vec{v}$, results in a vector representation of a related graph with order decreased by one. We will discuss some of the possibilities and limitations of obtaining minimal vector representations from orthogonal removal.","minimum semidefinite rank, vector representation, graph, positive semidefinite","05C50","15A57"," "Sebeldin","Anatoly","sebeldinam@mail.ru","\section{Algorithm resolving the problem of determination of a finite cyclic group by its automorphism group} By {\sl V.K. Vildanov, A.M. Sebeldin, A.L. Sylla}. \noindent We say that a group $G$ is determined by its automorphism group in some class $\bf X$ if $Aut(G)\cong Aut(H)$ implies $H\cong G$ for any $H\in \bf X$. For any finite cyclic group the matrix of its automorphism group and an algorithm for the comparison of these matrices are obtained. Thus, the problem of the determination of a finite cyclic group $Z(n)$ is reduced to the search for a number $m\ne n$ such that $A(n) = A(m)$, where $A(n)$ and $A(m)$ are the matrices of $Aut(Z(n))$ and $Aut(Z(m))$. Literature: [1] D\'etermination d'un groupe cyclique par son groupe des automorphismes, A. Sebeldin, A. Sylla, Revue des sciences, UGANC, 4 (2002), 26-30.","matrix of the automorphism group","20","15"," "McDonald","Judith","jmcdonald@math.wsu.edu","\section{Nonnegative and Eventually Nonnegative Matrices} By {\sl Judith McDonald}. \noindent I will discuss the interplay between the properties of nonnegative and eventually nonnegative matrices, and the role that the inverse eigenvalue problem plays in this relationship.","nonnegative, eventually nonnegative, inverse eigenvalue problem","15A48","15A18","This is part of the mini-symposium on nonnegative and eventually nonnegative matrices "SAT\^O","Kenzi","kenzi@eng.tamagawa.ac.jp","\section{The algebraic relations of curvatures of PL manifolds} By {\sl Kenzi SAT\^O}. \noindent There are two types of Gauss-Bonnet theorems for PL manifolds: Banchoff's theorem (the sum of Banchoff's curvature over all vertices is equal to the Euler number) and Homma's theorem (the alternating sum of Homma's curvature over all faces is equal to the Euler number). In this talk, the algebraic relations between these curvatures are considered.","Gauss-Bonnet theorem, PL manifolds","53C20",""," "Szyld","Daniel","szyld@temple.edu","\section{Convergence of Stationary Iterative Methods for Hermitian Semidefinite Linear Systems} By {\sl Andreas Frommer, Reinhard Nabben, and Daniel B. Szyld}. \noindent A simple proof is presented of a quite general theorem on the convergence of stationary iterations for solving singular linear systems whose coefficient matrix is Hermitian and positive semidefinite.
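(To fix notation, recalled here for the reader: a splitting $A=M-N$ with $M$ invertible induces the stationary iteration $$x_{k+1}=M^{-1}(Nx_k+b),\qquad k=0,1,\ldots,$$ and the splitting is called $P$-regular when $M^*+N$ is positive definite.)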
In this manner, elegant proofs are obtained of some known convergence results, including the necessity of the $P$-regular splitting result due to Keller, as well as recent results involving generalized inverses. Other generalizations are also presented. These results are then used to analyze the convergence of several versions of algebraic additive and multiplicative Schwarz methods for Hermitian positive semidefinite systems.","Hermitian semidefinite systems, singular systems, stationary iterations, convergence analysis","65F10","65F20","this talk is for the Nonnegative and Eventually Nonnegative Matrix Mini-symposium "Pryporova","Olga","olgav@iastate.edu","\section{Potential Diagonal and D-convergence} By {\sl Olga Pryporova}. \noindent It is well known that a matrix $A$ is convergent (i.e. its spectral radius is less than $1$) if and only if the Stein linear matrix inequality $X-A^*XA\succ0$ has a positive definite solution $X=P$. A stronger type of convergence, useful in many applications, is diagonal convergence, where a positive diagonal solution $P$ exists. Diagonal convergence guarantees, in particular, that a matrix will remain convergent under multiplicative diagonal perturbations $D$ with $|D|\leq I$. A matrix $A$ such that $DA$ is convergent for all diagonal matrices $D$ with $|D|\leq I$ is called $D$-convergent. In my talk I will present some results on the relations between diagonal convergence and $D$-convergence, and introduce connections to qualitative convergence.","diagonal convergence, D-convergence, qualitative convergence, modulus pattern","15A18",""," "Day","Jane","day@math.sjsu.edu","\section{Graph Energy Change Due to Edge Deletion} By {\sl Jane M. Day and Wasin So}. \noindent The energy of a graph is the sum of the singular values of its adjacency matrix. We are interested in the effect on the energy when one edge, or a set of edges, is removed. A singular value inequality for a partitioned matrix proves useful for studying such questions. We describe an infinite family of graphs for which each graph has an edge whose removal leaves the energy unchanged, another family for which removing any edge decreases the energy, and still another infinite family for which removing any edge increases the energy. We give a sufficient condition on a graph $G$ and an edge $e$ such that the energy strictly decreases when $e$ is removed. We have similar results for removing a cut set.","graph energy, singular value","05C50","15A42"," "Stefan","Wolfgang","wolfgang.stefan@asu.edu","\section{Improved total variation-type regularization using higher-order edge detectors} By {\sl W. Stefan, R. Renaut and A. Gelb}. \noindent We present a novel deconvolution approach that simultaneously deblurs and detects edges in piecewise smooth signals. The edges and smooth regions, separated by jump discontinuities, are both preserved. The method uses a two-step procedure: the polynomial annihilation edge detection method, combined with total variation (TV) deconvolution, obtains an estimate of the locations of jump discontinuities in blurred noisy data. This information is used to determine the order of a higher-order TV regularization which is then utilized in the signal restoration.
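(Schematically, and only as orientation for the reader, a first-order TV restoration solves a problem of the form $$\min_u\ \tfrac{1}{2}\|Ku-f\|_2^2+\lambda\|u'\|_1,$$ where $K$ is the blurring operator, $f$ the data and $\lambda>0$ a regularization parameter; the higher-order variant penalizes a higher derivative $\|u^{(m)}\|_1$ instead. The precise formulation used in the talk may differ.)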
As compared to those obtained with standard first-order TV, the signal restorations are more accurate representations of the true signals, as measured in a relative $l^2$ norm, and can also be used to obtain a more accurate estimation of the locations and sizes of the true jump discontinuities.","inverse problem, ill-posedness, total variation, regularization, sparseness","65F22",""," "Kortesi","Peter","matkp@uni-miskolc.hu","\section{Using Linear Algebra in Teaching Hamilton Quaternions and Graphs} By {\sl P\'eter K\"ortesi} (Department of Mathematics, University of Miskolc, 3515 Miskolc-Egyetemv\'aros, Hungary, matkp@uni-miskolc.hu). \noindent Hamilton quaternions are usually introduced as a generalization of complex numbers respecting the basic identities. We present a way to use matrices to introduce quaternions and study their properties, using an isomorphism between the two skew-field structures. The Eulerian and Hamiltonian trails and circuits can be described as well using some adjacency-type matrices in special rings. The method to be presented is at the same time a sufficient condition to decide whether the graph is Hamiltonian or not.","matrices, quaternions, trails and circuits, Eulerian and Hamiltonian graphs","12L12","03C62","MCS 2000: 12L12, 03C62, 03H15 "Elhashash","Abed","abed@drexel.edu","\section{On General Matrices Having the Perron-Frobenius Property} By {\sl Abed Elhashash and Daniel Szyld}. \noindent We say that a matrix has the Perron-Frobenius property if its spectral radius is an eigenvalue for which there is an entry-wise nonnegative eigenvector. Matrices having the Perron-Frobenius property may be viewed as generalizations of nonnegative matrices. We consider spaces consisting of such generalized nonnegative matrices and study some of their topological aspects such as connectedness and closure. In addition, we completely describe the similarity transformations leaving such spaces invariant. We prove some new results needed for the analysis mentioned above, in which we show the existence of orthogonal matrices close to the identity which map semipositive vectors to positive ones. This new tool may be useful in other contexts as well.","Eventually Nonnegative Matrices, Generalizations of Nonnegative Matrices, Perron-Frobenius Property","15A48","","This abstract is for the minisymposium on nonnegative matrices and generalizations (J.J. McDonald). "Cheng","Wei","ch2tong@hotmail.com","\section{One Type of Inverse Eigenvalue Problem in Quaternionic Quantum Mechanics} By {\sl Wei Cheng, Liang-gui Feng}. \noindent The inverse eigenvalue problem studied in this paper arises in quaternionic quantum mechanics. Sufficient and necessary conditions for the existence of solutions are given. The constrained least-squares problems are also studied, and the sufficient and necessary conditions for the existence of their solutions are given. Finally, two numerical algorithms are given.","quaternionic matrix, inverse eigenvalue problem, constrained least-squares problem","15A29","15A33","
"Cheng","Wei","ch2tong@hotmail.com","\section{The Constrained Solutions of Quaternionic Matrix Equations} By {\sl Wei Cheng, Liang-gui Feng}. \noindent The Hermitian and skew-Hermitian quaternionic solutions of the matrix equations $AX+XB^*=C$ and $AXB^*+BXA^*=D$ under the constraint that $(A, B)$ has a simultaneous real diagonalization (SRD) are considered. Necessary and sufficient conditions for the existence of such solutions and their general forms are derived.","quaternionic matrix, quaternionic matrix equations, constrained solutions, simultaneous real diagonalization (SRD)","15A24","15A33"," "Protasov","Vladimir","v-protassov@yandex.ru","\section{$p$-radii of linear operators and equations of self-similarity} By {\sl Vladimir Protasov}. \noindent The $p$-radii of linear operators extend the notion of the joint spectral radius; they have been known since 1995. We prove that for any $p \in [1, +\infty]$ a finite irreducible family of linear operators possesses an extremal norm corresponding to its $p$-radius. As a corollary we derive a criterion for the $L_p$-contractibility property of linear operators and estimate the asymptotic growth of orbits for any point. These results are applied in the analysis of functional difference equations with linear contractions of the argument (self-similarity equations). Special cases of such equations are well known: fractal curves (de Rham curves, Koch curves, etc.), refinement equations and so on. We obtain a sharp criterion for the existence and uniqueness of solutions of the self-similarity equations in various functional spaces, compute the exponents of regularity and estimate moduli of continuity. This, in particular, gives a geometric interpretation of the $p$-radius in terms of spectral radii of certain operators in the space $L_p[0,1]$.","linear operators, spectral radius, extremal norms, contractibility, functional equations, regularity","52A21","39B22","
"Bru","Rafael","rbru@mat.upv.es","\section{On some classes of $H$-matrices} By {\sl Rafael Bru, Ljiljana Cvetkovi\'c, Vladimir Kosti\'c and Francisco Pedroche}. \noindent This talk deals with some classes of $H$-matrices which are subclasses of the invertible $H$-matrices, that is, $H$-matrices with invertible comparison matrix. In particular, new characterizations of S-SDD matrices and $\alpha$-matrices are given. Properties of these classes of $H$-matrices and of doubly diagonally dominant matrices are considered.","$H$-matrices, diagonally dominant matrices","15A57","15.99"," "Parraguez","Marcela","marcela.parraguez@ucv.cl","\section{Construction of a vector space schema} By {\sl Marcela Parraguez (PUCV, Chile and Cicata-IPN, Mexico) and Asuman Okta\c{c} (Cinvestav-IPN, Mexico and PUCV, Chile)}. \noindent From a cognitive point of view the vector space concept is one that causes many difficulties for students of Linear Algebra. Apart from being abstract in itself, it has to be connected with several other abstract concepts in the mind of a student in order to claim that understanding takes place. In this research project our aim is to explain the construction of the vector space concept from the viewpoint of APOS (Action-Process-Object-Schema) theory. We are also interested in studying the formation and evolution of the vector space schema and how other concepts such as linear independence and basis are incorporated into the students’ mathematical world in connection with this schema. The methodological framework of APOS theory requires that the concept in question be analyzed theoretically, resulting in a viable map (called a genetic decomposition) of student learning in terms of mental constructions. In our talk we will present a possible genetic decomposition for the construction of the vector space concept and provide empirical evidence for specific mental constructions that students make when they are learning this concept. This evidence was gathered through questionnaires and interviews (designed in line with our genetic decomposition) applied to undergraduate students who were taking a Linear Algebra course. These instruments also help in identifying student difficulties with the vector space concept and some related concepts such as binary operations, axioms and fields.","Linear Algebra Education, vector spaces, APOS theory","97","","A computer is needed for this presentation, as we will bring it on a USB memory. "Loiseau","Jean Jacques","loiseau@irccyn.ec-nantes.fr","\section{Robust stability of positive difference equations} By {\sl Jean Jacques Loiseau$^1$ and Micha\""el Di Loreto$^2$ \newline $^1$ IRCCyN, UMR CNRS 6597, 1 rue de la No\""e, 44321 Nantes Cedex 3, France ({\tt loiseau@irccyn.ec-nantes.fr}) \newline $^2$ Laboratoire Amp\`ere, UMR CNRS 5005, INSA-Lyon, 20 Avenue Albert Einstein, 69621 Villeurbanne, France ({\tt michael.di-loreto@insa-lyon.fr})} \bigskip \noindent We consider the system of difference equations $$ x(t) = \sum _{k=1}^{\nu}{a_k x(t-\beta _k)}, $$ where $a_k \in \mathbb{R}$, $\beta _k \in \mathbb{R}$, for $k=1$ to $\nu $. We assume that the delays are in increasing order, $0=\beta_0<\beta_1<\beta_2<\ldots<\beta_\nu$.
Such equations appear as models in biology, economics, and from the wave equation (see [3] for examples). The stability of this system was addressed in the references [1--4]. They provide a complete analysis, and point out a very special phenomenon: the zeros of the characteristic equation $$ 1 -\sum_{k=1}^{\nu }{a_k {\mathrm{e}}^{-\beta_k s}} = 0 \; , $$ where $s\in \mathbb{C}$, do not depend continuously on the parameters $\beta _k$. The result is that, if the delays are rationally independent, the system is stable (both in the sense of $L_2$-stability and of exponential stability) if and only if the following holds: $$ \sum\limits_{k=1}^{\nu }{|a_k|}< 1 \; . $$ On the contrary, when the delays are rationally dependent, this condition is sufficient for stability, but not necessary. The rational dependence of the delays is not a continuous property, which somehow explains what happens. As a typical example, one can check that the system $$ x(t) = \frac{3}{4} x(t-1) - \frac{3}{4} x(t-2) $$ is stable. But, since $3/4+3/4>1$, the stability is lost under arbitrarily small perturbations of the delays. Almost all systems of the form $$ x(t) = \frac{3}{4} x(t-1) - \frac{3}{4} x(t-2-\epsilon )\;, $$ are unstable; for example $\epsilon = \pi /100$ gives an unstable system. Two remarks are now in order. The first one is that Max-Plus linear systems are also difference equations. Such systems are obtained as algebraic models of timed marked graphs, a special class of Petri nets, where the delays are associated with the edges of an oriented graph; they correspond to the minimal time to cross these edges. As is well known (see for instance [5] or [6]), the asymptotic behaviour of such a graph is given by the eigenvalue, in the Max-Plus sense, of the corresponding matrix. This eigenvalue can be expressed analytically as the maximum mean weight of the elementary circuits of the graph. This quantity depends continuously on the parameters of the graph, which are the delays and some coefficients called initial marks. The asymptotic behaviour of Max-Plus linear systems does not depend on the algebraic dependence of the delays, in contrast to usual difference equations. Our second remark, which now follows, in some sense explains why this difference of behaviour between Max-Plus systems and usual difference equations is not a paradox. In many applications, the coefficients $a_k$ of our basic equation are positive. The considered equation is then called a positive difference equation. We can show that the zeros of the characteristic equation of a positive difference equation depend continuously on the parameters $a_k$ and $\beta _k$. In particular, for these systems too, the algebraic dependence of the delays does not matter, and in every case the system is stable if and only if the condition above is satisfied, that is, the sum of the coefficients $a_k$ is less than $1$. Since the condition is not delay dependent, it is insensitive to variations of the delays, and one therefore says that the stability is robust. To show this result, we denote by $\mu $ the unique real root of the equation $$ 1-\sum_{k=1}^{\nu }{a_k\mathrm{e}^{-\beta _k \mu}} = 0 \; . $$ As shown in [2], $\mu $ is an upper bound for the real parts of the zeros of the above characteristic equation. If in addition the coefficients $a_k$ are positive, one can show that $\mu $ is itself a zero of the characteristic equation, which leads to the conclusion.
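The final argument above lends itself to a direct computation. A minimal numerical sketch (with illustrative data; the function name is a placeholder, not the authors'): since $f(\mu)=1-\sum_k a_k\mathrm{e}^{-\beta_k\mu}$ is strictly increasing when all $a_k,\beta_k>0$, bisection locates its unique real root, and $\mu<0$ certifies exponential stability.
\begin{verbatim}
import math

def mu_root(a, beta, lo=-50.0, hi=50.0, tol=1e-12):
    # f(m) = 1 - sum_k a_k * exp(-beta_k * m) is strictly increasing
    # for positive a_k, beta_k, so bisection finds its unique real root.
    f = lambda m: 1.0 - sum(ak * math.exp(-bk * m)
                            for ak, bk in zip(a, beta))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# sum of coefficients 0.7 < 1, so mu < 0: exponential stability
print(mu_root([0.4, 0.3], [1.0, 2.0]))
\end{verbatim}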
Thanks to the Perron-Frobenius theorem, a similar result can be described in the case of multivariable positive difference equations. \bigskip \noindent [1] D. Henry, Linear autonomous neutral functional differential equations, J. Differential Equations, vol. 15, 106-128, 1974. \newline [2] C. E. Avellar and J. K. Hale, On the zeros of exponential polynomials, Journal of Mathematical Analysis and Applications, vol. 73, 434-452, 1980. \newline [3] V. Kolmanovski and V.R. Nosov, Stability of functional differential equations, Academic Press, London, 1986. \newline [4] J.K. Hale and S.M. Verduyn Lunel, Introduction to functional differential equations, Springer Verlag, New York, 1993. \newline [5] M. Gondran, M. Minoux and S. Vajda, Graphs and Algorithms, John Wiley and Sons, 1984. \newline [6] F. Baccelli, G. Cohen, G.J. Olsder and J.P. Quadrat, Synchronization and Linearity. An Algebra for Discrete Event Systems, Wiley, 1992.","Difference equation, Positive equation, Robust stability, Delay independent stability","39A11","15A48","In MS7 Max Algebra "Furtado","Susana","sbf@fep.up.pt","\section{Order Invariant Spectral Properties for Several Matrices} By {Susana Furtado and Charles Johnson}. \noindent The collections of $m$ $n$-by-$n$ matrices with entries in a field such that the products in any of the $m!$ orders share a common similarity class (resp. spectrum, trace) are studied. The spectral and trace order invariant properties are characterized, and the similarity invariant one is related to them in several cases. A complete explicit description is given in the case $m=3$ and $n=2$.","product of matrices, order invariant, similarity, spectrum, trace","15A23","15A18"," "Trigueros","María","trigue@itam.mx","\section{Spanning sets and vector spaces they generate: an APOS analysis} By {\sl Maria Trigueros (ITAM, Mexico), Asuman Oktac (Cinvestav-IPN, Mexico and PUCV, Chile), Darly Ku (Cinvestav-IPN, Mexico and PUCV, Chile)}. \noindent This work forms part of a larger research project that aims to identify student difficulties with Linear Algebra concepts. The theoretical framework that we have chosen for this particular study is APOS (Action-Process-Object-Schema) theory, whose efficiency in identifying students’ mental constructions is well documented in other areas of mathematics such as Calculus, Abstract Algebra and Discrete Mathematics. In our previous work (Kú et al., submitted), looking into the mental constructions related to the concept of basis, we came across various difficulties that students experienced with spanning sets and the vector spaces they generate. Our results revealed that most of the interviewed students had an action or process conception of this concept. When comparing the empirical data with the genetic decomposition originally proposed for this concept, where the concepts of linear independence and generating set had been considered, it appeared that most of the obstacles had to do with what seemed to be necessary conditions for constructing the notion of spanning set as a process. In this talk we present a study of the construction of the notion of spanning set and its relation to the vector space concept. A preliminary genetic decomposition for this concept was developed and instruments were designed according to this genetic theoretical analysis. We will present the analysis of the interviews that were conducted with students taking a Linear Algebra course.
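A side remark on the order-invariance studied in the Furtado--Johnson abstract above (an illustrative example, not taken from the submission): for $m=2$ matrices the spectrum, and hence the trace, is automatically order invariant, since $AB$ and $BA$ always share a characteristic polynomial, whereas similarity can fail; for instance $$ A=\begin{pmatrix}0&1\\0&0\end{pmatrix},\quad B=\begin{pmatrix}0&0\\0&1\end{pmatrix} \quad\Longrightarrow\quad AB=\begin{pmatrix}0&1\\0&0\end{pmatrix},\quad BA=0, $$ so $AB$ and $BA$ both have spectrum $\{0\}$ but are not similar.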
We will discuss and interpret the results in terms of APOS theory.","learning, spanning sets, vector space, basis","97","15","education minisymposium "Arico'","Antonio","arico@unica.it","\section{Signal \& Image regularization via antireflective transform} By {\sl Antonio Aric\`o}. \noindent The aim of this talk is to show an efficient approach for computing a regularized solution via filtering methods, applied to the spectral decomposition of antireflective matrices. Filtering methods are used in signal and image restoration to reconstruct an approximation of a signal or image from degraded measurements. They rely on computing a singular value decomposition or a spectral factorization of a large structured matrix. The structure of the matrix depends in part on the imposed boundary conditions. Antireflective boundary conditions preserve continuity of the image and of its derivative at the boundary, and have been shown to produce superior reconstructions compared to other commonly used boundary conditions, such as periodic, zero and reflective. The purpose of my talk is to analyze the eigenvector structure of matrices that enforce antireflective boundary conditions, and the related antireflective transform. An efficient approach to computing filtered solutions is proposed, and numerical tests are shown to illustrate the performance of the discussed methods.","boundary conditions, fast transforms, regularization, filtering methods","65F15","65F22","This talk is based on a joint work with M. Donatelli, J. Nagy and S. Serra-Capizzano. "Machado","Silvia","silviada@uol.com.br","\section{GPEA's research on meta resources in teaching and learning the notion of basis of a vector space} By {\sl S.D.A. Machado, B.L. Bianchini, M.C.S.A. Maranhão}. \noindent Since the late 90’s we have been researching the development of the notion of basis of a vector space in our first Linear Algebra course. This concept was chosen to be explored in our investigations because it has an essential role in this theme. Robert and Robinet (1993) call meta mathematics anything that is said or written when information is given about the functioning of mathematics and the use of its concepts, that is, when we talk ABOUT Mathematics, beyond the strictly mathematical. To avoid confusion about the meaning of the term meta mathematics, used in the literature with different meanings, we adopted the term meta resources to designate what the authors call meta mathematics. A meta resource can become a lever for the student when he is learning a mathematical notion. When a meta resource is capable of becoming a lever for the understanding of the desired mathematical concept, Robert and Robinet call it a meta lever. We should also highlight the importance given by Dorier (1997) to this resource when he suggests that one of the most important axes to be investigated in the learning and teaching of Linear Algebra concerns the use of the meta lever and the evaluation of its real effects on learning. We interpret the teacher’s speech, or the presentation of a theme in the textbook, as a meta lever in cases when it carries information able to make the student think about his own knowledge, his mistakes and his procedures, helping him to understand a new mathematical notion. We consider as a meta lever not only the teacher’s speech, but also any activity proposed and/or elaborated by him that favors the student’s comprehension of a notion or a topic.
Some papers written seeking to answer the question “What is the role of the meta resources in the learning of the notion of basis in Linear Algebra?” are described next. Considering the statement made by Chevallard (1991) about the lack of the teacher’s influence on the didactic transposition, Behaj and Arsac (1998) wrote a paper in which they discussed the extent of the influence that different Algebra teachers have on the didactic transposition in their courses. The conclusion of this paper contested Chevallard’s statement by showing that each teacher has his own point of view on the best way to write a learning text, which brings differences even between two courses that follow the same (teaching) plan (BEHAJ, A.; ARSAC, G., p. 362). This investigation and the analysis made by the authors revealed that each teacher’s autonomy (to prepare the classes and to develop them) changes according to the degree of dependence on the textbook and to his research activities. Knowing that not every university teacher researches Algebra-related subjects and that many of them only use textbooks, Araújo (2002) analyzed the development of the basis notion in three of the most used textbooks in traditional universities. The author came to the conclusion that there are few meta resources able to become meta levers for the student in those books. BEHAJ and ARSAC’s considerations about the teacher’s interference in the didactic transposition led Padredi (2003) to investigate which meta resources about basis emerge from the speech of six interviewed Algebra teachers. Padredi used three principles that Harel (2000) considers necessary to learn and teach Linear Algebra to elaborate the script and to analyze the interviews. Those principles are concreteness, necessity and generalizability. The author discovered that the teachers showed many meta resources able to become meta levers in the learning of the basis notion. Barbosa de Oliveira (2005), in view of the statement above, observed the classes of a Linear Algebra teacher, highlighting the meta resources used in the development of the basis notion and checking, by means of interviews, for which students of the class they became meta levers. In this way, the researches already finished and the ones still in progress point to some results that evidence the role of the meta resources in learning the basis notion in Linear Algebra. REFERENCES ARAUJO, C. V. B. A meta matemática no livro didático de Álgebra Linear. Dissertação de Mestrado (Programa de Educação Matemática), Pontifícia Universidade Católica de São Paulo, 2002. BARBOSA de OLIVEIRA, L.C. Como funcionam os recursos meta em aula de Álgebra Linear? Dissertação de Mestrado (Programa de Educação Matemática), Pontifícia Universidade Católica de São Paulo, 2005. BEHAJ, A.; ARSAC, G. La conception d’un cours d’Algèbre Linèaire. Recherches en Didactique des Mathématiques, v. 18, nº 3, pp. 333-370, 1998. CHEVALLARD, Y. La transposition didactique, du savoir savant au savoir enseigné. Reed. 1991. La Pensée Sauvage, Grenoble, 1991. DORIER, J. L. L’Enseignement de L’Algèbre Linéaire en Question. La Pensée Sauvage, Grenoble, 1997. HAREL, G. Three Principles of Learning and Teaching Mathematics, chapter 5 in On the Teaching of Linear Algebra, Ed. DORIER, Kluwer, 2000. PADREDI, Z.L.N. As alavancas meta no discurso do professor de Álgebra Linear. Dissertação de Mestrado (Programa de Educação Matemática), Pontifícia Universidade Católica de São Paulo, 2002. ROBERT, A.; ROBINET, J. Prise en compte du meta en didactique des Mathématiques.
In Cahier DIDIREM, V. 21, Ed. IREM, Paris, 1993.","meta-lever, meta resources, basis","","97","Mathematics Education "Arav","Marina","matmxa@langate.gsu.edu","\section{Sign Patterns That Require Almost Unique Rank} By {\sl Marina Arav, Frank Hall, Zhongshan Li, Assefa Merid, Yubin Gao}. \noindent A {\it sign pattern matrix} is a matrix whose entries are from the set $\{+,-, 0\}$. For a real matrix $B$, sgn$(B)$ is the sign pattern matrix obtained by replacing each positive (respectively, negative, zero) entry of $B$ by $+$ (respectively, $-$, 0). For a sign pattern matrix $A$, the {\it sign pattern class of $A$}, denoted $Q(A)$, is defined as $\{ \, B\, : \, \mbox{sgn}(B)=A\ \}.$ The {\it minimum rank} mr$(A)$ ({\it maximum rank} MR$(A)$) of a sign pattern matrix $A$ is the minimum (maximum) of the ranks of the real matrices in $Q(A)$. Several results concerning sign patterns $A$ that {\it require almost unique rank}, that is to say, sign patterns $A$ such that MR$(A)=$ mr$(A)+1$, are established. In particular, a complete characterization of these sign patterns is obtained. Further, the results on sign patterns that require almost unique rank are extended to sign patterns $A$ for which the {\it spread} is $d = \mbox{MR}(A)-\mbox{mr}(A)$.","Sign pattern matrix; Minimum rank; Maximum rank; Term rank; L-matrix; Requires unique rank; Requires almost unique rank; Spread","15A03","15A21"," "Uchiyama","Mitsuru","uchiyama@riko.shimane-u.ac.jp","\section{A New Majorization between functions} By {\sl Mitsuru Uchiyama}. \noindent Let $\{a_i\}_{i=1}^n$ and $\{b_i\}_{i=1}^n$ be finite sets of real numbers, rearranged in decreasing order. Then $\{a_i\}_{i=1}^n$ is said to be submajorized by $\{b_i\}_{i=1}^n$ if $\sum_{i=1}^k a_i \leqq \sum_{i=1}^k b_i$ for $1\leqq k \leqq n$. This classical concept of (sub)majorization is very useful in the study of polynomials and matrices. \\ {\bf Definition.} For a real increasing function $k$ on an interval $J$ and a nondecreasing function $h$ on $I$, we call $k$ a {\it majorization} of $h$, and write $h \preceq k$, if \\ $k(A) \leqq k(B) \Longrightarrow h(A) \leqq h(B)$. \\ A function $f(t)$ defined on an interval $I$ is called an {\it operator monotone function} on $I$ provided $A \leqq B$ implies $f(A) \leqq f(B)$ for every pair $A$ and $B$ with spectra in $I$. ${\Bbb P}(I)$ denotes the set of all operator monotone functions on $I$, and ${\Bbb P_+}(I)$ denotes $\{f\in {\Bbb P}(I): f \geqq 0\}$. \\ ${\Bbb {L P}}_{+}(I):=\{h: h(t)>0$ and $\log h \in {\Bbb P} (I^{\circ}) \}$.\\ ${\Bbb P}_{+}^{-1}[a,b):=\{ h | h $ is increasing on $ [a,b)$ and $h^{-1} \in {\Bbb P}[0, \infty)\}$.\\ ${\Bbb P}_{+}^{-1}(a,b)$ is likewise defined.\\ {\bf Theorem 1.} For non-increasing sequences $\{a_i\}_{i=1}^n $ and $\{b_i\}_{i=1}^m $, put \\ $u(t):=\prod^n_{i=1}(t-a_i) \quad (t\geqq a_1),\quad v(t):=\prod^m_{i=1}(t-b_i) \quad (t\geqq b_1).$\\ Then $u(t)\in {\Bbb P}_{+}^{-1}[a_1,\infty)$, and $$m\leqq n, \quad \sum^k_{i=1}b_i\leqq \sum^k_{i=1}a_i \;(1\leqq k\leqq m)\Longrightarrow v \preceq u \quad ([a_1,\infty)).$$ {\bf Product Lemma.} Let $I$ be a right open interval with end points $a,b$ and let $h(t), g(t)$ be non-negative functions defined on $I$ such that the product $h g$ is an increasing function with $h g(a+0)=0$, $h g(b-0)=\infty$.
Then for $\psi_1, \, \psi_2$ in ${\Bbb P}_{+}[0,\infty)$ $$ g\preceq h g \Longrightarrow h\preceq h g, \quad \psi_1(h) \psi_2(g) \preceq h g.$$\\ {\bf Product Theorem.} For every right open interval $I$, $${\Bbb P}_{+}^{-1}(I) \cdot {\Bbb P}_{+}^{-1}(I)\subset {\Bbb P}_{+}^{-1}(I), \quad {\Bbb {L P}}_{+}(I) \cdot {\Bbb P}_{+}^{-1}(I)\subset {\Bbb P}_{+}^{-1}(I).$$ Further, let $g_i(t) \in {\Bbb {L P}}_{+}(I)$ for $1\leqq i \leqq m$ and $h_j(t) \in {\Bbb P}_{+}^{-1}(I)$ for $1\leqq j \leqq n$. Then for $\psi_i, \phi_j \in {\Bbb P}_{+}[0,\infty)$ $$ \prod^m_{i=1}\psi_i (g_i) \prod^n_{j=1}\phi_j (h_j) \preceq \prod^m_{i=1}g_i \prod^n_{j=1}h_j.$$\\ {\bf Proposition.} \; For $0<\beta \leqq \alpha $, $$t^\alpha \preceq t^\alpha e^{{-t}^{-\beta}} .$$ Moreover, if $1\leqq \alpha$, then $$t^\alpha e^{{-t}^{-\beta}}\in {\Bbb P}^{-1}_+[0,\infty).$$\\ {\bf Theorem 2.} Let $I$ be a right open interval, $h(t)\in{\Bbb P}^{-1}_+(I)$, $g(t)\in {\Bbb {L P}}_+(I)$, and let $\tilde{h}(t)\geqq 0$ be a non-decreasing function on $I$. Then the function $\varphi$ on $(0,\infty)$ defined by $$\varphi(g(t)h(t))=g(t)\tilde{h}(t) \quad (t\in I)$$ belongs to ${\Bbb P}_+[0,\infty)$, and for $A,B $ with $\sigma(A), \sigma(B) \subset I$ $$ A\leqq B \Rightarrow \left \{ \begin{array}{ll} \varphi(g(A)^{\frac{1}{2}}h(B)g(A)^{\frac{1}{2}})\geqq g(A)^{\frac{1}{2}}\tilde h(B)g(A)^{\frac{1}{2}}, \\ \varphi(g(B)^{\frac{1}{2}}h(A)g(B)^{\frac{1}{2}})\leqq g(B)^{\frac{1}{2}}\tilde h(A)g(B)^{\frac{1}{2}}. \end{array} \right.$$ Furthermore, if $\tilde{h}\in {\Bbb P}_+(I)$, then $$A\leqq B \Rightarrow \left \{ \begin{array}{ll} \varphi(g(A)^{\frac{1}{2}}h(B)g(A)^{\frac{1}{2}})\geqq \varphi(g(A)^{\frac{1}{2}}h(A)g(A)^{\frac{1}{2}})=g(A)\tilde{h}(A), \\ \varphi(g(B)^{\frac{1}{2}}h(A)g(B)^{\frac{1}{2}})\leqq \varphi(g(B)^{\frac{1}{2}}h(B)g(B)^{\frac{1}{2}})=g(B)\tilde{h}(B). \end{array} \right.$$ \\ {\bf Corollary 1.} (Furuta) For $p\geqq 1, r>0$ \begin{eqnarray*} 0\leqq A \leqq B \Rightarrow \left \{ \begin{array}{ll} (A^\frac{r}{2} B^p A^\frac{r}{2})^{\frac{1+r}{p+r}} \geqq (A^\frac{r}{2} A^p A^\frac{r}{2})^{\frac{1+r}{p+r}}, \\ (B^\frac{r}{2} A^p B^\frac{r}{2})^{\frac{1+r}{p+r}} \leqq (B^\frac{r}{2} B^p B^\frac{r}{2})^{\frac{1+r}{p+r}}. \end{array} \right. \end{eqnarray*} {\bf Corollary 2.} (Ando, F-F-K, U) Suppose $p\geqq 1,r>0$ and $0<\alpha \leqq \frac{r}{p+r}$. Then \begin{eqnarray*} A \leqq B \Rightarrow \left \{ \begin{array}{ll} (e^{\frac{r}{2}A} e^{pB} e^{\frac{r}{2}A})^{\frac{r}{p+r}} \geqq (e^{\frac{r}{2}A} e^{pA} e^{\frac{r}{2}A})^{\frac{r}{p+r}}, \\ (e^{\frac{r}{2}B} e^{pA} e^{\frac{r}{2}B})^{\frac{r}{p+r}} \leqq (e^{\frac{r}{2}B} e^{pB} e^{\frac{r}{2}B})^{\frac{r}{p+r}}. \end{array} \right. \end{eqnarray*}\\ References:\\ M. Uchiyama, A new majorization between functions, polynomials, and operator inequalities, J. Funct. Anal. (2006) 221--244. \\ M. Uchiyama, A new majorization between functions, polynomials, and operator inequalities II, J. Math. Soc. Japan (2008) 291--310.","Majorization, Operator monotone function, Operator inequality","15A39","47A63"," "Goldberger","Assaf","assafg@post.tau.ac.il","\section{An upper bound on the characteristic polynomial of a nonnegative matrix leading to a proof of the Boyle--Handelman conjecture} By {\sl Assaf Goldberger and Michael Neumann}. \noindent We prove a conjecture of Boyle and Handelman: if $A\in \R^{n,n}$ is a nonnegative matrix of rank $r$ and spectral radius $1$, and if $\chi_A(t)$ is its characteristic polynomial, then $\chi_A(x)\le x^n-x^{n-r}$ for all $x\ge 1$.
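A quick sanity check of this bound (an illustrative example, not taken from the submission): if $A$ is nonnegative of rank $r=1$ with spectral radius $1$, its eigenvalues are $\operatorname{tr}A=1$ and $0$ ($n-1$ times), so $$ \chi_A(x)=x^{n-1}(x-1)=x^{n}-x^{n-1}, $$ and the inequality $\chi_A(x)\le x^{n}-x^{n-r}$ holds with equality for all $x\ge 1$.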
Our proof is based on the Newton identities.","Nonnegative Matrices, Eigenvalues","15A48","15A18"," "Lee","Hosoo","thislake@naver.com","\section{Contractions and nonlinear matrix equations on positive definite cones} By {Hosoo Lee and Yongdo Lim}. \noindent In this talk we consider the semigroup generated by the self-maps, on the open convex cone of positive definite matrices, of translation, congruence transformation and matrix inversion, a semigroup that includes the symplectic Hamiltonians, and we show that every member of the semigroup contracts any invariant metric distance inherited from a symmetric gauge function. This extends results of Bougerol for the Riemannian metric and of Liverani-Wojtkowski for the Thompson part metric. A uniform upper bound on the Lipschitz contraction constant of a member of the semigroup is given in terms of the minimum eigenvalues of its determining matrices. We apply this result to a variety of nonlinear equations, including Stein and Riccati equations, for uniqueness and existence of positive definite solutions, and we obtain a new convergence analysis of iterative algorithms for the positive definite solution depending only on the least contraction coefficient for the invariant metric from the spectral norm.","Positive definite matrix, Lipschitz contraction constant, nonlinear matrix equations","15A24","15A48"," "Ahn","Eunkyung","ekahn@knu.ac.kr","\section{An extended Lie-Trotter formula and its applications} By {Eunkyung Ahn, Sejong Kim and Yongdo Lim}. \noindent In this talk we present a class of Lie–Trotter formulae for Hermitian operators including the formulae derived by Hiai–Petz and Furuta. A Lie–Trotter formula for weighted Log-Euclidean geometric means of several positive definite operators is given in terms of Sagae–Tanabe geometric and spectral geometric means.","Lie–Trotter formula, Positive definite operator, Geometric mean, Log-Euclidean mean, Spectral geometric mean, Sagae–Tanabe mean","15A04","15A03"," "Catral","Minerva","mcatral@uvic.ca","\section{The Kemeny Constant in Finite Homogeneous Ergodic Markov Chains} By {\sl Minerva Catral}. \noindent For a finite homogeneous ergodic Markov chain, the Kemeny constant is an interesting quantity which is defined in terms of the mean first passage times and the stationary distribution vector.
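For reference (a gloss on the definition, under the common convention for the diagonal terms; conventions vary): with $m_{ij}$ the mean first passage time from state $i$ to state $j$ ($m_{ii}:=0$) and $\pi$ the stationary distribution, the Kemeny constant is $$ K=\sum_{j}m_{ij}\,\pi_j, $$ and, remarkably, $K$ does not depend on the starting state $i$.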
A formula in terms of group inverses and inverses of associated M-matrices is presented, and perturbation results are derived.","Kemeny constant, Finite Markov Chains, Group Inverses","15",""," "Ponce","Daniela","daniela.ponce@uhk.cz","\section{{\it NP-}hard problems in extremal algebras tackled by particle swarm optimization} By {\sl Daniela Ponce and Martin Gavalec}.\\[3pt] % \noindent The aim of the contribution is to present an application of a non-standard method called particle swarm optimization (PSO) in the area of extremal algebras. Many of the problems studied in max-plus or max-min algebra cannot be solved in polynomial time and have been shown to be {\it NP-}hard. From the practical point of view, finding an approximate or suboptimal solution can be a considerable achievement in comparison with the situation when no solution is available. New ways of computation are being developed for attacking these directly intractable problems. The permuted eigenvector problem (PEV) has recently been investigated in max-plus algebra: Given a square matrix $A$ and a vector $x$ of the same dimension, is there a permutation $\pi$ such that the permuted vector $x_\pi$ is an eigenvector of $A$? It has been proved that PEV and several other related problems are {\it NP-}complete, see [2]. On the other hand, analogous problems are polynomially solvable in max-min algebra, see [4], [5]. In the contribution, PEV in both versions, max-plus and max-min, has been solved by the particle swarm optimization method; the results have been analysed and convergence conditions described. PEV can be approached as an optimization problem. When a square matrix $A$ and a vector $x$ of dimension $n$ are given, a vector variable $y$ is considered, with the constraint that $y$ is a permutation of $x$. The objective function $z= \|A\otimes y - y\|$ is to be minimized. The answer in the given instance of PEV is `yes' exactly when the minimal value of $z$ is zero. The operation $\otimes$ in the definition of the objective function $z$ denotes the matrix multiplication in the corresponding extremal algebra (max-plus, or max-min). Particle swarm optimization (PSO) is a global stochastic optimization technique developed by Kennedy and Eberhart [6]. PSO is a population-based optimization algorithm imitating social behavior. The optimization algorithm starts with the creation of a population (swarm) of randomly constructed candidate solutions (particles), resulting in an initial location of the particles in the solution space. The position of the swarm in the solution space is then repeatedly adjusted based on consideration of the previous best positions of each individual particle in the solution space, as well as the best positions attained by neighbouring particles (various neighbourhood topologies can be defined). The basic variant of the PSO algorithm was proved not to be a local optimizer. However, variants of the PSO algorithm exist which have been proved to be global optimization algorithms [1]. Examples of successful applications of PSO are related to design problems [3], scheduling and planning problems [9] and applied mathematics problems [7], [8], [10]. In tackling PEV as an optimization problem we deal with a discrete variant of PSO. Each particle $y$ is a random permutation of $x$, and the swarm is a set of permutations. The solution space is composed of all permutations of $x$. The objective function of a particle is $z$ as defined above, i.e. the norm of the difference $A\otimes y - y$.
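A minimal executable sketch of this objective (eigenvalue $0$ is assumed, matching the formulation $A\otimes y - y$ above; the helper names are placeholders, and the brute-force check is only a reference answer for small instances against which a PSO heuristic can be validated):
\begin{verbatim}
from itertools import permutations

def maxplus_prod(A, y):
    # (A (x) y)_i = max_j (a_ij + y_j) in max-plus arithmetic
    return [max(aij + yj for aij, yj in zip(row, y)) for row in A]

def z(A, y):
    # supremum-norm residual ||A (x) y - y||
    return max(abs(u - v) for u, v in zip(maxplus_prod(A, y), y))

def pev_brute_force(A, x):
    # 'yes' iff some permutation of x is a max-plus eigenvector of A
    return any(z(A, list(y)) == 0 for y in permutations(x))

A = [[0, -1], [-1, 0]]
print(pev_brute_force(A, [0, 0]))  # True: (0,0) is fixed by A
\end{verbatim}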
The computational ability of PSO to find a solution of PEV has been experimentally tested.\\[3pt] % References [1]~F.~van den Bergh, An Analysis of Particle Swarm Optimizers, PhD thesis, Department of Computer Science, University of Pretoria, Pretoria, South Africa (2002). [2]~P.~Butkovi\v{c}, Permuted max-algebraic (tropical) eigenvector problem is NP-complete, Linear Algebra and its Applications 428 (2008), 1874-1882. [3]~C.A.~Coello~Coello, E.H.N.~Luna, A.H.N.~Aguirre, Use of Particle Swarm Optimization to Design Combinational Logic Circuits, Lecture Notes in Computer Science, Springer-Verlag, 2606 (2003), 398-409. [4]~M.~Gavalec, J.~Plavka, Simple image set of linear mappings in a max-min algebra, Discrete Applied Mathematics 155 (2007), 611-622. [5]~M.~Gavalec, J.~Plavka, Permuted max-min eigenvector problem (to appear in Proc. of the ILAS Conference 2008, Cancún). [6]~J.~Kennedy, R.C.~Eberhart, Particle Swarm Optimization, Proc. of the IEEE International Conference on Neural Networks, Piscataway, NJ, USA (1995), 1942-1948. [7]~E.C.~Laskari, K.E.~Parsopoulos, M.N.~Vrahatis, Particle Swarm Optimization for Minimax Problems, Proc. of the IEEE Congress on Evolutionary Computation, 2 (May 2002), 1576-1581. [8]~E.C.~Laskari, K.E.~Parsopoulos, M.N.~Vrahatis, Particle Swarm Optimization for Integer Programming, Proc. of the IEEE Congress on Evolutionary Computation, 2 (May 2002), 1582-1587. [9]~A.~Salman, I.~Ahmad, S.~Al-Madani, Particle Swarm Optimization for Task Assignment Problem, Microprocessors and Microsystems, 26(8) (2002), 363-371. [10]~Y.~Shi, R.A.~Krohling, Co-evolutionary Particle Swarm Optimization to Solve min-max Problems, Proc. of the IEEE Congress on Evolutionary Computation, 2 (May 2002), 1682-1687.","max-min algebra, max-plus algebra, eigenvector, permutation, NP-hard problem, particle swarm optimization","68Q25","92B20","I would like to modify the originally submitted abstract title (NP-hard problems in extremal algebras: an application of multi-agent approach; planned in the session MS7 Max Algebra) to the current one ""NP-hard problems in extremal algebras tackled by particle swarm optimization"", as used in the abstract submitted. "Gavalec","Martin","martin.gavalec@uhk.cz","\section{Permuted max-min eigenvector problem} By {\sl Martin Gavalec and J\'{a}n Plavka}.\\[3pt] % \noindent Eigenvectors in extremal algebras are motivated by steady states of discrete event systems whose behaviour is described by a square matrix corresponding to the transition from one state of the system to the next. In the situation when a given state vector is not an eigenvector of the transition matrix, the system is not stable, and we may ask whether it is possible to renumber the inputs so that the system with permuted states becomes stable. The following permuted eigenvector problem (PEV) is discussed in this contribution: Given a square matrix $A$ and a vector $x$ of the same dimension in max-min algebra, decide whether there is a permutation $\pi$ on the indices such that the permuted vector $x_\pi$ is an eigenvector of the matrix $A$, i.e.\ $A\otimes x_\pi = x_\pi$. An analogous problem has recently been studied by P.~Butkovi\v{c} in [1] for matrices and vectors in max-plus algebra. It has been shown that the max-plus version of PEV is {\it NP-}complete and so is IPEV, the restriction of PEV to integer values.
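For readers moving between the two settings (a gloss on the standard definitions): the matrix--vector action differs only in the underlying semiring, $$ (A\otimes x)_i=\max_{j}\,(a_{ij}+x_j) \quad\text{(max-plus)}, \qquad (A\otimes x)_i=\max_{j}\,\min(a_{ij},x_j) \quad\text{(max-min)}, $$ so the same permuted-eigenvector question can be posed verbatim in both algebras.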
Relations of PEV to further notions in max-min algebra, such as strongly regular matrices, simple image vectors (vectors with a unique pre-image) and generally trapezoidal matrices (see [2, 4]), will be described in the presentation. It will be shown that PSIV, the restriction of PEV to simple image vectors (and consequently, to strongly regular matrices), can be solved in polynomial time using the generally trapezoidal algorithm GenTrap described in [3].\\[3pt] % References [1]~P.~Butkovi\v{c}, Permuted max-algebraic (tropical) eigenvector problem is {\it NP-}complete, Linear Algebra and its Applications 428 (2008), 1874-1882. [2]~M.~Gavalec, J.~Plavka, Strong regularity of matrices in general max-min algebra, Linear Algebra and its Applications 371 (2003), 241-254. [3]~M.~Gavalec, General trapezoidal algorithm for strongly regular max-min matrices, Linear Algebra and its Applications 369 (2003), 319-338. [4]~M.~Gavalec, J.~Plavka, Simple image set of linear mappings in a max-min algebra, Discrete Applied Mathematics 155 (2007), 611-622.","eigenvector, permutation, max-min algebra, computational complexity","68Q25","65F15"," "Peña","Marta","marta.penya@upc.edu","\section{Perturbations preserving conditioned invariant subspaces} By {\sl A.~Compta; J.~Ferrer; M.~Pe{\~n}a}. \noindent Invariant subspaces play a key role both for square matrices and for linear systems, where they are often called ""conditioned"" invariant subspaces. In the context of versal deformations, invariant subspaces arise in a natural way. For instance, in the Carlson problem (that is, the possible Segre characteristics of a block-triangular nilpotent matrix when the diagonal blocks are prescribed), one asks for perturbations of the given matrix preserving a prefixed invariant subspace. The ""interesting class"" of the so-called marked subspaces, namely the invariant subspaces having a Jordan basis which can be extended to a Jordan basis of the whole space, is also considered in this work. For instance, it is known that the ""simplest"" solutions of the Carlson problem are marked, and any other appears in a neighborhood of the marked ones. This notion can be extended to pairs of matrices and used for the analogue of the Carlson problem: again the marked solutions cover all the possibilities and are the simplest realizations. Here we tackle the perturbation of a linear system preserving a given conditioned invariant subspace. We focus our attention on the marked case which, as above, has interesting properties; for instance, the ""minimal"" observable perturbations of a non-observable pair are marked. We obtain the equations of a miniversal deformation of a pair of matrices preserving a given conditioned invariant subspace and solve them explicitly, obtaining ""minimal"" solutions (that is, without repeated parameters). Some applications are derived: computation of the dimension of the orbits, characterization of structurally stable objects, study of bifurcation diagrams...","vertical pairs of matrices; conditioned invariant subspaces; marked pairs; stratified manifold; miniversal deformation; dimension of the orbits; bifurcation diagrams","93B07","93B27"," "Ferrer","Josep","josep.ferrer@upc.edu","\section{Geometric structure of the equivalence classes of a controllable pair} By {\sl A.~Compta; J.~Ferrer; M.~Pe{\~n}a}. \noindent The geometric structure of the orbits generated by the action of a group on a differentiable manifold is well known, under quite general conditions.
It seems natural to ask for the geometric relationships when different subgroups are considered, that is to say, for the geometric structure of the different suborbits forming a lattice, and especially for their intersections (which in general need not be an orbit, or even a differentiable manifold). Here, we present a full unified panorama in the case of pairs of matrices representing linear systems, where different equivalence relations can be considered: changes of basis in the state space and in the input space, and feedbacks. The starting tools in this analysis are Arnold's techniques of versal deformations. More specifically, we use two versal deformations of a pair of matrices: with regard to block similarity, and when only changes in the state space are allowed. Some interesting comments and remarks are derived concerning the role of different kinds of feedback, the boundary of the suborbits, the effects of perturbing a pair...","linear systems; controllable pairs; orbits by feedback; orbits by variables change; system perturbations","37A20","93C05"," "Karow","Michael","karow@math.tu-berlin.de","\section{Pseudospectra and Stability Radii for Hamiltonian Matrices} By {\sl Michael Karow}. \noindent We consider the variation of the spectrum of Hamiltonian matrices under Hamiltonian perturbations. The first part of the talk deals with the associated structured pseudospectra. We show how to compute these sets and give some examples. In the second part we discuss the robustness of linear stability. In particular we determine the smallest norm of a perturbation that makes the perturbed Hamiltonian matrix unstable.","Pseudospectra, Stability Radii","15A15","93D09"," "LAFFEY","THOMAS","Thomas.Laffey@ucd.ie","\section{Some constructive techniques in the nonnegative inverse eigenvalue problem} By {\sl Thomas Laffey}. \noindent Let $\sigma:=(\lambda_{1},\dots,\lambda_{n})$ be a list of complex numbers and let \[ s_{k}:=\lambda_{1}^{k}+\dots+\lambda_{n}^{k},\quad k=1,2,3,\dots \] be the associated Newton power sums. A famous result of Boyle and Handelman states that if all the $s_{k}$ are positive, then there exists a nonnegative integer $N$ such that \[ \sigma_{N}:=(\lambda_{1},\dots,\lambda_{n},0,\dots,0) \quad (N \text{ zeros}) \] is the spectrum of a nonnegative $(n+N)\times(n+N)$ matrix $A$. The problem of obtaining a constructive proof of this result with an effective bound on the minimum number $N$ of zeros required has not yet been solved. We present a number of techniques for constructing nonnegative matrices with given nonzero spectrum $\sigma$, and use them to obtain new upper bounds on the minimal size of such an $A$, for various classes of $\sigma$. This is joint work with Helena Smigoc.","Nonnegative Matrices, Nonzero Spectrum","15","","This is a contribution to the minisymposium on nonnegative matrices. "Patricio","Pedro","pedro@math.uminho.pt","\section{Some additive results on Drazin Inverses} By {\sl R.E. Hartwig and Pedro Patr\'{\i}cio}. \noindent Our aim is to investigate the existence of the Drazin inverse $(p + q)^d$ of the sum $p + q$, where $p$ and $q$ are either ring elements or matrices, and $a^d$ denotes the Drazin inverse of $a$. We recall that the Drazin inverse $a^d$ of $a$ is the unique solution $x$, if it exists, of $a^k x a=a^k$, $xax=x$, $ax=xa$, for some integer $k\ge 0$.
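Two degenerate cases (an illustrative aside on the definition just recalled) anchor the notion: if $a$ is invertible then $k=0$ works and $a^d=a^{-1}$, while if $a$ is nilpotent with $a^k=0$ then $x=0$ satisfies all three equations, so $a^d=0$.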
In this talk, we will give sufficient conditions for $p + q$ to be Drazin invertible, generalizing recent results, and give converse results assuming the ring is Dedekind-finite.","Drazin inverse, block matrices","15A09",""," "Gaubert","Stephane","Stephane.Gaubert@inria.fr","\section{Using max-plus eigenvalues to bound the roots of a polynomial} By {\sl Marianne Akian, Adrien Brandejsky and St\'ephane Gaubert}. \noindent A classical problem consists in bounding the moduli of the zeros of a polynomial in terms of the moduli of its coefficients, or, more generally, in bounding the moduli of the eigenvalues of a matrix in terms of the moduli of its entries. We approach this problem using ideas of max-plus or tropical algebra. If $p=\sum_{0\leq k\leq n} a_kx^k$ is a polynomial with complex coefficients, we define the tropical roots of $p$ to be the points $x\geq 0$ at which the maximum $\max_{0\leq k\leq n}|a_k|x^k$ is attained at least twice. This definition is natural if one considers the multiplicative version of the max-plus semiring. The tropical roots can be computed by a variant of the Newton polygon construction, in which the usual valuation of a Puiseux series is replaced by the valuation which takes the opposite of the logarithm of the modulus of a complex number. Tropical roots appeared before the tropical era in works of Ostrowski and P\'olya on Graeffe's method, and they were already implicit in a work of Hadamard. We establish log-majorisation inequalities relating the moduli of the roots of a polynomial $p$ and certain tropical roots, up to multiplicative constants depending only on the degree. Our approach relies on matrix arguments, exploiting properties of the tropical analogues of the compound matrix and of the eigenvalues. We show in particular that the maximal circuit mean of the $k$-th tropical compound of the companion matrix of $p$ is bounded above by the product of the $k$ largest tropical roots of $p$. We also show that the sequence of the moduli of the eigenvalues of a complex matrix is weakly log-majorised by the sequence of its tropical eigenvalues, up to a multiplicative constant depending only on the dimension. We recover along these lines some previous inequalities due to Hadamard, Fujiwara, Specht and Ostrowski, and we also obtain new inequalities.","Max-plus or tropical algebra, log-majorisation, location of the roots of a polynomial, compound matrix, amoeba","30C15","15A42"," "Dopazo","Esther","edopazo@fi.upm.es","\section{Further results on the representation of the Drazin inverse of a $2\times 2$ block matrix} By {\sl E. Dopazo, M.F. Mart\'{i}nez-Serrano and N. Castro-Gonz\'{a}lez}. \noindent Let $A$ be an $n\times n$ complex matrix. The Drazin inverse of ${A}$ is the unique matrix ${A^D}$ satisfying the relations: \[ A^DAA^D=A^D, \quad A^DA=AA^D, \quad A^{k+1}A^D=A^k, \] where ${k=Ind(A)}$, the index of ${A}$, is the smallest nonnegative integer such that \newline ${rank(A^k)=rank(A^{k+1})}$. The concept of the Drazin inverse plays an important role in various fields such as Markov chains, singular differential and difference equations, and iterative methods. A challenge of great interest in this area is to establish an explicit representation for the Drazin inverse of a $2\times 2$ block matrix $M=\begin{pmatrix} A & B \\ C & D\end{pmatrix}$, where $A$ and $D$ are square matrices, in terms of $A^D$ and $D^D$ with arbitrary blocks $A$, $B$, $C$ and $D$.
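A small worked instance of the tropical roots defined in the Akian--Brandejsky--Gaubert abstract above (an illustrative example, not taken from the submission): for $p(x)=x^2+4x+1$ the maximum in $\max(1,\,4x,\,x^2)$ changes hands at $x=1/4$ and $x=4$, the two tropical roots, while the ordinary roots $-2\pm\sqrt{3}$ have moduli $\approx 0.27$ and $\approx 3.73$, in line with the degree-dependent multiplicative bounds discussed there.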
It was posed as an open problem by Campbell and Meyer in 1979, in connection with the problem of finding general expressions for the solutions of the second-order system of differential equations \[ Ex''(t)+Fx'(t)+Gx(t)=0, \] where the matrix $E$ is singular. Starting from the general formula given by C. D. Meyer and N. J. Rose [6] for the Drazin inverse of triangular block matrices ($B=0$ or $C=0$), intensive research has been carried out on this topic. Recently, some partial results have been obtained under specific conditions [1-5,7]. In this paper, we provide an explicit formula for $2\times 2$ block matrices assuming the geometrical condition \[ \mathcal{R} (B) \subset \mathcal{N} (C) \cap \mathcal{N} (D), \] where $\mathcal{R} (\cdotp)$ and $\mathcal{N} (\cdotp)$ denote the range and the null space of the corresponding matrix, respectively. It generalizes results given by R. E. Hartwig, X. Li and Y. Wei [4] and by D. S. Djordjevi\'c and P. S. Stanimirovi\'c [3]. From our main result, some special cases and perturbation results are derived.\newline This research has been partly supported by project MTM2007-67232, ""Ministerio de Educaci\'{o}n y Ciencia"" of Spain.\newline \vspace{\baselineskip} \begin{thebibliography}{10} \bibitem{}{D. Cvetkovic-Ilic}, {\em A note on the representation for the Drazin inverse of $2\times 2$ block matrices\/}, Linear Algebra and its Applications (2008), doi:10.1016/j.laa.2008.02.019. \bibitem{}{N. Castro-Gonz\'{a}lez, E. Dopazo, J. Robles}, {\em Formulas for the Drazin inverse of special block matrices\/}, Appl. Math. Comput., 174 (2006), 252--270. \bibitem{}{D. S. Djordjevi\'c, P. S. Stanimirovi\'c}, {\em On the generalized Drazin inverse and generalized resolvent\/}, Czechoslovak Math. J., 51\,(126) (2001), 617--634. \bibitem{}{R. E. Hartwig, X. Li, Y. Wei}, {\em Representations for the Drazin inverse of a $2\times 2$ block matrix\/}, SIAM J. Matrix Anal. Appl., 27 (2006), 757--771. \bibitem{}{X. Li, Y. Wei}, {\em A note on the representations for the Drazin inverse of $2\times 2$ block matrices\/}, Linear Algebra Appl., 423 (2007), 332--338. \bibitem{}{C.D. Meyer, Jr., N. J. Rose}, {\em The index and the Drazin inverse of block triangular matrices\/}, SIAM J. Appl. Math., 33 (1977), 1--7. \bibitem{}{Y. Wei}, {\em Expression for the Drazin inverse of a $2\times 2$ block matrix\/}, Linear and Multilinear Algebra, 45 (1998), 131--146. \end{thebibliography} \vspace{\baselineskip}","Drazin inverse, block matrices","15A09",""," "SALAM","Ahmed","Ahmed.Salam@lmpa.univ-littoral.fr","\section{A structure-preserving Arnoldi-like method for a class of structured matrices} By {\sl A. Salam}. \noindent The aim of this talk is to introduce an Arnoldi-like method that preserves the structure of a large set of structured matrices. Interesting particular elements of this set are Hamiltonian, skew-Hamiltonian and symplectic matrices. The obtained structure-preserving size reduction is crucial for the computation of several eigenvalues of such large and sparse structured matrices.","Skew-symmetric inner product, symplectic Gram-Schmidt, symplectic Householder transformations, $SR$ factorization, Krylov subspace-like methods","65F15","65F50"," "Poole","George","pooleg@etsu.edu","\section{Linear Algebra Education (Whatever Happened to Rook's Pivoting?)} By {\sl George D.
Poole, East Tennessee State University}. \noindent In 1991, Poole and Neal (LAA 149:249-272) presented a geometric analysis of both phases of Gaussian Elimination (GE) in order to better understand how partial pivoting, total pivoting, scaling, and condition number influence the computed solution of a system of linear equations in a finite-precision (F-P) environment. What emerged from this geometric analysis was a new pivoting strategy, Rook's Pivoting, that addressed all of the issues normally associated with GE in an F-P environment: pivoting, scaling, and condition number. The work was presented through a series of papers. Here we review the implications of these papers for both LA education and LA application. The talk should be both illuminating and entertaining.","Gaussian elimination, pivoting, scaling, condition number, Rook's pivoting","65F05","15A06","This talk can be placed in a numerical section if one exists "Brualdi","Richard A.","brualdi@math.wisc.edu","\section{A Conjecture in Combinatorial Matrix Theory} By {\sl Richard A. Brualdi}. \noindent In this talk I will discuss an old conjecture of mine and Bolian Liu, and the recent progress on this conjecture.","combinatorial matrix theory","05C50","15A48","For: MS1 Combinatorial Matrix Theory "De Terán","Fernando","fteran@math.uc3m.es","\section{Linearizations of Singular Matrix Polynomials and the Recovery of Minimal Indices} By {\sl \underline{Fernando {\sc d}e Ter\'{a}n}, Froil\'{a}n M. Dopico and D. Steven Mackey}. \noindent The use of linearizations is a well-established tool for both the theoretical and computational investigation of the properties of matrix polynomials. However, almost all analyses of the relationships between a polynomial $P(\l)$ and its linearizations have been restricted to the case where $P$ is regular, i.e.\ when $\det P(\l) \neq 0$. By contrast, this talk will focus on $n\times n$ \emph{singular} polynomials $P(\l)$, with $\det P(\l) \equiv 0$. We begin by examining a variety of pencils associated with $P$ that generalize the well-known (Frobenius) companion linearizations: pencils introduced by Antoniou and Vologiannidis in 2004 \cite{AntV04}, as well as the vector spaces $\L_1(P)$ and $\L_2(P)$ introduced in 2006 \cite{mmmm05v}. Which, if any, of these pencils are still linearizations when $P$ is singular? The second issue addressed in this talk is the relationship between the \emph{minimal indices} of a singular polynomial $P$ and those of its various linearizations $L$. Can the minimal indices of $P$ be recovered from the minimal indices of $L$ in a systematic and uniform way? We consider this question for all the linearizations discussed earlier, and show how the answer depends on the particular linearization chosen. \begin{thebibliography}{10} \bibitem{AntV04} {\sc E.~N. Antoniou and S.~Vologiannidis}, {\em A new family of companion forms of polynomial matrices}, Electr. J. Lin. Alg., 11:78--87, 2004. \bibitem{mmmm05v} {\sc D.~S. Mackey, N.~Mackey, C.~Mehl, and V.~Mehrmann}, \emph{Vector spaces of linearizations for matrix polynomials}, SIAM J. Matrix Anal. Appl., 28(4):971--1004, 2006. \end{thebibliography}","Matrix polynomials. Linearizations. Minimal indices.","15","15A21"," "MERLET","Glenn","glenn.merlet@gmail.com","\section{Semi-group of matrices acting on the max-plus projective space} By {\sl Glenn MERLET}. \noindent We investigate the action of a semi-group $\mathcal S$ of matrices on the max-plus projective space.
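For the quadratic case, the first Frobenius companion form referred to in the De Ter\'an--Dopico--Mackey abstract above reads (an illustrative instance of the standard construction): with $P(\lambda)=A_2\lambda^2+A_1\lambda+A_0$, $$ C_1(\lambda)=\lambda\begin{pmatrix}A_2&0\\0&I_n\end{pmatrix}+\begin{pmatrix}A_1&A_0\\-I_n&0\end{pmatrix}, $$ a linearization in the regular case; whether such pencils remain linearizations when $\det P(\lambda)\equiv 0$ is precisely the kind of question that talk addresses.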
If all matrices in $\mathcal S$ are strongly regular (that is, their image has maximal dimension) and the semi-group is primitive (that is, one of its elements has only finite entries), then there is a point in the projective space which is fixed by every matrix in the semi-group. Moreover, $\mathcal S$ acts on $\cap_{M\in\mathcal S} Im(M)$ like a finite group of affine isometries. If the semi-group contains an element with projectively bounded image, then it also contains some linear projectors. Then, for any projector $P$ with minimal tropical rank, there is a point $x$ whose orbit is mapped onto $x$ by $P$. Moreover, $\{PM: M\in \mathcal S\}$ acts on $\cap_{M\in\mathcal S} Im(PM)$ like a finite group of isometries for the supremum norm. We deduce from this result some limit theorems for max-plus products of random matrices, which were previously known only under the so-called memory-loss property. These results are useful for performance evaluation of max-plus linear discrete event systems.","product of matrices, semi-group, max-plus, tropical geometry, limit theorems, DES, performance evaluation.","12K10","15A52","Contribution to Minisymposium Max Algebra (MS7) "Sinkovic","John","j.sinkovic@tue.nl","\section{An upper bound for the maximum nullity of a symmetric matrix whose graph is outerplanar} By {\sl John Sinkovic}. \noindent Let $G=(V,E)$ be a graph with $V=\{1,2,\ldots,n\}$. Define $S(G)$ as the set of all $n\times n$ real symmetric matrices $A=[a_{i,j}]$ such that, for $i\neq j$, $a_{i,j}\neq 0$ if and only if $ij\in E$. By $M(G)$ we denote the largest possible nullity of any matrix $A\in S(G)$. The path cover number of a graph $G$, denoted $P(G)$, is the minimum number of vertex disjoint paths occurring as induced subgraphs of $G$ which cover all the vertices of $G$. The path cover number of a graph $G$ has been linked to the maximum nullity of $G$. It has been shown by Duarte and Johnson that for a tree $T$, $P(T)=M(T)$. Barioli, Fallat, and Hogben have shown that for a unicyclic graph $G$, $P(G)=M(G)$ or $P(G)=M(G)+1$. In this talk I will show that for outerplanar graphs the path cover number is an upper bound for the maximum nullity, and show that equality holds for partial 2-paths, which are outerplanar.","minimum rank, graph, path cover number","05C50","15A18"," "Martin","William","william.martin@ndsu.edu","\section{Learning Theory and Linear Algebra} By {\sl William Martin, Sergio Loch, Draga Vidakovic, Laurel Cooley, Scott Dexter, Michael Meagher}. \noindent The research team of The Linear Algebra Project developed and implemented a curriculum and a pedagogy for parallel courses in (a) linear algebra and (b) learning theory as applied to the study of mathematics, with an emphasis on linear algebra. The purpose of the ongoing research, partially funded by the National Science Foundation, is to investigate how the parallel study of learning theories and advanced mathematics influences the development of thinking of individuals in both domains. The researchers found that the particular synergy afforded by the parallel study of math and learning theory promoted, in some students, a rich understanding of both domains that had a mutually reinforcing effect. Furthermore, there is evidence that the deeper insights will contribute to more effective instruction by those who become high school math teachers and, consequently, better learning by their students.
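An illustrative example for the parameters in the Sinkovic abstract above (not taken from the submission): for the cycle $C_n$ the minimum rank over $S(C_n)$ is $n-2$, so $M(C_n)=2$; since no single induced path covers all of $C_n$, while one vertex together with the induced path on the remaining $n-1$ vertices does, $P(C_n)=2=M(C_n)$.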
The courses developed were appropriate for mathematics majors, pre-service secondary mathematics teachers, and practicing mathematics teachers. The learning seminar focused most heavily on constructivist theories, although it also examined socio-cultural and historical perspectives (von Glasersfeld, 1989; Vygotsky, 1978, 1986). A particular theory, Action-Process-Object-Schema (APOS) (Asiala et al., 1996), was emphasized and examined through the lens of studying linear algebra. APOS has been used in a variety of studies focusing on student understanding of undergraduate mathematics. The linear algebra courses include the standard set of undergraduate topics. This paper reports the results of the learning theory seminar and its effects on students who were simultaneously enrolled in linear algebra and on students who had previously completed linear algebra, and outlines how prior research has influenced the future direction of the project.","learning theory, APOS, linear algebra, constructivism, instruction","97","97C30"," "Pruneda","Rosa E.","rosa.pruneda@uclm.es","\section{Complete Orthogonal Decomposition Compared with Direct Projection Methods} By {\sl Rosa E. Pruneda and Beatriz Lacruz}. \noindent Several variants of projection methods have been applied to solve linear systems of equations and matrix computations. These methods are direct solvers and consist of an iterative process that projects the orthogonal subspace of each row of a matrix onto the orthogonal subspace of the previous ones. The pivoting process is based on the dot products of the rows of the matrix with a basis of the Euclidean space, which is transformed at each iteration according to orthogonality relationships. This paper studies the orthogonal decomposition method, which gives a complete decomposition of the Euclidean space. The method is compared with the direct projection method, which is based on the same pivoting strategy but gives an implicit factorization of the matrix. The execution and the numerical cost of detecting linear dependencies, solving multiple linear systems and updating one-rank modification problems are discussed. An application to linear regression problems illustrates how to detect collinear relations and how to obtain the coefficients of such dependencies with both methods.","Null space algorithms; One-rank modifications; Multiple linear systems; Linear regression; Collinearity","15A03","62J05","This work is partially supported by Spanish Ministry of Education/FEDER (project BFM2006-15671), Gobierno de Aragón (Consolidated Group Stochastic Models), José Castillejo grant JC2007-00285, and by the Junta de Comunidades de Castilla-La Mancha through project PCI08-0065. "Meerbergen","Karl","Karl.Meerbergen@cs.kuleuven.be","\section{Recycling Ritz vectors in the parameterized Lanczos method} By {Zhaojun Bai and Karl Meerbergen}. \noindent The solution of the parameterized system \begin{equation}\label{eq:system} A x = f \quad\mbox{with}\quad A = K - \omega^2 M \end{equation} with $K$ real symmetric and $M$ symmetric positive definite arises in applications including structural engineering and acoustics. The parameter $\omega$ is often the frequency and lies in the frequency interval where the numerical model is valid. The solution $x$ is called the frequency response function. The traditional method in engineering is modal superposition, where (\ref{eq:system}) is projected on well selected eigenvectors associated with the eigenvalues of \begin{equation}\label{eq:eigval} K u = \lambda M u \ .
\end{equation} This method is generally regarded as very efficient when the eigenvectors and eigenvalues are available, since (\ref{eq:system}) is transformed into a diagonal linear system, but it requires the computation of a significant number of eigenvectors. Efficient methods for solving (\ref{eq:system}) have been developed over the last decade, in the context of iterative linear system solvers for parameterized problems \cite{sipe02} \cite{meer03}, and of the Pad\'e via Lanczos method in the context of model reduction \cite{fefr95} \cite{bafr00b} \cite{bafr01}. In this talk, we discuss the use of Ritz vectors to precondition the Lanczos method for solving the parameterized system (\ref{eq:system}). We apply the method to solving (\ref{eq:system}) with many right-hand sides simultaneously. \begin{thebibliography}{1} \bibitem{bafr00b} Z.~Bai and R.~Freund. \newblock A symmetric band {L}anczos process based on coupled recurrences and some applications. \newblock Numerical Analysis Manuscript 00-8-04, Bell Laboratories, Murray Hill, New Jersey, 2000. \bibitem{bafr01} Z.~Bai and R.~Freund. \newblock A partial {P}ad\'{e}-via-{L}anczos method for reduced-order modeling. \newblock {\em Linear Alg. Appl.}, 332--334:141--166, 2001. \bibitem{fefr95} P.~Feldmann and R.~W. Freund. \newblock Efficient linear circuit analysis by {P}ad\'e approximation via the {L}anczos process. \newblock {\em IEEE Trans. Computer-Aided Design}, CAD-14:639--649, 1995. \bibitem{meer03} K.~Meerbergen. \newblock The solution of parametrized symmetric linear systems. \newblock {\em SIAM J. Matrix Anal. Appl.}, 24(4):1038--1059, 2003. \bibitem{sipe02} V.~Simoncini and F.~Perotti. \newblock On the numerical solution of {$(\lambda^2 A + \lambda B + C)x = b$} and application to structural dynamics. \newblock {\em SIAM Journal on Scientific Computing}, 23(6):1876--1898, 2002. \end{thebibliography}","symmetric eigenvalue problem, parameterized linear systems","15A18, 6","65G50"," "McEneaney","William","wmceneaney@ucsd.edu","\section{Max-Plus Bases, Cornices and Pruning} By {William M. McEneaney.} \noindent In the development of computationally efficient algorithms for control of sensor tasking, one is faced with a certain computational-complexity growth that must be attenuated. At each step of these algorithms, one would like to find a reduced-complexity representation of the current solution. These representations take the form of max-plus sums of affine functionals. Some important max-plus vector spaces, or moduloids, are spaces of convex and semiconvex functions. In these cases, elements of the spaces may be represented as countable max-plus linear combinations of linear functions (for the spaces of convex functions) and quadratic functions (for the spaces of semiconvex functions). The partial sums naturally approximate the elements from below. In the problem at hand, we are in the case of spaces of convex functions. One solution to the complexity-reduction problem would be simply to begin generating the coefficients in max-plus basis expansions, but one is still left with the problem of which basis functions to choose. More carefully, the problem at hand is as follows: given an element of the space of convex functions, taking the form of a max-plus sum of $M$ linear functions, and given some fixed, allowable number of approximating affine functions, say $N<M$, find the max-plus sum of $N$ affine functions that best approximates the given element.
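The modal superposition baseline described in the Bai-Meerbergen abstract above reduces each frequency to a diagonal solve; the following sketch is ours (Python with SciPy; the function name and interface are illustrative, not the authors' code):

\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def modal_superposition(K, M, f, omegas, n_modes):
    # Lowest n_modes eigenpairs of K u = lambda M u; eigh returns
    # U with U.T @ M @ U = I, so projecting (K - omega^2 M) x = f
    # onto span(U) yields the diagonal system (lam - omega^2) y = U.T f.
    lam, U = eigh(K, M, subset_by_index=[0, n_modes - 1])
    g = U.T @ f  # modal loads, computed once
    # One diagonal solve per frequency omega.
    return [U @ (g / (lam - w**2)) for w in omegas]
\end{verbatim}

The eigenpairs are computed once; every additional frequency or right-hand side then costs only a projection and a diagonal scaling, which is why the approach demands a significant number of eigenvectors to be accurate.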
This matrix equation arises in some signal processing problems. For instance, it appears when designing the even and odd components of paraunitary filters, which are widely used for signal compression and denoising purposes. We also point out the relationship between the above matrix equation and the polynomial Bezout equation $|B(z)|^2+|C(z)|^2=a>0$ for $|z|=1$. By exploiting this fact, our results also yield a constructive method for the parameterization of all solutions $B(z), C(z)$. The main advantage of our approach is that $B$ and $C$ are built without the need for spectral factorization. Besides these theoretical advances, in order to illustrate the effectiveness of our approach, some examples of paraunitary filter design are finally given.","Toeplitz matrices, spectral factorization, filter design","15A24","12D05"," "Morris","DeAnne","dmorris@math.wsu.edu","\section{Jordan forms corresponding to nonnegative and eventually nonnegative matrices} By {\sl Judith McDonald, DeAnne Morris}. \noindent We give necessary and sufficient conditions for a set of Jordan blocks to correspond to the peripheral spectrum of a nonnegative matrix. For each eigenvalue $\lambda$, the $\lambda$-level characteristic (with respect to the spectral radius) is defined. The necessary and sufficient conditions include a requirement that the $\lambda$-level characteristic be majorized by the $\lambda$-height characteristic. An algorithm that determines whether or not a multiset of Jordan blocks corresponds to the peripheral spectrum of a nonnegative matrix will be discussed. We also offer necessary and sufficient conditions for a multiset of Jordan blocks to correspond to the spectrum of an eventually nonnegative matrix.","nonnegative","",""," "Vargas Vásquez","Xaab Nop","xaabnop@gmail.com","\section{STUDENTS' DIFFICULTIES WITH THE CONCEPT OF VECTOR SPACE FROM THE POINT OF VIEW OF APOS THEORY} By {\sl Xaab Nop Vargas Vásquez}. \noindent Vector space theory, being abstract in nature and having an epistemological status different from most mathematical topics taught at the undergraduate level, is a major source of difficulty for beginning linear algebra students (Dorier, 1995a; Dorier, 1995b). The identification of the nature of these difficulties, and of their association with the way in which students construct the concept of vector space, is of great importance for the development and implementation of good instructional strategies. APOS (Action-Process-Object-Schema) Theory provides a research tool that has been successfully used for similar purposes in other areas of mathematics, such as abstract algebra and calculus. In a previous paper (Trigueros and Oktac, 2005) a possible genetic decomposition for the concept of vector space was reported, and activities designed so that students can make the mental constructions required by the genetic decomposition of the concept were analyzed. Taking that paper into account, an instrument for conducting a semi-structured interview with a selected group of students was designed using our theoretical framework. The data from the interviews will be analyzed using the same framework. The interview consisted of 17 questions about the concepts of vector space and subspace. Here we present two of these questions (numbered 1 and 2 in the instrument), together with our a priori analysis of them and the related student performance. References: Dorier, J-L.
(1995b): Meta level in the teaching of unifying and generalizing concepts in mathematics. Educational Studies in Mathematics, 29(2), 175-197. Trigueros, M. and Oktac, A. (2005): La Théorie APOS et l'Enseignement de l'Algèbre Linéaire. Annales de Didactique et de Sciences Cognitives, vol. 10, 157-176.","vector space, APOS theory, genetic decomposition","97","97D"," "Milligan","Thomas","tmilligan1@ucok.edu","\section{On Euclidean Squared Distance Matrices} By {\sl Thomas Milligan, Chi-Kwong Li, Michael Trosset}. \noindent Given $n$ points in Euclidean space, $x_1, \dots , x_n$, a Euclidean Squared Distance (ESD) matrix is the matrix whose $(i,j)$ entry is $\|x_i - x_j\|^2$. The study of distance matrices is useful in computational chemistry and structural molecular biology. We present some results arising from different characterizations, including facial structure and linear preservers.","distance matrices","15A",""," "Deaett","Louis","deaett@math.wisc.edu","\section{The graph and rank of a positive semidefinite matrix} By {\sl Louis Deaett}. \noindent By a well-known 1991 result of M. Rosenfeld, if $A$ is a positive semidefinite matrix whose corresponding graph $\mathcal{G}(A)$ contains no triangle, then the number of vertices of $\mathcal{G}(A)$ is at most twice the rank of $A$. In terms of the minimum rank $\mbox{mr}_+(G)$ over positive semidefinite matrices whose graph is $G$ on $n$ vertices, this gives \[ \omega(G) \leq 2 \Rightarrow \mbox{mr}_+(G) \geq \lceil n/2 \rceil. \] We explore the structure of matrices that achieve this bound, and investigate whether other features of the relationship between $\mbox{mr}_+(G)$ and the structure of $G$ can thereby be illuminated.","minimum rank, orthogonal representation","05C50","15A99"," "Dhillon","Inderjit","inderjit@cs.utexas.edu","\section{On some modified root-finding problems} By {\sl Inderjit S. Dhillon, Matyas Sustik}. \noindent Modern problems in data analysis require the solution of some interesting matrix nearness problems. One such problem arises when using an information-theoretic distance measure called the von Neumann matrix divergence (related to von Neumann entropy). The matrix nearness problem in turn leads to a modified root-finding problem involving the matrix exponential. In this talk, I will show how the Newton method can be applied to solve this problem. The central issue is the efficient calculation of the derivative, which involves the matrix exponential and a ``diagonal + low-rank'' eigenvalue problem.","Newton's method, von Neumann divergence, matrix exponential, eigenvalue problem","15",""," 
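The ESD definition in the Milligan-Li-Trosset abstract above can be realized in one line via the expansion $\|x_i - x_j\|^2 = \|x_i\|^2 + \|x_j\|^2 - 2\langle x_i, x_j\rangle$; a small sketch of ours (Python with NumPy):

\begin{verbatim}
import numpy as np

def esd_matrix(X):
    # Rows of X are the points x_1, ..., x_n.
    # D[i, j] = ||x_i||^2 + ||x_j||^2 - 2 <x_i, x_j> = ||x_i - x_j||^2.
    sq = np.sum(X**2, axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
print(esd_matrix(X))  # symmetric with zero diagonal, as every ESD matrix is
\end{verbatim}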
"Tam","Bit-Shun","bsm01@mail.tku.edu.tw","\section{Maximizing spectral radius of unoriented Laplacian matrix} By {\sl Ding-Jung Chang, Bit-Shun Tam and Shui-Hei Wu}. \noindent For a (simple) graph $G$, by the unoriented Laplacian matrix of $G$ we mean the matrix $K(G) = D(G)+A(G)$, where $A(G), D(G)$ denote respectively the adjacency matrix and the diagonal matrix of vertex degrees of $G$. In this talk, I'll report on recent progress on the problem of maximizing the spectral radius of the unoriented Laplacian matrix over various classes of graphs. Our treatment depends on the theory of threshold graphs, together with the following new result: Let $G$ be a graph. Let $V_1, \ldots, V_r$ be the equivalence classes for the equivalence relation $\sim$ on $V(G)$ defined by: $u \sim v$ if and only if $N(u)\setminus \{ v\} = N(v)\setminus \{ u\}$, where $N(u)$ denotes the neighbor set of $u$ in $G$. For $j = 1, \ldots, r$, let $n_j$ denote the cardinality of $V_j$ and let $\delta_j$ be the common degree of the vertices in $V_j$. Let $I_1$ (respectively, $I_2$) consist of all indices $j$ such that $n_j > 1$ and $G[V_j]$ is a null graph (respectively, a complete graph). For $i,j = 1, \ldots, r$, let $\gamma_{ij}$ equal $1$ if there is an edge between $V_i$ and $V_j$, and equal $0$ otherwise. Also, let $B = (b_{ij})$ denote the $r\times r$ matrix given by: $b_{ij}$ equals $\gamma_{ij}n_j$ for $i \ne j$ and equals $\gamma_{ii}(n_i-1)$ for $i=j$. Then the spectrum of $K(G)$ is given by $\sigma(K(G)) = \sigma(\Delta+B) \cup \{ \delta_i\ ((n_i-1) \mbox{ times}) : i\in I_1\} \cup \{ \delta_i-1\ ((n_i-1) \mbox{ times}) : i\in I_2\}$, where $\Delta = \mathrm{diag}(\delta_1, \ldots, \delta_r)$.","Unoriented Laplacian matrix; Spectral radius; Maximizing; Threshold graph; Vicinal pre-order","05C50","15A18","
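The unoriented (signless) Laplacian $K(G) = D(G) + A(G)$ of the Chang-Tam-Wu abstract above is easy to experiment with numerically; a sketch of ours (Python with NumPy; the example graph is chosen only for illustration):

\begin{verbatim}
import numpy as np

def unoriented_laplacian(A):
    # K(G) = D(G) + A(G), with D(G) the diagonal matrix of vertex degrees.
    return np.diag(A.sum(axis=1)) + A

# Path on three vertices: sigma(K(P_3)) = {0, 1, 3}.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
K = unoriented_laplacian(A)
print(np.max(np.linalg.eigvalsh(K)))  # spectral radius, here 3
\end{verbatim}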
"Rodriguez velázquez","Juan Alberto","juanalberto.rodriguez@urv.cat","\section{The Laplacian Spectrum of Hypergraphs} By {Juan A. Rodriguez-Velazquez and Aida Kamisalic}. \noindent In order to deduce properties of graphs from results and methods of algebra, we first need to translate graph properties into algebraic properties. A natural way to do this is to consider algebraic structures or objects, such as groups or matrices. In particular, the use of matrices allows us to apply methods of linear algebra to derive properties of graphs. There are various matrices naturally associated with graphs, such as the adjacency matrix, the Laplacian matrix, and the incidence matrix. One of the main aims of algebraic graph theory is to determine how, or whether, properties of graphs are reflected in the algebraic properties of such matrices. In this paper we collect some recent results on the Laplacian spectrum of hypergraphs. We focus our attention on metric parameters, including eccentricity, excess, diameter, and the Wiener index. Throughout the paper we particularize the results to the case of walk-regular hypergraphs.","Laplacian spectrum; Laplacian matrix","15A42","05C50"," "Soares","Graça","gsoares@utad.pt","\section{Inequalities on an indefinite inner product space} By {\sl N. Bebiano et al.}. \noindent We study some matrix inequalities on an indefinite inner product space, induced by a selfadjoint involution $J$, for $J$-selfadjoint matrices with non-negative eigenvalues. In particular, some characterizations of the $J$-chaotic order are obtained.","Indefinite inner product space; J-contraction","15A60","15A60"," "Guzmán","José Ramón","jrg@servidor.unam.mx","\section{Reduction of an Itô diffusion input-output model for the determination of mean-square stability} By {\sl José Ramón Guzmán}. \noindent While Itô diffusions are familiar to scientists in areas such as physics, engineering and biology, they are practically unknown to social scientists. For this stochastic process the relevant points to consider are first-moment stability (Lyapunov stability) and mean-square stability, which is stronger than first-moment stability. In particular, we propose a multisectoral linear diffusion input-output model. Associated with this dynamical economic system is a system of differential equations with symmetric state variables, used to investigate mean-square stability. From this last system a $d^2\times d^2$ matrix is obtained. We propose a general algorithm that transforms the $d^2\times d^2$ matrix into one of order $\frac{d(d+1)}{2}\times\frac{d(d+1)}{2}$, preserving the same eigenvalue information. This reduction algorithm makes eigenvalue computations feasible for large-scale dynamical input-output systems.","dynamical systems; lambda-matrices; algorithm; rational points on curves","15-04","15A21","
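One standard way to realize the $d^2 \times d^2 \to \frac{d(d+1)}{2} \times \frac{d(d+1)}{2}$ reduction described in the Guzmán abstract is through the duplication matrix, which identifies a symmetric matrix with its on-and-below-diagonal entries. The sketch below (Python with NumPy) is our own generic version under that assumption, not necessarily the author's algorithm:

\begin{verbatim}
import numpy as np

def duplication_matrix(d):
    # D satisfies vec(S) = D @ vech(S) for symmetric S (row-major vec);
    # vech stacks the d*(d+1)//2 entries on and below the diagonal.
    m = d * (d + 1) // 2
    D = np.zeros((d * d, m))
    col = 0
    for j in range(d):
        for i in range(j, d):
            D[i * d + j, col] = 1.0
            D[j * d + i, col] = 1.0
            col += 1
    return D

def reduce_operator(L, d):
    # For a d^2 x d^2 operator L that maps (vectorized) symmetric
    # matrices to symmetric matrices, the compressed matrix keeps the
    # eigenvalues of L restricted to that subspace: pinv(D) @ D = I
    # because D has full column rank.
    D = duplication_matrix(d)
    return np.linalg.pinv(D) @ L @ D
\end{verbatim}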