\documentclass[12pt]{article} \usepackage{amsmath} \usepackage{amsthm} \usepackage{amssymb} \usepackage{amsfonts} \def\Z{\mathbb Z}
\newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark}
\begin{document}

\section*{$\Z_2$-graded symmetry classes of tensors}

By {\sl M. Shahryari}.

\medskip

\noindent In this paper, we define a natural $\Z_2$-gradation on the symmetry class of tensors $V_{\chi}(G)$. We give the dimensions of the {\em even} and {\em odd} parts of this gradation. We also prove that the even part (respectively, the odd part) of this gradation is zero if and only if the whole symmetry class is zero.

\section*{Matrix Results on Weighted Drazin Inverse and Some Applications}

By {\sl Zeyad Al Zhour and Adem Kilicman}.

\medskip

\noindent In this paper, we present two general representations of the weighted Drazin inverse $A_{d,W}$ of an arbitrary rectangular matrix $A \in M_{m,n}$ related to the Moore--Penrose inverse (MPI) and the Kronecker product of matrices. These generalizations extend earlier results on the Drazin inverse $A_{d}$, the group inverse $A_{g}$ and the usual inverse $A^{-1}$. Furthermore, some necessary and sufficient conditions are given for the reverse order laws $(AB)_{d}=B_{d}A_{d}$ and $(AB)_{d,Z}=B_{d,R}A_{d,W}$ to hold for Drazin and weighted Drazin inverses. Finally, we present the solution of restricted singular matrix equations using our new approaches.

\section*{STOPPING RULES FOR A NONNEGATIVELY CONSTRAINED ITERATIVE METHOD FOR ILL-POSED POISSON IMAGING PROBLEMS}

By {\sl Johnathan M. Bardsley}.

\medskip

\noindent Image data is often collected by a charge coupled device (CCD) camera. CCD camera noise is known to be well-modeled by a Poisson distribution. If this is taken into account, the negative-log of the Poisson likelihood is the resulting data-fidelity function. We derive, via a Taylor series argument, a weighted least squares approximation of the negative-log of the Poisson likelihood function. The image deblurring algorithm of interest is then applied to the problem of minimizing this weighted least squares function subject to a nonnegativity constraint. Our objective in this paper is the development of stopping rules for this algorithm. We present three stopping rules and then test them on data generated using two different true images and an accurate CCD camera noise model. The results indicate that each of the three stopping rules is effective.
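\medskip

\noindent To make the Taylor-series claim concrete, here is a minimal numerical sketch in Python (illustrative only, not the paper's code; the data-fidelity form $T(u)=\sum_i \left(u_i - z_i\log u_i\right)$ and the weights $1/z_i$ are the standard second-order Taylor choice about $u=z$, and are assumptions here):

\begin{verbatim}
import numpy as np

# Negative-log Poisson likelihood for counts z and model intensities u,
# up to an additive constant:  T(u) = sum_i (u_i - z_i * log(u_i)).
def neg_log_poisson(u, z):
    return np.sum(u - z * np.log(u))

# Second-order Taylor expansion of u -> u - z*log(u) about u = z gives
# T(u) ~ T(z) + 0.5 * sum_i (u_i - z_i)**2 / z_i, a weighted least
# squares term (the paper's exact weighting may differ, e.g. to model
# CCD readout noise).
def wls_approx(u, z):
    return neg_log_poisson(z, z) + 0.5 * np.sum((u - z) ** 2 / z)

rng = np.random.default_rng(0)
z = rng.poisson(lam=100.0, size=1000).astype(float)  # simulated counts
u = z * (1.0 + 0.05 * rng.standard_normal(z.size))   # intensities near z
print(neg_log_poisson(u, z), wls_approx(u, z))       # nearly equal
\end{verbatim}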
\section*{Euler's difference table and maximum permanents of $(0,1)$-matrices}

By {\sl Fanja Rakotondrajao}.

\medskip

\noindent First, we enumerate the injections from $[m]$ to $[n]$ without $k$-fixed-points, that is, injections $f$ with no $i$ such that $f(i) = i+k$. We deduce the exact values of the maximum permanent of $m \times n$ $(0,1)$-matrices having exactly $m-k$ zero entries, for all integers $0\leq k \leq m \leq n$. Unexpectedly, these values are related to the numbers $d^k_n$ of $k$-fixed-points-permutations over $[n]$. The numbers $d^k_n$ form the derivate of Euler's difference table.

\section*{The ratio between the Toeplitz and the unstructured condition number}

By {\sl S.M. Rump and H. Sekigawa}.

\medskip

\noindent Recently we showed that the ratio between the normwise Toeplitz structured condition number of a linear system and the general unstructured condition number has a finite lower bound. However, the bound was not explicit, and nothing was known about its quality. In a joint work with H. Sekigawa we give an explicit lower bound depending only on the dimension, and we show that this bound is almost sharp. The solution of both problems is based on the minimization of the smallest singular value of a class of Toeplitz matrices and its nice connection to a lower bound on the coefficients of the product of two polynomials.

\section*{Block splitting least squares regularization for structured matrices arising in nonlinear microwave imaging}

By {\sl Claudio Estatico}.

\medskip

\noindent Nonlinear inverse problems arising in many real applications generally lead to very large, structured matrices, which require careful analysis in order to reduce the numerical complexity, both in time and space. Since these problems are ill-posed, any solving strategy based on linearization involves some least squares regularization.

\noindent In this talk a microwave imaging problem is introduced: the dielectric properties of an object under test (i.e., the output image to restore) are retrieved by means of its scattered microwave electromagnetic field (i.e., the input data). From a theoretical point of view, the mathematical model is a nonlinear integral equation with a structured shift-variant integral kernel. From a numerical point of view, linearization and discretization give rise to an ill-conditioned block arrow matrix with structured blocks, which is iteratively solved by a three-level regularizing inexact Newton scheme as follows: $(i)$ the first (outer) level of iterations is related to a least squares Gauss--Newton linearization; $(ii)$ the second level of iterations is related to a block splitting iterative scheme; $(iii)$ the third and innermost level of iterations is related to a regularizing iterative method for each system block arising from any level $(ii)$ iteration. After that, post-processing techniques based on linear super-resolution improve the quality of the results, and some numerical results are given and compared.\\

\noindent This is a joint work with Professor J. Nagy of Emory University, Atlanta, and Professors F. Di Benedetto, M. Pastorino, A. Randazzo and G. Bozza of the University of Genova, Italy.\\

\vskip 0.5cm {\bf \Large{Bibliography}}\\

\noindent C. Estatico, G. Bozza, A. Massa, M. Pastorino, A. Randazzo,\\ ``A two steps inexact-Newton method for electromagnetic imaging of dielectric structures from real data'', {\it Inverse Problems}, {\bf 21}, pp. S81--S94, 2005.\\

\noindent C. Estatico, G. Bozza, M. Pastorino, A. Randazzo,\\ ``An inexact-Newton method for microwave reconstruction of strong scatterers'', {\it IEEE Antennas and Wireless Propagation Letters}, {\bf 5}, pp. 61--64, 2006.\\

\noindent F. Di Benedetto, C. Estatico, J. Nagy,\\ ``Numerical linear algebra for nonlinear microwave imaging'', {\it in preparation}.
\section*{Dilation of numerical ranges of normal matrices}

By {\sl Maria Adam and John Maroulas}.

\medskip

\noindent Let $\,A\,$ be an $\,n \times n\,$ normal matrix whose numerical range $\,NR[A]\,$ is a $\,k$-polygon. If, for a unit vector $\,v \in \mathbb{C}^{n}$, the point $\,v^{*}Av\,$ is an interior point of $\,NR[A]$, and $\,P\,$ is an $\,n \times (k-1)\,$ matrix such that $\,P^{*}P=I_{k-1}\,$ and $\,v \perp \mathrm{Im}\,P$, then $\,NR[A]\,$ is circumscribed to $\,NR[C]$, where $\,C=P^{*}AP$. In this paper, we investigate the converse direction, showing how we obtain $\,NR[A]\,$ from a $\,(k-1)$-polygon such that the boundary of $\,NR[C]\,$ shares the same tangential points with the sides of both polygons.
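\medskip

\noindent As a numerical illustration of the compression $C=P^{*}AP$ (under our own choice of $A$ and $v$, not an example from the paper), the following Python sketch computes boundary points of $NR[A]$ and $NR[C]$ by the standard angular sweep: for each angle $\theta$, the top eigenvector $x$ of the Hermitian part of $e^{-i\theta}A$ gives the boundary point $x^{*}Ax$.

\begin{verbatim}
import numpy as np

def nr_boundary(A, num_angles=360):
    # For each angle t, take the eigenvector x of the largest eigenvalue
    # of the Hermitian part of exp(-1j*t)*A; x* A x lies on the boundary.
    pts = []
    for t in np.linspace(0.0, 2 * np.pi, num_angles, endpoint=False):
        M = np.exp(-1j * t) * A
        w, V = np.linalg.eigh((M + M.conj().T) / 2)
        x = V[:, -1]
        pts.append(x.conj() @ A @ x)
    return np.array(pts)

# A normal matrix whose numerical range is the 4-polygon spanned by
# its eigenvalues, and a unit vector v with v*Av = 0 in the interior.
A = np.diag([1.0 + 0j, 1j, -1.0 + 0j, -1j])
v = np.ones(4, dtype=complex) / 2
# Orthonormal basis P of the orthogonal complement of v (so that
# v is orthogonal to Im P and P*P = I_3), via a QR factorization.
Q, _ = np.linalg.qr(np.column_stack([v, np.eye(4, dtype=complex)[:, :3]]))
P = Q[:, 1:]
C = P.conj().T @ A @ P
print(nr_boundary(A, 8))   # points on the square NR[A]
print(nr_boundary(C, 8))   # NR[C] is inscribed in NR[A]
\end{verbatim}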
\section*{Algebraic Gramians and Model Reduction for Different System Classes}

By {\sl Tobias Damm}.

\medskip

\noindent Model order reduction by balanced truncation is one of the best-known methods for linear systems. It is motivated by the use of energy functionals, preserves stability and provides strict bounds for the approximation error. The computational bottleneck of this method lies in the solution of a pair of dual Lyapunov equations to obtain the controllability and the observability Gramian, but nowadays there are efficient methods which work for large-scale systems as well. These advantages motivate the attempt to apply balanced truncation to other classes of systems as well. For example, there is an immediate way to generalize the idea to stochastic linear systems, where one has to consider generalized versions of the Lyapunov equations. Similarly, one can define energy functionals and Gramians for nonlinear systems and try to use them for order reduction. In general, however, these Gramians are very complicated and practically not available. As an approximation, one may use algebraic Gramians, which again are solutions of certain generalized Lyapunov equations and which give bounds for the energy functionals. This approach has been taken e.g.~for bilinear systems of the form \begin{eqnarray*} \dot x&=&Ax+\sum_{j=1}^k N_jxu_j+Bu\;,\\ y&=& Cx\;, \end{eqnarray*} which arise e.g.~from the discretization of diffusion equations with Robin-type boundary control. In the talk we review these generalizations for different classes of systems and discuss computational aspects.
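\medskip

\noindent For the linear case, the following Python sketch (a textbook square-root implementation on illustrative data, not code from the talk) computes the two Gramians from the dual Lyapunov equations $AP+PA^{T}+BB^{T}=0$ and $A^{T}Q+QA+C^{T}C=0$ and truncates a balancing transformation; for the bilinear class above one would instead solve the generalized equation $AP+PA^{T}+\sum_{j}N_{j}PN_{j}^{T}+BB^{T}=0$, e.g.\ by a fixed-point iteration on the same Lyapunov solver.

\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# A small stable system: a 1-D diffusion-like chain with input and
# output at one end (illustrative data only).
n, r = 6, 2
A = (-2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
B = np.eye(n)[:, :1]
C = np.eye(n)[:1, :]

# Controllability and observability Gramians from the dual Lyapunov
# equations  A P + P A^T = -B B^T  and  A^T Q + Q A = -C^T C.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root balanced truncation: with P = S S^T, Q = R R^T and
# svd(R^T S) = U diag(s) V^T, s holds the Hankel singular values;
# keeping the r largest yields the reduced model (Ar, Br, Cr).
S = cholesky(P, lower=True)
R = cholesky(Q, lower=True)
U, s, Vt = svd(R.T @ S)
T = S @ Vt.T[:, :r] / np.sqrt(s[:r])   # right projection
W = R @ U[:, :r] / np.sqrt(s[:r])      # left projection, W.T @ T = I
Ar, Br, Cr = W.T @ A @ T, W.T @ B, C @ T
print("Hankel singular values:", s)
\end{verbatim}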
\section*{EULER'S DIFFERENCE TABLE AND MAXIMUM PERMANENTS OF $(0,1)$-MATRICES}

By {\sl Fanja Rakotondrajao}.

\medskip

\noindent \textsc{Abstract. } In this paper we give three different objects which are combinatorially bijective and whose values are given by Euler's difference table and its derivate.

\section{Introduction}

We will give different objects which are combinatorially equivalent and which are enumerated by the numbers $e^{k}_{n}$ and their derivate $d^{k}_{n}$. Euler introduced the former numbers, which are also called the \textit{difference factorial numbers}. Euler's difference table was studied in \cite{clarke}, \cite{dumont}, \cite{rak1} and \cite{rak}, and its first few values are given in the following table.
\[ \begin{tabular}{||r|rcccccc||}\hline \multicolumn{8}{||c||}{$e^{k}_{n}$}\\\hline &$k=0$&1&2&3&4&5&\\ \hline $n=0$&0!&&&&&&\\ 1&0&1!&&&&&\\ 2&1&1&2!&&&&\\ 3&2&3&4&3!&&&\\ 4&9&11&14&18&4!&&\\ 5&44&53&64&78&96&5!&\\ \hline \end{tabular} \]
The coefficients $e^{k}_{n}$ of this table are defined by
$$e^{n}_{n}=n! \mbox{ and } e^{k-1}_{n}=e^{k}_{n}-e^{k-1}_{n-1}.$$
The first values of the numbers $d^{k}_{n}=\dfrac{e^{k}_{n}}{k!}$, which we call the {\it derivate of Euler's difference table} (see \cite{rak1}, \cite{rak}), are given in the following table.
\[ \begin{tabular}{||r|rcccccc||}\hline \multicolumn{8}{||c||}{$d^{k}_{n}$}\\\hline &$k=0$&1&2&3&4&5&\\ \hline $n=0$&1&&&&&&\\ 1&0&1&&&&&\\ 2&1&1&1&&&&\\ 3&2&3&2&1&&&\\ 4&9&11&7&3&1&&\\ 5&44&53&32&13&4&1&\\ \hline \end{tabular} \]
Recall that the numbers $d^{k}_{n}$ satisfy the following recursive relations (see \cite{rak1}, \cite{rak})
$$ \begin{cases} d^{k}_{k}=1,\\ d^{k}_{n}=(n-1)d^{k}_{n-1}+(n-k-1)d^{k}_{n-2} \mbox{ for } n > k\geq 0,\\ kd^{k}_{n}=d^{k-1}_{n-1}+d^{k-1}_{n} \mbox{ for } 1\leq k \leq n,\\ nd^{k}_{n-1}=d^{k}_{n}+d^{k-1}_{n-2} \mbox{ for } 0\leq k\leq n-1, \end{cases} $$
and that their exact values are given respectively by (see \cite{rak1})
$$e^{k}_{n}=\sum^{n-k}_{i=0}(-1)^i \dbinom{n-k}{i} (n-i)!,$$
$$d^{k}_{n}=\sum^{n-k}_{i=0}(-1)^{i} \dbinom{n-k}{i}\dfrac{(n-i)!}{k!}.$$
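\medskip

\noindent The recurrences and closed forms above are easy to cross-check by machine; the following short Python sketch builds the table of the $e^{k}_{n}$ from the defining relations and verifies the alternating sums against $d^{k}_{n}=e^{k}_{n}/k!$.

\begin{verbatim}
from math import comb, factorial

N = 8
# Build e[n][k] from e^n_n = n! and e^{k-1}_n = e^k_n - e^{k-1}_{n-1}.
e = [[0] * (N + 1) for _ in range(N + 1)]
for n in range(N + 1):
    e[n][n] = factorial(n)
for n in range(1, N + 1):
    for k in range(n, 0, -1):
        e[n][k - 1] = e[n][k] - e[n - 1][k - 1]

# Check the closed forms for e^k_n, and that k! divides e^k_n,
# so that d^k_n = e^k_n / k! is an integer.
for n in range(N + 1):
    for k in range(n + 1):
        s = sum((-1) ** i * comb(n - k, i) * factorial(n - i)
                for i in range(n - k + 1))
        assert e[n][k] == s and e[n][k] % factorial(k) == 0

print([e[n][0] for n in range(N + 1)])
# [1, 0, 1, 2, 9, 44, 265, 1854, 14833]  (the derangement numbers d^0_n)
\end{verbatim}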
We can find the first five columns of the array $d^{k}_{n}$ (i.e., $d^{k}_{n}$ with $k=0,1,\ldots,4$) in the Online Encyclopedia of Integer Sequences \newline \centerline{(OEIS, http://www.research.att.com/$\sim$njas/sequences/)} as sequences $A000166$, $A000153$, $A000261$, $A001909$ and $A001910$ respectively, and the first seven diagonals (i.e., $d^{n}_{n+k}$ with $k=0,1,\ldots,6$) as sequences $A000012$, $A000027$, $A002061$, $A094792$, $A094793$, $A094794$ and $A094795$ respectively. The diagonals are interpreted as the maximum values of the permanent (\cite{bru}, \cite{minc}) among all $(0,1)$-matrices (see \cite{song}) of dimension $(n-k) \times n$ with exactly $n-k$ zero entries for $k=1,2,\ldots$, and the columns as the numbers of injections from $[n-k]$ to $[n]$ with no fixed points. The author (\cite{rak1}, \cite{rak}) introduced the $k$-fixed-points-permutations, that is, permutations whose fixed points belong to $[k]$ and each of whose cycles has at most one point in common with $[k]$. On the other hand, $(0,1)$-matrices and their permanents play an important part in many fields of discrete mathematics, namely in graph theory, coding theory, combinatorics and linear algebra. In this paper we will show that these three different objects are combinatorially bijective, and we will give a general result on the maximum permanent of $(0,1)$-matrices.

We will denote by $[n]$ the set $\{1,\ldots,n\}$ and by $D^{k}_{n}$ the set of $k$-fixed-points-permutations. We say that an element $x \in X$ is a fixed point of the map $f$ from the set $X$ to the set $Y$ if $f(x)=x$, and that an element $x$ is a $k$-succession if $f(x)=x+k$. We say that the map $f$ is injective (an injection) if $f(x_1)=f(x_2)$ implies $x_1=x_2$. We will denote by $Im(f)$ the image of the map $f$ and by $W^{k}_{n}$ the set of injections from $[n-k]$ to $[n]$ without fixed points. We will write $f=f(1)f(2)\ldots f(n-k).$

\section{Injections from $[n-k]$ to $[n]$ without fixed points}

\begin{theorem} The number $d^{k}_{n}$ enumerates the injections from $[n-k]$ to $[n]$ without fixed points. \end{theorem}

\begin{proof} The number of injections from $[n-k]$ to $[n]$ having a given set of $i$ fixed points is $\dfrac{(n-i)!}{k!}$, and the number of ways of selecting these $i$ elements from $n-k$ elements is $\dbinom{n-k}{i}$. By the inclusion-exclusion principle \cite{rior}, the number of injections from $[n-k]$ to $[n]$ without fixed points is $$\sum^{n-k}_{i=0}(-1)^{i} \dbinom{n-k}{i}\dfrac{(n-i)!}{k!},$$ which is the formula for the numbers $d^{k}_{n}$. \end{proof}

\section{Bijection between $D^{k}_{n}$ and $W^{k}_{n}$}

Let $k$ and $n$ be two integers such that $0\leq k\leq n$, and let us consider the map $\phi$ from $D^{k}_{n}$ to $W^{k}_{n}$ which associates to a permutation $\sigma$ the map $f$ defined by $$f(i)=n+1-\sigma(n+1-i) \mbox{ for } i\in [n-k].$$

\begin{theorem} The map $\phi$ is a bijection from $D^{k}_{n}$ to $W^{k}_{n}$. \end{theorem}

\begin{proof} Notice that if $k=0$, then the sets $D^{k}_{n}$ and $W^{k}_{n}$ coincide: both are the set of permutations of $[n]$ without fixed points. Assume $k\geq 1$. Let $\sigma$ be a $k$-fixed-points-permutation. For $1\leq i \leq k$ we have $\sigma(i)=i$ or $\sigma(i)>k$, and for $k+1\leq i\leq n$ we have $\sigma(i)\neq i$. First we prove that the map $\phi$ is well defined, that is, that the map $f=\phi(\sigma)$ is an injection from $[n-k]$ to $[n]$ without fixed points. If we had $f(i)=i$, that is, $n+1-\sigma(n+1-i)=i$, then we would have $\sigma(n+1-i)=n+1-i$, which is impossible since $i \in [n-k]$ and the fixed points of the permutation $\sigma$ lie in the subset $[k]$. By the construction of the map $\phi$, for a given $k$-fixed-points-permutation over $[n]$ the map $f=\phi(\sigma)$ is unique, and if $\sigma_1 \neq \sigma_2$ then $\phi(\sigma_1)\neq \phi(\sigma_2)$. The inverse of the map $\phi$ associates to a given injection $f$ of the set $W^{k}_{n}$ the $k$-fixed-points-permutation $\sigma$ defined by $$\sigma(n+1-i)=n+1-f(i)\mbox{ for } i\in [n-k].$$ \end{proof}

\begin{corollary} For all integers $i\in [k]$ and for all $f=\phi(\sigma)$, we have $$\sigma(i)=i \Leftrightarrow n+1-i \notin Im(f).$$ \end{corollary}

\begin{proof} For $i \in [k]$, we have $n+1-i = f(j)$ for some $j \in [n-k]$ if and only if $\sigma(n+1-j)=i$, that is, $\sigma^{-1}(i)=n+1-j \geq k+1$. Since every cycle of $\sigma$ has at most one point in common with $[k]$, we have $\sigma^{-1}(i)\geq k+1$ if and only if $\sigma(i)\neq i$. \end{proof}

Let us illustrate our map $\phi$ by an example. Let $k=3$, $n=12$ and $\sigma=(1\ 7\ 4)(2)(3\ 8\ 12)(6\ 9)(5\ 10\ 11).$ We have \begin{itemize} \item[] $f(1)=13-\sigma(12)=10$ \item[] $f(2)=13-\sigma(11)=8$ \item[] $f(3)=13-\sigma(10)=2$ \item[] $f(4)=13-\sigma(9)=7$ \item[] $f(5)=13-\sigma(8)=1$ \item[] $f(6)=13-\sigma(7)=9$ \item[] $f(7)=13-\sigma(6)= 4$ \item[] $f(8)=13-\sigma(5)= 3$ \item[] $f(9)=13-\sigma(4)= 12,$ \end{itemize} that is, we get $f=\phi(\sigma)= 10\ 8\ 2\ 7\ 1\ 9\ 4\ 3\ 12.$ Conversely, for the inverse we have \begin{itemize} \item[] $\sigma(12)=13-f(1)=3$ \item[] $\sigma(11)=13-f(2)=5$ \item[] $\sigma(10)=13-f(3)=11$ \item[] $\sigma(9)=13-f(4)=6$ \item[] $\sigma(8)=13-f(5)=12$ \item[] $\sigma(7)=13-f(6)=4$ \item[] $\sigma(6)=13-f(7)= 9$ \item[] $\sigma(5)=13-f(8)= 10$ \item[] $\sigma(4)=13-f(9)= 1,$ \end{itemize} that is, $\sigma=(8\ 12\ 3)(11\ 5\ 10)(9\ 6)(7\ 4\ 1)(2).$
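\medskip

\noindent The map $\phi$ is also easy to test exhaustively for small parameters. The sketch below (Python) filters $D^{k}_{n}$ out of all permutations, applies $\phi$, and checks that the images are pairwise distinct fixed-point-free injections, in number $d^{k}_{n}$.

\begin{verbatim}
from itertools import permutations
from math import comb, factorial

def is_kfpp(sig, n, k):
    # sig[i-1] = sigma(i).  Membership in D^k_n: fixed points lie in
    # [k] and every cycle meets [k] in at most one point.
    if any(sig[i - 1] == i and i > k for i in range(1, n + 1)):
        return False
    seen = [False] * (n + 1)
    for s in range(1, n + 1):
        cyc = []
        while not seen[s]:
            seen[s] = True
            cyc.append(s)
            s = sig[s - 1]
        if sum(1 for x in cyc if x <= k) > 1:
            return False
    return True

def phi(sig, n, k):
    # f(i) = n + 1 - sigma(n + 1 - i) for i in [n - k].
    return tuple(n + 1 - sig[n - i] for i in range(1, n - k + 1))

n, k = 7, 3
D = [s for s in permutations(range(1, n + 1)) if is_kfpp(s, n, k)]
F = {phi(s, n, k) for s in D}
assert len(F) == len(D)                          # phi is injective
assert all(f[i] != i + 1 for f in F for i in range(n - k))
d = sum((-1) ** i * comb(n - k, i) * factorial(n - i)
        for i in range(n - k + 1)) // factorial(k)
print(len(D), d)                                 # both 465 = d^3_7
\end{verbatim}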
\section{Permutations without $k$-successions}

We say that an integer $i$ is a $k$-succession of the permutation $\sigma$ if $\sigma(i)=i+k$ (see \cite{rak}).

\begin{theorem} \cite{rak} The number $e^{k}_{n}$ enumerates the permutations over $[n]$ without $k$-successions. \end{theorem}

\begin{proof} Notice that if an integer $p$ is a $k$-succession of the permutation $\sigma$, then $p \in [n-k]$. The number of permutations of $[n]$ having a given set of $i$ $k$-successions is equal to $(n-i)!$, and the number of ways of selecting these $i$ elements from $n-k$ elements is $\dbinom{n-k}{i}$. By the inclusion-exclusion principle \cite{rior}, the number of permutations over $[n]$ without $k$-successions is $$\sum^{n-k}_{i=0}(-1)^{i} \dbinom{n-k}{i}(n-i)!=e^{k}_{n}.$$ \end{proof}

\section{Injections without $k$-successions}

\begin{theorem} For all integers $0\leq k \leq m\leq n$, the number $d(m,n,k)$ of injections from $[m]$ to $[n]$ without $(n-m+k)$-successions is equal to $$\sum^{m-k}_{i=0}(-1)^{i}\dbinom{m-k}{i}\dfrac{(n-i)!}{(n-m)!}.$$ \end{theorem}

\begin{proof} Notice that if an integer $p$ is a $(n-m+k)$-succession of a map $f$ from $[m]$ to $[n]$, then $p \in [m-k]$. The number of injections from $[m]$ to $[n]$ having a given set of $i$ $(n-m+k)$-successions is equal to $\dfrac{(n-i)!}{(n-m)!}$, and the number of ways of selecting these $i$ elements from $m-k$ elements is $\dbinom{m-k}{i}$. By the inclusion-exclusion principle \cite{rior}, we get the required result. \end{proof}

\begin{corollary} For all nonnegative integers $r$ and $0\leq k \leq n$, the number $d^{(r)}_{n,k}=d(n,n+r,k)$ of injections from $[n]$ to $[n+r]$ without $(r+k)$-successions is equal to $$\sum^{n-k}_{i=0}(-1)^{i}\dbinom{n-k}{i}\dfrac{(n+r-i)!}{r!}.$$ \end{corollary}

Let us give the first values of the numbers $d^{(r)}_{n,k}$ for a few given integers $r$.
\[ \begin{tabular}{||r|rcccccc||}\hline \multicolumn{8}{||c||}{$d^{(0)}_{n,k}$}\\\hline &$k=0$&1&2&3&4&5&6\\ \hline $n=0$&0!&&&&&&\\ 1&0&1!&&&&&\\ 2&1&1&2!&&&&\\ 3&2&3&4&3!&&&\\ 4&9&11&14&18&4!&&\\ 5&44&53&64&78&96&5!&\\ 6&265&309&362&426&504&600&6!\\\hline \end{tabular} \]
\[ \begin{tabular}{||r|rccccc||}\hline \multicolumn{7}{||c||}{$d^{(1)}_{n,k}$}\\\hline &$k=0$&1&2&3&4&5\\ \hline $n=0$&1&&&&&\\ 1&1&2!&&&&\\ 2&3&4&3!&&&\\ 3&11&14&18&4!&&\\ 4&53&64&78&96&5!&\\ 5&309&362&426&504&600&6!\\\hline \end{tabular} \hfill \qquad \begin{tabular}{||r|rcccc||}\hline \multicolumn{6}{||c||}{$d^{(2)}_{n,k}$}\\\hline &$k=0$&1&2&3&4\\ \hline $n=0$&1&&&&\\ 1&2&3&&&\\ 2&7&9&12&&\\ 3&32&39&48&60&\\ 4&181&213&252&300&360\\\hline \end{tabular} \]
\[ \begin{tabular}{||r|rccc||}\hline \multicolumn{5}{||c||}{$d^{(3)}_{n,k}$}\\\hline &$k=0$&1&2&3\\ \hline $n=0$&1&&&\\ 1&3&4&&\\ 2&13&16&20&\\ 3&71&84&100&120\\\hline \end{tabular} \]
Unexpectedly, we obtain the following theorem.

\begin{theorem}\label{main} For all nonnegative integers $r$ and $0\leq k \leq n$, we have $$d^{(r)}_{n,k}= \dfrac{(k+r)!}{r!}\, d^{k+r}_{n+r}.$$ \end{theorem}

\begin{proof} Let us denote by $\mathbb{I}^{r}_{k+r}$ the set of all injections from the set $[k]$ to $[k+r]$, and by $\mathbb{S}(n,r,k)$ the set of all injections from $[n]$ to $[n+r]$ without $(r+k)$-successions. We will construct a bijection between $\mathbb{S}(n,r,k)$ and $W^{r+k}_{n+r}\times \mathbb{I}^{r}_{k+r}.$ To a given injection $f \in \mathbb{S}(n,r,k)$, we associate the pair $(g,\gamma) \in W^{r+k}_{n+r}\times \mathbb{I}^{r}_{k+r}$ defined by $$ g(i)= \begin{cases} n+r & \mbox{if } f(i)=r+k,\\ f(i)+n-k \pmod{n+r} & \mbox{otherwise,} \end{cases} \qquad \mbox{for } i\in [n-k], $$ and from $f(n-k+1)\cdots f(n)$ we standardise to get $\gamma(1)\cdots\gamma(k)$. More formally, we take the order-preserving bijection $\iota:[n+r]\setminus f([n-k]) \to [k+r]$ and define $\gamma(i)=\iota \circ f(n-k+i)$ for all $i \in [k]$. Notice that the injection $g$ has no fixed points: if an integer $i \in [n-k]$ were a fixed point of $g$, that is, $g(i)=i$, then we would have $f(i)+n-k \equiv i \pmod{n+r}$, that is, $f(i)=i+r+k$, and the integer $i$ would be a $(r+k)$-succession of the injection $f$. Notice also that the inverse map $(g,\gamma) \mapsto f$ is defined by \[f(i)= \begin{cases} r+k & \text{if } g(i)=n+r,\\ g(i)+k+r \pmod{n+r} & \text{otherwise,} \end{cases} \qquad \text{for all } i\in[n-k], \] and $$f(n-k+i)=\iota^{-1}\circ \gamma(i) \text{ for all } i\in[k].$$ \end{proof}
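\medskip

\noindent Theorem \ref{main} can likewise be verified by brute force for small parameters (a Python sketch; the counts, not the bijection itself, are checked here):

\begin{verbatim}
from itertools import permutations
from math import comb, factorial

def d(k, n):
    # d^k_n via the closed form.
    return sum((-1) ** i * comb(n - k, i) * factorial(n - i)
               for i in range(n - k + 1)) // factorial(k)

def count_S(n, r, k):
    # |S(n,r,k)|: injections f from [n] to [n+r] with no
    # (r+k)-succession, i.e. no i with f(i) = i + r + k.
    total = 0
    for img in permutations(range(1, n + r + 1), n):  # f(i) = img[i-1]
        if all(img[i - 1] != i + r + k for i in range(1, n + 1)):
            total += 1
    return total

for n, r, k in [(4, 2, 1), (5, 1, 2), (5, 2, 0), (4, 3, 3)]:
    lhs = count_S(n, r, k)
    rhs = factorial(k + r) // factorial(r) * d(k + r, n + r)
    assert lhs == rhs
print("checked")
\end{verbatim}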
\section{Maximum permanents of $(0,1)$-matrices}

\begin{definition} Let $A=(a_{ij})$ be an $m \times n$ matrix with $m\leq n$. The \textit{permanent} of $A$, written $Per\ A$, is defined by $$Per\ A=\sum_{f}a_{1f(1)}a_{2f(2)}\cdots a_{mf(m)},$$ where the summation extends over all injections from $[m]$ to $[n]$. If $m>n$, we define $Per\ A=Per\ A^{T}.$ Let $A$ and $B$ be $m\times n$ matrices. We say that $B$ is combinatorially equivalent to $A$ if there exist two permutation matrices $P$ and $Q$, of orders $m$ and $n$ respectively, such that $B=PAQ$. \end{definition}

Let $k$ be an integer with $0\leq k\leq n$. We will denote by $\mathbb{U}(m, n, k)$ the set of all $m \times n\ (0,1)$-matrices with exactly $k$ zero entries. We first give some basic properties of the permanent function.

\begin{remark} By convention, for all integers $n\geq 0$ and all matrices $A\in \mathbb{U}(0, n,0)$, we set $Per\ A=1.$ \end{remark}

\begin{theorem} \cite{minc} \begin{enumerate} \item For any $m\times n$ matrix $A$, $Per\ A= Per\ A^{T}.$ \item If $A$ and $B$ are $m\times n$ combinatorially equivalent matrices, then $Per\ A=Per\ B.$ \end{enumerate} \end{theorem}

In \cite{bru}, Brualdi et al. determined the maximum permanents for $n$-square $(0,1)$-matrices with a fixed number of zero entries. In \cite{song}, Song et al. determined the extremes of permanents over $\mathbb{U}(m,n,k)$.

\begin{theorem} \cite{song} For $2\leq k\leq m$, the maximum permanent over $\mathbb{U}(m,n,k)$ is $$\sum^{m}_{i=0}(-1)^{i}{k\choose i}{{n-i}\choose{m-i}}(m-i)!.$$ This value is attained by the matrices that are combinatorially equivalent to the matrix $$A_{max}=\left[ \begin{array}{c|c} 1_{k\times k}-I_{k} & 1_{k\times (n-k)}\\ \hline \multicolumn{2}{c}{1_{(m-k)\times n}} \end{array} \right], $$ where $1_{s\times t}$ is the $s\times t\ (0,1)$-matrix with all entries equal to $1$ and $I_{k}$ is the $k$-square identity matrix. \end{theorem}

\begin{theorem} For all integers $0\leq k\leq n$, the maximum permanent over $\mathbb{U}(n-k,n,n-k)$ is equal to $d^{k}_{n}$, and it is attained by the matrices each of whose rows contains exactly one zero and each of whose columns contains at most one zero. \end{theorem}

\begin{proof} Let $A$ be an $(n-k)\times n\ (0,1)$-matrix in $\mathbb{U}(n-k,n,n-k)$ each of whose rows contains one zero and each of whose columns contains at most one zero. This matrix is combinatorially equivalent to $$ M=(m_{ij})= \left[ \begin{array}{c|c} 1_{(n-k)\times (n-k)}-I_{n-k} & 1_{(n-k)\times k} \end{array} \right] =\left[ \begin{array}{rcl|c} 0&&&\\ &\ddots&&\\ &&0& \end{array} \right], $$ where all the entries in blank positions are $1$'s. By the definition of the permanent, $\displaystyle{Per\ M=\sum_{f}m_{1f(1)}m_{2f(2)}\cdots m_{n-k\ f(n-k)}}$, where the summation extends over all injections from $[n-k]$ to $[n]$. In the expansion of $Per\ M$, determining the terms which do not contain zeros is equivalent to determining the number of injections from $[n-k]$ to $[n]$ without fixed points. This gives the required result. \end{proof}

\begin{theorem} For all integers $0\leq k\leq n$, the maximum permanent over $\mathbb{U}(n,n,n-k)$ is equal to $e^{k}_{n}$, and it is attained by the matrices each of whose rows and columns contains at most one zero. \end{theorem}

\begin{proof} Let $A$ be an $n$-square $(0,1)$-matrix in $\mathbb{U}\left(n,n,n-k\right)$ each of whose rows and columns contains at most one zero. This matrix is combinatorially equivalent to $M =\left( m_{ij} \right)$ with $$m_{ij}= \begin{cases} 0 & \mbox{if } j=i+k,\ 1\leq i \leq n-k,\\ 1 & \mbox{otherwise.} \end{cases} $$ In the expansion of $Per\ M$, determining the terms which do not contain zeros is equivalent to determining the number of permutations over $[n]$ without $k$-successions. This gives the required result. \end{proof}
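\medskip

\noindent Both extremal values are easy to confirm by direct expansion for small matrices; the following Python sketch computes the permanents of the two canonical matrices $M$ above and compares them with $d^{k}_{n}$ and $e^{k}_{n}$.

\begin{verbatim}
from itertools import permutations
from math import comb, factorial

def per(M):
    # Permanent of an m x n matrix (m <= n), expanded over injections.
    m, n = len(M), len(M[0])
    total = 0
    for img in permutations(range(n), m):
        p = 1
        for i, j in enumerate(img):
            p *= M[i][j]
        total += p
    return total

def e(k, n):
    return sum((-1) ** i * comb(n - k, i) * factorial(n - i)
               for i in range(n - k + 1))

n, k = 6, 2
# (n-k) x n, one zero per row on the diagonal:  Per M1 = d^k_n.
M1 = [[0 if j == i else 1 for j in range(n)] for i in range(n - k)]
# n x n, zeros at (i, i+k) for i in [n-k]:      Per M2 = e^k_n.
M2 = [[0 if (j == i + k and i < n - k) else 1 for j in range(n)]
      for i in range(n)]
assert per(M1) == e(k, n) // factorial(k)   # d^2_6 = 181
assert per(M2) == e(k, n)                   # e^2_6 = 362
print(per(M1), per(M2))
\end{verbatim}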
\begin{theorem} For all integers $0\leq k \leq m\leq n$, the maximum permanent over $\mathbb{U}(m,n,m-k)$ enumerates the injections from $[m]$ to $[n]$ without $(n-m+k)$-successions. \end{theorem}

\begin{proof} The matrices of the set $\mathbb{U}(m,n,m-k)$ whose permanent is maximal are combinatorially equivalent to the matrix $$A=(a_{ij})=\left[ \begin{array}{c|c} 1_{(m-k)\times (n-m+k)} & 1_{(m-k)\times (m-k)}-I_{m-k}\\ \hline \multicolumn{2}{c}{1_{k\times n}} \end{array} \right].$$ In the expansion of $Per\ A$, determining the terms which do not contain zeros is equivalent to determining the number of injections from $[m]$ to $[n]$ without $(n-m+k)$-successions. This gives the required result. \end{proof}

\begin{corollary} For all integers $0\leq k \leq m\leq n$, the maximum permanent over $\mathbb{U}(m,n,m-k)$ is equal to $$\dfrac{(n-m+k)!}{(n-m)!}\, d^{n-m+k}_{n}.$$ \end{corollary}

\begin{proof} Using Theorem \ref{main}, we obtain the required result. \end{proof}

\begin{corollary} For all integers $0\leq k \leq m\leq n$, we have $$\sum^{m-k}_{i=0}(-1)^{i}{{m-k}\choose i}{{n-i}\choose{m-i}}(m-i)!=\dfrac{(n-m+k)!}{(n-m)!}\, d^{n-m+k}_{n}.$$ \end{corollary}

\section{Acknowledgements}

The author is very grateful to a referee of the paper \cite{rak} for pointing out the two other combinatorial interpretations of the numbers $d^{k}_{n}$ and for suggesting to find bijective proofs.

\begin{thebibliography}{99}
\bibitem{bru} R. A. Brualdi, J. L. Goldwasser, T. S. Michael, Maximum permanents of matrices of zeros and ones, {\it J. Combin. Theory Ser. A} {\bf 47} (1988) 207--245.
\bibitem{clarke} R. J. Clarke, G. N. Han, J. Zeng, A combinatorial interpretation of the Seidel generation of $q$-derangement numbers, {\it Annals of Combinatorics} {\bf 1} (1997) 313--327.
\bibitem{dumont} D. Dumont, A. Randrianarivony, D\'erangements et nombres de Genocchi, {\it Discrete Math.} {\bf 132} (1997) 37--49.
\bibitem{minc} H. Minc, Permanents, in: {\it Encyclopedia Math. Appl.}, vol. {\bf 6}, Addison-Wesley, Reading (1978).
\bibitem{rak1} F. Rakotondrajao, $k$-fixed-points-permutations, {\it Pure Math. Appl.} vol. {\bf 16} (2006) xx--xx.
\bibitem{rak} F. Rakotondrajao, On Euler's difference table, in: {\it Proc. Formal Power Series \& Algebraic Combinatorics (FPSAC) 07}, Tianjin, China (2007).
\bibitem{rior} J. Riordan, \textit{An Introduction to Combinatorial Analysis}, John Wiley \& Sons, New York (1958).
\bibitem{song} S. Z. Song, S. G. Hwang, S. H. Rim, G. S. Cheon, Extremes of permanents of $(0,1)$-matrices, {\it Linear Algebra and its Applications} {\bf 373} (2003) 197--210.
\end{thebibliography}

\end{document}
\medskip \noindent \textsc{Abstract. } In this paper we will give three different objects which are combinatorially bijective and whose values are given by Euler's difference table and its derivate. \section{Introduction} We will give different objects which are combinatorially equivalent and which are enumerated by the numbers $e^{k}_{n}$ and their derivate $d^{k}_{n}$. Euler introduced the first numbers which are also called the \textit{difference factorial numbers}. Euler's difference table was studied in \cite{clarke}, \cite{dumont}, \cite{rak1} and \cite{rak} and some few first values are given in the following table. \[ \begin{tabular} {||r|rcccccc||}\hline \multicolumn{8}{||c||} {$e^{k}_{n}$}\\\hline &$k=0$&1&2&3&4&5&\\ \hline $n=0$&0!&&&&&&\\ 1&0&1!&&&&&\\ 2&1&1&2!&&&&\\ 3&2&3&4&3!&&&\\ 4&9&11&14&18&4!&&\\ 5&44&53&64&78&96&5!&\\ \hline \end{tabular} \] The coefficients $e^{k}_{n}$ of this table are defined by $$e^{n}_{n}=n! \mbox{ and } e^{k-1}_{n}=e^{k}_{n}-e^{k-1}_{n-1}.$$ The first values of the numbers $d^{k}_{n}=\dfrac{e^{k}_{n}}{k!}$ which we call the {\it derivate of Euler's difference table} (see \cite{rak1}, \cite{rak}) are given in the following table . \[ \begin{tabular} {||r|rcccccc||}\hline \multicolumn{8}{||c||} {$d^{k}_{n}$}\\\hline &$k=0$&1&2&3&4&5&\\ \hline $n=0$&1&&&&&&\\ 1&0&1&&&&&\\ 2&1&1&1&&&&\\ 3&2&3&2&1&&&\\ 4&9&11&7&3&1&&\\ 5&44&53&32&13&4&1&\\ \hline \end{tabular} \] Recall that the numbers $d^{k}_{n}$ satisfy the different following recursive relations (see \cite{rak1}, \cite{rak}) $$ \begin{cases} d^{k}_{k}=1,\\ d^{k}_{n}=(n-1)d^{k}_{n-1}+(n-k-1)d^{k}_{n-2} \mbox{ for } n > k\geq 0,\\ kd^{k}_{n}=d^{k-1}_{n-1}+d^{k-1}_{n} \mbox{ for } 1\leq k \leq n,\\ nd^{k}_{n-1}=d^{k}_{n}+d^{k-1}_{n-2} \mbox{ for } 0\leq k\leq n-1. \end{cases} $$ and their exact values are defined respectively by (see \cite{rak1}) $$e^{k}_{n}=\sum^{n-k}_{i=0}(-1)^i \dbinom{n-k}{i} (n-i)!$$ $$d^{k}_{n}=\sum^{n-k}_{i=0}(-1)^{i} \dbinom{n-k}{i}\dfrac{(n-i)!}{k!}.$$ We can find the first six columns of the array $d^{k}_{n}$ (i.e., $d^{k}_{n}$ with $k=0,1,\ldots,5$) in the Online Encyclopedia of Integer Sequences \newline \centerline{(OEIS, http://www.research.att.com/$\sim$njas/sequences/)} as sequences $A000166$, $A000153$, $A00261$, $A001909$ and $A001910$ respectively, and the first seven diagonals (i.e., $d^{n}_{n+k}$ with $k=0,1,\ldots,6$) as sequences $A000012$, $A000027$, $A002061$, $A094792$, $A094793$, $A094794$ and $A094795$ respectively. The diagonals are interepreted as the maximum values of permanent (\cite{bru}, \cite{minc}) among all $0-1$ matrices (see \cite{song}) of dimension $(n-k) \times n$ with exactly $n-k$ zero entries for $k=1,2,\ldots$ and the columns as the number of injections from $[n-k]$ to $[n]$ with no fixed points. The author (\cite{rak1}, \cite{rak}) introduced the $k$-fixed-points-permutations, that is, permutations whose fixed points belong to $[k]$ and whose every cycle has at most one point in common with $[k]$. In the other hand, $(0,1)$-matrices and their permanent play important part in many fields of discrete mathematics namely in graph theory, coding theory, combinatorics and linear algebra. In this paper we will show that these different three objects are combinatorially bijective and will give a general result on the maximum permanent of $(0,1)$-matrices. We will denote by $[n]$ the set $\{1,\ldots,n\}$ and by $D^{k}_{n}$ the set of $k$-fixed-points-permutations. 
We say that an element $x \in X$ is a fixed point of the map $f$ from the set $X$ to the set $Y$ if $f(x)=x$ and an element $x$ is a $k$-succession if $f(x)=x+k$. We say that the map $f$ is injective (an injection) if $f(x_1)=f(x_2)$ then $x_1=x_2$. We will denote by $Im(f)$ the set of the image of the map $f$ and by $W^{k}_{n}$ the set of injections from $[n-k]$ to $[n]$ without fixed points. We will write $f=f(1)f(2)\ldots f(n-k).$ \section{Injections from $[n-k]$ to $[n]$ without fixed points} \begin{theorem} The number $d^{k}_{n}$ enumerates the number of injections from $[n-k]$ to $[n]$ without fixed points. \end{theorem} \begin{proof} For an integer $0\leq i \leq n-k,$ the number of injections from $[i]$ to $[n]$ is equal to $\dfrac{n!}{(n-i)!}$. The number of injections from $[n-k]$ to $[n]$ having $i$ fixed points is $\dfrac{(n-i)!}{k!}$, and the number of selecting $i$ elements from $n-k$ elements is $\dbinom{n-k}{i}$. By the inclusion-exclusion principle \cite{rior}, we get the number of injections from $[n-k]$ to $[n]$ without fixed points which is $$\sum^{n-k}_{i=0}(-1)^{i} \dbinom{n-k}{i}\dfrac{(n-i)!}{k!},$$ which is the formula of the numbers $d^{k}_{n}$. \end{proof} \section{Bijection between $D^{k}_{n}$ and $W^{k}_{n}$} Let $k$ and $n$ be two integers such that $0\leq k\leq n$. Let us consider the map $\phi$ from $D^{k}_{n}$ to $W^{k}_{n}$ which associates to a permutation $\sigma$ a map $f$ defined by $$f(i)=n+1-\sigma(n+1-i) \mbox{ for } i\in [n-k].$$ \begin{proof} Notice that if the integer $k=0$, then the sets $D^{k}_{n}$ and $W^{k}_{n}$ are the same: they are all the set of permutations without fixed points over $[n]$. Assume $k\geq 1$. Let $\sigma$ be a $k$-fixed-points-permutation. For $1\leq i \leq k$ we have $\sigma(i)=i$ or $\sigma(i)>k$ and for $k+1\leq i\leq n$ we have $\sigma(i)\neq i$. First we prove that the map $\phi$ is well defined, that is, we prove that the map $f=\phi(\sigma)$ is an injection from $[n-k]$ to $[n]$. If we had $f(i)=i$, that is, $n+1-\sigma(n+1-i)=i$, then we should have $\sigma(n+1-i)=n+1-i$ (impossible since $i \in [n-k]$ and the fixed points of the permutation $\sigma$ are in the subsetb $[k]$). By the construction of the map $\phi$, for a given $k$-fixed-points-permutation over $[n]$, the map $f=\phi(\sigma)$ is unique and if $\sigma_1 \neq \sigma_2$, then $\phi(\sigma_1)\neq \phi(\sigma_2)$. The inverse of the map $\phi$ associates to a given injection $f$ of the set $W^{k}_{n}$ the $k$-fixed-points-permutation $\sigma$ defined by $$\sigma(n+1-i)=n+1-f(i)\mbox{ for } i\in [n-k].$$ \end{proof} \begin{corollary} For all integers $i\in [k]$ and for all $f=\phi(\sigma)$, we have $$\sigma(i)=i \Leftrightarrow n+1-i \notin Im(f).$$ \end{corollary} \begin{proof} For any integer $i\in [k]$, we have $n-k+1 \leq n+1-i \leq n$ and $\sigma(i)=i \Leftrightarrow f(n+1-i)=n+1-i.$ \end{proof} Let us illustrate our map $\phi$ by an example. 
\newline Let $k=3$ and $\sigma=(1\ 7\ 4)(2)(3\ 8\ 12)(6\ 9)(5\ 10\ 11).$ We have \begin{itemize} \item[] $f(1)=13-\sigma(12)=10$ \item[] $f(2)=13-\sigma(11)=8$ \item[] $f(3)=13-\sigma(10)=2$ \item[] $f(4)=13-\sigma(9)=7$ \item[] $f(5)=13-\sigma(8)=1$ \item[] $f(6)=13-\sigma(7)=9$ \item[] $f(7)=13-\sigma(6)= 4$ \item[] $f(8)=13-\sigma(5)= 3$ \item[] $f(9)=13-\sigma(4)= 12,$ \end{itemize} that is, we get $f=\phi(\sigma)= 10\ 8\ 2\ 7\ 1\ 9\ 4\ 3\ 12.$ And for its inverse, we have \begin{itemize} \item[] $\sigma(12)=13-f(1)=3$ \item[] $\sigma(11)=13-f(2)=5$ \item[] $\sigma(10)=13-f(3)=11$ \item[] $\sigma(9)=13-f(4)=6$ \item[] $\sigma(8)=13-f(5)=12$ \item[] $\sigma(7)=13-f(6)=4$ \item[] $\sigma(6)=13-f(7)= 9$ \item[] $\sigma(5)=13-f(8)= 10$ \item[] $\sigma(4)=13-f(9)= 1,$ \end{itemize} that is, $\sigma=(8\ 12\ 3)(11\ 5\ 10)(9\ 6)(7\ 4\ 1)(2).$ \section{Permutations without $k$-successions} We say that an integer $i$ is a $k$-succession of the permutation $\sigma$ if $\sigma(i)=i+k$ (see \cite{rak}). \begin{theorem} \cite{rak} The number $e^{k}_{n}$ enumerates the permutations over $[n]$ without $k$-successions. \end{theorem} \begin{proof} Notice that if an integer $p$ is a $k$-succession of the permutation $\sigma$, then $p \in [n-k]$. The number of injections from $[n]$ to $[n]$ having $i$ numbers of $k$-successions is equal to $(n-i)!$, and the number of selecting $i$ elements from $n-k$ elements is $\dbinom{n-k}{i}$. By the inclusion-exclusion principle \cite{rior}, we get the number of permutations without fixed points over $[n]$ which is $$\sum^{n-k}_{i=0}(-1)^{i} \dbinom{n-k}{i}(n-i)!=e^{k}_{n}.$$ \end{proof} \section{Injections without $k$-successions} \begin{theorem} For all integers $0\leq k \leq m\leq n$, the number $d(m,n,k)$ of injections from $[m]$ to $[n]$ without $(n-m+k)$-successions is equal to $$\sum^{m-k}_{i=0}(-1)^{i}\dbinom{m-k}{i}\dfrac{(n-i)!}{(n-m)!}.$$ \end{theorem} \begin{proof} Notice that if an integer $p$ is a $(n-m+k)$-succession of a map $f$ from $[m]$ to $[n]$, then $p \in [m-k]$. The number of injections from $[m]$ to $[n]$ having $i$ numbers of $(n-m+k)$-successions is equal to $\dfrac{(n-i)!}{(n-m)!}$ and the number of selecting $i$ elements from $m-k$ elements is ${{m-k}\choose i}$. By the inclusion-exclusion principle \cite{rior}, we get the required result. %number of injections from $[m]$ to $[n]$ without $(n-m+k)$-successions which is % $$\sum^{m-k}_{i=0}(-1)^{i} \dbinom{m-k}{i}\dfrac{(n-i)!}{(n-m)!}.$$ \end{proof} \begin{corollary} For all nonnegative integers $r$ and $0\leq k \leq n$, the number $d^{(r)}_{n,k}=d(n,n+r,k)$ of injections from $[n]$ to $[n+r]$ without $(r+k)$-successions is equal to $$\sum^{n-k}_{i=0}(-1)^{i}\dbinom{n-k}{i}\dfrac{(n+r-i)!}{r!}.$$ \end{corollary} Let us give some first values of the numbers $d^{(r)}_{n,k}$ for few given integers $r$. 
\[ \begin{tabular} {||r|rcccccc||}\hline \multicolumn{8}{||c||} {$d^{(0)}_{n,k}$}\\\hline &$k=0$&1&2&3&4&5&6\\ \hline $n=0$&0!&&&&&&\\ 1&0&1!&&&&&\\ 2&1&1&2!&&&&\\ 3&2&3&4&3!&&&\\ 4&9&11&14&18&4!&&\\ 5&44&53&64&78&96&5!&\\ 6&265&309&362&426&504&600&6!\\\hline \end{tabular} \] \[ \begin{tabular} {||r|rccccc||}\hline \multicolumn{7}{||c||} {$d^{(1)}_{n,k}$}\\\hline &$k=0$&1&2&3&4&5\\ \hline $n=0$&1&&&&&\\ 1&1&2!&&&&\\ 2&3&4&3!&&&\\ 3&11&14&18&4!&&\\ 4&53&64&78&96&5!&\\ 5&309&362&426&504&600&6!\\\hline \end{tabular} \hfill \qquad \begin{tabular} {||r|rcccc||}\hline \multicolumn{6}{||c||} {$d^{(2)}_{n,k}$}\\\hline &$k=0$&1&2&3&4\\ \hline n=0&1&&&&\\ 1&2&3&&&\\ 2&7&9&12&&\\ 3&32&39&48&60&\\ 4&181&213&252&300&360\\\hline \end{tabular} \] \[ \begin{tabular} {||r|rccc||}\hline \multicolumn{5}{||c||} {$d^{(3)}_{n,k}$}\\\hline &$k=0$&1&2&3\\ \hline n=0&1&&&\\ 1&3&4&&\\ 2&13&16&20&\\ 3&71&84&100&120\\\hline \end{tabular} \] Unexpectedly, we obtain the following theorem. \begin{theorem}\label{main} For all nonnegative integers $r$ and $0\leq k \leq n$, we have $$d^{(r)}_{n,k}= \dfrac{(k+r)!}{r!}d^{k+r}_{n+r}.$$ \end{theorem} \begin{proof} Let us denote by $\mathbb{I}^{r}_{k+r}$ the set of all injections from the set $[k]$ to $[k+r]$, by $\mathbb{S}(n,r,k)$ the set of all injections from $[n]$ to $[n+r]$ without $(r+k)$-successions. We will construct a bijection between $\mathbb{S}(n,r,k)$ and $W^{r+k}_{n+r}\times \mathbb{I}^{r}_{k+r}.$ For a given injection $f \in \mathbb{S}(n,r,k)$, we associate the pair $(g,\gamma) \in W^{r+k}_{n+r}\times \mathbb{I}^{r}_{k+r}$ defined by $$ g(i)= \begin{cases} f(i)+n-k\mbox{ mod }n+r \\ n+r \mbox{ if } f(i)=r+k \end{cases} \mbox{ for } i\in [n-k]. $$ and from $f(n-k+1)\cdots f(n)$ we standardise to get $\gamma(1)\cdots\gamma(k)$. More formally, let us take the order preserving bijection $\iota:[n+r]\setminus f([n-k]) \to [k+r]$ and we define $\gamma(i)=\iota \circ f(n-k+i)$ for all $i \in [k]$. Notice that the injection $g$ has no fixed points: if an integer $i \in [n-k]$ were a fixed point for $g$, that is, $g(i)=i$, then we would have $f(i)+n-k [\mbox{ mod }(n+r)]=i$, that is, $f(i)=i+r+k$ and the integer $i$ would be a $(r+k)$-succession for the injection $f$. Notice also that the inverse map $(g,\gamma) \mapsto f$ is defined by \[f(i)= \begin{cases} g(i)+k+r\mbox{ mod }n+r \\ r+k \text{ if } g(i)=n+r \end{cases} \text{ for all }i\in[n-k], \] and $$f(n-k+i)=\iota^{-1}\circ \gamma(i) \text{ for all } i\in[k].$$ \end{proof} \section{Maximum permanents of $(0,1)$-matrices} \begin{definition} Let $A=(a_{ij})$ be an $m \times n$ matrix with $m\leq n$. The \textit{permanent} of $A$, written $Per\ A$, is defined by $$Per\ A=\sum_{f}a_{1f(1)}a_{2f(2)}\cdots a_{mf(m)},$$ where the summation extends over all injections from $[m]$ to $[n]$. If $m>n$, we define $Per\ A=Per\ A^{T}.$ Let $A$ and $B$ be $m\times n$ matrices. We say that $B$ is combinatorially equivalent to $A$ if there exist two permutation matrices $P$ and $Q$ of orders $m$ and $n$ respectively such that $B=PAQ$. \end{definition} Let $k$ be an integer with $0\leq k\leq n$. We will denote by $\mathbb{U}(m, n, k)$ the set of all $m \times n\ (0,1)$-matrices with exactly $k$ zero entries. We give first some basic properties of the permanent function. 
\begin{remark} For convention, assume that for all integers $0\leq n$ and for all matrices $A\in \mathbb{U}(0, n,0)$, we have $Per\ A=1.$ \end{remark} \begin{theorem} \cite{minc} \begin{enumerate} \item For any $m\times n$ matrix $A$, $Per\ A= Per\ A^{T}.$ \item If $A$ and $B$ are $m\times n$ combinatorially equivalent matrices, then $Per\ A=Per\ B.$ \end{enumerate} \end{theorem} In \cite{bru}, Brualdi et al. determined the maximum permanents for $n$-square $(0,1)$-matrices with a fixed number of zero entries. In \cite{song}, Song et al. determined the extremes of permanents over $\mathbb{U}(m,n,k)$. \begin{theorem} \cite{song} For $2\leq k\leq m$, the maximum permanent over $\mathbb{U}(m,n,k)$ is $$\sum^{m}_{i=0}(-1)^{i}{k\choose i}{{n-i}\choose{m-i}}(m-i)!.$$ This value is attained by the matrices that are combinatorially equivalent to the matrix $$A_{max}=\left[ \begin{array}[pos]{cc} 1_{k\times k}-I_{k}\ |&{1}_{k\times n-k}\\\hline {1}_{m-k\times n}& \end{array} \right] $$ where $1_{s\times t}$ is the $s\times t\ (0,1)$-matrix with all entries equal to $1$ and $I_{k}$ is the $k$-square identity matrix. \end{theorem} \begin{theorem} For all integers $0\leq k\leq n$, the maximum permanent over $\mathbb{U}(n-k,n,n-k)$ is equal to $d^{k}_{n}$ and it is attained by the matrices whose each line contains exactly one zero and whose each column contains at most one zero. \end{theorem} \begin{proof} Let $A$ be a $n-k\times n \ (0,1)$-matrix in $\mathbb{U}(n-k,n,n-k)$ whose each line contains one zero and whose each column contains at most one zero. This matrix is combinatorially equivalent to $$ M=(m_{ij})= \left[ \begin{array}[pos]{c|c} 1_{n-k\times n-k}-I_{n-k}&{1}_{n-k\times k} \end{array} \right] =\left[ \begin{array}[pos]{rcl|c} 0&&&\\ &\ddots&&\\ &&0& \end{array} \right], $$ where all the entries in blank positions are $1$'s. By definition of permanent, $\displaystyle{Per\ M=\sum_{f}m_{1f(1)}m_{2f(2)}\cdots m_{n-k\ f(n-k)}}$ where the summation extends over all injections from $[n-k]$ to $[n]$. In the expansion of $Per\ M$, to determine the terms which do not contain zeros is equivalent to determine the number of injections from $[n-k]$ to $[n]$ without fixed points. And this gives the required result. \end{proof} \begin{theorem} For all integers $0\leq k\leq n$, the maximum permanent over $\mathbb{U}(n,n,n-k)$ is equal to $e^{k}_{n}$ and it is attained by the matrices whose each line and each column contains at most one zero. \end{theorem} \begin{proof} Let $A$ be a $n$-square $(0,1)$-matrix in $\mathbb{U}\left(n,n,n-k\right)$ whose each line and each column contains at most one zero. This matrix is combinatorially equivalent to $M =\left( m_{ij} \right)$ such that $$m_{ij}=\left{ \begin{cases} 0 \mbox{ if } j=i+k, 1\leq i \leq n-k\\ 1 \mbox{ else.} \end{cases} \right. $$ In the expansion of $Per\ M$, to determine the terms which do not contain zeros is equivalent to determine the number of permutations over $[n]$ without $k$-successions. And this gives the required result. \end{proof} \begin{theorem} For all integers $0\leq k \leq m\leq n$, the maximum permanent over $\mathbb{U}(m,n,m-k)$ enumerates the number of injections from $[m]$ to $[n]$ without $(n-m+k)$-successions. 
\end{theorem} \begin{proof} The matrices of the set $\mathbb{U}(m,n,m-k)$ whose permanent is maximal are combinatorially equivalent to the matrix $$A=a_{ij}=\left[ \begin{array}[pos]{cc} {1}_{m-k\times n-m+k}\ |&1_{m-k\times m-k}-I_{m-k}\ \\\hline {1}_{k\times n}& \end{array} \right].$$ In the expansion of $Per\ A$, to determine the terms which do not contain zeros is equivalent to determine the number of injections from $[m]$ to $[n]$ without $(n-m+k)$-successions. And this gives the required result. \end{proof} \begin{corollary} For all integers $0\leq k \leq m\leq n$, the maximum permanent over $\mathbb{U}(m,n,m-k)$ is equal to $$\dfrac{(n-m+k)!}{(n-m)!}d^{n-m+k}_{n}.$$ \end{corollary} \begin{proof} Using Theorem \ref{main}, we obtain the required result. \end{proof} \begin{corollary} For all integers $0\leq k \leq m\leq n$, we have $$\sum^{m-k}_{i=0}(-1)^{i}{{m-k}\choose i}{{n-i}\choose{m-i}}(m-i)!=\dfrac{(n-m+k)!}{(n-m)!}d^{n-m+k}_{n}.$$ %that is, %$$\sum^{m-k}_{i=0}(-1)^{i}{{m-k}\choose i}{{n-i}\choose{m-i}}(m-i)!=\dfrac{(n-m+k)!}{(n-m)!}\sum^{m-k}_{i=0}(-1)^{i} \dbinom{m-k}{i}\dfrac{(n-i)!}{(n-m+k)!}.$$ %$$\sum^{m}_{i=0}(-1)^{i}{k\choose i}{{n-i}\choose{m-i}}(m-i)!.$$ \end{corollary} %%%\section{Tables for some maximum permanents} \section{Acknowledgements} The author is very grateful to a referee of the paper \cite{rak} for her/his pointing out of the two other combinatorial interpretations of the numbers $d^{k}_{n}$ and suggesting to find bijective proofs. \begin{thebibliography}{99} \bibitem{bru} R. A. Brualdi, J. L. Goldwasser, T. S. Michael, Maximum permanents of matrices of zeros and ones, {\it J. Combin. Theory Ser.} {\bf A47} (1988) 207 -- 245. \bibitem{clarke} R. J. Clarke, G. N. Han, J. Zeng, A combinatorial interpretation of the Seidel generation of $q$-derangement numbers, {\it Annals of combinatorics} \textbf{1} (1997) 313--327. \bibitem{dumont} D. Dumont, A. Randrianarivony, D\'erangements et nombres de Genocchi, {\it Discrete Math.} {\bf 132} (1997) 37--49. \bibitem{minc} H. Minc, Permanents, in: {\it Encyclopedia Math. Appl.} vol. {\bf 6}, Addison-Wesley, Reading (1978). \bibitem{rak1} F. Rakotondrajao, $k$-fixed-points-permutations, {\it Pure Math. Appl.} vol. {\bf 16} (2006) xx -- xx. \bibitem{rak} F. Rakotondrajao, On Euler's difference table, in: {\it Proc. Formal Power Series \& Algebraic Combinatorics (FPSAC) 07} , Tianjin, China (2007). \bibitem{rior} J. Riordan, \textit{An Introduction to Combinatorial Analysis}, John Wiley \& Sons, New York (1958). \bibitem{song} S. Z. Song, S. G. Hwang, S. H. Rim, G. S. Cheon, Extremes of permanents of $(0, 1)$ - matrices, {\it Linear Algebra and its Applications} {\bf 373} (2003) 197 -- 210. \end{thebibliography} %$k$-fixed-points-permutations, $k$-succession, $(0,1)$-matrices, permanent, injections, inclusion-exclusion principle","05A19","05B20","The author was supported by the `Soutien aux Activit\'es de Recherche Informatique et Math\'ematiques en Afrique' (SARIMA) project and by LIAFA during her stay at the University of Paris 7, France as invited `Ma\^itre de conf\'erences'.","10:10:21","Fri Feb 08 2008","132.227.81.242" %"Rakotondrajao %Fanja","frakoton@univ-antananarivo.mg"," \section*{EULER'S DIFFERENCE TABLE AND MAXIMUM PERMANENTS OF $(0,1)$-MATRICES } By {\sl Fanja Rakotondrajao}. \medskip \noindent \textsc{Abstract. } In this paper we will give three different objects which are combinatorially bijective and whose values are given by Euler's difference table and its derivate. 
\section{Introduction} We will give different objects which are in bijection with one another and which are enumerated by the numbers $e^{k}_{n}$ and their derivate $d^{k}_{n}$. Euler introduced the numbers $e^{k}_{n}$, which are also called the \textit{difference factorial numbers}. Euler's difference table was studied in \cite{clarke}, \cite{dumont}, \cite{rak1} and \cite{rak}, and its first few values are given in the following table. \[ \begin{tabular} {||r|rcccccc||}\hline \multicolumn{8}{||c||} {$e^{k}_{n}$}\\\hline &$k=0$&1&2&3&4&5&\\ \hline $n=0$&0!&&&&&&\\ 1&0&1!&&&&&\\ 2&1&1&2!&&&&\\ 3&2&3&4&3!&&&\\ 4&9&11&14&18&4!&&\\ 5&44&53&64&78&96&5!&\\ \hline \end{tabular} \] The coefficients $e^{k}_{n}$ of this table are defined by $$e^{n}_{n}=n! \mbox{ and } e^{k-1}_{n}=e^{k}_{n}-e^{k-1}_{n-1}.$$ The first values of the numbers $d^{k}_{n}=\dfrac{e^{k}_{n}}{k!}$, which we call the {\it derivate of Euler's difference table} (see \cite{rak1}, \cite{rak}), are given in the following table. \[ \begin{tabular} {||r|rcccccc||}\hline \multicolumn{8}{||c||} {$d^{k}_{n}$}\\\hline &$k=0$&1&2&3&4&5&\\ \hline $n=0$&1&&&&&&\\ 1&0&1&&&&&\\ 2&1&1&1&&&&\\ 3&2&3&2&1&&&\\ 4&9&11&7&3&1&&\\ 5&44&53&32&13&4&1&\\ \hline \end{tabular} \] Recall that the numbers $d^{k}_{n}$ satisfy the following recurrence relations (see \cite{rak1}, \cite{rak}) $$ \begin{cases} d^{k}_{k}=1,\\ d^{k}_{n}=(n-1)d^{k}_{n-1}+(n-k-1)d^{k}_{n-2} \mbox{ for } n > k\geq 0,\\ kd^{k}_{n}=d^{k-1}_{n-1}+d^{k-1}_{n} \mbox{ for } 1\leq k \leq n,\\ nd^{k}_{n-1}=d^{k}_{n}+d^{k-1}_{n-2} \mbox{ for } 1\leq k\leq n-1, \end{cases} $$ (for instance, $d^{1}_{5}=4\,d^{1}_{4}+3\,d^{1}_{3}=4\cdot 11+3\cdot 3=53$, in agreement with the table), and their exact values are given by (see \cite{rak1}) $$e^{k}_{n}=\sum^{n-k}_{i=0}(-1)^i \dbinom{n-k}{i} (n-i)!$$ $$d^{k}_{n}=\sum^{n-k}_{i=0}(-1)^{i} \dbinom{n-k}{i}\dfrac{(n-i)!}{k!}.$$ We can find the first six columns of the array $d^{k}_{n}$ (i.e., $d^{k}_{n}$ with $k=0,1,\ldots,5$) in the Online Encyclopedia of Integer Sequences \newline \centerline{(OEIS, http://www.research.att.com/$\sim$njas/sequences/)} as sequences $A000166$, $A000255$, $A000153$, $A000261$, $A001909$ and $A001910$ respectively, and the first seven diagonals (i.e., $d^{n}_{n+k}$ with $k=0,1,\ldots,6$) as sequences $A000012$, $A000027$, $A002061$, $A094792$, $A094793$, $A094794$ and $A094795$ respectively. The diagonals are interpreted as the maximum values of the permanent (\cite{bru}, \cite{minc}) among all $(0,1)$-matrices (see \cite{song}) of dimension $(n-k) \times n$ with exactly $n-k$ zero entries for $k=1,2,\ldots$, and the columns as the numbers of injections from $[n-k]$ to $[n]$ with no fixed points. The author (\cite{rak1}, \cite{rak}) introduced the $k$-fixed-points-permutations, that is, permutations whose fixed points belong to $[k]$ and whose every cycle has at most one point in common with $[k]$. On the other hand, $(0,1)$-matrices and their permanents play an important part in many fields of discrete mathematics, namely in graph theory, coding theory, combinatorics and linear algebra. In this paper we will show that these three different objects are in combinatorial bijection and will give a general result on the maximum permanent of $(0,1)$-matrices. We will denote by $[n]$ the set $\{1,\ldots,n\}$ and by $D^{k}_{n}$ the set of $k$-fixed-points-permutations over $[n]$. We say that an element $x \in X$ is a fixed point of the map $f$ from the set $X$ to the set $Y$ if $f(x)=x$, and that an element $x$ is a $k$-succession if $f(x)=x+k$. We say that the map $f$ is injective (an injection) if $f(x_1)=f(x_2)$ implies $x_1=x_2$.
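To illustrate these definitions, consider the map $f$ from $[3]$ to $[5]$ defined by $f(1)=1$, $f(2)=4$ and $f(3)=5$: the map $f$ is an injection, the integer $1$ is a fixed point of $f$, and the integers $2$ and $3$ are $2$-successions since $f(2)=2+2$ and $f(3)=3+2$.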
We will denote by $Im(f)$ the image of the map $f$ and by $W^{k}_{n}$ the set of injections from $[n-k]$ to $[n]$ without fixed points. We will write $f=f(1)f(2)\ldots f(n-k).$ \section{Injections from $[n-k]$ to $[n]$ without fixed points} \begin{theorem} The number $d^{k}_{n}$ enumerates the injections from $[n-k]$ to $[n]$ without fixed points. \end{theorem} \begin{proof} Recall that for an integer $0\leq i \leq n-k,$ the number of injections from $[i]$ to $[n]$ is equal to $\dfrac{n!}{(n-i)!}$. The number of injections from $[n-k]$ to $[n]$ fixing a given set of $i$ elements is $\dfrac{(n-i)!}{k!}$, and the number of ways of selecting $i$ elements among $n-k$ is $\dbinom{n-k}{i}$. By the inclusion-exclusion principle \cite{rior}, the number of injections from $[n-k]$ to $[n]$ without fixed points is $$\sum^{n-k}_{i=0}(-1)^{i} \dbinom{n-k}{i}\dfrac{(n-i)!}{k!},$$ which is the formula for the numbers $d^{k}_{n}$. For example, $d^{1}_{3}=6-4+1=3$, and indeed $W^{1}_{3}=\{21,\ 23,\ 31\}$. \end{proof} \section{Bijection between $D^{k}_{n}$ and $W^{k}_{n}$} Let $k$ and $n$ be two integers such that $0\leq k\leq n$, and let us consider the map $\phi$ from $D^{k}_{n}$ to $W^{k}_{n}$ which associates to a permutation $\sigma$ the map $f$ defined by $$f(i)=n+1-\sigma(n+1-i) \mbox{ for } i\in [n-k].$$ \begin{theorem} The map $\phi$ is a bijection from $D^{k}_{n}$ onto $W^{k}_{n}$. \end{theorem} \begin{proof} Notice that if $k=0$, then the sets $D^{k}_{n}$ and $W^{k}_{n}$ coincide: both are the set of permutations of $[n]$ without fixed points. Assume $k\geq 1$. Let $\sigma$ be a $k$-fixed-points-permutation. For $1\leq i \leq k$ we have $\sigma(i)=i$ or $\sigma(i)>k$, and for $k+1\leq i\leq n$ we have $\sigma(i)\neq i$. First we prove that the map $\phi$ is well defined, that is, that the map $f=\phi(\sigma)$ is an injection from $[n-k]$ to $[n]$ without fixed points. Since $\sigma$ is a bijection, $f$ is injective. If we had $f(i)=i$, that is, $n+1-\sigma(n+1-i)=i$, then we would have $\sigma(n+1-i)=n+1-i$, which is impossible since $i \in [n-k]$ gives $n+1-i>k$ while the fixed points of the permutation $\sigma$ lie in the subset $[k]$. By the construction of the map $\phi$, for a given $k$-fixed-points-permutation $\sigma$ over $[n]$, the map $f=\phi(\sigma)$ is unique, and if $\sigma_1 \neq \sigma_2$, then $\phi(\sigma_1)\neq \phi(\sigma_2)$: the map $f$ determines the values of $\sigma$ on $\{k+1,\ldots,n\}$, and these values determine $\sigma$ completely, since every cycle of $\sigma$ either is a fixed point in $[k]$ or meets $\{k+1,\ldots,n\}$. The inverse of the map $\phi$ associates to a given injection $f$ of the set $W^{k}_{n}$ the $k$-fixed-points-permutation $\sigma$ defined by $$\sigma(n+1-i)=n+1-f(i)\mbox{ for } i\in [n-k],$$ and extended to $[k]$ in the unique way that yields a $k$-fixed-points-permutation: the integers $i\in[k]$ with $n+1-i \notin Im(f)$ are the fixed points of $\sigma$, and the remaining values on $[k]$ are forced by closing the cycles. \end{proof} \begin{corollary} For all integers $i\in [k]$ and for all $f=\phi(\sigma)$, we have $$\sigma(i)=i \Leftrightarrow n+1-i \notin Im(f).$$ \end{corollary} \begin{proof} For any integer $i\in [k]$, the element $n+1-i$ belongs to $Im(f)$ if and only if $i \in \sigma(\{k+1,\ldots,n\})$. Since every cycle of $\sigma$ has at most one point in common with $[k]$, this happens exactly when $i$ is not a fixed point of $\sigma$. \end{proof} Let us illustrate our map $\phi$ by an example.
\newline Let $k=3$, $n=12$ and $\sigma=(1\ 7\ 4)(2)(3\ 8\ 12)(6\ 9)(5\ 10\ 11),$ a $3$-fixed-points-permutation over $[12]$. We have \begin{itemize} \item[] $f(1)=13-\sigma(12)=10$ \item[] $f(2)=13-\sigma(11)=8$ \item[] $f(3)=13-\sigma(10)=2$ \item[] $f(4)=13-\sigma(9)=7$ \item[] $f(5)=13-\sigma(8)=1$ \item[] $f(6)=13-\sigma(7)=9$ \item[] $f(7)=13-\sigma(6)= 4$ \item[] $f(8)=13-\sigma(5)= 3$ \item[] $f(9)=13-\sigma(4)= 12,$ \end{itemize} that is, we get $f=\phi(\sigma)= 10\ 8\ 2\ 7\ 1\ 9\ 4\ 3\ 12.$ Note that $2$ is the unique fixed point of $\sigma$ and, accordingly, $11=13-2$ is the unique element of $\{10,11,12\}$ which does not belong to $Im(f)$. For the inverse, we have \begin{itemize} \item[] $\sigma(12)=13-f(1)=3$ \item[] $\sigma(11)=13-f(2)=5$ \item[] $\sigma(10)=13-f(3)=11$ \item[] $\sigma(9)=13-f(4)=6$ \item[] $\sigma(8)=13-f(5)=12$ \item[] $\sigma(7)=13-f(6)=4$ \item[] $\sigma(6)=13-f(7)= 9$ \item[] $\sigma(5)=13-f(8)= 10$ \item[] $\sigma(4)=13-f(9)= 1,$ \end{itemize} that is, $\sigma=(8\ 12\ 3)(11\ 5\ 10)(9\ 6)(7\ 4\ 1)(2).$ \section{Permutations without $k$-successions} We say that an integer $i$ is a $k$-succession of the permutation $\sigma$ if $\sigma(i)=i+k$ (see \cite{rak}). \begin{theorem} \cite{rak} The number $e^{k}_{n}$ enumerates the permutations over $[n]$ without $k$-successions. \end{theorem} \begin{proof} Notice that if an integer $p$ is a $k$-succession of the permutation $\sigma$, then $p \in [n-k]$. The number of permutations of $[n]$ having $k$-successions at a given set of $i$ positions is equal to $(n-i)!$, and the number of ways of selecting $i$ elements among $n-k$ is $\dbinom{n-k}{i}$. By the inclusion-exclusion principle \cite{rior}, the number of permutations over $[n]$ without $k$-successions is $$\sum^{n-k}_{i=0}(-1)^{i} \dbinom{n-k}{i}(n-i)!=e^{k}_{n}.$$ \end{proof} \section{Injections without $k$-successions} \begin{theorem} For all integers $0\leq k \leq m\leq n$, the number $d(m,n,k)$ of injections from $[m]$ to $[n]$ without $(n-m+k)$-successions is equal to $$\sum^{m-k}_{i=0}(-1)^{i}\dbinom{m-k}{i}\dfrac{(n-i)!}{(n-m)!}.$$ \end{theorem} \begin{proof} Notice that if an integer $p$ is a $(n-m+k)$-succession of a map $f$ from $[m]$ to $[n]$, then $p \in [m-k]$. The number of injections from $[m]$ to $[n]$ having $(n-m+k)$-successions at a given set of $i$ positions is equal to $\dfrac{(n-i)!}{(n-m)!}$, and the number of ways of selecting $i$ elements among $m-k$ is ${{m-k}\choose i}$. By the inclusion-exclusion principle \cite{rior}, we get the required result. \end{proof} \begin{corollary} For all nonnegative integers $r$ and $0\leq k \leq n$, the number $d^{(r)}_{n,k}=d(n,n+r,k)$ of injections from $[n]$ to $[n+r]$ without $(r+k)$-successions is equal to $$\sum^{n-k}_{i=0}(-1)^{i}\dbinom{n-k}{i}\dfrac{(n+r-i)!}{r!}.$$ \end{corollary}
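As a quick check, take $n=3$, $k=1$ and $r=1$ in this corollary: $$d^{(1)}_{3,1}=\sum^{2}_{i=0}(-1)^{i}\dbinom{2}{i}\dfrac{(4-i)!}{1!}=24-12+2=14,$$ which is the value that appears as the entry $(n,k)=(3,1)$ of the second table below.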
Let us give the first values of the numbers $d^{(r)}_{n,k}$ for a few given integers $r$. \[ \begin{tabular} {||r|rcccccc||}\hline \multicolumn{8}{||c||} {$d^{(0)}_{n,k}$}\\\hline &$k=0$&1&2&3&4&5&6\\ \hline $n=0$&0!&&&&&&\\ 1&0&1!&&&&&\\ 2&1&1&2!&&&&\\ 3&2&3&4&3!&&&\\ 4&9&11&14&18&4!&&\\ 5&44&53&64&78&96&5!&\\ 6&265&309&362&426&504&600&6!\\\hline \end{tabular} \] \[ \begin{tabular} {||r|rccccc||}\hline \multicolumn{7}{||c||} {$d^{(1)}_{n,k}$}\\\hline &$k=0$&1&2&3&4&5\\ \hline $n=0$&1&&&&&\\ 1&1&2!&&&&\\ 2&3&4&3!&&&\\ 3&11&14&18&4!&&\\ 4&53&64&78&96&5!&\\ 5&309&362&426&504&600&6!\\\hline \end{tabular} \hfill \qquad \begin{tabular} {||r|rcccc||}\hline \multicolumn{6}{||c||} {$d^{(2)}_{n,k}$}\\\hline &$k=0$&1&2&3&4\\ \hline $n=0$&1&&&&\\ 1&2&3&&&\\ 2&7&9&12&&\\ 3&32&39&48&60&\\ 4&181&213&252&300&360\\\hline \end{tabular} \] \[ \begin{tabular} {||r|rccc||}\hline \multicolumn{5}{||c||} {$d^{(3)}_{n,k}$}\\\hline &$k=0$&1&2&3\\ \hline $n=0$&1&&&\\ 1&3&4&&\\ 2&13&16&20&\\ 3&71&84&100&120\\\hline \end{tabular} \] Unexpectedly, we obtain the following theorem. \begin{theorem}\label{main} For all nonnegative integers $r$ and $0\leq k \leq n$, we have $$d^{(r)}_{n,k}= \dfrac{(k+r)!}{r!}d^{k+r}_{n+r}.$$ \end{theorem} For instance, $d^{(1)}_{3,1}=\dfrac{2!}{1!}\,d^{2}_{4}=2\cdot 7=14$, in agreement with the tables above. \begin{proof} Let us denote by $\mathbb{I}^{r}_{k+r}$ the set of all injections from the set $[k]$ to $[k+r]$, and by $\mathbb{S}(n,r,k)$ the set of all injections from $[n]$ to $[n+r]$ without $(r+k)$-successions. Since $|\mathbb{I}^{r}_{k+r}|=\dfrac{(k+r)!}{r!}$, it suffices to construct a bijection between $\mathbb{S}(n,r,k)$ and $W^{r+k}_{n+r}\times \mathbb{I}^{r}_{k+r}.$ To a given injection $f \in \mathbb{S}(n,r,k)$, we associate the pair $(g,\gamma) \in W^{r+k}_{n+r}\times \mathbb{I}^{r}_{k+r}$ defined as follows. For $i\in [n-k]$, let $$g(i)\equiv f(i)+n-k \pmod{n+r}, \qquad g(i)\in[n+r],$$ so that in particular $g(i)=n+r$ when $f(i)=r+k$. From the word $f(n-k+1)\cdots f(n)$ we standardise to get $\gamma(1)\cdots\gamma(k)$: more formally, taking the order preserving bijection $\iota:[n+r]\setminus f([n-k]) \to [k+r]$, we define $\gamma(i)=\iota( f(n-k+i))$ for all $i \in [k]$. Notice that the injection $g$ has no fixed points: if an integer $i \in [n-k]$ were a fixed point of $g$, that is, $g(i)=i$, then we would have $f(i)+n-k \equiv i \pmod{n+r}$, that is, $f(i)=i+r+k$, and the integer $i$ would be a $(r+k)$-succession of the injection $f$. Notice also that the inverse map $(g,\gamma) \mapsto f$ is defined by $$f(i)\equiv g(i)+k+r \pmod{n+r}, \qquad f(i)\in[n+r],$$ for all $i\in[n-k]$ (so that $f(i)=r+k$ when $g(i)=n+r$), and $$f(n-k+i)=\iota^{-1}( \gamma(i)) \text{ for all } i\in[k].$$ \end{proof} \section{Maximum permanents of $(0,1)$-matrices} \begin{definition} Let $A=(a_{ij})$ be an $m \times n$ matrix with $m\leq n$. The \textit{permanent} of $A$, written $Per\ A$, is defined by $$Per\ A=\sum_{f}a_{1f(1)}a_{2f(2)}\cdots a_{mf(m)},$$ where the summation extends over all injections from $[m]$ to $[n]$. If $m>n$, we define $Per\ A=Per\ A^{T}.$ Let $A$ and $B$ be $m\times n$ matrices. We say that $B$ is combinatorially equivalent to $A$ if there exist two permutation matrices $P$ and $Q$ of orders $m$ and $n$ respectively such that $B=PAQ$. \end{definition} Let $k$ be an integer with $0\leq k\leq n$. We will denote by $\mathbb{U}(m, n, k)$ the set of all $m \times n\ (0,1)$-matrices with exactly $k$ zero entries.
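For instance, the $2\times 3$ matrix $$A=\left[ \begin{array}{ccc} 0&1&1\\ 1&1&1 \end{array} \right]$$ belongs to $\mathbb{U}(2,3,1)$ and has $Per\ A=4$: among the six injections from $[2]$ to $[3]$, only the four with $f(1)\neq 1$ contribute a nonzero term to the expansion.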
We first give some basic properties of the permanent function. \begin{remark} By convention, we assume that for all integers $0\leq n$ and for all matrices $A\in \mathbb{U}(0, n,0)$, we have $Per\ A=1.$ \end{remark} \begin{theorem} \cite{minc} \begin{enumerate} \item For any $m\times n$ matrix $A$, $Per\ A= Per\ A^{T}.$ \item If $A$ and $B$ are $m\times n$ combinatorially equivalent matrices, then $Per\ A=Per\ B.$ \end{enumerate} \end{theorem} In \cite{bru}, Brualdi et al. determined the maximum permanents for $n$-square $(0,1)$-matrices with a fixed number of zero entries. In \cite{song}, Song et al. determined the extremes of permanents over $\mathbb{U}(m,n,k)$. \begin{theorem} \cite{song} For $2\leq k\leq m$, the maximum permanent over $\mathbb{U}(m,n,k)$ is $$\sum^{m}_{i=0}(-1)^{i}{k\choose i}{{n-i}\choose{m-i}}(m-i)!.$$ This value is attained by the matrices that are combinatorially equivalent to the matrix $$A_{max}=\left[ \begin{array}{c|c} 1_{k\times k}-I_{k} & 1_{k\times (n-k)}\\ \hline \multicolumn{2}{c}{1_{(m-k)\times n}} \end{array} \right],$$ where $1_{s\times t}$ is the $s\times t\ (0,1)$-matrix with all entries equal to $1$ and $I_{k}$ is the $k$-square identity matrix. \end{theorem} \begin{theorem} For all integers $0\leq k\leq n$, the maximum permanent over $\mathbb{U}(n-k,n,n-k)$ is equal to $d^{k}_{n}$, and it is attained by the matrices each of whose lines contains exactly one zero and each of whose columns contains at most one zero. \end{theorem} \begin{proof} Let $A$ be an $(n-k)\times n \ (0,1)$-matrix in $\mathbb{U}(n-k,n,n-k)$ each of whose lines contains one zero and each of whose columns contains at most one zero. This matrix is combinatorially equivalent to $$ M=(m_{ij})= \left[ \begin{array}{c|c} 1_{(n-k)\times (n-k)}-I_{n-k} & 1_{(n-k)\times k} \end{array} \right] =\left[ \begin{array}{ccc|c} 0&&&\\ &\ddots&&\\ &&0& \end{array} \right], $$ where all the entries in blank positions are $1$'s. By the definition of the permanent, $\displaystyle{Per\ M=\sum_{f}m_{1f(1)}m_{2f(2)}\cdots m_{n-k\ f(n-k)}}$, where the summation extends over all injections from $[n-k]$ to $[n]$. In the expansion of $Per\ M$, determining the terms which do not contain zeros is equivalent to determining the injections from $[n-k]$ to $[n]$ without fixed points. This gives the required result. \end{proof} \begin{theorem} For all integers $0\leq k\leq n$, the maximum permanent over $\mathbb{U}(n,n,n-k)$ is equal to $e^{k}_{n}$, and it is attained by the matrices each of whose lines and columns contains at most one zero. \end{theorem} \begin{proof} Let $A$ be an $n$-square $(0,1)$-matrix in $\mathbb{U}\left(n,n,n-k\right)$ each of whose lines and columns contains at most one zero. This matrix is combinatorially equivalent to $M =\left( m_{ij} \right)$ with $$m_{ij}= \begin{cases} 0 &\mbox{if } j=i+k,\ 1\leq i \leq n-k,\\ 1 &\mbox{else.} \end{cases} $$ In the expansion of $Per\ M$, determining the terms which do not contain zeros is equivalent to determining the permutations over $[n]$ without $k$-successions. This gives the required result. \end{proof}
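Let us illustrate the two preceding theorems with $n=3$ and $k=1$: $$Per\left[ \begin{array}{cc|c} 0&1&1\\ 1&0&1 \end{array} \right]=3=d^{1}_{3} \qquad\mbox{and}\qquad Per\left[ \begin{array}{ccc} 1&0&1\\ 1&1&0\\ 1&1&1 \end{array} \right]=3=e^{1}_{3},$$ the nonzero terms corresponding respectively to the three fixed-point-free injections $21$, $23$ and $31$ from $[2]$ to $[3]$, and to the three permutations $123$, $312$ and $321$ of $[3]$ without $1$-successions.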
\begin{theorem} For all integers $0\leq k \leq m\leq n$, the maximum permanent over $\mathbb{U}(m,n,m-k)$ enumerates the injections from $[m]$ to $[n]$ without $(n-m+k)$-successions. \end{theorem} \begin{proof} By the theorem of Song et al.\ above, the matrices of the set $\mathbb{U}(m,n,m-k)$ whose permanent is maximal are combinatorially equivalent to the matrix $$A=(a_{ij})=\left[ \begin{array}{c|c} 1_{(m-k)\times (n-m+k)} & 1_{(m-k)\times (m-k)}-I_{m-k}\\ \hline \multicolumn{2}{c}{1_{k\times n}} \end{array} \right].$$ In the expansion of $Per\ A$, determining the terms which do not contain zeros is equivalent to determining the injections from $[m]$ to $[n]$ without $(n-m+k)$-successions. This gives the required result. \end{proof} \begin{corollary} For all integers $0\leq k \leq m\leq n$, the maximum permanent over $\mathbb{U}(m,n,m-k)$ is equal to $$\dfrac{(n-m+k)!}{(n-m)!}d^{n-m+k}_{n}.$$ \end{corollary} \begin{proof} Using the preceding theorem and Theorem \ref{main} with $r=n-m$, we obtain the required result. \end{proof} \begin{corollary} For all integers $0\leq k \leq m\leq n$, we have $$\sum^{m-k}_{i=0}(-1)^{i}{{m-k}\choose i}{{n-i}\choose{m-i}}(m-i)!=\dfrac{(n-m+k)!}{(n-m)!}d^{n-m+k}_{n}.$$ \end{corollary} For instance, with $m=2$, $n=3$ and $k=1$, both sides are equal to $6-2=4=\dfrac{2!}{1!}\,d^{2}_{3}$. \section{Acknowledgements} The author is very grateful to a referee of the paper \cite{rak} for pointing out the two other combinatorial interpretations of the numbers $d^{k}_{n}$ and for suggesting to look for bijective proofs. \begin{thebibliography}{99} \bibitem{bru} R. A. Brualdi, J. L. Goldwasser, T. S. Michael, Maximum permanents of matrices of zeros and ones, {\it J. Combin. Theory Ser. A} {\bf 47} (1988) 207--245. \bibitem{clarke} R. J. Clarke, G. N. Han, J. Zeng, A combinatorial interpretation of the Seidel generation of $q$-derangement numbers, {\it Annals of Combinatorics} {\bf 1} (1997) 313--327. \bibitem{dumont} D. Dumont, A. Randrianarivony, D\'erangements et nombres de Genocchi, {\it Discrete Math.} {\bf 132} (1997) 37--49. \bibitem{minc} H. Minc, Permanents, in: {\it Encyclopedia Math. Appl.}, vol. {\bf 6}, Addison-Wesley, Reading (1978). \bibitem{rak1} F. Rakotondrajao, $k$-fixed-points-permutations, {\it Pure Math. Appl.} {\bf 16} (2006) xx--xx. \bibitem{rak} F. Rakotondrajao, On Euler's difference table, in: {\it Proc. Formal Power Series \& Algebraic Combinatorics (FPSAC) 07}, Tianjin, China (2007). \bibitem{rior} J. Riordan, {\it An Introduction to Combinatorial Analysis}, John Wiley \& Sons, New York (1958). \bibitem{song} S. Z. Song, S. G. Hwang, S. H. Rim, G. S. Cheon, Extremes of permanents of $(0,1)$-matrices, {\it Linear Algebra Appl.} {\bf 373} (2003) 197--210. \end{thebibliography} %$k$-fixed-points-permutations, $k$-succession, $(0,1)$-matrices, permanent, injections, inclusion-exclusion principle","05A19","05B20","The author was supported by the `Soutien aux Activit\'es de Recherche Informatique et Math\'ematiques en Afrique' (SARIMA) project and by LIAFA during her stay at the University of Paris 7, France as invited `Ma\^itre de conf\'erences'.","10:10:46","Fri Feb 08 2008","132.227.81.242" %"Van Dooren %Paul","paul.vandooren@uclouvain.be"," \section*{H2 approximation of linear dynamical systems} By {\sl P. Van Dooren, K. Gallivan and P.A. Absil}. \medskip \noindent We consider the problem of approximating an $m\times p$ rational transfer function $H(s)$ of high degree by another $m\times p$ rational transfer function $\hat{H}(s)$ of much smaller degree.
We derive the gradients of the $\mathcal{H}_2$-norm of the approximation error and show how this problem can be solved via tangential interpolation. We then extend these results to the discrete-time case, for both time-invariant and time-varying systems. %Tangential interpolation, H2 approximation, model reduction","15","65","","09:41:21","Tue Feb 19 2008","130.104.239.210" %"Koratti Chengalrayan %Sivakumar","kcskumar@iitm.ac.in"," \section*{Least Elements of Polyhedral Sets and Nonnegative Generalized Inverses} By {\sl Debashisha Mishra and Sivakumar K.C.}. \medskip \noindent A classical result due to Cottle and Veinott gives a characterization of the existence of the least element of a specific polyhedral set defined by a matrix, in terms of the nonnegativity of a left-inverse of the matrix. In this talk we present extensions of this result to semi-infinite matrices and characterize the nonnegativity of certain classes of generalized inverses. %Least elements, polyhedral sets, nonnegative generalized inverse.","15A09","90C05","","05:56:35","Thu Feb 21 2008","203.199.213.66" %"Wu %Pei Yuan","pywu@math.nctu.edu.tw"," \section*{Numerical ranges of nilpotent operators} By {Hwa-Long Gau and Pei Yuan Wu}. \medskip \noindent For any operator $A$ on a Hilbert space, let $w(A)$ and $w_{0}(A)$ denote its numerical radius and the distance from the origin to the boundary of its numerical range, respectively. We prove that if $A$ is nilpotent with nilpotency $n$, then $w(A)$ is at most the product of $n - 1$ and $w_{0}(A)$. When $A$ attains its numerical radius, we also determine a necessary and sufficient condition for the equality to hold. %Numerical range, numerical radius, nilpotent operator.","47A12","15A60","","00:25:17","Sat Feb 23 2008","140.113.22.149" %"Glebsky %Lev","glebsky@cactus.iico.uaslp.mx"," \section*{On low rank perturbations of matrices} By {\sl Lev Glebsky and Luis Manuel Rivera}. \medskip \noindent The talk is devoted to different aspects of the question: ``What can be done with a matrix by a low rank perturbation?'' It is proved that one can change a geometrically simple spectrum drastically by a rank $1$ perturbation, but the situation is quite different if one restricts oneself to normal matrices. The Jordan normal form of a perturbed matrix is also discussed. It is proved that, with respect to the distance $d(A,B)=\frac{\mathrm{rank}(A-B)}{n}$ (here $n$ is the size of the matrices), all almost unitary operators are near unitary. %low rank, matrices","15A03","15A18","","11:04:43","Wed Feb 27 2008","189.151.26.32" %"Armandnejad %Ali","armandnejad@yahoo.com"," \section*{Right gw-majorization on $ \mathbf{M}_{n,m}$} By {A. Armandnejad} \medskip \noindent Let $\mathbf{M}_{n,m}$ be the set of all $n\times m$ matrices with entries in $\mathbb{F}$, where $\mathbb{F}$ is the field of real or complex numbers. An $n\times n$ matrix $R$ is said to be a g-row stochastic matrix if $Re=e$, where $ e= (1,\ldots,1)^{t}\in \mathbb{F}^{n}$. We introduce the right gw-majorization on $\mathbf{M}_{n,m}$: we say that an $n\times m$ matrix $A$ is right gw-majorized by an $n\times m$ matrix $B$, denoted by $B\succ_{rwg}A$, if there exists a g-row stochastic matrix $R$ such that $A=BR$. In this paper we study some properties of the right gw-majorization, and finally all linear operators that strongly preserve the right gw-majorization are characterized.
%Linear preserver, strong linear preserver, g-row stochastic matrices, right gw-majorization","15A03","15A04","","23:48:32","Thu Feb 28 2008","80.191.162.233" %"Cravo %Glória","gcravo@uma.pt"," \section*{Controllability of Matrices with Prescribed Blocks} By {\sl Gl\'{o}ria Cravo}. \medskip \noindent Let $F$ be a field and let $n,p_{1},\ldots,p_{k}$ be positive integers such that $n=p_{1}+\cdots+p_{k}.$ Let \[ (C_{1},C_{2})=\left( \left[ \begin{array}{ccc} C_{1,1} & \cdots & C_{1,k-1}\\ \vdots & & \vdots\\ C_{k-1,1} & \cdots & C_{k-1,k-1} \end{array} \right] ,\left[ \begin{array}{c} C_{1,k}\\ \vdots\\ C_{k-1,k} \end{array} \right] \right) \] where the blocks $C_{i,j}$ are of type $p_{i}\times p_{j}$, $i\in\{1,\ldots,k-1\}$, $j\in\{1,\ldots,k\}.$ We study the possibility of $(C_{1},C_{2})$ being completely controllable, when some of its blocks are fixed and the others vary. Our main results analyse the following cases: (i) all the blocks $C_{i,j}$ are of the same size; (ii) the blocks $C_{i,j}$ are not necessarily of the same size and $k=3.$ We also describe the possible characteristic polynomial of a matrix of the form \[ C=\left[ \begin{array}{ccc} C_{1,1} & \cdots & C_{1,k}\\ \vdots & & \vdots\\ C_{k,1} & \cdots & C_{k,k} \end{array} \right] \] when some of its blocks are prescribed and the others are free. %Controllability, Characteristic Polynomials, Matrix Completion Problems","93B05","15A18","","05:36:16","Sun Mar 02 2008","193.136.232.62"
%"Klein %Andre","A.A.B.Klein@uva.nl"," \section*{Tensor Sylvester matrices and information matrices of multiple stationary processes} By {\sl Andr\'{e} Klein}, Department of Quantitative Economics, University of Amsterdam, Roetersstraat 11, 1018 WB Amsterdam, The Netherlands. \medskip \noindent Consider the matrix polynomials $A(z)$ and $B(z)$ given by $$A(z)=\sum\limits_{j=0}^{p}A_{j}z^{j} \quad\mbox{and}\quad B(z)=\sum\limits_{j=0}^{q}B_{j}z^{j},$$ where $A_{0}\equiv B_{0}\equiv I_{n}$. Gohberg and Lerer [1] study the resultant property of the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)\triangleq \mathcal{S}(-B\otimes I_{n},I_{n}\otimes A)$, that is, $$\mathcal{S}^{\otimes }(-B,A)=\left( \begin{array}{ccccccc} \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} \\ I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p} \end{array} \right).$$
In [1] it is proved that the matrix polynomials $A(z)$ and $B(z)$ have at least one common eigenvalue if and only if $\det \mathcal{S}^{\otimes }(-B,A)=0$, that is, when the matrix $\mathcal{S}^{\otimes }(-B,A)$ is singular. In other words, the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)$ is singular if and only if the scalar polynomials $\det A(z)$ and $\det B(z)$ have at least one common root. Consequently, it is a multiple resultant. (In the scalar case $n=1$, $\mathcal{S}^{\otimes }(-B,A)$ reduces, up to the signs of the rows built from $B$, to the classical Sylvester matrix of $A(z)$ and $B(z)$.) In [2], this property is extended to the Fisher information matrix of a stationary vector autoregressive and moving average (VARMA) process. The purpose of this talk is to display a representation of the Fisher information matrix of a stationary VARMAX process in terms of tensor Sylvester matrices, where the X stands for an exogenous or control variable. The VARMAX process is in common use in stochastic systems and control. \begin{thebibliography}{9} \bibitem{gohblerer} I. Gohberg, L. Lerer, Resultants of matrix polynomials, {\it Bull. Amer. Math. Soc.} {\bf 82} (1976) 565--567. \bibitem{kms} A. Klein, G. M\'{e}lard, P. Spreij, On the resultant property of the Fisher information matrix of a vector ARMA process, {\it Linear Algebra Appl.} {\bf 403} (2005) 291--313. \end{thebibliography} %Multiple resultant matrix, Matrix Polynomial, Tensor Sylvester matrix, Fisher information matrix, VARMAX process","15A23","15A57","","08:05:54","Mon Mar 03 2008","145.18.180.139"
A_{1} & \cdots & I_{n}\otimes A_{p} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p}% \end{array}% \right) $. In [1] it is proved that the matrix polynomials $A(z)$ and $B(z)$ have at least one common eigenvalue if and only if det$\mathcal{S}^{\otimes }(-B,A)=0 $ or when the matrix $\mathcal{S}^{\otimes }(-B,A)$ is singular$.$ In other words, the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)$ becomes singular if and only if the scalar polynomials det $A(z)=0$ and det $B(z)=0$ have at least one common root. Consequently, it is a multiple resultant. In [2], this property is extended to the Fisher information matrix of a stationary vector autoregressive and moving average process, VARMA process. The purpose of this talk consists of displaying a representation of the Fisher information matrix of a stationary VARMAX process in terms of tensor Sylvester matrices, the X stands for exogenous or control variable. The VARMAX process is of common use in stochastic systems and control. \begin{thebibliography}{9} \bibitem{gohblerer} {\small {\large I.} \ {\large G}OHBERG, {\large L. L}% ERER, }Resultants of matrix polynomials. Bull. Amer. Math. Soc\textit{. }\ \textbf{82} {\small \ }(1976) 565-567. \bibitem{kms} {\small {\large A. K}LEIN, {\large G. M}\textsc{\'{E}}LARD, {\large P. S}PREIJ,} On the Resultant Property of the Fisher Information Matrix \ of a Vector ARMA process, Linear Algebra Appl. 403 (2005) 291-313. \end{thebibliography} \end{document}","Multiple resultant matrix, Matrix Polynomial, Tensor Sylvester matrix, Fisher information matrix, VARMAX process","15A23","15A57","","08:05:55","Mon Mar 03 2008","145.18.180.139" %"Klein %Andre","A.A.B.Klein@uva.nl","\documentclass[12pt,fleqn]{article} \usepackage{amssymb} \usepackage[pc850]{inputenc} \usepackage{german} \usepackage{amsmath} \usepackage{amsfonts} \setcounter{MaxMatrixCols}{10} \pagestyle{empty} \voffset-1in \hoffset-1.1in \def\baselinestretch{0.9} \setlength{\oddsidemargin}{-0.5in} \setlength{\evensidemargin}{-0.5in} \setlength{\textwidth}{7.1in} \parindent0ex \topsep=4pt plus 1pt minus 3pt \oddsidemargin3.25cm \textwidth15cm \topmargin3.75cm \headheight0cm \headsep0cm \topskip0cm \textheight21cm \clubpenalty = 10000 \widowpenalty = 10000 \displaywidowpenalty = 10000 \sloppy \flushbottom \input{tcilatex} \begin{document} \begin{center} {\large \textbf{Tensor Sylvester matrices and information matrices of multiple stationary processes}} by \textit{Andr\'{e} Klein}, Department of Quantitaive Economics, University of Amsterdam \\[0pt] Roetersstraat 11, 1018 WB Amsterdam, The Netherlands \\[0pt] \end{center} \textbf{Abstract} Consider the matrix polynomials $A(z)$ and $B(z)$ given by $\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ A(z)=\dsum\limits_{j=0}^{p}A_{j}z^{j}$ and$\ B(z)=\dsum\limits_{j=0}^{q}B_{j}z^{j}$, where $A_{0}\equiv B_{0}\equiv I_{n}$.\newline Gohberg and Lerer [1] study the resultant property of the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)\triangleq \mathcal{S}(-B\otimes I_{n},I_{n}\otimes A)$ or $\mathcal{S}^{\otimes }(-B,A)=\left( \begin{array}{ccccccc} \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times 
n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} \\ I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p}% \end{array}% \right) $. In [1] it is proved that the matrix polynomials $A(z)$ and $B(z)$ have at least one common eigenvalue if and only if det$\mathcal{S}^{\otimes }(-B,A)=0 $ or when the matrix $\mathcal{S}^{\otimes }(-B,A)$ is singular$.$ In other words, the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)$ becomes singular if and only if the scalar polynomials det $A(z)=0$ and det $B(z)=0$ have at least one common root. Consequently, it is a multiple resultant. In [2], this property is extended to the Fisher information matrix of a stationary vector autoregressive and moving average process, VARMA process. The purpose of this talk consists of displaying a representation of the Fisher information matrix of a stationary VARMAX process in terms of tensor Sylvester matrices, the X stands for exogenous or control variable. The VARMAX process is of common use in stochastic systems and control. \begin{thebibliography}{9} \bibitem{gohblerer} {\small {\large I.} \ {\large G}OHBERG, {\large L. L}% ERER, }Resultants of matrix polynomials. Bull. Amer. Math. Soc\textit{. }\ \textbf{82} {\small \ }(1976) 565-567. \bibitem{kms} {\small {\large A. K}LEIN, {\large G. M}\textsc{\'{E}}LARD, {\large P. S}PREIJ,} On the Resultant Property of the Fisher Information Matrix \ of a Vector ARMA process, Linear Algebra Appl. 403 (2005) 291-313. 
\end{thebibliography} \end{document}","Multiple resultant matrix, Matrix Polynomial, Tensor Sylvester matrix, Fisher information matrix, VARMAX process","15A23","15A57","","08:08:00","Mon Mar 03 2008","145.18.180.139"
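\medskip \noindent For illustration (an editorial sketch under the conventions above, with illustrative coefficients $a_{1},b_{1}$): in the scalar case $n=1$ with $p=q=1$, the tensor Sylvester matrix reduces to the classical Sylvester matrix $$\mathcal{S}^{\otimes }(-B,A)=\begin{pmatrix} -1 & -b_{1} \\ 1 & a_{1} \end{pmatrix},\qquad \det \mathcal{S}^{\otimes }(-B,A)=b_{1}-a_{1},$$ which vanishes exactly when $a_{1}=b_{1}$, i.e. (for $a_{1}b_{1}\neq 0$) precisely when $A(z)=1+a_{1}z$ and $B(z)=1+b_{1}z$ share the root $-1/a_{1}$, in accordance with the resultant property.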
%"M. Dopico %Froilan","dopico@math.uc3m.es"," \section*{Implicit Jacobi algorithms for the symmetric eigenproblem} By {\sl Froilan M. Dopico}. \medskip \noindent The Jacobi algorithm for computing the eigenvalues and eigenvectors of a symmetric matrix is one of the earliest methods in numerical analysis, dating to 1846. It was the standard procedure for solving dense symmetric eigenvalue problems before the QR algorithm was developed. The Jacobi method is much slower than QR, or than any other algorithm based on a preliminary reduction to tridiagonal form, and, as a consequence, it is not used in practice. However, in the last twenty years the Jacobi algorithm has received considerable attention because it can compute the eigenvalues and eigenvectors of many types of structured matrices with much more accuracy than other algorithms. The essential idea is first to compute an accurate factorization of the matrix $A$, and then to apply the Jacobi algorithm implicitly on the factors. The theoretical property that supports this approach is that a factorization $A= X D X^T$, where $X$ is well conditioned and $D$ is diagonal and nonsingular, determines the eigenvalues and eigenvectors of $A$ very accurately, i.e., small componentwise perturbations of $D$ and small normwise perturbations of $X$ produce small relative variations in the eigenvalues of $A$, and small variations in the eigenvectors with respect to the eigenvalue relative gap. The purpose of this talk is to present a unified overview of implicit Jacobi algorithms, of the classes of symmetric matrices for which they work, and of the perturbation results that are needed to prove the accuracy of the computed eigenvalues and eigenvectors, and, finally, to present very recent developments in this area, including a new, simple, and satisfactory algorithm for symmetric indefinite matrices. %eigenvalues, eigenvectors, high relative accuracy, Jacobi algorithm","65F15","15A23","I am one of the Plenary speakers and this is the Abstract for my Plenary talk","07:59:06","Wed Mar 05 2008","163.117.132.83"
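\medskip \noindent For reference, a sketch of the classical ingredient on which the implicit variants build (a standard formula, not specific to this talk): a Jacobi rotation $J(p,q,\theta )$ applied as $A\mapsto J^{T}AJ$ annihilates the off-diagonal entry $a_{pq}$ of a symmetric $A$ when $$\tan 2\theta =\frac{2a_{pq}}{a_{qq}-a_{pp}},$$ and a sweep applies such rotations over all off-diagonal positions; in the implicit algorithms the rotations act on the factors $X$ and $D$ of $A=XDX^{T}$ rather than on $A$ itself.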
%"Mena %Hermann","hmena@math.epn.edu.ec"," \section*{Exponential Integrators for Solving Large-Scale Differential Riccati Equations} By {\sl Peter Benner and Hermann Mena}. \medskip \noindent The differential Riccati equation (DRE) arises in several applications, especially in control theory. Optimization problems constrained by partial differential equations (PDEs) often lead to formulations as abstract Cauchy problems. Imposing a quadratic cost functional, the resulting optimal control problem is solved by a feedback control in which the feedback operator is given in terms of an operator-valued DRE. Hence, in order to apply such a feedback control strategy to PDE control, we need to solve the large-scale DREs resulting from a spatial semi-discretization. There is a variety of methods to solve DREs. One common approach is based on a linearization that transforms the DRE into a linear Hamiltonian system of first-order matrix differential equations. The analytic solution of this system is given in terms of the exponential of a $2n\times 2n$ Hamiltonian matrix. In this talk, we investigate the use of symplectic Krylov subspace methods to approximate the action of this operator and thereby solve the DRE. Numerical examples illustrating the performance of the method will be shown. %differential Riccati equation, symplectic Krylov subspace methods, Hamiltonian systems, linear-quadratic regulator, optimal control","93A15","65L99","","10:11:07","Wed Mar 05 2008","192.188.57.131"
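\medskip \noindent A sketch of the linearization step (one standard form, via Radon's lemma, assuming constant coefficients; sign conventions vary with the direction of integration): for the DRE $\dot{X}=Q+A^{T}X+XA-XSX$ with $Q,S$ symmetric, writing $X(t)=W(t)V(t)^{-1}$ reduces the problem to the linear system $$\frac{d}{dt}\begin{pmatrix} V \\ W \end{pmatrix}=\begin{pmatrix} -A & S \\ Q & A^{T} \end{pmatrix}\begin{pmatrix} V \\ W \end{pmatrix},$$ whose coefficient matrix is Hamiltonian, so the solution is obtained from the exponential of this $2n\times 2n$ matrix.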
%"foroutannia %Davoud","d_foroutan@math.com"," \section*{Bounds for matrices on weighted sequence spaces} By {\sl D. Foroutannia}. \medskip \noindent Let $w=(w_n)$ be a decreasing non-negative sequence and let $F=(F_n)$ be a partition of the positive integers such that each $F_n$ is a finite interval and $\max{F_n}<\min{F_{n+1}}$ for all $n$. The block weighted sequence space $l_p(w,F)$ is the space of all real sequences $x=(x_n)$ with $$\|x\|_{p,w,F}=\left(\sum_{n=1}^{\infty}w_n|\langle x\rangle_n|^p\right)^{1/p}<\infty,$$ where $\langle x\rangle_n=\sum_{i\in F_n}x_i$. \\ In this paper, we consider inequalities of the form $\|Ax\|_{p,w,F}\le L\|Bx\|_{q,v,F}$, where $A$ and $B$ are matrix operators, $x$ is a decreasing non-negative sequence, $w$ and $v$ are weights, and $F$ is a block partition. This study extends earlier work on the sequence spaces $l_{p}(v)$ by J. Pecaric, I. Peric and R. Roki in [3]. %Inequality; Lower bound; Upper bound; Block weighted sequence spaces; Copson matrix","","","","11:58:03","Wed Mar 05 2008","217.219.28.191"
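\medskip \noindent A small numerical illustration of the block norm as reconstructed above (blocks and entries illustrative): take $F_n=\{2n-1,2n\}$ and $x=(1,1,1,1,0,0,\dots)$; then $\langle x\rangle_1=\langle x\rangle_2=2$ and $\langle x\rangle_n=0$ for $n\geq 3$, so $\|x\|_{p,w,F}=\left(w_1 2^{p}+w_2 2^{p}\right)^{1/p}=2(w_1+w_2)^{1/p}$.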
%"Seddighin %Morteza","mseddigh@indiana.edu"," \section*{Matrix Optimization in Statistics} By {\sl Morteza Seddighin}. \medskip \noindent Statisticians have been dealing with matrix optimization problems which are similar to Matrix Antieigenvalue problems. These problems occur in areas such as statistical efficiency and canonical correlations. Statisticians have generally taken a variational approach to treat these matrix optimization problems. However, we will use the techniques we have developed for the computation of Antieigenvalues to provide simpler solutions. Additionally, these techniques have enabled us to generalize some of the matrix optimization problems in statistics from positive matrices to normal accretive matrices and operators. One of the techniques we use is the Two Nonzero Component Lemma, which was first proved by the author. Another technique is converting the Antieigenvalue problem to a convex programming problem. In the latter method the problem is reduced to finding the minimum of a convex function on the numerical range of an operator (which is a convex set). %Matrix Optimization, Antieigenvalue","15","47","I have written the abstract using Scientific Workplace. If there is any problem please let me know and I will provide a pdf file.","02:30:13","Thu Mar 06 2008","149.165.30.62" %"Carriegos %Miguel","miguel.carriegos@unileon.es"," \section*{Reachability of regular switched linear systems} By {\sl Miguel V. Carriegos}. \medskip \noindent Switched linear systems form a special class of hybrid control systems comprising a collection of subsystems described by linear dynamics (differential/difference equations), together with a switching rule that specifies the switching between the subsystems. Such systems can be used to describe a wide range of physical and engineering problems in practice. Switched linear systems have also been attracting much attention in recent years because the problems that arise are not only academically challenging but also of practical importance. In this talk we consider \emph{regular switched sequential linear systems}; that is, sequential switched linear systems $$\Gamma:\underline{x}(t+1)=A_{\sigma(t)}\underline{x}(t)+B_{\sigma(t)}\underline{u}(t)$$ where the switching signals $\sigma(0)\sigma(1)\sigma(2)\ldots \in \Sigma^{\ast}$ belong to a regular language $L_{\Gamma}\subseteq\Sigma^{\ast}$ of admissible sequences of commands of the system $\Gamma$. This is equivalent to saying that the switching signals are governed by a finite automaton. We study the notion of reachability in terms of the families of matrices $A_{\sigma(-)}$ and $B_{\sigma(-)}$ by using linear algebra techniques. %hybrid system; local automaton; controllability","93B25","68A25","","06:44:57","Thu Mar 06 2008","193.146.100.244"
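\medskip \noindent A sketch of the linear-algebra computation behind this notion of reachability (obtained by unrolling the recursion above; the word $w$ is illustrative): starting from $\underline{x}(0)=0$, an admissible word $w=\sigma(0)\sigma(1)\cdots\sigma(t-1)\in L_{\Gamma}$ steers the system to $$\underline{x}(t)=\sum_{j=0}^{t-1}A_{\sigma(t-1)}\cdots A_{\sigma(j+1)}B_{\sigma(j)}\underline{u}(j),$$ so the reachable subspace is the sum, over admissible words, of the column spaces of the matrices $A_{\sigma(t-1)}\cdots A_{\sigma(j+1)}B_{\sigma(j)}$.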
%"Plestenjak %Bor","bor.plestenjak@fmf.uni-lj.si"," \section*{Numerical methods for two-parameter eigenvalue problems} By {\sl Bor Plestenjak}. \medskip \noindent We consider the \emph{two-parameter eigenvalue problem} \cite{Atkinson} \begin{eqnarray} A_1x_1&=&\lambda B_1x_1+\mu C_1x_1,\nonumber\\[-2ex] \label{problem} \\[-2ex] A_2x_2&=&\lambda B_2x_2+\mu C_2x_2,\nonumber \end{eqnarray} where $A_i,B_i$, and $C_i$ are given $n_i\times n_i$ matrices over ${\mathbb C}$, $\lambda,\mu\in{\mathbb C}$, and $x_i\in {\mathbb C}^{n_i}$ for $i=1,2$. A pair $(\lambda,\mu)$ is an \emph{eigenvalue} if it satisfies (\ref{problem}) for nonzero vectors $x_1,x_2$. The tensor product $x_1\otimes x_2$ is then the corresponding \emph{eigenvector}. On the tensor product space $S:= {\mathbb C}^{n_1}\otimes {\mathbb C}^{n_2}$ of dimension $N:=n_1n_2$ we can define the \emph{operator determinants} \begin{eqnarray*} \Delta_0&=&B_1\otimes C_2-C_1\otimes B_2,\cr \Delta_1&=&A_1\otimes C_2-C_1\otimes A_2,\cr \Delta_2&=&B_1\otimes A_2-A_1\otimes B_2. \end{eqnarray*} The two-parameter problem (\ref{problem}) is \emph{nonsingular} if its operator determinant $\Delta_0$ is invertible. In this case $\Delta_0^{-1}\Delta_1$ and $\Delta_0^{-1}\Delta_2$ commute and problem (\ref{problem}) is equivalent to the associated problem \begin{eqnarray} \Delta_1 z&=&\lambda \Delta_0 z,\nonumber\\[-2ex] \label{drugi}\\[-2ex] \Delta_2 z&=&\mu \Delta_0 z\nonumber \end{eqnarray} for decomposable tensors $z\in S$, $z=x_1\otimes x_2$. Some numerical methods and a basic theory of two-parameter eigenvalue problems will be presented. A possible approach is to solve the associated couple of generalized eigenproblems (\ref{drugi}), but this is only feasible for problems of low dimension because the size of the matrices of (\ref{drugi}) is $N\times N$. For larger problems, if we are interested in a part of the eigenvalues close to a given target, the Jacobi--Davidson method \cite{HP,HP2,HP3} gives very good results. Several applications lead to singular two-parameter eigenvalue problems, where $\Delta_0$ is singular. Two such examples are model updating \cite{Cottin} and the quadratic two-parameter eigenvalue problem \begin{eqnarray} (S_{00}+\lambda S_{10} +\mu S_{01} + \lambda^2 S_{20} +\lambda \mu S_{11} + \mu^2 S_{02})x&=&0,\nonumber\\[-1.7ex] \label{qepproblem} \\[-1.7ex] (T_{00}+\lambda T_{10} +\mu T_{01} + \lambda^2 T_{20} +\lambda \mu T_{11} + \mu^2 T_{02})y&=&0.\nonumber \end{eqnarray} We can linearize (\ref{qepproblem}) as a singular two-parameter eigenvalue problem; a possible linearization is $$\left(\begin{bmatrix} S_{00} & S_{10} & S_{01} \\ 0 & -I & 0 \\ 0 & 0 & -I \end{bmatrix} +\lambda \begin{bmatrix} 0 & S_{20} & {1\over 2}S_{11} \\ I & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} +\mu \begin{bmatrix} 0 & {1\over 2}S_{11} & S_{02} \\ 0 & 0 & 0 \\ I & 0 & 0 \end{bmatrix}\right)\widetilde x=0, $$ $$\left(\begin{bmatrix} T_{00} & T_{10} & T_{01} \\ 0 & -I & 0 \\ 0 & 0 & -I \end{bmatrix} +\lambda \begin{bmatrix} 0 & T_{20} & {1\over 2}T_{11} \\ I & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} +\mu \begin{bmatrix} 0 & {1\over 2}T_{11} & T_{02} \\ 0 & 0 & 0 \\ I & 0 & 0 \end{bmatrix}\right)\widetilde y=0, $$ where $\widetilde x=\begin{bmatrix} x \\ \lambda x \\ \mu x \end{bmatrix}$ and $\widetilde y=\begin{bmatrix} y \\ \lambda y \\ \mu y \end{bmatrix}$. Some theoretical results and numerical methods for singular two-parameter eigenvalue problems will be presented. \begin{thebibliography}{99} \bibitem{Atkinson} {\sc F.~V.~Atkinson}, {\sl Multiparameter eigenvalue problems}, Academic Press, New York, 1972. \bibitem{Cottin} {\sc N.~Cottin}, {\sl Dynamic model updating --- a multiparameter eigenvalue problem}, Mech. Syst. Signal Pr., 15~(2001), pp.~649--665. \bibitem{HP} {\sc M.~E.~Hochstenbach and B.~Plestenjak}, {\sl A Jacobi--Davidson type method for a right definite two-parameter eigenvalue problem}, SIAM J. Matrix Anal. Appl., 24~(2002), pp.~392--410. \bibitem{HP2} {\sc M.~E.~Hochstenbach, T.~Ko{\v{s}}ir, and B.~Plestenjak}, {\sl A Jacobi--Davidson type method for the nonsingular two-parameter eigenvalue problem}, SIAM J. Matrix Anal. Appl., 26 (2005), pp.~477--497. \bibitem{HP3} {\sc M.~E.~Hochstenbach and B.~Plestenjak}, {\sl Harmonic Rayleigh--Ritz extraction for the multiparameter eigenvalue problem}, to appear in ETNA. \end{thebibliography} %two-parameter eigenvalue problem, Jacobi-Davidson method, model updating","65F15","15A18","","02:19:23","Sat Mar 08 2008","213.143.69.43"
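\medskip \noindent For illustration, in the scalar case $n_1=n_2=1$ (a sketch; all quantities are scalars and the eigenvectors are trivial) the two equations become $A_1=\lambda B_1+\mu C_1$ and $A_2=\lambda B_2+\mu C_2$, the operator determinants $\Delta_0,\Delta_1,\Delta_2$ reduce to the ordinary $2\times 2$ determinants of this linear system, and Cramer's rule gives $$\lambda=\frac{\Delta_1}{\Delta_0},\qquad \mu=\frac{\Delta_2}{\Delta_0}\qquad (\Delta_0=B_1C_2-C_1B_2\neq 0),$$ which is exactly the associated problem in this trivial case.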
%"Furuichi %Shigeru","jaic957@yahoo.co.jp"," \section*{On trace inequalities for products of matrices} By {\sl Shigeru Furuichi}. \medskip \noindent Skew information is expressed by traces of products of matrices and of powers of matrices. In this talk, we study some matrix trace inequalities for products of matrices and for powers of matrices. %trace inequality, arithmetic mean, geometric mean and nonnegative matrix","47A63","94A17","","19:33:21","Sat Mar 08 2008","202.13.15.73"
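\medskip \noindent A prototypical inequality of this kind (standard, recorded here only to fix ideas; it is not claimed to be among the results of the talk): for positive semidefinite matrices $A$ and $B$, $$0\leq \mathrm{Tr}(AB)\leq \mathrm{Tr}(A)\,\mathrm{Tr}(B),$$ since $\mathrm{Tr}(AB)=\mathrm{Tr}(A^{1/2}BA^{1/2})\geq 0$ and $\mathrm{Tr}(AB)\leq \lambda_{\max}(A)\,\mathrm{Tr}(B)\leq \mathrm{Tr}(A)\,\mathrm{Tr}(B)$.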
%"DJORDJEVIC %SLAVISA","slavdj@fcfm.buap.mx"," \section*{Manifold of proper elements} By {\sl S.V. Djordjevic and S. Sánchez Perales}. \medskip \noindent Let $X$ be a Banach space and let $B(X)$ denote the space of all bounded linear transformations on $X$. With $$Eig(X)=\{ (\lambda ,L,A)\in \mathbf C\times P_1(X)\times {\mathcal B}(X): A(L)\subset L \mbox{ and } A_{|L}=\lambda I\} $$ we denote the {\it manifold of proper elements of} $X$, and we let $(\lambda_0, L_0,A_0)\in Eig (X)$ be a fixed but arbitrary element. In the first part of this note we give necessary and sufficient conditions for $(\lambda, L,A)\in Eig (X)$ using the system of equations determined by $(\lambda_0, L_0,A_0)\in Eig (X)$. In the second part we apply this result to describe the relation between the multiplicity of the eigenvalue $\lambda_0$ of the operator $A_0$ and the spectrum of the operator $\widehat{A_0}$ from the quotient $X/L_0$ to itself defined by $\widehat{A_0}(x+L_0)=A_0(x)+L_0$. %Eigenvalues, Eigenvectors, Multiplicity","15A18","47A10","MS2 Eigenproblems: Theory and computation","14:24:02","Tue Mar 11 2008","148.228.128.56" %"Neumann %Michael","neumann@math.uconn.edu"," \section*{On Optimal Condition Numbers For Markov Chains} By {\sl Michael Neumann and Nung--Sing Sze}. \medskip \noindent Let $T=(t_{i,j})$ and $\tilde{T}=T-E$ be arbitrary nonnegative, irreducible, stochastic matrices corresponding to two ergodic Markov chains on $n$ states, with stationary distributions $\pi$ and $\tilde{\pi}$. A function $\kappa(\cdot)$ is called a {\it condition number for Markov chains} with respect to the $(\alpha,\beta)$--norm pair if $\|\pi-\tilde{\pi}\|_\alpha \leq \kappa(T)\|E\|_\beta$.\\ Various condition numbers, particularly with respect to the $(1,\infty)$-- and $(\infty,\infty)$--norm pairs, have been suggested in the literature by several authors. They were ranked according to their size by Cho and Meyer in a paper from 2001. In this paper we first of all show that what we call the generalized ergodicity coefficient $\tau_p(A^{\#})=\sup_{y^te=0} \frac{\|y^tA^{\#}\|_p}{\|y\|_1}$, where $e$ is the $n$--vector of all $1$'s and $A^{\#}$ is the group inverse of $A=I-T$, is the smallest of the condition numbers of Markov chains with respect to the $(p,\infty)$--norm pair. We use this result to identify the smallest condition number of Markov chains among the $(\infty,\infty)$-- and $(1,\infty)$--norm pairs. These are, respectively, $\kappa_3$ and $\kappa_6$ in the Cho--Meyer list of $8$ condition numbers.\\ Kirkland has studied $\kappa_3(T)$. He has shown that $\kappa_3(T)\geq\frac{n-1}{2n}$ and he has characterized the properties of transition matrices for which equality holds. We reprove the inequality $2\kappa_3(T)\leq \kappa_6(T)$, which appears in the Cho--Meyer paper, and we characterize the transition matrices $T$ for which $\kappa_6(T)=\frac{n-1}{n}$. There is only one such matrix: $T=(J_n-I)/(n-1)$, where $J_n$ is the $n\times n$ matrix of all $1$'s. This result demands the development of the cyclic structure of a doubly stochastic matrix with a zero diagonal.\\ Research supported by NSA Grant No. 06G--232. %Markov chains, stationary distribution, stochastic matrix, group inverses, sensitivity analysis, perturbation theory, condition numbers.","15A51","65F35","This talk is for the Nonnegative and Eventually Nonnegative Matrix Mini-symposium.","13:44:40","Wed Mar 12 2008","137.99.17.12" %"Singer %Ivan","ivan.singer@imar.ro"," \section*{Max-min convexity} By {\sl Ivan Singer}. \medskip \noindent The max-min semifield is the set $\overline{R}=R\cup \{-\infty ,+\infty \}$ endowed with the operations $\oplus =\max$, $\otimes =\min$. We study the semimodule $\overline{R}^{n}=\overline{R}\times \cdots \times \overline{R}$ ($n$ times), with the operations $\oplus$ and $\otimes$ defined componentwise. A subset $G$ of $\overline{R}^{n}$ (respectively, a function $f:\overline{R}^{n}\rightarrow \overline{R}$) is said to be max-min convex if the relations $x,y\in G$ (respectively, $x,y\in \overline{R}^{n}$) and $\alpha ,\beta \in \overline{R}$, $\alpha \oplus \beta =+\infty$, where $+\infty$ is the neutral element for $\otimes =\min$, imply $(\alpha \otimes x)\oplus (\beta \otimes y)\in G$ (respectively, $f((\alpha \otimes x)\oplus (\beta \otimes y))\leq (\alpha \otimes f(x))\oplus (\beta \otimes f(y))$). We give some new results on max-min convexity of sets and functions in $\overline{R}^{n}$ (e.g. on segments, semispaces, separation, and multi-order convexity) that correspond to results for max-plus convexity, replacing $\otimes =+$ of the max-plus case by the semigroup operation $\otimes =\min$ of the max-min case. \medskip \noindent References: K. Zimmermann, Convexity in semimodules, Ekonom.-Mat. Obzor 17 (1981), 199--213; V. Nitica and I. Singer, Contributions to max-min convex geometry. I: Segments, Linear Algebra Appl. 428 (2008), 1439--1459; II: Semispaces and convex sets, ibidem, 2085--2115. %Max-min convex set; Max-min convex function","08A72","52A01","MS7","03:16:01","Thu Mar 13 2008","92.80.46.206"
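\medskip \noindent A small worked point on a max-min segment (illustrative numbers): in $\overline{R}^{2}$ take $x=(1,3)$, $y=(2,0)$, $\alpha =2$, $\beta =+\infty$, so that $\alpha \oplus \beta =+\infty$; then $\alpha \otimes x=(\min(2,1),\min(2,3))=(1,2)$, $\beta \otimes y=(2,0)$, and $(\alpha \otimes x)\oplus (\beta \otimes y)=(\max(1,2),\max(2,0))=(2,2)$ is a point of the max-min segment between $x$ and $y$.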
We give some new results on max-min convexity of sets and functions in $% \overline{R}^{n}$ (e.g. on segments, semispaces, separation, multi-order convexity, ...) that correspond to some results for max-plus convexity, replacing $\otimes =+$ of the max-plus case by the semi-group operation $% \otimes =\min $ of the max-min case. References K. Zimmermann, Convexity in semimodules. Ekonom.-Mat. Obzor 17 (1981), 199-213. V. Nitica and I. Singer, Contributions to max-min convex geometry. I: Segments. Lin. Alg. Appl. 428 (2008), 1439-1459. II: Semispaces and convex sets. Ibidem 2085-2115. %Max-min convex set; Max-min convex function","08A72","52A01","","03:17:55","Thu Mar 13 2008","92.80.46.206" %"Singer %Ivan","ivan.singer@imar.ro"," \section* MS7 {Your title here} Max-min convexity \medskip \noindent The max-min semifield is the set $\overline{R}=R\cup \{-\infty ,+\infty \}$ endowed with the operations $\oplus =\max ,\otimes =\min $. We study the semimodule $\overline{R}^{n}=\overline{R}\times ...\times \overline{R}$ ($n$ times), with the operations $\oplus $ and $\otimes $ defined componentwise. A subset $G$ of $\overline{R}^{n}$ (respectively, a function $f:\overline{R}% ^{n}\rightarrow \overline{R}$) is said to be max-min convex if the relations $x,y\in G$ (respectively, $x,y\in \overline{R}^{n}$) and $\alpha ,\beta \in \overline{R}$, $\alpha \oplus \beta =+\infty $, where $+\infty $ is the neutral element for $\otimes =\min $, imply $(\alpha \otimes x)\oplus (\beta \otimes y)\in G$ (respectively, $f((\alpha \otimes x)\oplus (\beta \otimes y))\leq (\alpha \otimes f(x))\oplus (\beta \otimes f(y)$). We give some new results on max-min convexity of sets and functions in $% \overline{R}^{n}$ (e.g. on segments, semispaces, separation, multi-order convexity, ...) that correspond to some results for max-plus convexity, replacing $\otimes =+$ of the max-plus case by the semi-group operation $% \otimes =\min $ of the max-min case. References K. Zimmermann, Convexity in semimodules. Ekonom.-Mat. Obzor 17 (1981), 199-213. V. Nitica and I. Singer, Contributions to max-min convex geometry. I: Segments. Lin. Alg. Appl. 428 (2008), 1439-1459. II: Semispaces and convex sets. Ibidem 2085-2115. %Max-min convex set; Max-min convex function","08A72","52A01","","03:19:28","Thu Mar 13 2008","92.80.46.206" %"Singer %Ivan","ivan.singer@imar.ro"," \section*{Your title here} MS7: Max-min convexity By {\sl names of all authors here} Ivan Singer \medskip \noindent Insert your abstract here The max-min semifield is the set $\overline{R}=R\cup \{-\infty ,+\infty \}$ endowed with the operations $\oplus =\max ,\otimes =\min $. We study the semimodule $\overline{R}^{n}=\overline{R}\times ...\times \overline{R}$ ($n$ times), with the operations $\oplus $ and $\otimes $ defined componentwise. A subset $G$ of $\overline{R}^{n}$ (respectively, a function $f:\overline{R}% ^{n}\rightarrow \overline{R}$) is said to be max-min convex if the relations $x,y\in G$ (respectively, $x,y\in \overline{R}^{n}$) and $\alpha ,\beta \in \overline{R}$, $\alpha \oplus \beta =+\infty $, where $+\infty $ is the neutral element for $\otimes =\min $, imply $(\alpha \otimes x)\oplus (\beta \otimes y)\in G$ (respectively, $f((\alpha \otimes x)\oplus (\beta \otimes y))\leq (\alpha \otimes f(x))\oplus (\beta \otimes f(y)$). We give some results on max-min convexity of sets and functions in $% \overline{R}^{n}$ (e.g. on segments, semispaces, separation, multi-order convexity, ...) 
that correspond to some results for max-plus convexity, replacing $\otimes =+$ of the max-plus case by the semi-group operation $% \otimes =\min $ of the max-min case. References K. Zimmermann, Convexity in semimodules. Ekonom.-Mat. Obzor 17 (1981), 199-213. V. Nitica and I. Singer, Contributions to max-min convex geometry. I: Segments. Lin. Alg. Appl. 428 (2008), 1439-1459. II: Semispaces and convex sets. Ibidem 2085-2115. %Max-min convex set; Max-min convex function","08A72","52A01","","03:31:20","Thu Mar 13 2008","92.80.46.206" %"Singer %Ivan","ivan.singer@imar.ro"," \section*{Your title here} MS7: Max-min convexity By {\sl names of all authors here} Ivan Singer \medskip \noindent Insert your abstract here The max-min semifield is the set $\overline{R}=R\cup \{-\infty ,+\infty \}$ endowed with the operations $\oplus =\max ,\otimes =\min $. We study the semimodule $\overline{R}^{n}=\overline{R}\times ...\times \overline{R}$ ($n$ times), with the operations $\oplus $ and $\otimes $ defined componentwise. A subset $G$ of $\overline{R}^{n}$ (respectively, a function $f:\overline{R}% ^{n}\rightarrow \overline{R}$) is said to be max-min convex if the relations $x,y\in G$ (respectively, $x,y\in \overline{R}^{n}$) and $\alpha ,\beta \in \overline{R}$, $\alpha \oplus \beta =+\infty $, where $+\infty $ is the neutral element for $\otimes =\min $, imply $(\alpha \otimes x)\oplus (\beta \otimes y)\in G$ (respectively, $f((\alpha \otimes x)\oplus (\beta \otimes y))\leq (\alpha \otimes f(x))\oplus (\beta \otimes f(y)$). We give some results on max-min convexity of sets and functions in $% \overline{R}^{n}$ (e.g. on segments, semispaces, separation, multi-order convexity, ...) that correspond to some results for max-plus convexity, replacing $\otimes =+$ of the max-plus case by the semi-group operation $% \otimes =\min $ of the max-min case. References K. Zimmermann, Convexity in semimodules. Ekonom.-Mat. Obzor 17 (1981), 199-213. V. Nitica and I. Singer, Contributions to max-min convex geometry. I: Segments. Lin. Alg. Appl. 428 (2008), 1439-1459. II: Semispaces and convex sets. Ibidem 2085-2115. %Max-min convex set; Max-min convex function","08A72","52A01","","03:35:15","Thu Mar 13 2008","92.80.46.206" %"Mart{\'\i}nez %Jos\'e-Javier","jjavier.martinez@uah.es"," \section*{Polynomial regression in the Bernstein basis} By {\sl Ana Marco, Jos\'e-Javier Mart{\'\i}nez}. \medskip \noindent The problem of polynomial regression in which the usual monomial basis is replaced by the Bernstein basis is considered. The coefficient matrix $A$ of the overdetermined system to be solved in the least-squares sense is then a rectangular Bernstein-Vandermonde matrix. In order to use the method based on the QR decomposition which was developed in the celebrated paper [1], the first stage will consist of computing the bidiagonal decomposition of the coefficient matrix $A$ by means of an extension to the rectangular case of the algorithm presented in [3]. Starting from that bidiagonal decomposition, an algorithm for obtaining the QR decomposition of $A$ due to Koev [2] is then applied. Finally, a triangular system is solved by using the bidiagonal decomposition of the $R$-factor of $A$. Some numerical experiments showing the behaviour of our approach are included. \bigskip [1] G. Golub: Numerical methods for solving linear least squares problems. Numerische Mathematik 7, 206-216 (1965). \medskip [2] P. Koev: Accurate computations with totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 29(3), 731-751 (2007). 
\medskip A. Marco, J.-J. Mart{\'\i}nez: A fast and accurate algorithm for solving Bernstein-Vandermonde linear systems. Linear Algebra Appl. 422, 616-628 (2007) %Least squares; Bernstein basis; Bidiagonal decomposition","65F05","65F20","","06:38:05","Thu Mar 13 2008","212.128.76.118" %"Mart{\'\i}nez %Jos\'e-Javier","jjavier.martinez@uah.es"," \section*{Polynomial regression in the Bernstein basis} By {\sl Ana Marco, Jos\'e-Javier Mart{\'\i}nez}. \medskip \noindent The problem of polynomial regression in which the usual monomial basis is replaced by the Bernstein basis is considered. The coefficient matrix $A$ of the overdetermined system to be solved in the least-squares sense is then a rectangular Bernstein-Vandermonde matrix. In order to use the method based on the QR decomposition which was developed in the celebrated paper [1], the first stage will consist of computing the bidiagonal decomposition of the coefficient matrix $A$ by means of an extension to the rectangular case of the algorithm presented in [3]. Starting from that bidiagonal decomposition, an algorithm for obtaining the QR decomposition of $A$ due to Koev [2] is then applied. Finally, a triangular system is solved by using the bidiagonal decomposition of the $R$-factor of $A$. Some numerical experiments showing the behaviour of our approach are included. \bigskip [1] G. Golub: Numerical methods for solving linear least squares problems. Numerische Mathematik 7, 206-216 (1965). \medskip [2] P. Koev: Accurate computations with totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 29(3), 731-751 (2007). \medskip A. Marco, J.-J. Mart{\'\i}nez: A fast and accurate algorithm for solving Bernstein-Vandermonde linear systems. Linear Algebra Appl. 422, 616-628 (2007) %Least squares; Bernstein basis; Bidiagonal decomposition","65F05","65F20","","06:40:50","Thu Mar 13 2008","212.128.76.118" %"Mart{\'\i}nez %Jos\'e-Javier","jjavier.martinez@uah.es"," \section*{Polynomial regression in the Bernstein basis} By {\sl Ana Marco, Jos\'e-Javier Mart{\'\i}nez}. \medskip \noindent The problem of polynomial regression in which the usual monomial basis is replaced by the Bernstein basis is considered. The coefficient matrix $A$ of the overdetermined system to be solved in the least-squares sense is then a rectangular Bernstein-Vandermonde matrix. In order to use the method based on the QR decomposition which was developed in the celebrated paper [1], the first stage will consist of computing the bidiagonal decomposition of the coefficient matrix $A$ by means of an extension to the rectangular case of the algorithm presented in [3]. Starting from that bidiagonal decomposition, an algorithm for obtaining the QR decomposition of $A$ due to Koev [2] is then applied. Finally, a triangular system is solved by using the bidiagonal decomposition of the $R$-factor of $A$. Some numerical experiments showing the behaviour of our approach are included. \bigskip [1] G. Golub: Numerical methods for solving linear least squares problems. Numerische Mathematik 7, 206-216 (1965). \medskip [2] P. Koev: Accurate computations with totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 29(3), 731-751 (2007). \medskip A. Marco, J.-J. Mart{\'\i}nez: A fast and accurate algorithm for solving Bernstein-Vandermonde linear systems. Linear Algebra Appl. 
%"Klein
%Andre","A.A.B.Klein@uva.nl","
\section*{Tensor Sylvester matrices and information matrices of multiple stationary processes}
By {\sl Andr\'{e} Klein}.
\medskip \noindent Consider the matrix polynomials $A(z)$ and $B(z)$ given by $A(z)=\sum\limits_{j=0}^{p}A_{j}z^{j}$ and $B(z)=\sum\limits_{j=0}^{q}B_{j}z^{j}$, where $A_{0}\equiv B_{0}\equiv I_{n}$.\newline Gohberg and Lerer [1] study the resultant property of the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)\triangleq \mathcal{S}(-B\otimes I_{n},I_{n}\otimes A)$, that is,
\[ \mathcal{S}^{\otimes }(-B,A)=\left( \begin{array}{ccccccc} \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & \left( -I_{n}\right) \otimes I_{n} & \left( -B_{1}\right) \otimes I_{n} & \cdots & \left( -B_{q}\right) \otimes I_{n} \\ I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p} & 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \ddots & \ddots & & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0_{n^{2}\times n^{2}} \\ 0_{n^{2}\times n^{2}} & \cdots & 0_{n^{2}\times n^{2}} & I_{n}\otimes I_{n} & I_{n}\otimes A_{1} & \cdots & I_{n}\otimes A_{p} \end{array} \right). \]
In [1] it is proved that the matrix polynomials $A(z)$ and $B(z)$ have at least one common eigenvalue if and only if $\det \mathcal{S}^{\otimes }(-B,A)=0$, that is, if and only if the matrix $\mathcal{S}^{\otimes }(-B,A)$ is singular. In other words, the tensor Sylvester matrix $\mathcal{S}^{\otimes }(-B,A)$ becomes singular if and only if the scalar polynomials $\det A(z)$ and $\det B(z)$ have at least one common root. Consequently, it is a multiple resultant. In [2], this property is extended to the Fisher information matrix of a stationary vector autoregressive and moving average (VARMA) process. The purpose of this talk is to display a representation of the Fisher information matrix of a stationary VARMAX process in terms of tensor Sylvester matrices, where the X stands for an exogenous or control variable. The VARMAX process is in common use in stochastic systems and control.
%Tensor Sylvester matrix, Fisher information matrix","15A23","15A69","","09:03:32","Thu Mar 13 2008","145.18.180.139"
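\smallskip \noindent (In the simplest case this resultant property is the classical one; a sketch in our own notation, for $n=1$, $p=q=1$ and scalar polynomials $A(z)=1+a_{1}z$, $B(z)=1+b_{1}z$ with $a_{1},b_{1}\neq 0$: the tensor Sylvester matrix reduces to the classical $2\times 2$ Sylvester matrix
\[ \mathcal{S}^{\otimes }(-B,A)=\begin{pmatrix} -1 & -b_{1}\\ 1 & a_{1} \end{pmatrix},\qquad \det \mathcal{S}^{\otimes }(-B,A)=b_{1}-a_{1}, \]
which vanishes exactly when $a_{1}=b_{1}$, i.e., exactly when $A(z)$ and $B(z)$ share their root $-1/a_{1}$.)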
%"Uhlig
%Frank","uhligfd@auburn.edu","
\section*{Convex and Non-convex Optimization Problems for the Field of Values of a Matrix}
By {\sl Frank Uhlig, Department of Mathematics and Statistics, Auburn University, Auburn, AL 36849--5310, USA; uhligfd@auburn.edu}.
\medskip \noindent We introduce and study numerical algorithms that compute the minimal and maximal distances between $0 \in \mathbb{C}$ and points in the field of values $F(A) = \{ x^*Ax \mid x \in \mathbb{C}^n ,\ \|x\|_2 = 1\} \subset \mathbb{C}$ for a complex matrix $A_{n,n}$. Finding the minimal distance from $0 \in \mathbb{C}$ to $F(A)$ is a convex optimization problem if $0 \notin F(A)$, and thus it has a unique solution, called the Crawford number, whose magnitude carries information on the stability margin of the associated system. If $0 \in F(A)$, this is a non-convex optimization problem, and consequently there can be multiple solutions, or local minima that are not global. Non-convexity also holds for the maximal distance problem between points in $F(A)$ and $0 \in \mathbb{C}$. This maximal distance is commonly called the numerical radius $\mathrm{numrad}(A)$, for which the inequality $\rho(A) \leq \mathrm{numrad}(A) \leq \|A\|$ is well established.
\\ Both cases can be solved efficiently numerically by using ideas from geometric computing, eigenanalyses of linear combinations of the Hermitian and skew-Hermitian parts of $A$, and the rotation method introduced by C. R. Johnson in the 1970s to compute the boundary of the field of values.
%field of values, quadratic form, Crawford number, numerical radius, geometric computing, eigenvalue, convexity, convex optimization, non-convex optimization, efficiency","65F30","15A60, 1","","10:20:14","Thu Mar 13 2008","131.204.45.199"
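\smallskip \noindent (A standard small example, ours: for
\[ A=\begin{pmatrix} 0 & 2\\ 0 & 0 \end{pmatrix} \]
the field of values $F(A)$ is the closed disk of radius $1$ centered at the origin, so $0\in F(A)$ and the minimal distance problem is non-convex with optimal value $0$, while $\mathrm{numrad}(A)=1$; since $\rho (A)=0$ and $\Vert A\Vert _{2}=2$, this also illustrates $\rho (A)\leq \mathrm{numrad}(A)\leq \Vert A\Vert$.)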
%"Gassó
%Maria T.","mgasso@mat.upv.es","
\section*{The class of inverse-positive matrices with checkerboard pattern}
By {\sl Manuel F. Abad, Maria T. Gass\'o and Juan R. Torregrosa}.
\medskip \noindent In economics as well as in other sciences, the inverse-positivity of real square matrices has been an important topic. A nonsingular real matrix $A$ is said to be inverse-positive if all the elements of its inverse are nonnegative. An inverse-positive matrix that is also a $Z$-matrix is a nonsingular $M$-matrix, so the class of inverse-positive matrices contains the nonsingular $M$-matrices, which have been widely studied and whose applications, for example, in iterative methods, dynamic systems, economics, mathematical programming, etc., are well known. Of course, not every inverse-positive matrix is an $M$-matrix. For instance, \[ A=\left( \begin{array} {rr} -1 & 2 \\ 3 & -1 \end{array} \right) \] is an inverse-positive matrix that is not an $M$-matrix. The concept of inverse-positivity is preserved by multiplication, left or right positive diagonal multiplication, positive diagonal similarity and permutation similarity. The problem of characterizing inverse-positive matrices has been extensively dealt with in the literature (see for instance \cite{BP}). The interest of this problem arises from the fact that a linear mapping $F(x)=Ax$ from ${R}^{n}$ into itself is inverse isotone if and only if $A$ is inverse-positive. In particular, this allows us to ensure the existence of a positive solution of the linear system $Ax=b$ for any $b \in R^{n}_{+}$. In this paper we present several matrices that very often occur in relation to systems of linear or nonlinear equations in a wide variety of areas, including finite difference methods for boundary value problems for partial differential equations, the Leontief model of circulating capital without joint production, and Markov processes in probability and statistics. For example, matrices that for size $5 \times 5$ have the form \[ A=\left( \begin{array} {rrrrr} 1 & -a & 1 & -a & 1 \\ 1 & 1 & -a & 1 & -a \\ -a & 1 & 1 & -a & 1 \\ 1 & -a & 1 & 1 & -a \\ -a & 1 & -a & 1 & 1 \end{array} \right), \] where $a$ is a real parameter with an economic interpretation. Are these matrices inverse-positive? We study the answer to this question and we analyze when the concept of inverse-positivity is preserved by the Hadamard product $A\circ A^{-1}$. In this work we present some conditions that yield new characterizations of inverse-positive matrices. Johnson in \cite{J1} studied the possible sign patterns of a matrix which are compatible with inverse-positivity. Following his results, we analyze the inverse-positive concept for a particular type of pattern: the checkerboard pattern. An $n \times n$ real matrix $A=(a_{i,j})$ is said to have a checkerboard pattern if sign$(a_{i,j})=(-1)^{i+j}$, $i,j=1,2,\ldots,n$. We study in this paper the inverse-positivity of bidiagonal, tridiagonal and lower (upper) triangular matrices with checkerboard pattern, and we obtain characterizations of the inverse-positivity for each class of matrices. Several authors have investigated the Hadamard product of matrices. Johnson \cite{J2} showed that if the sign pattern is properly adjusted, the Hadamard product of $M$-matrices is again an $M$-matrix, and that for any pair $M$, $N$ of $M$-matrices the Hadamard product $M\circ N^{-1}$ is again an $M$-matrix. This result does not hold in general for inverse-positive matrices. We analyze when the Hadamard product $M \circ N^{-1}$, for $M$, $N$ checkerboard pattern inverse-positive matrices, is an inverse-positive matrix. \begin{references}{99} \bibitem{BP} A. Berman, R.J. Plemmons, {\em Nonnegative Matrices in the Mathematical Sciences}, SIAM, 1994. \bibitem{J2} C.R. Johnson, {\em A Hadamard product involving $M$-matrices}, Linear Algebra and its Applications, 4 (1977) 261-264. \bibitem{J1} C.R. Johnson, {\em Sign patterns of inverse nonnegative matrices}, Linear Algebra and its Applications, 55 (1983) 69-80. \end{references}
%inverse-positive matrix, sign pattern, Hadamard product.","15A09","15A48","","06:29:20","Fri Mar 14 2008","158.42.48.6"
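\smallskip \noindent (A minimal instance of the kind of question treated, our example: the $2\times 2$ checkerboard-pattern matrix
\[ A=\left( \begin{array}{rr} 1 & -a\\ -a & 1 \end{array}\right),\quad a>0, \qquad A^{-1}=\frac{1}{1-a^{2}}\left( \begin{array}{rr} 1 & a\\ a & 1 \end{array}\right), \]
is inverse-positive precisely for $0<a<1$; for $a>1$ all entries of $A^{-1}$ are negative.)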
%"Boettcher
%Albrecht","aboettch@mathematik.tu-chemnitz.de","
\section*{Toeplitz matrices with Fisher-Hartwig symbols}
By {\sl Albrecht B\"ottcher}.
\medskip \noindent Asymptotic properties of large Toeplitz matrices are best understood if the matrix is constituted by the Fourier coefficients of a smooth function without zeros on the unit circle and with winding number zero. If at least one of these conditions on the generating function is violated, one speaks of Toeplitz matrices with Fisher-Hartwig symbols. \smallskip The talk is intended as an introduction to the realm of Toeplitz matrices with Fisher-Hartwig symbols for a broad audience. We show that several highly interesting and therefore very popular Toeplitz matrices are just matrices with a Fisher-Hartwig symbol, and that many questions on general Toeplitz matrices, for example the asymptotics of the extremal eigenvalues, are nothing but specific problems for matrices with Fisher-Hartwig symbols. We discuss both classical and recent results concerning the asymptotic behavior of determinants, condition numbers, eigenvalues, and eigenvectors as the matrix dimension goes to infinity.
%Toeplitz matrix, Fisher-Hartwig, spectral theory, determinant","47B35","15A18","This is a plenary lecture.","06:46:45","Mon Mar 17 2008","134.109.40.52"
%"Sergeev
%Sergey","sergiej@gmail.com","
\section*{On Kleene stars and intersection of finitely generated semimodules}
By {\sl Sergey Sergeev}.
\medskip \noindent It is known that Kleene stars are fundamental objects in max-algebra and in other algebraic structures with idempotent addition. They play an important role in solving classical problems in spectral theory, and also in other respects. On the other hand, the approach of tropical convexity puts forward the tropical cellular decomposition, meaning that any tropical polytope (i.e., finitely generated semimodule) can be cut into a finite number of convex pieces and subsequently treated as a cellular complex. We show that any convex piece of this complex is the max-algebraic column span of a uniquely defined Kleene star. We provide some evidence that the tropical cellular decomposition can be used as a purely max-algebraic tool, with the main focus on the problem of finding a point in the intersection of several finitely generated semimodules.
%max-algebra, Kleene star, decomposition","52A30","15A39","","13:31:17","Mon Mar 17 2008","147.188.55.191"
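\smallskip \noindent (For readers new to the notion, a sketch in max-plus notation, $\oplus =\max$, $\otimes =+$: the Kleene star of a square matrix $A$ is $A^{\ast }=I\oplus A\oplus A^{2}\oplus \cdots$, where $I$ has $0$ on the diagonal and $-\infty$ elsewhere; e.g., for
\[ A=\begin{pmatrix} 0 & -1\\ -2 & 0 \end{pmatrix} \]
one checks $A\otimes A=A$, so $A^{\ast }=I\oplus A=A$, and the max-plus column span of this Kleene star is a tropically convex set of the kind appearing in the decomposition.)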
%"Butkovic
%Peter","p.butkovic@bham.ac.uk","
\section*{On the permuted max-algebraic eigenvector problem}
By {\sl Peter Butkovic}.
\medskip \noindent Let $a\oplus b=\max (a,b)$, $a\otimes b=a+b$ for $a,b\in \overline{\mathbb{R}}:=\mathbb{R}\cup \{-\infty \}$, and extend these operations to matrices and vectors as in conventional linear algebra. The following \textit{max-algebraic eigenvector problem} has been intensively studied in the past: Given $A\in \overline{\mathbb{R}}^{n\times n}$, find all $x\in \overline{\mathbb{R}}^{n}$, $x\neq (-\infty ,...,-\infty )^{T}$ (\textit{eigenvectors}), such that $A\otimes x=\lambda \otimes x$ for some $\lambda \in \overline{\mathbb{R}}$. In our talk we deal with the \textit{permuted eigenvector problem}: Given $A\in \overline{\mathbb{R}}^{n\times n}$ and $x\in \overline{\mathbb{R}}^{n}$, is it possible to permute the components of $x$ so that the arising vector $x^{\prime }$ is a (max-algebraic) eigenvector of $A$? This problem can be proved to be $NP$-complete using a polynomial transformation from BANDWIDTH. As a by-product, the following \textit{permuted max-linear system problem} can also be shown $NP$-complete: Given $A\in \overline{\mathbb{R}}^{m\times n}$ and $b\in \overline{\mathbb{R}}^{m}$, is it possible to permute the components of $b$ so that for the arising vector $b^{\prime }$ the system $A\otimes x=b^{\prime }$ has a solution? Both problems can be solved in polynomial time when $n$ does not exceed $3$.
%Eigenvector; Permutation; NP-complete","15A18","68Q25","","04:52:25","Tue Mar 18 2008","81.105.65.177"
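\smallskip \noindent (A toy instance, ours: for
\[ A=\begin{pmatrix} 0 & 3\\ -1 & 0 \end{pmatrix} \]
the unique max-algebraic eigenvalue is the maximum cycle mean $\lambda =\max \{0,\,0,\,\tfrac{3+(-1)}{2}\}=1$. The vector $x=(0,2)^{T}$ is not an eigenvector, since $A\otimes x=(5,2)^{T}$ is not of the form $\lambda \otimes x$; permuting its components gives $x^{\prime }=(2,0)^{T}$, for which indeed $A\otimes x^{\prime }=(3,1)^{T}=1\otimes x^{\prime }$.)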
%"Klasa-Bompoint
%Jacqueline","jklasa@dawsoncollege.qc.ca","
\section*{FEW PEDAGOGICAL SCENARIOS IN LINEAR ALGEBRA WITH CABRI AND MAPLE}
By {\sl Jacqueline Klasa-Bompoint, Collège Dawson, Montréal, Canada; jklasa@dawsoncollege.qc.ca}.
\medskip \noindent With the appearance of very rapidly improving technologies, since the 1990s we have faced many reform movements placing much more importance on the visualization of mathematical concepts, together with more socialization (collaborative learning). To name a few reform groups in the USA: the Harvard Group for Calculus and for Linear Algebra; ATLAST, organized by S. Leon after the ILAS symposium of 1992; and LACSG, started by D. Lay in 1990 and then continued by D. Carlson (1993) and many others. However, some researchers like J.-P. Dorier \& A. Sierpinska were not optimistic and declared: ``It is commonly claimed in the discussions about the teaching and learning of linear algebra that linear algebra courses are badly designed and badly taught, and that no matter how it is taught, linear algebra remains a cognitively and conceptually difficult subject.'' On the other hand, M. Artigue strongly advocates the use of CAS's, but with a constant awareness that the mathematics learned in such a software environment is changing. How do we really teach linear algebra now? See the standard Anton textbook and then the much praised book ``Linear Algebra and its Applications'' written in 1994 by D. Lay. How hard is it really now to teach and to learn this topic? We shall repeat, like J. Hillel, A. Sierpinska \& T. Dreyfus, that the teaching of linear algebra confronts students with many cognitive problems related to three intertwined thinking modes: geometric, computational (with matrices) and algebraic (symbolic). We could follow the APOS theory of E. Dubinsky and see that it will be necessary for the teacher to proceed to a genetic decomposition of every mathematical concept of linear algebra before being able to conceive a pedagogical scenario that will have to bring students from the ``action'' to the more elaborated state of ``process'' and then, with luck, make them reach the most abstract levels of ``objects'' and even higher structured ``schemes''. While devising classes and computer labs for my students in linear algebra, I was inspired by the good ideas presented by the mentioned authors and many others, such as G. Bagni, J.-L. Dorier and Fischbein, D. Gentner, G. Harel, J. Hillel, and J.G. Molina Zavaleta. I am a mathematician who teaches in a CEGEP, a special college in the province of Québec, Canada. Pedagogical scenarios based on Cabri and Maple will be presented in this study for a few stumbling blocks in the learning of linear algebra: linear transformations, eigenvectors and eigenvalues, quadratic forms and conics with changes of bases, and finally singular values. When immersed in this software environment, I restrict all the demonstrations to $\mathbb{R}^2$ and $\mathbb{R}^3$. Can visualization and manipulation improve and facilitate the learning of linear algebra? As I am biased, of course I will say yes; really, we would need a strong evaluation and analysis of this teaching procedure to be able to give answers. As Ed Dubinsky would say, ``This situation provides us with the opportunity to build a synthesis between the abstract and concrete\ldots\ the interplay between concrete phenomena and abstract thinking.'' I will add also that students working in teams around computers (or even graphic calculators), only coached by the teacher at times, become experts in the discipline they experiment with. About the roles of the CAS Maple and the geometrical software, we will agree with the Cabrilog slogan ``Cabri makes tough maths concepts easier to learn thanks to its kinaesthetic learning approach!'', while Maple acts like a good big brother, doing all the boring calculations for the students and also producing instructive animations, unfortunately mostly programmed by the teacher.
%Scenarios, software Cabri Maple","97","97C80","Also 97U70","12:45:48","Tue Mar 18 2008","190.160.166.162"
%"Weaver
%James","jweaver@uwf.edu","
\section*{Nonsingularity of Divisor Tournaments}
By {\sl Rohan Hemasinha (Dept. of Math/Stat, Univ. of West Florida, Pensacola, FL 32514, USA; rhemasin@uwf.edu), Jeffrey L. Stuart (Dept. of Mathematics, Pacific Lutheran Univ., Tacoma, WA 98447, USA; jeffrey.stuart@plu.edu) and James R. Weaver (speaker; Dept. of Math/Stat, Univ. of West Florida, Pensacola, FL 32514, USA; jweaver@uwf.edu)}.
\medskip \noindent Matrix theoretic properties and examples of divisor tournaments are discussed. Emphasis is placed on results and conjectures about the nonsingularity of the adjacency matrix of a divisor tournament. For an integer $n>2$, the divisor tournament $D(T_{n})$ (a directed graph on the vertices $2,3,\dots ,n$) is defined by: $i$ is adjacent to $j$ if $i$ divides $j$, otherwise $j$ is adjacent to $i$, for $2\leq i<j\leq n$. The adjacency matrix $T_{n}$ of the directed graph $D(T_{n})$ with vertex set $\{2,3,\dots ,n\}$ is the $(n-1)\times (n-1)$ matrix $[t_{ij}]$ defined by $t_{ij}=1$ and $t_{ji}=0$ if $i\mid j$, and $t_{ij}=0$ and $t_{ji}=1$ if $i\nmid j$, for $2\leq i<j\leq n$.
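\smallskip \noindent (A concrete instance, our computation: for $n=4$ the divisor tournament on the vertices $\{2,3,4\}$ has arcs $2\rightarrow 4$ (as $2\mid 4$), $3\rightarrow 2$ (as $2\nmid 3$) and $4\rightarrow 3$ (as $3\nmid 4$), so
\[ T_{4}=\begin{pmatrix} 0 & 0 & 1\\ 1 & 0 & 0\\ 0 & 1 & 0 \end{pmatrix},\qquad \det T_{4}=1, \]
i.e., this adjacency matrix is nonsingular.)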
\medskip \noindent We consider linear time-invariant systems
\[ \Sigma :\ \left\{ \begin{array}{rcll} \dot{x}(t) &=& Ax(t) + Bu(t), & \quad t > 0, \quad x(0)=x^0, \\ y(t) &=& Cx(t) + Du(t), & \quad t \geq 0, \end{array} \right. \]
with $A\in \mathbf{R}^{n\times n}$, $B\in \mathbf{R}^{n\times m}$, and $C\in\mathbf{R}^{p\times n}$ arising, e.g., from the discretization and linearization of parabolic PDEs. We will assume that the system $\Sigma$ is large-scale with $n \gg m,\, p$ and that the system is unstable, satisfying \[ \Lambda(A)\cap \mathbf{C}^+ \ne \emptyset,\quad \Lambda(A)\cap \jmath\mathbf{R}=\emptyset. \] We further allow the system matrix $A$ to be dense, provided that a {\em data-sparse} representation exists. To reduce the dimension of the system $\Sigma$, we apply an approach based on the controllability and observability Gramians of $\Sigma$. The numerical solution for these Gramians is obtained by solving two algebraic Bernoulli and two Lyapunov equations. As standard methods for the solution of matrix equations are of limited use for large-scale systems, we investigate approaches based on the {\em matrix sign function} method. To make this iterative method applicable in the large-scale setting, we incorporate structural information from the underlying PDE model into the approach. By using data-sparse matrix approximations, hierarchical matrix formats, and the corresponding formatted arithmetic, we obtain an efficient solver having linear-polylogarithmic complexity. Once the Gramians are computed, a reduced-order system can be obtained by applying the usual {\em balanced truncation method}.
%model reduction, unstable LTI systems, hierarchical matrices","93B40","65F10","talk is part of ""MS5, Linear Algebra in Model Reduction""","04:33:55","Wed Mar 26 2008","134.109.40.166"
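\smallskip \noindent (Background on the sign function method, a standard sketch: for a matrix $Z$ with no purely imaginary eigenvalues, $\mathrm{sign}(Z)$ is the limit of the Newton iteration
\[ Z_{0}=Z,\qquad Z_{k+1}=\tfrac{1}{2}\bigl( Z_{k}+Z_{k}^{-1}\bigr),\qquad k=0,1,2,\dots , \]
which converges quadratically; in the setting above, storing the iterates and performing the inversions in a data-sparse hierarchical format is what yields the linear-polylogarithmic complexity.)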
%"Feng
%Lihong","lihong.feng@mathematik.tu-chemnitz.de","
\section*{Model Order Reduction of Systems with Coupled Parameters\thanks{This research is supported by the Alexander von Humboldt-Foundation and by the research network \emph{SyreNe --- System Reduction for Nanoscale IC Design} within the program \textsl{Mathematics for Innovations in Industry and Services} (Mathematik f\"ur Innovationen in Industrie und Dienstleistungen) funded by the German Federal Ministry of Education and Science (BMBF).}}
By {\sl Peter Benner\footnotemark[2] \and Lihong Feng\thanks{Mathematics in Industry and Technology, Faculty of Mathematics, Chemnitz University of Technology, D-09107 Chemnitz, Germany; \texttt{benner@mathematik.tu-chemnitz.de, lihong.feng@mathematik.tu-chemnitz.de}}~\thanks{Corresponding author.}}.
\medskip \noindent We consider model order reduction of parametric systems with parameters which are nonlinear functions of the frequency parameter $s$. Such systems result from, for example, the discretization of electromagnetic systems with surface losses \cite{WittigSW06}. Since the parameters are functions of the frequency $s$, they are highly coupled with each other; we treat them as individual parameters when we implement model order reduction. By analyzing existing methods of computing the projection matrix for model order reduction, we show the applicability of each method and propose an optimized method for the parametric system considered in this paper. The transfer function of the parametric systems considered here takes the form \begin{equation} \label{trans1} H(s)=sB^\mathrm{T}(s^2I_n-1/\sqrt{s}\, D+ A)^{-1}B, \end{equation} where $A,D$ and $B$ are $n\times n$ and $n\times m$ matrices, respectively, and $I_n$ is the identity of suitable size. To apply parametric model order reduction to (\ref{trans1}), we first expand $H(s)$ into a power series about an expansion point $s_0$. Defining $\sigma_1:=\frac{1}{s^2\sqrt{s}}-\frac{1}{s_0^2\sqrt{s_0}}$ and $\sigma_2:=\frac{1}{s^2}-\frac{1}{s_0^2}$, we may use the three different methods below to compute a projection matrix $V$ and get the reduced-order transfer function \[ \hat{H}(s) =s\hat{B}^\mathrm{T}(s^2 I_r -1/\sqrt{s}\, \hat{D}+ \hat{A})^{-1}\hat{B}, \] where $\hat{A}=V^T A V$, $\hat{B}=V^T B$, etc., and $V$ is an $n\times r$ projection matrix with $V^T V= I_r$. To simplify notation, in the following we use $G:=I-\frac{1}{s_0^2\sqrt{s_0}}D+\frac{1}{s_0^2}A$, $B_M:=G^{-1}B$, $M_1:=G^{-1}D$, and $M_2:=-G^{-1}A$. \subsubsection*{Directly computing $V$} A simple and direct way of obtaining $V$ is to compute the coefficient matrices in the series expansion \begin{equation} \label{trans5} \begin{array}{rcl} H(s)&=&\frac{1}{s}B^\mathrm{T}[B_M+(M_1B_M\sigma_1 +M_2B_M\sigma_2 )+(M_1^2B_M\sigma_1^2 \\ && + (M_1M_2+M_2M_1)B_M\sigma_1\sigma_2 +M_2^2B_M\sigma_2^2)+(M_1^3B_M\sigma_1^3+\ldots)+\ldots] \end{array} \end{equation} by direct matrix multiplication and to orthogonalize these coefficients to get the matrix $V$ \cite{Daniel04}. After the coefficients $B_M$, $M_1B_M, M_2B_M$, $M_1^2B_M$, $(M_1M_2+M_2M_1)B_M$, $M_2^2B_M$, $M_1^3B_M$, $\ldots$ are computed, the projection matrix $V$ can be obtained by \begin{equation} \label{directV} \textrm{range}\{V\}=\textrm{orthogonalize}\{B_M, M_1B_M, M_2B_M, M_1^2B_M, (M_1M_2+M_2M_1)B_M, M_2^2B_M, M_1^3B_M, \ldots \}. \end{equation} Unfortunately, the coefficients quickly become linearly dependent due to numerical instability. In the end, the matrix $V$ is often so inaccurate that it does not possess the expected theoretical properties. \subsubsection*{Recursively computing $V$} The series expansion (\ref{trans5}) can also be written in the following form: \begin{equation} \label{trans6} H(s)=\frac{1}{s}B^\mathrm{T}[B_M+(\sigma_1 M_1+\sigma_2 M_2)B_M+\ldots+(\sigma_1 M_1+\sigma_2 M_2)^iB_M+\ldots]. \end{equation} Using (\ref{trans6}), we define \begin{equation} \label{recR} \begin{array}{rcl} R_0&=&B_M,\\ R_1&=&[M_1, M_2]R_0,\\ \vdots\\ R_j&=&[M_1,M_2]R_{j-1},\\ \vdots \end{array} \end{equation} We see that $R_0, R_1, \ldots, R_j, \ldots$ include all the coefficient matrices in the series expansion (\ref{trans6}). Therefore, we can use $R_0, R_1, \ldots, R_j, \ldots$ to generate the projection matrix $V$: \begin{equation} \label{recursiveV} \textrm{range}\{V\}=\textrm{colspan}\{R_0, R_1,\ldots, R_m\}. \end{equation} Here, $V$ can be computed employing the recursive relations between the $R_j$, $j=0,1,\ldots, m$, combined with the modified Gram-Schmidt process \cite{FengBICIAM07}. \subsubsection*{Improved algorithm for recursively computing $V$} Note that the coefficients $M_1M_2B_M$ and $M_2M_1B_M$ are two individual terms in (\ref{recR}), which are computed and orthogonalized sequentially within the modified Gram-Schmidt process. Observing that they are actually both coefficients of $\sigma_1\sigma_2$, they can be combined into one term during the computation, as in (\ref{directV}). Based on this, we develop an algorithm which computes $V$ in (\ref{directV}) by a modified Gram-Schmidt process. With this algorithm, the computation of $V$ is numerically stable, which guarantees the accuracy of the reduced-order model. Furthermore, the size of the reduced-order model is smaller than that of the reduced-order model derived by (\ref{recursiveV}). Therefore, this improved algorithm is well suited for the parametric system considered in this paper. \begin{thebibliography}{1} \bibitem{WittigSW06} T. Wittig, R. Schuhmann, and T. Weiland. \newblock Model order reduction for large systems in computational electromagnetics. \newblock {\em Linear Algebra and its Applications}, 415(2-3):499-530, 2006. \bibitem{Daniel04} L.~Daniel, O.C. Siong, L.S. Chay, K.H. Lee, and J.~White. \newblock A multiparameter moment-matching model-reduction approach for generating geometrically parameterized interconnect performance models. \newblock {\em IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst.}, 22 (5):678--693, 2004. \bibitem{FengBICIAM07} L. Feng and P. Benner. \newblock A Robust Algorithm for Parametric Model Order Reduction. \newblock {\em Proc. Appl. Math. Mech.}, 7, 2008 (to appear). \end{thebibliography}
%Model order reduction, parametric system, coupled parameters","65P","94C","the talk is part of ""MS5, Linear Algebra in Model Reduction""","05:31:53","Wed Mar 26 2008","134.109.40.173"
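\smallskip \noindent (The expansion (\ref{trans6}) can be traced back to (\ref{trans1}) in one line: factoring $s^{2}$ out of the pencil and using the definitions of $\sigma _{1}$, $\sigma _{2}$, $G$, $M_{1}$ and $M_{2}$,
\[ s^{2}I_{n}-\tfrac{1}{\sqrt{s}}D+A=s^{2}\Bigl( I_{n}-\tfrac{1}{s^{2}\sqrt{s}}D+\tfrac{1}{s^{2}}A\Bigr) =s^{2}\bigl( G-\sigma _{1}D+\sigma _{2}A\bigr) =s^{2}G\bigl( I_{n}-\sigma _{1}M_{1}-\sigma _{2}M_{2}\bigr), \]
so $H(s)=\frac{1}{s}B^{\mathrm{T}}\bigl( I_{n}-(\sigma _{1}M_{1}+\sigma _{2}M_{2})\bigr) ^{-1}B_{M}$, and the Neumann series of the inverse yields (\ref{trans6}).)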
%"Fasbender
%Heike","h.fassbender@tu-bs.de","
\section*{On the numerical solution of large-scale sparse discrete-time Riccati equations}
By {\sl Heike Fa\ss bender and Peter Benner}.
\medskip \noindent Inspired by a large-scale sparse discrete-time Riccati equation which arises in a spectral factorization problem, the efficient numerical solution of such Riccati equations is studied in this work. Spectral factorization is a crucial step in the solution of linear quadratic estimation and control problems. A variety of methods has been developed over the years for the computation of canonical spectral factors for processes with rational spectral densities; see, e.g., the survey \cite{SayK01}. One approach involves the spectral factorization via a discrete-time Riccati equation. Whenever possible, we consider the generalized discrete-time algebraic Riccati equation (DARE) \begin{eqnarray} 0 ~=~ \mathcal{R}(X) &=& C^TQC + A^T X A - E^T X E \label{dare} \\ &&\;\; - (A^T XB + C^T S) (R + B^TXB)^{-1} (B^T XA + S^T C), \nonumber\end{eqnarray} where $A, E \in \mathbb{R}^{n \times n}, B \in \mathbb{R}^{n \times m}, C \in \mathbb{R}^{p \times n}, Q \in \mathbb{R}^{p \times p}, R \in \mathbb{R}^{m \times m},$ and $S \in \mathbb{R}^{p \times m}.$ Furthermore, $Q$ and $R$ are assumed to be symmetric, and $A$ and $E$ are large and sparse. For the particular application above, we have \[ A = \left[ \begin{array}{cccc} 0 & 1 & \\ & \ddots & \ddots \\ &&0 & 1\\ &&& 0\end{array}\right]. \]
The function $\mathcal{R}(X)$ is a rational matrix function, and $\mathcal{R}(X) = 0$ defines a system of nonlinear equations. Newton's method for the numerical solution of DAREs can be formulated as follows:\\ \phantom{BBBB} {\bf for} {$k = 0,\,1,\,2,\,\ldots$}\\ \phantom{BBBBB} 1. $K_k \gets K(X_k) = (R + B^T X_k B)^{-1} (B^T X_k A + S^T C)$.\\ \phantom{BBBBB} 2. $A_k \gets A - B K_k$.\\ \phantom{BBBBB} 3. $\mathcal{R}_k \gets \mathcal{R}(X_k)$.\\ \phantom{BBBBB} 4. Solve for $N_k$ in the Stein equation \begin{equation}\label{stein} A_k^T N_k A_k - E^T N_k E = -\mathcal{R}_k. \end{equation} \phantom{BBBBB} 5. $X_{k+1} \gets X_k + N_k.$\\ \phantom{BBBB}{\bf end for}\\ The computational cost for this algorithm mainly depends upon the cost for the numerical solution of the Stein equation (\ref{stein}). This can be done using the Bartels--Stewart algorithm \cite{BarS72} or an extension to the case $E \not= I$ \cite{GarLAM92,GarWLAM92,Pen97}. The Bartels--Stewart algorithm is the standard direct method for the solution of Stein equations of small to moderate size. This method requires the computation of a Schur decomposition and is thus not appropriate for large-scale problems; the cost for the solution of the Stein equation is $\approx 73n^3$ flops. Iterative schemes have been developed, including the Smith method \cite{Smi68}, the sign-function method \cite{Rob80}, and the alternating direction implicit (ADI) iteration method \cite{Wac88}. Unfortunately, all of these methods compute the solution in dense form and hence require ${\cal O}(n^2)$ storage. In case the solution to the Stein equation has low numerical rank (i.e., the eigenvalues decay rapidly), one can take advantage of this low-rank structure to obtain approximate solutions in low-rank factored form. If the effective rank is $r \ll n$, then the storage is reduced from ${\cal O}(n^2)$ to ${\cal O}(nr)$. This approach will be discussed here in detail. \begin{thebibliography}{10} \bibitem{BarS72} {\sc R.H. Bartels and G.W. Stewart}, {\em Solution of the matrix equation ${AX}+{XB}={C}$: {A}lgorithm 432}, Comm. ACM, 15 (1972), pp.~820--826. \bibitem{GarLAM92} {\sc J.D. Gardiner, A.J. Laub, J.J. Amato, and C.B. Moler}, {\em Solution of the {S}ylvester matrix equation ${AXB}+{CXD}={E}$}, {ACM} Trans. Math. Software, 18 (1992), pp.~223--231. \bibitem{GarWLAM92} {\sc J.D. Gardiner, M.R. Wette, A.J. Laub, J.J. Amato, and C.B. Moler}, {\em Algorithm 705: A {F}ortran-77 software package for solving the {S}ylvester matrix equation ${AXB^T}+{CXD^T}={E}$}, {ACM} Trans. Math. Software, 18 (1992), pp.~232--238. \bibitem{Pen97} {\sc T.~Penzl}, {\em Numerical solution of generalized {L}yapunov equations}, Adv. Comp. Math., 8 (1997), pp.~33--48. \bibitem{Rob80} {\sc J.D. Roberts}, {\em Linear model reduction and solution of the algebraic {R}iccati equation by use of the sign function}, Internat. J. Control, 32 (1980), pp.~677--687. \newblock (Reprint of Technical Report No. TR-13, CUED/B-Control, Cambridge University, Engineering Department, 1971). \bibitem{SayK01} {\sc A.H. Sayed and T.~Kailath}, {\em A survey of spectral factorization methods}, Num. Lin. Alg. Appl., 8 (2001), pp.~467--496. \bibitem{Smi68} {\sc R.A. Smith}, {\em Matrix equation {$XA + BX = C$}}, {SIAM} J. Appl. Math., 16 (1968), pp.~198--201. \bibitem{Wac88} {\sc E.L. Wachspress}, {\em Iterative solution of the {L}yapunov matrix equation}, Appl. Math. Letters, 107 (1988), pp.~87--90. \end{thebibliography}
%discrete-time algebraic Riccati equation, Stein equation, large, sparse, Newton method","15A24","","MInsymposium ""MATRIX FUNCTIONS AND MATRIX EQUATIONS"",","08:36:48","Wed Mar 26 2008","134.169.54.97"
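\smallskip \noindent (To illustrate the simplest of these iterative schemes, a sketch for $E=I$ and $\rho (A_{k})<1$: the Stein equation (\ref{stein}) becomes the fixed-point equation $N_{k}=\mathcal{R}_{k}+A_{k}^{T}N_{k}A_{k}$, and the Smith iteration
\[ N^{(0)}=\mathcal{R}_{k},\qquad N^{(j+1)}=\mathcal{R}_{k}+A_{k}^{T}N^{(j)}A_{k},\qquad j=0,1,2,\dots , \]
converges to $N_{k}=\sum_{j\geq 0}(A_{k}^{T})^{j}\mathcal{R}_{k}A_{k}^{j}$; the low-rank factored approaches mentioned above build on iterations of this kind.)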
%"Peña
%Juan Manuel","jmpena@unizar.es","
\section*{From Total Positivity to Positivity: related classes of matrices}
By {\sl Juan Manuel Peña}.
\medskip \noindent Matrices with all their minors nonnegative (respectively, positive) are usually called totally nonnegative (respectively, totally positive). These matrices present nice stability properties as well as interesting spectral, factorization and variation diminishing properties. They play an important role in many applications to other fields such as Approximation Theory, Mechanics, Economics, Optimization, Combinatorics or Computer Aided Geometric Design. We revisit some of the properties and applications of these matrices and show some recent advances. Moreover, we show that some results and techniques coming from Total Positivity theory have been extended to other classes of matrices which are also closely related to positivity. Among these other classes of matrices we consider sign regular matrices (which generalize totally nonnegative matrices), some classes of P-matrices (matrices whose principal minors are positive), including M-matrices, and conditionally positive definite (and conditionally negative definite) matrices.
%Total Positivity; Nonnegative matrices; P-matrices; M-matrices; Stability; Factorizations","15A48","65F05","It is the LAMA Conference","08:46:09","Wed Mar 26 2008","155.210.85.102"
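\smallskip \noindent (The defining property in miniature, our example:
\[ A=\begin{pmatrix} 1 & 1\\ 1 & 2 \end{pmatrix} \]
is totally positive, since all entries and the single $2\times 2$ minor $\det A=1$ are positive, whereas $B=\begin{pmatrix} 1 & 2\\ 1 & 1 \end{pmatrix}$ is not even totally nonnegative, since $\det B=-1<0$.)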
Our results can be applied to obtain different representations of the generalized Drazin inverse of block matrices $M=\begin{pmatrix} A & C \\ B & D\end{pmatrix}$, under certain conditions, in terms of the individual blocks. In particular, we can write $M$ as the sum of a block triangular matrix and a nilpotent matrix and apply the additive perturbation result given above to obtain a representation for $M^D$. This extends the result of Meyer and Rose for the Drazin inverse of a block triangular matrix. Finally, we present a numerical example for the Drazin inverse of $2\times 2$ block matrices over the complex numbers.\newline This research is partly supported by Project MTM2007-67232, ``Ministerio de Educaci\'{o}n y Ciencia'' of Spain. %Generalized Drazin inverse, Banach algebras, additive perturbation, block matrices","15A09","46H30","","11:18:22","Wed Mar 26 2008","138.100.14.148" %"Dodig %Marija","dodig@cii.fc.ul.pt"," \section*{Singular systems, state feedback problem} By {\sl Marija Dodig}. \medskip \noindent In this talk, the strict equivalence invariants under state feedback for singular systems are studied. As the main result, we give necessary and sufficient conditions under which there exists a state feedback such that the resulting system has a prescribed pole structure as well as prescribed row and column minimal indices. This result presents a generalization of previous results on state feedback action on singular systems. %Matrix pencils, singular systems, state feedback, pole placement, Kronecker invariants, completion","15A21","15A22","","12:59:13","Wed Mar 26 2008","194.117.6.7" %"Semrl %Peter","peter.semrl@fmf.uni-lj.si"," \section*{Locally linearly dependent operators} By {\sl Peter \v Semrl}. \medskip \noindent Let $U$ and $V$ be vector spaces. Linear operators $T_1 , \ldots , T_n : U \to V$ are locally linearly dependent if for every $u\in U$ the vectors $T_1 u , \ldots , T_n u$ are linearly dependent. Some recent results on such operators will be presented. %locally linearly dependent operators, spaces of operators","15A03","15A04","","13:43:36","Wed Mar 26 2008","212.72.116.72" %"Benner %Peter","benner@mathematik.tu-chemnitz.de"," \section*{Balancing-Related Model Reduction for Large-Scale Unstable Systems} By {\sl Peter Benner}. \medskip \noindent Model reduction is an increasingly important tool in the analysis and simulation of dynamical systems, control design, circuit simulation, structural dynamics, CFD, etc. In the past decades many approaches have been developed for reducing the order of a given model. Here, we will focus on balancing-related model reduction techniques that have been developed in control theory since the early 1980s. The most commonly used technique, balanced truncation (BT) \cite{Moo81}, applies to stable systems only. But there exist several related techniques that can be applied to unstable systems as well. We are interested in techniques that can be extended to large-scale systems with sparse system matrices which arise, e.g., in the context of control problems for instationary partial differential equations (PDEs). Semi-discretization of such problems leads to linear, time-invariant (LTI) systems of the form \begin{equation}\label{lti} \begin{array}{rcl} \dot{x}(t) &=& Ax(t) + Bu(t), \\ y(t) &=& Cx(t) + Du(t), \end{array} \end{equation} where $A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times m}$, $C\in\mathbb{R}^{p\times n}$, $D\in\mathbb{R}^{p\times m}$, and $x(0)=x^0\in\mathbb{R}^n$.
Here, $n$ is the order of the system and $x(t)\in\mathbb{R}^n$, $y(t)\in\mathbb{R}^p$, $u(t)\in\mathbb{R}^m$ are the state, output and input of the system, respectively. We assume $A$ to be large and sparse and $n\gg m,p$. Applying the Laplace transform to (\ref{lti}) (assuming $x(0)=0$), we obtain \[ Y(s) = (C(s I - A)^{-1}B+D) U(s) =: G(s) U(s), \] where $s$ is the Laplace variable, $Y,U$ are the Laplace transforms of $y,u$, and $G$ is called the {\em transfer function matrix (TFM)} of (\ref{lti}). The TFM describes the input-output mapping of the system. The model reduction problem consists of finding a reduced-order LTI system, \begin{equation}\label{rom} \begin{array}{rcl} \dot{\hat{x}}(t) &=& \hat{A} \hat{x}(t) + \hat{B} u(t), \\ \hat{y}(t) &=& \hat{C} \hat{x}(t) + \hat{D} u(t), \end{array} \end{equation} of order $r$, $r \ll n$, with the same number of inputs $m$, the same number of outputs $p$, and associated TFM $\hat{G}(s) = \hat{C} (s I - \hat{A} )^{-1}\hat{B} +\hat{D}$, so that for the same input function $u\in L_2(0,\infty;\mathbb{R}^m)$, we have $y(t)\approx \hat{y}(t)$, which can be achieved if $G\approx \hat{G}$ in an appropriate measure. If all eigenvalues of $A$ are contained in the left half complex plane, i.e., (\ref{lti}) is stable, BT is a viable model reduction technique. It is based on balancing the controllability and observability Gramians $W_c$, $W_o$ of the system~(\ref{lti}) given as the solutions of the Lyapunov equations \begin{equation}\label{WcWo} A W_c + W_c A^T + B B^T = 0, \qquad A^T W_o + W_o A + C^T C = 0. \end{equation} Based on $W_c,W_o$ or Cholesky factors thereof, matrices $V,W\in\mathbb{R}^{n\times r}$ can be computed so that with \[ \hat{A} := W^T A V, \quad \hat{B} := W^T B, \quad \hat{C} := C V, \quad \hat{D} = D, \] the reduced-order TFM satisfies \begin{equation}\label{bound} \sigma_{r+1}\leq \Vert G - \hat{G}\Vert_{\infty} \leq 2 \sum_{k=r+1}^n \sigma_k, \end{equation} where $\sigma_1\geq \ldots \geq \sigma_n\geq 0$ are the Hankel singular values of the system, given as the square roots of the eigenvalues of $W_cW_o$. The key computational step in BT is the solution of the Lyapunov equations (\ref{WcWo}). In recent years, a lot of effort has been devoted to the solution of these Lyapunov equations in the large and sparse case considered here. Nowadays, BT can be applied to systems of order up to $n=10^6$, see, e.g., \cite{BenMS05,LiW02}. Less attention has been paid so far to unstable systems, i.e., systems where $A$ may have eigenvalues with nonnegative real part. Such systems arise, e.g., from semi-discretizing parabolic PDEs with unstable reactive terms. We will review methods related to BT that can be applied in this situation and discuss how these methods can also be implemented in order to become applicable to large-scale problems. The basic idea of these methods is to replace the Gramians $W_c$ and $W_o$ from (\ref{WcWo}) by other positive semidefinite matrices that are associated with (\ref{lti}) and to employ the algorithmic advances for BT also in the resulting model reduction algorithms. \begin{thebibliography}{10} \bibitem{BenMS05} P.~Benner, V.~Mehrmann, and D.~Sorensen, editors. {\em Dimension Reduction of Large-Scale Systems}, volume~45 of {\em Lecture Notes in Computational Science and Engineering}. Springer-Verlag, Berlin/Heidelberg, Germany, 2005. \bibitem{LiW02} J.-R. Li and J.~White. Low rank solution of {L}yapunov equations. {\em {SIAM} J. Matrix Anal. Appl.}, 24(1):260--280, 2002. \bibitem{Moo81} B.~C. Moore.
Principal component analysis in linear systems: Controllability, observability, and model reduction. {\em {IEEE} Trans. Automat. Control}, AC-26:17--32, 1981. \end{thebibliography} %model reduction, balanced truncation, Lyapunov equations, Riccati equations","93B11","65F30","","13:49:15","Wed Mar 26 2008","134.109.232.102" %"Cortes %Vanesa","vcortes@unizar.es"," \section*{Some properties of the class of sign regular matrices and its subclasses} By {\sl V. Cort\'es and J.M. Pe{\~n}a}. \medskip \noindent An $m\times n$ matrix is called {\it sign regular} with signature $\varepsilon $ if, for each $k\le \min \{m,n\}$, all its $k\times k$ minors have the same sign or are zero. The common sign may differ for different $k$: the corresponding sequence of signs provides the signature of the sign regular matrix. These matrices play an important role in many fields, such as Statistics, Approximation Theory or Computer Aided Geometric Design. In fact, nonsingular sign regular matrices are characterized as variation-diminishing linear maps: the maximum number of sign changes in the consecutive components of the image of a nonzero vector is bounded above by the minimum number of sign changes in the consecutive components of the vector. We study several properties of these matrices, focusing our analysis on some subclasses of sign regular matrices with certain particular signatures. %Sign regular matrices; Test; Zero pattern; Inverses","15A48","15A15","","14:03:14","Wed Mar 26 2008","85.55.134.135" %"Damm %Tobias","damm@mathematik.uni-kl.de"," \section*{Algebraic Gramians and Model Reduction for Different System Classes} By {\sl Tobias Damm}. \medskip \noindent Model order reduction by balanced truncation is one of the best-known methods for linear systems. It is motivated by the use of energy functionals, preserves stability and provides strict bounds for the approximation error. The computational bottleneck of this method lies in the solution of a pair of dual Lyapunov equations to obtain the controllability and the observability Gramians, but nowadays there are efficient methods which work for large-scale systems as well. These advantages motivate the attempt to apply balanced truncation also to other classes of systems.
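\noindent Before turning to those generalizations, a minimal square-root balanced-truncation sketch of the classical linear case, for reference. It solves the pair of dual Lyapunov equations just mentioned with SciPy's dense solver and assumes a stable, minimal realization (so that the Cholesky factors of the Gramians exist); function names are illustrative, and this is not the large-scale algorithm of the talks.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    # dual Lyapunov equations for the two Gramians:
    #   A Wc + Wc A^T + B B^T = 0,   A^T Wo + Wo A + C^T C = 0
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    # square-root method (assumes a stable, minimal realization)
    Zc = cholesky(Wc, lower=True)
    Zo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Zo.T @ Zc)        # s = Hankel singular values
    Sr = np.diag(s[:r] ** -0.5)
    W = Zo @ U[:, :r] @ Sr           # left projection, W^T V = I_r
    V = Zc @ Vt[:r, :].T @ Sr        # right projection
    return W.T @ A @ V, W.T @ B, C @ V, s
\end{verbatim}
The returned Hankel values make the truncation error bound $\Vert G-\hat{G}\Vert_\infty \leq 2\sum_{k>r}\sigma_k$ checkable a posteriori.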
For example, there is an immediate way to generalize the idea to stochastic linear systems, where one has to consider generalized versions of Lyapunov equations. Similarly, one can define energy functionals and Gramians for nonlinear systems and try to use them for order reduction. In general, however, these Gramians are very complicated and practically not available. As an approximation, one may use algebraic Gramians, which again are solutions of certain generalized Lyapunov equations and which give bounds for the energy functionals. This approach has been taken e.g.~for bilinear systems of the form \begin{eqnarray*} \dot x&=&Ax+\sum_{j=1}^k N_jxu_j+Bu\;,\\ y&=& Cx\;, \end{eqnarray*} which arise e.g.~from the discretization of diffusion equations with boundary control. In the talk we review these generalizations for different classes of systems and discuss computational aspects. %algebraic Gramians, energy functionals, model reduction, bilinear systems, stochastic systems","93A15","65F30","MS5, Linear Algebra in Model Reduction.","16:30:08","Wed Mar 26 2008","84.58.144.105" %"van den Driessche %Pauline","pvdd@math.uvic.ca"," \section*{Bounds for the Perron root using max eigenvalues} By {\sl Ludwig Elsner and P. van den Driessche}. \medskip \noindent Using the techniques of max algebra, a new proof of Al'pin's lower and upper bounds for the Perron root of a nonnegative matrix is given. The bounds depend on the row sums of the matrix and its directed graph. If the matrix has zero diagonal entries, then these bounds may improve the classical row sum bounds. This is illustrated by a generalized tournament matrix. %Max eigenvalue, Nonnegative matrix, Perron root","15A18","15A42","","19:16:28","Wed Mar 26 2008","142.104.7.18" %"Li %Chi-Kwong","ckli@math.wm.edu"," \section*{Eigenvalues of the sum of matrices \\ from unitary similarity orbits} By {\sl Chi-Kwong Li, Yiu-Tung Poon and Nung-Sing Sze.} \medskip \noindent Let $A$ and $B$ be $n\times n$ complex matrices. A characterization is given of the set ${\cal E}(A,B)$ of eigenvalues of matrices of the form $U^*AU+V^*BV$ for some unitary matrices $U$ and $V$. Consequences of the results are discussed and computer algorithms and programs are designed to generate the set ${\cal E}(A,B)$. The results refine those of Wielandt on normal matrices. Extensions of the results to the sum of matrices from three or more unitary similarity orbits are also considered. %Eigenvalues, sum of matrices,","15A18","","This is a talk for the mini-symposium: Eigenproblems: Theory and Computation","19:35:37","Wed Mar 26 2008","70.186.195.115" %"Hogben %Leslie","lhogben@iastate.edu"," \section*{Minimum Rank Problems: Recent Developments} By {\sl Leslie Hogben}. \medskip \noindent This talk will survey recent developments in the problem of determining the minimum rank of families of matrices described by a graph, digraph or pattern. %minimum rank, symmetric minimum rank, asymmetric minimum rank, ditree, directed tree, inverse eigenvalue problem","05C50","15A03","This abstract is for my invited plenary lecture","20:05:39","Wed Mar 26 2008","65.174.105.59" %"Huylebrouck %Dirk","Huylebrouck@gmail.com"," \section*{Applications of generalized inverses in art.} By {\sl D. Huylebrouck}. \medskip \noindent The ``Moore--Penrose inverse'' of a matrix $A$ corresponds to the (unique) matrix solution $X$ of the system $AXA=A$, $XAX=X$, $(AX)^*=AX$, $(XA)^*=XA$. S. L. Campbell and C. D.
Meyer Jr. wrote a now classical book, ``Generalized Inverses of Linear Transformations'' (Pitman Publishing Limited, London, 1979), in which they gave an excellent account of the MP-inverse and of other generalized inverses as well. They gave many interesting examples, ranging from Gauss' historical prediction for finding Ceres to modern electrical engineering problems. The present paper provides new applications related to art studies: a first one about mathematical colour theory, and a second about curve fitting in architectural drawings or paintings. Firstly, in colour theory, a frequent problem is finding the combination of colours approximating a desired colour as closely as possible using a given set of colours. Plaid fabrics are made with a limited number of threads, and when a desired tone cannot be formed by a combination, a least squares approach may be mandatory. Some colour theory specialists suggested that ``sensations'', such as the observation of colour, should involve logarithmic functions, but using Campbell and Meyer's general set-up, this does not give rise to additional difficulties. Of course, the practical use of this theory should still show the benefit of the proposed mathematical tool, but even as it stands it already provides a colourful mathematical diversion. In addition, colour theory as taught today in many art schools and as used in numerous printing or computer problems is certainly in need of a more rigorous mathematical approach. Thus, this example of an application of the theory of generalized inverses in art may be welcomed. Secondly, we turn to the formerly very popular activity in architectural circles of drawing all kinds of geometric figures on images of artworks and buildings. Until some 20 years ago, triangles, rectangles, pentagons or circles sufficed, but later more general mathematical figures were used as well, especially since fractals became trendy. Recognizing well-known curves and polygons was seen as a part of the ``interpretation'' of an architectural edifice or painting. Eventually, certain proportions in the geometric figures were emphasized, among which the golden section surely was the most (in)famous. Diehards continue this tradition, though curve drawing has lost some credit in recent times, in particular due to some exaggerated golden section interpretations. Today, many journals tend to reject ``geometric readings in architecture'', and the reasons to do so are many. For instance, an architect may have had the intention of constructing a certain curve, but for structural, technical or other practical reasons, the final realization may not confirm that intention. Or else, a certain proportion may have been used in an artwork, consciously or not, but when such a ``hidden'' proportion is discovered afterward, even the author of the artwork may dispute having used it. Consequently, statements about the presence of a certain proportion or about the good fit of a curve in art are often subjective matters, and thus unacceptable for scientific journals. However, a similarity between these geometric studies in architecture and the history of (celestial) mechanics, as explained in ``Generalized Inverses of Linear Transformations'', suggests that the so-called ``least squares method'', developed in that field, could be applied to examples in art as well. Just as astronomy struggled for centuries to get rid of its astrological past, an objective approach for the described art studies would be most welcome.
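\noindent A minimal sketch of what such an objective test could look like: fit two candidate models to digitized contour points by least squares via the Moore--Penrose pseudoinverse and compare residuals. The data below are synthetic stand-ins, not measurements of any actual building.
\begin{verbatim}
import numpy as np

# hypothetical sampled points (x_i, y_i) digitized from an arch photograph
x = np.linspace(-2.0, 2.0, 41)
y = np.cosh(x) + 0.01 * np.random.default_rng(0).standard_normal(x.size)

def lsq_fit(Phi, y):
    # least-squares coefficients via the Moore-Penrose pseudoinverse
    return np.linalg.pinv(Phi) @ y

# parabola y = a + b x^2  versus  catenary y = a + b cosh(x)
parab = np.column_stack([np.ones_like(x), x ** 2])
caten = np.column_stack([np.ones_like(x), np.cosh(x)])
for name, Phi in [("parabola", parab), ("catenary", caten)]:
    c = lsq_fit(Phi, y)
    resid = np.linalg.norm(Phi @ c - y)
    print(f"{name}: coefficients {c}, residual {resid:.4f}")
\end{verbatim}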
Of course, it can be objected that the mathematical method is overkill for the intended straightforward artistic applications, but nowadays software considerably reduces the computational burden. The method turns out to be useful indeed: for instance, while a catenary approximates architect Gaudi's Palau G\"{u}ell better than a parabola, the least squares method shows that a catenary or a parabola can be used for the shape of Gaudi's Colegio Teresiano with a comparable error. These results were confirmed by Prof. A. Monreal, a Gaudi specialist from the architect's hometown, Barcelona. Another amusing example is the profile of a nuclear power plant, which is described in many schoolbooks as an example of a hyperbola, but an ellipse fits even better. Engineers confirmed that the hyperbolic shape is modified at the top to reduce wind resistance. Finally, it is shown how proportions in the Mona Lisa can be studied using generalized inverses, but it remains unclear whether this application will make the present paper as widely read as Dan Brown's ``Da Vinci Code''. %Generalised inverses, art, colour theory, curve fitting.","15A","15.15","The paper is a contribution for the ""Linear Algebra in Education"" section.","03:40:05","Thu Mar 27 2008","80.201.244.59" %"Tanguay %Denis","tanguay.denis@uqam.ca"," \section*{A fundamental paradox in learning algebra} By {\sl Denis Tanguay \& Claudia Corriveau}. \medskip \noindent The generalizing, formalizing and unifying nature of some of the concepts of Linear Algebra leads to a high level of abstraction, which in turn constitutes a source of difficulties for students. When asked to deal with new expressions, new symbolism and rules of calculation, students face what researchers in mathematics education such as Dorier, Rogalski, Sierpinska or Harel have identified as `the obstacle of formalism'. Teachers bring in new mathematical objects, sometimes in a non-explicit way, by using at once the symbols referring to these objects or to the related relations, without explaining or justifying the meaning or the relevance of their choices regarding this new symbolism. Calculations and manipulations with these new objects build up to new algebras (vector or matrix algebras) more complex than basic (high school) algebra, but nevertheless syntactically modelled on it. The gap thus caused reveals itself when students produce inconsistent or meaningless writings: ``The obstacle of formalism manifests itself in students who operate at the level of the form of expressions without seeing these expressions as referring to something other than themselves. One of the symptoms is the confusion between categories of mathematical objects; for example, sets are treated as elements of sets, transformations as vectors, relations as equations, vectors as numbers, and so on'' (Sierpinska et al., 1999, p. 12). For too many students attending their first course in Linear Algebra, the latter is nothing but a catalogue of very abstract notions, for which they have almost no understanding, being overwhelmed by a flood of new words, new symbols, new definitions and new theorems (Dorier, 1997). Our talk will be based on a study conducted within the context of a master's degree in mathematics education (maîtrise en didactique des mathématiques, Université du Québec à Montréal; cf. Corriveau \& Tanguay, 2007).
Through this study, we tried to reach a better understanding of the transitional difficulties, due to the abrupt increase in what is expected from students with respect to formalism and proof, when going from secondary school to `Cegeps' (the equivalent in Québec of `upper secondary' or `high school', ages 17-19). The Linear Algebra courses having been identified as those in which such transitional problems are most acute, we first selected, among all the problems submitted in a given Linear Algebra course (the teacher of which was ready to participate in the study), those involving a proof or a reasoning at least partly deductive. Through the systematic analysis of these problems, we evaluated and compared their level of difficulty, as well as students' preparation for coping with such difficulties, from an `introduction-to-formalism' perspective. The framework used to analyse the problems stemmed from a remodelling of Robert's framework (1998). The remodelling was a consequence of comparing/confronting an a priori analysis of three problems (using Robert's framework) with the analysis of their erroneous solutions as they appeared in twelve students' homework copies. Among the conclusions brought up by the study, we shall be interested in the following ones: \begin{itemize} \item Mathematical formalism allows a `compression' of the mathematical discourse and a simplification and systematization of the syntax, by which one operates on this discourse with better efficiency. But this improvement in efficiency is achieved to the detriment of meaning. As in Bloch et al. (2007), the study confirms that ``\ldots formal written discourse does not carry per se the meaning of either the laws that it states or the objects that it sets forth.'' For many students, symbolic manipulations are difficult in Linear Algebra because meaning has been lost somewhere. By trying to reach a better understanding of the underlying obstacle, we came to identify what we call `the fundamental paradox in learning [a new] algebra', some elements of which will be discussed further in the talk. \item The analysis of students' written productions brings us to observe that, in the process of proving, difficulties caused by the introduction of new objects and new rules of calculation on the one hand, and difficulties related to controlling the deductive reasoning and its logical structure on the other, reinforce one another. \item A better understanding of students' errors, through an error analysis such as the one done in the study, allows a better evaluation of the difficulty level of what is asked of students, and thus a better understanding of the problems linked to academic transitions (from lower-secondary to upper-secondary to university) in mathematics. Such analyses could give Linear Algebra teachers better tools for estimating the difficulties in the tasks they submit to their students, as well as for understanding the underlying cognitive gaps and ruptures. It would be advisable that teachers be introduced to such error-analysis work in the setting of their pre-service or in-service instruction. \end{itemize} Bloch, I., Kientega, G. \& Tanguay, D. (2007). Synthèse du Thème 6: Transition secondaire / post-secondaire et enseignement des mathématiques dans le postsecondaire. To appear in Actes du Colloque EMF 2006. Université de Sherbrooke. Corriveau, C. \& Tanguay, D. (2007). Formalisme accru du secondaire au collégial: les cours d'Algèbre linéaire comme indicateurs. To appear in Bulletin AMQ, Vol. XLVII, n°4.
Dorier, J.-L., Harel, G., Hillel, J., Rogalski, M., Robinet, J., Robert, A. \& Sierpinska, A. (1997). L'enseignement de l'algèbre linéaire en question. J.-L. Dorier, ed. La Pensée Sauvage. Grenoble, France. Harel, G. (1990). Using Geometric Models and Vector Arithmetic to Teach High-School Students Basic Notions in Linear Algebra. International Journal of Mathematical Education in Science and Technology, Vol. 21, n°3, pp. 387-392. Harel, G. (1989). Learning and Teaching Linear Algebra: Difficulties and an Alternative Approach to Visualizing Concepts and Processes. Focus on Learning Problems in Mathematics, Vol. 11, n°2, pp. 139-148. Robert, A. (1998). Outils d'analyse des contenus mathématiques à enseigner au lycée et à l'université. Recherches en didactique des mathématiques, Vol. 18, n°2, pp. 139-190. Rogalski, M. (1990). Pourquoi un tel échec de l'enseignement de l'algèbre linéaire? In Enseigner autrement les mathématiques en DEUG Première Année, Commission inter-IREM université (ed.), pp. 279-291. IREM de Lyon. Sierpinska, A., Dreyfus, T. \& Hillel, J. (1999). Evaluation of a Teaching Design in Linear Algebra: the Case of Linear Transformations. Recherches en didactiques des mathématiques, Vol. 19, n°1, pp. 7-40. %Linear Algebra Formalism apprenticeship Proof apprenticeship Error Analysis","97","15","It exceeds 5000 characters but it is because we added the Bibliography","11:33:32","Thu Mar 27 2008","132.208.138.88" %"Mathewkutty %Habel","habelmath@habelmath.com","NUMBER THEORY. Polyhedrons are geometrical shapes enclosed by polygons. Numbers on them can be represented by the Habel Math formula $A_{k,n} = 2\{k(n-1)^2 + 1\}$, with Habel Math sum $H_{k,m} = (m/3)\{k(m-1)(2m-1) + 6\}$; that is, $$2 + 2(k+1) + 2(4k+1) + 2(9k+1) + 2(16k+1) + \cdots + 2\{(m-1)^2 k + 1\} = H,$$ where $H = (m/3)\{k(m-1)(2m-1) + 6\}$, Habel Math's formula for the sum of the first $m$ terms of all polyhedral numbers. Remember $k = 1$ for the tetrahedron and $k = 29$ for the soccer ball, because the soccer-ball numbers are $A_{29,n} = 2\{29(n-1)^2 + 1\}$: they are 2, 60, 234, 524, \ldots. So $H_{29,m} = (m/3)\{29(m-1)(2m-1) + 6\}$; when $m = 4$ it should be $2 + 60 + 234 + 524 = 820$. By Prof. Habel Mathewkutty, M.Sc. (Math/Agra), Ph.D. Speaker of SIAM conference NW08 in Rome 21-24 July 2008. Former Researcher of Indian Institutes of Technology and Instructor of Houston Community College System.","Polyhedrons, Habel Math","11","74","Thanks!","13:38:25","Thu Mar 27 2008","98.200.42.22"
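\noindent The stated sum formula is easy to verify mechanically; a few lines of Python (function names chosen here for illustration) confirm it for the tetrahedron ($k=1$) and soccer-ball ($k=29$) cases:
\begin{verbatim}
# quick check that sum of A(k, n) for n = 1..m equals H(k, m)
def A(k, n): return 2 * (k * (n - 1) ** 2 + 1)
def H(k, m): return (m * (k * (m - 1) * (2 * m - 1) + 6)) // 3

for k in (1, 29):
    for m in range(1, 10):
        assert sum(A(k, n) for n in range(1, m + 1)) == H(k, m)
print(H(29, 4))  # 2 + 60 + 234 + 524 = 820
\end{verbatim}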
%"Kaibah %Hussein","hu_mic99@yahoo.com"," \section*{Asymptotic Behavior of Solutions of Stochastic Equations and Applications in Statistical Parameter Estimation} By {\sl Hussein Salem Kaibah}. \medskip \noindent In different models that appear in numerical mathematics, stochastic optimization problems and statistical parameter estimation, we come to the necessity of studying the behavior of solutions of stochastic equations. Let us consider the following example: suppose that we would like to find a solution of a deterministic equation where is some continuous function, and is some bounded region. But according to the real scheme of calculations we measure the function with random errors in the form: where are jointly independent families of random functions (fields) such that. In this case it is reasonable to approximate the function by the averaging. Therefore a natural question appears: in what sense and under which conditions does a solution of a stochastic equation approximate a solution of the first equation as. %Stochastic Equations","","","","13:47:02","Thu Mar 27 2008","62.240.42.184" %"Hanaish %Ibrahim","henaish@yahoo.com"," \section*{Shrinkage Estimators for Estimating the Multivariate Normal Mean Vector under Degrees of Distrust} By {\sl Ibrahim Hanaish and Abdunnabi M. Ali Elbouzedi}. \medskip \noindent The estimation of the mean vector of a multivariate normal population with a special covariance matrix is considered when uncertain non-sample prior information is available. In this paper, four possible estimators are considered, namely, the usual maximum likelihood estimator (UE), the restricted estimator (RE), the preliminary test estimator (PTE) and the shrinkage estimator (SE), under a more general setting. The performances of the estimators are compared based on the criteria of unbiasedness and the risk function with respect to a specific quadratic loss function in order to search for the best estimator. Both analytical and graphical methods are explored. It is shown that neither PTE nor SE dominates the other, though they fare well compared to UE and RE. %Preliminary test estimator, Stein-rule estimator, multivariate normal,","","","forgotten title in last email","13:51:05","Thu Mar 27 2008","62.240.42.184" %"Cox %Steven","cox@rice.edu"," \section*{Eigen-reduction of Large Scale Neuronal Networks} By {\sl Tony Kellems, Derrick Roos, Nan Xiao and Steve Cox}. \medskip \noindent The modest pyramidal neuron has over 100 branches with tens of synapses per branch. Partitioning each branch into 3 compartments, with each compartment carrying say 3 membrane currents, yields at least 20 variables per branch and so, in total, a nonlinear dynamical system of roughly 2000 equations. We linearize this system to $x'=Ax+Bu$, $y=Cx$, where $B$ permits synaptic input into each compartment and $C$ observes only the soma potential. We reduce this system by retaining the dominant singular directions of the associated controllability and observability Gramians. We evaluate the error in soma potential between the full and reduced models for a number of true morphologies over a broad (in space and time) class of synaptic input patterns, and find that reduced systems of dimension less than 10 accurately reflect the full quasi-active dynamics.
These savings will permit, for the first time, the simulation of large networks of biophysically accurate cells over realistic time spans. %model reduction, synaptic integration","34C20","92C20","","14:15:17","Thu Mar 27 2008","168.7.218.61" %"Zimmermann %Karel","Karel.Zimmermann@mff.cuni.cz"," \section*{Solving two-sided (max,plus)-linear equation systems.} By {\sl Karel Zimmermann}. \medskip \noindent Systems of equations of the following form will be considered: \begin{equation}\label{e1} a_i(x) = b_i(x), \quad i \in I, \end{equation} where $I = \{1,\ldots, m\}, ~ J = \{1, \ldots, n\}$, $$a_i(x) = \max_{j \in J}(a_{ij} + x_j), ~ b_i(x) = \max_{j \in J}(b_{ij} + x_j)~~ \forall i \in I$$ and $a_{ij},~b_{ij}$ are given real numbers. \newline The aim of the contribution is to propose a polynomial method for solving system (\ref{e1}). Let $M$ be the set of all solutions of (\ref{e1}), and let $M(\overline{x})$ denote the set of solutions of system (\ref{e1}) satisfying the additional constraint $x \leq \overline{x}$, where $\overline{x}$ is a given fixed element of $\mathbb{R}^n$. The proposed method either finds the maximum element of the set $M(\overline{x})$ (i.e. the element $\hat{x} \in M(\overline{x})$ for which $x \in M(\overline{x})$ implies $x \leq \hat{x}$), or finds out that $M(\overline{x}) = \emptyset$. The results are based on the following properties of system (\ref{e1}) (to simplify the notation we will assume in the sequel w.l.o.g. that $a_i(\overline{x}) \geq b_i(\overline{x})~~ \forall~~ i \in I$ and $\overline{x} \not \in M(\overline{x})$): \newline \newline (i) $M(\overline{x}) ~=~ \emptyset~ \Rightarrow M ~ = ~ \emptyset$. \newline (ii) Let $K_i = \{ k \in J~;~ a_{ik}\leq b_{ik}\}~ \forall i \in I$. If for some $i_0 \in I$ the set $K_{i_0} = \emptyset$, then $M(\overline{x})= \emptyset$. \newline (iii) Let $\beta_i(\overline{x}) = \max_{k \in K_i}(b_{ik} + \overline{x}_k)$, $L_i(\overline{x}) = \{ j \in J~;~ a_{ij} + \overline{x}_j ~>~ \beta_i(\overline{x})\}$, $~ \forall~ i \in I$. If $\bigcup_{i \in I}L_i(\overline{x}) = J$, then $M(\overline{x})= \emptyset$. \newline (iv) Let $V_j(\overline{x}) = \{ i \in I ; j \in L_i(\overline{x}) \}$, and let $ \overline{x}_j^{(1)} = \min_{i \in V_j(\overline{x})}(\beta_i(\overline{x})- a_{ij})$ for all $j \in J$ for which $V_j(\overline{x}) \neq \emptyset$, and $ \overline{x}_j^{(1)} = \overline{x}_j $ otherwise. Let $\beta_i(\overline{x}^{(1)})~<~ \beta_i(\overline{x})$ for all $i \in I$. Then for at least one $i \in I$ the value $\beta_i(\overline{x}^{(1)})$ is equal to at least one of the threshold values $b_{ij} + \overline{x}_j ~< ~\beta_i(\overline{x})$. \newline \newline The method successively determines the variables which have to be decreased if equality in (\ref{e1}) is to be reached. If all variables have to be set in motion, no solution of (\ref{e1}) exists. If the set of unchanged variables is nonempty, the maximum element of $M(\overline{x})$ is obtained. Using these properties, a polynomial behavior of the proposed method can be proved (in the case of rational or integer inputs). Possibilities of further generalizations and usage in optimization with constraints (\ref{e1}), as well as applications to synchronization problems, will be briefly discussed.
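\noindent A compact numerical sketch of this variable-decreasing iteration, as an illustration of properties (i)-(iv) above (names chosen for readability; the polynomial-time analysis of the talk is not reproduced here):
\begin{verbatim}
import numpy as np

def solve_two_sided(A, B, xbar, tol=1e-9, max_iter=10000):
    # max_j(a_ij + x_j) = max_j(b_ij + x_j) for all i, subject to x <= xbar
    x = np.asarray(xbar, dtype=float).copy()
    m, n = A.shape
    for _ in range(max_iter):
        a = (A + x).max(axis=1)
        b = (B + x).max(axis=1)
        if np.max(np.abs(a - b)) <= tol:
            return x                    # candidate maximum element of M(xbar)
        # w.l.o.g. make the A-side the larger one, row by row
        big = (a >= b)[:, None]
        Aw, Bw = np.where(big, A, B), np.where(big, B, A)
        new_x = x.copy()
        touched = np.zeros(n, dtype=bool)
        for i in range(m):
            K = Aw[i] <= Bw[i]          # columns where the small side catches up
            if not K.any():
                return None             # property (ii): M(xbar) is empty
            beta = (Bw[i, K] + x[K]).max()
            L = Aw[i] + x > beta + tol  # columns that must be decreased
            new_x[L] = np.minimum(new_x[L], beta - Aw[i, L])
            touched |= L
        if touched.all():
            return None                 # property (iii): M(xbar) is empty
        x = new_x
    return None
\end{verbatim}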
%max algebra, (max,plus)-linear systems of equations, operations research.","65H10","15A78","for Max-algebra, MS7","16:48:33","Thu Mar 27 2008","83.148.5.94" %"Wojciechowski %Piotr","piotrw@utep.edu"," \section*{Orderings of matrix algebras and their applications} By {\sl Piotr Wojciechowski}. \medskip \noindent The full matrix algebra $M_n({\bf F})$ over a totally-ordered subfield ${\bf F}$ of the reals becomes a {\it partially ordered algebra} under a partial order relation $\leq$ on the set $M_n({\bf F})$ if, for any $A, B, C \in M_n({\bf F})$, from $A\leq B$ it follows that: \begin{itemize} \item[(1)] $A+C\leq B+C$; \item[(2)] if $C\geq 0$ then $AC\leq BC$ and $CA \leq CB$; \item[(3)] if ${\bf F} \ni \alpha \geq 0$ then $\alpha A\leq \alpha B$. \end{itemize} Our interest is in when the order $\leq$ is a lattice or at least is directed. Then we have a {\it lattice-ordered algebra of matrices} or a {\it directly-ordered algebra of matrices}. Those concepts originate in the 1956 work of Birkhoff and Pierce \cite{BP}. The first example of a lattice-ordered algebra of matrices is, of course, the one with the {\it usual} entry-wise ordering. In this ordering the identity matrix $I$ is positive. In 1966 E. Weinberg proved in \cite{We} that the positivity of $I$ forces a lattice ordering to be (isomorphic to) the usual one in $M_2({\bf F})$, and conjectured the same for all $n\geq 2$. The conjecture was solved affirmatively in 2002 by J. Ma and P. Wojciechowski in \cite{MW}. The proof involved a {\it cone-theoretic} approach, by first establishing the existence of a $P$-invariant cone $O$ in ${\bf F}^n$, i.e. one satisfying the condition that for every matrix $M\in P$, $M(O)\subseteq O$, where $P$ is the {\it positive cone} of the ordering $\leq$ ($P=\{A\in M_n({\bf F}): A\geq 0\}$). With the help of the compactness of the unit sphere in ${\bf R}^n$ and Zorn's Lemma, we obtained all the desired properties of the cone $O$ that led us to the conclusion of the conjecture.\\ The first part of the talk will briefly outline the method.\\[.2 in] The above considerations allowed us to comprehensively describe all lattice orders of $M_n({\bf F})$ (J. Ma and P. Wojciechowski \cite{MW2}): the algebra $M_n({\bf F})$ is lattice-ordered (up to isomorphism) if and only if $$A \geq 0 \Leftrightarrow A=\sum_{i,j=1}^n \alpha_{ij}E_{ij}H^T \quad\text{with } \alpha_{ij}\geq 0,\ i,j =1,\ldots, n,$$ for some given nonsingular $H$ with nonnegative entries and $E_{ij}$ having 1 in the $ij$ entry and zeros elsewhere.\\[.5 in] As a first application, we will describe all {\it multiplicative bases} in the matrix algebra $M_n({\bf F})$ and provide their enumeration for small $n$ (C. De La Mora and P. Wojciechowski, 2006 \cite{DMW}). In a finite-dimensional algebra over a field \textbf{F}, a basis $\mathfrak{B}$ is called {\em a multiplicative basis} provided that $\mathfrak{B} \cup \{0\}$ forms a semigroup. Although these bases (endowed with some additional algebraic properties) have been studied in representation theory, they lacked a comprehensive classification for matrix algebras. The first example of a multiplicative basis of $M_n({\bf F})$ is of course $\{E_{ij}, i,j=1,\ldots,n\}$. Every lattice order on $M_n({\bf F})$ corresponds to a nonsingular $n \times n$ matrix $H$ with nonnegative entries. It turns out that if the entries are either 0 or 1, the basic matrices resulting in the definition of the lattice order, i.e.
the matrices $E_{ij}H^T$, form a multiplicative basis, and conversely, every multiplicative basis corresponds to a nonsingular zero-one matrix. After identification of the isomorphic semigroups and also identification of the matrices that have just permuted rows and columns, the above correspondence is one-to-one. The number of zero-one nonsingular matrices, although lacking a formula so far, is known for a few small values of $n$. This, together with the conjugacy class method from group theory, allowed us to calculate the number of nonequivalent multiplicative bases up to dimension 5: 1, 2, 8, 61, 1153.\\[.5 in] Another application concerns certain directed partial orders of matrices that appear naturally in linear algebra and its applications. It is related to the research on matrices preserving cones, established in the seventies by, among others, R. Loewy and H. Schneider \cite{LS}. Besides the lattice orders (corresponding to the simplicial cones), the best studied ones are the orders whose positive cones are the sets $\Pi(O)$ of all matrices preserving a regular (or full) cone $O$ in an $n$-dimensional Euclidean space. It can be shown that $O$ is essentially the only $\Pi(O)$-invariant cone (P. Wojciechowski \cite{W}). Consequently, we obtain a characterization of all maximal directed partial orders on the $n \times n$ matrix algebra: a directed order is maximal if and only if its positive cone $P$ satisfies $P=\Pi(O)$ for some regular cone $O$. The method used in the proof involves a concept of {\it simplicial separation}, allowing a regular cone to be separated from an outside point by means of a simplicial cone.\\[.5 in] Some open questions related to the discussed topics will be raised during the talk. \bibliographystyle{amsplain} \begin{thebibliography}{7} \bibitem{BP} G.~Birkhoff and R.S.~Pierce, {\em Lattice-ordered rings}, An. Acad. Brasil. Ci. 28 (1956), 41-69. \bibitem{DMW} C. de La Mora and P. Wojciechowski, {\em Multiplicative bases in matrix algebras}, Linear Algebra and its Applications 419 (2006), 287-298. \bibitem{LS} R. Loewy and H. Schneider, {\em Positive Operators on the $n$-dimensional Ice-Cream Cone}, J. Math. Anal. Appl. 49 (1975). \bibitem{MW} J. Ma and P. Wojciechowski, {\em A proof of Weinberg's conjecture on lattice-ordered matrix algebras}, Proc. Amer. Math. Soc., 130 (2002), no. 10, 2845-2851. \bibitem{MW2} J. Ma and P. Wojciechowski, {\em Lattice orders on matrix algebras}, Algebra Univers. 47 (2002), 435-441. \bibitem{We} E. C. Weinberg, {\em On the scarcity of lattice-ordered matrix rings}, Pacific J. Math. 19 (1966), 561-571. \bibitem{W} P. Wojciechowski, {\em Directed maximal partial orders of matrices}, Linear Algebra and its Applications 375 (2003), 45-49. \end{thebibliography} %Matrix algebra, order, cone, multiplicative basis","15A48","06F25","","17:46:47","Thu Mar 27 2008","129.108.114.66" %"Nagy %James","nagy@mathcs.emory.edu"," \section*{Kronecker Products in Imaging Sciences} By {\sl James G. Nagy}. \medskip \noindent Linear algebra and matrix analysis are very important in the imaging sciences. This should not be surprising since digital images are typically represented as arrays of pixel values; that is, as matrices. Due to advances in technology, the development of new imaging devices, and the desire to obtain images with ever higher resolution, linear algebra research in image processing is very active.
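\noindent As one classical illustration of the kind of structure exploitation meant here (a generic example, not necessarily one from this talk): the identity $(A\otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(BXA^T)$ turns an $n^2 \times n^2$ Kronecker system into two $n\times n$ solves.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 20
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

# solve (A kron B) y = vec(C) without forming the n^2 x n^2 matrix:
# (A kron B) vec(X) = vec(B X A^T)  =>  X = B^{-1} C A^{-T}
X = np.linalg.solve(B, np.linalg.solve(A, C.T).T)
vecX = X.flatten(order="F")              # column-major vec

# verify against the explicit Kronecker solve (small n only!)
assert np.allclose(np.kron(A, B) @ vecX, C.flatten(order="F"))
\end{verbatim}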
In this talk we describe how Kronecker and Hadamard products arise naturally in many imaging applications, and how their properties can be exploited when computing solutions of very difficult linear algebra problems. %Kronecker product, Hadamard product, image processing","15","65","","10:08:25","Fri Mar 28 2008","170.140.151.79" %"Strong %David","David.Strong@pepperdine.edu"," \section*{A Java applet and introductory tutorial for the Jacobi, Gauss-Seidel and SOR Methods } By {\sl David Strong}. \medskip \noindent I will discuss a Java applet, tutorial and exercises that are designed to allow both students and instructors to experiment with and visualize the Jacobi, Gauss-Seidel and SOR Methods in solving systems of linear equations. The applet is for working with $2 \times 2$ systems. The tutorial includes an analysis (using eigenvalues and spectral radius) of these methods. The exercises are designed to be done using the applet in order to more easily investigate ideas and issues that are often not dealt with when these methods are first introduced, but that are fundamental to numerical analysis and linear algebra, such as eigenvalues/vectors and convergence rates. %Jacobi, Gauss-Seidel, SOR, numerical linear algebra, iterative methods, applet","97","65","","15:16:22","Fri Mar 28 2008","137.159.49.103" %"Rust %Bert","bert.rust@nist.gov"," \section*{A Truncated Singular Component Method for Ill-Posed Problems} By {\sl Bert Rust and Dianne O'Leary}. \medskip \noindent The truncated singular value decomposition (TSVD) method for solving ill-posed problems regularizes the solution by neglecting contributions in the directions defined by singular vectors corresponding to small singular values. In this work we propose an alternative method, neglecting contributions in directions where the measurement value is below the noise level. We call this the truncated singular component method (TSCM). We present results of this method on test problems, comparing it with the TSVD method and with Tikhonov regularization. %ill-posed problems, regularization, singular value decomposition","65","F22","","15:53:44","Fri Mar 28 2008","129.6.88.158" %"Costa %Liliana","lilianacosta@ua.pt"," \section*{Acyclic Birkhoff Polytope} By {\sl Liliana Costa, C.M. da Fonseca and Enide Andrade Martins}. \medskip \noindent A real square matrix with nonnegative entries and all row and column sums equal to one is said to be doubly stochastic. This denomination is associated with probability distributions, and the diversity of branches of mathematics in which doubly stochastic matrices arise (geometry, combinatorics, optimization theory, graph theory and statistics) is remarkable. Doubly stochastic matrices have been studied quite extensively, especially in their relation with the van der Waerden conjecture for the permanent. In 1946, Birkhoff published a remarkable result asserting that a matrix in the polytope of $n\times n$ nonnegative doubly stochastic matrices, $\Omega_{n}$, is a vertex if and only if it is a permutation matrix. In fact, $\Omega_{n}$ is the convex hull of all permutation matrices of order $n$. The \emph{Birkhoff polytope} $\Omega_{n}$ is also known as the \emph{transportation polytope} or the \emph{doubly stochastic matrices polytope}.
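\noindent Birkhoff's theorem is constructive, and a short greedy sketch recovers a convex combination explicitly (a hedged illustration using SciPy's assignment solver; by the theorem, a permutation supported on the positive entries always exists at every step):
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_decomposition(A, tol=1e-12):
    # peel off one permutation matrix per step: A = sum_k c_k P_k
    A = np.array(A, dtype=float)
    coeffs, perms = [], []
    while A.max() > tol:
        # find a permutation avoiding the (near-)zero entries of A;
        # cost 0 is achievable by Birkhoff's theorem
        r, c = linear_sum_assignment(np.where(A > tol, 0.0, 1.0))
        w = A[r, c].min()          # largest weight that can be peeled off
        coeffs.append(w)
        perms.append(c.copy())
        A[r, c] -= w               # row/column sums all drop by w
    return coeffs, perms

# quick check: 0.5*I + 0.5*(cyclic shift) splits into exactly two terms
P = np.roll(np.eye(3), 1, axis=1)
print(birkhoff_decomposition(0.5 * np.eye(3) + 0.5 * P))
\end{verbatim}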
Recently Dahl discussed the subclass of $\Omega_{n}$ consisting of the tridiagonal doubly stochastic matrices and the corresponding subpolytope \[ \Omega _{n}^{t}=\{A\in \Omega _{n}:A\mbox{ is tridiagonal}\}, \] the so-called \textit{tridiagonal Birkhoff polytope}, and studied the facial structure of $\Omega _{n}^{t}.$ In this talk we present an interpretation of the vertices and edges of the acyclic Birkhoff polytope, $\mathfrak{T}_{n}=\Omega _{n}(T)$, where $T$ is a given tree, in terms of graph theory. %Doubly stochastic matrix; Birkhoff polytope; Number of vertices;Tree","05A15","15A51","","10:14:49","Sat Mar 29 2008","89.214.211.4" %"Martins %Enide","enide@ua.pt"," \section*{On the spectra of some graphs like weighted rooted trees} By {\sl Ros\'{a}rio, Helena Gomes and Enide Andrade Martins}. \medskip \noindent Let $G$ be a weighted rooted graph of $k$ levels such that, for $j\in\{2,\dots ,k\}$: \begin{enumerate} \item each vertex at level $j$ is adjacent to one vertex at level $j-1$, and all edges joining a vertex at level $j$ with a vertex at level $j-1$ have the same weight, where the weight is a positive real number; \item if two vertices at level $j$ are adjacent, then they are adjacent to the same vertex at level $j-1$, and all edges joining two vertices at level $j$ have the same weight; \item any two vertices at level $j$ have the same degree; \item there is no vertex at level $j$ adjacent to two other vertices at the same level. \end{enumerate} In this talk we give a complete characterization of the eigenvalues of the Laplacian matrix of $G$ (an analogous characterization can be done for the adjacency matrix of $G$). By application of these results, we derive an upper bound on the largest eigenvalue of a graph defined by a weighted tree and a weighted triangle attached, by one of its vertices, to a pendant vertex of the tree. %Graph; Laplacian matrix; Adjacency matrix; Eigenvalues","05C50","","","10:24:56","Sat Mar 29 2008","89.214.211.4" %"Boimond %Jean-Louis","Jean-Louis.Boimond@univ-angers.fr"," \section*{On Steady State Controller in Min-Plus Algebra} By {\sl J.-L. Boimond, S. Lahaye}. \medskip \noindent Synchronization phenomena occurring in systems where dynamic behavior is represented by a flow of fluid are well modeled by continuous $(\min,+)$-linear systems. A feedback controller design method is proposed for such systems in order that the system output asymptotically behaves like a polynomial input. Such a controller objective is well known in conventional linear systems theory. Indeed, the steady-state accuracy of conventional linear systems is classified according to their final responses to polynomial inputs such as steps, ramps, and parabolas. The ability of the system to asymptotically track polynomial inputs is given by the highest degree, $k$, of the polynomial for which the error between system output and reference input is finite but nonzero. We call the system {\it type $k$} to identify this polynomial degree. For example, a {\it type} $1$ system has finite, nonzero error to a first-degree polynomial input (ramp).\\ An analogous definition of system {\it type} $k$ is given for continuous $(\min,+)$-linear systems and leads to simple conditions as in conventional system theory. In addition to the conditions that the resulting controller must satisfy, we look for the {\it greatest} controller to satisfy the {\it just-in-time} criterion.
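\noindent For background (an illustrative aside, not the authors' continuous-time development): the input-output map of a $(\min,+)$-linear system is a $(\min,+)$-convolution of the input with the system's impulse response, the analogue of the familiar sum-and-product convolution. A discrete sketch:
\begin{verbatim}
import numpy as np

def minplus_conv(h, u):
    # y[t] = min over s of (h[t-s] + u[s]): (min,+)-convolution
    T = len(h) + len(u) - 1
    y = np.full(T, np.inf)
    for t in range(T):
        for s in range(max(0, t - len(h) + 1), min(t + 1, len(u))):
            y[t] = min(y[t], h[t - s] + u[s])
    return y

# a pure delay by d = 2 steps: h = [inf]*2 + [0]
print(minplus_conv(np.array([np.inf, np.inf, 0.0]),
                   np.array([0.0, 1.0, 3.0])))   # -> [inf inf 0 1 3]
\end{verbatim}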
For a manufacturing system, such an objective allows the release of raw parts at the latest possible dates such that the customer demand is still satisfied. %Continuous timed event graph, min-plus algebra, steady state controller, system type","93","06","contribution for the mini-symposia MS7 Max algebra (H. Schneider, P. Butkovic)","05:03:47","Sun Mar 30 2008","82.252.195.70" %"Fošner %Ajda","ajda.fosner@uni-mb.si"," \section*{Commutativity preserving maps on real matrices} By {\sl Ajda Fo\v sner}. \medskip \noindent Let $M_n({\mathbb R})$ be the algebra of all $n\times n$ real matrices. A map $\phi : M_n({\mathbb R}) \to M_n({\mathbb R})$ preserves commutativity if $\phi (A) \phi (B) = \phi (B) \phi (A)$ whenever $AB = BA$, $A,B \in M_n({\mathbb R})$. If $\phi$ is bijective and both $\phi$ and $\phi^{-1}$ preserve commutativity, then we say that $\phi$ preserves commutativity in both directions. We will talk about non-linear maps on $M_n({\mathbb R})$ that preserve commutativity in both directions or in one direction only. %commutativity preserving map, real Jordan canonical form","15A27","15A21","","07:49:06","Sun Mar 30 2008","86.58.80.16" %"Shader %Bryan","bshader@uwyo.edu"," \section*{Average minimum rank of a graph} By {\sl Francesco Barioli, Shaun Fallat, Tracy Hall, Daniel Hershkowitz, Leslie Hogben, Ryan Martin, Bryan Shader, Hein van der Holst}. \medskip \noindent We establish asymptotic upper and lower bounds on the average minimum rank of a graph using probabilistic, linear algebraic and graph theoretic techniques. %Minimum rank, zero pattern, graph","05C50","","This is part of the minisymposium on Minimum ranks","22:52:11","Sun Mar 30 2008","72.175.97.56" %"maracci %mirko","mirko.maracci@gmail.com"," \section*{Basic notions of Vector Space Theory: students' models and conceptions} By {\sl Mirko Maracci}. \medskip \noindent Carlson (1993) uses the image of the fog rolling in to describe the confusion and disorientation which his students experience when getting to the basic notions of Vector Space Theory (VST). There is truly a widespread sense of the inadequacy of the teaching of Linear Algebra. On account of that common perception and of the importance of Linear Algebra as a prerequisite for a number of disciplines (math, science, engineering, ...), in the last twenty years several studies were carried out on Linear Algebra education. Those studies brought undeniable progress in understanding students' difficulties in Linear Algebra. As Dorier and Sierpinska effectively synthesized in their literature survey (2001), three different kinds of sources of students' difficulties in Linear Algebra especially emerge from the studies on the topic: \begin{enumerate} \item the fact that Linear Algebra teaching is characterized by an axiomatic approach, which is perceived by students as superfluous and meaningless; \item the fact that Linear Algebra is characterized by the cohabitation of different languages, systems of representations, modes of description; \item the fact that coping with those features requires the development of {\it theoretical thinking} and {\it cognitive flexibility}. \end{enumerate} Recently more studies were carried out, which in our opinion still fit well within Dorier and Sierpinska's synthesis. \\ In this talk I will focus on some aspects of students' difficulties in vector space theory, drawn from my doctoral research project. That project was meant to investigate graduate and undergraduate students' errors and difficulties in VST.
Through that work I intended to contribute to the Linear Algebra education research field, focusing on cognitive difficulties related to specific VST notions rather than on general features of Linear Algebra: a seemingly less explored path.\\ The study involved 15 (graduate or undergraduate) students in mathematics, presented with two or three different VST problems to be solved in individual sessions. The methodology adopted was that of the clinical interview (Ginsburg, 1981). The study highlighted a number of students' difficulties related to the notions of linear combination, linear dependence/independence, dimension and spanning set. The difficulties, errors and impasses that emerged were analysed through the lenses of different theoretical frameworks: the theory of tacit intuitive models (Fischbein, 1987), Sfard's process-object duality theory (Sfard, 1991) and the ckc model (Balacheff, 1995). The different analyses led to the formulation of hypotheses that account for a variety of students' difficulties. Though not antithetical to each other, those analyses are diversified and bring into evidence different aspects from different perspectives. In this talk I briefly present the results of those analyses and a first tentative integrating analysis, combining different hints and perspectives provided by the frameworks mentioned above. More specifically, that attempt led to the hypothesis that many difficulties experienced by students are consistent with the possible activation of an intuitive model of ``construction'' related to basic notions of VST. In the talk we will specify that hypothesis further, showing how it could contribute to better organizing and explaining students' documented difficulties. \section*{References} \begin{description} \item[{\sc Balacheff N., 1995;}] Conception, connaissance et concept, Grenier D. (ed.) {\it Didactique et technologies cognitives en math\'ematiques, s\'eminaires 1994-1995}, pp.~219-244, Grenoble: Universit\'e Joseph Fourier. \item[{\sc Carlson D., 1993;}] Teaching linear algebra: must the fog always roll in?, {\it College Mathematics Journal}, vol.~24, n.~1, pp.~29-40. \item[{\sc Dorier J.-L., Sierpinska A., 2001;}] Research into the teaching and learning of linear algebra, Holton D. (ed.) {\it The Teaching and Learning in Mathematics at University Level- An ICMI Study}, Kluwer Acad. Publ., The Netherlands, pp. 255-273. \item[{\sc Fischbein E., 1987;}] {\it Intuition in science and mathematics}, D. Reidel Publishing Company, Dordrecht, Holland. \item[\sc Ginsburg H., 1981;] The Clinical Interview in Psychological Research on Mathematical Thinking: Aims, Rationales, Techniques. {\it For the Learning of Mathematics}, v.~1, n.~3, pp.~4-11. \item[{\sc Sfard A., 1991;}] On the dual nature of mathematical conceptions: reflections on processes and objects as different sides of the same coin, {\it Educational Studies in Mathematics}, v.~22, pp.~1-36. \end{description} %intuitive models, process-object duality, Linear Algebra education","97c30","","","02:51:57","Mon Mar 31 2008","131.114.73.1" %"Malik %Saroj","saroj.malik@gmail.com"," \section*{A new class of g-inverses and order relations on index 1 matrices} By {\sl Saroj Malik}. \medskip \noindent In this paper we introduce two new classes of g-inverses of a matrix $A$ of index 1 over an arbitrary field.
We obtain some properties of these generalized inverses and identify the class of all commuting g-inverses as one of these new classes of g-inverses. The problem of the one-sided sharp order has also been studied, and these new g-inverses turn out to be very useful in characterizing it. We also give conditions under which the one-sided sharp order becomes the full sharp order. Finally we study the sharp order for partitioned matrices. %g-inverse, index 1 matrices, Good approximate solution, excellent approximate solution, Group inverse, one-sided sharp order","15","15A57; 1","This Abstract is a PDF version of the tex file. I'm separately sending both files to Prof Verde","10:50:03","Mon Mar 31 2008","59.180.38.88" %"Prokip %Volodymyr","vprokip@mail.ru"," \section*{On the problem of diagonalizability of matrices over a principal ideal domain} By {\sl Volodymyr Prokip}. \medskip \noindent Let $R$ be a principal ideal domain with unit element $e\not=0$, and let $U(R)$ be the set of divisors of the unit element $e$. Further, let $R_n$ be the ring of $(n\times n)$-matrices over $R$, $I_k$ the identity $k\times k$ matrix and $O$ the zero $n\times n$ matrix. In this report we present conditions for the diagonalizability of a matrix $A \in R_n$, i.e. for the existence of a matrix $T \in GL(n,R)$ such that $TAT^{-1}$ is a diagonal matrix. {\bf Theorem.} Let $A\in R_n$ and $$\det (Ix-A)=(x-\alpha_1)^{k_1}(x-\alpha_2)^{k_2} \cdots (x-\alpha_r)^{k_r} , $$ where $ \alpha_i \in R $ and $ \alpha_i - \alpha_j \in U(R)$ for all $i\not= j$. If $m(x)=(x-\alpha_1)(x-\alpha_2) \cdots (x-\alpha_r)$ is the minimal polynomial of the matrix $A$, i.e. $m(A)=O$, then for the matrix $A$ there exists a matrix $ T \in GL(n,R)$ such that $$ TAT^{-1}={\rm diag} \left( {\alpha}_1I_{k_1}, {\alpha}_2I_{k_2}, \ldots , {\alpha}_rI_{k_r} \right) . $$ %matrix, principal ideal domain, diagonalization","15A04","15A21","","11:23:57","Mon Mar 31 2008","194.44.153.33" %"Noutsos %Dimitrios","dnoutsos@uoi.gr"," \section*{Reachability cone of eventually exponentially nonnegative matrices} By {\sl Dimitrios Noutsos and Michael Tsatsomeros}. \medskip \noindent We examine the relation between eventual exponential nonnegativity of a matrix $A$ ($e^{tA}\geq 0$ for all sufficiently large $t\geq 0$) and eventual nonnegativity of $I+hA, ~ h\geq 0$ ($(I+hA)^k\geq 0$ for all sufficiently large $k\geq 0$). As a consequence, we are able to characterize initial points $x_0\in \mathbb{R}^n$ such that $e^{tA}x_0$ becomes and remains nonnegative as exactly those points for which the discrete trajectories $x^{(k)} = (I+hA)^kx_0$ become and remain nonnegative. This extends work on the reachability cone of exponentially nonnegative matrices by Neumann, Stern and Tsatsomeros [1]. \bigskip [1] M. Neumann, R.J. Stern, and M. Tsatsomeros. The reachability cones of essentially nonnegative matrices. {\em Linear and Multilinear Algebra}, 28:213--224, 1991. %Eventually nonnegative matrix; eventually exponentially nonnegative matrix; point of nonnegative potential; reachability cone","15A48","65F10","Consider my talk for the mini-simposium ""MS8 Nonnegative and eventually nonnegative matrices"", organized by Judi McDonald","16:19:05","Mon Mar 31 2008","134.121.45.4" \end{document}