Keywords:
positive linear map, geometric mean, power mean, spectral radius
Persian abstract (translated):
Let $B(H)$ be the $C^*$-algebra of all bounded linear operators on a complex Hilbert space $H$. Ando, Li, and Mathias defined a generalized geometric mean of $n$ positive definite operators. We obtain a reverse inequality for this generalized geometric mean of $n$ operators, stated below.
The Karcher mean, also called the Riemannian mean, has recently been used in a diverse variety of settings, such as diffusion tensors in medical and radar imaging, covariance matrices in statistics, kernels in machine learning, and elasticity. We also obtain a reverse inequality for the weighted power mean of $n$ positive definite operators involving unital positive linear maps.
English abstract:
Let $B(H)$ be the $C^*$-algebra of all bounded linear operators on a complex Hilbert space $H$. Ando, Li and Mathias introduced a generalized geometric mean for $n$ positive definite operators. This geometric mean $G(A_{1},A_{2},\dots,A_{n})$ of any $n$-tuple of positive definite operators $\mathbb{A}=(A_1,\dots,A_n)$ is defined by induction.
(i) $G(A_{1},A_{2})=A_{1}\sharp A_{2}$;
(ii) Assume that the geometric mean of any $(n-1)$-tuple of operators is defined. Let \[ G((A_j)_{j\neq i})= G(A_{1},A_{2},\dots,A_{i-1},A_{i+1},\dots,A_{n}), \] and define the sequences $\{\mathbb{A}_{i}^{(r)}\}_{r=1}^{\infty}$ by $\mathbb{A}_{i}^{(1)}= A_{i}$ and $\mathbb{A}_{i}^{(r+1)}=G((\mathbb{A}_{j}^{(r)})_{j\neq i})$. Then the limit $\lim_{r\rightarrow\infty}\mathbb{A}_{i}^{(r)}$ exists and does not depend on $i$, so the geometric mean of $n$ operators is defined by \begin{equation*} \lim_{r\rightarrow\infty}\mathbb{A}_{i}^{(r)} = G(\mathbb{A})=G(A_{1},A_{2},\dots,A_{n}) \text{ for } i= 1,\dots,n. \end{equation*} We shall show a reverse inequality for the generalized geometric mean defined by Ando, Li and Mathias for $n$ positive definite operators, as follows: let $\Phi$ be a unital positive linear map on $B(H)$ and let $r(A)$ denote the spectral radius of $A$; then \begin{align*} \Phi(G(A_1,\dots,A_n))\geq\left(\frac{2h}{1+h^2}\right)^{n-1}G(\Phi(A_1),\dots,\Phi(A_n)), \end{align*} where $R(A_i, A_j)=\max\{r(A_i^{-1}A_j), r(A_j^{-1}A_i)\}$ and $h=\min_{i,j} R(A_i, A_j)$.
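The Ando–Li–Mathias induction above can be made concrete numerically. The following is a minimal sketch (not the authors' code) for symmetric positive definite matrices: it implements the two-variable mean $A\sharp B=A^{1/2}(A^{-1/2}BA^{-1/2})^{1/2}A^{1/2}$ and then iterates $\mathbb{A}_{i}^{(r+1)}=G((\mathbb{A}_{j}^{(r)})_{j\neq i})$ until the tuple collapses to a common limit. Function names and tolerances are illustrative choices.

```python
import numpy as np

def psd_sqrt(A):
    # Symmetric square root via eigendecomposition (A assumed symmetric PD).
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(w)) @ V.T

def geomean2(A, B):
    # Two-variable geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}.
    As = psd_sqrt(A)
    Asi = np.linalg.inv(As)
    return As @ psd_sqrt(Asi @ B @ Asi) @ As

def alm_geomean(mats, tol=1e-10, max_iter=200):
    # Ando-Li-Mathias mean: replace each A_i by the geometric mean of the
    # remaining n-1 matrices and repeat; the n sequences share one limit.
    mats = [np.asarray(M, dtype=float) for M in mats]
    n = len(mats)
    if n == 1:
        return mats[0]
    if n == 2:
        return geomean2(mats[0], mats[1])
    for _ in range(max_iter):
        new = [alm_geomean([mats[j] for j in range(n) if j != i])
               for i in range(n)]
        if max(np.linalg.norm(new[i] - mats[i]) for i in range(n)) < tol:
            return new[0]
        mats = new
    return mats[0]
```

For commuting (e.g. diagonal) matrices the limit is the entrywise geometric mean, which gives a quick sanity check of the iteration.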
The Karcher mean, also called the Riemannian mean, has recently been used in a diverse variety of settings:
diffusion tensors in medical imaging and radar, covariance matrices in statistics, kernels in machine learning, and
elasticity.
We also give a reverse inequality for the weighted power mean of $n$ positive definite operators involving unital positive linear maps.
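For context, one standard notion of the weighted power mean of positive definite operators (due to Lim and Pálfia) is the unique positive definite solution $X=P_t(\omega;\mathbb{A})$ of $X=\sum_i w_i\,(X\sharp_t A_i)$ for $t\in(0,1]$, where $A\sharp_t B=A^{1/2}(A^{-1/2}BA^{-1/2})^{t}A^{1/2}$; whether this is exactly the definition used here is an assumption. Under that assumption, the fixed-point iteration below is an illustrative matrix sketch.

```python
import numpy as np

def psd_power(A, t):
    # Fractional power of a symmetric PD matrix via eigendecomposition.
    w, V = np.linalg.eigh(A)
    return (V * (w ** t)) @ V.T

def weighted_geomean(A, B, t):
    # Weighted geometric mean A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}.
    As = psd_power(A, 0.5)
    Asi = np.linalg.inv(As)
    return As @ psd_power(Asi @ B @ Asi, t) @ As

def power_mean(mats, weights, t, tol=1e-12, max_iter=500):
    # Fixed-point iteration X <- sum_i w_i (X #_t A_i), started from the
    # arithmetic mean; its fixed point is the weighted power mean P_t.
    X = sum(w * np.asarray(A, dtype=float) for w, A in zip(weights, mats))
    for _ in range(max_iter):
        Xn = sum(w * weighted_geomean(X, A, t) for w, A in zip(weights, mats))
        if np.linalg.norm(Xn - X) < tol:
            return Xn
        X = Xn
    return X
```

For commuting matrices the fixed-point equation reduces to $X^t=\sum_i w_i A_i^t$, i.e. $X=(\sum_i w_i A_i^t)^{1/t}$, which gives a direct check on diagonal inputs.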