Linear subspace

From Wikipedia, the free encyclopedia

In mathematics, and more specifically in linear algebra, a linear subspace or vector subspace[1][note 1] is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a subspace when the context serves to distinguish it from other types of subspaces.

Definition

If V is a vector space over a field K, a subset W of V is a linear subspace of V if it is a vector space over K for the operations of V. Equivalently, a linear subspace of V is a nonempty subset W such that, whenever w1, w2 are elements of W and α, β are elements of K, it follows that αw1 + βw2 is in W.[2][3][4][5][6]

The singleton set consisting of the zero vector alone and the entire vector space itself are linear subspaces that are called the trivial subspaces of the vector space.[7]

Examples

Example I

In the vector space V = R3 (the real coordinate space over the field R of real numbers), take W to be the set of all vectors in V whose last component is 0. Then W is a subspace of V.

Proof:

  1. Given u and v in W, then they can be expressed as u = (u1, u2, 0) and v = (v1, v2, 0). Then u + v = (u1+v1, u2+v2, 0+0) = (u1+v1, u2+v2, 0). Thus, u + v is an element of W, too.
  2. Given u in W and a scalar c in R, if u = (u1, u2, 0) again, then cu = (cu1, cu2, c0) = (cu1, cu2,0). Thus, cu is an element of W too.
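The two closure checks in this proof can be mirrored mechanically. The following minimal Python sketch (the vectors u, v and the scalar c are arbitrary illustrative choices, not taken from the text) verifies both conditions for sample elements of W:

```python
def in_W(v):
    """Membership test for W = {(v1, v2, 0)} inside R^3."""
    return len(v) == 3 and v[2] == 0

# Arbitrary illustrative elements of W and a scalar from R
u, v, c = (1.0, 2.0, 0.0), (-3.0, 0.5, 0.0), 4.0
u_plus_v = tuple(a + b for a, b in zip(u, v))   # closure under addition
cu = tuple(c * x for x in u)                    # closure under scalar multiplication
assert in_W(u_plus_v) and in_W(cu)
```

A spot-check of a few vectors is of course not a proof; the proof above covers all of W at once.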

Example II

Let the field be R again, but now let the vector space V be the Cartesian plane R2. Take W to be the set of points (x, y) of R2 such that x = y. Then W is a subspace of R2.

Proof:

  1. Let p = (p1, p2) and q = (q1, q2) be elements of W, that is, points in the plane such that p1 = p2 and q1 = q2. Then p + q = (p1+q1, p2+q2); since p1 = p2 and q1 = q2, then p1 + q1 = p2 + q2, so p + q is an element of W.
  2. Let p = (p1, p2) be an element of W, that is, a point in the plane such that p1 = p2, and let c be a scalar in R. Then cp = (cp1, cp2); since p1 = p2, then cp1 = cp2, so cp is an element of W.

In general, any subset of the real coordinate space Rn that is defined by a homogeneous system of linear equations will yield a subspace. (The equation in example I was z = 0, and the equation in example II was x = y.)

Example III

Again take the field to be R, but now let the vector space V be the set RR of all functions from R to R. Let C(R) be the subset consisting of continuous functions. Then C(R) is a subspace of RR.

Proof:

  1. We know from calculus that 0 ∈ C(R) ⊂ RR.
  2. We know from calculus that the sum of continuous functions is continuous.
  3. Again, we know from calculus that the product of a continuous function and a number is continuous.

Example IV

Keep the same field and vector space as before, but now consider the set Diff(R) of all differentiable functions. The same sort of argument as before shows that this is a subspace too.

Examples that extend these themes are common in functional analysis.

Properties of subspaces

From the definition of vector spaces, it follows that subspaces are nonempty and are closed under sums and under scalar multiples.[8] Equivalently, subspaces can be characterized by the property of being closed under linear combinations. That is, a nonempty set W is a subspace if and only if every linear combination of finitely many elements of W also belongs to W. It is in fact equivalent to consider only linear combinations of two elements at a time.

In a topological vector space X, a subspace W need not be topologically closed, but a finite-dimensional subspace is always closed.[9] The same is true for subspaces of finite codimension (i.e., subspaces determined by a finite number of continuous linear functionals).

Descriptions

Descriptions of subspaces include the solution set to a homogeneous system of linear equations, the subset of Euclidean space described by a system of homogeneous linear parametric equations, the span of a collection of vectors, and the null space, column space, and row space of a matrix. Geometrically (especially over the field of real numbers and its subfields), a subspace is a flat in an n-space that passes through the origin.

A natural description of a 1-subspace is the set of all scalar multiples of one non-zero vector v. Two 1-subspaces specified by non-zero vectors are equal if and only if one vector can be obtained from the other by scalar multiplication:

Kv = Kv′ if and only if v′ = cv for some non-zero scalar c.

This idea is generalized for higher dimensions with linear span, but criteria for equality of k-spaces specified by sets of k vectors are not so simple.

A dual description is provided with linear functionals (usually implemented as linear equations). One non-zero linear functional F specifies its kernel subspace F = 0 of codimension 1. Subspaces of codimension 1 specified by two linear functionals are equal if and only if one functional can be obtained from the other by scalar multiplication (in the dual space):

ker F = ker G if and only if G = cF for some non-zero scalar c.

It is generalized for higher codimensions with a system of equations. The following two subsections will present this latter description in detail, and the remaining four subsections further describe the idea of linear span.

Systems of linear equations

The solution set to any homogeneous system of linear equations with n variables is a subspace in the coordinate space Kn:

a11x1 + a12x2 + ⋯ + a1nxn = 0
a21x1 + a22x2 + ⋯ + a2nxn = 0
⋮
am1x1 + am2x2 + ⋯ + amnxn = 0

For example, the set of all vectors (x, y, z) (over the real or rational numbers) satisfying two independent homogeneous linear equations is a one-dimensional subspace of K3. More generally, the solution set of m independent homogeneous linear equations in k variables is a subspace of Kk of dimension k − m, namely the null space of the m × k coefficient matrix A formed from the equations.
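This rank–nullity count can be checked numerically. The sketch below (pure Python with exact rational arithmetic; the `rref` helper and the two-equation system are this sketch's own illustrative choices, not from the text) computes the dimension of the solution subspace as the number of variables minus the rank of the coefficient matrix:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan elimination, exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        piv = m[r][c]
        m[r] = [x / piv for x in m[r]]          # scale the pivot row to 1
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

def rank(rows):
    return sum(any(x != 0 for x in row) for row in rref(rows))

# Illustrative system over Q: x + y + z = 0 and x - y = 0
A = [[1, 1, 1],
     [1, -1, 0]]
dim_solution = 3 - rank(A)    # number of variables minus rank of the system
assert dim_solution == 1      # a line through the origin
```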

Null space of a matrix

In a finite-dimensional space, a homogeneous system of linear equations can be written as a single matrix equation:

Ax = 0.

The set of solutions to this equation is known as the null space of the matrix A. For example, the one-dimensional subspace described above is the null space of the matrix whose rows are the coefficients of the defining equations.

Every subspace of Kn can be described as the null space of some matrix (see § Algorithms below for more).

Linear parametric equations

The subset of Kn described by a system of homogeneous linear parametric equations is a subspace:

xi = ai1t1 + ai2t2 + ⋯ + aimtm (for i = 1, ..., n),

where t1, ..., tm range over all elements of K.

For example, the set of all vectors (x, y, z) parameterized by the equations

x = 2t1 + 3t2,  y = 5t1 − 4t2,  z = −t1 + 2t2

is a two-dimensional subspace of K3, if K is a number field (such as the real or rational numbers).[note 2]

Span of vectors

In linear algebra, the system of parametric equations can be written as a single vector equation:

(x, y, z) = t1 (2, 5, −1) + t2 (3, −4, 2).

The expression on the right is called a linear combination of the vectors (2, 5, −1) and (3, −4, 2). These two vectors are said to span the resulting subspace.

In general, a linear combination of vectors v1, v2, ..., vk is any vector of the form

t1v1 + t2v2 + ⋯ + tkvk.

The set of all possible linear combinations is called the span:

Span{v1, ..., vk} = { t1v1 + ⋯ + tkvk : t1, ..., tk ∈ K }.

If the vectors v1, ..., vk have n components, then their span is a subspace of Kn. Geometrically, the span is the flat through the origin in n-dimensional space determined by the points v1, ..., vk.

Example
The xz-plane in R3 can be parameterized by the equations

x = t1,  y = 0,  z = t2.

As a subspace, the xz-plane is spanned by the vectors (1, 0, 0) and (0, 0, 1). Every vector in the xz-plane can be written as a linear combination of these two:

(x, 0, z) = x (1, 0, 0) + z (0, 0, 1).
Geometrically, this corresponds to the fact that every point on the xz-plane can be reached from the origin by first moving some distance in the direction of (1, 0, 0) and then moving some distance in the direction of (0, 0, 1).

Column space and row space

A system of linear parametric equations in a finite-dimensional space can also be written as a single matrix equation:

x = At,

where t ranges over all possible parameter vectors.

In this case, the subspace consists of all possible values of the vector x. In linear algebra, this subspace is known as the column space (or image) of the matrix A. It is precisely the subspace of Kn spanned by the column vectors of A.

The row space of a matrix is the subspace spanned by its row vectors. The row space is interesting because it is the orthogonal complement of the null space (see below).

Independence, basis, and dimension

The vectors u and v are a basis for this two-dimensional subspace of R3.

In general, a subspace of Kn determined by k parameters (or spanned by k vectors) has dimension k. However, there are exceptions to this rule. For example, the subspace of K3 spanned by the three vectors (1, 0, 0), (0, 0, 1), and (2, 0, 3) — that is, the set of all combinations t1 (1, 0, 0) + t2 (0, 0, 1) + t3 (2, 0, 3) — is just the xz-plane, with each point on the plane described by infinitely many different values of t1, t2, t3.

In general, vectors v1, ... , vk are called linearly independent if

t1v1 + t2v2 + ⋯ + tkvk ≠ u1v1 + u2v2 + ⋯ + ukvk

for (t1, t2, ..., tk) ≠ (u1, u2, ..., uk).[note 3] If v1, ..., vk are linearly independent, then the coordinates t1, ..., tk for a vector in the span are uniquely determined.

A basis for a subspace S is a set of linearly independent vectors whose span is S. The number of elements in a basis is always equal to the geometric dimension of the subspace. Any spanning set for a subspace can be changed into a basis by removing redundant vectors (see § Algorithms below for more).
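The pruning step mentioned above — discarding redundant vectors from a spanning set — can be sketched with a greedy rank test: keep a vector only if it enlarges the span. This is a minimal pure-Python illustration (the `rref`/`rank` helpers are this sketch's own, using exact rational arithmetic):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan elimination, exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        piv = m[r][c]
        m[r] = [x / piv for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

def rank(rows):
    return sum(any(x != 0 for x in row) for row in rref(rows))

def prune_to_basis(vectors):
    """Keep each vector only if it increases the rank, i.e. is not redundant."""
    basis = []
    for v in vectors:
        if rank(basis + [list(v)]) > rank(basis):
            basis.append(list(v))
    return basis

# The spanning set from the text: (2, 0, 3) = 2*(1, 0, 0) + 3*(0, 0, 1)
spanning = [(1, 0, 0), (0, 0, 1), (2, 0, 3)]
assert prune_to_basis(spanning) == [[1, 0, 0], [0, 0, 1]]
```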

Example
Let S be the subspace of R4 defined by the equations

x1 = 2x2 and x3 = 5x4.

Then the vectors (2, 1, 0, 0) and (0, 0, 5, 1) are a basis for S. In particular, every vector that satisfies the above equations can be written uniquely as a linear combination of the two basis vectors:

(x1, x2, x3, x4) = x2 (2, 1, 0, 0) + x4 (0, 0, 5, 1).
The subspace S is two-dimensional. Geometrically, it is the plane in R4 passing through the points (0, 0, 0, 0), (2, 1, 0, 0), and (0, 0, 5, 1).

Operations and relations on subspaces

Inclusion

The set-theoretical inclusion binary relation specifies a partial order on the set of all subspaces (of any dimension).

A subspace cannot lie in any subspace of lesser dimension. If dim U = k, a finite number, and U ⊆ W, then dim W = k if and only if U = W.

Intersection

In R3, the intersection of two distinct two-dimensional subspaces is one-dimensional

Given subspaces U and W of a vector space V, then their intersection U ∩ W := {v ∈ V : v is an element of both U and W} is also a subspace of V.[10]

Proof:

  1. Let v and w be elements of U ∩ W. Then v and w belong to both U and W. Because U is a subspace, then v + w belongs to U. Similarly, since W is a subspace, then v + w belongs to W. Thus, v + w belongs to U ∩ W.
  2. Let v belong to U ∩ W, and let c be a scalar. Then v belongs to both U and W. Since U and W are subspaces, cv belongs to both U and W. Thus, cv belongs to U ∩ W.
  3. Since U and W are vector spaces, then 0 belongs to both sets. Thus, 0 belongs to U ∩ W.

For every vector space V, the set {0} and V itself are subspaces of V.[11][12]

Sum

If U and W are subspaces, their sum is the subspace[13][14]

U + W = { u + w : u ∈ U, w ∈ W }.

For example, the sum of two lines is the plane that contains them both. The dimension of the sum satisfies the inequality

max(dim U, dim W) ≤ dim(U + W) ≤ dim U + dim W.

Here, the minimum only occurs if one subspace is contained in the other, while the maximum is the most general case. The dimensions of the intersection and the sum are related by the following equation:[15]

dim(U + W) = dim U + dim W − dim(U ∩ W).
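The dimension formula can be verified numerically: dim(U + W) is the rank of the matrix obtained by stacking bases of U and W. A small pure-Python check (the `rref`/`rank` helpers and the two coordinate planes are this sketch's own illustrative choices):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan elimination, exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        piv = m[r][c]
        m[r] = [x / piv for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

def rank(rows):
    return sum(any(x != 0 for x in row) for row in rref(rows))

U = [[1, 0, 0], [0, 1, 0]]    # the xy-plane in R^3
W = [[0, 1, 0], [0, 0, 1]]    # the yz-plane in R^3
dim_sum = rank(U + W)          # stacking the bases gives a spanning set for U + W
assert dim_sum == 3            # together the two planes span all of R^3
# dim(U ∩ W) = dim U + dim W - dim(U + W) = 2 + 2 - 3 = 1: the y-axis
assert rank(U) + rank(W) - dim_sum == 1
```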

A set of subspaces is independent when the intersection between any pair of subspaces is the trivial subspace {0}. The direct sum is the sum of independent subspaces, written as U ⊕ W. An equivalent restatement is that a direct sum is a subspace sum under the condition that every subspace contributes to the span of the sum.[16][17][18][19]

The dimension of a direct sum is the sum of the dimensions of the subspaces,[20]

dim(U ⊕ W) = dim U + dim W,

which follows from the dimension formula above, since the dimension of the trivial intersection is zero.

Lattice of subspaces

The operations intersection and sum make the set of all subspaces a bounded modular lattice, where the {0} subspace, the least element, is an identity element of the sum operation, and the entire space V, the greatest element, is an identity element of the intersection operation.

Orthogonal complements

If V is an inner product space and S is a subset of V, then the orthogonal complement of S, denoted S⊥, is again a subspace.[21] If V is finite-dimensional and S is a subspace, then the dimensions of S and S⊥ satisfy the complementary relationship dim S + dim S⊥ = dim V.[22] Moreover, no non-zero vector is orthogonal to itself, so S ∩ S⊥ = {0} and V is the direct sum of S and S⊥.[23] Applying orthogonal complements twice returns the original subspace: (S⊥)⊥ = S for every subspace S.[24]
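Over Rn with the standard dot product, the orthogonal complement of a subspace spanned by given basis vectors is the null space of the matrix whose rows are those vectors, so the complementary dimension relationship and the orthogonality can be checked computationally. A minimal pure-Python sketch (helpers and the example subspace are this sketch's own):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan elimination, exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        piv = m[r][c]
        m[r] = [x / piv for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

def null_space_basis(A):
    """One basis vector per free variable of the RREF of A."""
    R = rref(A)
    n = len(A[0])
    pivots = [next(j for j, x in enumerate(row) if x != 0)
              for row in R if any(x != 0 for x in row)]
    basis = []
    for f in (j for j in range(n) if j not in pivots):
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for r, p in enumerate(pivots):
            v[p] = -R[r][f]
        basis.append(v)
    return basis

# S = span{(1, 0, 1), (0, 1, 0)} in R^3; S-perp is the null space of the basis matrix
S = [[1, 0, 1], [0, 1, 0]]
S_perp = null_space_basis(S)
assert len(S) + len(S_perp) == 3        # dim S + dim S-perp = dim V
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
assert all(dot(s, t) == 0 for s in S for t in S_perp)
```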

This operation, understood as negation (¬), makes the lattice of subspaces a (possibly infinite) orthocomplemented lattice (although not a distributive lattice).[citation needed]

In spaces with other bilinear forms, some but not all of these results still hold. In pseudo-Euclidean spaces and symplectic vector spaces, for example, orthogonal complements exist. However, these spaces may have null vectors that are orthogonal to themselves, and consequently there exist subspaces N such that N ∩ N⊥ ≠ {0}. As a result, this operation does not turn the lattice of subspaces into a Boolean algebra (nor a Heyting algebra).[citation needed]

Algorithms

Most algorithms for dealing with subspaces involve row reduction. This is the process of applying elementary row operations to a matrix, until it reaches either row echelon form or reduced row echelon form. Row reduction has the following important properties:

  1. The reduced matrix has the same null space as the original.
  2. Row reduction does not change the span of the row vectors, i.e. the reduced matrix has the same row space as the original.
  3. Row reduction does not change the linear dependence relationships among the column vectors.

Basis for a row space

Input An m × n matrix A.
Output A basis for the row space of A.
  1. Use elementary row operations to put A into row echelon form.
  2. The nonzero rows of the echelon form are a basis for the row space of A.

See the article on row space for an example.

If we instead put the matrix A into reduced row echelon form, then the resulting basis for the row space is uniquely determined. This provides an algorithm for checking whether two row spaces are equal and, by extension, whether two subspaces of Kn are equal.
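This equality test can be sketched in a few lines: the nonzero rows of the reduced row echelon form are a canonical basis, so two matrices have equal row spaces exactly when those canonical bases coincide. A minimal pure-Python illustration (exact rational arithmetic; the `rref` helper and example matrices are this sketch's own):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan elimination, exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        piv = m[r][c]
        m[r] = [x / piv for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

def row_space_basis(A):
    """Nonzero rows of the RREF: the uniquely determined canonical basis."""
    return [row for row in rref(A) if any(x != 0 for x in row)]

def same_row_space(A, B):
    return row_space_basis(A) == row_space_basis(B)

assert same_row_space([[1, 2], [3, 6]], [[2, 4]])   # both row spaces are span{(1, 2)}
assert not same_row_space([[1, 2]], [[1, 0]])
```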

Subspace membership

Input A basis {b1, b2, ..., bk} for a subspace S of Kn, and a vector v with n components.
Output Determines whether v is an element of S
  1. Create a (k + 1) × n matrix A whose rows are the vectors b1, ... , bk and v.
  2. Use elementary row operations to put A into row echelon form.
  3. If the echelon form has a row of zeroes, then the vectors {b1, ..., bk, v} are linearly dependent, and therefore v ∈ S; otherwise, v ∉ S.
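Step 3 can be rephrased in terms of rank: appending v to the k independent basis rows leaves the rank at k exactly when the echelon form acquires a row of zeroes, i.e. exactly when v ∈ S. A minimal pure-Python sketch (helpers and example data are this sketch's own):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan elimination, exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        piv = m[r][c]
        m[r] = [x / piv for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

def rank(rows):
    return sum(any(x != 0 for x in row) for row in rref(rows))

def in_subspace(basis, v):
    """v lies in span(basis) iff appending v does not increase the rank."""
    rows = [list(b) for b in basis]
    return rank(rows + [list(v)]) == rank(rows)

basis = [(1, 0, 0), (0, 1, 0)]            # a basis for the xy-plane in R^3
assert in_subspace(basis, (3, 4, 0))      # v is in S
assert not in_subspace(basis, (0, 0, 1))  # v is not in S
```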

Basis for a column space

Input An m × n matrix A
Output A basis for the column space of A
  1. Use elementary row operations to put A into row echelon form.
  2. Determine which columns of the echelon form have pivots. The corresponding columns of the original matrix are a basis for the column space.

See the article on column space for an example.

This produces a basis for the column space that is a subset of the original column vectors. It works because the columns with pivots are a basis for the column space of the echelon form, and row reduction does not change the linear dependence relationships between the columns.
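A sketch of the pivot-column selection (pure Python, exact arithmetic; the `rref` helper and the example matrix are this sketch's own):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan elimination, exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        piv = m[r][c]
        m[r] = [x / piv for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

def pivot_columns(R):
    """Column index of the leading entry of each nonzero row of an RREF."""
    return [next(j for j, x in enumerate(row) if x != 0)
            for row in R if any(x != 0 for x in row)]

def column_space_basis(A):
    """Columns of the *original* matrix at the pivot positions of its RREF."""
    return [[row[j] for row in A] for j in pivot_columns(rref(A))]

A = [[1, 2, 0],
     [2, 4, 1]]
# Pivots land in the first and third columns, so those columns of A form the basis
assert column_space_basis(A) == [[1, 2], [0, 1]]
```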

Coordinates for a vector

Input A basis {b1, b2, ..., bk} for a subspace S of Kn, and a vector v ∈ S
Output Numbers t1, t2, ..., tk such that v = t1b1 + ··· + tkbk
  1. Create an augmented matrix A whose columns are b1,...,bk , with the last column being v.
  2. Use elementary row operations to put A into reduced row echelon form.
  3. Express the final column of the reduced echelon form as a linear combination of the first k columns. The coefficients used are the desired numbers t1, t2, ..., tk. (These should be precisely the first k entries in the final column of the reduced echelon form.)

If the final column of the reduced row echelon form contains a pivot, then the input vector v does not lie in S.
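The procedure above, including the pivot-in-last-column test, can be sketched as follows (pure Python, exact arithmetic; the `rref` helper is this sketch's own, with the basis borrowed from the earlier R4 example):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan elimination, exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        piv = m[r][c]
        m[r] = [x / piv for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

def coordinates(basis, v):
    """Coefficients t with v = t1*b1 + ... + tk*bk, or None if v is not in S."""
    n, k = len(v), len(basis)
    # Augmented matrix: basis vectors as columns, v as the last column
    aug = [[basis[j][i] for j in range(k)] + [v[i]] for i in range(n)]
    R = rref(aug)
    pivots = [next(j for j, x in enumerate(row) if x != 0)
              for row in R if any(x != 0 for x in row)]
    if k in pivots:              # pivot in the final column: v does not lie in S
        return None
    return [R[i][k] for i in range(k)]

basis = [(2, 1, 0, 0), (0, 0, 5, 1)]
assert coordinates(basis, (4, 2, 5, 1)) == [2, 1]    # v = 2*b1 + 1*b2
assert coordinates(basis, (1, 0, 0, 0)) is None      # v is not in S
```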

Basis for a null space

Input An m × n matrix A.
Output A basis for the null space of A
  1. Use elementary row operations to put A in reduced row echelon form.
  2. Using the reduced row echelon form, determine which of the variables x1, x2, ..., xn are free. Write equations for the dependent variables in terms of the free variables.
  3. For each free variable xi, choose a vector in the null space for which xi = 1 and the remaining free variables are zero. The resulting collection of vectors is a basis for the null space of A.

See the article on null space for an example.
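The free-variable construction in steps 2–3 can be sketched directly (pure Python with exact rational arithmetic; the `rref` helper and example matrix are this sketch's own):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan elimination, exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        piv = m[r][c]
        m[r] = [x / piv for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

def null_space_basis(A):
    """One basis vector per free variable: set it to 1, solve for the pivots."""
    R = rref(A)
    n = len(A[0])
    pivots = [next(j for j, x in enumerate(row) if x != 0)
              for row in R if any(x != 0 for x in row)]
    basis = []
    for f in (j for j in range(n) if j not in pivots):
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for r, p in enumerate(pivots):
            v[p] = -R[r][f]          # dependent variable in terms of the free one
        basis.append(v)
    return basis

A = [[1, 0, 2],
     [0, 1, 3]]                      # already in RREF; x3 is the free variable
N = null_space_basis(A)
assert N == [[-2, -3, 1]]
assert all(sum(a * x for a, x in zip(row, N[0])) == 0 for row in A)   # A·v = 0
```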

Basis for the sum and intersection of two subspaces

Given two subspaces U and W of V, a basis of the sum and the intersection can be calculated using the Zassenhaus algorithm.

Equations for a subspace

Input A basis {b1, b2, ..., bk} for a subspace S of Kn
Output An (n − k) × n matrix whose null space is S.
  1. Create a matrix A whose rows are b1, b2, ..., bk.
  2. Use elementary row operations to put A into reduced row echelon form.
  3. Let c1, c2, ..., cn be the columns of the reduced row echelon form. For each column without a pivot, write an equation expressing the column as a linear combination of the columns with pivots.
  4. This results in a homogeneous system of n − k linear equations involving the variables c1, ..., cn. The (n − k) × n matrix corresponding to this system is the desired matrix with null space S.
Example
If the reduced row echelon form of A is
then the column vectors c1, ..., c6 satisfy the equations
It follows that the row vectors of A satisfy the equations
In particular, the row vectors of A are a basis for the null space of the corresponding matrix.
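An equivalent way to carry out this computation, shown as a pure-Python sketch: the rows of the desired (n − k) × n matrix form a basis for the null space of the matrix whose rows are b1, ..., bk, since a row m satisfies m · bi = 0 for every i exactly when every bi lies in the null space of the resulting matrix. (The helpers are this sketch's own; the basis is borrowed from the earlier R4 example.)

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan elimination, exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        piv = m[r][c]
        m[r] = [x / piv for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

def null_space_basis(A):
    """One basis vector per free variable of the RREF of A."""
    R = rref(A)
    n = len(A[0])
    pivots = [next(j for j, x in enumerate(row) if x != 0)
              for row in R if any(x != 0 for x in row)]
    basis = []
    for f in (j for j in range(n) if j not in pivots):
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for r, p in enumerate(pivots):
            v[p] = -R[r][f]
        basis.append(v)
    return basis

def equations_for_subspace(basis):
    """Matrix whose null space is span(basis)."""
    return null_space_basis([list(b) for b in basis])

basis = [(2, 1, 0, 0), (0, 0, 5, 1)]    # the subspace S from the earlier example
M = equations_for_subspace(basis)
assert len(M) == 4 - len(basis)         # (n - k) equations
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
assert all(dot(row, b) == 0 for row in M for b in basis)   # each b satisfies M·b = 0
```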

Notes

  1. ^ The term linear subspace is sometimes used for referring to flats and affine subspaces. In the case of vector spaces over the reals, linear subspaces, flats, and affine subspaces are also called linear manifolds, to emphasize that they are also manifolds.
  2. ^ Generally, K can be any field of such characteristic that the given integer matrix has the appropriate rank over it. Every field contains the integers, but some non-zero integers may become equal to zero in fields of positive characteristic.
  3. ^ This definition is often stated differently: vectors v1, ..., vk are linearly independent if t1v1 + ··· + tkvk ≠ 0 for (t1, t2, ..., tk) ≠ (0, 0, ..., 0). The two definitions are equivalent.

Citations

  1. ^ Halmos (1974) pp. 16–17, § 10
  2. ^ Anton (2005, p. 155)
  3. ^ Beauregard & Fraleigh (1973, p. 176)
  4. ^ Herstein (1964, p. 132)
  5. ^ Kreyszig (1972, p. 200)
  6. ^ Nering (1970, p. 20)
  7. ^ Hefferon (2020) p. 100, ch. 2, Definition 2.13
  8. ^ MathWorld (2021) Subspace.
  9. ^ DuChateau (2002) Basic facts about Hilbert Space — class notes from Colorado State University on Partial Differential Equations (M645).
  10. ^ Nering (1970, p. 21)
  11. ^ Hefferon (2020) p. 100, ch. 2, Definition 2.13
  12. ^ Nering (1970, p. 20)
  13. ^ Nering (1970, p. 21)
  14. ^ Vector space related operators.
  15. ^ Nering (1970, p. 22)
  16. ^ Hefferon (2020) p. 148, ch. 2, §4.10
  17. ^ Axler (2015) p. 21 § 1.40
  18. ^ Katznelson & Katznelson (2008) pp. 10–11, § 1.2.5
  19. ^ Halmos (1974) pp. 28–29, § 18
  20. ^ Halmos (1974) pp. 30–31, § 19
  21. ^ Axler (2015) p. 193, § 6.46
  22. ^ Axler (2015) p. 195, § 6.50
  23. ^ Axler (2015) p. 194, § 6.47
  24. ^ Axler (2015) p. 195, § 6.51

Sources

Textbook

  • Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International
  • Axler, Sheldon Jay (2015). Linear Algebra Done Right (3rd ed.). Springer. ISBN 978-3-319-11079-0.
  • Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Company, ISBN 0-395-14017-X
  • Halmos, Paul Richard (1974) [1958]. Finite-Dimensional Vector Spaces (2nd ed.). Springer. ISBN 0-387-90093-4.
  • Hefferon, Jim (2020). Linear Algebra (4th ed.). Orthogonal Publishing. ISBN 978-1-944325-11-4.
  • Herstein, I. N. (1964), Topics In Algebra, Waltham: Blaisdell Publishing Company, ISBN 978-1114541016
  • Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9.
  • Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-8
  • Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0-321-28713-7
  • Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall
  • Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8, archived from the original on March 1, 2001
  • Nering, Evar D. (1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76091646
  • Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3

百度