Here is Atiyah's book (Dr. Mehrvarz's favourite) for the advanced algebra and commutative algebra courses:

Introduction to Commutative Algebra

Posted by Dadmanesh at 19:42 | Link |

An Introduction to Homological Algebra

Homological algebra by Northcott

 

Posted by Dadmanesh at 18:01 | Link |

For those of you pursuing a master's degree in mathematics with an algebra concentration, here is a link to Rotman's homological algebra book (An Introduction to Homological Algebra). I hope you find it useful.
Posted by Dadmanesh at 9:11 | Link |

A Course in Commutative Algebra
Posted by Dadmanesh at 21:43 | Link |

 

In mathematics, the existence of a dual vector space reflects in an abstract way the relationship between row vectors (1×n) and column vectors (n×1). The construction can also take place for infinite-dimensional spaces and gives rise to important ways of looking at measures, distributions, and Hilbert space. The use of the dual space in some fashion is thus characteristic of functional analysis. It is also inherent in the Fourier transform.

Dual vector spaces defined on finite-dimensional vector spaces can be used for defining tensors, which are studied in tensor algebra. When applied to vector spaces of functions (which typically are infinite dimensional) dual spaces are employed for defining and studying concepts like measures, distributions, and Hilbert spaces. Consequently, dual space is an important concept in the study of functional analysis.

There are two types of dual space: the algebraic dual space, and the continuous dual space. The algebraic dual space is defined for all vector spaces. The continuous dual space is a subspace of the algebraic dual space, and is only defined for topological vector spaces.

 

 Algebraic dual space

Given any vector space V over some field F, we define the dual space V* to be the set of all linear functionals on V, i.e., scalar-valued linear transformations on V (in this context, a "scalar" is a member of the base-field F). V* itself becomes a vector space over F under the following definition of addition and scalar multiplication:

(\phi + \psi )( x ) = \phi ( x ) + \psi ( x ) \,
( a \phi ) ( x ) = a \phi ( x ) \,

for all φ, ψ in V*, a in F and x in V. In the language of tensors, elements of V are sometimes called contravariant vectors, and elements of V*, covariant vectors, covectors or one-forms.

 Examples

If the dimension of V is finite, then V* has the same dimension as V; if {e_1, ..., e_n} is a basis for V, then the associated dual basis {e^1, ..., e^n} of V* is given by

\mathbf{e}^i (\mathbf{e}_j)= \left\{\begin{matrix} 1, & \mbox{if }i = j \\ 0, & \mbox{if } i \ne j \end{matrix}\right.

In the case of R2, its basis is B = {e_1 = (1,0), e_2 = (0,1)}. Then e^1 is a one-form (a function which maps a vector to a scalar) such that e^1(e_1) = 1 and e^1(e_2) = 0; similarly for e^2. (Note: the superscript here is an index, not an exponent.)

Concretely, if we interpret Rn as the space of columns of n real numbers, its dual space is typically written as the space of rows of n real numbers. Such a row acts on Rn as a linear functional by ordinary matrix multiplication.
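
As a concrete check (a minimal NumPy sketch added here, not part of the original article), the action of a row on a column, and the defining property e^i(e_j) = δ_ij of the dual basis, look as follows:

import numpy as np

# A column vector in R^3 and a linear functional on R^3, written as a row.
x = np.array([[2.0], [3.0], [5.0]])     # element of R^3
phi = np.array([[1.0, -1.0, 4.0]])      # element of the dual space

# The functional acts by ordinary matrix multiplication: (1x3)(3x1) = (1x1).
print(phi @ x)                          # [[19.]] = 1*2 - 1*3 + 4*5

# The dual basis e^1, e^2, e^3 consists of the rows of the identity matrix:
E = np.eye(3)
for i in range(3):
    for j in range(3):
        assert (E[i:i+1] @ E[:, j:j+1]).item() == (1.0 if i == j else 0.0)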

If V consists of the space of geometrical vectors (arrows) in the plane, then the elements of the dual V* can be intuitively represented as collections of parallel lines. Such a collection of lines can be applied to a vector to yield a number in the following way: one counts how many of the lines the vector crosses.

If V is infinite-dimensional, then the above construction of the e^i does not produce a basis for V*, and the dimension of V* is greater than that of V. Consider for instance the space R^(ω), whose elements are those sequences of real numbers which have only finitely many non-zero entries (its dimension is countably infinite). The dual of this space is R^ω, the space of all sequences of real numbers (its dimension is uncountably infinite). Such a sequence (a_n) is applied to an element (x_n) of R^(ω) to give the number ∑_n a_n x_n.
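
A small Python sketch (an illustration added here, under the assumption that a finitely supported sequence is stored as a dict of its nonzero entries) shows why this pairing is always a finite sum:

# An element x of R^(ω) is a dict {index: value}; an element a of R^ω is an
# arbitrary function n -> a_n.  The pairing ∑_n a_n x_n only ever touches the
# finitely many indices where x is nonzero.
def pair(a, x):
    return sum(a(n) * xn for n, xn in x.items())

x = {0: 2.0, 3: -1.0}          # the sequence (2, 0, 0, -1, 0, 0, ...)
a = lambda n: float(n + 1)     # the sequence (1, 2, 3, 4, ...), not finitely supported
print(pair(a, x))              # 2*1 + (-1)*4 = -2.0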

 Bilinear products and dual spaces

As we saw above, if V is finite-dimensional, then V is isomorphic to V*, but the isomorphism is not natural and depends on the basis of V we started out with. In fact, any isomorphism Φ from V to V* defines a unique non-degenerate bilinear form on V by

\langle v,w \rangle = (\Phi (v))(w) \,

and conversely every such non-degenerate bilinear product on a finite-dimensional space gives rise to an isomorphism from V to V*.

 Injection into the double-dual

There is a natural homomorphism Ψ from V into the double dual V**, defined by (Ψ(v))(φ) = φ(v) for all v in V, φ in V*. This map Ψ is always injective; it is an isomorphism if and only if V is finite-dimensional. (Infinite-dimensional Hilbert spaces are not a counterexample to this, as they are isomorphic to their continuous duals, not to their algebraic duals.)
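
In finite dimensions this is easy to see concretely (a NumPy sketch added here, with vectors as columns and functionals as rows): Ψ(v) is the map "phi ↦ phi @ v", and testing it against the dual basis recovers every coordinate of v, which is why Ψ is injective.

import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=(4, 1))              # v in V = R^4
phi = rng.normal(size=(1, 4))            # phi in V*

Psi_v = lambda f: (f @ v).item()         # Psi(v), an element of V**

# (Psi(v))(phi) == phi(v) by definition:
assert np.isclose(Psi_v(phi), (phi @ v).item())

# Testing Psi(v) against the dual basis rows recovers the coordinates of v,
# so Psi(v) = 0 forces v = 0 (injectivity).
coords = [Psi_v(np.eye(4)[i:i+1]) for i in range(4)]
assert np.allclose(coords, v.ravel())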

 Pullback of a linear map

If f: V \to W is a linear map, we may define its pullback f*: W* \to V* by

f^* (\phi) = \phi \circ f \,

where φ is an element of W*.

The assignment f \mapsto \, f^* produces an injective linear map between the space of linear operators from V to W and the space of linear operators from W* to V*; this homomorphism is an isomorphism if and only if W is finite-dimensional. If V = W then the space of linear maps is actually an algebra under composition of maps, and the assignment is then an antihomomorphism of algebras, meaning that (fg)* = g*f*. In the language of category theory, taking the dual of vector spaces and the pullback of linear maps is therefore a contravariant functor from the category of vector spaces over F to itself. Note that one can identify (f*)* with f using the natural injection into the double dual.

If the linear map f is represented by the matrix A with respect to bases of V and W, then f* is represented by the same matrix acting by multiplication on the right on row vectors. Using the canonical inner product on Rn, one may identify the space with its dual, in which case the matrix can be represented by the transposed matrix tA.
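
A short NumPy sketch (added here as an illustration) of the pullback acting as right multiplication on rows:

import numpy as np

# f : R^3 -> R^2 represented by a 2x3 matrix A acting on columns.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
v = np.array([[1.0], [1.0], [2.0]])
phi = np.array([[5.0, -1.0]])           # functional on W = R^2, as a row

pullback_phi = phi @ A                  # f*(phi) = phi ∘ f, a row acting on R^3

# Check (f*(phi))(v) == phi(f(v)).
assert np.isclose((pullback_phi @ v).item(), (phi @ (A @ v)).item())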

Structure of the dual space

The structure of the algebraic dual space is simply related to the structure of the vector space. If the space is finite dimensional then the space and its dual are isomorphic, while if the space is infinite dimensional then the dual space always has larger dimension.

Given a basis {e_α} for V indexed by A, one may construct the linearly independent set of dual vectors {σ^α}, as defined above. If V is infinite-dimensional, however, the dual vectors do not form a basis for V*: the span of {σ^α} consists of all finite linear combinations of the dual vectors, but any infinite ordered tuple of dual vectors (thought of informally as an infinite sum) also defines an element of the dual space. Because every vector of the vector space may be written as a finite linear combination of the basis vectors {e_α}, only finitely many terms of such an infinite sum are nonzero when it is applied to any given vector, so the sum is well defined.

More explicitly, any infinite tuple of scalars (f_α) may be thought of as the infinite sum

f=\sum_{\alpha \in A}f_\alpha\sigma^\alpha

which satisfies

f(\mathbf{e}_\alpha) = f_\alpha.

So f acts on an arbitrary vector

\mathbf{v}=\sum_{i=1}^n v^i\mathbf{e}_i

in V by

f(\mathbf{v}) = \sum_{i=1}^n v^if(\mathbf{e}_i) = \sum_{i=1}^n v^i f_i.

This dual vector f is linearly independent of the dual vectors {σ^α} unless A is finite. The dual space is the span of all such tuples. The idea of a dual vector as an infinite sum should not be taken too literally: in general, infinite sums are defined in terms of a limit, which only makes sense in a topological space, and even then not all sums converge. A basis for the dual space is a set of vectors such that every dual vector can be written as a finite linear combination of them; the existence of such a basis requires the axiom of choice, and no such basis can be exhibited explicitly.

This can be understood more rigorously, if perhaps more abstractly, as follows. For any vector space V over F, we can choose a basis. If that basis has cardinality α (so α is the dimension of the vector space), we may take it to be indexed by α. Since any field may be viewed as a one-dimensional vector space over itself, we may construct the direct sum of α copies of F, and the existence of the basis is equivalent to the existence of an isomorphism

V\cong\bigoplus_\alpha \mathbb{F}.

Thus this isomorphism is nothing other than the equivalent statement that any vector can be uniquely written as a sum of finitely many basis vectors, which is simply the definition of a basis. Note that the isomorphism is not canonical; it depends on the particular choice of basis.

A property of the direct sum is that the operation of passing to the dual turns direct sums into direct products. That is,

\left(\bigoplus_\alpha\mathbb{F}\right)^*=\prod_\alpha\mathbb{F}^*=\prod_\alpha\mathbb{F},

and here in the second equation we use the fact that any field F, viewed as a vector space over itself, is canonically isomorphic to its dual space. Thus we see that

V^*\cong \prod_\alpha\mathbb{F}.

Recall that the vector space direct sum is the set of tuples which are only nonzero finitely many times, while the vector space direct product is the set of all tuples (tuples which may be nonzero infinitely often). If α is infinite, then there are always more vectors in the dual space than the vector space. This is in marked contrast to the case of the continuous dual space, discussed below, which may be isomorphic to the vector space even for infinite-dimensional spaces. On the other hand, if α is finite, then all tuples are nonzero only finitely often, so the direct sum and direct product coincide; any finite dimensional vector space is isomorphic to its dual space, though usually not canonically so.

 Continuous dual space

When dealing with topological vector spaces, one is typically only interested in the continuous linear functionals from the space into the base field. This gives rise to the notion of the continuous dual space which is a linear subspace of the algebraic dual space. The continuous dual of a vector space V is denoted V′. When the context is clear, the continuous dual may just be called the dual.

The continuous dual V′ of a normed vector space V (e.g., a Banach space or a Hilbert space) forms a normed vector space. The norm ||φ|| of a continuous linear functional on V is defined by

\|\phi \| = \sup \{ |\phi ( x )| : \|x\| \le 1 \}

This turns the continuous dual into a normed vector space, indeed into a Banach space, provided the underlying field is complete (completeness of the field is often included in the definition of a normed vector space). In other words, the dual of a normed space over a complete field is necessarily complete.
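
As an illustration (a rough Monte Carlo sketch in NumPy, not from the original text): for the functional φ(x) = a · x on Euclidean R², the Cauchy–Schwarz inequality gives ‖φ‖ = ‖a‖₂, and the sampled supremum over the unit ball approaches this value from below.

import numpy as np

rng = np.random.default_rng(1)
a = np.array([3.0, -4.0])                      # ||a||_2 = 5

samples = rng.normal(size=(100_000, 2))
unit_vectors = samples / np.linalg.norm(samples, axis=1, keepdims=True)
estimate = np.abs(unit_vectors @ a).max()      # sup of |phi(x)| over sampled unit x
print(estimate, np.linalg.norm(a))             # estimate is close to (and below) 5.0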

For any finite-dimensional normed vector space or topological vector space, such as Euclidean n-space, the continuous dual and the algebraic dual coincide. This is, however, false for any infinite-dimensional normed space, as shown by the existence of discontinuous linear maps.

 Examples

Let 1 < p < ∞ be a real number and consider the Banach space lp of all sequences a = (an) for which

\|\mathbf{a}\|_p = \left ( \sum_{n=0}^\infty |a_n|^p \right) ^{1/p}

is finite. Define the number q by 1/p + 1/q = 1. Then the continuous dual of lp is naturally identified with lq: given an element φ ∈ (lp)', the corresponding element of lq is the sequence (φ(en)) where en denotes the sequence whose n-th term is 1 and all others are zero. Conversely, given an element a = (an) ∈ lq, the corresponding continuous linear functional φ on lp is defined by φ(b) = ∑n an bn for all b = (bn) ∈ lp (see Hölder's inequality).
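
A truncated numeric check (a NumPy sketch added here; finite arrays stand in for sequences) of this pairing and of Hölder's inequality |∑ a_n b_n| ≤ ‖a‖_q ‖b‖_p:

import numpy as np

p = 3.0
q = p / (p - 1.0)                  # so that 1/p + 1/q = 1

rng = np.random.default_rng(2)
a = rng.normal(size=50)            # element of (a truncation of) l^q
b = rng.normal(size=50)            # element of (a truncation of) l^p

lhs = abs(np.sum(a * b))
rhs = np.sum(np.abs(a)**q)**(1/q) * np.sum(np.abs(b)**p)**(1/p)
assert lhs <= rhs + 1e-12
print(lhs, rhs)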

In a similar manner, the continuous dual of l1 is naturally identified with l∞. Furthermore, the continuous duals of the Banach spaces c (consisting of all convergent sequences, with the supremum norm) and c0 (the sequences converging to zero) are both naturally identified with l1.

Posted by Dadmanesh at 9:43 | Link |

 

An important class of groups is the class of permutation groups. One reason for their importance is that every group may be represented as a group of permutations on a suitable set.

Let A be a set; then a permutation of A is a bijection \sigma : A \to A. Read the notes on functions if you are unfamiliar with this idea.

If A is finite then we may as well let A = \{1, 2, \ldots, n\}, and we write such a permutation as

\sigma = \begin{pmatrix} 1 & 2 & \cdots & n \\ a_1 & a_2 & \cdots & a_n \end{pmatrix}

where the a_i are distinct elements of A.

For example, let A = \{1, 2, 3\}. There are six permutations of this set, namely

\begin{pmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \end{pmatrix}, \begin{pmatrix} 1 & 2 & 3 \\ 2 & 1 & 3 \end{pmatrix}, \begin{pmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 2 & 3 \\ 1 & 3 & 2 \end{pmatrix}, \begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 2 & 3 \\ 3 & 1 & 2 \end{pmatrix}.

There are two other common ways to write such a permutation. If the set A is understood to be a set of consecutive integers (or an ordered set) then the top line is deleted, so we would write

(a_1 \; a_2 \; \cdots \; a_n)

where it is understood that \sigma maps 1 to a_1, etc. We shall not use this way of writing permutations. The second way, and the method we shall use in these notes, is to write the permutation as a product of disjoint cycles. A cycle is constructed as follows: choose some starting element, say i. Now compute the elements \sigma(i), \sigma^2(i), and so on, until we arrive back at the element i (this is guaranteed if A is finite). Enclose this list of elements of A in parentheses to form the cycle

(i \;\; \sigma(i) \;\; \sigma^2(i) \;\; \cdots \;\; \sigma^{k-1}(i))

where by \sigma^k(i) we mean \sigma applied to i k times. Now repeat the process to form the next cycle, choosing as starting element an element of A that has not appeared in any previous cycle. The process ends when every element of A appears in exactly one cycle. The representation of \sigma is then obtained by juxtaposing (multiplying) these disjoint cycles. It is usual to suppress cycles containing only one element.
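
The construction just described is easy to mechanize. Here is a short Python sketch (an addition to these notes, storing a permutation as a dict mapping each element to its image); it reproduces the cycle form of the example worked out just below.

def cycle_decomposition(sigma):
    """Disjoint cycles of a permutation given as a dict {x: sigma(x)}."""
    seen = set()
    cycles = []
    for start in sigma:
        if start in seen:
            continue
        # Follow start -> sigma(start) -> sigma^2(start) -> ... back to start.
        cycle = [start]
        seen.add(start)
        x = sigma[start]
        while x != start:
            cycle.append(x)
            seen.add(x)
            x = sigma[x]
        if len(cycle) > 1:          # suppress one-element cycles
            cycles.append(tuple(cycle))
    return cycles

sigma = {1: 2, 2: 3, 3: 1, 4: 4, 5: 6, 6: 5, 7: 7, 8: 8}
print(cycle_decomposition(sigma))   # [(1, 2, 3), (5, 6)]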

As an example, if A = \{1, 2, \ldots, 8\}, here are two of the three ways we discussed to write a certain permutation:

\begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ 2 & 3 & 1 & 4 & 6 & 5 & 7 & 8 \end{pmatrix} = (1 \; 2 \; 3)(5 \; 6)

where in the cycle notation we have suppressed the cycles (4)(7)(8). In a cycle such as (1 \; 2 \; 3) we mean that the permutation maps 1 to 2, maps 2 to 3, and maps 3 to 1.

Note that the cycle (1 \; 2 \; 3) can also be written as (2 \; 3 \; 1), since it contains the same information, but could not have been written as (1 \; 3 \; 2).

The six permutations on A = \{1, 2, 3\} written as permutations in cycle form are 1, (1 \; 2), (1 \; 3), (2 \; 3), (1 \; 2 \; 3), (1 \; 3 \; 2).

 

Posted by Dadmanesh at 14:06 | Link |

Trace (linear algebra)

In linear algebra, the trace of an n-by-n square matrix A is defined to be the sum of the elements on the main diagonal (the diagonal from the upper left to the lower right) of A, i.e.

tr(A) = A1,1 + A2,2 + ... + An,n.

where Aij represents the (i, j)-th entry of A. The use of the term trace arises from the German term Spur (cognate with the English spoor).

Properties

The trace is a linear map. That is,

tr(A + B) = tr(A) + tr(B)

tr(rA) = r tr(A)

for all square matrices A and B, and all scalars r.

Since the principal diagonal is not moved on transposition, a matrix and its transpose have the same trace:

tr(A) = tr(AT).

If A is an n×m matrix and B is an m×n matrix, then

tr(AB) = tr(BA).

Note here that AB is an n×n matrix, while BA is an m×m matrix.

Using this fact, we can deduce that the trace of a product of square matrices is equal to the trace of any cyclic permutation of the product, a fact known as the cyclic property of the trace. For example, with three square matrices A, B, and C,

tr(ABC) = tr(CAB) = tr(BCA).

More generally, the same is true if the matrices are not assumed to be square but are shaped so that all of these products exist.

If A, B, and C are square matrices of the same dimension and are symmetric, then the traces of their products are invariant not only under cyclic permutations but under all permutations, i.e.,

tr(ABC) = tr(CAB) = tr(BCA) = tr(BAC) = tr(CBA) = tr(ACB).

The trace is similarity-invariant, which means that A and P−1AP (with P invertible) have the same trace, although there exist matrices which have the same trace but are not similar. This can be verified using the cyclic property above:

tr(P−1AP) = tr(PP−1A) = tr(A)
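
These identities are easy to check numerically (a NumPy sketch added here, not part of the original article):

import numpy as np

rng = np.random.default_rng(3)
A, B, C = (rng.normal(size=(4, 4)) for _ in range(3))
tr = np.trace

# Cyclic permutations of a product share the same trace ...
assert np.isclose(tr(A @ B @ C), tr(C @ A @ B))
assert np.isclose(tr(A @ B @ C), tr(B @ C @ A))
# ... but a non-cyclic permutation generally does not:
print(tr(A @ B @ C), tr(B @ A @ C))

# Similarity invariance: tr(P^{-1} A P) == tr(A).
P = rng.normal(size=(4, 4))         # almost surely invertible
assert np.isclose(tr(np.linalg.inv(P) @ A @ P), tr(A))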

Given a linear map f : V → V (where V is a finite-dimensional vector space), we can define the trace of this map by considering the trace of a matrix representation of f: choose a basis for V, describe f as a matrix relative to this basis, and take the trace of this square matrix. The result does not depend on the basis chosen, since different bases give rise to similar matrices, which allows for a basis-independent definition of the trace of a linear map. Using the canonical isomorphism between the space End(V) of linear maps on V and V ⊗ V*, the trace of v ⊗ f is defined to be f(v), with v in V and f an element of the dual space V*.

Eigenvalue relationships

If A is a square n-by-n matrix with real or complex entries and if λ1,...,λn are the (complex) eigenvalues of A (listed according to their algebraic multiplicities), then

tr(A) = ∑ λi.

This follows from the fact that A is always similar to its Jordan form, an upper triangular matrix having λ1,...,λn on the main diagonal.

From the connection between the trace and the eigenvalues, one can derive a connection between the trace function, the matrix exponential function, and the determinant:

det(exp(A)) = exp(tr(A)).
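
Both facts can be verified numerically (a sketch added here; it assumes SciPy is available for the matrix exponential):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 5))

# tr(A) equals the sum of the (complex) eigenvalues ...
eigenvalues = np.linalg.eigvals(A)
assert np.isclose(np.trace(A), eigenvalues.sum().real)

# ... and det(exp(A)) = exp(tr(A)).
assert np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))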

The trace also prominently appears in Jacobi's formula for the derivative of the determinant (see under determinant).

Other ideas and applications

If one imagines that the matrix A describes a water flow, in the sense that for every x in Rn the vector Ax represents the velocity of the water at the location x, then the trace of A can be interpreted as follows: given any region U in Rn, the net flow of water out of U is given by tr(A) · vol(U), where vol(U) is the volume of U. See divergence.

The trace is used to define characters of group representations: two representations A(x) and B(x) are equivalent if tr A(x) = tr B(x) for all x.

The trace also plays a central role in the distribution of quadratic forms.

A matrix whose trace is zero is said to be traceless or tracefree.

Inner product

For an m-by-n matrix A with complex (or real) entries, we have

tr(A*A) ≥ 0

with equality only if A = 0. The assignment

⟨A, B⟩ = tr(A*B)

yields an inner product on the space of all complex (or real) m-by-n matrices.

If m = n, then the norm induced by the above inner product is called the Frobenius norm of a square matrix. Indeed it is simply the Euclidean norm if the matrix is considered as a vector of length n².
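
A quick NumPy check (added here as an illustration) that tr(A*A) is real, nonnegative, and equal to the squared Frobenius norm:

import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))
B = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))

inner = np.trace(A.conj().T @ B)        # <A, B> = tr(A*B)

gram = np.trace(A.conj().T @ A)         # tr(A*A)
assert abs(gram.imag) < 1e-12 and gram.real >= 0
# tr(A*A) equals the squared Euclidean norm of A flattened into a vector:
assert np.isclose(gram.real, np.linalg.norm(A.ravel())**2)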

 

Posted by Dadmanesh at 13:23 | Link |

 

The full range of mathematical operations on vectors, including addition, multiplication, differentiation, integration, and so on, is carried out according to particular rules and principles. The collection of these rules is treated in a subject called vector algebra.

Basic concepts

The discussion of motion in two or three dimensions is greatly simplified by introducing the concept of a vector. Geometrically, a vector is defined as a physical quantity specified by a magnitude and a direction in space. Velocity and force, for example, are both vector quantities. Each vector is represented by an arrow whose length and direction indicate the magnitude and direction of the vector. The sum of two or more vectors can be computed, whichever is more convenient, by the parallelogram method or by the component method, in which each vector is resolved into its components along the coordinate axes.

Multiplying vectors

In general, vectors can be multiplied in two ways: the dot (scalar) product and the cross (vector) product. In the scalar, or dot, product, written A·B, the result equals the magnitude of one vector times the magnitude of the projection of the other vector onto it. Naturally, if the two vectors are perpendicular, their dot product is zero. In the cross product, written A×B, the result is a vector whose direction is determined by the right-hand rule and whose magnitude equals the product of the magnitudes of the two vectors and the sine of the angle between them. Beyond these two cases, products can also be mixed: for example, if A, B, C are three arbitrary vectors, products such as A·(B×C) or A×(B×C) can be formed. We must always keep in mind, however, that the result of a scalar (dot) product is a number, whereas the result of a vector (cross) product is a vector.

The right-hand rule

The right-hand rule, which comes up in most physics problems involving vectors, is stated as follows: suppose A and B are two arbitrary vectors whose cross product A×B is to be formed. To determine the direction of the resulting vector, point the four fingers of the right hand along the first vector and curl them toward the second vector; the right thumb then points along the resulting vector.
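
A small NumPy example (added here, not part of the original text) of both products, consistent with the right-hand rule:

import numpy as np

A = np.array([1.0, 0.0, 0.0])    # the unit vector x
B = np.array([0.0, 1.0, 0.0])    # the unit vector y

print(np.dot(A, B))      # 0.0: perpendicular vectors have zero dot product
print(np.cross(A, B))    # [0. 0. 1.]: x × y = z, as the right-hand rule predicts

# |A × B| = |A||B| sin(θ); here θ = 90°, so the magnitude is exactly 1.
assert np.isclose(np.linalg.norm(np.cross(A, B)),
                  np.linalg.norm(A) * np.linalg.norm(B))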

Vector differentiation

There are specific rules for differentiating vectors, which are outlined below.

  1. The derivative of the sum of two or more vectors equals the sum of the derivatives of the individual vectors.
  2. The derivative of the product of two vectors (whether a scalar or a vector product) equals the sum of two terms: the first term is the product of the derivative of the first vector with the second vector, and the second term is the product of the first vector with the derivative of the second vector. The derivative of a product of several vectors is defined in the same way: there are as many terms as there are vectors in the product, and in each term exactly one of the vectors is differentiated. Higher-order derivatives (second derivatives and beyond) are computed in the same manner. (The two product rules are written out explicitly after this list.)
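
Written out explicitly (added here for reference; both identities follow directly from rule 2, and in the cross-product case the order of the factors must be preserved):

\frac{d}{dt}(\mathbf{A} \cdot \mathbf{B}) = \frac{d\mathbf{A}}{dt} \cdot \mathbf{B} + \mathbf{A} \cdot \frac{d\mathbf{B}}{dt}, \qquad
\frac{d}{dt}(\mathbf{A} \times \mathbf{B}) = \frac{d\mathbf{A}}{dt} \times \mathbf{B} + \mathbf{A} \times \frac{d\mathbf{B}}{dt}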

Vector integration

In the general three-dimensional case, two kinds of functions can be considered: scalar point functions and vector point functions. For example, the potential energy is a scalar point function, while the electric field intensity is a vector point function. Integration, in turn, can be carried out in three ways: along a line, over a surface, or over a volume. In the first case the integration is performed along a curve, in the second over a surface, and in the third over a volume. An important point here is that the integration should be carried out in whichever coordinate system suits the symmetry present in the problem and the type of function involved. For example, if the problem at hand has spherical symmetry, it is best to evaluate all the integrals the problem requires in spherical coordinates.

 

Posted by Dadmanesh at 9:22 | Link |

 

Group

 

A group is a pair (G, *), where G is a non-empty set and "*" is a binary operation on G satisfying the following conditions.

  • For any a, b in G, a*b belongs to G. (The operation "*" is closed.)
  • For any a, b, c in G, (a*b)*c = a*(b*c). (Associativity of the operation.)
  • There is an element e in G such that g*e = e*g = g for any g in G. (Existence of an identity element.)
  • For any g in G there exists an element h in G such that g*h = h*g = e. (Existence of inverses.)

Usually, the symbol "*" is omitted and we write ab for a*b. Sometimes the symbol "+" is used to represent the operation, especially when the group is abelian.

It can be proved that there is only one identity element and that every element has exactly one inverse. Because of this we usually denote the inverse of a as a−1 (or −a when we are using additive notation). The identity element is also called the neutral element due to its behavior with respect to the operation, and thus a−1 is sometimes (although uncommonly) called the neutralizing element of a.
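
As a quick illustration (a minimal Python sketch, not from the original post), the four axioms can be checked by brute force for a small example such as the integers {0, ..., 5} under addition modulo 6:

G = range(6)
op = lambda a, b: (a + b) % 6

assert all(op(a, b) in G for a in G for b in G)                      # closure
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a in G for b in G for c in G)                         # associativity
e = next(g for g in G if all(op(g, x) == op(x, g) == x for x in G))  # identity
assert all(any(op(g, h) == op(h, g) == e for h in G) for g in G)     # inverses
print("identity element:", e)                                        # prints 0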

Groups often arise as the symmetry groups of other mathematical objects; the study of such situations uses group actions. In fact, much of the study of groups themselves is conducted using group actions.

 

 

Posted by Dadmanesh at 12:00 | Link |
 
