
Solving systems of linear algebraic equations: solution methods and examples. Finding the general solution of a system and a fundamental system of solutions (FSR)

Systems of linear equations in which all the free terms are equal to zero are called homogeneous:

Any homogeneous system is always consistent, since it always has the zero (trivial) solution. The question arises: under what conditions does a homogeneous system have a nontrivial solution?

Theorem 5.2. A homogeneous system has a nontrivial solution if and only if the rank of its main matrix is less than the number of its unknowns.

Corollary. A square homogeneous system has a nontrivial solution if and only if the determinant of the main matrix of the system is equal to zero.

Example 5.6. Determine the values of the parameter λ for which the system has nontrivial solutions, and find these solutions:

Solution. This system will have a nontrivial solution when the determinant of the main matrix is equal to zero:

Thus, the system has nontrivial solutions when λ = 3 or λ = 2. For λ = 3, the rank of the main matrix of the system is 1. Then, keeping only one equation and setting $y = a$ and $z = b$, we get $x = b - a$, i.e.

For λ = 2, the rank of the main matrix of the system is 2. Then, choosing the basis minor:

we get a simplified system

From here we find that $x = z/4$, $y = z/2$. Setting $z = 4a$, we get
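
The concrete matrix of Example 5.6 is not reproduced in the text above, so here is a sympy sketch of the same procedure for a hypothetical parameter-dependent 2×2 matrix (an assumption made only for illustration): the admissible values of the parameter are the roots of the determinant, and the nontrivial solutions are the null-space vectors.

```python
import sympy as sp

lam = sp.symbols('lambda')

# Hypothetical parameter-dependent matrix (NOT the matrix of Example 5.6,
# which is not reproduced above); used only to illustrate the method.
A = sp.Matrix([[lam - 1, 2],
               [2, lam - 1]])

# Nontrivial solutions exist exactly when det A(lambda) = 0
values = sp.solve(A.det(), lam)
print(values)                     # [-1, 3] for this illustrative matrix

# For each such value, the null space gives the nontrivial solutions
for v in values:
    print(v, A.subs(lam, v).nullspace())
```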

The set of all solutions of a homogeneous system has a very important linear property: if the columns $X_1$ and $X_2$ are solutions of the homogeneous system $AX = 0$, then any linear combination of them $aX_1 + bX_2$ is also a solution of this system. Indeed, since $AX_1 = 0$ and $AX_2 = 0$, we have $A(aX_1 + bX_2) = aAX_1 + bAX_2 = a \cdot 0 + b \cdot 0 = 0$. It is because of this property that, if a linear system has more than one solution, it has infinitely many solutions.
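
A quick numerical check of this property; the matrix and the two solutions below are chosen only for illustration and do not come from the examples above.

```python
import numpy as np

# Illustrative homogeneous system AX = 0
A = np.array([[1.0, 2.0, -1.0],
              [2.0, 4.0, -2.0]])

# Two solutions of AX = 0, found by inspection
X1 = np.array([1.0, 0.0, 1.0])
X2 = np.array([-2.0, 1.0, 0.0])

a, b = 3.0, -5.0
combo = a * X1 + b * X2          # a linear combination of the two solutions

print(A @ X1, A @ X2)            # both are the zero vector
print(A @ combo)                 # also the zero vector: the combination is again a solution
```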

Linearly independent columns $E_1, E_2, \dots, E_k$ that are solutions of a homogeneous system are called a fundamental system of solutions of the homogeneous system of linear equations if the general solution of this system can be written as a linear combination of these columns:

If a homogeneous system has n variables and the rank of the main matrix of the system equals r, then $k = n - r$.

Example 5.7. Find the fundamental system of solutions to the following system of linear equations:

Solution. Let's find the rank of the main matrix of the system:

Thus, the set of solutions of this system of equations forms a linear subspace of dimension n - r = 5 - 2 = 3. Let us choose the basis minor:

Then, keeping only the basic equations (the rest are linear combinations of them) and the basic variables (the remaining, so-called free, variables are moved to the right-hand side), we obtain a simplified system of equations:

Setting $x_3 = a$, $x_4 = b$, $x_5 = c$, we find



Setting $a = 1$, $b = c = 0$, we obtain the first basic solution; setting $b = 1$, $a = c = 0$, the second; setting $c = 1$, $a = b = 0$, the third. As a result, the normal fundamental system of solutions takes the form

Using the fundamental system, the general solution of a homogeneous system can be written as

$X = aE_1 + bE_2 + cE_3$.
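
The matrices of Example 5.7 are not reproduced above, so the following sympy sketch shows the same computation on an assumed system with five unknowns and rank 2, for which the solution subspace likewise has dimension n - r = 5 - 2 = 3; the nullspace() method returns exactly a fundamental system of solutions.

```python
import sympy as sp

# Assumed illustrative system of rank 2 in five unknowns (not the system of Example 5.7)
A = sp.Matrix([[1, 1, 1, 0, 0],
               [0, 1, 0, 1, 1]])

print(A.rank())                  # 2, so the solution space has dimension 5 - 2 = 3

# A basis of solutions of AX = 0, i.e. a fundamental system of solutions
E = A.nullspace()
for col in E:
    print(col.T)

# The general solution is a*E[0] + b*E[1] + c*E[2] with arbitrary a, b, c
```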

Let us note some properties of solutions to an inhomogeneous system of linear equations AX=B and their relationship with the corresponding homogeneous system of equations AX = 0.

The general solution of an inhomogeneous system is equal to the sum of the general solution of the corresponding homogeneous system $AX = 0$ and an arbitrary particular solution of the inhomogeneous system. Indeed, let $Y_0$ be an arbitrary particular solution of the inhomogeneous system, i.e. $AY_0 = B$, and let $Y$ be the general solution of the inhomogeneous system, i.e. $AY = B$. Subtracting one equality from the other, we get
$A(Y - Y_0) = 0$, i.e. $Y - Y_0$ is a solution of the corresponding homogeneous system $AX = 0$. Hence $Y - Y_0 = X$, or $Y = Y_0 + X$. Q.E.D.

Let the inhomogeneous system have the form $AX = B_1 + B_2$. Then the general solution of such a system can be written as $X = X_1 + X_2$, where $AX_1 = B_1$ and $AX_2 = B_2$. This expresses a universal property of linear systems in general (algebraic, differential, functional, etc.). In physics this property is called the superposition principle, and in electrical and radio engineering the principle of superposition. For example, in the theory of linear electric circuits the current in any branch of the circuit can be obtained as the algebraic sum of the currents produced by each energy source separately.
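
A minimal numerical illustration of this superposition property, with an invertible matrix chosen only for the sketch:

```python
import numpy as np

# Illustrative invertible matrix and two right-hand sides
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B1 = np.array([1.0, 0.0])
B2 = np.array([0.0, 1.0])

X1 = np.linalg.solve(A, B1)      # AX1 = B1
X2 = np.linalg.solve(A, B2)      # AX2 = B2
X = np.linalg.solve(A, B1 + B2)  # AX = B1 + B2

print(np.allclose(X, X1 + X2))   # True: the responses to B1 and B2 simply add up
```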

A linear equation is called homogeneous if its free term is equal to zero, and inhomogeneous otherwise. A system consisting of homogeneous equations is called homogeneous and has the general form
$a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = 0$
$\dots$
$a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n = 0$

It is obvious that every homogeneous system is consistent and has the zero (trivial) solution. Therefore, for homogeneous systems of linear equations one most often has to answer the question of whether nonzero solutions exist. The answer to this question can be formulated as the following theorem.

Theorem. A homogeneous system of linear equations has a nonzero solution if and only if its rank is less than the number of unknowns.

Proof: Suppose that a system whose rank equals r has a nonzero solution. Obviously, r does not exceed n. In the case $r = n$ the system has a unique solution. Since a system of homogeneous linear equations always has the zero solution, this unique solution is the zero one. Thus, nonzero solutions are possible only for $r < n$.

Corollary 1: A homogeneous system of equations in which the number of equations is less than the number of unknowns always has a nonzero solution.

Proof: If a system of equations has $m < n$, then the rank of the system does not exceed the number of equations, i.e. $r \le m < n$. Thus, the condition $r < n$ is satisfied and, therefore, the system has a nonzero solution.

Corollary 2: A homogeneous system of $n$ equations with $n$ unknowns has a nonzero solution if and only if its determinant is zero.

Proof: Suppose that a system of linear homogeneous equations whose matrix has determinant $\Delta$ possesses a nonzero solution. Then, by the theorem just proved, $r < n$, and this means that the matrix is singular, i.e. $\Delta = 0$.

Kronecker-Capelli theorem: A system of linear equations is consistent if and only if the rank of the system matrix is equal to the rank of the augmented matrix of this system. A system of equations is called consistent if it has at least one solution.
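
In practice the Kronecker-Capelli criterion can be checked by comparing matrix ranks. A minimal sketch with an illustrative (and deliberately inconsistent) system, assuming NumPy is available:

```python
import numpy as np

# Illustrative system: the third equation contradicts the first two
A = np.array([[1.0, 1.0],
              [1.0, -1.0],
              [2.0, 0.0]])
b = np.array([2.0, 0.0, 5.0])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))  # augmented matrix

print(rank_A, rank_Ab)           # 2 and 3 here
print("consistent" if rank_A == rank_Ab else "inconsistent")
```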

Homogeneous system of linear algebraic equations.

A system of m linear equations with n variables is called a system of linear homogeneous equations if all the free terms are equal to 0. A system of linear homogeneous equations is always consistent, because it always has at least the zero solution. A system of linear homogeneous equations has a nonzero solution if and only if the rank of its matrix of coefficients of the variables is less than the number of variables, i.e. for rank A < n. Any linear combination of solutions of a homogeneous linear system is also a solution of this system.

A system of linearly independent solutions $e_1, e_2, \dots, e_k$ is called fundamental if every solution of the system is a linear combination of these solutions. Theorem: if the rank r of the matrix of coefficients of the variables of a system of linear homogeneous equations is less than the number of variables n, then every fundamental system of solutions of the system consists of $n - r$ solutions. Therefore, the general solution of the system of linear homogeneous equations has the form $c_1e_1 + c_2e_2 + \dots + c_ke_k$, where $e_1, e_2, \dots, e_k$ is any fundamental system of solutions, $c_1, c_2, \dots, c_k$ are arbitrary numbers, and $k = n - r$. The general solution of a system of m linear equations with n variables is equal to the sum of the general solution of the corresponding homogeneous system of linear equations and an arbitrary particular solution of this system.
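
A sketch of how this decomposition can be produced numerically: a particular solution of the inhomogeneous system (here via least squares) plus an arbitrary multiple of a fundamental system of solutions of the homogeneous one. The matrix and right-hand side are chosen only for the illustration.

```python
import numpy as np
import sympy as sp

# Illustrative consistent underdetermined system AX = B
A_sym = sp.Matrix([[1, 1, 1],
                   [0, 1, 2]])
A = np.array(A_sym.tolist(), dtype=float)
B = np.array([3.0, 2.0])

# A particular solution of the inhomogeneous system
Y0, *_ = np.linalg.lstsq(A, B, rcond=None)

# A fundamental system of solutions of the homogeneous system AX = 0
E = [np.array(v.tolist(), dtype=float).ravel() for v in A_sym.nullspace()]

# Every solution has the form Y = Y0 + c1*E[0] (here n - r = 1, so one basis vector)
Y = Y0 + 2.5 * E[0]
print(np.allclose(A @ Y, B))     # True
```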

7. Linear spaces. Subspaces. Basis and dimension. Linear span. A linear space is called n-dimensional if it contains a system of n linearly independent vectors, while any system of a larger number of vectors is linearly dependent. The number n is called the dimension (the number of dimensions) of the linear space and is denoted $\dim$. In other words, the dimension of a space is the maximum number of linearly independent vectors in it. If such a number exists, the space is called finite-dimensional. If for any natural number n the space contains a system of n linearly independent vectors, then such a space is called infinite-dimensional. In what follows, unless otherwise stated, finite-dimensional spaces will be considered.

A basis of an n-dimensional linear space is an ordered collection of n linearly independent vectors (basis vectors).

Theorem 8.1 (on the expansion of a vector in a basis). If $e_1, e_2, \dots, e_n$ is a basis of an n-dimensional linear space, then any vector $v$ can be represented as a linear combination of the basis vectors:

$v = v_1e_1 + v_2e_2 + \dots + v_ne_n$,
and, moreover, in only one way, i.e. the coefficients $v_1, v_2, \dots, v_n$ are determined uniquely. In other words, any vector of the space can be expanded in a basis, and in a unique way.

Indeed, the dimension of the space is n. The system of vectors $e_1, e_2, \dots, e_n$ is linearly independent (it is a basis). After adjoining any vector $v$ to the basis, we obtain a linearly dependent system (since this system consists of n + 1 vectors of an n-dimensional space). By property 7 of linearly dependent and linearly independent vectors, we obtain the conclusion of the theorem.
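
A small numerical sketch of such an expansion: the coordinates of a vector in a basis are found by solving the linear system whose columns are the basis vectors (the basis of R^3 below is only an example).

```python
import numpy as np

# Columns of E form an illustrative basis e1, e2, e3 of R^3
E = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

v = np.array([2.0, 3.0, 5.0])

# Coordinates v1, v2, v3 of v in the basis: solve E @ coords = v
coords = np.linalg.solve(E, v)
print(coords)
print(np.allclose(E @ coords, v))  # True: v = v1*e1 + v2*e2 + v3*e3, and uniquely so
```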

A homogeneous system is always consistent and has the trivial (zero) solution. For a nontrivial solution to exist, it is necessary that the rank of the matrix be less than the number of unknowns: rank A < n.

A fundamental system of solutions of a homogeneous system is a system of solutions in the form of column vectors $E_1, E_2, \dots, E_k$ corresponding to the canonical basis, i.e. the basis in which the arbitrary constants $C_1, C_2, \dots, C_k$ are set equal to one in turn while all the others are set to zero.

Then the general solution of the homogeneous system has the form $X = C_1E_1 + C_2E_2 + \dots + C_kE_k$, where $C_1, C_2, \dots, C_k$ are arbitrary constants. In other words, the general solution is a linear combination of the fundamental system of solutions.

Thus, the basic solutions can be obtained from the general solution by giving the free unknowns the value one in turn, while setting all the others equal to zero.

Example. Let's find a solution to the system

Let's accept , then we get a solution in the form:

Let us now construct a fundamental system of solutions:


The general solution will be written as:

Solutions of a system of homogeneous linear equations have the following properties:

In other words, any linear combination of solutions to a homogeneous system is again a solution.

Solving systems of linear equations using the Gauss method

Solving systems of linear equations has interested mathematicians for several centuries. The first results were obtained in the 18th century. In 1750, G. Cramer (1704-1752) published his work on the determinants of square matrices and proposed an algorithm for finding the inverse matrix. In 1809, Gauss described a new solution method known as the method of elimination.

The Gauss method, or the method of successive elimination of unknowns, consists in reducing the system of equations, by means of elementary transformations, to an equivalent system of step (or triangular) form. Such systems make it possible to find all the unknowns sequentially, in a definite order.

Let us assume that in system (1) the coefficient $a_{11} \neq 0$ (which is always possible).

(1)

Multiplying the first equation in turn by suitable numbers $-a_{21}/a_{11}, -a_{31}/a_{11}, \dots, -a_{m1}/a_{11}$ and adding the results to the corresponding equations of the system, we obtain an equivalent system in which the unknown $x_1$ no longer appears in any equation except the first:

(2)

Now, multiplying the second equation of system (2) by suitable numbers, assuming that $a'_{22} \neq 0$, and adding it to the equations below it, we eliminate the unknown $x_2$ from all equations starting from the third.

Continuing this process, after a finite number of steps we obtain:

(3)

If at least one of the numbers on the right-hand sides of the last equations of (3) is not equal to zero, then the corresponding equality is contradictory and system (1) is inconsistent. Conversely, for any consistent system these numbers are equal to zero. The number $r$ is nothing other than the rank of the matrix of system (1).

The transition from system (1) to system (3) is called the forward pass of the Gauss method, and finding the unknowns from (3) the backward pass (back substitution).

Remark: it is more convenient to carry out the transformations not on the equations themselves, but on the augmented matrix of system (1).
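
Since the equations of the example below are not reproduced in the text, here is a general sketch (on an illustrative system of our own choosing) of how the forward and backward passes can be programmed, working directly with the augmented matrix as the remark suggests; partial pivoting is added for numerical stability.

```python
import numpy as np

def gauss_solve(A, b):
    """Solve AX = b for a square nonsingular A by Gaussian elimination."""
    M = np.column_stack([A.astype(float), b.astype(float)])  # augmented matrix
    n = len(b)

    # Forward pass: reduce to upper-triangular (step) form
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))   # partial pivoting
        M[[k, p]] = M[[p, k]]
        for i in range(k + 1, n):
            M[i] -= (M[i, k] / M[k, k]) * M[k]

    # Backward pass: back substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

# Illustrative system (not the example from the text)
A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])

print(gauss_solve(A, b))                                       # [ 2.  3. -1.]
print(np.allclose(gauss_solve(A, b), np.linalg.solve(A, b)))   # True
```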

Example. Let's find a solution to the system


Let us write out the augmented matrix of the system:


Add the first row, multiplied by (-2), (-3), and (-2) respectively, to rows 2, 3, and 4:


Swap rows 2 and 3; then, in the resulting matrix, add row 2, multiplied by a suitable factor, to row 4:


Add row 3, multiplied by a suitable factor, to row 4:

It is obvious that the rank of the coefficient matrix equals the rank of the augmented matrix; therefore, the system is consistent. From the resulting system of equations

we find the solution by back substitution.

Example 2. Find a solution to the system:


It is obvious that the system is inconsistent, because the rank of the coefficient matrix is not equal to the rank of the augmented matrix.

Advantages of the Gauss method:

    Less labor-intensive than Cramer's method.

    Unambiguously establishes whether the system is consistent and allows one to find a solution.

    Makes it possible to determine the rank of any matrix.


To understand what a fundamental system of solutions is, let us now go through all the necessary work on an example; this will help you understand the essence of the question in more detail.

How do we find a fundamental system of solutions of a system of linear equations?

Let's take for example the following system of linear equations:

Let us find a solution of this system of linear equations. To begin with, we need to write out the coefficient matrix of the system.

Let us transform this matrix to triangular form. We rewrite the first row without changes, and all the elements below $a_{11}$ must be made zero. To put a zero in place of the element $a_{21}$, subtract the first row from the second and write the difference in the second row. To put a zero in place of the element $a_{31}$, subtract the first row from the third and write the difference in the third row. To put a zero in place of the element $a_{41}$, subtract the first row multiplied by 2 from the fourth and write the difference in the fourth row. To put a zero in place of the element $a_{51}$, subtract the first row multiplied by 2 from the fifth and write the difference in the fifth row.

We rewrite the first and second rows without changes, and all the elements below $a_{22}$ must be made zero. To put a zero in place of the element $a_{32}$, subtract the second row multiplied by 2 from the third and write the difference in the third row. To put a zero in place of the element $a_{42}$, subtract the second row multiplied by 2 from the fourth and write the difference in the fourth row. To put a zero in place of the element $a_{52}$, subtract the second row multiplied by 3 from the fifth and write the difference in the fifth row.

We see that the last three rows are identical, so if we subtract the third row from the fourth and from the fifth, they become zero.

Using this matrix, we write down a new system of equations.

We see that we have only three linearly independent equations and five unknowns, so the fundamental system of solutions will consist of two vectors. Therefore, we need to move the last two unknowns to the right-hand side.

Now we begin to express the unknowns on the left side through those on the right side. We start with the last equation: first we express $x_3$, then we substitute the result into the second equation and express $x_2$, and then into the first equation, where we express $x_1$. Thus, we have expressed all the unknowns on the left side through the unknowns on the right side.

Then, instead of $x_4$ and $x_5$, we can substitute any numbers and find $x_1$, $x_2$ and $x_3$. Every such quintuple of numbers is a solution of our original system of equations. To find the vectors included in the FSR, we substitute $x_4 = 1$, $x_5 = 0$ and find $x_1$, $x_2$ and $x_3$, and then, conversely, take $x_4 = 0$ and $x_5 = 1$.
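
The concrete system of this example is not written out above, so the following sympy sketch only illustrates the procedure just described on an assumed system with five unknowns and three independent equations: express $x_1$, $x_2$, $x_3$ through $x_4$ and $x_5$, then substitute $(x_4, x_5) = (1, 0)$ and $(0, 1)$ to obtain the two FSR vectors.

```python
import sympy as sp

x1, x2, x3, x4, x5 = sp.symbols('x1:6')

# Assumed illustrative homogeneous system (three independent equations, five unknowns)
eqs = [sp.Eq(x1 + x2 + x3 + x4 + x5, 0),
       sp.Eq(x2 - x3 + 2*x4, 0),
       sp.Eq(x3 + x4 - x5, 0)]

# Express x1, x2, x3 through the free unknowns x4, x5
sol = sp.solve(eqs, [x1, x2, x3])
print(sol)

# Substitute (x4, x5) = (1, 0) and then (0, 1) to obtain the two FSR vectors
for free_vals in ({x4: 1, x5: 0}, {x4: 0, x5: 1}):
    vec = [sol[v].subs(free_vals) for v in (x1, x2, x3)] + [free_vals[x4], free_vals[x5]]
    print(vec)
```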

Homogeneous system of linear equations over a field

DEFINITION. A fundamental system of solutions to a system of equations (1) is a non-empty linearly independent system of its solutions, the linear span of which coincides with the set of all solutions to system (1).

Note that a homogeneous system of linear equations that has only a zero solution does not have a fundamental system of solutions.

PROPOSITION 3.11. Any two fundamental systems of solutions of a homogeneous system of linear equations consist of the same number of solutions.

Proof. In fact, any two fundamental systems of solutions to the homogeneous system of equations (1) are equivalent and linearly independent. Therefore, by Proposition 1.12, their ranks are equal. Consequently, the number of solutions included in one fundamental system is equal to the number of solutions included in any other fundamental system of solutions.

If the main matrix A of the homogeneous system of equations (1) is the zero matrix, then any vector of the space of n-dimensional columns over the field is a solution of system (1); in this case, any basis of this space is a fundamental system of solutions. If the column rank of the matrix A is equal to n, then system (1) has only the zero solution; therefore, in this case, the system of equations (1) does not have a fundamental system of solutions.

THEOREM 3.12. If the rank r of the main matrix of a homogeneous system of linear equations (1) is less than the number of variables n, then system (1) has a fundamental system of solutions consisting of $n - r$ solutions.

Proof. If the rank of the main matrix A of the homogeneous system (1) is equal to zero or to n, then it was shown above that the theorem holds. Therefore, below we assume that $0 < r < n$; without loss of generality, we assume that the first r columns of the matrix A are linearly independent. In this case, the matrix A is row-equivalent to a reduced row echelon matrix, and system (1) is equivalent to the following reduced echelon system of equations:

It is easy to check that to any set of values of the free variables of system (2) there corresponds one and only one solution of system (2) and, therefore, of system (1). In particular, to the set of zero values there corresponds only the zero solution of system (2) and of system (1).

In system (2) we assign, in turn, to one of the free variables the value 1 and to the remaining free variables the value 0. As a result, we obtain $n - r$ solutions of the system of equations (2), which we write as the rows of the following matrix C:

The system of rows of this matrix is linearly independent. Indeed, for any scalars $c_1, c_2, \dots, c_{n-r}$, the equality $c_1C_1 + c_2C_2 + \dots + c_{n-r}C_{n-r} = 0$, where $C_1, \dots, C_{n-r}$ are the rows of the matrix C, implies, upon comparing the components in the positions of the free variables, the equalities $c_1 = c_2 = \dots = c_{n-r} = 0$.

Let us prove that the linear span of the system of rows of the matrix C coincides with the set of all solutions to system (1).

Let X be an arbitrary solution of system (1). Then the vector obtained from X by subtracting the linear combination of the rows of C whose coefficients are the values of the free variables of X is also a solution of system (1), and all of its free variables are equal to zero; hence it is the zero solution. Therefore X belongs to the linear span of the rows of the matrix C, which completes the proof.
