How to find a particular solution to a system of differential equations. Solving systems of differential equations by the matrix method. Systems of ordinary differential equations

1. Introduction
2. Systems of differential equations of the 1st order
3. Systems of linear differential equations of the 1st order
4. Systems of linear homogeneous differential equations with constant coefficients
5. Systems of inhomogeneous differential equations of the 1st order with constant coefficients

Laplace transform

6. Introduction
7. Properties of the Laplace transform
8. Applications of the Laplace transform

Introduction to integral equations

9. Introduction
10. Elements of the general theory of linear integral equations
11. The concept of iterative solution of Fredholm integral equations of the 2nd kind
12. The Volterra equation
13. Solving Volterra equations with a difference kernel using the Laplace transform


Systems of ordinary differential equations

Introduction

Systems of ordinary differential equations consist of several equations containing derivatives of unknown functions of one variable. In the general case such a system has the form

where $x_1(t), x_2(t), \dots, x_n(t)$ are the unknown functions, $t$ is the independent variable, $f_i$ are some given functions, and the index $i$ numbers the equations of the system. To solve such a system means to find all the functions $x_1(t), \dots, x_n(t)$ that satisfy it.

As an example, consider Newton's equation describing the motion of a body of mass $m$ under the action of a force $\vec{F}$:

$m\,\frac{d^2\vec{r}}{dt^2} = \vec{F},\qquad (1.2)$

where $\vec{r}$ is the vector drawn from the origin to the current position of the body. In a Cartesian coordinate system its components are the functions $x(t)$, $y(t)$, $z(t)$. Thus, equation (1.2) reduces to three second-order differential equations:

$m\,\frac{d^2x}{dt^2} = F_x,\qquad m\,\frac{d^2y}{dt^2} = F_y,\qquad m\,\frac{d^2z}{dt^2} = F_z.\qquad (1.3)$

To find the functions $x(t)$, $y(t)$, $z(t)$ at every moment of time, one obviously needs to know the starting position of the body and its velocity at the initial moment of time, i.e., a total of six initial conditions (which corresponds to a system of three second-order equations):

$x(t_0)=x_0,\quad y(t_0)=y_0,\quad z(t_0)=z_0,\quad \dot{x}(t_0)=v_{x0},\quad \dot{y}(t_0)=v_{y0},\quad \dot{z}(t_0)=v_{z0}.\qquad (1.4)$

Equations (1.3), together with the initial conditions (1.4), form a Cauchy problem which, as is clear from physical considerations, has a unique solution giving a specific trajectory of the body's motion, provided the force satisfies reasonable smoothness criteria.

It is important to note that this problem can be reduced to a system of six first-order equations by introducing new functions. Let us keep the notation $x(t)$, $y(t)$, $z(t)$ for the coordinates and introduce three new functions $v_x$, $v_y$, $v_z$, defined as follows:

$v_x = \frac{dx}{dt},\qquad v_y = \frac{dy}{dt},\qquad v_z = \frac{dz}{dt}.$

System (1.3) can now be rewritten as

$\frac{dx}{dt} = v_x,\quad \frac{dy}{dt} = v_y,\quad \frac{dz}{dt} = v_z,\quad m\,\frac{dv_x}{dt} = F_x,\quad m\,\frac{dv_y}{dt} = F_y,\quad m\,\frac{dv_z}{dt} = F_z.$

Thus, we have arrived at a system of six first-order differential equations for the functions $x$, $y$, $z$, $v_x$, $v_y$, $v_z$. The initial conditions for this system are

$x(t_0)=x_0,\quad y(t_0)=y_0,\quad z(t_0)=z_0,\quad v_x(t_0)=v_{x0},\quad v_y(t_0)=v_{y0},\quad v_z(t_0)=v_{z0}.$

The first three initial conditions give the initial coordinates of the body, and the last three give the projections of the initial velocity onto the coordinate axes.
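As an illustration of this reduction, here is a minimal numerical sketch using SciPy; the concrete force (constant gravity) and the sample initial data are illustrative assumptions, not values taken from the text.

```python
# Sketch: integrating the six first-order equations obtained from m*r'' = F.
# Assumptions: F = (0, 0, -m*g) (constant gravity) and arbitrary sample initial data.
import numpy as np
from scipy.integrate import solve_ivp

m, g = 1.0, 9.81

def rhs(t, state):
    # state = (x, y, z, vx, vy, vz); the first three equations are dx/dt = vx, ...,
    # the last three are m*dvx/dt = Fx, ...
    x, y, z, vx, vy, vz = state
    Fx, Fy, Fz = 0.0, 0.0, -m * g
    return [vx, vy, vz, Fx / m, Fy / m, Fz / m]

state0 = [0.0, 0.0, 0.0, 1.0, 0.0, 5.0]   # six initial conditions: position and velocity
sol = solve_ivp(rhs, (0.0, 1.0), state0)
print(sol.y[:3, -1])                       # position of the body at t = 1
```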

Example 1.1. Reduce a system of two differential equations of the 2nd order

to a system of four equations of the 1st order.

Solution. Let us introduce the following notation:

In this case, the original system will take the form

Two more equations follow from the definitions of the newly introduced functions:

Finally, we compose a system of first-order differential equations equivalent to the original system of second-order equations:

These examples illustrate a general situation: any system of differential equations can be reduced to a system of first-order equations. Thus, in what follows we may restrict ourselves to the study of systems of first-order differential equations.

Systems of 1st order differential equations

In general form, a system of $n$ first-order differential equations can be written as follows:

$\frac{dx_i}{dt} = f_i(t,\, x_1, x_2, \dots, x_n),\qquad i = 1, 2, \dots, n,\qquad (2.1)$

where $x_1(t), \dots, x_n(t)$ are the unknown functions of the independent variable $t$ and $f_i$ are some given functions. The general solution of system (2.1) contains $n$ arbitrary constants, i.e., has the form

$x_i = x_i(t,\, C_1, C_2, \dots, C_n),\qquad i = 1, 2, \dots, n.$

When real problems are described by systems of differential equations, a specific solution, or particular solution, of the system is found from the general solution by imposing initial conditions. An initial condition is written for each function; for a system of $n$ first-order equations they look like this:

$x_i(t_0) = x_{i0},\qquad i = 1, 2, \dots, n.\qquad (2.2)$

A solution defines a line in the space $(t, x_1, \dots, x_n)$, called an integral curve of system (2.1).

Let us formulate a theorem on the existence and uniqueness of a solution for systems of differential equations.

Cauchy's theorem. The system of first-order differential equations (2.1), together with the initial conditions (2.2), has a unique solution (i.e., a unique set of constants is determined from the general solution) if the functions $f_i$ and their partial derivatives with respect to all arguments are bounded in a neighborhood of the initial data.

Naturally, we are speaking of a solution in some range of the variables.

The solution of a system of differential equations can be viewed as a vector function $X$, whose components are the functions $x_i(t)$, and the set of functions $f_i$ as a vector function $F$, i.e.

$X = \left(\begin{array}{c} x_1 \\ x_2 \\ \ldots \\ x_n \end{array}\right),\qquad F = \left(\begin{array}{c} f_1 \\ f_2 \\ \ldots \\ f_n \end{array}\right).$

Using this notation, we can briefly rewrite the original system (2.1) and the initial conditions (2.2) in the so-called vector form:

$\frac{dX}{dt} = F(t, X),\qquad X(t_0) = X_0.$

One of the methods for solving a system of differential equations is to reduce the system to a single equation of higher order. From equations (2.1), together with the equations obtained by differentiating them, one can obtain a single $n$-th order equation for any one of the unknown functions. Integrating it, one finds that unknown function; the remaining unknown functions are obtained from the equations of the original system and from the intermediate equations produced by differentiation.
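A minimal SymPy sketch of this elimination technique, applied to a hypothetical system x' = x + y, y' = x - y (chosen only for illustration; it is not one of the examples below):

```python
# Eliminate y to get one 2nd-order equation for x, then recover y from x.
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x'), sp.Function('y')

eq1 = sp.Eq(x(t).diff(t), x(t) + y(t))        # x' = x + y
eq2 = sp.Eq(y(t).diff(t), x(t) - y(t))        # y' = x - y

y_expr = sp.solve(eq1, y(t))[0]               # y = x' - x  (from the first equation)
second_order = eq2.subs(y(t), y_expr).doit()  # substitute into the second equation
second_order = sp.Eq(sp.expand(second_order.lhs - second_order.rhs), 0)
print(second_order)                           # x'' - 2*x = 0

x_sol = sp.dsolve(second_order, x(t))         # general solution for x(t)
y_sol = sp.simplify(y_expr.subs(x(t), x_sol.rhs).doit())
print(x_sol)
print(y_sol)                                  # y recovered without extra integration
```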

Example 2.1. Solve a system of two first-order differential equations

Solution. Let us differentiate the second equation:

We express the derivative in terms of the first equation

From the second equation

We have obtained a linear homogeneous differential equation of the 2nd order with constant coefficients. Its characteristic equation

whence we obtain the roots. The general solution of this differential equation is then

We have found one of the unknown functions of the original system of equations. Using the expression obtained earlier, one can also find the second one:

Let us solve the Cauchy problem with the initial conditions

Let us substitute them into the general solution of the system

and find the constants of integration:

Thus, the solution to the Cauchy problem will be the functions

The graphs of these functions are shown in Figure 1.

Fig. 1. A particular solution of the system in Example 2.1 on the interval

Example 2.2. Solve the system

reducing it to one equation of the 2nd order.

Solution. Differentiating the first equation, we obtain

Using the second equation, we arrive at a second-order equation for x:

It is not difficult to obtain its solution, and then to find the second function by substituting the result into the first equation. As a result, we have the following solution of the system:

Comment. We found the second function from the first equation, without integrating. At first glance it seems that the same solution could be obtained by substituting the already known function into the second equation of the original system

and integrating it. If the function is found in this way, a third, superfluous constant appears in the solution:

However, as is easy to check, this function satisfies the original system not for an arbitrary value of that constant, but only for a particular one. Thus, the second function should be determined without integration.

Let us add the squares of the two functions:

The resulting equation gives a family of concentric circles centered at the origin of the plane (see Figure 2). The resulting parametric curves are called phase curves, and the plane in which they lie is called the phase plane.

Substituting any initial conditions into the general solution, one obtains definite values of the integration constants, which means a circle of a definite radius in the phase plane. Thus, a specific phase curve corresponds to each set of initial conditions. Take, for example, some initial conditions; their substitution into the general solution gives the values of the constants and hence a particular solution. As the parameter varies over one period, the phase curve is traversed clockwise: starting from the point on the axis given by the initial condition, the moving point passes successively through points on the coordinate axes and returns to the starting point.
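The formulas of this example are not reproduced above; assuming the system was x' = y, y' = -x (which is consistent with circular phase curves traversed clockwise), the phase plane can be sketched as follows:

```python
# Phase curves x^2 + y^2 = const for the assumed system x' = y, y' = -x.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, 2.0 * np.pi, 200)
for r in (0.5, 1.0, 1.5):        # each radius corresponds to one set of initial conditions
    x = r * np.cos(t)            # x(t)
    y = -r * np.sin(t)           # y(t); the circle is traversed clockwise
    plt.plot(x, y)
plt.gca().set_aspect('equal')
plt.xlabel('x'); plt.ylabel('y'); plt.title('Phase curves: concentric circles')
plt.show()
```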

Matrix notation of a system of ordinary differential equations (SODE) with constant coefficients

A linear homogeneous SODE with constant coefficients has the form

$\left\{\begin{array}{l} \frac{dy_1}{dx} = a_{11} y_1 + a_{12} y_2 + \ldots + a_{1n} y_n \\ \frac{dy_2}{dx} = a_{21} y_1 + a_{22} y_2 + \ldots + a_{2n} y_n \\ \ldots \\ \frac{dy_n}{dx} = a_{n1} y_1 + a_{n2} y_2 + \ldots + a_{nn} y_n \end{array}\right.$

where $y_1(x), y_2(x), \ldots, y_n(x)$ are the required functions of the independent variable $x$, and the coefficients $a_{jk}$, $1 \le j, k \le n$, are given real numbers. We write it in matrix notation using:

1. the matrix of the required functions $Y = \left(\begin{array}{c} y_1(x) \\ y_2(x) \\ \ldots \\ y_n(x) \end{array}\right)$;
2. the matrix of derivatives of the solution $\frac{dY}{dx} = \left(\begin{array}{c} \frac{dy_1}{dx} \\ \frac{dy_2}{dx} \\ \ldots \\ \frac{dy_n}{dx} \end{array}\right)$;
3. the matrix of SODE coefficients $A = \left(\begin{array}{cccc} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \end{array}\right)$.

Now, based on the rule of matrix multiplication, this SODE can be written as the matrix equation $\frac{dY}{dx} = A \cdot Y$.
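For a constant coefficient matrix this matrix equation has the closed-form solution Y(x) = e^(Ax)·Y(0), which can be evaluated numerically; the sketch below uses the matrix of the worked problem further down, while the initial vector is an arbitrary illustrative choice.

```python
# Evaluate Y(x) = expm(A*x) @ Y0 for dY/dx = A*Y.
import numpy as np
from scipy.linalg import expm

A = np.array([[5.0, 4.0],
              [4.0, 5.0]])
Y0 = np.array([1.0, 0.0])     # illustrative initial values y1(0), y2(0)

x = 0.1
print(expm(A * x) @ Y0)       # y1(x), y2(x)
```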

General method for solving SODE with constant coefficients

Let there be a matrix of some numbers $\alpha = \left(\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \ldots \\ \alpha_n \end{array}\right)$.

The solution of the SODE is sought in the following form: $y_1 = \alpha_1 e^{kx}$, $y_2 = \alpha_2 e^{kx}$, ..., $y_n = \alpha_n e^{kx}$. In matrix form: $Y = \left(\begin{array}{c} y_1 \\ y_2 \\ \ldots \\ y_n \end{array}\right) = e^{kx} \cdot \left(\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \ldots \\ \alpha_n \end{array}\right)$.

From here we get $\frac{dY}{dx} = k\, e^{kx}\, \alpha$.

The matrix equation of this SODE can now be given the form $k\, e^{kx}\, \alpha = A\, e^{kx}\, \alpha$.

Cancelling the nonzero factor $e^{kx}$, the resulting equation can be represented as follows: $A\, \alpha = k\, \alpha$.

The last equality shows that the vector $\alpha$ is transformed by the matrix $A$ into the parallel vector $k \cdot \alpha$. This means that the vector $\alpha$ is an eigenvector of the matrix $A$ corresponding to the eigenvalue $k$.

The number $k$ can be determined from the equation $\left|\begin{array}{cccc} a_{11}-k & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22}-k & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn}-k \end{array}\right| = 0$.

This equation is called characteristic.

Let all the roots $k_1, k_2, \ldots, k_n$ of the characteristic equation be distinct. For each value $k_i$, from the system $\left(\begin{array}{cccc} a_{11}-k_i & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22}-k_i & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn}-k_i \end{array}\right) \cdot \left(\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \ldots \\ \alpha_n \end{array}\right) = 0$ one can determine a column of values $\left(\begin{array}{c} \alpha_1^{(i)} \\ \alpha_2^{(i)} \\ \ldots \\ \alpha_n^{(i)} \end{array}\right)$.

One of the values in this column is chosen arbitrarily.

Finally, the solution of this system in matrix form is written as follows:

$\left(\begin{array}{c} y_1 \\ y_2 \\ \ldots \\ y_n \end{array}\right) = \left(\begin{array}{cccc} \alpha_1^{(1)} & \alpha_1^{(2)} & \ldots & \alpha_1^{(n)} \\ \alpha_2^{(1)} & \alpha_2^{(2)} & \ldots & \alpha_2^{(n)} \\ \ldots & \ldots & \ldots & \ldots \\ \alpha_n^{(1)} & \alpha_n^{(2)} & \ldots & \alpha_n^{(n)} \end{array}\right) \cdot \left(\begin{array}{c} C_1 \cdot e^{k_1 x} \\ C_2 \cdot e^{k_2 x} \\ \ldots \\ C_n \cdot e^{k_n x} \end{array}\right),$

where $C_i$ are arbitrary constants.
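In numerical form the same recipe reads: compute the eigenvalues and eigenvectors of A, then assemble Y(x) as the eigenvector matrix times the column of C_i·e^(k_i·x). A small sketch with an arbitrary example matrix:

```python
# General solution from eigenvalues/eigenvectors (distinct eigenvalues assumed).
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # arbitrary example matrix
k, alpha = np.linalg.eig(A)           # eigenvalues k_i, eigenvectors as columns alpha[:, i]

def Y(x, C):
    # Y(x) = sum_i C_i * alpha^(i) * exp(k_i * x)
    return alpha @ (C * np.exp(k * x))

C = np.array([1.0, -0.5])             # arbitrary constants C_1, C_2
x = 0.3
h = 1e-6
print((Y(x + h, C) - Y(x, C)) / h)    # finite-difference dY/dx ...
print(A @ Y(x, C))                    # ... matches A @ Y, so dY/dx = A*Y holds
```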

Task

Solve the system of DEs $\left\{\begin{array}{l} \frac{dy_1}{dx} = 5 y_1 + 4 y_2 \\ \frac{dy_2}{dx} = 4 y_1 + 5 y_2 \end{array}\right.$.

We write the matrix of the system: $A = \left(\begin{array}{cc} 5 & 4 \\ 4 & 5 \end{array}\right)$.

In matrix form this SODE is written as follows: $\left(\begin{array}{c} \frac{dy_1}{dx} \\ \frac{dy_2}{dx} \end{array}\right) = \left(\begin{array}{cc} 5 & 4 \\ 4 & 5 \end{array}\right) \cdot \left(\begin{array}{c} y_1 \\ y_2 \end{array}\right)$.

We get the characteristic equation:

$\left|\begin{array}{cc} 5-k & 4 \\ 4 & 5-k \end{array}\right| = 0$, that is, $k^2 - 10k + 9 = 0$.

Roots of the characteristic equation: $k_1 = 1$, $k_2 = 9$.

We build the system for calculating $\left(\begin{array}{c} \alpha_1^{(1)} \\ \alpha_2^{(1)} \end{array}\right)$ for $k_1 = 1$:

\[\left(\begin{array}{cc} 5-k_1 & 4 \\ 4 & 5-k_1 \end{array}\right) \cdot \left(\begin{array}{c} \alpha_1^{(1)} \\ \alpha_2^{(1)} \end{array}\right) = 0,\]

that is, $(5-1) \cdot \alpha_1^{(1)} + 4 \cdot \alpha_2^{(1)} = 0$, $4 \cdot \alpha_1^{(1)} + (5-1) \cdot \alpha_2^{(1)} = 0$.

Putting $\alpha_1^{(1)} = 1$, we get $\alpha_2^{(1)} = -1$.

We build the system for calculating $\left(\begin{array}{c} \alpha_1^{(2)} \\ \alpha_2^{(2)} \end{array}\right)$ for $k_2 = 9$:

\[\left(\begin{array}{cc} 5-k_2 & 4 \\ 4 & 5-k_2 \end{array}\right) \cdot \left(\begin{array}{c} \alpha_1^{(2)} \\ \alpha_2^{(2)} \end{array}\right) = 0,\]

that is, $(5-9) \cdot \alpha_1^{(2)} + 4 \cdot \alpha_2^{(2)} = 0$, $4 \cdot \alpha_1^{(2)} + (5-9) \cdot \alpha_2^{(2)} = 0$.

Putting $\alpha_1^{(2)} = 1$, we get $\alpha_2^{(2)} = 1$.

We obtain the solution of the SODE in matrix form:

\[\left(\begin{array}{c} y_1 \\ y_2 \end{array}\right) = \left(\begin{array}{cc} 1 & 1 \\ -1 & 1 \end{array}\right) \cdot \left(\begin{array}{c} C_1 \cdot e^{1 \cdot x} \\ C_2 \cdot e^{9 \cdot x} \end{array}\right).\]

In the usual form, the solution of the SODE is $\left\{\begin{array}{l} y_1 = C_1 \cdot e^{x} + C_2 \cdot e^{9x} \\ y_2 = -C_1 \cdot e^{x} + C_2 \cdot e^{9x} \end{array}\right.$.
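A quick check of this answer by direct substitution with SymPy (a verification added here, not part of the original task):

```python
# Substitute y1, y2 into both equations; each residual should simplify to 0.
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')
y1 = C1 * sp.exp(x) + C2 * sp.exp(9 * x)
y2 = -C1 * sp.exp(x) + C2 * sp.exp(9 * x)

print(sp.simplify(sp.diff(y1, x) - (5 * y1 + 4 * y2)))   # 0
print(sp.simplify(sp.diff(y2, x) - (4 * y1 + 5 * y2)))   # 0
```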

A system of this kind is called a normal system of differential equations. For a normal system of differential equations, an existence and uniqueness theorem can be formulated in the same way as for a single differential equation.

Theorem. If the functions on the right-hand sides are defined and continuous on an open set, and the corresponding partial derivatives are also continuous on it, then system (1) has a solution (2),

and in the presence of the initial conditions (3)

this solution is unique.

This system can be represented in the form:

Systems of linear differential equations

Definition. A system of differential equations is called linear if it is linear with respect to all the unknown functions and their derivatives.

(5)

The general form of the system of differential equations:

If the initial conditions (7) are given,

then the solution is unique, provided that the vector function on the right-hand side is continuous and the coefficients of the matrix are also continuous functions.

We introduce a linear operator; then (6) can be rewritten as:

If the right-hand side is zero, then the operator equation (8) is called homogeneous and has the form:

Since the operator is linear, the following properties are fulfilled for it: a constant factor can be taken outside the operator, and the operator applied to a sum equals the sum of the operators applied to each term; consequently, a sum of solutions of (9) is again a solution of equation (9).

Consequence. Any linear combination of solutions is a solution of (9).

If linearly independent solutions of (9) are given, then a linear combination of the form (10) is identically zero only under the condition that all its coefficients vanish. This means that the determinant composed of the solutions (10)

is not identically zero. This determinant is called the Wronskian (Wronsky determinant) of the system of vectors.

Theorem 1. If the Wronskian of a linear homogeneous system (9) with coefficients continuous on a segment is zero at least at one point, then the solutions are linearly dependent on this segment and, therefore, the Wronskian is equal to zero on the entire segment.

Proof. Since the coefficients are continuous, system (9) satisfies the conditions of the existence and uniqueness theorem; therefore, an initial condition determines a unique solution of system (9). The Wronskian at the chosen point is equal to zero, so there exists a nontrivial set of constants for which the corresponding linear combination of solutions vanishes at that point. This linear combination satisfies homogeneous initial conditions and therefore coincides with the trivial solution; that is, the solutions are linearly dependent and the Wronskian is identically zero.

Definition. A set of solutions of system (9) is called a fundamental system of solutions if the Wronskian does not vanish at any point.

Definition. If for the homogeneous system (9) the initial conditions are chosen so that each solution takes unit initial data in its own component and zero in the others, then the system of solutions is called a normal fundamental system of solutions.

Comment. If a fundamental system or a normal fundamental system of solutions is known, then a linear combination of its elements with arbitrary constants is the general solution of (9).

Theorem 2. A linear combination of n linearly independent solutions of a homogeneous system (9) with coefficients continuous on a segment is the general solution of (9) on that segment.

Proof. Since the coefficients are continuous on the segment, the system satisfies the conditions of the existence and uniqueness theorem. Therefore, to prove the theorem it suffices to show that, by choosing the constants, one can satisfy an arbitrarily chosen initial condition (7), that is, the corresponding vector equation. Since the solutions form a fundamental system, this algebraic system for the constants is solvable, and because the solutions are linearly independent, the constants are determined uniquely.

Theorem 3. If one function is a solution of system (8) and another is a solution of system (9), then their sum is also a solution of (8).

Proof. This follows from the properties of the linear operator:

Theorem 4. The general solution of (8) on a segment with coefficients and right-hand sides continuous on this segment is equal to the sum of the general solution of the corresponding homogeneous system (9) and a particular solution of the inhomogeneous system (8).

Proof. Since the conditions of the existence and uniqueness theorem are satisfied, it remains to prove that, by an appropriate choice of constants, this sum satisfies an arbitrarily given initial condition (7), that is, equation (11).

For system (11) the required values of the constants can always be found, since the solutions involved form a fundamental system.

Cauchy problem for a first-order differential equation

Formulation of the problem. Recall that the solution to an ordinary differential equation of the first order

y'(t) = f(t, y(t))     (5.1)

is called a differentiable function y(t) which, when substituted into equation (5.1), turns it into an identity. The graph of a solution of a differential equation is called an integral curve. The process of finding solutions of a differential equation is usually called integration of this equation.

Based on the geometric meaning of the derivative y', we note that equation (5.1) specifies, at each point (t, y) of the plane of variables t, y, the value f(t, y) of the tangent of the angle α (with the 0t axis) of the tangent line to the graph of the solution passing through this point. The quantity k = tan α = f(t, y) will henceforth be called the slope (Fig. 5.1). If now at each point (t, y) we indicate, by means of a small vector, the direction of the tangent determined by the value f(t, y), we obtain the so-called field of directions (Fig. 5.2, a). Thus, geometrically, the problem of integrating differential equations consists in finding integral curves that at each point have the given tangent direction (Fig. 5.2, b). In order to select one specific solution from the family of solutions of differential equation (5.1), an initial condition is specified:

y(t_0) = y_0     (5.2)

Here t_0 is some fixed value of the argument t, and y_0 is a quantity called the initial value. The geometric interpretation of the initial condition consists in choosing, from the family of integral curves, the curve that passes through the fixed point (t_0, y_0).

The problem of finding, for t > t_0, a solution y(t) of the differential equation (5.1) that satisfies the initial condition (5.2) will be called the Cauchy problem. In some cases the behavior of the solution for all t > t_0 is of interest, but more often one restricts attention to determining the solution on a finite segment.
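A short sketch of this direction-field picture (the figures themselves are not reproduced here; the right-hand side f is an arbitrary illustrative choice):

```python
# Draw short unit segments with slope f(t, y) on a grid of points (t, y).
import numpy as np
import matplotlib.pyplot as plt

def f(t, y):
    return t - y                         # illustrative right-hand side

T, Y = np.meshgrid(np.linspace(0, 3, 16), np.linspace(-2, 2, 16))
S = f(T, Y)                              # slope k = f(t, y) at each grid point
L = np.sqrt(1 + S**2)                    # normalization so all segments have equal length
plt.quiver(T, Y, 1 / L, S / L, angles='xy')
plt.xlabel('t'); plt.ylabel('y'); plt.title("Direction field of y' = f(t, y)")
plt.show()
```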

Integration of normal systems

One of the main methods for integrating a normal DE system is the method of reducing the system to a single DE of a higher order. (The inverse problem - the transition from the DE to the system - is considered above with an example.) The technique of this method is based on the following considerations.

Let the normal system (6.1) be given. Let us differentiate any of its equations, for example the first one, with respect to x:

Substituting into this equality the values ​​of the derivatives from system (6.1), we obtain

or, in short,

Differentiating the obtained equality again and replacing the values ​​of the derivatives from system (6.1), we obtain

Continuing this process (differentiate - substitute - get), we find:

Let's put the resulting equations into a system:

From the first (n-1) equations of system (6.3) we express the functions y_2, y_3, ..., y_n in terms of x, the function y_1 and its derivatives y_1', y_1'', ..., y_1^(n-1). We get:

We substitute the found values of y_2, y_3, ..., y_n into the last equation of system (6.3) and obtain one DE of the n-th order with respect to the desired function y_1. Let its general solution be

Differentiating it (n-1) times and substituting the values of the derivatives into the equations of system (6.4), we find the functions y_2, y_3, ..., y_n.

Example 6.1. Solve the system of equations y' = 4y - 3z, z' = 2y - 3z.

Solution: Differentiate the first equation: y'' = 4y' - 3z'. Substitute z' = 2y - 3z into the resulting equality: y'' = 4y' - 3(2y - 3z), i.e., y'' - 4y' + 6y = 9z. We compose the system of equations:

y' = 4y - 3z,   y'' - 4y' + 6y = 9z.

From the first equation of the system we express z in terms of y and y':

z = (4y - y')/3.     (6.5)

Substitute this value of z into the second equation of the last system:

y'' - 4y' + 6y = 3·(4y - y'),

i.e., y'' - y' - 6y = 0. We have obtained one second-order LODE. We solve it: k^2 - k - 6 = 0, k_1 = -2, k_2 = 3, and the general solution of the equation is y = C_1·e^(-2x) + C_2·e^(3x). Now we find the function z: the values of y and y' are substituted into the expression of z in terms of y and y' (formula (6.5)). We get:

z = (1/3)·(4·(C_1·e^(-2x) + C_2·e^(3x)) - (-2C_1·e^(-2x) + 3C_2·e^(3x))) = 2C_1·e^(-2x) + (1/3)·C_2·e^(3x).

Thus, the general solution of this system of equations has the form y = C_1·e^(-2x) + C_2·e^(3x), z = 2C_1·e^(-2x) + (1/3)·C_2·e^(3x).
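A quick cross-check of this result with SymPy's dsolve (a verification added here, not part of the original example):

```python
# dsolve should return combinations of exp(-2x) and exp(3x), matching k1 = -2, k2 = 3.
import sympy as sp

x = sp.symbols('x')
y, z = sp.Function('y'), sp.Function('z')

sol = sp.dsolve([sp.Eq(y(x).diff(x), 4 * y(x) - 3 * z(x)),
                 sp.Eq(z(x).diff(x), 2 * y(x) - 3 * z(x))])
print(sol)
```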

Comment. The system of equations (6.1) can be solved by the method of integrable combinations. The essence of the method is that, by means of arithmetic operations, so-called integrable combinations are formed from the equations of a given system, that is, easily integrable equations with respect to a new unknown function.

Let us illustrate the technique of this method with the following example.

Example 6.2. Solve the system of equations:

Solution: Add the given equations term by term: x' + y' = x + y + 2, or (x + y)' = (x + y) + 2. Denote x + y = z. Then we have z' = z + 2. We solve the resulting equation:

We have obtained a so-called first integral of the system. From it, one of the sought functions can be expressed through the other, thereby decreasing the number of sought functions by one. For instance, expressing y through x, the first equation of the system takes the form

Finding x from it (for example, using the substitution x = uv), we also find y.

Comment. This system also "allows" us to form another integrable combination: putting x - y = p, we obtain a simple equation for p. Having the first two integrals of the system, it is easy to find (by adding and subtracting the first integrals) both sought functions.
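The equations of this example are not preserved above; assuming the system was x' = y + 1, y' = x + 1 (consistent with x' + y' = x + y + 2 and (x - y)' = -(x - y)), both integrable combinations can be checked with SymPy:

```python
# Verify the solution built from the two first integrals x + y + 2 = C1*e^t, x - y = C2*e^(-t).
import sympy as sp

t, C1, C2 = sp.symbols('t C1 C2')
x = (C1 * sp.exp(t) + C2 * sp.exp(-t)) / 2 - 1
y = (C1 * sp.exp(t) - C2 * sp.exp(-t)) / 2 - 1

print(sp.simplify(sp.diff(x, t) - (y + 1)))   # 0: first equation holds
print(sp.simplify(sp.diff(y, t) - (x + 1)))   # 0: second equation holds
print(sp.simplify(x + y + 2))                 # C1*exp(t):  first integrable combination
print(sp.simplify(x - y))                     # C2*exp(-t): second integrable combination
```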

    Linear operator and its properties. Linear dependence and independence of vectors. The Wronskian for a system of linear differential equations.

Linear differential operator and its properties. The set of functions having at least n derivatives on the interval (a, b) forms a linear space. Consider the operator L_n(y), which maps a function y(x) having k derivatives into a function having k - n derivatives:

Using the operator L_n(y), the inhomogeneous equation (20) can be written as follows:

L_n(y) = f(x);

the homogeneous equation (21) takes the form

L_n(y) = 0.

Theorem 14.5.2. The differential operator L_n(y) is a linear operator. The proof follows directly from the properties of derivatives: 1. If C = const, then L_n(C·y) = C·L_n(y); 2. L_n(y_1 + y_2) = L_n(y_1) + L_n(y_2). Our next steps: first to study how the general solution of the linear homogeneous equation (25) and of the inhomogeneous equation (24) is structured, and then to learn how to solve these equations. Let us start with the concepts of linear dependence and independence of functions on an interval and define the most important object in the theory of linear equations and systems, the Wronskian.

The Wronskian. Linear dependence and independence of a system of functions. Def. 14.5.3.1. A system of functions y_1(x), y_2(x), …, y_n(x) is called linearly dependent on the interval (a, b) if there is a set of constant coefficients, not all equal to zero simultaneously, such that the linear combination of these functions is identically zero on (a, b). If such an identity is possible only when all the coefficients are zero, the system of functions y_1(x), y_2(x), …, y_n(x) is called linearly independent on the interval (a, b). In other words, the functions y_1(x), y_2(x), …, y_n(x) are linearly dependent on (a, b) if some nontrivial linear combination of them is identically zero on (a, b), and linearly independent on (a, b) if only their trivial linear combination is identically zero on (a, b). Examples: 1. The functions 1, x, x^2, x^3 are linearly independent on any interval (a, b): their linear combination is a polynomial of degree at most three and cannot have more than three roots on (a, b), so it vanishes identically only when all the coefficients are zero. Example 1 is easily generalized to the system of functions 1, x, x^2, …, x^n: their linear combination, a polynomial of degree at most n, cannot have more than n roots on (a, b). 3. The functions e^(k_1 x) and e^(k_2 x) are linearly independent on any interval (a, b) if k_1 ≠ k_2: indeed, a nontrivial relation between them could hold only at a single point. 4. The system of functions e^(k_1 x), e^(k_2 x), …, e^(k_n x) is also linearly independent if the numbers k_i (i = 1, 2, …, n) are pairwise distinct, but the direct proof of this fact is rather cumbersome. As the above examples show, in some cases the linear dependence or independence of functions is easy to prove, while in other cases the proof is more involved. A simple universal tool is therefore needed that answers the question of linear dependence of functions. Such a tool is the Wronskian.

Def. 14.5.3.2. The Wronsky determinant (Wronskian) of a system of n - 1 times differentiable functions y_1(x), y_2(x), …, y_n(x) is the determinant

$W(x) = \left|\begin{array}{cccc} y_1(x) & y_2(x) & \ldots & y_n(x) \\ y_1'(x) & y_2'(x) & \ldots & y_n'(x) \\ \ldots & \ldots & \ldots & \ldots \\ y_1^{(n-1)}(x) & y_2^{(n-1)}(x) & \ldots & y_n^{(n-1)}(x) \end{array}\right|. \qquad (26)$
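The Wronskian is easy to compute in practice; SymPy provides a ready-made helper, shown here on the polynomial example from the paragraph above:

```python
# Nonzero Wronskian => linear independence; identically zero for a dependent system.
import sympy as sp

x = sp.symbols('x')
print(sp.wronskian([1, x, x**2], x))      # 2  (1, x, x^2 are linearly independent)
print(sp.wronskian([x, 2*x, x**2], x))    # 0  (x and 2x are linearly dependent)
```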

14.5.3.3. The Wronskian theorem for a linearly dependent system of functions. If the system of functions y_1(x), y_2(x), …, y_n(x) is linearly dependent on the interval (a, b), then the Wronskian of this system is identically zero on this interval. Proof. If the functions y_1(x), y_2(x), …, y_n(x) are linearly dependent on the interval (a, b), then there exist numbers C_1, C_2, …, C_n, at least one of which is nonzero, such that

C_1·y_1(x) + C_2·y_2(x) + … + C_n·y_n(x) = 0 on (a, b).     (27)

Differentiating equality (27) with respect to x n - 1 times, we compose a system of equations, which we regard as a homogeneous linear system of algebraic equations in C_1, C_2, …, C_n. The determinant of this system is the Wronskian (26). The system has a nontrivial solution; therefore, at each point its determinant is equal to zero. So W(x) = 0 at every point, i.e., W(x) ≡ 0 on (a, b).

We devote this section to solving systems of differential equations of the simplest form dx/dt = a1·x + b1·y + c1, dy/dt = a2·x + b2·y + c2, in which a1, b1, c1, a2, b2, c2 are some real numbers. The most effective method for solving such systems of equations is the method of integration. We also consider the solution of an example on this topic.

The solution of the system of differential equations is a pair of functions x(t) and y(t) that turns both equations of the system into identities.

Consider the method of integration for the DE system dx/dt = a1·x + b1·y + c1, dy/dt = a2·x + b2·y + c2. Let us express x from the 2nd equation of the system in order to eliminate the unknown function x(t) from the 1st equation:

dy/dt = a2·x + b2·y + c2  ⇒  x = (1/a2)·(dy/dt - b2·y - c2)

Let us differentiate the 2nd equation with respect to t and solve the resulting equation for dx/dt:

d^2y/dt^2 = a2·dx/dt + b2·dy/dt  ⇒  dx/dt = (1/a2)·(d^2y/dt^2 - b2·dy/dt)

Now we substitute the result of the previous calculations into the 1st equation of the system:

dx/dt = a1·x + b1·y + c1  ⇒  (1/a2)·(d^2y/dt^2 - b2·dy/dt) = (a1/a2)·(dy/dt - b2·y - c2) + b1·y + c1  ⇔  d^2y/dt^2 - (a1 + b2)·dy/dt + (a1·b2 - a2·b1)·y = a2·c1 - a1·c2

Thus, we have eliminated the unknown function x(t) and obtained a second-order linear inhomogeneous DE with constant coefficients. Finding the solution y(t) of this equation and substituting it into the 2nd equation of the system, we find x(t). This completes the solution of the system of equations.
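A symbolic check of this reduction with SymPy: eliminating x(t) should reproduce the second-order equation displayed above.

```python
# After clearing the denominator a2, the residual of the 1st equation is exactly
# y'' - (a1 + b2)*y' + (a1*b2 - a2*b1)*y - (a2*c1 - a1*c2).
import sympy as sp

t = sp.symbols('t')
a1, b1, c1, a2, b2, c2 = sp.symbols('a1 b1 c1 a2 b2 c2')
y = sp.Function('y')(t)

x = (y.diff(t) - b2 * y - c2) / a2             # x expressed from the 2nd equation
residual = x.diff(t) - (a1 * x + b1 * y + c1)  # substitute into the 1st equation
print(sp.expand(residual * a2))
```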

Example 1

Find the solution of the system of differential equations dx/dt = x - 1, dy/dt = x + 2y - 3.

Solution

Let us start with the second equation of the system and resolve it with respect to x:

x = dy/dt - 2y + 3

Now we differentiate the 2nd equation of the system and then solve the result for dx/dt: d^2y/dt^2 = dx/dt + 2·dy/dt  ⇒  dx/dt = d^2y/dt^2 - 2·dy/dt

We can substitute the result obtained in the course of calculations into the 1st equation of the DE system:

dx/dt = x - 1  ⇒  d^2y/dt^2 - 2·dy/dt = dy/dt - 2y + 3 - 1  ⇔  d^2y/dt^2 - 3·dy/dt + 2y = 2

As a result of the transformations we have obtained a second-order linear inhomogeneous differential equation with constant coefficients, d^2y/dt^2 - 3·dy/dt + 2y = 2. If we find its general solution, we obtain the function y(t).

We can find the general solution y_0 of the corresponding LODE by calculating the roots of the characteristic equation k^2 - 3k + 2 = 0:

D = 3^2 - 4·2 = 1,  k_1 = (3 - 1)/2 = 1,  k_2 = (3 + 1)/2 = 2

The roots obtained are real and distinct. Consequently, the general solution of the LODE has the form y_0 = C_1·e^t + C_2·e^(2t).

Now we find a particular solution y~ of the linear inhomogeneous DE:

d^2y/dt^2 - 3·dy/dt + 2y = 2

The right-hand side of the equation is a polynomial of degree zero. This means that we look for a particular solution in the form y~ = A, where A is an undetermined coefficient.

We can determine the undetermined coefficient from the equality d^2(y~)/dt^2 - 3·d(y~)/dt + 2·y~ = 2:
d^2(A)/dt^2 - 3·d(A)/dt + 2A = 2  ⇒  2A = 2  ⇒  A = 1

Thus y~ = 1 and y(t) = y_0 + y~ = C_1·e^t + C_2·e^(2t) + 1. We have found one of the unknown functions.

Now we substitute the found function into the 2nd equation of the DE system and solve the new equation for x(t):
d(C_1·e^t + C_2·e^(2t) + 1)/dt = x + 2·(C_1·e^t + C_2·e^(2t) + 1) - 3  ⇒  C_1·e^t + 2C_2·e^(2t) = x + 2C_1·e^t + 2C_2·e^(2t) - 1  ⇒  x = -C_1·e^t + 1

So we have calculated the second unknown function: x(t) = -C_1·e^t + 1.

Answer: x(t) = -C_1·e^t + 1,  y(t) = C_1·e^t + C_2·e^(2t) + 1
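The same system can be handed directly to SymPy's dsolve as an independent check; it should reproduce the answer above up to the naming of the constants.

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x'), sp.Function('y')

sol = sp.dsolve([sp.Eq(x(t).diff(t), x(t) - 1),
                 sp.Eq(y(t).diff(t), x(t) + 2 * y(t) - 3)])
print(sol)   # expected: x(t) = 1 - C1*e^t, y(t) = C1*e^t + C2*e^(2t) + 1 (constants may be renamed)
```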


Systems of differential equations.

Introduction.

In many problems of mathematics, physics and technology, it is required to define several functions related to each other by several differential equations.

For this, it is necessary to have, generally speaking, the same number of equations. If each of these equations is differential, that is, has the form of a relation connecting the unknown functions and their derivatives, then one speaks of a system of differential equations.

1. Normal system of differential equations of the first order. Cauchy problem.

Definition. A system of differential equations is a set of equations containing several unknown functions and their derivatives, and each of the equations includes at least one derivative.

A system of differential equations is called linear if unknown functions and their derivatives are included in each of the equations only to the first degree.

A linear system is called normal if it is solved with respect to all the derivatives.

In a normal system, the right-hand sides of the equations do not contain derivatives of the sought functions.

A solution of a system of differential equations is a set of functions that turns each equation of the system into an identity. Conditions prescribing the values of these functions at a given value of the argument are called the initial conditions of the system of differential equations.

Initial conditions are often written as

The general solution (integral) of a system of differential equations is a set of n functions of the independent variable x and n arbitrary constants C1, C2, …, Cn:

y1 = φ1(x, C1, C2, …, Cn),
y2 = φ2(x, C1, C2, …, Cn),
..……………………..
yn = φn(x, C1, C2, …, Cn),

which satisfy all the equations of this system.

To obtain a particular solution of the system that satisfies given initial conditions, the arbitrary constants must be chosen so that the functions take the prescribed values at the initial point.

The Cauchy problem for a normal system of differential equations is written as follows

Existence and uniqueness theorem for the solution of the Cauchy problem.

For a normal system of differential equations (1), the Cauchy theorem of existence and uniqueness of a solution is formulated as follows:

Theorem. Let the right-hand sides of the equations of system (1), i.e., the functions f_i (i = 1, 2, …, n), be continuous in all variables in some region D and have continuous partial derivatives there. Then for any initial point belonging to the region D there exists a unique solution of system (1) satisfying the given initial conditions.

2. Solving a normal system by the elimination method.

To solve a normal system of differential equations, the method of elimination of unknowns or the Cauchy method is used.

Given a normal system

Differentiate the first equation of the system with respect to x:

Replacing the derivatives on the right-hand side by their expressions from the system of equations (1), we have

Differentiating the resulting equation and proceeding similarly to the previous one, we find

So we got the system

(2)

From the first n-1 equations we determine y2, y3, …, yn, expressing them in terms of x, the function y1 and its derivatives y1', y1'', …, y1^(n-1):

(3)

Substituting these expressions into the last of equations (2), we obtain an equation of n-th order for determining y1:

y1 = φ(x, C1, C2, …, Cn)     (5)

Differentiating the last expression n-1 times, we find the derivatives

as functions of x, C1, C2, …, Cn. Substituting these functions into equations (4), we determine y2, y3, …, yn.

So, we got the general solution of the system (1)

(6)

To find a particular solution of system (1) satisfying the given initial conditions,

it is necessary to find from equations (6) the corresponding values of the arbitrary constants C1, C2, …, Cn.

Example.

Find the general solution to the system of equations:


for the new unknown functions.

Conclusion.

Systems of differential equations arise in the study of processes that cannot be described by a single function. For example, finding the vector lines of a field requires solving a system of differential equations. Problems in the dynamics of curvilinear motion lead to a system of three differential equations in which the unknown functions are the projections of the moving point onto the coordinate axes and the independent variable is time. You will later learn that solving electrical engineering problems for two electrical circuits in electromagnetic coupling requires solving a system of two differential equations. The number of such examples can easily be increased.