Total differential. The geometric meaning of the total differential. Tangent plane and surface normal. Partial derivatives of higher orders

Definition. For the function f (x, y), the expression Δz = f (x + Δx, y + Δy) - f (x, y) is called the total increment of the function.

If the function f (x, y) has continuous partial derivatives, then

Then, applying Lagrange's mean value theorem, we obtain

Since the partial derivatives are continuous, we can write the equalities:

Definition. The resulting expression is called the total increment of the function f (x, y) at the point (x, y), where α1 and α2 are infinitesimal functions as Δx → 0 and Δy → 0, respectively.

Definition. The total differential of the function z = f (x, y) is the principal part of the increment Δz, linear with respect to Δx and Δy, at the point (x, y).

For a function of an arbitrary number of variables:

Example... Find the total differential of the function.

Example. Find the total differential of the function.
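The functions of the original examples are not reproduced here, so the following is a hedged numerical sketch with an assumed function z = x²y: it compares the total increment Δz with the total differential dz = z′x·Δx + z′y·Δy and shows that their difference is small compared with the step.

```python
# Assumed illustration function (not the document's own example): z = x**2 * y.
def f(x, y):
    return x ** 2 * y

def fx(x, y):          # partial derivative with respect to x, y held constant
    return 2 * x * y

def fy(x, y):          # partial derivative with respect to y, x held constant
    return x ** 2

x0, y0 = 2.0, 3.0
dx, dy = 1e-3, -2e-3

dz_full = f(x0 + dx, y0 + dy) - f(x0, y0)      # total increment
dz_lin = fx(x0, y0) * dx + fy(x0, y0) * dy     # total differential (linear part)

# The discrepancy is o(sqrt(dx**2 + dy**2)), i.e. much smaller than the step.
err = abs(dz_full - dz_lin)
```

The linear part 12·0.001 + 4·(-0.002) = 0.004 captures the increment up to a term quadratic in the step.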

The geometric meaning of the total differential.

Tangent plane and surface normal.

(Figure: the tangent plane and the normal to a surface.)

Let N and N 0 be points of a given surface. Draw the straight line NN 0. The plane passing through the point N 0 is called the tangent plane to the surface if the angle between the secant NN 0 and this plane tends to zero as the distance NN 0 tends to zero.

Definition. The normal to the surface at the point N 0 is the straight line passing through the point N 0 perpendicular to the tangent plane to this surface.

At any point the surface has either exactly one tangent plane or none at all.

If the surface is given by the equation z = f (x, y), where f (x, y) is a function differentiable at the point M 0 (x 0, y 0), the tangent plane at the point N 0 (x 0, y 0, f (x 0, y 0)) exists and has the equation:

z - z 0 = f′x (x 0, y 0)·(x - x 0) + f′y (x 0, y 0)·(y - y 0).

The equation of the normal to the surface at this point is:

(x - x 0) / f′x (x 0, y 0) = (y - y 0) / f′y (x 0, y 0) = (z - z 0) / (-1).

The geometric meaning of the total differential of a function of two variables f (x, y) at the point (x 0, y 0) is the increment of the applicate (the z-coordinate) of the tangent plane to the surface when passing from the point (x 0, y 0) to the point (x 0 + Δx, y 0 + Δy).

As you can see, the geometric meaning of the total differential of a function of two variables is a spatial analogue of the geometric meaning of the differential of a function of one variable.

Example. Find the equations of the tangent plane and the normal to the surface

at the point M (1, 1, 1).

Tangent plane equation:

Normal equation:
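The surface of the example above is not reproduced in the text, so as a hedged stand-in take z = f(x, y) = x·y, which also passes through M (1, 1, 1). The sketch builds the tangent plane z - z0 = f′x·(x - x0) + f′y·(y - y0) and the direction vector (f′x, f′y, -1) of the normal.

```python
# Assumed surface (not the document's own example): z = x * y, through M(1, 1, 1).
def f(x, y):
    return x * y

def fx(x, y):
    return y            # partial derivative with respect to x

def fy(x, y):
    return x            # partial derivative with respect to y

x0, y0 = 1.0, 1.0
z0 = f(x0, y0)
A, B = fx(x0, y0), fy(x0, y0)

# Tangent plane: z = z0 + A*(x - x0) + B*(y - y0)
def tangent_plane(x, y):
    return z0 + A * (x - x0) + B * (y - y0)

# Normal line direction vector: (A, B, -1)
normal_dir = (A, B, -1.0)
```

For this surface the tangent plane at M is z = x + y - 1 and the normal has direction (1, 1, -1).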

Partial derivatives of higher orders.

Let there be some set X in space. Each point of this set is determined by a set of numbers, the coordinates of that point. We say that a function of n variables is defined on the set X if to each point, according to some law, a single number z is assigned, i.e. z = f (x 1, x 2, …, x n).

Example: let x 1, x 2, x 3 be the length, width and depth of a pool. Then the surface area of the pool is a function of these three variables.

A function of n variables is called continuous at a point if the limit of the function at this point equals the value of the function at that point.

Definition. The partial derivative of the function with respect to one of the variables is the derivative of z with respect to that variable, calculated under the condition that all other variables are held constant.

Partial derivative.

Example

For a function of two variables, four second-order partial derivatives can be introduced:

(For example, ∂²z/∂x² is read: "d two z by d x twice".)

Theorem. Mixed derivatives, where they are continuous, do not depend on the order in which the derivatives are calculated. This holds for mixed derivatives of any order and for functions of any number of variables.
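The theorem can be checked numerically. Below is a hedged sketch with an assumed smooth function f(x, y) = sin(xy): the mixed derivatives f″xy and f″yx, approximated by central differences, agree wherever they are continuous.

```python
import math

# Assumed test function (not from the document): f(x, y) = sin(x*y).
def f(x, y):
    return math.sin(x * y)

def mixed(fun, x, y, order, h=1e-4):
    """Central-difference approximation of the mixed second derivative."""
    if order == "xy":
        # first differentiate in y, then in x
        g = lambda x_: (fun(x_, y + h) - fun(x_, y - h)) / (2 * h)
        return (g(x + h) - g(x - h)) / (2 * h)
    else:
        # first differentiate in x, then in y
        g = lambda y_: (fun(x + h, y_) - fun(x - h, y_)) / (2 * h)
        return (g(y + h) - g(y - h)) / (2 * h)

d_xy = mixed(f, 0.7, 1.3, "xy")
d_yx = mixed(f, 0.7, 1.3, "yx")
```

For f = sin(xy) the exact common value is cos(xy) - xy·sin(xy), and both numerical estimates match it closely.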

If the function f (x, y) is defined in some domain D, then its partial derivatives will also be defined in the same domain or part of it.

We will call these derivatives partial derivatives of the first order.

The derivatives of these functions will be partial derivatives of the second order.

Continuing to differentiate the obtained equalities, we obtain partial derivatives of higher orders.

Definition. Partial derivatives taken with respect to different variables, such as f″xy and f″yx, are called mixed derivatives.

Theorem. If the function f (x, y) and its partial derivatives f′x, f′y, f″xy, f″yx are defined and continuous at the point M (x, y) and in its neighbourhood, then the relation f″xy = f″yx holds.

Definition. If in some neighbourhood of the point M 0 the inequality f (M) ≥ f (M 0) holds for every point M, then the point M 0 is called a minimum point.

Theorem (necessary conditions for an extremum). If the function f (x, y) has an extremum at the point (x 0, y 0), then at this point either both of its first-order partial derivatives are equal to zero, or at least one of them does not exist.

Such a point (x 0, y 0) is called a critical point.

Theorem (sufficient conditions for an extremum). Suppose that in a neighbourhood of the critical point (x 0, y 0) the function f (x, y) has continuous partial derivatives up to the second order inclusive. Consider the expression:

D (x 0, y 0) = f″xx (x 0, y 0) · f″yy (x 0, y 0) - [f″xy (x 0, y 0)]².

1) If D (x 0, y 0) > 0, then the function f (x, y) has an extremum at the point (x 0, y 0): a maximum if f″xx (x 0, y 0) < 0 and a minimum if f″xx (x 0, y 0) > 0.

2) If D (x 0, y 0) < 0, then the function f (x, y) has no extremum at the point (x 0, y 0).

3) If D = 0, no conclusion can be drawn about the presence of an extremum.

For a function of one variable y = f(x), the geometric meaning of the differential at the point x 0 is the increment of the ordinate of the tangent drawn to the graph of the function at the point with abscissa x 0 when passing to the point x 0 + Δx. Likewise, the differential of a function of two variables is the increment of the applicate of the tangent plane drawn to the surface given by the equation z = f(x, y) at the point M 0 (x 0, y 0) when passing to the point M(x 0 + Δx, y 0 + Δy). Let us give the definition of the tangent plane to a surface:

Definition. A plane passing through a point P 0 of a surface S is called the tangent plane at this point if the angle between this plane and a secant passing through P 0 and P (any point of the surface S) tends to zero as the point P tends to P 0 along the surface.

Let the surface S be given by the equation z = f(x, y). Then it can be shown that the surface has a tangent plane at the point P 0 (x 0, y 0, z 0) if and only if the function z = f(x, y) is differentiable at this point. In that case the tangent plane is given by the equation:

z - z 0 = f′x (x 0, y 0)(x - x 0) + f′y (x 0, y 0)(y - y 0).   (6)

§5. The directional derivative and the gradient of a function.

Partial derivatives of a function y = f(x 1, x 2, …, x n) with respect to the variables x 1, x 2, …, x n express the rate of change of the function in the directions of the coordinate axes. For example, ∂f/∂x 1 is the rate of change of the function with respect to x 1; that is, a point of the domain of the function is assumed to move only parallel to the axis Ox 1, while all other coordinates remain unchanged. However, the function may also change in some other direction, which does not coincide with the direction of any of the axes.

Consider a function of three variables: u= f(x, y, z).

Fix a point M 0 (x 0, y 0, z 0) and some directed straight line (axis) l passing through this point. Let M (x, y, z) be an arbitrary point of this line and M 0 M the distance from M 0 to M.

Δu = f (x, y, z) - f(x 0, y 0, z 0) is the increment of the function at the point M 0.

Form the ratio of the increment of the function to the length of the vector M 0 M: Δu / |M 0 M|.

Definition. The derivative of the function u = f (x, y, z) in the direction l at the point M 0 is the limit of the ratio of the increment of the function to the length of the vector M 0 M as the latter tends to 0 (or, equivalently, as M tends to M 0):

∂u/∂l = lim (M → M 0) Δu / |M 0 M|.   (1)

This derivative characterizes the rate of change of the function at the point M 0 in the direction l.

Let the axis l (the vector M 0 M) form angles α, β, γ with the axes OX, OY, OZ respectively.

Denote x - x 0 = Δx, y - y 0 = Δy, z - z 0 = Δz.

Then the vector M 0 M = (x - x 0, y - y 0, z - z 0) = (Δx, Δy, Δz), and its direction cosines are:

cos α = Δx / |M 0 M|;  cos β = Δy / |M 0 M|;  cos γ = Δz / |M 0 M|.

∂u/∂l = (∂u/∂x) cos α + (∂u/∂y) cos β + (∂u/∂z) cos γ.   (4)

Formula (4) is the formula for calculating the directional derivative.
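Formula (4) can be verified numerically. A hedged sketch with an assumed field u = xy + z²: compute the directional derivative from the gradient and the direction cosines, and compare it with a direct difference quotient along l.

```python
import math

# Assumed field (not from the document): u = x*y + z**2.
def u(x, y, z):
    return x * y + z ** 2

def grad_u(x, y, z):
    return (y, x, 2 * z)               # (u_x, u_y, u_z)

p = (1.0, 2.0, 0.5)                    # the point M0
l = (1.0, 2.0, 2.0)                    # direction vector (not yet unit length)
norm = math.sqrt(sum(c * c for c in l))
cos_dir = tuple(c / norm for c in l)   # direction cosines (cos a, cos b, cos g)

# formula (4): du/dl = u_x*cos(a) + u_y*cos(b) + u_z*cos(g)
dudl = sum(g * c for g, c in zip(grad_u(*p), cos_dir))

# direct check: (u(M) - u(M0)) / |M0 M| for a small step t along l
t = 1e-6
moved = tuple(pi + t * ci for pi, ci in zip(p, cos_dir))
dudl_fd = (u(*moved) - u(*p)) / t
```

Here grad u at M0 is (2, 1, 1) and the cosines are (1/3, 2/3, 2/3), so formula (4) gives 2.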

Consider the vector whose coordinates are the partial derivatives of the function u = f(x, y, z) at the point M 0:

grad u = (∂u/∂x, ∂u/∂y, ∂u/∂z) is the gradient of the function u = f(x, y, z) at the point M (x, y, z).

Gradient properties:


Conclusion: the length of the gradient of the function u = f(x, y, z) is the largest possible value of the directional derivative at the point M (x, y, z), and the direction of the vector grad u coincides with the direction, issuing from the point M, along which the function changes fastest. That is, the direction of the gradient grad u is the direction of steepest increase of the function.
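This conclusion can be illustrated numerically in the plane. A hedged sketch with an assumed field u = x² + 3y: scanning the directional derivative u_x·cos a + u_y·sin a over all angles, the maximum is attained in the direction of grad u and equals |grad u|.

```python
import math

# Assumed plane field (not from the document): u = x**2 + 3*y, at the point x = 1.
ux, uy = 2 * 1.0, 3.0               # grad u = (2x, 3) = (2, 3)
grad_len = math.hypot(ux, uy)       # |grad u| = sqrt(13)

# Scan the directional derivative over 3600 directions in [0, 2*pi).
best_val, best_angle = max(
    (ux * math.cos(a) + uy * math.sin(a), a)
    for a in (i * 2 * math.pi / 3600 for i in range(3600))
)

grad_angle = math.atan2(uy, ux)     # the direction of grad u itself
```

The largest sampled derivative approaches |grad u|, and the best angle approaches the gradient direction.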


Partial derivatives of higher orders: if the function f (x, y) is defined in some domain D, then its partial derivatives are also defined in the same domain or a part of it. We call these derivatives first order partial derivatives.

The derivatives of these functions will be second order partial derivatives.

Continuing to differentiate the obtained equalities, we obtain partial derivatives of higher orders. Definition. Partial derivatives taken with respect to different variables, such as f″xy and f″yx, are called mixed derivatives. Schwarz's theorem:

If the higher-order partial derivatives of a function of several variables are continuous, then mixed derivatives of the same order that differ only in the order of differentiation are equal to each other.

Here n is a symbolic exponent of the derivative: after the expression in parentheses is raised to this power, the symbolic powers are replaced by actual orders of differentiation.

14. Equation of the tangent plane and the normal to the surface

Let N and N 0 be points of a given surface. Draw the straight line NN 0. The plane passing through the point N 0 is called the tangent plane to the surface if the angle between the secant NN 0 and this plane tends to zero as the distance NN 0 tends to zero.

Definition. The normal to the surface at the point N 0 is the straight line passing through the point N 0 perpendicular to the tangent plane to this surface.

At any point the surface has either exactly one tangent plane or none at all.

If the surface is given by the equation z = f (x, y), where f (x, y) is a function differentiable at the point M 0 (x 0, y 0), the tangent plane at the point N 0 (x 0, y 0, f (x 0, y 0)) exists and has the equation:

z - z 0 = f′x (x 0, y 0)·(x - x 0) + f′y (x 0, y 0)·(y - y 0).

The equation of the normal to the surface at this point:

(x - x 0) / f′x (x 0, y 0) = (y - y 0) / f′y (x 0, y 0) = (z - z 0) / (-1).

The geometric meaning of the total differential of a function of two variables f (x, y) at the point (x 0, y 0) is the increment of the applicate (the z-coordinate) of the tangent plane to the surface when passing from the point (x 0, y 0) to the point (x 0 + Δx, y 0 + Δy).

As you can see, the geometric meaning of the total differential of a function of two variables is a spatial analogue of the geometric meaning of the differential of a function of one variable.

16. Scalar field and its characteristics. Equations of level lines, directional derivatives, gradient of a scalar field.

If a scalar quantity is assigned to each point in space, a scalar field arises (for example, a temperature field or an electric potential field). If Cartesian coordinates are introduced, the field is denoted u = u(x, y, z). A field is called plane if u depends on only two coordinates; central (spherical) if u depends only on the distance from a fixed point; cylindrical if u depends only on the distance from a fixed axis.



Level surfaces and lines: the properties of scalar fields can be visualized using level surfaces. These are surfaces in space on which the field takes a constant value; their equation is u(x, y, z) = C. In a plane scalar field, level lines are curves on which the field takes a constant value: u(x, y) = C. In special cases level lines can degenerate into points, and level surfaces into points and curves.

Directional derivative and scalar field gradient:

Let u be a scalar field and let e be a unit vector with coordinates (cos α, cos β, cos γ). The directional derivative characterizes the change of the field in this direction and is calculated by the formula ∂u/∂e = (∂u/∂x) cos α + (∂u/∂y) cos β + (∂u/∂z) cos γ. The directional derivative is the dot product of the vector e and the vector with coordinates (∂u/∂x, ∂u/∂y, ∂u/∂z), which is called the gradient of the function and is denoted grad u. Since ∂u/∂e = |grad u| cos φ, where φ is the angle between e and grad u, the vector grad u indicates the direction of fastest increase of the field, and its modulus equals the derivative in this direction. Since the components of the gradient are the partial derivatives, the following gradient properties are readily obtained:

17. Extrema of a function of several variables. Local extremum of a function of several variables, necessary and sufficient conditions for its existence. Greatest and smallest values of a function of several variables in a closed bounded domain.

Let the function z = ƒ (x; y) be defined in some domain D and let N (x0; y0) be a point of this domain.

A point (x0; y0) is called a maximum point of the function z = ƒ (x; y) if there exists a δ-neighbourhood of the point (x0; y0) such that for every point (x; y) of this neighbourhood different from (x0; y0) the inequality ƒ (x; y) < ƒ (x0; y0) holds. A minimum point of the function is defined analogously: for all points (x; y) of a δ-neighbourhood of (x0; y0) different from (x0; y0), the inequality ƒ (x; y) > ƒ (x0; y0) holds. The value of the function at a maximum (minimum) point is called a maximum (minimum) of the function. The maxima and minima of a function are called its extrema. Note that, by definition, an extremum point lies inside the domain of the function; maxima and minima have a local character: the value of the function at the point (x0; y0) is compared with its values at points sufficiently close to (x0; y0). In the domain D the function may have several extrema or none.



Necessary (1) and sufficient (2) conditions for existence:

(1) If the differentiable function z = ƒ (x; y) has an extremum at the point N (x0; y0), then its partial derivatives at this point are equal to zero: ƒ′x (x0; y0) = 0, ƒ′y (x0; y0) = 0. Remark: a function can have an extremum at points where at least one of its partial derivatives does not exist. A point at which the first-order partial derivatives of the function z = ƒ (x; y) are equal to zero, i.e. ƒ′x = 0, ƒ′y = 0, is called a stationary point of the function z.

Stationary points and points at which at least one partial derivative does not exist are called critical points.

(2) Suppose that at a stationary point (x0; y0) and in some neighbourhood of it the function ƒ (x; y) has continuous partial derivatives up to the second order inclusive. Compute at the point (x0; y0) the values A = ƒ″xx (x0; y0), B = ƒ″xy (x0; y0), C = ƒ″yy (x0; y0), and denote Δ = AC - B². Then:

1. if Δ > 0, then the function ƒ (x; y) has an extremum at the point (x0; y0): a maximum if A < 0, a minimum if A > 0;

2. if Δ < 0, then the function ƒ (x; y) has no extremum at the point (x0; y0);

3. in the case Δ = 0, an extremum at the point (x0; y0) may or may not exist; further investigation is required.

Definition
Let $f$ be a real function on the open set $E \subset \mathbb{R}^{n}$. We say that $f$ has a local maximum at the point $x_{0} \in E$ if there exists a neighbourhood $U$ of the point $x_{0}$ such that for all $x \in U$ the inequality $f\left(x\right) \leqslant f\left(x_{0}\right)$ holds.

The local maximum is called strict if the neighbourhood $U$ can be chosen so that for all $x \in U$ other than $x_{0}$, $f\left(x\right) < f\left(x_{0}\right)$.

Definition
Let $f$ be a real function on the open set $E \subset \mathbb{R}^{n}$. We say that $f$ has a local minimum at the point $x_{0} \in E$ if there exists a neighbourhood $U$ of the point $x_{0}$ such that for all $x \in U$ the inequality $f\left(x\right) \geqslant f\left(x_{0}\right)$ holds.

A local minimum is called strict if the neighbourhood $U$ can be chosen so that for all $x \in U$ other than $x_{0}$, $f\left(x\right) > f\left(x_{0}\right)$.

The term local extremum covers both local minima and local maxima.

Theorem (necessary condition for the extremum of a differentiable function)
Let $f$ be a real function on the open set $E \subset \mathbb{R}^{n}$. If the function $f$ has a local extremum at the point $x_{0} \in E$, then $$\text{d}f\left(x_{0}\right) = 0.$$ The vanishing of the differential is equivalent to all partial derivatives being equal to zero, i.e. $$\frac{\partial f}{\partial x_{i}}\left(x_{0}\right) = 0, \quad i = 1, \dots, n.$$

In the one-dimensional case this is Fermat's theorem. Denote $\phi\left(t\right) = f\left(x_{0} + th\right)$, where $h$ is an arbitrary vector. The function $\phi$ is defined for values of $t$ sufficiently small in absolute value. Moreover, it is differentiable, and ${\phi}'\left(t\right) = \text{d}f\left(x_{0} + th\right)h$.
Let $f$ have a local maximum at the point $x_{0}$. Then the function $\phi$ has a local maximum at $t = 0$ and, by Fermat's theorem, ${\phi}'\left(0\right) = 0$.
So we have obtained that $\text{d}f\left(x_{0}\right) = 0$, i.e. the differential of the function $f$ at the point $x_{0}$ vanishes on every vector $h$.

Definition
Points at which the differential is zero, i.e. those at which all partial derivatives are equal to zero, are called stationary. Critical points of the function $f$ are the points at which $f$ is either not differentiable or its differential is equal to zero. If a point is stationary, it does not yet follow that the function has an extremum at that point.

Example 1.
Let $f\left(x, y\right) = x^{3} + y^{3}$. Then $\frac{\partial f}{\partial x} = 3x^{2}$ and $\frac{\partial f}{\partial y} = 3y^{2}$, so $\left(0, 0\right)$ is a stationary point, but the function has no extremum at this point. Indeed, $f\left(0, 0\right) = 0$, but it is easy to see that in any neighbourhood of the point $\left(0, 0\right)$ the function takes both positive and negative values.

Example 2.
The function $f\left(x, y\right) = x^{2} - y^{2}$ has the origin as a stationary point, but it is clear that there is no extremum at this point.
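A quick numeric sanity check of both examples: the origin is stationary for $f_1 = x^3 + y^3$ and for $f_2 = x^2 - y^2$, yet neither has an extremum there, because each takes both signs in every neighbourhood of $(0, 0)$.

```python
# Example 1: f1 = x**3 + y**3; Example 2: f2 = x**2 - y**2.
f1 = lambda x, y: x ** 3 + y ** 3
f2 = lambda x, y: x ** 2 - y ** 2

eps = 1e-3  # points arbitrarily close to the origin

# f1 is positive along +x and negative along -x
signs_f1 = (f1(eps, 0) > 0, f1(-eps, 0) < 0)
# f2 is positive along the x-axis and negative along the y-axis
signs_f2 = (f2(eps, 0) > 0, f2(0, eps) < 0)
```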

Theorem (sufficient condition for an extremum).
Let the function $f$ be twice continuously differentiable on the open set $E \subset \mathbb{R}^{n}$. Let $x_{0} \in E$ be a stationary point and $$Q_{x_{0}}\left(h\right) \equiv \sum_{i=1}^n \sum_{j=1}^n \frac{\partial^{2} f}{\partial x_{i} \partial x_{j}}\left(x_{0}\right) h^{i} h^{j}.$$ Then

  1. if the quadratic form $Q_{x_{0}}$ is definite, then the function $f$ has a local extremum at the point $x_{0}$, namely a minimum if the form is positive definite and a maximum if it is negative definite;
  2. if the quadratic form $Q_{x_{0}}$ is indefinite, then the function $f$ has no extremum at the point $x_{0}$.

We use the expansion by Taylor's formula (12.7, p. 292). Taking into account that the first-order partial derivatives at the point $x_{0}$ vanish, we get $$f\left(x_{0}+h\right) - f\left(x_{0}\right) = \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \frac{\partial^{2} f}{\partial x_{i} \partial x_{j}}\left(x_{0}+\theta h\right) h^{i} h^{j},$$ where $0<\theta<1$. Denote $a_{ij} = \frac{\partial^{2} f}{\partial x_{i} \partial x_{j}}\left(x_{0}\right)$. By Schwarz's theorem (12.6, pp. 289-290), $a_{ij} = a_{ji}$. Denote $$\alpha_{ij}\left(h\right) = \frac{\partial^{2} f}{\partial x_{i} \partial x_{j}}\left(x_{0}+\theta h\right) - \frac{\partial^{2} f}{\partial x_{i} \partial x_{j}}\left(x_{0}\right).$$ By assumption, all second derivatives are continuous, and therefore $$\lim_{h \rightarrow 0} \alpha_{ij}\left(h\right) = 0. \quad \left(1\right)$$ Denote $$\epsilon\left(h\right) = \frac{1}{|h|^{2}} \sum_{i=1}^n \sum_{j=1}^n \alpha_{ij}\left(h\right) h^{i} h^{j}.$$ Then $$|\epsilon\left(h\right)| \leq \sum_{i=1}^n \sum_{j=1}^n |\alpha_{ij}\left(h\right)|,$$ and, by relation $\left(1\right)$, $\epsilon\left(h\right) \rightarrow 0$ as $h \rightarrow 0$. Finally we obtain $$f\left(x_{0}+h\right) - f\left(x_{0}\right) = \frac{1}{2}\left[Q_{x_{0}}\left(h\right) + |h|^{2}\epsilon\left(h\right)\right]. \quad \left(2\right)$$ Suppose that $Q_{x_{0}}$ is a positive definite form. By the lemma on positive definite quadratic forms (12.8.1, p. 295, Lemma 1), there exists a positive number $\lambda$ such that $Q_{x_{0}}\left(h\right) \geqslant \lambda|h|^{2}$ for every $h$.
Therefore $$f\left(x_{0}+h\right) - f\left(x_{0}\right) \geq \frac{1}{2}|h|^{2}\left(\lambda + \epsilon\left(h\right)\right).$$ Since $\lambda > 0$ and $\epsilon\left(h\right) \rightarrow 0$ as $h \rightarrow 0$, the right-hand side is positive for every vector $h$ of sufficiently small length.
So we conclude that in some neighbourhood of the point $x_{0}$ the inequality $f\left(x\right) > f\left(x_{0}\right)$ holds whenever $x \neq x_{0}$ (where we put $x = x_{0} + h$). This means that the function has a strict local minimum at the point $x_{0}$, and the first part of the theorem is proved.
Suppose now that $Q_{x_{0}}$ is an indefinite form. Then there exist vectors $h_{1}$, $h_{2}$ such that $Q_{x_{0}}\left(h_{1}\right) = \lambda_{1} > 0$ and $Q_{x_{0}}\left(h_{2}\right) = \lambda_{2} < 0$. In relation $\left(2\right)$ put $h = th_{1}$, $t > 0$. Then we get $$f\left(x_{0}+th_{1}\right) - f\left(x_{0}\right) = \frac{1}{2}\left[t^{2}\lambda_{1} + t^{2}|h_{1}|^{2}\epsilon\left(th_{1}\right)\right] = \frac{1}{2}t^{2}\left[\lambda_{1} + |h_{1}|^{2}\epsilon\left(th_{1}\right)\right].$$ For sufficiently small $t > 0$ the right-hand side is positive. This means that in any neighbourhood of the point $x_{0}$ the function $f$ takes values $f\left(x\right)$ greater than $f\left(x_{0}\right)$.
Similarly, using $h_{2}$, we obtain that in any neighbourhood of the point $x_{0}$ the function $f$ takes values less than $f\left(x_{0}\right)$. Together, this means that the function $f$ has no extremum at the point $x_{0}$.

Consider the special case of this theorem for a function $f\left(x, y\right)$ of two variables, defined in some neighbourhood of the point $\left(x_{0}, y_{0}\right)$ and having continuous partial derivatives of the first and second orders there. Suppose $\left(x_{0}, y_{0}\right)$ is a stationary point, and denote $$a_{11} = \frac{\partial^{2} f}{\partial x^{2}}\left(x_{0}, y_{0}\right), \quad a_{12} = \frac{\partial^{2} f}{\partial x \partial y}\left(x_{0}, y_{0}\right), \quad a_{22} = \frac{\partial^{2} f}{\partial y^{2}}\left(x_{0}, y_{0}\right).$$ Then the previous theorem takes the following form.

Theorem
Let $\Delta = a_{11} \cdot a_{22} - a_{12}^{2}$. Then:

  1. if $\Delta > 0$, then the function $f$ has a local extremum at the point $\left(x_{0}, y_{0}\right)$, namely a minimum if $a_{11} > 0$ and a maximum if $a_{11} < 0$;
  2. if $\Delta < 0$, then there is no extremum at the point $\left(x_{0}, y_{0}\right)$. As in the one-dimensional case, when $\Delta = 0$ an extremum may or may not exist.

Examples of problem solving

Algorithm for finding the extremum of a function of many variables:

  1. Find the stationary points;
  2. Compute the second-order differential at each stationary point;
  3. Using the sufficient condition for an extremum of a function of several variables, examine the second-order differential at each stationary point.
  1. Examine the function $f\left(x, y\right) = x^{3} + 8y^{3} - 6xy$ for extrema (this is the function whose partial derivatives are computed below; the expression printed in the original is inconsistent with them).
    Solution

    Find the first-order partial derivatives: $$\frac{\partial f}{\partial x} = 3x^{2} - 6y;$$ $$\frac{\partial f}{\partial y} = 24y^{2} - 6x.$$ Compose and solve the system: $$\begin{cases} \frac{\partial f}{\partial x} = 0 \\ \frac{\partial f}{\partial y} = 0 \end{cases} \Rightarrow \begin{cases} 3x^{2} - 6y = 0 \\ 24y^{2} - 6x = 0 \end{cases} \Rightarrow \begin{cases} x^{2} - 2y = 0 \\ 4y^{2} - x = 0 \end{cases}$$ From the second equation express $x = 4y^{2}$ and substitute into the first: $$\left(4y^{2}\right)^{2} - 2y = 0$$ $$16y^{4} - 2y = 0$$ $$8y^{4} - y = 0$$ $$y\left(8y^{3} - 1\right) = 0$$ As a result, two stationary points are obtained:
    1) $y = 0 \Rightarrow x = 0$, $M_{1} = \left(0, 0\right)$;
    2) $8y^{3} - 1 = 0 \Rightarrow y^{3} = \frac{1}{8} \Rightarrow y = \frac{1}{2} \Rightarrow x = 1$, $M_{2} = \left(1, \frac{1}{2}\right)$
    Let us check the sufficient condition for an extremum:
    $$\frac{\partial^{2} f}{\partial x^{2}} = 6x; \quad \frac{\partial^{2} f}{\partial x \partial y} = -6; \quad \frac{\partial^{2} f}{\partial y^{2}} = 48y$$
    1) For the point $M_{1} = \left(0, 0\right)$:
    $$A_{1} = \frac{\partial^{2} f}{\partial x^{2}}\left(0, 0\right) = 0; \quad B_{1} = \frac{\partial^{2} f}{\partial x \partial y}\left(0, 0\right) = -6; \quad C_{1} = \frac{\partial^{2} f}{\partial y^{2}}\left(0, 0\right) = 0;$$
    $A_{1} \cdot C_{1} - B_{1}^{2} = -36 < 0$, hence there is no extremum at the point $M_{1}$.
    2) For the point $M_{2} = \left(1, \frac{1}{2}\right)$:
    $$A_{2} = \frac{\partial^{2} f}{\partial x^{2}}\left(1, \frac{1}{2}\right) = 6; \quad B_{2} = \frac{\partial^{2} f}{\partial x \partial y}\left(1, \frac{1}{2}\right) = -6; \quad C_{2} = \frac{\partial^{2} f}{\partial y^{2}}\left(1, \frac{1}{2}\right) = 24;$$
    $A_{2} \cdot C_{2} - B_{2}^{2} = 108 > 0$, so there is an extremum at the point $M_{2}$, and since $A_{2} > 0$ it is a minimum.
    Answer: the point $M_{2}\left(1, \frac{1}{2}\right)$ is a minimum point of the function $f$.

  2. Examine the function $f = y^{2} + 2xy - 4x - 2y - 3$ for extrema.
    Solution

    Find the stationary points: $$\frac{\partial f}{\partial x} = 2y - 4;$$ $$\frac{\partial f}{\partial y} = 2y + 2x - 2.$$
    Compose and solve the system: $$\begin{cases} \frac{\partial f}{\partial x} = 0 \\ \frac{\partial f}{\partial y} = 0 \end{cases} \Rightarrow \begin{cases} 2y - 4 = 0 \\ 2y + 2x - 2 = 0 \end{cases} \Rightarrow \begin{cases} y = 2 \\ y + x = 1 \end{cases} \Rightarrow x = -1$$
    $M_{0}\left(-1, 2\right)$ is a stationary point.
    Check the sufficient condition for an extremum: $$A = \frac{\partial^{2} f}{\partial x^{2}}\left(-1, 2\right) = 0; \quad B = \frac{\partial^{2} f}{\partial x \partial y}\left(-1, 2\right) = 2; \quad C = \frac{\partial^{2} f}{\partial y^{2}}\left(-1, 2\right) = 2;$$
    $A \cdot C - B^{2} = -4 < 0$, hence there is no extremum at the point $M_{0}$.
    Answer: there are no extrema.
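The arithmetic of both worked examples can be re-checked with the $\Delta = AC - B^{2}$ test, assuming the function of Example 1 is the one matching the printed derivatives ($f = x^{3} + 8y^{3} - 6xy$, so $f''_{xx} = 6x$, $f''_{xy} = -6$, $f''_{yy} = 48y$):

```python
def delta(A, B, C):
    """Delta = A*C - B**2 from the sufficient condition."""
    return A * C - B * B

# Example 1 (assumed f = x**3 + 8*y**3 - 6*x*y):
A1, B1, C1 = 0.0, -6.0, 0.0     # at M1 = (0, 0)
A2, B2, C2 = 6.0, -6.0, 24.0    # at M2 = (1, 1/2)

# Example 2 (f = y**2 + 2*x*y - 4*x - 2*y - 3):
# f_xx = 0, f_xy = 2, f_yy = 2 at M0 = (-1, 2)
A0, B0, C0 = 0.0, 2.0, 2.0
```

The values reproduce the text: Δ = -36 at M1 (no extremum), Δ = 108 with A > 0 at M2 (minimum), Δ = -4 at M0 (no extremum).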



20.4. Approximate calculations using the total differential.

Let the function f (x, y) be differentiable at the point (x, y). The total increment of this function is

Δz = f (x + Δx, y + Δy) - f (x, y).

Replacing the increment by the total differential, f (x + Δx, y + Δy) - f (x, y) ≈ dz, we obtain the approximate formula:

f (x + Δx, y + Δy) ≈ f (x, y) + (∂f/∂x)·Δx + (∂f/∂y)·Δy.

Example. Calculate the approximate value of the function, starting from its value at x = 1, y = 2, z = 1.

From the given expression we determine Δx = 1.04 - 1 = 0.04, Δy = 1.99 - 2 = -0.01, Δz = 1.02 - 1 = 0.02.

Find the value of the function u (x, y, z) =

Find the partial derivatives:

The total differential of the function u is equal to:

The exact value of this expression is 1.049275225687319176.
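The function of this worked example is not reproduced in the text, so here is a hedged stand-in for the same technique: approximate 1.04^2.02 via f(x, y) = x^y around the point (1, 2), using f(x0 + Δx, y0 + Δy) ≈ f(x0, y0) + f′x·Δx + f′y·Δy.

```python
import math

# Assumed example (not the document's own): f(x, y) = x**y near (1, 2).
x0, y0 = 1.0, 2.0
dx, dy = 0.04, 0.02

f0 = x0 ** y0
f_x = y0 * x0 ** (y0 - 1)           # d(x^y)/dx = y * x**(y - 1)
f_y = x0 ** y0 * math.log(x0)       # d(x^y)/dy = x**y * ln(x)

approx = f0 + f_x * dx + f_y * dy   # 1 + 2*0.04 + 0 = 1.08
exact = 1.04 ** 2.02                # about 1.0824
```

The linear approximation 1.08 differs from the exact value by roughly 0.0025, i.e. the error is of second order in the steps.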

20.5. Partial derivatives of higher orders.

If the function f (x, y) is defined in some domain D, then its partial derivatives will be determined in the same domain or part of it.

We will call these derivatives partial derivatives of the first order.

The derivatives of these functions will be partial derivatives of the second order.

Continuing to differentiate the obtained equalities, we obtain partial derivatives of higher orders.

Definition. Partial derivatives taken with respect to different variables, such as f″xy and f″yx, are called mixed derivatives.

Theorem. If the function f (x, y) and its partial derivatives f′x, f′y, f″xy, f″yx are defined and continuous at the point M (x, y) and in its neighbourhood, then the following relation is true: f″xy = f″yx.

That is, higher-order partial derivatives do not depend on the order of differentiation.

Differentials of higher orders are defined similarly.


Here n is a symbolic exponent of the derivative: after the expression in parentheses is raised to this power, the symbolic powers are replaced by actual orders of differentiation.
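The elided formula that this remark refers to is presumably the standard symbolic-power expression for higher-order differentials:

```latex
% Standard formula for the n-th differential (reconstruction, not the original figure):
d^{n}z = \left( \frac{\partial}{\partial x}\,dx + \frac{\partial}{\partial y}\,dy \right)^{n} z,
\qquad\text{e.g.}\quad
d^{2}z = \frac{\partial^{2} z}{\partial x^{2}}\,dx^{2}
       + 2\,\frac{\partial^{2} z}{\partial x\,\partial y}\,dx\,dy
       + \frac{\partial^{2} z}{\partial y^{2}}\,dy^{2}.
```

Expanding the parenthesized "power" by the binomial theorem and then reading each symbolic power as an order of differentiation reproduces all the n-th order terms.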