Causality and interaction in physics

Victor Kuligin

Disclosure of the content and concretization of concepts should rely on one or another specific model of the interconnection of concepts. The model, objectively reflecting a certain side of the connection, has limits of applicability beyond which its use leads to false conclusions; within those limits, however, it should possess not only imagery, clarity and concreteness, but also heuristic value.

The variety of manifestations of cause-and-effect relationships in the material world has led to the existence of several models of cause-and-effect relationships. Historically, any model of these relationships can be reduced to one of the two main types of models or a combination of them.

a) Models based on the temporal approach (evolutionary models). Here the main attention is focused on the temporal side of the cause-and-effect relationship. One event, the "cause", gives rise to another event, the "effect", which lags behind the cause in time. This lag is the hallmark of the evolutionary approach. Cause and effect are interdependent. However, the reference to the generation of the effect by the cause (genesis), although legitimate, is introduced into the definition of the cause-and-effect relationship as if from outside: it fixes the outer side of this connection without deeply capturing its essence.

The evolutionary approach was developed by F. Bacon, J. Mill and others. The extreme polar point of the evolutionary approach was the position of Hume. Hume ignored genesis, denying the objective nature of causality, and reduced causality to mere regularity of events.

b) Models based on the concept of "interaction" (structural or dialectical models). The meaning of these names will be clarified later. The main focus here is on interaction as the source of cause-and-effect relationships; the interaction itself acts as the cause. Kant paid much attention to this approach, but the dialectical approach to causality acquired its clearest form in the works of Hegel. Among modern Soviet philosophers this approach was developed by G.A. Svechnikov, who sought to give a materialistic interpretation of one of the structural models of causation.

The existing and currently used models reveal the mechanism of cause-and-effect relations in various ways, which leads to disagreements and creates the basis for philosophical discussions. The sharpness of the discussion and the polarity of the points of view indicate their relevance.

Let us highlight some of the discussed problems.

a) The problem of the simultaneity of cause and effect. This is the main problem. Are cause and effect simultaneous or separated by an interval of time? If cause and effect are simultaneous, then why does cause give rise to effect, and not vice versa? If the cause and the effect are not simultaneous, can there be a "pure" cause, i.e. a cause without an effect that has not yet occurred, and a "pure" effect, when the action of the cause has ended, and the effect is still going on? What happens in the interval between cause and effect, if they are separated in time, etc.?

b) The problem of the unambiguity of cause-and-effect relationships. Does one and the same cause give rise to the same effect, or can one cause give rise to any effect from several potentially possible ones? Could the same effect be generated by any of several causes?

c) The problem of the reverse effect of the effect on its cause.

d) The problem of the connection between cause, occasion and conditions. Can cause and condition, under certain circumstances, change roles: the cause becoming the condition and the condition the cause? What is the objective relationship between, and what are the distinguishing features of, a cause, an occasion and a condition?

The solution to these problems depends on the chosen model, i.e. to a large extent on what content is put into the initial categories "cause" and "effect". The definitional nature of many of the difficulties is manifested, for example, in the fact that there is no single answer to the question of what should be understood by "cause". Some researchers understand by cause a material object, others a phenomenon, still others a change of state, yet others an interaction, etc.

Attempts to go beyond the framework of a model representation and give a general, universal definition of the causal relationship do not lead to a solution of the problem. As an example, the following definition can be cited: "Causality is such a genetic connection of phenomena in which one phenomenon, called the cause, in the presence of certain conditions inevitably generates, calls into being, another phenomenon, called the effect." This definition is formally valid for most models, but, without relying on a model, it cannot solve the problems posed (for example, the problem of simultaneity) and therefore has limited theoretical and cognitive value.

When solving the problems mentioned above, most authors tend to proceed from the modern physical picture of the world and, as a rule, pay somewhat less attention to epistemology. Meanwhile, in our opinion, there are two important problems here: the problem of removing elements of anthropomorphism from the concept of causality, and the problem of non-causal links in natural science. The essence of the first problem is that causality, as an objective philosophical category, must have an objective character that does not depend on the cognizing subject and his activity. The essence of the second problem: should causal relationships in natural science be recognized as universal and all-embracing, or should it be considered that such relationships are limited in nature and that there exist relationships of a non-causal type that deny causality and limit the applicability of the causality principle? We believe that the principle of causality is universal and objective, and its application knows no limits.

So, the two types of models, objectively reflecting some important aspects and features of cause-and-effect relationships, are to a certain extent in contradiction, since they solve the problems of simultaneity, unambiguity, etc. in different ways; at the same time, objectively reflecting different aspects of cause-and-effect relationships, they must be in mutual connection. Our first task is to identify this connection and refine the models.

Applicability limit of models

Let us try to establish the limit of applicability of evolutionary-type models. Causal chains that satisfy evolutionary models tend to be transitive. If event A is the cause of event B (B is a consequence of A) and, in turn, event B is the cause of event C, then event A is the cause of event C: if A → B and B → C, then A → C. This is how the simplest cause-and-effect chains are constructed. Event B can act in one case as a cause, in another as an effect. This pattern was noted by F. Engels: "... cause and effect are representations that have meaning, as such, only when applied to a given individual case; but as soon as we consider this separate case in its general connection with the entire world whole, these ideas converge and intertwine in the representation of a universal interaction, in which causes and effects constantly change places; what here or now is a cause becomes there or then an effect, and vice versa" (vol. 20, p. 22).

The transitivity property allows one to carry out a detailed analysis of a causal chain, which consists in decomposing the final chain into simpler causal links: if A → C, then A → B1, B1 → B2, ..., Bn → C. But does a finite causal chain have the property of infinite divisibility? Can the number of links N in a finite chain tend to infinity?

Based on the law of the transition of quantitative changes into qualitative ones, it can be argued that when dismembering a finite causal chain we will eventually encounter such a content of the individual links that further division becomes meaningless. Note that Hegel called infinite divisibility, which denies the law of the transition of quantitative changes into qualitative ones, "bad infinity".

The transition from quantitative to qualitative changes occurs, for example, when a piece of graphite is divided. As long as molecules are being separated, down to the formation of a monatomic gas, the chemical composition does not change. Further division of the substance without a change in its chemical composition is no longer possible, since the next step would be the splitting of carbon atoms. Here, from the physicochemical point of view, quantitative changes lead to qualitative ones.

In the above statement of F. Engels, one can clearly trace the idea that the basis of cause-and-effect relationships is not spontaneous expression of will, not a whim of chance and not a divine finger, but universal interaction. In nature, there is no spontaneous emergence and destruction of motion, there are mutual transitions of some forms of motion of matter into others, from one material object to another, and these transitions cannot occur otherwise than through the interaction of material objects. Such transitions due to interaction give rise to new phenomena, changing the state of interacting objects.

Interaction is universal and forms the basis of causality. As Hegel justly noted, "interaction is a causal relationship that is laid down in its full development." Engels formulated this thought even more clearly: "Interaction is the first thing that comes before us when we consider moving matter as a whole from the point of view of modern natural science ... Thus, natural science confirms the fact ... that interaction is the true causa finalis of things. We cannot go beyond the knowledge of this interaction precisely because there is nothing more to cognize behind it" (vol. 20, p. 546).

Since interaction is the basis of causality, consider the interaction of two material objects, the diagram of which is shown in Fig. 1. This example does not violate the generality of reasoning, since the interaction of several objects is reduced to pair interactions and can be considered in a similar way.

It is easy to see that when interacting, both objects simultaneously affect each other (reciprocity of action). In this case, the state of each of the interacting objects changes. No interaction - no change of state. Therefore, a change in the state of any one of the interacting objects can be considered as a particular consequence of the cause - interaction. A change in the states of all objects in their totality will be a complete consequence.

It is obvious that such a causal model of the elementary link of the evolutionary model belongs to the class of structural (dialectical) models. It should be emphasized that this model is not reducible to the approach developed by G.A. Svechnikov, since by effect G.A. Svechnikov, according to V.G. Ivanov, understood "... a change in one or all of the interacting objects, or a change in the nature of the interaction itself, up to its disintegration or transformation." As for the change of states, G.A. Svechnikov attributed such a change to a non-causal form of connection.

So, we have established that evolutionary models contain, as their elementary, primary link, a structural (dialectical) model based on interaction and change of states. A little later we will return to the analysis of the interconnection of these models and to the study of the properties of the evolutionary model. Here we would like to note that, in full accordance with the point of view of F. Engels, the change of phenomena in evolutionary models reflecting objective reality occurs not through a mere regularity of events (as in D. Hume), but through the conditioning generated by interaction (genesis). Therefore, although references to generation (genesis) are introduced into the definition of causal relationships in evolutionary models, they reflect the objective nature of these relationships and are entirely legitimate.

Fig. 2. Structural (dialectical) model of causality

Let's go back to the structural model. In its structure and meaning, it is in excellent agreement with the first law of dialectics - the law of the unity and struggle of opposites, if we interpret:

- unity - as the existence of objects in their mutual connection (interaction);

- opposites - as mutually exclusive tendencies and characteristics of states due to interaction;

- struggle - as interaction;

- development - as a change in the state of each of the interacting material objects.

Therefore, a structural model based on interaction as a cause can also be called a dialectical model of causality. From the analogy of the structural model and the first law of dialectics, it follows that causality acts as a reflection of objective dialectical contradictions in nature itself, in contrast to the subjective dialectical contradictions that arise in human consciousness. The structural model of causality is a reflection of the objective dialectic of nature.

Consider an example illustrating the application of the structural model of causation. There are many such examples, explained with the help of this model, in the natural sciences (physics, chemistry, etc.), since the concept of "interaction" is fundamental in natural science.

Let us take as an example an elastic collision of two balls: a moving ball A and a stationary ball B. Before the collision, the state of each ball was determined by a set of attributes Ca and Cb (momentum, kinetic energy, etc.). After the collision (interaction), the states of these balls changed; let us designate the new states C′a and C′b. The cause of the change of states (Ca → C′a and Cb → C′b) was the interaction of the balls (the collision); the effect of this collision was the change in the state of each ball.
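As a numerical illustration of this structural scheme, here is a minimal sketch in Python (the masses and the initial velocity are arbitrary example values, not taken from the article): the interaction (collision) plays the role of the cause, and the simultaneous change of state of both balls is the complete effect.

```python
# Minimal sketch: 1D elastic collision of ball A (moving) and ball B (at rest).
# The interaction (collision) is the cause; the change of state of BOTH balls
# taken together is the complete effect. Masses and velocity are example values.

def elastic_collision_1d(m_a, v_a, m_b, v_b):
    """Post-collision velocities from conservation of momentum and energy."""
    v_a_new = ((m_a - m_b) * v_a + 2 * m_b * v_b) / (m_a + m_b)
    v_b_new = ((m_b - m_a) * v_b + 2 * m_a * v_a) / (m_a + m_b)
    return v_a_new, v_b_new

def state(m, v):
    """State attributes C: momentum and kinetic energy."""
    return {"p": m * v, "E_kin": 0.5 * m * v ** 2}

m_a, m_b = 1.0, 2.0      # kg (illustrative)
v_a, v_b = 3.0, 0.0      # m/s: A moves, B is at rest

C_a, C_b = state(m_a, v_a), state(m_b, v_b)        # states before the interaction
v_a2, v_b2 = elastic_collision_1d(m_a, v_a, m_b, v_b)
C_a2, C_b2 = state(m_a, v_a2), state(m_b, v_b2)    # states after the interaction

print("Ca -> C'a:", C_a, "->", C_a2)
print("Cb -> C'b:", C_b, "->", C_b2)
# Total momentum and kinetic energy are conserved, and both states change at once:
# neither ball is the "active" or the "passive" side of the interaction.
```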

As already mentioned, the evolutionary model is of little use in this case, since we are dealing not with a causal chain but with an elementary causal link whose structure is not reducible to the evolutionary model. To show this, let us illustrate, using this example, an explanation from the standpoint of the evolutionary model: "Before the collision, ball B was at rest, so the cause of its motion is ball A, which struck it." Here ball A is the cause and the motion of ball B is the effect. But from the same standpoint the following explanation can also be given: "Before the collision, ball A was moving uniformly along a rectilinear trajectory. Had it not been for ball B, the nature of the motion of ball A would not have changed." Here the cause is already ball B, and the effect is the state of ball A. The above example shows:

a) a certain subjectivity that arises when the evolutionary model is applied beyond the limits of its applicability: the cause can be taken to be either ball A or ball B; this situation arises because the evolutionary model singles out one particular branch of the total effect and limits itself to its interpretation;

b) a typical epistemological error. In the explanations above, given from the standpoint of the evolutionary model, one of two material objects of the same type acts as an "active" principle and the other as a "passive" one. One of the balls turns out to be endowed (in comparison with the other) with "activity", "will", "desire", like a person, and it is only thanks to this "will" that we have a causal relationship. Such an epistemological error is determined not only by the model of causality but also by the imagery inherent in living human speech and by the typical psychological transfer of properties characteristic of complex causality (we will talk about it below) onto a simple causal link. Such errors are quite common when the evolutionary model is used beyond the limits of its applicability, and they are found in some definitions of causality. For example: "So, causality is defined as such an effect of one object on another in which a change in the first object (cause) precedes a change in the other object and, in a necessary, unambiguous way, generates a change in the other object (effect)." It is difficult to agree with such a definition, since it is not at all clear why, during interaction (mutual action!), the objects should change not simultaneously but one after the other. Which object should change first, and which second (the problem of priority)?

Model qualities

Let us now consider what qualities the structural model of causality retains in itself. Let's note the following among them: objectivity, universality, consistency, unambiguity.

The objectivity of causality is manifested in the fact that interaction acts as an objective reason in relation to which interacting objects are equal. There is no room for anthropomorphic interpretation here. Universality is due to the fact that causality is always based on interaction. Causality is universal, just as interaction itself is universal. Consistency is due to the fact that, although cause and effect (interaction and change of states) coincide in time, they reflect different aspects of the cause-and-effect relationship. Interaction presupposes the spatial connection of objects, change of state - the connection of the states of each of the interacting objects in time.

In addition, the structural model establishes an unambiguous relationship in causal relationships, regardless of the way of mathematical description of the interaction. Moreover, the structural model, being objective and universal, does not prescribe restrictions on the nature of interactions in natural science. Within the framework of this model, both instantaneous long-range or short-range action and interaction with any finite speeds are valid. The appearance of such a limitation in the definition of cause-and-effect relations would be a typical metaphysical dogma, once and for all postulating the nature of the interaction of any systems, imposing a natural philosophical framework on physics and other sciences from the side of philosophy, or would limit the applicability of the model to such an extent that the use of such a model would turn out to be very modest.

It would be appropriate here to dwell on questions related to the finite speed of propagation of interactions. Let us consider an example. Suppose there are two stationary charges. If one of the charges begins to move with acceleration, the electromagnetic wave will reach the second charge with a delay. Does this example contradict the structural model and, in particular, the property of reciprocity of action, since for


The purpose of the lesson

Continue the discussion of wave diffraction, consider the problem of the limits of applicability of geometric optics, develop skills for the qualitative and quantitative description of the diffraction pattern, consider the practical applications of light diffraction.

This material is usually considered only in passing within the topic "Diffraction of light", due to lack of time. In our opinion, however, it should be considered for a deeper understanding of the phenomenon of diffraction and of the fact that any theory describing physical processes has limits of applicability. This lesson can therefore be taught in basic-level classes instead of a problem-solving lesson, since the mathematical apparatus for solving problems on this topic is rather complicated.

No. | Lesson stage | Time, min | Techniques and methods
1 | Organizational moment | 2 |
2 | Review of previously studied material | 6 | Frontal questioning
3 | Explanation of the new material on "Limits of applicability of geometric optics" | 15 | Lecture
4 | Consolidation of the studied material using a computer model | 15 | Work at the computer with worksheets; "Diffraction Limit of Resolution" model
5 | Analysis of the work done | 5 | Frontal conversation
6 | Explanation of homework | 2 |

Repetition of the learned material

Frontally repeat the questions on the topic "Light diffraction".

Explanation of the new material

The limits of applicability of geometric optics

All physical theories describe the processes occurring in nature only approximately. For any theory, certain limits of its applicability can be indicated. Whether a given theory can be applied in a specific case depends not only on the accuracy the theory provides, but also on the accuracy required for solving the particular practical problem. The limits of a theory can usually be established only after a more general theory covering the same phenomena has been built.

All these general considerations apply to geometric optics as well. This theory is approximate: it is unable to explain the phenomena of interference and diffraction of light. A more general and more accurate theory is wave optics. The law of rectilinear propagation of light and the other laws of geometric optics are fulfilled quite accurately only if the dimensions of the obstacles in the path of the light are much larger than the wavelength of the light wave. But they never hold exactly.

The operation of optical devices is described by the laws of geometric optics. According to these laws, we can distinguish with a microscope arbitrarily small details of an object; with the help of a telescope, it is possible to establish the existence of two stars at any, arbitrarily small angular distances between them. However, in reality this is not the case, and only the wave theory of light makes it possible to understand the reasons for the resolution limit of optical devices.

Resolution of the microscope and telescope.

The wave nature of light limits the ability to distinguish between details of an object or very small objects when observed with a microscope. Diffraction does not allow obtaining clear images of small objects, since the light does not propagate in a strictly straight line, but bends around objects. Because of this, images are "blurry". This happens when the linear dimensions of objects are comparable to the length of the light wave.

Diffraction also imposes a limit on the resolution of the telescope. Due to the diffraction of waves, the image of the star will not be a point, but a system of light and dark rings. If two stars are at a small angular distance from each other, then these rings are superimposed on each other and the eye is not able to distinguish whether there are two luminous points or one. The limiting angular distance between the luminous points, at which they can be distinguished, is determined by the ratio of the wavelength to the lens diameter.

This example shows that diffraction always occurs at any obstacle. With very delicate observations, it cannot be neglected even for obstacles much larger than the wavelength.

Light diffraction defines the limits of applicability of geometric optics. Light bending around obstacles imposes a limit on the resolution of the most important optical instruments - the telescope and microscope.
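The limiting angle mentioned above can be estimated with the Rayleigh criterion, θ_min ≈ 1.22 λ/D. The minimal Python sketch below is only an illustration: it takes the 2 cm aperture and the 4.5·10⁻⁵ rad source separation from the worksheet that follows and scans over visible wavelengths.

```python
# Minimal sketch: diffraction-limited angular resolution (Rayleigh criterion),
# theta_min ~ 1.22 * wavelength / aperture_diameter.
# Aperture and source separation are taken from the worksheet below.

def rayleigh_limit(wavelength_m, aperture_m):
    """Minimum resolvable angular separation, in radians."""
    return 1.22 * wavelength_m / aperture_m

aperture = 0.02        # m, hole (lens) diameter
separation = 4.5e-5    # rad, angular distance between the two sources

for wavelength_nm in (400, 550, 720, 760):
    theta = rayleigh_limit(wavelength_nm * 1e-9, aperture)
    verdict = "resolved" if theta < separation else "merged into one image"
    print(f"{wavelength_nm} nm: theta_min = {theta:.2e} rad -> {verdict}")

# The Rayleigh threshold here falls near 740 nm, close to the worksheet's
# answer "from about 720 nm and longer" (the computer model may use a
# slightly different resolution criterion).
```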

"Diffraction limit of resolution"

Lesson Worksheet

Sample answers
"Diffraction of light"

Surname, name, class ______________________________________________

    Set the hole diameter to 2 cm and the angular distance between the light sources to 4.5·10⁻⁵ rad. By changing the wavelength, determine the wavelength starting from which the image of the two light sources can no longer be distinguished, so that they are perceived as one.

    Answer: from about 720 nm and longer.

    How does the resolution limit of an optical device depend on the wavelength of the light in which the objects are observed?

    Answer: the longer the wavelength, the worse the resolution (the larger the minimum angular distance that can still be resolved).

    Which binary stars - blue or red - we can detect at a greater distance by modern optical telescopes?

    Answer: blue.

    Set the minimum wavelength without changing the distance between the light sources. At what diameter of the hole will the image of two light sources be impossible to distinguish, and they will be perceived as one?

    Answer: 1.0 cm or less.

    Repeat the experiment with the maximum wavelength.

    Answer: about 2 cm or less.

    How does the resolution limit of optical devices depend on the diameter of the hole through which the light passes?

    Answer: the smaller the hole diameter, the worse the resolution (the larger the minimum angular distance that can still be resolved).

    Which telescope - with a larger or smaller lens - will allow you to see two nearby stars?

    Answer: with a larger lens.

    Find experimentally the minimum angular distance (in radians) at which the images of two light sources can still be distinguished in the given computer model.

    Answer: 1.4·10⁻⁵ rad.

    Why can't you see molecules or atoms of a substance through an optical microscope?

    Answer: if the linear dimensions of the observed objects are comparable with the length of the light wave, then diffraction will not allow obtaining their distinct images in the microscope, since the light does not propagate in a strictly straight line, but bends around the objects. Because of this, images are "blurry".

    Give examples when it is necessary to take into account the diffractive nature of images.

    Answer: for all observations through a microscope or telescope, when the dimensions of the observed objects are comparable to the length of the light wave, with small sizes of the entrance aperture of the telescopes, when observing in the range of long red waves of objects located at small angular distances from each other.

The question that naturally arises in the study of any science is how to assess the prospects for the practical applicability of its conclusions: is it possible, on the basis of a given theory, to formulate a sufficiently accurate forecast of the behavior of the object under study? Given that economic theory is concerned with the study of "the choices that people make using limited resources to satisfy their desires" 1, the question posed concerns predicting people's behavior in situations of choice. The dominant trend in economic theory, the mainstream of economics, claims to be able to describe accurately the behavior of individuals making any choice in any situation of limited resources. The object of choice, the external conditions in which the choice is made, the historical era in which it is made, play no special role. The analytical model of neoclassicism remains unchanged, whether it concerns buying fruit at the market, the "choice" of a patron overlord in the feudal era, or the choice of a life partner.

One of the first to question the universalist claims of classical economics was J.M. Keynes. His main thesis is as follows: "The postulates of the classical theory are applicable not to the general, but only to a special case, since the economic situation which it considers is only a limiting case of possible equilibrium states" 2. More precisely, the classical postulates are true only under conditions of full employment of the available resources and lose their analytical value as the market moves away from a situation of full employment. Are there other restrictions on the application of the neoclassical model?

Completeness of information

The neoclassical model assumes completeness of the information that individuals possess at the moment of making a choice. Is this condition achieved automatically, and is it always achievable? One of the postulates of neoclassical theory says that all the necessary information about the state of the market is contained in prices; possession of information about equilibrium prices allows the participants in the exchange to make transactions in accordance with their interests. L. Walras speaks of the existence of a certain "auctioneer" (commissaire-priseur) in the market, who accepts "bids" from buyers and "offers" from sellers. Comparison of the aggregate demand and aggregate supply obtained on this basis underlies the "groping" (tâtonnement) for the equilibrium price 3. However, as Oskar Lange showed back in the 1930s in his model of market socialism, in reality the functions of the auctioneer can and should best be performed by a planning body, a central planning bureau. The paradox of Lange's argument is that it is precisely in the existence of a planning body that he sees the main prerequisite for the functioning of the neoclassical market model 4.
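As a toy illustration of the tâtonnement logic, here is a minimal Python sketch under the strong assumption that the auctioneer can observe aggregate excess demand exactly; the linear demand and supply curves are invented for the example and are not part of the text above.

```python
# Minimal sketch of Walrasian "tatonnement" (groping for the equilibrium price).
# The auctioneer is assumed to observe aggregate excess demand exactly and to
# adjust the price proportionally to it. Demand/supply curves are invented.

def demand(p):
    return max(0.0, 100.0 - 2.0 * p)   # aggregate "bids" of buyers

def supply(p):
    return 10.0 + 1.0 * p              # aggregate "offers" of sellers

price, step = 5.0, 0.2                 # starting price and adjustment speed
for _ in range(200):
    excess = demand(price) - supply(price)
    if abs(excess) < 1e-6:             # market clears: equilibrium reached
        break
    price += step * excess             # raise price on excess demand, lower otherwise

print(f"equilibrium price ~ {price:.2f}, quantity ~ {demand(price):.2f}")
# Analytically: 100 - 2p = 10 + p  =>  p = 30, q = 40.
```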

An alternative to the socialist centralization of pricing can only be the model of a local market. It is only when transactions are limited to a certain circle of persons or a certain territory that all exchange participants can be provided with complete information about the transactions planned and performed on the market. Medieval fairs are a historical example of a local market: the constant circle of participants and their limited number allowed all traders to have a clear idea of the market situation and to make reliable assumptions about its changes. Even if the traders did not have all the information about a deal ex ante, the personal reputation of each of them served as the best guarantee that there was no deception and that no one used additional information to the detriment of the rest 5. Despite the seeming paradox, modern exchanges and certain individual markets (for example, the diamond market) also operate on the principles of the local market. Although transactions here are made on a global, or at least a national, scale, the circle of their participants is limited: we are talking about a kind of community of merchants living on the basis of the personal reputation of each of them 6. To summarize: completeness of information is achievable only in two cases - centralized pricing or a local market.

Perfect competition

Another requirement of the neoclassical market model is a minimum of interdependence among the participants in transactions: a situation in which the choice of one individual neither depends on nor affects the decisions of other individuals. Minimum interdependence in decision-making is achieved only within a certain market structure, i.e. when transactions are made on a perfectly competitive market. For a market to meet the criteria of perfect competition, the following conditions must be met:

The presence of a large, potentially infinite number of participants in transactions (sellers and buyers), and the share of each of them is insignificant in the total volume of transactions;

The exchange is carried out with standardized and homogeneous products;

Buyers have complete information about the products they are interested in;

There is the possibility of free entry and exit from the market, and its participants have no incentives to merge 7.

Under conditions of perfect competition, the resources that are the object of economic choice become non-specific, i.e. it is easy to find an equivalent replacement for them, and the result of using the replacement will be the same. However, here too it is worth recalling the Keynesian limitation of the sphere in which neoclassical analysis remains valid. N. Kaldor sees in the existence of monopolistic competition one of the main causes of underemployment and, consequently, of the unattainability of neoclassical equilibrium in the market: "The natural framework for Keynesian macroeconomics is the microeconomics of monopolistic competition." Thus, the market structure is the second factor determining the limits of applicability of the neoclassical model.

Homo oeconomicus

Another prerequisite for the applicability of neoclassical models to the analysis of real markets is the conformity of the people making the choice to the ideal of homo oeconomicus. Although the neoclassicists themselves pay insufficient attention to this issue, limiting themselves to references to rationality and to the identification of man with a perfect calculator, the neoclassical model assumes a very specific type of behavior. Interest in the behavior of participants in market transactions is already characteristic of the founder of classical economic theory, Adam Smith, the author not only of An Inquiry into the Nature and Causes of the Wealth of Nations (1776) but also of The Theory of Moral Sentiments (1759). What, then, is the portrait of the ideal participant in the neoclassical market?

First, he must be goal-rational. Following Max Weber, goal-rational behavior is understood as "the expectation of a certain behavior of objects of the external world and of other people, and the use of this expectation as 'conditions' and 'means' for achieving one's rationally set and considered goal" 9. A goal-rational person is free to choose both the goals and the means of achieving them.

Second, the behavior of homo oeconomicus should be utilitarian. In other words, his actions should be subordinated to the task of maximizing pleasure and utility. It is utility that becomes the basis of human happiness 10. It is necessary to distinguish between two forms of utilitarianism - simple and complex. In the first case, a person is simply aimed at the task of maximizing his pleasure, in the second he associates the amount of received utility with his own activity. It is the awareness of the relationship between utility and activity that characterizes the ideal participant in a market exchange.

Third, he must feel empathy in relation to other participants in the transaction, i.e. he must be able to put himself in their place and look at the exchange taking place from their point of view. "Since no direct observation is capable of acquainting us with what other people feel, we cannot form a concept of their sensations otherwise than by imagining ourselves in their position." Moreover, empathy is distinguished from emotionally colored sympathy by impartiality and neutrality: we must be able to put ourselves in the place of a person who can be personally unpleasant.

Fourth, there must be trust. Not a single transaction on the market, even the most elementary one, can be carried out without at least minimal trust between its participants. It is the existence of trust that underlies the predictability of the counterparty's behavior and the formation of more or less stable expectations regarding the market situation. "I trust another if I think that he will not deceive my expectations about his intentions and about the terms of the transaction being made." For example, any transaction involving prepayment 12 is built on the buyer's trust that the seller will fulfill his obligations after the prepayment has been made. Without mutual trust such a deal would seem irrational and would never be concluded.

Finally, market participants must possess the capacity for interpretive rationality, which is a kind of synthesis of the four elements above. Interpretive rationality includes, on the one hand, the ability of an individual to form correct expectations about the actions of another, that is, to interpret correctly the latter's intentions and plans. On the other hand, a symmetrical requirement is made of the individual: to make it easier for others to understand his own intentions and actions 13. Why is interpretive rationality important in the market? Without it, exchange participants cannot find an optimal way out of situations such as the "prisoners' dilemma", which arise whenever transactions involve the production and distribution of public goods.

The prerequisites for interpretive rationality are the existence of focal points, options spontaneously chosen by all individuals, and of agreements, patterns of behavior known to all individuals 14. A spontaneous choice of the same option from a certain set of alternatives is possible only within socially homogeneous groups or within the same culture: indeed, focal points are associated with the presence of common reference points in actions and assessments, common associations. An example of a focal point is a commonly known meeting place in a city or a building. As for agreements, we are talking about behavior that is generally accepted in a given situation. Agreements allow individuals to behave as others expect, and vice versa. An agreement regulates, for example, the communication of casual fellow travelers on a train: it determines the topics of conversation, the permissible degree of openness, the degree of respect for the interests of the other (in matters of noise, light), and so on.
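A minimal Python sketch of the kind of pure coordination game that lies behind focal points and agreements: both players gain only if they choose the same option, and since several equilibria exist, the game itself cannot say which one will be played; a focal point or an established agreement does. The option names and payoffs are invented for the illustration.

```python
# Minimal sketch: a pure coordination game. Two meeting places, payoff 1 only
# if both individuals choose the same one; both (A, A) and (B, B) are equilibria,
# so the game alone does not select an outcome -- a focal point or an agreement
# (convention) does. Option names and payoffs are invented.

OPTIONS = ("station square", "main library")

def payoff(choice_1, choice_2):
    """Both gain only by coordinating; which option they coordinate on is irrelevant."""
    return (1, 1) if choice_1 == choice_2 else (0, 0)

def is_nash_equilibrium(c1, c2):
    """No player can gain by deviating unilaterally."""
    u1, u2 = payoff(c1, c2)
    best_1 = all(payoff(alt, c2)[0] <= u1 for alt in OPTIONS)
    best_2 = all(payoff(c1, alt)[1] <= u2 for alt in OPTIONS)
    return best_1 and best_2

for c1 in OPTIONS:
    for c2 in OPTIONS:
        print(f"{c1!r} / {c2!r}: payoffs {payoff(c1, c2)}, "
              f"equilibrium: {is_nash_equilibrium(c1, c2)}")
```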

Focal point - a variant of behavior spontaneously chosen by all individuals who find themselves in a given situation.

Agreement (convention) - a regularity R in the behavior of a group of individuals P in a recurring situation S, such that the following six conditions are met:

1) everyone obeys R;

2) everyone thinks everyone else obeys R;

3) the belief that others obey R is, for each individual, the main incentive to obey it as well;

4) everyone prefers full compliance with R to partial compliance;

5) R is not the only regularity in behavior that satisfies conditions 3 and 4;

6) conditions 1 to 5 are generally known (common knowledge).

Conclusions. Summarizing the discussion of the limits of applicability of neoclassical market models, let us recall the main ones. The market structure must be close to perfectly competitive; pricing in the market must be either centralized or local in character, because only in these cases does all information circulate freely on the market and remain available to all participants in transactions; and all participants in transactions must be close in their behavior to homo oeconomicus. Having concluded that the scope of applicability of neoclassical models is significantly narrowed, it is easy to notice another, more serious problem: the above requirements contradict one another. Thus, the local market model contradicts the requirement of a sufficiently large, potentially unlimited number of participants in transactions (the condition of perfect competition). If we take the case of centralized pricing, it undermines mutual trust between the parties to the transaction themselves: the main thing here is not trust at the "horizontal" level, but "vertical" trust in the auctioneer, in whatever form he may exist 15. Further, the requirement of minimum interdependence of the participants in transactions contradicts the norms of empathy and interpretive rationality: by taking the counterparty's point of view, we partially give up our autonomy and self-sufficiency in decision-making. This series of contradictions could be continued. Consequently, attention to factors such as the organization of the market and the behavior of people in it not only limits the scope of applicability of the neoclassical model, but also casts doubt on it. There is a need for a new theory capable not only of explaining the existence of these limitations, but also of taking them into account when building a model of the market.

Lecture number 2. INSTITUTIONAL THEORY: "OLD" AND "NEW" INSTITUTIONALISM

Institutionalism is a theory focused on building a market model taking into account these constraints. As the name suggests, the focus of this theory's analysis is on institutions, "human-created frameworks that structure political, economic, and social interactions." Before proceeding to the actual discussion of the postulates of institutional theory, we need to determine the criteria by which we will assess the degree of its novelty in relation to the neoclassical approach. Are we really talking about a new theory, or are we dealing with a modified version of neoclassicism, the expansion of the neoclassical model into a new sphere of analysis, institutions?

The neoclassical paradigm

Let us use the scheme of epistemological* analysis of a theory proposed by Imre Lakatos (Fig. 2.1) 17. According to it, any theory includes two components, a "hard core" and a "protective belt". The statements that make up the "hard core" of the theory must remain unchanged in the course of any modifications and refinements that accompany the development of the theory. They form the research paradigm, those principles that no researcher who consistently applies the theory has the right to abandon, however sharp the criticism of opponents may be. By contrast, the statements that constitute the "protective belt" of the theory are subject to constant adjustment as the theory develops: the theory is criticized, new elements are included in its subject of research, and all these processes contribute to the continual change of the "protective belt".

Fig. 2.1

* Epistemology is the theory of knowledge.

The following three statements form the "hard core" of neoclassicism - no neoclassical model can be built without them.

The "hard core" of neoclassicism:

Equilibrium in the market always exists, it is unique and coincides with the Pareto optimum (Walras-Arrow-Debreu model 18);

Individuals make choices rationally (rational choice model);

Individuals' preferences are stable and exogenous, that is, they are not influenced by external factors.

The "protective shell" of neoclassicism also includes three elements.

The "protective shell" of neoclassicism:

Private ownership of resources is an absolute prerequisite for exchanging in the market;

There is no cost to obtain information, and individuals have all the information about the transaction;

The limits of economic exchange are determined on the basis of the principle of diminishing utility, taking into account the initial distribution of resources between the participants in the interaction 19. There are no costs in the implementation of the exchange, and the only type of costs that is considered in theory is production costs.

2.2. The institutionalism tree

We can now turn directly to the analysis of the directions of institutional analysis. Let's depict institutional theory in the form of a tree that grows from two roots - "old" institutionalism and neoclassicism (Fig. 2.2).

Let us start with the roots that feed the institutionalist tree. To what has already been said about neoclassical theory we add only two points. The first concerns the methodology of analysis, methodological individualism: institutions are explained through the interests and behavior of the individuals who use them to coordinate their actions. It is the individual who becomes the starting point in the analysis of institutions; for example, the characteristics of a state are derived from the interests and behavior of its citizens. A continuation of the principle of methodological individualism is the neoclassicists' particular view of the process by which institutions emerge, the concept of the spontaneous evolution of institutions. This concept proceeds from the assumption that institutions arise as a result of the actions of people, but not necessarily as a result of their desires, i.e. spontaneously. According to F. Hayek, analysis should be aimed at explaining "the unplanned results of the conscious activity of people" 20.

Fig. 2.2

Similarly, the "old" institutionalism uses the methodology holism, in which the starting point in the analysis is not individuals, but institutions. In other words, the characteristics of individuals are derived from the characteristics of institutions, and not vice versa. The institutions themselves are explained through the functions that they perform in the reproduction of the system of relations at the macro level 21. Now it is not citizens who "deserve" their government, but the government contributes to the formation of a certain type of citizens. Further, the concept of spontaneous evolution is opposed by the thesis institutional determinism: institutions are seen as the main obstacle to spontaneity of development, the "old" institutionalists see them as an important stabilizing factor. Institutions are "the result of processes that took place in the past, they are adapted to the circumstances of the past [and therefore are] a factor of social inertia, psychological inertia" 22. Thus, institutions set the "framework" for all subsequent development.

Methodological individualism - explanation of institutions through the need of individuals for the existence of a framework that structures their interactions in various spheres. Individuals are primary, institutions are secondary.

Holism - explanation of the behavior and interests of individuals through the characteristics of the institutions that predetermine their interactions. Institutions are primary, individuals are secondary.

2.3. "Old" institutionalism

To give a more complete picture of the "old" institutionalism, let us turn to the most prominent representatives of this scientific direction: K. Marx, T. Veblen, K. Polanyi and J.K. Galbraith 23. In Capital (1867) Marx used both the method of holism and the thesis of institutional determinism quite widely. His theory of the factory, as well as the theory of primitive accumulation of capital, is most illustrative from this point of view. In his analysis of the emergence of machine production, Marx draws attention to the influence that organizational forms have on the process of production and exchange. The system of relations between the capitalist and the wage worker is determined by the organizational form that the division of labor takes 24: natural division of labor -> cooperation -> manufacture and production of absolute surplus value -> the appearance of a partial worker -> the appearance of machines -> a factory -> production of relative surplus value.

Similarly, in the analysis of primitive accumulation one can see the institutional approach 25, or rather one of the variants of institutional determinism, legal determinism. It was with the adoption of a number of legislative acts - the acts of Henry VII and Henry VIII and of Charles I on the usurpation of communal and church lands, the laws against vagrancy, the laws against the raising of wages - that the wage-labor market and the capitalist system of hired employment began to take shape. The same idea is developed by Karl Polanyi, who argues that it was government intervention that underlay the formation of national (as opposed to local) markets for resources and labor: "The domestic market was created everywhere in Western Europe through government intervention"; its emergence was not the result of the natural evolution of local markets 26. This conclusion is especially interesting in connection with our own analysis, which showed the deep chasm separating the local market from the market with centralized pricing.

T. Veblen, in his Theory of the Leisure Class (1899), gives an example of applying the methodology of holism to the analysis of the role of habits. Habits are one of the institutions that set the framework for the behavior of individuals in the market, in the political sphere and in the family. Thus, Veblen derives the behavior of modern people from two very ancient habits, which he calls the instinct of competition (the desire to outdo others, to stand out from the general background) and the instinct of workmanship (a predisposition to conscientious and efficient work). The instinct of competition underlies, according to this author, property and competition in the market 28. The same instinct explains so-called "conspicuous consumption", when an individual is guided in his choice not by maximizing his own utility but by maximizing his prestige in the eyes of others. For example, the choice of a car is often subordinated to the following logic: the consumer pays attention not so much to the price and the technical specifications as to the prestige conferred by owning a particular make of car.

Finally, J.K. Galbraith and his theory of the technostructure, set out in The New Industrial State (1967) and Economics and the Public Purpose (1973), can also be attributed to the old institutionalism. As in our analysis of the limits of applicability of the neoclassical approach, Galbraith begins with the question of information and its distribution among the participants in exchange. His main thesis is that in the modern market no one possesses complete information; the completeness of information is achieved only by combining partial knowledge within an organization, or, as Galbraith calls it, a technostructure 29. "Power has passed from individuals to organizations possessing group identity" 30. Galbraith, too, relies on holism in explaining the behavior of individuals: the characteristics of individuals are viewed as a function of the institutional environment; consumer demand, for example, is derived from the growth interests of corporations, which actively use advertising to persuade consumers, rather than from consumers' exogenous preferences 31.

    Knowledge of causality is of great importance for scientific foresight and for influencing processes so as to change them in the desired direction. No less important is the problem of the relationship between chaos and order: it is key to explaining the mechanisms of self-organization processes. We will return to this question more than once in the following chapters. Let us try to understand how such fundamental categories as causality, necessity and chance coexist in the world around us, entering into the most diverse and bizarre combinations.

    The relationship of causality and randomness

    On the one hand, we intuitively understand that all the phenomena we encounter have their causes, which, however, do not always act unambiguously. Necessity is understood as an even higher level of determination, meaning that certain causes under certain conditions must produce certain consequences. On the other hand, both in everyday life and when trying to uncover some pattern, we are convinced of the objective existence of chance. How can these seemingly mutually exclusive processes be combined? Where is the place of chance if we assume that everything happens under the influence of definite causes? Although the problem of chance and probability has not yet found its philosophical solution, for simplicity we will understand by chance the impact of a large number of causes external to a given object. That is, when we speak of defining necessity as absolute determination, we must equally clearly understand that it is practically impossible to fix rigidly all the conditions under which particular processes take place. These conditions (causes) are external with respect to the given object, since the object is always part of a system that encompasses it, and this system is part of another, wider system, and so on; that is, there is a hierarchy of systems. Therefore, for each system there exists some external system (environment), part of whose impact on the internal (smaller) system cannot be predicted or measured. Any measurement requires an expenditure of energy, and in an attempt to measure all the causes (effects) with absolute accuracy these expenditures can become so great that, although we obtain complete information about the causes, the production of entropy will be so large that it will no longer be possible to do useful work.
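    A minimal Python sketch of this reading of chance: the outcome for the object is the sum of many small, independent external influences, and although each individual cause is in principle deterministic, the aggregate result shows a random, approximately Gaussian scatter. All the numbers are invented for the illustration.

```python
# Minimal sketch: "chance" as the combined effect of many small external causes.
# Each cause contributes a small independent push; the sum of many of them gives
# an outcome whose scatter looks random (roughly Gaussian). Numbers are invented.
import random

def outcome(n_causes):
    """Net displacement of the object produced by n small external causes."""
    return sum(random.uniform(-1.0, 1.0) for _ in range(n_causes))

random.seed(0)
trials = [outcome(1000) for _ in range(5000)]
mean = sum(trials) / len(trials)
var = sum((x - mean) ** 2 for x in trials) / len(trials)
print(f"mean ~ {mean:.2f}, variance ~ {var:.0f}")   # variance ~ 1000/3 ~ 333
# No single cause is "random", yet the combined result is practically
# unpredictable and can only be described statistically.
```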

    Measurement problem

    The problem of the measurement and observability of systems exists objectively and affects not only the level of cognition but, to a certain extent, the state of the system itself. Moreover, this is true even for thermodynamic macrosystems.

    Temperature measurement problem

    Relationship between temperature and thermodynamic equilibrium

    Let us dwell on the problem of measuring temperature, referring to the excellently written (in the pedagogical sense) book by Academician M.A. Leontovich. Let us begin with the definition of the concept of temperature, which in turn is closely related to the concept of thermodynamic equilibrium and, as M.A. Leontovich notes, has no meaning outside of it. Let us consider this question in more detail. By definition, in thermodynamic equilibrium all the internal parameters of a system are functions of the external parameters and of the temperature at which the system is found.

    Function of external parameters and system energy. Fluctuations

    On the other hand, it can be asserted that in thermodynamic equilibrium all the internal parameters of a system are functions of the external parameters and of the energy of the system. The internal parameters are themselves functions of the coordinates and velocities of the molecules. Naturally, we can estimate or measure not their individual values but only their averages over a sufficiently long period of time (assuming, for example, a normal Gaussian distribution of molecular velocities or energies). It is these averages that we take to be the values of the internal parameters at thermodynamic equilibrium. All the statements made refer to thermodynamic equilibrium; outside it they lose their meaning, since when the system deviates from equilibrium the laws of the energy distribution of the molecules become different. Deviations from these averages caused by thermal motion are called fluctuations. The theory of these phenomena near thermodynamic equilibrium is given by statistical thermodynamics. In thermodynamic equilibrium fluctuations are small and, in accordance with Boltzmann's order principle and the law of large numbers (see Chapter 4, Section 1), mutually compensate. Under strongly nonequilibrium conditions (see Chapter 4, Section 4) the situation changes radically.
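    A minimal Python sketch of why fluctuations are negligible near equilibrium for macroscopic bodies: the relative fluctuation of the mean kinetic energy of a subsystem falls roughly as 1/sqrt(N) as the number of particles N grows. Velocities are drawn from a Gaussian, as for a one-dimensional Maxwellian; all numbers are invented.

```python
# Minimal sketch: fluctuations of the mean kinetic energy of a subsystem
# decrease as the number of particles N grows (law of large numbers).
# Velocities are sampled from a Gaussian (1D Maxwellian); numbers are invented.
import random

def mean_energy(n, m=1.0, sigma_v=1.0):
    """Average kinetic energy per particle in one random microstate."""
    return sum(0.5 * m * random.gauss(0.0, sigma_v) ** 2 for _ in range(n)) / n

random.seed(1)
for n in (10, 1_000, 100_000):
    samples = [mean_energy(n) for _ in range(100)]
    avg = sum(samples) / len(samples)
    spread = (sum((x - avg) ** 2 for x in samples) / len(samples)) ** 0.5
    print(f"N = {n:6d}: <E> ~ {avg:.3f}, relative fluctuation ~ {spread / avg:.4f}")
# The relative fluctuation falls roughly as 1/sqrt(N): for macroscopic N the
# internal parameters are sharply defined averages, as assumed in the text.
```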

    Distribution of the energy of the system by its parts in a state of equilibrium

    Now we have come close to the definition of the concept of temperature, which is deduced from several propositions, drawn from experience, concerning the distribution of the energy of a system among its parts in a state of equilibrium. In addition to the definition of the state of thermodynamic equilibrium formulated above, the following properties of it are postulated: transitivity, the uniqueness of the distribution of energy over the parts of the system, and the fact that in thermodynamic equilibrium the energies of the parts of the system grow as its total energy increases.

    Transitivity

    Transitivity means the following. Suppose we have a system consisting of three parts (1, 2 and 3) in certain states, and we have made sure that the system consisting of parts 1 and 2 and the system consisting of parts 2 and 3 are each, taken separately, in a state of thermodynamic equilibrium. Then it can be asserted that the system consisting of parts 1 and 3 will also be in a state of thermodynamic equilibrium. In each of these cases it is assumed that there are no adiabatic partitions between the pair of parts (i.e., heat exchange between them is possible).

    Temperature concept

    The energy of each part of the system is an internal parameter of the entire system; therefore, in equilibrium, the energy of each part is a function of the external parameters relating to the entire system and of the energy of the entire system:

    E₁ = f₁(a, E),   E₂ = f₂(a, E).   (1.1)

    Solving these equations with respect to E, we obtain

    E = φ₁(a₁, E₁) = φ₂(a₂, E₂).   (1.2)

    Thus, for each system there is a certain function of its external parameters and its energy which, for all systems in equilibrium brought into contact with one another, has one and the same value.

    This function is called the temperature. Denoting the temperatures of systems 1 and 2 by t₁ and t₂, and setting

    t₁ = φ₁(a₁, E₁),   t₂ = φ₂(a₂, E₂),   (1.3)

    we emphasize once again that conditions (1.1) and (1.2) reduce to the requirement that the temperatures of the parts of the system be equal: t₁ = t₂.
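
    As a hedged concrete illustration (the monatomic ideal gas is my example, not taken from the text), the temperature function can be written out explicitly:

```latex
% Illustration (assumption: monatomic ideal gas of N molecules). In equilibrium
% its energy is E = (3/2) N k_B T, so the function of external parameters and
% energy that plays the role of temperature here does not in fact depend on V:
\[
    t = \varphi(V, E) = \frac{2E}{3 N k_B}.
\]
% For two such gases in thermal contact, condition (1.2) together with the
% definitions (1.3) reduces to the equality of temperatures:
\[
    \frac{2E_1}{3 N_1 k_B} = \frac{2E_2}{3 N_2 k_B},
    \qquad \text{i.e.} \qquad t_1 = t_2.
\]
```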

    The physical meaning of the concept "temperature"

    So far this definition of temperature allows us to establish only the equality of temperatures; it does not yet allow us to give a physical meaning to the statement that one temperature is higher and another lower. For this, the definition of temperature must be supplemented as follows.

    The body temperature increases with the growth of its energy under constant external conditions. This is equivalent to the statement that when a body receives heat with constant external parameters, its temperature increases.

    Such a refinement of the definition of temperature is possible only because experience also establishes the following properties of the equilibrium state of physical systems.

    In equilibrium, one perfectly definite distribution of the energy of the system among its parts is possible. With an increase in the total energy of the system (with constant external parameters), the energies of its parts grow.

    From the uniqueness of the energy distribution it follows that an equation of the type φ₁(a₁, E₁) = φ₂(a₂, E₂) gives one definite value of E₁ for a given E₂ (and given a₁, a₂), i.e. the equation has a single solution. It follows that φ₁ is a monotone function of E₁; the same conclusion applies to the corresponding function of any other system. Then, from the simultaneous growth of the energies of the parts of the system, it follows that the functions φ₁, φ₂, etc. are either all monotonically increasing or all monotonically decreasing functions of E₁, E₂, etc. Consequently, the temperature function can always be chosen so that the temperature increases as the energy grows.

    Selection of temperature scale and temperature meter

    After the above definition of temperature, the question comes down to the choice of a temperature scale and of a body that can serve as a temperature meter (primary sensor). It should be emphasized that this definition of temperature is valid for any thermometer (for example, a mercury or gas thermometer), and any body that forms part of the system whose temperature is to be measured can serve as a thermometer. The thermometer exchanges heat with this system, while the external parameters determining the state of the thermometer must be fixed. The value of some internal parameter of the thermometer is then measured at equilibrium of the entire system consisting of the thermometer and the environment whose temperature is to be measured. This internal parameter, by the above definition, is a function of the thermometer's energy (and of its external parameters, which are fixed and whose assignment is a matter of the thermometer's calibration). Thus, each measured value of the internal parameter of the thermometer corresponds to a certain energy of the thermometer and therefore, in view of relation (1.3), to a certain temperature of the entire system.

    Naturally, each thermometer has its own temperature scale. For example, in a gas thermometer the external parameter (the volume of the sensor) is fixed, and the measured internal parameter is the pressure. The described measuring principle applies only to thermometers that do not rely on irreversible processes. Temperature measuring instruments such as thermocouples and resistance thermometers are based on more complex methods, which are connected (and this is very important) with the heat exchange of the sensor with the environment (the hot and cold junctions of thermocouples).
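
    A minimal sketch of the measurement chain just described, under the assumption of an ideal-gas sensor at fixed volume (all numerical values are hypothetical): each pressure reading of the thermometer corresponds to one energy of the gas and hence, by relation (1.3), to one temperature of the system.

```python
# Constant-volume gas thermometer sketch (ideal-gas assumption, illustrative values).
R = 8.314         # J/(mol*K), universal gas constant
n_moles = 0.004   # amount of gas sealed in the sensor (set at calibration)
volume = 1.0e-4   # m^3, the fixed external parameter of the thermometer

def temperature_from_pressure(pressure_pa: float) -> float:
    """Measured internal parameter (pressure) -> temperature, T = pV/(nR)."""
    return pressure_pa * volume / (n_moles * R)

for pressure in (80_000.0, 101_325.0, 120_000.0):   # hypothetical readings, Pa
    print(f"p = {pressure:>9.0f} Pa  ->  T = {temperature_from_pressure(pressure):6.1f} K")
```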

    Here we have a vivid example of how the introduction of a measuring device into an object (a system) changes the object itself to some extent. At the same time, the desire to improve measurement accuracy leads to an increase in the energy spent on measurement and to an increase in the entropy of the environment. At a given level of technological development, this circumstance can in a number of cases serve as an objective boundary between deterministic and stochastic methods of description. This is manifested even more clearly, for example, when measuring flow by the throttling method. The contradiction between the striving for a deeper level of knowledge of matter and the existing methods of measurement shows itself more and more clearly in elementary particle physics, where, by the physicists' own admission, ever more cumbersome measuring instruments are used to penetrate the microworld. For example, to detect neutrinos and some other elementary particles, huge "barrels" filled with special high-density substances are placed in deep caves in the mountains, and so on.
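
    The perturbation introduced by the instrument can be made quantitative with an elementary heat-balance estimate (my sketch, not the author's; all values are hypothetical): a thermometer of non-negligible heat capacity shifts the temperature of the very system it is meant to measure.

```python
# Illustrative estimate: a thermometer of heat capacity c_thermo, initially at
# room temperature, is brought into contact with a small sample. The reading is
# the common final temperature, not the sample's undisturbed temperature.
def final_temperature(c_sample, t_sample, c_thermo, t_thermo):
    return (c_sample * t_sample + c_thermo * t_thermo) / (c_sample + c_thermo)

c_sample, t_sample = 4.0, 360.0      # J/K and K, hypothetical small sample
for c_thermo in (0.01, 0.1, 1.0):    # J/K, progressively "heavier" thermometers
    reading = final_temperature(c_sample, t_sample, c_thermo, t_thermo=293.0)
    print(f"C_thermometer = {c_thermo:4.2f} J/K -> reading {reading:6.2f} K "
          f"(error {t_sample - reading:5.2f} K)")
```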

    The limits of applicability of the concept of temperature

    To conclude the discussion of the measurement problem, let us return to the question of the limits of applicability of the concept of temperature that follow from the above definition of it, in which it was emphasized that the energy of a system is the sum of the energies of its parts. Therefore, we can speak of a certain temperature of parts of the system (including the thermometer) only when the energies of these parts add additively. The entire argument that led to the introduction of the concept of temperature refers to thermodynamic equilibrium. For systems close to equilibrium, temperature can be considered only as an approximate concept. For systems in states far from equilibrium, the concept of temperature loses its meaning altogether.

    Non-contact temperature measurement

    And finally, a few words about temperature measurement by non-contact methods, for example, by total-radiation, infrared and color pyrometers. At first glance it seems that in this case it is finally possible to overcome the main paradox of the methodology of cognition, associated with the influence of the measuring instrument on the measured object and with the growth of the entropy of the environment due to the measurement. In fact, there is only a slight shift in the level of cognition and in the entropy level, while the fundamental formulation of the problem remains.

    Firstly, pyrometers of this type can measure only the temperature of a body's surface, or rather not even the temperature, but the heat flux emitted by the surface of the body.

    Secondly, to ensure the functioning of the sensors of these devices, a power supply (and nowadays also a connection to a computer) is required, and the sensors themselves are quite complex and energy-intensive to manufacture.

    Thirdly, if we set the problem of estimating, from these same measurements, the temperature field inside the body, then we will need a mathematical model with distributed parameters that connects the measured temperature distribution over the surface with the spatial distribution of temperatures inside the body. But to identify this model and check its adequacy, we again need an experiment involving the direct measurement of temperatures inside the body (for example, drilling a heated workpiece and pressing in thermocouples). In this case the result, as follows from the above rather rigorous formulation of the concept of temperature, will be valid only when the object reaches a stationary state. In all other cases, the obtained temperature estimates should be treated as approximations of varying quality, and methods for estimating the degree of approximation must be available.
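
    What such a distributed-parameter model might look like in the simplest one-dimensional case is sketched below (an explicit finite-difference scheme; the geometry, material constants and boundary temperatures are all hypothetical). The point is that the interior temperatures are not measured but computed from the surface values through the model.

```python
import numpy as np

# Minimal 1-D heat-conduction model: the "measured" surface temperatures serve
# as boundary conditions, and the interior profile is only an estimate obtained
# through the model, not a directly measured quantity.
alpha = 1.0e-5                    # m^2/s, thermal diffusivity (hypothetical)
length, n_nodes = 0.1, 51         # m, slab thickness and grid size
dx = length / (n_nodes - 1)
dt = 0.4 * dx**2 / alpha          # time step within the explicit-scheme stability limit
temp = np.full(n_nodes, 300.0)    # initial interior guess, K
t_left, t_right = 900.0, 400.0    # "measured" surface temperatures, K

for _ in range(20_000):           # march towards a (quasi-)stationary state
    temp[0], temp[-1] = t_left, t_right
    temp[1:-1] += alpha * dt / dx**2 * (temp[2:] - 2 * temp[1:-1] + temp[:-2])

print("estimated mid-plane temperature:", round(float(temp[n_nodes // 2]), 1), "K")
```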

    Thus, in the case of using non-contact methods for measuring temperature, we ultimately come to the same problem, at best with a lower entropy level. As for metallurgical and many other technological objects, the level of their observability (transparency) is rather low.

    For example, by placing a large number of thermocouples over the entire surface of the masonry of a heating furnace, we will obtain ample information about the heat losses, but we will no longer be able to heat the metal (Figure 1.6).

    Fig. 1.6. Energy loss during temperature measurement

    The heat removal along the thermocouple thermoelectrodes can be so great that the temperature difference and the heat flux through the masonry exceed the useful heat flux from the torch to the metal. Thus, most of the energy will be spent on heating the environment, that is, on increasing chaos in the universe.
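
    Whether the losses really reach such a magnitude depends entirely on the installation, but the order of magnitude is easy to estimate with Fourier's law (a rough sketch; every number below is a hypothetical assumption):

```python
import math

# Rough Fourier-law estimate of the heat conducted away along thermocouple wires
# embedded in the furnace masonry. All values are illustrative assumptions.
k_wire = 20.0           # W/(m*K), effective conductivity of the thermoelectrodes
d_wire = 3.0e-3         # m, wire diameter
wire_length = 0.3       # m, from the hot face to the cold junction
delta_t = 1000.0        # K, temperature drop along the wire
n_thermocouples = 200   # number of installed sensors (two electrodes each)

area = math.pi * d_wire**2 / 4
q_per_wire = k_wire * area * delta_t / wire_length      # W, Fourier's law
q_total = 2 * n_thermocouples * q_per_wire              # two wires per sensor

print(f"leak per thermocouple ~ {2 * q_per_wire:.2f} W, "
      f"total for {n_thermocouples} sensors ~ {q_total:.0f} W")
```

    The leak grows linearly with the number of sensors and with the wire cross-section, which is exactly the trade-off the figure illustrates.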

    An equally clear example of the same kind is the measurement of the flow rate of liquids and gases by the pressure drop across a throttling device, where the desire to improve measurement accuracy leads to the need to reduce the cross-section of the throttling device. In this case, a significant part of the kinetic energy intended for useful work is spent on friction and eddies (Figure 1.7).

    Fig. 1.7. Energy loss during flow measurement

    By striving for too precise a measurement, we transfer a significant amount of energy into chaos. We believe that these examples are sufficiently convincing evidence in favor of the objective nature of randomness.
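
    The flow-measurement trade-off can be sketched with the standard orifice-plate relation (the discharge coefficient, fluid and geometry below are illustrative assumptions, not data from the text): a smaller throttle opening produces a larger, easier-to-measure pressure drop, but the unrecovered part of that drop is exactly the energy lost to friction and eddies.

```python
import math

rho = 1000.0    # kg/m^3, water (assumed fluid)
q_flow = 0.01   # m^3/s, true volumetric flow rate (assumed)
cd = 0.61       # discharge coefficient, a typical assumed value

for d_orifice in (0.05, 0.03, 0.02):     # m, progressively smaller throttle bores
    area = math.pi * d_orifice**2 / 4
    # Invert Q = cd * A * sqrt(2 * dp / rho) to get the measured pressure drop.
    dp = rho / 2 * (q_flow / (cd * area)) ** 2
    print(f"d = {d_orifice * 1000:4.0f} mm -> pressure drop ~ {dp / 1000:8.1f} kPa")
```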

    Objective and subjective randomness

    If we recognize the objective nature of causality and necessity, and at the same time the objective nature of randomness, then the latter can apparently be interpreted as the result of the intersection (combination) of a large number of necessary connections that are external to the given process.

    While not forgetting the relative nature of randomness, it is very important to distinguish between truly objective randomness and "subjective randomness", that is, randomness caused by a lack of knowledge about the object or process under study and eliminated relatively easily, with quite reasonable expenditures of time and money.

    Although it is impossible to draw a sharp line between objective and subjective randomness, such a distinction is still fundamentally necessary, especially in connection with the "black box" approach that has spread in recent years, in which, according to Ashby, instead of studying each individual cause in connection with its individual effect, which is a classic element of scientific knowledge, all causes and all effects are mixed into a common mass and only the two end results are related. The details of the causal pairing are lost in this process.
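
    Ashby's point can be illustrated with a toy sketch (mine, not from the source): two internally different causal mechanisms can be indistinguishable if only the aggregate input-output relation is recorded.

```python
# Two different internal cause-effect chains with identical input-output tables.
# A "black box" observer who records only inputs and outputs cannot tell them
# apart; the detail of the causal pairing is lost.
def mechanism_a(x: int) -> int:
    intermediate = x + 3      # chain A: first add, then double
    return 2 * intermediate

def mechanism_b(x: int) -> int:
    intermediate = 2 * x      # chain B: first double, then add
    return intermediate + 6

assert all(mechanism_a(x) == mechanism_b(x) for x in range(-5, 6))
print("identical input-output behaviour, different internal causal structure")
```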

    For all its apparent universality, this approach is limited unless it is combined with cause-and-effect analysis.

    However, since a number of probabilistic methods based on this approach have now been developed, many researchers prefer to use them, hoping to reach the set goal faster than with a consistent, analytical, causal approach.

    The use of a purely probabilistic approach without sufficient comprehension of the results obtained, without taking into account the physics of the processes and the internal content of the objects, leads some researchers, willingly or not, to the position of absolutizing randomness: all phenomena are then considered random, even those whose causal relationships could be disclosed with a relatively small investment of time and money.

    The objective nature of randomness manifests itself, of course, in the sense that cognition always proceeds from phenomenon to essence, from the external side of things to deep regular connections, and the essence is inexhaustible. This inexhaustible essence determines the level of objective randomness, which, of course, is relative to certain specific conditions.

    Randomness is objective also because a full disclosure of cause-and-effect relationships is impossible, if only because information about the causes is needed to disclose them, that is, measurement is necessary; and, as L. Brillouin asserts, measurement errors cannot be made "infinitely small": they always remain finite, since the energy expenditure required to reduce them grows, accompanied by an increase in entropy.

    In this regard, objective randomness should be understood only as that level of interweaving of cause-and-effect relationships whose disclosure, at a given level of knowledge about the process and of the development of technology, is accompanied by an exorbitant expenditure of energy and becomes economically inexpedient.
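
    The energy side of this statement can be given a lower bound with the Brillouin-Landauer relation of at least k_B T ln 2 of dissipation per bit of acquired information (the numerical illustration is mine; real instruments dissipate vastly more than this floor):

```python
import math

# Thermodynamic floor on the cost of information: acquiring (or erasing) one bit
# at temperature T dissipates at least k_B * T * ln(2) of energy.
k_boltzmann = 1.380649e-23   # J/K
t_ambient = 300.0            # K, assumed ambient temperature

for bits in (1, 1e6, 1e12):  # increasingly detailed measurement campaigns
    e_min = bits * k_boltzmann * t_ambient * math.log(2)
    print(f"{bits:>12g} bits -> at least {e_min:.3e} J dissipated")
```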

    For the successful construction of meaningful models, an optimal combination of macro- and micro-approaches is necessary, i.e., functional methods and methods of disclosing the inner content.

    In the functional approach, one abstracts from the specific mechanism of implementation of the internal causal relationships and considers only the behavior of the system, i.e., its reaction to disturbances of one kind or another.

    However, the functional approach, and especially its simplified version, the "black box" method, is not universal and is almost always combined with other methods.

    The functional approach can be viewed as the first step in the cognitive process. At the first consideration of a system, the macro-approach is usually applied; then one moves to the micro-level, examining the "bricks" from which the system is built: penetrating into its internal structure, breaking the complex system down into simpler, elementary subsystems, and identifying their functions and their interaction with each other and with the system as a whole.

    The functional approach does not preclude a causal approach. On the contrary, it is with the right combination of these methods that the greatest effect is obtained.

    Instructional films, television and video recording have a lot in common. These means make it possible to show the phenomenon in dynamics, which, in principle, is inaccessible to static screen means. This feature is highlighted by all researchers in the field of technical teaching aids.

    Motion in cinematography cannot be reduced merely to the mechanical movement of objects across the screen. Thus, in many films on art and architecture the dynamics is built up of separate static images: it is not the object itself that changes, but the position of the camera or the scale, or one image is superimposed on another (for example, a photograph of the object on its diagram). Using the specific capabilities of cinema, many films show manuscripts "come to life", in which lines of text appear from an invisible (or visible) pen. Thus, the dynamics in cinema is also the dynamics of cognition, of thought, of logical constructions.

    Such properties of these teaching aids as slowing down and accelerating the passage of time, transforming space, and making invisible objects visible are of great importance. The special language of cinematography, which is "spoken" not only by films shot on film but also by messages created and transmitted by television or "canned" on videotape, determines the situations in a lesson when the use of cinema (understood in a broad sense) turns out to be didactically justified. Thus, N.M. Shakhmaev singles out 11 such cases, while pointing out that this is not an exhaustive list.

    1. When studying objects and processes observed with optical and electron microscopes, which are not currently available to schools. In this case, film materials shot in special laboratories and provided with qualified commentary by a teacher or narrator possess scientific reliability and can be shown to the whole class.

    2. When studying fundamentally invisible objects, such as elementary particles and the fields surrounding them. Using animation, one can show a model of the object and even of its structure. The pedagogical value of such model representations is enormous, because they create in the minds of students definite images of objects and of the mechanisms of complex phenomena, which facilitates the understanding of the educational material.

    3. When studying objects and phenomena that, owing to their specific nature, cannot be seen by all students in the class at the same time. Using special optics and choosing the most advantageous shooting points, such objects can be filmed in close-up, selected by cinematic means and explained.

    4. When studying fast or slow processes. Accelerated or slowed-down shooting, combined with normal projection speed, transforms the passage of time and makes these processes observable.

    5. When studying processes occurring in places inaccessible to direct observation (the mouth of a volcano; the underwater world of rivers, seas and oceans; radiation zones; cosmic bodies, etc.). In this case, only cinema and television can provide the teacher with the necessary scientific documentation, which serves as a teaching aid.

    6. When studying objects and phenomena observed in those regions of the electromagnetic spectrum that are not directly perceived by the human eye (ultraviolet, infrared and X-rays). Shooting through narrow-passband filters on special types of film, as well as filming fluorescent screens, makes it possible to transform an invisible image into a visible one.

    7. When explaining fundamental experiments whose staging in the conditions of the educational process is difficult because of the complexity or bulkiness of the installations, the high cost of the equipment, the duration of the experiment, etc. Filming such experiments makes it possible not only to demonstrate their progress and results, but also to provide the necessary explanations. It is also significant that the experiments can be shown from the most advantageous point and the most advantageous angle, which cannot be achieved without cinema.

    8. When explaining the structure of complex objects (the internal organs of man, the design of machines and mechanisms, the structure of molecules, etc.). In this case, using animation and gradually filling in and transforming the image, one can go from the simplest scheme to a specific constructive solution.

    9. When studying the work of writers and poets. Cinema makes it possible not only to reproduce the characteristic features of the era in which the artist lived and worked, but also to show his creative path, the process of the birth of a poetic image, his manner of working, and the connection of his creativity with the historical era.

    10. When studying historical events. Films based on newsreel material, in addition to their scientific significance, have a tremendous power of emotional impact on students, which is extremely important for a deep understanding of historical events. In special feature films, thanks to the specific capabilities of cinema, it is possible to recreate historical episodes dating back to times long past. Historically accurate reproduction of objects of material culture, of the characters of historical personalities, of the economy and of the peculiarities of everyday life helps to create in students a realistic idea of the events they learn about from textbooks and from the teacher's story. History takes on tangible forms and becomes a vivid, emotionally colored fact that enters into the intellectual structure of the student's thought.

    11. To solve a large complex of educational tasks.

    Defining the limits of applicability of film, television and video recording is fraught with the danger of mistakes. The mistake of unjustifiably expanding the use of these teaching tools in the educational process can be illustrated by the words of one of the characters in the film "Moscow Does Not Believe in Tears": "Soon there will be nothing. There will be nothing but television." Life has shown that books, theater and cinema have survived. And most important of all is the direct informational contact between the teacher and the students.

    On the other hand, the opposite error of unjustifiably narrowing the didactic functions of screen-and-sound teaching aids is also possible. This happens when a film or video, or a television broadcast, is considered only as a kind of visual aid capable of presenting the studied material dynamically. This is certainly true. But there is another aspect as well: in didactic materials presented to students with the help of a film projector, a video recorder or a television set, specific teaching tasks are solved not only by the means of technology, but also by the expressive means inherent in a particular type of art. Therefore, a screen teaching aid acquires clearly visible features of a work of art, even if it was created for an academic subject belonging to the natural-mathematical cycle.

    It should be remembered that neither film, nor video recording, nor television can by themselves create stable and lasting motivation for learning, nor can they replace other means of visualization. An experiment with hydrogen carried out directly in the classroom (the explosion of oxyhydrogen gas in a metal can) is many times more vivid than the same experiment shown on the screen.

    Control questions:

    1. Who was the first to demonstrate moving drawn pictures on the screen at the same time to many viewers?

    2. How was T. Edison's kinetoscope arranged?

    4. Describe the structure of black and white film.

    5. What types of photography are used in the production of motion pictures?

    6. What features characterize educational films and videos?

    7. List the requirements for the training film.

    8. Into what types can films be divided?

    9. What is the shutter for?

    10. What types of phonograms are used in the production of motion pictures?