In fluid dynamics, drag, sometimes referred to as fluid resistance, is a force acting opposite to the relative motion of any object moving with respect to a surrounding fluid.[1] This can exist between two fluid layers, two solid surfaces, or between a fluid and a solid surface. Drag forces tend to decrease fluid velocity relative to the solid object in the fluid's path.
Unlike other resistive forces, drag force depends on velocity.[2][3] This is because drag force is proportional to the velocity for low-speed flow and the velocity squared for high-speed flow. This distinction between low and high-speed flow is measured by the Reynolds number.
Net aerodynamic or hydrodynamic force: Drag acting opposite to the direction of movement of a solid object such as cars, aircraft,[3] and boat hulls.
Viscous drag of fluid in a pipe: Drag force on the immobile pipe decreases fluid velocity relative to the pipe.[4][5]
In the physics of sports, drag force is necessary to explain the motion of balls, javelins, arrows, and frisbees and the performance of runners and swimmers.[6] For a top sprinter, overcoming drag can require 5% of their energy output.[7]
skin friction drag or viscous drag due to the friction between the fluid and a surface which may be the outside of an object, or inside such as the bore of a pipe
The effect of streamlining on the relative proportions of skin friction and form drag is shown for two different body sections: An airfoil, which is a streamlined body, and a cylinder, which is a bluff body. Also shown is a flat plate illustrating the effect that orientation has on the relative proportions of skin friction, and pressure difference between front and back.
A body is known as bluff or blunt when the source of drag is dominated by pressure forces, and streamlined if the drag is dominated by viscous forces. For example, road vehicles are bluff bodies.[8] For aircraft, pressure and friction drag are included in the definition of parasitic drag. Parasite drag is often expressed in terms of a hypothetical "equivalent parasite drag area".
This is the area of a flat plate perpendicular to the flow. It is used when comparing the drag of different aircraft. For example, the Douglas DC-3 has an equivalent parasite area of 2.20 m2 (23.7 sq ft) and the McDonnell Douglas DC-9, with 30 years of advancement in aircraft design, an area of 1.91 m2 (20.6 sq ft), although it carried five times as many passengers.[9]
wave drag (aerodynamics) is caused by the presence of shockwaves and first appears at subsonic aircraft speeds when local flow velocities become supersonic. The wave drag of the supersonic Concorde prototype aircraft was reduced at Mach 2 by 1.8% by applying the area rule which extended the rear fuselage 3.73 m (12.2 ft) on the production aircraft.[10]
Lift-induced drag (also called induced drag) is drag which occurs as the result of the creation of lift on a three-dimensional lifting body, such as the wing or propeller of an airplane. Induced drag consists primarily of two components: drag due to the creation of trailing vortices (vortex drag); and the presence of additional viscous drag (lift-induced viscous drag) that is not present when lift is zero. The trailing vortices in the flow-field, present in the wake of a lifting body, derive from the turbulent mixing of air from above and below the body which flows in slightly different directions as a consequence of creation of lift.
With other parameters remaining the same, as the lift generated by a body increases, so does the lift-induced drag. This means that as the wing's angle of attack increases (up to a maximum called the stalling angle), the lift coefficient also increases, and so too does the lift-induced drag. At the onset of stall, lift is abruptly decreased, as is lift-induced drag, but viscous pressure drag, a component of parasite drag, increases due to the formation of turbulent unattached flow in the wake behind the body.
Parasitic drag, or profile drag, is drag caused by moving a solid object through a fluid. Parasitic drag is made up of multiple components including viscous pressure drag (form drag), and drag due to surface roughness (skin friction drag). Additionally, the presence of multiple bodies in relative proximity may incur so called interference drag, which is sometimes described as a component of parasitic drag.
In aviation, induced drag tends to be greater at lower speeds because a high angle of attack is required to maintain lift, creating more drag. However, as speed increases, the angle of attack can be reduced and the induced drag decreases. Parasitic drag, however, increases because the fluid is flowing more quickly around protruding objects, increasing friction or drag. At even higher speeds (transonic), wave drag enters the picture. Each of these forms of drag changes in proportion to the others based on speed. The combined overall drag curve therefore shows a minimum at some airspeed; an aircraft flying at this speed will be at or close to its optimal efficiency. Pilots will use this speed to maximize endurance (minimum fuel consumption), or to maximize gliding range in the event of an engine failure.
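The speed dependence described above can be sketched numerically: with induced drag modeled as falling with the square of airspeed and parasitic drag as growing with it, the total drag curve has a minimum. The coefficients below are invented for illustration and do not correspond to any particular aircraft.

```python
# Hypothetical drag model: D_total(v) = A / v**2 (induced) + B * v**2 (parasitic).
A = 2.0e6   # induced-drag coefficient (illustrative units)
B = 0.5     # parasitic-drag coefficient (illustrative units)

def total_drag(v):
    """Total drag at airspeed v: the induced term falls, the parasitic term grows."""
    return A / v**2 + B * v**2

# Minimizing A/v**2 + B*v**2 analytically gives v_min = (A/B) ** 0.25.
v_min = (A / B) ** 0.25

# The analytic minimum beats every airspeed on a coarse sweep.
assert all(total_drag(v_min) <= total_drag(v) for v in range(10, 200, 5))
```

At v_min the two components are equal, which is why flying either slower (more induced drag) or faster (more parasitic drag) costs extra power.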
Drag depends on the properties of the fluid and on the size, shape, and speed of the object. One way to express this is by means of the drag equation:

F_D = ½ ρ u² C_D A

where
F_D is the drag force,
ρ is the density of the fluid,
u is the flow speed of the object relative to the fluid,
C_D is the drag coefficient, and
A is the reference area.
The drag coefficient C_D depends on the shape of the object and on the Reynolds number

Re = uL/ν,

where
L is some characteristic diameter or linear dimension. Actually, L is the equivalent diameter D_e of the object. For a sphere, L is the diameter D of the sphere itself.
For a rectangular cross-section in the motion direction, L = √(ab), where a and b are the rectangle edges.
ν is the kinematic viscosity of the fluid (equal to the dynamic viscosity μ divided by the density ρ).
At low Re, C_D is asymptotically proportional to Re⁻¹, which means that the drag is linearly proportional to the speed, i.e. the drag force on a small sphere moving through a viscous fluid is given by Stokes' law:

F_D = 6π μ R u,

where R is the radius of the sphere.
At high Re, C_D is more or less constant, but drag will vary as the square of the speed. The graph to the right shows how C_D varies with Re for the case of a sphere. Since the power needed to overcome the drag force is the product of the force times speed, the power needed to overcome drag will vary as the square of the speed at low Reynolds numbers, and as the cube of the speed at high numbers.
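The two regimes can be compared in a short sketch. Stokes' law and the quadratic drag equation are the standard results described above; the specific sizes, speeds, and the sphere drag coefficient of roughly 0.47 are illustrative values.

```python
import math

def stokes_drag(mu, R, u):
    """Stokes' law: drag on a small sphere in creeping flow (Re << 1)."""
    return 6 * math.pi * mu * R * u

def quadratic_drag(rho, Cd, A, u):
    """Drag equation: high Reynolds number, roughly constant drag coefficient."""
    return 0.5 * rho * Cd * A * u**2

mu_air, rho_air = 1.8e-5, 1.2       # air viscosity and density (approximate)

# Doubling the speed doubles Stokes drag (linear regime)...
F1 = stokes_drag(mu_air, R=1e-5, u=0.01)
F2 = stokes_drag(mu_air, R=1e-5, u=0.02)
assert math.isclose(F2, 2 * F1)

# ...but quadruples quadratic drag (high-Re regime, Cd ≈ 0.47 for a smooth sphere).
A = math.pi * 0.11**2               # frontal area of a 0.11 m radius sphere
G1 = quadratic_drag(rho_air, 0.47, A, 20.0)
G2 = quadratic_drag(rho_air, 0.47, A, 40.0)
assert math.isclose(G2, 4 * G1)
```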
It can be demonstrated that drag force can be expressed as a function of a dimensionless number, which is dimensionally identical to the Bejan number.[13] Consequently, drag force and drag coefficient can be a function of Bejan number. In fact, from the expression of drag force it has been obtained:

F_D = Δp A_w = (1/2) C_D A_f (ρ ν² / L²) Re_L²

and consequently allows expressing the drag coefficient C_D as a function of Bejan number and the ratio between wet area A_w and front area A_f:[13]

C_D = 2 (A_w / A_f) (Be / Re_L²)

where Re_L is the Reynolds number related to the fluid path length L, and Be = Δp L² / (μν) is the Bejan number.
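As a sanity check, the Bejan-number form of the drag coefficient can be verified numerically against the standard drag equation. All numbers below are arbitrary illustrative values; only the algebraic consistency of the two expressions is being exercised.

```python
rho, mu = 1.2, 1.8e-5          # fluid density and dynamic viscosity (illustrative)
nu = mu / rho                  # kinematic viscosity
L, u = 0.1, 2.0                # fluid path length and flow speed (illustrative)
Aw, Af = 0.05, 0.01            # wet area and front area (illustrative)
dp = 15.0                      # pressure drop along the path (illustrative)

Re_L = u * L / nu              # Reynolds number on the path length L
Be = dp * L**2 / (mu * nu)     # Bejan number (pressure-drop form)

F = dp * Aw                              # drag force: pressure difference times wet area
Cd_drag_eq = 2 * F / (rho * u**2 * Af)   # Cd from the standard drag equation
Cd_bejan = 2 * (Aw / Af) * Be / Re_L**2  # Cd from the Bejan-number relation

# The two routes agree, because Be/Re_L**2 = dp / (rho * u**2).
assert abs(Cd_drag_eq - Cd_bejan) < 1e-9 * Cd_drag_eq
```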
As mentioned, the drag equation with a constant drag coefficient gives the force experienced by an object moving through a fluid at relatively large velocity, i.e. high Reynolds number, Re > ~1000. This is also called quadratic drag.
The reference area A is often the orthographic projection of the object, or the frontal area, on a plane perpendicular to the direction of motion. For objects with a simple shape, such as a sphere, this is the cross sectional area. Sometimes a body is a composite of different parts, each with a different reference area (drag coefficient corresponding to each of those different areas must be determined).
In the case of a wing, the reference areas are the same, and the drag force is in the same ratio as the lift force.[14] Therefore, the reference for a wing is often the lifting area, sometimes referred to as "wing area" rather than the frontal area.[15]
For an object with a smooth surface, and non-fixed separation points (like a sphere or circular cylinder), the drag coefficient may vary with Reynolds number Re, up to extremely high values (Re of the order 107).[16][17]
For an object with well-defined fixed separation points, like a circular disk with its plane normal to the flow direction, the drag coefficient is constant for Re > 3,500.[17]
Furthermore, the drag coefficient Cd is, in general, a function of the orientation of the flow with respect to the object (apart from symmetrical objects like a sphere).
Under the assumption that the fluid is not moving relative to the currently used reference system, the power required to overcome the aerodynamic drag is given by:

P_d = F_d · v = ½ ρ v³ A C_d
The power needed to push an object through a fluid increases as the cube of the velocity. For example, a car cruising on a highway at 50 mph (80 km/h) may require only 10 horsepower (7.5 kW) to overcome aerodynamic drag, but that same car at 100 mph (160 km/h) requires 80 hp (60 kW).[18] With a doubling of speed, the drag force quadruples per the formula. Exerting 4 times the force over a fixed distance produces 4 times as much work. At twice the speed, the work (resulting in displacement over a fixed distance) is done twice as fast. Since power is the rate of doing work, 4 times the work done in half the time requires 8 times the power.
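The arithmetic in this paragraph can be checked directly: since P = F · v = ½ρv³CdA for quadratic drag, doubling the speed multiplies the power by eight. The drag parameters below are rough car-like values, not figures from the cited source.

```python
rho, Cd, A = 1.2, 0.3, 2.2     # air density, drag coefficient, frontal area (illustrative)

def drag_power(v):
    """Power needed to overcome quadratic aerodynamic drag at speed v (m/s)."""
    return 0.5 * rho * Cd * A * v**3

p_50mph = drag_power(22.35)    # ~50 mph
p_100mph = drag_power(44.70)   # ~100 mph

# Cube law: doubling the speed takes 2**3 = 8 times the power.
assert abs(p_100mph / p_50mph - 8.0) < 1e-6
```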
When the fluid is moving relative to the reference system, for example, a car driving into a headwind, the power required to overcome the aerodynamic drag is given by the following formula:

P_d = F_d · v_o = ½ C_d A ρ (v_w + v_o)² v_o

where v_w is the wind speed and v_o is the object's speed (both relative to ground).
In Latin, the word calculus means “small pebble”, (the diminutive of calx, meaning "stone"), a meaning which still persists in medicine. Because such pebbles were used for counting out distances,[7] tallying votes, and doing abacus arithmetic, the word came to be the Latin word for calculation. In this sense, it was used in English at least as early as 1672, several years before the publications of Leibniz and Newton, who wrote their mathematical texts in Latin.[8]
Modern calculus was developed in 17th-century Europe by Isaac Newton and Gottfried Wilhelm Leibniz (independently of each other, first publishing around the same time) but elements of it first appeared in ancient Egypt and later Greece, then in China and the Middle East, and still later again in medieval Europe and India.
Ancient precursors
Egypt
Calculations of volume and area, one goal of integral calculus, can be found in the Egyptian Moscow papyrus (c. 1820 BC), but the formulae are simple instructions, with no indication as to how they were obtained.[9][10]
Laying the foundations for integral calculus and foreshadowing the concept of the limit, ancient Greek mathematician Eudoxus of Cnidus (c. 390–337 BC) developed the method of exhaustion to prove the formulas for cone and pyramid volumes.
The method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD to find the area of a circle.[12][13] In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method[14][15] that would later be called Cavalieri's principle to find the volume of a sphere.
Medieval
Ibn al-Haytham, 11th-century Arab mathematician and physicist
In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965 – c. 1040 AD), derived a formula for the sum of fourth powers. He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid.[16]
India
Bhāskara II (c. 1114–1185) was acquainted with some ideas of differential calculus and suggested that the "differential coefficient" vanishes at an extremum value of the function.[17] In his astronomical work, he gave a procedure that looked like a precursor to infinitesimal methods. Namely, if x ≈ y then sin(y) − sin(x) ≈ (y − x) cos(y). This can be interpreted as the discovery that cosine is the derivative of sine.[18] In the 14th century, Indian mathematicians gave a non-rigorous method, resembling differentiation, applicable to some trigonometric functions. Madhava of Sangamagrama and the Kerala School of Astronomy and Mathematics stated components of calculus, but according to Victor J. Katz they were not able to "combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, and turn calculus into the great problem-solving tool we have today".[16]
Modern
Johannes Kepler's work Stereometria Doliorum (1615) formed the basis of integral calculus.[19] Kepler developed a method to calculate the area of an ellipse by adding up the lengths of many radii drawn from a focus of the ellipse.[20]
A significant work was a treatise, its origin being Kepler's methods,[20] written by Bonaventura Cavalieri, who argued that volumes and areas should be computed as the sums of the volumes and areas of infinitesimally thin cross-sections. The ideas were similar to Archimedes' in The Method, but that treatise is believed to have been lost in the 13th century and was only rediscovered in the early 20th century, and so would have been unknown to Cavalieri. Cavalieri's work was not well respected since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first.
The product rule and chain rule,[24] the notions of higher derivatives and Taylor series,[25] and of analytic functions[26] were used by Isaac Newton in an idiosyncratic notation which he applied to solve problems of mathematical physics. In his works, Newton rephrased his ideas to suit the mathematical idiom of the time, replacing calculations with infinitesimals by equivalent geometrical arguments which were considered beyond reproach. He used the methods of calculus to solve the problem of planetary motion, the shape of the surface of a rotating fluid, the oblateness of the earth, the motion of a weight sliding on a cycloid, and many other problems discussed in his Principia Mathematica (1687). In other work, he developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, and at this time infinitesimal methods were still considered disreputable.[27]
These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who was originally accused of plagiarism by Newton.[28] He is now regarded as an independent inventor of and contributor to calculus. His contribution was to provide a clear set of rules for working with infinitesimal quantities, allowing the computation of second and higher derivatives, and providing the product rule and chain rule, in their differential and integral forms. Unlike Newton, Leibniz put painstaking effort into his choices of notation.[29]
Today, Leibniz and Newton are usually both given credit for independently inventing and developing calculus. Newton was the first to apply calculus to general physics. Leibniz developed much of the notation used in calculus today.[30]:51–52 The basic insights that both Newton and Leibniz provided were the laws of differentiation and integration, emphasizing that differentiation and integration are inverse processes, second and higher derivatives, and the notion of an approximating polynomial series.
When Newton and Leibniz first published their results, there was great controversy over which mathematician (and therefore which country) deserved credit. Newton derived his results first (later to be published in his Method of Fluxions), but Leibniz published his "Nova Methodus pro Maximis et Minimis" first. Newton claimed Leibniz stole ideas from his unpublished notes, which Newton had shared with a few members of the Royal Society. This controversy divided English-speaking mathematicians from continental European mathematicians for many years, to the detriment of English mathematics.[31] A careful examination of the papers of Leibniz and Newton shows that they arrived at their results independently, with Leibniz starting first with integration and Newton with differentiation. It is Leibniz, however, who gave the new discipline its name. Newton called his calculus "the science of fluxions", a term that endured in English schools into the 19th century.[32]:100 The first complete treatise on calculus to be written in English and use the Leibniz notation was not published until 1815.[33]
Since the time of Leibniz and Newton, many mathematicians have contributed to the continuing development of calculus. One of the first and most complete works on both infinitesimal and integral calculus was written in 1748 by Maria Gaetana Agnesi.[34][35]
Foundations
In calculus, foundations refers to the rigorous development of the subject from axioms and definitions. In early calculus, the use of infinitesimal quantities was thought unrigorous and was fiercely criticized by several authors, most notably Michel Rolle and Bishop Berkeley. Berkeley famously described infinitesimals as the ghosts of departed quantities in his book The Analyst in 1734. Working out a rigorous foundation for calculus occupied mathematicians for much of the century following Newton and Leibniz, and is still to some extent an active area of research today.[36]
Several mathematicians, including Maclaurin, tried to prove the soundness of using infinitesimals, but it would not be until 150 years later when, due to the work of Cauchy and Weierstrass, a way was finally found to avoid mere "notions" of infinitely small quantities.[37] The foundations of differential and integral calculus had been laid. In Cauchy's Cours d'Analyse, we find a broad range of foundational approaches, including a definition of continuity in terms of infinitesimals, and a (somewhat imprecise) prototype of an (ε, δ)-definition of limit in the definition of differentiation.[38] In his work, Weierstrass formalized the concept of limit and eliminated infinitesimals (although his definition can validate nilsquare infinitesimals). Following the work of Weierstrass, it eventually became common to base calculus on limits instead of infinitesimal quantities, though the subject is still occasionally called "infinitesimal calculus". Bernhard Riemann used these ideas to give a precise definition of the integral.[39] It was also during this period that the ideas of calculus were generalized to the complex plane with the development of complex analysis.[40]
In modern mathematics, the foundations of calculus are included in the field of real analysis, which contains full definitions and proofs of the theorems of calculus. The reach of calculus has also been greatly extended. Henri Lebesgue invented measure theory, based on earlier developments by Émile Borel, and used it to define integrals of all but the most pathological functions.[41] Laurent Schwartz introduced distributions, which can be used to take the derivative of any function whatsoever.[42]
Limits are not the only rigorous approach to the foundation of calculus. Another way is to use Abraham Robinson's non-standard analysis. Robinson's approach, developed in the 1960s, uses technical machinery from mathematical logic to augment the real number system with infinitesimal and infinite numbers, as in the original Newton-Leibniz conception. The resulting numbers are called hyperreal numbers, and they can be used to give a Leibniz-like development of the usual rules of calculus.[43] There is also smooth infinitesimal analysis, which differs from non-standard analysis in that it mandates neglecting higher-power infinitesimals during derivations.[36] Based on the ideas of F. W. Lawvere and employing the methods of category theory, smooth infinitesimal analysis views all functions as being continuous and incapable of being expressed in terms of discrete entities. One aspect of this formulation is that the law of excluded middle does not hold.[36] The law of excluded middle is also rejected in constructive mathematics, a branch of mathematics that insists that proofs of the existence of a number, function, or other mathematical object should give a construction of the object. Reformulations of calculus in a constructive framework are generally part of the subject of constructive analysis.[36]
Significance
While many of the ideas of calculus had been developed earlier in Greece, China, India, Iraq, Persia, and Japan, the use of calculus began in Europe, during the 17th century, when Newton and Leibniz built on the work of earlier mathematicians to introduce its basic principles.[13][27][44] The Hungarian polymath John von Neumann wrote of this work,
The calculus was the first achievement of modern mathematics and it is difficult to overestimate its importance. I think it defines more unequivocally than anything else the inception of modern mathematics, and the system of mathematical analysis, which is its logical development, still constitutes the greatest technical advance in exact thinking.[45]
Calculus is also used to gain a more precise understanding of the nature of space, time, and motion. For centuries, mathematicians and philosophers wrestled with paradoxes involving division by zero or sums of infinitely many numbers. These questions arise in the study of motion and area. The ancient Greek philosopher Zeno of Elea gave several famous examples of such paradoxes. Calculus provides tools, especially the limit and the infinite series, that resolve the paradoxes.[47]
Calculus is usually developed by working with very small quantities. Historically, the first method of doing so was by infinitesimals. These are objects which can be treated like real numbers but which are, in some sense, "infinitely small". For example, an infinitesimal number could be greater than 0, but less than any number in the sequence 1, 1/2, 1/3, ... and thus less than any positive real number. From this point of view, calculus is a collection of techniques for manipulating infinitesimals. The symbols dy and dx were taken to be infinitesimal, and the derivative dy/dx was their ratio.[36]
The infinitesimal approach fell out of favor in the 19th century because it was difficult to make the notion of an infinitesimal precise. In the late 19th century, infinitesimals were replaced within academia by the epsilon, delta approach to limits. Limits describe the behavior of a function at a certain input in terms of its values at nearby inputs. They capture small-scale behavior using the intrinsic structure of the real number system (as a metric space with the least-upper-bound property). In this treatment, calculus is a collection of techniques for manipulating certain limits. Infinitesimals get replaced by sequences of smaller and smaller numbers, and the infinitely small behavior of a function is found by taking the limiting behavior for these sequences. Limits were thought to provide a more rigorous foundation for calculus, and for this reason, they became the standard approach during the 20th century. However, the infinitesimal concept was revived in the 20th century with the introduction of non-standard analysis and smooth infinitesimal analysis, which provided solid foundations for the manipulation of infinitesimals.[36]
Differential calculus is the study of the definition, properties, and applications of the derivative of a function. The process of finding the derivative is called differentiation. Given a function and a point in the domain, the derivative at that point is a way of encoding the small-scale behavior of the function near that point. By finding the derivative of a function at every point in its domain, it is possible to produce a new function, called the derivative function or just the derivative of the original function. In formal terms, the derivative is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by differentiating the squaring function turns out to be the doubling function.[30]:32
In more explicit terms the "doubling function" may be denoted by g(x) = 2x and the "squaring function" by f(x) = x². The "derivative" now takes the function f(x), defined by the expression "x²", as an input, that is all the information—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to output another function, the function g(x) = 2x, as it turns out.
In Lagrange's notation, the symbol for a derivative is an apostrophe-like mark called a prime. Thus, the derivative of a function called f is denoted by f′, pronounced "f prime" or "f dash". For instance, if f(x) = x² is the squaring function, then f′(x) = 2x is its derivative (the doubling function g from above).
If the input of the function represents time, then the derivative represents change concerning time. For example, if f is a function that takes time as input and gives the position of a ball at that time as output, then the derivative of f is how the position is changing in time, that is, it is the velocity of the ball.[30]:18–20
If a function is linear (that is if the graph of the function is a straight line), then the function can be written as y = mx + b, where x is the independent variable, y is the dependent variable, b is the y-intercept, and:

m = Δy/Δx = (change in y) / (change in x)
This gives an exact value for the slope of a straight line.[48]:6 If the graph of the function is not a straight line, however, then the change in y divided by the change in x varies. Derivatives give an exact meaning to the notion of change in output concerning change in input. To be concrete, let f be a function, and fix a point a in the domain of f. (a, f(a)) is a point on the graph of the function. If h is a number close to zero, then a + h is a number close to a. Therefore, (a + h, f(a + h)) is close to (a, f(a)). The slope between these two points is

m = [f(a + h) − f(a)] / h
This expression is called a difference quotient. A line through two points on a curve is called a secant line, so m is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). The secant line is only an approximation to the behavior of the function at the point a because it does not account for what happens between a and a + h. It is not possible to discover the behavior at a by setting h to zero because this would require dividing by zero, which is undefined. The derivative is defined by taking the limit as h tends to zero, meaning that it considers the behavior of f for all small values of h and extracts a consistent value for the case when h equals zero:

f′(a) = lim(h→0) [f(a + h) − f(a)] / h
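A minimal numerical sketch (using f(x) = x², a convenient example) shows the difference quotients settling on a single value as h shrinks:

```python
def f(x):
    return x * x   # the squaring function

def difference_quotient(f, a, h):
    """Slope of the secant line between (a, f(a)) and (a + h, f(a + h))."""
    return (f(a + h) - f(a)) / h

# For f(x) = x**2 at a = 3 the quotient works out to exactly 6 + h,
# so the secant slopes approach 6 as h tends to zero.
slopes = [difference_quotient(f, 3, h) for h in (1.0, 0.1, 0.001, 1e-6)]
assert abs(slopes[-1] - 6) < 1e-5
```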
Geometrically, the derivative is the slope of the tangent line to the graph of f at a. The tangent line is a limit of secant lines just as the derivative is a limit of difference quotients. For this reason, the derivative is sometimes called the slope of the function f.[48]:61–63
Here is a particular example, the derivative of the squaring function at the input 3. Let f(x) = x² be the squaring function.

f′(3) = lim(h→0) [(3 + h)² − 3²] / h = lim(h→0) [9 + 6h + h² − 9] / h = lim(h→0) (6 + h) = 6
The slope of the tangent line to the squaring function at the point (3, 9) is 6, that is to say, it is going up six times as fast as it is going to the right. The limit process just described can be performed for any point in the domain of the squaring function. This defines the derivative function of the squaring function or just the derivative of the squaring function for short. A computation similar to the one above shows that the derivative of the squaring function is the doubling function.[48]:63
A common notation, introduced by Leibniz, for the derivative in the example above is

y = x²
dy/dx = 2x
In an approach based on limits, the symbol dy/dx is to be interpreted not as the quotient of two numbers but as a shorthand for the limit computed above.[48]:74 Leibniz, however, did intend it to represent the quotient of two infinitesimally small numbers, dy being the infinitesimally small change in y caused by an infinitesimally small change dx applied to x. We can also think of d/dx as a differentiation operator, which takes a function as an input and gives another function, the derivative, as the output. For example:

d/dx (x²) = 2x
In this usage, the dx in the denominator is read as "with respect to x".[48]:79 Another example of correct notation could be:

g(t) = t² + 2t + 4
d/dt g(t) = 2t + 2
Even when calculus is developed using limits rather than infinitesimals, it is common to manipulate symbols like dx and dy as if they were real numbers; although it is possible to avoid such manipulations, they are sometimes notationally convenient in expressing operations such as the total derivative.
Integration can be thought of as measuring the area under a curve, defined by f(x), between two points (here a and b).
A sequence of midpoint Riemann sums over a regular partition of an interval: the total area of the rectangles converges to the integral of the function.
Integral calculus is the study of the definitions, properties, and applications of two related concepts, the indefinite integral and the definite integral. The process of finding the value of an integral is called integration.[46]:508 The indefinite integral, also known as the antiderivative, is the inverse operation to the derivative.[48]:163–165 F is an indefinite integral of f when f is a derivative of F. (This use of lower- and upper-case letters for a function and its indefinite integral is common in calculus.) The definite integral inputs a function and outputs a number, which gives the algebraic sum of areas between the graph of the input and the x-axis. The technical definition of the definite integral involves the limit of a sum of areas of rectangles, called a Riemann sum.[49]:282
A motivating example is the distance traveled in a given time.[48]:153 If the speed is constant, only multiplication is needed:

distance = speed × time
But if the speed changes, a more powerful method of finding the distance is necessary. One such method is to approximate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the approximate distance traveled in each interval. The basic idea is that if only a short time elapses, then the speed will stay more or less the same. However, a Riemann sum only gives an approximation of the distance traveled. We must take the limit of all such Riemann sums to find the exact distance traveled.
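The procedure lends itself to a short sketch. The speed function below, v(t) = 3t² on the interval [0, 2], is an invented example whose exact distance is 2³ = 8, so the quality of the approximation can be checked:

```python
def v(t):
    return 3 * t**2   # speed as a function of time (illustrative example)

def riemann_distance(v, t0, t1, n):
    """Approximate distance: split [t0, t1] into n short intervals and
    use the speed at the start of each one (a left Riemann sum)."""
    dt = (t1 - t0) / n
    return sum(v(t0 + i * dt) * dt for i in range(n))

# More, shorter intervals give a better approximation of the exact value 8.
coarse = riemann_distance(v, 0, 2, 10)
fine = riemann_distance(v, 0, 2, 10000)
assert abs(fine - 8) < abs(coarse - 8)
assert abs(fine - 8) < 0.01
```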
When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, traveling a steady 50 mph for 3 hours results in a total distance of 150 miles. Plotting the velocity as a function of time yields a rectangle with a height equal to the velocity and a width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve.[46]:535 This connection between the area under a curve and the distance traveled can be extended to any irregularly shaped region exhibiting a fluctuating velocity over a given period. If f(x) represents speed as it varies over time, the distance traveled between the times represented by a and b is the area of the region between f(x) and the x-axis, between x = a and x = b.
To approximate that area, an intuitive method would be to divide up the distance between a and b into several equal segments, the length of each segment represented by the symbol Δx. For each small segment, we can choose one value of the function f(x). Call that value h. Then the area of the rectangle with base Δx and height h gives the distance (time Δx multiplied by speed h) traveled in that segment. Associated with each segment is the average value of the function above it, f(x) = h. The sum of all such rectangles gives an approximation of the area between the axis and the curve, which is an approximation of the total distance traveled. A smaller value for Δx will give more rectangles and in most cases a better approximation, but for an exact answer, we need to take a limit as Δx approaches zero.[46]:512–522
The symbol of integration is ∫, an elongated S chosen to suggest summation.[46]:529 The definite integral is written as:
and is read "the integral from a to b of f-of-x with respect to x." The Leibniz notation dx is intended to suggest dividing the area under the curve into an infinite number of rectangles so that their width Δx becomes the infinitesimally small dx.[30]:44
The indefinite integral, or antiderivative, is written:
Functions differing by only a constant have the same derivative, and it can be shown that the antiderivative of a given function is a family of functions differing only by a constant.[49]:326 Since the derivative of the function y = x² + C, where C is any constant, is y′ = 2x, the antiderivative of 2x is given by:
The unspecified constant C present in the indefinite integral or antiderivative is known as the constant of integration.[50]:135
The fundamental theorem of calculus states that differentiation and integration are inverse operations.[49]:290 More precisely, it relates the values of antiderivatives to definite integrals. Because it is usually easier to compute an antiderivative than to apply the definition of a definite integral, the fundamental theorem of calculus provides a practical way of computing definite integrals. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration.
The fundamental theorem of calculus states: If a function f is continuous on the interval [a, b] and if F is a function whose derivative is f on the interval (a, b), then
Furthermore, for every x in the interval (a, b),
This realization, made by both Newton and Leibniz, was key to the proliferation of analytic results after their work became known. (The extent to which Newton and Leibniz were influenced by immediate predecessors, and particularly what Leibniz may have learned from the work of Isaac Barrow, is difficult to determine because of the priority dispute between them.[51]) The fundamental theorem provides an algebraic method of computing many definite integrals—without performing limit processes—by finding formulae for antiderivatives. It is also a prototype solution of a differential equation. Differential equations relate an unknown function to its derivatives and are ubiquitous in the sciences.[52]:351–352
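A quick numerical illustration of the theorem: a Riemann-sum approximation of a definite integral should agree with the difference of antiderivative values at the endpoints. This Python sketch uses f(x) = cos(x), whose antiderivative is sin(x), both chosen purely for illustration:

```python
import math

def riemann_sum(f, a, b, n):
    """Midpoint Riemann sum of f over [a, b] with n equal segments."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

# By the fundamental theorem, the integral of cos over [0, pi/2]
# equals sin(pi/2) - sin(0) = 1.
approx = riemann_sum(math.cos, 0, math.pi / 2, 10000)
exact = math.sin(math.pi / 2) - math.sin(0)
print(approx, exact)   # both very close to 1.0
```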
Applications
Calculus is used in every branch of the physical sciences,[53]: 1 actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled and an optimal solution is desired.[54] It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other.[55] Calculus can be used in conjunction with other mathematical disciplines. For example, it can be used with linear algebra to find the "best fit" linear approximation for a set of points in a domain. Or, it can be used in probability theory to determine the expectation value of a continuous random variable given a probability density function.[56]:37 In analytic geometry, the study of graphs of functions, calculus is used to find high points and low points (maxima and minima), slope, concavity and inflection points. Calculus is also used to find approximate solutions to equations; in practice, it is the standard way to solve differential equations and do root finding in most applications. Examples are methods such as Newton's method, fixed point iteration, and linear approximation. For instance, spacecraft use a variation of the Euler method to approximate curved courses within zero-gravity environments.
Physics makes particular use of calculus; all concepts in classical mechanics and electromagnetism are related through calculus. The mass of an object of known density, the moment of inertia of objects, and the potential energies due to gravitational and electromagnetic forces can all be found by the use of calculus. An example of the use of calculus in mechanics is Newton's second law of motion, which states that the derivative of an object's momentum concerning time equals the net force upon it. Alternatively, Newton's second law can be expressed by saying that the net force equals the object's mass times its acceleration, which is the time derivative of velocity and thus the second time derivative of spatial position. Starting from knowing how an object is accelerating, we use calculus to derive its path.[57]
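As a toy illustration of deriving a path from a known acceleration, here is a Python sketch of Euler's method for a body falling under constant gravity. The value g = 9.8 m/s², the 2-second duration, and starting from rest are all assumptions made for the example:

```python
# Euler's method: step velocity and position forward from a known acceleration.
g = 9.8          # acceleration (m/s^2)
dt = 0.001       # time step (s)
v = 0.0          # initial velocity (m/s)
x = 0.0          # initial position (m)

for _ in range(int(2.0 / dt)):     # simulate 2 seconds of free fall
    v += g * dt                    # dv = a * dt (Newton's second law)
    x += v * dt                    # dx = v * dt

print(x)   # close to the exact value 0.5 * g * t^2 = 19.6 m
```

Shrinking the step dt drives the numerical answer toward the exact one, in the same way a finer Riemann sum approaches the integral.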
Maxwell's theory of electromagnetism and Einstein's theory of general relativity are also expressed in the language of differential calculus.[58][59]:52–55 Chemistry also uses calculus in determining reaction rates[60]:599 and in studying radioactive decay.[60]:814 In biology, population dynamics starts with reproduction and death rates to model population changes.[61][62]:631
Green's theorem, which gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C, is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing.[63] For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property.
In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel to maximize flow.[64] Calculus can be applied to understand how quickly a drug is eliminated from a body or how quickly a cancerous tumor grows.[65]
In economics, calculus allows for the determination of maximal profit by providing a way to easily calculate both marginal cost and marginal revenue.[66]:387
Bardi, Jason Socrates (2006). The Calculus Wars: Newton, Leibniz, and the Greatest Mathematical Clash of All Time. New York: Thunder's Mouth Press. ISBN 1-56025-706-7.
Hoffmann, Laurence D.; Bradley, Gerald L. (2004). Calculus for Business, Economics, and the Social and Life Sciences (8th ed.). Boston: McGraw Hill. ISBN 0-07-242432-X.
Archimedes (2004). The Works of Archimedes, Volume 1: The Two Books On the Sphere and the Cylinder. Translated by Netz, Reviel. Cambridge University Press. ISBN 978-0-521-66160-7.
Dainian Fan; R. S. Cohen (1996). Chinese Studies in the History and Philosophy of Science and Technology. Dordrecht: Kluwer Academic Publishers. ISBN 0-7923-3463-9. OCLC 32272485.
Hollingdale, Stuart (1991). "Review of Before Newton: The Life and Times of Isaac Barrow". Notes and Records of the Royal Society of London. 45 (2): 277–279. doi:10.1098/rsnr.1991.0027. ISSN 0035-9149. JSTOR 531707. S2CID 165043307. The most interesting to us are Lectures X–XII, in which Barrow comes close to providing a geometrical demonstration of the fundamental theorem of the calculus... He did not realize, however, the full significance of his results, and his rejection of algebra means that his work must remain a piece of mid-17th century geometrical analysis of mainly historic interest.
Guicciardini, Niccolò (2005). "Isaac Newton, Philosophiae naturalis principia mathematica, first edition (1687)". Landmark Writings in Western Mathematics 1640–1940. Elsevier. pp. 59–87. doi:10.1016/b978-044450871-3/50086-3. ISBN 978-0-444-50871-3. [Newton] immediately realised that quadrature problems (the inverse problems) could be tackled via infinite series: as we would say nowadays, by expanding the integrand in power series and integrating term-wise.
Mazur, Joseph (2014). Enlightening Symbols / A Short History of Mathematical Notation and Its Hidden Powers. Princeton University Press. p. 166. ISBN 978-0-691-17337-5. Leibniz understood symbols, their conceptual powers as well as their limitations. He would spend years experimenting with some—adjusting, rejecting, and corresponding with everyone he knew, consulting with as many of the leading mathematicians of the time who were sympathetic to his fastidiousness.
Schrader, Dorothy V. (1962). "The Newton-Leibniz controversy concerning the discovery of the calculus". The Mathematics Teacher. 55 (5): 385–396. doi:10.5951/MT.55.5.0385. ISSN 0025-5769. JSTOR 27956626.
Russell, Bertrand (1946). History of Western Philosophy. London: George Allen & Unwin Ltd. p. 857. The great mathematicians of the seventeenth century were optimistic and anxious for quick results; consequently they left the foundations of analytical geometry and the infinitesimal calculus insecure. Leibniz believed in actual infinitesimals, but although this belief suited his metaphysics it had no sound basis in mathematics. Weierstrass, soon after the middle of the nineteenth century, showed how to establish calculus without infinitesimals, and thus, at last, made it logically secure. Next came Georg Cantor, who developed the theory of continuity and infinite number. "Continuity" had been, until he defined it, a vague word, convenient for philosophers like Hegel, who wished to introduce metaphysical muddles into mathematics. Cantor gave a precise significance to the word and showed that continuity, as he defined it, was the concept needed by mathematicians and physicists. By this means a great deal of mysticism, such as that of Bergson, was rendered antiquated.
von Neumann, J. (1947). "The Mathematician". In Heywood, R. B. (ed.). The Works of the Mind. University of Chicago Press. pp. 180–196. Reprinted in Bródy, F.; Vámos, T., eds. (1995). The Neumann Compendium. World Scientific Publishing Co. Pte. Ltd. pp. 618–626. ISBN 981-02-2201-7.
Mahoney, Michael S. (1990). "Barrow's mathematics: Between ancients and moderns". In Feingold, M. (ed.). Before Newton. Cambridge University Press. pp. 179–249. ISBN 978-0-521-06385-2.
Probst, Siegmund (2015). "Leibniz as Reader and Second Inventor: The Cases of Barrow and Mengoli". In Goethe, Norma B.; Beeley, Philip; Rabouin, David (eds.). G.W. Leibniz, Interrelations Between Mathematics and Philosophy. Archimedes: New Studies in the History and Philosophy of Science and Technology. Vol. 41. Springer. pp. 111–134. ISBN 978-9-401-79663-7.
Hu, Zhiying (14 April 2021). "The Application and Value of Calculus in Daily Life". 2021 2nd Asia-Pacific Conference on Image Processing, Electronics, and Computers (IPEC 2021). Dalian, China: ACM. pp. 562–564. doi:10.1145/3452446.3452583. ISBN 978-1-4503-8981-5. S2CID 233384462.
Garber, Elizabeth (2001). The Language of Physics: The Calculus and the Development of Theoretical Physics in Europe, 1750–1914. Springer Science+Business Media. ISBN 978-1-4612-7272-4. OCLC 921230825.
Atkins, Peter W.; Jones, Loretta (2010). Chemical Principles: The Quest for Insight (5th ed.). New York: W.H. Freeman. ISBN 978-1-4292-1955-6. OCLC 501943698.
Perloff, Jeffrey M. (2018). Microeconomics: Theory and Applications with Calculus (4th global ed.). Harlow: Pearson. ISBN 978-1-292-15446-6. OCLC 1064041906.
Further reading
Adams, Robert A. (1999). Calculus: A Complete Course. Addison-Wesley. ISBN 978-0-201-39607-2.
Albers, Donald J.; Anderson, Richard D.; Loftsgaarden, Don O., eds. (1986). Undergraduate Programs in the Mathematics and Computer Sciences: The 1985–1986 Survey. Mathematical Association of America.
Anton, Howard; Bivens, Irl; Davis, Stephen (2002). Calculus. John Wiley and Sons Pte. Ltd. ISBN 978-81-265-1259-1.
Lebedev, Leonid P.; Cloud, Michael J. (2004). "The Tools of Calculus". Approximating Perfection: a Mathematician's Journey into the World of Mechanics. Princeton University Press. Bibcode:2004apmj.book.....L.
Velocity as a function of time for an object falling through a non-dense medium, and released at zero relative-velocity v = 0 at time t = 0, is roughly given by a function involving a hyperbolic tangent (tanh):
The hyperbolic tangent has a limit value of one, for large time t. In other words, velocity asymptotically approaches a maximum value called the terminal velocity vt:
For an object falling and released at relative velocity v = vi at time t = 0, with vi < vt, the velocity function is also defined in terms of the hyperbolic tangent function:
For vi > vt, the velocity function is defined in terms of the hyperbolic cotangent function:
The hyperbolic cotangent also has a limit value of one, for large time t. Velocity asymptotically tends to the terminal velocity vt, strictly from above.
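The asymptotic behaviour described above can be illustrated with a short Python sketch. It assumes the standard fall-from-rest form v(t) = vt·tanh(gt/vt) and an illustrative terminal velocity of 56 m/s:

```python
import math

def fall_velocity(t, v_t, g=9.8):
    """Fall-from-rest velocity under quadratic drag: v(t) = v_t * tanh(g*t/v_t)."""
    return v_t * math.tanh(g * t / v_t)

v_t = 56.0   # assumed terminal velocity (m/s)
for t in (1, 5, 15, 60):
    print(t, round(fall_velocity(t, v_t), 1))
# The velocity climbs toward v_t and flattens out, since tanh -> 1 for large t.
```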
Or, more generically (where F(v) are the forces acting on the object beyond drag):
For a potato-shaped object of average diameter d and of density ρobj, terminal velocity is about
For objects of water-like density (raindrops, hail, live objects—mammals, birds, insects, etc.) falling in air near Earth's surface at sea level, the terminal velocity is roughly equal to vt = 90√d, with d in metres and vt in m/s.
For example, for a human body (d ≈ 0.6 m) vt ≈ 70 m/s, for a small animal like a cat (d ≈ 0.2 m) vt ≈ 40 m/s, for a small bird (d ≈ 0.05 m) vt ≈ 20 m/s, for an insect (d ≈ 0.01 m) vt ≈ 9 m/s, and so on. Terminal velocity for very small objects (pollen, etc.) at low Reynolds numbers is determined by Stokes' law.
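These figures can be reproduced with a short Python sketch of the rough estimate vt ≈ 90√d (d in metres, vt in m/s) for water-density objects falling in air:

```python
import math

def terminal_velocity(d):
    """Rough v_t (m/s) for a water-density object of diameter d (m) falling in air."""
    return 90 * math.sqrt(d)

for name, d in [("human", 0.6), ("cat", 0.2), ("small bird", 0.05), ("insect", 0.01)]:
    print(f"{name}: d = {d} m, v_t ~ {terminal_velocity(d):.0f} m/s")
# Prints roughly 70, 40, 20 and 9 m/s, matching the figures quoted above.
```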
In short, terminal velocity is higher for larger creatures, and thus potentially more deadly. A creature such as a mouse falling at its terminal velocity is much more likely to survive impact with the ground than a human falling at its terminal velocity.[19]
The equation for viscous resistance or linear drag is appropriate for objects or particles moving through a fluid at relatively slow speeds (assuming there is no turbulence). Purely laminar flow only exists up to Re = 0.1 under this definition. In this case, the force of drag is approximately proportional to velocity. The equation for viscous resistance is:[20]
where:
b is a constant that depends on both the material properties of the object and fluid, as well as the geometry of the object; and
v is the velocity of the object.
When an object falls from rest, its velocity will be
where:
ρobj is the density of the object,
ρ is the density of the fluid,
V is the volume of the object,
g is the acceleration due to gravity (i.e., 9.8 m/s²), and
m is the mass of the object.
The velocity asymptotically approaches the terminal velocity vt. For a given b, denser objects fall more quickly.
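As an illustration, the fall-from-rest solution for linear drag, v(t) = ((ρobj − ρ)Vg/b)(1 − e^(−bt/m)), can be evaluated numerically. The parameter values in this Python sketch are made up purely for demonstration:

```python
import math

def fall_velocity_linear(t, rho_obj, rho_fluid, V, b, g=9.8):
    """v(t) for linear drag, falling from rest (buoyancy included)."""
    m = rho_obj * V                                 # mass of the object
    v_terminal = (rho_obj - rho_fluid) * V * g / b  # terminal velocity
    return v_terminal * (1 - math.exp(-b * t / m))

# Made-up numbers: a tiny dense particle settling in water.
rho_obj, rho_fluid = 2500.0, 1000.0   # densities (kg/m^3)
V, b = 1e-9, 1e-4                     # volume (m^3) and drag constant (kg/s)
for t in (0.001, 0.01, 1.0):
    print(t, fall_velocity_linear(t, rho_obj, rho_fluid, V, b))
# The velocity rises and levels off at the terminal value (~0.147 m/s here).
```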
For the special case of small spherical objects moving slowly through a viscous fluid (and thus at small Reynolds number), George Gabriel Stokes derived an expression for the drag constant:
where R is the Stokes radius of the particle, and η is the fluid viscosity.
The resulting expression for the drag is known as Stokes' drag:[21]
For example, consider a small sphere with radius R = 0.5 micrometre (diameter = 1.0 μm) moving through water at a velocity of 10 μm/s. Using 10⁻³ Pa·s as the dynamic viscosity of water in SI units,
we find a drag force of 0.09 pN. This is about the drag force that a bacterium experiences as it swims through water.
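This figure is easy to reproduce. The following Python sketch evaluates Stokes' drag F = 6πηRv with the values quoted above:

```python
import math

def stokes_drag(eta, R, v):
    """Stokes' drag on a small sphere: F = 6 * pi * eta * R * v."""
    return 6 * math.pi * eta * R * v

eta = 1e-3    # dynamic viscosity of water (Pa*s)
R = 0.5e-6    # sphere radius (m), i.e. 0.5 micrometre
v = 10e-6     # speed (m/s), i.e. 10 micrometres per second

F = stokes_drag(eta, R, v)
print(F)                    # about 9.4e-14 N
print(F / 1e-12, "pN")      # about 0.09 pN, as quoted in the text
```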
The drag coefficient of a sphere can be determined for the general case of a laminar flow with Reynolds numbers less than using the following formula:[22]
For Reynolds numbers less than 1, Stokes' law applies and the drag coefficient approaches 24/Re.
In aerodynamics, aerodynamic drag, also known as air resistance, is the fluid drag force that acts on any moving solid body in the direction of the air's freestream flow.[23]
From the body's perspective (near-field approach), the drag results from forces due to pressure distributions over the body surface, and from forces due to skin friction, which is a result of viscosity.
Alternatively, calculated from the flow field perspective (far-field approach), the drag force results from three natural phenomena: shock waves, vortex sheet, and viscosity.
When the airplane produces lift, another drag component results. Induced drag is due to a modification of the pressure distribution caused by the trailing vortex system that accompanies the lift production. An alternative perspective on lift and drag is gained from considering the change of momentum of the airflow. The wing intercepts the airflow and forces the flow to move downward. This results in an equal and opposite force acting upward on the wing, which is the lift force. The downward change of momentum of the airflow also reduces the rearward momentum of the flow; this is the result of a force acting forward on the airflow, applied by the wing to the air. An equal but opposite rearward force acts on the wing, which is the induced drag. Another drag component, wave drag, results from shock waves at transonic and supersonic flight speeds. The shock waves induce changes in the boundary layer and pressure distribution over the body surface.
Therefore, there are three ways of categorizing drag.[24]: 19
Pressure drag and friction drag
Profile drag and induced drag
Vortex drag, wave drag and wake drag
The pressure distribution acting on a body's surface exerts normal forces on the body. Those forces can be added together, and the component of the resultant that acts downstream represents the drag force. The nature of these normal forces combines shock wave effects, vortex system generation effects, and wake viscous mechanisms.
Viscosity of the fluid has a major effect on drag. In the absence of viscosity, the pressure forces acting to hinder the vehicle are canceled by a pressure force further aft that acts to push the vehicle forward; this is called pressure recovery, and the result is that the drag is zero. That is to say, the work the body does on the airflow is reversible and is recovered, as there are no frictional effects to convert the flow energy into heat. Pressure recovery acts even in the case of viscous flow. Viscosity, however, results in pressure drag, and it is the dominant component of drag in the case of vehicles with regions of separated flow, in which the pressure recovery is ineffective.
The friction drag force, which is a tangential force on the aircraft surface, depends substantially on boundary layer configuration and viscosity. The net friction drag is calculated as the downstream projection of the viscous forces evaluated over the body's surface. The sum of friction drag and pressure (form) drag is called viscous drag. This drag component is due to viscosity.
The idea that a moving body passing through air or another fluid encounters resistance had been known since the time of Aristotle. According to Mervyn O'Gorman, this was named "drag" by Archibald Reith Low.[25] Louis Charles Breguet's paper of 1922 began efforts to reduce drag by streamlining.[26] Breguet went on to put his ideas into practice by designing several record-breaking aircraft in the 1920s and 1930s. Ludwig Prandtl's boundary layer theory in the 1920s provided the impetus to minimise skin friction. A further major call for streamlining was made by Sir Melvill Jones, who provided the theoretical concepts to demonstrate emphatically the importance of streamlining in aircraft design.[27][28][29]
In 1929 his paper 'The Streamline Airplane', presented to the Royal Aeronautical Society, was seminal. He proposed an ideal aircraft that would have minimal drag, which led to the concepts of a 'clean' monoplane and retractable undercarriage. The aspect of Jones's paper that most shocked the designers of the time was his plot of the horsepower required versus velocity, for an actual and an ideal plane. By looking at a data point for a given aircraft and extrapolating it horizontally to the ideal curve, the velocity gain for the same power can be seen. When Jones finished his presentation, a member of the audience described the results as being of the same level of importance as the Carnot cycle in thermodynamics.[26][27]
The interaction of parasitic and induced drag vs. airspeed can be plotted as a characteristic curve, illustrated here. In aviation, this is often referred to as the power curve, and is important to pilots because it shows that, below a certain airspeed, maintaining airspeed counterintuitively requires more thrust as speed decreases, rather than less. The consequences of being "behind the curve" in flight are important and are taught as part of pilot training. At the subsonic airspeeds where the "U" shape of this curve is significant, wave drag has not yet become a factor, and so it is not shown in the curve.
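The U shape of this curve can be illustrated with a toy model in which parasitic drag grows as v² and induced drag falls as 1/v². The coefficients below are made-up values chosen only to exhibit the shape, not data for any real aircraft:

```python
def total_drag(v, k_par=0.01, k_ind=1e5):
    """Toy drag model: parasitic term k_par*v^2 plus induced term k_ind/v^2."""
    return k_par * v ** 2 + k_ind / v ** 2

for v in range(20, 201, 20):
    print(v, round(total_drag(v), 1))
# Drag falls, reaches a minimum (near v = (k_ind/k_par)**0.25, about 56 here),
# then rises again: below the minimum-drag speed, slower flight needs more thrust.
```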
Wave drag, sometimes referred to as compressibility drag, is drag created when a body moves in a compressible fluid at speeds close to the speed of sound in that fluid. In aerodynamics, wave drag consists of multiple components depending on the speed regime of the flight.
In transonic flight, wave drag is the result of the formation of shockwaves in the fluid, formed when local areas of supersonic (Mach number greater than 1.0) flow are created. In practice, supersonic flow occurs on bodies traveling well below the speed of sound, as the local speed of air increases as it accelerates over the body to speeds above Mach 1.0. However, full supersonic flow over the vehicle will not develop until well past Mach 1.0. Aircraft flying at transonic speed often incur wave drag through the normal course of operation. In transonic flight, wave drag is commonly referred to as transonic compressibility drag. Transonic compressibility drag increases significantly as the speed of flight increases towards Mach 1.0, dominating other forms of drag at those speeds.
In supersonic flight (Mach numbers greater than 1.0), wave drag is the result of shockwaves present in the fluid and attached to the body, typically oblique shockwaves formed at the leading and trailing edges of the body. In highly supersonic flows, or in bodies with turning angles sufficiently large, unattached shockwaves, or bow waves will instead form. Additionally, local areas of transonic flow behind the initial shockwave may occur at lower supersonic speeds, and can lead to the development of additional, smaller shockwaves present on the surfaces of other lifting bodies, similar to those found in transonic flows. In supersonic flow regimes, wave drag is commonly separated into two components, supersonic lift-dependent wave drag and supersonic volume-dependent wave drag.
The closed-form solution for the minimum wave drag of a body of revolution with a fixed length was found by Sears and Haack, and is known as the Sears–Haack distribution. Similarly, for a fixed volume, the shape for minimum wave drag is the von Kármán ogive.
The Busemann biplane theoretical concept is not subject to wave drag when operated at its design speed, but is incapable of generating lift in this condition.
In 1752 d'Alembert proved that potential flow, the 18th century state-of-the-art inviscid flow theory amenable to mathematical solutions, resulted in the prediction of zero drag. This was in contradiction with experimental evidence, and became known as d'Alembert's paradox. In the 19th century the Navier–Stokes equations for the description of viscous flow were developed by Saint-Venant, Navier and Stokes. Stokes derived the drag around a sphere at very low Reynolds numbers, the result of which is called Stokes' law.[30]
In the limit of high Reynolds numbers, the Navier–Stokes equations approach the inviscid Euler equations, of which the potential-flow solutions considered by d'Alembert are solutions. However, all experiments at high Reynolds numbers showed there is drag. Attempts to construct inviscid steady flow solutions to the Euler equations, other than the potential flow solutions, did not result in realistic results.[30]
The notion of boundary layers—introduced by Prandtl in 1904, founded on both theory and experiments—explained the causes of drag at high Reynolds numbers. The boundary layer is the thin layer of fluid close to the object's boundary, where viscous effects remain important even when the viscosity is very small (or equivalently the Reynolds number is very large).[30]
^Encyclopedia of Automotive Engineering, David Crolla, Paper "Fundamentals, Basic principles in Road vehicle Aerodynamics and Design", ISBN 978-0-470-97402-5
^Fundamentals of Flight, Second Edition, Richard S. Shevell, ISBN 0-13-339060-8, p. 185
^A Case Study By Aerospatiale And British Aerospace On The Concorde By Jean Rech and Clive S. Leyman, AIAA Professional Study Series, Fig. 3.6
^Sir Morien Morgan, Sir Arnold Hall (November 1977). Biographical Memoirs of Fellows of the Royal Society: Bennett Melvill Jones, 28 January 1887 – 31 October 1975. Vol. 23. The Royal Society. pp. 252–282.
^Mair, W.A. (1976). Oxford Dictionary of National Biography.
'Improved Empirical Model for Base Drag Prediction on Missile Configurations, based on New Wind Tunnel Data', Frank G Moore et al. NASA Langley Center
'Computational Investigation of Base Drag Reduction for a Projectile at Different Flight Regimes', M A Suliman et al. Proceedings of 13th International Conference on Aerospace Sciences & Aviation Technology, ASAT- 13, May 26 – 28, 2009
'Base Drag and Thick Trailing Edges', Sighard F. Hoerner, Air Materiel Command, in: Journal of the Aeronautical Sciences, Oct 1950, pp 622–628
French, A. P. (1970). Newtonian Mechanics (The M.I.T. Introductory Physics Series) (1st ed.). W. W. Norton & Company Inc., New York. ISBN 978-0-393-09958-4.
Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 978-0-7167-0809-4.
Huntley, H. E. (1967). Dimensional Analysis. LOC 67-17978.
Anderson, John D. Jr. (2000); Introduction to Flight, Fourth Edition, McGraw Hill Higher Education, Boston, Massachusetts, USA. 8th ed. 2015, ISBN 978-0078027673.