Monotonicity
Introduction
Monotonicity describes how a function behaves as its input increases. A function is said to be monotonic if it always increases or always decreases over an interval without changing direction. This property helps us understand the overall trend of a function without needing to evaluate it at every point.
Monotonicity is important in calculus because it helps in solving inequalities, finding maximum and minimum values, and analyzing the behavior of functions over intervals. If a function is increasing, it means that larger input values always lead to larger function values. If a function is decreasing, larger input values always lead to smaller function values.
The study of monotonicity is closely related to derivatives. If a function is differentiable, its derivative tells us whether the function is increasing or decreasing. A positive derivative means the function is increasing, while a negative derivative means the function is decreasing. This idea is useful for understanding the shape of graphs and solving optimization problems.
Classification of Points Based on Increasing or Decreasing Behavior
An interior point \( a \) in the domain \( D \) of a function \( f \) can be classified based on its local monotonic behavior as follows:
- \( f \) is Strictly Increasing at \( x = a \):
A function \( f \) is said to be strictly increasing at \( x = a \) if there exists \( \delta > 0 \) such that for all \( x \in (a - \delta, a) \) and \( y \in (a, a + \delta) \),
\[ f(x) < f(a) < f(y). \]This implies that \( f \) is strictly increasing in the neighborhood of \( a \).
- \( f \) is Strictly Decreasing at \( x = a \):
A function \( f \) is said to be strictly decreasing at \( x = a \) if there exists \( \delta > 0 \) such that for all \( x \in (a - \delta, a) \) and \( y \in (a, a + \delta) \),
\[ f(x) > f(a) > f(y). \]This implies that \( f \) is strictly decreasing in the neighborhood of \( a \).
- \( f \) is Increasing at \( x = a \) (Non-Decreasing):
A function \( f \) is said to be increasing at \( x = a \) if there exists \( \delta > 0 \) such that for all \( x \in (a - \delta, a) \) and \( y \in (a, a + \delta) \),
\[ f(x) \leq f(a) \leq f(y). \]This allows for the possibility that \( f(a) \) is equal to \( f(x) \) or \( f(y) \), but never smaller than \( f(x) \) or larger than \( f(y) \).
We see such behaviour at a point where the function is constant on one or both of the left and right neighbourhoods, as in the figure below.
- \( f \) is Decreasing at \( x = a \) (Non-Increasing):
A function \( f \) is said to be decreasing at \( x = a \) if there exists \( \delta > 0 \) such that for all \( x \in (a - \delta, a) \) and \( y \in (a, a + \delta) \),
\[ f(x) \geq f(a) \geq f(y). \]Similar to the increasing case, this allows for the possibility that \( f(a) \) is equal to \( f(x) \) or \( f(y) \), but never larger than \( f(x) \) or smaller than \( f(y) \).
- Local Extremum:
A function \( f \) is said to have a local extremum at \( x = a \) if there exists \( \delta > 0 \) such that either
\[ f(x) \leq f(a) \quad \forall x \in (a - \delta, a + \delta), \]in which case \( f \) has a local maximum at \( a \), or
\[ f(x) \geq f(a) \quad \forall x \in (a - \delta, a + \delta), \]in which case \( f \) has a local minimum at \( a \).
- Neither Increasing, Nor Decreasing, Nor an Extremum: Not Monotonic
A function at a point may exhibit behavior where it is neither increasing, nor decreasing, nor an extremum. This happens when the function oscillates arbitrarily close to the point, preventing any consistent order of function values.
Consider the function
\[ f(x) = \begin{cases} x \sin(1/x), & x \neq 0, \\ 0, & x = 0. \end{cases} \]At \( x = 0 \), we have \( f(0) = 0 \). For \( x \neq 0 \), since \( -1 \leq \sin(1/x) \leq 1 \), it follows that
\[ - |x| \leq f(x) \leq |x|, \]which ensures \( \lim_{x \to 0} f(x) = 0 \). However, the function keeps oscillating between positive and negative values infinitely often near \( x = 0 \), meaning it does not approach \( 0 \) in a monotonic way.
To be strictly increasing or decreasing at \( x = 0 \), \( f \) would have to satisfy \( f(x) < f(0) < f(y) \) or \( f(x) > f(0) > f(y) \) for all \( x \in (-\delta, 0) \) and \( y \in (0, \delta) \). But since \( f(x) \) changes sign infinitely many times near \( 0 \), neither inequality holds in any small interval. For the same reason, the non-strict inequalities also fail, so \( f \) is neither non-decreasing nor non-increasing at \( 0 \): it does not maintain a definite direction in arbitrarily small neighborhoods.
For \( x = 0 \) to be a local extremum, \( f(x) \) must be either always greater than or always less than \( f(0) \) near \( x = 0 \), but this fails since \( f(x) \) takes both positive and negative values arbitrarily close to \( 0 \).
Thus, \( x = 0 \) is an oscillatory point, where the function fluctuates indefinitely in every small neighborhood, making it neither increasing, nor decreasing, nor an extremum.
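The sign changes of \( f(x) = x \sin(1/x) \) near \( 0 \) can be seen numerically. This is a sketch, not a proof; the sample points \( x_k = 1/(k\pi + \pi/2) \) are chosen for convenience, since \( \sin(1/x_k) = (-1)^k \) there.

```python
import math

def f(x):
    # f(x) = x*sin(1/x) for x != 0, and f(0) = 0
    return x * math.sin(1 / x) if x != 0 else 0.0

# Sample at x_k = 1/(k*pi + pi/2), where sin(1/x_k) = (-1)^k,
# so f(x_k) = (-1)^k * x_k alternates in sign as k grows.
samples = [1 / (k * math.pi + math.pi / 2) for k in range(1, 20)]
signs = [math.copysign(1, f(x)) for x in samples]

# f takes both positive and negative values arbitrarily close to 0,
# so no ordering f(x) < f(0) < f(y) or f(x) > f(0) > f(y) can hold.
assert any(s > 0 for s in signs) and any(s < 0 for s in signs)
```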
Example
Consider the function
We analyze the behavior of \( f \) at \( x = 2 \).
Solution:
We have
For \( \delta > 0 \), we examine \( f(2 - \delta) \) and \( f(2 + \delta) \):
Expanding,
For sufficiently small \( \delta \), since \( \delta^2 \) is negligible compared to \( 3\delta \), we get
Now, for \( x = 2 + \delta \), we use the second case of \( f(x) \):
Clearly, for \( \delta > 0 \),
Since
it follows that \( f \) is strictly increasing at \( x = 2 \).
Non-Decreasing at \( x = 0 \) but Not Strictly Increasing
Consider the function
\[ f(x) = \begin{cases} x \sin^2 (1/x), & x \neq 0, \\ 0, & x = 0. \end{cases} \]We analyze the behavior of \( f(x) \) at \( x = 0 \). By definition,
\[ f(0) = 0. \]For any \( x > 0 \), we have
\[ f(x) = x \sin^2 (1/x) \geq 0, \]which implies
\[ f(0) \leq f(x). \]For any \( x < 0 \), we have
\[ f(x) = x \sin^2 (1/x) \leq 0, \]which gives
\[ f(x) \leq f(0). \]
Since \( f(x) \leq f(0) \leq f(y) \) for all \( x < 0 < y \) near \( 0 \), \( f \) is non-decreasing at \( x = 0 \). However, it is not strictly increasing: there exist sequences of points approaching \( 0 \) from the right at which \( f(x) = 0 = f(0) \), so \( f(0) < f(x) \) fails for some \( x \in (0, \delta) \) no matter how small \( \delta > 0 \) is. The function returns to the value \( 0 \) infinitely often near \( 0 \), preventing strict monotonicity.
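One concrete function with exactly this behaviour (an assumed instance, chosen to match the properties described above) is \( f(x) = x \sin^2(1/x) \) with \( f(0) = 0 \). A quick numerical check:

```python
import math

def f(x):
    # assumed instance: f(x) = x * sin^2(1/x) for x != 0, f(0) = 0
    return x * math.sin(1 / x) ** 2 if x != 0 else 0.0

# f(x) >= f(0) = 0 for every sampled x > 0 ...
xs = [10 ** (-k) for k in range(1, 12)]
assert all(f(x) >= 0.0 for x in xs)

# ... yet f vanishes at x = 1/(n*pi), arbitrarily close to 0,
# so f(0) < f(x) fails infinitely often: non-decreasing, not strict.
zeros = [1 / (n * math.pi) for n in range(1, 100)]
assert all(abs(f(z)) < 1e-12 for z in zeros)
```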
Classification of Endpoints
Above we discussed the classification of interior points. However, when dealing with endpoints of a function’s domain, the standard classification requires modification, because only one-sided neighborhoods are available.
One-Sided Monotonicity:
Since an endpoint \( a \) of the domain \( D \) does not have a neighborhood on one side, the classification is adjusted to use only the accessible side:
A function at an endpoint may be classified based on its one-sided behavior, but in cases where it oscillates infinitely often in any small neighborhood, it may fail to satisfy any monotonicity conditions, including non-decreasing and non-increasing behavior.
- At a left endpoint \( a \) of the domain (where \( f \) is defined only for \( x \geq a \)), \( f \) is said to be strictly increasing if
\[ f(a) < f(x), \quad \forall x \in (a, a + \delta). \]Similarly, it is strictly decreasing if
\[ f(a) > f(x), \quad \forall x \in (a, a + \delta). \]It is non-decreasing at \( a \) if
\[ f(a) \leq f(x), \quad \forall x \in (a, a + \delta), \]and non-increasing if
\[ f(a) \geq f(x), \quad \forall x \in (a, a + \delta). \]Consider the function
\[ f(x) = \begin{cases} \sin^2 (1/x), & x > 0, \\ 0, & x = 0. \end{cases} \]We analyze the behavior of \( f(x) \) at \( x = 0 \). By definition,
\[ f(0) = 0. \]For any \( x > 0 \), we have
\[ f(x) = \sin^2 (1/x). \]Since \( 0 \leq \sin^2(1/x) \leq 1 \) for all \( x > 0 \), it follows that
\[ f(0) = 0 \leq f(x) \quad \forall x \in (0, \delta). \]Thus, for any arbitrarily small \( \delta > 0 \), we have \( f(0) \leq f(x) \) for all \( x \in (0, \delta) \), proving that \( f \) is non-decreasing at \( x = 0 \). However, since \( f(x) \) oscillates between \( 0 \) and \( 1 \), it is not strictly increasing at \( x = 0 \).
- At a right endpoint \( b \) of the domain (where \( f \) is defined only for \( x \leq b \)), the definitions are analogous:
\[ f(x) < f(b), \quad \forall x \in (b - \delta, b) \]for strictly increasing behavior, and
\[ f(x) > f(b), \quad \forall x \in (b - \delta, b) \]for strictly decreasing behavior. Similarly, \( f \) is non-decreasing at \( b \) if
\[ f(x) \leq f(b), \quad \forall x \in (b - \delta, b), \]and non-increasing if
\[ f(x) \geq f(b), \quad \forall x \in (b - \delta, b). \]
However, in cases where \( f \) exhibits infinite oscillations near an endpoint, none of these conditions hold. For example, if
\[ f(x) = \begin{cases} \sin (1/x), & x > 0, \\ 0, & x = 0, \end{cases} \]then for any small \( \delta > 0 \), \( f(x) \) oscillates between positive and negative values in \( (0, \delta) \), so \( f \) is neither strictly increasing, strictly decreasing, non-decreasing, nor non-increasing at \( 0 \). In such cases, the endpoint is best described as an oscillatory endpoint, where the function exhibits no consistent monotonic trend.
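The endpoint example \( f(x) = \sin^2(1/x) \), \( f(0) = 0 \), discussed above can be spot-checked numerically. A sketch (the sample points are an arbitrary choice):

```python
import math

def f(x):
    # f(x) = sin^2(1/x) for x > 0, f(0) = 0 (from the text)
    return math.sin(1 / x) ** 2 if x > 0 else 0.0

# f(0) <= f(x) for all sampled x in (0, delta): non-decreasing at the endpoint 0.
xs = [10 ** (-k) for k in range(1, 10)]
assert all(f(x) >= f(0.0) for x in xs)

# But not strictly increasing: f(1/(n*pi)) = 0 = f(0) arbitrarily close to 0.
assert abs(f(1 / (7 * math.pi))) < 1e-12
```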
Monotonicity in an Interval
We can define monotonicity in an interval in two ways. The first approach is based on the function's behavior at every point in its domain within the given interval.
Let \( f \) be a function defined on an interval \( I \). If \( f \) is strictly increasing at every point in its domain within \( I \), then \( f \) is said to be strictly increasing on \( I \). Similarly, if \( f \) is strictly decreasing at every point in its domain within \( I \), then \( f \) is said to be strictly decreasing on \( I \).
If \( f \) is non-decreasing (also called increasing) at every point in its domain within \( I \), then \( f \) is said to be non-decreasing on \( I \). Likewise, if \( f \) is non-increasing (also called decreasing) at every point in its domain within \( I \), then \( f \) is said to be non-increasing on \( I \).
If \( f \) fails to satisfy any of these conditions at even a single point in its domain within \( I \), then \( f \) is said to be neither increasing nor decreasing on \( I \).
The Second Approach to Monotonicity in an Interval
The second approach to defining monotonicity does not require examining the function’s behavior point by point. Instead, it compares function values directly across the function’s entire domain within the given interval. This makes it a more general and often more convenient definition.
Let \( f \) be a function defined on a domain \( D \), which is a subset of an interval \( I \). The function \( f \) is said to be strictly increasing on \( D \) if
\[ x < y \implies f(x) < f(y) \quad \forall x, y \in D. \]Similarly, \( f \) is said to be strictly decreasing on \( D \) if
\[ x < y \implies f(x) > f(y) \quad \forall x, y \in D. \]A function is said to be non-decreasing (or simply increasing) on \( D \) if
\[ x < y \implies f(x) \leq f(y) \quad \forall x, y \in D. \]Likewise, \( f \) is non-increasing (or simply decreasing) on \( D \) if
\[ x < y \implies f(x) \geq f(y) \quad \forall x, y \in D. \]
If \( f \) fails to satisfy any of these conditions for even a single pair \( x, y \in D \), then it is said to be neither increasing nor decreasing on \( D \).
This definition is independent of continuity at individual points and focuses solely on the relative ordering of function values over the domain \( D \), making it particularly useful in more general settings.
Example: \( f(x) = x^3 \) is Strictly Increasing on \( \mathbb{R} \)
Consider the function
\[ f(x) = x^3, \quad x \in \mathbb{R}. \]To determine whether \( f(x) \) is strictly increasing on \( \mathbb{R} \), we use the second approach to monotonicity, which requires checking whether
\[ x < y \implies f(x) < f(y) \quad \forall x, y \in \mathbb{R}. \]So take any two numbers \( x, y \) with \( x < y \). Cubing preserves order for all real numbers: writing
\[ y^3 - x^3 = (y - x)\left( x^2 + xy + y^2 \right) = (y - x)\left[ \left( x + \tfrac{y}{2} \right)^2 + \tfrac{3y^2}{4} \right], \]the first factor is positive since \( x < y \), and the bracket is positive as well (it vanishes only when \( x = y = 0 \), which is impossible for \( x < y \)). Hence
\[ x < y \implies x^3 < y^3. \]
Thus, for all \( x, y \in \mathbb{R} \) with \( x < y \), we have \( f(x) < f(y) \), proving that \( f(x) = x^3 \) is strictly increasing on \( \mathbb{R} \).
This conclusion follows directly from the order-preserving property of the cubic function, without requiring any continuity or differentiability conditions at individual points.
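As a randomized sanity check (a sketch, not a proof), the order-preserving property of cubing can be spot-checked on sampled pairs:

```python
import random

def f(x):
    return x ** 3

random.seed(0)
# Spot-check: x < y implies x^3 < y^3 for randomly sampled real pairs.
for _ in range(1000):
    x, y = sorted(random.uniform(-100, 100) for _ in range(2))
    if x < y:
        assert f(x) < f(y)
```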
Example: \( f(x) = \ln x \) is Strictly Increasing on \( (0, \infty) \)
Consider the function
\[ f(x) = \ln x, \quad x \in (0, \infty). \]To determine whether \( f(x) \) is strictly increasing on \( (0, \infty) \), we use the second approach, which requires checking whether
\[ 0 < x < y \implies f(x) < f(y). \]Since the natural logarithm satisfies the well-known order-preserving property
\[ 0 < x < y \implies \ln x < \ln y, \]
it follows immediately that \( f(x) = \ln x \) is strictly increasing on \( (0, \infty) \).
This conclusion is based purely on the monotonicity property of the logarithmic function and does not require considering continuity or differentiability at individual points.
This definition is useful and provides a straightforward way to establish monotonicity when the function preserves order naturally. However, when the order-preserving property of the function is not immediately clear or becomes complicated, we must develop other techniques to determine whether the function is increasing or decreasing.
Using Graphs
Using graphs to determine monotonicity provides an intuitive and often immediate understanding of a function’s behavior. If a function’s graph can be constructed using the graphs of elementary functions or their transformations, we can visually inspect whether the function is increasing, decreasing, or neither.
By observing the graph, we can identify whether the function is going up (increasing), going down (decreasing), or exhibiting turning points where the function changes from increasing to decreasing or vice versa. Additionally, if the graph becomes horizontal over an interval, the function may be constant in that region, meaning it is non-decreasing or non-increasing but not strictly monotonic.
This graphical approach is particularly useful for functions whose algebraic order-preserving properties are difficult to verify directly. However, for more complex functions, where the monotonic trend is not immediately evident from the graph, we may need to employ analytical techniques such as derivatives to rigorously determine monotonicity.
Using Derivatives to Determine Monotonicity in an Interval
If a function \( f \) is continuous at all points in an interval \( I \) and differentiable at all but finitely many points, its derivative can effectively determine its monotonic behavior. For this class of functions, the derivative provides a direct way to establish whether the function is increasing, decreasing, or constant.
Sufficient Conditions for Monotonicity
- If \( f'(x) > 0 \) for all \( x \in I \), except possibly at points where \( f \) is not differentiable, then \( f \) is strictly increasing on \( I \).
- If \( f'(x) < 0 \) for all \( x \in I \), except at points of non-differentiability, then \( f \) is strictly decreasing on \( I \).
- If \( f'(x) = 0 \) for all \( x \in I \), then \( f \) is constant on \( I \).
Since \( f \) is assumed to be continuous, any isolated points of non-differentiability do not disrupt the overall increasing or decreasing nature of \( f \). The derivative provides a simple and powerful method to determine monotonicity, especially when graphical or order-based approaches are less convenient. However, in cases where \( f'(x) = 0 \) for all \( x \), additional analysis is unnecessary, as the function must be constant throughout the interval.
Proof of Monotonicity Using Derivatives
Let \( f \) be a function that is differentiable at all points in an open interval \( (a, b) \). We prove the following monotonicity results:
- If \( f'(x) > 0 \) for all \( x \in (a, b) \), then \( f \) is strictly increasing on \( (a, b) \).
- If \( f'(x) < 0 \) for all \( x \in (a, b) \), then \( f \) is strictly decreasing on \( (a, b) \).
- If \( f'(x) = 0 \) for all \( x \in (a, b) \), then \( f \) is constant on \( (a, b) \).
Proof of Strictly Increasing Case
Assume \( f'(x) > 0 \) for all \( x \in (a, b) \). To show that \( f \) is strictly increasing, take any two points \( x, y \in (a, b) \) with \( x < y \). Since \( f \) is differentiable on \( (a, b) \), it is also continuous on \( [x, y] \), allowing us to apply the Mean Value Theorem (MVT) on \( [x, y] \).
By the MVT, there exists some \( x^* \in (x, y) \) such that
\[ f(y) - f(x) = f'(x^*)(y - x). \]Since \( f'(x^*) > 0 \) and \( y - x > 0 \), we obtain
\[ f(y) - f(x) > 0, \quad \text{i.e.,} \quad f(x) < f(y). \]
Thus, for any \( x, y \in (a, b) \) with \( x < y \), we conclude that \( f(x) < f(y) \), proving that \( f \) is strictly increasing on \( (a, b) \).
Proof of Strictly Decreasing Case
If \( f'(x) < 0 \) for all \( x \in (a, b) \), we follow the same argument. Applying the MVT to any \( x, y \in (a, b) \) with \( x < y \), there exists some \( x^* \in (x, y) \) such that
\[ f(y) - f(x) = f'(x^*)(y - x). \]Since \( f'(x^*) < 0 \) and \( y - x > 0 \), we obtain
\[ f(y) - f(x) < 0, \quad \text{i.e.,} \quad f(x) > f(y). \]
Thus, for any \( x, y \in (a, b) \) with \( x < y \), we conclude that \( f(x) > f(y) \), proving that \( f \) is strictly decreasing on \( (a, b) \).
Proof that \( f \) is Constant if \( f'(x) = 0 \)
If \( f'(x) = 0 \) for all \( x \in (a, b) \), then for any \( x, y \in (a, b) \) with \( x < y \), the MVT guarantees the existence of some \( x^* \in (x, y) \) such that
\[ f(y) - f(x) = f'(x^*)(y - x). \]Since \( f'(x^*) = 0 \), we get
\[ f(y) - f(x) = 0, \quad \text{i.e.,} \quad f(x) = f(y). \]
Since this holds for all choices of \( x, y \in (a, b) \), \( f(x) \) must be constant on \( (a, b) \).
\(\blacksquare\)
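The MVT step used in all three cases can be illustrated numerically. The sketch below uses the sample function \( f(x) = x^2 \) on \( [x, y] = [1, 3] \) (an arbitrary choice, not taken from the text), where the mean-value point can be found exactly:

```python
# Numerical illustration of the MVT step: f(y) - f(x) = f'(x*) (y - x).
def f(t):
    return t ** 2

def fprime(t):
    return 2 * t

x, y = 1.0, 3.0
slope = (f(y) - f(x)) / (y - x)        # (9 - 1) / 2 = 4

# The MVT point x* solves f'(x*) = slope; here 2*x* = 4, so x* = 2.
x_star = slope / 2
assert x < x_star < y
assert fprime(x_star) == slope

# Since f'(x*) > 0 and y - x > 0, we get f(y) - f(x) > 0, i.e. f(x) < f(y).
assert f(y) - f(x) == fprime(x_star) * (y - x)
```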
The converse of the previous theorems is not true. That is, if a function is strictly increasing, it does not necessarily imply that \( f'(x) > 0 \) for all \( x \) in its domain.
For example, consider the function
\[ f(x) = x^3. \]This function is strictly increasing on \( \mathbb{R} \), as \( x < y \) implies \( x^3 < y^3 \). However, its derivative is
\[ f'(x) = 3x^2. \]
Since \( f'(0) = 0 \), we see that \( f' \) is not strictly positive at all points, contradicting the naive converse statement.
Necessary Conditions for Monotonicity
Although strict positivity of \( f' \) is not necessary for strict monotonicity, we do have a weaker necessary condition:
- If \( f \) is strictly increasing on an interval, then \( f'(x) \geq 0 \) for all \( x \) where \( f \) is differentiable.
- If \( f \) is strictly decreasing on an interval, then \( f'(x) \leq 0 \) for all \( x \) where \( f \) is differentiable.
However, this result alone does not distinguish strictly increasing and non-decreasing functions, or strictly decreasing and non-increasing functions.
Distinction Between Strictly Increasing and Non-Decreasing Functions
It turns out that the key difference lies in where \( f' \) is zero:
- For a strictly increasing function, \( f'(x) > 0 \) at all points of differentiability, except possibly at isolated points where \( f'(x) = 0 \). This ensures that the function never stays constant over an interval.
- For a non-decreasing function, \( f'(x) = 0 \) may hold over an entire interval, meaning the function remains constant in some regions while still being non-decreasing overall.
Thus, while both strictly increasing and non-decreasing functions satisfy \( f'(x) \geq 0 \), strict monotonicity demands that \( f' \) is not zero over any subinterval where \( f \) remains differentiable.
For example, consider the function
\[ f(x) = x + \sin x. \]Its derivative is
\[ f'(x) = 1 + \cos x. \]Since \( -1 \leq \cos x \leq 1 \), we have
\[ 0 \leq 1 + \cos x \leq 2, \quad \text{so } f'(x) \geq 0. \]The derivative equals zero at points where \( \cos x = -1 \), which occurs at
\[ x = (2k + 1)\pi, \quad k \in \mathbb{Z}. \]
These points are isolated, meaning \( f' \) is positive everywhere else. Since \( f' \) does not vanish over any interval, \( f \) is strictly increasing on \( \mathbb{R} \).
Now consider the function
\[ g(x) = \begin{cases} x, & x \leq 1, \\ 1, & 1 < x < 2, \\ x - 1, & x \geq 2. \end{cases} \]The derivative is
\[ g'(x) = \begin{cases} 1, & x < 1, \\ 0, & 1 < x < 2, \\ 1, & x > 2. \end{cases} \]
Although \( g' \) is always non-negative, it is zero over the interval \( (1,2) \), meaning \( g(x) \) is constant there. The function is non-decreasing but not strictly increasing. Moreover, \( g(x) \) is not differentiable at \( x = 1 \) and \( x = 2 \), but this does not affect its overall monotonic classification.
Example
Consider the function
\[ f(x) = x^3 - x^2 + 4x. \]Its derivative is
\[ f'(x) = 3x^2 - 2x + 4. \]The quadratic polynomial \( 3x^2 - 2x + 4 \) has a discriminant
\[ \Delta = (-2)^2 - 4 \cdot 3 \cdot 4 = 4 - 48 = -44 < 0. \]For a quadratic expression \( ax^2 + bx + c \), if \( a > 0 \) and \( \Delta < 0 \), then the expression is strictly positive for all \( x \). Here, since \( a = 3 > 0 \) and \( \Delta = -44 < 0 \), it follows that
\[ f'(x) = 3x^2 - 2x + 4 > 0 \quad \forall x \in \mathbb{R}. \]
Thus, \( f'(x) > 0 \) for all \( x \), meaning \( f(x) \) is strictly increasing on \( \mathbb{R} \). Since \( f'(x) \) never equals zero, the function has no horizontal tangents and is never constant over any interval.
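The positivity of the quadratic \( 3x^2 - 2x + 4 \) can be double-checked numerically. A sketch: the discriminant is negative and the minimum value, attained at the vertex, is still positive.

```python
# f'(x) = 3x^2 - 2x + 4: a = 3 > 0 and the discriminant is negative,
# so the quadratic is strictly positive for every real x.
a, b, c = 3, -2, 4
disc = b * b - 4 * a * c          # (-2)^2 - 4*3*4 = -44
assert disc < 0

# Its minimum, attained at the vertex x = -b/(2a) = 1/3, is positive (11/3).
x_vertex = -b / (2 * a)
min_value = a * x_vertex ** 2 + b * x_vertex + c
assert min_value > 0
```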
Example
Finding the Interval Where \( f(x) = x^x \) is Strictly Increasing for \( x > 0 \)
Consider the function
\[ f(x) = x^x, \quad x > 0. \]Rewriting using exponentials:
\[ f(x) = e^{x \ln x}. \]Differentiating,
\[ f'(x) = e^{x \ln x} \cdot \frac{d}{dx}(x \ln x) = e^{x \ln x} (\ln x + 1) = x^x (\ln x + 1). \]Since \( e^{x \ln x} = x^x > 0 \) for all \( x > 0 \), the sign of \( f'(x) \) depends entirely on \( (\ln x + 1) \).
To decide when \( f \) is increasing or decreasing, we examine the sign changes of \( f'(x) \). Setting \( f'(x) = 0 \) gives
\[ \ln x + 1 = 0 \quad \implies \quad x = e^{-1} = \frac{1}{e}. \]
For \( x < 1/e \), we have \( \ln x < -1 \), so \( \ln x + 1 < 0 \), implying \( f'(x) < 0 \), meaning \( f(x) \) is strictly decreasing in \( (0, 1/e) \).
For \( x > 1/e \), we have \( \ln x > -1 \), so \( \ln x + 1 > 0 \), implying \( f'(x) > 0 \), meaning \( f(x) \) is strictly increasing in \( (1/e, \infty) \).
Thus, \( f(x) \) is strictly increasing for \( x > 1/e \) and strictly decreasing for \( 0 < x < 1/e \).
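A numerical spot-check of this conclusion (a sketch; the sample points are arbitrary):

```python
import math

def f(x):
    return x ** x  # x^x = e^{x ln x} for x > 0

c = 1 / math.e  # the critical point x = 1/e

# Strictly decreasing on (0, 1/e): sampled values go down ...
left = [f(x) for x in [0.05, 0.1, 0.2, 0.3]]
assert all(a > b for a, b in zip(left, left[1:]))

# ... and strictly increasing on (1/e, infinity): sampled values go up.
right = [f(x) for x in [0.5, 1.0, 2.0, 3.0]]
assert all(a < b for a, b in zip(right, right[1:]))

# Consequently x^x attains its minimum on (0, infinity) at x = 1/e.
assert f(c) < f(0.1) and f(c) < f(1.0)
```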
Critical Points
Consider a function \( f \) that is continuous on the closed interval \( [a, b] \). A critical point is a point in the open interval \( (a, b) \) where the function may change its monotonicity. The fundamental question is: where can such a change in monotonicity occur? For a continuous function, this happens only at points where the derivative either vanishes or fails to exist.
A function is increasing on an interval if its derivative is positive throughout that interval, i.e.,
\[ f'(x) > 0 \quad \forall x \in (a, b). \]Similarly, a function is decreasing on an interval if its derivative is negative, i.e.,
\[ f'(x) < 0 \quad \forall x \in (a, b). \]
A function can switch from increasing to decreasing (or vice versa) only if the derivative transitions from positive to negative or from negative to positive. The only points where such a transition is possible are those where the derivative ceases to be strictly positive or strictly negative. This can happen in two ways:
- The derivative is zero: If \( f'(c) = 0 \), the function has a horizontal tangent at \( x = c \). This suggests a possible extremum, as the function may attain a local maximum or minimum at such points.
- The derivative does not exist: If \( f'(c) \) is undefined, the function exhibits a non-differentiable behavior such as a sharp corner or cusp. These are also potential points where the function might change its monotonicity.
Thus, a critical point of \( f \) is any point \( c \in (a, b) \) where
\[ f'(c) = 0 \quad \text{or} \quad f'(c) \text{ does not exist.} \]
Every local extremum (local maximum or minimum) of a differentiable function must be a critical point, but not every critical point corresponds to a local extremum. For example, consider \( f(x) = x^3 \), where
\[ f'(x) = 3x^2. \]
At \( x = 0 \), the derivative vanishes, yet the function remains increasing on both sides of \( x = 0 \), showing that a critical point need not imply a change in monotonicity. Critical points are thus necessary but not sufficient conditions for local extrema.
Example
Let \( f: \mathbb{R} \to \mathbb{R} \) be defined as
\[ f(x) = \begin{cases} 3x + 2x^2 - \dfrac{4x^3}{3}, & x > 0, \\ 3x e^x, & x \leq 0. \end{cases} \]
Determine the interval in which \( f(x) \) is increasing.
Solution:
It is clearly a continuous function. [We must always check continuity before differentiating.]
Compute \( f'(x) \) separately in the given domains.
For \( x > 0 \):
\[ f'(x) = 3 + 4x - 4x^2. \]Factoring,
\[ f'(x) = -(4x^2 - 4x - 3) = -(2x+1)(2x-3). \]For \( x \leq 0 \):
\[ f'(x) = 3e^x + 3x e^x = 3e^x(x+1). \]
To find critical points, solve \( f'(x) = 0 \).
For \( x > 0 \), solving \( -(2x+1)(2x-3) = 0 \) gives \( x = -\frac{1}{2} \) and \( x = \frac{3}{2} \). However, since \( x > 0 \), discard \( x = -\frac{1}{2} \), leaving only \( x = \frac{3}{2} \) as a critical point in this domain.
For \( x \leq 0 \), solving \( 3e^x(x+1) = 0 \) gives \( x+1 = 0 \), so \( x = -1 \).
Next, check differentiability at \( x = 0 \). Compute the one-sided limits of \( f'(x) \):
\[ \lim_{x \to 0^-} f'(x) = 3e^0(0+1) = 3, \qquad \lim_{x \to 0^+} f'(x) = 3 + 0 - 0 = 3. \]Since these are equal and \( f \) is continuous at \( 0 \), \( f(x) \) is differentiable at \( x = 0 \) with \( f'(0) = 3 \neq 0 \), meaning \( x = 0 \) is not a critical point.
Thus, the only critical points are \( x = -1 \) and \( x = \frac{3}{2} \).
To determine where \( f'(x) > 0 \), analyze the sign of \( f'(x) \) in the intervals divided by these critical points:
- For \( x < -1 \), \( 3e^x(x+1) \) is negative since \( x+1 < 0 \).
- For \( -1 < x \leq 0 \), \( 3e^x(x+1) \) is positive since \( x+1 > 0 \) and \( e^x > 0 \).
- For \( 0 < x < \frac{3}{2} \), \( -(2x+1)(2x-3) \) is positive since \( 2x+1 > 0 \) and \( 2x-3 < 0 \).
- For \( x > \frac{3}{2} \), \( -(2x+1)(2x-3) \) is negative since both factors are positive.
Thus, \( f(x) \) is increasing in the interval \( (-1, \frac{3}{2}) \).
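The sign analysis above can be verified numerically using the two derivative pieces from the worked example. A sketch (the sample points inside each interval are arbitrary):

```python
import math

def fprime(x):
    # derivative pieces from the worked example:
    #   f'(x) = -(2x + 1)(2x - 3)   for x > 0
    #   f'(x) = 3 e^x (x + 1)       for x <= 0
    if x > 0:
        return -(2 * x + 1) * (2 * x - 3)
    return 3 * math.exp(x) * (x + 1)

# Sign of f' on the intervals cut out by the critical points -1 and 3/2:
assert fprime(-2) < 0        # x < -1: decreasing
assert fprime(-0.5) > 0      # -1 < x <= 0: increasing
assert fprime(1) > 0         # 0 < x < 3/2: increasing
assert fprime(2) < 0         # x > 3/2: decreasing

# The two one-sided derivative formulas agree at 0 (both give 3),
# consistent with x = 0 not being a critical point.
assert math.isclose(fprime(0.0), -(2 * 0 + 1) * (2 * 0 - 3))
```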
Example
Let \( f: \mathbb{R} \to \mathbb{R} \) be defined as
Determine the interval in which \( f(x) \) is increasing and decreasing.
Solution:
To find the intervals of increase and decrease, compute the derivative:
Factoring out common terms,
Further factoring,
Setting \( f'(x) = 0 \) gives the equations
Solving \( \sin 2x = 0 \) within the given interval,
\[ 2x = n\pi \quad \implies \quad x = \frac{n\pi}{2}, \quad n \in \mathbb{Z}. \]In \( \left[ -\frac{\pi}{6}, \frac{\pi}{2} \right] \) this gives \( x = 0 \) and the endpoint \( x = \frac{\pi}{2} \); the interior solution is \( x = 0 \).
Solving \( 2\sin x + 1 = 0 \),
\[ \sin x = -\frac{1}{2}. \]The general solution is
\[ x = n\pi + (-1)^n \left( -\frac{\pi}{6} \right), \quad n \in \mathbb{Z}. \]
For \( x \in \left[ -\frac{\pi}{6}, \frac{\pi}{2} \right] \), the valid solution is \( x = -\frac{\pi}{6} \).
Thus, the critical points are \( x = -\frac{\pi}{6} \) and \( x = 0 \).
Performing sign analysis on \( f'(x) \):
- For \( x \in \left( -\frac{\pi}{6}, 0 \right) \), \( f'(x) < 0 \), so \( f(x) \) is decreasing.
- For \( x \in \left( 0, \frac{\pi}{2} \right) \), \( f'(x) > 0 \), so \( f(x) \) is increasing.
Thus, \( f(x) \) is increasing in \( \left( 0, \frac{\pi}{2} \right) \) and decreasing in \( \left( -\frac{\pi}{6}, 0 \right) \).
Critical Points and Points of Discontinuity
Critical points are defined as points in the open interval \( (a, b) \) where the derivative either vanishes or does not exist. However, the definition specifically excludes points where the function itself is discontinuous. The reason for this exclusion is that the behavior of a function at points of discontinuity is fundamentally different from that at critical points, even though changes in monotonicity may still occur at discontinuities.
For a continuous function, every transition from increasing to decreasing corresponds to a local maximum, and every transition from decreasing to increasing corresponds to a local minimum. If the function is increasing on both sides of a critical point, it remains increasing at the critical point itself. Similarly, if the function is decreasing on both sides, it remains decreasing at the critical point. This structured behavior allows for a well-defined analysis of local extrema in terms of critical points.
However, at points of discontinuity, the situation is more complicated. Discontinuities introduce abrupt jumps in function values, and strange phenomena can occur that do not align with the behavior at critical points. Consider the function
\[ f(x) = \begin{cases} x, & x \leq 1, \\ x - 2, & x > 1. \end{cases} \]
The graph is as follows:
The derivative is given by
\[ f'(x) = 1, \quad x \neq 1, \]
which exists everywhere except at \( x = 1 \). Since \( f'(x) \) is positive on both sides of \( x = 1 \), one might expect the function to be increasing at \( x = 1 \) as well. However, the graph of \( f(x) \) clearly shows a local maximum at \( x = 1 \), contradicting the typical behavior of critical points in continuous functions. This anomaly occurs precisely because of the discontinuity at \( x = 1 \).
Discontinuities, therefore, require separate consideration. The usual rules that apply to continuous functions—such as the necessary condition that local extrema must occur at critical points—do not always hold in the presence of discontinuities. A more detailed analysis is required to handle such cases properly.
Example
Find the number of critical points of the function
Solution:
A critical point occurs where \( f'(x) = 0 \) or \( f'(x) \) does not exist. Differentiating using the product rule,
Rewriting,
Simplifying the expression in parentheses,
Thus,
Setting \( f'(x) = 0 \),
Since the denominator is nonzero for \( x \neq 2 \), the numerator must be zero, giving
Since \( (x-2)^{-1/3} \) is undefined at \( x = 2 \), \( f'(2) \) does not exist; because \( f \) itself is continuous at \( x = 2 \), the point \( x = 2 \) is also a critical point.
Thus, the number of critical points is 2.
Example
Let \( f \) be a real-valued function, defined on \( \mathbb{R} - \{-1,1\} \), and given by
Determine the intervals in which \( f(x) \) is increasing.
Solution:
For \( f(x) \) to be increasing, \( f'(x) \) must be positive. Differentiating,
Simplifying,
Rewriting in a common form,
Further simplifying,
The sign analysis reveals:
From the sign analysis, \( f'(x) > 0 \) for
Thus, \( f(x) \) is increasing in
Note: We include \( 1 \) in the solution even though \( f'(x) = 0 \) there, because it is just an isolated point.
Monotonicity and Algebraic Transformations
Monotonicity properties remain preserved or undergo predictable changes under algebraic transformations. By understanding these transformations, conclusions about function behavior can often be drawn without explicit differentiation.
- Negation Rule:
If \( f(x) \) is increasing, then \( -f(x) \) is decreasing, and vice versa.
Since \( f(x) \) increasing means \( f(x_1) \leq f(x_2) \) for \( x_1 < x_2 \), multiplying by \( -1 \) reverses the inequality:\[ -f(x_1) \geq -f(x_2), \]proving that \( -f(x) \) is decreasing. Similarly, if \( f(x) \) is decreasing, then \( -f(x) \) is increasing.
- Reciprocal Rule:
If \( f(x) \) never crosses zero, meaning it is either always positive or always negative, then \( f(x) \) increasing implies \( \frac{1}{f(x)} \) is decreasing.
- If \( f(x) > 0 \) and increasing, then for \( x_1 < x_2 \),
\[ f(x_1) \leq f(x_2). \]Taking reciprocals (which reverses order for positive numbers),
\[ \frac{1}{f(x_1)} \geq \frac{1}{f(x_2)}, \]proving that \( \frac{1}{f(x)} \) is decreasing.
- If \( f(x) < 0 \) and increasing, then \( f(x_1) \leq f(x_2) \). Since both \( f(x_1) \) and \( f(x_2) \) are negative, dividing by their product (which is positive) gives
\[ \frac{1}{f(x_2)} \leq \frac{1}{f(x_1)}, \]
proving that \( \frac{1}{f(x)} \) is decreasing.
- Sum Rule:
If \( f(x) \) and \( g(x) \) are both increasing, then \( f(x) + g(x) \) is also increasing. Similarly, if both are decreasing, then their sum is decreasing. However, if one function is increasing and the other is decreasing, the monotonicity of their sum depends on their relative rates of growth.
Differentiating \( f(x) + g(x) \),
\[ \frac{d}{dx} [f(x) + g(x)] = f'(x) + g'(x). \]If \( f(x) \) and \( g(x) \) are both increasing, then \( f'(x) \geq 0 \) and \( g'(x) \geq 0 \), so their sum satisfies
\[ f'(x) + g'(x) \geq 0. \]This implies that \( f(x) + g(x) \) is increasing.
If \( f(x) \) and \( g(x) \) are both decreasing, then \( f'(x) \leq 0 \) and \( g'(x) \leq 0 \), so
\[ f'(x) + g'(x) \leq 0. \]This implies that \( f(x) + g(x) \) is decreasing.
However, if \( f(x) \) is increasing and \( g(x) \) is decreasing, then \( f'(x) \geq 0 \) and \( g'(x) \leq 0 \), so
\[ f'(x) + g'(x) \]may be positive, negative, or zero, depending on the relative magnitudes of \( f'(x) \) and \( g'(x) \). Thus, \( f(x) + g(x) \) may be increasing, decreasing, or constant.
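The ambiguity in the mixed case can be seen with small numerical examples. The two pairs below are illustrative choices (not from the text): in one, the increasing function dominates; in the other, the decreasing one does.

```python
# Increasing + decreasing: the sum's monotonicity depends on rates.
# With f(x) = 2x (increasing) and g(x) = -x (decreasing), f + g = x increases;
# with f(x) = x and g(x) = -2x, f + g = -x decreases.
xs = [float(x) for x in range(-5, 6)]

sum1 = [2 * x + (-x) for x in xs]      # = x, increasing
assert all(a < b for a, b in zip(sum1, sum1[1:]))

sum2 = [x + (-2 * x) for x in xs]      # = -x, decreasing
assert all(a > b for a, b in zip(sum2, sum2[1:]))
```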
- Product Rule:
If \( f(x) \) and \( g(x) \) are both positive and increasing, then their product \( f(x) g(x) \) is also increasing. Similarly, if \( f(x) \) and \( g(x) \) are both positive and decreasing, then \( f(x) g(x) \) is decreasing. However, if \( f(x) \) is increasing and \( g(x) \) is decreasing, the monotonicity of their product is uncertain.
Differentiating \( f(x) g(x) \) using the product rule,
\[ \frac{d}{dx} [f(x) g(x)] = f'(x) g(x) + f(x) g'(x). \]If \( f(x) \) and \( g(x) \) are both increasing and positive, then \( f'(x) \geq 0 \), \( g'(x) \geq 0 \), and since \( f(x) > 0 \) and \( g(x) > 0 \),
\[ f'(x) g(x) + f(x) g'(x) \geq 0. \]This ensures that \( f(x) g(x) \) is increasing.
If \( f(x) \) and \( g(x) \) are both decreasing and positive, then \( f'(x) \leq 0 \), \( g'(x) \leq 0 \), and since \( f(x) > 0 \) and \( g(x) > 0 \),
\[ f'(x) g(x) + f(x) g'(x) \leq 0. \]This ensures that \( f(x) g(x) \) is decreasing.
However, if \( f(x) \) is increasing and \( g(x) \) is decreasing, then \( f'(x) \geq 0 \) and \( g'(x) \leq 0 \), meaning
\[ f'(x) g(x) + f(x) g'(x) \]may be positive, negative, or zero depending on the relative growth rates of \( f(x) \) and \( g(x) \). Thus, \( f(x) g(x) \) may be increasing, decreasing, or constant.
Example
Consider the functions \( f(x) = e^x \) and \( g(x) = x^2 \).
- The function \( e^x \) is strictly increasing for all \( x \in \mathbb{R} \).
- The function \( x^2 \) is increasing for \( x > 0 \) and decreasing for \( x < 0 \).
Since both \( f(x) \) and \( g(x) \) are increasing for \( x > 0 \), their product
\[ h(x) = x^2 e^x \]is also increasing for \( x > 0 \), by the product rule for increasing functions.
However, for \( x < 0 \), the function \( g(x) = x^2 \) is decreasing, while \( f(x) = e^x \) remains increasing. In this case, the monotonicity of \( h(x) = x^2 e^x \) cannot be directly determined from the product rule. To analyze \( h(x) \) in \( x < 0 \), differentiation is required.
Computing the derivative,
\[ h'(x) = 2x e^x + x^2 e^x = e^x (x^2 + 2x). \]Factoring,
\[ h'(x) = e^x x (x+2). \]Since \( e^x > 0 \) for all \( x \), the sign of \( h'(x) \) is determined by \( x(x+2) \).
- For \( x < -2 \), both \( x \) and \( x+2 \) are negative, so \( x(x+2) \) is positive, making \( h'(x) > 0 \), meaning \( h(x) \) is increasing in \( (-\infty, -2) \).
- For \( -2 < x < 0 \), we have \( x < 0 \) and \( x+2 > 0 \), so \( x(x+2) \) is negative, making \( h'(x) < 0 \), meaning \( h(x) \) is decreasing in \( (-2,0) \).
This confirms that when one function is increasing and the other is decreasing, direct conclusions cannot be drawn without differentiation.
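The sign analysis of \( h(x) = x^2 e^x \) can be spot-checked numerically. This is a quick sanity check on sampled points, not a proof; the helper `is_monotone` is introduced here for illustration:

```python
import math

def h(x):
    """h(x) = x^2 * e^x, the product analyzed above."""
    return x * x * math.exp(x)

def is_monotone(f, a, b, n=1000, increasing=True):
    """Check f at n interior sample points of (a, b) for the claimed direction."""
    xs = [a + (b - a) * i / n for i in range(1, n)]
    pairs = list(zip(xs, xs[1:]))
    if increasing:
        return all(f(u) < f(v) for u, v in pairs)
    return all(f(u) > f(v) for u, v in pairs)

print(is_monotone(h, -10, -2, increasing=True))   # h increasing on (-inf, -2)
print(is_monotone(h, -2, 0, increasing=False))    # h decreasing on (-2, 0)
```

Both checks agree with the sign of \( h'(x) = e^x x(x+2) \) on the respective intervals.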
Example
Consider the function
\[ f(x) = x - \frac{1}{1 + e^x}. \]To determine its monotonicity, observe that as \( x \) increases, the function \( e^x \) also increases. Since \( e^x > 0 \) for all \( x \), the denominator \( 1 + e^x \) is always positive and increasing.
Since \( 1 + e^x \) is increasing, its reciprocal \( \frac{1}{1 + e^x} \) is decreasing by the reciprocal rule (since the reciprocal of a positive increasing function is decreasing).
Now, consider the term \( -\frac{1}{1+e^x} \). Since \( \frac{1}{1+e^x} \) is decreasing, its negation must be increasing.
Thus, both \( x \) and \( -\frac{1}{1+e^x} \) are increasing, and by the sum rule, their sum
\[ f(x) = x - \frac{1}{1+e^x} \]is also increasing.
This provides a direct conclusion without differentiation: since both components of \( f(x) \) are increasing, the function itself must be increasing for all \( x \).
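The sum-rule conclusion can likewise be confirmed on sampled points. A minimal numeric check (not a proof) that \( f(x) = x - \frac{1}{1+e^x} \) is increasing:

```python
import math

def f(x):
    # f(x) = x - 1/(1 + e^x); both x and -1/(1 + e^x) are increasing
    return x - 1.0 / (1.0 + math.exp(x))

# sample a wide range and confirm f is strictly increasing on it
xs = [x / 10.0 for x in range(-100, 101)]   # -10.0 .. 10.0
assert all(f(a) < f(b) for a, b in zip(xs, xs[1:]))
print("f is increasing on the sampled range")
```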
-
Monotonicity Under Composition
Let \( f \) and \( g \) be differentiable functions, and consider the composition \( h(x) = f(g(x)) \). The monotonicity of \( h(x) \) is determined by the sign of its derivative, which, by the chain rule, is given by
\[ h'(x) = f'(g(x)) \cdot g'(x). \]The sign of \( h'(x) \) follows directly from the product \( f'(g(x)) g'(x) \), leading to the following cases.
-
If \( f(x) \) is increasing and \( g(x) \) is increasing, then \( f(g(x)) \) is increasing.
Since \( f(x) \) is increasing, \( f'(x) \geq 0 \).
Since \( g(x) \) is increasing, \( g'(x) \geq 0 \).
Since the product of two nonnegative numbers is nonnegative,\[ h'(x) = f'(g(x)) g'(x) \geq 0. \]Thus, \( f(g(x)) \) is increasing.
-
If \( f(x) \) is increasing and \( g(x) \) is decreasing, then \( f(g(x)) \) is decreasing.
Since \( f(x) \) is increasing, \( f'(x) \geq 0 \).
Since \( g(x) \) is decreasing, \( g'(x) \leq 0 \).
Since the product of a nonnegative and a nonpositive number is nonpositive,\[ h'(x) = f'(g(x)) g'(x) \leq 0. \]Thus, \( f(g(x)) \) is decreasing.
-
If \( f(x) \) is decreasing and \( g(x) \) is increasing, then \( f(g(x)) \) is decreasing.
Since \( f(x) \) is decreasing, \( f'(x) \leq 0 \).
Since \( g(x) \) is increasing, \( g'(x) \geq 0 \).
Since the product of a nonpositive and a nonnegative number is nonpositive,\[ h'(x) = f'(g(x)) g'(x) \leq 0. \]Thus, \( f(g(x)) \) is decreasing.
-
If \( f(x) \) is decreasing and \( g(x) \) is decreasing, then \( f(g(x)) \) is increasing.
Since \( f(x) \) is decreasing, \( f'(x) \leq 0 \).
Since \( g(x) \) is decreasing, \( g'(x) \leq 0 \).
Since the product of two nonpositive numbers is nonnegative,\[ h'(x) = f'(g(x)) g'(x) \geq 0. \]Thus, \( f(g(x)) \) is increasing.
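The four composition cases can be illustrated with concrete functions. This sketch uses \( e^x \) (increasing) and \( -x^3 \) (decreasing) as assumed examples and classifies each composition from sampled values:

```python
import math

def inc(x):
    """A strictly increasing function."""
    return math.exp(x)

def dec(x):
    """A strictly decreasing function."""
    return -x ** 3

def monotone_dir(f, a, b, n=500):
    """Classify f as 'inc' or 'dec' from sample points on [a, b]."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    if all(f(u) < f(v) for u, v in zip(xs, xs[1:])):
        return "inc"
    if all(f(u) > f(v) for u, v in zip(xs, xs[1:])):
        return "dec"
    return "neither"

print(monotone_dir(lambda x: inc(inc(x)), -2, 2))  # increasing of increasing -> "inc"
print(monotone_dir(lambda x: inc(dec(x)), -2, 2))  # increasing of decreasing -> "dec"
print(monotone_dir(lambda x: dec(inc(x)), -2, 2))  # decreasing of increasing -> "dec"
print(monotone_dir(lambda x: dec(dec(x)), -2, 2))  # decreasing of decreasing -> "inc"
```

The four outputs match the four cases derived from the chain rule above.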
-
Application of Monotonicity: One-One and Many-One Functions
One fundamental application of monotonicity is in determining whether a function is one-one (injective) or many-one. A function is one-one if distinct inputs always map to distinct outputs, meaning \( f(x_1) \neq f(x_2) \) for all \( x_1 \neq x_2 \). In terms of calculus, a continuous function is one-one if and only if it is either strictly increasing or strictly decreasing throughout its entire domain.
If a function is strictly increasing, then for any \( x_1 < x_2 \),
\[ f(x_1) < f(x_2), \]which guarantees that no two different inputs map to the same output. Similarly, if a function is strictly decreasing, then for any \( x_1 < x_2 \),
\[ f(x_1) > f(x_2), \]which again ensures one-one behavior.
A many-one function, on the other hand, has points where it turns, meaning that for multiple inputs, the function can yield the same output. These turns correspond to local maxima or local minima, where the derivative changes sign. At such points, the function ceases to be strictly increasing or strictly decreasing, which leads to different inputs mapping to the same output.
For example:
Consider a continuous piecewise-defined function \( f \) whose derivative is
\[ f'(x) = \begin{cases} 2x - 1, & x > 1, \\ -2x + 3, & x \leq 1. \end{cases} \]
For strict monotonicity, \( f'(x) \) should not change sign. Checking the sign of the derivative:
- For \( x > 1 \), \( f'(x) = 2x - 1 \). Since \( x > 1 \), it follows that \( 2x - 1 > 1 \), so \( f'(x) > 0 \), implying that \( f \) is increasing in this region.
-
For \( x \leq 1 \), \( f'(x) = -2x + 3 \). Evaluating at \( x = 1 \),
\[ f'(1) = -2(1) + 3 = 1 > 0. \]For \( x < 1 \), since \( -2x + 3 \) remains positive, \( f \) is also increasing in this region.
Since \( f' > 0 \) for all \( x \), the function is strictly increasing everywhere, meaning it is one-one. There are no turns, so there are no local maxima or minima.
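The sign check on the derivative can be automated. This sketch uses the two derivative formulas stated above (only the derivative, since the function itself is determined up to constants):

```python
# Derivative pieces from the example: f'(x) = 2x - 1 for x > 1, -2x + 3 for x <= 1.
def f_prime(x):
    return 2 * x - 1 if x > 1 else -2 * x + 3

# f'(x) > 0 at every sampled point: no sign change, hence no turning point there.
xs = [x / 100.0 for x in range(-500, 501)]   # -5.00 .. 5.00
assert all(f_prime(x) > 0 for x in xs)
print("f'(x) > 0 at all sampled points; no local max/min detected")
```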
Thus, monotonicity provides a powerful criterion for determining injectivity: a continuous function is one-one if and only if it is either strictly increasing or strictly decreasing on its entire domain.
Monotonicity and Injectivity in Discontinuous Functions
The previous analysis holds only for continuous functions, where strict monotonicity guarantees injectivity. However, for discontinuous functions, the situation is different. A function can be strictly increasing or decreasing on separate intervals while still failing to be one-one. Discontinuities introduce abrupt jumps, which may lead to multiple inputs mapping to the same output, making the function many-one despite maintaining strict monotonicity on each continuous segment.
Consider the function
\[ f(x) = \begin{cases} x^2, & x \leq 0, \\ 1 - x, & x > 0. \end{cases} \]This function is discontinuous at \( x = 0 \), since
\[ \lim_{x \to 0^-} f(x) = 0, \qquad \lim_{x \to 0^+} f(x) = 1. \]Since these limits are not equal, \( f(x) \) has a jump discontinuity at \( x = 0 \).
The derivative is given by
\[ f'(x) = \begin{cases} 2x, & x < 0, \\ -1, & x > 0. \end{cases} \]At \( x = 0 \), the derivative does not exist because the left-hand derivative
\[ \lim_{x \to 0^-} f'(x) = 0 \]differs from the right-hand derivative
\[ \lim_{x \to 0^+} f'(x) = -1. \]Thus, \( f(x) \) is non-differentiable at \( x = 0 \).
Now, examining monotonicity:
- For \( x < 0 \), \( f'(x) = 2x \) is negative, implying \( f(x) \) is decreasing on \( (-\infty, 0) \).
- For \( x > 0 \), \( f'(x) = -1 \) is also negative, so \( f(x) \) is decreasing on \( (0, \infty) \).
Since \( f(x) \) is decreasing on both intervals, one might expect it to be one-one. However, the discontinuity at \( x = 0 \) creates an inconsistency: the function jumps from \( f(0^-) = 0 \) to \( f(0^+) = 1 \), allowing multiple inputs to map to the same output.
Indeed, evaluating at two distinct points:
\[ f\left(-\tfrac{1}{2}\right) = \left(-\tfrac{1}{2}\right)^2 = \tfrac{1}{4}, \qquad f\left(\tfrac{3}{4}\right) = 1 - \tfrac{3}{4} = \tfrac{1}{4}. \]
Thus, \( f(-1/2) = f(3/4) \), confirming that \( f(x) \) is many-one, even though it is strictly decreasing everywhere it is continuous.
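Taking \( f(x) = x^2 \) for \( x \leq 0 \) and \( f(x) = 1 - x \) for \( x > 0 \) (a reconstruction consistent with the stated one-sided limits and derivative pieces), the many-one behavior can be confirmed directly:

```python
def f(x):
    # Piecewise function: decreasing on each side of x = 0,
    # with a jump from f(0-) = 0 up to f(0+) = 1.
    return x * x if x <= 0 else 1.0 - x

# two distinct inputs, one output: the function is many-one
print(f(-0.5))   # 0.25
print(f(0.75))   # 0.25
assert f(-0.5) == f(0.75)
```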
Application of Monotonicity: Determining the Range of Functions
When a function \( f(x) \) is monotonically increasing or monotonically decreasing on a given domain, determining its range is straightforward by evaluating the function at the endpoints.
If \( f(x) \) is increasing on \( [a, b] \), then for all \( x \in [a, b] \),
\[ f(a) \leq f(x) \leq f(b). \]Since \( f(x) \) attains every value between \( f(a) \) and \( f(b) \), the range of \( f(x) \) is
\[ [f(a), f(b)]. \]If \( f(x) \) is decreasing on \( [a, b] \), then
\[ f(b) \leq f(x) \leq f(a). \]Thus, the range is
\[ [f(b), f(a)]. \]For an open interval \( (a, b) \), the endpoint values are approached but not attained. If \( f(x) \) is increasing on \( (a, b) \), then
\[ f(a^+) < f(x) < f(b^-), \]so the range is
\[ \left( f(a^+), f(b^-) \right). \]If \( f(x) \) is decreasing on \( (a, b) \), then
\[ f(b^-) < f(x) < f(a^+), \]so the range is
\[ \left( f(b^-), f(a^+) \right). \]For a half-open interval, such as \( (a, b] \) or \( [a, b) \), the corresponding function values at the closed endpoint are included, while at the open endpoint they are not. If \( f(x) \) is increasing on \( (a, b] \), the range is
\[ \left( f(a^+), f(b) \right]. \]If \( f(x) \) is decreasing on \( (a, b] \), the range is
\[ \left[ f(b), f(a^+) \right). \]Similarly, for \( [a, b) \), the range is
\[ \left[ f(a), f(b^-) \right) \]for an increasing function and
\[ \left( f(b^-), f(a) \right] \]for a decreasing function.
Thus, the range of a monotonic function on a given domain is determined entirely by the function values at the interval endpoints, with inclusion or exclusion matching the interval type.
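For a continuous monotonic function on a closed interval, the endpoint rule above reduces to evaluating two points. The helper and the two sample functions below are hypothetical illustrations, not taken from the text:

```python
def monotonic_range(f, a, b):
    """Range of a continuous monotonic f on the closed interval [a, b]."""
    lo, hi = sorted((f(a), f(b)))   # order endpoint values regardless of direction
    return (lo, hi)

# hypothetical examples: an increasing and a decreasing function on [0, 3]
print(monotonic_range(lambda x: 2 * x + 1, 0, 3))   # (1, 7)
print(monotonic_range(lambda x: -x + 5, 0, 3))      # (2, 5)
```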
Example
Determine the range of the function
Solution:
The function is defined for all \( x > 0 \). To determine its range, analyze its monotonicity. Differentiating,
Since \( e^x > 0 \) and \( x > 0 \), the numerator is always positive, and the denominator is always positive. Thus, \( f'(x) > 0 \) for all \( x > 0 \), meaning \( f(x) \) is strictly increasing on \( (0, \infty) \).
Since \( f(x) \) is increasing, its infimum is approached as \( x \to 0^+ \) and its values grow without bound as \( x \to \infty \). Evaluating the one-sided limit at \( 0 \),
\[ f(0^+) = 1. \]Evaluating the limit at infinity,
\[ \lim_{x \to \infty} f(x) = \infty. \]Since \( f(x) \) is increasing on \( (0, \infty) \), it attains every value between \( f(0^+) = 1 \) and \( \infty \), so the range is
\[ (1, \infty). \]
Application of Monotonicity: Solving Inequalities
Monotonicity provides a powerful method for creating and proving some useful inequalities. A general approach is as follows: if a function \( h(x) = f(x) - g(x) \) is strictly increasing for \( x > a \) and satisfies \( h(a^+) \geq 0 \), then for all \( x > a \),
\[ h(x) > 0, \quad \text{that is,} \quad f(x) > g(x). \]A concrete application of this principle is proving the inequality
\[ \sin x < x \quad \text{for all } x > 0. \]Define
\[ h(x) = x - \sin x. \]At \( x = 0^+ \),
\[ h(0) = 0. \]Differentiating,
\[ h'(x) = 1 - \cos x. \]Since \( \cos x \leq 1 \) for all \( x \), with equality only at the isolated points \( x = 2n\pi \), it follows that
\[ h'(x) = 1 - \cos x \geq 0, \]with \( h'(x) > 0 \) except at isolated points. Thus, \( h(x) \) is strictly increasing for \( x > 0 \), meaning that for all \( x > 0 \),
\[ h(x) > h(0) = 0. \]This establishes that
\[ x - \sin x > 0, \]which simplifies to
\[ \sin x < x \quad \text{for all } x > 0. \]
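As a numerical sanity check (not a proof), \( x - \sin x \) stays positive at sampled points \( x > 0 \):

```python
import math

# h(x) = x - sin(x) should be strictly positive for x > 0
xs = [x / 100.0 for x in range(1, 1001)]   # 0.01 .. 10.00
assert all(x - math.sin(x) > 0 for x in xs)
print("sin x < x holds at all sampled x > 0")
```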
This approach generalizes to many other inequalities, where proving the monotonicity of an appropriate function allows one to make global conclusions about the inequality.
Example
Prove that
\[ \frac{x}{1+x} < \ln(1+x) < x \quad \text{for all } x > 0. \]
Solution:
To prove the given double inequality, consider it as two separate inequalities:
First Inequality: \( \frac{x}{1+x} < \ln(1+x) \)
Define
\[ f(x) = \ln(1+x) - \frac{x}{1+x}. \]Differentiating,
\[ f'(x) = \frac{1}{1+x} - \frac{(1+x) - x}{(1+x)^2} = \frac{1}{1+x} - \frac{1}{(1+x)^2}. \]Simplifying,
\[ f'(x) = \frac{(1+x) - 1}{(1+x)^2} = \frac{x}{(1+x)^2}. \]Since \( x > 0 \) implies \( f'(x) > 0 \), \( f(x) \) is strictly increasing. Evaluating at \( x = 0^+ \),
\[ f(0) = \ln 1 - 0 = 0. \]Since \( f(x) \) is strictly increasing and \( f(0) = 0 \), it follows that for all \( x > 0 \),
\[ f(x) > 0, \quad \text{that is,} \quad \frac{x}{1+x} < \ln(1+x). \]
Thus, the first inequality is proven.
Second Inequality: \( \ln(1+x) < x \)
Define
\[ g(x) = x - \ln(1+x). \]Differentiating,
\[ g'(x) = 1 - \frac{1}{1+x} = \frac{x}{1+x}. \]Since \( x > 0 \) implies \( g'(x) > 0 \), \( g(x) \) is strictly increasing. Evaluating at \( x = 0^+ \),
\[ g(0) = 0 - \ln 1 = 0. \]Since \( g(x) \) is strictly increasing and \( g(0) = 0 \), it follows that for all \( x > 0 \),
\[ g(x) > 0, \quad \text{that is,} \quad \ln(1+x) < x. \]
Thus, the second inequality is proven.
Since both inequalities have been established, the required result follows:
\[ \frac{x}{1+x} < \ln(1+x) < x \quad \text{for all } x > 0. \]
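The double inequality can be spot-checked numerically at a few points (a sanity check, not a proof):

```python
import math

# check x/(1+x) < ln(1+x) < x at sample points x > 0
for x in [0.001, 0.1, 1.0, 10.0, 1000.0]:
    lower = x / (1 + x)
    mid = math.log(1 + x)
    assert lower < mid < x
print("x/(1+x) < ln(1+x) < x verified at sampled points")
```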
Example
Prove that
\[ \frac{x}{1+x^2} < \tan^{-1}x < x \quad \text{for all } x > 0. \]
Solution:
To prove the given inequality, consider the two parts separately:
First Inequality: \( \frac{x}{1+x^2} < \tan^{-1}x \)
Define the function
\[ f(x) = \tan^{-1}x - \frac{x}{1+x^2}. \]Differentiating,
\[ f'(x) = \frac{1}{1+x^2} - \frac{(1+x^2) - x(2x)}{(1+x^2)^2} = \frac{1}{1+x^2} - \frac{1-x^2}{(1+x^2)^2}. \]Simplifying,
\[ f'(x) = \frac{(1+x^2) - (1-x^2)}{(1+x^2)^2}. \]Rewriting,
\[ f'(x) = \frac{2x^2}{(1+x^2)^2}. \]Since \( x > 0 \) implies \( f'(x) > 0 \), the function \( f(x) \) is strictly increasing for \( x > 0 \). Evaluating at \( x = 0^+ \),
\[ f(0) = \tan^{-1}0 - 0 = 0. \]Since \( f(x) \) is strictly increasing and \( f(0) = 0 \), it follows that for all \( x > 0 \),
\[ f(x) > 0, \quad \text{that is,} \quad \frac{x}{1+x^2} < \tan^{-1}x. \]
Thus, the first inequality is proven.
Second Inequality: \( \tan^{-1}x < x \)
Define the function
\[ g(x) = x - \tan^{-1}x. \]Differentiating,
\[ g'(x) = 1 - \frac{1}{1+x^2} = \frac{x^2}{1+x^2}. \]Since \( x > 0 \) implies \( g'(x) > 0 \), the function \( g(x) \) is strictly increasing for \( x > 0 \). Evaluating at \( x = 0^+ \),
\[ g(0) = 0 - \tan^{-1}0 = 0. \]Since \( g(x) \) is strictly increasing and \( g(0) = 0 \), it follows that for all \( x > 0 \),
\[ g(x) > 0, \quad \text{that is,} \quad \tan^{-1}x < x. \]
Thus, the second inequality is proven.
Conclusion:
Since both inequalities have been established, the required result follows:
\[ \frac{x}{1+x^2} < \tan^{-1}x < x \quad \text{for all } x > 0. \]
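As before, a quick numeric spot check of the double inequality (not a proof):

```python
import math

# check x/(1+x^2) < arctan(x) < x at sample points x > 0
for x in [0.001, 0.5, 1.0, 10.0, 100.0]:
    lower = x / (1 + x * x)
    assert lower < math.atan(x) < x
print("x/(1+x^2) < arctan x < x verified at sampled points")
```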
Example
For \( x \in (0, \pi/2) \), prove that
\[ \sin x > x - \frac{x^3}{6}. \]
Solution:
Define the function
\[ f(x) = \sin x - x + \frac{x^3}{6}. \]To analyze the sign of \( f(x) \), differentiate,
\[ f'(x) = \cos x - 1 + \frac{x^2}{2}. \]At this point, the sign of \( f'(x) \) is not immediately clear without evaluating specific values of \( x \). Instead of substituting numerical values, differentiate again to analyze its monotonicity,
\[ f''(x) = -\sin x + x. \]
For \( x \in (0, \pi/2) \), we observe that \( -\sin x + x > 0 \) because \( x > \sin x \) in this interval. This implies that \( f''(x) > 0 \), meaning \( f'(x) \) is strictly increasing in \( (0, \pi/2) \).
Since \( f'(0) = \cos 0 - 1 + 0 = 0 \), the strict increase of \( f'(x) \) for \( x > 0 \) ensures
\[ f'(x) > 0 \quad \text{for all } x \in (0, \pi/2). \]Thus, \( f(x) \) is strictly increasing in \( (0, \pi/2) \). Evaluating \( f(0) \),
\[ f(0) = \sin 0 - 0 + 0 = 0. \]Since \( f(x) \) is strictly increasing and \( f(0) = 0 \), it follows that for all \( x \in (0, \pi/2) \),
\[ f(x) = \sin x - x + \frac{x^3}{6} > 0. \]Rearranging,
\[ \sin x > x - \frac{x^3}{6}. \]
Hence, the required inequality
\[ \sin x > x - \frac{x^3}{6}, \quad x \in (0, \pi/2), \]is established using monotonicity.
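A numerical sanity check of this cubic lower bound for \( \sin x \) on sampled points of \( (0, \pi/2) \):

```python
import math

# check sin x > x - x^3/6 on interior sample points of (0, pi/2)
n = 1000
for i in range(1, n):
    x = (math.pi / 2) * i / n
    assert math.sin(x) > x - x ** 3 / 6
print("sin x > x - x^3/6 verified on sampled (0, pi/2)")
```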
Example
Prove that
\[ \tan x > x \quad \text{for } x \in (0, \pi/2). \]
Solution:
Define the function
\[ g(x) = \tan x - x. \]To analyze the sign of \( g(x) \), differentiate:
\[ g'(x) = \sec^2 x - 1. \]Since \( \sec^2 x = 1 + \tan^2 x \), we have
\[ g'(x) = \tan^2 x \geq 0. \]Since \( g'(x) \geq 0 \) on \( [0, \pi/2) \), vanishing only at \( x = 0 \), the function \( g(x) \) is strictly increasing on \( (0, \pi/2) \). Evaluating at \( x = 0^+ \),
\[ g(0) = \tan 0 - 0 = 0. \]Since \( g(x) \) is strictly increasing and \( g(0) = 0 \), it follows that for all \( x \in (0, \pi/2) \),
\[ g(x) > 0. \]Thus, the required inequality is established:
\[ \tan x > x \quad \text{for } x \in (0, \pi/2). \]
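A numerical spot check of \( \tan x > x \) on sampled points of \( (0, \pi/2) \):

```python
import math

# check tan x > x on interior sample points of (0, pi/2)
n = 1000
for i in range(1, n):
    x = (math.pi / 2) * i / n
    assert math.tan(x) > x
print("tan x > x verified on sampled (0, pi/2)")
```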
Example
Evaluate
\[ \lim_{x \to 0^+} \left\lfloor \frac{\sin x \tan x}{x^2} \right\rfloor, \]
where \( \lfloor \cdot \rfloor \) denotes the greatest integer function.
Solution:
To evaluate the given limit, we first establish that
\[ \sin x \tan x > x^2 \quad \text{for } x \in (0, \pi/2). \]Define
\[ f(x) = \sin x \tan x - x^2. \]Differentiating,
\[ f'(x) = \cos x \tan x + \sin x \sec^2 x - 2x. \]Rewriting,
\[ f'(x) = \sin x + \tan x \sec x - 2x. \]Factoring \( \tan x \),
\[ f'(x) = \tan x (\cos x + \sec x) - 2x. \]Adding and subtracting \( 2 \) inside the parentheses,
\[ f'(x) = \tan x (\cos x + \sec x - 2) + 2\tan x - 2x. \]Recognizing the perfect square,
\[ \cos x + \sec x - 2 = \frac{\cos^2 x - 2\cos x + 1}{\cos x} = \frac{(1 - \cos x)^2}{\cos x}. \]Thus,
\[ f'(x) = \tan x \cdot \frac{(1 - \cos x)^2}{\cos x} + 2(\tan x - x). \]Since \( \tan x > x \) for \( x \in (0, \pi/2) \), and \( \tan x > 0 \), \( \cos x > 0 \) on this interval, both terms of \( f'(x) \) are positive in \( (0, \pi/2) \). This means that \( f \) is a strictly increasing function. Since \( f(0) = 0 \),
\[ f(x) > 0 \quad \text{for } x \in (0, \pi/2), \]which implies
\[ \sin x \tan x > x^2. \]Dividing by \( x^2 \),
\[ \frac{\sin x \tan x}{x^2} > 1. \]Now, taking limits as \( x \to 0^+ \), we use the standard results:
\[ \lim_{x \to 0^+} \frac{\sin x}{x} = 1, \qquad \lim_{x \to 0^+} \frac{\tan x}{x} = 1. \]We get
\[ \lim_{x \to 0^+} \frac{\sin x \tan x}{x^2} = \lim_{x \to 0^+} \frac{\sin x}{x} \cdot \frac{\tan x}{x} = 1. \]Since \( \frac{\sin x \tan x}{x^2} > 1 \) for \( x \in (0, \pi/2) \) and its limit as \( x \to 0^+ \) is \( 1 \), the ratio approaches \( 1 \) from above. Hence \( \left\lfloor \frac{\sin x \tan x}{x^2} \right\rfloor = 1 \) for all \( x \) sufficiently close to \( 0 \), and therefore
\[ \lim_{x \to 0^+} \left\lfloor \frac{\sin x \tan x}{x^2} \right\rfloor = 1. \]
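As a numerical sanity check, the ratio \( \frac{\sin x \tan x}{x^2} \) stays just above \( 1 \) for small \( x > 0 \), so its floor is \( 1 \):

```python
import math

# floor(sin x * tan x / x^2) should equal 1 for small x > 0
for x in [0.1, 0.01, 0.001]:
    ratio = math.sin(x) * math.tan(x) / (x * x)
    assert ratio > 1          # the ratio approaches 1 from above
    assert math.floor(ratio) == 1
print("floor(sin x * tan x / x^2) = 1 near 0+, so the limit is 1")
```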