Polynomial Equations, Roots and Factorization
Polynomial Equation
A polynomial equation is an expression where a polynomial is set equal to zero. Specifically, if we have a polynomial \( p(x) \) of degree \( n \), which is the highest power of \( x \) with a nonzero coefficient, the polynomial equation is written as \( p(x) = 0 \).
The roots or zeros of the polynomial are the values of \( x \) that make the polynomial equal to zero. They are the solutions to the polynomial equation \( p(x) = 0 \). These roots can be real or complex numbers; the real roots represent the points where the graph of the polynomial crosses the \( x \)-axis.
The coefficients of the polynomial, which can be real or complex numbers themselves, significantly affect the position and number of roots. For instance, in a quadratic polynomial (degree 2), the discriminant determined by the coefficients can tell us if the roots are real and distinct, real and equal (a repeated root), or complex.
Understanding the roots is important for many applications in mathematics, including factoring the polynomial into linear factors (if the roots are real) and analyzing the function's behavior. Each real root corresponds to an \( x \)-intercept on the graph of the polynomial.
Factor of a Polynomial
A factor of a polynomial is a non-zero expression that, when multiplied by another polynomial, yields the original polynomial. A polynomial \( d(x) \) is a factor of another polynomial \( p(x) \) if, when \( p(x) \) is divided by \( d(x) \), the remainder is zero. This follows from the division algorithm for polynomials, which states that \( p(x) \) can be expressed as \( p(x) = q(x)d(x) + r(x) \), where \( r(x) \) is the remainder, either zero or of degree less than that of \( d(x) \). If \( r(x) \) is zero, \( d(x) \) divides \( p(x) \) evenly, signifying that \( d(x) \) is indeed a factor of \( p(x) \): \( p(x) \) is divisible by \( d(x) \) without any residue, leaving the quotient polynomial \( q(x) \). This notion is foundational to the factor theorem, which links the roots of polynomial equations to their factors.
Factor Theorem
\( (x - \alpha) \) is a factor of \( p(x) \) if and only if \( p(\alpha) = 0 \) (i.e., \( \alpha \) is a root of \( p(x) \)).
Proof: Assume \( (x - \alpha) \) is a factor of \( p(x) \). Then \( p(x) = q(x)(x - \alpha) \) for some polynomial \( q(x) \). Putting \( x = \alpha \), we get \( p(\alpha) = 0 \).
Conversely, assume \( p(\alpha) = 0 \). By the division algorithm for polynomials, when \( p(x) \) is divided by \( (x - \alpha) \), there exist \( q(x) \) and \( r(x) \) such that \( p(x) = q(x)(x - \alpha) + r(x) \), where \( r(x) \) is a constant since its degree is less than 1. Putting \( x = \alpha \), we get
\[ 0 = p(\alpha) = r(\alpha) \]
so the remainder is zero. Thus
\[ p(x) = q(x)(x - \alpha) \]
i.e., \( (x - \alpha) \) is a factor of \( p(x) \). \(\blacksquare\)
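The factor theorem is easy to check numerically. A minimal sketch in plain Python (with coefficient lists written highest-degree first, an assumed convention): synthetic division of \( p(x) \) by \( (x - \alpha) \) leaves exactly \( p(\alpha) \) as the remainder, so the remainder vanishes precisely when \( \alpha \) is a root.

```python
# Hypothetical helper: synthetic division of p(x) by (x - alpha).
# Coefficients are listed highest degree first, e.g. [1, -6, 11, -6]
# stands for x^3 - 6x^2 + 11x - 6.

def synthetic_division(coeffs, alpha):
    """Return (quotient coefficients, remainder) of p(x) divided by (x - alpha)."""
    acc = [coeffs[0]]
    for c in coeffs[1:]:
        acc.append(c + alpha * acc[-1])  # bring down and fold in alpha
    return acc[:-1], acc[-1]  # the last accumulated value equals p(alpha)

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
p = [1, -6, 11, -6]
print(synthetic_division(p, 2))  # ([1, -4, 3], 0): remainder 0, so 2 is a root
print(synthetic_division(p, 4))  # ([1, -2, 3], 6): remainder 6 = p(4), not a root
```

The quotient \( [1, -4, 3] \) in the first call encodes \( x^2 - 4x + 3 \), the cofactor \( q(x) \) from the proof above.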
Multiplicity of Roots
If \( p(x) \) is divisible by \( (x - \alpha) \) twice, i.e., \( p(x) \) has the factor \( (x - \alpha)^2 \), then \( \alpha \) is said to be a double root of \( p(x) \). Similarly, if \( p(x) \) is divisible by \( (x - \alpha)^r \) but not by \( (x - \alpha)^{r+1} \), then \( \alpha \) is said to be an \( r \)-multiple root (a root of multiplicity \( r \)).
Observe that if a polynomial \( p(x) \) of degree \( n \) is divisible by \( (x - \alpha)^2 \), then \( p(x) = q(x)(x - \alpha)^2 \). Differentiating both sides gives \( p'(x) = q'(x)(x - \alpha)^2 + 2q(x)(x - \alpha) \). \( p'(x) \) has degree \( n - 1 \) and is clearly again divisible by \( (x - \alpha) \), i.e., \( p'(\alpha) = 0 \). If \( \alpha \) is exactly a double root, so that \( q(\alpha) \neq 0 \), differentiating once more shows that \( p''(\alpha) \neq 0 \).
Let us generalize the above observation:
\( \alpha \) is an \( r \)-multiple root of \( p(x) \) if and only if \( p(\alpha) = 0 \), \( p'(\alpha) = 0 \), \( p''(\alpha) = 0 \), ..., \( p^{(r-1)}(\alpha) = 0 \) and \( p^{(r)}(\alpha) \neq 0 \), where \( p^{(r)}(x) \) denotes the \( r \)th derivative of \( p(x) \).
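The derivative criterion can be verified on a concrete polynomial. A minimal sketch in plain Python (coefficient lists highest-degree first, an assumed convention), using \( p(x) = (x - 1)^3(x + 2) = x^4 - x^3 - 3x^2 + 5x - 2 \), where \( x = 1 \) is a 3-multiple root:

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial (coefficients highest degree first) by Horner's method."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def poly_derivative(coeffs):
    """Differentiate term by term: d/dx of c * x^k is (c * k) * x^(k - 1)."""
    n = len(coeffs) - 1  # degree of the polynomial
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

# p(x) = (x - 1)^3 (x + 2); x = 1 is a root of multiplicity 3
p = [1, -1, -3, 5, -2]
values = []
for _ in range(4):
    values.append(poly_eval(p, 1))
    p = poly_derivative(p)
print(values)  # [0, 0, 0, 18]: p, p', p'' vanish at 1 but p''' does not
```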
Example

\[ p(x) = (x - 1)^3(x^2 + x + 1) \]
Expanding this would give a polynomial of degree 5. The cubic factor \( (x  1)^3 \) indicates that \( x = 1 \) is a root with multiplicity 3.

\[ q(x) = (x - 2)^2(x + 3) \]
This polynomial has a double root at \( x = 2 \) and another root at \( x = -3 \).
Fundamental Theorem of Algebra
The Fundamental Theorem of Algebra states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. This includes real coefficients as a special case, since real numbers are complex numbers with imaginary part zero.
Formally, the theorem can be expressed as follows: Given a polynomial \( p(x) \) of degree \( n \), where \( n \geq 1 \), and coefficients in the complex numbers \( \mathbb{C} \), there exists at least one complex number \( c \) such that \( p(c) = 0 \).
An important consequence of the Fundamental Theorem of Algebra
Theorem (Existence of Polynomial Roots):
Every non-constant polynomial of degree \( n > 0 \), with coefficients in the complex numbers, has exactly \( n \) roots in the complex plane, counted with multiplicity. These roots, \( \alpha_1, \alpha_2, ..., \alpha_n \) (not necessarily distinct), are such that the polynomial can be expressed as \( f(x) = a(x - \alpha_1)(x - \alpha_2)...(x - \alpha_n) \), where \( a \) is the leading coefficient of \( f(x) \).
Proof (by Mathematical Induction):
We begin with the base case where \( n = 1 \). For a linear polynomial, it is clear that there is one root \( \alpha \), making \( f(x) = a(x - \alpha) \).
Now, let's assume the theorem holds for any polynomial of degree \( n - 1 \), where \( n \geq 2 \). Consider \( f(x) \), a polynomial of degree \( n \) with leading coefficient \( a \). By the Fundamental Theorem of Algebra, \( f(x) \) has at least one root, say \( \alpha_1 \). By the factor theorem, \( (x - \alpha_1) \) is a factor of \( f(x) \), so we can write:
\[ f(x) = (x - \alpha_1)q(x) \qquad (1) \]
Here, \( q(x) \) is a polynomial of degree \( n - 1 \) with the same leading coefficient \( a \), due to the factorization.
Applying the induction hypothesis to \( q(x) \), we deduce that it has \( n - 1 \) roots, say \( \alpha_2, ..., \alpha_n \), yielding:
\[ q(x) = a(x - \alpha_2)(x - \alpha_3)...(x - \alpha_n) \qquad (2) \]
Combining equations (1) and (2), we obtain:
\[ f(x) = a(x - \alpha_1)(x - \alpha_2)...(x - \alpha_n) \]
This confirms that \( f(x) \) can be factored into \( n \) linear factors, each corresponding to a root, which completes the proof.
Factorization of a polynomial
If \( \alpha_1, \alpha_2, ..., \alpha_n \) are the \( n \) roots of the polynomial \( p(x) = a_0x^n + a_1x^{n-1} + a_2x^{n-2} + ... + a_n \), then
\[ p(x) = a_0(x - \alpha_1)(x - \alpha_2)...(x - \alpha_n) \]
(Remember roots may be real or complex. Some roots may be repeated)
Can the number of roots of a polynomial exceed its degree?
No, a polynomial of degree \( n \) cannot have more than \( n \) roots, according to the Fundamental Theorem of Algebra. This theorem implies that every non-zero, single-variable polynomial of degree \( n \) with complex coefficients has exactly \( n \) roots in the complex plane when counted with multiplicity.
If a polynomial seems to have more than \( n \) roots, it must be the zero polynomial, which is the polynomial \( p(x) = 0 \) for all \( x \), and in such a case, every number is a root. However, for nonzero polynomials, the number of roots — including repeated roots — cannot exceed its degree.
Equality of polynomials
Theorem: If two polynomials of degree \( n \) are equal at more than \( n \) values of \( x \), then both polynomials are the same, i.e., they are equal for all values of \( x \).
Proof:
This result derives from the Fundamental Theorem of Algebra. Consider two polynomials \( p(x) \) and \( q(x) \) of degree \( n \):
\[ p(x) = a_0x^n + a_1x^{n-1} + ... + a_n, \qquad q(x) = b_0x^n + b_1x^{n-1} + ... + b_n \]
Suppose that these polynomials are equal for more than \( n \) distinct values of \( x \). Subtracting the polynomials yields a new polynomial \( r(x) \) of degree at most \( n \):
\[ r(x) = p(x) - q(x) = (a_0 - b_0)x^n + (a_1 - b_1)x^{n-1} + ... + (a_n - b_n) \]
If \( p(x) \) and \( q(x) \) are equal for more than \( n \) values of \( x \), then \( r(x) \) has more than \( n \) roots. By the preceding result, a non-zero polynomial of degree at most \( n \) can have at most \( n \) roots. Since \( r(x) \) has more than \( n \) roots, it must be the zero polynomial, which implies all of its coefficients must be zero:
\[ a_0 - b_0 = 0,\quad a_1 - b_1 = 0,\quad ...,\quad a_n - b_n = 0 \]
Therefore:
\[ a_0 = b_0,\quad a_1 = b_1,\quad ...,\quad a_n = b_n \]
Hence, both polynomials \( p(x) \) and \( q(x) \) have the same coefficients and are therefore the same polynomial. This means they are equal for all values of \( x \), completing the proof. \(\blacksquare\)
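This theorem has a practical face: a polynomial of degree at most \( n \) is completely determined by its values at \( n + 1 \) distinct points. A minimal sketch in plain Python (using Lagrange interpolation as an assumed reconstruction method): three samples of a quadratic pin it down everywhere.

```python
def lagrange_eval(xs, ys, x):
    """Evaluate at x the unique polynomial of degree < len(xs) through (xs[i], ys[i])."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)  # Lagrange basis polynomial L_i(x)
        total += term
    return total

p = lambda x: 2 * x**2 - 3 * x + 1  # the "unknown" quadratic
xs = [0, 1, 2]                      # any 3 distinct sample points
ys = [p(x) for x in xs]
# agreement at 3 points forces agreement everywhere (degree at most 2)
print(lagrange_eval(xs, ys, 5), p(5))  # both equal 36
```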
Polynomials with real coefficients
When examining the roots of polynomials with real coefficients, a particular symmetry involving complex numbers arises. For any polynomial \( p(x) = a_0x^n + a_1x^{n-1} + ... + a_n \) with real coefficients, if \( \alpha \) is a complex root, its complex conjugate \( \overline{\alpha} \) is also a root.
Theorem: If a polynomial \( p(x) \) with real coefficients has a complex root \( \alpha \), then its complex conjugate \( \overline{\alpha} \) is also a root of \( p(x) \).
Proof:
Let \( p(x) \) be a polynomial with real coefficients, expressed as:
\[ p(x) = a_0x^n + a_1x^{n-1} + a_2x^{n-2} + ... + a_n \]
Assume \( \alpha \) is a root of \( p(x) \), so that \( p(\alpha) = 0 \). Taking the complex conjugate of both sides of this equation, we get
\[ \overline{a_0\alpha^n + a_1\alpha^{n-1} + ... + a_n} = \overline{0} = 0 \]
Since the conjugate of a sum is the sum of the conjugates, and the conjugate of a product is the product of the conjugates, this becomes
\[ \overline{a_0}\,\overline{\alpha}^n + \overline{a_1}\,\overline{\alpha}^{n-1} + ... + \overline{a_n} = 0 \]
This simplifies to
\[ a_0\overline{\alpha}^n + a_1\overline{\alpha}^{n-1} + ... + a_n = 0 \]
because the coefficients are real numbers, so \( \overline{a_i} = a_i \). Hence, we have
\[ p(\overline{\alpha}) = 0 \]
Therefore, if \( p(\alpha) = 0 \), then \( p(\overline{\alpha}) = 0 \). That is, if \( \alpha \) is a root, then its conjugate \( \overline{\alpha} \) is also a root of the polynomial. It should be emphasized that this statement is only true when all coefficients of the polynomial are real. \(\blacksquare\)
For a polynomial \( p(x) \) with real coefficients, from the above theorem we can conclude that if \( (x - \alpha) \) is a factor, then \( (x - \overline{\alpha}) \) is also a factor.
If we calculate the product of these factors, we get \( (x - \alpha)(x - \overline{\alpha}) \), which expands to \( x^2 - (\alpha + \overline{\alpha})x + \alpha\overline{\alpha} \).
Now, since \( \alpha + \overline{\alpha} = 2\,\mathrm{Re}(\alpha) \) and \( \alpha\overline{\alpha} = |\alpha|^2 \), the product \( x^2 - 2\,\mathrm{Re}(\alpha)\,x + |\alpha|^2 \) is a quadratic factor with real coefficients whose roots are complex conjugates of each other. Thus any polynomial with real coefficients can be factorized into linear factors and quadratic factors having complex conjugate roots. Hence, we can write the following:
If \( p + iq \) is one of the roots of a polynomial with real coefficients, then \( p - iq \) is also a root, and \( x^2 - 2px + p^2 + q^2 \) is one of the quadratic factors. Here \( i = \sqrt{-1} \).
A polynomial possessing real coefficients can be decomposed into factors that are either linear or quadratic and cannot be further reduced. This decomposition follows from the principle that for every complex root, its complex conjugate is also a root. Consequently, when a polynomial has real coefficients, any non-real root must appear alongside its conjugate. These conjugate pairs combine to form quadratic factors with real coefficients that cannot be factored further over the reals.
Thus, a polynomial with real coefficients can be expressed as a product of factors of the form \( (x - r) \), where \( r \) is a real root, and factors of the form \( (x^2 - 2px + p^2 + q^2) \), where \( p + iq \) is a non-real complex root and \( p - iq \) is its conjugate. The linear factors correspond to real roots, while the irreducible^{1} quadratic factors correspond to pairs of complex conjugate roots. This results in a complete factorization where the original polynomial is represented as a product of linear and irreducible quadratic expressions, revealing all possible roots and confirming the polynomial's real-valued nature.
For example, a polynomial with two irreducible quadratic factors and two linear factors is:
\[ p(x) = (x^2 + x + 1)(x^2 + 2x + 5)(x - 3)(x + 2) \]
Here, \( x^2 + x + 1 \) and \( x^2 + 2x + 5 \) are irreducible quadratic factors because they have no real roots (their discriminants are negative), and \( x - 3 \) and \( x + 2 \) are linear factors corresponding to the real roots \( x = 3 \) and \( x = -2 \), respectively.
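The conjugate-pair behaviour is easy to observe numerically. The sketch below uses NumPy's `numpy.roots` (which takes coefficients highest-degree first) on \( x^3 - 2x^2 + x - 2 = (x - 2)(x^2 + 1) \), a hand-picked example: the non-real roots \( i \) and \( -i \) appear together as a conjugate pair alongside the real root 2.

```python
import numpy as np

# x^3 - 2x^2 + x - 2 = (x - 2)(x^2 + 1): real coefficients, so any
# non-real root must be accompanied by its complex conjugate.
roots = np.roots([1, -2, 1, -2])
for z in sorted(roots, key=lambda z: (z.real, z.imag)):
    # round away tiny floating-point noise before printing
    print(complex(round(z.real, 10), round(z.imag, 10)))
```

The output lists one real root near 2 and the purely imaginary pair \( \pm i \).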
Relation of Roots with Coefficients (Vieta’s Formulae)
Let \( p(x) = a_0x^n + a_1x^{n-1} + a_2x^{n-2} + ... + a_n \). If \( \alpha_1, \alpha_2, ..., \alpha_n \) are the roots (real or complex), then
\[ a_0x^n + a_1x^{n-1} + a_2x^{n-2} + ... + a_n = a_0(x - \alpha_1)(x - \alpha_2)...(x - \alpha_n) \]
Where \( \sum \alpha_i\alpha_j = \alpha_1\alpha_2 + \alpha_1\alpha_3 + ... \) is equal to the sum of all possible products of \( \alpha_1, \alpha_2, ..., \alpha_n \), taken two at a time.
Similarly, \( \sum \alpha_i\alpha_j\alpha_k = \alpha_1\alpha_2\alpha_3 + \alpha_1\alpha_2\alpha_4 + \alpha_1\alpha_2\alpha_5 + ... \) is equal to the sum of all possible products of \( \alpha_1, \alpha_2, ..., \alpha_n \), taken three at a time.
Comparing the coefficients of both sides, we get:
\[ \sum \alpha_i = -\frac{a_1}{a_0}, \qquad \sum \alpha_i\alpha_j = \frac{a_2}{a_0}, \qquad \sum \alpha_i\alpha_j\alpha_k = -\frac{a_3}{a_0} \]
In general,
\[ \sum_{i_1 < i_2 < ... < i_r} \alpha_{i_1}\alpha_{i_2}...\alpha_{i_r} = (-1)^r\,\frac{a_r}{a_0} \]
(there are a total of \( {n \choose r} \) terms in this summation)
The last relation is about the product of all the roots:
\[ \alpha_1\alpha_2...\alpha_n = (-1)^n\,\frac{a_n}{a_0} \]
For a quadratic polynomial \( ax^2 + bx + c \), if \( \alpha \) and \( \beta \) are the roots, then
\[ \alpha + \beta = -\frac{b}{a}, \qquad \alpha\beta = \frac{c}{a} \]
For a cubic polynomial \( ax^3 + bx^2 + cx + d \), if \( \alpha, \beta \) and \( \gamma \) are the roots, then
\[ \alpha + \beta + \gamma = -\frac{b}{a}, \qquad \alpha\beta + \beta\gamma + \gamma\alpha = \frac{c}{a}, \qquad \alpha\beta\gamma = -\frac{d}{a} \]
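These relations can be sanity-checked on a concrete cubic. A minimal sketch in plain Python, using the hand-picked example \( 2x^3 - 3x^2 - 11x + 6 \), whose roots are \( 3, -2 \) and \( 1/2 \):

```python
# Vieta's formulae for ax^3 + bx^2 + cx + d with roots 3, -2, 1/2
a, b, c, d = 2, -3, -11, 6
r1, r2, r3 = 3, -2, 0.5

s1 = r1 + r2 + r3            # sum of the roots
s2 = r1*r2 + r2*r3 + r3*r1   # sum of products taken two at a time
s3 = r1 * r2 * r3            # product of all the roots

print(s1, -b / a)  # 1.5 1.5
print(s2, c / a)   # -5.5 -5.5
print(s3, -d / a)  # -3.0 -3.0
```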
Constructing the polynomial equation from the roots
Given the roots \( \alpha_1, \alpha_2, ..., \alpha_n \), the corresponding polynomial equation is given by:
\( (x - \alpha_1)(x - \alpha_2)...(x - \alpha_n) = 0 \).
When we expand this product, the resulting polynomial equation is of the form:
\[ x^n - \Big(\sum \alpha_i\Big)x^{n-1} + \Big(\sum \alpha_i\alpha_j\Big)x^{n-2} - ... + (-1)^n\,\alpha_1\alpha_2...\alpha_n = 0 \]
To construct a polynomial equation from known roots \( \alpha_1, \alpha_2, ..., \alpha_n \), we define the elementary symmetric sums of the roots:
 \( \sigma_1 = \sum_{1 \leq i \leq n} \alpha_i \) is the sum of the roots taken one at a time.
 \( \sigma_2 = \sum_{1 \leq i < j \leq n} \alpha_i \alpha_j \) is the sum of the products of the roots taken two at a time.
 ...
 \( \sigma_r = \sum \alpha_{i_1} \alpha_{i_2} ... \alpha_{i_r} \), where the sum extends over all \( r \)combinations of the indices from 1 to \( n \), is the sum of the products of the roots taken \( r \) at a time.
Thus, to find the polynomial equation, we calculate the sums of the roots taken one at a time, two at a time, and so on, up to \( n \) at a time. These sums are the elementary symmetric polynomials of the roots, and they directly determine the coefficients of the polynomial equation \( P(x) = 0 \).
The coefficients of the polynomial are determined by these sums, with the coefficient of \( x^{n-r} \) being \( (-1)^r \sigma_r \). This equation fully captures the relationship between the roots of the polynomial and its coefficients.
Example
The corresponding cubic equation with roots \( \alpha, \beta \) and \( \gamma \) is:
\[ x^3 - (\alpha + \beta + \gamma)x^2 + (\alpha\beta + \beta\gamma + \gamma\alpha)x - \alpha\beta\gamma = 0 \]
Here, the coefficient of \( x^2 \) is the negative of the sum of the roots, the coefficient of \( x \) is the sum of the product of the roots taken two at a time, and the constant term is the negative of the product of all three roots.
For a polynomial of degree four with roots \( \alpha, \beta, \gamma \) and \( \delta \), the corresponding equation can be derived in a similar way, considering the sums and products of the roots in their various combinations:
\[ x^4 - (\alpha + \beta + \gamma + \delta)x^3 + (\alpha\beta + \alpha\gamma + \alpha\delta + \beta\gamma + \beta\delta + \gamma\delta)x^2 - (\alpha\beta\gamma + \alpha\beta\delta + \alpha\gamma\delta + \beta\gamma\delta)x + \alpha\beta\gamma\delta = 0 \]
This equation is the result of expanding \( (x - \alpha)(x - \beta)(x - \gamma)(x - \delta) = 0 \).
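The expansion pattern, with its alternating signs \( (-1)^r \sigma_r \), can be reproduced mechanically. A minimal sketch in plain Python (coefficient lists highest-degree first, an assumed convention) multiplies out \( (x - \alpha_1)...(x - \alpha_n) \) one factor at a time:

```python
def poly_from_roots(roots):
    """Coefficients (highest degree first) of the monic polynomial with the given roots."""
    coeffs = [1]  # start from the constant polynomial 1
    for alpha in roots:
        shifted = coeffs + [0]                      # current polynomial times x
        scaled = [0] + [alpha * c for c in coeffs]  # current polynomial times alpha
        # their difference is the product with (x - alpha)
        coeffs = [s - t for s, t in zip(shifted, scaled)]
    return coeffs

# roots 1, 2, 3: sigma_1 = 6, sigma_2 = 11, sigma_3 = 6,
# so the polynomial is x^3 - 6x^2 + 11x - 6
print(poly_from_roots([1, 2, 3]))  # [1, -6, 11, -6]
```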

An irreducible quadratic expression over the real numbers is a second-degree polynomial that cannot be factored into real linear factors. Formally, it can be defined as a quadratic expression of the form \( ax^2 + bx + c \), where \( a \), \( b \), and \( c \) are real coefficients, and the discriminant \( b^2 - 4ac < 0 \). The negative discriminant indicates that the quadratic equation \( ax^2 + bx + c = 0 \) does not have real solutions, hence it cannot be decomposed into real linear factors. These expressions are "irreducible" in the sense that they cannot be simplified or broken down further using real numbers alone. ↩