Reduced cost

Definition
In linear programming, the reduced cost of a decision variable is the amount by which the variable’s objective‑function coefficient must improve before the variable can assume a positive value in an optimal solution. For a minimization problem, it is the amount the coefficient must be decreased; for a maximization problem, the amount it must be increased.

Overview
Reduced costs arise in the simplex algorithm and related solution methods. After a basic feasible solution is found, each non‑basic variable (i.e., a variable currently set to zero) has an associated reduced cost that indicates whether entering that variable into the basis would improve the objective value. If all reduced costs satisfy the appropriate sign condition (non‑negative for minimization, non‑positive for maximization), the current solution is optimal. Reduced costs are also central to post‑optimal (sensitivity) analysis, where they help assess the effect of changes in objective‑function coefficients or constraints.
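The optimality test described above can be sketched numerically. The following is a minimal illustration with hypothetical data (a small LP chosen for this example, written as a minimization in standard form with slack variables): given a candidate basis, it computes the simplex multipliers and the reduced costs of the non-basic variables, then checks the sign condition.

```python
import numpy as np

# Hypothetical example LP in standard form: minimize c @ x s.t. A @ x = b, x >= 0.
# (This encodes max 3*x1 + 5*x2 with x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18,
# rewritten as a minimization with slack variables s1, s2, s3.)
c = np.array([-3.0, -5.0, 0.0, 0.0, 0.0])
A = np.array([[1.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 1.0, 0.0],
              [3.0, 2.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 12.0, 18.0])

basis = [0, 1, 2]                   # candidate optimal basis: x1, x2, s1
nonbasic = [j for j in range(len(c)) if j not in basis]

B = A[:, basis]                     # basis matrix
y = np.linalg.solve(B.T, c[basis])  # simplex multipliers: solve B^T y = c_B
reduced = c[nonbasic] - y @ A[:, nonbasic]   # c_j - y^T a_j for each non-basic j

# For a minimization problem, the basis is optimal when every non-basic
# reduced cost is non-negative.
print(dict(zip(nonbasic, reduced)))  # {3: 1.5, 4: 1.0}
print(bool(np.all(reduced >= 0)))    # True -> the current basis is optimal
```

Here both non-basic slacks have positive reduced costs, so no entering variable can improve the objective and the basic feasible solution is optimal.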

Etymology/Origin
The term combines “reduced,” meaning adjusted or altered from its original value, with “cost,” reflecting the economic notion of expense or price. In the context of linear programming, it was introduced in the 1950s alongside the development of the dual simplex method and the concept of shadow prices. The terminology parallels “reduced price” used in economics, emphasizing the adjusted cost of a variable when accounting for constraints.

Characteristics

  • Mathematical expression: For a variable $x_j$, the reduced cost $ \bar{c}_j $ is computed as
    $$ \bar{c}_j = c_j - \mathbf{y}^\top \mathbf{a}_j, $$ where $c_j$ is the original objective coefficient, $\mathbf{a}_j$ is the column of the constraint matrix associated with $x_j$, and $\mathbf{y}$ is the vector of optimal dual variables (shadow prices).
  • Sign convention:
    • Minimization: optimality requires $\bar{c}_j \ge 0$ for all non‑basic variables.
    • Maximization: optimality requires $\bar{c}_j \le 0$ for all non‑basic variables.
  • Interpretation: A zero reduced cost for a non‑basic variable signals alternative optima: the variable may enter the basis without changing the objective value. A positive reduced cost (in a minimization model) means the variable is currently too costly to be worthwhile; its objective coefficient would need to fall by at least that amount for the variable to become attractive.
  • Sensitivity analysis: The magnitude of a reduced cost quantifies how robust the current basis is to changes in the corresponding objective coefficient. Small reduced costs suggest that modest coefficient changes could alter the optimal basis.
  • Relation to dual feasibility: The reduced cost $\bar{c}_j$ is the slack in the $j$‑th dual constraint, so the sign condition on the reduced costs is exactly dual feasibility; together with primal feasibility and complementary slackness, it certifies optimality.
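The sensitivity reading in the list above can be made concrete. This sketch uses hypothetical values (the duals $\mathbf{y}$ and column $\mathbf{a}_j$ are assumed example data, not taken from any particular solver): with the duals held fixed, a non‑basic variable breaks even exactly when its objective coefficient has been decreased by its reduced cost.

```python
import numpy as np

# Hypothetical data for a 3-constraint minimization LP:
y = np.array([0.0, -1.5, -1.0])   # assumed optimal dual variables (shadow prices)
a_j = np.array([0.0, 1.0, 0.0])   # constraint column of a non-basic variable x_j
c_j = 0.0                         # its current objective coefficient

rc = c_j - y @ a_j                # reduced cost: c_j - y^T a_j
print(rc)                         # 1.5

# Decrease c_j by exactly its reduced cost: the reduced cost drops to zero,
# so x_j could now enter the basis without changing the objective value.
c_new = c_j - rc
print(c_new - y @ a_j)            # 0.0
```

Any smaller decrease leaves the reduced cost positive, and the current basis (and optimal solution) is unchanged, which is the robustness statement made under "Sensitivity analysis" above.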

Related Topics

  • Linear programming
  • Simplex method
  • Dual variables / shadow prices
  • Complementary slackness
  • Sensitivity (post‑optimal) analysis
  • Opportunity cost (economic interpretation)
  • Marginal cost and marginal profit concepts in economics
  • Degeneracy in linear programming