Decision Monotonicity
Decision Monotonicity is a key concept for optimizing dynamic programming transitions.
The idea of decision monotonicity is that the decision points of the DP exhibit a monotonic property. Before introducing this concept, we first define what a decision point is.
Definition of Decision Point
Consider a transition of the form $f_i = \min_j \{\, g_j + w(j, i) \,\}$. For a fixed index $i$, an index $p$ is called a decision point of $i$ if, for all valid $j$,
$$g_p + w(p, i) \le g_j + w(j, i).$$
In this case, $i$ is referred to as the decision-affected point.
Note that there may be multiple decision points corresponding to the same $i$.
Intuitively, a decision point is an index that attains the optimum in the DP transition for state $i$.
Having defined decision points, we can now introduce decision monotonicity. Its precise definition depends on the specific form of the DP transition equation. In the following sections, we will discuss different cases separately.
Prefix Transition
The standard form of a prefix transition is
$$f_i = \min_{1 \le j < i} \{\, g_j + w(j, i) \,\},$$
where $w$ is the cost function.
If $g = f$, this transition is called a self-transition.
Otherwise, that is, when the array $g$ is fully known before any $f_i$ is computed, it is called a heterogeneous transition.
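To fix notation for the code sketches in the rest of this article, here is a minimal brute-force implementation of a heterogeneous prefix transition. The array contents, the cost function, and the base case are illustrative assumptions, not taken from any particular problem.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Brute-force prefix (heterogeneous) transition: f[i] = min over j < i of g[j] + w(j, i).
// Runs in O(n^2); the rest of the article is about doing better when w has more structure.
int main() {
    int n = 6;
    vector<long long> g = {0, 3, 1, 4, 1, 5};        // g is assumed to be known in advance
    auto w = [](int j, int i) -> long long {         // an arbitrary illustrative cost function
        return 1LL * (i - j) * (i - j);
    };
    vector<long long> f(n, LLONG_MAX);
    f[0] = 0;                                        // base case chosen only for the demo
    for (int i = 1; i < n; i++)
        for (int j = 0; j < i; j++)
            f[i] = min(f[i], g[j] + w(j, i));
    for (int i = 0; i < n; i++) cout << f[i] << " \n"[i + 1 == n];
}
```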
Decision Monotonicity in Prefix Transitions
Definition (Decision Monotonicity for Prefix Transitions).
A prefix transition is said to satisfy decision monotonicity if the following conditions hold.
Let $i_1 < i_2$.
- For every decision point $p_1$ of $i_1$, there exists a decision point $p_2$ of $i_2$ such that $p_1 \le p_2$.
- For every decision point $p_2$ of $i_2$, there exists a decision point $p_1$ of $i_1$ such that $p_1 \le p_2$.
Intuitively, decision monotonicity in prefix transitions means that the indices of decision points are non-decreasing as the state index increases.
In the following sections, we will discuss how to optimize prefix DP transitions when the transition satisfies decision monotonicity.
Divide-and-Conquer Strategy
This strategy is applicable only to prefix heterogeneous transitions, since all values of $g$ must be available before the recursion begins.
Algorithm Overview
- Compute the decision point $p(mid)$ of the midpoint $mid$ of the current range $[l, r]$ by checking every candidate in the currently allowed candidate range $[lo, hi]$.
- By decision monotonicity, the decision points of the left half $[l, mid - 1]$ lie in $[lo, p(mid)]$ and those of the right half $[mid + 1, r]$ lie in $[p(mid), hi]$, so we can recurse on both halves with restricted candidate ranges.
- At each recursion level the candidate ranges of different subproblems overlap only at their endpoints, so every candidate is evaluated a constant number of times per level; with $O(\log n)$ levels, the overall time complexity is $O(n \log n)$. A sketch is given after this list.
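A minimal sketch of the divide-and-conquer strategy, assuming `g` is known in advance and `w(j, i)` can be evaluated in constant time; the concrete cost function and the base case are illustrative.

```cpp
#include <bits/stdc++.h>
using namespace std;

int n;
vector<long long> g, f;

// Illustrative cost function; it is assumed to give the transition decision monotonicity.
long long w(int j, int i) { return 1LL * (i - j) * (i - j); }

// Compute f[i] = min over j in [lo, hi], j < i, of g[j] + w(j, i) for every i in [l, r],
// knowing that the decision points of this range lie inside [lo, hi].
void solve(int l, int r, int lo, int hi) {
    if (l > r) return;
    int mid = (l + r) / 2, best = lo;
    long long val = LLONG_MAX;
    for (int j = lo; j <= min(hi, mid - 1); j++) {   // brute-force the midpoint's decision point
        long long cur = g[j] + w(j, mid);
        if (cur < val) val = cur, best = j;
    }
    f[mid] = val;
    solve(l, mid - 1, lo, best);                     // left half: decision points are <= best
    solve(mid + 1, r, best, hi);                     // right half: decision points are >= best
}

int main() {
    n = 6;
    g = {0, 3, 1, 4, 1, 5};
    f.assign(n, LLONG_MAX);
    f[0] = 0;                                        // base case chosen only for the demo
    solve(1, n - 1, 0, n - 1);
    for (int i = 0; i < n; i++) cout << f[i] << " \n"[i + 1 == n];
}
```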
This algorithm is remarkably elegant. When I first encountered it, I was genuinely impressed by its simplicity and efficiency.
Quadrilateral Inequality
For self-transitions, additional conditions are required to optimize the DP.
The quadrilateral inequality is one such condition.
Definition (Quadrilateral Inequality).
A cost function $w$ is said to satisfy the quadrilateral inequality if, for all $a \le b \le c \le d$, the following inequality holds:
$$w(a, c) + w(b, d) \le w(a, d) + w(b, c).$$
Remark: The inequality sign here is only a formal representation. Its essential meaning is that the pairing $w(a, c) + w(b, d)$ yields a better (or no worse) total cost than the pairing $w(a, d) + w(b, c)$; for a maximization problem the sign is flipped accordingly.
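As a quick sanity check (a standard example, not one taken from the text above): if $S$ is a non-decreasing sequence, then the cost $w(l, r) = (S_r - S_l)^2$ satisfies the quadrilateral inequality. Writing $A = S_a$, $B = S_b$, $C = S_c$, $D = S_d$ for $a \le b \le c \le d$ (so $A \le B \le C \le D$):
$$\begin{aligned}
\bigl[w(a, d) + w(b, c)\bigr] - \bigl[w(a, c) + w(b, d)\bigr]
  &= (D - A)^2 + (C - B)^2 - (C - A)^2 - (D - B)^2 \\
  &= (D - C)(D + C - 2A) - (D - C)(D + C - 2B) \\
  &= 2\,(D - C)(B - A) \;\ge\; 0.
\end{aligned}$$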
There is a well-known theorem stating that if the cost function satisfies the quadrilateral inequality, then the corresponding DP transition exhibits decision monotonicity.
Proof.
According to the definition of decision monotonicity, it suffices to verify the following two symmetric conditions:
- For every decision point $p_1$ of $i_1$, there exists a decision point $p_2$ of $i_2$ such that $p_1 \le p_2$ (for any pair $i_1 < i_2$).
- For every decision point $p_2$ of $i_2$, there exists a decision point $p_1$ of $i_1$ such that $p_1 \le p_2$.
Since the two cases are symmetric, we only prove the first one.
Let $i_1 < i_2$, and suppose $p_1$ is a decision point of $i_1$.
By definition, for all $j$, we have
$$g_{p_1} + w(p_1, i_1) \le g_j + w(j, i_1).$$
Now take any $j < p_1$, and let $a = j$, $b = p_1$, $c = i_1$, $d = i_2$, so that $a \le b \le c \le d$.
By the quadrilateral inequality, it follows that
$$w(j, i_1) + w(p_1, i_2) \le w(j, i_2) + w(p_1, i_1).$$
Adding the two inequalities above and cancelling the common terms yields
$$g_{p_1} + w(p_1, i_2) \le g_j + w(j, i_2).$$
This shows that when the decision-affected point moves from $i_1$ to $i_2$, the index $p_1$ remains no worse than any $j < p_1$.
Therefore, there is a decision point of $i_2$ that is greater than or equal to $p_1$. Hence, the DP transition satisfies decision monotonicity.
The quadrilateral inequality in fact yields stronger properties.
To see this, we start from another way of reading the inequality.
Starting from the original inequality and rearranging terms, for all $a \le b \le c \le d$ we obtain
$$w(a, d) - w(a, c) \ge w(b, d) - w(b, c).$$
How should we interpret this inequality?
When the decision-affected point moves from $c$ to $d$,
the increase in cost when choosing index $a$ as the decision point is no smaller than the increase when choosing index $b$ as the decision point.
We call this phenomenon Gradual Deterioration.
Intuitively, Gradual Deterioration provides another way to understand decision monotonicity under the quadrilateral inequality: as the decision-affected point increases, earlier decision points incur higher marginal costs than later ones. As a result, the optimal decision point shifts monotonically forward.
Moreover, Gradual Deterioration reveals an even stronger property:
For any two indices $a < b$, there exists a dividing point $k(a, b)$ such that
- for decision-affected points with index less than $k(a, b)$, choosing $a$ is better (or at least no worse);
- for decision-affected points with index greater than or equal to $k(a, b)$, choosing $b$ is better (or at least no worse).
Obviously, a transition with this property must possess decision monotonicity, but a transition with decision monotonicity may not necessarily satisfy this property.
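A one-line justification of this dividing point, using the notation above: compare the two candidate values as a function of the decision-affected point $i$ (with $i$ larger than both $a$ and $b$), and define
$$\Delta(i) = \bigl[g_a + w(a, i)\bigr] - \bigl[g_b + w(b, i)\bigr].$$
By Gradual Deterioration, for $i \le i'$ we have $w(a, i') - w(a, i) \ge w(b, i') - w(b, i)$, hence $\Delta(i') \ge \Delta(i)$, i.e. $\Delta$ is non-decreasing. Choosing $b$ is at least as good as choosing $a$ exactly when $\Delta(i) \ge 0$, and since $\Delta$ is non-decreasing, the set of such $i$ is a suffix of the index range; its left endpoint is precisely the dividing point $k(a, b)$.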
Binary-Queue
The conclusion of Gradual Deterioration inspires a new idea for optimizing DP.
Let $k(x, y)$ denote, as above, the dividing point of two candidate decision points $x < y$. Consider three candidates $x < y < z$. If $k(y, z) \le k(x, y)$, then decision point $z$ becomes better than $y$ before $y$ ever becomes better than $x$.
As a result, $y$ will never be an optimal decision point.
This observation allows us to design an algorithm that safely ignores all such decision points. By discarding them during the transition process, the DP can be computed much more efficiently.
From the above derivation, we are motivated to maintain a sequence of candidate decision points $p_1 < p_2 < \dots < p_m$ together with their boundary points, keeping the boundary points $k(p_1, p_2) < k(p_2, p_3) < \dots < k(p_{m-1}, p_m)$ strictly increasing.
Suppose we have already processed state $i - 1$ and maintained the candidate decision points for it. Now we consider state $i$. When inserting a new possible decision point $q$ at the back, we first compute the boundary point $k(p_m, q)$ between $p_m$ and $q$. If $k(p_m, q) \le k(p_{m-1}, p_m)$, then $p_m$ will never become an optimal decision point and can be removed. By repeatedly applying this process, and finally inserting $q$, we can maintain a sequence of decision points with strictly increasing boundary points.
For the current decision-affected point $i$, if the front candidate $p_1$ is worse than $p_2$ (that is, $i \ge k(p_1, p_2)$), then all subsequent decision-affected points will also prefer $p_2$, and thus $p_1$ can be safely removed. Repeating this process until the front candidate becomes better than the one right behind it, we conclude that the front candidate is the decision point for $i$, according to the previous derivation.
The above procedure can be conveniently implemented using a deque. Each decision point is inserted into and removed from the deque at most once, so this part contributes linear time complexity.
The key remaining issue is how to compute the boundary point $k(x, y)$. By Gradual Deterioration, the comparison "choosing $y$ is at least as good as choosing $x$ at decision-affected point $i$" is monotone in $i$, so $k(x, y)$ can be found by binary search.
Using binary search to compute $k(x, y)$, the entire self-transition / heterogeneous-transition optimization can be accelerated to $O(n \log n)$.
This method is known as the binary queue optimization algorithm.
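A minimal sketch of the binary queue for a self-transition, assuming the cost $w$ satisfies the quadrilateral inequality; the concrete cost function, the sizes, and the base case $f[0] = 0$ are illustrative assumptions, not taken from a specific problem.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Self-transition f[i] = min over j < i of ( f[j] + w(j, i) ), with w satisfying
    // the quadrilateral inequality (the squared distance below is only an illustration).
    int n = 8;
    auto w = [](int j, int i) -> long long { return 1LL * (i - j) * (i - j); };

    vector<long long> f(n, LLONG_MAX);
    f[0] = 0;                                            // base case for the demo

    // true if candidate y is at least as good as candidate x for affected index i
    auto better = [&](int x, int y, int i) {
        return f[y] + w(y, i) <= f[x] + w(x, i);
    };
    // dividing point k(x, y): first affected index at which y overtakes x (n if never)
    auto bound = [&](int x, int y) {
        int lo = y + 1, hi = n;                          // affected indices are > y
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (better(x, y, mid)) hi = mid; else lo = mid + 1;
        }
        return lo;
    };

    deque<int> q;                                        // candidate decision points
    for (int i = 1; i < n; i++) {
        int c = i - 1;                                   // f[i-1] is known; it becomes a candidate
        while (q.size() >= 2 && bound(q.back(), c) <= bound(q[q.size() - 2], q.back()))
            q.pop_back();                                // the back can never become optimal
        q.push_back(c);
        while (q.size() >= 2 && bound(q[0], q[1]) <= i)  // the front has been overtaken for good
            q.pop_front();
        f[i] = f[q[0]] + w(q[0], i);                     // the front is the decision point of i
    }
    for (int i = 0; i < n; i++) cout << f[i] << " \n"[i + 1 == n];
}
```

Each candidate is pushed and popped at most once, and every test in the two while-loops costs one $O(\log n)$ binary search, which is where the $O(n \log n)$ bound comes from.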
Slope Optimization
We can impose additional conditions to further optimize the DP. In some transitions, the cost function has the form
$$w(j, i) = a_j \cdot b_i$$
(terms that depend only on $j$ can be absorbed into $g_j$, and terms that depend only on $i$ can be added after taking the minimum, so this form is less restrictive than it looks).
This form has a very nice property. Let us discuss under what conditions such a function satisfies the quadrilateral inequality.
By the principle of Gradual Deterioration, consider indices $j_1 \le j_2 \le i_1 \le i_2$. We have
$$\bigl[w(j_1, i_2) - w(j_1, i_1)\bigr] - \bigl[w(j_2, i_2) - w(j_2, i_1)\bigr] = (a_{j_1} - a_{j_2})\,(b_{i_2} - b_{i_1}),$$
and Gradual Deterioration requires this quantity to be non-negative.
The derivation above shows that when $a_j$ and $b_i$ have opposite monotonicity, the cost function satisfies the quadrilateral inequality. Since the condition depends only on the sign of the product $(a_{j_1} - a_{j_2})(b_{i_2} - b_{i_1})$, whenever the two sequences are monotone in opposite directions we may assume, without loss of generality, that $b_i$ is non-decreasing and $a_j$ is non-increasing.
Under this condition, for a decision-affected point $i$, we only need to compute
$$\min_j \{\, g_j + a_j \cdot b_i \,\}.$$
This expression is exactly the minimum value of the lines $y = a_j x + g_j$ evaluated at $x = b_i$. Therefore, the dividing point between two decision points can be obtained by computing the intersection of the two corresponding lines, rather than by binary search.
This technique is known as Slope Optimization. It can be viewed as a special case of Binary Queue Optimization and runs in linear time, $O(n)$. A sketch is given below.
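A minimal sketch of slope optimization with a monotone deque, under the assumptions that lines are inserted with strictly decreasing slopes $a_j$ and that the query coordinates $b_i$ are non-decreasing (equal slopes would need an extra tie-break); all names and values are illustrative.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Slope optimization for f[i] = min over j of ( g[j] + a[j] * b[i] ):
// every candidate j is a line y = a[j] * x + g[j], queried at x = b[i].
struct Line { long long m, c; };                          // y = m * x + c

struct MonotoneCHT {
    deque<Line> dq;
    static long long eval(const Line& l, long long x) { return l.m * x + l.c; }
    // true if line b is unnecessary between a (larger slope) and c (smaller slope)
    static bool bad(const Line& a, const Line& b, const Line& c) {
        // intersection(a, c) lies left of (or at) intersection(a, b), by cross-multiplication
        return (__int128)(c.c - a.c) * (a.m - b.m) <= (__int128)(b.c - a.c) * (a.m - c.m);
    }
    void add(Line l) {                                    // slopes must strictly decrease
        while (dq.size() >= 2 && bad(dq[dq.size() - 2], dq.back(), l)) dq.pop_back();
        dq.push_back(l);
    }
    long long query(long long x) {                        // x must be non-decreasing over calls
        while (dq.size() >= 2 && eval(dq[1], x) <= eval(dq[0], x)) dq.pop_front();
        return eval(dq[0], x);
    }
};

int main() {
    // Demo: f[i] = min over j < i of ( f[j] + a[j] * b[i] ), a strictly decreasing, b increasing.
    int n = 6;
    vector<long long> a = {5, 4, 3, 2, 1, 0}, b = {1, 2, 3, 4, 5, 6};
    vector<long long> f(n);
    f[0] = 0;
    MonotoneCHT cht;
    for (int i = 1; i < n; i++) {
        cht.add({a[i - 1], f[i - 1]});                    // candidate j = i - 1 becomes a line
        f[i] = cht.query(b[i]);                           // decision point found at the front
    }
    for (int i = 0; i < n; i++) cout << f[i] << " \n"[i + 1 == n];
}
```

Each line is pushed and popped at most once and every intersection test is $O(1)$, which is where the linear running time comes from.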
If $a_j$ and $b_i$ do not have such monotonic properties, we can instead use a Li Chao Segment Tree to maintain the lines and query their values at $x = b_i$, resulting in a time complexity of $O(n \log n)$ over the discretized query coordinates. I have already written a detailed introduction to this method in a previous blog post.
Link: Li Chao Segment Tree
Anti Quadrilateral Inequality and Binary-Stack
Analogously to before, we call the following formula the anti quadrilateral inequality: for all $a \le b \le c \le d$,
$$w(a, c) + w(b, d) \ge w(a, d) + w(b, c).$$
We also obtain a property analogous to Gradual Deterioration, which we call Gradual Optimization: when the decision-affected point moves from $c$ to $d$, the increase in cost when choosing index $a$ is no larger than the increase when choosing index $b$. However, a transition satisfying the anti quadrilateral inequality does not have decision monotonicity.
Why? Because a decision point can be good right after it is added to the candidate set, but as the index of the decision-affected point increases, older decision points become better and better relative to it; so this type of transition does not have decision monotonicity.
But we can still use a suitable data structure to maintain the candidates.
For a fixed state $i$, we consider all possible decision points $p_1 < p_2 < \dots < p_m$. Under Gradual Optimization, for each pair of adjacent candidates $p_t < p_{t+1}$, there exists a dividing point $k_t$ such that after $k_t$, choosing $p_t$ becomes better than choosing $p_{t+1}$.
Now observe that, for three consecutive candidates $x < y < z$: if $k(x, y) \le k(y, z)$, then before $y$ ever becomes better than $z$ (at $k(y, z)$), $x$ has already become better than $y$ (at $k(x, y)$). This means that $y$ can never be the optimal decision point at any time, and therefore it can be safely discarded.
By repeatedly applying this elimination process, we obtain a sequence of candidate decision points $p_1 < p_2 < \dots < p_m$. Moreover, if $p_t$ becomes better than $p_{t+1}$ at time $k_t$, then the dividing points must satisfy $k_1 > k_2 > \dots > k_{m-1}$.
Next, consider the current state $i$. If the last candidate decision point $p_m$ is worse than $p_{m-1}$ at $i$, then by Gradual Optimization, $p_m$ will never become optimal in the future, and thus it can be removed as well.
It is easy to see that once the last remaining candidate is better than the one right before it, it is the optimal choice for the current state $i$, that is, the decision point for $i$.
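A minimal sketch of this binary-stack maintenance for a self-transition whose cost satisfies the anti quadrilateral inequality; the concrete cost function (a concave one), the sizes, and the base case are illustrative assumptions.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Self-transition f[i] = min over j < i of ( f[j] + w(j, i) ), with w satisfying the
    // ANTI quadrilateral inequality; here w(j, i) = -(i - j)^2 is a concave illustration.
    int n = 8;
    auto w = [](int j, int i) -> long long { return -1LL * (i - j) * (i - j); };

    vector<long long> f(n, LLONG_MAX);
    f[0] = 0;                                              // base case for the demo

    // true if the OLDER candidate x is at least as good as the newer candidate y at index i
    auto olderWins = [&](int x, int y, int i) {
        return f[x] + w(x, i) <= f[y] + w(y, i);
    };
    // dividing point k(x, y): first affected index at which x overtakes y for good (n if never)
    auto bound = [&](int x, int y) {
        int lo = y + 1, hi = n;
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (olderWins(x, y, mid)) hi = mid; else lo = mid + 1;
        }
        return lo;
    };

    vector<int> st;                                        // stack of candidate decision points
    for (int i = 1; i < n; i++) {
        int c = i - 1;                                     // f[i-1] is known; push it as a candidate
        while (st.size() >= 2 && bound(st[st.size() - 2], st.back()) <= bound(st.back(), c))
            st.pop_back();                                 // the middle candidate can never be optimal
        st.push_back(c);
        while (st.size() >= 2 && olderWins(st[st.size() - 2], st.back(), i))
            st.pop_back();                                 // the top has been overtaken for good
        f[i] = f[st.back()] + w(st.back(), i);             // the top is the decision point of i
    }
    for (int i = 0; i < n; i++) cout << f[i] << " \n"[i + 1 == n];
}
```

Each candidate is pushed and popped at most once, and each elimination test costs one $O(\log n)$ binary search for a dividing point, so the whole process runs in $O(n \log n)$.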
Range Transition
This type of transition is relatively simple. The standard form of a range transition is
$$f_{l, r} = \min_{l \le k < r} \{\, f_{l, k} + f_{k + 1, r} \,\} + w(l, r).$$
If the cost function satisfies the quadrilateral inequality and the following condition (which we call including monotonicity),
$$w(b, c) \le w(a, d) \quad \text{for all } a \le b \le c \le d,$$
then this transition has decision monotonicity, in the sense that an optimal split point $p(l, r)$ can be chosen with $p(l, r - 1) \le p(l, r) \le p(l + 1, r)$.
Just as in prefix transitions, this property allows us to optimize the DP to a time complexity of $O(n^2)$; a sketch is given below.
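A short sketch of this range-transition optimization; the interval-sum cost below (non-negative weights) is an illustrative choice that satisfies both the quadrilateral inequality and including monotonicity.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Range transition f[l][r] = min over l <= k < r of ( f[l][k] + f[k+1][r] ) + w(l, r),
    // with the split point restricted to [ p[l][r-1], p[l+1][r] ] by decision monotonicity.
    int n = 6;
    vector<long long> a = {3, 1, 4, 1, 5, 9}, pre(n + 1, 0);   // illustrative weights
    for (int i = 0; i < n; i++) pre[i + 1] = pre[i] + a[i];
    auto w = [&](int l, int r) { return pre[r + 1] - pre[l]; };// sum of a[l..r]

    const long long INF = LLONG_MAX / 4;
    vector<vector<long long>> f(n, vector<long long>(n, 0));   // length-1 intervals cost 0
    vector<vector<int>> p(n, vector<int>(n, 0));
    for (int l = 0; l < n; l++) p[l][l] = l;

    for (int len = 2; len <= n; len++) {
        for (int l = 0, r = len - 1; r < n; l++, r++) {
            f[l][r] = INF;
            for (int k = p[l][r - 1]; k <= min(p[l + 1][r], r - 1); k++) {
                long long cur = f[l][k] + f[k + 1][r] + w(l, r);
                if (cur < f[l][r]) { f[l][r] = cur; p[l][r] = k; }
            }
        }
    }
    cout << f[0][n - 1] << "\n";                               // e.g. an optimal merge cost
}
```

Summing the lengths of the restricted split ranges telescopes along each diagonal, which is what brings the total work down to $O(n^2)$.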
Summary
This article discusses several techniques for DP optimization. Many problems involve these ideas, and dynamic programming itself is a fundamental topic in OI.
However, techniques are only tools. What truly matters is understanding when and how to adapt them to different problems.