<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>XianRui&apos;s Blog</title><description>Dispel the gloom and restore clear skies once more!</description><link>https://blog.517group.cn/</link><language>en</language><item><title>Template of Aho-Corasick AutoMaton</title><link>https://blog.517group.cn/posts/202604062214/</link><guid isPermaLink="true">https://blog.517group.cn/posts/202604062214/</guid><description>meaningful</description><pubDate>Mon, 06 Apr 2026 22:14:49 GMT</pubDate><content:encoded>&lt;p&gt;I learned &lt;em&gt;Owner Pointer&lt;/em&gt; and &lt;em&gt;Reference Pointer&lt;/em&gt; today, and used them to write a safe ACAM template.&lt;/p&gt;
&lt;p&gt;The algorithm itself is standard, so I will go straight to the template.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;template &amp;lt;typename Tp&amp;gt;
using OwnPointer = std::unique_ptr&amp;lt;Tp&amp;gt;;
template &amp;lt;typename Tp&amp;gt;
using RefPointer = Tp*;

constexpr int MAXN = 6;
constexpr int ALPHABET = 26;

class ACAM {
public:
    explicit ACAM(const std::vector&amp;lt;std::string&amp;gt;&amp;amp; set) {
        Build(set);
    }
private:
    struct TrieNode {
        std::array&amp;lt;RefPointer&amp;lt;TrieNode&amp;gt;, ALPHABET&amp;gt; ch;
        RefPointer&amp;lt;TrieNode&amp;gt; fail;
        bool is_taboo;

        TrieNode() {
            ch.fill(nullptr);
            fail = nullptr;
            is_taboo = false;
        }
    };
    std::vector&amp;lt;OwnPointer&amp;lt;TrieNode&amp;gt;&amp;gt; node_pool;
    RefPointer&amp;lt;TrieNode&amp;gt; root;

    RefPointer&amp;lt;TrieNode&amp;gt; NewNode() {
        node_pool.emplace_back(std::make_unique&amp;lt;TrieNode&amp;gt;());
        return node_pool.back().get();
    }

    void Insert(const std::string&amp;amp; s) {
        RefPointer&amp;lt;TrieNode&amp;gt; now = root;
        for(const char&amp;amp; c : s) {
            int id = c-&apos;a&apos;;
            if(!now-&amp;gt;ch[id]) {
                now-&amp;gt;ch[id] = NewNode();
            }
            now=now-&amp;gt;ch[id];
        }
        now-&amp;gt;is_taboo = true;
    }

    void Build(const std::vector&amp;lt;std::string&amp;gt;&amp;amp; set) {
        root = NewNode();
        for(const std::string &amp;amp;s : set) Insert(s);
        std::queue&amp;lt;RefPointer&amp;lt;TrieNode&amp;gt;&amp;gt; q;
        root-&amp;gt;fail = root;
        for(int i=0;i&amp;lt;ALPHABET;i++) {
            if(root-&amp;gt;ch[i]) {
                root-&amp;gt;ch[i]-&amp;gt;fail = root;
                q.push(root-&amp;gt;ch[i]);
            } else root-&amp;gt;ch[i] = root;
        }
        while(!q.empty()) {
            auto u = q.front(); q.pop();
            u-&amp;gt;is_taboo |= u-&amp;gt;fail-&amp;gt;is_taboo;
            for(int i{0}; i &amp;lt; ALPHABET; i++) {
                if(u-&amp;gt;ch[i]) {
                    u-&amp;gt;ch[i]-&amp;gt;fail = u-&amp;gt;fail-&amp;gt;ch[i];
                    q.push(u-&amp;gt;ch[i]);
                } else u-&amp;gt;ch[i] = u-&amp;gt;fail-&amp;gt;ch[i];
            }
        }
    }
public:
    auto Root() const -&amp;gt; RefPointer&amp;lt;TrieNode&amp;gt; {
        return root;
    }
    auto Next(RefPointer&amp;lt;TrieNode&amp;gt; u, char c) const -&amp;gt; RefPointer&amp;lt;TrieNode&amp;gt; {
        return u-&amp;gt;ch[c-&apos;a&apos;];
    }
    auto IsTaboo(RefPointer&amp;lt;TrieNode&amp;gt; u) const -&amp;gt; bool {
        return u-&amp;gt;is_taboo;
    }
    auto Size() const -&amp;gt; int {
        return node_pool.size();
    }
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;s&gt;Remember that I was solving LuoguP4569 at the time; &lt;code&gt;Taboo&lt;/code&gt; means a forbidden string.&lt;/s&gt;&lt;/p&gt;
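&lt;p&gt;A minimal usage sketch (the pattern set and text below are made-up examples, not part of the original template):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ACAM acam({&quot;he&quot;, &quot;she&quot;, &quot;his&quot;});
auto u = acam.Root();
bool hit = false;
for (const char c : std::string(&quot;ahishers&quot;)) {
    u = acam.Next(u, c);    // one automaton transition per character
    hit |= acam.IsTaboo(u); // becomes true once any pattern has been matched
}
// hit is now true: &quot;his&quot;, &quot;she&quot; and &quot;he&quot; all occur in &quot;ahishers&quot;
&lt;/code&gt;&lt;/pre&gt;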
</content:encoded></item><item><title>Solution Report of String(Easy) Topic</title><link>https://blog.517group.cn/posts/202603121938/</link><guid isPermaLink="true">https://blog.517group.cn/posts/202603121938/</guid><description>Easy</description><pubDate>Thu, 26 Mar 2026 19:38:13 GMT</pubDate><content:encoded>&lt;h1&gt;LuoguP13270 最小表示法 (Minimal Representation)&lt;/h1&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Useful Link&lt;/strong&gt;&lt;br /&gt;
&lt;a href=&quot;https://www.luogu.com.cn/problem/P13270&quot;&gt;Problem Statement&lt;/a&gt;&lt;br /&gt;
&lt;a href=&quot;https://www.luogu.com.cn/article/sl2n7n1z&quot;&gt;Reference Blog&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Hash Tech: How to compare string with dictionary order&lt;/h2&gt;
&lt;p&gt;Hashing with binary lifting finds the LCP of two strings; then we compare the next character.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;std::string s, t;
int len{0};
const int lim = (int)std::min(s.size(), t.size());
for (int k{19}; k &amp;gt;= 0; k--) {
    if (len + (1 &amp;lt;&amp;lt; k) &amp;lt;= lim
        &amp;amp;&amp;amp; getHash(s, 0, len+(1&amp;lt;&amp;lt;k)-1) == getHash(t, 0, len+(1&amp;lt;&amp;lt;k)-1)) {
        len += (1 &amp;lt;&amp;lt; k);
    }
}
if (len == lim) ; // one string is a prefix of the other (equal if lengths match)
else ; // compare s[len] with t[len] to decide the dictionary order
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;Sketch only: &lt;code&gt;getHash(s, l, r)&lt;/code&gt; is assumed to return the hash of the inclusive range $[l, r]$.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Solution&lt;/h2&gt;
&lt;p&gt;It is hard to work on a circular string directly, so append a copy of the string behind itself.&lt;/p&gt;
&lt;p&gt;Now we need to solve the following problem:&lt;br /&gt;
find the $i$ that minimizes $s[i\dots i+n-1]$.&lt;/p&gt;
&lt;p&gt;Next, consider how to compare candidates quickly. Suppose we are comparing the rotations starting at $st$ and at $i$.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;First, find the maximum $j$ such that $s[st\dots st+j-1] = s[i\dots i+j-1]$.&lt;/li&gt;
&lt;li&gt;Then compare the characters $s[st+j]$ and $s[i+j]$.
&lt;ul&gt;
&lt;li&gt;If $s[st+j]$ is better than $s[i+j]$, no rotation starting at any $k$ with $i \le k \le i+j$ can be the answer, so move $i$ to $i+j+1$.&lt;/li&gt;
&lt;li&gt;Otherwise, $st,\dots,st+j$ are all beaten: set $st$ to $i$ and move $i$ to $\max(st_{\text{old}}+j+1, i+1)$.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;#include&amp;lt;bits/stdc++.h&amp;gt;
using namespace std;
const int maxn=2e7;
int n;
string s;
int main() {
	ios::sync_with_stdio(0);
	cin.tie(0),cout.tie(0);
	cin&amp;gt;&amp;gt;n&amp;gt;&amp;gt;s,s=&quot; &quot;+s+s;
	int st=1;
	for(int i=2;i&amp;lt;=n;){
		int j=0;
		for(j=0;j&amp;lt;n &amp;amp;&amp;amp; s[st+j]==s[i+j];j++);
		if(j==n) break;
		if(s[st+j]&amp;gt;s[i+j]){
            int m=st;
            st=i,i=max(i+1,m+j+1);
        }
		else i+=j+1;
	}
	for(int i=st;i&amp;lt;=st+n-1;i++) cout&amp;lt;&amp;lt;s[i];
	return 0;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;LuoguP9873 [EC Final 2021] Beautiful String&lt;/h1&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Useful Link&lt;/strong&gt;&lt;br /&gt;
&lt;a href=&quot;https://www.luogu.com.cn/problem/P9873&quot;&gt;Problem Statement&lt;/a&gt;&lt;br /&gt;
&lt;a href=&quot;https://www.luogu.com.cn/article/u7fvh865&quot;&gt;Reference Blog&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;We find that every beautiful string has the form &lt;code&gt;AABCAB&lt;/code&gt;, so we can compute the answer by counting occurrences of &lt;code&gt;AB&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Then we define the following two arrays:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Let $f[i][j]$ denote, when &lt;code&gt;AB&lt;/code&gt; $= S[i\dots i + j − 1]$, the number of substrings equal to &lt;code&gt;AB&lt;/code&gt; that start after position $i + j$.&lt;/li&gt;
&lt;li&gt;Let $g[i][j]$ denote, when &lt;code&gt;AB&lt;/code&gt; $= S[i\dots i + j − 1]$, the number of substrings before &lt;code&gt;AB&lt;/code&gt; that are prefixes of &lt;code&gt;AB&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Then the answer is $\sum f[i][j]\times g[i][j]$.&lt;/p&gt;
&lt;p&gt;Now consider how to compute these arrays.&lt;/p&gt;
&lt;p&gt;For $f$, compute the LCP length of every pair of suffixes $s[i\dots n]$ and $s[j\dots n]$, accumulate into $f[i][\mathrm{LCP}]$, and then take suffix sums over the second index.&lt;br /&gt;
For $g$, count the indices $j$ such that the LCP of $s[i\dots n]$ and $s[i-j\dots n]$ has length at least $j$.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;cin &amp;gt;&amp;gt; S;
n = S.length(); S = &apos; &apos; + S;
for(int i = n; i; i--)
    for(int j = i; j &amp;lt;= n; j++)
        if(S[i] == S[j])
            lcp[i][j] = lcp[i + 1][j + 1] + 1;
for(int i = 2; i &amp;lt;= n; i++)
    for(int j = i + 3; j &amp;lt; n; j++) {
        int k = min(j - i - 1, lcp[i][j]);
        if(k &amp;gt;= 2) f[i][k]++;
    }
for(int i = 2; i &amp;lt;= n; i++)
    for(int j = n - 1; j &amp;gt; 1; j--) f[i][j] += f[i][j + 1];
for(int i = 2; i &amp;lt;= n; i++)
    for(int j = 1; j &amp;lt;= min(i - 1, n - i + 1); j++) 
        if(lcp[i - j][i] &amp;gt;= j) g[i][j]++;
for(int i = 1; i &amp;lt;= n; i++)
    for(int j = 1; j &amp;lt;= n; j++) g[i][j] += g[i][j - 1];
long long ans = 0;
for(int i = 2; i &amp;lt;= n; i++)
    for(int j = 2; j &amp;lt;= n; j++) ans += 1ll * f[i][j] * g[i][j - 1];
cout &amp;lt;&amp;lt; ans &amp;lt;&amp;lt; &apos;\n&apos;;
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;LuoguP9016 [USACO23JAN] Find and Replace G&lt;/h1&gt;
&lt;p&gt;For problems where a later operation can overwrite an earlier one, it usually helps to process the operations from back to front.&lt;/p&gt;
&lt;p&gt;We can build a DAG to solve this problem. Let&apos;s use the sample operations as a demonstration.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;a ab
a bc
c de
b bbb
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Build five nodes for the letters &lt;code&gt;a&lt;/code&gt; to &lt;code&gt;e&lt;/code&gt;, then build the graph starting from the last operation.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;      (7,#,sz=3)
       /      \
 (6,#,2)      (2,&apos;b&apos;)
   /    \
(2,&apos;b&apos;) (2,&apos;b&apos;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We keep the shape of a binary tree but reuse identical nodes.&lt;/p&gt;
&lt;p&gt;Below is the operation &lt;code&gt;c de&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;   (8,#,2)
   /    \
(4,&apos;d&apos;)(5,&apos;e&apos;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then we can see how to combine two subtrees (operation &lt;code&gt;a bc&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;           (9,#,5)
          /       \
     (7,#,3)    (8,#,2)
      /   \      /    \
  (6,#)    b    d      e
   / \
  b   b
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Final version:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;                    (10,#,8)
                   /        \
             (9,#,5)      (7,#,3)
            /      \      /      \
       (7,#,3)  (8,#,2) (6,#)    b
        /   \    /   \   /  \
    (6,#)   b   d     e b    b
     / \
    b   b
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, the answer can be read off from the node sizes.&lt;/p&gt;
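&lt;p&gt;A sketch of the node structure behind this idea (the names here are mine, and the real problem additionally needs the sizes reduced modulo the required modulus, which is omitted):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;struct Node {
    Node *l{nullptr}, *r{nullptr}; // children are shared, never copied
    long long sz;                  // length of the expansion
    explicit Node(long long s) : sz(s) {}
};

Node* cur[26]; // cur[c]: current expansion of letter c (init: leaves of size 1)

Node* Merge(Node* a, Node* b) {
    Node* p = new Node(a-&amp;gt;sz + b-&amp;gt;sz); // cap or take mod in practice
    p-&amp;gt;l = a, p-&amp;gt;r = b;
    return p;
}

// process the operations from last to first:
// replacing letter c by string t reuses the current expansions of t&apos;s letters
void Apply(char c, const std::string&amp;amp; t) {
    Node* u = cur[t[0] - &apos;a&apos;];
    for (std::size_t i = 1; i &amp;lt; t.size(); i++)
        u = Merge(u, cur[t[i] - &apos;a&apos;]);
    cur[c - &apos;a&apos;] = u; // the old cur[c] stays shared inside earlier structures
}
&lt;/code&gt;&lt;/pre&gt;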
&lt;h1&gt;LuoguP7114 [NOIP2020] 字符串匹配 (String Matching)&lt;/h1&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Useful Link&lt;/strong&gt;&lt;br /&gt;
&lt;a href=&quot;https://www.luogu.com.cn/problem/P7114&quot;&gt;Problem Statement&lt;/a&gt;&lt;br /&gt;
&lt;a href=&quot;https://www.luogu.com.cn/article/bti82yh8&quot;&gt;Reference Blog&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;em&gt;It&apos;s a really perfect tutorial. I don&apos;t even know what else to supplement.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Define the $i$-th element of the Z-array as the length of the LCP of $s[0\dots n-1]$ and $s[i\dots n-1]$.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn.luogu.com.cn/upload/image_hosting/16k5feoc.png&quot; alt=&quot;Example&quot; /&gt;&lt;/p&gt;
&lt;p&gt;From the image we can know that $K$ in problem statement can take any integer from $1$ to
$$
\left\lfloor\frac{z[i]}{i}\right\rfloor + 1
$$&lt;/p&gt;
&lt;p&gt;Here $i$ is the length of the cyclic section. The example image shows why: the red part equals the orange part, and the orange part equals the green part; since the red, orange, and green parts are all equal, $3$ is a valid length for the cyclic section.&lt;/p&gt;
&lt;p&gt;Next we focus on how many times each letter appears, splitting into cases by the parity of $K$. Define $f(i,j)$ as the number of letters that appear an odd number of times in $s[i\dots j]$. Let $t$ be the number of possible values of $K$; then the number of odd choices is $t_{odd} = t - \lfloor t/2\rfloor$ and the number of even choices is $t_{even} = \lfloor t/2\rfloor$.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;When $K$ is odd, let $t_1$ be the number of indices $j\ (j\le i)$ satisfying $f(0,j) \le f(i,n-1)$; this case contributes $t_{odd} \times t_1$.
&lt;img src=&quot;https://cdn.luogu.com.cn/upload/image_hosting/iv72qpb3.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;When $K$ is even, let $t_2$ be the number of indices $j\ (j\le i)$ satisfying $f(0,j) \le f(0,n-1)$. This works because the number of cyclic sections is even: a letter that appears an odd number of times in one section still appears an even number of times overall, so the set of odd-count letters matches that of the whole string. This case contributes $t_{even} \times t_2$.
&lt;img src=&quot;https://cdn.luogu.com.cn/upload/image_hosting/6pxq01fv.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
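&lt;p&gt;The Z-array itself can be computed in $O(n)$ with the standard algorithm; a self-contained sketch:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;std::vector&amp;lt;int&amp;gt; ZFunction(const std::string&amp;amp; s) {
    const int n = s.size();
    std::vector&amp;lt;int&amp;gt; z(n, 0);
    if (n) z[0] = n;
    for (int i = 1, l = 0, r = 0; i &amp;lt; n; i++) {
        if (i &amp;lt; r) z[i] = std::min(r - i, z[i - l]); // reuse the [l, r) Z-box
        while (i + z[i] &amp;lt; n &amp;amp;&amp;amp; s[z[i]] == s[i + z[i]]) z[i]++;
        if (i + z[i] &amp;gt; r) l = i, r = i + z[i];      // extend the Z-box
    }
    return z;
}
&lt;/code&gt;&lt;/pre&gt;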
&lt;h1&gt;LuoguP3167 [CQOI2014] 通配符匹配 (Wildcard Matching)&lt;/h1&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Useful Link&lt;/strong&gt;&lt;br /&gt;
&lt;a href=&quot;https://www.luogu.com.cn/problem/P3167&quot;&gt;Problem Statement&lt;/a&gt;&lt;br /&gt;
&lt;a href=&quot;https://www.luogu.com.cn/article/e4th1mfd&quot;&gt;Reference Blog&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This problem shows off the charm of DFS. Most tutorials reach for heavier machinery such as KMP or ACAM and forget the simplest algorithm.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#include&amp;lt;bits/stdc++.h&amp;gt;
using namespace std;
char ch[100001],wzc[100001];
int n;

bool _doudou(int x,int y)// Start brute-force search, search from back to front to avoid some tricky test cases, also makes implementation easier
{
    if(y==0)// If the pattern string is fully matched
    {
        if(x==0)return 1;// If the wildcard string is also fully consumed, then it&apos;s definitely correct
        for(int i=x;i&amp;gt;0;i--)// Check remaining &apos;*&apos; characters
            if(ch[i]!=&apos;*&apos;)return 0;// If anything other than &apos;*&apos;, then it fails
        return 1;// Otherwise it&apos;s fine
    }

    if(!x)return 0;// If the wildcard string finishes first, then it must fail

    if(ch[x]==&apos;*&apos;)// If we encounter a &apos;*&apos;
    {
        for(int i=y;i&amp;gt;=0;i--)// Try matching from all remaining positions.
                             // The time complexity looks high, but in practice most branches terminate quickly.
            if(_doudou(x-1,i))return 1;
        return 0;// No split position matched, so this branch fails
    }
    else
    {
        if(wzc[y]==ch[x]||ch[x]==&apos;?&apos;)
            return _doudou(x-1,y-1);// If it&apos;s &apos;?&apos;, move both strings one position forward
        else return 0;// Match failed
    }
}

int main()
{
    scanf(&quot;%s%d&quot;,ch+1,&amp;amp;n);
    int len=strlen(ch+1);

    while(n--)
    {
        scanf(&quot;%s&quot;,wzc+1);

        if(_doudou(len,strlen(wzc+1)))
            printf(&quot;YES\n&quot;);
        else
            printf(&quot;NO\n&quot;);
    }

    return 0;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;LuoguP3082 [USACO13MAR] Necklace G&lt;/h1&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Useful Link&lt;/strong&gt;&lt;br /&gt;
&lt;a href=&quot;https://www.luogu.com.cn/problem/P3082&quot;&gt;Problem Statement&lt;/a&gt;&lt;br /&gt;
&lt;a href=&quot;https://www.luogu.com.cn/article/vhpibjbr&quot;&gt;Reference Blog&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This problem uses KMP to optimize a DP.&lt;/p&gt;
&lt;p&gt;Let $f_{i,j}$ denote the maximum number of letters we can keep after processing the first $i$ letters of $a$, when the current match length against $b$ is exactly $j$. Then we get a simple transition:&lt;/p&gt;
&lt;p&gt;$$
f_{i+1,k} = \max(f_{i+1,k},\ f_{i,j}+1)
$$&lt;/p&gt;
&lt;p&gt;Now we need to compute $k$ quickly. Let $g_{i,j}$ denote the new match length when the first $i$ letters of $b$ are matched exactly and the next letter is $j$. Then we get the following transition.&lt;/p&gt;
&lt;p&gt;$$
g_{i,j} =
\begin{cases}
i+1 &amp;amp; b_{i+1} = j\\
g_{nxt_{i},\, j} &amp;amp; \text{otherwise}
\end{cases}
$$&lt;/p&gt;
&lt;p&gt;The implementation is straightforward.&lt;/p&gt;
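&lt;p&gt;A sketch of the DP loop under the definitions above (I also include the &quot;delete this letter&quot; transition $f_{i+1,j}=\max(f_{i+1,j},f_{i,j})$, which the text leaves implicit; here $n=|a|$, $m=|b|$, and $g$ is precomputed as described):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// f[i][j] = max letters kept among a[1..i] with match length j against b
for (int i = 0; i &amp;lt; n; i++)
    for (int j = 0; j &amp;lt; m; j++) { // j == m would mean b appears; forbidden
        if (f[i][j] &amp;lt; 0) continue;                   // unreachable state
        f[i + 1][j] = std::max(f[i + 1][j], f[i][j]); // delete a[i+1]
        int k = g[j][a[i + 1] - &apos;a&apos;];                 // keep a[i+1]
        if (k &amp;lt; m) f[i + 1][k] = std::max(f[i + 1][k], f[i][j] + 1);
    }
// the answer is the maximum of f[n][j] over all j
&lt;/code&gt;&lt;/pre&gt;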
</content:encoded></item><item><title>Memo of Trigonometric Formula</title><link>https://blog.517group.cn/posts/202603222001/</link><guid isPermaLink="true">https://blog.517group.cn/posts/202603222001/</guid><description>Useful</description><pubDate>Sun, 22 Mar 2026 20:01:23 GMT</pubDate><content:encoded>&lt;p&gt;This is a memo, to keep me from forgetting some trigonometric formulas.&lt;/p&gt;
&lt;h1&gt;Basic Formula&lt;/h1&gt;
&lt;p&gt;&lt;strong&gt;Unit circle&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Pythagorean identity&lt;/p&gt;
&lt;p&gt;$$
\cos^2 \theta + \sin^2 \theta = 1
$$&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Rotation matrix&lt;/strong&gt; (counterclockwise rotation)&lt;/p&gt;
&lt;p&gt;$$
\left[
\begin{matrix}
\cos\theta &amp;amp; -\sin\theta \\
\sin\theta &amp;amp; \cos\theta
\end{matrix}
\right]
\times
\left[
\begin{matrix}
x \\ y
\end{matrix}
\right]
$$&lt;/p&gt;
&lt;h1&gt;Sum Formula&lt;/h1&gt;
&lt;p&gt;$$
\begin{aligned}
&amp;amp; \sin(\alpha + \beta) = \sin\alpha\cos\beta + \cos\alpha\sin\beta\\
&amp;amp; \cos(\alpha + \beta) = \cos\alpha\cos\beta - \sin\alpha\sin\beta\\
&amp;amp; \tan(\alpha + \beta) = \frac{\tan\alpha + \tan\beta}{1-\tan\alpha\tan\beta}
\end{aligned}
$$&lt;/p&gt;
&lt;h1&gt;Double and Half Formula&lt;/h1&gt;
&lt;p&gt;&lt;strong&gt;Double-angle&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;For sin.&lt;/p&gt;
&lt;p&gt;$$
\sin 2\theta = 2\sin\theta\cos\theta
$$&lt;/p&gt;
&lt;p&gt;For cos.&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\cos 2\theta
&amp;amp;= \cos^2\theta - \sin^2\theta \\
&amp;amp;= 1 - 2\sin^2\theta\\
&amp;amp;= 2\cos^2\theta -1
\end{aligned}
$$&lt;/p&gt;
&lt;p&gt;For tan.&lt;/p&gt;
&lt;p&gt;$$
\tan 2\theta = \frac{2\tan\theta}{1-\tan^2\theta}
$$&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Half-angle&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;For sin &amp;amp; cos.&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\sin \frac{\theta}{2} &amp;amp;= \pm \sqrt{\frac{1-\cos\theta}{2}} \\
\cos \frac{\theta}{2} &amp;amp;= \pm \sqrt{\frac{1+\cos\theta}{2}}
\end{aligned}
$$&lt;/p&gt;
&lt;p&gt;For tan.&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\tan \frac{\theta}{2}
&amp;amp;= \frac{\sin\theta}{1 + \cos\theta}\\
&amp;amp;= \frac{1 - \cos\theta}{\sin\theta}\\
&amp;amp;= \pm \sqrt{\frac{1-\cos\theta}{1+\cos\theta}}
\end{aligned}
$$&lt;/p&gt;
&lt;p&gt;The sign depends on which quadrant $\theta/2$ lies in.&lt;/p&gt;
&lt;p&gt;There are other trigonometric identities, but they can all be derived from the formulas above, so that is all for this memo.&lt;/p&gt;
</content:encoded></item><item><title>Mobius Inversion</title><link>https://blog.517group.cn/posts/202602091042/</link><guid isPermaLink="true">https://blog.517group.cn/posts/202602091042/</guid><description>Basic number theory knowledge</description><pubDate>Mon, 09 Feb 2026 10:42:13 GMT</pubDate><content:encoded>&lt;h1&gt;Introduction&lt;/h1&gt;
&lt;p&gt;Inversion is a common tool for solving certain counting problems in number theory.&lt;/p&gt;
&lt;p&gt;This post introduces some tricks for solving such problems.&lt;/p&gt;
&lt;h1&gt;Prerequisites&lt;/h1&gt;
&lt;h2&gt;Multiplicative Functions&lt;/h2&gt;
&lt;p&gt;A function $f(n)$ is multiplicative if, for all $a,b$ with $\gcd(a,b)=1$,
it satisfies $f(ab)=f(a)f(b)$. In particular, if this holds for all $a,b$,
then the function is called completely multiplicative.&lt;/p&gt;
&lt;p&gt;Common multiplicative functions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Unit function: $\epsilon(n) = [n=1]$, completely multiplicative&lt;/li&gt;
&lt;li&gt;Constant function: $\operatorname{1}(n) = 1$, completely multiplicative&lt;/li&gt;
&lt;li&gt;Identity function: $\operatorname{id}(n) = n$, and the power functions $\operatorname{id}_{k}(n)=n^k$, completely multiplicative&lt;/li&gt;
&lt;li&gt;Euler totient function: $\varphi(n) = \sum_{i=1}^{n}[\operatorname{gcd}(i,n)=1]$&lt;/li&gt;
&lt;li&gt;Mobius function:
$$
\mu(n)=\begin{cases}
1 &amp;amp; n=1\\
0 &amp;amp; \exists\ d&amp;gt;1\ \text{such that}\ d^2 \mid n\\
(-1)^k &amp;amp; \text{otherwise, where}\ k\ \text{is the number of distinct prime divisors of}\ n
\end{cases}
$$&lt;/li&gt;
&lt;li&gt;Number of divisors function: $\operatorname{d}(n) = \sum_{d|n}1$&lt;/li&gt;
&lt;li&gt;Sum of divisors function: $\sigma(n) = \sum_{d|n}d$&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Dirichlet Convolution&lt;/h2&gt;
&lt;p&gt;The Dirichlet convolution of two functions is defined as:
$$
(f\ast g)(n) = \sum_{d|n} f(d)g\left(\frac{n}{d}\right)
$$&lt;/p&gt;
&lt;p&gt;Dirichlet convolution satisfies the following laws:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Commutative law : $f\ast g = g\ast f$;&lt;/li&gt;
&lt;li&gt;Associative law : $(f\ast g)\ast h = f\ast (g\ast h)$;&lt;/li&gt;
&lt;li&gt;Distributive law : $f\ast (g+h) = f\ast g + f\ast h$;&lt;/li&gt;
&lt;li&gt;Identity element : $f\ast\epsilon = f$.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And here are some important Dirichlet convolution identities:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;$\epsilon =\mu\ast 1$&lt;/li&gt;
&lt;li&gt;$\operatorname{id} = \varphi\ast 1$&lt;/li&gt;
&lt;li&gt;$\operatorname{d}=1\ast 1$&lt;/li&gt;
&lt;li&gt;$\sigma = \operatorname{id}\ast 1$&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The proofs of these identities are not difficult but very important; try to prove them yourself before reading on.&lt;/p&gt;
&lt;h1&gt;Mobius Inversion&lt;/h1&gt;
&lt;p&gt;The following is the basic form of Mobius inversion:&lt;/p&gt;
&lt;p&gt;If
$$
F(n) = \sum_{d|n}f(d),
$$
then
$$
f(n) = \sum_{d|n}\mu(d)F\left(\frac{n}{d}\right).
$$&lt;/p&gt;
&lt;p&gt;This relation explains the word &quot;inversion&quot; perfectly: the Mobius function turns the rule for computing $F$ from $f$ into a rule for computing $f$ from $F$.&lt;/p&gt;
&lt;p&gt;We can prove this pattern easily with Dirichlet convolution.&lt;/p&gt;
&lt;p&gt;Observing the form of the relation, we can express it with Dirichlet convolution:
$$
\begin{aligned}
f &amp;amp;= f \ast \epsilon\\
&amp;amp;= f \ast \mu \ast 1 \\
&amp;amp;= f \ast 1 \ast \mu \\
&amp;amp;= F \ast \mu \\
&amp;amp;= \mu \ast F
\end{aligned}
$$&lt;/p&gt;
&lt;p&gt;I will explain each step of this proof.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Dirichlet convolution&apos;s identity element $\epsilon$&lt;/li&gt;
&lt;li&gt;$\epsilon = \mu \ast 1$&lt;/li&gt;
&lt;li&gt;Commutative law&lt;/li&gt;
&lt;li&gt;The relation from $f$ to $F$: $F = f \ast 1$&lt;/li&gt;
&lt;li&gt;Commutative law&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Mobius inversion also has a second form, based on multiples instead of divisors.&lt;/p&gt;
&lt;p&gt;If&lt;/p&gt;
&lt;p&gt;$$
F(n)=\sum_{n|d}f(d),
$$&lt;/p&gt;
&lt;p&gt;then&lt;/p&gt;
&lt;p&gt;$$
f(n)=\sum_{n|d}\mu\left(\frac{d}{n}\right)F(d).
$$&lt;/p&gt;
&lt;p&gt;This form cannot be proved as elegantly as the previous one.&lt;/p&gt;
&lt;p&gt;Instead, expand $F(d)$ in the inverted formula:&lt;/p&gt;
&lt;p&gt;$$
f(n)=\sum_{n|d}\mu\left(\frac{d}{n}\right)\sum_{d|e}f(e).
$$&lt;/p&gt;
&lt;p&gt;Then swap the order of summation:&lt;/p&gt;
&lt;p&gt;$$
f(n) = \sum_{n|e} f(e) \sum_{n | d | e} \mu\left(\frac{d}{n}\right).
$$&lt;/p&gt;
&lt;p&gt;Substituting $d = nt$ and $e = ns$, the equation becomes:&lt;/p&gt;
&lt;p&gt;$$
f(n) = \sum_{s\ge 1} f(ns)\sum_{t|s}\mu(t)
$$&lt;/p&gt;
&lt;p&gt;By the fundamental property of the Mobius function,&lt;/p&gt;
&lt;p&gt;$$
\sum_{t|s}\mu(t) =
\begin{cases}
1 &amp;amp; s = 1\\
0 &amp;amp; s &amp;gt; 1
\end{cases}
$$&lt;/p&gt;
&lt;p&gt;So only the $s = 1$ term survives, the equation reduces to $f(n) = f(n)$, and the proof of the multiple form of Mobius inversion is complete.&lt;/p&gt;
&lt;h1&gt;Classic Problem&lt;/h1&gt;
&lt;p&gt;Before solving the problems, we need one core trick:&lt;/p&gt;
&lt;p&gt;$$
[\gcd(i,j)=1]=\sum_{d|\gcd(i,j)}\mu(d)
$$&lt;/p&gt;
&lt;p&gt;This trick needs no separate proof: as noted above, the sum of the Mobius function over the divisors of a number equals $1$ only when that number is $1$, and $0$ otherwise.&lt;/p&gt;
&lt;h2&gt;Problem 1&lt;/h2&gt;
&lt;p&gt;$$
\sum_{i=1}^n\sum_{j=1}^m[\gcd(i,j) = k]
$$&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\sum_{i=1}^n\sum_{j=1}^m[\gcd(i,j) = k]
&amp;amp;=\sum_{i=1}^{\lfloor n/k\rfloor}\sum_{j=1}^{\lfloor m/k\rfloor}[\gcd(i,j)=1]\\
&amp;amp;=\sum_{i=1}^{\lfloor n/k\rfloor}\sum_{j=1}^{\lfloor m/k\rfloor}\sum_{d|\gcd(i,j)}\mu(d)\\
&amp;amp;=\sum_{d=1}^{\min(\lfloor n/k\rfloor,\lfloor m/k\rfloor)}\mu(d)\sum_{i=1}^{\lfloor n/k\rfloor}[d|i]\sum_{j=1}^{\lfloor m/k\rfloor}[d|j]\\
&amp;amp;=\sum_{d=1}^{\min(\lfloor n/k\rfloor,\lfloor m/k\rfloor)}\mu(d)\left\lfloor\frac{n}{kd}\right\rfloor\left\lfloor\frac{m}{kd}\right\rfloor
\end{aligned}
$$&lt;/p&gt;
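&lt;p&gt;With $\mu$ sieved linearly and its prefix sums precomputed (called &lt;code&gt;mu_pre&lt;/code&gt; below, my name), the final sum can be evaluated with divisor blocks in $O(\sqrt{n})$ per query; a sketch:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;long long Solve(long long n, long long m, long long k) {
    n /= k, m /= k;                       // reduce to the gcd = 1 case
    long long res = 0;
    const long long lim = std::min(n, m);
    for (long long l = 1, r; l &amp;lt;= lim; l = r + 1) { // divisor blocks
        r = std::min(n / (n / l), m / (m / l));      // n/d and m/d are constant on [l, r]
        res += (mu_pre[r] - mu_pre[l - 1]) * (n / l) * (m / l);
    }
    return res;
}
&lt;/code&gt;&lt;/pre&gt;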
&lt;h2&gt;Problem 2&lt;/h2&gt;
&lt;p&gt;$$
\sum_{i=1}^n\sum_{j=1}^md(ij)
$$&lt;/p&gt;
&lt;p&gt;Function $d(n)$ is number of divisors function.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Core Lemma:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;$$
d(ij)=\sum_{x|i}\sum_{y|j}[\gcd(x,y)=1]
$$&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\sum_{i=1}^n\sum_{j=1}^md(ij)
&amp;amp;=\sum_{i=1}^n\sum_{j=1}^m\sum_{x|i}\sum_{y|j}[\gcd(x,y)=1]\\
&amp;amp;=\sum_{i=1}^n\sum_{j=1}^m\sum_{x|i}\sum_{y|j}\sum_{d|\gcd(x,y)}\mu(d)\\
&amp;amp;=\sum_{x=1}^n\sum_{y=1}^m\left\lfloor\frac{n}{x}\right\rfloor\left\lfloor\frac{m}{y}\right\rfloor\sum_{d|\gcd(x,y)}\mu(d)\\
&amp;amp;=\sum_{d=1}^{\min(n, m)}\mu(d)\sum_{x=1}^n\sum_{y=1}^m\left\lfloor\frac{n}{x}\right\rfloor\left\lfloor\frac{m}{y}\right\rfloor[d|x][d|y]\\
&amp;amp;=\sum_{d=1}^{\min(n, m)}\mu(d)\sum_{x=1}^{\lfloor n/d\rfloor}\left\lfloor\frac{n}{xd}\right\rfloor\sum_{y=1}^{\lfloor m/d\rfloor}\left\lfloor\frac{m}{yd}\right\rfloor
\end{aligned}
$$&lt;/p&gt;
&lt;h2&gt;Problem 3&lt;/h2&gt;
&lt;p&gt;$$
\sum_{i=1}^n\sum_{j=1}^n i\times j\times \gcd(i,j)
$$&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\sum_{i=1}^n\sum_{j=1}^n i\times j\times \gcd(i,j)
&amp;amp;=\sum_{d=1}^nd\sum_{i=1}^n\sum_{j=1}^nij[\gcd(i,j)=d]\\
&amp;amp;=\sum_{d=1}^nd^3\sum_{i=1}^{\lfloor n/d\rfloor}\sum_{j=1}^{\lfloor n/d\rfloor}ij[\gcd(i,j)=1]\\
&amp;amp;=\sum_{d=1}^nd^3\sum_{i=1}^{\lfloor n/d\rfloor}\sum_{j=1}^{\lfloor n/d\rfloor}ij\sum_{k|\gcd(i,j)}\mu(k)\\
&amp;amp;=\sum_{d=1}^nd^3\sum_{k=1}^{\lfloor n/d\rfloor}\mu(k)\sum_{i&apos;=1}^{\lfloor n/(kd)\rfloor}(ki&apos;)\sum_{j&apos;=1}^{\lfloor n/(kd)\rfloor}(kj&apos;)\\
&amp;amp;=\sum_{d=1}^nd^3\sum_{k=1}^{\lfloor n/d\rfloor}\mu(k)k^2\sum_{i&apos;=1}^{\lfloor n/(kd)\rfloor}i&apos;\sum_{j&apos;=1}^{\lfloor n/(kd)\rfloor}j&apos;\\
&amp;amp;=\sum_{d=1}^nd^3\sum_{k=1}^{\lfloor n/d\rfloor}\mu(k)k^2S\left(\left\lfloor\frac{n}{kd}\right\rfloor\right)^2
\end{aligned}
$$&lt;/p&gt;
&lt;p&gt;Here $S(x) = \frac{x(x+1)}{2}$ is the sum $1 + 2 + \dots + x$.&lt;/p&gt;
&lt;h2&gt;Problem 4&lt;/h2&gt;
&lt;p&gt;$$
\prod_{i=1}^n\prod_{j=1}^mf_{\gcd(i,j)}
$$&lt;/p&gt;
&lt;p&gt;Here $f$ is the Fibonacci sequence.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Core Lemma:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;$$
f_{\gcd(i,j)}=\gcd(f_i, f_j)
$$&lt;/p&gt;
&lt;p&gt;Below is a proof.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lemma 1.&lt;/strong&gt; $\gcd(f_n,f_{n-1})=1$&lt;/p&gt;
&lt;p&gt;Prove it by mathematical induction:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Base case:
&lt;ul&gt;
&lt;li&gt;For $n=1$, $\gcd(f_1, f_0) = \gcd(1, 0) = 1$&lt;/li&gt;
&lt;li&gt;For $n=2$, $\gcd(f_2, f_1) = \gcd(1, 1) = 1$&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Inductive hypothesis: Assume it holds for $n = k$.&lt;/li&gt;
&lt;li&gt;Inductive step: When $n = k+1$, $f_{k+1} = f_k + f_{k-1}$, so $\gcd(f_{k+1},f_k)=\gcd(f_k+f_{k-1},f_k)=\gcd(f_k,f_{k-1})=1$ (because $\gcd(x+y,y) = \gcd(x,y)$).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Lemma 2.&lt;/strong&gt; When $m &amp;gt; n$, $\gcd(f_m,f_n)=\gcd(f_n, f_{m\bmod n})$ holds.&lt;/p&gt;
&lt;p&gt;Before proving this lemma, we need the identity $f_{a+b}=f_{a+1}f_b + f_af_{b-1}$.&lt;br /&gt;
We again prove it by mathematical induction:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Base case: For $b = 1$, $f_{a+1}=f_{a+1}f_1 + f_af_0 = f_{a+1}$&lt;/li&gt;
&lt;li&gt;Inductive hypothesis: Assume it holds when $b \le k$.&lt;/li&gt;
&lt;li&gt;Inductive step: When $b = k+1$,
$$
\begin{aligned}
f_{a+k+1}
&amp;amp;=f_{a+k} + f_{a+k-1}\\
&amp;amp;=(f_{a+1}f_k + f_af_{k-1}) + (f_{a+1}f_{k-1} + f_af_{k-2})\\
&amp;amp;=f_{a+1}(f_k+f_{k-1}) + f_{a}(f_{k-1}+f_{k-2})\\
&amp;amp;=f_{a+1}f_{k+1}+f_{a}f_{k}.
\end{aligned}
$$&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Now we prove Lemma 2.&lt;/p&gt;
&lt;p&gt;Write $m = qn + r$ and let $a=n$, $b=(q-1)n+r$; then $f_m = f_{qn+r} = f_{n+[(q-1)n+r]} = f_{n+1}f_{(q-1)n+r}+f_{n}f_{(q-1)n+r-1}$, so:
$$
\gcd(f_m, f_n) = \gcd(f_{n+1}f_{(q-1)n+r}+f_nf_{(q-1)n+r-1}, f_n).
$$&lt;/p&gt;
&lt;p&gt;Since $\gcd(x+ky, y) = \gcd(x, y)$, we can simplify the expression above to:&lt;/p&gt;
&lt;p&gt;$$
\gcd(f_m, f_n) = \gcd(f_{n+1}f_{(q-1)n+r}, f_n).
$$&lt;/p&gt;
&lt;p&gt;Then, by Lemma 1, $\gcd(f_m, f_n) = \gcd(f_{(q-1)n+r}, f_n)$, because $f_{n+1}$ and $f_n$ share no common divisor. Repeating this process, we find that $\gcd(f_m, f_n) = \gcd(f_{m\bmod n}, f_n)$.&lt;/p&gt;
&lt;p&gt;Now we prove the core lemma.&lt;/p&gt;
&lt;p&gt;Let $d = \gcd(i,j)$. By the Euclidean algorithm,
$$
\gcd(i, j) = \gcd(j, i \bmod j) = \gcd(i \bmod j, j \bmod (i \bmod j)) = \dots = \gcd(d, 0) = d.
$$&lt;/p&gt;
&lt;p&gt;Combining &lt;strong&gt;Lemma 2&lt;/strong&gt;, we can apply the same reduction process to $\gcd(f_i, f_j)$:
$$
\gcd(f_i, f_j) = \gcd(f_j, f_{i \bmod j}) = \gcd(f_{i \bmod j}, f_{j \bmod (i \bmod j)}) = \dots = \gcd(f_d, f_0).
$$&lt;/p&gt;
&lt;p&gt;By definition, $f_0 = 0$, and $\gcd(f_d, 0) = f_d$ (the greatest common divisor of a non-zero number and $0$ is the number itself). Therefore,
$$
\gcd(f_i, f_j) = f_d = f_{\gcd(i,j)}.
$$&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\prod_{i=1}^n\prod_{j=1}^mf_{\gcd(i,j)}
&amp;amp;=\prod_{d=1}^{\min(n, m)}{f_d}^{\sum_{i=1}^n\sum_{j=1}^m[\gcd(i,j)=d]}\\
&amp;amp;=\prod_{d=1}^{\min(n, m)}{f_d}^{\sum_{k=1}^{\lfloor\min(n,m)/d\rfloor}\mu(k)\lfloor\frac{n}{dk}\rfloor\lfloor\frac{m}{dk}\rfloor}\\
&amp;amp;=\prod_{d=1}^{\min(n, m)}\prod_{d|T}{f_d}^{\mu(T/d)\lfloor n/T\rfloor\lfloor m/T\rfloor}
\end{aligned}
$$&lt;/p&gt;
&lt;p&gt;The last step substitutes $T = dk$ and uses $a^{x+y} = a^xa^y$ to split the sum in the exponent.&lt;/p&gt;
&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;The core idea of Mobius inversion is to reduce the time complexity of an expression by exploiting the properties of the Mobius function.&lt;/p&gt;
</content:encoded></item><item><title>Virtual Tree</title><link>https://blog.517group.cn/posts/202602041847/</link><guid isPermaLink="true">https://blog.517group.cn/posts/202602041847/</guid><description>A way to optimize algorithms whose time complexity depends on the tree size.</description><pubDate>Fri, 06 Feb 2026 11:54:11 GMT</pubDate><content:encoded>&lt;h1&gt;Introduction&lt;/h1&gt;
&lt;p&gt;As we know, some problems require maintaining information on a tree where only a few key nodes matter (we call such a tree &quot;sparse&quot;).&lt;/p&gt;
&lt;p&gt;By compressing the tree down to these key nodes and their pairwise LCAs, tree DP algorithms whose cost depends on the tree size become efficient.&lt;/p&gt;
&lt;h1&gt;Build Virtual Tree&lt;/h1&gt;
&lt;p&gt;There are two ways to build a virtual tree:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;sort twice;&lt;/li&gt;
&lt;li&gt;monotonic stack.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Sort Twice&lt;/h2&gt;
&lt;p&gt;This algorithm is easy to code but harder to understand. The flow:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;sort the key nodes by DFS order (dfn);&lt;/li&gt;
&lt;li&gt;take the LCA of each adjacent pair;&lt;/li&gt;
&lt;li&gt;build the virtual tree from these nodes.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Why is this algorithm correct?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;If $x$ is an ancestor of $y$, connect $x$ directly to $y$. Since their DFS orders are adjacent in the sorted list, there are no key nodes on the path from $x$ to $y$.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If $x$ is not an ancestor of $y$, then $\operatorname{LCA}(x,y)$ is an ancestor of $y$, and by the previous case there are no key nodes on the path from $\operatorname{LCA}(x,y)$ to $y$.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Therefore, connecting $\operatorname{LCA}(x,y)$ and $y$ will not result in any omissions or repetitions.&lt;/p&gt;
&lt;p&gt;Furthermore, will the fact that the first node is not connected to any node have any impact? Since the first node is always the root of the tree, it will not have any impact, so the total number of edges is $m-1$.&lt;/p&gt;
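&lt;p&gt;The sort-twice flow above can be sketched in code (a minimal sketch: the tree is hardcoded and the LCA is computed naively by climbing parents; a real solution would precompute &lt;code&gt;dfn&lt;/code&gt; by DFS and use binary lifting):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#include &amp;lt;algorithm&amp;gt;
#include &amp;lt;cassert&amp;gt;
#include &amp;lt;utility&amp;gt;
#include &amp;lt;vector&amp;gt;

// Example tree (root 1): edges 1-2, 1-3, 2-4, 2-5.
int par[6] = {0, 0, 1, 1, 2, 2};
int dep[6] = {0, 1, 2, 2, 3, 3};
int dfn[6] = {0, 1, 2, 5, 3, 4};   // a valid DFS order of this tree

int lca(int u, int v) {            // naive LCA by climbing parents
    while (u != v) {
        if (dep[u] &amp;lt; dep[v]) std::swap(u, v);
        u = par[u];
    }
    return u;
}

std::vector&amp;lt;std::pair&amp;lt;int, int&amp;gt;&amp;gt; buildVirtualTree(std::vector&amp;lt;int&amp;gt; key) {
    auto byDfn = [](int x, int y) { return dfn[x] &amp;lt; dfn[y]; };
    std::sort(key.begin(), key.end(), byDfn);        // first sort
    int m = key.size();
    for (int i = 0; i + 1 &amp;lt; m; i++)                  // LCA of each adjacent pair
        key.push_back(lca(key[i], key[i + 1]));
    std::sort(key.begin(), key.end(), byDfn);        // second sort
    key.erase(std::unique(key.begin(), key.end()), key.end());
    std::vector&amp;lt;std::pair&amp;lt;int, int&amp;gt;&amp;gt; edges;         // connect LCA(prev, cur) to cur
    for (std::size_t i = 1; i &amp;lt; key.size(); i++)
        edges.push_back({lca(key[i - 1], key[i]), key[i]});
    return edges;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For the key set $\{4,5,3\}$ this yields the edges $(1,2)$, $(2,4)$, $(2,5)$, $(1,3)$: exactly $m-1$ edges over the key nodes and their pairwise LCAs.&lt;/p&gt;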
&lt;h2&gt;Monotonic Stack&lt;/h2&gt;
&lt;p&gt;This algorithm maintains the rightmost chain of the virtual tree on a stack.&lt;/p&gt;
&lt;p&gt;Algorithm Flow:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Push node $1$ (the root) onto the stack.&lt;/li&gt;
&lt;li&gt;Sort the key nodes by DFS order.&lt;/li&gt;
&lt;li&gt;For each key node, compute the LCA of the stack top and the current node.&lt;/li&gt;
&lt;li&gt;While the DFS order of the node below the top is greater than that of the LCA, add an edge from that lower node to the old top and pop.&lt;/li&gt;
&lt;li&gt;If &lt;code&gt;dfn[sta[top-1]] == dfn[LCA(top, now)]&lt;/code&gt;, the LCA is already on the stack: add the edge and pop the top. Otherwise, add an edge from the LCA to the old top and replace the top with the LCA.&lt;/li&gt;
&lt;li&gt;Push the current node and repeat the process for the next key node.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;bool cmp(const int x, const int y) { return id[x] &amp;lt; id[y]; }

void build() {
    sort(h + 1, h + k + 1, cmp);
    sta[top = 1] = 1, g.sz = 0, g.head[1] = -1;
    // Push node 1 onto the stack, clear the adjacency list corresponding to node 1, and set the number of edges in the adjacency list to 0
    for (int i = 1, l; i &amp;lt;= k; ++i) {
        if (h[i] != 1) {
            // If node 1 is a key node, do not add it again
            l = lca(h[i], sta[top]);
            // Calculate the LCA between the current node and the top node of the stack
            if (l != sta[top]) {
                // If the LCA is different from the top element of the stack, it means that the current node is not on the chain stored in the current stack
                while (id[l] &amp;lt; id[sta[top - 1]]) {
                    // While the DFS order of the node below the top is greater than the DFS order of the LCA
                    g.push(sta[top - 1], sta[top]), top--;
                    // Connect the chains that do not overlap with the chain containing the current node and pop them
                }
                if (id[l] &amp;gt; id[sta[top - 1]]) {
                    // If the LCA is not the node below the top (&quot;greater than&quot; and &quot;not equal&quot; are equivalent here)
                    g.head[l] = -1, g.push(l, sta[top]), sta[top] = l;
                    // The LCA appears for the first time: clear its adjacency list, add the edge, and replace the stack top with the LCA
                } else {
                    g.push(l, sta[top--]);
                    // The LCA is the node below the top, so just add the edge and pop the top
                }
            }
            g.head[h[i]] = -1, sta[++top] = h[i];
            // The current node is pushed for the first time: clear its adjacency list and push it onto the stack
        }
    }
    for (int i = 1; i &amp;lt; top; ++i) {
        g.push(sta[i], sta[i + 1]); // Connect the last remaining chain
    }
    return ;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Summarize&lt;/h1&gt;
&lt;p&gt;Tricky. It can also be regarded as a structured form of brute force.&lt;/p&gt;
</content:encoded></item><item><title>Solution Report of Construct Topic</title><link>https://blog.517group.cn/posts/20260122/</link><guid isPermaLink="true">https://blog.517group.cn/posts/20260122/</guid><description>Trick problem</description><pubDate>Thu, 22 Jan 2026 15:54:20 GMT</pubDate><content:encoded>&lt;h1&gt;Introduction&lt;/h1&gt;
&lt;p&gt;This is the problem list from the 01/22/2026 class on construction problems, a tricky part of OI. Now let&apos;s look at some.&lt;/p&gt;
&lt;h1&gt;A - QOJ-4913 子集匹配&lt;/h1&gt;
&lt;p&gt;&lt;a href=&quot;https://vjudge.net/problem/QOJ-4913/origin&quot;&gt;Problem Statement&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Restated problem statement:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;$L$: all subsets with exactly $K$ ones.&lt;br /&gt;
$R$: all subsets with exactly $K-1$ ones.&lt;br /&gt;
An edge from $S$ to $T$ means that changing one 1-bit of $S$ to 0 turns $S$ into $T$.&lt;br /&gt;
The problem requires that different $S$ never map to the same $T$.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Construct Idea:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Treat each 1 as $+1$ and each 0 as $-1$.&lt;br /&gt;
Find the position $p$ where the prefix sum reaches its maximum.&lt;br /&gt;
Flip the bit at position $p+1$.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Proof: why is this an injection?&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Assume there is an $S&apos;\neq S$ mapped to the same $T$.&lt;br /&gt;
Then $T$ has two positions $i,j$ where $S$ or $S&apos;$ has a $1$ but $T$ has a $0$, so $T$ would have only $K-2$ ones, contradicting the definition of $T$. Hence the mapping is an injection.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;A good problem; it took me a day to understand.&lt;/p&gt;
&lt;h1&gt;B - Adjacent Difference&lt;/h1&gt;
&lt;p&gt;&lt;a href=&quot;https://atcoder.jp/contests/agc066/tasks/agc066_a?lang=en&quot;&gt;Problem Statement&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;For this kind of construction problem, we can try to solve it using some special subtask.&lt;/p&gt;
&lt;p&gt;If the matrix contains only 0/1 and $d$ equals $1$, how do we modify it? It is not difficult to see that the answer must take one of the forms below:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;010  |  101
101  |  010
010  |  101
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We can compute the answer for each of the two patterns and take the minimum. Then we try to extend this special solution to the whole problem: enumerate a $k$ and define odd and even cells by the rules below:&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\operatorname{odd} &amp;amp;: a \equiv k \pmod{2d}\\
\operatorname{even} &amp;amp;: a \equiv k+d \pmod{2d}
\end{aligned}
$$&lt;/p&gt;
&lt;p&gt;Using the solution above, the time complexity is $O(dn^2)$.&lt;/p&gt;
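&lt;p&gt;The 0/1, $d=1$ subtask above can be sketched directly: count the cells that disagree with one checkerboard pattern and take the minimum over both patterns (a minimal sketch under that reading of the subtask):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#include &amp;lt;algorithm&amp;gt;
#include &amp;lt;cassert&amp;gt;
#include &amp;lt;vector&amp;gt;

// For a 0/1 matrix with d = 1 every valid result is one of the two
// checkerboard patterns, so the minimum number of changed cells is the
// smaller of the two disagreement counts.
int minChanges(const std::vector&amp;lt;std::vector&amp;lt;int&amp;gt;&amp;gt;&amp;amp; a) {
    int n = a.size(), m = a[0].size(), cost = 0;
    for (int i = 0; i &amp;lt; n; i++)
        for (int j = 0; j &amp;lt; m; j++)
            cost += (a[i][j] != (i + j) % 2);  // disagreement with pattern 0101...
    return std::min(cost, n * m - cost);       // the other pattern flips every cell
}
&lt;/code&gt;&lt;/pre&gt;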
&lt;h1&gt;C - Make SYSU Great Again II&lt;/h1&gt;
&lt;p&gt;&lt;a href=&quot;https://qoj.ac/problem/7629&quot;&gt;Problem Statement&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;We call a cell:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;a &lt;strong&gt;Black Cell&lt;/strong&gt; if $(i+j)\bmod 2 = 0$;&lt;/li&gt;
&lt;li&gt;a &lt;strong&gt;White Cell&lt;/strong&gt; if $(i+j)\bmod 2 = 1$.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So we need to guarantee that the bitwise AND of every pair of adjacent Black and White cells equals 0. Then we can start constructing.&lt;/p&gt;
&lt;p&gt;First we make the following guess:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Assign a unique number to each Black Cell.&lt;/li&gt;
&lt;li&gt;Then put an available number in each White Cell.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;We need a way to calculate the number in a cell quickly.&lt;/p&gt;
&lt;p&gt;For a Black Cell:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;high bits = gray( (i + j) / 2 )
low  bits = gray( (i - j + (n-1)) / 2 )

value = (high &amp;lt;&amp;lt; K) | low
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can prove that different cells $(i,j)$ map to different Gray-code numbers.&lt;/p&gt;
&lt;p&gt;The four neighbors of a cell can produce at most 4 identical numbers, and adding one more from the black cell gives at most 5 identical numbers, which satisfies the problem requirement.&lt;/p&gt;
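&lt;p&gt;The pseudocode above can be transcribed literally (a sketch: &lt;code&gt;gray(x) = x ^ (x &amp;gt;&amp;gt; 1)&lt;/code&gt; is the standard binary-reflected Gray code, and &lt;code&gt;K&lt;/code&gt;, the bit width of the low part, is an assumed parameter):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#include &amp;lt;cassert&amp;gt;

// Standard Gray code: consecutive values differ in exactly one bit.
unsigned gray(unsigned x) { return x ^ (x &amp;gt;&amp;gt; 1); }

// Literal transcription of the black-cell formula; n is the grid side.
unsigned cellValue(int i, int j, int n, int K) {
    unsigned high = gray((i + j) / 2);
    unsigned low  = gray((i - j + (n - 1)) / 2);
    return (high &amp;lt;&amp;lt; K) | low;
}
&lt;/code&gt;&lt;/pre&gt;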
&lt;h1&gt;D - Tournament Construction&lt;/h1&gt;
&lt;p&gt;&lt;a href=&quot;https://codeforces.com/problemset/problem/850/D&quot;&gt;Problem Statement&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;According to Landau&apos;s theorem, we sort the original sequence $a$ and define $f(i,j,y)$ as representing a graph over the first $i$ values with $j$ vertices and $y$ edges.&lt;/p&gt;
&lt;p&gt;According to the theorem, $y \ge \frac{j(j-1)}{2}$ always holds.&lt;/p&gt;
&lt;p&gt;We enumerate $i,j,y$ and the state at $i-1$. If $f(i-1,k,x)$ is feasible, then $f(i,j,y)$ is also feasible.&lt;/p&gt;
&lt;p&gt;We also record $j-k$ at this point.&lt;/p&gt;
&lt;p&gt;This allows us to construct the original out-degree sequence.&lt;/p&gt;
&lt;p&gt;Let the out-degree sequence established in step 1 be $d_i$, and the one established in step 2 be $u_i$. First, assume that for all $i&amp;gt;j$ the edges are directed $i\rightarrow j$. Then $u_i = i-1$.&lt;/p&gt;
&lt;p&gt;Each time, find a triple $(i,j,k)$ such that $u_i &amp;gt; d_i, u_j = d_j, u_k &amp;lt; d_k$, and there exist edges $i\rightarrow j$ and $j\rightarrow k$.&lt;/p&gt;
&lt;p&gt;In this way, we can reverse these two edges, achieving the effect $u_i \leftarrow u_i - 1$, $u_j \leftarrow u_j$, $u_k \leftarrow u_k + 1$.&lt;/p&gt;
&lt;p&gt;By continuously repeating the above steps, $u$ gradually approaches $d$, eventually becoming exactly the same.&lt;/p&gt;
&lt;h1&gt;E - LGP10441 [JOIST 2024] 乒乓球 / Table Tennis&lt;/h1&gt;
&lt;p&gt;&lt;a href=&quot;https://www.luogu.com.cn/problem/P10441&quot;&gt;Problem Statememt&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;We have a classic conclusion: the structure of a tournament depends only on its in-degree array, and the number of 3-cycles increases by one when we change a pair $(x,x+2)$ to $(x+1,x+1)$.&lt;/p&gt;
&lt;p&gt;So we can use this to construct it.&lt;/p&gt;
&lt;p&gt;Find the smallest $n_0$ whose maximum number of 3-cycles is at least $m$, then adjust the 3-cycle count on it.&lt;/p&gt;
</content:encoded></item><item><title>Fast Mobius Transform and Fast Walsh-Hadamard Transform</title><link>https://blog.517group.cn/posts/202601162039/</link><guid isPermaLink="true">https://blog.517group.cn/posts/202601162039/</guid><description>A basic transform to calculate convolution</description><pubDate>Thu, 22 Jan 2026 10:39:18 GMT</pubDate><content:encoded>&lt;h1&gt;Introduction&lt;/h1&gt;
&lt;p&gt;These two transforms have a lot in common, so I want to introduce them together.&lt;/p&gt;
&lt;p&gt;First of all, what kind of problem do these algorithms solve? They are used to evaluate formulas of the following form:&lt;/p&gt;
&lt;p&gt;$$
c_k = \sum_{i\oplus j = k} a_i\times b_j
$$&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;$\oplus$ can be any binary bitwise operations such as &lt;code&gt;or&lt;/code&gt;, &lt;code&gt;and&lt;/code&gt;, &lt;code&gt;xor&lt;/code&gt;, etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;FMT (Fast Mobius Transform) handles the formula above when the operation is &lt;code&gt;or&lt;/code&gt; or &lt;code&gt;and&lt;/code&gt;; FWT (Fast Walsh-Hadamard Transform) handles &lt;code&gt;xor&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;These two algorithms are so similar that some blogs or solutions claim they are the same algorithm, but please remember they are not.&lt;/p&gt;
&lt;h1&gt;FMT&lt;/h1&gt;
&lt;p&gt;Let&apos;s start with the &lt;code&gt;or&lt;/code&gt; operation.&lt;/p&gt;
&lt;p&gt;The algorithm flow is:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Find a transform that maps the arrays $a, b$ to transformed arrays $A, B$;&lt;/li&gt;
&lt;li&gt;Define $C$ such that $C_i = A_i\times B_i$;&lt;/li&gt;
&lt;li&gt;Use the inverse transform to get $c$ from $C$.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Or Operation&lt;/h2&gt;
&lt;p&gt;Now the formula is:
$$
c_k = \sum_{i\lor j = k} a_i\times b_j
$$&lt;/p&gt;
&lt;p&gt;We need to construct a suitable transform following the algorithm flow. Let
$$
A_i = \sum_{i=i\cup j} a_j
$$&lt;/p&gt;
&lt;p&gt;And we can verify it:&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
A_i\times B_i
&amp;amp;= \left(\sum_{i\cup j=i}a_j\right)\left(\sum_{i\cup k=i}b_k\right)\\
&amp;amp;= \sum_{i\cup(j\cup k) = i}a_jb_k\\
&amp;amp;= C_i
\end{aligned}
$$&lt;/p&gt;
&lt;p&gt;This form admits an inverse transform to recover $c$ from $C$.&lt;/p&gt;
&lt;p&gt;Now let&apos;s find a fast way to compute this transform. We know that $i=i\cup j$ means enumerating all subsets $j$ of $i$, which takes $O(3^n)$ time in total: too slow.&lt;/p&gt;
&lt;p&gt;Maybe we can focus on the indices instead:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;origin index&lt;/th&gt;
&lt;th&gt;0&lt;/th&gt;
&lt;th&gt;1&lt;/th&gt;
&lt;th&gt;2&lt;/th&gt;
&lt;th&gt;3&lt;/th&gt;
&lt;th&gt;4&lt;/th&gt;
&lt;th&gt;5&lt;/th&gt;
&lt;th&gt;6&lt;/th&gt;
&lt;th&gt;7&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;binary form&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;000&lt;/td&gt;
&lt;td&gt;001&lt;/td&gt;
&lt;td&gt;010&lt;/td&gt;
&lt;td&gt;011&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;101&lt;/td&gt;
&lt;td&gt;110&lt;/td&gt;
&lt;td&gt;111&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;There is an obvious pattern: 0 and 4, 1 and 5, 2 and 6, etc. have the same last two bits, and the smaller index is a subset of the larger. This pattern lets us compute the transform quickly, layer by layer.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;auto FMTor(const std::vector&amp;lt;ll&amp;gt;&amp;amp; a, int flag) -&amp;gt; std::vector&amp;lt;ll&amp;gt; {
    auto trA{a};
    for (int o{2}, k{1}; o &amp;lt;= n; o &amp;lt;&amp;lt;= 1, k &amp;lt;&amp;lt;= 1) {
        for (int i{0}; i &amp;lt; n; i += o) {
            for (int j{0}; j &amp;lt; k; j++) {
                trA[i+j+k] = (trA[i+j+k] + trA[i+j] * flag % MOD + MOD) % MOD;
            }
        }
    }
    return trA;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This code reduces the transform above to $O(n\times 2^n)$ time. Since the array length is $N=2^n$, we may also regard the algorithm as $O(N\log N)$.&lt;/p&gt;
&lt;h2&gt;And Operation&lt;/h2&gt;
&lt;p&gt;The algorithm flow is the same.&lt;/p&gt;
&lt;p&gt;We are trying to find a transform like the one for the &lt;code&gt;or&lt;/code&gt; operation.&lt;/p&gt;
&lt;p&gt;Let $A$ be the array after the transform; we have:
$$
A_i = \sum_{i=i\cap j} a_j
$$&lt;/p&gt;
&lt;p&gt;That means $j$ is a superset of $i$, so everything works as before, except that contributions flow from supersets to subsets.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;auto FMTand(const std::vector&amp;lt;ll&amp;gt;&amp;amp; a, int flag) -&amp;gt; std::vector&amp;lt;ll&amp;gt; {
    auto trA{a};
    for (int o{2}, k{1}; o &amp;lt;= n; o &amp;lt;&amp;lt;= 1, k &amp;lt;&amp;lt;= 1) {
        for (int i{0}; i &amp;lt; n; i += o) {
            for (int j{0}; j &amp;lt; k; j++) {
                trA[i+j] = (trA[i+j] + trA[i+j+k] * flag % MOD + MOD) % MOD;
            }
        }
    }
    return trA;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Full Template&lt;/h2&gt;
&lt;p&gt;Really not difficult.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/**
 * @file    : FastMobiusTransform.cpp 
 * @date    : 2026-01-15
 * @brief   : LuoguP4717
 */

#include &amp;lt;iostream&amp;gt;
#include &amp;lt;vector&amp;gt;

class FastMobiusTransform {
private:
    using ll = long long;

    const int MOD;
    int n;
    std::vector&amp;lt;ll&amp;gt; a, b;
public:
    explicit FastMobiusTransform(int n, int MOD) : MOD{MOD}, n{n}, a(n), b(n) {}
    FastMobiusTransform(int MOD, const std::vector&amp;lt;ll&amp;gt; a, const std::vector&amp;lt;ll&amp;gt; b)
        : MOD{MOD}, n(a.size()), a{a}, b{b} {}

    void input() {
        for (int i{0}; i &amp;lt; n; i++) {
            std::cin &amp;gt;&amp;gt; a[i];
        }
        for (int i{0}; i &amp;lt; n; i++) {
            std::cin &amp;gt;&amp;gt; b[i];
        }
    }

    auto FMTor(const std::vector&amp;lt;ll&amp;gt;&amp;amp; a, int flag) -&amp;gt; std::vector&amp;lt;ll&amp;gt; {
        auto trA{a};
        for (int o{2}, k{1}; o &amp;lt;= n; o &amp;lt;&amp;lt;= 1, k &amp;lt;&amp;lt;= 1) {
            for (int i{0}; i &amp;lt; n; i += o) {
                for (int j{0}; j &amp;lt; k; j++) {
                    trA[i+j+k] = (trA[i+j+k] + trA[i+j] * flag % MOD + MOD) % MOD;
                }
            }
        }
        return trA;
    }
    auto FMTand(const std::vector&amp;lt;ll&amp;gt;&amp;amp; a, int flag) -&amp;gt; std::vector&amp;lt;ll&amp;gt; {
        auto trA{a};
        for (int o{2}, k{1}; o &amp;lt;= n; o &amp;lt;&amp;lt;= 1, k &amp;lt;&amp;lt;= 1) {
            for (int i{0}; i &amp;lt; n; i += o) {
                for (int j{0}; j &amp;lt; k; j++) {
                    trA[i+j] = (trA[i+j] + trA[i+j+k] * flag % MOD + MOD) % MOD;
                }
            }
        }
        return trA;
    }

    auto transformOr() -&amp;gt; std::vector&amp;lt;ll&amp;gt; {
        auto trA = FMTor(a, 1);
        auto trB = FMTor(b, 1);
        std::vector&amp;lt;ll&amp;gt; trC(n);
        for (int i{0}; i &amp;lt; n; i++) {
            trC[i] = trA[i] * trB[i] % MOD;
        }
        return FMTor(trC, -1);
    }
    auto transformAnd() -&amp;gt; std::vector&amp;lt;ll&amp;gt; {
        auto trA = FMTand(a, 1);
        auto trB = FMTand(b, 1);
        std::vector&amp;lt;ll&amp;gt; trC(n);
        for (int i{0}; i &amp;lt; n; i++) {
            trC[i] = trA[i] * trB[i] % MOD;
        }
        return FMTand(trC, -1);
    }
};

auto main() -&amp;gt; int {
    int n, MOD;
    std::cin &amp;gt;&amp;gt; n &amp;gt;&amp;gt; MOD;
    FastMobiusTransform fmt(1&amp;lt;&amp;lt;n, MOD);
    fmt.input();

    auto ans_or{fmt.transformOr()};
    for (auto&amp;amp; p : ans_or) {
        std::cout &amp;lt;&amp;lt; p &amp;lt;&amp;lt; &apos; &apos;;
    }
    std::cout &amp;lt;&amp;lt; &quot;\n&quot;;
    auto ans_and{fmt.transformAnd()};
    for (auto&amp;amp; p : ans_and) {
        std::cout &amp;lt;&amp;lt; p &amp;lt;&amp;lt; &apos; &apos;;
    }
    std::cout &amp;lt;&amp;lt; &quot;\n&quot;;
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;FWT&lt;/h1&gt;
&lt;p&gt;In this part we introduce FWT, an algorithm for the formula below:
$$
c_k = \sum_{i\oplus j = k} a_i\times b_j
$$&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;$\oplus$ indicates XOR operator.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let $A$ be the transform of $a$ such that:
$$
A_i = \sum_{i\circ j = 0} a_j - \sum_{i\circ j = 1} a_j
$$&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In the formula, $i\circ j$ indicates $\operatorname{popcount}(i\cap j)\bmod 2$&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And we can check its correctness:&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
A_i\times B_i
&amp;amp;= \left(\sum_{i\circ j = 0} a_j - \sum_{i\circ j = 1} a_j\right)\times \left(\sum_{i\circ k = 0} b_k - \sum_{i\circ k = 1} b_k\right)\\
&amp;amp;= \left(\sum_{i\circ j = 0} a_j\sum_{i\circ k = 0} b_k + \sum_{i\circ j = 1}a_j\sum_{i\circ k = 1} b_k\right) - \left(\sum_{i\circ j = 0} a_j\sum_{i\circ k = 1} b_k + \sum_{i\circ j = 1}a_j\sum_{i\circ k = 0} b_k\right)\\
&amp;amp;= \sum_{(j\oplus k)\circ i = 0} a_jb_k - \sum_{(j\oplus k)\circ i = 1} a_jb_k\\
&amp;amp;= C_i
\end{aligned}
$$&lt;/p&gt;
&lt;p&gt;The step from the second line to the third uses the identity $(i\circ j)\oplus(i\circ k) = i\circ(j\oplus k)$: the four double sums regroup according to the parity of $(j\oplus k)\circ i$.&lt;/p&gt;
&lt;p&gt;How do we calculate it? Still by divide and conquer; we know that:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;$\circ$ operator&lt;/th&gt;
&lt;th&gt;0&lt;/th&gt;
&lt;th&gt;1&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;So that:
$$
{A&apos;}_0 = A_0+A_1\\
{A&apos;}_1 = A_0-A_1
$$&lt;/p&gt;
&lt;p&gt;We can also get the inverse transform:
$$
A_0 = \frac{A&apos;_0+A&apos;_1}{2}\\
A_1 = \frac{A&apos;_0-A&apos;_1}{2}
$$&lt;/p&gt;
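&lt;p&gt;As a quick sanity check of this transform pair, take the length-2 array $a=(1,2)$:&lt;/p&gt;
&lt;p&gt;$$
A = (1+2,\ 1-2) = (3,-1),\qquad
\left(\frac{3+(-1)}{2},\ \frac{3-(-1)}{2}\right) = (1,2),
$$&lt;/p&gt;
&lt;p&gt;so the inverse transform recovers $a$ exactly.&lt;/p&gt;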
&lt;p&gt;The code is also easy.&lt;/p&gt;
&lt;h2&gt;Full Template&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;/**
 * @file    : FastWalshTransform.cpp 
 * @date    : 2026-01-15
 * @brief   : LuoguP4717
 */

#include &amp;lt;iostream&amp;gt;
#include &amp;lt;vector&amp;gt;

class FastWalshTransform {
private:
    using ll = long long;

    const int MOD;
    int n;
    std::vector&amp;lt;ll&amp;gt; a, b;

    // Computes 2^x mod MOD; pow2(MOD - 2) is the modular inverse of 2 by Fermat&apos;s little theorem
    auto pow2(int x) {
        ll ans = 1, a = 2;
        while (x) {
            if (x &amp;amp; 1) ans = ans * a % MOD;
            a = a * a % MOD;
            x &amp;gt;&amp;gt;= 1;
        }
        return ans;
    }
public:
    explicit FastWalshTransform(int n, int MOD) : MOD{MOD}, n{n}, a(n), b(n) {}
    FastWalshTransform(int MOD, const std::vector&amp;lt;ll&amp;gt; a, const std::vector&amp;lt;ll&amp;gt; b)
        : MOD{MOD}, n(a.size()), a{a}, b{b} {}

    void input() {
        for (int i{0}; i &amp;lt; n; i++) {
            std::cin &amp;gt;&amp;gt; a[i];
        }
        for (int i{0}; i &amp;lt; n; i++) {
            std::cin &amp;gt;&amp;gt; b[i];
        }
    }

    auto fwtXor(const std::vector&amp;lt;ll&amp;gt;&amp;amp; a, int flag) -&amp;gt; std::vector&amp;lt;ll&amp;gt; {
        auto trA{a};
        for (int o{2}, k{1}; o &amp;lt;= n; o &amp;lt;&amp;lt;= 1, k &amp;lt;&amp;lt;= 1) {
            for (int i{0}; i &amp;lt; n; i += o) {
                for (int j{0}; j &amp;lt; k; j++) {
                    auto u{trA[i+j]}, v{trA[i+j+k]};
                    trA[i+j] = (flag * (trA[i+j] + v) % MOD + MOD) % MOD;
                    trA[i+j+k] = (flag * (u - trA[i+j+k]) % MOD + MOD) % MOD;
                }
            }
        }
        return trA;
    }

    auto transform() -&amp;gt; std::vector&amp;lt;ll&amp;gt; {
        auto trA{fwtXor(a, 1)};
        auto trB{fwtXor(b, 1)};
        std::vector&amp;lt;ll&amp;gt; trC(n);
        for (int i{0}; i &amp;lt; n; i++) {
            trC[i] = trA[i] * trB[i] % MOD;
        }
        return fwtXor(trC, pow2(MOD-2));
    }
};

auto main() -&amp;gt; int {
    int n, MOD;
    std::cin &amp;gt;&amp;gt; n &amp;gt;&amp;gt; MOD;
    FastWalshTransform fwt(1&amp;lt;&amp;lt;n, MOD);
    fwt.input();
    auto ans{fwt.transform()};
    for (auto&amp;amp; p : ans) {
        std::cout &amp;lt;&amp;lt; p &amp;lt;&amp;lt; &apos; &apos;;
    }
    std::cout &amp;lt;&amp;lt; &quot;\n&quot;;
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Summarize&lt;/h1&gt;
&lt;p&gt;That&apos;s all. Actually this knowledge is not that useful in OI, but we still need to learn it...&lt;/p&gt;
</content:encoded></item><item><title>Solution Report of Probability and Expectation Topic</title><link>https://blog.517group.cn/posts/202601040833/</link><guid isPermaLink="true">https://blog.517group.cn/posts/202601040833/</guid><description>Solution of some simple problem</description><pubDate>Sun, 04 Jan 2026 08:33:34 GMT</pubDate><content:encoded>&lt;h1&gt;A - Luogu P2719 搞笑世界杯&lt;/h1&gt;
&lt;p&gt;&lt;a href=&quot;https://www.luogu.com.cn/problem/P2719&quot;&gt;Problem Statement&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This is a straightforward problem. We can solve it easily using DP.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;DP State:&lt;/strong&gt;&lt;br /&gt;
Let $dp[i][j]$ denote the probability that A and B end up with the same type of ticket when there are $i$ A-type tickets and $j$ B-type tickets remaining.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Answer:&lt;/strong&gt;&lt;br /&gt;
Obviously, the final answer is $dp[n][n]$.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Initialization:&lt;/strong&gt;&lt;br /&gt;
When only one type of ticket remains, A and B will get the same type for sure, except for the case where only one ticket is left.&lt;br /&gt;
So we have: $dp[i][0] = dp[0][i] = 1$ for $i \ge 2$.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;State Transition:&lt;/strong&gt;&lt;br /&gt;
Each ticket is sold based on a fair coin flip, so each choice has a probability of 50%. Therefore,
$$
dp[i][j] = \frac{dp[i-1][j] + dp[i][j-1]}{2}
$$&lt;/p&gt;
&lt;p&gt;I won&apos;t include the code here since the implementation is straightforward.&lt;/p&gt;
&lt;h1&gt;B - Luogu P8804 [蓝桥杯 2022 国 B] 故障&lt;/h1&gt;
&lt;p&gt;&lt;a href=&quot;https://www.luogu.com.cn/problem/P8804&quot;&gt;Problem Statement&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The basic application of Bayes&apos; formula.&lt;/p&gt;
&lt;p&gt;The original statement is long and packed with information, which makes it hard to follow at first glance.&lt;br /&gt;
So the first step is to rewrite it in a more formal and structured way.&lt;/p&gt;
&lt;p&gt;Let $P(A)$ denote the probability that event $A$ occurs, and $P(A \mid B)$ denote the probability that $A$ occurs given that $B$ has occurred.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;$S_i$ denotes the $i$-th fault &lt;strong&gt;Symptom&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;$C_i$ denotes the $i$-th fault &lt;strong&gt;Cause&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Since both terms come with the prefix “fault” and can be confusing, we will simply use the English words &lt;strong&gt;Symptom&lt;/strong&gt; and &lt;strong&gt;Cause&lt;/strong&gt; in the following discussion.&lt;/p&gt;
&lt;p&gt;The problem provides:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The prior probability of each &lt;strong&gt;Cause&lt;/strong&gt; $i$, namely $P(C_i)$&lt;/li&gt;
&lt;li&gt;The conditional probability $P(S_j \mid C_i)$, meaning that &lt;strong&gt;Symptom&lt;/strong&gt; $j$ occurs given &lt;strong&gt;Cause&lt;/strong&gt; $i$&lt;/li&gt;
&lt;li&gt;A set $S$ of &lt;strong&gt;Symptoms&lt;/strong&gt; that have already been observed&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Our task is to compute the probability of each &lt;strong&gt;Cause&lt;/strong&gt; occurring and then sort them accordingly.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important assumptions:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The system can have &lt;strong&gt;only one&lt;/strong&gt; active &lt;strong&gt;Cause&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Given a &lt;strong&gt;Cause&lt;/strong&gt;, all &lt;strong&gt;Symptoms&lt;/strong&gt; occur independently&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Based on the conditions above, this is a standard application of &lt;strong&gt;Bayes&apos; theorem&lt;/strong&gt;.&lt;br /&gt;
The probability that &lt;strong&gt;Cause&lt;/strong&gt; $i$ is responsible for the observed symptoms can be written as:&lt;/p&gt;
&lt;p&gt;$$
P(C_i \mid S) = \frac{P(S \mid C_i) \cdot P(C_i)}{P(S)}
$$&lt;/p&gt;
&lt;p&gt;For clarity, we summarize the notation again:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;$C_i$: the $i$-th &lt;strong&gt;Cause&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;$S_j$: the $j$-th &lt;strong&gt;Symptom&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;$S$: the set of observed &lt;strong&gt;Symptoms&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Thus, the problem reduces to efficiently computing $P(S \mid C_i)$ and $P(S)$.&lt;br /&gt;
The prior probability $P(C_i)$ is already given in the input.&lt;/p&gt;
&lt;p&gt;Since all &lt;strong&gt;Symptoms&lt;/strong&gt; are independent given a &lt;strong&gt;Cause&lt;/strong&gt;, we have:&lt;/p&gt;
&lt;p&gt;$$
P(S \mid C_i)
= \prod_{j \in S} P(S_j \mid C_i)
\times
\prod_{j \notin S} \left(1 - P(S_j \mid C_i)\right)
$$&lt;/p&gt;
&lt;p&gt;Note that $S$ represents a &lt;em&gt;specific combination&lt;/em&gt; of symptoms.&lt;br /&gt;
Therefore, we must consider not only that all symptoms in $S$ have occurred, but also that all symptoms &lt;strong&gt;not&lt;/strong&gt; in $S$ have &lt;em&gt;not&lt;/em&gt; occurred.&lt;/p&gt;
&lt;p&gt;Once $P(S \mid C_i)$ is known, computing $P(S)$ becomes straightforward.&lt;br /&gt;
Since the system can have only one active &lt;strong&gt;Cause&lt;/strong&gt;, we can treat this as a weighted sum:&lt;/p&gt;
&lt;p&gt;$$
P(S) = \sum_{i=1}^{n} P(S \mid C_i) \cdot P(C_i)
$$&lt;/p&gt;
&lt;p&gt;With these probabilities computed, we can obtain $P(C_i \mid S)$ for each &lt;strong&gt;Cause&lt;/strong&gt; and sort them as required.&lt;/p&gt;
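&lt;p&gt;The two formulas above combine into a short routine (a sketch with illustrative names: &lt;code&gt;prior[i]&lt;/code&gt; is $P(C_i)$, &lt;code&gt;cond[i][j]&lt;/code&gt; is $P(S_j \mid C_i)$, and &lt;code&gt;observed[j]&lt;/code&gt; marks $j \in S$):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#include &amp;lt;cassert&amp;gt;
#include &amp;lt;cmath&amp;gt;
#include &amp;lt;vector&amp;gt;

// Posterior P(C_i | S) via Bayes&apos; theorem, assuming exactly one active
// cause and conditionally independent symptoms.
std::vector&amp;lt;double&amp;gt; posterior(const std::vector&amp;lt;double&amp;gt;&amp;amp; prior,
                              const std::vector&amp;lt;std::vector&amp;lt;double&amp;gt;&amp;gt;&amp;amp; cond,
                              const std::vector&amp;lt;bool&amp;gt;&amp;amp; observed) {
    int n = prior.size(), m = observed.size();
    std::vector&amp;lt;double&amp;gt; like(n, 1.0);            // P(S | C_i)
    for (int i = 0; i &amp;lt; n; i++)
        for (int j = 0; j &amp;lt; m; j++)
            like[i] *= observed[j] ? cond[i][j] : 1.0 - cond[i][j];
    double pS = 0;                               // P(S) = sum_i P(S | C_i) P(C_i)
    for (int i = 0; i &amp;lt; n; i++) pS += like[i] * prior[i];
    std::vector&amp;lt;double&amp;gt; post(n);
    for (int i = 0; i &amp;lt; n; i++) post[i] = like[i] * prior[i] / pS;
    return post;
}
&lt;/code&gt;&lt;/pre&gt;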
&lt;h1&gt;D - Luogu P1297 [国家集训队] 单选错位&lt;/h1&gt;
&lt;p&gt;&lt;a href=&quot;https://www.luogu.com.cn/problem/P1297&quot;&gt;Problem Statement&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Let&apos;s analyze this case by case.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Case 1 $a_i = a_{i+1}$: In this case, it&apos;s clear that the answer for question $i+1$ is also random. The expected value is: $\frac{1}{a_i} = \frac{1}{a_{i+1}}$&lt;/li&gt;
&lt;li&gt;Case 2 $a_i &amp;gt; a_{i+1}$: only $\frac{a_{i+1}}{a_i}$ of the possible answers for question $i$ fall within the range $1 \sim a_{i+1}$. Thus, the expected value is: $\frac{a_{i+1}}{a_i} \cdot \frac{1}{a_{i+1}} = \frac{1}{a_i}$&lt;/li&gt;
&lt;li&gt;Case 3 $a_i &amp;lt; a_{i+1}$: the random answer for question $i$ is only within $1 \sim a_i$, and the probability that the correct answer for question $i+1$ falls within this range is $\frac{a_i}{a_{i+1}}$. So the expected value becomes: $\frac{a_i}{a_{i+1}} \cdot \frac{1}{a_i} = \frac{1}{a_{i+1}}$&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Combining all cases, the final answer is:
$$
\sum_{i=1}^{n} \frac{1}{\max(a_i, a_{i+1})}
$$&lt;/p&gt;
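&lt;p&gt;The closed form above can be evaluated directly (a sketch: the original problem is cyclic, so we take $a_{n+1}=a_1$):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#include &amp;lt;algorithm&amp;gt;
#include &amp;lt;cassert&amp;gt;
#include &amp;lt;vector&amp;gt;

// Expected number of correct answers: sum over i of 1 / max(a_i, a_{i+1}),
// with the index taken cyclically (a_{n+1} = a_1).
double expectedCorrect(const std::vector&amp;lt;long long&amp;gt;&amp;amp; a) {
    int n = a.size();
    double ans = 0;
    for (int i = 0; i &amp;lt; n; i++)
        ans += 1.0 / std::max(a[i], a[(i + 1) % n]);
    return ans;
}
&lt;/code&gt;&lt;/pre&gt;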
&lt;h1&gt;E - Luogu P1850 [NOIP 2016 提高组] 换教室&lt;/h1&gt;
&lt;p&gt;DP State: $dp[i][j][k]$ denotes the minimum expected distance over the prefix of classes $[1, i]$, having used $j$ switch requests, where $k$ records whether we request a switch at class $i$ (1 for yes, 0 for no).&lt;/p&gt;
&lt;p&gt;Transition:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;int C1 = c[i - 1][0];
int C2 = c[i - 1][1];
int C3 = c[i][0];
int C4 = c[i][1];

dp[i][j][0] = std::min(
    dp[i][j][0],
    std::min(
        dp[i - 1][j][0] + mp[C1][C3], // not change anymore
        dp[i - 1][j][1]               // change on i-1 but i not
            + mp[C1][C3] * (1 - k[i - 1])
            + mp[C2][C3] * k[i - 1]
    )
);

dp[i][j][1] = std::min(
    dp[i][j][1],
    std::min(
        dp[i - 1][j - 1][0]           // change on i but i-1 not
            + mp[C1][C3] * (1 - k[i])
            + mp[C1][C4] * k[i],
        dp[i - 1][j - 1][1]           // change both i-1 and i
            + mp[C2][C4] * k[i] * k[i - 1]
            + mp[C2][C3] * k[i - 1] * (1 - k[i])
            + mp[C1][C4] * (1 - k[i - 1]) * k[i]
            + mp[C1][C3] * (1 - k[i - 1]) * (1 - k[i])
    )
);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The answer is then obvious.&lt;/p&gt;
&lt;h1&gt;F - Luogu P3750 [六省联考 2017] 分手是祝愿&lt;/h1&gt;
&lt;p&gt;DP State: $dp[i]$ denotes the expected number of operations needed to decrease the number of buttons we still have to press from $i$ to $i-1$.&lt;/p&gt;
&lt;p&gt;Transition:
$$
dp[i] = \frac{i}{n}\times 1 + \frac{n-i}{n} \times (dp[i]+dp[i+1]+1)
$$&lt;/p&gt;
&lt;p&gt;This transition means: with probability $\frac{i}{n}$ we press a right button, and with probability $\frac{n-i}{n}$ we press a wrong button. If we press a wrong button, we must pay $dp[i+1]$ operations to bring the count back down to $i$, and then spend another $dp[i]$ operations.&lt;/p&gt;
&lt;p&gt;Then we simplify the transition equation:
$$
dp[i] = \frac{n+(n-i)\times dp[i+1]}{i}
$$&lt;/p&gt;
&lt;p&gt;Now the answer is obvious.&lt;/p&gt;
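&lt;p&gt;The simplified recurrence can be evaluated from $i=n$ down to $1$ (a sketch using doubles; the actual problem computes these values modulo a prime):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#include &amp;lt;cassert&amp;gt;
#include &amp;lt;vector&amp;gt;

// dp[i] = expected presses to go from i needed buttons down to i-1:
// dp[i] = (n + (n - i) * dp[i+1]) / i, with dp[n] = 1.
std::vector&amp;lt;double&amp;gt; pressExpectation(int n) {
    std::vector&amp;lt;double&amp;gt; dp(n + 2, 0.0);
    for (int i = n; i &amp;gt;= 1; i--)
        dp[i] = (n + (n - i) * dp[i + 1]) / i;
    return dp;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For example, with $n=2$ this gives $dp[2]=1$ and $dp[1]=3$.&lt;/p&gt;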
&lt;h1&gt;G - Luogu P2473 [SCOI2008] 奖励关&lt;/h1&gt;
&lt;p&gt;DP State: $dp[i][S]$ denotes the expected score from round $i$ to round $k$, given that after the first $i-1$ rounds the selection state of the items is $S$.&lt;/p&gt;
&lt;p&gt;Transition:&lt;br /&gt;
For all $i\le j\le n$&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;if $S$ satisfies the state requirement of item $j$, we can decide whether or not to select it;&lt;/li&gt;
&lt;li&gt;otherwise, simply do not select it.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Now the answer is obvious.&lt;/p&gt;
&lt;h1&gt;Summarize&lt;/h1&gt;
&lt;p&gt;Those are all the problems of this topic.&lt;/p&gt;
&lt;p&gt;Most problems about probability or expectation do not require very advanced knowledge, so just approach them like basic DP problems.&lt;/p&gt;
</content:encoded></item><item><title>DP Optimize</title><link>https://blog.517group.cn/posts/202512241656/</link><guid isPermaLink="true">https://blog.517group.cn/posts/202512241656/</guid><description>Common optimization of DP</description><pubDate>Wed, 24 Dec 2025 16:58:01 GMT</pubDate><content:encoded>&lt;h1&gt;Decision Monotonicity&lt;/h1&gt;
&lt;p&gt;Decision Monotonicity is a key concept for optimizing dynamic programming transitions.&lt;/p&gt;
&lt;p&gt;The idea of &lt;em&gt;decision monotonicity&lt;/em&gt; is that the &lt;strong&gt;decision points&lt;/strong&gt; of the DP exhibit a monotonic property. Before introducing this concept, we first define what a &lt;em&gt;decision point&lt;/em&gt; is.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Definition of Decision Point&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;For a fixed index $i$, an index $j$ is called a &lt;strong&gt;decision point&lt;/strong&gt; of $i$ if, for all $j&apos; &amp;lt; i$, $g(j) + w(j, i) \le g(j&apos;) + w(j&apos;, i)$.&lt;/p&gt;
&lt;p&gt;In this case, $i$ is referred to as the &lt;strong&gt;decision-affected point&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Note that there may be multiple decision points corresponding to the same $i$.&lt;/p&gt;
&lt;p&gt;Intuitively, a &lt;strong&gt;decision point&lt;/strong&gt; is an index that attains the optimum in the DP transition for state $i$.&lt;/p&gt;
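&lt;p&gt;As a tiny illustration, decision points can be found by brute force. Here $w(j,i) = (i-j)^2$ is a toy cost of my own choosing, and $g$ is given:&lt;/p&gt;

```cpp
#include <algorithm>
#include <vector>
using namespace std;
typedef long long ll;

// For each i, collect every decision point j < i, i.e. every j minimizing
// g[j] + w(j, i).  The cost w(j, i) = (i - j)^2 is a toy example.
vector<vector<int>> decisionPoints(const vector<ll>& g) {
    int n = g.size();
    auto w = [](int j, int i) -> ll { ll d = i - j; return d * d; };
    vector<vector<int>> dec(n + 1);
    for (int i = 1; i <= n; ++i) {
        ll best = g[0] + w(0, i);
        for (int j = 1; j < i; ++j) best = min(best, g[j] + w(j, i));
        for (int j = 0; j < i; ++j)          // keep all minimizers
            if (g[j] + w(j, i) == best) dec[i].push_back(j);
    }
    return dec;
}
```

&lt;p&gt;For example, with $g = \{0, 3\}$ both $j = 0$ and $j = 1$ attain the minimum for $i = 2$, illustrating that decision points need not be unique.&lt;/p&gt;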
&lt;p&gt;Having defined decision points, we can now introduce &lt;strong&gt;decision monotonicity&lt;/strong&gt;. Its precise definition depends on the specific form of the DP transition equation. In the following sections, we will discuss different cases separately.&lt;/p&gt;
&lt;h1&gt;Prefix Transition&lt;/h1&gt;
&lt;p&gt;The standard form of a &lt;strong&gt;prefix transition&lt;/strong&gt; is&lt;/p&gt;
&lt;p&gt;$$
f[i] = \min_{j&amp;lt;i} \{ g[j] + w(j,i) \}.
$$&lt;/p&gt;
&lt;p&gt;If $f = g$, this transition is called a &lt;strong&gt;self-transition&lt;/strong&gt;.&lt;br /&gt;
Otherwise, it is called a &lt;strong&gt;heterogeneous transition&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;Decision Monotonicity in Prefix Transitions&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Definition (Decision Monotonicity for Prefix Transitions).&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A prefix transition is said to satisfy &lt;strong&gt;decision monotonicity&lt;/strong&gt; if the following conditions hold.&lt;/p&gt;
&lt;p&gt;Let $ i_1 \le i_2 $.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For every &lt;strong&gt;decision point&lt;/strong&gt; $ j_1 $ of $ i_1 $, there exists a &lt;strong&gt;decision point&lt;/strong&gt; $ j_2 $ of $ i_2 $ such that $j_1 \le j_2$.&lt;/li&gt;
&lt;li&gt;For every &lt;strong&gt;decision point&lt;/strong&gt; $ j_2 $ of $ i_2 $, there exists a &lt;strong&gt;decision point&lt;/strong&gt; $ j_1 $ of $ i_1 $ such that $j_1 \le j_2$.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;Intuitively, &lt;strong&gt;decision monotonicity&lt;/strong&gt; in prefix transitions means that the indices of decision points are &lt;strong&gt;non-decreasing&lt;/strong&gt; as the state index $ i $ increases.&lt;/p&gt;
&lt;p&gt;In the following sections, we will discuss how to optimize prefix DP transitions when the transition satisfies &lt;strong&gt;decision monotonicity&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;Divide-and-Conquer Strategy&lt;/h2&gt;
&lt;p&gt;This strategy is applicable only to &lt;strong&gt;prefix heterogeneous transitions&lt;/strong&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Algorithm Overview&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Compute the &lt;strong&gt;decision point&lt;/strong&gt; for the midpoint of the current range.&lt;/li&gt;
&lt;li&gt;By &lt;strong&gt;decision monotonicity&lt;/strong&gt;, the valid range of decision points for each subrange can be restricted accordingly.&lt;/li&gt;
&lt;li&gt;As a result, each decision point is evaluated at most a constant number of times, and the overall time complexity is $O(n\log n)$.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;This algorithm is remarkably elegant. When I first encountered it, I was genuinely impressed by its simplicity and efficiency.&lt;/p&gt;
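&lt;p&gt;A compact sketch of the divide-and-conquer strategy follows. The cost $w(j,i) = (i-j)^2$ is only a placeholder that happens to satisfy the quadrilateral inequality; substitute the real cost of your problem.&lt;/p&gt;

```cpp
#include <algorithm>
#include <functional>
#include <vector>
using namespace std;
typedef long long ll;

// Divide-and-conquer optimization for the heterogeneous prefix transition
//   f[i] = min_{0 <= j < i} g[j] + w(j, i),
// assuming w satisfies the quadrilateral inequality.
vector<ll> dcOptimize(const vector<ll>& g) {
    int n = g.size();
    auto w = [](int j, int i) -> ll { ll d = i - j; return d * d; };
    vector<ll> f(n + 1, 0);
    // compute f on [l, r], knowing every optimal j lies in [optl, optr]
    function<void(int, int, int, int)> solve = [&](int l, int r,
                                                   int optl, int optr) {
        if (l > r) return;
        int mid = (l + r) / 2, best = optl;
        ll bestVal = g[optl] + w(optl, mid);
        for (int j = optl + 1; j <= min(mid - 1, optr); ++j) {
            ll v = g[j] + w(j, mid);
            if (v < bestVal) { bestVal = v; best = j; }
        }
        f[mid] = bestVal;
        solve(l, mid - 1, optl, best);   // decision monotonicity shrinks
        solve(mid + 1, r, best, optr);   // the candidate ranges
    };
    solve(1, n, 0, n - 1);
    return f;
}
```

&lt;p&gt;Each recursion level scans $O(n)$ candidates in total, and there are $O(\log n)$ levels, which gives the $O(n\log n)$ bound stated above.&lt;/p&gt;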
&lt;h2&gt;Quadrilateral Inequality&lt;/h2&gt;
&lt;p&gt;For &lt;strong&gt;self-transitions&lt;/strong&gt;, additional conditions are required to optimize the DP.&lt;br /&gt;
The &lt;strong&gt;quadrilateral inequality&lt;/strong&gt; is one such condition.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Definition (Quadrilateral Inequality).&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A function $w(i, j)$ is said to satisfy the &lt;strong&gt;quadrilateral inequality&lt;/strong&gt; if, for all $a \le b \le c \le d$, the following inequality holds:
$$
w(a, c) + w(b, d) \le w(a, d) + w(b, c).
$$
&lt;strong&gt;Remark:&lt;/strong&gt; The inequality sign here is only a formal representation. Its essential meaning is that $w(a, c) + w(b, d)$ yields a better (or no worse) cost than $w(a, d) + w(b, c)$.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;There is a well-known theorem stating that if the cost function $w$ satisfies the
&lt;strong&gt;quadrilateral inequality&lt;/strong&gt;, then the corresponding DP transition exhibits
&lt;strong&gt;decision monotonicity&lt;/strong&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Proof.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;According to the definition of &lt;strong&gt;decision monotonicity&lt;/strong&gt;, it suffices to verify the
following two symmetric conditions:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;For every &lt;strong&gt;decision point&lt;/strong&gt; $j_1$ of $i_1$, there exists a &lt;strong&gt;decision point&lt;/strong&gt;
$j_2$ of $i_2$ such that $j_1 \le j_2$.&lt;/li&gt;
&lt;li&gt;For every &lt;strong&gt;decision point&lt;/strong&gt; $j_2$ of $i_2$, there exists a &lt;strong&gt;decision point&lt;/strong&gt;
$j_1$ of $i_1$ such that $j_1 \le j_2$.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Since the two cases are symmetric, we only prove the first one.&lt;/p&gt;
&lt;p&gt;Let $i_1 &amp;lt; i_2$, and suppose $j_1$ is a &lt;strong&gt;decision point&lt;/strong&gt; of $i_1$.&lt;br /&gt;
By definition, for all $j&apos; &amp;lt; j_1$, we have
$$
g[j_1] + w(j_1, i_1) \le g[j&apos;] + w(j&apos;, i_1).
$$&lt;/p&gt;
&lt;p&gt;Now let $a = j&apos;$, $b = j_1$, $c = i_1$, and $d = i_2$.&lt;br /&gt;
By the &lt;strong&gt;quadrilateral inequality&lt;/strong&gt;, it follows that
$$
w(j&apos;, i_1) + w(j_1, i_2) \le w(j&apos;, i_2) + w(j_1, i_1).
$$&lt;/p&gt;
&lt;p&gt;Adding the two inequalities above yields
$$
g[j_1] + w(j_1, i_2) \le g[j&apos;] + w(j&apos;, i_2).
$$&lt;/p&gt;
&lt;p&gt;This shows that when the decision-affected point moves from $i_1$ to $i_2$,
the index $j_1$ remains no worse than any $j&apos; &amp;lt; j_1$.&lt;br /&gt;
Therefore, some decision point of $i_2$ is greater than or equal to $j_1$.&lt;/p&gt;
&lt;p&gt;Hence, the DP transition satisfies &lt;strong&gt;decision monotonicity&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The &lt;em&gt;quadrilateral inequality&lt;/em&gt; in fact implies stronger properties.&lt;br /&gt;
To see them, we start from an alternative reading of the inequality.&lt;/p&gt;
&lt;p&gt;Starting from the original inequality, we can make a simple transformation:&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
w(a, c) + w(b, d) &amp;amp;\le w(a, d) + w(b, c), \\
w(a, d) - w(a, c) &amp;amp;\ge w(b, d) - w(b, c), \\
\bigl(g[a] + w(a, d)\bigr) - \bigl(g[a] + w(a, c)\bigr)
&amp;amp;\ge
\bigl(g[b] + w(b, d)\bigr) - \bigl(g[b] + w(b, c)\bigr).
\end{aligned}
$$&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;How should we interpret this inequality?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When the decision-affected point moves from $c$ to $d$,&lt;br /&gt;
the increase in cost when choosing index $a$ as the &lt;strong&gt;decision point&lt;/strong&gt;
is no smaller than the increase when choosing index $b$ as the &lt;strong&gt;decision point&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;We call this phenomenon &lt;strong&gt;Gradual Deterioration&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Intuitively, &lt;em&gt;Gradual Deterioration&lt;/em&gt; provides another way to understand
&lt;strong&gt;decision monotonicity&lt;/strong&gt; under the &lt;strong&gt;quadrilateral inequality&lt;/strong&gt;:
as the decision-affected point increases, earlier decision points
incur higher marginal costs than later ones.
As a result, the optimal decision point shifts monotonically forward.&lt;/p&gt;
&lt;p&gt;Moreover, &lt;em&gt;Gradual Deterioration&lt;/em&gt; reveals an even stronger property:&lt;/p&gt;
&lt;p&gt;For &lt;strong&gt;ANY two indices&lt;/strong&gt; $j_1 &amp;lt; j_2$, there exists a dividing point $x$ such that&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;for decision-affected points with index less than $x$, choosing $j_1$ is better;&lt;/li&gt;
&lt;li&gt;for decision-affected points with index greater than $x$, choosing $j_2$ is better.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Obviously, a transition with this property must possess &lt;em&gt;decision monotonicity&lt;/em&gt;, but a transition with &lt;em&gt;decision monotonicity&lt;/em&gt; may not necessarily satisfy this property.&lt;/p&gt;
&lt;h2&gt;Binary-Queue&lt;/h2&gt;
&lt;p&gt;The conclusion of &lt;em&gt;Gradual Deterioration&lt;/em&gt; inspires a new idea for optimizing DP.&lt;/p&gt;
&lt;p&gt;Let $x_j$ denote the &lt;strong&gt;dividing point&lt;/strong&gt; associated with decision point $j$. If $x_{j-1} \ge x_j$, then decision point $j+1$ becomes better than $j$ before $j$ ever becomes better than $j-1$.&lt;br /&gt;
As a result, $j$ will never be an optimal &lt;strong&gt;decision point&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;This observation allows us to design an algorithm that safely ignores all such decision points. By discarding them during the transition process, the DP can be computed much more efficiently.&lt;/p&gt;
&lt;p&gt;From the above derivation, we are motivated to maintain an increasing sequence of dividing points together with their corresponding optimal decision points $j_1, j_2, \ldots, j_m$.&lt;/p&gt;
&lt;p&gt;Suppose we have already processed state $i-1$ and maintained the decision points $j_1, j_2, \ldots, j_m$. Now we consider state $i$.
When inserting the new candidate decision point $i-1$ at the back, we first compute the dividing point $x_m$ between $j_m$ and $i-1$.
If $x_{m-1} \ge x_m$, then $j_m$ will never become an optimal decision point and can
be removed.
By repeating this process and finally inserting $i-1$, we maintain a sequence of decision points with strictly increasing dividing points.&lt;/p&gt;
&lt;p&gt;For the current &lt;em&gt;decision-affected point&lt;/em&gt; $i$, if $j_1$ is worse than $j_2$, then all subsequent points will also prefer $j_2$, and thus $j_1$ can be safely removed. Repeating this process until $j_1$ becomes better than $j_2$, we conclude that $j_1$ is the decision point for $i$, according to the previous derivation.&lt;/p&gt;
&lt;p&gt;The above procedure can be conveniently implemented using a deque. Each decision point is inserted into and removed from the deque at most once, so this part contributes linear time complexity.&lt;/p&gt;
&lt;p&gt;The key remaining issue is how to compute the boundary point $x$. By the property of &lt;em&gt;Gradual Deterioration&lt;/em&gt;, $x$ is monotonic and thus can be found by binary search.&lt;/p&gt;
&lt;p&gt;Using binary search to compute $x$, the entire process of &lt;em&gt;self-transition&lt;/em&gt; / &lt;em&gt;heterogeneous-transition&lt;/em&gt; optimization can be accelerated to $O(n \log n)$.&lt;/p&gt;
&lt;p&gt;This method is known as the &lt;strong&gt;binary queue optimization algorithm&lt;/strong&gt;.&lt;/p&gt;
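&lt;p&gt;The whole procedure might look like the sketch below. The cost $w(j,i) = (i-j)^2 + C$ is a demo of my own choosing that satisfies the quadrilateral inequality; &lt;code&gt;cross&lt;/code&gt; performs the binary search for the dividing point.&lt;/p&gt;

```cpp
#include <deque>
#include <vector>
using namespace std;
typedef long long ll;

// Binary-queue optimization for the self-transition
//   f[i] = min_{0 <= j < i} f[j] + w(j, i),  f[0] = 0,
// assuming w satisfies the quadrilateral inequality.
// Demo cost (my choice): w(j, i) = (i - j)^2 + C.
vector<ll> binaryQueue(int n, ll C) {
    auto w = [&](int j, int i) -> ll { ll d = i - j; return d * d + C; };
    vector<ll> f(n + 1, 0);
    auto val = [&](int j, int i) { return f[j] + w(j, i); };
    struct Cand { int j, l; };           // j is optimal from position l on
    deque<Cand> dq{{0, 1}};
    // first position > b at which candidate b beats the older candidate a
    auto cross = [&](int a, int b) {
        int lo = b + 1, hi = n + 1;
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (val(b, mid) <= val(a, mid)) hi = mid; else lo = mid + 1;
        }
        return lo;                       // n + 1 means "never within range"
    };
    for (int i = 1; i <= n; ++i) {
        while (dq.size() > 1 && dq[1].l <= i) dq.pop_front();
        f[i] = val(dq.front().j, i);     // front is the decision point of i
        while (!dq.empty() && cross(dq.back().j, i) <= dq.back().l)
            dq.pop_back();               // back can never become optimal
        int x = dq.empty() ? i + 1 : cross(dq.back().j, i);
        if (x <= n) dq.push_back({i, x});
    }
    return f;
}
```

&lt;p&gt;Each candidate enters and leaves the deque at most once, so the binary searches dominate and the total cost is $O(n \log n)$.&lt;/p&gt;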
&lt;h2&gt;Slope Optimization&lt;/h2&gt;
&lt;p&gt;We can impose additional conditions to further optimize the DP.
In some transitions, the cost function has the form
$$
w(j,i) = a(j) + b(i) + c(i)d(j),
$$
where $j$ is the decision point and $i$ is the decision-affected point.&lt;/p&gt;
&lt;p&gt;This form has a very nice property.
Let us discuss under what conditions such a function $w$ satisfies the
&lt;strong&gt;quadrilateral inequality&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;By the principle of &lt;em&gt;Gradual Deterioration&lt;/em&gt;, consider indices
$
j_1 \le j_2 \le i_1 \le i_2.
$
We have
$$
\begin{aligned}
w(j_1,i_2) - w(j_1,i_1) &amp;amp;\ge w(j_2,i_2) - w(j_2,i_1), \\
c(i_2)d(j_1) - c(i_1)d(j_1) &amp;amp;\ge c(i_2)d(j_2) - c(i_1)d(j_2), \\
\bigl(c(i_2) - c(i_1)\bigr)\bigl(d(j_2) - d(j_1)\bigr) &amp;amp;\le 0.
\end{aligned}
$$&lt;/p&gt;
&lt;p&gt;The derivation above shows that when $c$ and $d$ have opposite monotonicity,
the function $w$ satisfies the &lt;strong&gt;quadrilateral inequality&lt;/strong&gt;.
Since $w$ only depends on the product $c(i)d(j)$, we can negate both $c$ and $d$
simultaneously if necessary, so we may assume $c$ is non-decreasing and $d$ is non-increasing.&lt;/p&gt;
&lt;p&gt;Under this condition, for a decision-affected point $i$, minimizing $g[j] + w(j,i)$ over $j$ amounts to minimizing
$
\bigl(g[j] + a(j)\bigr) + c(i)d(j),
$
since $b(i)$ is constant for fixed $i$.
This expression is exactly the value, at $x = c(i)$, of a line with slope $d(j)$ and intercept $g[j] + a(j)$.
Therefore, the dividing point between two decision points can be obtained
by computing the intersection of two lines, rather than by binary search.&lt;/p&gt;
&lt;p&gt;This technique is known as &lt;strong&gt;Slope Optimization&lt;/strong&gt;.
It can be viewed as a special case of &lt;strong&gt;Binary Queue Optimization&lt;/strong&gt; and runs
in linear time, $O(n)$.&lt;/p&gt;
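&lt;p&gt;As an illustration, here is a sketch of slope optimization on the classic transition $f[i] = \min_{j&amp;lt;i}\{ f[j] + (s[i]-s[j]-L)^2 \}$, where $s$ is a prefix-sum array. The concrete cost and variable names are my own choice; the elements are assumed positive so that the slopes strictly decrease and the queries strictly increase.&lt;/p&gt;

```cpp
#include <deque>
#include <vector>
using namespace std;
typedef long long ll;

// Slope optimization for f[i] = min_{j<i} f[j] + (s[i] - s[j] - L)^2,
// where s is the prefix-sum array of v (elements assumed positive).
// Candidate j contributes the line y = -2*(s[j]+L)*x + f[j] + (s[j]+L)^2,
// queried at x = s[i]; dividing points are intersections of lines.
vector<ll> slopeOpt(const vector<ll>& v, ll L) {
    int n = v.size();
    vector<ll> s(n + 1, 0), f(n + 1, 0);
    for (int i = 1; i <= n; ++i) s[i] = s[i - 1] + v[i - 1];
    auto Y     = [&](int j) { return s[j] + L; };
    auto slope = [&](int j) { return -2 * Y(j); };
    auto icpt  = [&](int j) { return f[j] + Y(j) * Y(j); };
    auto at    = [&](int j, ll x) { return slope(j) * x + icpt(j); };
    // true if line b never lies below both a and c (slopes: a > b > c);
    // intersection abscissas are compared by cross-multiplication
    auto badMid = [&](int a, int b, int c) {
        return (__int128)(icpt(c) - icpt(a)) * (slope(a) - slope(b))
            <= (__int128)(icpt(b) - icpt(a)) * (slope(a) - slope(c));
    };
    deque<int> dq{0};
    for (int i = 1; i <= n; ++i) {
        // queries increase, so outdated lines are popped from the front
        while (dq.size() > 1 && at(dq[1], s[i]) <= at(dq[0], s[i]))
            dq.pop_front();
        f[i] = at(dq.front(), s[i]) + s[i] * s[i];
        // keep the lower envelope when inserting the new line i at the back
        while (dq.size() > 1 && badMid(dq[dq.size() - 2], dq.back(), i))
            dq.pop_back();
        dq.push_back(i);
    }
    return f;
}
```

&lt;p&gt;Note that &lt;code&gt;__int128&lt;/code&gt; is a GCC/Clang extension used here to avoid overflow in the cross-multiplication.&lt;/p&gt;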
&lt;p&gt;If $c$ and $d$ do not have such monotonic properties, we can instead use a
&lt;strong&gt;Li Chao Segment Tree&lt;/strong&gt; to maintain the lines and query their values at $c(i)$,
resulting in a time complexity of $O(n \log n)$.
I have already written a detailed introduction to this method in a previous blog post.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Link:&lt;/strong&gt;&lt;br /&gt;
&lt;a href=&quot;https://old.517group.cn/posts/51746/&quot;&gt;Li Chao Segment Tree&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Anti Quadrilateral Inequality and Binary-Stack&lt;/h2&gt;
&lt;p&gt;Analogously to before, we call the following formula the &lt;em&gt;anti quadrilateral inequality&lt;/em&gt;:
$$
w(a,c)+w(b,d) \ge w(a,d)+w(b,c).
$$
It yields a property analogous to &lt;em&gt;Gradual Deterioration&lt;/em&gt;, which we call &lt;em&gt;Gradual Optimization&lt;/em&gt;. However, a transition satisfying the anti quadrilateral inequality does &lt;strong&gt;not&lt;/strong&gt; have &lt;em&gt;decision monotonicity&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Why? A decision point may be optimal right after it is added, but as the index of the decision-affected point increases, older decision points become better again. Hence this type of transition lacks &lt;em&gt;decision monotonicity&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;But we can still use a suitable structure to handle this case.&lt;/p&gt;
&lt;p&gt;For a fixed state $i$, we consider all possible decision points $1, 2, \ldots, i-1$. Under &lt;em&gt;Gradual Optimization&lt;/em&gt;, for each candidate decision point $j$,
there exists a dividing point $x_j$ such that after $x_j$, choosing $j$ becomes better
than choosing $j+1$.&lt;/p&gt;
&lt;p&gt;Now observe that if $x_{j-1} \le x_j$, then before $j$ ever becomes better than $j+1$ (at $x_j$), $j-1$ has already become better than $j$ (at $x_{j-1}$).
This means that $j$ can never be the optimal decision point at any time,
and therefore can be safely discarded.&lt;/p&gt;
&lt;p&gt;By repeatedly applying this elimination process, we obtain a sequence of
candidate decision points $j_1, j_2, \ldots, j_m.$ Moreover, if $j_k$ becomes better than $j_{k+1}$ at time $x_k$,
then the dividing points must satisfy $x_1 &amp;gt; x_2 &amp;gt; \cdots &amp;gt; x_{m-1}$.&lt;/p&gt;
&lt;p&gt;Next, consider the current state $i$.
If the last candidate decision point $j_m$ is worse than $j_{m-1}$,
then by &lt;em&gt;Gradual Optimization&lt;/em&gt;, $j_m$ will never become optimal in the future,
and thus can be removed as well.&lt;/p&gt;
&lt;p&gt;It is easy to see that once $j_m$ becomes better than $j_{m-1}$,
$j_m$ is the optimal choice for the current state $i$—that is,
the decision point for $i$.&lt;/p&gt;
&lt;h1&gt;Range Transition&lt;/h1&gt;
&lt;p&gt;This type of transition is relatively simple.
The standard form of a &lt;strong&gt;range transition&lt;/strong&gt; is&lt;/p&gt;
&lt;p&gt;$$
f[i][j] = \min_{i \le k &amp;lt; j} \bigl\{ f[i][k] + f[k+1][j] + w(i,j) \bigr\}.
$$&lt;/p&gt;
&lt;p&gt;If the cost function $w(i,j)$ satisfies the &lt;strong&gt;quadrilateral inequality&lt;/strong&gt;
and the following condition (which we call &lt;strong&gt;inclusion monotonicity&lt;/strong&gt;): for all $a \le b \le c \le d$,
$$
w(b,c) \le w(a,d),
$$
then this transition is said to have &lt;strong&gt;decision monotonicity&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Just as in prefix transitions, this property allows us to optimize the
DP to a time complexity of $O(n^2)$.&lt;/p&gt;
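&lt;p&gt;For instance, the classic stone-merging cost $w(i,j) = \sum_{k=i}^{j} a_k$ satisfies both the quadrilateral inequality and inclusion monotonicity, so the optimization applies. A sketch (the setup and names are my own):&lt;/p&gt;

```cpp
#include <algorithm>
#include <vector>
using namespace std;
typedef long long ll;
const ll INF = 1e18;

// Knuth-style range-transition optimization, sketched on the classic
// stone-merging cost w(i,j) = a[i] + ... + a[j].
// The optimal split opt[i][j] is bounded between opt[i][j-1] and
// opt[i+1][j], which brings the total work down to O(n^2).
ll mergeCost(const vector<ll>& a) {
    int n = a.size();
    vector<ll> pre(n + 1, 0);
    for (int i = 1; i <= n; ++i) pre[i] = pre[i - 1] + a[i - 1];
    vector<vector<ll>> f(n + 2, vector<ll>(n + 1, 0));
    vector<vector<int>> opt(n + 2, vector<int>(n + 1, 0));
    for (int i = 1; i <= n; ++i) opt[i][i] = i;
    for (int len = 2; len <= n; ++len)
        for (int i = 1; i + len - 1 <= n; ++i) {
            int j = i + len - 1;
            f[i][j] = INF;
            // scan only the restricted window of candidate split points
            for (int k = opt[i][j - 1]; k <= min(j - 1, opt[i + 1][j]); ++k) {
                ll v = f[i][k] + f[k + 1][j] + pre[j] - pre[i - 1];
                if (v < f[i][j]) { f[i][j] = v; opt[i][j] = k; }
            }
        }
    return f[1][n];
}
```

&lt;p&gt;Summing the window lengths over a diagonal telescopes, which is where the $O(n^2)$ bound comes from.&lt;/p&gt;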
&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;This article discusses several techniques for &lt;strong&gt;DP optimization&lt;/strong&gt;.
Many problems involve these ideas, and dynamic programming itself
is a fundamental topic in OI.&lt;/p&gt;
&lt;p&gt;However, techniques are only tools.
What truly matters is understanding when and how to adapt them to
different problems.&lt;/p&gt;
</content:encoded></item><item><title>Hello Everyone!</title><link>https://blog.517group.cn/posts/202512241047/</link><guid isPermaLink="true">https://blog.517group.cn/posts/202512241047/</guid><description>Complete the migration of the blog</description><pubDate>Wed, 24 Dec 2025 10:47:01 GMT</pubDate><content:encoded>&lt;p&gt;As you may have noticed, my blog has undergone a major update. The old blog, previously hosted at https://old.517group.cn, is no longer being maintained.&lt;/p&gt;
&lt;p&gt;Articles from the old blog will not be migrated to this new site. All future content will be published exclusively here.&lt;/p&gt;
&lt;p&gt;Thank you all for your continued support.&lt;/p&gt;
</content:encoded></item></channel></rss>