# Solution of Perfect Path Patrol (ICPC NAQ 2020)

Problem link: Click here.

## Some Comments

Well, it has been almost three years since I retired from the Informatics Olympiad.

Recently I have been planning to attend the 2021 ICPC North America Qualifier (NAQ).

Honestly, I have no idea how many algorithms and data structures I still remember.

So I decided to practice with some problems from NAQ 2020, and this is one of the problems that I found really interesting and clever.

## Solution

This solution uses:

- greedy algorithm
- DFS
- balanced BST
- induction

The "community" is actually a tree, so we can run a DFS on it.

For the root, since no patrollers have been assigned yet, for every child edge we need to assign some patrollers to satisfy its $p$ (the required number of patrollers on that edge).

Consider the case where there are only two children. Say the one whose edge requires $p = 2$ patrollers is called $a$, and the one whose edge requires $p = 1$ is called $b$. The best way to assign patrollers is to assign one person to patrol the path between $a$ and $b$, and one person to patrol the path between the root and $a$.

Then when we DFS into $a$, we know there are $2$ people whose patrolling tasks include the street (edge) between $a$ and its parent node (which is the root). We do not care where those people start from: whether they depart from the root or from $b$, we can extend their routes into $a$'s children either way.

Thus, the costs of satisfying each node's children are independent. First we assign the people who come in over the parent edge to cover the children; then, if needed, we allocate more people to meet the requirements of all children. The root is just a little different: no one comes from its parent, since it doesn't have one.

Well, this is the core part of this problem.

To formalize this part, suppose our current node is $u$, and $r$ people come in over its parent edge. We put the $p$ values of all of $u$'s child edges into a sequence. Now you have three options:

- Choose one element in the sequence and subtract $1$ from it. This action is free, but you can only do it $r$ times in total. (This action means extending the route of one person who comes from the parent edge.)
- Choose two elements in the sequence and subtract $1$ from each. This action costs $1$. (This action means choosing two children and assigning a new patroller to the path between them.)
- Choose one element in the sequence and subtract $1$ from it. This action costs $1$. (This action means choosing one child and assigning a new patroller to the path between it and the current node $u$.)

Our goal is to use these actions to reduce the whole sequence to all zeros while minimizing the total cost.

Since the first action is free, we want to use it as much as possible.

And because the second action subtracts from two elements at once, it is never worse than action three, so we prefer action two over action three.

Now, consider the root, which has no opportunity to use action one (it has $r = 0$). If we wanted to carry out this process by hand, we could use this method:

- Select the biggest and the second biggest elements in the sequence, subtract $1$ from each, and put them back into the sequence.
- Repeat until at most one non-zero element remains.

At the end, if the sequence is still not all zeros, we have to subtract the remaining non-zero element down to zero one step at a time.

This is a nice method for finding the minimal cost, but it is super slow: the number of steps is proportional to the sum of the sequence, and the $p$ values can be huge.

Okay, now this is the most interesting part:

Consider two cases:

- The biggest element of the sequence is bigger than the sum of remaining elements.
- The biggest element of the sequence is less than or equal to the sum of remaining elements.

Suppose the biggest element is $M$, and the sum of the remaining elements is $R$. Also, let the sum of all elements be $S$, where $S = M + R$.

For case one ($M > R$), the solution is quite simple: subtract the biggest element every time together with another element (action two). After all other elements reach zero, the biggest element can only be reduced one by one (action three). In this case, the cost is just $R + (M - R) = M$.

For case two ($M \le R$), let me first write down the conclusion. The cost is

- $S / 2$ if $S$ is even;
- $(S + 1) / 2$ if $S$ is odd.

Now, I will prove this by induction:

First, if $S$ is odd, we can subtract $1$ from the biggest element once (action three) to make $S$ even. This does not increase the cost beyond the formula, because action two never changes the parity of $S$, so at least one single subtraction is unavoidable anyway.

So, we assume $S$ is even.

Let $m$ be the length of the sequence.

Base case: $m = 2$.

Since $m = 2$ and "the biggest element of the sequence is less than or equal to the sum of the remaining elements", the two elements must be equal. Thus, the cost is $S / 2$: just apply action two $S / 2$ times.

Induction step:

Suppose the conclusion holds for all sequences of length $m - 1$.

WTP: the conclusion holds for length $m$.

Since $M \le R$, every time we subtract $1$ from the biggest element, we can subtract $1$ from a smaller element together with it (action two), until the biggest element is reduced to $0$. This costs exactly $M$.

Because $M$ is the biggest element, it is definitely greater than or equal to the biggest of the remaining elements.

So we can ensure that after $M$ is reduced to $0$, among the remaining elements (whose sum is now $R - M$), the biggest one is still not bigger than the sum of the others: whenever one remaining element threatens to dominate, we simply pair $M$'s subtractions with that element.

Hence, by the hypothesis, zeroing the remaining length-$(m - 1)$ sequence costs $(R - M) / 2$, and the total cost is $M + (R - M) / 2 = (M + R) / 2 = S / 2$. So the conclusion holds for length $m$.

Q.E.D.

This is how to assign people at the root. For other nodes, we need to do the "extending" first as well.

To make sure the cost is minimized, we want the biggest element of the sequence after applying the free "extending" actions to be as small as possible. So we use a balanced BST (an ordered map from value to multiplicity) to maintain the sequence: pick the biggest value and subtract from it until it equals the second biggest. If multiple elements share the biggest value, distribute the subtractions among them as evenly as possible. Repeat until the free actions run out or all elements are zero.

The time complexity of this algorithm is $O(n \log n)$: every child edge inserts one entry into the map, and every iteration of the leveling loop removes one entry.

This is my code for this problem:

```cpp
#include <bits/stdc++.h>
#define NS (500005)
#define PII pair<int, int>
#define LL long long
using namespace std;

// Fast integer input.
template<typename _Tp> inline void IN(_Tp &dig)
{
    char c; bool flag = 0; dig = 0;
    while (c = getchar(), !isdigit(c)) if (c == '-') flag = 1;
    while (isdigit(c)) dig = dig * 10 + c - '0', c = getchar();
    if (flag) dig = -dig;
}

int n;
vector<PII> graph[NS]; // graph[u] = {(v, p), ...}: edge u-v requires p patrollers

// a: current node, f: its parent, r: patrollers arriving over the parent edge.
// Returns the minimal number of new patrollers needed inside a's subtree.
LL dfs(int a, int f, int r)
{
    LL res = 0;
    // Value -> multiplicity, biggest value first. Static to reuse memory;
    // safe because it is fully consumed before the recursive calls below.
    static map<int, int, greater<int> > t;
    t.clear(), t[0] = 0; // key 0 is a sentinel so a "second biggest" always exists
    for (int i = 0, p; i < (int)graph[a].size(); i += 1)
        if ((p = graph[a][i].second, graph[a][i].first) != f)
            t[p] += 1;
    // Spend the r free "extension" actions: level the biggest values down.
    while (r > 0)
    {
        int v = t.begin()->first, k = t.begin()->second;
        if (v == 0) break; // everything is already zero
        t.erase(t.begin());
        int v0 = t.begin()->first; // second biggest value
        if ((LL)(v - v0) * k <= r)
        {
            // Enough budget to lower all k copies of v down to v0.
            t[v0] += k;
            r -= (LL)(v - v0) * k;
        }
        else
        {
            // Split the leftover budget as evenly as possible:
            // k1 copies stop at v1, k2 copies one step lower at v2 = v1 - 1.
            int v1 = v - r / k, v2 = v1 - 1;
            int k1 = k - r % k, k2 = k - k1;
            r = 0;
            if (v1 > 0)
            {
                t[v1] += k1;
                if (v2 > 0 && k2 > 0) t[v2] += k2;
            }
        }
    }
    t.erase(0);
    if (!t.empty())
    {
        // Cost at this node: max(M, ceil(S / 2)) in the notation above.
        map<int, int, greater<int> >::iterator i = t.begin();
        int mx = i->first;
        t[mx] -= 1; // drop one copy of the biggest so rsum below equals S - mx
        LL rsum = (LL)i->first * i->second;
        while ((++i) != t.end()) rsum += (LL)i->first * i->second;
        if (mx <= rsum) res = (rsum + mx) / 2 + ((rsum + mx) & 1);
        else res = mx;
    }
    for (int i = 0, u, p; i < (int)graph[a].size(); i += 1)
        if ((p = graph[a][i].second, u = graph[a][i].first) != f)
            res += dfs(u, a, p);
    return res;
}

int main(int argc, char const* argv[])
{
    IN(n);
    for (int i = 1; i < n; i += 1)
    {
        int u, v, p;
        IN(u), IN(v), IN(p);
        graph[u].push_back(PII(v, p)), graph[v].push_back(PII(u, p));
    }
    printf("%lld\n", dfs(0, -1, 0));
    return 0;
}
```
