Why sorting and lower_bound() don't do the same as find()

I’m reading the graphs solution to Livestock Lineup.

I don’t understand why sorting the adjacency lists and binary searching doesn’t achieve the same purpose as linearly finding the element.

#include <bits/stdc++.h>
using namespace std;

void setIO(string s="") {
	cout << fixed << setprecision(15);
}

#define F0R(i, n) for(int i=0; i<n; i++)
#define all(x) begin(x), end(x)
#define lb lower_bound

vector<string> cows{"Bessie", "Buttercup", "Belinda", "Beatrice", "Bella", "Blue", "Betsy", "Sue"};
int index(string cow){ return lb(all(cows), cow) - begin(cows); }
vector<int> adj[8];

int main() {

	int N; cin >> N;

	F0R(i,N) {
		string cow1, cow2, trash;

		cin >> cow1 >> trash >> trash >> trash >> trash >> cow2;
		int c1 = index(cow1), c2 = index(cow2); /// convert names to indices

		adj[c1].push_back(c2), adj[c2].push_back(c1);
	}

	F0R(i,N) sort(all(adj[i]));

	vector<int> seq; /// sequence of cow indices
	vector<bool> done(8);

	F0R(i,8) if (!done[i] && adj[i].size() <= 1) { /// add path to sequence
		int cur = i;

		while (adj[cur].size() == 1) {
			seq.push_back(cur); done[cur] = 1;

			int next = adj[cur].front();
			adj[next].erase(lb(all(adj[next]), cur));

//			adj[next].erase(find(all(adj[next]), cur));

			cur = next;
		}

		seq.push_back(cur); done[cur] = 1;
	}

	for(int t: seq) cout << cows[t] << "\n"; /// convert indices back to names
}

Submitting this gives me a runtime error or memory-limit-exceeded error on test cases 5, 6, and 8. Using find() (commented out) in place of lower_bound() gives AC.

Also, is it correct to say that we only visit each node’s adjacency list at most once in the line
adj[next].erase(find(all(adj[next]), cur));?

Since we always mark done the nodes with an unvisited neighbor, it seems we only find and delete cur from adj[next] once. Then is it true that we visit the 2N total neighbors at most once with this find command, so the solution runs in O(N)?

Part of my confusion is that we might visit adj[next] for more nodes than just cur. Or, we might visit adj[next] more than once (worst case O(N^2)). Why are neither of these claims true?

Thanks in advance for helping me understand the internal solution.

I noticed you are erasing only one direction of the edge, and not checking whether cur is actually in adj[next]. In general you should check that your iterators are valid (i.e., compare them against container.end()). Maybe the error is related to that?

This implementation is O(n^2). To do it faster you can use sets for your adjacency lists. Otherwise erase is O(n), so there is no benefit to using lower_bound over find.

In general for a problem where O(n^2) is fine and you are editing edges, I would just use an adjacency matrix instead of adjacency lists.

Yes, that’s true. However, in this problem we have a path of \le N-1 edges. We only visit adj[next] if it isn’t done yet. Since this is the only time we visit adj[next], and next and cur are originally neighbors of each other, it’s guaranteed that adj[next] includes cur.

If the element doesn’t exist, find() and lower_bound() both return container.end(), and erasing the end iterator does nothing. That’s why I am not sure why test cases 5, 6, and 8 (and only those three) give runtime error or memory limit exceeded.

I’m not sure why this is true. Since this is a path, we have at most N-1 edges between nodes. I think we visit each node’s adjacency list at most once for any path before marking done[next] = true.

For instance, in 5 - 2 - 6, we remove 5 from adj[2] and remove 2 from adj[6]. Then we don’t come back to 5, 2, or 6 ever again.

If we iterate through each adjacency list once, we’d go through at most 2(N-1) nodes in total, for an O(N) algorithm. Can you please explain what I’m missing?

I thought erase() just marks a pointer as invalid, so the next time we iterate over the list, the machine skips the invalid pointer and decrements the vector’s size?

Or do you mean that adj[next].erase(find(all(adj[next]), cur)); would take O(N + N) while adj[next].erase(lb(all(adj[next]), cur)); would take O(N + \log N) time?

Correct me if I’m wrong, but because it’s a path, doesn’t each adjacency list store at most 2 neighbors? So I guess it doesn’t matter whether we iterate linearly or not.

Noted. I agree, a boolean adjacency matrix presents many benefits if the problem allows for O(n^2).

Thank you.

If the element doesn’t exist, lower_bound returns the next higher element. You might be confusing lower_bound with binary_search. Also erasing end() might do nothing in some implementations of STL, but it’s undefined behavior. I almost always check the validity of my iterators.

Calling erase on a vector (which is backed by an array) deletes the element and shifts all the elements after it left 1 step.

I didn’t read the problem yet when I wrote the above. If the graph is guaranteed to be a path, then I would not even worry about any performance issues, since every operation is O(1) time. So sorting+lower_bound is also unnecessary.

I would carefully read the C++ STL documentation, for example: https://www.cplusplus.com/reference/vector/vector/erase/

Thanks for pointing out what I misunderstood.

I didn’t know that erasing end() was undefined behavior.
I modified that portion of my code as follows.

int next = adj[cur].front();
auto it = lb(all(adj[next]), cur);
if (it != adj[next].end() && *it == cur) adj[next].erase(it);

This time, I get WAs on test cases 5, 6, and 8. Please let me know if you’re able to see why. Otherwise, I understand that it’s unnecessary to worry about performance issues here, so I will stick with using find().

I thought that if the vector was sorted, erase() would only take O(\log N), but now I understand that’s only true for sets. Then is it true that

  • adj[next].erase(find(all(adj[next]), cur)); would take O(N + N) time,
  • adj[next].erase(lower_bound(all(adj[next]), cur)); would take O(N + \log N) time?

Is that why the line takes O(N) time regardless of either option?

Thanks for your help!

Try erasing the edge in both directions. The easier solution is to not modify the graph at all. Just mark the nodes as visited like you would do for a DFS.

Yes, either option is O(N) because the O(N) term dominates.

It should be easy enough to figure it out yourself. These test cases are extremely small - try printing out what each vector contains before and after each erase.