I’m having trouble fully understanding what it means.
I understand that it measures the number of operations, but my confusion lies in this table on the USACO Guide: what do the upper bounds mean?

I think what it’s trying to say is that for n <= 10, an algorithm with O(n^7) complexity will probably pass. The table is there so you can check whether your algorithm’s time complexity is fast enough for the given constraints.

N is the input size, so if the problem statement says n <= 100, then, like the guide says, you can get away with solutions of up to n^3 or n^4 complexity (maybe, it depends on the algorithm and language, but in general yes). If n <= 20, then n^5 or even exponential time may be viable. It’s a viability chart for quickly estimating, for example, whether you can use an exponential brute force on a problem with 20 or fewer inputs.
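To make that estimate concrete, here’s a rough sketch in Python. The ~10^8 operations-per-second budget is an assumption for illustration; the real budget depends on the language, the judge, and the constant factors of your algorithm:

```python
# Hypothetical budget: assume roughly 10^8 simple operations
# fit inside a one-second time limit.
OPS_BUDGET = 10**8

def fits(n, complexity):
    """Return True if complexity(n) operations fit in the budget."""
    return complexity(n) <= OPS_BUDGET

# n <= 100: n^3 is comfortable, n^4 is right at the edge.
print(fits(100, lambda n: n**3))   # 10^6 ops  -> True
print(fits(100, lambda n: n**4))   # 10^8 ops  -> True (borderline)

# n <= 20: an exponential 2^n brute force is still fine...
print(fits(20, lambda n: 2**n))    # ~10^6 ops -> True

# ...but the same brute force at n = 100 is hopeless.
print(fits(100, lambda n: 2**n))   # False
```

This is exactly the kind of back-of-the-envelope math the table compresses into one lookup.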

If you read the formal definition of big-O, f(x) <= c*g(x) for all sufficiently large x means f(x) is in O(g(x)). The left side f(x) is your running-time cost function, which you derive from the source code using a RAM-based model, a pointer-machine model, or a language-based model. The right side g(x) is a math function you choose from existing math functions that best represents the growth of f(x) once you ignore constants.
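Here’s a small numeric check of that definition in Python. The cost function f, the chosen g, and the witnesses c and n0 are all made up for illustration:

```python
# Hypothetical cost function derived from some code: f(n) = 3n^2 + 5n + 7.
def f(n):
    return 3 * n**2 + 5 * n + 7

# Our chosen comparison function g(n) = n^2 (growth of f, constants ignored).
def g(n):
    return n**2

# Witnesses: f(n) <= 4 * g(n) holds for every n >= 7,
# so f is in O(g), i.e. O(n^2).
c, n0 = 4, 7
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
```

The definition only asks that *some* constants c and n0 exist; you don’t need the smallest ones.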

So that was f(x) <= c*g(x), but the n in O(n) in a way represents a lambda or anonymous function: given an input of size n, it produces a complexity of that size. For example, if you have quadratic complexity n^2, then O(n) is really O([n → n^2]), meaning an input of size n gives quadratic-sized complexity, or O(n^2).
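In Python terms, that mental model might look like this (the names here are purely illustrative):

```python
# Treat "O(n^2)" as a function from input size to amount of work:
# the [n -> n^2] mapping described above.
quadratic = lambda n: n ** 2

# An input of size 10 maps to about 100 "units" of work.
print(quadratic(10))   # 100
```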

The easiest way I found to remember this: everything inside a loop gets multiplied by the number of times the loop runs. If the body is all constant-time ops, the loop is n complexity. If another loop over the entire input is nested inside, it’s n * n. There’s also a technique I’m sure the guide covers called amortized analysis: it’s not always the case that a major operation happens on every iteration, so you can add up all the operations and spread their cost across the iterations. For example, in a data structure that gathers data until it’s full and then needs an expensive operation to make a new node, you aren’t making new nodes every time, so you can claim the ‘amortized cost’ is mostly constant work. Anyway, if queues are in the guide, it will come up.
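The classic sketch of that amortized argument is a dynamic array that doubles when full. This toy Python version (a hypothetical structure, not any particular library) counts the expensive copy work so you can see it stays under 2n total:

```python
class GrowArray:
    """Toy dynamic array that doubles its capacity when full."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.copies = 0    # tally of the "expensive" work: element copies

    def append(self, _item):
        if self.size == self.capacity:
            # Expensive step: copy everything into a bigger block.
            self.copies += self.size
            self.capacity *= 2
        self.size += 1

a = GrowArray()
n = 1_000_000
for i in range(n):
    a.append(i)

# Total expensive work over n appends is less than 2n,
# so the amortized cost per append is O(1).
print(a.copies, a.copies < 2 * n)
```

Even though an individual append can cost O(n), spreading the copies over all n appends gives constant work per operation on average.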

If a function is recursive, then the entire body of that function is the loop body, so you multiply the complexity of the body by the number of recursive calls, which for a simple linear recursion is the size of the input, n.
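As a sketch of that, here’s a deliberately quadratic recursive function in Python: the body does up to O(n) work per call, and the recursion runs it once per element, so the total is O(n^2). The function itself is made up for illustration:

```python
def work(arr):
    """Linear recursion: one recursive call per element of the input."""
    if not arr:
        return 0
    # The body scans the whole remaining input: O(len(arr)) work here...
    body = sum(1 for _ in arr)
    # ...and the recursion "loops" once per element, so the total over
    # all calls is n + (n-1) + ... + 1 = n(n+1)/2, i.e. O(n^2).
    return body + work(arr[1:])

n = 100
total = work(list(range(n)))
print(total)   # 100 * 101 / 2 = 5050
```

Counting total work across all calls, rather than per call, is the same trick as the loop multiplication above.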