# CF Garland - TLE on a test case

I attempted the CF problem Garland. My submission has the correct two-pointer logic, but I receive TLE on test case 25. The time complexity is O(nq), which should pass under the 2-second limit.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, q;
    string s;
    cin >> n >> s >> q;
    while (q--) {
        int m; char c;
        cin >> m >> c;
        int l = 0, r = 0, ans = 0;
        for (; l < n; l++) {
            // extend r while the window [l, r) still fits within m changes
            while (r < n && m >= (s[r] != c)) {
                m -= (s[r] != c);
                r++;
            }
            ans = max(ans, r - l);
            if (s[l] != c) m++;  // leaving position l refunds one change
        }
        cout << ans << '\n';
    }
}
```


For each l, my code finds the largest interval [l, r) with at most m positions changed to c. If I change the outer for loop to `for (; l < n && r < n; l++)`, I get accepted.

In the original code we still do at most N iterations on each of l and r. My code does not seem to have a large constant factor, and O(NQ) \le 3 \cdot 10^6 operations is way under 2000 ms. Can you please explain why I get TLE? I'd like a substantive answer, please, not just that it happens to fail for some reason.

Where did you get NQ\le 3\cdot 10^6?

The problem statement mentions n \le 1500, q \le 200000.

Might want to redo that calculation…

NQ = 1500 * 200000 = 3*10^8

I've made this mistake before too; make sure to double-check your time complexity before you actually start implementing. (Although you're very close to the answer.)

Oh wow, thanks. 1500 * 2e5 = 3e3 * 1e5 = 3e8. Then why would my O(NQ) solution succeed even when we break out of the loop once \max(l, r) = n? Code: `for (; l < n && r < n; l++)`.

I don't really understand what you mean. Did you solve this problem without TLE, or not? Even breaking when \max(l, r) = n, it should still TLE, because the time complexity is still O(NQ).

Can you send the codeforces link to your accepted solution?

You can copy this code and submit it to Codeforces to verify that it gets AC.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, q;
    string s;
    cin >> n >> s >> q;
    while (q--) {
        int m; char c;
        cin >> m >> c;
        int l = 0, r = 0, ans = 0;
        // same loop as before, but also stop once r hits n
        for (; l < n && r < n; l++) {
            while (r < n && m >= (s[r] != c)) {
                m -= (s[r] != c);
                r++;
            }
            ans = max(ans, r - l);
            if (s[l] != c) m++;
        }
        cout << ans << '\n';
    }
}
```


I am not sure whether 300 million operations fit within 2000 ms, so that's why I am asking.

Does this succeed because we do at most N pointer increments per query (terminating when one of l, r reaches n) instead of up to 2N (terminating only when both pointers reach n)? Does this constant factor give my solution the edge?

Note that this sliding window idea is also mentioned as working in USACO Guide: https://usaco.guide/problems/cf-garland/solution.

Okay, I kinda get it. The code passes in 1.79 seconds, meaning that this approach definitely would not work in Python or Java. Estimates vary by source, but people generally say C++ does around 1 \cdot 10^8 to 2 \cdot 10^8 simple operations per second; I usually assume 1 \cdot 10^8 just to be safe, but it also depends on the speed of the grader. Like you said, this solution works because each query does at most N pointer moves (correct me if I'm wrong, somebody) instead of up to 2N. So at a speed like 2 \cdot 10^8 operations per second, the solution passes with barely enough time, whereas at 1 \cdot 10^8 it would TLE.

There was also another solution that passed in faster time using two pointers. Since the string stays constant, you can just store the answers to queries in an array and print the cached answer if the same query has already been seen. Because a query is a letter plus a number up to N, there are only 26 * 1500 possible queries. You can see my code here if you want (ignore the dp part lmao, this is not at all dp):

Try adding `cin.tie(0)->sync_with_stdio(0);` at the beginning of the main function.

That makes sense. We can use memoization to limit the total number of queries actually processed. I did not realize that with m \le n so small and c ranging over 'a' to 'z', the number of distinct queries would be this small. Thanks for your code; it makes the overall time complexity O(26 \cdot n^2 + q).

There is also a "two pointers with prefix sums" solution mentioned on USACO Guide, which seems to run faster. I'll read it and ask further in this thread if I still have questions.