# Constant Factor Problem?

So I’m doing CSES - Book Shop, and my code is as follows:

```cpp
#include <iostream>
#include <vector>
#include <climits>
#include <algorithm>

// #include "debugging.h"

using std::cout;
using std::endl;
using std::vector;

int main() {
    int book_num;
    int budget;
    std::cin >> book_num >> budget;
    vector<int> prices(book_num);
    for (int b = 0; b < book_num; b++) {
        std::cin >> prices[b];
    }
    vector<int> page_nums(book_num);
    for (int b = 0; b < book_num; b++) {
        std::cin >> page_nums[b];
    }

    // max_worth[i][j] = max pages given that we have i bucks, only considering the first j books
    vector<vector<int>> max_worth(budget + 1, vector<int>(book_num + 1));
    for (int b = 1; b <= book_num; b++) {
        for (int m = 1; m <= budget; m++) {
            max_worth[m][b] = max_worth[m][b - 1];
            if (m - prices[b - 1] >= 0) {
                max_worth[m][b] = std::max(max_worth[m][b],
                                           max_worth[m - prices[b - 1]][b - 1] + page_nums[b - 1]);
            }
        }
    }
    cout << max_worth[budget][book_num] << endl;
}
```


The thing is, for some reason this TLEs on a large number of test cases, even though it meets the target complexity. The weirdest thing is that it's pretty much identical to the code in the CSES DP editorial here.

Can someone help me figure out why my code is so slow?

Yeah, but my friend's code takes pretty much the exact same approach, and his gets AC'd.

Try adding

```cpp
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
```

in your main function and check again. I faced exactly the same problem, and it got solved by reducing the memory to O(N + X) as suggested by @Benq. See this thread

Yeah, even with that fast IO trick it still TLEs.

Good for you that it worked out, but my friend's code also uses O(N·X) memory, and his seems to work just fine.

try swapping the dimensions of the dp array (`max_worth[book_num][budget]`)

Thanks, I’ll try that.

On a side note, why would that speed up the code? Just wondering.

EDIT: Yeah, it worked. Turns out allocating a large number of small vectors is much slower than allocating a small number of large ones, and with the swapped layout the inner loop also walks through contiguous memory instead of jumping between rows on every access.