Micro-optimization needed?

So I’m doing this problem, and my code is here. The thing is, it runs about 20-30 milliseconds over the time limit on nearly all of the test cases.

My algorithm is as follows:

  1. Calculate the initial hash for each of the given strings.
  2. Iterate through all pairs of indices and, for each string, subtract those two positions’ contributions from its hash. Then compare the new hashes and see if any two of them are the same (roughly as in the sketch below).
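
For concreteness, step 2 looks roughly like this (a stripped-down sketch, not my actual submission; the polynomial hash, the names, and the YES/NO output are placeholders, assuming the goal is to find two strings that agree everywhere outside the two chosen positions):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    const long long MOD = 1000000007LL, P = 131;
    int n, k;                       // n strings, each of length k
    cin >> n >> k;
    vector<string> s(n);
    for (auto &x : s) cin >> x;

    vector<long long> pw(k, 1);     // pw[i] = P^i mod MOD
    for (int i = 1; i < k; i++) pw[i] = pw[i - 1] * P % MOD;

    vector<long long> h(n, 0);      // step 1: initial hash of each string
    for (int t = 0; t < n; t++)
        for (int i = 0; i < k; i++)
            h[t] = (h[t] + (s[t][i] - 'a' + 1) * pw[i]) % MOD;

    // step 2: for every pair of positions, drop both positions' contributions
    // from every hash and look for two strings that now hash the same
    for (int i = 0; i < k; i++)
        for (int j = i + 1; j < k; j++) {
            unordered_set<long long> seen;
            for (int t = 0; t < n; t++) {
                long long ti = (s[t][i] - 'a' + 1) * pw[i] % MOD;
                long long tj = (s[t][j] - 'a' + 1) * pw[j] % MOD;
                long long cur = ((h[t] - ti - tj) % MOD + MOD) % MOD;
                if (!seen.insert(cur).second) { cout << "YES\n"; return 0; }
            }
        }
    cout << "NO\n";
    return 0;
}
```

So it’s on the order of NK^2/2 set insertions in total.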

Is my algorithm correct and just my code’s constant factor too large, or is the algorithm itself wrong?

You can precalculate the hash of each string in \mathcal O(K) and then reuse it for each ind1 and ind2 by changing the ind1-th and ind2-th characters of the string in \mathcal O(1).
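
Something like this is what I mean by the \mathcal O(1) part (just a sketch; the hash scheme and the helper name are placeholders):

```cpp
#include <string>
#include <vector>
using namespace std;

// Hash of s after replacing position i with character c, given the current
// hash h and pw[i] = P^i mod MOD: add the difference of the two characters
// weighted by pw[i]. One multiplication, so O(1) per (index, character) pair.
long long replaceChar(long long h, const string &s, int i, char c,
                      const vector<long long> &pw, long long MOD) {
    long long diff = (long long)(c - s[i]) * pw[i] % MOD;
    return ((h + diff) % MOD + MOD) % MOD;
}
```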

That’s…exactly what I’m doing?

Oh wait, it seems I posted the wrong code. This code was the one I actually used.

Hmm, I’m not sure if your algorithm is the intended one, but I have a faster solution, which runs in \mathcal O(NK \log NK):

For each string, we calculate the hash we would get if we replaced 0 or 1 of its characters with something from [a, z]. Add 1 to that hash’s count in a map or some other data structure indexed by the hash.

If any count in the map is > 1, then the answer is YES.

For example, if two strings differ in two characters (let’s say they’re aab and baa), aab will generate bab by replacing its first character with b, and baa will generate bab by replacing its third character with b.
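
A rough sketch of the whole idea, assuming the task is to decide whether some two of the strings differ in at most two positions (polynomial hashing again; all the names and the exact I/O are placeholders, and I skip replacing a character with itself so a single string can’t collide with itself):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    const long long MOD = 1000000007LL, P = 131;
    int n, k;                       // n strings, each of length k
    cin >> n >> k;

    vector<long long> pw(k, 1);     // pw[i] = P^i mod MOD
    for (int i = 1; i < k; i++) pw[i] = pw[i - 1] * P % MOD;

    map<long long, int> cnt;        // hash -> how many (string, replacement) pairs produced it
    bool found = false;
    auto add = [&](long long h) { if (++cnt[h] > 1) found = true; };

    for (int t = 0; t < n && !found; t++) {
        string s;
        cin >> s;
        long long h = 0;
        for (int i = 0; i < k; i++)
            h = (h + (s[i] - 'a' + 1) * pw[i]) % MOD;

        add(h);                                   // replace 0 characters
        for (int i = 0; i < k; i++)               // replace exactly 1 character
            for (char c = 'a'; c <= 'z'; c++) {
                if (c == s[i]) continue;          // skip no-op replacements so one string
                                                  // never double-counts its own hash
                long long diff = (long long)(c - s[i]) * pw[i] % MOD;
                add(((h + diff) % MOD + MOD) % MOD);
            }
    }
    cout << (found ? "YES" : "NO") << "\n";
    return 0;
}
```

That’s about 25 \cdot N \cdot K map insertions in total, with the \log coming from the map.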

Yeah, that does seem like a better idea.

EDIT: Turns out even that TLE’d on basically all of the test cases.