After several minutes I checked the state, and there were about a million messages in it. My question is: why aren't the entries deleted from the state? Both streams have watermarks, and the join expression includes a condition on the timestamps.
As a result, after the job has been running for a few hours we hit an OutOfMemoryError.
Here is the pipeline (and some info about the state):
There is also a condition for cleaning up the state, but it looks strange to me, for example `timestamp - T15000ms <= -1000`:
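If I read that predicate correctly, a state row becomes evictable once its timestamp, shifted back by the 15 000 ms watermark delay, falls at or below some reference point offset by -1000 ms. Here is a minimal pure-Python sketch of that arithmetic; the assumption that the comparison is made against the current watermark (and the function name `is_evictable`) is mine, not something confirmed by Spark's internals:

```python
# Hypothetical sketch of the eviction check implied by the predicate
# `timestamp - T15000ms <= -1000` (all values in milliseconds).
# The 15000 ms delay and the -1000 ms offset come from the predicate above;
# comparing against the current watermark is my assumption.

WATERMARK_DELAY_MS = 15_000   # watermark delay taken from the predicate
OFFSET_MS = -1_000            # right-hand side of the predicate

def is_evictable(event_ts_ms: int, watermark_ms: int) -> bool:
    """A row is droppable once its timestamp, shifted by the watermark
    delay, is at or below the watermark plus the offset."""
    return event_ts_ms - WATERMARK_DELAY_MS <= watermark_ms + OFFSET_MS

# With the watermark at 100_000 ms:
print(is_evictable(110_000, 100_000))  # True:  110000 - 15000 = 95000 <= 99000
print(is_evictable(120_000, 100_000))  # False: 120000 - 15000 = 105000 > 99000
```

Under this reading, the `-1000` just tightens the cutoff by one second; it would not by itself explain why rows a million deep are never evicted.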