Alert notifications sometimes missing after a Kubernetes rolling deployment

Hi, we have a Kubernetes deployment of Grafana that runs a single instance (replica) of the server in a Docker container. We have set up alert rules and configured notification policies and a (Kafka) contact point that generated alerts are sent to. We have a fairly high alert volume on this setup, and it has been working well for the most part.
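For reference, the relevant part of our deployment looks roughly like this (names, namespace, and image tag are placeholders, not our exact manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
spec:
  replicas: 1                      # single instance, as described above
  strategy:
    type: RollingUpdate            # upgrades are done via rolling deployment
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:10.4.2   # placeholder tag; bumped on upgrade
          ports:
            - containerPort: 3000
```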
One issue we have noticed is that when we upgrade the Grafana version (Docker image) using a Kubernetes rolling deployment, occasionally (and especially when the alert volume is high) some alert notifications generated around the time of the deployment are never actually delivered to the contact point. Is this expected behaviour? How does Grafana handle pending alert notifications when the server is terminated gracefully (as during a rolling deployment)?
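In case it matters: we have not tuned the pod's termination settings at all. Would giving the old pod more time to shut down, along the lines of the sketch below, let Grafana flush pending notifications before exiting? (This is just an idea we're considering, not our current config; the values are guesses.)

```yaml
# Pod template spec fragment (hypothetical), to delay/extend shutdown so the
# old Grafana instance has time to send notifications already in flight.
spec:
  terminationGracePeriodSeconds: 120   # default is 30s; extend before SIGKILL
  containers:
    - name: grafana
      image: grafana/grafana:10.4.2    # placeholder tag
      lifecycle:
        preStop:
          exec:
            # Delay SIGTERM delivery slightly so in-flight work can drain.
            command: ["sh", "-c", "sleep 10"]
```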