Google is happy to explain: it just posted an in-depth exploration of how this stabilization works. As you might guess, Google uses some of its machine learning know-how to combine both anti-shake technologies, optical and electronic image stabilization, where many phones can use only one or the other.
The system starts by collecting motion data from both OIS and the phone's gyroscope, making sure it's in "perfect" sync with the image. But it's what happens next that matters most: Google uses a "lookahead" filtering algorithm that pushes image frames into a deferred queue and uses machine learning to predict where you're likely to move the phone next. This corrects for a wider range of movement than OIS alone can handle, and counteracts common video quirks like wobble, rolling shutter (the distortion effect where parts of the frame appear to lag behind) and focus hunting. The algorithmic method even introduces virtual motion to mask wild variations in sharpness when you move the phone quickly.
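To make the lookahead idea concrete, here's a minimal Python sketch. It's an illustration under loose assumptions, not Google's implementation: a single yaw angle stands in for the fused OIS-and-gyro motion data, a fixed-depth buffer (the hypothetical `LOOKAHEAD` constant) plays the role of the deferred frame queue, and a simple Gaussian-weighted average of past and future samples substitutes for the machine-learned motion prediction.

```python
from collections import deque
from dataclasses import dataclass
import math
import random

@dataclass
class Frame:
    index: int
    yaw: float  # measured camera angle (radians), fused from OIS + gyro data

LOOKAHEAD = 7  # hypothetical queue depth: how many "future" frames to wait for

def gaussian_weights(n: int, sigma: float = 2.0) -> list[float]:
    """Symmetric smoothing weights centered on the frame being stabilized."""
    center = n // 2
    raw = [math.exp(-((i - center) ** 2) / (2 * sigma ** 2)) for i in range(n)]
    total = sum(raw)
    return [w / total for w in raw]

def stabilize(frames):
    """Yield (frame, correction) pairs, each delayed by LOOKAHEAD frames."""
    window = deque(maxlen=2 * LOOKAHEAD + 1)
    weights = gaussian_weights(2 * LOOKAHEAD + 1)
    for frame in frames:
        window.append(frame)
        if len(window) == window.maxlen:
            # Smooth the camera path using past AND future samples; the gap
            # between the measured and smoothed angle is the warp to apply.
            target = window[LOOKAHEAD]
            smoothed = sum(w * f.yaw for w, f in zip(weights, window))
            yield target, smoothed - target.yaw

# A shaky handheld pan: steady drift plus random jitter. The corrections
# cancel most of the jitter while preserving the intentional pan.
shaky = [Frame(i, 0.01 * i + random.gauss(0, 0.005)) for i in range(30)]
for frame, correction in stabilize(shaky):
    print(f"frame {frame.index}: warp by {correction:+.4f} rad")
```

The design tradeoff is latency: every frame comes out `LOOKAHEAD` frames late, which is harmless in recorded footage and is precisely what lets the filter see "future" motion instead of merely guessing at it.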