I thought it was 704x576 but you say 702x576, noggin.
704x576 is the nearest "MPEG multiple" to 702x576 - and is used by some broadcasters for transmission.
However it all comes back to analogue compatibility.
The active line in analogue 50Hz SD (PAL composite or analogue component YPrPb) is 52us in duration. That 52us is the duration of the 4:3 or 16:9 picture in an analogue signal.
With a luminance sampling rate of 13.5MHz that means 52us lasts for exactly 702 samples.
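The arithmetic is easy to check for yourself, using nothing beyond the figures above (13.5 MHz luma sampling, 52 us active line):

```python
# Active line duration multiplied by the luminance sampling rate
# gives the number of samples per active line.
sampling_rate_mhz = 13.5   # ITU-R BT.601 luma sampling rate, in MHz
active_line_us = 52.0      # analogue 50Hz SD active line, in microseconds

# MHz x microseconds cancel out, leaving a plain sample count.
samples = sampling_rate_mhz * active_line_us
print(samples)  # 702.0
```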
However, for a number of reasons there is latitude in the digital ITU-R BT.601 spec (formerly CCIR Rec 601) that makes the digital active line a bit wider than 702 samples. This ensured that slightly mistimed signals didn't get cropped, and also that edge transitions didn't get clipped - which could cause ringing. (Similar logic is behind the 16-235, rather than 0-255, dynamic range used in 8-bit digital video sampling.) This means the 720-sample line is 'longer' than the analogue 4:3 or 16:9 lines - and thus the 720x576 frame is slightly wider than 4:3 or 16:9.
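As an aside, the 16-235 headroom point can be sketched like this - the scaling formula is the standard full-range to studio-range mapping, just here as an illustration:

```python
def full_to_studio(value: int) -> int:
    """Map a full-range 8-bit luma value (0-255) to studio range (16-235)."""
    return 16 + round(value * 219 / 255)

print(full_to_studio(0))    # 16  - black sits above code 0
print(full_to_studio(255))  # 235 - white sits below code 255
```

Codes below 16 and above 235 stay free to carry undershoots and overshoots (ringing) without being hard-clipped - the same "leave a margin" thinking that gives the 720-sample line its extra width.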
However, 702 is not a nice multiple of 8 or 16, which are the usual macroblock dimensions used in digital compression schemes like MPEG2 and H264. So either the entire 720x576 image is compressed to MPEG2/H264 (which would be the expectation for studio compression, as you don't want to lose the edges within a studio set-up), OR the nearest MPEG-friendly multiple of 704x576 is used (with only 2 samples - 1 each side - of latitude).
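Rounding 702 up to the nearest macroblock multiple can be sketched like this (a minimal illustration, not any particular encoder's code):

```python
def next_multiple(width: int, block: int = 16) -> int:
    """Round a width up to the next multiple of the macroblock size."""
    return -(-width // block) * block  # ceiling division

print(next_multiple(702, 16))  # 704 - the nearest MPEG-friendly width
print(next_multiple(702, 8))   # 704
print(720 % 16)                # 0   - 720 is already a clean multiple
```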
The theory is that the extra 18 or 2 samples in the 720x576 or 704x576 frame, compared to the 702x576 frame, should never be seen on a display (which should only show the 4:3 or 16:9 702x576 central portion if operating correctly), so there is no point wasting bandwidth sending them...
(Within a studio you DO have the issue of what happens when you shrink a picture - should you see the full width of the 720x576 frame, which is guaranteed to have black edges if it comes from a standard analogue source or from a digital source only filling the 4:3/16:9 image area?)