There's an important point about averaging climate data that people tend to overlook: increasing the number of recording stations and defining the average temperature as the average over a large number of stations (essentially taking the limit as the number of stations grows arbitrarily large while the distance between them shrinks) may not give the right answer anyway.
To see why, think about atmospheric pressure. It's defined as force divided by area, F/A, where the force comes from the aggregate collisions of nitrogen and oxygen molecules in the air. Now suppose we want to measure the pressure at a given point: we can make the measuring device smaller and smaller and measure the pressure over smaller and smaller areas, but we run into a problem. Once we get to scales smaller than the distance between molecules, the pressure drops to zero almost everywhere, except at the finite set of points where it becomes extremely large due to the presence of a molecule.
So if we tried to average the pressure data by making extremely precise measurements everywhere, what we would get is the meaningless result that the average pressure is ZERO EVERYWHERE. And temperature averaging faces the same problem, since temperature is a measure of the average kinetic energy of the particles.
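The shrinking-device problem can be sketched numerically. Here is a minimal 1-D toy in Python: a thousand "molecules" scattered on the unit interval, with a measuring window that shrinks below the mean molecular spacing. Everything here (the `measured_pressure` helper, the molecule count, the window sizes) is an illustrative assumption, not real atmospheric data.

```python
import random

random.seed(0)

# Toy setup: N "molecules" scattered uniformly on [0, 1),
# so the mean spacing between them is 1/N = 0.001.
N = 1000
molecules = [random.random() for _ in range(N)]

def measured_pressure(x, w):
    """'Pressure' read by a device of width w centred at x:
    molecules inside the window divided by the window size."""
    lo, hi = x - w / 2, x + w / 2
    return sum(lo <= m < hi for m in molecules) / w

# Shrink the device and record how often it reads exactly zero.
zero_fracs = {}
for w in (0.1, 0.01, 1e-4, 1e-6):
    samples = [measured_pressure(random.random(), w) for _ in range(5000)]
    zero_fracs[w] = sum(s == 0 for s in samples) / len(samples)
    print(f"window {w:g}: zero readings in {zero_fracs[w]:.1%} of samples")
```

Once the window is much smaller than the mean spacing, almost every reading is exactly zero and the rare nonzero readings are enormous spikes, which is the degenerate pointwise limit the paragraph above describes.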
What this means is that the concept of an "average global temperature" is not mathematically well defined, because increasing the precision of your measurements leads to meaningless results. You can think of "average global temperature" as a four-sided triangle: it doesn't make any sense, because it's inherently self-contradictory.