The use of the light-year and parsec to measure stellar distances gives a sense of the vast distances to the stars. To ancient cultures, however, the stars seemed relatively close to Earth. The Egyptians imagined them as points of light on a tented canopy, held up by mountain ranges at the corners of the kingdom. The Greeks thought of them as fiery embers carried overhead on crystalline spheres. Ancient cultures could not conceive of distances much larger than the size of the Earth. Even to a modern-day observer, the stars in the night sky all look to be at the same distance – the depth of space is not obvious.

Early distance estimates were no more than educated guesses. In the late 17th century, the Dutch scientist Christiaan Huygens made an image of the Sun through a pinhole in a darkened room. He varied the size of the pinhole until the image seemed equal in brightness to Sirius, the brightest star in the night sky. Since the pinhole's diameter was only about 1/27,000 that of the Sun's disk, Huygens concluded that Sirius, if it were a twin of the Sun, must be 27,000 times farther away than the Sun (it is actually 543,861 times farther away and intrinsically far brighter than the Sun). Around the same time, Isaac Newton used Saturn as a sort of reflecting mirror, comparing the sunlight it reflects with the light of the bright stars. He guessed the fraction of the Sun's light that Saturn reflects and assumed that the bright stars have about the same intrinsic brightness as the Sun. Newton concluded that the bright stars are about 18,000 times farther away than the Sun (a large underestimate).

Both men were relying on the same crude method for estimating the distances of stars: the inverse square law for the propagation of light. The brightest stars appear about 10 billion (or 10^10) times fainter than the Sun. Unfortunately, such an enormous brightness ratio is impossible to measure accurately without modern equipment. Crude estimates of this kind continued into nearly modern times. In 1829, the English scientist William Wollaston used the inverse square law to estimate that the brightest stars must be at least 100,000 (or 10^5) times more distant than the Sun, since dimming by a factor of 10 billion corresponds to an increase in distance by a factor of 100,000 (which turns out to be roughly correct for some stars).
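To make the inverse square reasoning concrete, here is a minimal sketch in Python; the function name is invented for illustration, and the 10^10 dimming factor is simply the figure quoted above for the brightest stars.

```python
import math

def distance_ratio_from_dimming(dimming_factor):
    """Inverse square law: flux falls off as 1 / distance**2, so a Sun-like
    star that appears dimming_factor times fainter than the Sun must be
    sqrt(dimming_factor) times farther away."""
    return math.sqrt(dimming_factor)

# A bright star that appears 10 billion (10**10) times fainter than the Sun:
print(distance_ratio_from_dimming(1e10))  # 100000.0, i.e. about 10^5 times the Earth-Sun distance
```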

Simple distance estimates based on the relative brightness of the Sun and the stars are flawed for one simple reason: stars come in many luminosities. The brightest stars in the sky are much more luminous than the Sun, so they are much more distant than we would calculate by assuming that they resemble the Sun. Also, these estimates only give typical distances rather than accurate distances for individual stars. Even so, this is an example of how a simple physical idea — the way that light dims as it spreads through space — can be used to deduce remarkable information about the distance to the stars. At the time, this reasoning implied a universe thousands of times larger than previous estimates!

The long delay in switching from a geocentric to a heliocentric model of the solar system was tied in part to the lack of any observable motion of the stars, a measurable stellar parallax, as the Earth orbited the Sun. The absence of such a shift meant that the stars must be fantastically distant. The first successful measurement of stellar parallax did not come until 1838, when the German astronomer Friedrich Bessel detected the slight seasonal shift of the star 61 Cygni, a shift of only two-thirds of a second of arc. The measurement of a parallax distance is a direct trigonometric technique that is independent of any assumption about the nature of the star being observed. Recall that parallax is the angular shift in the position of an object caused by a shift in the observer’s position. The calculation of a parallax distance is another application of the small angle equation: the distance is inversely proportional to the parallax angle. A star with a parallax angle of 1 arc second is at a distance of 1 parsec.
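A minimal sketch of this inverse relation in Python; it assumes the two-thirds-arcsecond figure for 61 Cygni refers to the full seasonal swing, so the parallax angle measured over a 1 A.U. baseline is half of that.

```python
def distance_in_parsecs(parallax_arcsec):
    """Parallax distance: d (parsecs) = 1 / p (arcseconds),
    where p is the parallax angle over a 1 A.U. baseline."""
    return 1.0 / parallax_arcsec

print(distance_in_parsecs(1.0))      # 1.0 pc, by definition of the parsec
# 61 Cygni: a full seasonal swing of ~2/3 arcsecond gives a parallax angle of ~1/3 arcsecond
print(distance_in_parsecs(2/3 / 2))  # ~3 pc (the modern value is about 3.5 pc)
```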

To understand how parallax measurements are made, hold your finger in front of your face and look past it toward a bookcase on the other side of the room. Your finger represents a nearby star, and the books on the far wall represent distant stars. Your right eye represents the view from one side of the Sun. Your left eye represents the view six months later, after the Earth has traveled to a point on the opposite side of the Sun (a shift in the Earth’s position of 2 A.U. as seen from the star). First wink one eye and then the other. Your finger (the nearby star) seems to shift back and forth. Hold your finger only a few centimeters from your eyes and the shift is large; hold your finger at arm’s length and the shift is smaller. Likewise, the farther away the nearby star, the smaller the parallax angle. The parallax in this experiment can be measured in degrees, but the parallaxes of actual stars are all ten thousand times smaller, less than a second of arc! Parallaxes as small as 1/100 second of arc can be reliably measured, and the distance of such a star is 100 parsecs.

The distance of a star in parsecs is simply the inverse of its parallax angle in seconds of arc. This explains the origin of the term parsec: it is the distance corresponding to a parallax of one second of arc. If a star is too distant, its parallax is too small to be measured. (Try winking your eyes to see the shift of a distant object against an even more distant horizon; at some distance, you will not notice a shift anymore.) Parallaxes smaller than about 1/100 second of arc are difficult to measure accurately from the ground, due to the blurring caused by the Earth’s atmosphere. Therefore, stars farther away than about 100 parsecs are beyond the distance limit for reliable ground-based parallaxes. Scientists at the European Space Agency attempted to overcome this limitation when they launched the Hipparcos satellite in 1989. Its mission was to measure parallaxes in the sharp viewing conditions of space. The unfortunate failure of a rocket motor placed the satellite in a highly elliptical orbit, which caused its detectors to degrade as they passed repeatedly through the radiation belts around the Earth. Despite this handicap, Hipparcos successfully measured the parallaxes of 120,000 stars. In addition to parallaxes, it accurately measured the motions of those stars, revealing how they were moving, whether they belonged to multiple star systems, and whether they varied in luminosity with time.
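The same inverse relation sets the distance horizon for parallax work. A brief sketch follows; the 1/1000-arcsecond figure is only an illustrative precision for a space-based mission, not a value quoted in the text above.

```python
def max_reliable_distance_pc(smallest_parallax_arcsec):
    """The smallest parallax that can be measured reliably sets the largest
    distance that can be determined: d_max (pc) = 1 / p_min (arcsec)."""
    return 1.0 / smallest_parallax_arcsec

print(max_reliable_distance_pc(0.01))   # ~100 pc: the ground-based limit quoted above
print(max_reliable_distance_pc(0.001))  # ~1000 pc, if a space mission could reach 1/1000 arcsecond
```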

Knowing accurate distances is the most important requirement for measuring most other properties of stars. This means that the estimated 20,000 to 25,000 stars that lie within 100 parsecs are our main statistical sample for measuring stellar properties. Note that even this large sample is not big enough to include certain rare stellar types, so their distances cannot be measured this way. Other techniques have been devised for estimating distances to more remote stars, but they all depend ultimately on the accuracy of the parallax measurements of nearby stars.


Author: Chris Impey
Editor/Contributor: Audra Baleisis
Editor/Contributor: Pamela Gay
Multimedia Aggregator: Erik Brogt