All About Time

Author: Clock Shop   Date Posted: 2 June 2023


There is no doubt that measuring time is crucial to our daily lives. We rely on it to organise everything that happens in a day, such as knowing when to go to work, when to cook dinner, or what time to pick the kids up from school, and that is just our personal lives. In today's modern world, time measurement is crucially important on a global scale: it underpins instant stock trades, global positioning systems and cutting-edge scientific experiments. Measuring time brings some form of order to chaos. But why do we measure time in such a way that 1 day equals 24 hours, 1 hour equals 60 minutes and 1 minute equals 60 seconds? Why is 1 second defined as the duration of exactly 9,192,631,770 periods of radiation from a caesium-133 atom, and why is the idea of 1 second so complicated in the first place? How did that 10-digit number come to be the standard length of 1 second, and why does it matter? To answer these questions, we first need to understand how time was understood and measured in the past, and the problems with those earlier methods.


How It All Began

Time measurement has always relied heavily on the movements of celestial bodies and events in the sky. Humans began to record these movements as far back as 30,000 years ago, and some astronomical traditions are older still: Indigenous Australians had their own astronomical observations and stories perhaps 60,000 years ago. The system of timekeeping we use today can be traced back to the ancient Egyptians, who first recorded these celestial movements and events and mapped them to calendars that somewhat resemble the divisions we still use.


The Decans

The Egyptians measured time with decans: 36 constellations in the ancient Egyptian astronomical system that divided the 360-degree ecliptic of the night sky into 36 parts of 10 degrees each. These 36 decans rose and set at different times of the year, and on any given night roughly 12 of them could be seen crossing the sky, dividing the night into 12 parts. The most significant event of the Egyptian year was the flooding of the Nile, which heralded the new year; at that time, the last decan to rise before dawn rose in conjunction with the star Sirius. Each decan was the first to rise for 10 days before being replaced by the next, each changeover marking the start of a new "decade" of 10 days. This totalled 360 days in the year, but the Egyptians realised that their count fell about 5 days short of the start of the next new year, the first recorded problem in timekeeping. To compensate, they added 5 additional days.
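The decan arithmetic above can be sketched in a few lines (a minimal illustration; the variable names are mine, not historical terms):

```python
DECANS = 36          # constellations dividing the ecliptic
DEGREES_EACH = 10    # 36 x 10 covers the full 360-degree circle
DAYS_PER_DECAN = 10  # each decan led the pre-dawn sky for 10 days

decan_year = DECANS * DAYS_PER_DECAN      # 360 days
epagomenal_days = 5                       # added to close the gap
civil_year = decan_year + epagomenal_days # 365 days

print(DECANS * DEGREES_EACH, decan_year, civil_year)
```

The 5-day shortfall falls out of the arithmetic directly: 36 ten-day "decades" only cover 360 of the roughly 365 days in a solar year.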

The Divisions of the Year, Days, Hours and Minutes

Records show that the Babylonians adopted the Egyptian calendar around 1,600 BC and, over the next 1,000 years, further developed its timekeeping methods. Babylonian arithmetic revolved heavily around a base-60 (sexagesimal) fraction system, a method still used today to divide up hours, minutes and seconds. It also matched the 360 degrees in a circle and the 360 days in their calendar year, which was divided into 12 months of 30 days to align with their 12-constellation zodiac. The base-60 fraction system remained the main calculation tool until after the 16th century.


Sundials, first used around 1,500 BC, divided up the day, but they suffered from the fact that the length of daylight changes over the course of the year: days are shorter in winter and longer in summer. It was the Greek astronomer Hipparchus who proposed dividing the day into 24 hours of equal length, split evenly between day and night, using the equinox as the marker (the equinox occurs twice a year, when day and night are equal in length). Despite the soundness of this concept, people continued to use seasonally shifting hours until the invention of the mechanical clock approximately 1,300 years later.

Clockmaking As A Solution To Timekeeping

Many clocks were created to assist in the measurement of time, including oil clocks, candle clocks and the more accurate water clocks and hourglasses, yet none of them could reliably show the correct time of day. They were also subject to many variations that could make them run fast or slow, which was a problem in timekeeping because they drifted out of step with celestial events.


Strict religious protocols in Medieval Europe called for more precise timekeeping, as many religious ceremonies needed to take place at particular hours of the day. As a result, the first weight-driven mechanical clocks began to appear in churches across Europe. These mechanical clocks were much more accurate than their predecessors and became the basis for all future clocks until the 20th century.


The idea of the second is thought to have been first conceived around 1000 AD by the Persian polymath and astronomer Al-Biruni, who defined it as a fraction of a minute in his calculations of lunar cycles. In 1644, the French mathematician Marin Mersenne carried out extensive experiments to measure the length of the seconds pendulum, and in doing so observed that a pendulum's larger swings take longer than its smaller ones.


From there, Christiaan Huygens invented the pendulum clock, which incorporated Mersenne's and Galileo's pendulum theories to measure the second accurately. The pendulum clock soon became the most accurate way to measure time and remained so for nearly 300 years.


The Progression of the Calendars

All of these models of time measurement use astronomical objects and their movements in the sky to divide the year into months, days, hours, minutes and seconds, but they share a major flaw. The Earth's rotation is ever so slightly slowing, and the Earth orbits the sun roughly every 365 ¼ days, an excess that showed up as calendar errors as far back as the Egyptians, who are credited with the idea of adding a leap day every four years to correct it. The Romans adopted the leap-day solution and designated February 29 as the official leap day in what is known as the Julian calendar. It took another 1,600 years of observations to realise that the exact solar year is actually 11 minutes and 14 seconds shorter than 365 ¼ days, so even with leap years the calendar would still drift by an entire day every 128 years. Pope Gregory XIII corrected the drift, which by then had pushed the equinox back from March 21 to March 11, and implemented a mathematical rule for leap years: a century year is a leap year only if it is divisible by 400. This accounts for the 11-minute-and-14-second difference, and it would take more than 3,000 years for the Gregorian calendar to gain one extra day in error.
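Both the leap-year rule and the 128-year drift can be checked with a few lines of Python (a sketch for illustration; the function name is mine):

```python
def is_leap(year: int) -> bool:
    """Gregorian leap-year rule: every 4th year is a leap year,
    except century years, which qualify only if divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Julian drift: the calendar year (365.25 days) overshoots the
# tropical year by 11 min 14 s = 674 s, so the error accumulates
# to a full day (86,400 s) after about 86,400 / 674 years.
drift_seconds_per_year = 11 * 60 + 14      # 674 s
years_to_gain_a_day = 86_400 / drift_seconds_per_year

print(is_leap(1900), is_leap(2000), round(years_to_gain_a_day))
```

So 1900 was not a leap year, 2000 was, and the Julian error reaches a whole day after roughly 128 years, exactly as described above.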


Defining The Second

It was well known that this method of measuring time was not perfect, which called for a more precise measurement of the exact length of one second. One of the first attempts to divide the day into seconds came from the Jesuits, who appreciated precision: the Italian astronomer Father Giovanni Battista Riccioli had nine fellow Jesuits count how many oscillations a seconds pendulum made in a day, arriving at nearly 87,000 swings. Clocks could then be calibrated so that each swing came as close to one second as possible, giving even more accuracy to the minute and hour divisions that make up a day. Huygens later drew on this research in developing the equation of time, which mathematically accounts for the sun's apparent speeding up and slowing down over the year, caused by Earth's elliptical orbit and tilted axis. The equation of time allowed astronomers to relate solar observations to sidereal time, which became a very accurate way to set a clock.
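The equation of time can be sketched with a widely used modern approximation (not Huygens' own tables; the coefficients below are a standard engineering fit, accurate to within a minute or so):

```python
import math

def equation_of_time(day_of_year: int) -> float:
    """Approximate difference, in minutes, between apparent (sundial)
    solar time and mean (clock) solar time on a given day of the year."""
    b = 2 * math.pi * (day_of_year - 81) / 365
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

# Mid-February (day 45): sundials run about 14 minutes behind the clock.
# Early November (day 306): sundials run about 16 minutes ahead.
print(equation_of_time(45), equation_of_time(306))
```

A clockmaker of Huygens' era would apply exactly this kind of correction in reverse: read the sundial, then add or subtract the tabulated value to set the pendulum clock to mean time.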


The world was rapidly shifting into an enlightened age of time measurement now that mathematics and methods had been developed to accurately measure and synchronise time. The problem was that even though humankind had developed these accurate methods, there were still margins of error which needed constant attention.

In the mid-19th century, the French physicist Jules Lissajous demonstrated how an electric current could be used to vibrate a tuning fork at a consistent frequency, and in 1880 the French physicist brothers Pierre and Jacques Curie discovered the piezoelectric properties of crystalline quartz. These two crucial discoveries later made possible one of the most accurate clocks to date: the quartz clock.


Still, there was no formal definition of what a second is, so in the 1940s the first official definition was adopted: 1/86,400 of a mean solar day. A decade later, researchers recognised that the Earth's rotation is not consistent enough to provide an accurate standard unit of time, so a second definition was born: the fraction 1/31,556,925.9747 of the tropical year 1900. This definition was in turn short-lived thanks to the invention of the first atomic clock, which provided the most accurate (and most complex) method yet of measuring one second: irradiating caesium-133 atoms with microwaves and counting the periods of the radiation that makes them transition between the two hyperfine levels of their ground state, exactly 9,192,631,770 of them per second. Caesium-133 gives the same result every time, so it could be relied upon as the new standard of timekeeping. In 1967, the third and final definition of one second became "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom."
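The three successive definitions line up numerically, which is easy to verify (constant names are mine, chosen for illustration):

```python
# The three definitions of the second, as numbers.
SECONDS_PER_DAY = 24 * 60 * 60        # 86,400: "1/86,400 of a mean solar day"
TROPICAL_YEAR_1900 = 31_556_925.9747  # seconds: the 1950s "ephemeris second"
CAESIUM_133_PERIODS = 9_192_631_770   # radiation periods: the 1967 SI second

# Cross-check: the 1900 tropical year expressed in mean solar days
# comes out near 365.2422, the figure behind the Gregorian calendar.
print(SECONDS_PER_DAY, TROPICAL_YEAR_1900 / SECONDS_PER_DAY)
```

In other words, each redefinition kept the second the same length to within measurement error; what changed was the reference, from the spinning Earth, to the orbiting Earth, to the atom.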


Now that there is a consistent measurement of one second, time can be mapped out and divided up accordingly. Computer and smartphone clocks synchronise with atomic clocks to ensure accuracy and precision, and most people use the time on their phone or computer to set the digital or analog clocks in their homes. So if you have ever wondered which clock is the most accurate to go by, the answer is most likely the smartphone in your pocket.



So what can we actually take away from all of this? You don't need to understand the exact mathematics, physics, theories and methodologies described here, so much as the fact that humankind identified a problem thousands of years ago and has worked together ever since to solve it. Measuring time is arguably one of the oldest scientific problems, and it continues to pose questions for generations to come. So the next time you look at a clock, let it be a subtle reminder of our journey through time, and of the failures and successes of countless brilliant minds; it might give you a greater appreciation for clockmaking and horology.

