What number is bigger than a googol?
Which is greater? The number of atoms in the universe or the number of chess moves?
We explore AI’s mind-blowing processing ability, from winning at chess to finding new galaxies.
The question came from Claude Shannon, inventor of ‘Information Theory’ in 1948. The theory uses mathematics to understand the rules governing the transmission of messages through communication systems, applicable to everything from computer code, speech and music to the dancing of bees. Using maths and logic to understand the world around him, it wasn’t long before Shannon began to wonder whether a computer could beat a human at games such as chess. In 1950 he wrote a paper asserting this possibility, but it wasn’t until the 1970s that computers began to defeat humans at the game – generally poor players who made silly mistakes. They could not defeat grandmasters. That did not happen until 1996, when Deep Blue won a game against Garry Kasparov (though Kasparov still won that match). The following year the improved Deep Blue beat him 3½–2½.
So why did it take so long? Remember the question at the start?
There are between 10⁷⁸ and 10⁸² atoms in the observable universe. That’s between ten quadrillion vigintillion and ten thousand quadrillion vigintillion atoms. Which is a lot. But, amazingly, there are even more possible variations of chess games than there are atoms in the observable universe.
This is the Shannon Number, and it represents all of the possible move variations in the game of chess. It is estimated there are between 10¹¹¹ and 10¹²³ positions (including illegal moves) in chess. (If you rule out illegal moves that number drops dramatically, to around 10⁴⁰. Which is still a lot!)
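These magnitudes are easy to check directly, because Python integers have arbitrary precision; the figures below are simply the estimates quoted above, written as powers of ten.

```python
# Estimates quoted in the article (orders of magnitude only)
atoms_low, atoms_high = 10**78, 10**82        # atoms in the observable universe
shannon_low, shannon_high = 10**111, 10**123  # chess positions, incl. illegal
legal_positions = 10**40                      # legal chess positions only

# Even the lowest chess estimate dwarfs the highest atom count:
ratio = shannon_low // atoms_high
print(ratio == 10**29)  # True: 10^29 universes' worth of atoms

# while the legal-positions figure sits well below the atom count:
print(legal_positions < atoms_low)  # True
```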
«There are even more possible variations of chess games than there are atoms in the observable universe.»
You might think, ‘well, a computer has conquered the most complicated game in the world, there’s nothing left for them to do?’ and you’d be wrong! There is a game with even more possible moves and variations, and it is called Go. Thought to have originated in China over 4,000 years ago, it did not become popular until it arrived in Japan around the year 500. It is played extensively in East Asia: professionals start learning the game as very small children and spend all their lives perfecting their ability.
Go has more than 10¹⁷⁰ possible board positions, making it a googol times more complicated and varied than chess and dwarfing the number of atoms in the universe!
Do you think a computer or Artificial Intelligence could ever master a game this complicated in your lifetime?
Amazingly, it already has. Enter AlphaGo. In 2015 it played its first match against reigning three-time European Champion Mr. Fan Hui, and beat him 5-0.
In March 2016, the AI then competed against legendary Go player and eighteen-time world title winner Mr. Lee Sedol. It’s said that Sedol is to Go what Federer is to tennis, yet, with 200 million people watching worldwide, AlphaGo beat him 4-1 in a competition in Seoul, South Korea.
All Go players are ranked; an absolute beginner is ranked as Kyu 30. As they improve they move towards the rank of Kyu 1. As they continue to improve they then join the Dan ranks, starting at level 1 and aiming for (but rarely reaching) level 9 Dan. There are currently just over one hundred 9 Dan players in the world. AlphaGo is one of them.
The company that created AlphaGo – DeepMind – released a newer, more powerful version, AlphaGo Zero.
According to DeepMind: “AlphaGo learnt Go by playing thousands of matches with amateur and professional players. AlphaGo Zero learnt by playing against itself, starting from completely random play, and then by playing against the strongest player in the world, AlphaGo.
This powerful technique is no longer constrained by the limits of human knowledge. Instead, the computer program accumulated thousands of years of human knowledge during a period of just a few days. AlphaGo Zero quickly surpassed the performance of all previous versions and also discovered new knowledge, developing unconventional strategies and creative new moves, including those which beat the World Go Champions Lee Sedol and Ke Jie.”
Now you may think:
«So what? A machine can play a game, big deal.»
The big deal is that it has been able to make new discoveries and find novel approaches. That AI can have ‘creative moments’ suggests that AI can be used to enhance human ingenuity rapidly.
When dealing with vast amounts of information, when attempting to understand lots of data (particularly mathematical data), the human mind can become overwhelmed and tires quickly. An AI doesn’t have those problems. AlphaGo Zero learnt thousands of years of human knowledge in just a few days. Applying that ability to other areas will reveal patterns and discoveries that might otherwise stay hidden, or take people far longer to find alone.
How does an Artificial Intelligence learn?
That’s a really good question and it is both simple and very clever. There are three levels of learning for an AI: Artificial Intelligence, machine learning, and deep learning.
Artificial Intelligence: the lowest level of computer ‘intelligence.’ It mimics human learning by making decisions based on options and checking them against stored information: Is it round or curvy? Is it green or yellow? Is it a lime or a banana?
Machine Learning: comes from experience. Are all round green things limes? Can they be apples? Is it bigger than a certain size? Based on what it has ‘learnt’ from choosing between options, it can say what the object is.
Deep Learning: a subset of Machine Learning in which the software can train itself, using layered structures called neural networks, to understand its outputs. This approach uses huge amounts of data, as it requires the machine to check against its database (its experience) for things it ‘knows’ already in order to identify objects.
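A toy sketch of the difference between the first two levels (the fruit rules, sizes and names here are invented for illustration, not taken from any real system):

```python
def rule_based(shape, colour):
    """'Artificial Intelligence' level: fixed rules checked against
    stored options, exactly like the questions above."""
    if shape == "round" and colour == "green":
        return "lime"
    if shape == "long" and colour == "yellow":
        return "banana"
    return "unknown"

def learn_size_threshold(examples):
    """'Machine Learning' level: infer a size rule from labelled
    experience instead of being told the rule in advance."""
    limes = [size for size, label in examples if label == "lime"]
    apples = [size for size, label in examples if label == "apple"]
    return (max(limes) + min(apples)) / 2  # midpoint decision boundary

examples = [(4, "lime"), (5, "lime"), (8, "apple"), (9, "apple")]
threshold = learn_size_threshold(examples)   # 6.5 for these examples
print(rule_based("round", "green"))          # lime
print("lime" if 6 < threshold else "apple")  # a new 6 cm fruit: lime
```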
«That AI can have ‘creative moments’ suggests that AI can be used to enhance human ingenuity rapidly.»
What has this got to do with Astronomy?
Well I’m glad you asked, no really I am.
Even though there are more possible chess games than atoms in the universe, the universe is still very, very big. It is estimated that there are 200–350 billion stars in our galaxy (the Milky Way), and ours is only a medium-sized galaxy. It is believed there may be over a trillion galaxies in the visible universe, and many more that we can’t see.
Think of it this way: next time you go to the seaside, grab a handful of sand, or dig a hole in the sand. How many grains of sand do you think there are in your hand, or in the pile you’ve just dug? Thousands? Millions, maybe? Now look at the whole beach and try to guess how many grains there are.
It is thought that there are more stars in the universe than grains of sand on every beach on Earth. Most of those stars have at least one planet orbiting them, often many more. So there are even more planets than stars.
Astronomers and astrophysicists deal with lots and lots of data, and as technology improves, the amount of data collected increases.
There are many new telescopes and observatories under development and soon to come online. As well as the space-based James Webb Telescope and the Extremely Large Telescope in Chile, there is the Large Synoptic Survey Telescope (LSST), also known as the Vera Rubin telescope, due to begin work in Chile in 2021.
When it begins operation it will take more than 800 panoramic images each night with a 3.2-billion-pixel camera, recording the entire visible sky twice each week.
Each night it will produce 20 TB of data. The images taken by the LSST camera are so large that it would take 378 4K ultra-high-definition TV screens to display one of them at full size!
How big is a terabyte? I hear you ask.
1TB is the same as 681 episodes of The Queen’s Gambit (one can dream!).
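These comparisons are plain back-of-the-envelope arithmetic. The episode size below is simply whatever makes the quoted 681-episodes figure work (real file sizes vary with video quality), and the screen resolution is the standard 4K UHD 3840 × 2160; the article’s figure of 378 evidently assumes slightly different numbers.

```python
TB, GB = 10**12, 10**9  # decimal (SI) units, as storage vendors use

# One Queen's Gambit episode, implied by "681 episodes per terabyte":
episode_bytes = TB // 681
print(round(episode_bytes / GB, 2))  # ~1.47 GB per episode

# LSST's 20 TB per night, measured in episodes:
print(20 * 681)  # 13620 episodes every night

# The 3.2-gigapixel camera versus 4K UHD screens:
print(3.2e9 / (3840 * 2160))  # roughly 386 screens per image
```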
So much data, you say? You’re right! It’s far more than we humans can work on, and that’s just from one telescope. A whole variety of programs (AI, Machine Learning and Deep Learning systems) are being used by astronomers and researchers. Because the flood of data is more than a human can cope with, trainable neural networks are needed to help classify objects and suggest to astronomers those that might be worth looking at more closely.
Astronomers have developed Morpheus: a deep-learning framework that incorporates a variety of artificial intelligence technologies developed for applications such as image and speech recognition. To help astronomers, Morpheus works pixel by pixel through images looking for galaxies! An older result from 2016, working with Hubble data, revealed that there were 10 times more galaxies than previously thought.
«An older result from 2016, working with Hubble data, revealed that there were 10 times more galaxies than previously thought.»
Researchers at Lancaster University have developed a system called Deep-CEE (Deep Learning for Galaxy Cluster Extraction and Evaluation), a novel deep-learning technique to speed up the process of finding galaxy clusters.
Galaxy clusters are rare but massive objects, first catalogued systematically in the 1950s by George Abell, who spent years scanning 2,000 photographic plates with his eye and a magnifying glass and found 2,712 clusters. Galaxy clusters are important as they will help us understand how dark matter and dark energy have shaped our universe.
Deep-CEE builds on Abell’s approach, replacing the astronomer with an AI model trained to «look» at colour images and identify galaxy clusters. It is a state-of-the-art model based on neural networks, which are designed to mimic the way a human brain learns to recognise objects, activating specific neurons when it sees distinctive patterns and colours. The AI was trained by repeatedly showing it examples of known, labelled objects in images until the algorithm learnt to recognise objects on its own.
Deep-CEE will also be used on the Rubin telescope.
Not yet finished (but with Phase 1 already running) is the Square Kilometre Array (SKA): a series of radio telescopes spanning continents that will together form the largest radio telescope ever built. Its headquarters are at Jodrell Bank in Cheshire.
The majority of the telescopes will be in South Africa and Australia, and two supercomputers will be needed to handle all the data. In South Africa there will be 197 radio dishes, and in Australia over 131,000 antennas!
Each year the SKA will amass 600 petabytes of data (1.6 PB, or roughly 630 Netflix videos, per day). To store this data on average 500 GB laptops, you would need more than a million of them every year. 500 GB is itself the equivalent of five hundred lorries full of paper.
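The laptop claim checks out with simple division:

```python
GB, PB = 10**9, 10**15  # decimal (SI) units

yearly = 600 * PB               # SKA data per year
laptops = yearly // (500 * GB)  # number of 500 GB laptops to hold it
print(laptops)                  # 1200000 -- "more than a million" per year

print(round(yearly / 365 / PB, 1))  # ~1.6 PB per day, as quoted
```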
But why does the SKA need such immense computing power?
Scientific image and signal processing for radio astronomy consists of several fundamental steps, all of which must be completed as quickly as possible across thousands of telescopes connected by thousands of miles of fibre optic cable. The computers must be able to make decisions on objects of interest, and remove data which is of no scientific benefit, such as radio interference from things like mobile phones.
What about all the rest?
Then of course there are the telescopes, observatories and satellites that are already working: perhaps one of the most famous is the Hubble Space Telescope.
Hubble transmits about 120 gigabytes of science data every week. That would be roughly 1,097 metres (3,600 feet) of books on a shelf. Hubble has been operational for 30 years and has made over 1.5 million observations.
It’s not just about the data: to get it, you need to schedule observations. This can be incredibly complicated – timing, location of the object, position of the spacecraft, rising and setting times and many other variables have to be considered. To organise observations and timings, Hubble uses SPIKE, which employs a very fast neural-network-inspired scheduling algorithm to achieve performance humans can only dream of.
We may not have smart cars or personal robots, but advances in Artificial Intelligence are already providing profound benefits and discoveries for us all. AI is not our master; it can only learn based on how we program it and what we determine as important. At least for now!
Here is the number «forty-five and six-tenths» written as a decimal number: 45.6
The decimal point goes between Ones and Tenths.
45.6 has 4 Tens, 5 Ones and 6 Tenths.
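The place values can be checked with a line of arithmetic:

```python
# 45.6 rebuilt from its place values
tens, ones, tenths = 4, 5, 6
value = tens * 10 + ones * 1 + tenths / 10
print(value)  # 45.6
```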
Now, let’s discover how it all works.
It is all about Place Value!
When we write numbers, the position (or «place») of each digit is important.
In the number 327:
- the «7» is in the Ones position, meaning 7 ones (which is 7),
- the «2» is in the Tens position meaning 2 tens (which is twenty),
- and the «3» is in the Hundreds position, meaning 3 hundreds.
As we move right, each position is 10 times smaller: from Hundreds, to Tens, to Ones.
But what if we continue past Ones?
What is 10 times smaller than Ones?
1/10ths (Tenths) are!
The number 327.4 could be read as «three hundred twenty seven and four tenths», but we usually just say «three hundred twenty seven point four».
And that is a Decimal Number!
We can continue with smaller and smaller values, from tenths, to hundredths, and so on.
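The same idea in code, rebuilding 327.4 from its places and then continuing one place further right (the hundredths digit 5 below is an invented example digit; exact fractions are used so nothing is lost to rounding):

```python
from fractions import Fraction

# 327.4 rebuilt from its place values:
value = 3 * 100 + 2 * 10 + 7 * 1 + 4 / 10
print(value)  # 327.4

# Continuing to hundredths with exact fractions:
exact = 3 * 100 + 2 * 10 + 7 + Fraction(4, 10) + Fraction(5, 100)
print(float(exact))  # 327.45
```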
Large and Small
So, our Decimal System lets us write numbers as large or as small as we want, using the decimal point. Digits can be placed to the left or right of a decimal point, to show values greater than one or less than one.
The decimal point is the most important part of a Decimal Number. Without it we are lost, and don’t know what each position means.
On the left of the decimal point is a whole number (such as 17). As we move further left, every place gets 10 times bigger.
The first digit on the right means tenths (1/10). As we move further right, every place gets 10 times smaller (one tenth as big).
Definition of Decimal
The word «Decimal» really means «based on 10» (From Latin decima: a tenth part).
We sometimes say «decimal» when we mean anything to do with our numbering system, but a «Decimal Number» usually means there is a Decimal Point.
Ways to think about Decimal Numbers
… as a Whole Number Plus Tenths, Hundredths, etc
We can think of a decimal number as a whole number plus tenths, hundredths, etc:
Example 1: What is 2.3 ?
- On the left side is «2», that is the whole number part.
- The 3 is in the «tenths» position, meaning «3 tenths», or 3/10
- So, 2.3 is «2 and 3 tenths»
Example 2: What is 13.76 ?
- On the left side is «13», that is the whole number part.
- There are two digits on the right side, the 7 is in the «tenths» position, and the 6 is in the «hundredths» position
- So, 13.76 is «13 and 7 tenths and 6 hundredths»
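The same decomposition can be written as a small helper (string-based, to avoid binary floating-point surprises; the function name is my own, not a standard one):

```python
def place_values(s):
    """Split a decimal numeral like '13.76' into its whole part and
    named fractional places (up to thousandths in this sketch)."""
    whole, _, frac = s.partition(".")
    names = ["tenths", "hundredths", "thousandths"]  # extend as needed
    return int(whole), {names[i]: int(d) for i, d in enumerate(frac)}

print(place_values("2.3"))    # (2, {'tenths': 3})
print(place_values("13.76"))  # (13, {'tenths': 7, 'hundredths': 6})
```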
… as a Decimal Fraction
Or we can think of a decimal number as a Decimal Fraction.
A Decimal Fraction is a fraction where the denominator (the bottom number) is a number such as 10, 100, 1000, etc (in other words a power of ten)
So «2.3» looks like: 23/10
And «13.76» looks like: 1376/100
… as a Whole Number and Decimal Fraction
Or we can think of a decimal number as a Whole Number plus a Decimal Fraction.
So «2.3» looks like: 2 and 3/10
And «13.76» looks like: 13 and 76/100
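Python’s `fractions` module performs exactly this conversion (note that it reduces 1376/100 to its lowest terms, 344/25):

```python
from fractions import Fraction

print(Fraction("2.3"))    # 23/10
print(Fraction("13.76"))  # 344/25, i.e. 1376/100 in lowest terms

# and splitting off the whole-number part:
whole, part = divmod(Fraction("13.76"), 1)
print(whole, part)        # 13 19/25  (19/25 is 76/100 reduced)
```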
Those are all good ways to think of decimal numbers.
The 100-trillionth decimal place of π (pi) is 0. A few months ago, on an average Tuesday morning in March, I sat down with my coffee to check on the program that had been running a calculation from my home office for 157 days. It was finally time — I was going to be the first and only person to ever see the number. The results were in and it was a new record: We’d calculated the most digits of π ever — 100 trillion to be exact.
Calculating π — or finding as many digits of it as possible — is a project that mathematicians, scientists and engineers around the world have worked on for thousands of years, myself included. The well-known approximation 3.14 is believed to have been found by Archimedes around the year 250 BCE. Computer scientist Donald Knuth wrote «human progress in calculation has traditionally been measured by the number of decimal digits of π» in his book “The Art of Computer Programming” (Dr. Knuth even wrote about me in the book). In the past, people would manually — meaning without calculators or computers — determine the digits of pi. Today, we use computers to do this calculation, which helps us learn how much faster they’ve become. It’s one of the few ways to measure how much progress we’re making across centuries, including before the invention of electronic computers.
As a developer advocate at Google Cloud, part of my job is to create demos and run experiments that show the cool things developers can do with our platform; one of those things, you guessed it, is using a program to calculate digits of pi. Breaking the record of π was my childhood dream, so a few years ago I decided to try using Google Cloud to take on this project. I also wanted to see how much data processing these computers could handle. In 2019, I became the third woman to break this world record, with a π calculation of 31.4 trillion digits.
But I couldn’t stop there, and I decided to try again. And now we have a new record of 100 trillion decimal places. This shows us, again, just how far computers have come: in three years, the computers calculated more than three times as many digits. What’s more, in 2019, it took the computers 121 days to get to 31.4 trillion digits. This time, it took them 157 days to get to 100 trillion; digit for digit, that is more than twice as fast as the first project.
But let’s look back farther than my 2019 record: The first world record of computing π with an electronic computer was in 1949, which calculated 2,037 decimal places. It took humans thousands of years to reach the two-thousandth place, and we’ve reached the 100 trillionth decimal just 73 years later. Not only are we adding more digits than all the numbers in the past combined, but we’re spending less and less time hitting new milestones.
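The record calculations use far faster algorithms and serious distributed hardware, but the flavour of “more digits as computing improves” can be seen in a classic method: Machin’s 1706 formula, pi = 4·(4·arctan(1/5) − arctan(1/239)), evaluated here with plain fixed-point integer arithmetic. This sketch is my own illustration, not the program used for the record.

```python
def arctan_recip(x, prec):
    """arctan(1/x) as a fixed-point integer scaled by 10**prec,
    summed from the Taylor series until the terms vanish."""
    power = 10**prec // x      # (1/x)^1, scaled
    total = power
    k, sign, x2 = 3, -1, x * x
    while power:
        power //= x2           # next odd power of 1/x
        total += sign * (power // k)
        k, sign = k + 2, -sign
    return total

def machin_pi(n):
    """First n decimal digits of pi (after the 3), as one big integer."""
    prec = n + 10              # 10 guard digits absorb truncation error
    pi = 4 * (4 * arctan_recip(5, prec) - arctan_recip(239, prec))
    return pi // 10**10        # drop the guard digits

print(machin_pi(25))  # 31415926535897932384626433
```

Each extra digit only costs a few more loop iterations here; the modern record-setting programs use formulas (like Chudnovsky’s) whose cost per digit is dramatically lower still.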
I used the same tools and techniques as I did in 2019 (for more details, we have a technical explanation in the Google Cloud blog), but I was able to hit the new number more quickly thanks to Google Cloud’s infrastructure improvements in compute, storage and networking. One of the most remarkable phenomena in computer science is that every year we have made incremental progress, and in return we have reaped exponentially faster compute speeds. This is what’s made a lot of the recent computer-assisted research possible in areas like climate science and astronomy.
Back when I hit that record in 2019 — and again now — many people asked «what’s next?» And I’m happy to say that the scientific community just keeps counting. There’s no end to π: it’s a transcendental number, meaning it isn’t the root of any polynomial with rational coefficients, so its digits never end or repeat. Plus, we don’t see an end to the evolution of computing. Like the introduction of electronic computers in the 1940s and the discovery of faster algorithms in the 1960s–80s, we could still see another fundamental shift that keeps the momentum going.
So, like I said: I’ll just keep counting.