The “atom” lost its original meaning, and that’s good for science

- First put forth in the 5th century BCE, the concept of the atom held that matter, at a fundamental level, was made up of uncuttable, indivisible entities.
- While Democritus’s original notion of different-shaped building blocks wasn’t quite correct, the idea was revived in 1803 by John Dalton, who recognized the atom as a basic element of our world.
- While atoms and the different types (or elements) of atoms do play a major role in our everyday lives, the atom itself is not an uncuttable entity. That’s not a bug of science; that’s a feature.
Here on planet Earth, everything that we see, feel, or interact with is composed of atoms. There are approximately 90 naturally occurring species of atom that we can find on Earth, and approximately 30 more that we can synthesize under laboratory conditions. We’ve learned, thanks to the power of modern science, that atoms themselves are not fundamental, but rather can be divided into smaller chunks: electrons and an atomic nucleus, where the nucleus in turn can be further decomposed into protons and neutrons, which themselves are each made up of quarks and gluons. Only when we reach that deep a level — the level of electrons, quarks, and gluons — do we encounter particles that are truly fundamental.
But the word atom itself, derived from the Greek word ατομός, literally means uncuttable or indivisible. Why, then, do we still call these important components of reality “atoms,” instead of giving them some other name that better reflects their composite nature? It’s a fascinating example of something remarkable about how science progresses: once something is discovered, it retains the original name that was bestowed upon it, but the meaning of that name changes over time to reflect the new information we’ve acquired. It’s happened all throughout history, and if we continue to do science correctly, it will happen again and again as we learn more about reality.

The word ατομός was introduced in the 5th century BCE by two ancient Greek philosophers: Leucippus and his student Democritus. Back then, philosophers and scientists were one and the same, as no distinction had yet been made between those who opine about physical reality and those who investigate it from an evidence-based perspective. Prior to Leucippus, earlier scientist-philosophers had devised the concept of elements as components of reality, and often looked to one component in particular as being the fundamental originator — or arche (ἀρχή) — of all the others. Others, going back to the poet Hesiod, contrasted the order of the Universe, or cosmos (κόσμος), with the great void of nothingness.
Leucippus’s idea was that the void was real, but that everything we could see, feel, or interact with was not composed of elements like Earth, Fire, Air, and Water, but instead was composed of atoms: indivisible, tiny entities that made up all things. Between the atoms was void, and nothing existed except atoms and the void. Democritus refined Leucippus’s ideas further, writing that:
- there were an infinite number of atoms,
- that the divisibility of matter would at some point come to an end,
- that when you reached that point, you would find many different types of atoms: bodies with different sizes and shapes,
- that these atoms were in constant motion and could collide with one another,
- and that, as they did, they built up larger and more complex structures, creating the world we experience before us.
While none of Democritus’s original works survive, he was influential enough that others wrote about his ideas for centuries and millennia afterward.

The idea of atoms was revitalized in the early 19th century, when English chemist John Dalton reintroduced atomism into the field of chemistry. (Although some, like William Higgins, claimed that Dalton had stolen the idea from others.) By looking at different known chemical compounds, Dalton recognized that an atomist perspective would allow one to calculate molecular weights and molecular formulas from a series of simpler building blocks: atoms, with each species of atom present in the molecule possessing its own unique atomic weight. Dalton first introduced his new “atomic theory” back in 1803, providing an explanation of the composition of nitric acid.
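To make the bookkeeping concrete, here is a minimal sketch in Python of the kind of calculation Dalton’s framework enables. It uses modern atomic weights rather than Dalton’s own cruder 1803 values, and the modern formula for nitric acid (HNO3), not the composition Dalton originally assigned:

```python
# A minimal sketch of atomist bookkeeping: molecular weights follow from
# atomic weights. Values are modern ones, in unified atomic mass units;
# Dalton's own estimates were considerably cruder.
ATOMIC_WEIGHTS = {"H": 1.008, "N": 14.007, "O": 15.999}

def molecular_weight(formula: dict) -> float:
    """Sum each element's atomic weight times its whole-number count."""
    return sum(ATOMIC_WEIGHTS[element] * count for element, count in formula.items())

# Nitric acid, HNO3: one hydrogen, one nitrogen, three oxygens.
print(f"HNO3: {molecular_weight({'H': 1, 'N': 1, 'O': 3}):.3f} u")  # ~63.012 u
```

The power of the scheme is exactly this: a handful of atomic weights, combined in whole-number ratios, reproduces the weight of any compound.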
Then in 1808, Dalton published an extended discussion of his atomic theory, putting forth a total of six principles:
- Elements are composed of atoms.
- Atoms of a given element are identical in size, mass, and all other properties.
- Atoms cannot be divided, created, or destroyed.
- Atoms of different elements combine in whole-number ratios to form compounds.
- In chemical reactions, atoms are combined, separated, or rearranged.
- And the rule of greatest simplicity: that if two elements form only a single compound together, the molecules of that compound should consist of one atom of each element.
Although principles 1, 4, and 5 remain correct even today, principles 2 (atoms of the same element can have different masses, as isotopes do), 3 (matter and antimatter can be created or destroyed in equal amounts), and 6 (not true of water, which is H2O and not HO, or ammonia, which is NH3, not NH) were later shown to be false.

Atomic theory was a wild success. We soon discovered that atoms of different species could be arranged in ways where atoms with similar chemical properties would be grouped together: placing elements like sodium and potassium on the same footing, as well as calcium and magnesium, chlorine and fluorine, plus helium, neon, and argon. Mendeleev’s periodic table of the elements, developed in 1869, recognized the importance of sorting elements by these chemical properties, rather than by mass alone, and he successfully left gaps for yet-unknown elements that were later discovered with the very properties he predicted: gallium and germanium, for example.
The aspects of Dalton’s theory that disagreed with reality were discarded, just as the aspects of Democritus’s ideas that didn’t align with reality had been discarded before them. But for nearly all of the 19th century, the notion that atoms themselves were fundamental and indivisible remained part of the prevailing wisdom. Everything began to change toward the end of the 19th century, however, as a few discoveries truly began to challenge the conventional picture of atomic physics. The study of cathode rays, the discovery of X-rays, and the existence of radioactivity — led by pioneers such as Henri Becquerel, Marie and Pierre Curie, and J.J. Thomson — would reveal the existence of subatomic particles: particles within and smaller than the atom itself.

It was soon discovered that at least three types of particles were emitted in various radioactive processes:
- alpha particles, which were positively charged,
- beta particles, which were negatively charged,
- and gamma rays, which were electrically neutral.
Thomson’s experiments with cathode rays, in particular, led to the discovery of the electron: a particle with a very low mass and a small size compared to atoms, but carrying a substantial negative electric charge. Soon thereafter, beta particles were identified as electrons, bringing these two independent aspects of reality together.
It meant that atoms themselves weren’t actually indivisible, but rather could be divided into at least two components: the electrons that existed within them, somehow, and some other positively charged component. Thomson was the first to model the atom as such: what he called the “plum-pudding” model, where the electrons were like negatively-charged plums in the positively-charged pudding of the rest of the atom.
But then, in the early 1900s, Ernest Rutherford devised a wonderful experiment to test exactly this: his famed gold-foil experiment.

What Rutherford did was take a thin sheet of gold, a notoriously malleable metal, and hammer it as thin as possible, until it barely held together at all. Then, Rutherford set up a radioactive source on one side of the foil, surrounding the rest of the apparatus with an absorptive ring of solid material. What he anticipated he’d find was that these radioactive particles would go right through the foil, and wind up on the other side, given how energetic the emitted particles were and how thin and flimsy the gold foil actually was.
And indeed, that was what happened for the majority of the emitted particles that were sent in the direction of the gold foil: they did pass right through to the other side. But for a fraction of those particles, something else occurred. Instead of passing through, they appeared to ricochet off of something hard, massive, and immobile inside that gold foil, deflecting in some cases, and in others bouncing back toward the interior of the experimental apparatus. Rutherford couldn’t contain his excitement and puzzlement at the matter, noting about his 1909 experiment:
“It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you.”
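A rough estimate shows why this was so shocking. In a head-on collision, the alpha particle stops where its kinetic energy equals the Coulomb potential energy of repulsion from the gold nucleus. Assuming a typical alpha energy of about 5 MeV (a representative value for radioactive sources of that era, not a figure quoted from Rutherford’s paper), the distance of closest approach works out to:

$$r_{\min} = \frac{Z_\alpha Z_{\mathrm{Au}}\, e^2}{4\pi\varepsilon_0 E_\alpha} \approx \frac{2 \times 79 \times 1.44\ \mathrm{MeV \cdot fm}}{5\ \mathrm{MeV}} \approx 45\ \mathrm{fm} \approx 4.5 \times 10^{-14}\ \mathrm{m},$$

thousands of times smaller than the roughly $10^{-10}\ \mathrm{m}$ extent of the atom itself. The positive charge, and nearly all of the mass, had to be concentrated in a tiny central nucleus.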

In other words, not only were atoms not uncuttable, but they consisted of two very different parts: the light, negatively charged electrons, and a heavy, positively charged atomic nucleus. Ten years later, in 1919, Rutherford’s continued work enabled him to observe hydrogen nuclei being ejected from nitrogen atoms that had been bombarded with alpha particles (now known to be helium nuclei), demonstrating the existence of the proton. The next year, in 1920, Rutherford gave protons their name, teaching us what the two main components of atoms were.
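In modern notation, the transmutation Rutherford observed corresponds to the nuclear reaction:

$${}^{14}_{7}\mathrm{N} + {}^{4}_{2}\mathrm{He} \rightarrow {}^{17}_{8}\mathrm{O} + {}^{1}_{1}\mathrm{H},$$

with the ejected hydrogen nucleus, ${}^{1}_{1}\mathrm{H}$, being the proton itself.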
But how, then, were there isotopes of these different elements, where atoms with the same chemical properties could have different masses? One idea was that within an atomic nucleus, there weren’t just protons, but also some electrons that had formed bound states with protons inside of them. This led to a paradox, however: why would only some electrons form a bound state with protons, while others remained in orbit around the atom? Were there two fundamentally different types of electron?
Eventually, the issue would be settled experimentally: by James Chadwick in 1932, who bombarded beryllium nuclei with alpha particles and noted that a massive, neutral particle was emitted in the process. This new particle, the neutron, finally completed the basic picture of the atom.
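In modern notation, Chadwick’s reaction reads:

$${}^{9}_{4}\mathrm{Be} + {}^{4}_{2}\mathrm{He} \rightarrow {}^{12}_{6}\mathrm{C} + {}^{1}_{0}\mathrm{n},$$

with the neutral particle, ${}^{1}_{0}\mathrm{n}$, carrying nearly the same mass as the proton but no electric charge.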

Atomic nuclei, then, were composed of positively-charged protons and electrically uncharged neutrons, and those nuclei in turn were surrounded by very light, negatively-charged electrons to complete the atom. Those atoms could then bind together, just as Democritus had theorized more than 2000 years before, into the molecules and larger structures that compose our macroscopic world. There weren’t infinitely many types of atom, as Democritus had supposed, but rather an enormous-but-countable number of atomic species built from these subatomic constituents, and they assembled together to compose all of reality.
In the time since then, we discovered a group of unstable particles called mesons: particles that were initially suspected (according to the brilliant-but-incorrect Sakata model) of being composites of baryons such as protons, neutrons, antiprotons, and antineutrons, which could achieve much lower masses in combination thanks to binding energy. Mesons like kaons could then be explained if you added the unstable Lambda baryon (and its antiparticle) into the mix as well, with the proton, neutron, and Lambda baryon composing a triad sometimes known as the sakatons.
As we progressed to performing deep inelastic scattering experiments, however, the Sakata model fell out of favor, as the more successful theories of quarks and partons (now known to be one and the same) better described the results. It was this progression of events that eventually brought us to our modern picture of reality, and the Standard Model of elementary particles.

And yet, even though we thought that atoms were truly indivisible entities for more than 90 years, from Dalton’s introduction of them until the discoveries of radioactivity and the nature of cathode rays, we still give them the name “atom.” Some people are disappointed that this is the state of affairs, claiming that the original meaning of atom — going all the way back to the Greek word ατομός — should be reserved for entities that are truly fundamental or elementary: entities that cannot be cut or split.
Too bad for those who believe that: it isn’t the way science works. Things are named at the time of their discovery, and they’re usually named with the most accurate (or catchy) description we have of them at the time. Those names persist even as our understanding of those objects or phenomena improves, even if what we subsequently learn contradicts the original name we gave them.
- Atoms can indeed be cut or split.
- Exoplanets, short for extra-solar planets, don’t meet the IAU’s 2006 technical definition of the word planet.
- The Big Bang, first named such in 1949, no longer means an initial event that created our Universe, but merely the hot, dense, expanding aftermath of the end of cosmic inflation.
Along with many other examples, atoms simply are what they are. Even though they can be cut, they’re still a vitally important concept for understanding our reality.

Back in the late 1990s, I had the opportunity to travel to a number of ancient Roman ruins, including a great many temples. Many were rectangular in shape, but a few were round as well. For hundreds of years, classical scholars would misidentify the round ones as a “Temple of Vesta,” due to the famed round Temple of Vesta that partially survives in the ancient Roman forum. In modern times, we’ve learned that these round temples likely weren’t Temples of Vesta at all, but rather were temples to other gods, some of whom are known and some whose identities remain unknown. Even when we know better, however, getting people to accept a name change is, to say the least, an uphill battle.
That’s why the approach that science generally takes isn’t to change the name, but to allow the meaning of the name we’ve given to evolve. If we discover that dark energy does evolve with time, we’ll still call it dark energy, but we’ll stop associating it with a cosmological constant. If we discover that dark matter interacts with normal matter or with light, we’ll still call it dark matter, even though its nature won’t be completely “dark” to us any longer. We’ve long known that planetary nebulae aren’t planets, but they’ve maintained that same name since it was first given in 1779. The Sun doesn’t “set” and “rise” as the Earth rotates and “shooting stars” aren’t actually stars, but these names — sunset, sunrise, and shooting star — are unlikely to go away anytime soon.
In science, we don’t cling to outdated meanings of words; we find a thing, name it, and then continue to study it. If what we learn contradicts the expectations we held when we named it, all the better for us. After all, naming conventions aren’t the point of science; learning about the Universe, and taking away the lessons of science itself, is why we do it.