The Nature of Interatomic Forces in Metals, 1938

Linus Pauling, ca. 1930s

“In recent years I have formed, on the basis of mainly empirical arguments, a conception of the nature of the interatomic forces in metals which has some novel features.”

–Linus Pauling, 1938

Prior to the publication of this article, which appeared in the December 1938 issue of Physical Review, much about the interatomic forces operating in metals was either unknown or poorly predicted by theory. In publishing this paper, Linus Pauling first sought to reconcile the incongruities between theory and data for the transition metals, such as iron, cobalt, nickel, copper, palladium, and platinum. He was then able to correctly predict properties including “interatomic distance, characteristic temperature, hardness, compressibility, and coefficient of thermal expansion” by discarding previously held assumptions and inserting new – and correct – assumptions about transition metals.

The most significant idea that Pauling introduced with this paper was the notion that the valence shell electrons – those in the outer shell – play a part in bonding. Previously, scientists believed that these electrons made “no significant contribution” to bond formation. Pauling was able to establish otherwise, and used this breakthrough both to align observed data with theoretical predictions and to make other predictions about transition metals.


The 1938 paper was written in the wake of a revolution within the world of chemistry. A raft of new theories brought about by a widening understanding of quantum mechanics was generating intense excitement for scientists worldwide, and the tools that quantum mechanics provided for helping to “correct” previous understandings of the chemical bond were of paramount interest to many. Pauling, of course, was a leader in this area, his body of work ultimately garnering the 1954 Nobel Prize in Chemistry for “research into the nature of the chemical bond and its application to the elucidation of the structure of complex substances.”

Within this area of focus, many scientists were especially interested in exploring the ways that metals bonded because, as noted, the observed data did not match up with theory. Pauling sought to mend this gap by using quantum mechanics to look at interatomic forces in a novel way. Prior to the Physical Review paper, chemists believed that when metals bonded, their valence shell electrons played only a small role in the resulting structures. Pauling argued otherwise, and put forth an important new theory that the valence shell electrons contributed to the process through resonance, a theory that he had developed earlier in the decade and continued to champion.


Because the crux of Pauling’s scientific intervention was to prove that valence shell electrons are involved in bonding, most of the paper is devoted to supporting this claim. The primary tool that Pauling uses to craft his argument is an analysis of temperature predictions. According to the reigning theory regarding metals and valence shell bonds, when bonding occurred, the electrons would bond in a manner that would give rise to a ferromagnetic moment. Specifically, it was theorized that these ferromagnetic moments would be temperature dependent, meaning that as the temperature of the metal changed, its degree of magnetism would also change in a predictable way. Experiments had shown, however, that when metals bonded, their ferromagnetism remained independent of temperature.

Pauling exploited this piece of information and used it to support his theory. According to Pauling, if metals bonded through resonance, they would create ferromagnetic moments that were temperature independent, a hypothesis that correctly aligned with the observed data.

To develop his argument, Pauling made specific use of the element vanadium, which has an electron configuration of 3d³4s². Under the old model, vanadium’s valence electrons could interact only weakly in bonding, with at most the two 4s electrons involved in a bond. This, according to Pauling, would create ferromagnetism that decreased with increasing temperature, meaning that it was temperature dependent. On the contrary, the experimental evidence showed that vanadium’s magnetism was temperature independent. This meant, therefore, that weak valence interaction during bonding was not possible.

Pauling’s alternative suggestion was that all of vanadium’s valence electrons were involved in bonding through resonance, not just the two 4s electrons as previously believed. Further, if the valence electrons bonded through resonance, the ferromagnetism of their structure would be temperature independent, a prediction that aligned with the observed data.
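In modern shorthand (this formulation is added here for clarity and is not drawn from Pauling’s text), the contrast between the two predictions can be written as

\[ \chi_{\text{localized}}(T) = \frac{C}{T} \qquad \text{versus} \qquad \chi_{\text{resonating}}(T) \approx \text{constant} \]

where χ is the magnetic susceptibility and C the Curie constant. Localized unpaired electrons behave like small permanent magnets and give Curie-law susceptibility that falls off as temperature rises; electrons paired off into resonating bonds leave only a weak susceptibility that barely varies with temperature, the pattern actually measured for vanadium.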


Once Pauling was able to prove that the valence electrons in vanadium bonded through resonance, he then began to apply the concept to all transition metals. As with the previous example, Pauling continued to support the concept by comparing predicted outcomes with empirical data. And once again, when the bonding was viewed through the prism of resonance, the predicted magnetic moments began to align with the empirical data.

Pauling then took it another step by repeating the exercise with interatomic distances. As demonstrated in his paper, a resonant structure would correctly predict the interatomic distances that had been observed for many bonds. Pauling also claimed that other properties, such as the “compressibility, coefficient of thermal expansion, characteristic temperature, melting point, and hardness” would likewise correctly align with experimental evidence, once resonance was used to explain valence shell bonding.

Though clearly a significant breakthrough, the assertions that Pauling made in his paper were grounded in work done by others — notably the quantum mechanical theory of ferromagnetism developed by Heisenberg, Frenkel, Bloch, Slater, et al., and Wolfgang Pauli’s theory of the temperature-independent paramagnetism of the alkali metals. And while the 1938 article acknowledges these debts, it also attempts to improve upon them.

This was especially so with the quantum mechanical theory of ferromagnetism. As we have seen, Pauling successfully applied the idea of temperature independence and ferromagnetism to support his claims, but he also found one aspect of the theory to be needlessly bothersome. As Pauling noted, in order for much of the theory to work on a mathematical level, scientists were compelled to assign positive values to the interaction terms for all unpaired valence electrons. Pauling recognized that this was necessary only if one assumed that valence electrons did not play a large role in bonding for metals. Under a resonant scenario, Pauling was able to show that the math could still work if these terms were negative and that, once again, “this conclusion agrees with the observation.”

The Theoretical Prediction of the Physical Properties of Many-Electron Atoms and Ions. Mole Refraction, Diamagnetic Susceptibility, and Extension in Space, 1927

Linus and Ava Helen Pauling in Copenhagen, May 1927

[Ed Note: Today and in the three posts that will follow, we will be taking a close look at four important scientific articles published by Linus Pauling between 1927 and 1949.]

In this ambitious and hugely influential paper, Linus Pauling applied his theory of screening constants to various problems, including electric polarizability, diamagnetic susceptibility, and the sizes of ions and atoms. Pauling was fundamentally interested in pursuing this topic because of his desire to merge the new quantum mechanics – which embraced wave functions – with older ideas in order to make predictions about properties like mole refraction, diamagnetic susceptibility, and extension in space.

Especially during the early phases of his career, one of Pauling’s signature rhetorical tools was to put forth a bold assumption that would serve to simplify the predictions made later on in a given article. This paper is among the best examples of that approach. In it, Pauling developed mathematical relationships that, when applied, could help the reader make generalizations about molecules. But in moving through these calculations, Pauling had to make some assumptions, oftentimes without the aid of hindsight to determine whether or not they were correct. One enduring legacy of this paper is that many of Pauling’s assumptions were indeed correct, and its findings have thus remained relevant across the decades.


Another contributing factor to the paper’s success was that Pauling was in the right place at the right time. While working towards his PhD at Caltech, Pauling enthusiastically followed the rapid development of quantum mechanics in Europe and elsewhere. Pauling was particularly interested in the work of Arnold Sommerfeld in Munich and Niels Bohr in Copenhagen, and wrote to both to inquire about research opportunities. Bohr never responded but Sommerfeld did and, with his support, Pauling secured a Guggenheim Fellowship that allowed him to live and work in Europe for 19 months. During that period, he spent most of his time with Sommerfeld at the Institute of Theoretical Physics, though he did visit Bohr in Copenhagen as well as Erwin Schrödinger in Zurich.

Pauling’s residency in Europe proved auspicious, in part because Sommerfeld and his colleagues were working on uniting new ideas with old, a task not being readily pursued in the United States at the time. As they moved forward with their work, the European scientists began to solve more and more problems with quantum mechanics, cementing in Pauling’s mind the utility of the approach as a way forward.


Gregor Wentzel (Image credit: Emilio Segre Visual Archives)

One project important to Pauling’s paper was being led by University of Leipzig physicist Gregor Wentzel. A colleague of Sommerfeld, Wentzel was seeking to apply quantum mechanics to x-rays in order to calculate the screening constants of electrons in large, many-electron atoms. His project had hit a snag, however, in that he was unable to find agreement between the observed data and those predicted by theory. After scrutinizing his work, the young Pauling found that Wentzel had made errors in his calculations and that, once these miscalculations were corrected, observation and prediction did in fact agree: Wentzel’s underlying approach was sound. In so doing, Pauling had confirmed the value of quantum-mechanical calculations in predicting the screening constants of electrons in many-electron atoms.

Armed with this information, Pauling recognized that he could use these same calculations to make predictions about electron arrangement in molecules and the relative size of ions, among other properties. This led to the publication of his paper, The Theoretical Prediction of the Physical Properties of Many-Electron Atoms and Ions. Mole Refraction, Diamagnetic Susceptibility, and Extension in Space, which appeared in 1927 in the Proceedings of the Royal Society.

In its essence, the article used the wave-mechanical formulation of quantum mechanics to make predictions about molecules, an approach that emerged directly from Pauling’s exposure to European efforts to unify old ideas with new. And even though this was not the first time that Pauling had written a paper utilizing quantum mechanics, it was certainly his first publication in which he used these novel tools to make predictions about molecular properties.


Fundamental to these predictions were three key assumptions that Pauling put forth at the beginning of his paper. The first was that,

each electron shell within the atom is idealized as a uniform surface charge of electricity of amount −zᵢe on a sphere whose radius is equal to the average value of the electron-nucleus distance of the electrons in the shell.

The second assumption stated that,

the motion of the electron under consideration is then determined by the use of the old quantum theory, the azimuthal quantum number being chosen so as to produce the closest approximation of the quantum mechanics.

And the third assumption was that,

since s₀ does not depend on Z, it is evaluated for large values of Z by expanding in powers of zᵢ/Z and neglecting powers higher than the first, and then comparing the expansion with that of the expression containing Z−s₀ in powers of s₀/Z.
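Stated compactly in modern notation (a paraphrase of the scheme, not a formula quoted from the paper), the screening-constant approach replaces the full nuclear charge Z with an effective charge and then evaluates hydrogen-like expressions using that effective value. For a 1s electron, for example,

\[ Z_{\text{eff}} = Z - S, \qquad \langle r \rangle_{1s} = \frac{3}{2}\,\frac{a_0}{Z_{\text{eff}}} \]

where S is the screening constant and a₀ the Bohr radius. Because size-dependent properties scale as powers of Z_eff, a well-chosen S immediately yields estimates of ionic size, mole refraction, and diamagnetic susceptibility.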

Armed with these assumptions, Pauling was able to issue a collection of predictions about molecules, particularly concerning mole refraction and diamagnetic susceptibility. Prior to his doing so, chemists lacked the necessary tools for making predictions of this sort, meaning that certain chemical properties remained hazy or unknown.

This issue was particularly salient for the hydrogen atom. In the months leading up to the paper’s publication, a huge debate had emerged concerning the polarizability of hydrogen. The prevailing formula had been proven incorrect in 1926, after which time a race ensued to find a new, more suitable equation. Eventually a successor formula was developed, but it was criticized as being “a conservative Newtonian” model. Agreeing that a more robust approach was needed, Pauling set about applying quantum mechanics and, based on his three assumptions, derived his own expression for the polarizability of hydrogen.

Knowing full well that the equation was based on his three assumptions, and anticipating resistance, Pauling pre-emptively argued that “it might be thought that these values of ɣ are not correct because of the fact that the electron shells actually do not consist of hydrogen-like electrons, but rather themselves of ‘penetrating electrons.'” However, “as Z [the nuclear charge] increases, the ‘penetrating orbits’ become more hydrogen-like” and therefore should be ignored because any error found would be “negligible.” Having put forth this solution to the problem of hydrogen, Pauling was then able to more broadly demonstrate the utility of his ideas.
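For reference, the wave-mechanical value for the polarizability of ground-state hydrogen that this line of work converged upon (the standard result of the period, given here in modern notation rather than the paper’s) is

\[ \alpha_{\mathrm{H}} = \frac{9}{2}\,a_0^3 \approx 0.667 \times 10^{-24}\ \mathrm{cm}^3 \]

and for hydrogen-like electrons the polarizability falls off roughly as the fourth power of the effective nuclear charge, which is why good screening constants translate directly into good predictions of polarizability and mole refraction.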

Indeed, even though much of the work in the paper made assumptions that were oftentimes crude – such as using data from the valence shell electrons only – Pauling was able to create complex (and, as it turned out, fairly accurate) tables of the polarizability of ions, diamagnetism screening constants, and mole refraction, among other predictions.

It is clear that Pauling believed strongly in his paper, which he felt would “make possible the accurate prediction of the properties of any atom or ion.” And though the approach would sometimes only yield “approximate values of the physical properties of ions” based on his three assumptions, the importance of the work was not diminished as, oftentimes, directly observed data “may not exist under conditions permitting experimental investigation.”

Sickle Cell Research to the Present and the Future

Three-dimensional rendering of sickle cell anemia blood cells. Credit: National Institutes of Health.

By Dr. Marcus Calkins, Part 3 of 3

Forty years after Linus Pauling and his lab demonstrated the molecular basis for Sickle cell disease and James Watson speculated that upregulation of fetal hemoglobin might protect against the disease, methods to control fetal hemoglobin specifically in red blood cells began to be developed. The molecular biology revolution of the late 20th century had produced extensive knowledge about the molecular systems that drive fetal hemoglobin production, but harnessing that intricate knowledge has taken another thirty years.

Hematopoietic Stem Cells (1990s)

Since the advent of radioactivity research, it has been well established that red blood cells have a short lifespan of only about 115 days and are continually produced from precursors in the bone marrow. In order to replace defective blood cells in an individual, a protocol for whole-body irradiation and allogeneic bone marrow transplant was pioneered by a group of doctors in Seattle in the 1970s as a treatment for cancer patients.

However, the ability to specifically isolate and identify hematopoietic stem cells from patients was only developed in the 1990s. At that time, nuclear dye exclusion and flow cytometry characteristics were used to isolate the stem cells, but since then, a variety of cell surface markers have been identified, and protocols to isolate, expand and differentiate hematopoietic stem cells have become standardized. In addition, scientists have learned to modify the hematopoietic stem cells at a genetic level, creating the possibility that stem cells may be extracted, genetically modified ex vivo, and then used to reconstitute the bone marrow of patients with blood diseases.

For patients with Sickle cell disease, it may therefore be possible to extract hematopoietic stem cells and inactivate the BCL11A gene, which normally suppresses fetal hemoglobin. The red blood cell progeny of these altered stem cells would then produce fetal hemoglobin that could mask the effects of the disease-causing mutation in the β-globin gene. Afterward, the modified stem cells could be transplanted back into the same patients from whom they were isolated, providing the person with a continual supply of red blood cells that express fetal hemoglobin and are resistant to sickling.

Gene Therapy and Genome Editors (2000s-2010s)

The final component of a therapy for Sickle cell disease has recently been realized, as it is now feasible to efficiently inactivate BCL11A in isolated hematopoietic stem cells. In the last two decades, several systems of modifying the genome (gene editors) have been developed. Although the first editors to be produced may still find clinical use, CRISPR has quickly overtaken previous technologies to become the most widely applied and well-known gene editing platform.

Beginning in the late 1990s, researchers invented two key methods of using engineered proteins to make targeted edits to the genome. These early gene editor proteins are called Zinc finger nucleases and, roughly a decade later, TALENs; both are being tested in clinical trials today. Each of these editors is able to target highly specific DNA sequences and make an incision in the DNA helix at a predictable site. Once the DNA strand is incised, error-prone DNA repair processes are activated to fix the incision, often resulting in random base insertions, deletions and changes. In this way, the genetic code is disrupted in some cells, and these random disruptions often serve to inactivate the targeted gene. If those cells with an inactivated gene can be identified and expanded, whole populations of cells with the genetic alteration can be established.

The comparative difficulty of using TALENs and Zinc finger proteins instead of CRISPR is that targeting a particular site in the genome often requires a major technical effort. Since the proteins themselves target the DNA sequence of interest, each target sequence must have its own specialized editor. The introduction of CRISPR/Cas9 in the early 2010s allowed researchers to target various DNA sequences much more easily. This system uses a short guide RNA molecule for DNA targeting, so the same protein can be used to cut any genomic site. Since generating these guide RNAs is a relatively simple procedure, the amount of effort required to design and execute genome edits is greatly reduced.
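To make concrete why guide design is so much lighter-weight than protein engineering, the sketch below scans a DNA string for candidate SpCas9 target sites using the simplest version of the targeting rule: a 20-nucleotide protospacer immediately followed by an NGG PAM. The sequence shown is a made-up stand-in, not a real BCL11A fragment, and real guide selection also weighs off-target matches, GC content, and other criteria.

```python
import re

def find_spcas9_guides(dna: str) -> list[str]:
    """Return candidate 20-nt guide sequences that sit 5' of an NGG PAM."""
    dna = dna.upper()
    # The lookahead keeps overlapping sites; group 1 is the 20-nt protospacer,
    # which must be followed immediately by N-G-G (the SpCas9 PAM).
    return [m.group(1) for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", dna)]

# Hypothetical fragment standing in for a region of the BCL11A gene.
example = "TTGACCTTAGGCATCGATCGGATCCATGGCAAGCTTGGCACTGG"
for guide in find_spcas9_guides(example):
    print(guide)
```

Designing a new Zinc finger or TALEN for a new site means engineering a new protein; retargeting CRISPR means swapping out the 20-letter string.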

Theoretically, TALENs, Zinc fingers and CRISPR could all be used to inactivate BCL11A in hematopoietic stem cells. However, design of an appropriate TALEN or Zinc finger might require relatively large investments of money and time.  On the other hand, CRISPR promises to be a more cost-effective and faster approach to editing the genome. In academic studies, CRISPR is already widely used and far more common than the other editing technologies for making genetic modifications to laboratory model organisms. However, with human patients, safety and efficacy greatly outweigh effort and cost. Time will tell which gene-editing platform proves to be most cost-effective, efficient and safest for clinical use.

A New Clinical Reality (2020s)

With these new tools at hand, Watson’s dream of increasing fetal hemoglobin in Sickle cell disease patients is finally within sight. At least two major collaborations to perform ex vivo gene therapy for Sickle cell disease have been initiated since 2018. Both use gene editors to inactivate the BCL11A gene and promote fetal hemoglobin expression in red blood cells.

One collaboration between Bioverativ and Sangamo is testing a protocol for gene editing with a Zinc finger. The estimated completion date for this trial is 2022. Another collaboration that has received a great deal of attention, and was recently published in the New England Journal of Medicine, involves CRISPR Therapeutics and Vertex Pharmaceuticals. This trial is among the first to attempt CRISPR in a clinical setting, and the results are highly anticipated by the research and medical communities, not only for their impact on Sickle cell disease, but also as a bellwether for the use of CRISPR in medical practice. So far the results of the trial are encouraging. As of December 2020, the first two patients to receive therapy were reportedly doing well and were free from symptoms more than one year after receiving the treatment. While this news is exciting, there is still much work to be done before the technique can be applied to a wider population.

It has taken many years and many twists, but the visions of the 1950s are finally beginning to be realized, bringing us to the cusp of an exciting new dawn in medicine. The slow march toward a cure for Sickle cell disease clearly demonstrates that through patience and continued investment in scientific discovery, we can continue to achieve the dreams of our predecessors and plant new seeds for future generations to reap the harvest.  

Sickle Cell Research in the Wake of Pauling and Watson

Harvey Itano, 1954. Image credit: Caltech Archives.

By Dr. Marcus Calkins, Part 2 of 3

The molecular defect in Sickle cell disease was demonstrated by Linus Pauling’s lab in 1949, and one year prior, James Watson had proposed that increasing the level of fetal hemoglobin may provide a means by which the disease could be cured. These two publications provided a direction for what may soon become a real-life cure for the disease. However, many practical questions needed to be answered before a cure could be realized, or even imagined in a realistic sense. The first of these questions had to do with the composition of hemoglobin protein.     

Defining the structure of hemoglobin (1950s-1960s)

The project in the Pauling lab was led by a formidable graduate student of Japanese descent named Harvey Itano. It is noteworthy that despite his birth in Sacramento and his standing as valedictorian of the University of California, Berkeley class of 1942, Itano was interned at Tule Lake during World War II. He was only released from detention to attend medical school at Washington University in St. Louis. Upon graduating with his M.D., he came to Pauling’s lab to pursue a Ph.D. His widely celebrated doctoral thesis was built on the Sickle cell disease project. After finishing this training in biochemistry and publishing his high-impact paper with Pauling, Itano opened his own lab in which he continued working on hemoglobin.

By applying many of the same methods pioneered in Pauling’s lab, Itano’s small group became central players in identifying aberrant forms of hemoglobin that lead to disease. Because of this careful and detailed work, Itano and other scientists began to suspect that the protein subunits that make up hemoglobin may be derived from multiple, highly similar genes. At the time, this idea represented a major paradigm shift, as it was previously assumed that each enzyme should correspond to a single gene.

This idea also brought up an evolutionary question of how such similar genes might come to exist. In a memorial biography, Itano’s friend and colleague R.F. Doolittle summarized Itano’s forward thinking on the genetic and evolutionary basis of enzyme constituents as follows:

In his thorough review of all the data, Harvey noted that in sheep and cattle, however, there were two kinds of end group (valine and methionine), and concluded that there had to be two each of two kinds of polypeptide in human adult hemoglobins. He went on to discuss how gene duplications could account for all the various hemoglobin chains, including myoglobin and foetal hemoglobin. These were thoughts well ahead of their time.

These major scientific contributions and others led to Itano’s election to the National Academy of Sciences, U.S.A. He was the first of many Japanese Americans to achieve that honor.

The exact change in hemoglobin protein responsible for causing Sickle cell disease was identified at the amino acid level nearly a decade after Itano’s paper with Pauling, when V.M. Ingram used protease digestion to show that the disease came from a glutamic acid-to-valine substitution in the β-globin subunit of the hemoglobin protein complex. Although the molecular structure of DNA was accurately modeled by Watson and Crick in 1953, revealing tantalizing clues as to how genetic information can be encoded and passed from cell to cell and parent to child, the precise genetic mutation underlying Sickle cell disease would remain unknown for a few more decades, until the molecular biology revolution.

Molecular Switch from Fetal to Adult Hemoglobin (1970s-1990s)

In order to institute the cure dreamed up by Watson, much needed to be learned about hemoglobin composition, genetics, and control in red blood cells. This basic knowledge was largely generated through new technologies developed in the latter part of the 20th century. The cell and molecular biology revolution began in the mid-1970s and brought with it new techniques for manipulating DNA and other molecules. These advances allowed scientists to define genes and genetic structures, and to begin to understand how gene expression is regulated.

With regard to hemoglobin, it was discovered that two sets of genes (five on Chromosome 11 and three on Chromosome 16) encode the subunit proteins for the hemoglobin molecular complex. Each set contributes one subunit to the complex, and the subunits that are present in the blood change, depending on the developmental stage of the individual. In essence, a switch occurs just after the time of birth, wherein expression of fetal hemoglobin is turned off and adult hemoglobin is turned on. The mechanics of this switch, including the DNA sequences and proteins that make it work, were elucidated during this fruitful period of biological research.
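For context (standard hemoglobin biochemistry, summarized here rather than drawn from the post), the α subunits encoded on chromosome 16 pair with different chromosome 11 partners at different stages of life:

\[ \mathrm{HbF} = \alpha_2\gamma_2 \ (\text{fetal}), \qquad \mathrm{HbA} = \alpha_2\beta_2 \ (\text{adult}) \]

Because the sickle mutation sits in the β subunit, a red blood cell that keeps producing γ-globin in place of β-globin sidesteps the defect entirely – which is why the fetal-to-adult switch matters so much for therapy.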

Once the mechanics of the fetal hemoglobin to adult hemoglobin molecular switch were known, it became apparent that one gene, BCL11A, is a dominant factor in shutting down fetal hemoglobin production. If this gene can be silenced or disrupted, fetal hemoglobin expression will continue. Furthermore, there is strong evidence from molecular, cellular and clinical studies that continued expression of fetal hemoglobin will at least partially prevent the harmful accumulation of mutant hemoglobin aggregates and thereby prevent Sickle cell disease.

However, turning off the BCL11A gene is not an easy task. The molecular system evolved over millions of years to function robustly under almost any environmental condition or situation. Like a train hurtling down a track, such developmental programs are extremely hard to redirect, and if they are completely derailed, the individual may experience catastrophic effects. Thus, precise control of only the BCL11A gene, precisely in red blood cells, is necessary to realize the therapy first imagined by Watson. This level of control is possible to achieve in cells outside the body, but inside the body, cells are resistant to change, and our ability to target only one cell type at a time remains relatively primitive. Thus, a cure for Sickle cell disease still needed a method for regulating BCL11A specifically in red blood cells.

The Slow March Toward a Cure for Sickle Cell Disease

Pastel drawing of sickled hemoglobin cells by Roger Hayward, 1964

By Dr. Marcus Calkins

[Ed Note: This is the first of three posts examining the history of sickle cell treatment up to present day. It is authored by Marcus J. Calkins, Ph.D., “a proud OSU alumnus” (Chemical Engineering, B.S., 1999), who now works as a scientific communications service provider and educator in Taipei, Taiwan. In submitting this piece, Calkins emphasized that he has “taken inspiration from Linus Pauling’s research activities, teaching methods and moral character for many years.”]

In 2020, Jennifer Doudna and Emmanuelle Charpentier shared a Nobel Prize for their discovery and development of the CRISPR gene editor. One of the first clinical applications for CRISPR promises to be an ex vivo gene therapy for Sickle cell anemia. If it works, this medical technology will be a major breakthrough in biomedicine, representing the culmination of more than a century of research on Sickle cell disease that encompasses a wide range of topics.

Despite the lifetimes of work that have led to our current exciting position on the precipice of a cure for Sickle cell disease, the basic molecular features of the disease were defined seven decades ago by another Nobel Prize winner, Linus Pauling. The intervening 70 years of work have been required for scientists to learn how we might apply the foundational knowledge to actual patients in a real-life clinical setting. While the pace of progress may seem agonizingly slow to those outside biomedical research, the ground that has been covered is immense, and entire fields of biomedicine needed to be built and optimized before a truly feasible treatment technology could be invented.

Sickle Cell Disease (1910)

Sickle cell disease was first described over the period from 1910 to about 1924. During this time, a series of case reports detailed approximately 80 people of African descent whose red blood cells had an odd morphology resembling a crescent or a sickle. In many cases, this sickle-like morphology was associated with a devastating condition involving severe anemia and early death. Furthermore, scientists learned that the red blood cell sickling could be exacerbated by depriving the blood of oxygen, either by adding carbon dioxide to cells in a dish or by restricting blood flow in the patient. These clinical observations laid the foundation for basic scientists to postulate that the condition was related to hemoglobin, the protein that carries oxygen in red blood cells.

The first person to make this suggestion was Pauling. At some time in 1945, he was chatting with a colleague on the train from Denver to Chicago, when he learned about the difference in sickling between oxygenated and deoxygenated blood. According to the account of his colleague, Pauling was also informed that the sickled red blood cells show birefringence when viewed under a polarizing microscope, which would suggest an alignment of molecules within the cells.

However, by Pauling’s account of the conversation, his immediate guess that Sickle cell disease is caused by a defect in the hemoglobin protein complex was based entirely on the difference in the sickling properties of oxygenated and deoxygenated blood. Notably, Pauling later stated that the idea of Sickle cell disease being singularly caused by the hemoglobin molecule came to him in “two seconds,” but gathering evidence and refinement of the idea took at least three years.

In his public talks, Pauling often emphasized the fact that in the first years of the study, his students performed many experiments but could not identify any obvious biochemical differences between the hemoglobin molecules of patients and control individuals. From his repeated emphasis of this fact, one might speculate that the translation of a two-second idea to a three- or four-year demonstration would have been frustrating for such a quick-minded individual, though Pauling never said as much. Alternatively, he may have simply been emphasizing the challenges and slow, steady nature of rigorous scientific pursuit.

The Molecular Defect and a Potential Cure (1949)

In 1949, prior to the double helix model of DNA and before stem cells were described, the Pauling lab published a paper titled “Sickle Cell Anemia, a Molecular Disease.” In this work, Pauling and his students definitively showed that a slightly abnormal form of hemoglobin is found exclusively in patients with the cell sickling phenotype. Using a 30-foot-long Tiselius apparatus that they had constructed for electrophoresis, a small two-electron difference could be detected between the overall charge of hemoglobin molecules from Sickle cell patients and that of hemoglobin from unaffected individuals. Meanwhile, carriers of the disease had a mixture of the two hemoglobin isoforms.

Importantly, Pauling’s group found that the defect in hemoglobin is not related to its ability to bind oxygen. Instead, it was later shown that the slight change in molecular charge affects the way hemoglobin proteins interact with each other, as would be predicted from the birefringence observation. This aberrant interaction causes the formation of long molecular scaffolds that change the shape of the red blood cell and lead to its dysfunction.

With this publication, Sickle cell disease became the first disorder to be associated with a single molecule. It was also the first with a known genetic basis. In his publication of the same year, J.V. Neel showed that Sickle cell disease follows an autosomal recessive inheritance pattern, meaning that each parent must contribute one copy of the mutated gene for a child to develop the disease. The cell sickling phenotype can occur to some degree in people who only carry one mutant allele, but only those with two copies experience the pernicious effects of the disease. This information, combined with Pauling’s study, established the essential basis for our understanding of Sickle cell disease and serves as a model for many other genetic diseases.

Surprisingly, James Watson (prior to his famed work on the structure of DNA) contributed a prescient idea to Sickle cell disease treatment, when he speculated that cells could be protected by expression of another form of hemoglobin, fetal hemoglobin. Watson made this prediction in 1948, just one year before Pauling’s powerhouse publication. His suspicion was an extension of reports that red blood cell sickling did not happen in the blood of infants who would later develop the condition as children and adults.

The stage was thus set for a Sickle cell disease cure. After the theoretical basis was determined, onlookers might have expected a cure for the disease to be found within a few years. However, extending the ideas of Pauling and Watson required incredible efforts by myriad scientists over the course of the following seven decades to create a potential new clinical reality.

Pauling’s Seventh Paper on the Nature of the Chemical Bond

[Part 7 of 7]

“The Nature of the Chemical Bond. VII. The Calculation of Resonance Energy in Conjugated Systems.” The Journal of Chemical Physics, October 1933

The final paper in Linus Pauling’s earthshaking series on the nature of the chemical bond was the shortest of the seven and made less of a splash than had most of its predecessors. This lesser impact was anticipated and was due primarily to the guiding purpose of the paper: to apply previously developed postulates to compounds that had not been addressed by Pauling in his prior writings. As with the sixth paper in the series, the final publication was co-authored by Caltech colleague Jack Sherman.

In paper seven, Pauling demonstrated how to calculate resonance energy in conjugated systems. A conjugated system is one in which p orbitals on three or more adjacent atoms are aligned, allowing their electrons to delocalize across the connected bonds; 1,3-butadiene (CH2=CH–CH=CH2), whose two double bonds are linked through four aligned p orbitals, is the simplest example. While it was commonly understood by the era’s organic chemists that conjugated systems supplied a compound with more stability than would ordinarily be expected, Pauling’s paper offered the calculations needed to codify this knowledge.

The paper also put forth a collection of rules to help researchers better understand the properties of conjugated systems. For example, Pauling found that “a phenyl group is 20 or 30 percent less effective in conjugation than a double bond, and a naphthyl group is less effective than a phenyl group.” To arrive at these conclusions, Pauling used the equations that he had developed in his previous two papers, applying them this time around to conjugated systems.


Jack Sherman and Linus Pauling, 1935.

Pauling’s seven papers on the nature of the chemical bond came to print over the course of thirty months, from article one in April 1931 to article seven in October 1933. The first three papers laid the groundwork for what was to come by defining chemical bonds in quantum mechanical terms. The fourth paper, published in September 1932, appeared at the midpoint of Pauling’s publishing chronology and also served as a kind of transition paper, connecting the concepts introduced in the first three publications to those in the three more that were forthcoming. (Paper four also contained Pauling’s vital electronegativity scale.) The last three articles were devoted to the concept of resonance and its application to a fuller understanding of the chemical bond.

Taken as a whole, this body of work proved hugely important to the future direction of chemistry. By reconciling and applying the principles of quantum mechanics to the world of chemistry, the articles showed that what had once been mostly a tool for physicists could indeed have great applicability to chemical research. In the process, Pauling and his collaborators also rendered quantum mechanics far more accessible to their colleagues across the field of chemistry. The end result was, to quote Pauling himself, “a way of thinking that might not have been introduced by anyone else, at least not for quite a while.”


This is our forty-eighth and final post for 2020. We’ll look forward to seeing you again in early January!

Pauling’s Sixth Paper on the Nature of the Chemical Bond

Table of resonance energy calculations for condensed ring systems

[Part 6 of 7]

“The Nature of the Chemical Bond. VI. The Calculation from Thermochemical Data of the Energy of Resonance of Molecules Among Several Electronic Structures.” The Journal of Chemical Physics, July 1933.

In paper number five in his Nature of the Chemical Bond series, Linus Pauling argued that the theory of resonance could be used to accurately discern the structure of many compounds, and he used Valence Bond theory to substantiate that claim. However, much of the argumentation put forth in the paper relied upon fairly generalized calculations, some of which were subsequently shown to be in error.

In his sixth paper, published one month later, Pauling put forth more definitive calculations that used thermochemical data that were more empirically based, and therefore less prone to errors. As with the previous publication, this paper was co-authored. However, instead of G.W. Wheland, Pauling’s collaborator this time around was Jack Sherman, a theoretical chemist who had received his PhD from Caltech the year before.


The data used in the paper weren’t anything new; in fact, they had been used by chemists for years to calculate energy values and to determine bond energies. However, in many cases these calculations failed because chemists, rooted in classical organic or physical models, assumed that bond energies added up the same way in every molecule, regardless of its structure.

Relying instead on a quantum mechanical approach, Pauling and Sherman argued that compounds could (and should) be organized into two broad categories. In one group, there resided those molecules that were well-approximated by their Lewis structures (classical representations of molecules using lines and dots to represent bonds and electrons). The other group consisted of compounds whose structures could only be accurately explained through resonance.

By organizing compounds into these two discrete bins, Pauling and Sherman were then able to make more accurate calculations of bond energies. More specifically, the duo was able to calculate energies of formation for various molecules by using extant experimental data on heats of combustion. Pretty quickly they realized that energies of formation could be accurately calculated for Lewis structure (non-resonating) compounds.

For resonating compounds, however, the tandem found that observed energies of formation were much higher than the values calculated from additive bond energies. Higher energies of formation yield more stable molecules, and the co-authors concluded that the “difference in energy is interpreted as the resonance energy of the molecule among several electronic structures” and that “in this way, the existence of resonance is shown for many molecules.”
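In schematic form (a restatement of the paper’s logic, not an equation copied from it), the quantity Pauling and Sherman extracted is

\[ E_{\text{resonance}} = E_{\text{formation}}^{\text{observed}} - E_{\text{formation}}^{\text{Lewis}} \]

where the second term is computed by adding standard bond energies for the best single Lewis structure. A molecule well described by one structure gives a difference near zero; benzene and its relatives give large positive differences, and that excess stability is the resonance energy.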

Pauling’s Fifth Paper on the Nature of the Chemical Bond

[What follows is Part 5 of 7 in this series. It is also the 800th blog post published by the Pauling Blog.]

“The Nature of the Chemical Bond. V. The Quantum-Mechanical Calculation of the Resonance Energy of Benzene and Naphthalene and the Hydrocarbon Free Radicals.” The Journal of Chemical Physics, June 1933.

With his fifth paper in the nature of the chemical bond series, Linus Pauling communicated a new understanding of the structures of benzene and naphthalene. While it had long been accepted that benzene (C6H6) was arranged as a six-carbon ring and naphthalene (C10H8) as two six-carbon rings, the specific organization of electrons and bonds within these structures was not known. Before the publication of Pauling’s fifth paper, several ideas on these matters had been proposed, but all were viewed as flawed in some way or another. But where others had been stymied, Pauling found success, and he did so by fully embracing and utilizing the theory of resonance.


At the time that Pauling began this work, there were five competing structures for benzene, each burdened by its own problems. The one that was the most accepted, despite its inability to connect theory to experimental data, was the Kekulé model. Put forth several decades earlier by the German chemist August Kekulé, this model centered around a six-carbon ring that possessed alternating double bonds. Because the arrangement of these double bonds could differ, Kekulé’s model was actually proposing two potential isomers for benzene. The standard understanding at the time was that these two isomers constantly oscillated between one another.

One major problem with the Kekulé approach was that scientists of his generation had never found evidence of the oscillating structures. Furthermore, the Kekulé structures should have been quite unstable, which was contrary to what researchers were able to observe in the laboratory. As such, even though it was compelling in the abstract, the Kekulé model was known to be imperfect.

In his paper, Pauling pointed out the flaws in Kekulé’s work as well as four other concepts published by other researchers. In doing so, he suggested that a common hindrance to all of the approaches was a reliance upon the laws of classical organic chemistry, and a concomitant lack of application of the new quantum mechanics. It was Pauling’s belief that the structure of benzene could be explained using quantum mechanics, as could the structures of all aromatic compounds.


In a handful of previous papers, Pauling had used the theory of resonance to explain a variety of chemical phenomena, but in thinking about benzene and naphthalene he committed more fully to its principles. According to Pauling, all observable data that had been collected for benzene, particularly its bond energies, suggested that benzene was much more stable than any model had yet predicted. But none of the previous models had entertained the possibility of a resonant structure, by which he meant an aggregate structure that was essentially a blend of all possible structures. A structure of this sort, Pauling argued, would conform to a lower, more stable energy state, and would accurately map to the observed data.

For Pauling, therefore, the structure of benzene was not the result of rapid isomerization as put forth by Kekulé, but rather a blend of states. “In a sense,” he wrote, “it may be said that all structures based on a plane hexagonal arrangement of the atoms – Kekulé, Dewar, Claus, etc. – play a part” but “it is the resonance among these structures which imparts to the molecules its peculiar aromatic properties.”

To support his theory, Pauling considered all five possible structures of benzene – which he called “canonical forms” – calculating the energy of each structure as well as the combined resonance energy. Having done so, Pauling then noted that it was the resonance energy that most closely matched the observed data.
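In schematic valence-bond notation (a modern paraphrase rather than Pauling’s own typography), the resonating state is a linear combination of the five canonical forms,

\[ \Psi_{\text{benzene}} = \sum_{i=1}^{5} c_i\,\psi_i \]

with coefficients chosen to minimize the energy. The variational energy of this combination lies below that of any single ψᵢ, and the size of the depression is the resonance energy that Pauling matched against the observed data.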


In addition to its utility, the elegance of Pauling’s approach compared favorably with similar work being published by a contemporary, the German chemist Erich Hückel. Situating his thinking within Molecular Orbital theory, Hückel was able to arrive at a similar conclusion for benzene, but his calculations were quite cumbersome and could not be applied to larger aromatic compounds. By contrast, Pauling was now firmly rooted in Valence Bond theory and his formulae could be applied to all aromatics, not just benzene. In particular, by simplifying some of the calculations that Hückel had made, Pauling was able to overcome some of the mathematical hurdles posed by the free radicals in benzene and other aromatics.

To demonstrate the broad applicability of his ideas, Pauling applied his theoretical framework to naphthalene, which consists of two fused six-carbon rings and has forty-two canonical structures — a great many more than benzene’s five. Despite this significant difference, Pauling was successful in applying the same basic math to determine that the structure was also in resonance.

Indeed, Pauling was certain that his calculations were relevant to all aromatic compounds, noting specifically that “this treatment could be applied to anthracene [a linear three-ringed carbon molecule] and phenanthrene [its angular three-ringed isomer], with 429 linearly independent structures, and to still larger condensed systems, though not without considerable labor.” Were one willing to expend this labor, the calculations would show that the “resonance energy and the number of benzene rings in the molecule would be substantiated” and the structure correctly predicted.


G.W. Wheland

The fifth paper was unique in part because it was the first in the series to be co-authored. The article also marked a switch in publishing forum: whereas the first four had appeared in The Journal of the American Chemical Society, this paper (and the two more still to come) was published in volume 1 of The Journal of Chemical Physics.

Pauling’s co-author for the paper was George W. Wheland, a recent doctoral graduate from Harvard who worked with Pauling from 1932-1936 with the support of a National Research Fellowship. This collaboration proved noteworthy both for the quality of the work that was produced and also because Wheland later became a vocal supporter, advocate and contributor to resonance theory.

Pauling’s Fourth Paper on the Nature of the Chemical Bond

[Part 4 of 7]

“The Nature of the Chemical Bond. IV. The Energy of Single Bonds and the Relative Electronegativity of Atoms.” Journal of the American Chemical Society, September 1932.

The first three papers published by Linus Pauling in his nature of the chemical bond series were all novel, and the first paper in particular made a significant impact. But it is the fourth paper that has proven to be perhaps the most influential of all. In it, Pauling introduced his idea of the electronegativity scale, a cohesive and logical tool that proved to be of major import to the discipline.


The concept of electronegativity can be understood in terms of the likelihood that an atom will attract a pair of valence (or bonding) electrons. The more electronegative an atom is, the more likely that it will attract electrons. The most electronegative element is fluorine and the least electronegative element is francium.

Pauling was able to develop a scale for electronegativity using insights into valence bond energies. The ideas that he had put forth in his three preceding papers did not always firmly commit to either Molecular Orbital theory or Valence Bond theory, and this vacillation led to significant flaws in paper number two. Beginning with the fourth paper however, Pauling chose to base his work on the tenets of Valence Bond theory.

The electronegativity scale that Pauling developed was also somewhat indirect, in that it could not be calculated directly, but instead had to be inferred from the atomic and molecular properties of a given element. And even though its creation relied upon a series of assumptions and simplifications, the tool was nonetheless quite sophisticated. Prior to Pauling’s publication of his electronegativity scale, chemists either had to rely upon their own best guesses to determine bond affinities or, if they wanted more precision, compute bond energies for every interaction. Pauling’s scale both standardized and simplified these processes while creating a context where chemists could make predictions of “the energies of bonds for which no experimental data are available.”

Clearly one key piece of utility that the electronegativity scale provided was the ability for chemists to draw conclusions without the need for a lot of computation. For example, based on the electronegativity difference between two atoms, chemists could roughly deduce the ionic character of the bond between them. However, in order to predict the ionic character of a bond, Pauling had first needed to make some assumptions, such as creating an “arbitrarily chosen starting point” to which everything is relative. Even though this approach was not precise, Pauling argued that its usefulness justified the simplification of the approach. And naturally, over time, Pauling’s electronegativity scale became more honed and more specific, an evolution that Pauling also predicted in the fourth paper.


Most of Pauling’s article was devoted to a discussion of how he developed the scale and what kinds of measurements were used in the calculations. To begin, he focused on the qualities of covalent attraction or repulsion in compounds formed by identical elements, such as H:H or Cl:Cl. From there, Pauling used quantum mechanical wave function properties to establish that “energies of normal covalent bonds are additive.” It was from this central theorem that Pauling built out the rest of his conceptual work and also his scale, predicting, for example, bond energies for light atoms and halogens.

Pauling also relied upon others’ helpful calculations in creating his scale, and especially those related to heats of “formation and combustion of gaseous materials” put forth in the International Critical Tables (National Research Council) and elsewhere. The heats of formation were especially useful because they helped to correct for any unknown bond energies. In fact, by using these experimental data for twenty-one different single bond energies, Pauling was able to derive a formula that predicted where a given element should reside on the electronegativity scale. For bonds where data were not available, Pauling used his predictive models to extrapolate approximate energies.


The formula that Pauling derived and published was Δ(A:B) = (χA − χB)², where χA and χB represent the coordinates of atoms A and B on an electronegativity map (see Fig. 5 above), and Δ(A:B) represents the extra energy of the A:B bond – the amount by which it exceeds the purely covalent, additive value – arising from the electronegativity difference.

Here, for example, is how Pauling calculated the electronegativity value for oxygen. He began by establishing the heat of formation of water:

2H + O = H2O(g) + 9.493 v.e.

Then, based on the fact that the H:O bond is found to have a bond energy of 4.747 v.e., Pauling further calculated that:

H2O2(l) = H2O(l) + ½ O2 + 1.02 v.e.

Pauling next combined this value with the heat of vaporization of H2O2, which is 0.50 v.e., and found that:

2H + 2O = H2O2(g) + 10.99 v.e.

From there, using his original postulate that bond energies are additive, Pauling subtracted 4.75 v.e. for each H:O bond to yield 1.49 v.e. for the O:O single bond. And since Pauling had concluded in his previous papers that the bond in O2 is a double bond, the electronegativity value for oxygen was found to be 3.47. (The present-day electronegativity value for oxygen has been revised to 3.44.)
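The arithmetic above is simple enough to replay. The short sketch below follows the same recipe using the post’s two derived bond energies plus a reference value for the H:H bond (an added assumption, roughly the dissociation energy of H2 in the same volt-electron units); the numbers are approximate and meant only to illustrate the method, not to reproduce Pauling’s table.

```python
from math import sqrt

# Single-bond energies in volt-electrons (eV), the unit Pauling used.
D_HO = 4.747  # H:O, half the heat of formation of H2O(g) from atoms (from the post)
D_OO = 1.49   # O:O single bond, derived above from H2O2 (from the post)
D_HH = 4.50   # H:H, approximate reference value (added assumption)

# Additivity postulate: a "normal" covalent A:B bond should have the mean
# energy of A:A and B:B; any excess is attributed to extra ionic resonance.
delta = D_HO - (D_HH + D_OO) / 2

# Pauling's relation delta = (chi_A - chi_B)^2 then gives the gap:
print(f"electronegativity difference H-O ~ {sqrt(delta):.2f}")  # ~1.3 (Pauling's scale: ~1.4)
```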


Even though it relied in part on a collection of assumptions and simplifications, Pauling’s electronegativity scale has been widely used and stands as a lasting component of his legacy. In addition to its ability to approximate values for a wide variety of compounds, the scale was also important for establishing the idea that electronegativity is not a fixed number that never changes. Instead, Pauling understood that an element’s electronegativity value emerges from its bonding relationships. Because of this, calculating absolute values has been difficult, but Pauling’s scale continues to be useful and predictive.

Pauling’s Third Paper on the Nature of the Chemical Bond

[Part 3 of 7]

“The Nature of the Chemical Bond. III. The Transition from One Extreme Bond Type to Another.” Journal of the American Chemical Society, March 1932.

In his third paper exploring the nature of the chemical bond, Linus Pauling dug into the unsolved question of how molecules transition from one kind of bond type to another. While it had been determined that molecules do switch from one kind of bond to another – from an ionic bond to an electron-pair bond, for example – the specifics of how that transition happens remained elusive.

Prior to the third paper, two prevailing ideas were being debated by chemists. One concept, as Pauling wrote, was that “all intermediate bond types between the pure ionic bond and the pure electron-pair bond” exist in some kind of infinite transitionary state. A contrary viewpoint put forth instead that molecules “transition from one extreme bond type to another” in an abrupt manner. Pauling suggested that the answer lay somewhere in between.


In order to determine how molecules transition, Pauling first needed to establish the bond structures of given molecules in their initial states. He did so by defining the bonding characteristics of molecules, a task that takes up the majority of the paper. But amidst this discussion, Pauling arrived at several key conclusions.

To begin, Pauling described many cases where a relationship existed between atomic arrangement – as determined by x-ray crystallographic analysis – and bond energies. When, for instance, a strongly electropositive atom and a strongly electronegative atom bonded, it was reasonable to assume that the bond was ionic. This presumed, Pauling then used electron energy curves to show that an example group, the alkali halide molecules, were strongly ionic, and that they might generally be thought to form ionic bonds.

As Pauling pointed out, however, these presumptions were faulty. In fact, studies of the bonding in hydrochloric acid (HCl) and hydrobromic acid (HBr) indicated that both molecules were essentially covalent in make-up, whereas hydrofluoric acid (HF) was ionic. So even though it might reasonably have been assumed that the initial states for HCl, HBr and HF would be similar in their bonding, the experimental data indicated otherwise. These findings led Pauling toward the conclusion that there is no single universal answer to the question of how molecules transition, because there is no steadfast rule determining the types of bonds that hold molecules together before they transition.


Having arrived at the conclusion that one could not lean upon a guaranteed universal bond type, Pauling then turned to his burgeoning theory of resonance to develop more precise thinking about transition mechanisms. Pauling specifically argued that when bonds transition from one type to another, rather than shifting either abruptly or in a continuous state – as the two competing models then prevailing put forth – they instead shift to an intermediate resonant state before switching to a new bond type.

Pauling was in essence suggesting that, in between classically-defined “completed” bond states, there also existed an intermediate bonding state that could best be understood through the theory of resonance. Moreover, Pauling argued that idealized bonds, such as pure covalent bonds or pure ionic bonds, did not technically exist. Rather, bonds might more accurately be described as constantly transitioning through resonant states, some of which more closely approximated a classic bond type.
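In the valence-bond shorthand that grew out of this work (again a modern restatement, not a formula from the paper itself), a real bond is written as a resonance mixture of the two idealized extremes,

\[ \Psi = a\,\psi_{\text{covalent}} + b\,\psi_{\text{ionic}} \]

with the fraction b²/(a² + b²) serving as a rough measure of the bond’s ionic character. The pure types correspond to a = 0 or b = 0, limits that real molecules approach but never quite reach – exactly the picture Pauling sketches here.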

Pauling understood that the concept he was putting forth was quite theoretical and that, in practical terms, it was hard to work with molecules if they existed in a constant state of transition. As such, Pauling allowed that, for purposes of discussion, it was acceptable to think of molecules as residing in discrete bonding states. He likewise acknowledged the convenience of using more traditional names (ionic, covalent, etc.) when referring to bonds, even if they never fully existed.

Pauling then concluded that, even though bonds were constantly transitioning, for certain bond types – such as “when the normal states for the two extremes have the same number of unpaired electrons” – it could be assumed that they had transitioned in a continuous state. But, of course, continuous state transition was definitely not always the case and could not be universally applied.


In conducting the work that led to his third paper, Pauling had sought to define a universal rule that would govern the transition between bond types. By the time that he delivered his manuscript though, he had recognized that not only was a universal law unattainable, but that what he did find had its limitations. In particular, Pauling suggested of his approach that “It is not possible at the present time to carry out similar calculations for more complicated molecules,” though “certain less specific conclusions can, however, be drawn.”

Regardless, Pauling’s third paper broke new ground on a topic of keen importance to structural chemists. By applying the theory of resonance, Pauling helped chemists to understand that there was a spectrum of polarity, and that bonds were not always strictly of one kind or the other. Importantly, in this same paper Pauling did not fall prey to dogmatism, and allowed that bonds residing near the ends of one spectrum or another might fairly be said to represent so-called “classic” bond types.