A Resonating-Valence-Bond Theory of Metals and Intermetallic Compounds, 1949

Linus Pauling, 1949

As we have written elsewhere, Linus Pauling developed and championed the theory of resonance early in his career. But for almost twenty years, the ways that metals might conform to the theory remained elusive. Even though he had a hunch that they too adhered to the tenets of resonance, he was unable to prove it definitively at the outset. Not until 1949, with the publication of “A Resonating-Valence-Bond Theory of Metals and Intermetallic Compounds,” was he able to demonstrate that metals do resonate.

Pauling’s interest in metals dated at least as far back as his undergraduate years at Oregon Agricultural College and continued to flourish during his graduate training at the California Institute of Technology. Later in his career, when asked to reflect on his contributions to the field of chemistry, he often spoke of his early work with metals as being important. This was especially so with his work on metals and resonance.


“A Resonating-Valence-Bond Theory of Metals and Intermetallic Compounds,” which was published in the Proceedings of the Royal Society of London, serves as an addendum of sorts to Pauling’s previous papers on the nature of the chemical bond. Even though Pauling’s theory of resonance had been well received for several years, confusion still existed amongst chemists (including Pauling) about how the theory might apply to metals, particularly the iron-group transition metals. Pauling believed that, as with other elements, resonance must explain the way that these metals bond, but the specifics proved difficult to pin down.

Pauling had always aligned himself with the notion that the properties of an element were connected to the configuration of its valence electrons. For example, he knew that the arrangement of valence electrons gave carbon its stable tetrahedral properties and salts their ionic properties. Metals, however, were harder to define because they “showed great ranges of values of their properties, such as melting and boiling point, hardness and strength, and magnetic properties.”

Pauling initially focused on magnetism, and from this work it was determined that metals had a high ligancy of either 8 or 12. Ligancy, the number of atoms to which a given atom is bonded (often referred to as coordination number), is similar to but distinct from valency, and for the metals that Pauling was studying, a ligancy of 8 or 12 was higher than the valence. This discrepancy seemed to indicate that resonance could not be applied to explain how metals bonded. But Pauling still believed that resonance was the answer, and he set about trying to confirm this belief.


In his quest to understand if or how metals resonated, Pauling first needed to clarify whether previous thinking about metals was correct. Prior to the publication of his 1949 paper, it was believed that the d orbitals did not participate in bonding in the iron-group transition metals, such as manganese, iron, cobalt, nickel, and copper. Using these assumptions, Pauling modeled the predicted properties of such metals and found that they would have low melting points and weak bonds. The laboratory data indicated otherwise, however, leading Pauling to conclude that the current understanding was incorrect, and that something else had to be going on with the bonds in order to account for their physical properties.

For several years, Pauling had been unable to devise a competing theory that would explain the unique properties of metals, but he felt certain that the answer would be revealed through the application of quantum mechanics. Using the wave-mechanical formulation of quantum mechanics specifically, Pauling calculated that metals used 8.28 orbitals instead of the expected 9. While he was sure that the math was correct, he struggled to understand what accounted for the missing 0.72 orbitals, and for almost nine years he worked to reconcile the discrepancy. Ultimately, though, he realized that the 0.72 orbitals were, in fact, the answer he was looking for; they were what gave metals their unique properties.

In its essence, Pauling’s breakthrough was that the 0.72 orbitals were not a mathematical anomaly, but instead an extra orbital, one that he called a “metallic orbital.” The metallic orbital was, according to Pauling, “required” to give metals their “characteristic properties, especially that of electronic conductivity of electricity.” In Pauling’s view, substances lacking the metallic orbital that still maintained some metallic properties, such as electric conductivity, should be classified as metalloids.
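The arithmetic behind the “missing” orbitals is worth making explicit. The following is a minimal sketch using the nine valence orbitals (one s, three p, five d) available to an iron-group atom and the 8.28 figure quoted above; the variable names are illustrative, not Pauling’s:

```python
# Orbital bookkeeping behind Pauling's "metallic orbital."
# An iron-group atom has nine valence orbitals available:
# one s, three p, and five d.
available = 1 + 3 + 5

# Pauling's statistical treatment found that only 8.28 of these
# were accounted for by bonding electrons and unshared pairs.
used = 8.28

# The remainder, per atom, is what Pauling named the metallic orbital.
metallic_orbital = available - used
print(round(metallic_orbital, 2))  # → 0.72
```

The point of the sketch is simply that the 0.72 is a per-atom average over many resonating structures, not a whole orbital on any single atom.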

Pauling’s conceptualization of the metallic orbital allowed him to subsequently reframe his understanding of resonance. When Pauling developed the theory, it was more specifically known as synchronized resonance, a state in which all bonds resonate in the same way and valence is undisturbed by the process. Pauling realized that, in order for metals to resonate, a different kind of resonance – unsynchronized resonance – would be needed. In unsynchronized resonance, instead of all bonds resonating simultaneously, a single bond could resonate on its own. In the case of metals this happened in the metallic orbital, and unsynchronized resonance was found to confer unusual stability and high ligancy.


Once Pauling made these connections, he was ready to publish his paper, which, first and foremost, sought to prove the existence of a metallic orbital. To do this, Pauling had to show that the existing understanding of valency was incorrect, and then to demonstrate that resonance – specifically unsynchronized resonance – could account for both the predicted and the observed properties of metals.

Using lithium as his model element, Pauling explained that the current understanding posited a valence structure consisting of 2s orbitals, and that this was the commonplace belief because “the [proposed] molecular orbitals correspond to electron energies.” By using the wave function, however, Pauling found that the assumed orbital structure did not correspond to observed electron energies. Specifically, he calculated that if lithium had only 2s valence orbitals, the predicted heat of formation would differ from the observed heat of formation by a calculated 32.4 kcal/g. In short, the current understanding was incorrect.

Pauling’s next step was to use resonance to explain lithium’s metallic properties, and to devise an arrangement of valence electrons that would match predicted energies with observed energies. Using the idea of unsynchronized resonance, Pauling suggested that if, instead of occupying fixed 2s orbitals, lithium’s valence electrons were actually in resonance, this new arrangement could be “responsible for the difference in energy.” In other words, without resonance the electrons existed in a fixed energy state. But if the electrons were in resonance, their energy state would instead be a hybrid of all possible energy states. This circumstance, Pauling postulated, would confer lower energies on the molecule and bring predicted and observed energies into alignment. In the case of lithium specifically, Pauling found that the valence electrons were not only in 2s orbitals, but also in 2p orbitals. Pauling worked through different metals throughout the remainder of the paper to further support his thinking.

The theoretical work that Pauling put into this paper completed the framework for resonance. And interestingly, even though the ideas that he presented were accepted and widely used for decades, some of the math in the paper was not fully validated at the time. In fact, it wasn’t until 1984, when Pauling revisited the 0.72 orbitals, that the math was completed. By then Pauling had acknowledged that the calculations he used to get to 0.72 orbitals had been crude, and based in part on educated guesses. But by the 1980s, many of the unknowns were known, which allowed Pauling to revisit the math with more precision and compare it to his work from the late 1940s. Using clean, contemporary data, Pauling confirmed that the new calculation was “in exact agreement with the observed value of 0.72.”

The Nature of the Intermolecular Forces Operative in Biological Processes, 1940

Linus Pauling, Max Delbrück and Max Perutz at the American Chemical Society centennial meeting, New York. April 6, 1976.

In 1940 Linus Pauling, along with colleague Max Delbrück, authored a three-page article that was published in the July issue of the journal Science. The article was shorter than was typical for Pauling, but what made it even more unusual was that it was not about Pauling’s own findings. Instead, the piece served as a critique of a different article published earlier that year by a German scientist, Pascual Jordan.

In it, Jordan argued that when like molecules bonded, they were attracted more strongly than when dissimilar molecules bonded. Jordan believed that this stronger attraction of like molecules conferred special properties to these bonds, especially when they occurred in living cells. Pauling and Delbrück totally disagreed with this idea. Instead, the duo believed that it was a molecule’s complementary nature that conferred stability, an idea in opposition to Jordan’s concept of similarity.

In the two decades preceding these papers, chemists had come to look at their field in different ways, due mainly to advancements in quantum mechanics. This was certainly true for Pauling, who rapidly developed a reputation for using these new ideas to solve old problems. One line that he did not cross however, was the application of quantum mechanics to help “solve” topics that were already well understood and not in conflict.

For Pauling, one such instance was the basis of molecular attraction, and how that attraction created stability in a newly formed molecule. This question, however, was something that other scientists found worth examining, Pascual Jordan in particular. Accordingly, and armed with a new set of quantum mechanical theories, Jordan set about attacking a question that others, including Pauling, believed was not in need of answering.


Pascual Jordan

Pascual Jordan was born in Germany in 1902 of Spanish lineage. Though initially interested in the arts, Jordan studied math and physics in school, completing his physics Ph.D. in 1924. His ideas at this time were novel, with no less a figure than Albert Einstein taking note of his dissertation. But Einstein did not agree with certain of the hypotheses that Jordan was putting forth, many of which used quantum mechanics to consider the photon nature of light. While Einstein felt that there wasn’t necessarily anything wrong with Jordan’s ideas, he did not agree with the logic that informed them, and wrote missives in opposition.

But others supported Jordan’s work and, soon after graduation, he began working with a circle of colleagues that included Werner Heisenberg. During this time, Jordan became one of the biggest proponents of quantum mechanics and, along with Heisenberg, helped to unlock many of its secrets. Jordan was also a member of the Nazi party, which he joined in 1933 and remained part of until at least the end of the war. Nonetheless, Jordan helped to develop key theories in physics and math which are foundational to the fields today.


Though Jordan’s legacy today is marred by his political positions, when he wrote his 1940 paper about the attraction of molecules in biological cells, he did so from a position of authority. As noted, the foundation of the paper is the idea that identical molecules are attracted to one another in a special way that does not exist for dissimilar molecules and that, because of this, the bonds formed between like molecules are more stable than other bonds. Jordan’s hypothesis, if true, would have been groundbreaking and consequential for all sorts of bonds, especially those in living cells.

Understandably, the paper created a lot of commotion when it was published. Pauling, who at that point was also an authority on quantum mechanics and resonance theory, was no doubt among those surprised by Jordan’s proposition. After reading the paper, though, he immediately saw its flaws. In it, Jordan himself had admitted some doubt that resonance could work in the manner that he was suggesting, and Pauling was sure that the ideas were wrong. Wishing to publish a rejoinder, Pauling began looking for a co-author whose expertise centered on bonds in living cells, and Max Delbrück was just such a figure.


Like Jordan, Delbrück was born in Germany, in his case in 1906. Interested in the stars, Delbrück began his studies in astrophysics, but changed directions upon meeting a physical chemist, Karl Bonhoeffer, who was eight years his elder. Fascinated by Bonhoeffer, Delbrück switched to physical chemistry in a ploy to become his friend, a tactic that ultimately worked well. The timing of the switch was also fortuitous, as Delbrück entered the field at the beginning of the quantum revolution. After graduation, Delbrück studied all over Europe with scientists including Wolfgang Pauli and Niels Bohr. He eventually spent a few years at the California Institute of Technology on a Rockefeller Foundation fellowship, during which time he met Pauling and co-authored the 1940 paper. After leaving Caltech, Delbrück focused his research on bacteriophages and eventually won the 1969 Nobel Prize in Physiology or Medicine for this work.

Even though Delbrück’s Nobel honor was nearly thirty years down the road, by 1940 he was already well versed in the ways that living cells operated, making him a formidable writing companion. In their paper, Pauling and Delbrück argued that Jordan’s fundamental idea could not be correct because the stability of a molecular interaction was conferred by the complementarity of its components, not their similarity. By way of explanation, the duo first put forth the understanding that a stable complex is one in which intermolecular distances are relatively short. This is a circumstance, they argued, that can best be achieved when complementary forces are working together, such as positive ions attracting negative ions. In other words, in a bonding pair “the two molecules must have complementary surfaces, like die and coin.” The like molecules that Jordan was advocating for were not complementary by definition; rather, they were identical, or close to it. Pauling and Delbrück acknowledged that “the case might occur in which the two complementary structures happen to be identical” but still their stability “would be due to their complementariness rather than their identity.”

Even though Pauling and Delbrück’s article was quite short, its message was clear: Jordan was plainly wrong. As they wrote, “We have reached the conclusion that the theory can not be applied in the ways indicated by him [Jordan], and his explanations of biological phenomena on this basis can not be accepted.” In short order, the scientific mainstream came to agree with their point of view, and Jordan’s ideas soon faded away.

The Nature of Interatomic Forces in Metals, 1938

Linus Pauling, ca. 1930s

“In recent years I have formed, on the basis of mainly empirical arguments, a conception of the nature of the interatomic forces in metals which has some novel features.”

– Linus Pauling, 1938

Prior to the publication of this article, which appeared in the December 1938 issue of Physical Review, much about the interatomic forces operating in metals was either unknown, or theoretical predictions did not align properly with observed data. In publishing this paper, Linus Pauling first sought to resolve the incongruities between theory and data for the transition metals, such as iron, cobalt, nickel, copper, palladium, and platinum. He was then able to correctly predict properties including “interatomic distance, characteristic temperature, hardness, compressibility, and coefficient of thermal expansion” by discarding previously held assumptions and inserting new – and correct – assumptions about transition metals.

The most significant idea that Pauling introduced with this paper was the notion that the valence shell electrons – those in the outer shell – play a part in bonding. Previously, scientists believed that these electrons made “no significant contribution” to bond formation. Pauling was able to establish otherwise, and used this breakthrough to both align observable data with theoretical data, and make other predictions about transition metals.


The 1938 paper was written in the wake of a revolution within the world of chemistry. A raft of new theories brought about by a widening understanding of quantum mechanics was generating intense excitement for scientists worldwide, and the tools that quantum mechanics provided for helping to “correct” previous understandings of the chemical bond were of paramount interest to many. Pauling, of course, was a leader in this area, his body of work ultimately garnering the 1954 Nobel Prize in Chemistry for “research into the nature of the chemical bond and its application to the elucidation of the structure of complex substances.”

Within this area of focus, many scientists were especially interested in exploring the ways that metals bonded because, as noted, the observed data did not match up with theory. Pauling sought to mend this gap by using quantum mechanics to look at interatomic forces in a novel way. Prior to the Physical Review paper, chemists believed that when metals bonded, their valence shell electrons played only a small role in the resulting structures. Pauling argued otherwise, and put forth an important new theory that the valence shell electrons contributed to the process through resonance, a concept that he had developed earlier in the decade and continued to champion.


Because the crux of Pauling’s scientific intervention was to prove that valence shell electrons are involved in bonding, most of the paper is devoted to supporting this claim. The primary tool that Pauling uses to craft his argument is an analysis of temperature predictions. According to the reigning theory regarding metals and valence shell bonds, when bonding occurred, the electrons would bond in a manner that would give rise to a ferromagnetic moment. Specifically, it was theorized that these ferromagnetic moments would be temperature dependent, meaning that as the temperature of the metal changed, its degree of magnetism would also change in a predictable way. Experiments had shown, however, that when metals bonded, their ferromagnetism remained independent of temperature.

Pauling exploited this piece of information and used it to support his theory. According to Pauling, if metals bonded through resonance, they would create ferromagnetic moments that were temperature independent, a hypothesis that correctly aligned with the observed data.

To develop his argument, Pauling made specific use of the element vanadium, which has an electron configuration of 3d³4s². Under the old model, vanadium’s valence electrons could only interact weakly in bonding, with at most the two 4s electrons involved in a bond. This, according to Pauling, would create ferromagnetism that would decrease with increasing temperature, meaning that it was temperature dependent. On the contrary, the experimental evidence showed that vanadium’s magnetism was temperature independent. This meant, therefore, that weak valence interaction during bonding was not possible.

Pauling’s alternative suggestion was that all of vanadium’s valence electrons were involved in bonding through resonance, not just the two 4s electrons, as previously believed. Further, if the valence electrons bonded through resonance, the ferromagnetism of their structure would be temperature independent, a prediction that aligned with the observed data.
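The contrast between the two models can be tallied directly from vanadium’s electron configuration. A small illustrative sketch (the dictionary layout is mine, not from the paper):

```python
# Vanadium's valence configuration: [Ar] 3d3 4s2.
valence = {"3d": 3, "4s": 2}

# Old model: at most the two 4s electrons participate weakly in bonding.
old_model_bonding_electrons = valence["4s"]

# Pauling's model: all five valence electrons bond through resonance.
resonance_bonding_electrons = sum(valence.values())

print(old_model_bonding_electrons, resonance_bonding_electrons)  # → 2 5
```

The difference between two and five bonding electrons is what lets the resonance picture account for the strong bonds and temperature-independent magnetism that the old model could not.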


Once Pauling was able to prove that the valence electrons in vanadium bonded through resonance, he then began to apply the concept to all transition metals. As with the previous example, Pauling continued to support the concept by comparing predicted outcomes with empirical data. And once again, when the bonding was viewed through the prism of resonance, predicted magnetic moments began to align with the empirical data.

Pauling then took it another step by repeating the exercise with interatomic distances. As demonstrated in his paper, a resonant structure would correctly predict the interatomic distances that had been observed for many bonds. Pauling also claimed that other properties, such as the “compressibility, coefficient of thermal expansion, characteristic temperature, melting point, and hardness” would likewise correctly align with experimental evidence, once resonance was used to explain valence shell bonding.

Though clearly a significant breakthrough, the assertions that Pauling made in his paper were grounded in work done by others — notably the quantum mechanical theory of ferromagnetism developed by Heisenberg, Frenkel, Bloch, Slater, et al., and Wolfgang Pauli’s theory of the temperature-independent paramagnetism of the alkali metals. And while the 1938 article acknowledges these debts, it also attempts to improve upon them.

This was especially so with the quantum mechanical theory of ferromagnetism. As we have seen, Pauling successfully applied the idea of temperature independence and ferromagnetism to support his claims, but he also found one aspect of the theory to be needlessly bothersome. As Pauling noted, in order for much of the theory to work on a mathematical level, scientists were compelled to assign positive numbers to all unpaired valence electrons. Pauling recognized that this was only necessary if it was assumed that valence electrons did not play a large role in bonding for metals. Under a resonant scenario, Pauling was able to show that the math could still work if the valence electrons were negative and that, once again, “this conclusion agrees with the observation.”

The Theoretical Prediction of the Physical Properties of Many-Electron Atoms and Ions. Mole Refraction, Diamagnetic Susceptibility, and Extension in Space, 1927

Linus and Ava Helen Pauling in Copenhagen, May 1927

[Ed Note: Today and in the three posts that will follow, we will be taking a close look at four important scientific articles published by Linus Pauling between 1927 and 1949.]

In this ambitious and hugely influential paper, Linus Pauling applied his theory of screening constants to various problems, including electric polarizability, diamagnetic susceptibility, and the sizes of ions and atoms. Pauling was fundamentally interested in pursuing this topic because of his desire to merge the new quantum mechanics – which embraced wave functions – with older ideas in order to make predictions about properties like mole refraction, diamagnetic susceptibility, and extension in space.

Especially during the early phases of his career, one of Pauling’s signature rhetorical tools was to put forth a bold assumption that would serve to simplify the predictions made later on in a given article. This paper is among the best examples of that approach. In it, Pauling developed mathematical relationships that, when applied, could help the reader make generalizations about molecules. But in moving through these calculations, Pauling had to make some assumptions, oftentimes without the aid of hindsight to determine whether or not they were correct. One enduring legacy of this paper is that many of Pauling’s assumptions were indeed correct, and its findings have thus remained relevant across the decades.


Another contributing factor to the paper’s success was that Pauling was in the right place at the right time. While working towards his PhD at Caltech, Pauling enthusiastically followed the rapid development of quantum mechanics in Europe and elsewhere. He was particularly interested in the work of Arnold Sommerfeld in Munich and Niels Bohr in Copenhagen, and wrote to both to inquire about research opportunities. Bohr never responded but Sommerfeld did and, with his support, Pauling secured a Guggenheim Fellowship that allowed him to live and work in Europe for 19 months. During that period, he spent most of his time with Sommerfeld at the Institute of Theoretical Physics, though he did visit Bohr in Copenhagen as well as Erwin Schrödinger in Zurich.

Pauling’s residency in Europe proved auspicious, in part because Sommerfeld and his colleagues were working on uniting new ideas with old, a task not being readily pursued in the United States at the time. As they moved forward with their work, the European scientists began to solve more and more problems with quantum mechanics, cementing in Pauling’s mind the utility of the approach as a way forward.


Gregor Wentzel (Image credit: Emilio Segre Visual Archives)

One project important to Pauling’s paper was being led by University of Leipzig physicist Gregor Wentzel. A colleague of Sommerfeld, Wentzel was seeking to apply quantum mechanics to x-rays in order to calculate the screening constants of electrons in large and complex atoms. His project had hit a snag, however, in that he was unable to find agreement between the observed data and the values predicted by theory. After scrutinizing his work, the young Pauling found that Wentzel had made errors in his calculations. Once these miscalculations were corrected, the observed and predicted data did in fact agree, meaning that Wentzel’s underlying approach had been sound all along. In so doing, Pauling confirmed the value of quantum-mechanical calculations in predicting the screening constants of electrons in complex atoms.

Armed with this information, Pauling recognized that he could use these same calculations to make predictions about electron arrangement in molecules and the relative sizes of ions, among other properties. This led to the publication of his paper, The Theoretical Prediction of the Physical Properties of Many-Electron Atoms and Ions. Mole Refraction, Diamagnetic Susceptibility, and Extension in Space, which appeared in 1927 in the Proceedings of the Royal Society of London.

In its essence, the article used the wave mechanical feature of quantum mechanics to make predictions about molecules, an approach that emerged directly from Pauling’s exposure to European efforts to unify old ideas with new. And even though it was not the first time that Pauling had written a paper utilizing quantum mechanics, it was certainly his first publication in which he used these novel tools to make predictions about molecular properties. 


Fundamental to these predictions were three key assumptions that Pauling put forth at the beginning of his paper. The first was that,

each electron shell within the atom is idealized as a uniform surface charge of electricity of amount −zᵢe on a sphere whose radius is equal to the average value of the electron-nucleus distance of the electrons in the shell.

The second assumption stated that,

the motion of the electron under consideration is then determined by the use of the old quantum theory, the azimuthal quantum number being chosen so as to produce the closest approximation of the quantum mechanics.

And the third assumption was that,

since s₀ does not depend on Z, it is evaluated for large values of Z, by expanding in powers of zᵢ/Z and neglecting powers higher than the first, and then comparing the expansion with that of the expression containing Z − s₀ in powers of s₀/Z.

Armed with these assumptions, Pauling was able to issue a collection of predictions about molecules, particularly concerning mole refraction and diamagnetic susceptibility. Prior to his doing so, chemists lacked the necessary tools for making predictions of this sort, meaning that certain chemical properties remained hazy or unknown.

This issue was particularly salient for the hydrogen atom. In the months leading up to the paper’s publication, a huge debate had emerged concerning the polarizability of hydrogen. The prevailing formula had been proven incorrect in 1926, after which time a race ensued to find a new, more suitable equation. Eventually a successor formula was developed, but it was criticized as being “a conservative Newtonian” model. Agreeing that a more robust approach was needed, Pauling set about applying quantum mechanics and, based on his three assumptions, derived a new expression for the polarizability.

Knowing full well that the equation rested on his three assumptions, and anticipating resistance, Pauling pre-emptively argued that “it might be thought that these values of ɣ are not correct because of the fact that the electron shells actually do not consist of hydrogen-like electrons, but rather themselves of ‘penetrating electrons.'” However, “as Z [the atomic number] increases, the ‘penetrating orbits’ become more hydrogen-like” and any resulting error would therefore be “negligible.” Having put forth this solution to the problem of hydrogen, Pauling was then able to more broadly demonstrate the utility of his ideas.

Indeed, even though much of the work in the paper relied on assumptions that were oftentimes crude – such as using data from the valence shell electrons only – Pauling was able to create complex (and, as it turned out, fairly accurate) tables of the polarizability of ions, diamagnetic screening constants, and mole refraction, among other predictions.

It is clear that Pauling believed strongly in his paper, which he felt would “make possible the accurate prediction of the properties of any atom or ion.” And though the approach would sometimes only yield “approximate values of the physical properties of ions” based on his three assumptions, the importance of the work was not diminished as, oftentimes, directly observed data “may not exist under conditions permitting experimental investigation.”

Sickle Cell Research to the Present and the Future

Three-dimensional rendering of sickle cell anemia blood cells. Credit: National Institutes of Health.

By Dr. Marcus Calkins, Part 3 of 3

Forty years after Linus Pauling and his lab demonstrated the molecular basis for Sickle cell disease and James Watson speculated that upregulation of fetal hemoglobin may protect from the disease, methods to control fetal hemoglobin specifically in red blood cells began to be developed. The molecular biology revolution of the late 20th century had produced extensive knowledge about the molecular systems that drive fetal hemoglobin production, but harnessing that intricate knowledge has taken another thirty years.

Hematopoietic Stem Cells (1990s)

Since the advent of radioactivity research, it has been well established that red blood cells have a short lifespan of only about 115 days and are continually produced from precursors in the bone marrow. In order to replace defective blood cells in an individual, a protocol for whole-body irradiation and allogeneic bone marrow transplant was pioneered by a group of doctors in Seattle in the 1970s as a treatment for cancer patients.

However, the ability to specifically isolate and identify hematopoietic stem cells from patients was only developed in the 1990s. At that time, nuclear dye exclusion and flow cytometry characteristics were used to isolate the stem cells, but since then, a variety of cell surface markers have been identified, and protocols to isolate, expand and differentiate hematopoietic stem cells have become standardized. In addition, scientists have learned to modify the hematopoietic stem cells at a genetic level, creating the possibility that stem cells may be extracted, genetically modified ex vivo, and then used to reconstitute the bone marrow of patients with blood diseases.

For patients with Sickle cell disease, it may therefore be possible to extract hematopoietic stem cells and inactivate the BCL11A gene, which normally suppresses fetal hemoglobin. The red blood cell progeny of these altered stem cells would then produce fetal hemoglobin that could mask the effects of the disease-causing mutation in the β-globin gene. Afterward, the modified stem cells could be transplanted back into the same patient from whom they were isolated, providing the person with a continual supply of red blood cells that express fetal hemoglobin and are resistant to sickling.

Gene Therapy and Genome Editors (2000s-2010s)

The final component of a therapy for Sickle cell disease has recently been realized, as it is now feasible to efficiently inactivate BCL11A in isolated hematopoietic stem cells. In the last two decades, several systems of modifying the genome (gene editors) have been developed. Although the first editors to be produced may still find clinical use, CRISPR has quickly overtaken previous technologies to become the most widely applied and well-known gene editing platform.

Beginning in the late 1990s, researchers developed two key methods of using engineered proteins to make targeted edits to the genome. These early gene editor proteins are called Zinc fingers and TALENs, both of which are being tested in clinical trials today. Each of these editors is able to target a highly specific DNA sequence and cut the DNA helix at a predictable site. Once the DNA strand is cut, error-prone DNA repair processes are activated to fix the break, often resulting in random base insertions, deletions and changes. In this way, the genetic code is disrupted in some cells, and these random disruptions often serve to inactivate the targeted gene. If the cells carrying an inactivated gene can be identified and expanded, whole populations of cells with the genetic alteration can be established.

The drawback of TALENs and Zinc finger proteins relative to CRISPR is that targeting a particular site in the genome requires a major technical effort. Because the proteins themselves recognize the DNA sequence of interest, each target sequence must have its own specialized editor. The introduction of CRISPR/Cas9 in the early 2010s allowed researchers to target new DNA sequences much more easily. This system uses a short guide RNA molecule for DNA targeting, so the same Cas9 protein can be used to cut any genomic site. Since generating these guide RNAs is a relatively simple procedure, the effort required to design and execute genome edits is greatly reduced.
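The simplicity of guide design follows from the fact that Cas9 needs only a 20-nucleotide spacer matching the target site, immediately adjacent to a short "NGG" motif (the PAM). A minimal sketch of that search is below; the function name and toy sequence are illustrative, and real guide design also scans the opposite strand and screens candidates for off-target matches:

```python
# Illustrative sketch only: scan a DNA sequence for candidate SpCas9
# target sites, i.e. a 20-nt protospacer immediately followed by an
# "NGG" PAM. Real guide design tools also check the reverse strand,
# GC content, and genome-wide off-target similarity.

def find_candidate_guides(seq: str) -> list[str]:
    """Return 20-nt protospacers whose next 3 bases match NGG."""
    seq = seq.upper()
    guides = []
    for i in range(len(seq) - 22):
        protospacer = seq[i:i + 20]
        pam = seq[i + 20:i + 23]
        if pam[1:] == "GG":  # the leading N can be any base
            guides.append(protospacer)
    return guides

# Example: a toy sequence containing exactly one valid site
toy = "AAAA" + "ACGT" * 5 + "TGG" + "AAAA"
print(find_candidate_guides(toy))  # → ['ACGTACGTACGTACGTACGT']
```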

Theoretically, TALENs, Zinc fingers and CRISPR could all be used to inactivate BCL11A in hematopoietic stem cells. However, designing an appropriate TALEN or Zinc finger might require relatively large investments of money and time. CRISPR, on the other hand, promises to be a faster and more cost-effective approach to editing the genome. In academic studies, CRISPR is already widely used and far more common than the other editing technologies for making genetic modifications to laboratory model organisms. With human patients, however, safety and efficacy greatly outweigh effort and cost. Time will tell which gene-editing platform proves most cost-effective, efficient and safe for clinical use.

A New Clinical Reality (2020s)

With these new tools at hand, Watson’s dream of increasing fetal hemoglobin in Sickle cell disease patients is finally within sight. At least two major collaborations to perform ex vivo gene therapy for Sickle cell disease have been initiated since 2018. Both use gene editors to inactivate the BCL11A gene and promote fetal hemoglobin expression in red blood cells.

One collaboration between Bioverativ and Sangamo is testing a protocol for gene editing with a Zinc finger. The estimated completion date for this trial is 2022. Another collaboration that has received a great deal of attention, and was recently published in the New England Journal of Medicine, involves CRISPR Therapeutics and Vertex Pharmaceuticals. This trial is among the first to attempt CRISPR in a clinical setting, and the results are highly anticipated by the research and medical communities, not only for their impact on Sickle cell disease, but also as a bellwether for the use of CRISPR in medical practice. So far the results of the trial are encouraging. As of December 2020, the first two patients to receive therapy were reportedly doing well and were free from symptoms more than one year after receiving the treatment. While this news is exciting, there is still much work to be done before the technique can be applied to a wider population.

It has taken many years and many twists, but the visions of the 1950s are finally beginning to be realized, bringing us to the cusp of an exciting new dawn in medicine. The slow march toward a cure for Sickle cell disease clearly demonstrates that through patience and continued investment in scientific discovery, we can continue to achieve the dreams of our predecessors and plant new seeds for future generations to reap the harvest.  

Sickle Cell Research in the Wake of Pauling and Watson

Harvey Itano, 1954. Image credit: Caltech Archives.

By Dr. Marcus Calkins, Part 2 of 3

The molecular defect in Sickle cell disease was demonstrated by Linus Pauling’s lab in 1949, and one year prior, James Watson had proposed that increasing the level of fetal hemoglobin might provide a means by which the disease could be cured. These two publications provided a direction for what may soon become a real-life cure for the disease. However, many practical questions needed to be answered before a cure could be realized, or even realistically imagined. The first of these questions had to do with the composition of the hemoglobin protein.

Defining the structure of hemoglobin (1950s-1960s)

The project in the Pauling lab was led by a formidable graduate student of Japanese descent named Harvey Itano. It is noteworthy that despite having been born in Sacramento and graduating as valedictorian of the University of California, Berkeley class of 1942, Itano was interned at Tule Lake during World War II. He was released from detention only to attend medical school at Washington University in St. Louis. Upon graduating with his M.D., he came to Pauling’s lab to pursue a Ph.D., and his widely celebrated doctoral thesis was built on the Sickle cell disease project. After finishing this training in biochemistry and publishing his high-impact paper with Pauling, Itano opened his own lab, in which he continued working on hemoglobin.

By applying many of the same methods pioneered in Pauling’s lab, Itano’s small group became central players in identifying aberrant forms of hemoglobin that lead to disease. Because of this careful and detailed work, Itano and other scientists began to suspect that the protein subunits that make up hemoglobin may be derived from multiple, highly similar genes. At the time, this idea represented a major paradigm shift, as it was previously assumed that each enzyme should correspond to a single gene.

This idea also brought up an evolutionary question of how such similar genes might come to exist. In a memorial biography, Itano’s friend and colleague R.F. Doolittle summarized Itano’s forward thinking on the genetic and evolutionary basis of enzyme constituents as follows:

In his thorough review of all the data, Harvey noted that in sheep and cattle, however, there were two kinds of end group (valine and methionine), and concluded that there had to be two each of two kinds of polypeptide in human adult hemoglobins. He went on to discuss how gene duplications could account for all the various hemoglobin chains, including myoglobin and foetal hemoglobin. These were thoughts well ahead of their time.

These major scientific contributions and others led to Itano’s election to the National Academy of Sciences, U.S.A. He was the first Japanese American to achieve that honor.

The exact change in the hemoglobin protein responsible for causing Sickle cell disease was identified at the amino acid level several years after Itano’s paper with Pauling, when V.M. Ingram used protease digestion to show that the disease stems from a glutamic acid-to-valine substitution in the β-globin subunit of the hemoglobin protein complex. Although the molecular structure of DNA was accurately modeled by Watson and Crick in 1953, revealing tantalizing clues as to how genetic information is encoded and passed from cell to cell and parent to child, the precise genetic mutation underlying Sickle cell disease would remain unknown for a few more decades, until the molecular biology revolution.

Molecular Switch from Fetal to Adult Hemoglobin (1970s-1990s)

In order to institute the cure dreamed up by Watson, much needed to be learned about hemoglobin composition, genetics, and control in red blood cells. This basic knowledge was largely generated through new technologies developed in the latter part of the 20th century. The cell and molecular biology revolution began in the mid-1970s and brought with it new techniques for manipulating DNA and other molecules. These advances allowed scientists to define genes and genetic structures, and to begin to understand how gene expression is regulated.

With regard to hemoglobin, it was discovered that two sets of genes (five on Chromosome 11 and three on Chromosome 16) encode the subunit proteins for the hemoglobin molecular complex. Each set contributes one type of subunit to the complex, and the subunits present in the blood change depending on the developmental stage of the individual. In essence, a switch occurs just after birth, wherein expression of fetal hemoglobin is turned off and adult hemoglobin is turned on. The mechanics of this switch, including the DNA sequences and proteins that make it work, were elucidated during this fruitful period of biological research.

Once the mechanics of the fetal hemoglobin to adult hemoglobin molecular switch were known, it became apparent that one gene, BCL11A, is a dominant factor in shutting down fetal hemoglobin production. If this gene can be silenced or disrupted, fetal hemoglobin expression will continue. Furthermore, there is strong evidence from molecular, cellular and clinical studies that continued expression of fetal hemoglobin will at least partially prevent the harmful accumulation of mutant hemoglobin aggregates and thereby prevent Sickle cell disease.

However, turning off the BCL11A gene is not an easy task. The molecular system evolved over millions of years to function robustly under almost any environmental condition. Like a train hurtling down a track, such developmental programs are extremely hard to redirect, and if they are completely derailed, the individual may suffer catastrophic effects. Thus, precise control of the BCL11A gene alone, and in red blood cells alone, is necessary to realize the therapy first imagined by Watson. This level of control is possible to achieve in cells outside the body, but inside the body, cells are resistant to change, and our ability to target one cell type at a time remains relatively primitive. A cure for Sickle cell disease therefore still needed a method for regulating BCL11A specifically in red blood cells.

The Slow March Toward a Cure for Sickle Cell Disease

Pastel drawing of sickled hemoglobin cells by Roger Hayward, 1964

By Dr. Marcus Calkins

[Ed Note: This is the first of three posts examining the history of sickle cell treatment up to present day. It is authored by Marcus J. Calkins, Ph.D., “a proud OSU alumnus” (Chemical Engineering, B.S., 1999), who now works as a scientific communications service provider and educator in Taipei, Taiwan. In submitting this piece, Calkins emphasized that he has “taken inspiration from Linus Pauling’s research activities, teaching methods and moral character for many years.”]

In 2020, Jennifer Doudna and Emmanuelle Charpentier shared a Nobel Prize for their discovery and development of the CRISPR gene editor. One of the first clinical applications for CRISPR promises to be an ex vivo gene therapy for Sickle cell anemia. If it works, this medical technology will be a major breakthrough in biomedicine, representing the culmination of more than a century of research on Sickle cell disease that encompasses a wide range of topics.

Despite the lifetimes of work that have led to our current exciting position on the precipice of a cure for Sickle cell disease, the basic molecular features of the disease were defined seven decades ago by another Nobel Prize winner, Linus Pauling. The intervening 70 years of work have been required for scientists to learn how we might apply the foundational knowledge to actual patients in a real-life clinical setting. While the pace of progress may seem agonizingly slow to those outside biomedical research, the ground that has been covered is immense, and entire fields of biomedicine needed to be built and optimized before a truly feasible treatment technology could be invented.

Sickle Cell Disease (1910)

Sickle cell disease was first described over the period from 1910 to about 1924. During this time, a series of case reports detailed approximately 80 people of African descent, who had an odd morphology of red blood cells resembling a crescent or a sickle. In many cases, this sickle-like morphology was associated with a devastating condition involving severe anemia and early death. Furthermore, scientists learned that the red blood cell sickling could be exacerbated by depriving the blood of oxygen, either by adding carbon dioxide to cells in a dish or restricting blood flow in the patient. These clinical observations laid the foundation for basic scientists to postulate that the condition was related to hemoglobin, the protein that carries oxygen in red blood cells.

The first person to make this suggestion was Pauling. At some time in 1945, he was chatting with a colleague on the train from Denver to Chicago, when he learned about the difference in sickling between oxygenated and deoxygenated blood. According to the account of his colleague, Pauling was also informed that the sickled red blood cells show birefringence when viewed under a polarizing microscope, which would suggest an alignment of molecules within the cells.

However, by Pauling’s account of the conversation, his immediate guess that Sickle cell disease is caused by a defect in the hemoglobin protein complex was based entirely on the difference in the sickling properties of oxygenated and deoxygenated blood. Notably, Pauling later stated that the idea of Sickle cell disease being caused solely by the hemoglobin molecule came to him in “two seconds,” but gathering evidence and refining the idea took at least three years.

In his public talks, Pauling often emphasized the fact that in the first years of the study, his students performed many experiments but could not identify any obvious biochemical differences between the hemoglobin molecules of patients and control individuals. From his repeated emphasis of this fact, one might speculate that the translation of a two-second idea to a three- or four-year demonstration would have been frustrating for such a quick-minded individual, though Pauling never said as much. Alternatively, he may have simply been emphasizing the challenges and slow, steady nature of rigorous scientific pursuit.

The Molecular Defect and a Potential Cure (1949)

In 1949, prior to the double helix model of DNA and before stem cells were described, the Pauling lab published a paper titled “Sickle Cell Anemia, a Molecular Disease.” In this work, Pauling and his students definitively showed that a slightly abnormal form of hemoglobin is found exclusively in patients with the cell sickling phenotype. Using a 30-foot-long Tiselius apparatus that they had constructed for electrophoresis, they detected a small two-electron difference in overall charge between the hemoglobin molecules of Sickle cell patients and those of unaffected individuals. Carriers of the disease, meanwhile, had a mixture of the two hemoglobin isoforms.

Importantly, Pauling’s group found that the defect in hemoglobin is not related to its ability to bind oxygen. Instead, it was later shown that the slight change in molecular charge affects the way hemoglobin proteins interact with one another, as would be predicted from the birefringence observation. This aberrant interaction causes the formation of long molecular scaffolds that change the shape of the red blood cell and lead to its dysfunction.

With this publication, Sickle cell disease became the first disorder to be associated with a single molecule. It was also the first with a known genetic basis. In his publication of the same year, J.V. Neel showed that Sickle cell disease follows an autosomal recessive inheritance pattern, meaning that each parent must contribute one copy of the mutated gene for a child to develop the disease. The cell sickling phenotype can occur to some degree in people who only carry one mutant allele, but only those with two copies experience the pernicious effects of the disease. This information, combined with Pauling’s study, established the essential basis for our understanding of Sickle cell disease and serves as a model for many other genetic diseases.

Surprisingly, James Watson (prior to his famed work on the structure of DNA) contributed a prescient idea to Sickle cell disease treatment, when he speculated that cells could be protected by expression of another form of hemoglobin, fetal hemoglobin. Watson made this prediction in 1948, just one year before Pauling’s powerhouse publication. His suspicion was an extension of reports that red blood cell sickling did not happen in the blood of infants who would later develop the condition as children and adults.

The stage was thus set for a Sickle cell disease cure. After the theoretical basis was determined, onlookers might have expected a cure to be found within a few years. Instead, extending the ideas of Pauling and Watson required incredible efforts by myriad scientists over the subsequent seven decades to create a potential new clinical reality.

Pauling’s Seventh Paper on the Nature of the Chemical Bond

[Part 7 of 7]

“The Nature of the Chemical Bond. VII. The Calculation of Resonance Energy in Conjugated Systems.” The Journal of Chemical Physics, October 1933.

The final paper in Linus Pauling’s earthshaking series on the nature of the chemical bond was the shortest of the seven and made less of a splash than had most of its predecessors. This lesser impact was anticipated and was due primarily to the guiding purpose of the paper: to apply previously developed postulates to compounds that had not been addressed by Pauling in his prior writings. As with the sixth paper in the series, the final publication was co-authored by Caltech colleague Jack Sherman.

In paper seven, Pauling demonstrated how to calculate resonance energy in conjugated systems. A conjugated system is one in which three or more adjacent p orbitals are aligned in a common plane, allowing their electrons to be shared across alternating single and multiple bonds. While the era’s organic chemists commonly understood that conjugation gave a compound more stability than would ordinarily be expected, Pauling’s paper offered the calculations needed to codify this knowledge.

The paper also put forth a collection of rules to help researchers better understand the properties of conjugated systems. For example, Pauling found that “a phenyl group is 20 or 30 percent less effective in conjugation than a double bond, and a naphthyl group is less effective than a phenyl group.” To arrive at these conclusions, Pauling used the equations that he had developed in his previous two papers, applying them this time around to conjugated systems.


Jack Sherman and Linus Pauling, 1935.

Pauling’s seven papers on the nature of the chemical bond came to print over the course of thirty months, from article one in April 1931 to article seven in October 1933. The first three papers laid the groundwork for what was to come by defining chemical bonds in quantum mechanical terms. The fourth paper, published in September 1932, appeared at the midpoint of Pauling’s publishing chronology and also served as a kind of transition paper, connecting the concepts introduced in the first three publications to those in the three more that were forthcoming. (Paper four also contained Pauling’s vital electronegativity scale.) The last three articles were devoted to the concept of resonance and its application to a fuller understanding of the chemical bond.

Taken as a whole, this body of work proved hugely important to the future direction of chemistry. By reconciling and applying the principles of quantum mechanics to the world of chemistry, the articles showed that what had once been mostly a tool for physicists could indeed have great applicability to chemical research. In the process, Pauling and his collaborators also rendered quantum mechanics far more accessible to their colleagues across the field of chemistry. The end result was, to quote Pauling himself, “a way of thinking that might not have been introduced by anyone else, at least not for quite a while.”


This is our forty-eighth and final post for 2020. We’ll look forward to seeing you again in early January!

Pauling’s Sixth Paper on the Nature of the Chemical Bond

Table of resonance energy calculations for condensed ring systems

[Part 6 of 7]

“The Nature of the Chemical Bond. VI. The Calculation from Thermochemical Data of the Energy of Resonance of Molecules Among Several Electronic Structures.” The Journal of Chemical Physics, July 1933.

In paper number five in his Nature of the Chemical Bond series, Linus Pauling argued that the theory of resonance could be used to accurately discern the structure of many compounds, and he used Valence Bond theory to substantiate that claim. However, much of the argumentation put forth in the paper relied upon fairly generalized calculations, some of which were subsequently shown to be in error.

In his sixth paper, published one month later, Pauling put forth more definitive calculations that used thermochemical data that were more empirically based, and therefore less prone to errors. As with the previous publication, this paper was co-authored. However, instead of G.W. Wheland, Pauling’s collaborator this time around was Jack Sherman, a theoretical chemist who had received his PhD from Caltech the year before.


The data used in the paper weren’t anything new; in fact, they had been used by chemists for years to calculate energy values and to determine bond energies. However, in many cases these calculations failed because chemists, rooted in classical organic or physical models, assumed that every molecule could be described by a single classical structure whose bond energies simply added together.

Relying instead on a quantum mechanical approach, Pauling and Sherman argued that compounds could (and should) be organized into two broad categories. In one group, there resided those molecules that were well-approximated by their Lewis structures (classical representations of molecules using lines and dots to represent bonds and electrons). The other group consisted of compounds whose structures could only be accurately explained through resonance.

By organizing compounds into these two discrete bins, Pauling and Sherman were then able to make more accurate calculations of bond energies. More specifically, the duo was able to calculate energies of formation for various molecules by using extant experimental data on heats of combustion. Pretty quickly they realized that energies of formation could be accurately calculated for Lewis structure (non-resonating) compounds.

For resonating compounds, however, the pair found that energies of formation derived from thermochemical data were much higher than theory predicted for any single structure. Higher energies of formation correspond to more stable molecules, and the co-authors concluded that the “difference in energy is interpreted as the resonance energy of the molecule among several electronic structures” and that “in this way, the existence of resonance is shown for many molecules.”
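In modern notation, the comparison Pauling and Sherman drew can be sketched roughly as follows (the symbols here are illustrative, not the paper’s own):

```latex
% Resonance energy as inferred from thermochemical data: the energy
% of formation derived from measured heats of combustion, minus the
% energy calculated for the single most stable Lewis structure.
E_{\text{resonance}}
  \;=\; E_{\text{formation}}^{\,\text{(thermochemical)}}
  \;-\; E_{\text{formation}}^{\,\text{(single structure)}}
% A positive difference indicates extra stabilization, i.e. the
% molecule resonates among several electronic structures.
```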

Pauling’s Fifth Paper on the Nature of the Chemical Bond

[What follows is Part 5 of 7 in this series. It is also the 800th blog post published by the Pauling Blog.]

“The Nature of the Chemical Bond. V. The Quantum-Mechanical Calculation of the Resonance Energy of Benzene and Naphthalene and the Hydrocarbon Free Radicals.” The Journal of Chemical Physics, June 1933.

With his fifth paper in the nature of the chemical bond series, Linus Pauling communicated a new understanding of the structures of benzene and naphthalene. While it had long been accepted that benzene (C6H6) was arranged as a six-carbon ring and naphthalene (C10H8) as two fused six-carbon rings, the specific organization of electrons and bonds within these structures was not known. Before the publication of Pauling’s fifth paper, several ideas on these matters had been proposed, but all were viewed as flawed in one way or another. Where others had been stymied, however, Pauling found success, and he did so by fully embracing and utilizing the theory of resonance.


At the time that Pauling began this work, there were five competing structures for benzene, each burdened by its own problems. The one that was the most accepted, despite its inability to connect theory to experimental data, was the Kekulé model. Put forth several decades earlier by the German chemist August Kekulé, this model centered around a six-carbon ring that possessed alternating double bonds. Because the arrangement of these double bonds could differ, Kekulé’s model was actually proposing two potential isomers for benzene. The standard understanding at the time was that these two isomers constantly oscillated between one another.

One major problem with the Kekulé approach was that scientists of his generation had never found evidence of the oscillating structures. Furthermore, the Kekulé structures should have been quite unstable, which was contrary to what researchers were able to observe in the laboratory. As such, even though it was compelling in the abstract, the Kekulé model was known to be imperfect.

In his paper, Pauling pointed out the flaws in Kekulé’s work as well as four other concepts published by other researchers. In doing so, he suggested that a common hindrance to all of the approaches was a reliance upon the laws of classical organic chemistry, and a concomitant lack of application of the new quantum mechanics. It was Pauling’s belief that the structure of benzene could be explained using quantum mechanics, as could the structures of all aromatic compounds.


In a handful of previous papers, Pauling had used the theory of resonance to explain a variety of chemical phenomena, but in thinking about benzene and naphthalene he committed more fully to its principles. According to Pauling, all observable data that had been collected for benzene, particularly its bond energies, suggested that benzene was much more stable than any model had yet predicted. But none of the previous models had entertained the possibility of a resonating structure, by which he meant an aggregate structure that was essentially a blend of all possible structures. A structure of this sort, Pauling argued, would occupy a lower, more stable energy state and would accurately map onto the observed data.

For Pauling, therefore, the structure of benzene was not the result of rapid isomerization as put forth by Kekulé, but rather a blend of states. “In a sense,” he wrote, “it may be said that all structures based on a plane hexagonal arrangement of the atoms – Kekulé, Dewar, Claus, etc. – play a part” but “it is the resonance among these structures which imparts to the molecules its peculiar aromatic properties.”

To support his theory, Pauling considered all five possible structures of benzene – which he called “canonical forms” – calculating the energy of each structure as well as the combined resonance energy. Having done so, Pauling then noted that it was the resonance energy that most closely matched the observed data.
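In the valence-bond language Pauling used, this procedure can be sketched as follows (modern notation, not the paper’s own):

```latex
% The benzene ground state is approximated as a linear combination
% of the five canonical forms (two Kekule + three Dewar structures):
\psi_{\text{benzene}} \;=\; \sum_{i=1}^{5} c_i \, \psi_i
% Minimizing the energy with respect to the coefficients c_i yields
% an energy below that of any single canonical form; the difference
% relative to the best single structure is the resonance energy.
E_{\text{resonance}} \;=\; E_{\text{Kekul\'e}} \;-\; E_{\text{min}}
```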


In addition to its utility, the elegance of Pauling’s approach compared favorably with similar work being published by a contemporary, the German chemist Erich Hückel. Situating his thinking within Molecular Orbital theory, Hückel was able to arrive at a similar conclusion for benzene, but his calculations were quite cumbersome and could not be applied to larger aromatic compounds. By contrast, Pauling was now firmly rooted in Valence Bond theory, and his formulae could be applied to all aromatics, not just benzene. In particular, by simplifying some of the calculations that Hückel had made, Pauling was able to overcome some of the mathematical hurdles posed by the free radicals in benzene and other aromatics.

To demonstrate the broad applicability of his ideas, Pauling applied his theoretical framework to naphthalene, which consists of two fused six-carbon rings and has forty-two canonical structures, a great many more than benzene’s five. Despite this significant difference, Pauling successfully applied the same basic math to determine that naphthalene’s structure was also in resonance.

Indeed, Pauling was certain that his calculations were relevant to all aromatic compounds, noting specifically that “this treatment could be applied to anthracene [a three-ringed carbon molecule] and phenanthrene [a four-ringed carbon molecule], with 429 linearly independent structures, and to still larger condensed systems, though not without considerable labor.” Were one willing to expend this labor, the calculations would show that the “resonance energy and the number of benzene rings in the molecule would be substantiated” and the structure correctly predicted.


G.W. Wheland

The fifth paper was unique in part because it was the first in the series to be co-authored. The article also marked a switch in publishing forum: whereas the first four had appeared in The Journal of the American Chemical Society, this paper (and the two more still to come) was published in volume 1 of The Journal of Chemical Physics.

Pauling’s co-author for the paper was George W. Wheland, a recent doctoral graduate from Harvard who worked with Pauling from 1932 to 1936 with the support of a National Research Fellowship. The collaboration proved noteworthy both for the quality of the work produced and because Wheland later became a vocal supporter of, advocate for, and contributor to resonance theory.